  • Locked thread
Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Note that bitflips in RAM have two primary causes: poor engineering on the motherboard trace layout, and failing DRAM chips on your DIMMs. If your system experiences DRAM bit flips but passes a DRAM diagnostic, your motherboard is poo poo. Poorly engineered motherboards are extremely common in servers, and in many cases the primary job of ECC is to let the system work with a motherboard that is so lovely it would cause BSODs in a desktop computer without ECC. If a system that previously didn't have bit flips starts logging them, a DRAM chip has probably begun to fail. The idea that bitflips happen randomly as a result of radiation effects is basically an urban legend of computers.


PerrineClostermann
Dec 15, 2012

by FactsAreUseless

FaustianQ posted:

https://forums.anandtech.com/threads/official-amd-ryzen-benchmarks-reviews-prices-and-discussion.2499879/page-32

Rumor going around (well, it is the largest tech publication in Turkey) that the R7 1700 is only capable of 4.0GHz across all cores and will literally kill the VRMs if you attempt to push any more voltage with low-end boards.

However, the R7 1700X and 1800X, from the same source as above, will beat the 7700K's stock performance (so 4.4GHz 1800X ~ 4.5GHz 7700K?). Temps, power draw, voltage, etc. were within safe limits, unlike the R7 1700. Seems R7 1700s (and likely all non-X CPUs) are the real garbage parts which can't clock worth a drat. This is a similar story to Polaris 11: @ 850MHz it pulls 30-35W, @ 1200MHz it'll pull near 65W+.

I guess Kaby Lake still has a niche in absolute ST performance, but honestly if the 1800X is hitting 4.4GHz all cores without throwing up any alarms, then the later 1600X and 1400X should be monsters, and a 1400X vs 7700K comparison would heavily favor the 1400X from a price/perf standpoint.

Previous articles showed that the Crosshair board from Asus got the 1700 to 4GHz, iirc. Everything else got to 3.8GHz due to their VRMs, so it seems board quality is going to be important.

Alereon posted:

Note that bitflips in RAM have two primary causes: poor engineering on the motherboard trace layout, and failing DRAM chips on your DIMMs. If your system experiences DRAM bit flips but passes a DRAM diagnostic, your motherboard is poo poo. Poorly engineered motherboards are extremely common in servers, and in many cases the primary job of ECC is to let the system work with a motherboard that is so lovely it would cause BSODs in a desktop computer without ECC. If a system that previously didn't have bit flips starts logging them, a DRAM chip has probably begun to fail. The idea that bitflips happen randomly as a result of radiation effects is basically an urban legend of computers.

What is the expected value for bitflips by radiation at sea level?

PerrineClostermann fucked around with this message at 01:02 on Feb 25, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
--edit: ^^^ An urban legend? Really? I thought it was proved that it does happen?

Klyith posted:

e: since the memory corruption can possibly flip more than one bit, and ECC can only handle 1-bit errors. If your system isn't detecting the rowhammer attack, multiple attempts eventually work. But current and near-future hardware has protections against this type of attack, without the need for ECC.
It can correct 1-bit errors and detect more than that. In the latter case, it generates a machine check exception, on which the OS is supposed to panic.
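The correct-one/detect-two behavior can be sketched with a toy extended Hamming(8,4) code in Python. This is a minimal illustration of the same SEC-DED principle; real ECC DIMMs apply it over 64 data + 8 check bits in hardware, and the function names here are made up for the demo:

```python
# Toy SEC-DED (single-error-correct, double-error-detect) demo using an
# extended Hamming(8,4) code. Positions 1, 2, 4 hold parity bits, positions
# 3, 5, 6, 7 hold data, and position 0 holds an overall parity bit.

def encode(data4):
    """Encode 4 data bits into an 8-bit SEC-DED codeword."""
    cw = [0] * 8
    cw[3], cw[5], cw[6], cw[7] = data4
    cw[1] = cw[3] ^ cw[5] ^ cw[7]   # parity over positions with bit 0 set
    cw[2] = cw[3] ^ cw[6] ^ cw[7]   # parity over positions with bit 1 set
    cw[4] = cw[5] ^ cw[6] ^ cw[7]   # parity over positions with bit 2 set
    cw[0] = cw[1] ^ cw[2] ^ cw[3] ^ cw[4] ^ cw[5] ^ cw[6] ^ cw[7]  # overall
    return cw

def decode(cw):
    """Return (status, data4). status: 'ok', 'corrected', or 'mce'."""
    syndrome = 0
    for pos in range(1, 8):          # XOR of set positions points at the error
        if cw[pos]:
            syndrome ^= pos
    overall = 0
    for bit in cw:
        overall ^= bit
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:               # odd flip count: assume single-bit, fix it
        cw = cw[:]
        cw[syndrome] ^= 1            # syndrome 0 here means p0 itself flipped
        status = 'corrected'
    else:                            # nonzero syndrome, even parity: two flips
        status = 'mce'               # uncorrectable -> raise machine check
    return status, [cw[3], cw[5], cw[6], cw[7]]
```

Flip one bit of a codeword and `decode` silently repairs it; flip two and it reports the uncorrectable-error case that, on a real system, surfaces as the machine check exception described above.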

buglord
Jul 31, 2010

Cheating at a raffle? I sentence you to 1 year in jail! No! Two years! Three! Four! Five years! Ah! Ah! Ah! Ah!

Buglord
re: bad overclocking with R7-1700

I thought AMD's direction with Zen was less manual overclocking and more of that automatic boost technology based on cooling solutions, with the CPU able to boost/overclock itself to some extent like Pascal GPUs?

Are they still doing that with Zen like they said? I know I'm most likely massively simplifying things, and I know dick about PC components.

Klyith
Aug 3, 2007

GBS Pledge Week

Combat Pretzel posted:

--edit: ^^^ An urban legend? Really? I thought it was proved that it does happen?

It can and does happen. The question is, how often?

https://en.wikipedia.org/wiki/Soft_error posted:

One experiment measured the soft error rate at the sea level to be 5,950 failures in time (FIT) per DRAM chip. When the same test setup was moved to an underground vault, shielded by over 50 feet (15 m) of rock that effectively eliminated all cosmic rays, zero soft errors were recorded.[9] In this test, all other causes of soft errors are too small to be measured, compared to the error rate caused by cosmic rays.
1 FIT = 1 failure per billion hours

If you keep your fileserver running for the next 20 years, you're statistically likely to see about 1 bit-flip error from cosmic rays. If you keep the computer in the basement rather than the attic, I bet you'd halve that. If you live in Boulder, CO, your error rate would roughly triple.
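For anyone who wants to check that math, here it is as a quick Python sketch, using the 5,950 FIT/chip figure from the wiki quote above (`expected_flips` is just a name for the demo):

```python
# 1 FIT = 1 failure per billion (1e9) device-hours, per the quote above.
FIT_PER_CHIP = 5950
HOURS_PER_YEAR = 24 * 365

def expected_flips(chips=1, years=20, fit=FIT_PER_CHIP):
    """Expected number of cosmic-ray bit flips over the given 24/7 uptime."""
    hours = years * HOURS_PER_YEAR
    return chips * fit * hours / 1e9

# One DRAM chip running continuously for 20 years: about 1.04 expected flips.
```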


Surplus medical lead blankets are only $100-150, you could get one of those to drape over your fileserver. I mean, you can get ECC RAM, but consider that the memory cache on your HD is the most likely place for a bit-flip to actually get written to permanent storage. Can't ECC that, better invest in rad-shielding your PC!

Or consider that over the course of 20 years you're about a million times more likely to gently caress up your data by accidentally doing something that's your own drat fault. A 1-bit error corrupts one picture of your kid, and for some reason you don't have a thousand more kid pics? Or you accidentally rm -rf every pic you ever took because you were typing in the wrong terminal, and delete all your pics at once?

repiv
Aug 13, 2009

OCUKs Gibbo passed along a memo from Asus. Apparently Ryzen will struggle with high-frequency DDR4 on launch, especially with 4 sticks, but they expect a BIOS fix in 1-2 months.

Platystemon
Feb 13, 2012

BREADS

Combat Pretzel posted:

--edit: ^^^ An urban legend? Really? I thought it was proved that it does happen?

Thermal effects are stronger, but who cares?

Bit flips happen occasionally. Silent data corruption is bad.

PC LOAD LETTER
May 23, 2005
WTF?!

PerrineClostermann posted:

What is the expected value for bitflips by radiation at sea level?
Thought it was supposed to be something like 1 per year, though I also remember something about memory density making a difference too.

Googling turns up some stuff from a wiki on the subject:

quote:

Studies by IBM in the 1990s suggest that computers typically experience about one cosmic-ray-induced error per 256 megabytes of RAM per month.

And an old 2008 article about an Intel patent regarding cosmic ray detectors they were considering putting in every chip because of the issue.

quote:

"Cosmic ray induced computer crashes have occurred and are expected to increase with frequency as devices (for example, transistors) decrease in size in chips. This problem is projected to become a major limiter of computer reliability in the next decade."

More recent studies with information that seems pretty thorough on the subject:
Google's 2009 paper "DRAM Errors in the Wild: A Large-Scale Field Study" says that there can be up to 25,000-75,000 one-bit FIT per Mbit (failures in time per billion hours), which works out to 1-5 bit errors per hour for 8GB of RAM by my calculations. The paper says the same: "mean correctable error rates of 2000–6000 per GB per year".
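Redoing that calculation explicitly (a quick sketch; the FIT range is the one the paper quote above gives):

```python
def errors_per_hour(gigabytes, fit_per_mbit):
    """Expected one-bit errors per hour for a given amount of DRAM."""
    mbits = gigabytes * 8 * 1024    # GB -> megabits
    fits = mbits * fit_per_mbit     # total failures per billion hours
    return fits / 1e9

low = errors_per_hour(8, 25_000)    # ~1.6 errors/hour
high = errors_per_hour(8, 75_000)   # ~4.9 errors/hour
```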

The 2012 Sandia report "Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing" says "double bit flips were deemed unlikely", but at ORNL's dense Cray XT5 they occur "at a rate of one per day for 75,000+ DIMMs" even with ECC. And single-bit error rates should be higher.

So it seems like it's a real thing and not an urban legend, but it's also not a giant problem for a desktop gaming PC that is overclocked anyways. If you're doing something :siren:MISSION CRITICAL:siren: or at least running really high amounts of RAM, it seems ECC RAM is, if not necessary, then definitely a real good idea.

edit: \/\/\/\/in case you're being serious, that would stop X-rays (at least I'm sure it would, that is about what they'd put into the walls at an X-ray lab I worked at in a hospital), but cosmic rays can be much higher energy (gamma or above), so it might not cut it. IBM had put their test DRAM in a cave (this was at sea level) to protect it from cosmic ray strikes in one of their tests, so it's apparently really hard to do properly.

PC LOAD LETTER fucked around with this message at 02:36 on Feb 25, 2017

Klyith
Aug 3, 2007

GBS Pledge Week
https://www.amazon.com/Sheet-Lead-12-Rotometals/dp/B00IS5EEZ6/
1/8" sheet lead, 12"x12"

Get a few pieces of that, do a radshield casemod for your PC!

Platystemon
Feb 13, 2012

BREADS
lol if you don’t shield your computer with low‐radio lead salvaged from Roman shipwrecks.

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord

FaustianQ posted:

https://forums.anandtech.com/threads/official-amd-ryzen-benchmarks-reviews-prices-and-discussion.2499879/page-32

Rumor going around (well, it is the largest tech publication in Turkey) that the R7 1700 is only capable of 4.0GHz across all cores and will literally kill the VRMs if you attempt to push any more voltage with low-end boards.

However, the R7 1700X and 1800X, from the same source as above, will beat the 7700K's stock performance (so 4.4GHz 1800X ~ 4.5GHz 7700K?). Temps, power draw, voltage, etc. were within safe limits, unlike the R7 1700. Seems R7 1700s (and likely all non-X CPUs) are the real garbage parts which can't clock worth a drat. This is a similar story to Polaris 11: @ 850MHz it pulls 30-35W, @ 1200MHz it'll pull near 65W+.

I guess Kaby Lake still has a niche in absolute ST performance, but honestly if the 1800X is hitting 4.4GHz all cores without throwing up any alarms, then the later 1600X and 1400X should be monsters, and a 1400X vs 7700K comparison would heavily favor the 1400X from a price/perf standpoint.

Are boards going to certify now which skus you can use safely? If board VRMs are clearly frying with a 65w part, there should be no way you'd put in a 95w+ part. These boards would have to not accept the 1700x and 1800x skus if this were the case.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

repiv posted:

OCUKs Gibbo passed along a memo from Asus. Apparently Ryzen will struggle with high-frequency DDR4 on launch, especially with 4 sticks, but they expect a BIOS fix in 1-2 months.

It really sounds like Ryzen is fine, and just board partners are dropping the ball a bit here.

Risky Bisquick posted:

Are boards going to certify now which skus you can use safely? If board VRMs are clearly frying with a 65w part, there should be no way you'd put in a 95w+ part. These boards would have to not accept the 1700x and 1800x skus if this were the case.

It's not a 65W part at 4.0GHz across all cores, it sounds more like close to 200W. The R7 1700 is 65W at base clock or single-core boost to 3.7GHz. It also sounds like they're talking about B350 boards, which might be some underpowered 4+2 poo poo.

Again the R7 1700 sounds like the dump stat for the SKU release, apparently the 1700X and 1800X are much better behaved.

Kazinsal
Dec 13, 2011


FaustianQ posted:

it sounds more like close to 200W.

:catstare: what the gently caress

PC LOAD LETTER
May 23, 2005
WTF?!

Kazinsal posted:

:catstare: what the gently caress
That is typical for an 8C chip these days. Intel's are no better on power usage once you start OC'ing them. It's why I expected Zen to be a furnace when OC'd.

edit:\/\/\/\/AMD's TDPs for stock are probably legit; for OC'd there is no way they'll be pulling only 95W or 65W, but it's always been true that when OC'd a CPU will pull lots more watts, so I don't see anything here for AMD to comment on. It's not like they're making you OC, and when you OC you're purposefully running the hardware outside of their set specs.\/\/\/\/

PC LOAD LETTER fucked around with this message at 03:45 on Feb 25, 2017

GRINDCORE MEGGIDO
Feb 28, 1985


Kazinsal posted:

:catstare: what the gently caress

95W when all cores are loaded is absolutely a pipedream, and I wonder how reviewers will react to it, and if AMD will comment on it.

GRINDCORE MEGGIDO fucked around with this message at 03:39 on Feb 25, 2017

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Kazinsal posted:

:catstare: what the gently caress

Old 4+2 and 6+2 VRM setups for AM3+ boards would support 125-140W, so my impression is that the R7 1700 is consuming more than that, and it's doing it because you have to crank the voltage stupid high, such that going from 3.8GHz to 4.0GHz costs a tremendous amount of power. The 1700X and 1800X are way better binned and perform better in comparison.

Broadwell has a similar issue honestly, it hits a voltage wall and then you have to crank it up and thermals/power draw go nuts for a few extra ghz.

hifi
Jul 25, 2012

tdp has never been rated on an overclock. come on

http://www.anandtech.com/show/10968/the-intel-core-i7-7700k-91w-review-the-new-stock-performance-champion/11 200 watts with a 7700k

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord
Wait a second here. Isn't AMD the one with the honest TDP rating, whereas Intel is the one with the under normal conditions TDP?

PC LOAD LETTER
May 23, 2005
WTF?!

Risky Bisquick posted:

Wait a second here. Isn't AMD the one with the honest TDP rating, whereas Intel is the one with the under normal conditions TDP?

Rated TDPs are for stock clocks, not overclocks.

This guy gets it

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord
You have FaustianQ implying the AMD 65w TDP is for a single core aka idle for a 8c part.

hifi
Jul 25, 2012

Risky Bisquick posted:

Wait a second here. Isn't AMD the one with the honest TDP rating, whereas Intel is the one with the under normal conditions TDP?

TDP has never been really honest. Intel made up SDP, and it looks like they added another 2 TDP numbers that are vendor-configurable; there's also the option of just ignoring TDP limits in the BIOS without overclocking.

edit: and furthermore, AMD has been really lax with motherboard certifications: previous CPUs were known to fry VRMs on cheap mobos, but people laughed it off because the AMD chips of a few years ago were power-hungry pigs. Conversely, Intel requires a Z-series chipset to overclock, and you end up spending $150 on an Asus Republic of Gamers motherboard for your crappy $50 Anniversary Edition Pentium. Neither option is great, but the pattern with AMD has been to let you shoot yourself in the foot to save a few bucks.

hifi fucked around with this message at 03:56 on Feb 25, 2017

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Single bit errors happen more regularly than you'd think:
https://www.youtube.com/watch?v=aT7mnSstKGs

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Risky Bisquick posted:

You have FaustianQ implying the AMD 65w TDP is for a single core aka idle for a 8c part.

Where am I implying that? I said 65W baseclock or single core 3.7Ghz turbo, because I'm unsure of whether the rated TDP takes into account the turbo.

PC LOAD LETTER
May 23, 2005
WTF?!

Risky Bisquick posted:

You have FaustianQ implying the AMD 65w TDP is for a single core aka idle for a 8c part.
A single core cannot be an 8C part! And I don't see how you can read his post that way, since he specifies parts by name (i.e. 1800X, R7 1700) and they're all 8C parts. He also isn't talking about idle power either. edit: he doesn't even use the word "idle" in his post.

The stock TDPs AMD gives are for stock clocks at load, guys. If you OC you WILL use more power!! That is why mobos designed for OC'ing will usually be able to deliver lots more power to the CPU and have such crazy over-spec VRMs.

That is also why mobos not designed for OC'ing have such "wimpy" VRMs. They don't need more, since they're only running the chip at stock clocks anyways, so not much power will be needed.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

If I was going to pull a number out of my rear end, it would be around 140W just based on the heatsinks used and temps, which could easily be absolutely wrong.
But I'm 100% sure 95W is a pipedream.

For a point of comparison here, the 5820K goes from roughly 115W at stock clocks to roughly 240 watts at 4.5 GHz.

My 5820K hits 4.13 GHz all-core, at stock voltage. I realize I'm probably pulling a bit more power than I would at stock clocks - but there's absolutely zero question that the last 500 MHz or so is just brutal, you hit a vertical wall in terms of power consumption.

Smaller chips tend to allocate part of their TDP spec to the graphics processor, and HEDT chips don't have that, but honestly the curve is steeper than you would expect even knowing that.

I really don't expect Ryzen to be any different. I would bet that a 95W Ryzen pulls roughly 200 watts at 4.4-4.5 GHz all-core.

By the way none of these are measured under Prime95. Running AVX nonstop like Prime95 does can easily add another 100 watts to these figures.

Paul MaudDib fucked around with this message at 04:06 on Feb 25, 2017

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord

FaustianQ posted:

Where am I implying that? I said 65W baseclock or single core 3.7Ghz turbo, because I'm unsure of whether the rated TDP takes into account the turbo.

Well, one thing is for sure: if it doesn't take the turbo into account, it will never sustain 3.7 across 8 cores with their Wraith Spire design.

PC LOAD LETTER posted:

A single core cannot be an 8C part! And I don't see how you can read his post that way, since he specifies parts by name (i.e. 1800X, R7 1700) and they're all 8C parts. He also isn't talking about idle power either. edit: he doesn't even use the word "idle" in his post.

The stock TDPs AMD gives are for stock clocks at load, guys. If you OC you WILL use more power!! That is why mobos designed for OC'ing will usually be able to deliver lots more power to the CPU and have such crazy over-spec VRMs.

That is also why mobos not designed for OC'ing have such "wimpy" VRMs. They don't need more, since they're only running the chip at stock clocks anyways, so not much power will be needed.

Way to be pedantic, single core @ load of an 8c part.

PC LOAD LETTER
May 23, 2005
WTF?!

Risky Bisquick posted:

Well, one thing is for sure: if it doesn't take the turbo into account, it will never sustain 3.7 across 8 cores with their Wraith Spire design.
That is normal. Even Intel's chips only run the boost clocks at peak for 1 or 2 cores for long periods of time.

Risky Bisquick posted:

Way to be pedantic, single core @ load of an 8c part.
Words mean things. Especially when talking about boost clocks, base clocks, and TDP's. And what about the rest of the post? Did what I said make sense or what? I don't think I was being harsh at all.

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

Risky Bisquick posted:

You have FaustianQ implying the AMD 65w TDP is for a single core aka idle for a 8c part.

I read it as 65W for one core at full boost and the others at whatever they can run at to stay within TDP limits, much like boosting works on Intel's CPUs.

Platystemon
Feb 13, 2012

BREADS

Rexxed posted:

Single bit errors happen more regularly than you'd think:
https://www.youtube.com/watch?v=aT7mnSstKGs

https://www.youtube.com/watch?v=9Sgaq6OYLX8

There’s also this one from DEFCON 21 that covers the same topic.

Kazinsal
Dec 13, 2011


I just want to get off this loving i7-3820 that can't get past 4.1 GHz without collapsing, and I don't want to pay Intel $500 to loving do it, god dammit AMD

PC LOAD LETTER
May 23, 2005
WTF?!

hifi posted:

edit: and furthermore, AMD has been really lax with motherboard certifications: previous CPUs were known to fry VRMs on cheap mobos.... the pattern with AMD has been to let you shoot yourself in the foot to save a few bucks.
Wasn't it that AMD released a high clocked Bulldozer CPU (FX950-something) of some sort that used more power (I think 140W by default) than was originally spec'd (original max was like 120W or something) for AM3 mobos for stock clocks and when people put them in mobos that weren't spec'd for it they'd burn up?

Obviously bad but not the same situation as what you're presenting. I only remember it because of the whole AM3+ socket compatibility thing that was brought up at the time and the higher TDP chip support was the only major difference.

Kazinsal posted:

god dammit AMD
The process tech they have to work with isn't as good as Intel's, and even Intel's high-core-count parts burn high watts when OC'd to 4.something GHz. It'd be a miracle for that not to be true of Zen too. I was really expecting the situation to be worse, with stock-clocked top-end Zens having 140W TDPs all over again, but they've done better than I expected.

PC LOAD LETTER fucked around with this message at 04:17 on Feb 25, 2017

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Kazinsal posted:

I just want to get off this loving i7-3820 that can't get past 4.1 GHz without collapsing, and I don't want to pay Intel $500 to loving do it, god dammit AMD

You seem really upset at AMD about Ryzen, when tentative signs give us reason to be cautiously optimistic that it's exactly where we thought it'd be performance-wise, and that's perfectly fine.

Let's wait until March 2nd before the salt mines fully reopen.

Col.Kiwi
Dec 28, 2004
And the grave digger puts on the forceps...

PC LOAD LETTER posted:

Wasn't it that AMD released a high clocked Bulldozer CPU (FX950-something) of some sort that used more power (I think 140W by default) than was originally spec'd (original max was like 120W or something) for AM3 mobos for stock clocks and when people put them in mobos that weren't spec'd for it they'd burn up?

Obviously bad but not the same situation as what you're presenting. I only remember it because of the whole AM3+ socket compatibility thing that was brought up at the time and the higher TDP chip support was the only major difference.
FX-9590, it's 220W. Most boards with the correct socket will not run stable with this chip, only boards designed to handle it. Generally, incompatible boards just crash, but I think there were some issues with VRMs blowing up too.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

PC LOAD LETTER posted:

Wasn't it that AMD released a high clocked Bulldozer CPU (FX950-something) of some sort that used more power (I think 140W by default) than was originally spec'd (original max was like 120W or something) for AM3 mobos for stock clocks and when people put them in mobos that weren't spec'd for it they'd burn up?

Obviously bad but not the same situation as what you're presenting. I only remember it because of the whole AM3+ socket compatibility thing that was brought up at the time and the higher TDP chip support was the only major difference.

The FX-9590, yeah. Standard Piledriver has a 125W TDP; the 9370 and 9590 have a 220W TDP. They need a specific chipset (990FX) capable of dealing with the power, and they were usually packaged with a liquid cooler so you didn't instantly flame out your crappy stock heatsink.

Those two requirements naturally made them quite a bit more expensive than the standard FX CPUs, and so they were in direct price competition with Intel with a processor that had half of the IPC. :lol:

http://www.anandtech.com/show/8316/amds-5-ghz-turbo-cpu-in-retail-the-fx9590-and-asrock-990fx-extreme9-review

hifi
Jul 25, 2012

PC LOAD LETTER posted:

Wasn't it that AMD released a high clocked Bulldozer CPU (FX950-something) of some sort that used more power (I think 140W by default) than was originally spec'd (original max was like 120W or something) for AM3 mobos for stock clocks and when people put them in mobos that weren't spec'd for it they'd burn up?

Obviously bad but not the same situation as what you're presenting. I only remember it because of the whole AM3+ socket compatibility thing that was brought up at the time and the higher TDP chip support was the only major difference.

The process tech they have to work with isn't as good as Intel's, and even Intel's high-core-count parts burn high watts when OC'd to 4.something GHz. It'd be a miracle for that not to be true of Zen too. I was really expecting the situation to be worse, with stock-clocked top-end Zens having 140W TDPs all over again, but they've done better than I expected.

http://support.amd.com/en-us/search/faq/295

the 9xxx series was the only one that had a real list of requirements just to use it. For everything else, you still probably needed some beefy power delivery on the motherboard to overclock.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

hifi posted:

http://support.amd.com/en-us/search/faq/295

the 9xxx series was the only one that had a real list of requirements just to use it. Everything else you still probably needed some beefy power delivery on the motherboard to overclock

The 9590 was nothing more than a binned FX-8350, so taking another FX processor to an equivalent level needs just as much power as the 9590, if not more.

PC LOAD LETTER
May 23, 2005
WTF?!

Col.Kiwi posted:

FX-9590, it's 220W. Most boards with the correct socket will not run stable with this chip, only boards designed to handle it. Generally, incompatible boards just crash, but I think there were some issues with VRMs blowing up too.
OK so I was sorta right.

Paul MaudDib posted:

Those two requirements naturally made them quite a bit more expensive than the standard FX CPUs, and so they were in direct price competition with Intel with a processor that had half of the IPC. :lol:
Yeah, that was a desperation move on their part. I think Bulldozer had some interesting ideas, but it just wasn't executed properly, and the idea to clock it real high was a reeeaaaallly bad one (especially after they saw what happened with Netburst).

FuturePastNow
May 19, 2014


There are also quite a lot of AM3/AM3+ 740G/780G/880G mATX mobos that cannot run processors >95W and carry warnings not to use those. That's just mobo makers being cheap with the VRMs.

GRINDCORE MEGGIDO
Feb 28, 1985


PC LOAD LETTER posted:

OK so I was sorta right.

Yeah that was a desperation move on their part. I think Bulldozer had some interesting ideas but it just wasn't executed properly and idea to clock it real high was a reeeaaaallly bad one (especially after they saw what happened with Netburst).

Bulldozer surprised me. I wish I knew more about the justifications for the choices made in it, and enough about chip design to understand them and why it really didn't work out. I'm reading the poo poo out of Inside the Machine by Jon Stokes, maybe that will help.

@Paul MaudDib - whoa, so with AVX a big Intel chip, overclocked, can go to around 340W? Cooling them quietly must be a good challenge, holy poo poo.

I wish this and the Intel threads were merged. I'm pretty sure most of us read both threads, anyway, and there is a fair bit of parallel discussion going on.

GRINDCORE MEGGIDO fucked around with this message at 04:51 on Feb 25, 2017


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

@Paul MaudDib - whoa, so with AVX a big Intel chip, overclocked, can go to around 340W? Cooling them quietly must be a good challenge, holy poo poo.

Something in that neighborhood, yeah.



388 watts when overclocked under load, versus 57 watts idle at stock = 331W of delta there. Plus, some of that idle power is also going through the CPU - let's say 30 watts.
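That arithmetic, spelled out (the 30 W idle share is the poster's guess, not a measurement):

```python
oc_load_watts = 388     # wall power, overclocked, under load
stock_idle_watts = 57   # wall power, stock, idle
cpu_idle_guess = 30     # assumed CPU share of the idle draw (a guess)

delta = oc_load_watts - stock_idle_watts   # 331 W of extra draw under OC load
cpu_estimate = delta + cpu_idle_guess      # ~361 W at the wall attributable to the CPU/VRMs
```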

Corsair did some stress testing of clocks versus required voltage. I'm not sure what their stress test was (i.e. whether it's AVX or not) but it kind of gives you an idea of the curves involved here.



Obviously AMD is on a different process, but Intel didn't make any huge strides with Broadwell-E's efficiency either, and Intel 14nm is still quite a ways ahead of GloFo's 14nm (which in many respects is more comparable to an Intel 22nm like Haswell). If they've managed to beat Chipzilla at their own game then hats off... but I doubt it. Intel spends a buttload to keep their processes ahead of the competition and there's no One Weird Trick Invented By A GloFo Engineer (intel hates him!).

The soldered heatspreaders do help quite a bit I think. I am running a 140mm AIO cooler, doing 4.13 GHz all-core at stock voltages on my 5820K, and I peak at under 80C under Prime95 which is A-OK in my book. I want to say it was 75C after 15m and rose to like 77C after 45m or so, at which point I called it quits.

Paul MaudDib fucked around with this message at 05:38 on Feb 25, 2017
