Klyith
Aug 3, 2007

GBS Pledge Week

kliras posted:

When the 5800X3D presumably comes out, I assume it's also going to run a little cooler because they're downclocking it a bit?

I think most everyone is assuming that the reason it's downclocked vs the normal 5800 is because it's hotter.

The cache itself generates some extra heat. And because the v-cache is stacked over the L3 region in the center of the die, they have to put blank silicon on top of the cores to match the height. That means the cores are now a tiny bit further away from the metal. Silicon isn't bad at thermal conduction, but it's worse than copper or aluminum.


Wild-rear end guess, but I expect the 5800X3D to be pretty hot. Not because it will pull 250 watts like recent Intels, but because even with a high-performance water cooling setup it might sit at 80C or something. 7nm thermal density plus extra insulation on top, that's what I'd expect.
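Rough back-of-envelope on the "further away from the metal" part, in Python. Every number below is my own guess for illustration (spacer thickness, hotspot heat flux), not an AMD spec; it's just the plain conduction formula dT = flux * thickness / conductivity, comparing silicon against copper for the same layer.

code:
# Temperature drop across a thin flat layer: dT = q'' * t / k
# All values are assumptions for illustration, not measured AMD numbers.

K_SILICON = 150.0    # W/(m*K), bulk silicon (approx.)
K_COPPER  = 400.0    # W/(m*K), for comparison

THICKNESS = 50e-6    # m, assumed ~50 micron spacer (a guess)
FLUX      = 1.0e6    # W/m^2, i.e. ~1 W/mm^2 hotspot heat flux (a guess)

def delta_t(flux, thickness, conductivity):
    return flux * thickness / conductivity

print(f"extra dT through a Si layer: {delta_t(FLUX, THICKNESS, K_SILICON):.2f} C")
print(f"same layer if it were Cu:    {delta_t(FLUX, THICKNESS, K_COPPER):.2f} C")

The bulk-conduction term alone comes out small; the parts that are harder to guess at are the die thinning and the bonding interfaces, so don't read this as anything more than a scaling illustration.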

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
Do we know yet if the 5800X3D CCX dies are spun with those extra silicon layers just for this very package, or are they binned dies from other product lines? Seems like it would make more sense to reuse other Zen 3 dies and add TIM on top to match the height instead

Klyith
Aug 3, 2007

GBS Pledge Week

Sidesaddle Cavalry posted:

Do we know yet if the 5800X3D CCX dies are spun with those extra silicon layers just for this very package, or are they binned dies from other product lines? Seems like it would make more sense to reuse other Zen 3 dies and add TIM on top to match the height instead

Since the CCX is part of the same monolithic chiplet as the cache that gets the v-cache stacked on it, they're definitely not reused from other products. Whatever modifications are needed to make the attachment points from the chiplet "up" to the v-cache, I'm sure those weren't sitting on the standard Zen 3 die this whole time. (Refer to this pic: v-cache gets stacked on the purple & green, and blank silicon gets stacked on the orange cores.)

Whether it works the other direction -- if a 5800X3D chiplet that fails on one core can be shipped as a 5600X by just not gluing on the v-cache -- is another question. Maybe, but I'd be unsurprised if the answer was no. The failure rate from the bonding process is gonna be worse than anything else.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Also this overview from AMD, which illustrates that the spacers are blank silicon: https://www.amd.com/en/campaigns/3d-v-cache

I couldn't find any references in a few moments of searching, but I swear that when this was announced (the tech, not the 5800X3D specifically; that came months later), people from AMD explicitly stated that the spacers were inert silicon.

Edit:

https://www.anandtech.com/show/16725/amd-demonstrates-stacked-vcache-technology-2-tbsec-for-15-gaming

mdxi fucked around with this message at 21:06 on Feb 28, 2022

Canna Happy
Jul 11, 2004
The engine, code A855, has a cast iron closed deck block and split crankcase. It uses an 8.1:1 compression ratio with Mahle cast eutectic aluminum alloy pistons, forged connecting rods with cracked caps and threaded-in 9 mm rod bolts, and a cast high

v1ld posted:

Dang, that's tempting.

E: So tempting I didn't wait to get home and bought it on the phone. The 3950x never dropped below $675 or so, looks like.

450 for the 5900x and 230 for the 5600x as well.

Klyith
Aug 3, 2007

GBS Pledge Week
Oh right, another thing: the chips that are getting v-cache are made extra-thin, so that the completed stack will have the same height as the standard ryzen or epyc chips. That way AMD doesn't have to make all-new packaging for thicc chips.

So not only are the chiplets made for v-cache different from normal ones, they can't be used elsewhere if flawed.

Richlove
Jul 24, 2009

Paragon of primary care

"What?!?! You stuck that WHERE?!?!

:staredog:


Just bought and installed the 5950x, upgrading from a 3900x. The thing is a beast and cut my video editing/encoding times in the Adobe suite and Handbrake almost in half.

Arzachel
May 12, 2012

Klyith posted:

Oh right, another thing: the chips that are getting v-cache are made extra-thin, so that the completed stack will have the same height as the standard ryzen or epyc chips. That way AMD doesn't have to make all-new packaging for thicc chips.

So not only are the chiplets made for v-cache different from normal ones, they can't be used elsewhere if flawed.

There's probably going to be a server product that they'll use harvested chiplets for.

Cygni
Nov 12, 2005

raring to post

Arzachel posted:

There's probably going to be a server product that they'll use harvested chiplets for.

Milan X is already in service with the hyperscalers and is in preorder for everyone else. The cheapest SKU is $4,300.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Last night I started updating my crunchboxes to the newest-but-one BIOS for their mobos. I was initially only going to update the ones which are getting 5000 series CPUs swapped in, but there was a big note asking people to update due to security fixes, so I decided to do them all. This'll be the last flash for these mobos in any case, since AM5 will have taken over by the next time I do a round of upgrades.

The point is that the PBO options in the BIOS were completely overhauled. All the same stuff was there, but it was laid out in a very different way. I managed to half-muscle-memory, half-eyeball my way to setting a THERMAL limit of 85C (with wide open PBO) rather than a PPT limit of 85W (with auto thermal throttling). It took me three hours to figure out why the machine kept having temps slowly creep up before flatlining at 85C.

So if you have a Gigabyte board, and you're upgrading to 5000 series chips, pay close attention to the overclocking menus. A lot of stuff that used to be closer to the top level is now gated behind options defaulting to "Auto"; setting them to "Manual" or "Advanced" pops new options into existence. (There was already some of this, but now there's a lot more.)
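For anyone wondering why the mix-up matters: a PPT limit caps power directly, while a thermal limit lets power keep climbing until the temperature target is hit -- which is exactly the slow-creep-then-flatline-at-85C behavior I was staring at. Here's a toy Python sketch of the difference; it's purely illustrative and nothing like AMD's actual boost firmware.

code:
# Toy model of a power (PPT) limit vs. a thermal limit in a boost loop.
# Purely illustrative -- made-up constants, not how AMD's firmware works.

def simulate(ppt_limit_w, thermal_limit_c, steps=300):
    ambient, power, temp = 30.0, 60.0, 40.0
    for _ in range(steps):
        if power < ppt_limit_w and temp < thermal_limit_c:
            power += 2.0                      # boost while both limits have headroom
        elif temp >= thermal_limit_c or power > ppt_limit_w:
            power -= 2.0                      # back off when a limit is hit
        steady = ambient + 0.45 * power       # 0.45 C/W: invented cooler resistance
        temp += 0.1 * (steady - temp)         # crude first-order thermal lag
    return round(power), round(temp)

print("PPT 85 W, thermal 95 C :", simulate(85, 95))    # power-limited, temp settles well below 95
print("PPT 200 W, thermal 85 C:", simulate(200, 85))   # temp creeps up and flatlines at ~85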

Baby Proof
May 16, 2009

Would upgrading from a 2700 to a 5600x have any advantages re: faster memory timings? I've got 2 8GB memory sticks that were advertised as DDR-3600 CL19 (I want to say Micron-E?), but they won't work with my current DDR-3200 memory. I'd be perfectly happy with running them both at 3200 CL16, but for whatever reason that's not happening with my current cpu / Asus B450-F board. Of course I didn't check for compatibility before buying.

Baby Proof fucked around with this message at 04:16 on Mar 3, 2022

Dr. Video Games 0031
Jul 17, 2004

You can't have two memory sticks in the same system running at different speeds. Every stick has to run at the exact same speed and timings, and if one or more of the sticks can't hit those speeds and timings, then this is what happens (3200 CL16 is actually a tighter-latency spec than 3600 CL19). I don't think a CPU upgrade will help here, to be honest. You could try upping the DRAM voltage by 50 or 100mV to help your new sticks hit 3200 CL16, but if that doesn't work then I'd just return those and buy a proper 2x8GB kit of 3200 CL16 instead.
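A quick way to compare the specs is first-word latency in nanoseconds: CAS cycles divided by the memory clock (which is half the DDR data rate). Quick sketch in Python:

code:
# First-word latency in ns: CL cycles / memory clock.
# DDR does two transfers per clock, so clock (MHz) = data rate (MT/s) / 2.

def first_word_latency_ns(data_rate, cl):
    clock_mhz = data_rate / 2
    return cl / clock_mhz * 1000

for rate, cl in [(3200, 16), (3600, 19), (3600, 16)]:
    print(f"DDR4-{rate} CL{cl}: {first_word_latency_ns(rate, cl):.2f} ns")

# DDR4-3200 CL16: 10.00 ns
# DDR4-3600 CL19: 10.56 ns
# DDR4-3600 CL16:  8.89 ns

So 3200 CL16 actually asks for slightly tighter absolute latency than the 3600 CL19 those sticks were binned for, which is why "just run them at 3200 CL16" isn't a gimme.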

Klyith
Aug 3, 2007

GBS Pledge Week
Running 4 sticks is harder on the memory controller than running 2. Normally you can't run 4 at their rated timings unless you specifically shopped your mobo's QVL for one of the few (and expensive) kits that get a 4-stick checkmark.

Some mobos are better at running 4 sticks because, instead of a daisy-chain path between sticks, they use T topology -- the wires connecting to the sockets split in a 'T' for equal signal length. But yours is the normal daisy-chain. (And T topology is not common at this point, because it's worse with 2 sticks and still isn't massively better with 4.)

Additionally, if your older sticks are dual-rank they're going to be even harder to run in 4 stick configuration.


A 5600X might be able to run them faster than a 2700; the memory controller is one of the things that improved with each Zen iteration. But probably still not at full rated speed.

If you need fast memory *and* you actually need 32GB, the best way to accomplish that is to buy a decent 2x16GB set and flog off your current ram. But most people don't really need 32GB right now, and those that do generally don't need ultra-fast gamer ram.

Baby Proof
May 16, 2009

OK, that gives me something to look into. I don't particularly need 32GB or a faster processor, I'm just getting the itch to upgrade and looking for excuses.

Kibner
Oct 21, 2008

Acguy Supremacy
There is a big difference in memory controller speeds. My 5950x is able to run four 3600 rated sticks at their rated speed while my old 2700x could only hold them steady at 2933.

FuturePastNow
May 19, 2014


Can't wait to see some actual reviews and testing of the 5800X3D

Klyith
Aug 3, 2007

GBS Pledge Week

Kibner posted:

There is a big difference in memory controller speeds. My 5950x is able to run four 3600 rated sticks at their rated speed while my old 2700x could only hold them steady at 2933.

They've improved but for most people there isn't that big a difference. I'd expect the 2700X could have done something slightly better than that with some manual OCing / SoC boost.

My experience with going from 1600X to 3700X with some not-great-for-ryzen ram:
• the 1600X couldn't run it at XMP speed & timings, the 3700X can
• Doing fully manual timings there isn't much room between them. The sticks are 3000-C15, for both CPUs I run them 3200-C16. The difference between them is that the 1600X needed a tiny +0.05V boost to SoC voltage, and the 3700X can do 1T command rate while the 1600X had to 2T. 1T is nice but not ground breaking.


Now, if I had some different ram I might have seen a bigger difference -- iirc with a bunch of pre-Ryzen 3600 kits, the early Ryzens just can't run 3600 while the new ones can. But if you get into manual memory OCing the old Ryzens could produce decent results. They're just touchy. You have to go back to old-school methods of trial and error and incremental twiddling.

Klyith fucked around with this message at 15:21 on Mar 3, 2022

Inept
Jul 8, 2003

Klyith posted:

They've improved but for most people there isn't that big a difference. I'd expect the 2700X could have done something slightly better than that with some manual OCing / SoC boost.

My experience with going from 1600X to 3700X with some not-great-for-ryzen ram:
• the 1600X couldn't run it at XMP speed & timings, the 3700X can
• Doing fully manual timings there isn't much room between them. The sticks are 3000-C15, for both CPUs I run them 3200-C16. The difference between them is that the 1600X needed a tiny +0.05V boost to SoC voltage, and the 3700X can do 1T command rate while the 1600X had to 2T. 1T is nice but not ground breaking.


Now, if I had some different ram I might have seen a bigger difference -- iirc with a bunch of pre-Ryzen 3600 kits, the early Ryzens just can't run 3600 while the new ones can. But if you get into manual memory OCing the old Ryzens could produce decent results. They're just touchy. You have to go back to old-school methods of trial and error and incremental twiddling.

The thing is a lot of people don't want to deal with the trial and error of memory timing and finding out if it crashes later on. Not being able to run at XMP by default is pretty bad. I have a 1600 AF and have the same issue, and not having that be a problem when I first got it would have been a lot nicer than my computer randomly crashing while I was trying to play a new game.

Kibner
Oct 21, 2008

Acguy Supremacy

Klyith posted:

They've improved but for most people there isn't that big a difference. I'd expect the 2700X could have done something slightly better than that with some manual OCing / SoC boost.

My experience with going from 1600X to 3700X with some not-great-for-ryzen ram:
• the 1600X couldn't run it at XMP speed & timings, the 3700X can
• Doing fully manual timings there isn't much room between them. The sticks are 3000-C15, for both CPUs I run them 3200-C16. The difference between them is that the 1600X needed a tiny +0.05V boost to SoC voltage, and the 3700X can do 1T command rate while the 1600X had to 2T. 1T is nice but not ground breaking.


Now, if I had some different ram I might have seen a bigger difference -- iirc with a bunch of pre-Ryzen 3600 kits, the early Ryzens just can't run 3600 while the new ones can. But if you get into manual memory OCing the old Ryzens could produce decent results. They're just touchy. You have to go back to old-school methods of trial and error and incremental twiddling.

I did use manual trial and error. Could never get those b-die sticks stable at anything above 2933 on either of two different motherboards with the 2700x. Even tried running with super loose timings and a wide-range of voltages. Just couldn't do it.

I'm doing the manual process again with the 5950x and was already stable at 3600 running tighter primary timings than at 2933 with the 2700x.

(still going through and adjusting the secondary and tertiary timings; it's taking forever)

Kibner fucked around with this message at 16:37 on Mar 3, 2022

hobbesmaster
Jan 28, 2008

I also went from zen+ to zen2 and agree with zen+ memory speeds being terrible in comparison.

The real question is does that actually matter? Technically yes, practically is it worth over $200? Idk, but if that isn’t a lot of money for you then go get a 5600x.

Klyith
Aug 3, 2007

GBS Pledge Week

Inept posted:

The thing is a lot of people don't want to deal with the trial and error of memory timing and finding out if it crashes later on. Not being able to run at XMP by default is pretty bad. I have a 1600 AF and have the same issue, and not having that be a problem when I first got it would have been a lot nicer than my computer randomly crashing while I was trying to play a new game.

Oh yeah, the process of manual OCing is super tedious. And I would say that it probably isn't worth it, because increasing your memory speed isn't that big a difference in real-world performance. My manual OC to 3200 likely was just a 1-3% boost to FPS etc. It's "free" performance which is nice, but IMO anything below 5% change is in the realm of who cares. If you reset it back to 2933 I wouldn't notice.

I would not recommend people spend an entire saturday twiddling numbers on the bios screen, unless you enjoy twiddling numbers on the bios screen. (Though you don't want to wait until something crashes to test stability, you hammer on it with memtest.)
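If anyone's curious what "hammer on it" means at the most basic level, here's a crude Python sketch of the write-then-verify idea. It is not a substitute for memtest86+ or TestMem5 -- those cover far more of the address space with much nastier access patterns -- it's just the concept.

code:
# Crude user-space sketch of write-then-verify memory testing.
# Real tools (memtest86+, TestMem5) do this properly; this is only the idea.
import array
import random

WORDS = 8 * 1024 * 1024            # 8M x 4-byte words = 32 MB per pass

def pattern_pass(seed):
    pattern = random.Random(seed).getrandbits(32)
    buf = array.array("I", [pattern]) * WORDS    # write the pattern everywhere
    return len(buf) - buf.count(pattern)         # read back, count mismatches

if __name__ == "__main__":
    for i in range(4):
        print(f"pass {i}: {pattern_pass(i)} mismatches")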

But I just wanted to show an example of how there are other factors besides just physical improvements to the memory controller: the bios is better at picking auto numbers, ram is made with ryzen in mind these days, etc. Being able to load XMP and have it Just Work is much better.


Kibner posted:

I did use manual trial and error. Could never get those b-die sticks stable at anything above 2933 on either of two different motherboards with the 2700x. Even tried running with super loose timings and a wide-range of voltages. Just couldn't do it.

I'm doing the manual process again with the 5950x and was already stable at 3600 running tighter primary timings than at 2933 with the 2700x.

Huh, same mobo as well? :shrug: Guess it's possible your 2700x was unusually bad.

Kibner posted:

(still going through and adjusting the secondary and tertiary timings; it's taking forever)

All you need to care about is the primaries, tRC, tRFC, and command rate. Everything else secondary & tertiary is incredibly marginal and not worth changing from auto, there's functionally zero performance change.

tRFC does have performance impact and is often set very conservatively by the bios / auto-sense, so is one where you can get decent gains with manual experimentation. But it's also the hardest to know when you're really stable: it will produce super-infrequent errors when you're on the margin. You need to run a complete memtest cycle on it to be sure.
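Since tRFC gets compared in nanoseconds rather than cycles (the cycle count scales with the memory clock), here's a small converter sketch in Python; the example values are just plausible placeholders, not recommendations for any particular kit.

code:
# Convert tRFC between clock cycles and nanoseconds for DDR4.
# Memory clock (MHz) = data rate (MT/s) / 2.

def trfc_to_ns(cycles, data_rate):
    return cycles / (data_rate / 2) * 1000

def trfc_to_cycles(ns, data_rate):
    return round(ns * (data_rate / 2) / 1000)

for cycles in (630, 560, 500):                      # placeholder values
    print(f"tRFC {cycles} cycles @ DDR4-3600 = {trfc_to_ns(cycles, 3600):.0f} ns")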

Kibner
Oct 21, 2008

Acguy Supremacy

Klyith posted:

Huh, same mobo as well? :shrug: Guess it's possible your 2700x was unusually bad.
Yeah, same motherboard, too.

Klyith posted:

All you need to care about is the primaries, tRC, tRFC, and command rate. Everything else secondary & tertiary is incredibly marginal and not worth changing from auto, there's functionally zero performance change.

tRFC does have performance impact and is often set very conservatively by the bios / auto-sense, so is one where you can get decent gains with manual experimentation. But it's also the hardest to know when you're really stable: it will produce super-infrequent errors when you're on the margin. You need to run a complete memtest cycle on it to be sure.

If this is true, I may go back and set all those other timings back to auto and just focus on the ones you mentioned.

CaptainSarcastic
Jul 6, 2013



FuturePastNow posted:

Can't wait to see some actual reviews and testing of the 5800X3D

:same:

I'm also curious about pricing, because it is looking like it would most likely be the capstone CPU for my current desktop.

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
400 dollars probably

Seamonster
Apr 30, 2007

IMMER SIEGREICH
Might as well make it $420.69.

As a launch day 3800X owner, I'm saving up for whatever 16 core monstrosity comes erupting out of AM5.

New Zealand can eat me
Aug 29, 2008

:matters:


When does the X3D embargo end?

Seamonster posted:

As a launch day 3800X owner, I'm saving up for whatever 16 core monstrosity comes erupting out of AM5.

More or less same, the 5950X coming down in price to less than what I paid for the 3950X is a tempting upgrade... but my first thought was "who are you kidding, you're probably going to end up spending more than that on DDR5 alone when AM5 drops!" and so the piggy bank is safe for a few more months

CaptainSarcastic
Jul 6, 2013



I'm allergic to first-generation memory and plan to ride out my DDR4 rig for at least another year or two, so the 5800X3D is pretty tempting to me. That way I can allocate funds to buy a GPU or a car.

Dr. Video Games 0031
Jul 17, 2004

Yeah, I got a 5600X, and I figure that I'll just buy a 4K monitor if I end up running into CPU bottlenecks before Zen 5 comes out. Just force all the work onto your GPU, problem solved!

Cygni
Nov 12, 2005

raring to post

As always, GPU is gonna be a much more important buy unless you are trying to do 1080p/240hz. For a set dollar amount, 5800X3D or 5950X + smaller GPU will be less performant than a 5600X + bigger GPU.

For example, cyberpunk with all the zen generations:



(Assuming you can get a GPU etc etc)

CaptainSarcastic
Jul 6, 2013



Cygni posted:

As always, GPU is gonna be a much more important buy unless you are trying to do 1080p/240hz. For a set dollar amount, 5800X3D or 5950X + smaller GPU will be less performant than a 5600X + bigger GPU.

For example, cyberpunk with all the zen generations:



(Assuming you can get a GPU etc etc)

That seems like kind of a misfire, since you are only showing 6-core CPUs and it's not clear if that benchmark is using DLSS or RTX in whatever combination. A more useful chart would be showing performance with the same videocard and multiple CPU configurations, not just the 6-core iterations of each generation. I have a 2070 Super which is holding its own just fine, and with the GPU market where it is the cost/benefit ratio of replacing it doesn't seem likely to favor a purchase for a while. The reports on the 5800X3D so far suggest it could provide a modest improvement, and would also give me MOAR CORES, so the cost/benefit ratio seems like it might be a little more reasonable.

Cygni
Nov 12, 2005

raring to post

The point is that more CPU does not equal more frames, even with huge IPC and clock improvements. Nearly all current games won’t benefit from more than 6 cores / 12 threads. Assuming you have something relatively recent on the CPU side, the 2070 Super will continue to be the bottleneck with a CPU upgrade.

Not saying don’t get a fancy new thing if you want it, just know that especially at 1440p, you should have realistic expectations of the gains.

CaptainSarcastic
Jul 6, 2013



Cygni posted:

The point is that more CPU does not equal more frames, even with huge IPC and clock improvements. Nearly all current games won’t benefit from more than 6 cores / 12 threads. Assuming you have something relatively recent on the CPU side, the 2070 Super will continue to be the bottleneck with a CPU upgrade.

Not saying don’t get a fancy new thing if you want it, just know that especially at 1440p, you should have realistic expectations of the gains.

I don't really disagree, but it is looking like there could be some uplift, and there is a smidge of future-proofing going to 8/16 over 6/12, too. I currently have a 3600X and it does fine, but if I can get a little improvement and also not have to worry about 6 cores becoming a limitation then it's a net benefit.

If the cost is too high or the gains are too low then I might just ride out the 3600X, but I'm at least going to consider the 5800X3D.

Dr. Video Games 0031
Jul 17, 2004

The 5800X3D will most likely be a faster gaming CPU than any of the previous Zen 3 CPUs, but it will not magically make your GPU go faster if you are not already running into CPU bottlenecks. The 5600X is more performant in every conceivable way than the 1600X, for instance, and yet it's not any faster in Cyberpunk when paired with a 3070 at 1440p. The reason for this has nothing to do with core count and everything to do with the fact that the 3070 is already running as fast as it can with the 1600X. So not even a 12900K is going to make the 3070 go faster in that game, and neither would a 5800X3D for that matter. Your 2070 Super is slower than the 3070, which means your CPU has even less work to do and you are even less likely to see a performance uplift. There could be a handful of single-threaded games where superior single-core performance could help you, but even then your GPU may be too slow for it to matter.

So yes, there will be some uplift for people with fast GPUs, but probably not for you and your 2070 Super. It could be a useful upgrade if you end up buying a 40-series GPU or something, though.
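If it helps to picture it, the bottleneck logic is basically a min() function: you see whichever framerate limit is lower. Toy sketch in Python with completely invented numbers, not a benchmark.

code:
# Toy model of a GPU bottleneck: delivered fps = min(CPU limit, GPU limit).
# Numbers are invented purely for illustration.

def delivered_fps(cpu_limit, gpu_limit):
    return min(cpu_limit, gpu_limit)

GPU_LIMIT = 75    # fps the GPU alone could manage at these settings (made up)

for cpu, cpu_limit in [("older 6-core", 95), ("newer 6-core", 150), ("monster 16-core", 250)]:
    print(f"{cpu:>16}: CPU could feed {cpu_limit} fps -> you see {delivered_fps(cpu_limit, GPU_LIMIT)} fps")

Real games aren't one flat number -- CPU-heavy scenes can still dip below the GPU ceiling, which is where a faster CPU does show up -- but averaged benchmarks at GPU-bound settings mostly show the min().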

Begall
Jul 28, 2008

CaptainSarcastic posted:

That seems like kind of a misfire, since you are only showing 6-core CPUs and it's not clear if that benchmark is using DLSS or RTX in whatever combination. A more useful chart would be showing performance with the same videocard and multiple CPU configurations, not just the 6-core iterations of each generation. I have a 2070 Super which is holding its own just fine, and with the GPU market where it is the cost/benefit ratio of replacing it doesn't seem likely to favor a purchase for a while. The reports on the 5800X3D so far suggest it could provide a modest improvement, and would also give me MOAR CORES, so the cost/benefit ratio seems like it might be a little more reasonable.

The point is that, to a greater or lesser extent, every one of these tests is demonstrating a GPU bottleneck. Neither additional cache nor MOAR CORES will provide much (if any) additional performance in this case, since it is almost entirely or actually entirely bound by the GPU's performance.

E: lol forgot to refresh

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

Cygni posted:

The point is that more CPU does not equal more frames, even with huge IPC and clock improvements. Nearly all current games won’t benefit from more than 6 cores / 12 threads. Assuming you have something relatively recent on the CPU side, the 2070 Super will continue to be the bottleneck with a CPU upgrade.

Not saying don’t get a fancy new thing if you want it, just know that especially at 1440p, you should have realistic expectations of the gains.

In Cyberpunk I get bottlenecked by my CPU in some places, even at 1440p (11600K w/ 3060ti). I mostly play with DLSS, but I'm pretty sure there are moments I'd be CPU bound even without it. I have no doubt I'd have a better experience with a better CPU.

I'm not debating your greater point, just wanted to note that benchmarks don't always tell the whole picture.

SwissArmyDruid
Feb 14, 2014

by sebmojo
My next computer is going to be ultimately decided by whether or not Intel makes SR-IOV available to consumer desktop, and if that makes AMD or Nvidia try to offer feature parity.

Then I'm grabbing whatever 16 core monster they have and porting my VMs over.

Ihmemies
Oct 6, 2012

Cyberpunk gives huge gains with more cores/clocks/IPC. I got up to 50% more fps with 8700k->12700k. Same memory, 3600cl15 ddr4, same rtx 3080 with dlss and rtx on. I had 100% cpu usage with the 8700k on task manager.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

SwissArmyDruid posted:

My next computer is going to be ultimately decided by whether or not Intel makes SR-IOV available to consumer desktop
Hope is the last thing to die.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Combat Pretzel posted:

Hope is the last thing to die.

I mean, I finally got one wish of mine:

https://twitter.com/TechEpiphany/status/1499357898933161987

We might finally have some no-poo poo no-compromises mobile devices at the end of the year.

So I can keep on hoping on the Intel SR-IOV GPU poo poo.

edit: poo poo, this is eminently playable.

https://twitter.com/TechEpiphany/status/1500070017345507329

SwissArmyDruid fucked around with this message at 13:34 on Mar 5, 2022

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

SwissArmyDruid posted:

So I can keep on hoping on the Intel SR-IOV GPU poo poo.
IDK, my X399 board lists SR-IOV in the BIOS as an option to enable. Over here it'd just be contingent on NVidia. It was already a miracle that they finally allowed one GPU paravirt session on their consumer models. And that's probably mostly just forced by Microsoft to enable GPU-accelerated rendering within their WDAG VM.
