 
sincx
Jul 13, 2012

furiously masturbating to anime titties
.


sincx fucked around with this message at 05:50 on Mar 23, 2021


Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Welcome to the discrete north/southbridge, where old is new and new is old again!

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

EmpyreanFlux posted:

Finished watching AdoredTV vid and honestly while I can see where he is coming from, a lot of it seems kinda nuts still.
Welcome to AdoredTV

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

sincx posted:

Epyc should still be more power efficient from a performance-per-watt perspective. That's more than enough for data centers to stick with the enterprise chips, given how quickly power bills add up to exceed the purchase price of the chips.

Absolutely true if you just let everything run at stock. Seems foolish though to do that with a 5GHz 64-core if performance-per-watt is a bigger concern than raw performance.
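To put rough numbers on the power-bill point — everything here (watt delta, electricity price, PUE) is an illustrative assumption, not real chip data:

```python
# Back-of-envelope: how long before extra power draw eats a purchase-price gap.
# All inputs are illustrative assumptions, not real chip specs.

def breakeven_years(price_gap_usd, extra_watts, usd_per_kwh=0.10, pue=1.5):
    """Years until the extra electricity cost equals the up-front price gap.

    pue: Power Usage Effectiveness -- multiplier for cooling/overhead in a DC.
    """
    extra_kwh_per_year = extra_watts / 1000 * 24 * 365
    extra_cost_per_year = extra_kwh_per_year * usd_per_kwh * pue
    return price_gap_usd / extra_cost_per_year

# e.g. a chip that's $2,000 cheaper but burns an extra 150 W, 24/7:
years = breakeven_years(price_gap_usd=2000, extra_watts=150)
print(f"break-even after ~{years:.1f} years")
```

Whether perf-per-watt "wins" clearly hinges on the actual price gap and load profile, which is the whole argument here.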

Khorne posted:

I'm not sure this is a concern for AMD. AMD wouldn't let someone like Dell sell TR/AM4 CPUs in the way you're describing, and if the Backblaze of datacenters wanted to try to set up threadripper racks, no one is going to stop them. TR isn't as space efficient as Epyc, and the Epyc feature set actually matters for most data center uses. AMD also cuts deals with the big boys and certain institutions. No one pays full retail for things.

Look at the 1080Ti or a few previous generation consumer nvidia cards vs their tesla offerings. You'd save 10x-20x for the same or even slightly better performance and feature set for many compute tasks*. In the data center, and at the enterprise level, you still end up with the Tesla cards because it's what vendors are selling and supporting.

Also, all of the x370/x470 and probably b350/b450 motherboards and zen1/zen2 CPUs support ECC. Just not officially. But it works.

*I'm aware of the differences, ECC on the Tesla cards being theoretically a big one; I'm also aware that Nvidia has been trying really hard to cripple their consumer line for compute and prevent people from using these cards in data centers. Some universities have clusters of 1080Ti and earlier consumer GPUs.

I can't really argue with the "well, no one ACTUALLY pays $10000 for that chip" because while I do know that's true, I don't know what they do actually pay to know how much it matters. If the price/performance differential is big enough though between proper server chips and HEDT (or whatever the hell you would call 5GHz 64-cores limited to 1S) the cloud providers at least are going to consider just building whiteboxes and telling traditional vendors to take a hike like they did with Ethernet switching.

Re: 1080Ti vs. Tesla, it's my novice and maybe incorrect understanding that the 1080Ti is perfectly usable as a training/practice card to learn and hone deep learning techniques (so yeah, university labs!) but the various-precision FP limitations do actually cripple it for a lot of serious work, especially if you're a cloud provider trying to sell VM instances and don't know beforehand that the GeForce card will be acceptable for what your customers wish to do. This also ignores the hoops you have to jump through to get a GeForce card working with a VM; I'll assume they're not a significant hurdle here. For those reasons I'm not sure how good a comparison this makes to TR/Epyc.
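On the precision point, a toy demo of why naive low-precision accumulation falls over — plain numpy, nothing GPU- or vendor-specific, and the array size is arbitrary:

```python
import numpy as np

# float16 has a 10-bit mantissa, so it can't represent every integer above 2048.
# A naive half-precision accumulator therefore stalls: 2048 + 1 rounds back to 2048.
x = np.ones(10_000, dtype=np.float16)

naive = np.float16(0)
for v in x:
    naive = naive + v           # every partial sum is rounded to float16

wide = x.sum(dtype=np.float64)  # same data, 64-bit accumulator

print(float(naive))  # 2048.0 -- nowhere near the true sum
print(float(wide))   # 10000.0
```

Real frameworks dodge this with mixed precision (low-precision math, wider accumulation), which is roughly the kind of thing the pro cards' FP configurations are sold on.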

I don't think I've said this yet but I also just wonder - why would anyone buy a 64-core TR if not to run a server on it? 32 cores already is a huge quantity for a workstation, and if someone said they "needed" 32 cores for an application I'd already be wondering if that load could be moved to some kind of compute cluster instead of an end user's machine. What would be the actual demand for this chip other than 1S servers and people who don't actually need that many cores but have more money than sense?

Eletriarnation fucked around with this message at 20:44 on Dec 5, 2018

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

I'm trying to suppress a giggle here but cmon, if this was a reasonable solution for GPUs it'd have been done long before.

Eletriarnation posted:

What would be the actual demand for this chip other than 1S servers and people who don't actually need that many cores but have more money than sense?

That's, like, the whole point of the HEDT market. Also to totally, utterly poo poo on Intel for the foreseeable future.

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance
I agree that those "leaked" specs and prices of Zen 2 are just too good to be true. But if they are true...:stare::fh:

SwissArmyDruid
Feb 14, 2014

by sebmojo

EmpyreanFlux posted:

Finished watching AdoredTV vid and honestly while I can see where he is coming from, a lot of it seems kinda nuts still. Every supposition so far relies on a second I/O die that contains a DDR4, GDDR5, GDDR6 and HBM controller plus PCIE connections for IF connections. It relies on the idea that moving the IMC off die for a GPU is really doable, and it relies on AMD getting 40CU under 100mm² (so as to be as physically comparable to a Zen2 die as possible).

Like, take a step back and realize the enormity of what AMD moving the IMC off the GPU would mean. It'd mean they solved the issue of GPU scalability for a multi-die approach, as without a monolithic die, what is the OS or program recognizing as a single GPU, the command processor or the memory controller? If it's the first, why can't that logic be moved to the I/O die, have the I/O die recognized as the "GPU", and have any configuration of shader engine blocks be attached to it? Why couldn't this have been done even earlier?

I want to say actual silicon physical area was what was keeping us from getting to this point, but I'm sure that answer is still far too simplistic. Remember how long we were stuck on 28nm, and how AMD literally couldn't put anything more onto a single Fiji package? I can't find it now, but there was a quote saying they barely managed to leave a millimeter border around the entire conglomeration for some reason or another.

Two years and a die shrink later, the Vega 64 has 3 billion more transistors with the same number of CUs and ROPs, higher base and boost clocks, higher memory frequency, none of the same complain-bragging about hitting areal size limits, and, probably most importantly: it's air-cooled, where the Fiji flagship was water-cooled.

That said, I think it's highly unrealistic for a single I/O die to contain ALL of the above memory controllers, and that AMD is more likely to adopt a mix-and-match approach based on product stack, and I'm bloating on my daily sodium intake with all these rumours.

EmpyreanFlux posted:

I'm trying to suppress a giggle here but cmon, if this was a reasonable solution for GPUs it'd have been done long before.

We also thought the same thing about Nvidia eliminating the hardware scheduler, offloading that bit of silicon's work to the CPU, and reclaiming die space in the name of performance and TDP to get a lead over AMD during Maxwell, but here we are.

SwissArmyDruid fucked around with this message at 20:38 on Dec 5, 2018

Anime Schoolgirl
Nov 28, 2002

The great thing about the chiplet is that it's one less 7nm die they have to fabricate for a separate APU SKU, like they had to with Zen 1. The GPU could be its own chip (unlikely to be on 7nm) or part of the 14nm IO chip (likely, and also cheaper to fab one of these).

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
Lol I have a bridge to sell y'all

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Mr.Radar posted:

Fix games migrating/splitting threads between chiplets all the time and causing huge frametime spikes, same as on the current Threadripper chips.
The current Threadrippers have that issue because of the non-uniform memory bullshit. With the IO die, it's mostly gone.

PC LOAD LETTER
May 23, 2005
WTF?!

Combat Pretzel posted:

With the IO die, it is mostly gone.
Should be entirely gone though right?

The IO die functions as the memory controller for all the dies, like an old-school northbridge but on package instead of on the mobo, so there'll be a uniform means of memory access across all dies presented to the OS. No more NUMA stuff to worry about at all for main memory accesses. Accessing data stored in caches on other dies will still incur a significant latency penalty, but that is true for pretty much all x86 multi-core CPUs.

There is some interesting informed speculation here on the amount of added latency that having the memory controller off the CPU die will incur and it seems like it'll be quite small (only a couple added ns of latency) so that is a pretty big deal if true.

If AMD has improved the IF bus and/or caches significantly (which, going by rumors, they have) then that minor penalty will be more than offset by those changes. On average, across all dies, Zen2 might actually have lower memory latency than Zen/Zen+.
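To see how "uniform but a couple ns slower" can still come out ahead of "fast locally, slow remotely", here's a toy weighted average — every latency number below is invented for illustration, not a measured Zen figure:

```python
# Toy model: average memory latency under two topologies.
# All latency numbers are illustrative only, not measurements.

def avg_latency(local_ns, remote_ns, local_fraction):
    """Expected latency when a given fraction of accesses hit local memory."""
    return local_fraction * local_ns + (1 - local_fraction) * remote_ns

# Zen/Zen+-style NUMA: half your accesses land on the other die's controller.
old = avg_latency(local_ns=65, remote_ns=105, local_fraction=0.5)

# IO-die style: every access pays a small uniform hop, no remote case at all.
new = avg_latency(local_ns=75, remote_ns=75, local_fraction=1.0)

print(old, new)  # 85.0 75.0 -- uniform-but-slightly-slower wins on average
```

The worst-case access also improves (75 ns vs 105 ns), which matters more for tail latency than the average does.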

yergacheffe
Jan 22, 2007
Whaler on the moon.

Don't worry guys the leaks are all true. Want to know why? It's cause I just caved during black Friday sales and bought a 2700 thinking the next gen was gonna be whatever. It'll turn out to be everything it's hyped up to be and more just to spite me.

Craptacular!
Jul 9, 2001

Fuck the DH
I don't see a 16-thread + graphics part as being possible both on-chip and without blowing out many people's VRMs. 6c/12t, sure, but a theoretical 8/16 APU is going to need enough power that I don't see it working on most Bx50 boards and even some X370s.

We're currently at 4/8 on APUs with people wondering where the 6 core part is. 8? Like that screams low-confidence to me more than anything else on here.

Craptacular! fucked around with this message at 10:39 on Dec 6, 2018

Arzachel
May 12, 2012

Craptacular! posted:

I don't see a 16-thread + graphics part as being possible both on-chip and without blowing out many people's VRMs. 6c/12t, sure, but a theoretical 8/16 APU is going to need enough power that I don't see it working on most Bx50 boards and even some X370s.

We're currently at 4/8 on APUs with people wondering where the 6 core part is. 8? Like that screams low-confidence to me more than anything else on here.

You can run a 95w CPU perfectly fine on a cheapo B series board right now.

yergacheffe posted:

Don't worry guys the leaks are all true. Want to know why? It's cause I just caved during black Friday sales and bought a 2700 thinking the next gen was gonna be whatever. It'll turn out to be everything it's hyped up to be and more just to spite me.

At least it'll use the same socket :shobon:

Arzachel fucked around with this message at 11:51 on Dec 6, 2018

Truga
May 4, 2014
Lipstick Apathy

EmpyreanFlux posted:

Finished watching AdoredTV vid and honestly while I can see where he is coming from, a lot of it seems kinda nuts still.

otoh, Kyle Bennett says: "There is a whole lot of reality in that video. A lot. There is a little wrong, but not a lot."

SwissArmyDruid
Feb 14, 2014

by sebmojo

Truga posted:

otoh, Kyle Bennett says: "There is a whole lot of reality in that video. A lot. There is a little wrong, but not a lot."

Welp, if Kyle says it's true, then it's got to be so completely and utterly bogus that we should have all of our heads checked for even daring to THINK it might have had any basis in reality. =P

Back to the bad old years, people! =P

SwissArmyDruid fucked around with this message at 12:09 on Dec 6, 2018

PC LOAD LETTER
May 23, 2005
WTF?!

Craptacular! posted:

I don't see a 16-thread + graphics part as being possible both on-chip and without blowing out many people's VRMs. 6c/12t, sure, but a theoretical 8/16 APU is going to need enough power that I don't see it working on most Bx50 boards and even some X370s.
I dunno if these latest leaks are accurate (the Adored one seems more realistic but still fairly nuts) but TSMC's 7nm is supposed to give a pretty big improvement in power usage if clocks are kept about the same or slightly lower than where they currently are for their APUs. ~40% power reduction vs their 10nm process is what TSMC is saying, and current Zen+ chips use GF's "12nm" process (which is apparently more like GF's version of a 14nm+ than a true optical shrink).

So power probably won't be the issue here.

At least at stock.

If you want to start overclocking either the CPU or GPU then I think you'll hit the limits of their VRMs real fast though, especially since on many boards the portion of the VRM meant for the iGPU is generally even more cut-rate than the CPU side.
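For what it's worth, the ~40% number is at least dimensionally plausible: dynamic CMOS power goes roughly as P ∝ C·V²·f, so trimming capacitance and voltage at the same clock compounds. A sketch with made-up scaling factors (not TSMC's actual process data):

```python
# Dynamic CMOS power: P = k * C * V^2 * f (leakage ignored for simplicity).
# The 0.85 scaling factors below are illustrative guesses, not TSMC data.

def dynamic_power(c_rel, v_rel, f_rel, p_base=100.0):
    """Relative dynamic power against a 100 W baseline part."""
    return p_base * c_rel * (v_rel ** 2) * f_rel

old = dynamic_power(1.0, 1.0, 1.0)                      # baseline: 100 W
new = dynamic_power(c_rel=0.85, v_rel=0.85, f_rel=1.0)  # shrink: less C, lower V, same clock
print(f"{new:.1f} W, {100 * (1 - new / old):.0f}% lower")  # 61.4 W, 39% lower
```

Push f_rel up instead and you'd have to raise V with it, and the V² term erases the savings fast — hence "power savings OR clocks, not both."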

PC LOAD LETTER fucked around with this message at 13:40 on Dec 6, 2018

Eyes Only
May 20, 2008

Do not attempt to adjust your set.

PC LOAD LETTER posted:

I dunno if these latest leaks are accurate (the Adored one seems more realistic but still fairly nuts) but TSMC's 7nm is supposed to give a pretty big improvement in power usage if clocks are kept about the same or slightly lower than where they currently are for their APUs. ~40% power reduction vs their 10nm process is what TSMC is saying, and current Zen+ chips use GF's "12nm" process (which is apparently more like GF's version of a 14nm+ than a true optical shrink).

Marketing departments at fabs say this every node, and while it's been almost true for lower-frequency chips like mobile SoCs or GPUs, 4GHz+ stuff has never received the full benefit.

PC LOAD LETTER
May 23, 2005
WTF?!

Eyes Only posted:

4GHz+ stuff has never received the full benefit.....while it's been almost true for lower-frequency chips
But I did say that they'd have to keep clocks the same or slightly lower to get those power savings.

Also note that while the Adored rumored boost clocks are often quite a bit higher than current Zen+ chips, the base clocks (which are what really matter) aren't all that far off from the current Zen+ chips for most of the models. For the APUs they're actually lower by a decent amount (R5 2400G = 3.6GHz base, rumored R5 3600G = 3.2GHz base; R3 2200G = 3.5GHz base, rumored R3 3300G = 3.0GHz base).

Yeah, there are more cores and bigger iGPUs, but given the CPU clock reductions the rumored TDPs don't seem implausible considering there's a pretty big process jump going on here from GF's "12nm" to TSMC's 7nm. edit: Actually those APUs are still made on GF's 14nm process, not the "12nm" one, so it'll be an even bigger jump.

FWIW TSMC has been pretty open about saying you can get the power savings OR the clock increases but not both. Which is also the norm with any process shrink from anyone.

IMO the most unbelievable part of the Adored rumors has been the prices, not the power ratings or clock speeds or core counts. Those prices are well into the "bargain" range for a CPU that is rumored to meet or beat Intel on single-thread performance and, going by those Adored rumors, will also have a significant edge when it comes to mo' threadz/corez. AMD has never been shy about pricing their stuff high when they can meet or beat Intel on performance.

PC LOAD LETTER fucked around with this message at 14:51 on Dec 6, 2018

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

PC LOAD LETTER posted:

Should be entirely gone though right?
I sure hope so. I'm going for the successor of the 2950X, whatever it's called by then (apparently 3920X). I mean, 48 cores and a higher base clock on the 3950X also sounds fun, but the extra price, meh. Assuming this ain't all bullshit (which it probably is).

wargames
Mar 16, 2008

official yospos cat censor

yergacheffe posted:

Don't worry guys the leaks are all true. Want to know why? It's cause I just caved during black Friday sales and bought a 2700 thinking the next gen was gonna be whatever. It'll turn out to be everything it's hyped up to be and more just to spite me.

Thank you for your sacrifice. I will have to endure the glorious new 7nm AMD CPU when I build my new computer.

fknlo
Jul 6, 2009


Fun Shoe
Just did a new build with a 2700x. Idle temps are fluctuating quite a bit causing my fans to spin up when they spike. I'm running a Coolermaster ML360R. Idle CPU temps seem to bottom out around 30 degrees and will jump up to 40 and then fall back down. Package temps will drop to around the same but jump up to 44 degrees on occasion and then drop back down. I enabled the XMP profile for the RAM but haven't touched anything else. Is that normal?

NewFatMike
Jun 11, 2015

fknlo posted:

Just did a new build with a 2700x. Idle temps are fluctuating quite a bit causing my fans to spin up when they spike. I'm running a Coolermaster ML360R. Idle CPU temps seem to bottom out around 30 degrees and will jump up to 40 and then fall back down. Package temps will drop to around the same but jump up to 44 degrees on occasion and then drop back down. I enabled the XMP profile for the RAM but haven't touched anything else. Is that normal?

Sounds fine. Those spikes are probably just random tasks happening in the background.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Those spikes are XFR.

Llamadeus
Dec 20, 2005
Also just set the fan curve to be a flat line until at least 50 C or so.
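That flat-then-ramp shape is simple to describe as a function — the knee, ceiling, and duty values below are just examples, not any board's defaults:

```python
# Flat-then-ramp fan curve: hold a constant minimum duty below the knee so
# brief idle temperature spikes don't audibly spin the fans up, then ramp
# linearly to 100%. Example values only, not any motherboard's defaults.

def fan_duty(temp_c, knee=50.0, full=80.0, min_duty=30.0):
    """Fan duty cycle (percent) for a given CPU temperature."""
    if temp_c <= knee:
        return min_duty
    if temp_c >= full:
        return 100.0
    frac = (temp_c - knee) / (full - knee)   # linear interpolation past the knee
    return min_duty + frac * (100.0 - min_duty)

for t in (35, 44, 50, 65, 85):
    print(t, fan_duty(t))   # the 30-44 C idle spikes all map to the same 30% duty
```

The point is that everything below the knee collapses to one duty cycle, so XFR-style idle spikes become inaudible.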

fknlo
Jul 6, 2009


Fun Shoe

Llamadeus posted:

Also just set the fan curve to be a flat line until at least 50 C or so.
Gonna do this when I get back home.

What should it be sitting at as far as clock speed at idle? It was generally at 4.0 GHz with random drops from that on various cores.

ufarn
May 30, 2009

fknlo posted:

Just did a new build with a 2700x. Idle temps are fluctuating quite a bit causing my fans to spin up when they spike. I'm running a Coolermaster ML360R. Idle CPU temps seem to bottom out around 30 degrees and will jump up to 40 and then fall back down. Package temps will drop to around the same but jump up to 44 degrees on occasion and then drop back down. I enabled the XMP profile for the RAM but haven't touched anything else. Is that normal?
That's the normal behaviour. Make sure you:

1) Move the fan curve a little bit past the max peak for now
2) Make sure your power profile is Balanced (not Ryzen Balanced)
3) Increase the Minimum Processor State in the battery profile setting under Processor Power Management until your temps don't fluctuate too much; in other words, give it enough juice to handle services and other background processes that spin up and down. Mine is 20% on air with a Noctua D14.

I've been told HWINFO64 is the most reliable for monitoring temps (like, people were practically yelling at me not to use HWMonitor).

Also, the latest Nvidia driver is trash and can make your GPU go nuts, so go back to the second-latest with Display Driver Uninstaller if your new build has an Nvidia GPU.

Klyith
Aug 3, 2007

GBS Pledge Week

fknlo posted:

Gonna do this when I get back home.

What should it be sitting at as far as clock speed at idle? It was generally at 4.0 GHz with random drops from that on various cores.

with the giant gently caress-off radiator the question becomes more about what the rest of the case has / needs for ventilation. like you could probably set the rad fans to zero if you also have intake fans and just use the positive pressure.

if the rad fans are your main air movers then you can't turn them off, so just find the spot on the PWM that's barely audible.

fknlo
Jul 6, 2009


Fun Shoe

ufarn posted:

That's the normal behaviour. Make sure you:

1) Move the fan curve a little bit past the max peak for now
2) Make sure your power profile is Balanced (not Ryzen Balanced)
3) Increase the Minimum Processor State in the battery profile setting under Processor Power Management until your temps don't fluctuate too much; in other words, give it enough juice to handle services and other background processes that spin up and down. Mine is 20% on air with a Noctua D14.

I've been told HWINFO64 is the most reliable for monitoring temps (like, people were practically yelling at me not to use HWMonitor).

Also, the latest Nvidia driver is trash and can make your GPU go nuts, so go back to the second-latest with Display Driver Uninstaller if your new build has an Nvidia GPU.

Did all those and things seem to be fine now. I'm glad those temp variations are normal because they had me a little worried. I'm used to temps being super constant at idle.

I'm gonna get everything set up and then probably play around with overclocking it.


Klyith posted:

with the giant gently caress-off radiator the question becomes more about what the rest of the case has / needs for ventilation. like you could probably set the rad fans to zero if you also have intake fans and just use the positive pressure.

if the rad fans are your main air movers then you can't turn them off, so just find the spot on the PWM that's barely audible.

I do have intake fans. Bumping the curve up to 50 degrees has it more than quiet enough for me.

NewFatMike
Jun 11, 2015



Apparently TR 2990WX is going for ~$1,500, so like $200 off. If that's VAT included, then it's $1,200 which is a bonkers loving deal.

If Navi is even remotely close to the leaks, then I'm definitely picking one up for VM passthrough. Quick and easy way to get around that pesky GSync tax I already paid :shep:

NewFatMike fucked around with this message at 17:12 on Dec 10, 2018

SwissArmyDruid
Feb 14, 2014

by sebmojo
God I want to jump on that hard, but I kind of want 8-core CCXes more before staking my tech level for a while.

....except PCIE 4 is coming,

....and 5 shortly after that,

and DDR5 is just over the horizon.....

oh my god, it's like the last times I built a computer all over again around the AGP -> PCIe switchover and PCIe -> PCIe 2 and DDR2 -> DDR3 switchovers

SwissArmyDruid fucked around with this message at 22:04 on Dec 10, 2018

Seamonster
Apr 30, 2007

IMMER SIEGREICH
I'm not too hyped for DDR5 considering DDR4 prices are still in the process of coming down. A new standard is just another chance for those cocksuckers to run up their cartel power and gouge (again).

3peat
May 6, 2010

NewFatMike posted:



Apparently TR 2990WX is going for ~$1,500, so like $200 off. If that's VAT included, then it's $1,200 which is a bonkers loving deal.

If Navi is even remotely close to the leaks, then I'm definitely picking one up for VM passthrough. Quick and easy way to get around that pesky GSync tax I already paid :Shep:

All retail prices in EU must have VAT included, in this case 19%
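For the arithmetic: going from a VAT-inclusive price back to net is a division by (1 + rate), so at 19% the ~$1,500 sticker is about $1,260 before tax — close to the rough $1,200 figure above:

```python
# Strip VAT from a gross (tax-inclusive) price. Rate is a fraction, e.g. 0.19.
def net_price(gross, vat_rate=0.19):
    return gross / (1 + vat_rate)

print(round(net_price(1500), 2))  # 1260.5
```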

Mr Shiny Pants
Nov 12, 2012
It's over 2000 euros now......

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Anybody here running Epyc servers after coming off Xeons? I have a core-hungry workload and the benchmarks I'm seeing say I can get roughly 50% more IOPS for my dollar but it would be nice to talk to someone who has made the switch about any caveats that have come up.

sauer kraut
Oct 2, 2004

Mr Shiny Pants posted:

It's over 2000 Euro's now......

Mindstar deals are limited-time offers for clearance or loss leaders like that WX, and the closest thing we've got in Germany to US prices :unsmith:

fknlo
Jul 6, 2009


Fun Shoe
Are the performance boost overdrive and power enhancement levels in the BIOS meant to be used together, or is that unnecessary?

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

BangersInMyKnickers posted:

Anybody here running Epyc servers after coming off Xeons? I have a core-hungry workload and the benchmarks I'm seeing say I can get roughly 50% more IOPS for my dollar but it would be nice to talk to someone who has made the switch about any caveats that have come up.

I can't even find any Epyc servers in my region. I'm waiting for a 12-core Zen 2 Ryzen, which won't be released until around March next year, to replace my workstation.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Devian666 posted:

I can't even find any Epyc servers in my region. Waiting for a 12 core Zen 2 Ryzen to replace my workstation at the moment, which won't be released until around March next year.

I'm on the "please please please let me have a _v2 Azure Epyc instance pretty please" list, but they started doing that like a year ago and it's still not in general availability for some reason, so who knows.


Mr Shiny Pants
Nov 12, 2012

sauer kraut posted:

Mindstar deals are limited-time offers for clearance or loss leaders like that WX, and the closest thing we've got in Germany to US prices :unsmith:

Should have known a little earlier. :)
