eames
May 9, 2009

AMD TR2 interview with Jim Anderson

https://www.youtube.com/watch?v=E73H8HvqjEM

This kind of confirms that Intel expected to counter the 24-core TR2 with their 28-core part, and that the 32-core came as a surprise. :lol:
He also mentions "new features" in Ryzen Master, probably related to UMA/NUMA. I wish I had a use case for all those cores.

eames fucked around with this message at 13:06 on Jun 7, 2018


BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

snickothemule posted:

Can't wait to see the power consumption on that "5ghz" CPU before it falls over.

Estimates I've seen are ~1.5kW on the system alone, plus compressor overhead for the water chiller loop, so tack on another 50-100% for that. If you're doing it at home and you don't have the chiller heat exchanger hooked up to outside air and are dumping the heat into your indoor space, then add another 50% on top of THAT for additional AC load. We're talking something like 3-5kW in total, which is loving Stupid.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I wonder if the 16C Threadripper is still going to be just two dies with double bandwidth between them, or for sake of cost savings, they're going to do some four die bullshit for the second gen (i.e. 4x4)

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I assume that would only be a cost savings if there's still a substantial yield of dies which have more than one unviable core on a CCX and therefore can't be made into Ryzen 5s. Even those could go into the 1900X, since it's 2 dies, each with 2 CCXes, each with 2 cores enabled, but that's probably not a very popular part.

Otherwise they are cutting down more perfectly good dies and having to use a bigger interposer to connect them, I think.

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

I wonder if the 16C Threadripper is still going to be just two dies with double bandwidth between them, or for sake of cost savings, they're going to do some four die bullshit for the second gen (i.e. 4x4)

If it doesn't result in massive performance drops, I don't see the problem to be honest.

GRINDCORE MEGGIDO
Feb 28, 1985


Mr Shiny Pants posted:

If it doesn't result in massive performance drops, I don't see the problem to be honest.

Be a shame as I'd like one for gaming on as well as being an overkill workstation.

Mr Shiny Pants
Nov 12, 2012

GRINDCORE MEGGIDO posted:

Be a shame as I'd like one for gaming on as well as being an overkill workstation.

You can buy my 1950X and I'll put that to a TR2. ;)

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Most workloads didn't seem to have an appreciable penalty for hitting the crossbar or whatever they call it to access a memory controller on a different node. I doubt it will matter much except for things that are extremely latency sensitive like high frequency trading or heavily loaded virtualization hosts where memory bandwidth is in contention.

GRINDCORE MEGGIDO
Feb 28, 1985


Mr Shiny Pants posted:

You can buy my 1950X and I'll put that to a TR2. ;)

I'm gonna wait for TR2 to crater the prices :getin:

Mr Shiny Pants
Nov 12, 2012

BangersInMyKnickers posted:

Most workloads didn't seem to have an appreciable penalty for hitting the crossbar or whatever they call it to access a memory controller on a different node. I doubt it will matter much except for things that are extremely latency sensitive like high frequency trading or heavily loaded virtualization hosts where memory bandwidth is in contention.

This is also my understanding. So it won't matter much if they do make it 4 dies.

Cygni
Nov 12, 2005

raring to post

I dunno, i feel like how AMD manages the core allocation to minimize those hits is gonna be huge for lots of programs. Especially thinking long term as MOAR CORES is clearly where the mainstream is headed too.

Like if the scheduler is dumb and parks your game processes on a die with the second fiddle memory access, and puts like fuckin Google's background upgrader threads and whatever on a die with direct access, that will be Bad. AMD's answer last time was game mode that just turned the second die/NUMA node off entirely, but that doesn't seem very elegant and would be more complicated for 4 dies. Would be better if the scheduler knew there were A and B cores and could intelligently sort processes between them.

I know AMD has been working with MS to release OS scheduler patches and such so maybe they already addressed this, i dunno.
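The worry above can be sketched as a toy placement policy: pin latency-sensitive threads to the node with direct memory access and push background junk to the far node. Purely illustrative; the thread names and the two-node split are made up, and this is not AMD's or Microsoft's actual scheduler logic.

```python
# Toy model of NUMA-aware thread placement: latency-sensitive work goes to
# the node with direct memory access, background work to the remote node.
def place_threads(threads, direct_node=0, remote_node=1):
    """threads: list of (name, latency_sensitive) tuples.
    Returns {name: node} with sensitive threads on the direct-access node."""
    placement = {}
    for name, sensitive in threads:
        placement[name] = direct_node if sensitive else remote_node
    return placement

threads = [
    ("game_render", True),       # hypothetical latency-sensitive threads
    ("game_audio", True),
    ("updater_daemon", False),   # hypothetical background threads
    ("indexer", False),
]
print(place_threads(threads))
```

A real scheduler would also have to rebalance when the direct node fills up, which is exactly the part "game mode" sidesteps by turning the second node off.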

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I wonder if the new Intel 28 core might just be outright slower than the TR2 32 core by running into thermal and power limits much sooner? I mean, the 7980XE was already pushing it, and XFR2+PBO is way better than Turbo Boost. The (guessing) 8980XE might have a lower or simply comparable frequency cap in practical use.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

NUMA awareness has been a thing for a long time, and if a specific program does have problems due to memory latency from the architecture, it will be completely within the developers' ability to optimize around, as the layout will be exposed to the application.

Khorne
May 1, 2002

Cygni posted:

I dunno, i feel like how AMD manages the core allocation to minimize those hits is gonna be huge for lots of programs. Especially thinking long term as MOAR CORES is clearly where the mainstream is headed too.

Like if the scheduler is dumb and parks your game processes on a die with the second fiddle memory access, and puts like fuckin Google's background upgrader threads and whatever on a die with direct access, that will be Bad. AMD's answer last time was game mode that just turned the second die/NUMA node off entirely, but that doesn't seem very elegant and would be more complicated for 4 dies. Would be better if the scheduler knew there were A and B cores and could intelligently sort processes between them.

I know AMD has been working with MS to release OS scheduler patches and such so maybe they already addressed this, i dunno.
The new modes will be essentially "game mode", "TR1 mode", and "all the cores" mode.

LRADIKAL
Jun 10, 2001

Fun Shoe

BangersInMyKnickers posted:

NUMA awareness has been a thing for a long time, and if a specific program does have problems due to memory latency from the architecture, it will be completely within the developers' ability to optimize around, as the layout will be exposed to the application.

How does this type of thing work? Is information about the architecture exposed to applications which can request low latency cores? Is the information only exposed to the OS which doles out cores per application?

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
In an OS written to be NUMA-aware applications can call an API to request a batch of threads to be located in the same node, do a memory allocation on a specific node, figure out which nodes have how much memory adjacent, etc. At least, that's how it seems to work in Windows according to this page I just found from Microsoft: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363804%28v=vs.85%29.aspx

As an end user, you can also set core affinity for processes yourself if you really want to fine-tune it that much. I'm not sure if there's a way to manually manage memory allocation like that though.
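The manual-affinity part has a direct analogue on Linux via `os.sched_setaffinity` (on Windows the equivalent would be `SetProcessAffinityMask` or Task Manager's "Set affinity"). A minimal sketch that pins and then restores the current process:

```python
import os

# Manually pin this process to a single core, then restore the original mask.
# On a NUMA box you'd pick a core from the node holding your memory.
allowed = os.sched_getaffinity(0)   # cores this process may currently run on
one_core = {min(allowed)}           # pick any single core from that set
os.sched_setaffinity(0, one_core)   # pin to just that core
assert os.sched_getaffinity(0) == one_core
os.sched_setaffinity(0, allowed)    # undo the pin
```

This only controls where threads run; as noted above, per-node memory placement needs the dedicated NUMA allocation APIs.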

Eletriarnation fucked around with this message at 21:17 on Jun 7, 2018

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Not memory allocations, no. It's all virtual memory, abstracted and allocated by the OS. You just need to be aware of how the underlying cores map and configure your application core affinity accordingly. I suspect a lot of the work the AMD drivers are doing is optimizing process core assignments for their architecture when the application devs won't do the work.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I wonder if (when) we’ll end up with NUMA management drivers that do GPU-style app detection or peephole optimization to automatically “optimize” how things are allocated across the package.

Broose
Oct 28, 2007
I wish video games didn't depend on an individual core having high clock rates more than having a poo poo ton of cores. Just so I could have an excuse to buy and install something called "THREADRIPPER 2000" and not have worse performance than a $200 cpu.

Woe to the casual user.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Well Ryzen 2 is gonna have 12-16 cores which is a good balance of moar corez and not having to deal with the NUMA stuff or high cost of TR.

Mr Shiny Pants
Nov 12, 2012

LRADIKAL posted:

How does this type of thing work? Is information about the architecture exposed to applications which can request low latency cores? Is the information only exposed to the OS which doles out cores per application?

In Windows you can ask the scheduler to schedule your threads on a NUMA basis.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Wait, people thought the sub-ambient cooled 28-core Intel chip WASN'T overclocked? What is the Intel thread even like right now?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SwissArmyDruid posted:

Wait, people thought the sub-ambient cooled 28-core Intel chip WASN'T overclocked? What is the Intel thread even like right now?

Pretty sure the only person who thought it wasn't overclocked was Ian Cutress of AT. Even Ryan Smith thought he was full of poo poo.

quote:

Personally, I feel this new processor is not a higher binned Platinum 8180. Going up from 2.8 GHz base / 3.5 GHz turbo to 5.0 GHz all-core frequency is a big step, assuming the 5.0 GHz value was not an overclock. I would fully expect that this is the point where Intel starts introducing EMIB to CPUs. (ed: FWIW, I disagree with Ian; my money is on a heavily binned 28-core XCC processor made on 14++. We've seen that Intel can do 5GHz on that process with the 8086K)

Last week I discussed the potential death of Intel’s low-end core design for high-end desktop, because it was being eclipsed by the mainstream parts. The only way Intel would be able to reuse the server versions of those low-core-count designs would be to enable its embedded multi-die interconnect bridge (EMIB) technology to put two or more of the smaller dies on the same package. This would allow Intel to amortize costs in the same way AMD does by making use of higher-yielding parts (as die size goes down, yield goes up).

Intel’s EMIB has a potentially high bi-directional bandwidth, so it would be interesting to see if Intel would bind two dies together and if there is any additional latency or bandwidth decrease with two dies together. With 28 cores, that would subdivide by two to 14-each, but not to four. So this processor is likely to be two 14-core dies using EMIB… which would actually be Intel’s HCC (high-core-count) processor design.

To add something extra to the mix, Intel might not be using EMIB at all. It could just as easily be the QPI interface on package, much how the company is using the Xeon + FPGA products announced recently.

I called it straight off and generally people think the whole "28-phase motherboard with sub-ambient cooling" thing is pretty hilarious(ly transparent).

BangersInMyKnickers posted:

Thanks, Intel, for making an entire computer around the Bitchin' Fast 3D 2000 joke



Paul MaudDib fucked around with this message at 04:55 on Jun 9, 2018

Khorne
May 1, 2002

SwissArmyDruid posted:

Wait, people thought the sub-ambient cooled 28-core Intel chip WASN'T overclocked? What is the Intel thread even like right now?
I got massively downvoted on r/intel for pointing out intel processors could hit 5 GHz in 2011. I'm talking over -40 on my profile sheet. That was in the anniversary edition CPU thread, with people saying "Wow, first 5 GHz processor" and stuff.

Khorne fucked around with this message at 05:00 on Jun 9, 2018

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Khorne posted:

I got massively downvoted on r/intel for pointing out intel processors could hit 5 GHz in 2011.

Somehow, I get upvoted in r/amd when I post factual stuff even if it's against AMD.

Nobody actually likes intel enough to fanboy about them. Even the most diehard Intel fans acknowledge they're a poo poo company that happens to produce the best product around if you have the money to spend and disregard their general approach to business and customer relations.

r/intel is just overflow for bored r/amd posters. If you look at the subreddit subscriber numbers compared to actual marketshare, AMD users are about 25x as likely to sub on a per-user basis as Intel users.

Khorne
May 1, 2002

Paul MaudDib posted:

Nobody actually likes intel enough to fanboy about them. r/intel is just overflow for bored r/amd posters.

If you look at the subreddit subscriber numbers compared to actual marketshare, AMD users are about 25x as likely to sub on a per-user basis as Intel users.
I like things that are good, personally. As lovely as intel has been for the past 5-6 years or so, AMD couldn't even compete and buying an i7 during sandy bridge/ivy bridge was one of the greatest value buys in the history of buying computer stuff.

In 2019 AMD should be the leader in actual good processors, probably, hopefully. And I'll be buying AMD then unless intel pulls off some real magic, but I don't see how that will be possible until the early '20s.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Khorne posted:

I like things that are good, personally. As lovely as intel has been for the past 5-6 years or so, AMD couldn't even compete and buying an i7 during sandy bridge/ivy bridge was one of the greatest value buys in the history of buying computer stuff.

In 2019 AMD should be the leader in actual good processors, probably, hopefully. And I'll be buying AMD then unless intel pulls off some real magic, but I don't see how that will be possible until the early '20s.

Yeah, same. The golden number is 5 GHz; if they can deliver that with Zen+ level latency then Intel won't have much argument left on the gaming front anymore. I'm holding off until at least 8C Coffee Lake, but I'm thinking seriously about waiting for Zen2 and maybe third-gen Threadripper to replace my 5820K.

Speaking of which, am I incorrect that the plan for 64C Epyc pretty much blows the lid on a new 8C CCX? 4 dies at 64C = 16C per die... meaning a 2x8C CCX or 4x4C CCX configuration.

I said right from the start that 8C makes way more sense than 6C. 8C is a 2x2x2 topology, 6C is... nothing. At small scales (i.e. intra-CCX) the hypercube absolutely does make sense as an interconnect topology... it's what AMD is using right now. Seems like people didn't really have a basis for 6C other than "it's like, 50% bigger than 4C, that sounds good, and Ryzen means AMD needs to do everything small, right!?". If AMD is going to do a small CCX it's going to be 4C again, big CCX it's going to be 8C.

(and the 4x4C CCX topology is not that insane when you consider that AMD is trialling whole dies that are not even directly hooked to memory with 2nd-gen TR)
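The 2x2x2 point is easy to sanity-check: in a d-dimensional hypercube, cores link to every ID that differs by exactly one bit, so 8 cores get exactly 3 links each and any pair is at most 3 hops apart, while 6 cores (not a power of two) can't form one at all. This is just the graph math, not a statement about how AMD actually wires the fabric:

```python
# A d-dimensional hypercube has 2^d nodes; node i links to node j iff their
# IDs differ in exactly one bit, and distance = Hamming distance of the IDs.
def hypercube_neighbors(core, dim):
    return [core ^ (1 << b) for b in range(dim)]

def hops(a, b):
    # Hamming distance: number of bit positions where the IDs differ
    return bin(a ^ b).count("1")

dim = 3                       # 2^3 = 8 cores -> the "2x2x2" topology
cores = range(2 ** dim)
assert all(len(hypercube_neighbors(c, dim)) == dim for c in cores)
assert max(hops(a, b) for a in cores for b in cores) == dim
print("8C hypercube: 3 links per core, diameter 3")
```

There's no d with 2^d = 6, which is the structural argument against a 6C CCX.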

Paul MaudDib fucked around with this message at 05:21 on Jun 9, 2018

Arzachel
May 12, 2012
4 CCX per die is significantly more likely than 8 core CCX, as long as they can get the interconnect speed up.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Arzachel posted:

4 CCX per die is significantly more likely than 8 core CCX, as long as they can get the interconnect speed up.

I think the 2200G/2400G are suggestive that the 4x4CCX configuration is significantly less likely to happen. Going cross-CCX eats lanes and AMD seems to be allergic to that for some reason. They literally could have gone with an 8C configuration on the 2400G, and did not, nor did they even expose the full 16-lane PEG capability of the die. Intra-CCX seems to be more scalable for now.

Paul MaudDib fucked around with this message at 03:56 on Jun 10, 2018

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Arzachel posted:

4 CCX per die is significantly more likely than 8 core CCX, as long as they can get the interconnect speed up.

At 7 nm an 8 core CCX is like 30% larger than the current 4 core CCX at 14 nm, and would take about as much power. It's totally possible to do that without needing a higher TDP, or really changing the packaging much.

Also, moving the interconnect from a PCIe 3 equivalent to PCIe 4 would basically double the inter-CCX bandwidth, improving things substantially.
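The doubling follows straight from the per-lane signalling rates, assuming the fabric link really does track the PCIe rate; the 128b/130b figure is from the PCIe spec itself, not anything AMD has published about Infinity Fabric framing:

```python
# PCIe per-lane raw rates: 3.0 signals at 8 GT/s, 4.0 at 16 GT/s, and both
# use 128b/130b encoding, so usable per-lane bandwidth simply doubles.
def lane_gbps(transfer_rate_gt):
    """Usable gigabits/s per lane after 128b/130b encoding overhead."""
    return transfer_rate_gt * (128 / 130)

pcie3 = lane_gbps(8)
pcie4 = lane_gbps(16)
assert pcie4 == 2 * pcie3
print(f"PCIe 3.0: {pcie3:.2f} Gb/s/lane, PCIe 4.0: {pcie4:.2f} Gb/s/lane")
```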

Mark Larson
Dec 27, 2003

Interesting...
Does anyone have a 2200G or 2400G and is overclocking it? I think I have my 2200G stable at 3.9 GHz with 1.3125v but can't hit 4 GHz even with bumping the voltage up to 1.325v. Or rather, I can hit it but it crashes after a few minutes at full load. Temperature showed 73C at the end. I don't have the best cooling so I might have to stay content with 3.9 GHz, but would like to know what numbers others are getting.

I don't think there's too much of a gain from 3.7 GHz to 3.9 GHz for my video encoding needs, but it's for the 8=====D e-cred.
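For a rough idea of what that frequency/voltage bump costs, dynamic power scales roughly with f·V² (a first-order approximation that ignores leakage; the numbers plugged in are the ones from this post):

```python
# First-order dynamic power model: P proportional to f * V^2.
# Ignores static/leakage power, so treat the result as a rough estimate.
def relative_power(f1, v1, f2, v2):
    """Dynamic power at (f2, v2) relative to (f1, v1)."""
    return (f2 / f1) * (v2 / v1) ** 2

# Stock turbo (3.7 GHz @ 1.325v) vs. the attempted OC (3.9 GHz @ 1.3125v).
ratio = relative_power(3.7, 1.325, 3.9, 1.3125)
print(f"dynamic power change: {(ratio - 1) * 100:+.1f}%")
```

The slight undervolt claws back most of the cost of the clock bump, which is why the dynamic-power delta comes out to only a few percent.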

PC LOAD LETTER
May 23, 2005
WTF?!
I don't have one of those, but did you try bumping vSOC up to 1.1 or so? You'd think only vCore would matter, but apparently that's not necessarily the case.

That generally helped lots with stability on other Ryzen OC'ing.

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:50 on Mar 23, 2021

repiv
Aug 13, 2009

Watch out for thermal throttling too, running the CPU too hot could indirectly make the GPU clock lower.

Mark Larson
Dec 27, 2003

Interesting...
I tried 3.925 GHz at 1.3v. I don't think I'm 100% stable yet. Prime95 kept running for a while, until the whole system unceremoniously hung. I was watching the WHEA (Windows Hardware Error Architecture) errors accumulate in HWiNFO, whereas Prime95 displayed no errors until the whole system crashed. The max CPU temp was 84C.

I've made a graph that shows the Vcore, temperatures and so on. You can see that I tried with a higher Vcore (1.3125v) at the beginning, then as soon as I lowered it past 1.3v in Ryzen Master, the WHEA errors started occurring. The errors seemed to stop when I bumped the Vcore back up to 1.3v but then the system crashed shortly thereafter.

I've attached a case fan to the intake now, to see if that makes a difference in overall temps.

EDIT:
I'm trying prime95 with the CPU completely stock, to set a baseline reference for temperatures and Vcore. It seems that I was undervolting quite a bit, so it's no surprise that it wasn't stable, since HWiNFO reports 1.325v at 3.7 GHz (stock turbo speed).

The case fan already seems to be helping the hard drives, but CPU temps are up to 82C after 10 minutes of prime95, with an ambient around 24-25C.

I might need a better CPU cooler, since I'll definitely need more than 1.3v to get to 4 GHz. :20bux:

PC LOAD LETTER
May 23, 2005
WTF?!
If you're that determined to get to 4 GHz, save your money and just de-lid and repaste the IHS like some have been doing for Intel chips. Your current HSF is probably good enough, and the volts you've needed so far really aren't that bad, so it'll probably work.

More info here: https://www.gamersnexus.net/guides/3237-amd-r3-2200g-delid-temperatures-and-liquid-metal

If de-lidding the CPU scares you, then you can give better cooling a shot I guess, but you'll probably need to spend more than $20 to get cooling good enough for 4 GHz.

Cygni
Nov 12, 2005

raring to post

AMD is launching a 4/8 Ryzen 5 2500X and a 4/4 Ryzen 3 2300X soon with 4.0 GHz turbos.

https://wccftech.com/amd-ryzen-5-2500x-ryzen-3-2300x-specs-performance-price-revealed/

Could be some good budget choices, but I'm sorta of the opinion at this point that if you are gonna drop $200 on RAM just to turn the thing on, you might as well spend the extra ~$50 and get a 2600 with 6/12 that will have much better longevity.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Cygni posted:

AMD launching a 4/8 Ryzen 5 2500X and a 4/4 Ryzen 3 2300X soon with 4.0ghz turbos.

https://wccftech.com/amd-ryzen-5-2500x-ryzen-3-2300x-specs-performance-price-revealed/

Could be some good budget choices, but I'm sorta of the opinion at this point that if you are gonna drop $200 on RAM just to turn the thing on, you might as well spend the extra ~$50 and get a 2600 with 6/12 that will have much better longevity.

The 2300X is such a "why" part when an Athlon based off Raven Ridge performs just as well. Further, the Raven Ridge 12nm refresh is coming soon, so... why? It's not even going to have a clock speed advantage, or a power advantage. The 2500X at least makes some sense with a massive 16MB L3 cache, but as you point out a 2600/2600X is not much more expensive in the grand scheme of building a computer, is way better in productivity, and actually has a performance advantage in games, while a 2500X will struggle to stand out from a 2400G, much as the 1500X did against the 1400.

Further, by AMD's own admission 7nm Zen2 is coming in 1H 2019. They're Osborning themselves a bit here, but 7nm promises a lot, so why overinvest in the AM4 platform right now?

They'd make sense as good entry-level processors if the 2500X was $130 and the 2300X was $80.

LRADIKAL
Jun 10, 2001

Fun Shoe
I'm thinking of getting a cheaper Ryzen now and upgrading to Ryzen 2 next year, selling the old one, but then I think again and there's still no reason not to just get a more expensive version and sell that instead. Weird.


Craptacular!
Jul 9, 2001

Fuck the DH
Wrong thread
