|
AMD TR2 interview with Jim Anderson https://www.youtube.com/watch?v=E73H8HvqjEM This kind of confirms that Intel expected to counter their 24 core TR2 with the 28 core part and the 32 core came as a surprise. He also mentions "new features" in Ryzen Master, probably related to UMA/NUMA. I wish I had a use case for all those cores. eames fucked around with this message at 13:06 on Jun 7, 2018 |
# ? Jun 7, 2018 12:58 |
|
|
snickothemule posted:Can't wait to see the power consumption on that "5ghz" CPU before it falls over. Estimates I've seen are ~1.5kW on the system alone, plus compressor overhead for the water chiller loop, so tack on another 50-100% for that. If you're doing it at home and the chiller's heat exchanger isn't hooked up to outside air, you're dumping that heat into your living space, so add another 50% on top of THAT for additional AC load. We're talking something like 3-5kW in total, which is loving Stupid.
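The back-of-envelope arithmetic behind that 3-5kW range can be sketched out; note that the 1.5 kW base draw and the overhead percentages are the poster's estimates, not measured figures:

```python
# Rough sanity check of the total-draw estimate above.
# All inputs are the poster's guesses, not measurements.
system_kw = 1.5  # estimated draw of the overclocked system itself

# Chiller compressor overhead: 50-100% on top of the heat load it removes
with_chiller_low = system_kw * 1.5
with_chiller_high = system_kw * 2.0

# If the chiller dumps its heat indoors, add ~50% more for AC to remove it
total_low = with_chiller_low * 1.5    # ~3.4 kW
total_high = with_chiller_high * 1.5  # ~4.5 kW

print(f"estimated total draw: {total_low:.1f} - {total_high:.1f} kW")
```

Which lands in the ballpark of the "3-5kW" figure quoted above.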
|
# ? Jun 7, 2018 15:13 |
|
I wonder if the 16C Threadripper is still going to be just two dies with double bandwidth between them, or for sake of cost savings, they're going to do some four die bullshit for the second gen (i.e. 4x4)
|
# ? Jun 7, 2018 15:42 |
|
I assume that would only be a cost savings if there's still a substantial yield of dies which have more than one unviable core on a CCX and can't therefore be made into Ryzen 5s. Even those could go into the 1900X since it's 2 dies, each with 2 CCXes, each with 2 cores, but that's probably not a very popular part. Otherwise they are cutting down more perfectly good dies and having to use a bigger interposer to connect them, I think.
|
# ? Jun 7, 2018 15:48 |
|
Combat Pretzel posted:I wonder if the 16C Threadripper is still going to be just two dies with double bandwidth between them, or for sake of cost savings, they're going to do some four die bullshit for the second gen (i.e. 4x4) If it doesn't result in massive performance drops, I don't see the problem to be honest.
|
# ? Jun 7, 2018 18:13 |
|
Mr Shiny Pants posted:If it doesn't result in massive performance drops, I don't see the problem to be honest. Be a shame as I'd like one for gaming on as well as being an overkill workstation.
|
# ? Jun 7, 2018 18:16 |
|
GRINDCORE MEGGIDO posted:Be a shame as I'd like one for gaming on as well as being an overkill workstation. You can buy my 1950X and I'll put that toward a TR2.
|
# ? Jun 7, 2018 18:18 |
|
Most workloads didn't seem to have an appreciable penalty for hitting the crossbar or whatever they call it to access a memory controller on a different node. I doubt it will matter much except for things that are extremely latency sensitive like high frequency trading or heavily loaded virtualization hosts where memory bandwidth is in contention.
|
# ? Jun 7, 2018 18:18 |
|
Mr Shiny Pants posted:You can buy my 1950X and I'll put that to a TR2. I'm gonna wait for TR2 to crater the prices
|
# ? Jun 7, 2018 18:23 |
|
BangersInMyKnickers posted:Most workloads didn't seem to have an appreciable penalty for hitting the crossbar or whatever they call it to access a memory controller on a different node. I doubt it will matter much except for things that are extremely latency sensitive like high frequency trading or heavily loaded virtualization hosts where memory bandwidth is in contention. This is also my understanding. So it won't matter much if they do make it 4 dies.
|
# ? Jun 7, 2018 18:31 |
|
I dunno, I feel like how AMD manages the core allocation to minimize those hits is gonna be huge for lots of programs. Especially thinking long term as MOAR CORES is clearly where the mainstream is headed too. Like if the scheduler is dumb and parks your game processes on a die with the second-fiddle memory access, and puts like fuckin Google's background upgrader threads and whatever on a die with direct access, that will be Bad. AMD's answer last time was game mode that just turned the second die/NUMA node off entirely, but that doesn't seem very elegant and would be more complicated for 4 dies. Would be better if the scheduler knew there were A and B cores and could intelligently sort processes between them. I know AMD has been working with MS to release OS scheduler patches and such so maybe they already addressed this, I dunno.
|
# ? Jun 7, 2018 19:25 |
|
I wonder if the new Intel 28 core might just be outright slower than the TR2 32 core by running into thermal and power limits much sooner? I mean, the 7980XE was already pushing it, and XFR2+PBO is way better than Turbo Boost. The (guessing) 8980XE might have a lower or simply comparable frequency cap in practical use.
|
# ? Jun 7, 2018 19:31 |
|
NUMA awareness has been a thing for a long time, and if a specific program does have problems with memory latency from the architecture, it's completely within the developer's ability to optimize around, since the layout is exposed to the application.
|
# ? Jun 7, 2018 19:32 |
|
Cygni posted:I dunno, i feel like how AMD manages the core allocation to minimize those hits is gonna be huge for lots of programs. Especially thinking long term as MOAR CORES is clearly where the mainstream is headed too.
|
# ? Jun 7, 2018 19:55 |
|
BangersInMyKnickers posted:NUMA awareness has been a thing for a long time and if the specific program does have problems due to memory latency from the architecture it will be completely within the developers ability to optimize around, as the layout will be exposed to the application. How does this type of thing work? Is information about the architecture exposed to applications which can request low latency cores? Is the information only exposed to the OS which doles out cores per application?
|
# ? Jun 7, 2018 21:05 |
|
In an OS written to be NUMA-aware applications can call an API to request a batch of threads to be located in the same node, do a memory allocation on a specific node, figure out which nodes have how much memory adjacent, etc. At least, that's how it seems to work in Windows according to this page I just found from Microsoft: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363804%28v=vs.85%29.aspx As an end user, you can also set core affinity for processes yourself if you really want to fine-tune it that much. I'm not sure if there's a way to manually manage memory allocation like that though. Eletriarnation fucked around with this message at 21:17 on Jun 7, 2018 |
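As a concrete illustration of the end-user affinity knob mentioned above, here's a minimal sketch. It uses Python's Linux-only `os.sched_setaffinity` as a stand-in for the Windows equivalents (`SetProcessAffinityMask` and friends); it controls CPU placement only, not NUMA memory allocation:

```python
import os

# Pin the current process to a single CPU, then restore the original mask.
# (os.sched_setaffinity is Linux-only; on Windows the analogous call is
# SetProcessAffinityMask. Neither controls where memory gets allocated.)
original = os.sched_getaffinity(0)  # 0 = the calling process
first_cpu = min(original)

os.sched_setaffinity(0, {first_cpu})  # restrict scheduling to one core
assert os.sched_getaffinity(0) == {first_cpu}

os.sched_setaffinity(0, original)  # put the original mask back
print(f"pinned to CPU {first_cpu}, then restored {len(original)} CPUs")
```

A NUMA-aware application would pair this kind of pinning with node-local allocations (e.g. `VirtualAllocExNuma` on Windows, per the MSDN page linked above) so the threads and their memory land on the same node.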
# ? Jun 7, 2018 21:11 |
|
Not memory allocations, no. It's all virtual memory, abstracted and allocated by the OS. You just need to be aware of how the underlying cores map and configure your application's core affinity accordingly. I suspect a lot of the work the AMD drivers are doing is optimizing process core assignments for their architecture when the application devs won't do the work.
|
# ? Jun 7, 2018 21:36 |
|
I wonder if (when) we’ll end up with NUMA management drivers that do GPU-style app detection or peephole optimization to automatically “optimize” how things are allocated across the package.
|
# ? Jun 7, 2018 21:56 |
|
I wish video games didn't depend on an individual core having high clock rates more than having a poo poo ton of cores. Just so I could have an excuse to buy and install something called "THREADRIPPER 2000" and not have worse performance than a $200 cpu. Woe to the casual user.
|
# ? Jun 7, 2018 22:01 |
|
Well Ryzen 2 is gonna have 12-16 cores which is a good balance of moar corez and not having to deal with the NUMA stuff or high cost of TR.
|
# ? Jun 7, 2018 22:43 |
|
LRADIKAL posted:How does this type of thing work? Is information about the architecture exposed to applications which can request low latency cores? Is the information only exposed to the OS which doles out cores per application? In Windows you can ask the scheduler to schedule your threads on a NUMA basis.
|
# ? Jun 7, 2018 22:50 |
|
Wait, people thought the sub-ambient cooled 28-core Intel chip WASN'T overclocked? What is the Intel thread even like right now?
|
# ? Jun 9, 2018 04:09 |
|
SwissArmyDruid posted:Wait, people thought the sub-ambient cooled 28-core Intel chip WASN'T overclocked? What is the Intel thread even like right now? Pretty sure the only person who thought it wasn't overclocked was Ian Cutress of AT. Even Ryan Smith thought he was full of poo poo. quote:Personally, I feel this new processor is not a higher binned Platinum 8180. Going up from 2.8 GHz base / 3.5 GHz turbo to 5.0 GHz all-core frequency is a big step, assuming the 5.0 GHz value was not an overclock. I would fully expect that this is the point where Intel starts introducing EMIB to CPUs. (ed: FWIW, I disagree with Ian; my money is on a heavily binned 28-core XCC processor made on 14++. We've seen that Intel can do 5GHz on that process with the 8086K) I called it straight off and generally people think the whole "28-phase motherboard with sub-ambient cooling" thing is pretty hilarious(ly transparent). BangersInMyKnickers posted:Thanks, Intel, for making an entire computer around the Bitchin' Fast 3D 2000 joke Paul MaudDib fucked around with this message at 04:55 on Jun 9, 2018 |
# ? Jun 9, 2018 04:50 |
|
SwissArmyDruid posted:Wait, people thought the sub-ambient cooled 28-core Intel chip WASN'T overclocked? What is the Intel thread even like right now? I got massively downvoted on r/intel for pointing out intel processors could hit 5 in 2011. Khorne fucked around with this message at 05:00 on Jun 9, 2018
# ? Jun 9, 2018 04:55 |
|
Khorne posted:I got massively downvoted on r/intel for pointing out intel processors could hit 5 in 2011. Nobody actually likes intel enough to fanboy about them. Even the most diehard Intel fans acknowledge they're a poo poo company that happens to produce the best product around if you have the money to spend and disregard their general approach to business and customer relations. r/intel is just overflow for bored r/amd posters. If you look at the subreddit subscriber numbers compared to actual marketshare, AMD users are about 25x as likely to sub on a per-user basis as Intel users.
|
# ? Jun 9, 2018 04:59 |
|
Paul MaudDib posted:Nobody actually likes intel enough to fanboy about them. r/intel is just overflow for bored r/amd posters. In 2019 AMD should be the leader in actual good processors, probably, hopefully. And I'll be buying AMD then unless intel pulls off some real magic, but I don't see how that will be possible until the early '20s.
|
# ? Jun 9, 2018 05:01 |
|
Khorne posted:I like things that are good, personally. As lovely as intel has been for the past 5-6 years or so, AMD couldn't even compete and buying an i7 during sandy bridge/ivy bridge was one of the greatest value buys in the history of buying computer stuff. Yeah, same, the golden number is 5 GHz, if they can deliver that with Zen+ level latency then Intel won't have much argument left on the gaming front anymore. I'm holding off until at least 8C Coffee Lake but I'm thinking seriously about waiting for Zen2 and maybe third-gen Threadripper to replace my 5820K. Speaking of which, am I incorrect that the plan for 64C Epyc pretty much blows the lid on a new 8C CCX? 4 dies at 64C = 16C per die... meaning a 2x8C CCX or 4x4C CCX configuration. I said right from the start that 8C makes way more sense than 6C. 8C is a 2x2x2 topology, 6C is... nothing. At small scales (i.e. intra-CCX) the hypercube absolutely does make sense as an interconnect topology... it's what AMD is using right now. Seems like people didn't really have a basis for 6C other than "it's like, 50% bigger than 4C, that sounds good, and Ryzen means AMD needs to do everything small, right!?". If AMD is going to do a small CCX it's going to be 4C again, big CCX it's going to be 8C. (and the 4x4C CCX topology is not that insane when you consider that AMD is trialling whole dies that are not even directly hooked to memory with 2nd-gen TR) Paul MaudDib fucked around with this message at 05:21 on Jun 9, 2018 |
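The "8C is a 2x2x2 topology, 6C is nothing" point is easy to check with plain graph counting — in an n-node hypercube each node links to log2(n) neighbors, which only works for power-of-two core counts. A sketch (this is just topology math, not AMD's actual fabric layout):

```python
from itertools import combinations

def hypercube_links(n):
    """Edges of an n-node hypercube: node IDs that differ in exactly one bit."""
    assert n & (n - 1) == 0 and n > 0, "hypercube needs a power-of-two node count"
    return [(a, b) for a, b in combinations(range(n), 2)
            if bin(a ^ b).count("1") == 1]

def hops(a, b):
    """Shortest path in a hypercube = number of differing ID bits."""
    return bin(a ^ b).count("1")

n = 8  # a 2x2x2 topology, as an 8-core CCX would be
links = hypercube_links(n)
worst = max(hops(a, b) for a, b in combinations(range(n), 2))
full_mesh = n * (n - 1) // 2

print(f"{n}-node hypercube: {len(links)} links, worst case {worst} hops "
      f"(a full mesh would need {full_mesh} links)")
```

So an 8-core hypercube gets by with 12 links and a 3-hop worst case versus 28 links for a full mesh, while a 6-core count simply has no hypercube arrangement at all — consistent with the argument above.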
# ? Jun 9, 2018 05:11 |
|
4 CCX per die is significantly more likely than 8 core CCX, as long as they can get the interconnect speed up.
|
# ? Jun 9, 2018 07:35 |
|
Arzachel posted:4 CCX per die is significantly more likely than 8 core CCX, as long as they can get the interconnect speed up. I think the 2200G/2400G are suggestive that the 4x4CCX configuration is significantly less likely to happen. Going cross-CCX eats lanes and AMD seems to be allergic to that for some reason. They literally could have gone with an 8C configuration on the 2400G, and did not, nor did they even expose the full 16-lane PEG capability of the die. Intra-CCX seems to be more scalable for now. Paul MaudDib fucked around with this message at 03:56 on Jun 10, 2018 |
# ? Jun 9, 2018 14:13 |
|
Arzachel posted:4 CCX per die is significantly more likely than 8 core CCX, as long as they can get the interconnect speed up. At 7 nm an 8 core CCX is like 30% larger than the current 4 core CCX at 14 nm, and would take about as much power. It's totally possible to do that without needing a higher TDP, or really changing the packaging much. Also moving the interconnect from PCIe-3 equivalent to PCI-e 4 would basically double the intra-CCX bandwidth, improving things substantially.
|
# ? Jun 9, 2018 17:02 |
|
Does anyone have a 2200G or 2400G and is overclocking it? I think I have my 2200G stable at 3.9 GHz with 1.3125v but can't hit 4 GHz even with bumping the voltage up to 1.325v. Or rather, I can hit it but it crashes after a few minutes at full load. Temperature showed 73C at the end. I don't have the best cooling so I might have to stay content with 3.9 GHz but would like to know what numbers others are getting. I don't think there's too much of a gain from 3.7 GHz to 3.9 GHz for my video encoding needs but it's for the 8=====D e-cred.
|
# ? Jun 13, 2018 23:04 |
|
I don't have one of those but did you try bumping vSOC up to 1.1 or so? You'd think only vCore would matter but not necessarily apparently. That generally helped lots with stability on other Ryzen OC'ing.
|
# ? Jun 13, 2018 23:59 |
|
.
sincx fucked around with this message at 05:50 on Mar 23, 2021 |
# ? Jun 14, 2018 00:28 |
|
Watch out for thermal throttling too, running the CPU too hot could indirectly make the GPU clock lower.
|
# ? Jun 14, 2018 00:52 |
|
I tried 3.925 GHz at 1.3v. I don't think I'm 100% stable yet. Prime95 kept running for a while, until the whole system unceremoniously hung. I was watching the WHEA hardware errors accumulate in HWiNFO, whereas Prime95 displayed no errors until the whole system crashed. The max CPU temp was 84C. I've made a graph that shows the Vcore, temperatures and so on. You can see that I tried with a higher Vcore (1.3125v) at the beginning, then as soon as I lowered it past 1.3v in Ryzen Master, the WHEA errors started occurring. The errors seemed to stop when I bumped the Vcore back up to 1.3v but then the system crashed shortly thereafter. I've attached a case fan to the intake now, to see if that makes a difference in overall temps. EDIT: I'm trying Prime95 with the CPU completely stock, to set a baseline reference for temperatures and Vcore. It seems that I was undervolting quite a bit, so it's no surprise that it wasn't stable, since HWiNFO reports 1.325v at 3.7 GHz (stock turbo speed). The extra fan already seems to be helping the hard drives, but CPU temps are up to 82C after 10 minutes of Prime95, with ambient around 24-25C. I might need a better CPU cooler since I'll definitely need more than 1.3v to get to 4 GHz.
|
# ? Jun 14, 2018 01:15 |
|
If you're that determined to get to 4 GHz, save your money and just de-lid and repaste the IHS like some have been doing for Intel chips. Your current HSF is probably good enough to make it work, and the volts you've needed so far really aren't that bad either, so it'll probably work. More info here: https://www.gamersnexus.net/guides/3237-amd-r3-2200g-delid-temperatures-and-liquid-metal If de-lidding the CPU scares you, then you can give better cooling a shot I guess, but you'll probably need to spend more than $20 to get cooling good enough to make it work at 4 GHz.
|
# ? Jun 14, 2018 03:48 |
|
AMD launching a 4/8 Ryzen 5 2500X and a 4/4 Ryzen 3 2300X soon with 4.0 GHz turbos. https://wccftech.com/amd-ryzen-5-2500x-ryzen-3-2300x-specs-performance-price-revealed/ Could be some good budget choices, but I'm sorta of the opinion at this point that if you are gonna drop $200 on RAM just to turn the thing on, you might as well spend the extra ~$50 and get a 2600 with 6/12 that will have much better longevity.
|
# ? Jun 15, 2018 23:17 |
|
Cygni posted:AMD launching a 4/8 Ryzen 5 2500X and a 4/4 Ryzen 3 2300X soon with 4.0ghz turbos. The 2300X is such a "why" part when an Athlon based off Raven Ridge performs just as well. Further, the Raven Ridge 12nm refresh is coming soon, so... why? It's not even going to have a clock speed advantage, or a power advantage. The 2500X at least makes some sense with a massive 16MB L3 cache, but as you point out a 2600/2600X is not much more expensive in the grand scheme of building a computer, is way better in productivity and actually has a performance advantage in games, while a 2500X will struggle to stand out from a 2400G, much as the 1500X did with the 1400. Further, by AMD's own admission 7nm Zen 2 is coming 1H 2019. They're Osborning themselves a bit here, but 7nm promises a lot, so why over-invest in the AM4 platform right now? They'd make sense if the 2500X was $130 and the 2300X was $80, as good entry-level processors.
|
# ? Jun 16, 2018 00:36 |
|
I'm thinking of getting a cheaper Ryzen now and upgrading to Ryzen 2 next year, selling the old one, but then I think again and there's still no reason not to just get a more expensive version and sell that instead. Weird.
|
# ? Jun 16, 2018 02:55 |
|
|
Wrong thread
|
# ? Jun 16, 2018 03:00 |