|
At the same time, I have been wondering if AMD's quietly shuffling Threadripper 3 off the stage is an indication of binning on the new process, or a shift in actual demand forcing AMD to re-allocate dies to EPYC that would previously have gone to Threadripper.
|
# ? May 8, 2019 14:57 |
|
|
SwissArmyDruid posted:At the same time, I have been wondering if AMD's quietly shuffling Threadripper 3 off the stage is an indication of binning on the new process, or a shift in actual demand forcing AMD to re-allocate dies to EPYC that would previously have gone to Threadripper. I don't know how AMD works internally so that's just speculation, but to me that's the simplest answer. TR isn't a huge priority for them, and it's not like they have any competition in that space. If you need what TR offers and a 16c zen2 isn't enough you're still going to buy TR. Even if it doesn't get refreshed. Khorne fucked around with this message at 15:20 on May 8, 2019 |
# ? May 8, 2019 15:17 |
|
Just gets hard to justify the 2950X unless you really need quad channel. But yeah, EPYC2 is a way higher priority for them; this is a really good opportunity to deliver on a product Intel won't have a real answer for until like...2021 I think? We talk about AMD needing an overly compelling product to really shift market share appreciably, and EPYC2 is that product when the best Intel can throw at it is Eta Carinae as a CPU.
|
# ? May 8, 2019 18:21 |
|
A little more info on the 16C AM4 Zen2 part. Base/boost clocks of 3.3/4.2GHz, which seem mediocre at first glance, but apparently it has a TDP of close to 100W and is a "very early" engineering sample, so final clocks, or overclocks, could be a fair amount higher. Especially if they bump allowed TDPs to 120W. Others in the twitter thread are saying the 12C Zen2 ES has much higher clocks FWIW. Zen (1st version) ES part base clocks were about 800MHz lower than final release base clocks, for reference. Different design, process, etc. and history won't necessarily repeat perfectly here, but it still illustrates that final clocks can end up quite a bit higher than ES parts'. edit: seeing some hints elsewhere that 5GHz or more is probably not going to be doable without top notch cooling, TSMC's 7nm won't clock as well as Intel's 14nm++++++ still. PC LOAD LETTER fucked around with this message at 15:59 on May 9, 2019 |
# ? May 9, 2019 15:34 |
|
I'm hoping this means there will be an 8C/16T "3600X" for ~$250.
|
# ? May 9, 2019 18:06 |
|
I just want to know clock-per-clock IPC increase. Zen+ is like what, average 3-5% behind Intel?
|
# ? May 9, 2019 20:37 |
|
Alpha Mayo posted:I just want to know clock-per-clock IPC increase. Zen+ is like what, average 3-5% behind Intel? Zen2 closes the gap on all of the gigantic deficits. Where AMD is 15%+ behind with zen+, it should now be -3% to 3% behind with zen2. The optimizations they did should also close the last percent or two on the operations where they're close, although that's a bit unclear because the IO die may introduce additional latency. The latency and clock speeds will likely keep Intel in a slight lead in games. I'd expect it will still be behind in gaming a bit, but in productivity, if it can actually hit 4.5+ all core, then it should outperform the 9900k at equal core counts. AMD has a much stronger SMT implementation. I hope we get another benchmark leak soon. There are a few credible sources saying zen2 turbos higher than 4.5. The best clocks we've seen out of "probably legitimate" zen2 leaks so far come from one of the parties responsible for zen1 leaks, who reported 4.7 single core, with an unstable 4.8 possible on a 4c ES sample. Khorne fucked around with this message at 23:10 on May 9, 2019 |
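The clock-for-clock comparison being made here can be sketched by normalizing a single-threaded benchmark score by sustained clock; all the scores and clocks below are made-up placeholders, not leaked numbers:

```python
# Rough "IPC" (performance per GHz) comparison from benchmark scores.
# Every number below is a hypothetical placeholder, not a real result.

def perf_per_ghz(score: float, clock_ghz: float) -> float:
    """Normalize a single-threaded benchmark score by sustained clock."""
    return score / clock_ghz

def ipc_gap(score_a: float, clock_a: float,
            score_b: float, clock_b: float) -> float:
    """Relative clock-for-clock lead of chip A over chip B, as a fraction."""
    a = perf_per_ghz(score_a, clock_a)
    b = perf_per_ghz(score_b, clock_b)
    return (a - b) / b

# Hypothetical: chip A scores 180 at 4.0 GHz, chip B scores 200 at 4.7 GHz.
gap = ipc_gap(180, 4.0, 200, 4.7)
print(f"{gap:+.1%}")  # positive means A is ahead clock-for-clock
```

The point of dividing out clock speed is that a chip can trail in absolute benchmarks while leading (or tying) per clock, which is exactly the distinction the "-3% to 3%" estimate is drawing.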
# ? May 9, 2019 22:00 |
|
spasticColon posted:I'm hoping this means there will be a 8C/16T "3600X" for ~$250. One can dream, but my guess would be that price points per core will not move and that the 12C / 16C parts will be more expensive than current 2700X parts were at launch. Still better performance per dollar because of higher clocks.
|
# ? May 9, 2019 22:19 |
|
I feel like we'll see lower prices per core, but won't see the 12 core quite as cheap as the 8 core (I just can't see them bumping the 3700X's price up 50% over the 2700X), unless rumors are true and they're hoarding the pristine 16 cores for dunking on Intel's upcoming 10 core. something something ayymd comet lake impact crater
|
# ? May 9, 2019 22:36 |
|
God COMPUTEX CANNOT GET HERE FAST ENOUGH.
|
# ? May 9, 2019 23:07 |
|
SwissArmyDruid posted:God COMPUTEX CANNOT GET HERE FAST ENOUGH. Yeah, I've seen like half a dozen Ryzen 3000 leaks now and I don't feel like I have any more certainty about final specs beyond core counts.
|
# ? May 9, 2019 23:35 |
|
X570 is def coming at Computex at least, so there's at least that. The board partners have more or less leaked all of that at this point.
|
# ? May 9, 2019 23:46 |
|
SwissArmyDruid posted:God COMPUTEX CANNOT GET HERE FAST ENOUGH. Mentally I've decided to expect 5% IPC and 10% clocks to go with moar coarez and that's all I need...
|
# ? May 10, 2019 00:01 |
|
I just want a new PC, dammit.
Lambert fucked around with this message at 00:06 on May 10, 2019 |
# ? May 10, 2019 00:03 |
|
I need a new VM-running machine, damnit.
|
# ? May 10, 2019 00:42 |
|
I don't need any of this but I want it too.
|
# ? May 10, 2019 02:18 |
|
SwissArmyDruid posted:I need a new VM-running machine, damnit.
|
# ? May 10, 2019 13:52 |
|
My fear for Zen 2 is that in certain applications, such as games, the latency introduced by moving the memory controller and I/O off-chip will negate the IPC and clock increases.
|
# ? May 10, 2019 15:06 |
|
3peat posted:My fear for Zen 2 is that in certain applications such as games the latency introduced by moving the memory controller and I/O off chip will negate the IPC and clock increases Shun the nonbeliever!
|
# ? May 10, 2019 15:46 |
|
Deuce posted:I don't need any of this but I want it too. kind of here but also kind of need to upgrade, i have a 4690 non-K and it's showing its age when rendering videos.
|
# ? May 10, 2019 15:51 |
|
You think you need an upgrade with Haswell, and here I'm using an X5660 from 2010 with an X58 motherboard from 2008 that doesn't have UEFI or SATA3. I'm just hoping for something about as fast as a 9900K that uses less power. Being able to upgrade to 16C down the line would be nice too.
|
# ? May 10, 2019 18:32 |
|
Is the "edge" Intel will have in gaming just a few percentage points? Like is it something a person who isn't a benchmark monkey even gonna notice? I'm on Haswell also and would like to upgrade and get on the Lisa Su choo choo into Thread Town.
|
# ? May 10, 2019 19:35 |
|
Fabulousity posted:Is the "edge" Intel will have in gaming just a few percentage points? Like is it something a person who isn't a benchmark monkey even gonna notice? Clock for clock, the deficit is actually very little today. With Zen 2, who knows. Hopefully it will deliver. Matching clock speeds 'alone' would be enough to make a serious dent in Intel's marketshare, honestly.
|
# ? May 10, 2019 19:44 |
|
SwissArmyDruid posted:I need a new VM-running machine, damnit. I need more cores and those AVX speedups for crunching more science.
|
# ? May 10, 2019 19:45 |
|
I suspect whether you're happy with any processor comes down to what you want to play and what other hardware bottlenecks you might "enjoy". I have a 2600X paired with a GTX 1060 6GB, which is about the most average 1080p system you can own. It runs poor-performing Ubisoft games just fine.
|
# ? May 10, 2019 19:50 |
|
3peat posted:My fear for Zen 2 is that in certain applications such as games the latency introduced by moving the memory controller and I/O off chip will negate the IPC and clock increases But on the flipside, a uniform amount of latency to main memory could even things out and keep them from stalling or having those odd race conditions we saw with the 4 chiplet threadrippers.
|
# ? May 10, 2019 20:22 |
|
wasn't most of that just the client-windows scheduler sucking rear end at scheduling on NUMA processors?
|
# ? May 10, 2019 20:25 |
|
Fabulousity posted:Is the "edge" Intel will have in gaming just a few percentage points? Like is it something a person who isn't a benchmark monkey even gonna notice? Comparing zen+ to current intel, it only really matters for high refresh rate gaming. The 15% clock speed advantage is the biggest gap. There are one or two AAA games that choke on the memory latency and see an ~11%-12% ipc gap. Most other games fall in the 3%-8% ipc gap range. It gets complicated because lots of people are limited by their GPU due to the graphics settings they use. On the flip side, many esports titles can hit 165 fps on zen+ and some esports titles 240+. In those cases, zen+ already matches the 9900k in real world performance. Zen2 should remove most of the ipc gap, but we don't know about memory latency or actual clock speeds. Paul MaudDib posted:wasn't most of that just the client-windows scheduler sucking rear end at scheduling on NUMA processors? Khorne fucked around with this message at 21:11 on May 10, 2019 |
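As a back-of-the-envelope check on the figures in this post, clock and IPC deficits compound multiplicatively rather than adding; the percentages below are just the rough numbers quoted above, not measurements:

```python
# Combine a clock-speed gap and an IPC gap into one overall
# single-thread gap. Single-thread performance = IPC * clock,
# so fractional deficits compound multiplicatively.

def combined_gap(clock_gap: float, ipc_gap: float) -> float:
    """Overall single-thread deficit given fractional clock and IPC deficits."""
    return 1.0 - (1.0 - clock_gap) * (1.0 - ipc_gap)

# Rough figures from the post above: ~15% clock deficit, 3%-8% IPC deficit.
low = combined_gap(0.15, 0.03)
high = combined_gap(0.15, 0.08)
print(f"{low:.1%} to {high:.1%}")
```

This is why the clock gap dominates: even at the low end of the quoted IPC range, the combined deficit lands well above either number alone.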
# ? May 10, 2019 20:52 |
|
There was a Livestream from Level1Techs a few weeks ago (which had a bunch of audio issues and was quickly unlisted) that had better performance running W10 under a Linux hypervisor than native on a 2990WX, so virtualizing out from Microsoft's fuckery is a tempting use case.
|
# ? May 10, 2019 20:57 |
|
It is quite fun to watch Microsoft try so hard to make their WSL2 as attractive as possible so their OS doesn’t end up in a KVM VM. I still hope the latter wins somehow.
|
# ? May 10, 2019 21:33 |
|
eames posted:It is quite fun to watch Microsoft try so hard to make their WSL2 as attractive as possible so their OS doesn't end up in a KVM VM. I still hope the latter wins somehow. That's my plan when I upgrade.. my old 2500K doesn't have VT-d and that leaves me in Windows most of the time. Been seeing a ton of deals on older/current Ryzens lately. Hopefully that is a good sign they are clearing out as much old inventory as they can because Zen2 is just such a big leap forward.
|
# ? May 10, 2019 22:08 |
|
Those system calls aren't a two way street, are they? Microsoft wouldn't just hand Codeweavers the keys to cross compatibility like that, would they?
|
# ? May 10, 2019 22:20 |
|
Fabulousity posted:Is the "edge" Intel will have in gaming just a few percentage points? Like is it something a person who isn't a benchmark monkey even gonna notice? Most people display the gap by benchmarking at 720p or something. To actually have the CPU bottlenecking everything else, you ought to have a $700+ graphics card to have more GPU power than you really need for whatever you run. The additional price of the 9900K isn't that expensive if you can afford a 2080ti.
|
# ? May 11, 2019 00:34 |
|
NewFatMike posted:There was a Livestream from Level1Techs a few weeks ago (which had a bunch of audio issues and was quickly unlisted) that had better performance running W10 under a Linux hypervisor than native on a 2990WX, so virtualizing out from Microsoft's fuckery is a tempting use case. Linux thread: non-SCO infringing version: virtualizing out Microsoft's fuckery
|
# ? May 11, 2019 01:26 |
|
Fabulousity posted:Is the "edge" Intel will have in gaming just a few percentage points? Like is it something a person who isn't a benchmark monkey even gonna notice? X% better or worse at gaming is a fallacious way of looking at it. Performance is a function of CPU, GPU, and resolution; changing any variable will result in different performance. Intel can get a nice lead at lower resolutions with a beefy GPU, but that doesn't translate to anything else. With a 60Hz monitor there's no difference, as both are readily capable of pushing that, and if your GPU isn't capable of pushing 90+ fps in whatever resolution you set it will never matter either; you've got to be targeting >90-100 fps and have the other hardware to back that up for there to be any difference.
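The interaction being described can be sketched as a simple bottleneck model, where delivered framerate is capped by whichever of the CPU or GPU is slower; all the figures below are illustrative, not benchmark numbers:

```python
# Toy bottleneck model: delivered fps is the minimum of what the CPU can
# feed the GPU and what the GPU can render at a given resolution.
# All figures are invented illustrations, not real benchmark results.

def delivered_fps(cpu_fps_cap: float, gpu_fps_at_res: float) -> float:
    """Framerate is limited by the slower of the two components."""
    return min(cpu_fps_cap, gpu_fps_at_res)

# Hypothetical GPU throughput by resolution for some game.
gpu_caps = {"1080p": 140.0, "1440p": 90.0, "4k": 45.0}

cpu_a, cpu_b = 120.0, 135.0  # hypothetical CPU framerate caps

for res, gpu in gpu_caps.items():
    a = delivered_fps(cpu_a, gpu)
    b = delivered_fps(cpu_b, gpu)
    print(f"{res}: {a:.0f} vs {b:.0f} fps ({(b - a) / a:+.0%})")
```

In this toy model the faster CPU only shows a lead at 1080p; at 1440p and 4K both sit behind the same GPU cap, which is the point about the gap not translating to other scenarios.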
|
# ? May 11, 2019 01:52 |
|
It depends, because in the modern era people tend to go through at least 2-3 GPUs on one CPU. Intel's performance lead current gen has kept them a solid value proposition, since the hardware will last much longer before becoming frequently CPU bound, and the things that are CPU bound already are faster. Zen 2 will probably close the gap enough that the value proposition for Intel becomes pretty garbage, but we can't really be sure of anything until it's actually out.
|
# ? May 11, 2019 02:02 |
K8.0 posted:It depends, because in the modern era people tend to go through at least 2-3 GPUs on one CPU. Intel's performance lead current gen has kept them a solid value proposition, since the hardware will last much longer before becoming frequently CPU bound, and the things that are CPU bound already are faster. Zen 2 will probably close the gap enough that the value proposition for Intel becomes pretty garbage, but we can't really be sure of anything until it's actually out. Yup! I am still on a 3570? or something and on my third graphics card, a 1070 that I'll be bringing to my zen2 build.
|
|
# ? May 11, 2019 04:51 |
|
ItBreathes posted:X% better or worse at gaming is a fallacious way of looking at it. Performance is a function of CPU, GPU, and resolution, changing any variable will result in different performance. Intel can get a nice lead at lower resolutions with a beefy GPU, but that doesn't translate to anything else. With a 60hz monitor there's no difference as both a readily capable of pushing that, and if your GPU isn't capable of pushing 90+ fps in whatever resolution you set it will never matter either, you've got to be targeting >90-100 fps and have the other hardware to back that up for there to be any difference. The CPU often improves 1% low frame times, which is pretty important for keeping above that 60 fps 100% of the time in a competitive game.
|
# ? May 11, 2019 05:11 |
|
hobbesmaster posted:CPU often improves 1% frame time which is pretty important for keeping above that 60 fps 100% of the time in a competitive game. And a 2600 is perfectly capable of holding 60fps at 1%. My point was, though, that the statement "$CPU is X% better at gaming", as measured by a 720p low test with a 1080, is meaningless. Being behind at 720p doesn't translate to other scenarios; be aware of your use case and measure performance accordingly.
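The "1% low" figure being discussed is commonly computed by sorting per-frame render times and averaging the slowest 1% of frames (reviewers differ on whether it means that or the 99th-percentile frame time; this sketch uses the former, and the frame-time list is invented for illustration):

```python
# Compute average fps and "1% low" fps from per-frame render times (ms).
# The sample frame times below are invented for illustration.

def one_percent_low(frame_times_ms: list) -> float:
    """Average fps over the slowest 1% of frames."""
    worst_first = sorted(frame_times_ms, reverse=True)
    n = max(1, len(worst_first) // 100)  # at least one frame
    slow = worst_first[:n]
    return 1000.0 / (sum(slow) / len(slow))

def average_fps(frame_times_ms: list) -> float:
    """Overall average framerate across the whole capture."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

# 99 smooth frames at 10 ms plus one 40 ms stutter frame.
frames = [10.0] * 99 + [40.0]
print(f"avg: {average_fps(frames):.0f} fps, "
      f"1% low: {one_percent_low(frames):.0f} fps")
```

A single stutter frame barely moves the average but craters the 1% low, which is why the metric matters for "holding 60 fps 100% of the time" rather than just averaging 60.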
|
# ? May 11, 2019 05:46 |
|
|
ItBreathes posted:And a 2600 is perfectly capable of holding 60fps at 1%. It actually does translate to other scenarios. Ryzen is even farther behind in productivity since that's fully CPU-bottlenecked, there's no GPU bottleneck to hide behind like 1440p and 4K. The 9900K is just straight-up 25% faster across the board (not even in Adobe, just in general). Also, 720p ultra is more analogous to, say, 1080p medium gaming as far as graphics intensity is concerned. 1080p ultra is analogous to, say, 1440p medium. Running everything at max settings isn't exactly relevant to real-world users either. So it does matter for predicting those lower settings used by the vast majority of gamers. The analytical goal of 720p/1080p tests is to determine what framerate a CPU can support in a game, independent of the graphics card. It's a test of a component, not a test of a system. As such, yeah you test with lower resolutions and the fastest graphics card, to see what the CPU can do. If your CPU can only do 120 fps, then no matter what GPU you ever throw at it, or what settings you drop, you will never get a framerate higher than 120 fps. That's an important fact to know. Of course there are other components in your system, and this is not a guarantee that the rest of the system can actually utilize that performance, but yeah, 720p performance actually is indicative of how much faster a CPU is than another at a given game. You can think of it as being the same thing as Cinebench if you like. Cinebench is a measure of a perfectly parallel productivity task, most real workloads aren't actually like that either. It's a "synthetic" workload that is created from a real-world application to more perfectly measure some aspect of a CPU. Real-world, you will not see performance as good as Cinebench implies, there are other bottlenecks in your system. (but looking at a batch of games does give you a wider picture than just one task like Cinebench...) 
Paul MaudDib fucked around with this message at 06:18 on May 11, 2019 |
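The Cinebench point at the end, a perfectly parallel synthetic versus real workloads with serial bottlenecks, can be illustrated with Amdahl's law; the parallel fractions below are arbitrary examples, not measurements of any real application:

```python
# Amdahl's law: speedup from N cores when only a fraction p of the work
# parallelizes. Cinebench-style rendering has p near 1.0; most real
# applications do not. The p values below are arbitrary illustrations.

def amdahl_speedup(p: float, n_cores: int) -> float:
    """Ideal speedup on n_cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n_cores)

for p in (1.0, 0.95, 0.80):
    print(f"p={p:.2f}: 8 cores -> {amdahl_speedup(p, 8):.2f}x, "
          f"16 cores -> {amdahl_speedup(p, 16):.2f}x")
```

Only the p=1.0 row scales like Cinebench; even a modest serial fraction means real workloads land well short of what the synthetic implies, which is the same "component test vs. system test" caveat as the 720p benchmarks.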
# ? May 11, 2019 06:12 |