|
EmpyreanFlux posted:Rome wasn't built on one die.
|
# ? Nov 7, 2018 19:59 |
|
|
Also doesn't this mean GloFo and AMD don't need to heavily renegotiate the wafer supply agreement? GloFo can now crank out I/O dies from here until the end of time and make billions doing so on a really safe die design. Also indicates AMD may do like a 12nm EUV update at some point? No reason for GloFo not to use EUV on "12nm", right?
|
# ? Nov 7, 2018 20:41 |
|
EmpyreanFlux posted:Also doesn't this mean GloFo and AMD don't need to heavily renegotiate the wafer supply agreement? GloFo can now crank out I/O dies from here until the end of time and make billions doing so on a really safe die design. Also indicates AMD may do like a 12nm EUV update at some point? No reason for GloFo not to use EUV on "12nm" right? I mean, the WSA has been amended TO DEATH already. The amendment they did in the wake of the news of GloFo dropping out of 7nm will have been their 7th amendment, and AMD had already ponied up during the 6th back in 2016 to get most of that albatross off their neck and open up the possibility of getting the TSMC 7nm chiplets they're using NOW. Furthermore, the WSA with GloFo ends next year, so it's not like they have much longer to go. SwissArmyDruid fucked around with this message at 20:56 on Nov 7, 2018 |
# ? Nov 7, 2018 20:50 |
|
Yeah, my guess is that GloFo realized they could still make back on AMD moving to 7nm if they just made the IO chips and negotiated on that a few months ago. Re: process shrink to 12nm for these chips: the reason the IO chips are 14nm is because they don't scale well with process shrinks. If they moved up to 12nm, my guess is it would be for mobile IO chips to squeeze every last mW they can for battery life. I'm so ready for Threadripper 3. I wonder if I'll be able to virtualize both a gaming PC and a stream transcoding machine simultaneously.
|
# ? Nov 7, 2018 21:22 |
|
PC LOAD LETTER posted:Wow didn't know that. Do you have a link to more of a story on that or was it something that was just mentioned off hand by someone in the know? I hadn't thought about it but for some reason my brain has internalized it as a fact, when the source was just posters on the beyond3d and anandtech forums talking about it back when it was found out that Kaveri supported GDDR5, so for all I know it could be bullshit. Sorry about that NewFatMike posted:Yeah, my guess is that GloFo realized they could still make back on AMD moving to 7nm if they just made the IO chips and negotiated on that a few months ago. People are speculating that it's 14nm specifically because GloFo's 14HP process supports eDRAM.
|
# ? Nov 7, 2018 21:50 |
|
This is the first time in over a decade I've been legitimately pumped for a CPU from AMD and putting off my new build to see exactly what Zen 2 will be capable of.
|
# ? Nov 7, 2018 22:10 |
|
NewFatMike posted:Yeah, my guess is that GloFo realized they could still make back on AMD moving to 7nm if they just made the IO chips and negotiated on that a few months ago. That's where my thoughts went, EUV on 12nm would be for the much, much better voltage curve and possibly improved clock performance NewFatMike posted:I'm so ready for Threadripper 3. I wonder if I'll be able to virtualize both a gaming PC and a stream transcoding machine simultaneously. Get a 64C threadripper, play games, stream, render and emulate an Intel PC simultaneously. Arzachel posted:People are speculating that it's 14nm specifically because GloFo's 14HP process supports eDRAM. Does it? *googles* This is what I got - https://fuse.wikichip.org/news/956/globalfoundries-14hp-process-a-marriage-of-two-technologies/4/ TR claims 256MB L3, among 8 dies that'd only be 128MB L3 accounted for, so the I/O die might have an additional 128MB L3 cache. The eDRAM on 14HP has comparable access time to L2 (when used in place of L2), so I can see how this overcomes latency issues. I don't know if 128MB would be enough for an APU, but if AMD only has the one I/O design, I don't see the point of disabling the L3 on higher end APUs. gently caress there might be a specialist I/O die that just maximizes eDRAM capacity, and this is the actual APU bandwidth issue solution going forward.
|
# ? Nov 7, 2018 22:59 |
|
PC LOAD LETTER posted:To me this is as big or a bigger of a deal: There is a lot of low hanging fruit to optimize Zen, process shrink or no. The current front end chokes its cores: prefetch is too narrow, and better branch prediction combined with a longer skylakish muop cache would enable higher clocks. The wider fp is of course a boon too, but my point here is that zen is really young and isn't maxed, particularly compared to skylake. (And I'm not saying *lake is bad, but it's basically a pentium 3 with a few features from netburst. There isn't much left to optimize aside from process.)
|
# ? Nov 8, 2018 00:53 |
|
Arzachel posted:Sorry about that EmpyreanFlux posted:That's where my thoughts went, EUV on 12nm would be for the much, much better voltage curve and possibly improved clock performance TSMC will start using EUV with 7nm+ and you might see CPU's using it in mid-ish to late 2019. It's supposed to give a 20% boost in logic density and a 10% boost in clocks vs the 7nm that will be used with Rome/Zen2 FWIW. For what the IO chip is doing anyways there might not be much if any benefit switching to a more advanced process. Particularly once you factor in costs of more advanced processes which are real high. If it really does have a big hunk of eDRAM on there like some suspect it'll HAVE to stay on GF's 14nm process no matter what since I guess that is about the only process that can do eDRAM and still be 14nm. Yields are supposed to be real bad for big huge eDRAM caches though so it'll be real impressive if AMD really did put some eDRAM in there. EmpyreanFlux posted:I don't know if 128MB would be enough for an APU There are lots of older but still interesting articles on this topic about the original XB1's 32MB ESRAM cache (which tended to limit it to 720p resolutions) that you can google for if you want to spitball/speculate some more. Or at least that is where I'd start anyways. Yudo posted:There is a lot of low hanging fruit to optimize Zen, process shrink or no.
|
# ? Nov 8, 2018 02:21 |
|
wargames posted:clocks matter a lot when it comes to rendering video, and the 9900k is one of the best out there for pure video rendering. That's only really true with Adobe products, things like Davinci Resolve will be able to scale far better with multiple threads. So the real answer depends on what software suite you end up using. See benchmarking done by Puget Systems for ex. https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Threadripper-2990WX-2950X-Performance-1219/#ColorTabFPS-BenchmarkAnalysis
|
# ? Nov 8, 2018 03:03 |
|
Hmmm. https://i.imgur.com/7QXlMWR.jpg --edit: Nevermind, it's a 'shop. Combat Pretzel fucked around with this message at 03:40 on Nov 8, 2018 |
# ? Nov 8, 2018 03:37 |
I think I might target a Zen 2 Threadripper and multiple video cards for IOMMU for my next build, then I can have like 4 8c16t computers in one, all at native speed. For what purpose? Just to have it, I suppose. I've been suspecting Zen 2 is a good spot to get on the AMD bus for a long while now, since the Zen 1 release, which was basically Broadwell IPC but way more cores for your money. Just enough time for the arch to mature and my Devil's Canyon workstation to age enough to justify it, especially if its IPC gains put it ahead of Skylake.
|
|
# ? Nov 8, 2018 04:37 |
|
Laslow posted:I think I might target a Zen 2 Threadripper and multiple video cards for IOMMU for my next build, then I can have like 4 8c16t computers in one, all at native speed. For what purpose? Just to have it, I suppose. except R7 1700. I'll donate the CPU, mobo, and RAM to my startup to be the dedicated Render Bender. Anyone else praying for 8C/8T being Zen2's entry level/mobile starting point?
|
# ? Nov 8, 2018 04:43 |
|
No, I think they want midrange to be AM4, and I have doubts on anything beyond 8c being able to work on the AM4 boards they've been selling in 2018, let alone 2017. Besides, "entry level" will usually be an APU, and that limits how many cores they can use.
|
# ? Nov 8, 2018 04:51 |
|
I need more threads, I just don't want to spend on ddr4 and an associated MB with ddr5 around the corner. I did that with haswell/ddr3 and guess what I still use? Faster ram would be nice anyways especially if core counts keep increasing on consumer, dual channel platforms.
|
# ? Nov 8, 2018 04:53 |
NewFatMike posted:except R7 1700. I'll donate the CPU, mobo, and RAM to my startup to be the dedicated Render Bender. ... ... Coffee Lake? Jesus Christ!
|
|
# ? Nov 8, 2018 04:54 |
|
Laslow posted:I think I might target a Zen 2 Threadripper and multiple video cards for IOMMU for my next build, then I can have like 4 8c16t computers in one, all at native speed. For what purpose? Just to have it, I suppose. If they would release a consumer Radeon with SR-IOV....... Man that would be awesome.
|
# ? Nov 8, 2018 06:35 |
|
There's probably some sort of VGPU capability in the consumer cards, just locked away in an annoying fashion. From what I spied in the NVidia release notes, for instance, their consumer cards do the Quadro poo poo, but only for Application Guard, to enable sandboxed Edge to run with hardware acceleration (and it ain't RemoteFX).
|
# ? Nov 8, 2018 07:06 |
|
That has been the MO with the "professional" cards for basically ever. Their presence unlocks software: with a handful of exceptions, it's the same or almost the same silicon.
|
# ? Nov 8, 2018 07:09 |
|
Combat Pretzel posted:There's probably some sort of VGPU capability in the consumer cards, just locked away in an annoying fashion. From what I spied in the NVidia release notes, for instance, their consumer cards do the Quadro poo poo, but only for Application Guard, to enable sandboxed Edge to run with hardware acceleration (and it ain't RemoteFX). I hate that. It's not like now that I can't have it I'm going to go out and buy a Quadro or anything. Companies want support on their stuff so they will happily buy the more expensive version. It would have been really cool to run some hardware accelerated desktops on one card.
|
# ? Nov 8, 2018 08:03 |
|
Seeing how NVidia is seemingly actively sabotaging GPU passthrough on their consumer parts, I expect nothing from them. Hopefully AMD will be the extreme opposite. Intel has this GVT-g stuff of theirs and wants everyone to use it; maybe it'll also be in their upcoming discrete cards, assuming they'll even be worth a drat for gaming and such.
|
# ? Nov 8, 2018 14:37 |
|
I thought gpu passthrough works with latest nvidia driver again?
|
# ? Nov 8, 2018 14:45 |
|
jisforjosh posted:This is the first time in over a decade I've been legitimately pumped for a CPU from AMD and putting off my new build to see exactly what Zen 2 will be capable of. The bright outlook and the timing of when I expect to have some expendable income from my new (soon to be big-boy!) job puts me in your boat. I was very happy with how well I'd put together my Broadwell build a few years ago and I hope to turn that up to 11. Hopefully coming back to Team Red for the first time since my Phenom II X4!
|
# ? Nov 8, 2018 15:11 |
|
8c CPU chiplet, cut down IO die, 1Hi HBM2 hooked up to a GPU chip through 2x IF links. Give it to me AMD, I've been waiting since Llano.
|
# ? Nov 8, 2018 16:36 |
|
Maybe this belongs in the GPU thread, but AMD did an after presentation on Vega 20 https://www.youtube.com/watch?v=m0h6-VfH3Xo Just picking it out due to chiplet design relevance - IF has a latency of 60-70ns, so it's only slightly slower than really good L3. Probably bodes really well for Rome and what I'm going to call Zenith Ridge (AMD marketing sucks if they don't pick that), but it also seems that the Rome design solves GPU scalability as well. Large I/O die that gets recognized as a GPU, and small ALU dies connected via IF Gen2 (or 3) on Arcturus. Like, imagine making two I/O die designs, that have the hardware scheduler, geometry processor and memory bus, a 256 bit one and a 4096 bit one compatible with HBM on 14nm. The chiplets would be composed of ALUs, TMUs and ROPs, and on the latest process. I know AMD has said they wanted to do something like this before, but held off because they didn't have a solution to getting it all recognized as a single GPU; the I/O die allows that to happen though. Certain members of the Zen team were reassigned to RTG, and I bet it's specifically for this reason; Navi is the last monolithic design, and I bet Arcturus (the 256 bit I/O) and Betelgeuse (the HBM2/3 I/O) are the replacements in very late 2020 or mid 2021.
|
# ? Nov 8, 2018 17:52 |
|
Truga posted:I thought gpu passthrough works with latest nvidia driver again? There's always this sing and dance of hiding the KVM hypervisor and ideally not using the Hyper-V extensions (especially the SynIC helps with performance). Did this change?
|
# ? Nov 8, 2018 18:07 |
|
Combat Pretzel posted:There's always this sing and dance of hiding the KVM hypervisor and ideally not using the Hyper-V extensions (especially the SynIC helps with performance). Did this change? I've had very good results with hiding KVM and leaving all the Hyper-V extensions enabled, but changing the Hyper-V vendor string to something non-default. It seems like NVIDIA's consumer drivers look for particular hypervisor vendor strings to detect GPU passthrough, but don't actually check for or care about the actual paravirtualization features.
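For anyone following along, the tricks being described here live in the libvirt domain XML. A rough sketch of what posters seem to be running, assuming a reasonably recent libvirt/QEMU (the vendor_id value is arbitrary, any non-default 12-character string works):

```xml
<!-- Fragment of a libvirt domain definition, inside <domain> -->
<features>
  <hyperv>
    <!-- Keep the Hyper-V paravirt extensions on for performance -->
    <relaxed state='on'/>
    <vapic state='on'/>
    <synic state='on'/>
    <!-- ...but report a non-default hypervisor vendor string -->
    <vendor_id state='on' value='NotKVMHonest'/>
  </hyperv>
  <!-- Hide the KVM hypervisor signature from the guest -->
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```

The combination of `<hidden state='on'/>` and a custom `vendor_id` is what people report gets the consumer NVidia driver to stop refusing to initialize in the guest, without giving up the Hyper-V enlightenments.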
|
# ? Nov 8, 2018 22:12 |
|
Combat Pretzel posted:There's always this sing and dance of hiding the KVM hypervisor and ideally not using the Hyper-V extensions (especially the SynIC helps with performance). Did this change? I only have kvm = hidden on. It works really well.
|
# ? Nov 9, 2018 08:50 |
|
Some news from the other side of the wall: https://www.youtube.com/watch?v=kmAWqyHdebI Hardware Unboxed retested their Core i9-9900K, while clamping the chip to a 95W TDP. Intel has been using their motherboard partners to have their boards load a default clock multiplier table that violates the crap out of Intel's own official power spec, essentially trying to sneak factory pre-overclocking past people. The results are astonishing... the 9900K is now neck-and-neck with, or the outright loser to, the 2700X in many of their retested workload benchmarks... and the 2700X is still cheaper! The AMD part still loses out in AVX workloads, but the margin in many cases has dropped to single digits. https://www.techspot.com/review/1744-core-i9-9900k-round-two/ "but what about gaming," you cry. Yes, the 2700X still posts lower framerates than the 95W TDP-clamped 9900K, until the point at which your games become GPU-bound, but again, it seems the adage of buying Intel if you only game, and AMD if you game and work, is now "buy AMD and pocket the extra $200 if you work, buy Intel if you don't or have AVX-heavy workloads". SwissArmyDruid fucked around with this message at 15:47 on Nov 10, 2018 |
# ? Nov 10, 2018 15:42 |
|
quote:and the 2700X is still cheaper GRINDCORE MEGGIDO fucked around with this message at 16:54 on Nov 10, 2018 |
# ? Nov 10, 2018 16:04 |
|
Fair enough, expose the 95W TDP as being the lie we all knew it was. But hamstringing the chip then measuring performance doesn't seem like a very interesting thing with regard to making a purchasing decision.
|
# ? Nov 10, 2018 16:29 |
|
Doesn't the 2700X also pull more than the advertised 105W with all the cores loaded/XFR?
|
# ? Nov 10, 2018 16:29 |
|
SwissArmyDruid posted:Hardware Unboxed retested their Core i9-9900K, while clamping the chip to a 95W TDP. Intel has been using their motherboard partners to have their boards load a default clock multiplier table that violates the crap out of Intel's own official power spec, essentially trying to sneak factory pre-overclocking past people. I am not sure I agree. It isn't like AMD has a long history of treating their own TDP numbers as strict rules, and not some vague guideline based on arbitrary numbers. Sauce for the goose is sauce for the gander. If all the boards are doing it, and the systems are stable, then as far as I'm concerned it's a fair result. Numbers are going to be inconsistent between different sites depending on what settings they use, but that's nothing new. The only thing I see being a problem is that most reviewers are testing systems on a benchtop and so the extra watts don't create problems with case temperature. The main thing I'd want to see is reviewers testing systems with some longer timedemos or whatnot and throwing out the first few minutes, to represent real-world game performance and not an extra boost that you only get while the thermals hold.
|
# ? Nov 10, 2018 16:50 |
|
ConanTheLibrarian posted:Fair enough, expose the 95W TDP as being the lie we all knew it was. But hamstringing the chip then measuring performance doesn't seem like a very interesting thing with regard to making a purchasing decision. This.
|
# ? Nov 10, 2018 16:53 |
|
Arzachel posted:Doesn't the 2700X also pull more than the advertised 105W with all the cores loaded/XFR? Typically you'll be looking at ~120W or less on real-world workloads even with XFR on though for a 2700X. So it won't be as much of a power hog as a 9900k and will be much easier to cool since the soldered TIM it has is better implemented. The stock HSF it comes with can usually do an OK job believe it or not of giving XFR enough cooling headroom to be useful though of course an AIO watercooler will be better. Once you consider the ~$1000 cost of the i9 9900k + the cost of the fairly good watercooling (you really want something like a 3x 120mm fan radiator watercooling loop AIO for one otherwise you get a heat feedback loop at high clocks which can cause thermal throttling) needed to really run the thing at the ~5Ghz all core 24/7 speeds necessary to make it interesting performance-wise vs the 2700X... Well the top end of the market has generally always been lousy from a value perspective but in comparison to a 2700X it's an extremely poor value even with "unlimited" TDP to allow those 5Ghz clocks. And if you limit that i9 9900k to 95W TDP's it's clearly a flat-out stupid idea to buy vs the ~$300 2700X since performance-wise it'll be effectively the same for the most part. Which was kinda what the video was pointing out. Now normally one would think that no one is ever really going to buy an i9 9900k to run at stock clocks or keep it at the stock listed 95W TDP so that issue would be moot but the video mentioned that around half of the people planning on buying these things aren't gonna OC them at all. And that about half of the other half were just gonna use some sort of auto-OC software (either Intel's or the mobo vendors') to do their OC'ing for them and leave it at that due to the issues of trying to cool the thing at 5Ghz on all cores 24/7.
Which means they'll probably get to ~4.7Ghz all core 24/7 OC and a ~150W TDP which you can cool well enough with a generic 2x 120mm AIO watercooler. Which isn't too bad of a performance boost vs stock clocks but still is a highly lousy value vs a ~4.2Ghz 2700X which is about what XFR will get you and OK cooling. So essentially when you consider all the angles, even from a PC enthusiast perspective of GOTTA GO FAST BRO MUH FRAMES, it seems the i9 9900k really doesn't make sense to buy ever. Well, maybe if $1K+ is cheap and easy pocket money to you, then OK sure it makes some sense. Otherwise no, not really.
|
# ? Nov 10, 2018 17:15 |
|
SwissArmyDruid posted:"but what about gaming," you cry. Yes, the 2700X still posts lower framerates than the 95W TDP-clamped 9900K, until the point at which your games become GPU-bound, but again, it seems the adage of buying Intel if you only game, and AMD if you game and work, is now "buy AMD and pocket the extra $200 if you work, buy Intel if you don't or have AVX-heavy workloads". It's not even that complicated. Buy Intel if you're not GPU/monitor bound and you care about the frames (or are doing AVX stuff), otherwise get Ryzen. The notion that x or y component is better at gaming is nonsense, no part operates in a vacuum. Under some circumstances Intel will deliver 10-15% more frames, but this doesn't translate into the general notion that it's 10-15% better at gaming, unless you're in the same circumstances.
|
# ? Nov 10, 2018 18:25 |
|
Price/performance is the big factor in a lot of people's purchasing decisions, I certainly won't be paying double for just 10-15% more performance. At higher resolutions (like 4k) the gap disappears, and the difference really only manifests at 1080p - which begs the question, who the gently caress would play at 1080p with a 9900k?
|
# ? Nov 10, 2018 18:38 |
|
Zedsdeadbaby posted:Price/performance is the big factor in a lot of people's purchasing decisions, I certainly won't be paying double for just 10-15% more performance. At higher resolutions (like 4k) the gap disappears, and the difference really only manifests at 1080p - which begs the question, who the gently caress would play at 1080p with a 9900k? At 1080p there's a gaming streaming use case, as it's phenomenal at maintaining frame rates while streaming with good quality, then there's competitive gaming where you're using 144/240 Hz monitors. vvvv MSRP is $488-$499 Winks fucked around with this message at 19:42 on Nov 10, 2018 |
# ? Nov 10, 2018 19:28 |
|
PC LOAD LETTER posted:Once you consider the ~$1000 cost of the i9 9900k It's $550 ($600 msrp). The price on newegg is because it's out of stock.
|
# ? Nov 10, 2018 19:42 |
|
|
TDP hasn't meant anything on desktop for either side in years honestly
|
# ? Nov 10, 2018 19:45 |