Anime Schoolgirl
Nov 28, 2002

EmpyreanFlux posted:

Rome wasn't built on one die.
:shepicide:


EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
Also doesn't this mean GloFo and AMD don't need to heavily renegotiate the wafer supply agreement? GloFo can now crank out I/O dies from here until the end of time and make billions doing so on a really safe die design. Also indicates AMD may do like a 12nm EUV update at some point? No reason for GloFo not to use EUV on "12nm" right?

Scarecow posted:

God damn Miss

Fixd!

SwissArmyDruid
Feb 14, 2014

by sebmojo

EmpyreanFlux posted:

Also doesn't this mean GloFo and AMD don't need to heavily renegotiate the wafer supply agreement? GloFo can now crank out I/O dies from here until the end of time and make billions doing so on a really safe die design. Also indicates AMD may do like a 12nm EUV update at some point? No reason for GloFo not to use EUV on "12nm" right?

I mean, the WSA has been amended TO DEATH already. The amendment they did in the wake of the news of GloFo dropping out of 7nm will have been their 7th amendment, and AMD had already ponied up during the 6th back in 2016 to get most of that albatross off their neck and open up the possibility of getting the TSMC 7nm chiplets they're using NOW.

Furthermore, the WSA with GloFo ends next year, so it's not like they have much longer to go.

SwissArmyDruid fucked around with this message at 20:56 on Nov 7, 2018

NewFatMike
Jun 11, 2015

Yeah, my guess is that GloFo realized they could still make money back on AMD moving to 7nm if they just made the IO chips, and negotiated on that a few months ago.

Re: process shrink to 12nm for these chips: the reason the IO chips are 14nm is that they don't scale well with process shrinks. If they moved up to 12nm, my guess is it would be for mobile IO chips, to squeeze out every last mW they can for battery life.

I'm so ready for Threadripper 3. I wonder if I'll be able to virtualize both a gaming PC and a stream transcoding machine simultaneously.

Arzachel
May 12, 2012

PC LOAD LETTER posted:

Wow didn't know that. Do you have a link to more of a story on that or was it something that was just mentioned off hand by someone in the know?

I hadn't thought about it, but for some reason my brain has internalized it as a fact, when the source was just posters on the Beyond3D and AnandTech forums talking about it back when it was found out that Kaveri supported GDDR5, so for all I know it could be bullshit.

Sorry about that :v:

NewFatMike posted:

Yeah, my guess is that GloFo realized they could still make money back on AMD moving to 7nm if they just made the IO chips, and negotiated on that a few months ago.

Re: process shrink to 12nm for these chips: the reason the IO chips are 14nm is that they don't scale well with process shrinks. If they moved up to 12nm, my guess is it would be for mobile IO chips, to squeeze out every last mW they can for battery life.

I'm so ready for Threadripper 3. I wonder if I'll be able to virtualize both a gaming PC and a stream transcoding machine simultaneously.

People are speculating that it's 14nm specifically because GloFo's 14HP process supports eDRAM.

jisforjosh
Jun 6, 2006

"It's J is for...you know what? Fuck it, jizz it is"
This is the first time in over a decade I've been legitimately pumped for a CPU from AMD and putting off my new build to see exactly what Zen 2 will be capable of.

EmpyreanFlux
Mar 1, 2013


NewFatMike posted:

Yeah, my guess is that GloFo realized they could still make money back on AMD moving to 7nm if they just made the IO chips, and negotiated on that a few months ago.

Re: process shrink to 12nm for these chips: the reason the IO chips are 14nm is that they don't scale well with process shrinks. If they moved up to 12nm, my guess is it would be for mobile IO chips, to squeeze out every last mW they can for battery life.

That's where my thoughts went; EUV on 12nm would be for the much, much better voltage curve and possibly improved clock performance.

NewFatMike posted:

I'm so ready for Threadripper 3. I wonder if I'll be able to virtualize both a gaming PC and a stream transcoding machine simultaneously.

Get a 64C threadripper, play games, stream, render and emulate an Intel PC simultaneously.

Arzachel posted:

People are speculating that it's 14nm specifically because GloFo's 14HP process supports eDRAM.

Does it? *googles* This is what I got - https://fuse.wikichip.org/news/956/globalfoundries-14hp-process-a-marriage-of-two-technologies/4/

TR claims 256MB L3; among 8 dies that'd only be 128MB L3 accounted for, so the I/O die might have an additional 128MB of L3 cache. The eDRAM on 14HP has comparable access time to L2 (when used in place of L2), so I can see how this overcomes latency issues. I don't know if 128MB would be enough for an APU, but if AMD only has the one I/O design, I don't see the point of disabling the L3 on higher end APUs. Fuck, there might be a specialist I/O die that just maximizes eDRAM capacity, and this is the actual solution to the APU bandwidth issue going forward.

Yudo
May 15, 2003

PC LOAD LETTER posted:

To me this is as big a deal or bigger:


So a near 30% IPC improvement on combined int/fp tasks for at least some workloads, which is some insane BS if it holds up.

There is a lot of low hanging fruit to optimize in Zen, process shrink or no. The current front end chokes its cores: prefetch is too narrow, and better branch prediction combined with a longer, Skylake-ish µop cache would enable higher clocks. The wider FP is of course a boon too, but my point here is that Zen is really young and isn't maxed out, particularly compared to Skylake. (And I'm not saying *lake is bad, but it's basically a Pentium III with a few features from NetBurst. There isn't much left to optimize aside from process.)

PC LOAD LETTER
May 23, 2005
WTF?!

Arzachel posted:

Sorry about that :v:
No worries. That happens to everyone, including me. Those sites are normally, or used to be anyways, good sources of information, just of the more speculative type.

EmpyreanFlux posted:

That's where my thoughts went, EUV on 12nm would be for the much, much better voltage curve and possibly improved clock performance
I don't think GF or TSMC are doing EUV on 12nm anything. And GF's/TSMC's 12nm is more like a 14nm+ since it wasn't an optical shrink at all, just an optimization of their current 14nm HP process.

TSMC will start using EUV with 7nm+ and you might see CPUs using it in mid-ish to late 2019. It's supposed to give a 20% boost in logic density and a 10% boost in clocks vs the 7nm that will be used with Rome/Zen2, FWIW.

For what the IO chip is doing anyways, there might not be much if any benefit to switching to a more advanced process. Particularly once you factor in the costs of more advanced processes, which are real high. If it really does have a big hunk of eDRAM on there like some suspect, it'll HAVE to stay on GF's 14nm process no matter what, since I guess that is about the only process that can do eDRAM and still be 14nm. Yields are supposed to be real bad for big huge eDRAM caches though, so it'll be real impressive if AMD really did put some eDRAM in there.

EmpyreanFlux posted:

I don't know if 128MB would be enough for an APU
It'd be enough to hold the frame buffer for 1080p or less resolutions + some needful textures/meshes perhaps + some other stuff for the CPU/iGPU (since it'd be a shared cache) which would still give you a nice performance boost for those resolutions I would think. How much exactly I don't know though.

There are lots of older but still interesting articles on this topic about the 32MB ESRAM cache the original XB1 had (which tended to limit it to 720p resolutions) that you can google for if you want to spitball/speculate some more. Or at least that is where I'd start anyways.

Yudo posted:

Their is a lot of low hanging fruit to optimize Zen, process shrink or no.
I figured it'd be more stuff like improving the caches, CCXs, etc. since that is what I thought of as the relatively low hanging fruit instead of the other stuff. It's real cool to see they can actually get some big boosts in performance by improving other stuff too, of course!

digitalwatchmaker
Dec 11, 2008

wargames posted:

clocks matter a lot when it comes to rendering video, and the 9900K is one of the best out there for pure video rendering.

That's only really true with Adobe products; things like DaVinci Resolve will be able to scale far better with multiple threads. So the real answer depends on what software suite you end up using. See the benchmarking done by Puget Systems, for example: https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Threadripper-2990WX-2950X-Performance-1219/#ColorTabFPS-BenchmarkAnalysis

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Hmmm.

https://i.imgur.com/7QXlMWR.jpg

--edit: Nevermind, it's a 'shop.

Combat Pretzel fucked around with this message at 03:40 on Nov 8, 2018

Laslow
Jul 18, 2007
I think I might target a Zen 2 Threadripper and multiple video cards for IOMMU for my next build, then I can have like 4 8c16t computers in one, all at native speed. For what purpose? Just to have it, I suppose.

I've been suspecting Zen 2 is a good spot to get on the AMD bus for a long while now, since the Zen 1 release, which was basically Broadwell IPC but way more cores for your money. Just enough time for the arch to mature and my Devil's Canyon workstation to age enough to justify it, especially if its IPC gains put it ahead of Skylake.

NewFatMike
Jun 11, 2015

Laslow posted:

I think I might target a Zen 2 Threadripper and multiple video cards for IOMMU for my next build, then I can have like 4 8c16t computers in one, all at native speed. For what purpose? Just to have it, I suppose.

I've been suspecting Zen 2 is a good spot to get on the AMD bus for a long while now, since the Zen 1 release, which was basically Broadwell IPC but way more cores for your money. Just enough time for the arch to mature and my Devil's Canyon workstation to age enough to justify it, especially if its IPC gains put it ahead of Skylake.

:same: except R7 1700. I'll donate the CPU, mobo, and RAM to my startup to be the dedicated Render Bender.

Anyone else praying for 8C/8T being Zen2's entry level/mobile starting point?

Craptacular!
Jul 9, 2001

Fuck the DH
No, I think they want midrange to be AM4, and I have doubts on anything beyond 8c being able to work on the AM4 boards they’ve been selling in 2018, let alone 2017.

Besides, “entry level” will usually be an APU, and that limits how many cores they can use.

Yudo
May 15, 2003

I need more threads, I just don't want to spend on DDR4 and an associated mobo with DDR5 around the corner. I did that with Haswell/DDR3 and guess what I still use?

Faster ram would be nice anyways especially if core counts keep increasing on consumer, dual channel platforms.

Laslow
Jul 18, 2007

NewFatMike posted:

:same: except R7 1700. I'll donate the CPU, mobo, and RAM to my startup to be the dedicated Render Bender.

Anyone else praying for 8C/8T being Zen2's entry level/mobile starting point?
That would be nice, but even 6c/12t would be cool, seeing as the 5820K was HEDT not all that long ago. It's so cool to finally see some progress on CPUs again, in regards to raw CPU power and core count getting cheaper year over year now, as opposed to whatever the gap was between Sandy Bridge and........
...
...
Coffee Lake? Jesus Christ!

Mr Shiny Pants
Nov 12, 2012

Laslow posted:

I think I might target a Zen 2 Threadripper and multiple video cards for IOMMU for my next build, then I can have like 4 8c16t computers in one, all at native speed. For what purpose? Just to have it, I suppose.

I've been suspecting Zen 2 is a good spot to get on the AMD bus for a long while now, since the Zen 1 release, which was basically Broadwell IPC but way more cores for your money. Just enough time for the arch to mature and my Devil's Canyon workstation to age enough to justify it, especially if its IPC gains put it ahead of Skylake.



If they would release a consumer Radeon with SR-IOV....... Man that would be awesome.

Combat Pretzel
Jun 23, 2004

There's probably some sort of vGPU capability in the consumer cards, just locked away in an annoying fashion. From what I spied in the NVidia release notes, for instance, their consumer cards do the Quadro shit, but only for Application Guard, to enable sandboxed Edge to run with hardware acceleration (and it ain't RemoteFX).

Yudo
May 15, 2003

That has been the MO with the "professional" cards for basically ever. Their presence unlocks software: with a handful of exceptions, it's the same or almost the same silicon.

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

There's probably some sort of vGPU capability in the consumer cards, just locked away in an annoying fashion. From what I spied in the NVidia release notes, for instance, their consumer cards do the Quadro shit, but only for Application Guard, to enable sandboxed Edge to run with hardware acceleration (and it ain't RemoteFX).

I hate that. It's not like, now that I can't have it, I'm going to go out and buy a Quadro or anything. Companies want support on their stuff, so they will happily buy the more expensive version.

It would have been really cool to run some hardware accelerated desktops on one card.

Combat Pretzel
Jun 23, 2004

Seeing how NVidia is seemingly actively sabotaging GPU passthrough on their consumer parts, I expect nothing from them. Hopefully it'll be the extreme opposite with AMD. Intel has this GVT-g stuff of theirs and wants everyone to use it; maybe it'll also be in their upcoming discrete cards, assuming they'll even be worth a damn for gaming and such.

Truga
May 4, 2014
Lipstick Apathy
I thought gpu passthrough works with latest nvidia driver again?

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map

jisforjosh posted:

This is the first time in over a decade I've been legitimately pumped for a CPU from AMD and putting off my new build to see exactly what Zen 2 will be capable of.

The bright outlook and the timing of when I expect to have some expendable income from my new (soon to be big-boy!) job put me in your boat. I was very happy with how well I'd put together my Broadwell build a few years ago, and I hope to turn that up to 11. Hopefully coming back to Team Red for the first time since my Phenom II X4!

Arzachel
May 12, 2012
8c CPU chiplet, cut down IO die, 1Hi HBM2 hooked up to a GPU chip through 2x IF links. Give it to me AMD, I've been waiting since Llano.

EmpyreanFlux
Mar 1, 2013

Maybe this belongs in the GPU thread, but AMD did an after-presentation on Vega 20:

https://www.youtube.com/watch?v=m0h6-VfH3Xo

Just picking it out due to chiplet design relevance - IF has a latency of 60-70ns, so it's only slightly slower than really good L3. Probably bodes really well for Rome and what I'm going to call Zenith Ridge (AMD marketing sucks if they don't pick that), but it also seems that the Rome design solves GPU scalability as well. A large I/O die that gets recognized as a GPU, and small ALU dies connected via IF Gen2 (or 3) on Arcturus.

Like, imagine making two I/O die designs that have the hardware scheduler, geometry processor and memory bus - a 256-bit one, and a 4096-bit one compatible with HBM - on 14nm. The chiplets would be composed of ALUs, TMUs and ROPs, and on the latest process. I know AMD has said they wanted to do something like this before, but held off because they didn't have a solution to getting it all recognized as a single GPU; the I/O die allows that to happen, though. Certain members of the Zen team were reassigned to RTG, and I bet it's specifically for this reason: Navi is the last monolithic design, and I bet Arcturus (the 256-bit I/O) and Betelgeuse (the HBM2/3 I/O) are the replacements in very late 2020 or mid 2021.

Combat Pretzel
Jun 23, 2004


Truga posted:

I thought gpu passthrough works with latest nvidia driver again?
There's always this song and dance of hiding the KVM hypervisor and ideally not using the Hyper-V extensions (especially the SynIC, which helps with performance). Did this change?

SamDabbers
May 26, 2003



Combat Pretzel posted:

There's always this song and dance of hiding the KVM hypervisor and ideally not using the Hyper-V extensions (especially the SynIC, which helps with performance). Did this change?

I've had very good results with hiding KVM and leaving all the Hyper-V extensions enabled, but changing the Hyper-V vendor string to something non-default. It seems like NVIDIA's consumer drivers look for particular hypervisor vendor strings to detect GPU passthrough, but don't actually check for or care about the actual paravirtualization features.
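For reference, a minimal sketch of what that looks like in a libvirt domain XML, assuming a KVM/libvirt setup like the ones being discussed (the `vendor_id` value is arbitrary, up to 12 characters; anything that isn't a known hypervisor string should do):

```xml
<features>
  <hyperv>
    <!-- keep the Hyper-V enlightenments enabled for guest performance -->
    <relaxed state='on'/>
    <vapic state='on'/>
    <vpindex state='on'/>
    <synic state='on'/>
    <stimer state='on'/>
    <!-- spoof the hypervisor vendor string the guest driver sees -->
    <vendor_id state='on' value='whatever12ab'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```

The point of this combination is that `<hidden state='on'/>` plus the spoofed `vendor_id` keeps the driver happy while the enlightenments stay on, so you don't pay the performance cost of disabling them.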

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

There's always this song and dance of hiding the KVM hypervisor and ideally not using the Hyper-V extensions (especially the SynIC, which helps with performance). Did this change?

I only have kvm = hidden on. It works really well.

SwissArmyDruid
Feb 14, 2014

Some news from the other side of the wall:

https://www.youtube.com/watch?v=kmAWqyHdebI

Hardware Unboxed retested their Core i9-9900K while clamping the chip to a 95W TDP. Intel has been using their motherboard partners to have their boards load a default clock multiplier table that violates the crap out of Intel's own official power spec, essentially trying to sneak factory pre-overclocking past people.

The results are astonishing... the 9900K is now neck-and-neck with, or the outright loser to, the 2700X in many of their retested workload benchmarks... and the 2700X is still cheaper! The AMD part still loses out in AVX workloads, but the margin in many cases has dropped to single digits.

https://www.techspot.com/review/1744-core-i9-9900k-round-two/

"but what about gaming," you cry. Yes, the 2700X still posts lower framerates than the 95W TDP-clamped 9900K, until the point at which your games become GPU-bound, but again, it seems the adage of buying Intel if you only game, and AMD if you game and work, is now "buy AMD and pocket the extra $200 if you work, buy Intel if you don't or have AVX-heavy workloads".

SwissArmyDruid fucked around with this message at 15:47 on Nov 10, 2018

GRINDCORE MEGGIDO
Feb 28, 1985


quote:

and the 2700X is still cheaper
Not the first time Intel has charged more for a chip that... clocks significantly higher.

GRINDCORE MEGGIDO fucked around with this message at 16:54 on Nov 10, 2018

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Fair enough, expose the 95W TDP as being the lie we all knew it was. But hamstringing the chip then measuring performance doesn't seem like a very interesting thing with regard to making a purchasing decision.

Arzachel
May 12, 2012
Doesn't the 2700X also pull more than the advertised 105W with all the cores loaded/XFR?

Klyith
Aug 3, 2007

GBS Pledge Week

SwissArmyDruid posted:

Hardware Unboxed retested their Core i9-9900K while clamping the chip to a 95W TDP. Intel has been using their motherboard partners to have their boards load a default clock multiplier table that violates the crap out of Intel's own official power spec, essentially trying to sneak factory pre-overclocking past people.

The results are astonishing... the 9900K is now neck-and-neck with, or the outright loser to, the 2700X in many of their retested workload benchmarks... and the 2700X is still cheaper! The AMD part still loses out in AVX workloads, but the margin in many cases has dropped to single digits.

I am not sure I agree. It isn't like AMD has a long history of treating their own TDP numbers as strict rules, and not some vague guideline based on arbitrary numbers. Sauce for the goose is sauce for the gander.


If all the boards are doing it, and the systems are stable, then as far as I'm concerned it's a fair result. Numbers are going to be inconsistent between different sites depending on what settings they use, but that's nothing new. The only thing I see being a problem is that most reviewers test systems on a benchtop, so the extra watts don't create problems with case temperature.

The main thing I'd want to see is reviewers testing systems with some longer timedemos or whatnot and throwing out the first few minutes, to represent real-world game performance and not an extra boost that you only get while the thermals hold.

GRINDCORE MEGGIDO
Feb 28, 1985


ConanTheLibrarian posted:

Fair enough, expose the 95W TDP as being the lie we all knew it was. But hamstringing the chip then measuring performance doesn't seem like a very interesting thing with regard to making a purchasing decision.

This.

PC LOAD LETTER
May 23, 2005

Arzachel posted:

Doesn't the 2700X also pull more than the advertised 105W with all the cores loaded/XFR?
With XFR, yes, it can easily do so on real world stuff and not just power virus/synth benches. If you turn off XFR, the 105W rated TDP is fairly realistic for real world stuff on a 2700X, but that kinda defeats the purpose of buying one instead of a 2700.

Typically you'll be looking at ~120W or less on real world workloads for a 2700X, though, even with XFR on. So it won't be as much of a power hog as a 9900K, and will be much easier to cool since the soldered TIM it has is better implemented. The stock HSF it comes with can usually do an OK job, believe it or not, of giving XFR enough cooling headroom to be useful, though of course an AIO watercooler will be better.

Once you consider the ~$1000 cost of the i9 9900K + the cost of the fairly good watercooling (you really want something like a 3x 120mm radiator AIO for one, otherwise you get a heat feedback loop at high clocks which can cause thermal throttling) needed to really run the thing at the ~5GHz all-core 24/7 speeds necessary to make it interesting performance-wise vs the 2700X... Well, the top end of the market has generally always been lousy from a value perspective, but in comparison to a 2700X it's an extremely poor value even with "unlimited" TDP to allow those 5GHz clocks. And if you limit that i9 9900K to a 95W TDP, it's clearly a flat out stupid idea to buy vs the ~$300 2700X, since performance-wise it'll be effectively the same for the most part.

Which was kinda what the video was pointing out.

Now normally one would think that no one is ever really going to buy an i9 9900K to run at stock clocks or keep it at the stock listed 95W TDP, so that issue would be moot, but the video mentioned that around half of the people planning on buying these things aren't gonna OC them at all. And that about half of the other half were just gonna use some sort of auto-OC software (either Intel's or the mobo vendors') to do their OC'ing for them and leave it at that, due to the issues of trying to cool the thing at 5GHz on all cores 24/7. Which means they'll probably get a ~4.7GHz all-core 24/7 OC and a ~150W TDP, which you can cool well enough with a generic 2x 120mm AIO watercooler. Which isn't too bad of a performance boost vs stock clocks, but is still a highly lousy value vs a ~4.2GHz 2700X, which is about what XFR will get you with OK cooling.

So essentially, when you consider all the angles, even from a PC enthusiast perspective of GOTTA GO FAST BRO MUH FRAMES, it seems the i9 9900K really doesn't make sense to buy, ever. Well, maybe if $1K+ is cheap and easy pocket money to you, then OK, sure, it makes some sense. Otherwise no, not really.

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

SwissArmyDruid posted:

"but what about gaming," you cry. Yes, the 2700X still posts lower framerates than the 95W TDP-clamped 9900K, until the point at which your games become GPU-bound, but again, it seems the adage of buying Intel if you only game, and AMD if you game and work, is now "buy AMD and pocket the extra $200 if you work, buy Intel if you don't or have AVX-heavy workloads".

It's not even that complicated. Buy Intel if you're not GPU/monitor bound and you care about the frames (or are doing AVX stuff), otherwise get Ryzen.

The notion that x or y component is better at gaming is nonsense; no part operates in a vacuum. Under some circumstances Intel will deliver 10-15% more frames, but this doesn't translate into the general notion that it's 10-15% better at gaming unless you're in the same circumstances.

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
Price/performance is the big factor in a lot of people's purchasing decisions; I certainly won't be paying double for just 10-15% more performance. At higher resolutions (like 4K) the gap disappears, and the difference really only manifests at 1080p - which begs the question, who the fuck would play at 1080p with a 9900K?

Winks
Feb 16, 2009

Alright, who let Rube Goldberg in here?

Zedsdeadbaby posted:

Price/performance is the big factor in a lot of people's purchasing decisions; I certainly won't be paying double for just 10-15% more performance. At higher resolutions (like 4K) the gap disappears, and the difference really only manifests at 1080p - which begs the question, who the fuck would play at 1080p with a 9900K?

At 1080p there's a game streaming use case, as it's phenomenal at maintaining frame rates while streaming with good quality; then there's competitive gaming, where you're using 144/240Hz monitors.

vvvv MSRP is $488-$499

Winks fucked around with this message at 19:42 on Nov 10, 2018

Klyith
Aug 3, 2007


PC LOAD LETTER posted:

Once you consider the ~$1000 cost of the i9 9900k

It's $550 ($600 MSRP). The price on Newegg is because it's out of stock.


Cygni
Nov 12, 2005

raring to post

TDP hasn't meant anything on desktop for either side in years honestly
