repiv
Aug 13, 2009

TBR isn't a completely free lunch, they need to carve out a large chunk of L2 cache to hold the tile data. In hindsight that's why Maxwell had such a massive cache compared to Kepler.

AMD hasn't told us how much cache Vega has, have they? I'm wondering if they failed to scale it up and ended up with another lopsided design, where the TBR ends up starving the other cache functions when it's enabled.


EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I don't know why you wouldn't, it's quite clear that it is a superior way of doing things. Actually, that would be yet another regression from Polaris.

Article on Maxwell posted:

The L2 cache was increased from 256 KiB on Kepler to 2 MiB on Maxwell, reducing the need for more memory bandwidth.


I don't think it's an L2 cache problem, or else RTG is staffed with drooling idiots somewhere. Also this makes me sad Polaris never received TBR; it seems like it would have greatly helped, and the design could accommodate it. It might have made it more competitive with the 1070. That might be what the supposed Vega 11 is, though: just Polaris 10 with all of Vega's features reverse-engineered in.

repiv
Aug 13, 2009

PC Perspective's article is up. The Titan Xp is even winning by a large margin in LuxMark, a pure compute benchmark that won't hit any of Vega's geometry/rasterization pain points.



Bonus:

https://twitter.com/GamersNexus/status/880841367076917248

:cripes:

repiv fucked around with this message at 19:23 on Jun 30, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

FaustianQ posted:

Also holy gently caress I wonder if this is what is responsible for all those ridiculous power dips but no changes in performance, it's just toggling TBR and certain parts of the geometry processor on and off at plaid. Here is my theory: Vega has a ridiculous degree of power gating, like being able to turn off shader groups (maybe even individual shaders), buses, etc. at will, thus the huge power dips. That means at 1440MHz and 240W it's only running like, 3072 shaders. We don't see dips in frequency because the vast majority of shaders are still reporting 1440MHz. Then a new workload comes in and Vega decides, nah, now I need 3584 shaders but at 1380MHz - another power dip, change in frequency but no real noticeable change in performance. This also explains the hilarious overclocking results where a 1682MHz card gets like a 2-3% performance boost. Yes, it's at 1682MHz, but only running 2816 shaders to stay in thermal constraints. I wonder if LN2 would allow this thing to clock itself to the moon while causing a local brownout.

This is an interesting thought. What you've just described is clock thrashing, which is actually one of the main bugbears of the big.LITTLE architecture. You have a set of big fast cores for gettin' poo poo done, and you have a set of low-power cores for when you're just idling and doing background poo poo. The tendency is for the big cores to clock up, get everything done, and switch back to the little cores. But it actually takes quite a bit of time to swap threads between the big and little cores, so by the time the little core is booted back in you have a big pile of work to do and it all gets swapped back to the big cores. You need a lot of hysteresis to get sane behavior.
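To make that concrete, here's a toy sketch of the kind of hysteresis a governor needs. Purely illustrative: the thresholds, dwell time, and cluster names are all made up, and real governors are far more involved than this.

```python
# Toy model of a big.LITTLE-style governor. Thresholds, dwell time and the
# load model are invented for illustration only.

def next_cluster(current, load, dwell, up=0.80, down=0.30, min_dwell=5):
    """Pick the cluster for the next scheduling tick.

    current   -- "big" or "little"
    load      -- utilisation of the current cluster, 0.0-1.0
    dwell     -- ticks already spent on the current cluster
    min_dwell -- hysteresis: refuse to migrate until we've stayed put this long
    """
    if dwell < min_dwell:
        return current          # ride out short spikes/dips instead of thrashing
    if current == "little" and load > up:
        return "big"            # work is piling up, move to the fast cores
    if current == "big" and load < down:
        return "little"         # mostly idle, fall back to the efficient cores
    return current

# With min_dwell=0 a bursty load ping-pongs between clusters every few ticks;
# with a sane dwell time the big cores just ride out the dips.
```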

If Vega, analogously, is trying to twiddle settings too quickly relative to how fast the hardware actually clocks up/down, or shifts render modes, they could get themselves into trouble.

However, there are lots of ways this could play out in practice; you'd need to be able to look at what's going on at a hardware level. For example, if you're geometry-bottlenecked, it probably would actually be the correct decision to run at your max possible clocks (get geometry throughput up) but turn off all the shader cores that you aren't using. But if that power manager is making bad decisions, then yeah, that could definitely be exacerbating some of these performance issues, no question at all. They probably have it running well enough that it's not massively off, but it could easily be costing 5-10% in specific applications that trigger misbehavior.

Dunno about it turning tiling on or off; one would imagine (:jeb:) that that would be a setting the driver would try to avoid toggling. My hunch would be that you would flag it at an exe level - when you see witcher3.exe running then you turn on TBR for that process. I think the more likely situation is that nobody at AMD realized people would be checking on this and didn't whitelist David Kanter's tool as needing TBR. Or, that the way David Kanter's tool halts things mid-render isn't quite working properly on Vega's TBR, and he isn't actually stopping in the middle of a tile like he was on Maxwell. This was a hand-coded thing he used to prove a theory about Maxwell (and Pascal is a direct Maxwell descendant); there's no guarantee that it works perfectly on arbitrary TBR architectures.

Generally though, the LuxMark benches pretty much sealed it. Even if AMD overcomes their FLOPS-to-framerate disadvantage relative to NVIDIA with ~*driver improvements*~, they simply aren't going to jump ahead far enough to overcome an actual disadvantage in raw compute throughput. Vega is just barely faster than GP102 on paper (12.5 TFLOPS vs 12.1 TFLOPS), and AMD seems to have massive problems actually getting that hardware into action (as usual). Even in compute, which should be the easiest thing.
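For what it's worth, the paper numbers shake out like this. Shader counts are the published specs; the clocks are just rough values that back out of the 12.5/12.1 figures, so treat them as assumptions rather than measured sustained clocks.

```python
# Peak FP32 throughput: shaders * 2 FLOPs per clock (FMA) * clock.

def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000   # GFLOPS / 1000 = TFLOPS

print(f"Vega FE  (4096 SP @ ~1.53 GHz): {tflops(4096, 1.53):.1f} TFLOPS")  # ~12.5
print(f"Titan Xp (3840 SP @ ~1.58 GHz): {tflops(3840, 1.58):.1f} TFLOPS")  # ~12.1
```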

There's enough loose ends there that I could see them maybe picking up 10% across the board within 6-12 months, and there are games with pretty obvious regressions that will be tuned. But I can't see them picking up 30-50% across the board, just not going to happen.

Paul MaudDib fucked around with this message at 19:37 on Jun 30, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

repiv posted:

TBR isn't a completely free lunch, they need to carve out a large chunk of L2 cache to hold the tile data. In hindsight that's why Maxwell had such a massive cache compared to Kepler.

AMD hasn't told us how much cache Vega has, have they? I'm wondering if they failed to scale it up and ended up with another lopsided design, where the TBR ends up starving the other cache functions when it's enabled.

They have not, although Polaris has 2 MB so one would think (:jeb:) that Vega has at least that much? Hopefully more, actually, given that Vega has considerably more cores than Polaris 10 (4096 vs 2304 shaders)?

edit: Fiji also has 2 MB of L2 so that actually may not be enough given the additional load placed by TBR. The assumption here is that AMD has traditionally had more L2 cache than NVIDIA and TBR probably further increases consumption.

But again this comes back to the question... is Vega the original design for 20nm Fiji? That's another interesting data point: Polaris got more L2 per core than Fiji, and you'd think that improvement would have been carried forward into Vega.
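Back-of-the-envelope, just to show why the tile data matters. AMD hasn't published tile sizes, formats, or how many tiles get binned at once, so every number below is a placeholder:

```python
# Hypothetical tile-buffer footprint. Tile dimensions, pixel formats and the
# number of tiles kept in flight are all guesses; the point is just how quickly
# binned tile data eats into a 2 MB L2.

def tile_bytes(width, height, color_bytes=4, depth_bytes=4, samples=1):
    return width * height * (color_bytes + depth_bytes) * samples

per_tile = tile_bytes(64, 64)        # 64x64, RGBA8 colour + 32-bit depth = 32 KiB
tiles_in_flight = 16                 # assume a handful per raster engine
footprint_kib = per_tile * tiles_in_flight / 1024
print(f"{footprint_kib:.0f} KiB of a 2048 KiB L2")   # 512 KiB, a quarter of the cache
```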

Paul MaudDib fucked around with this message at 19:43 on Jun 30, 2017

repiv
Aug 13, 2009

PCper also measured the die size - it's 564mm². That's actually larger than most of the pixel-counted estimates :suicide:

https://www.pcper.com/news/Graphics-Cards/Radeon-Vega-Frontier-Edition-GPU-and-PCB-Exposed

For reference GP104 is 314mm² and GP102 is 471mm².

Rastor
Jun 2, 2001

Seriously AMD like WTF did you put in that thing

Who is it even for

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

repiv posted:

PCper also measured the die size - it's 564mm². That's actually larger than most of the pixel-counted estimates :suicide:

https://www.pcper.com/news/Graphics-Cards/Radeon-Vega-Frontier-Edition-GPU-and-PCB-Exposed

For reference GP104 is 314mm² and GP102 is 471mm².

JFC, so it's nearly twice as big as the 1080's GP104 for the same performance (80% larger), and 20% larger than GP102.
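The ratios, using the die sizes quoted above:

```python
# Die-area ratios from the PCPer figures.
vega10, gp104, gp102 = 564, 314, 471        # mm^2
print(f"Vega 10 vs GP104: {vega10 / gp104:.2f}x")   # ~1.80x, i.e. ~80% larger
print(f"Vega 10 vs GP102: {vega10 / gp102:.2f}x")   # ~1.20x, i.e. ~20% larger
```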

I realize the chances of this are nil, but AMD really needs to stop and take a long, hard look at their uarch. They are clearly wasting tons and tons of die space and power on compute crap they don't need - schedulers, async engines, work stealing, etc. I suspect that a lot of this scales up in complexity as you increase the core count.

All of that is just overhead on the actual graphics work. Get loving rid of it and get a larger number of scalable cores on the silicon.

It seems like AMD fans love this somehow, like the fact that GCN can shuffle paperwork faster than anyone else totally excuses the factory floor being a disaster area, and it being a totally awesome thing that you need to write low-level code to extract good performance from it ("but my cleaner APIs!")

Paul MaudDib fucked around with this message at 19:57 on Jun 30, 2017

eames
May 9, 2009

I don't understand why they didn't scrap Vega and scale Polaris up. What a total trainwreck.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Rastor posted:

Seriously AMD like WTF did you put in that thing

Who is it even for

It's obvious that AMD has aspirations of competing in the datacenter/enterprise market. It was obviously a core focus of the Fiji/Vega design, and right now they're doing a big push to get their ROC software ecosystem launched. I'm sure AMD likes this market in theory since they don't have to have tons of high-paid driver devs tuning every single game; if your compute program doesn't run then it sucks to be you. But the NVIDIA ecosystem lock-in is rock solid here and I just think they don't really have any chance. Particularly not if they're pushing a 1080 that pulls 300W. Even at a bargain price, let alone at the margins that AMD wants.

But was that actually plan A? Is what we're seeing a reaction to knowing the silicon was trash for gaming and not much could be done, and pushing launch back by 6 months to get the enterprise ecosystem up, rather than to get consumer drivers polished up?

Because as far as I can tell, it really looks like the AMD driver devs have spent the last 6 months jerking off. Performance looks pretty much like it did when they demoed Doom an eternity ago (3 months?); they haven't meaningfully pushed past that 1070/1080-ish level of performance in any title so far.

As far as this specific card... it's just out so they can say to the shareholders that they launched. If AMD hadn't publicly promised Vega in 1H, I really think this would be at least 3 months out if not just outright smothered in the cradle.

Paul MaudDib fucked around with this message at 20:13 on Jun 30, 2017

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord
Maybe you download some DLC to unlock more registers or CUs

HamHawkes
Jan 25, 2014
In the comments section on the PCPer article they said they ran some triangle test and found out the tile-based rasterizer wasn't turned on on the Vega FE. Not sure if that means much.

Cygni
Nov 12, 2005

raring to post

Thing could pretty easily be made into a Vega Nano with all that extra PCB space... but you probably need to do a monoblock water cooler or something with all its heat issues.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
PCPer's review is out. GN has a card and Blender has crashed twice during rendering for them, running in Pro mode :lol:

Of note in PCPer's review: the power and clock speeds section. It does appear to be thermally throttling on the default fan curve; it starts backing off the throttle at 85C. However, clock speeds do not drop, so it does look like there's a new form of gating in play.

I still think 300W is an artificial power target though, since it hardly ever boosts up to its full rated 1600 MHz (PCPer show it touching 1600 in Heaven once, for a single sample). I am pretty sure that means the full boost clocks require 375W. I wouldn't be surprised if FaustianQ is right here and this means it's choosing to turn off some cores to boost higher; that would certainly help get the geometry throughput up.





Blower is terrible, very loud at 100% (guessing it's the same RX 480 blower), but card doesn't break 60C at 100%.

Conclusion: performance falls between a 1070 and a 1080, but in professional tasks it does flirt with P5000 and P6000 and scores some wins there. But while a Titan can game, there is really no reason to consider this card unless you spend all day in Maya.

Paul MaudDib fucked around with this message at 20:38 on Jun 30, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Cygni posted:

Thing could pretty easily be made into a Vega Nano with all that extra PCB space... but you probably need to do a monoblock water cooler or something with all its heat issues.



Oh by the way, PCPer did a teardown on the PCB too.



I think this gives us a definite postmortem on this processor though. Too many neurotypicals, there's no way a proper engineering team would have passed this chip with the off-center alignment of those HBM stacks on the package, or the wavy-rear end lines of discretes around the edges :spergin:

sauer kraut
Oct 2, 2004
Somehow I have a bad feeling that AMD will do a token launch next month of a gaming model that no one would want (until it gets inventory-flushed for $200), and never talk about it again.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Paul MaudDib posted:

It's obvious that AMD has aspirations of competing in the datacenter/enterprise market.

I'm almost positive this is it; everything they've added is really just to crunch a massive amount of numbers. FP16+HBCC makes the most sense here, and I'm sure enterprise/HPC loves the IDEA of a GPU with fine granular power gating to minimize power spent for a specific workload. But Vega is literally just poor execution of this in every way, to the point of why would anyone even bother. You have to trade in too much to change over to Vega from whatever Nvidia is supplying you, so either AMD offers entirely complete systems (and IBM might be interested in boosting AMD here too, because they share GloFo as a foundry - whatever sells PowerPC), or no one will give a shift because money and :effort:.

There were no compromises in Vega's design; they tried to throw everything they could into it. Now it's not good at anything.

repiv posted:

PCper also measured the die size - it's 564mm². That's actually larger than most of the pixel-counted estimates :suicide:

https://www.pcper.com/news/Graphics-Cards/Radeon-Vega-Frontier-Edition-GPU-and-PCB-Exposed

For reference GP104 is 314mm² and GP102 is 471mm².

Big Polaris (Polaris 13? 23?) would have stomped the poo poo out of Vega because Polaris is a better overall gaming uarch. IMHO, some changes to power gating, TBR, increased L2, optimization for higher clockspeeds, and a slightly better geometry processor would fix most of what ails Polaris. Maybe that is Vega 11, all the HPC poo poo cut out, etc, but if it's not, then if I were AMD I'd start forking Polaris and Vega: Polaris as gaming-focused and Vega for datacenter/enterprise. Winnow each design down into specialized uarchs.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

FaustianQ posted:

I'm almost positive this is it; everything they've added is really just to crunch a massive amount of numbers. FP16+HBCC makes the most sense here, and I'm sure enterprise/HPC loves the IDEA of a GPU with fine granular power gating to minimize power spent for a specific workload. But Vega is literally just poor execution of this in every way, to the point of why would anyone even bother. You have to trade in too much to change over to Vega from whatever Nvidia is supplying you, so either AMD offers entirely complete systems (and IBM might be interested in boosting AMD here too, because they share GloFo as a foundry - whatever sells PowerPC), or no one will give a shift because money and :effort:.

...

Big Polaris (Polaris 13? 23?) would have stomped the poo poo out of Vega because Polaris is a better overall gaming uarch. IMHO, some changes to power gating, TBR, increased L2, optimization for higher clockspeeds, and a slightly better geometry processor would fix most of what ails Polaris. Maybe that is Vega 11, all the HPC poo poo cut out, etc, but if it's not, then if I were AMD I'd start forking Polaris and Vega: Polaris as gaming-focused and Vega for datacenter/enterprise. Winnow each design down into specialized uarchs.

Again, we don't really know if power gating is a problem or not. This chip could have been even hotter if it couldn't turn off a third of its cores while boosting, or else it could have been even slower as it clocked down and lost geometry performance. If you assume that this chip needs to clock as high as possible to get its geometry throughput up, it's actually a vital feature to be able to manage its power by some other means.

Since Pascal it just seems abundantly clear that AMD is overengineering this problem, compared to what the gaming market actually needs. With Maxwell there was a solid argument that NVIDIA might have gimped their uarch in a way that actually mattered, and that NVIDIA might have to back off and add some of the stuff they cut from the gaming dies. When Pascal came out and DX12 + async performance improved... it was obvious they had managed to work around the problem without touching most of the scheduling hardware. And even worse, they did put all that stuff back on Compute Pascal, which was basically Fiji Done Right. AMD doesn't seem to be able to make a single shared die work well for both these markets, it's time to split it. Not that they can afford that.

I just don't think they'll have any takeup in the enterprise market with this product, especially not with Volta right around the corner. The enterprise market doesn't believe in FineWine™, they've been burned by that far too many times before. And really the compute performance isn't going to change much, unless there's bugs in caching/etc. It's just too hot and too slow, and this isn't a segment that is swayed by a $100 savings on a GPU. The software ecosystem release (ROC) is necessary, but not sufficient, and nobody wants to deploy beta software in production.

You have to be careful with predictions like "Fat Polaris would have stomped the poo poo out of Vega" because you can't just double the cores and get double the performance. Fiji is the poster child for that. But yes, I agree in general, Vega seems to be a regression over Polaris's IPC. Polaris with more geometry engines would probably have done better.

AMD had Fat Polaris in the 2015 roadmaps, they pulled it in Dec 2015 to move Vega up to Dec 2016. Then 6 months later they pushed Vega back again. Oh, to be a fly on the wall at that boardroom meeting. What was said that made them push off a product that was still 6 months out? That's an immense amount of time to file off rough corners, what made them say they thought they'd suddenly need twice as much time? Who knew what and when?

I don't even know where AMD goes from here. Frankly I've been wondering how far they would go with all of this anyway - after all they just sold XB1X with a Polaris GPU, so they are trapped supporting that uarch for a while. If the drivers are that different, doesn't that lose most of the "GCN advantage" of all being small variations on a common basic uarch? So maybe they would backport whatever they could into a new Polaris or Small Vega and dump the rest. But on the other hand I can't see them admitting failure and dumping TBR to the curb either, especially since they still have a huge die size and efficiency disadvantage.

Hell, I wonder what the current status of Vega 11 is too. We haven't heard jack poo poo, I wonder if it was paused a year ago until AMD knew whether Vega 10 was going to work or not.

Paul MaudDib fucked around with this message at 21:30 on Jun 30, 2017

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!
Oh lordy, on Vega FE a 20% increase in power target results in 350W draw and a boost of 60-70MHz in clock speed to ~1500MHz, based on that we could be seeing 390W+ to get Vega running at 1600MHz.
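Two crude ways to extrapolate that. Neither is Vega-specific: the ~300W/~1440MHz stock baseline is just what's been reported upthread, and the f³ rule is a generic dynamic-power assumption, not anything measured on this card. Both land above that 390W floor:

```python
# Extrapolating the reported numbers (roughly 300 W @ ~1440 MHz stock,
# 350 W @ ~1500 MHz with the raised power target) out to a sustained 1600 MHz.

base_w, base_mhz = 350, 1500
target_mhz = 1600

# (a) Linear: keep the observed ~50 W per ~60 MHz slope going.
slope = (350 - 300) / (1500 - 1440)          # W per MHz
linear = base_w + (target_mhz - base_mhz) * slope

# (b) Cubic-ish: assume voltage has to rise roughly with frequency near the
#     top of the curve, so dynamic power goes roughly with f^3.
cubic = base_w * (target_mhz / base_mhz) ** 3

print(f"linear: ~{linear:.0f} W, cubic: ~{cubic:.0f} W")   # ~433 W and ~425 W
```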

ufarn
May 30, 2009
Obviously, the reasonable course of action is to

https://www.youtube.com/watch?v=bOR38552MJA

blame GloFo

SwissArmyDruid
Feb 14, 2014

by sebmojo

Paul MaudDib posted:

Conclusion: performance falls between a 1070 and a 1080, but in professional tasks it does flirt with P5000 and P6000 and scores some wins there. But while a Titan can game, there is really no reason to consider this card unless you spend all day in Maya.

Again, my dreams of a card that can tackle Solidworks and gaming equally are dashed. Alas, alas.

Paul MaudDib posted:

Oh by the way, PCPer did a teardown on the PCB too.



I think this gives us a definite postmortem on this processor though. Too many neurotypicals, there's no way a proper engineering team would have passed this chip with the off-center alignment of those HBM stacks on the package, or the wavy-rear end lines of discretes around the edges :spergin:



I don't think AMD has hand-laid out circuitry in their parts for... almost a decade now? I remember them touting their areal savings in the early days of Bulldozer, and throwing around the buzzwords "high-density libraries" and "performance per square inch".

Kazinsal
Dec 13, 2011
I mean it still blows my mind that in 2017 I'm sitting here pricing out an AMD CPU to go with my Nvidia GPU like it's the early 2000s again.

I don't need Radeon to poo poo out anything other than what it's doing. I'm in my happy place.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

AVeryLargeRadish posted:

Oh lordy, on Vega FE a 20% increase in power target results in 350W draw and a boost of 60-70MHz in clock speed to ~1500MHz, based on that we could be seeing 390W+ to get Vega running at 1600MHz.

I mean, 20% of 300W is 60W so... exactly what you would expect? :confused:

Until we get more information I'm not super worried about the minutia of clocks and power consumption. There is definitely a Pascal-style micro-power-management system in play there, and we don't know its behavior yet. All the PCPer guys are really saying is that it's definitely clocking down by 85C. I would not be surprised to hear that there are additional boost increments below that - maybe it will only boost to 1600 below 75C or something.

That stuff is just re-arranging deck chairs on the Titanic, to be honest. This card is way, way down in power and performance, unless they can pull some big driver gains out of their hat it's all ogre.

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

Paul MaudDib posted:

I mean, 20% of 300W is 60W so... exactly what you would expect? :confused:

Until we get more information I'm not super worried about the minutia of clocks and power consumption. There is definitely a Pascal-style micro-power-management system in play there, and we don't know its behavior yet. All the PCPer guys are really saying is that it's definitely clocking down by 85C. I would not be surprised to hear that there are additional boost increments below that - maybe it will only boost to 1600 below 75C or something.

That stuff is just re-arranging deck chairs on the Titanic, to be honest. This card is way, way down in power and performance, unless they can pull some big driver gains out of their hat it's all ogre.

I'm not surprised that 20% gets you 350W, I'm surprised that it only gets you 60-70MHz. Is RX Vega going to be a 400W card? 450W? The drat thing might as well need a dedicated PSU!

repiv
Aug 13, 2009

AVeryLargeRadish posted:

I'm not surprised that 20% gets you 350W, I'm surprised that it only gets you 60-70MHz. Is RX Vega going to be a 400W card? 450W? The drat thing might as well need a dedicated PSU!

GamersNexus has an FE and usually does AIO watercooler mods on new cards, so we'll probably find out how much power it can really draw before long :unsmigghh:

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Paul MaudDib posted:

You have to be careful with predictions like "Fat Polaris would have stomped the poo poo out of Vega" because you can't just double the cores and get double the performance. Fiji is the poster child for that. But yes, I agree in general, Vega seems to be a regression over Polaris's IPC. Polaris with more geometry engines would probably have done better.

Nah, I'm fairly confident in my prediction that Fat Polaris was better. The difference in performance between the 1070 and RX 480 is about the distance between the RX 560 and the RX 580, so about 3328-3584 shaders on a 384-bit bus would have matched or beaten the 1070, and if you could actually hit ~1500-1600MHz core clock it might put up a fight with the 1080. I'd wager a 200W TDP as well, 275W when fully overclocked. It'd also be a hell of a lot smaller, like 350-370mm² at the absolute worst.

That, IMHO, is indeed "stomp the poo poo out of Vega" territory, and we're only talking raw shaders/clocks/ROPs, not something like enough L2 to feed TBR, which might have made it close to the 1080 at stock and consistently ahead when overclocked. Like, Polaris is not that far behind Pascal in compute per watt; it's about properly utilizing it all. Unlike Vega, which seems god-awful in perf/watt, period. AMD bet the farm on Vega, or I should say RTG did. AMD actually bet the farm on Ryzen, and it looks like they won.
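Napkin math on that hypothetical (naive linear scaling with shaders × clock, which is exactly the optimistic assumption Paul warned about, so treat it as an upper bound):

```python
# Naive what-if scaling of Polaris 10, deliberately ignoring the caveat that
# performance never scales linearly with shader count (see: Fiji).

rx480_shaders, rx480_mhz = 2304, 1266        # RX 480 reference boost clock
fat_shaders, fat_mhz = 3584, 1500            # hypothetical "Fat Polaris"

relative = (fat_shaders * fat_mhz) / (rx480_shaders * rx480_mhz)
print(f"~{relative:.2f}x an RX 480 in raw shader throughput")   # ~1.84x, optimistically
```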

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

AVeryLargeRadish posted:

I'm not surprised that 20% gets you 350W, I'm surprised that it only gets you 60-70MHz. Is RX Vega going to be a 400W card? 450W? The drat thing might as well need a dedicated PSU!

You're thinking backwards. That's how traditional cards work; this is a Pascal-style GPU Boost 3.0 system.

Package temperature will determine clock speed. Power limit will determine how many of the cores are gated off and disabled.

Think about Pascal: what is the highest temperature you can run at and still get the highest boost bins (2050+)? It's like 75C, right? By "upper 70s" you are definitely getting some bins locked out.
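Something roughly like this, anyway (the ~13MHz bin size and the temperature steps below are ballpark figures people have observed, not documented values):

```python
# Temperature-gated boost bins in the spirit of GPU Boost 3.0. Bin size and
# thresholds are approximations for illustration only.

BIN_MHZ = 13
TEMP_STEPS_C = [37, 47, 54, 62, 68, 75, 83]   # each threshold crossed costs one bin

def boosted_clock(max_boost_mhz, temp_c):
    bins_lost = sum(1 for t in TEMP_STEPS_C if temp_c >= t)
    return max_boost_mhz - bins_lost * BIN_MHZ

print(boosted_clock(2050, 40))   # 2037: one bin gone already
print(boosted_clock(2050, 76))   # 1972: well down the ladder by the upper 70s
```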

edit:

repiv posted:

GamersNexus has an FE and usually does AIO watercooler mods on new cards, so we'll probably find out how much power it can really draw before long :unsmigghh:

Bingo, this is the test that needs to be done.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

Cygni
Nov 12, 2005

raring to post

SwissArmyDruid posted:

I don't think AMD has hand-laid out circuitry in their parts for... almost a decade now? I remember them touting their areal savings in the early days of Bulldozer, and throwing around the buzzwords "high-density libraries" and "performance per square inch".

They still do hand work for their CPUs (them, Intel, and Apple are the only big players that still do, from what I've read), but everyone's GPUs tend to be mostly synthesized as far as I know.

nerdrum
Aug 17, 2007

where am I

AVeryLargeRadish posted:

Oh lordy, on Vega FE a 20% increase in power target results in 350W draw and a boost of 60-70MHz in clock speed to ~1500MHz, based on that we could be seeing 390W+ to get Vega running at 1600MHz.



Looking forward to SLI 1080 power consumption on a single card at its factory clocks.

Craptacular!
Jul 9, 2001

Fuck the DH
Dumb question for people who know the industry better than I do: is AMD setting themselves up for Apple?

Apple has so much money that the market is starting to get pissed that they're approaching Switzerland in net worth yet not doing anything with it. Their last round of MacBooks were criticized for design compromises they threw at Intel.

A company making a competitive CPU architecture to Intel that has a sorta-kinda-who cares graphics department seems like the kind of thing that they would be interested in.

eames
May 9, 2009

They would lose their x86 license the second they're acquired, does that answer your question? :)

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Craptacular! posted:

Dumb question for people who know the industry better than I do: is AMD setting themselves up for Apple?

Apple has so much money that the market is starting to get pissed that they're approaching Switzerland in net worth yet not doing anything with it. Their last round of MacBooks were criticized for design compromises they threw at Intel.

A company making a competitive CPU architecture to Intel that has a sorta-kinda-who cares graphics department seems like the kind of thing that they would be interested in.

Intel and AMD have each other's x86/x64 licenses; it's mutually assured destruction if either of them is acquired or goes bankrupt. Intel is happy to keep AMD as a fake competitor forever.

NewFatMike
Jun 11, 2015

This makes me very uninspired for my future APU experiments

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

eames posted:

They would lose their x86 license the second they're acquired, does that answer your question? :)

Intel would lose it too though, and I suspect they'd come back to the table. AMD and Apple might be able to do a JV too, IIRC AMD did one recently in China.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Craptacular! posted:

Dumb question for people who know the industry better than I do: is AMD setting themselves up for Apple?

This has come up before, and there are two problems with this theory:

(1) If Apple was interested, one would expect they'd have tried to buy in a year or two ago when AMD was struggling like hell and the stock was cheap. Depending on how far back you go over the last year or two, the stock is up 200-350%.

(2) The licensing agreements between AMD and Intel about x86 are non-trivial to figure out, and there's a reasonable question about whether the cross-license would survive an acquisition like that. That alone might be enough to make Apple wave off on purchasing AMD as a whole, and who the hell would want just RTG at this point? Apple doesn't do HPC much, and hilariously power-hungry cards would be the worst possible fit for Apple's desktop lineup (yes, I know they're going in there anyhow...but that's mostly out of spite against NVidia).

GRINDCORE MEGGIDO
Feb 28, 1985


Is there any reason to expect the Vega card is huge and weird because it's doing something Apple specifically wants?

SwissArmyDruid
Feb 14, 2014

by sebmojo

Cygni posted:

They still do hand work for their CPUs (them, Intel, and Apple are the only big players that still do, from what I've read), but everyone's GPUs tend to be mostly synthesized as far as I know.

Not as of Carrizo, they weren't.

http://www.anandtech.com/show/9319/amd-launches-carrizo-the-laptop-leap-of-efficiency-and-architecture-updates/3

SwissArmyDruid fucked around with this message at 23:27 on Jun 30, 2017

Cygni
Nov 12, 2005

raring to post

My guess is Apple isn't buying AMD, because they don't need to. They already have a full in-house design team for both CPUs and GPUs for their profit-driving products, and they have such incredible buying power that they can force their fabs to take all the risk to make their products. For example, today's news that the A10X, an SoC pretty close to Ryzen in complexity and transistor count, is actually using TSMC's brand new 10nm process. That's a huge risk, and I guarantee you that risk was on TSMC's side more than Apple's.

On the Mac side, it's a tiny slice of their revenue pie and the market as a whole, but they make all of the premium profits there. So they can force their suppliers to bid to rock-bottom prices and stretch themselves to the limit, because they sell all the premium products that the suppliers need to move for their own bottom lines. Their suppliers take all the risks and the margin hits, and Apple takes all the profit.

Buying AMD would give them access to some engineering talent and a patent library, but they can poach every engineer in silicon valley they want anyway. I don't think they need it.

E:

Yeah, my understanding (and I could be totally wrong, this ain't my field, just a nerdy thing I read about) is that the design-libraries concept reshuffled the die design and redid some units, but some of those original blocks being moved around were still done with top plot initially. Here's a Bristol Ridge die shot I pulled up. You can clearly see the synthesized/auto-placed stuff in the GPU on the right (the blobby-looking telltales) and inside the compute cores, but there are also some more hand-designed-looking sections in there.

Again, I could be totally wrong. I know the Cat-cores were nearly all synthesized, for example.

Cygni fucked around with this message at 23:36 on Jun 30, 2017


Generic Monk
Oct 31, 2011

eames posted:

This youtube video is about the best thing there is, the behaviour isn't very well documented yet and hard to reverse-engineer because it appears to be managed by on-die circuitry. His testing suggests that Pascal not only throttles clocks but also silently shuts down shader units when it's too far out of spec.

https://www.youtube.com/watch?v=bflLDenKirQ

my testing suggests that this guy needs to cut his loving hair

GRINDCORE MEGGIDO posted:

Is there any reason to expect the Vega card is huge and weird because it's doing something Apple specifically wants?

most if not all apple products at this point are characterised by being as small as possible and incredibly tdp limited because of that; this is the opposite of what they would want

Craptacular! posted:

Dumb question for people who know the industry better than I do: is AMD setting themselves up for Apple?

Apple has so much money that the market is starting to get pissed that they're approaching Switzerland in net worth yet not doing anything with it. Their last round of MacBooks were criticized for design compromises they threw at Intel.

A company making a competitive CPU architecture to Intel that has a sorta-kinda-who cares graphics department seems like the kind of thing that they would be interested in.

apple already make CPUs for ios devices that rival comparable intel (let alone amd) parts (seriously look at the benches for the new ipad pro; it smokes core m in the retina macbook and is nipping at the heels of the chip in the 13 inch macbook pro). they also cut imagination technologies loose pretty recently and have reportedly formed their own internal GPU design team. i don't think they'd start designing their own CPUs and GPUs for the mac though since the opportunity cost is probably too high for a platform so comparatively low volume, but the same goes for acquiring AMD. sure would be nice on the GPU side tho - that or switching back to nvidia - what with AMD's output being so woeful recently. i guess we just have to wait until apple starts getting billed for damages when vega in the imac pro melts through the casing. and floor.

Generic Monk fucked around with this message at 01:23 on Jul 1, 2017
