eames
May 9, 2009

SourKraut posted:

I don't think Volta is getting released this year?

Yeah, the CEO's comments made it very clear that Volta isn't coming out this year; Q2 2018 seems likely now. (Truth be told, they could even squeeze in a Pascal refresh and push Volta back even further :suicide:)

PC Gamer posted:

"Volta for gaming, we haven't announced anything. And all I can say is that our pipeline is filled with some exciting new toys for the gamers, and we have some really exciting new technology to offer them in the pipeline. But for the holiday season for the foreseeable future, I think Pascal is just unbeatable," Huang stated during a recent earnings call.

"It's just the best thing out there. And everybody who's looking forward to playing Call of Duty or Destiny 2, if they don't already have one, should run out and get themselves a Pascal."
source

Poor Volta has to wait another six months for its launch because Vega isn't competitive.

wargames
Mar 16, 2008

official yospos cat censor

FaustianQ posted:

Yeah, not this year, and I kind of expect it either in March or in June, leaning March. The issue with June is that if AMD is indeed taping out Navi on 7nm in, like, October/November and it's not a horrorshow, the process advantage alone could make them competitive and they'd be able to do a holiday release. So if Nvidia waits too long, AMD could theoretically get a bunch of people to wait on purchasing Volta. Releasing in March avoids this entirely. Having the process advantage would be a no-joke way for AMD to match Nvidia even now with Vega, as there is something like a 60% reduction in power at similar performance versus 14nm, so Vega would sit at ~160W average and 200W max overclock. Basically, the later Nvidia releases Volta, the more AMD is likely to fully recover from Polaris/Vega, and if Nvidia is interested in cornering the market they'll keep kicking AMD while they're down until they stop twitching. IIRC, Nvidia isn't planning a release on 7nm until late 2019, and that's a lot of time.

Keep in mind AMD has canceled any future plans for Vega and NCU; they did this in like very late May or June IIRC. There were originally plans for moving Vega to 7nm that were circulated around in 2016, probably prior to figuring out how much of a horrorshow Vega was. Navi is likely to simply be that much better, so I'm guessing AMD is going to give Vega one more respin on 14nm for early 2018 (Vega 11 and 12, they won't bother with Vega 10) to deal with Volta, and then it'd be onward to Navi for very late 2018/early 2019. Vega 10 will get shunted off to exclusively Server/Workstation/HPC until it's replaced mid-2019.

Nvidia just has too much mindshare and no one is going to wait for Navi; people may buy Volta and then flip to Navi if Navi is amazing.

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord
Navi will probably be amazing

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

wargames posted:

Nvidia just has too much mindshare and no one is going to wait for Navi; people may buy Volta and then flip to Navi if Navi is amazing.

Yet as things go, that might happen much faster than expected. I just don't see why Nvidia wouldn't have a March release of Volta; wait too long and they're either stuck not releasing Volta (which they have sunk an incredible amount of time and money into) or they release it into a situation that's just not competitive. Mindshare or not, that's how you lose sales and let your competition recover.

wargames
Mar 16, 2008

official yospos cat censor

FaustianQ posted:

Yet as things go, that might happen much faster than expected. I just don't see why Nvidia wouldn't have a March release of Volta; wait too long and they're either stuck not releasing Volta (which they have sunk an incredible amount of time and money into) or they release it into a situation that's just not competitive. Mindshare or not, that's how you lose sales and let your competition recover.

Or release it 3-5 months before Navi at inflated prices. Mindshare buys a lot of cards, and no one has had a new card in, what, almost 2 years by then, so everyone switches. Navi gets released, then some people flip, but you still sold a card; you just didn't retain a customer.

Riflen
Mar 13, 2009

"Cheating bitch"
Bleak Gremlin
There is the possibility that the next Geforce won't be based on Volta at all. Volta is very obviously a compute-oriented design. Nvidia could well execute a revision to Pascal on the improved 16nm (12nm) process (they may have to increase area slightly, though).

I mean logically whatever arrives will be an evolution of Pascal no matter what the name. We could get GP20x around Spring, which would essentially be Maxwell 4.0.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



So is there potentially going to be a refresh of Pascal soon, or is it safe to get a 1080?

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Riflen posted:

There is the possibility that the next Geforce won't be based on Volta at all. Volta is very obviously a compute-oriented design. Nvidia could well execute a revision to Pascal on the improved 16nm (12nm) process (they may have to increase area slightly, though).

I mean logically whatever arrives will be an evolution of Pascal no matter what the name. We could get GP20x around Spring, which would essentially be Maxwell 4.0.

Or, yeah, Volta could in fact just be their compute-oriented solution and they'll continue to forever release iterations of Maxwell for consumers, because Nvidia just has that kind of money to be able to properly service two markets. This also gets their higher-margin customers off their back about how weak Maxwell was for their purposes.

wargames
Mar 16, 2008

official yospos cat censor

SourKraut posted:

So is there potentially going to be a refresh of Pascal soon, or is it safe to get a 1080?

I do not think a refresh will happen soon.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

SourKraut posted:

So is there potentially going to be a refresh of Pascal soon, or is it safe to get a 1080?

If you're thinking of a 1080, think hard about a 1080 Ti.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

eames posted:



Poor Volta has to wait another six months for its launch because Vega isn't competitive.

Because GDDR6 isn't ready until Q1.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Don Lapre posted:

Because GDDR6 isn't ready until Q1.

I thought Micron was going to have GDDR6 in like, September-October?

Riflen
Mar 13, 2009

"Cheating bitch"
Bleak Gremlin
GDDR6 is not necessary for all products in the Geforce range. GDDR5X is spec'd to run up to 14Gbps and the fastest we've seen so far is 11.4Gbps on the Titan Xp.

I can see GDDR6 (beginning at 16Gbps) being used on the 2018 Titan though.

For the guy asking about the GTX 1080: since 2012, the average time between x80 Geforce releases has been ~16 months. The largest gap so far was 20 months, between the 980 and the 1080. The GTX 1080 was released in May 2016.
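
(A quick back-of-the-envelope check of that cadence, as a sketch; the release dates below are from memory, so treat them as assumptions rather than gospel.)

from datetime import date

# Approximate x80 Geforce release dates (assumed, from memory)
releases = {
    "GTX 680": date(2012, 3, 22),
    "GTX 780": date(2013, 5, 23),
    "GTX 980": date(2014, 9, 18),
    "GTX 1080": date(2016, 5, 27),
}

names = list(releases)
gaps = []
for prev, cur in zip(names, names[1:]):
    months = (releases[cur] - releases[prev]).days / 30.44  # average month length
    gaps.append(months)
    print(f"{prev} -> {cur}: {months:.0f} months")

print(f"average gap: {sum(gaps) / len(gaps):.1f} months, largest: {max(gaps):.0f} months")

That lands at roughly 14, 16 and 20 month gaps, averaging ~16.7 months, which is where the ~16-month figure and the 20-month 980-to-1080 gap come from.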

Nvidia only talks about a product a few weeks before it's available, so no one except Nvidia knows at this point. Consensus is that nothing new is coming until Q1 or Q2 2018.

If you're wanting to play a lot over Autumn / Winter I would just buy now and resell in 2018, but I'm lucky enough to have plenty of money for toys.

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.

AMD statement on RX Vega 64 pricing posted:

Radeon RX Vega 64 demand continues to exceed expectations. AMD is working closely with its partners to address this demand. Our initial launch quantities included standalone Radeon RX Vega 64 at SEP of $499, Radeon RX Vega 64 Black Packs at SEP of $599, and Radeon RX Vega 64 Aqua Packs at SEP of $699. We are working with our partners to restock all SKUs of Radeon RX Vega 64 including the standalone cards and Gamer Packs over the next few weeks, and you should expect quantities of Vega to start arriving in the coming days.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Riflen posted:

GDDR6 is not necessary for all products in the Geforce range. GDDR5X is spec'd to run up to 14Gbps and the fastest we've seen so far is 11.4Gbps on the Titan Xp.

I can see GDDR6 (beginning at 16Gbps) being used on the 2018 Titan though.

For the guy asking about the GTX 1080: since 2012, the average time between x80 Geforce releases has been ~16 months. The largest gap so far was 20 months, between the 980 and the 1080. The GTX 1080 was released in May 2016.

Nvidia only talks about a product a few weeks before it's available, so no one except Nvidia knows at this point. Consensus is that nothing new is coming until Q1 or Q2 2018.

If you're wanting to play a lot over Autumn / Winter I would just buy now and resell in 2018, but I'm lucky enough to have plenty of money for toys.

GDDR5X seems more like a stepping stone, though; IIRC only Micron is making it, while GDDR6 looks to be made by Samsung and SK Hynix. GDDR5 seems to be scaling upwards fine as well, so I'm just not seeing the space for GDDR5X anymore.

1gnoirents
Jun 28, 2014

hello :)

Ah yes, the EVGA "we don't have parts for the SC model, just the SC+, but we are working on it", except it's the actual GPU company.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Twerk from Home posted:

If you're thinking of a 1080, think hard about a 1080 Ti.

When I have time for games it's typically Overwatch, Diablo 3, WoW, Civ VI, Halo Wars 2, and probably the upcoming Call of Duty. Hitting 144 Hz is nice at high settings but with G-sync I'm not concerned with maxing everything out.

metallicaeg
Nov 28, 2005

Evil Red Wings Owner Wario Lemieux Steals Stanley Cup

SourKraut posted:

When I have time for games it's typically Overwatch, Diablo 3, WoW, Civ VI, Halo Wars 2, and probably the upcoming Call of Duty. Hitting 144 Hz is nice at high settings but with G-sync I'm not concerned with maxing everything out.

Unless you're running ultrawide 1440 or 4k (doubtful with the mention of 144hz Gsync), a 1080 will play all of that just fine.

Sincerely,
A 1440p/144Hz Gsync 1080 owner

Riflen
Mar 13, 2009

"Cheating bitch"
Bleak Gremlin

FaustianQ posted:

GDDR5X seems more like a stepping stone, though; IIRC only Micron is making it, while GDDR6 looks to be made by Samsung and SK Hynix. GDDR5 seems to be scaling upwards fine as well, so I'm just not seeing the space for GDDR5X anymore.

You're probably right. I just did some more reading and it actually seems like Micron, Samsung and SK Hynix are all going to manufacture GDDR6, but that SK Hynix's initial chips will max out at 14Gbps.

There is now probably no reason for GDDR5X (which looks like it will end at 12Gbps) on the next x80 Geforce. They lowered the starting speed for GDDR6 from 16Gbps to 14Gbps and the top speed for GDDR5X from 14Gbps to 12Gbps.

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

eames posted:

Could somebody give me a quick TLDR of what happened between the Vega FE launch and now?
From what I can tell it delivers performance comparable to the 1070/1080 at hilariously worse efficiency and comparable/higher price points; I don't understand why people are getting so worked up about a product that always looked like it would be mediocre at best.

Because AMD lied to reviewers and the public about the MSRP pricing. The cards cost more than MSRP wholesale to retailers, so the only way retailers can currently sell cards at MSRP is via AMD reimbursing them for the loss they take by selling at the MSRP. AMD did that for the very first wave of cards; then the retailers asked if they would be reimbursed for anything beyond that, and AMD was like "lol, nope". So unless that changes, the MSRP is a fantasy, just nonsense that AMD told everyone to get positive day-one press coverage. No one expected the cards to stay at MSRP, everyone expected shortages, gouging and so on, but everyone thought the MSRPs were at least mathematically possible. Now it turns out that they were impossible from the start, and AMD knew that but didn't really give a drat whether the prices were true or not, nor did they care about whether they were telling us the truth or lying to our faces. AMD does not care about being truthful so no one should trust anything they say in the slightest.

GRINDCORE MEGGIDO
Feb 28, 1985


That sucks. And they were doing so well with Zen.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

AVeryLargeRadish posted:

AMD does not care about being truthful so no one should trust anything they say in the slightest.

See also: 4 GB of HBM2 holds more than 6 GB or 8 GB of GDDR.

More recently, the whole "the reason our TDPs don't match up to the real power measurements is because running the chip hotter makes it draw less power" song-and-dance.

AMD has a habit of making some outrageous technical arguments when they are on the defensive. People implicitly trust the credentials of the speaker (I mean, surely a Chief Gaming Scientist knows what they're talking about?) and the arguments are technical enough that people's eyes glaze over, but if you dig into the actual arguments they never hold water.

HBM's performance doesn't affect the speed at which you can swap over the PCIe bus, which is catastrophically slow compared to the performance of VRAM (about 5% on Fiji). The data you need is not in memory, so the performance of the memory is irrelevant; the bottleneck is in a totally different spot.

And while the equation Robert is showing doesn't directly include power, the factors in the equation do include power. Which should make intuitive sense: the equation implicitly includes the amount of power you're dissipating, otherwise you end up at the obviously nonsensical conclusion that a single tiny heatsink could dissipate infinite amounts of power as long as it was made from something like copper or diamond that could pipe heat away really easily. Not that the material makes no difference, but it doesn't let you magically dissipate the energy of the sun on a single heatsink either. And while it's true that you can temporarily sink a lot of power into a cold heatsink, during steady-state operation you need to push heat out at the same average rate it goes in.
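
To put a minimal worked version on that intuition, here's the standard junction-to-ambient thermal resistance model (the numbers are purely illustrative, not any vendor's spec):

% Steady-state thermal resistance model (illustrative numbers only)
\[
  T_j = T_a + P\,\theta_{JA}
  \quad\Longrightarrow\quad
  P = \frac{T_j - T_a}{\theta_{JA}}
\]
% Example: theta_JA = 0.25 C/W, ambient 30 C, junction held at 80 C
\[
  P = \frac{80 - 30}{0.25} = 200~\mathrm{W}
\]

A better heatsink material lowers \(\theta_{JA}\), which raises the power you can dissipate at a given temperature delta, but for any finite delta the dissipated power stays finite.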

Paul MaudDib fucked around with this message at 19:40 on Aug 18, 2017

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

So they have not just solved leakage, they have reversed it. Amazing.

Arzachel
May 12, 2012

That link is absolutely correct, and a higher temperature delta between the chip and the case air will most definitely increase the heat transfer rate. It's still dumb, because everyone expects TDP to be a reasonably close approximation of power draw, and the actual rated dissipation values are not very meaningful for consumers, so the whole thing is clearly self-serving for AMD. But did you read the thing you quoted?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Arzachel posted:

That link is absolutely correct, and a higher temperature delta between the chip and the case air will most definitely increase the heat transfer rate. It's still dumb, because everyone expects TDP to be a reasonably close approximation of power draw, and the actual rated dissipation values are not very meaningful for consumers, so the whole thing is clearly self-serving for AMD. But did you read the thing you quoted?

Yes. But as you noted, it's still wrong/dumb so I'm not sure what your point is.

Thermal watts equals electrical watts, period the end. All power put into a CPU ends up turning into heat, minus let's say no more than a tenth of a watt of RF energy and inductive losses. An utterly insignificant amount. And regardless of operating temperature, during steady-state operation the cooler needs to dissipate the same amount of heat that is put into it. Otherwise the cooler will heat up, i.e. not steady-state operation.

Robert shows his hand right at the end with this argument:

quote:

The point, here, is that TDP is a cooler spec to achieve what's printed on the box. Nothing more, nothing less, and power has nothing to do with that. It is absolutely possible to run electrical power in excess of TDP, because it takes time for that electrical energy to manifest as excess heat in the system. That heat can be amortized over time by wicking it into the silicon, into the HSF, into the IHS, into the environment. That's how you can use more electrical energy than your TDP rating without breaking your TDP rating or affecting your thermal performance.

He is not talking about steady state operation, i.e. the thing you design a cooler for (which throws the idea that this is somehow a number "for partners to match thermal solutions" into the dumpster). He is specifically talking about the period of time during which the CPU is still coming up to temperature and pretending like that also means that the cooler somehow doesn't have to dissipate all that energy during steady-state operation. The stuff about "thermal vs electrical watts" and "operating temperature" is just hand-waving to make your eyes glaze over. It's true under the very narrow circumstances which he only outlines at the very end, but it's utterly irrelevant to his core argument which is that these things somehow affect the TDP, which is utterly false. If the processor pulls 50 watts of power on average, you need to dissipate 50 watts of heat on average, end of story.

This is what I mean about AMD making absolutely outrageous technical arguments to try and cover up weaknesses in their products. All the best lies have a few nuggets of truth in them so idiots will nod along and go "yeah that's true". But the overall argument that you can somehow decrease heat output power below electrical input power by increasing the operating temperature is ludicrous.

Where else would the power be going if not the cooler? Answer: the CPU heats up, i.e. it hasn't reached its steady-state temperature yet.
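
A minimal way to write that argument down is a lumped thermal model (generic symbols, not anything from AMD's post): heat capacity C, thermal resistance \(\theta\) to ambient.

\[
  C\,\frac{dT}{dt} \;=\; P_{\mathrm{elec}} \;-\; \frac{T - T_a}{\theta}
\]
% While the silicon/heatsink are still warming up, dT/dt > 0, so electrical power
% can temporarily exceed what the cooler is pushing out (Robert's narrow point).
% At steady state dT/dt = 0 and the transient term vanishes:
\[
  P_{\mathrm{dissipated}} \;=\; \frac{T - T_a}{\theta} \;=\; P_{\mathrm{elec}}
\]

So averaged over time the cooler has to remove exactly the electrical power going in, which is the 50-watts-in, 50-watts-out point above.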

Paul MaudDib fucked around with this message at 20:43 on Aug 18, 2017

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Paul MaudDib posted:

Yes. But as you noted, it's still wrong/dumb so I'm not sure what your point is.

Thermal watts equals electrical watts, period the end. All power put into a CPU ends up turning into heat, minus let's say no more than a tenth of a watt of RF energy and inductive losses. An utterly insignificant amount. And regardless of operating temperature, during steady-state operation the cooler needs to dissipate the same amount of heat that is put into it. Otherwise the cooler will heat up, i.e. not steady-state operation.

Not to go full sperg, and I agree with a lot of what you said, but a CPU, and by (thermal) extension its cooler, is typically not at true steady state during normal usage. The cooler attenuates a lot of the variation, but there is some.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SourKraut posted:

Not to go full sperg, and I agree with a lot of what you said, but a CPU, and by (thermal) extension its cooler, is typically not at true steady state during normal usage. The cooler attenuates a lot of the variation, but there is some.

Small variations during normal operation fall under the "average" part. Yeah sometimes it's a degree or two hotter or colder, that's why we're talking about averages. On average the cooler needs to move the average heat input back out.

There is also the "ultrabook model" where you use an insufficient cooler and hope the task is done before you hit Tjunction and have to throttle down and cool the processor off. Even with the ultrabook model, it's customary to rate TDP at a minimum of all-core load at base clocks, and turbo pulls whatever it pulls. AMD's TDP allotment doesn't even cover that much, the 1700 is more like 90-100W at base clocks and the 1700X and 1800X are more like 120W, i.e. roughly 30% higher than the official ratings.

Their TDP numbers are pure fantasy, you cannot hit them under an all-core load in a real-world situation. Then they resort to this kind of handwaving to justify it. No, running hot does not reduce your TDP, period, and nobody designs workstation cooling solutions on the expectation of bursty loads with lots of cool-down time afterwards.

Paul MaudDib fucked around with this message at 21:09 on Aug 18, 2017

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Paul MaudDib posted:

Yes. But as you noted, it's still wrong/dumb so I'm not sure what your point is.

Thermal watts equals electrical watts, period the end. All power put into a CPU ends up turning into heat, minus let's say no more than a tenth of a watt of RF energy and inductive losses. An utterly insignificant amount. And regardless of operating temperature, during steady-state operation the cooler needs to dissipate the same amount of heat that is put into it. Otherwise the cooler will heat up, i.e. not steady-state operation.

Robert shows his hand right at the end with this argument:


He is not talking about steady state operation, i.e. the thing you design a cooler for (which throws the idea that this is somehow a number "for partners to match thermal solutions" into the dumpster). He is specifically talking about the period of time during which the CPU is still coming up to temperature and pretending like that also means that the cooler somehow doesn't have to dissipate all that energy during steady-state operation. The stuff about "thermal vs electrical watts" and "operating temperature" is just hand-waving to make your eyes glaze over. It's true under the very narrow circumstances which he only outlines at the very end, but it's utterly irrelevant to his core argument which is that these things somehow affect the TDP, which is utterly false. If the processor pulls 50 watts of power on average, you need to dissipate 50 watts of heat on average, end of story.

This is what I mean about AMD making absolutely outrageous technical arguments to try and cover up weaknesses in their products. All the best lies have a few nuggets of truth in them so idiots will nod along and go "yeah that's true". But the overall argument that you can somehow decrease heat output power below electrical input power by increasing the operating temperature is ludicrous.

Where else would the power be going if not the cooler? Answer: the CPU heats up, i.e. it hasn't reached its steady-state temperature yet.

First off, what I am going to type is not an argument, but me trying to link my POV of graphics card tech tricks to what you are saying.

On my Nvidia 1070, if it stays at 80C (read from Afterburner, so accuracy is up in the air) or something higher for too long, it will reduce the power and the core clock permanently until I just about reboot the entire system. Pulling up the core clock and allowed power threshold after it slams down in clock and voltage does nothing at all to bring the original core speed back.

Now, on my old 7970 GHz, the core and mem clocks are built in to start at 1050/1500 respectively. As you may have seen a couple of times in the thread, I didn't have the original insane heatsink and fans, so as soon as I turned it on with a third-party heatsink not built for OC it would hit 100C and then slam down the core and not the power. It will slowly come out of the funk if I set the core to normal reference clock rates. It would never get back to 1050 with that third-party heatsink until I got better fans attached. As soon as it got close to 80 it would back off the core.

One thing I learned from these experiences is that these cards are way better at protecting themselves than the video cards back during the 2000s, where a chip would fry itself with nothing stopping it if you forgot to put paste on.

From what I read in your posts I keep thinking you are talking about the safeguards, but I know you are not. What am I missing in trying to understand your posts?

Cygni
Nov 12, 2005

raring to post

Custom PCB Vega cards are starting to show up. ASUS's solution to the Vega performance issues? More power, bigger cooler. The card uses 40-50W more than reference. When OCed, it can pull over 400W...

http://wccftech.com/asus-rog-strix-radeon-rx-vega-64-gets-reviewed/

redeyes
Sep 14, 2002

by Fluffdaddy
Nice 3 slot cooler.

quote:

ASUS ROG STRIX Vega 64 Can Achieve An Overclock of 1980 MHz (Core), 1000 MHz (Memory) But Sips in Over 500 Watts of Power

Is that a record? Maybe not..

repiv
Aug 13, 2009

redeyes posted:

Is that a record? Maybe not..

Buildzoid mentioned that you can easily get Vega to "run" at 1800-2000MHz but performance is usually worse than stock, so there's some kind of Pascal-style silent throttling going on.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Okay, when a card uses 500 watts of power, you are not allowed to use the word "sips" anymore. That's loving "quaffing" at that point, from a 100oz gas station soda container.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

EVIL Gibson posted:

First off, what I am going to type is not an argument, but me trying to link my POV of graphics card tech tricks to what you are saying.

On my Nvidia 1070, if it stays at 80C (read from Afterburner, so accuracy is up in the air) or something higher for too long, it will reduce the power and the core clock permanently until I just about reboot the entire system. Pulling up the core clock and allowed power threshold after it slams down in clock and voltage does nothing at all to bring the original core speed back.

Now, on my old 7970 GHz, the core and mem clocks are built in to start at 1050/1500 respectively. As you may have seen a couple of times in the thread, I didn't have the original insane heatsink and fans, so as soon as I turned it on with a third-party heatsink not built for OC it would hit 100C and then slam down the core and not the power. It will slowly come out of the funk if I set the core to normal reference clock rates. It would never get back to 1050 with that third-party heatsink until I got better fans attached. As soon as it got close to 80 it would back off the core.

One thing I learned from these experiences is that these cards are way better at protecting themselves than the video cards back during the 2000s, where a chip would fry itself with nothing stopping it if you forgot to put paste on.

From what I read in your posts I keep thinking you are talking about the safeguards, but I know you are not. What am I missing in trying to understand your posts?

Older cards are typically "voltage limited", i.e. as you increase the clocks the core crashes. You feed in more voltage to get the transistors to switch faster, and that makes it a little more stable, but the stability improvements get smaller and smaller as you go. At some point you just can't increase the voltage any further because it kills the chip.

Pascal has a very strict power limiter, and it usually runs into this before it runs into its voltage limit. Increasing the voltage actually increases the amount of power the card uses, and because Pascal is "power limited" this can decrease performance. Running the chip at a hotter temperature also increases power consumption, just like voltage, because the transistors tend to "leak" more electricity at higher temperatures.

Another factor is GPU Boost 3.0. You can think of it as a combination of "auto overclocking" and much more aggressive power management. It continuously monitors temperature and power consumption, and it uses a lookup table to select a maximum boost clock, which it can choose to enable based on the load it sees. For the most part this is good but chip stability decreases as the chip gets hotter, reducing the maximum possible overclock. As the chip gets hotter, GPU Boost 3.0 backs off the overclocks.

This actually occurs a lot earlier than most people realize. By 60C it is locking out some of the highest boost bins (>2100 MHz). By 80C you are definitely getting noticeable throttling, but you are still technically at boost speeds, just not overclock speeds. By 90C you are in full thermal throttling and will be lucky to hit base clocks.
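
(As a rough sketch of that temperature-to-boost-cap behavior, using the thresholds above; the specific clock values and the simple table are illustrative, not Nvidia's actual firmware logic.)

# Simplified sketch of a temperature-driven boost cap. Thresholds follow the
# description above; the clock numbers are illustrative, not Nvidia's real tables.
def max_boost_clock(temp_c: float, base_clock: int = 1607) -> int:
    """Return the highest clock (MHz) the card will allow itself at this temperature."""
    if temp_c < 60:
        return 2100      # coolest bins: full overclock headroom available
    if temp_c < 80:
        return 1950      # some of the highest boost bins get locked out
    if temp_c < 90:
        return 1800      # noticeable throttling, but still at or above stock boost
    return base_clock    # ~90C+: full thermal throttle, lucky to hold base clocks

for t in (50, 65, 85, 92):
    print(f"{t}C -> {max_boost_clock(t)} MHz cap")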

So what you are seeing is basically an artifact of Pascal's new power management system. In contrast previous generations of cards would just run at whatever clocks you told them to, and only try to prevent outright damage to the card. Pascal is micromanaging the cores, deciding whether there is enough load to justify turning on all the cores or increasing its boost clock. By doing that it can save some TDP right now that it can "spend" later when it needs to sprint a little faster than normal. Sometimes you pull 170W, sometimes you pull 190W, it averages out to 180W but you get a little more performance when you need it.

Practical upshot of all of this: Pascal just wants to have its power limit turned up all the way and be kept nice and cool. Undervolting is actually the best way to overclock Pascal, because the limit is power and undervolting gets you higher clocks from the same amount of power. Fine-tuning an undervolt can be a lot of work, if you don't want to go to all that trouble, just increase the power limit to the maximum, crank your fan curve really high to keep it cool, and increase your core clock by like 100 MHz and you're done. You can even skip the core clock part, really just the power limit and the fan curve make 90% of the difference.
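
The first-order reason undervolting helps under a power cap is the standard switching-power approximation (this ignores leakage, which, as noted above, also grows with temperature):

% Dynamic (switching) power: alpha = activity factor, C = switched capacitance
\[
  P_{\mathrm{dyn}} \;\approx\; \alpha\,C\,f\,V^2
\]
% At a fixed power limit P, the sustainable clock scales roughly as
\[
  f_{\max} \;\propto\; \frac{P}{\alpha\,C\,V^2}
\]
% e.g. dropping 1.05 V to 0.95 V at the same power budget gives about
% (1.05/0.95)^2 ~ 1.22x headroom in f, provided the chip is still stable there.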

Ryzen does this too. But none of this changes the fact that on average if the processor puts out 180W of heat then the cooler needs to dissipate 180W. We are only pushing the power around to use it when it's most useful, we are not actually increasing total power, everything averages out. And running hotter actually makes the problem worse, because of the leakage and stability issues.

Now, what's not normal is the whole "clocks don't come back to normal" thing. Assuming you exit from your game the GPU will cool off, and as the temperature comes back down more boost bins will be available. This should be immediately visible, like as soon as you're down to 75C you should be boosting back to 1900 or higher. My suspicion would be you have a very low fan-curve set and you're not actually getting the temperatures down as quickly as you think you are. AIB partners tend to exploit this: people don't really have any good reference for performance other than "this new card is faster!", and it takes a few minutes to come up to temperature and the card starts throttling, so you have to go out of your way to test for it, whereas the noise from an aggressive fan curve is immediately apparent. So AIB partners optimize for noise rather than performance...

I would open up GPU-Z to the sensor tab and use a cryptomining program to load up the GPU (there is usually a "benchmark" mode if you want). There's nothing special about mining, it's just an easy way to put load on the GPU. Then try killing the miner and restarting it at various temperatures to see what happens to your boost clock. If you are actually getting "stuck clocks" my next move would be to use Display Driver Uninstaller and do a clean install of the drivers and see if that helps. At that point if you're still having trouble let us know and we'll go from there.

But yeah, modern CPUs and GPUs are really good about not letting you destroy the chip.

Paul MaudDib fucked around with this message at 22:18 on Aug 18, 2017

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

repiv posted:

Buildzoid mentioned that you can easily get Vega to "run" at 1800-2000MHz but performance is usually worse than stock, so there's some kind of Pascal-style silent throttling going on.

IIRC, he was saying it's possible it can be doing that, and you can only be certain by running 3DMark and checking against your previous scores. His experience seemed to indicate it'd start doing it at 2000MHz and that he was able to push it close to 1900MHz and still get better performance.

This does confirm one thing, though: there doesn't seem to be any inherent process limitation on clockspeed for GF 14nm; rather, it's simply a design issue for GCN. Based on the whitepaper, though, Pascal and Vega are actually really similar sans the command processor, but I really don't want to attribute a doubling of power consumption to just the command processor. Like, wouldn't it be easy to confirm internally how much power the command processor is sucking down by disabling it, using software instead, and seeing how that affects performance?

wargames
Mar 16, 2008

official yospos cat censor
https://www.youtube.com/watch?v=D_DZA5NsnNc

NewFatMike
Jun 11, 2015

SwissArmyDruid posted:

Okay, when a card uses 500 watts of power, you are not allowed to use the word "sips" anymore. That's loving "quaffing" at that point, from a 100oz gas station soda container.

We at ROG have tuned the fans to sound like the wails of miners' wives whose husbands died to provide electricity for such a high power card.

redeyes
Sep 14, 2002

by Fluffdaddy
Let's say you were AMD. You just burned through a few years of capital developing an architecture that is basically a complete failure. What do you do? Eat the cost and dev a new architecture, or spin it like Trump?

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

redeyes posted:

Let's say you were AMD. You just burned through a few years of capital developing an architecture that is basically a complete failure. What do you do? Eat the cost and dev a new architecture, or spin it like Trump?

I choose to go to DnD

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


I'm still tempted to try to get my hands on a 1080Ti to replace my 980Ti, which struggles some with 1440P@144 without lowering quality settings. Please stop me from/encourage me into making bad decisions with my money.

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS

PerrineClostermann posted:

I choose to go to DnD

:hfive:

Ciaphas posted:

I'm still tempted to try to get my hands on a 1080Ti to replace my 980Ti, which struggles some with 1440P@144 without lowering quality settings. Please stop me from/encourage me into making bad decisions with my money.

How low are the settings you're talking about? Have you tried looking up which settings actually matter in the particular game you're having issues with? Sometimes stuff like reflections takes way too many resources for something you may not even notice.
