Ganondork
Dec 26, 2012

Ganondork
All of this is entirely dependent on the abstractions and tools available to devs working on those platforms. Otherwise, devs would have to hand-roll support across generations whenever any special hardware or features of the previous-gen architecture were used.

The MS example is great, because they seem to be having some reasonable success creating tooling that works cross-platform, which you’d assume means across generations too, but that’s entirely up to MS.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Yes, agreed with all of the above; I thought that was a given.

Ganondork
Dec 26, 2012

Ganondork

SwissArmyDruid posted:

Yes, agreed with all of the above; I thought that was a given.

It is, but :actually: not everyone may have the software dev perspective.

Edward IV
Jan 15, 2006

So just how hosed is my HD7970?



A restart clears it, but stability in general comes and goes. The odd thing is that I don't remember it being this unstable when I used it for a span of 5 months last year, up until August. Back then, though, it was driving a 1080p screen instead of a 4K TV at 30Hz. I don't know how much of a difference that makes, but the graphics drivers are up to date, for whatever that's worth.

Fortunately, this is no longer my primary PC and I'm mainly using it to remote into my workstation, so these crashes and glitches aren't hurting my work beyond some lost time.

Edward IV fucked around with this message at 17:48 on Apr 9, 2020

orcane
Jun 13, 2012

Fun Shoe
It's dead, Jim.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
That's classic video memory corruption. Sometimes it's just because the VRAM is currently overheating, but it's usually permanent damage.

I will say there is some small chance that it's an issue specific to driving 4K@30Hz, presumably over HDMI, and it would behave differently on a different display, but that's probably not it.

forest spirit
Apr 6, 2009

Frigate Hetman Sahaidachny
First to Fight Scuttle, First to Fall Sink


That's what my 7970 looked like when it died. Only card I've had that's actually bit the dust

Setset
Apr 14, 2012
Grimey Drawer

Edward IV posted:

So just how hosed is my HD7970?



A restart clears it, but stability in general comes and goes. The odd thing is that I don't remember it being this unstable when I used it for a span of 5 months last year, up until August. Back then, though, it was driving a 1080p screen instead of a 4K TV at 30Hz. I don't know how much of a difference that makes, but the graphics drivers are up to date, for whatever that's worth.

Fortunately, this is no longer my primary PC and I'm mainly using it to remote into my workstation, so these crashes and glitches aren't hurting my work beyond some lost time.

Are you able to downclock the memory by some large percentage? I'd try that, and lowering the voltage, before giving up on it. Then again, new cards are relatively cheap...
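
(For reference, a minimal sketch of what forcing a big downclock can look like on a Linux box, assuming the radeon/amdgpu driver exposes the usual power_dpm_force_performance_level knob and that the 7970 is card0 — both assumptions about the setup, and this won't touch voltage. On Windows you'd just drag the sliders in Afterburner instead.)

```python
# Hedged sketch: clamp an AMD card to its lowest DPM state so core/memory
# run at their minimum clocks, to test whether downclocked VRAM makes the
# corruption go away. Requires root and the radeon/amdgpu kernel driver;
# "card0" is an assumption about which card is the 7970.
from pathlib import Path

DPM_KNOB = Path("/sys/class/drm/card0/device/power_dpm_force_performance_level")

def set_dpm_level(level: str = "low") -> None:
    """Write 'low', 'auto', or 'high' to the driver's DPM override."""
    if level not in {"low", "auto", "high"}:
        raise ValueError(f"unsupported level: {level}")
    DPM_KNOB.write_text(level)

if __name__ == "__main__":
    set_dpm_level("low")                  # pin clocks to the minimum state
    print(DPM_KNOB.read_text().strip())   # confirm: should print "low"
```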

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
There's no coming back from that, the memory's physically hosed

VelociBacon
Dec 8, 2009

Edward IV posted:

So just how hosed is my HD7970?



A restart clears it, but stability in general comes and goes. The odd thing is that I don't remember it being this unstable when I used it for a span of 5 months last year, up until August. Back then, though, it was driving a 1080p screen instead of a 4K TV at 30Hz. I don't know how much of a difference that makes, but the graphics drivers are up to date, for whatever that's worth.

Fortunately, this is no longer my primary PC and I'm mainly using it to remote into my workstation, so these crashes and glitches aren't hurting my work beyond some lost time.

Beyond what everyone else is saying, have you tried running the display from the motherboard, just to confirm GPU failure?

Edward IV
Jan 15, 2006

VelociBacon posted:

Beyond what everyone else is saying, have you tried running the display from the motherboard, just to confirm GPU failure?

Unfortunately, my motherboard has the P67 chipset, so no video out, and I don't have any other graphics cards in my possession besides an HD4850 at my parents' house. Really just wondering how bad it actually is and whether the HD7970 is the cause. At least it isn't my primary PC, and it works well enough for what I need despite the inconvenience it causes. Plus, any replacement or upgrade will probably involve everything except maybe the hard drives (which may end up living in a NAS anyway) and maybe the SSD.

Also, it turns out the card was running at 60C at stock clocks while under idle to low load. So I've downclocked it to 400 MHz core and 600 MHz memory and manually set the fan to 40% to get it closer to 50C. While 60C is below what it could reach under load, it shouldn't be running that hot while sitting on the desktop or remoting into another computer. It probably doesn't help that it has the stock reference centrifugal blower and the whole thing is in a somewhat cramped HTPC desktop case.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Zedsdeadbaby posted:

There's no coming back from that, the memory's physically hosed

Repasting it and replacing the thermal pads might be enough to keep it below the temperature where it starts glitching out. Also, baking it will sometimes help for a little bit.

So there are some options to try and stretch it a little bit. But yeah, overall it’s on the way out and it’s time to look for a replacement.

Geemer
Nov 4, 2010



When my 7870 started doing that, I tried the baking thing and it worked for a grand total of 2 weeks before the glitches came back with a vengeance. I'd start looking for a replacement.

deadly_pudding
May 13, 2009

who the fuck is scraeming
"LOG OFF" at my house.
show yourself, coward.
i will never log off
How much of a performance bump am I looking at if I go from a GTX 970 to a 1660 Ti? Mostly I want more VRAM for huge texture resolutions, but I'll take what I can get in overall performance, too.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
It varies widely, but if you're GPU limited a 1660 Super is typically something like 25-50% performance gains depending on title/settings. Hard to argue for a 1660 Ti over a Super (15% more money for almost no more performance). I think the 1660 Super is one of the most sane GPU upgrades possible right now. Yes, the upgrade isn't huge, but you can still sell a 970 for SOMETHING right now, and the 1660 Super will still be worth something when you make your next GPU upgrade, to the point where your overall cost is very little.
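
To put rough numbers on that (all prices here are placeholder assumptions, not quotes — used 970 resale and 1660 Super street prices vary a lot by region):

```python
# Back-of-the-envelope upgrade math for the 970 -> 1660 Super case above.
# new_price and old_resale are placeholder assumptions, not real quotes.
def net_upgrade_cost(new_price: float, old_resale: float) -> float:
    """What the upgrade actually costs after selling the old card."""
    return new_price - old_resale

def gain_per_dollar(perf_gain_pct: float, cost: float) -> float:
    """Percent of extra performance per dollar of net spend."""
    return perf_gain_pct / cost

if __name__ == "__main__":
    cost = net_upgrade_cost(new_price=230.0, old_resale=150.0)  # ~$80 net
    for gain in (25.0, 50.0):  # the 25-50% gain range quoted above
        print(f"{gain:.0f}% faster for ${cost:.0f} net -> "
              f"{gain_per_dollar(gain, cost):.2f} %/$")
```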

deadly_pudding
May 13, 2009

who the fuck is scraeming
"LOG OFF" at my house.
show yourself, coward.
i will never log off

K8.0 posted:

It varies widely, but if you're GPU limited a 1660 Super is typically something like 25-50% performance gains depending on title/settings. Hard to argue for a 1660 Ti over a Super (15% more money for almost no more performance). I think the 1660 Super is one of the most sane GPU upgrades possible right now. Yes, the upgrade isn't huge, but you can still sell a 970 for SOMETHING right now, and the 1660 Super will still be worth something when you make your next GPU upgrade, to the point where your overall cost is very little.

Yeah, I saw that the 970 is going for like 150bux on Facebook Marketplace around here. I'll keep the 1660 Super under advisement. Maybe I'll even upgrade monitors if the economy ever recovers :negative:

ufarn
May 30, 2009
I think Nvidia is scheduled to announce new cards in August. There are gonna be a lot of interesting used cards to buy shortly after the announcement, and you can save pretty significant amounts of money buying used. I don't know how many people are mining crypto these days, but as long as it's not a 1080 Ti it's probably fine, provided there are photos and some details about the card. EVGA also has transferable warranties on their GPUs.

I know buying used is kinda scary, but you can straight up save crazy amounts of money on cards if you're lucky.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
There's not a whole lot of mining going on these days, as the ROI is poo poo (and often negative). A 1080Ti would probably be out of warranty this summer, but otherwise would likely be a good price:performance card to pick up after the new ones are announced, since it'll be two models out by then, and no one sane bothered to mine on them for long (not that mining hurt NVidia cards to begin with).

The bigger concern for a 10-series card going forward is the lack of DLSS 2.0 support. If that keeps expanding across games, can reliably do 30-50% performance gain at nearly indistinguishable IQ levels, and NVidia pushes the tensor core support down the stack, even a 1080Ti might end up effectively struggling against the new xx60 part.

Cactus
Jun 24, 2006

Probably a dumb question, but do cards have any way of showing how much they've been used? Like some kind of built-in mileage indicator analogous to what cars have?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Cactus posted:

Probably a dumb question, but do cards have any way of showing how much they've been used? Like some kind of built-in mileage indicator analogous to what cars have?

Nope. And even if they did, it wouldn't really matter. You can't "wear out" a GPU in the same way you can a car, where you know you can expect more problems at 100k miles than 50k miles, for example. The only part that's likely to fail just through normal use is a fan, and those just kinda go whenever they feel like it. Past that they're likely to last for years and years until some random part decides to give up the ghost--often a capacitor somewhere. Now people who are cooking their cards with voltage mods or something are a different story, but that's pretty hard to do these days, anyhow.

This is part of the reason that used GPUs are often recommended for budget buyers: they probably will work just fine for years to come.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
That's not entirely correct. Semiconductors do "wear out" with use, and with how hard GPUs are pushed it's possible to damage them. With AMD GPUs being relatively unlocked, damage can be a concern, but since most of the modified GPUs were used by miners who targeted efficiency, the only real concern was finding the correct high-performance BIOS to flash back onto one. Modern Nvidia GPUs are so locked down that you really can't hurt them without physical modifications.

You are correct that you can buy used Nvidia GPUs with almost complete confidence, and even with AMD GPUs it's absolutely worth the risk. You should really only buy new GPUs if there isn't a good used market for what you want.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

K8.0 posted:

That's not entirely correct. Semiconductors do "wear out" with use, and with how hard GPUs are pushed it's possible to damage them.

In a very technical sense, sure. But then you have to have a discussion on what actually wears out the semiconductors vs other parts of the board. If you're overvolting the hell out of it, yeah, you can get slow degradation of the substrate leading to tunneling. But as noted, that's not likely in the current environment. So what are you going to pin wear on? Load/unload cycles? Total time at load? Various things wear various parts of the board at different rates, where 1000 loads at 50% can be more damaging than 100 loads at 100% on some components, and vice versa on others. This makes talking about "wear" pretty hard to do in a meaningful sense, and unless you happen to get a board with some underlying manufacturing fault (or straight-up design fault, like a few of NVidia's boards have had in the past), your most likely failure mode is a dead fan or a blown cap--both of which track simple calendar time more closely than they do hours under load.

But yeah, point is used cards are a Good Deal and as long as the card does what you want, you shouldn't need to worry about buying it only to have it "wear out" and die 6 months later.

Ganondork
Dec 26, 2012

Ganondork
The majority of the failures I’ve experienced with video cards have been popped capacitors, probably due to overclocking. Can’t say I’ve ever had a GPU go on its own.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Ganondork posted:

The majority of the failures I’ve experienced with video cards have been popped capacitors, probably due to overclocking. Can’t say I’ve ever had a GPU go on its own.

Yeah, the "running it too hard" bit was a lot more applicable to older cards where you could actually adjust the voltage to :catdrugs: levels without needing to do physical mods. Modern cards don't let you get too adventurous with that anymore.

Ganondork
Dec 26, 2012

Ganondork

DrDork posted:

Yeah, the "running it too hard" bit was a lot more applicable to older cards where you could actually adjust the voltage to :catdrugs: levels without needing to do physical mods. Modern cards don't let you get too adventurous with that anymore.

The last card I had that popped was an MSI Gaming X 1070. Luckily, it still had something like 2 months left on the warranty. Perfect timing.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

DrDork posted:

In a very technical sense, sure. But then you have to have a discussion on what actually wears out the semiconductors vs other parts of the board. If you're overvolting the hell out of it, yeah, you can get slow degradation of the substrate leading to tunneling. But as noted, that's not likely in the current environment. So what are you going to pin wear on? Load/unload cycles? Total time at load? Various things wear various parts of the board at different rates, where 1000 loads at 50% can be more damaging than 100 loads at 100% on some components, and vice versa on others. This makes talking about "wear" pretty hard to do in a meaningful sense, and unless you happen to get a board with some underlying manufacturing fault (or straight-up design fault, like a few of NVidia's boards have had in the past), your most likely failure mode is a dead fan or a blown cap--both of which track simple calendar time more closely than they do hours under load.

But yeah, point is used cards are a Good Deal and as long as the card does what you want, you shouldn't need to worry about buying it only to have it "wear out" and die 6 months later.

I agree that there isn't a meaningful way to measure it. IME the normal wear-accelerating factor on GPUs is temperature. I think Turing and Navi have changed this somewhat, but GPUs didn't use to measure temperature in that many locations, and we've all seen dead GPUs with obviously clogged-up coolers.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

Ganondork posted:

The last card I had that popped was an MSI Gaming X 1070. Luckily, it still had something like 2 months left on the warranty. Perfect timing.

Overclocking on a 10x0 series by itself is unlikely to cause a capacitor to go, but running things hot for prolonged periods can hurt it, so that's a secondary effect of overclocking.

Generally I wouldn't worry too much about GPU mileage as long as it's reasonably clean (a good sign the case had airflow) and the fan seems to run OK.

Geemer
Nov 4, 2010



Put a teraflop counter on the GPU and limit the warranty to a ridiculously low number that still sounds impressive, because no layman knows what a teraflop really is.
Just like the power-hours counter on a deuterium lamp in UV/VIS equipment, except designed to screw over consumers instead of preventing out-of-band analyses.

Ugly In The Morning
Jul 1, 2010
Pillbug
I had a nightmare last night that I bought a second 2070 Super to set up in SLI.


My brain is weird when I’m sick.
Anyway, that made me wonder:
I was out of the tech loop for a while; what was the point where SLI went from “expensive but at least it does something” to “expensive and barely does anything at best, is actively detrimental at worst”? I had an SLI setup with GeForce 7800s in 2005 and I remember that it was immediately outclassed by a single 8800 when those came out, but at least it had a noticeable benefit over a single 7800.

Fabulousity
Dec 29, 2008

Number One I order you to take a number two.

My guess would be that DX12 implementing multi-GPU support at the API level gave AMD/nVidia a way out from having to dump resources into supporting a tiny sliver of a sliver of market share, which was the SLI/Crossfire folks. Now, for multi-GPU, the albatross is wholly around the application developer's neck instead of being shared with the driver developers.

The value proposition of SLI/Crossfire was always dubious, since it never was, and never could be, a true doubling of performance, and as you pointed out such setups would often produce their own problems in practice.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I think towards the end of the 9-series was where they actively stopped caring about it, but it kinda still existed for the broke-brains. The 10-series straight up didn't even pretend that they gave a gently caress about it, and SLI wouldn't even work in a large number of games, and in some of them actually gave negative performance differences.

And that's not even starting to talk about the cost:performance issues.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
If I could go back and do it for free, I don't think I'd ever take SLI. The frame pacing issues made it garbage across the board.

Shaocaholica
Oct 29, 2002

Fig. 5E
I paid real money for this in 2020



ATI Firepro M7740 1GB Dell MXM module for M6400 laptop. Extremely rare BTO option.

No idea what the BTO upgrade cost was on this part originally in 2009. $1000?

Shaocaholica fucked around with this message at 23:00 on Apr 11, 2020

FuturePastNow
May 19, 2014


Most of my computers are old as hell so I max them out with the options that would have cost a fortune when they were new. Currently looking for a cheap Quadro FX 4500 that I can flash for the PowerMac G5 (need one of the early revision ones with RAM on both sides, indicated by the heatspreader on the bottom). I take no shame in this.

Ugly In The Morning
Jul 1, 2010
Pillbug

K8.0 posted:

If I could go back and do it for free, I don't think I'd ever take SLI. The frame pacing issues made it garbage across the board.

I don’t think I knew what frame pacing even was when I had an SLI setup. Isn’t that something that mostly got discovered when more advanced capture cards came out and people realized “oh, THAT’S why it felt like poo poo even though my FPS numbers were fine”?

Because yeah, when I switched back to a single-card setup (thanks, capacitor plague!) everything felt a lot better.

repiv
Aug 13, 2009

Fabulousity posted:

My guess would be that DX12 implementing multi-GPU support at the API level gave AMD/nVidia a way out from having to dump resources into supporting a tiny sliver of a sliver of market share, which was the SLI/Crossfire folks.

The other reason was the rise of temporal tricks in engines. The most reliable way to support SLI through the driver was alternate-frame rendering, where one GPU renders even frames and the other renders odd frames, but once you introduce TAA and other techniques that depend on data from the previous frame, that model breaks, because (a) the last frame may not even be rendered yet and (b) even if it is, getting that data requires a slow trip over the PCIe bus.
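
As a toy illustration of why that breaks (a hedged sketch of the AFR concept, not actual driver or engine code — the GPU names and two-way split are just for demonstration):

```python
# Toy model of alternate-frame rendering (AFR) and why TAA-style history
# dependencies break it: frame N renders on GPU (N % 2), but frame N-1's
# output always lives on the *other* GPU, forcing a PCIe copy or a stall.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    local_frames: set = field(default_factory=set)  # frames resident in this GPU's VRAM

def render(frame: int, gpus: list, use_taa: bool) -> str:
    gpu = gpus[frame % len(gpus)]                    # AFR: even/odd frame split
    note = ""
    if use_taa and frame > 0 and (frame - 1) not in gpu.local_frames:
        # History buffer is on the other GPU: stall and copy it over PCIe.
        note = f" (fetch frame {frame - 1} history over PCIe)"
    gpu.local_frames.add(frame)
    return f"frame {frame} on {gpu.name}{note}"

if __name__ == "__main__":
    gpus = [Gpu("GPU0"), Gpu("GPU1")]
    for f in range(4):
        print(render(f, gpus, use_taa=True))
```

Every TAA frame after the first ends up needing history from the other GPU, which is exactly the dependency driver-level AFR can't hide.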

Shaocaholica
Oct 29, 2002

Fig. 5E

FuturePastNow posted:

Most of my computers are old as hell so I max them out with the options that would have cost a fortune when they were new. Currently looking for a cheap Quadro FX 4500 that I can flash for the PowerMac G5 (need one of the early revision ones with RAM on both sides, indicated by the heatspreader on the bottom). I take no shame in this.

Haha, that sounds cool. I have the top-end ATI GPU for the G5; I can't even remember what it was, I got it so long ago.

lllllllllllllllllll
Feb 28, 2010

Now the scene's lighting is perfect!

FuturePastNow posted:

Most of my computers are old as hell so I max them out with the options that would have cost a fortune when they were new.

That's a classic post/username right there. The idea of having the best stuff from the past is cool too.

Ugly In The Morning
Jul 1, 2010
Pillbug

lllllllllllllllllll posted:

That's a classic post/username right there. The idea of having the best stuff from the past is cool too.

I’ve thought of making a kickass 2005 or 2009 computer out of used parts now that it can be done for like a hundred or two tops. You can get a GeForce 9800 GTX for like ten bucks now, put one of those with a Phenom or something, install Windows XP, and you’ve got something that can do emulation and play the stuff that refuses to work with Windows 10.

lllllllllllllllllll
Feb 28, 2010

Now the scene's lighting is perfect!
Now I want more space where I live so I can fill it with old computers.
