Riflen
Mar 13, 2009

"Cheating bitch"
Bleak Gremlin

Brownie posted:

I imagine we’ll just get a lot of games like Arkham Knight, with tons of stuttering on PC regardless of your setup. That game took advantage of the consoles' shared system memory in a way that PC architecture just wasn’t able to reproduce, and so it still underperforms on most PC systems.

The PC version was badly made. https://sherief.fyi/post/arkham-quixote/

"TL;DR turns out the streaming system keeps trying to create thousands of new textures instead of recycling them, among other issues."

That's not to say such bad practices won't happen in the future, but it has nothing to do with the capabilities of the PC platform or the APIs.
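
For anyone curious what "recycling" looks like in practice, here's a minimal sketch of a texture pool. The gpu_create_texture-style allocator and the key fields are hypothetical stand-ins, not the game's actual code; the point is just that a matching idle texture gets reused instead of being allocated from scratch every time the streamer wants one.

code:
# Minimal texture-pool sketch; create_fn is a hypothetical backend allocator
# standing in for whatever the engine actually calls to create a GPU texture.
from collections import defaultdict

class TexturePool:
    def __init__(self, create_fn):
        self._create = create_fn           # assumed allocator, e.g. gpu_create_texture
        self._free = defaultdict(list)     # (width, height, fmt) -> idle textures
    def acquire(self, width, height, fmt):
        key = (width, height, fmt)
        if self._free[key]:
            return self._free[key].pop()   # reuse: no new allocation, no driver hitch
        return self._create(width, height, fmt)  # miss: pay the creation cost once
    def release(self, texture, width, height, fmt):
        # hand the texture back to the pool instead of destroying it
        self._free[(width, height, fmt)].append(texture)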


Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Riflen posted:

The PC version was badly made. https://sherief.fyi/post/arkham-quixote/

"TL;DR turns out the streaming system keeps trying to create thousands of new textures instead of recycling them, among other issues."

That's not to say such bad practices won't happen in the future, but it has nothing to do with the capabilities of the PC platform or the APIs.

I haven't done console development, but this feels a lot like an operation that would be much faster on a shared memory architecture. If you have shared memory, creating those texture resources within the same memory space is likely going to be a lot faster and easier than trying to shuffle a bunch of data out of main memory, across PCIe, and into the GPU's local memory.

For this one game, it's possible to patch in texture pools and get some performance benefits. But, it's easy to come up with situations where that might not be true. Just increasing the texture variety - and decreasing what's possible to throw into the pool and reuse - could do it.
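
To put some very rough numbers on the "shuffle data across PCIe" part (the bandwidth and texture sizes below are assumptions for illustration, not measurements from the game):

code:
# Back-of-the-envelope model: raw PCIe traffic if a streaming system pushes a burst
# of brand-new textures to a discrete GPU in one go. This only models the copy;
# per-texture creation overhead in the driver is a separate cost, which is what the
# pooling fix linked above targets.
def upload_time_ms(num_textures, bytes_per_texture, pcie_gb_per_s=12.0):
    total_bytes = num_textures * bytes_per_texture
    return total_bytes / (pcie_gb_per_s * 1e9) * 1e3

# Assumed numbers: 1000 new 1 MB textures over a ~12 GB/s effective PCIe 3.0 x16 link.
print(f"{upload_time_ms(1000, 1_000_000):.0f} ms of bus traffic alone")  # ~83 ms, i.e. roughly five 60fps frames

On a unified-memory console the data can often be used where it already sits, which is the advantage being described; a pool on PC mostly attacks the per-texture creation cost rather than the copy itself.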

Brownie
Jul 21, 2007
The Croatian Sensation

Riflen posted:

The PC version was badly made. https://sherief.fyi/post/arkham-quixote/

"TL;DR turns out the streaming system keeps trying to create thousands of new textures instead of recycling them, among other issues."

That's not to say such bad practices won't happen in the future, but it has nothing to do with the capabilities of the PC platform or the APIs.

Isn't that kind of my point though? They made a technical choice enabled by the XB1 and PS4 architectures, that same technique couldn't be ported directly to PC, and they didn't address it before launch. That blogger found a clean (perhaps obvious) solution to the problem, but that won't always be the case, and I'm sure we'll see similar cases with the first wave of games for next-gen hardware, which will inevitably be rushed to market.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
No, that's really not the case. The people who ported the game were garbage and just didn't do something that literally every game needs to do. It's about an extremely lazy approach to porting that has more to do with software architecture than hardware.

With the new consoles being relatively powerful machines at launch (unlike the PS4/Bone), no doubt rough edges in ports will be more obvious, but that will again largely be due to poor porting work rather than architectural differences.

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

EoRaptor posted:

Yes, should have been GB, not Gbit. Don't know why I mentally swapped that.

Where are you getting that PCIe 3.0 SSDs top out at 2.5 GB/sec, though? I top out at 3.5 GB/sec with my WD, which was barely over $100 CAD for 512 GB. Of course that's never 'real world', but I doubt Sony/MS are using any different metrics for the 'raw' speed of their SSDs either.

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

Brownie posted:

Isn't that kind of my point though? They made a technical choice enabled by the XB1 and PS4 architectures, that same technique couldn't be ported directly to PC, and they didn't address it before launch. That blogger found a clean (perhaps obvious) solution to the problem, but that won't always be the case, and I'm sure we'll see similar cases with the first wave of games for next-gen hardware, which will inevitably be rushed to market.

Bear in mind it's not like his solution is a magic cure-all, though. His explanation makes sense, but it did nothing for the small stuttering I still get when running at 60fps, and others in his reddit thread also reported no difference, or never had any stuttering issues to begin with. Even DF did a video about a year back seeing what it took to get a locked 60 at 4K, and they were able to get it, plus 1080p/60 with no stuttering across several PCs (which I have never been able to duplicate). It's just a very weird game with respect to performance.

Brownie
Jul 21, 2007
The Croatian Sensation

Happy_Misanthrope posted:

Bear in mind it's not like his solution is a magic cure-all, though. His explanation makes sense, but it did nothing for the small stuttering I still get when running at 60fps, and others in his reddit thread also reported no difference, or never had any stuttering issues to begin with. Even DF did a video about a year back seeing what it took to get a locked 60 at 4K, and they were able to get it, plus 1080p/60 with no stuttering across several PCs (which I have never been able to duplicate). It's just a very weird game with respect to performance.

It's an insanely weird game, true. I had little to no stuttering but had constant texture corruption. For instance, the rain textures layered on top of most surfaces never loaded in properly at the highest detail level, causing them to constantly cycle and flicker.

Brownie fucked around with this message at 20:16 on Mar 22, 2020

lDDQD
Apr 16, 2006

Happy_Misanthrope posted:

Again though, that's just not the case. The presumption you're making is that GDDR6 actually has significantly higher latency and thus will affect the CPU negatively; we don't see any evidence of that, and this was brought up earlier and shown to not really be true this gen. If the CPU cores in these console APUs are slower than their equivalent PC offerings, it will perhaps be due to less cache, but not because of GDDR over DDR.

After all, if DDR's latency were vastly superior for CPUs, then the Xbox One would have shown a significant advantage in CPU-bottlenecked titles (of which there are many) over the PS4, as it uses not only DDR for main RAM, but an extremely fast on-chip cache to boot.

GDDR6 has significantly higher latency for some of its operations, and will affect the CPU negatively :) - the typical data access pattern for a CPU workload would lead to very poor bus utilization of GDDRn memories.

In the interests of keeping the bus busy, let's take a look at two memory timings that would impact our ability to keep putting transactions on the bus:
tRRD ("row-to-row delay") limits how often we can select a row inside a memory bank
+ For peak bus utilization, we want to be able to select a new row every time we're done working with one bank; this depends on the amount of "hits" (how many columns we want to grab from that bank/row combination before we're done with it).
++ The desired minimum tRRD would be 2(num_hits * burst_length); where burst length is the amount of time it takes you to send one complete piece of data over the bus. So, if we can expect to service multiple hits inside a bank, the requirements for desired minimum tRRD can relax quite a bit - after all, we're not interested in switching banks nearly as quickly; we've still got more bursts to send off! Why, on the other hand, is it twice the num_hits * burst length? The whole point of organizing memory into banks was to hide some latencies associated with accessing things by having these things take place simultaneously - so, hopefully, you can have one bank put data on the bus, while some other bank is only getting ready to do it. Then, it takes its turn putting data on the bus just as the first bank is finished using it. Since we aim to interleave accesses between at least two different banks in this scheme, this further relaxes the desired minimum tRRD requirement by a factor of two.

tCCD ("column-to-column delay") limits how often we can select a column inside a row
+ Again, in the interests of peak bus utilization, we want to be able to be done with the column-accessing stuff every time we're finished putting a burst on the bus, so we can just keep firing off bursts back-to-back-to-back-...
++ The desired minimum tCCD would be 2(burst_length). A similar strategy with interleaved column accesses applies here, which is why we can relax this by a factor of two.

Anyway, so the big-picture idea here is that:
1) we can tolerate higher tRRD, as long as we expect more hits within the same bank
2) we really want our tCCD to be shorter, or at least equal to the burst length
If both these requirements are satisfied, we can achieve the coveted 100% bus utilization; a quick worked example with made-up numbers follows below.
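
code:
# The "desired" tRRD/tCCD from the two rules above: the largest values we can
# tolerate while still keeping the bus 100% busy. The burst length and hits-per-page
# figures are placeholders for illustration, not values from any datasheet.
def desired_trrd(hits_per_page, burst_length_ns):
    # Rule 1: bank interleaving relaxes tRRD to 2 * hits * burst_length.
    return 2 * hits_per_page * burst_length_ns

def desired_tccd(burst_length_ns):
    # Rule 2: interleaved column accesses relax tCCD to 2 * burst_length.
    return 2 * burst_length_ns

BURST_NS = 4.0  # assumed burst duration
for label, hits in [("CPU-like, 1 hit/page", 1.0), ("GPU-like, 2.5 hits/page", 2.5)]:
    print(f"{label}: tolerate tRRD up to {desired_trrd(hits, BURST_NS):.0f} ns, "
          f"tCCD up to {desired_tccd(BURST_NS):.0f} ns")

The random CPU-style pattern demands a much tighter tRRD than the GPU-style streaming pattern, which is the gap the vendor comparison below is getting at.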

Well, let's head over to a memory vendor, and see what the memory they're selling us can actually do, and also compare it to what we need for either CPU or GPU workloads.

Typical CPU workload (random pattern; 1 hit per page): [vendor timing comparison not reproduced]

Typical GPU workload (2.5 hits per page average): [vendor timing comparison not reproduced]

I've also included tRC in this comparison, which is the "row cycle" time (which includes the amount of time we need to open a page, get data, close page and precharge), but let's not worry about that too much; the problem is already readily apparent from tRRD alone; namely that we can't get anywhere near desired tRRD for a typical CPU workload from anything other than memories specifically designed to handle this kind of access pattern.

On the flip side, it might look like DDR4, DDR5, and LPDDR could work great for graphics workloads - this is true only as far as latency is concerned. You're not going to get the same amount of bandwidth out of them, which is the engineering tradeoff made by those architectures.

In conclusion, there's no such thing as a one-size-fits-all DRAM architecture. As long as you've got a CPU, you should hang on to your DDRn memory. Putting the CPU on GDDR only starts to make sense in the context of a unified device, such as an APU - if you've got a smart enough memory controller, it could schedule memory accesses to both the GPU cores & CPU cores in such a way that the relatively random access pattern of the CPU is combined with the relatively predictable access pattern of the GPU in order to fill up the large gaps in bus utilization that would otherwise be left there by the CPU. The obvious downside of this is latency - you need to wait for a bunch of transactions to be requested by both units, and then you have to wait for the memory controller to have the opportunity to reorder them. The GPU probably doesn't care so much about the added latency (being a high-bandwidth machine), but the CPU (being a low-latency machine) certainly does.
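
If it helps, here's a toy version of that shared-controller idea. Everything about it is invented for illustration (the one-burst-per-slot timing, the every-fourth-slot rule standing in for the points where the GPU stream switches rows anyway); the only point is that reordering around the GPU's long bursts shows up as extra queueing delay on the CPU side.

code:
# Toy shared-controller schedule: a long, predictable GPU stream with a few random
# CPU requests squeezed into it. Not a real memory controller model.
from collections import deque

def schedule(cpu_reqs, gpu_reqs):
    """cpu_reqs are (arrival_tick, tag); returns average extra ticks CPU requests waited."""
    cpu, gpu = deque(cpu_reqs), deque(gpu_reqs)
    cpu_wait, tick = [], 0
    while cpu or gpu:
        # Keep issuing GPU bursts for utilization; let one CPU request through every
        # fourth slot (a crude stand-in for gaps in the GPU stream).
        if gpu and (not cpu or tick % 4 != 0):
            gpu.popleft()
        else:
            arrival, _tag = cpu.popleft()
            cpu_wait.append(tick - arrival)
        tick += 1
    return sum(cpu_wait) / len(cpu_wait) if cpu_wait else 0.0

# Four CPU requests arrive almost immediately but get interleaved behind GPU bursts:
extra = schedule(cpu_reqs=[(0, "a"), (1, "b"), (2, "c"), (3, "d")], gpu_reqs=list(range(12)))
print(f"average extra wait seen by CPU requests: {extra:.1f} ticks")  # 4.5 with these inputs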

lDDQD fucked around with this message at 01:28 on Nov 28, 2020

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

lDDQD posted:

GDDR6 has significantly higher latency for some of its operations, and will affect the CPU negatively :) - the typical data access pattern for a CPU workload would lead to very poor bus utilization of GDDRn memories.

In the interests of keeping the bus busy, let's take a look at two memory timings that would impact our ability to keep putting transactions on the bus:
tRRD ("row-to-row delay") limits how often we can select a row inside a memory bank
+ For peak bus utilization, we want to be able to select a new row every time we're done working with one bank; this depends on the amount of "hits" (how many columns we want to grab from that bank/row combination before we're done with it).
++ The desired minimum tRRD would be 2(num_hits * burst_length); where burst length is the amount of time it takes you to send one complete piece of data over the bus. So, if we can expect to service multiple hits inside a bank, the requirements for desired minimum tRRD can relax quite a bit - after all, we're not interested in switching banks nearly as quickly; we've still got more bursts to send off! Why, on the other hand, is it twice the num_hits * burst length? The whole point of organizing memory into banks was to hide some latencies associated with accessing things by having these things take place simultaneously - so, hopefully, you can have one bank put data on the bus, while some other bank is only getting ready to do it. Then, it takes its turn putting data on the bus just as the first bank is finished using it. Since we aim to interleave accesses between at least two different banks in this scheme, this further relaxes the desired minimum tRRD requirement by a factor of two.

tCCD ("column-to-column delay") limits how often we can select a column inside a row
+ Again, in the interests of peak bus utilization, we want to be able to be done with the column-accessing stuff every time we're finished putting a burst on the bus, so we can just keep firing off bursts back-to-back-to-back-...
++ The desired minimum tCCD would be 2(burst_length). A similar strategy with interleaved column accesses applies here, which is why we can relax this by a factor of two.

Anyway, so the big-picture idea here is that:
1) we can tolerate higher tRRD, as long as we expect more hits within the same bank
2) we really want our tCCD to be shorter, or at least equal to the burst length
If both these requirements are satisfied, we can achieve the coveted 100% bus utilization.

Well, let's head over to a memory vendor, and see what the memory they're selling us can actually do, and also compare it to what we need for either CPU or GPU workloads.

Typical CPU workload (random pattern; 1 hit per page): [vendor timing comparison not reproduced]

Typical GPU workload (2.5 hits per page average): [vendor timing comparison not reproduced]

I've also included tRC in this comparison, which is the "row cycle" time (which includes the amount of time we need to open a page, get data, close page and precharge), but let's not worry about that too much; the problem is already readily apparent from tRRD alone; namely that we can't get anywhere near desired tRRD for a typical CPU workload from anything other than memories specifically designed to handle this kind of access pattern.

On the flip side, it might look like DDR4, DDR5, and LPDDR could work great for graphics workloads - this is true only as far as latency is concerned. You're not going to get the same amount of bandwidth out of them, which is the engineering tradeoff made by those architectures.

In conclusion, there's no such thing as a one-size-fits-all DRAM architecture. As long as you've got a CPU, you should hang on to your DDRn memory. Putting the CPU on GDDR only starts to make sense in the context of a unified device, such as an APU - if you've got a smart enough memory controller, it could schedule memory accesses to both the GPU cores & CPU cores in such a way that the relatively random access pattern of the CPU is combined with the relatively predictable access pattern of the GPU in order to fill up the large gaps in bus utilization that would otherwise be left there by the CPU. The obvious downside of this is latency - you need to wait for a bunch of transactions to be requested by both units, and then you have to wait for the memory controller to have the opportunity to reorder them. The GPU probably doesn't care so much about the added latency (being a high-bandwidth machine), but the CPU (being a low-latency machine) certainly does.

pfft thats what caching is for! everything can be solved with enough caches and indirection

Arzachel
May 12, 2012

Malcolm XML posted:

pfft thats what caching is for! everything can be solved with enough caches and indirection

Speaking of, the console APUs likely have half the L3 cache like Renoir to save die area :v:

lDDQD
Apr 16, 2006

Malcolm XML posted:

pfft thats what caching is for! everything can be solved with enough caches and indirection

with a big enough cache, why even have dram?

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

lDDQD posted:

with a big enough cache, why even have dram?

:hmmyes:

i want an HBM on interposer APU, like that one frankenstein intel/amd chip.

I think milan/genoa are supposed to have something like it sans the GPU

forest spirit
Apr 6, 2009

Frigate Hetman Sahaidachny
First to Fight Scuttle, First to Fall Sink


GPU thread, I have the opportunity to purchase an EVGA GeForce RTX 2080 Ti XC Ultra for $1300 Canadian. This would be like $600 cheaper than buying it new. I know it's in good condition, as well.

Currently I have a 1070 for my Vive and a 1440p 144Hz monitor. That's why I've been jonesing for an upgrade, but I was waiting for the next generation of cards to come out. What would you do? I can afford it; I'm just diverting budget for an Index towards this. I can wait much longer on that, at least until a wireless solution is figured out.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Penpal posted:

GPU thread, I have the opportunity to purchase an EVGA GeForce RTX 2080 Ti XC Ultra for $1300 Canadian. This would be like $600 cheaper than buying it new. I know it's in good condition, as well.

Currently I have a 1070 for my Vive and a 1440p 144Hz monitor. That's why I've been jonesing for an upgrade, but I was waiting for the next generation of cards to come out. What would you do? I can afford it; I'm just diverting budget for an Index towards this. I can wait much longer on that, at least until a wireless solution is figured out.

The prospective performance increase of the 2080Ti's replacement has been the magically reducing number in WCCFtech articles. It started as high as +70%, and last I saw it was down to ~+30%. nVidia's not going to sell an ultra-premium GPU during a recession/depression, either.

So if you trust the seller/source and have the money to spend now, :shrug:. But do you *really* need to be spending $1300 Canuckbux right before things could potentially get really dicey economy-wise?

SwissArmyDruid
Feb 14, 2014

by sebmojo
Looking at the way Canadian rupees are going, they might hit 2:1 CAD/USD in the near future so..... hope you're an American?

fuf
Sep 12, 2004

haha
What's the best way to check if my PSU is powerful enough for my GPU? (it's a pretty old 650W PSU for a 2070S)
Could it work but underperform?

orcane
Jun 13, 2012

Fun Shoe
In general a PSU works until something tries to pull more power than the PSU can supply (even a very short spike can do it, if it lasts long enough) - at that point you'd at the very least be looking at sudden shutoffs/reboots, which can potentially damage any hardware connected to the PSU. It won't just keep running the card with lower power/performance limits, if that's what you mean.

What happens with your specific PSU depends on what else you have in your computer. The RTX 2070 Super has a TDP of 215 W, consumed closer to 230 W in some reviews, and transient peaks lasting between 1 and 10 ms (which trigger some PSUs' protection circuits) can go up to 330 W. You can use a calculator like https://outervision.com/power-supply-calculator to see what to expect from the rest of your computer and add those ~330 W.

Depending on the actual PSU's model and age, replacing it might be the sensible option; potentially damaging your computer with a cheap/old/insufficient PSU is not my idea of fun.
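
If you want to sanity-check the headroom by hand rather than with a calculator site, it's basically just addition. The GPU spike figure is the ~330 W mentioned above; the other wattages here are guesses for a typical mid-range build, so swap in your own parts:

code:
# Rough PSU headroom check. Only the 330 W GPU spike comes from the figures above;
# everything else is an assumed placeholder for a typical mid-range build.
budget_watts = {
    "RTX 2070 Super transient spike": 330,  # from the review figures quoted above
    "CPU under load": 150,                  # assumed mid-range CPU
    "Motherboard, RAM, SSD, fans": 60,      # assumed
}
total = sum(budget_watts.values())
psu = 650
print(f"estimated worst-case draw: {total} W of {psu} W ({psu - total} W headroom)")  # 540 W, 110 W spare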

orcane fucked around with this message at 12:43 on Mar 23, 2020

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib
650 W is way more than you need to drive a 2070 S with a modern PSU.

orcane
Jun 13, 2012

Fun Shoe

Lambert posted:

650 W is way more than you need to drive a 2070 S with a modern PSU.

It really depends what

fuf posted:

(it's a pretty old 650W PSU for a 2070S)
means exactly.

fuf
Sep 12, 2004

haha
It says Corsair TX650W on it...

just installed the 2070S and so far so good

VelociBacon
Dec 8, 2009

fuf posted:

It says Corsair TX650W on it...

just installed the 2070S and so far so good

It's only about whether your PSU is in warranty or not. If it's not in warranty, it could fry your whole system and you'll be hosed. An in-warranty 650w PSU is great with that GPU. An out-of-warranty 650w PSU is unacceptable with that GPU.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

fuf posted:

It says Corsair TX650W on it...

just installed the 2070S and so far so good

The TX650 has a 5 year warranty on it. As a general rule, PSUs should be eyed for replacement once they exit their warranty period, as they do slowly degrade over time. But, yeah, 650 is plenty for a 2070.

repiv
Aug 13, 2009

https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

Nvidia took another shot at improving DLSS, maybe it's good now? They're back to running an AI model on the tensor cores after that weird not-really-AI iteration.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Still a bit suspicious given all their shots are basically best-case, quality mode applied to 1080p. Want to see some shots at higher res / mid/low quality mode before I conclude it has any value.

Nvidia probably really needs this to work eventually because they seem committed to keeping tensor cores on consumer cards, and if they do nothing with them, that gives up a lot of their advantage over AMD.

Llamadeus
Dec 20, 2005
https://www.techspot.com/article/1992-nvidia-dlss-2020/

It was already implemented in Wolfenstein a month ago, so there are quite a few comparisons out there already.

BOOTY-ADE
Aug 30, 2006

BIG KOOL TELLIN' Y'ALL TO KEEP IT TIGHT

DrDork posted:

The TX650 has a 5 year warranty on it. As a general rule, PSUs should be eyed for replacement once they exit their warranty period, as they do slowly degrade over time. But, yeah, 650 is plenty for a 2070.

Eh, even then: I just did a quick online search and am finding review articles showing that PSU was released a while ago, anywhere from 2008 to 2013-14. Might be a good idea for him to shop around to be on the safe side, depending on what the rest of the system looks like; I have my doubts that the PSU is still under warranty.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
IDK where the goon phobia of old PSUs detonating and taking hardware with them comes from. It's really not justified. PSUs die, and they can kill stuff, but it's not really a common occurrence outside of garbage quality PSUs you shouldn't be buying to begin with.

Hardware isn't useful forever, run it hard until it dies, and as long as your PSU isn't a cheap piece of poo poo don't be afraid of running more demanding hardware on it. The chances it actually costs you anything beyond possibly replacing the PSU are incredibly slim.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

K8.0 posted:

IDK where the goon phobia of old PSUs detonating and taking hardware with them comes from.

15-20 years ago it was basically a lot more common. Even if it wasn't going to outright destroy your hardware, you could be almost certain that if you bought the cheapest PSU from a local store, it would have ripple that was so far out of spec you'd experience random crashes and other issues.

I can see where you're coming from - these days people are better informed, and there just isn't the market there once was for those very low end parts. I remember when you could buy a case with a PSU (lowest of the low quality) for less than you can buy the cheapest case today. And it was terrible, I mean really terrible, we're talking no rolled edges terrible. In fact, unless my memory is totally wrong, it feels like it used to be more common to find cases WITH power supplies than without, which has totally turned on its head. The PSU was almost seen as a throw-in freebie.

HalloKitty fucked around with this message at 00:40 on Mar 24, 2020

NotNut
Feb 4, 2020
I've been having various problems with my 5700 XT like one or both screens going black when I'm running a game, Photoshop loving up when I'm using a lot of graphical memory, or Chrome crashing whenever I go to Imgur. Is this likely caused by the card itself or the drivers?

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

You were also able to get a very good PSU for $50 every day of the week a few years ago. I recommend people replace out-of-warranty PSUs on new builds because a) it was cheap and b) either they had a lovely PSU to begin with, or they've had it for over 5 years and if they put it in a new box it'll probably be another >5 years before they think about it again. But for swapping in a single component I've always told people not to bother. Warranty periods are a convenient rule of thumb; it's not like PSUs become bombs as soon as the warranty is up.

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know

BIG HEADLINE posted:

The prospective performance increase of the 2080Ti's replacement has been the magically reducing number in WCCFtech articles. It started as high as +70%, and last I saw it was down to ~+30%. nVidia's not going to sell an ultra-premium GPU during a recession/depression, either.

I like how you simultaneously dismiss WCCFtech and also use them as a narrative benchmark for a future graphics card that we know literally nothing about.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Taima posted:

I like how you simultaneously dismiss WCCFtech and also use them as a narrative benchmark for a future graphics card that we know literally nothing about.

And I like how you perfectly illustrate why it's becoming so unenjoyable to post in SH/SC. If you weren't so eager to nail me to the loving wall for :smug: points, you'd have been able to pick up on the subtext of the "magically reducing number" mention.

It's probably going to be the usual +20% boost.

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know
Dude calm down. My main point was that you're basing your conjecture on nothing. Sorry if that upsets you. No one is trying to "nail you to the loving wall" (?)

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

It was never going to be 70% anyways. Honestly, talking about anything in wccftech like it means anything should just be grounds for a ban.

sauer kraut
Oct 2, 2004

NotNut posted:

I've been having various problems with my 5700 XT like one or both screens going black when I'm running a game, Photoshop loving up when I'm using a lot of graphical memory, or Chrome crashing whenever I go to Imgur. Is this likely caused by the card itself or the drivers?

I don't even know what to tell you :smith:
You can chime in on the 650-reply meltdown thread about the new March drivers here https://www.reddit.com/r/Amd/comments/fla9q2/adrenalin_2020_edition_2031_released/ or start a new topic, after you've done a clean install with DDU.
Maybe throw in your lot with the hundreds of posters who claim to have sold their 5700 to get a 2070 Super and lived happily ever after.

Vega and Navi are extremely cursed at the moment.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Digital Foundry did their piece on Doom 2020: https://www.youtube.com/watch?v=UsmqWSZpgJY

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva

repiv posted:

https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

Nvidia took another shot at improving DLSS, maybe it's good now? They're back to running an AI model on the tensor cores after that weird not-really-AI iteration.
DLSS at 1440p in Deliver Us The Moon actually improved visuals and performance for me noticeably so I was able to run full ray-tracing (which looked phenomenal). First time it's been really useful so far.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

HalloKitty posted:

In fact, unless my memory is totally wrong, it feels like it used to be more common to find cases WITH power supplies than without, which has totally turned on its head. The PSU was almost seen as a throw-in freebie.

Your memory serves. I remember for a long period my advice to people was "Buy Case, throw away PSU".

I think a lot of people, the last time they upgraded their cards, maybe had to replace their old 400-500 W PSUs, but a 650+ W PSU is most likely still going to be fine unless it's getting really old. Using the warranty as a benchmark is conservative but fine; I wouldn't say you're at any major risk if your PSU is 6 years old or anything, especially if the use has been reasonably light.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Lockback posted:

Using the warranty as a benchmark is conservative but fine; I wouldn't say you're at any major risk if your PSU is 6 years old or anything, especially if the use has been reasonably light.

Yeah, pretty much this. It's not like a PSU is gonna fall over dead a month after the warranty ends (unlike some cars), but more that it's just a good rule of thumb, especially for cases like his where the original wattage is sufficient, but not overly so, for the planned use: a 650W is certainly enough for a 2070+system, but it's not like it's a 750+ where he'd have hundreds of spare watts--the harder you push a PSU, the shorter its life.

It's also worth mentioning that a bunch of the Corsair, Seasonic, etc, PSUs have up to 10 year warranties and only cost a few bucks more than some of the cheaper versions, so there's not a whole lot of reason to be getting a cut-rate one anymore. Just get something quality and then not worry about it for a decade.


TorakFade
Oct 3, 2006

I strongly disapprove


DrDork posted:

Yeah, pretty much this. It's not like a PSU is gonna fall over dead a month after the warranty ends (unlike some cars), but more that it's just a good rule of thumb, especially for cases like his where the original wattage is sufficient, but not overly so, for the planned use: a 650W is certainly enough for a 2070+system, but it's not like it's a 750+ where he'd have hundreds of spare watts--the harder you push a PSU, the shorter its life.

It's also worth mentioning that a bunch of the Corsair, Seasonic, etc, PSUs have up to 10 year warranties and only cost a few bucks more than some of the cheaper versions, so there's not a whole lot of reason to be getting a cut-rate one anymore. Just get something quality and then not worry about it for a decade.

I have an EVGA G3 750W with a 10+2 year warranty :v: This thing, purchased at the end of 2018, will last me until 2030, if we're still alive by then.

Now if there was a scenario where I could use all that power, say some new powerful yet relatively affordable GPUs, I'd be even happier...
