Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Dr. Video Games 0031 posted:

Doing everything in one big socketed chip on a regular motherboard with maybe a couple vestigial PCIe slots would probably be the best option from a performance standpoint though.

Bah gawd, that's Apple's music!


kliras
Mar 27, 2021

Weird Pumpkin posted:

Updated my Nvidia drivers yesterday, and now when something runs in windowed mode I'm getting like... weird screen flickering. I've never seen it before, and it doesn't seem to manifest running things full screen/in FurMark, so I don't think it's the card itself? It's weird though, because it definitely looks like something is bugging out; I had a similar issue with the 3090 that was in there before

Gonna revert to the previous version and see if that fixes it for whatever reason. I think I've heard of G-sync issues using windowed screens, but I thought that was just from running two different applications or something?

Edit: seems to have fixed it, but I'll have to keep an eye out for a bit. I wonder if it's just something weird in this release that doesn't like my configuration
it's a known issue, there's a hotfix for it here

Shipon
Nov 7, 2005

SourKraut posted:

Bah gawd, that's Apple's music!

Yup and look forward to being able to do nothing interesting with your computer! No more undervolting (security risk, you know), no more customizing your OS, no more buying or installing software from anywhere but "trusted" stores.

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

Shipon posted:

Yup and look forward to being able to do nothing interesting with your computer! No more undervolting (security risk, you know), no more customizing your OS, no more buying or installing software from anywhere but "trusted" stores.

Oh noes, my undervolting!

The only point of undervolting is to keep the (by comparison to the M1 Pro/Max/Ultra) incredibly power-hungry x86 CPUs from blasting your fans at load. Most people would rather just have their CPUs run cool and quiet without that tinkering.

As for restricting your software to the Apple Store, that's not the case on macOS. When/if they ever make the leap to restricting apps to the Mac Store, then that will be a relevant concern. I seriously doubt that's going to happen on macOS anytime soon; regardless, that's a separate issue from having powerful APUs. The x86 market is not Apple, and discrete GPUs/CPUs won't disappear until the power/price advantage of APUs makes separate components untenable from a price/performance perspective, and that's going to be a long, long while.

Yudo
May 15, 2003

I don't bother undervolting my CPU: it uses ~100W under load, which I think is reasonable for something with 12 full-featured cores at 4.5GHz. A GPU at 300W+ is where undervolting is not a feature I would want to part with.
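
For anyone wondering why undervolting moves the needle on heat and fan noise at all: to first order, switching power scales with the square of core voltage at a fixed clock, so even a modest undervolt trims a meaningful number of watts. A rough sketch of that relationship in Python (the P ~ C*V^2*f model is a simplification that ignores leakage and boost behavior, and every number below is made up for illustration):

# First-order estimate of what an undervolt buys at fixed clocks,
# using the classic dynamic-power model P ~ C * V^2 * f.
def scaled_power(base_power_w: float, base_v: float, new_v: float) -> float:
    """Scale dynamic power quadratically with core voltage, frequency held constant."""
    return base_power_w * (new_v / base_v) ** 2

# Hypothetical 300W GPU dropped from 1.05V to 0.95V:
print(f"{scaled_power(300.0, 1.05, 0.95):.0f} W")  # ~246 W
# Hypothetical 100W CPU with a -50mV offset from 1.20V:
print(f"{scaled_power(100.0, 1.20, 1.15):.0f} W")  # ~92 W

Real chips deviate from the pure V^2 math (leakage, clock stretching, boost algorithms reclaiming the headroom), but it's the basic reason a 300W-class GPU is where the tinkering pays off most.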

Weird Pumpkin
Oct 7, 2007

kliras posted:

it's a known issue, there's a hotfix for it here

oh dang, thanks for the link! bout gave me a heart attack at first thinking somehow my card had died

Now I know what that looks like at least, might just wait for the next version to try it out since I wasn't really having any issues with the previous ones and I'm not playing Diablo 4

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

Paul MaudDib posted:

If you want stacked APUs you give up the user-configurability, that is the apple approach.

User-configurability isn't as high a priority for most of the low-to-mid-end market as it is for people who post here. Most folks would probably regard an APU with its RAM attached much like a laptop: you just buy a new one when the old one gets too slow.

Josh Lyman
May 24, 2009


Dr. Video Games 0031 posted:

I mean, there's talk of Intel's upcoming L4 cache solution supporting up to 1GB (on paper, early products won't have that much), so that's pretty much halfway to having the memory on the package already.

But how many 4K ultra textures does that hold???

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Shipon posted:

Yup and look forward to being able to do nothing interesting with your computer! No more undervolting (security risk, you know), no more customizing your OS, no more buying or installing software from anywhere but "trusted" stores.

Overvolting is going away entirely. Undervolting is going away mostly entirely. The 7800X3D is just the first case of a lesson that a series of companies will learn painfully: the tolerances of 7nm are amazingly narrow, and 5nm and 3nm have only gotten worse. Integrating across nodes (and remember the cache die is an N6 cache-optimized nodelet) is already painful, and stacking makes the margins even finer again. And soon people will be building products with heterogeneous fabs/metal stacks/etc. The PHY and link still have to be very carefully specified, actually probably more finely than before, because now it's dumping right into another team's chip.

The vendor knows where the optimum of the chip is better than the user does. Most things aren't really "factory overvolted", they are reliably volted. It has to work in 100% of situations for 100% of users under 100% of workloads. Users hand-tuning it to save 5% in 99% of situations is w/e, the company just doesn't want returns. And now overvolting is no bueno entirely, poo poo's going to burn out.

The long-term fix is FIVR/DLVR: the core itself is in charge of dynamically slewing core voltage based on actual load, forecast load, and temperature/voltage across individual hotspot sensors inside the core. Much finer control of every voltage on the chip, and if it slews out of the expected bounds it can immediately clock stretch/etc. It's also way more efficient to move power at a higher voltage and then step it down at the point of use. I completely expect AMD is going to start flirting with DLVR soon if they haven't, and while MLID is MLID, the rumor of Raptor Lake Refresh getting more MT out of getting the DLVR working would not surprise me at all. It's a target everyone needs to be reaching for soon.
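
A back-of-the-envelope sketch of that last point, assuming a purely resistive delivery path and lossless point-of-use conversion (neither is true in practice, and every number is invented): for the same delivered power, a higher rail carries less current, and resistive loss falls with the square of the current.

# Why shipping power at a higher rail voltage and stepping it down at the
# point of use wastes less in the board/package delivery path (I^2 * R).
def delivery_loss_w(delivered_w: float, rail_v: float, path_mohm: float) -> float:
    """Resistive loss in the delivery path for a given rail voltage."""
    current_a = delivered_w / rail_v
    return current_a ** 2 * (path_mohm / 1000.0)

# 150W to the cores through a (made-up) 0.5 milliohm path:
for rail_v in (1.2, 1.8, 2.0):
    print(f"{rail_v:.1f} V rail: {delivery_loss_w(150.0, rail_v, 0.5):.1f} W lost")
# 1.2 V: ~7.8 W lost vs 2.0 V: ~2.8 W lost; less current, less waste.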

Skylake-X was a god drat disaster of a uarch, but FIVR is a tech whose time has come. AMD tried out a much more dynamic boost clock with Vega before they did Zen 2 (dynamic boost and clock stretching), NVIDIA did it with Pascal, and Intel went the other way with Skylake-X and did voltage control. But the need for some form of clock/voltage management like this is obvious at 7nm. And it only gets more necessary in 2.5D and especially 3D integration; now you are going to see companies do both.

What does overclocking look like on a 3nm node with a knife-edge between full boost and overheat/runaway with stacked cache or active logic stacking or full inter-silicon 3D design? Probably nothing. Even negative voltage offsets: why, and at what risk of weird edge-case failures? Let the chip run. It is like the Itanium lesson: power management is something that has to be done at runtime, based on knowledge of the actual conditions inside the chip. As the necessary margin of performance becomes more and more narrow, the conditions of operation need to be more tightly constrained.

The computer of the future includes a CPU and a dog, the CPU does the processing and the dog bites the human if they reach for the voltage controls.

Paul MaudDib fucked around with this message at 18:28 on Jun 3, 2023

Dr. Video Games 0031
Jul 17, 2004

3060 Ti for $280: https://www.bhphotovideo.com/c/product/1700891-REG/msi_g306tv2x8cl_geforce_rtx_3060_ti.html
G6X version for $300: https://www.bhphotovideo.com/c/product/1744205-REG/asus_dual_rtx3060ti_o8gd6x_white_dual_geforce_rtx_3060.html

Now the good discounts are really rolling in. (just get the cheaper regular one imo)

Yudo
May 15, 2003


About loving time. Thanks for posting this.

Dr. Video Games 0031
Jul 17, 2004

I'm only now noticing that B&H is on vacation and checkout is disabled until Saturday evening, so be sure to set an alarm for 9:45 PM ET I guess.

Yudo
May 15, 2003

Dr. Video Games 0031 posted:

I'm only now noticing that B&H is on vacation and checkout is disabled until Saturday evening, so be sure to set an alarm for 9:45 PM ET I guess.

It's shabbat and B&H takes the sabbath very seriously. It's a cool store if you are ever in NYC.

ughhhh
Oct 17, 2012

Yudo posted:

It's shabbat and B&H takes the sabbath very seriously. It's a cool store if you are ever in NYC.

They hosed over their warehouse workers and have repeatedly been accused of racism, both in the workplace and toward customers, so... I guess their conveyor belt system for bringing products from storage to the counter is cool.

Yudo
May 15, 2003

ughhhh posted:

They hosed over their warehouse workers and have repeatedly been accused of racism, both in the workplace and toward customers, so... I guess their conveyor belt system for bringing products from storage to the counter is cool.

It is cool, thanks for explaining.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
When are we supposed to get desktop APUs, anyway? I wanna make an A620 / Ryzen 7600G build or however they're gonna name it

njsykora
Jan 23, 2012

Robots confuse squirrels.


ughhhh posted:

They hosed over their warehouse workers and have repeatedly been accused of racism, both in the workplace and toward customers, so... I guess their conveyor belt system for bringing products from storage to the counter is cool.

It's nice that America has a true equivalent to Argos in the UK.

Yudo
May 15, 2003

njsykora posted:

It's nice that America has a true equivalent to Argos in the UK.

B&H is not even remotely as large as Argos: they have one retail location. The US equivalent would be something like Target, which is a national discount department store.

shrike82
Jun 11, 2005

I bought a bunch of computer parts from them a decade ago while living in NYC, pretty good service

Looks like they never flamed out unlike Newegg or Tiger

Arivia
Mar 17, 2011

Dr. Video Games 0031 posted:

No, I mean a stacked APU with cpu+gpu+memory on a board that plugs into a backplane that can have other add-in boards if you want them.

edit: I guess in this scenario, the memory is soldered onto the board like a GPU and isn't on the package, so AIB partners can do different memory configurations if they want. Doing everything in one big socketed chip on a regular motherboard with maybe a couple vestigial PCIe slots would probably be the best option from a performance standpoint though.

Bring back S100

MarcusSA
Sep 23, 2007


drat just missed this Micro Center deal



You can get a pretty decent budget gaming PC going with these prices.

Shipon
Nov 7, 2005

Paul MaudDib posted:

Overvolting is going away entirely. Undervolting is going away mostly entirely. The 7800X3D is just the first case of a rule that a series of companies will learn painfully: the tolerances of 7nm are amazingly narrow, and 5nm and 3nm have only gotten worse. Integrating across nodes (and remember the cache die is an N6 cache-optimized nodelet) is already painful, and stacking makes the margins even finer again. And soon people will be building products with heterogeneous fabs/metal stacks/etc. The PHY and link still have to be very carefully specified, actually probably more finely than before, because now it's dumping right into another team's chip.

The vendor knows where the optimum of the chip is better than the user does. Most things aren't really undervolted, they are reliably volted. It has to work in 100% of situations for 100% of users under 100% of workloads. Users hand-tuning it to save 5% in 99% of situations is w/e, the company just doesn't want returns. And now overvolting is no bueno, poo poo's going to burn out.

The long-term fix is FIVR/DLVR: the core itself is in charge of dynamically slewing core voltage based on actual load, forecast load, and temperature/voltage across individual hotspot sensors inside the core. Much finer control of every voltage on the chip, and if it slews out of the expected bounds it can immediately clock stretch/etc. It's also way more efficient to move power at a higher voltage and then step it down at the point of use. I completely expect AMD is going to start flirting with DLVR soon if they haven't, and while MLID is MLID, the rumor of Raptor Lake Refresh getting more MT out of getting the DLVR working would not surprise me at all. It's a target everyone needs to be reaching for soon.

Skylake-X was a god drat disaster of a uarch, but FIVR is a tech whose time has come. AMD tried out a much more dynamic boost clock with Vega before they did Zen 2 (dynamic boost and clock stretching), NVIDIA did it with Pascal, and Intel went the other way with Skylake-X and did voltage control. But the need for something like this is obvious at 7nm. And it only gets more necessary in 2.5D and especially 3D integration.

What does overclocking look like on a 3nm node with a knife-edge between full boost and overheat/runaway with stacked cache or active logic stacking or full inter-silicon 3D design? Probably nothing. Even negative voltage offsets: why, and at what risk of weird edge-case failures? Let the chip run. It is like the Itanium lesson: power management is something that has to be done at runtime, based on knowledge of the actual conditions inside the chip. As the necessary margin of performance becomes more and more narrow, the conditions of operation need to be more tightly constrained.

The computer of the future includes a CPU and a dog, the CPU does the processing and the dog bites the human if they reach for the voltage controls.

these are valid points, but my point is more that we're going to be forced to deal with locked-down configurations beyond what's necessary for silicon reliability, in the name of forcing you to buy from specific vendors, or because infosec researchers need to keep publishing conference papers and forcing people to mitigate the most edge of edge cases. and any sort of big architecture change that forces a clean break from something you've been using for decades is going to make it a whole lot easier to force on people

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know

Zero VGS posted:

Oh I forgot to hit post, but I was gonna say, TLOU is better after patches but still doesn't run too great if you have 10GB of VRAM or less. It highlights the shortcomings of many cards instead of the capabilities.

Another good candidate would be Elden Ring, because it isn't super hard to hit 4K60, it has fantastic art direction, and the gameplay is great.

This video really drives home how many breathtaking scenes and set pieces there are:

https://www.youtube.com/watch?v=9LkuXemiZ-I

Yeah sorry, maybe I wasn't clear: I have the best gaming computer, I don't think it could be better in any specific way, I'm just trying to figure out the best showcase titles for it so we have some mindblowing stuff to play when my brother comes down. I am not concerned with how the games function on a card with less RAM and whatnot (though I am definitely sympathetic to people who do have to consider those things).

I really appreciate all of the answers, this thread is so good.

I've been running through a lot of titles to find the best looking ones and here's what I've come up with so far:

- TLOU
- RDR 2 especially with the draw distance all the way up, loving fantastic
- God of War 1 on pc is kind of astonishing at times but mostly due to art direction. Still though, the audio and visuals hold up really well imo.
- Lego Builder's Journey, surprisingly. The ray tracing implementation is bonkers imo.

So as you can see I'm a total sucker for Sony ports.

The main reason I was curious what people thought is that I haven't been able to try some of the new hot games like Jedi Survivor and other recent showcase titles, so I have no idea on a personal level how those measure up graphically.

I'm going to take the recommendation of CP2077 with the new ray tracing mode for sure, but I'm actually kind of surprised how many people brought up that game as an example of GPU greatness. Don't get me wrong, it's very good, but I don't know- for me, the game as a whole doesn't come together artistically on the same level as a title like TLOU. It has some dated textures and things like that in my experience...

Is it possible I'm doing something wrong here? Are people saying the game looks fantastic doing user mods too and stuff? It's also possible (probable even) that I have bad taste.

Taima fucked around with this message at 09:52 on Jun 3, 2023

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
How many fancy-looking games does he want to check out? Maybe you should just pick one and have some fun two-player thing lined up for afters, like Cuphead.

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know
That is a good point. He has two kids and his wife has Crohn's disease, so a lot of responsibility is on his shoulders. It's insane that he even has a few days to come spend with me- I'm really honored that he chose me to spend his free time with, so I just want to show him all of the stuff he wasn't able to check out over the last few years.

But you're right, maybe I'm approaching this the wrong way. I would be totally ok with just going with a top 3 situation. I just want to do right by him and give him a taste of all the stuff he's wanted to enjoy but can't. And I haven't been able to play every game, especially the recent ones, so I value yalls opinion; the expertise that is regularly displayed in this thread is incredible and I have a lot of faith in you guys to steer me right.

Anyways the suggestions have been really helpful and I don't want to be annoying so we can talk about something better, cheers :)

e: I will leave yall with one other worthy recommendation: surprisingly, the new Zelda on Yuzu is buck wild! The art style is explicitly made to look good on a potato, so scaling it up to 4K (or whatever your native res is in your case) looks shockingly nice for what it is. And the game even supports Windows Auto HDR which is mind boggling and makes the game look even more fantastic.

That's really the secret sauce behind the Sony ports looking so good, of course- they prioritize art direction to maximize performance, and they are also made for HDR. I totally get that the pc space doesn't really do HDR until mid-high end but it's one of those things that, imo, transforms how vibrant and detailed a game can look. Conversely, a lot of pc games just kinda brute force graphics by stuffing in a bunch of random visual fluff that is less expensive to produce. Which can be fine, a lot of pc-first titles look great. But imo there's something special about a game that's produced for next gen consoles and then ports over to PC with modern GPU firepower. That being said, UE5 might flip that the other way in the future.

Nintendo would make so much money if it would release titles for pc but require that you owned a Switch and the retail game before you could do so. That way the Switch can stay "for kids" which has been the excuse for the lacking system power since forever, and tryhard adults and teenagers could run it on their GPUs. It wouldn't have to be optimized whatsoever, because the games are so sparse from a graphics standpoint. Just make sure there aren't many platform-specific bugs and ship that poo poo. Watch the money roll in. poo poo, literally just condone an official emulator package like they themselves do on the Switch for older systems, literally anything but forcing us to run a console that plays at 25 fps at 144p :shrug:

Taima fucked around with this message at 12:25 on Jun 3, 2023

ijyt
Apr 10, 2012

Yudo posted:

B&H is not even remotely as large as Argos: they have one retail location. The US equivalent would be something like Target, which is a national discount department store.

it was a conveyor belt joke

Indiana_Krom
Jun 18, 2007
Net Slacker

Control, especially with the new HDR patch, looks amazing with everything turned up. Dying Light 2 is another one that pulls off some incredible results during the mornings/evenings with ray tracing and everything set to max. And I haven't played it myself, but people say Metro Exodus Enhanced Edition is another good RTX showcase.

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?
It seems CDPR cleaned up the DX12 version of W3 in the last few months. Cause it now runs reasonably well (as well as I would expect a 3060ti to handle an RT title) and very smoothly, whereas around launch it was a stutterfest.

Stanley Pain
Jun 16, 2001

by Fluffdaddy
Darktide with all the RTX stuff on is probably one of the best looking games (lighting, atmosphere) out right now.

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Taima posted:

Is it possible I'm doing something wrong here? Are people saying the game looks fantastic doing user mods too and stuff? It's also possible (probable even) that I have bad taste.

The Cyberpunk aesthetic is garish and grungy by nature, and that game really piles on the neon lights and saturation, so yeah artistically it's going to be too busy and cramped. It's meant to be offensive to the senses, even down to that demon porno music. RDR2, Elden Ring or emulated Zelda are gonna give you some big expanses and setpieces to take it all in.

Taima posted:

the new Zelda on Yuzu is buck wild! The art style is explicitly made to look good on a potato, so scaling it up to 4K (or whatever your native res is in your case) looks shockingly nice for what it is. And the game even supports Windows Auto HDR which is mind boggling and makes the game look even more fantastic.

Another title that punches above its weight like that is Sekiro (2019); instead of ray tracing, it impresses with action and cadence. You get all these scenic Japanese vistas, rooftop ninja battles, and fights against huge bosses so you can appreciate their animation. I mean check this out, they made a giant ape farting and throwing poo poo at ya look fantastic:

https://www.youtube.com/watch?v=eZYCrnN0rTs

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Shipon posted:

these are valid points, but my point is more that we're going to be forced to deal with locked-down configurations beyond what's necessary for silicon reliability, in the name of forcing you to buy from specific vendors, or because infosec researchers need to keep publishing conference papers and forcing people to mitigate the most edge of edge cases. and any sort of big architecture change that forces a clean break from something you've been using for decades is going to make it a whole lot easier to force on people

I just think that very soon there won't be any voltage controls anyway. 3D stacking is going to mean it's essentially pointless, so why even bother surfacing the controls anymore? Literally even today it's almost entirely downside for the vendor, and even for the user there is almost no upside left. People can't beat the turbo algorithm anymore, and even a voltage offset hardly gets you anything these days. You might squeeze out another 5%, or it might cause problems in edge-case workloads the user isn't testing and doesn't know about, or it might actually be causing clock stretching/etc which lowers scores without the user noticing, etc. SGX/Plundervolt-style exploits are just the cherry on top of a risk-reward balance that has already almost entirely shifted against allowing users to tinker with this.

2.5D cache, 2.5D logic, and 3D logic are all going to shift the balance even farther away from manual tinkering producing any real gains. It's almost purely downside in the future, you almost certainly will not be able to produce any real gains and there is massive risk of edge-case problems like Plundervolt cropping up, not just in SGX but in your actual core logic. If you are "weaving" logic between two dies that are direct-bonded face-to-face, if you undervolt enough then half of your logic might deliver incorrect logical operations in a user-controlled fashion (like power-glitching a console). Hopefully it would just crash but Plundervolt shows this is not always the case even with a simple monolithic die.

Client Skylake, Zen+, Maxwell, and Polaris were the end of an era, the last basic-bitch clock control architectures before true dynamic turbo took off. Now is the time of monsters.

There are a very few things like VSOC and VDIMM that may continue to be exposed, and it's entirely possible that special XOC BIOS releases continue to expose these controls. That's how it's done for GPUs too - Galax HOF or EVGA Kingpin default BIOS are not what the card can actually do; you need to talk to the vendor and prove your credentials, and then they'll give you the VBIOS that lets you run 1000A, because that will blow up the card on ambient.

But even with memory overclocking... if you were AMD would you continue to allow running past the official spec knowing that vendors are gonna be goosing VSOC? Even when Asus isn't insta-popping chips it's still driving up AMD and Intel's warranty RMAs over the 3 year window, XMP kicking up voltages is nothing new and chips do fail in the 3-5 year window quite frequently. My delidded 9900K was killed by XMP, golden sample too :smith:

The only real counterbalance is that AMD is currently the one who benefits more from memory clocks due to fabric, so locking it down would hurt their scores a bit more than Intel's. But in a vacuum I think the 7800X3D incident drives home the problems with letting users tinker with something even as banal as memory clocks (and associated voltages). The official spec is set where it's set for a reason; if AMD was confident that a chip could survive a higher voltage long-term, they would set that voltage out of the box and raise the official memory clock spec. It sounds like it's a problem in edge cases everywhere (some 5700X chips also flamed out), but the 7800X3D is more vulnerable because of the stacked die, and that's going to continue to be a problem too most likely. Stacking adds new integration problems around thermals and voltages; this is an overall problem area that's been rising for a while and will be generally problematic for 2.5D and 3D stacking regardless of what AMD does. You have to control everything more and more finely to get these highly integrated stacks to run stable.

Also, in the "everything runs from on-chip DLVR-controlled voltages" world, the board partners have far less control or ability to tinker anyway. Boards will get a lot simpler, you might not even need separate rails for VSOC and such anymore, because that's just converted at point-of-use from a supply voltage running 2V or whatever. And that completely shifts the locus of control away from partners and over to AMD - partners simply will not be able to set voltages higher than AMD wants them to set, because AMD controls the voltage regulators. If AGESA says you can't set 1.4V on VSOC, you won't be able to do it.

Pushing that type of fine-grained control away from vendors is probably a good idea anyway. From the vendor perspective it's simple, you need to drive a flat 2V (or whatever) and that's it, the supply voltage will be selected to be high enough that droop isn't a problem/etc. There'll be no LLC or anything like that anymore either because that type of final regulation is all handled at the DLVR level. Overvolting from the motherboard side won't really exist, it'll all be handled by the onboard regulators. Any tinkering would have to happen via controls exposed by AGESA for the onboard DLVRs. So once DLVRs are used for all the major rails, we should never see another incident of the partners "setting too high a voltage", at that point it would be AMD who is determining the valid ranges.

And when you have 2.5D stacked logic or 3D logic, that higher voltage lets each chiplet run its own voltages even if they're different foundries or nodes, and just step it down to whatever that chiplet wants. It seems practically necessary to have DLVR to step into a true heterogeneous 2.5D/3D world instead of just 7nm-cache-on-7nm-logic stacking. If you want off-the-shelf tiles, first you have to agree on a supply voltage and a PHY.

Like it's just inevitably going to go away for a ton of reasons, it's not just "haha security researchers trying to justify their existence with plundervolt", that stuff is going to become a bigger and bigger problem for rapid aging/electromigration (which is already a big problem) or logic faults, and it's going to be enough of a pain to integrate 2.5D/3D already without users tinkering. The benefit of allowing tinkering is already tiny and going to become even smaller because the system is going to be run at a tighter and tighter margin of stability/operation as these integration needs deepen.

Paul MaudDib fucked around with this message at 05:08 on Jun 4, 2023

kliras
Mar 27, 2021
i like the benefit of curve optimization for my 5800X3D, but i can't imagine the abject misery of being a developer whose games might be crashing because gamers think they know better and oc their vram, undervolt their gpu's, and curve optimize their cpu's

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Shipon posted:

these are valid points, but my point is more that we're going to be forced to deal with locked-down configurations beyond what's necessary for silicon reliability, in the name of forcing you to buy from specific vendors, or because infosec researchers need to keep publishing conference papers and forcing people to mitigate the most edge of edge cases. and any sort of big architecture change that forces a clean break from something you've been using for decades is going to make it a whole lot easier to force on people

Paul already put this far better than I would have, but we're rapidly hitting a point where you're effectively being "locked down" by the limitations of physics anyway. Yeah, I guess leaving full access to any form of control in the BIOS would still be ideal, but then it also opens it up for people to inadvertently gently caress their system up because they hosed with poo poo they shouldn't have.

And if by some chance you possess both the knowledge and funds to go into areas like exotic cooling in order to squeeze out whatever minor performance gains you can, then at that point you have more money than brains and would be better served keeping the system you already have and putting those upgrade funds toward something better, like savings or charity.

I think the main long-term user adjustment that will remain is the ability to undervolt, since we've seen that provide some benefit, but as Paul already mentioned above, there are realistically limits and benefits to that too.

As for:

Shipon posted:

Yup and look forward to being able to do nothing interesting with your computer! No more undervolting (security risk, you know), no more customizing your OS, no more buying or installing software from anywhere but "trusted" stores.

I think this already covers it properly, but this seems more like "let's poo poo on Apple" than anything based in proper justification. For all the restrictions people seem to believe Apple will impose, when it comes to macOS they've still largely remained... hands off, at least in terms of restrictions. They don't even try to push people toward the Mac App Store, and actual users know to typically avoid it when/where possible, because often the software that ends up on the App Store is feature-reduced relative to what you can get from the developers themselves (:looking at you, Parallels and Coherence mode:)

Happy_Misanthrope posted:

As for restricting your software to the Apple Store, that's not the case on macOS. When/if they ever make the leap to restricting apps to the Mac Store, then that will be a relevant concern. I seriously doubt that's going to happen on macOS anytime soon; regardless, that's a separate issue from having powerful APUs. The x86 market is not Apple, and discrete GPUs/CPUs won't disappear until the power/price advantage of APUs makes separate components untenable from a price/performance perspective, and that's going to be a long, long while.

And my main original point is that what Dr. Video Games 0031 mentioned, in terms of one big socketed chip with some vestigial PCIe slots, is basically the direction Apple is rumored to be taking the Apple Silicon Mac Pro, though in Apple's case the chip is probably soldered instead of socketed; still, a socketed SoC on a motherboard would probably be user-replaceable and not "locked down".

CaptainSarcastic
Jul 6, 2013



Taima posted:

But you're right, maybe I'm approaching this the wrong way. I would be totally ok with just going with a top 3 situation. I just want to do right by him and give him a taste of all the stuff he's wanted to enjoy but can't. And I haven't been able to play every game, especially the recent ones, so I value yalls opinion; the expertise that is regularly displayed in this thread is incredible and I have a lot of faith in you guys to steer me right.

Anyways the suggestions have been really helpful and I don't want to be annoying so we can talk about something better, cheers :)

I'm going to be annoying and add one more comment, but another possible thing you could use to showcase is to take an older game and dial up the postprocessing really high in the Nvidia Control Panel. How well this works varies depending on the exact game, but I've had impressive results with a few. I want to say the Dishonored games benefited from this, and stuff like the Mass Effect Legendary Deluxe Extreme Titanium editions or whatever the rereleases were called. Maxing out the settings in-game and then having Nvidia add even more AA and such can make older games look new again.

UHD
Nov 11, 2006


kliras posted:

i like the benefit of curve optimization for my 5800X3D, but i can't imagine the abject misery of being a developer whose games might be crashing because gamers think they know better and oc their vram, undervolt their gpu's, and curve optimize their cpu's

game devs have already been annoyed at poo poo like this for decades

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

CaptainSarcastic posted:

I'm going to be annoying and add one more comment, but another possible thing you could use to showcase is to take an older game and dial up the postprocessing really high in the Nvidia Control Panel. How well this works varies depending on the exact game, but I've had impressive results with a few. I want to say the Dishonored games benefited from this, and stuff like the Mass Effect Legendary Deluxe Extreme Titanium editions or whatever the rereleases were called. Maxing out the settings in-game and then having Nvidia add even more AA and such can make older games look new again.

Mass Effect also really benefits from DSR because of some quirk about its renderer and antialiasing/post processing. Can't remember if it's that it's a deferred renderer or that it's not a deferred renderer.

Josh Lyman
May 24, 2009


Has anyone had the experience that the "High Performance" power plan in Windows actually improves FPS? Maybe Blizzard made some optimizations since the Diablo 4 server slam weekend (unlikely) but I switched from "Balanced" and I'm actually getting 80fps in town on 1440p high with FSR quality with my 1070 as opposed to 70fps before.
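
In case anyone wants to flip plans from a script instead of digging through the Control Panel, the built-in powercfg aliases can be driven from Python; a quick sketch, Windows-only, assuming your account is allowed to change the active plan:

# Check and switch the Windows power plan via powercfg's built-in aliases
# (SCHEME_BALANCED = Balanced, SCHEME_MIN = High performance).
import subprocess

def active_plan() -> str:
    """Return powercfg's description of the active scheme (GUID plus name)."""
    out = subprocess.run(["powercfg", "/getactivescheme"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def set_plan(alias: str) -> None:
    """Activate a plan by alias, e.g. SCHEME_MIN or SCHEME_BALANCED."""
    subprocess.run(["powercfg", "/setactive", alias], check=True)

print("before:", active_plan())
set_plan("SCHEME_MIN")  # High performance
print("after:", active_plan())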

Stanley Pain
Jun 16, 2001

by Fluffdaddy

Josh Lyman posted:

Has anyone had the experience that the "High Performance" power plan in Windows actually improves FPS? Maybe Blizzard made some optimizations since the Diablo 4 server slam weekend (unlikely) but I switched from "Balanced" and I'm actually getting 80fps in town on 1440p high with FSR quality with my 1070 as opposed to 70fps before.

There were 2 drivers that came out for NV. Could be that too if you upgraded.

Josh Lyman
May 24, 2009


Stanley Pain posted:

There were 2 drivers that came out for NV. Could be that too if you upgraded.

Ah yes, I did, that makes a lot more sense. Game is now very playable, which is nice since this will be my most demanding use case until the 50 series comes out. Good guy Nvidia? :thunk:

Josh Lyman fucked around with this message at 23:22 on Jun 3, 2023


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

kliras posted:

i like the benefit of curve optimization for my 5800X3D, but i can't imagine the abject misery of being a developer whose games might be crashing because gamers think they know better and oc their vram, undervolt their gpu's, and curve optimize their cpu's

this is 1000% bullshit and pulled out of my rear end, but I truly always have wondered if some of the 5700XT problems were segfault-related too. Overwatch "Render Target Lost", yes, that's very obviously a driver (or user) problem. But for the general "poo poo still crashes for some people" problems I think there are probably a couple of overlapping problems. Like there almost has to be.

Power quality is clearly one problem; fixing it helps a lot of people. Sometimes changing completely random stuff seems to help, and maybe that is causing some very slight SOC/system agent problems that translate into slight PHY or controller instability for PCIe, and maybe the 5700XT is particularly susceptible to that for some reason (early PCIe 4.0 PHY?). Sometimes switching down to PCIe 3.0 helps, sometimes it doesn't, that kind of thing. Usually the Linux drivers just seem to run better for some people who still have crashes?

But I always wonder about the segfault bug too. It was not Linux-only; it was reproducible on Windows with some compiler workloads. People on Linux just compile more than people on Windows. And compiling regularly shakes out all kinds of core microarchitectural bugs and cache bugs in general; it's a crazy intensive workload for the scheduling and scoreboarding and overall correctness of the core and cache.

What if, say, shader compiling triggered it, or something else? And maybe it's not the Linux drivers per se, but that there's more identification and mitigation of the problem under Linux? Again, it's not Overwatch we're talking about here, but if you're walking around in an open-world game you're compiling 24/7.

But other people obviously get it on pretty much all conceivable hardware specs (including Zen2+), and some people have great PSUs, and some people have problems even at PCIe 3, etc. That's why I truly think it's an amalgam of different problems that all pretty commonly manifest as "PHY shits out" or "voltage shits out" or something like that. A couple of common problems that are just impossible to diagnose because the presentation is near-identical and the failure modes are common.
