|
Geemer posted:Only if your monitor is on the whitelist. Otherwise it's a checkbox you gotta tick on the nvidia control panel. "gsync compatible" means the monitor is on the whitelist
|
# ? Mar 28, 2023 23:15 |
|
|
|
you’re on the whitelist
|
# ? Mar 29, 2023 00:32 |
|
Paul MaudDib posted:gsync compatible (and native gsync) should automatically turn themselves on, unless it's something disabled by default in the monitor OSD

I had to turn it on in the OSD for some reason. I guess I can start cranking settings up until I drop under 120 fps and see if it works.
|
# ? Mar 29, 2023 01:06 |
|
Kazinsal posted:I had to turn it on in the OSD for some reason. I guess I can start cranking settings up until I drop under 120 fps and see if it works.

It was probably a compatible monitor without the “gsync compatible” branding.
|
# ? Mar 29, 2023 01:13 |
|
repiv posted:if a game is potentially going to use up nearly all of the steam refund window just compiling shaders then it would be prudent to provide some way to do it without technically playing the game as far as steam is concerned

I really don't think having the player wait in the menu for pre-compilation is tenable as a long-term solution. It doesn't have to be this way. When you boot up the RE4 remake for the first time, you can immediately hit new game, and it just works. No waiting for pre-compilation, no shader comp stutter in the game, nothing. I'm not sure what it's doing, to be honest. I think there's some precompilation happening during the opening cutscene, but that can't be the entirety of the shaders dealt with, so maybe they mix some light pre-compilation with some smart predictive compilation during gameplay. Either way, I want more like that and less like TLOU.
|
# ? Mar 29, 2023 01:55 |
|
it's partially down to the workflow the developer uses - there doesn't have to be hundreds of thousands of unique shaders to churn through, that's just how some particularly egregious games have ended up for one reason or another. idtech6/7 is the poster child for the opposite extreme, they have one meticulously crafted shader that's used for nearly every material in the game and IIRC they said they have about 300 shader pipelines total in the entire renderer. it compiles them all so fast during startup that they didn't even bother making a progress UI for it. that's only possible with very tight coordination between the art and tech sides of the studio though.
|
# ? Mar 29, 2023 02:02 |
|
why do the shaders need to be compiled in the first place?
|
# ? Mar 29, 2023 02:05 |
|
because unlike CPUs where you can compile once for x86_64 and run that code on any AMD or Intel chip past or future, GPUs don't have a consistent ISA between manufacturers or even from generation to generation from the same manufacturer. the solution is to bundle shaders with the game in a higher level form (some kind of bytecode) which the graphics driver then compiles for whatever hardware is installed. consoles don't have this problem because they do have a consistent hardware target, so console games can and do come with shaders compiled all the way down to GCN/RDNA/Maxwell binaries already.
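To make that concrete, here's a toy Python sketch (all names hypothetical, not any real driver's API) of why a driver-side shader cache has to key on the GPU architecture and driver version as well as the shader bytecode itself — a native binary for one combination is useless for any other:

```python
import hashlib

def cache_key(bytecode: bytes, gpu_arch: str, driver_version: str) -> str:
    """Key a compiled shader by its bytecode AND the exact hardware/driver,
    since native GPU binaries aren't portable across either."""
    h = hashlib.sha256()
    h.update(bytecode)
    h.update(gpu_arch.encode())
    h.update(driver_version.encode())
    return h.hexdigest()

class ShaderCache:
    def __init__(self, compile_fn):
        self._compile = compile_fn  # the expensive driver compile step
        self._store = {}

    def get(self, bytecode, gpu_arch, driver_version):
        key = cache_key(bytecode, gpu_arch, driver_version)
        if key not in self._store:  # cache miss -> compile (this is the stutter)
            self._store[key] = self._compile(bytecode, gpu_arch)
        return self._store[key]
```

Swap the GPU (or update the driver) and every key changes, which is why shader caches get thrown away and rebuilt after hardware or driver changes.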
|
# ? Mar 29, 2023 02:09 |
|
For the Gsync thing, I have a laptop panel, and I have an adapter on the way that converts the eDP to DP. All the laptops this panel is used on are official Gsync-compatible gaming laptops. Can I preemptively set my NVidia Profile to be like "no, gently caress you, I don't care what you think, use adaptive sync"? From what I understand this adapter passes the eDP along to the desktop without doing much; it doesn't have a Freesync On/Off button in its own OSD.
|
# ? Mar 29, 2023 02:22 |
|
repiv posted:because unlike CPUs where you can compile once for x86_64 and run that code on any AMD or Intel chip past or future, GPUs don't have a consistent ISA between manufacturers or even from generation to generation from the same manufacturer.

Or the steam deck can also just download the precompiled shader since every steam deck has the same config. That's why it was such a good elden ring machine.
|
# ? Mar 29, 2023 02:31 |
|
that also works although it's a bit of a hack, they rely on crowdsourcing shaders from users so the caches it downloads aren't guaranteed to be comprehensive (especially shortly after a game update)
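A rough sketch of that crowdsourcing limitation, with made-up pipeline names: the downloadable cache is just the union of what players have collectively reported, so anything nobody has triggered yet (say, content from a fresh update) still has to compile locally:

```python
def merge_reports(reports):
    """Union of the pipeline sets seen by individual players (crowdsourced cache).
    Coverage is only as good as what players have collectively triggered."""
    merged = set()
    for seen in reports:
        merged |= seen
    return merged

# hypothetical pipeline IDs for illustration
player_a = {"ui", "terrain", "water"}
player_b = {"ui", "skin", "fog"}
full_game = {"ui", "terrain", "water", "skin", "fog", "new_dlc_effect"}

cache = merge_reports([player_a, player_b])
missing = full_game - cache  # {"new_dlc_effect"} -> still compiles at runtime
```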
|
# ? Mar 29, 2023 02:40 |
|
repiv posted:because unlike CPUs where you can compile once for x86_64 and run that code on any AMD or Intel chip past or future, GPUs don't have a consistent ISA between manufacturers or even from generation to generation from the same manufacturer.

drat, that's brutal. Makes sense, but one would think you should at least be able to take code compiled for a 3090 and, say, use it on a 3060. Or is the nature of GPU design just that the scaled down dies have different instruction sets?
|
# ? Mar 29, 2023 02:41 |
|
Completely ignorant question, but is there any way gpu manufacturers could address the issue in future gpus? Some kind of new standardization, or is that just not how this works? E:kinda fb
|
# ? Mar 29, 2023 02:46 |
|
Shipon posted:drat, that's brutal. Makes sense, but one would think you should at least be able to take code compiled for a 3090 and, say, use it on a 3060. Or is the nature of GPU design just that the scaled down dies have different instruction sets?

on paper i think you could share shader binaries between closely related cards like that, in CUDA you can actually compile and distribute native GPU binaries and each binary targets a particular "compute capability" which covers a number of cards. for example all current ada cards are CC 8.9 and would run the same binary. neither directx nor vulkan support any means of loading native binaries though, there's no appetite for doing that in the gaming space. presumably because you'd have to compile a stupid number of variants to cover every architecture, and even if you covered all of them at the time of shipping you can't cover architectures that haven't launched yet.
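Roughly how CUDA's fat binary selection works, as a hedged Python sketch (the dictionary layout and strings are invented for illustration, not nvcc's real format): use the exact native code if this compute capability was shipped, otherwise fall back to JIT-compiling the bundled PTX intermediate representation:

```python
def select_binary(fatbin: dict, device_cc: str):
    """Pick code from a fat binary: exact native code for this compute
    capability if it was shipped, otherwise JIT-compile the PTX fallback."""
    if device_cc in fatbin.get("sass", {}):
        return ("native", fatbin["sass"][device_cc])
    if "ptx" in fatbin:
        return ("jit", f"jit-compiled-for-{device_cc}")
    raise RuntimeError(f"no code path for CC {device_cc}")

# hypothetical fat binary shipped with native code for CC 8.6 and 8.9
fatbin = {"sass": {"8.6": "ampere-code", "8.9": "ada-code"}, "ptx": "ir"}
select_binary(fatbin, "8.9")  # -> ("native", "ada-code")
select_binary(fatbin, "9.0")  # not shipped -> JIT from PTX
```

The PTX fallback is what lets old CUDA binaries run on GPUs that didn't exist yet — the piece DirectX and Vulkan games get from DXIL/SPIR-V instead.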
|
# ? Mar 29, 2023 02:54 |
|
repiv posted:it's partially down to the workflow the developer uses - there doesn't have to be hundreds of thousands of unique shaders to churn though, that's just how some particularly egregious games have ended up for one reason or another.
|
# ? Mar 29, 2023 03:10 |
|
I'm a broke brained idiot loser baby and just bought a Sapphire Pulse 7900 XTX (a month after pulling the trigger on a $230 6600 XT for my brother) to replace my EVGA 3070 Ti FTW3 model due to VRAM issues I've been encountering, most specifically with Resident Evil 4 Remake and trying to run the game at ultrawide 1440 resolution with textures that don't look like they're from the original GameCube game. Probably a dumb purchase, but alas here I am. Couple of questions that I should have asked before spending $1000 on a piece of computer hardware:

My case is a Cooler Master NR200P, so max card size is 330mm length and triple-slot height. The Sapphire Nitro+ is too big, so I opted for the Pulse, but it looks like the ASRock Phantom Gaming OC, which also fits in my case, is a better performing card. With both at $1000, should I have gone with the ASRock? I always thought Sapphire was kind of like the EVGA of the AMD-exclusive brands, so I just defaulted to them, but I don't really know where I got that idea or if it's even remotely accurate.

Secondly, I have an LG 34GP950G-B monitor with a dedicated G-Sync Ultimate module in it. I know that hardware G-Sync doesn't work with AMD GPUs, but I've read that the monitor supports regular VRR (aka bog-standard FreeSync), but I can't find anything about what the variable range on it would be. I really don't want to also have to replace my monitor, if the VRR window is some stupidly narrow range that ends up useless, since I've only had it for about 8 months at this point.

Please tell me how much of an idiot I am for replacing a good GPU that has questionable VRAM allocation with an arguably way overkill GPU for my gaming resolution without asking some pretty basic-rear end questions beforehand.
|
# ? Mar 29, 2023 03:29 |
|
Eh it's just money, if you like playing games and have the spare cash I wouldn't sweat it
|
# ? Mar 29, 2023 03:30 |
|
Branch Nvidian posted:I'm a broke brained idiot loser baby and just bought a Sapphire Pulse 7900 XTX (a month after pulling the trigger on a $230 6600 XT for my brother) to replace my EVGA 3070 Ti FTW3 model due to VRAM issues I've been encountering, most specifically with Resident Evil 4 Remake and trying to run the game at ultrawide 1440 resolution with textures that don't look like they're from the original GameCube game. Probably a dumb purchase, but alas here I am. Couple of questions that I should have asked before spending $1000 on a piece of computer hardware:

I really would not expect more than a percentage point of performance difference between different AIB models of any given GPU, including the 7900 XTX. And according to rtings, the 34GP950G supports freesync over DisplayPort but not HDMI, so make sure you use DP. The refresh window should work just fine all the way up to 180hz, though I don't know if you get low-framerate compensation (sub-60hz vrr) with freesync. I don't expect that you'll be under 60fps very often with an XTX though.
|
# ? Mar 29, 2023 03:45 |
|
The driver should be able to enable LFC with that monitor whether or not it explicitly supports it for Freesync, the range is definitely wide enough with a 180Hz max refresh rate (I believe AMD requires a 2.4x range for LFC). There are some minor technical differences in how GSync does LFC, the physical module monitors have a buffer on them that holds the frame and repeats it, whereas Freesync does LFC on the video card side (by re-sending the same frame), but the real world performance difference is not noticeable.
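The LFC trick itself is just integer frame multiplication — repeat each frame enough times that the effective refresh rate lands back inside the panel's VRR window. A small sketch (the numbers are illustrative, not any specific monitor's certified range):

```python
def lfc_refresh(content_fps: float, vrr_min: float, vrr_max: float):
    """Low framerate compensation: show each frame n times so the
    effective refresh rate falls inside the panel's VRR window."""
    n = 1
    while content_fps * n < vrr_min:
        n += 1
    rate = content_fps * n
    if rate > vrr_max:
        raise ValueError("fps too low / VRR window too narrow for LFC")
    return n, rate

lfc_refresh(40, 48, 180)  # -> (2, 80): each frame shown twice at 80Hz
```

This is also why the max/min ratio matters: with a narrow window (say 48-60Hz), a 40fps signal can't be doubled without overshooting the max, so LFC has nowhere to land.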
|
# ? Mar 29, 2023 06:49 |
|
Curious how some games avoid the micro stuttering. Re4make supposedly runs on dx12 and I didn't notice it doing a precompile on pc. Maybe it does it, but hides the process? It's fully smooth without a single hitch in either case.
|
# ? Mar 29, 2023 07:45 |
|
Mindblast posted:Curious how some games avoid the micro stuttering. Re4make supposedly runs on dx12 and I didn't notice it doing a precompile on pc. Maybe it does it, but hides the process? Its fully smooth without a single hitch in either case.

With no evidence I'm gonna blame this on Unreal Engine. Specifically Cliffy B.
|
# ? Mar 29, 2023 08:21 |
|
ouch https://twitter.com/HardwareUnboxed/status/1640895315422351360 obviously you can just turn down the texture settings, though i don't know how that diminishes the visual presentation in this game
|
# ? Mar 29, 2023 09:04 |
|
They are very, very forthcoming about the vram constraints immediately upon opening the settings. I don't know if that makes it more ok, or if that makes it a mistake they knew they were making, or something in between. It honestly comes off more like some kind of warning to the player upon starting the title, which is an interesting philosophical choice.

I guess we also have to put this game on the short list of titles that are capable of CPU limiting a 4090 at 4K in various circumstances. I don't consider a 5800X to be an old processor by any means. It's definitely worrisome... hopefully this is a port issue and not some kind of foreshadowing of games to come.

As an aside, I kinda wish the game would compile shaders during the installation, instead of when you start the title itself to play. There is a strange conflict there that doesn't need to exist. When I am installing the game, if a text box said "hey, spend 30 minutes compiling shaders for X reasons, or don't, but please do," I'm going to press OK and then go make a coffee or eat a sandwich or something. When I start a new day 1 game, choose my settings, get ready to play, and am then prompted to compile shaders, that's a blueball right there. It's directly in conflict with the hype that is being cultivated by the developer when you get to the splash screen of a new title.

If PS5 crossovers are going to demand compilation, that should really occur in the install itself imo. This is admittedly a "game feel" request that doesn't change the realities of what you're doing, but it would feel so much better than "oh hey, you just started this awesome game! Hope you can now gently caress off for a really long time for extremely poorly explained and presented reasons!"

Taima fucked around with this message at 12:16 on Mar 29, 2023 |
# ? Mar 29, 2023 12:04 |
|
repiv posted:on paper i think you could share shader binaries between closely related cards like that, in CUDA you can actually compile and distribute native GPU binaries and each binary targets a particular "compute capability" which covers a number of cards. for example all current ada cards are CC 8.9 and would run the same binary.

I've written shaders for a few OpenGL toy apps so it was small enough to not be worth thinking more about. Is there no mechanism to pre-compile them on first startup, and then only if the GPU is replaced? Seems like that would solve the problem.
|
# ? Mar 29, 2023 12:16 |
|
mobby_6kl posted:I've written shaders for a few OpenGL toy apps so it was small enough to not be worth thinking more about. Is there no mechanism to pre-compile them on first startup, and then only if the GPU is replaced? Seems like that would solve the problem.

Yes, there is, and that's what we're complaining about because the process can take over an hour with TLOU's PC port.
|
# ? Mar 29, 2023 12:24 |
|
Dr. Video Games 0031 posted:Yes, there is, and that's what we're complaining about because the process can take over an hour with TLOU's PC port.

lol someone in the steam deck thread said it was going to be 3 hours and the battery died before it finished. (they are writing a tech article on the port)
|
# ? Mar 29, 2023 12:27 |
|
Dr. Video Games 0031 posted:Yes, there is, and that's what we're complaining about because the process can take over an hour with TLOU's PC port.

Oh poo poo my bad. I thought the complaint was still about the stuttering caused by on-the-fly shader compilation. Lmao.
|
# ? Mar 29, 2023 12:36 |
|
The ideal approach is likely a hybrid one that mixes some pre-compilation with predictive background compilation during gameplay, which is what I believe Horizon Zero Dawn does after it was patched. Having the player pre-compile every single shader in the game before they can even start is just ridiculous. It's better than letting the game be a stutter fest during gameplay, but the developers should really be expected to do better if the pre-compilation step is going to take so long.
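A minimal sketch of that hybrid idea, assuming a generic compile function (names hypothetical): block briefly on a small critical set of shaders, then drain the rest on a background thread while the player is already in-game:

```python
import queue
import threading

def hybrid_compile(critical, remaining, compile_fn):
    """Compile must-have shaders before gameplay starts, then chew
    through the rest in the background during play."""
    compiled = set()
    for shader in critical:  # short blocking pre-compile step
        compile_fn(shader)
        compiled.add(shader)

    work = queue.Queue()
    for shader in remaining:
        work.put(shader)

    def worker():
        while True:
            try:
                shader = work.get_nowait()
            except queue.Empty:
                return
            compile_fn(shader)  # ideally ordered by predicted use
            compiled.add(shader)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t, compiled
```

The hard part in a real engine is the prediction: ordering the background queue so shaders for the next area finish before the player gets there, which is where the stutter sneaks back in if the guess is wrong.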
|
# ? Mar 29, 2023 12:55 |
|
another problem is that building a game with millions of shaders can negatively affect performance even after compilation, including on consoles, so there's more reason to move away from that workflow than just making PC ports less annoying. as i understand it AMD architectures can only have draw calls for up to 7 different PSOs in flight simultaneously, so if the engine does many small draws with different shaders it can bottleneck there. that's one of the reasons why idtech aimed to use as few shaders as possible.
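One common engine-side mitigation for that kind of bottleneck is sorting draw calls so that draws sharing a PSO run back-to-back, minimizing pipeline switches. A toy sketch with invented pipeline names:

```python
from itertools import groupby

def sort_draws_by_pso(draws):
    """Order draw calls so consecutive draws share a pipeline (PSO),
    reducing the number of pipeline binds per frame."""
    ordered = sorted(draws, key=lambda d: d["pso"])
    binds = sum(1 for _ in groupby(ordered, key=lambda d: d["pso"]))
    return ordered, binds

draws = [{"pso": "skin", "mesh": 1}, {"pso": "rock", "mesh": 2},
         {"pso": "skin", "mesh": 3}, {"pso": "rock", "mesh": 4}]
sort_draws_by_pso(draws)  # 2 pipeline binds instead of 4
```

With hundreds of thousands of unique shaders this batching stops helping much, since nearly every draw wants its own pipeline anyway — which is part of why the uber-shader approach pays off at runtime too.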
|
# ? Mar 29, 2023 13:24 |
|
BurritoJustice posted:There are some minor technical differences in how GSync does LFC, the physical module monitors have a buffer on them that holds the frame and repeats it whereas Freesync does LFC on the video card side (by re-sending the same frame), but the real world performance difference is not noticeable. does nvidia even ship the module anymore, i thought it was all g(free)sync now.
|
# ? Mar 29, 2023 13:54 |
|
they do still make it, but the cheaper alternatives have mostly caught up so you only really find the gsync module in bleeding edge monitors now for example the alienware OLED was only available with a gsync module for like 6 months until an appropriate freesync controller became available
|
# ? Mar 29, 2023 14:02 |
|
wargames posted:does nvidia even ship the module anymore, i thought it was all g(free)sync now.

Monitors labeled as “G-Sync Ultimate” still have modules in them. When they started supporting software VRR as “G-Sync Compatible” they rebranded the version using hardware modules. It has led to more than a little confusion at times.

On the topic of TLOU PC: maybe I’m approaching it from the wrong perspective or just don’t quite understand things correctly, but it seems quite bizarre to me that a PS5 with an APU roughly equivalent to a Ryzen 3600 and a 6600 XT is able to run the game at 4K without choking to death even at 30fps, but a PC with a top shelf CPU and GPU, with vastly higher performance specs, is struggling to run the game.
|
# ? Mar 29, 2023 14:08 |
|
Dr. Video Games 0031 posted:The ideal approach is likely a hybrid approach that mixes some pre-compilation with predictive background compilation during gameplay, which is what I believe Horizon Zero Dawn does after it was patched. Having the player pre-compile every single shader in the game before they can even start is just ridiculous. It's better than letting the game be a stutter fest during gameplay, but the developers should really be expected to do better if the pre-compilation step is going to take so long.

To my naive brain, it seems like it would make sense to pre-compile the shaders for things that are likely to spawn in during action scenes. So, character models, common props, and terrain. Not sure how much of the game's total shader count that would be, though.
|
# ? Mar 29, 2023 14:20 |
|
well if it were a good port then mid-to-high-end systems wouldn't be struggling to run the game, yeah
|
# ? Mar 29, 2023 14:23 |
|
Branch Nvidian posted:Monitors labeled as “G-Sync Ultimate” still have modules in them. When they started supporting software VRR as “G-Sync Compatible” they rebranded the version using hardware modules. It has led to more than a little confusion at times.

How much more powerful do you think high-end PCs are? The 3090 is generally "only" 2.2 times as performant as the 6600 XT at native 4K (less so at lower resolutions). And the PC port's ultra settings are a bit heavier than the console settings, so that should roughly account for the GPU performance we're seeing. CPU performance is also not really "vastly" higher with the PC CPUs most users have. The goon who was reporting a CPU bottleneck here at 4K had a 4090 and a Zen 3 CPU, which will only be so much faster than the PS5's Zen 2 CPU. If the PS5 game was designed to apply a heavy load on the CPU with the 60fps mode enabled, then I can see a Zen 3 CPU struggling to do better than 80fps, especially when considering the heavier-than-console settings.
|
# ? Mar 29, 2023 14:30 |
|
Dr. Video Games 0031 posted:How much more powerful do you think high-end PCs are? The 3090 is generally "only" 2.2 times as performant as the 6600 XT at native 4K (less so at lower resolutions). And the PC port's ultra settings are a bit heavier than the console settings, so that should roughly account for the GPU performance we're seeing.

This is what I get for conflating floating point operations with actual graphical performance for some reason, and using TechPowerUp’s “relative performance scale” to translate to every use case.
|
# ? Mar 29, 2023 14:33 |
|
The consoles are actually shockingly powerful this generation. Consoles have historically shipped with some heavy compromises in order to get their costs down, but when the PS5 and XSX shipped in 2020, they were about as powerful as an upper-midrange PC at the time. That's probably going to spell bad news for people with lower-end hardware once more next-gen exclusive games ship and get ported to PC.
|
# ? Mar 29, 2023 14:44 |
|
PS5 ports seem to have some significant overhead though. Maybe it's trying to replicate some of the SSD streaming, maybe it's just taking some time to learn the best ways to port over, I dunno. But Returnal and TLOU both seem like they suck up way more resources than you'd think looking at the game.
|
# ? Mar 29, 2023 14:49 |
|
Dr. Video Games 0031 posted:The consoles are actually shockingly powerful this generation. Consoles have historically shipped with some heavy compromises in order to get their costs down, but when the PS5 and XSX shipped in 2020, they were about as powerful as an upper-midrange PC at the time. That's probably going to spell bad news for people with lower-end hardware once more next-gen exclusive games ship and get ported to PC.

Once this generation fully settles in and we start seeing console games target only the new platforms at 30fps, things are going to get really interesting. Even top-end PC CPUs are not twice as fast as console CPUs, so we could easily see games unable to hit 60fps from a CPU perspective. Only top-end GPUs are twice as fast as console GPUs, and PC gamers are less likely to accept a floating dynamic resolution and prefer to render at native res.
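The CPU side of that is easy to sketch as frame-time arithmetic: if the console CPU is saturated at 30fps and a PC CPU is only, say, 1.6x faster (an illustrative number, not a benchmark), the CPU-bound ceiling lands well under 60fps:

```python
def cpu_bound_fps(console_fps: float, cpu_speedup: float) -> float:
    """If the console CPU is saturated at console_fps, a PC CPU that is
    cpu_speedup times faster tops out near console_fps * cpu_speedup."""
    frame_time_ms = 1000 / console_fps        # console CPU frame time
    return 1000 / (frame_time_ms / cpu_speedup)

cpu_bound_fps(30, 1.6)  # ~48 fps: a 1.6x faster CPU still can't reach 60
```

It simplifies to fps × speedup, of course, but thinking in frame times makes it obvious why a 30fps console target needs a full 2x CPU uplift to guarantee 60.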
|
# ? Mar 29, 2023 14:56 |
|
|
|
Twerk from Home posted:Even top-end PC CPUs are not twice as fast as console CPUs, so we could easily see games unable to hit 60fps from a CPU perspective.

how much faster is a 13th-gen i9 compared to a Ryzen 3600?
|
# ? Mar 29, 2023 15:16 |