Indiana_Krom
Jun 18, 2007
Net Slacker

Animal posted:

I agree with this.

The reason I stick to monitors with a G-Sync Ultimate module is that the VRR range goes all the way down to 0 Hz. Does Freesync go below 40 Hz these days? The whole point of VRR is to smooth out gameplay; I want it to smooth things out when it dips below 40.

If the monitor has "G-Sync Compatible" certification from Nvidia, it should support LFC down to single digit rates. Basically Nvidia started their own certification program to fix AMD's poo poo for them.
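
As I understand it, LFC just repeats frames so the panel stays inside its VRR window when the game dips below it; a conceptual sketch (not AMD's or Nvidia's actual implementation) looks something like this:

```python
# Conceptual sketch of low framerate compensation (LFC): when the game's FPS falls
# below the panel's VRR minimum, repeat each frame enough times that the effective
# refresh lands back inside the VRR window. Not any vendor's actual algorithm.
def lfc_multiplier(fps: float, vrr_min: float, vrr_max: float) -> int:
    m = 1
    while fps * m < vrr_min and fps * (m + 1) <= vrr_max:
        m += 1
    return m

# A 48-144 Hz FreeSync panel showing 25 FPS content: each frame is shown twice,
# so the panel refreshes at 50 Hz and stays inside its range.
print(lfc_multiplier(25, 48, 144))  # 2 -> 50 Hz
print(lfc_multiplier(10, 48, 144))  # 5 -> 50 Hz
```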


Indiana_Krom
Jun 18, 2007
Net Slacker

astral posted:

During the Vista era I kept getting a ton of nvidia display driver "TDR" errors (timeout detection and recovery). When I replaced the RAM they went away. :iiam:

During the late XP/early Vista era I was getting driver crashes and TDRs playing one specific game, and what resolved it was overclocking my CPU from 2.4 GHz to 3 GHz. Ironically, my best guess was that the game was running so slowly and saturating so much CPU on the Core 2 Duo E6600 I was using at the time that it stalled long enough to trip the lower precision timers responsible for TDRs, even though nothing was actually wrong. Overclocking bought enough performance that it got around to resetting the timers more frequently, which stopped the forced driver resets. Later on I upgraded to an i7-2700k with the same video card/memory amount and that particular game ran 3 times faster at the same resolution.

Indiana_Krom
Jun 18, 2007
Net Slacker

Branch Nvidian posted:

Would be nice if there was a switch somewhere to just use the legacy context menu. Though that's as likely to happen as Microsoft letting us go back to the legacy File/Edit/View menus in Office programs.

There is a registry key that does just that; if you want a program with a switch to do it instead, just grab Winaero Tweaker.
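
For reference, the tweak Winaero applies is, as I understand it, the widely reported empty InprocServer32 override; a minimal do-it-yourself sketch might look like this (back up your registry first; the CLSID is the one people commonly report for this trick, not something pulled from official Microsoft docs):

```python
# Hedged sketch, not Winaero's actual code: restore the legacy (Windows 10 style)
# right-click menu on Windows 11 by creating the empty InprocServer32 override
# under the commonly reported CLSID, then restart Explorer.
import subprocess
import winreg

KEY_PATH = (r"Software\Classes\CLSID"
            r"\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32")

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    # An empty (Default) value is what flips Explorer back to the old menu.
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, "")

# Restart Explorer so the change takes effect.
subprocess.run(["taskkill", "/f", "/im", "explorer.exe"], check=False)
subprocess.Popen("explorer.exe")
```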

Indiana_Krom
Jun 18, 2007
Net Slacker

Taima posted:

Tangential question, is DLSS3 FG still bad, and if not, under what circumstances is it good? I tried it once with CP2077 and thought it was gross, but that was a long time ago (well, relatively; when it came out).

FG is always going to add 1 (real) frame of latency to the pipeline, so it will feel bad in most games. The only place it won't be bad is in games that aren't latency sensitive at all.
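
To put rough numbers on that (my own arithmetic, not an official DLSS3 figure):

```python
# Back-of-the-envelope numbers for the "one real frame of latency" point above.
for base_fps in (30, 60, 120):
    added_ms = 1000 / base_fps  # one frame at the real (pre-FG) framerate
    print(f"{base_fps:>3} fps base -> roughly {added_ms:.1f} ms of added latency")
# 30 fps -> ~33 ms, 60 fps -> ~17 ms, 120 fps -> ~8 ms
```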

Indiana_Krom
Jun 18, 2007
Net Slacker
Welp, I saw the 4090 founders edition available at best buy so I ordered one. Supposedly will be ready for pickup on the 15th. Also ordered the EKWB full cover block for it.

Now all I need short term is a new PSU with a native 16 pin that can handle it. Sure, my 850w should be okay since it already deals with a 3080 Ti that pulls 400w, but then I'd have to use that ridiculous adapter. And long term I'll need a CPU upgrade, because no way in hell will this 9900k be able to keep it fed at 1440p, but I'm less sold on any current CPUs because of a crippling aversion to buying the first platform of a new DDR generation.

Indiana_Krom
Jun 18, 2007
Net Slacker

Animal posted:

My Corsair SF750 handles a 130% power target 4090FE, 7800X3D, 64GB DDR5, 2x Samsung NVME... no problem. You're fine.

Yeah, I'm not worried about my PSU not being able to handle the power draw; that should be totally fine since I have ~300w to spare by my estimates. I just don't want to use that stupid 4 plug adapter and have all that cable clutter in my case.

Indiana_Krom
Jun 18, 2007
Net Slacker

Cross-Section posted:

Is there truly any difference in setting an FPS cap in NVCP vs Adrenalin vs SpecialK vs RTSS vs in-game?

Yes, because they may use different methods, and different methods will produce different results depending on how the specific game tolerates them.

But generally start with NVCP and only go further down the chain if it doesn't work well with a specific game, because it should work in *most* cases.

Indiana_Krom
Jun 18, 2007
Net Slacker
Got my 4090 founders edition installed today. It's actually pretty impressive how quiet it is; I could totally ditch the water cooling and just go air on this thing. (But I already have the block for it, so I'll slap it on in a few days, once I'm comfortable the 4090 has made it past the front of the bathtub curve.) A huge difference in fan noise and tone compared to my previous EVGA 3080 Ti FTW3 on its stock cooler; that thing was loud AF.

Performance wise, uhhhh, my 5 year old 9900k CPU was already struggling to feed the 3080 Ti, so the 4090 is just a snore fest. Amusingly, the first game I tried that managed to scrape 100% GPU utilization out of it instead of just sitting at a CPU limit was Portal RTX.

Control with the recent HDR patch and its "More ray tracing! (DANGER: EXPENSIVE DO NOT USE)" toggle, which still runs at ~100 FPS, is also taking the card up to comfortably over 425w consumption (in native resolution DLAA mode). Seems I've found my GPU burn-in tester. Maybe there is room to hit the GPU limits if I start throwing DSR into the mix elsewhere, but honestly the benefit over DLSS quality seems pretty debatable. On the plus side, overall system power consumption is down a fair amount in games that were already hitting the CPU limit, because the 4090 uses less power than a 3080 Ti when operating at the same CPU limited performance level.

Indiana_Krom
Jun 18, 2007
Net Slacker

hark posted:

is there any consensus here on waterblocks for gpus? are they good/worth it? specific brands to look for? I've never had one, but the idea is appealing to me.

They are awesome. I have my 4090 founders under an EKWB full cover block, and I've been water cooling CPU+GPU in a custom loop for something like 6 years now. But "worth it" definitely requires a bit more motivation/investment than just "appealing".

Also, the performance isn't really the point. At a full 430w load my 4090 tops out around high 50s to low 60s C on the core with upper 60s C hot spot temps, but what actually matters is that the massive amount of radiator space I have hooked up maintains those decent temps without me ever hearing the fans spin up.

Indiana_Krom
Jun 18, 2007
Net Slacker

Anti-Hero posted:

Would an i9-9900K meaningfully bottleneck a 4090 using DLSS @ 4K?

I've been on a 3080Ti since Oct 2021 and was at 1440P, but have since upgraded to 4K OLED (LG C2) precisely one year ago. I haven't played too many games that I couldn't make work just fine at 4K...but Alan Wake 2 and CP2077 look like games I really want to turn all the knobs up on. It's a very bad use of money and it looks like Founders 4090s are impossible to find...but I want it.

I have a 4090 founders running on an i9-9900k, both stock. At 1440p the CPU bottlenecks it in quite a few games, though that is generally with DLSS quality. I haven't tried CP2077 yet though.

I also upgraded from a 3080 Ti, which was actually occasionally hitting the CPU bottleneck itself in Dying Light 2, but not many other places.

I'm going to upgrade the CPU eventually, but the Intel "14th" gen parts are power hogging pieces of poo poo, so no. The 7800X3D would be my first choice, but I have a crippling aversion to buying into the first platform of a new DDR generation. Unless I run out of patience, I'll probably just get whatever second generation DDR5 X3D chip AMD puts out.

Indiana_Krom
Jun 18, 2007
Net Slacker
But it is safe to say the framebuffer itself in isolation isn't a significant driver of vram consumption anymore.
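
A quick back-of-the-envelope calculation makes the point; these are my numbers, assuming plain 8-bit RGBA swapchain buffers at 4K:

```python
# Raw size of a few swapchain-style buffers at 4K, assuming 8-bit RGBA
# (4 bytes/pixel). Real engines keep many more render targets, but the point
# stands: the framebuffer itself is a rounding error on a 24 GB card.
width, height, bytes_per_pixel, buffers = 3840, 2160, 4, 3  # triple buffering
total_mib = width * height * bytes_per_pixel * buffers / (1024 ** 2)
print(f"{total_mib:.0f} MiB")  # ~95 MiB, vs 24,576 MiB on a 4090
```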

Indiana_Krom
Jun 18, 2007
Net Slacker

Cyrano4747 posted:

Quick question: What is generally considered a good target GPU temp for a 20-series card?

I rebuilt my PC a week ago - took it down to the loving screws, totally disassembled the case, etc. This was ultimately in service to a CPU upgrade to kick the can on a full system refresh, but I took the opportunity to deep clean everything for dust, un-gently caress my cable routing from probably five years of tinkering and lazy shortcuts, and redid the case cooling. Long story short, this also got me looking at my 2080 again, redid the OC, and now I'm playing with fan curves.

On the one hand I'd like it to not sound like a leaf blower 24/7. It's in the same room as our TV, and if I can keep it quiet for my wife that's a plus. On the other hand I don't want to burn my card out.

Lazy googling has turned up a range of people saying that 100C is totally fine and normal all the way through people making GBS threads bricks if their card gets over 70. My old rule of thumb was that 80 was a good target, but that's sitting right on the knife's edge of where fan speeds start to really kick in and matter. I can keep it under 80 with the fans at 95-100% basically full time, but if I step the fans back to ~80-85% the temp hovers around the 82/83 ballpark.

edit: the difference between 85 and 95% fans is REALLY noticeable with this card.

Reduce the power limit to compensate and slow the fans down; the gains from the OC probably aren't worth the heat/noise anyway. As far as killing the card or chip goes, the temperature limits available within common OC tools are totally safe: if the cooler can't handle it, the card will simply power/frequency throttle to stay within the temperature limit.

Indiana_Krom
Jun 18, 2007
Net Slacker
I just started playing through Star Wars: The Force Unleashed for the first time in many years. It is hilarious playing it on a 4090: with the game's default 30 FPS cap, the GPU doesn't even come out of its 210 MHz idle desktop clocks to run it (and only reaches like 40% utilization at that), and power consumption on the 4090 is all of 20w, when it idles at 16. I remember when this game brought a GPU of mine to its knees.

Indiana_Krom
Jun 18, 2007
Net Slacker

*sigh*

I was using one of the 180 degree versions of these in my build that I completed just last weekend. I pulled it out and saw no signs of problems, but I did make sure it was fully seated and locked in when I installed it. So instead I am now making the 180 degree bend in just the cable. I like how they say "don't bend the cable closer than 30cm from the plug" when there is no space or proper angle in the case to do so, because of the stupid plug placement designed only to deter/annoy data center use. At least mine makes a nice audible and tactile click when it locks in; something about the EKWB water block I put on makes the plug much easier to seat and much more obvious when it has locked in.

This connector is junk, but if video cards and CPUs are going to be pulling 300w or more then perhaps it is time to start thinking about stepping the supply side up to 48v (or higher) so we aren't pumping 30-50A around inside the case. At 48v with the same 12.5A restriction, the old 8 pin connectors would be rated for 600w. But because the connectors themselves are actually rated for 27 amps and not the 12.5 the PCIe spec set, the actual limits of an 8 pin plug would be 1296 watts or a safety margin greater than 2.

Indiana_Krom
Jun 18, 2007
Net Slacker

I donno, I used a GTX 1080 for about 4 years and it remained reasonably competent in most new releases minus RTX the whole time. I think the 4090 might have that amount of life in it if one keeps reasonable expectations about settings and resolution in the stuff that ships 4-5 years from now.

Indiana_Krom
Jun 18, 2007
Net Slacker
Pascal had a really good lifespan for the high end cards; it was basically still relevant through most of the covid era and has only just recently dropped out of being relevant in current games. Like, I was playing Jedi Survivor the other day and noticed that after a while it settles at about 20 GB of VRAM usage (and also over 20 GB of system RAM). I don't expect modern cross platform games like that to be able to scale down to Pascal well at all these days, but keep in mind it is an architecture that shipped in 2016.

Indiana_Krom
Jun 18, 2007
Net Slacker
I just loaded up a case full of radiators with Arctic P12 and P14 fans (the non-RGB variants) and I have to say I am impressed how quiet they are even as they ramp up in speed. When my coolant is below 35C I have them idle with the P12s at around 875 RPM and the P14s at around 600 RPM where they are completely inaudible from only about a foot away. They also seem to move more air than the Noctua chromax I was using before for the same or even lower subjective noise levels, which is really impressive since they are less than one quarter the price per unit.

Indiana_Krom
Jun 18, 2007
Net Slacker
Welp, sent my adapter in to cablemod for the recall. They wanted a picture of it "disabled", but their example video of bending the pins with a screwdriver looked insufficient to me, so I instead removed the metal cover and then snapped the PCB in half. I was also curious how robust it really was: the PCB is mostly okay, though the copper traces are a little small (although it is still the connector that fails, not the PCB). But still, considering the amount of thermal putty and the sizable aluminum block they put on it, perhaps it would have been better to just size up the actual power traces so it wouldn't heat up in the first place...

Overall I think the whole 12VHPWR standard was a bad idea; it is just too small and too finicky to deal with 50 amps. For comparison, if this was wiring in your residence designed to handle 50 amps, you would use 6 AWG wire, which is typically multi-strand copper about as thick as a #2 pencil and incredibly stiff and hard to work with. I think if they really want to get back to a single reasonably sized connector, they should just go back to the old 6 or 8 pin plugs but switch to 48 or 60 volts (keyed differently of course) so the connector only needs to handle 10-13 amps to deliver 600W. The old 6 and 8 pin connectors' physical designs are good for something like double the amperage of the PCIe specs they were limited to, and it clearly paid off since you don't hear many cases of them melting under load; I kinda doubt 12VHPWR has that level of built in safety margin. IIRC the 8 pin connector is rated for 27 amps (324W @ 12v), but the PCIe spec limited it to 12.5 amps (150w @ 12v). Well, 12.5 amps at 48v is the same 600W that the 12VHPWR design was supposed to handle.
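
Spelling the arithmetic out, with the caveat that the 27 amp figure is the commonly cited physical rating rather than anything out of the PCIe spec itself:

```python
# The arithmetic behind the 48 V argument. The 27 A figure is the commonly cited
# physical rating for an 8 pin PCIe connector, so treat it as approximate.
def watts(volts: float, amps: float) -> float:
    return volts * amps

print(watts(12, 12.5))  # 150 W  : 8 pin at the 12.5 A PCIe spec limit
print(watts(12, 27))    # 324 W  : 8 pin at its ~27 A physical rating
print(watts(48, 12.5))  # 600 W  : the same 12.5 A limit at 48 V
print(watts(48, 27))    # 1296 W : physical rating at 48 V, >2x margin over 600 W
print(600 / 12)         # 50 A   : what 12VHPWR has to carry at 12 V instead
```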

Indiana_Krom
Jun 18, 2007
Net Slacker

runaway dog posted:

I just don't understand why we need 4 pins that are smaller and in a different spot and also recessed, like surely it would've cost less to fabricate a connector with 16 equal sized pins.

Those 4 pins aren't carrying any current, so they don't need to be as big as the rest. They are just there to tell the card how much current is available (and they work in the most simple analog way imaginable).
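
For the curious, this is roughly how that analog signaling works as I understand the ATX 3.0 / PCIe CEM 5.0 spec; the pin-to-wattage table below is from memory, so treat it as an approximation rather than gospel:

```python
# Hypothetical sketch of the 12VHPWR sideband pins: SENSE0/SENSE1 are either tied
# to ground or left open, and the combination advertises the available power.
# Table reproduced from memory; double-check it against the spec before relying on it.
SENSE_TABLE = {
    ("ground", "ground"): 600,  # watts advertised by the PSU/cable
    ("ground", "open"):   450,
    ("open",   "ground"): 300,
    ("open",   "open"):   150,
}

def advertised_watts(sense0: str, sense1: str) -> int:
    """Return the power budget the card is allowed to assume."""
    return SENSE_TABLE[(sense0, sense1)]

print(advertised_watts("ground", "ground"))  # 600
```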

Indiana_Krom
Jun 18, 2007
Net Slacker
The AI salesman's claim of "Now you won't have to do <bullshit task> anymore!" is technically true, but usually omits the other part: "So faceless evil megacorp you/others were formerly employed at to do <bullshit task> won't have to pay for workers at all anymore, you're fired for failing to do <bullshit task> as quickly as <bullshit task robot>, now go lay in a ditch and loving die, no severance or unemployment for you!".

Both parts are coming, and I have very low expectations that enough people in high places will realize "If we use AI to replace all workers, who is going to be able to afford our product?" as they watch number go up while slowly bleeding the rest of the world dry.

Indiana_Krom
Jun 18, 2007
Net Slacker

Star Man posted:

How much would an RTX 4070 Super be bogged down by a system of an Intel 8600K, 16GB of RAM, and in a PCIe 3.0 slot?

I currently run a GTX 1060. The plan is to replace the rest of the system, but the video card will be the single biggest line item. Am I better off replacing everything else first?

Depends on the resolution/refresh rate (and game to some extent).

Like at 4k/60, totally fine: the GPU limit will hit way before the 8600K CPU limit, so slap it in and don't worry. But with, say, a 1920x1080 @ 240 Hz display in a game that is around 3 years old? That CPU is going to hold a 4070 Super back by as much as half its potential performance.

Indiana_Krom
Jun 18, 2007
Net Slacker
Then most likely the 8600K CPU will hobble your performance some in newer games. I ran a 3080 Ti on an Intel 9900k with a 2560x1440 @ 240 Hz display and saw some pretty significant CPU limits in some games, and a 4070 super is roughly equivalent to a 3080 Ti.

I later upgraded to a 4090 founders edition and saw basically no performance uplift vs the 3080 Ti at all until I completed the rest of my upgrade to an AMD 7800X3D, which in some cases more than doubled the performance.

2560x1440 at high refresh basically demands an all around powerful system, much more so than 4k or 1080p, which shift the load more towards one component or the other.

Indiana_Krom
Jun 18, 2007
Net Slacker

Saukkis posted:

Is there a way to set different FPS limits on separate displays? My main gaming monitor is 165Hz 1440p, but I also game on my 120Hz 4K LG TV. In NVIDIA Control Panel I seem to only be able to set a common 115 FPS frame limit.

Easiest way is to check how your games perform latency wise with just plain vsync (you should already have low latency mode enabled). A few games have horrible buffering with vsync enabled because the engine is poo poo and buffers a bazillion frames ahead when GPU or vsync limited (e.g. Overwatch), and those absolutely require an FPS limit (meaning a CPU framerate cap) to maintain reasonable latency. However most games do not, and you will barely notice the difference between a cap set 4 FPS below the VRR limit and just plain vsync at the VRR limit. So if your games perform well with just plain vsync, use that; it will automatically track whatever refresh rate the display is capable of.

In the case of games that do have a horrible vsync implementation, you can use a per-game profile to frame limit just that game.
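
If you do end up wanting separate caps per display, the numbers I'd personally use (a rule of thumb, nothing official) are just a few FPS under each panel's maximum refresh:

```python
# My own rule of thumb, nothing official: cap a few FPS under each display's
# maximum refresh so VRR stays engaged instead of slamming into vsync.
displays = {"1440p monitor": 165, "4K LG TV": 120}
for name, refresh_hz in displays.items():
    cap = refresh_hz - 4
    print(f"{name}: cap around {cap} FPS (panel max {refresh_hz} Hz)")
```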

Indiana_Krom
Jun 18, 2007
Net Slacker

Shipon posted:

~120 FPS IMO. The leap from 60-120 is absolutely mindblowing and you really can't go back once you've crossed it. I haven't seen 240 in person before so I can't tell if it'll be similar, but I can't really care too much about the difference between 120 and 160.

I've been on 240 for many years now; up in the 180+ range it is pretty smooth, but not mind blowing better than 120, well into diminishing returns (360 Hz is even an option with some 1080p displays). I will mainly stick with 240 Hz because, on a native gsync display with up to 240 Hz, it means I can pretty much skip all that frame capping bullshit and just run vsync: I will hardly ever reach it, and even if I do, the maximum latency penalty for buffering 2 frames is about 8 ms, so who gives a gently caress...
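
The math on that worst case, for anyone checking my claim:

```python
# Sanity check on the worst case quoted above: two buffered frames at 240 Hz.
refresh_hz = 240
frame_time_ms = 1000 / refresh_hz   # ~4.17 ms per refresh
print(2 * frame_time_ms)            # ~8.3 ms maximum added latency
```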

Indiana_Krom
Jun 18, 2007
Net Slacker
DLSS quality looks better than native TAA in a fair number of games.

Indiana_Krom
Jun 18, 2007
Net Slacker

Branch Nvidian posted:

My approach to PSUs is that I’ll only trust them for as long as a manufacturer trusts them to provide a warranty for.

I've had a couple PSUs die in the warranty period, but the only symptom was the machine would fail to power on, or would require multiple attempts (something something charging up capacitors). In both cases the rest of the PC survived completely unharmed and was fixed by swapping out the PSU. But yes, this is generally a good rule of thumb and not even that unreasonable since it isn't hard to find 10 or even 12 year warranties now.

Indiana_Krom
Jun 18, 2007
Net Slacker
Once in a while Dying Light 2 just really nails the lighting streaming in to otherwise shadowed or indoor areas, and it makes you stop and just look around, because it's dynamic lighting that isn't broken in some fundamental video gamey way like it always was before.

There was always this jank around the edges of dynamic lighting where some shadow or light broke/leaked/etc; pretty much never seeing that in a game anymore is simply amazing. It makes me want to see RTX Remix in some 2005-2015 era games that otherwise still look largely acceptable but have that lighting jank. Like, what would Far Cry 2 look like with RTX global illumination (and a rust filter delete)?

Indiana_Krom
Jun 18, 2007
Net Slacker

njsykora posted:

Actually a dude in the retrogaming thread made a 5090 cable.

Yeah, let's push 60 amps through some barrel connectors, what's the worst that could happen?

Indiana_Krom
Jun 18, 2007
Net Slacker

pyrotek posted:

As a side-note, I recently got a Voodoo 3 for a retro computer, which I never had before (I had a TNT2 Ultra at the time.) I always thought that the "22-bit color" output was bullshit, but what do you know, with the filter enabled it does look better than other cards at 16-bit color. It isn't as good as true 32-bit color, of course, but a lot of older games only supported 16-bit color anyway.

Now I kind of want a Voodoo 4 4500 or Voodoo 5 5500 to get that, 32-bit color support, and to enable MSAA for old games (especially Glide games) but those are ridiculously expensive.

I had a Voodoo 2 (8 MB) and later a Voodoo3 3000 AGP back in the day. At the time dithering was pretty important and the Voodoo cards were widely accepted as having the best method, so even with the 16 bit color limitation you would see very little color banding (also greatly assisted by the fuzzy/blurry nature of the CRT displays that dominated the era). Good times.

Indiana_Krom
Jun 18, 2007
Net Slacker
No, the 4090 FE box is this entirely excessive solid cube of plastic cardboard; it is unreasonably heavy even without the card in it. The card is also roughly the size and consistency of a brick.

E: don't get me wrong, the presentation it makes is great, but it really brings nothing to the table over a plain folded cardboard box with an insert other than the knowledge that the margins Nvidia makes on this are absolutely sick if they can splurge that much on just the box.

Indiana_Krom fucked around with this message at 22:49 on Apr 14, 2024

Indiana_Krom
Jun 18, 2007
Net Slacker

Chuu posted:

I picked it up in person at Best Buy and it's still sealed in the Foxconn packaging, with a UPS label to the Best Buy location on the box. Unfortunately I don't want to even think about opening it until I get the power supply, since you can't open the exterior packaging without it being evident. It's sealed with glue with a pull tab, not tape.

I've googled the various codes on the box and they all show up as related to a 4090. The best buy sku on that last picture is attached via a sticker though, not printed.

Does anything about this look suspicious?





That is a box identical to the one my 4090 came in.

Indiana_Krom
Jun 18, 2007
Net Slacker
Also the warranty period begins from the date of purchase, not the date of opening the box. You can safely open the box.

Indiana_Krom
Jun 18, 2007
Net Slacker

YerDa Zabam posted:

Part of the reason I just got a 4090 recently is that I don't trust global supply chains/global shitshows any more. I'd hate to wait and then not get one because of a ship stuck in a canal or a massive invasion of Taiwan. Or even worse, another crypto thing, yuk.
Then again maybe that's just me.

This is one of the more compelling reasons for just getting the 4090 now. Even if a 5090 is 70% better than a 4090 or some poo poo, how long will it take to actually get one? I got a 4090 FE last summer, which was the earliest I could get a 4090 FE without paying a scalper, paying for a bot to buy it (more scalping), or trying to shiv someone in the best buy parking lot so I could steal theirs.

Also I have an EKWB full cover water block on my 4090, and I had a full cover water block on the 3080 Ti I used before that. Upgrading to a $1200-$1600 video card, carefully shucking the cooler off, installing a water block, putting the loop back together, pressure testing and all that is actually quite the pain in the rear end, and I would like to go a few years before doing it again (even with the soft/flexible tubing I use).

Indiana_Krom
Jun 18, 2007
Net Slacker
I once had dust building up on the PCB cause a video card to flake out, but that was back in the days when the RAM chips had pins on the sides and some dust had shorted a pair of pins (modern cards have been using BGA forever, which while not entirely impervious to contamination is much, much harder for air/debris to get into). Cleaning the card resolved it. Also, undervolting wouldn't help stability at all and could potentially make it worse. If anything, dragging the power limit up and then applying a negative clock offset and/or dragging the temperature limit down would push things into the more stable range. Make sure Afterburner isn't doing something stupid to clocks/power/thermals by resetting all Afterburner settings entirely to default and disabling any low level voltage settings in it.

Indiana_Krom
Jun 18, 2007
Net Slacker

pyrotek posted:

I wonder how far that is off, really? I think those candles would light up the room a bit more than that, but nowhere near as much as the traditional lighting shows. The bigger problem might be that interior spaces aren't designed for realistic lighting.

It is probably quite close to accurate: candles are probably even dimmer than that in real life, and the textures the light is bouncing off of are also pretty dark, so basically the whole scene is catastrophically under lit for how it is exposed in the second image. It might work slightly better in full end to end HDR with some automatic iris/exposure adjustment happening, but basically you are correct that the space isn't designed for realistic lighting, which is a super common issue with conventionally hand lit game scenes. The original developers would just throw in some candles and then adjust the lighting till the room looked about right for traditional rasterization; it worked fine for the time even if the result is some candles or windows that act thousands of times brighter than they would in reality. So when you come back and inject realistic lighting while still compressing the result down to standard dynamic range, the whole scene either washes out from extreme brightness or collapses into almost complete darkness like that.

Indiana_Krom
Jun 18, 2007
Net Slacker
Yeah, the windows are probably not throwing as much light as they should, assuming full sunlight on the other side anyway. But it could also be evening or cloudy or something like that; the room most likely would be very dark if that scene was recreated in real life (although your eyes would normally adjust, allowing you to see better than that).

This is the cool thing about realistic ray traced lighting though: it just works and conforms to what would happen in reality, and it isn't always broken around the edges in some strange video gamey way. But it also means you can't just bullshit a brightly lit space anymore; the light has to come from somewhere.


Indiana_Krom
Jun 18, 2007
Net Slacker

FuturePastNow posted:

If you get a little shock while plugging in the cable, it's because you make a ground loop if the computer and display are plugged into separate outlets

There are a couple different ways that could happen:
  • Separate circuits (especially if they are on opposite sides of the split phase).
  • One or the other side isn't grounded properly.
Separate outlets alone is iffy; as long as they are on the same circuit and both are properly grounded it shouldn't happen (because the grounds would already be bonded).

Best way to avoid it is to make sure your outlets are grounded and plug the PC and all its peripherals into a single power strip, thus guaranteeing the ground plane is bonded across all devices.
