Shaocaholica
Oct 29, 2002

Fig. 5E
Switch devs not knowing what to do with too much power. Keeps making games look the same.

Animal
Apr 8, 2003

Indiana_Krom posted:

Welp, I saw the 4090 founders edition available at best buy so I ordered one. Supposedly will be ready for pickup on the 15th. Also ordered the EKWB full cover block for it.

Now all I need short term is a new PSU with a native 16 pin that can handle it. Sure, my 850w should be okay since it already deals with a 3080 Ti that pulls 400w, but then I'd have to use that ridiculous adapter. And long term I'll need a CPU upgrade because no way in hell will this 9900k be able to keep it fed at 1440p, but I'm less sold on any current CPUs because of a crippling aversion to buying the first platform of a new DDR generation.

My Corsair SF750 handles a 130% power target 4090FE, 7800X3D, 64GB DDR5, 2x Samsung NVME... no problem. You're fine.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION

Paul MaudDib posted:

Anyway I also think that NVIDIA has a strong chance of being one of the "large external sales" mentioned for Intel, I think they're that ruthlessly mercenary. Intel really really needs a win here and if they are willing to do Samsung 2.0 and let NVIDIA take it off their hands cheap (because they are deriving value from having the fabs active/having the marketing win of Intel Custom Foundry signing a big client) I absolutely think NVIDIA would be down to play ball. By all accounts the newer nodes aren't the bottleneck for Intel anymore, it's the IP teams being able to consistently get products working and out the door. But even if it is, there's products like tegra where you can make a "lovely version" first and then do a pin-compatible upgrade next year once intel fixes the fabs, or stuff like big Mellanox or NVSwitch ASICs for adapter cards or switched-fabric switches. And I think NVIDIA is diversified enough that they're not tied to consumer products being the only thing they could possibly launch.

I'm assuming you're referring to this being on Intel 4? Or would it still be on Intel 7? Because it seems likely that Intel will have a lot of Intel 7 capacity to spare, while reserving Intel 4 for Meteor Lake and Intel's other cutting edge silicon.

I'm also a bit skeptical as to how high-volume Intel 4 will be initially; they're using EUV for it, I believe, which is good and should result in significant improvement, but isn't it also their first attempt at a high-volume node that uses EUV? Seems like there'd be a decent chance of some higher-than-desired yield issues early on as they ramp up high-rate production.

If Nvidia were to use Intel 7, though, I'm not sure it even really makes sense relative to what's likely very cheap Samsung 8nm capacity, which gives the 5nm discussion more credence. I don't know how Samsung's 5nm compares against TSMC and Intel though.

Edit: Meant Samsung 5nm, not 8nm, in the last line.

Canned Sunshine fucked around with this message at 03:48 on Sep 8, 2023

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SourKraut posted:

I'm assuming you're referring to this being on Intel 4? Or would it still be on Intel 7? Because it seems likely that Intel will have a lot of Intel 7 capacity to spare, while reserving Intel 4 for Meteor Lake and Intel's other cutting edge silicon.

18A

https://seekingalpha.com/article/4633190-intel-massive-18a-foundry-prepayment

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION

Oh, yeah, not sure about other locations, but if they're planning on 18A being ready before 2025 or even 2026 at their Arizona fabs, "good luck", lol.

Edit: Wow, that is a very... Intel-favorable author, lol.

Canned Sunshine fucked around with this message at 03:58 on Sep 8, 2023

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
for this week's update, A Plague Tale Requiem, Atomic Heart, Hardspace Shipbreaker, Insurgency Sandstorm, Shadowrun Hong Kong, and Snowrunner, which are all on Game Pass, can now be played on GeForce Now via Game Pass

(there is a joke going around the community that Starfield isn't on GFN because the Coffee Lake / 2080 combo of the first paid tier would struggle to run it, and even the 4080 Ultimate tier would have a hard time)

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

gradenko_2000 posted:

for this week's update, A Plague Tale Requiem, Atomic Heart, Hardspace Shipbreaker, Insurgency Sandstorm, Shadowrun Hong Kong, and Snowrunner, which are all on Game Pass, can now be played on GeForce Now via Game Pass

it's so insane to me that you can license your game such that it's not allowed to be played on rented cloud hardware, and that this is just haphazardly scattered over gog and steam's catalog.

(the mess of music licensing in movies/shows/games being non-perpetual and forcing either a large royalty or lame substitutions/reboots/etc is another one. but theoretically that's a chance for the estate to renegotiate if a property is suddenly valuable etc. life + 75 years is a long-rear end time and it has a lot of these outcomes where even 25 or 50 years later a property can't be performed in its correct context without arguing with some rando estate or a record company who doesn't license or whatever.)

Licensing gameplay recording and streaming separately and doing takedowns on streamers is another crazy one. this is so culturally normalized now that people are often shocked to hear that no, it's not legal, it's something the copyright holder may not care about (or explicitly licenses you to do) but if they care then yeah, they can take down streams. wack.

for that matter nvidia's "no datacenter use of geforce cards, except cryptocurrency mining" is crazy too. Supposedly CUDA works via the new open-kernel driver so if you want to do datacenter compute you just run that I guess, and if they get Vulkan running they'll benefit from all the dxvk/whatever. The only thing that seems notionally locked down would be using the normal driver stack for gaming in the datacenter, so basically running a geforce now competitor that gives you a windows vm etc, without using quadros.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SourKraut posted:

Oh, yeah, not sure about other locations, but if they're planning on 18A being ready before 2025 or even 2026 at their Arizona fabs, "good luck", lol.

Edit: Wow, that is a very... Intel-favorable author, lol.

(there's plenty of other sources reporting on this, there was an investor release I think, and intel's been priming people that they're courting major customers for a while now, and results seem positive on test chips so far.)

it's seekingalpha, so everyone is explicitly selling you their stock picks lol. And honestly I think he's probably right in general that now is (hopefully) the bottom, people are very very negative about intel right now and you buy when blood is in the streets. it's not a sleeper pick if everyone loves their product and thinks they're doing awesome. you're doing the counterfactual theorizing about why the market is wrong.

I think the biggest thing is the value of intel shares will never be zero. like they just will not be allowed to go out of business. it's like boeing, it's a strategic capability to have leading fabs in the US (TSMC's AZ 5nm fabs won't even be leading by the time they're up) under US domestic control. And given the fuckshow of ISA licensing (this should not be allowed, or should be forced FRAND licensing of any patents necessary to implement your processor's instruction set) the x86 license is not peanuts either, that's one of two golden tickets in the world and nobody wants an actual literal monopoly (or, at that point a lot of places would think seriously about ARM or POWER or RISC-V). And most of those patents from AMD etc are crosslicensed contingent on not being sold or otherwise effectively controlled by another corporation, so that dissolves and you can't really do a buyout or anything either. the US will throw CHIPS Act style subsidies and pity contracts for supercomputers etc.

Just from the outside I can't say that I see much going right on the IP side. Sapphire Rapids-W supposedly might need a stepping? And Sapphire Rapids generally, and networking, and Alder/Raptor AVX fails and DLVR fails and e-core latency etc. And Arrow Lake might be this weird intermediate between the current p-core era and this new SMT-less thing, but it also isn't Royal Core/no rentable units, so it's just like 8C/8T lol. The fabs are supposed to be the bright point lol, intel is very confident about 5 nodes in 4 years or whatever it is, and supposedly the fabs themselves are ready to go more than the product teams, which haven't been able to keep up with their own poo poo. Hence seeking external clients etc.

At some point they have to catch traction and things start going gradually more right. And I think it's probably better than it was 2017 or 2018 or whatever. There's just a ton of organizational rot, and it's gonna be a massive challenge to hire+retain good staff to make things right, and there's a massive amount of work to be done, but I think gelsinger seems to be acting fairly sensibly in terms of steering the ship and setting corporate priorities.

hopefully that means it's like AMD 2012 or whatever. things mostly bottomed out, but starting to rebuild and set things straight. But people didn't think very kindly of AMD 2012 either, and it was a long time until you made the AMD bux. but on a 10-15 year timeline I really can't see anything being worse than intel is right now, and I can't see them going out of business.

Paul MaudDib fucked around with this message at 05:32 on Sep 9, 2023

lih
May 15, 2013

Just a friendly reminder of what it looks like.

We'll do punctuation later.

Dr. Video Games 0031 posted:

Just eyeballing things, the die would likely shrink so much going from Samsung 8nm to TSMC 4N that, economically, it's probably a wash. If it ends up costing more per die, it can't be by much despite how much more the node costs per wafer. I'm high on hopium though. We already know that Orin has Ada's power efficiency features backported to it despite having an Ampere GPU. So it being on TSMC 4N would deliver killer power efficiency, as we've already seen. That would be great to have on a gaming handheld, and I don't think it's totally outside of what Nintendo would be comfortable doing. The biggest issue probably is wafer availability. Nvidia may have some capacity to spare considering that H100 is bottlenecked at the packaging stage and not wafer production, and they've cut back their orders on ada wafers. Not sure if that would be enough though.

The biggest disappointment of all would be if they stick to Samsung 8nm, which almost seems like the most likely option because, well, it's Nintendo. Not sure how they hit their performance and battery life targets like that though.

they pretty clearly don't hit their battery life targets if they stick to Samsung 8nm, unless the leaked specs with the 12 SM GPU are wrong. either they're actually using a different, much less powerful chip than the T239, or they're using a newer node. given all the leaks floating around at the moment are exceeding expectations with reports of graphics approaching current-gen consoles (even if most of that is probably exaggerated/fairly superficial) i'd expect it's the latter.

the cost per die for Samsung 8nm to TSMC 4N being basically a wash or not too bad seems pretty plausible too.

kliras
Mar 27, 2021
ouch

https://twitter.com/ghost_motley/status/1699873951420141663

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
I'm not sure if straight up ignoring an entire GPU vendor is a smart play, they are no AMD or Nvidia right now but they will be one day, and they still sell decently well. It's unfair to Arc users.

Truga
May 4, 2014
Lipstick Apathy

Dr. Video Games 0031 posted:

can't wait for switch 2 to become the console with the best image quality thanks to dlss

lol, they'll have DLSS alright

and render at 144p because gotta have raytracing!

orcane
Jun 13, 2012

Fun Shoe

Truga posted:

lol, they'll have DLSS alright

and render at 144p because gotta have raytracing!
Better than native(TM).

Truga
May 4, 2014
Lipstick Apathy

Zedsdeadbaby posted:

I'm not sure if straight up ignoring an entire GPU vendor is a smart play, they are no AMD or Nvidia right now but they will be one day, and they still sell decently well. It's unfair to Arc users.
intel should fix it then, pretty sure they still have the money to burn

Dr. Video Games 0031
Jul 17, 2004

Zedsdeadbaby posted:

I'm not sure if straight up ignoring an entire GPU vendor is a smart play, they are no AMD or Nvidia right now but they will be one day, and they still sell decently well. It's unfair to Arc users.

What is Bethesda supposed to do? Intel's drivers are completely hosed, and it's on Intel to fix them.

repiv
Aug 13, 2009

they could have given intel access to builds ahead of release so they could at least start unfucking their drivers ahead of time instead of having to crunch over a weekend

Dr. Video Games 0031
Jul 17, 2004

Has it been confirmed that this happened?

Indiana_Krom
Jun 18, 2007
Net Slacker

Animal posted:

My Corsair SF750 handles a 130% power target 4090FE, 7800X3D, 64GB DDR5, 2x Samsung NVME... no problem. You're fine.

Yeah, I'm not worried about my PSU not being able to handle the power draw, that should be totally fine since I have ~300w to spare by my estimates, I just don't want to use that stupid 4 plug adapter and have all that cable clutter in my case.
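
(For what it's worth, the headroom math is roughly the sketch below; the 400w GPU figure is from the posts above, while the CPU and everything-else numbers are ballpark assumptions, not measurements.)

code:
# Rough PSU headroom estimate. GPU draw is from the posts above;
# CPU and "everything else" are ballpark assumptions.
psu_rating_w = 850
gpu_w = 400    # 3080 Ti under load
cpu_w = 125    # 9900K under a heavy gaming load (assumption)
other_w = 50   # board, fans, drives, peripherals (assumption)

headroom_w = psu_rating_w - (gpu_w + cpu_w + other_w)
print(f"approximate headroom: {headroom_w} W")  # ~275 W, in the same ballpark as the ~300 W estimate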

Enos Cabell
Nov 3, 2004


Indiana_Krom posted:

Yeah, I'm not worried about my PSU not being able to handle the power draw, that should be totally fine since I have ~300w to spare by my estimates, I just don't want to use that stupid 4 plug adapter and have all that cable clutter in my case.

Is it modular? I bought a new cable from Seasonic for mine, no adapter now and a hell of a lot cheaper than replacing a psu in perfect working order.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

Truga posted:

lol, they'll have DLSS alright

and render at 144p because gotta have raytracing!

Yeah, DLSS at Switch 2 resolutions and framerates is probably going to be... not great. And with both the hardware in general and Nintendo's art style not really pushing massive detail per pixel, it's probably not going to have huge upside either.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
look if the Switch 2 can do DLSS, then it can also do DLAA :unsmigghh:

Truga
May 4, 2014
Lipstick Apathy
someone make a vid of what DLAA looks like at 18fps :v:

repiv
Aug 13, 2009

first-party studios at least are good at hitting their 30/60 targets, it's the third parties you have to worry about

slidebite
Nov 6, 2005

Good egg
:colbert:

Assuming the same price and availability, what's the go-to for a 4090:

FE or Zotac Trinity?

No physical case space constraints either (I understand the zotac is slightly larger?)

BlankSystemDaemon
Mar 13, 2009

Because of the vagaries of the Danish market, and maybe a badly priced product, I managed to get in an order for an ASRock AMD Radeon RX 7800 XT Phantom Gaming 16GB OC for $21 more than the equivalent 7700 XT.

I'm okay with this.

EDIT: Lmao, my order went through, but now the price difference is the expected $50.

EDIT2: Yeah, the SKUs are RX7800XT CL 16GO and RX7800XT PG 16GO, so I'm thinking someone typed in the wrong price.

BlankSystemDaemon fucked around with this message at 15:27 on Sep 8, 2023

Yudo
May 15, 2003

slidebite posted:

Assuming the same price and availability, what's the go-to for a 4090:

FE or Zotac Trinity?

No physical case space constraints either (I understand the zotac is slightly larger?)

There isn't a go-to. If the FE were more widely available, perhaps it would fit the bill. Still, no 4090 common to the US market is bad, and it generally isn't worth it to deviate too far from msrp.

The zotac amp extreme airo has a better vrm than the trinity and is about the same price. The Galax 4090 also has a better vrm than the trinity (though it isn't as beefy as the amp airo) and is less expensive. Its cooler may be better than the zotac cards' as well. The only other near-msrp models on Amazon atm are the pny 4090s. They are both reference models (as is the zotac trinity), but they do have good coolers.

Scarabrae
Oct 7, 2002

repiv posted:

they could have given intel access to builds ahead of release so they could at least start unfucking their drivers ahead of time instead of having to crunch over a weekend

AMD paid them not to do that

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know
A groundbreaking win for the consumer! Thank you AMD!

Tha_Joker_GAmer
Aug 16, 2006
I might buy the 7800xt to replace my half borked 980, it feels like the least scammy card of this generation.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
replaced my 3060ti FE with a Gigabyte 4070 and it should be illegal for cards to be this large

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Dr. Video Games 0031 posted:

Just eyeballing things, the die would likely shrink so much going from Samsung 8nm to TSMC 4N that, economically, it's probably a wash. If it ends up costing more per die, it can't be by much despite how much more the node costs per wafer. I'm high on hopium though. We already know that Orin has Ada's power efficiency features backported to it despite having an Ampere GPU. So it being on TSMC 4N would deliver killer power efficiency, as we've already seen. That would be great to have on a gaming handheld, and I don't think it's totally outside of what Nintendo would be comfortable doing. The biggest issue probably is wafer availability. Nvidia may have some capacity to spare considering that H100 is bottlenecked at the packaging stage and not wafer production, and they've cut back their orders on ada wafers. Not sure if that would be enough though.

The biggest disappointment of all would be if they stick to Samsung 8nm, which almost seems like the most likely option because, well, it's Nintendo. Not sure how they hit their performance and battery life targets like that though.

lih posted:

the cost per die for Samsung 8nm to TSMC 4N being basically a wash or not too bad seems pretty plausible too.

yooo I super disagree that going from samsung 8 to tsmc 4n could ever be a wash. can you show the numbers on that one?

remember that PHYs don't shrink at the same rate as logic, and neither does cache/SRAM (30% shrink N7->N5 iirc, none at N3), and you have to consider the whole cost of the SOC with the various bits all scaling independently. Orin is an LPDDR5 design which I do agree would require smaller, less powerful PHYs than a full-on GDDR design, but SOCs have a lot of onboard poo poo that doesn't necessarily scale either.
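
To make that concrete, here's a back-of-envelope cost-per-die sketch. Every input (wafer prices, die area, the split between logic and poorly-scaling blocks) is an illustrative assumption rather than a real quote; the point is only that the answer swings entirely on the assumed wafer price ratio and how much of the die refuses to shrink.

code:
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Gross dies per wafer, standard approximation, ignoring yield."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical 200 mm^2 Samsung 8nm SoC where ~40% of the area
# (PHYs, SRAM, analog, misc I/O) barely shrinks on the newer node.
old_area = 200.0
logic_frac, logic_shrink, other_shrink = 0.6, 0.45, 0.85   # assumptions
new_area = old_area * (logic_frac * logic_shrink + (1 - logic_frac) * other_shrink)

wafer_cost_8nm, wafer_cost_4n = 5000.0, 13000.0   # assumed $/wafer, not real quotes

print(f"new die area: {new_area:.0f} mm^2")                            # ~122 mm^2
print(f"8nm: ${wafer_cost_8nm / dies_per_wafer(old_area):.0f} per die")
print(f"4N:  ${wafer_cost_4n / dies_per_wafer(new_area):.0f} per die")

With these made-up numbers it isn't a wash; drop the assumed 4N wafer price or shrink the non-scaling fraction and it gets a lot closer, which is exactly where the disagreement lives.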

Orin press die shot, annotated from wikipedia, so this is your samsung 8 starting point:


edit: also appears to source to locuza too.

Also, while this isn't directly relevant per se, google had a happy accident and linked me this from Locuza, and while apple is not NVIDIA, maybe it gives a bit of a sense of how an SOC compares to a normal dGPU die. Also note that M1 is 5nm, so maybe that's a particularly good comparison point.

https://twitter.com/Locuza_/status/1450271726827413508

Paul MaudDib fucked around with this message at 06:00 on Sep 9, 2023

Shaocaholica
Oct 29, 2002

Fig. 5E
Since consoles are now using GDDR as unified memory with the CPU, are there any quirks with that arrangement vs a traditional CPU+DDR setup? GDDR latency differences?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Shaocaholica posted:

Since consoles are now using GDDR as unified memory with the CPU, are there any quirks with that arrangement vs a traditional CPU+DDR setup? GDDR latency differences?

Well, it's the same as orin, both use LPDDR5, that's one reason I liked this comparison. I realize that Apple probably is willing to spend more on big caches/deep reorder/lots of execution resources but maybe you could infer the configuration from annotated grace die shots, if any exist (probably locuza again). But this is the most apples-to-apples I think you can get, 4N is probably more or less N5P or very slightly better. if you assume this is like tsmc 10nm on the nvidia orin design and figure out the scaling from there based on scaling of the given blocks, based on either the die shot or analogy to grace (locuza might have die shots). there aren't an overwhelming number of other comparison points for areal scaling of ultra-low-power 5nm SOC designs, certainly don't compare to intel lol. I'll do it at some point if nobody else does, it's an interesting comparison, but I'm not wading into that tonight.

to answer your question tho, not really. LPDDR is sort of a wider/slower design, like HBM-lite. But unlike HBM it's based off DDR not GDDR and the capacities/bus widths/bandwidth are tailored to that, and generally targeting lower-power execution (but without performance compromise any more than needed). But it retains all the normal latency of DDR.

And what Apple has done is just stack a fuckload of those up until you hit GDDR level bandwidth, to almost PS5 level on the M1 Max (as the image shows, 405 vs 440 GB/s). They have like an Epyc Milan worth of actual DDR bandwidth or whatever. So you have absolutely none of the latency penalty of normal graphics memory - you can flip something to the GPU and have it flip back coherently, in a single unified memory space (with no caching caveats) when it's done. So the penalty for mixing GPU async tasks and AI tasks etc is a lot lower. And all your CPU tasks run at full speed, unlike GDDR where the latency has a very significant effect on CPU performance. And all of the GPU memory access patterns are latency-efficient just as they would be on CPU, because... it's the same memory tier. No more array-of-struct vs struct-of-arrays etc.
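
(For anyone who hasn't hit the array-of-structs vs struct-of-arrays thing being name-dropped there, a minimal, platform-agnostic sketch of the distinction; nothing Apple- or NVIDIA-specific about it:)

code:
import numpy as np

N = 1_000_000

# Array-of-structs: every particle's fields interleaved in memory.
aos = np.zeros(N, dtype=[("x", np.float32), ("y", np.float32),
                         ("z", np.float32), ("mass", np.float32)])

# Struct-of-arrays: each field in its own contiguous array, which is what
# wide GPU-style memory systems (and SIMD CPUs) stream most efficiently.
mass = np.zeros(N, dtype=np.float32)

# Summing only mass: the AoS version drags x/y/z through the memory
# system anyway because they share cache lines with mass; the SoA
# version touches only the bytes it actually needs.
total_aos = aos["mass"].sum()
total_soa = mass.sum()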

The only real caveat afaik is that the CPU itself cannot access more than about 30% of the total bus bandwidth. Still unified, still cache-coherent, but the memory port bandwidth isn't designed to route the whole thing to the CPU at once. But anything you can flip off to an async GPU task is free, from that perspective.

They (realistically) have to be soldered to the motherboard to make this work, the signaling requirements are tight, this is like smartphone memory and it's not hanging out 2+ inches from the center of the SOC package. But they also are a smaller package and a much lower-power package and lower-power PHY to communicate with it. And that means smaller too, which matters more at advanced nodes (apple was at 5nm much earlier than others).

And Apple was just like "but we can just put them right on top of the die, and then if we design the processor big and slow it'll have lots of space underneath, right?". Coherent design goals - a valid and efficient tradeoff in the space. They designed the die around the footprint they needed for their target memory subsystem.

(also the tradeoff vs HBM is that 4 stacks of HBM2E would have fuckloads more bandwidth, that'd be ~1.6TB/s for hi-clock mode? and this is only 400GB/s in the same footprint. But the latency is better.)
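
(Spelling that bandwidth aside out, with ballpark per-pin spec rates; the four-stack HBM config is just the hypothetical from the post:)

code:
# Peak-bandwidth arithmetic, spec-sheet ballpark rather than measured.
hbm2e_stacks = 4
hbm2e_bits_per_stack = 1024
hbm2e_gbps_per_pin = 3.2      # high-clock HBM2E bins run roughly 3.2-3.6 Gbps
hbm2e_gb_s = hbm2e_stacks * hbm2e_bits_per_stack * hbm2e_gbps_per_pin / 8
print(f"4x HBM2E:       ~{hbm2e_gb_s:.0f} GB/s")   # ~1600 GB/s, i.e. ~1.6 TB/s

lpddr5_bus_bits = 512         # M1 Max-class bus width
lpddr5_gbps_per_pin = 6.4
lpddr5_gb_s = lpddr5_bus_bits * lpddr5_gbps_per_pin / 8
print(f"512-bit LPDDR5: ~{lpddr5_gb_s:.0f} GB/s")  # ~410 GB/s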

Maybe I guess the answer is NVIDIA wants to do that same kind of thing for the handheld space? I still think they were very serious about the ARM acquisition and about pushing NVIDIA IP as default GPU and cementing smartphone as another CUDA win. But I also think they want to build some competitor to AMD's APUs, they have to be aware that letting AMD control the desktop console market (and with Steam Deck, a foothold into the Switch handheld market, followed by a torrent of clones) is a long term threat. And maybe the NVIDIA N1 Max is the product they can make to fix that, and capitalize on their efficiency lead, etc. Maybe that is a product they'd develop for themselves anyway at this moment in time. Not stacked/etc but N5P/4N switch with a traditional smartphone motherboard and a performance focus? I could see that. Maybe Nintendo at least wants to keep up with the deck, which is based on relatively old tech and released a number of years ago.

There also is the other other factor that NVIDIA may have a TSMC N5-family wafer surplus, a packaging bottleneck in the AI/enterprise market, and a very soft consumer gaming market, compounded by a large stockpile of ada inventory (a lot of it built out already, in configurations the market kinda doesn't want) on top of some low-end 30-series. Maybe they'd sell wafers to Nintendo at below actual cost just to burn some of that surplus.

Low-key I think if Apple can actually get adoption on Metal it's a very desirable set of hardware underneath, and my wishcasting dream is that maybe the computer-touchers starting to get apple hardware will lead to some CUDA-style platform effects and codebase growth around the capabilities of the hardware available to other nerds. Things like GGML and so on. Small market, small target market, but fully-supported unix on the desktop with Asrock-tier meme hardware design and a fat budget in your target market is super appealing. And I've seen a lot of places roll it out with JAMF Connect over the last couple years, that enterprise management bit is sticky but I think engineers often learn to love unix on the desktop. That's the breakout for a CUDA competitor imo, it's either that or oneapi managing to solve the problem with sycl. Someone's gonna have to sit down and write a lot of code, unless things start getting real portable. Who's worth porting over? The laptop you're working on.

(a 256gb rackmount mac pro running asahi linux with a bunch of pcie cards would be absolutely sick, asahi doesn't support thunderbolt yet but mac pro doesn't need thunderbolt, it has pcie. you can theoretically run anything there's an armv8 *nix hardware driver for. yeah it's not infinite ram but it's a superfast chip for the ram that it can run, that'd be a sick postgres server with a SSD array underneath etc.)

Paul MaudDib fucked around with this message at 09:35 on Sep 9, 2023

kliras
Mar 27, 2021
https://www.youtube.com/watch?v=ciOFwUBTs5s

if you like nerdy videos, alex has you covered

some fsr2 + foliage scenes are downright nauseating

repiv
Aug 13, 2009

over 600k downloads between the different DLSS injectors on nexusmods now lol

money well spent AMD

hobbesmaster
Jan 28, 2008

repiv posted:

over 600k downloads between the different DLSS injectors on nexusmods now lol

money well spent AMD

I want to see stats for NVIDIA developer docs/blogs like this one:
https://developer.nvidia.com/blog/nvidia-dlss-updates-for-super-resolution-and-unreal-engine/

Btw, if you are getting crashes in starfield with the DLSS mods and are using frame generation, that is the culprit and you should probably turn it off.

njsykora
Jan 23, 2012

Robots confuse squirrels.


hobbesmaster posted:

Btw, if you are using frame generation you should probably turn it off.

hobbesmaster
Jan 28, 2008

So, this is something you have to see to believe, and not many people have seen it, but frame generation actually has a much bigger effect than you might think.

If this makes sense, it's best thought of as a technology like variable refresh rate. For example, running path tracing in cyberpunk may cause even your high-end 4090 to drop to 30-40 fps. Without a VRR monitor at that framerate you'd get tearing and other weirdness without vsync, and massive delays with vsync. With VRR you'll have a "floaty" feeling at that framerate from noticeable pacing issues. FG helps with the "floatiness" from those frame pacing issues and is actually a quite noticeable improvement.
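
(The frame-time arithmetic behind that, as a simple illustration rather than a measurement of any particular game; the assumption is the usual one-generated-frame-per-rendered-frame behaviour:)

code:
# Frame-time arithmetic for the 30-40 fps range described above.
for fps in (30, 40, 60, 120):
    print(f"{fps:3d} fps -> {1000 / fps:.1f} ms per frame")
# 30 fps -> 33.3 ms, 40 fps -> 25.0 ms, 60 fps -> 16.7 ms, 120 fps -> 8.3 ms

# With frame generation inserting one interpolated frame between each
# rendered pair, a 35 fps render rate presents at ~70 fps, so the gap
# between displayed frames drops from ~28.6 ms to ~14.3 ms even though
# input is still sampled at the lower render rate.
render_fps = 35
presented_fps = render_fps * 2
print(f"{render_fps} fps rendered -> {presented_fps} fps presented, "
      f"{1000 / presented_fps:.1f} ms between displayed frames")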

Weird Pumpkin
Oct 7, 2007

hobbesmaster posted:

I want to see stats for NVIDIA developer docs/blogs like this one:
https://developer.nvidia.com/blog/nvidia-dlss-updates-for-super-resolution-and-unreal-engine/

Btw, if you are getting crashes in starfield with the DLSS mods and are using frame generation that is the culprit and you should probably turn it off.

For some reason the dlss mod completely breaks steam link streaming to my TV. I just get a black screen

Otherwise it's honestly pretty incredible how well it works with minimal effort

Alchenar
Apr 9, 2008

repiv posted:

over 600k downloads between the different DLSS injectors on nexusmods now lol

money well spent AMD

They literally paid to advertise DLSS, and you know at some point soon they'll implement it officially because it's absurd to hold that they can't.

  • Reply