Scarabrae
Oct 7, 2002

Would I be able to get the goon consensus on the best bang for the buck 4070? I would like to keep an 8-pin power connector; the TUF Gaming looks good but it's massive. Right now my brain is trying to check reviews and toss up between the Gigabyte Windforce and the ASUS Dual.

Dr. Video Games 0031
Jul 17, 2004

Scarabrae posted:

Would I be able to get the goon consensus on the best bang for the buck 4070? I would like to keep an 8-pin power connector; the TUF Gaming looks good but it's massive. Right now my brain is trying to check reviews and toss up between the Gigabyte Windforce and the ASUS Dual.

Massive isn't necessarily a con in my opinion, unless your system doesn't have the physical space necessary to accommodate a large GPU. That said, it's not worth paying more than the $599 MSRP in my opinion, so I'd just stick to models at that price point. The Asus Dual has a good reputation and is a solid choice I think.

Yudo
May 15, 2003

Scarabrae posted:

Would I be able to get the goon consensus on the best bang for the buck 4070? I would like to keep an 8-pin power connector; the TUF Gaming looks good but it's massive. Right now my brain is trying to check reviews and toss up between the Gigabyte Windforce and the ASUS Dual.

Asus dual is the better card, though neither are bad. No gpu of the 40xx series is a good value, just different degrees of bad. In that sense, the 4070 I think is the least offensive of the bunch.

Edit: I agree with the above post. Don't pay more than msrp.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

Yudo posted:

Chiplets, for example, and mixed-process fabrication are way, way more important innovations than frame generation or AI raytracing. Also, AMD cards can do matrix math just fine, and SIMD has been a thing since long before Nvidia was even a company.

Sure, MCM is hard, and Nvidia will have to go through the same thing at some point, but right now it provides no tangible benefit.

If you're right about what Nvidia is doing being trivial, at some point AMD will rocket ahead, but I doubt it. AMD has made the wrong bets about the short-term future, and it's going to require serious focus to get even close in the realm of path tracing, which is well on its way to the mainstream, as unbelievable as that would have seemed a few years ago.

sauer kraut
Oct 2, 2004
e; nm the 4070 Dual seems to not suck according to TPU
That wasn't always the case, historically.

sauer kraut fucked around with this message at 01:34 on Sep 4, 2023

Dr. Video Games 0031
Jul 17, 2004

sauer kraut posted:

Asus Dual at 200W+ is not gonna be silent at all, take it from someone who has cheaped out before :smith:
If your case can fit the TUF, I would go with that.

The Asus Dual is one of the quietest 4070 models on the market. It's deceptively huge for a dual-fan card.

edit:

Dr. Video Games 0031 fucked around with this message at 01:35 on Sep 4, 2023

sauer kraut
Oct 2, 2004
Yeah I saw that too late. My 580 Dual was hell compared to the beefier models back in the day.

Scarabrae
Oct 7, 2002

Very interesting, appreciate the info. I think a TUF would fit, but it'll completely cover up my NVMe, which I guess is fine, whatever; I wasn't planning on swapping the drives out anytime soon.

Dr. Video Games 0031
Jul 17, 2004

Or for another perspective, this is from Hardware Unboxed's 4070 roundup video:



The Windforce's cooler seems kinda crappy. I think the Dual is the way to go when it comes to the 4070. The TUF Gaming and Gaming X Trio are really good, but they're not worth the extra money.

Yudo
May 15, 2003

K8.0 posted:

Sure, MCM is hard, and Nvidia will have to go through the same thing at some point, but right now it provides no tangible benefit.

If you're right about what Nvidia is doing being trivial, at some point AMD will rocket ahead, but I doubt it. AMD has made the wrong bets about the short-term future, and it's going to require serious focus to get even close in the realm of path tracing, which is well on its way to the mainstream, as unbelievable as that would have seemed a few years ago.

The reticle limit of 2nm will be around 400mm^2. Monster chips are going the way of the dodo. Amd has been gaining experience with technologies relevant to chiplets since starting development of Fiji back in 2007. The benefit of chiplets is that amd is learning how to do it, as after rdna4 and Ada next, it will be a design requirement. Intel has been tripping over themselves for years trying to catch up. If you want fancier graphics, in other words, chiplets may be the only way to get there.

How many games support path tracing, exactly? So no, path tracing is not on the way to being mainstream. It will be a decade before any console will be able to manage it.
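Back-of-envelope version of the reticle argument, with purely illustrative numbers from me (not real die sizes or TSMC figures):

code:
import math

# Rough split of a hypothetical monolithic design into equal chiplets that
# each fit under a reticle cap. The overhead factor is a guess for the extra
# interconnect/PHY area you pay for going multi-die.
def chiplets_needed(total_area_mm2, reticle_limit_mm2, packaging_overhead=1.10):
    effective_area = total_area_mm2 * packaging_overhead
    return math.ceil(effective_area / reticle_limit_mm2)

print(chiplets_needed(600, 400))  # a ~600mm^2-class flagship -> 2 chiplets
print(chiplets_needed(800, 400))  # a ~800mm^2-class flagship -> 3 chiplets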

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Whatever you want to tell yourself. Go ahead and bookmark your post so you can quote it two years from now, you'll look like a fuckin genius if you're right and Nvidia is hopelessly in the dust while the future of rendering is raw rasterization.

Yudo
May 15, 2003

K8.0 posted:

Whatever you want to tell yourself. Go ahead and bookmark your post so you can quote it two years from now, you'll look like a fuckin genius if you're right and Nvidia is hopelessly in the dust while the future of rendering is rasterization.

No, no bookmarks. This is an excellent post though.

Dr. Video Games 0031
Jul 17, 2004

2nm is still regular EUV, so it will have an 800mm2 reticle limit still. High-NA EUV will have a limit closer to 400mm2. Nvidia does have some MCM designs for datacenter chips, and something tells me that they won't have much trouble pulling off chiplets in home GPUs if that's where the future ends up lying.

edit: Actually, I got TSMC's process nodes mixed up, and it seems 2nm will be the first to use High-NA. Still, I think it's weird to pose this as some kind of problem that will be really difficult for Nvidia to overcome somehow. They have some of the most talented engineers in the world, I'm sure they'll figure it out if it becomes necessary to do so.

Dr. Video Games 0031 fucked around with this message at 02:03 on Sep 4, 2023

Scarabrae
Oct 7, 2002

Dr. Video Games 0031 posted:

Or for another perspective, this is from Hardware Unboxed's 4070 roundup video:



The Windforce's cooler seems kinda crappy. I think the Dual is the way to go when it comes to the 4070. The TUF Gaming and Gaming X Trio are really good, but they're not worth the extra money.

:hmmyes: ok that pretty much convinces me

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Yudo posted:

The reticle limit of 2nm will be around 400mm^2. Monster chips are going the way of the dodo. Amd has been gaining experience with technologies relevant to chiplets since starting development of Fiji back in 2007. The benefit of chiplets is that amd is learning how to do it, as after rdna4 and Ada next, it will be a design requirement. Intel has been tripping over themselves for years trying to catch up. If you want fancier graphics, in other words, chiplets may be the only way to get there.

How many games support path tracing, exactly? So no, path tracing is not on the way to being mainstream. It will be a decade before any console will be able to manage it.

I'm pretty sure he just has a massive hardon of hate for AMD('s gaming division), such that anything AMD has done will be trivial for Nvidia, while anything Nvidia has done will take years/decades for AMD to catch up on, if they actually want to.

While I mostly joke about AMD leaving the dGPU space, as I don't actually believe they will, I will say that I think it's much easier to catch up on the software side of things than it is to gain the necessary experience on the hardware side when you're about to run into a hard wall/limit.

It still seems like AMD misjudged where things were going, though it also seems like they should reasonably be able to add the hardware to catch up to Nvidia within a few generations.

Dr. Video Games 0031 posted:

2nm is still regular EUV, so it will have an 800mm2 reticle limit still. High-NA EUV will have a limit closer to 400mm2. Nvidia does have some MCM designs for datacenter chips, and something tells me that they won't have much trouble pulling off chiplets in home GPUs if that's where the future ends up lying.

edit: Actually, I got TSMC's process nodes mixed up, and it seems 2nm will be the first to use High-NA. Still, I think it's weird to pose this as some kind of problem that will be really difficult for Nvidia to overcome somehow. They have some of the most talented engineers in the world, I'm sure they'll figure it out if it becomes necessary to do so.

The bigger challenge for Nvidia will likely be fighting Apple for TSMC's cutting-edge nodes. There's a reason TSMC gives Apple right of first refusal for every new node...

steckles
Jan 14, 2006

K8.0 posted:

If you're right about what Nvidia is doing being trivial, at some point AMD will rocket ahead, but I doubt it. AMD has made the wrong bets about the short-term future, and it's going to require serious focus to get even close in the realm of path tracing, which is well on its way to the mainstream, as unbelievable as that would have seemed a few years ago.
I dunno, ray/triangle and ray/box intersection hardware isn't that hard. But I think it's a matter of what Sony and Microsoft want for the PS5 Pro/PS6 and Xbox Series Y or whatever stupid name they choose. If they decide that they want more of an RT uplift than the FLOPS improvement over the current consoles would imply, then I don't doubt AMD could do it. Same with matrix multiplication/tensor acceleration. If console designs demand it, they could probably deliver it.

There is the software side of course, to actually take advantage of the hardware, but NVidia is publishing huge amounts of research in that space. When and if AMD ever decides to get serious about RT/AI for games, for whatever reason, they'll be able to move pretty quickly because NVidia has already done a ton of legwork for them. That's not to say AMD isn't also publishing a lot of interesting stuff, but NVidia is definitely doing waaaay more.
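For the curious, the core math of a ray/box test really is tiny; here's the standard slab method in plain Python (the fixed-function units obviously do something far more parallel and optimized, this is just the arithmetic):

code:
# Slab test: a ray hits an axis-aligned box iff the intervals where it is
# inside each pair of slabs (x, y, z) all overlap. Callers precompute
# inv_dir = 1/direction per component; axis-parallel rays need extra care.
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    t_near, t_far = 0.0, float("inf")  # only intersections in front of the origin
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv_d, (hi - o) * inv_d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return None  # slab intervals don't overlap: miss
    return t_near  # distance to the entry point

# Ray from the origin along (1,1,1), box spanning (1,1,1) to (3,3,3):
print(ray_aabb_intersect((0, 0, 0), (1.0, 1.0, 1.0), (1, 1, 1), (3, 3, 3)))  # 1.0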

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Scarabrae posted:

Would I be able to get the goon consensus on the best bang for the buck 4070?

I got an Open-Box from BestBuy for $500 and it was missing the 12VHPWR dongle, so I talked the cashier down to $400. I'd check BB in your area, and if not maybe Microcenter or Amazon Warehouse open box.

Yudo
May 15, 2003

Edit:

Dr. Video Games 0031 posted:

2nm is still regular EUV, so it will have an 800mm2 reticle limit still. High-NA EUV will have a limit closer to 400mm2. Nvidia does have some MCM designs for datacenter chips, and something tells me that they won't have much trouble pulling off chiplets in home GPUs if that's where the future ends up lying.

edit: Actually, I got TSMC's process nodes mixed up, and it seems 2nm will be the first to use High-NA. Still, I think it's weird to pose this as some kind of problem that will be really difficult for Nvidia to overcome somehow. They have some of the most talented engineers in the world, I'm sure they'll figure it out if it becomes necessary to do so.


And how did the strategy of rushing to chiplets work for Intel? Intel has some very smart engineers too! Nvidia has an excellent design team, but learning by doing shouldn't be underestimated. Nvidia has hosed up too--for example, cooking 4090s--and they are in full "give no shits" about gaming mode.

Yudo fucked around with this message at 02:16 on Sep 4, 2023

power crystals
Jun 6, 2007

Who wants a belly rub??

The immediate counter argument to chiplet GPUs being amazing is every rumor of the RX8000s says they're going to be the wettest of farts. I would love to see AMD leap past nvidia but it sure doesn't feel like it's going to happen anytime soon.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Yudo posted:

Edit:

And how did the strategy of rushing to chiplets work for Intel? Intel has some very smart engineers too! Nvidia has an excellent design team, but learning by doing shouldn't be underestimated. Nvidia has hosed up too--for example, cooking 4090s--and they are in full "give no shits" about gaming mode.

Intel also has a lot of institutional... challenges, that I don't think exist with Nvidia. At the same time, Nvidia's dominance/lead could very easily lead to them believing they can easily handle any challenge, and overlook/underestimate what it would take. I still doubt that though.

I think it's far more likely that they keep going bigger. Bigger. BIGGER, as long as they can, to just continue raking in piles of cash, while simultaneously trying to get more experience with chiplets.

The challenge really will become procuring cutting-edge fab space relative to Apple's interests, and I could easily see them eventually being at least 1-2 nodes behind the cutting edge due to Apple's demands. So at that point, I guess it depends on how well they can juice non-chiplet designs vs. rapidly transitioning to chiplets.

I almost wonder if all of the noise last year about 600W 4090s and such, including over engineered cooling solutions, was essentially them setting the stage for needing to crank up power usage in the next generation or two. It'll be interesting to see.

I will find it hilarious though if a point comes where a 5090 or 6090 is using 600W on its own, and everyone in this thread is stumbling over themselves to explain why, no, actually it's perfectly fine and reasonable for a GPU to use that much. Because "hobby". lol

Edit:

power crystals posted:

The immediate counter argument to chiplet GPUs being amazing is every rumor of the RX8000s says they're going to be the wettest of farts. I would love to see AMD leap past nvidia but it sure doesn't feel like it's going to happen anytime soon.

Well, there's also that discussion about RDNA4 basically only targeting mid-range; I think Dr. Video Games 0031 had a good comment one time about how they may just focus on mid-tier for a generation or two while they reset themselves or plan for the next console generations. If that's likely, then it seems like a perfect time to do a mass-transition over to solely a chiplet approach, and work out all the kinks/bugs before using it for Sony and MS's next offerings while also targeting the high end again.

I mean, look at Ryzen - the first and second generations all had teething pains of various sorts, for a variety of reasons, but by the third generation they pretty much had their poo poo together and could start pushing back heavily on Intel, and it seems like they're now set to continue that strong showing for the foreseeable future.

Canned Sunshine fucked around with this message at 02:27 on Sep 4, 2023

Yudo
May 15, 2003

power crystals posted:

The immediate counter argument to chiplet GPUs being amazing is every rumor of the RX8000s says they're going to be the wettest of farts. I would love to see AMD leap past nvidia but it sure doesn't feel like it's going to happen anytime soon.

I think the takeaway is that an ambitious chiplet design was out of AMD's reach. That says a lot about how hard it is.

I don't give a fig if AMD leaps over Nvidia. I think top of the stack competition matters to a tiny segment of the market. Future lithographic technology will not support Nvidia's current design philosophy, nor will the cost of 2nm in the absence of SRAM scaling. If 3nm is a harbinger of what is to come, chiplets are a promising way forward that is way more important than eye candy if people want faster GPUs at reasonable prices. If we want path tracing to be mainstream, we need mainstream cards/APUs to be as fast as the 4090! That is a tall order.

SourKraut posted:

Intel also has a lot of institutional... challenges, that I don't think exist with Nvidia. At the same time, Nvidia's dominance/lead could very easily lead to them believing they can easily handle any challenge, and overlook/underestimate what it would take. I still doubt that though.

I think it's far more likely that they keep going bigger. Bigger. BIGGER, as long as they can, to just continue raking in piles of cash, while simultaneously trying to get more experience with chiplets.

Regarding Intel, I agree completely. Still, the point is that moving to chiplets is not trivial, though in fairness Intel's manufacturing problems are a likely culprit as well (Nvidia will of course use Samsung or TSMC). Nvidia can't go bigger and bigger: at 2nm, 400mm^2 is the limit.

Yudo fucked around with this message at 02:51 on Sep 4, 2023

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Yudo posted:

Regarding Intel, I agree completely. Still, the point is that moving to chiplets is not trivial, though in fairness Intel's manufacturing problems are a likely culprit as well (Nvidia will of course use Samsung or TSMC). Nvidia can't go bigger and bigger: at 2nm, 400mm^2 is the limit.

Sorry, I meant more that prior to 2nm, they'll just try to go bigger and bigger; they'll have to have transitioned their designs to chiplet-based by then, or otherwise just be on an older node, juicing the hell out of the design.

Yudo
May 15, 2003

SourKraut posted:

Sorry, I meant more that prior to 2nm, they'll just try to go bigger and bigger; they'll have to have transitioned their designs to chiplet-based by then, or otherwise just be on an older node, juicing the hell out of the design.

Fair enough. Thanks for clarifying. If anyone can pull it off, it is Nvidia.

Edit: also, regarding your previous comment, cooling a 600W chip that is as transistor-dense as 3nm or 2nm (though we don't know much about that node) is going to be impractical. It's too much heat in too small a space.

I doubt it will be necessary, and 600W wasn't even necessary for the 4090. While I don't watch it closely, power use during a game is usually under 400W on my 4090, with math stuff closer to 450W. Bumping up the limit does next to gently caress all and isn't worth the heat. Dumping power into chips has been the go-to move for Intel and AMD CPUs this generation with not much to show for it (i.e. performance per watt staying roughly flat or decreasing on several SKUs), but Ada is actually pretty impressive in terms of efficiency even with the top-line card having such a ludicrous TDP.

Yudo fucked around with this message at 03:03 on Sep 4, 2023

Cross-Section
Mar 18, 2009

Zero VGS posted:

They probably made more money than any grunt working on the actual game, lol

If you pay the 5bux, do you get ALL the puredark mods released, or just this one?

Yes, they’re all downloadable from their discord

The Jedi Survivor one especially is still a must-have if you want to have anything approaching a high FPS experience in that game

Truga
May 4, 2014
Lipstick Apathy

SourKraut posted:

I mean, look at Ryzen - the first and second generations all had teething pains of various sorts, for a variety of reasons, but by the third generation they pretty much had their poo poo together and could start pushing back heavily on Intel, and it seems like they're now set to continue that strong showing for the foreseeable future.

yeah if amd manages to pull that poo poo off in GPU space we're gonna see some real poo poo

hopefully they're done before my 6900 kicks the bucket

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know
I guess FSR is good but it's kind of funny how the best time to be a DLSS user was before FSR existed since games couldn't just rely on some solution being present. The window between DLSS becoming good and the implementation of FSR2 will be seen as a golden age for upscaling.

You can somewhat bypass this effect through sheer horsepower with a 4090 but still. For most people it feels like we're going back to where we started and all the overhead is just being snatched away in an effort to make games easier to develop.

Welp. It was good for a second! And it's ironically still good if you have a 4090, because devs still can't assume you have that kind of horsepower, and 4090 users are also the only real beneficiaries of DLSS 3.5 RTX for the foreseeable future, so it's a particularly good place to be. But even then, CPU overhead is starting to become a major problem even for 4090 users.

On one side we have a company literally paying people to make their games worse (and apparently lying about it???), and on the other side, a company making products that only truly shine with a $1,500 graphics card.

Meanwhile, in the middle, developers (or more accurately, the financial arms of these development studios) are milking the system on purpose to save money. Amazing.

Tangential question, is DLSS3 FG still bad, and if not, under what circumstances is it good? I tried it once with CP2077 and thought it was gross, but that was a long time ago (well, relatively; when it came out). I want it to be good though, because it feels like the future of upscaling, I guess.

Taima fucked around with this message at 18:39 on Sep 4, 2023

Indiana_Krom
Jun 18, 2007
Net Slacker

Taima posted:

Tangential question, is DLSS3 FG still bad, and if not, under what circumstances is it good? I tried it once with CP2077 and thought it was gross, but that was a long time ago (well, relatively; when it came out).

FG is always going to be bad in most games because it adds 1 (real) frame of latency to the pipeline. The only place it won't be bad is in games that aren't latency sensitive at all.
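To put rough numbers on that one frame (simple arithmetic only, ignoring Reflex, queueing, and display latency):

code:
# Latency added by holding back one real frame, at a given internally
# rendered frame rate. Illustrative only; real pipelines stack more on top.
def added_latency_ms(internal_fps):
    return 1000.0 / internal_fps

for fps in (30, 60, 120):
    print(f"{fps} fps internal -> ~{added_latency_ms(fps):.1f} ms extra from FG")
# 30 -> ~33.3 ms, 60 -> ~16.7 ms, 120 -> ~8.3 ms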

BurritoJustice
Oct 9, 2012

Indiana_Krom posted:

FG is always going to be bad in most games because it adds 1 (real) frame of latency to the pipeline. The only place it won't be bad is in games that aren't latency sensitive at all.

With this logic nobody should ever buy an Intel or AMD GPU, because they're more than a frame of latency behind Nvidia with reflex enabled. Not everyone requires the absolute lowest possible latency at all times.

Arzachel
May 12, 2012

BurritoJustice posted:

With this logic nobody should ever buy an Intel or AMD GPU, because they're more than a frame of latency behind Nvidia with reflex enabled. Not everyone requires the absolute lowest possible latency at all times.

Reflex is nice, but you can get most of the benefit by capping your fps. What's the scenario where you want to push the frame rate but also don't care about the latency hit?
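For anyone wondering why the cap helps: it stops the CPU from running ahead and stacking up queued frames. A toy sketch of just the capping part (this is not how Reflex itself works):

code:
import time

# Crude frame limiter: if the CPU finishes early, sleep instead of starting
# the next frame immediately, so frames don't pile up in the render queue.
TARGET_FRAME_TIME = 1.0 / 117  # cap slightly below the display/GPU limit

def run_capped(num_frames, do_frame_work):
    for _ in range(num_frames):
        start = time.perf_counter()
        do_frame_work()                          # sample input, simulate, submit draws
        elapsed = time.perf_counter() - start
        if elapsed < TARGET_FRAME_TIME:
            time.sleep(TARGET_FRAME_TIME - elapsed)  # don't run ahead of the GPU

run_capped(10, lambda: time.sleep(0.002))  # stand-in for ~2 ms of CPU work per frame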

Jeff Fatwood
Jun 17, 2013

Truga posted:

yeah if amd manages

We're hosed

Truga
May 4, 2014
Lipstick Apathy

i really don't understand this genre of posts, i'm on an amd gpu right now and it works great, and on an amd cpu which is insanely loving good

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Dr. Video Games 0031 posted:

2nm is still regular EUV, so it will have an 800mm2 reticle limit still. High-NA EUV will have a limit closer to 400mm2. Nvidia does have some MCM designs for datacenter chips, and something tells me that they won't have much trouble pulling off chiplets in home GPUs if that's where the future ends up lying.

edit: Actually, I got TSMC's process nodes mixed up, and it seems 2nm will be the first to use High-NA. Still, I think it's weird to pose this as some kind of problem that will be really difficult for Nvidia to overcome somehow. They have some of the most talented engineers in the world, I'm sure they'll figure it out if it becomes necessary to do so.

The extremely funni outcome will be if DLSS is the key to efficient multichiplet scaling.

Temporal techniques have always struggled with that, but if DLSS is really super good at working with limited sample availability and still getting a good output, maybe that's not an issue. And technically there's nothing that requires each chiplet to be working from the same samples, as long as the output is temporally stable.
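The accumulation itself doesn't care which die produced a given sample, as long as everything lands in the same history buffer. Toy sketch (plain exponential accumulation, nothing DLSS-specific, and the "chiplet A covered the left half" bit is pretend):

code:
import numpy as np

# Blend this frame's (possibly sparse) samples into a persistent history
# buffer; pixels with no fresh sample this frame just keep their history.
def accumulate(history, new_samples, sample_mask, alpha=0.1):
    blended = (1 - alpha) * history + alpha * new_samples
    return np.where(sample_mask, blended, history)

history = np.zeros((4, 4))
samples = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True                 # pretend chiplet A sampled the left half this frame
print(accumulate(history, samples, mask))  # left half nudged toward 1.0, right half untouched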

Maybe the big push for DLSS is also partly because they know it’ll be foundational in 3-5 years when they move to MCM lol

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
https://twitter.com/PcPhilanthropy/status/1696513848939856168/history

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Yudo posted:

Edit:

And how did the strategy of rushing to chiplets work for Intel? Intel has some very smart engineers too! Nvidia has an excellent design team, but learning by doing shouldn't be underestimated. Nvidia has hosed up too--for example, cooking 4090s--and they are in full "give no shits" about gaming mode.

Nvidia has plenty of learning by doing too - they were only a year or two behind Fiji with GP100 and have been iterating ever since.

Like what did amd learn about MCM from fiji that you don’t think nvidia learned from GP100, a much better and bigger and more successful product that spawned a product line that is currently pushing them to over a trillion dollar market cap?

There are certainly some lessons around advanced packaging/HBM2 integration, but NVIDIA would have learned those same lessons. And while true multi-GCD chiplet scaling is another step forward, AMD isn't there yet either, and even just pulling out the MCDs hasn't been entirely successful.

Paul MaudDib fucked around with this message at 15:13 on Sep 4, 2023

repiv
Aug 13, 2009

Paul MaudDib posted:

The extremely funni outcome will be if DLSS is the key to efficient multichiplet scaling.

nah any chiplet approach that requires software intervention is an absolute non-starter unless consoles go down that road too, approximately nobody is going to rearchitect their engines just for a weird PC architecture

pixel accumulation isn't the only problematic thing if you wanted to do AFR in a modern engine, you've also got things like shadow caches, GPU physics sims, GPU culling etc that rely on last-frame data

maybe you could get away with using frame N-2 data in some cases (probably good enough for culling at least) but that's still something the game engine would have to facilitate
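concretely, the problem is that passes in frame N read buffers that frame N-1 wrote, so putting odd and even frames on different GPUs means copying those buffers across the link every single frame. toy stand-in passes below, not any real engine's pipeline:

code:
# Trivial stand-ins just to show the frame N-1 -> frame N data dependencies.
def render_frame(n, prev):
    frame = {}
    # Occlusion culling tests against *last* frame's depth pyramid.
    last_depth = prev["depth_pyramid"] if prev else [1.0] * 8
    frame["visible"] = [i for i, d in enumerate(last_depth) if d > 0.5]
    # Frame N writes a new depth pyramid for frame N+1 to consume.
    frame["depth_pyramid"] = [0.25 if i in frame["visible"] else 1.0 for i in range(8)]
    # GPU physics integrates forward from last frame's state.
    frame["particles"] = [p + 0.1 for p in (prev["particles"] if prev else [0.0] * 4)]
    return frame

prev = None
for n in range(3):
    prev = render_frame(n, prev)
# Under AFR, frame N sits on GPU A and frame N-1 on GPU B, so every one of
# those prev[...] reads becomes a transfer over the inter-GPU link.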

repiv fucked around with this message at 15:17 on Sep 4, 2023

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

Why? AMD is massively behind in the GPU space, but not to the extent that they were behind in the CPU space pre-Zen. They are maybe as far behind as they were with Zen2, and then goons were happy to pretend AMD CPUs were "just as good" even though they clearly weren't. The opposing double hyperbole is a bit weird to me.

Granted, catching up on a lazy Intel was easier than catching Nvidia at their best, so it would take more time, but it's not impossible.

K8.0 fucked around with this message at 15:35 on Sep 4, 2023

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

repiv posted:

nah any chiplet approach that requires software intervention is an absolute non-starter unless consoles go down that road too, approximately nobody is going to rearchitect their engines just for a weird PC architecture

pixel accumulation isn't the only problematic thing if you wanted to do AFR in a modern engine, you've also got things like shadow caches, GPU physics sims, GPU culling etc that rely on last-frame data

maybe you could get away with using frame N-2 data in some cases (probably good enough for culling at least) but that's still something the game engine would have to facilitate

what if the "fuzziness" of DLSS lets you distribute a single frame (with some overlap/re-rendering) across multiple GPUs? if DLSS eventually succeeds in "inversion of control" (where the game engine is really an injectable service/module with some public hooks/calls that the DLSS engine puppeteers, and is merely used for the game logic loop and then sample generation) then mGPU rendering simply becomes DLSS calling for all the samples for region [0,N/2] on card A and putting [N/2, N] on card B. And it's already probably going to be deciding what pixels it wants to sample to optimize for DLSS output (filling in pixels with lots of motion/poor sample history quality), so why not sample them location-aware and then splice together the resulting image?
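A toy version of what that region split might look like at the scheduling level (entirely my speculation about a hypothetical splitter; none of this reflects how DLSS actually works internally):

code:
# Split one frame's sample work across two GPUs by screen region, with a bit
# of overlap at the seam, then stitch the halves back together. Hypothetical.
def split_sample_requests(width, height, overlap=16):
    mid = width // 2
    return {
        "gpu_a": (0, 0, mid + overlap, height),      # left half plus seam overlap
        "gpu_b": (mid - overlap, 0, width, height),  # right half plus seam overlap
    }

def stitch(width, left_cols, right_cols, overlap=16):
    mid = width // 2
    # Keep GPU A's columns up to the midline, GPU B's from there on; a real
    # implementation would blend across the overlap instead of hard-cutting.
    return left_cols[:mid] + right_cols[overlap:]

left = list(range(0, 1920 + 16))       # columns "rendered" by GPU A at 4K
right = list(range(1920 - 16, 3840))   # columns "rendered" by GPU B
assert stitch(3840, left, right) == list(range(3840))
print(split_sample_requests(3840, 2160))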

or if the framegen gets good enough that they can do culling/etc off the framegen image from n-2 and get a reasonable approximation of n-1. But yeah you're probably right that this would need some additional tuning... but DLSS/FSR/TAA in general always has needed some tuning, things like putting objects into the masks or warming up sample history caches before objects are visible.

everyone's right that NVIDIA is absolutely hammering forward on DLSS and it may not be just that it's an awesome upscaler/good for performance, but that one or all of these techniques (TAAU, framegen, potentially more stuff in the future) is going to be necessary to make mGPU work. And even if some small intervention is necessary, if the alternative is "single-chiplet performance isn't going to go up anymore with normal methods due to Physics Problems, this is what you get" then maybe there's the leverage to force that tuning. AMD has to make their case on that too of course.

because I do agree that NVIDIA clearly is not going to give up such a foundational market and I severely doubt they're going to go "welp, 400mm2 max, no more 5090s guys" and not only have a stall-out generation but generally reduce their capacity for true high-end products indefinitely.

Paul MaudDib fucked around with this message at 05:00 on Sep 5, 2023

Truga
May 4, 2014
Lipstick Apathy

repiv posted:

nah any chiplet approach that requires software intervention is an absolute non-starter unless consoles go down that road too, approximately nobody is going to rearchitect their engines just for a weird PC architecture

pixel accumulation isn't the only problematic thing if you wanted to do AFR in a modern engine, you've also got things like shadow caches, GPU physics sims, GPU culling etc that rely on last-frame data

maybe you could get away with using frame N-2 data in some cases (probably good enough for culling at least) but that's still something the game engine would have to facilitate

IMO even in the best case scenario of "all popular console engines now support chiplet GPUs", you still have a shitton of old games that are going to break and/or run like poo poo, and in PC space that's a complete non-starter. hell, even with consoles it seems to me like backwards compatibility is now entirely expected rather than unusual

maybe if a single chiplet is powerful enough to run all the old games and the gpu has a single-chiplet-mode (kinda like amd64 has a 32bit mode i guess?) that'd fly, but i doubt that's even possible, much less feasible

Jeff Fatwood
Jun 17, 2013

Truga posted:

i really don't understand this genre of posts, i'm on an amd gpu right now and it works great, and on an amd cpu which is insanely loving good

That's cool and all, but I don't currently see a future where higher performance in the GPU market becomes any more affordable, probably the opposite. The mid-range price will keep creeping up towards $1000 and performance will stagnate. I'll be very happy to see AMD knock out more architectural whammies obviously, but lmao at expecting anything anymore with what we got with RX7000 competing with RTX4000

K8.0 posted:

Why? AMD is massively behind in the GPU space, but not to the extent that they were behind in the CPU space pre-Zen. They are maybe as far behind as they were with Zen2, and then goons were happy to pretend AMD CPUs were "just as good" even though they clearly weren't. The opposing double hyperbole is a bit weird to me.

Granted, catching up on a lazy Intel was easier than catching Nvidia at their best, so it would take more time, but it's not impossible.

I dunno what to tell you, it's been years. Did I miss the news where Nvidia is expected to slow down and AMD is going to do a slamdunk or something?

Jeff Fatwood fucked around with this message at 15:44 on Sep 4, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Jeff Fatwood posted:


I dunno what to tell you, it's been years. Did I miss the news where Nvidia is expected to slow down and AMD is going to do a slamdunk or something?

If a company thinks it can cut corners or save money on R&D without losing market share, it will.
