Ugly In The Morning
Jul 1, 2010
Pillbug
I do want to see a graphics card on 4.0 play the HZD port, but that’s mostly because it’s an insane train wreck that floods PCIe 3.0 a lot of the time. I’m curious if it would flood a 4.0 connection just by grabbing all the bandwidth available for shits and giggles.

movax
Aug 30, 2008

There are some mild comedy-option GPU configs that come to mind: get a 96-lane PCIe 4.0 switch and hook up as many GPUs as possible, if you have like $100K to burn in engineering time + BOM cost.

x16 4.0 link into... 4 x4 PCIe 4.0 GPUs? 8 x4 PCIe 3.0 GPUs? All sorts of hilarious combinations.

repiv
Aug 13, 2009

Ugly In The Morning posted:

I do want to see a graphics card on 4.0 play the HZD port, but that’s mostly because it’s an insane train wreck that floods PCIe 3.0 a lot of the time. I’m curious if it would flood a 4.0 connection just by grabbing all the bandwidth available for shits and giggles.

reviewers saw a noticeable improvement going from 3.0x8 to 3.0x16, but going from 3.0x16 to 4.0x16 on the 5700xt barely moves the needle

it may be that the 5700xt isn't fast enough in general and bottlenecks elsewhere before saturating 3.0x16 though, we'll see when people test the faster ampere cards with 4.0

FuturePastNow
May 19, 2014


ufarn posted:

What are the implications of GPUs optimized for PCIe 4.0?

https://twitter.com/VideoCardz/status/1298303238991753223

A really low-end GPU could probably run on just one PCIe 4.0 lane without a bottleneck. That could be a measurable power saving for, again, something really low-end.

That's the only implication I can think of for something named MX 450
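
For a rough sense of the lane math behind that (a sketch using the published per-generation line rates and encodings, before any protocol overhead):

code:

# Rough per-direction bandwidth of a PCIe link, ignoring TLP/DLLP overhead.
# Line rates (GT/s) and encoding ratios are the published per-generation values.
RATES = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}

def pcie_gb_per_s(gen, lanes):
    gt, encoding = RATES[gen]
    return gt * encoding * lanes / 8  # GB/s per direction

print(pcie_gb_per_s(4, 1))  # ~1.97 GB/s from a single 4.0 lane
print(pcie_gb_per_s(3, 2))  # ~1.97 GB/s, i.e. the same as a 3.0 x2 link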

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."
Apparently not all DLSS 2.0 implementations are created equal, Guru3D has a comparison between F1 2020's TAA and DLSS, and uh...DLSS looks kinda poo poo in this game? First case of DLSS 2.0 I've seen where TAA looks noticeably sharper:

TAA: [screenshot]

DLSS: [screenshot]

repiv
Aug 13, 2009

It looks different but it seems like DLSS is resolving the same details, feels like the TAA is just more aggressively sharpened?

Hard to say from one screenshot, hopefully DigitalFoundry will pick it apart

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know
I've been pulling that direct comparison script from left to right for like 2 minutes, and calling it poo poo is... ridiculously hyperbolic. If anything it confirms that DLSS is rad and the performance gains are super worth it, even in a worst-case scenario.

repiv posted:

It looks different but it seems like DLSS is resolving the same details, feels like the TAA is just more aggressively sharpened?

Hard to say from one screenshot, hopefully DigitalFoundry will pick it apart

Yeah exactly. I don't think it's even worse, just sharpened less?

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride
I can't tell the difference in these screenshots.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
That's interesting. It may just be the sharpening filter and what you expect a game to look like. I bet that yet again this is a case where the DLSS image conforms far better to a 32x supersampled image. The road perhaps looks a bit better in the TAA shot, but again that may be the sharpening filter and expectations. Everything else seems clearly better or identical in the DLSS shot. Hard to say without seeing it in motion, but the DLSS shot shows far fewer signs of the typical shimmering-type aliasing where specular highlights and the like blow out on one frame and are gone the next.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
Is PCIe 4.0 x8 more or less efficient than PCIe 3.0 x16?

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

repiv posted:

It looks different but it seems like DLSS is resolving the same details, feels like the TAA is just more aggressively sharpened?

Hard to say from one screenshot, hopefully DigitalFoundry will pick it apart

Nah, I don't really see sharpening artifacts. Look at the crowd and the top of the video display on the right: DLSS has noticeably more stair-stepping, and the railing on the sign in the distance is significantly thicker. It just looks like lower res, and the performance boost isn't massive like in other games either - based on these shots, this implementation looks barely improved, if at all, over just dropping the res a tad. Maybe Guru3D screwed it up?

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

Taima posted:

I've been pulling that direct comparison script from left to right for like 2 minutes, and calling it poo poo is... ridiculously hyperbolic. If anything it confirms that DLSS is rad and the performance gains are super worth it, even in a worst-case scenario.

If the 'worst case scenario' is that it looks no different from just dropping the res a bit and letting the game scale up normally, then it's kind of pointless in this case? DLSS 2.0 is great because on average it looks better than native while giving a huge performance boost. Based on these screenshots/benches with this game, it does neither.

Sure, it's not a drastic quality loss, but then again neither is 1800p vs 4K. This isn't doubling your performance like it does in Death Stranding/Control; the % improvement is smaller, so it's far more relevant to compare it against just scaling up from a lower res normally. That's not the case with the other DLSS 2.0 games we've seen, where it's clearly superior to native and you're almost doubling your performance when it's invoked.

repiv
Aug 13, 2009

Oh yeah, I see it, DLSS is a bit lacking in those areas. I'm more interested in how they compare when actually racing though; I get that it's easier to make a comparison on a static shot, but it's not representative of how a game like that is actually played.

DF are the only ones that really dig into temporal stability and motion artifacts

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

repiv posted:

Oh yeah, I see it, DLSS is a bit lacking in those areas. I'm more interested in how they compare when actually racing though; I get that it's easier to make a comparison on a static shot, but it's not representative of how a game like that is actually played.

DF are the only ones that really dig into temporal stability and motion artifacts

Yeah, that's true. This screenshot could have been taken just sitting idle on the track; the loss of detail in motion is TAA's weak point, which DLSS rectifies.

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know
I understand your point now. I would just want to see it in action.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Yeah the car is sitting at 0km/h. DLSS is clearly blurrier in these stills, just open the two images in two tabs and switch between them. I'm sure it wouldn't be noticeable when actually driving though, unless the DLSS shits itself in motion somehow and produces worse results.

repiv
Aug 13, 2009

I tried applying FidelityFX CAS to the DLSS screenshot to see how much of the difference is down to sharpening, but Guru3D uploaded the images as JPGs so it amplifies the compression artifacts and looks like poo poo :argh:

edit: oh they did post PNGs, the embedded versions from imgur were converted to jpeg. brb sharpening

repiv fucked around with this message at 21:08 on Aug 25, 2020

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Malcolm XML posted:

Is PCIe 4.0 x8 more or less efficient than PCIe 3.0 x16?

More efficient in that you can get the same bandwidth using half the lanes. Generally that's not very important, but could be useful if we keep getting desktop chipsets with constrained numbers of PCIe links and want to start doing stuff like throwing multiple NVMe x4 slots in there, 10Gb NICs, etc.
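
A quick sanity check of the "same bandwidth, half the lanes" point (ignoring protocol overhead, which hits both generations roughly equally):

code:

# PCIe 3.0 x16 and PCIe 4.0 x8 land at the same per-direction throughput
def pcie_gb_per_s(gt_per_s, lanes, encoding=128 / 130):
    return gt_per_s * encoding * lanes / 8  # GB/s per direction

print(pcie_gb_per_s(8.0, 16))   # PCIe 3.0 x16 -> ~15.75 GB/s
print(pcie_gb_per_s(16.0, 8))   # PCIe 4.0 x8  -> ~15.75 GB/s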

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

DrDork posted:

More efficient in that you can get the same bandwidth using half the lanes. Generally that's not very important, but could be useful if we keep getting desktop chipsets with constrained numbers of PCIe links and want to start doing stuff like throwing multiple NVMe x4 slots in there, 10Gb NICs, etc.

I guess I meant more along the lines of is 8 lanes of pcie 4.0 less power than 16 lanes of pcie 3.0? Or is it just board space savings (eaten up in the need for higher quality pcb material)

repiv
Aug 13, 2009

repiv posted:

I tried applying FidelityFX CAS to the DLSS screenshot to see how much of the difference is down to sharpening, but Guru3D uploaded the images as JPGs so it amplifies the compression artifacts and looks like poo poo :argh:

edit: oh they did post PNGs, the embedded versions from imgur were converted to jpeg. brb sharpening

TAA: https://img.guru3d.com/compare/f1-2020/taa.png
DLSS: https://img.guru3d.com/compare/f1-2020/dlss.png
DLSS+CAS: https://files.catbox.moe/qk6qm3.png

Things like the track texture look more consistent with TAA after CAS is applied, so I think it is partially down to different amounts of sharpening. There are still some flaws though, as Happy Misanthrope pointed out.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

mobby_6kl posted:

Yeah the car is sitting at 0km/h. DLSS is clearly blurrier in these stills, just open the two images in two tabs and switch between them. I'm sure it wouldn't be noticeable when actually driving though, unless the DLSS shits itself in motion somehow and produces worse results.

Given how DLSS works - basically, by integrating details across several frames, but in a smarter way than TAA - it's likely not to look as good in motion as it does at rest, or compared to native rendering, but it will probably still look better than TAA.

Go back and check out those super-low-res Control videos to get a more obvious sense of how this works. If the camera stays on a given low-motion subject for even a few frames, DLSS is using every frame it can see the subject to shift the rendering and find more detail. It can get fine texture detail quickly even from a handful of low-res source images, because it can basically say "I'm missing a bit of information, please shift the image a quarter pixel to the left and down a touch." That's how it can do "better than native" results. But, when the camera swings around quickly, or there's something like a fire, explosion, or crazy sci-fi effect happening on screen with a lot of rapidly-changing detail, DLSS just doesn't have enough information to do much more than a good single-frame upscale.
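
A toy sketch of that jittered-accumulation idea for a static scene (just the general principle behind temporal upscalers, not NVIDIA's actual network or reprojection logic):

code:

# Several low-res frames rendered with known sub-pixel jitter, accumulated into
# a high-res buffer, recover detail that no single low-res frame contains.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))   # stand-in for the "true" high-res scene
scale = 2                      # each frame is rendered at half resolution

acc = np.zeros_like(scene)
hits = np.zeros_like(scene)

for dy in range(scale):        # one frame per sub-pixel jitter offset
    for dx in range(scale):
        low_res_frame = scene[dy::scale, dx::scale]  # "rendered" low-res frame
        acc[dy::scale, dx::scale] += low_res_frame   # reproject using the known jitter
        hits[dy::scale, dx::scale] += 1

recon = acc / hits
print(np.abs(recon - scene).max())  # ~0: the static scene is fully recovered

Once the camera moves, those old samples stop lining up and have to be rejected or blended away, which is exactly the fast-motion case described above.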

The good news is that, when there's a lot of rapid motion happening, your eyes and brain also aren't great at noticing and picking out fine detail. So, the overall effect stays pretty good as long as you're not trying to upscale a sub-DVD-resolution source image to 4K.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Malcolm XML posted:

I guess I meant more along the lines of is 8 lanes of pcie 4.0 less power than 16 lanes of pcie 3.0? Or is it just board space savings (eaten up in the need for higher quality pcb material)

Nope, it's a straight doubling for all practical purposes. A 1x PCIe 3.0 lane runs at 8GT/s (or simply 1GB/s for anyone not interested in encoding techniques). A 1x PCIe 4.0 lane runs at 16GT/s.

The biggest uses we'll see for it in the near term are to enable faster SSDs, and by allowing more devices to be attached at reasonable speeds--so like instead of dedicating a 4x 3.0 lane chunk to a NVMe drive, a board could opt to split that into two 2x 4.0 links to support two drives at the same speeds as the old 4x link. You'd still need a SSD that talks 4.0, though, to make that work.
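
Back-of-envelope math for that split (again, ignoring protocol overhead):

code:

# One legacy 3.0 x4 NVMe slot vs. two 4.0 x2 links carved from the same four lanes
def pcie_gb_per_s(gt_per_s, lanes, encoding=128 / 130):
    return gt_per_s * encoding * lanes / 8  # GB/s per direction

print(pcie_gb_per_s(8.0, 4))    # 3.0 x4 drive:      ~3.94 GB/s
print(pcie_gb_per_s(16.0, 2))   # each 4.0 x2 drive: ~3.94 GB/s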

I doubt we'll see much in the way of space-savings for desktop boards, because for the near future they'll want to retain 3.0 compatibility. But for laptops and the like where you're not worried about replaceable parts, it might enable some savings there, yeah.

Ugly In The Morning
Jul 1, 2010
Pillbug

DrDork posted:

Nope, it's a straight doubling for all practical purposes. A 1x PCIe 3.0 lane runs at 8GT/s (or simply 1GB/s for anyone not interested in encoding techniques). A 1x PCIe 4.0 lane runs at 16GT/s.

The biggest uses we'll see for it in the near term are to enable faster SSDs, and by allowing more devices to be attached at reasonable speeds--so like instead of dedicating a 4x 3.0 lane chunk to a NVMe drive, a board could opt to split that into two 2x 4.0 links to support two drives at the same speeds as the old 4x link. You'd still need a SSD that talks 4.0, though, to make that work.

Oh man, my next computer is going to have so many SSDs in it.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Ugly In The Morning posted:

Oh man, my next computer is going to have so many SSDs in it.

The crazy thing is that large (2TB+) SSDs are actually starting to become a reasonably priced option, substantially reducing most people's needs to have multiple drives in the first place. :shrug:

Ugly In The Morning
Jul 1, 2010
Pillbug

DrDork posted:

The crazy thing is that large (2TB+) SSDs are actually starting to become a reasonably priced option, substantially reducing most people's needs to have multiple drives in the first place. :shrug:

Oh, I know that; I was just eyeballing a nice 2TB NVMe drive for less than 300 bucks. I just like having an obscene amount of fast storage available.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

DrDork posted:

The crazy thing is that large (2TB+) SSDs are actually starting to become a reasonably priced option, substantially reducing most people's needs to have multiple drives in the first place. :shrug:

the Micron 1100 has been really affordable for a long time; it's the OEM version of the M550 and didn't sell well, so they were dumped very cheaply. I think I paid $200 for a 2TB drive about 2 years ago

pyrotek
May 21, 2004



Happy_Misanthrope posted:

Nah, I don't really see sharpening artifacts. Look at the crowd and the top of the video display on the right: DLSS has noticeably more stair-stepping, and the railing on the sign in the distance is significantly thicker. It just looks like lower res, and the performance boost isn't massive like in other games either - based on these shots, this implementation looks barely improved, if at all, over just dropping the res a tad. Maybe Guru3D screwed it up?

It looks to me like there is a thin black border around the... whatever it is on top of the video board. You can see it is there in both versions above the display on the left side. It is pretty hard to tell which one is more accurate without knowing how it "should" look.

Either way, it is close enough that I'd gladly take whichever version gave me 30+% more performance.

shrike82
Jun 11, 2005

https://twitter.com/leakbench/status/1298312087425486849?s=20

Disappointing if true

movax
Aug 30, 2008

Malcolm XML posted:

I guess I meant more along the lines of is 8 lanes of pcie 4.0 less power than 16 lanes of pcie 3.0? Or is it just board space savings (eaten up in the need for higher quality pcb material)

Fewer lanes make routing way easier, and if you wanted to do switching between slots and things, you'd need fewer devices to do it. Electrically, there are fewer physical transceivers running, so that should draw less power in terms of I/O. The encoding scheme is still 128b/130b, so on the silicon I think that portion would be the same (and that usually will drop along with node sizes). The CTLE and DFE (equalization) bits have changed, though, and I'd have to guess that the internal PIPE / similar interfaces in the IP have either doubled in clock rate or doubled in width to keep up with the increased bandwidth.

There are just a lot of variables that could change between the two when it comes to figuring out power: node size, link width, IP core properties (width of the internal parallel interface). Dynamic power scales with the square of voltage but only linearly with frequency, so the biggest gains would come from dropping the operating voltage of the device / IP core where possible.
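
The relation being leaned on there is the textbook CMOS dynamic-power approximation; real PHY power has static and analog terms on top of this, so treat it as a sketch:

code:

# CMOS dynamic switching power: P ~= alpha * C * V^2 * f  (illustrative only)
def dynamic_power(v, f, c=1.0, alpha=1.0):
    return alpha * c * v ** 2 * f

base = dynamic_power(v=1.0, f=1.0)
print(dynamic_power(v=1.0, f=2.0) / base)   # double the clock           -> ~2.0x power
print(dynamic_power(v=0.8, f=2.0) / base)   # double it but drop Vdd 20% -> ~1.28x power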

I just want to see some crazy flash density happen as a result of this: the narrower you make those links, the better the sweet spot you can find in sizing SSD controllers. Maybe PCIe 4.0 x1 or x2 controllers are the actual sweet spot for NVMe bandwidth, even though M.2 has basically made x4 the norm. I guess mobo makers play a physical layout game of how many M.2 slots they can cram onto a board.

fake edit: a single PCIe 4.0 lane comes really, really close to supporting a 10GbE controller at full theoretical bandwidth in both directions, and I feel it would practically do the job for most people.
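
Rough per-direction numbers for that comparison, before any TLP/DLLP or controller overhead (which is what eats into the margin in practice):

code:

# One PCIe 4.0 lane vs. a 10GbE controller, per direction
lane_gbit = 16.0 * (128 / 130)   # ~15.75 Gbit/s of post-encoding line rate
nic_gbit = 10.0                  # 10GbE payload rate
print(lane_gbit, nic_gbit)       # headroom on paper; real packet/protocol overhead narrows it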

movax fucked around with this message at 22:52 on Aug 25, 2020

sean10mm
Jun 29, 2005

It's a Mad, Mad, Mad, MAD-2R World

Coming from a 1070, the 3070 is probably going to amaze me regardless.

shrike82
Jun 11, 2005

The 3090 is a lot less interesting if it’s just a hot and big Ti replacement

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

shrike82 posted:

The 3090 is a lot less interesting if it’s just a hot and big Ti replacement

Thinking of it as a "Ti" replacement doesn't really make sense if you look at it like this: the "Ti" variants have normally been the mid-cycle re-spins where better yields let them bump performance at minimal extra cost over the normal iterations. If they're launching it day 1, obviously that can't be the case here, so we shouldn't think of it as a "Ti" replacement in that sense, but as a "Titan" replacement. "2080Ti > 3090" really sounds more like "hey, if you already have the top-of-the-stack 20-series card (lol, no one bought the RTX Titan, amirite?), the 3090 is going to be the top-of-the-stack 30-series card." But, yeah, if the pricing rumors of >$1400 are true, it'd better come with silly amounts of VRAM or something else to make it worth the price over the 3080.

What we've normally thought of as "Ti" will likely end up as "Supers" in about a year, and we can hope that they'll do about what they did for the 20-series: modest, if any, price bumps and good performance lifts.

Cactus
Jun 24, 2006

One week 'till knowledge. Actual, tangible knowledge. Then decisions can be made.

I can't believe after waiting so long to upgrade that there's a non-zero chance within a month or so I'll be unboxing a brand new GPU to replace my 970. I'm excited.

shrike82
Jun 11, 2005

I have an RTX Titan.

Even as a Titan, it’s not particularly interesting. The Turing Titan came with double the ram of anything else in the lineup. If there’s a 3080 20GB blower option, double stacking them for a 40GB NVlinked setup will be better for compute than a single massive 24GB 3090 (at least for my use case).

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

shrike82 posted:

I have an RTX Titan.

Even as a Titan, it’s not particularly interesting. The Turing Titan came with double the ram of anything else in the lineup. If there’s a 3080 20GB blower option, double stacking them for a 40GB NVlinked setup will be better for compute than a single massive 24GB 3090 (at least for my use case).

Considering the pricing rumors, and the possibility of it having VRAM dies on the back side of the card, it's possible that the 3090 will, just like the RTX Titan, have considerably more VRAM than the xx80.

Considering the 2080Ti only had 11GB VRAM, and next-gen consoles are capped at 16GB RAM total, I wouldn't see any reason for NVidia to slap 20GB on a 3080. That seems like an enormously expensive overkill option. I'd expect more like 12GB VRAM on the 3080, honestly. Maybe 14GB, tops.

shrike82
Jun 11, 2005

The rumours are they’re offering 10GB/20GB options for the 3080 and 24GB for the 3090

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know
I'm extremely confused about who Leakbench are. The leak scene is already pretty stupid with the normal, consistent actors in it; now we're supposed to trust some Twitter account with 126 followers?

e: who wants us to believe that the gains are linear across every model as well, riiiight

jisforjosh
Jun 6, 2006

"It's J is for...you know what? Fuck it, jizz it is"

This doesn't make much sense.

The 3080 is rumored to have the same number of CUDA cores as the 2080Ti with more bandwidth, right? Only to end up a few percentage points faster than a 2080Ti?

repiv
Aug 13, 2009

what in tarnation

https://twitter.com/hms1193/status/1298249107367006208/photo/1

shrike82
Jun 11, 2005

guessing mining rig

lol
https://www.tweaktown.com/news/74703/msi-will-have-29-different-geforce-rtx-3090-3080-3070-models/index.html

quote:

MSI is gearing up for a huge next few months selling graphics cards, with the company submitting 29 different SKUs to the EEC for NVIDIA's next-gen GeForce RTX 3090, RTX 3080, and RTX 3070 graphics cards.

MSI has 14 x V388 models, 11 x V389 models, and 4 x V390 models -- where we should see those split into the GeForce RTX 3090, RTX 3080, and RTX 3070 -- but we don't know which models are which. We could have 14 x custom GeForce RTX 3090 graphics cards, or only 4 x custom GeForce RTX 3090 cards depending on those model numbers.

shrike82 fucked around with this message at 23:43 on Aug 25, 2020
