Shipon
Nov 7, 2005

Truga posted:

8k is at a high enough pixel density that, at normal sitting distances, your eyes can't distinguish individual pixels any longer. i'm super hyped for it to be affordable

Didn't they say this about 4K?


sean10mm
Jun 29, 2005

It's a Mad, Mad, Mad, MAD-2R World

Shipon posted:

Didn't they say this about 1080p?

eggyolk
Nov 8, 2007


Is there an equivalent DLSS feature in photo editing where you can do some "Enhance... enhance... enhance..." poo poo like they do in movies when they catch the bad guy's face in the reflection of something at the back of a grainy security camera feed?

shrike82
Jun 11, 2005

No - contemporary AI techniques that do reconstruction are trained and focused on filling in plausible details, not the original details

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Sort of, but you get random things the algo thinks could be there, not what was really there. It works for games because they can train the network on just the game. In a non-game situation, the training has to be more general.

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

repiv posted:



The image on the right is upscaled from 1440p :holymoley:

Where is it getting that detail from!?

movax
Aug 30, 2008

Rinkles posted:

Where is it getting that detail from!?

CAPTCHAs are getting weird

Nfcknblvbl
Jul 15, 2002

eggyolk posted:

Is there an equivalent DLSS feature in photo editing where you can do some "Enhance... enhance... enhance..." poo poo like they do in movies when they catch the bad guy's face in the reflection of something at the back of a grainy security camera feed?

If you have a computer running Docker, you could try this out https://github.com/alexjc/neural-enhance

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Rinkles posted:

Where is it getting that detail from!?

They train the neural network on the low res output and high res output. This takes a long time, but when it's done it can approximate the high res version given the low res data
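
(To make the training-pairs idea concrete, here's a minimal PyTorch sketch of that kind of supervised setup — a toy model and random tensors standing in for real matched low-res/high-res frames, not NVIDIA's actual pipeline.)

code:
# Minimal sketch of supervised super-resolution training -- a toy stand-in,
# not NVIDIA's pipeline. Random tensors play the role of matched frame pairs.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a 2x larger image
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    low = torch.rand(8, 3, 64, 64)     # stand-in for low-res renders
    high = torch.rand(8, 3, 128, 128)  # stand-in for the matching high-res renders
    loss = nn.functional.l1_loss(model(low), high)
    opt.zero_grad()
    loss.backward()
    opt.step()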

AirRaid
Dec 21, 2004

Nose Manual + Super Sonic Spin Attack

Rinkles posted:

Where is it getting that detail from!?

It infers detail from super high res training images and fills in where it thinks is appropriate. That's the "Deep Learning" part of "DLSS".

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

taqueso posted:

They train the neural network on the low res output and high res output. This takes a long time, but when it's done it can approximate the high res version given the low res data

I thought DLSS 2 wasn't game specific?

repiv
Aug 13, 2009

shrike82 posted:

No - contemporary AI techniques that do reconstruction are trained and focused on filling in plausible details, not the original details

yeah, single image upscaling is just guesswork based on trends inferred from training data

which leads to hilarious results like this

https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-added-ryan-goslings-face-to-this-photo/

Rinkles posted:

Where is it getting that detail from!?

previous frames, it accumulates detail over time

mobby_6kl
Aug 9, 2009

by Fluffdaddy
:argh: beaten on all counts

taqueso posted:

Sort of, but you get random things the algo thinks could be there, not what was really there. It works for games because they can train the network on just the game. In a non-game situation, the training has to be more general.
I thought the original implementation was game specific, but isn't DLSS 2 supposed to be generic now?


Rinkles posted:

Where is it getting that detail from!?
The previous frames, as I understand.


In any case, has anyone seen what it takes to actually get it working with your engine? I have a toy renderer I've used in a few projects and although it doesn't need DLSS, I'm kind of curious what it'd require. Unfortunately the nvidia page says

"For information on integrating DLSS into a new engine, contact your NVIDIA developer relations contact."

which is uh...

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

repiv posted:

previous frames, it accumulates detail over time

that makes sense. I mean, insofar as I can glean anything about this black magic technology

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

mobby_6kl posted:

:argh: beaten on all counts

I thought the original implementation was game specific, but isn't DLSS 2 supposed to be generic now?

The previous frames, as I understand.

Wow, that's incredible and almost unbelievable, except for the part where it works.

AirRaid
Dec 21, 2004

Nose Manual + Super Sonic Spin Attack

mobby_6kl posted:

In any case, has anyone seen what it takes to actually get it working with your engine? I have a toy renderer I've used in a few projects and although it doesn't need DLSS, I'm kind of curious what it'd require. Unfortunately the nvidia page says

"For information on integrating DLSS into a new engine, contact your NVIDIA developer relations contact."

which is uh...

https://developer.nvidia.com/contact

Start there I guess?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

PC LOAD LETTER posted:

That thing often ran hotter than that (70C+ was typical) and was widely regarded as laughable at the time. edit: looking at anandtech's review of the thing, it was forced to underclock itself by over 10% in stuff like furmark.

Dude, an IHS doesn't limit you that much. People used to run direct-die water coolers all the time and ran into heat problems with CPUs at much lower heat loads.

If you're willing to run the fans at much higher RPMs you can get away with smaller radiators, but most people aren't interested in that at all.

I'm curious as to how much AIO/liquid cooling you've actually done? Because you're just...wrong. Also, Furmark? :getout:

The IHS on CPUs is a substantial limitation. Additionally, CPU physical architecture is not laid out in a way that optimizes cooling potential--they have a couple of very small, very hot spots (cores, mostly), and then big chunks of space that aren't putting out much heat (caches, units that aren't currently being used like AVX-512 junk, the entire iGPU slice, which is like 50% of the size of a lot of chips now, etc.). This ends up meaning that getting heat out of a CPU is a considerable challenge, and is part of the reason they have an IHS to begin with. The other part being so you don't crush it with your ham-hands when installing the cooler, of course.

GPUs, on the other hand, tend to have lower heat point-density, and run bare-die because you're not supposed to be loving with the cooler in the first place. The end result being it's substantially easier to get heat out of a GPU die than a CPU die.

The 295x2 did indeed run at 70C+, but you know what else does? Pretty much every 2080 and 2080Ti. 70C was also 20C lower than the 290X it was based on, and at substantially lower noise levels, so, uh. Yeah. Even a single 120mm AIO from 2014 can cool ~500W without much issue.

The only real knock it took there was it didn't seem to have a good enough no-load down-throttle system, so its idle noise floor was a bit above a comparable open-air cooler. To be entirely fair, some current AIOs still struggle with that, or have obnoxious pump noise. Others are basically silent.

You're also wrong going back to boost clocks: in most situations NVidia cards boost up to a max point and then pretty much stay there, +/- maybe 20MHz, unless they run into some other throttling issue first (heat, usually). No one gives a gently caress about base "guarantee" clocks of 1800MHz or whatever if in practical use they'll be seeing 2000-2050MHz or similar. You know, just like with Pascal and Turing.

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Ugly In The Morning posted:

You could probably achieve similar results at greater efficiency by having the radiator submerged in water being resupplied at some slow rate to keep the temp low enough.

You'd get an insane water bill and destroy the environment, but you could install a waterblock on a GPU and then just run a low stream from the nearest faucet set to cold.

If you want to be efficient about it, someone once built a geothermal loop to cool their GPU: https://www.overclock.net/forum/134-cooling-experiments/671177-12-feet-under-1000-square-feet-geothermal-pc-cooling.html

Fuzzy Mammal
Aug 15, 2001

Lipstick Apathy

From this presentation: http://behindthepixels.io/assets/files/DLSS2.0.pdf we get this page with this handy FAQ:

Q: How do I integrate DLSS into my engine?
A: For information on integrating DLSS into a new engine, contact your NVIDIA developer relations contact.

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Nfcknblvbl posted:

My mini ITX Thermaltake Core V1 pc owns. Too bad I'm gonna have to chuck it when I upgrade the GPU.

You can do the same thing that my Skyreach case can do. Pop the front of it off and hotrod the GPU out of the frame. Just have to keep the GPU to 2 slots like some of the 3080's are.

repiv
Aug 13, 2009

mobby_6kl posted:

I thought the original implementation was game specific, but isn't DLSS 2 supposed to be generic now?

yeah, DLSS1 attempted to adapt the traditional AI upscaling methods that imagine extra details out of thin air, and required training on each game so it could learn which patterns to infer for its art style. it also didn't work very well.

DLSS2 scrapped that and works more like a conventional temporal upscaler, integrating real detail over time rather than guessing, but uses an ML model to predict which data from the previous frames is still useful and what needs to be rejected to avoid artifacts

the new method doesn't really depend on the game's visual style, so it doesn't need training per game anymore
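
(If it helps to picture the "conventional temporal upscaler" half of that, here's a rough Python/numpy sketch of plain temporal accumulation under my own simplifications: reproject last frame's history with the motion vectors, then blend the new sample in with a fixed alpha. DLSS2's twist, per the above, is that an ML model decides that blending/rejection instead.)

code:
# Rough sketch of plain temporal accumulation (the non-ML baseline), greyscale,
# nearest-neighbour reprojection, fixed blend factor -- toy assumptions throughout.
import numpy as np

def reproject(history, motion):
    """Fetch each history pixel from where the motion vectors say it came from."""
    h, w = history.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - motion[..., 1].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - motion[..., 0].round().astype(int), 0, w - 1)
    return history[src_y, src_x]

def accumulate(current, history, motion, alpha=0.1):
    """Blend this frame's raw samples into the reprojected history buffer."""
    return alpha * current + (1.0 - alpha) * reproject(history, motion)

history = np.zeros((270, 480))              # accumulated detail buffer
for _ in range(8):                          # detail builds up over successive frames
    current = np.random.rand(270, 480)      # stand-in for one frame's jittered samples
    motion = np.zeros((270, 480, 2))        # stand-in motion vectors (static scene)
    history = accumulate(current, history, motion)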

mobby_6kl posted:

In any case, has anyone seen what it takes to actually get it working with your engine? I have a toy renderer I've used in a few projects and although it doesn't need DLSS, I'm kind of curious what it'd require. Unfortunately the nvidia page says

"For information on integrating DLSS into a new engine, contact your NVIDIA developer relations contact."

which is uh...

https://www.youtube.com/watch?v=d5knHzv0IQE

this presentation gives an overview of how engine integration works, but as for actually getting the SDK yourself you probably can't

most nvidia libraries are easy to get by clicking through a few EULAs but they are holding the DLSS SDK close to their chest for some reason

repiv fucked around with this message at 18:46 on Sep 9, 2020

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I really just want DLSS2 everywhere to upsample to 4K DSR, to get super-awesome antialiasing on the cheap.

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?
can you observe the extra detail being painted in when you look at a totally new area, or is it too fast to notice?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Rinkles posted:

can you observe the extra detail being painted in when you look at a totally new area, or is it too fast to notice?

It's part of the frame rendering process, so it all occurs within the generation of a single frame. It's not like, say, lovely game engines doing asset pop-in where frame 1 has the low res texture, frame 2 the medium res, and then finally by frame 5 or whatever it's gotten the high res texture off the disk and shoved it in there, so you can watch as it progressively improves the image over a quarter second's worth of frames. DLSS works "instantly" in that sense.

e; vvvvv is technically correct, but a complete hard cut to a totally static view isn't a very common occurrence in most games, so that sort of thing is more an edge case than something you'd expect to see regularly.

DrDork fucked around with this message at 18:59 on Sep 9, 2020

repiv
Aug 13, 2009

Rinkles posted:

can you observe the extra detail being painted in when you look at a totally new area, or is it too fast to notice?

in theory you can but it's usually quite hard to notice

TAA has a similar phenomenon where it outputs a raw aliased image immediately after a camera cut, but it only takes a few frames to clean up

it's more noticeable at lower framerates of course

shrike82
Jun 11, 2005

The Shield’s still using the more traditional AI upsampling technique for video. It’s an interesting problem because the universe of possible imagery is far greater there.

Fuzzy Mammal
Aug 15, 2001

Lipstick Apathy

repiv posted:

yeah, DLSS1 attempted to adapt the traditional AI upscaling methods that imagine extra details out of thin air, and required training on each game so it could learn which patterns to infer for its art style. it also didn't work very well.

DLSS2 scrapped that and works more like a conventional temporal upscaler, integrating real detail over time rather than guessing, but uses an ML model to predict which data from the previous frames is still useful and what needs to be rejected to avoid artifacts

the new method doesn't really depend on the game's visual style, so it doesn't need training per game anymore


https://www.youtube.com/watch?v=d5knHzv0IQE

this presentation gives an overview of how engine integration works, but as for actually getting the SDK yourself you probably can't

most nvidia libraries are easy to get by clicking through a few EULAs but they are holding the DLSS SDK close to their chest for some reason

@11:56 "Twenty Eighty Tie" :negative:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I've just been messing with a deep learning based vocal remover. Using the GPU over the CPU, the speed :psyduck:

Now I want the 3090 because of the VRAM.

LRADIKAL
Jun 10, 2001

Fun Shoe
DLSS requires the dev to apply or label motion vectors to objects or something like that... There's more info in the Digital Foundry Death Stranding video. I think I pasted a link into this thread earlier.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I haven't looked too hard at how you actually develop against DLSS 1 vs. 2, but I'm trying to figure out why it's cool to use machine learning for gaming despite its limited use cases for photorealistic image sets in terms of extrapolation (see: the limits of Adobe content-aware image culling / extrapolation, which isn't even ML). It's perfectly reasonable to use intermediate levels of training data to extrapolate different scenes and textures if there are 4K or 8K+ textures available as one's verification set. So you train the model to upscale from, say, 1080p textures to 4K textures using a subset of all the 4K textures (or perhaps rendered scenes) as the training data, and to verify how good the model is you'd compare it against a verification set that's a distinct subset. This is inherently different from most machine learning datasets, where the total population is effectively infinite, while in a game situation we have finite data and a near-ideal training data set. As a result, a sufficiently detailed model can be literally pixel-perfect accurate, which is much harder with the massive space of real-world images.

From what it sounds like, based upon the bits I've read, DLSS 2 is an LSTM network while DLSS 1 was a CNN.
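
(A quick Python sketch of the train/verification split being described, assuming you already have a pool of matched low/high-res captures; the frame names and the PSNR scoring are purely illustrative.)

code:
# Illustrative train/verification split for an upscaler dataset.
# The frame identifiers are hypothetical placeholders.
import random
import numpy as np

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio, a common score for comparing against ground truth."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

pairs = [f"frame_{i:05d}" for i in range(10_000)]  # matched (low-res, high-res) captures
random.seed(0)
random.shuffle(pairs)

split = int(0.9 * len(pairs))
train_set, verification_set = pairs[:split], pairs[split:]  # distinct subsets, as described

# The model only ever trains on train_set; afterwards you'd average
# psnr(model(low), high) over verification_set to see how well it generalizes.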

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

necrobobsledder posted:

I haven't looked too hard at how you actually develop against DLSS 1 vs. 2, but I'm trying to figure out why it's cool to use machine learning for gaming despite its limited use cases for photorealistic image sets in terms of extrapolation (see: the limits of Adobe content-aware image culling / extrapolation, which isn't even ML). It's perfectly reasonable to use intermediate levels of training data to extrapolate different scenes and textures if there are 4K or 8K+ textures available as one's verification set. So you train the model to upscale from, say, 1080p textures to 4K textures using a subset of all the 4K textures (or perhaps rendered scenes) as the training data, and to verify how good the model is you'd compare it against a verification set that's a distinct subset. This is inherently different from most machine learning datasets, where the total population is effectively infinite, while in a game situation we have finite data and a near-ideal training data set. As a result, a sufficiently detailed model can be literally pixel-perfect accurate, which is much harder with the massive space of real-world images.

From what it sounds like, based upon the bits I've read, DLSS 2 is an LSTM network while DLSS 1 was a CNN.

DLSS 2.0 isn't deep-dreaming images directly, it's using deep learning to decide how to weight the temporal samples that it feeds to TAA/a traditional upscaler.

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

So, based on my reading, NVidia used their HPC solutions to train a neural net on how to upscale game content in a general sense and packaged it along with their drivers. This neural net is then used to upscale frames, and the upscaled frames are fed back into the temporal antialiasing algorithm along with the relevant engine data to antialias every subsequent frame. Yes?

repiv
Aug 13, 2009

DrDork posted:

e; vvvvv is technically correct, but a complete hard cut to a totally static view isn't a very common occurrence in most games, so that sort of thing is more an edge case than something you'd expect to see regularly.

camera cuts are the worst case scenario but it happens during continuous motion too



if you have an object moving across the frame, like the circle here, then it leaves a "trail" of invalid history data behind it as whatever was behind it only just became visible

the first few pixels in its wake have no history data and are effectively not upsampled, then the history data progressively builds up and increases in quality

in practice it's rarely noticeable though because the history re-accumulates before you really have a chance to scrutinize it, especially at 60fps or higher
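
(One toy way to picture that trail, in Python: track a per-pixel count of valid history samples and reset it wherever a pixel was just disoccluded. The moving-strip mask below is made up purely for illustration.)

code:
# Toy sketch of the history "trail" behind a moving object: freshly disoccluded
# pixels restart accumulation from a single sample. The mask here is invented.
import numpy as np

h, w = 90, 160
sample_count = np.full((h, w), 16.0)   # pixels with 16 frames of accumulated history

for frame in range(5):
    disoccluded = np.zeros((h, w), dtype=bool)
    # pretend a 10px-wide strip was just uncovered by a circle moving right
    disoccluded[:, 40 + frame * 10 : 50 + frame * 10] = True

    sample_count[disoccluded] = 1.0    # no usable history: effectively raw pixels
    sample_count[~disoccluded] += 1.0  # everywhere else keeps accumulating

# The most recently uncovered pixels have the lowest counts -- that's the visible
# wake of lower-quality data that fills back in over the next few frames.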

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Some Goon posted:

So, based on my reading, NVidia used their HPC solutions to train a neural net on how to upscale game content in a general sense and packaged it along with their drivers. This neural net is then used to upscale frames, and the upscaled frames are fed back into the temporal antialiasing algorithm along with the relevant engine data to antialias every subsequent frame. Yes?

no, the neural net is not upscaling content at all. just read the presentation, look at the "spatial-temporal supersampling" part

http://behindthepixels.io/assets/files/DLSS2.0.pdf

it is using a neural net working on the motion vectors to choose weighting for the samples that it feeds into a TAA/temporal upscaling algorithm. The neural net doesn't directly upscale anything; it just picks weights for the samples.
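
(In code terms, the distinction is roughly the network emitting a per-pixel alpha rather than emitting pixels. A hedged PyTorch sketch of the weight-picking version follows — the inputs and layer sizes are my guesses, not NVIDIA's actual network.)

code:
# Sketch of "the network picks the blend weights, not the pixels."
# Architecture and inputs are guesses for illustration only.
import torch
import torch.nn as nn

class BlendWeightNet(nn.Module):
    """Current color + reprojected history + motion vectors in, per-pixel alpha out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + 2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # alpha in [0, 1]
        )

    def forward(self, current, history, motion):
        return self.net(torch.cat([current, history, motion], dim=1))

net = BlendWeightNet()
current = torch.rand(1, 3, 270, 480)  # this frame's raw (low-sample) color
history = torch.rand(1, 3, 270, 480)  # reprojected accumulation buffer
motion = torch.rand(1, 2, 270, 480)   # engine-supplied motion vectors

alpha = net(current, history, motion)
output = alpha * current + (1.0 - alpha) * history  # the blend does the "upscaling"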

Paul MaudDib fucked around with this message at 19:12 on Sep 9, 2020

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
As far as we can tell and from everything Nvidia has revealed, DLSS doesn't actually use ML to generate samples. It uses ML to determine what samples from previous frames to re-use, and how. That's how DLSS can produce better-than-native results - it's effectively supersampling across multiple frames, as long as it doesn't use bad data or throw away too much good data. There is clearly a little special sauce of some sort in there, because camera cuts aren't hard-aliased the way they are with TAA, even though they should look awful given that the single-frame resolution is below native. I don't know that anyone outside Nvidia has a good idea how that works.

TacticalHoodie
May 7, 2007

What is the most RGB video card out there? I want to get it for my girlfriend, who just loves flashing lights in her ten-year-old mATX case.

LRADIKAL
Jun 10, 2001

Fun Shoe

repiv posted:

camera cuts are the worst case scenario but it happens during continuous motion too



if you have an object moving across the frame, like the circle here, then it leaves a "trail" of invalid history data behind it as whatever was behind it only just became visible

the first few pixels in its wake have no history data and are effectively not upsampled, then the history data progressively builds up and increases in quality

in practice it's rarely noticeable though because the history re-accumulates before you really have a chance to scrutinize it, especially at 60fps or higher

This exact bug is in Death Stranding running DLSS.

Stickman
Feb 1, 2004

Zero VGS posted:

You can do the same thing that my Skyreach case can do. Pop the front of it off and hotrod the GPU out of the frame. Just have to keep the GPU to 2 slots like some of the 3080's are.

The Founders Edition should just fit in a Core V1. Now we just have to wait and see how poo poo the cooler is :v:

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Whiskey A Go Go! posted:

What is the most RGB video card out there? I want to get it for my girlfriend, who just loves flashing lights in her ten-year-old mATX case.

Consider the following:

GIGABYTE AORUS GeForce RTX 3080 Master & Xtreme
Inno3D RTX 3080 iChill X3 & X4
Palit GeForce RTX 3080 Gaming PRO & Gaming PRO OC
PNY GeForce RTX 3080 XLR8 Gaming Epic-X RGB
ZOTAC RTX 3080 Trinity

Which one is the "most" RGB is a matter of personal choice, but all of the above are quite blingy.


shrike82
Jun 11, 2005

Paul MaudDib posted:

no, the neural net is not upscaling content at all. just read the presentation, look at the "spatial-temporal supersampling" part

http://behindthepixels.io/assets/files/DLSS2.0.pdf

it is using a neural net working on the motion vectors to choose weighting for the samples that it feeds into a TAA/temporal upscaling algorithm. The neural net doesn't directly upscale anything; it just picks weights for the samples.

Wrong, they’re using motion vectors as inputs to the NN, but it’s still a CNN autoencoder that spits out images that can be compared against the higher-res ground truth. The spatio-temporal section you’re referring to is actually discussing an existing technique, which is based on heuristics.
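
(For contrast with the weight-picking sketch a few posts up, the reading shrike82 describes would look more like this PyTorch toy: an encoder/decoder that emits a full image and is scored directly against the high-res ground truth. Channel counts, inputs, and the loss are all my assumptions.)

code:
# Sketch of the "CNN autoencoder that spits out images" reading -- toy assumptions,
# not the real DLSS network. Inputs: upsampled color (3) + history (3) + motion (2).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(8, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  # emits an image, not weights
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
inputs = torch.rand(1, 8, 256, 256)        # stand-in for the stacked network inputs
ground_truth = torch.rand(1, 3, 256, 256)  # stand-in for the high-res reference render

loss = nn.functional.l1_loss(model(inputs), ground_truth)  # compared against ground truth
loss.backward()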
