|
Truga posted:8k is in the high enough pixel density range that at normal sitting distances your eyes can't distinguish individual pixels any longer. i'm super hyped for it to be affordable Didn't they say this about 4K?
|
# ? Sep 9, 2020 18:12 |
|
Shipon posted:Didn't they say this about 1080p?
|
# ? Sep 9, 2020 18:13 |
|
Is there an equivalent DLSS feature in photo editing where you can do some "Enhance... enhance.... enhance..." poo poo like they do in movies when they catch the bad guy's face in the reflection of something at the back of a grainy security camera feed?
|
# ? Sep 9, 2020 18:15 |
|
No - contemporary AI techniques that do reconstruction are trained and focused on filling in plausible details, not the original details
|
# ? Sep 9, 2020 18:16 |
|
Sort of, but you get random things the algo thinks could be there, not what was really there. It works for games because they can train the network on just the game. In a non-game situation, the training has to be more general.
|
# ? Sep 9, 2020 18:17 |
|
repiv posted:
Where is it getting that detail from!?
|
# ? Sep 9, 2020 18:17 |
|
Rinkles posted:Where is it getting that detail from!? CAPTCHAs are getting weird
|
# ? Sep 9, 2020 18:18 |
|
eggyolk posted:Is there an equivalent DLSS feature in photo editing where you can do some "Enhance... enhance.... enhance..." poo poo like they do in movies when they catch the bad guys face in the reflection of something at the back of a grainy security camera feed? If you have a computer running Docker, you could try this out https://github.com/alexjc/neural-enhance
|
# ? Sep 9, 2020 18:18 |
|
Rinkles posted:Where is it getting that detail from!? They train the neural network on the low res output and high res output. This takes a long time, but when it's done it can approximate the high res version given the low res data
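To make the "train on low res / high res pairs" idea concrete, here's a deliberately tiny toy in Python/NumPy (everything here is made up for illustration — real DLSS-style models are deep networks, not a least-squares fit): "training" learns a linear map from each low-res pixel and its neighbors to the two high-res pixels it covers, from paired low/high data, then applies it to unseen low-res input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired data: "high-res" signal plus its 2x downsample (1D for brevity)
def make_pair(phase):
    hi = np.sin(np.linspace(phase, phase + 6.0, 128))
    lo = hi.reshape(-1, 2).mean(axis=1)
    return lo, hi

# Each low-res pixel plus its two neighbors forms the model's input
def features(lo):
    padded = np.pad(lo, 1, mode="edge")
    return np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=1)

# "Training": least squares over many (low, high) pairs stands in for SGD
X, Y = [], []
for _ in range(32):
    lo, hi = make_pair(rng.uniform(0.0, 6.28))
    X.append(features(lo))
    Y.append(hi.reshape(-1, 2))
X, Y = np.vstack(X), np.vstack(Y)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Inference": approximate the high-res signal from unseen low-res input
lo_test, hi_test = make_pair(1.234)
pred = (features(lo_test) @ W).ravel()
```

Even this toy beats naively repeating each low-res pixel, which is the whole pitch: the model has seen enough paired examples to fill in plausible detail.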
|
# ? Sep 9, 2020 18:19 |
|
Rinkles posted:Where is it getting that detail from!? It infers detail from super high res training images and fills in where it thinks is appropriate. That's the "Deep Learning" part of "DLSS".
|
# ? Sep 9, 2020 18:19 |
|
taqueso posted:They train the neural network on the low res output and high res output. This takes a long time, but when it's done it can approximate the high res version given the low res data I thought DLSS 2 wasn't game specific?
|
# ? Sep 9, 2020 18:20 |
|
shrike82 posted:No - contemporary AI techniques that do reconstruction are trained and focused on filling in plausible details, not the original details yeah, single image upscaling is just guesswork based on trends inferred from training data which leads to hilarious results like this https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-added-ryan-goslings-face-to-this-photo/ Rinkles posted:Where is it getting that detail from!? previous frames, it accumulates detail over time
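The "accumulates detail over time" part can be sketched in a few lines of Python/NumPy (a toy of the accumulation idea only, nothing like the real algorithm): each frame only a fraction of pixels get a fresh sample, but blending them into a persistent history buffer converges toward the full-detail image.

```python
import numpy as np

rng = np.random.default_rng(1)

truth = rng.random(256)     # the "fully detailed" image (1D for brevity)
history = np.zeros(256)     # persistent buffer shown on screen
alpha = 0.2                 # per-frame blend factor
errors = []
for frame in range(100):
    # this frame, only ~a quarter of the pixels receive a fresh sample
    sampled = rng.random(256) < 0.25
    history[sampled] = (1 - alpha) * history[sampled] + alpha * truth[sampled]
    errors.append(float(np.abs(history - truth).mean()))
```

The per-frame error keeps shrinking, which is why the result ends up sharper than any single low-res frame could be.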
|
# ? Sep 9, 2020 18:21 |
|
beaten on all counts taqueso posted:Sort of, but you get random things the algo thinks could be there, not what was really there. It works for games because they can train the network on just the game. In a non game situation, the training has to be more general Rinkles posted:Where is it getting that detail from!? In any case, has anyone seen what it takes to actually get it working with your engine? I have a toy renderer I've used in a few projects and although it doesn't need DLSS, I'm kind of curious what it'd require. Unfortunately the nvidia page says "For information on integrating DLSS into a new engine, contact your NVIDIA developer relations contact." which is uh...
|
# ? Sep 9, 2020 18:23 |
|
repiv posted:previous frames, it accumulates detail over time that makes sense. I mean in so far as I can glean anything about this black magic technology
|
# ? Sep 9, 2020 18:24 |
|
mobby_6kl posted:beaten on all counts Wow, that's incredible and almost unbelievable except for the part where it works
|
# ? Sep 9, 2020 18:26 |
|
mobby_6kl posted:In any case, has anyone seen what it takes to actually get it working with your engine? I have a toy renderer I've used in a few projects and although it doesn't need DLSS , I'm kind of curious what it'd require. Unfortunately the nvidia page says https://developer.nvidia.com/contact Start there I guess?
|
# ? Sep 9, 2020 18:28 |
|
PC LOAD LETTER posted:That thing often ran hotter than that (70C+ was typical) and was widely regarded as laughable at the time. edit: looking at anandtech's review of the thing it was forced to underclock itself in stuff like furmark over 10%. I'm curious as to how much AIO/liquid cooling you've actually done? Because you're just...wrong. Also, Furmark? IHS on CPUs are a substantial limitation. Additionally, CPU physical architecture is set up in a way that does not optimize cooling potential--they have a couple of very small, very hot spots (cores, mostly), and then big chunks of space that aren't putting out much heat (caches, units that aren't currently being used like AVX-512 junk, the entire iGPU slice which is like 50% of the size of a lot of chips now, etc). This ends up meaning that getting heat out of a CPU is a considerable challenge, and is part of the reason they have an IHS to begin with. The other part being so you don't crush it with your ham-hands when installing the cooler, of course. GPUs, on the other hand, tend to have lower heat point-density, and run bare-die because you're not supposed to be loving with the cooler in the first place. The end result being it's substantially easier to get heat out of a GPU die than a CPU die. The 295x2 did indeed run at 70C+, but you know what else does? Pretty much every 2080 and 2080Ti. 70C was also 20C lower than the 290X it was based on, and at substantially lower noise levels, so, uh. Yeah. Even a single 120mm AIO from 2014 can cool ~500W without much issue. The only real knock it took there was it didn't seem to have a good enough no-load down-throttle system, so its idle noise floor was a bit above a comparable open-air cooler. To be entirely fair, some current AIOs still struggle with that, or have obnoxious pump noise. Others are basically silent.
You're also wrong going back to talking about boost clocks: in most situations NVidia cards boost up to a max point and then pretty much stay there, +/- maybe 20MHz, unless they run into some other throttling issue first (heat, usually). No one gives a gently caress about base "guarantee" clocks of 1800MHz or whatever if in practical use they'll be seeing 2000-2050MHz or similar. You know, just like with Pascal and Turing.
|
# ? Sep 9, 2020 18:34 |
|
Ugly In The Morning posted:You could probably achieve similar results at greater efficiency by having the radiator submerged in water being resupplied at some slow rate to keep the temp low enough. You'd get an insane water bill and destroy the environment, but you could install a waterblock to a GPU then just run a low stream from the nearest faucet set to cold. If you want to be efficient about it, someone once built a geothermal loop to cool their GPU: https://www.overclock.net/forum/134-cooling-experiments/671177-12-feet-under-1000-square-feet-geothermal-pc-cooling.html
|
# ? Sep 9, 2020 18:35 |
|
AirRaid posted:https://developer.nvidia.com/contact From this presentation: http://behindthepixels.io/assets/files/DLSS2.0.pdf we get this page with this handy FAQ: Q: How do I integrate DLSS into my engine? A: For information on integrating DLSS into a new engine, contact your NVIDIA developer relations contact.
|
# ? Sep 9, 2020 18:36 |
|
Nfcknblvbl posted:My mini ITX Thermaltake Core V1 pc owns. Too bad I'm gonna have to chuck it when I upgrade the GPU. You can do the same thing that my Skyreach case can do. Pop the front of it off and hotrod the GPU out of the frame. Just have to keep the GPU to 2 slots like some of the 3080's are.
|
# ? Sep 9, 2020 18:41 |
|
mobby_6kl posted:I thought the original implementation was game specific, but Isn't DLSS 2 supposed to be generic now? yeah, DLSS1 attempted to adapt the traditional AI upscaling methods that imagine extra details out of thin air, and required training on each game so it could learn which patterns to infer for their art style. it also didn't work very well. DLSS2 scrapped that and works more like a conventional temporal upscaler, integrating real detail over time rather than guessing, but uses an ML model to predict which data from the previous frames is still useful and what needs to be rejected to avoid artifacts. the new method doesn't really depend on the game's visual style so it doesn't need training per game anymore mobby_6kl posted:In any case, has anyone seen what it takes to actually get it working with your engine? I have a toy renderer I've used in a few projects and although it doesn't need DLSS, I'm kind of curious what it'd require. Unfortunately the nvidia page says https://www.youtube.com/watch?v=d5knHzv0IQE this presentation gives an overview of how engine integration works, but as for actually getting the SDK yourself you probably can't. most nvidia libraries are easy to get by clicking through a few EULAs but they are holding the DLSS SDK close to their chest for some reason repiv fucked around with this message at 18:46 on Sep 9, 2020 |
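A very rough Python/NumPy sketch of the loop being described (1D "image", every name made up, and the real weighting comes from an ML model — here a crude disagreement heuristic stands in for it): reproject last frame's history, then blend it with the new frame using a per-pixel weight.

```python
import numpy as np

def temporal_upscale(history, current, motion, reject_threshold=0.3):
    # stand-in for real motion-vector reprojection of the history buffer
    reprojected = np.roll(history, motion)
    disagreement = np.abs(reprojected - current)
    # the "model": trust history heavily (w=0.9) unless it disagrees
    # too much with the new frame, in which case reject it (w=0)
    w = np.where(disagreement > reject_threshold, 0.0, 0.9)
    return w * reprojected + (1.0 - w) * current
```

When the history is fully rejected the output falls back to the raw new frame, which is why camera cuts are the worst case for this kind of upscaler.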
# ? Sep 9, 2020 18:44 |
|
I really just want DLSS2 everywhere to upsample to 4K DSR, to get super-awesome antialiasing on the cheap.
|
# ? Sep 9, 2020 18:46 |
|
can you observe the extra detail being painted in when you look at a totally new area, or is it too fast to notice?
|
# ? Sep 9, 2020 18:46 |
|
Rinkles posted:can you observe the extra detail being painted in when you look at a totally new area, or is it too fast to notice? It's part of the frame rendering process, so it all occurs within the generation of a single frame. It's not like, say, lovely game engines doing asset pop-in where frame 1 has the low res texture, frame 2 the medium res, and then finally by frame 5 or whatever it's gotten the high res texture off the disk and shoved it in there so you can watch as it progressively improves the image over the course of 1/4 second's worth of frames. DLSS works "instantly" in that sense. e; vvvvv is technically correct, but a complete hard cut to a totally static view isn't a very common occurrence in most games, so that sort of thing is more an edge case than something you'd expect to see regularly. DrDork fucked around with this message at 18:59 on Sep 9, 2020 |
# ? Sep 9, 2020 18:49 |
|
Rinkles posted:can you observe the extra detail being painted in when you look at a totally new area, or is it too fast to notice? in theory you can but it's usually quite hard to notice. TAA has a similar phenomenon where it outputs a raw aliased image immediately after a camera cut, but it only takes a few frames to clean up. it's more noticeable at lower framerates of course
|
# ? Sep 9, 2020 18:51 |
|
The Shield’s still using the more traditional AI upsampling technique for video. It’s an interesting problem because the universe of possible imagery is far greater there.
|
# ? Sep 9, 2020 18:57 |
|
repiv posted:yeah, DLSS1 attempted to adapt the traditional AI upscaling methods that imagine extra details out of thin air, and required training on each game so it could learn which patterns to infer for their art style. it also didn't work very well. @11:56 "Twenty Eighty Tie"
|
# ? Sep 9, 2020 19:00 |
|
I've just been messing with a deep learning based vocal remover. Using the GPU over the CPU, the speed difference is massive. Now I want the 3090 because of the VRAM.
|
# ? Sep 9, 2020 19:01 |
|
DLSS requires the dev to apply or label motion vectors to objects or something like that... There's more info in the Digital Foundry Death Stranding video. I think I pasted a link into this thread earlier.
|
# ? Sep 9, 2020 19:02 |
|
I haven't looked too hard at how DLSS 1 vs 2 is actually developed against, but I'm trying to figure out why it's cool to use machine learning for gaming despite its limited use cases for photorealistic image sets in terms of extrapolation (see: the limits of Adobe content-aware image culling / extrapolation which isn't even ML). It's perfectly reasonable to use intermediate levels of training data to extrapolate different scenes and textures if there's 4k or 8k+ textures available as one's verification set. So you train the model to upscale from, say, 1080 textures to 4k textures using a subset of all 4k textures or perhaps rendered scenes as the training data, and to verify how good the model is you'd compare it against a verification set that's a distinct subset. This is inherently different from most machine learning datasets in that the total population is effectively infinite, while in a game situation we have finite data and a near-ideal training data set. As a result, a sufficiently detailed model can be literally pixel-perfect accurate, while it's much harder with the massive space of real world images. From what it sounds like based upon the bits I've read, DLSS 2 is an LSTM network while DLSS 1 was a CNN.
|
# ? Sep 9, 2020 19:05 |
|
necrobobsledder posted:I haven't looked too hard at how DLSS 1 vs 2 is actually developed against but I'm trying to figure out why it's cool to use machine learning for gaming despite its limited use cases for photorealistic image sets in terms of extrapolation (see: the limits of Adobe content-aware image culling / extrapolation which isn't even ML). It's perfectly reasonable to use intermediate levels of training data to extrapolate different scenes and textures if there's 4k or 8k+ textures available as one's verification set. So you train the model to upscale from, say, 1080 textures to 4k textures using a subset of all 4k textures or perhaps rendered scenes as the training data and to verify how good the model is you'd compare it against a verification set that's a distinct subset. This is inherently different from most machine learning datasets in that the total population is effectively infinite while in a game situation we have finite data and a near-ideal training data set. As a result, a sufficiently detailed model can be literally pixel-perfect accurate while it's much harder with the massive space of real world images. DLSS 2.0 isn't deep-dreaming images directly, it's using deep learning to decide how to weight the temporal samples that it feeds to TAA/a traditional upscaler.
|
# ? Sep 9, 2020 19:07 |
|
So based on my reading NVidia used their HPC solutions to train a neural net on how to upscale game content in a general sense and packages it along with their drivers. This neural net is then used to upscale frames, and the upscaled frames are fed back into the temporal antialiasing algorithm along with the relevant engine data to antialias every subsequent frame. Yes?
|
# ? Sep 9, 2020 19:08 |
|
DrDork posted:e; vvvvv is technically correct, but a complete hard cut to a totally static view isn't a very common occurrence in most games, so that sort of thing is more an edge case than something you'd expect to see regularly. camera cuts are the worst case scenario but it happens during continuous motion too. if you have an object moving across the frame, like the circle here, then it leaves a "trail" of invalid history data behind it as whatever was behind it only just became visible. the first few pixels in its wake have no history data and are effectively not upsampled, then the history data progressively builds up and increases in quality. in practice it's rarely noticeable though because the history re-accumulates before you really have a chance to scrutinize it, especially at 60fps or higher
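The "trail" is easy to picture with a made-up 1D setup in Python/NumPy (a toy of the bookkeeping only): track how many frames of valid history each pixel has while a one-pixel occluder slides right, resetting the pixel it just uncovered.

```python
import numpy as np

width, frames = 20, 8
age = np.zeros(width, dtype=int)   # frames of accumulated history per pixel
for f in range(frames):
    occluder = 5 + f               # occluder slides one pixel right per frame
    age += 1                       # everything gains a frame of history (toy:
                                   # ignore pixels currently under the occluder)
    age[occluder - 1] = 0          # the pixel it just vacated starts over
# age[4:12] is now a gradient: the oldest disocclusions have re-accumulated
# the most history, the newest one has none -- that's the trail
```

Each pixel in the wake re-converges within a few frames, matching the point that the trail is rarely visible at high framerates.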
|
# ? Sep 9, 2020 19:08 |
|
Some Goon posted:So based on my reading NVidia used their HPC solutions to train a neural net on how to upscale game content in a general sense and packages it along with their drivers. This neural net is then used to upscale frames, and the upscaled frames are fed back into the temporal antialiasing algorithm along with the relevant engine data to antialias every subsequent frame. Yes? no, the neural net is not upscaling content at all. just read the presentation, look at the "spatial-temporal supersampling" part http://behindthepixels.io/assets/files/DLSS2.0.pdf it is using a neural net working on the motion vectors to choose weighting for the samples, that it feeds into a TAA/temporal upscaling algorithm. The neural net doesn't directly upscale anything, the neural net picks weights for the samples. Paul MaudDib fucked around with this message at 19:12 on Sep 9, 2020 |
# ? Sep 9, 2020 19:09 |
|
As far as we can tell and from everything Nvidia has revealed, DLSS doesn't actually use ML to generate samples. It uses ML to determine what samples from previous frames to re-use, and how. That's how DLSS can produce better than native results - it's effectively supersampling across multiple frames, as long as it doesn't use bad data or throw away too much good data. There is clearly a little special sauce of some sort in there because camera cuts aren't hard aliased like TAA even though they should look awful because the single-frame resolution is below native. I don't know that anyone outside Nvidia has a good idea how that works.
|
# ? Sep 9, 2020 19:09 |
What is the most RGB video card out there? I want to get it for my girlfriend who just loves flashing lights in her ten-year-old mATX case.
|
# ? Sep 9, 2020 19:12 |
|
repiv posted:camera cuts are the worst case scenario but it happens during continuous motion too This exact bug is in death stranding running DLSS.
|
# ? Sep 9, 2020 19:17 |
|
Zero VGS posted:You can do the same thing that my Skyreach case can do. Pop the front of it off and hotrod the GPU out of the frame. Just have to keep the GPU to 2 slots like some of the 3080's are. The founder’s edition should just fit in a core V1. Now we just have to wait and see how poo poo the cooler is
|
# ? Sep 9, 2020 19:24 |
|
Whiskey A Go Go! posted:What is the most rgb video card out there? I want to get it for my girlfriend who just love flashing lights in her ten year old matx case. Consider the following:
GIGABYTE AORUS GeForce RTX 3080 Master & Xtreme
Inno3D RTX 3080 iChill X3 & X4
Palit GeForce RTX 3080 Gaming PRO & Gaming PRO OC
PNY GeForce RTX 3080 XLR8 Gaming Epic-X RGB
ZOTAC RTX 3080 Trinity
Which one is the "most" RGB is a matter of personal choice, but all of the above are quite blingy.
|
# ? Sep 9, 2020 19:26 |
Paul MaudDib posted:no, the neural net is not upscaling content at all. just read the presentation, look at the "spatial-temporal supersampling" part Wrong, they’re using motion vectors as inputs to the NN but it’s still a CNN autoencoder that spits out images that can be compared against the higher rez ground truth. The spatio-temporal section you’re referring to is actually discussing an existing technique which is based on heuristics.
|
# ? Sep 9, 2020 19:27 |