Conspiratiorist
Nov 12, 2015

17th Separate Kryvyi Rih Tank Brigade named after Konstantin Pestushko
Look to my coming on the first light of the fifth sixth some day
I go 6/11 2400dpi for desktop use :shrug:

800dpi switch is my "i'm casually touching up pixels on a 1440p monitor" speed, and frustratingly slow beyond that use case.

change my name
Aug 27, 2007

Legends die but anime is forever.

RIP The Lost Otakus.

I keep my mouse at 3200 DPI

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

VostokProgram posted:

the real reason to have your mouse set to 800 dpi is that windows cursor speed is busted and has to stay exactly in the middle to avoid pixel skipping, so if your mouse is 1600 dpi it'll be extremely twitchy on the desktop

Not true for turning it down. There are a whole bunch of even divisors and you can run higher DPI with all the lower notches on the "old" slider except 5/11 and be just fine. Try it, it's actually really noticeable how much better a mouse tracks at higher DPI.
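
To put some arithmetic behind the divisor point, here's a small Python sketch. The multiplier table is the one commonly quoted for the Windows pointer-speed slider with "Enhance pointer precision" turned off; treat those values as an assumption pulled from community documentation, not something verified in this thread.

```python
from math import floor

# Commonly quoted multipliers for the 11 notches of the Windows pointer
# speed slider with "Enhance pointer precision" off (community-documented
# values -- an assumption, not verified here).
SLIDER = {1: 0.0625, 2: 0.125, 3: 0.25, 4: 0.5, 5: 0.75,
          6: 1.0, 7: 1.5, 8: 2.0, 9: 2.5, 10: 3.0, 11: 3.5}

def cursor_steps(multiplier, counts=8):
    """Pixel movement produced by each successive mouse count, assuming the
    OS accumulates fractional movement and the cursor sits at floor() of it."""
    positions = [floor(i * multiplier) for i in range(counts + 1)]
    return [b - a for a, b in zip(positions, positions[1:])]

for notch, m in SLIDER.items():
    steps = cursor_steps(m)
    if any(s > 1 for s in steps):
        note = "skips pixels"                 # notches above 6/11
    elif (1 / m).is_integer():
        note = "even divisor, uniform steps"  # 1/16, 1/8, 1/4, 1/2, 1/1
    else:
        note = "uneven cadence"               # 5/11 = 0.75
    print(f"{notch:2d}/11  x{m:<6}  {steps}  -> {note}")
```

Under that model, 1600 DPI on the 4/11 notch works out to the same effective desktop sensitivity as 800 DPI on 6/11 with no skipped pixels, which is the "run higher DPI with the lower notches" point.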

MarcusSA
Sep 23, 2007

I use 10dpi because i like a workout when I’m gaming.

SlowBloke
Aug 14, 2017

MarcusSA posted:

I use 10dpi because i like a workout when I’m gaming.



Pictured: MarcusSA trying to micro in SC2

Anime Schoolgirl
Nov 28, 2002

Fortnite is the premier micro game now.

Wiggly Wayne DDS
Sep 11, 2010



BlankSystemDaemon posted:

We're finally starting to see the kind of performance analysis that I was hoping for, in this particular instance it's from chipsandcheese.com.
If the way they cover things reminds you of how Anand used to do things, it's probably not an accident.
while that's interesting i'm iffy about their decision to talk about architectural issues over a single set of frame analyses off of a single game renderer. i am surprised they never busted out renderdoc to see more in-depth though

there isn't much to work with from their analysis beyond "oh that's interesting" over x unknown function, not even a hint of what could be improved in the renderer

shrike82
Jun 11, 2005

Yeah wiring up your home with ethernet is a game changer even for wifi. A wired backhaul will improve wifi speeds and reliability.

MarcusSA
Sep 23, 2007

shrike82 posted:

Yeah wiring up your home with ethernet is a game changer even for wifi. A wired backhaul will improve wifi speeds and reliability.

Hell yeah.

I got incredibly lucky that my place is wired. Having that back haul is a huge improvement.

Indiana_Krom
Jun 18, 2007
Net Slacker
Got my 4090 founders edition installed today, actually pretty impressive how quiet it is, I could totally ditch the water cooler and just go air on this thing. (But I already have the block for it, so I'll slap it on in a few days after I am comfortable the 4090 is going to ride the bathtub curve.) A huge difference in fan noise and tone compared to my previous EVGA 3080 Ti FTW3 on its stock cooler, that thing was loud AF.

Performance wise, uhhhh, my 5 year old 9900k CPU was already struggling to feed the 3080 Ti, so the 4090 is just a snore fest. Oh how amusing that the first game I tried that managed to scrape 100% GPU utilization out of it, rather than just getting stuck at a CPU limit, was Portal RTX.

Control with the recent HDR patch and its "More ray tracing! (DANGER: EXPENSIVE DO NOT USE)" toggle, which still runs ~100 FPS, is also taking the card up to comfortably over 425 W (in native resolution DLAA mode). Seems I've found my GPU burn-in tester. Maybe there is space to hit the GPU limits if I start throwing DSR into the mix elsewhere, but honestly the benefit over DLSS quality seems pretty debatable. On the plus side, overall system power consumption is down a fair amount in games that were already hitting the CPU limit, because the 4090 uses less power than a 3080 Ti when operating at the same CPU-limited performance levels.

Arzachel
May 12, 2012

Shipon posted:

it's like smash players complaining about wifi vs ethernet, when half of them think that powerline ethernet is preferable to wifi

Taima
Dec 31, 2006

tfw you're peeing next to someone in the lineup and they don't know

Indiana_Krom posted:

Got my 4090 founders edition installed today, actually pretty impressive how quiet it is, I could totally ditch the water cooler and just go air on this thing. (But I already have the block for it, so I'll slap it on in a few days after I am comfortable the 4090 is going to ride the bathtub curve.) A huge difference in fan noise and tone compared to my previous EVGA 3080 Ti FTW3 on its stock cooler, that thing was loud AF.

Performance wise, uhhhh, my 5 year old 9900k CPU was already struggling to feed the 3080 Ti, so the 4090 is just a snore fest. Oh how amusing that the first game I tried that managed to scrape 100% GPU utilization out of it, rather than just getting stuck at a CPU limit, was Portal RTX.

Control with the recent HDR patch and its "More ray tracing! (DANGER: EXPENSIVE DO NOT USE)" toggle, which still runs ~100 FPS, is also taking the card up to comfortably over 425 W (in native resolution DLAA mode). Seems I've found my GPU burn-in tester. Maybe there is space to hit the GPU limits if I start throwing DSR into the mix elsewhere, but honestly the benefit over DLSS quality seems pretty debatable. On the plus side, overall system power consumption is down a fair amount in games that were already hitting the CPU limit, because the 4090 uses less power than a 3080 Ti when operating at the same CPU-limited performance levels.

“A poignant but charming coming of age story that tugs on your heartstrings in all the right ways” - New York Times

Kibner
Oct 21, 2008

Acguy Supremacy
I really do like the Adrenaline software. Especially the metrics tracker:



There are more things you can list (like fan speeds) and you can also use just numbers instead of graphs, but it's pretty neat.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Taima posted:

Yep. I used powerline ethernet for almost a year before giving up and hard wiring my house. Powerline ethernet sucks rear end but it was amazing for gaming latency vs my (extremely good) wifi 6e router.

MoCa or hardwire

MarcusSA posted:

Hell yeah.

I got incredibly lucky that my place is wired. Having that back haul is a huge improvement.

This is one of my projects this winter (hardwiring for backhaul) once going into the attic doesn’t mean 10 minutes or more equals death.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

SourKraut posted:

MoCa or hardwire

This is one of my projects this winter (hardwiring for backhaul) once going into the attic doesn’t mean 10 minutes or more equals death.

I have Cat5 in the house from the builders, but I think they stapled it down so I can’t use it to pull 6a that would let me do 2.5/10. First world problem to be sure.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Subjunctive posted:

I have Cat5 in the house from the builders, but I think they stapled it down so I can’t use it to pull 6a that would let me do 2.5/10. First world problem to be sure.

Oh that sucks; at least the wall plates and drop routes are known tho, and you could use the old Cat5 to help pull the new cables through!

Inept
Jul 8, 2003

SourKraut posted:

MoCa or hardwire

maybe they fixed it, but on mine moca added 3ms of latency no matter what

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

Kibner posted:

I really do like the Adrenaline software. Especially the metrics tracker:



There are more things you can list (like fan speeds) and you can also use just numbers instead of graphs, but it's pretty neat.

And everything in one app, one UI. Nvidia's software UX is just laughably bad.

Yudo
May 15, 2003

I miss Adrenalin. It is quite useful.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Subjunctive posted:

I have Cat5 in the house from the builders, but I think they stapled it down so I can’t use it to pull 6a that would let me do 2.5/10. First world problem to be sure.

You can run 2.5 over Cat5e (I assume it's Cat5e you have, not Cat5) up to the original spec max length of 100m. 5Gbit is possible too, with reduced max length.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Inept posted:

maybe they fixed it, but on mine moca added 3ms of latency no matter what

What about compared to Powerline ethernet? For some reason I thought MoCa 2.5 fixed the latency issue, but maybe not.

HalloKitty posted:

You can run 2.5 over Cat5e (I assume it's Cat5e you have, not Cat5) up to the original spec max length of 100m. 5Gbit is possible too, with reduced max length.

I think there was a period in the mid-to-late 90s when ethernet was becoming a regular option, so a lot of homebuilders put it in, but this was before 5e debuted.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

HalloKitty posted:

You can run 2.5 over Cat5e (I assume it's Cat5e you have, not Cat5) up to the original spec max length of 100m. 5Gbit is possible too, with reduced max length.

I do seem to have Cat5e, which is good.

kliras
Mar 27, 2021
nvidia throwing some more money at developer outreach for their streamline api doesn't seem like the worst idea, with the dlss/reflex add-ins people could make

speaking of, how are things like lod bias configured for things like this? (or rather, where should they be to make this stuff easier)

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
I was watching Digital Foundry's coverage of Jedi Survivor's latest patch (their verdict: still pretty bad) and I was able to see some glaring differences between FSR2 and DLSS (because DF took lengths to point it out and make it visible)

is that sort of disparity something that's always going to be a thing, because of the technological/technical difference between FSR2 and DLSS? is it possible to make them closer together, or for FSR2 to not look as bad compared to DLSS, if the devs... program it better?

I guess I'm wondering how much of this is a "natural" part of the limits of FSR2, versus how well the developer can polish it up

kliras
Mar 27, 2021
jedi survivor looks a lot like launch cp2077 where something was just horribly wrong with the aa implementation. re engine games also have some issues

i think the advantage of dlss is that it's better at cleaning up awful visual quality than fsr, and the average game is always going to have a litany of issues and rushed implementations rather than the best-case tech demo comparisons we'd normally like to see

this on top of developers trying their hands at dx12 without the same crutches of dx11 and nvidia fixing a lot of their code vomit in drivers

Dr. Video Games 0031
Jul 17, 2004

gradenko_2000 posted:

I was watching Digital Foundry's coverage of Jedi Survivor's latest patch (their verdict: still pretty bad) and I was able to see some glaring differences between FSR2 and DLSS (because DF took lengths to point it out and make it visible)

is that sort of disparity something that's always going to be a thing, because of the technological/technical difference between FSR2 and DLSS? is it possible to make them closer together, or for FSR2 to not look as bad compared to DLSS, if the devs... program it better?

I guess I'm wondering how much of this is a "natural" part of the limits of FSR2, versus how well the developer can polish it up

DLSS's advantage is apparently its usage of tensor cores/ML to do sample rejection. FSR2's hand-tuned algorithm just doesn't do as good of a job, and it probably never will without that machine learning component. XeSS does a similar thing, and it's very close to DLSS when it runs on Arc GPUs.
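
For anyone wondering what "sample rejection" means in practice: temporal upscalers reuse shading from previous frames, and the hard part is deciding per pixel when that history is no longer trustworthy (disocclusions, lighting changes, thin features). The sketch below shows the classic hand-tuned rule, neighborhood clamping, purely as an illustration of the kind of heuristic being discussed; it is not FSR2's actual algorithm (which is considerably more elaborate), and the learned replacement inside DLSS is a black box.

```python
import numpy as np

def clamp_history(history, neighborhood):
    """Classic hand-tuned rejection rule: clamp the reprojected history
    color into the min/max box of the current frame's 3x3 neighborhood,
    so stale history (ghosting) can't stray far from what's on screen now."""
    lo = neighborhood.min(axis=(0, 1))   # per-channel min of the 3x3 window
    hi = neighborhood.max(axis=(0, 1))   # per-channel max
    return np.clip(history, lo, hi)

def accumulate(history, current, neighborhood, alpha=0.1):
    """Blend rectified history with the new sample. A learned upscaler
    effectively replaces the fixed clamp and constant alpha with per-pixel
    decisions predicted by a network."""
    rectified = clamp_history(history, neighborhood)
    return (1.0 - alpha) * rectified + alpha * current

# Toy example: history still holds a bright object that has since moved away.
rng = np.random.default_rng(0)
neighborhood = rng.uniform(0.4, 0.6, size=(3, 3, 3))   # current 3x3 RGB window
current = neighborhood[1, 1]
stale_history = np.array([1.0, 0.1, 0.1])              # leftover red ghost
print(accumulate(stale_history, current, neighborhood))  # pulled back toward current
```

The failure modes of fixed rules like this (over-rejection blurs, under-rejection ghosts) are exactly where the ML approach tends to pull ahead.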

pyrotek
May 21, 2004



gradenko_2000 posted:

guess I'm wondering how much of this is a "natural" part of the limits of FSR2, versus how well the developer can polish it up

The natural limit of the technique AMD is using is that it has to run on the shader cores of consoles, since they don't have any specialized hardware. If they chose to prioritize image quality, it would cut down on speed. I could perhaps see them coming out with a high-quality version of FSR in the future, optimized to improve image quality on the assumption that it's for games targeting 30 FPS on consoles, but if raising image quality to DLSS's level required the technique to be significantly slower than DLSS, the comparison wouldn't be flattering to AMD.

They are probably stuck unless a PS5 Pro or something along those lines has new hardware to let them offload FSR from the shaders.

Does anybody know if the Cyberpunk 2.0 patch will include DLSS 3.5 support?

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

pyrotek posted:

Does anybody know if the Cyberpunk 2.0 patch will include DLSS 3.5 support?

Phantom Liberty headlines their 3.5 article, so probably

Yudo
May 15, 2003

gradenko_2000 posted:

I was watching Digital Foundry's coverage of Jedi Survivor's latest patch (their verdict: still pretty bad) and I was able to see some glaring differences between FSR2 and DLSS (because DF took lengths to point it out and make it visible)

is that sort of disparity something that's always going to be a thing, because of the technological/technical difference between FSR2 and DLSS? is it possible to make them closer together, or for FSR2 to not look as bad compared to DLSS, if the devs... program it better?

I guess I'm wondering how much of this is a "natural" part of the limits of FSR2, versus how well the developer can polish it up

Not necessarily, no. There is nothing about how dlss2 works that makes it inherently superior to something like fsr2. That said, getting fsr2 to be as good as the competition is much harder, because the function/algorithm has to be specified by hand (over innumerable conditions), whereas with deep learning the computer does all of the curve fitting by itself. As far as I know, no proof demonstrates that neural networks are inherently better than any other structure; however, other approaches may just be impossible to implement as effectively (there are decades of attempts/failures to back this up), so it is a moot point.

Upscaling isn't exactly a huge ask of a gpu. Watch the utilization rates of those ever-so-magical tensor cores running dlss2 or even, to a lesser extent, dlss3. That is a lot of silicon on a consumer chip that does nothing for most of its life; Intel Arc is the apotheosis of this design decision, but there I think it hurt them, whereas for Nvidia not so much. I don't think the hardware has anything to do with it.

repiv
Aug 13, 2009

if hardware isn't a factor then one has to ask why AMD is continuing to invest in fiddling around with hand-tuned algorithms rather than trying to replicate the two leading methods which have leapfrogged everything else by a considerable margin

we're over three years past the launch of DLSS2 now, even if they were blindsided by that they've had time to change course

steckles
Jan 14, 2006

repiv posted:

if hardware isn't a factor then one has to ask why AMD is continuing to invest in fiddling around with hand-tuned algorithms rather than trying to replicate the two leading methods which have leapfrogged everything else by a considerable margin
Given the marketing around FSR being usable on any architecture, I’m not sure we’ll see something more advanced until there is better high-level API support for matrix math. Nvidia’s hardware is opaque; RDNA3/Arc’s vector hardware is better documented, but still requires emitting highly architecture-specific instructions with different requirements for getting good performance. Supporting all that is probably not something they’re interested in when the purpose is just to show up. I don’t think AMD cares all that much if it loses in image quality comparisons, FSR ticks a box. Plus they can always just pay developers to use it…

repiv
Aug 13, 2009

wasn't that supposed to be the role of DirectML? microsoft provides a standard library of higher level ML primitives and exposes hooks for the GPU driver to substitute in their own optimized implementations using whichever instructions are the fastest for their hardware

uptake has been non-existent in real-time graphics though, intel followed nvidia in using their own private driver interfaces to implement XeSS

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Probably because Intel doesn't (yet) matter and AMD doesn't have dedicated hardware or show interest, so if you're going to implement something using ML hardware you just do it the Nvidia specific way rather than dealing with the downsides of a universal API that may be less performant, less mature, etc.

And aside from upsampling and related stuff where you are unlikely to beat Nvidia's R&D since they still seem to be pushing pretty hard, what exactly are you going to do with ML hardware for real-time rendering right now? Nvidia is leading the software push, so you just let them do it and implement their black box APIs. If AMD gets serious maybe that changes, but it probably won't for another 5+ years until next-gen consoles drive a next-gen architecture reconsideration.

steckles
Jan 14, 2006

Yeah, uptake for DirectML has been pretty much nonexistent, so tying your run-anywhere upscaling product to it doesn’t seem like a great idea. Maybe if it matures a little more we’ll see some movement in that direction. I don’t think the layout of the API is a great fit for a real-time upscaler either, where you probably want to tightly interleave your matrix and vector operations in an architecture-dependent way.

K8.0 posted:

AMD doesn't have dedicated hardware or show interest
RDNA3 does have a broadly comparable set of matrix manipulation instructions to Nvidia’s tensor cores. It’s unclear exactly how performance actually stacks up, but they’re not showing no interest.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

steckles posted:

Given the marketing around FSR being usable on any architecture, I’m not sure we’ll see something more advanced until there is better high level API support for matrix math.

I think "works everywhere and even better on our latest-gen cards" would be fine with their marketing strategy, and it's not marketing purity that's keeping them from doing something using RDNA3's new bits. I think it's that they don't have an algorithm that works as well as DLSS2+'s training has produced, and likely won't until they break down and build a model themselves.

repiv
Aug 13, 2009

"works everywhere and even better on our latest-gen cards" is exactly what they're doing with FSR3, where the interpolation runs everywhere but RDNA3 gets extra latency compensation

Yudo
May 15, 2003

steckles posted:

Yeah, uptake for DirectML has been pretty much nonexistent so tying you run-anywhere upscaling product to it doesn’t seem like a great idea. Maybe if it matures a little more we’ll see some movement in that direction. I don’t think the layout of the API is a great fit for a real-time upscale either, where you’re probably wanting to tightly interleave your matrix and vector operations in an architecture dependent way.

RDNA3 does have a broadly comparable set of matrix manipulation instructions to Nvidia’s tensor cores. It’s unclear exactly how performance actually stacks up, but they’re not showing no interest.

A 7900xtx is about on par with a 4080 in things like stable diffusion. PyTorch 2.0 has minimized the need for DirectML for now.
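
For context on the PyTorch remark: the ROCm builds of PyTorch expose AMD GPUs through the ordinary torch.cuda device path (HIP underneath), so most CUDA-targeting code runs unmodified, which is largely why DirectML matters less there. A minimal check, assuming a ROCm-enabled PyTorch install and a supported card:

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs are reached through the regular
# torch.cuda API, so the usual device-selection idiom works as-is.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("torch", torch.__version__, "hip", getattr(torch.version, "hip", None))
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))   # e.g. a 7900 XTX

# Any ordinary workload (here a matmul) dispatches to the GPU if present.
x = torch.randn(2048, 2048, device=device)
print((x @ x).sum().item())
```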

shrike82
Jun 11, 2005

https://twitter.com/VideoCardz/status/1703772692644573268?s=20

People hyping themselves up on Switch 2 against the PS5/XS are going to be disappointed imo

repiv
Aug 13, 2009

was anyone expecting a handheld to keep up with ~200W systems

the switch 2 might be a node ahead but that's not enough

wargames
Mar 16, 2008

official yospos cat censor

repiv posted:

was anyone expecting a handheld to keep up with ~200W systems

the switch 2 might be a node ahead but that's not enough

i mean those are pretty old consoles now, and a lot of improvements have been made since rdna2-ish was released. with framegen in fsr3/dlss you can make up for a lot of trash tier hardware
