|
I go 6/11 at 2400 DPI for desktop use. The 800 DPI switch is my "I'm casually touching up pixels on a 1440p monitor" speed, and it's frustratingly slow beyond that use case.
|
# ? Sep 16, 2023 19:30 |
|
I keep my mouse at 3200 DPI
|
# ? Sep 16, 2023 19:40 |
|
VostokProgram posted:the real reason to have your mouse set to 800 dpi is that windows cursor speed is busted and has to stay exactly in the middle to avoid pixel skipping, so if your mouse is 1600 dpi it'll be extremely twitchy on the desktop Not true when turning it down. There are a whole bunch of even divisors, so you can run higher DPI with any of the lower notches on the "old" slider except 5/11 and be just fine. Try it, it's actually really noticeable how much better a mouse tracks at higher DPI.
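A quick way to sanity-check the divisor claim is to simulate the legacy pointer-speed multipliers. The notch-to-multiplier table below is the commonly cited one for the old slider with "Enhance pointer precision" off (an assumption, not taken from official docs), and the integer truncation is a simplification of how the cursor position gets quantized:

```python
# Sketch of why the sub-6/11 notches (except 5/11) are "safe" at high DPI.
# Multiplier values are the commonly cited ones for the legacy Windows
# pointer-speed slider with "Enhance pointer precision" off (assumption).
NOTCH_MULT = {3: 0.25, 4: 0.5, 5: 0.75, 6: 1.0, 7: 1.5}

def step_sizes(mult, counts=8):
    """Per-count pixel movement when the cursor position is truncated to ints."""
    pos, last, steps = 0.0, 0, []
    for _ in range(counts):
        pos += mult
        steps.append(int(pos) - last)
        last = int(pos)
    return steps

for notch, m in NOTCH_MULT.items():
    print(f"{notch}/11 (x{m}): {step_sizes(m)}")
# 4/11 (x0.5) gives a perfectly regular 0,1,0,1,... pattern;
# 5/11 (x0.75) gives an uneven 0,1,1,1,0,... pattern;
# 7/11 (x1.5) skips pixels (steps of 2).
```

The power-of-two notches just drop counts evenly, so tracking stays smooth; 5/11 stutters and anything above 6/11 skips pixels, which is the "twitchy" effect being described.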
|
# ? Sep 16, 2023 19:40 |
|
I use 10dpi because i like a workout when I’m gaming.
|
# ? Sep 16, 2023 20:16 |
|
MarcusSA posted:I use 10dpi because i like a workout when I’m gaming. Pictured: MarcusSA trying to micro in SC2
|
# ? Sep 16, 2023 21:03 |
|
Fortnite is the premier micro game now.
|
# ? Sep 16, 2023 21:05 |
|
BlankSystemDaemon posted:We're finally starting to see the kind of performance analysis that I was hoping for, in this particular instance it's from chipsandcheese.com. There isn't much to work with from their analysis beyond "oh, that's interesting" over some unknown function, though, not even a hint of what could be improved in the renderer.
|
# ? Sep 16, 2023 21:12 |
|
Yeah wiring up your home with ethernet is a game changer even for wifi. A wired backhaul will improve wifi speeds and reliability.
|
# ? Sep 16, 2023 21:33 |
|
shrike82 posted:Yeah wiring up your home with ethernet is a game changer even for wifi. A wired backhaul will improve wifi speeds and reliability. Hell yeah. I got incredibly lucky that my place is wired. Having that back haul is a huge improvement.
|
# ? Sep 16, 2023 21:34 |
|
Got my 4090 founders edition installed today, actually pretty impressive how quiet it is, I could totally ditch the water cooler and just go air on this thing. (But I already have the block for it, so I'll slap it on in a few days once I'm comfortable the 4090 is going to ride the bathtub curve.) A huge difference in fan noise and tone compared to my previous EVGA 3080 Ti FTW3 on its stock cooler, that thing was loud AF.

Performance wise, uhhhh, my 5 year old 9900K CPU was already struggling to feed the 3080 Ti, so the 4090 is just a snore fest. Amusingly, the first game I tried that managed to scrape 100% GPU utilization out of it instead of just sitting at a CPU limit was Portal RTX. Control with the recent HDR patch and its "More ray tracing! (DANGER: EXPENSIVE DO NOT USE)" toggle, which still runs ~100 FPS, also takes the card comfortably over 425 W (in native resolution DLAA mode). Seems I've found my GPU burn-in tester. Maybe there's room to hit the GPU limits if I start throwing DSR into the mix elsewhere, but honestly the benefit over DLSS Quality seems pretty debatable.

On the plus side, overall system power consumption is down a fair amount in games that were already hitting the CPU limit, because the 4090 uses less power than a 3080 Ti at the same CPU-limited performance levels.
|
# ? Sep 16, 2023 21:38 |
|
Shipon posted:it's like smash players complaining about wifi vs ethernet, when half of them think that powerline ethernet is preferable to wifi
|
# ? Sep 16, 2023 22:43 |
|
Indiana_Krom posted:Got my 4090 founders edition installed today, actually pretty impressive how quiet it is, I could totally ditch the water cooler and just go air on this thing. (But I already have the block for it, so I'll slap it on in a few days after I am comfortable the 4090 is going to ride the bathtub curve.) A huge difference in fan noise and tone compared to my previous EVGA 3080 Ti FTW3 on its stock cooler, that thing was loud AF. “A poignant but charming coming of age story that tugs on your heartstrings in all the right ways” - New York Times
|
# ? Sep 16, 2023 22:52 |
|
I really do like the Adrenalin software, especially the metrics tracker. There are more things you can list (like fan speeds), and you can use plain numbers instead of graphs, but it's pretty neat.
|
# ? Sep 17, 2023 01:55 |
|
Taima posted:Yep. I used powerline ethernet for almost a year before giving up and hard wiring my house. Powerline ethernet sucks rear end but it was amazing for gaming latency vs my (extremely good) wifi 6e router. MoCa or hardwire MarcusSA posted:Hell yeah. This is one of my projects this winter (hardwiring for backhaul) once going into the attic doesn’t mean 10 minutes or more equals death. Canned Sunshine fucked around with this message at 02:15 on Sep 17, 2023 |
# ? Sep 17, 2023 02:13 |
|
SourKraut posted:MoCa or hardwire I have Cat5 in the house from the builders, but I think they stapled it down so I can’t use it to pull 6a that would let me do 2.5/10. First world problem to be sure.
|
# ? Sep 17, 2023 02:17 |
|
Subjunctive posted:I have Cat5 in the house from the builders, but I think they stapled it down so I can’t use it to pull 6a that would let me do 2.5/10. First world problem to be sure. Oh that sucks; at least the wall plates and drop routes are known tho, and you could use the old Cat5 to help pull through the new cables!
|
# ? Sep 17, 2023 02:35 |
|
SourKraut posted:MoCa or hardwire maybe they fixed it, but on mine moca added 3ms of latency no matter what
|
# ? Sep 17, 2023 03:25 |
|
Kibner posted:I really do like the Adrenaline software. Especially the metrics tracker: And everything in one app, one UI. Nvidia's software UX is just laughably bad.
|
# ? Sep 17, 2023 04:39 |
|
I miss Adrenalin. It is quite useful.
|
# ? Sep 17, 2023 06:15 |
|
Subjunctive posted:I have Cat5 in the house from the builders, but I think they stapled it down so I can’t use it to pull 6a that would let me do 2.5/10. First world problem to be sure. You can run 2.5 over Cat5e (I assume it's Cat5e you have, not Cat5) up to the original spec max length of 100m. 5Gbit is possible too, with reduced max length.
|
# ? Sep 17, 2023 08:05 |
|
Inept posted:maybe they fixed it, but on mine moca added 3ms of latency no matter what What about compared to Powerline ethernet? For some reason I thought MoCa 2.5 fixed the latency issue, but maybe not. HalloKitty posted:You can run 2.5 over Cat5e (I assume it's Cat5e you have, not Cat5) up to the original spec max length of 100m. 5Gbit is possible too, with reduced max length. I think there was a period in the mid-to-late 90s when ethernet was becoming a regular option, so a lot of homebuilders put it in, but this was before 5e debuted.
|
# ? Sep 17, 2023 16:36 |
|
HalloKitty posted:You can run 2.5 over Cat5e (I assume it's Cat5e you have, not Cat5) up to the original spec max length of 100m. 5Gbit is possible too, with reduced max length. I do seem to have Cat5e, which is good.
|
# ? Sep 17, 2023 17:21 |
|
nvidia throwing some more money at developer outreach for their streamline api doesn't seem like the worst idea, given the dlss/reflex add-ins people could make. speaking of, how are things like lod bias configured for things like this? (or rather, where should they be set to make this stuff easier?)
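On the LOD bias question: the convention temporal upscalers generally document is to bias texture mip selection by log2(render resolution / output resolution), so textures are sampled as sharply as if you were rendering natively at the display resolution. A minimal sketch (the function name is mine, and vendors typically add their own small extra offset on top of this):

```python
import math

def upscaler_mip_bias(render_width: int, display_width: int) -> float:
    """Texture LOD bias usually recommended for temporal upscalers:
    negative when rendering below the output resolution, so mip
    selection behaves as if rendering at the display resolution."""
    return math.log2(render_width / display_width)

# e.g. "quality" upscaling at 4K rendering internally at 2560x1440:
print(upscaler_mip_bias(2560, 3840))  # roughly -0.585
```

Getting this wrong in either direction is one of the common sources of upscaler image-quality complaints: no bias gives blurry textures, too much bias gives shimmering.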
|
# ? Sep 18, 2023 12:06 |
|
I was watching Digital Foundry's coverage of Jedi Survivor's latest patch (their verdict: still pretty bad) and I was able to see some glaring differences between FSR2 and DLSS (because DF took lengths to point it out and make it visible). Is that sort of disparity always going to be a thing, because of the technological difference between FSR2 and DLSS? Is it possible to bring them closer together, or for FSR2 to not look as bad compared to DLSS, if the devs... program it better? I guess I'm wondering how much of this is a "natural" part of the limits of FSR2, versus how well the developer can polish it up.
|
# ? Sep 18, 2023 12:31 |
|
jedi survivor looks a lot like launch cp2077, where something was just horribly wrong with the aa implementation. re engine games also have some issues i think. the advantage of dlss is that it's better at cleaning up awful visual quality than fsr, and the average game is always going to have a litany of issues and rushed implementations rather than the best-case tech demo comparisons we'd normally like to see. this is on top of developers trying their hands at dx12 without the same crutches of dx11 and nvidia fixing a lot of their code vomit in drivers.
|
# ? Sep 18, 2023 12:36 |
|
gradenko_2000 posted:I was watching Digital Foundry's coverage of Jedi Survivor's latest patch (their verdict: still pretty bad) and I was able to see some glaring differences between FSR2 and DLSS (because DF took lengths to point it out and make it visible) DLSS's advantage is apparently its use of tensor cores/ML to do sample rejection. FSR2's hand-tuned algorithm just doesn't do as good a job, and it probably never will without that machine learning component. XeSS does a similar thing, and it's very close to DLSS when it runs on Arc GPUs.
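For context on what the "hand-tuned" side looks like: the classic non-ML approach to history rejection is neighborhood clamping, where the accumulated history color is clamped to the min/max of the current frame's local neighborhood before blending. A toy single-channel sketch (real implementations work on color vectors, often in YCoCg space, with fancier rectification than a plain min/max):

```python
# Toy sketch of TAA/FSR2-style history rectification, per channel.
# The learned-model upscalers replace exactly this heuristic with a
# network that decides how much history to trust.
def clamp_history(history, neighborhood):
    """Clamp the reprojected history sample to the current frame's
    local color range, rejecting stale values (ghosting)."""
    lo, hi = min(neighborhood), max(neighborhood)
    return max(lo, min(hi, history))

def accumulate(current, history, neighborhood, alpha=0.1):
    """Exponentially blend the current sample with rectified history."""
    rectified = clamp_history(history, neighborhood)
    return alpha * current + (1 - alpha) * rectified
```

The failure modes people see in FSR2 comparisons (ghosting, fizzle on disocclusion) come from this clamp being either too loose or too aggressive, which is the tuning problem a trained model sidesteps.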
|
# ? Sep 18, 2023 12:42 |
|
gradenko_2000 posted:guess I'm wondering how much of this is a "natural" part of the limits of FSR2, versus how well the developer can polish it up The natural limit of the technique AMD is using is that it has to run on the shader cores of consoles, since they don't have any specialized hardware. If they chose to prioritize image quality, it would cut into speed. I could perhaps see them coming out with a high-quality version of FSR in the future, optimized to improve image quality for games targeting 30 FPS on consoles, but if matching DLSS image quality required the technique to be significantly slower than DLSS, the comparison wouldn't be flattering to AMD. They are probably stuck unless a PS5 Pro or something along those lines adds new hardware to let them offload FSR from the shaders. Does anybody know if the Cyberpunk 2.0 patch will include DLSS 3.5 support?
|
# ? Sep 18, 2023 13:25 |
|
pyrotek posted:Does anybody know if the Cyberpunk 2.0 patch will include DLSS 3.5 support? Phantom Liberty headlines their 3.5 article, so probably
|
# ? Sep 18, 2023 13:35 |
|
gradenko_2000 posted:I was watching Digital Foundry's coverage of Jedi Survivor's latest patch (their verdict: still pretty bad) and I was able to see some glaring differences between FSR2 and DLSS (because DF took lengths to point it out and make it visible) Not necessarily, no. There is nothing about how DLSS2 works that makes it inherently superior to something like FSR2. That said, getting FSR2 to be as good as the competition is much harder, because you have to specify the function/algorithm by hand (over innumerable conditions), whereas with deep learning the computer does all of the curve fitting by itself. As far as I know, no proof demonstrates that neural networks are inherently better than any other structure; however, other approaches may just be impossible to implement as well (there are decades of attempts/failures to back this up), so it's a moot point. Upscaling isn't exactly a huge ask of a GPU. Watch the utilization rates of those ever-so-magical tensor cores running DLSS2, or even, to a lesser extent, DLSS3. That is a lot of silicon on a consumer chip that does nothing for most of its life; Intel Arc is the apotheosis of this design decision, but there I think it hurt them, whereas for Nvidia not so much. I don't think the hardware has anything to do with it.
|
# ? Sep 18, 2023 16:04 |
|
if hardware isn't a factor then one has to ask why AMD is continuing to invest in fiddling around with hand-tuned algorithms rather than trying to replicate the two leading methods which have leapfrogged everything else by a considerable margin we're over three years past the launch of DLSS2 now, even if they were blindsided by that they've had time to change course
|
# ? Sep 18, 2023 16:47 |
|
repiv posted:if hardware isn't a factor then one has to ask why AMD is continuing to invest in fiddling around with hand-tuned algorithms rather than trying to replicate the two leading methods which have leapfrogged everything else by a considerable margin
|
# ? Sep 18, 2023 17:10 |
|
wasn't that supposed to be the role of DirectML? microsoft provides a standard library of higher level ML primitives and exposes hooks for the GPU driver to substitute in their own optimized implementations using whichever instructions are the fastest for their hardware uptake has been non-existent in real-time graphics though, intel followed nvidia in using their own private driver interfaces to implement XeSS
|
# ? Sep 18, 2023 17:19 |
|
Probably because Intel doesn't (yet) matter and AMD doesn't have dedicated hardware or show interest, so if you're going to implement something using ML hardware you just do it the Nvidia specific way rather than dealing with the downsides of a universal API that may be less performant, less mature, etc. And aside from upsampling and related stuff where you are unlikely to beat Nvidia's R&D since they still seem to be pushing pretty hard, what exactly are you going to do with ML hardware for real-time rendering right now? Nvidia is leading the software push, so you just let them do it and implement their black box APIs. If AMD gets serious maybe that changes, but it probably won't for another 5+ years until next-gen consoles drive a next-gen architecture reconsideration. K8.0 fucked around with this message at 17:32 on Sep 18, 2023 |
# ? Sep 18, 2023 17:29 |
|
Yeah, uptake for DirectML has been pretty much nonexistent, so tying your run-anywhere upscaling product to it doesn't seem like a great idea. Maybe if it matures a little more we'll see some movement in that direction. I don't think the layout of the API is a great fit for a real-time upscaler either, where you probably want to tightly interleave your matrix and vector operations in an architecture-dependent way.K8.0 posted:AMD doesn't have dedicated hardware or show interest Given the marketing around FSR being usable on any architecture, I'm not sure we'll see something more advanced until there is better high level API support for matrix math. steckles fucked around with this message at 17:45 on Sep 18, 2023 |
# ? Sep 18, 2023 17:34 |
|
steckles posted:Given the marketing around FSR being usable on any architecture, I’m not sure we’ll see something more advanced until there is better high level API support for matrix math. I think "works everywhere and even better on our latest-gen cards" would be fine with their marketing strategy, and it's not marketing purity that's keeping them from doing something using RDNA3's new bits. I think it's that they don't have an algorithm that works as well as DLSS2+'s training has produced, and likely won't until they break down and build a model themselves.
|
# ? Sep 18, 2023 17:55 |
|
"works everywhere and even better on our latest-gen cards" is exactly what they're doing with FSR3, where the interpolation runs everywhere but RDNA3 gets extra latency compensation
|
# ? Sep 18, 2023 18:02 |
|
steckles posted:Yeah, uptake for DirectML has been pretty much nonexistent so tying you run-anywhere upscaling product to it doesn’t seem like a great idea. Maybe if it matures a little more we’ll see some movement in that direction. I don’t think the layout of the API is a great fit for a real-time upscale either, where you’re probably wanting to tightly interleave your matrix and vector operations in an architecture dependent way. A 7900xtx is about on par with a 4080 in things like stable diffusion. PyTorch 2.0 has minimized the need for DirectML for now.
|
# ? Sep 18, 2023 20:14 |
|
https://twitter.com/VideoCardz/status/1703772692644573268?s=20 People hyping themselves up on Switch 2 against the PS5/XS are going to be disappointed imo
|
# ? Sep 18, 2023 20:40 |
|
was anyone expecting a handheld to keep up with ~200W systems the switch 2 might be a node ahead but that's not enough
|
# ? Sep 18, 2023 20:42 |
|
repiv posted:was anyone expecting a handheld to keep up with ~200W systems i mean those are pretty old consoles now, and a lot of improvements have been made since rdna2-ish was released. with framegen in fsr3/dlss you can make up for a lot of trash tier hardware
|
# ? Sep 18, 2023 20:44 |