|
Scoss posted:Is the upcoming AMD reveal likely to be limited only to bigass flagship enthusiast cards like Nvidia?

AMD has Navi 33 (our friend the Hotpink Blowfish), which is expected to be a monolithic 4096-shader part with an x8 interface and 8GB of RAM. But I don't think anyone has any desire to launch low-end parts at the moment, so it's possible we don't see it for a while.
|
# ? Oct 4, 2022 18:44 |
|
yeah, i think the low end agenda for next year's gonna be "if you want low end, there's gonna be $250 3080s on ebay soon" lol remember $200 radeon 290s? lmfao
|
# ? Oct 4, 2022 18:47 |
|
AMD is going to release a $299 card with 16gb and kill whatever intel just put out
|
# ? Oct 4, 2022 18:53 |
|
it looks like the RTX A6000 is going for around $5,000. At that price, I'm wondering if it makes more sense to get a Mac Pro. Didn't someone in this thread mention that the Mac Pro shares RAM between the GPU and CPU? But I'm getting confusing results when searching: the Mac Pro builder makes it look like it has separate VRAM / regular RAM, but Google claims it does share?
|
# ? Oct 4, 2022 18:59 |
|
Chainclaw posted:it looks like the RTX A6000 is going for around $5,000. At that price, I'm wondering if it makes more sense to get a Mac Pro. Didn't someone in this thread mention that the Mac Pro shares ram between the GPU and CPU? But I'm getting confusing results on searching, the Mac Pro builder makes it look like it has separate vram / regular ram, but google claims it does share?

AFAIK it's all unified RAM at this point.
|
# ? Oct 4, 2022 19:01 |
|
Chainclaw posted:it looks like the RTX A6000 is going for around $5,000. At that price, I'm wondering if it makes more sense to get a Mac Pro. Didn't someone in this thread mention that the Mac Pro shares ram between the GPU and CPU? But I'm getting confusing results on searching, the Mac Pro builder makes it look like it has separate vram / regular ram, but google claims it does share?

The current Mac Pro (released Dec 2019) is one of the last remaining Intel holdouts with separate CPU & GPU. It looks like Apple is holding off until they're able to put together an "Extreme" version of the M2 with 48 cores before they're going to update the Mac Pro to their ARM chips with unified RAM. All of their laptops, the Mac Studio and the current iMac releases have moved to the ARM chip with unified RAM though.
|
# ? Oct 4, 2022 19:06 |
|
The Clap posted:The current Mac Pro (released Dec 2019) is one of the last remaining Intel holdouts with separate CPU & GPU. It looks like Apple is holding off until they're able to put together an "Extreme" version of the M2 with 48 cores before they're going to update the Mac Pro to their ARM chips with unified RAM. All of their laptops, the Mac Studio and the current iMac releases have moved to the ARM chip with unified RAM though.

Someone pointed me at the Studios and they seem to have the integrated memory. The main gotcha I think is Nvidia puts out a ton of cool software that only runs on Nvidia GPUs, but building an Nvidia machine would probably cost like $10k, when an equivalent Mac Studio would be like $7k. I think more stuff works on Nvidia besides just first party Nvidia software, too.
|
# ? Oct 4, 2022 19:13 |
|
Scoss posted:Is the upcoming AMD reveal likely to be limited only to bigass flagship enthusiast cards like Nvidia?

AMD has Navi 33 for midrange (7700 XT or 7600 XT tier depending on how they do it) but they opted to push it back to next year and launch the high-end parts first. I think the implication is they don't want to be fighting miner inventory either... like I said, I think that one's lose-lose: if you undercut pricing then miners will adapt and undercut back, since $200 is better than $0 for them. AMD has the benefit that they don't have to ditch a bunch of their own last-gen inventory on top of that, though... ideally, NVIDIA really needs to time it so that they sell through about the same time as miner inventory starts to clear, and I'm not sure that's possible; they'd really have to move a lot of inventory.

I'm guessing we don't see Navi 33 in much volume until Q2 next year. It's hard to say when they'll do the announcement/launch because timelines are so squishy. They could honestly launch it as soon as CES, but in that case I'd expect a very protracted launch, with reviews releasing in like mid/late Feb and cards on the market no earlier than late Feb/early March, very little inventory in March, and things really firming up in April. Or they could do the announcements in Feb and do a firmer launch in mid or late March. Or they could slide things back a little further and announce in March. Those two scenarios (CES announce vs Feb/March announce) would be my guesses at this point, but there aren't rumors on this that I've seen, just my guess from entrail-reading.

Chainclaw posted:it looks like the RTX A6000 is going for around $5,000. At that price, I'm wondering if it makes more sense to get a Mac Pro. Didn't someone in this thread mention that the Mac Pro shares ram between the GPU and CPU? But I'm getting confusing results on searching, the Mac Pro builder makes it look like it has separate vram / regular ram, but google claims it does share?

The Mac Pro hasn't been updated with M1/M2 processors yet; it still uses Intel (not sure if it's Skylake-SP or Ice Lake-SP at this point), so no shared RAM there. Right now the Mac Studio with M1 Ultra (2x M1 Max chiplets on a package) is the beefiest thing Apple offers. But yes, the M1/M2 series do share RAM between all the processors/coprocessors... the CPU, GPU, and NPU are all attached to a single memory controller and have their own ports, but not all the ports can saturate the whole controller. So you're in a situation where the CPU could potentially touch 64GB of memory but caps out at 1/4 the bandwidth the GPU can access.

Note that the ML ecosystem isn't highly evolved for Apple yet, and I think that's especially true for training; inference is a lot easier (I know there's Stable Diffusion running on M1). A lot of the tooling basically still assumes NVIDIA since that's the industry standard. And out of all Apple's performance claims, the GPU ones are the most dubious... I don't think many actual benchmarks have borne out the "3090 performance" claim, but it's hard to validate because not a lot of software runs on AMD anyway. Be sure to check what types of math (bfloat, etc) your particular training approach uses and that Apple supports those with an acceptable level of performance, because that varies too.
|
# ? Oct 4, 2022 19:20 |
|
Could orders please open up :/ sigh. What 4090 models are y'all getting? I have no idea what the preferred SKUs are.
|
# ? Oct 4, 2022 21:08 |
|
repiv posted:brief XeSS comparison in the new DF direct

motion handling "XeSS [on non-intel hardware] is still pretty good here I would say" voiced over this image:

That's not "pretty good", it's hilariously awful ghosting. Might as well be playing Snake! with those motion trails...
|
# ? Oct 4, 2022 21:10 |
|
Harik posted:motion handling "XeSS [on non-intel hardware] is still pretty good here I would say" voiced over this image:

That’s very reminiscent of what you get in Wonderlands with FSR2. Sadly the latest update broke the DLSS hack.
|
# ? Oct 4, 2022 21:13 |
|
Is there any competitive advantage to running CS:GO or Valorant or whatever at 240+ fps? Maybe this is more Games forum material, but Hardware Unboxed and Gamers Nexus talk about playing games at 300 FPS, and I really have to wonder if this stuff makes or breaks noscope headshot performance on a noticeable level, or if any competitive advantage it brings is lost when playing online with latency.
|
# ? Oct 4, 2022 21:13 |
|
buglord posted:Is there any competitive advantage to running CS:GO or Valorant or whatever at +240fps? Maybe this is more Games forum related but Hardware Unboxed and Gamers Nexus talk about playing games at 300FPS and I really have to wonder if this stuff makes or breaks noscope headshot performance on a noticeable level or if any competitive advantage it brings is lost when playing online with latency.

Higher framerate = more responsive, and in a split-second reaction shooter like CSGO or Valorant that stuff can matter, at least when you're not in your mid-30s with wrists of dust.
|
# ? Oct 4, 2022 21:16 |
|
njsykora posted:Higher framerate = more responsive and when you're in a split second reaction shooter like CSGO or Valorant that stuff can matter when you're not in your mid-30s with wrists of dust.

don't most modern games decouple render framerate from input polling framerate? If you're rendering at, say, 300 FPS, chances are the game isn't polling input at 300 FPS. If you're playing Quake 2 or something built on ancient tech, then yeah, rendering and gameplay logic loops will be coupled.
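For reference, the decoupling being described is usually the classic fixed-timestep accumulator. This is a made-up minimal sketch, not any particular engine's code: the simulation (and whatever input sampling is tied to it) runs at a fixed rate while rendering runs as fast as it can.

```c
#include <assert.h>

/* Minimal sketch of a decoupled loop (hypothetical, not any real
   engine's code): each render frame feeds its duration into an
   accumulator, and simulation/input runs a whole number of fixed ticks. */
typedef struct {
    double accumulator; /* unsimulated time carried between frames */
    double tick_dt;     /* fixed simulation step, e.g. 1/128 s */
} SimClock;

/* Returns how many fixed-rate gameplay/input ticks to run this frame. */
int sim_ticks_for_frame(SimClock *c, double frame_dt) {
    int ticks = 0;
    c->accumulator += frame_dt;
    while (c->accumulator >= c->tick_dt) {
        c->accumulator -= c->tick_dt;
        ticks++;
    }
    return ticks;
}

/* Simulate rendering `frames` frames at a given fps against a fixed
   sim rate; the tick count depends on elapsed time, not render rate. */
int total_ticks_at(double render_fps, double sim_hz, int frames) {
    SimClock c = { 0.0, 1.0 / sim_hz };
    int total = 0;
    for (int f = 0; f < frames; f++)
        total += sim_ticks_for_frame(&c, 1.0 / render_fps);
    return total;
}
```

Rendering one second at 300 fps against a 128 Hz simulation here still produces only ~128 gameplay updates; whether that matters for feel is a separate question from render latency.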
|
# ? Oct 4, 2022 21:18 |
|
Chainclaw posted:don't most modern games decouple render framerate from input polling framerate? If you're rendering, say, 300 FPS chances are the game isn't polling input at 300 FPS.

Rendering is an inherent part of input lag since input lag is measured from an action being input to the display of that action. The higher the frame rate, the sooner that action will be shown on screen and the more responsive a game will feel. For an extreme example, try capping your favorite game to 30 fps and see how different it feels.
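Back-of-the-envelope version of this, with made-up numbers for every stage except the render interval (1000/fps). Only the frame-time term changes with fps, but it dominates the chain at low framerates:

```c
#include <assert.h>

/* Toy input-to-photon chain. The device/game/display stage numbers
   are invented for illustration; only the render interval is exact. */
double frame_ms(double fps) { return 1000.0 / fps; }

double chain_ms(double fps) {
    double device_ms  = 1.0; /* click to USB packet (made up) */
    double game_ms    = 5.0; /* sim + command submit (made up) */
    double display_ms = 3.0; /* panel response (made up) */
    return device_ms + game_ms + frame_ms(fps) + display_ms;
}
```

The difference between 30 and 300 fps in this model is ~30 ms of the total chain, which is why the "cap your game to 30 fps" test is so obvious to the hand and eye.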
|
# ? Oct 4, 2022 21:22 |
|
I remember in Quake 3 there were certain ledges you could only jump to if your frame rate was capped at 125 fps. 333 fps let you jump higher, but online play limited you to 125.
|
# ? Oct 4, 2022 21:25 |
|
Yes, it does make a difference. If nothing else, opposing players will become visible from around corners earlier the higher your framerate is. The greater your fps is over your opponent's, the more time you have to react before they do. This is in addition to seeing your physical actions correspond to in-game actions more quickly, leading to you not having to overcorrect as much since there is less "lag time" between performing your action and seeing the result on the screen.
|
# ? Oct 4, 2022 21:30 |
|
are we polling input now or just getting events as they come in? Because there was a big to-do about implementing a type of nested IO wait on Linux to try to better emulate how Windows event loops work for Wine gaming, so I thought everything was event-driven in modern engines.
|
# ? Oct 4, 2022 21:31 |
|
Dr. Video Games 0031 posted:Rendering is an inherent part of input lag since input lag is measured from an action being input to the display of that action. The higher the frame rate, the sooner that action will be shown on screen and the more responsive a game will feel. For an extreme example, try capping your favorite game to 30 fps and see how different it feels.

Yeah in that range, but the question was if it mattered at 300 FPS. If the game is polling input at 30 or 60 FPS, running 300 FPS locally won't give you a competitive edge. Sure it will look way nicer, and that's what I care more about, but the question was if it helped competitive gamers.
|
# ? Oct 4, 2022 21:33 |
|
Chainclaw posted:Yeah in that range, but the question was if it mattered at 300 FPS. If the game is polling input at 30 or 60 FPS, it won't matter from a competitive edge if you're running 300 FPS locally. Sure it will look way nicer, and that's what I care more about, but the question was if it helped competitive gamers.

Polling rates in competitive games are much faster than 30 or 60fps. Input lag is measurably better at higher frame rates in most popular competitive games, though there have been a few oddball examples like Apex Legends (which is still using the Source Engine, god bless it).
|
# ? Oct 4, 2022 21:44 |
|
embargoes on Arc 7 are up tomorrow https://twitter.com/VideoCardz/status/1577400695728312348
|
# ? Oct 4, 2022 21:57 |
|
Paul MaudDib posted:I'm guessing we don't see Navi 33 in much volume until Q2 next year. It's hard to say when they'll do the announcement/launch because timelines are so squishy, they could honestly launch it as soon as CES, but in that case I'd expect it to be a very protracted launch with reviews releasing in like, mid/late feb and cards on the market no earlier than late feb/early march, with very little inventory in march and things really firming up in April. Or they could do the announcements in Feb and do a firmer launch in mid or late March. Or even slide things back a little further and announce in March, even. Those two scenarios (CES announce vs Feb/March announce) would be my guesses at this point, but there aren't rumors on this that I've seen, just my guess from entrail-reading.

it'd make sense since they should be announcing a bunch of laptop chips like they always do & we already know that Navi 33 mobile is going to get a big push for laptops
|
# ? Oct 4, 2022 21:58 |
|
Got my 1660 for sale if anyone's looking for a starter GPU for their kid's fortnite box or something. You won't be able to reply though. I have PMs here and my twitter is @adequate_scott https://forums.somethingawful.com/showthread.php?threadid=4013976 TheScott2K fucked around with this message at 23:36 on Oct 4, 2022 |
# ? Oct 4, 2022 23:12 |
|
Chainclaw posted:Yeah in that range, but the question was if it mattered at 300 FPS. If the game is polling input at 30 or 60 FPS, it won't matter from a competitive edge if you're running 300 FPS locally. Sure it will look way nicer, and that's what I care more about, but the question was if it helped competitive gamers.

It's a measurable difference, but not really a meaningful one. Still, if you're a competitive gamer and your livelihood depends on how fast you click people's heads, it'd be very stupid not to take every millisecond you can possibly get.
|
# ? Oct 4, 2022 23:27 |
|
Dr. Video Games 0031 posted:Polling rates in competitive games are much faster than 30 or 60fps. Input lag is measurably better at higher frame rates in most popular competitive games. Though there have been a few oddball examples like Apex Legends (which is still using the Source Engine, god bless it)

I saw the other day that there are apparently arguments about whether the Apex Legends engine is really still the Source engine because it’s so heavily modified. I’ll just confuse things further by claiming it’s the Quake I engine.
|
# ? Oct 5, 2022 00:10 |
|
respawn made it clear in GDC talks that massive chunks of the engine (rendering, collision, streaming...) were rewritten from scratch as early as titanfall 1, and it probably needed further work to support the big open apex maps.

the titanfall/apex engine is source in the same way the modern COD engine is idtech: there's a direct lineage there, but it's been ship of theseus'd at this point
|
# ? Oct 5, 2022 00:21 |
|
If it still has the inverse square root function then it’s idtech, easy call.
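The function in question is the famous fast inverse square root from the released Quake III Arena source (id Tech 3). Here it is with the original pointer-cast type pun swapped for memcpy so it's well-defined in modern C, and minus the legendary comments:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Fast inverse square root as found in the Quake III Arena source,
   with memcpy replacing the original (undefined-behavior) pointer
   casts. Approximates 1/sqrt(number). */
float Q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y  = number;
    uint32_t i;
    memcpy(&i, &y, sizeof i);       /* reinterpret the float's bits */
    i = 0x5f3759df - (i >> 1);      /* the magic constant */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - (x2 * y * y));  /* one Newton-Raphson refinement */
    return y;
}
```

One Newton-Raphson step after the bit hack gets the result within roughly 0.2% of the true value, which was plenty for normalizing lighting vectors.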
|
# ? Oct 5, 2022 00:23 |
|
if you want to see what a battle royale actually looks like on vanilla source just look to CS:GO danger zone, which feels like a janky mod that's barely keeping the engine from catching fire despite the map and player count being tiny by usual BR standards
|
# ? Oct 5, 2022 00:25 |
|
The game engine of Theseus
|
# ? Oct 5, 2022 00:36 |
|
buglord posted:Is there any competitive advantage to running CS:GO or Valorant or whatever at +240fps? Maybe this is more Games forum related but Hardware Unboxed and Gamers Nexus talk about playing games at 300FPS and I really have to wonder if this stuff makes or breaks noscope headshot performance on a noticeable level or if any competitive advantage it brings is lost when playing online with latency.

It's always going to be a meaningful advantage against peers. Frame rate matters a lot. The higher framerate goes, the less the advantage is, but even the ~1ms going from 240 to 300 is going to give you a measurable performance increase.

Chainclaw posted:Yeah in that range, but the question was if it mattered at 300 FPS. If the game is polling input at 30 or 60 FPS, it won't matter from a competitive edge if you're running 300 FPS locally. Sure it will look way nicer, and that's what I care more about, but the question was if it helped competitive gamers.

I don't think there's a game anyone takes seriously that polls at 60hz. In fact, AFAIK every single game polls mouse input at framerate, because the experience of decoupling mouse input from visual output would feel heinous, with horrible stutter inevitably resulting. Even if a game did, 300 FPS would still be a significant advantage, because you'd still be seeing things at least a millisecond faster than someone at a lower refresh rate. It'd just be less consistent in terms of reacting properly.
|
# ? Oct 5, 2022 00:51 |
|
Isn't polling rate controlled by the OS and mouse driver, not the game?
|
# ? Oct 5, 2022 01:16 |
|
K8.0 posted:It's always going to be a meaningful advantage against peers. Frame rate matters a lot. The higher framerate goes the less the advantage is, but even the ~1ms going from 240 to 300 is going to give you a measurable performance increase.

Do you have any research supporting that 1ms makes a measurable difference? Last I looked into it, Olympians have simple reaction times (starting pistol, ruler test, etc) on the order of 0.15s. 1ms would not have a measurable impact on a far more complex action like IFF —> Track —> Fire.
|
# ? Oct 5, 2022 01:33 |
|
I would not say that 1ms on its own has a big impact, but competitive gamers seek to optimize every single part of the chain. Shave 1ms off here, 2ms off there, etc, and eventually it adds up to a pretty big difference. Reaction times are an entirely different story: reacting to a starting pistol and starting a sprint is a very different type of reaction from seeing something on a screen and flicking a mouse. These are not comparable actions, and I would suspect the "far more complex" action can actually be done much faster than starting a sprint. And latency of any kind doesn't overlap with reaction times but adds to them.
|
# ? Oct 5, 2022 01:45 |
|
Shumagorath posted:Do you have any research supporting that 1ms makes a measurable difference? Last I looked into it, Olympians have simple reaction times (starting pistol, ruler test, etc) on the order of 0.15s. 1ms would not have a measurable impact on a far more complex action like IFF —> Track —> Fire.

This comes up a lot, and yes. Latency is additive. Going from a 3ms device latency to a 1ms device latency still removes 2ms from the total round trip time, regardless of whether that total time is 5ms or 500ms.
|
# ? Oct 5, 2022 01:48 |
|
A follow-up question here is: does additional FPS matter beyond your refresh rate?
|
# ? Oct 5, 2022 01:51 |
|
gradenko_2000 posted:A follow up question here is, does additional FPS matter beyond your refresh rate

It depends on the game engine. Usually not these days. Either way, if you're talking 144 (or 141 after "recommendations") then it really doesn't matter. If it does matter, you'll have a coach…
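For what it's worth, the usual "yes, slightly, with vsync off" argument boils down to a toy model: when tearing is allowed, the slice being scanned out comes from a frame that started on average half a render interval ago, so render rates above the refresh rate still shrink the average age of what's on screen. Illustrative arithmetic only, not measurements:

```c
#include <assert.h>

/* Toy model, vsync off: the average age of the pixels hitting the
   panel is about half the render interval, independent of the
   panel's refresh rate. Ignores everything else in the chain. */
double avg_frame_age_ms(double render_fps) {
    return (1000.0 / render_fps) / 2.0;
}
```

By this model, 300 fps on a 144 Hz panel still beats 144 fps on the same panel, but the gain is under 2 ms, which is consistent with "usually doesn't matter unless you have a coach."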
|
# ? Oct 5, 2022 01:53 |
|
KillHour posted:This comes up a lot, and yes. Latency is additive. Going from a 3ms device latency to a 1ms device latency still removes 2ms from the total round trip time, regardless of whether that total time is 5ms or 500ms.

I’m arguing that 5-10ms of lag (input, network, whatever) is less impactful than many here are claiming it is outside of very simple games. Say your game is two players having a dot appear on screen and the fastest on-target click wins. In that scenario every millisecond matters. In a game where both players have free movement in 3-space, audio cues, materials, etc then quality of play is going to dominate minor latency gaps in everything but hypothetical quickdraws.

Shumagorath fucked around with this message at 01:57 on Oct 5, 2022 |
# ? Oct 5, 2022 01:55 |
|
I’m guessing there aren’t CRTs that are fast enough at 1080p-ish, but if they existed, I wonder if pros would stick to CRTs to get around sample and hold blur, which is honestly a big asterisk for high refresh rate LCDs.
|
# ? Oct 5, 2022 01:58 |
|
Shumagorath posted:I’m arguing that 5-10ms of lag (input, network, whatever) is less impactful than many here are claiming it is outside of very simple games. Say your game is two players having a dot appear on screen and the fastest on-target click wins. In that scenario every millisecond matters. In a game where both players have free movement in 3-space, audio cues, materials, etc then quality of play is going to dominate minor latency gaps in everything but hypothetical quickdraws.

Input lag is to an extent additive with reaction times too. Two people can have the same reaction time of 100ms, but if one has 10ms less input lag then they are going to react faster.
|
# ? Oct 5, 2022 02:01 |
|
Shumagorath posted:Do you have any research supporting that 1ms makes a measurable difference? Last I looked into it, Olympians have simple reaction times (starting pistol, ruler test, etc) on the order of 0.15s. 1ms would not have a measurable impact on a far more complex action like IFF —> Track —> Fire.

The person with a faster system gets to react with a lag advantage, making success in reaction-time contests against their opponents more likely. So, yes, it would make a measurable impact.
|
# ? Oct 5, 2022 02:02 |