|
Why would they design a benchmark where the weather changes on some runs and doesn't on others?
|
# ? Oct 7, 2018 09:16 |
|
|
# ? Jun 11, 2024 13:08 |
|
Cygni posted:I realized after running this a trillion times that the benchmark actually sorta sucks rear end. Not only does the weather change sometimes (totally tanking FPS, I cancelled all those runs), but the NPC locations do too. But whatever, I already ran it a bunch on my Saturday night so I'm posting it. gently caress I need a girlfriend. Good news on the girlfriend front: variability in the benchmark just means you need to run it more times. Also, run it more times with equal latency please.
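The "run it more times" advice can be made concrete. A minimal sketch (pure Python, FPS numbers invented for illustration) that averages repeated runs and throws out any run, like the weather-change ones, that tanks far below the rest:

```python
import statistics

def summarize_runs(fps_runs, outlier_tolerance=2.0):
    """Average repeated benchmark runs, discarding runs that fall more than
    `outlier_tolerance` standard deviations below the median (e.g. runs
    where a weather change tanked the framerate)."""
    median = statistics.median(fps_runs)
    stdev = statistics.stdev(fps_runs)
    kept = [f for f in fps_runs if f >= median - outlier_tolerance * stdev]
    return {
        "mean": statistics.mean(kept),
        "stdev": statistics.stdev(kept) if len(kept) > 1 else 0.0,
        "discarded": len(fps_runs) - len(kept),
    }

# Nine normal runs plus one rainy-weather run that tanked:
runs = [112, 115, 110, 114, 111, 113, 116, 109, 112, 61]
print(summarize_runs(runs))  # discards the 61 FPS outlier
```

The more runs you keep, the tighter the mean gets; the two-sigma cutoff is an arbitrary choice here, not anything the benchmark itself does.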
|
# ? Oct 7, 2018 09:27 |
|
Paul MaudDib posted:Yup, and even 10+ is a liiiittle below 14++ in that chart, let alone if the 9-series pushes 14++ a little farther, or if Intel doesn't hit their performance targets for 10+ (I remember hearing they were backing down their ambitions for 10nm a bit so they can actually get it into production). SemiAccurate claims that Intel has abandoned the original 10nm for 12nm (which they'll still call 10nm) for their next process shrink. That performance regression graph above is over a year old now, so the figures on it for 10nm aren't going to line up with the "10nm" Intel eventually releases.
|
# ? Oct 7, 2018 11:28 |
|
I was going to post about how Intel surely has a huge pile of assets/cash, so maybe they can just accept the R&D penalty and fling themselves into whatever improvements their lineup needs, but Google seems to say they've given up over ten billion dollars since 2016? EDIT: well, wiki seems to think they have mad dough anyway Surprise Giraffe fucked around with this message at 12:15 on Oct 7, 2018 |
# ? Oct 7, 2018 12:12 |
|
Scalding Coffee posted:I knew about this, but for some reason, it was turned off and set to some weird number. The computer doesn't feel so sluggish after turning it on. A shame the RAM sticks are old and dying or I might be able to get another year out of it. Nowadays, the only reason it "needs" to be large is that Windows can copy stale pages over, to be able to dump them quickly without an outstanding paging op for when you actually require the page file. That and dumping memory on BSODs. So long as your RAM usage isn't hitting the ceiling, the page file doesn't really do anything. Switching between a fixed 16GB allocation and automatic management shouldn't make a difference in feel. Also, size isn't really related to RAM. I have it set to automatic management and it's just 4.8GB, on a machine with 32GB RAM. Paul MaudDib posted:Software will get written/rewritten over time to be more thread-friendly, because there is no other way forward. But unfortunately that's what they said 10 years ago too. At least rewriting software to be CCX-friendly is probably easier than rewriting it to be thread-friendly in the first place. Combat Pretzel fucked around with this message at 12:54 on Oct 7, 2018 |
# ? Oct 7, 2018 12:45 |
|
I just realized my motherboard has been running my 3000MHz RAM at 2100 and I just assumed that's what I bought because my brain is poo poo. Why would it do this when set to auto?
|
# ? Oct 7, 2018 15:57 |
|
68Bird posted:Can anyone suggest some quick things to check regarding low fps with a new 2080 ti? I deleted the old drivers via the windows control panel, installed new ones, updated my mobo bios just for shits, but I'm still seeing numbers way lower than they should be according to what other people have gotten with older cards. Processor is a 7700k, ram is 32gb of ddr4-3200. I only have Far Cry 5 and Ghost Recon Wildlands to benchmark with, but I don't see CPU or video card temps getting out of hand, so I'm thinking it's either some kind of weird software issue or I'm having power supply issues. The PSU is a 660w 80+ platinum seasonic, so I feel like it should be fine, but I'm downright confused as to what else could be an issue. Only thing I see is uninstalling drivers via Control Panel; personally I use DDU (Display Driver Uninstaller), works great. Edit: also I assume you have MSI Afterburner for checking that your card is running at proper speeds etc. Comfy Fleece Sweater fucked around with this message at 16:09 on Oct 7, 2018 |
# ? Oct 7, 2018 16:01 |
|
Aeka 2.0 posted:I just realized my motherboard has been running my 3000mhz ram at 2100 and I just assumed that's what i bought because my brain is poo poo. Why would it do this when set to auto? Because auto is the JEDEC standard that guarantees compatibility; to actually get what the manufacturer claims, you have to switch it from auto to XMP.
|
# ? Oct 7, 2018 16:04 |
|
The BIOS knows what the CPU officially supports and doesn't/shouldn't default to the XMP profile. I used to run my memory at the default 2400 until I saw somewhere that XMP is validated and not just a guess, and it's been stable for a year now at 2666. XMP should be perfectly stable, but the default won't be overclocked.
|
# ? Oct 7, 2018 16:09 |
|
Last time I put on XMP it crashed. hmm. edit: seems to be fine now. Thanks. Aeka 2.0 fucked around with this message at 17:56 on Oct 7, 2018 |
# ? Oct 7, 2018 17:49 |
|
Then it's likely your RAM is bad and doesn't run at the advertised speeds.
|
# ? Oct 7, 2018 17:50 |
|
Getting a noticeable improvement on the 2080 ti from my old 1080. Also, I was able to plug my Oculus into the usb-c connector using an adapter, so that's cool.
|
# ? Oct 7, 2018 17:53 |
|
Deviant posted:Getting a noticeable improvement on the 2080 ti from my old 1080. Which adapter did you use? I heard the Apple one works. I’m hoping to use one for the same reason.
|
# ? Oct 7, 2018 18:22 |
|
I've used a generic Chinese adapter to plug my Odyssey into the tablet and it worked perfectly fine, if we ignore that it had an order of magnitude lower performance than required to actually play something.
|
# ? Oct 7, 2018 18:26 |
|
|
Deviant posted:Getting a noticeable improvement on the 2080 ti from my old 1080. Do you know if there is any way to do that with the Vive? I've been looking for an answer but am unable to find anything.
|
# ? Oct 7, 2018 19:23 |
|
Pure anecdote here, but using NVENC with OBS I was able to stream near-flawless 1080p60 to Twitch without any drops or really visible artifacts, outside of downconverting from 1440p at 6 Mbps. I haven't pushed the limits but it's clearly an improvement over the last encoder.
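For a sense of why encoder efficiency matters so much here, a quick back-of-the-envelope (the 6 Mbps figure is from the post above; the rest is just arithmetic, not anything Twitch or OBS specific):

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """How many bits the encoder can spend on each pixel of each frame."""
    return bitrate_bps / (width * height * fps)

# Twitch's ~6 Mbps cap at 1080p60 leaves under 0.05 bits per pixel,
# which is why a better encoder is so visible at the same bitrate:
print(round(bits_per_pixel(6_000_000, 1920, 1080, 60), 3))  # 0.048
```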
|
# ? Oct 7, 2018 21:02 |
|
Can I put an Nvidia bios on my Zotac or is that a *BAD IDEA*?
|
# ? Oct 7, 2018 23:18 |
|
Kagemusha posted:Which adapter did you use? I heard the Apple one works. I'm hoping to use one for the same reason. I only used a USB to USB-C adapter, but I have seen confirmation the Apple one works.
|
# ? Oct 7, 2018 23:49 |
|
Paul MaudDib posted:Yup, and even 10+ is a liiiittle below 14++ in that chart, let alone if the 9-series pushes 14++ a little farther, or if Intel doesn't hit their performance targets for 10+ (I remember hearing they were backing down their ambitions for 10nm a bit so they can actually get it into production). fwiw this latest console generation has at least pushed devs to start optimizing across threads more, since the PS4 and XB1 both use gimped octo-core CPUs. The PS5 and whatever Xbox equivalent will probably use a down-clocked Ryzen. ConanTheLibrarian posted:SemiAccurate claims that Intel has abandoned the original 10nm for 12nm (which they'll still call 10nm) for their next process shrink. That performance regression graph above is over a year old now, so the figures on it for 10nm aren't going to line up with the "10nm" Intel releases. IIRC they weren't ever able to get graphics working on the 10nm Cannon Lake. They only ever released the i3-8121U out of the process, which is a laptop CPU that doesn't have integrated graphics, which is good for an il serpente cosmico fucked around with this message at 00:03 on Oct 8, 2018 |
# ? Oct 7, 2018 23:58 |
|
NewFatMike posted:To back up the effort post, Intel even projected a performance regression at 10nm compared to 14+++:
|
# ? Oct 8, 2018 00:51 |
|
il serpente cosmico posted:fwiw this latest console generation has at least pushed devs to start optimizing across threads more, since the PS4 and XB1 both use gimped octo-core CPUs. The PS5 and whatever Xbox equivalent will probably use a down-clocked Ryzen. The uncertainty is whether they're going to make a GCN GPU for the next generation of consoles that actually scales higher, assuming they don't just use the same amount of GPU power as the Bone X; so we may run into the opposite problem.
|
# ? Oct 8, 2018 01:43 |
|
1gnoirents posted:Pure anecdote here but using NVENC with OBS I was able to stream near flawless 1080p60fps to Twitch without any drops or really visible artifacts outside of downconverting from 1440p at 6 mbps. I havent pushed the limits but its clearly an improvement over the last encoder I wish there was a way for them to update the NVENC encoder for the Pascal series through just a driver update, but I believe all the gains they made were physical hardware changes.
|
# ? Oct 8, 2018 01:50 |
|
Anime Schoolgirl posted:On GF/Samsung 14nm+ they could mass-produce 2.8GHz sustained base clock Ryzen at 45W, which is still over 2.5x as fast as the Jaguar on the Bone X. So assuming they're willing to give the CPU more than a 20W power budget (and they should) we should not have problems on that front. otoh it may just make them use lower numbers of threads again. I wouldn't be surprised if they underbudget the GPU but design it in such a way to easily double it down the line to make a Pro / X again. No one seems to care a whole lot about hitting 60FPS or 4K without some kind of reconstruction technology this generation, so we'll see how this next generation goes. Plus there's the question of ray tracing hardware.
|
# ? Oct 8, 2018 02:01 |
|
Ray tracing is unlikely to happen on the incoming consoles; it seems too early. The hardware is probably close to being locked down for both console vendors, assuming they are targeting 2020. If Nvidia is making people pay $700+ just for the lower tier of RTX GPUs, there is no feasible way AMD would have raytracing-capable GPUs in consoles that typically sell for half that. I genuinely don't believe AMD is capable of coming up with their own solution to ray tracing in such a short amount of time; they don't have the resources for it.
|
# ? Oct 8, 2018 08:32 |
|
Zedsdeadbaby posted:Ray tracing is unlikely to happen on the incoming consoles, it seems too early. The hardware is probably close to being locked down for both console vendors assuming they are targeting 2020. If Nvidia is making people pay $700+ just for the lower tier of RTX GPUs there is no feasible way AMD would have raytracing-capable GPUs in consoles that typically sell for half that. I genuinely don't believe AMD is capable of coming up with their own solution to ray-tracing in such a short amount of time, they don't have the resources for it. Ray tracing might not even be feasible yet in the PC realm of things. Who knows what the final tweaked results will be, but initial unoptimized builds of games with ray-traced reflections during the Nvidia event were only getting around 50-60fps at 1080p. It's probably gonna be a while until ray tracing is truly mainstream.
|
# ? Oct 8, 2018 08:38 |
|
https://www.youtube.com/watch?v=SrF4k6wJ-do I found this analysis quite enlightening. RTX currently does reflections and shadows with ray tracing at one sample per pixel. Even that seems to be very performance-intensive. On the other hand, it makes these effects easy to implement on the software side. The current rasterization tricks, while sometimes crummy approximations, are very honed tricks. Between console releases, 4K requirements and the current limitations of the RTX implementation, I don't think path tracing will be very important to gamers for the next couple of years. Might be a real boon for 3D graphics guys who want to preview in close to real time!
|
# ? Oct 8, 2018 08:56 |
|
LRADIKAL posted:https://www.youtube.com/watch?v=SrF4k6wJ-do I mean, RTX is also a crummy approximation, it's just a much better crummy approximation than rasterization. But yeah, requiring at least one ray per pixel (two for optimal results) is a massive improvement from hundreds+, but it still hits the shaders pretty hard, especially at 4K.
|
# ? Oct 8, 2018 09:17 |
|
I know it is too early and we have to wait for benchmarks and so on, but do you guys think that the 2070 will be at least decent in this RT stuff, or it'll be like trying to run the latest games on a 5 year old computer? (working, but definitely not enjoyable) I have just recently bought a 1080 FTW that I could potentially step-up to a 2070, if EVGA offers it... I'm definitely not going for the 2080 or 2080Ti for budget reasons, but if a 2070 is better in normal games than the 1080 (for 1440p) and also has decent raytracing capabilities I could fork over the 100-150€ or so to step up and "futureproof" a little, even if it means paying double taxes... TorakFade fucked around with this message at 10:16 on Oct 8, 2018 |
# ? Oct 8, 2018 09:58 |
|
TorakFade posted:I know it is too early and we have to wait for benchmarks and so on, but do you guys think that the 2070 will be at least decent in this RT stuff, or it'll be like trying to run the latest games on a 5 year old computer? (working, but definitely not enjoyable) Extrapolating from the 2080/2080 Ti, the 2070 should be slightly faster than the 1080 except in 4K HDR and game engines that make use of double-rate FP16 (Wolfenstein 2 / id Tech 6), where the 2070 should pull ahead quite a bit. The current tensor core extras are DLSS, a faster, slightly lower-quality TAA alternative, and raytracing acceleration, which might be neat if you're into maximising IQ, framerate be damned. If that sounds good, go for it. Else put the money towards the next gen.
|
# ? Oct 8, 2018 10:53 |
|
^ AMD has had two generations of open source ray tracing libraries. https://github.com/GPUOpen-LibrariesAndSDKs/RadeonRays_SDK Their existing tech can do like ~80 to 100 MEGARAYS per second on not-latest hardware from what I've seen. Of course, they don't have a GPU with 1/3rd of its die dedicated to Tensor.
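For a rough sense of scale against that ~80 to 100 Mrays/s figure, here's the raw budget for tracing one ray per pixel every frame. This ignores bounces, shadow rays and denoising, so it's purely illustrative, but it shows why even 1080p60 at one sample per pixel is already out of reach at that rate:

```python
def rays_per_second(width, height, samples_per_pixel, fps):
    """Raw ray budget needed to trace every pixel each frame."""
    return width * height * samples_per_pixel * fps

# 1080p at one sample per pixel, 60 fps:
budget_1080p = rays_per_second(1920, 1080, 1, 60)   # ~124 Mrays/s
# 4K doubles both dimensions, quadrupling the budget:
budget_4k = rays_per_second(3840, 2160, 1, 60)      # ~498 Mrays/s
print(budget_1080p / 1e6, budget_4k / 1e6)
```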
|
# ? Oct 8, 2018 15:13 |
|
il serpente cosmico posted:fwiw this latest console generation has pushed at least pushed devs to start optimizing across threads more, since the PS4 and XB1 both use gimped octo-cores CPUs. The PS5 and whatever Xbox equivalent will probably use a down-clocked Ryzen. "this latest console generation" = the last 5 years. PS4 and XB1 launched in 2013. Which, remember, was what people were saying about Bulldozer as well. "Consoles have more threads now, we'll see games start threading more heavily". Single-threaded performance is still the dominant factor in gaming though. As long as you have enough threads to offload the satellite work, it all comes down to how fast you can churn through the main game loop, which is single-threaded. Amdahl's law and all - reduce the time spent in the multithreaded portion of the program to ~zero and what you're left with is the single-threaded portion, which dominates your run-time. Paul MaudDib fucked around with this message at 15:44 on Oct 8, 2018 |
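To put numbers on the Amdahl's law point, a tiny sketch (the 60% parallel fraction is an arbitrary illustration, not a measured figure for any game):

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Amdahl's law: overall speedup when only `parallel_fraction`
    of the work can be spread across `n_threads`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# If 60% of a frame's work threads perfectly, even infinite cores
# cap out at 2.5x -- the single-threaded main loop dominates:
for n in (2, 4, 8, 10**9):
    print(n, round(amdahl_speedup(0.6, n), 2))  # 1.43, 1.82, 2.11, 2.5
```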
# ? Oct 8, 2018 15:23 |
|
I'd really like it if AMD pulls a 9000 series again and manages to put raytracing into mostly a contemporary GPU chip and dumps that poo poo on the market for $300-600 for the high end options. Of course, I know now that even if they do that, nvidia will just keep selling GPUs because consumers are incredibly good at self-ownage.
|
# ? Oct 8, 2018 15:27 |
|
Paul MaudDib posted:"this latest console generation" = the last 5 years. PS4 and XB1 launched in 2013. Jeeze, has it really been that long? Seems like only a few months ago I was laughing at how the they named the third xbox the xbox one and then proceeded to put the wrong ram into it. I don't pay much attention to AMD cards, what does this whole Polaris 30 stuff mean? New cards? Confusing names? Lower/higher prices? I have no idea. I like my clear cut generation names.
|
# ? Oct 8, 2018 15:34 |
|
It's probably just a polaris refresh on 14nm+ process.
|
# ? Oct 8, 2018 15:37 |
|
Broose posted:I don't pay much attention to AMD cards, what does this whole Polaris 30 stuff mean? New cards? Confusing names? Lower/higher prices? I have no idea. I like my clear cut generation names. 0-10% higher performance and a return to MSRP pricing. 12nm is a 14++, they are probably just running off the existing die on a tighter process. 670 is described as being "in the $200 segment", which was originally the price-point of the 480, which the 670 will probably barely match up against, even assuming they hit the 10% mark. "Polaris 20" was another one of these process tweaks, and ended up being a big wet fart. AMD does this every time the prices start to drift too low - once upon a time you could pick up a 480 4 GB for $150 and a 480 8 GB for $175, and the historical low for the 480 8 GB was $130. Then they introduced the 580 to drag prices back up to $250. Same thing they did with the 390 as well, change the number and add $100 to the price (hilarious thing is they actually reduced BOM - by that point the smaller VRAM modules were almost out of production and actually more expensive than the bigger ones with better volume). VRAM prices went up a lot (vs 2016) and the tariffs are in play now, so it's understandable they need to crank the price, but it still sucks. Paul MaudDib fucked around with this message at 17:03 on Oct 8, 2018 |
# ? Oct 8, 2018 15:50 |
|
The latest GeForce Experience made it so the useless notification settings here get turned back on after every restart. Ughh. Also, I can't change microphone settings at all anymore, it seems. Such a garbage program.
|
# ? Oct 8, 2018 19:02 |
|
OK, question, how expensive are shaders for texture operations? What I've been wondering for a while now is, why aren't they using a shader to sample random bits of a large texture, e.g. guided by procedural cells or whatever, to shift UVs and blend on transitions, to break up repeating patterns, say a road surface?
|
# ? Oct 8, 2018 19:03 |
|
Combat Pretzel posted:OK, question, how expensive are shaders for texture operations? What I've been wondering for a while now is, why aren't they using a shader to sample random bits of a large texture, e.g. guided by procedural cells or whatever, to shift UVs and blend on transitions, to break up repeating patterns, say a road surface? p sure this is done when it makes sense. can get "free" interpolation from the texture samplers
|
# ? Oct 8, 2018 19:06 |
|
|
Combat Pretzel posted:OK, question, how expensive are shaders for texture operations? What I've been wondering for a while now is, why aren't they using a shader to sample random bits of a large texture, e.g. guided by procedural cells or whatever, to shift UVs and blend on transitions, to break up repeating patterns, say a road surface? Funny you should say that, Unity just recently published a technique to generate infinite non-repeating textures from a small example texture, that's fast enough to run inside a realtime material shader https://eheitzresearch.wordpress.com/722-2/ https://rivten.github.io/2018/07/07/tldr-noise-histogram.html The idea has been explored before but previous techniques either didn't replicate the source texture well, or were too slow for realtime. This one is the best of all worlds.
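For anyone wondering what the cell-guided UV shifting from the question looks like in practice, here's a toy CPU version (the function name and parameters are invented for illustration; the actual Heitz/Neyret technique additionally blends three shifted samples and corrects the result's histogram, per the links above):

```python
import numpy as np

def untiled_lookup(texture, u, v, cell_size=0.25, rng_seed=0):
    """Break up visible repetition by hashing each procedural cell to a
    random UV offset and sampling the source texture there. Toy version:
    no blending at cell transitions, so seams are visible."""
    h, w = texture.shape[:2]
    # Which procedural cell does this UV fall in?
    cell = (int(u // cell_size), int(v // cell_size))
    # Hash the cell to a deterministic pseudo-random offset.
    rng = np.random.default_rng(hash(cell) % (2**32) + rng_seed)
    du, dv = rng.random(2)
    # Shifted, wrapped texel coordinates.
    x = int(((u + du) % 1.0) * (w - 1))
    y = int(((v + dv) % 1.0) * (h - 1))
    return texture[y, x]
```

In a real material shader this runs per fragment and the cell lookup comes nearly free; the expensive part is the extra texture fetches, which is what the Unity write-up optimizes.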
|
# ? Oct 8, 2018 19:09 |