Shrimp or Shrimps
Feb 14, 2012


Why would they design a benchmark where the weather changes some runs and doesn't on others?


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Cygni posted:

I realized after running this a trillion times that the benchmark actually sorta sucks rear end. Not only does the weather change sometimes (totally tanking FPS, I cancelled all those runs), but the NPC locations do too. But whatever, I already ran it a bunch on my Saturday night so im posting it. gently caress i need a girlfriend.

Good news on the girlfriend front: variability in the benchmark just means you need to run it more times.

Also, run it more times with equal latency please.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

Paul MaudDib posted:

Yup, and even 10+ is a liiiittle below 14++ in that chart, let alone if the 9-series pushes 14++ a little farther, or if Intel doesn't hit their performance targets for 10+ (I remember hearing they were backing down their ambitions for 10nm a bit so they can actually get it into production).

SemiAccurate claims that Intel has abandoned the original 10nm for 12nm (which they'll still call 10nm) for their next process shrink. That performance regression graph above is over a year old now, so the figures on it for 10nm aren't going to line up with the "10nm" Intel releases.

Surprise Giraffe
Apr 30, 2007
1 Lunar Road
Moon crater
The Moon
I was going to post about how Intel surely has a huge pile of assets/cash, so maybe they can just accept the R&D penalty and fling themselves into whatever improvements their lineup needs, but Google seems to say they've given up over ten billion dollars since 2016?

EDIT: well wiki seems to think they have mad dough anyway

Surprise Giraffe fucked around with this message at 12:15 on Oct 7, 2018

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Scalding Coffee posted:

I knew about this, but for some reason, it was turned off and set to some weird number. The computer doesn't feel so sluggish after turning it on. A shame the RAM sticks are old and dying or I might be able to get another year out of it.

Pagefiles are a load of poo poo these days. If you're actually actively using several gigabytes of swap, you're hosed anyway. That thing made sense back when computers had like 16MB of RAM, and to a certain degree a bit more.

Nowadays, the only reason it "needs" to be large is that Windows can copy stale pages over, to be able to dump them quickly without an outstanding paging op when you actually need the page file. That and dumping memory on BSODs. So long as your RAM usage isn't hitting the ceiling, the page file doesn't really do anything. Switching between a fixed 16GB allocation and automatic management shouldn't make a difference in feel.

Also, size isn't really related to RAM. I have it set to automatic management and it's just 4.8GB, on a machine with 32GB RAM.
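
If you want to sanity-check whether the pagefile is actually being touched, here's a minimal Python sketch. It assumes the third-party psutil package is installed, and the 2 GiB threshold is just an arbitrary number for illustration:

code:
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM:  {ram.used / 2**30:.1f} / {ram.total / 2**30:.1f} GiB used ({ram.percent}%)")
print(f"Swap: {swap.used / 2**30:.1f} / {swap.total / 2**30:.1f} GiB used ({swap.percent}%)")

# If several GiB of swap are in active use, a bigger pagefile won't save you;
# you simply need more RAM.
if swap.used > 2 * 2**30:
    print("Heavy swap use detected")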

Paul MaudDib posted:

Software will get written/rewritten over time to be more thread-friendly, because there is no other way forward. But unfortunately that's what they said 10 years ago too. At least rewriting software to be CCX friendly is probably easier than rewriting it to be thread-friendly in the first place.

CCX friendliness for games should be "easy". Enumerate the topology, and if there are lines drawn somewhere, set thread affinities accordingly. The same would apply for NUMA on a Threadripper (assuming it runs in that mode); the system will take care of memory allocations, and the thread affinities will make sure the threads won't slip onto another die away from their memory. Of course, this'll only work if the game is the only application doing that tomfoolery in the system. Anything beyond that would require OS level control of this poo poo.
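
For a feel of what that looks like in practice, here's a minimal sketch in Python using the third-party psutil package (works on Windows and Linux). The assumption that logical CPUs 0-7 are the first CCX is made up for illustration; real code would have to enumerate the CCX/CCD layout rather than hard-code it:

code:
import psutil

# Assumed layout for illustration only: on a hypothetical 2700X, cores 0-3 plus
# their SMT siblings show up as logical CPUs 0-7 and share one CCX / L3 slice.
FIRST_CCX = list(range(8))

def pin_to_first_ccx():
    """Pin the current process (and all of its threads) to a single CCX so the
    game's threads stay on one L3 slice instead of hopping across the fabric."""
    p = psutil.Process()
    p.cpu_affinity(FIRST_CCX)
    return p.cpu_affinity()

if __name__ == "__main__":
    print("Pinned to:", pin_to_first_ccx())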

Combat Pretzel fucked around with this message at 12:54 on Oct 7, 2018

Aeka 2.0
Nov 16, 2000

:ohdear: Have you seen my apex seals? I seem to have lost them.




Dinosaur Gum
I just realized my motherboard has been running my 3000mhz ram at 2100 and I just assumed that's what i bought because my brain is poo poo. Why would it do this when set to auto?

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

68Bird posted:

Can anyone suggest some quick things to check regarding low fps with a new 2080 ti? I deleted the old drivers via the windows control panel, installed new ones, updated my mobo bios just for shits, but I'm still seeing numbers way lower than they should be according to what other people have gotten with older cards. Processor is a 7700k, ram is 32gb of ddr4-3200. I only have Far Cry 5 and Ghost Recon Wildlands to benchmark with, but I don't see CPU or video card temps getting out of hand, so I'm thinking it's either some kind of weird software issue or I'm having power supply issues. The PSU is a 660w 80+ platinum seasonic, so I feel like it should be fine, but I'm downright confused as to what else could be an issue.

Only thing I see is uninstalling drivers via CP, personally I use DDU uninstaller, works great

Edit: also I assume you have MSI afterburner for checking that your card is running at proper speeds etc

Comfy Fleece Sweater fucked around with this message at 16:09 on Oct 7, 2018

Indiana_Krom
Jun 18, 2007
Net Slacker

Aeka 2.0 posted:

I just realized my motherboard has been running my 3000mhz ram at 2100 and I just assumed that's what i bought because my brain is poo poo. Why would it do this when set to auto?

Because auto is the JEDEC standard that guarantees compatibility; to actually get what the manufacturer claims, you have to switch it from auto to XMP.

craig588
Nov 19, 2005

by Nyc_Tattoo
The BIOS knows what the CPU officially supports and doesn't/shouldn't default to the XMP profile. I used to run my memory at the default 2400 until I saw somewhere that XMP is validated and not just a guess, and it's been stable for a year now at 2666. It should be perfectly stable, but the default won't be overclocked.

Aeka 2.0
Nov 16, 2000

:ohdear: Have you seen my apex seals? I seem to have lost them.




Dinosaur Gum
Last time I put on XMP it crashed. hmm.

edit: seems to be fine now. Thanks.

Aeka 2.0 fucked around with this message at 17:56 on Oct 7, 2018

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib
Then it's likely your RAM is bad and doesn't run at the advertised speeds.

Deviant
Sep 26, 2003

i've forgotten all of your names.


Getting a noticeable improvement on the 2080 ti from my old 1080.

Also, I was able to plug my Oculus into the usb-c connector using an adapter, so that's cool.

Kagemusha
Sep 30, 2005

Don't worry! I'll fix it!

Deviant posted:

Getting a noticeable improvement on the 2080 ti from my old 1080.

Also, I was able to plug my Oculus into the usb-c connector using an adapter, so that's cool.

Which adapter did you use? I heard the Apple one works. I’m hoping to use one for the same reason.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
I've used a generic Chinese adapter to plug my Odyssey into the tablet and it worked perfectly fine, if we ignore that it had an order of magnitude lower performance than required to actually play something.

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."
Scary Hellions / Super Creeps › GPU Megat[H]read - gently caress I need a girlfriend

Partial Octopus
Feb 4, 2006



Deviant posted:

Getting a noticeable improvement on the 2080 ti from my old 1080.

Also, I was able to plug my Oculus into the usb-c connector using an adapter, so that's cool.

Do you know if there is any way to do that with the Vive? I've been looking for an answer but am unable to find anything.

1gnoirents
Jun 28, 2014

hello :)
Pure anecdote here but using NVENC with OBS I was able to stream near-flawless 1080p60 to Twitch without any drops or really visible artifacts outside of downconverting from 1440p at 6 Mbps. I haven't pushed the limits but it's clearly an improvement over the last encoder.

Aeka 2.0
Nov 16, 2000

:ohdear: Have you seen my apex seals? I seem to have lost them.




Dinosaur Gum
Can I put an Nvidia bios on my Zotac or is that a *BAD IDEA*?

Deviant
Sep 26, 2003

i've forgotten all of your names.


Kagemusha posted:

Which adapter did you use? I heard the Apple one works. I’m hoping to use one for the same reason.

I only used a USB to USB-C adapter, but I have seen confirmation the Apple one works.

il serpente cosmico
May 15, 2003

Best five bucks I've ever spend.

Paul MaudDib posted:

Yup, and even 10+ is a liiiittle below 14++ in that chart, let alone if the 9-series pushes 14++ a little farther, or if Intel doesn't hit their performance targets for 10+ (I remember hearing they were backing down their ambitions for 10nm a bit so they can actually get it into production).

I mean I don't think 10+ is going to be a huge regression or anything but it's definitely not like it's going to keep pushing upwards either. I think the real change is that in the future games are going to have to be optimized for Ryzen/post-icelake style CCX architectures, but that's going to take a while, and most existing games are not going to get updated and will run fairly bad on CCX architectures. It will take a while to get enough improvements to overcome the clock and latency downsides.

Honestly I kinda hope AMD can keep pumping out some decent improvements since Zen is still a young architecture with more un-tapped potential but it's not like they can keep pushing forever either. I figure there's maybe one more big step, then a couple small steps, then anything past that is process improvements.

And process improvements aren't going anywhere either. 10/7nm is the last major generation, 5nm is a half-gen and is still going to take a while, and nobody really has any idea what 3nm is going to look like, that's easily 10 years off at this point if not more. Again, at least 7nm seems to be doing better than 10nm is, but we haven't seen actual production CPUs/GPUs on 7nm either (except for the A12, which is incredibly dissimilar to any other ARM chip on the market - Apple is optimizing for performance over cost/area).

Software will get written/rewritten over time to be more thread-friendly, because there is no other way forward. But unfortunately that's what they said 10 years ago too. At least rewriting software to be CCX friendly is probably easier than rewriting it to be thread-friendly in the first place.

Silicon is so totally hosed, we need graphene or something else so badly right now.



fwiw this latest console generation has at least pushed devs to start optimizing across threads more, since the PS4 and XB1 both use gimped octo-core CPUs. The PS5 and whatever Xbox equivalent will probably use a down-clocked Ryzen.

ConanTheLibrarian posted:

SemiAccurate claims that Intel has abandoned the original 10nm for 12nm (which they'll still call 10nm) for their next process shrink. That performance regression graph above is over a year old now, so the figures on it for 10nm aren't going to line up with the "10nm" Intel releases.

IIRC they were never able to get graphics working on 10nm Cannon Lake. The only chip they ever released on that process is the i3-8121U, a laptop CPU with its integrated graphics disabled, which is good for an :lol:

il serpente cosmico fucked around with this message at 00:03 on Oct 8, 2018

mobby_6kl
Aug 9, 2009

by Fluffdaddy

NewFatMike posted:

To back up the effort post, Intel even projected a performance regression at 10nm compared to 14+++:

https://i.imgur.com/BwM8zAb.jpg

I haven't seen the rest of the presentation so maybe it's different in context, but that vertical axis is labeled transistor performance, btw, which doesn't necessarily tell us anything about the performance of an actual CPU built using that tech. They could (will) have more transistors and various architectural changes, so I seriously doubt there will be actual regression in any performance metric.

Anime Schoolgirl
Nov 28, 2002

il serpente cosmico posted:

fwiw this latest console generation has at least pushed devs to start optimizing across threads more, since the PS4 and XB1 both use gimped octo-core CPUs. The PS5 and whatever Xbox equivalent will probably use a down-clocked Ryzen.

On GF/Samsung 14nm+ they could mass-produce 2.8ghz sustained base clock Ryzen on 45w which is still over 2.5x as fast as the Jaguar on the Bone X. So assuming they're willing to give CPU more than a 20w power budget (and they should) we should not have problems on that front. otoh it may just make them use lower numbers of threads again.

The uncertainty is whether they're going to make a GCN GPU for the next generation of consoles that actually scales higher assuming they don't just use the same amount of GPU power as the bone x, so we may run into the opposite problem

SlayVus
Jul 10, 2009
Grimey Drawer

1gnoirents posted:

Pure anecdote here but using NVENC with OBS I was able to stream near-flawless 1080p60 to Twitch without any drops or really visible artifacts outside of downconverting from 1440p at 6 Mbps. I haven't pushed the limits but it's clearly an improvement over the last encoder.

I wish there was a way for them to update the NVENC encoder for the Pascal series through just a driver update, but I believe all the gains they made were physical hardware changes.

il serpente cosmico
May 15, 2003

Best five bucks I've ever spend.

Anime Schoolgirl posted:

On GF/Samsung 14nm+ they could mass-produce 2.8ghz sustained base clock Ryzen on 45w which is still over 2.5x as fast as the Jaguar on the Bone X. So assuming they're willing to give CPU more than a 20w power budget (and they should) we should not have problems on that front. otoh it may just make them use lower numbers of threads again.

The uncertainty is whether they're going to make a GCN GPU for the next generation of consoles that actually scales higher assuming they don't just use the same amount of GPU power as the bone x, so we may run into the opposite problem

I wouldn't be surprised if they underbudget the GPU but design it in such a way that they can easily double it down the line to make a Pro / X again. No one seems to care a whole lot about hitting 60FPS or 4K without some kind of reconstruction technology this generation, so we'll see how this next generation goes.

Plus there's the question of Ray Tracing hardware.

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
Ray tracing is unlikely to happen on the incoming consoles; it seems too early. The hardware is probably close to being locked down for both console vendors, assuming they are targeting 2020. If Nvidia is making people pay $700+ just for the lower tier of RTX GPUs, there is no feasible way AMD would have raytracing-capable GPUs in consoles that typically sell for half that. I genuinely don't believe AMD is capable of coming up with their own solution to ray tracing in such a short amount of time; they don't have the resources for it.

Avalanche
Feb 2, 2007

Zedsdeadbaby posted:

Ray tracing is unlikely to happen on the incoming consoles; it seems too early. The hardware is probably close to being locked down for both console vendors, assuming they are targeting 2020. If Nvidia is making people pay $700+ just for the lower tier of RTX GPUs, there is no feasible way AMD would have raytracing-capable GPUs in consoles that typically sell for half that. I genuinely don't believe AMD is capable of coming up with their own solution to ray tracing in such a short amount of time; they don't have the resources for it.

Ray tracing might not even be feasible yet in the PC realm of things. Who knows what the final tweaked results will be, but the initial unoptimized builds of games with ray-traced reflections shown at the Nvidia event were only getting around 50-60fps at 1080p. It's probably gonna be a while until ray tracing is truly mainstream.

LRADIKAL
Jun 10, 2001

Fun Shoe
https://www.youtube.com/watch?v=SrF4k6wJ-do

I found this analysis quite enlightening.

RTX currently does reflections and shadows with ray tracing at one sample per pixel. It seems that it is very performance intensive to do this. On the other hand it makes these effects easy to implement on the software side. The current rasterization tricks, while being crummy approximations sometimes, are very honed tricks. Between console releases, 4K requirements and the current limitations of the RTX implementation, I don't think path tracing will be very important to gamers for the next couple of years. Might be a real boon for 3D graphics guys who want to preview in close to real time!

Arzachel
May 12, 2012

LRADIKAL posted:

https://www.youtube.com/watch?v=SrF4k6wJ-do

I found this analysis quite enlightening.

RTX currently does reflections and shadows with ray tracing at one sample per pixel. It seems that it is very performance intensive to do this. On the other hand it makes these effects easy to implement on the software side. The current rasterization tricks, while being crummy approximations sometimes, are very honed tricks. Between console releases, 4K requirements and the current limitations of the RTX implementation, I don't think path tracing will be very important to gamers for the next couple of years. Might be a real boon for 3D graphics guys who want to preview in close to real time!

I mean RTX is also crummy approximation, it's just a much better crummy approximation than rasterization :v:

But yeah, requiring at least one ray per pixel (two for optimal results) is a massive improvement over hundreds+, but it still hits the shaders pretty hard, especially at 4K.
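
To put rough numbers on that, here's a quick back-of-the-envelope in Python (pure arithmetic, 60 fps assumed, nothing here accounts for the cost of actually shading the hit points):

code:
# Rays needed per second for 1-2 samples per pixel at 60 fps.
for label, w, h in [("1080p", 1920, 1080), ("4K", 3840, 2160)]:
    for spp in (1, 2):
        rays_per_sec = w * h * spp * 60
        print(f"{label}, {spp} spp @ 60 fps: {rays_per_sec / 1e6:.0f} Mrays/s")
# 1080p: ~124 / ~249 Mrays/s; 4K: ~498 / ~995 Mrays/s.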

TorakFade
Oct 3, 2006

I strongly disapprove


I know it is too early and we have to wait for benchmarks and so on, but do you guys think that the 2070 will be at least decent in this RT stuff, or it'll be like trying to run the latest games on a 5 year old computer? (working, but definitely not enjoyable)

I have just recently bought a 1080 FTW that I could potentially step-up to a 2070, if EVGA offers it... I'm definitely not going for the 2080 or 2080Ti for budget reasons, but if a 2070 is better in normal games than the 1080 (for 1440p) and also has decent raytracing capabilities I could fork over the 100-150€ or so to step up and "futureproof" a little, even if it means paying double taxes...

TorakFade fucked around with this message at 10:16 on Oct 8, 2018

Arzachel
May 12, 2012

TorakFade posted:

I know it is too early and we have to wait for benchmarks and so on, but do you guys think that the 2070 will be at least decent in this RT stuff, or it'll be like trying to run the latest games on a 5 year old computer? (working, but definitely not enjoyable)

I have just recently bought a 1080 FTW that I could potentially step-up to a 2070, if EVGA offers it... I'm definitely not going for the 2080 or 2080Ti for budget reasons, but if a 2070 is better in normal games than the 1080 (for 1440p) and also has decent raytracing capabilities I could fork over the 100-150€ or so to step up and "futureproof" a little, even if it means paying double taxes...

Extrapolating from 2080/2080ti, 2070 should be slightly faster than the 1080 except in 4k hdr and game engines that make use of double rate fp16 (Wolfenstein 2/id tech 6) where 2070 should pull ahead quite a bit. The current tensor core extras are DLSS - a faster, slightly lower quality TAA alternative and raytracing acceleration which might be neat if you're into maximising IQ, framerate be damned?

If that sounds good, go for it. Else put the money towards the next gen.

Bloody Antlers
Mar 27, 2010

by Jeffrey of YOSPOS
^ AMD has had two generations of open source ray tracing libraries. https://github.com/GPUOpen-LibrariesAndSDKs/RadeonRays_SDK

Their existing tech can do like ~80 to 100 MEGARAYS per second on not-latest hardware from what I've seen. Of course, they don't have a GPU with 1/3rd of its die dedicated to Tensor.
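
Taking that ~80-100 Mrays/s figure at face value, rough arithmetic puts it in the same ballpark as the bare minimum for 1 sample per pixel at 1080p60, though that budget leaves nothing for shading or denoising, so this says little about how the two workloads actually compare:

code:
radeon_rays_per_sec = 100e6              # optimistic end of the quoted range
needed_1080p60 = 1920 * 1080 * 60        # ~124 Mrays/s at 1 sample per pixel
print(f"Coverage of the 1 spp 1080p60 budget: {radeon_rays_per_sec / needed_1080p60:.0%}")  # roughly 80%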

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

il serpente cosmico posted:

fwiw this latest console generation has pushed at least pushed devs to start optimizing across threads more, since the PS4 and XB1 both use gimped octo-cores CPUs. The PS5 and whatever Xbox equivalent will probably use a down-clocked Ryzen.

"this latest console generation" = the last 5 years. PS4 and XB1 launched in 2013.

Which, remember, was what people were saying about Bulldozer as well. "Consoles have more threads now, we'll see games start threading more heavily".

Single-threaded performance is still the dominant factor in gaming though. As long as you have enough threads to offload the satellite work, it all comes down to how fast you can churn through the main game loop, which is single-threaded.

Amdahl's law and all - reduce the time spent in the multithreaded portion of the program to ~zero and what you're left with is the single-threaded portion, which dominates your run-time.
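
For anyone who wants the actual formula behind that: speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction of the work and n is the core count. A quick Python illustration (the p values are made up just to show the shape of the curve):

code:
def speedup(p, n):
    """Amdahl's law: p = parallel fraction of the work, n = number of cores."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.8, 0.95):
    print(f"p={p:.2f}: 8 cores -> {speedup(p, 8):.2f}x, 64 cores -> {speedup(p, 64):.2f}x")
# Even at 95% parallel, 64 cores top out around 15x: the serial main loop caps everything.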

Paul MaudDib fucked around with this message at 15:44 on Oct 8, 2018

Truga
May 4, 2014
Lipstick Apathy
I'd really like it if AMD pulls a 9000 series again and manages to put raytracing into a mostly contemporary GPU chip and dumps that poo poo on the market for $300-600 for the high end options.

Of course, I know now that even if they do that, nvidia will just keep selling GPUs because consumers are incredibly good at self-ownage.

Broose
Oct 28, 2007

Paul MaudDib posted:

"this latest console generation" = the last 5 years. PS4 and XB1 launched in 2013.

Which, remember, was what people were saying about Bulldozer as well. "Consoles have more threads now, we'll see games start threading more heavily".

Single-threaded performance is still the dominant factor in gaming though. As long as you have enough threads to offload the satellite work, it all comes down to how fast you can churn through the main game loop, which is single-threaded.

Jeeze, has it really been that long? Seems like only a few months ago I was laughing at how they named the third Xbox the Xbox One and then proceeded to put the wrong RAM into it.

I don't pay much attention to AMD cards, what does this whole Polaris 30 stuff mean? New cards? Confusing names? Lower/higher prices? I have no idea. I like my clear cut generation names.

Truga
May 4, 2014
Lipstick Apathy
It's probably just a Polaris refresh on a 14nm+ process.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Broose posted:

I don't pay much attention to AMD cards, what does this whole Polaris 30 stuff mean? New cards? Confusing names? Lower/higher prices? I have no idea. I like my clear cut generation names.

0-10% higher performance and a return to MSRP pricing. 12nm is a 14++; they are probably just running off the existing die on a tighter process.

670 is described as being "in the $200 segment", which was originally the price-point of the 480, which the 670 will probably barely match up against, even assuming they hit the 10% mark. "Polaris 20" was another one of these process tweaks, and ended up being a big wet fart.

AMD does this every time the prices start to drift too low - once upon a time you could pick up a 480 4 GB for $150 and a 480 8 GB for $175, and the historical low for the 480 8 GB was $130. Then they introduced the 580 to drag prices back up to $250. Same thing they did with the 390 as well: change the number and add $100 to the price (the hilarious thing is they actually reduced BOM; by that point the smaller VRAM modules were almost out of production and actually more expensive than the bigger ones, which had better volume).

VRAM prices went up a lot (vs 2016) and the tariffs are in play now, so it's understandable they need to crank the price, but it still sucks.

Paul MaudDib fucked around with this message at 17:03 on Oct 8, 2018

Sininu
Jan 8, 2014

The latest Geforce Experience made it so the useless notification settings here get turned back on after every restart

Ughh. Also I can't change microphone settings at all anymore it seems.
Such a garbage program.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
OK, question, how expensive are shaders for texture operations? What I've been wondering for a while now is, why aren't they using a shader to sample random bits of a large texture, e.g. guided by procedural cells or whatever, to shift UVs and blend on transitions, to break up repeating patterns, say a road surface?

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Combat Pretzel posted:

OK, question, how expensive are shaders for texture operations? What I've been wondering for a while now is, why aren't they using a shader to sample random bits of a large texture, e.g. guided by procedural cells or whatever, to shift UVs and blend on transitions, to break up repeating patterns, say a road surface?

p sure this is done when it makes sense. can get "free" interpolation from the texture samplers


repiv
Aug 13, 2009

Combat Pretzel posted:

OK, question, how expensive are shaders for texture operations? What I've been wondering for a while now is, why aren't they using a shader to sample random bits of a large texture, e.g. guided by procedural cells or whatever, to shift UVs and blend on transitions, to break up repeating patterns, say a road surface?

Funny you should say that: Unity just recently published a technique to generate infinite non-repeating textures from a small example texture that's fast enough to run inside a realtime material shader :eng101:

https://eheitzresearch.wordpress.com/722-2/
https://rivten.github.io/2018/07/07/tldr-noise-histogram.html

The idea has been explored before but previous techniques either didn't replicate the source texture well, or were too slow for realtime. This one is the best of all worlds.
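
For a feel of the basic idea (random per-cell offsets into the source texture, cross-faded at the seams), here's a minimal numpy sketch. It's only the naive version, not the histogram-preserving blending from the links above, and the function names and parameters are made up for illustration:

code:
import numpy as np

def edge_ramp(size, overlap):
    """1D weight that fades in/out over `overlap` pixels at each end (never exactly 0)."""
    w = np.ones(size, dtype=np.float32)
    r = (np.arange(overlap, dtype=np.float32) + 0.5) / overlap
    w[:overlap] = r
    w[-overlap:] = r[::-1]
    return w

def jittered_tiling(tex, out_h, out_w, cell=64, overlap=16, seed=0):
    """Cover an (out_h, out_w) canvas with cell-sized patches taken from random
    positions in tex (H, W, C float array), feathering each patch over `overlap`
    pixels so the repetition of the source is broken up. Assumes tex is larger
    than cell + overlap in both dimensions."""
    rng = np.random.default_rng(seed)
    h, w, c = tex.shape
    patch = cell + overlap
    mask = np.outer(edge_ramp(patch, overlap), edge_ramp(patch, overlap))[..., None]
    acc = np.zeros((out_h + patch, out_w + patch, c), np.float32)
    wgt = np.zeros((out_h + patch, out_w + patch, 1), np.float32)
    for y0 in range(0, out_h, cell):
        for x0 in range(0, out_w, cell):
            # random offset into the source texture for this cell
            sy = int(rng.integers(0, h - patch))
            sx = int(rng.integers(0, w - patch))
            acc[y0:y0 + patch, x0:x0 + patch] += tex[sy:sy + patch, sx:sx + patch] * mask
            wgt[y0:y0 + patch, x0:x0 + patch] += mask
    return acc[:out_h, :out_w] / wgt[:out_h, :out_w]

In a real engine this would run per-sample in the fragment shader rather than baking out a big texture, and the histogram-preserving part of the published technique is what keeps the linear blends from looking washed out.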
