Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

PBCrunch posted:

Would a GTX 1080 with its clock speed reduced by 50% give a reasonable prediction of GTX 1060 performance? Does the 1080 overclocking software allow for substantially decreased clock speed?

Nope. If a 1280SP 1060 can even approach 980-level perf, it can only mean Pascal doesn't scale well with increasing core count at 1080 levels. The 1060 is going to be somewhere around 26% better IPC than the 1080.

GP106 is shaping up to be the best perf/mm2 and highest-IPC midrange GPU ever.

Palladium fucked around with this message at 14:50 on Jul 5, 2016

repiv
Aug 13, 2009

Hieronymous Alloy posted:

Also, why the gently caress have none of the aftermarket card sellers announced actual release dates? Do y'all think the card problems will delay those launches?

FWIW, Overclockers UK have listed the ETA on the Sapphire 480 Nitro as the 22nd.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Josh Lyman posted:

My 3570K integrated graphics would probably do well per dollar on Overwatch

Overwatch is definitely something that's driving PC upgrades for the low-requirement esports-type gamer. I hear it's not quite there at 1080p on a 750 Ti, so the 470 or 480 should be a great fit.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Ninkobei posted:

Is it possible that, as Global Foundries improves its finfet technique, the yields become higher quality/allow for higher clocks? I guess the same question goes to TSMC..will the yields improve chip quality later in the GPU's generation?

If a chip is being limited by thermals, cache, or current, then it becomes a question of by how much. It'd be highly unusual to see much increase in theoretical maximum performance; a better process usually means the chip will have fewer latency issues, run cooler, and suck less power, rather than necessarily opening up more headroom.

Palladium posted:

Nope. If a 1280SP 1060 can even approach 980-level perf, it can only mean Pascal doesn't scale well with increasing core count at 1080 levels. The 1060 is going to be somewhere around 26% better IPC than the 1080.

GP106 is shaping up to be the best perf/mm2 and highest-IPC midrange GPU ever.

Inversely, this would mean GP102 would be half buried in the wall of diminishing returns.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

FaustianQ posted:

Inversely, this would mean GP102 would be half buried in the wall of diminishing returns.

And Intel's wall of getting CPU limited.

Gwaihir
Dec 8, 2009
Hair Elf

Ninkobei posted:

Is it possible that, as Global Foundries improves its finfet technique, the yields become higher quality/allow for higher clocks? I guess the same question goes to TSMC..will the yields improve chip quality later in the GPU's generation?

Almost certainly. That's why once in a while you see Intel introduce higher-clocked SKUs later on in a series' life cycle (independently of the usual architecture updates). AMD has done it in the past with things like the 7970 "GHz Edition".

The more common situation is probably just sitting on existing clocks and taking advantage of the greater profit per wafer due to fewer defective dies per wafer.

repiv
Aug 13, 2009

FaustianQ posted:

Inversely, this would mean GP102 would be half buried in the wall of diminishing returns.

It's going to come down to whether GP102 is an HBM2 part, I think. The 1080's poor scaling is at least in part due to lack of bandwidth, but a theoretical HBM2 Titan P would have more than enough.

fozzy fosbourne
Apr 21, 2010

Since I think we have some rendering folks here, I have a question about how the graphics pipeline and input latency fundamentally interact.

I came across this post about latency in Overwatch, which cites a Blizzard employee post made during the beta. The blizzard post no longer exists but here is the quote:

quote:

Thanks for all the feedback. I can give you a little bit more information about the details of input lag.

We use only unadjusted raw input for our input handling (except when in the UI) as is common for FPS's. We also do a few other things to try to minimize input lat (sample input at the latest possible moment, minimize allowable buffered frames in the driver, etc).

However, we have noticed that if the GPU gets bogged down, input lag will be a little bit worse because the driver will start to buffer a frame and there can be a frame buffered in our game (we have a multi-threaded renderer that has a frame in submission to the GPU while we simulate the next frame). In those cases if you want to prefer less input lag over visual quality you can reduce your graphics detail settings to ensure you get the quickest path from sampled input to result on screen.

I've also seen various reports of people suggesting they perceive mouse lag at higher graphical settings even if the fps counter remains the same.

I've always had the assumption that if a graphics setting were to impact input latency, it would also always be accompanied by a framerate drop, but is that naive?

Note, I know the esports solution is to just turn everything off and furthermore I know it's not practically useful at my scrubby level of play but I'm just curious how this works fundamentally.

Josh Lyman
May 24, 2009


repiv posted:

It's going to come down to whether GP102 is an HBM2 part, I think. The 1080's poor scaling is at least in part due to lack of bandwidth, but a theoretical HBM2 Titan P would have more than enough.


GP102 will not use HBM2, so says a friend who works at Nvidia.

repiv
Aug 13, 2009

Josh Lyman posted:

GP102 will not use HBM2, so says a friend who works at Nvidia.

Welp, so much for that. There is faster GDDR5X in the pipeline that could make it work, though, provided it's matured by the time NV needs it.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Hieronymous Alloy posted:

Also, why the gently caress have none of the aftermarket card sellers announced actual release dates? Do y'all think the card problems will delay those launches?

There were rumors before launch that aftermarket sellers weren't thrilled with the board price/MSRP split AMD was giving them for the 480 (again, a hint that AMD's margins are bad here). If that's the case I could see them not really putting a priority on getting 3rd-party cards into the pipeline, especially if there actually is enough 1070 stock available for them to spend the resources making those instead.

fozzy fosbourne posted:

input latency stuff

So, basics: you can think of the game engine as two separate machines (CPU and GPU) working in parallel. While the CPU is simulating the world, reading input, and issuing rendering commands, the GPU is rendering the frame it's about to display next.

If the GPU is running faster than the CPU, the GPU will finish first and have to wait for the CPU to give it the next frame in the queue. Having multiple frames buffered helps here because if the CPU stalls for disk access or something, the GPU can still pick up the next frame that's already queued (which won't help raw framerate but can fix CPU-induced stutter). This obviously adds 1000/framerate ms of latency per buffered frame, however. If the CPU is running faster than the GPU, then the CPU can start processing the next frame; however, since you have a limit on how many frames you want buffered, you'll reach some point where the CPU can't do any work until the GPU is ready for it, so it idles.

In either case, your framerate is going to be bound by whichever of the two (CPU frame or GPU frame) is slower. Meanwhile, your effective latency is the difference between "sample input" and "render results".

In this case, it sounds like they're doing as much simulation and stuff as possible, then sampling the input and doing the last bit of simulation and issuing rendering commands. If you are dropping graphical settings but not seeing a framerate change, then you are CPU-bound, meaning that the rate at which frames are presented is limited by the CPU, and the GPU is sitting idle; however, your perceived "input->present" latency is still the difference between when the input is sampled and when the results are displayed. At higher rendering quality levels, the amount of time spent on the GPU is longer, so the latency increases (even if it doesn't increase enough to make the frame GPU-bound). Your frame rate remains the same because you've still got GPU-idle time, but there is less of it, and your GPU rendering time increases, making the input-present latency larger.

Does that make sense? I realize I might not be explaining it in the clearest way.
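
If it helps to see it with numbers, here's a toy model of that pipeline (every frame time below is a made-up figure for illustration, not a measurement from Overwatch or any real engine):

code:

# Toy model of the two-machines-in-parallel picture above.
# Hypothetical numbers only; the point is that input-to-present latency
# can grow with GPU frame time even while the framerate stays CPU-bound.

def frame_stats(cpu_ms, gpu_ms, buffered_frames=0):
    """Return (fps, input_to_present_ms) for a simple pipelined renderer.

    cpu_ms          - time the CPU spends simulating + issuing commands
    gpu_ms          - time the GPU spends rendering one frame
    buffered_frames - extra frames allowed to queue between CPU and GPU
    """
    # Steady-state frame interval is set by whichever side is slower.
    frame_interval = max(cpu_ms, gpu_ms)
    fps = 1000.0 / frame_interval

    # Input is sampled during the CPU frame, that frame is then rendered
    # on the GPU, and it waits behind any already-queued frames first.
    latency = cpu_ms + gpu_ms + buffered_frames * frame_interval
    return fps, latency

# CPU-bound case: CPU takes 10 ms, GPU takes 6 ms at low settings.
print(frame_stats(10, 6))                      # 100 fps, 16 ms input-to-present
# Crank settings so the GPU now takes 9 ms: same framerate, more lag.
print(frame_stats(10, 9))                      # still 100 fps, 19 ms
# Add one buffered frame: framerate unchanged, another ~10 ms of lag.
print(frame_stats(10, 9, buffered_frames=1))   # still 100 fps, 29 ms

Same presented framerate in all three cases; only the input-to-present gap moves, which is exactly the effect being described.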

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Hubis posted:

There were rumors before launch that aftermarket sellers weren't thrilled with the board price/MSRP split AMD was giving them for the 480 (again, a hint that AMD's margins are bad here). If that's the case I could see them not really putting a priority on getting 3rd-party cards into the pipeline, especially if there actually is enough 1070 stock available for them to spend the resources making those instead.

This would make sense, but the RX 480 has even less of a place if the custom-cooler models have a large price premium.

Poor AMD, they really needed a lucky break.

Phlegmish
Jul 2, 2011



repiv posted:

According to Oxide, the pre-release 1080 driver that AMD used had a bug that caused it to generate less snow than intended. The CF480s terrain looked worse because it was obscured by more plain boring snow.

I don't think the different number of units on screen was ever addressed though. Maybe Ashes is just a bad benchmark :prepop:

I have never heard anyone talk about Ashes of the Singularity outside of the context of benchmarks.

Is it actually a good game?

Klyith
Aug 3, 2007

GBS Pledge Week

fozzy fosbourne posted:

I've also seen various reports of people suggesting they perceive mouse lag at higher graphical settings even if the fps counter remains the same.

I've always had the assumption that if a graphics setting were to impact input latency, it would also always be accompanied by a framerate drop, but is that naive?

Note, I know the esports solution is to just turn everything off and furthermore I know it's not practically useful at my scrubby level of play but I'm just curious how this works fundamentally.

More buffering = more lag between when the frame was generated (and thus the last time your input would be reflected) and when it's displayed.

Buffering is also the easiest / cheapest way to prevent framerate drops due to transient events. It has a frame ready in the pipe when the big shader spike hits or GPU needs something from main memory, and then will hopefully catch up before the next one.

Generally the GPU itself will buffer one or two frames in addition to the one that's on the screen. If Overwatch is adding a separate in-engine buffer, that could easily start adding up to more input lag than you'd like -- especially if anything else is making things worse*. Not just pro gamers; anyone with a lot of FPS experience will start feeling it, if only as a sense that something is "off". The same way people could feel the slow 20 Hz tick rate of Overwatch servers.

*1 GPU buffer + 1 engine buffer + vsync on a 60 Hz monitor + FPS not constant at 60 + average monitor lag ~= nearly a tenth of a second of worst-case input lag

I'd point all this back to Blizzard being Blizzard: they want the widest audience possible and seem OK with compromising other areas to make it happen. Their engine having an extra buffer seems like a very Bliz thing to do even though it's not a great feel for a twitch FPS. It helps people with min-spec computers be able to play the game at all.
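
Putting rough numbers on that footnote (every per-stage figure here is an assumed round number, not a measurement):

code:

# Worst-case stack-up at 60 Hz with vsync, per the footnote above.
# All stage values are assumed for illustration.
refresh_ms = 1000.0 / 60       # ~16.7 ms per refresh

render_frame  = refresh_ms     # the frame's own simulate/render slot
engine_buffer = refresh_ms     # one frame buffered by the engine
gpu_buffer    = refresh_ms     # one frame queued in the driver/GPU
vsync_miss    = refresh_ms     # a missed refresh when FPS dips under 60
monitor_lag   = 15.0           # assumed average display processing lag

worst_case = render_frame + engine_buffer + gpu_buffer + vsync_miss + monitor_lag
print(round(worst_case))       # ~82 ms, i.e. knocking on a tenth of a second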

penus penus penus
Nov 9, 2014

by piss__donald

Hieronymous Alloy posted:

So today's the day AMD explains exactly how they hosed the chicken with the 480 PCIe slot. Any bets on what it was or how they'll propose to fix it?

Also, why the gently caress have none of the aftermarket card sellers announced actual release dates? Do y'all think the card problems will delay those launches?

My bet is simply lowering clock speed and voltage.

If they can decouple the PCIe and 6-pin power draw (like every other card) then they could conceivably just get all the remaining power from the power connector and not have to compromise any performance. But since it's that way to start with, and frankly that poo poo is kind of weird, there might be some underlying reason why it can't be changed.

Themage
Jul 21, 2010

by Nyc_Tattoo
Tom's released an article about the 480:
http://www.tomshardware.com/reviews/amd-radeon-rx-480-power-measurements,4622.html

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

quote:

Our measurements show that the motherboard’s PCIe slot averages 6.74A at 12V. This means that our numbers exceed the norm by 1.24A, a significant 23 percent. And this is already counting from the absolute maximum allowed by the PCI-SIG’s specifications.

And there's that.

Still don't understand why anybody still defends this turd version of the card; if it was Intel they would have recalled the whole shipment.
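
Spelling out the arithmetic in that quote, taking the 6.74 A and 1.24 A figures at face value (the 66 W figure below is just 5.5 A x 12 V):

code:

# Working backwards from the figures in the Tom's quote above.
measured_amps = 6.74                       # average slot current at 12 V
overage_amps  = 1.24                       # how far over the norm they say it is

spec_amps = measured_amps - overage_amps
print(round(spec_amps, 2))                 # 5.5 A -> the 12 V limit for the slot
print(round(overage_amps / spec_amps, 3))  # 0.225 -> their "significant 23 percent"
print(round(measured_amps * 12, 1))        # ~80.9 W average from a 66 W (5.5 A x 12 V) rail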

Naffer
Oct 26, 2004

Not a good chemist

They speculate that 4/6 phases are drawing from the PCIe slot.

Tom's posted:

In the end, we're not sure if it is really a physical 4:2 split, or only done with the firmware to change the balance in the direction of the PCIe slot, but the result is exactly the same.

penus penus penus
Nov 9, 2014

by piss__donald
Lol, basically "yeah it's repeatable, but don't worry, make sure your slot is clean and it'll be fine!"

No thanks.

Hieronymous Alloy
Jan 30, 2009


Why! Why!! Why must you refuse to accept that Dr. Hieronymous Alloy's Genetically Enhanced Cream Corn Is Superior to the Leading Brand on the Market!?!




Morbid Hound

Gwaihir posted:

Apparently I grabbed Ashes on steam thinking it might be an OK Supreme Commander successor.

Boy was I wrong, that game loving sucks. The only relevant benchmark that should be done using it is how long you can stand to play it before the boredom becomes terminal.

On the other hand Total War Warhammer is *great*, though Nvidia cards show a 25% performance drop in dx12 in it (the 480 apparently shows about a 15% gain).

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

THE DOG HOUSE posted:

My bet is simply lowering clock speed and voltage.

If they can decouple the PCIe and 6-pin power draw (like every other card) then they could conceivably just get all the remaining power from the power connector and not have to compromise any performance. But since it's that way to start with, and frankly that poo poo is kind of weird, there might be some underlying reason why it can't be changed.

I was actually thinking about this. So the 6-pin is rated for 75W, and PCIe is rated at 65W. The problem they're having is that the TDP is spiking to >150W, which from what I was seeing is being split equally between the two sources, causing problems with the PCIe trace, while the 6-pin is absorbing the overage because it's more robust (and in all likelihood actually rated to 150W because it's really an 8-pin connection). So if they could cap the PCIe draw in software and pull the additional load from the power connector then great -- except they'd be pulling 100-125W from a 75W-rated connector.

Now maybe this is fine, because on your and my power supplies that connector is rated well above the 6-pin spec. But what about the 480's "target" audience, i.e. people with old PSUs, OEM machines they're trying to upgrade, etc.? Is concentrating even more load on a different connector actually any better, or do you run the risk of just causing power supply problems instead, because in reality most people don't have PSUs that can just absorb that? Or are there people who might not be aware of the issue and are happily overdrawing through PCIe without trouble, who are now all of a sudden going to START having problems because they're shifting draw to the 6-pin supply?

This makes me think that just isn't an option. They'll have to reduce the overall draw for 6-pin cards, and then maybe allow cards with an actual 8-pin connector to run at full power.
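
To put rough numbers on that trade-off (the 65 W slot rating is the one cited above, the 75 W 6-pin rating is nominal, and the spike values are assumptions):

code:

# Rough model of "even split" vs "cap the slot and dump the rest on the 6-pin".
SLOT_RATED = 65.0      # W, the slot rating cited in the post above
                       # (the 6-pin is nominally rated for 75 W)

def split_draw(total_w, cap_slot=False):
    """Return (slot_w, six_pin_w) for a given total board draw in watts."""
    if not cap_slot:
        return total_w / 2, total_w / 2      # current behaviour: even split
    slot = min(total_w / 2, SLOT_RATED)      # hypothetical firmware cap
    return slot, total_w - slot              # remainder lands on the 6-pin

for spike in (150, 165, 190):                # assumed transient spikes, in W
    print(spike, split_draw(spike), split_draw(spike, cap_slot=True))

# Even split: 75-95 W through the slot. Capped slot: 85-125 W through a
# connector nominally rated for 75 W, which is exactly the dilemma above.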

Hubis fucked around with this message at 17:26 on Jul 5, 2016

Hieronymous Alloy
Jan 30, 2009


Why! Why!! Why must you refuse to accept that Dr. Hieronymous Alloy's Genetically Enhanced Cream Corn Is Superior to the Leading Brand on the Market!?!




Morbid Hound

THE DOG HOUSE posted:

Lol, basically "yeah it's repeatable, but don't worry, make sure your slot is clean and it'll be fine!"

No thanks.

I love how the responses are all "it's fine unless you overclock".

There's no way this could cause any problems at all, it's just mathematically impossible -- unless you do the most common thing imaginable, which the card even comes with a branded tool to help you do.

This car runs fine, just don't accelerate too fast! City driving only! It's fine as long as it never goes on a highway.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Hieronymous Alloy posted:

I love how the responses are all "it's fine unless you overclock".

There's no way this could cause any problems at all, it's just mathematically impossible -- unless you do the most common thing imaginable, which the card even comes with a branded tool to help you do.

This car runs fine, just don't accelerate too fast! City driving only! It's fine as long as it never goes on a highway.

Also don't put more than one of them in your system at a time -- but who would recommend that?

e:

quote:

Current hardware should be able to handle this amount of current without taking any damage, as long as the motherboard’s slots are clean and not corroded. It’s also advisable to make sure that the graphics card sits precisely in its slot. This should always be the case, though, even with significantly lower amounts of power.

RX480: As safe as aluminum wiring!

Hubis fucked around with this message at 17:31 on Jul 5, 2016

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

THE DOG HOUSE posted:

For real though, Ashes sucks and has been skewing benchmarks for like 6 months straight now. There are other, real DX12 games now.

When AMD put up that slide I legitimately laughed.

DX12 benchmarking is a huge mess, and it's stupider than sites using Project Cars because they can't get a single game right. For some reason some sites use the DX12 path in Tomb Raider, even though it performs worse, especially in multi-GPU. TPU does that and then benches Hitman in DX11, when it runs better in DX12 if I remember right.

Phlegmish posted:

I have never heard anyone talk about Ashes of the Singularity outside of the context of benchmarks.

Is it actually a good game?

It's very stripped down. I think it's the core of what could be a really fun game but it's just too stripped down. However I am really looking forward to its engine being used for a Sins successor.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

xthetenth posted:

DX12 benchmarking is a huge mess, and it's stupider than sites using Project Cars because they can't get a single game right. For some reason some sites use the DX12 path in Tomb Raider, even though it performs worse, especially in multi-GPU.

Welp, I picked up RotTR in the steam sale and thought that DX12 would help me with an R9-290. I should have DX12 turned off, same settings?

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Hieronymous Alloy posted:

I love how the responses are all "it's fine unless you overclock".

There's no way this could cause any problems at all, it's just mathematically impossible -- unless you do the most common thing imaginable, which the card even comes with a branded tool to help you do.

This car runs fine, just don't accelerate too fast! City driving only! It's fine as long as it never goes on a highway.

It's not fine in its current state even if you don't overclock. Out of spec.

It's not fine either even if the slot power is shifted to the 6-pin. Still out of spec.

It's also not fine if there is a performance loss from downvolting and underclocking to save power. Bait-and-switch tactics.

AMD can only blame themselves for this whole clusterfuck. They are a multibillion-dollar company that has been in this business for 10+ years; maybe they would like to act like one, instead of relying on their tiny segment of rabid fanboys?

Wiggly Wayne DDS
Sep 11, 2010



I still say a recall is effectively mandatory at this point, even if a firmware update works around the underlying issue.

mcbexx
Jul 4, 2004

British dentistry is
not on trial here!



Well, John McAfee is entering the Bitcoin business with a 10 Petahash/s farm.

He's probably gobbling up all those cards.

http://ir.stockpr.com/mgtci/company-news/detail/864/mgt-announces-initial-phase-of-multi-petahash-bitcoin-and-blockchain-project

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

Twerk from Home posted:

Welp, I picked up RotTR in the steam sale and thought that DX12 would help me with an R9-290. I should have DX12 turned off, same settings?

Probably. It runs slower for me and most people.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Klyith posted:

Generally the GPU itself will buffer one or two frames in addition to the one that's on the screen.

I'd be interested in a citation on that. It would be catastrophic for VR, among other issues, if the GPU held one or two frames back for subsequent scan outs. (Where does it put these unpredictable frames?) It would also mean that you should never get shearing even with vsync off.

Can you elaborate?

wicka
Jun 28, 2007


Palladium posted:

AMD can only blame themselves for this whole clusterfuck. They are a multibillion-dollar company that has been in this business

but for how much longer?

E: actually their revenue last year was under a billion

wicka fucked around with this message at 18:05 on Jul 5, 2016

Phlegmish
Jul 2, 2011



Twerk from Home posted:

Welp, I picked up RotTR in the steam sale and thought that DX12 would help me with an R9-290. I should have DX12 turned off, same settings?

Man at first I thought this was Rise of the TRiad

SwissArmyDruid
Feb 14, 2014

by sebmojo

Ninkobei posted:

Is it possible that, as Global Foundries improves its finfet technique, the yields become higher quality/allow for higher clocks? I guess the same question goes to TSMC..will the yields improve chip quality later in the GPU's generation?

GloFo _shouldn't need to improve shit_, is the problem. The process is Samsung's, and they've been making it since the Exynos 7420 that came out last year.

So either GloFo are incompetent or the 14nm LPE process is unsuited to scaling up to the kinds of core counts involved in making GPUs, in which case if Vega comes out on, say, 14nm LPP, we may see teething problems all over again.

edit: Closer to a year and a half, now.

SwissArmyDruid fucked around with this message at 18:12 on Jul 5, 2016

Craptacular!
Jul 9, 2001

Fuck the DH
That Tom's article made me think about buying one.

In my entire history of computers, I've never overclocked something even once. I've bought factory OC cards with warranties, but I always buy new hardware rather than OC to try to squeeze more juice out of the hardware I already bought.
This Asus 660 cost me $200 in early 2013 and has now lasted me 3.5 years. If the 480 can do the same at stock frequencies and not short anything out, I don't see why not. I'm just loath to purchase reference coolers because this PC's focus was on absolute quiet, but it's not worth $100 more for cooling.

If it does eventually break this motherboard, that would be disappointing, but offset by the fact that this is a four year old Z77 motherboard.

penus penus penus
Nov 9, 2014

by piss__donald

Craptacular! posted:

That Tom's article made me think about buying one.

In my entire history of computers, I've never overclocked something even once. I've bought factory OC cards with warranties, but I always buy new hardware rather than OC to try to squeeze more juice out of the hardware I already bought.
This Asus 660 cost me $200 in early 2013 and has now lasted me 3.5 years. If the 480 can do the same at stock frequencies and not short anything out, I don't see why not. I'm just loath to purchase reference coolers because this PC's focus was on absolute quiet, but it's not worth $100 more for cooling.

But that's the thing :( - while the Tom's article clearly says "it's okay", the very data in the article seems to suggest otherwise. Would we be comfortable maxing out the spec of anything power-related, for any amount of time, for any part in the computer? In PCP's test you can see the voltage actually drooping on the motherboard because of the power draw - at stock settings. It's power coming through the motherboard for seemingly no good reason when you could literally power the whole card off the 6-pin itself - while that would be equally out of spec, at least it's less cringeworthy because that limit is more arbitrary, and it would be a footnote just like it has been on past cards that did that.

I'd at least hold out for AMD's fix. I am at best uncomfortable with the card, and that's real discomfort, not "this card doesn't perform how I like so I'm going to nitpick everything" discomfort. It really didn't help that there were immediate reports of failure in exactly the kinds of wide-reaching ways one could imagine this causing: tripping safeties, breaking individual slots, destroying audio, etc.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Right now is not a great time to buy a sub-$400 GPU. If you have that much to spend, you can fire away and be sure to have a good time with your 1070. Everybody poking their heads in here asking "what $200-300 thing should I buy" needs to wait for the GTX 1060, RX 470, and whatever software fix AMD has coming.

japtor
Oct 28, 2005
Kinda curious about the simultaneous multi projection thing Nvidia touted for their new GPUs, has it actually been tested/benchmarked? Devs need to support it so I'm guessing nothing is out yet, but did they not even make a longer bars graph about it?

Joink
Jan 8, 2004

What if I told you cod is no longer a fish :coolfish:

Craptacular! posted:

That Tom's article made me think about buying one.

In my entire history of computers, I've never overclocked something even once. I've bought factory OC cards with warranties, but I always buy new hardware rather than OC to try to squeeze more juice out of the hardware I already bought.
This Asus 660 cost me $200 in early 2013 and has now lasted me 3.5 years. If the 480 can do the same at stock frequencies and not short anything out, I don't see why not. I'm just loath to purchase reference coolers because this PC's focus was on absolute quiet, but it's not worth $100 more for cooling.

If it does eventually break this motherboard, that would be disappointing, but offset by the fact that this is a four year old Z77 motherboard.

I'm getting one as I'm still running a 7950 on this 1080p monitor; time to upgrade. The RX 480 seems ideal for what I currently play, and its good support for VR/DX12 makes it 'future proof'.

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord

Twerk from Home posted:

Right now is not a great time to buy a sub-$400 GPU. If you have that much to spend, you can fire away and be sure to have a good time with your 1070. Everybody poking their heads in here asking "what $200-300 thing should I buy" needs to wait for the GTX 1060, RX 470, and whatever software fix AMD has coming.

If you've been out of the game a while, there are good prices on used GTX 970 / R9 290 / R9 290X / R9 390 cards right now. Especially so with the AMD chips, given their power profiles suck and they've been outright replaced by a $200 part, so you can haggle pretty well.

repiv
Aug 13, 2009

japtor posted:

Kinda curious about the simultaneous multi projection thing Nvidia touted for their new GPUs, has it actually been tested/benchmarked? Devs need to support it so I'm guessing nothing is out yet, but did they not even make a longer bars graph about it?

It hasn't been implemented in any engines yet, so there's nothing to benchmark. Nvidia say they're working with Unity and Epic to get it done (which would cover most VR games) but there's no timeframe for it.

They've thrown around some marketing numbers but how they translate into framerate will depend on where the bottlenecks are in a game.

repiv fucked around with this message at 18:30 on Jul 5, 2016
