Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib
I would wait for the next generation; the 1080 Ti is still a pretty good card, and the 2080 Ti is now more than a year old.

Mindblast
Jun 28, 2006

Moving at the speed of death.


Yeah, nthing the advice here. The 1080 Ti is really strong, so unless you NEED RTX right now you should be good until the 3080 Ti rolls around (or whatever it's called at that time).

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
If you hesitate about buying the 2080 Ti, you don't have the money to waste on it. It's not worth upgrading from a 1080 Ti yet.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Thelonius Van Funk posted:

I have a GTX 1080 Ti that I am pretty pleased with, but the prices for the RTX 2080 Ti have finally started dropping in Sweden. Is there any point in upgrading or should I just hold out for the new cards in 2020?

The 2080Ti is 30-40% faster than the 1080Ti. So the question is whether or not that's worth the price for you. If the used market in Sweden is anything like it is in the US, you can still get a good price for a used 1080Ti, reducing the total upgrade cost.

If you've got a monitor and/or games that could really benefit from that 30-40% bump, and you want it now, then the 2080Ti is a great card to have.

That said, I have to agree with everyone else: NVidia should be releasing their next gen cards within 6 months or so. These cards are widely expected to be on the newer 7nm node, meaning that the 2180Ti (or whatever they call it) should be, at minimum, 30-50% faster than the 2080Ti, and will likely come with respun RTX segments to allow you to actually utilize RTX at the resolutions and framerates you'd expect from a $1000+ card.

So if you're not really hurting for a new card right this minute, I'd wait. If you are, though, it's not like you're going to have a bad time rockin' a 2080Ti for a few years.

Stanley Pain
Jun 16, 2001

by Fluffdaddy
Not an old game but I would KILL to see an RTX enabled version of the Deathwing game.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

DrDork posted:

NVidia should be releasing their next gen cards within 6 months or so.

Why so soon? Is there anything that actually indicates they'll cut the usual two year product cycle short? There's still no meaningful competition and we've even had a mid-cycle refresh with the supers this time around. I don't expect an announcement until like a year from now, and mainstream availability/reasonable pricing will probably take another 2-3 months in many parts of the world.

feedmegin
Jul 30, 2008

repiv posted:

Nvidia is hiring people to work on more :pcgaming: remasters a-la Quake 2 RTX



I wonder which game they're working on...

Pong

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

TheFluff posted:

Why so soon? Is there anything that actually indicates they'll cut the usual two year product cycle short? There's still no meaningful competition and we've even had a mid-cycle refresh with the supers this time around. I don't expect an announcement until like a year from now, and mainstream availability/reasonable pricing will probably take another 2-3 months in many parts of the world.

While NVidia hasn't announced it themselves, there have been multiple corroborating reports that they're targeting a 1H2020 release.

The reason why is pretty simple: the 20-series cards were lackluster in a lot of ways, and have not sold particularly well. Compared to the 10-series, they didn't improve on performance nearly as much as NVidia generally does with a new generation, and the price:performance ratio was disappointing thanks to price hikes.

There are a lot of reasons to believe that they never really wanted to do the 20-series on 12nm, but just couldn't make the jump to 7nm in time. Instead, they basically burned almost an entire generation on priming the pump for RTX. The 21-series (or whatever) should provide a substantial performance, power efficiency, and density jump compared to current cards. The end result of that is they should be able to get better yields (read: lower prices or better profits), and be able to turn RTX from its current status as more or less a tech demo into something that people with cards that don't end in 80Ti will actually be able to use and want.

And while AMD might not be able to compete with the 2080Ti, their mid-tier and low-end products are actually viable alternatives in many cases. Turning quickly to the next generation on a 7nm node should allow NVidia to take that space back again, and that space is where a ton of money is made. I mean, AMD was able to survive for the past how many years without a flagship card precisely because of mid- and low-end card sales.

There's really no reason for them to want to stay on 12nm any longer than they absolutely have to. Everyone has been assuming the 20-series would be short-lived compared to normal generations since day 1, really.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

DrDork posted:

While NVidia hasn't announced it themselves, there have been multiple corroborating reports that they're targeting a 1H2020 release.

The reason why is pretty simple: the 20-series cards were lackluster in a lot of ways, and have not sold particularly well. Compared to the 10-series, they didn't improve on performance nearly as much as NVidia generally does with a new generation, and the price:performance ratio was disappointing thanks to price hikes.

There are a lot of reasons to believe that they never really wanted to do the 20-series on 12nm, but just couldn't make the jump to 7nm in time. Instead, they basically burned almost an entire generation on priming the pump for RTX. The 21-series (or whatever) should provide a substantial performance, power efficiency, and density jump compared to current cards. The end result of that is they should be able to get better yields (read: lower prices or better profits), and be able to turn RTX from its current status as more or less a tech demo into something that people with cards that don't end in 80Ti will actually be able to use and want.

And while AMD might not be able to compete with the 2080Ti, their mid-tier and low-end products are actually viable alternatives in many cases. Turning quickly to the next generation on a 7nm node should allow NVidia to take that space back again, and that space is where a ton of money is made. I mean, AMD was able to survive for the past how many years without a flagship card precisely because of mid- and low-end card sales.

There's really no reason for them to want to stay on 12nm any longer than they absolutely have to. Everyone has been assuming the 20-series would be short-lived compared to normal generations since day 1, really.

The recent "leaks" in the last few days all seem to be barely better than pure speculation. They are also all talking about Ampere, which last I heard was compute-only (like Volta) with no consumer parts. I'd still say new consumer cards before summer 2020 is more wishful thinking than anything else.

e: pretty much all the reporting in the last few days that "new nvidia gpus are coming 1H2020!" from all the various outlets can be traced back to a single sentence near the end of an igor's lab post on the 1660 super and the 5500xt, which says:

quote:

Whereby up to the half of the year 2020 Ampere should finally come
(emphasis mine)
it's 100% wishful thinking all the way down lmao

the entire idea that turing's product cycle would be shorter than normal is just some weird idea people got into their heads because it wasn't as good as they'd hoped; it has no foundation in anything at all other than disappointment with the product

TheFluff fucked around with this message at 16:14 on Oct 10, 2019

TorakFade
Oct 3, 2006

I strongly disapprove


The longer they take, the more I can save up and the better the card I'll be able to get to replace my GTX 1080, a GPU released over 3 years ago that's still perfectly fine in every single AAA game at 1440p with mostly ultra settings.

These are the times when it feels good not having 4K or 144Hz :unsmith:

I guess I'll be on the lookout for a new card around the time Cyberpunk 2077 comes out or thereabouts :)

Riflen
Mar 13, 2009

"Cheating bitch"
Bleak Gremlin

TheFluff posted:

Why so soon? Is there anything that actually indicates they'll cut the usual two year product cycle short? There's still no meaningful competition and we've even had a mid-cycle refresh with the supers this time around. I don't expect an announcement until like a year from now, and mainstream availability/reasonable pricing will probably take another 2-3 months in many parts of the world.

It's not soon, it's almost precisely on the schedule Nvidia have been keeping since 2013.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Riflen posted:

It's not soon, it's almost precisely on the schedule Nvidia have been keeping since 2013.

:thunk:

GTX 680/Kepler, 28nm: March 2012
GTX 780/Kepler, 28nm: May 2013 (14 months)
GTX 750/Maxwell, 28nm: February 2014 (23 months from Kepler)
GTX 980/Maxwell, 28nm: September 2014 (16 months from 780)
GTX 1080/Pascal, 16nm: May 2016 (20 months from 980, 27 months from Maxwell)
RTX 2080/Turing, 12nm: September 2018 (28 months)

680 and 780 were both Kepler, Maxwell actually launched as early as February 2014 with the GTX 750, and both Kepler and Maxwell were on the 28nm node. But no, that could not possibly have made development faster and product cycles shorter. Clearly there is a strict 18 month product cycle here!

e: date calculation is hard, fixed some errors. the shortest time between new consumer card architectures since 2012 has been 23 months.
e2: not only that, but there's clearly no schedule even if you just consider times between x80 cards - rather, there's a clear trend towards longer intervals (14/16/20/28 months between releases).
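If anyone wants to check the math themselves, here's a quick throwaway script (the dates are just the launch months from the list above, pinned to the first of the month, so treat the counts as approximate):

code:
from datetime import date

# x80 launches only (the 750/Maxwell point is deliberately left out here)
launches = [
    ("GTX 680 / Kepler",  date(2012, 3, 1)),
    ("GTX 780 / Kepler",  date(2013, 5, 1)),
    ("GTX 980 / Maxwell", date(2014, 9, 1)),
    ("GTX 1080 / Pascal", date(2016, 5, 1)),
    ("RTX 2080 / Turing", date(2018, 9, 1)),
]

for (prev_name, prev), (name, cur) in zip(launches, launches[1:]):
    months = (cur.year - prev.year) * 12 + (cur.month - prev.month)
    print(f"{prev_name} -> {name}: {months} months")

# prints 14 / 16 / 20 / 28 months, i.e. the gaps keep getting longer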

TheFluff fucked around with this message at 18:04 on Oct 10, 2019

Cygni
Nov 12, 2005

raring to post

DrDork posted:

There's really no reason for them to want to stay on 12nm any longer than they absolutely have to. Everyone has been assuming the 20-series would be short-lived compared to normal generations since day 1, really.

This has been wishful thinking since day 1, honestly.

CrazyLoon
Aug 10, 2015

"..."

Cygni posted:

This has been wishful thinking since day 1, honestly.

Yea. I mean... NVidia's track record these past few years really has been so poo poo that it's made me consider AMD options as just far better deals overall, especially aftermarket ones. But that doesn't diminish the big pile of money NVidia is sitting on, which allows them to throw RTX sand in everyone's eyes while they take their sweet time with their version of 7nm.

Riflen
Mar 13, 2009

"Cheating bitch"
Bleak Gremlin

TheFluff posted:

:thunk:

GTX 680/Kepler, 28nm: March 2012
GTX 780/Kepler, 28nm: May 2013 (14 months)
GTX 750/Maxwell, 28nm: February 2014 (23 months from Kepler)
GTX 980/Maxwell, 28nm: September 2014 (16 months from 780)
GTX 1080/Pascal, 16nm: May 2016 (20 months from 980, 27 months from Maxwell)
RTX 2080/Turing, 12nm: September 2018 (28 months)

680 and 780 were both Kepler, Maxwell actually launched as early as February 2014 with the GTX 750, and both Kepler and Maxwell were on the 28nm node. But no, that could not possibly have made development faster and product cycles shorter. Clearly there is a strict 18 month product cycle here!

e: date calculation is hard, fixed some errors. the shortest time between new consumer card architectures since 2012 has been 23 months.

I didn't say 2012, you did.

Since 2013, the average time between releases of x80 parts has been 21 months.
The average time between releases of the x80 Ti parts has been 19 months.
There has been a Titan GPU released at least every calendar year (in 2017 we got two and the average time between releases is currently just under 12 months).

These schedules are dictated by technological and economic factors, and this trend only continues until it doesn't. But the middle of next year for an x80 Ampere, or whatever it'll be called, is in keeping with the trends seen over the last 6 years. Typically the x104 part (x80) is the first Geforce product released from a new architecture.

Edit: Seeing as you edited, I'll add that there were very credible reports that the next GPU design taped out in April 2019. In the past, tape-out has happened about a year and a bit before the first products are released, but again, it depends.

Riflen fucked around with this message at 18:26 on Oct 10, 2019

eames
May 9, 2009

I wonder if Cyberpunk 2077 is going to be a big enough deal for GPU manufacturers to rush something to market before it releases.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Riflen posted:

I didn't say 2012, you did.

Since 2013, the average time between releases of x80 parts has been 21 months.
The average time between releases of the x80 Ti parts has been 19 months.
Come on dude. When the actual intervals between x80 cards have been 14/16/20/28 months in that order, saying that the average is 21 months is not just meaningless, it's actively misleading. If you're excluding the 680, you're also taking an average of three (3) values.
The x80 Tis are just irrelevant to the discussion, and so are the Titans.

Again, the only reason that the 780 and the 980 were so close in time was that Maxwell launched with the 750, not with the 780. Look at the microarchs, not the x80 cards.

Riflen posted:

Edit: Seeing as you edited, I'll add that there were very credible reports that the next GPU design taped out in April 2019. In the past, tape out happens about a year and a bit before the first products are released but again, it depends.

Yes, I remember that leak too. What I also remember from that leak, though, which you either forgot or missed at the time, is that it said Ampere was likely compute-only.

TheFluff fucked around with this message at 18:51 on Oct 10, 2019

ufarn
May 30, 2009

eames posted:

I wonder if Cyberpunk 2077 is going to be a big enough deal for GPU manufacturers to rush something to market before it releases.
If anything, they should probably make something with a cybershroud since even gamer bros are vain as hell about their neon inferno with clear panels.

FuturePastNow
May 19, 2014


Isn't the biggest problem with the 20-series, from Nvidia's perspective, that it's kinda inefficient and expensive to make due to the AI/deeplearning tensor cores being essentially wasted die space for gaming use?

I could see them rushing out new high end GPUs that ditch almost all the AI poo poo.

VelociBacon
Dec 8, 2009

FuturePastNow posted:

Isn't the biggest problem with the 20-series, from Nvidia's perspective, that it's kinda inefficient and expensive to make due to the AI/deeplearning tensor cores being essentially wasted die space for gaming use?

I could see them rushing out new high end GPUs that ditch almost all the AI poo poo.

They won't do this because it would invalidate their earlier efforts + they have stock to clear.

isndl
May 2, 2012
I WON A CONTEST IN TG AND ALL I GOT WAS THIS CUSTOM TITLE

FuturePastNow posted:

Isn't the biggest problem with the 20-series, from Nvidia's perspective, that it's kinda inefficient and expensive to make due to the AI/deeplearning tensor cores being essentially wasted die space for gaming use?

I could see them rushing out new high end GPUs that ditch almost all the AI poo poo.

That's why they're trying to push DLSS and all that so they do have a gaming use, but so far it hasn't been going well.

Stickman
Feb 1, 2004

And so far doesn't actually use the tensor cores :v:

Arzachel
May 12, 2012

FuturePastNow posted:

Isn't the biggest problem with the 20-series, from Nvidia's perspective, that it's kinda inefficient and expensive to make due to the AI/deeplearning tensor cores being essentially wasted die space for gaming use?

I could see them rushing out new high end GPUs that ditch almost all the AI poo poo.

12nm wafers should be much cheaper than anything 7nm. If anything, Nvidia is saving money despite the large die sizes.

Riflen
Mar 13, 2009

"Cheating bitch"
Bleak Gremlin

TheFluff posted:

Come on dude. When the actual intervals between x80 cards have been 14/16/20/28 months in that order, saying that the average is 21 months is not just meaningless, it's actively misleading. If you're excluding the 680 you're also taking an average of three (3) values.
The x80Ti's are just irrelevant to the discussion and so are the Titans.

Again, the only reason that the 780 and the 980 were so close in time was that Maxwell launched with the 750, not with the 780. Look at the microarchs, not the x80 cards.


Yes, I remember that leak too. What I also remember from that leak though that you either forgot or missed at the time is that it said that Ampere was likely compute-only.

You seem to want to have an argument about this. There's no need to be combative. All I said was that June is 1H 2020 and that June-Sept is likely for Gx104, considering how Nvidia have operated in recent years. It's all speculation based on what little the public knows ahead of time anyway. I agree these recent reports have zero information in them and are probably just websites trying to get ad hits, but grabbing on to one-off events like how Maxwell was launched doesn't make a strong case for any other launch window either.

eames
May 9, 2009

I figure I'll leave this here because we don't seem to have a SH/SC streaming thread yet and the readers of this topic might be interested.

Google Stadia VP of engineering posted:

“Ultimately, we think in a year or two we’ll have games that are running faster and feel more responsive in the cloud than they do locally,” Bakar says to Edge, “regardless of how powerful the local machine is.”

https://www.pcgamesn.com/stadia/negative-latency-prediction

They plan to do this using fancy algorithms for input anticipation (prediction?), kind of like what I failed to describe here.

Arzachel
May 12, 2012

eames posted:

I figure I'll leave this here because we don't seem to have a SH/SC streaming thread yet and the readers of this topic might be interested.


https://www.pcgamesn.com/stadia/negative-latency-prediction

They plan to do this using fancy algorithms for input anticipation (prediction?), kind of like what I failed to describe here.

My Little Stadia: Algorithms are Magic

TorakFade
Oct 3, 2006

I strongly disapprove


eames posted:

Google Stadia VP of engineering posted:

“Ultimately, we think in a year or two we’ll have games that are running faster and feel more responsive in the cloud than they do locally,” Bakar says to Edge, “regardless of how powerful the local machine is.”

Not with the kind of latency, ping and generally poo poo broadband connections we have here, no. I have FTTC, but cabinet to house is still good ol' copper wire, and while I can get 60mbps download/20mbps upload, ping is very rarely less than 70-80ms in real-world situations and latency is pretty noticeable, much like on the 7mbps ADSL we all used to have a few years ago (or even 640kbps :v: ).

Unless they manage to renovate the entire network infrastructure on a global level so we all get FTTH in 1-2 years, which sounds absolutely ridiculous.

Maybe in smaller nations with already great infrastructure (Japan, Sweden...) and maybe big cities with >1,000,000 people in major nations? But there's not many of those where I live :v:

E: ok sorry missed the part of your post where apparently their solution to this problem is "algorithm predicting what action, button, or movement you’re likely to do next and render it ready for you", which I believe is utter bullshit

TorakFade fucked around with this message at 11:58 on Oct 11, 2019

Stickman
Feb 1, 2004

Yeah, it still needs to figure out which input you actually did and stream the image back to you, so it’ll only eliminate the internal server latency at best. I’d put good money on the network latency dwarfing the internal latency.

E: Unless it’s sending you 20 streams and you get to pick the one you did :v:

Stickman fucked around with this message at 12:13 on Oct 11, 2019

lllllllllllllllllll
Feb 28, 2010

Now the scene's lighting is perfect!
Latency? We have... negative latency.

isndl
May 2, 2012
I WON A CONTEST IN TG AND ALL I GOT WAS THIS CUSTOM TITLE
Without getting into the quagmire of just how they're leveraging predictive action to reduce perceived latency, I want to know how they plan on avoiding the latency impact of failed predictions. Like, what happens to the mouse cursor? If I move left when Stadia thinks right, is it going to jump?

I could see a game being designed around asymmetric UI paradigms and having each 'layer' predicted separately: Stadia doesn't predict the mouse layer, the UI layer can be cheaply prerendered, the game world layer runs at its own speed, and it's all stitched together locally. But that's something you'd have to build the game for from the ground up; you can't retrofit existing games into that kind of paradigm.

Inept
Jul 8, 2003

I'm pretty sure it's just bullshit marketing and they're not actually going to do any of that. They're trying to downplay the downsides to this with "we'll use math to fix it". It would probably be easier for them to send everyone who has Stadia a monitor with minimal input lag instead of a TV with 30ms+.

Google where's my CRT monitor

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I have high quality FTTH, and even then average RTT to local datacenters is in the 8-10ms range. Most of my gaming targets sub-15ms frame times. With just the network layer taking half or more of the time it takes my local system to render an entire frame, good luck ever getting latency to compare.
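For rough numbers (every figure in this little calc is a made-up ballpark to illustrate the point, not a measurement):

code:
# local: render a frame + monitor processing
local_ms = 15 + 5

# streamed: uplink + server render + encode + downlink + decode + monitor processing
streamed_ms = 5 + 15 + 5 + 5 + 5 + 5

print(f"local ~{local_ms} ms, streamed ~{streamed_ms} ms")
# even with a generous 10 ms RTT, the network leg alone is most of a local frame time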

The only way I could see this being true is if they cherry picked the scenario so you had someone on fiber sitting next to their datacenter but playing on some potato computer with an iGPU. Then maybe it would technically be faster because their local processing capacity would be almost nil, but that's not what their clipping said, so :shrug:

Wouldn't be the first tech startup to outright lie about the capabilities of their product, though.

Arzachel
May 12, 2012

isndl posted:

Without getting into the quagmire of just how they're leveraging predictive action to reduce perceived latency, I want to know how they plan on avoiding the latency impact of failed predictions. Like, what happens to the mouse cursor? If I move left when Stadia thinks right, is it going to jump?

I could see a game being designed around asymmetric UI paradigms and having each 'layer' predicted separately: Stadia doesn't predict the mouse layer, the UI layer can be cheaply prerendered, the game world layer runs at its own speed, and it's all stitched together locally. But that's something you'd have to build the game for from the ground up; you can't retrofit existing games into that kind of paradigm.

You could minimise the latency impact by rolling back to a previous game state and replaying it with your actual inputs, but that sounds like a stuttery mess for anything with direct camera controls.
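Something like this, very roughly (a purely hypothetical sketch of the rollback-and-replay idea; the names and structure are made up, not based on anything Stadia has actually described):

code:
import copy

class RollbackSim:
    """Keep snapshots of recent frames, advance on predicted input, and if the
    real input turns out different, rewind to that frame and re-simulate."""

    def __init__(self, initial_state, history=8):
        self.states = {0: copy.deepcopy(initial_state)}  # frame -> state snapshot
        self.inputs = {}                                 # frame -> input used
        self.frame = 0
        self.history = history

    def step(self, state, player_input):
        # stand-in for the real game update
        return dict(state, last_input=player_input)

    def advance(self, predicted_input):
        self.frame += 1
        self.inputs[self.frame] = predicted_input
        self.states[self.frame] = self.step(self.states[self.frame - 1], predicted_input)
        # forget snapshots too old to ever roll back to
        self.states.pop(self.frame - self.history, None)
        self.inputs.pop(self.frame - self.history, None)

    def confirm(self, frame, actual_input):
        # the real input arrives late; on a mispredict, rewind and replay
        if self.inputs.get(frame) == actual_input:
            return  # prediction was right, nothing to do
        if (frame - 1) not in self.states:
            return  # too old, can't roll back that far
        self.inputs[frame] = actual_input
        state = self.states[frame - 1]
        for f in range(frame, self.frame + 1):
            state = self.step(state, self.inputs[f])
            self.states[f] = state
        # every replayed frame snaps to the corrected timeline, which is
        # exactly the stutter you'd see with direct camera control

sim = RollbackSim({"pos": 0})
sim.advance(predicted_input="right")        # server guessed you'd keep going right
sim.confirm(frame=1, actual_input="left")   # real packet says left -> rewind and replay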

SwissArmyDruid
Feb 14, 2014

by sebmojo
Rollback netcode for all games, not just fighting games. Let's go.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Riflen posted:

You seem to want to have an argument about this. There's no need to be combative. All I said was that June is 1H 2020 and that June-Sept is likely for Gx104, considering how Nvidia have operated in recent years. It's all speculation based on what little the public knows ahead of time anyway. I agree these recent reports have zero information in them and are probably just websites trying to get ad hits, but grabbing on to one-off events like how Maxwell was launched doesn't make a strong case for any other launch window either.

:shrug:

I disagree, and I'd say there's a pretty convincing case for the next xx80 coming later than that. We have the leak saying Ampere taped out back in April and that it's likely compute-only. Supporting that is the EEC filing that WCCFTech reported on recently, where the only two Ampere parts listed are a GA104 and a GA104-400. A GA104 with no suffix sounds like a Quadro part, and while a GA104-400 could be an xx80 part, it's more likely a Titan. This supports my belief that Ampere will be like Volta, and the next xx80 parts will probably not launch before Q4 2020 given the past track record of roughly two years or more between consumer microarch launches in the last decade (it goes further back than Kepler - Fermi launched in March 2010, 24 months before Kepler). Fermi and Kepler had two x80 parts each, which contributed to shorter product cycles, but that hasn't been a thing since Maxwell.

Even supposedly well-informed publications like WCCFTech bring up completely inane arguments like "well, Nvidia took a beating in the recent quarterly reports" as something that makes a consumer card launch likely to happen earlier, but designing, prototyping, testing and mass-producing a new microarch takes several years and involves huge supplier lead times - it's not something you can just rush out because the latest quarterly report looks bad. The roadmap and launch quarter for Ampere and whatever comes next was probably decided over a year ago already and there's nothing that can be done to change it now.

Or maybe I'm wrong and Ampere will launch compute-only at first but get consumer parts soon after; what do I know. I don't see any arguments other than "it would feel good" for a 1H2020 consumer launch, though. I really want to replace my GTX1080 myself but I don't feel hopeful about getting to do it soon.

edit: haha never mind, i shouldn't have trusted wccftech. the filing they're reporting on is from back in august 2018, at which time they made this piece of highly accurate reporting based on it (like seriously, it's hilarious how confidently they are wrong). never mind, not much reason to believe anything at all right now.

TheFluff fucked around with this message at 17:03 on Oct 11, 2019

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

SwissArmyDruid posted:

Rollback netcode for all games not just fighting games, let's go.

Fighting games were the last genre to get it. Half-Life pioneered it in 1999 or 2000 and almost every non-fighting game made since at least 2004 has it.

Cygni
Nov 12, 2005

raring to post

lllllllllllllllllll posted:

Latency? We have... negative latency.

Stadia is officially a console, as we have reached console-level lovely marketing.

Truga
May 4, 2014
Lipstick Apathy
Yeah, and that's why netcode post-quake3 has been a pile of garbage.

Quake 2 still has the best amount of prediction for an online multiplayer game.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE
reminder that everyone on a 60hz monitor that's using v-sync (which is to say most people) likely has ~90ms input lag already
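rough napkin math for where a number like that comes from (every figure below is a ballpark guess, not a measurement):

code:
frame = 1000 / 60  # ~16.7 ms per refresh at 60 Hz

budget = {
    "input sampled, waits for next sim tick (avg)": frame / 2,
    "CPU sim + draw call submission":               frame,
    "GPU render, queued one frame behind":          frame,
    "extra pre-rendered frame in the queue":        frame,
    "wait for vblank + scanout":                    frame,
    "monitor processing":                           15.0,
}

print(round(sum(budget.values())))  # ~90 ms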

TheFluff fucked around with this message at 17:23 on Oct 11, 2019

Dead Goon
Dec 13, 2002

No Obvious Flaws

Well, we're never going to see Half-Life 3, so we may as well raytrace the gently caress out of the games we do have.

Black Mesa does look pretty good though!
