bus hustler
Mar 14, 2019

Ha Ha Ha... YES!
I have no info on the supply, and I'm not making an argument for big demand; I'm simply saying that the card exists and it is not a paper launch. Pointing to it being out of stock everywhere is not at all the same as saying "tell me if you find a retailer actually selling that card," which to any reasonable person implies it's a paper launch and the product isn't for sale.

It exists, in whatever limited quantity; it's just being gobbled up.

I need.... 450 of them, so that's partly why they go out of stock. There are lots of shops like ours. :) We do not care about value; we have an existing stock of all-SFF machines that suddenly need a GPU for CUDA-accelerated medical imaging, or we buy them with the cheapest dGPU (GT 730, gross) from Dell and swap it on arrival. I've complained about this before ITT I think - we really need to move toward a decentralized model, but they'd rather buy $200 GPUs all day than capex the million+ for VDI.

Also hoping that Dell's AIO, which currently ships with a 1050, will get a bump to a 1650...

bus hustler fucked around with this message at 20:11 on Jan 3, 2020

Shaocaholica
Oct 29, 2002

Fig. 5E
Plz save 1 out of that 450 for me.

Its Chocolate
Dec 21, 2019
Is there a reason not to use a Nvidia card in an AMD system?

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Its Chocolate posted:

Is there a reason not to use a Nvidia card in an AMD system?

No, Nvidia GPUs work perfectly fine with AMD CPUs; NV even promotes the combination themselves at times.

Worf
Sep 12, 2017

If only Seth would love me like I love him!

Nvidia probably about to be doing a lot more of that lol

E: That was a good point on the big companies buying those up that I didn't consider - I had a whole post responding to that that I thought I already posted tbh. But yeah, good point regardless, and yes, 100% those are amazing if you already have a system to drop them into

Shaocaholica
Oct 29, 2002

Fig. 5E
I just bought a Dell 9010 SFF to run a dual slot low profile 1650 because boredom.

bus hustler
Mar 14, 2019

Ha Ha Ha... YES!
My HTPC is a 990 with a GT 1030 and a USB 3.0 card, but I really want to bump it to a 9020 so I can use a dual slot card for plex. The 9010 has the same processors as the 990, but native 3.0, so I guess that would work as a side grade...

Shaocaholica
Oct 29, 2002

Fig. 5E
I don’t think the 9020 supports dual slot cards tho.

Worf
Sep 12, 2017

If only Seth would love me like I love him!

charity rereg posted:

My HTPC is a 990 with a GT 1030 and a USB 3.0 card, but I really want to bump it to a 9020 so I can use a dual slot card for plex. The 9010 has the same processors as the 990, but native 3.0, so I guess that would work as a side grade...

What exactly are you trying to do here that you need an upgraded card for

bus hustler
Mar 14, 2019

Ha Ha Ha... YES!

Statutory Ape posted:

What exactly are you trying to do here that you need an upgraded card for

i5 2500 sucks butt at CPU encoding and the GT 1030 doesn't support NVENC; you need a 1050ti or Quadro P400 :smith: My ideal final setup would have an encoding card AND USB 3.0 for external storage.

I'm fully aware I can remove the USB 3.0 card and fit a 1050ti, this is not an urgent project - the GT 1030 was free, I'm swimming in these cards from work and didn't know it didn't support encoding when I installed it. I have a rack of pulled GT 730/1030s laying around from those Dell SFFs.
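For anyone wiring up the same thing, here's a minimal sketch of the NVENC transcode path Plex leans on, driven by hand with ffmpeg from Python. It assumes an ffmpeg build with NVENC enabled and a card that actually has the encoder (1050 Ti / P400 yes, GT 1030 no); input.mkv/output.mp4 are placeholder names.

code:

import subprocess

# Rough sketch: offload the H.264 encode to NVENC instead of the poor i5-2500.
# Assumes ffmpeg was built with NVENC support and the GPU exposes the encoder
# (GT 1030 does not; 1050 Ti and Quadro P400 do). Filenames are placeholders.
cmd = [
    "ffmpeg",
    "-hwaccel", "cuda",      # decode on the GPU where possible
    "-i", "input.mkv",
    "-c:v", "h264_nvenc",    # hardware H.264 encode via NVENC
    "-c:a", "copy",          # pass the audio through untouched
    "output.mp4",
]
subprocess.run(cmd, check=True)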

bus hustler fucked around with this message at 01:49 on Jan 4, 2020

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Its Chocolate posted:

Is there a reason not to use a Nvidia card in an AMD system?

No, there's no relevant argument for "matching" them.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Indeed, if NVidia also releases PCIe4 cards next gen, AMD processors will be your only option.

Shaocaholica
Oct 29, 2002

Fig. 5E

SwissArmyDruid posted:

Indeed, if NVidia also releases PCIe4 cards next gen, AMD processors will be your only option.

Only to run at gen4 bandwidth. Does it even matter tho?

SwissArmyDruid
Feb 14, 2014

by sebmojo
It always matters in the datacenter.
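For a rough sense of what's actually at stake bandwidth-wise, this is just arithmetic on the published per-lane rates (both gens use 128b/130b encoding):

code:

# Back-of-envelope PCIe x16 bandwidth, one direction.
# PCIe 3.0: 8 GT/s per lane; PCIe 4.0: 16 GT/s per lane; 128b/130b encoding.
def x16_bandwidth_gb_s(gt_per_s, lanes=16):
    return gt_per_s * lanes * (128 / 130) / 8  # GB/s

print(f"PCIe 3.0 x16: {x16_bandwidth_gb_s(8):.1f} GB/s")    # ~15.8 GB/s
print(f"PCIe 4.0 x16: {x16_bandwidth_gb_s(16):.1f} GB/s")   # ~31.5 GB/s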

eames
May 9, 2009

Some pretty rare and vague rumors about Ampere from Industry analysts... :salt:

https://www.notebookcheck.net/Yuant...r.449041.0.html

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
I'm sure it'll reverse climate change and exhaust delicious vanilla ice cream, too.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
At first that sounds ridiculous, but I'm not sure it's entirely impossible. 40% generational performance improvement is fairly normal for Nvidia, but they've got more time, a bigger process leap, and a relatively weaker prior generation than usual here, so all told 50% may happen. The efficiency increase is the bigger stretch to me - I could believe Ampere coming in at 2/3 the power, but half? Probably outside the realm of possibility.
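Just to put numbers on how big a claim that is, here are the implied perf-per-watt multipliers - plain arithmetic on the rumored figures, nothing more:

code:

# Perf/W multiplier implied by a rumored performance gain and power cut.
def perf_per_watt_gain(perf_mult, power_mult):
    return perf_mult / power_mult

print(perf_per_watt_gain(1.50, 0.50))   # rumor as stated: +50% perf at half the power -> 3.0x
print(perf_per_watt_gain(1.50, 2 / 3))  # the more believable 2/3-power version -> 2.25x
print(perf_per_watt_gain(1.40, 1.00))   # typical generational jump at the same power -> 1.4x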

Worf
Sep 12, 2017

If only Seth would love me like I love him!

I was literally about to get out of bed and start doing numbers on that myself lmao.

I was going to check and see what even a 25% reduction would be like. If you reduce the power req by 50% IDGAF if you even make it more performant.

I agree on the voodoo on that but I assure you I'm happy to be wrong af

E: lol haha no pun intended haha sweet

CrazyLoon
Aug 10, 2015

"..."
Probably about as likely to be true as Navi 21 being twice as fast, and BS for the same reasons.

So it's certainly possible to have meaningful gains, just no way in hell gains of that size; it's likely just the hypebeasts on both sides getting to work for this year.

eames
May 9, 2009

K8.0 posted:

At first that sounds ridiculous, but I'm not sure it's entirely impossible. 40% generational performance improvement is fairly normal for Nvidia, but they've got more time, a bigger process leap, and a relatively weaker prior generation than usual here, so all told 50% may happen. The efficiency increase is the bigger stretch to me - I could believe Ampere coming in at 2/3 the power, but half? Probably outside the realm of possibility.

I mostly agree with your post, particularly if they skip the regular TSMC 7nm (Zen2, Navi) process and go straight to the improved 7+ process.

Half the power consumption does sound unrealistic for desktop products, but in mobile chips, which generally run in the more efficient part of the power/perf curve, I could see that happening. Perhaps the next gen is going to be another Pascal-level jump in those regards; wouldn’t that be nice.

Mr.PayDay
Jan 2, 2004
life is short - play hard

K8.0 posted:

At first that sounds ridiculous, but I'm not sure it's entirely impossible. 40% generational performance improvement is fairly normal for Nvidia, but they've got more time, a bigger process leap, and a relatively weaker prior generation than usual here, so all told 50% may happen. The efficiency increase is the bigger stretch to me - I could believe Ampere coming in at 2/3 the power, but half? Probably outside the realm of possibility.

What will be new in the next iteration are comparisons of two fps-plus-resolution scenarios: RTX off and RTX on, compared to the first Turing/2000 generation.
IIRC Nvidia won't be able to deliver 35-50% more fps in both scenarios, so they have to decide: boost raytracing fps or RTX-off "classic" render/shader fps, and how big the VRAM will be, all of which feeds into the size of the GPU after all.

It may force them to find a compromise.
On one hand, only 25% more RTX-off fps from a 3070 and 3080 compared to the 2070 and 2080 will lead to gamer rage.
On the other hand, only 25% additional raytracing fps will upset everyone who skipped the 2000 cards, because everyone is "sure" the next iteration will (no matter how absurd that is) double the raytracing fps.

If they somehow manage to get 40-50% above a 2080 Ti with RTX on AND off, it's hard to imagine it will be below a 1400 Euro/Dollar Founders price, which usually leads to market/street prices 20-30% higher.

I would expect a 3070 that is 25% faster with RTX off and 40% faster with RTX on compared to a 2070 or 2070S, with 8 GB VRAM, for 599 Euro/Dollar (street prices).

I am hoping to be way off (too high) with my worst-case expectations.
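The street-price guess above is just the usual markup applied to a Founders-style number; spelled out (these are the post's hypothetical figures, not anything announced):

code:

# Street price implied by a hypothetical Founders price plus the usual 20-30% markup.
founders = 1400  # Euro/Dollar, the post's worst-case guess, not an announced price
for markup in (0.20, 0.30):
    print(f"{founders} +{markup:.0%} -> {founders * (1 + markup):.0f}")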

Mr.PayDay fucked around with this message at 20:13 on Jan 4, 2020

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
I think we can safely say that nvidia will drop the tensor cores unless they are required for ray tracing. That might give them back some headroom

shrike82
Jun 11, 2005

the tensor cores are huge for mixed precision ML training/quantized inferencing so i'm skeptical they'd take it out.
the X070s have become the default starter platform for ML students and the X080 Tis the default for small research/startup teams.
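for context on what those tensor cores get used for, here's the shape of a mixed-precision training loop with PyTorch's AMP utilities - a toy model and random data, purely to show the autocast/GradScaler pattern the FP16 tensor-core path accelerates:

code:

import torch
import torch.nn as nn

# Toy mixed-precision training loop - the kind of workload tensor cores accelerate.
# Model and data are placeholders; the autocast + GradScaler pattern is the point.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(1024, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 1024, device=device)
y = torch.randint(0, 10, (64,), device=device)

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.cross_entropy(model(x), y)  # FP16 matmuls hit the tensor cores
    scaler.scale(loss).backward()  # scale the loss so FP16 gradients don't underflow
    scaler.step(optimizer)
    scaler.update()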

Cygni
Nov 12, 2005

raring to post

They would only take the tensor cores out of the consumer cards if they plan to develop a completely separate architecture for the enterprise market, top to bottom. Which is expensive... but is exactly what has been rumored lately.

Stickman
Feb 1, 2004

Mr.PayDay posted:

What will be new in the next iteration are comparisons of two fps-plus-resolution scenarios: RTX off and RTX on, compared to the first Turing/2000 generation.
IIRC Nvidia won't be able to deliver 35-50% more fps in both scenarios, so they have to decide: boost raytracing fps or RTX-off "classic" render/shader fps, and how big the VRAM will be, all of which feeds into the size of the GPU after all.

Where are you getting this from? That'd only be true if the process didn't deliver a true 40% uplift at each price point and they needed to cut the RT/shader ratio to hit a price point. I don't think that the RT cores are doing anything that wouldn't scale more or less linearly by adding more cores unless it's too cpu-limited at the high end (but repiv can correct me if I'm wrong).

E: I suppose it could be true if the IPC improvement to the shader cores can't be matched by the RT cores and requires a shift in the core ratio to maintain the performance ratio. It's certainly not going to be a linear split, though - the shader cores comprise a significantly larger portion of the die than the RTX cores and "RTX-on" performance relies on both.

Kazinsal
Dec 13, 2011

Cygni posted:

They would only take the tensor cores out of the consumer cards if they plan to develop a completely separate architecture for the enterprise market, top to bottom. Which is expensive... but is exactly what has been rumored lately.

Expensive both for Nvidia and the end buyer.

That being said, lol we're all used to paying $2000 for a $300 SSD in the enterprise space, so I look forward to seeing tensor core cards starting at ten grand per.

Mr.PayDay
Jan 2, 2004
life is short - play hard

shrike82 posted:

the tensor cores are huge for mixed precision ML training/quantized inferencing so i'm skeptical they'd take it out.
the X070s have become the default starter platform for ML students and the X080 Tis the default for small research/startup teams.

Yep. Nvidia considers the whole AI/DLI/ML stuff as a blockbuster on their agenda.

https://news.developer.nvidia.com/tensorrt-7-conversational-ai/

The Tensor architecture will get even more attention, not less.


https://arxiv.org/pdf/1902.05942.pdf
The parallel path space filtering paper linked from https://news.developer.nvidia.com/massively-parallel-path-space-filtering-in-game-development/ under their GAMEWORKS section, plus "Raytracing" as the first topic on the left, is a hint that Nvidia might prioritize this stuff even further.

quote:

The upcoming GameWorks SDK — which will support Volta and future generation GPU architectures — enable ray-traced area shadows, ray-traced glossy reflections and ray-traced ambient occlusion.

The RTX off gains might even be less than the avg fps jump from the 1080 to the 2080 or 1080Ti vs 2080Ti if Nvidia's priority is RT/AI/DLSS/ML etc.

That’s just me pulling that out of nowhere tho.

If Nvidia pushes ~ 40% RTX on and off fps gains each for the next 3060/3070/3080/3080Ti iterations, that would be amazing, of course. If the prices stay similar.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Serious Hardware / Software Crap >> GPU Megathread:

Mr.PayDay posted:

That’s just me pulling that out of nowhere tho.

Mr.PayDay
Jan 2, 2004
life is short - play hard

Stickman posted:

Where are you getting this from?
From Nvidia's Lars Weinand himself, at his presentation at the German Dreamhack.

https://www.it-business.de/index.cfm?pid=7531&pk=8570&fk=1520844&type=gallery

Nvidia would sacrifice RTX-off gains if they had to choose; they will and have to push their next-level business direction, which is the raytracing, neural network, AI, ML etc. stuff.

This info is from spring 2019 tho I honestly don’t know how Nvidia might have changed their agenda.

Mr.PayDay
Jan 2, 2004
life is short - play hard

Subjunctive posted:

Serious Hardware / Software Crap >> GPU Megathread:



https://wccftech.com/nvidia-shows-that-their-geforce-rtx-gpus-are-much-faster-powerful-than-next-gen-consoles/

Edit: "One of the very first things that NVIDIA allegedly wanted to communicate to its partners was that it's still definitely all-in on ray tracing."

https://wccftech.com/nvidia-ampere-rumors-massive-rt-performance-uplift-higher-clocks-more-vram-lower-tdps-vs-turing/

It’s not like there are not tons of hints tho ;)

Mr.PayDay fucked around with this message at 02:59 on Jan 5, 2020

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Still all in on ray-tracing...

...they'll just lobby Khronos to start calling Vulkan "RTX-Compatible."

Cactus
Jun 24, 2006

Is nvidia gonna release a new card in time for the cyberpunky or should i just get the 2080ti now and call it a day?

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Cactus posted:

Is nvidia gonna release a new card in time for the cyberpunky or should i just get the 2080ti now and call it a day?

Depends on if CP2077 gets delayed again. I wouldn't expect the 3070/80 cards until Computex>early Q4.

MH Knights
Aug 4, 2007

BIG HEADLINE posted:

I wouldn't expect the 3070/80 cards until Computex>early Q4.

Is that launching the cards, or actual mass availability of the cards?

repiv
Aug 13, 2009

Cygni posted:

They would only take the tensor cores out of the consumer cards if they plan to develop a completely separate architecture for the enterprise market, top to bottom. Which is expensive... but is exactly what has been rumored lately.

comedy option: they keep tensor cores on the silicon but fuse them off on consumer cards, like they used to do with double precision

consumers are none the wiser since nobody figured out a use-case for them anyway, and those drat cheapskate ML startups have to pay the tesla/quadro tax :homebrew:
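if you're curious what your own card admits to, here's a quick probe along those lines (using PyTorch's device queries; compute capability 7.0+, i.e. Volta/Turing, is roughly where tensor cores show up):

code:

import torch

# Rough probe of what the installed GPU exposes.
# Compute capability >= 7.0 (Volta/Turing) is roughly where tensor cores appear;
# FP64 throughput is the thing NV historically fused down on consumer parts.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: compute capability {major}.{minor}, "
          f"{props.total_memory / 2**30:.1f} GiB VRAM")
    print("Tensor cores likely present:", (major, minor) >= (7, 0))
else:
    print("No CUDA device visible")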

shrike82
Jun 11, 2005

Have there been any other GPU generational transitions where a manufacturer removed a major bit of functionality like that?

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

MH Knights posted:

Is that launching the cards or actual mass/actual availability of the cards?

Probably limited availability, plus nVidia has liked to launch the x80s first and follow with the x70 1-2 months later.

The big question will be if they launch the x80Ti again at the same time like they did with the 20-series.

B-Mac
Apr 21, 2003
I'll never catch "the gay"!

BIG HEADLINE posted:

Probably limited availability, plus nVidia has liked to launch the x80s first and follow with the x70 1-2 months later.

The big question will be if they launch the x80Ti again at the same time like they did with the 20-series.

Wouldn’t be surprised to see them go back to their normal release schedule, with the x80 Ti 9 months or so after the x80 part.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Mr.PayDay posted:

Yep. Nvidia considers the whole AI/DLI/ML stuff as a blockbuster on their agenda.

https://news.developer.nvidia.com/tensorrt-7-conversational-ai/

The Tensor architecture will get even more attention, not less.


https://arxiv.org/pdf/1902.05942.pdf
The parallel path space filtering paper linked from https://news.developer.nvidia.com/massively-parallel-path-space-filtering-in-game-development/ under their GAMEWORKS section, plus "Raytracing" as the first topic on the left, is a hint that Nvidia might prioritize this stuff even further.


The RTX off gains might even be less than the avg fps jump from the 1080 to the 2080 or 1080Ti vs 2080Ti if Nvidia's priority is RT/AI/DLSS/ML etc.

That’s just me pulling that out of nowhere tho.

If Nvidia pushes ~ 40% RTX on and off fps gains each for the next 3060/3070/3080/3080Ti iterations, that would be amazing, of course. If the prices stay similar.

The tensor cores are essentially useless for gaming given that DLSS flopped

NV can and will keep em around for the dedicated ML market, but that's TESLA stuff

NV already has a split with Volta/Turing: there's probably gonna be a dedicated compute (TESLA) card with no RT hw, a dedicated big graphics for quadro/titan with the pro stuff, maybe some minor tensor stuff, and then strip it off for the consumer graphics/gaming skus

Malcolm XML fucked around with this message at 05:34 on Jan 5, 2020

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

B-Mac posted:

Wouldn’t be surprised to see them go back to their normal release schedule, with the x80 Ti 9 months or so after the x80 part.

Nah, why leave money on the table when we can expect the "SUPER" refreshes now as well?
