|
I have no info on the supply, and I'm not making an argument for big demand. I'm simply saying that the card exists, it is not a paper launch, and pointing to it being out of stock everywhere is not at all the same as saying "tell me if you find a retailer actually selling that card," which to any reasonable person implies it's a paper launch and the product isn't for sale. It is for sale, in whatever limited quantity; it's just being gobbled up. I need... 450 of them, so that's partly why they go out of stock. There are lots of shops like ours. We do not care about value; we have an existing stock of all-SFF machines that suddenly need a GPU for CUDA-accelerated medical imaging, or we buy them with the cheapest dGPU (GT 730, gross) from Dell and swap on arrival. I've complained about this before ITT I think. We really need to move toward a decentralized model, but they'd rather buy $200 GPUs all day than capex the million+ for VDI. Also hoping that Dell's AIO, which currently ships with a 1050, will get a bump to a 1650... bus hustler fucked around with this message at 20:11 on Jan 3, 2020
# ? Jan 3, 2020 20:07 |
|
|
|
Plz save 1 out of that 450 for me.
|
# ? Jan 3, 2020 20:57 |
|
Is there a reason not to use a Nvidia card in an AMD system?
|
# ? Jan 3, 2020 21:03 |
|
Its Chocolate posted:Is there a reason not to use a Nvidia card in an AMD system? No, Nvidia GPUs work perfectly fine with AMD CPUs; NV even promotes the combination themselves at times.
|
# ? Jan 3, 2020 21:13 |
|
Nvidia probably about to be doing a lot more of that lol E: That was a good point on the big companies buying those up that I didn't consider; I had a whole post responding to that that I thought I already posted tbh. But yeah, good point regardless, and yes, 100% those are amazing if you already have a system to drop them into.
|
# ? Jan 3, 2020 21:40 |
|
I just bought a Dell 9010 SFF to run a dual slot low profile 1650 because boredom.
|
# ? Jan 3, 2020 23:24 |
|
My HTPC is a 990 with a GT 1030 and a USB 3.0 card, but I really want to bump it to a 9020 so I can use a dual slot card for plex. The 9010 has the same processors as the 990, but native 3.0, so I guess that would work as a side grade...
|
# ? Jan 3, 2020 23:48 |
|
I don’t think the 9020 supports dual slot cards tho.
|
# ? Jan 4, 2020 00:02 |
|
charity rereg posted:My HTPC is a 990 with a GT 1030 and a USB 3.0 card, but I really want to bump it to a 9020 so I can use a dual slot card for plex. The 9010 has the same processors as the 990, but native 3.0, so I guess that would work as a side grade... What exactly are you trying to do here that you need an upgraded card for?
|
# ? Jan 4, 2020 00:12 |
|
Statutory Ape posted:What exactly are you trying to do here that you need an upgraded card for i5 2500 sucks butt at CPU encoding and the GT 1030 doesn't support NVENC; you need a 1050 Ti or Quadro P400. My ideal final setup would have an encoding card AND USB 3.0 for external storage. I'm fully aware I can remove the USB 3.0 card and fit a 1050 Ti; this is not an urgent project. The GT 1030 was free, I'm swimming in these cards from work, and I didn't know it didn't support encoding when I installed it. I have a rack of pulled GT 730s/1030s lying around from those Dell SFFs. bus hustler fucked around with this message at 01:49 on Jan 4, 2020
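For anyone wiring up the same kind of box, here's a rough sketch of what offloading a Plex-style transcode to NVENC looks like via ffmpeg. The file names and bitrate are made-up examples; the flags are standard ffmpeg NVENC options, but check `ffmpeg -encoders` on your build before assuming support.

```python
# Sketch: build an ffmpeg command that offloads video encoding to NVENC,
# assuming an NVENC-capable card (1050 Ti / Quadro P400 or newer).
# Input/output names are hypothetical.

def nvenc_transcode_cmd(src, dst, codec="h264_nvenc", bitrate="8M"):
    """Return an ffmpeg argv list that encodes on the GPU via NVENC."""
    return [
        "ffmpeg",
        "-hwaccel", "cuda",   # decode on the GPU where possible
        "-i", src,
        "-c:v", codec,        # h264_nvenc or hevc_nvenc
        "-b:v", bitrate,
        "-c:a", "copy",       # pass audio through untouched
        dst,
    ]

cmd = nvenc_transcode_cmd("movie.mkv", "movie_out.mp4")
print(" ".join(cmd))
```

Plex builds an equivalent command internally when hardware transcoding is enabled; the point is just that the encode never touches the poor i5-2500.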
# ? Jan 4, 2020 01:30 |
|
Its Chocolate posted:Is there a reason not to use a Nvidia card in an AMD system? No, there's no relevant argument for "matching" them.
|
# ? Jan 4, 2020 02:13 |
|
Indeed, if NVidia also releases PCIe4 cards next gen, AMD processors will be your only option.
|
# ? Jan 4, 2020 06:21 |
|
SwissArmyDruid posted:Indeed, if NVidia also releases PCIe4 cards next gen, AMD processors will be your only option. Only to run at gen4 bandwidth. Does it even matter tho?
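Back-of-envelope numbers on what's actually at stake, assuming an x16 slot and the usual per-lane rates after line-code overhead:

```python
# Approximate one-direction PCIe bandwidth for an x16 graphics slot.
# Per-lane rates in GB/s after 128b/130b encoding overhead (gen 3 and 4).
per_lane = {"3.0": 0.985, "4.0": 1.969}

for gen, rate in per_lane.items():
    print(f"PCIe {gen} x16: ~{rate * 16:.1f} GB/s per direction")
# -> ~15.8 GB/s for gen 3, ~31.5 GB/s for gen 4
```

Games rarely saturate even gen 3 x16, which is why the doubling mostly matters for compute workloads shuffling big datasets over the bus.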
|
# ? Jan 4, 2020 06:27 |
|
It always matters in the datacenter.
|
# ? Jan 4, 2020 06:28 |
|
Some pretty rare and vague rumors about Ampere from Industry analysts... https://www.notebookcheck.net/Yuant...r.449041.0.html
|
# ? Jan 4, 2020 09:33 |
|
I'm sure it'll reverse climate change and exhaust delicious vanilla ice cream, too.
|
# ? Jan 4, 2020 09:41 |
|
At first that sounds ridiculous, but I'm not sure it's entirely impossible. 40% generational performance improvement is fairly normal for Nvidia, but they've got more time, a bigger process leap, and a relatively weaker prior generation than usual here, so all told 50% may happen. The efficiency increase is the bigger stretch to me - I could believe Ampere coming in at 2/3 the power, but half? Probably outside the realm of possibility.
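A quick sanity check on what those rumored figures would imply for perf-per-watt, using the numbers from this post:

```python
# Rumor: +50% performance at half the power would be 3x perf/W in one gen.
rumored = 1.50 / 0.50
print(rumored)  # 3.0

# The more believable scenario above: +50% performance at 2/3 the power.
plausible = 1.50 / (2 / 3)
print(round(plausible, 2))  # 2.25
```

For reference, even the celebrated Maxwell-to-Pascal jump landed somewhere around 1.5-1.7x perf/W in most reviews, which is why 3x reads as hype.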
|
# ? Jan 4, 2020 10:49 |
|
I was literally about to get out of bed and start doing numbers on that myself lmao. I was going to check and see what even a 25% reduction would be like. If you reduce the power req by 50%, IDGAF if you even make it more performant. I agree on the voodoo on that, but I assure you I'm happy to be wrong af. E: lol haha no pun intended haha sweet
|
# ? Jan 4, 2020 10:51 |
|
Probably about as likely to be true as Navi 21 being twice as fast, and BS for the same reasons. So it's certainly possible to have meaningful gains, just no way in hell those kinds, and it's likely just the hypebeasts on both sides getting to work for this year.
|
# ? Jan 4, 2020 11:07 |
|
K8.0 posted:At first that sounds ridiculous, but I'm not sure it's entirely impossible. 40% generational performance improvement is fairly normal for Nvidia, but they've got more time, a bigger process leap, and a relatively weaker prior generation than usual here, so all told 50% may happen. The efficiency increase is the bigger stretch to me - I could believe Ampere coming in at 2/3 the power, but half? Probably outside the realm of possibility. I mostly agree with your posting, particularly if they skip the regular TSMC 7nm (Zen 2, Navi) process and go straight to the improved 7nm+ process. Half the power consumption does sound unrealistic for desktop products, but in mobile chips, which generally run in the more efficient part of the power/perf curve, I could also see that happening. Perhaps the next gen is going to be another Pascal-level jump in those regards; wouldn't that be nice.
|
# ? Jan 4, 2020 11:10 |
|
K8.0 posted:At first that sounds ridiculous, but I'm not sure it's entirely impossible. 40% generational performance improvement is fairly normal for Nvidia, but they've got more time, a bigger process leap, and a relatively weaker prior generation than usual here, so all told 50% may happen. The efficiency increase is the bigger stretch to me - I could believe Ampere coming in at 2/3 the power, but half? Probably outside the realm of possibility. What will be new in the next iteration are comparisons of two fps + resolution scenarios: RTX off and RTX on, compared to the first Turing/2000 generation. IIRC Nvidia won't be able to deliver 35-50% more fps in both scenarios, so they have to decide: boost raytracing fps, or RTX-off "classic" render/shader fps? And how big will the VRAM be? All of which determines the size of the GPU. It may force them to find a compromise. On the one hand, only 25% more fps with RTX off on a 3070 and 3080 compared to the 2070 and 2080 will lead to gamer rage. On the other hand, only 25% additional raytracing fps will upset everyone who skipped the 2000 cards, because everyone is "sure" the next iteration will (no matter how absurd that is) double the raytracing fps. If they somehow manage to get 40-50% above a 2080 Ti with RTX on AND off, it's hard to imagine it will be a below-1400 Euro/Dollar Founders GPU, which leads to market/street prices of usually 20-30% more. I would expect a 3070 that is 25% faster with RTX off and 40% faster with RTX on compared to a 2070 or 2070S, with 8 GB VRAM, for 599 Euro/Dollar (street prices). I am hoping to be way off (too high) with my worst-case expectations. Mr.PayDay fucked around with this message at 20:13 on Jan 4, 2020
# ? Jan 4, 2020 20:02 |
|
I think we can safely say that Nvidia will drop the tensor cores unless they are required for ray tracing. That might give them back some headroom.
|
# ? Jan 5, 2020 01:43 |
|
the tensor cores are huge for mixed precision ML training/quantized inferencing so i'm skeptical they'd take it out. the x070s have become the default starter platform for ML students and the x080 Tis the default for small research/startup teams.
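For context on why mixed precision works at all (and why the tensor cores accumulate in fp32 under the hood): fp16 runs out of integer precision fast. A toy numpy illustration, nothing to do with actual tensor-core kernels:

```python
import numpy as np

# Above 2048, fp16's spacing between representable values is 2, so adding
# a small update to a large fp16 accumulator can silently do nothing.
stuck = np.float16(2048) + np.float16(1)
print(stuck)  # 2048.0 -- the +1 is lost to rounding

# With an fp32 accumulator (the mixed-precision pattern: fp16 multiplies,
# fp32 accumulation and master weights), the update survives.
fine = np.float32(2048) + np.float32(1)
print(fine)   # 2049.0
```

That's the whole trick: keep the cheap fp16 math where error is tolerable and the accumulation where it isn't.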
|
# ? Jan 5, 2020 01:56 |
|
They would only take the tensor cores out of the consumer cards if they plan to develop a completely separate architecture for the enterprise market, top to bottom. Which is expensive... but is exactly what has been rumored lately.
|
# ? Jan 5, 2020 02:00 |
|
Mr.PayDay posted:What will be new in the next iteration are comparisons of two fps+ resolution scenarios: RTX off and RTX on compared to the first Turing/2000 generation. Where are you getting this from? That'd only be true if they didn't have true 40% price point uplift in the process and needed to cut the RT/Shader ratio to hit a price point. I don't think that the RT cores are doing anything that wouldn't scale more or less linearly by adding more cores unless it's too cpu-limited at the high end (but repiv can correct me if I'm wrong). E: I suppose it could be true if the IPC improvement to the shader cores can't be matched by the RT cores and requires a shift in the core ratio to maintain the performance ratio. It's certainly not going to be a linear split, though - the shader cores comprise a significantly larger portion of the die than the RTX cores and "RTX-on" performance relies on both.
|
# ? Jan 5, 2020 02:01 |
|
Cygni posted:They would only take the tensor cores out of the consumer cards if they plan to develop a completely separate architecture for the enterprise market, top to bottom. Which is expensive... but is exactly what has been rumored lately. Expensive both for Nvidia and the end buyer. That being said, lol we're all used to paying $2000 for a $300 SSD in the enterprise space, so I look forward to seeing tensor core cards starting at ten grand per.
|
# ? Jan 5, 2020 02:03 |
|
shrike82 posted:the tensor cores are huge for mixed precision ML training/quantized inferencing so i'm skeptical they'd take it out. Yep. Nvidia considers the whole AI/DLI/ML stuff a blockbuster on their agenda. https://news.developer.nvidia.com/tensorrt-7-conversational-ai/ The tensor architecture will get even more attention, not less. https://arxiv.org/pdf/1902.05942.pdf The parallel path space filtering paper linked from https://news.developer.nvidia.com/massively-parallel-path-space-filtering-in-game-development/ under their GAMEWORKS section, plus "Raytracing" as the first topic on the left, is a hint that Nvidia might prioritize this stuff even further. quote:The upcoming GameWorks SDK — which will support Volta and future generation GPU architectures — enable ray-traced area shadows, ray-traced glossy reflections and ray-traced ambient occlusion. The RTX-off gains might even be less than the avg fps jump from the 1080 to the 2080 or 1080 Ti vs 2080 Ti if Nvidia's priority is RT/AI/DLSS/ML etc. That's just me pulling that out of nowhere tho. If Nvidia pushes ~40% RTX on and off fps gains each for the next 3060/3070/3080/3080 Ti iterations, that would be amazing, of course. If the prices stay similar.
|
# ? Jan 5, 2020 02:29 |
|
Serious Hardware / Software Crap >> GPU Megathread:Mr.PayDay posted:That’s just me pulling that out of nowhere tho.
|
# ? Jan 5, 2020 02:33 |
|
Stickman posted:Where are you getting this from? https://www.it-business.de/index.cfm?pid=7531&pk=8570&fk=1520844&type=gallery Nvidia would sacrifice RTX-off gains if they had to choose; they will and have to push their next-level business direction, which is the raytracing neural network, AI, ML etc. stuff. This info is from spring 2019 tho, and I honestly don't know how Nvidia might have changed their agenda since.
|
# ? Jan 5, 2020 02:37 |
|
Subjunctive posted:Serious Hardware / Software Crap >> GPU Megathread: https://wccftech.com/nvidia-shows-that-their-geforce-rtx-gpus-are-much-faster-powerful-than-next-gen-consoles/ Edit: „One of the very first things that NVIDIA allegedly wanted to communicate to its partners was that it's still definitely all-in on ray tracing.“ https://wccftech.com/nvidia-ampere-rumors-massive-rt-performance-uplift-higher-clocks-more-vram-lower-tdps-vs-turing/ It’s not like there are not tons of hints tho Mr.PayDay fucked around with this message at 02:59 on Jan 5, 2020 |
# ? Jan 5, 2020 02:43 |
|
Still all in on ray-tracing... ...they'll just lobby Khronos to start calling Vulkan "RTX-Compatible."
|
# ? Jan 5, 2020 03:08 |
|
Is nvidia gonna release a new card in time for the cyberpunky or should i just get the 2080ti now and call it a day?
|
# ? Jan 5, 2020 03:11 |
|
Cactus posted:Is nvidia gonna release a new card in time for the cyberpunky or should i just get the 2080ti now and call it a day? Depends on if CP2077 gets delayed again. I wouldn't expect the 3070/80 cards until somewhere between Computex and early Q4.
|
# ? Jan 5, 2020 03:14 |
|
BIG HEADLINE posted:I wouldn't expect the 3070/80 cards until somewhere between Computex and early Q4. Is that launching the cards, or actual mass availability of the cards?
|
# ? Jan 5, 2020 03:49 |
|
Cygni posted:They would only take the tensor cores out of the consumer cards if they plan to develop a completely separate architecture for the enterprise market, top to bottom. Which is expensive... but is exactly what has been rumored lately. comedy option: they keep tensor cores on the silicon but fuse them off on consumer cards, like they used to do with double precision consumers are none the wiser since nobody figured out a use-case for them anyway, and those drat cheapskate ML startups have to pay the tesla/quadro tax
|
# ? Jan 5, 2020 03:54 |
|
Have there been any other GPU generational transitions where a manufacturer removed a major bit of functionality like that?
|
# ? Jan 5, 2020 04:43 |
|
MH Knights posted:Is that launching the cards or actual mass/actual availability of the cards? Probably limited availability, plus nVidia has liked to launch the x80s first and follow with the x70 1-2 months later. The big question will be if they launch the x80Ti again at the same time like they did with the 20-series.
|
# ? Jan 5, 2020 04:49 |
|
BIG HEADLINE posted:Probably limited availability, plus nVidia has liked to launch the x80s first and follow with the x70 1-2 months later. Wouldn’t be surprised seeing them go back to their normal release schedule and the x80 Ti 9 months or so after the x80 part.
|
# ? Jan 5, 2020 05:06 |
|
Mr.PayDay posted:Yep. Nvidia considers the whole AI/DLI/ML stuff as a blockbuster on their agenda. The tensor cores are essentially useless for gaming given that DLSS flopped. NV can and will keep em around for the dedicated ML market, but that's Tesla stuff. NV already has a split with Volta/Turing: there's probably gonna be a dedicated compute (Tesla) card with no RT hardware, a dedicated big-graphics die for Quadro/Titan with the pro stuff and maybe some minor tensor stuff, and then they strip it all off for the consumer graphics/gaming SKUs. Malcolm XML fucked around with this message at 05:34 on Jan 5, 2020
# ? Jan 5, 2020 05:29 |
|
|
|
B-Mac posted:Wouldn’t be surprised seeing them go back to their normal release schedule and the x80 Ti 9 months or so after the x80 part. Nah, why leave money on the table when we can expect the "SUPER" refreshes now as well?
|
# ? Jan 5, 2020 05:45 |