Kibner
Oct 21, 2008

Acguy Supremacy
It is also the name of some Schiit DACs.


v1ld
Apr 16, 2012


loving lol

Isn't it true that, at some fundamental level, you could always get more performance by blowing up the physics of the thing? Sure, scaling isn't as trivial as "AdD Mo4r!!", but shouldn't some decency and taste go into a design and keep us from needing this kind of monstrosity?

I still think the primary design parameter for this generation is "we need to justify keeping prices as high as they managed to get during a crazy period. how do we get enough performance to justify it? physics be damned."

This will have knock-on effects for future gens. As far as I know there are no freebies coming from process or other improvements that will let them improve 40xx to 50xx without similar excess on the power/cooling side. I sincerely hope I'm wrong on that last though.

repiv
Aug 13, 2009

SLI is dead and most people have no other PCIe cards, may as well use the space

v1ld
Apr 16, 2012

Yeah, it was pointed out earlier how this is cheaper in power than SLI and gives you linear scaling at least. So that's something.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
I mean, aside from the 4090, the performance gains look far from impressive. It's not that they're scaling up in size for some conspiratorial reason, it's just the only way left to get GPU performance to increase meaningfully.

Increasing VRAM speed is proving to be quite a problem too. You could go with a huge, fat bus again, but that also means expensive and big.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
Let's go 5.25" bay GPUs

v1ld
Apr 16, 2012

I don't see any conspiracies, just a company doing what public companies are required to do: maximize shareholder value. It's the design parameters for the solution they found that I have issue with.

The 4090 is competitive at the outlandish price they picked because the performance is so high. That performance is achieved through what I personally consider excessive freedom on the power side.

They could have released something on the same price/perf curve with lower performance while still being an improvement gen-over-gen, but that would require lower pricing. Hence the statement on the design params selected.

v1ld fucked around with this message at 18:23 on Sep 30, 2022

Au Revoir Shosanna
Feb 17, 2011

i support this government and/or service
How are desktop cards still so big when gaming laptops have come so far in the last few years?

The Max-Q stuff is less powerful, sure, but it isn't a huge downgrade in performance, and they somehow fit it in these 15" ultrathins?

v1ld
Apr 16, 2012

Because power - usage and dissipation - is the primary design constraint for laptops. No longer the case for home computing on the desktop.

What do datacenter GPUs look like these days? I assume no one in a real - i.e., not crypto - DC is going to be eager to run at these power levels - they're probably targeting better spots on the efficiency curve than we home folks have access to anymore, or so I would hope?

I haven't had to approve/review DC gear purchases in well over a decade, but there's no loving way I would've approved a 4090's level of power usage for the results it delivers. E: not that I ever had to do it for a general-purpose, resold DC of the sort most of our stuff runs on today, which is light years ahead of what we built and ran.

repiv
Aug 13, 2009

v1ld posted:

What do datacenter GPUs look like these days? I assume no one in a real - i.e., not crypto - DC is going to be eager to run at these power levels - they're probably targeting better spots on the efficiency curve than we home folks have access to anymore, or so I would hope?

the PCIe version of H100 runs at 300-350W, but the SXM form factor version meant exclusively for servers runs up to 700W

https://www.nvidia.com/en-gb/data-center/h100/

jokes
Dec 20, 2012

Uh... Kupo?

I think heat management is the reason they're so big. Laptops usually undervolt their cards IIRC, which really helps, since laptop design is mostly a game of balancing price, form factor, heat/power, and performance.

Also CPUs are, like, stupid efficient nowadays so all the heat/power management can be dedicated to the GPU.

For example, the processor in my work laptop, an i7-1185G7, has a TDP of 15W, while an i7-11700K has a TDP of 125W for roughly 2.4x the performance.
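Back-of-the-envelope on perf-per-watt (treating that 2.4x as a ballpark, not a measured benchmark):

```python
# Back-of-the-envelope perf-per-watt comparison of the two chips.
# TDPs are Intel's official numbers; the 2.4x performance gap is a rough estimate.
laptop_tdp_w, desktop_tdp_w = 15, 125
desktop_perf_vs_laptop = 2.4   # desktop chip does ~2.4x the work

laptop_perf_per_watt = 1.0 / laptop_tdp_w
desktop_perf_per_watt = desktop_perf_vs_laptop / desktop_tdp_w

print(f"Laptop chip does ~{laptop_perf_per_watt / desktop_perf_per_watt:.1f}x the work per watt")
# -> Laptop chip does ~3.5x the work per watt
```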

Mr.PayDay
Jan 2, 2004
life is short - play hard
So I was at my local PC builder's store today to line up the 4090 Strix for day 1. Still lots of time, but I wanna have the system ready ASAP since my current rig is going to a buddy ASAP as well.

Then the store guy tells me I was the third dude asking for a custom 4090 reservation.
The third today. :allbuttons:

They've already got over a dozen forecast sales of 4090s and 4080s from their long-time customers and enthusiasts over the last few days. And we haven't even seen benchmarks, and AMD's RDNA3 lineup hasn't even been revealed.

Thinking Nvidia will have problems selling those RTX 40 GPUs is pure denial I guess.

jokes
Dec 20, 2012

Uh... Kupo?

Mr.PayDay posted:

So I was at my local PC builder's store today to line up the 4090 Strix for day 1. Still lots of time, but I wanna have the system ready ASAP since my current rig is going to a buddy ASAP as well.

Then the store guy tells me I was the third dude asking for a custom 4090 reservation.
The third today. :allbuttons:

They've already got over a dozen forecast sales of 4090s and 4080s from their long-time customers and enthusiasts over the last few days. And we haven't even seen benchmarks, and AMD's RDNA3 lineup hasn't even been revealed.

Thinking Nvidia will have problems selling those RTX 40 GPUs is pure denial I guess.

There are always enthusiasts, but the real question is whether or not normal people will spend that much money on a graphics card.

v1ld
Apr 16, 2012

repiv posted:

the PCIe version of H100 runs at 300-350W, but the SXM form factor version meant exclusively for servers runs up to 700W

https://www.nvidia.com/en-gb/data-center/h100/

Interesting, thanks for the link. I would've naively assumed the 700W part would scale linearly or better (SLI on one board, basically), but the spec table shows it's nowhere near twice the FLOPS of the PCIe part, not by a long shot.
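Rough math from that spec table, if I'm reading it right (approximate FP16 tensor figures, dense rather than sparse, so treat the numbers as assumptions):

```python
# FLOPS-per-watt from Nvidia's published H100 specs (approximate FP16 tensor, dense).
parts = {
    "H100 PCIe (350 W)": {"tflops": 756, "watts": 350},
    "H100 SXM  (700 W)": {"tflops": 990, "watts": 700},
}

for name, spec in parts.items():
    print(f"{name}: {spec['tflops'] / spec['watts']:.2f} TFLOPS per watt")
# PCIe lands around 2.2 TFLOPS/W, SXM around 1.4 TFLOPS/W:
# the 700 W part buys roughly 30% more throughput for twice the power.
```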

So why is that? Is the NVLink bandwidth improvement enough to trump power efficiency? Because that bandwidth jump looks significant, both to the CPU and between GPUs(?).

Have no idea of GPU compute at scale or the workloads on these things.

Inept
Jul 8, 2003

Au Revoir Shosanna posted:

How are desktop cards still so big when gaming laptops have come so far in the last few years?

The Max-Q stuff is less powerful, sure, but it isn't a huge downgrade in performance, and they somehow fit it in these 15" ultrathins?

Laptops are on the efficiency part of the curve. The difference on low TGP video cards can be pretty significant on an ultra-thin. The 60 watt 3060 in the G14 is a good amount slower than a desktop 3060, and has half the VRAM.

Mr.PayDay
Jan 2, 2004
life is short - play hard

jokes posted:

There are always enthusiasts, but the real question is whether or not normal people will spend that much money on a graphics card.

Ok, I have a theory and it goes like this:
There are soooooo many gamers out there who already skipped the 3000-series scalper fiasco and the Turing GPUs as well.
That's a years-long demand queue of gamers who waited 4-6 years to buy and are now landing on Ada Lovelace RTX 4000, a group that all the tech mags and youtubers and even Reddit and we here forgot about.
I know guys who have been putting 25 Euro a month into their GPU budget for years, skipped the last gens, and now have 1500 Euro saved up as disposable income for a 4080 16GB.

Just regular non-enthusiast gamers who don't want to skip a 3rd or 4th Nvidia generation and will indeed spend 1000+ Euro now.

jokes
Dec 20, 2012

Uh... Kupo?

Yeah, but with crypto miners flooding the market with 3000-series cards, a person has to decide between upgrading from a 970 to a 3080 for $450 or upgrading to a 4080 for $1450 and, well, long story short, I bought a 3080.

Mr.PayDay
Jan 2, 2004
life is short - play hard

jokes posted:

Yeah, but with crypto miners flooding the market with 3000-series cards, a person has to decide between upgrading from a 970 to a 3080 for $450 or upgrading to a 4080 for $1450.

There ain't anything close to a 450 Euro 3080 in Germany. And DLSS3 frame generation will leave the Ampere fleet way behind.
Benchmarks are selling points.

infraboy
Aug 15, 2002

Phungshwei!!!!!!1123
The 4090 is expensive, but if you break it down it's around $70 a month assuming you keep it at least 2 years, and honestly I've spent money on worse stuff.
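(Quick math, using the $1,599 Founders Edition launch price and assuming you get nothing back on resale; adjust the numbers for your own situation:)

```python
# Amortized cost of a 4090 per month of ownership; price and horizon are assumptions.
msrp_usd = 1599        # Founders Edition launch price
months_owned = 24      # "keep it at least 2 years"
resale_value = 0       # bump this if you expect to sell the card on afterwards

cost_per_month = (msrp_usd - resale_value) / months_owned
print(f"~${cost_per_month:.0f}/month")   # -> ~$67/month
```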

Mr.PayDay
Jan 2, 2004
life is short - play hard

infraboy posted:

The 4090 is expensive, but if you break it down it's around $70 a month assuming you keep it at least 2 years, and honestly I've spent money on worse stuff.

Exactly, and you might sell your current GPU for ROI as well.
I will get 900 Euro for my 3090 Strix.

Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost
Sure, but that means a 3080 would still be a third of that on a monthly basis, and if you're not going to be trying to do raytracing at 4k you may not need a 4090.

MarcusSA
Sep 23, 2007

The 4090 is not going to sell gangbusters. You can quote me on that. They are going to sell a few, but the demand for it is gonna fall off a cliff.

Mr.PayDay
Jan 2, 2004
life is short - play hard

Mozi posted:

Sure, but that means a 3080 would still be a third of that on a monthly basis, and if you're not going to be trying to do raytracing at 4k you may not need a 4090.

You are absolutely right from a rational perspective, but don't underestimate the emotional aspects, especially for a gaming hobby.
I've had conversations with my gaming buddies over the last few days, and Nvidia's marketing works.
“That new Cyberpunk Raytracing looks dope, can I do 4K maxed at 100+ fps with a 3080/3090 as well ?”
“nope”
“With a 4080/4090?”
“Yes”.

That's simplified, but you get my point. Some just wanna have the newest tech and tools and toys, especially after having skipped several GPU generations.
That's paying for fun, ignoring sane financial reasoning, and spending an outrageous premium from a rational POV.

repiv
Aug 13, 2009

v1ld posted:

Interesting, thanks for the link. I would've naively assumed the 700W part would scale linearly or better (SLI on one board, basically), but the spec table shows it's nowhere near twice the FLOPS of the PCIe part, not by a long shot.

So why is that? Is the NVLink bandwidth improvement enough to trump power efficiency? Because that bandwidth jump looks significant, both to the CPU and between GPUs(?).

Have no idea of GPU compute at scale or the workloads on these things.



i would guess it's to do with the balance between space efficiency and power efficiency, if rack space is at a premium they have the option of juicing the GPUs extra hard to pack more compute into the area they have

the previous generation A100 chips only went up to 500W, so someone must have been asking nvidia to let them go wild with the power limit if they raised it all the way to 700W

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
We also have no idea what the 4000 mobile series will look like in terms of perf/watt. Nvidia fucks around with the numbering on mobile, so it doesn't line up with desktop. Last generation a 3080 mobile was a different chip (GA104) than the desktop 3080 (GA102). You can't always just cross-compare and say "Oh, it's the same thing, just lower power".

Craptacular!
Jul 9, 2001

Fuck the DH
IIRC last gen MrPayDay was telling us about how the 3090 is worth every penny because he likes to play games with settings that are designed entirely for screenshots and not for real-time play. I don't know how he makes his money, but he thinks significant amounts of it are worth spending to see the highest-quality shadows and the highest-quality lights and so on at 90+ FPS, playing with settings most here wouldn't touch.

And that would be fine, except that he takes the opinion that he's really the normal rational person here and we're just not true gaming enthusiasts because we actually stopped and hesitated when the price emerged.

v1ld
Apr 16, 2012

Recently looked at the Asus G14/G15 to see if I wanted to upgrade my 4-year-old and still amazing HP Spectre ultralight (8550U / UHD 620) to something with a mobile 3070 Ti (the G15). Tried out one of each from their open-box stock and returned both, though they were both amazing deals (just over $1000 each a month ago; those deals seem to have disappeared now).

That 3070 Ti is cut down from the desktop part, just like Lockback says, but the thermal limit allowed for the chip also varies dramatically from laptop to laptop and can change the size of the thing considerably. I wanted light and small, and Asus seems best in terms of power/thermal choices there.

But even so, I found those laptops too big and unwieldy to be truly mobile, even making allowances for just how good an ultralight this HP is. The gaming performance is decent, maybe even up to 1440p 16:9 - but not worth the space/weight tradeoff for me.

If you don't mind a 15"+ desktop-replacement laptop that you'll only move around rarely and won't really use in your lap or as your carry-around computer, then there are some pretty good gaming laptops. But not really if you want an ultralight++.

E: The AMD 6000-series mobile APUs with RDNA2 are excellent, though still not really available in ultralights I think. That would be a good replacement for this HP if it were possible to do in an ultralight.

v1ld fucked around with this message at 19:45 on Sep 30, 2022

jokes
Dec 20, 2012

Uh... Kupo?

Craptacular! posted:

IIRC last gen MrPayDay was telling us about how the 3090 is worth every penny because he likes to play games with settings that are designed entirely for screenshots and not for real-time play. I don't know how he makes his money, but he thinks significant amounts of it are worth spending to see the highest-quality shadows and the highest-quality lights and so on at 90+ FPS, playing with settings most here wouldn't touch.

And that would be fine, except that he takes the opinion that he's really the normal rational person here and we're just not true gaming enthusiasts because we actually stopped and hesitated when the price emerged.

It's actually insane not to spend $1500 on a new graphics card if you think about it

FuturePastNow
May 19, 2014


Mr.PayDay posted:


Thinking Nvidia will have problems selling those RTX 40 GPUs is pure denial I guess.

The 4090 will probably sell about as well as Nvidia expects because some people will always want the best and be willing to pay for it.

But how well will the two 4080s compete against unsold 3080/3090 inventory (and cheap used ex-miners)? I guess we'll see

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

repiv posted:

i would guess it's to do with the balance between space efficiency and power efficiency, if rack space is at a premium they have the option of juicing the GPUs extra hard to pack more compute into the area they have
Most of the hyperscalers have already gotten close to the maximum power allowed, even under zoning laws in many jurisdictions, and can't necessarily add more racks, so for them power efficiency is compute capacity and performance-per-watt is aggregate performance. GPUs and tensor cores are part of that calculation as well, but internal compute capacity vs. capacity that's sold are different sets of issues that complicate matters, given that the equipment customers buy and reserve can be old gear the company would otherwise have long since decommissioned (seriously, what kind of jokers still use m1 instances in AWS again?).

Inept
Jul 8, 2003

Mr.PayDay posted:

DLSS3 frame generation will leave the Ampere fleet way behind.

lol someone actually bought nvidia's marketing

v1ld
Apr 16, 2012

"We need a 2 bedroom house for the next 5 years."
"Honey, look at all these deals on 10 bedroom houses!!! Their price/bedroom is way better than all these 2 bedroom places."
"But they cost 4x."
"Yeah, that's why it's a 25% win you see!! You get 5x and only pay 4x!!!!!"

What are you going to play on the inter/extrapolated, frame-generated, upscaled honking huge card that does 200 fps on every game we have? Like even if Ampere is "left in the dust" by available compute power, what are you going to do with the extra power?

If the answer is hang onto it until that power is needed, well there are going to be way better cards when you do need the power. Why not buy it then?

The niche is 8K, but I think that niche still exists only in Nvidia marketing's head?

v1ld fucked around with this message at 20:21 on Sep 30, 2022

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
Support for DLSS3 in games will be close to non-existent, let's face it. Adoption of DLSS2 is abysmally low two generations on from its introduction. Developers aren't going to bother with a feature that less than 0.01% of the audience will use. Steam hardware survey says it all.

Chainclaw
Feb 14, 2009

If you're running something like Stable Diffusion locally, you're going to pretty easily hit the limits of a 3090 or 4090. I think you're mostly going to be gated on VRAM with that software.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I'm not impressed with the frame generation we've seen so far. I think it's a bullshit technology that kicks the can down the road to give them marketing "performance improvements," because the actual generation-to-generation improvement here is less impressive without going balls to the wall on power consumption and the biggest chip you can put on a card.

DLSS upscaling is awesome and a real improvement over simply lowering the rendering resolution and letting your monitor/TV or GPU scale it using the old ways, and to me is just better than competing upscaling methods, especially in motion and with certain details. I think most people can appreciate it and how it can make a gameplay experience much better without sacrificing settings or making the whole frame a jagged aliased mess.

But making fake frames and calling that real performance is dubious as hell IMO. You can say it leaves past generations behind, but I totally disagree.

Agreed fucked around with this message at 20:27 on Sep 30, 2022

repiv
Aug 13, 2009

Zedsdeadbaby posted:

Developers aren't going to bother with a feature that less than 0.01% of the audience will use. Steam hardware survey says it all.

the steam survey says about a third of steam users have RTX cards at this point

e: not arguing that uptake of DLSS3 is going to be slow though, it's all going to be driven by nvidia throwing devrel dollars around for the foreseeable future given the price situation

repiv fucked around with this message at 20:27 on Sep 30, 2022

Animal
Apr 8, 2003

Mr.PayDay posted:

And DLSS3 frame generation will leave the Ampere fleet way behind.


Sure, in like 3 games.

MarcusSA
Sep 23, 2007

Animal posted:

Sure, in like 3 games.

The 3 most important games that are also nearly 3 years old at this point!

PBCrunch
Jun 17, 2002

Lawrence Phillips Always #1 to Me
I would like to run Stable Diffusion locally. I run Linux on all my machines. If it is in any way possible, I'd like to avoid buying an Nvidia card, for open-source driver reasons. Is there any possibility that Arc cards might be able to run SD through oneAPI or whatever? Is there a sweet spot in price:performance with AMD cards for Stable Diffusion? From my reading it looks like cards with 8 GB of VRAM are skating pretty close to the minimums.
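For reference, my understanding (completely untested on my end) is that the ROCm build of PyTorch exposes an AMD card as the "cuda" device, so the standard diffusers pipeline with half-precision weights and attention slicing would be the starting point for squeezing under 8 GB. A rough sketch, treating the model name and the memory-saving calls as assumptions from the diffusers docs:

```python
# Minimal Stable Diffusion sketch using Hugging Face diffusers (untested on AMD).
# On a ROCm build of PyTorch, an AMD GPU should still show up as the "cuda" device.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,      # half precision to help fit in ~8 GB of VRAM
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()     # trades a bit of speed for a lower VRAM peak

image = pipe("a test prompt").images[0]
image.save("out.png")
```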

PBCrunch fucked around with this message at 20:37 on Sep 30, 2022


Animal
Apr 8, 2003

Agreed posted:

I'm not impressed with the frame generation we've seen so far. I think it's a bullshit technology that kicks the can down the road to give them marketing "performance improvements," because the actual generation-to-generation improvement here is less impressive without going balls to the wall on power consumption and the biggest chip you can put on a card.


I think it has its place, especially in games like Flight Simulator where the motion vectors aren't changing rapidly.
