Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Bjork Bjowlob posted:


If you're looking to do passthrough to multiple client VMs for the purposes of doing GPU-based work (e.g. machine learning, data processing, or running games), you could have a look at GPU over IP: https://github.com/Juice-Labs/Juice-Labs . I've tested this and it works fairly well for those use cases. Note that it requires application-specific setup on the client (analogous to manually specifying which GPU to use for which application in the client) and, as far as I can tell, it won't work for the desktop session itself. I also had trouble using it for GPU-accelerated encoding in Jellyfin via ffmpeg. Your mileage may vary!

Ah, I missed that you were referring to the new dedicated device - I spent some time researching (well, hoping!) whether Intel would support SR-IOV on their consumer GPUs, as it would be appealing overall from a cost perspective, so my mind jumped straight to that.

I've tried Juice but it hasn't worked well for me. Specifically, their Linux CUDA support blows up when you try to use Blender. It works if I want to run AEYEE EYEE over the network, but I don't give a single poo poo about that, so it's not particularly useful. I haven't tried it with a Windows client, since if I'm spending 2-3 grand on a GPU I intend to use it myself and my desktop is Linux. They also have a lot of super-annoying bugs, some of which I've root-caused (like empty logfiles after a crash because they never flush the output) in reports which just get ignored.
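As an aside, the empty-logfile failure mode Harik describes is easy to reproduce in a few lines of plain Python (nothing Juice-specific): if a process block-buffers its log writes and then dies hard, everything still sitting in the buffer never reaches the disk. A minimal sketch:

code:

import os

# Block-buffered writer: write() only puts the text in a userspace
# buffer, so a hard crash before any flush loses it entirely.
buffered_log = open("crash_buffered.log", "w")
buffered_log.write("about to do the thing that crashes\n")

# Flushed writer: line buffering (buffering=1) plus an explicit flush()
# gets the record to the OS before the crash.
flushed_log = open("crash_flushed.log", "w", buffering=1)
flushed_log.write("about to do the thing that crashes\n")
flushed_log.flush()

# Simulate a hard crash: os._exit() skips interpreter cleanup, so
# nothing gets flushed on the way out. Afterwards crash_buffered.log is
# empty while crash_flushed.log still contains its line.
os._exit(1)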

Wendell hinted that the A770 can be cross-flashed to the Flex 170, but so far I haven't seen it happen. The Flex 170 is basically nonexistent outside select integrators, and again, I doubt Celestial will ever happen. Even Battlemage is iffy. Intel leadership has the attention span of a squirrel that got into a kilo of coke.

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".
I could use some advice thread!

Given the latest announcements... is there any reason to hold off on getting an MSI Suprim Liquid X hybrid 4090 for ~$2000 right now?

It's going to be for AI/design stuff mostly. I don't even think I own a game that'll really put it through its paces. My current card is pre-Ampere.

if I'm seeing correctly, the gains of "Blackwell" announced yesterday are mostly from doubling the die and shrinking the size/precision of a floating point data type?

is there any specific feature that's expected in the 5-series cards that will make a 4090 obsolete for AI in the next year or two?
It might be a dumb question, and I know it's all opinion/guesses, but I figured I'd ask. There's a lot of knowledge in this thread.

vvvvv thanks for the reply... took a look and those cost a lot of cake. beautiful card though

namlosh fucked around with this message at 22:08 on Mar 19, 2024

shrike82
Jun 11, 2005

the only major thing you'd potentially miss out on for AI enthusiast compute is if the 5090s get a VRAM size bump, but that doesn't seem likely atm

Maybe consider a 48GB A6000
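To put rough numbers on the VRAM question (and on namlosh's question about Blackwell shrinking the floating-point format): for inference, weight memory is roughly parameter count times bytes per parameter, plus headroom for KV cache and activations. A back-of-the-envelope Python sketch, where the model sizes and the 20% overhead factor are illustrative assumptions rather than measurements:

code:

# Rough VRAM estimate for local inference: weights = params * bytes per
# parameter, times an assumed overhead factor for KV cache/activations.
BYTES_PER_PARAM = {"fp16": 2.0, "fp8/int8": 1.0, "fp4/int4": 0.5}
OVERHEAD = 1.2  # hand-wavy allowance for KV cache, activations, fragmentation

def est_vram_gb(params_billion: float, fmt: str) -> float:
    weights_gb = params_billion * BYTES_PER_PARAM[fmt]  # ~1 GB per billion params per byte
    return weights_gb * OVERHEAD

for params in (7, 13, 34, 70):
    row = ", ".join(f"{fmt}: {est_vram_gb(params, fmt):5.1f} GB"
                    for fmt in BYTES_PER_PARAM)
    print(f"{params:>3}B params -> {row}")

On those rough numbers, 24GB covers about a 7B model at fp16 or a 34B model at 4-bit, which is why halving the data type width matters as much as a VRAM bump would.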

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

shrike82 posted:

the only major thing you'd potentially miss out on for AI enthusiast compute is if the 5090s get a VRAM size bump, but that doesn't seem likely atm

Maybe consider a 48GB A6000

I know that UserBenchmark is pretty messy and has a lot of downsides, but there are barely any comparisons between those two GPUs, and the 4090 looks majorly faster than the A6000: https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4090-vs-Nvidia-Quadro-RTX-A6000/4136vsm1300600

shrike82
Jun 11, 2005

Twerk from Home posted:

I know that UserBenchmark is pretty messy and has a lot of downsides, but there are barely any comparisons between those two GPUs, and the 4090 looks majorly faster than the A6000: https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4090-vs-Nvidia-Quadro-RTX-A6000/4136vsm1300600

You're looking at the wrong A6000

orcane
Jun 13, 2012

Fun Shoe
You want the RTX 6000 Ada Generation (I don't make the names).

The RTX A6000 is the Ampere card, i.e. an RTX 3090 Ti with different RAM and clocks.

shrike82
Jun 11, 2005

whenever i hear someone talk about doing home AI stuff these days, i assume it's genai related and 24gb is marginal even if you're just running inference.

i do wonder if nvidia will bother upping the vram on the 5090s at all - games don't need more than 24gb.
maybe they'll go back to releasing titans? :shrug:

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

shrike82 posted:

whenever i hear someone talk about doing home AI stuff these days, i assume it's genai related and 24gb is marginal even if you're just running inference.

there are people using it for video processing, and some local voice models now too!

TheDemon
Dec 11, 2006

...on the plus side I'm feeling much more angry now than I expected so this totally helps me get in character.
it won't be a feature of the 5090, but expandable vram would be an interesting draw since it would avoid those dual 3090 frankensetups

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

shrike82 posted:


maybe they'll go back to releasing titans? :shrug:

Honestly they should, though the 4090 probably sold better being called a 4090 than it would have if it were called a Titan so

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".
Thanks very much for the replies everyone... one of the things I'm going to try to look at is generating a glycemic index for different foods (based on description) for diabetics. But that's a long-term thing.
That and maybe create an AI ad-blocker.

I'm going to head over to the AI/Stable Diffusion threads in yospos and CoC, but if anyone has any cool eye candy I should try with it, I'd love to hear it. All of Nvidia's tech demos seem to be super old.
I do plan on jumping on GTA Online at some point since it's the only game I play on PC, and it'll be neat to see everything maxed out at 144 fps (monitor's max).

Cygni
Nov 12, 2005

raring to post

Lockback posted:

Honestly they should, though the 4090 probably sold better being called a 4090 than it would have if it were called a Titan so

Both a 4090 Ti and "Titan Ada" were prototyped too but ended up getting canned, at least partially due to the insatiable demand for L40s and 6000 Adas. Of course just because something was prototyped and talked about with OEMs doesn't mean it was ever really close to being a real product so whomst knoweth.

KillHour
Oct 28, 2007


Cygni posted:

Both a 4090 Ti and "Titan Ada" were prototyped too but ended up getting canned, at least partially due to the insatiable demand for L40s and 6000 Adas. Of course just because something was prototyped and talked about with OEMs doesn't mean it was ever really close to being a real product so whomst knoweth.

This is what I'm worried about - that the explosion in demand for datacenter GPUs is going to keep gaming GPU supply strangled, even if demand cools. Why would nvidia use more fab capacity on a $1000 consumer card when the same chip in a datacenter card costs 5x as much?

Kazinsal
Dec 13, 2011


namlosh posted:

Thanks very much for the replies everyone... one of the things I'm going to try to look at is generating a glycemic index for different foods (based on description) for diabetics. But that's a long-term thing.
That and maybe create an AI ad-blocker.

I'm going to head over to the AI/Stable Diffusion threads in yospos and CoC, but if anyone has any cool eye candy I should try with it, I'd love to hear it. All of Nvidia's tech demos seem to be super old.
I do plan on jumping on GTA Online at some point since it's the only game I play on PC, and it'll be neat to see everything maxed out at 144 fps (monitor's max).

Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed.

Cyrano4747
Sep 25, 2006

Yes, I know I'm old, get off my fucking lawn so I can yell at these clouds.

Kazinsal posted:

Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed.

Crossposting something from the milhist thread which is not something I ever expected to type in the GPU thread.

You can breeze over my commentary, it's all history nerd poo poo, the important bit is the screen shots:

Cyrano4747 posted:

This is mildly off topic, but the example I used is milhist so this thread might find it amusing.

A co-worker was discussing the need to get a grip on ChatGPT and AI in general for work reasons, so I did this quick demonstration using Microsoft's Copilot.

Here's the question I asked and the response:

[screenshot: the question and Copilot's answer]

Note that this is a mix of decently accurate info along with bad interpretations based on poorly researched books from the 40s and 50s that have become the cornerstone of a lot of online info about it. In particular, the notion that the performance of Winchesters at Plevna is what spurred European armies to invest in magazine-fed repeating firearms is bullshit. I've got a whole spiel on Plevna, but the tl;dr is that while it was something people studied, the main focus was on the performance of the Peabody rifles and how much damage relatively untrained troops with effective breech-loading rifles, even single-shot rifles, could inflict from prepared defenses. The broader discussion it was part of had to do with weight of firepower, which I think is where people assume the Winchester comes in, but everyone at the time recognized and emphasized the damage done by the Peabodies. Regardless, my personal take is that the Winchesters got latched onto by American writers in the 40s and 50s who were enamored with "the gun that won the West" and wanted to emphasize the importance of American arms in shaping European policies.

But I digress.

So I tell Copilot it's wrong and it immediately agrees with me! It even goes so far as to insist that it is poorly attributed.

[screenshot: Copilot conceding the point]

But what happens if I tell it that no, it was correct?

[screenshot: Copilot reversing itself again]

LMAO.

Note that I'm an idiot and know nothing about AI. The fact that I was able to trip it up is akin to a clumsy 4 year old figuring out how to disable your car.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Kazinsal posted:

Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed.

You should go warn the thousands of people who have been in trials of ML-driven insulin pumps since 2018!

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

Cyrano4747 posted:

Crossposting something from the milhist thread which is not something I ever expected to type in the GPU thread.

You can breeze over my commentary, it's all history nerd poo poo, the important bit is the screen shots:

Note that I'm an idiot and know nothing about AI. The fact that I was able to trip it up is akin to a clumsy 4 year old figuring out how to disable your car.

Copilot is built on GPT-4, but whatever Microsoft have done to tailor it to their needs has made it dumber and more susceptible to outputting nonsense, often very confidently. All of these chatbot assistants get things wrong, but ime Copilot is on another level.

Truga
May 4, 2014
Lipstick Apathy
i'm glad the people making all the money off of internet are going to make internet loving useless over the next couple years

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Rinkles posted:

Copilot is built on GPT-4, but whatever Microsoft have done to tailor it to their needs has made it dumber and more susceptible to outputting nonsense, often very confidently. All of these chatbot assistants get things wrong, but ime Copilot is on another level.

Gemini is an outstanding chatbot, in that they managed to replicate an actual idiot.

[screenshot]

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

Cyrano4747 posted:

Crossposting something from the milhist thread which is not something I ever expected to type in the GPU thread.

You can breeze over my commentary, it's all history nerd poo poo, the important bit is the screen shots:

Note that I'm an idiot and know nothing about AI. The fact that I was able to trip it up is akin to a clumsy 4 year old figuring out how to disable your car.

An LLM is not the only machine learning avenue. An LLM like Copilot/ChatGPT is trying to produce novel text based on its model, and being correct is very secondary. People like to paint AI with one brush, but something like an LLM is doing what it's supposed to be doing, and being wrong or hallucinating is not out of bounds.

Using AI to help model glycemic index isn't at all an outlandish thing. You need to model these things, and you need to make guesses since you cannot test every single food you eat, and AI is excellent at chewing through massive amounts of data to make that easier. That's a kind of classification problem, which AI has been really good at for decades, and I assure you it is being used in all sorts of applications that you trust with your life today.
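To make the classification framing concrete, here is a minimal, entirely hypothetical sketch of what "nutritional features in, glycemic-index bucket out" could look like with scikit-learn. The features, labels, and data are synthetic stand-ins; a real model would need curated nutrition data and clinical validation.

code:

# Hypothetical sketch: frame glycemic-index estimation as classification.
# The data below is synthetic filler generated from a made-up rule.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake features per food: [carbs_g, fiber_g, fat_g, protein_g, sugar_g]
X = rng.uniform(0, 60, size=(500, 5))
# Made-up labelling rule: more sugar/carbs and less fiber/fat pushes a
# food toward the "high" bucket. 0 = low, 1 = medium, 2 = high.
score = X[:, 0] + 2 * X[:, 4] - 1.5 * X[:, 1] - 0.5 * X[:, 2]
y = np.digitize(score, np.quantile(score, [0.33, 0.66]))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")

# Predict a bucket for one hypothetical food (values are made up):
sample = [[45.0, 2.0, 1.0, 4.0, 20.0]]
print("predicted GI bucket (0=low, 1=medium, 2=high):", clf.predict(sample)[0])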

Inept
Jul 8, 2003

AI is a stupid term that now means nothing but "buy our stock"

Same as blockchain, IoT, cloud, .com, and every other thing that people have rebranded themselves to to try to make more money

njsykora
Jan 23, 2012

Robots confuse squirrels.


Lockback posted:

Using AI to help model glycemic index isn't at all an outlandish thing. You need to model these things, and you need to make guesses since you cannot test every single food you eat, and AI is excellent at chewing through massive amounts of data to make that easier. That's a kind of classification problem, which AI has been really good at for decades, and I assure you it is being used in all sorts of applications that you trust with your life today.

Yeah, something I've always heard is that this kind of thing is super useful for medical research, since it turns out going through massive data sets looking for outliers and anomalies is something a computer is pretty good at.

Bofast
Feb 21, 2011

Grimey Drawer

Harik posted:

Gemini is an outstanding chatbot, in that they managed to replicate an actual idiot.

[screenshot]

:eyepop:

repiv
Aug 13, 2009

garry newman of gmod fame got a similar treatment

https://twitter.com/garrynewman/status/1755851884047303012

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

njsykora posted:

Yeah, something I've always heard is that this kind of thing is super useful for medical research, since it turns out going through massive data sets looking for outliers and anomalies is something a computer is pretty good at.

AI is very good at classification (which bucket does this thing belong in), anomaly detection, pattern recognition (in certain scenarios), and vectorization. In those areas I would absolutely trust a well-built model over a person, 100%. And it's not new; models have been doing this for decades in many areas that you probably wouldn't consider AI. What is somewhat new is that newer tools let you do a lot of these things without needing the kind of math-by-hand that you had to do before. That doesn't mean the models are worse, and there are lots of areas where even amateurs at home with consumer cards can do useful things.

What gets lost is that people start trying to push the tools beyond what they can/should do (which you're ABSOLUTELY seeing with LLMs) and then point to that to say the whole genre is built on a house of cards. That's overreacting the other way.
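For the anomaly-detection item on that list, and the "find the outliers in a huge data set" use njsykora mentions, here is a minimal sketch with scikit-learn's IsolationForest. The readings are synthetic and the contamination rate is an assumption, not a tuned medical pipeline:

code:

# Minimal anomaly-detection sketch: fit on a pile of mostly-normal
# readings, flag the handful that don't fit the learned pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=100.0, scale=10.0, size=(2000, 2))  # typical readings
weird = rng.normal(loc=160.0, scale=5.0, size=(20, 2))      # injected outliers
readings = np.vstack([normal, weird])

detector = IsolationForest(contamination=0.01, random_state=42)
labels = detector.fit_predict(readings)  # +1 = looks normal, -1 = anomaly

flagged = readings[labels == -1]
print(f"flagged {len(flagged)} of {len(readings)} readings for human review")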

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?
new realtime UE5 showcase

https://www.youtube.com/watch?v=Lb2wwEx6DVw

Amy Hennig's new project (hopefully this one actually comes out) Marvel 1943: Rise of Hydra, due next year

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".

Kazinsal posted:

Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed.

It wouldn't be about putting all of your faith into an algorithm, nor would I intend to have it control any type of pump or apparatus. Just guidance.
It's exhausting to have to figure out how much insulin to take every time you eat something. If there were a way to take a picture of what you're about to eat and let object detection try to figure out what it is, how much of it there is, and what its glycemic index is, that could help a lot. It could certainly provide suggestions on the number of units to take based on the potful of data collected by the Dexcom or other CGM... but that would be later down the road.
Source: My wife is T1D
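A rough sketch of the photo idea's first step, under heavy assumptions: a stock ImageNet classifier (torchvision's pretrained ResNet-50) guesses what the food is, and a tiny hand-written table stands in for a real glycemic-index database. Portion estimation and any dosing guidance are deliberately left out; this is illustrative only, not medical advice.

code:

# Illustrative only: label a food photo with a stock ImageNet classifier,
# then look the label up in a placeholder glycemic-index table. ResNet-50
# is a generic classifier, not a food model, and the table is made up.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

FAKE_GI_TABLE = {        # hypothetical lookup, not real nutritional data
    "banana": "high-ish",
    "pizza": "medium",
    "broccoli": "low",
}

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def guess_food(path: str) -> None:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top_prob, top_idx = probs.max(dim=0)
    label = categories[int(top_idx)]
    gi = FAKE_GI_TABLE.get(label, "unknown - ask a human")
    print(f"{path}: looks like '{label}' ({top_prob.item():.0%}), GI bucket: {gi}")

guess_food("dinner.jpg")  # hypothetical image path - supply your own photo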


Subjunctive posted:

You should go warn the thousands of people who have been in trials of ML-driven insulin pumps since 2018!

I had heard about this... is it Dexcom doing it? do you remember if it was another company?
I can google, but if you remember that'd be cool.

Lockback posted:


Using AI to help model glycemic index isn't at all an outlandish thing. You need to model these things, and you need to make guesses since you cannot test every single food you eat, and AI is excellent at chewing through massive amounts of data to make that easier. That's a kind of classification problem, which AI has been really good at for decades, and I assure you it is being used in all sorts of applications that you trust with your life today.

Exactly... this would be more classification than generative. Everyone who's used GPT/Gemini etc. has probably had the AI do something stupid, wrong, or flat-out hallucinatory. This won't be that.

thanks for the replies everyone!

Inept
Jul 8, 2003

repiv posted:

garry newman of gmod fame got a similar treatment

https://twitter.com/garrynewman/status/1755851884047303012

"unethical" nah they just don't want to be liable for their chatbot making GBS threads out memory unsafe code that gets used all over the place by developers

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Inept posted:

"unethical" nah they just don't want to be liable for their chatbot making GBS threads out memory unsafe code that gets used all over the place by developers

Well if they won't give me the code, I'm gonna do it myself :smug:

MrYenko
Jun 18, 2012

#2 isn't ALWAYS bad...

Rinkles posted:

new realtime UE5 showcase

https://www.youtube.com/watch?v=Lb2wwEx6DVw

Amy Hennig's new project (hopefully this one actually comes out) Marvel 1943: Rise of Hydra, due next year

Why does that guy have a Springfield and a trumpet?

FuturePastNow
May 19, 2014


Shooting while playing the trumpet is his superpower

Branch Nvidian
Nov 29, 2012



MrYenko posted:

Why does that guy have a Springfield and a trumpet?

lol, look at this guy who doesn't know the marvel character "lieutenant trumpet," the bad rear end military hero who always has his trusty stradivarius at his side

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

namlosh posted:

I had heard about this... is it Dexcom doing it? do you remember if it was another company?
I can google, but if you remember that'd be cool.

I forget, sorry. I think I remember that the wearer just said when they had a large/medium/small meal for a bit to train the device and then only had to do it in certain circumstances afterward? It was a Europe thing and not relevant yet in Canada, so I didn’t dig into it more (brother-in-law is T1D)

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

MrYenko posted:

Why does that guy have a Springfield and a trumpet?

Jazz... the deadliest weapon

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
Imagine getting shot on the battlefield and the last thing you hear before shuffling off your mortal coil is the refrain from "Gimme All Your Lovin'".

Dr. Video Games 0031
Jul 17, 2004

Rinkles posted:

new realtime UE5 showcase

https://www.youtube.com/watch?v=Lb2wwEx6DVw

Amy Hennig's new project (hopefully this one actually comes out) Marvel 1943: Rise of Hydra, due next year

The game seems like whatever, but this tech demo showing the new dynamic tessellation system and particle effects looks really good:

https://www.youtube.com/watch?v=v1HCGLd_IAc

The in-game demo starts 3 minutes in.

edit: the clothing physics too, goddamn. it's very difficult to get clothing to behave like actual clothing, but I think they're getting closer than any other game I've seen. It seems like such a minor detail on paper, but it really lends a lot of believability to the characters' movements.

Dr. Video Games 0031 fucked around with this message at 20:51 on Mar 20, 2024

shrike82
Jun 11, 2005

https://twitter.com/rockpapershot/status/1770469696464191987?s=20


Welp, performance even on a high end PC sounds like a shitshow

quote:

To make sure, I built a new test rig based around the newer, faster, far more core-rich Intel Core i9-13900K. Sure enough, performance improved at all resolutions, with the RTX 4090’s 4K/High/RT on/DLSS Quality average shooting from 41fps to 64fps and the RTX 4060’s 1080p/High/RT off result boosted from 47fps to 61fps.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Lockback posted:

AI is very good at classification (which bucket does this thing belong in), anomaly detection, pattern recognition (in certain scenarios), and vectorization. In those areas I would absolutely trust a well-built model over a person, 100%. And it's not new; models have been doing this for decades in many areas that you probably wouldn't consider AI. What is somewhat new is that newer tools let you do a lot of these things without needing the kind of math-by-hand that you had to do before. That doesn't mean the models are worse, and there are lots of areas where even amateurs at home with consumer cards can do useful things.

What gets lost is that people start trying to push the tools beyond what they can/should do (which you're ABSOLUTELY seeing with LLMs) and then point to that to say the whole genre is built on a house of cards. That's overreacting the other way.

namlosh posted:

Exactly... this would be more classification than generative. Everyone who's used GPT/Gemini etc. has probably had the AI do something stupid, wrong, or flat-out hallucinatory. This won't be that.

thanks for the replies everyone!

Personally, I don’t consider evaluation of data and the subsequent output of mathematical analysis to actually be “artificial intelligence”, though. That’s simply a well-programmed model using its programming to perform analysis, even trend analysis, based upon an established underlying mathematical foundation, but still within the parameters of its originally intended programming scope.

Essentially (and I could be ignorant/wrong on this), none of it seems to be a situation where the model itself is growing beyond the original parameters established for it. Now, if the model/application took all of this data and was then able to make connections between the inputs to produce a recommendation beyond the original scope, that would be impressive. But that doesn’t seem to be what is occurring here.

So to me, all of this seems to be an invalid application of the term “artificial intelligence”, but that might have been what Lockback was already getting at.

Edit:

Yeah, basically this:

Inept posted:

AI is a stupid term that now means nothing but "buy our stock"

Same as blockchain, IoT, cloud, .com, and every other thing that people have rebranded themselves to to try to make more money

Canned Sunshine fucked around with this message at 21:25 on Mar 20, 2024

Brazilianpeanutwar
Aug 27, 2015

Spent my walletfull, on a jpeg, desolate, will croberts make a whale of me yet?

shrike82 posted:

https://twitter.com/rockpapershot/status/1770469696464191987?s=20


Welp, performance even on a high end PC sounds like a shitshow

Are Capcom the types to unfuck their performance issues with patches and stuff? Cause I wanna play Dragon's Dogma 2 so bad, but not to the tune of over £600 for a new computer.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

Canned Sunshine posted:

Personally, I don’t consider evaluation of data and the subsequent output of mathematical analysis to actually be “artificial intelligence”, though. That’s simply a well-programmed model using its programming to perform analysis, even trend analysis, based upon an established underlying mathematical foundation, but still within the parameters of its originally intended programming scope.

Essentially (and I could be ignorant/wrong on this), none of it seems to be a situation where the model itself is growing beyond the original parameters established for it. Now, if the model/application took all of this data and was then able to make connections between the inputs to produce a recommendation beyond the original scope, that would be impressive. But that doesn’t seem to be what is occurring here.

So to me, all of this seems to be an invalid application of the term “artificial intelligence”, but that might have been what Lockback was already getting at.

Edit:

Yeah, basically this:

You use AI to build the model. Typically, yeah, you update the model on whatever frequency, but the frequency of updates is not really a qualifier for whether something is AI or not. It's trivial and arbitrary to say "update every time" or "only update the model when I tell you". In the OP's example, they're not going to program "roundish red thing = apple"; they let an AI learn it and then build a model around it, and the programmer is fine-tuning the parameters of how the model is built, but most likely not the model itself. It falls under the AI umbrella, the machine learning subcategory, and likely the deep learning sub-subcategory depending on what they do. I mean, I guess you can have opinions on what stuff is called, but this has been considered AI for decades and decades, probably pushing 100 years now, and this particular approach has a number of VERY successful and reliable applications.

And yes, lots of people are trying to paint their walls in an AI color and calling themselves AI, whatever, but AI is way more than just "trying to build Skynet", and LLMs are actually not that much more advanced than a lot of more mundane applications. LLMs just seem really advanced because of how they communicate (though yes, the GPT models were also big leaps).
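A toy illustration of that distinction, with invented "roundish red thing" data: the first function is the hand-programmed rule, the second block lets a model learn the boundary from labelled examples, and the only thing the programmer touches is a parameter of how the model gets built (here, tree depth).

code:

# Toy contrast between hand-coding a rule and learning one.
# Features per fruit: [roundness 0-1, redness 0-1]; label 1 = apple.
# The data is invented; the point is who writes the decision logic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def hand_coded_rule(roundness: float, redness: float) -> int:
    # The programmer writes the decision logic directly.
    return int(roundness > 0.7 and redness > 0.6)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))  # columns: [roundness, redness]
y = ((X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 300)) > 1.3).astype(int)

# The programmer only picks how the model is built (max_depth), not the
# learned thresholds themselves.
model = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

print("hand-coded rule says:", hand_coded_rule(0.9, 0.8))
print("learned model says:  ", model.predict([[0.9, 0.8]])[0])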
