|
Bjork Bjowlob posted:
I've tried juice but it hasn't worked well for me. Specifically, their Linux CUDA support blows up when you try to use Blender. It works if I want to run AEYEE EYEE over the network, but I don't give a single poo poo about that, so it's not particularly useful. I haven't tried it with a Windows client, since if I'm spending 2-3 grand on a GPU I intend to use it myself and my desktop is Linux. They also have a lot of super-annoying bugs, some of which I've root-caused (like empty logfiles after a crash because they never flush the output) in reports which just get ignored. Wendell hinted that the A770 can be cross-flashed to the Flex 170, but so far I haven't seen it happen. The Flex 170 is basically nonexistent outside select integrators. And again, I doubt Celestial will ever happen. Even Battlemage is iffy. Intel leadership has the attention span of a squirrel that got into a kilo of coke.
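For anyone curious, the empty-logfile bug described above is the classic unflushed-buffer failure: records sit in a userspace buffer and vanish when the process dies. A minimal Python sketch of the fix (my own illustration, not juice's actual code; the logger names are made up):

```python
import atexit
import logging

# Buffered writes sit in userspace memory; if the process crashes before a
# flush, everything still in the buffer is lost and the logfile ends up empty.
handler = logging.FileHandler("app.log")
logger = logging.getLogger("gpu-worker")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log(msg: str) -> None:
    logger.info(msg)
    handler.flush()  # force the record out to the OS on every write

# Also flush handlers on normal interpreter shutdown, belt-and-braces.
atexit.register(logging.shutdown)

log("CUDA context initialized")  # now survives a hard crash on the next line
```

Flushing per record costs a little throughput, but for crash forensics that trade is almost always worth it.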
|
# ? Mar 19, 2024 19:36 |
|
I could use some advice, thread! Given the latest announcements... is there any reason to hold off on getting an MSI Suprim Liquid X hybrid 4090 for ~$2000 right now? It's going to be for AI/design stuff mostly. I don't even think I own a game that'll really put it through its paces. My current card is pre-Ampere if I'm seeing correctly, and the gains of "Blackwell" announced yesterday are mostly from doubling the die and shrinking the size/precision of a floating point data type? Is there any specific feature expected in the 5-series cards that will make a 4090 obsolete for AI in the next year or two? It might be a dumb question, and I know it's all opinion/guesses, but I figured I'd ask. There's a lot of knowledge in this thread.

vvvvv thanks for the reply... took a look and those cost a lot of cake. beautiful card though

namlosh fucked around with this message at 22:08 on Mar 19, 2024
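On the "shrinking a floating point data type" bit: what you lose is significand precision. Here's a toy quantizer (my own illustration, not NVIDIA's actual FP8/FP4 formats) showing how coarse the representable values get as the significand shrinks:

```python
import math

def quantize(x: float, mantissa_bits: int) -> float:
    """Round x to a float keeping only `mantissa_bits` of significand,
    roughly mimicking what low-precision formats keep per value."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

w = 0.7231
print(quantize(w, 23))  # fp32-like significand: essentially unchanged
print(quantize(w, 10))  # fp16-like: small rounding error
print(quantize(w, 3))   # fp8-like (E4M3-ish): snaps all the way to 0.75
```

The hardware win is that each weight takes a quarter (or an eighth) of the memory and bandwidth; the model-quality question is whether the network tolerates steps that coarse.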
# ? Mar 19, 2024 21:53 |
|
The only major thing you'd potentially miss out on for AI enthusiast compute is if the 5090s get a VRAM size bump, but that doesn't seem likely atm. Maybe consider a 48GB A6000
|
# ? Mar 19, 2024 22:03 |
|
shrike82 posted:the only thing major you'd potentially miss out for AI enthusiast compute is if the 5090s get a VRAM size bump but doesn't seem likely atm I know that Userbenchmark is pretty messy and has a lot of downsides, but there's barely any comparisons between those two GPUs and the 4090 looks majorly faster than the A6000: https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4090-vs-Nvidia-Quadro-RTX-A6000/4136vsm1300600
|
# ? Mar 19, 2024 22:12 |
|
Twerk from Home posted:I know that Userbenchmark is pretty messy and has a lot of downsides, but there's barely any comparisons between those two GPUs and the 4090 looks majorly faster than the A6000: https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4090-vs-Nvidia-Quadro-RTX-A6000/4136vsm1300600 You're looking at the wrong A6000
|
# ? Mar 19, 2024 22:14 |
|
You want the RTX 6000 Ada Generation (I don't make the names). The RTX A6000 is the Ampere card, i.e. an RTX 3090 Ti with different RAM and clocks.
|
# ? Mar 19, 2024 22:22 |
|
whenever i hear someone talk about doing home AI stuff these days, i assume it's genai related and 24gb is marginal even if you're just running inference. i do wonder if nvidia will bother upping the vram on the 5090s at all - games don't need more than 24gb. maybe they'll go back to releasing titans?
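As a rough sanity check on why 24gb is marginal: the usual back-of-envelope is parameter count × bytes per weight, plus some headroom for activations and KV cache. A quick sketch (the 20% overhead factor is my own rough assumption):

```python
def vram_gb(params_billion: float, bytes_per_weight: float,
            overhead: float = 1.2) -> float:
    """Back-of-envelope inference VRAM: billions of params x bytes per
    weight gives GB directly, times ~1.2 for activations/KV cache."""
    return params_billion * bytes_per_weight * overhead

for model, size in [("7B", 7), ("13B", 13), ("70B", 70)]:
    for prec, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        need = vram_gb(size, nbytes)
        verdict = "fits" if need <= 24 else "doesn't fit"
        print(f"{model} @ {prec}: ~{need:.1f} GB -> {verdict} in 24 GB")
```

By this arithmetic a 13B model already needs quantization to fit in 24 GB, and 70B doesn't fit even at 4-bit, which is exactly why people build those dual-3090 franken-rigs.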
|
# ? Mar 19, 2024 22:27 |
|
shrike82 posted:whenever i hear someone talk about doing home AI stuff these days, i assume it's genai related and 24gb is marginal even if you're just running inference. there are people using it for video processing, and some local voice models now too!
|
# ? Mar 19, 2024 22:28 |
|
it won't be a feature of the 5090, but expandable vram would be an interesting draw since it would avoid those dual 3090 frankensetups
|
# ? Mar 20, 2024 04:06 |
|
shrike82 posted:
Honestly they should, though the 4090 probably sold better being called a 4090 than it would have if it were called a Titan so
|
# ? Mar 20, 2024 04:10 |
|
Thanks very much for the replies everyone... one of the things I'm going to try and look at is generating a glycemic index for different foods (based on description) for diabetics. But that's a long term thing. That and maybe create an AI ad-blocker. I'm going to head over to the AI/Stable Diffusion threads in yospos and CoC, but if anyone has any cool eye candy that I should do with it I'd love to hear it. All of Nvidia's tech demos seem to be super old. I do plan on jumping on GTA Online at some point since it's the only game I play on PC and it'll be neat to see everything maxed out at 144 fps (monitor's max).
|
# ? Mar 20, 2024 04:18 |
|
Lockback posted:Honestly they should, though the 4090 probably sold better being called a 4090 than it would have if it were called a Titan so Both a 4090 Ti and "Titan Ada" were prototyped too but ended up getting canned, at least partially due to the insatiable demand for L40s and 6000 Adas. Of course just because something was prototyped and talked about with OEMs doesn't mean it was ever really close to being a real product so whomst knoweth.
|
# ? Mar 20, 2024 05:24 |
|
Cygni posted:Both a 4090 Ti and "Titan Ada" were prototyped too but ended up getting canned, at least partially due to the insatiable demand for L40s and 6000 Adas. Of course just because something was prototyped and talked about with OEMs doesn't mean it was ever really close to being a real product so whomst knoweth. This is what I'm worried about - that the explosion in demand for datacenter GPUs is going to keep gaming GPU supply strangled, even if demand cools. Why would nvidia use more fab capacity on a $1000 consumer card when the same chip in a datacenter card costs 5x as much?
|
# ? Mar 20, 2024 06:31 |
|
namlosh posted:Thanks very much for the replies everyone... one of the things I'm going to try and look at is generating a glycemic index for different foods (based on description) for diabetics. But that's a long term thing. Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed.
|
# ? Mar 20, 2024 07:56 |
|
Kazinsal posted:Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed. Crossposting something from the milhist thread which is not something I ever expected to type in the GPU thread. You can breeze over my commentary, it's all history nerd poo poo, the important bit is the screen shots: Cyrano4747 posted:This is mildly off topic, but the example I used is milhist so this thread might find it amusing. Note that I'm an idiot and know nothing about AI. The fact that I was able to trip it up is akin to a clumsy 4 year old figuring out how to disable your car.
|
# ? Mar 20, 2024 12:49 |
|
Kazinsal posted:Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed. You should go warn the thousands of people who have been in trials of ML-driven insulin pumps since 2018!
|
# ? Mar 20, 2024 13:08 |
|
Cyrano4747 posted:Crossposting something from the milhist thread which is not something I ever expected to type in the GPU thread. Copilot is built on ChatGPT 4.0, but whatever Microsoft have done to tailor it to their needs has made it dumber and more susceptible to outputting nonsense, often very confidently. All of these chatbot assistants get things wrong, but ime Copilot is on another level.
|
# ? Mar 20, 2024 13:11 |
|
i'm glad the people making all the money off of internet are going to make internet loving useless over the next couple years
|
# ? Mar 20, 2024 13:50 |
|
Rinkles posted:Copilot is built on ChatGPT 4.0, but whatever Microsoft have done to tailor it to their needs, has made it dumber and more susceptible to outputting nonsense, often very confidently. All of these chatbot assistants get things wrong, but ime Copilot is on another level. Gemini is an outstanding chatbot, in that they managed to replicate an actual idiot.
|
# ? Mar 20, 2024 14:02 |
|
Cyrano4747 posted:Crossposting something from the milhist thread which is not something I ever expected to type in the GPU thread. An LLM is not the only machine learning avenue. An LLM like Copilot/ChatGPT is trying to induce novel text based on its model, and being correct is very secondary. People like to paint AI with one brush, but an LLM is doing what it's supposed to be doing, and being wrong or hallucinating is not out of bounds. Using AI to help model glycemic index isn't at all an outlandish thing. You need to model these things, and you need to make guesses since you cannot test every single food you eat, and AI is excellent for chewing through massive amounts of data to make that easier. That's a kind of classification problem which AI has been really good at for decades, and I assure you it is being used in all sorts of applications that you trust with your life today.
|
# ? Mar 20, 2024 15:09 |
|
AI is a stupid term that now means nothing but "buy our stock". Same as blockchain, IoT, cloud, .com, and every other thing that people have rebranded themselves as to try to make more money
|
# ? Mar 20, 2024 15:23 |
|
Lockback posted:Using AI to help model Hypoglycemic Index isn't at all an outlandish thing. You need to model these things, and you need to make guesses since you cannot test every single food you eat and AI is excellent for chewing through massive amounts of data to make that easier. That's a kind of classification problem which AI has been really good at for decades and I assure you is being used in all sorts of applications that you trust with your life today. Yeah something I've always heard is this kind of thing is super useful for medical research since it turns out going through massive data sets looking for outliers and anomalies is something a computer is pretty good at.
|
# ? Mar 20, 2024 16:23 |
|
Harik posted:Gemini is an outstanding chatbot, in that they managed to replicate an actual idiot.
|
# ? Mar 20, 2024 17:16 |
|
garry newman of gmod fame got a similar treatment https://twitter.com/garrynewman/status/1755851884047303012
|
# ? Mar 20, 2024 17:19 |
|
njsykora posted:Yeah something I've always heard is this kind of thing is super useful for medical research since it turns out going through massive data sets looking for outliers and anomalies is something a computer is pretty good at. AI is very good at classification (which bucket does this thing belong in), anomaly detection, pattern recognition (in certain scenarios), and vectorization. In those areas I would absolutely trust a well-built model over a person, 100%. And it's not new; models have been doing this for decades in many areas that you probably wouldn't consider AI. What is somewhat new is tooling that lets you do a lot of these things without the kind of math-by-hand that you had to do before. That doesn't mean the models are worse, and there are lots of areas where even amateurs at home with consumer cards can do useful things. What gets lost is that people start trying to push the tools beyond what they can/should do (which you're ABSOLUTELY seeing with LLMs) and then point to that to say the whole genre is built on a house of cards. That's overreacting the other way.
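The "which bucket does this thing belong in" task fits in a dozen lines. Here's a toy 1-nearest-neighbour classifier in plain Python; the nutrition numbers and GI buckets are invented purely for illustration:

```python
import math

# Toy training set: (feature vector, label). Features are made-up
# nutrition numbers (sugar g, fibre g per 100 g); labels are GI buckets.
TRAIN = [
    ((2.0, 9.0), "low"),
    ((5.0, 7.0), "low"),
    ((12.0, 3.0), "medium"),
    ((15.0, 2.0), "medium"),
    ((30.0, 0.5), "high"),
    ((40.0, 0.2), "high"),
]

def classify(x: tuple[float, float]) -> str:
    """1-nearest-neighbour: give x the label of the closest training point."""
    return min(TRAIN, key=lambda t: math.dist(x, t[0]))[1]

print(classify((3.0, 8.0)))   # lands near the low-GI examples
print(classify((35.0, 0.3)))  # lands near the high-GI examples
```

No rule was ever hand-written here; the "model" is just labelled examples plus a distance metric, which is the shape of a lot of pre-LLM, decades-old, perfectly boring production AI.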
|
# ? Mar 20, 2024 17:47 |
|
new realtime UE5 showcase https://www.youtube.com/watch?v=Lb2wwEx6DVw Amy Hennig's new project (hopefully this one actually comes out) Marvel 1943: Rise of Hydra, due next year
|
# ? Mar 20, 2024 17:56 |
|
Kazinsal posted:Trusting machine learning to manage diabetes sounds like a good way to get diabetics killed. It wouldn't be about putting all of your faith into an algorithm, nor would I intend to have it control any type of pump or apparatus. Just guidance. It's exhausting to have to figure out how much insulin to take every time you eat something. If there was a way to take a picture of what you're about to eat and let object detection try to figure out what it is, how much of it there is, and what its glycemic index is, that could help a lot. It could certainly provide suggestions on the number of units to take based on the potful of data collected by the Dexcom or other CGM... but that would be later down the road. Source: My wife is T1D Subjunctive posted:You should go warn the thousands of people who have been in trials of ML-driven insulin pumps since 2018! I had heard about this... is it Dexcom doing it? do you remember if it was another company? I can google, but if you remember that'd be cool. Lockback posted:
thanks for the replies everyone!
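For context, the "suggest a number of units" part namlosh describes is usually textbook open-loop bolus arithmetic: a meal dose plus a correction term. A sketch with invented parameters; real ratios are prescribed per patient, and this is purely illustrative, not medical guidance:

```python
def suggested_bolus(carbs_g: float, glucose_mgdl: float,
                    icr: float = 10.0, isf: float = 50.0,
                    target_mgdl: float = 110.0) -> float:
    """Open-loop bolus arithmetic: meal dose (carbs / insulin-to-carb
    ratio) plus a correction ((current - target) / sensitivity factor).
    icr/isf/target values here are invented for illustration only."""
    meal = carbs_g / icr
    correction = max(0.0, (glucose_mgdl - target_mgdl) / isf)
    return round(meal + correction, 1)

print(suggested_bolus(60, 180))  # 60 g carbs at 180 mg/dL
```

The ML angle in the thread is estimating the `carbs_g` input from a photo; the dosing arithmetic itself stays simple, auditable, and under the wearer's control.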
|
# ? Mar 20, 2024 17:58 |
|
repiv posted:garry newman of gmod fame got a similar treatment "unethical" nah they just don't want to be liable for their chatbot making GBS threads out memory unsafe code that gets used all over the place by developers
|
# ? Mar 20, 2024 17:59 |
|
Inept posted:"unethical" nah they just don't want to be liable for their chatbot making GBS threads out memory unsafe code that gets used all over the place by developers Well if they won't give me the code, I'm gonna do it myself
|
# ? Mar 20, 2024 18:45 |
|
Rinkles posted:new realtime UE5 showcase Why does that guy have a Springfield and a trumpet?
|
# ? Mar 20, 2024 18:47 |
|
Shooting while playing the trumpet is his superpower
|
# ? Mar 20, 2024 18:55 |
|
MrYenko posted:Why does that guy have a Springfield and a trumpet? lol, look at this guy who doesn't know the marvel character "lieutenant trumpet," the bad rear end military hero who always has his trusty stradivarius at his side
|
# ? Mar 20, 2024 18:55 |
|
namlosh posted:I had heard about this... is it Dexcom doing it? do you remember if it was another company? I forget, sorry. I think I remember that the wearer just said when they had a large/medium/small meal for a bit to train the device and then only had to do it in certain circumstances afterward? It was a Europe thing and not relevant yet in Canada, so I didn’t dig into it more (brother-in-law is T1D)
|
# ? Mar 20, 2024 19:05 |
|
MrYenko posted:Why does that guy have a Springfield and a trumpet? Jazz... the deadliest weapon
|
# ? Mar 20, 2024 19:25 |
|
Imagine getting shot on the battlefield and the last thing you hear before shuffling your mortal coil is the refrain from Gimme all your lovin
|
# ? Mar 20, 2024 19:32 |
|
Rinkles posted:new realtime UE5 showcase The game seems like whatever, but this tech demo showing the new dynamic tessellation system and particle effects looks really good: https://www.youtube.com/watch?v=v1HCGLd_IAc The in-game demo starts 3 minutes in. edit: the clothing physics too, goddamn. it's very difficult to get clothing to behave like actual clothing, but I think they're getting closer than I've seen any other game. It seems like such a minor detail on paper, but it really lends a lot of believability to the characters' movements. Dr. Video Games 0031 fucked around with this message at 20:51 on Mar 20, 2024 |
# ? Mar 20, 2024 20:43 |
|
https://twitter.com/rockpapershot/status/1770469696464191987?s=20 Welp, performance even on a high end PC sounds like a shitshow quote:To make sure, I built a new test rig based around the newer, faster, far more core-rich Intel Core i9-13900K. Sure enough, performance improved at all resolutions, with the RTX 4090’s 4K/High/RT on/DLSS Quality average shooting from 41fps to 64fps and the RTX 4060’s 1080p/High/RT off result boosted from 47fps to 61fps.
|
# ? Mar 20, 2024 21:19 |
|
Lockback posted:AI is very good at classification (which bucket does this thing belong in), Anomaly detection, pattern recognition (certain scenarios), Vectorization. In those areas I would absolutely trust a well built model over a person, 100%. And its not new, models have been doing this for decades in many areas that you probably wouldn't consider AI. What is somewhat new is newer tools that let you do a lot of these things without needing the kind of math-by-hand that you had to do before. That doesn't mean the models are worse, and there are lots of areas that even amateurs at home with consumer cards can do useful things. namlosh posted:Exactly... this would be more classification rather than generative. Everyone who's used GPT/Gemini etc has probably had the AI do something stupid, wrong, or flat out hallucinatory. This won't be that. Personally, I don't consider evaluation of data and the subsequent output of mathematical analysis to actually be "artificial intelligence" though. That's simply a well-programmed model using its developed programming to perform analysis, even trend analysis, on an established, underlying mathematical foundation, but still within the parameters of its original intended programming scope. Essentially, and I could be ignorant/wrong on this, but none of it seems to be a situation where the model itself is growing beyond the original parameters established for it. Now if the model/application were taking all of this data and then making a connection between the inputs to produce a recommendation beyond the original scope, that would be impressive. But that doesn't seem to be what is occurring here. So to me, all of this seems to be an invalid application of the term "artificial intelligence", but that might have been what Lockback was already getting at.
Edit: Yeah, basically this: Inept posted:AI is a stupid term that now means nothing but "buy our stock" Canned Sunshine fucked around with this message at 21:25 on Mar 20, 2024 |
# ? Mar 20, 2024 21:22 |
|
shrike82 posted:https://twitter.com/rockpapershot/status/1770469696464191987?s=20 Are Capcom the types to unfuck their performance issues with patches and stuff? Cause i wanna play Dragons dogma 2 so bad but not to the tune of over £600 for a new computer.
|
# ? Mar 20, 2024 22:33 |
|
|
Canned Sunshine posted:Personally I don't consider evaluation of data and the subsequent output of mathematical analysis to actually be "artificial intelligence" though. That's simply a well-programmed model that is utilizing the developed programming to perform analysis and even trend analysis based upon an established, underlying mathematical foundation but still within the parameters of its original intended programming scope. You use AI to build the model. Typically, yeah, you update the model on whatever frequency, but update frequency is not really a qualifier for AI or not; it's trivial and arbitrary to say "update every time" or "only update the model when I tell you". In the OP's example they're not going to program "roundish red thing = apple"; they let an AI learn it and then build a model around it, with the programmer fine-tuning the parameters of how the model is built, but most likely not the model itself. It falls under the AI umbrella, in the machine learning subcategory, and likely the deep learning sub-subcategory depending on what they do. I mean, I guess you can have opinions on what stuff is called, but this has been considered AI for decades and decades (the term itself goes back to the 1950s), and this particular application has a number of VERY successful and reliable uses. And yes, lots of people are trying to paint their walls in an AI color and calling themselves AI, whatever, but AI is way more than just "trying to build Skynet", and LLMs are actually not that much more advanced than a lot of more mundane applications. LLMs just seem really advanced because of how they communicate (though yes, the GPT models were also big leaps).
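To make the "the rule is learned, not hand-coded" distinction concrete, here's about the smallest learned model there is: a perceptron on toy data. The programmer chooses the learning rate and epoch count (how the model is built); the decision rule itself, the weights, comes out of the examples:

```python
# Toy labelled data: label happens to equal the second feature.
# Nothing in the code below encodes that rule explicitly.
DATA = [((0.0, 0.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 1), ((1.0, 1.0), 1)]

def train(data, lr=0.1, epochs=50):
    """Classic perceptron rule: nudge weights toward each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train(DATA)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(predict(0.0, 1.0), predict(1.0, 0.0))  # recovers the pattern from data
```

Whether you call that "artificial intelligence" or "curve fitting" is the naming argument above in miniature; mechanically, it's the same train-then-predict shape as the big models, just with two weights instead of billions.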
|
# ? Mar 20, 2024 22:41 |