Canna Happy
Jul 11, 2004
The engine, code A855, has a cast iron closed deck block and split crankcase. It uses an 8.1:1 compression ratio with Mahle cast eutectic aluminum alloy pistons, forged connecting rods with cracked caps and threaded-in 9 mm rod bolts, and a cast high

How is it possible?


Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Live shot from Nvidia’s GTC presentation:

Cygni
Nov 12, 2005

raring to post

now jensen will release the robot army on the crowd and begin the takeover, thank u jensen

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing

https://www.tomshardware.com/pc-com...ver-hopper-h100

quote:

The second caveat we need to discuss is with the maximum theoretical compute of 20 petaflops. Blackwell B200 gets to that figure via a new FP4 number format, with twice the throughput as Hopper H100’s FP8 format. So, if we were comparing apples to apples and sticking with FP8, B200 ‘only’ offers 2.5X more theoretical FP8 compute than H100 (with sparsity), and a big part of that comes from having two chips.

Blackwell, huh? The top-line marketing number is that a Blackwell chip will do 20 petaflops versus a mere 4 petaflops on a Hopper chip, but half of that perf comes from moving to FP4, and the other half from moving to 2 dies. These things are going to be expensive.
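
To spell that out with just the figures quoted above (a back-of-envelope, using only the marketing numbers from the article, all "with sparsity"):

code:

# Back-of-envelope using only the figures quoted above.
h100_fp8 = 4.0            # petaflops, H100 FP8 (sparse)
b200_fp4 = 20.0           # petaflops, B200 FP4 (sparse), the headline number

b200_fp8 = b200_fp4 / 2   # FP4 is quoted at 2x the throughput of FP8
print(b200_fp8 / h100_fp8)      # 2.5  -> the apples-to-apples FP8 ratio
print(b200_fp8 / 2 / h100_fp8)  # 1.25 -> per-die gain, since B200 is two chips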

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Lmao the quotes from the various CEOs are funny. All the normal CEOs saying normal CEO things then Elon with the big dumb pronouncement.

Inept
Jul 8, 2003

Twerk from Home posted:

https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing

https://www.tomshardware.com/pc-com...ver-hopper-h100

Blackwell, huh? The top-line marketing number is that a Blackwell chip will do 20 petaflops versus a mere 4 petaflops on a Hopper chip, but half of that perf comes from moving to FP4, and the other half from moving to 2 dies. These things are going to be expensive.

imagine how much performance they'll get with FP0

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


Being the person I am... does this mean anything for computer games and better graphics?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Gucci Loafers posted:

Being the person I am... does this mean anything for computer games and better graphics?

Yeah, it means the 5090 is going to perform a lot better than a 4090 and probably cost a lot more.

Branch Nvidian
Nov 29, 2012



Twerk from Home posted:

https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing

https://www.tomshardware.com/pc-com...ver-hopper-h100

Blackwell, huh? The top-line marketing number is that a Blackwell chip will do 20 petaflops versus a mere 4 petaflops on a Hopper chip, but half of that perf comes from moving to FP4, and the other half from moving to 2 dies. These things are going to be expensive.

Is there a linear scaling from FP16 -> FP8 -> FP4? This feels like they’re massaging the numbers to make the chart look more impressive than it already is. They’re starting with Pascal at 19 TFLOPS FP16; does this mean Pascal can’t do FP8 or FP4 calculations? I thought it got easier the smaller the floating-point bit width was? 20,000 TFLOPS at FP4 isn’t a 1,000x improvement over 19 TFLOPS FP16 if that’s how this works.

Branch Nvidian fucked around with this message at 03:55 on Mar 19, 2024

shrike82
Jun 11, 2005

You got near-linear speedups with almost no loss in accuracy going from FP32 to FP16. Everything after that has been more incremental, with a much higher risk of accuracy being impacted materially and a lot of tweaking required to exclude the parts of model architectures that don't handle lower precision well.
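
For a feel of why the FP32 -> FP16 step was nearly free, here's a rough numpy sketch (numpy has no native FP8/FP4, so this only goes one step down; the numbers are illustrative, not a benchmark):

code:

import numpy as np

# Round a million float32 "weights" to float16 and measure the damage.
rng = np.random.default_rng(0)
w = rng.standard_normal(1_000_000).astype(np.float32)
w16 = w.astype(np.float16).astype(np.float32)

err = np.abs(w - w16)
print("max abs error:", err.max())                      # on the order of 1e-3
print("median rel error:", np.median(err / np.abs(w)))  # ~2**-12, usually harmless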

Dr. Video Games 0031
Jul 17, 2004

Gucci Loafers posted:

Being the person I am... does this mean anything for computer games and better graphics?

No, there's not much we can extrapolate to desktop GPUs. I don't think there's a node advance for the B100 datacenter GPUs releasing this year (staying on N4), so transistor density is staying roughly the same and performance isn't improving per die since these dies are already at the reticle limit. To overcome this, Nvidia is stapling two dies together.

Kopite7kimi is suggesting that desktop Blackwell GPUs will move to the N4P node, while Lovelace is currently on a branch of the N5 node (deceptively marketed as "4N" by Nvidia, which is not to be confused with "N4" and its derivatives). This means that there will be a node advancement with the desktop GPUs, though it won't be nearly as impactful as when Nvidia went from Samsung 8nm to TSMC 5nm (30-series to 40-series). There are still things Nvidia can leverage to make it a decent generation, though. This generation sucked for value until the Super series launched (and it's still not great), so Nvidia could possibly afford to offer bigger dies for each SKU without increasing the price, but we'll see if they feel that friendly next year.

edit: To add to the confusion, the node Nvidia is using for their datacenter GPUs is actually labeled "4NP" which is said to be a variant of "N4X" and not "N4P". And it's that "4NP" node that the 50-series GPUs will use too. Node naming is truly a cursed art.

Dr. Video Games 0031 fucked around with this message at 05:44 on Mar 19, 2024

Bad Parenting
Mar 26, 2007

This could get emotional...


Not sure if this is the right thread to ask, but I just got a 4090 delivered today, and I just remembered I only have a 750W PSU. Is that gonna be okay powering the 4090 Founders Edition + an i9-9900K CPU?

Also, is that CPU going to be a bottleneck at this point if I mainly play on an LG OLED tv aiming for 4k 120FPS?

Nfcknblvbl
Jul 15, 2002

Bad Parenting posted:

Also, is that CPU going to be a bottleneck at this point if I mainly play on an LG OLED tv aiming for 4k 120FPS?

Yep. I'm on a 4090 & 10850K and I notice my CPU being a bottleneck.

YerDa Zabam
Aug 13, 2016



The CPU will be a bottleneck, yes. Couldn't say how much. You might be able to find something on YouTube with that combo to give you an idea. Then compare that with the card and a more recent CPU to get a rough idea of how much you're leaving on the table. A fair bit I'd guess; 30% wouldn't surprise me. Just guessing though.
I had an 8th-gen Intel CPU that I was considering pairing with a 4070 Ti a while back and found some YouTube benchmark videos that put me off the idea.

As for the psu, idk, but 750 does seem to be cutting it very close.

YerDa Zabam fucked around with this message at 13:20 on Mar 19, 2024

Dr. Video Games 0031
Jul 17, 2004

A 750W PSU might be able to handle a 4090, but it will be cutting it close indeed. The 9900K can be a bottleneck in some games at 4K and not at all in a lot of others. It will be very game dependent because every game has really different CPU usage patterns. The 4080 would've likely been fine with both the PSU and CPU, but the 4090 is borderline or maybe over the line.

Animal
Apr 8, 2003

A 750W has been enough for my 7800X3D + 4090, and 12900K + 4090 before that. If you are worried, set the power target for the 4090 at 80% and it will max out at 300W. You only lose about 5% performance with a fast CPU, and you are CPU limited anyways, so there will be no performance benefit to running the 4090 at 100% target or beyond.
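
If you'd rather script the cap than drag an Afterburner slider, the NVML Python bindings can set the same limit. A minimal sketch, assuming the nvidia-ml-py package and admin rights (roughly equivalent to `nvidia-smi -pl 360` on a 450W 4090):

code:

# pip install nvidia-ml-py; run elevated.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(gpu)     # milliwatts
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, int(default_mw * 0.8))  # 80% target
pynvml.nvmlShutdown()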

Cross-Section
Mar 18, 2009

I've been running a 4090/5800X3D on a Corsair HX750 for a year and a half now, and it's been totally fine with a CPU undervolt and Afterburner forcing an 80% power limit on the GPU. I have a UPS hooked up and the overall power draw rarely goes above 600W.

Bad Parenting
Mar 26, 2007

This could get emotional...


Thanks for the replies all. What's the CPU of choice for gaming at the moment if I decide to upgrade, the Ryzen 7 7800X3D? What is a good motherboard to pair with that?

If I want to upgrade the PSU should I go to 850W or just jump up to 1000W?

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
Made the mistake of talking about two things in one post so this half got lost, lol

If I wanted to do virtualized game streaming from my homelab to stuff around my house, what level of card would I need to do that? Can a Quadro be partitioned for multiple VMs, or do I need to go a step beyond that into server-land? Not talking 1:1 card-to-VM passthrough, you can do that with the consumer RTXs. One card, multiple VMs.

Not worried about the 'how'; there are a few options for that and I'll poke around the virtualization thread if I get stuck. But I have no idea where up the stack Nvidia starts allowing that.

Sadly I can't go the AMD route, since I want the virtual desktops to be able to do stuff like Blender, and lol, AMD's single intern working on compute.

I know Intel's going hard on it with their new card specifically for this, but... non-x86 processors from Intel have a life expectancy nearly as bad as non-AdWords products from Google.

ijyt
Apr 10, 2012

Dr. Video Games 0031 posted:

A 750W PSU might be able to handle a 4090, but it will be cutting it close indeed. The 9900K can be a bottleneck in some games at 4K and not at all in a lot of others. It will be very game dependent because every game has really different CPU usage patterns. The 4080 would've likely been fine with both the PSU and CPU, but the 4090 is borderline or maybe over the line.

I've been using a 4090 and 7800X3D on an SF750 with zero issues and no power limiting. 750W is basically what every SFF nut uses for their 4090 hotbox.

change my name
Aug 27, 2007

Legends die but anime is forever.

RIP The Lost Otakus.

ijyt posted:

I've been using a 4090 and 7800X3D on an SF750 with zero issues and no power limiting. 750W is basically what every SFF nut uses for their 4090 hotbox.

I've mentioned it before, but the EVGA 750-watt SFF Supernova whatever rules hard because the fan literally never kicks on except at startup, so it's dead silent.

Bjork Bjowlob
Feb 23, 2006
yes that's very hot and i'll deal with it in the morning


Harik posted:

Made the mistake of talking about two things in one post so this half got lost, lol

If I wanted to do virtualized game streaming from my homelab to stuff around my house, what level of card would I need to do that? Can a Quadro be partitioned for multiple VMs, or do I need to go a step beyond that into server-land? Not talking 1:1 card-to-VM passthrough, you can do that with the consumer RTXs. One card, multiple VMs.

Not worried about the 'how'; there are a few options for that and I'll poke around the virtualization thread if I get stuck. But I have no idea where up the stack Nvidia starts allowing that.

Sadly I can't go the AMD route, since I want the virtual desktops to be able to do stuff like Blender, and lol, AMD's single intern working on compute.

I know Intel's going hard on it with their new card specifically for this, but... non-x86 processors from Intel have a life expectancy nearly as bad as non-AdWords products from Google.

I looked into this a bit a year or so ago. To split one NVIDIA card into multiple virtual GPUs you're looking for their vGPU product, which comes in a few different flavours. The list of supported GPUs is here: https://www.nvidia.com/en-us/data-center/graphics-cards-for-virtualization/

Unsurprisingly there are no GeForce cards in the list. However, this restriction is implemented in software for GPUs up to Turing. If you have a 1000 or 2000 series GPU (or Quadro from the same architecture generations) you can try patching the NVIDIA driver to remove this restriction. See here: https://github.com/DualCoder/vgpu_unlock/tree/f432ffc8b7ed245df8858e9b38000d3b8f0352f4 and https://github.com/VGPU-Community-Drivers/vGPU-Unlock-patcher .

Note that this approach likely contravenes the vGPU licensing terms so take that into account.

Re Intel, all information I've seen states that they have no plans to support SR-IOV on their consumer GPUs, even though it is nominally supported on their integrated GPUs. SR-IOV is the cleanest way to provide virtualised devices on top of a physical device; you'll find it on many network devices, including those from Intel.
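
(For what it's worth, on a Linux box you can check whether any PCI device actually exposes SR-IOV via standard sysfs; a quick sketch:)

code:

# Any PCI device with SR-IOV support exposes sriov_totalvfs in sysfs.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vfs = dev / "sriov_totalvfs"
    if vfs.exists():
        print(dev.name, "supports SR-IOV, up to", vfs.read_text().strip(), "VFs")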

YerDa Zabam
Aug 13, 2016



Bad Parenting posted:

Thanks for the replies all. What's the CPU of choice for gaming at the moment if I decide to upgrade, the Ryzen 7 7800X3D? What is a good motherboard to pair with that?

If I want to upgrade the PSU should I go to 850W or just jump up to 1000W?

Yeah, the 7800X3D is very popular, and an excellent choice. Monster performance to go nicely with a 4090. Runs relatively cool (sub-85°C) with impressively low power use. The spec says 120W, but it's generally more like 80W or under.

Since others have said 750 is fine for them (maybe with a reduced power target for peace of mind), the extra ten bux or so for an 850 would make sense. 1000 is way more than needed and will bump the price a bit.

There was a good back and forth a couple of pages back in the PC building thread
https://forums.somethingawful.com/showthread.php?threadid=3970266&userid=0&perpage=40&pagenumber=501
Basically get a B650 board, pick the IO options/M.2 counts/generations you want, and call it a day. Pooperscooper, who was asking initially, got an ASRock B650M Pro RS WiFi Micro ATX. I recently got (and am happy with) an Asus Prime B650M-A WIFI II.
You could throw a dart into that whole selection of boards and most likely get something decent. It's all about IO/USB/M.2, and aesthetics really. Performance is fine across the board except for specific extreme OC stuff and particular, highly specific tasks (and even then it's a marginal difference).

YerDa Zabam fucked around with this message at 15:28 on Mar 19, 2024

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Branch Nvidian posted:

They’re starting with Pascal at 19 TFLOPS FP16; does this mean Pascal can’t do FP8 or FP4 calculations?

I believe that’s correct. FP8 was an optimization option for ML models that didn’t need the full precision, and prior to that the kernels had to operate on FP16 at a minimum.

(I guess the point can technically float in FP4, but it is still a bit funny to me.)

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Subjunctive posted:

I believe that’s correct. FP8 was an optimization option for ML models that didn’t need the full precision, and prior to that the kernels had to operate on FP16 at a minimum.

(I guess the point can technically float in FP4, but it is still a bit funny to me.)

We simply map the entire space of real numbers onto 16 distinct values; what could be simpler? I find it terrifying as well.
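
Those values are easy to enumerate. A sketch assuming the common E2M1 layout (1 sign, 2 exponent, 1 mantissa bit, bias 1, no Inf/NaN), which is what the MX-style FP4 formats use; an illustration of the format family, not a vendor spec:

code:

# Enumerate every FP4 (E2M1) bit pattern.
def fp4(bits):
    sign = -1.0 if bits & 0b1000 else 1.0
    exp, man = (bits >> 1) & 0b11, bits & 1
    if exp == 0:
        return sign * man * 0.5                     # subnormal: (m/2) * 2**(1-bias)
    return sign * (1 + man / 2) * 2.0 ** (exp - 1)  # normal: 1.m * 2**(exp-bias)

print(sorted(set(fp4(b) for b in range(16))))
# [-6.0, -4.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
# 16 encodings, 15 distinct values once +/-0 collapse.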

Bad Parenting
Mar 26, 2007

This could get emotional...


YerDa Zabam posted:

Yeah, the 7800X3D is very popular, and an excellent choice. Monster performance to go nicely with a 4090. Runs relatively cool (sub-85°C) with impressively low power use. The spec says 120W, but it's generally more like 80W or under.

Since others have said 750 is fine for them (maybe with a reduced power target for peace of mind), the extra ten bux or so for an 850 would make sense. 1000 is way more than needed and will bump the price a bit.

There was a good back and forth a couple of pages back in the PC building thread
https://forums.somethingawful.com/showthread.php?threadid=3970266&userid=0&perpage=40&pagenumber=501
Basically get a B650 board, pick the IO options/M.2 counts/generations you want, and call it a day. Pooperscooper, who was asking initially, got an ASRock B650M Pro RS WiFi Micro ATX. I recently got (and am happy with) an Asus Prime B650M-A WIFI II.
You could throw a dart into that whole selection of boards and most likely get something decent. It's all about IO/USB/M.2, and aesthetics really. Performance is fine across the board except for specific extreme OC stuff and particular, highly specific tasks (and even then it's a marginal difference).

Thanks for the link, I'll have a catch-up in that thread and shout there if I've got any more questions! I'll likely end up getting the 7800X3D CPU & B650 motherboard combo along with an 850W PSU. My PSU is getting on for 5 years old now; how long do they last these days?

Kibner
Oct 21, 2008

Acguy Supremacy

Bad Parenting posted:

My PSU is getting on for 5 years old now; how long do they last these days?

The better ones are warrantied for 10-12 years now.

YerDa Zabam
Aug 13, 2016



Bad Parenting posted:

Thanks for the link, I'll have a catch-up in that thread and shout there if I've got any more questions! I'll likely end up getting the 7800X3D CPU & B650 motherboard combo along with an 850W PSU. My PSU is getting on for 5 years old now; how long do they last these days?

My previous Corsair is over 14 years old, and my nephew is connecting his new 4070 Super to it today (with an adapter thingy). They tend to just keep on trucking.

Internet Explorer
Jun 1, 2005





Huh, I guess I need to update my approach to PSUs. I've always treated them like a wear item and replaced them when I did a refresh. Maybe they can live on forever like cases tend to.

Branch Nvidian
Nov 29, 2012



My approach to PSUs is that I’ll only trust them for as long as the manufacturer trusts them enough to warranty them.

Indiana_Krom
Jun 18, 2007
Net Slacker

Branch Nvidian posted:

My approach to PSUs is that I’ll only trust them for as long as the manufacturer trusts them enough to warranty them.

I've had a couple of PSUs die in the warranty period, but the only symptom was that the machine would fail to power on, or would require multiple attempts (something something charging up capacitors). In both cases the rest of the PC survived completely unharmed and was fixed by swapping out the PSU. But yes, this is generally a good rule of thumb, and not even that unreasonable since it isn't hard to find 10- or even 12-year warranties now.

Cyrano4747
Sep 25, 2006

Yes, I know I'm old, get off my fucking lawn so I can yell at these clouds.

It also depends on how valuable your hardware is.

Building a new uber-system with a top-end CPU and a 4090? Get a new fucking PSU, it's cheap-as-fuck insurance compared to what you just dropped on a halo card.

My system with a $100 CPU and an ever-changing parade of used GPUs that I snipe off eBay (currently a 2080, in the hunt for a 3080)? Fuck it, my 9-year-old PSU is fine, and if it manages to die and take the GPU with it I'm only out a few hundred bucks. Frankly less, because I'll flog that shit back onto eBay as parts-only and get at least $100 out of it.

Inept
Jul 8, 2003

Cyrano4747 posted:

Building a new uber-system with a top-end CPU and a 4090? Get a new fucking PSU, it's cheap-as-fuck insurance compared to what you just dropped on a halo card.

Unless you have a Gigabyte grenade PSU, when your PSU fails it will just stop working. It's not gonna suicide bomb your components.

YerDa Zabam
Aug 13, 2016



efb. Also, generally speaking they don't fry all your parts (generally, mind), but they're more likely to cause weird errors, BSODs, and the like.

If I ever have a hard-to-diagnose issue I always try swapping in a different PSU. Once I've checked the RAM first, that is, since that's easier to do.
Hope I've not tempted the gods and my new GPU goes boom. With my recent luck nothing would surprise me.

Internet Explorer
Jun 1, 2005





Cyrano4747 posted:

It also depends on how valuable your hardware is.

Building a new uber-system with a top-end CPU and a 4090? Get a new fucking PSU, it's cheap-as-fuck insurance compared to what you just dropped on a halo card.

My system with a $100 CPU and an ever-changing parade of used GPUs that I snipe off eBay (currently a 2080, in the hunt for a 3080)? Fuck it, my 9-year-old PSU is fine, and if it manages to die and take the GPU with it I'm only out a few hundred bucks. Frankly less, because I'll flog that shit back onto eBay as parts-only and get at least $100 out of it.

Thank you for helping me justify my spending habits. :pray:

YerDa Zabam posted:

efb. Also, generally speaking they don't fry all your parts (generally, mind), but they're more likely to cause weird errors, BSODs, and the like.

If I ever have a hard-to-diagnose issue I always try swapping in a different PSU. Once I've checked the RAM first, that is, since that's easier to do.
Hope I've not tempted the gods and my new GPU goes boom. With my recent luck nothing would surprise me.

It's those "really odd problems" that always led me to be more proactive with swapping out my PSU. Troubleshooting a partially failing PSU just sucks.

Sininu
Jan 8, 2014

I'm pretty optimistic that my fanless 700W 80+ Titanium PSU will last me over 10 years, perhaps even 15+.
Unless ATX standards change meaningfully and force me to swap it.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
I probably wouldn't replace a 750W if I got a 4090, but I also probably wouldn't buy one that size if I had a 4090 and needed a PSU; I'd probably look closer to the 1000W range.

So adjust yourself accordingly.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Bjork Bjowlob posted:

I looked into this a bit a year or so ago. To split one NVIDIA card into multiple virtual GPUs you're looking for their vGPU product, which comes in a few different flavours. The list of supported GPUs is here: https://www.nvidia.com/en-us/data-center/graphics-cards-for-virtualization/

Unsurprisingly there are no GeForce cards in the list. However, this restriction is implemented in software for GPUs up to Turing. If you have a 1000 or 2000 series GPU (or Quadro from the same architecture generations) you can try patching the NVIDIA driver to remove this restriction. See here: https://github.com/DualCoder/vgpu_unlock/tree/f432ffc8b7ed245df8858e9b38000d3b8f0352f4 and https://github.com/VGPU-Community-Drivers/vGPU-Unlock-patcher .

Note that this approach likely contravenes the vGPU licensing terms so take that into account.

Quadro is explicitly allowed; they're on the list of cards Nvidia supports vGPU on?

So it's theoretically possible on a 1080 Ti or 2080 Ti, but a 2080 Ti is roughly a 3060 Ti, so it'd make more sense to just buy some of those and do VFIO passthrough. Looks like it is allowed on the A5000, but that's only a 24GB card, so it's limited to a 3-way split before going under the 8GB minimum. The A6000 could do a 4-way split at 12GB each, but lol, it's also 5 or 6 grand used, which is, by my calculations, more expensive than 4x 4070 Ti. Even the older Quadro RTX 8000 is going for 3 grand.

Appreciate the list, it gives me something solid to go on, but it looks like the only point of vGPU would be to say I did it; it's way too expensive, even used, for a homelab streaming system.
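
(Spelling out that math with the rough prices mentioned; the 4070 Ti figure is an approximate street price at the time, not a quote:)

code:

# Cost per 12GB vGPU slice on a used A6000 vs just buying whole cards.
a6000_used = 5500           # midpoint of "5 or 6 grand"
slices = 48 // 12           # 4-way split at 12GB each
print(a6000_used / slices)  # 1375.0 per slice, vs ~$800 for a whole 4070 Ti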

quote:

Re Intel, all information I've seen states that they have no plans to support SR-IOV on their consumer GPUs, even though it is nominally supported on their integrated GPUs. SR-IOV is the cleanest way to provide virtualised devices on top of a physical device; you'll find it on many network devices, including those from Intel.

No, I was talking about their Flex 170 server GPU, which does support SR-IOV.

edit: jesus christ nvidia vgpu licensing is a clusterfuck
https://docs.nvidia.com/grid/16.0/grid-licensing-user-guide/index.html

Harik fucked around with this message at 19:22 on Mar 19, 2024

Bjork Bjowlob
Feb 23, 2006
yes that's very hot and i'll deal with it in the morning


Harik posted:

Quadro is explicitly allowed; they're on the list of cards Nvidia supports vGPU on?

A few Quadros are explicitly supported, most aren't. As I understand it, the vGPU unlocker will allow for unsupported cards (all GeForce, most Quadro) from the same generation to be used.

Harik posted:

So it's theoretically possible on a 1080 Ti or 2080 Ti, but a 2080 Ti is roughly a 3060 Ti, so it'd make more sense to just buy some of those and do VFIO passthrough. Looks like it is allowed on the A5000, but that's only a 24GB card, so it's limited to a 3-way split before going under the 8GB minimum. The A6000 could do a 4-way split at 12GB each, but lol, it's also 5 or 6 grand used, which is, by my calculations, more expensive than 4x 4070 Ti. Even the older Quadro RTX 8000 is going for 3 grand.

Appreciate the list, it gives me something solid to go on, but it looks like the only point of vGPU would be to say I did it; it's way too expensive, even used, for a homelab streaming system.

Yes, I reached the same conclusion after looking into it. It's much easier and likely cheaper to just get multiple separate GPUs and do 1-1 passthrough, especially if you're working with ATX format machines with plenty of PCIe lanes/slots and internal case space.

If you're looking to do multiple passthrough to client VMs for the purposes of doing GPU-based work (e.g. machine learning, data processing, or running games) you could have a look at GPU over IP: https://github.com/Juice-Labs/Juice-Labs . I've tested this and it works fairly well for those use cases. Note that it requires application-specific setup on the client (analogous to manually specifying which GPU to use for which application in the client) and as far as I can tell it won't work for the desktop session itself. I also had trouble using it for GPU accelerated encoding in Jellyfin via ffmpeg. Your mileage may vary!

Harik posted:

No, I was talking about their Flex 170 server GPU, which does support SR-IOV.

Ah, I missed that you were referring to the new dedicated device - I spent some time researching (well, hoping!) whether Intel would support SRIOV on their consumer GPUs as it would be appealing overall from a cost perspective, so my mind jumped straight to that.


Cygni
Nov 12, 2005

raring to post

Harik posted:

I know Intel's going hard on it with their new card specifically for this, but... non-x86 processors from Intel have a life expectancy nearly as bad as non-AdWords products from Google.

Harik posted:

No, I was talking about their Flex 170 server GPU, which does support SR-IOV.

I think the evidence shows that Intel is committed for the long haul, and rumors to the contrary mostly seemed to come from brand-warriors who wanted/assumed Intel would fail and bail. But I think the bigger problem with the Flex 170 stuff Wendell has been playing with is that you can't buy it yourself, as far as I can tell. Gotta call an SI. It is also ultimately still limited by A770-ish GPU performance, so for multi-user game streaming or whatever, you'll probably want multiple Flex 170s. Big girl/boy prices, I'm sure. It is also all off-label with Proxmox and whatnot.

That said, Wendell does mention that SR-IOV is functional with some A770s, but he didn't give too many details. I'm no expert in this world, but it seems like Nvidia is still the only real fully-baked option, and it's pricey. None of it seems to have homelab-friendly prices (until it's 2nd hand in a few years!).
