Lord Stimperor
Jun 13, 2018

I'm a lovable meme.

What's the problem with the current ATX standard?

Dr. Video Games 0031
Jul 17, 2004

It was not designed with 4-slot monster GPUs in mind, for one. Modern high-end GPUs cover half of your other PCIe/m.2 slots, and having heavy, gigantic expansion cards sticking out horizontally from the board has been leading to cracked PCBs and poo poo. Airflow management is also not ideal. Cooling these 400W GPUs and 200W CPUs works with newer cases with good ventilation and lots of case fans, but it's sort of a brute force approach for a problem that only exists because we're doing things the creators of the ATX spec never envisioned.

repiv
Aug 13, 2009

if ATX were designed today they probably wouldn't use 12V either, raising the voltage would reduce the amps required to feed a power hungry card and make the cabling/connector situation much more straightforward
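
A quick back-of-the-envelope on that point (a sketch, not spec text; the 600W card is just an example figure): current scales as P/V, so every step up in rail voltage cuts the amps the cable has to carry proportionally.

    # Illustration of the voltage-vs-amps tradeoff: I = P / V, so a higher rail
    # voltage means fewer amps (and thinner or fewer wires) for the same card
    # power. The 600 W load and the candidate voltages are illustrative only.
    def amps_needed(watts: float, volts: float) -> float:
        return watts / volts

    for volts in (12, 24, 48):
        print(f"{volts} V rail -> {amps_needed(600, volts):.1f} A for a 600 W card")

    # 12 V rail -> 50.0 A for a 600 W card
    # 24 V rail -> 25.0 A for a 600 W card
    # 48 V rail -> 12.5 A for a 600 W card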

FuturePastNow
May 19, 2014


My preference would have been for consumer GPUs to start using the EPS12V connector like server GPUs do, preferably with it on the end of the card like those, instead of on the side.

acksplode
May 17, 2004

Has anyone ever tried to mock up a motherboard spec that's better suited for modern hardware? I have no idea what you'd change so that would be a fun thought experiment to read.

Cygni
Nov 12, 2005

raring to post

Lord Stimperor posted:

What's the problem with the current ATX standard?

Power efficiency:
The OG ATX standard still in use in the AIB space has the PSU supplying multiple voltages (some of which are nearly deprecated at this point), which modern PSUs mostly do by converting off the primary 12v rail. Moving that conversion onto the motherboard and dropping the minor rails from the PSU, like the 12VO standard does, would increase power efficiency. Or move to an even better voltage than 12v if we are starting from scratch!

Thermals: Vertical GPU mounting and horizontal CPU mounting (I'm going to use this terminology, because ATX was designed for a desktop pizza box lol), designed at a time when users ran multiple cards with the incredible power draw of like 10W each, make for chaotic and bad airflow patterns when the GPU is the thing drawing 400W. The majority of GPU designs blast that hot air down at the board and up towards the case side panel, while CPU coolers and cases generally suck air in the front and out the back.

Reliability/fragility:
Those vertical GPU mounts are a liability when GPUs are 5lb lumps of aluminum, and they love to bend/break slots, especially when moved around or shipped. It has gotten to the point that "sag brackets" come in AIB boxes now, and OEMs use screwed-in brackets for shipping.

Ease of use: The 20-pin ATX power standard was quickly insufficient, so they added another separate +4. Then they integrated that into the new 24-pin, which was actually an annoying 20+4 to keep compatibility. Then they added another 4-pin connector of 12v. And then another 4 pins of 12v that was actually an annoying 4+4 to keep compatibility. Now some regular desktop boards even have 2x 8-pin of 12v, plus the 24-pin.

And of course GPUs took over the majority of the power use in a system, so they also had to add a 6-pin 12v plug just for GPUs. Then an 8-pin that was really an annoying 6+2. Then a 6 and an 8. Then two 8s. Then three 8s. Then the new 12VHPWR. Hope you love wires going everywhere and annoying plugs, baby! It's worth noting that Apple addressed this with a PCIe extension pretty effectively.
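
For reference, that escalation maps to these board power ceilings (slot and connector limits per the PCIe CEM spec; a rough sketch, using only the official per-connector ratings):

    # Max rated board power for the GPU plug combos mentioned above, using the
    # PCIe CEM limits: 75 W from the slot, 75 W per 6-pin, 150 W per 8-pin,
    # and up to 600 W for 12VHPWR.
    SLOT, SIX_PIN, EIGHT_PIN, HPWR = 75, 75, 150, 600

    combos = {
        "6-pin": SLOT + SIX_PIN,
        "8-pin": SLOT + EIGHT_PIN,
        "6-pin + 8-pin": SLOT + SIX_PIN + EIGHT_PIN,
        "2x 8-pin": SLOT + 2 * EIGHT_PIN,
        "3x 8-pin": SLOT + 3 * EIGHT_PIN,
        "12VHPWR": SLOT + HPWR,
    }
    for name, watts in combos.items():
        print(f"{name}: {watts} W")   # 150, 225, 300, 375, 525, 675 W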

Also they never standardized front panel connector placement, so you can still find plenty of boards that have non-standard pins. Those companies should be put in Company Jail.

Physical size/cost: There is no real "E-ATX" standard for oversized boards, and no real standard for smaller boards other than MicroATX (ITX is not an official standard in ATX, as it was removed when MicroATX was added). Also for PSUs, SFX, SFX-L, FlexATX, and ATX-L are all not real standards either. It's a crapshoot of compatibility, which has led case builders to make the cases physically bigger or reconfigurable to support this insane variety of hardware options. This adds cost and/or size to every unit, whether you want it or not.

repiv
Aug 13, 2009

acksplode posted:

Has anyone ever tried to mock up a motherboard spec that's better suited for modern hardware? I have no idea what you'd change so that would be a fun thought experiment to read.

look to the cheesegrater mac pro (all card power delivered via the slot, all components cooled by air ducted from system case fans) and nvidias SXM server format (GPUs installed flat against the motherboard)

PC LOAD LETTER
May 23, 2005
WTF?!

repiv posted:

if ATX were designed today they probably wouldn't use 12V either, raising the voltage would reduce the amps required to feed a power hungry card and make the cabling/connector situation much more straightforward

Yeah IMO they could go to 24v, or even 36v, and it'd still be plenty safe for DIY'ers...so long as the connectors aren't poo poo of course. But then there are plenty of already existing connectors that are cheap and do the job there so I don't see it as a big deal.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

repiv posted:

look to the cheesegrater mac pro (all card power delivered via the slot, all components cooled by air ducted from system case fans) and nvidias SXM server format (GPUs installed flat against the motherboard)

https://engineering.fb.com/2019/03/14/data-center-engineering/accelerator-modules/

The Open Accelerator Module format is an ostensibly open standard for GPUs that could easily be adopted into normal desktops. The big AMD GPUs are available in OAM format, and it's way better than a PCIe card for all the reasons people have been bringing up.

You know how there used to be all sorts of different CPU coolers and then they're all towers now because towers are just the best way to cool silicon in a desktop form factor? Same for GPUs. Get a drat tower on there.

Edit: if you want to see a better-than-ATX option for add-in components that aren't super hot, look at what the server vendors are delivering that's smaller, cheaper, and more efficient than PCIe cards: https://www.supermicro.com/white_paper/white_paper_SIOM.pdf

Twerk from Home fucked around with this message at 23:12 on Dec 22, 2023

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Lord Stimperor posted:

What's the problem with the current ATX standard?

I think the main reason NVidia switched was to get more room on their PCBs. Going from three 8-pin PCIe plugs to a single smaller plug gave them more room on their Founders Edition boards. But there are so many better ways to get high-amperage 12 volts than what they went with.

repiv
Aug 13, 2009

PC LOAD LETTER posted:

Yeah IMO they could go to 24v, or even 36v, and it'd still be plenty safe for DIY'ers...so long as the connectors aren't poo poo of course.

USB-PD is pushing 48v to charge beefy laptops through 5A cables now meanwhile big graphics cards need a >40A input lol

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION

Zero VGS posted:

I think the main reason NVidia switched was to get more room on their PCBs. Going from three 8-pin PCIe plugs to a single smaller plug gave them more room on their Founders Edition boards. But there are so many better ways to get high-amperage 12 volts than what they went with.

I don’t think it was inherently because of that, because the FE PCBs are ridiculously small and the prior FE PCBs weren’t exactly large to begin with. But as they ramp power requirements over time, yeah, they don’t want the board partners blaming them for the number of connectors on the card.

PC LOAD LETTER
May 23, 2005
WTF?!

repiv posted:

USB-PD is pushing 48v to charge beefy laptops through 5A cables now meanwhile big graphics cards need a >40A input lol

Yeah the power they can get out of those USB cables is really impressive, but there are also tons of janky, cheap-as-possible ones on Amazon that are giving people trouble too.

So maybe that is a bit much to do on the cheap. Which you KNOW all these PC OEMs are going to be focusing on.

ijyt
Apr 10, 2012

Cygni posted:

im firmly on team "junk the entire ATX spec and start over", from board+card layouts, to the thermal design, power connectors, and front panel pins. i know everyone is afraid of another BTX failure, but it's laughably overdue.

at a minimum, GPUs should be on the same thermal plane as CPUs instead of 90 degrees vertical, and power should be delivered without additional 12v wires running from the power supply. what am i, a caveman? wiring a pc? in my cave? that also has mains power?

preach

Zoya
Jun 12, 2023

echoes of a distant past,
bodies die but voices last.
once were held within a cell,
your mind is where these voices dwell.

truly incredible headline lmfao

Craptacular!
Jul 9, 2001

Fuck the DH
Whoever decided to run so much power through such a small connector should not be in charge of the next one.

Anime Schoolgirl
Nov 28, 2002

12VHPWR theoretically has enough contact surface area for 600W; it's just that when these things are made at scale, manufacturing tolerances tend to be a shitpile, which is probably why the original PCIe power standard was insanely conservative.
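
Roughly what that works out to per pin (the ~9.5A terminal rating is the commonly quoted figure for the connector's terminals, so treat it as approximate): the margin exists, but it's thin enough that an out-of-tolerance contact eats most of it.

    # Per-pin current implied by a 600 W load on 12VHPWR: 6 of its 12 power
    # pins carry +12 V. The ~9.5 A terminal rating is a commonly quoted
    # figure, not an exact spec value, so treat the headroom as approximate.
    total_amps = 600 / 12          # 50 A total
    per_pin = total_amps / 6       # ~8.33 A per 12 V pin
    print(f"{total_amps:.0f} A total, {per_pin:.2f} A per 12 V pin")
    print(f"Headroom vs a ~9.5 A terminal rating: {9.5 / per_pin:.2f}x")

    # 50 A total, 8.33 A per 12 V pin
    # Headroom vs a ~9.5 A terminal rating: 1.14x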

Shipon
Nov 7, 2005
The latching mechanism should have been made more robust; the size doesn't seem to be an issue so much as the poor latching performance of the connector. It's fairly hard to tell if it's fully latched, so you really need to inspect it to make sure it is.

CableMod making a "solution" that actually worked against ensuring it was properly latched and seated is so funny

BurritoJustice
Oct 9, 2012

12VHPWR was as much a server/enterprise push as a consumer one, to allow PCIe accelerators to push power limits nearer to their SXM/OAM equivalents. It's harder to stuff in loads of cables when you have 8+ accelerators in a chassis, and the sense pins are meant to allow better monitoring.

As of now, the only non-Nvidia GPUs that use the connector are the Intel Data Center GPU Max series. AMD was originally planning to use it on the second wave of RDNA3 GPUs, the RX 7700 XT/7800 XT, but changed course at the last minute due to the issues NVIDIA was having. I imagine they'll use it next generation, with the revisions.

Lord Stimperor
Jun 13, 2018

I'm a lovable meme.

Thanks for the explanations regarding ATX. Some of these quirks / issues were so normalized in my head that I couldn't even imagine these things being any different.


That being said, I do miss the pizzabox desktop form factor. I know it's mainly nostalgia; those desktop cases were pretty bad, and the setups were ergonomically not ideal.

Indiana_Krom
Jun 18, 2007
Net Slacker

*sigh*

I was using one of the 180-degree versions of these in the build I completed just last weekend; I pulled it out and found no signs of problems, but I did make sure it was fully seated and locked in when I installed it. So instead I am now making the 180-degree bend in just the cable. I like how they say "don't bend the cable closer than 30cm from the plug" when there is no space or proper angle in the case to do so, because of the stupid plug placement designed only to deter/annoy data center use. At least mine makes a nice audible and tactile click when it locks in; something about the EKWB water block I put on makes the plug much easier to seat and much more obvious when it has locked in.

This connector is junk, but if video cards and CPUs are going to be pulling 300W or more, then perhaps it is time to start thinking about stepping the supply side up to 48V (or higher) so we aren't pumping 30-50A around inside the case. At 48V with the same 12.5A restriction, the old 8-pin connectors would be rated for 600W. But because the connectors themselves are actually rated for 27 amps and not the 12.5A the PCIe spec set, the actual limit of an 8-pin plug would be 1296 watts, a safety margin greater than 2.
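
The arithmetic there checks out (taking the 12.5A PCIe-spec limit and the 27A connector rating at face value from the post):

    # Checking the figures in the post above; the 12.5 A spec limit and the
    # 27 A connector rating are the poster's numbers, taken at face value.
    spec_amps = 150 / 12                         # 8-pin: 150 W at 12 V -> 12.5 A
    print(f"PCIe-spec current through an 8-pin: {spec_amps:.1f} A")
    print(f"Same 12.5 A at 48 V: {48 * spec_amps:.0f} W")           # 600 W
    print(f"Connector-rated 27 A at 48 V: {48 * 27} W")             # 1296 W
    print(f"Safety margin over a 600 W load: {48 * 27 / 600:.2f}x") # 2.16x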

orcane
Jun 13, 2012

Fun Shoe
Some people argue the adapter is fine, but since the connector is a general weak point they can't control, they're doing this for liability/PR purposes (even if the card side of the connector is responsible for melting cables, the adapter will get the bad press).

repiv
Aug 13, 2009

this wouldn't have happened if we'd all just moved to stadia

we should have listened

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

repiv posted:

this wouldn't have happened if we'd all just moved to stadia geforce now

we should have listened

Truga
May 4, 2014
Lipstick Apathy

repiv posted:

this wouldn't have happened if we'd all just moved to stadia

we should have listened

inb4 geforce now datacenter burns down

seriously though, where are those motherboard power connectors asus showed off the other day? why aren't they a standard yet?

repiv
Aug 13, 2009

Truga posted:

seriously though, where are those motherboard power connectors asus showed off the other day? why aren't they a standard yet?

probably because introducing a graphics card that requires you to also get a new special snowflake motherboard would go down like a ton of bricks

ATX is the mess it is because we can only ever increment each individual component in isolation, not as a cohesive whole, as you would have to do to change the GPU-motherboard interface, or raise the supply voltages, or whatever

Truga
May 4, 2014
Lipstick Apathy
weird how we had a whole slew of AGP x(number) slots over a few years when it was required to sell a new thing because old-skool pci was too slow, but we're stuck on the same drat slot for like 15 years now when it's the consumer's problem to figure out how to make it work lmao

e: to clarify i'm not saying pcie is bad, it's very good to have effectively 15 years of reverse compatibility

but if breaking from the standard was required to sell new expensive gpus it'd have happened 14 years ago, whereas if it's a problem consumers/oems have to deal with, it's not a problem worth considering lol

Truga fucked around with this message at 15:41 on Dec 23, 2023

kliras
Mar 27, 2021
a lot of companies like asus are already trying to do some vendor lock-in; it's gonna take something for everyone to jump on a better open standard instead of a lot of dumb patented stuff like q-release buttons for pci devices

Truga
May 4, 2014
Lipstick Apathy
i'm already having problems with poo poo at work because oems already realized atx doesn't work for them like 10 years ago so they ship proprietary connectors on their PSUs and plug a lot of stuff into the motherboard

on the server side, this is easily fixed by contacting the vendor and asking for the correct cables, but on the desktop side the reply is generally just that bender meme, if they even bother

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
When we went through the previous slot transitions, computers were thoroughly obsolete within a couple years. It would have been silly to put a new GPU in a 3-4 year old PC that couldn't feed it. Since the dawn of the PCIe era, we have consistently shifted towards CPUs lasting several times longer than GPUs.

Unfortunately, the easy time to sell everyone on "your next build will have to be clean from scratch" was 10-plus years ago. Now, with thin margins, it's not like the AIBs want to deal with having two designs per product for a generation or two.

Cyrano4747
Sep 25, 2006

Yes, I know I'm old, get off my fucking lawn so I can yell at these clouds.

K8.0 posted:

When we went through the previous slot transitions, computers were thoroughly obsolete within a couple years. It would have been silly to put a new GPU in a 3-4 year old PC that couldn't feed it. Since the dawn of the PCIe era, we have consistently shifted towards CPUs lasting several times longer than GPUs.

Unfortunately, the easy time to sell everyone on "your next build will have to be clean from scratch" was 10-plus years ago. Now, with thin margins, it's not like the AIBs want to deal with having two designs per product for a generation or two.

Eh, you could probably do it at the next big socket switch over. We're seeing it right now with DDR5.

This doesn't mean you have to do the whole stack at once. I suspect most people shopping 3050s and equivalents aren't dropping it in the latest, fastest CPU/Mobo/RAM combo. Meanwhile the pool of people who are buying 4090s most likely aren't looking to drop it into their AM4 socket machine from 2018.

Plus, if you're going to the effort of coming up with a new GPU slot spec, there's also no reason you can't just put one in alongside ye olde PCIe, just like AGP did back in the stone age. I mean, it would be more expensive, and you're not going to do it in a micro board form factor, but we're already talking boards for people who want the latest and best.

Some kind of nu-AGP shipping for whatever 2025's hot new CPU socket is, stick it next to the PCIe slots, and ship the 6070/80/90 on it. Make PCIe 6060s. Trickle the fancy new slot down the stack as poo poo ages in and this year's bleeding edge enthusiast bait becomes next year's mainstream feature.

edit: I think enthusiast circles vastly, vastly over-estimate how much individual in-place part upgradability matters for the typical builder, much less the typical user. Your typical person building a gaming PC might be lucky to upgrade one or two parts across the lifetime of the device. There are a lot of people out there who just build something pretty decent and then run it into the ground for five years before building anew and (maybe) dragging over a couple of storage drives.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
Enthusiasts are only one part of the market. You need to consider all the products that conform to ATX and what it means for producing and supplying all those parts that go into multiple products.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I wonder if there has been any talk of moving towards a higher voltage like +48V to get away from having to push high current over 12V. This would require changes to ATX PSUs, so it would be pretty major.

Rejigging the PCIe CEM spec to add optional higher capacity power connectors like the Mac Pro would be really nice too.

Bjork Bjowlob
Feb 23, 2006
yes that's very hot and i'll deal with it in the morning


I'm not an EE at all but is a voltage change something that devices can negotiate? E.g. start at 12v, then communicate to agree to raise the voltage?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Bjork Bjowlob posted:

I'm not an EE at all but is a voltage change something that devices can negotiate? E.g. start at 12v, then communicate to agree to raise the voltage?

Yah that is something that USB-C does starting at +5V and can negotiate up to 48V iirc

Bjork Bjowlob
Feb 23, 2006
yes that's very hot and i'll deal with it in the morning


priznat posted:

Yah that is something that USB-C does starting at +5V and can negotiate up to 48V iirc

Yeah nice I was thinking about USB-C but wasn't sure if it was just current negotiation or current+voltage. If that's the case, and the current ATX connectors are rated appropriately, then could a voltage change in the ATX spec be introduced in a way that devices/PSUs that aren't aware operate normally, but devices/PSUs that are could negotiate a higher voltage across the same connection?

E.g. GPUs could keep 2x8 connectors, and if they're connected to an aware PSU they negotiate such that the power limit is 600W, otherwise fall back to 450W
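
A purely hypothetical sketch of what that handshake could look like (none of this is in any ATX or PCIe spec; the class names, wattages, and sideband query are made up for illustration): the card asks the PSU what it supports, and anything that doesn't answer gets treated as a legacy 12V supply.

    # Hypothetical negotiation sketch: an "aware" PSU advertises a higher
    # voltage/power mode over some sideband, a legacy PSU never answers, and
    # the card picks its power limit accordingly. Nothing here comes from a
    # real spec; names and wattages are illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PsuCapabilities:
        max_volts: float
        max_watts: float

    class LegacyPsu:
        def query_capabilities(self) -> Optional[PsuCapabilities]:
            return None  # a non-aware PSU simply never responds

    class AwarePsu:
        def query_capabilities(self) -> Optional[PsuCapabilities]:
            return PsuCapabilities(max_volts=48.0, max_watts=600.0)

    def negotiate_power_limit(psu) -> float:
        """Return the board power limit the card should run at."""
        caps = psu.query_capabilities()
        if caps is None or caps.max_volts <= 12.0:
            return 450.0  # fall back to what the legacy connectors allow
        return min(caps.max_watts, 600.0)

    print(negotiate_power_limit(LegacyPsu()))  # 450.0
    print(negotiate_power_limit(AwarePsu()))   # 600.0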

repiv
Aug 13, 2009

it could be done but the time to do that was when they were defining the 12VHPWR connector and they didn't so now we have to wait another 20 years sorry

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION

I am a very big fan of the 2019 Mac Pro design and how Apple handled additional power demand for the MPX Modules, which is why it's all the more unfortunate that they basically ceased active development on MPX Modules after 2022 and with the new 2023 Apple Silicon-based Mac Pro. I can't blame them, though, since they probably didn't sell that many MPX Modules to begin with.

BurritoJustice
Oct 9, 2012

Truga posted:

inb4 geforce now datacenter burns down

seriously though, where are those motherboard power connectors asus showed off the other day? why aren't they a standard yet?

If the point is to avoid using 12VHPWR, the motherboard connector would change nothing as it's just a passthrough for a 12VHPWR connector on the rear of the board.

Salt Fish
Sep 11, 2003

Cybernetic Crumb
Any ATX replacement should make it so you mount the GPU to the case and plug power into it, and then the GPU has the CPU socket, DIMM slots, chipset etc that plugs into it.
