|
What's the problem with the current ATX standard?
|
# ? Dec 22, 2023 22:31 |
|
It was not designed with 4-slot monster GPUs in mind, for one. Modern high-end GPUs cover half of your other PCIe/M.2 slots, and having heavy, gigantic expansion cards sticking out horizontally from the board has led to cracked PCBs and poo poo. Airflow management is also not ideal: cooling these 400W GPUs and 200W CPUs works with newer cases with good ventilation and lots of case fans, but it's a brute-force approach to a problem that only exists because we're doing things the creators of the ATX spec never envisioned.
|
# ? Dec 22, 2023 22:38 |
|
if ATX were designed today it probably wouldn't use 12V either; raising the voltage would reduce the amps required to feed a power-hungry card and make the cabling/connector situation much more straightforward
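To put rough numbers on the amps-vs-voltage point (an illustration, not from the post): for a fixed power draw, current scales as 1/V, and current is what dictates wire gauge and pin count.

```python
# Why a higher supply voltage simplifies cabling: for a fixed power draw,
# current (and thus required copper and connector pins) scales as 1/V.

def amps(power_w: float, volts: float) -> float:
    """Current in amps needed to deliver power_w at volts (I = P / V)."""
    return power_w / volts

gpu_power = 450  # watts, e.g. a modern high-end GPU
for v in (12, 24, 48):
    print(f"{gpu_power} W at {v:>2} V -> {amps(gpu_power, v):.1f} A")
```

At 12V that 450W card needs 37.5A; at 48V it needs under 10A, which a much smaller connector can carry safely.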
|
# ? Dec 22, 2023 22:48 |
|
My preference would have been for consumer GPUs to start using the EPS12V connector like server GPUs, preferably with it on the end of the card like those instead of the side.
|
# ? Dec 22, 2023 22:53 |
|
Has anyone ever tried to mock up a motherboard spec that's better suited for modern hardware? I have no idea what you'd change so that would be a fun thought experiment to read.
|
# ? Dec 22, 2023 22:59 |
|
Lord Stimperor posted:
> What's the problem with the current ATX standard?

Power efficiency: The OG ATX standard still in use in the AIB space has the PSU supplying multiple voltages (some of which are nearly deprecated at this point), which it does by converting off the primary 12V rail. Dropping the extra voltages like in the 12VO standard would increase power efficiency. Or move to an even better voltage if we are starting from scratch!

Thermals: Vertical GPU mounting and horizontal CPU mounting (I'm going to use this terminology, because ATX was designed for a desktop pizza box lol), designed at a time when users ran multiple cards with the incredible power draw of like 10W each, makes for chaotic and bad airflow patterns when the GPU is the thing drawing 400W. The majority of GPU designs blast that hot air down at the board and up towards the case side panel, while CPU coolers and cases generally suck air in the front and out the back.

Reliability/fragility: Those vertical GPU mounts are a liability when GPUs are 5lb lumps of aluminum, and they love to bend/break slots, especially when moved around or shipped. It has gotten to the point that "sag brackets" come in AIB boxes now, and OEMs use screwed-in brackets for shipping.

Ease of use: The 20-pin ATX power standard was quickly insufficient, so they added another separate +4. Then they integrated that into the new 24-pin, which was actually an annoying 20+4 to keep compatibility. Then they added another 4-pin connector of 12V. And then another 4 pins of 12V that was actually an annoying 4+4 to keep compatibility. Now some regular desktop boards even have 2x 8-pin of 12V, plus the 24-pin. And of course GPUs took over the majority of the power use in a system, so they also had to add a 6-pin 12V plug just for GPUs. Then an 8-pin that was really an annoying 6+2. Then a 6 and an 8. Then two 8s. Then three 8s. Then the new 12VHPWR. Hope you love wires going everywhere and annoying plugs, baby! It's worth noting that Apple addressed this with a PCIe extension pretty effectively. Also, they never standardized front panel connector placement, so you can still find plenty of boards that have non-standard pins. Those companies should be put in Company Jail.

Physical size/cost: There is no real "E-ATX" standard for oversized boards, and no real standard for smaller boards other than MicroATX (ITX is not an official part of the ATX standard; it was removed when MicroATX was added). Also for PSUs: SFX, SFX-L, FlexATX, and ATX-L are all not real standards either. It's a crapshoot of compatibility, which has led case builders to make cases physically bigger or reconfigurable to support this insane variety of hardware options. That adds cost and/or size to every unit, whether you want it or not.
|
# ? Dec 22, 2023 22:59 |
|
acksplode posted:
> Has anyone ever tried to mock up a motherboard spec that's better suited for modern hardware? I have no idea what you'd change so that would be a fun thought experiment to read.

look to the cheesegrater mac pro (all card power delivered via the slot, all components cooled by air ducted from system case fans) and nvidia's SXM server format (GPUs installed flat against the motherboard)
|
# ? Dec 22, 2023 23:04 |
|
repiv posted:
> if ATX were designed today they probably wouldn't use 12V either, raising the voltage would reduce the amps required to feed a power hungry card and make the cabling/connector situation much more straightforward

Yeah IMO they could go to 24V, or even 36V, and it'd still be plenty safe for DIYers... so long as the connectors aren't poo poo, of course. But there are plenty of existing connectors that are cheap and do the job there, so I don't see it as a big deal.
|
# ? Dec 22, 2023 23:06 |
|
repiv posted:
> look to the cheesegrater mac pro (all card power delivered via the slot, all components cooled by air ducted from system case fans) and nvidias SXM server format (GPUs installed flat against the motherboard)

https://engineering.fb.com/2019/03/14/data-center-engineering/accelerator-modules/

The Open Accelerator Module format is an ostensibly open standard for GPUs that could easily be adopted into normal desktops. The big AMD GPUs are available in OAM format, and it's way better than a PCIe card for all the reasons people have been bringing up.

You know how there used to be all sorts of different CPU coolers, and then they all became towers, because towers are just the best way to cool silicon in a desktop form factor? Same for GPUs. Get a drat tower on there.

Edit: if you want to see a better-than-ATX option for add-in components that aren't super hot, look at what any of the server vendors are delivering smaller, cheaper, and more efficient than PCIe cards: https://www.supermicro.com/white_paper/white_paper_SIOM.pdf

Twerk from Home fucked around with this message at 23:12 on Dec 22, 2023 |
# ? Dec 22, 2023 23:10 |
|
Lord Stimperor posted:
> What's the problem with the current ATX standard?

I think the main reason Nvidia switched was to get more room on their PCBs. Going from 3 ATX plugs to a single smaller plug gave them more room on their Founders Edition boards. But there are so many better ways to get high-amp 12 volts than what they went with.
|
# ? Dec 22, 2023 23:16 |
|
PC LOAD LETTER posted:
> Yeah IMO they could go to 24v, or even 36v, and it'd still be plenty safe for DIY'ers...so long as the connectors aren't poo poo of course.

USB-PD is pushing 48V to charge beefy laptops through 5A cables now, meanwhile big graphics cards need a >40A input lol
|
# ? Dec 22, 2023 23:16 |
|
Zero VGS posted:
> I think the main reason NVidia switched was to get more room on their PCBs. Going from 3 ATX plugs to a single smaller plug gave them more room on their Founders Edition boards. But there are so many better ways to get a high-amp 12 volts than what they went with.

I don't think it was inherently because of that, because the FE PCBs are ridiculously small and the prior FE PCBs weren't exactly large to begin with. But as they ramp power requirements over time, yeah, they don't want the board partners blaming them for the number of connectors on the card.
|
# ? Dec 22, 2023 23:27 |
|
repiv posted:
> USB-PD is pushing 48v to charge beefy laptops through 5A cables now meanwhile big graphics cards need a >40A input lol

Yeah, the power they can get out of those USB cables is really impressive, but there are also tons of janky, cheap-as-possible ones on Amazon that are giving people trouble. So maybe that is a bit much to do on the cheap, which you KNOW all these PC OEMs are going to be focusing on.
|
# ? Dec 22, 2023 23:45 |
|
Cygni posted:
> im firmly on team "junk the entire ATX spec and start over", from board+card layouts, to the thermal design, power connectors, and front panel pins. i know everyone is afraid of another BTX failure, but its laughably overdue.

preach
|
# ? Dec 23, 2023 01:38 |
|
repiv posted:
> aw poo poo here we go again

truly incredible headline lmfao
|
# ? Dec 23, 2023 05:09 |
|
Whoever decided to run so much power through such a small connector should not be in charge of the next one.
|
# ? Dec 23, 2023 07:35 |
|
12VHPWR theoretically has enough contact surface area for 600W; it's just that when these things are made at scale, manufacturing tolerances tend to be a shitpile, which is probably why the original PCIe standard was insanely conservative.
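As a back-of-envelope sanity check on that 600W figure (assuming the connector's six 12V pins share current evenly and a ~9.5A per-pin terminal rating; these are ballpark assumptions, not quoted from the spec):

```python
# Back-of-envelope check on 12VHPWR headroom (assumed figures, not from the spec):
# six 12 V pins share the load; per-pin terminal rating assumed ~9.5 A.

PINS = 6
VOLTS = 12.0
PER_PIN_RATING_A = 9.5  # assumed rating

def per_pin_amps(power_w: float) -> float:
    """Current through each 12 V pin, assuming an even split."""
    return power_w / VOLTS / PINS

load = per_pin_amps(600)          # ~8.33 A per pin at the full 600 W
margin = PER_PIN_RATING_A / load  # only ~1.14x headroom per pin
print(f"{load:.2f} A per pin, {margin:.2f}x margin")
```

With barely over 1.1x headroom per pin, one high-resistance contact pushes its neighbors past their rating, which is consistent with the melting failures being blamed on partial seating rather than the total surface area.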
|
# ? Dec 23, 2023 08:00 |
|
The latching mechanism should have been made more robust; the size doesn't seem to be the issue so much as the poor latching performance of the connector. It's fairly hard to tell whether it's fully latched; you really need to inspect it to make sure. CableMod making a "solution" that actually worked against ensuring it was properly latched and seated is so funny
|
# ? Dec 23, 2023 09:18 |
|
12VHPWR was as much a server/enterprise push as a consumer one, meant to let PCIe accelerators push power limits nearer to their SXM/OAM equivalents. It's harder to stuff in loads of cables when you have 8+ accelerators in a chassis, and the sense pins are meant to allow better monitoring. As of now, the only non-Nvidia GPUs that use the connector are the Intel Data Center GPU Max cards. AMD was originally planning to use it on the second-release RDNA3 GPUs, the RX 7700/7800 XT, but changed course at the last minute due to the issues Nvidia was having. I imagine they'll use it next generation, with the revisions.
|
# ? Dec 23, 2023 09:39 |
|
Thanks for the explanations regarding ATX. Some of these quirks/issues were so normalized in my head that I couldn't even imagine them being any different. That being said, I do miss the pizzabox desktop form factor. I know it's mainly nostalgia; those desktop cases were pretty bad, and the setups were ergonomically not ideal.
|
# ? Dec 23, 2023 11:30 |
|
repiv posted:
> aw poo poo here we go again

*sigh* I was using one of the 180-degree versions of these in the build I completed just last weekend. I pulled it out and found no signs of problems, but I did make sure it was fully seated and locked in when I installed it. So instead I am now making the 180-degree bend in just the cable. I like how they say "don't bend the cable closer than 30cm from the plug" when there is no space or proper angle in the case to do so, because of the stupid plug placement designed only to deter/annoy data center use. At least mine makes a nice audible and tactile click when it locks in; something about the EKWB water block I put on makes the plug much easier to insert and much more obvious that it has locked in.

This connector is junk, but if video cards and CPUs are going to be pulling 300W or more, then perhaps it is time to start thinking about stepping the supply side up to 48V (or higher) so we aren't pumping 30-50A around inside the case. At 48V with the same 12.5A restriction, the old 8-pin connectors would be rated for 600W. And because the connectors themselves are actually rated for 27 amps, not the 12.5 the PCIe spec set, the actual limit of an 8-pin plug would be 1296 watts, a safety margin greater than 2.
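The arithmetic above checks out; here's a quick sketch re-deriving it (the 12.5A spec limit and 27A terminal rating are the poster's figures, taken at face value):

```python
# Re-deriving the 8-pin-at-48V numbers from the post.
SPEC_LIMIT_A = 12.5       # per-connector current the PCIe spec allows (per the post)
TERMINAL_RATING_A = 27.0  # what the terminals are actually rated for (per the post)

def watts(volts: float, amps: float) -> float:
    """Power delivered at a given voltage and current (P = V * I)."""
    return volts * amps

spec_48v = watts(48, SPEC_LIMIT_A)           # 600.0 W at the spec limit
physical_48v = watts(48, TERMINAL_RATING_A)  # 1296.0 W at the terminal rating
print(spec_48v, physical_48v, physical_48v / spec_48v)  # safety margin ~2.16
```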
|
# ? Dec 23, 2023 13:42 |
|
Some people argue the adapter is fine, but the connector is a general weak point they can't control, so they're doing this for liability/PR purposes (even if the card side of the connector is what's responsible for melting cables, the adapter will get the bad press).
|
# ? Dec 23, 2023 14:32 |
|
this wouldn't have happened if we'd all just moved to stadia. we should have listened
|
# ? Dec 23, 2023 14:36 |
|
repiv posted:this wouldn't have happened if we'd all just moved to
|
# ? Dec 23, 2023 14:49 |
|
repiv posted:
> this wouldn't have happened if we'd all just moved to stadia

inb4 geforce now datacenter burns down

seriously though, where are those motherboard power connectors asus showed off the other day? why aren't they a standard yet?
|
# ? Dec 23, 2023 15:19 |
|
Truga posted:
> seriously though, where are those motherboard power connectors asus showed off the other day? why aren't they a standard yet?

probably because introducing a graphics card that requires you to also get a new special snowflake motherboard would go down like a ton of bricks

ATX is the mess it is because we can only ever increment each individual component in isolation, not as a cohesive whole, as you would have to do to change the GPU-motherboard interface, or raise the supply voltages, or whatever
|
# ? Dec 23, 2023 15:28 |
|
weird how we had a whole slew of AGP x(number) slots over a few years when it was required to sell a new thing because old-skool PCI was too slow, but we're stuck on the same drat slot for like 15 years now when it's the consumer's problem to figure out how to make it work lmao

e: to clarify, i'm not saying pcie is bad, it's very good to have effectively 15 years of backward compatibility. but if breaking from the standard was required to sell new expensive gpus it'd have happened 14 years ago, whereas if it's a problem consumers/oems have to deal with, it's not a problem worth considering lol

Truga fucked around with this message at 15:41 on Dec 23, 2023 |
# ? Dec 23, 2023 15:35 |
|
a lot of companies like asus are already trying to do some vendor lock-in, so it's gonna take something big for everyone to jump on a better open standard rather than a lot of dumb patented stuff like q-release buttons for pci devices
|
# ? Dec 23, 2023 15:48 |
|
i'm already having problems with this poo poo at work, because oems realized atx doesn't work for them like 10 years ago, so they ship proprietary connectors on their PSUs and plug a lot of stuff straight into the motherboard. on the server side this is easily fixed by contacting the vendor and asking for the correct cables, but on the desktop side the reply is generally just that bender meme, if they even bother
|
# ? Dec 23, 2023 15:53 |
|
When we went through the previous slot transitions, computers were thoroughly obsolete within a couple of years. It would have been silly to put a new GPU in a 3-4 year old PC that couldn't feed it. Since the dawn of the PCIe era, we have consistently shifted towards CPUs lasting several times longer than GPUs. Unfortunately, the easy time to sell everyone on "your next build will have to be clean from scratch" was 10-plus years ago. Now, with thin margins, it's not like the AIBs want to deal with having two designs per product for a generation or two.
|
# ? Dec 23, 2023 16:52 |
|
K8.0 posted:
> When we went through the previous slot transitions, computers were thoroughly obsolete within a couple years. It would have been silly to put a new GPU in a 3-4 year old PC that couldn't feed it. Since the dawn of the PCIe era, we have consistently shifted towards CPUs lasting several times longer than GPUs.

Eh, you could probably do it at the next big socket switchover. We're seeing it right now with DDR5. This doesn't mean you have to do the whole stack at once. I suspect most people shopping 3050s and equivalents aren't dropping them into the latest, fastest CPU/mobo/RAM combo. Meanwhile, the pool of people buying 4090s most likely aren't looking to drop one into their AM4 socket machine from 2018.

Plus, if you're going to the effort of coming up with a new GPU slot spec, there's also no reason you can't just put one in alongside ye olde PCIe, just like AGP did back in the stone age. I mean, it would be more expensive, and you're not going to do it in a micro board form factor, but we're already talking boards for people who want the latest and best. Some kind of nu-AGP ships for whatever 2025's hot new CPU socket is, stick it next to the PCIe slots, and ship the 6070/80/90 on it. Make PCIe 6060s. Trickle the fancy new slot down the stack as poo poo ages and this year's bleeding-edge enthusiast bait becomes next year's mainstream feature.

edit: I think enthusiast circles vastly, vastly over-estimate how much individual in-place part upgradability matters to the typical builder, much less the typical user. Your typical person building a gaming PC might be lucky to upgrade one or two parts across the lifetime of the device. There are a lot of people out there who just build something pretty decent and then run it into the ground for five years before building anew and (maybe) dragging over a couple of storage drives.
|
# ? Dec 23, 2023 18:23 |
|
Enthusiasts are only one part of the market. You need to consider all products that conform to ATX and what it means for the production of feeding all those parts that go to multiple products.
|
# ? Dec 23, 2023 18:40 |
|
I wonder if there has been any talk of moving towards a higher voltage like +48V to get away from high-current 12V. This would require changes to ATX PSUs, so it would be pretty major. Rejigging the PCIe CEM spec to add optional higher-capacity power connectors like the Mac Pro's would be really nice too.
|
# ? Dec 23, 2023 19:34 |
|
I'm not an EE at all but is a voltage change something that devices can negotiate? E.g. start at 12v, then communicate to agree to raise the voltage?
|
# ? Dec 23, 2023 19:37 |
|
Bjork Bjowlob posted:
> I'm not an EE at all but is a voltage change something that devices can negotiate? E.g. start at 12v, then communicate to agree to raise the voltage?

Yah, that is something that USB-C does: it starts at +5V and can negotiate up to 48V iirc
|
# ? Dec 23, 2023 19:39 |
|
priznat posted:
> Yah that is something that USB-C does starting at +5V and can negotiate up to 48V iirc

Yeah nice, I was thinking about USB-C but wasn't sure if it was just current negotiation or current+voltage. If that's the case, and the current ATX connectors are rated appropriately, then could a voltage change in the ATX spec be introduced in a way that devices/PSUs that aren't aware operate normally, but devices/PSUs that are aware could negotiate a higher voltage across the same connection? E.g. GPUs could keep 2x 8-pin connectors, and if they're connected to an aware PSU they negotiate such that the power limit is 600W, otherwise fall back to 450W.
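A minimal sketch of the kind of backwards-compatible negotiation described above (entirely hypothetical: nothing like this exists in the ATX spec, and all names and limits here are invented for illustration):

```python
# Hypothetical USB-PD-style handshake for a GPU power connector: both sides
# come up in legacy 12 V mode, and only step up if both advertise support.
# Names and limits are invented; this is not part of any real spec.

from dataclasses import dataclass

@dataclass
class PowerContract:
    volts: float
    max_watts: float

def negotiate(psu_supports_hv: bool, gpu_supports_hv: bool) -> PowerContract:
    """Return the agreed power contract; unaware devices never leave 12 V."""
    if psu_supports_hv and gpu_supports_hv:
        return PowerContract(volts=48.0, max_watts=600.0)  # negotiated high-voltage mode
    return PowerContract(volts=12.0, max_watts=450.0)      # legacy fallback, always safe

print(negotiate(True, True))   # both aware -> 48 V / 600 W
print(negotiate(True, False))  # legacy GPU -> stays at 12 V / 450 W
```

The key property, as with USB-PD, is that the default state is the one every legacy device already understands, so mismatched pairs degrade gracefully instead of frying anything.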
|
# ? Dec 23, 2023 19:44 |
|
it could be done but the time to do that was when they were defining the 12VHPWR connector and they didn't so now we have to wait another 20 years sorry
|
# ? Dec 23, 2023 19:46 |
|
I am a very big fan of the 2019 Mac Pro design and how Apple handled additional power demand for the MPX Modules, which is why it's all the more unfortunate that they basically ceased active development on MPX Modules after 2022 and with the new 2023 Apple Silicon-based Mac Pro. I can't blame them, though, since they probably didn't sell that many MPX Modules to begin with.
|
# ? Dec 23, 2023 20:55 |
|
Truga posted:
> inb4 geforce now datacenter burns down

If the point is to avoid using 12VHPWR, the motherboard connector would change nothing, as it's just a passthrough for a 12VHPWR connector on the rear of the board.
|
# ? Dec 23, 2023 23:50 |
|
|
Any ATX replacement should make it so you mount the GPU to the case and plug power into it, and then the GPU has the CPU socket, DIMM slots, chipset, etc. that plug into it.
|
# ? Dec 24, 2023 20:57 |