|
We’ve reached a point where “65W” actually means 400W+?
|
# ? Aug 7, 2021 20:48 |
|
Should just drop the 'W' like the 'nm'
|
# ? Aug 7, 2021 20:54 |
|
SwissArmyDruid posted:
Average wattage hasn’t gone up though, so those higher peaks are being balanced by lower lows. What you’re seeing is a progression of the same trends that have impacted AMD and NVIDIA GPUs: power consumption is getting “burstier” on these newer nodes. When everything switches on at once they can suck >2x the “average” current for a few milliseconds, then it goes lower for a while to compensate.

There is of course a parallel conversation here about the extent to which our power supplies were specced to handle that kind of surge. It was a problem for Vega, a problem for RDNA1, a problem for Ampere (the POSCAP mess), etc. A good PSU should be able to handle it, but a lot of people don’t have top-tier PSUs and have pretty marginal capacity that doesn’t leave much headroom for surges, because the PSU is a widely recommended place to save twenty bucks by cutting corners.

Paul MaudDib fucked around with this message at 05:58 on Aug 8, 2021 |
# ? Aug 7, 2021 21:06 |
|
Begall posted:We’ve reached a point where “65W” actually means 400W+? They might be using the same kind of "Watts" as the producers of hi-fi speakers.
|
# ? Aug 7, 2021 21:09 |
|
Those peak numbers characterize worst-case transient demand spikes while running some kind of power-virus load crafted by the engineering team to switch as many flip-flops as fast as possible. System integrators have to size the PSU to handle such spikes to guarantee stable operation under all possible scenarios.

It's been a trend for a long time that each new generation of chips (from anyone, not just Intel) increases the ratio between peak and long-term average power draw. For example, a common way of using up the transistors new nodes give you is to plonk down special-purpose accelerator blocks that are usually dark. When your engineering team does the exercise of assuming everything is active at everything's Fmax at the same time, you do get some big numbers.

Note that the "continuous current" aka long-term average numbers show no change between 10th/11th/12th gen for each TDP bin, other than 12th gen 35W TDP losing 2A (which is good). So in that respect, Intel is holding the line. The real issue is that the line was ridiculous in the first place: a supposedly 125W TDP CPU is allowed to draw 26A continuous from the 12V supply (312W)? Intel TDP used to actually mean something, many years ago.
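For what it's worth, the 312W figure above is just P = I × V arithmetic on the quoted current limit; a minimal sketch (the 26A and 12V numbers come from the post itself, the function name is just for illustration):

```python
RAIL_VOLTS = 12.0  # the CPU power connector's 12V rail

def rail_power(amps: float, volts: float = RAIL_VOLTS) -> float:
    """Continuous power drawn from a supply rail: P = I * V."""
    return amps * volts

# The "125W TDP but 26A continuous" case discussed above:
print(rail_power(26))  # -> 312.0 watts
```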
|
# ? Aug 7, 2021 21:25 |
|
BobHoward posted:The real issue is that the line was ridiculous in the first place: a supposedly 125W TDP CPU is allowed to draw 26A continuous from the 12V supply (312W)? Intel TDP used to actually mean something, many years ago. the 65W part continuously consuming 270W is even more funny. I don't think AMD has anything that bad
|
# ? Aug 7, 2021 21:30 |
|
Inept posted:the 65W part continuously consuming 270W is even more funny. I don't think AMD has anything that bad They *did*. Bulldozer. The track is on the other bogey now.
|
# ? Aug 7, 2021 21:54 |
|
SwissArmyDruid posted:
So they are pulling an NVIDIA and MOAR POWAH!!!!
|
# ? Aug 7, 2021 23:09 |
|
Inept posted:the 65W part continuously consuming 270W is even more funny. I don't think AMD has anything that bad A full power 64c Epyc clocks in 280w, but that's about it I think. Desktop stuff like the 5800x peaks at 140w.
|
# ? Aug 8, 2021 01:27 |
|
540 watts lmao, I have a literal space heater that’s 600 watts.
|
# ? Aug 8, 2021 02:22 |
|
Looking forward to the future when AIOs give way to self-contained phase change apparatuses that require a licensed refrigerant tech to service. I mean, Intel already brought back TECs.
|
# ? Aug 8, 2021 02:26 |
|
BIG HEADLINE posted:They *did*. Bulldozer. At least AMD labeled the 220W Bulldozer parts as 220W parts. Even included an AIO with the retail box version.
|
# ? Aug 8, 2021 06:53 |
|
B-1.1.7 Bomber posted:540 watts lmao, I have a literal space heater that’s 600 watts. When someone asks how powerful your CPU is, you can just say "It peaks at about 3/4 horsepower".
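The 3/4 horsepower quip actually checks out; a quick sketch of the conversion (using 1 mechanical hp ≈ 745.7 W):

```python
WATTS_PER_HP = 745.7  # one mechanical horsepower, in watts

def watts_to_hp(watts: float) -> float:
    """Express a power draw in watts as horsepower."""
    return watts / WATTS_PER_HP

print(round(watts_to_hp(540), 2))  # -> 0.72, i.e. roughly 3/4 hp
```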
|
# ? Aug 8, 2021 12:32 |
Lots of nerds are gonna have to have their house electrics rewired, so really Intel is doing The Society a favour by ensuring there'll be plenty of jobs in the construction business.
|
|
# ? Aug 8, 2021 12:34 |
|
Looking forward to how mobo makers are gonna market their one-click "easy" 1 extra watt per 1MHz OC
|
# ? Aug 8, 2021 13:19 |
|
B-1.1.7 Bomber posted:540 watts lmao, I have a literal space heater that’s 600 watts. I wonder at what point it’s worth building dedicated exhaust ventilation into your house for your SLI 3090 build? I know my computer in a well-ventilated area of my house makes it hot as balls with a 2080Ti and a 9700k. My cooling costs!!!
|
# ? Aug 8, 2021 13:45 |
|
If I want to try one of those new Intel CPUs, do I need a 240V outlet installed?
|
# ? Aug 8, 2021 13:48 |
|
do a server build hidden in a closet then stream games to a thin client
|
# ? Aug 8, 2021 13:58 |
|
Inept posted:the 65W part continuously consuming 270W is even more funny. I don't think AMD has anything that bad I noticed that the Dell micro-PCs (micro desktops like this) have gone from using 35W desktop CPUs on prior generations to 35W laptop CPUs on 11th gen. This was pretty confusing to me initially because 35W TDP is 35W TDP, right? I guess it's not anymore, though.
|
# ? Aug 8, 2021 14:00 |
|
Is it possible to hit the high continuous power dissipations in non-fake workloads though?
|
# ? Aug 8, 2021 14:53 |
|
Gwaihir posted:A full power 64c Epyc clocks in 280w, but that's about it I think. Desktop stuff like the 5800x peaks at 140w. With 5x scaling and PBO I got my 5800X CPU package power to 183W (Cinebench for like 12 hours to test the new system, oversized custom loop). Modern CPUs can consume hilarious amounts of power, good to see that won't change too fast. Inept posted:the 65W part continuously consuming 270W is even more funny. I don't think AMD has anything that bad +1, that's kinda insane AutismVaccine fucked around with this message at 16:10 on Aug 8, 2021 |
# ? Aug 8, 2021 16:07 |
|
AutismVaccine posted:with 5x scaling and PBO i got my 5800x CPU package power to 183W (cinebench for like 12hours to test the new system, oversized custom loop). PBO is specifically running the CPU out of spec and allowing it to violate PPT. On the intel side of things, the 65w cpus absolutely do not draw 270w sustained when using intel's recommended motherboard settings: https://www.techpowerup.com/review/intel-core-i5-11400f/20.html Will it do more than 65w short duration? Absolutely. Can you uncap, overclock, and cause crazy sustained draw? Sure, but that's on you.
|
# ? Aug 8, 2021 16:54 |
|
You can't really compare AMD and Intel CPUs head to head in terms of spec/OC. AMD's auto-clocking is very advanced and is basically getting everything out of the CPUs already; PBO is throwing a ton of power at essentially no gain.

Intel's time-limited boost spec, whatever the gently caress they're calling it, is only really time-limited for thermal design purposes for OEM and maybe SI applications. There's really no downside to letting it run in suck-down-all-the-power mode forever, Intel doesn't actually consider it out of spec, and it does get you meaningful performance gains for the power cost. That's why there are always a bunch of motherboards that default to running that way all the time.

So AMD has their TDP, their actual power draw, and their transient power draw, but Intel has TDP, actual power draw with "spec" time limits, then a lower drop-down power, or maintaining that boost power forever, and also transients. The only sane way to compare them is to set up a system for a specific goal, bench that goal while also capturing power usage, and then analyze efficiency. In a vacuum none of the numbers mean anything.
|
# ? Aug 8, 2021 17:30 |
|
yeah, Intel does this thing where they want to be able to sell CPUs on the OEM and light-user side without scaring off customers/motherboard manufacturers with high wattage numbers, and while only needing their piddly-rear end coolers, so the base clocks are set/advertised low with a simplistic timed boost that actually lets them hit a "65W TDP"... but they also know that they need to goose the clocks real high if they want to be competitive in the gaming/power-user space... so you can just lift the boost limit and let it run at full clip all day without even violating what Intel would consider in-warranty behavior.

I don't know if I would necessarily call this "cheating", but there's definitely an element of misleading-ness to all of it, because they're trying to have their cake and eat it too.

I remember a couple of months back Hardware Unboxed took issue with this sort of thing, because they found a couple of motherboards where the power delivery was so sparse and underbuilt that even though they could run a CPU within the "Intel guidance" limits with the low clocks and low TDP, they couldn't deliver no-limit boosting, which is what happens if you tell a motherboard manufacturer "it only needs to handle 65 watts" and they want to cut so many corners that they actually do exactly that.
|
# ? Aug 8, 2021 18:13 |
|
Which is exactly what those benchmarks are. The chip will stay within long duration power limits on the box as long as the uefi settings aren't modified to allow it to do more (or as you mentioned have their default set that way which imo is kinda scummy practice by the board maker - yes, leave the option there for people that want it, but behavior out of the box should match the packaging.)
|
# ? Aug 8, 2021 18:15 |
|
Yeah, both Intel's and AMD's "TDP" calculations are far from perfect, and their boosting behaviors (and board vendor implementations) are disparate enough that for desktop specifically, you can't use them like you used to. The reality is that the DIY market has sent a clear signal to the hardware providers that they don't really give a poo poo about power draw on average, and AMD/Intel/Nvidia have all responded to that signal with ballooning power budgets. Especially in a world with process improvement grinding down to an unbelievably expensive and slow slog, sending the power budget through the roof for the DIY segment is one of the easiest ways to keep giving people the gaming performance increases they want.
|
# ? Aug 8, 2021 18:44 |
|
Are the continuous power dissipation numbers going up for 1) real applications, or only for 2) fake applications which are designed to turn on all of the sub-circuits of the chip? If power is only going up for case 2) and not case 1), then I don't really see the controversy here. It makes a lot of sense to me for Intel and AMD to design for real-world usage instead of fake computer usage.
|
# ? Aug 8, 2021 19:00 |
|
That Hardware Unboxed video found Intel motherboards that were so bad they weren't able to run >65W CPUs (which the box claimed were supported) within standard power limits without throttling them to the point of losing half their performance relative to a properly built motherboard.
|
# ? Aug 8, 2021 19:20 |
|
VorpalFish posted:Which is exactly what those benchmarks are. The chip will stay within long duration power limits on the box as long as the uefi settings aren't modified to allow it to do more (or as you mentioned have their default set that way which imo is kinda scummy practice by the board maker - yes, leave the option there for people that want it, but behavior out of the box should match the packaging.) Why is it scummy that a high-performance gamer board defaults to high-performance gamer settings? The average person buying these boards does not understand what the hell is going on. Probably half the people building their own systems don't even enable XMP, which is a huge performance hit, and in an ideal world it would be practical to default that to on as well. Steve at GN makes a huge deal out of this and benches Intel systems at the "spec", which is honestly stupid. No one going to his youtube channel is trying to build an office PC with no cooling from an 11900k.
|
# ? Aug 8, 2021 19:25 |
|
K8.0 posted:Why is it scummy that a high performance gamer board defaults to high performance gamer settings? The average person buying these boards does not understand what the hell is going on. Probably half the people building their own systems don't even enable XMP, which is a huge performance hit and in an ideal world it would be practical to default that to on as well. I think Steve has this one at least a little right - I prefer when review sites test at both stock and OC/raised limits and report power and performance for both cases.

And I think it's scummy because they're doing it to make their motherboard look better than it actually is. If you test board A and board B where board B enforces the power limit and A does not, A looks like it performs better with the same chip, even if B is just as capable of running the chip just as fast by raising the power limit. I think running a chip at a higher power limit should be a conscious choice made by someone who's chosen appropriate cooling to go with it, not something you don't realize is happening. If you aren't the kind of person who's gonna even enable XMP, maybe you're running the stock cooler too, and the performance at the stock long power limit is probably sufficient in that scenario.

Edit: I would be way more on board with XMP on as a default - ostensibly the memory kit is being guaranteed by the manufacturer to run at the advertised clocks and timings, and it has negligible impact on system power draw.

VorpalFish fucked around with this message at 19:57 on Aug 8, 2021 |
# ? Aug 8, 2021 19:54 |
|
I get what you're saying, but if you're dumb and lazy enough that you buy a high-end board and a high-end CPU, and then you put a poo poo-rear end cooler on it... I honestly don't care, go gently caress yourself. You can deal with the poorly cycling thermal throttling, you deserve it. Keep in mind that all the higher-end K SKUs don't come with an HSF. You have to do the research to buy an expensive CPU and expensive motherboard and then buy a garbage cooler to wind up in that situation. And yes, at that point you should also be going into the BIOS and setting things, but the fact of the matter is that that is a different, more difficult task for most people than buying a list of good parts and putting them together.

Also I think we're past the days of benches being the judge of motherboards, but maybe I'm wrong.
|
# ? Aug 8, 2021 20:01 |
|
silence_kit posted:Are the continuous power dissipation numbers going up for real applications, or only for fake applications which are designed to turn on all of the sub-circuits of the computer chip? Real applications that use the CPU. For a lot of regular enthusiasts, games are the only thing that does it. Most apps don't really put that much CPU pressure on the system while active. K8.0 posted:Also I think we're past the days of benches being the judge of motherboards, but maybe I'm wrong. intel has made them relevant again
|
# ? Aug 8, 2021 20:10 |
|
Klyith posted:Real applications that use the CPU. For a lot of regular enthusiasts, games are the only thing that does it. That's exactly what we're talking about; some of those motherboards are allowing the chip to maintain boost way above the rated power limit by default, which makes them look a lot better. In my opinion, that's scummy on the part of the board maker, not Intel - and to be clear, those are non-K-series SKUs that do come with lovely stock heatsinks that a masochist could conceivably use. Yes, some of the boards couldn't handle 140W because of inadequate/undercooked VRMs, but if you want to run at uncapped power, get a board with a decent VRM. Imo they should all default to enforcing the power limit of the SKU, with the option to uncap - it should be opt in, not opt out.
|
# ? Aug 8, 2021 20:22 |
|
Klyith posted:Real applications that use the CPU. For a lot of regular enthusiasts, games are the only thing that does it. everything that intel says about benchmarks is true, especially if it contradicts what they said about benchmarks before.
|
# ? Aug 8, 2021 20:34 |
|
For H and B platforms I agree with you: those are lower-end platforms where lower-end components are being used, and efficiency is far more likely to be a real concern. On Z platforms, goose it to the moon by default.
|
# ? Aug 8, 2021 20:56 |
|
Anyways, the last 4-5 pages of this thread are retrospectively super funny now.

Intel: We want everyone to use ATX12VO so PCs will be more efficient #green

Also Intel: Here's our new 500W CPU!
|
# ? Aug 9, 2021 00:48 |
|
to be fair, someone stuck at home on their gaming computer pulling down ~800 W from the wall is still using about as much electricity per hour as it takes a Tesla to drive a little over 3 miles. so if someone's hobby or entertainment has them driving more than 10 miles or so per outing, tsk tsk
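The back-of-envelope math here, assuming a commonly cited ~250 Wh/mile EV consumption figure (an assumption on my part, not something stated in the post):

```python
EV_WH_PER_MILE = 250.0  # assumed average EV consumption; varies by model and conditions

def gaming_to_ev_miles(watts: float, hours: float = 1.0,
                       wh_per_mile: float = EV_WH_PER_MILE) -> float:
    """Express energy used gaming (watts over hours) as EV driving distance in miles."""
    return watts * hours / wh_per_mile

print(gaming_to_ev_miles(800))  # -> 3.2 miles per hour at the wall
```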
|
# ? Aug 9, 2021 01:01 |
|
love to spend summer sitting next to a 1kw gaming desktop
|
# ? Aug 9, 2021 01:03 |
|
shrike82 posted:love to spend summer sitting next to a 1kw gaming desktop just play on a laptop via Moonlight bing bong so simple
|
# ? Aug 9, 2021 01:08 |
|
Meanwhile, the current consoles are legitimately comparable to a midrange current-gen PC and ship with a 315W PSU; the peak power draw observed right now is around 201W. https://www.eurogamer.net/articles/digitalfoundry-2020-xbox-series-x-power-consumption-and-heat-analysis
|
# ? Aug 9, 2021 01:18 |