shrike82
Jun 11, 2005

Twerk from Home posted:

Meanwhile, the current consoles are legitimately comparable to a midrange current-gen PC and ship with a 315W PSU; peak power draw observed right now is around 201W.

https://www.eurogamer.net/articles/digitalfoundry-2020-xbox-series-x-power-consumption-and-heat-analysis

tbf the tests were run on last gen games

Klyith
Aug 3, 2007

GBS Pledge Week

Shipon posted:

to be fair, someone stuck at home on their gaming computer pulling down ~800 W from the wall is still using about as much electricity per hour as it takes a tesla to drive a little over 3 miles, so if someone's hobby or entertainment has them driving more than 10 miles or so per outing, tsk tsk

to be fair what if their hobby was actually only 1 mile away and instead of just going there they spent 10 minutes looping around the neighborhood racing between the lights for no reason?

500W CPU : 150W CPU :: rolling coal pickup : prius

Indiana_Krom
Jun 18, 2007
Net Slacker
An ~800W gaming computer running for one solid month would consume 576 kWh of electricity, which is enough energy to drive an EV with a 275 Wh/mi efficiency about 2100 miles.
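
A quick sanity check of those figures, as a minimal sketch: it assumes 24/7 operation as stated, and the ~250 Wh/mi Tesla efficiency used for the hourly comparison a few posts up is an assumption rather than a number from the posts.

```python
# Back-of-the-envelope check of the numbers in the last couple of posts.
# 800 W and 275 Wh/mi come from the posts; the ~250 Wh/mi Tesla efficiency
# for the hourly comparison is an assumed figure.

WALL_DRAW_W = 800            # gaming PC drawing ~800 W at the wall
HOURS_PER_MONTH = 30 * 24    # "one solid month", running 24/7

monthly_kwh = WALL_DRAW_W * HOURS_PER_MONTH / 1000
print(f"monthly consumption: {monthly_kwh:.0f} kWh")             # 576 kWh

EV_EFFICIENCY_WH_PER_MI = 275
ev_miles = monthly_kwh * 1000 / EV_EFFICIENCY_WH_PER_MI
print(f"equivalent EV range: {ev_miles:.0f} miles")               # ~2095 miles

# the hourly version from a few posts up: one hour of gaming vs. Tesla miles
TESLA_WH_PER_MI = 250  # assumed, not stated in the post
print(f"one hour of gaming ~ {WALL_DRAW_W / TESLA_WH_PER_MI:.1f} miles")  # ~3.2
```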

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD
No gaming computer is running at 800W 24/7.
Heck, it probably isn't even running at 800W in the middle of a game, only during a stress test or benchmark.

silence_kit
Jul 14, 2011

by the sex ghost
Yeah, I wonder how power usage during a game compares with power usage during benchmark software designed to turn on all of the sub-circuits of the computer chip.

BlankSystemDaemon
Mar 13, 2009



Considering most games don't even utilize all of the cores 100%, it's a lot less than some people seem to be expecting.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
y'all need to quit playing games so much

Klyith
Aug 3, 2007

GBS Pledge Week

silence_kit posted:

Yeah, I wonder how power usage during a game compares with power usage during benchmark software designed to turn on all of the sub-circuits of the computer chip.

That's not how benchmarks work. Benchmarks can't just make every transistor turn on. Some benchmarks are artificial in ways that mean their power results are way higher than any real-world task. (FurMark for GPUs is the best example of this.)

Other benchmarks just do stuff that you, a regular desktop user, never do. You probably don't crunch numbers or run AI with AVX-512 or render 3D CGI all day. So those power numbers aren't relevant to you. But hey, that also means you don't need an 11900K or 5950X in the first place! Save your money and buy a cheaper, less power-hungry CPU.

And some benchmarks, like 3Dmark, are trying to be like real-world tasks but more intense because they're looking to the future when games & stuff will be doing that.

BlankSystemDaemon posted:

Considering most games don't even utilize all of the cores 100%, it's a lot less than some people seem to be expecting.

AFAIK the highest power draw isn't with all-core loads, it's during max boost to a subset of cores. Clock boosting pushes extra volts for clock speed, which generates a ton of waste heat. It can't do that on all-core loads. But when only a couple of cores are loaded, they can boost to Ludicrous Speed and effectively use the rest of the silicon as a heatsink.

Results may vary depending on Intel vs AMD, chip size, game choice, and other factors. But in general I would say that a game isn't necessarily using less power than other workloads that you might assume to be heavier because they use more cores. That's kinda why Intel is doing Ludicrous Watts, they want to keep their Highest FPS title.


(OTOH the highest power draws on Intel mostly come from using AVX-512, which isn't games. And the maximum 500W numbers are momentary peaks, not sustained.)
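
The reason extra volts cost so much is that dynamic power scales roughly with C·V²·f. Here's a toy sketch with invented voltage/frequency points (not measurements of any real chip); per-core power roughly doubles at boost clocks, though as the follow-up posts get into, the package total can still end up higher under an all-core load:

```python
# Toy illustration of dynamic power scaling, P ~ C * V^2 * f.
# The voltages, clocks, and effective capacitance are made up for illustration,
# not measurements of any real CPU.

def core_power_w(v_core: float, freq_ghz: float, c_eff: float = 2.2) -> float:
    """Dynamic power of a single core: effective capacitance * V^2 * f."""
    return c_eff * v_core**2 * freq_ghz

all_core_pt = core_power_w(v_core=1.10, freq_ghz=4.3)  # modest volts at all-core clocks
boost_pt    = core_power_w(v_core=1.40, freq_ghz=5.1)  # extra volts for max boost

print(f"per core at all-core clocks: {all_core_pt:5.1f} W")   # ~11.4 W
print(f"per core at 2-core boost:    {boost_pt:5.1f} W")      # ~22.0 W

# Per-core power roughly doubles at boost, but the package total still depends
# on how many cores are loaded:
print(f"8 cores at all-core clocks:  {8 * all_core_pt:5.1f} W")  # ~91.6 W
print(f"2 cores at boost clocks:     {2 * boost_pt:5.1f} W")     # ~44.0 W
```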

VorpalFish
Mar 22, 2007
reasonably awesometm

Klyith posted:

That's not how benchmarks work. Benchmarks can't just make every transistor turn on. Some benchmarks are artificial in ways that mean their power results are way higher than any real-world task. (FurMark for GPUs is the best example of this.)

Other benchmarks just do stuff that you, a regular desktop user, never do. You probably don't crunch numbers or run AI with AVX-512 or render 3D CGI all day. So those power numbers aren't relevant to you. But hey, that also means you don't need an 11900K or 5950X in the first place! Save your money and buy a cheaper, less power-hungry CPU.

And some benchmarks, like 3Dmark, are trying to be like real-world tasks but more intense because they're looking to the future when games & stuff will be doing that.

AFAIK the highest power draw isn't with all-core loads, it's during max boost to a subset of cores. Clock boosting pushes extra volts for clock speed, which generates a ton of waste heat. It can't do that on all-core loads. But when only a couple of cores are loaded, they can boost to Ludicrous Speed and effectively use the rest of the silicon as a heatsink.

Results may vary depending on Intel vs AMD, chip size, game choice, and other factors. But in general I would say that a game isn't necessarily using less power than other workloads that you might assume to be heavier because they use more cores. That's kinda why Intel is doing Ludicrous Watts, they want to keep their Highest FPS title.

I don't believe this is the case - in general you are going to run into a frequency wall such that you can't stably match the power consumption of most all core workloads on 1-2 cores, even though you are boosting higher.

See: https://www.guru3d.com/articles_pages/intel_core_i9_11900k_processor_review,5.html

And Intel doesn't have the highest FPS title anymore - they're making the value play with the 11400 now.

Klyith
Aug 3, 2007

GBS Pledge Week

VorpalFish posted:

I don't believe this is the case - in general you are going to run into a frequency wall such that you can't stably match the power consumption of most all core workloads on 1-2 cores, even though you are boosting higher.

See: https://www.guru3d.com/articles_pages/intel_core_i9_11900k_processor_review,5.html

Ugh, yeah, I don't know for sure with Intel; on AMD's CPUs the highest power is not with all-core, especially with PBO. You'd need new charts on that page, between the single-thread & multithread ones, to see it. And I think Intel was the same pre-Rocket Lake.

But the Intel "Adaptive Boost Technology" is probably throwing a wrench into that and pushing way more power in all-core loads:

quote:

When in a turbo mode, if 3 or more cores are active, the processor will attempt to provide the best frequency within the power budget, regardless of the TB2 frequency table. The limit of this frequency is given by TB2 in 2-core mode. ABT overrides TVB when 3 or more cores are active.
So that's just saying gently caress it, max the frequency all-core until we either run out of juice or hit thermal limits.
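
Read literally, that ABT policy boils down to something like the sketch below. The frequency table is invented for illustration, and the real logic lives in firmware with per-SKU fused tables plus current, temperature, and TVB inputs; this is only meant to show the decision described in the quote:

```python
# Toy model of the Adaptive Boost behaviour described in the quote above.
# Frequencies are invented; real chips use fused per-SKU tables and many more
# inputs (current limits, temperature, AVX offsets, TVB, ...).

TB2_TABLE_GHZ = {1: 5.3, 2: 5.3, 3: 5.0, 4: 5.0, 5: 4.8, 6: 4.8, 7: 4.7, 8: 4.7}

def turbo_target_ghz(active_cores: int, power_headroom: bool,
                     thermal_headroom: bool, adaptive_boost: bool) -> float:
    """Pick a turbo frequency target per the quoted description."""
    per_count_limit = TB2_TABLE_GHZ[min(active_cores, max(TB2_TABLE_GHZ))]
    if adaptive_boost and active_cores >= 3 and power_headroom and thermal_headroom:
        # ABT: ignore the per-core-count TB2 entry and push toward the 2-core
        # limit until power or thermals say stop.
        return TB2_TABLE_GHZ[2]
    return per_count_limit

print(turbo_target_ghz(8, power_headroom=True,  thermal_headroom=True, adaptive_boost=True))   # 5.3
print(turbo_target_ghz(8, power_headroom=False, thermal_headroom=True, adaptive_boost=True))   # 4.7
print(turbo_target_ghz(8, power_headroom=True,  thermal_headroom=True, adaptive_boost=False))  # 4.7
```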

Cygni
Nov 12, 2005

raring to post

Nomyth posted:

y'all need to quit playing games so much

No.

Hughmoris
Apr 21, 2007
Let's go to the abyss!

Nomyth posted:

y'all need to quit playing games so much

Mods?!

canyoneer
Sep 13, 2005


I only have canyoneyes for you

tehinternet posted:

I wonder at what point it’s worth building dedicated exhaust ventilation into your house for your SLI 3090 build?

I know my computer in a well ventilated area of my house makes it hot as balls with a 2080Ti and a 9700k. My cooling costs!!!

Realtor in 100 years having to explain to the buyers touring a house that the data center in this house is in kind of a weird spot, because it was retrofitted in. The house was built in 2021, which is a few years before builders started designing them in.

Rakeris
Jul 20, 2014

canyoneer posted:

Realtor in 100 years having to explain to the buyers touring a house that the data center in this house is in kind of a weird spot, because it was retrofitted in. The house was built in 2021, which is a few years before builders started designing them in.

poo poo, you can say that now, maybe?

https://www.zillow.com/homedetails/13229-Southview-Ln-Dallas-TX-75240/118222349_zpid/

VorpalFish
Mar 22, 2007
reasonably awesometm

Klyith posted:

Ugh, yeah, I don't know for sure with Intel; on AMD's CPUs the highest power is not with all-core, especially with PBO. You'd need new charts on that page, between the single-thread & multithread ones, to see it. And I think Intel was the same pre-Rocket Lake.

But the Intel "Adaptive Boost Technology" is probably throwing a wrench into that and pushing way more power in all-core loads:

So that's just saying gently caress it, max the frequency all-core until we either run out of juice or hit thermal limits.

My Ryzen 5800X also pulls substantially more power in all-core workloads than single-core workloads. Running single-core Cinebench, my CPU will hit about 55% of the 105W power limit (reported in Ryzen Master, manually lowered) at 4842MHz (this is hitting the stock frequency limit and would not go up if I increased PPT to 142W - I would have to set a clock-speed offset). All-core Cinebench hits 100% of the 105W PPT at 4350MHz, and those clocks (and power consumption) would increase if I restored the stock power limit.

Per-core power consumption is of course way up in the 1C scenario, but not enough to offset the consumption of all cores in the all-core scenario.

Notably, the 1C load actually does get hotter despite consuming substantially less power, presumably because the much higher per-core consumption gives me less surface area to work with to dissipate the energy. My boost clock is not thermally constrained in Cinebench 1C, but it actually would be (slightly) in 1C small-FFT P95.
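
Roughing out the numbers in that post (a sketch that ignores the SoC/uncore share of PPT, so the per-core figures are only ballpark):

```python
# Rough arithmetic on the 5800X figures above. PPT covers the whole package
# (cores + SoC), so the per-core numbers here are only ballpark.

PPT_W = 105
one_core_pkg = 0.55 * PPT_W       # ~58 W package power with one core loaded
all_core_pkg = 1.00 * PPT_W       # 105 W package power with all 8 cores loaded
per_core_all = all_core_pkg / 8   # ~13 W per core in the all-core case

print(f"1C package power:       {one_core_pkg:5.1f} W at 4842 MHz")
print(f"8C package power:       {all_core_pkg:5.1f} W at 4350 MHz")
print(f"approx. per-core in 8C: {per_core_all:5.1f} W")

# Why 1C can still run hotter: most of that ~58 W is concentrated in one core's
# few mm^2, so the heat-flux density (W/mm^2) is far higher than in the
# all-core case even though the package total is lower.
```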

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Looks like all-core not being the highest power draw is a quirk of the 2-CCD Ryzen chips; for single-CCD chips, all-core is the highest power draw.

https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-dive-review-5950x-5900x-5800x-and-5700x-tested/8

VorpalFish
Mar 22, 2007
reasonably awesometm

MaxxBot posted:

Looks like all-core not being the highest power draw is a quirk of the 2-CCD Ryzen chips; for single-CCD chips, all-core is the highest power draw.

https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-dive-review-5950x-5900x-5800x-and-5700x-tested/8



Oh, that's really interesting - the article speculates it's because they're using differently binned chiplets (one low-leakage, one high-leakage). Hadn't considered that.

Klyith
Aug 3, 2007

GBS Pledge Week
I've been wrong the whole time and what I've been thinking about is max observed temperature. Naively max temp would equal max heat generation would equal max power consumption, but conduction is important.

JawnV6
Jul 4, 2004

So hot ...

BobHoward posted:

Those peak numbers characterize worst-case transitory demand spikes running some kind of power virus load crafted by the engineering team to switch as many flipflops as fast as possible.

hey i did that once

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

canyoneer posted:

Realtor in 100 years having to explain to the buyers touring a house that the data center in this house is in kind of a weird spot, because it was retrofitted in. The house was built in 2021, which is a few years before builders started designing them in.

You joke, but how many of us have racks sitting somewhere in our house already?

Cygni
Nov 12, 2005

raring to post

For what it's worth, a Chinese-language site is reporting that Intel will be TSMC's first "3nm" customer, even before Apple, and has purchased all of the initial 3nm run starting next summer for 1 GPU product and 3 unannounced server products.

https://udn.com/news/story/7240/5662232

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
If true, I can only imagine the eye-watering price of server-grade CPUs produced by a company which had to out-bid Apple for fab time.

hobbesmaster
Jan 28, 2008

DrDork posted:

If true, I can only imagine the eye-watering price of server-grade CPUs produced by a company which had to out-bid Apple for fab time.

An alternative explanation is that it's a shitshow that Apple is avoiding and Intel is gleefully walking into, because hey, it can't be that bad compared to 14nm++++

Cygni
Nov 12, 2005

raring to post

DrDork posted:

If true, I can only imagine the eye-watering price of server-grade CPUs produced by a company which had to out-bid Apple for fab time.

To offset that cost, it's probable that future Intel server CPUs are going to be combinations of multiple process techniques (like Ponte Vecchio), so it's possible that only the most die-size-critical portions of the full CPU are on 3nm, with the rest/majority being on something comparatively cheaper like Intel 7.

The future is weird

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Cygni posted:

To offset that cost, it's probable that future Intel server CPUs are going to be combinations of multiple process techniques (like Ponte Vecchio), so it's possible that only the most die-size-critical portions of the full CPU are on 3nm, with the rest/majority being on something comparatively cheaper like Intel 7.

I mean, this makes a lot of sense for a lot of reasons, but you're still talking about a likely substantial portion of a die for a product already known to be aimed at "price insensitive customers" that's on a world-leading node that you had to out-bid literally the rest of the world to get. And you did so by enough that you got all the fab time.

hobbesmaster posted:

An alternative explanation is that it's a shitshow that Apple is avoiding and Intel is gleefully walking into, because hey, it can't be that bad compared to 14nm++++

Or yeah, this. Or for a slightly less pessimistic view, maybe 3nm doesn't provide a substantial enough bump over mature 5nm that Apple felt the need to bother and is instead happily iterating on their Mx line of chips.

Icept
Jul 11, 2001

hobbesmaster posted:

An alternative explanation is that it's a shitshow that Apple is avoiding and Intel is gleefully walking into, because hey, it can't be that bad compared to 14nm++++

I don't hold any allegiance but from a pure comedy standpoint that would be hilarious.

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy

DrDork posted:

I mean, this makes a lot of sense for a lot of reasons, but you're still talking about a likely substantial portion of a die for a product already known to be aimed at "price insensitive customers" that's on a world-leading node that you had to out-bid literally the rest of the world to get. And you did so by enough that you got all the fab time.

Or yeah, this. Or for a slightly less pessimistic view, maybe 3nm doesn't provide a substantial enough bump over mature 5nm that Apple felt the need to bother and is instead happily iterating on their Mx line of chips.

An even more optimistic view: Apple has an architectural advantage over Intel and is competitive even while being behind a node.

silence_kit
Jul 14, 2011

by the sex ghost
People have also speculated that Apple has a system-level advantage over its competitors because, being a computer systems company and not a computer chip company, they can spend more money on larger area chips. They aren’t as pressured on chip price as computer chip companies, who need to make a profit on the chip.

hobbesmaster
Jan 28, 2008

Icept posted:

I don't hold any allegiance but from a pure comedy standpoint that would be hilarious.

I just figured that Intel has stepped on so many rakes that it's likely.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

silence_kit posted:

People have also speculated that Apple has a system-level advantage over its competitors because, being a computer systems company and not a computer chip company, they can spend more money on larger area chips. They aren’t as pressured on chip price as computer chip companies, who need to make a profit on the chip.

Apple also tries to amortize the cost of their chips by putting them in as many different devices as possible. It's not like an Ax chip lives for a year in one model of phone and then dies; that particular phone sells for 2-3 years, the chip ends up in the Apple TV, a more GPU-capable variant goes into the iPad, next year it gets tweaked and goes into a budget device, maybe it gets tweaked and renamed, maybe it evolves into Ax+1. It's not 10 different chips for 10 different devices; it's more like 3 chip lineages constantly evolving to meet the needs of those 10 products.

Cygni
Nov 12, 2005

raring to post

also the leak seems to suggest to me that the initial 3nm run is small. so even if intel bought that initial batch for their parts with extreme costs/margins, it's possible that apple/AMD (or others) could follow pretty soon after.

assuming they dont go to Samsung... OR INTEL FABS???

Klyith
Aug 3, 2007

GBS Pledge Week

Perplx posted:

An even more optimistic view: Apple has an architectural advantage over Intel and is competitive even while being behind a node.

Huh? M1s are on TSMC 5nm, they're ahead a node over AMD on TSMC 7nm and "Intel 7".

silence_kit posted:

People have also speculated that Apple has a system-level advantage over its competitors because, being a computer systems company and not a computer chip company, they can spend more money on larger area chips. They aren’t as pressured on chip price as computer chip companies, who need to make a profit on the chip.

1 bigass SOC is definitely more efficient than 2-4 chips with their own packaging.

However, Apple also has an ecosystem-level advantage over the competition and that's their biggest advantage. They designed a CPU that is really great at a more limited set of systems and applications. They can do this because Apple only cares about being competitive in some areas and is ok not giving a poo poo about the rest. They don't have to, they're Apple.

An M1 is not going to compete with Intel & AMD desktop CPUs if you put a big heatsink on it and shove 90 watts down its mouth. And the M1, while having a good GPU that makes it pretty good vs the competition in ultrabook laptops, is not a great gaming CPU. It's wide and shallow and has a massively huge out-of-order window and instruction cache. They had priorities when designing the chip, and "dominate web & javascript benches" was near the top of the list. "Dominate AAA gaming" was not.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Klyith posted:

Huh? M1s are on TSMC 5nm, they're ahead a node over AMD on TSMC 7nm and "Intel 7".

The note is about the rumored TSMC 3nm node, at which point if Apple remains on 5nm they would be "behind."

You're right about the rest: you can make some really quality trade-off choices when you own the entire product stack and can enforce whatever bits you want the way you want because you want to, rather than trying to always produce a somewhat compromised product because it has to work in 1,000 slightly different configurations.

Also Apple has the ability to just tell their customers they're changing arch and yes it'll break a lot of stuff and if you don't like that just get hosed. None of the trying to innovate while also still supporting cruft from 20+ years ago because billion-dollar Fortune 500 companies refuse to modernize their stuff.

LRADIKAL
Jun 10, 2001

Fun Shoe
Is there a good comparison to back up the idea that the M1 is better and worse than Intel at certain things? Obviously, it's currently targeted at low power and has memory very close to the die, but what are the pros/cons other than that? I haven't heard this narrative before.

wargames
Mar 16, 2008

official yospos cat censor

DrDork posted:

The note is about the rumored TSMC 3nm node, at which point if Apple remains on 5nm they would be "behind."

You're right about the rest: you can make some really quality trade-off choices when you own the entire product stack and can enforce whatever bits you want the way you want because you want to, rather than trying to always produce a somewhat compromised product because it has to work in 1,000 slightly different configurations.

Also Apple has the ability to just tell their customers they're changing arch and yes it'll break a lot of stuff and if you don't like that just get hosed. None of the trying to innovate while also still supporting cruft from 20+ years ago because billion-dollar Fortune 500 companies refuse to modernize their stuff.

Apple just completely buys out an advanced node and lets everyone else backfill the node they left.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Klyith posted:

However, Apple also has an ecosystem-level advantage over the competition and that's their biggest advantage. They designed a CPU that is really great at a more limited set of systems and applications. They can do this because Apple only cares about being competitive in some areas and is ok not giving a poo poo about the rest. They don't have to, they're Apple.

An M1 is not going to compete with Intel & AMD desktop CPUs if you put a big heatsink on it and shove 90 watts down its mouth.

lol nah just the opposite

M1 Firestorm cores hold their own in single thread performance against everything from Intel and AMD. But a Firestorm core only needs about 5W at its Fmax while Intel and AMD cores often need 10-15W (or more).

This is one reason why the M1 has made such a big splash. If your load has 8 threads, it will run all 4+4 cores at Fmax, even in a laptop's power budget. The only M1 model which is often forced to reduce clocks below Fmax is the MacBook Air, because it has no fan. Everything with a fan seems able to sustain Fmax on all cores forever.

But 5W/core at Fmax is also great for the desktop. When Apple builds a desktop M1 (M2?) variant with a higher TDP and 8 or 16 Firestorm cores (which they presumably will), it's going to be a monster. They should not need "turbo" like Intel and AMD, where the power budget per core (and therefore clock freq) goes down quite dramatically as the number of active cores goes up.

quote:

And the M1, while having a good GPU that makes it pretty good vs the competition in ultrabook laptops, is not a great gaming CPU. It's wide and shallow and has a massively huge out-of-order window and instruction cache. They had priorities when designing the chip, and "dominate web & javascript benches" was near the top of the list. "Dominate AAA gaming" was not.

lol that you've come up with a mental model of the world where dominating web and javascript workloads somehow cannot translate to other things. If your CPU runs the shittiest least optimized form of code really fast, it's gonna do well at everything. (This is something you can see in Intel's history too.)

An example of M1 doing great at something which isn't web: https://www.jeffgeerling.com/blog/2021/apple-m1-compiles-linux-30-faster-my-intel-i9

The real reason Apple does not and will not dominate gaming has little to do with hardware; it's that their C-suite seemingly doesn't understand that business and hasn't ever been able to put together a coherent strategy for attracting AAA game devs to target their platforms, selling their platforms to the kind of people who play AAA games, etc. Or perhaps they do understand and have never decided to seriously pursue it, same difference.
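
Rough arithmetic on those per-core figures (the 5 W and 10-15 W numbers come from the post above; the 8/16-core desktop part is the post's hypothetical, not a shipping product):

```python
# Scaling the per-core figures quoted above. The 16-core Firestorm desktop
# part is hypothetical; 12 W is just the middle of the quoted 10-15 W range.

FIRESTORM_W_PER_CORE = 5
X86_W_PER_CORE = 12

for cores in (4, 8, 16):
    firestorm_w = cores * FIRESTORM_W_PER_CORE
    x86_w = cores * X86_W_PER_CORE
    print(f"{cores:2d} cores: Firestorm ~{firestorm_w:3d} W vs x86 ~{x86_w:3d} W at max clocks")

# At ~80 W for 16 cores, every core can sit at Fmax inside a normal desktop
# power budget, which is why the post argues such a part wouldn't need the
# turbo/derating behaviour of 150-250 W x86 desktop parts.
```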

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Performance per watt is non-linear, always has been, and always will be. Apple's advantage is that they designed a CPU for the performance and power target they care about, which in the context of modern computing is very low. AMD and Intel both demonstrate clearly how the scaling works: you can underclock and get huge efficiency gains. You can't scale things down as low as the M1 because of things like architecture and process, just as Apple doesn't have an M1 variant that actually competes with high-end, high-power CPUs; but if AMD and Intel were targeting the same relatively low performance as Apple, they would be able to hit the same power efficiency on the same node, and probably vice versa. Apple isn't magically far superior to everyone else in the chip design space.
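
That scaling point is easy to see with a toy cube-law model; it assumes voltage scales roughly linearly with frequency over the usable range, which is a simplification rather than data from any real chip:

```python
# Toy model of why perf/W improves so much when underclocking.
# Assume V ~ f over the usable range, so dynamic power P ~ V^2 * f ~ f^3,
# while performance scales ~ f. A simplification, not chip data.

def relative_power(freq_ratio: float) -> float:
    return freq_ratio ** 3

def relative_perf_per_watt(freq_ratio: float) -> float:
    return freq_ratio / relative_power(freq_ratio)

for ratio in (1.0, 0.9, 0.8, 0.7):
    print(f"clock x{ratio:.1f}: power x{relative_power(ratio):.2f}, "
          f"perf/W x{relative_perf_per_watt(ratio):.2f}")
```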

redeyes
Sep 14, 2002

by Fluffdaddy
The M1 has a huuuge advantage over other procs: dedicated decoders, accelerators, etc. Stuff that, when properly built into the CPU package itself, means you get insane CPU-offload power.
I gotta admit, as a PC user I'm loving jealous as hell... but I'm also not willing to sacrifice having an actual PC I can do whatever the hell I want with.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

wargames posted:

Apple just completely buys out an advanced node and lets everyone else backfill the node they left.

The news was about how that specifically did not happen. Intel bought out the TSMC 3nm node and Apple was left with the old and busted 5nm.

Klyith
Aug 3, 2007

GBS Pledge Week

BobHoward posted:

lol that you've come up with a mental model of the world where dominating web and javascript workloads somehow cannot translate to other things.

lol that you can't see some mild hyperbole without becoming the apple defender.


M1 is great for a whole lot of things that can be loosely grouped under "productivity" -- javascript, compiling code, cinebench, encoding, and many others. It's a great chip for that! Though some of Apple's choices were pretty pointed:

AnandTech posted:

On the cache hierarchy side of things, we’ve known for a long time that Apple’s designs are monstrous, and the A14 Firestorm cores continue this trend. Last year we had speculated that the A13 had 128KB L1 Instruction cache, similar to the 128KB L1 Data cache for which we can test for, however following Darwin kernel source dumps Apple has confirmed that it’s actually a massive 192KB instruction cache. That’s absolutely enormous and is 3x larger than the competing Arm designs, and 6x larger than current x86 designs, which yet again might explain why Apple does extremely well in very high instruction pressure workloads, such as the popular JavaScript benchmarks.

But no, not all tasks are the same and performance is not universal. If it was, then Bulldozer wouldn't have sucked, Vega would be king poo poo of GPUs, we'd have Cell processors in everything, and a descendant of NetBurst or Itanium would be powering Intel's architecture. Being really good at javascript and compiling code does not mean you're equally good at everything. Some things don't benefit from a super-wide design that trades some clockspeed and latency for a ginormous OoO buffer and 4+4 ALUs & FPUs, because they don't fill that width. Among tasks that consumers care about, video games are a prominent example. Some of this comes down to inherent tradeoffs of CPU design that go back a looooong time.


Apple can make that tradeoff. And while Apple's C-suite doesn't give a poo poo about games other than how much they can rake from the iOS store, Tim Cook isn't designing the CPU. I'm sure that Apple's engineers are eager to have an advantage wherever they can get it. If they could have designed a CPU that spanked Intel & AMD in both productivity and games, they would have. If the Rosetta people could make games perform better, they would. If the M1 actually was amazing when you put it under a big desktop cooler and attached a dedicated GPU, they'd have demoed it doing that.
