teagone
Jun 10, 2003

That was pretty intense, huh?

Just give me an Apple TV with an M1 chip for $150 or less so I can play your stupid mobile arcade games on my TV, Apple.

VorpalFish
Mar 22, 2007
reasonably awesome™

Performance doesn't necessarily scale in a linear fashion with power consumption. I know zen2 chips were pushed past the point of optimal efficiency in the name of absolute performance, and I believe zen3 is similar. You can drop PPT a lot and lose relatively little performance.

That's not to say Apple's chips aren't impressive; just that saying zen3 needs 140w to beat it by 40% is maybe painting a worse picture than the reality for AMD.

Edit: other thing to remember when doing efficiency comparisons to zen3 is that the io die accounts for something like 20w of the thermal budget - something they can get away with in desktop space, but probably the reason all their mobile designs are monolithic. That's also going to distort the picture a bit.

VorpalFish fucked around with this message at 21:43 on Dec 31, 2020

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

VorpalFish posted:

Performance doesn't necessarily scale in a linear fashion with power consumption. I know zen2 chips were pushed past the point of optimal efficiency in the name of absolute performance, and I believe zen3 is similar. You can drop PPT a lot and lose relatively little performance.

That's not to say Apple's chips aren't impressive; just that saying zen3 needs 140w to beat it by 40% is maybe painting a worse picture than the reality for AMD.

Edit: other thing to remember when doing efficiency comparisons to zen3 is that the io die accounts for something like 20w of the thermal budget - something they can get away with in desktop space, but probably the reason all their mobile designs are monolithic. That's also going to distort the picture a bit.

What do clocks vs power usage look like on mobile or server Zen 2? I’m assuming those are a lot less stressed than desktop.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



VorpalFish posted:

Performance doesn't necessarily scale in a linear fashion with power consumption. I know zen2 chips were pushed past the point of optimal efficiency in the name of absolute performance, and I believe zen3 is similar. You can drop PPT a lot and lose relatively little performance.

That's not to say Apple's chips aren't impressive; just that saying zen3 needs 140w to beat it by 40% is maybe painting a worse picture than the reality for AMD.

Edit: other thing to remember when doing efficiency comparisons to zen3 is that the io die accounts for something like 20w of the thermal budget - something they can get away with in desktop space, but probably the reason all their mobile designs are monolithic. That's also going to distort the picture a bit.

Yeah, that is true about performance scaling, but even then, if they can surpass Zen 3 (or 3+ really) and Rocket Lake performance by going up to a 40W or 50W TDP envelope and still only use 1/2 to 1/3 the TDP to do so, that is still extremely impressive, especially since that TDP would almost certainly include the GPU within it, similar to the M1's now (which is technically 17W total but usually hits around 10-12W).

And I actually think it's a worse picture for Intel than AMD, because AMD has shown lately to be a bit more flexible in how they approach their designs, and the fact they're using TSMC also. Intel being stuck on using their own fabs, and low-volume 10nm at that, is what is going to hurt them.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

DrDork posted:

Though it'll still be something of an interesting curiosity as long as the M1 is Apple-exclusive and no one else has any comparable ARM systems available for public consumption. It'd be real interesting to see what sort of server systems Apple could put together with them, but so far I haven't heard much about anything in that direction (yet--they'd be insane not to be pursuing it).
There's not that much 5nm capacity; I doubt Apple wants to divert a chunk of it into servers.

Otoh, maybe once 3nm becomes available, they'll keep/use their 5nm capacity for servers and x86 CPUs will be stuck on their current nodes forevermore :v:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

ConanTheLibrarian posted:

There's not that much 5nm capacity; I doubt Apple wants to divert a chunk of it into servers.

Otoh, maybe once 3nm becomes available, they'll keep/use their 5nm capacity for servers and x86 CPUs will be stuck on their current nodes forevermore :v:

The process node designation could be changed to the number of plusses after 14nm :haw:

Each year it increments again

Cygni
Nov 12, 2005

raring to post

priznat posted:

The process node designation could be changed to the number of plusses after 14nm :haw:

Each year it increments again

This is TSMC, so the number will keep going down even if the transistor size is the same.

...which is prolly gonna happen because TSMC already delayed 3nm 6 months, and DigiTimes is reporting today that the delay is likely going to be longer than that. Get ready for multiple years of "4nm"!

VorpalFish
Mar 22, 2007
reasonably awesome™

Twerk from Home posted:

What do clocks vs power usage look like on mobile or server Zen 2? I’m assuming those are a lot less stressed than desktop.

Server also uses a chiplet design although of course the clocks aren't pushed nearly as hard.

It's really hard to tell with mobile because it's impossible to separate the chip from the OEM cooling design. I don't think I've seen any reviewers doing a deep dive of power consumption vs clocks for the APUs.

VorpalFish
Mar 22, 2007
reasonably awesome™

SourKraut posted:

Yeah, that is true about performance scaling, but even then, if they can surpass Zen 3 (or 3+ really) and Rocket Lake performance by going up to a 40W or 50W TDP envelope and still only use 1/2 to 1/3 the TDP to do so, that is still extremely impressive, especially since that TDP would almost certainly include the GPU within it, similar to the M1's now (which is technically 17W total but usually hits around 10-12W).

And I actually think it's a worse picture for Intel than AMD, because AMD has shown lately to be a bit more flexible in how they approach their designs, and the fact they're using TSMC also. Intel being stuck on using their own fabs, and low-volume 10nm at that, is what is going to hurt them.

That's a big if though. I'm skeptical they'll outperform just by scaling to 50w, even with a process advantage.

We'll see though, Apple's CPU design team are most definitely very good.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
How well does an M1 run CS:GO at 1080p Low?

That's the really important part

LRADIKAL
Jun 10, 2001

Fun Shoe
https://www.youtube.com/watch?v=bmuFl-SlV20

I bet you were trying to be funny, but it currently runs ~45fps at 2560x1600. This is, I believe, not compiled natively for the M1, so that's pretty good at this point.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Cygni posted:

This is TSMC, so the number will keep going down even if the transistor size is the same.

...which is prolly gonna happen because TSMC already delayed 3nm 6 months, and DigiTimes is reporting today that the delay is likely going to be longer than that. Get ready for multiple years of "4nm"!

I would still bet on TSMC hitting 3nm well before Intel gets their 7nm node fixed.

VorpalFish posted:

That's a big if though. I'm skeptical they'll outperform just by scaling to 50w, even with a process advantage.

We'll see though, Apple's CPU design team are most definitely very good.

Why are you skeptical though, when the architecture itself is wider, capable of superior, more aggressive branch prediction, has a huge ROB, etc.? They will likely be able to crank the clock speed to some extent given the active cooling systems available, and again, they can also crank core counts in addition to clock speed.

wet_goods
Jun 21, 2004

I'M BAAD!

SourKraut posted:

I would still bet on TSMC hitting 3nm well before Intel gets their 7nm node fixed.


Why are you skeptical though, when the architecture itself is wider, capable of superior, more aggressive branch prediction, has a huge ROB, etc.? They will likely be able to crank the clock speed to some extent given the active cooling systems available, and again, they can also crank core counts in addition to clock speed.

It's funny because we are going to hit a wall in feature size that will require a fundamental shift in design to make progress on. Nanosheets and whatnot, and eventually quantum computers I guess, but it will give Intel a chance to catch up long term.

VorpalFish
Mar 22, 2007
reasonably awesome™

SourKraut posted:

I would still bet on TSMC hitting 3nm well before Intel gets their 7nm node fixed.


Why are you skeptical though, when the architecture itself is wider, capable of superior, more aggressive branch prediction, has a huge ROB, etc.? They will likely be able to crank the clock speed to some extent given the active cooling systems available, and again, they can also crank core counts in addition to clock speed.

I don't think they'll be able to clock high enough in that envelope. Wider generally means it's harder to clock as high.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



VorpalFish posted:

I don't think they'll be able to clock high enough in that envelope. Wider generally means it's harder to clock as high.

Yeah, but the articles discussing the A14 and M1's architecture all seem to believe that the current limits on clock speed Apple has imposed are due to thermal management, since the chips have been in iPhones, iPads, and now cooling-limited Macs, rather than due to the cores being wider. It seems as if the general consensus among analysts has been that they can crank speed and power up and simply put active cooling on it to compensate.

BurritoJustice
Oct 9, 2012

LRADIKAL posted:

https://www.youtube.com/watch?v=bmuFl-SlV20

I bet you were trying to be funny, but it currently runs ~45fps at 2560x1600. This is, I believe, not compiled natively for the M1, so that's pretty good at this point.

I'm actually genuinely interested in this; my Broadwell XPS 13 is a bit of a headache even playing light games like Hearthstone and Civ6 with the fancy 1800p screen. I've got to run at 900p in a few things and the integrated scaling is rough. Gets a bit hot and loud too.

If the M1 can run eSports-level games with some degree of x86 emulation I'd actually seriously consider buying one as a travel/study laptop now that I'm going back to higher studies. Shame Apple doesn't do laptop touchscreens.

I wonder how something newer like Hades would work on it. I've been playing it on my switch the last week and it's incredible.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Tiger Lake Xe graphics should run all the esports stuff pretty well; it's way faster than the Ice Lake or even Vega stuff:



https://www.anandtech.com/show/16323/the-msi-prestige-14-evo-review-testing-the-waters-of-tiger-lake/3

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
Gonna laff if Tiger Lake beats Ryzen mobile because AMD refuses to put Vega out of its goddamn misery

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

mobby_6kl posted:

Tiger Lake Xe graphics should run all the esports stuff pretty well; it's way faster than the Ice Lake or even Vega stuff:



https://www.anandtech.com/show/16323/the-msi-prestige-14-evo-review-testing-the-waters-of-tiger-lake/3

50fps is actually pretty respectable in far cry 5 considering how CPU-heavy it is, even at 768p normal (medium?). There is a lot of power going to the CPU there while gaming too.

I suppose 40fps at 1080p in Tomb Raider is no slouch either.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Ok Comboomer posted:

Apple’s gonna drop those rumored 32 and 64 core high end M-series chips this year and all you corecount queens are gonna be tripping over yourselves to talk about how cores never mattered.

Sure. Just the same way that Intel was tripping all over themselves to go "WELL YOU KNOW, BENCHMARKS AREN'T A GOOD INDICATOR OF REAL WORLD PERFORMANCE..." earlier in 2020, and are gonna go right back to "HA HA LOOK AT OUR BENCHMARKS" when 11-series comes out early 2021.

SwissArmyDruid fucked around with this message at 14:51 on Jan 1, 2021

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Paul MaudDib posted:

50fps is actually pretty respectable in far cry 5 considering how CPU-heavy it is, even at 768p normal (medium?). There is a lot of power going to the CPU there while gaming too.

I suppose 40fps at 1080p in Tomb Raider is no slouch either.

Tomb Raider is pretty old at this point and gets dragged out specifically because Apple keeps showing "look how well tomb raider runs" even though it's less relevant every year. The engine also seems to like different GPUs than most games.

That far cry 5 result is super impressive. 720-900p medium settings is basically console level performance, except it's high 50s rather than a locked 30, which is much better. I'm optimistic about Xe. Intel hasn't had such a huge bump since Broadwell.

I'm going to go looking at the full review for UE4 performance out of curiosity.

Edit: No UE4 games, and they call that far cry 5 result "on the cusp of playable," ha. I've gamed happily on whatever Broadwell's iGPU was, and before that managed to play Starcraft 2 and a lot more on an AMD bobcat netbook. 50fps on medium at 768 would probably have me turning up to high or 900p.

Twerk from Home fucked around with this message at 14:33 on Jan 1, 2021

Arzachel
May 12, 2012

gradenko_2000 posted:

Gonna laff if Tiger Lake beats Ryzen mobile because AMD refuses to put Vega out of its goddamn misery

If reusing the iGPU let them pull in Cezanne by a quarter or two, I feel that's completely worth it.

Khorne
May 1, 2002

gradenko_2000 posted:

Gonna laff if Tiger Lake beats Ryzen mobile because AMD refuses to put Vega out of its goddamn misery
Switching off of Vega wouldn't do much with current gen stuff. DDR5 should see massive igpu performance uplift for AMD's architecture in particular.

Khorne fucked around with this message at 16:43 on Jan 1, 2021

spunkshui
Oct 5, 2011



Perplx posted:

My m1 at 3.2Ghz is faster than my 9900k at 5ghz, so I'm not worried about clock speed, plus apple will have first dibs on the latest tsmc node for the foreseeable future.

This breaks my brain.

I have a 9600K at 5 GHz and I need to strap a loving water cooler to it.

Meanwhile this thing is chilling in a goddamn laptop without a fan.

But is the Apple CPU really just that fast or is a system on a chip really just that much better?

Are we going to need to start shopping for system-on-a-chip gaming rigs 5 years from now?

Would be pretty easy to cool them with an AIO.

KKKLIP ART
Sep 3, 2004

It’s a combo of their architecture, years of improvements that they can pull from the mobile side, and their software. They control it all from top to bottom.

BlankSystemDaemon
Mar 13, 2009



Yeah, it turns out that being able to micro-optimize both hardware and software to work together gives pretty good results. Who would've thunk.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

spunkshui posted:

This breaks my brain.

I have a 9600K at 5 GHz and I need to strap a loving water cooler to it.

Meanwhile this thing is chilling in a goddamn laptop without a fan.

But is the Apple CPU really just that fast or is a system on a chip really just that much better?

Are we going to need to start shopping for system-on-a-chip gaming rigs 5 years from now?

Would be pretty easy to cool them with an AIO.

I'd actually argue that the Intel chips are SoCs too: the memory controller, PCIe, and a ton of things that used to be "chipset" are part of Intel CPUs now.

Apple's CPUs are outstanding because they've been dumping huge R&D into it for a decade now, they control the OS as well, and they are just willing to use way more silicon on their chips than Intel is.

Apple's CPUs are absolutely huge, and if sold at retail would have to sell for a lot. Apple doesn't sell CPUs though, they sell phones and PCs. Intel sells CPUs, and has to sell them at a profit. Just for comparison, the last Intel chip that I can reliably find a transistor count for is the i7-6700K, at ~1.75 billion transistors. Assuming that the 9900K is basically two of those glued together, it would land around 3.5 - 4 billion transistors. Meanwhile, the A14 CPU they're shipping in phones is 11.8 billion transistors: https://www.tomshardware.com/news/apple-a14-bionic-revealed. The M1 is bigger. So, if you want to know what's up: Apple has huge caches, and is just throwing hardware at the problem in a way Intel is unwilling to do.

spunkshui
Oct 5, 2011



But intel is a fab.

They have much better margins on slinging pure silicon than basically anyone. Everyone else relies on another company to fab.

I think the main problem with just using larger chips is that it's easier to have poor yields.

AMD solved this with chiplets right?

I wonder if intel will ever do something similar.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

spunkshui posted:

But intel is a fab.

They have much better margins on slinging pure silicon than basically anyone. Everyone else relies on another company to fab.

I think the main problem with just using larger chips is that it's easier to have poor yields.

AMD solved this with chiplets right?

I wonder if intel will ever do something similar.

AMD and Apple are thrilled to be relying on another company to fab, because TSMC is offering a better process than anything Intel has ever shipped as a product.

Intel's fat profit margins have become a weakness. The company needs them to keep investors happy going forward, but they no longer have the best manufacturing process, nor the best computing performance outright or per watt, which is related to process but not purely a result of process.

When your product isn't the best at anything, how do you justify charging huge premiums allowing fat profit margins? If your competitors have designs that are just as good, and processes that are better, and your competitors are able to survive and be a successful company on slimmer profit margins, where does that leave Intel? RIM's most profitable year was 2011, when the writing was on the wall that Blackberry had lost. That's the year the iPhone 4S came out. Intel is facing an existential crisis, but they'll get fat profit margins on their way out of relevance for sure.

cerious
Aug 18, 2010

:dukedog:

spunkshui posted:

But intel is a fab.

They have much better margins on slinging pure silicon than basically anyone. Everyone else relies on another company to fab.

I think the main problem with just using larger chips is that it's easier to have poor yields.

AMD solved this with chiplets right?

I wonder if intel will ever do something similar.

Lakefield is already a chiplet-based product.

Another advantage to chiplets is being able to outsource components to other fabs to free up your own fab capacity. Not everything has to be on the same process.

VorpalFish
Mar 22, 2007
reasonably awesome™

Twerk from Home posted:

I'd actually argue that the Intel chips are SoCs too: the memory controller, PCIe, and a ton of things that used to be "chipset" are part of Intel CPUs now.

Apple's CPUs are outstanding because they've been dumping huge R&D into it for a decade now, they control the OS as well, and they are just willing to use way more silicon on their chips than Intel is.

Apple's CPUs are absolutely huge, and if sold at retail would have to sell for a lot. Apple doesn't sell CPUs though, they sell phones and PCs. Intel sells CPUs, and has to sell them at a profit. Just for comparison, the last Intel chip that I can reliably find a transistor count for is the i7-6700K, at ~1.75 billion transistors. Assuming that the 9900K is basically two of those glued together, it would land around 3.5 - 4 billion transistors. Meanwhile, the A14 CPU they're shipping in phones is 11.8 billion transistors: https://www.tomshardware.com/news/apple-a14-bionic-revealed. The M1 is bigger. So, if you want to know what's up: Apple has huge caches, and is just throwing hardware at the problem in a way Intel is unwilling to do.

Partly unable - I believe the 10900k is a larger die than the M1. Transistor density is a real issue as long as they're stuck using 14nm for desktop.

I mean I guess they could go for a GPU-sized die but that'd be pretty pricey and you wouldn't be able to make many.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Twerk from Home posted:

When your product isn't the best at anything, how do you justify charging huge premiums allowing fat profit margins? If your competitors have designs that are just as good, and processes that are better, and your competitors are able to survive and be a successful company on slimmer profit margins, where does that leave Intel? RIM's most profitable year was 2011, when the writing was on the wall that Blackberry had lost. That's the year the iPhone 4S came out. Intel is facing an existential crisis, but they'll get fat profit margins on their way out of relevance for sure.

You happen to have a fab network that can produce chips at volume on a node "good enough" for the vast majority of applications in an environment where everyone is clamoring for more chips and the leading-node fabs simply can't produce enough to supply more than a fraction of demand, that's how.

RIM's issue was a refusal to embrace touchscreens when everyone else saw it was an objectively better interface choice, leading to a series of lovely phones with basically no reason for anyone to purchase them--bad prices, bad features, bad weight, bad screen sizes, bad everything in a marketplace where "brand loyalty" didn't exist and the only lock-ins were for some corporate buyers.

Intel gets the benefit of the simple fact that not every chip sold needs to be the most cutting-edge deal, and that there's a poo poo ton of money to be made selling "good enough" chips for low-cost devices. They also still have an enormous cash-cow that is the enterprise market, where support and long-term contracts are often seen as more compelling than whether one chip is more efficient than another. They might not be the "best at anything," but that they're purchasable at all in market segments that AMD can't service in any sort of actual volume is reason enough to move product.

Even if, for argument's sake, Intel never re-takes the node lead from TSMC, and forever trails AMD's designs by 5-10% for whatever reason, Intel as a company is still likely to do just fine based on volume sales. The real long-term worry for them should be NVidia + ARM, Amazon + ARM, and Google + ARM, not AMD or Apple. e; though Apple showing that ARM-based products are actually viable for end-users is gonna speed up things, even if they aren't likely to directly compete in most areas.

DrDork fucked around with this message at 20:43 on Jan 1, 2021

spunkshui
Oct 5, 2011



cerious posted:

Lakefield is already a chiplet-based product.

Another advantage to chiplets is being able to outsource components to other fabs to free up your own fab capacity. Not everything has to be on the same process.

Cool, love to see competition.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Khorne posted:

Switching off of Vega wouldn't do much with current gen stuff. DDR5 should see massive igpu performance uplift for AMD's architecture in particular.

RDNA has better memory compression so it gets you more bandwidth out of DDR4.

SwissArmyDruid
Feb 14, 2014

by sebmojo

cerious posted:

Lakefield is already a chiplet-based product.

Another advantage to chiplets is being able to outsource components to other fabs to free up your own fab capacity. Not everything has to be on the same process.

Or even the same substrate, for that matter!

If there is something that I would expect Intel to do, it's to examine using non-silicon materials for the IO die as a test bed for mass production and future viability at smaller transistor sizes.

SwissArmyDruid fucked around with this message at 00:06 on Jan 2, 2021

Gwaihir
Dec 8, 2009
Hair Elf

spunkshui posted:

This breaks my brain.

I have a 9600K at 5 GHz and I need to strap a loving water cooler to it.

Meanwhile this thing is chilling in a goddamn laptop without a fan.

But is the Apple CPU really just that fast or is a system on a chip really just that much better?

Are we going to need to start shopping for system-on-a-chip gaming rigs 5 years from now?

Would be pretty easy to cool them with an AIO.

That 9900k is on an ancient 14nm process, while the M1 is on a cutting-edge one: TSMC claims 5nm is 30% better than their 7nm process, which itself is already much better than Intel's 14nm process.

The other factor as noted is that the M1 is gargantuan compared to the Intel chip in terms of transistor counts. They're using all those extra transistors for much more/larger caches, etc, all things that are great for performance but which reduce the profitability of the chip because you get fewer per wafer. Apple doesn't give a poo poo about that because they're not selling CPUs though.

movax
Aug 30, 2008

DrDork posted:

Intel gets the benefit of the simple fact that not every chip sold needs to be the most cutting-edge deal, and that there's a poo poo ton of money to be made selling "good enough" chips for low-cost devices. They also still have an enormous cash-cow that is the enterprise market, where support and long-term contracts are often seen as more compelling than whether one chip is more efficient than another. They might not be the "best at anything," but that they're purchasable at all in market segments that AMD can't service in any sort of actual volume is reason enough to move product.

Even if, for argument's sake, Intel never re-takes the node lead from TSMC, and forever trails AMD's designs by 5-10% for whatever reason, Intel as a company is still likely to do just fine based on volume sales. The real long-term worry for them should be NVidia + ARM, Amazon + ARM, and Google + ARM, not AMD or Apple. e; though Apple showing that ARM-based products are actually viable for end-users is gonna speed up things, even if they aren't likely to directly compete in most areas.

As I understand it (which is probably wrong, as I don't actually work in the HPC/butt side of things on the HW side), while the fab lead was being pissed away / tossed away by that side of the Intel business, the logic designers would spend time implementing features / almost ASIC-like things (spending die area / instructions on things the biggest of big customers cared about) + the strategic roadmap (i.e., AVX-512), all in favor of the enterprise customers buying the top-end Xeons, and desktop/mobile was "handled" as a side-effect + the process lead keeping power down. No reason for them to give a poo poo about IPC / % increase in performance for Dell OptiPlexes or whatever doctor's office receptionist PC line of products, because it didn't matter after a point. Like we're fond of discussing in this thread, really anything after and including Sandy Bridge can still perform all required desktop computing tasks in 2020 as it did in 2011. The Ivy, Haswell, and Skylake improvements were IMO more useful on the RAM side, and then their chipsets piggy-backed along and introduced more/better USB 3.x support, and NVMe support as a result of BIOS updates. QuickSync / iGPU stuff kept getting better, which is more of a heterogeneous compute argument than anything else, because that poo poo is unrelated to the CPU core itself and again more a function of being able to fit more logic on the same die.

M1 certainly proves the case for desktop-class non-x86 devices becoming more popular; those legions of cheap-rear end OptiPlexes could be replaced if there was a compelling, seamless Win10 + Rosetta-style translator + ARM SoC solution, but AFAIK that doesn't exist yet and I don't know if it will in the short term. I mean, if MSFT could whip everyone around, they could create some kind of pseudo-SBC-like form factor out of someone's high-end ARM SoC + their Win10 ARM port w/ a translator that doesn't actually suck, let everyone just integrate that into a monitor housing, and start selling that to businesses for $500/pc as their "business desktop" for the foreseeable future. Right now though, that market is still cornered by the big OEMs' Pentium/i3 boxen.

Like you said though, if Amazon, the big butt cos, and gov't all stop giving a poo poo about Xeon Platinums/whateverthefuck and actually do move to the many-core ARM designs, that's where the profit margin impacts start to hurt, I think.

What does their 10-K break down revenue by? Might be an interesting look to see if they disclose what's going on there.

Gwaihir posted:

That 9900k is on an ancient 14nm process, while the M1 is on a cutting-edge one: TSMC claims 5nm is 30% better than their 7nm process, which itself is already much better than Intel's 14nm process.

The other factor as noted is that the M1 is gargantuan compared to the Intel chip in terms of transistor counts. They're using all those extra transistors for much more/larger caches, etc, all things that are great for performance but which reduce the profitability of the chip because you get fewer per wafer. Apple doesn't give a poo poo about that because they're not selling CPUs though.

I forget, did they share the core voltage of the M1? At a basic level (ignoring leakage/static v. dynamic/etc), power scales linearly with frequency but with the square of voltage, so dropping voltage is a huge lever for power dissipation. Shrinking process nodes should also (IIRC, I don't know if some kind of weird JT-esque inversion of a trendline happens when we got below 10 nm) drop capacitance which helps out.
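
To make that concrete, here's a toy Python sketch of that C*V^2*f relation (the 10% numbers are made up for illustration, not measured M1 or Zen values):

# Dynamic power scales roughly as capacitance * voltage^2 * frequency.
def dynamic_power(c, v, f):
    return c * v ** 2 * f

baseline = dynamic_power(1.0, 1.0, 1.0)
# Hypothetical: drop voltage and frequency by 10% each.
reduced = dynamic_power(1.0, 0.9, 0.9)
print(reduced / baseline)  # ~0.73, i.e. ~27% less power for ~10% less clock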

And of course, hyper-optimizing your hardware to match with your software and knowing where and how to spend die-area (cache, etc.) + kind of having to get it right also helps out. It's not magic, it's just a very, very well-resourced organization that started putting together the pieces for this almost a decade ago and I respect the hell out of them + am insanely jealous they got to pull it off. Politics and people are usually the hard parts of a technical problem, not the actual technical part, and Apple simply has the weight to swing around to make poo poo line up.

They got their silicon design (M1 itself, CPU uArch, etc.) + mixed-signal (I'll lump in all the PMICs and such here... remember Apple can simply go to TI and ask for a custom PMIC, more-or-less) + hardware (mobo, chassis, etc.) + software (kernel, OS) + application software (Safari, etc.) people all under the same tent to march to the same tune and that combination just kills it.

movax fucked around with this message at 02:22 on Jan 2, 2021

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



On the Microsoft front at least, I thought I saw that they're planning on bringing silicon development in-house and working on their own custom SoC design so that they don't have to partner with Qualcomm to do so.

Granted it's Microsoft and they seem to have half-assed the last couple iterations of Windows on ARM, but if Microsoft ends up seriously throwing money at their own SoC, there's no reason they couldn't (long term) effectively do something similar to what Apple has been doing. I don't think it'd be quite as successful because I think they'd still try to provide some form of legacy support and open hardware support, but even then, they could definitely pivot away from Intel even more.

That's not to even speak of nVidia and their acquisition of ARM, which sets them up to go directly at Intel in the enterprise market longterm.

I don't know, personally it would be nice to knock Intel completely off their smug ivory tower and let them actually struggle across the board.

movax
Aug 30, 2008

SourKraut posted:

On the Microsoft front at least, I thought I saw that they're planning on bringing silicon development in-house and working on their own custom SoC design so that they don't have to partner with Qualcomm to do so.

Granted its Microsoft and they seem to have half-assed the last couple iterations of Windows on ARM, but if Microsoft ends up seriously throwing money at their own SoC, there's no reason they couldn't (long term) effectively do something similar to what Apple has been doing. I don't think it'd be quite as successful because I think they'd still try to provide some form of legacy support and open hardware support, but even then, they could definitely pivot away from Intel even more.

That's not to even speak of nVidia and their acquisition of ARM, which sets them up to go directly at Intel in the enterprise market longterm.

I don't know, personally it would be nice to knock Intel completely off their smug ivory tower and let them actually struggle across the board.

I feel like (anecdotal + knowing some of the HW folks that worked on Surface stuff) even their attempts at x86 Surface notebooks weren't as awesome as they could have been, because somehow, they still managed to only do about as well as Dell and the usual OEMs in terms of getting a seamless HW+SW experience going. Maybe they've gotten better now + learned from their mistakes, but you really, really, really have to own the entire stack in house and set your SW + HW guys right next to each other in the same building.

BobHoward
Feb 13, 2012


VorpalFish posted:

Performance doesn't necessarily scale in a linear fashion with power consumption. I know zen2 chips were pushed past the point of optimal efficiency in the name of absolute performance, and I believe zen3 is similar. You can drop PPT a lot and lose relatively little performance.

That's not to say Apple's chips aren't impressive; just that saying zen3 needs 140w to beat it by 40% is maybe painting a worse picture than the reality for AMD.

Edit: other thing to remember when doing efficiency comparisons to zen3 is that the io die accounts for something like 20w of the thermal budget - something they can get away with in desktop space, but probably the reason all their mobile designs are monolithic. That's also going to distort the picture a bit.

I don't think it's a worse-than-real picture at all. Here are some measurements done by an ARM server CPU startup which may clarify things. (note, they corrected for baseline idle power, so what you're seeing here is reasonably close to the power used by a single core to run a ST benchmark)

https://nuviainc.com/blog/performancedeliveredanewway

There's no data for A14/M1 and Zen 3 since this was done before those products launched, but they did test A13 and laptop Zen 2, which is an interesting point of comparison since those are both TSMC 7nm and are architecturally similar to their successors. Here's a link to the specific power vs performance graph I want to highlight. If you use a paint program to draw horizontal and vertical lines in order to see where they intersect the A13 and Zen 2 curves, the contrast is stark.

- Horizontal lines (constant performance): Zen 2 needs ~4x power for the same performance
- Vertical lines (constant power): A13 delivers between ~2.25x and ~1.75x the performance at the same power

Those ratios are substantial.
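
If you'd rather script that comparison than eyeball it in a paint program, here's a rough Python sketch of the same horizontal/vertical-line reading; the curve points are invented placeholders chosen to roughly echo the ratios above, not Nuvia's actual A13 / Zen 2 data:

import numpy as np

# Hypothetical (per-core power in W, single-thread score) curve points.
chip_a = np.array([(1.0, 40.0), (2.0, 55.0), (4.0, 70.0), (6.0, 80.0)])
chip_b = np.array([(4.0, 40.0), (8.0, 55.0), (16.0, 70.0), (24.0, 80.0)])

def power_at_perf(curve, perf):
    # "Horizontal line": power this curve needs to hit a given score.
    return np.interp(perf, curve[:, 1], curve[:, 0])

def perf_at_power(curve, power):
    # "Vertical line": score this curve reaches at a given power.
    return np.interp(power, curve[:, 0], curve[:, 1])

print(power_at_perf(chip_b, 70.0) / power_at_perf(chip_a, 70.0))  # iso-performance power ratio (4.0 here)
print(perf_at_power(chip_a, 4.0) / perf_at_power(chip_b, 4.0))    # iso-power performance ratio (1.75 here)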

SourKraut posted:

Yeah, but the articles discussing the A14 and M1's architecture all seem to believe that the current limits on clock speed Apple has imposed are due to thermal management, since the chips have been in iPhones, iPads, and now cooling-limited Macs, rather than due to the cores being wider. It seems as if the general consensus among analysts has been that they can crank speed and power up and simply put active cooling on it to compensate.

I doubt Firestorm (the A14/M1 performance core) clocks will get much higher in a desktop part. They already have active cooling in two of the three M1 Macs, and the mini's HSF is very overspecced since they reused the old Intel Mini HSF designed for 65W i7 chips.

Apple's actually made measuring M1 CPU core power under load quite easy compared to the effort Nuvia had to go to: they ship a command-line tool which will tell you frequencies and CPU cluster power use. What we know thanks to that is that M1's power management is set up to allow a CPU TDP of about 24W (a small amount of which is used by the Icestorm efficiency cores, so it works out to about 5.5W per Firestorm core). If Apple could raise clocks, they could've done so in the M1 by using a DVFS policy similar to Intel Turbo: let Firestorm boost to 10W or so when only one or two cores are active. But the observed behavior is that M1 Firestorm runs at 3.2 GHz regardless of whether there's one, four, or eight active threads. Downclocking is only done to keep temps under control in the passively cooled Air, or to reduce power consumption when cores aren't at full load.
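
For anyone who wants to poke at this themselves, here's a minimal Python sketch; I'm assuming the command-line tool in question is the powermetrics utility bundled with macOS (needs sudo, Apple Silicon only), and the parsing is deliberately naive since the exact output format can vary between macOS versions:

import re
import subprocess

# Take one 1-second sample of the CPU power counters via macOS's powermetrics.
out = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "1"],
    capture_output=True, text=True, check=True,
).stdout

# Print any line mentioning cluster or CPU power (e.g. the E-cluster / P-cluster figures).
for line in out.splitlines():
    if re.search(r"(cluster|cpu) power", line, re.IGNORECASE):
        print(line.strip())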

I don't think it matters either. Firestorm performance at 3.2 GHz is extremely good, and the low power at that frequency means they should be able to build things like, say, a 16+4-core desktop CPU with a TDP around 100W where the cores can run at full speed even with all of them active. (Rough sanity check using the ~5.5W-per-Firestorm figure above: 16 x 5.5W is about 88W for the performance cores, which leaves headroom for the efficiency cores and the rest of the SoC within ~100W.)
