LRADIKAL
Jun 10, 2001

Fun Shoe
That was a lot of talk that didn't actually address your seemingly imaginary idea that the M1 was built with tradeoffs other than the already-mentioned frequency/TDP limits.

It seems to me, like someone said above, that all the cache and customizations that Apple added have few if any downsides other than BOM cost, which Apple is easily able to make up elsewhere in their hardware and software products.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Yes, Intel and AMD are entirely operated by loving morons and are just leaving enormous amounts of performance on the table for no good reason. Meanwhile, Apple built out of nowhere a CPU design group that leapt 10 years past the entire rest of the world. That's definitely the most sane and likely conclusion.

Apple also is not stupid, so they optimize everything they make for their specific use cases, since they're partially vertically integrated and only designing for themselves. AMD and Intel are designing architectures for everything from laptops to workstations to gaming to servers. Engineering isn't a quality slider, it mostly IS making compromises to optimize for a specific goal or set of goals. An F650 and a Prius are not meant to be the same, and judging them solely by their crash test scores would be pointless, unless not dying is your only goal.

hobbesmaster
Jan 28, 2008

K8.0 posted:

Yes, Intel and AMD are entirely operated by loving morons and are just leaving enormous amounts of performance on the table for no good reason. Meanwhile, Apple built out of nowhere a CPU design group that leapt 10 years past the entire rest of the world. That's definitely the most sane and likely conclusion.

I mean it sure seems to be true below 65W! :v:

shrike82
Jun 11, 2005

Gaming and scientific computing are arguably the two most common use-cases for a fast desktop/workstation CPU and Apple hasn't shown much interest in either so :shrug:

WhyteRyce
Dec 30, 2001

K8.0 posted:

Yes, Intel and AMD are entirely operated by loving morons and are just leaving enormous amounts of performance on the table for no good reason. Meanwhile, Apple built out of nowhere a CPU design group that leapt 10 years past the entire rest of the world. That's definitely the most sane and likely conclusion.

Apple also is not stupid, so they optimize everything they make for their specific use cases, since they're partially vertically integrated and only designing for themselves. AMD and Intel are designing architectures for everything from laptops to workstations to gaming to servers. Engineering isn't a quality slider, it mostly IS making compromises to optimize for a specific goal or set of goals. An F650 and a Prius are not meant to be the same, and judging them solely by their crash test scores would be pointless, unless not dying is your only goal.

turning a big dial that says CACHE on it and constantly looking back at the audience for approval

LRADIKAL
Jun 10, 2001

Fun Shoe

K8.0 posted:

Yes, Intel and AMD are entirely operated by loving morons and are just leaving enormous amounts of performance on the table for no good reason. Meanwhile, Apple built out of nowhere a CPU design group that leapt 10 years past the entire rest of the world. That's definitely the most sane and likely conclusion.

Apple also is not stupid, so they optimize everything they make for their specific use cases, since they're partially vertically integrated and only designing for themselves. AMD and Intel are designing architectures for everything from laptops to workstations to gaming to servers. Engineering isn't a quality slider, it mostly IS making compromises to optimize for a specific goal or set of goals. An F650 and a Prius are not meant to be the same, and judging them solely by their crash test scores would be pointless, unless not dying is your only goal.

Are you... crying? You certainly didn't address my points. Your first few sentences are right on, btw, those companies each spent years loving up big time.

I'll reiterate: the M1s appear superior in almost all domains. They are, however, packing a huge chunk of cache, which makes them much more expensive. AMD has released info on stacking cache on their Ryzen parts; we'll see what the price and performance deltas are when those go on sale.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
gonna buy the 2022 imac 24" as my ultimate escape from tarkov machine, im ready apple les get it!!!!!

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

shrike82 posted:

Gaming and scientific computing are arguably the two most common use-cases for a fast desktop/workstation CPU and Apple hasn't shown much interest in either so :shrug:

... what is Apple interested in having their computers doing, then? genuine question

Cygni
Nov 12, 2005

raring to post

gradenko_2000 posted:

... what is Apple interested in having their computers doing, then? genuine question
Well first you have to define "computer". :v:

Hardcore desktop gaming is a tasty and profitable market, but it is also insanely volatile and fickle. Just ask Intel. And depending on how you measure things, Apple already owns a huge slice of "gaming". Depending on which market research group you ask, mobile gaming is already more popular and profitable than all other gaming platforms combined, and iOS has a much higher rate of in-app spending than Android.

Devices for users who do either no gaming or only puzzle/light/esports gaming combine to make up a huge percentage of the home "computer" market, and that is squarely where Apple has targeted its designs.

With all that said, I do think the Apple of today is more likely to make some monster iGPU version of their stuff and go after hardcore gamers than they are to try to go hard on the business and server markets they used to play in long ago.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

gradenko_2000 posted:

... what is Apple interested in having their computers doing, then? genuine question

Web browsing, javascript, and syncing nicely with an iPhone is what I see it used for by 90% of people who own one. The remaining 10% are your digital artists and developers and such.

Not to say it won't also compile those linux ISOs nice and fast, but "good enough" speed with hilarious battery life is a slam dunk win for most of the people who actually buy their laptops.

e; plus, as noted above, the M1 is another step in the line of getting a unified arch on everything from your iPhone to your iPad to your laptop to your desktop, so you can run those mobile games anywhere and everywhere and give Apple that 30% cut for your lovely gacha rolls.

K8.0 posted:

Yes, Intel and AMD are entirely operated by loving morons and are just leaving enormous amounts of performance on the table for no good reason. Meanwhile, Apple built out of nowhere a CPU design group that leapt 10 years past the entire rest of the world. That's definitely the most sane and likely conclusion.

I mean...yeah? Intel is on 14nm++++++++ because they have repeatedly made some very bad decisions and squandered their previously unassailable dominance by basically not moving forward meaningfully in 5+ years. AMD is the reverse case: they barely kept the lights on during the Bulldozer era and such, and are only now digging back out of it by catching up to Intel. And both of them are suffering from the reality that the x86_64 platform is hemmed in by the need to support ancient poo poo. I'm sure either company could clean-slate a very interesting and compelling processor, but they won't do it without having the equivalent of Rosetta available to make it so people would actually buy it.

Apple played to all its strengths with the M1: they invested gobs of cash into buying up smart people to design them a chip from the ground up that hit exactly what they wanted out of it, they had the software people already in-house to build the translation layer that would prevent them from having to do yet another "lol none of your old programs work on this new thing" deal, they spent even LARGER gobs of cash buying out 5nm from TSMC, and since they get to package it up in a product with a hilarious profit margin to begin with, they didn't have to worry much about how much the chip itself cost.

Credit where it's due, it's a great chip and you would never see it come out of Intel or AMD for exactly the above reasons.

DrDork fucked around with this message at 04:21 on Aug 14, 2021

shrike82
Jun 11, 2005

their "PC" segment is pretty much vestigial - if i had to guess, i'd say they'll support it to the point where people can develop Apple-facing software

shifting to the M1 chips doesn't suggest a new renaissance for Apple PCs but rather streamlining them so they can just plop tech developed first for their phones and tablets over to the PCs

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, less "renaissance" and more "your options for an iPhone size now includes 13" and 16" formats with attached keyboards."

The actual desktop lines seem like just some strange purgatory where they put junior designers or something so they don't impact a product that they actually care about.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

DrDork posted:

And both of them are suffering from the reality that the x86_64 platform is hemmed in by the need to support ancient poo poo Windows.

FTFY, except that both versions actually say the exact same thing.

Mac and iOS users don't have to give a poo poo about legacy, because Apple has and will put in the work on the system side to make migrations close to painless, or even invisible where possible.

Linux users (and here I really mean "datacenter operators", who are the silent and invisible 80,000 pound gorillas in the room -- individual Linux users as a slice of x86_64 users are but noise in anyone's sales numbers) don't give too much of a poo poo because the kernel and compilers will be ported to anything halfway interesting; it's just a question of how fast and completely it can happen due to documentation quality. Hyperscalers have been writing their own drivers, etc., for a long time, when needed. They're functionally able to jump ship as soon as something with a better ROI actually comes along... but x86 has been the broad-spectrum ROI king of the hill for a long time.

It's really Microsoft who are now pinned to x86_64 (and vice versa) because of MS's refusal to commit to moving their ecosystem forward in a meaningful, cohesive, top-to-bottom way, as Apple has. I think their longstanding (and formerly justified) hubris that the CPU market would cater to them has led to them being caught rather flat-footed in a world where that market might actually be volatile with respect to architecture, and software portability is more than just a topic for research papers from the 1970s.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Klyith posted:

lol that you can't see some mild hyperbole without becoming the apple defender.

M1 is great for a whole lot of things that can be loosely grouped under "productivity" -- javascript, compiling code, cinebench, encoding, and many others. It's a great chip for that! Though some of Apple's choices were pretty pointed:

frankly it's actually the exact opposite, watching people spazz out about how M1 isn't actually that good and you can't buy them as a stand-alone chip is a pretty great indicator of people who can't see a single good thing said about apple without seeing red and spazzing the gently caress out about all the ways apple sucks.

https://www.anandtech.com/show/16192/the-iphone-12-review/2

M1 is an absolutely fantastic core. It's on technical par with AMD or Intel at minimum, there's a very solid case to be made that it's the best core on the market architecturally even accounting for the node lead. If you could buy a (Cavium Thunder-style) server full of M1 cores on Epyc-style chiplets it would own. Huge performance at a tiny wattage.

the "optimized for javascript" thing largely comes down to one instruction afaik - it's a float to int instruction or something like that because Javascript stores everything as a float by default and it's also a weird non-standard float, or something like that.

they also implement a "more restrictive" (TSO-style) memory consistency mode to match x86 behavior, vs the usual looser ARM ordering, which helps x86 emulation performance a ton. But even without that (and especially running native binaries) it's a super fast processor.
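For the curious, here's a rough C sketch (purely my own illustration, not ARM's or Apple's code) of the JavaScript-style double-to-int32 conversion that the dedicated instruction (FJCVTZS, per the ARM docs linked a few posts down) collapses into a single op. The awkward part is the truncate-then-wrap-modulo-2^32 behavior, which a plain C cast can't express without undefined behavior:

code:

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy model of JavaScript's ToInt32 (roughly what FJCVTZS does in one
 * instruction): truncate toward zero, wrap modulo 2^32, reinterpret the
 * low 32 bits as signed. A plain (int32_t)d cast is undefined behavior
 * in C when d is out of range, so a JIT without the instruction has to
 * emit a slower multi-op sequence with range checks instead. */
static int32_t js_to_int32(double d)
{
    if (isnan(d) || isinf(d))
        return 0;                              /* JS maps NaN and +/-Inf to 0 */
    double m = fmod(trunc(d), 4294967296.0);   /* truncate, then mod 2^32 */
    if (m < 0)
        m += 4294967296.0;
    uint32_t u = (uint32_t)m;
    int32_t r;
    memcpy(&r, &u, sizeof r);                  /* reinterpret as two's complement */
    return r;
}

int main(void)
{
    printf("%d\n", js_to_int32(-3.9));         /* -3 */
    printf("%d\n", js_to_int32(4294967301.0)); /* 5: wrapped around mod 2^32 */
    printf("%d\n", js_to_int32(2147483648.0)); /* -2147483648 */
    return 0;
}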

WhyteRyce posted:

turning a big dial that says CACHE on it and constantly looking back at the audience for approval

also "execution unit count" / "decode width"

yeah they do accelerate a lot of specific stuff but that's more or less the trifecta of M1 right? shitloads of cache (including a comparative ton of instruction cache), super wide execution units, super wide frontend (decode, etc) to keep everything saturated. it just costs a lot of die area. But it's all a very sensible combination of things to improve general IPC (not limited to JavaScript).

but Apple also does things like blow a shitload of die area on other stuff. M1 has onboard 10gbe. M1 has onboard dual thunderbolt. that's a ton of PHY that is not going to scale well through shrinks.

between the cache and the execution units and decode and the accelerators and the IO, M1 is the :homebrew: of processor design and it's kind of glorious. It's insane performance for what it is and it's kinda wild.

I'm curious what the (rumored) upcoming ~32+8 core HEDT processor for the Mac Pro is going to be like, especially what IO capabilities they threw in. But I wouldn't put it past Apple to have a "number goes down" generation.

Paul MaudDib fucked around with this message at 06:19 on Aug 15, 2021

LRADIKAL
Jun 10, 2001

Fun Shoe
Ha! I even had the thought "I hope Paul shows up to back up the non-idiots". My prayer was answered. No loving poo poo the M1s are good! Sorry Klyith and K8.0, y'all wrong.

repiv
Aug 13, 2009

Paul MaudDib posted:

the "optimized for javascript" thing largely comes down to one instruction afaik - it's a float to int instruction or something like that because Javascript stores everything as a float by default and it's also a weird non-standard float, or something like that.

that's also not an M1 specific thing, it's part of the core ARM v8.3 ISA

https://developer.arm.com/documentation/dui0801/g/A64-Floating-point-Instructions/FJCVTZS

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.
I’d love to hear someone compare the M1 to Power9, both in terms of relative performance and to compare their respective embraces of instruction-level versus thread-level parallelism. They are built on wildly different processes and for different markets, but it would still be illuminating.

Hasturtium fucked around with this message at 03:56 on Aug 15, 2021

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
I may not know much about architecture beyond Inside the Machine, but what I have experienced in the past is nerds getting too horny for The Future of Silicon multiple times. It happened before Netburst, it happened with VLIW stuff, it happened before Bulldozer.

Apple having more money to throw at R&D does not mean they get to break the laws of physics. Reality will always find a way to gently caress over the little magic pixie-wrangling sand pile. glass is half empty

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

Apple can subsidize costs through system markups and app store purchases and has absolute control over how the hardware and OS interact. The proper metrics for a pissing match would be "what could Intel/AMD produce under the same design constraints?" or "what would the M1 have to cost stand-alone / how well does it perform decoupled from macOS?" There aren't good answers to any of these questions (yet), but trying to compare 'engineering prowess' or whatever without accounting for the business reasons driving the design process is an apples-to-oranges kind of deal.

Tl;dr

Paul MaudDib posted:

M1 is the :homebrew: of processor design

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

Fantastic Foreskin posted:

without accounting for the business reasons driving the design process it's an apples to oranges kind of deal.

:D

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Klyith posted:

M1 is great for a whole lot of things that can be loosely grouped under "productivity" -- javascript, compiling code, cinebench, encoding, and many others. It's a great chip for that! Though some of Apple's choices were pretty pointed:

lol that because AT mentioned one thing which benefits from a large icache, you've decided that thing must be the only reason they'd want a large icache

quote:

But no, not all tasks are the same and performance is not universal. If it was, then Bulldozer wouldn't have sucked, Vega would be king poo poo of GPUs, we'd have Cell processors in everything, and the descendant of Netburst or Itanium would be powering Intel's architecture. Being really good at javascript and compiling code does not mean you're equally good at everything.

What a ridiculous comparison. Firestorm cores aren't burdened with having to implement nutty architectural ideas like Itanium, don't suffer from stupid system design decisions like Cell putting each HW thread in its own isolation booth, weren't designed to chase a secondary metric like Netburst was WRT clock frequency, and so forth.

quote:

Some things don't benefit from a super-wide design that trades some clockspeed and latency for a ginormous OoO buffer and 4+4 ALUs & FPUs, because they don't fill that width. Among tasks that consumers care about, video games are a prominent example. Some of this is inherent tradeoffs of CPU design that go back a looooong time.

Sure, your average game engine burns a large fraction of its cycles running messy, branchy, pointer-chasing vtable-pounding template-abusing C++, with a side order (or main course, depending on your perspective) of some kind of interpreter or JIT running the lovely scripts which level designers write because they can't C++.

But bad code isn't unique to games, it's just depressingly normal. Spaghetti-OOP code is everywhere.

As in: you think AAA games are bad at filling that width? JS should be worse. It compounds the phenomenon of bad code by asking bad programmers to write their bad code in a bad language which actively gets in the way of optimizing compilers and JITs.

So if anything, M1's JS scores say it should be good at anything, including games. Absent other factors (broken benchmark etc), they're evidence of the core's ability to fill its own width by finding ILP in a low-quality instruction stream.

(Why do you think Apple went to the trouble of designing probably the largest OoO window ever seen in a mass market CPU? That's not easy, and you don't need a huge window if you're relying on software to provide some low-hanging ILP fruit.)
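To make the width-filling point concrete, here's a toy C illustration (mine, not from any benchmark discussed here): a serially dependent pointer chase that no amount of decode width or reorder-buffer depth can speed up, next to an independent-iteration loop where a very wide core with a huge OoO window really can keep lots of loads and adds in flight:

code:

#include <stddef.h>

struct node { struct node *next; long val; };

/* Serial dependency chain: the address of each load depends on the
 * previous load's result, so the core mostly sits waiting on latency no
 * matter how wide it is. This is the pointer-chasing shape of a lot of
 * spaghetti-OOP and interpreter code. */
long sum_chain(const struct node *n)
{
    long s = 0;
    while (n) {
        s += n->val;
        n = n->next;
    }
    return s;
}

/* Independent iterations: the loads don't depend on each other, so a
 * wide frontend and a big out-of-order window can overlap many of them
 * and actually fill those 4+4 ALUs and FPUs. */
long sum_array(const long *a, size_t len)
{
    long s = 0;
    for (size_t i = 0; i < len; i++)
        s += a[i];
    return s;
}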

repiv
Aug 13, 2009

Cygni posted:

For what its worth, a Chinese language site is reporting that Intel will be TSMCs first "3nm" customer, even before Apple, and has purchased all of the initial 3nm run starting summer next year for 1 GPU product and 3 unannounced server products.

https://udn.com/news/story/7240/5662232

DigiTimes is contradicting this report, saying Apple bought out the early 3nm run and Intel won't get access until 2023

https://www.hardwaretimes.com/apple-to-be-tmscs-only-3nm-client-in-2022-followed-by-amd-no-3nm-chips-for-intel-till-2023-report/

Otakufag
Aug 23, 2004
Weird that Intel could outbid Apple for 3nm, unless the exec literally said "this'll be short, whatever figure Apple says, we double it, have a good day".

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Otakufag posted:

Weird that Intel could outbid Apple for 3nm, unless the exec literally said "this'll be short, whatever figure Apple says, we double it, have a good day".

High end Xeons sell for 5 figures, high end iPhones just recently cracked 4. :thunk: - Not a serious take at all.

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
A high-end Xeon is almost the size of an iPhone; per mm² I wouldn't be surprised if the iPhone had a bigger margin.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Perplx posted:

A high-end Xeon is almost the size of an iPhone; per mm² I wouldn't be surprised if the iPhone had a bigger margin.

Per mm², a smaller die is always a higher-margin part than a larger one, doubly so on bleeding-edge nodes where yields are gonna be less than ideal at first.
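Back-of-the-envelope illustration of why (the defect density is invented and the classic Poisson yield = exp(-defects * area) approximation is used; none of these are real TSMC numbers):

code:

#include <math.h>
#include <stdio.h>

/* Toy yield model: Poisson approximation, yield = exp(-D * A), where
 * D is defects per mm^2 and A is die area in mm^2. D here is made up
 * purely to illustrate an early, immature node. */
int main(void)
{
    const double defects_per_mm2 = 0.002;  /* assumed, not a real figure */
    const double small_die = 120.0;        /* roughly phone-SoC sized, mm^2 */
    const double large_die = 600.0;        /* roughly big-Xeon sized, mm^2 */

    double y_small = exp(-defects_per_mm2 * small_die);   /* ~79% */
    double y_large = exp(-defects_per_mm2 * large_die);   /* ~30% */

    printf("small die yield: %.0f%%\n", y_small * 100.0);
    printf("large die yield: %.0f%%\n", y_large * 100.0);
    return 0;
}

Scrapping roughly 70% of the big dies versus roughly 20% of the small ones is the difference the margin has to absorb.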

e; but Apple buying up a big chunk of 3nm and then Intel slurping up the rest makes a lot more sense than Intel managing to buy it all.

DrDork fucked around with this message at 23:15 on Aug 15, 2021

Arivia
Mar 17, 2011

DrDork posted:

I mean...yeah? Intel is on 14nm++++++++ because they have repeatedly made some very bad decisions and squandered their previously unassailable dominance by basically not moving forward meaningfully in 5+ years. AMD is the reverse case: they barely kept the lights on during the Bulldozer era and such, and are only now digging back out of it by catching up to Intel. And both of them are suffering from the reality that the x86_64 platform is hemmed in by the need to support ancient poo poo. I'm sure either company could clean-slate a very interesting and compelling processor, but they won't do it without having the equivalent of Rosetta available to make it so people would actually buy it.

Speaking of x86/x64 being hemmed in by ancient poo poo, I saw an offhand reference the other day that you could still run code from like 8088s on today's processors. I know there was something like the x86 chips these days basically popping themselves through the various modes really quickly at startup so you're out of real mode and protected mode and into running actual x64 code. But I figured that at some point Intel or AMD must have gone "seriously we can just get rid of the poo poo for running programs from like 1990" to clean things up - I know you can't run 16bit executables any more, but still having a bunch of legacy mode support in the CPU feels like a big waste of time. Am I mixing things up/totally wrong?

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

Arivia posted:

Speaking of x86/x64 being hemmed in by ancient poo poo, I saw an offhand reference the other day that you could still run code from like 8088s on today's processors. I know there was something like the x86 chips these days basically popping themselves through the various modes really quickly at startup so you're out of real mode and protected mode and into running actual x64 code. But I figured that at some point Intel or AMD must have gone "seriously we can just get rid of the poo poo for running programs from like 1990" to clean things up - I know you can't run 16bit executables any more, but still having a bunch of legacy mode support in the CPU feels like a big waste of time. Am I mixing things up/totally wrong?

I don’t think there’s anything nominally preventing you from running 16-bit apps in real mode, though Intel cut gate A20 support with Haswell so most DOS memory extenders don’t work any more in protected mode. You can also (sorta) run Win9x, barring the lack of drivers for just about anything made since 2006. Windows imposing limits on 16-bit protected mode code in newer versions (like for legacy program installers) doesn’t necessarily speak to what the CPUs themselves can do. By and large they really can run a TON of old code.
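If you want to see how much of that lineage is still visible from user space, here's a trivial check using GCC/Clang's <cpuid.h> helper (nothing vendor-specific assumed): the same CPUID interface that showed up on early-'90s parts is what still reports 64-bit long mode support today, and every one of these chips still resets into 16-bit real mode and gets walked up through protected mode into long mode by the firmware at every boot.

code:

#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Extended leaf 0x80000001, EDX bit 29 = "LM" (long mode, i.e. x86-64).
     * The query mechanism itself is 1990s legacy that every modern x86
     * chip still carries. */
    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        puts("extended CPUID leaves not supported");
        return 1;
    }
    puts((edx & (1u << 29)) ? "64-bit long mode supported"
                            : "32-bit only part");
    return 0;
}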

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
Someone earlier in the thread said that the processor real estate taken up by instruction decoding is not that big:

FunOne posted:

No, that's not really an issue for modern processors. Space is taken up by cache and VLIW style execution units inside each core. If you take a look at a processor image with a block diagram over it you'll see that decode is a very small segment of the die.

Beef
Jul 26, 2004
The penalty for supporting the old modes is likely in engineering hours for testing. I don't know how modes etc get handled, but older instructions just get purely microcoded.

And yeah, decode/frontend is complex as hell, but relatively speaking it doesn't take that much transistor real estate. It does become an issue if you scale down (e.g. Internet-of-poo poo parts)

movax
Aug 30, 2008

Reminder that this isn't Reddit — let's stop the personal attacks / name-calling, please (a few posts ago / somewhat last page, but just catching up…)

Kerbtree
Sep 8, 2008

BAD FALCON!
LAZY!
The real issue running old stuff on a modern OS is that some stuff that was 16-bit (file handles?) is now 32-bit and actually uses all the space, so it can't just be truncated back down to 16 bits.

You can bodge stuff in there to make it work, but it’s pretty fiddly at best.

BurritoJustice
Oct 9, 2012

Multicore enhancement on Rocketlake is absurd: my friend's 11900K is reporting 283W package power out of the box in P95. It's a Z590 ROG HERO SUPER GAMER (etc), but with completely unmodified BIOS settings other than XMP. I know it's a power virus, but it's absurd that the board lets it run that high. It was only managing 4.5GHz too (though I believe with AVX-512).

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

BurritoJustice posted:

Multicore enhancement on Rocketlake is absurd: my friend's 11900K is reporting 283W package power out of the box in P95. It's a Z590 ROG HERO SUPER GAMER (etc), but with completely unmodified BIOS settings other than XMP. I know it's a power virus, but it's absurd that the board lets it run that high. It was only managing 4.5GHz too (though I believe with AVX-512).

I haven't seen much in the way of apples-to-apples comparisons, but do the high end Rocketlake chips actually draw comparable power to the Skylake HEDT platform on average? Or more? I'm having trouble imagining an eight core chip pumping out more heat than my 7940x running at alleged stock clocks.

Hasturtium fucked around with this message at 18:25 on Aug 18, 2021

VorpalFish
Mar 22, 2007
reasonably awesometm

Hasturtium posted:

I haven't seen much in the way of apples-to-apples comparisons, but do the high end Rocketlake chips actually draw comparable power to the Skylake HEDT platform on average? Or more?

If you let them - to revisit the mobo chat upthread, the motherboard manufacturers apparently have all the latitude in the world as far as what the default power limits are, so your out-of-the-box experience can vary wildly.

F4rt5
May 20, 2006

Nomyth posted:

I may not know much about architecture beyond Inside the Machine, but what I have experienced in the past is nerds getting too horny for The Future of Silicon multiple times. It happened before Netburst, it happened with VLIW stuff, it happened before Bulldozer.

Apple having more money to throw at R&D does not mean they get to break the laws of physics. Reality will always find a way to gently caress over the little magic pixie-wrangling sand pile. glass is half empty
Thing is, if Intel had, say, gone ARM RISC fifteen years ago, or whenever 28nm came, and had had Microsoft on board for it, the gains would be relatively the same as we are seeing with the M1 now.

x86(_64) is so horribly inefficient for modern computing that they are now basically RISC internally with an x86 translation layer on top, and have been since... P3? Pentium Pro?

VorpalFish
Mar 22, 2007
reasonably awesometm

F4rt5 posted:

Thing is, if Intel had, say, gone ARM RISC fifteen years ago, or whenever 28nm came, and had had Microsoft on board for it, the gains would be relatively the same as we are seeing with the M1 now.

x86(_64) is so horribly inefficient for modern computing, they are now basically RISC themselves with x86 translation on top, and have been since... P3? P pro?

You can't really say that, given how many of Intel's woes wrt efficiency and stagnation are directly tied to the failures on the fabrication side of the business.

The desktop cpus people love to compare the m1 to are give or take something like 2 full nodes behind, which is a massive deal. Even AMDs mobile ryzen parts are behind by a node - I think TSMCs numbers put 5nm at a 20% power reduction vs 7nm for a given complexity/frequency.

F4rt5
May 20, 2006

VorpalFish posted:

You can't really say that, given how many of Intel's woes wrt efficiency and stagnation are directly tied to the failures on the fabrication side of the business.

Now, sure. Intel was leading the nm race back then. If they had created an i5 10xxx on TSMC 5nm it would still be less efficient than an M1, I think.

VorpalFish posted:

The desktop cpus people love to compare the m1 to are give or take something like 2 full nodes behind, which is a massive deal. Even AMDs mobile ryzen parts are behind by a node - I think TSMCs numbers put 5nm at a 20% power reduction vs 7nm for a given complexity/frequency.

But the architecture itself also plays a huge part in this.

Guess I'm just blinded by the world finally enjoying the RISC revolution we were promised in the mid-late '90s :p

VorpalFish
Mar 22, 2007
reasonably awesometm

F4rt5 posted:

But the architecture itself also plays a huge part in this.

I would say the fact that the 5800U is x86-64 and appears to be close to the M1's efficiency with a node disadvantage suggests that x86 is not a huge impediment.

CFox
Nov 9, 2005
I'll be impressed when they can clone the MacBook Air. Where's my fanless speedy laptop, Intel/AMD?
