JawnV6
Jul 4, 2004

So hot ...
the real commitment to x86 was when they sold the x-scale ARM business, which was doing quite well at the time, off to marvell

you can talk about dedicating resources to the high end, and I distinctly recall Otellini talking up "if they sell 100 smartphones, we sold a $600 server core to the backend" explanation for why getting eaten alive from below was fine and dandy, but that was the big bet in my recollection

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

That Intel decided the high margin low volume / low margin high volume split model for leading edge fab utilization economics that worked so well for servers/desktops wouldn’t be threatened by someone doing that with phones in volume was puzzling.

Sure, they still sell a lot of desktop chips (at probably higher margins than phone chips) but if TSMC (and Samsung) didn’t have customers and volume they’d have a hard time building leadership fabs.

phongn
Oct 21, 2006

ExcessBLarg! posted:

But what if Intel hadn't announced Itanium and exclusively pushed along x86 designs? Would there have been any significant difference in the outcome? I don't know how much the DEC/Compaq sale was the result of Itanium's announcement--maybe DEC would've held on and tried to compete against Intel in the server market?

It just seems to me at the end of the day that other vendors couldn't afford to build fabs to outbuild Xeons, and by the early 00s the underlying architecture didn't really matter so much as process node and yields.
There's a nice thread by John Mashey on why DEC ultimately abandoned VAX (a very complex instruction set): Intel and AMD had huge volume that fed back in a virtuous cycle, and all the other guys didn't. And each generation of CPU became more and more expensive to do. HP's PA-RISC, SGI 'big' MIPS, Sun/Fujitsu SPARC, DEC Alpha: not one of them had the volume to compete.

PowerPC tried to hold on a bit with Apple, but the collapse of the PowerPC Reference Platform meant it would be forever volume-constrained, too. Neither Motorola nor IBM could possibly hope to compete with x86/x86-64 on volume. The biggest volume drivers, the consoles (XB360, PS3, GameCube/Wii), were all using relatively constrained processor designs, too, not the high-performance ones Apple demanded. Even the supercomputers were mostly using slower designs, just massively parallel ones. Some of the IBM big iron used fast POWER designs, but they had no application to the wider market and there were never that many of them.

Who does have the volume to compete? The same story that let x86 win: something sold in enormous quantities no matter how slow, driving a virtuous investment cycle. That something is ARM.

JawnV6 posted:

you can talk about dedicating resources to the high end, and I distinctly recall Otellini talking up "if they sell 100 smartphones, we sold a $600 server core to the backend" explanation for why getting eaten alive from below was fine and dandy, but that was the big bet in my recollection

in a well actually posted:

That Intel decided the high margin low volume / low margin high volume split model for leading edge fab utilization economics that worked so well for servers/desktops wouldn’t be threatened by someone doing that with phones in volume was puzzling.

Sure, they still sell a lot of desktop chips (at probably higher margins than phone chips) but if TSMC (and Samsung) didn’t have customers and volume they’d have a hard time building leadership fabs.

Otellini considered it his biggest mistake that Intel passed on mobile when Apple came asking. They could've had the 100 smartphones and the $600 server backend core. They then thought they could make up for lost time by shrinking Atom with process superiority, and were never able to make it work.

Worse, it meant that TSMC would receive a flood of money from Apple, which eventually let them take the lead in fabrication process (assisted by Intel's failed bet on cobalt in their 10nm process). Intel's commanding lead was entirely based on process superiority, which is now gone. At best they'll probably be able to compete again with TSMC if they can execute well over the next few years.

phongn fucked around with this message at 22:20 on Oct 3, 2022

ExcessBLarg!
Sep 1, 2001

phongn posted:

Otellini considered it his biggest mistake that Intel passed on mobile when Apple came asking.
How many iPods was Apple really going to sell anyways?

feedmegin
Jul 30, 2008

phongn posted:

There's a nice thread by John Mashey on why DEC ultimately abandoned VAX (a very complex instruction set): Intel and AMD had huge volume that fed back in a virtuous cycle, and all the other guys didn't

That's not what that post says at all. They could not keep up with RISC chips in the late 80s, which is why they moved to the DEC Alpha. Nobody in DEC was worried about 386s and poo poo in the server market yet, not least because it would be a good decade and a half before 64-bit x86 was a thing.

ExcessBLarg!
Sep 1, 2001
I think it's generally accepted that RISC designs were outpacing CISC designs until they reached the knee in the curve where building a processor around a RISC-ish internal architecture and an instruction-set translator (or even just straight-up software emulation) became competitive again. For x86 that happened with the P6/Pentium Pro? I'm not really an architecture guy.

I'm sure DEC engineers in 1986 considered that and felt it was infeasible and just pushed on with a new architecture. Maybe they didn't expect VAX to have such a long tail in the market.

Now, if they had obstinately stuck with VAX through the 90s, could they have considered that? Or is there something specific about the VAX architecture that makes it wildly difficult to implement that way compared to x86 or 68k?
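
As a toy sketch of that "RISC-ish internals behind an instruction-set translator" idea (everything here is invented for illustration, not any real microarchitecture): a single memory-operand CISC instruction gets cracked into simple load/compute/store micro-ops that a RISC-like core can then schedule.

code:

/* Toy illustration of "CISC front end, RISC-ish core": a single
 * memory-operand instruction like `ADD [addr], reg` is cracked into
 * simple internal micro-ops. All names and structures are invented. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    int dst, src;      /* internal register numbers */
    unsigned addr;     /* used by LOAD/STORE */
} uop;

/* Crack one pretend CISC instruction into micro-ops for the core. */
static int crack_add_mem_reg(unsigned addr, int reg, uop out[3]) {
    out[0] = (uop){ UOP_LOAD,  /*dst=*/90, /*src=*/0,   addr }; /* tmp <- [addr]    */
    out[1] = (uop){ UOP_ADD,   /*dst=*/90, /*src=*/reg, 0    }; /* tmp <- tmp + reg */
    out[2] = (uop){ UOP_STORE, /*dst=*/0,  /*src=*/90,  addr }; /* [addr] <- tmp    */
    return 3;
}

int main(void) {
    uop uops[3];
    int n = crack_add_mem_reg(0x1000, 3, uops);
    for (int i = 0; i < n; i++)
        printf("uop %d: kind=%d dst=%d src=%d addr=0x%x\n",
               i, uops[i].kind, uops[i].dst, uops[i].src, uops[i].addr);
    return 0;
}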

PBCrunch
Jun 17, 2002

Lawrence Phillips Always #1 to Me

in a well actually posted:

Sure, they still sell a lot of desktop chips (at probably higher margins than phone chips) but if TSMC (and Samsung) didn’t have customers and volume they’d have a hard time building leadership fabs.
Does Samsung really have leadership fabs? As far as I understand it Samsung does a great job making memory and storage chips, but its logic manufacturing is kind of second rate.

Sorry if this is getting too close to politics and/or conspiracy theory, but is it possible that TSMC doesn't actually make much or any profit and that the Taiwanese government pumps money into TSMC in lieu of military spending? The presence of TSMC and its importance to western multinational corporations is a good reason for the US and the Euro zone to have Taiwan's back when it comes to dealing with China.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

PBCrunch posted:

Does Samsung really have leadership fabs? As far as I understand it Samsung does a great job making memory and storage chips, but its logic manufacturing is kind of second rate.

Second or third best is still leadership-class. TSMC has better 10/8/7nm gens but Samsung is still lightyears ahead of SMIC or Glofo.

quote:

Sorry if this is getting too close to politics and/or conspiracy theory, but is it possible that TSMC doesn't actually make much or any profit and that the Taiwanese government pumps money into TSMC in lieu of military spending? The presence of TSMC and its importance to western multinational corporations is a good reason for the US and the Euro zone to have Taiwan's back when it comes to dealing with China.

Nah. Taiwan's entire military budget was $20B/yr; TSMC's revenues were $60B/yr. It's a public company and you can see revenue going in and chips going out.

TSMC is a success story for sustained industrial policy and soft power, but its market position is the product of 30 years of sustained work and larger market trends.

legooolas
Jul 30, 2004
One CPU I've not heard mentioned here, and which I only really know about because I happened to work at Philips at the time, was the TriMedia media processor. It was a 5-issue VLIW chip which ran at about 100MHz and was used in various set-top boxes; later versions were used in some early smart-for-the-time TVs.

It seemed pretty neat, and the tools we used had a pretty decent, standards-of-the-time-compliant C and C++ compiler, and you could whack in inline assembly that looked like C function calls (I forget what the exact nomenclature was) to use some of the more vector-ish or media-specific shuffle instructions that the compiler wouldn't automatically generate. It was pretty funky, although I stayed well away from actually writing assembly for it whenever I could, since as well as being 5 instructions wide, each instruction took a different number of cycles, and it could only write to 5 destination registers per cycle, otherwise it would just lock up (presumably with some sort of debug trap, I can't remember).

There were also a cycle or two of delay slots on any branch instruction, so it also allowed enabling/disabling (predicating) each instruction instead, but the predicates came from one of the 128 general-purpose registers rather than from separate flags (as on ARM or x86, for example). Keeping code straight-line where possible was important for getting the best performance out of it, and I think all this meant that the compiler and especially the instruction scheduler had to work pretty hard :D
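
Purely as an illustration of that guarded-execution idea (this is plain C, not TriMedia syntax): with the guard value held in an ordinary register, the compiler can keep code straight-line instead of branching, which matters when every branch costs delay slots.

code:

/* Illustrative only: register-predicated ("guarded") execution vs. a branch. */

/* Branchy version: the compiler has to emit a jump, paying branch delay slots. */
int clamp_branch(int x, int limit) {
    if (x > limit)
        x = limit;
    return x;
}

/* Predicated version: compute the guard into a normal register and select the
 * result, keeping the code straight-line. A guarded operation is roughly
 * "if (guard) dst = src;" with the guard held in a general-purpose register. */
int clamp_predicated(int x, int limit) {
    int guard = (x > limit);        /* guard lives in an ordinary register */
    int result = guard ? limit : x; /* select without a taken branch       */
    return result;
}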

(Seems that according to the wikipedia page here : https://en.wikipedia.org/wiki/TriMedia_%28mediaprocessor%29 it kept going for about 12 years before finally being canned)

legooolas fucked around with this message at 23:57 on Oct 4, 2022

phongn
Oct 21, 2006

feedmegin posted:

That's not what that post says at all. They could not keep up with RISC chips in the late 80s, which is why they moved to the DEC Alpha. Nobody in DEC was worried about 386s and poo poo in the server market yet, not least because it would be a good decade and a half before 64-bit x86 was a thing.
Note also he says "INTEL AND AMD CAN MAKE FAST X86S BECAUSE THEY HAVE VOLUME." This applies both to why DEC could not, and would not, scale VAX to a high-speed out-of-order CISC design, and to why all the workstation RISC designs ultimately failed to compete with the ugly-duckling x86 (and later x86-64). Does the quoted thread explicitly say it? No. Does the same lesson apply? Yes.

ExcessBLarg! posted:

I think it's generally accepted that RISC designs were outpacing CISC designs until they reached the knee in the curve where building a processor around a RISC-ish internal architecture and an instruction-set translator (or even just straight-up software emulation) became competitive again. For x86 that happened with the P6/Pentium Pro? I'm not really an architecture guy.
Yes, it began with the P6. The Pentium Pro was a real shock to the RISC guys, because it was pretty damned fast despite being CISC. Decreasing transistor costs meant that implementing complicated decode stages became feasible (unless you want something really low power).

quote:

I'm sure DEC engineers in 1986 considered that and felt it was infeasible and just pushed on with a new architecture. Maybe they didn't expect VAX to have such a long tail in the market.

Now, if they had obstinately stuck with VAX through the 90s, could they have considered that? Or is there something specific about the VAX architecture that makes it wildly difficult to implement that way compared to x86 or 68k?
In the Mashey post I linked earlier, if you search for "PART 3. Why it seems difficult to make an OOO VAX competitive" there's a pretty thorough explanation of why it would be difficult to implement a 'modern' VAX that decoded into micro-ops à la x86. They also suspected that many instructions would still have to be implemented in microcode. DEC just didn't have the resources to tackle all of these problems at once, and went for the hail-mary move of a new clean-sheet architecture.

As ugly as it was, what was then x86 was substantially simpler and easier to try and break down, and benefitted from high sales volume to keep the money firehose going. And of course, AMD grafted on a 64-bit extension that was inelegant but more or less worked and was easy to port a compiler to.

As an aside, I kinda wish that IBM had chosen the M68000 instead of the 8086 for the original IBM PC; it was a much cleaner design with a vastly nicer orthogonal ISA. Some people even made an out-of-order, 64-bit version.

phongn fucked around with this message at 20:35 on Oct 4, 2022

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

phongn posted:

As an aside, I kinda wish that IBM had chosen the M68000 instead of the 8086 for the original IBM PC; it was a much cleaner design with a vastly nicer orthogonal ISA. Some people even made an out-of-order, 64-bit version.

If you're what-ifing that, you have to change how the ISA evolved. 68K lost badly to x86 in the mid-1980s, not just the early 80s when IBM selected the 8088 because it was available and cheap at a time when the 68K was neither.

68K took a sharp turn for the worse with the 68020. Motorola's architects got blinded by that orthogonality and beauty and tried to continue the old "close the semantic gap between assembly and high level languages" CPU design philosophy that had led to what we now call CISC. The changes they made were all very pretty on paper, but made it hard to design chips with advanced microarchitectural features. This played a part in 68K falling well behind instead of keeping pace with x86.

(Apollo manages to be OoO because it's a bunch of Amiga cultists with no completely agreed upon project goal other than making something they think is cool to run AmigaOS on. With no commercial pressures, you don't have to simultaneously worry about things like clock speed and power, which makes it easier to do OoO just because you can.)

You can learn more by finding more old Mashey usenet posts! He had a neat series breaking down what makes a RISC a RISC, down to detailed tables comparing ISA features. x86 ends up being substantially closer to RISC than 68020, and in one of the most important ways (addressing modes).

phongn
Oct 21, 2006

BobHoward posted:

If you're what-ifing that, you have to change how the ISA evolved. 68K lost badly to x86 in the mid-1980s, not just the early 80s when IBM selected the 8088 because it was available and cheap at a time when the 68K was neither.

68K took a sharp turn for the worse with the 68020. Motorola's architects got blinded by that orthogonality and beauty and tried to continue the old "close the semantic gap between assembly and high level languages" CPU design philosophy that had led to what we now call CISC. The changes they made were all very pretty on paper, but made it hard to design chips with advanced microarchitectural features. This played a part in 68K falling well behind instead of keeping pace with x86.

(Apollo manages to be OoO because it's a bunch of Amiga cultists with no completely agreed upon project goal other than making something they think is cool to run AmigaOS on. With no commercial pressures, you don't have to simultaneously worry about things like clock speed and power, which makes it easier to do OoO just because you can.)

You can learn more by finding more old Mashey usenet posts! He had a neat series breaking down what makes a RISC a RISC, down to detailed tables comparing ISA features. x86 ends up being substantially closer to RISC than 68020, and in one of the most important ways (addressing modes).
I know there were a lot of sound business reasons for IBM picking the Intel processor, not the least price, second source availability, etc. I just know it was a candidate, and for its later ISA faults it did have a lot going for it that wouldn't really appear on Intel until the 386. Not having to deal with all the different types of memory models on x86 from the start would've been nice (though of course 68K had its own problems with developers using the upper address byte because it "wasn't used" at first). Not having to deal with the weird x87 stack-based FPU would also be nice.

While the 68020 started getting over-complex, Intel also made its own mistakes with the 286 (ref. Gates' reference to it being "brain-dead"). Motorola did seemingly realize its mistakes and removed some of those instructions later on, so I think some of these design issues could've been overcome? I don't think it was as complex as, say, VAX or iAPX 432. The 68060 was competitive with P5, at least. As for Apollo, I know it's made by Amiga fanatics and not a 'real' design with real commercial constraints. It's just kind of a neat project? There are people who dream of the WDC 65C832, too, for the sole reason they liked the accumulator-style MOS 6502.

(I've read a good amount of Mashey's posts on yarchive; I actually discovered that site first for all its rocketry tidbits).

phongn fucked around with this message at 22:50 on Oct 4, 2022

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Amiga weirdos are the best. That Apollo thing is neat!

Yaoi Gagarin
Feb 20, 2014

legooolas posted:

inline assembly that looked like C function calls (I forget what the exact nomenclature was)

Are you thinking of compiler intrinsics?

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

phongn posted:

It did strongly influence Postscript, right?

Definitely, they’re both stack oriented, as is SPL, the HP System Programming Language for the HP 3000 that was a contemporary of FORTH.

The differences between FORTH and PostScript are substantial, though; it's kind of like looking at BCPL and then looking at C. HP's RPL language for the high-end calculators like the HP-48SX is similarly advanced relative to FORTH.

legooolas
Jul 30, 2004

Yes that's it! Presumably they're still a thing, but with compilers doing much more in the way of vectorisation etc they aren't required as often.

feedmegin
Jul 30, 2008

legooolas posted:

Yes that's it! Presumably they're still a thing, but with compilers doing much more in the way of vectorisation etc they aren't required as often.

You would be surprised. Auto-vectorisation is pretty hit and miss.

ExcessBLarg!
Sep 1, 2001

phongn posted:

I know there were a lot of sound business reasons for IBM picking the Intel processor, not the least price, second source availability, etc. I just know it was a candidate, and for its later ISA faults it did have a lot going for it that wouldn't really appear on Intel until the 386.
I'm not even sure that the 8086/8088 ISA was considered "bad" at the time. The overlapping-segment memory model is pretty unusual, but it meant you could have multiple tasks resident in memory even on machines that didn't have lots of RAM, by giving them segments significantly smaller than 64k. Of course, this was difficult to make workable in practice due to the lack of any real memory protection, but I assume that was the idea. Either way, the ISA, and the memory model in particular, aged like milk.
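
For reference, the real-mode address math behind those overlapping segments is just physical = segment × 16 + offset, so segments overlap every 16 bytes and many segment:offset pairs alias the same byte. A quick C sketch, illustrative only:

code:

/* Real-mode 8086 address translation: physical = segment * 16 + offset.
 * Segments overlap every 16 bytes, which is what let small programs be
 * parked at arbitrary paragraph boundaries inside the 1 MB address space. */
#include <stdio.h>

static unsigned long phys(unsigned seg, unsigned off) {
    return ((unsigned long)seg << 4) + off;   /* 20-bit result on an 8086 */
}

int main(void) {
    /* Two different segment:offset pairs naming the same physical byte. */
    printf("0x1234:0x0010 -> 0x%05lx\n", phys(0x1234, 0x0010)); /* 0x12350 */
    printf("0x1235:0x0000 -> 0x%05lx\n", phys(0x1235, 0x0000)); /* 0x12350 */
    return 0;
}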

Also, the fact that the 8086/8088 looked like an 8080 from a distance was beneficial, given the popularity of CP/M in business at the time.

Yaoi Gagarin
Feb 20, 2014

feedmegin posted:

You would be surprised. Auto-vectorisation is pretty hit and miss.

Yeah, all the simd-enabled math libraries I know of use either intrinsics or inline asm
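
For anyone wondering what intrinsics look like in practice, here's a minimal sketch using the standard Intel SSE intrinsics from <immintrin.h> (x86 only; it assumes n is a multiple of 4, and real libraries add tail and alignment handling):

code:

/* Minimal SSE intrinsics example: add two float arrays four lanes at a time. */
#include <immintrin.h>

void add_f32(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   /* load 4 floats (unaligned OK) */
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vc = _mm_add_ps(va, vb);    /* 4 adds in one instruction */
        _mm_storeu_ps(out + i, vc);        /* store 4 results */
    }
}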

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

phongn posted:

I know there were a lot of sound business reasons for IBM picking the Intel processor, not the least price, second source availability, etc. I just know it was a candidate, and for its later ISA faults it did have a lot going for it that wouldn't really appear on Intel until the 386. Not having to deal with all the different types of memory models on x86 from the start would've been nice (though of course 68K had its own problems with developers using the upper address byte because it "wasn't used" at first). Not having to deal with the weird x87 stack-based FPU would also be nice.

For sure. The original 68000 was so much cleaner than x86!

quote:

While the 68020 started getting over-complex, Intel also made its own mistakes with the 286 (ref. Gates' reference to it being "brain-dead"). Motorola did seemingly realize its mistakes and removed some of those instructions later on, so I think some of these design issues could've been overcome? I don't think it was as complex as, say, VAX or iAPX 432. The 68060 was competitive with P5, at least.

Not sure you can say the 68060 was truly competitive with P5. It was extremely late to market and its clock speed was disappointing.

It wasn't new instructions that were the problem, it was new addressing modes, made available to all existing instructions. They were quite fancy. Stuff like (iirc) double indirection - dereference a pointer to a pointer. For many reasons (which Mashey gets into at some point) it's difficult to make high performance implementations of an ISA which generates anything more than a single memory reference per instruction. Despite all its ugliness, this is something x86 actually got right.
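
To make the "more than one memory reference per instruction" point concrete: a 68020 memory-indirect mode lets a single instruction chase a pointer, which in x86 or load/store terms is two dependent memory accesses and two instructions. A rough C rendering of the semantics (the mode shown is roughly MOVE.L ([bd,An],od),Dn, details simplified):

code:

/* Rough semantics of a 68020 memory-indirect addressing mode: one
 * instruction performs two dependent memory reads. x86 and RISC ISAs keep
 * it to at most one memory access per instruction, so this would be two
 * separate instructions there. */
long load_double_indirect(const char *an, long bd, long od) {
    const char *inner = *(const char *const *)(an + bd); /* memory read #1 */
    return *(const long *)(inner + od);                   /* memory read #2 */
}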

Motorola wasn't able to get rid of this stuff in 68K proper. Instead, they defined a cut-down version and called it a new and incompatible CPU architecture, ColdFire. I think this even extended to removing stuff from the baseline 68000 ISA - the idea was "let's review 68K and remove everything which makes it obviously not-a-RISC". It could not boot unmodified 68K operating systems.

Oddly enough, Intel got away with its 286 mistakes because they were so bad almost nobody tried to use them. The market generally treated the 286 as just a faster 8086. IIRC, OS support was limited to an early (and not very widely used) version range of OS/2. Maybe things would have moved eventually, but the 386 offered an obviously superior alternate idea for extending x86, at which point more or less everyone dropped all plans to use the 286 model.

Still, AFAIK, Intel has kept all the 286 weirdness hiding in the dusty corners of every CPU they've made. I think they've only recently started talking about removing some of the legacy stuff. It's very hard to subtract features from an ISA once they're fielded.

phongn
Oct 21, 2006

I suppose my fondness for M68K is because it was my first assembly language (early CS courses; advanced courses used MIPS) and it powered a bunch of systems I had only good memories of (Macintosh, TI-89, etc.)

I wonder if Intel also avoided some of those mistakes (286 aside) because they were going to do all the super-CISC close-language-coupling in iAPX 432 instead.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
https://www.tomshardware.com/news/risc-v-laptop-world-first

icantfindaname
Jul 1, 2008


So why is x86 less power efficient than ARM? Just backwards compatibility with 16 bit instructions/a more complex instruction set, that's all?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

It isn’t, really (cisc decoders etc. don’t eat that many joules relative to alu or cache.) A lot of the perf diff is that designs on the market for ARM are optimized for power efficiency, and also that Intel process teams ate poo poo for a decade while TSMC and Samsung got ahead.

One of the secrets to Apple's efficiency is that since they are the purchasers of their own CPUs, they can optimize the design for performance rather than for density and maximum cores per die.

icantfindaname
Jul 1, 2008


So why has there never been a successful x86 phone or embedded chip?

Gwaihir
Dec 8, 2009
Hair Elf
Intel was run by morons in the years it would have been relevant to make one, by the time they took it seriously it was too late because everyone in the whole market had already gotten on board with Qualcomm or designed their own.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

icantfindaname posted:

So why has there never been a successful x86 phone or embedded chip?

Power consumption, for one. ARM from day one had crazy excellent power characteristics, including being able to run off the tiny current leaking in through its I/O pins, which they discovered by accident during its initial development.

hobbesmaster
Jan 28, 2008

icantfindaname posted:

So why has there never been a successful x86 phone or embedded chip?

There are tons of embedded x86 processors out there in things like cars, MRIs and PLCs

You may notice something similar about all those applications. ;)

UHD
Nov 11, 2006


hobbesmaster posted:

There are tons of embedded x86 processors out there in things like cars, MRIs and PLCs

You may notice something similar about all those applications. ;)

I was surprised to see that the embedded cpu in my model 3 is an intel atom

whether that is a positive or negative probably depends on your opinion of tesla but in either case they are a pretty high profile customer

ExcessBLarg!
Sep 1, 2001

icantfindaname posted:

So why is x86 less power efficient than ARM? Just backwards compatibility with 16 bit instructions/a more complex instruction set, that's all?
There was probably a time when x86 was "inherently" less efficient but (as stated) the cost of decoding logic has been dwarfed by the functional aspects of modern CPUs.

It also helps that ARM filled the "power efficient" niche from near the beginning, so all aspects of an ARM-platform system including processor/SoCs, compilers, operating systems were designed with power efficiency as a priority and evolved from there. It's much harder to take an inefficient platform and try to retrofit efficiency onto it.

icantfindaname posted:

So why has there never been a successful x86 phone or embedded chip?
One of the big benefits of Qualcomm SoCs is that they include both an application processor and radio in tandem (if not in the same package) and the board support so that, from an OEM perspective, they "just work". Intel tried to do this a few times but their radios weren't as evolved as Qualcomm's. Mind you, this was back when you needed all of GSM/UMTS/CDMA/LTE in a single device.

That's also to say nothing about the prevalence of ARM in Android and the availability of ARM binaries in the Play Store. I mean, Android works perfectly fine on Intel too, but it's always been a second-class citizen in the mobile space.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

icantfindaname posted:

So why has there never been a successful x86 phone or embedded chip?

It's really really hard to make a 2 watt x86 system that isn't hot dogshit. Tons and tons of design considerations need to be made very very early in the process to get power draw low enough for that class of device, and x86 doesn't have a lot of those.

JawnV6
Jul 4, 2004

So hot ...
idk there's a more interesting question right next door, why didn't _intel_ make a successful phone chip. And they did! Had a best-in-class offering a while back. Then in 2006 they sold off the entire XScale division to Marvell. Way up at the c-level, it was a bet on "x86" in an abstract sense over continuing to build ARM cores in house.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Intel’s 5G modem efforts ate poo poo so hard Apple had to go back to Qualcomm and settle their lawsuits. I don’t recall if it was process delays or design issues (iirc both?) but they hosed a lot of companies on that.

Intel also had some internal structural and process issues that prevented them from being competitive.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

in a well actually posted:

It isn’t, really (cisc decoders etc. don’t eat that many joules relative to alu or cache.) A lot of the perf diff is that designs on the market for ARM are optimized for power efficiency, and also that Intel process teams ate poo poo for a decade while TSMC and Samsung got ahead.

One of the secrets to Apple's efficiency is that since they are the purchasers of their own CPUs, they can optimize the design for performance rather than for density and maximum cores per die.

I think you might be downplaying x86 decoder power a bit too much. One reason why uop caches are popular in x86 designs, and essentially nowhere else, is that cache hits allow decoders to go idle to save power.

But more significant than the decoders themselves are the implications for everything else. An ultra-wide backend would just get bottlenecked by the decoders it's practical to build, so Intel and AMD haven't explored the wider/slower corner of the design space. They've settled on building medium width backends with very high clocks, and that has consequences.

One is that x86 cores are enormous, probably thanks to deep pipelining. Look at these two annotated die photos of M1 and Alder Lake chips with eight P CPUs each.




As you say, Apple is willing to burn area with reckless abandon. Their P cores are the big chunguses of the Arm world. And yet, they're still quite small relative to x86 P cores, even accounting for all the confounding factors in those die photos (m1 pro is about 250mm^2, the alder lake about 210, different nodes, etc).
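
Roughly how the uop-cache trick works, as an invented-for-illustration sketch (the sizes, structures, and decode stub are all made up, not any real design): decoded micro-ops get cached by fetch address, so on a hit the expensive legacy decoders never have to run at all.

code:

/* Toy model of a micro-op (uop) cache: decoded uops are cached by fetch
 * address, so on a hit the power-hungry variable-length decoders stay idle. */
#include <stdbool.h>
#include <string.h>

#define UOP_CACHE_LINES 64

typedef struct { int kind, dst, src; } uop;   /* stand-in for a decoded op */

typedef struct {
    bool     valid;
    unsigned tag;       /* fetch address this entry was decoded from */
    uop      uops[6];   /* up to 6 uops per entry */
    int      count;
} uop_cache_entry;

static uop_cache_entry cache[UOP_CACHE_LINES];

/* Stand-in for the real, expensive x86 decode path. */
static int legacy_decode(unsigned fetch_addr, uop out[6]) {
    out[0] = (uop){ 0, (int)(fetch_addr & 7), 0 };
    return 1;
}

int fetch_uops(unsigned fetch_addr, uop out[6]) {
    uop_cache_entry *e = &cache[(fetch_addr >> 4) % UOP_CACHE_LINES];
    if (e->valid && e->tag == fetch_addr) {         /* hit: decoders idle */
        memcpy(out, e->uops, sizeof e->uops);
        return e->count;
    }
    e->count = legacy_decode(fetch_addr, e->uops);  /* miss: full decode */
    e->valid = true;
    e->tag   = fetch_addr;
    memcpy(out, e->uops, sizeof e->uops);
    return e->count;
}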

hobbesmaster
Jan 28, 2008

UHD posted:

I was surprised to see that the embedded cpu in my model 3 is an intel atom

whether that is a positive or negative probably depends on your opinion of tesla but in either case they are a pretty high profile customer

They’re very common. Arstechnica recently reviewed the android automotive based infotainment system in a GMC Yukon which runs on a Gordon Peak atom. https://arstechnica.com/gadgets/2023/01/android-automotive-goes-mainstream-a-review-of-gms-new-infotainment-system/

Intel’s collateral since Gordon peak isn’t on ark: https://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/intel-sb-gordon-peak-v4-13132-3.pdf

edit: I suspect this will render many in this thread speechless:

quote:

Android Automotive doesn't let you sideload apps into a production car, but look up Atom A3960 Geekbench scores, and you'll see that the computer in this $78,000 vehicle is barely faster than a $35 Raspberry Pi 4. The GMC Yukon and Polestar 2 both feature one of the slowest CPUs you can buy today in any form factor.

hobbesmaster fucked around with this message at 23:29 on Jan 13, 2023

Pham Nuwen
Oct 30, 2010



UHD posted:

I was surprised to see that the embedded cpu in my model 3 is an intel atom

whether that is a positive or negative probably depends on your opinion of tesla but in either case they are a pretty high profile customer

having owned and used intel atom systems, this is a major negative.


hobbesmaster posted:

They’re very common. Arstechnica recently reviewed the android automotive based infotainment system in a GMC Yukon which runs on a Gordon Peak atom. https://arstechnica.com/gadgets/2023/01/android-automotive-goes-mainstream-a-review-of-gms-new-infotainment-system/

Intel’s collateral since Gordon peak isn’t on ark: https://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/intel-sb-gordon-peak-v4-13132-3.pdf

edit: I suspect this will render many in this thread speechless:

Doesn't surprise me in the least, all infotainment systems are garbage (or become garbage after a year).

Takes a long fuckin' time for my VW to boot up when you get in. It's an electric car! Just keep the computer on, jesus!

hobbesmaster
Jan 28, 2008

…those are actually quite powerful by embedded cpu standards.

phongn
Oct 21, 2006

The variable-length instructions of x86 make decoding more complicated than the fixed-length instructions on ARM. As BobHoward noted, x86 decoders are more of a bottleneck, and this is one reason why.

Unfortunately x86-64 was built to make it easy to port x86 compilers over, and so it kept some of the old ugliness like variable-length instructions, added relatively few architectural registers, etc. AArch64 instead went with a cleaner slate when they rethought ARM.
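
A toy sketch of why variable-length decode is awkward to parallelize: you can't even locate instruction N+1 until you've length-decoded instruction N, whereas fixed 4-byte instructions can all be found (and handed to separate decoders) independently. The length encoding below is invented purely for illustration:

code:

/* Toy illustration of the variable-length decode problem. */
#include <stddef.h>

/* Pretend length decoder: returns the byte length implied by an opcode.
 * (Invented encoding; real x86 length determination is far messier.) */
static size_t insn_length(unsigned char opcode) {
    return 1 + (opcode & 0x07);   /* 1..8 bytes, for illustration only */
}

/* Variable length: each step depends on the previous instruction's length,
 * so instruction start addresses have to be discovered serially. */
size_t count_variable_length(const unsigned char *code, size_t nbytes) {
    size_t pc = 0, count = 0;
    while (pc < nbytes) {
        pc += insn_length(code[pc]);  /* serial dependency on prior decode */
        count++;
    }
    return count;
}

/* Fixed 4-byte instructions: every start address is known up front, so
 * N decoders could each grab an instruction in parallel. */
size_t count_fixed_length(size_t nbytes) {
    return nbytes / 4;
}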

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

hobbesmaster posted:

edit: I suspect this will render many in this thread speechless:

Not surprising to me at all, but then I've done some time working at a place that designed electronics for vehicles. Embedded electronics for harsh environments is a very different world.

For example, how many fast CPUs do you know of which are rated for operation down to -40C and up to at least +100C? These are common baseline requirements for automotive applications, even for boxes which live inside the passenger compartment rather than the engine bay.

Another: most consumer silicon disappears only two or three years after launch. But designing and qualifying electronics for the harsher environment inside a vehicle is expensive, so you don't want to constantly re-do it - you want to design something really solid and just keep making it for five years, or more. That narrows the list of components you can possibly use quite a bit.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BobHoward posted:

Not surprising to me at all, but then I've done some time working at a place that designed electronics for vehicles. Embedded electronics for harsh environments is a very different world.

For example, how many fast CPUs do you know of which are rated for operation down to -40C and up to at least +100C? These are common baseline requirements for automotive applications, even for boxes which live inside the passenger compartment rather than the engine bay.

Another: most consumer silicon disappears only two or three years after launch. But designing and qualifying electronics for the harsher environment inside a vehicle is expensive, so you don't want to constantly re-do it - you want to design something really solid and just keep making it for five years, or more. That narrows the list of components you can possibly use quite a bit.

Yeah definitely. I imagine the product design cycles for vehicles are pretty long, so even by the time the car launches it has been a good four years since the devices were qualified for automotive, which is often well after they were actually new.
