|
the real commitment to x86 was when they sold XScale, the ARM line that was doing quite well at the time, off to Marvell. you can talk about dedicating resources to the high end, and I distinctly recall Otellini talking up "if they sell 100 smartphones, we sold a $600 server core to the backend" explanation for why getting eaten alive from below was fine and dandy, but that was the big bet in my recollection
|
# ? Oct 3, 2022 17:47 |
|
That Intel decided the high margin low volume / low margin high volume split model for leading edge fab utilization economics that worked so well for servers/desktops wouldn't be threatened by someone doing the same with phones in volume was puzzling. Sure, they still sell a lot of desktop chips (at probably higher margins than phone chips), but if TSMC (and Samsung) didn't have customers and volume they'd have a hard time building leadership fabs.
|
# ? Oct 3, 2022 19:23 |
|
ExcessBLarg! posted:But what if Intel hadn't announced Itanium and exclusively pushed along x86 designs? Would there have been any significant difference in the outcome? I don't know how much the DEC/Compaq sale was the result of Itanium's announcement; maybe DEC would've held on and tried to compete against Intel in the server market? PowerPC tried to hold on a bit with Apple, but the collapse of the PowerPC Reference Platform meant it would be forever volume-constrained, too. Neither Motorola nor IBM could possibly hope to compete with x86/x86-64 on volume. The biggest volume drivers, the consoles (XB360, PS3, GameCube/Wii), were all using relatively constrained processor designs, too, not the high-performance ones Apple demanded. Even the supercomputers were mostly using slower designs, just massively parallel. Some of the IBM big iron used fast POWER designs, but they had no application to the wider market and there were never that many of them. Who does have the volume to compete? The same story that let x86 win: something sold in enormous quantities no matter how slow, driving a virtuous investment cycle. That something is ARM.

JawnV6 posted:you can talk about dedicating resources to the high end, and I distinctly recall Otellini talking up "if they sell 100 smartphones, we sold a $600 server core to the backend" explanation for why getting eaten alive from below was fine and dandy, but that was the big bet in my recollection

in a well actually posted:That Intel decided the high margin low volume / low margin high volume split model for leading edge fab utilization economics that worked so well for servers/desktops wouldn't be threatened by someone doing that with phones in volume was puzzling.

Otellini considers not going for mobile when Apple asked them to be his biggest mistake. They could've had the 100 smartphones and the $600 server backend core.
They then thought they could make up for lost time with process superiority by shrinking Atom, and were never able to make it work. Worse, it meant that TSMC would receive a flood of money from Apple, which eventually let them take the lead in fabrication process (assisted by Intel's failed bet on cobalt in their 10nm process). Intel's commanding lead was entirely based on process superiority, which is now gone. At best they'll probably be able to compete with TSMC again if they can execute well over the next few years. phongn fucked around with this message at 22:20 on Oct 3, 2022 |
# ? Oct 3, 2022 20:20 |
|
phongn posted:Otellini considers not going for mobile when Apple asked them to be his biggest mistake.
|
# ? Oct 3, 2022 20:48 |
|
phongn posted:There's a nice thread by John Mashey on why DEC ultimately abandoned VAX (a very complex instruction set): Intel and AMD had huge volume that fed back in a virtuous cycle, and all the other guys didn't

That's not what that post says at all. They could not keep up with RISC chips in the late 80s, which is why they moved to the DEC Alpha. Nobody in DEC was worried about 386's and poo poo in the server market yet. Not least because it would be a good decade and a half before 64-bit x86 was a thing.
|
# ? Oct 4, 2022 10:41 |
|
I think it's generally accepted that RISC designs were outpacing CISC designs until they reached the knee in the curve where building a processor around a RISC-ish internal architecture and an instruction-set translator (or even just straight-up software emulation) became competitive again. For x86 that happened with the P6/Pentium Pro? I'm not really an architecture guy.

I'm sure DEC engineers in 1986 considered that and felt it was infeasible, and just pushed on with a new architecture. Maybe they didn't expect VAX to have such a long tail in the market. Now, if they had obstinately stuck with VAX through the 90s, could they have considered that? Or is there something specific about the VAX architecture that makes it wildly difficult to implement that way compared to x86 or 68k?
|
# ? Oct 4, 2022 14:50 |
|
in a well actually posted:Sure, they still sell a lot of desktop chips (at probably higher margins than phone chips) but if TSMC (and Samsung) didn’t have customers and volume they’d have a hard time building leadership fabs.

Sorry if this is getting too close to politics and/or conspiracy theory, but is it possible that TSMC doesn't actually make much or any profit and that the Taiwanese government pumps money into TSMC in lieu of military spending? The presence of TSMC and its importance to western multinational corporations is a good reason for the US and the Euro zone to have Taiwan's back when it comes to dealing with China.
|
# ? Oct 4, 2022 15:27 |
|
PBCrunch posted:Does Samsung really have leadership fabs? As far as I understand it Samsung does a great job making memory and storage chips, but its logic manufacturing is kind of second rate.

Second or third best is still leadership-class. TSMC has better 10/8/7nm gens, but Samsung is still lightyears ahead of SMIC or GloFo.

quote:Sorry if this is getting too close to politics and/or conspiracy theory, but is it possible the TSMC doesn't actually make much or any profit and that the Taiwanese government pumps money into TSMC in lieu of military spending? The presence of TSMC and its importance to western multinational corporations is a good reason for the US and the Euro zone to have Taiwan's back when it comes to dealing with China.

Nah. Taiwan's entire military budget was about $20B/yr; TSMC's revenues were about $60B/yr. It's a public company and you can see revenue going in and chips going out. TSMC is a success story for sustained industrial policy and soft power, but its market position is the product of 30 years of sustained work and larger market trends.
|
# ? Oct 4, 2022 16:15 |
|
One CPU I've not heard mentioned here, and which I only really know about because I happened to work at Philips at the time, was the TriMedia media processor. It was a 5-issue VLIW chip which ran at about 100MHz and was used in various set-top boxes; later versions were used in some early smart-for-the-time TVs.

It seemed pretty neat, and the tools we used had a pretty decent and standards-compliant-for-the-time C and C++ compiler, and you could whack in inline assembly which looked like C function calls (I forget what the exact nomenclature was) to use some of the more vector-ish or media-specific shuffle instructions that the compiler wouldn't automatically generate.

It was pretty funky, although I stayed well away from actually writing assembly for it whenever I could, since as well as being 5 instructions wide, each instruction took a different number of cycles, and it could only write to 5 destination registers per cycle, otherwise it would just lock up (presumably with some sort of debug trap, I can't remember). There was also a cycle or two of delay slot on any branch instruction, and it allowed enabling/disabling each instruction individually, but these predicates came from one of the 128 general-purpose registers rather than separate flags (as on ARM or x86, for example). Keeping code straight-line where possible was important for getting the best performance out of it, and I think all this meant that the compiler, and especially the instruction scheduler, had to work pretty hard.

(Seems that according to the Wikipedia page here: https://en.wikipedia.org/wiki/TriMedia_%28mediaprocessor%29 it kept going for about 12 years before finally being canned.) legooolas fucked around with this message at 23:57 on Oct 4, 2022 |
# ? Oct 4, 2022 18:00 |
|
feedmegin posted:That's not what that post says at all. They could not keep up with RISC chips in the late 80s, which is why they moved to the DEC Alpha. Nobody in DEC was worried about 386's and poo poo in the server market yet. Not least because it would be a good decade and a half before 64-bit x86 was a thing.

ExcessBLarg! posted:I think it's generally accepted that RISC designs were outpacing CISC designs until they reached the knee in the curve where building a processor around a RISC-ish internal architecture and an instruction-set translator (or even just straight-up software emulation) became competitive again. For x86 that happened with the P6/Pentium Pro? I'm not really an architecture guy.

quote:I'm sure DEC engineers in 1986 considered that and felt it was infeasible and just pushed on with a new architecture. Maybe they didn't expect VAX to have such a long tail in the market.

As ugly as it was, what was then x86 was substantially simpler and easier to break down, and benefitted from high sales volume to keep the money firehose going. And of course, AMD grafted on a 64-bit extension that was inelegant but more or less worked and was easy to port a compiler to.

As an aside, I kinda wish that IBM had chosen the M68000 instead of the 8086 for the original IBM PC; it was a much cleaner design with a vastly nicer orthogonal ISA. Some people even made an out-of-order, 64-bit version. phongn fucked around with this message at 20:35 on Oct 4, 2022 |
# ? Oct 4, 2022 20:27 |
|
phongn posted:As an aside, I kinda wish that IBM had chosen the M68000 instead of the 8086 for the original IBM PC; it was a much cleaner design with a vastly nicer orthogonal ISA. Some people even made an out-of-order, 64-bit version.

If you're what-ifing that, you have to change how the ISA evolved. 68K lost badly to x86 in the mid-1980s, not just the early 80s when IBM selected the 8088 because it was available and cheap at a time when the 68K was neither.

68K took a sharp turn for the worse with the 68020. Motorola's architects got blinded by that orthogonality and beauty and tried to continue the old "close the semantic gap between assembly and high level languages" CPU design philosophy that had led to what we now call CISC. The changes they made were all very pretty on paper, but made it hard to design chips with advanced microarchitectural features. This played a part in 68K falling well behind instead of keeping pace with x86.

(Apollo manages to be OoO because it's a bunch of Amiga cultists with no completely agreed upon project goal other than making something they think is cool to run AmigaOS on. With no commercial pressures, you don't have to simultaneously worry about things like clock speed and power, which makes it easier to do OoO just because you can.)

You can learn more by finding more old Mashey usenet posts! He had a neat series breaking down what makes a RISC a RISC, down to detailed tables comparing ISA features. x86 ends up being substantially closer to RISC than 68020, and in one of the most important ways (addressing modes).
|
# ? Oct 4, 2022 22:10 |
|
BobHoward posted:If you're what-ifing that, you have to change how the ISA evolved. 68K lost badly to x86 in the mid-1980s, not just the early 80s when IBM selected the 8088 because it was available and cheap at a time when the 68K was neither.

While the 68020 started getting over-complex, Intel also made its own mistakes with the 286 (ref. Gates' reference to it being "brain-dead"). Motorola did seemingly realize its mistakes and removed some of those instructions later on, so I think some of these design issues could've been overcome? I don't think it was as complex as, say, VAX or iAPX 432. The 68060 was competitive with P5, at least.

As for Apollo, I know it's made by Amiga fanatics and not a 'real' design with real commercial constraints. It's just kind of a neat project? There are people who dream of the WDC 65C832, too, for the sole reason they liked the accumulator-style MOS 6502.

(I've read a good amount of Mashey's posts on yarchive; I actually discovered that site first for all its rocketry tidbits.) phongn fucked around with this message at 22:50 on Oct 4, 2022 |
# ? Oct 4, 2022 22:46 |
|
Amiga weirdos are the best. That Apollo thing is neat!
|
# ? Oct 5, 2022 04:29 |
|
legooolas posted:inline assembly in which looked like C function calls (I forget what the exact nomenclature was)

Are you thinking of compiler intrinsics?
|
# ? Oct 5, 2022 05:58 |
|
phongn posted:It did strongly influence Postscript, right?

Definitely; they're both stack oriented, as is SPL, the HP System Programming Language for the HP 3000 that was a contemporary of FORTH. The differences between FORTH and PostScript are substantial, though; it's kind of like looking at BCPL and then looking at C. HP's RPL language for the high-end calculators like the HP-48SX is similarly advanced relative to FORTH.
|
# ? Oct 5, 2022 06:05 |
|
VostokProgram posted:Are you thinking of compiler intrinsics?

Yes that's it! Presumably they're still a thing, but with compilers doing much more in the way of vectorisation etc they aren't required as often.
|
# ? Oct 5, 2022 09:01 |
|
legooolas posted:Yes that's it! Presumably they're still a thing, but with compilers doing much more in the way of vectorisation etc they aren't required as often.

You would be surprised. Auto-vectorisation is pretty hit and miss.
|
# ? Oct 5, 2022 12:05 |
|
phongn posted:I know there were a lot of sound business reasons for IBM picking the Intel processor, not the least price, second source availability, etc. I just know it was a candidate, and for its later ISA faults it did have a lot going for it that wouldn't really appear on Intel until the 386.

Also, that the 8086/8088 looked like an 8080 from a distance was beneficial, given the popularity of CP/M in business at the time.
|
# ? Oct 5, 2022 14:54 |
|
feedmegin posted:You would be surprised. Auto-vectorisation is pretty hit and miss.

Yeah, all the SIMD-enabled math libraries I know of use either intrinsics or inline asm.
|
# ? Oct 5, 2022 18:10 |
|
phongn posted:I know there were a lot of sound business reasons for IBM picking the Intel processor, not the least price, second source availability, etc. I just know it was a candidate, and for its later ISA faults it did have a lot going for it that wouldn't really appear on Intel until the 386. Not having to deal with all the different types of memory models on x86 from the start would've been nice (though of course 68K had its own problems with developers using the upper address byte because it "wasn't used" at first). Not having to deal with the weird x87 stack-based FPU would also be nice. For sure. The original 68000 was so much cleaner than x86! quote:While the 68020 started getting over-complex, Intel also made its own mistakes with the 286 (ref. Gates' reference to it being "brain-dead"). Motorola did seemingly realize its mistakes and removed some of those instructions later on, so I think some of these design issues could've been overcome? I don't think it was as complex as, say, VAX or iAPX 432. The 68060 was competitive with P5, at least. Not sure you can say the 68060 was truly competitive with P5. It was extremely late to market and its clock speed was disappointing. It wasn't new instructions that were the problem, it was new addressing modes, made available to all existing instructions. They were quite fancy. Stuff like (iirc) double indirection - dereference a pointer to a pointer. For many reasons (which Mashey gets into at some point) it's difficult to make high performance implementations of an ISA which generates anything more than a single memory reference per instruction. Despite all its ugliness, this is something x86 actually got right. Motorola wasn't able to get rid of this stuff in 68K proper. Instead, they defined a cut-down version and called it a new and incompatible CPU architecture, ColdFire. 
I think this even extended to removing stuff from the baseline 68000 ISA - the idea was "let's review 68K and remove everything which makes it obviously not-a-RISC". It could not boot unmodified 68K operating systems. Oddly enough, Intel got away with its 286 mistakes because they were so bad almost nobody tried to use them. The market generally treated the 286 as just a faster 8086. IIRC, OS support was limited to an early (and not very widely used) version range of OS/2. Maybe things would have moved eventually, but the 386 offered an obviously superior alternate idea for extending x86, at which point more or less everyone dropped all plans to use the 286 model. Still, AFAIK, Intel has kept all the 286 weirdness hiding in the dusty corners of every CPU they've made. I think they've only recently started talking about removing some of the legacy stuff. It's very hard to subtract features from an ISA once they're fielded.
|
# ? Oct 5, 2022 22:19 |
|
I suppose my fondness for M68K is because it was my first assembly language (early CS courses; advanced courses used MIPS) and it powered a bunch of systems I had only good memories of (Macintosh, TI-89, etc.). I wonder if Intel also avoided some of those mistakes (the 286 aside) because they were going to do all the super-CISC close-language-coupling in the iAPX 432 instead.
|
# ? Oct 6, 2022 17:41 |
|
https://www.tomshardware.com/news/risc-v-laptop-world-first
|
# ? Oct 10, 2022 03:32 |
|
So why is x86 less power efficient than ARM? Just backwards compatibility with 16 bit instructions/a more complex instruction set, that's all?
|
# ? Jan 11, 2023 17:57 |
|
It isn't, really (CISC decoders etc. don't eat that many joules relative to the ALUs or caches). A lot of the perf diff is that the ARM designs on the market are optimized for power efficiency, and also that Intel's process teams ate poo poo for a decade while TSMC and Samsung got ahead. One of the secrets to Apple's efficiency is that, since they are the purchasers of their own CPUs, they can optimize the design for performance rather than for density (maximum cores per die).
|
# ? Jan 12, 2023 01:13 |
|
So why has there never been a successful x86 phone or embedded chip?
|
# ? Jan 12, 2023 19:25 |
|
Intel was run by morons in the years it would have been relevant to make one, by the time they took it seriously it was too late because everyone in the whole market had already gotten on board with Qualcomm or designed their own.
|
# ? Jan 12, 2023 19:34 |
|
icantfindaname posted:So why has there never been a successful x86 phone or embedded chip?

Power consumption, for one. ARM from day one had crazy excellent power characteristics, including being able to run off the tiny leakage currents coming in through its I/O pins, which they discovered during its initial development.
|
# ? Jan 12, 2023 19:37 |
|
icantfindaname posted:So why has there never been a successful x86 phone or embedded chip?

There are tons of embedded x86 processors out there in things like cars, MRIs, and PLCs. You may notice something similar about all those applications.
|
# ? Jan 12, 2023 19:45 |
|
hobbesmaster posted:There are tons of embedded x86 processors out there in things like cars, MRIs and PLCs

I was surprised to see that the embedded cpu in my model 3 is an intel atom. whether that is a positive or negative probably depends on your opinion of tesla, but in either case they are a pretty high profile customer
|
# ? Jan 12, 2023 20:46 |
|
icantfindaname posted:So why is x86 less power efficient than ARM? Just backwards compatibility with 16 bit instructions/a more complex instruction set, that's all?

It also helps that ARM filled the "power efficient" niche from near the beginning, so all aspects of an ARM-platform system, including processors/SoCs, compilers, and operating systems, were designed with power efficiency as a priority and evolved from there. It's much harder to take an inefficient platform and try to nail that down after the fact.

icantfindaname posted:So why has there never been a successful x86 phone or embedded chip?

That's also to say nothing of the prevalence of ARM in Android and the availability of ARM binaries in the Play Store. I mean, Android works perfectly fine on Intel too, but it's always been a second-class citizen in the mobile space.
|
# ? Jan 12, 2023 21:20 |
|
icantfindaname posted:So why has there never been a successful x86 phone or embedded chip?

It's really really hard to make a 2 watt x86 system that isn't hot dogshit. Tons and tons of design considerations need to be made very very early in the process to get power draw low enough for that class of device, and x86 doesn't have a lot of those.
|
# ? Jan 12, 2023 23:28 |
|
idk there's a more interesting question right next door, why didn't _intel_ make a successful phone chip. And they did! Had a best-in-class offering a while back. Then in 2006 they sold off the entire XScale division to Marvell. Way up at the c-level, it was a bet on "x86" in an abstract sense over continuing to build ARM cores in house.
|
# ? Jan 12, 2023 23:41 |
|
Intel’s 5G modem efforts ate poo poo so hard Apple had to go back to Qualcomm and settle their lawsuits. I don’t recall if it was process delays or design issues (iirc both?) but they hosed a lot of companies on that. Intel also had some internal structural and process issues that prevented them from being competitive.
|
# ? Jan 13, 2023 00:39 |
|
in a well actually posted:It isn’t, really (cisc decoders etc. don’t eat that many joules relative to alu or cache.) A lot of the perf diff is that designs on the market for ARM are optimized for power efficiency, and also that Intel process teams ate poo poo for a decade while TSMC and Samsung got ahead.

I think you might be downplaying x86 decoder power a bit too much. One reason why uop caches are popular in x86 designs, and essentially nowhere else, is that cache hits allow the decoders to go idle to save power.

But more significant than the decoders themselves are the implications for everything else. An ultra-wide backend would just get bottlenecked by the decoders it's practical to build, so Intel and AMD haven't explored the wider/slower corner of the design space. They've settled on building medium-width backends with very high clocks, and that has consequences. One is that x86 cores are enormous, probably thanks to deep pipelining. Look at these two annotated die photos of M1 and Alder Lake chips with eight P CPUs each. As you say, Apple is willing to burn area with reckless abandon. Their P cores are the big chunguses of the Arm world. And yet, they're still quite small relative to x86 P cores, even accounting for all the confounding factors in those die photos (the M1 Pro is about 250mm^2, the Alder Lake about 210, different nodes, etc.).
|
# ? Jan 13, 2023 02:11 |
|
UHD posted:I was surprised to see that the embedded cpu in my model 3 is an intel atom

They're very common. Arstechnica recently reviewed the Android Automotive based infotainment system in a GMC Yukon, which runs on a Gordon Peak Atom. https://arstechnica.com/gadgets/2023/01/android-automotive-goes-mainstream-a-review-of-gms-new-infotainment-system/

Intel's collateral, since Gordon Peak isn't on ARK: https://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/intel-sb-gordon-peak-v4-13132-3.pdf

edit: I suspect this will render many in this thread speechless:

quote:Android Automotive doesn't let you sideload apps into a production car, but look up Atom A3960 Geekbench scores, and you'll see that the computer in this $78,000 vehicle is barely faster than a $35 Raspberry Pi 4. The GMC Yukon and Polestar 2 both feature one of the slowest CPUs you can buy today in any form factor.

hobbesmaster fucked around with this message at 23:29 on Jan 13, 2023 |
# ? Jan 13, 2023 23:25 |
|
UHD posted:I was surprised to see that the embedded cpu in my model 3 is an intel atom

having owned and used intel atom systems, this is a major negative.

hobbesmaster posted:They’re very common. Arstechnica recently reviewed the android automotive based infotainment system in a GMC Yukon which runs on a Gordon Peak atom. https://arstechnica.com/gadgets/2023/01/android-automotive-goes-mainstream-a-review-of-gms-new-infotainment-system/

Doesn't surprise me in the least, all infotainment systems are garbage (or become garbage after a year). Takes a long fuckin' time for my VW to boot up when you get in. It's an electric car! Just keep the computer on, jesus!
|
# ? Jan 14, 2023 00:15 |
|
…those are actually quite powerful by embedded cpu standards.
|
# ? Jan 14, 2023 00:19 |
|
The variable-length instructions of x86 make decoding more complicated than the fixed-length instructions seen on ARM. As BobHoward noted, x86 decoders are more of a bottleneck, and this is one reason why. Unfortunately, x86-64 was built to make porting x86 compilers easy, and so kept some of the old ugliness like variable-length instructions, added relatively few named registers, etc. AArch64 instead went with a cleaner slate when they rethought ARM.
|
# ? Jan 14, 2023 02:00 |
|
hobbesmaster posted:edit: I suspect this will render many in this thread speechless:

Not surprising to me at all, but then I've done some time working at a place that designed electronics for vehicles. Embedded electronics for harsh environments is a very different world.

For example, how many fast CPUs do you know of which are rated for operation down to -40C and up to at least +100C? These are common baseline requirements for automotive applications, even for boxes which live inside the passenger compartment rather than the engine bay.

Another: most consumer silicon disappears only two or three years after launch. But designing and qualifying electronics for the harsher environment inside a vehicle is expensive, so you don't want to constantly re-do it; you want to design something really solid and just keep making it for five years, or more. That narrows the list of components you can possibly use quite a bit.
|
# ? Jan 14, 2023 04:45 |
|
BobHoward posted:Not surprising to me at all, but then I've done some time working at a place that designed electronics for vehicles. Embedded electronics for harsh environments is a very different world.

Yeah, definitely. I imagine the product design cycles for vehicles are pretty long, so by the time the car launches it has been a good four years since the parts were qualified for automotive use, and qualification itself often happens well after the silicon is actually new.
|
# ? Jan 14, 2023 04:47 |