|
BobHoward posted:I just did a bit of google searching on "Palo Alto Shipping Company" and apparently that was a little Forth startup. Forth is such a fascinating bit of computing history; it (or maybe Chuck Moore's genius) convinced a small dedicated following that it could and should be the basis for everything, but realistically Forth had no chance of actually doing that.
|
# ¿ Sep 29, 2022 18:46 |
|
|
ExcessBLarg! posted:But what if Intel hadn't announced Itanium and had exclusively pushed x86 designs? Would there have been any significant difference in the outcome? I don't know how much the DEC/Compaq sale was the result of Itanium's announcement--maybe DEC would've held on and tried to compete against Intel in the server market? PowerPC tried to hold on a bit with Apple, but the collapse of the PowerPC Reference Platform meant it would be forever volume-constrained, too. Neither Motorola nor IBM could possibly hope to compete with x86/x86-64 on volume. The biggest volume drivers, the consoles (XB360, PS3, GameCube/Wii), were all using relatively constrained processor designs, too, not the high-performance ones Apple demanded. Even the supercomputers were mostly using slower designs, just massively parallel. Some of the IBM big iron used fast POWER designs, but they had no application to the wider market and there were never that many of them.

Who does have the volume to compete? The same story that let x86 win: something sold in enormous quantities, no matter how slow, driving a virtuous investment cycle. That something is ARM.

JawnV6 posted:you can talk about dedicating resources to the high end, and I distinctly recall Otellini talking up the "if they sell 100 smartphones, we sold a $600 server core to the backend" explanation for why getting eaten alive from below was fine and dandy, but that was the big bet in my recollection

in a well actually posted:That Intel decided the high margin low volume / low margin high volume split model for leading edge fab utilization economics that worked so well for servers/desktops wouldn't be threatened by someone doing that with phones in volume was puzzling.

Otellini considers not going for mobile when Apple asked them to be his biggest mistake. They could've had the 100 smartphones and the $600 server backend core.
They then thought they could make up for lost time with process superiority by shrinking Atom, and were never able to make it work. Worse, it meant that TSMC would receive a flood of money from Apple, which eventually let them take the lead in fabrication process (assisted by Intel's failed bet on cobalt in its 10nm process). Intel's commanding lead was entirely based on process superiority, and that is now gone. At best they'll probably be able to compete with TSMC again if they can execute well over the next few years.

phongn fucked around with this message at 22:20 on Oct 3, 2022 |
# ¿ Oct 3, 2022 20:20 |
|
feedmegin posted:That's not what that post says at all. They could not keep up with RISC chips in the late 80s, which is why they moved to the DEC Alpha. Nobody in DEC was worried about 386's and poo poo in the server market yet. Not least because it would be a good decade and a half before 64-bit x86 was a thing,

ExcessBLarg! posted:I think it's generally accepted that RISC designs were outpacing CISC designs until they reached the knee in the curve where building a processor around a RISC-ish internal architecture and an instruction-set translator (or even just straight-up doing software emulation) became competitive again. For x86 that happened with the P6/Pentium Pro? I'm not really an architecture guy.

quote:I'm sure DEC engineers in 1986 considered that and felt it was infeasible and just pushed on with a new architecture.

Maybe they didn't expect VAX to have such a long tail in the market. As ugly as it was, what was then x86 was substantially simpler and easier to break down, and it benefitted from high sales volume to keep the money firehose going. And of course, AMD grafted on a 64-bit extension that was inelegant but more or less worked and was easy to port a compiler to. As an aside, I kinda wish IBM had chosen the M68000 instead of the 8086 for the original IBM PC; it was a much cleaner design with a vastly nicer orthogonal ISA. Some people even made an out-of-order, 64-bit version.

phongn fucked around with this message at 20:35 on Oct 4, 2022 |
# ¿ Oct 4, 2022 20:27 |
|
BobHoward posted:If you're what-ifing that, you have to change how the ISA evolved. 68K lost badly to x86 in the mid-1980s, not just the early 80s when IBM selected the 8088 because it was available and cheap at a time when the 68K was neither.

While the 68020 started getting over-complex, Intel also made its own mistakes with the 286 (ref. Gates calling it "brain-dead"). Motorola did seemingly realize its mistakes and removed some of those instructions later on, so I think some of these design issues could've been overcome? I don't think it was as complex as, say, the VAX or iAPX 432. The 68060 was competitive with the P5, at least. As for Apollo, I know it's made by Amiga fanatics and not a 'real' design with real commercial constraints. It's just kind of a neat project. There are people who dream of the WDC 65C832, too, for the sole reason that they liked the accumulator-style MOS 6502. (I've read a good amount of Mashey's posts on yarchive; I actually discovered that site first for all its rocketry tidbits.)

phongn fucked around with this message at 22:50 on Oct 4, 2022 |
# ¿ Oct 4, 2022 22:46 |
|
I suppose my fondness for the M68K is because it was my first assembly language (early CS courses; the advanced courses used MIPS) and it powered a bunch of systems I have only good memories of (Macintosh, TI-89, etc.). I wonder if Intel also avoided some of those mistakes (the 286 aside) because they were going to do all the super-CISC close-language-coupling in the iAPX 432 instead.
|
# ¿ Oct 6, 2022 17:41 |
|
The variable-length instructions of x86 make decoding more complicated than the fixed-length instructions used by ARM. As BobHoward noted, x86 decoders are more of a bottleneck, and this is one reason why. Unfortunately, x86-64 was designed so that x86 compilers would be easy to port over, and so it kept some of the old ugliness: variable-length instructions, relatively few named registers, etc. aarch64 instead went with a cleaner slate when they rethought ARM.
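To make the decode point concrete, here's a toy sketch (entirely made-up encoding, nothing like real x86 prefixes/ModRM): with variable-length instructions you can't know where instruction N+1 starts until you've length-decoded instruction N, so finding boundaries is inherently serial, whereas a fixed-length ISA knows every boundary up front and can hand each decoder a slot in parallel.

```python
def variable_length_boundaries(code: bytes) -> list[int]:
    """Serial scan: in this toy encoding, the top two bits of each
    opcode byte give its length (1-4 bytes), so each boundary depends
    on having already decoded the previous instruction."""
    offsets = []
    i = 0
    while i < len(code):
        offsets.append(i)
        i += (code[i] >> 6) + 1  # length is a function of the byte itself
    return offsets

def fixed_length_boundaries(code: bytes, width: int = 4) -> list[int]:
    """Every boundary is known immediately; trivially parallelizable."""
    return list(range(0, len(code), width))

stream = bytes([0x00, 0x40, 0xAA, 0xC0, 0x01, 0x02, 0x03])
print(variable_length_boundaries(stream))  # [0, 1, 3]
print(fixed_length_boundaries(stream))     # [0, 4]
```

Real x86 decoders hide this with predecode bits, length-marker caches, and brute-force speculative decode at every byte offset, which is exactly the extra hardware the fixed-length camp gets to skip.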
|
# ¿ Jan 14, 2023 02:00 |
|
JawnV6 posted:ILD is not a big problem. like the theoretical worst case is a bubble or two and there's a tradeoff with some fast path logic. but it's really not That Bad like everyone acts.

quote:right, right, it was a total mistake for x86-64 to not burn it all down and start from a totally fresh ISA.

Why bring IA64 into this except as a strawman?
|
# ¿ Jan 18, 2023 21:45 |
|
If anyone wants to see a few fun thoughts: Cliff Maier, who worked on both the K6 and K8 (as well as …), more or less bums around here (and sorta on MacRumors) and has nice little insights on how the sausage gets made. He is (more than a bit) biased about Intel and AMD, so take what he says with some grains of salt, but it's not unlike reading yarchive's CPU section.
|
# ¿ Jan 18, 2023 21:52 |
|
hobbesmaster posted:It’s not a strawman, it’s an example of how a big push for a radical change from x86 was unlikely to gain market acceptance.
|
# ¿ Jan 18, 2023 21:57 |
|
JawnV6 posted:hahaha c'mon are you doing fixed-length decode or not?? this is acting like you can do both trivially instead of sharing those resources, either I'm selling a really lovely 64-bit chip with dead transistors leaking power or I'm not running 32-bit programs. neither would have sold well!
|
# ¿ Jan 20, 2023 05:21 |
|
eschaton posted:If Itanium had been a 64-bit RISC, or even a 64-bit equivalent of the i860, it probably would have taken off. Instead it was The Bizarro CPU, and while it was eventually able to get some serious throughput (my Itanium 2 VMS box does pretty well running FORTRAN), the compiler problem was grossly underestimated as a factor.

in a well actually posted:Yeah vliw is like the worst unless you're doing a dsp (or doing hand optimized science code); for general purpose servers you couldn't choose a worse architecture*. I2 tried to fix the problems with the architecture by putting a shitload of bandwidth in, including an astounding-for-the-time 9 MB cache.

I recall reading papers where the architects thought that the enormous transistor budgets going to out-of-order execution could not continue to scale, and that the transistors would be better spent on a huge number of named registers, with magical compiler powers explicitly scheduling highly-threaded code. As you note, it ended up working very well for online transaction processing, various database tasks, and hand-written HPC code, and atrociously badly for typical branchy, pointer-chasing business-logic code. Intel eventually had to abandon pure VLIW/EPIC: their Poulson microarchitecture put dynamic scheduling and out-of-order execution (and SMT) back in, but by then it was rather too late.

quote:* anything that came to a commercial product; I'm sure some academics have done far worse

phongn fucked around with this message at 05:29 on Jan 21, 2023 |
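The static-scheduling problem above can be shown with a toy model (my own sketch, not real IA-64 bundling): an in-order machine issues up to WIDTH independent ops per cycle in program order. Independent array math fills every slot, but a pointer chase is one long dependency chain, so the compiler has nothing to pack alongside it and most slots go empty.

```python
def cycles_needed(deps: list, width: int) -> int:
    """deps[i] is the index of the op that op i depends on, or None.
    Greedy in-order issue, unit latency, up to `width` ops per cycle --
    a crude stand-in for a statically scheduled VLIW."""
    finish = [0] * len(deps)
    issued = {}  # cycle -> ops issued that cycle
    for i, d in enumerate(deps):
        earliest = 0 if d is None else finish[d]  # wait for the dependency
        c = earliest
        while issued.get(c, 0) >= width:          # all slots full, stall
            c += 1
        issued[c] = issued.get(c, 0) + 1
        finish[i] = c + 1
    return max(finish)

WIDTH = 6  # e.g. two 3-op bundles per cycle
independent = [None] * 12               # 12 independent ops (array math)
chain = [None] + list(range(11))        # each op needs the previous one

print(cycles_needed(independent, WIDTH))  # 2 cycles: every slot full
print(cycles_needed(chain, WIDTH))        # 12 cycles: one op per cycle
```

An out-of-order core suffers the same chain latency, but it can at least overlap it with whatever independent work is nearby at runtime; a pure in-order VLIW needs the compiler to have found that work at build time, which branchy pointer-chasing code rarely allows.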
# ¿ Jan 21, 2023 05:23 |
|
|
A bunch of microprocessor guys and Linus Torvalds hang out on the forums at https://www.realworldtech.com, but the main site is a shadow of what it once was. Chips and Cheese feels like its spiritual successor. Ars' deep-dive guys are long gone (which included the guy mentioned above). A number of people left to form Tech Report when Ars shifted to become more mainstream; many of TR's people then left to join industry, and it too withered.

phongn fucked around with this message at 23:55 on Jan 18, 2024 |
# ¿ Jan 18, 2024 23:50 |