|
To be clear, I was speechless that someone reviewing an embedded product would be so unfamiliar with it. The CPU speed thing is comical, for example.
|
# ? Jan 14, 2023 04:49 |
|
|
# ? May 28, 2024 15:01 |
|
UHD posted:I was surprised to see that the embedded cpu in my model 3 is an intel atom The infotainment in my new i4 runs on an Atom A3960 (originally released in 2016), clocked at 1.9 GHz. Being made with superior German engineering, however, it's actually great.
|
# ? Jan 14, 2023 05:46 |
|
phongn posted:The variable-length instructions of x86 make decoding more complicated than the fixed-length instructions seen on ARM. As BobHoward noted, x86 decoders are more of a bottleneck and this is one reason why. phongn posted:Unfortunately x86-64 was built to be easy to port x86 compilers over to, and so kept some of the old ugliness like variable-length instructions, added relatively few named registers, etc. aarch64 instead went with a cleaner slate when they rethought ARM. how's itanium doing anyway??
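A toy sketch of the decode problem being argued about here (the opcode-to-length table is invented, not real x86 encoding): with variable-length instructions you can't know where instruction N+1 starts until you've at least partially decoded instruction N, while fixed-length boundaries are all known up front.

```python
# Toy illustration of why variable-length decode serializes:
# instruction boundaries depend on decoding the bytes before them.
# The length table here is invented, not real x86 encoding.

TOY_LENGTHS = {0x01: 1, 0x02: 2, 0x03: 3, 0x0F: 4}  # first byte -> total length

def find_boundaries_variable(stream: bytes) -> list[int]:
    """Must walk the stream sequentially: each start depends on the last."""
    starts, pc = [], 0
    while pc < len(stream):
        starts.append(pc)
        pc += TOY_LENGTHS[stream[pc]]
    return starts

def find_boundaries_fixed(stream: bytes, width: int = 4) -> list[int]:
    """Fixed-length: every boundary is known up front, trivially in parallel."""
    return list(range(0, len(stream), width))

code = bytes([0x02, 0x00, 0x01, 0x03, 0x00, 0x00, 0x0F, 0x00, 0x00, 0x00])
print(find_boundaries_variable(code))   # [0, 2, 3, 6]
print(find_boundaries_fixed(bytes(12))) # [0, 4, 8]
```

Real x86 decoders attack this with speculative length-marking and fast paths, which is roughly JawnV6's point below that the worst case is a bubble or two rather than a disaster.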
|
# ? Jan 18, 2023 00:18 |
|
Ars did a 3 part series on the history of ARM. The last part covers what should be fairly familiar territory (mobile market explosion), but I found the earlier ones very interesting: https://arstechnica.com/gadgets/2022/09/a-history-of-arm-part-1-building-the-first-chip/ https://arstechnica.com/gadgets/2022/11/a-history-of-arm-part-2-everything-starts-to-come-together/ https://arstechnica.com/gadgets/2023/01/a-history-of-arm-part-3-coming-full-circle/
|
# ? Jan 18, 2023 09:35 |
|
JawnV6 posted:ILD is not a big problem. like the theoretical worst case is a bubble or two and there's a tradeoff with some fast path logic. but it's really not That Bad like everyone acts. quote:right, right, it was a total mistake for x86-64 to not burn it all down and start from a totally fresh ISA. Why bring IA64 into this except as a strawman?
|
# ? Jan 18, 2023 21:45 |
|
It’s not a strawman, it’s an example of how a big push for a radical change from x86 was unlikely to gain market acceptance. Even with the example of arm, there’s a lot of armv8 processors running in the aarch32 execution state out there.
|
# ? Jan 18, 2023 21:51 |
|
If anyone wants to see a few fun thoughts, Cliff Maier, who worked on both K6 and K8 (as well as other projects), more or less bums around here (and sorta on MacRumors) and has nice little insights on how the sausage gets made. He is (more than a bit) biased about Intel and AMD, so take what he says with some grains of salt, but it's not unlike reading yarchive's CPU section.
|
# ? Jan 18, 2023 21:52 |
|
hobbesmaster posted:It’s not a strawman, it’s an example of how a big push for a radical change from x86 was unlikely to gain market acceptance.
|
# ? Jan 18, 2023 21:57 |
|
phongn posted:As I said, I am not arguing for a radical change like IA64, but wondering if something more than the "bolt on 64-bit to IA32" could be done, too. There is a continuum between those options. And sure, lots of ARMv8 processors are running in aarch32 mode. If anything that demonstrates that performant backwards compatibility with legacy code could be maintained while migrating to a nicer future? Sure, it's technically possible. Lots of things are. Would a cleaner break from x86 have been a market success? I have doubts. At the time, AMD was a very small player trying to punch above its weight, and Intel was the monopolist pushing a clean break from x86 in the form of Itanium. If AMD proposed its own new thing, it would have been an uphill battle. AMD needed to do something to differentiate their approach from Intel's. Designing it as an extension of IA32 rather than a replacement helped them get their foot in the door.
|
# ? Jan 19, 2023 03:37 |
|
ConanTheLibrarian posted:Ars did a 3 part series on the history of ARM. The last part covers what should be fairly familiar territory (mobile market explosion), but I found the earlier ones very interesting: Very interesting articles. Thanks for posting them!
|
# ? Jan 19, 2023 13:25 |
|
phongn posted:And yet M1 has gone wider than any x86-64 microarchitecture, and without much trouble feeding that extra-wide design? did I miss something and intel was selling fully integrated consumer electronics with the ability to pivot parts of the stack internally for free? oops! my mistake there, AMD would've been selling fully-integrated devices at a time they weren't even making their own chipsets. every way I try to approach this is fantastical phongn posted:AMD made the right decision given the market at the time, which was to make it easy for everyone to port over existing IA32 compilers. It also meant they brought in a lot of old cruft that could've been perhaps rethought. My surely obvious point was that perhaps a more aggressive ISA design could've been done, given the example of ARMv8's change from ARMv7. phongn posted:As I said, I am not arguing for a radical change like IA64, but wondering if something more than the "bolt on 64-bit to IA32" could be done, too. There is a continuum between those options. And sure, lots of ARMv8 processors are running in aarch32 mode. If anything that demonstrates that performant backwards compatibility with legacy code could be maintained while migrating to a nicer future?
|
# ? Jan 20, 2023 01:59 |
|
JawnV6 posted:hahaha c'mon are you doing fixed-length decode or not?? this is acting like you can do both trivially instead of sharing those resources, either I'm selling a really lovely 64-bit chip with dead transistors leaking power or I'm not running 32-bit programs. neither would have sold well!
|
# ? Jan 20, 2023 05:21 |
|
hobbesmaster posted:It’s not a strawman, it’s an example of how a big push for a radical change from x86 was unlikely to gain market acceptance. That’s not because it was a big push for a radical change from x86. That’s because it was exceptionally difficult to write well-optimizing compilers for Itanium. If Itanium had been a 64-bit RISC, or even a 64-bit equivalent of i860, it probably would have taken off. Instead it was The Bizarro CPU and while it was eventually able to get some serious throughput (my Itanium 2 VMS box does pretty well running FORTRAN) the compiler problem was grossly underestimated as a factor. x86-64 and ARMv8/AArch64 succeed in part because they only go a little way afield rather than trying to radically rethink everything in cutting-edge ways. eschaton fucked around with this message at 10:10 on Jan 20, 2023 |
# ? Jan 20, 2023 10:06 |
|
eschaton posted:If Itanium had been a 64-bit RISC, or even a 64-bit equivalent of i860, it probably would have taken off. Instead it was The Bizarro CPU and while it was eventually able to get some serious throughput (my Itanium 2 VMS box does pretty well running FORTRAN) the compiler problem was grossly underestimated as a factor. And how. Bob Colwell is probably a somewhat biased source, given that he was part of the x86 faction in Intel, but the following has the ring of truthiness because how else could a disaster like Itanium happen? Robert P. Colwell Oral History posted:Anyway, for some reason, there was an organizational meeting at which Albert Yu could not appear. He designated Fred Pollack, but Fred could not appear, so Fred designated me, and I showed up. So first of all I am two organizational levels down from who is supposed to be sitting there and I ended up sitting next to Gordon Moore. This was probably about 1994 or so. The presenter happened to be the same guy who was in the front of the car from when I interviewed with the Santa Clara design team; same guy. He's presenting and he's predicting some performance numbers that looked astronomically too high to me. I did not know anything about how they expected to get there, I just knew what I thought was reasonable, what would be an aggressive boost forward and what would be just wishful thinking. The predictions being shown were in the ludicrous camp as far as I could tell. So I'm sitting and staring at this presentation, wondering what are they doing, how is it humanly possible to get what he's promising. And if it is, is it possible for this particular design team to do it. I was intensely thinking about what's happening here. Finally I just couldn't stand it anymore and I put my hand up. There was some discussion, but you have to realize none of these people were really chip designers or computer architects, with the exception of Gelsinger and Dadi Perlmutter.
|
# ? Jan 20, 2023 14:08 |
|
BobHoward posted:And how. Bob Colwell is probably a somewhat biased source, given that he was part of the x86 faction in Intel, but the following has the ring of truthiness because how else could a disaster like Itanium happen? I love this, and choose to believe it. major corporate strategy has been set on grounds much weaker than 30 lines of simulated instructions. where can I read more?
|
# ? Jan 20, 2023 16:53 |
|
Subjunctive posted:I love this, and choose to believe it. major corporate strategy has been set on grounds much weaker than 30 lines of simulated instructions. where can I read more? Pretty sure I read that anecdote in https://www.amazon.com/-/es/Robert-P-Colwell/dp/0471736171/
|
# ? Jan 20, 2023 18:07 |
|
To be fair anything can be REALLY fast if all you're doing is making pretty patterns in your registers using code specifically designed to never touch anything outside of L1.
|
# ? Jan 21, 2023 00:57 |
|
Yeah vliw is like the worst unless you’re doing a dsp (or doing hand optimized science code); for general purpose servers you couldn’t choose a worse architecture*. I2 tried to fix the problems with the architecture by putting in a shitload of bandwidth, including an astounding-for-the-time 9 MB cache. * anything that came to a commercial product; I’m sure some academics have done far worse
|
# ? Jan 21, 2023 02:26 |
|
eschaton posted:If Itanium had been a 64-bit RISC, or even a 64-bit equivalent of i860, it probably would have taken off. Instead it was The Bizarro CPU and while it was eventually able to get some serious throughput (my Itanium 2 VMS box does pretty well running FORTRAN) the compiler problem was grossly underestimated as a factor. in a well actually posted:Yeah vliw is like the worst unless you’re doing a dsp (or doing hand optimized science code); for general purpose servers you couldn’t choose a worse architecture*. I2 tried to fix the problems with the architecture by putting in a shitload of bandwidth, including an astounding-for-the-time 9 MB cache. I recall reading papers where the architects thought that the enormous transistor budgets going to out of order execution could not continue to scale and that it would be better used making a huge number of named registers and using magical compiler powers to explicitly schedule highly-threaded code. As you note, it ended up working very well for online transactional processing and various database tasks and hand-written HPC code, and atrociously bad for typical branchy, business-logic pointer-chasing code. Intel eventually had to abandon pure VLIW/EPIC and their Poulson microarchitecture put back in dynamic scheduling and out of order execution (and SMT), but by then it was rather too late. quote:* anything that came to a commercial product; I’m sure some academics have done far worse Wasn't i860 also a VLIW processor with a bunch of compiler-dependent scheduling and pipelining voodoo? DeMone at RWT wrote 22(!) years ago that the promises of IA64 reminded him of Intel's overblown ones for i860 years before. phongn fucked around with this message at 05:29 on Jan 21, 2023 |
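A schematic illustration of that compiler-scheduling bet (the bundle packing here is purely invented, not Itanium's actual bundle templates): when the compiler must pack independent ops into fixed-width issue bundles at build time, a dependent chain, like pointer-chasing business logic, leaves most slots as NOPs, while independent HPC-style ops pack densely.

```python
# Toy sketch of static (compile-time) bundle scheduling, the core
# VLIW/EPIC idea discussed above. Invented format, not Itanium's.

def schedule(ops, width=3):
    """ops: list of (name, set_of_dep_names) in program order.
    An op can't share a bundle with an op it depends on; bundles are
    padded to `width` with explicit 'nop' slots."""
    bundles, current = [], []
    for name, deps in ops:
        if len(current) == width or (deps & set(current)):
            bundles.append(current + ["nop"] * (width - len(current)))
            current = []
        current.append(name)
    if current:
        bundles.append(current + ["nop"] * (width - len(current)))
    return bundles

# A dependent chain (pointer-chasing style) packs terribly:
chain = [("ld1", set()), ("ld2", {"ld1"}), ("ld3", {"ld2"})]
print(schedule(chain))
# [['ld1', 'nop', 'nop'], ['ld2', 'nop', 'nop'], ['ld3', 'nop', 'nop']]

# Independent ops (HPC/hand-tuned style) pack densely:
print(schedule([("a", set()), ("b", set()), ("c", set())]))
# [['a', 'b', 'c']]
```

An out-of-order core discovers the same independence dynamically at run time, which is why branchy, cache-missing code that the compiler can't analyze favored OoO designs over EPIC.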
# ? Jan 21, 2023 05:23 |
|
Subjunctive posted:I love this, and choose to believe it. major corporate strategy has been set on grounds much weaker than 30 lines of simulated instructions. where can I read more? It's from this: https://www.sigmicro.org/media/oralhistories/colwell.pdf
|
# ? Jan 21, 2023 06:14 |
|
in a well actually posted:Yeah vliw is like the worst unless you’re doing a dsp (or doing hand optimized science code); for general purpose servers you couldn’t choose a worse architecture*. I2 tried to fix the problems with the architecture by putting in a shitload of bandwidth, including an astounding-for-the-time 9 MB cache. Have you ever done a deep or shallow dive into itanium? I did a shallow dive once (googled technical docs and skimmed for a while), and I can't say that I came out thinking it even qualifies as a VLIW. Don't get me wrong, there are aspects which seem VLIW-inspired, but overall it seems like its own thing. They were trying hard to make something novel, I'll give them that much! Like eschaton said though, what they actually made was the Bizarro CPU. Everything's weird or bad or both, and not in a subtle way.
|
# ? Jan 21, 2023 07:22 |
|
phongn posted:Wasn't i860 also a VLIW processor with a bunch of compiler-dependent scheduling and pipelining voodoo? DeMone at RWT wrote 22(!) years ago that the promises of IA64 reminded him of Intel's overblown ones for i860 years before. My impression has always been that i860 was more RISC-ish than VLIW-ish, but that compilers for it did turn out to be a Hard Problem. That’s why the majority of use outside massively parallel supercomputing was… massively parallel graphics! The Silicon Graphics RealityEngine treated the i860 as a building block, just like supercomputing systems did. (I have a bunch of VME i860 boards that I’m looking for configuration details on… And a DEC TURBOchannel graphics card with one too, for my AXP 3000-400.) Eventually in the early 1990s compilers for the i860 got pretty decent and you could actually achieve some of the theoretical throughput. Most of that though was obviated by the uses that it wound up being put to, since something like the SGI RealityEngine will have each of its many CPUs running custom assembly that fits in the instruction cache to serve its purpose in the render pipeline.
|
# ? Jan 21, 2023 07:31 |
|
I bought a Pine64 Quartz64 board and then a bunch of add-ons, including their nice metal case. A bit spendy, but it seems like a great platform for a low-power server box.
|
# ? Jan 21, 2023 16:47 |
|
BobHoward posted:Have you ever done a deep or shallow dive into itanium? I did a shallow dive once (googled technical docs and skimmed for a while), and I can't say that I came out thinking it even qualifies as a VLIW. I’ve worked with Itanium at a previous employer; I wasn’t the one doing instruction-level optimization but I worked with swengs who were; they spent a lot of time avoiding branching at all costs (also drinking and complaining.) Agreed that it’s not exactly VLIW; I elided the “, but worse” for brevity. My favorite part was the slow x86 support.
|
# ? Jan 22, 2023 21:52 |
|
https://www.youtube.com/watch?v=6o38C-ultvw Tom Scott visits the Parkes Radio Telescope, which, it turns out, is still steered by PDP-11s. (This is mostly a telescope video, with only very brief mention and footage of the PDP racks.)
|
# ? Jan 30, 2023 08:01 |
|
can you imagine if people had actually used the MCP-1600 chipset in personal computers in the mid-1970s instead of the 8080/8085/Z80? (that’s what’s in the LSI-11, and WD also used it with different microcode to make the Pascal MicroEngine)
|
# ? Jan 30, 2023 10:38 |
|
There was an S-100 CPU card from Alpha Microsystems, the AM-100, that used the WD16. Not sure what the microcode on it was, but the issue with the WD16 was that it was slow as hell compared to other 16-bit machines of its era. 16-bit add times were about 3.5 µs, which is only marginally faster than a 2.5 MHz Z80's 4.4 µs and slower than a 4 MHz Z80's 2.7 µs. Alpha Micro's next S-100 CPU card used an MC68000.
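Those Z80 figures check out against the published cycle counts: a 16-bit ADD HL,ss is documented as 11 T-states, and execution time is just T-states divided by the clock rate.

```python
# Sanity-check of the Z80 timing arithmetic above.
# ADD HL,ss is documented as taking 11 T-states on the Z80.
T_STATES_ADD16 = 11

def add_time_us(clock_mhz: float) -> float:
    # clock in MHz = cycles per microsecond, so this yields microseconds
    return T_STATES_ADD16 / clock_mhz

print(round(add_time_us(2.5), 2))  # 4.4  -> matches the 2.5 MHz figure
print(round(add_time_us(4.0), 2))  # 2.75 -> the "2.7 µs" figure, rounded
```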
|
# ? Jan 30, 2023 12:11 |
|
except they set it up byte-swapped to be compatible with little-endian

S-100 is only an 8-bit bus, but on the other hand the M68000 had no A0 line, only /UDS and /LDS (upper and lower byte data strobe) to indicate which part of a word on the data bus mattered

so you could build a little state machine to generate A0 and input or output either or both halves of a word to the S-100 bus, and get a pseudo-little-endian linear address space

you know, if you didn’t want to just treat S-100 as an even-byte-only 128KB window in the loving 24MB (soon 2GB with 68012, and 4GB theoretical) address space like any sane hardware developer would

then again these were the people who pioneered backup to VHS and their OS was essentially a TOPS-10 clone for 68000 so who the hell can say, insanity abounds
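The glue logic described above boils down to a tiny truth table. A sketch of it (signal names follow the 68000 bus; the byte-swapped lane assignment is the board designer's choice as described in the post, not anything standard):

```python
# Sketch of the byte-lane glue: the 68000 emits no A0, only active-low
# /UDS and /LDS strobes selecting halves of the 16-bit data bus. Glue
# logic synthesizes A0 for the 8-bit S-100 bus. In the straight
# big-endian mapping /UDS (D8-D15) would be the even byte; mapping
# /LDS to the even address instead is the byte swap the post describes.

def derive_a0(uds_n: int, lds_n: int) -> list[int]:
    """Return the synthesized A0 value(s) for one 68000 bus cycle.

    uds_n/lds_n are active-low (0 = asserted). A word access asserts
    both, so the 8-bit bus must run two sub-cycles (A0=0 then A0=1)."""
    cycles = []
    if lds_n == 0:   # lower lane (D0-D7) -> even address in this scheme
        cycles.append(0)
    if uds_n == 0:   # upper lane (D8-D15) -> odd address
        cycles.append(1)
    return cycles

print(derive_a0(1, 0))  # [0]    byte access, lower lane
print(derive_a0(0, 1))  # [1]    byte access, upper lane
print(derive_a0(0, 0))  # [0, 1] word access -> two 8-bit bus cycles
```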
|
# ? Jan 31, 2023 10:59 |
|
My VAX let out the magic smoke. Powered it on the other night to fiddle around with netbooting, and a couple minutes in it powered off and smelled distinctly of a capacitor that decided it didn’t want to live on this planet anymore. Anyone ever recap a MicroVAX 3100 before?
|
# ? Feb 15, 2023 03:19 |
|
I know a bunch of people who have recapped the PSUs. The VAXen themselves typically don’t need it.
|
# ? Feb 15, 2023 04:45 |
|
eschaton posted:I know a bunch of people who have recapped the PSUs. The VAXen themselves typically don’t need it. Jesus Christ, I haven’t felt this young in years.
|
# ? Feb 15, 2023 04:50 |
|
we still need a YOSVAX running somewhere
|
# ? Feb 15, 2023 05:15 |
|
eschaton posted:we still need a YOSVAX running somewhere There’s a reason I was trying to netboot it. What’s more yosvax than a yosvax running a bespoke operating system?
|
# ? Feb 15, 2023 05:31 |
|
Nice!!
|
# ? Feb 15, 2023 05:36 |
|
Are there any non-Chinese companies leaning into RISC-V other than SiFive? I was getting really excited about RISC-V from a security perspective, having a new option not plagued by closed 'management engine' processors or whatever in god's name Pluton will be doing once it's integrated in all AMD & Intel x86 machines. But at the end of the day you need someone to actually fab RISC-V silicon and make a choice not to infect it with sketchy poo poo. With SiFive cozying up to Intel, I'm losing hope fast that there will actually be anything of decent performance (with real virtualization features, for example) that's actually clean of bullshit.
|
# ? Mar 24, 2023 23:44 |
|
Rescue Toaster posted:Are there any non-chinese companies leaning into RISC-V other than SiFive? I was getting really excited about RISC-V from a security perspective, having a new option not plagued by closed 'management engine' processors or whatever in god's name Pluton will be doing once it's integrated in all AMD & Intel x86 machines. Western Digital, for disk drive controllers. Probably not the answer you're hoping for. It's not clear to me how RISC-V can escape from the deep embedded world. It's lacking in several areas compared to 64-bit Arm, and perhaps more importantly, the reason 64-bit Arm is making any headway against x86 in desktop computing is the giant boost it got from cell phones. Pluton paranoia is generally a bit overwrought. https://mjg59.dreamwidth.org/58125.html If you want a non-x86 personal computer with real virtualization features that's clean of bullshit, it's already here. Just buy an Apple Silicon Mac. There's nothing equivalent to SMM or TrustZone, meaning the OS doesn't have a hypervisor running above it stealing cycles to do random bullshit now and then. And Apple's secure boot is a very clean design with some novel features. Most notably, every OS on the machine has its own independent boot security state, and the minimum state amounts to the user attesting to the machine "yes you should boot this binary because I, the computer's owner, trust it". When you do this it locally signs the binary and stores secrets in the Secure Enclave (Apple's TPM equivalent), making it possible to check for tampering at boot time. Since it's able to check the integrity of a binary not signed by Apple, anyone can build a secure boot chain for a third party OS on top of Apple's infrastructure without asking Apple. This is a breath of fresh air compared to Secure Boot on the PC, which requires a vendor public key to be preloaded into the firmware before it will trust anything. 
Most PCs ship with only Microsoft public keys, so these days Linux distros have to hand Microsoft some money to get their bootloaders signed.
|
# ? Mar 25, 2023 11:51 |
|
I'll have to look closer at the M1s, thanks. Unfortunately they're not really available in all the form factors I would like. But still, I'll keep an eye out if any of the more security-oriented distros eventually embrace it. For Pluton it was more that, if you just want a TPM on the chip, you could build or use an open TPM design. Instead we get yet another thing that can only run MS-signed firmware and can be changed at any time, without a clear explanation of the limits of what it could do given the hardware interconnections. All we get is 'this is what the current firmware does, and oh BTW that firmware could be changed at any time.' But that's getting off-topic for this thread. My hopeless optimism is that one day we get something like a Raspberry Pi 4 running RISC-V without all the binary blobs the Pi depends on, and from a reasonably trustworthy supplier. But even that is very optimistic unless it takes off for phones, as you say. (Though maybe with ARM getting desperate for cash?) A mid-range PC replacement with full virtualization and IOMMU support, etc. is pure fantasy of course. If anyone is likely to ever build such a processor I think it would be some mfg in China working to get around US sanctions/dependency, and that's not really something I would want to trust, for me personally. Rescue Toaster fucked around with this message at 13:32 on Mar 25, 2023 |
# ? Mar 25, 2023 13:25 |
|
Rescue Toaster posted:Are there any non-chinese companies leaning into RISC-V other than SiFive? I was getting really excited about RISC-V from a security perspective, having a new option not plagued by closed 'management engine' processors or whatever in god's name Pluton will be doing once it's integrated in all AMD & Intel x86 machines. https://riscv.org/news/2021/09/rivo...-semi-analysis/
|
# ? Mar 27, 2023 06:09 |
|
Rescue Toaster posted:Are there any non-chinese companies leaning into RISC-V other than SiFive? https://www.tomshardware.com/news/tenstorrent-shares-roadmap-of-ultra-high-performance-risc-v-cpus-and-ai-accelerators
|
# ? Mar 30, 2023 16:52 |
|
|
# ? May 28, 2024 15:01 |
|
mdxi posted:https://www.tomshardware.com/news/tenstorrent-shares-roadmap-of-ultra-high-performance-risc-v-cpus-and-ai-accelerators oh poo poo, I led their seed round years ago. I didn't know they were into RISC-V now, but it makes sense for them for sure
|
# ? Mar 30, 2023 17:24 |