feedmegin
Jul 30, 2008

Kazinsal posted:

VAX was a CISC-architecture minicomputer that replaced DEC's venerable PDP-11 minicomputer series by taking the same general ISA theory and extending it to 32 bits with fully baked-in support for virtual memory management and privilege levels (thus the name, Virtual Address eXtension). If you find a working VAX, I will be quite envious of you, but not of your power bill.

My old workplace had a bunch of MicroVAXes in storage up until about 2016? But then chucked them all out to be recycled. And wouldn't let me sneak one out myself :mad:

Those are no worse than any other 90s-ish workstation for power use, really; they looked like a RISC workstation of the era too.


feedmegin
Jul 30, 2008

PCjr sidecar posted:

Itanium is worth a mention as well. Intel corporate strategy wanted a clean-sheet 64-bit ISA to get away from x86 licenses; add market consolidation away from all the different legacy workstation CPUs like HP's PA-RISC, the end of Dennard scaling killing the expected frequency scaling, over-reliance on compilers to make VLIW/EPIC work, and Intel corporate culture, and you had a perfect storm for a market disaster. It was a real pain in the rear end to work with.

It didn't help that they cooperated directly on it with HP, who wanted a replacement for PA-RISC (and were its main users in recent times), and both sides basically jammed as many features as they could into a CPU that was designed to be as simple in hardware terms as possible to attain a high clock rate.

feedmegin
Jul 30, 2008

BobHoward posted:

Also, on ARM:

It should be noted that 64-bit ARM, aka aarch64 or A64, isn't truly a descendant of the 1980s 32-bit ARM, which has been retroactively named aarch32 or A32. aarch64 is a mostly-clean-sheet redesign.

Why is this important to highlight? Because the other way is what AMD chose for x86-64. It used x86 prefix bytes to add new 64-bit only opcodes, but even when the CPU is in 64-bit mode it's still legal to execute old 32-bit instructions. You can write shims to allow 32-bit code to call into 64-bit libraries, and vice versa.

With aarch64, the CPU's decoders are either in aarch64 mode where they recognize only the new 64-bit instruction encoding, or in aarch32 mode where they only understand the old 32-bit encoding. The encodings are too incompatible to support both at the same time. Decoder mode switches are only possible as a side effect of privilege level changes - hypervisor entry/exit or kernel entry/exit - so userspace 64-bit code can never call 32-bit libraries, or vice versa.

More importantly, mode support is optional. The ARMv8 spec is written to allow both aarch32-only and aarch64-only implementations. Apple went 64-bit only in their A-series iPhone/iPad chips a long time ago, and hasn't changed course now that they're transitioning the Mac to Arm. Arm itself has made some announcements about future cores transitioning to 64-bit only. So, the future of Arm is a relatively clean break from 1980s Arm.

You sort of forgot to mention Thumb. The world now is basically AArch64 (big boy processors) or Thumb (microcontrollers); classic ARM is legacy, but both of those are going forward. Cortex-M isn't going anywhere.

feedmegin
Jul 30, 2008

BobHoward posted:

I did kinda skip it, yeah. It's part of aarch32 in the Arm v8 spec, so for the record, the full set of Arm v8 operating modes and instruction sets is:

aarch32 mode: T32 and A32 ISAs
aarch64 mode: A64 ISA

You're completely right that aarch32 is staying around for applications like microcontrollers.

(and who knows, maybe some of the platforms that use dual-mode CPUs today will find aarch32 too sticky to leave behind. Apple didn't, but they made that transition on iOS where they could just set a sunset date for allowing 32-bit code on the App Store.)

Ah, but the whole shtick with the Cortex-M series is they ONLY do Thumb. No A32 support at all. They boot in Thumb and that's all you get. For microcontrollers classic ARM is already dead.

Edit: to expand on this, because most people here probably don't write ARM assembler -

RISC CPUs (i.e. non-x86, later than about 1983) generally use a fixed instruction size, usually 32 bits. This makes instruction fetch much easier (part of the philosophy of keeping things simple) because, unlike x86, you don't have to look at the first byte, decide whether you need to fetch a second byte, then decide how many more bytes you need to fetch to have the complete instruction. That mattered a lot in the 80s, rather less now with the massive silicon budgets we have these days. ARM does not deviate from this: a classic ARM instruction is 32 bits, into which it has to fit, depending on the instruction, e.g. two source registers and one destination register, or one source register, one immediate value and one destination register.
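
To make that concrete, here's a rough C sketch (mine, not from the original post) pulling apart one classic 32-bit ARM data-processing word - 0xE0810002, the standard encoding of ADD r0, r1, r2 - to show how the condition code, opcode and three register fields all have to fit in a single 32-bit instruction:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 0xE0810002 is the standard encoding of "ADD r0, r1, r2":
     * everything - condition, opcode, two sources, one destination -
     * has to fit in this single 32-bit word. */
    uint32_t insn = 0xE0810002u;

    unsigned cond   = (insn >> 28) & 0xF;   /* condition code (predication) */
    unsigned opcode = (insn >> 21) & 0xF;   /* 0x4 = ADD */
    unsigned rn     = (insn >> 16) & 0xF;   /* first source register */
    unsigned rd     = (insn >> 12) & 0xF;   /* destination register */
    unsigned op2    =  insn        & 0xFFF; /* second operand (here: register r2) */

    printf("cond=%X opcode=%X Rn=r%u Rd=r%u operand2=0x%03X\n",
           cond, opcode, rn, rd, op2);
    return 0;
}
```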

Well, that's nice, but now the 90s roll along and ARM is being used in an embedded context. 32 bits for every instruction uses up a lot of space (in 90s terms) - can we bring it down somehow? ARM nicks an idea from MIPS, at the time a competitor in the embedded space, which is to have a subset of instructions that can be encoded in 16 bits (the MIPS version is called MIPS16). You can switch the processor from 32-bit instruction mode to 16-bit instruction mode and back with a special jump instruction (so as a practical matter, a given function tends to be entirely in one or the other). It's effectively a form of compression: what the core actually executes is the same, it's just encoded and decoded differently. This is Thumb (retroactively: Thumb 1).
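
For contrast with the 32-bit sketch above, here's the Thumb-1 side (again illustrative, not from the post): the same ADD r0, r1, r2 fits in one 16-bit halfword, 0x1888 in the standard Thumb encoding, with only three bits per register field - which is exactly the "compression" being described:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 0x1888 is the standard Thumb-1 encoding of "ADDS r0, r1, r2":
     * the same operation as above, but squeezed into 16 bits, with
     * only 3 bits per register field (so only r0-r7 are reachable). */
    uint16_t insn = 0x1888u;

    unsigned rm = (insn >> 6) & 0x7;  /* second source register */
    unsigned rn = (insn >> 3) & 0x7;  /* first source register  */
    unsigned rd =  insn       & 0x7;  /* destination register   */

    printf("ADDS r%u, r%u, r%u\n", rd, rn, rm);
    return 0;
}
```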

Now someone comes along in the mid 2000s and decides they want to make the smallest, most space-efficient microprocessor possible. They decide the best way to do this is to ditch classic 32-bit ARM support altogether and just use Thumb. Recall that Thumb 1 is a subset of the ARM ISA, so they add just enough new Thumb instructions to do the stuff Thumb 1 can't. This is Thumb 2, and different Cortex-Ms support different bits of it.

Having written a compiler as a hobby project that targets both classic ARM and a Cortex-M0, I can tell you Thumb is a bitch to compile for, btw (unsurprisingly, given its limited encoding space). Most instructions are x86-style dest = dest + one source register (remember earlier? this saves having to fit a third register field into 16 bits). You usually only have direct access to 8 registers, not 16 (saves a bit per register field). Immediates have a frigging tiny range. Branch displacements too. Constant pools all over the place. You lose that kind of cool predication-for-everything thing that classic ARM has, but then AArch64 ditches that too (it was a great idea in 1990, but it messes with modern OoO cores' performance). It would be horrible to write much assembler by hand for, but, well, it's 2021; even for microcontrollers we don't do that much any more. (One nice feature is that its interrupt handlers are specified to follow the C calling convention for the platform - no special handling needed other than in the linker, a Thumb interrupt handler is Just A Function, and you can write firmware for a Cortex-M without a line of assembler.)
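
As a sketch of that last point (names like SysTick_Handler, _estack and the .isr_vector section are just the usual conventions from vendor startup code, not anything specific from the post): on Cortex-M the vector table really is just an array of pointers, and a handler really is just a C function:

```c
#include <stdint.h>

/* _estack and the ".isr_vector" section name are placeholders following
 * common vendor linker-script conventions. */
extern uint32_t _estack;                 /* top of RAM, from the linker script */

void Reset_Handler(void);
void SysTick_Handler(void);

/* The Cortex-M vector table: entry 0 is the initial stack pointer,
 * entry 1 the reset handler; the remaining exception/IRQ slots are
 * elided here. The hardware reads this table directly. */
__attribute__((section(".isr_vector"), used))
static const void *vector_table[] = {
    &_estack,
    Reset_Handler,
    /* ... other exception and interrupt vectors ... */
};

static volatile uint32_t ticks;

/* Just a plain C function: the core itself stacks and unstacks the
 * caller-saved registers on exception entry/exit, so no assembler
 * wrapper is needed. */
void SysTick_Handler(void)
{
    ticks++;
}

void Reset_Handler(void)
{
    for (;;) {
        /* application code would go here */
    }
}
```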

Your Raspberry Pi or whatever supports both, btw, as does any vaguely modern desktop-y 32-bit-supporting CPU - but don't expect that to be true for too many years longer. ARM as an architecture is a lot less dependent on supporting ancient legacy code than x86 is and behaves accordingly.

feedmegin fucked around with this message at 13:47 on Jun 20, 2021

feedmegin
Jul 30, 2008

BobHoward posted:

In the end, everyone got steamrolled by the Intel fab tech and x86 engineering budget made possible by the mass market Wintel juggernaut. This included Intel's own Itanium division, even though it was the golden boy of senior executives.

Not actually EVERYONE, as it happens. Your phone isn't running on Intel, and it's not like they haven't tried. These days, neither is your Mac.

feedmegin
Jul 30, 2008

BobHoward posted:

It's not inevitable that all PCs will be using Arm CPUs in 5 or 10 years, but I do think we're in an inflection point where that could happen. It depends a great deal on Microsoft having the desire and competence to push it forwards.

I mean, it depends. PCs in the trad sense are a bit of a shrinking market and have been for a while now; below them we have phones, tablets and I guess Chromebooks, and that's pretty much ARM land. Above that we have servers, which care a whole lot less about ISAs, and after years of talk ARM servers are actually beginning to become a thing in e.g. AWS. Microsoft matters when it comes to the traditional desktop, but people are traditionally desktopping less.

feedmegin
Jul 30, 2008

Hasturtium posted:

I know nobody’s kicked this thread in a while, but I fell down a bit of a YouTube wormhole learning a bit about PA-RISC. Anybody have thoughts or impressions? Of all the machines I ran into in college computer labs or friends' eBay-harvested collections, it's one I never managed to encounter.

The stack and the heap grow in the opposite direction to normal; that's the main 'weird' thing I remember about it. They also did the same thing Itanium (which HP co-developed) later did: use caches that were massive by the standards of the time to make up for so-so performance.
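
If you want to see that stack-direction quirk for yourself, here's a quick C sketch (mine, and the pointer comparison between frames is strictly speaking implementation-defined, so treat it as a curiosity): on most machines it reports "downward"; on PA-RISC it would report "upward":

```c
#include <stdio.h>

/* noinline so the call really creates a new stack frame */
__attribute__((noinline))
static int callee(char *caller_local)
{
    char callee_local;
    return &callee_local > caller_local;   /* nonzero = stack grows upward */
}

int main(void)
{
    char caller_local;
    printf("stack grows %s\n",
           callee(&caller_local) ? "upward (the PA-RISC way)" : "downward");
    return 0;
}
```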

You want really weird, try https://en.wikipedia.org/wiki/HP_FOCUS - stack architecture!

As for hardware hypervisors and indeed LPARs, an AIX server will also do those just fine, it's not just a mainframe thing.

feedmegin fucked around with this message at 13:00 on Feb 11, 2022

feedmegin
Jul 30, 2008

priznat posted:

It’s interesting as a lot of SoCs are moving away from Arm to Risc-V, I imagine the sale falling through may slow it down a little bit as Arm may be more willing to get competitive with licensing fees?

I think some SoC vendors and other members of the ecosystem are talking about it specifically to get better deals out of ARM. I don't think there's any kind of seismic shift actually happening. Same as how 32-bit ARM servers were constantly touted as the future a few years ago because that got datacentres better rates out of Intel. Then everyone actually making them, e.g. Cavium, went bust because no one was actually buying them.

An NVidia acquisition would possibly have changed this, because NVidia+ARM is direct competition that makes its own silicon in a way that ARM now specifically is not. You don't want your competitor owning your architecture; ARM has got where it has by being conspicuously neutral.

feedmegin fucked around with this message at 15:23 on Feb 12, 2022

feedmegin
Jul 30, 2008

priznat posted:

For SoCs, anything that isn't requiring datapath processing will be great to move to risc-v. That's most of what I'm familiar with. Anything moving from a MIPS32 would be great on a risc-v!

Surely anything moving from a MIPS32 is on an ARM already, by now. Certainly that's been the case multiple places I've worked.

feedmegin
Jul 30, 2008

priznat posted:

A big motivator to go to risc-v is apparently the toolchain cost for arm stuff - the debugger folks like Green Hills etc. This is just what I hear from folks; I'm not really involved too much on the cpu implementation side.

Why would RISC-V change that? Clang and GCC themselves are free and open source. Meanwhile, someone producing SoCs or BSPs has no reason to charge less for proprietary toolchains and tools based on them (or e.g. commercial debuggers) than they do for ARM. Both instruction sets are equally openly documented afaik, and ARM isn't charging you to write a compiler.

feedmegin fucked around with this message at 20:02 on Feb 13, 2022

feedmegin
Jul 30, 2008

movax posted:

Cavium Octeon SoCs are MIPS IIRC, and are fairly prevalent in networking.

This is the sort of thing I'm thinking of...we used to support an Octeon-based network appliance 3 jobs ago. Then we didn't any more because they got retired (the new version was ARM), and this was ~5 years ago.

feedmegin
Jul 30, 2008

Hasturtium posted:

In talking about Itanium, what was the clear advantage it had over PA-RISC, and why’d it tank so hard despite the combination of support thrown behind it and the widespread capitulation of entrenched players on the assumption that Intel would just outspend everyone into inevitability?

As for why HP went with it, Intel were willing to bankroll it and had next-gen fabs already up and running. Developing and fabbing your own new CPUs is expensive and gets more so with each new generation, which is why the various Unix workstation companies got out of the business.

As for why it tanked: the hardware guys, who didn't understand software (specifically compilers), expected the software guys to do literal magic to compensate for a simpler hardware architecture that was supposed to be able to clock higher - only they threw everything and the kitchen sink into the ISA, so it didn't. Meanwhile, they were betting on out-of-order execution in hardware stalling out in terms of what it could do, and it didn't.

feedmegin
Jul 30, 2008

ExcessBLarg! posted:

Intel made a lot of money off the x86 and the PC, and they used their warchest to crush any competing server/workstation architectures in the 90s. The only stuff that really survived that purge was embedded since x86 couldn't compete on low power, and from there grew ARM.

Well - more accurately, Intel claimed Itanium was going to eat everyone else's lunch and a bunch of other people believed them and cancelled their own architectures (coupled with the ever rising price of developing new CPUs, of course). That said, RISC lasted in the server market longer than the 90s and is still around today in places. IBM is still popping out new POWER chips even.

What really ate the workstation/low-end market was Linux (running on nice cheap Intel gear).

Edit: 'This is more of a business history question, but why did x86 win in the first place?' Well, taking a wider view, it didn't, of course. There's a dozen ARM chips out there for every x86. Your PC/laptop is x86, but that's old-fashioned now and I suspect a much smaller market than phones these days. Intel tried to get into phones, tablets and embedded and flopped face first.

feedmegin fucked around with this message at 13:18 on Oct 3, 2022

feedmegin
Jul 30, 2008

BobHoward posted:

IBM had lots of experience fending off cloners in the mainframe world, so if they'd seen the PC as a real thing before it became an extremely real thing, I'm sure they would've taken steps to make it harder to clone.

Of course, they did try this with the PS/2 - in particular, giving it a much fancier-technology system bus that was a proto-PCI, which they thoroughly locked down with patents and licensed out at a premium.

Trouble is, though, having better technology (which it did) does not trump cheap and already-existing, so nobody bought it.

feedmegin
Jul 30, 2008

phongn posted:

There's a nice thread by John Mashey on why DEC ultimately abandoned VAX (a very complex instruction set): Intel and AMD had huge volume that fed back in a virtuous cycle, and all the other guys didn't

That's not what that post says at all. They could not keep up with RISC chips in the late 80s, which is why they moved to the DEC Alpha. Nobody in DEC was worried about 386s and poo poo in the server market yet - not least because it would be a good decade and a half before 64-bit x86 was a thing.

feedmegin
Jul 30, 2008

legooolas posted:

Yes that's it! Presumably they're still a thing, but with compilers doing much more in the way of vectorisation etc they aren't required as often.

You would be surprised. Auto-vectorisation is pretty hit and miss.
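
A tiny C illustration of the hit-and-miss part (mine, not from the post): the first loop is the textbook case compilers vectorise happily, but drop the restrict qualifiers and many compilers have to add runtime overlap checks or give up; the second loop has a loop-carried dependency, so compilers generally won't auto-vectorise it at all:

```c
#include <stddef.h>

/* Textbook candidate: independent iterations, contiguous access, and the
 * restrict qualifiers promise dst and src don't overlap, so compilers will
 * usually emit SIMD code for this. */
void scale(float *restrict dst, const float *restrict src, float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* Loop-carried dependency: every iteration needs the previous result,
 * so a straightforward auto-vectoriser leaves it scalar. */
float prefix_sum_last(float *a, size_t n)
{
    for (size_t i = 1; i < n; i++)
        a[i] += a[i - 1];
    return n ? a[n - 1] : 0.0f;
}
```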

feedmegin
Jul 30, 2008

priznat posted:

I wonder how RISC-V is doing in the embedded space, they have an opening if Arm continues to try to squeeze folks on licensing their cores.

I’ve worked at a couple places where the next product was scheduled to have a risc-v core in it but it ended up going with arm so there must be some kind of pricing shenanigans arm plays when it looks like they are gonna lose out.

Well, yes, of course. Also, and contrariwise, places will threaten to go to RISC-V with no real intention of doing so, in order to get concessions out of ARM. ARM's whole thing is to squeeze people juuuuuust enough - and no more - that it's still worth going with them for the infrastructure/ecosystem advantage rather than trying something more out of left field.

feedmegin
Jul 30, 2008

JawnV6 posted:

In my experience they only let HW folks design FSMs up to a certain point of complexity, then they slap a 486/M3 in there and punt it all to software.

Or even a Cortex-M0. Those things are like 1 mm square these days, cheap as chips, and you can do a lot with 16K of SRAM or whatever.
(Source: did the software side of this two jobs ago, writing, yes, a fairly complicated FSM sitting on an FPGA to orchestrate some hardware.)

feedmegin
Jul 30, 2008

To be fair, ARM naming conventions, especially the old ones, are pretty confusing. ARMv3 (the ISA) was implemented in the ARM6 (the concrete processor design) - from 1991. I can believe it's out of patent, but it's also old AF - used in https://en.wikipedia.org/wiki/Risc_PC for example. If you actually mean the ARM3 (the processor design in the Archimedes), the big addition there is a multiply instruction, and it still has 26-bit rather than 32-bit addressing.

Meanwhile, the Cortex-M0 is a much more recent design that only does Thumb, not classic ARM (so mostly 16-bit instructions - very compact, but a bit of a bitch to write a compiler for). It literally can't run ARM6 machine code, and vice versa. It's a low-end microcontroller, so a very different space from what someone would normally be aiming for with the above (but those specs do suggest 'low-end microcontroller').

Oh, from that article: 'The only thing it omits from ARMv3 is support for the Thumb instruction set.' Err. Not sure Thumb was in ARMv3.

feedmegin fucked around with this message at 15:23 on Nov 2, 2023

feedmegin
Jul 30, 2008

Subjunctive posted:

I mean “begun to make” can do a lot of work but it feels like RISC-V is still trying to catch Itanium in terms of actual production usage.

That's about where I am, I think. Lots of people are talking about it; not many people are shipping it. You can't go on Mouser and get a bunch of RISC-V chips. There are some SBCs, I guess, but nowhere near ARM numbers.

feedmegin
Jul 30, 2008

Twerk from Home posted:

Edit: Oh yeah, Phoronix is still around and doing their thing uncorrupted, which is posting absolutely every Linux-related press release that happens and running their benchmark suite on any hardware that gets sent their way.

You say 'their', but assuming you're not doing a gender thing there, it's literally always been one dude, afaik.


feedmegin
Jul 30, 2008

Hadlock posted:

RISC-V is almost exclusively supported by Linux, so it's (probably) just a RISC-V Linux binary and talks to the kernel for audio video bindings. Presumably whatever custom kernel was compiled for the device should have adequate GPU support and whatnot.

Modern GPU support is almost entirely in userland; the kernel just handles device arbitration and sending the GPU shaders etc. that were compiled by a .so file in userspace. It has an 'unknown Imagination GPU' - if Imagination has done a RISC-V userland driver for it, cool, though I wonder why they spent that much effort on it (and how much effort they spent). Otherwise you are getting a bare, unaccelerated framebuffer.
