hobbesmaster
Jan 28, 2008

To be clear I was speechless that someone reviewing an embedded product would be so unfamiliar with it. The CPU speed thing is comical for example.


forbidden dialectics
Jul 26, 2005





UHD posted:

I was surprised to see that the embedded cpu in my model 3 is an intel atom

whether that is a positive or negative probably depends on your opinion of tesla but in either case they are a pretty high profile customer

The infotainment in my new i4 runs on an Atom A3960 clocked at 1.9 GHz, a chip originally released in 2016 :q:.

Being made with superior German engineering, however, it's actually great.

JawnV6
Jul 4, 2004

So hot ...

phongn posted:

The variable length instructions of x86 make decoding more complicated than the fixed-length instructions seen on ARM. As BobHoward noted, x86 decoders are more of a bottleneck and this is one reason why.
ILD is not a big problem. like the theoretical worst case is a bubble or two and there's a tradeoff with some fast path logic. but it's really not That Bad like everyone acts.
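The instruction-length-decode (ILD) tradeoff being argued about here boils down to a dependency chain: with variable-length encodings you can't know where instruction N+1 starts until you've at least partially decoded instruction N, while fixed-width encodings make every boundary known up front. A toy sketch of that difference (the length table below is made up for illustration, not real x86 encodings):

```python
# Hypothetical opcode-byte -> instruction-length table (NOT real x86).
VAR_LEN = {0x90: 1, 0xB8: 5, 0x0F: 2, 0xE8: 5}

def variable_boundaries(code: bytes) -> list[int]:
    """Find instruction start offsets in a variable-length stream.
    Each boundary depends on decoding the previous instruction first,
    which is the serial chain hardware ILD has to break with extra logic."""
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += VAR_LEN[code[pc]]  # must inspect byte at pc before knowing the next pc
    return offsets

def fixed_boundaries(code: bytes, width: int = 4) -> list[int]:
    """With fixed-width encodings, every boundary is independent of the
    instruction bytes themselves, so all decoders can start in parallel."""
    return list(range(0, len(code), width))

stream = bytes([0x90, 0xB8, 1, 0, 0, 0, 0x0F, 0x1E])
print(variable_boundaries(stream))  # [0, 1, 6]
print(fixed_boundaries(bytes(8)))   # [0, 4]
```

Real x86 decoders mitigate the serial chain with predecode bits in the instruction cache and speculative length marking, which is roughly the "fast path logic" tradeoff JawnV6 mentions.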

phongn posted:

Unfortunately x86-64 was built to make porting x86 compilers easy, and so kept some of the old ugliness like variable-length instructions, a relatively small set of named registers, etc. aarch64 instead went with a cleaner slate when they rethought ARM.
right, right, it was a total mistake for x86-64 to not burn it all down and start from a totally fresh ISA.

how's itanium doing anyway??

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Ars did a three-part series on the history of ARM. The last part covers what should be fairly familiar territory (the mobile market explosion), but I found the earlier ones very interesting:

https://arstechnica.com/gadgets/2022/09/a-history-of-arm-part-1-building-the-first-chip/
https://arstechnica.com/gadgets/2022/11/a-history-of-arm-part-2-everything-starts-to-come-together/
https://arstechnica.com/gadgets/2023/01/a-history-of-arm-part-3-coming-full-circle/

phongn
Oct 21, 2006

JawnV6 posted:

ILD is not a big problem. like the theoretical worst case is a bubble or two and there's a tradeoff with some fast path logic. but it's really not That Bad like everyone acts.
And yet M1 has gone wider than any x86-64 microarchitecture, and without much trouble feeding that extra-wide design?

quote:

right, right, it was a total mistake for x86-64 to not burn it all down and start from a totally fresh ISA.

how's itanium doing anyway??
AMD made the right decision given the market at the time, which was to make it easy for everyone to port over existing IA32 compilers. It also meant they brought in a lot of old cruft that could've been perhaps rethought. My surely obvious point was that perhaps a more aggressive ISA design could've been done, given the example of ARMv8's change from ARMv7.

Why bring IA64 into this except as a strawman?

hobbesmaster
Jan 28, 2008

It’s not a strawman, it’s an example of how a big push for a radical change from x86 was unlikely to gain market acceptance.

Even with the example of arm, there’s a lot of armv8 processors running in the aarch32 execution state out there.

phongn
Oct 21, 2006

If anyone wants a few fun thoughts: Cliff Maier, who worked on both K6 and K8 (as well as other designs), more or less bums around here (and sorta on MacRumors) and has nice little insights on how the sausage gets made.

He is (more than a bit) biased about Intel and AMD, so take what he says with some grains of salt, but it's not unlike reading yarchive's CPU section.

phongn
Oct 21, 2006

hobbesmaster posted:

It’s not a strawman, it’s an example of how a big push for a radical change from x86 was unlikely to gain market acceptance.

Even with the example of arm, there’s a lot of armv8 processors running in the aarch32 execution state out there.
As I said, I am not arguing for a radical change like IA64, but wondering if something more than the "bolt on 64-bit to IA32" could be done, too. There is a continuum between those options. And sure, lots of ARMv8 processors are running in aarch32 mode. If anything that demonstrates that performant backwards compatibility with legacy code could be maintained while migrating to a nicer future?

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

phongn posted:

As I said, I am not arguing for a radical change like IA64, but wondering if something more than the "bolt on 64-bit to IA32" could be done, too. There is a continuum between those options. And sure, lots of ARMv8 processors are running in aarch32 mode. If anything that demonstrates that performant backwards compatibility with legacy code could be maintained while migrating to a nicer future?

Sure, it's technically possible. Lots of things are.

Would a cleaner break from x86 have been a market success? I have doubts. At the time, AMD was a very small player trying to punch above its weight, and Intel was the monopolist pushing a clean break from x86 in the form of Itanium. If AMD proposed its own new thing, it would have been an uphill battle. AMD needed to do something to differentiate their approach from Intel's. Designing it as an extension of IA32 rather than a replacement helped them get their foot in the door.

KYOON GRIFFEY JR
Apr 12, 2010



Runner-up, TRP Sack Race 2021/22

Very interesting articles. Thanks for posting them!

JawnV6
Jul 4, 2004


phongn posted:

And yet M1 has gone wider than any x86-64 microarchitecture, and without much trouble feeding that extra-wide design?
genuinely struggling to find the point here? there's a zillion tradeoffs when you zoom that far out to include multiple clusters (or, as you're doing, the entire SoC) and im questioning the business side of AMD trotting out something wholly incompatible with existing software and tooling.

did I miss something and intel was selling fully integrated consumer electronics with the ability to pivot parts of the stack internally for free? oops! my mistake there, AMD would've been selling fully-integrated devices at a time they weren't even making their own chipsets. every way I try to approach this is fantastical

phongn posted:

AMD made the right decision given the market at the time, which was to make it easy for everyone to port over existing IA32 compilers. It also meant they brought in a lot of old cruft that could've been perhaps rethought. My surely obvious point was that perhaps a more aggressive ISA design could've been done, given the example of ARMv8's change from ARMv7.

Why bring IA64 into this except as a strawman?
it's a perfectly cromulent point, as has been explained. you proposed a massive ISA change, I described the results of a better-positioned company trying to do exactly that and faceplanting. I feel like I've had this argument before with someone better informed who actually knew the opcode collisions between the two.

phongn posted:

As I said, I am not arguing for a radical change like IA64, but wondering if something more than the "bolt on 64-bit to IA32" could be done, too. There is a continuum between those options. And sure, lots of ARMv8 processors are running in aarch32 mode. If anything that demonstrates that performant backwards compatibility with legacy code could be maintained while migrating to a nicer future?
hahaha c'mon are you doing fixed-length decode or not?? this is acting like you can do both trivially instead of sharing those resources, either I'm selling a really lovely 64-bit chip with dead transistors leaking power or I'm not running 32-bit programs. neither would have sold well!

phongn
Oct 21, 2006

JawnV6 posted:

hahaha c'mon are you doing fixed-length decode or not?? this is acting like you can do both trivially instead of sharing those resources, either I'm selling a really lovely 64-bit chip with dead transistors leaking power or I'm not running 32-bit programs. neither would have sold well!
Nah, that's fair: got in over my head. BobHoward described things pretty well.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

hobbesmaster posted:

It’s not a strawman, it’s an example of how a big push for a radical change from x86 was unlikely to gain market acceptance.

That’s not because it was a big push for a radical change from x86. That’s because it was exceptionally difficult to write well-optimizing compilers for Itanium.

If Itanium had been a 64-bit RISC, or even a 64-bit equivalent of i860, it probably would have taken off. Instead it was The Bizarro CPU and while it was eventually able to get some serious throughput (my Itanium 2 VMS box does pretty well running FORTRAN) the compiler problem was grossly underestimated as a factor.

x86-64 and ARMv8/AArch64 succeed in part because they only go a little way afield rather than trying to radically rethink everything in cutting-edge ways.

eschaton fucked around with this message at 10:10 on Jan 20, 2023

BobHoward
Feb 13, 2012


eschaton posted:

If Itanium had been a 64-bit RISC, or even a 64-bit equivalent of i860, it probably would have taken off. Instead it was The Bizarro CPU and while it was eventually able to get some serious throughput (my Itanium 2 VMS box does pretty well running FORTRAN) the compiler problem was grossly underestimated as a factor.

And how. Bob Colwell is probably a somewhat biased source, given that he was part of the x86 faction in Intel, but the following has the ring of truthiness because how else could a disaster like Itanium happen?

Robert P. Colwell Oral History posted:

Anyway, for some reason, there was an organizational meeting at which Albert Yu could not appear. He designated Fred Pollack, but Fred could not appear, so Fred designated me, and I showed up. So first of all I am two organizational levels down from who is supposed to be sitting there and I ended up sitting next to Gordon Moore. This was probably about 1994 or so. The presenter happened to be the same guy who was in the front of the car from when I interviewed with the Santa Clara design team; same guy. He's presenting and he's predicting some performance numbers that looked astronomically too high to me. I did not know anything about how they expected to get there, I just knew what I thought was reasonable, what would be an aggressive boost forward and what would be just wishful thinking. The predictions being shown were in the ludicrous camp as far as I could tell. So I'm sitting and staring at this presentation, wondering what are they doing, how is it humanly possible to get what he's promising. And if it is, is it possible for this particular design team to do it. I was intensely thinking about what's happening here. Finally I just couldn't stand it anymore and I put my hand up. There was some discussion, but you have to realize none of these people were really chip designers or computer architects, with the exception of Gelsinger and Dadi Perlmutter.

0:13:53 PE: Sorry Dadi

0:13:54 BC: Dadi Perlmutter, he's one of the executive VPs in charge of all the micros right now.

0:13:58 PE: D A D I

0:14:00 BC: Yeah, his real name is David, he’s an Israeli. Everybody calls him Dadi. And then Pat Gelsinger who was the chip architect, designer in 386 and 486. But most of those guys at this presentation haven't designed anything themselves, they know how to manage complicated large expensive efforts, which is a different animal. Anyway this chip architect guy is standing up in front of this group promising the moon and stars. And I finally put my hand up and said I just could not see how you're proposing to get to those kind of performance levels. And he said well we've got a simulation, and I thought Ah, ok. That shut me up for a little bit, but then something occurred to me and I interrupted him again. I said, wait I am sorry to derail this meeting. But how would you use a simulator if you don't have a compiler? He said, well that's true we don't have a compiler yet, so I hand assembled my simulations. I asked "How did you do thousands of lines of code that way?" He said “No, I did 30 lines of code”. Flabbergasted, I said, "You're predicting the entire future of this architecture on 30 lines of hand generated code?" [chuckle], I said it just like that, I did not mean to be insulting but I was just thunderstruck. Andy Grove piped up and said "we are not here right now to reconsider the future of this effort, so let’s move on". I said "Okay, it's your money, if that's what you want."

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

BobHoward posted:

And how. Bob Colwell is probably a somewhat biased source, given that he was part of the x86 faction in Intel, but the following has the ring of truthiness because how else could a disaster like Itanium happen?

I love this, and choose to believe it. major corporate strategy has been set on grounds much weaker than 30 lines of simulated instructions. where can I read more?

karoshi
Nov 4, 2008

"Can somebody mspaint eyes on the steaming packages? TIA" yeah well fuck you too buddy, this is the best you're gonna get. Is this even "work-safe"? Let's find out!

Subjunctive posted:

I love this, and choose to believe it. major corporate strategy has been set on grounds much weaker than 30 lines of simulated instructions. where can I read more?

Pretty sure I read that anecdote in https://www.amazon.com/-/es/Robert-P-Colwell/dp/0471736171/

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
To be fair, anything can be REALLY fast if all you're doing is making pretty patterns in your registers with code specifically designed to never touch anything outside of L1.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Yeah, VLIW is like the worst unless you’re doing a DSP (or hand-optimized science code); for general purpose servers you couldn’t choose a worse architecture*. Itanium 2 tried to fix the architecture's problems by putting in a shitload of bandwidth, including an astounding-for-the-time 9 MB cache.

* anything that came to a commercial product; I’m sure some academics have done far worse

phongn
Oct 21, 2006

eschaton posted:

If Itanium had been a 64-bit RISC, or even a 64-bit equivalent of i860, it probably would have taken off. Instead it was The Bizarro CPU and while it was eventually able to get some serious throughput (my Itanium 2 VMS box does pretty well running FORTRAN) the compiler problem was grossly underestimated as a factor.
Wasn't i860 also a VLIW processor with a bunch of compiler-dependent scheduling and pipelining voodoo? DeMone at RWT wrote 22(!) years ago that the promises of IA64 reminded him of Intel's overblown ones for i860 years before.

in a well actually posted:

Yeah, VLIW is like the worst unless you’re doing a DSP (or hand-optimized science code); for general purpose servers you couldn’t choose a worse architecture*. Itanium 2 tried to fix the architecture's problems by putting in a shitload of bandwidth, including an astounding-for-the-time 9 MB cache.

I recall reading papers where the architects thought the enormous transistor budgets going to out-of-order execution could not continue to scale, and would be better spent on a huge number of named registers plus magical compiler powers to explicitly schedule highly-threaded code. As you note, it ended up working very well for online transactional processing, database tasks, and hand-written HPC code, and atrociously badly for typical branchy, pointer-chasing business logic.

Intel eventually had to abandon pure VLIW/EPIC and their Poulson microarchitecture put back in dynamic scheduling and out of order execution (and SMT), but by then it was rather too late.

quote:

* anything that came to a commercial product; I’m sure some academics have done far worse
The Mill guys?

phongn fucked around with this message at 05:29 on Jan 21, 2023

BobHoward
Feb 13, 2012


Subjunctive posted:

I love this, and choose to believe it. major corporate strategy has been set on grounds much weaker than 30 lines of simulated instructions. where can I read more?

It's from this:

https://www.sigmicro.org/media/oralhistories/colwell.pdf

BobHoward
Feb 13, 2012


in a well actually posted:

Yeah, VLIW is like the worst unless you’re doing a DSP (or hand-optimized science code); for general purpose servers you couldn’t choose a worse architecture*. Itanium 2 tried to fix the architecture's problems by putting in a shitload of bandwidth, including an astounding-for-the-time 9 MB cache.

* anything that came to a commercial product; I’m sure some academics have done far worse

Have you ever done a deep or shallow dive into itanium? I did a shallow dive once (googled technical docs and skimmed for a while), and I can't say that I came out thinking it even qualifies as a VLIW.

Don't get me wrong, there's aspects which seem VLIW-inspired, but overall it seems like its own thing. They were trying hard to make something novel, I'll give them that much! Like eschaton said though, what they actually made was the Bizarro CPU. Everything's weird or bad or both, and not in a subtle way.

eschaton
Mar 7, 2007


phongn posted:

Wasn't i860 also also a VLIW processor with a bunch of compiler-dependent scheduling and pipelining voodoo? DeMone at RWT wrote 22(!) years ago that the promises of IA64 reminded him of Intel's overblown ones for i860 years before.

My impression has always been that i860 was more RISC-ish than VLIW-ish, but that compilers for it did turn out to be a Hard Problem. That’s why the majority of use outside massively parallel supercomputing was… massively parallel graphics! The Silicon Graphics RealityEngine treated the i860 as a building block, just like supercomputing systems did. (I have a bunch of VME i860 boards that I’m looking for configuration details on… And a DEC TURBOchannel graphics card with one too, for my AXP 3000-400.)

Eventually in the early 1990s compilers for the i860 got pretty decent and you could actually achieve some of the theoretical throughput. Most of that though was obviated by the uses that it wound up being put to, since something like the SGI RealityEngine will have each of its many CPUs running custom assembly that fits in the instruction cache to serve its purpose in the render pipeline.

redeyes
Sep 14, 2002

by Fluffdaddy
I bought a Pine64 Quartz64 board and a bunch of addons, including their nice metal case. A bit spendy, but it seems a great platform for a low-power server box.

in a well actually
Jan 26, 2011


BobHoward posted:

Have you ever done a deep or shallow dive into itanium? I did a shallow dive once (googled technical docs and skimmed for a while), and I can't say that I came out thinking it even qualifies as a VLIW.

Don't get me wrong, there's aspects which seem VLIW-inspired, but overall it seems like its own thing. They were trying hard to make something novel, I'll give them that much! Like eschaton said though, what they actually made was the Bizarro CPU. Everything's weird or bad or both, and not in a subtle way.

I’ve worked with Itanium at a previous employer; I wasn’t the one doing instruction-level optimization but I worked with swengs who were; they spent a lot of time avoiding branching at all costs (also drinking and complaining.) Agreed that it’s not exactly VLIW; I elided the “, but worse” for brevity.

My favorite part was the slow x86 support.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

https://www.youtube.com/watch?v=6o38C-ultvw

Tom Scott visits the Parkes Radio Telescope, which, it turns out, is still steered by PDP-11s. (This is mostly a telescope video, with only very brief mention and footage of the PDP racks.)

eschaton
Mar 7, 2007

can you imagine if people had actually used the WD MCP-1600 chipset in personal computers in the mid-1970s instead of the 8080/8085/Z80?

(that’s what’s in the LSI-11, and WD also used it with different microcode to make the Pascal MicroEngine)

Kazinsal
Dec 13, 2011
There was an S-100 CPU card that used the WD16 from Alpha Microsystems called the AM-100. Not sure what the microcode on it was but the issue with the WD16 was that it was slow as hell compared to other comparable 16-bit machines. 16-bit add times were about 3.5 us, which is only marginally faster than a 2.5 MHz Z80's 4.4 us and slower than a 4 MHz Z80's 2.7 us.

Alpha Micro's next S-100 CPU card used an MC68000.
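Those Z80 figures are consistent with the 16-bit ADD HL,ss instruction, which takes 11 T-states; add time is just T-states divided by clock rate. A quick sanity check (assuming that's the instruction the comparison is based on):

```python
# Z80 16-bit register add (ADD HL,ss) takes 11 T-states.
Z80_ADD16_TSTATES = 11

def add16_us(clock_mhz: float, tstates: int = Z80_ADD16_TSTATES) -> float:
    """Add time in microseconds: T-states divided by clock in MHz
    (MHz is conveniently cycles per microsecond)."""
    return tstates / clock_mhz

print(round(add16_us(2.5), 2))  # 4.4  -> matches the 2.5 MHz figure above
print(round(add16_us(4.0), 2))  # 2.75 -> the "2.7 us" 4 MHz figure, rounded
```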

eschaton
Mar 7, 2007

except they set it up byte-swapped to be compatible with little-endian

S-100 is only an 8-bit bus, but on the other hand the M68000 had no A0 line, only /UDS and /LDS (upper and lower byte data strobe) to indicate which part of a word on the data bus mattered

so you could build a little state machine to generate A0 and input or output either or both halves of a word to the S-100 bus and get a pseudo-little-endian linear address space

you know, if you didn’t want to just treat S-100 as an even-byte-only 128KB window in the loving 24MB (soon 2GB with 68012, and 4GB theoretical) address space like any sane hardware developer would

then again these were the people who pioneered backup to VHS and their OS was essentially a TOPS-10 clone for 68000 so who the hell can say, insanity abounds
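The /UDS-and-/LDS-to-A0 translation described above can be sketched as a little lookup (names and framing are mine, a simplified model rather than any real board's logic): the 68000 asserts the active-low strobes to say which half of the 16-bit word it wants, and an 8-bit bus like S-100 needs that turned into a byte address per transfer.

```python
def strobe_to_cycles(uds_n: int, lds_n: int, word_addr: int):
    """Map active-low /UDS and /LDS into a list of (byte_address, lane)
    8-bit bus cycles. On the 68000, /UDS selects the even-address (upper,
    D8-D15) byte and /LDS the odd-address (lower, D0-D7) byte of the word
    at word_addr; a word access asserts both and needs two 8-bit cycles."""
    cycles = []
    if uds_n == 0:                        # upper byte -> generated A0 = 0
        cycles.append((word_addr, "upper"))
    if lds_n == 0:                        # lower byte -> generated A0 = 1
        cycles.append((word_addr + 1, "lower"))
    return cycles

print(strobe_to_cycles(0, 1, 0x1000))  # [(4096, 'upper')]
print(strobe_to_cycles(0, 0, 0x1000))  # word access -> two 8-bit cycles
```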

Kazinsal
Dec 13, 2011
My VAX let out the magic smoke. Powered it on the other night to fiddle around with netbooting, and a couple minutes in it powered off and smelled distinctly of a capacitor that decided it didn’t want to live on this planet anymore. Anyone ever recap a MicroVAX 3100 before? :v:

eschaton
Mar 7, 2007

I know a bunch of people who have recapped the PSUs. The VAXen themselves typically don’t need it.

Subjunctive
Sep 12, 2006


eschaton posted:

I know a bunch of people who have recapped the PSUs. The VAXen themselves typically don’t need it.

Jesus Christ, I haven’t felt this young in years.

eschaton
Mar 7, 2007

we still need a YOSVAX running somewhere

Kazinsal
Dec 13, 2011

eschaton posted:

we still need a YOSVAX running somewhere

There’s a reason I was trying to netboot it. What’s more yosvax than a yosvax running a bespoke operating system? :science:

eschaton
Mar 7, 2007

Nice!!

Rescue Toaster
Mar 13, 2003
Are there any non-chinese companies leaning into RISC-V other than SiFive? I was getting really excited about RISC-V from a security perspective, having a new option not plagued by closed 'management engine' processors or whatever in god's name Pluton will be doing once it's integrated in all AMD & Intel x86 machines.

But at the end of the day you need someone to actually fab RISC-V silicon and make the choice not to infect it with sketchy poo poo. With SiFive cozying up to Intel I'm losing hope fast that there will actually be anything of decent performance (with real virtualization features, for example) that's actually clean of bullshit.

BobHoward
Feb 13, 2012


Rescue Toaster posted:

Are there any non-chinese companies leaning into RISC-V other than SiFive? I was getting really excited about RISC-V from a security perspective, having a new option not plagued by closed 'management engine' processors or whatever in god's name Pluton will be doing once it's integrated in all AMD & Intel x86 machines.

But at the end of the day you need someone to actually fab RISC-V silicon and make the choice not to infect it with sketchy poo poo. With SiFive cozying up to Intel I'm losing hope fast that there will actually be anything of decent performance (with real virtualization features, for example) that's actually clean of bullshit.

Western Digital, for disk drive controllers. Probably not the answer you're hoping for.

It's not clear to me how RISC-V can escape from the deep embedded world. It's lacking in several areas compared to 64-bit Arm, and perhaps more importantly, the reason 64-bit Arm is making any headway against x86 in desktop computing is the giant boost it got from cell phones.

Pluton paranoia is generally a bit overwrought. https://mjg59.dreamwidth.org/58125.html

If you want a non-x86 personal computer with real virtualization features that's clean of bullshit, it's already here. Just buy an Apple Silicon Mac. There's nothing equivalent to SMM or TrustZone, meaning the OS doesn't have a hypervisor running above it stealing cycles to do random bullshit now and then. And Apple's secure boot is a very clean design with some novel features.

Most notably, every OS on the machine has its own independent boot security state, and the minimum state amounts to the user attesting to the machine "yes you should boot this binary because I, the computer's owner, trust it". When you do this it locally signs the binary and stores secrets in the Secure Enclave (Apple's TPM equivalent), making it possible to check for tampering at boot time.

Since it's able to check the integrity of a binary not signed by Apple, anyone can build a secure boot chain for a third party OS on top of Apple's infrastructure without asking Apple. This is a breath of fresh air compared to Secure Boot on the PC, which requires a vendor public key to be preloaded into the firmware before it will trust anything. Most PCs ship with only Microsoft public keys, so these days Linux distros have to hand Microsoft some money to get their bootloaders signed.

Rescue Toaster
Mar 13, 2003
I'll have to look closer at the M1's, thanks. Unfortunately not really available in all the form factors I would like. But still, I'll keep an eye out if any of the more security-oriented distros eventually embrace it.

For Pluton it was more that, if you just want a TPM on the chip, you could build or use an open TPM design. Instead we get yet another thing that can only run MS-signed firmware and can be changed at any time, without any clear explanation of the limits of what it could do given the hardware interconnections. All we get is 'this is what the current firmware does, and oh BTW that firmware could be changed at any time.' But that's getting off-topic for this thread.

My hopeless optimism is one day we get something like a raspberry pi 4 running RISC-V without all the binary blobs that the pi depends on. And from a reasonably trustworthy supplier. But even that is very optimistic unless it takes off for phones, as you say. (Though maybe with ARM getting desperate for cash?) A mid-range PC replacement with full virtualization and IOMMU support, etc... is pure fantasy of course. If anyone is likely to ever build such a processor I think it would be some mfg in China working to get around US sanctions/dependency, and that's not really something I would want to trust, for me personally.

Rescue Toaster fucked around with this message at 13:32 on Mar 25, 2023

ploots
Mar 19, 2010

Rescue Toaster posted:

Are there any non-chinese companies leaning into RISC-V other than SiFive? I was getting really excited about RISC-V from a security perspective, having a new option not plagued by closed 'management engine' processors or whatever in god's name Pluton will be doing once it's integrated in all AMD & Intel x86 machines.

But at the end of the day you need someone to actually fab RISC-V silicon and make the choice not to infect it with sketchy poo poo. With SiFive cozying up to Intel I'm losing hope fast that there will actually be anything of decent performance (with real virtualization features, for example) that's actually clean of bullshit.

https://riscv.org/news/2021/09/rivo...-semi-analysis/

mdxi
Mar 13, 2006


Rescue Toaster posted:

Are there any non-chinese companies leaning into RISC-V other than SiFive?

https://www.tomshardware.com/news/tenstorrent-shares-roadmap-of-ultra-high-performance-risc-v-cpus-and-ai-accelerators


Subjunctive
Sep 12, 2006



oh poo poo, I led their seed round years ago. I didn't know they were into RISC-V now, but it makes sense for them for sure
