priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
We have a few Raptor Talos II systems for testing at work and they're nice. It's great to have a POWER system with a relatively friendly BIOS/boot flow in an ATX form factor (POWER8s were usually a huge hassle).

They are more fragile than most x86 systems though: we have killed a couple just with our reboot/power cycling tests. Not something most people will have to do, but it was interesting. I haven't looked at debugging them, but I suspect power regulator issues.

For a while, pre-Epyc-Rome, they were the best performers for throughput from a PCIe slot, being Gen4 x16! They did seem to get bottlenecked, possibly due to the amount of cache, when trying to really saturate them with fio against a bunch of NVRAM drives. Ours have the lesser 4-core Sforza CPUs, which have less L3 cache and only 4 RAM channels, so we were hitting the limits there it seemed, from working with some IBM FAEs. On the Romes, the 8-channel DDR4 and much larger L3 let us hit line rate on a PCIe slot.
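
To put rough numbers on that, here's a quick back-of-envelope sketch in Python. The DDR4-2666 DIMM speed is my assumption, and it ignores PCIe protocol overhead, so it's illustrative only:

code:

# Back-of-envelope: PCIe Gen4 x16 line rate vs. peak DDR4 bandwidth.
# Assumptions not in the post above: DDR4-2666 DIMMs, standard Gen4
# signalling (16 GT/s, 128b/130b), and no TLP/DLLP protocol overhead.

GT_PER_LANE = 16.0        # PCIe Gen4 raw signalling rate, GT/s per lane
ENCODING = 128 / 130      # 128b/130b line encoding
LANES = 16

pcie_GBps = GT_PER_LANE * ENCODING * LANES / 8       # ~31.5 GB/s per direction

DDR4_MTS = 2666           # assumed DIMM speed, MT/s
chan_GBps = DDR4_MTS * 8 / 1000                      # 64-bit channel -> GB/s

print(f"PCIe Gen4 x16:            ~{pcie_GBps:.1f} GB/s per direction")
print(f"DDR4-2666 x 4ch (Sforza): ~{4 * chan_GBps:.0f} GB/s peak")
print(f"DDR4-2666 x 8ch (Rome):   ~{8 * chan_GBps:.0f} GB/s peak")

# fio traffic can touch DRAM more than once per byte that crosses the link
# (device DMA plus whatever the CPU reads, copies, or verifies), so the
# headroom on a 4-channel part shrinks faster than the peak numbers suggest.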


priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I'm really curious to hear about homelab projects too. I can never think of things I want to do, yet I want to get setups like RPi clusters for no reason :haw:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

CommieGIR posted:

Of course, we also have an awesome homelab thread I started for that: https://forums.somethingawful.com/showthread.php?threadid=3945277

Very neat, bookmarked! Would be interesting to see more on what projects people are using em for too.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I put Ubuntu 18.04 on a Talos II POWER9 a while back and it ran great. I don't really care a lot about distros though; I just install the server edition without a GUI and run my usual set of scripts on it anyway. I couldn't tell you what the differences are between Fedora or NetBSD or Ubuntu. It was just that I already had a live USB stick with it!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Phobeste posted:

Does anybody have recs for computer architecture textbooks, ideally for micros but possibly for other stuff (hopefully not x86), that are good and a useful read? Ideally more focused on the implications of the architecture for the code that runs on the device. I have some coworkers who would get a lot out of one. Unfortunately I picked up what I know from practical experience and accumulated debugging and so on, so I don't really know what a good one would be.

When I was in school, "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson was the gold standard, and it looks like there have been several new editions since then. It's more general than architecture-specific, but they have updated it with multicore and even GPU material. The newest one (6th edition) does have RISC-V examples.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

Someone on Twitter found a "RISC-V High Performance Engineer" job posting from Apple -- what are they up to, I wonder...

Wow. I had seen some article headline about RISC-V multicore Linux-capable CPUs and wonder if that's what they're getting into first. Something in the data center (no idea what), or perhaps a new WiFi base station powered by one?

Oh it was the new SiFive boards and I was excited til I saw they were $999, lol.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Hasturtium posted:

Newer than the Unmatched? Because those are going for a mere $680. Which is more than my alternative CPU-curious rear end can rationalize for sub-Raspberry Pi 4-class performance, but I know boutique hardware will never be cost-competitive with commodity kit.

Oops, weird: in my search results it said 5 days ago, but the article (for the Unleashed) was actually from 2018. I coulda sworn I saw some new RISC-V server article pop up recently which I made a mental note to read, then forgot about.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

They clearly have good ‘little’ ARM cores, but I wonder if they’re looking for something NVIDIA Falcon like, where it supports NoCs / things like that. Seems perfect to brew your own RV32/RV64 and pay $0 licensing / royalties.

Yeah, very interesting, especially if they can stamp them into other silicon for some built-in telemetry or control plane. Security processor perhaps, hence the performance requirement? Sky's the limit really.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

Oh — duh, maybe in their baseband processors for the inevitable Apple modems!

Oooh yeah, good call. That'd be perfect for it. I wonder if it'll get used first on wireless chips for smaller products like the HomePods (still think they should do a HomePod/mesh WiFi network thing) and then get rolled into 5G modems.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

PCjr sidecar posted:

Negotiating leverage for when the acquisition closes and Jensen needs to boost his spatula budget.

Lol, I was wondering what the ARM royalties are like for Apple: do they have fixed ones they have to re-up after x years, or is it more fluid? They have got to be one of the biggest licensees.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
This thread reminded me I had been keeping an eye out for the BeagleV RISC-V SBC, but it looks like it got cancelled :sigh:

Any other RISC-V Pi-alikes coming out?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Oh very cool. I'll keep an eye out for that one now!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
It's interesting, as a lot of SoCs are moving away from Arm to RISC-V. I imagine the sale falling through may slow that down a little bit, since Arm may be more willing to get competitive with licensing fees?

I don't think the trend will stop though. Nvidia/Arm would have been a solid combo both for enterprise and for consumer stuff. Now I wonder if Nvidia even joins the RISC-V train!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

CommieGIR posted:

Is it? Honestly most SoCs I see for embedded stuff remain ARM and ARM still has a significant advantage in power consumption. I could be wrong.

Well, it's fairly anecdotal I suppose, but I know of a couple of fairly major players going that way. And from talking to JTAG probe vendors, they're seeing a lot of RISC-V adoption happening. It won't be total, but it's definitely going to take a bite out of Arm's business.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

The Falcon (embedded GPU control processor / security stuff) is RISC-V I’m pretty sure. For Tegra and friends though, they have the perpetual ARM architecture license still, right?

I don't know how the Nvidia licensing works, but I would imagine they have a pretty solid relationship, so they're probably not moving away anytime soon. Especially with all the Mellanox stuff being deeply intertwined with Arm.

For SoCs, anything that doesn't require datapath processing will be great to move to RISC-V. That's most of what I'm familiar with. Anything moving from a MIPS32 would be great on a RISC-V!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

feedmegin posted:

Surely anything moving from a MIPS32 is on an ARM already, by now. Certainly that's been the case multiple places I've worked.

Yessss.. surely… lol

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I don't want to say too much in case it's still sensitive information, but some surprisingly recent stuff uses MIPS :haw:

A big motivator to go to RISC-V is apparently the toolchain cost for Arm stuff, the debugger folks like Green Hills etc. This is just what I hear from folks; I'm not really involved too much on the CPU implementation side.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

feedmegin posted:

Why would RISCV change that? Clang and Gcc themselves are free and open source. Meanwhile someone producing SoCs or BSPs has no reason to charge less for proprietary toolchains and tools based on them (or eg commercial debuggers) than they do for ARM. Both instruction sets are equally openly documented afaik and ARM isn't charging you to write a compiler.

Not sure tbh, this is just what I was told. I think it has something to do with the support that certain tool (debugger) vendors have, and how the main one for Arm is a real jerk about licensing. It makes the firmware people happier if they don't have to deal with them, and the CPU people are happier if they don't have to pay to license Arm cores. There's probably more to it, but the main gist is that the shift is on, at least in the couple of companies I have a bit of insight into.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Is the Power10 (notice they are changing the branding from POWER) going to have lower-end workstations like the POWER9 had with Sforza, from companies like Raptor? Everything I've seen makes it look like pretty major server iron.

Interesting that they're not CXL capable either, despite having PCIe Gen5.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Hasturtium posted:

Raptor’s balked at Power10 due to the IMC and at least one other component using closed source firmware, so I wouldn’t count on it. The use of said blobs seems economically driven by Global Foundries’ failure to deliver on sub-14nm and the need to pivot to a different process, so thanks GloFo. :-\

Power10 is gigantic - Raptor ran a Twitter survey to halfassedly assess interest and indicated the end product would be a gigantic single socket in an EATX motherboard. They’re not moving forward with P10 and in at least one interview Tim Pearson indicated they’re looking at less expensive Power solutions from “other potential sources.” I don’t know what that means, but I’d guess it’d be an outgrowth of Microwatt or some other in-development chip. I should have bookmarked one I read about not long ago.

Still waiting for news on my Blackbird shipment, let alone the thing itself. Here’s hoping it’s worth it.

Edit: I’ve seen a couple of SPARC ATX boards periodically appear on eBay… are they hopelessly ancient, or at least potentially fun to play with?

That's a shame. The only Power10 CPUs I've seen specced out are kind of monsters too; the Raptors were a lot more reasonable. I got to play with a few of the dual-socket ones for PCIe Gen4 testing and they were pretty nice machines. We did manage to kill a board, probably from the repeated power-cycle testing.

It'll be interesting to see what comes out of the Power10 development and whether they do a version with DDR5 + CXL support. Or perhaps that'll wait for Power11!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

NewFatMike posted:

What kind of power envelopes are y’all working with on those modern workstations? I was thinking that x86 stuff is pretty regularly developing 10+% performance per year (ignore that one decade with bulldozer lol), but the power draw is a lot, even without a GPU.

The Talos II from Raptor has dual redundant 1620W supplies, although the CPUs are fairly low-powered (90W TDP for the 4-core, 160W for the 8-core), so I'm not sure why it is that high. Probably just because the Supermicro case they used came with them. Or so you could jam a ton of GPUs in there too.

In the hardware compatibility lists, there were people running CPU + motherboard + drive with 550-600W supplies without issue.
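
Just to sanity-check that, here's a rough, purely illustrative budget in Python. Only the 160W TDP is a real figure; every other number is my guess:

code:

# Illustrative power budget for a single-socket Sforza build. The CPU TDP is
# the published figure; everything else is a rough assumption for illustration.

budget_watts = {
    "CPU (8-core Sforza, 160 W TDP)": 160,
    "Motherboard + VRM losses (assumed)": 75,
    "RAM, 4 DIMMs (assumed)": 20,
    "NVMe drive (assumed)": 10,
    "Fans (assumed)": 15,
}

total = sum(budget_watts.values())
print(f"Estimated draw: ~{total} W, leaving ~{550 - total} W headroom on a 550 W PSU")

# The 1620 W redundant supplies only start making sense once you stack a
# couple of 250-300 W GPUs or accelerators on top of this.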

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

NewFatMike posted:

I figure this is the place to ask vv- do we know if the new Apple M1 Ultra is using MCM packaging or is it monolithic? I can’t imagine the latter, the M1 Max is already enormous.

They call the way they connect up the two M1 Max chips (chiplets? dies?) "UltraFusion", but yeah, it sounds a lot like MCM to me.

https://www.apple.com/newsroom/2022/03/apple-unveils-m1-ultra-the-worlds-most-powerful-chip-for-a-personal-computer/

quote:

The foundation for M1 Ultra is the extremely powerful and power-efficient M1 Max. To build M1 Ultra, the die of two M1 Max are connected using UltraFusion, Apple’s custom-built packaging architecture. The most common way to scale performance is to connect two chips through a motherboard, which typically brings significant trade-offs, including increased latency, reduced bandwidth, and increased power consumption. However, Apple’s innovative UltraFusion uses a silicon interposer that connects the chips across more than 10,000 signals, providing a massive 2.5TB/s of low latency, inter-processor bandwidth — more than 4x the bandwidth of the leading multi-chip interconnect technology.

Kind of a connection that doesn't go out through the substrate, but between the two dies directly.
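
Taking the quoted figures at face value (and assuming the 2.5TB/s is the aggregate number spread over all those signals, which is my reading, not something Apple states), the per-wire rate comes out surprisingly modest, which is kind of the whole point of an interposer: tons of short, slowish wires instead of a few screaming-fast SerDes lanes.

code:

# Quick math on the quoted UltraFusion figures: 2.5 TB/s across "more than
# 10,000 signals". Treating 2.5 TB/s as the aggregate across all signals is
# my interpretation, not something stated in the press release.

total_TBps = 2.5
signals = 10_000

per_signal_Gbps = total_TBps * 8 * 1000 / signals
print(f"~{per_signal_Gbps:.0f} Gb/s per signal")   # roughly 2 Gb/s per wire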

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

NewFatMike posted:

That’s pretty dang cool man. Even using a substrate is a pretty interesting packaging advancement. I wonder if they really built their own solution or if they’re using TSMC’s solution. The in depth interviews and package shots are going to be really cool.

Yeah, it got me wondering if the really hardcore die-shot folks were all scratching their heads at what was going on at the top of the M1 Max die shots before the Ultra was announced. It was kind of like a clue out in plain sight! :haw:

I would like to know how they do the interposer. Is it a separate process step that attaches the two Max dies? Really wild.

priznat fucked around with this message at 02:26 on Mar 10, 2022

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Sweet, saving that to watch later! Might answer my question.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Raptor was reeeeeeeally slow at fulfilling orders back after the POWER9 launch, and it didn't sound like it got much better even before the supply chain issues. Good luck, I hope it works out!

Bit of a small operation, I would imagine; gotta be tough for them.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Oh yeah, definitely not suggesting you should be subsidizing them; get your money back for sure. I hope it goes smoothly!

They were a great option for testing with a POWER9 system that wasn't in a real pain-in-the-butt form factor.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Sweet! They are pretty nifty machines.

Curious, what OS/distro are you planning on installing on it?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I often wonder what would have become of the Amiga were it not for Commodore being a somewhat dysfunctional company. My first real computer was an A500, and I just loved that machine. It just blew away PCs at the time too!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BobHoward posted:

Ceramic 64-pin DIP is a chonker of a chip package.

On my A500, a common issue was the 68k DIP becoming unseated due to heat flexing, and often just tapping the plastic case would reseat it if it was acting up. It was a crazy long part, laff.

I actually went in and used fishing line to cinch both ends down at one point, and then the problems went away!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Amiga weirdos are the best. That Apollo thing is neat!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BobHoward posted:

Not surprising to me at all, but then I've done some time working at a place that designed electronics for vehicles. Embedded electronics for harsh environments is a very different world.

For example, how many fast CPUs do you know of which are rated for operation down to -40C and up to at least +100C? These are common baseline requirements for automotive applications, even for boxes which live inside the passenger compartment rather than the engine bay.

Another: most consumer silicon disappears only two or three years after launch. But designing and qualifying electronics for the harsher environment inside a vehicle is expensive, so you don't want to constantly re-do it - you want to design something really solid and just keep making it for five years, or more. That narrows the list of components you can possibly use quite a bit.

Yeah, definitely. I imagine the product design cycles for vehicles are pretty long, so by the time the car launches it has been a good 4 years since the devices were qualified for automotive, which is often a lot later than when they were actually new, etc.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I wonder how RISC-V is doing in the embedded space; they have an opening if Arm continues to try to squeeze folks on licensing their cores.

I've worked at a couple of places where the next product was scheduled to have a RISC-V core in it but it ended up going with Arm, so there must be some kind of pricing shenanigans Arm plays when it looks like they are gonna lose out.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
It's fun seeing MIPS just popping back up again! At one of the places I worked, they had MIPS cores for the longest time until moving to an Arm core (with RISC-V in the SerDes, interestingly), but then other projects that were already in flight would come up with MIPS32 on there, so you'd have to have quite the array of probes to talk to everything at bringup :haw:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

HDD controllers (WD) was one place they were going after it, and I'm sure various SoC designs will consider it vs. dropping an -A9 or -M4/M7 anywhere a core is needed. Didn't NVIDIA at one point have some stupid amount of -A9s in their design? Though, I'm partially convinced those guys just burned die area in the interest of speed to market.

High-speed PHYs have Arm/RISC-V cores on each lane for equalization and station-keeping, so they can really explode the number of cores in an SoC!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

100G+ / PCIe 5.0 type stuff? Do they run FW that's loaded at runtime / are the algorithms now more suited for a CPU implementation / branchiness vs. using a FSM like the lord intended?

Yeah, the really high-speed PCIe is mostly what I'm familiar with them being used for, but not everyone does that. It's pretty basic algorithms, but it gives them the capability for more tuning options in the wild, plus doing periodic checks on the link quality and recentering the eye if needed. These are more for diagnostics and lane maintenance than the initial link control, which is still handled in hardware. Just having the option to do fixes on the PHY stuff after the fact is a huge lifesaver!
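
As a purely hypothetical sketch of what that background loop looks like (Python standing in for firmware here; the register access is simulated and every threshold and step value is invented, not taken from any real PHY):

code:

# Hypothetical per-lane maintenance loop for a high-speed SerDes PHY.
# Nothing below is from a real part: the "registers" are simulated with a
# random walk and the numbers are made up purely for illustration.

import random

EYE_MARGIN_MIN = 20          # made-up "healthy eye" threshold (arbitrary units)

class SimLane:
    """Stand-in for real PHY register access on one SerDes lane."""
    def __init__(self):
        self.margin = 35     # pretend starting eye margin

    def read_eye_margin(self):
        # Real firmware would read an eye-monitor/margining register here.
        self.margin += random.randint(-3, 2)   # simulate slow drift
        return self.margin

    def recenter_eye(self):
        # Real firmware would step CTLE/DFE/CDR settings and re-measure.
        self.margin += 10

def maintenance_pass(lane, log):
    # Hardware still owns link training; firmware just watches margins and
    # nudges things back when they drift, logging for post-mortem debug.
    margin = lane.read_eye_margin()
    if margin < EYE_MARGIN_MIN:
        lane.recenter_eye()
        log.append(("recentered", margin))
    else:
        log.append(("ok", margin))

log = []
lane = SimLane()
for _ in range(100):         # in real life this runs periodically, forever
    maintenance_pass(lane, log)
print(log[-5:])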

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
*slaps rack enclosure* You can automate so many lights with this bad boy


priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I still like Ars Technica, but Phoronix is also a fav.

Ars seems to have less technical stuff than before; I am not sure if the deep-dive guy is still there.
