|
feedmegin posted:Surely anything moving from a MIPS32 is on an ARM already, by now. Certainly that's been the case multiple places I've worked. Yessss.. surely… lol
|
# ? Feb 13, 2022 16:46 |
|
Aren’t traffic lights the big standout MIPS thing still remaining? I’m sure there’s plenty, but that stuck out to me if my recollection is correct.
|
# ? Feb 13, 2022 17:12 |
|
I don’t want to say too much in case it’s still sensitive information, but some surprisingly recent stuff uses MIPS. A big motivator to go to RISC-V is apparently the toolchain cost for ARM stuff, the debugger folks like Green Hills etc. This is just what I hear from folks; I’m not really involved too much on the CPU implementation side.
|
# ? Feb 13, 2022 17:24 |
|
priznat posted:A big motivator to go to risc is apparently the tool chain cost for arm stuff, the debugger folks like green hills etc. This is just what I hear from folks I’m not really involved too much on the cpu implementation side. Why would RISC-V change that? Clang and GCC themselves are free and open source. Meanwhile someone producing SoCs or BSPs has no reason to charge less for proprietary toolchains and tools based on them (or e.g. commercial debuggers) than they do for ARM. Both instruction sets are equally openly documented afaik and ARM isn't charging you to write a compiler. feedmegin fucked around with this message at 20:02 on Feb 13, 2022 |
# ? Feb 13, 2022 19:58 |
|
feedmegin posted:Why would RISCV change that? Clang and Gcc themselves are free and open source. Meanwhile someone producing SoCs or BSPs has no reason to charge less for proprietary toolchains and tools based on them (or eg commercial debuggers) than they do for ARM. Both instruction sets are equally openly documented afaik and ARM isn't charging you to write a compiler. Not sure tbh, this is what I was told. I think it has something to do with the support that certain tool (debugger) vendors have, and how the main one for ARM is real jerks about licensing. It makes the firmware people happier if they don’t have to deal with them, and the CPU people are happier if they don’t have to pay to license ARM cores. There’s probably more to it, but the main gist is the shift is on, at least in the couple of companies I have a bit of insight into.
|
# ? Feb 13, 2022 20:52 |
|
NewFatMike posted:Aren’t traffic lights the big standout MIPS thing still remaining? I’m sure there’s plenty, but that stuck out to me if my recollection is correct. Cavium Octeon SoCs are MIPS IIRC, and are fairly prevalent in networking. I think some MediaTek WLAN SoCs might be MIPS as well. I figured one of the driving factors in RISC-V adoption would be the royalty-free ecosystem — appealing to folks paying for Cortex-A/M licenses.
|
# ? Feb 14, 2022 11:06 |
|
movax posted:Cavium Octeon SoCs are MIPS IIRC, and is fairly prevalent in networking. This is the sort of thing I'm thinking of...we used to support an Octeon-based network appliance 3 jobs ago. Then we didn't any more because they got retired (the new version was ARM), and this was ~5 years ago.
|
# ? Feb 14, 2022 12:05 |
|
feedmegin posted:This is the sort of thing I'm thinking of...we used to support an Octeon-based network appliance 3 jobs ago. Then we didn't any more because they got retired (the new version was ARM), and this was ~5 years ago. Hmm — not like the EdgeRouter 4 is the absolute cutting edge of networking technology, but it’s still sold today / is in Ubiquiti’s nominal line-up, and I’m like 98% sure when I cat /proc/cpuinfo it, it’s a MIPS-based Octeon. One of the strong points of MIPS (IIRC) was the relative ease with which co-processors plugged right into the architecture; the PS1/PS2 used this to great effect. Or, Cavium just happened to have an architecture team / engineers who were familiar w/ MIPS, the licensing price was right, and so it was done.
|
# ? Feb 14, 2022 16:31 |
|
MIPS chips made it into a lot of Blu-Ray players, once upon a time. I’d imagine ARM is dominating that steadily decreasing niche by now, but that was a pocket where they succeeded for a while. On the subject of Itanium: what was the clear advantage it had over PA-RISC, and why’d it tank so hard despite the support thrown behind it and the widespread capitulation of entrenched players on the assumption that Intel would just outspend everyone into inevitability?
|
# ? Feb 14, 2022 17:15 |
|
Hasturtium posted:In talking about Itanium, what was the clear advantage it had over HP-UX, and why’d it tank so hard despite the combination of support thrown behind it and the widespread capitulation of entrenched players on the assumption that Intel would just outspend everyone into inevitability? As for why HP went with it: Intel were willing to bankroll it and had next-gen fabs already up and running. Developing and fabbing your own new CPUs is expensive and gets more so each generation, which is why the various Unix workstation companies got out of the business. As for why it tanked: the hardware guys, because they didn't understand software (specifically compilers), expected the software guys to do literal magic to compensate for a simpler hardware architecture that was supposed to be able to clock higher — only they threw everything and the kitchen sink into the ISA, so it didn't. Meanwhile they were betting on out-of-order hardware execution stalling out in terms of what it could do, and it didn't.
|
# ? Feb 14, 2022 20:31 |
|
Also x86 was supposed to die*, and AMD with it, due to lack of 64 bits. At the time 4GB was around the corner or already SOTA for servers. A great opportunity to leverage said process advantage to sail into a monopolistic future. AMD had other plans. Intel fanboys might optimistically say Intel wanted to replace the crusty, dead x86 arch with a new, modern baroque unproven kitchen sink arch. * mostly by not seeking to extend it as they had previously done
|
# ? Feb 14, 2022 22:43 |
|
karoshi posted:Intel fanboys might optimistically say Intel wanted to replace the crusty, dead x86 arch with a new, modern baroque unproven kitchen sink arch. "Baroque" somehow manages to understate the insanity of Itanium. There's layers. The amazing thing is that it was all justified by claiming that it would be less complicated to implement than OoO RISC - but that clearly wasn't the case in reality.
|
# ? Feb 15, 2022 02:27 |
|
HP’s PA-RISC wasn’t the only one to flip; you also had SGI abandoning MIPS, Digital/Compaq abandoning Alpha, etc. The rising costs of chip design (and the lack of someone like TSMC in its current market position), the Pentium Pro, which was as fast or faster in the bread-and-butter workstation market, and Intel’s fab advantage (fueled by their commodity volume) convinced them it was a losing game. It was the same forces that drove SGI to make a Windows NT workstation, lol. Instead of Intel, you could try to get in bed with IBM; while POWER was a great arch, IBM as a business partner was brutal.
|
# ? Feb 15, 2022 04:12 |
|
karoshi posted:Also x86 was supposed to die*, and AMD with it, due to lack of 64 bits. At the time 4GB was around the corner or already SOTA for servers. karoshi posted:Intel fanboys might optimistically say Intel wanted to replace the crusty, dead x86 arch with a new, modern baroque unproven kitchen sink arch. I'm not sure that AMD was ever viewed as an Intel competitor until long after everyone else folded and they launched the x86-64 sneak attack.
|
# ? Feb 15, 2022 15:31 |
|
PAE was a gross hack; you couldn’t address more than 4 GB in a single process, so it wasn’t a long-term solution. Intel was really mad about the alternative x86 licenses and put a lot of business and legal effort into voiding or obsoleting them. Pentium was specifically named that because Intel lost the trademark fight over 486. The design choices in Itanium were not driven as a reaction to the technical capabilities of RISC chips. There were viable (in some cases faster) AMD alternatives for each Intel generation since the 386.
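To put numbers on why PAE was only a stopgap: it widened the *physical* address bus from 32 to 36 bits, but each process still lived in a 32-bit virtual address space. Plain arithmetic, no hardware specifics assumed:

```python
# Classic 32-bit x86: virtual and physical addresses are both 32 bits.
# PAE widened only the physical side to 36 bits.
virt_bits = 32
pae_phys_bits = 36

GB = 2**30
per_process_limit = 2**virt_bits // GB      # what one process can map
pae_machine_limit = 2**pae_phys_bits // GB  # what the whole box can hold

print(per_process_limit)  # 4  -> each process still capped at 4 GB
print(pae_machine_limit)  # 64 -> the machine can have up to 64 GB
```

So a PAE server could hold 64 GB, but no single program could see more than 4 GB of it at once, which is exactly the "gross hack" complaint.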
|
# ? Feb 15, 2022 20:33 |
|
Is the Power10 (notice they are changing from POWER) going to have lower-end workstations like the POWER9 had with Sforza from companies like Raptor? Everything I've seen makes it look like pretty major server iron. Interesting that they're not CXL-capable either, despite having Gen5 PCIe.
|
# ? Feb 15, 2022 20:42 |
|
priznat posted:Is the Power10 (notice they are changing from POWER) going to have lower end workstations like the POWER9 had with Sforza from companies like Raptor? Everything I've seen makes it look like pretty major server iron. Raptor’s balked at Power10 due to the IMC and at least one other component using closed source firmware, so I wouldn’t count on it. The use of said blobs seems economically driven by Global Foundries’ failure to deliver on sub-14nm and the need to pivot to a different process, so thanks GloFo. :-\ Power10 is gigantic - Raptor ran a Twitter survey to halfassedly assess interest and indicated the end product would be a gigantic single socket in an EATX motherboard. They’re not moving forward with P10 and in at least one interview Tim Pearson indicated they’re looking at less expensive Power solutions from “other potential sources.” I don’t know what that means, but I’d guess it’d be an outgrowth of Microwatt or some other in-development chip. I should have bookmarked one I read about not long ago. Still waiting for news on my Blackbird shipment, let alone the thing itself. Here’s hoping it’s worth it. Edit: I’ve seen a couple of SPARC ATX boards periodically appear on eBay… are they hopelessly ancient, or at least potentially fun to play with?
|
# ? Feb 15, 2022 22:17 |
|
Hasturtium posted:Raptor’s balked at Power10 due to the IMC and at least one other component using closed source firmware, so I wouldn’t count on it. The use of said blobs seems economically driven by Global Foundries’ failure to deliver on sub-14nm and the need to pivot to a different process, so thanks GloFo. :-\ That's a shame. The only Power10 CPUs I've seen specced out are kind of monsters too; the Raptors were a lot more reasonable. I got to play with a few of the dual-socket ones for PCIe Gen4 testing and they were pretty nice machines. Did manage to kill a board, probably from the repeated power-cycling testing. It'll be interesting to see what comes out of the Power10 development and if they do a version with DDR5 + CXL support. Or perhaps that'll wait for Power11!
|
# ? Feb 15, 2022 22:33 |
|
What kind of power envelopes are y’all working with on those modern workstations? I was thinking that x86 stuff is pretty regularly developing 10+% performance per year (ignore that one decade with bulldozer lol), but the power draw is a lot, even without a GPU.
|
# ? Feb 15, 2022 23:30 |
|
NewFatMike posted:What kind of power envelopes are y’all working with on those modern workstations? I was thinking that x86 stuff is pretty regularly developing 10+% performance per year (ignore that one decade with bulldozer lol), but the power draw is a lot, even without a GPU. The Talos II from Raptor has dual redundant 1620W supplies, although the CPUs are fairly low-powered (90W TDP for the 4-core, 160 for the 8-core), so I'm not sure why it is that high. Probably it was just that the Supermicro case they bought included them. Or so you could jam a ton of GPUs in there too. In the hardware compatibility lists there were people running CPU + motherboard + drive with 550-600W supplies without issue.
|
# ? Feb 15, 2022 23:41 |
|
priznat posted:The talon II from raptor has dual redundant 1620W supplies, although the CPUs are fairly low powered (90W TDP for the 4 core, 160 for 8 core) so I'm not sure why it is that high. Probably was just the supermicro case they bought included them. Or so you could jam a ton of GPUs in there too. Even that 160W target is high for the 8 core parts - according to the wiki, where people have measured wall power draw, that’s for the CPU and motherboard with RAM, running an artificially high workload. I predict mine will end up being about like one of the 125W Piledriver machines I used to run, and I’m only slapping a Radeon Pro W5500 in there, so before factoring in drives, fans, and incidentals I’d probably top out at 300W or so. Note that the 18 and 22 core CPUs probably hit very reliably close to their quoted 160W TDP, and if you double them up then you’ll basically have a little more than a single Threadripper Pro’s 280W heat output to deal with. The dual redundant PSUs are likely a holdover from the SuperMicro case, or overspecified just to accommodate somebody tossing a pair of modern RTX cards in for machine learning. CUDA apparently runs fine on little endian Power even if Nvidia won’t bother porting the rest of the driver over.
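Rough sanity check on the ~300W estimate above, just summing the figures quoted in the thread (the drives/fans number is my guess, not a measurement):

```python
# Hypothetical power budget for an 8-core Talos II-style build,
# using the figures quoted in-thread rather than measured values.
cpu_board_ram_w = 160    # 8-core CPU + motherboard + RAM under heavy load
gpu_w = 125              # Radeon Pro W5500 board power, roughly
drives_fans_misc_w = 25  # guess for drives, fans, incidentals

total_w = cpu_board_ram_w + gpu_w + drives_fans_misc_w
psu_w = 1620             # one of the stock redundant supplies

print(total_w)                 # 310 -> in the ~300W ballpark claimed
print(total_w / psu_w < 0.25)  # True -> the stock PSU is massively oversized
```

Which lines up with people on the compatibility list running 550-600W supplies without issue: the 1620W redundant units only make sense if you stack GPUs.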
|
# ? Feb 15, 2022 23:57 |
|
Thanks friends! That’s not too terrible. I forgot that they’re probably being designed for servers and the like, so it’s within that envelope
|
# ? Feb 16, 2022 00:15 |
|
glofo really hosed power; the future power roadmap is steering hard to the high-end enterprise niche. not much momentum around openpower anymore either
|
# ? Feb 16, 2022 00:39 |
|
PCjr sidecar posted:glofo really hosed power; the future power roadmap is steering hard to high end enterprise niche There are still a number of open projects in motion*, but I agree that it’s behind RISC-V. At least it’s not as dead in the water as OpenSPARC. edit: Libre-SoC, that’s the one I was trying to remember earlier. Let’s hope that goes somewhere. Hasturtium fucked around with this message at 02:26 on Feb 16, 2022 |
# ? Feb 16, 2022 00:59 |
|
I figure this is the place to ask vv- do we know if the new Apple M1 Ultra is using MCM packaging or if it’s monolithic? I can’t imagine the latter; the M1 Max is already enormous.
|
# ? Mar 10, 2022 01:18 |
|
NewFatMike posted:I figure this is the place to ask vv- do we know if the new Apple M1 Ultra is using MCM packaging or is it monolithic? I can’t imagine the latter, the M1 Max is already enormous. They call how they connect up the 2 M1 Max chips (chiplets? dies?) "UltraFusion", but yeah, it sounds a lot like MCM to me. https://www.apple.com/newsroom/2022/03/apple-unveils-m1-ultra-the-worlds-most-powerful-chip-for-a-personal-computer/ quote:The foundation for M1 Ultra is the extremely powerful and power-efficient M1 Max. To build M1 Ultra, the die of two M1 Max are connected using UltraFusion, Apple’s custom-built packaging architecture. The most common way to scale performance is to connect two chips through a motherboard, which typically brings significant trade-offs, including increased latency, reduced bandwidth, and increased power consumption. However, Apple’s innovative UltraFusion uses a silicon interposer that connects the chips across more than 10,000 signals, providing a massive 2.5TB/s of low latency, inter-processor bandwidth — more than 4x the bandwidth of the leading multi-chip interconnect technology. Kind of a connection that doesn't go out to the substrate but runs between the two dies directly.
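Dividing Apple's two headline numbers gives a feel for the link design (simple arithmetic on the quoted figures; the actual lane width and clocking aren't public):

```python
# Apple quotes 2.5 TB/s aggregate over "more than 10,000 signals".
total_bytes_per_s = 2.5e12
signals = 10_000

per_signal_bits = total_bytes_per_s * 8 / signals  # bits/s per signal
print(per_signal_bits / 1e9)  # 2.0 -> roughly 2 Gb/s per wire
```

A couple of Gb/s per wire is a modest rate, which is what a dense silicon interposer buys you: thousands of short, relatively slow, low-power wires instead of a handful of very fast SerDes lanes going out through the package substrate.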
|
# ? Mar 10, 2022 01:26 |
|
Do Z80s still turn up in the wild anymore? Back in the late 90s/early 2000s I worked at a company where we were often reverse engineering a lot of embedded stuff to add our own control to it, and Z80s would turn up a lot. To the point I even wrote my own OS for them that was kinda Forth-based, but could do cross-CPU process interconnect via a 2-wire (actually, now I think about it, 3 wires: in, out and latch) message-passing system. It was kinda neat but never got anywhere. I loved those crusty old processors. Very easy to write assembler for.
|
# ? Mar 10, 2022 01:42 |
|
duck monster posted:Do Z80s still turn up in the wild anymore? Back in the late 90s/early 2000s I worked at a company where we where often reverse engineering a lot of embeded stuff to add our own control to it., and Z80s would turn up a lot. To the point I even wrote my own OS for them that was kinda forth based, but could do cross cpu process internonnect via a 2 wire (Actually now I think about it, 3 wires, in, out and latch) message passing system. It was kinda neat but never got anywhere. I loved those crusty old processors. Very easy to write assembler for. https://www.digikey.com/en/products/filter/embedded-microcontrollers/685?s=N4IgTCBcDaIFoEsA2B7A5iAugGhAVilAAcoBGXIkyPABgF86g They appear to be at the “well I guess if you want them we’ll keep a line open” level of pricing ie $5/1k up to… whatever this is https://www.mouser.com/ProductDetail/ZiLOG/Z8018220AEG?qs=ZJLcQYZ9%2F%252B57UM50szoAJA%3D%3D
|
# ? Mar 10, 2022 01:54 |
|
priznat posted:They call it "UltraFusion" how they connect up 2 M1 Max chips (chiplets? dies?) but yeah it sounds a lot like MCM to me. That’s pretty dang cool man. Even using a substrate is a pretty interesting packaging advancement. I wonder if they really built their own solution or if they’re using TSMC’s solution. The in depth interviews and package shots are going to be really cool.
|
# ? Mar 10, 2022 02:19 |
|
NewFatMike posted:That’s pretty dang cool man. Even using a substrate is a pretty interesting packaging advancement. I wonder if they really built their own solution or if they’re using TSMC’s solution. The in depth interviews and package shots are going to be really cool. Yeah it got me wondering if the really hardcore die folks were all scratching their heads at what was going on at the top of the M1 Max die shots before the Ultra was announced. It was kind of like a clue just out in plain sight! I would like to know how they do the interposer. Is it a separate process with 2 Max dies to attach them? Really wild. priznat fucked around with this message at 02:26 on Mar 10, 2022 |
# ? Mar 10, 2022 02:23 |
|
Ian Cutress published a video I’m about to watch on it, I’m very excited: https://youtu.be/1QVqjMVJL8I
|
# ? Mar 10, 2022 02:26 |
|
Sweet saving that to watch later! Might answer my question.
|
# ? Mar 10, 2022 02:27 |
|
hobbesmaster posted:https://www.digikey.com/en/products/filter/embedded-microcontrollers/685?s=N4IgTCBcDaIFoEsA2B7A5iAugGhAVilAAcoBGXIkyPABgF86g Those are Z80-based microcontrollers. The modern, higher-clocked CMOS version of the classic Z80 CPU is called the Z84 and is still available for okay prices: https://www.digikey.com/en/products/detail/zilog/Z84C0006PEG/929204 (6 MHz DIP version but there are ones that go up to 20 MHz, and they also come in more modern SMT packages) I've built some simple machines with it on prototyping boards and one day I'll do a PCB layout for a 32 KB RAM/32 KB ROM/parallel IO/serial IO machine I designed. BattleMaster fucked around with this message at 06:58 on Mar 10, 2022 |
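For a 32 KB ROM / 32 KB RAM split on the Z80's 16-bit address bus, the classic decode is a single address line: A15 low selects ROM, A15 high selects RAM. A toy model of that decode (a hypothetical layout, not necessarily the actual schematic described above):

```python
# The Z80 has a 16-bit address bus: 64 KB total.
# Simplest 32K/32K split: A15 = 0 -> ROM, A15 = 1 -> RAM.
def decode(addr: int) -> str:
    assert 0 <= addr <= 0xFFFF
    return "ROM" if addr & 0x8000 == 0 else "RAM"

print(decode(0x0000))  # ROM -> reset vector: the Z80 starts executing at 0x0000
print(decode(0x7FFF))  # ROM -> top of the low 32 KB
print(decode(0x8000))  # RAM -> bottom of the high 32 KB
```

In hardware that's just A15 wired to the ROM's chip-select (and inverted to the RAM's), which is part of why these little machines need so little glue logic.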
# ? Mar 10, 2022 06:55 |
|
Z80 was how you spelled x86 before the IBM PC
|
# ? Mar 10, 2022 12:24 |
|
priznat posted:Yeah it got me wondering if the really hardcore die folks were all scratching their heads at what was going on at the top of the M1 Max die shots before the Ultra was announced. It was kind of like a clue just out in plain sight! Yeah people called it a while ago https://twitter.com/VadimYuryev/status/1466526403331952644 Apparently there were some not-so-subtle hints in MacOS as well https://twitter.com/marcan42/status/1501263782714089472 repiv fucked around with this message at 14:59 on Mar 10, 2022 |
# ? Mar 10, 2022 14:51 |
|
Yeah and there's only support for 2 chips so an M1 Quad Ultra is pure fantasy, like two uncles who work for Nintendo
|
# ? Mar 10, 2022 17:01 |
|
I’m still wondering what Apple will choose to do for the true blue Mac Pro. They’re obviously enjoying the advantages of an integrated SoC design, but if there’s a single market segment of theirs that will demand upgradeable memory and PCIe connectivity, it’s their Pro contingent. Do you suppose they’ll allow upgrades with ECC registered DDR5 and discrete cards, or will they offer ludicrous performance with up to, say, 512GB RAM and tell their customers to fall in line?
|
# ? Mar 10, 2022 18:09 |
|
NUMA clusters of M2s linked with PCIe Gen 6 and an Infinity Fabric knockoff. (If they had a decent PCIe IP core)
|
# ? Mar 10, 2022 19:32 |
|
It will be a big box you can put a Mac Studio inside of. When you want an upgrade you pull out the old Mac Studio and put in a new one.
|
# ? Mar 10, 2022 19:36 |
|
PBCrunch posted:It will be a big box you can put a Mac Studio inside of. When you want an upgrade you pull out the old Mac Studio and put in a new one. 100%, easier to replace the entire module than compromise the memory perf by using DIMMs.
|
# ? Mar 10, 2022 20:57 |