|
DrDork posted:It is an interesting question, yeah. I agree with you that the M1 Pro certainly leans more towards "server style specs," but at the same time, the x86/ARM tax isn't $1500+, either.

I don't think that's actually true at all. Variations are actually very cheap; it's not like Intel has 28 different masks that they use to make every single product, there's usually a couple of dies and the difference is just binning and fuses. Binning isn't nearly the bottleneck most people think it is: AMD had something like 80% of Zen 2 dies coming off the line with 8 functional cores, and that was a brand new product on a brand new node. Obviously the defect rate is higher on a big server product, but that also means more weird combinations of defects coming off the line, which gives you more ability to negotiate on that stuff (since it's garbage/low value to them unless it can be productized). Almost all of binning is actually just marketing seeking the exact maximum price you are willing to pay, and in that sense it is a net positive - it is better for Intel to make a SKU to sell you a half million 8160 chips than to have you buy 8150s instead; the additional cost of that custom SKU is almost zero to them. And that's why Intel is willing to do custom SKUs for specific customers and so on. Apple not doing tons of variations is because they're Apple and they've always been "pick your favorite colored happy fun box", and tons of SKUs gets in the way of that.

I'm not saying the x86 tax is $1500, but if the CPU is sold as a standalone product then Intel/AMD/NVIDIA (whoever sells the chip) has to make their own cut, and then it has to go into a product that is financially viable to sell to a customer. Apple is vertically integrating and can eat that profit margin themselves, and in this case they are turning around and passing it on to the consumer. It'd be like Intel turning around and saying "OK, this Hades Canyon chip is $1500 on the standalone market... or we'll sell it to you in a premium laptop for $2k". Could they do that? Sure. Could anybody else? Probably not. And if you are actually selling the chip as a separate product... at some point you are going to get accused of dumping. You are taking advantage of your lower internal cost to make products that would be financially unviable for your competitors to make, because they're getting charged the higher external cost. In a legal sense, it probably only works because Apple isn't selling the chips as a standalone product.
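The defect-harvesting side of that argument is easy to put rough numbers on. A back-of-the-envelope sketch, assuming independent per-core defects and anchoring on the ~80% fully-functional Zen 2 figure cited above (everything else here is illustrative, not real yield data):

```python
from math import comb

# Pick a hypothetical per-core "good" probability such that ~80% of
# 8-core dies have all 8 cores functional (the Zen 2 figure above).
p_all_good = 0.80
p_core = p_all_good ** (1 / 8)  # ~0.9725 per core

def p_exactly(k: int, n: int = 8, p: float = p_core) -> float:
    """Probability that exactly k of n cores are defect-free (binomial)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_8 = p_exactly(8)  # sellable as full 8-core
p_7 = p_exactly(7)  # harvestable as 6-core (fuse off the bad pair)
p_6 = p_exactly(6)  # still harvestable as 6-core

print(f"8 good cores:  {p_8:.1%}")   # ~80%
print(f"7 good cores:  {p_7:.1%}")   # ~18%
print(f"6+ good cores: {p_8 + p_7 + p_6:.1%}")  # ~99.9%
```

Under those assumptions, nearly every die on the wafer ends up sellable as a 6-core or better, which is the sense in which "harvested" lower SKUs are nearly free: they're dies you already paid for.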
|
# ? Oct 19, 2021 17:24 |
|
|
a lot of games still scale with fancy new CPUs, especially ones forever stuck on 1 core. and of course the forever bloat of software will eventually make every current CPU choke and die. my work issued laptop is an old 2/4 kaby lake that was joyously snappy when i first got it issued, and now just chrome work can grind it to a halt. and of course like always, some of the best bennies are the stuff you get from modern platforms. fast NVMe, USB-C charging, trackpads that aren't total rear end, actually being able to run a 4k or 1440p/144hz external without freaking the gently caress out or limiting you to 30hz, etc
|
# ? Oct 19, 2021 17:54 |
|
In some countries, coupled sales are straight up illegal. That means selling product (A + B) at a lower price than (A) + (B).
|
# ? Oct 19, 2021 18:02 |
|
repiv posted:I saw it pointed out that the M1 Pro is just the top half of an M1 Max (assuming the official die shots are accurate) It's not manufacturing, it's engineering. Mask design is a real cost.

Feature sizes are well below the wavelength of the light being used for lithography steps, even with EUV, so masks aren't simple images of the pattern you want to "print" on the wafer. Instead, they're billions of diffraction gratings designed to create the right interference patterns to make the image you want exposed. Fortunately, the process of generating the diffraction gratings from the desired final pattern can be automated by a computer. Unfortunately, for a big chip, it takes weeks of time on a big and expensive computer.

If you want to make two related products like M1 Pro and M1 Max, you can save yourself a lot of effort here (and in other physical design steps) by designing only the biggest configuration, but including cut lines. (This would also involve work earlier in the process, long before back end or physical design - for example, you're going to have buses crossing the cut line(s), so designers will need to make sure that everything's OK when the stuff on the other side of the cut line just doesn't exist.) The physical mask sets are still different AFAIK.
|
# ? Oct 20, 2021 00:17 |
|
Mask design itself also costs; going from RTL to netlist to mask is in itself an expensive process. At that scale, the routing and synthesis tools are stupidly expensive. I quickly learned what a chicken bit was because I kept hearing "Thanks for finding the RTL bug, but it's too expensive to spin another synthesis, we'll just turn off the chicken bit".
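For anyone unfamiliar: a chicken bit is a fuse or config-register bit that lets you disable a risky feature after tape-out, so a late bug doesn't force a respin. A minimal software-visible sketch of the idea - the register name and bit positions here are invented for illustration, not from any real chip:

```python
# Hypothetical chicken-bit register: each bit, when set, disables
# ("chickens out of") one risky feature. Names/offsets are made up.
CHICKEN_DISABLE_PREFETCHER = 1 << 0
CHICKEN_DISABLE_FUSED_OP = 1 << 3

def fused_op_enabled(chicken_reg: int) -> bool:
    """Firmware/driver gates the feature on its chicken bit."""
    return not (chicken_reg & CHICKEN_DISABLE_FUSED_OP)

# Silicon comes back fine: bit stays clear, feature runs.
print(fused_op_enabled(0))  # True
# RTL bug found after tape-out: blow the fuse / set the bit
# instead of spinning a new mask set.
print(fused_op_enabled(CHICKEN_DISABLE_FUSED_OP))  # False
```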
|
# ? Oct 20, 2021 09:05 |
|
I have a question and this seemed like this was the closest thread: I recently bought a new TV, an LG "smart" TV running webOS. The box boasts that it has a "quad-core processor", which makes sense because there *is* a user interface on this thing and it *can* run applications to the point where I don't really need my Google Chromecast anymore. I also assume it's actually an SOC because it looks like it also does post-processing when I'm watching something on Netflix. What kind of computing power are we looking at, here? comparable to a smartphone? comparable to an Atom? I'm just curious.
|
# ? Oct 20, 2021 09:23 |
|
If it's running a Linux-based multitasking OS it's an SoC.
|
# ? Oct 20, 2021 10:54 |
|
gradenko_2000 posted:What kind of computing power are we looking at, here? comparable to a smartphone? comparable to an Atom? I'm just curious.
|
# ? Oct 20, 2021 10:55 |
|
tehinternet posted:I have a car. The car has eight functional cylinders, but only uses four unless you pay money to get the dealer to send the code to activate the cylinders. This metaphor doesn’t really work because of how CPUs are made from larger wafers and binned, but yeah. Some new cars even have a shop inside the infotainment where you can buy features that ship in the car but are disabled in software, like butt warmers and CarPlay. E: oh there was another page, oops
|
# ? Oct 20, 2021 11:58 |
gradenko_2000 posted:I have a question and this seemed like this was the closest thread: Given what the market looks like, it's probably an ARMv8 quad-core SoC at ~1.6GHz with a built-in Mali GPU and hardware decoding ASICs for H.264 and H.265. BlankSystemDaemon fucked around with this message at 12:19 on Oct 20, 2021 |
|
# ? Oct 20, 2021 12:08 |
|
Kivi posted:Hate to tell you but new cars actually work like that. Most cars ship with the hardware for better model (engine power is cut by rev limiter or boost pressure) they're just coded off or require some additional bits. didn’t BMW just eat a bunch of (very deserved) flak for locking a bunch of safety features behind a subscription?
|
# ? Oct 20, 2021 12:16 |
|
This is one argument we really don't need to drag back up from the grave it was resting comfortably in.
|
# ? Oct 20, 2021 12:39 |
|
It was kinda based on a fundamental misunderstanding about semiconductor manufacturing anyhow. There is never a way to "simply avoid any waste" because it's not like chips are made to fuckin order. Even the very largest companies rarely design more than 3 base models per generation (Apple's current M1/Pro/Max, Intel LCC/MCC/XCC Xeon, Nvidia GA102/104/106, etc). And on any chip design, depending on how large it is, anywhere from 10-40% of the ones on a wafer are going to have a defect, ranging from a completely nonfunctional chip to one or more nonfunctional cores. So unless you want to totally waste those, you have to have lower end models available based on the highest end design.

Then there's simple market demand - you only have 3 basic designs, but there are demands for products at a wide range of price points. Some of these, by necessity, are going to be fully working chips with cores fused off, or clocked at lower speeds. Instead of permanent fusing, allowing those models (WHICH ALREADY EXIST) to be later upgraded in place is a strict improvement over the existing status quo.

Despite all this, however, it's also kinda moot, because the future of chip manufacturing is pretty obviously headed in AMD's direction and away from Intel's large monolithic designs. Building a chip out of 1-12 small, easy to manufacture, uniform building blocks actually does dramatically reduce waste and allow for increased flexibility. AMD only needs 1 basic design, and it's an easy one to build, so they only do a 6 and 8 core version for defect harvesting. Then it's just a matter of how many of those you slap on a total chip, and away you go. Intel is moving there but they're years behind.
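The monolithic-vs-chiplet waste argument can be put in rough numbers with the classic Poisson yield model, Y = e^(-D·A). The defect density and die areas below are illustrative picks, not real foundry figures:

```python
from math import exp

D0 = 0.1  # defects per cm^2 (illustrative, not a real process number)

def yield_poisson(area_cm2: float, d0: float = D0) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return exp(-d0 * area_cm2)

mono = yield_poisson(6.0)      # one big ~600 mm^2 monolithic die
chiplet = yield_poisson(0.75)  # one small ~75 mm^2 chiplet

print(f"600 mm^2 monolithic die yield: {mono:.1%}")    # ~55%
print(f"75 mm^2 chiplet yield:         {chiplet:.1%}")  # ~93%
```

And since chiplets are tested before packaging, a defective one costs you 75 mm² of silicon rather than 600, which is the flexibility/waste win described above.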
|
# ? Oct 20, 2021 13:12 |
|
Gwaihir posted:Despite all this, however, it's also kinda moot, because the future of chip manufacturing is pretty obviously headed in AMD's direction and away from Intel's large monolithic designs. Building a chip out of 1-12 small, easy to manufacture, uniform building blocks actually does dramatically reduce waste and allow for increased flexibility. AMD only needs 1 basic design, and it's an easy to build one so they only do a 6 and 8 core version for defect harvesting. Then it's just a matter of how many of those do you slap on a total chip, and away you go. Intel is moving there but they're years behind. everybody already knows this including intel, which is why intel's next server generation is chiplet based. it's just yet another thing that's gotten caught up with 10nm manufacturing problems, it's not like Intel has some inherent favoritism for 800mm2 monolithic chips, they simply can't make the designs they've got because of the 10nm problems. granted it's only 4 chiplets and they are still relatively large (400mm2), they aren't leaning into it as heavily as AMD yet nor sharing between consumer and server lineups like AMD, and nobody knows yet whether it's going to be a weird NUMA thing like first-gen Epyc was, but yeah, everyone sees the benefits of chiplets. Paul MaudDib fucked around with this message at 16:01 on Oct 20, 2021 |
# ? Oct 20, 2021 15:57 |
|
repiv posted:I saw it pointed out that the M1 Pro is just the top half of an M1 Max (assuming the official die shots are accurate) The M1 Pro definitely looks like the top half of the M1 Max. Also I'm sure the edges of the wafer are filled with smaller dies. Perplx fucked around with this message at 16:23 on Oct 20, 2021 |
# ? Oct 20, 2021 16:20 |
|
Perplx posted:The M1 Pro definitely looks like the top half of the M1 Max. Also I'm sure the edges of the wafer are filled with smaller dies. I... Do not think that is how wafers work lol. You don't get to mix n match different dies on one wafer.
|
# ? Oct 20, 2021 16:30 |
|
Gwaihir posted:I... Do not think that is how wafers work lol. You don't get to mix n match different dies on one wafer. My direct experience is only with giant processes, but while you can't mix and match on the line, at design time you can put whatever you want on the wafer; every single element could be different. Well, limited by the fab technology of course.
|
# ? Oct 20, 2021 16:51 |
|
Gwaihir posted:I... Do not think that is how wafers work lol. You don't get to mix n match different dies on one wafer. I do not have much insight on modern deep submicron lithography technology as applied to computer chips, but older manufacturing processes totally allow you to do this. You can put multiple chips on the same mask set provided that they are small enough relative to the max reticle area. Also, you can expose one wafer with images from multiple mask sets. I don’t know if modern lithography systems allow this kind of flexibility. Older, less-fine resolution systems do not need the corrections BobHoward was talking about. The shapes on the mask are the shapes imaged onto the chip, except 4 or 5 times bigger.
|
# ? Oct 20, 2021 18:07 |
|
https://twitter.com/VideoCardz/status/1450937508343259136 yeah if that's the way it's gonna be I'm just gonna chill the hell out and not even worry about DDR5 for a while
|
# ? Oct 21, 2021 07:46 |
|
GPUs just got down to a mere double MSRP on the grey market here. I expect DDR5 is going to similarly be unobtanium at launch and for a solid year or two afterwards.
|
# ? Oct 21, 2021 08:10 |
|
silence_kit posted:I don’t know if modern lithography systems allow this kind of flexibility. They do, and it's routinely done. TSMC's shuttle run service shares a single mask set across many designs, reducing the cost of testing your A0 chip revision and any ECOs you decide to make after getting first silicon back. Once you're ready for volume production, you pay for a dedicated mask set with only your chip on it. If you want to run wafers with a multi-design mask, I'm sure TSMC is happy to do so. The page notably omits 5nm from the set of process nodes with shuttle services, but I'd bet that's because 5nm is still mostly Apple-exclusive.
|
# ? Oct 21, 2021 11:05 |
|
whoopsie daisy our entire lineup leaked with nice box shots oopssieeee this is not intentional we swear!!! https://twitter.com/VideoCardz/status/1451254622862053384 kinda surprised how far up the stack the twinned DDR4 models made it.
|
# ? Oct 21, 2021 20:57 |
|
Those motherboards really show you need a full ATX board to get 3 PCIe slots because of 4-slot GPUs.
|
# ? Oct 21, 2021 21:14 |
|
Perplx posted:Those motherboards really show you need a full atx board to get 3 pcie slots because of 4 slot gpus, I was bitching hardcore about this a while back in the GPU thread - this is particularly true since motherboards went to 3-slot spacing, meaning that if you use a (true) 3-slot GPU then any card in the middle slot will significantly block airflow. Basically you should add a slot for air spacing, meaning 3-3.5 slot cards really take up 4 slots, so you can only use the top slot and bottom slot on most motherboards in this era of 3.5-slot GPUs. The 3-slot spacing idea made sense in an era when most cards were 2 or maybe 2.5 slot, but you can't buy a midrange or higher card that isn't 3-slot these days. And only a small handful of motherboards use the traditional 2-slot spacing anymore.

BIG HEADLINE pointed out the cheeky solution to this: buy a card with AIO liquid cooling and those cards will be 2-slot with no need for airflow. And the EVGA 3090 Kingpin was a rather excellent deal in that area - you could get it for MSRP, which is about the same as Zotac or other brands are charging for their air-cooled models, the queue was relatively short, and you can slap a 10-year warranty on it; since it's an AIO it's pretty much guaranteed to fail in the 5-10 year window without some basic service, so you can trade up for a new card via warranty service at that point. EVGA queues are closed now, unfortunately - but you can still buy an AIO card from a retailer, and it doesn't have to be a 3090.
|
# ? Oct 21, 2021 21:45 |
|
scalpers are putting up the Kingpin for like $4k i'm eyeing that MSI Unify-X like an idiot
|
# ? Oct 21, 2021 22:08 |
|
I'd argue the Kingpin still needs SOME directed airflow despite the AIO. I also found the OEM fans on the radiator lacking and replaced them with ML120 Pros. Kept the stock fans around for replacement should the card need servicing, though. And yeah, I don't regret for one second jumping on the card when I did and "trading up" from my 3090 FTW3 Ultra. The boosted clocks should compare favorably to the 3090S or Ti that's likely never coming out, or will be rebranded as a Titan. BIG HEADLINE fucked around with this message at 01:29 on Oct 22, 2021 |
# ? Oct 22, 2021 01:26 |
|
Have any DDR5 vendors advertised any >4800MT/s ECC DIMMs yet?
|
# ? Oct 22, 2021 11:44 |
|
Sidesaddle Cavalry posted:Have any DDR5 vendors advertised any >4800MT/s ECC DIMMs yet? It's not there anymore, but on my phone one of the Google-recommended articles was about a 6000MT/s stick; I think it was G.Skill.
|
# ? Oct 22, 2021 12:46 |
|
There are several vendors with > 4800 DDR5 DIMMs announced, but I'm not sure about modules supporting ECC outside of the DIMM itself. I don't keep up with news in workstation/server spaces.
|
# ? Oct 22, 2021 13:15 |
|
repiv posted:I saw it pointed out that the M1 Pro is just the top half of an M1 Max (assuming the official die shots are accurate) Wait, does the top-end M1 Max with its 512-bit LPDDR5 bandwidth of 400+GB/s mean it's pushing about 8x the bandwidth of a single channel of DDR5-6400 and more than half of the bandwidth of a 3080 (760GB/s)? And that you can have a giant glob of 64GB memory to go hog wild with? neat
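The M1 Max memory interface is 512 bits of LPDDR5-6400, and the arithmetic does check out:

```python
def bandwidth_gbps(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return bus_bits / 8 * mt_per_s / 1000  # MT/s x bytes -> GB/s

m1_max = bandwidth_gbps(512, 6400)  # 512-bit LPDDR5-6400 -> 409.6 GB/s
ddr5_ch = bandwidth_gbps(64, 6400)  # one 64-bit DDR5-6400 channel -> 51.2 GB/s
rtx_3080 = 760.0                    # GDDR6X, for comparison

print(m1_max, m1_max / ddr5_ch, m1_max / rtx_3080)
# 409.6 GB/s; 8x one DDR5-6400 channel; just over half a 3080
```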
|
# ? Oct 22, 2021 13:19 |
|
yes it's a monster of a chip

m1 max with 64gb is at least $3300 though
|
# ? Oct 22, 2021 13:34 |
|
I guess at least you can use it to poo poo post from the sofa without worrying about battery. Even the decked out 16gb ipad pro is already 2k
|
# ? Oct 22, 2021 13:47 |
|
God, imagine if Apple made GPUs

It's obnoxious how ludicrously good their hardware is
|
# ? Oct 22, 2021 13:48 |
|
I would assume the ARM Mac Pro will have an Apple dGPU (or several)? It'll never work on Windows/Linux though even if it's regular PCIe
|
# ? Oct 22, 2021 14:13 |
|
Zedsdeadbaby posted:It's not there anymore but on my phone one of the google recommended articles was about a 6000MT/s stick, I think it was g.skill. https://www.techpowerup.com/288082/g-skill-announces-worlds-fastest-ddr5-6600-cl36-trident-z5-memory-kits DDR5-6600 CL36. I expect it to cost an arm and a leg.
|
# ? Oct 22, 2021 14:25 |
|
Zedsdeadbaby posted:God, imagine if Apple made GPUs I know what you mean, but, to goonsay it anyhow: they do make GPUs - the Max chip effectively has an RTX 2080 in it!
|
# ? Oct 22, 2021 15:21 |
|
Gwaihir posted:I know what you mean, but, to goonsay it anyhow: they do make gpus- The max chip has an effectively RTX 2080 in it!
|
# ? Oct 22, 2021 15:59 |
|
repiv posted:yes it's a monster of a chip it appears to score on par with the 16-core Xeon that’s available in the 2019 Mac Pro, so from that perspective it’s quite a bargain trilobite terror fucked around with this message at 16:04 on Oct 22, 2021 |
# ? Oct 22, 2021 16:01 |
|
repiv posted:I would assume the ARM Mac Pro will have an Apple dGPU (or several)? Mac Pro is rumored to be a 40 core configuration, which obviously raises the prospect that it’s four M1 Max on a package, which would be 32+8 obviously, and probably no need for a dGPU, especially with four of the Max iGPUs presumably working in MCM configuration. The Max really stacks in the ram chips though - they’re not just on top of the package, they’re all around it - so that will certainly be interesting to put 4 of them on a package, in terms of board density. Or maybe they end up going for socketed ram on those, I guess - obviously some pros do actually use the 2TB ram configurations (and a lot do need more than 256GB, at least) and there’s no form factor pressure like in a laptop. But either way you’re looking at a 64-channel ram configuration (equivalent bus width to 32 channel ddr4) if they go down that road. That’s tough with actual sockets - you’re talking about 32 sockets on the board for 1 DIMM per channel, with up to a terabyte a stick on DDR5 I think? Maybe less if it doesn’t do RDIMM/LRDIMM, I think DDR5 will do 256GB per UDIMM? It’s technically possible I guess, quad socket or octo socket SMP systems make it work, but bringing it all into a single package will be… interesting. Sockets on both sides of the board I’d assume (i.e. non-ATX form factor), CPU goes in the middle, each chiplet gets its channels routed in from the outsides "flux capacitor style". Gonna have to bring in the Asrock board design team for that one. I guess that’s the argument against Apple reusing the M1 Max - there’s just not enough ram bandwidth there - so maybe it’s four Pros instead: same cores but smaller iGPUs, since you can’t keep four chiplets fed. Maybe you knock it down to 32-channel or 16-channel ram to keep the routing under control - that’s still a lot to feed the cpu side, just won’t be the meme tier iGPU config.
The Mac Pro will almost certainly have PCIe for a dGPU or other external expansion cards, pros need IO cards for DAW and video capture/editing and various other Apple-y things, and the trashcan is an obvious mis-step, rumors say they're going back to the old cheesegrater format and that will bring back PCIe expansion. So far there is no evidence of them actually making a dGPU though and that is not really their core business. I guess they could, but it would have to make the same design compromises (VRAM, etc) as other dGPUs, you can't just meme your way through a dGPU, and it would have to be affordable as a standalone product not just bundled into something else, and they don't have CUDA lock-in or other key selling points that they could leverage - nobody actually buys Mac for the graphics APIs or heterogeneous compute languages. So it's an uphill battle all the way. Paul MaudDib fucked around with this message at 17:49 on Oct 22, 2021 |
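The channel arithmetic in the rumored four-die configuration above, spelled out (assuming four M1 Max dies, each with a 512-bit LPDDR5 interface - the die count is the rumor, not a confirmed spec):

```python
dies = 4
bits_per_die = 512                 # M1 Max LPDDR5 bus width

total_bits = dies * bits_per_die   # 2048-bit aggregate memory bus
lpddr5_channels = total_bits // 32 # LPDDR5 channels are 32 bits wide
ddr4_equiv = total_bits // 64      # DDR4 channels are 64 bits wide

print(total_bits, lpddr5_channels, ddr4_equiv)  # 2048 64 32
```

That 32-DDR4-channel equivalence is why socketing it looks so painful: one DIMM per 64-bit channel already means 32 sockets.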
# ? Oct 22, 2021 17:21 |
|
|
Paul MaudDib posted:Mac Pro is rumored to be a 40 core configuration, which obviously raises the prospect that it’s four M1 Max on a package, which would be 32+8 obviously, and probably no need for a dGPU especially with four of the Max iGPUs presumably working in MCM configuration. part of me hopes they keep the design language + relative form factor of the current cheesegrater, and part of me wants to see them do some really weird poo poo.
|
# ? Oct 22, 2021 17:38 |