|
They made a fairly significant investment in the cheese grater and I'm not sure it's been worth it. It's not like Apple uses the Pro as a flagship SKU either. I'm thinking they'll either shrink it and deal with the thermals, or keep it roughly the same and pack it to the gills with cores and expansion options while netting a decent margin.
|
# ? Oct 22, 2021 18:42 |
|
The Pro/Max step for Apple silicon is super interesting, and the next step is super interesting too. I sort of assumed that the Pro/Max don't have many PCIe lanes, if it is anything like the original M1. I guessed something like 8 general purpose PCIe4 lanes. 3x TB4 burns 6 lanes, plus 1 lane for WiFi/misc, and maybe 1 spare lane for a 10Gb Ethernet port like we saw on the M1 Mac Mini? If that's true, that limits their utility in a theoretical multi-CPU Mac Pro setup. Gonna burn lanes on the CPU interconnect and not really have much left over for ports or slots, but maybe I'm totally wrong on that assumption! They could also pull a very Apple move and tell the world that you don't need ports or slots anymore because ~Apple Magic~. Also would be a weird situation with the memory bandwidth like Paul mentioned, and the general complexity of trying to juggle 2-4 NUMA nodes plus 2-4 separate GPUs with their own memory pools in a coherent, Apple-like way for the end user. But I wouldn't put it past em to try. Gonna be very cool to see where they drive their bus next. I do love it when Apple goes their own way, even if it ends up being the wrong way. Their tight integration and high margins let them do weird stuff nobody else will.
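For what it's worth, the guessed lane budget above tallies out exactly. A quick sketch (all figures are the guesses from the post, not confirmed specs):

```python
# Back-of-the-envelope PCIe lane budget for a hypothetical M1 Pro/Max,
# using the guessed figures above (8 general-purpose PCIe4 lanes assumed).
total_lanes = 8

consumers = {
    "3x Thunderbolt 4 (2 lanes each)": 3 * 2,
    "WiFi/misc": 1,
    "10Gb Ethernet (the 'spare' lane)": 1,
}

used = sum(consumers.values())
for name, lanes in consumers.items():
    print(f"{name}: {lanes} lane(s)")
print(f"used {used} of {total_lanes}, {total_lanes - used} left over")
```

Which is to say: under that assumption, the whole budget is spent before you get to any slots at all.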
|
# ? Oct 22, 2021 19:44 |
|
Does a Mac Pro really need to have the illusion of one big GPU? The ones they sell now have up to 4 separate GPUs that software needs to wrangle manually. AFAIK the pro apps that really lean on the GPU are already set up to handle that.
|
# ? Oct 22, 2021 19:54 |
|
I know there used to be some weirdness with multi CPU/GPU setups even in the first party Apple apps, but also in Adobe stuff. No idea if that's still the case.
|
# ? Oct 22, 2021 20:15 |
|
ROG boards have "leaked": https://videocardz.com/newz/asus-rog-maximus-z690-extreme-formula-apex-hero-and-other-alder-lake-s-motherboards-pictured I love that Asus ITX boards keep sprouting more and more daughter boards. So in addition to the three-PCB audio/M.2 sandwich stack, they've also sprouted what looks to be a SATA breakout board.
|
# ? Oct 22, 2021 20:28 |
|
Jesus christ what is that monstrosity? At some point don't you stop calling it a motherboard and just call the thing a case in and of itself?
|
# ? Oct 22, 2021 20:32 |
|
ROG builders, remember to lift with your legs, not your back.
|
# ? Oct 22, 2021 22:12 |
|
ASRock is going to one-up them and put a whole drat rack in a tiny enterprise mITX
|
# ? Oct 22, 2021 22:22 |
|
WhyteRyce posted:ASRock is going to one-up them and put a whole drat rack in a tiny enterprise mITX Just a solid cube of interconnected boards that you bolt into the empty space within a case. gently caress, now I kinda want one.
|
# ? Oct 23, 2021 00:32 |
|
DrDork posted:Just a solid cube of interconnected boards that you bolt into the empty space within a case. isn’t that just a DGX Station?
|
# ? Oct 23, 2021 01:00 |
|
repiv posted:I would assume the ARM Mac Pro will have an Apple dGPU (or several)? Never is a strong word, considering the pace at which Alyssa Rosenzweig has been reverse engineering the Apple GPU for Asahi Linux. However, a PCIe add-in card seems unlikely, so it's not going to be available for x86 PCs. As of the last update I read, the only major issue she's found is that Apple co-designs their hardware and software. Since they didn't give two shits about supporting legacy 3D API concepts in Metal, their GPU hardware doesn't support that stuff either. (She mentioned it makes reverse engineering the GPU easier: everything's very simple and clean.) The Linux driver stack is going to have to emulate some things. It's not a disaster, though. Any modern application which cares about GPU performance should be using the subset of Vulkan which maps to Apple GPU hardware. Apple didn't go off in a totally different direction from the rest of the industry, they just made a cleaner break with the past.
|
# ? Oct 23, 2021 01:09 |
|
Cygni posted:ROG boards have "leaked": The biggest pain in the rear end about this is that most low-profile coolers aren't compatible with this kind of layout. Good luck fitting an NH-L12 or an Alpenföhn Black Ridge in there.
|
# ? Oct 23, 2021 02:05 |
|
Dr. Video Games 0031 posted:The biggest pain in the rear end about this is that most low-profile coolers aren't compatible with this kind of layout. Good luck fitting an NH-L12 or an Alpenföhn Black Ridge in there. That's the legitimate use case for the single fan AIOs.
|
# ? Oct 23, 2021 02:23 |
|
Cygni posted:The Pro/Max step for Apple silicon is super interesting, and the next step is super interesting too. M1 integrates a TB controller into the SoC; no external PCIe lanes to an Intel "Ridge" chip are required. I doubt that's changed in the Pro and Max versions. Plain PCIe isn't too useful as a CPU interconnect due to poor latency, high power requirements, and no cache coherency.

quote:Also would be a weird situation with the memory bandwidth like Paul mentioned, and the general complexity of trying to juggle 2-4 NUMA nodes plus 2-4 separate GPUs with their own memory pools in a coherent Apple like way to the end user. But I wouldn't put it past em to try. Gonna be very cool to see where they drive their bus next.

Data point #1: Apple has some packaging technology patents showing multiple die with adjacent memory stacks side-by-side on some kind of interposer.

Data point #2: The leaks which accurately identified the new M1 Pro/Max machine specs (8+2 CPU cores, 16 or 32 GPU cores) mentioned two codenames for the chips: Jade C-Die and Jade C-Chop. They mentioned two further codenames, Jade 2C-Die and Jade 4C-Die, supposedly the desktop and Mac Pro versions. Specs are 20 CPU cores / 64 GPU cores for 2C-Die, and 40/128 for 4C-Die.

The following is just my attempt to make sense of it all. I could be very wrong! I think all Jade chips are based on one physical design, Jade 4C-Die. Apple just put in cut lines to make smaller derivatives:

4C-Die: die-to-die interconnect on top and bottom edges. Supports 4 die in a ring topology.
2C-Die: die-to-die interconnect on one edge only. Supports 2 die.
C-Die: no interconnect.
C-Chop: no interconnect, half the GPU chopped off.

The following is also my speculation, but this time it's informed, because this has been Apple's public messaging so far: they won't be supporting non-Apple GPUs. They're far too all-in on the benefits of unified memory.
In the Mac Pro, they may provide some kind of proprietary expansion module to add more Apple GPU cores to the system, but AMD and Nvidia are out forever unless users and developers revolt.
|
# ? Oct 23, 2021 03:48 |
|
There was an Intel Asus mITX board a few generations ago with a riser board for the VRM components
|
# ? Oct 23, 2021 03:50 |
|
BobHoward posted:M1 integrates a TB controller into the SoC; no external PCIe lanes to an Intel "Ridge" chip are required. I doubt that's changed in the Pro and Max versions. I am just a hobbyist, but my understanding is that TB4 basically has to offer 2x PCIe4 lanes to meet the spec. Whether those are discrete general purpose lanes or an integrated controller that could never be bifurcated, I dunno! I guess this is a similar discussion to the Apple-specific NVMe controller. Does it or does it not matter to "count" those lanes if they can't be used for anything else? When I was talking about PCIe as an interface between chips, I was mostly referencing that AMD used PCIe (with an additional interface layer on top of it) for basically everything in Epyc. "Everything is PCIe" was their slogan, after all. If M1 has an explicit, reserved non-PCIe interconnect, we obviously haven't heard of it yet, hence my assumption.
|
# ? Oct 23, 2021 08:08 |
|
Cygni posted:I am just a hobbiest, but my understanding is that TB4 basically has to offer 2xPCIe4 lanes to meet the spec. Whether those are discrete general purpose lanes or an integrated controller that could never be bifurcated, I dunno! I guess this is a similar discussion to the Apple specific nvme controller. Does it or does it not matter to “count” those lanes if they can’t be used for anything else? You seem to be thinking about this as if there's an external Intel "Ridge" series thunderbolt controller: those have upstream PCIe ports to connect to the CPU, DisplayPort inputs to connect to video sources, and Thunderbolt I/O to connect to the physical ports. But Apple integrates the TBT controller inside their SoC. They don't have to implement the full PCIe stack for their TBT ports, just the part which translates between the on-die interconnect and PCIe packets. This is called a PCIe Root Complex, or RC. The PCIe packets entering and exiting the RC don't need to be transported through an actual PCIe SERDES to get to Apple's TBT controller on the same die. There's no need: the point of a SERDES is to reduce pin count, which is not an issue for on-die connections to physically adjacent blocks. What I'm saying is that it's a mistake to think of the resources used by M1 TBT ports as if they're external PCIe lanes. Just ignore the fact that they exist, they're not relevant to asking the question "can a M1 derivative support a lot of off-chip PCIe lanes". quote:When I was talking about PCIe as an interface between chips, I was mostly referencing that AMD used PCIe (with an additional interface layer on top of it) for basically everything in Epyc. “Everything is PCIe” was their slogan, after all. If M1 has an explicit, reserved non PCIe interconnect, we obviously haven’t heard of it yet, hence my assumption. AMD isn't really using PCIe here. They have a tri-mode SERDES (serializer/deserializer) which can be used for a PCIe lane, SATA port, or Infinity Fabric lane. 
Take a look at this diagram of a pair of die connected through one of these tri-mode SERDES: https://en.wikichip.org/wiki/amd/infinity_fabric#IFIS Only one of the paths through the MUXes can be active. The SDF/CAKE path is Infinity Fabric. (Also, these tri-mode SERDES are only used for Infinity Fabric Inter-Socket or IFIS links. IFOP - "Infinity Fabric On Package" - is used for chiplet-to-chiplet comms. IFOP uses dedicated single-mode SERDES as there's no need for flexible reassignment, and they can be much smaller, faster, and lower power since IFOP links are so short and well-behaved.) What will Apple use? Unknown, but my guess is that it's going to look a bit like IFOP links. That is, low power, extremely short distance synchronous SERDES links. The protocol running on these probably will not be Infinity Fabric. It's Apple, they're likely to roll their own.
|
# ? Oct 23, 2021 10:49 |
|
derp
|
# ? Oct 23, 2021 11:05 |
|
BobHoward posted:The protocol running on these probably will not be Infinity Fabric. It's Apple, they're likely to roll their own. Which was probably why they had some wonky problems, like external drives running slower on the M1 when nothing else is using the TB port, and much faster when something else is (??), where the opposite is true for Intel/PC-based implementations
|
# ? Oct 23, 2021 12:39 |
|
BobHoward posted:Correcting my monkey rear end Thanks dude, I learned something today!
|
# ? Oct 23, 2021 20:18 |
|
BobHoward posted:M1 integrates a TB controller into the SoC; no external PCIe lanes to an Intel "Ridge" chip are required. I doubt that's changed in the Pro and Max versions. I'm going to post here since this thread is more technical than the Mac thread. I did some simple chip math based on numbers from https://en.wikipedia.org/wiki/Transistor_count and the Jade 4C-Die is insane. It should be roughly 228 billion transistors and ~1695 mm², making it the biggest consumer chip ever, double the size of the GA100 Ampere. (M1 Max transistor count × 4) / (M1 transistor count / M1 area)
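Spelling out that arithmetic with the Wikipedia figures (M1: 16 billion transistors on 119 mm²; M1 Max: 57 billion transistors):

```python
# Scale M1's transistor density up to the rumored Jade 4C-Die, using the
# same arithmetic as the post above. Figures from Wikipedia's transistor
# count page; assumes 4C-Die density matches the plain M1, which is rough.
m1_transistors, m1_area_mm2 = 16e9, 119
m1_max_transistors = 57e9

jade_4c_transistors = 4 * m1_max_transistors   # 228 billion
density = m1_transistors / m1_area_mm2         # transistors per mm^2
jade_4c_area = jade_4c_transistors / density   # ~1695 mm^2

print(f"{jade_4c_transistors / 1e9:.0f}B transistors, ~{jade_4c_area:.0f} mm^2")
```

For comparison, GA100 is ~826 mm², so "double the size of Ampere" checks out, at least if you pretend it could be one die.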
|
# ? Oct 24, 2021 05:21 |
|
but it's not really a single die, right? Like BobHoward was talking about, it's really 4 C-dies on some sort of CoWoS/3D Fabric/2.5D package, I would guess? Would also make sense, since apparently ~800mm² is the reticle limit on TSMC's 5nm family.
|
# ? Oct 24, 2021 05:40 |
|
Dr. Video Games 0031 posted:The biggest pain in the rear end about this is that most low-profile coolers aren't compatible with this kind of layout. Good luck fitting an NH-L12 or an Alpenföhn Black Ridge in there. Yea, I hate this too. I wish they had more ITX Designs that included a small 20-30mm fan as exhaust near the I/O Backplane.
|
# ? Oct 24, 2021 17:18 |
|
Cygni posted:but it's not really a single die, right? Like BobHoward was talking about, it's really 4 C-dies on some sort of CoWoS/3D Fabric/2.5D package, I would guess? Would also make sense, since apparently ~800mm² is the reticle limit on TSMC's 5nm family. Yeah, that's what I was trying to describe. Jade C-Die aka M1 Max is already super large; to scale up they're going to have to go multi-die. The "2C" and "4C" names and core counts suggest the inclusion of multi-die interconnect adequate for two and four die, respectively.
|
# ? Oct 25, 2021 01:25 |
|
Are we thinking there will be launch day availability for Alder Lake or is this just a big announcement?
|
# ? Oct 25, 2021 04:30 |
|
Availability is 11/4. Based on the number of people who seem to already have them in their hands, apparently there is at least some stock!
|
# ? Oct 25, 2021 04:43 |
|
Rocket Lake was relatively available on launch day. All CPUs were present at MSRP for some hours, enough time to order one if you were paying attention. After that, availability was hit or miss for a month or two but they came permanently back into stock in due time (aside from the 11400). Alder Lake may be in higher demand though due to it not being overshadowed by a competing CPU line out of the gate. As long as you're paying attention to the launch windows though, I would expect you to be able to get one.
Dr. Video Games 0031 fucked around with this message at 05:23 on Oct 25, 2021 |
# ? Oct 25, 2021 05:21 |
|
If Alder Lake has the single core performance bump that the leaks point to, it's going to sell out no matter what. Maybe not the CPUs, maybe it's motherboards, but I expect that if this is the next big thing for gaming it's going to sell out fast. People are still in a mood to dump loads of money into gaming (see GPU prices) and it's launching just at the start of the spending spree season. I don't expect any desirable SKUs to have very good availability at MSRP.
|
# ? Oct 25, 2021 05:30 |
|
I'm not so worried about the CPU's availability as I am about everything else in a PC build.
|
# ? Oct 25, 2021 12:14 |
|
Yeah, motherboard choices + DDR5 DIMMs if you're needy for them may fluctuate quite a bit in the next couple of months too
|
# ? Oct 25, 2021 16:05 |
|
What CPU-bottlenecked games are there that still won't run at a consistent 100+ fps on a 5600X, other than BOTW / RPCS3 crap?
|
# ? Oct 26, 2021 05:11 |
|
Sidesaddle Cavalry posted:Have any DDR5 vendors advertised any >4800MT/s ECC DIMMs yet? Answering my own question with ADATA's press release on their XPG Lancers @ 5200 with a 6000 version coming later. https://www.techpowerup.com/288292/adata-announces-xpg-lancer-ddr5-memory-up-to-6000-mt-s-of-rgb-goodness
|
# ? Oct 26, 2021 15:32 |
|
Otakufag posted:What CPU-bottlenecked games are there that still won't run at a consistent 100+ fps on a 5600X, other than BOTW / RPCS3 crap? I think there's still room to run if you're looking for 240Hz, though I agree that a 5600X is either enough for 120/144, or you're going to be GPU-bound anyway
|
# ? Oct 26, 2021 15:41 |
|
Sidesaddle Cavalry posted:Answering my own question with ADATA's press release on their XPG Lancers @ 5200 with a 6000 version coming later. Reminder that all DDR5 has on-die ECC, and will be marketed as having ECC, but consumer RAM and consumer platforms won't have per-DIMM ECC… which is what you actually want if you think you need ECC. There are still separate, server-tier DIMMs with "real" ECC on DDR5.
|
# ? Oct 26, 2021 17:09 |
|
https://twitter.com/videocardz/status/1452969756412284943 That actually… is cool? Looks like it’s above the slot so should be actually reachable.
|
# ? Oct 26, 2021 17:30 |
|
Cygni posted:https://twitter.com/videocardz/status/1452969756412284943 Yeah, the one thing I don't like about my Noctua cooler is that larger cards sit so close that you can't hit the slot release without taking a knife and shoving it down there. A quick-release button that doesn't sit under the GPU is a very tidy solution to that. I'd actually rather they just took the slot lock off. Things worked fine without it; I'm not sure why we suddenly needed it, other than it being "fancier/newer/better". Obviously you can just rip it off yourself, but... I'd rather not. Unless there's some trick to removing it without putting a ton of force on the slot?
|
# ? Oct 26, 2021 18:17 |
|
Speaking of, someone must have made some kind of tool that slots into the release mechanism to make it easier to reach?
|
# ? Oct 26, 2021 18:24 |
|
I use the back non-pointy end of the screwdriver that comes with the D15
|
# ? Oct 26, 2021 18:30 |
|
Cygni posted:Reminder that all DDR5 has on-die ECC, and will be marketed as having ECC, but consumer ram and consumer platforms won’t have per dimm ECC… which is what you actually want if you think you need ECC. There are still separate, server-tier dimms with “real” ECC on DDR5. I like the on-die ECC feature though? Is this implying that there will also be DIMM kits that *won't* enable the on-die ECC-on-refresh feature? I'd like to be wary of those.
|
# ? Oct 26, 2021 18:38 |
|
Sidesaddle Cavalry posted:I like the on-die ECC feature though? Is this implying that there will also be DIMM kits that *won't* enable the on-die ECC-on-refresh feature? I'd like to be wary of those. All DDR5 will have the on-die ECC enabled, but it only corrects for a portion of the errors that "real" ECC memory with DIMM-level ECC corrects for. I'm with you that it's cool that it has SOMETHING though. Speaking of that, Linus put a pretty good DDR5 explainer video up, including touching on that potentially annoying issue we talked about months ago with DDR5 modules having multiple different kinds of voltage controllers that may limit overclocking on some modules, and others that have "programmable" modes that have to be certified by motherboard vendors. He seems to be generally positive about DDR5, and considering they actually have it and are running it, that's probably a good sign. https://www.youtube.com/watch?v=aJEq7H4Wf6U
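For anyone curious what "correcting" actually means here, a toy single-error-correcting Hamming(7,4) code illustrates the idea. To be clear, this is NOT the actual DDR5 scheme (on-die ECC uses a wider code over big internal words, and side-band ECC adds check bits per transfer); it's just a minimal conceptual sketch:

```python
# Toy Hamming(7,4): 4 data bits + 3 parity bits, corrects any single flipped
# bit. Conceptual illustration only, not the real DDR5 on-die ECC code.

def hamming74_encode(d):
    # d: list of 4 data bits; returns 7-bit codeword (positions 1..7,
    # parity bits at positions 1, 2, 4)
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # c: 7-bit codeword; recomputes parity, flips the bad bit if any,
    # and returns the 4 recovered data bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1  # a single bit flips in the array
assert hamming74_correct(code) == word  # ...and ECC recovers the data
```

The "portion of errors" caveat is visible even in the toy: flip two bits and a code like this mis-corrects, which is part of why side-band ECC (with detection reported up to the host) is still what you want if you actually care.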
|
# ? Oct 26, 2021 18:49 |