BlankSystemDaemon
Mar 13, 2009



BobHoward posted:

On the state of macOS: iOS already supports it, and that means macOS can too. iOS began life as a fork of macOS, and both still build a lot of components from the same source code. That includes the XNU kernel. While they likely haven't turned on the code paths for AMP support in Intel Mac XNU kernel builds, obviously they've got a relatively easy path to get there for ARM Macs. (AMP = asymmetric multiprocessing, Apple's chosen terminology for this)

The open source world can technically benefit, as XNU is an open-source kernel. The actual code probably isn't too useful, though, due to a mix of license incompatibility and just being too different. My understanding is that the Linux scheduler algorithm isn't very much like anything else, which would likely make it difficult to apply AMP modifications from a very different scheduler design.

Even the BSDs (which, frankly, don't matter any more) aren't likely to benefit much. XNU is weird in its own way, it's a highly modified mashup of Mach and BSD kernel code. The last time they synced any of the BSD bits up with a mainline BSD was probably before Mac OS X 10.0 in 2001, but more importantly the XNU scheduler is descended from Mach rather than BSD. (The mashup is roughly Mach for scheduler + VM, BSD for traditional UNIX syscalls, and custom NeXT/Apple bits for I/O.)
macOS and iOS are not the same codebase, although they certainly share some things like the MAC Framework (it's one of the things that make jailbreaking so difficult; it's also in FreeBSD, which is what Robert Watson originally developed it for). The XNU kernel is a hybrid kernel exactly because it contains BSD bits, specifically taken from FreeBSD and only sporadically updated since - the bits Apple's engineers took include the VFS, the netstack, the process model, and, as you point out, enough system calls to implement a Unix-like CLI (via Terminal.app) plus a mix of userland utilities for that CLI.

The problem with it is that while the source is published, it's only published very sporadically (i.e. no schedule and no announcements).
Nor are there any scheduler code changes newer than Mach, or instances of the word 'AMP', that I've been able to find in a brief look through the codebase with cscope. Plus, the published code hasn't been modified since 2018.

Scheduler difference-wise, it doesn't really matter how different the codebases are - a study comparing CFS with ULE found that, by and large, there's no difference, despite the fact that CFS is highly heuristics-based whereas ULE is just very optimized (perhaps ULE has a slight advantage overall, and a bigger advantage in certain benchmarks, but the difference probably isn't enough to make someone choose one over the other).

As for taking the source code, that's not really a problem. The only requirements are that you can't hold Apple liable, you can't derive trademarks from the code, and you must retain the copyright and license notices, state your changes, and disclose the source code.
FreeBSD at least would have no problems with any of those, and indeed already contains some code from Apple.

Incidentally, about Mac OS X, someone named Lucy did a very interesting talk at 24C3:
https://www.youtube.com/watch?v=-7GMHB3Plc8
I'm a FreeBSD contributor who's been threatened with a commit bit more than once, and I've been using it since 2000

BlankSystemDaemon fucked around with this message at 09:42 on Jul 5, 2020

FunOne
Aug 20, 2000
I am a slimey vat of concentrated stupidity

Fun Shoe
The cheat for phones is that you can disable the big cores unless the screen is on or the phone is charging. Can't really do that on a desktop or server.
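(On Linux-based phones that really is little more than a hotplug policy. A minimal sketch, assuming the big cluster is CPUs 4-7; the sysfs path is the real kernel interface, but the core numbering and the screen/charging callback are made up for illustration.)

code:
/* Rough sketch of the "phone cheat": take the big cores offline via sysfs
   whenever the screen is off and the phone isn't charging.
   CPUs 4-7 as the big cluster is an assumption for this example. */
#include <stdio.h>

static void set_core_online(int cpu, int online)
{
    char path[96];
    snprintf(path, sizeof path, "/sys/devices/system/cpu/cpu%d/online", cpu);
    FILE *f = fopen(path, "w");
    if (f) {
        fputc(online ? '1' : '0', f);
        fclose(f);
    }
}

void on_power_state_change(int screen_on, int charging)
{
    int want_big_cores = screen_on || charging;
    for (int cpu = 4; cpu <= 7; cpu++)   /* assumed big-core IDs */
        set_core_online(cpu, want_big_cores);
}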

SwissArmyDruid
Feb 14, 2014

by sebmojo
Context provided by quoting my post from the AMD thread:

SwissArmyDruid posted:

Completely and absolutely correct use of your 64 Cores and 128 Threads

Sped up 5x from realtime to achieve sync.

Hmmm.... maybe we need to get the Intel guys in on this. Larrabee or Xeon Phi could do better resolution with a hojillion threads...

The Intel camp appears to have fired back. Please be sure to unmute for best experience.

Unfortunately, I think this is faked based on the fonts of the task manager? Alas.

I believe it was disclosed that the original AMD one was faked as well.

SwissArmyDruid fucked around with this message at 07:38 on Jul 6, 2020

JawnV6
Jul 4, 2004

So hot ...

D. Ebdrup posted:

The biggest problem with heterogeneous CPU cores is that the schedulers for every single OS that wants to support this have to grow support for tracking energy use / scheduler quanta, and also need an algorithm for when something should be moved around.
This isn't strictly a problem with heterogeneous compute. Even on homogeneous systems, single-threaded perf is going to be limited by a hot spot on the die, and one of the ideas that will never die is 'core hopping.' See, you could squeeze out some more compute in the same thermal headroom, if only you could C6-save off your context and restore it back onto another core. Chug until you're going to throttle, hop over to another core. Say it takes, gosh, I dunno, pick a number out of the air, 125 ms of downtime - how do you tune a HW scheduler to hop?
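(A toy break-even calculation for that tuning question, using the 125 ms figure above and clock speeds invented purely for illustration - none of these numbers come from a real part.)

code:
/* Toy break-even estimate for core hopping: the hop costs ~125 ms of dead
   time, so it only pays off if the cooler core sustains a higher clock long
   enough to win back the cycles lost during the hop. */
#include <stdio.h>

int main(void)
{
    const double hop_cost_s    = 0.125; /* context save/restore downtime */
    const double throttled_ghz = 3.6;   /* hot core, thermally throttled  */
    const double cool_core_ghz = 4.4;   /* cool core right after the hop  */

    double lost_gcycles = hop_cost_s * throttled_ghz;     /* given up during the hop */
    double gain_per_sec = cool_core_ghz - throttled_ghz;  /* extra Gcycles/s after   */

    printf("hop pays off after ~%.2f s of residency on the new core\n",
           lost_gcycles / gain_per_sec);
    return 0;
}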

BobHoward posted:

they can make sure that both support exactly the same ISA, which is what you really want.
I dunno if I'd go that far. The sticky wicket on the x86 side is the super wide AVX512, you really really don't want the Atom to have any clue what to do with that opcode, it has to call out for the big one.

FunOne posted:

The cheat for phones is that you can disable the big cores unless the screen is on or the phone is charging. Can't really do that on a desktop or server.
On a server, the screen is never on :eng101:


Anyway I read through the Lakefield thing and just came in here to laugh at the fact that Yet Another Naming Scheme is generating "4+1", "4+2", etc. Good job y'all. Is it up to 3 internally? "2+4+2" lmao

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

JawnV6 posted:

I dunno if I'd go that far. The sticky wicket on the x86 side is the super wide AVX512, you really really don't want the Atom to have any clue what to do with that opcode, it has to call out for the big one.

IMO that's a problem for big.LITTLE x86. Sure, you don't want the Atom core to have the execution unit required for 512-bit wide execution, but you could at least make sure that if a process ever attempts to run AVX512 on it, no SIGILL kill takes place.

That's the approach on the ARM side: ARM's wide vector ISA, SVE, is designed with semantics which explicitly support CPU microarchitectures that emulate full width execution with multiple passes through a narrower execution unit. If you want to have your big core support big vectors, you can do that without giving up on identical ISA support and without forcing the little core to be bloated and power hungry.
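(A minimal sketch of what that looks like from the software side, using the ARM SVE C intrinsics: the loop never hard-codes a vector width, so the same binary runs on a little core with narrow vectors and a big core with wide ones.)

code:
/* Vector-length-agnostic SAXPY using SVE intrinsics. The loop asks the
   hardware how many 32-bit lanes it has (svcntw) instead of baking the
   width into the code, so 128-bit and 512-bit implementations both work. */
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

void saxpy_sve(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32((uint64_t)i, (uint64_t)n); /* tail predicate */
        svfloat32_t vx = svld1_f32(pg, &x[i]);
        svfloat32_t vy = svld1_f32(pg, &y[i]);
        vy = svmla_f32_m(pg, vy, vx, svdup_n_f32(a)); /* y += a * x */
        svst1_f32(pg, &y[i], vy);
    }
}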

Cygni
Nov 12, 2005

raring to post

Intel released a bunch of stuff on TB4/USB4:

https://www.anandtech.com/show/15902/intel-thunderbolt-4-update-controllers-and-tiger-lake-in-2020

So thinkin about this, specifically the portion about DMA protection, it seems likely that TB4 itself will be exclusive to Intel. I guess that means Macs will probably replace their TB3 ports with USB4 40Gb/s ports and call it a day, since it is backwards compatible with TB3.

Really seems like the only things Intel is offering over a good USB4 40G implementation are certifications/mandatory features?

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

Cygni posted:

Intel released a bunch of stuff on TB4/USB4:

https://www.anandtech.com/show/15902/intel-thunderbolt-4-update-controllers-and-tiger-lake-in-2020

So thinkin about this, specifically the portion about DMA protection, it seems likely that TB4 itself will be exclusive to Intel. I guess that means Macs will probably replace their TB3 ports with USB4 40Gb/s ports and call it a day, since it is backwards compatible with TB3.

Really seems like the only things Intel is offering over a good USB4 40G implementation are certifications/mandatory features?

Maybe. Possibly. But also Apple released a big thing today about how they “partnered with Intel to make thunderbolt a decade ago” and would be committing to supporting it on Apple Silicon. That could mean TB3 or it could mean TB4, I doubt they’d keep the TB branding if it remained a generation behind, the whole point of TB4 seems to be to raise the uniformity and standards and whatnot.

Also Intel apparently won’t be collecting any royalties or licenses, and there doesn’t appear to be a requirement for hardware that only Intel can produce so.... :shrug:

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Frankly, universal feature requirements are by themselves a pretty nice step forward, instead of the current situation of grab-bag features that has everyone running to the fine print on each different model's spec sheet to figure out what the gently caress those ports actually do.

The bump on internal PCIe speeds is also nice, if not really needed quite yet for most things.

What I'd also like to know is if it'll let laptops put USB4/TB4 ports on both sides of the body instead of usually just the one closer to the chipset.

Cygni
Nov 12, 2005

raring to post

Ok Comboomer posted:

Maybe. Possibly. But also Apple released a big thing today about how they “partnered with Intel to make thunderbolt a decade ago” and would be committing to supporting it on Apple Silicon. That could mean TB3 or it could mean TB4, I doubt they’d keep the TB branding if it remained a generation behind, the whole point of TB4 seems to be to raise the uniformity and standards and whatnot.

Also Intel apparently won’t be collecting any royalties or licenses, and there doesn’t appear to be a requirement for hardware that only Intel can produce so.... :shrug:

Huh, I didn't see that Apple announcement. I guess they have to say it considering the Tiger Lake laptops they are about to announce have TB4 included, and would be orphaned before launch if they weren't gonna do it.

The hardware requirement is the DMA portion, although apparently Intel backtracked and said it was not VT-d REQUIRED... but I'm not entirely sure how else you would do it without it? But I'm a dumbass so!

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Cygni posted:

Intel released a bunch of stuff on TB4/USB4:

https://www.anandtech.com/show/15902/intel-thunderbolt-4-update-controllers-and-tiger-lake-in-2020

So thinkin about this, specifically the portion about DMA protection, it seems likely that TB4 itself will be exclusive to Intel. I guess that means Macs will probably replace their TB3 ports with USB4 40Gb/s ports and call it a day, since it is backwards compatible with TB3.

Really seems like the only things Intel is offering over a good USB4 40G implementation are certifications/mandatory features?

FYI, I don't think TB4 and USB4 can be used interchangeably: while the connector will be the same and there will be some interoperability (since USB4 incorporates TB3 into the standard), there are still changes coming in TB4 that won't be available in USB4.

So if Apple can incorporate it into their ARM Macs, they'd probably still prefer to, so as to maintain the premium image associated with them. Or they'll just fork the USB-C connector for their own purposes :laugh:

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Cygni posted:

The hardware requirement is the DMA portion, although apparently Intel backtracked and said it was not VT-d REQUIRED... but I'm not entirely sure how else you would do it without it? But I'm a dumbass so!

All it needs is some kind of IOMMU sitting between the Thunderbolt interface and memory that can remap what addresses the hardware sees. That abstraction means the OS can lock out sections of memory where it doesn't want random external hardware poking around, just by not mapping those physical addresses to any device-visible addresses.

VT-d is Intel's brand name for that virtualization layer. AMD has the same basic functionality under the name "AMD-Vi". Apple will come up with their own solution.
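(Conceptually it's just an address-translation table that the OS controls and the device cannot bypass. A toy model of the lookup - not any real kernel's API.)

code:
/* Toy model of IOMMU-based DMA protection: the Thunderbolt device only ever
   sees "IO virtual addresses", and the OS decides which physical pages those
   map to. Anything outside the table simply doesn't exist to the device. */
#include <stdint.h>
#include <stdbool.h>

#define IOMMU_ENTRIES 64
#define PAGE_SIZE     4096u

struct iommu_entry {
    uint64_t iova;   /* address the device sees              */
    uint64_t phys;   /* physical page the OS chose to expose */
    bool     valid;
};

static struct iommu_entry table[IOMMU_ENTRIES];

/* Translate a device DMA address; false means fault, the DMA is blocked. */
bool iommu_translate(uint64_t iova, uint64_t *phys_out)
{
    uint64_t page = iova & ~(uint64_t)(PAGE_SIZE - 1);
    uint64_t off  = iova & (PAGE_SIZE - 1);

    for (int i = 0; i < IOMMU_ENTRIES; i++) {
        if (table[i].valid && table[i].iova == page) {
            *phys_out = table[i].phys + off;
            return true;
        }
    }
    return false; /* unmapped: the device simply can't reach that memory */
}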

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Cygni posted:

Intel released a bunch of stuff on TB4/USB4:

https://www.anandtech.com/show/15902/intel-thunderbolt-4-update-controllers-and-tiger-lake-in-2020

So thinkin about this, specifically the portion about DMA protection, it seems likely that TB4 itself will be exclusive to Intel. I guess that means Macs will probably replace their TB3 ports with USB4 40Gb/s ports and call it a day, since it is backwards compatible with TB3.

Really seems like the only things Intel is offering over a good USB4 40G implementation are certifications/mandatory features?

The updates to AT's article after AT reached out to the various parties make it clear that "required to support Intel VT-d" was the press release writer repping the brand rather than a hard requirement for Intel's IOMMU and no other. This makes sense as VT-d is a CPU feature that isn't even tightly integrated with TB4.

Intel clarification posted:

Thunderbolt is open to non-Intel-based systems. Like any other system, devices must pass Thunderbolt certification and end-to-end testing conducted by third-party labs. Thunderbolt 4 requirements include Intel VT-d based or an equivalent DMA protection technology that provides IO virtualization (often referred to as IO Memory Management Unit or IOMMU), as well as OS implementation support. If the equivalent technology supports prevention against physical attacks, then that should meet the requirement.

e: f,b guess this was redundant with other posts

BlankSystemDaemon
Mar 13, 2009



I'm just waiting for the hardware privilege checks in the IOMMU to be found lacking, as is/was the case with the privilege checking around the branch predictor and speculative execution, which led to Meltdown and Spectre.

Cygni
Nov 12, 2005

raring to post

looks like Tiger Lake (and Willow Cove cores, Xe Graphics, and PCIe 4 for Intel by default) will launch on September 2nd

https://videocardz.com/newz/intel-has-something-big-to-share-on-september-2nd

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Good for the laptop market, but wake me up when Intel has something interesting to talk about on the desktop again.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
I hope Lenovo hurries up and comes up with some Tiger Lake poo poo, I really need an update :argh:

NewFatMike
Jun 11, 2015

Does this signal the end of multiple whacky processes being the same Core generation?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

NewFatMike posted:

Does this signal the end of multiple whacky processes being the same Core generation?

Nope, they're going to continue cranking out 14nm on the Desktop through 2022.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
I've been trying to keep tabs on the arch/product segment/generation of processors that will start supporting 2nd gen Optane Persistent Memory, but all the 'Lake names keep making it hard to remember - which one was it again, and is it hopefully not another Skylake?

Cygni
Nov 12, 2005

raring to post

Skylake Cores - Skylake/Kaby Lake/Coffee Lake/Cascade Lake/Cooper Lake/Whiskey Lake/Comet Lake (some variations in compute units and stuff but broadly the same family)
Palm Cove core - Cannon Lake (mostly still Skylake, shrunk to 10nm)
Sunny Cove core - Ice Lake/Lakefield
Willow Cove core - Tiger Lake (10nm) / Rocket Lake (14nm)
Golden Cove core - ? Sapphire Rapids? Alder Lake?

Cygni fucked around with this message at 22:22 on Jul 15, 2020

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
Which one does Cooper Lake fall under

Edit: Ah, here we go, this is what I was looking at: https://www.intel.com/content/www/us/en/products/docs/processors/xeon/3rd-gen-xeon-scalable-processors-brief.html

Just wondering what the underlying cores in this architecture are, how they talk to each other, etc.

Sidesaddle Cavalry fucked around with this message at 00:33 on Jul 16, 2020

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


I usually dislike Linus Tech Tips but this is a pretty funny watch:

https://www.youtube.com/watch?v=Skry6cKyz50

It's about Intel locking XMP profiles to the Z-series chipsets in the current generation:



The lock is set in the BIOS so the motherboard makers can't do anything about it. The reason for it is "well, XMP is an overclocking feature, so you need a Z-series overclocking chipset to enable it" even though the memory controller is on the CPU itself and the chipset has nothing to do with it.

This is yet another anti-consumer thing Intel has done to their product lines and it's pretty fuckin stupid.

orcane
Jun 13, 2012

Fun Shoe
Is that even new? I thought Intel has done that for ages (locking RAM speeds to "supported" JEDEC speeds outside of Z-series chipsets)?

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


It was always an option for motherboard designers to enable if they wanted to but now you must have a Z chipset.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

orcane posted:

Is that even new? I thought Intel has done that for ages (locking RAM speeds to "supported" JEDEC speeds outside of Z-series chipsets)?

it's not new, and not quite accurate (you can run memory up to the rated limit of the processor, just not at arbitrary speeds beyond that like on Z-series - the limit is 2933 for the current Z490 generation, I believe).

it's sort of come to prominence now because AMD is close to a lot of the lower-tier processors due to the locked clocks, and memory overclocking is something that would push Intel ahead another couple percent, but Linus is being deceptive here in presenting this as some kind of "new" lock - you have never been able to exceed official DRAM speeds on a non-Z board.

As we enter year 4 of Skylake refreshes, it's really just time for the overclocking limits and memory clock limits to die. Unlock memory clocks across the whole lineup on all boards (or maybe keep like one or two for "business" models), unlock clocks on all processors on Z-series boards.

Paul MaudDib fucked around with this message at 19:42 on Jul 22, 2020

orcane
Jun 13, 2012

Fun Shoe
Yeah I meant the CPU's supported max. RAM speed.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


Paul MaudDib posted:

it's not new, and not quite accurate (you can run memory up to the rated limit of the processor, just not at arbitrary speeds beyond that like on Z-series - the limit is 2933 for the current Z490 generation, I believe).

it's sort of come to prominence now because AMD is close to a lot of the lower-tier processors due to the locked clocks, and memory overclocking is something that would push Intel ahead another couple percent, but Linus is being deceptive here in presenting this as some kind of "new" lock - you have never been able to exceed official DRAM speeds on a non-Z board.

As we enter year 4 of Skylake refreshes, it's really just time for the overclocking limits and memory clock limits to die. Unlock memory clocks across the whole lineup on all boards (or maybe keep like one or two for "business" models), unlock clocks on all processors on Z-series boards.

Part of Linus' rant here is that AMD doesn't do any of this limiting at all based on chipset (or seemingly much at all) and overall the rant is "stop segmenting this poo poo, it's dumb now" and he's right on that one.

BlankSystemDaemon
Mar 13, 2009



Don't worry, it's a race to the bottom; AMD will begin segmenting as soon as they think they can.
Case in point, they've already dropped ECC support from many lower-end chips.

movax
Aug 30, 2008

Number19 posted:

I usually dislike Linus Tech Tips but this is a pretty funny watch:

https://www.youtube.com/watch?v=Skry6cKyz50

It's about Intel locking XMP profiles to the Z-series chipsets in the current generation:



The lock is set in the BIOS so the motherboard makers can't do anything about it. The reason for it is "well, XMP is an overclocking feature, so you need a Z-series overclocking chipset to enable it" even though the memory controller is on the CPU itself and the chipset has nothing to do with it.

This is yet another anti-consumer thing Intel has done to their product lines and it's pretty fuckin stupid.

:hmmyes: this is what I should do whilst my product line is under assault from all sides

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
so you buy a locked CPU, and decide to pair it with an H410, B460, or H470 motherboard, since you don't need the overclocking capabilities of a Z490

except you're leaving performance on the table because your memory speeds are capped/can't be tuned as tightly

so it pushes you towards wanting a Z490 board instead

except pairing a locked CPU with an overclocking board is dumb

but getting both an unlocked CPU and an overclocking board is also that much more expensive

but kneecapping your memory by using a non-Z490 chipset is also dumb

is that about right? seems bad

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

gradenko_2000 posted:

so you buy a locked CPU, and decide to pair it with an H410, B460, or H470 motherboard, since you don't need the overclocking capabilities of a Z490

except you're leaving performance on the table because your memory speeds are capped/can't be tuned as tightly

so it pushes you towards wanting a Z490 board instead

except pairing a locked CPU with an overclocking board is dumb

but getting both an unlocked CPU and an overclocking board is also that much more expensive

but kneecapping your memory by using a non-Z490 chipset is also dumb

is that about right? seems bad

yeah, intel is suffering from the paradox of choice, even if you found one of those combinations to perform acceptably... they are rubbing it in your face that you are missing out on something. AMD gives you a fully enabled system, where overclocking is largely pointless and memory overclocking is necessary to even catch up to a locked Intel configuration, but it's "fully enabled", isn't that just mentally easier?

it is the same thing as why the "hyperthreading DLC" produced a backlash and the 4700U or 9700K does not. Give people a choice and they hate it.

Intel persisting in over-segmenting their lineup in the face of an incredibly competitive AMD (especially in the lower-end segments where this becomes a price+performance concern) is loving stupid but on the other hand it's pretty easy to talk yourself into circles like that on all kinds of products, and the paradox of choice is the term for that.

Paul MaudDib fucked around with this message at 03:33 on Jul 23, 2020

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

movax posted:

:hmmyes: this is what I should do whilst my product line is under assault from all sides

STILL THE FASTEST IN SINGLE THREADED PERFORMANCE! :shepface:

Encrypted
Feb 25, 2016

Also lol @ their lack of ECC support.

Let's have 32GB+ of RAM but no ECC - just buy our Xeons if you want ECC memory to go with our gob of cores.

Rollie Fingers
Jul 28, 2002

I'm a VFX artist and Intel's behaviour over the last decade has made sure I'm sticking with AMD as long as their CPUs are within ~15% of the performance and have similar or better power requirements.

The fact Intel stuck with 4c/8t and 6c/12t as the only affordable options for well over a decade has made me despise them. The higher core-count processors were prohibitively expensive and our work relies heavily on multithreading. After a decade of using 2 or 4 core processors, I bought a 5820K in 2015 and still found it lacking for multithreading. I couldn't afford a Xeon so had to suck it up. Being able to buy a 12 (or more) core processor for my home workstation seemed like a fantasy as long as AMD weren't a threat to Intel.

I now have a 3950X and it's made me bitter that I didn't have a 12 or 16 core processor from Intel available at an affordable price in the past. I could have saved quite a bit of money over the years by not using render farms. Sorry for the rant, but most artists I know were jumping for joy when AMD's 12+ core processors were shown to wipe the floor with Intel's offerings. Everyone I know in the industry that's upgraded their machines has a 3900X or 3950X.

Carecat
Apr 27, 2004

Buglord
Why do we even need to do "warranty breaking" XMP overclocking of RAM anyway? We had 3200 in 2014, how come poo poo doesn't just work at 3600+ in 2020? Is it just that DDR5 is taking kind of a long time to arrive?

Carecat fucked around with this message at 12:33 on Jul 23, 2020

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Encrypted posted:

Also lol @ their lack of ECC support.

Lets have 32GB+ of ram but no ecc, just buy our xeons if you want ecc memory to go with our gob of cores.
ECC pisses me off. Why the gently caress is no one making modules with XMP profiles? If not that, JEDEC, get the gently caress cracking on higher frequencies!

BlankSystemDaemon
Mar 13, 2009



ECC is confirmed working up to 3200MHz, so what's the issue exactly? Once you go much higher than that, the CAS latency timings start to become a problem for memory-intensive workloads which would otherwise benefit from the higher bandwidth, if memory serves.
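(The back-of-the-envelope math behind that: first-word latency in nanoseconds is roughly CL x 2000 / data rate in MT/s, which is why raw frequency alone doesn't tell the whole story. The timings below are typical retail numbers picked for illustration, not claims about any specific modules.)

code:
/* First-word latency in ns = CL * 2000 / data rate (MT/s). */
#include <stdio.h>

int main(void)
{
    struct { int mts; int cl; } kits[] = {
        { 2666, 19 },   /* common ECC UDIMM speed  */
        { 3200, 22 },   /* JEDEC-ish DDR4-3200     */
        { 3600, 18 },   /* tuned XMP kit           */
    };
    for (int i = 0; i < (int)(sizeof kits / sizeof kits[0]); i++) {
        double ns = kits[i].cl * 2000.0 / kits[i].mts;
        printf("DDR4-%d CL%d: ~%.2f ns first-word latency\n",
               kits[i].mts, kits[i].cl, ns);
    }
    return 0;
}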

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Combat Pretzel posted:

ECC pisses me off. Why the gently caress is no one making modules with XMP profiles? If not that, JEDEC, get the gently caress cracking on higher frequencies!

it's my understanding, though I could be wrong, that by DDR5 we're just going to have ECC all the time because the tolerances are so tight that you need to have ECC just to make the thing work

BlankSystemDaemon
Mar 13, 2009



gradenko_2000 posted:

it's my understanding, though I could be wrong, that by DDR5 we're just going to have ECC all the time because the tolerances are so tight that you need to have ECC just to make the thing work
That's been true since ECC was abandoned by the PC clones for being too expensive, though.
Several studies have shown this, and although it's a pity I can't find it, I also have vague memories of some statistics published by Microsoft, based on their system information gathering, showing that the lack of ECC memory accounts for absolutely staggering amounts of lost work over the decades.

EDIT: Here's one more thing from all the way back in 2000, in addition to the five things I already linked.

BlankSystemDaemon fucked around with this message at 14:36 on Jul 23, 2020

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

D. Ebdrup posted:

ECC is confirmed-working up to 3200MHz, so what's the issue exactly?
Is it? Really? Do I need to hunt for specific modules with fabled B-die or some poo poo like that? I'd rather not. The fastest ECC UDIMMs I see are rated 2666MHz. The 3200MHz ones are typically RDIMMs. Guess what I can't use.
