Don Dongington posted:Any OS recommendations for a small form factor desktop pc with a single large capacity drive/maybe an ssd for OS, to store a bunch of media and host a handful of docker containers to manage it + plex?

Don't use shingled drives unless they are free or nearly so. I'm using them, but **only** because they were literally sitting around gathering dust otherwise. Their write performance sucks rear end.
|
|
# ? Jul 12, 2023 15:03 |
|
|
|
IOwnCalculus posted:
Can't disagree with that. I mentioned this earlier, but I have a few TB backed up to Google's super-archive storage via CloudBerry Backup for Windows (https://www.msp360.com/backup/). For my ~3TB it works out to about ~8/mo in storage costs. I had to buy the software originally and pay for an annual maintenance contract that is ~5 bucks, but their support is legit. I'm also a huge fan that everything on my storage bucket is encrypted with keys I control. Access to the storage bucket just yields a massive number of UUID style named files. Obviously, in a restore scenario you're shelling out big bucks to pull that storage back down in read and transfer fees, but I figure that's a "poo poo hit the fan" scenario anyway.
|
# ? Jul 12, 2023 15:04 |
|
Yeah I'm using 16 6TB SMRs I got for free from a local business who didn't know what they were. In a ZFS pool they were catastrophically bad, but for bulk media under ext4+Snapraid they work just fine. Again, they would have been great had they, you know, been a budget option instead of stealthed into existing product lines.
|
# ? Jul 12, 2023 15:06 |
Less Fat Luke posted:Yeah I'm using 16 6TB SMRs I got for free from a local business who didn't know what they were. In a ZFS pool they were catastrophically bad but for bulk media under ext4+Snapraid they work just fine. Again would have been great had they you know, been a budget option instead of stealthed into existing product lines.

A read/write SSD cache can help alleviate the abysmal performance of SMR drives, at least during uploads. They'll still be working constantly in a ZFS/BTRFS array, shuffling bits around with their lovely write speeds, but at least when you're uploading stuff smaller than your cache it'll be done quicker. It obviously increases cost, and as I understand it the write cache can be a risky thing to enable on Synology. That said, if you're just getting like a pair of 250GB drives the price really is not bad.

Edit: I have a 500GB read cache on my Synology NAS, which has 4 SMR 8TB drives. It has noticeably helped speed up reads from the drives, since the heads are constantly busy catching up on needed writes; the NAS can pull from the read cache 80% of the time for requests, which frees up a lot of head time for writes.

Nitrousoxide fucked around with this message at 15:16 on Jul 12, 2023 |
|
# ? Jul 12, 2023 15:11 |
|
It wasn't the performance during daily use that was an issue (which was honestly fine with concurrent uploads limited), it was rebuild events basically destroying the pool performance-wise until the new drive would get failed out. I think it's much better now with the newer ZFS resilver/rebuild strategies that are less random though FWIW.
|
# ? Jul 12, 2023 15:14 |
That Works posted:Whats a shingled drive? New term for me.

Shingled Magnetic Recording (SMR) increases density by overlapping tracks like roof shingles, which means rewriting one track forces a rewrite of the tracks overlapping it; the original plan was for the host OS to manage those zones explicitly with new zone commands. This made sense to everyone, and was largely agreed to be a good idea, because it mirrors how magnetic tape uses SCSI Stream Commands - and incidentally also makes sense because, like magnetic tape, SMR was thought of as a Write Once Read Many type of storage.

All of this, however, turned out to only be available to hyperscalers - neither Seagate nor WD will sell you what's now known as host-managed SMR. Instead, they both submarined (ie. snuck in changes to existing product lines without product change notifications) a variant called drive-managed SMR, where a bunch of firmware on the disk is responsible for controlling a bit of non-volatile flash memory on the disk, which is then responsible for writing everything properly to the disk. Needless to say, when you only have a few hundred MB of flash (maybe half a GB on modern SMR drives, but it's still a tiny amount), any kind of pattern that isn't strictly sequential (such as random I/O, or resilvering a RAID, which looks like random I/O) makes the drive just utterly give up trying to maintain any kind of semblance of performance. So we got the worst of all worlds, and in ways people only discovered after they'd bought the hardware.

Less Fat Luke posted:Yeah I'm using 16 6TB SMRs I got for free from a local business who didn't know what they were. In a ZFS pool they were catastrophically bad but for bulk media under ext4+Snapraid they work just fine. Again would have been great had they you know, been a budget option instead of stealthed into existing product lines.

Less Fat Luke posted:It wasn't the performance during daily use that was an issue (which was honestly fine with concurrent uploads limited), it was rebuild events basically destroying the pool performance-wise until the new drive would get failed out. I think it's much better now with the newer ZFS resilver/rebuild strategies that are less random though FWIW.
|
|
# ? Jul 12, 2023 15:47 |
|
BlankSystemDaemon posted:"It works for me" doesn't work when the "working" part is not having any kind of checksumming or self-healing capabilities, or even the ability to discover silent bit-rot.

Weirdly combative, but SnapRAID is a checksumming, parity and repair tool that has a scrub and certainly detects bitrot!

quote:No, resilver for most RAID implementations (read: not just ZFS) still looks random from the point of view of any particular disk that's being read from - which is what kills the performance of drive-managed SMR.

https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0

This seems like a weird response, we're nerds talking about filesystems, checksumming and SMR-vs-CMR drives lol.

Edit: This is the only bitrot-corrupted file it found in the years of using it, good riddance (and it ended up being bad RAM):
code:
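For anyone wondering what the scrub side of this looks like conceptually, you can fake the detection half (not the repair half, which needs parity) with nothing but coreutils. The paths here are made-up stand-ins, not anything SnapRAID actually uses:

```shell
# Toy version of bitrot *detection* only - a temp dir stands in for a media mount.
MEDIA=$(mktemp -d)
MANIFEST=$(mktemp)
echo "movie data" > "$MEDIA/film.mkv"

# Build a checksum manifest of everything on the "array"...
(cd "$MEDIA" && find . -type f -exec sha256sum {} + > "$MANIFEST")

# ...then a later "scrub" re-hashes everything and flags any mismatch.
(cd "$MEDIA" && sha256sum -c "$MANIFEST")
```

If a file rots, the `-c` pass reports it as FAILED and exits nonzero; the real tools then use parity to rebuild it instead of just complaining.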
Less Fat Luke fucked around with this message at 16:15 on Jul 12, 2023 |
# ? Jul 12, 2023 16:07 |
Thanks yall!
|
|
# ? Jul 12, 2023 16:08 |
|
Speaking of not-SMR drives, looks like Amazon have got the 8TB non-pro ironwolf at 25%/£50 off
|
# ? Jul 12, 2023 16:10 |
|
Anyone seen a good QNAP for plex on sale during all this amazon nonsense? The Synology ones all use AMD now, which doesn't do hardware encoding, and the one QNAP on newegg seems underpowered. Yes yes, unraid, roll your own, etc. I just want to buy one and turn it on and host movies for my friends, thanks
|
# ? Jul 12, 2023 16:32 |
|
Kerbtree posted:Speaking of not-SMR drives, looks like Amazon have got the 8TB non-pro ironwolf at 25%/£50 off

They've also got a Prime Day sale on the WD 8TB Red Plus drives (the CMR ones), down to $123
|
# ? Jul 12, 2023 16:55 |
Less Fat Luke posted:Weirdly combative but SnapRAID is a checksumming, parity and repair tool that has a scrub and certainly detects bitrot! It's nothing I'd trust important files to and ZFS is much more of a production ready system but for a gigantic media server it's worked quite well.

Sequential resilver works by reordering blocks in memory before they're written to the disk being resilvered (to ensure that sequential writes are being performed, as they're faster). The reads from the other disks in the array still follow the regular read rules: least busy leaf vdev (meaning the one with the shortest queue), I/O locality (meaning it won't cross NUMA domains or SAS chassis/adapters unless it has to) and rotational device information (meaning it will prioritize reads from devices with higher RPM). All of that can still result in reads, from the point of view of an individual drive, looking like random I/O.

How does a backup tool (the nomenclature SnapRAID uses) help you on a live filesystem without checksumming like ext4, or is it somehow overlaying on top? I've been reading the source code, and it doesn't seem like it's doing any kind of hash-tree structure, so it presumably stores the checksums alongside whatever inode-/file-like structure it uses? That does get you checksums, but doesn't negate the other classes of things that ZFS protects from (namely phantom writes, misdirected I/O, DMA parity errors, and driver bugs). If it doesn't overlay, it seems not-very-different from something like rsync, which doesn't deal well with live filesystems where files can change while rsync is running - an issue when you're dealing with a lot of data (this is another whole class of issues that zfs send|receive saves you from, because you know that a backup was successful if the snapshot exists on the destination; you can't have a partial backup).
There's also precious little academic research into Cauchy Reed-Solomon, with the only paper I've found being one that focuses on space efficiency. Also, Final Fantasy Spirits Within was a pretty alright movie, as far as I remember - I hope you had a backup! BlankSystemDaemon fucked around with this message at 19:06 on Jul 12, 2023 |
|
# ? Jul 12, 2023 18:58 |
|
SnapRAID is like an offline RAID tool, similar to building parity files using par. It's not at all suitable for live files that are edited in place, but when used for media servers that deal in large static files like shows and movies it's great, as it lets you alter how many parity drives you want at any time and mix and match drive sizes (as long as the parity drives are as large as your biggest data disk). In the example of that movie, you can repair it with SnapRAID just as you could with a failed drive. It does no weird poo poo at all with filesystems, so most users will run MergerFS on top of it to present a combined view to Jellyfin or whatever.

I also didn't say any of the things you're attributing to SnapRAID, I just mentioned that for my NAS ZFS was pretty problematic with SMR drives and I moved to SnapRAID, which has worked great for some years now. I wouldn't recommend it for anything other than exactly what it's suited for (large, static files), so I have a smaller ZFS setup for things like source repos, family photos, etc.

I'm not going to recreate my 16-drive ZFS setup to test it, but the linear resilvering definitely helped speed up SMR rebuilds enough that it wouldn't kick drives out of the array. Moving to SnapRAID has worked fine for my needs, plus now I don't need to replace all of the drives at once (or technically 8 at once, since it was two RAIDZ2 vdevs) when I run out of space.

BlankSystemDaemon posted:Also, Final Fantasy Spirits Within was a pretty alright movie, as far as I remember

Less Fat Luke fucked around with this message at 20:05 on Jul 12, 2023 |
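For reference, a minimal snapraid.conf for that kind of setup looks something like this (mount points are made up, check the SnapRAID manual against your actual layout):

```
# one parity file per parity drive; parity disks must be >= your biggest data disk
parity /mnt/parity1/snapraid.parity

# content files hold the checksums/metadata; keep copies on several disks
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
content /mnt/disk2/.snapraid.content

# the data disks being protected
data d1 /mnt/disk1/
data d2 /mnt/disk2/

exclude /tmp/
```

Then it's `snapraid sync` after adding files and `snapraid scrub` periodically to catch rot.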
# ? Jul 12, 2023 19:56 |
|
A friend of mine has a synology nas and a lot of extra storage laying around that she doesn't know what to do with. Is there a storage heavy grid computing workload she could dedicate it to? Run a waffleimages node?
|
# ? Jul 12, 2023 20:55 |
|
Talorat posted:A friend of mine has a synology nas and a lot of extra storage laying around that she doesn't know what to do with. Is there a storage heavy grid computing workload she could dedicate it to? Run a waffleimages node?

Almost anything you'd want to do would be more limited by internet bandwidth than storage capacity, I'd bet. One thing that does come to mind is running scraper proxies for the Internet Archive: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior. Almost anything else that you could temporarily donate or rent storage space and compute capacity to is going to be tied up with insane cryptocurrency scams and has mostly fallen apart over the years.
|
# ? Jul 12, 2023 21:04 |
Any recs on a cheap CPU / mobo / ram combo for an Unraid NAS? I currently have an intel i3-3330 (Ivy Bridge) from like 2012 and just tried to move it from an older mobo to a used DH77KC intel mobo, but I can't get it to post. RAM and CPU show as compatible for this board and everything, and the board was coming from a functioning NAS. Instead of futzing around with trying another mobo I am leaning towards just updating the mobo, ram and cpu.

I need it to be fairly low power; the Unraid system runs about 8 HDDs and 2 cache drives (one was on a pcie card fitting an M.2 nvme drive). Full ATX or mini is fine format wise. Given that this is only hosting files and running a few of the *arr dockers and Rclone I don't need a ton here. I got a tiny old intel GPU on it for running Tdarr as well, and my Plex server is hosted on another separate computer. So I guess I need a fairly low power setup that is cheap / somewhat old and has a suitable number of SATA ports on the board. Any recs?
|
|
# ? Jul 13, 2023 02:15 |
|
You can get a Supermicro LGA1150 mATX board for $35: https://www.ebay.com/itm/303710536288
An E3-1225v3 to go in it is $10: https://www.ebay.com/itm/174232717481

The Intel stock cooler for the socket should be fine for this, or pretty much any aftermarket cooler. Add your DDR3 from the old system, or go buy some ECC sticks for $40 (https://www.ebay.com/itm/175319505223) and you're good to go.

Regarding getting up to 10 or whatever SATA ports, I'd use an add-in card for that. There are some server boards with onboard SAS controllers which would have that many, but I think the premium you'd pay to get an ATX one would be more than a card.

Eletriarnation fucked around with this message at 03:56 on Jul 13, 2023 |
# ? Jul 13, 2023 03:47 |
Eletriarnation posted:You can get a Supermicro LGA1150 mATX board for $35: https://www.ebay.com/itm/303710536288

Thank you, this is much appreciated. Because I am running into more issues finding compatibility info etc, I am also considering something a bit newer generation. Any thoughts there on a CPU/mobo combo? Was thinking just a cheap current Celeron build or something along those lines, preferably something I can get quickly online given that my NAS is dead in the water until I get this going again.
|
|
# ? Jul 13, 2023 12:03 |
|
That Works posted:Thank you this is much appreciated.

CPU - i3-12100 - https://www.amazon.com/Intel-Core-i3-12100-Quad-core-Processor/dp/B09NPHJLPT
Mobo - ASUS D4-CSM - https://www.amazon.com/ASUS-D4-CSM-Commercial-Motherboard-DisplayPort/dp/B0C3ZM464W
HBA w/Cables - https://www.ebay.com/itm/1554215550...%3ABFBM9OL2yapi
32GB RAM - https://www.amazon.com/Corsair-VENGEANCE-3200MHz-Compatible-Computer/dp/B07RW6Z692

HBA was needed to get all eight drives connected and to keep things cheap. Hope this helps.
|
# ? Jul 14, 2023 14:43 |
Another option is a Zimaboard which has two (external) SATA ports and an (external) PCIe port. If you don't need a ton of drives and a simple mirror zfs raid array is good enough for you this can easily suffice for a NAS. And the base model is only like a hundred bucks. Only a little more than a Pi4 but with an x86-64 chip and way more expansion available.
Nitrousoxide fucked around with this message at 18:05 on Jul 14, 2023 |
|
# ? Jul 14, 2023 18:03 |
Thanks all I appreciate it
|
|
# ? Jul 14, 2023 19:24 |
|
Is there some catch I'm missing with Wasabi? We're looking for an archive Veeam SOBR and initially put it into cold storage on Azure but got blown away by the write operation costs. Wasabi seems ridiculously cheap and we're tossing up between that and just grabbing another box of spinning disks and chucking it at another site.
|
# ? Jul 15, 2023 01:47 |
|
Any reason I shouldn't be SSHing files to/from my Unraid server instead of transferring using a Samba share? Moving between folders is noticeably faster and the speeds seem just as fast, if not faster. I never did get around to trying NFS - should I set that up, too? I'm primarily using a MacBook.
|
# ? Jul 15, 2023 05:46 |
|
Nitrousoxide posted:A read/write ssd cache can help alleviate the abysmal performance on SMR drives. At least during uploads. They'll still be working constantly in a ZFS/BTRFS array shuffling bits around with their lovely write speeds. But at least when you are wanting to upload stuff smaller in size than your cache it'll be done quicker. Though it obviously increases cost, and as I understand it with Synology, the write cache can be a risky thing to enable. That said, if you're just getting like a pair of 250gb drives the price really is not bad.

A lot of caveats here:

* You will burn up any consumer SSD as a write cache for a NAS. Been there, done that. You need enterprise U.2/SAS/PCIe SSDs with endurance ratings in DWPD (Drive Writes per Day) and a 5 year warranty. A consumer SSD can be written for 100 - 200x its capacity before failing; enterprise SSD endurance ratings start at around 2000x the capacity and go up from there.
* You can avoid both of those by sticking to a read cache, but that does almost nothing to help write speeds on shingles, since writing a single track requires one read and two writes to "layer" the changes. Agree with your edit though, it does help significantly with contention.

If you're acquiring linux ISOs via usenet or torrents to store on your NAS I suggest using a SSD where the downloads 'land' and using the postprocessing scripting of your downloader to copy them to the main array later. One long write is a lot better than all the random writes it has to do when downloading, and if that SSD dies you only lose things you were in the middle of downloading anyway.

BlankSystemDaemon posted:All of that can still result in reads, from the point of view of an individual drive, looking like random I/O.
I think we've got two things conflated here: spinning rust in general hates random IO because it has to physically move objects with mass to seek, and SMR is somehow even worse because writes on those work the same way as flash: append-only blocks that have to be erased before they can be rewritten, except now it's done at the speed of physical media instead of solid state. It's a lot of words but the tl;dr is SMR is complete dogshit and if you get one that wasn't marked as such demand a refund.

Why is ZFS resilvering random IO though? It should be contiguous, with the only seeks being to hop over empty blocks. I'm missing the benefit of having a given block of data not being at the same position on all the drives involved.

E: oh NVM lmao it's so awful, why on earth would anyone think it was a good idea? https://blogs.oracle.com/solaris/post/sequential-resilvering

Premature optimization: not even once.

Harik fucked around with this message at 07:28 on Jul 15, 2023 |
# ? Jul 15, 2023 06:59 |
|
My old NAS is getting fairly long in the tooth, being cobbled together from a recycled netgear readyNAS motherboard (built circa 2011, picked it up in 2018) and a pair of old xeon x3450s. It worked out pretty well, but the ancient xeons' 8GB RAM limit is preventing me from offloading much onto it aside from serving files, so my desktop ends up running all my in-house containers. Not a great setup.

Looking at something like an older epyc system (https://www.ebay.com/itm/175307460477 / https://www.supermicro.com/en/products/motherboard/H11SSL-i). But I'm curious if anyone else has run across other recycled gear that's a good fit for a NAS + VM host.

Also, has anyone used PCIe U.2 adapters, such as https://www.amazon.com/StarTech-com-U-2-PCIe-Adapter-PEX4SFF8639/dp/B072JK2XLC ? I've had good luck with PCIe-NVMe adapters so I'm hoping it's a similar thing where it just brings out the signal lines and lets the drive do whatever.
|
# ? Jul 15, 2023 07:59 |
Harik posted:A lot of caveats here:

I can't open that link, but you should be aware that OracleZFS and OpenZFS are not the same anymore; not only did they start diverging back in 2009, but at this point way more than 50% of the shared code has been rewritten, and a lot more code has been added. All of which is to say that however it works on OracleZFS, it's irrelevant unless you're one of the unfortunate folks that're locked into funding Larry Ellison's research into sucking blood out of young people to inject into himself.
|
|
# ? Jul 15, 2023 09:51 |
|
Do the common SMR drives even use flash cache? I thought they used a chunk of the platters as CMR and staged data there (so your best case performance was like a normal spinning disk), but a flash/SMR hybrid would probably make sense for certain workloads.
|
# ? Jul 15, 2023 11:34 |
Computer viking posted:Do the common SMR drives even use flash cache? I thought they used a chunk of the platters as CMR and staged data there (so your best case performance was like a normal spinning disk), but a flash/SMR hybrid would probably make sense for certain workloads.
|
|
# ? Jul 15, 2023 11:36 |
|
I'm looking to buy (at least) a pair of extra hard drives to increase my storage. One would go into my PC, the other one into my NAS for backups. They would primarily contain videos and pictures. (All my programs and Steam library and poo poo are on SSDs.) It looks like 16TB is about the point where you get diminishing returns in the GBs you get for your money, is that correct? Are there any recommendations for the most reliable brands and/or models? The speed should be reasonable, but as they would just be used to read/write said videos and pictures, and as backup, they don't have to be speed demons either.
|
# ? Jul 15, 2023 11:56 |
|
That Works posted:Any recs on a cheap CPU / mobo / ram combo for an Unraid NAS?

I’ve been eyeing the beelink eq12 and syba 8 bay jbod, haven’t pulled the trigger yet but n100 celeron, 16gb ram, sub $500 all in
|
# ? Jul 15, 2023 13:23 |
e.pilot posted:I’ve been eyeing the beelink eq12 and syba 8 bay jbod, haven’t pulled the trigger yet but n100 celeron, 16gb ram, sub $500 all in

I ended up going with a Celeron G4930 and B365 chipset mobo, 32g of 2666hz ram and cooler for just under $200
|
|
# ? Jul 15, 2023 14:22 |
|
I've been eying that EQ12 with the n305 as a homelab ESXi host to play around with. Nothing strenuous, obviously. But it'd be fine to play with.
|
# ? Jul 15, 2023 14:44 |
|
I have an old laptop (running updated Ubuntu 22.04/gnome) with a small SSD/8GB RAM. I also have about 3TB worth of SATA HDDs with SATA>USB adapter cables + enough USB ports to make do. Everything is on a wired network in the same room. I don’t need a media server or lots of data moved quickly, more like just around 1.3TB of iTunes music/Kindle books/old digital photos that I would like off of my Win 11 desktop, with the future option of transferring my Google Drive/OneDrive (together less than 100GB) storage that rarely synced (most is Kindle books I don’t want to re-download). Is there a goon-approved method of going about this, using just the tools I have on hand? I’m still very much a Linux n00b, but I’m willing (and motivated!) to learn new things and I like to tinker!
|
# ? Jul 15, 2023 16:06 |
|
BlankSystemDaemon posted:Sure, spinning rust hates random I/O, but SMR is still many times worse at it than regular harddrives - because the non-volatile flash that it uses to speed up regular I/O when using a traditional filesystem without RAID is simply masking the terrible performance that SMR has for anything but sequential access.

quote:I can't open that link, but you should be aware that OracleZFS and OpenZFS are not the same anymore; not only did they start diverging back in 2009, but at this point way more than 50% of the shared code has been rewritten, and a lot more code has been added.

quote:In the initial days of ZFS some pointed out that ZFS resilvering was metadata driven and was therefore super fast : after all we only had to resilver data that was in-use compared to traditional storage that has to resilver entire disk even if there is no actual data stored. And indeed on newly created pools ZFS was super fast for resilvering.

Given the description of the problem with OpenZFS, it sounds like they followed Oracle down the wrong path first, then copied their fix.
|
# ? Jul 15, 2023 16:32 |
|
Harik posted:Looking at something like an older epyc system (https://www.ebay.com/itm/175307460477 / https://www.supermicro.com/en/products/motherboard/H11SSL-i). But I'm curious if anyone else has run across other recycled gear that's a good fit for a NAS + VM host.

That's a great platform to build your NAS on. I'd pick an H12 series MB because 1) upgrade path to Milan at some point (20-30% perf uplift, no more weird NUMA) 2) PCIe 4.0 probably won't make a difference but you know, more bandwidth never hurt?
|
# ? Jul 15, 2023 17:53 |
|
Kivi posted:tugm4770 is legit awesome seller, have bought twice from him and the communication and goods have been superb. I bought H12SSL-i and 7302p combo from him, and later upgraded that 7302 to 7443 and both transaction went without hitch. Surprisingly good on power too.

well that's cool, I wasn't expecting someone'd done the exact thing I was thinking of. Cool. I've gotta decide on a few other bits of the build and figure out what my budget for more drives is. Just missed the sale on 18TB red pros damnit, $13/tb.
|
# ? Jul 15, 2023 20:52 |
|
I think I have an explanation for my ZFS checksum errors. It seemed a bit weird that they were perfectly distributed between both drives in the mirror, and that the drives also seem otherwise perfectly happy, which is why I started by checking the memory: E: Looking at it, I'm running four sticks of memory at their XMP speeds on an AMD memory controller, I guess that alone is a bit optimistic. Computer viking fucked around with this message at 03:21 on Jul 16, 2023 |
# ? Jul 16, 2023 03:04 |
|
Computer viking posted:I think I have an explanation for my ZFS checksum errors. It seemed a bit weird that they were perfectly distributed between both drives in the mirror, and that the drives also seem otherwise perfectly happy, which is why I started by checking the memory:

I looked at your pic and started trying to remember if Ryzen supported ECC. That led me to a post on STH saying that it does, "sort of". Anyway, coincidentally, the 4th post mentions getting "wonky errors" with a 3600 in ZFS using XMP profiles. I know you've worked that out anyway, but it's good to have some confirmation.

quote:There is "ECC Support" for the Ryzen CPU's but, the implementation of actual error correcting seems to be up in the air. AMD has not be too forthcoming with what they mean by "ECC Support" with Ryzen. I researched this because of wonky errors I was getting with my ZFS pools that were memory related. I have two Ryzen 7 3700X CPU's with the ASRock Rack X470D series boards doing KVM work and ZFS storage, both of these machines do not use ECC RAM.

https://forums.servethehome.com/index.php?threads/ryzen-w-ddr4-ecc-unbuffered.30673/

YerDa Zabam fucked around with this message at 10:07 on Jul 16, 2023 |
# ? Jul 16, 2023 10:02 |
|
IIRC, really confirmed ECC support happens from the 5xxx series up, as in getting EDAC errors in Linux on the 5xxx ones. If you want WHEA to start complaining, it seems to require a 7xxx. That said, it's nice to have confirmation that the idea that ZFS (or any other filesystem) doesn't need ECC is bullshit.
|
# ? Jul 16, 2023 10:10 |
|
|
|
Combat Pretzel posted:IIRC, really confirmed ECC support happens from the 5xxx series up. As in getting EDAC errors in Linux on the 5xxx ones. If you want WHEA to start complaining, it seems to require a 7xxx.

My partner has had a long series of annoying and fleeting issues with his PCs, so in an effort to eliminate at least one cause, his current desktop is a 5700 with ECC memory. It runs Fedora, and he has actually seen a corrected memory error bubble far enough up that he noticed it. So I can at least confirm that that works.

The file server is my old gaming tower, so it's plain non-ECC memory. One of the sticks was just outright dead; running with that stick alone it still failed within seconds, while the other three seem to be fine.

And yes, of course ECC memory is useful in anything where data you want to keep at any point resides in memory. The reason people are touchy when people talk about ZFS+ECC is that one old article predicting that bad memory would lead ZFS to self-corrupt into nothingness worse than other file systems, which seems to be based on a misunderstanding.
|
# ? Jul 16, 2023 10:48 |