Zorak of Michigan
Jun 10, 2006


IOwnCalculus posted:

Seems highly unusual to use raidz3 for just five drives, personally. Even if you're highly paranoid about drive failures I'd either do raidz2 across all five, raidz2 across four with a hot spare, or a pair of mirrors with a hot spare.

I agree with you about raidz2 over raidz3 for a five-drive vdev, but why would you prefer raidz2 with a hot spare over raidz3? Seems like that's the same amount of capacity but less resilience.

IOwnCalculus
Apr 2, 2003





Zorak of Michigan posted:

I agree with you about raidz2 over raidz3 for a five-drive vdev, but why would you prefer raidz2 with a hot spare over raidz3? Seems like that's the same amount of capacity but less resilience.

Less time spent in a degraded state, since it can start rebuilding immediately upon failure. It also helps down the road if you decide to upgrade by replacing every drive: you can pull the spare and put in a larger drive without putting the array into a degraded state before you start the rebuild.

But personally, I'd either shoot for more capacity (five disk raidz) or minimum drive count (three disk raidz) with offsite backup of critical data.

Computer viking
May 30, 2011
Now with less breakage.

Also, if you're doing a five-disk raidz3, you'd get the same capacity and better performance from two mirrors and a hot spare, though of course traded against a bit higher risk.

BlankSystemDaemon
Mar 13, 2009



IOwnCalculus posted:

Seems highly unusual to use raidz3 for just five drives, personally. Even if you're highly paranoid about drive failures I'd either do raidz2 across all five, raidz2 across four with a hot spare, or a pair of mirrors with a hot spare.

Alternatively, consider just doing raidz across three drives and using the money saved towards an offsite backup.
5-disk raidz3 would be for something where data availability is very very important, to the point that I'd argue you could benefit from doing high-availability failover instead.

Zorak of Michigan posted:

I agree with you about raidz2 over raidz3 for a five-drive vdev, but why would you prefer raidz2 with a hot spare over raidz3? Seems like that's the same amount of capacity but less resilience.
It depends on what you're after; a better option might be to go with a 2d2p1s draid to get distributed hot spares, so that resilvers aren't as slow, if data availability is that important.
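Back-of-envelope capacity comparison (my own rough numbers, ignoring padding, metadata and ashift overhead, so treat it as illustrative only) - the point being that all three layouts land at roughly the same usable space, so the choice is really about rebuild behaviour:

code:

# Rough usable-capacity comparison for five 16 TB disks (illustrative only).
DISK_TB = 16

def raidz_capacity(disks, parity):
    """Usable TB of a raidz vdev: data disks times disk size."""
    return (disks - parity) * DISK_TB

def draid_capacity(children, data, parity, spares):
    """Approximate usable TB of a draid vdev: non-spare children scaled by the data fraction."""
    return (children - spares) * DISK_TB * data / (data + parity)

print("raidz3, 5 disks:          ", raidz_capacity(5, 3), "TB")        # 32 TB
print("raidz2, 4 disks + 1 spare:", raidz_capacity(4, 2), "TB")        # 32 TB
print("draid2:2d:5c:1s:          ", draid_capacity(5, 2, 2, 1), "TB")  # 32.0 TB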

Computer viking posted:

Also, if you're doing a five-disk raidz3, you'd get the same capacity and better performance from two mirrors and a hot spare, though of course traded against a bit higher risk.
Your data availability drops through the loving floor though, as you can lose 2 disks by sheer coincidence and need to go to backups (which of course you have, right?).

EDIT: 5x 16TB disks with a 1-in-10^15 BER and a 250k-hour MTBF (a good real-world estimate), vs 4x the same.



RAID 10 is roughly equivalent to ZFS striped mirrors.
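Napkin math on what that BER spec means when a rebuild has to read the surviving disks end to end (a Poisson approximation assuming independent errors at exactly the quoted rate, and ignoring that ZFS only resilvers allocated blocks, so the real numbers should be lower):

code:

import math

BITS_PER_DISK = 16e12 * 8   # 16 TB expressed in bits
BER = 1e-15                 # unrecoverable read errors per bit, per the spec sheet

def p_ure(disks_read):
    """P(at least one URE) while reading this many full disks."""
    expected_errors = BER * BITS_PER_DISK * disks_read
    return 1 - math.exp(-expected_errors)

print(f"rebuild reading 4 surviving 16 TB disks: {p_ure(4):.0%}")  # ~40%
print(f"rebuild reading 3 surviving 16 TB disks: {p_ure(3):.0%}")  # ~32%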

BlankSystemDaemon fucked around with this message at 02:21 on Jun 12, 2022

Computer viking
May 30, 2011
Now with less breakage.

While obviously worse than a z3, those are not atrocious numbers. There's also the effect of having a hot spare around - ideally, that should shrink the window of vulnerability down to the time it takes to resilver a mirror?

Of course, real-life problems like batch effects and identical aging take those numbers way down, so there's nothing wrong with being more careful. I would at least consider it, though.

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

While obviously worse than a z3, those are not atrocious numbers. There's also the effect of having a hot spare around - ideally, that should shrink the window of vulnerability down to the time it takes to resilver a mirror?

Of course, real-life problems like batch effects and identical aging take those numbers way down, so there's nothing wrong with being more careful. I would at least consider it, though.
A hotspare isn't going to give you more availability, it just means you can automate device replacement.
That requires zfsd or zed though, and even then it probably also needs SES.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

A hotspare isn't going to give you more availability, it just means you can automate device replacement.
That requires zfsd or zed though, and even then it probably also needs SES.

If we consider the chance of losing data from a mirror as "hours spent in single disk operation × chance of failure per hour", wouldn't anything that reliably shortens that window (by immediately starting a resilvering) reduce the overall chance of data loss?

I must admit I haven't ever touched the automated disk replacement tools, though. Maybe on the new server.

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

If we consider the chance of losing data from a mirror as "hours spent in single disk operation × chance of failure per hour", wouldn't anything that reliably shortens that window (by immediately starting a resilvering) reduce the overall chance of data loss?

I must admit I haven't ever touched the automated disk replacement tools, though. Maybe on the new server.
I'm not sure I'd say that, no. Data availability only applies to data that's actually more available than it would be on a single drive.

Automated disk replacement requires a lot of testing for you to be confident that it's going to work, otherwise that hot spare isn't going to do anything for you.
This is analogous to how you need to test your backups by restoring them regularly, so you know how to do it when the poo poo hits the fan - which in turn means you need to do it programmatically and in a well-documented way.

Computer viking
May 30, 2011
Now with less breakage.

This feels like it's more of a question of definitions - in which case I assume you're right. :)

Still, consider a hypothetical. Two identical companies run identical storage hardware with identical raid levels. One has a well tested automatic hot spare system, and one has a weekly "walk around and replace broken disks" routine. Everything else being equal, I would expect the latter to have more cases of the second disk dying and taking out the mirror before the first failed disk was replaced, and thus more downtime (and restores from backups)?
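Rough numbers for that hypothetical, reusing the 250k-hour MTBF from earlier and treating failures as a constant hourly rate (a big simplification; the exposure windows are just guesses on my part):

code:

import math

MTBF_HOURS = 250_000        # per-drive MTBF quoted earlier in the thread
FAIL_RATE = 1 / MTBF_HOURS  # assume a constant per-hour failure rate

def p_second_failure(window_hours):
    """P(the surviving half of a mirror also dies within the exposure window)."""
    return 1 - math.exp(-FAIL_RATE * window_hours)

hot_spare_window = 12            # hours: resilver starts immediately, done same day
weekly_walkaround = 7 * 24 + 12  # hours: up to a week before anyone swaps the disk

print(f"hot spare, ~12 h exposed:     {p_second_failure(hot_spare_window):.4%}")
print(f"weekly check, ~180 h exposed: {p_second_failure(weekly_walkaround):.4%}")

Same shape of risk either way, just a roughly fifteen-times-longer window without the spare.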

Point taken on this not being a "just works" kind of thing, though.

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



I don't quite get it. I deleted all the files and folders off my DS112j, emptied the recycle bins, and it says there is 200GB in use?



I would expect some crud to accumulate over 9 years in the operating system or something like that, but this seems a bit much considering I did a reset with a fresh DSM install (while keeping the data) a couple of days ago.

Do I have any options beyond, idk, deleting and recreating the volume? Would that even do anything? I feel like I'm missing something obvious.

Thanks Ants
May 21, 2004

#essereFerrari


How long did you wait after deleting the files before taking that screenshot?

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Thanks Ants posted:

How long did you wait after deleting the files before taking that screenshot?
Idk, something like 6-8 hours. I had the list of background tasks open and waited for the thing to finish, basically. It was then that I realized I had forgotten that it had been putting everything in the recycle bin. Emptying that took another 3 hours or so of the free space counter increasing. When that stabilized, there was nothing in the share but the #recycle folder, and nothing in that either according to File Station. I don't know which other way to check.

I've had it open now another 8 hours maybe while I slept and nothing has changed.

E: Ugh, recreating the volume with checking for bad sectors enabled is going to be another 30-hour operation. I hate the waiting so much. Good thing the new DS220+ is up and running in the meantime. Solid little thing, pleasantly responsive.

Flipperwaldt fucked around with this message at 12:26 on Jun 13, 2022

Korean Boomhauer
Sep 4, 2008
Thanks for the advice on the drives. Lots to think about. What's the suggested hardware for a NAS if I wanna get Ryzen stuff? I think it gets pretty tricky if I want ECC. Might be easier to go Intel, I think. I decided to let my NAS be a NAS and get a Beelink for hosting services like Plex or whatever else.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Intel gives you ECC on Xeon CPUs only. In the past, the i3 ones had it too, but I think it's disabled on newer models nowadays.

With Ryzen, if a mainboard lists ECC support and you use either a full Ryzen or the Pro version of the APU, you're halfway there. I say halfway because, while ECC works, most BIOSes don't enable error reporting.

BlankSystemDaemon
Mar 13, 2009



ECC isn't just magically going to make everything better.

I posted about it previously here and will add that I think at this point it's pretty well established that the ASRock Rack boards are just about the only Ryzen boards, outside of things like Supermicro and Tyan, that are known to have good ECC support.

Korean Boomhauer
Sep 4, 2008
So in short, don't fret over ECC and just get whatever fits my budget?

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Combat Pretzel posted:

Intel gives you ECC on Xeon CPUs only. In the past, the i3 ones had it too, but I think it's disabled on newer models nowadays.

With Ryzen, if a mainboard lists ECC support and you use either a full Ryzen or the Pro version of the APU, you're halfway there. I say halfway because, while ECC works, most BIOSes don't enable error reporting.

I was going to say this. A lot of Ryzen boards "support" ECC memory but disable ECC to do so, which defeats the purpose if that's something you need. Just be sure to check the spec sheet/manual for any board you're considering, both for that and for XMP compatibility, since Ryzen chips seem to care about fast memory more than Intel ones do. Not sure how big a difference it makes in a NAS, but the difference between XMP off and on with the 3600X in my gaming rig is significant.

I'm torn on the issue, I went from an i5-2500T to a Ryzen 1600AF, and now to a 3900X and while I love the cost to performance ratio and core count of the AMD chips there have been plenty of times where I've wished that I had an iGPU for both troubleshooting boot issues and to free up a slot in my server by getting rid of the GPU in it.

Combat Pretzel posted:

They support ECC just fine, and it's enabled (per chipset register readout), if they specifically mention it. It's just that error reporting doesn't (always?) work.

Pretty much every DDR4 capable board "supports" ECC modules, in that the additional 8 lanes, and therefore the ninth memory chip, just get ignored (since the error checking and correction happens on the memory controller on the CPU).

Ah, this is good to know! I managed to choose RAM that doesn't have XMP support on my current board back before I knew better, and this gives me more of an excuse to replace it with something that does.

Scruff McGruff fucked around with this message at 20:24 on Jun 13, 2022

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
They support ECC just fine, and it's enabled (per chipset register readout), if they specifically mention it. It's just that error reporting doesn't (always?) work.

Pretty much every DDR4 capable board "supports" ECC modules, in that the additional 8 lanes, and therefore the ninth memory chip, just get ignored (since the error checking and correction happens on the memory controller on the CPU).

--edit:
Either way, even if error reporting doesn't work, on a 24/7 NAS appliance that might cache data in RAM for extended amounts of time, depending on what you do, you might still be interested in silent error correction.
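For anyone wondering what those extra check bits actually buy you, here's a toy-scale sketch of the SECDED idea (real DIMMs run a 64+8-bit code in the memory controller; this is the classroom 4+3 Hamming version, purely illustrative):

code:

# Toy Hamming(7,4): 4 data bits + 3 check bits, corrects any single flipped bit.
def encode(d1, d2, d3, d4):
    """Return a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(codeword):
    """Return (data bits, error position or 0) after fixing any single-bit error."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # points at the flipped position (1-based)
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]], syndrome

word = encode(1, 0, 1, 1)
word[4] ^= 1                                      # a stray bit-flip in flight
data, pos = correct(word)
print(data, "- corrected bit at position", pos)   # [1, 0, 1, 1] - corrected bit at position 5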

BlankSystemDaemon
Mar 13, 2009



Korean Boomhauer posted:

So in short, don't fret over ECC and just get whatever fits my budget?
I mean, it depends on the goal.
ZFS has a toggle for doing checksumming in memory, but it's hidden behind a debug option, because Sun's systems were built with ECC to begin with.

Do you need to use the debug option if you're on a system without ECC? Absolutely not. However, ZFS is not the only reason to have ECC memory - a much bigger reason is simply system stability.

There are some studies in my post history about DRAM error rates and how often they result in crashes, which suggest that upwards of 40% of system or program crashes could have been avoided if PC clones hadn't cheaped out by not including ECC in their designs (the cache in the CPU, for example, does feature ECC).

You can find a used Supermicro board for pretty cheap, but even if you go the retail route, ECC memory can be as cheap as regular memory if you happen to find a good deal, and even the markup can be as little as 5-10%.

Combat Pretzel posted:

--edit:
Either way, even if error reporting doesn't work, on a 24/7 NAS appliance that might cache data in RAM for extended amounts of time, depending on what you do, you might still be interested in silent error correction.
Why would data cached in memory ever be written to disk?
If the data on-disk gets modified, the current memory cache will get invalidated either by the VM or simply by aging out of the cache.

It matters for the ZFS asynchronous dirty data buffer, but that's a maximum of 5 seconds long at any point, so it isn't very likely.

BlankSystemDaemon fucked around with this message at 20:30 on Jun 13, 2022

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
370€ for 2x32GB just this weekend.

:shepspends:

BlankSystemDaemon posted:

Why would data cached in memory ever be written to disk?
Corrupted cached data -> Read into app -> Written back to disk

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Corrupted cached data -> Read into app -> Written back to disk
Do you have a particular implementation in mind with this behaviour?

ZFS ARC, irrespective of corruption, won't ever be written back to the disk as it's purely a read cache.
If you're worried about the 5 second asynchronous dirty data buffer, throw a pair of small NVMe SLC SSDs (or a couple of larger MLC SSDs that've been underprovisioned down to a few tens of GB to increase the write endurance) in the pool and set every dataset to sync=always.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
No, not directly. But I'm sure there are some kinds of workloads that fit and stay in the ARC of a decently specced system (i.e. a lot of RAM). If there aren't enough users working the thing, the ARC may not cycle enough.

Say at the end of a day of work, after saving your poo poo to the NAS, chances are high a lot of it stays in the ARC. Overnight, a bit gets knocked over. The morning after, you fire up your productivity application again, pick up where you left off, and load things from your NAS; since it happens to come mostly from the ARC, and it's the last thing you saved, you may be working with corrupted data.

As I said, it's a bunch of hypotheticals which depend on what you do and how much you value data integrity. A video editor may not even notice one stray broken macroblock in his MJPEG stream (then again, that's a streaming workload that probably disqualifies itself, given how huge the datasets are), but it would affect the final output if it happens at the render stage. Somewhere else, an Excel sheet may calculate some bullshit, which you may save back to disk if you don't notice. Someone using the system for iSCSI volumes and installing applications to it might get a stray unexplained crash and gently caress up god knows what.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

No, not directly. But I'm sure there are some kinds of workloads that fit and stay in the ARC of a decently specced system (i.e. a lot of RAM). If there aren't enough users working the thing, the ARC may not cycle enough.

Say at the end of a day of work, after saving your poo poo to the NAS, chances are high a lot of it stays in the ARC. Overnight, a bit gets knocked over. The morning after, you fire up your productivity application again, pick up where you left off, and load things from your NAS; since it happens to come mostly from the ARC, and it's the last thing you saved, you may be working with corrupted data.

As I said, it's a bunch of hypotheticals which depend on what you do and how much you value data integrity. A video editor may not even notice one stray broken macroblock in his MJPEG stream (then again, that's a streaming workload that probably disqualifies itself, given how huge the datasets are), but it would affect the final output if it happens at the render stage. Somewhere else, an Excel sheet may calculate some bullshit, which you may save back to disk if you don't notice. Someone using the system for iSCSI volumes and installing applications to it might get a stray unexplained crash and gently caress up god knows what.
Sure, but ECC is good for much else besides this, so it's never a bad idea to have it if you can get it.

Also, a recent commit in FreeBSD added support for RDMA with RoCEv2 and iWARP for the E810.

Klyith
Aug 3, 2007

GBS Pledge Week

Combat Pretzel posted:

Somewhere else an Excel sheet may calculate some bullshit, which you may save back to disk if you don't notice.

AFAIK most office type docs should catch a single-bit error in the important content when you open the file, because it's all compressed data and will barf on a bit-flip.
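Quick toy demonstration of that (a sketch, not a claim about any particular Office version): .docx/.xlsx files are zip containers, and the zip CRC check trips on a single flipped bit in the compressed stream.

code:

import io
import zipfile
import zlib

# Build a tiny stand-in "document": a zip archive with one deflated member.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("document.xml", "important spreadsheet-ish content " * 100)

data = bytearray(buf.getvalue())
data[len(data) // 2] ^= 0x01        # flip a single bit somewhere in the middle

try:
    with zipfile.ZipFile(io.BytesIO(bytes(data))) as zf:
        zf.read("document.xml")
    print("read OK - the flipped bit happened to land somewhere harmless")
except (zipfile.BadZipFile, zlib.error) as exc:
    print("corruption caught:", exc)  # typically "Bad CRC-32 for file ..."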


Combat Pretzel posted:

As I said, it's a bunch of hypotheticals which depend on what you do and how much you value data integrity.

And putting ECC on your NAS seems like a fool's errand if you don't also have ECC on the PCs where you work on the files. If you care about your data integrity enough to be paranoid about bit-flips, you should be that paranoid everywhere.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Klyith posted:

AFAIK most office type docs should catch a single-bit error in the important content when you open the file, because it's all compressed data and will barf on a bit-flip.

And putting ECC on your NAS seems like a fool's errand if you don't also have ECC on the PCs where you work on the files. If you care about your data integrity enough to be paranoid about bit-flips, you should be that paranoid everywhere.

Like many others I probably don't have anything that really needs ECC. Work products are small files (think MS Office) that are only worked on for periods of weeks to months. Long-term stuff I care about is mostly family media (photos and home movies). Linux ISOs are generally re-downloadable. But I spent $400 on 128 GB of ECC RAM, so I'm going to imagine it's helping me in some way.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Klyith posted:

And putting ECC on your NAS seems like a fool's errand if you don't also have ECC on the PCs where you work on the files. If you care about your data integrity enough to be paranoid about bit-flips, you should be that paranoid everywhere.
Personally I’m running it everywhere. Which is why the unavailability of DDR5 ECC UDIMMs for my new future desktop is pissing me off right now. :(

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Sure, but ECC is good for much else besides this, so it's never a bad idea to have it if you can get it.

Also, a recent commit in FreeBSD added support for RDMA with RoCEv2 and iWARP for the E810.

Oh 100Gbit, that's fun.

I'm still thinking about setting up some 10Gbit at home, and I genuinely don't have the storage or consumers to make use of more. Even at work 10Gbit over ethernet is honestly more than good enough - but it would be neat to play with RDMA just to have some experience with it.

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

And putting ECC on your NAS seems like a fool's errand if you don't also have ECC on the PCs where you work on the files. If you care about your data integrity enough to be paranoid about bit-flips, you should be that paranoid everywhere.
ECC has more functionality than just guarding against file corruption, though.
System stability is quite nice.

Computer viking posted:

Oh 100Gbit, that's fun.

I'm still thinking about setting up some 10Gbit at home, and I genuinely don't have the storage or consumers to make use of more. Even at work 10Gbit over ethernet is honestly more than good enough - but it would be neat to play with RDMA just to have some experience with it.
If you're doing iSCSI over Ethernet (especially if backed with NVMe storage, but even without), you benefit quite a bit from the roughly 10x lower latency of fiber with SFP+ modules compared with RJ45.
Other than that, it's not gonna mean much.

wolrah
May 8, 2006
what?

Computer viking posted:

Oh 100Gbit, that's fun.

I'm still thinking about setting up some 10Gbit at home, and I genuinely don't have the storage or consumers to make use of more. Even at work 10Gbit over ethernet is honestly more than good enough - but it would be neat to play with RDMA just to have some experience with it.

For homelab-level fuckery 40G is the real sweet spot IMO. The hardware isn't much more expensive than 10G (in some cases it's cheaper) and it's generally compatible with 10G using inexpensive SFP+>QSFP+ adapters because it's literally just 4x10G acting as a single interface. Some 40G hardware, most commonly switches but also nicer NICs, can even break out those links in to four independent 10G interfaces.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
IMO the point of ECC isn't just memory integrity, but that it's market-segmented into other hardware that will likely have easy support in terms of drivers and OSes. Boards supporting ECC will likely have better onboard NICs, a BMC, and workstation/server-class chipsets that get better testing by the OSS maintainers and the contributors from manufacturers, compared to consumer stuff. It's sometimes worth the headaches to stop caring about whether that brand-new piece of gear will have some weird behavior under BSD and Linux, e.g. with regard to power management.

BlankSystemDaemon posted:

ECC has more functionality than just guarding against file corruption, though.
System stability is quite nice.
ECC is a bit of a double-edged sword. While certain error patterns are automatically corrected, others that are uncorrectable will produce a hard fault and cause the OS to straight up panic and reboot. I'm not going to forget the $55k+ system that was purple-screening randomly in ESXi because, in the end, it turned out to be a faulty RDIMM. ECC is a requirement for systems that need strong guarantees that they will not write faulty data, where it's better to crash and stop than to write a single incorrect bit. As such, at scale, simply dropping the whole server when RAM starts to fail is the right call, while some of the home machines I've had would crash with errors we couldn't correlate to anything and had no record of. With ECC, at least I can count the RAM errors right in the BIOS and quickly diagnose that there's a faulty module.

These things all matter to me at home, too. I'm a cheapskate though and am probably going to bow out of ECC with my next NAS build. For BCM purposes I'm putting the savings toward a PiKVM or TinyPilot box (because I'm so not paying for that Spider IP KVM for home use). When Matt Ahrens and other ZFS devs basically say it's not that big of a deal to run ZFS without ECC I'll move on.

wolrah
May 8, 2006
what?

necrobobsledder posted:

ECC is a bit of a double-edged sword. While certain error patterns are automatically corrected, others that are uncorrectable will produce a hard fault and cause the OS to straight up panic and reboot. I'm not going to forget the $55k+ system that was purple-screening randomly in ESXi because, in the end, it turned out to be a faulty RDIMM. ECC is a requirement for systems that need strong guarantees that they will not write faulty data, where it's better to crash and stop than to write a single incorrect bit. As such, at scale, simply dropping the whole server when RAM starts to fail is the right call, while some of the home machines I've had would crash with errors we couldn't correlate to anything and had no record of. With ECC, at least I can count the RAM errors right in the BIOS and quickly diagnose that there's a faulty module.

What exactly is the other edge to this sword? That without ECC you can have silent faults that if you're lucky don't crash the system or cause an immediately noticeable fuckup? As far as I've ever seen the only real downsides to ECC are the obvious cost aspect and limited support on consumer platforms, plus I believe there's also a slight performance/latency impact compared to equivalently clocked non-ECC modules, but it's not like there's really anything where ECC is actively harmful.

Personally I'd rather know for sure that a certain module had an error than occasionally have to play the "my system is doing something weird that might be memory related, run memtest overnight and see what happens" game, and then, if there are errors, the guessing game of which module is mapped where in the address space.

Even for consumer computing there are very few situations where I'd be willing to say "yeah this data isn't important at all, we should definitely just ignore any potential errors as long as they don't entirely crash us out" outside of maybe a streaming stick or similar content consumption appliance where it truly does not matter as long as it's not bad enough for the user to notice.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

If you're doing iSCSI over Ethernet (especially if backed with NVMe storage, but even without), you benefit quite a bit from the roughly 10x lower latency of fiber with SFP+ modules compared with RJ45.
Other than that, it's not gonna mean much.

I have played with it, but no - it's all plain file storage. Makes sense, though.

wolrah posted:

For homelab-level fuckery 40G is the real sweet spot IMO. The hardware isn't much more expensive than 10G (in some cases it's cheaper) and it's generally compatible with 10G using inexpensive SFP+>QSFP+ adapters because it's literally just 4x10G acting as a single interface. Some 40G hardware, most commonly switches but also nicer NICs, can even break out those links in to four independent 10G interfaces.

That is an interesting idea - though I'm very dependent on what shows up used; I'm not paying full price just for the bragging rights learning experience. I'll keep it in mind when searching, though. :)

Computer viking fucked around with this message at 17:33 on Jun 14, 2022

movax
Aug 30, 2008

Computer viking posted:

I have played with it, but no - it's all plain file storage. Makes sense, though.

That is an interesting idea - though I'm very dependent on what shows up used; I'm not paying full price just for the bragging rights learning experience. I'll keep it in mind when searching, though. :)

I got a pair of Chelsio T6-based cards off eBay for a decent price; I think Mellanox is the most common bang-for-buck used option these days though.

I've been seeing some SN640s on eBay for decent prices -- for home NAS usage, the more 'data'-oriented models of enterprise NVMe drives are more than enough, right? Generally there are the IOPS-optimized ones (like the PM1735, which I have in my desktop) and then there are read-optimized models, which I think the PM9A3 / SN640 / CD6 series are?

Thinking about doing a flash pool along with my spinning pool, to use for smaller files / some VMs, and then focusing the spinning pool on media / backups.

Computer viking
May 30, 2011
Now with less breakage.

It's worth noting that I'm in Norway, and something about being in the EEA and Schengen and whatever else is relevant, but not the EU, makes the cost of international shipping here really unpredictable.

I'll keep the Chelsio cards in mind, though.

wolrah
May 8, 2006
what?

Computer viking posted:

That is an interesting idea - though I'm very dependent on what shows up used; I'm not paying full price just for the bragging rights learning experience. I'll keep it in mind when searching, though. :)
40G is obsolete for a variety of reasons, so there isn't much new gear out there for it, but that means there's a lot of used stuff that's been retired in favor of 50/100G.

I got a pair of Mellanox cards off eBay for $24 a piece back in mid 2020 and spent about the same on a 10 foot DAC from FS. Prices seem to have gone up on the NICs but they can still be had for double digit prices without much effort.

edit: OK, non-US probably affects prices but I'd still be willing to bet you can get some used 40G gear for less than new 10G.

Computer viking
May 30, 2011
Now with less breakage.

Looks like I can get Mellanox 40Gbit cards without transceivers for €67 from Germany, which isn't half bad. It's another €35 for shipping if I'm not happy with "early July to early August", but ... I may be.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I got Mellanox ConnectX-3 VPI 40/56 cards here. Windows comes with an inbox driver ostensibly from Microsoft, but the versioning scheme matches Mellanox, so eh. Linux supports it out of the box, too. So none of the WinOF and OFED poo poo is needed (so no fear of driver obsolescence or anything), and I'm trucking along on RDMA just fine.

Klyith
Aug 3, 2007

GBS Pledge Week

wolrah posted:

What exactly is the other edge to this sword? That without ECC you can have silent faults that if you're lucky don't crash the system or cause an immediately noticeable fuckup?

If some error types will cause a complete machine crash even if the error occurs in unimportant memory, that's a bit of a bummer on a desktop machine where large amounts of your in-use memory are trivial unimportant crap. Like even in a word processor program most of the memory is about how the GUI looks and how the document is rendered, and the memory of the program's code and the document itself is very small by comparison.

wolrah posted:

As far as I've ever seen the only real downsides to ECC are the obvious cost aspect and limited support on consumer platforms, plus I believe there's also a slight performance/latency impact compared to equivalently clocked non-ECC modules

It's more than a slight performance penalty, especially as ECC caps out at much lower speeds than non-ECC memory. Though I'd guess that's more about the market than an inherent part of ECC.

It's like, I totally agree with the Linus rant that we could all be using ECC if it wasn't for corporate greed, and it wouldn't cost much more than a couple extra bucks per stick... but this is not that world. And in this world, I'm not sure that I see ECC as worth either the money or the performance trades for anyone doing DIY builds of a small number of PCs.


There's an old Google study on memory errors in their servers. If you read that article carefully, particularly stuff like:

quote:

  • We find that for all platforms, 20% of the machines with errors make up more than 90% of all observed errors for that platform.
  • Across the entire fleet, 8.2% of all DIMMs are affected by correctable errors
  • For all platforms, the top 20% of DIMMs with errors make up over 94% of all observed errors
there's a pretty strong conclusion: the vast majority of errors come from what in non-ECC terms would be bad memory modules, but ECC is keeping them alive. If this were regular memory, it would be a module that produced plenty of errors in memtest and that you would replace.
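The arithmetic behind that, just multiplying the quoted figures together:

code:

# Figures quoted above from the Google DRAM study.
dimms_with_errors = 0.082   # 8.2% of all DIMMs see correctable errors
top_share = 0.20            # the worst 20% of those error-prone DIMMs...
errors_from_top = 0.94      # ...account for 94% of all observed errors

worst_dimms = dimms_with_errors * top_share
print(f"~{worst_dimms:.1%} of all DIMMs produce ~{errors_from_top:.0%} of the errors")
# -> roughly 1.6% of DIMMs generating almost all the errors: bad sticks kept alive by ECC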

For Google or any other server operator, that means ECC is 100% worth paying for. They have like a billion servers, they can't afford to memtest them all or pay people to find bad sticks & replace them. For a DIYer who manages one or three PCs, the calculus is probably different. Unless you *really* hate any added janitoring tasks, just memtest your machines overnight once or twice per year. As long as you have no errors, you can probably rest easy that your data isn't being silently corrupted by memory bitflips.

wolrah
May 8, 2006
what?

Klyith posted:

If some error types will cause a complete machine crash even if the error occurs in unimportant memory, that's a bit of a bummer on a desktop machine where large amounts of your in-use memory are trivial unimportant crap. Like even in a word processor program most of the memory is about how the GUI looks and how the document is rendered, and the memory of the program's code and the document itself is very small by comparison.

My point is that you don't know what's going to land in the bad memory until something does. Personally given the choice of having a fault that comes with its own useful diagnostic and a silent potential data loss I'd very much prefer the former.

I have had one bad RAM stick and one BIOS-related RAM issue in my server's current form, both of which were caught because they eventually caused crashes but in both cases ZFS had been crying bloody murder to the console for some time prior to the crash. Literally hundreds of gigabytes of bad data was written to disk before something critical enough to cause a crash happened to end up in the problematic area. Sure, my system stayed online and looked like it was working that whole time, but is that really a good thing?

I can only assume that when I've had RAM issues on my desktop/laptop machines in the past I've probably written bad data to disk before the problem became apparent as well. Obviously they wouldn't see the same volume of data as my server that runs the *arr apps but the importance of the data is generally higher.

wolrah fucked around with this message at 20:46 on Jun 14, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

wolrah posted:

My point is that you don't know what's going to land in the bad memory until something does. Personally given the choice of having a fault that comes with its own useful diagnostic and a silent potential data loss I'd very much prefer the former.

Yeah, I don't really agree with necrobobsledder saying it's a double-edged sword, because "your PC might crash" is about as sharp as a butter knife.

wolrah posted:

I have had one bad RAM stick and one BIOS-related RAM issue in my server's current form, both of which were caught because they eventually caused crashes but in both cases ZFS had been crying bloody murder to the console for some time prior to the crash. Literally hundreds of gigabytes of bad data was written to disk before something critical enough to cause a crash happened to end up in the problematic area. Sure, my system stayed online and looked like it was working that whole time, but is that really a good thing?

Hmmm. So it definitely wasn't a good thing. But if I'd been in that situation, you know who I'd be pissed at?

One, myself for never looking at my server's error logs / memtesting my machines. That's why I went on to say that IMO ECC isn't worth it, as long as you're willing to trade ~1 hour per year for saving 500 bucks on hardware.

Two, ZFS and/or TrueNAS or whichever distro set ZFS up. The ZFS advocates ITT give btrfs a lot of poo poo for having unstable, integrity-not-guaranteed features that can be turned on. If ZFS is critically dependent on ECC memory, that memory-checksumming feature should be way more exposed, so that anyone who doesn't have ECC will turn it on.
