|
Thanks to everyone for the help. Here's what I ultimately ended up getting based on price/availability etc.

Mainboard/CPU: Supermicro A2SDi-8C+-HLN4F motherboard
Memory: Supermicro-certified MEM-DR416L-SL06-ER24 Samsung 16GB DDR4-2400 LP ECC REG
M.2 SSD for OS: Western Digital 250GB WD Blue 3D NAND, SATA III 6Gb/s, M.2 2280, up to 550MB/s (WDS250G2B0B) - I agree about NVMe being far superior, but it didn't matter for this use case
Case: Fractal Design Node 304, black, Mini-ITX, high airflow, modular interior, 3x Fractal Design Silent R2 120mm fans included, USB 3.0
PSU: EVGA 550 B5 (220-B5-0550-V1), 80 Plus Bronze 550W, fully modular, compact 150mm size - finding a decent 250-300W PSU is actually pretty hard and not all that cost-effective as best as I can tell, or they only have like 2 SATA connectors or whatever. This one had 6, which is what I need.

And then I'll just bring over my 6-drive ZFS array and get it going after I redo Debian on the M.2 drive. Seriously, appreciate the help. Some of this is probably overkill but whatever.
|
# ? Dec 9, 2020 20:47 |
|
BlankSystemDaemon posted:Maybe if you're on an inferior grid. 230V grid means higher efficiency than 110V. Yeah, it'd be nice if we could get 230v in the US, but...no. While it's not impossible to get better efficiencies out of a Bronze unit, it's unlikely to get them that high--more likely at that point that they'd bump up certification to Silver or Gold, and price it accordingly. Hell, even a 10% efficiency difference for a sub-200W system like that only works out to like $3-4/yr. That's a looooong time to make back spending $40 extra for a more efficient unit. DrDork fucked around with this message at 20:50 on Dec 9, 2020 |
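DrDork's $3-4/yr figure is easy to sanity-check. A quick sketch of the arithmetic - the 60W average draw, 85% vs. 90% efficiency, and $0.12/kWh numbers are assumed for illustration, not taken from the post:

```shell
# Rough yearly cost of a PSU efficiency gap for a low-draw home server.
# All inputs are example assumptions: 60W average DC-side load, Bronze-ish
# 85% vs. a better unit's 90%, running 24/7, at $0.12/kWh.
awk 'BEGIN {
    draw = 60                      # average load in watts
    wall_lo = draw / 0.85          # wall draw at 85% efficiency
    wall_hi = draw / 0.90          # wall draw at 90% efficiency
    kwh = (wall_lo - wall_hi) * 8760 / 1000   # extra kWh per year
    printf "%.1f kWh/yr -> $%.2f/yr at $0.12/kWh\n", kwh, kwh * 0.12
}'
```

That lands right around $4/yr, so against a $40 price premium the payback really is a decade.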
# ? Dec 9, 2020 20:47 |
|
TraderStav posted:They're booting off of the exact same USB stick, so all of the settings and programs should be the same. Same card for the JBOD (LSI) also. When I switch machines I unplug the JBOD, pull out the LSI card and the Cache drive (nVME on a PCI card) and then drop them in the other. Plug in the USB and boot, so very little differences. Yeah, that's an odd one. I would email them a list of part makes and models with a description of the problem and see what they say.
|
# ? Dec 9, 2020 21:28 |
|
Re 2.5” hddchat: Look up the seagate rosewood if you wanna see some real fun. It has a load bearing
|
# ? Dec 10, 2020 01:48 |
|
Wild EEPROM posted:Re 2.5” hddchat: Those drives are such pieces of poo poo. Seagate really is the second/bottom tier in drives.
|
# ? Dec 10, 2020 12:53 |
|
I want a proper web interface on my NAS. I am currently using an old PC with Ubuntu 20.04 installed, and I'm considering switching it to the gold standard Green NAS, but they are now TrueNAS Core. Is moving to TrueNAS Core a good idea, or is there something better out there? The biggest issue I will run into is that I have ~5TB of data on an MDADM array, and I believe Free/Grey NAS will not import an MDADM array. For what it's worth, I know ZFS is God's file system or whatever, but I am not very familiar with BSD and I don't think ZFS would benefit me much in a home use setting. What features am I missing out on with my current MDADM + ext4 setup compared to BSD + ZFS?
|
# ? Dec 12, 2020 12:46 |
Not Wolverine posted:I want a proper web interface on my NAS. I am currently using an old PC with Ubuntu 20.04 installed, and I'm considering switching it to the gold standard Green NAS, but they are now TrueNAS Core. Is moving to TrueNAS Core a good idea, or is there something better out there? But maybe you don't care about any of that, which is fine, as all it really boils down to is this: ZFS was designed to keep data safe irrespective of what harddrives try to do to it (and they will do a lot of things which no other filesystem can deal with). As for 'BSD', it's FreeBSD, which is descended from BSD - and you're absolutely right, it won't work with mdadm. There are a couple of reasons for this: namely that nobody has made a GEOM module for it, and that Linux doesn't have a stable KBI - so the FreeBSD release schedule, combined with a release model that tends to work in bursts, means there would potentially be a big gap in feature parity even if someone had implemented it. BlankSystemDaemon fucked around with this message at 13:14 on Dec 12, 2020 |
|
# ? Dec 12, 2020 13:10 |
|
BlankSystemDaemon posted:EXT (whichever version) and ZFS can't really be directly compared because one is just a filesystem (based, in part, on UFS which is still actively maintained in FreeBSD, by many people including its original creator who made it in the early 1980s) whereas the other combines volume management (for example MDADM in Linux or GEOM in FreeBSD) with a filesystem <snip> quote:and features like copy-on-write atomicity, meaning no block is ever overwritten in place and the data on-disk moves from one consistent state to the next (ie. if something ever goes wrong, you can rewind to the last-known good state). ZFS also keeps checksums for every single record, stored in the parent block, in a hashtree-like structure, which ensures data integrity. In a real work environment with an IT budget and someone dedicated to making sure the NAS is OK, ZFS sounds awesome, but as a home user, I think I prefer the simplicity of MDADM + ext4 over ZFS. But regardless of my opinion on what I need, I think I still need BSD + ZFS if only because the easy option for setting up a NAS is not Ubuntu + MDADM + ext4 + SSH + etc, but instead TrueNAS Core. If anything, I think my only fear now is that I kinda dislike the idea of an all-in-one solution, simply because if something ever did break (like the OS itself, not the file system) I think it would be more difficult to troubleshoot a broken FreeNAS than a broken Linux server. With my current setup, I know that so long as my disk drives are intact, I can replace the whole operating system with anything else Linux-based and be able to use MDADM to import the array. I could switch my NAS to Ahego Linux tomorrow if I wanted to; I'm not sure BSD + ZFS could do that.
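That portability point - any Linux install can pick the mdadm array back up - looks roughly like this in practice. A sketch only; the device name and mount point are hypothetical:

```shell
# Reassemble an existing mdadm array on a fresh Linux install.
# /dev/md0 and /mnt/nas are example names, not from the thread.
mdadm --assemble --scan      # scan superblocks and assemble any arrays found
cat /proc/mdstat             # confirm the array came up and is healthy
mount /dev/md0 /mnt/nas      # then mount the ext4 filesystem as usual
```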
|
# ? Dec 12, 2020 13:32 |
|
Honestly once you get ZFS set up it can basically be forgotten about, and the setup isn't really difficult in the first place. If you can move your data somewhere and then put it back after recreating the array, I can't really come up with any good arguments *not* to use ZFS especially because most of your arguments are "I don't understand why it's better" which is fair, and "I don't think I'll benefit from it really" which is also fair, but you probably will benefit without realizing it and it makes it a bit more portable since mdadm effectively locks you into Linux. You are not going to have data loss through your cables, it would retry the packet(s). At worst with that you'll either lose link or just have pretty lovely speeds. You don't need to use snapshots or dedup at home unless you feel like it. ZFS is going to make it easier to not lose data in general as it's far more fault tolerant than other solutions you've mentioned. ZFS also works fine on Linux - I use it on Debian personally. Really the only ongoing work you should do "dedicated to making sure the NAS is OK" is set up a cron job to check the array for disk failures that runs once a week or so and fires an email off if it detects any (this is easy and I can just give you a script to do it, once I get my replacement hardware and can access it again), and maybe a monthly scrub in a cron job as well. You do not ever have to touch it beyond OS security updates which you should be doing regardless of whatever choice you make. If you don't want to use ZFS for whatever reasons that's fine, but so far your reasons seem more uninformed about how much effort it is than anything else. ssb fucked around with this message at 15:02 on Dec 12, 2020 |
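The weekly health-check cron job described above could be sketched something like this - a hypothetical script, with the email address, file path, and mail command as placeholders rather than anything actually posted:

```shell
#!/bin/sh
# Example: /etc/cron.weekly/zfs-health - mail a warning if any pool is
# degraded. `zpool status -x` prints "all pools are healthy" when all is well.
status="$(zpool status -x)"
if [ "$status" != "all pools are healthy" ]; then
    # mail(1) assumes a working local MTA; swap in whatever mailer you use
    printf '%s\n' "$status" | mail -s "ZFS problem on $(hostname)" you@example.com
fi
```

A monthly `zpool scrub yourpool` dropped into /etc/cron.monthly pairs with it, per the post.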
# ? Dec 12, 2020 14:55 |
|
shortspecialbus posted:Honestly once you get ZFS set up it can basically be forgotten about, and the setup isn't really difficult in the first place. If you can move your data somewhere and then put it back after recreating the array, I can't really come up with any good arguments *not* to use ZFS especially because most of your arguments are "I don't understand why it's better" which is fair, and "I don't think I'll benefit from it really" which is also fair, but you probably will benefit without realizing it and it makes it a bit more portable since mdadm effectively locks you into Linux. You're absolutely right, my argument is mainly that I am not informed enough about why I need ZFS, but also the fact that I think ZFS is supported better on BSD than Linux. quote:Really the only ongoing work you should do "dedicated to making sure the NAS is OK" is set up a cron job to check the array for disk failures that runs once a week or so and fires an email off if it detects any (this is easy and I can just give you a script to do it, once I get my replacement hardware and can access it again), and maybe a monthly scrub in a cron job as well. You do not ever have to touch it beyond OS security updates which you should be doing regardless of whatever choice you make. I think I could be happy if I just set up a cron script on my Ubuntu NAS, but it hasn't toasted my data yet so it's on the back burner, same for security updates. For security updates, if I could do them via a web console I would do them on a regular basis. Hell, I think I would be totally fine with using a cron script to automatically do security updates weekly - is this possible?
|
# ? Dec 12, 2020 15:12 |
|
Not Wolverine posted:You're absolutely right, my argument is mainly that I am not informed enough about why I need ZFS, but also the fact that I think ZFS is supported better on BSD than Linux. I've been using ZFS on Debian for a while now. What specifically do you believe won't work properly with it on Linux? quote:I think I could be happy if I just set up a cron script on my Ubuntu NAS but it hasn't toasted my data yet so it's on the back burner, same for security updates. For security updates, if I could do them via a web console I would do them on a regular basis. Hell, I think I would be totally fine with using a cron script to automatically do security updates weekly, is this possible? You can absolutely set up a cron job to automatically do security upgrades, and I think Ubuntu might have something that does that automatically if you enable it - unattended-upgrades, I think?
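The package name is right - on Ubuntu/Debian the stock route is unattended-upgrades. A minimal sketch of turning it on (check your release's docs for the exact config knobs):

```shell
# Install and enable automatic security updates on Ubuntu/Debian.
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# Verify it's enabled - both APT::Periodic values should be "1":
cat /etc/apt/apt.conf.d/20auto-upgrades
```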
|
# ? Dec 12, 2020 15:16 |
|
shortspecialbus posted:What specifically do you believe won't work properly with it on Linux? I have not yet read your link (I plan to) but my specific fear is that ZFS on Linux might be replaced or moved into the kernel in the future. Even if it's "better" I don't want things to break. It's an irrational fear, but that's the main reason I don't want to use Z on Linux right now.
|
# ? Dec 12, 2020 15:56 |
|
I manage both ZFS and LVM with mdraid at work on Linux and I prefer ZFS both professionally and for personal use. Other professionals may disagree with me but my point is more that ZFS is really not a big deal at home as long as you can deal with its propensity to gobble up RAM for some specific scenarios for performance and how you may need to actually plan your array expansions a little. ZFS on Linux for most use cases even professionally is fine to run if you feed your array lots of RAM and actually think about your storage IO workload and do some measurements. I was on the “be wary of ZoL for a while” train and then it passed the number of years that it was available on OpenIndiana and other OSS Solaris distributions before I stopped being obstinate.
|
# ? Dec 12, 2020 16:49 |
|
Now that ZFS on Linux has become OpenZFS, I don't think it's going to just go away. It's an important part of too many projects. It's also the technology I'd outright prefer if I was concerned about making my disk array work even if I had to transplant it into a new build. ZFS is mature enough today that I'd have no fear whatsoever taking the array out of my current FreeNAS box, installing them in any current Linux build, and importing the pool. I admit that I am lightly biased. I used to be a professional Solaris toucher and learned ZFS at work. Whenever I have to touch mdadm or lvm, I feel a deep longing for the simplicity and power of ZFS.
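The transplant described here boils down to an export/import cycle. A sketch, with "tank" standing in for whatever the pool is actually called:

```shell
# On the old box: cleanly release the pool before pulling the drives.
zpool export tank
# On the new box (any OS with OpenZFS): list importable pools, then import.
zpool import            # shows pools found on attached disks
zpool import tank       # add -f only if the pool wasn't cleanly exported
zpool status tank       # sanity-check the vdev layout and health
```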
|
# ? Dec 12, 2020 17:31 |
|
shortspecialbus posted:You don't need to use ... dedup at home unless you feel like it. I would bump this from "don't need to" to "DO NOT ENABLE UNDER ANY CIRCUMSTANCE" because holy gently caress you will need a lot of RAM to make a deduplicated pool not perform like poo poo. I've got to imagine dedup is only worth it in very specific workloads where more/larger drives simply aren't possible but RAM/cache SSDs are. For the home user, there's always bigger drives. Other than that, ZFS is worth it over mdraid because mdraid will poo poo your entire array over a single read error during a rebuild process. ZFS won't, and in most cases will be able to tell you which exact files are corrupted and need to be restored from the backups you have. Speaking of ZFS and resilvering, is there any way to tell ZFS to do multiple drive replacements simultaneously? I'm doing some drive replacements in four-drive raidz vdevs and even if I stick them in immediately following commands: code:
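On the multi-replace question, a sketch of what issuing the replacements back to back might look like - pool and device names made up. Whether they resilver in one pass depends on the ZFS version (newer OpenZFS can batch pending replacements into a shared resilver; older releases tend to restart the resilver per command), so check the status output after issuing them:

```shell
# Kick off two replacements in the same raidz vdev, one command each.
zpool replace tank /dev/sda /dev/sde
zpool replace tank /dev/sdb /dev/sdf
zpool status tank   # both old/new pairs should show under a "replacing" vdev
```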
|
# ? Dec 12, 2020 18:09 |
|
Not Wolverine posted:I have not yet read your link (I plan to) but my specific fear is that ZFS on Linux might be replaced or moved into the kernel in the future. Even if it's "better" I don't want things to break. It's an irrational fear, but that's the main reason I don't want to use Z on Linux right now. You know the on-disk format won't change right? If the module is replaced or put into the kernel tree the new one will still import your pool just fine.
|
# ? Dec 12, 2020 18:09 |
|
I use LVM professionally (not mdadm though, we do hardware RAID on things that aren't VMs) as well as for the non-ZFS portion of the server, and it's excellent. I was going to write a long effortpost on your concerns and all, but other posters have hit on some of it pretty well, so I'll leave it at "your concerns are wholly unfounded". RE: dedup - that's a valid point, probably not ideal for most home setups in general.
|
# ? Dec 12, 2020 18:38 |
Not Wolverine posted:with BSD the choices are dedicated NAS distros, or Free, Net, Open BSD, or a Hackintosh NAS. FreeBSD, OpenBSD, NetBSD, and DragonFlyBSD for that matter, are all completely separate OSes - they're not just different versions of the same kernel shipped with different libraries and userlands, like Linux is. Yes, they have common heritage (the eponymous BSD, and through that, the original UNIX), and all try to implement against the POSIX standard - but unlike Linux distributions, they're completely different in scope. Also, macOS isn't a BSD, and while it's true that Darwin, the open-source part of macOS that includes the XNU kernel from Mach, implements some FreeBSD code (the VFS, process model, netstack, and some command-line utilities), that code is there because Apple wanted macOS to be a certified UNIX according to the Single UNIX Specification. The XNU kernel, all of the libraries in the OS (libc, libc++, Metal, CoreAudio, CoreVideo, et cetera ad nauseam), the compiler (LLVM fronted with clang), every single one of the drivers, and every part of the user experience that 99.95% of the userbase interacts with - all of that is entirely done by Apple (or subcontracted, as is the case with LLVM, OpenBSM, MAC, and the audit framework - although to make everything more complicated, all of those also exist in FreeBSD, and Robert Watson, who wrote OpenBSM, MAC, and the audit framework, made them for FreeBSD while being paid to work on them for Apple). As for NAS stuff, FreeNAS/TrueNAS Core is based on FreeBSD, and is really more of an appliance OS. Not Wolverine posted:I have not yet read your link (I plan to) but my specific fear is that ZFS on Linux might be replaced or moved into the kernel in the future. Even if it's "better" I don't want things to break. It's an irrational fear, but that's the main reason I don't want to use Z on Linux right now.
The hope is that OpenZFS will eventually also have support for macOS, NetBSD (which has a fork of the old FreeBSD implementation, and should therefore be easy to move to the new one), Illumos, and Windows. Zorak of Michigan posted:Now that ZFS on Linux has become OpenZFS, I don't think it's going to just go away. It's an important part of too many projects. It's also the technology I'd outright prefer if I was concerned about making my disk array work even if I had to transplant it into a new build. ZFS is mature enough today that I'd have no fear whatsoever taking the array out of my current FreeNAS box, installing them in any current Linux build, and importing the pool. VostokProgram posted:You know the on-disk format won't change right? If the module is replaced or put into the kernel tree the new one will still import your pool just fine. shortspecialbus posted:RE: dedup - that's a valid point, probably not ideal for most home setups in general. Well, dedup is in a weird place right now, because as it stands, there's very little reason to actually try and use it for anyone. Matt Ahrens has commented on it a few times, and at one point even said that he had some thoughts and bar-napkin sketches for how to do a proper deduplication implementation - but as it stands, there appears to be nobody working on it. Maybe once he's done with the raidz expansion?
|
|
# ? Dec 12, 2020 20:00 |
|
BlankSystemDaemon posted:Well, dedup is in a weird place right now, because as it stands, there's very little reason to actually try and use it for anyone. Matt Ahrens has commented on it a few times, and at one point even said that he had some thoughts and bar-napkin sketches for how to do a proper deduplication implementation - but as it stands, there appears to be nobody working on it. Yeah. Drives are cheap, drive enclosures are cheap-ish, RAM and fast SSDs aren't. You'd need one hell of a weird corner case for dedup to make more sense, at least the way ZFS does it.
|
# ? Dec 12, 2020 21:20 |
|
Just got a synology, getting started with all this backup stuff. Looking for advice on remote backup options. First of all, assuming I should avoid some cloud-sync type setup because that isn't a backup (i.e. deletions and corruptions get replicated)? Or am I ok to do something like this and set it up to sync on a weekly frequency or whatever? To be honest I'm more worried about accidental destruction of the drives (failure/fire/theft/etc) than I am about something like an encryption scam so perhaps this is my best option. Second, is there a recommended provider to use for this? I am planning for a capacity of 2TB which looks like it ends up being somewhere in the ballpark of $100/yr.
|
# ? Dec 13, 2020 01:01 |
|
iirc Backblaze is popular as an offsite, but you have to do some dumb stuff like using the desktop client to back up shares mounted on your desktop machine so you don't have to shell out for the business tier.
|
# ? Dec 13, 2020 01:13 |
|
Ruggan posted:Just got a synology, getting started with all this backup stuff. Looking for advice on remote backup options. B2 integrates nicely and supports versioning, which helps with accidental deletion and cryptolockering. It does not help with corruption without a huge amount of work on your part.
|
# ? Dec 13, 2020 02:08 |
|
Synology also has a first party app that does cloud sync of backups, it used to be Cloud Drive Sync but I think they redid it and it's just Cloud Drive now. I just started investigating it and plan to get it going soon. From what I can tell, BackBlaze B2 and Wasabi both have similar pricing structures but Wasabi doesn't charge for egress traffic. The fine print on that is that if you download more from them in a month than total data you've got stored with them then you're "not a good fit" - meaning if you have 100TB stored with them in total but you download more than 100TB in a single month then that's a no-no. That seems fine to me versus having to pay for transfer when I need to restore a VM/PC/etc. They also assume a minimum 3 month lifecycle for items stored in their buckets which again shouldn't be a big deal for an offsite backup repository, just depends on your planned retention.
|
# ? Dec 13, 2020 03:23 |
|
I am almost convinced to take the plunge and install FreeNAS, however I'm concerned about memory. Is ECC required for ZFS? My NAS has a GA-770T-USB3 motherboard and Phenom II x4 840 with 12GB of whatever non-ECC DDR3 I could find lying around. The manual for my motherboard states that it supports ECC RAM if you install an ECC capable CPU, but can I even physically fit ECC DDR3 in this motherboard? I thought ECC RAM was always keyed differently. Even if I did find some unicorn DDR3 ECC keyed as non-ECC, are there any AM3 CPUs in the Phenom/Phenom II line that support ECC? If ECC is required, I think that will be convincing enough for me to keep using Linux with MDraid and EXT4.
|
# ? Dec 13, 2020 15:31 |
|
Not Wolverine posted:I am almost convinced to take the plunge and install FreeNAS, however I'm concerned about memory. Is ECC required for ZFS? My NAS has a GA-770T-USB3 motherboard and Phenom II x4 840 with 12GB of whatever non-ECC DDR3 I could find lying around. The manual for my motherboard states that it supports ECC RAM if you install an ECC capable CPU, but can I even physically fit ECC DDR3 in this motherboard? I thought ECC RAM was always keyed differently. Even if I did find some unicorn DDR3 ECC keyed as non-ECC, are there any AM3 CPUs in the Phenom/Phenom II line that support ECC? If ECC is required, I think that will be convincing enough for me to keep using Linux with MDraid and EXT4. ECC is optional. Definitely a nice to have but optional, and shouldn’t really be a deciding factor. Backups are more important if you’re concerned about being able to recover from the rare flipped bit.
|
# ? Dec 13, 2020 15:35 |
|
Ruggan posted:Just got a synology, getting started with all this backup stuff. Looking for advice on remote backup options. Synology Hyper Backup is their turnkey backup application; it'll make versioned, compressed backups of anything on your NAS either to its local storage (so you can arrange your own offsite backups if you like) or it'll plug straight into Google Drive, Onedrive, Amazon S3, or a few other cloud services, and it'll back up directly to the cloud. You can either restore files straight to the NAS from within the Hyper Backup app, or if a meteor falls on your house or something you'll need to retrieve the entire backup set and use a desktop client to restore your stuff. Synology Cloud Sync is a slightly simpler app that does exactly what it says on the tin; it syncs stuff between your NAS and cloud services. You can get as fiddly as you like with scheduling, two-way or one-way sync etc, but it won't do versioning or compression by itself. [e] me personally, I use Synology Drive to mirror My Documents etc. between my PC and my NAS, then I use Hyper Backup to backup from the NAS to Google Drive at 1am nightly. Your needs might be different though spincube fucked around with this message at 15:47 on Dec 13, 2020 |
# ? Dec 13, 2020 15:44 |
|
Matt Zerella posted:ECC is optional. Definitely a nice to have but optional, and shouldn’t really be a deciding factor. Backups are more important if you’re concerned about being able to recover from the rare flipped bit. I still think it's cute you guys think I have a budget large enough for backing up my NAS. I know I'm living on the edge, and I fully expect this post to be quoted when I inevitably come rage posting about losing all my data. I actually plan to implement backups someday if I ever finish re-encoding all my video files to a reasonable size; 5TB of porn is a little excessive even by my standards. That said, I am a little concerned because my system has 12GB RAM and I believe ZFS needs about 1GB per TB, so if I upgraded to 4x 4TB drives I would be at the limit for my RAM, which I can upgrade to 16GB but I don't want to spend the money unless I have to. Or is this a case of diminishing returns, like if I put in 4x 18TB drives I wouldn't actually need 54GB of RAM? I am still curious to know if this motherboard (GA-770T-USB3) actually can somehow support DDR3 ECC RAM.
|
# ? Dec 13, 2020 16:02 |
|
It does not need that much RAM. I ran a 21TB RAIDz2 on 4GB of RAM and an old i5-750 until the hardware died, and had no performance issues with normal home usage. That said, are you hitting your NAS crazy hard with nonstop transfers from multiple endpoints at a time? What actually is your use profile? Edit: checked, actually had 4GB on that old thing. Also you don't really need ECC for normal home use but it's nice to have. I've never had it in a ZFS NAS until my new build, which parts should show up for next week. ssb fucked around with this message at 16:13 on Dec 13, 2020 |
# ? Dec 13, 2020 16:09 |
|
ZFS doesn't need 1GB of RAM per TB - that might be recommended for deduplication, but in normal usage it seems happy with 8GB or more. I've run >100TB pools on 8GB and 16GB of memory (non-ECC) with no issue.
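If RAM pressure is the worry, the ARC can also be capped rather than left at its default ceiling (roughly half of RAM on Linux). A sketch for ZFS on Linux - the 8 GiB figure is an arbitrary example, not a recommendation:

```shell
# Cap the ZFS ARC at 8 GiB via a module parameter (takes effect after a
# module reload or reboot).
echo "options zfs zfs_arc_max=$((8 * 1024 * 1024 * 1024))" | \
    sudo tee /etc/modprobe.d/zfs.conf
# Check the currently active limit (0 means the built-in default):
cat /sys/module/zfs/parameters/zfs_arc_max
```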
|
# ? Dec 13, 2020 16:14 |
|
Also chiming in on the RAM requirements not being that high for datasets. As for ECC, it’s devolved into so much confusion because, as far as I can tell, there were a few influential posters on the FreeNAS forums who were assholes about it, because it’s something they ran at work. However, I don’t think any of them were ZFS devs, so it came across as BOFH FUD. As far as I understand it, ECC on ZFS is a nice-to-have if you’re paranoid about bit flips. However, what’s often lost in the argument is that ZFS is already way more resistant to bit flips than something like ext4 due to its design. So as a terrible analogy, it’s like someone saying you’re gonna die if you don’t have a roll cage in your car, overlooking the crumple zones and side airbags. For most people that’s still a massive step up over a land yacht that maybe didn’t have seatbelts. Edit: https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ A link to specifically address/debunk the ECC issue. Cyberjock is the guy who screamed it at everyone on the FreeNAS forums. freeasinbeer fucked around with this message at 16:36 on Dec 13, 2020 |
# ? Dec 13, 2020 16:31 |
|
Not Wolverine posted:I still think it's cute you guys think I have a budget large enough for backing up my NAS. I know I'm living on the edge, and I fully expect this post to be quoted when I inevitably come rage posting about losing all my data. I actually plan to implement backups someday if I ever finish re-encoding all my video files to a reasonable size, 5TB of porn is a little excessive even by my standards. You’re really overthinking things here. What I meant was that the money you’d spend on trying to find some weird ECC RAM is better spent on a service like Backblaze to back your poo poo up offsite. ECC is a nice-to-have; it’s in no way required. Others have answered your RAM question.
|
# ? Dec 13, 2020 16:32 |
|
shortspecialbus posted:It does not need that much RAM. I ran a 21TB RAIDz2 on 4GB of RAM and an old i5-750 until the hardware died and had no performance issues with normal home usage. freeasinbeer posted:As far as I understand it, ECC on ZFS is a nice-to-have if you're paranoid about bit flips. However, what's often lost in the argument is that ZFS is already way more resistant to bit flips than something like ext4 due to its design. So as a terrible analogy, it's like someone saying you're gonna die if you don't have a roll cage in your car, overlooking the crumple zones and side airbags. For most people that's still a massive step up over a land yacht that maybe didn't have seatbelts. My actual use profile is minimal, just me and couple PCs occasionally shoveling over video files every once in a while, "Linux ISOs", etc. I'm over-estimating mainly because I just don't know any better. But I have no plans to enable de-dup so I think I'm fine as far as RAM goes.
|
# ? Dec 13, 2020 16:35 |
|
Not Wolverine posted:My actual use profile is minimal, just me and couple PCs occasionally shoveling over video files every once in a while, "Linux ISOs", etc. I'm over-estimating mainly because I just don't know any better. But I have no plans to enable de-dup so I think I'm fine as far as RAM goes. I’m double posting this but that was FUD spread by some rear end in a top hat: https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
|
# ? Dec 13, 2020 16:37 |
|
I've been doing some testing over the past week or two with a TrueNAS 12.0 VM (HBA passthrough) with 14x 4TB RAID-Z and 16GB of non-ECC RAM. So far so good; it can easily saturate a gigabit connection.
|
# ? Dec 13, 2020 18:10 |
|
H2SO4 posted:From what I can tell, BackBlaze B2 and Wasabi both have similar pricing structures but Wasabi doesn't charge for egress traffic. The fine print on that is that if you download more from them in a month than total data you've got stored with them then you're "not a good fit" - meaning if you have 100TB stored with them in total but you download more than 100TB in a single month then that's a no-no. Wasabi will let you pay for bandwidth, just like B2. At least their reps tell me that on the commercial side. You just need to have an estimate up front and it helps if you're hooked up to aws us-east-1. For a consumer though downloading from Wasabi or B2 should be a once or never thing, fingers crossed.
|
# ? Dec 13, 2020 18:21 |
|
freeasinbeer posted:few influential posters on the freenas forums who were assholes about it, because it’s something they ran at work. However I don’t think anyone of them were zfs devs, so it came across as BOFH FUD. I see you've read anything that CyberJock has EVER posted. Good god he's annoying and pedantic. He knows what he's doing, but if you don't set things up the way HE insists, he'll just blame that for the problem.
|
# ? Dec 13, 2020 19:10 |
Less Fat Luke posted:ZFS doesn't need 1GB RAM per TB, that might be recommended for deduplication but in normal usage it seems happy with eight GB or higher. I've run >100TB pools on 8GB and 16GB of memory (non-ECC) with no issue. The recommendation for deduplication is 5GB per 1TB of storage, but a more useful way of thinking about it is "number of blocks on disk times 330" (or 70, depending on if you're using Illumos-derived ZFS or OpenZFS), and then divide that by 1024 a couple of times until you get a number that makes sense.
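That rule of thumb can be worked through for a concrete case - the 10 TiB pool and 128 KiB average block size below are made-up example numbers, using the 330-bytes-per-block figure from the post:

```shell
# Back-of-envelope dedup table (DDT) sizing: blocks on disk times 330 bytes.
# The 10 TiB pool and 128 KiB average block size are assumed examples.
pool_bytes=$((10 * 1024 * 1024 * 1024 * 1024))   # 10 TiB of data
block_size=$((128 * 1024))                        # 128 KiB average record
blocks=$((pool_bytes / block_size))
ddt_bytes=$((blocks * 330))
ddt_gib=$((ddt_bytes / 1024 / 1024 / 1024))       # divide by 1024 a few times
echo "${blocks} blocks -> roughly ${ddt_gib} GiB of dedup table"
```

Smaller average blocks inflate the count fast, which is a big part of why dedup is so RAM-hungry.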
|
|
# ? Dec 14, 2020 00:29 |
|
Well two things, first we're talking about home usage, and second "citation needed". Recommended by whom? The ZFS on Linux team or docs? Because at this point ECC and RAM sizing seems to be very much tribal knowledge at best or an old wives tale at worst. ZFS is very flexible and you can adapt it in various ways with SLOGs, L2ARC, and so on to make it fit your workload. I don't think any blanket "X per Y" makes sense for the filesystem anymore.
|
# ? Dec 14, 2020 01:22 |
|
FreeNAS/TrueNAS is still running solid for me with 12GB of ECC on a 4x3TB RAIDZ2 array. I tossed an SSD in there a few months ago to run a VM for game servers and maybe once in a while I have to reboot to free up some RAM to start it up but it works great. One of the drives has been giving me errors during the monthly long smart test, but I've been too lazy/poor to get around to replacing it. It's been like that for a year or two with no obvious problems so I've been putting it off until I can afford or get a deal on a whole new array with larger drives. I've got an offsite backup of important stuff, and I try not to be a data hoarder so it hasn't become a pressing issue yet.
|
# ? Dec 14, 2020 03:26 |
|
Less Fat Luke posted:ZFS is very flexible and you can adapt it in various ways with SLOGs, L2ARC, and so on to make it fit your workload. I don't think any blanket "X per Y" makes sense for the filesystem anymore. I don't even have anything but spinning disks in my ZFS and I'm well under 1GB/TB with no performance impacts when I'm not replacing an entire vdev at a time. I might throw some more RAM at my system eventually but it's not ZFS driving that.
|
# ? Dec 14, 2020 03:55 |