|
Buff Hardback posted:Unraid parity != checksumming bingo
|
# ? Jan 3, 2022 21:02 |
|
|
|
For comparison: I have a 15 bay case, and probably will never change it. Populating it entirely with 8TB drives in Unraid (as that's what my current parity drive is) and using double parity gives me 104TB of storage. If I were to throw everything out and use ZFS and plan ahead (which would require buying a whole lot of drives right this second), the same 8TB drives in RAIDZ1 with 3 vdevs of 5 drives would give me only 96TB of usable space.

edit: Am I perfectly aware that Unraid isn't perfect and has flaws that would mean you shouldn't use it in enterprise use cases? Yes. Do I want to pull out a spreadsheet and calculator in order to optimize the best way to buy and install drives for my Linux ISO server? God no, I just wanna be able to open palm slam the buy button on a shuckable drive and not worry about it, and ZFS doesn't really support that use case.

Raymond T. Racing fucked around with this message at 21:19 on Jan 3, 2022 |
# ? Jan 3, 2022 21:16 |
|
Just also for the record BSD, I'm not trying to start a fight and I enjoy your posts in here.
|
# ? Jan 3, 2022 21:23 |
|
Kind of a weird world where someone builds a 104TB net capacity "ISO archive" and then doesn't care about reliability. I could understand it for some dual or triple drive Synology NAS, but this? 100TB is kind of in the "my precious" territory.
|
# ? Jan 3, 2022 21:29 |
|
Yeah I'm glad folks have offsites and poo poo but I've got a RaidZ2 with an onsite backup Powervault hosted on a 6c12t Xeon with 80 loving gigs of memory and l2arc for...~30TB lol (my drives don't spin up and it rules)
|
# ? Jan 3, 2022 21:40 |
|
I am thinking about migrating my NAS with a hardware upgrade, currently running unraid with 4 6TB HGSTs and a 128GB cache drive (just because I had it kicking around). Is there a better option to move to with more reliability that doesn't require me to full-time computer janitor it? I also keep my important docs and photos in the cloud but still would rather not lose all my poo poo.
|
# ? Jan 3, 2022 21:43 |
Welp, this turned out to be more than I expected.

Matt Zerella posted:It's bulletproof as in its low maintenance.

I really think you've got a different definition on that (and that's ok!). I'm not a prescriptivist linguist (in fact, I'm not any kind of linguist except maybe a cunning one), but I sometimes do wish that people would say what they mean when they use a word that has meanings as subject to interpretation as this one is turning out to be. Even though I don't like it very much for reasons beyond the scope of this post, TrueNAS has all of what you said except "official" support, but as someone who does community support for FreeBSD (in addition to being a committer) I have to say that all too often I hear horror stories about what Linux commercial support is like. The major difference is that it'll do a better job of keeping your data safe, or at least telling you if your data is corrupt, than UnRAID will.

Buff Hardback posted:but also who cares, I don't wanna buy 5 drives at a time to use ZFS, I don't have the money for that

Buff Hardback posted:For comparison:

If we're going to use examples from the real world instead of the fantasy world, here's an example of planning a 15-bay disk chassis today: buying 5x 8TB drives and setting up a RAIDZ2 would get you just over 20TB - so unless you're attempting to fill it up as soon as possible, there's at least a chance that OpenZFS 3.0 with RAIDZ expansion will land before you've filled it up. And here are some backup plans: if it isn't out by then, you buy another 5x 8TB drives and set up another pool - then once OpenZFS 3.0 is out, you can expand the original pool with 5 drives using RAIDZ expansion, move all the data from the extra pool, delete the extra pool, and add the rest of the disks using RAIDZ expansion.
Worst case, if by then you can't afford to buy 5 disks at once, you can enable autoexpand and buy one bigger disk a month to replace one of the 8TB disks with, and in less than half a year you'll have added at least a bit more storage until RAIDZ expansion lands; at that point you can keep adding one disk per month of the ones you've been buying (or even bigger, it doesn't matter - you can keep replacing the smallest individual disk periodically and have constant small growth if you want). The result is that you end up with at least 86TB of disk space if one or all disks are 8TB - but if all disks end up being 14TB by the time you've filled up, you end up with just north of 155TB.

Also, does UnRAID adopt a drive you insert, without you doing anything? If you're using a SAS chassis with SES and zfsd plus the autoexpand feature, ZFS can be configured to automatically replace an existing disk or adopt a new one once RAIDZ expansion lands - so you can open-palm slam your disks out of the enclosure and into the rack.

Matt Zerella posted:Just also for the record BSD, I'm not trying to start a fight and I enjoy your posts in here.

Combat Pretzel posted:Kind of a weird world where someone builds a 104TB net capacity "ISO archive" and then doesn't care about reliability. I could understand it for some dual or triple drive Synology NAS, but this? 100TB is kind of in the "my precious" territory.

BlankSystemDaemon fucked around with this message at 22:48 on Jan 3, 2022 |
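The capacity arithmetic being argued about in these posts can be sanity-checked with a quick sketch. The helper names are made up for illustration, and this counts raw TB only (no TiB conversion, no filesystem overhead):

```python
def unraid_usable_tb(total_drives: int, parity_drives: int, drive_tb: int) -> int:
    """Unraid: every non-parity drive contributes its full capacity."""
    return (total_drives - parity_drives) * drive_tb

def raidz_usable_tb(vdevs: int, drives_per_vdev: int, parity_level: int, drive_tb: int) -> int:
    """RAIDZ: each vdev gives up parity_level drives' worth of capacity."""
    return vdevs * (drives_per_vdev - parity_level) * drive_tb

print(unraid_usable_tb(15, 2, 8))   # 15 bays, double parity -> 104
print(raidz_usable_tb(3, 5, 1, 8))  # 3x 5-wide RAIDZ1 vdevs -> 96
print(raidz_usable_tb(1, 5, 2, 8))  # one 5-wide RAIDZ2 -> 24 raw (~21 TiB usable)
```

Both posters' numbers check out; the disagreement is about growth strategy, not arithmetic.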
|
# ? Jan 3, 2022 22:35 |
|
Phone posting, so I don't want to quote to narrow down on the specifics, but just to be perfectly clear: Unraid double parity is not mirrored parity but is in fact P+Q. Two data drives can fail and the two parity drives can reconstruct them.
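For anyone unfamiliar with P+Q: here's a toy sketch of the "P" half, which is just bytewise XOR across the data drives (the "Q" drive uses a Reed-Solomon code, which takes more machinery than fits in a forum post; block contents here are made up):

```python
from functools import reduce

def p_parity(blocks: list[bytes]) -> bytes:
    """The 'P' drive: bytewise XOR across equal-sized data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data drives' worth of one stripe (toy 8-byte blocks)
data = [b"disk0AAA", b"disk1BBB", b"disk2CCC"]
parity = p_parity(data)

# disk1 dies: XORing parity with the survivors recovers its contents,
# because d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1
recovered = p_parity([data[0], data[2], parity])
assert recovered == data[1]
```

Adding the independent Q code is what lets any two failures be reconstructed instead of just one.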
|
# ? Jan 3, 2022 22:47 |
|
Also, the plugin that BSD linked to earlier does not require you to start with a blank drive. You can calculate a hash on existing data. Granted, his arguments that it is a plugin and that you have to enable it are still valid. Not to mention that calculating a file integrity hash on existing data is no guarantee that the data isn't already corrupt/bitrotted.
|
# ? Jan 3, 2022 22:53 |
Buff Hardback posted:Phone posting so don’t want to quote to narrow down on the specifics: but just to be perfectly clear: Unraid double parity is not mirrored parity but is in fact P+Q. Two data drives can fail and the two parity drives can reconstruct.

Also, unless it's changed, isn't that something you have to pay extra for, by getting the pro version over the basic version?

Lowen SoDium posted:Also, the plugin that BSD linked to earlier does not require you to start with a blank drive. You can calculate a hash on existing data.

There are some subtle differences between hash file-lists and hash trees (the latter of which is what ZFS uses), and in the use of copy-on-write, when it comes to data integrity - for example, what happens when (not if) lots of files are modified but the periodic integrity check is only done monthly? What happens if a bitflip causes data of an existing file in memory that's dirty (ie. modified) to change, and it's then written to disk? ZFS has solutions to both of these (though, for the sake of transparency, I must point out that at least the latter is somewhat cumbersome and requires scheduled maintenance), but I don't see how UnRAID can.

BlankSystemDaemon fucked around with this message at 23:04 on Jan 3, 2022 |
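The hash file-list limitation BSD is describing can be shown in a few lines (file names and contents are made up for illustration):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A monthly-style hash file-list: one digest per file, recorded at scan time
recorded = {"movie.mkv": digest(b"original contents")}

# A single flipped bit changes the digest, so the next scan flags it...
corrupted = bytearray(b"original contents")
corrupted[0] ^= 0x01
assert digest(bytes(corrupted)) != recorded["movie.mkv"]

# ...but a legitimate edit changes it too, and a file-level list scanned
# monthly can't tell the two apart between scans - whereas ZFS checksums
# every block at write time and verifies it on every read.
edited = b"original contents, deliberately modified"
assert digest(edited) != recorded["movie.mkv"]
```

That window between scans is exactly where silent corruption of modified files can slip through.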
|
# ? Jan 3, 2022 22:53 |
|
BlankSystemDaemon posted:That’s at least something - but your maths still doesn’t check out.

Nope, that’s super duper duper old. Any version of Unraid (assuming you have the slots available in your license) can happily do true double parity.
|
# ? Jan 3, 2022 23:00 |
|
We're not trying to be overly harsh, BH - it's just that we have to janitor posts in here that go off the rails. You seem like you know the risks and know what you're doing; you do you. I agree that anyone thinking about this as seriously as you are should give the modern implementations of ZFS a second look though, especially if you've got the hardware or cash to make an appliance.
|
# ? Jan 3, 2022 23:43 |
|
Since everyone is mentioning Unraid, I took a look at their web page https://unraid.net/product. Is that the one? If so, what exactly does it do? A VM host? Anything it does better than Proxmox or a plain Linux installation (distro of choice)?
|
# ? Jan 4, 2022 00:46 |
|
Volguus posted:Since everyone is mentioning Unraid, I took a look at their web page https://unraid.net/product. Is that the one? If so, what exactly does it do? A VM host? Anything it does better than Proxmox or a plain Linux installation (distro of choice)?

As the resident Unraid defender:

- A very nice and easy to use JBOD array system. While you're not striping files across drives (so performance is limited to a single drive, rather than the sum of drives), you do gain some disaster benefits: if you lose more drives than your parity can support, since each file is only stored on a single drive, you only lose data on the affected drives. Also, expanding exists right now. Want to add another drive to your pool? Slide it in and add it as a data drive, no wasted space (assuming the new drive is smaller than or equal to your parity drive size).

- A convenient Docker host.

It's significantly less janitor-y than Proxmox or TrueNAS or similar, with the tradeoff that the array mechanics aren't as robust as they would be elsewhere.
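The failure-domain point above - each file living whole on one drive - can be modelled in a few lines (the disk layout is hypothetical):

```python
# Hypothetical Unraid-style layout: each file lives whole on exactly one disk
jbod = {"disk1": ["a.iso", "b.iso"], "disk2": ["c.iso"], "disk3": ["d.iso"]}

def surviving_files(layout: dict, failed_disks: set) -> list:
    """A failure beyond what parity covers only takes the failed disks' own files."""
    return sorted(f for disk, files in layout.items()
                  if disk not in failed_disks
                  for f in files)

# Lose disk2 with no parity left: c.iso is gone, everything else survives.
assert surviving_files(jbod, {"disk2"}) == ["a.iso", "b.iso", "d.iso"]
# On a striped pool, the same excess failure would take every file with it.
```

That partial-loss behaviour is the trade Unraid makes for giving up striped-read performance.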
|
# ? Jan 4, 2022 00:51 |
|
Buff Hardback posted:It’s significantly less janitor-y than Proxmox or TrueNAS or similar, with the tradeoff that the array mechanics aren’t as robust as would be elsewhere.

I'm curious about what kind of janitor-y stuff you've had to do on TrueNAS, or that you think it requires? All I've done is replace drives that failed and periodically upgrade it. I recently upgraded my HBA card because it was basically ancient. Past that I've added pools and changed shares/exports... I don't feel like I've been janitoring for the last 7 years or so. I've definitely spent many times more time on "simple" things like a Plex VM.
|
# ? Jan 4, 2022 00:58 |
|
Motronic posted:I'm curious about what kind of janitor-y stuff you've had to do on TrueNAS or that you think it requires? All I've done is replaced drives that failed and periodically upgraded it. I recently upgraded my HBA card because that was basically ancient. Past that I've added pools, changed shares/exports... I don't feel like I've been janitoring for the last 7 years or so. I've definitely spent multiples of more time on "simple" things like a plex VM.

That's kinda what I'm getting at. Getting Plex working on my Unraid server was as simple as:
1. Make sure I've got the Community Applications plugin installed
2. Search for "Plex" in CA
3. Download the one from linuxserver.io
4. Point it at my shares
5. Done
|
# ? Jan 4, 2022 01:10 |
|
Also, as someone who is paid big bucks to do docker/kubernetes at my day job, jails are massively simpler to janitor than docker when dealing with services on a single host.
|
# ? Jan 4, 2022 01:10 |
|
Buff Hardback posted:That's kinda what I'm getting at. Getting Plex working on my Unraid server was as simple as:

This is more steps than TrueNAS.
|
# ? Jan 4, 2022 01:18 |
|
freeasinbeer posted:Also as someone who is payed big bucks to to docker/kubernetes at my day job, jails are massively simpler to janitor then docker when dealing with services on a single host.

I'm also paid to docker/kubernetes at my day job (and am currently trying to clean up metadata damage on a Ceph filesystem!) and I disagree. Give me a Docker image any day. My raspberry pi just runs docker swarm with a single stack deployed to it. Nice and easy. So I guess this, like many other things, comes down to personal preference.
|
# ? Jan 4, 2022 01:24 |
|
Buff Hardback posted:That's kinda what I'm getting at. Getting Plex working on my Unraid server was as simple as:

I'm not sure if you're just being willfully ignorant here, but you described how to install it, skipping the part where you've already installed the OS. The TrueNAS install is actually easier than that. The rest of the things I described need to be done for basic maintenance (updates), repairs (replacing failed drives, components that are aging out) and if your needs change (adding pools, configuring shares/exports). Does Unraid do all of those things for you somehow?

I have no horse in this race. I just happened to land on FreeNAS years ago. I might have chosen something different now, who knows. But what you are posting is just blatantly disingenuous and an obviously slanted misrepresentation for some reason I've not been able to determine.

Motronic fucked around with this message at 02:04 on Jan 4, 2022 |
# ? Jan 4, 2022 02:02 |
|
Motronic posted:I'm not sure if you're just being willfully ignorant here, but you described how to install it, skipping the part where you've already installed the OS. The TrueNAS install is actually easier than that.

Am I probably an idiot who hasn't kept up on the specifics of TrueNAS and ZFS expansion? Yes. Do I prefer the ability to easily expand a JBOD-style array by adding single drives at a time, and not have to worry about matching vdevs and following best practices for best ZFS performance? Absolutely yes. Do I have a hate against TrueNAS or an ulterior motive like you're getting at? Absolutely not. For my use case and preferences when it comes to configuration, I prefer Unraid over TrueNAS.
|
# ? Jan 4, 2022 02:29 |
|
Buff Hardback posted:Am I probably an idiot and haven't kept up on the specifics of TrueNAS and ZFS expansion? yes

Nothing I posted that you have responded to has anything to do with ZFS expansion/pool expansion/expansion of any type. You seem to be really hung up on that as an option, and that's fine if that's what you personally need. But the things you're talking about beyond that are just way the hell off base and objectively wrong. Why?

How about responding to this part: "The rest of the things I described need to be done for basic maintenance (updates), repairs (replacing failed drives, components that are aging out) and if your needs change (adding pools, configuring shares/exports). Does Unraid do all of those things for you somehow?"
|
# ? Jan 4, 2022 02:33 |
|
BlankSystemDaemon posted:What happens if a bitflip causes data of an existing file in memory that's dirty (ie. modified) to change, and it's then written to disk?

Considering that so far all buses have FEC or other error-correction mechanisms with retransmit, that all buffers and caches in the system have some form of ECC (with the exception of main memory), and that I haven't had the sort of drive failures that'd make sectors unreadable (due to failing error correction, which also exists on disk sectors), I'm fairly convinced it came from transiting RAM (write) buffers during one of the several copy operations.

tl;dr: I insist on ECC in both my NAS and my main workstation.
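To see why one flipped bit in a RAM buffer is worth worrying about, here's a toy demonstration (illustrative only): a single-bit flip in a compressed stream makes the whole file unrecoverable, while the same flip in plain text damages exactly one byte.

```python
import zlib

payload = b"Linux ISO bits " * 100
stream = bytearray(zlib.compress(payload))

# Flip one bit in the stream's trailing Adler-32 checksum
stream[-1] ^= 0x01

try:
    zlib.decompress(bytes(stream))
    rejected = False
except zlib.error:
    rejected = True
assert rejected  # one flipped bit, and the whole file is refused

# The same flip in the uncompressed text damages exactly one byte
plain = bytearray(payload)
plain[-1] ^= 0x01
assert sum(a != b for a, b in zip(payload, plain)) == 1
```

Most media containers sit somewhere in between: a flip typically costs you a macroblock or a smear rather than the whole file.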
|
# ? Jan 4, 2022 02:33 |
|
Motronic posted:Nothing I posted that you have responded to has anything to do with ZFS expansion/pool expansion/expansion of any type. You seem to be really hung up on that as an option, and that's fine if that's what you personally need. But the things you're talking about beyond that are just way the hell off base and objectively wrong.

Replacing drives/maintenance isn't really what I meant by janitoring. There really aren't multiple pools in Unraid the way TrueNAS handles them, so I think we might be talking past each other at this point?
|
# ? Jan 4, 2022 02:38 |
|
I would also say that it sounds like a lot of us adopted unRAID back in like 2016/17, and I don't remember things like TrueNAS or Proxmox seeming nearly as user friendly then, but I may just be misremembering. Looking at both TrueNAS and Proxmox now, it does seem like they're all very similar in UI/UX, so I wouldn't say that's as compelling a reason as it used to be.

I also think that the terminology makes a difference, at least for people like me who were just getting into trying a home server/NAS from Windows. I remember reading stuff about creating multiple pools, jails, vdevs, raidz1-3, and feeling overwhelmed. It also didn't help that I'd never even heard of FreeBSD before; I at least had an idea of what Linux was. UnRAID seemed very straightforward with "put all my disks in the one array, create a share, get apps from the CA store." I didn't even have to mess with Docker Compose for containers, just fill in template fields. Again, I'm not saying that it's not that simple in other options, but for me at least, back then it made more sense conceptually than the TrueNAS stuff did.

The biggest draw for me though was still just the fact that I could throw any drives of any size together in the one array; I didn't need to group drives of the same size together. Since I was building this out of spare parts, I didn't have any drives that were the same size at the time, so that was my first priority over stuff like data integrity or redundancy.

Scruff McGruff fucked around with this message at 13:55 on Jan 4, 2022 |
# ? Jan 4, 2022 03:03 |
|
I think I might just build a completely different setup using TrueNAS - does it support adding additional drives of different sizes? Get a few 18TBs or whatever is good and shuckable. It looks interesting!
|
# ? Jan 4, 2022 03:20 |
|
Scruff McGruff posted:I also think that the terminology makes a difference, at least for people like me that was just getting into trying a home server/NAS from Windows. I remember reading stuff about creating multiple pools, jails, vdevs, raidz1-3, and feeling overwhelmed.

Now THIS is a fair critique. I'm coming from the background of having done this poo poo with real big boy expensive (read: often overpriced) equipment for years/decades before setting up Free/TrueNAS. So my basis for "this doesn't sound too bad" is different. I simply still can't agree about the "janitoring" part though. Once it's set up, it has done what it said on the tin.
|
# ? Jan 4, 2022 03:20 |
|
Motronic posted:Now THIS is a fair critique. I'm coming from the background of having done this poo poo with real big boy expensive (read: often overpriced) equipment for years/decades before setting up Free/TrueNAS. So my basis for "this doesn't sound too bad" is different.

That's definitely fair, and I probably overemphasized the janitoring difference. I hadn't really done any enterprise NAS/SAN stuff prior to setting up Unraid, and at this point I'm comfortable with its quirks and know its limitations. Lots of different ways to store your poo poo, and there really isn't a wrong way (as long as it isn't a bunch of 2.5 inch portable drives running over USB).
|
# ? Jan 4, 2022 03:24 |
|
Buff Hardback posted:Lots of different ways to store your poo poo, and there really isn’t a wrong way (as long as it isn’t a bunch of 2.5 inch portable drives running over USB)

Oh man, was there an update from that dude? Craziest setup ITT by a mile
|
# ? Jan 4, 2022 03:26 |
|
Enos Cabell posted:Oh man, was there an update from that dude? Craziest setup ITT by a mile

See I was just pulling something out of my butt to lighten the mood, I didn’t mean to actually be referencing someone
|
# ? Jan 4, 2022 03:30 |
|
Buff Hardback posted:See I was just pulling something out of my butt to lighten the mood, I didn’t mean to actually be referencing someone

Haha yeah, someone came in asking for advice on some crazy setup with like a dozen external drives and daisy-chained USB hubs. Was a while back, I'll see if I can find it.
|
# ? Jan 4, 2022 03:33 |
|
ZFS question: I had a drive go degraded and eventually faulted in a 4x10TB RAIDZ1 pool. I ordered a replacement, and before it arrived, two of the other drives went degraded. I swapped the faulted drive out and ran the replace command, and resilvering kept restarting and never finished. The drives kept going back to online and then degraded as the resilvering was progressing. I ordered a second replacement, but also stuck the first bad disk into another computer and ran SeaTools or whatever Seagate's drive tool thing is, and it passed the long test. Since several drives are having issues, I then guessed it might be the SATA card and ordered a new one. Once the card arrives, what's my first step? Just plug them back in and see if resilvering finishes/a scrub succeeds? A future step is to migrate to RAIDZ2 or 3.
|
# ? Jan 4, 2022 03:44 |
|
This is kinda what happened to my old 3TB drives. I haven't found a good way to get a detailed reason for why the resilvering keeps restarting, but all my data was readable, so I just copied everything over to a new array since the old drives were all 7 years old. I think what happened in my case was that the drive firmware was so slow to respond while trying to deal with read errors that the lsisas driver timed out the drive and reset the SATA links, causing ZFS to see it as a drive getting detached and attached. And then it loses track of the array status and restarts the scrub from the beginning. I don't think ZFS is smart enough to immediately force a write with parity data to an unreadable block; since ZFS is copy-on-write and likes to always write sequentially to empty space on the drives, I have a feeling the unreadable block is left alone instead, and that's why my drives had such high "pending remap" SMART counters. I've now installed 12x newer 8TB drives on the same hardware that this was happening on, and they have no issues scrubbing. If your drives are very old, I'd recommend a full hardware refresh, and try doing an rsync of the files between the old and new ZFS pools to make sure you don't end up losing data.
|
# ? Jan 4, 2022 09:08 |
Combat Pretzel posted:That poo poo happens occasionally. In the past, in the pre-Netflix times, I was pretty anal about video quality for archiving purposes, so if I happened to come across decoding errors, the material was "resourced". Over the long long years, for reasons, things have been copied back and forth lots of times, and rewatching the stuff in recent times, I've found files with occasional decoding errors (bad macroblock/smear).

Very few other files have this kind of built-in resistance, and some files (binary disk images and executable binaries, some image formats, and others) are the complete opposite, where even a single bitflip means you might as well throw the file at /dev/null. The original IBM PC spec was written with the assumption that the main memory would be ECC memory, because that's the memory you found in the mainframes IBM had been making for decades. I'm not sure you can exactly blame the lack of ECC memory being the default on consumer gear on the people making PC clones, since they cut a lot of corners in a lot of places (and the lower price is perhaps the biggest contributor to computers being as ubiquitous as they are now) - but as I and others have pointed out in this thread before, it's a provable fact that ECC memory helps with data integrity for dirty data and also immensely affects system stability; and considering how cheap ECC memory is now compared to any other time in history, there are a lot more reasons for consumers to use it now than there ever were.

Scruff McGruff posted:I would also say that it sounds like a lot of us adopted unRAID back in like 2016/17 and I don't remember things like TrueNAS or Proxmox seeming nearly as user friendly then, but I may just be misremembering. Looking at both TrueNAS and Proxmox now it does seem like they're all very similar in UI/UX, so I wouldn't say that's as compelling a reason as it used to be.
When FreeNAS was started by ocochard@ (one of the FreeBSD committers, who's still active - although nowadays he's more interested in networking and maintains the BSDRP project), it was the only appliance NAS software (OpenMediaVault, using Debian, was started by someone who initially worked with ocochard@). Eventually it landed in the hands of iXsystems (which itself has a long history that traces back through what's now Wind River Systems and all the way back to BSDi, who were the ones that got sued by AT&T), and it has changed considerably since then, becoming a much more visually polished product. Importantly though, FreeNAS didn't originally support ZFS, because ZFS support in FreeBSD didn't land until version 7 in February 2008 - so it was a lot closer to what UnRAID is nowadays. XigmaNAS (which started as a fork of FreeNAS called NAS4Free) still has all of the old features including, if memory serves, the webui to do things like gconcat and graid4 - meaning that it can function like a SPAN array (ie. where if you lose any one disk, you only lose what's on that disk), or it can function much like UnRAID does (although with the caveat that UnRAID does raid4 at the file level while XigmaNAS does it at the block level).

Enos Cabell posted:Haha yeah, someone came in asking for advice on some crazy setup with like a dozen external drives and daisy chained usb hubs. Was a while back, I'll see if I can find it.

Erwin posted:ZFS question: I had a drive go degraded and eventually faulted in a 4x10tb raidz1 pool. I ordered a replacement, and before it arrived, two of the other drives went degraded. I swapped the faulted drive out and ran the replace command, and resilvering kept restarting and never finished. The drives kept going back to online and then degraded as the resilvering was progressing.

It's also possible that r u ready to WALK is right and that it has to do with TLER configuration (or the lack thereof).
Self-tests (like anything from S.M.A.R.T.) are about as reliable as the firmware is (ie. not very, unless you chart specific raw attribute values - and even then it's guesswork based on bathtub curves). I don't really see what you can do other than plug it in - you seem right on the edge of catastrophic data loss as-is, so I really hope you have backups.

r u ready to WALK posted:This is kinda what happened to my old 3TB drives, I haven't found a good way to get a detailed reason for why the resilvering keeps restarting but all my data was readable so I just copied everything over to a new array since the old drives were all 7 years old.
|
|
# ? Jan 4, 2022 14:38 |
|
BlankSystemDaemon posted:The things that I know can cause ZFS to restart a resilvering process is if an URE is encountered, or if a transient error occurs (ie. controller/bad cable, bitflips in the storage stack and issues of that nature) - both of which would be reflected by the READ, WRITE and CHECKSUM columns.

BlankSystemDaemon posted:I don't really see what you can do other than plug it in - you seem right on the edge of catastrophic dataloss as-is, so I really hope you have backups.

Yes, the important stuff is backed up, so I'm not worried about data loss other than saving some restore time. I hadn't backed up my Plex library because I still have the physical media, but now that I'm thinking about re-ripping it all, I'm regretting not backing it up. Luckily I was able to copy 99% of it off to other storage and will back it up moving forward. The other stuff on the zpool was backups from my desktop and laptop, which were backed up from the zpool to S3.
|
# ? Jan 4, 2022 15:16 |
|
Buff Hardback posted:Lots of different ways to store your poo poo, and there really isn’t a wrong way (as long as it isn’t a bunch of 2.5 inch portable drives running over USB)

This is my current setup, and it works great for my light duty needs. Probably this year I'll move to a single enclosure, but I have proven that a redundant array of inexpensive disks actually does work.
|
# ? Jan 4, 2022 15:49 |
|
BlankSystemDaemon posted:An 8TB parity drive (or even two of them, since it's mirrored parity and not RAID6/P+Q, as has been previously established ITT)

https://wiki.unraid.net/Parity

unraid posted:two redundancy disks: ‘P’, which is the ordinary XOR parity, and ‘Q’, which is a Reed-Solomon code. This allows unRAID to recover from any 2 disk errors, with minimal impact on performance

Mirroring P would be loving useless, so that's not done
|
# ? Jan 4, 2022 16:02 |
|
The value of reliability for a system, even for Linux ISOs, is non-zero, because one's free time has value, as does that of anyone who depends on the system for any reason at all. I also get some professional value out of running ZFS, given I do a bunch of computer touching at some fair scale and we use ZFS here and there. It seems odd to spend literally over a thousand dollars on drives and not be bothered to spend maybe another couple hundred to make the system significantly more reliable, but that's just me and my cost-benefit sensitivity. I'm generally a cheap bastard but do weigh my risks carefully, which is why I back up anything that's difficult or impossible to recreate on my system and don't really care otherwise. My bigger problem with reliability in practice is properly organizing my 2 decades of digital crap and trying to figure out my own ADHD-addled past self's organizational scheme (or lack thereof). After all, if you can't find your data, it's as good as gone.
|
# ? Jan 4, 2022 20:09 |
|
When my Synology dies, my goal is to replace it with a TrueNAS server and have the TrueNAS be my "canonical" storage system; the existing Unraid will be a backup target, host Plex, a minecraft server, etc. ZFS is a little more intimidating to the newbie, it seems to me.
|
# ? Jan 4, 2022 23:17 |
|
|
|
Smashing Link posted:When my Synology dies my goal is to replace it with a TrueNAS server and have the TrueNAS be my "canonical" storage system and existing Unraid will be a backup target, host Plex, minecraft server, etc. ZFS is a little more intimidating to the newbie it seems to me.

I think it's important to point out that hosting all those applications in jails on the TrueNAS will be far easier and safer, IMO, and - a presumption - the newer hardware will be far faster. There's even a community-supported Minecraft plugin that makes its own jail for you: https://www.reddit.com/r/truenas/comments/n2lw33/setting_up_mineos_on_truenas_with_best_practices/
|
# ? Jan 5, 2022 15:05 |