|
BlankSystemDaemon posted:Striped mirrors are a way of increasing the IOPS in a RAID array, because spinning rust has a physical upper limit on how many IOPS it is capable of providing - but beyond that, striped mirrors also have a failure mode that striped data with distributed parity doesn't, which is that it can lose data if two specific disks die, whereas raid6/raidz2 will at least let you replace one of the failed drives without faulting the array, unless a URE occurs while there's no redundancy left. Thank you for the fantastic write-up on this! I legitimately love learning about stuff like this. That said, I'm still looking for a recommendation for an ethernet or USB-attached RAID enclosure that will play nice with a windows server running Linux VMs. Baseline storage would be 10TB. Does this make sense, or should I just build a PC, slap in a decent RAID card, and manage it myself? That was my initial instinct, but I want to make sure I'm not wasting time doing stuff the old way. The PC would otherwise be consumer-grade components. LASER BEAM DREAM fucked around with this message at 15:04 on Aug 16, 2022 |
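The IOPS argument above can be sketched with some back-of-the-envelope numbers (the ~100 random IOPS per spinning disk is a generic ballpark, not a measured figure):

```python
# Back-of-the-envelope IOPS model for striped mirrors vs raidz.
# ~100 random IOPS per spinning disk is a common ballpark, not a spec.
DISK_IOPS = 100

def striped_mirror_iops(pairs):
    # Reads can be satisfied by either side of each mirror, so they scale
    # with total disk count; writes hit both disks of a pair, so they
    # only scale with the number of pairs.
    disks = pairs * 2
    return {"read": disks * DISK_IOPS, "write": pairs * DISK_IOPS}

def raidz_iops(vdevs):
    # A raidz vdev delivers roughly one disk's worth of random IOPS
    # regardless of how wide it is, so only the vdev count scales.
    return {"read": vdevs * DISK_IOPS, "write": vdevs * DISK_IOPS}

print(striped_mirror_iops(3))  # six disks as three mirrored pairs
print(raidz_iops(1))           # the same six disks as one raidz2 vdev
```

Same six disks, very different random-I/O ceilings - which is the whole reason striped mirrors exist despite the failure mode described above.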
# ? Aug 16, 2022 14:58 |
|
|
# ? May 15, 2024 08:14 |
|
Adolf Glitter posted:Maybe, but that's also what is shown on stuff that's been out of stock for months/years. ASrock rack motherboard. I guess the RAM I'm looking at is on amazon as well, but I'm not sure other places that have it that aren't terribly expensive. digikey has the RAM but its 70 bux more
|
# ? Aug 16, 2022 17:09 |
|
LASER BEAM DREAM posted:Thank you for the fantastic write-up on this! I legitimately love learning about stuff like this.
|
# ? Aug 16, 2022 19:03 |
|
Korean Boomhauer posted:ASrock rack motherboard. I guess the RAM I'm looking at is on amazon as well, but I'm not sure other places that have it that aren't terribly expensive. digikey has the RAM but its 70 bux more Prices on ram seem to vary particularly wildly. I was looking at non-registered ecc ddr4 a while back and scan (I think) was something like £140, ebuyer £110, and I eventually got it off ebay for £90 (roughly - I don't recall the exact amounts, but it was that sort of range). The ebay seller was a refurbisher and it had a decent warranty and returns policy. Ram is always pretty volatile though, and regional pricing is a big thing. poo poo, the uk prices are now all so much higher than the us. I loving hate this place Fingers crossed for you :-)
|
# ? Aug 16, 2022 22:24 |
Computer viking posted:Sure, but I would have expected the problems to be "it's hard to get full speed over most cabling" or "it uses too much power", not "the hardware, firmware and drivers all seems to have been made by the less competent interns". LASER BEAM DREAM posted:Thank you for the fantastic write-up on this! I legitimately love learning about stuff like this. First, let's cover something: DAS = Direct Attached Storage, NAS = Network Attached Storage - and I think you want the first, so I'd recommend looking for a USB3.2 Gen2 JBOD DAS - something like this but with fewer disks? The reason you want 3.2 Gen2 is that it ensures you're getting the full 10Gbps link, and that it uses 128b/132b encoding (instead of the 8b/10b encoding Gen1 uses, which gives 20% overhead). Ideally it'll also do USB Attached SCSI (UASP), instead of the Bulk-Only Transport that's the default for most USB storage. DASes absolutely make sense if you don't have multiple computers accessing the same data.
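The line-code overhead works out like this (a quick sketch using the nominal 5 and 10 Gbps link rates, and ignoring protocol framing, which eats more on top):

```python
# Payload bandwidth left over after line-code overhead. Real-world
# throughput is lower still because of protocol framing on top of this.
def effective_gbps(link_gbps, data_bits, total_bits):
    return link_gbps * data_bits / total_bits

gen1 = effective_gbps(5, 8, 10)      # 8b/10b: 20% of the link is overhead
gen2 = effective_gbps(10, 128, 132)  # 128b/132b: roughly 3% overhead
print(f"Gen1: {gen1:.2f} Gbps, Gen2: {gen2:.2f} Gbps")
```

So Gen2 isn't just "twice the link rate"; proportionally more of that link is actually usable for data.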
|
|
# ? Aug 16, 2022 23:39 |
|
BlankSystemDaemon posted:It's bad because 2.5G will happily drop down to 1000BaseT or even 100BaseTX at the slightest provocation, and sometimes even seemingly without any reason. Wow, that's pretty pricy! I'm thinking of just going with the old-school onboard SATA route in a Fractal Design R5 for easy access. Since my original post I've been googling around the Home Server reddit, and for people running windows I've seen discussion of "File and Storage Services". Essentially you JBOD the disks and let storage services create virtual disks for you. Does anyone have any experience with it, or is that a bad path to pursue? I like Linux and will be using this PC to host VMs of it, but I don't want to troubleshoot the inevitable issues on the bare-metal machine when I'm only semi-competent in the OS. LASER BEAM DREAM fucked around with this message at 01:39 on Aug 17, 2022 |
# ? Aug 17, 2022 01:31 |
|
LASER BEAM DREAM posted:In between my original post I've been googling around the Home Server reddit, and for people running windows I've seen discussion of "File and Storage Services". Essentially you JBOD the disks and let storage services create virtual disks for you. Windows Storage Spaces has a kinda unsettling track record of MS update fuckups, but if your only alternative is Linux then ZFS-on-Linux is almost certainly worse. Btrfs can do mirrors fine but is still experimental on parity. BSD is the preferred OS for NAS servers. If you're planning to stay on Windows 10 for a while you're probably fine, since 10 is now in minimal-update maintenance mode. If you're gonna use 11, I would set a long delay for feature updates.
|
# ? Aug 17, 2022 03:41 |
|
Korean Boomhauer posted:ASrock rack motherboard. I guess the RAM I'm looking at is on amazon as well, but I'm not sure other places that have it that aren't terribly expensive. digikey has the RAM but its 70 bux more i timed my complaining on this well, because the motherboard cropped up on newegg with a back in stock date of next week yesssss
|
# ? Aug 17, 2022 04:21 |
Klyith posted:Windows Storage Spaces has a kinda unsettling track record of MS update fuckups but if your one alternative is Linux then ZFS-on-Linux is almost certainly worse. Btrfs can do mirrors fine but is still experimental on parity. BSD is the preferred OS for NAS servers. The OpenZFS repo contains mostly system-independent code, plus some bits of system-dependent code for FreeBSD and Linux respectively. The plan is that eventually, support for Windows, macOS, NetBSD and even Illumos will also be added. I still think FreeBSD is the better option, because Linux is still not at a point where the tooling is as integrated; you still can't easily do boot environments on Linux, and there are still the device-by-id gotchas because of how Linux handles floppy support (ie. it won't go away until floppy support does, and only once someone fixes it after that) - but them using the same codebase means there's less divergence in features, which is nice (specifically, it means that Linux can now use TRIM, which only FreeBSD could before).
|
|
# ? Aug 17, 2022 09:37 |
|
BlankSystemDaemon posted:I still think FreeBSD is the better option, because Linux is still not at a point where the tooling is as integrated; you still can't easily do boot environments on Linux, You have the ZFSBootMenu project now that supports boot environments for Linux based systems. It's pretty cool and easy to get set up (for a nerd), although you need to jump through some pretty big hoops to not have to enter your decryption passphrase twice during boot.
|
# ? Aug 17, 2022 10:04 |
Keito posted:You have the ZFSBootMenu project now that supports boot environments for Linux based systems. It's pretty cool and easy to get set up (for a nerd), although you need to jump through some pretty big hoops to not have to enter your decryption passphrase twice during boot. The fundamental problem is that most Linux bootloaders appear to think that proper filesystem support is too hard, so they require a copy of the kernel and other ancillary files on the boot disk to be loaded into memory. If FreeBSD has been doing it for decades, there's no reason anyone else can't do it other than not-invented-here syndrome. Also, I just noticed that I'm the one who touched that document last. Perhaps I should look into updating it for UEFI?
|
|
# ? Aug 17, 2022 13:45 |
|
Worst thing is, grub2 has support for reading a whole bunch of file systems and is (as far as I can tell without trying) designed to make it easy to plug in more. They just like their convoluted initrd designs over in linux land, I guess.
|
# ? Aug 17, 2022 14:54 |
|
I don’t understand. I’ve been doing proxmox with zfs boot on Linux for the past 5 years. What’s lacking?
|
# ? Aug 17, 2022 22:02 |
|
Hughlander posted:I don’t understand. I’ve been doing proxmox with zfs boot on Linux for the past 5 years. What’s lacking? Nothing, if it works it works. It's just not a given on the larger linux ecosystem - IIRC, Fedora routinely breaks ZFS if you install their recommended kernel upgrades. Also, a more zfs-first OS may have some neat extra tools. The boot environments mentioned are basically the opportunity to make clones of the boot drive before upgrades (or indeed at any point you want), and boot from any of them or roll back to them at will. It's possible to make work on linux, it's not the end of the world to not have it ... but it is neat.
|
# ? Aug 17, 2022 22:35 |
|
LASER BEAM DREAM posted:Wow, that's pretty pricy! I'm thinking of just going with the old-school onboard SATA route in a Fractal Design R5 for easy access. I ran a storage spaces setup for about 6 years and never had an issue. I eventually moved away from it in the last year or so primarily because the whole hardware setup was getting old and I just moved to unraid instead on a new system.
|
# ? Aug 18, 2022 06:22 |
|
Last I heard ZFS expansion is still aiming for a "Q3 2022" release. Should I reasonably expect it to be available by end of year or would that be hopelessly naive?
|
# ? Aug 18, 2022 08:07 |
A Bag of Milk posted:Last I heard ZFS expansion is still aiming for a "Q3 2022" release. Should I reasonably expect it to be available by end of year or would that be hopelessly naive? I imagine we'll know more after the OpenZFS developer summit coming up.
|
|
# ? Aug 18, 2022 09:31 |
|
Q3 was mentioned in this blog post. https://freebsdfoundation.org/blog/raid-z-expansion-feature-for-zfs/ (I don't know if integration is the same as release though) YerDa Zabam fucked around with this message at 11:15 on Aug 18, 2022 |
# ? Aug 18, 2022 11:10 |
|
Thank you for the previous help, thread. I've got my little Synology 218play up and running and feeding the house various bits of content I've had sat around on external drives for far too many years (I have far too much music, holy poo poo). Picked up a Nvidia Shield TV Pro and am very impressed with that too, the combination of NAS and the Shield is pretty much the solution I've been after for years but didn't quite realise it.
|
# ? Aug 18, 2022 14:39 |
|
BlankSystemDaemon posted:You can follow the progress of it here, but I've never heard of Q3 2022 as a specific timeline - only thing can remember reading in the OpenZFS leadership meeting agenda is that it's meant for OpenZFS 3.0. Nice, this is the only thing keeping me on unraid and not just running a Proxmox host with virtualized storage.
|
# ? Aug 18, 2022 14:45 |
|
BlankSystemDaemon posted:You can follow the progress of it here, but I've never heard of Q3 2022 as a specific timeline - only thing can remember reading in the OpenZFS leadership meeting agenda is that it's meant for OpenZFS 3.0. OK, thanks for the info. Hopefully the last week of October will offer something more concrete. My poor raidz2 pool is at 82% and I hate looking at that little caution symbol
|
# ? Aug 18, 2022 20:10 |
A Bag of Milk posted:OK, thanks for the info. Hopefully the last week of October will offer something more concrete. My poor raidz2 pool is at 82% and I hate looking at that little caution symbol The one thing ZFS can't do by itself, which needs admin intervention, is recover from hitting 100% capacity; because deletes are copy-on-write operations too, a completely full pool can't write the transaction group it needs in order to delete anything. There are ways to fix it, but they're non-trivial. I don't know why ZFS doesn't default to N% reservation on the pool dataset like UFS does (it has 8% reserved that only the superuser can write to by default), but it probably should?
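What a UFS-style reserve would cost you in visible capacity is easy to sketch (percentage and pool size are just illustrative):

```python
# UFS-style reservation: hold back N% of the pool so a delete (which is
# itself a copy-on-write operation) always has room to commit.
def usable_bytes(pool_bytes, reserve_pct=8):
    reserve = pool_bytes * reserve_pct // 100
    return pool_bytes - reserve

pool = 10 * 10**12  # a 10 TB pool
print(usable_bytes(pool))  # 9_200_000_000_000 with the 8% reserve held back
```

In practice you can approximate the same safety net yourself by putting a reservation on a dataset you never write to - something like `zfs create -o reservation=500G tank/headroom` (dataset name and size are just examples) - and shrinking or destroying it if you ever paint yourself into the full-pool corner.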
|
|
# ? Aug 18, 2022 22:03 |
|
If you use striped mirrors you can expand your pool easily and you can even use different size drives as long as each mirrored pair is matched. It's very convenient
|
# ? Aug 18, 2022 22:16 |
VostokProgram posted:If you use striped mirrors you can expand your pool easily and you can even use different size drives as long as each mirrored pair is matched. It's very convenient With large enough numbers of mirrored vdevs, you end up having a lower MTTDL than a single disk. Even if you buy a mix of vendors and don't use disks with serial numbers too close together, there's still a point at which striped mirrors don't make sense anymore. BlankSystemDaemon fucked around with this message at 22:33 on Aug 18, 2022
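The "two specific disks" part of that can be sketched like so. Note this only models which disks die given that exactly two fail at once; the overall failure rate also climbs as you add disks, which is what drags MTTDL down:

```python
from math import comb

# Given exactly two simultaneous disk failures in a pool of n two-way
# mirrors (2n disks total), the pool is lost only if both failures land
# in the same mirror: n losing pairs out of C(2n, 2) possible pairs.
def p_pool_loss(pairs):
    return pairs / comb(2 * pairs, 2)  # simplifies to 1 / (2*pairs - 1)

for pairs in (2, 4, 8):
    print(pairs, p_pool_loss(pairs))
```

The per-incident odds shrink as you add vdevs, but the number of incidents grows with the disk count - hence the crossover point where striped mirrors stop making sense.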
|
# ? Aug 18, 2022 22:28 |
|
BlankSystemDaemon posted:It also means if two disks in a mirror fail, you lose your entire pool. You should have backups!
|
# ? Aug 18, 2022 22:34 |
VostokProgram posted:You should have backups!
|
|
# ? Aug 18, 2022 22:36 |
|
BlankSystemDaemon posted:ZFS has done a lot to negate the things that caused people to conclude that 80% ~= 100% capacity - I've had several pools reach 100% capacity and recover just fine when I started deleting files. This is all great info. I suppose I can treat 90% as my new temporary 100% and not really worry about it. The difference between 80% and 90% is 5TB for me, a non-trivial amount that I can't imagine I'll fill before expansion drops. VostokProgram posted:If you use striped mirrors you can expand your pool easily and you can even use different size drives as long as each mirrored pair is matched. It's very convenient I've preferred 6 drive raidz2 because it's far more economical and provides plenty of redundancy. There are (temporary?) tradeoffs in terms of flexibility as we can see here, sure. But if I can expand a 6 drive raidz2 pool to 7 drives, that's still quite safe in terms of total pool size, and then I get all the space of the 7th drive for the cost of just one drive, with full redundancy. My goal is lots of space for cheap, and I'd never get there with mirroring.
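The economics being described are simple to put in numbers (raw capacity only, ignoring ZFS metadata and slop overhead):

```python
# Usable capacity from equal-size disks, raw numbers only.
def raidz2_usable(drives, size_tb):
    return (drives - 2) * size_tb  # two drives' worth go to parity

def mirror_usable(drives, size_tb):
    return (drives // 2) * size_tb  # half the raw capacity

print(raidz2_usable(6, 10), mirror_usable(6, 10))  # 40 vs 30 from six 10TB disks
print(raidz2_usable(7, 10))  # 50 after a hypothetical one-disk expansion
```

With six 10TB disks that's a full extra drive's worth of space over mirrors, and every disk added to the raidz2 vdev after expansion lands is pure capacity.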
|
# ? Aug 18, 2022 22:57 |
|
Maneki Neko posted:I ran a storage spaces setup for about 6 years and never had an issue. I eventually moved away from it in the last year or so primarily because the whole hardware setup was getting old and I just moved to unraid instead on a new system. Thanks for the endorsement! I'll post a trip report in the thread once it's up and going!
|
# ? Aug 19, 2022 00:24 |
A Bag of Milk posted:This is all great info. I suppose I can treat 90% as my new temporary 100% and not really worry about it. The difference between 80% and 90% is 5TB for me, a non-trivial amount that I can't imagine I'll fill before expansion drops. LASER BEAM DREAM posted:Thanks for the endorsement! I'll post a trip report in the thread once it's up and going! For ZFS, that occurred with Jeff Bonwick and Matt Ahrens using it for their /home after about a year of development and Sun moving their entire business to it in 2004-2005, respectively. I'm not sure Microsoft uses either storage spaces or ReFS. BlankSystemDaemon fucked around with this message at 01:22 on Aug 19, 2022 |
|
# ? Aug 19, 2022 01:20 |
|
BlankSystemDaemon posted:If you want endorsements, the gold standard of filesystems is when the creators start using them and when the company moves their entire business to it. My dude I know you love ZFS and all, but there are good ways to show your love without badmouthing other systems. Especially when you're saying things that make you look real stupid with a 30 second google. Yes, MS uses ReFS and Storage Spaces
|
# ? Aug 19, 2022 01:34 |
|
I'm still using mdraid + XFS and it just works with no drama...
|
# ? Aug 19, 2022 01:49 |
Klyith posted:My dude I know you love ZFS and all, but there are good ways to show your love without badmouthing other systems. There's also a slight difference in what I was talking about with ZFS and Sun and how they used it; they moved their entire business to using it internally, and nothing about the article suggests Microsoft does that with storage spaces and ReFS - though if they do, that's awesome. What I was getting at is more the situation with BTRFS and Facebook; Facebook will readily tell you how they use it for their load-dependent scale-out servers (ie. spin up more servers when there's a spike in demand) - but what they don't tell you is that those systems are entirely transient and are assumed to be volatile. The actual storage solution they use has changed from a combination of HDFS and Hadoop that they used for many many years to something called Tectonic, which near as anyone can tell is a proprietary thing Facebook seems to have no interest in opening up.
|
|
# ? Aug 19, 2022 10:00 |
|
Storage Spaces is something that weirds me out. It feels like a knee-jerk reaction to ZFS to me, one that happened around the time the latter became famous. Last I remember, all the clustering and Storage Spaces Direct stuff, which came way later, sits in separate drivers on top of the initial SS stuff. Might as well be independent. I agree with the ReFS sentiment. If it was worth anything, they'd roll it out to consumer machines. At least way back, that was considered an option. Nowadays, you're hard pressed to find any info besides what was released back during Windows 8 times.
|
# ? Aug 19, 2022 10:32 |
|
My impression of Microsoft and consumer file systems is that they want to treat Windows PCs like fat clients, with the primary storage being Onedrive, or a NAS for business desktops. If local disk is only really for impersonal data (software which can be downloaded again) and checked out copies from cloud/NAS, then there's less incentive to develop a fancier new file system.
|
# ? Aug 19, 2022 12:14 |
|
BlankSystemDaemon posted:There's also a slight difference in what I was talking about with ZFS and Sun and how they used it; they moved their entire business to using it internally, and nothing about the article suggests Microsoft does that with storage spaces and ReFS - though if they do, that's awesome. Ok yeah, that's true. MS doesn't think ReFS is a replacement for NTFS, because for applications where you need speed NTFS is faster. But in the same way, did Sun really use ZFS for their entire business way back when? Nothing but ZFS on anything with storage? I super doubt it. ZFS and high-performance databases didn't mix for a long time and still needs careful setup, tuning, and tons of caching. Also, Sun went out of business. So it's nice that they were extremely confident in the new FS they made, but maybe using it for everything wasn't actually the correct decision? BlankSystemDaemon posted:What I was getting at is more the situation with BTRFS and Facebook; Facebook will readily tell you how they use it for their load-dependent scale-out servers (ie. spin up more servers when there's a spike in demand) - but what they don't tell you is that those systems are entirely transient and are assumed to be volatile. On that level, I have no idea what MS is doing themselves to provide the storage back-end for Azure, I'd assume it is also all proprietary and custom. But when you're talking about hyperscale cloud storage, I don't think anybody is looking to add redundancy or error-correction at the base filesystem level. Oracle Exascale doesn't run on ZFS, it uses XFS. I think the idea that one FS can be good for everything is self-evidently bogus. ZFS is great at a bunch of stuff, and for single-machine NAS like we talk about ITT it's the best! But the fact that a company that runs a giant platform at incomprehensible scale doesn't use btrfs or ReFS is no knock against those FSes. They don't run a mega-cloud on ZFS either. It's a fallacious argument.
|
# ? Aug 19, 2022 14:10 |
Klyith posted:Ok yeah, that's true. MS doesn't think ReFS is a replacement for NTFS, because for applications where you need speed NTFS is faster. Sun didn't go out of business, they got acquired by Oracle - but that has little to do with them using ZFS internally, and more to do with the foundations of their business disappearing under them, with big iron going the way of the dodo and x86 roundly beating anything they could put out on the CPU side. From what I've been told by several people who worked there (including Ahrens), they absolutely were using it internally for everything requiring storage - but I don't know about the details enough to tell you whether postgres was being hosted on something else. I've heard a few war stories shared over drinks from people who worked at Microsoft, but nothing that bears repeating here since it's probably out-of-date and is at any rate hearsay.
|
|
# ? Aug 19, 2022 14:31 |
|
Sun was headed out of business but I have to agree that it was not due to problems with their technology, it was that their tech was expensive and Linux was eating their lunch. Trying to compete by also making Solaris free was a decision that would have made a big difference five years earlier, but by the time they tried it, it was too little too late.
|
# ? Aug 19, 2022 16:38 |
Zorak of Michigan posted:Sun was headed out of business but I have to agree that it was not due to problems with their technology, it was that their tech was expensive and Linux was eating their lunch. Trying to compete by also making Solaris free was a decision that would have made a big difference five years earlier, but by the time they tried it, it was too little too late. Getting Pandora to come out of her box is not as easy as all that, and it's a marvel of evil that Larry Ellison managed to put her back in there once she got free.
|
|
# ? Aug 19, 2022 17:29 |
|
BlankSystemDaemon posted:The ironic part is that the first time they tried to opensource Solaris was back in the 90s, but they couldn't because there was a shitload of drivers in Solaris written by second-party companies or subcontractors who they couldn't source release forms from. nit: Pandora is the one who opens the box, she isn't in it
|
# ? Aug 19, 2022 18:03 |
|
|
VostokProgram posted:nit: Pandora is the one who opens the box, she isn't in it Let Pandora be free to do her thing!
|
|
# ? Aug 19, 2022 18:31 |