|
H110Hawk posted:I feel like you might be better suited by 2-3 devices rather than one all-in-one. Build a NAS/virtualization box (what's GNS?) which you build once a decade. Build a gamer computer which you rebuild when it no longer plays your video games, but which also has zero non-ephemeral stuff on it. Right now my desktop is basically just applications, Steam, and my NAS mounted as a network drive. A buddy of mine is giving me his spare (old) Dell PowerEdge 110; we'll see how well it suits my needs. I mean, it's not like I absolutely need to have a desktop that can double as a space heater? Thank you, you've helped me sort of hash out what I actually want this machine to do. E: GNS3 is like Packet Tracer but beefier I guess? Home lab CCNA poo poo. Also part of my lovely Masters Schadenboner fucked around with this message at 20:24 on Jul 9, 2019
# ? Jul 9, 2019 19:56 |
|
|
Schadenboner posted:I should probably take these to the PC thread, the original question (can I use Reds in a desktop RAID) isn't really operative anymore. I've been thinking that if I had the money for it, a hexacore system with SMT, 64GB memory, 8x 6TB disks, and L2ARC + 2x SLOG SSDs would be a nice base for FreeBSD, which would get 2 cores+SMT, with bhyve running Windows on a ZFS volume (a dataset that acts like a disk) using the remaining 4 cores+SMT, which could then be used for vidya gayms. BlankSystemDaemon fucked around with this message at 20:44 on Jul 9, 2019
|
# ? Jul 9, 2019 20:38 |
|
Crossposting my NAS for sale: https://forums.somethingawful.com/showthread.php?threadid=3893490 CloFan posted:Buffalo LinkStation 4-bay NAS. Model LS441D (https://www.buffalo-technology.com/productpage/linkstation-ls441d/) I have 4x 4TB drives to go with it:
|
# ? Jul 10, 2019 01:30 |
|
e: oops
|
# ? Jul 10, 2019 02:08 |
|
Hmm, I just remembered I started upgrading the capacity of the drives in one of my pools many months ago but then got sidetracked, so I've got a pool with two 3TB drives and four 8TB drives. I've been burning warranty and lifetime on those 8TBs all this time while only using 3TB of their capacity! That's a real problem with upgrading the capacity of a ZFS pool... it can take well over a week!
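For anyone following along, the drive-by-drive capacity upgrade being described goes roughly like this. A hedged sketch: pool and device names are made up, and `ZPOOL` defaults to a dry run that just prints the commands; point it at the real binary (`ZPOOL=zpool`) to actually run them.

```shell
#!/bin/sh
# Swap-one-drive-at-a-time capacity upgrade for a ZFS vdev (sketch).
# Dry run by default so nothing touches a real pool.
ZPOOL="${ZPOOL:-echo zpool}"

upgrade_one() {  # usage: upgrade_one <pool> <old-dev> <new-dev>
  $ZPOOL set autoexpand=on "$1"  # let the vdev grow once every member is bigger
  $ZPOOL replace "$1" "$2" "$3"  # resilver onto the new, larger disk
  $ZPOOL status "$1"             # watch this; wait for the resilver before the next swap
}

upgrade_one tank sda sdb
```

The capacity only becomes available after the last (smallest) drive in the vdev has been replaced, which is why half-finished upgrades like the one above waste the big drives' space.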
|
# ? Jul 10, 2019 16:28 |
Thermopyle posted:Hmm, I just remembered I started upgrading the capacity of the drives in one of my pools many months ago but then got sidetracked, so I've got a pool with two 3TB drives and four 8TB drives. I've been burning warranty and lifetime on those 8TB all this time while only using 3TB of their capacity!
|
|
# ? Jul 10, 2019 18:02 |
|
There are some tuning steps you can take that seemed to help me the last time I did a big drive exchange. A little bash script I keep handy:
With that said, I haven't done any drive swaps since moving to ZoL 0.8. Given the dramatic decrease in scrub times, I would hope it would go a bit quicker.
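The script itself didn't survive the quoting, but the usual ZoL 0.7-era knobs a script like that pokes look something like this. Parameter names are from memory and the values are illustrative, so check what actually exists under /sys/module/zfs/parameters on your box; the function takes the directory as an argument so it can be dry-run against a scratch dir.

```shell
#!/bin/sh
# Hypothetical resilver-tuning sketch for ZFS on Linux (0.7-era parameter names).
tune_resilver() {
  p="$1"  # normally /sys/module/zfs/parameters
  echo 0    > "$p/zfs_resilver_delay"       # stop throttling resilver I/O
  echo 0    > "$p/zfs_scrub_delay"          # ...and scrub I/O
  echo 5000 > "$p/zfs_resilver_min_time_ms" # spend more of each txg on resilver work
}

# Only touch the real sysfs tree if the zfs module is actually loaded.
if [ -d /sys/module/zfs/parameters ]; then
  tune_resilver /sys/module/zfs/parameters
fi
```

Note the trade-off discussed below: these make the resilver greedier at the expense of interactive responsiveness, so they're meant to be set back afterwards.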
|
# ? Jul 10, 2019 18:27 |
|
D. Ebdrup posted:Sure, it can take quite a while, but it shouldn't take that long (about the same time as a non-sequential scrub, really). What OS are you doing this on? Because I could've sworn something went into FreeBSD that should improve this step, but if you're not using that then I'm not sure it's worth looking for since it wouldn't benefit you anyway. It's on Ubuntu. IIRC it was something like 40 hours per drive. I'll swap another of the drives today and see for sure how long it takes.
|
# ? Jul 10, 2019 18:36 |
Ah, fair enough. Well, the tuning tips that IOwnCalculus posted ought to be alright, with one exception: be careful with disabling prefetch, as that's one of the things that'll decrease system responsiveness quite a lot (it's even mentioned in the ZFS Evil Tuning Guide). You can mitigate it somewhat by adjusting the scheduler to preempt threads at a lower priority (I don't know how on Linux, but I'd be shocked if it doesn't have full preemption, so it's just a question of finding it). The best idea might be to toggle it temporarily, then set it back once you've got everything fully resilvered.
|
|
# ? Jul 10, 2019 19:03 |
|
Yeah, making those changes via echo means they don't persist past a reboot. You'd need to edit some config files (which ones, I have now forgotten) to make any of them permanent.
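For the record, the persistent form on most ZoL setups is a module-options line in a modprobe.d file; the path and syntax below are the common convention, but verify against your distro (and `man zfs-module-parameters` on ZoL 0.8) before relying on it.

```
# /etc/modprobe.d/zfs.conf -- read when the zfs kernel module is loaded at boot
options zfs zfs_resilver_min_time_ms=5000
```

Anything echoed into /sys/module/zfs/parameters at runtime has the same effect but, as said above, is gone after a reboot.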
|
# ? Jul 10, 2019 21:48 |
IOwnCalculus posted:Yeah, making those changes via echo means they don't persist past a reboot. You'd need to edit some config files (which ones, I have now forgotten) to make any of them permanent. It was more to warn about disabling prefetch and interactivity, because I've had to do it on a server that, while it did end up completing the job I gave it, took over 3 hours to log into via a physical console because the kernel was too busy doing other things at higher priority. At that point I gave up on top -q (which nices top to -20, meaning there's a better chance of getting to kill the process in question), as that was when I remembered it was a ZFS kernel thread, so it might not be the best idea to try to kill it on a production server. Mind you, after it completed, it sprang right back to operating as intended, and after I tweaked the preemption threshold for the kernel (which can be done at runtime) and ran the next command in the sequence, which would've had the same effect, it worked without a hitch.
|
|
# ? Jul 10, 2019 23:28 |
|
Even with those settings I honestly haven't seen that much degradation on my server - it's still responsive to the console and usually serving up some poo poo on Plex at the same time.
|
# ? Jul 10, 2019 23:50 |
|
I really wanted to try to build a Ceph-based NAS, but I don't think it's going to work too well if I'm forced to put all my Docker containers on CephFS versus having a native Docker-supported volume backend; anybody have any experience with this? I'm currently running a 4-disk ZFS NAS on CentOS, but every single time I went to upgrade, it broke, and building and rebuilding the kernel modules has been a huge PITA.
|
# ? Jul 11, 2019 00:56 |
|
ILikeVoltron posted:I really wanted to try to build a Ceph-based NAS, but I don't think it's going to work too well if I'm forced to put all my Docker containers on CephFS versus having a native Docker-supported volume backend; anybody have any experience with this? I'm currently running a 4-disk ZFS NAS on CentOS, but every single time I went to upgrade, it broke, and building and rebuilding the kernel modules has been a huge PITA. If you're not completely in love with CentOS you could pick something that's got better ZFS support: export the pools and import them into Ubuntu Server or something. I get why you'd want to get your feet wet with Ceph (and it's pretty cool), but man, what you just described sounds like pretty much the poster child of complicated for the sake of being complicated... unless you wanna be a Ceph admin or something.
|
# ? Jul 11, 2019 02:48 |
|
originalnickname posted:If you're not completely in love with CentOS you could pick something that's got better ZFS support, export the pools and import into ubuntu server or something. I do this for a living, so I don't mind tinkering with it for fun. I've managed several Ceph, OpenStack, and OpenShift deployments in the past. I don't fear running Ceph at all, and even if it fails catastrophically, that's OK too. As far as running Ubuntu or something, I figure that's what I'll most likely end up doing. I think the other big reason it's such a pain is attempting to do root-on-ZFS, so I might buy an M.2 SSD for my box too.
|
# ? Jul 11, 2019 03:18 |
|
I grabbed a MediaSonic ProRAID enclosure and eight 6TB HDDs. My plan is to use this as a Time Machine backup, but here's the problem: Time Machine allows you to exclude volumes from the backup, but this applies to all Time Machine volumes; you can't tell it to back up volumes A and B to one TM disk and volumes B and C to another. Part two of the problem is that (as far as I know) the ProRAID functions as two four-bay RAID5 enclosures. Is this something I can circumvent? Can the ProRAID format all eight drives as one volume with a different parity level, or is there something I can do within OSX to treat them as one volume? Thanks again.
|
# ? Jul 11, 2019 04:54 |
|
Hopefully going to get some good deals on external drives for shucking during the Amazon Prime sales. Other than the WD 8/10TB Elements, are there any other good options to keep a lookout for?
|
# ? Jul 14, 2019 10:42 |
|
Baconroll posted:Hopefully going to get some good deals on external drives for shucking during the Amazon Prime sales. Other than the WD 8/10TB Elements, are there any other good options to keep a lookout for? I was just noticing that the 8TB Seagate drives were pretty cheap right now; thinking I'll pick up a couple when Prime Day starts.
|
# ? Jul 14, 2019 22:14 |
|
Mister Speaker posted:I grabbed a MediaSonic ProRAID enclosure and eight 6TB HDDs. My plan is to use this as a Time Machine backup, but here's the problem: Time Machine allows you to exclude volumes from the backup, but this applies to all Time Machine volumes; you can't tell it to back up volumes A and B to one TM disk and volumes B and C to another. Part two of the problem is that (as far as I know) the ProRAID functions as two four-bay RAID5 enclosures. Is this something I can circumvent? Can the ProRAID format all eight drives as one volume with a different parity level, or is there something I can do within OSX to treat them as one volume? Thanks again. OSX has built-in software RAID 0 and 1, and a built-in LVM that can do spanning. None of this is something you'd want with that RAID enclosure: RAID 0 or spanning two RAID 5s together is just a bad idea. If you don't mind paying $180 for third-party driver software, and the enclosure can be put into JBOD mode, you can buy SoftRAID and use it to make a RAID 5 of all the disks. I'm afraid that MediaSonic is cheap junk. If you can still return it, I'd do that and get something else. Something like a Synology DS1817 or DS1819 is a lot more expensive (more than 2x) but is an incredibly better idea. From your requirements I'm guessing that you're trying to back up a large quantity of data which matters to you; otherwise you wouldn't have spent easily $1000 on disks. This is not the place to pinch pennies.
|
# ? Jul 15, 2019 02:38 |
|
Baconroll posted:Hopefully going to get some good deals on external drives for shucking during the Amazon Prime sales. Other than the WD 8/10TB Elements, are there any other good options to keep a lookout for? Just remember that the sale prices have been ~$130 for 8TB and ~$160 for 10TB. Anything better than that should be a new low price. The WD drives in their externals are generally Red or equivalent (i.e. higher-quality NAS drives). Not sure what Seagate is using at higher capacities (could be consumer or NAS drives).
|
# ? Jul 15, 2019 06:46 |
|
Man, drives got cheap. Just picked up two WD 10TB (probably Reds? will shuck) for $340 out the door. I paid that much for two 6TB a year and a half ago. In a Synology 4-drive unit, if I have 5TB of data on 2x 6TB and am adding 2x 10TB, what's the best option: set up one disk as a hot spare and let the others handle data duplication? Synology's drive calculator seems to suggest that SHR1 is a better use of the disks than SHR2 https://www.synology.com/en-us/support/RAID_calculator Also, once I finalize my new folder structure, gonna back up everything to S3 Glacier. I think if I did my math right it's only about $3/mo to back up 4TB
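That $3/mo figure is in the right ballpark if he means Glacier Deep Archive; assuming its launch price of roughly $0.00099 per GB-month (an assumption worth checking against current AWS pricing), the back-of-envelope math is:

```shell
# 4 TB ~= 4000 GB (decimal) at an assumed $0.00099/GB-month
awk 'BEGIN { printf "4 TB at $0.00099/GB-month: $%.2f/month\n", 4000 * 0.00099 }'
```

Regular Glacier tiers cost several times that, so the quoted number really only works for the deep-archive tier, and retrieval fees are extra either way.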
|
# ? Jul 15, 2019 11:47 |
Hadlock posted:Man, drives got cheap. Just picked up two WD 10TB (probably reds? will shuck) for $340 out the door. I paid that much for two 6TB a year and a half ago.
|
|
# ? Jul 15, 2019 15:33 |
|
So I see the Synology ds218j on sale for Prime Day. Would I regret buying it if I want to share videos between networked computers and to stream videos off of it to be played with Kodi on a Fire TV stick? Security cameras might be the only foreseeable additional functionality I might use, but I'm still on the fence about that.
|
# ? Jul 15, 2019 15:43 |
|
D. Ebdrup posted:SHR1 is more efficient because less space is used for Reed-Solomon codes. You should be aware, though, that RAID5 (the traditional RAID level that SHR1 is equivalent to) has been a bad idea since They upped the specs for this very issue. But RAID 5 is really never a replacement for a backup. Here is the same guy writing about that very same issue in 2016, on why RAID 5 usually still works: https://www.zdnet.com/article/why-raid-5-still-works-usually/ Still, and this bears repeating over and over: copy all data from the array before replacing the failed drive
|
# ? Jul 15, 2019 15:59 |
Axe-man posted:They upped the specs for this very issue. But RAID 5 is really never a replacement for a backup. To make matters worse, drives have grown more than 10x during that time, so even if the URE rate had improved by the same factor, we'd still be at the exact same failure rate when rebuilding big arrays. Enterprise drives have always had better ratings (even in 2007 or 2009), but those aren't the drives typically bought by consumers, not even the "prosumers" in this thread. In a similar vein, enterprise drives are typically rated at 1 or more drive writes per day, instead of a number of TB per year, which is what consumer drives are rated for (and usually only one order of magnitude more than their capacity). EDIT: And again, since it's not a checksumming filesystem, it can't know which files are broken, so it'll usually just declare the entire array kaput and let the consumer cry about it because they thought RAID was backup. BlankSystemDaemon fucked around with this message at 16:31 on Jul 15, 2019
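The rebuild-failure odds are easy to put numbers on. As an illustration (array size and drive count made up): for a RAID5 of four 8TB consumer drives at the usual consumer spec of 1 URE per 10^14 bits, a rebuild reads the three surviving drives end to end:

```shell
# P(at least one URE) = 1 - (1 - rate)^bits_read
# 3 surviving drives x 8e12 bytes x 8 bits/byte = 1.92e14 bits read
awk 'BEGIN {
  bits = 3 * 8e12 * 8
  p = 1 - (1 - 1e-14)^bits
  printf "P(URE during rebuild) = %.2f\n", p
}'
```

That comes out well over 50%, which is the whole argument for double parity (or at least for copying everything off before replacing a failed drive). Real drives often beat their rated URE spec, so treat this as a worst-case bound rather than a prediction.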
|
# ? Jul 15, 2019 16:20 |
|
The 8TB Easystores are down to $130.
|
# ? Jul 15, 2019 16:37 |
|
D. Ebdrup posted:WD Red Pros are still measured as a number of UREs per bits read, except now it's not 1 per 10^14 bits, it's "under 10" per 10^15 according to their own specs. Yeah, that is always a problem. I went RAID 6 personally, with tons of backups, for this reason. Every day I get some small business who lost everything because they thought RAID was a backup. RAID is a hardware convenience and should be seen as a method to make a failing disk cost only a few hours instead of a complete restore from backup
|
# ? Jul 15, 2019 17:23 |
Axe-man posted:Yeah, that is always a problem. I went RAID 6 personally, with tons of backups, for this reason. Every day I get some small business who lost everything because they thought RAID was a backup. RAID, in the traditional RAS tripod that holds up mainframes, has little to do with reliability, mostly to do with availability, and some to do with serviceability.
|
|
# ? Jul 15, 2019 17:43 |
|
There's a good lightning deal on Prime for UK people: 10TB WD My Book for £146.99, limit of 1 per person sadly. https://www.amazon.co.uk/Western-Digital-Password-Protection-Software/dp/B07CRZK9BX Looks like this is another shuckable WD White. I've ordered one so will hopefully be cracking open the case later this week.
|
# ? Jul 15, 2019 17:51 |
|
D. Ebdrup posted:WD Red Pros are still measured as a number of UREs per bits read, except now it's not 1 per 10^14 bits, it's "under 10" per 10^15 according to their own specs. Do we classify Ultrastars as "prosumer-plus" or are they proper enterprise?
|
# ? Jul 15, 2019 19:13 |
Schadenboner posted:Do we classify Ultrastars as "Prosumer-plus" or are they proper enterprise?
|
|
# ? Jul 15, 2019 21:17 |
|
So, I currently have a Synology RS815 with 4x 3TB Reds in RAID5 that has served me well for the last four years (important files backed up to a local USB drive nightly and to Backblaze B2). I just started playing around with Plex and ripping all my optical media in order to put everything physical into storage, and I think my ~9TB of storage is going to be sadly inadequate. In light of the discussions around RAID5, what is my best option weighing price vs. reliability? I could push to RAID6 with an RX418 expander and get bigger drives simultaneously, or I could just roll the dice and pick up some 8TBs and shove them into my current NAS one at a time. The discussion previously led me to believe that even though my DVD/BD rips aren't exactly critical data, a rebuild of the array could put my actual critical data at risk from a single failure during the rebuild. Am I better off expanding and going to RAID6 to help tolerate that, or just relying on my backups?
|
# ? Jul 16, 2019 05:16 |
|
Anecdata but I’ve done 4 rebuilds on RAIDZ2 arrays of size 8 (2 parity, 6 data) and have had one failure during a rebuild... because a cat jumped on the array during the rebuild.
|
# ? Jul 16, 2019 05:19 |
|
necrobobsledder posted:Anecdata but I’ve done 4 rebuilds on RAIDZ2 arrays of size 8 (2 parity, 6 data) and have had one failure during a rebuild... because a cat jumped on the array during the rebuild. New infosec vulnerability: CATATTACK
|
# ? Jul 16, 2019 05:22 |
|
D. Ebdrup posted:SHR1 is more efficient because less space is used for Reed-Solomon codes. You should be aware, though, that RAID5 (the traditional RAID level that SHR1 is equivalent to) has been a bad idea since 2 things: 1) is SHR1 considered RAID 5 for the purposes of your discussion? 2) what's the point of SHR2 if SHR1 is better? Edit: I'm also backing up to AWS Glacier as my off-site Hadlock fucked around with this message at 08:27 on Jul 16, 2019
# ? Jul 16, 2019 08:19 |
Hadlock posted:2 things SHR1 isn't better, it just provides more efficiency, which is all that calculator cares about. You really ought to use both this and this, as that'll give you a Mean-Time-To-Data-Loss estimation range. BlankSystemDaemon fucked around with this message at 10:02 on Jul 16, 2019
|
# ? Jul 16, 2019 09:50 |
|
E: Disregard this, I found a picture of Synology’s replacement fans, it looks like it’s 3-pin. This might be a bit of a niche question but: Are the 92mm fans in the Synology 4 and 5-Bay DSes (918+/1019+): 1. User replaceable (I see spares for sale on Synology so I’m thinking yes)? 2. 3-pin or 4-pin (PWM)? Schadenboner fucked around with this message at 13:15 on Jul 20, 2019 |
# ? Jul 20, 2019 13:12 |
CRU (customer-replaceable unit) means no skill with a screwdriver is required. FRU (field-replaceable unit) means skill with a screwdriver is required.
|
|
# ? Jul 20, 2019 13:37 |
|
Well, I've ignored it as long as is feasible. My wife's Mac is full from her "Photos" library. We want to go to a consolidated library of photos on the Synology. What sort of software are people using to accomplish this with an iPhone + Mac? Is there a sane way to "export" the Photos library onto the NAS short of just opening it and dragging the "Masters" folder out? I guess the overall question is: is there a way to maintain a gallery-like view of the pictures, à la Photos, preferably cross-platform, and hopefully one which will sort files into folders for us but not freak out if I rename things? H110Hawk fucked around with this message at 00:48 on Jul 21, 2019
# ? Jul 21, 2019 00:46 |
|
|
|
H110Hawk posted:Well I've ignored it as long as is feasible. My wife's mac is full from her "Photos" library. We want to go to a consolidated library of photos on the synology. What sort of software are people using to accomplish this using an iphone+mac? Is there a sane way to "export" the photos library onto the NAS short of just opening it and dragging the "Masters" folder out? Not so sure about the Mac side, but I have an app called 'PhotoSync' on my iPhone. It's set up to automatically copy all my photos over to the NAS whenever I reconnect to my home network. After copying I also get a prompt to delete all the photos that got copied. But I'm not so sure about having it automatically create galleries, as I wanted to do that manually, so I never looked into it.
|
# ? Jul 22, 2019 22:34 |