|
When using separate (automated) compute and media storage, what's the best way to deal with storage unavailability? I guess it depends on use case, so for example: my HP N40L (running linux) has been going great for years doing everything and I'm still reasonably happy with it, but it's starting to chug a bit when nzbget is downloading/unpacking/writing to disk while sonarr/radarr/deluge are also doing their things, and I'm streaming SMB data to kodi over the network. I figure I can relieve some of that by keeping the N40L as just a storage server, and putting everything else on a compute box. All those apps I've mentioned are in containers and will be trivial to move, as long as I update the volumes they use. But they'll point back to the old server for media.

Currently, if the server goes down for whatever reason, the containers go down too, so there's no worry, nothing is trying to access the data. But if I split things up I'll need to consider that issue. I can get the compute box to connect to the storage via SMB (since I already have that for Kodi and Windows machines) or NFS (if that addresses the problem I'm raising). I can either mount the network share on the compute host so the container volumes access it locally (the least work), or mount SMB/NFS individually as docker volumes for each app (slightly more work), or use SMB/NFS media sources from within each app (the most work).

That last option would be the most reconfiguring, but I'm wondering if it fixes the issue I'm worried about. If sonarr/radarr/deluge are aware that the media sources are network shares, will they handle it better when those become unavailable? I'm worried they'll delete their libraries or redownload stuff if a host-machine mount appears empty, while I (probably foolhardily) assume they'll be more chill if they know the unavailable data is over the network.
I could just move nzbget since it's the biggest drain, carefully handling transfer after download, and keep the others on the N40L. But I guess this is a solved problem and I've just not thought about it the right way. gabensraum fucked around with this message at 05:23 on Jan 24, 2022 |
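For illustration, the middle option (per-app docker volumes) can be sketched roughly like this - the hostname, export path, and image here are made-up placeholders, not an actual setup:

```yaml
# Hypothetical compose fragment: mount an NFS export from the storage box as a
# named docker volume. "n40l.local" and ":/export/media" are example values.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - media:/media

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=n40l.local"
      device: ":/export/media"
```

One behavior worth knowing for the unavailability worry: an NFS mount with default (hard) semantics blocks I/O until the server comes back, so apps tend to stall rather than see an empty directory, whereas a host path that failed to mount and is then bind-mounted into a container can genuinely look like an empty folder to the app.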
# ? Jan 24, 2022 03:34 |
|
|
|
https://slickdeals.net/f/15584659-18tb-wd-element-desktop-hard-drive-300
|
# ? Jan 24, 2022 15:37 |
|
Dammit if I hadn't just bought a 14tb...
|
# ? Jan 24, 2022 15:46 |
|
Not sure if I want to try my luck with those when I grabbed two of their Ultrastar DC drives for $49 more each with more warranty.
|
# ? Jan 24, 2022 16:39 |
|
gabensraum posted:When using separate (automated) compute and media storage, how is best to deal with storage unavailability? I've had my N40L as just storage with freenas for years and do the usenet stuff on virtual machines with another host. I think I set it up in 2012 or 2013. There's really no right answer since you can set it up however you want. I just found mounting samba shares was the easiest and it works, so I didn't bother getting more complicated. Generally if the NAS is down it will just fail to move after decompression, but it usually picks right back up when it's back. Usually if the NAS is down then everything's down anyway, due to a power outage. I redid my NAS a couple of years ago to use 8TB disks and I just set up the same share names and logins, so I haven't had to mess with the VMs. I should, because sickbeard is barely finding anything at this point.
|
# ? Jan 24, 2022 18:28 |
|
Re: HDD enclosure chat, this 8-bay NAS by Orico looks nice and compact and has an actual HDMI out port! Looks like you can even run TrueNAS on it. Would be perfect for my next post-Synology box, but unfortunately it seems to be out of stock everywhere except Walmart, which has it for $900. I am also looking at this OWC enclosure, which looks really cool but is way too expensive. Looking on ebay, I see affordable rack servers that are way too big, and a lot of 10+ year old cheap plastic enclosures that are just a lot less appealing for my small home lab.
|
# ? Jan 24, 2022 18:42 |
|
I forgot about ASRock's crazy accessories https://www.asrockrack.com/general/products.asp#Accessories. Check out that 1U OCulink riser... would go nicely with EPYC platforms if you wanted to fan out to even MORE drives (until your chassis gets full). Wiki has a decent breakdown of all the various SFF-xxxx connectors, is there another table / page with details of both ends of the connectors + places to buy them? I want to be sure I'm buying the right poo poo.
|
# ? Jan 24, 2022 19:03 |
|
movax posted:I forgot about ASRock's crazy accessories https://www.asrockrack.com/general/products.asp#Accessories. This is neat, and I'm curious what the ICs are. I'm guessing the SOIC is an I2C fanout or possibly PERST#, and the QFN is a clock fanout. In a previous life we had similar risers, but they didn't have the clock/I2C/reset on each OCuLink, only the first one. Check out Serial Cables for OCuLink pinouts: https://www.serialcables.com/product-category/pcie4-oculink-cables/ They should have schematics for all of them. There are a lot of different options so you'd have to pick the one that works for you, but they all have PDFs of the pinouts.
|
# ? Jan 24, 2022 19:20 |
|
priznat posted:This is neat, and I'm curious what the ICs are. I'm guessing the SOIC is a i2c fanout or possibly PERST#, and the QFN is a clock fanout. I didn't end up getting that one board w/ OCulink connectors (I think) -- https://www.asrockrack.com/general/productdetail.asp?Model=ROMED6U-2L2T#Specifications what are the 'SLIMx' connectors on this board, if you can tell? The Node 804 (stock) has mounting points for 4 2.5" drives, but 2.5" solid state drives / U.2 drives are essentially happy on whatever the gently caress you mount them to, so I can keep filling it up, and as I do, breaking out all the PCIe lanes to connectors for drives is of interest to me (of course). Really wish they made a non-window side panel for the 804. You could even bolt drives to that if you were crazy enough; lots of surface area, and 2.5" drives don't care.
|
# ? Jan 24, 2022 19:40 |
|
movax posted:I didn't end up getting that one board w/ OCulink connectors (I think) -- https://www.asrockrack.com/general/productdetail.asp?Model=ROMED6U-2L2T#Specifications what are the 'SLIMx' connectors on this board, if you can tell? The Node 804 (stock) has mounting points for 4 2.5" drives, but for 2.5" solid state drives / U.2 drives, they essentially are happy on whatever the gently caress you mount them too so I can keep filling it up, and as I do, breaking out all the PCIe lanes to connectors for drives is of interest to me (of course). Looks like Serialcables has slimSAS too https://www.serialcables.com/product-category/pcie4-slimsas-cables/, I'm not familiar with those because we had gone with oculink thinking it would be where the connectors were going (lol). And then Gen5 just became MCIO instead, whatever!! RIP OcuLink you were sort of good but also a huge lie because there are zero optical transceiver versions (or were anyway) Looks like the slimSAS are next to the miniSAS HD plugs, one right angle on the edge and 2 vertical just below the DIMM slots.
|
# ? Jan 24, 2022 19:47 |
|
necrobobsledder posted:Not sure if I want to try my luck with those when I grabbed two of their Ultrastar DC drives for $49 more each with more warranty. Do you really get to use the warranty if you've shucked the drive? Assuming nobody in this thread is really using an unshucked external drive.
|
# ? Jan 24, 2022 19:56 |
|
priznat posted:Looks like Serialcables has slimSAS too https://www.serialcables.com/product-category/pcie4-slimsas-cables/, I'm not familiar with those because we had gone with oculink thinking it would be where the connectors were going (lol). And then Gen5 just became MCIO instead, whatever!! Yeah -- nothing optical at all! Taking a look at the Serialcables page... if I understand correctly, ASRock has 8 lanes going to each slimSAS connector. Assuming that those also support 2 x4 bifurcation, feels like I need something to go from that to 2x U.2, each with the full 4 lanes. Seems like the cables on the page only do x4...
|
# ? Jan 24, 2022 19:59 |
|
movax posted:Yeah -- nothing optical at all! This one looks likely: https://www.serialcables.com/product/gen4-slimsas-x8-sff-8654-to-2-pcie-drive-receptacle-sff-8639-cable-for-u-2-1x4-only/ (the descriptions are tough to parse, I go by the pictures usually ) So it's going from the x8 PCIe to 2 x4 U.2 connections. They also have U.3 on there too. Looks like each connector also plugs into the 15 pin SATA power for power, so connect those from the PSU and you're golden.
|
# ? Jan 24, 2022 20:13 |
|
VelociBacon posted:Do you really get to use the warranty if you've shucked the drive? Assuming nobody in this thread is really using an unshucked external drive. I actually have one, but it's because it's backup storage attached to a tiny SFF PC that just runs my Ubiquiti and pihole. The drive is bigger than the rest of the computer. As far as warranties, one of the few teeth the FTC has left is enforcing a clause that 'opening the device' no longer voids anything. I still keep my enclosures around just in case, though I've been lucky enough to not have a single failure of a shucked drive. Whenever I do have one die, I'll put it back in its original enclosure and ship it that way.
|
# ? Jan 24, 2022 20:27 |
|
Yeah as far as returning them goes, either send them bare, or send them in the original enclosure. If you send them in a different enclosure they get upset and allege you’re trying to defraud them
|
# ? Jan 24, 2022 21:17 |
|
Rexxed posted:I've had my N40L as just storage with freenas for years and do the usenet stuff on virtual machines with another host. I think I set it up in 2012 or 2013. There's really no right answer since you can set it up however you want. I just found mounting samba shares was the easiest and it works so I didn't bother getting more complicated. Generally if the NAS is down it will just fail to move after decompression but usually just picks right back up when it's back. Usually if the NAS is down then everything's down though due to a power outage. Yeah sounds like I'm overthinking it, thanks. I'll push on and see what happens.
|
# ? Jan 24, 2022 23:54 |
|
There are probably better setups, but for me, being able to fix issues or recreate it without jumping through any hoops (including remembering the details of what I did last time I looked at it) was the biggest concern.
|
# ? Jan 25, 2022 07:06 |
Rexxed posted:There's probably better set ups but for me being able to fix issues or recreate it without jumping through any hoops (including remembering the details of what I did last time I looked at it) was the biggest concern. Say it with me now: Documentation.
|
|
# ? Jan 25, 2022 11:32 |
|
BlankSystemDaemon posted:Oh, oh. I know this one!
|
# ? Jan 25, 2022 16:31 |
|
crossposting from the yospos security thread: tmfc posted:QNAP devices are getting hit with a new cryptolocker 0day called DEADBOLT. Hopefully no one here has one of these devices exposed to the internet.
|
# ? Jan 25, 2022 23:32 |
|
BlankSystemDaemon posted:Oh, oh. I know this one! Sometimes I can look back at IRC logs to see what I was complaining about to my friends. It's basically the same thing.
|
# ? Jan 25, 2022 23:41 |
|
I finally figured out how to set up unraid + rclone + gdrive enterprise ($12 a month for "unlimited") and boy is it nifty being able to set up scripts to back up my more important files. If I still had 1 Gbps fiber, I'd heavily consider offloading my Plex server onto it.
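In case it helps anyone, the kind of backup script I mean looks something like this - the remote name "gdrive" and the paths are just illustrative placeholders, not a canonical unraid layout:

```shell
#!/bin/sh
# Hedged sketch of an rclone backup script; remote name and paths are
# made-up examples. --backup-dir moves replaced/deleted files aside instead
# of discarding them, so one bad sync doesn't silently destroy the only copy.
rclone sync /mnt/user/documents gdrive:backup/documents \
    --backup-dir "gdrive:backup/old/$(date +%Y-%m-%d)" \
    --log-file /var/log/rclone-backup.log
```

Run it from the user scripts plugin on whatever schedule you like; rclone exits nonzero on failure, so it's easy to bolt a notification onto.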
|
# ? Jan 26, 2022 01:38 |
|
I've run into a moderate hitch in my efforts to offload processing from my NAS: I can't figure out how to take advantage of server-side copy in my containers, so moving files between datasets requires a network round trip. It seems to work fine in Windows Explorer, so I know TrueNAS at least supports it with SMB shares.
|
# ? Jan 26, 2022 02:30 |
Rexxed posted:Sometimes I can look back at IRC logs to see what I was complaining about to my friends. It's basically the same thing.
|
|
# ? Jan 26, 2022 03:13 |
|
Ok so my DS420+ is pretty much completely set up now and I am happy to say that it is both cool and good. Don't know if I would ever be able to go back to living without a NAS now. I took advice from this thread and over the course of about 3-4 months bought two 14tb and one 18tb WD MyBook drives on sale and shucked them. I bought a domain, set up DDNS, and I'm able to access my drive through both SFTP and Synology QuickConnect. I also got Plex working off network seamlessly. I'm actually really surprised that this thing is able to keep up with 4 of my friends streaming 1080p content at once (assuming Plex likes the codecs and you don't have to optimize one of the files).

Right now I have a 2tb drive I had laying around and the 40tb pool of drives listed above. I'm trying to use the 2tb drive as a redundant backup of only "essential files", which for me just means new music downloads and documents. I do not yet have a backup for my video files on the 40tb pool, but it's more important for me to just have the raw space and I'm not that worried about losing that data. I've been looking for a way to "sync" specific folders across pools so only the folders I want "synced" to the 2tb are saved. Is there a way to do this? I've tried using Synology Drive to sync folders between my desktop and the NAS and it works great, I'm just not sure if there's a way to do this across storage pools on the NAS itself, or if I'm missing something.

Also, I have a server in the public cloud that I access primarily via FTP. I used to use CuteFTP on Mac OSX back in the day and it had a feature where it would sync folders between FTP servers. Is there something similar I could use on the Synology NAS? I set up FTP on the NAS and it works fine, I just wasn't sure if there was a way to have it automatically pull files when it sees something new. I hope that makes sense? I'm having a hard time explaining this lol
|
# ? Jan 26, 2022 17:03 |
|
BlankSystemDaemon posted:The best recommendation I can give is to setup a small test dataset to try with - because I can't remember off the top of my head. I figured it out: zfs receive let the incoming snapshot try to mount itself after it got the first of two snapshots from the sending filesystem, which was the snapshot I made to move the pool the last time. Mounting failed because it was trying to use the same mountpoint the old pool was using, and it errored out before it received the second snapshot. I didn't think to compare the snapshot lists on the new and old pools beforehand, and just noticed tonight that the new one didn't have the 'evac' snapshot that I made to move the data to the new one. Seems like the lesson here is to disable automounting before you send|receive a pool on the same machine.
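For anyone searching later, the fix boils down to the -u flag on the receive side - the pool and dataset names here are made up:

```shell
# zfs receive -u leaves the received filesystem unmounted, so it can't fight
# over the mountpoint with the source pool on the same machine.
zfs snapshot -r tank/media@evac
zfs send -R tank/media@evac | zfs receive -u newtank/media

# Then point the copy at its own mountpoint before mounting it:
zfs set mountpoint=/mnt/newmedia newtank/media
```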
|
# ? Jan 27, 2022 03:33 |
|
WD's store has the 18TB Elements for $280: https://www.westerndigital.com/products/external-drives/wd-elements-desktop-usb-3-0-hdd#WDBWLG0180HBK-NESN I saw it on Slickdeals so it may get slammed, make sure the $50 off in cart shows up! https://slickdeals.net/f/15591769-wd-elements-desktop-external-hard-drive-18tb-280-fs?src=frontpage
|
# ? Jan 28, 2022 07:51 |
|
I saw a thing that QNAP devices have a vulnerability that's getting them ransomware'd, so update your system ASAP if you've got one! (and also don't have it directly on the internet, but I doubt anyone here is doing that) https://www.theregister.com/2022/01/27/qnas_ransomware_deadbolt_nas_targeting/
|
# ? Jan 28, 2022 16:59 |
|
Is anyone using TrueNAS Scale? I started on ESXI/FreeNAS, moved to Proxmox for LXC/Docker, and have my eye on TrueNAS Scale for their upcoming k3s clustered NAS. Looking to see if anyone here is using it and what their experiences were.
|
# ? Jan 28, 2022 23:07 |
|
I use it with some manually managed docker containers (don't like the k3s overhead for a single node) and doing NVMe-oF. As long as you don't intend to do anything it doesn't do out of the box, I guess it's fine. Be aware it wipes your manual changes on every update, since it just unpacks an image into a new dataset instead of cloning and upgrading your existing one.
|
# ? Jan 28, 2022 23:38 |
|
Apparently LTT had some problems with their multiple 1.2PB arrays for storing all their footage: 1) No scheduled periodic scrub set up. 2) Apparently no email/push notification about failed drives. 3) No clean shutdown on power failure set up; not entirely clear if they had a UPS at all. https://www.youtube.com/watch?v=Npu7jkJk5nM
|
# ? Jan 29, 2022 19:17 |
|
That has got to be a stunt for clicks right
|
# ? Jan 29, 2022 19:19 |
At what point does what they do go from wilful ignorance to intentional incompetence? There are a few things that lead me to ask this, based on the script they wrote for the video:

Since there are examples of all three types of errors ZFS tracks, and they intermix their meanings, it seems they don't understand that there's a meaning behind the values - i.e. READ and WRITE (an ATA or SCSI read/write command returning an error, usually resulting in very noisy output on the console) versus CKSUM (the disk returned data that it thought was correct, but which ZFS knows isn't correct, because it doesn't match the checksum stored in the parent record). This leads them to conclude that the manufacturers aren't the problem, despite nothing in the data they show suggesting anything of the sort - especially since we know that hard disks lie.

They use software that's clearly not meant for production use, since they picked something that has no notion of running a regular scrub despite that idea being more than a decade old, and then proceeded to never update it.

And speaking of the scrub, they don't understand the difference between scrub and resilver - a scrub is the only way to ensure that every single mirror, striped and distributed parity, as well as ditto block gets checked against its checksum, whereas a resilver is the process that actually tries to recreate the data from the mirror, distributed parity and/or ditto blocks.

They have only a single ZFS Separate Intent Log (known as a slog, listed as a log device) without any mirroring, when this is the device responsible for every single synchronous write the array handles - which means that if it disappears, you can lose data.

They also have two cache drives, which don't do anything besides take up memory that could be used for caching, as SATA SSDs are still many times slower than DRAM.
EDIT: Despite all of this, they think they're competent enough that they build storage solutions for other people! BlankSystemDaemon fucked around with this message at 20:08 on Jan 29, 2022 |
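For reference, the unglamorous version of "run a regular scrub" is a one-liner plus a cron entry - "tank" is an example pool name:

```shell
# Start a scrub, then read back the per-vdev READ/WRITE/CKSUM counters
# discussed above (pool name is a placeholder):
zpool scrub tank
zpool status -v tank

# A typical monthly cron entry on Linux (schedule to taste):
# 0 3 1 * * /sbin/zpool scrub tank
```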
|
# ? Jan 29, 2022 19:22 |
|
Bitrot due to power-loss shutdowns? What is he talking about? Isn't the backwards metadata tree write order, with the uberblock written last, supposed to avoid exactly this? Also, L2ARC, eh. If you have a huge pool with lots of files, caching metadata seems reasonable. I'm currently using an L2ARC for both metadata and ZVOL caching (16KB volblocksize). It's currently using 200MB of RAM for headers representing 65GB of (--edit: compressed) data. Seems an OK trade-off. Combat Pretzel fucked around with this message at 21:16 on Jan 29, 2022 |
# ? Jan 29, 2022 21:14 |
|
I dunno how power loss could be causing that quantity of errors to a storage system that is almost entirely data at rest. This is their dump of archival footage, by their own description they barely touch the thing. A power loss shouldn't cause widespread "bitrot" to old, static data. Unless the LTT server room looks like this, I'd bet on much deeper problems. BlankSystemDaemon posted:EDIT: Despite all of this, they think they're competent enough that they build storage solutions for other people! Wait, what? Is LTT starting up a server business or something?
|
# ? Jan 29, 2022 21:32 |
|
pretty sure they just used a bunch of sponsored drives, they could have been sent a bad batch, or they cooked them with that density and not enough cooling. not scrubbing the data regularly to detect failing drives was probably the real death sentence
|
# ? Jan 29, 2022 21:34 |
|
Klyith posted:Wait, what? Is LTT starting up a server business or something? I think Linus was just referencing some of the collabs they've done where they built storage solutions for other tech tubers.
|
# ? Jan 29, 2022 21:42 |
|
I look forward to the follow up video in 4 years when he repeats that they still haven't hired someone whose sole role is IT and nobody has been checking on that server.
|
# ? Jan 29, 2022 22:27 |
|
LTT is a vapid entertainment channel, don't take what they do so seriously.
|
# ? Jan 30, 2022 00:48 |
|
|
|
Rescue Toaster posted:Apparently LTT had some problems with their multiple 1.2PB arrays for storing all their footage: this got me realizing i really, really need to set up backups myself at home. so i'm looking at buying a 4TB (or the 5TB) western digital elements external HD for backups (solely backups, of 1 or 2 SSDs and 1 hard disk, totaling 4.5TB but with plenty of unused space) https://www.amazon.ca/gp/product/B0713WPGLL/ref=ox_sc_act_title_2?smid=A3DWYIK6Y9EEQB&th=1 and that second SSD because I'm running out of space for games https://www.amazon.ca/dp/B078211KBB/ref=emc_b_5_i?th=1 are there any problems with the above devices/recommendations for windows 10 backup software? ideally something plug and play like Apple's Time Machine would be great, just leave the external HD plugged in and have it back itself up every so often. I'm not opposed to a different model of external HD, even if it's a big chonker that takes a separate power brick or something, I just grabbed the WD elements since the thread seems big on those. e: Klyith posted:Wait, what? Is LTT starting up a server business or something? they've done a bunch of videos where they make a NAS server for iJustine or similar other youtubers. Arivia fucked around with this message at 01:56 on Jan 30, 2022 |
# ? Jan 30, 2022 01:54 |