Arson Daily
Aug 11, 2003

Is there a generic rule of thumb for overhead in your NAS? I've got about 29TB of storage to go, but I also just rediscovered Usenet ISOs, so I'm wondering how far ahead I need to think about upgrading.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Arson Daily posted:

Is there a generic rule of thumb for overhead in your NAS? I've got about 29TB of storage to go, but I also just rediscovered Usenet ISOs, so I'm wondering how far ahead I need to think about upgrading.

Make a graph and extrapolate. Plan on replacing the drives when the warranty is up. Give yourself whatever buffer you're comfortable with to avoid having to replace drives before their EOL. That's it!
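If you want data points to graph, the simplest thing is to log usage on a schedule and eyeball the slope. A minimal sketch, assuming a Linux-based NAS and a mount point of /mnt/tank (both are placeholders, adjust to your setup):

code:
# run weekly from cron, e.g.:  0 3 * * 0 /boot/scripts/log_usage.sh
echo "$(date +%F) $(df --output=used -BG /mnt/tank | tail -1)" >> /var/log/nas-usage.log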

Nulldevice
Jun 17, 2006
Toilet Rascal
I think it depends on the size of the ISOs you're downloading. Smaller ISOs take up less space, but raw ISO rips can eat up space quickly. Depending on the content I tend to use ISOs on the smaller side; 29TB can go a long way with that strategy. If you plan on huge ISOs you may want to consider larger disks or building a new system. In my case I'm using 2x Syno 1522+: one with 10x16TB drives giving me about 84TB of space across two volumes, and a second one for on-site backup with 5x12TB and 5x10TB drives giving me about 50TB. I'm using about 32% of the space on each volume (the majority of each volume is two categories of ISOs); that also includes disk images and templates for Proxmox, plus the backup disks for Proxmox Backup Server (two of them). I also run daily snapshots on each machine, so that adds a little overhead, but not much. I'm a little picky about what I download, which helps keep space free. For best performance I run an 11th-gen i5 NUC with all of the standard software like Plex, Sonarr, etc. I probably won't need to upgrade for a few years at the rate I download ISOs.

So 29TB can go a long way if you're not downloading all the ISOs, and even further if you stick to decently optimized FHD ISOs. If you're going for those fancy 4K remux ISOs, you're not going to last long. I'd say look at drives in the 16TB range, as I believe that's the best value in $ per TB. Last I checked anyway. Whatever you decide on, make sure it's expandable. Even TrueNAS can be expanded by replacing smaller drives with larger drives and making sure auto-expand is enabled on the pool, or you can add another vdev of the same makeup as the first one. Just don't use raidz1/RAID5 on a pool with 16TB drives on any system; too risky with larger drives and longer rebuild times.

As far as thinking ahead goes, that's kind of a tough one to figure out. I think seeing how much space you consume over the next week will determine your growth needs. If you grow by 15TB this week, for example, first examine what you've downloaded and determine if it's worth keeping. If you can weed out some of the cruft you should get a more accurate idea of where you will be in a month to six months. I think you'll do the majority of your downloading in the first two to three months, so we'll say you need 60TB. If your storage is completely full by the end of the week, we're gonna need a bigger boat. Again, this is where you come back and look at whether what you downloaded is really worth keeping. Chances are high that a good portion just isn't worth hanging onto. So we delete the garbage and find a realistic number; we'll say you used 18TB this time. We'll say your expected monthly growth over the next three months is about 15TB a month, so that's 63TB used. To keep usable space you'd have to build for 80-90TB of space. As a point of note, this is all napkin math and assumes you're downloading a poo poo ton of stuff.
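Spelling out that napkin math in shell, just so the numbers are in one place:

code:
# 18TB already used after cleanup, plus ~15TB/month for the next three months
echo "$((18 + 15 * 3)) TB"   # 63 TB used, so build for 80-90TB of usable space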

What kind of storage system do you have?

wolrah
May 8, 2006
what?
FWIW I'd consider myself a pretty heavy user of Usenet, and I have a lot of things in Sonarr and Radarr configured to grab 4K, always preferring REMUX releases, so it's a worst-case scenario for file size. Even so, I've averaged just over half a terabyte a month in 2022 and so far in 2023. 2021 was pretty heavy at almost 16TB, but that was because I got a 4K TV that year, so the *arrs were going back and upgrading a lot of things after I updated their profiles.

Aware
Nov 18, 2003
To be honest I just delete poo poo that doesn't get watched when I run out of space. It's not like anything isn't a few hours away at worst if I did decide to re-watch something. I find I can easily clear up 500GB in a few minutes whenever I run out.

Dyscrasia
Jun 23, 2003
Give Me Hamms Premium Draft or Give Me DEATH!!!!
I just have nzb360 on my phone. I add things as I think of them and delete after watching. Favorites go on the permanent list; otherwise I routinely delete after watching. Series take up much more space, but I still clean up whatever won't be rewatched. I have 5x10TB in raidz2, and I generally use around 12TB including snapshots. Snapshots are my primary growth. Just churn really.

Dyscrasia fucked around with this message at 00:30 on May 7, 2023

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I use Tdarr to automatically transcode the stuff I download to h265. It saves a TON of space.
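The actual command Tdarr ends up running depends on your flow/plugins, but it boils down to something like this ffmpeg call (a sketch only, assuming a recent ffmpeg build and an NVIDIA card for NVENC; the quality settings are placeholders):

code:
# re-encode the video stream to HEVC on the GPU, copy audio and subtitles as-is
ffmpeg -hwaccel cuda -i input.mkv \
  -map 0 -c:v hevc_nvenc -preset p5 -cq 26 \
  -c:a copy -c:s copy \
  output-x265.mkv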

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Nitrousoxide posted:

I use Tdarr to automatically transcode the stuff I download to h265. It saves a TON of space.


It's excellent. Even with the shittiest old Nvidia card it owns

Aware
Nov 18, 2003
Sadly my stuff is split fairly evenly between BT and Usenet, and I like to just leave stuff seeding forever (re-encoding would break the torrents), so I'm never going the Tdarr route.

Hughlander
May 11, 2005

Nitrousoxide posted:

I use Tdarr to automatically transcode the stuff I download to h265. It saves a TON of space.


I need to set it up again. Last time I had it set up across multiple nodes and it just allocated a billion movies on a 3rd node but didn’t do anything

VelociBacon
Dec 8, 2009

Is it a reasonable expectation that every device can play h265 natively these days? I have a decently powerful i5-11600K for the Plex server with hardware acceleration (no discrete GPU), but I don't want to be transcoding all my friends' poo poo because their iPads can't natively stream h265 files.

KKKLIP ART
Sep 3, 2004

So what are the current go-to drives? I've had bad luck shucking drives in terms of longevity.

VelociBacon
Dec 8, 2009

KKKLIP ART posted:

So what are the current go-to drives? I've had bad luck shucking drives in terms of longevity.

For a NAS? I just bought a 12TB WD Red for pretty cheap during the international backup month or whatever in April. I'd go that route personally; WD is just a known quantity and I don't have to worry about it failing.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Nitrousoxide posted:

I use Tdarr to automatically transcode the stuff I download to h265. It saves a TON of space.


Hell yeah, just doing my TV library got me about 11TB back

MTGWolfGirl
Apr 4, 2023
Question regarding QNAP, Docker, Radarr/Sonarr etc

Upgraded from a custom-built server to a QNAP TS-464-4G, and I am an absolute noob at this Docker stuff.
I am trying to get the Docker containers to be able to access the files on the drives. So far, nada.

Installed Radarr and Sonarr using Docker Compose; I've tried different commands but keep getting 'Not writable' errors.

Would appreciate some help or pointers to guides.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Can you post your compose file here?

El Mero Mero
Oct 13, 2001

I've currently got a Synology + expansion unit with a single volume that crosses the two devices. It kinda makes me nervous, but I don't have a good idea of the best way to decompose it all and re-make the volumes so they don't cross devices anymore.

I've got it all backed up onto a set of random cold-storage disks... so I can destroy my volumes, recreate them correctly, and then restore from backup... but is there an easier way that I'm missing?

MTGWolfGirl
Apr 4, 2023

Nitrousoxide posted:

Can you post your compose file here?

The file is taken from the Radarr website:
https://docs.linuxserver.io/images/docker-radarr

code:
---
version: "2.1"
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /path/to/data:/config
      - /path/to/movies:/movies #optional
      - /path/to/downloadclient-downloads:/downloads #optional
    ports:
      - 7878:7878
    restart: unless-stopped

fralbjabar
Jan 26, 2007
I am a meat popscicle.
You need to replace those /path/to/etc lines with the paths from your NAS to what you want to share with your container.

E.g. if I have appdata on the host system at /mnt/user/appdata and want to map the config for my container to that, I'd have:

code:
volumes:
  - /mnt/user/appdata:/config

You then may need to assign these paths within the application's own settings as well.

edit - corrected from wildly inaccurate first post while phoneposting.

fralbjabar fucked around with this message at 05:10 on May 7, 2023

AlternateAccount
Apr 25, 2005
FYGM
Good grief, the difference in performance on macOS between SMB and NFS on Unraid is absolutely massive.

MTGWolfGirl
Apr 4, 2023

fralbjabar posted:

You need to replace those /path/to/etc lines with the paths from your NAS to what you want to share with your container.

E.g. if I have appdata on the host system at /mnt/user/appdata and want to map the config for my container to that, I'd have:

code:
volumes:
  - /mnt/user/appdata:/config

You then may need to assign these paths within the application's own settings as well.

edit - corrected from wildly inaccurate first post while phoneposting.

Thanks for that. I tried to change the path before, but it kept saying "Folder not writable" when I selected it in Radarr settings. Is there a default way the path is structured, like "drive/appdata/Folder name...."?
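For what it's worth, "Folder not writable" in the *arrs is usually a permissions mismatch between the PUID/PGID in the compose file and the owner of the folder on the NAS, rather than a path-format thing. A rough way to check (the share paths here are placeholders; QNAP's will differ):

code:
# what UID/GID does your NAS user actually have?
id youruser                        # e.g. uid=1000 gid=1000

# who owns the folder you mapped into the container?
ls -ld /share/Multimedia/Movies

# if the owner doesn't match the PUID/PGID from the compose file, fix it
chown -R 1000:1000 /share/Multimedia/Movies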

MTGWolfGirl
Apr 4, 2023

MTGWolfGirl posted:

Thanks for that. I tried to change the path before, but it kept saying "Folder not writable" when I selected it in Radarr settings. Is there a default way the path is structured, like "drive/appdata/Folder name...."?

Managed to fix it, was an idiot who didn't realise there was a loving installer for this that makes it all a hell of a lot easier.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

VelociBacon posted:

Is it a reasonable expectation that every device can play h265 natively these days? I have a decently powerful i5-11600K for the Plex server with hardware acceleration (no discrete GPU), but I don't want to be transcoding all my friends' poo poo because their iPads can't natively stream h265 files.

Nicer devices including iPads have been able to for quite a while now. Browser support is still iffy, so if you have users watching in a browser, h265 is likely to result in transcodes that look really awful to watch.

The biggest bugbear I've found is that Plex can't differentiate between 8-bit and 10-bit h265 encodes, but a ton of devices can only hardware-decode 8-bit, so it'll attempt to direct play, fail, and fall back to transcoding.
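If you want to check what a file actually is before blaming the client, ffprobe will tell you; a pix_fmt ending in 10le means a 10-bit encode. Output looks roughly like the comments (example values only):

code:
ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,profile,pix_fmt \
  -of default=noprint_wrappers=1 movie.mkv
# codec_name=hevc
# profile=Main 10
# pix_fmt=yuv420p10le    <- 10-bit; many older hardware decoders can't handle it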

SlowBloke
Aug 14, 2017

VelociBacon posted:

Is it a reasonable expectation that every device can play h265 natively these days? I have a decently powerful i5-11600K for the Plex server with hardware acceleration (no discrete GPU), but I don't want to be transcoding all my friends' poo poo because their iPads can't natively stream h265 files.

Apple A9/A9X (2015) is where h265 decode support was introduced. The main issue with iPads and HEVC is their primordial hatred of mkv files.
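If the codec itself is fine and only the container is the problem, a remux is nearly free since nothing gets re-encoded. A rough sketch:

code:
# repackage mkv into mp4 without touching the video or audio streams
ffmpeg -i input.mkv -map 0:v -map 0:a -c copy -movflags +faststart output.mp4
# mkv-style subtitles won't carry over as-is; add "-map 0:s -c:s mov_text"
# if you want text subs converted instead of dropped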

AlternateAccount
Apr 25, 2005
FYGM

AlternateAccount posted:

Good grief, the difference in performance on macOS between SMB and NFS on Unraid is absolutely massive.

OK, spoke too soon, quoting my own post.

Is there a way to get macOS to write different permissions to NFS shares? The way it does it now, no other machines can read them, and I have to change the permissions manually.

I assume there's a umask setting that can do this, but wouldn't that be global?
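For reference, one common server-side workaround is to squash every NFS client to a single local owner so the Mac-side uid/umask stops mattering. A sketch, assuming Unraid's NFS export rule field accepts standard export options and that its nobody:users pair really is 99:100 (worth verifying before using):

code:
# Unraid: share settings -> NFS -> Rule, something along the lines of:
*(rw,sec=sys,insecure,all_squash,anonuid=99,anongid=100)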

bawfuls
Oct 28, 2009

Got the last of my hardware for my first NAS build (Unraid) this past Saturday and started with a pre-clear, which completed without error this morning. (Shoutout to necrobobsledder for hooking me up with affordable second-hand drives and packaging them well for the journey.) I've got 3x8TB drives, assigned one to parity and the other two to data. When I went to start the array I couldn't find an option to declare that parity was already valid (it should be, since all 3 drives were zeroed out during pre-clear), so I just canceled the parity sync (per spaceinvaderone's guides).

Then I took my old 1TB drive with media out of my desktop and put it into the new server to begin transferring the initial bulk of data (I set up a couple of shares first, again per spaceinvaderone's instructions). This of course filled my 500GB cache pretty quickly, so I initiated the mover manually. Now it has completed the copy off the 1TB drive, though the mover still has about 250GB left to push from cache to the array.

Since everything is on the server now, I figured I'd let it start parity sync. It has been running a parity sync (not parity check) for about 40 minutes now and the rate is 7-10 MB/s, while data transfer speeds during the initial copy averaged 140MB/s. CPU usage is under 20%.

Is this parity sync just slow because it's happening while the mover is still active? Should I pause parity sync and wait until mover is complete before resuming? The current estimate is over a week to complete parity sync, which seems way too long for only 1 TB of data (split over 2 data drives).

edit: mover is complete and now the parity sync is running at 185 MB/s, so I guess that explains it.

bawfuls fucked around with this message at 03:42 on May 9, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
yeah don’t do parity and mover at the same time

KKKLIP ART
Sep 3, 2004

What’s the best way to move data from an old pool to a brand new pool in TrueNAS? I’m sure there has to be some command line script that does it.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Can't you just rsync the root directory (or directories) from one over to the new pool?

Hughlander
May 11, 2005

KKKLIP ART posted:

What’s the best way to move data from an old pool to a brand new pool in TrueNAS? I’m sure there has to be some command line script that does it.

zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool

https://unix.stackexchange.com/questions/263677/how-to-one-way-mirror-an-entire-zfs-pool-to-another-zfs-pool

Covers it all with a few other examples. I like the syncoid approach personally
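For reference, the syncoid route wraps the snapshot/send/receive dance into one command; roughly like this (assuming sanoid/syncoid is installed, pool names are placeholders):

code:
# recursively replicate every dataset from the old pool into the new one
syncoid -r oldpool newpool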

Hughlander fucked around with this message at 16:15 on May 9, 2023

Zorak of Michigan
Jun 10, 2006


I hadn't heard of syncoid. Now that I look at it, it seems to already do everything I would have mentioned. I specifically want to recommend taking advantage of syncoid's support for mbuffer. It's been years since I had to mess with zfs send/receive at work, but when I did, having a larger buffer between sender and receiver did a lot to optimize performance. Just 128MB was enough.
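In pipeline form that looks roughly like this (snapshot name and buffer sizes are placeholders; tune to taste):

code:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | mbuffer -s 128k -m 128M | zfs recv -F newtank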

MTGWolfGirl
Apr 4, 2023
Having a bit of a head scratcher rn

I have a QNAP NAS with my Sonarr/Deluge/Radarr running in containers, which use a WireGuard VPN. I have Plex running as just an app, outside the containers.

Every time I try to connect Sonarr to Plex, I get this message:

Unable to connect to Plex Media Server, Error: ConnectFailure (Connection refused): 'https://localhost:32400/library/sections?X-Plex-Client-Identifier=384de01d-464c-4df6-a389-e84b8f29e6b1&X-Plex-Product=Sonarr&X-Plex-Platform=Windows&X-Plex-Platform-Version=7&X-Plex-Device-Name=Sonarr&X-Plex-Version=3.0.10.1567&X-Plex-Token=ssjojUTgrKwdSBSPwWec'

I have no idea why it's not working; trying to change localhost to the IP just results in the test timing out.

Any ideas?
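Worth remembering that inside a container (especially one sharing a VPN container's network), localhost is the container itself, not the NAS, so Sonarr pointed at https://localhost:32400 is never going to find a Plex that runs outside the containers. A rough way to narrow it down (container name and LAN IP are placeholders; Plex's /identity endpoint doesn't need auth, and note plain http rather than https for a raw IP):

code:
# does Plex answer on the NAS's LAN IP at all?
curl -s http://192.168.1.50:32400/identity

# same check, but from inside Sonarr's network namespace,
# since that's where the connection actually originates
docker exec -it sonarr curl -s http://192.168.1.50:32400/identity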

Beve Stuscemi
Jun 6, 2001




Is there any way in TrueNAS to see how full each individual physical disk is? It lets you see the pool, but I can’t find remaining capacity for the disks themselves.

Zorak of Michigan
Jun 10, 2006


I thought I remembered one, but I can't track it down anywhere. Why does it matter?

IOwnCalculus
Apr 2, 2003





"zpool iostat -v" will show you the disk usage on a per-vdev basis, which is as low as it makes any sense to go - everything in a given vdev should have identical usage. On top of that, the only reason you should end up with different usage percentages from one vdev to the next is if you add vdevs after adding data, since there's no way to completely rebalance a pool onto itself.

Splinter
Jul 4, 2003
Cowabunga!

e.pilot posted:

current gen celeron mini pc from aliexpress (N5105 or N6005) and 4-5 bay JBOD and unraid

modern quicksync is very good, like 3-4 simultaneous 4K transcodes good

Ended up with an N5105 NUC and 4x16TB Seagate Exos drives (due to a great deal on 2-packs, which of course were packed like poo poo by Amazon and will probably die immediately). For the enclosure, to keep my options open I got one of those 4-bay JBOD enclosures with 10 Gbps USB3 and also a QNAP TR-004, which can do both JBOD and various RAID modes with 5 Gbps USB3. Will end up returning one of those.

I have a few questions: If I want a desktop environment with Unraid, what's the best way to accomplish that? I assume a Windows or Linux VM that has access to the GPU/HDMI out? Is there any issue with giving the VM the GPU for this purpose while also running a Plex container that has GPU access for hardware transcoding? Any reason not to run qBittorrent within that VM rather than via a separate container, in order to use the native app rather than the web interface?

With 4 HDDs, 5 Gbps (625 MB/s) vs 10 Gbps USB3 shouldn't matter, correct, regardless of RAID vs Unraid etc?

I've seen some posts recommending against having an Unraid parity drive on an external/USB3 enclosure. Is that a legitimate concern that will lead to noticeable annoyance, or a non-issue for a media server/torrents? I gather Unraid will be slower in general (vs a striped setup like RAID5) but shouldn't be bottlenecked for these use cases; is there something about having the parity drive over USB that will introduce additional slowness?

Beve Stuscemi
Jun 6, 2001




Zorak of Michigan posted:

I thought I remembered one, but I can't track it down anywhere. Why does it matter?

I guess it doesn't matter, because I did not know that

IOwnCalculus posted:

everything in a given vdev should have identical usage.

That clears up why I can't find the per-disk usage. I'm more used to Unraid, which will fill disks in a couple of different ways, depending on how you have it set up


Thanks!

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Splinter posted:

Ended up with an N5105 NUC and 4x16TB Seagate Exos drives (due to a great deal on 2-packs, which of course were packed like poo poo by Amazon and will probably die immediately). For the enclosure, to keep my options open I got one of those 4-bay JBOD enclosures with 10 Gbps USB3 and also a QNAP TR-004, which can do both JBOD and various RAID modes with 5 Gbps USB3. Will end up returning one of those.

I have a few questions: If I want a desktop environment with Unraid, what's the best way to accomplish that? I assume a Windows or Linux VM that has access to the GPU/HDMI out? Is there any issue with giving the VM the GPU for this purpose while also running a Plex container that has GPU access for hardware transcoding? Any reason not to run qBittorrent within that VM rather than via a separate container, in order to use the native app rather than the web interface?

With 4 HDDs, 5 Gbps (625 MB/s) vs 10 Gbps USB3 shouldn't matter, correct, regardless of RAID vs Unraid etc?

I've seen some posts recommending against having an Unraid parity drive on an external/USB3 enclosure. Is that a legitimate concern that will lead to noticeable annoyance, or a non-issue for a media server/torrents? I gather Unraid will be slower in general (vs a striped setup like RAID5) but shouldn't be bottlenecked for these use cases; is there something about having the parity drive over USB that will introduce additional slowness?

unraid works fine over USB, especially with a multi-bay JBOD; you shouldn’t see any noticeable performance hit

why do you want a desktop environment on unraid? you can do pretty much everything you could possibly need to do via the web gui, or on very rare occasions ssh. having a desktop kind of defeats the purpose and isn’t what it’s designed for

you can pass through the GPU to a VM, but then that’s all you can do with the GPU; you can’t assign it to multiple operating systems (multiple containers is fine)
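for the container side it's just a device mapping, roughly like this (a sketch only; names and paths are placeholders, and this assumes quicksync on the igpu rather than a passed-through card):

code:
# unraid: add to the container template's Extra Parameters, or plain docker:
docker run -d --name plex \
  --device /dev/dri:/dev/dri \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media \
  lscr.io/linuxserver/plex:latest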

IOwnCalculus
Apr 2, 2003





Beve Stuscemi posted:

That clears up why I can't find the per-disk usage. I'm more used to Unraid, which will fill disks in a couple of different ways, depending on how you have it set up


Thanks!

Yep - it doesn't matter how you configure a ZFS pool: if all vdevs are configured before any data gets written, then usage across all drives in the pool should be equal. If you stripe across multiple vdevs, data is going to be split equally among them. If your vdev is a mirror, all components of that mirror are identical by definition. If your vdev is any form of parity, the specific blocks written to each disk will vary but the number of blocks written to each disk will be identical.

When you get into adding vdevs down the line, or swapping disks for larger, or mismatched vdevs, you can end up with some wildly different usage percentages. In that case, ZFS tries to write more data to the vdevs with more free space. For the users in this thread, this shouldn't ever be a problem:


code:
$ zpool iostat -v tank
                                                         capacity     operations     bandwidth
pool                                                   alloc   free   read  write   read  write
-----------------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                                    146T  35.5T    340    114  13.9M  1.18M
  raidz1-0                                             34.6T  1.82T     53     25  2.24M   181K
    scsi-SATA_WDC_WD101EMAZ-11_                            -      -     13      7   590K  48.9K
    scsi-SATA_WDC_WD100EMAZ-00_                            -      -     13      5   556K  42.7K
    scsi-SSEAGATE_ST10000NM0096                            -      -     13      6   589K  47.7K
    scsi-SSEAGATE_ST10000NM0226                            -      -     13      5   554K  41.3K
  raidz1-1                                             34.1T  2.30T     82     20  3.42M   190K
    scsi-SHGST_H7210A520SUN010T                            -      -     20      5   889K  50.4K
    scsi-SATA_WDC_WD101EMAZ-11_                            -      -     20      4   837K  44.7K
    scsi-SSEAGATE_ST10000NM0096                            -      -     20      5   914K  50.4K
    scsi-SATA_WDC_WD100EMAZ-00_                            -      -     21      4   857K  44.2K
  raidz1-2                                             26.1T  10.3T     69     28  2.53M   345K
    scsi-SHGST_H7210A520SUN010T                            -      -     17      7   664K  92.2K
    scsi-SSEAGATE_ST10000NM0096                            -      -     16      7   621K  84.7K
    scsi-SSEAGATE_ST10000NM0226                            -      -     17      7   678K  92.2K
    scsi-SATA_WDC_WD100EMAZ-00_                            -      -     17      6   631K  76.0K
  raidz1-3                                             30.1T  6.25T     71     18  3.09M   226K
    scsi-SSEAGATE_ST10000NM0096                            -      -     17      5   809K  59.7K
    scsi-SSEAGATE_ST10000NM0096                            -      -     17      4   764K  55.2K
    scsi-SATA_WDC_WD101EMAZ-11_                            -      -     18      4   818K  59.7K
    scsi-SHGST_H7210A520SUN010T                            -      -     18      4   771K  51.6K
  raidz1-4                                             21.6T  14.8T     63     21  2.64M   268K
    scsi-SSEAGATE_ST10000NM0096                            -      -     16      5   719K  70.9K
    scsi-SHGST_H7210A520SUN010T                            -      -     16      4   674K  58.8K
    scsi-SATA_WDC_WD101EMAZ-11_                            -      -     15      5   676K  70.9K
    scsi-SSEAGATE_ST10000NM0096                            -      -     15      5   639K  67.5K
-----------------------------------------------------  -----  -----  -----  -----  -----  -----
But if you have Real Workloads on your storage, I've seen cases where trying to expand a very large array by just a single mirrored vdev resulted in performance for new data written to the array being choked down to the capability of a single disk.

bawfuls
Oct 28, 2009

Hope this is the right thread for my noob troubleshooting questions after Google failed me...

I've got Plex and Deluge set up successfully on my new Unraid system, with Deluge properly using my VPN.

Now I'm trying to get Sonarr set up and running into issues. (Some) guides tell me I need to enable the WebUI plugin within Deluge so that Sonarr can connect to it, but when I go to Deluge preferences, I cannot check the WebUI box under Plugins; clicking just doesn't check the box. Other plugins are enabled, like Blocklist and Extractor. Under the Daemon section I have "allow remote connections" checked as well. Other guides don't mention this step at all.

Within Sonarr, I have not been able to add Deluge as a download client yet. Sonarr says it's unable to connect to Deluge.

As best I can tell from the guides, I have the settings correct in Sonarr for adding Deluge as a download client. The host is my server IP, and if I enter that IP plus the port I have listed into a web browser it takes me right to the Deluge web UI.

I haven't even gotten to the indexer stuff yet, which apparently requires yet another docker container (Jackett).

Feeling like I bit off more than I can chew here; the documentation online is all over the place.
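One thing worth checking before going further: as far as I can tell, in most Deluge docker images the web UI runs as its own process, so the WebUI plugin checkbox inside the daemon isn't what Sonarr needs; Sonarr just has to be able to reach the web UI port (with its password) from inside Sonarr's own network. A rough check, with container name, IP, and port as placeholders for your setup:

code:
# the web UI loading in your desktop browser doesn't prove the Sonarr
# container can reach it; test from Sonarr's network namespace instead
docker exec -it sonarr curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.50:8112/
# 200 means reachable; anything else means the host/port in Sonarr's
# download client settings points somewhere Sonarr can't get to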

bawfuls fucked around with this message at 19:34 on May 11, 2023
