TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Just moved houses and reran Ethernet everywhere. All I did to my UniFi network rack was cut off the old patch panel and put a new one in here. Was banging my head for an hour trying to figure out why my PoE devices wouldn't work despite maxing my internet speed on my laptop wired directly to the drops.

Forgot the 16-port PoE switch only has eight powered ports! Swapped a few cables and BAM! I'm back, baby.

Thanks for listening.

wolrah
May 8, 2006
what?

TraderStav posted:

Just moved houses and reran Ethernet everywhere. All I did to my UniFi network rack was cut off the old patch panel and put a new one in here. Was banging my head for an hour trying to figure out why my PoE devices wouldn't work despite maxing my internet speed on my laptop wired directly to the drops.

Forgot the 16-port PoE switch only has eight powered ports! Swapped a few cables and BAM! I'm back, baby.

Thanks for listening.

One of my biggest peeves about the new UniFi switches is how many of them are partial PoE, and this is exactly why. I like to keep my patch panels sane and try to connect the wires in order whenever possible; partial PoE makes that impossible unless the wiring plan is specifically built around it.

It's tolerable on desktop switches as a cost-saving measure, since most of those won't need to power every port, but it should never be a thing in rackmount hardware.

The way they varied PoE capabilities in the earlier models made more sense: all variants had the same port config, but one model could support every port running at maximum draw while the other handled roughly half that on average. That way, if you had a lot of high-power wireless devices or heated cameras or whatever, you could get the max-power option, while those just powering a bunch of 3-5 watt VoIP phones could save money.

wolrah fucked around with this message at 16:28 on Jul 4, 2021

codo27
Apr 21, 2008

So I was in here asking about a drive cloning solution and came up with Macrium. I'm actually in disbelief at how easy it was.

Popped in the NVMe drive, booted up, installed Macrium Free, and set up the clone, which was also super simple. Figured it would take until this evening sometime just due to it being a lovely old hard drive. I told the user to let me know if there were any error popups or anything. They contacted me within 30 minutes, I'd say; I expected it had failed. Nope, clone complete. Shut down, pulled out the HDD, and thought for sure there was no way it would just boot up fine. Booted up fine with no further input required. So, so easy. Very pleased.

hbag
Feb 13, 2021

not sure if this is the right thread but i couldnt find one that did fit and the thing i want to do is ON a NAS so whatever
would i be able at all to set different seeding rules in qbittorrent for different trackers, or can i only set that for all torrents

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

codo27 posted:

So I was in here asking about a drive cloning solution and came up with Macrium. I'm actually in disbelief at how easy it was.

Popped in the NVMe drive, booted up, installed Macrium Free, and set up the clone, which was also super simple. Figured it would take until this evening sometime just due to it being a lovely old hard drive. I told the user to let me know if there were any error popups or anything. They contacted me within 30 minutes, I'd say; I expected it had failed. Nope, clone complete. Shut down, pulled out the HDD, and thought for sure there was no way it would just boot up fine. Booted up fine with no further input required. So, so easy. Very pleased.

I really like Macrium and save those backups to my UnRaid server, which then gets shot up to CrashPlan. It's good enough to be worth buying, but you don't even need to.

CopperHound
Feb 14, 2012

hbag posted:

not sure if this is the right thread but i couldnt find one that did fit and the thing i want to do is ON a NAS so whatever
would i be able at all to set different seeding rules in qbittorrent for different trackers, or can i only set that for all torrents
Not in qBittorrent, but Sonarr has per-tracker seed ratio settings, iirc.

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Is there a "best practices" sort of thing for saving the config and setup for an UnRaid server? Or, some practical way to store a minimal image of the OS and VM and associated settings somewhere?

Just thinking of if there was a catastrophic failure of some kind I'd hate to have to set up each of my docker configs again from scratch since some of them were pretty fiddly. I also have a VM running Homeassistant on the server that I'd like to preserve if possible. I am not a real strong computer toucher so I am wondering if there's some kind of standard protocol or guidance here.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

That Works posted:

Is there a "best practices" sort of thing for saving the config and setup for an UnRaid server? Or, some practical way to store a minimal image of the OS and VM and associated settings somewhere?

Just thinking of if there was a catastrophic failure of some kind I'd hate to have to set up each of my docker configs again from scratch since some of them were pretty fiddly. I also have a VM running Homeassistant on the server that I'd like to preserve if possible. I am not a real strong computer toucher so I am wondering if there's some kind of standard protocol or guidance here.

https://youtu.be/cZTWC_z9rKs

One of the plugins covered there will back up your Docker appdata folder automatically.

Also, idk if anyone else has seen this YouTube channel, but it's really good for UnRAID stuff. Definitely seems as good as Spaceinvader One.

modeski
Apr 21, 2005

Deceive, inveigle, obfuscate.
I am researching my next NAS. Currently I'm running Windows Home Server 2011 with StableBit DrivePool on some 4TB/8TB drives on an Athlon FM2 with 8GB of RAM. I built the server in 2014 and it's time for a new one (and new drives). I've never cared much for RAID, as I prefer to maximize storage. Maybe only 500GB-1TB of user-created pics/vids/documents is important to me, and I back that data up elsewhere. I can always redownload Linux ISOs and re-backup my media, but I do want at least 30TB of usable space.

Research has got me looking seriously at Proxmox, although I'm still getting my head around it and would love some advice if you'd be so kind. As I understand it, Proxmox would be the main OS on the machine, installed on its own SSD. I'd have a bunch of spinning disks for bulk storage.

Then I'd have a VM for a NAS OS, pointing it at the spinning disks and sharing that data over my LAN to my main desktop, HTPC, phones, etc. Does this sound about right? So I would have a VM running something like OpenMediaVault (which I've played with a little) for the NAS side of things.

Is OMV the best OS for the JBOD approach? Is there a better approach I could be taking altogether? Thanks, goons.

hbag
Feb 13, 2021

i got two 2TB drives in my DS220+ (running SHR so its 1 logical volume of 2TB) and despite being just under half full i already want to put bigger disks in it

i dont even have the money for that

Thwomp
Apr 10, 2003

BA-DUHHH

Grimey Drawer

hbag posted:

i got two 2TB drives in my DS220+ (running SHR so its 1 logical volume of 2TB) and despite being just under half full i already want to put bigger disks in it

i dont even have the money for that

I felt the same when I got two 4TB drives in my QNAP about 4 years ago.

I just put two 10TB drives in it last month. :getin:

redeyes
Sep 14, 2002

by Fluffdaddy

modeski posted:

I am researching my next NAS. Currently I'm running Windows Home Server 2011 with StableBit DrivePool on some 4TB/8TB drives on an Athlon FM2 with 8GB of RAM. I built the server in 2014 and it's time for a new one (and new drives). I've never cared much for RAID, as I prefer to maximize storage. Maybe only 500GB-1TB of user-created pics/vids/documents is important to me, and I back that data up elsewhere. I can always redownload Linux ISOs and re-backup my media, but I do want at least 30TB of usable space.

Research has got me looking seriously at Proxmox, although I'm still getting my head around it and would love some advice if you'd be so kind. As I understand it, Proxmox would be the main OS on the machine, installed on its own SSD. I'd have a bunch of spinning disks for bulk storage.

Then I'd have a VM for a NAS OS, pointing it at the spinning disks and sharing that data over my LAN to my main desktop, HTPC, phones, etc. Does this sound about right? So I would have a VM running something like OpenMediaVault (which I've played with a little) for the NAS side of things.

Is OMV the best OS for the JBOD approach? Is there a better approach I could be taking altogether? Thanks, goons.

You can always do what I do and use StableBit DrivePool on Windows 10 Pro. It's been rocking and rolling for me for the last 4-5 years, ever since WHS died.

modeski
Apr 21, 2005

Deceive, inveigle, obfuscate.

redeyes posted:

You can always do what I do and use StableBit DrivePool on Windows 10 Pro. It's been rocking and rolling for me for the last 4-5 years, ever since WHS died.

I've thought about that, but I want to keep away from Windows if I can help it. I've moved to Linux Mint as my main OS on my desktop, for example. Other than for the odd game, everything else on my network is Linux/Android-based now.

El Mero Mero
Oct 13, 2001

Thwomp posted:

I felt the same when I got two 4TB drives in my QNAP about 4 years ago.

I just put two 10TB drives in it last month. :getin:

It's neverending. Wait until you start plunking down for the 14TB ones :rms:

codo27
Apr 21, 2008

I've had my 3TB Reds for about 7 years now, I guess. It's only slowly starting to creep towards full lately.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

modeski posted:

I am researching my next NAS. Currently I'm running Windows Home Server 2011 with StableBit DrivePool on some 4TB/8TB drives on an Athlon FM2 with 8GB of RAM. I built the server in 2014 and it's time for a new one (and new drives). I've never cared much for RAID, as I prefer to maximize storage. Maybe only 500GB-1TB of user-created pics/vids/documents is important to me, and I back that data up elsewhere. I can always redownload Linux ISOs and re-backup my media, but I do want at least 30TB of usable space.

Research has got me looking seriously at Proxmox, although I'm still getting my head around it and would love some advice if you'd be so kind. As I understand it, Proxmox would be the main OS on the machine, installed on its own SSD. I'd have a bunch of spinning disks for bulk storage.

Then I'd have a VM for a NAS OS, pointing it at the spinning disks and sharing that data over my LAN to my main desktop, HTPC, phones, etc. Does this sound about right? So I would have a VM running something like OpenMediaVault (which I've played with a little) for the NAS side of things.

Is OMV the best OS for the JBOD approach? Is there a better approach I could be taking altogether? Thanks, goons.

Are you familiar with virtualization? Honestly, I don't recommend this approach at all.

If you want free and ZFS, go with FreeNAS (or whatever it's called now).

If you want some flexibility and don't mind paying, with the added benefit of being basically hands-off (while still having a ton of features if you want to go nuts), pay for Unraid.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I'm trying to play around with setting up a toy implementation of iSCSI / PXE / network booting, and tbh I'm so far out of my depth that I don't even know where to start; all the documentation is flying above my head. I can't really find a "PXE for dummies" type thing, and there are just so many approaches and options and moving pieces.

  • Serving static images: let's say you want to serve the Windows installer ISO and some Linux installer ISOs. This doesn't need to go through iSCSI at all, right? It just goes as an image on the TFTP server? Can it go through iSCSI if you want, like a read-only iSCSI target/LUN? Is there any point in doing that?
  • (One oddity I've noticed with the FreeBSD ISO is that they're serious about the disc image needing to be burned to an actual disc; I haven't managed to make it work with tools like Rufus or unetbootin to create a USB stick, and the USB image is different/smaller (I assume it has less stuff). So far I've always installed it via virtual CD-ROM on the IPMI. By presenting it over iSCSI maybe it could "lie" that it's a CD, perhaps?)
  • Serving images specific to a particular machine: iirc PXE has the ability to map from a MAC address to a specific thing that device is supposed to boot. Does this actually need to be an image on the TFTP server containing (e.g.) GRUB or the Windows bootloader, where the bootloader tells the system how to initiate iSCSI, or is there a way to have PXE directly present "this is the iSCSI target/LUN you're going to boot" so the BIOS loads and initiates it itself? If there are bootloader images, are there "defaults" for Windows/Linux/BSD/etc.? Do they need to be configured in some specific way so that they know which iSCSI target to grab?
  • Serving default images: PXE also has the ability to send another LUN when it doesn't have a target configured for that MAC address. Seems self-explanatory given the above; I assume it works the same way.
  • Static vs per-client images: so the block devices exposed by iSCSI are full-service read/write, correct? I can actually network-boot an OS from an iSCSI volume and have a fully custom OS install that just happens to live on my NAS? Obviously there are a lot of cases where it's desirable to go the route of a single static image and push things like user/home directories, bin/lib directories, etc. off to a separate NFS share, but you can just treat it like a logical volume manager and have your drives be shares on the network if you want?
  • Is it possible to present an iSCSI volume as "mutable-but-ephemeral", where you have some "base" image and it "feels" like a writeable iSCSI volume, but after the volume is dismounted (the client shuts down) it reverts to the base state? Obviously there are ways you could do that with ZFS and snapshots and so on, just not sure if there's a better way.
  • Per-system configuration: obviously even in the case of a "static" image there are some things that need to change on a per-host basis; having duplicated hostnames, for example, would be bad. How is this generally done? On Linux you could do some of it with a script that runs at boot, checks in with a centralized server, gets the configuration for its MAC address, and applies it. Not sure how it would be done on Windows at all (among other things, hostname changes require a restart, and on the next boot you'd be coming from the default image again and need to reconfigure...). I assume for Windows the configuration would have to exist in the image itself? Maybe you would set up an "enrollment" image (or a script on the tftp/iscsi server that the enrollment image calls) that creates a new clone of the base image/zvol, configures it, and sets it as the PXE target for that MAC address?
  • Are there any premade scripts/suites of software (hopefully free) that are designed to do this and help you overlay config changes onto a base image?
  • If you are doing the thing where your Linux image is a "thin" base image and the real "userland" actually lives on an NFS share, I assume the base image still has to include a minimal userland to boot at all. Is the idea that once you get the system up, all the users set up their own PATH with the NFS userland instead of the native one? (you can update the .profile skeleton and so on, of course.) And how does modifying the NFS userland work? Like if I wanted to update the shared software set to a new version: obviously the way Linux works, any handle to an open file stays open, but if a running program then goes to load a new library and the version is mismatched between the program and the library, that could cause issues, right? Is there a guarantee that all shared libraries are loaded at program start, such that the risk is minimal and it'll just "atomically" change the next time the program restarts, or is the best approach to basically force a restart of the entire cluster when I do the changeover?
  • I am thinking of trying out the SLURM workload manager. Maybe there are tools in there to help with some of this? Also, in terms of the userland thing, it should be possible to specify a particular userland version as part of the job script (by specifying a certain path), and maybe that sidesteps the issue somewhat.

I realize this is a huge can of worms and a very complex topic; if someone wouldn't mind talking my dumb rear end through some of this (and/or just dropping the "right" ways to do it) I'd really, really appreciate it. It's just so many new pieces at once that I don't even know where to start.

I'm thinking maybe the best way to get started here is VirtualBox: get used to iSCSI first, then layer the PXE boot stuff over the top of that. VirtualBox would also let me build virtual networks where I could mess around with TFTP and DHCP without affecting my main network.

Paul MaudDib fucked around with this message at 22:54 on Jul 7, 2021

modeski
Apr 21, 2005

Deceive, inveigle, obfuscate.

Matt Zerella posted:

Are you familiar with virtualization? Honestly, I don't recommend this approach at all.

If you want free and ZFS, go with FreeNAS (or whatever it's called now).

If you want some flexibility and don't mind paying, with the added benefit of being basically hands-off (while still having a ton of features if you want to go nuts), pay for Unraid.

I am a little bit familiar with virtualization; I have a couple of VMs going in VirtualBox for various things. I don't mind paying for things if the functionality is good, so I'll definitely check out Unraid also.

Internet Explorer
Jun 1, 2005

Paul MaudDib posted:

I'm trying to play around with setting up a toy implementation of iSCSI / PXE / network booting, and tbh I'm so far out of my depth that I don't even know where to start; all the documentation is flying above my head. I can't really find a "PXE for dummies" type thing, and there are just so many approaches and options and moving pieces.

I think this is pretty far outside the topic of the NAS thread. Not that it's too off-topic, but you might not get as many eyes on it as you could; linking it in the IT thread might get some more. But I'll take a crack at it. You're definitely asking fairly complicated questions, and there are a lot of enterprise solutions to help do what it sounds like you're trying to do.

  • Serving static images - This does not need iSCSI. No, you can't use iSCSI for it, and there's no reason to. You'd basically be PXE booting into something that can read and deploy ISOs. Check out how WDS/MDT or FOG (https://fogproject.org/) handle it.
  • Serving images specific to a particular machine - Yes, PXE can be keyed to a MAC address (see the dnsmasq sketch after this list). I don't know the nitty-gritty of this because tools usually obfuscate it. For PXE booting into iSCSI, yes, you can do that. It used to be fairly rare for NIC firmware to support it; no idea how common it is these days, especially with consumer gear. This article might help - https://www.itprotoday.com/cloud-computing/boot-directly-iscsi-san I'm a little confused by your iSCSI defaults bit, but that may be because you're thinking you can do iSCSI to an ISO. iSCSI gets you to a specific LUN; there's no personalization going on there. You're attaching a disk and that's it, to the point that if you're not using a clustered file system, you are going to have a bad day.
  • Serving default images - Again, I think there's some confusion going on here. You don't go to a default LUN; if there's no assignment, it just won't mount, because you really don't want to mount something accidentally or by default.
  • Static vs per-client images - Again, confused by "iSCSI image", but yes, if you mount a LUN over iSCSI, you can boot your system drive off it. Firmware requirements as above. But again, you're not customizing your OS this way; a LUN is a disk drive, nothing more. Your normal deployment stuff would still need to occur.
  • Is it possible to present an iSCSI image as "mutable-but-ephemeral" - Again, not an image, but if you take a snapshot of a LUN and then use some sort of orchestration to attach that snapshot to another device, then yes. I don't think there's a better way strictly using iSCSI. Using virtual disks and the like, yes, and you've basically just described non-persistent VDI.
  • Per-system configuration - Usually the orchestration tool handles this: it changes the hostname, resets the computer account in the Active Directory world, etc. It usually requires an agent or startup script, like you said. There must be ways to get around the hostname-change-requires-a-reboot thing, because they manage to do it. The Windows stuff would be in the golden image itself, but you want to keep that as basic as possible and deploy configs using various IaC-type things; it's easier to maintain. [edit: Any VDI solution does this, so again check out Horizon, XenDesktop, Azure Virtual Desktop. FOG probably does, too.]
  • Are there any premade scripts/suites of software - I don't know of an iSCSI LUN snapshot orchestration type of thing, especially not free, and booting off of it might be the hardest requirement. You probably don't need to boot off iSCSI, tbqh. But look at something like Catalogic ECX or Dell AppSync - not necessarily to buy them, but to see what's out there; it might help you find a solution. For editing images, in the Windows world you just boot up the VHD in your hypervisor or use DISM if you can. In the Linux world I don't know, but I bet FOG might be able to do it or lead you down the right path.
  • If you are doing the thing where your Linux image is a thin base - I don't know the Linux side very well, but I think my responses above cover the Windows side of things. I'm sure there's a way in Linux - https://forums.somethingawful.com/showthread.php?noseen=1&threadid=3137721&pagenumber=810&perpage=40#post516050009
  • SLURM workload manager - Sorry, I don't know anything about this, but at a glance it looks worth looking into.
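
To make the MAC-to-boot-target bullet concrete, here's a rough sketch of the kind of config involved, assuming dnsmasq is handling DHCP/TFTP (the MAC, filenames, and paths are all made up for illustration):

code:
# /etc/dnsmasq.conf (sketch)
enable-tftp
tftp-root=/srv/tftp
# this one machine gets its own boot file (e.g. an iPXE script
# whose sanboot line points at its iSCSI target)...
dhcp-host=aa:bb:cc:dd:ee:ff,set:machine1
dhcp-boot=tag:machine1,machine1.ipxe
# ...everything else falls through to a default boot file
dhcp-boot=pxelinux.0

And for the "mutable-but-ephemeral" bullet, the ZFS snapshot/clone idea from the question would look roughly like this on the storage side; re-creating the clone before each boot is what gives the reset-to-base behavior (pool/dataset names are examples):

code:
zfs snapshot tank/base@gold               # freeze the base image once
zfs destroy tank/client1                  # throw away last session's clone
zfs clone tank/base@gold tank/client1     # fresh writable copy to export as the LUN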

Hope that helps a bit. Might be worthwhile to say what you're trying to do and go from there? And might be worth checking in with the IT thread and/or Linux thread.

Internet Explorer fucked around with this message at 23:49 on Jul 7, 2021

hbag
Feb 13, 2021

man getting anything done on my DS220+ is kinda a pain in the rear end when NZBGet's trying to download an entire season of something
would be nice if i could limit its volume utilization because at the moment it just uses 100% all the time when its downloading poo poo

Impotence
Nov 8, 2010
Lipstick Apathy
are the marvell/realtek non-x86 synologies useful for anything on their own? i've always treated them purely as an nfs/smb mount and nothing else, with all the heavy lifting done elsewhere. even a TLS file transfer has difficulty exceeding 4-6 MB/s while it's freaking out at 100% CPU trying to just wget an https file

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Internet Explorer posted:

I think this is pretty far outside the topic of the NAS thread. Not that it's too off-topic, but you might not get as many eyes on it as you could; linking it in the IT thread might get some more. But I'll take a crack at it. You're definitely asking fairly complicated questions, and there are a lot of enterprise solutions to help do what it sounds like you're trying to do.

Sure, I will cross-post to the IT thread and reply.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

hbag posted:

man getting anything done on my DS220+ is kinda a pain in the rear end when NZBGet's trying to download an entire season of something
would be nice if i could limit its volume utilization because at the moment it just uses 100% all the time when its downloading poo poo

You'd probably get more details in the Usenet thread, but does limiting the max download rate not keep it from thrashing your system resources, or is it more the verification/unpacking traffic? Pretty sure you can also just set up a schedule of times for it to do stuff, so it could run at 100% overnight but back things off during 'office hours'.
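
If it's NZBGet doing the downloading, the built-in scheduler can handle that; a rough nzbget.conf sketch (the times and KB/s rate are placeholders, and iirc a rate of 0 means unlimited):

code:
# throttle during the day...
Task1.Time=08:00
Task1.WeekDays=1-7
Task1.Command=DownloadRate
Task1.Param=5000
# ...and open it back up overnight
Task2.Time=00:00
Task2.WeekDays=1-7
Task2.Command=DownloadRate
Task2.Param=0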

hbag
Feb 13, 2021

Takes No Damage posted:

You'd probably get more details in the Usenet thread, but does limiting the max download rate not keep it from thrashing your system resources, or is it more the verification/unpacking traffic? Pretty sure you can also just set up a schedule of times for it to do stuff, so it could run at 100% overnight but back things off during 'office hours'.

limiting the network speed isnt gonna limit the volume utilization
and my sleep schedule is so disjointed that a schedule wouldnt work probably

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

hbag posted:

limiting the network speed isnt gonna limit the volume utilization
and my sleep schedule is so disjointed that a schedule wouldnt work probably

Well! Nevertheless... the Usenet thread can probably recommend settings to minimize the resource footprint in general:
https://forums.somethingawful.com/showthread.php?threadid=3409898

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

modeski posted:

I am a little bit familiar with virtualization; I have a couple of VMs going in VirtualBox for various things. I don't mind paying for things if the functionality is good, so I'll definitely check out Unraid also.

Just a heads up, you can also run VMs on UnRAID if you want to tinker there as well. Unfortunately there's no API access, so tools like Vagrant aren't usable like they are with Proxmox. But it's nice using their Docker plugin system and being able to run VMs alongside it.

UnRAID has its faults, but drat if I haven't really run into any of them.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

hbag posted:

man getting anything done on my DS220+ is kinda a pain in the rear end when NZBGet's trying to download an entire season of something
would be nice if i could limit its volume utilization because at the moment it just uses 100% all the time when its downloading poo poo

what do you mean by "volume utilization" here? that sounds like disk space, but from context it sounds like you're talking about CPU utilization or disk access time.

if you're talking about bandwidth or CPU, you can set a bandwidth limit in NZBGet and that will reduce both. Look at the throughput you're getting at peak and set a limit to maybe 75% of that. You probably won't notice the difference unless you're actively sitting there waiting for something specific to finish (and in that case you can lift the limit and reset it later), but it will make it behave a lot better for other stuff that needs some resources.

for CPU and disk utilization, the repair and unpack steps may be an additional factor. These will typically go as fast as either disk or CPU will let them, and they are CPU-intensive steps. What you probably want to do is set "nice" and "ionice" values for the par2 and unrar steps; these make the processes more willing to yield CPU and disk to other processes. A higher niceness value means lower priority; you can just set them to idle (niceness 19 for nice, and the idle IO class, -c 3, for ionice). If NZBGet does not directly allow this, you can wrap the commands in a small script that does it (you can combine nice and ionice in a single command, like "nice -n 19 ionice -c 3 unrar ...", and the par2 wrapper would look similar).
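
A minimal sketch of such a wrapper, assuming NZBGet's unrar setting (iirc the option is called UnrarCmd) can be pointed at a script; the paths are placeholders:

code:
#!/bin/sh
# run unrar at minimum CPU priority and idle IO priority so unpacking
# yields to interactive workloads; a par2 wrapper is identical apart
# from the binary name
exec nice -n 19 ionice -c 3 /usr/bin/unrar "$@"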

Note also that iirc NZBGet has a "pause for X minutes" option, so if you are gaming or something and don't want it trashing the network for the next hour, you can just pause it entirely.

Paul MaudDib fucked around with this message at 01:24 on Jul 8, 2021

hbag
Feb 13, 2021

Paul MaudDib posted:

what do you mean by "volume utilization" here? that sounds like disk space, but from context it sounds like you're talking about CPU utilization or disk access time.

if you're talking about bandwidth or CPU, you can set a bandwidth limit in NZBGet and that will reduce both. Look at the throughput you're getting at peak and set a limit to maybe 75% of that. You probably won't notice the difference unless you're actively sitting there waiting for something specific to finish (and in that case you can lift the limit and reset it later), but it will make it behave a lot better for other stuff that needs some resources.

for CPU and disk utilization, the repair and unpack steps may be an additional factor. These will typically go as fast as either disk or CPU will let them, and they are CPU-intensive steps. What you probably want to do is set "nice" and "ionice" values for the par2 and unrar steps; these make the processes more willing to yield CPU and disk to other processes. A higher niceness value means lower priority; you can just set them to idle (niceness 19 for nice, and the idle IO class, -c 3, for ionice). If NZBGet does not directly allow this, you can wrap the commands in a small script that does it (you can combine nice and ionice in a single command, like "nice -n 19 ionice -c 3 unrar ...", and the par2 wrapper would look similar).

Note also that iirc NZBGet has a "pause for X minutes" option, so if you are gaming or something and don't want it trashing the network for the next hour, you can just pause it entirely.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Synology's support documentation basically says that metric is how much of the time there was a request pending, and that if it's high but other things are fine you shouldn't worry about it. When you have multiple parallel workloads running it's pretty much gonna be 100%.

That's a disk IO metric though, so if your interactive workload feels bad while it's downloading, you probably need to apply ionice to the nzbget worker process (basically, "idle" priority in the disk access subsystem). You may have to edit whatever container script launches it to add the ionice to the launch command, but you should be able to run the command at the shell to test it in the meantime (ionice -c 3 -p PID; get the PID from ps -A). Also make sure the repair (par2) and unpack processes run at the same ionice level; I'm not sure if child processes inherit the ionice value. You can check with something like top or iotop if available, or use "ionice -p pid" (no -c) to look it up for the child process PID.
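
For testing at the shell, it'd look something like this (the PID is made up, and Synology's busybox ps may want slightly different flags):

code:
ps -A | grep nzbget     # find the worker's PID
ionice -c 3 -p 4242     # move that PID into the idle IO class
ionice -p 4242          # read it back; should report "idle"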

It is likely this won't make that "volume" metric go down, and if the total number of IO requests goes up then your disk utilization % will probably rise as well. There will also of course be a performance hit on the nzbget process. But if it overall feels more responsive for your interactive workload, that's a win for usability.

If you are running multiple parallel download workers, that will increase disk traffic too; downloading four file segments at 1x actually pulls way more IO than two at 2x, etc. If you can make it work with ionice, that's obviously better, but at some point this will have a performance impact. More threads isn't necessarily better here, and may be amplifying the IO problems.

There is sometimes also an option to pause downloads during an unpack or PAR2 repair, and you can weigh the performance vs IO consequences (repair and unpack jobs often run in an additional thread, which is more IO and potentially a lot more CPU/memory; these are relatively heavy operations). This may help keep performance from really tanking when those heavier steps kick in. Either way, do make sure the child processes are being ioniced properly too; you still want them running at the idle class even when one of them is the only worker running.

Note also that prebuilt NASes usually ship with "minimal" amounts of RAM, and running multiple containers and more concurrent/parallel workloads will tax RAM harder and benefit from having more of it available. If you start to swap out to your spinning rust, that will hammer your other IO a ton too. If you see swap utilization start to happen, it should be cheap to figure out what RAM it uses and just order 2x4GB or 2x8GB or something. Depending on your workload, it will probably help performance/responsiveness to add RAM even before you start to swap.

I have the quad-core version of that chip, and I noticed swapping at 8GB just sitting at the Windows desktop. I didn't at 16GB (not officially supported, but it works on my NUC if you stay within the requirements of 2400 C16 memory); it is a very nice little light desktop with a bit more memory. Don't skimp on memory though; 2GB really is not a lot for a server, even on Linux.

NZBGet is really fantastically lean for an nzb client though (apart from par2/unrar, which you can't really do anything about); it's like 70 or 90 MB running, from what I remember. I used to run it on the OG shitbox Model 1B Raspberry Pi, the 700 MHz one with 256MB of RAM. The power of actual good C coding in action. Samba is reasonably lightweight too; if it's just those two, you will be fine.

The J4025 is actually not godawful as far as prebuilt NASes go, which is basically high praise. It is only 2C/2T, but I really like those Gemini Lake chips: they are reasonably fast (above Core 2 IPC, around midrange Core 2 performance), they have AES-NI (which reduces the CPU cost of SSH connections and SSL endpoints, whether downloading or hosting), they have hardware transcoding for Plex and an advanced media encode/decode/IO block (backported from Xe/Ice Lake, including HDMI 2.0b), and Intel's Linux drivers are massively solid (and open source). I have a couple of NUCs with the quad-core variant that I picked up for $125 a pop (plus RAM/SSD), and I really like them as low-power servers, light desktops, or TV PCs; they are just super nice low-power processors with a wide feature set. They are like the Athlon 5350 server I used for a lot of years: solid, fast, and cheap. I've watched Intel do their thing with Bay Trail and Cherry Trail and so on, and I've owned a lot of the iterations there (Liva/Liva-X/etc); they were OK but not really that great, and it kinda fizzled out, but the new Silvermont/Goldmont-descended variants are actually great. Gemini Lake actually slaps for a low-power architecture given the wattage, performance, and feature set, and the high-clocked desktop variants (J-series) are competitive with Core 2, no poo poo.

My (fellow) here is sitting on a Core 2 Duo sleeper loaded with all the instructions and encryption sets and media codecs and protocols that Core 2 never knew about.

Paul MaudDib fucked around with this message at 12:46 on Jul 8, 2021

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Matt Zerella posted:

Are you familiar with virtualization? Honestly, I don't recommend this approach at all.

If you want free and ZFS, go with FreeNAS (or whatever it's called now).

If you want some flexibility and don't mind paying, with the added benefit of being basically hands-off (while still having a ton of features if you want to go nuts), pay for Unraid.

Why can't you recommend virtualization? It's pretty great.

TrueNAS would be a poor fit here, because modeski wants JBOD and it doesn't cater to that use case at all. I thought OMV sounded like a good fit for them; what does Unraid bring to the table, other than a price tag, to make it worth considering in this case?

Nitrousoxide
May 30, 2011

do not buy a oneplus phone

modeski posted:

I am researching my next NAS. Currently I'm running Windows Home Server 2011 with StableBit DrivePool on some 4TB/8TB drives on an Athlon FM2 with 8GB of RAM. I built the server in 2014 and it's time for a new one (and new drives). I've never cared much for RAID, as I prefer to maximize storage. Maybe only 500GB-1TB of user-created pics/vids/documents is important to me, and I back that data up elsewhere. I can always redownload Linux ISOs and re-backup my media, but I do want at least 30TB of usable space.

Research has got me looking seriously at Proxmox, although I'm still getting my head around it and would love some advice if you'd be so kind. As I understand it, Proxmox would be the main OS on the machine, installed on its own SSD. I'd have a bunch of spinning disks for bulk storage.

Then I'd have a VM for a NAS OS, pointing it at the spinning disks and sharing that data over my LAN to my main desktop, HTPC, phones, etc. Does this sound about right? So I would have a VM running something like OpenMediaVault (which I've played with a little) for the NAS side of things.

Is OMV the best OS for the JBOD approach? Is there a better approach I could be taking altogether? Thanks, goons.

There can be reasons to go with the Proxmox-into-a-NAS-VM approach, but you probably need a very specific purpose in mind for it, for instance building a home lab where you want to share the storage box's hardware between multiple VMs through passthrough.

The vast majority of people will have a far smoother experience just installing the NAS OS on bare metal and either using a separate server to host the applications they want running, or picking a NAS OS that supports virtualization and containers and doing that on the NAS box itself.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

Keito posted:

Why can't you recommend virtualization? It's pretty great.

TrueNAS would be a poor fit here, because modeski wants JBOD and it doesn't cater to that use case at all. I thought OMV sounded like a good fit for them; what does Unraid bring to the table, other than a price tag, to make it worth considering in this case?

You can emulate JBOD in TrueNAS by just making a bunch of individual-disk vdevs in stripe mode and adding them all to a pool. The redundancy/RAID-Z stuff is optional. TrueNAS has the benefit of running jails/plugins on the bare metal too.
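
A minimal sketch of that layout, with hypothetical pool and device names (and note, per the reply below, that this is effectively RAID0):

code:
# one pool made of three single-disk vdevs; ZFS stripes across them
zpool create tank ada1 ada2 ada3
zfs create tank/media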

BlankSystemDaemon
Mar 13, 2009



withoutclass posted:

You can emulate JBOD in TrueNAS by just making a bunch of individual-disk vdevs in stripe mode and adding them all to a pool. The redundancy/RAID-Z stuff is optional. TrueNAS has the benefit of running jails/plugins on the bare metal too.
ZFS stripes data across vdevs, so you end up with a RAID0 across a bunch of devices, which means you've lowered MTBDF.
On FreeBSD I'd recommend gconcat(8); it may also exist in TrueNAS, but you won't be able to use the WebUI for any of it.
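
For reference, gconcat usage is roughly this on FreeBSD (the label and device names are examples); it glues the disks end-to-end instead of striping:

code:
gconcat label -v storage ada1 ada2 ada3   # create /dev/concat/storage
newfs /dev/concat/storage                 # put a filesystem on it
mount /dev/concat/storage /mnt/storage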

hbag
Feb 13, 2021

Paul MaudDib posted:

Synology's support documentation basically says that metric is how much of the time there was a request pending, and that if it's high but other things are fine you shouldn't worry about it. When you have multiple parallel workloads running it's pretty much gonna be 100%.

That's a disk IO metric though, so if your interactive workload feels bad while it's downloading, you probably need to apply ionice to the nzbget worker process (basically, "idle" priority in the disk access subsystem). You may have to edit whatever container script launches it to add the ionice to the launch command, but you should be able to run the command at the shell to test it in the meantime (ionice -c 3 -p PID; get the PID from ps -A). Also make sure the repair (par2) and unpack processes run at the same ionice level; I'm not sure if child processes inherit the ionice value. You can check with something like top or iotop if available, or use "ionice -p pid" (no -c) to look it up for the child process PID.

It is likely this won't make that "volume" metric go down, and if the total number of IO requests goes up then your disk utilization % will probably rise as well. There will also of course be a performance hit on the nzbget process. But if it overall feels more responsive for your interactive workload, that's a win for usability.

If you are running multiple parallel download workers, that will increase disk traffic too; downloading four file segments at 1x actually pulls way more IO than two at 2x, etc. If you can make it work with ionice, that's obviously better, but at some point this will have a performance impact. More threads isn't necessarily better here, and may be amplifying the IO problems.

There is sometimes also an option to pause downloads during an unpack or PAR2 repair, and you can weigh the performance vs IO consequences (repair and unpack jobs often run in an additional thread, which is more IO and potentially a lot more CPU/memory; these are relatively heavy operations). This may help keep performance from really tanking when those heavier steps kick in. Either way, do make sure the child processes are being ioniced properly too; you still want them running at the idle class even when one of them is the only worker running.

Note also that prebuilt NASes usually ship with "minimal" amounts of RAM, and running multiple containers and more concurrent/parallel workloads will tax RAM harder and benefit from having more of it available. If you start to swap out to your spinning rust, that will hammer your other IO a ton too. If you see swap utilization start to happen, it should be cheap to figure out what RAM it uses and just order 2x4GB or 2x8GB or something. Depending on your workload, it will probably help performance/responsiveness to add RAM even before you start to swap.

I have the quad-core version of that chip, and I noticed swapping at 8GB just sitting at the Windows desktop. I didn't at 16GB (not officially supported, but it works on my NUC if you stay within the requirements of 2400 C16 memory); it is a very nice little light desktop with a bit more memory. Don't skimp on memory though; 2GB really is not a lot for a server, even on Linux.

NZBGet is really fantastically lean for an nzb client though (apart from par2/unrar, which you can't really do anything about); it's like 70 or 90 MB running, from what I remember. I used to run it on the OG shitbox Model 1B Raspberry Pi, the 700 MHz one with 256MB of RAM. The power of actual good C coding in action. Samba is reasonably lightweight too; if it's just those two, you will be fine.

The J4025 is actually not godawful as far as prebuilt NASes go, which is basically high praise. It is only 2C/2T, but I really like those Gemini Lake chips: they are reasonably fast (above Core 2 IPC, around midrange Core 2 performance), they have AES-NI (which reduces the CPU cost of SSH connections and SSL endpoints, whether downloading or hosting), they have hardware transcoding for Plex and an advanced media encode/decode/IO block (backported from Xe/Ice Lake, including HDMI 2.0b), and Intel's Linux drivers are massively solid (and open source). I have a couple of NUCs with the quad-core variant that I picked up for $125 a pop (plus RAM/SSD), and I really like them as low-power servers, light desktops, or TV PCs; they are just super nice low-power processors with a wide feature set. They are like the Athlon 5350 server I used for a lot of years: solid, fast, and cheap. I've watched Intel do their thing with Bay Trail and Cherry Trail and so on, and I've owned a lot of the iterations there (Liva/Liva-X/etc); they were OK but not really that great, and it kinda fizzled out, but the new Silvermont/Goldmont-descended variants are actually great. Gemini Lake actually slaps for a low-power architecture given the wattage, performance, and feature set, and the high-clocked desktop variants (J-series) are competitive with Core 2, no poo poo.

My (fellow) here is sitting on a Core 2 Duo sleeper loaded with all the instructions and encryption sets and media codecs and protocols that Core 2 never knew about.

well poo poo, time to figure out how to do that then since ive literally never heard of ionice before lmao

also my stack's a docker-compose file that someone else wrote and i hosed with a little
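
If editing the compose file is a hassle, one low-effort option is to retag the already-running process from the NAS shell; a sketch, assuming the container is named nzbget and its image ships pidof:

code:
# put the running nzbget worker into the idle IO class without
# touching the compose file (re-run after container restarts)
docker exec nzbget sh -c 'ionice -c 3 -p $(pidof nzbget)'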

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

BlankSystemDaemon posted:

ZFS stripes data across vdevs, so you end up with a RAID0 across a bunch of devices, which means you've lowered MTBDF.
On FreeBSD I'd recommend gconcat(8); it may also exist in TrueNAS, but you won't be able to use the WebUI for any of it.

Ah, I didn't realize it would stripe across them too. Welp.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Keito posted:

Why can't you recommend virtualization? It's pretty great.

TrueNAS would be a poor fit here, because modeski wants JBOD and it doesn't cater to that use case at all. I thought OMV sounded like a good fit for them; what does Unraid bring to the table, other than a price tag, to make it worth considering in this case?

Full mix-and-match JBOD with parity (max drive size is only limited by the size of your parity disk, and you can do dual parity if you want), cache drives (which can also be mirrored), and a Docker-based "plugin" app system. A non-poo poo community full of helpful people, and actual support from the company that makes it. And it has virtualization on top of it.

There's a shitload more.

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

Matt Zerella posted:

Full mix-and-match JBOD with parity (max drive size is only limited by the size of your parity disk, and you can do dual parity if you want), cache drives (which can also be mirrored), and a Docker-based "plugin" app system. A non-poo poo community full of helpful people, and actual support from the company that makes it. And it has virtualization on top of it.

There's a shitload more.

Plus a feature-unrestricted trial.

The major downside with Unraid is simply how it stores data: it doesn't stripe in the main array, so for a single file you're limited to single-disk read/write speeds, and there isn't native ZFS support.

That being said, there is community support for ZFS, and you CAN set up your drives in striped pools if you want; it's just not the standard way of setting it up.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Matt Zerella posted:

Full mix-and-match JBOD with parity (max drive size is only limited by the size of your parity disk, and you can do dual parity if you want), cache drives (which can also be mirrored), and a Docker-based "plugin" app system. A non-poo poo community full of helpful people, and actual support from the company that makes it. And it has virtualization on top of it.

There's a shitload more.

If their goal is to maximize available storage space, as stated in the original post, it sounds to me like parity isn't really needed or wanted in this specific case. As for OMV, Docker support is there while cache drives are not. No idea about virtualization, but regardless of OS choice they should then go with the original idea of running their NAS OS on top of a hypervisor instead of going the opposite way about it.

SolusLunes posted:

Plus a feature-unrestricted trial.
Competing against feature-unrestricted non-trial software, yeah.

SolusLunes posted:

The major downside with Unraid is simply how it stores data: it doesn't stripe in the main array, so for a single file you're limited to single-disk read/write speeds, and there isn't native ZFS support.
Isn't that supposed to be its strength, that home users get to haphazardly mix disk types and sizes? No idea why you'd be using that OS for any other reason; it seems like the worst possible place to decide you want to run a ZFS setup.

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

Keito posted:

If their goal is to maximize available storage space, as stated in the original post, it sounds to me like parity isn't really needed or wanted in this specific case. As for OMV, Docker support is there while cache drives are not. No idea about virtualization, but regardless of OS choice they should then go with the original idea of running their NAS OS on top of a hypervisor instead of going the opposite way about it.

Competing against feature-unrestricted non-trial software, yeah.

Isn't that supposed to be its strength, that home users get to haphazardly mix disk types and sizes? No idea why you'd be using that OS for any other reason; it seems like the worst possible place to decide you want to run a ZFS setup.

It is its largest strength and its greatest weakness, IMO: unstriped JBOD is GREAT for making sure your data stays intact, but it DOES limit your r/w speeds compared to traditional striped setups.

I still think it's worth it (I use Unraid for my home server, so, yeah), but I do want to be clear about the slight speed downsides.

Variable 5
Apr 17, 2007
We do these things not because they are easy, but because we thought they would be easy.
Grimey Drawer
So Unraid would be my best bet to utilize all these random external hard drives I have? Just shuck them and stick them in a case?
