hbag
Feb 13, 2021

good question i just had: when i eventually get around to putting some bigger drives in this thing, how would i go about wiping the old drives? and storing them safely for later use, i guess
i dont wanna DESTROY the drives because thatd be wasteful as gently caress and i might want to put them in something else eventually, but i also dont want all the data sitting on them


SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

Variable 5 posted:

So Unraid would be my best bet to utilize all these random external hard drives I have? Just shuck them and stick them in a case?

Pretty much! It's basically designed for that use case; it's how I set up my Unraid server.
Just use your largest drive (or your two largest) for parity, depending on how many drives you have.

hbag posted:

good question i just had: when i eventually get around to putting some bigger drives in this thing, how would i go about wiping the old drives? and storing them safely for later use, i guess
i dont wanna DESTROY the drives because thatd be wasteful as gently caress and i might want to put them in something else eventually, but i also dont want all the data sitting on them

what type of drives are you using (hdd/ssd)? what operating system are you running for your NAS? you may have easy options there

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Keito posted:

Why can't you recommend virtualization? It's pretty great.


My personal main hangup is manually setting up a docker jail for something with a million settings; if I lose that .env file, poo poo is not just going to be thrown, it's going to be actively thrown down the gullet of every service that requires that VM to be up.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Finally jumped on the TrueNAS train from my FreeNAS box, so far pretty happy. Feels about the same.

hbag
Feb 13, 2021

SolusLunes posted:

Pretty much! It's basically designed for that use case; it's how I set up my Unraid server.
Just use your largest drive (or your two largest) for parity, depending on how many drives you have.

what type of drives are you using (hdd/ssd)? what operating system are you running for your NAS? you may have easy options there

HDDs, one western digital, one seagate (as i figured buying two different brands would mitigate the risk of both being from the same bad batch if one happened to fail), and it's a DS220+ running DSM

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

SolusLunes posted:

Plus a feature-unrestricted trial.

The major downside with Unraid is simply how it stores data: it doesn't stripe in the main array, so read/write speeds for a single file are limited to single-disk speeds, and there isn't native ZFS support.

That being said, there is community support for ZFS, and you CAN set up your drives into striped pools if you want; it's just not the standard way of setting things up.

Having complete files per drive is one of the great features of Unraid; it makes data recovery way simpler (or just flat-out possible!). Performance can be helped by cache drives, but here's the thing about Unraid: if you need more performance than it delivers, it's not the solution for you. That's fine; there are lots of other solutions.

Unraid focuses on what it does well (being a very flexible NAS for the home user), and I appreciate that.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
If you're a cloud/linux toucher/knower UnRAID is really nice because you can set and forget, you can tinker, or you can strike a balance between the two as you go.

It's especially nice if you don't give a poo poo about ZFS but want some parity for emergencies (and you don't want to deal with janitoring ZFS, which isn't needed often but can be extremely annoying when it is). As for performance: if you're setting up for Plex and Linux ISO automation it's basically perfect, and it gets you as close to Synology as you can get while using old hardware instead of shelling out for the enclosure.

I'll absolutely recommend it over virtualizing your NAS appliance if you're not super familiar with virtualization and you're willing to pay.

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

hbag posted:

HDDs, one western digital, one seagate (as i figured buying two different brands would mitigate the risk of both being from the same bad batch if one happened to fail), and it's a DS220+ running DSM

Simple enough, I suppose. I'm not sure what the DS220 has built in for this (I'm not familiar with Synology software), but you can simply hook the drive up to your computer via USB and do a full (non-quick) format. Or, if you're feeling particularly paranoid, there's always DBAN to nuke spinning drives; they'll still be perfectly functional afterwards. You could also format the drive to NTFS, use Windows BitLocker to encrypt it, and then format it again; that's probably the easiest method without additional software, if you have a Windows box.
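
If you'd rather do it from a Linux shell, the full-overwrite version looks roughly like this. Just a sketch: /dev/sdX is a placeholder for whatever name the drive actually gets, so triple-check with lsblk before running anything, because overwriting the wrong device is irreversible.

Bash code:
# Identify the drive first -- make sure size/model match the disk you mean
lsblk -o NAME,SIZE,MODEL

# One pass of zeros is plenty for a drive you intend to reuse
sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress conv=fsync

# Or a single pass of random data, if you prefer (slower)
sudo shred -v -n 1 /dev/sdX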

HalloKitty posted:

Having complete files per drive is one of the great features of Unraid; it makes data recovery way simpler (or just flat-out possible!). Performance can be helped by cache drives, but here's the thing about Unraid: if you need more performance than it delivers, it's not the solution for you. That's fine; there are lots of other solutions.

Unraid focuses on what it does well (being a very flexible NAS for the home user), and I appreciate that.


Matt Zerella posted:

If you're a cloud/linux toucher/knower UnRAID is really nice because you can set and forget, you can tinker, or you can strike a balance between the two as you go.

It's especially nice if you don't give a poo poo about ZFS but want some parity for emergencies (and you don't want to deal with janitoring ZFS, which isn't needed often but can be extremely annoying when it is). As for performance: if you're setting up for Plex and Linux ISO automation it's basically perfect, and it gets you as close to Synology as you can get while using old hardware instead of shelling out for the enclosure.

I'll absolutely recommend it over virtualizing your NAS appliance if you're not super familiar with virtualization and you're willing to pay.

I agree with all of this- it's definitely a better option than a clunky virtualization setup, and yeah, data recovery is easy with Unraid since the disks are all just regular XFS with regular files on them.

hbag
Feb 13, 2021

SolusLunes posted:

Simple enough, I suppose. I'm not sure what the DS220 has built in for this (I'm not familiar with Synology software), but you can simply hook the drive up to your computer via USB and do a full (non-quick) format. Or, if you're feeling particularly paranoid, there's always DBAN to nuke spinning drives; they'll still be perfectly functional afterwards. You could also format the drive to NTFS, use Windows BitLocker to encrypt it, and then format it again; that's probably the easiest method without additional software, if you have a Windows box.

And for storage just buy a couple anti-static bags?

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

hbag posted:

And for storage just buy a couple anti-static bags?

Pretty much. That's almost overkill, even, if you've got them in a temperate/dry/non-jostling environment. My spare disks just sit on a 4U shelf in my rack.

wolrah
May 8, 2006
what?

hbag posted:

good question i just had: when i eventually get around to putting some bigger drives in this thing, how would i go about wiping the old drives? and storing them safely for later use, i guess
i dont wanna DESTROY the drives because thatd be wasteful as gently caress and i might want to put them in something else eventually, but i also dont want all the data sitting on them

Encrypt the entire disk using any method of your choice, lose the key, and then format it again. Any data on the drive should be unreadable randomness. If you're really paranoid, good old DBAN will give you as many passes of whatever "NSA Wipe" as you want to try, but there's a point where if you're that worried about someone reading the data you should just destroy the drives.
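
If anyone wants the concrete version of that: the usual trick on Linux is a plain dm-crypt mapping keyed from /dev/urandom, then writing zeros through it, which leaves the whole disk full of ciphertext with no key ever stored anywhere. A sketch, with /dev/sdX as a placeholder for the actual device:

Bash code:
# Map the raw disk through dm-crypt with a throwaway random key
sudo cryptsetup open --type plain -d /dev/urandom /dev/sdX wipeme

# Zeros in, random-looking ciphertext out; this fills the entire disk
sudo dd if=/dev/zero of=/dev/mapper/wipeme bs=4M status=progress

# Tear down the mapping -- the key only ever existed in RAM
sudo cryptsetup close wipeme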

Less Fat Luke
May 23, 2003

Exciting Lemon
When I sell old drives I just tell people my files are steganographically hidden in all that porn.

hbag
Feb 13, 2021

Paul MaudDib posted:

synology’s support documentation basically says that’s how much of the time there was a request pending and if it’s high but other things are fine then don’t worry about it. When you have multiple parallel workloads running it’s pretty much gonna be 100%.

That’s a disk IO metric though, so if your interactive workload feels bad when it’s downloading then you probably need to add an ionice to the nzbget worker process (basically, “idle” priority in the disk access subsystem). You may have to edit whatever container script to add the ionice to the launch command, but you should be able to run the command at the shell to test it in the meantime (ionice -n 3 -p PID, get the pid from ps -A). also you will want to make sure the repair (par2) and unpack processes are running at the higher ionice level, not sure if child processes inherit the ionice value, you can check this with something like top or iotop if available, or use “ionice -p pid” (no -n) to look it up for the child process pid.

It is likely this won’t make that “volume” metric go down, and if the total number of IO requests is up then your disk utilization % will probably rise as well. There will also of course be a performance hit on the nzbget process. But if overall it feels more responsive for your interactive workload that’s a win to usability.

If you are running multiple parallel download workers, that will increase disk traffic too. Downloading four file segments at 1x actually pulls way more IO than two at 2x, etc. If you can make it work with ionice that's obviously better, as at some point this will have a performance impact, but more threads isn't necessarily better here, and may be amplifying the IO problems.

There is sometimes also an option to pause downloads during an unpack or PAR2 repair and you can consider the performance vs IO consequences (repair and unpack jobs often run in an additional thread which is more IO, and potentially a lot more CPU/memory - these are relatively heavy operations). This may help to keep performance from really tanking when those heavier steps kick in - but either way do make sure that the child processes are being ioniced properly too, you still want them to run at ionice 3 (idle) even if it’s the only worker running.

Note also that prebuilt NASs usually ship with "minimal" amounts of RAM, and running multiple containers and more concurrent/parallel workloads will tax RAM harder and will benefit from having more available. If you are starting to swap out to your spinning rust, that will affect your other IO a ton too. If you see swap utilization start to happen, it should be cheap to figure out what RAM it uses and just order 2x4GB or 2x8GB or something. Depending on your workload it will probably help performance/responsiveness to add RAM even before you start to swap.

I have the quad core version of that, and I noticed swapping at 8GB while sitting at the Windows desktop. I didn't at 16GB (that's not officially supported, but it works on my NUC if you stay within the (global) requirement of 2400 C16 memory); it's a very nice little light desktop with a bit more memory. Don't skimp on memory though, 2GB really is not a lot for a server, even on Linux.

NZBGet is really fantastically lean for a nzb client though (apart from par2/unrar, which you can’t really do anything about), it’s like 70 or 90 mb running from what I remember - I used to run it on the OG shitbox Model 1B raspberry pi, the 700 MHz one with 256MB RAM. The power of actual good C coding in action. Samba is reasonably light weight too, if it’s just those two you will be fine.

The J4025 is actually not godawful as far as prebuilt NASs go, which is basically high praise. It is only 2C2T, but I really like those Gemini Lake chips: they are reasonably fast (above Core 2 IPC, around midrange Core 2 performance), they have AES-NI (which cuts the CPU cost of SSH connections and SSL endpoints, whether downloading or hosting), they have hardware transcoding for Plex plus an advanced media encode/decode/IO block (backported from Xe/Ice Lake, including HDMI 2.0b), and Intel's Linux drivers are massively solid (and open source). I have a couple NUCs with the quad core variant that I picked up for $125 a pop (plus RAM/SSD), and I really like them as low power servers, light desktops, or TV PCs; they're just super nice low power processors with a wide feature set. They are like the Athlon 5350 server I used for a lot of years: solid, fast, and cheap. I've watched Intel do their thing with Bay Trail and Cherry Trail and so on, and I own a lot of the iterations there (Liva/Liva-X/etc); those were OK but not really that great and kinda fizzled out, but the new Silvermont/Goldmont variants are actually great. Gemini Lake actually slaps for a low power architecture given the wattage, performance, and feature set, and the high clocked desktop variants (J-) are competitive with Core 2, no poo poo.

My (fellow) here is sitting on a core2duo sleeper loaded with all the instructions and encryption sets and media codecs and protocols that core2 never knew about.

yeeeeah ive been googling this poo poo and i really cant figure out how to get this thing to run in ionice

i mean lmao this is the fuckin container in the docker-compose file:

pre:
# NZBGet - https://hotio.dev/containers/nzbget/
# <mkdir /volume1/docker/appdata/nzbget>
  nzbget:
    container_name: nzbget
    image: ghcr.io/hotio/nzbget:latest
    restart: unless-stopped
    logging:
      driver: json-file
    networks:
      - arrNet
    ports:
      - 6789:6789/tcp
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/nzbget:/config:rw
      - ${DOCKERSTORAGEDIR}/usenet:/data/usenet:rw

spincube
Jan 31, 2006

I spent :10bux: so I could say that I finally figured out what this god damned cube is doing. Get well Lowtax.
Grimey Drawer

hbag posted:

good question i just had: when i eventually get around to putting some bigger drives in this thing, how would i go about wiping the old drives?

I can't speak for how best to store old drives, but Synology DSM has a 'secure erase' function built-in:

Applications -> Storage Manager -> HDD/SSD - 'deactivate' the drive, then 'secure erase'.
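
That "secure erase" is the drive's own ATA Secure Erase command, so DSM is mostly just pushing the button for you. For anyone without a Synology, hdparm can issue the same thing from any Linux box. A rough sketch, with /dev/sdX as a placeholder (the drive has to report "not frozen", which sometimes means suspending and resuming the machine first):

Bash code:
# Check the drive's security state ("not frozen" is required)
sudo hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary password, then let the drive erase itself
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX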

hbag
Feb 13, 2021

hbag posted:

yeeeeah ive been googling this poo poo and i really cant figure out how to get this thing to run in ionice

i mean lmao this is the fuckin container in the docker-compose file:

pre:
# NZBGet - https://hotio.dev/containers/nzbget/
# <mkdir /volume1/docker/appdata/nzbget>
  nzbget:
    container_name: nzbget
    image: ghcr.io/hotio/nzbget:latest
    restart: unless-stopped
    logging:
      driver: json-file
    networks:
      - arrNet
    ports:
      - 6789:6789/tcp
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/nzbget:/config:rw
      - ${DOCKERSTORAGEDIR}/usenet:/data/usenet:rw

nvm i THINK i got it

hbag
Feb 13, 2021

Bash code:
bash-5.1# ps -A
PID   USER     TIME  COMMAND
    1 root      0:00 s6-svscan -t0 /var/run/s6/services
   38 root      0:00 s6-supervise s6-fdholderd
  226 root      0:00 s6-supervise nzbget
  228 hotio     1h36 /app/nzbget --server --option OutputMode=log --configfile /config/nzbget.conf
  821 root      0:00 bash
 1284 root      0:00 ps -A
bash-5.1# ionice -n 3 -p 228
ionice: ioprio_set: Invalid argument
bash-5.1#
or not

hbag
Feb 13, 2021



this doesnt seem to have worked either since now when i try to navigate to the web interface uh



i'll give it a few more minutes i guess

hbag
Feb 13, 2021

yeeeah i


dont think this docker image has ionice or some poo poo

hbag
Feb 13, 2021

loving christ i cant even figure one thing out can i
why am i such garbage at everything i try to do

hbag
Feb 13, 2021

even if i got ionice working would it even loving work properly inside a docker container

Impotence
Nov 8, 2010
Lipstick Apathy
joke option: ps aux and grep for the process running inside docker from the host, then ionice that process from the host
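
which, joke or not, should actually work, since the host sees the container's processes in its own PID namespace. Something like this sketch (the process match and container name are assumptions from hbag's ps output; also note the idle class is -c 3, while plain -n 3 just sets best-effort priority 3):

Bash code:
# Find nzbget's PID as the host sees it (or use: docker top nzbget)
pid=$(pgrep -f '/app/nzbget --server' | head -n1)

# Put it in the idle IO class, then confirm it took
sudo ionice -c 3 -p "$pid"
ionice -p "$pid"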

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

hbag posted:

loving christ i cant even figure one thing out can i
why am i such garbage at everything i try to do

You're not garbage at everything you do :frogbon:

modeski
Apr 21, 2005

Deceive, inveigle, obfuscate.

Nitrousoxide posted:

There can be reasons to go with the Proxmox-into-a-NAS-VM approach, but you probably need a very specific purpose in mind for that - for instance, if you're trying to build a home lab or something where you want to share the hardware on your storage box between multiple VMs through passthrough.

The vast majority of people will have a far smoother experience just installing the NAS OS on the bare metal, and either using a separate server to host the applications they want running, or picking a NAS OS that supports virtualization and containers and doing that on the NAS box itself.

Well, now you've got me thinking that a NAS OS on bare metal that supports virtualization *may* be the way to go. I'm not looking to get into a hardcore home-lab-type setup. Maybe it's best to describe my goals:

- Make all my media accessible to all the devices on my LAN (Android tablets, Android phones, ODROID N2 running CoreELEC)
- JBOD (I'll repurpose my existing NAS into a backup NAS for my valuable data)
- Definitely run Sonarr, Radarr, SABnzbd
- Possibly run: a torrent client, Pi-hole, maybe a VPN, Soulseek, and a VM containing all my Linux distro websites etc., to keep them separate from my main desktop
- I want everything to feel 'snappy'/quick, e.g. when browsing directories remotely, and for things not to slow to a crawl when SABnzbd is extracting/repairing files

Any recommendations for a NAS OS that supports virtualization/containers?

modeski fucked around with this message at 01:11 on Jul 9, 2021

BlankSystemDaemon
Mar 13, 2009



wolrah posted:

Encrypt the entire disk using any method of your choice, lose the key, and then format it again. Any data on the drive should be unreadable randomness. If you're really paranoid, good old DBAN will give you as many passes of whatever "NSA Wipe" as you want to try, but there's a point where if you're that worried about someone reading the data you should just destroy the drives.
NIST hasn't recommended DBAN or "NSA wipes" for a long time; in fact, I believe they actively discourage them now. The guidance is to disassemble the drive, put the platters through a shredder, and recycle the metal along with the circuit board and components.

hbag
Feb 13, 2021

BlankSystemDaemon posted:

NIST hasn't recommended DBAN or "NSA wipes" for a long time; in fact, I believe they actively discourage them now. The guidance is to disassemble the drive, put the platters through a shredder, and recycle the metal along with the circuit board and components.

did you miss the part where we were explicitly trying to not destroy the drives

BlankSystemDaemon
Mar 13, 2009



hbag posted:

did you miss the part where we were explicitly trying to not destroy the drives
:negative:

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

CommieGIR posted:

Finally jumped on the TrueNAS train from my FreeNAS box, so far pretty happy. Feels about the same.

Was just about to ask if anyone had made the leap from Free to True. Were you on the latest FreeNAS already? Is it just the standard update process through the GUI, where you select TrueNAS from the dropdown and hit GO? I'm a little trigger-shy because I don't think you can roll back or restore to FreeNAS once you've booted into TrueNAS (or can you?).

BlankSystemDaemon
Mar 13, 2009



Takes No Damage posted:

Was just about to ask if anyone had made the leap from Free to True. Were you on the latest FreeNAS already? Is it just the standard update process through the GUI, where you select TrueNAS from the dropdown and hit GO? I'm a little trigger-shy because I don't think you can roll back or restore to FreeNAS once you've booted into TrueNAS (or can you?).
If they've implemented boot environments on the appliance disk(s) that FreeNAS boots from, you can absolutely roll back.
That's the entire point of ZFS boot environments: being able to roll back any upgrade no matter what breaks, unless there's catastrophic data loss involved.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Nitrousoxide posted:

There can be reasons to go with the Proxmox-into-a-NAS-VM approach, but you probably need a very specific purpose in mind for that - for instance, if you're trying to build a home lab or something where you want to share the hardware on your storage box between multiple VMs through passthrough.

The vast majority of people will have a far smoother experience just installing the NAS OS on the bare metal, and either using a separate server to host the applications they want running, or picking a NAS OS that supports virtualization and containers and doing that on the NAS box itself.

If you're planning to do virtualization at all, I'd argue you should go with a hypervisor like ESXi or Proxmox VE from the beginning, instead of a half-baked solution within your NAS OS of choice. If you already have the technical competency to go all-out with virtualization, I'd argue there's very little benefit to installing the OS on bare metal.

EVIL Gibson posted:

My personal main hangup is manually setting up a docker jail for something with a million settings; if I lose that .env file, poo poo is not just going to be thrown, it's going to be actively thrown down the gullet of every service that requires that VM to be up.

This is the same whether you set up a physical or a virtual server, though. If you don't do backups and you lose your settings, there'll be a shitshow regardless.

hbag posted:

even if i got ionice working would it even loving work properly inside a docker container

You can't adjust process nice values within a container without granting it the CAP_SYS_NICE capability. A better solution would probably be to use Docker's resource constraints; assuming you're not using swarm mode, you'll need the Compose v2 file format for those:
https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
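
For a quick experiment before touching the compose file, the plain docker CLI exposes the same knobs. A sketch (the image name is from hbag's compose file; the device path and the 40mb cap are assumptions to adjust for your box, and volumes/env are omitted for brevity):

Bash code:
# Grant the capability and throttle IO against the backing disk
docker run -d --name nzbget \
  --cap-add=SYS_NICE \
  --device-read-bps /dev/sda:40mb \
  --device-write-bps /dev/sda:40mb \
  ghcr.io/hotio/nzbget:latest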

wolrah
May 8, 2006
what?

BlankSystemDaemon posted:

NIST hasn't recommended DBAN or "NSA wipes" for a long time; in fact, I believe they actively discourage them now. The guidance is to disassemble the drive, put the platters through a shredder, and recycle the metal along with the circuit board and components.
My intent was for that to come across as "if it'll make you feel better, you can do this the same as always", not an actual suggestion.

If someone is not confident that FDE renders the data irrecoverable, then it basically comes down to how much effort they're willing to put into satisfying their own paranoia versus destroying a few hundred bucks' worth of hardware.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BlankSystemDaemon posted:

If they've implemented boot environments on the appliance disk(s) that FreeNAS boots from, you can absolutely roll back.
That's the entire point of ZFS boot environments: being able to roll back any upgrade no matter what breaks, unless there's catastrophic data loss involved.

The only thing that will prevent you from rolling back is the new ZFS feature flags. As long as you don't enable them, you can freely roll back.
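
You can check where a pool stands without changing anything; a sketch ("tank" is a placeholder pool name):

Bash code:
# With no arguments this only LISTS pools that could be upgraded -- it's safe
zpool upgrade

# Per-feature state on a specific pool: disabled / enabled / active
zpool get all tank | grep feature@

# Only "zpool upgrade tank" actually enables new flags; skip it while you
# still might roll back to the older OS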


Takes No Damage posted:

Was just about to ask if anyone had made the leap from Free to True. Were you on the latest FreeNAS already? Is it just the standard update process through the GUI, where you select TrueNAS from the dropdown and hit GO? I'm a little trigger-shy because I don't think you can roll back or restore to FreeNAS once you've booted into TrueNAS (or can you?).

It was pretty easy: I just changed the update train and it took care of the rest.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

The only thing that will prevent you from rolling back is the new ZFS feature flags. As long as you don't enable them, you can freely roll back.
For that there's checkpointing. A checkpoint makes every subsequent write append-only, so the pool can be rewound to the last transaction group from before the checkpoint was set.

Besides, you're not supposed to upgrade your OS and your zpool at the same time.
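
In command form, that workflow is roughly this ("tank" is a placeholder pool name):

Bash code:
# Take a checkpoint before the OS upgrade
zpool checkpoint tank

# If the upgrade goes sideways: export, then rewind to the checkpoint
zpool export tank
zpool import --rewind-to-checkpoint tank

# Once you're satisfied, discard it (a checkpoint pins old blocks and eats space)
zpool checkpoint --discard tank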

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BlankSystemDaemon posted:

For that there's checkpointing. A checkpoint makes every subsequent write append-only, so the pool can be rewound to the last transaction group from before the checkpoint was set.

Besides, you're not supposed to upgrade your OS and your zpool at the same time.

No, you don't upgrade them at the same time. But TrueNAS notifies you about the available ZFS feature upgrade as soon as you boot into it for the first time.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
I remember hardware RAID circa 2006, where if you didn't have the correct drives in exactly the correct IDE plugs, well, there was obviously no RAID.

I will admit, pulling a disk from an Unraid array and seeing files is cool. You could do the same thing with those really old Windows file servers dedicated to JBOD arrays, and I realize a lot of the files on the 160GB-750GB drives that were in them back then would not have been recoverable any other way.

For those worried about the difficulty of restoring ZFS after having been through normal RAID restores: ZFS is really loving hard to kill, even if your server dies or your boot drive dies (maybe both!).

Even without properly exporting the pool, ZFS writes a lot of metadata to every drive (including spares) that it can use to put things back together - see the sketch after this list:

  • zpool import disk - will see that you have a zpool on the disk and ask if you want to restore it
  • zpool import poolname - will look through all the drives you have attached and ask if you want to recreate the pool, adding back the disks that were in it before
  • after importing, it will automatically restore all Samba/NFS shares - seriously, when it did this, I knelt down in awe, because NFS is a bitch
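
A sketch of that flow ("tank" stands in for whatever the pool was called):

Bash code:
# Scan attached disks for importable pools (only lists, changes nothing)
zpool import

# Reassemble the pool from whatever member disks it finds
zpool import tank

# -f if the old host died without exporting the pool cleanly
zpool import -f tank

# Share settings ride along as pool properties
zfs get sharenfs,sharesmb tank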

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

hbag posted:

yeeeeah ive been googling this poo poo and i really cant figure out how to get this thing to run in ionice

i mean lmao this is the fuckin container in the docker-compose file:

pre:
# NZBGet - https://hotio.dev/containers/nzbget/
# <mkdir /volume1/docker/appdata/nzbget>
  nzbget:
    container_name: nzbget
    image: ghcr.io/hotio/nzbget:latest
    restart: unless-stopped
    logging:
      driver: json-file
    networks:
      - arrNet
    ports:
      - 6789:6789/tcp
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/nzbget:/config:rw
      - ${DOCKERSTORAGEDIR}/usenet:/data/usenet:rw

lol, sorry, I should have guessed that Docker might lock down whatever kernel internals ionice needs to touch. I don't really use Docker (but I actually really do need to learn it sometime here) and this isn't my forte at all, but as mentioned it sounds like you might be able to add a capability/permission that lets it set ionice (CAP_SYS_NICE). Doing a search it does look like child processes will inherit the ionice of the parent so you probably don't need to worry about that.

To start though, just see if you can get a root shell and ionice the process manually ("ps -A" or "ps -a", and get the PID for nzbget, then "ionice -n 3 -p PID") and see if it improves things sufficiently before you get too deep into fighting the docker configs. It sounds like there are also probably some docker-level io limits that you can set, but you will have to experimentally determine what works, while "ionice" should in theory just automatically service all processes in order according to the priority levels you set.

edit: nevermind I see you tried that too. hmm. If adding that capability doesn't help, maybe you could try the docker-level IO resource limits mentioned. Sorry, really not familiar with docker, I just use a normal OS and install bare-metal.

Paul MaudDib fucked around with this message at 22:59 on Jul 9, 2021

hbag
Feb 13, 2021

Paul MaudDib posted:

lol, sorry, I should have guessed that Docker might lock down whatever kernel internals ionice needs to touch. I don't really use Docker (but I actually really do need to learn it sometime here) and this isn't my forte at all, but as mentioned it sounds like you might be able to add a capability/permission that lets it set ionice (CAP_SYS_NICE). Doing a search it does look like child processes will inherit the ionice of the parent so you probably don't need to worry about that.

To start though, just see if you can get a root shell and ionice the process manually ("ps -A" or "ps -a", and get the PID for nzbget, then "ionice -n 3 -p PID") and see if it improves things sufficiently before you get too deep into fighting the docker configs. It sounds like there are also probably some docker-level io limits that you can set, but you will have to experimentally determine what works, while "ionice" should in theory just automatically service all processes in order according to the priority levels you set.

edit: nevermind I see you tried that too. hmm. If adding that capability doesn't help, maybe you could try the docker-level IO resource limits mentioned. Sorry, really not familiar with docker, I just use a normal OS and install bare-metal.

turns out docker has some built-in poo poo for it, i figured it out eventually






anyway, i want to move all the logic poo poo to another machine but im not sure if thats even viable considering im broke as gently caress
i mean i COULD get a pi but i dont know if that has the processing power id need for transcoding videos n poo poo

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

hbag posted:

turns out docker has some built-in poo poo for it, i figured it out eventually

anyway, i want to move all the logic poo poo to another machine but im not sure if thats even viable considering im broke as gently caress
i mean i COULD get a pi but i dont know if that has the processing power id need for transcoding videos n poo poo

glad you figured it out.

nah, that's the opposite of what I'm saying: buy some RAM and you can probably run everything on this machine.

this machine will also do Plex transcoding, and Intel low-key has great support for all of their products on *nix (they had an open source driver a while before AMD). Like I said, I really really like these Gemini Lake chips.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

CommieGIR posted:

Finally jumped on the TrueNAS train from my FreeNAS box, so far pretty happy. Feels about the same.

I put together a new build months ago and started with TrueNAS 12. It's honestly been bulletproof. It's a whitebox machine running ESXi with an LSI card passed through to the TrueNAS VM.

Currently running 4x14TB and 4x8TB disks.

I did find the SMB/NTFS share permissions funky compared to what I'm used to, but it functions and is stable.

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
Is there a new process for updating Docker containers in DSM 7? I download the new image, stop the running container, select "reset" (formerly "clear"), and restart the container. That procedure worked in DSM 6 but doesn't seem to in 7.
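
Worst case, the pull-and-recreate dance over SSH should still work as a fallback; a rough sketch, with the compose file path and service name as placeholders:

Bash code:
# Pull the new image, then recreate only that service
docker-compose -f /volume1/docker/docker-compose.yml pull myservice
docker-compose -f /volume1/docker/docker-compose.yml up -d myservice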


SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

Is there some way to mount remote storage in TrueNAS (for example, an SMB share on a different computer) like you can with Proxmox? Sure, you could just connect at the VM level, but I don't want to do that (and I want to use their built-in Docker functionality).
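
On a plain Linux box this would just be a manual CIFS mount, something like the sketch below (server, share, and credentials are placeholders); whether TrueNAS tolerates you doing that behind the middleware's back is another question.

Bash code:
# Manual SMB/CIFS mount on a generic Linux host
sudo mkdir -p /mnt/remote-share
sudo mount -t cifs //192.168.1.50/share /mnt/remote-share \
  -o username=me,password=secret,vers=3.0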

  • Reply