Duck and Cover
Apr 6, 2007

BlankSystemDaemon posted:

As for the hardware, it seems to be a bit overpriced for what it is - a several-generations-old Haswell-era Xeon on a Supermicro board without much in the way of remote management, which is kind of a big deal for servers.
This is close to half the price, and only a single generation earlier - meaning the only thing you lose out on is some support for AVX2, but I can't imagine it'll do you much good if you aren't doing HPC workloads or something very specific.
The other option is to simply search for "SuperServer" as those are typically the units being retired by companies, instead of built by someone else.

Rack servers, especially 2U units, aren't going to ever be very quiet. You need at least 3U to get fans big enough that they can move enough air without running super fast.

3u seems unwieldy. Guess it doesn't really matter once it's in place.


Crunchy Black
Oct 24, 2017

by Athanatos
I've always heard conflicting things about drive power management, so I want to poll the thread's Free/TrueNAS users: are you manually managing your power states or just letting the drive firmware handle it?

Ultimately I personally don't care about performance so much as drive longevity, so I should be parking heads pretty regularly since I pretty much have a WORM setup, right?

Duck and Cover posted:

3u seems unwieldy. Guess it doesn't really matter once it's in place.

3u is perfect in a 25u+ rack, IMO. If there was a version of the md1000 with some smarts in it, I'd run one of those by itself, 100%.

BlankSystemDaemon
Mar 13, 2009



Crunchy Black posted:

I've always heard conflicting things about drive power management, so I want to poll the thread's Free/TrueNAS users: are you manually managing your power states or just letting the drive firmware handle it?

Ultimately I personally don't care about performance so much as drive longevity, so I should be parking heads pretty regularly since I pretty much have a WORM setup, right?
Power-related stuff in FreeBSD is ultimately handled by CAM, and is controlled through camcontrol(8) using the powermode, idle, standby, and sleep sub-commands.

Drives today are made to run constantly; unless you're not touching the drive at all for days at a time, it's almost always better not to park the heads.
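If you do want to poke at it, a rough sketch of the camcontrol side (the device name is just an example - check camcontrol(8) on your FreeBSD release for the exact sub-commands it supports):

```shell
# Ask the drive which power state it's currently in (requires root)
camcontrol powermode ada0

# Put the drive into standby immediately (spindle stopped, heads parked)
camcontrol standby ada0

# Or set a standby timer so the drive spins down on its own after
# roughly 10 minutes of inactivity (-t takes seconds)
camcontrol standby ada0 -t 600
```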

Crunchy Black posted:

3u is perfect in a 25u+ rack, IMO. If there was a version of the md1000 with some smarts in it, I'd run one of those by itself, 100%.
4U rack servers, unless they're the very long kind (over 900mm), tend to be about the size of a standard ATX PC case - so I don't know if they're that huge.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

Crunchy Black posted:

I've always heard conflicting things about drive power management, so I want to poll the thread's Free/TrueNAS users: are you manually managing your power states or just letting the drive firmware handle it?


I've never done anything for the 7ish years I've been running FreeNAS/TrueNAS and haven't ever had a drive issue. Whatever it is doing or the drives are handling seems to be enough, for drive health anyway.

CerealKilla420
Jan 3, 2014

"I need a handle man..."
I have two very vague questions and I'm hoping someone here can help me with them.

1.) Has anyone upgraded the RAM in their Synology DS920+? Was it worth it, and what use case would warrant the RAM upgrade?

2.) Does anyone use their DS920+ (or any Synology NAS, really) to run a dedicated private game server of any kind? My friends are playing a lot of Project Zomboid lately, as the game just rolled out an online multiplayer mode. I found a Docker image for the game, and from what I've read online the 4GB my DS920+ has should be sufficient (if you're unfamiliar with the game, it basically looks like the first Sims game, so it's not exactly a huge resource hog).

What are your experiences running game servers (Minecraft or anything really) on your NAS?

Raymond T. Racing
Jun 11, 2019

Scruff McGruff posted:

Sure, but I don't recommend it. You just move whatever data is on the drive(s) off of them, then stop the array, remove the drives from it, then power down and replace the physical drives, power back up, and add the new drives to the array as new drives. I guess if you have the physical capacity in the server, you could probably even add the new drives to the array alongside the old drives, then transfer all the data from the old drives to the new ones, then remove the old drives from the array. Now, if your concern there is downtime during the rebuild, unRAID will emulate the drive via parity while it's rebuilding, so you can still run your server normally during that process.

Correction from earlier: it sounds like, while unRAID will do its own disk clear on new drives, the Pre-Clear plugin is still recommended because you can run it before adding the new drives to the array (while they're still unassigned devices). That means that when you do add the drive to the array, unRAID won't have to run its own drive clear first, which would prevent the array from running until it completes. I haven't done this myself so I can't confirm it, but that's what I've read, and I'm definitely going to do that next time I add a new drive.

Pre-clear hasn't been necessary for a few years now; Unraid can natively clear the drive without taking the array down.

BlankSystemDaemon
Mar 13, 2009



Buff Hardback posted:

Pre-clear hasn't been necessary for a few years now; Unraid can natively clear the drive without taking the array down.
If pre-clear is working as a stand-in for a proper burn-in (which is more common than I would have expected), then it's still "required" - but should probably be replaced with an actual burn-in.

Sir DonkeyPunch
Mar 23, 2007

I didn't hear no bell

CerealKilla420 posted:

I have two very vague questions and I'm hoping someone here can help me with them.

1.) Has anyone upgraded the RAM in their Synology DS920+? Was it worth it, and what use case would warrant the RAM upgrade?

2.) Does anyone use their DS920+ (or any Synology NAS, really) to run a dedicated private game server of any kind? My friends are playing a lot of Project Zomboid lately, as the game just rolled out an online multiplayer mode. I found a Docker image for the game, and from what I've read online the 4GB my DS920+ has should be sufficient (if you're unfamiliar with the game, it basically looks like the first Sims game, so it's not exactly a huge resource hog).

What are your experiences running game servers (Minecraft or anything really) on your NAS?

1) It's incredibly easy, and honestly, at like 35 bucks for an 8 GB stick it seemed like a no-brainer.

2) I don't have anything to offer here, sorry. Though you might consider putting in a cheap SSD for read/write caching?

Duck and Cover
Apr 6, 2007

Any proprietary bullshit that I should look out for while searching ebay? I'd hate to get a server and find out "oh we only use overpriced Dell memory"

Duck and Cover fucked around with this message at 02:02 on Dec 13, 2021

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Finally decided what to do with two HP MicroServers' worth of 2TB disks. Into the Chenbro 1U!


Not sure what I'm going to do with it yet; most of these disks are 5-8 years old and were running constantly until 2019, so whatever it is, it'll need redundancy. At least the bag of grommets fits in the spots for the last two disks I don't have.

TVGM
Mar 17, 2005

"It is not moral, it is not acceptable, and it is not sustainable that the top one-tenth of 1 percent now owns almost as much wealth as the bottom 90 percent"

Yam Slacker

CerealKilla420 posted:

I have two very vague questions and I'm hoping someone here can help me with them.

1.) Has anyone upgraded the Ram in their Synology DS920+? Was it worth it and what use case would warrant the RAM upgrade?

2.) Does anyone use their DS920+ (or any Synology NAS really) to run a dedicated private game server of any kind? My friends are playing a lot of project Zomboid lately as the game just rolled out an online multiplayer mode. I found a docker image for the game and from what I've read online the 4GB my DS920+ has should be sufficient (if you are unfamiliar with the game it basically looks like the first SIMs game so it's not exactly a huge resource hog).

What are your experiences running game servers (Minecraft or anything really) on your NAS?

1. If you're going to run a lot of containers, yes.

2. I tried running a Minecraft server on my 920 using the container for it and the server could not handle it. That's with no mods, and me just connecting through my local network. It was extremely rubberbandy.

BlankSystemDaemon
Mar 13, 2009



Duck and Cover posted:

Any proprietary bullshit that I should look out for while searching ebay? I'd hate to get a server and find out "oh we only use overpriced Dell memory"
HPE and Dell both do varying levels of proprietary bullshit, whereas Supermicro and Tyan are pretty decent about sticking to the basics.

As an example, HPE servers will "warn" about not being able to enable "HPE SmartMemory" - a requirement for enabling RAIM on top of ECC - if you haven't bought the right model of HPE-branded memory, but the system will still work fine.
Depending on the DRAM width (i.e. 4x2+1, 8x1+1, or some other combination) and what ranking is used, RAIM can end up taking between 33.3…% and 50% of the available memory, so there is at least some argument for why it could matter.

Rexxed posted:

Finally decided what to do with two HP microserver's worth of 2TB disks. Into the Chenbro 1U!


Not sure what I'm going to do with it yet, most of these disks are 5-8 years old and were running constantly until 2019 so whatever it is it'll need redundancy. At least the bag of grommets fits in the spots for the last two disks I don't have.
Use it as a secondary offline backup that you boot up once every week and do zfs send|receive to for the most important data you have.
There's no such thing as too many backups.
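A minimal sketch of that send/receive cycle (pool, dataset, and snapshot names are made up):

```shell
# First run: snapshot the important data and seed the backup pool with it
zfs snapshot tank/important@2021-12-13
zfs send tank/important@2021-12-13 | zfs receive backup/important

# Later runs only need to send the delta between two snapshots
zfs snapshot tank/important@2021-12-20
zfs send -i @2021-12-13 tank/important@2021-12-20 | zfs receive backup/important
```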

TVGM posted:

1. If you're going to run a lot of containers, yes.

2. I tried running a Minecraft server on my 920 using the container for it and the server could not handle it. That's with no mods, and me just connecting through my local network. It was extremely rubberbandy.
I think CommieGIR can comment on it further (since I believe he runs a Minecraft server for someone), but I imagine Minecraft is still as memory hungry as it's always been - so it's likely rubberbandy because it's having to do a lot of swapping, since Synology NASes generally don't ship with a lot of memory.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
4TB Seagate IronWolfs are on sale for $80 on Amazon right now, the lowest I've seen them.

PRADA SLUT
Mar 14, 2006

Inexperienced,
heartless,
but even so
Question about the Synology NAS:

I have an encrypted folder on the NAS. Is there a way to run some command on a computer to decrypt and mount that folder (and then unmount later)?

Right now I have to log in to my Synology, decrypt it, then mount it as a network share, then unmount it and log back in and re-encrypt.

Running macOS.

Crunchy Black
Oct 24, 2017

by Athanatos

BlankSystemDaemon posted:

Power related stuff in FreeBSD is ultimately handled by CAM, and is controlled through camcontrol(8) using the powermode, idle, standby, and sleep sub-commands.

Drives today are made to run constantly; unless you're not touching the drive at all for days at a time, it's almost always better not to park the heads.

4U rack servers, unless they're the very long kind (over 900mm), tend to be about the size of a standard ATX PC case - so I don't know if they're that huge.

Always with the good info, BSD, thanks! I don't see the power options selector now; do I need to upgrade the pool? It's probably time to do so anyway...

And yes, I agree; I have the Rosewill 4U case as the home base for my local storage. The PowerVault is just a cool, well-built, inexpensive plaything, all things considered.

TVGM
Mar 17, 2005

"It is not moral, it is not acceptable, and it is not sustainable that the top one-tenth of 1 percent now owns almost as much wealth as the bottom 90 percent"

Yam Slacker

BlankSystemDaemon posted:


I think CommieGIR can comment on it further (since I believe he runs a Minecraft server for someone), but I imagine Minecraft is still as memory hungry as it's always been - so it's likely rubberbandy because it's having to do a lot of swapping, since Synology NASes generally don't ship with a lot of memory.

I upgraded the RAM to 20 GB! Happy to be wrong if this model can run a Minecraft server, though.

Sir DonkeyPunch
Mar 23, 2007

I didn't hear no bell

TVGM posted:

I upgraded the RAM to 20 GB! Happy to be wrong if this model can run a Minecraft server, though.

how much ram did you give the container access to?

BlankSystemDaemon
Mar 13, 2009



Crunchy Black posted:

Always with the good info BSD, thanks! I don't see the power options selector, now, do I need to upgrade the pool? It's probably time to do so...

And yes I agree, I have the Rosewill 4u case as the home-base for my local storage, the Powervault is just a cool, well-built, inexpensive play thing, all things considered.


I'm not sure I understand why you should upgrade the pool, to be honest.
ZFS doesn't care where the devices come from, as long as they register as character devices on FreeBSD or block devices on Linux (with whatever additional caching that implies).

CAM in FreeBSD is based on a largely forgotten standard that was never widely adopted elsewhere, but in FreeBSD it has been (or is being) extended to basically everything that can act like storage: the SCSI it was intended for, iSCSI, ATA drives, ATAPI, NVMe disks, and even non-volatile flash storage via MMC is being added to CAM.
Basically, it's what takes care of standardizing disk behaviour between the device drivers themselves and devfs(5), which in turn is responsible for populating /dev/.

Crunchy Black
Oct 24, 2017

by Athanatos
It was mostly because it's probably using a feature set from ~2017, when the pool was created. If there's no significant reason to do so, then it can keep on doing its thing.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Upgrade the pool for the features. Spacemap v2 will probably do something good for fragmented pools, sequential resilver speeds up said activity, and Zstandard compression would be interesting for data that compresses well (probably anything that isn't pictures or videos), getting a bit more space out of the pool.

There's probably a few more things, but I can't find a list of introduction dates for all the features.

--edit:
I guess this one will do somewhat: https://en.wikipedia.org/wiki/OpenZFS#OpenZFS_2.0
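For anyone following along, the upgrade itself is just this (pool name is an example - and note it's one-way, since older OpenZFS versions may refuse to import the pool afterwards):

```shell
# Show which feature flags each imported pool is missing
zpool upgrade

# Enable all supported feature flags on a specific pool
zpool upgrade tank
```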

Combat Pretzel fucked around with this message at 15:53 on Dec 14, 2021

kri kri
Jul 18, 2007

I am looking to re-purpose some 8TB Easystores as backups. I tossed the enclosures they came with, so what's my best bet for an enclosure for a 3.5" drive? Preferably with a USB-C connection.

Crunchy Black
Oct 24, 2017

by Athanatos

Combat Pretzel posted:

Upgrade the pool for the features. Spacemap v2 will probably do something good for fragmented pools, sequential resilver speeds up said activity, and Zstandard compression would be interesting for data that compresses well (probably anything that isn't pictures or videos), getting a bit more space out of the pool.

There's probably a few more things, but I can't find a list of introduction dates for all the features.

--edit:
I guess this one will do somewhat: https://en.wikipedia.org/wiki/OpenZFS#OpenZFS_2.0

Well, now that I'm over the upgrade hump, it seems like a good idea to just go ahead and upgrade. I'll proceed with the test pool first, then production, to make sure - but that's a lot of cool upgrades to have.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

kri kri posted:

I am looking to re-purpose some 8TB Easystores as backups. I tossed the enclosures they came with, so what's my best bet for an enclosure for a 3.5" drive? Preferably with a USB-C connection.

Sabrent, IcyDock, and StarTech all make good external drive enclosures and docks that would serve this purpose; though most are USB 3.0, there are some USB-C ones like this Sabrent dual-bay dock. I can't speak to it specifically, but I've used docks/enclosures/bay adapters from all three companies and they've all been solid.

CopperHound
Feb 14, 2012

kri kri posted:

I am looking to re-purpose some 8TB Easystores as backups. I tossed the enclosures they came with, so what's my best bet for an enclosure for a 3.5" drive? Preferably with a USB-C connection.
I bet if you ask, people here will be willing to send you Easystore chassis for free/cheap. I have a few empties.

E: Sorry, didn't read the USB c request

Rescue Toaster
Mar 13, 2003

CopperHound posted:

I bet if you ask, people here will be willing to send you easy store chassis for free/cheap. I have a few empties.

E: Sorry, didn't read the USB c request

USB-C to USB 3.1 Micro-B SuperSpeed cables (which would go into the Easystore chassis) are totally a thing; I have one for the one Easystore I keep in the chassis.

kri kri
Jul 18, 2007

Thanks for the help y'all. I think I am actually just going to use my toaster and some of these bad boys, as I don't really need them in an enclosure unless they are getting backed up at my desk.

https://www.amazon.com/gp/product/B071ZFD6VG/?th=1

Chris Knight
Jun 5, 2002

me @ ur posts


Fun Shoe
I had a stupid NAS question last week, but I think I answered it myself by buying a new one and a pair of 8TB disks to basically start over with more space.

Old one is a DS414 that's proven itself invaluable over the last almost 8 years, but it's too limited volume-size-wise due to the old 32-bit chip. New one is a DS920+ that I'm currently backing up to external HDDs, before I swap 2 of the old unit's drives into the new one alongside the 8TB jobbies.

Guess I'll know how successful that backup was when I get back from Christmas holidays on Monday lol.

Sickening
Jul 16, 2007

Black summer was the best summer.
Okay folks, I am feeling like amazoning myself a stupid home nas. Which is best....

TERRAMASTER, qnap, synology, Asustor, something else?

Not looking to rackmount anything. Not looking to build a custom pc.

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good
synology

BlankSystemDaemon
Mar 13, 2009



QNAP and Synology are basically feature-and-price-equivalent, all the others are strictly-worse-but-cheaper.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Meh, I knew there was a catch with TrueNAS updates. Instead of doing an incremental one, it just unpacks a new image into a new dataset and erases any custom modifications. That's dumb.

I mean, custom Wireguard, Docker and nvmetcli configs are restored fast enough, but I'd rather not do it every drat update.

Chris Knight
Jun 5, 2002

me @ ur posts


Fun Shoe

BlankSystemDaemon posted:

QNAP and Synology are basically feature-and-price-equivalent, all the others are strictly-worse-but-cheaper.
Agreed. My only experience is with Synology so that's what I recommend to folks. A friend bought one last year and loves it. I'm about to load up my second. Can't say enough good things about it.

wolrah
May 8, 2006
what?

Combat Pretzel posted:

Meh, I knew there was a catch with TrueNAS updates. Instead of doing an incremental one, it just unpacks a new image into a new dataset and erases any custom modifications. That's dumb.

I mean, custom Wireguard, Docker and nvmetcli configs are restored fast enough, but I'd rather not do it every drat update.
It's a tradeoff of course, image-based updates are generally faster to apply, easier to validate, and less likely to fail than any other method. There is a single end state for the OS partition. Anyone who isn't loving around with the OS gets a lot of advantages, and the only disadvantage is that those who are loving around with the OS have more annoying updates.

I think a good balance point would be an overlay filesystem solution like what a lot of phones and other Linux-powered appliances do where the system partition is a read-only image and there's a "user" overlay where whatever changes you make can be held and that just gets mounted on top of the system. Make it an A/B system and it's even better.
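On Linux, that's basically a single overlay mount. A sketch with made-up paths:

```shell
# /system-ro is the read-only OS image; upper/ collects the user's
# changes; work/ is scratch space overlayfs needs, on the same
# filesystem as upper/
mkdir -p /overlay/upper /overlay/work /system

# Mount the merged view: reads fall through to the image, writes
# land in the overlay and survive an image swap
mount -t overlay overlay \
    -o lowerdir=/system-ro,upperdir=/overlay/upper,workdir=/overlay/work \
    /system
```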

CopperHound
Feb 14, 2012

I set up TrueNAS Core, and I'm afraid I made the mistake of conflating sequential performance with real-world IOPS when laying out my zpool.

I have a pool of 8 mechanical drives in one raidz2 vdev. It can easily saturate my network connection copying big files back and forth over SMB, but if I start seeding torrents, the write speed crawls down to around 20 MB/s.

What should I try first?
  • More RAM? Right now I have 16 GB. The motherboard can fit 32, but I would need 4 x 8 GB DDR3 ECC UDIMMs, which don't seem to be very common from reputable sellers.
  • Just put torrents on their own pool of one drive
  • Some stuff about L2ARC or SLOG? Idk.

BlankSystemDaemon
Mar 13, 2009



I don't know what client you're using, but if it's one where everything's done synchronously (there's no reason for that), and the files being seeded are stored separately, you can zfs set sync=disabled tank/dataset.
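For instance (pool/dataset names are examples):

```shell
# Turn off synchronous write semantics for just the dataset the
# torrent client writes into - don't do this pool-wide
zfs set sync=disabled tank/torrents

# Confirm it took
zfs get sync tank/torrents
```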

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Regarding torrents, I'm still looking for a client that waits until a whole piece has been downloaded before writing it out, especially since piece sizes in torrents are typically 512KB, 1MB or 2MB. But they (at least Transmission) tend more towards writing pieces out partially, in 16KB blocks, as data comes in.

CopperHound
Feb 14, 2012

Combat Pretzel posted:

Regarding torrents, I'm still looking for a client that waits until a whole piece has been downloaded before writing it out, especially since piece sizes in torrents are typically 512KB, 1MB or 2MB. But they (at least Transmission) tend more towards writing pieces out partially, in 16KB blocks, as data comes in.
Transmission has a cache setting, I would assume it is a write buffer, but I don't know for sure.

CopperHound fucked around with this message at 19:43 on Dec 24, 2021

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Regarding torrents, I'm still looking for a client that waits until a whole piece has been downloaded before writing it out, especially since piece sizes in torrents are typically 512KB, 1MB or 2MB. But they (at least Transmission) tend more towards writing pieces out partially, in 16KB blocks, as data comes in.
That's why forcing it to use asynchronous writes on ZFS is a good idea; it means writes accumulate in the dirty write buffer and only get flushed out when the buffer fills or 5 seconds (by default) have passed.
What I've done is add a dataset with sync=disabled and set its mountpoint to a temporary download directory that ctorrent downloads into automatically; once the torrent data in question is fully downloaded, flexget moves it to a more permanent location based on various parameters.

EDIT: Also, make sure you turn off things like preallocation.
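A sketch of that layout (pool/dataset names and mountpoints are examples):

```shell
# Scratch dataset for in-progress downloads - async writes only
zfs create -o sync=disabled -o mountpoint=/torrents/incoming tank/incoming

# Normal dataset that completed torrents get moved to for seeding
zfs create -o mountpoint=/torrents/seeding tank/seeding
```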

BlankSystemDaemon fucked around with this message at 19:50 on Dec 24, 2021

CopperHound
Feb 14, 2012

I already had sync off for downloads. Copying completed torrents to a separate dataset for seeding has drastically helped, and I can now do normal file copying at gigabit speeds. I guess the fragmentation was that bad.

One interesting thing I'm seeing is that 'zpool iostat 30' shows a read rate about 7-10 times higher than my torrent seed rate.

e: oh that was 1mb record size. I'll try with 128kb

CopperHound fucked around with this message at 21:57 on Dec 24, 2021


BlankSystemDaemon
Mar 13, 2009



The read shouldn't matter, since all the active pieces should be part of the MRU or MFU lists that make up the ARC - unless the resident data set is bigger than what your ARC is configured to use as its maximum.

ZFS doesn't really have a way to track fragmentation, and won't experience much of it unless you heavily intermix asynchronous and synchronous I/O - and even then, you need some pretty oddly behaving userspace programs to really run into it.
What it does have is free-space fragmentation, an indicator of how difficult it is to find contiguous free space to allocate recordsize'd groups of sectors. The percentage you see in zpool list is the share of the total free space made up of blocks smaller than recordsize (which defaults to 128k) - it says nothing about the space that's already used, which is what a lot of people mean when they mention fragmentation.

EDIT: Just saw your edit; remember that ZFS records are variable-sized, so a record doesn't have to be 1MB just because that's what the dataset is configured for - it depends on the dirty write buffer, whether there's synchronous I/O involved, and a bunch of other factors.

EDIT2: I'm also not sure I explained free-space fragmentation very well, so here's another way to think about it: imagine you have 10GB free and the fragmentation says 50% - that means 50% of the free space can be used to write out records of the optimal size. The rest can be anywhere from 1 to N-1 bytes too small - but ZFS will still try to write things sequentially when it can.
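You can see that number for yourself; FRAG in the output below is that free-space fragmentation percentage, not fragmentation of the data already written (pool name is an example):

```shell
# Print capacity and free-space fragmentation for one pool
zpool list -o name,size,alloc,free,frag,cap tank
```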

BlankSystemDaemon fucked around with this message at 22:08 on Dec 24, 2021
