|
BlankSystemDaemon posted:As for the hardware, it seems to be a bit overpriced for what it is - it being a several-generations-old Haswell-era Xeon on a Supermicro board without much in the way of remote management, which is kind of a big deal for servers. 3u seems unwieldy. Guess it doesn't really matter once it's in place.
|
# ? Dec 12, 2021 19:48 |
|
|
I've always heard conflicting things about drive power management, so I want to poll the thread's Free/TrueNAS users: are you manually managing your power states, or just letting the drive firmware handle it? Ultimately I don't care about performance so much as drive longevity, so I should be parking heads pretty regularly since I pretty much have a WORM setup, right? Duck and Cover posted:3u seems unwieldy. Guess it doesn't really matter once it's in place. 3u is perfect in a 25u+ rack, IMO. If there was a version of the MD1000 with some smarts in it, I'd run one of those by itself, 100%.
|
# ? Dec 12, 2021 21:25 |
Crunchy Black posted:I've always heard conflicting things about drive power management so I want to poll the thread of Free/TrueNAS users, are you manually managing your powerstates or just letting the drive firmware handle it? Drives today are made to run constantly; unless you're not doing ANYTHING on the drive for days at a time, it's almost always better not to park the heads. Crunchy Black posted:3u is perfect in a 25u+ rack, IMO. If there was a version of the md1000 with some smarts in it, I'd run one of those by itself, 100%.
|
|
# ? Dec 12, 2021 21:31 |
|
Crunchy Black posted:I've always heard conflicting things about drive power management so I want to poll the thread of Free/TrueNAS users, are you manually managing your powerstates or just letting the drive firmware handle it? I've never done anything for the 7-ish years I've been running FreeNAS/TrueNAS and haven't ever had a drive issue. Whatever it or the drives are doing seems to be enough, for drive health anyway.
|
# ? Dec 12, 2021 21:50 |
|
I have two very vague questions and I'm hoping someone here can help me with them. 1.) Has anyone upgraded the RAM in their Synology DS920+? Was it worth it, and what use case would warrant the RAM upgrade? 2.) Does anyone use their DS920+ (or any Synology NAS, really) to run a dedicated private game server of any kind? My friends are playing a lot of Project Zomboid lately as the game just rolled out an online multiplayer mode. I found a Docker image for the game, and from what I've read online the 4GB my DS920+ has should be sufficient (if you are unfamiliar with the game, it basically looks like the first Sims game, so it's not exactly a huge resource hog). What are your experiences running game servers (Minecraft or anything, really) on your NAS?
|
# ? Dec 12, 2021 22:11 |
|
Scruff McGruff posted:Sure, but I don't recommend it. You just move whatever data on the drive(s) off of them, then stop the array, remove the drives from it, then power down and replace the physical drives, power back up, and add the new drives to the array as new drives. I guess if you have the physical capacity in the server you could probably even add the new drives to the array alongside the old drives, then transfer all the data from the old drives to the new ones, then remove the old drives from the array. Now if your concern there is downtime during the rebuild, unRAID will emulate the drive via parity while it's rebuilding, so you can still run your server normally during that process. Pre-clear hasn't been necessary for a few years now, Unraid can clear the drive without taking the array down natively.
|
# ? Dec 13, 2021 00:26 |
Buff Hardback posted:Pre-clear hasn't been necessary for a few years now, Unraid can clear the drive without taking the array down natively.
|
|
# ? Dec 13, 2021 00:27 |
|
CerealKilla420 posted:I have two very vague questions and I'm hoping someone here can help me with them. 1) It’s incredibly easy, and honestly, at like 35 bucks for an 8 GB stick it seemed like a no-brainer. 2) I don’t have anything to offer here, sorry. Though you might consider putting in a cheap SSD for read/write caching?
|
# ? Dec 13, 2021 01:31 |
|
Any proprietary bullshit that I should look out for while searching eBay? I'd hate to get a server and find out "oh we only use overpriced Dell memory"
Duck and Cover fucked around with this message at 02:02 on Dec 13, 2021 |
# ? Dec 13, 2021 01:53 |
|
Finally decided what to do with two HP MicroServers' worth of 2TB disks. Into the Chenbro 1U! Not sure what I'm going to do with it yet; most of these disks are 5-8 years old and were running constantly until 2019, so whatever it is, it'll need redundancy. At least the bag of grommets fits in the spots for the last two disks I don't have.
|
# ? Dec 13, 2021 02:42 |
|
CerealKilla420 posted:I have two very vague questions and I'm hoping someone here can help me with them. 1. If you're going to run a lot of containers, yes. 2. I tried running a Minecraft server on my 920 using the container for it and the server could not handle it. That's with no mods, and me just connecting through my local network. It was extremely rubberbandy.
|
# ? Dec 13, 2021 04:52 |
Duck and Cover posted:Any proprietary bullshit that I should look out for while searching ebay? I'd hate to get a server and find out "oh we only use overpriced Dell memory" As an example, HPE servers will "warn" about not being able to enable "HPE SmartMemory", which is a requirement to enable RAIM on top of ECC, if you haven't bought the right model of HPE branded memory, but the system will still work fine. Depending on the DRAM width (ie. 4x2+1, 8x1+1, or some other combination), and what ranking is used, RAIM can end up taking between 33.3…% and 50% of the available memory, so there is at least some argument for why it could matter. Rexxed posted:Finally decided what to do with two HP microserver's worth of 2TB disks. Into the Chenbro 1U! There's no such thing as too much backup. TVGM posted:1. If you're going to run a lot of containers, yes.
|
|
# ? Dec 13, 2021 07:57 |
|
4TB Seagate IronWolfs are on sale for $80 on Amazon right now, the lowest I've seen them.
|
# ? Dec 13, 2021 13:27 |
|
Question about the Synology NAS: I have an encrypted folder on the NAS. Is there a way to run some command on a computer to decrypt and mount that folder (and then unmount later)? Right now I have to log in to my Synology, decrypt it, then mount it as a network share, then unmount it and log back in and re-encrypt. Running macOS.
|
# ? Dec 13, 2021 23:04 |
|
BlankSystemDaemon posted:Power related stuff in FreeBSD is ultimately handled by CAM, and is controlled through camcontrol(8) using the powermode, idle, standby, and sleep sub-commands. Always with the good info, BSD, thanks! I don't see the power options selector now - do I need to upgrade the pool? It's probably time to do so... And yes, I agree; I have the Rosewill 4U case as the home base for my local storage, the PowerVault is just a cool, well-built, inexpensive plaything, all things considered.
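For anyone following along, those camcontrol(8) sub-commands look roughly like this - a sketch only, and the device name ada0 and the timer value are placeholders, not anything from this thread:

```shell
# Report the drive's current power state (FreeBSD, ATA CHECK POWER MODE)
camcontrol powermode ada0

# Drop the drive into standby (spindle stopped, heads parked) immediately
camcontrol standby ada0

# Arm the drive's idle timer so it idles itself after 10 minutes
# of inactivity; -t takes a time value in seconds
camcontrol idle ada0 -t 600
```

Whether parking actually helps longevity depends on the drive's rated load/unload cycles, which is the trade-off being debated above.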
|
# ? Dec 13, 2021 23:17 |
|
BlankSystemDaemon posted:
I upgraded the RAM to 20 GB! Happy to be wrong if this model can run a Minecraft server, though.
|
# ? Dec 14, 2021 03:43 |
|
TVGM posted:I upgraded the RAM to 20 GB! Happy to be wrong if this model can run a Minecraft server, though. How much RAM did you give the container access to?
|
# ? Dec 14, 2021 13:18 |
Crunchy Black posted:Always with the good info BSD, thanks! I don't see the power options selector, now, do I need to upgrade the pool? It's probably time to do so... ZFS doesn't care where the devices come from, as long as they register as character devices on FreeBSD or block devices on Linux (with whatever additional caching that implies). CAM in FreeBSD is based on a largely forgotten standard that was never widely adopted elsewhere, but in FreeBSD it has been or is being extended to basically everything that can act like storage, including the SCSI it was intended for, iSCSI, ATA drives, ATAPI, NVMe disks, and even non-volatile flash storage via MMC. Basically, it's what takes care of standardizing disk behaviour between the device drivers themselves and devfs(5), which itself is responsible for populating /dev/.
|
|
# ? Dec 14, 2021 14:08 |
|
It was mostly because it's probably using a version from ~2017, when the pool was created. If there's no significant reason to do so, then it can keep on doing its thing.
|
# ? Dec 14, 2021 14:33 |
|
Upgrade the pool for the features. Spacemap v2 will probably do something good for fragmented pools, sequential resilver speeds up said activity, and ZStandard compression would be interesting for data that compresses well (probably anything that ain't pictures and videos), getting a bit more space out of the pool. There's probably a few more things, but I can't find a list of introduction dates for all the features. --edit: I guess this one will do somewhat: https://en.wikipedia.org/wiki/OpenZFS#OpenZFS_2.0 Combat Pretzel fucked around with this message at 15:53 on Dec 14, 2021 |
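For reference, checking and enabling feature flags is only a couple of commands - a sketch, with "tank" and the dataset name standing in for whatever the real pool is called:

```shell
# Show the pool's feature flags and their state (disabled/enabled/active)
zpool get all tank | grep feature@

# Enable every feature flag the running ZFS version supports; note this
# is one-way, so older systems may refuse to import the pool afterwards
zpool upgrade tank

# ZStandard compression is set per-dataset and only affects new writes
zfs set compression=zstd tank/somedataset
```

Already-written data stays in its old format, which is why compression gains only show up as data gets rewritten.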
# ? Dec 14, 2021 15:48 |
|
I am looking to repurpose some 8TB Easystores as backups. I tossed the enclosures they came with, so what's my best bet for an enclosure for a 3.5" drive? Preferably with a USB-C connection.
|
# ? Dec 14, 2021 17:09 |
|
Combat Pretzel posted:Upgrade the pool for the features. The spacemap V2 will probably do something good for fragmented pools, sequential resilver speeds up said activity, and ZStandard compression would be interesting for data that compresses good (probably anything that ain't pictures and videos), getting a bit more space out of the pool. Well now that I'm over the upgrade hump, seems like a good idea to just go ahead and upgrade. Will proceed with the test pool first then production to make sure but that's a lot of cool upgrades to have.
|
# ? Dec 14, 2021 17:16 |
|
kri kri posted:I am looking to re-purpose some 8TB easystores as backups. My enclosures they came with I tossed, what is my best bet for an enclosure for a 3.5" drive? Preferably with a USB-C connection. Sabrent, IcyDock, and StarTech all make good external drive enclosures and docks that would serve this purpose; though most are usually USB 3.0, there are some USB-C ones, like this Sabrent dual-bay dock. I can't speak to it specifically, but I've used docks/enclosures/bay adapters from all three companies and they've all been solid.
|
# ? Dec 14, 2021 18:00 |
|
kri kri posted:I am looking to re-purpose some 8TB easystores as backups. My enclosures they came with I tossed, what is my best bet for an enclosure for a 3.5" drive? Preferably with a USB-C connection. E: Sorry, didn't read the USB-C request
|
# ? Dec 14, 2021 19:17 |
|
CopperHound posted:I bet if you ask, people here will be willing to send you easy store chassis for free/cheap. I have a few empties. USB-C to USB 3.1 Micro-B SuperSpeed cables (which would go into the Easystore chassis) are totally a thing; I have one for the one Easystore I keep in the chassis.
|
# ? Dec 14, 2021 20:22 |
|
Thanks for the help y'all. I think I am actually just going to use my toaster and some of these bad boys, as I don't really need them in an enclosure unless they are getting backed up at my desk. https://www.amazon.com/gp/product/B071ZFD6VG/?th=1
|
# ? Dec 14, 2021 22:48 |
|
I had a stupid NAS question last week, but I think I answered it myself by buying a new one and a pair of 8TB disks to basically start over with more space. Old one is a DS414 that's proven itself invaluable over the last almost 8 years, but it's too limited volume-size-wise due to the old 32-bit chip. New one is a DS920+ that I'm currently backing up to external HDDs before I swap two of the old unit's drives into the new one with the 8TB jobbies. Guess I'll know how successful that backup was when I get back from Christmas holidays on Monday lol.
|
# ? Dec 23, 2021 22:48 |
|
Okay folks, I am feeling like amazoning myself a stupid home NAS. Which is best.... TerraMaster, QNAP, Synology, Asustor, something else? Not looking to rackmount anything. Not looking to build a custom PC.
|
# ? Dec 24, 2021 07:21 |
|
synology
|
# ? Dec 24, 2021 07:37 |
QNAP and Synology are basically feature-and-price-equivalent, all the others are strictly-worse-but-cheaper.
|
|
# ? Dec 24, 2021 11:04 |
|
Meh, I knew there was a catch with TrueNAS updates. Instead of doing an incremental one, it just unpacks a new image into a new dataset and erases any custom modifications. That's dumb. I mean, custom Wireguard, Docker and nvmetcli configs are restored fast enough, but I'd rather not do it every drat update.
|
# ? Dec 24, 2021 15:01 |
|
BlankSystemDaemon posted:QNAP and Synology are basically feature-and-price-equivalent, all the others are strictly-worse-but-cheaper.
|
# ? Dec 24, 2021 16:34 |
|
Combat Pretzel posted:Meh, I knew there was a catch with TrueNAS updates. Instead of doing an incremental one, it just unpacks a new image into a new dataset and erases any custom modifications. That's dumb. I think a good balance point would be an overlay filesystem solution like what a lot of phones and other Linux-powered appliances do where the system partition is a read-only image and there's a "user" overlay where whatever changes you make can be held and that just gets mounted on top of the system. Make it an A/B system and it's even better.
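A minimal sketch of what that read-only-base-plus-overlay idea looks like with Linux's overlayfs - all of the device and path names here are invented for illustration:

```shell
# Lower layer: the read-only system image
mount -o ro /dev/sdb1 /system

# Merge a writable user layer on top; changes land in upperdir,
# and workdir is overlayfs's scratch area on the same filesystem
mount -t overlay overlay \
  -o lowerdir=/system,upperdir=/data/upper,workdir=/data/work \
  /merged

# An A/B update would just swap which system image gets mounted as
# lowerdir, leaving everything in /data/upper untouched
```

That's essentially how Android and various Linux appliances survive image-based updates without losing user modifications.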
|
# ? Dec 24, 2021 17:51 |
|
I set up TrueNAS Core and I'm afraid I made the mistake of conflating sequential performance with real-world IOPS when laying out my zpool. I have a pool of 8 mechanical drives in one raidz2 vdev. It can easily saturate my network connection copying big files back and forth over SMB, but if I start seeding torrents the write speed crawls down to around 20MB/s. What should I try first? - More RAM? Right now I have 16GB. The motherboard can fit 32, but I would need 4 x 8GB DDR3 ECC UDIMMs. They don't seem to be very common from reputable sellers. - Just put torrents on their own pool of one drive - Some stuff about L2ARC or SLOG? Idk.
|
# ? Dec 24, 2021 18:08 |
I don't know what client you're using, but if it's one where everything's done synchronously (which there's no reason for), and the files being seeded are stored separately, you can zfs set sync=disabled tank/dataset.
|
|
# ? Dec 24, 2021 18:24 |
|
Regarding torrents, I'm still looking for a client that waits until a whole piece has been downloaded before writing it out, especially since piece sizes in torrents are typically like 512KB, 1MB, or 2MB. But it seems they (at least Transmission) tend more towards writing them out partially in 16KB blocks as data comes in.
|
# ? Dec 24, 2021 19:38 |
|
Combat Pretzel posted:Regarding torrents, I'm still looking for a client that waits until the whole piece has been downloaded before it writes it down, especially since piece size in torrents are typically like 512KB, 1MB or 2MB. But it seems they (at least Transmission) tend more towards writing them out partially in 16KB blocks as data comes in. CopperHound fucked around with this message at 19:43 on Dec 24, 2021 |
# ? Dec 24, 2021 19:39 |
Combat Pretzel posted:Regarding torrents, I'm still looking for a client that waits until the whole piece has been downloaded before it writes it down, especially since piece size in torrents are typically like 512KB, 1MB or 2MB. But it seems they (at least Transmission) tend more towards writing them out partially in 16KB blocks as data comes in. What I've done is add a dataset with sync=disabled, set its mountpoint to a temporary download directory which ctorrent downloads into automatically, until the torrent data in question is fully downloaded - at which point flexget moves it to a more permanent location based on various parameters. EDIT: Also, make sure you turn off things like preallocation. BlankSystemDaemon fucked around with this message at 19:50 on Dec 24, 2021 |
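Sketching that layout out as zfs commands - the pool, dataset, and path names are all placeholders, and the 16K recordsize is just a guess matched to the client block size mentioned earlier in the thread:

```shell
# Scratch dataset for in-flight downloads: async writes, small records
# to match the client's partial-piece write pattern
zfs create -o sync=disabled -o recordsize=16K tank/incoming
zfs set mountpoint=/torrents/incoming tank/incoming

# Seeding dataset with default settings; completed files copied here
# get written sequentially, so they end up mostly unfragmented
zfs create tank/seeding
zfs set mountpoint=/torrents/seeding tank/seeding
```

The move-on-completion step is whatever the client or flexget provides; the point is just that the messy random writes and the long-lived seeded files live on different datasets.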
|
# ? Dec 24, 2021 19:44 |
|
I already had sync off for downloads. Copying completed torrents to a separate dataset for seeding has drastically helped, and I can now do normal file copying at gigabit speeds. I guess the fragmentation was that bad. One interesting thing I'm seeing is that 'zpool iostat 30' shows a read rate about 7-10 times higher than my torrent seed rate. e: oh, that was 1MB record size. I'll try with 128KB CopperHound fucked around with this message at 21:57 on Dec 24, 2021 |
# ? Dec 24, 2021 21:49 |
|
The read shouldn't matter, since all the active pieces should be part of the MRU or MFU lists that make up the ARC, unless the resident data set is larger than what your ARC is configured to use as its maximum. ZFS doesn't really have a way to track fragmentation, and won't experience it unless you heavily intermix asynchronous and synchronous I/O, and even then you have to have some pretty oddly-behaving userspace programs to really run into it. What it does have is free-space fragmentation, which is an indicator of how difficult it is to find contiguous free space to allocate recordsized groups of sectors - ie. the percentage that you see in zpool list is the percentage of the total free space made up of blocks which are smaller than recordsize (which defaults to 128k). It says nothing about the space that's used up, which is what a lot of people talk about when they mention fragmentation. EDIT: Just saw your edit; remember that ZFS records are variable-sized, so a record doesn't have to be 1MB just because that's what the dataset is configured for - it depends on the dirty write buffer, whether there's synchronous I/O involved, and a bunch of other factors. EDIT2: I'm also not really sure I explained free-space fragmentation very well, so here's another way to think about it: imagine you have 10GB free and the fragmentation says 50% - that means 50% of the free space can be used to write out records of the optimal size. The rest can be anywhere from 1 to N-1 bytes too small - but ZFS will still try to write things sequentially when it can. BlankSystemDaemon fucked around with this message at 22:08 on Dec 24, 2021 |
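The numbers in question are easy to eyeball from the CLI (pool name is a placeholder):

```shell
# The FRAG column reported here is free-space fragmentation,
# not fragmentation of the data already written
zpool list tank

# Per-vdev bandwidth and IOPS, sampled every 30 seconds
zpool iostat -v tank 30
```

Comparing the iostat read rate against what the ARC hit rate suggests (e.g. via arc_summary on TrueNAS) is one way to tell whether seeding reads are actually hitting disk.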
|
# ? Dec 24, 2021 22:02 |