BlankSystemDaemon
Mar 13, 2009




Nam Taf posted:

I personally chose to align with one of Plex, Emby, Jellyfin or Kodi for naming conventions. It makes media metadata scraping easier and it was close to what I had already anyway.
For what it's worth, the naming schemes can be edited pretty easily - so if you're using one, it's not impossible to go to another if you decide to use some other software.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



IMO, it's probably easier to run it through one of the -arr applications to normalize your naming conventions. You can import the files that you already legally own through it to be added to plex/emby/whatever so you don't have to manually do all that.

Generic Monk
Oct 31, 2011

Nitrousoxide posted:

IMO, it's probably easier to run it through one of the -arr applications to normalize your naming conventions. You can import the files that you already legally own through it to be added to plex/emby/whatever so you don't have to manually do all that.

TinyMediaManager downloads metadata, lets you edit metadata/covers easily, and lets you pick the metadata/naming format you want, in case you wanted to do it ad-hoc. It's a weird-feeling Java app, but it's not like you'll be spending all day in it.

Windows 98
Nov 13, 2005

HTTP 400: Bad post
Right now I have a 240TB server but I am quickly running out of space. I was thinking of just adding a disk shelf, but I have a few questions. Firstly, is the 1GB RAM per 1TB rule for TrueNAS a bunch of bullshit? I have 512GB of RAM, but my concern is that I'll exceed that rule if I add another 240TB. Truth be told I was mulling over 480TB, which would put me way past that. I've popped into the homelab Discord and asked, but I feel like half the responses are saying it's bullshit and the other half are saying it's real. I don't know who to believe.

Secondly, if I picked up this disk shelf, which has an IOM6, can I still put 12Gbps SAS drives in? My pool is currently made entirely of 12Gbps drives and my backplane is 12Gbps too. I am wondering if it will even work, and furthermore, if I added it to the pool, would it downclock the rest of my drives to 6Gbps? Can I roll with this for a bit, then later replace the IOM6s with IOM12s and have everything be peachy? Also, what's the situation on connecting these shelves? I know I will obviously need some sort of card, but I only have 1 slot open on the motherboard. If I added 2 disk shelves and 48 drives (specifically the one linked), are there cards available that could connect both shelves to the same card? I have a Quadro RTX 5000 in there; it's quite large and covers another slot, so I have very little breathing room left in the box. The card also can't be all that long - the DIMM slots on the motherboard will prevent me from seating it if it exceeds the length of the slot.

BlankSystemDaemon
Mar 13, 2009




The 1GB per 1TB rule of thumb is an old guideline that was meant to ensure the pool would remain performant for production workloads, and it was made with assumptions about how much space each cached bit of data takes up.
If you're not running a business on your server, you can get away with much less (as an example, I run at least one ZFS pool with 1GB of memory on an RPi3, but its only purpose is to write log data from all devices on my network to a 3-way mirrored set of spinning rust).
And even for production workloads, those assumptions have changed with newer versions of OpenZFS.
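
Just to put numbers on it, here's what the old rule of thumb would claim for your sizes - a quick sketch, nothing more:
code:
ram_gb = 512
for pool_tb in (240, 480, 720):
    wanted_gb = pool_tb  # the old rule asks for 1GB of RAM per 1TB of pool
    verdict = "fine by the old rule" if ram_gb >= wanted_gb else "over the old rule"
    print(f"{pool_tb}TB pool -> rule wants {wanted_gb}GB RAM, you have {ram_gb}GB: {verdict}")
So even by the old rule, 512GB only becomes "too little" once you go past 512TB of pool - and again, the rule itself is outdated.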

Assuming the SAS enclosure you linked is like any other NetApp DiskShelf I've ever worked with (and there's no reason it wouldn't be, since their entire business strategy was to be very predictable), the only thing you get from interacting directly with the IOM6 controllers is SAS multi-path - i.e. datapath redundancy.
However, in order to make full use of that, you need at minimum SAS disks with dual-port connectors, and preferably dual actuators (as this also doubles the IOPS of the spinning rust) - so in short, it's nothing you need to worry about.

Depending on what sort of SAS controller you have, one controller can daisy-chain SAS enclosures into some pretty wild setups (sometimes involving SAS expanders if you go beyond 6 SAS enclosures per chain).
Most controllers seem to top out at 1024 disks, but I think that's a software limit that can be hacked, as I seem to remember seeing firmware modified to do much more - though I didn't want to use it, since I was running production systems at the time.
On the other hand, when you're at 1024 disks sharing 16 12Gbps SAS lanes, each disk only gets ~24MBps of bandwidth, so it's definitely a bad idea to even take things that far, let alone much further.

EDIT: I think, realistically, you'd want no more than 200 disks on 16 lanes of SAS 12Gbps - that gets you right around the real-world max bandwidth of spinning rust doing sequential I/O, without oversubscribing any one lane.
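
Rough numbers behind that, using the raw line rate and ignoring 8b/10b and protocol overhead (so real throughput is a bit lower):
code:
lanes = 16
gbps_per_lane = 12                                   # raw SAS-3 line rate per lane
total_mb_per_s = lanes * gbps_per_lane * 1000 / 8    # ~24,000 MB/s across the whole chain
print(f"1024 disks: ~{total_mb_per_s / 1024:.1f} MB/s per disk")  # ~23.4 MB/s - heavily oversubscribed
print(f" 200 disks: ~{total_mb_per_s / 200:.1f} MB/s per disk")   # ~120 MB/s - roughly sequential spinning-rust speed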

EDIT2: I also just saw that you kinda hint that you don't have a SAS controller - so this eBay search should get you started.
Make sure to look for something that uses the same connector as your SAS enclosure has, or at least one that can be converted (as an example, SFF-8436 to SFF-8088 or SFF-8436 to MiniSAS HD).
If you have the option, go for something tagged "IT FW" or "IT Mode" (it could also be called "Initiator Target Mode") - this ensures you don't need to flash it before being able to use it for ZFS, as ZFS requires access to the disk cache, which it can only get in initiator target mode.

BlankSystemDaemon fucked around with this message at 22:20 on Jun 23, 2023

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord

Flyndre posted:

Would you advise against using write cache? I have the same Synology model and am also considering adding those drives as cache

In my experience with Synology - read cache fine, write cache very bad.

Aware
Nov 18, 2003

Windows 98 posted:

Right now I have a 240TB server but I am quickly running out of space. I was thinking of just adding a disk shelf, but I have a few questions. Firstly, is the 1GB RAM per 1TB rule for TrueNAS a bunch of bullshit? I have 512GB of RAM, but my concern is that I'll exceed that rule if I add another 240TB. Truth be told I was mulling over 480TB, which would put me way past that. I've popped into the homelab Discord and asked, but I feel like half the responses are saying it's bullshit and the other half are saying it's real. I don't know who to believe.

Secondly, if I picked up this disk shelf, which has an IOM6, can I still put 12Gbps SAS drives in? My pool is currently made entirely of 12Gbps drives and my backplane is 12Gbps too. I am wondering if it will even work, and furthermore, if I added it to the pool, would it downclock the rest of my drives to 6Gbps? Can I roll with this for a bit, then later replace the IOM6s with IOM12s and have everything be peachy? Also, what's the situation on connecting these shelves? I know I will obviously need some sort of card, but I only have 1 slot open on the motherboard. If I added 2 disk shelves and 48 drives (specifically the one linked), are there cards available that could connect both shelves to the same card? I have a Quadro RTX 5000 in there; it's quite large and covers another slot, so I have very little breathing room left in the box. The card also can't be all that long - the DIMM slots on the motherboard will prevent me from seating it if it exceeds the length of the slot.


Only posting to say that's an incredible amount of home storage. Maybe the biggest in the thread? Frankly I just cycle out poo poo that doesn't get watched to keep within my 24tb

Windows 98
Nov 13, 2005

HTTP 400: Bad post

Aware posted:

Only posting to say that's an incredible amount of home storage. Maybe the biggest in the thread? Frankly I just cycle out poo poo that doesn't get watched to keep within my 24tb

I have another 72TB in a different server too lol. When I add the disk shelves I was thinking of converting it to backup storage. Maybe turn it into a hypervisor, it's kinda beefy too.

Thanks Ants
May 21, 2004

#essereFerrari


Since streaming services look to be deleting shows that they don't renew (I guess as some cynical way to ensure they never have to pay royalties to people, idk), it looks like home hoarders are going to end up creating the next Library of Alexandria

Windows 98
Nov 13, 2005

HTTP 400: Bad post
Thanks for the rundown on the disk shelf stuff, friends. I did it. That brings the total in this server to 720TB. The other server is 72TB. So that brings my total to 792TB.




(I find it annoying that TrueNAS displays in TiB)

Also does anyone have any idea why TrueNAS displays
code:
3 x RAIDZ1 | 24 wide | 9.1TiB
Shouldn't it say
code:
3 x RAIDZ1 | 8 wide | 9.1TiB
And furthermore, I swear my math is right on this: 3 vdevs, 24 drives minus 3 parity leaves 21 data drives, and 21 * 9.1 (TiB) = 191.1TiB. Why do I only have 183.56TiB usable? Does TrueNAS just use 7.54TiB in some form or another for striping across the pool? Or other various ZFS-related bullshit? Am I crazy?
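
Here's that math spelled out, numbers taken straight from the UI:
code:
vdevs, drives_per_vdev, tib_per_drive = 3, 8, 9.1
data_drives = vdevs * (drives_per_vdev - 1)   # 21 drives left after 1 parity drive per RAIDZ1 vdev
expected = data_drives * tib_per_drive        # 191.1 TiB
reported = 183.56                             # what the TrueNAS UI actually shows
print(f"expected {expected:.1f} TiB, reported {reported} TiB, "
      f"missing {expected - reported:.2f} TiB ({(expected - reported) / expected:.1%})")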

Windows 98 fucked around with this message at 16:43 on Jun 24, 2023

BlankSystemDaemon
Mar 13, 2009




I have no idea about TrueNAS' WebUI.

What's the output from zpool status and zfs list when run in the WebUI shell?

Windows 98
Nov 13, 2005

HTTP 400: Bad post
https://pastebin.com/FFDs12u8

Looks perfectly normal

BlankSystemDaemon
Mar 13, 2009




Yeah that looks fine.

To know why it's 184TB, I'd need to know more about the actual disks involved.
How many LBAs do they have, what's the sector size, et cetera, ad nauseam.

Most of what I need should be available via diskinfo -v /dev/gptid/*.

Windows 98
Nov 13, 2005

HTTP 400: Bad post
That command was not playing nice - it was giving an error that the directory did not exist. Here is what is going on in /dev: https://pastebin.com/Qv3gxZPk

I did try and run a different command to get some information about the drives. Let me know if this doesn't have the info required. https://pastebin.com/uG5dzpD1

BlankSystemDaemon
Mar 13, 2009




Oh, you're using TrueNAS Scale, that's based on Linux - I should've spotted that before.

No clue how to get info equivalent to diskinfo, but here's what it looks like on FreeBSD:
pre:
/dev/nda0
	512         	# sectorsize
	256060514304	# mediasize in bytes (238G)
	500118192   	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	SAMSUNG MZVLW256HEHP-000L7	# Disk descr.
	S35ENA0K621397	# Disk ident.
	nvme0       	# Attachment
	Yes         	# TRIM/UNMAP support
	0           	# Rotation rate in RPM




I'm specifically interested in the sector size and the media size in sectors (i.e. the number of blocks using Logical Block Addressing), and S.M.A.R.T. doesn't really report that.

According to the specs PDF, it should have 2441609216 sectors.
This works out to roughly 102TB of parity+padding across the entire pool and 4.27TB of slop space, which leaves 132.38TB of practical usable space according to the old formulas - so while your calculations don't account for padding + slop space, the old formulas clearly aren't correct anymore either.
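
For reference, converting that LBA count to what the UI shows per disk (assuming those are 4KiB logical blocks, which is what makes the numbers line up):
code:
sectors, sector_size = 2441609216, 4096
bytes_per_disk = sectors * sector_size
print(f"{bytes_per_disk / 1000**4:.2f} TB = {bytes_per_disk / 1024**4:.2f} TiB per disk")
# -> 10.00 TB = 9.10 TiB, which matches the 9.1TiB-per-drive the UI reports;
#    the rest of the gap is pool-level overhead, not the disks themselves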

EDIT: From the details at the bottom, this looks like it should be one of the better approximation tools.

BlankSystemDaemon fucked around with this message at 20:06 on Jun 24, 2023

Windows 98
Nov 13, 2005

HTTP 400: Bad post
That is very interesting. The calculator shows my usable storage to be accurate then. I guess it's just some ZFS bullshit. I appreciate your help.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

Windows 98 posted:

That is very interesting. The calculator shows my usable storage to be accurate then. I guess it's just some ZFS bullshit. I appreciate your help.

Yeah, ZFS adds some overhead to each disk in addition to your full parity drives. I have an 8x8 pool, but out of those 64TB, 2 drives are for parity, bringing me down to 48TB. BUT, ZFS shenanigans take up another 8TB across the remaining drives, leaving me with 40TB of usable space.

I'm afraid to ask what you're using all those bits for, unless you're that guy from a few months ago who mentioned they used to rent DVDs from Netflix back in the day and rip them all before mailing them back. You... you aren't just mirroring Netflix are you :ohdear:

Unrelated, I finally bit the bullet and am sending things up to a Backblaze bucket. I'm trying to set up my rclone bandwidth schedule as intelligently as possible so I don't tank my network during busy hours, and had a question about basic disk usage practices. I've always tried to avoid having a HD try to do 2 things at once, since on a single drive you're making it seek back and forth a bunch, but on a NAS pool that's kind of its whole thing so does that matter anymore? I'm trying to gauge what bandwidth I can leave uploading while also using the NAS for TV/movies without putting "undue stress" on the drives.

BlankSystemDaemon
Mar 13, 2009




Since it's completely legal to break copyright protection on anything you own, and since the contents of the public libraries in Denmark are owned by every citizen, I'm the person who rips every single album and audiobook cassette and CD I've ever borrowed.

0.003% of my collection (multiple tens of TB) is not lossless, but unfortunately I've not succeeded in bringing the number of items down in over a decade, because those albums haven't been reissued, are sold out everywhere, and don't ever go on sale on any sites I've looked at, despite me having custom search agents for them.

Takes No Damage posted:

Yeah, ZFS adds some overhead to each disk in addition to your full parity drives. I have an 8x8 pool, but out of those 64TB, 2 drives are for parity, bringing me down to 48TB. BUT, ZFS shenanigans take up another 8TB across the remaining drives, leaving me with 40TB of usable space.

I'm afraid to ask what you're using all those bits for, unless you're that guy from a few months ago who mentioned they used to rent DVDs from Netflix back in the day and rip them all before mailing them back. You... you aren't just mirroring Netflix are you :ohdear:

Unrelated, I finally bit the bullet and am sending things up to a Backblaze bucket. I'm trying to set up my rclone bandwidth schedule as intelligently as possible so I don't tank my network during busy hours, and had a question about basic disk usage practices. I've always tried to avoid having a HD try to do 2 things at once, since on a single drive you're making it seek back and forth a bunch, but on a NAS pool that's kind of its whole thing so does that matter anymore? I'm trying to gauge what bandwidth I can leave uploading while also using the NAS for TV/movies without putting "undue stress" on the drives.

The overhead you're talking about is padding for raidz, metaslabs for spacemaps, storage pool allocator slop, and probably a few other things I'm forgetting - they account for between 3 and 10%.

Even if you had direct control over I/O because you were only using synchronous operations, you still don't get to control metadata, compression, and a whole bunch of other stuff that can affect things, like the above overhead.

ZFS is also very different from other filesystems in that it doesn't really get fragmented - what does get fragmented is the free space, to the point that when you're under 20% free space with very high free-space fragmentation, your write operations can begin deteriorating (but any read operation can still qualify for inclusion in the ARC, so reads won't really suffer any performance issues).

Honestly I'd just say to not worry about it, because I'm not sure you can control the I/O scheduler from userspace, and even if you could, I'm not sure it's worth it.
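
That said, if you want to throttle the rclone upload itself rather than fight the disk scheduler, rclone's --bwlimit flag takes a timetable - something like this (the paths, remote name, and times are just placeholders):
code:
import subprocess

# "HH:MM,limit" pairs; "off" means unlimited. Throttle during the day, open up overnight.
schedule = "08:00,2M 18:00,512k 23:00,off"

subprocess.run([
    "rclone", "copy", "/mnt/tank/media", "b2:my-backup-bucket",
    "--bwlimit", schedule,
    "--transfers", "2",   # fewer concurrent file transfers also means less seeking on the pool
], check=True)
The --transfers cap is optional, but it keeps the pool from juggling too many streams at once.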

wolrah
May 8, 2006
what?

BlankSystemDaemon posted:

ZFS is also very different from other filesystems in that it doesn't really get fragmented - what does get fragmented is the free space, to the point that when you're under 20% free space with very high free-space fragmentation, your write operations can begin deteriorating (but any read operation can still qualify for inclusion in the ARC, so reads won't really suffer any performance issues).
How are you defining fragmentation to get to the idea that ZFS "doesn't really get fragmented"? Isn't it inherent to a copy-on-write filesystem that operations that would have just overwritten existing data on traditional filesystems create fragments?

I'm a lot newer to the ZFS world than you, but I'm not aware of it doing anything unique to avoid fragmentation in any of the cases where it would also be expected to happen on a traditional filesystem, so it seems to me that for any given set of operations ZFS would end up either roughly equally fragmented or more fragmented than, say, ext4 or NTFS. The designers just chose not to care about that, presumably because in the original target applications throwing hardware at the problem was relatively easy and the advantages CoW offered to snapshotting and other things were worth the tradeoff.

BlankSystemDaemon
Mar 13, 2009




wolrah posted:

How are you defining fragmentation to get to the idea that ZFS "doesn't really get fragmented"? Isn't it inherent to a copy-on-write filesystem that operations that would have just overwritten existing data on traditional filesystems create fragments?

I'm a lot newer to the ZFS world than you, but I'm not aware of it doing anything unique to avoid fragmentation in any of the cases where it would also be expected to happen on a traditional filesystem, so it seems to me that for any given set of operations ZFS would end up either roughly equally fragmented or more fragmented than, say, ext4 or NTFS. The designers just chose not to care about that, presumably because in the original target applications throwing hardware at the problem was relatively easy and the advantages CoW offered to snapshotting and other things were worth the tradeoff.
On a traditional filesystem, fragmentation happens because whenever you do a partial overwrite to an existing file, that file gets written in full in a new place on the physical platters.
Defragmentation helps with this by aligning everything so data is contiguous, though it should also be noted that the data also benefits from sequential I/O patterns as well as being aligned on the outer part of the disk where it's rotating the fastest.

ZFS, being copy-on-write, will instead write those partial writes as a delta of changed bits to a subsequent record, and leave the data unchanged.
This has the side-effect that when you delete files, you typically end up with much larger chunks of contiguous free space once the records are finally freed when they're no longer being used (this is an asynchronous task done in the background by a ZFS kernel thread, and isn't part of the unlink(2) call). It should also be said that before spacemaps, this was one of the things that kept ZFS from being very performant on a filer with a large amount of churn - but thankfully, the last of those issues were solved by spacemap v2.

Mind you, it's not perfect - you can get into scenarios where compression means that a record that would've fit neatly into where a previous record was instead leaves a tiny bit of space in between - but in that instance, it's typically a matter of a single byte or at most a few bytes.
But it's still a damned sight better than traditional filesystems with fragmentation - and for us packrats who don't know the meaning of deletions, it's a non-issue.

Incidentally, if you're using FreeBSD, you can spot the difference between the outer and inner part of the platter by doing diskinfo -tv [/path/to]<device>.
Here's an example from one of the disks in my always-on server:
pre:
/dev/ada0
        512             # sectorsize
        6001175126016   # mediasize in bytes (5.5T)
        11721045168     # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        11628021        # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        WDC WD60EFRX-68L0BN1    # Disk descr.
        WD-WXQ1H26U79HX # Disk ident.
        ahcich0         # Attachment
        id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00 # Physical path
        No              # TRIM/UNMAP support
        5700            # Rotation rate in RPM
        Not_Zoned       # Zone Mode

Transfer rates:
        outside:       102400 kbytes in   0.581062 sec =   176229 kbytes/sec
        middle:        102400 kbytes in   0.697880 sec =   146730 kbytes/sec
        inside:        102400 kbytes in   1.115892 sec =    91765 kbytes/sec
As you can see, it makes a pretty big difference.

Also, I've no idea when it happened, but at some point enclosure physical path support was added to the HPE Smart Array S100i SR Gen10 SATA controller.

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

On a traditional filesystem, fragmentation happens because whenever you do a partial overwrite to an existing file, that file gets written in full in a new place on the physical platters.
Defragmentation helps with this by aligning everything so data is contiguous, though it should also be noted that the data also benefits from sequential I/O patterns as well as being aligned on the outer part of the disk where it's rotating the fastest.

ZFS, being copy-on-write, will instead write those partial writes as a delta of changed bits to a subsequent record, and leave the data unchanged.
This has the side-effect that when you delete files, you typically end up with much larger chunks of contiguous free space once the records are finally freed when they're no longer being used (this is an asynchronous task done in the background by a ZFS kernel thread, and isn't part of the unlink(2) call). It should also be said that before spacemaps, this was one of the things that kept ZFS from being very performant on a filer with a large amount of churn - but thankfully, the last of those issues were solved by spacemap v2.

Mind you, it's not perfect - you can get into scenarios where compression means that a record that would've fit neatly into where a previous record was instead leaves a tiny bit of space in between - but in that instance, it's typically a matter of a single byte or at most a few bytes.
But it's still a damned sight better than traditional filesystems with fragmentation - and for us packrats who don't know the meaning of deletions, it's a non-issue.

Incidentally, if you're using FreeBSD, you can spot the difference between the outer and inner part of the platter by doing diskinfo -tv [/path/to]<device>.
Here's an example from one of the disks in my always-on server:
pre:
/dev/ada0
        512             # sectorsize
        6001175126016   # mediasize in bytes (5.5T)
        11721045168     # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        11628021        # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        WDC WD60EFRX-68L0BN1    # Disk descr.
        WD-WXQ1H26U79HX # Disk ident.
        ahcich0         # Attachment
        id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00 # Physical path
        No              # TRIM/UNMAP support
        5700            # Rotation rate in RPM
        Not_Zoned       # Zone Mode

Transfer rates:
        outside:       102400 kbytes in   0.581062 sec =   176229 kbytes/sec
        middle:        102400 kbytes in   0.697880 sec =   146730 kbytes/sec
        inside:        102400 kbytes in   1.115892 sec =    91765 kbytes/sec
As you can see, it makes a pretty big difference.

Also, I've no idea when it happened, but at some point enclosure physical path support was added to the HPE Smart Array S100i SR Gen10 SATA controller.

I don't think this is true. Most FS just mutate the file in-place. Not doing that is what makes COW, COW

Windows 98
Nov 13, 2005

HTTP 400: Bad post

Takes No Damage posted:

I'm afraid to ask what you're using all those bits for, unless you're that guy from a few months ago who mentioned they used to rent DVDs from Netflix back in the day and rip them all before mailing them back. You... you aren't just mirroring Netflix are you :ohdear:

Lots of Linux ISOs

wolrah
May 8, 2006
what?

BlankSystemDaemon posted:

On a traditional filesystem, fragmentation happens because whenever you do a partial overwrite to an existing file, that file gets written in full in a new place on the physical platters.
That is absolutely not true and would be completely nonsensical. When you overwrite a file in place, it writes directly on top of the original data; the only time other locations get involved is if the file is expanding. If things worked how you describe, then editing metadata on large movies would take ages as it rewrote the entire file. Instead it happens in a blink, because nothing but the metadata was actually changed on disk.

Fragmentation happens in a traditional filesystem when a large file is written to a disk without enough contiguous free space, or if a file is appended to after other data has been written to the following clusters. This is why almost every torrent client has an option to preallocate the entirety of downloaded files rather than writing them out in the order they come in; without that, any large file being downloaded from multiple peers is basically a fragmentation generator, whereas with preallocation the files end up as sequential as free space allows.

BlankSystemDaemon
Mar 13, 2009




wolrah posted:

That is absolutely not true and would be completely nonsensical. When you overwrite a file in place, it writes directly on top of the original data; the only time other locations get involved is if the file is expanding. If things worked how you describe, then editing metadata on large movies would take ages as it rewrote the entire file. Instead it happens in a blink, because nothing but the metadata was actually changed on disk.

Fragmentation happens in a traditional filesystem when a large file is written to a disk without enough contiguous free space, or if a file is appended to after other data has been written to the following clusters. This is why almost every torrent client has an option to preallocate the entirety of downloaded files rather than writing them out in the order they come in; without that, any large file being downloaded from multiple peers is basically a fragmentation generator, whereas with preallocation the files end up as sequential as free space allows.

Sorry, yeah, I didn't explain it well enough - what I was talking about was very specifically the case of opening a file, modifying it in place, and the result exceeding the original blocks it was stored in.

Pre-allocation doesn't work out on any CoW filesystem though, which is why for torrents you're much better off using an asynchronous (sync=disabled in zfsprops(7)) scratch dataset for temporary files.
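
Concretely, the scratch dataset is just something like this (pool and dataset names are made up):
code:
import subprocess

# Scratch dataset for in-flight torrent data: async writes only, then move the
# finished files to their real dataset afterwards.
subprocess.run(["zfs", "create",
                "-o", "sync=disabled",    # don't honour synchronous write requests here
                "tank/scratch"], check=True)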

But I guess you're right, ZFS does kinda ignore the problem of traditional fragmentation; it's a trade-off you make when you design a copy-on-write filesystem that never rewrites data once written.
That'd also break the hash-tree of checksums, so it's impossible without block pointer rewrite - and we probably don't need to go into why that's such a bad idea.

BlankSystemDaemon
Mar 13, 2009




Double-post, but the latest OpenZFS leadership meeting is about raidz expansion and next-gen dedup:

https://www.youtube.com/watch?v=2p32m-7FNpM

Yaoi Gagarin
Feb 20, 2014

In ext* if you grow the file and there isn't enough room after it, a new extent gets allocated somewhere else for that part of the file. So you can get fragmentation within the file itself, not just in the free space. On spinny disks this is of course bad because now you need to move the head twice to read the entire file.

On SSDs I don't think it matters at all

BlankSystemDaemon
Mar 13, 2009




VostokProgram posted:

In ext* if you grow the file and there isn't enough room after it, a new extent gets allocated somewhere else for that part of the file. So you can get fragmentation within the file itself, not just in the free space. On spinny disks this is of course bad because now you need to move the head twice to read the entire file.

On SSDs I don't think it matters at all
You actively want to avoid de-fragmentation cycles on an SSD.

IOwnCalculus
Apr 2, 2003





SSDs don't have to physically seek from one part of a disk to another, so yeah - doesn't matter where on the SSD the data is, it all comes out the same. And what BSD said, defragmenting would just be undue cycles on the flash itself for no benefit.

Question for the thread since I'm re-evaluating some of my backup options. What's the current go-to for the smallest, lowest-power possible server that will hold one, maybe two 3.5" disks? Ideally something that I can just stick Ubuntu on instead of a NAS appliance since one of the things I want to try is sending zfs snapshots. I'd wait around for another ML30 deal but I want something a lot smaller than that since I'm going to be sticking one under a desk at my mom's place.

BlankSystemDaemon
Mar 13, 2009




IOwnCalculus posted:

SSDs don't have to physically seek from one part of a disk to another, so yeah - doesn't matter where on the SSD the data is, it all comes out the same. And what BSD said, defragmenting would just be undue cycles on the flash itself for no benefit.

Question for the thread since I'm re-evaluating some of my backup options. What's the current go-to for the smallest, lowest-power possible server that will hold one, maybe two 3.5" disks? Ideally something that I can just stick Ubuntu on instead of a NAS appliance since one of the things I want to try is sending zfs snapshots. I'd wait around for another ML30 deal but I want something a lot smaller than that since I'm going to be sticking one under a desk at my mom's place.
If you're only doing a single or maybe two disks, an RPI and a couple of USB connected disks can work out absolutely fine, provided the disks have an external power source - I'd recommend getting a single brick that can power everything.

BlankSystemDaemon fucked around with this message at 18:36 on Jun 27, 2023

Hed
Mar 31, 2004

Fun Shoe

BlankSystemDaemon posted:

Double-post, but the latest OpenZFS leadership meeting is about raidz expansion and next-gen dedup:

https://www.youtube.com/watch?v=2p32m-7FNpM

Thanks for this. It's great that OpenZFS is continuing work on this (and IX is supporting it).

It's weird though, back in 2017 or whatever I would be all over this, now I'm at the point where I am casually looking at a new build to replace my trusty Supermicro X10SDV and planning on just doing a zfs send to sync data over to some new vdevs.

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

If you're only doing a single or maybe two disks, an RPI and a couple of USB connected disks can work out absolutely fine, provided the disks have an external power source - I'd recommend getting a single brick that can power everything.

I might do this at home but I definitely want a single-box/single-power setup for the one I'm sticking elsewhere. Might even just bite the bullet and get a Synology for that one.

Hughlander
May 11, 2005

Hed posted:

Thanks for this. It's great that OpenZFS is continuing work on this (and IX is supporting it).

It's weird though, back in 2017 or whatever I would be all over this, now I'm at the point where I am casually looking at a new build to replace my trusty Supermicro X10SDV and planning on just doing a zfs send to sync data over to some new vdevs.

As an X10 haver let me know what you wind up with.

BlankSystemDaemon
Mar 13, 2009




Hed posted:

Thanks for this. It's great that OpenZFS is continuing work on this (and IX is supporting it).

It's weird though, back in 2017 or whatever I would be all over this, now I'm at the point where I am casually looking at a new build to replace my trusty Supermicro X10SDV and planning on just doing a zfs send to sync data over to some new vdevs.
Let's be honest, raidz expansion is a gateway drug to getting a rack full of storage.

IOwnCalculus posted:

I might do this at home but I definitely want a single-box/single-power setup for the one I'm sticking elsewhere. Might even just bite the bullet and get a Synology for that one.
Part of a good backup strategy is 3-2-1: data stored in 3 places, on 2 different media (meaning different filesystems and physical media, one of them offline), with 1 copy off-site - so there are a lot worse ideas you could come up with.
At least Synology has BTRFS working to some degree with all their proprietary code, which is more than I can say for anything else.

Hughlander posted:

As an X10 haver let me know what you wind up with.
I'd keep an eye on the Zen4 APU market, if it were me.
And that will be me, because I'm still interested in doing a DIY replacement for my HP Microserver Gen10 Plus, for the simple reason that I'd like my always-online machine to be capable of doing everything from serving files to being an HTPC.

One option is the X11SDV-8C-TP8F, which uses a Xeon D-2146NT that supports QuickAssist - meaning you can offload SHA-512 checksums. Unfortunately, GFNI isn't supported on those CPUs because of Intel's market segmentation nonsense (it's only available on 3rd-gen Xeon Scalable and Core Ultra 9 CPUs, as far as I know).
Maybe Zen5 APUs will feature it, the way AMD has been including SHA-NI on every single CPU - which is part of why AMD was always appealing: they never skimped on features, even at the low end.

BlankSystemDaemon fucked around with this message at 23:37 on Jun 27, 2023

deong
Jun 13, 2001

I'll see you in heck!
Is there a current top contender for a youtube-dl docker?

BlankSystemDaemon
Mar 13, 2009




deong posted:

Is there a current top contender for a youtube-dl docker?
If yt-dlp isn't in your software repo, just grab it via pipx.

It's way more likely to be kept up-to-date, rather than having to rely on a third-party to update something.

deong
Jun 13, 2001

I'll see you in heck!

BlankSystemDaemon posted:

If yt-dlp isn't in your software repo, just grab it via pipx.

It's way more likely to be kept up-to-date, rather than having to rely on a third-party to update something.

The Docker images package a nice webUI for it. I was using one before, but I'm redoing everything and just figured I'd see if there have been any forks etc.
I ran across this: https://github.com/Tzahi12345/YoutubeDL-Material

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
ignore this post, I thought youtube-dl's dev was stopped a while back, but maybe not?
it's still all about yt-dlp these days anyway

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Hrm, the latest IX newsletter mentions that they’re going to move several core services from the main OS into apps with TrueNAS Scale Cobia. So you can’t even disable this Kubernetes poo poo without kneecapping the system.

I'm between a rock and a hard place, because there's no easy way to customise Unraid (which I'd need for setting up NVMe-oF).

power crystals
Jun 6, 2007

Who wants a belly rub??

Combat Pretzel posted:

Hrm, the latest IX newsletter mentions that they’re going to move several core services from the main OS into apps with TrueNAS Scale Cobia. So you can’t even disable this Kubernetes poo poo without kneecapping the system.

I'm between a rock and a hard place, because there's no easy way to customise Unraid (which I'd need for setting up NVMe-oF).

From what I've seen the Kubernetes subsystem works fine; the issue is just that TrueCharts is run by absolute morons. The stuff managed by iX themselves has been fine. So if they're putting DDNS etc. into the mainline applications then sure, but if they're punting to TrueCharts then it's time to look into running a VM just to run those services, I guess.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice
What are the advantages people see in Docker that made it worth switching to TrueNAS Scale? Seems like a big pain over just setting up jails.
