Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
https://www.storagereview.com/review/seagate-exos-x26z-review-25tb-host-managed-smr-hdd

Realized I forgot to link the article on them, my bad.


movax
Aug 30, 2008

I occasionally come back to this thread, check my past posts and use it as a bit of reflection on how long I put off projects...

I think I have decided upon a sane plan of going bare-metal TrueNAS, mostly as I've gotten to know myself better + my propensity to tweak things, and I don't want a hyperconverged appliance where loving with that takes down my NAS. Though, now that I type this, I wonder if it's worth doing Proxmox just as a simple way to have HA for my simple VMs like Home Assistant and Homebridge... plenty of CPU horsepower to run it, I may just need a bit of additional storage outside of these SATA DOMs.

The hardware I have / will keep:

CPU: E3-1285 v6 (bought it before realizing my chipset / mobo doesn't run iGPU...)
Motherboard: X11SSL-CF (C232 PCH, gently caress you Intel, QuickSync would have been nice!)
RAM: 64GB
Case: Node 804
Drive config:
* 8x 16TB Exos X16 -- RAID-Z2
* 2x 7.68TB Micron 9400 SATA -- RAID 1 (I'm not actually worried about failure here, but RAID 0 just seems like a bad idea + I'm too lazy to do separate mount points.)
* 2x 32GB InnoDisk SATA DOM (boot / mirror pool for TrueNAS).
NIC: Chelsio T520
OS: TrueNAS Scale (just in case I want a container of something, but my intent is to spin up a separate host for that).

I don't believe I need ZIL / SLOG, as everything should be async on this box. L2ARC as well; I don't know if it's worth strapping on a random SATA SSD for this given the amount of RAM I have. The last time I touched ZFS in detail was 2008; I'm assuming that after 15 years, ashift / 4K sectors / etc. are just not a thing to think about anymore?

I think the X11SSL-CF is just old enough that it'll never get the nice HTML5 BMC interface, which irks me a bit but I bought it / own it already so I'll deal with it.

Sane / typical config? There is just really no more room in the case to add NVMe drives / additional stuff, so I'm going to limit the scope. Best case, I could add another pair of SATA SSDs if it was compelling enough to do so.

movax fucked around with this message at 21:30 on Dec 17, 2023

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Put in an SSD for L2ARC and then set secondarycache to metadata on the root of the pool. Bam, el-cheapo version of metadata special vdev.
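
If your pool is called tank and you have a spare SSD partition around, it's just (names made up):

code:
zpool add tank cache gpt/l2arc0
zfs set secondarycache=metadata tank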

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

movax posted:

Sane / typical config? There is just really no more room in the case to add NVMe drives / additional stuff, so I'm going to limit the scope. Best case, I could add another pair of SATA SSDs if it was compelling enough to do so.

Looks good to me. I have a very similar setup from a storage perspective, but using a solo NVMe drive (could add a second but I've been lazy) instead of a mirror for apps/VMs. My primary 6-drive RAIDZ2 (also 16TB X16s) is on an 8-port SAS2 controller, with the boot SSD mirror and a 2-HDD mirror on the X570 motherboard's SATA ports. I am not messing with cache/log drives since mostly this just gets used as a Plex library and local VM host, and I followed all of the defaults as far as sectors and drive cache.

I only have 32GB of memory but haven't noticed any issues, even with 10GB dedicated to VMs. I am considering going up to 64 since I have 12 cores to work with and they're mostly idle right now, but to add more VMs with 32GB I'd need to limit the ZFS cache size.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Put in an SSD for L2ARC and then set secondarycache to metadata on the root of the pool. Bam, el-cheapo version of metadata special vdev.
You'll also wanna configure the L2ARC to be persistent, otherwise you have to wait for your ARC to warm up, and only then can you start seeing results out of the FIFO L2ARC.
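
On OpenZFS 2.x that's a tunable, and I believe recent versions default it on anyway; double-check the paths, this is from memory:

code:
# Linux / SCALE
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
# FreeBSD / CORE
sysctl vfs.zfs.l2arc.rebuild_enabled=1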

You also miss out on the biggest reason for using the special vdev:
Being able to put files smaller than the recordsize onto your mirrored SSDs and have them be fast, while the spinning rust handles the big records at entirely sequential I/O speeds.
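
Adding one to an existing pool is roughly this (partition names invented; expect zpool to want -f, since a mirror special vdev mismatches a raidz pool's replication level):

code:
zpool add tank special mirror gpt/ssd0 gpt/ssd1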

movax
Aug 30, 2008

Combat Pretzel posted:

Put in an SSD for L2ARC and then set secondarycache to metadata on the root of the pool. Bam, el-cheapo version of metadata special vdev.

I used to run an L2ARC on my original NAS, but with 64GB of RAM, and it being mostly media, is it even worth it to slap in a SATA SSD for this? I have so many old ones sitting around, just have to find room in the case.

BlankSystemDaemon posted:

You'll also wanna configure the L2ARC to be persistent, otherwise you have to wait for your ARC to warm up, and only then can you start seeing results out of the FIFO L2ARC.

You also miss out on the biggest reason for using the special vdev:
Being able to put files smaller than the recordsize onto your mirrored SSDs and have them be fast, while the spinning rust handles the big records at entirely sequential I/O speeds.

Ah -- so you have raised something here: I was going to have two entirely separate pools, as I thought it was still very silly to mix SSD / HDD vdevs. Sounds like you're suggesting that I might be able to have a hybrid pool instead of two divorced pools? Clearly it has been a while since I've dived into ZFS; I don't remember the special vdev being a thing. Can I partition those drives? Not sure how big the special vdev has to be.

I also am staring at my case, and realize I have 2 more X16 16TB drives from an old Chia pool and a spot to put them. I'm not even going to bother to try and sell the Chia, but this gets me to 10 drives for $0.

So now, with 10 drives... (HDD), 8 on a SAS3008 and 2 on PCH SATA... Spinning drives are slow enough that I don't think I have to worry about data bandwidth across PCIe / DMI. But, it feels weird to split a vdev across HBAs. I think my two choices are:

* 5x RAID1 mirrors -- total ~80TB storage, probably highest performance (doesn't matter for media)
* 1x 8-drive RAID-Z2 + 1x 2-drive RAID1 -- total 112 TB, 'meh' performance (again, probably doesn't matter for media)

Not sure any other configurations really make sense there if I don't want to split across HBAs.
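
In zpool terms I think those come out to something like this (sketch only, made-up device names):

code:
# option 1: five striped 2-way mirrors
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror ada4 ada5
# option 2: 8-wide raidz2, plus the PCH pair as a separate mirror
# (or as a second vdev in the same pool, if one big namespace is preferred)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool create tank2 mirror ada4 ada5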

That leaves me 4x PCH ports, of which 2 are mirrored SATA DOMs for boot, and then 2 left for the 7.68TB SSDs, so I have nothing left there.

The other hardware I have in the PCIe slots though...

* 6.4TB Samsung PM1733 (single device)
* NIC
* M.2 drive via x1 PCIe adapter

I could probably make that x1 drive the L2ARC and forget about it, since it doesn't have to be mirrored. I was thinking of making that 6.4TB device just a fast, non-redundant scratchpad for... things. Not sure if it makes sense to partition it up.

tl;dr -- available I/O assignment without buying anything new / using existing hardware:

* SAS3008 HBA (PCIe x8 to CPU) -- (8 drive, 16TB Exos, RAID-Z2)
* C232 PCH 0, 1 -- SATA DOM (mirror, boot)
* C232 PCH 2, 3 -- 7.68 TB Micron 9400 SATA SSD (2 drive mirror, special)
* C232 PCH 4, 5 -- (2 drive, 16TB Exos, mirror)
* PCIe 0 (PCIe x8) -- Samsung 6.4 TB NVMe
* PCIe 1 (PCIe x4) -- Chelsio T520
* PCIe 2 (PCIe x1) -- (1 drive, 1 TB Samsung M.2, L2ARC)

I do have a Hyper M.2 card I could put into that x8 slot, but I don't think this Supermicro board can bifurcate the link further. And I don't think it's worth getting a Highpoint PCIe RAID / HBA card to put in that slot, considering it's PCIe 3.0 x8 and I don't have room in this chassis to install NVMe drives with good airflow.

movax fucked around with this message at 23:13 on Dec 17, 2023

BlankSystemDaemon
Mar 13, 2009



movax posted:

I used to run an L2ARC on my original NAS, but with 64GB of RAM, and it being mostly media, is it even worth it to slap in a SATA SSD for this? I have so many old ones sitting around, just have to find room in the case.

Ah -- so you have raised something here: I was going to have two entirely separate pools, as I thought it was still very silly to mix SSD / HDD vdevs. Sounds like you're suggesting that I might be able to have a hybrid pool instead of two divorced pools? Clearly it has been a while since I've dived into ZFS; I don't remember the special vdev being a thing.

I also am staring at my case, and realize I have 2 more X16 16TB drives from an old Chia pool and a spot to put them. I'm not even going to bother to try and sell the Chia, but this gets me to 10 drives for $0.

So now, with 10 drives... (HDD), 8 on a SAS3008 and 2 on PCH SATA... Spinning drives are slow enough that I don't think I have to worry about data bandwidth across PCIe / DMI. But, it feels weird to split a vdev across HBAs. I think my two choices are:

* 5x RAID1 mirrors -- total ~80TB storage, probably highest performance (doesn't matter for media)
* 1x 8-drive RAID-Z2 + 1x 2-drive RAID1 -- total 112 TB, 'meh' performance (again, probably doesn't matter for media)

Not sure any other configurations really make sense there if I don't want to split across HBAs.

That leaves me 4x PCH ports, of which 2 are mirrored SATA DOMs for boot, and then 2 left for the 7.68TB SSDs, so I have nothing left there.

The other hardware I have in the PCIe slots though...

* 6.4TB Samsung PM1733 (single device)
* NIC
* M.2 drive via x1 PCIe adapter

I could probably make that x1 drive the L2ARC and forget about it, since it doesn't have to be mirrored. I was thinking of making that 6.4TB device just a fast, non-redundant scratchpad for... things. Not sure if it makes sense to partition it up.
So, originally, the special vdev was introduced to do a couple of things:
Firstly, it was meant to offset one of the major disadvantages of DRAID (namely that it doesn't have variable records, so if you set recordsize=10M then everything you write will be written in 10MB segments).
Secondly, it was done to speed up all metadata I/O, since when you have 10MB records, metadata is going to be the only thing that takes your disks out of sequential I/O mode (which, as I hinted at, is much faster).

The way it works is that you take a pair (or more; N-way mirrors are of course better) of mirrored SSDs with very high write endurance, and ZFS will automatically store metadata on the special vdev from then on.
What you can optionally do is define an allocation class size, a per-dataset property, which forces writes up to that size to be written to the special vdev.

Effectively, this becomes a hybrid pool as you put it, but the big thing about it is that it's not limited to the DRAID feature.
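
The knob itself looks something like this (dataset names invented; special_small_blocks is a power of two, no bigger than recordsize):

code:
# small blocks from this dataset land on the special vdev
zfs set special_small_blocks=64K tank/stuff
# setting it equal to recordsize pins everything in the dataset to the SSDs
zfs set recordsize=128K tank/fastdata
zfs set special_small_blocks=128K tank/fastdata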

As for your pool configuration, I'd absolutely stick with an 8-wide raidz2, because the "meh" performance will be plenty fast to serve your home movies; video+audio bandwidth is measured in tens of Mbps, not the hundreds you get from even 1000BaseT with SMB or NFS, let alone the thousands that I expect you'll see from your raidz2.
Now, if you're targeting 10G wirespeed for SMB or NFS, you'll probably need the striped mirrors, but in that case I'd also recommend investing in SLOG devices.
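
Rough numbers to put that in perspective (ballpark, not benchmarks):

code:
# bluray-quality remux, peak:        ~40 Mbps
# 1000BaseT after overhead:          ~940 Mbps (~117 MB/s)
# 8-wide raidz2, 6 data disks at ~250 MB/s each: ~1.5 GB/s sequential, best case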

As for whether you'll be able to use the 6.4TB SSD for L2ARC, it'll depend entirely on your ARC hit ratio, which you can't know before you're running the system - but unlike special or log vdevs, L2ARC can be added any time.
Do remember, though, that at ~70 bytes per LBA, you're looking at needing over 250GB of system memory to map the L2ARC into memory... So maybe don't OOM your system and remember to use a partition that's much smaller than the entire drive? :v:
EDIT: This also has the added advantage that it basically overprovisions your SSD, giving you much better write endurance, as once cells start dying it'll start using unused ones instead of just marking itself read-only or disappearing off the bus (SSDs have... interesting failure modes that I have some war stories about).

BlankSystemDaemon fucked around with this message at 23:29 on Dec 17, 2023

movax
Aug 30, 2008

BlankSystemDaemon posted:

So, originally, the special vdev was introduced to do a couple of things:
Firstly, it was meant to offset one of the major disadvantages of DRAID (namely that it doesn't have variable records, so if you set recordsize=10M then everything you write will be written in 10MB segments).
Secondly, it was done to speed up all metadata I/O, since when you have 10MB records, metadata is going to be the only thing that takes your disks out of sequential I/O mode (which, as I hinted at, is much faster).

The way it works is that you take a pair (or more; N-way mirrors are of course better) of mirrored SSDs with very high write endurance, and ZFS will automatically store metadata on the special vdev from then on.
What you can optionally do is define an allocation class size, a per-dataset property, which forces writes up to that size to be written to the special vdev.

Effectively, this becomes a hybrid pool as you put it, but the big thing about it is that it's not limited to the DRAID feature.

Thanks -- I will play around with this in TrueNAS when I get the leftover parts in. I will have to think about how to line up datasets vs. pools correctly to take advantage of this; I don't see it mattering for GB-sized MKVs, but if little thumbnails / etc. get scattered around, I can see it being useful. I was thinking of using the RAID 1 mirror to store audio files + documents + things like that, and I'm not sure I can have that and use that mirror as the special vdev.

quote:

As for your pool configuration, I'd absolutely stick with an 8-wide raidz2, because the "meh" performance will be plenty fast to serve your home movies; video+audio bandwidth is measured in tens of Mbps, not the hundreds you get from even 1000BaseT with SMB or NFS, let alone the thousands that I expect you'll see from your raidz2.
Now, if you're targeting 10G wirespeed for SMB or NFS, you'll probably need the striped mirrors, but in that case I'd also recommend investing in SLOG devices.

As for whether you'll be able to use the 6.4TB SSD for L2ARC, it'll depend entirely on your ARC hit ratio, which you can't know before you're running the system - but unlike special or log vdevs, L2ARC can be added any time.
Do remember, though, that at ~70 bytes per LBA, you're looking at needing over 250GB of system memory to map the L2ARC into memory... So maybe don't OOM your system and remember to use a partition that's much smaller than the entire drive? :v:
EDIT: This also has the added advantage that it basically overprovisions your SSD, giving you much better write endurance, as once cells start dying it'll start using unused ones instead of just marking itself read-only or disappearing off the bus (SSDs have... interesting failure modes that I have some war stories about).

I'll start with the 8-drive Z2 + 2-drive mirror and call that good for now. I see drive prices have fallen but I have a hard time replacing essentially unused 16TB drives / don't want to deal with selling them.

For L2ARC, I was thinking either an entire dedicated drive (since I have that spare x1 slot) or a partition on the 6.4TB NVMe. I figured I'd carve out a partition for SABnzbd/etc unpacking + scratch at a minimum.

BlankSystemDaemon
Mar 13, 2009



movax posted:

Thanks -- I will play around with this in TrueNAS when I get the leftover parts in. I will have to think about how to line up datasets vs. pools correctly to take advantage of this; I don't see it mattering for GB-sized MKVs, but if little thumbnails / etc. get scattered around, I can see it being useful. I was thinking of using the RAID 1 mirror to store audio files + documents + things like that, and I'm not sure I can have that and use that mirror as the special vdev.

I'll start with the 8-drive Z2 + 2-drive mirror and call that good for now. I see drive prices have fallen but I have a hard time replacing essentially unused 16TB drives / don't want to deal with selling them.

For L2ARC, I was thinking either an entire dedicated drive (since I have that spare x1 slot) or a partition on the 6.4TB NVMe. I figured I'd carve out a partition for SABnzbd/etc unpacking + scratch at a minimum.
On a raidz2, the things that take a long time with audio files and documents are all metadata-related, and that metadata will be on your special vdev - so striped mirrors won't give you an advantage there.
Striped mirrors exist almost exclusively because the IOPS of spinning rust has been completely stagnant for more than two decades, whereas capacities have far outpaced anything anyone ever dreamed of, and people were trying to make fast storage happen with spinning rust.
It didn't work, and now we have solid-state. If you need fast storage, it involves solid-state (which can easily reach IOPS rates that professional storage engineers like me struggled to achieve in real-life workloads, back in the day).

Do also remember that raidz expansion is coming - as in, it's merged and being tested by folks.

Making good use out of the NVMe SSD is definitely possible, and it sounds like you've already got the good ideas. Just remember that write-intensive workloads benefit from the over-provisioning I mentioned.

movax
Aug 30, 2008

BlankSystemDaemon posted:

On a raidz2, the things that take a long time with audio files and documents are all metadata-related, and that metadata will be on your special vdev - so striped mirrors won't give you an advantage there.
Striped mirrors exist almost exclusively because the IOPS of spinning rust has been completely stagnant for more than two decades, whereas capacities have far outpaced anything anyone ever dreamed of, and people were trying to make fast storage happen with spinning rust.
It didn't work, and now we have solid-state. If you need fast storage, it involves solid-state (which can easily reach IOPS rates that professional storage engineers like me struggled to achieve in real-life workloads, back in the day).

Do also remember that raidz expansion is coming - as in, it's merged and being tested by folks.

Making good use out of the NVMe SSD is definitely possible, and it sounds like you've already got the good ideas. Just remember that write-intensive workloads benefit from the over-provisioning I mentioned.

What's the record-size setting / recommendation for the special vdev? Mostly I'm looking at folks suggesting making the special vdev ~1% of your total pool size -- I'd like to make sure that capacity is being used appropriately. I guess it comes down to the user (me) defining zpools in a way that optimizes for it, or getting clever about configuring ZFS so it's transparent. This seems like something I have to get right at the start, vs. L2ARC which can come and go.

Reading up more on the Exos drives, there's a neat little configuration utility (SeaChest?) and I think I might have some drives in 512e mode vs. 4Kn. No downsides in TYOOL 2023 on 'modern' HBAs (SAS3008, Intel PCH) in just forcing them all to 4Kn and letting ashift do its thing (12, I assume)?
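
If I'm reading the SeaChest docs right, the incantation is something like this -- from memory, so I'll double-check --help before nuking anything:

code:
SeaChest_Format -d /dev/sg2 --setSectorSize 4096 --confirm this-will-erase-data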

Mantle
May 15, 2004

Both myself and a friend of mine have Synology devices with current DSM 7 support. What's the best way for us to share access to each other's media with minimal exposure to the internet as a whole?

I was hoping for some sort of DSM-based solution to set up a tunnel between the two NASes but I couldn't find one.

IOwnCalculus
Apr 2, 2003





I don't know poo poo about Synology's versioning but assuming that means you can use their app store, tailscale is almost certainly the way to go.

edit: If "media" literally means TV and movies then yes, Plex is better

IOwnCalculus fucked around with this message at 06:06 on Dec 18, 2023

Internet Explorer
Jun 1, 2005





Mantle posted:

Both myself and a friend of mine have Synology devices with current DSM 7 support. What's the best way for us to share access to each other's media with minimal exposure to the internet as a whole?

I was hoping for some sort of DSM-based solution to set up a tunnel between the two NASes but I couldn't find one.

Media? Plex.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Re: L2ARC memory usage, it’s 70 bytes per ZFS block, not LBA. Given the disparity between a 512B LBA and the 128KB default recordsize, that’s kinda important to mention.
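
Back-of-envelope for a 6.4TB L2ARC (rough numbers):

code:
# worst case, 4K blocks:  6.4e12 / 4096 * 70 bytes   ~ 109 GB of headers
# 128K records:           6.4e12 / 131072 * 70 bytes ~ 3.4 GB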

BlankSystemDaemon
Mar 13, 2009



movax posted:

What's the record-size setting / recommendation for the special vdev? Mostly I'm looking at folks suggesting making the special vdev ~1% of your total pool size -- I'd like to make sure that capacity is being used appropriately. I guess it comes down to the user (me) defining zpools in a way that optimizes for it, or getting clever about configuring ZFS so it's transparent. This seems like something I have to get right at the start, vs. L2ARC which can come and go.

Reading up more on the Exos drives, there's a neat little configuration utility (SeaChest?) and I think I might have some drives in 512e mode vs. 4Kn. No downsides in TYOOL 2023 on 'modern' HBAs (SAS3008, Intel PCH) in just forcing them all to 4Kn and letting ashift do its thing (12, I assume)?
Metadata won't be more than ~1-2%, but if you're also planning on using allocation classes, you should size it to account for that.

There has never been any downside to writing bigger blocks than the native sector size of the drive, as the firmware will always handle things appropriately - which is why the FreeBSD installer switched to using ashift=12 back in 2014.
And that just made me feel old :smith:

Combat Pretzel posted:

Re: L2ARC memory usage, it’s 70 bytes per ZFS block, not LBA. Given the disparity between a 512B LBA and the 128KB default recordsize, that’s kinda important to mention.
It's still a shitload of RAM being taken up by a FIFO cache, instead of being an MFU+MRU cache - and one that's much slower than main memory by a few orders of magnitude.

Tamba
Apr 5, 2010

Mantle posted:

Both myself and a friend of mine have Synology devices with current DSM 7 support. What's the best way for us to share access to each other's media with minimal exposure to the internet as a whole?

I was hoping for some sort of DSM-based solution to set up a tunnel between the two NASes but I couldn't find one.

Before you try to set up something on the NAS, maybe first check your routers to see if they have an easy way to set up a VPN between your two networks.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BlankSystemDaemon posted:

It's still a shitload of RAM being taken up by a FIFO cache, instead of being an MFU+MRU cache - and one that's much slower than main memory by a few orders of magnitude.
Highest usage I’ve seen is like 900MB of headers to keep 380GB of data warm. That’s largely just dataset metadata and caching 16KB block-sized ZVOLs. Seems like a good trade-off. :shrug:

And I certainly notice the difference, since I’m running Steam games from the ZVOL. My L2ARC hit rates were far beyond 80%.

(That ratio should even improve, because I switched from 16KB volblocksize/NTFS clusters to 64KB to improve ZStd compression ratios.)

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

Mantle posted:

Both myself and a friend of mine have Synology devices with current DSM 7 support. What's the best way for us to share access to each other's media with minimal exposure to the internet as a whole?

I was hoping for some sort of DSM-based solution to set up a tunnel between the two NASes but I couldn't find one.

If you want to sync your files between Synology devices, then Synology DSM has you covered. It sounds like you want on-demand file shares between the two devices, though, and there I don't think Synology has you covered.

The easiest way is to use Plex and share Plex Servers (assuming this is all media that is in Plex). The catch is that if your Plex Servers aren't reachable from the internet you'll use the Plex Relay, which only allows 1 Mbps (free) or 2 Mbps (Plex Pass) and probably means transcoding on the host side. You can expose your Plex Server to the internet so that you don't need the Plex Relay servers, but that has all the issues of exposing a service to the internet.

If you don't want to expose your Plex Servers to the internet, then you want a site-to-site VPN, and you need routers that can do this. There might be other solutions you can host that will do this for you. You can use Tailscale, which is more of an overlay network that can route between places over a VPN. However, you'll need a router with Tailscale, or the devices you are playing from will need to run it. I think a Chromecast with Google TV can, but it didn't look like Roku could. I didn't check anything else.

This seems like a very easy request but it's a very challenging and technical thing to do. Maybe someone else has a more detailed route to take.
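
If one machine on each side can run Tailscale, their subnet router feature is roughly this (subnets made up; the advertised routes also have to be approved in the admin console):

code:
# side A, advertise its LAN to the tailnet
tailscale up --advertise-routes=192.168.1.0/24
# side B, accept routes advertised by other nodes
tailscale up --accept-routes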

gariig fucked around with this message at 23:42 on Dec 18, 2023

movax
Aug 30, 2008

BlankSystemDaemon posted:

Metadata won't be more than ~1-2%, but if you're also planning on using allocation classes, you should size it to account for that.

There has never been any downside to writing bigger blocks than the native sector size of the drive, as the firmware will always handle things appropriately - which is why the FreeBSD installer switched to using ashift=12 back in 2014.
And that just made me feel old :smith:

It's still a shitload of RAM being taken up by a FIFO cache, instead of being an MFU+MRU cache - and one that's much slower than main memory by a few orders of magnitude.

My hardware is on the way, so I'll do some reading on the allocation classes. Still kind of coin-flipping between TrueNAS Core and Scale... I think I might double down on 'single-purpose is best' and just go with Core. All the VMs can live on my little M920 Proxmox box, as they'll have at minimum a 10Gbit connection (and the Chelsio T520 is two-port, so I can even do a DAC between those two...).

Yaoi Gagarin
Feb 20, 2014

Speaking of ashift - what is a good value for an SSD?

susan b buffering
Nov 14, 2016

IOwnCalculus posted:

I don't know poo poo about Synology's versioning but assuming that means you can use their app store, tailscale is almost certainly the way to go.

edit: If "media" literally means TV and movies then yes, Plex is better

tailscale and plex aren't mutually exclusive solutions

IOwnCalculus
Apr 2, 2003





susan b buffering posted:

tailscale and plex aren't mutually exclusive solutions

You're not wrong, though I'm of the opinion that Plex is a perfectly fine thing to just expose on its own instead of trying to wrap it in a VPN.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



You should be able to log in to your locally hosted Plex instance at app.plex.tv even without a reverse proxy or opened ports.

Computer viking
May 30, 2011
Now with less breakage.

VostokProgram posted:

Speaking of ashift - what is a good value for an SSD?

I have been wondering the same, and concluded that no matter what magic it does inside, the firmware will presumably be written to do OK with 4K-aligned blocks.

I really should test this assumption, though.

BlankSystemDaemon
Mar 13, 2009



movax posted:

My hardware is on the way, so I'll do some reading on the allocation classes. Still kind of coin-flipping between TrueNAS Core and Scale... I think I might double down on 'single-purpose is best' and just go with Core. All the VMs can live on my little M920 Proxmox box, as they'll have at minimum a 10Gbit connection (and the Chelsio T520 is two-port, so I can even do a DAC between those two...).
I think my opinion on the choice is so obvious that I don't even need to say it.

Two-port T520s can do LACP, and while you won't get 20Gbps for a single connection (without something like bbcp), it'll still benefit most other workloads.
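
On FreeBSD the lagg setup is roughly this (sketch; the T5 ports show up as cxl, and the switch has to be configured for LACP too):

code:
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport cxl0 laggport cxl1 192.168.1.2/24 up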

If you're hosting iSCSI or NFS for your hypervisor, you probably benefit from the IOPS offered by striped mirrors, depending on what kind of IOPS you're needing - so I think the best advice now is to wait until you have the hardware, and test thoroughly with fio.
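
As a starting point, something like this, with bs/iodepth/rw tuned toward the workload you actually care about:

code:
fio --name=randread --directory=/mnt/tank --size=8g --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --ioengine=posixaio --runtime=60 --time_based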

VostokProgram posted:

Speaking of ashift - what is a good value for an SSD?
It doesn't change with the storage medium: use a value equal to or higher than the native sector size.

Computer viking posted:

I have been wondering the same, and concluded that no matter what magic it does inside, the firmware will presumably be written to do OK with 4K-aligned blocks.

I really should test this assumption, though.
Yep, that's exactly what happens.

There's no real way to test it, since all firmware (including RAID controllers) is just software running on a processor you can't inspect the state of.
The only thing you'll find out is if the cache is capable of keeping up or not.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

VostokProgram posted:

Speaking of ashift - what is a good value for an SSD?
ZFS currently chooses ashift=13 automatically for SSD-only pools. That's 8KB.
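
If the autodetection guesses low, you can force it at creation time (pool/device names made up):

code:
zpool create -o ashift=13 fastpool mirror nvd0 nvd1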

Aware
Nov 18, 2003

Nitrousoxide posted:

You should be able to log in to your locally hosted Plex instance at app.plex.tv even without a reverse proxy or opened ports.

Hey stop sharing the link to my Plex :mad:

M_Gargantua
Oct 16, 2006

STOMP'N ON INTO THE POWERLINES

Exciting Lemon

Nitrousoxide posted:

You should be able to log in to your locally hosted Plex instance at app.plex.tv even without a reverse proxy or opened ports.

If you can access your local Plex library at app.plex.tv, you have an internet-facing Plex server. That means something somewhere has opened a port when you clicked all the boxes to do so during the install.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

M_Gargantua posted:

If you can access your local Plex library at app.plex.tv, you have an internet-facing Plex server. That means something somewhere has opened a port when you clicked all the boxes to do so during the install.

Yes and no. Software can make itself accessible without "opening ports" by establishing an outbound connection somewhere and having traffic come back in via that link.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, Plex will relay streams at up to 2 Mbps each through their servers even if you never set up port forwarding at all. It's also nice because if something breaks your port forwarding (like an update AT&T pushed to your router), external clients just have to deal with low-end 720p for a bit instead of being locked out.

BlankSystemDaemon
Mar 13, 2009



M_Gargantua posted:

If you can access your local Plex library at app.plex.tv, you have an internet-facing Plex server. That means something somewhere has opened a port when you clicked all the boxes to do so during the install.
The local Plex instance establishes a TCP connection to the public servers, and the servers then use that to create a reverse tunnel interface through that existing connection.
Most programs don't include this functionality since it smells like a RAT; it's been a feature of SSH (and tools to use SSH, like PuTTY) for at least a decade, but in that instance it's a toggle that's enabled per-connection.
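
For comparison, the SSH version of the trick (host invented; 32400 is Plex's port, and the remote end binds to loopback unless GatewayPorts is enabled):

code:
ssh -R 8443:localhost:32400 user@public-box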

I wonder how many people have considered the implications of the amount of trust required in the remote server and who has access to it from an infosec point of view.

BlankSystemDaemon fucked around with this message at 16:02 on Dec 19, 2023

M_Gargantua
Oct 16, 2006

STOMP'N ON INTO THE POWERLINES

Exciting Lemon
People haven't considered it. That's why I just block those sorts of connections and only use a VPN into my local network.

movax
Aug 30, 2008

BlankSystemDaemon posted:

I think my opinion on the choice is so obvious that I don't even need to say it.

Two-port T520s can do LACP, and while you won't get 20Gbps for a single connection (without something like bbcp), it'll still benefit most other workloads.

If you're hosting iSCSI or NFS for your hypervisor, you probably benefit from the IOPS offered by striped mirrors, depending on what kind of IOPS you're needing - so I think the best advice now is to wait until you have the hardware, and test thoroughly with fio.

Core!

I wasn't planning an iSCSI or NFS host on this box as I have no plans to do a ZIL / SLOG to optimize sync performance. The VMs on that other box are so small (homeassistant, etc.) that I just run them on Proxmox's mirrored boot on a pair of P31s. When I actually get around to building my more focused NVMe-based storage solution, I'll come back to that...

Wiggly Wayne DDS
Sep 11, 2010



BlankSystemDaemon posted:

I wonder how many people have considered the implications of the amount of trust required in the remote server and who has access to it from an infosec point of view.
i have, part of why i don't use plex (it also doesn't like handling long video files for seeking/resume purposes... and hates non-tv/movie video folder structures)

BlankSystemDaemon
Mar 13, 2009



M_Gargantua posted:

People haven't considered it. That's why I just block those sorts of connections and only use a VPN into my local network.
Aren't they using the connection for a whole bunch of stuff that makes it sorta awkward to block, unless you like giving up on other things?
Last time I looked at it, it seemed like if you blocked things properly, you might as well just stick with Kodi - so I did.

movax posted:

Core!

I wasn't planning an iSCSI or NFS host on this box as I have no plans to do a ZIL / SLOG to optimize sync performance. The VMs on that other box are so small (homeassistant, etc.) that I just run them on Proxmox's mirrored boot on a pair of P31s. When I actually get around to building my more focused NVMe-based storage solution, I'll come back to that...
Got it in one, to the surprise of nobody!

I wish I had the money to play with stuff, but I don't so :shrug:

Wiggly Wayne DDS posted:

i have, part of why i don't use plex (it also doesn't like handling long video files for seeking/resume purposes... and hates non-tv/movie video folder structures)
Are you also a Kodi user then?

It's what I've consistently stuck with for my HTPCs, and I do YouTube, Twitch, DR (the Danish equivalent of the BBC) and other live/VoD services on it just fine.

History Comes Inside!
Nov 20, 2004




I will never plex, Kodi forever

Wiggly Wayne DDS
Sep 11, 2010



BlankSystemDaemon posted:

Are you also a Kodi user then?

It's what I've consistently stuck with for my HTPCs, and I do both YouTube, Twitch, DR (the Danish equivalent of BBC) and others live/VoD services on it just fine.
yeah kodi with an emby plugin covers 99% of my use-case

VelociBacon
Dec 8, 2009

What's Kodi's thing? I use Plex for TV/movies and Jellyfin for motorsport events/sports. Does Kodi let you set it to just display the filenames for the titles?

Volguus
Mar 3, 2009
Yes, I just browse the movies folder and they're all nicely alphabetically arranged, each in its own folder (in my case), ready to be played. If there is a cover image downloaded in there by the *arr app, it'll show it; if not, it won't. The movies don't need to be transcoded before that; it'll just play whatever format/codec they came in.

I think it can be configured to scan a "library" and show stuff in a more Netflix-like UI, but there's no requirement for that.


Kibner
Oct 21, 2008

Acguy Supremacy
My partner is going all in on Plex with our NAS, sorting all the different editions and extras for all the movies and shows we rip. It's honestly very cool and it works out great, but she is certainly spending a ton of time going through the ripped files and figuring out what is what.
