|
Anyone happen to know how ZFS behaves with regard to prefetching if I set primarycache=metadata on a dataset? I guess I'll finally set my big-file datasets, like the movie ones, to that. But it's always been nice of ZFS to prefetch data in big chunks.

--edit: There seems to be some kind of minor prefetching going on. Playing a movie starts with 2MB of IO every second or so, eventually spacing out to 6MB every 3-4 seconds. But nothing special beyond that.

--edit: Now 8MB every 5-6 seconds. Probably gonna go up some more. That's nice. Keeps disk IO down despite no data caching.

Combat Pretzel fucked around with this message at 23:31 on Nov 2, 2023 |
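For reference, a minimal sketch of the setting in question (pool and dataset names are made up):

```shell
# Hypothetical pool/dataset; primarycache=metadata keeps file data out of
# the ARC while still caching metadata like dnodes and indirect blocks.
zfs set primarycache=metadata tank/movies

# Check the result:
zfs get primarycache tank/movies
```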
# ¿ Nov 2, 2023 23:18 |
|
|
BlankSystemDaemon posted:I know that feeling all too well
|
# ¿ Nov 3, 2023 17:52 |
|
It'll work just fine in 30 years.
|
# ¿ Nov 5, 2023 16:53 |
|
Containers on TrueNAS Scale are a pain in the butt. Sure, they work, but the management interface is very meh. If you’re used to something akin to Portainer, you’d be going like four steps backwards. For instance, you can’t check current resource usage of a running container (yet), or inspect any other details. I’m not sure what you can do in TrueCommand; it’s not free.
|
# ¿ Nov 13, 2023 09:49 |
|
Someone on the TrueNAS forums made a script that sets up a Debian container in containerd on SCALE, in which you can then set up Docker, so you can use Portainer and poo poo again. So I switched back from Apps to Portainer in Cobia, and the baseline power consumption dropped by 2W. Combat Pretzel fucked around with this message at 20:58 on Nov 13, 2023 |
# ¿ Nov 13, 2023 19:18 |
|
Twerk from Home posted:Are you guys buying Intel X710s or Mellanox something or is there a cheaper, cooler 10GbaseT NIC out there now?

ConnectX3 cards are cheap as gently caress on eBay, and Windows has an inbox driver that works fine and even does RDMA.
|
# ¿ Nov 19, 2023 22:38 |
|
If the drive is actually airtight and helium molecules diffuse out of it due to quantum tunneling or whatever bullshit, shouldn't it just drop the inside pressure and maintain the low-drag environment?
|
# ¿ Nov 23, 2023 19:30 |
|
Potential fix for the BRT issue? Reads like it. https://github.com/openzfs/zfs/pull/15571 https://github.com/openzfs/zfs/pull/15566 Combat Pretzel fucked around with this message at 16:41 on Nov 25, 2023 |
# ¿ Nov 25, 2023 16:38 |
|
I think there’s already a chart for this in TrueNAS’ own chart repo.
|
# ¿ Nov 28, 2023 15:55 |
|
There's sure some BRT related fixes going into OpenZFS right now.
|
# ¿ Dec 13, 2023 19:36 |
|
Put in an SSD for L2ARC and then set secondarycache=metadata on the root of the pool. Bam, el-cheapo version of a metadata special vdev.
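A sketch of that setup, with a made-up pool name and device path:

```shell
# Attach the SSD as an L2ARC cache device (hypothetical names):
zpool add tank cache /dev/disk/by-id/nvme-SOMESSD

# Only feed metadata into the L2ARC, pool-wide (inherited by datasets):
zfs set secondarycache=metadata tank

# Optionally re-enable data caching for a specific dataset:
zfs set secondarycache=all tank/games
```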
|
# ¿ Dec 17, 2023 21:54 |
|
Re: L2ARC memory usage, it’s 70 bytes per ZFS block, not LBA. Given the disparity between a 512b LBA and 128KB default record size, that’s kinda important to mention.
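To put numbers on that disparity, a quick back-of-the-envelope calculation (the ~70-bytes-per-header figure is the one from the discussion above; function and names are mine):

```python
# Header overhead scales with the number of cached entries, so counting
# "per 512-byte LBA" vs. "per 128 KiB ZFS block" changes the answer by
# orders of magnitude for the same device.
HEADER_BYTES = 70  # approximate ARC header size per L2ARC entry

def l2arc_overhead(device_bytes, entry_bytes):
    """RAM consumed by L2ARC headers for a fully populated device."""
    return device_bytes // entry_bytes * HEADER_BYTES

TIB = 2**40
GIB = 2**30

per_lba = l2arc_overhead(TIB, 512)             # if it really were per LBA
per_record = l2arc_overhead(TIB, 128 * 1024)   # per default-size record

print(f"1 TiB L2ARC, per 512B LBA:     {per_lba / GIB:.1f} GiB")   # 140.0 GiB
print(f"1 TiB L2ARC, per 128KiB block: {per_record / GIB:.2f} GiB")  # 0.55 GiB
```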
|
# ¿ Dec 18, 2023 07:18 |
|
BlankSystemDaemon posted:It's still a shitload of RAM being taken up by a FIFO cache, instead of being a MFU+MRU cache - and one that's much slower than main memory by a few orders of magnitude.

And I certainly notice the difference, since I’m running Steam games from the ZVOL. My L2ARC hit rates were far beyond 80%. (That ratio should even improve, because I switched from 16KB volblocksize/NTFS clusters to 64KB to improve ZStd compression ratios.)
|
# ¿ Dec 18, 2023 13:28 |
|
VostokProgram posted:Speaking of ashift - what is a good value for an SSD?
|
# ¿ Dec 19, 2023 13:48 |
|
I guess the BSD versions of TrueNAS already have one foot in the grave.

quote:We have no plans for a FreeBSD 14-based TrueNAS at this time, and the 13.1 release will be a longer-lived maintenance train for those who want to continue running on the BSD product before migrating to SCALE at some later date.

quote:CORE we still will maintain with updates for a while as there are large enough numbers of users on 13.1 to justify it.

I mean, the full quote might be subject to interpretation, but alas.
|
# ¿ Dec 21, 2023 13:09 |
|
If you're not hung up on it being specifically BSD or Linux, it matters gently caress all, because both Core and Scale do NFS. That said, some people have strong opinions about that. If you qualify, I guess you're SOL. IIRC there's at least one other BSD-based NAS distro, if you want it to come with a UI out of the box, but I can't remember what it was called. NAS4Free probably.

Personally I'm on Scale mainly because the Linux kernel comes with an NVMe-oF target driver that also does RDMA. The Kubernetes stuff irks me a lot, but it sounds like they're doing work to support containers via systemd-nspawn or something like that, as an alternative to k3s.

Combat Pretzel fucked around with this message at 16:07 on Dec 21, 2023 |
# ¿ Dec 21, 2023 16:04 |
|
Kung-Fu Jesus posted:Someone from iX in reddit comments saying "this is the end of CORE" as the only way this is being communicated seems pretty sketch to me.

Kung-Fu Jesus posted:Beforehand, I tested SCALE, encountered the ARC memory size restriction/default configuration, read up on some of the reasons why it does that, and cheerfully hosed off back to CORE.
|
# ¿ Dec 22, 2023 20:02 |
|
BlankSystemDaemon posted:In practice, the first won't happen because the GPL license is incompatible with CDDL (according to some lawyers), and the second won't happen because of Linux kernel maintainers' opinions on ZFS.
|
# ¿ Dec 23, 2023 13:54 |
|
BlankSystemDaemon posted:Can’t say I like the idea of making distribution-specific changes that aren’t upstreamed. Also, TIL that Illumos is still alive.
|
# ¿ Dec 23, 2023 16:25 |
|
RAID-Z striping is somewhat dynamic. It tries to stretch a block across the array, but it can decide to write shorter stripes, depending on the conditions (block size vs. stripe width vs. ashift vs. moon phase, etc.).
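As a rough illustration of that dynamic behavior, here's a back-of-the-envelope sketch of how RAID-Z allocation is usually described: data sectors, plus one parity sector per stripe row, padded to a multiple of parity+1. The function name and exact padding rule are my own approximation, not OpenZFS code:

```python
import math

def raidz_alloc_sectors(psize, ndisks, nparity, ashift=12):
    """Approximate sectors RAID-Z allocates for one block (sketch)."""
    sector = 1 << ashift
    data = math.ceil(psize / sector)
    # nparity parity sectors for each stripe row of (ndisks - nparity)
    # data sectors; a short block still pays for at least one row of parity
    parity = math.ceil(data / (ndisks - nparity)) * nparity
    total = data + parity
    # allocations get padded to a multiple of (nparity + 1) sectors
    mult = nparity + 1
    return math.ceil(total / mult) * mult

# 5-wide RAIDZ1, ashift=12: a 128 KiB block -> 32 data + 8 parity sectors
print(raidz_alloc_sectors(128 * 1024, 5, 1))  # 40
# a 4 KiB block still burns a whole parity sector: 1 data + 1 parity
print(raidz_alloc_sectors(4 * 1024, 5, 1))    # 2
```

This is why small blocks on wide RAID-Z vdevs waste proportionally more space than the nominal parity ratio suggests.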
|
# ¿ Dec 27, 2023 16:48 |
|
Didn't they act essentially like two drives in one enclosure? At least the SAS ones? --edit: Wendell has a video about the SATA version. It basically just concatenates two disks. Some kind of interleaving would have been interesting, to see what it does. Combat Pretzel fucked around with this message at 22:00 on Mar 6, 2024 |
# ¿ Mar 6, 2024 21:57 |
|
Yea. You just need to mess around in the terminal to do everything, though. TrueNAS ships all relevant kernel modules for md, ext4, xfs and so on. Perhaps just not btrfs. --edit: Apparently even btrfs.
|
# ¿ Apr 12, 2024 20:07 |
|
Hmm, degraded mirrors? Last I remember, you can create a zpool with a single disk, and when you zpool attach (not add) another one to it, it'll turn into a mirror.
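Something like this, with hypothetical device names (untested sketch):

```shell
# Single-disk pool to start with:
zpool create tank /dev/disk/by-id/ata-DISK1

# attach (not add!) turns the single disk into a 2-way mirror and resilvers;
# "zpool add" would instead stripe DISK2 alongside DISK1.
zpool attach tank /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

zpool status tank
```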
|
# ¿ Apr 13, 2024 09:51 |
|
When I do reverse proxying, I don’t need to install the SSL certificate in every drat container, right? Is Traefik worthwhile, or too much?
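For what it's worth, the usual reverse-proxy setup terminates TLS in one place and routes by hostname, so the certificate only lives in the proxy. A minimal Traefik docker-compose sketch (hostnames, email address, and image version are placeholders):

```yaml
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
```

Backend containers never see the certificate; Traefik picks them up via Docker labels and handles ACME renewal itself.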
|
# ¿ Apr 17, 2024 06:19 |
|
Seeing the BRT fixes that keep going into OpenZFS, they sure got blindsided by their own project complexity. The 2.2.4 release of OpenZFS is gonna get some speculative prefetcher improvements that I'm putting my hopes in, to improve streaming game data from a spinning-rust mirror on a cold cache.
|
# ¿ Apr 17, 2024 22:37 |
|
Last I remember, TrueNAS creates 2GB swap partitions on all disks you put in a pool. I created my pools on the command line to use whole unpartitioned disks, like in ye olde OpenSolaris days.
|
# ¿ Apr 20, 2024 16:12 |
|
The issue I have with Unraid is the inability to modify the base system. I could live with having to redo it every update, but it doesn't even let me; I have a bash script that does that for TrueNAS. I essentially want NVMe-oF for high-performance block I/O over the network, and I can't have that on Unraid. There's this obnoxious plugin system, but gently caress that one. If TrueCharts/k8s charts are poo poo, so is this. That said, it also doesn't even have native iSCSI support. Some NAS software, this Unraid.
Combat Pretzel fucked around with this message at 17:44 on Apr 25, 2024 |
# ¿ Apr 25, 2024 17:41 |
|
Twerk from Home posted:This is pretty drat niche in my opinion, given that the primary usage case for both is still spinning disk for big cheap bulk storage.

I have 40GbE Mellanox cards in my desktop and my NAS. I'm specifically using NVMe-oF, because that's the only way to get RDMA, thanks to the Starwind initiator on Windows and Linux's nvmet+nvmet-rdma kernel modules. As for latency, I don't have hard numbers. I just know that last time I benchmarked it with DiskMark, in the worst metric, i.e. single-threaded random 4KB reads, iSCSI crapped out at 34MB/s, whereas it gets up to 90MB/s using NVMe-oF. That says enough. Forgot what 32-threaded sequential was at, but it was beyond 3GB/s.

Either way, if Unraid would just let me gently caress around with the base system, that'd be fine. Then I could replicate it.
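For anyone curious, the Linux nvmet target is driven through configfs rather than a config file. A rough sketch of exporting a ZVOL over RDMA (the subsystem NQN, device path, and address are made up; assumes the nvmet and nvmet-rdma modules are available):

```shell
modprobe nvmet nvmet-rdma
cd /sys/kernel/config/nvmet

# Create a subsystem and expose a ZVOL as namespace 1:
mkdir subsystems/nqn.2024-04.lan:games
echo 1 > subsystems/nqn.2024-04.lan:games/attr_allow_any_host
mkdir subsystems/nqn.2024-04.lan:games/namespaces/1
echo /dev/zvol/tank/games > subsystems/nqn.2024-04.lan:games/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-04.lan:games/namespaces/1/enable

# Create an RDMA port and link the subsystem to it:
mkdir ports/1
echo rdma     > ports/1/addr_trtype
echo ipv4     > ports/1/addr_adrfam
echo 10.0.0.2 > ports/1/addr_traddr
echo 4420     > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-04.lan:games ports/1/subsystems/
```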
|
# ¿ Apr 25, 2024 18:48 |
|
You definitely need high performance for the streaming-assets type of games; see my earlier post on what that implies. Level-loading games will take somewhat longer to load on 1/2.5Gbit Ethernet. More so if you solely rely on spinning rust (forget streaming assets in that scenario, at least for more recent games).
|
# ¿ Apr 26, 2024 11:50 |
|
That's why I saved myself all the grief and just got a Define 7 XL instead.
|
# ¿ May 4, 2024 18:33 |
|
It's KiB, MiB, GiB etc. for data volumes, and K, M, G etc. for counts.
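A tiny sketch of that convention (helper names are mine): binary prefixes in powers of 1024 for data sizes, plain metric prefixes in powers of 1000 for counts.

```python
def fmt_size(n):
    """Data volume: binary prefixes, powers of 1024."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024 or unit == "TiB":
            return f"{n:g} {unit}"
        n /= 1024

def fmt_count(n):
    """Plain count: metric prefixes, powers of 1000."""
    for unit in ("", "K", "M", "G"):
        if n < 1000 or unit == "G":
            return f"{n:g}{unit}"
        n /= 1000

print(fmt_size(131072))   # 128 KiB
print(fmt_count(131072))  # 131.072K
```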
|
# ¿ May 9, 2024 23:34 |
|
ARC total accesses? I presume so. As to how much or little it is, you have to consider that these VMs run their own disk caches.
|
# ¿ May 9, 2024 23:42 |
|
BlankSystemDaemon posted:and that every LBA on your L2ARC device will take up 70 bytes of memory that could otherwise be used by the ARC
|
# ¿ May 10, 2024 19:18 |
|
|
Unless you go bonkers with L2ARC, or you're severely memory-limited, the trade-off may be worth it. In my case, it's limited to pool metadata and ZVOLs with either 16KB or 64KB volblocksize. I'm giving away 0.61GB of the 52GB of ARC to keep 400GB of data warm on a Gen3 NVMe SSD. Works fine for running games from Steam (going by the fast loads after clearing the ARC via a reboot). If you're working with the default ZFS record size of 128KB (or bigger), you might get better ratios of headers vs. data. Compression reduces it only so far (which I'm using on these ZVOLs, too). Combat Pretzel fucked around with this message at 23:42 on May 10, 2024 |
# ¿ May 10, 2024 23:38 |