|
Eletriarnation posted:Maybe in a mirror, but as far as I understand it the performance characteristics of distributed parity topologies like RAID-5/6/Z* have more similarities to striped arrays. You of course have a lot more CPU overhead, and at any given time some subset of your disks is reading/writing parity blocks that don't contribute to your final application-available bandwidth. Still, modern CPUs are fast so that's not much of a bottleneck to HDDs and you can absolutely get very fast numbers for sustained, sequential transfers. Ah, neat. In that case I second what Wibla said. Make a raidz1 or raidz2 of your drives and you're good
|
# ¿ Mar 31, 2023 21:37 |
|
You could also run whatever OS you want in a VM on top of truenas. It is a little more configuration effort than a second box because you have to set up a network bridge, but after that it'll be the same.
|
# ¿ Apr 12, 2023 23:06 |
|
You're also committing to forever use that pool on machines that have that much RAM. What if you're poorer in the future?
|
# ¿ Apr 13, 2023 18:08 |
|
I want to read some data from a bunch of snapshots. A while ago I was playing around with truenas and I set up a "backups" dataset with a recurring snapshot task. The idea being that I could simply use it as a dumb mirror of my other computer's files using rsync or syncthing or robocopy, and rely on the snapshots to provide versioning. I never got around to actually using it though. Now I'm trying to use this truenas machine again, and I see that those backup snapshots are holding on to 30 GB.
E: I figured it out! /mnt/tank/backups/.zfs/snapshot has them. ls -a will not show .zfs/ even though it exists. What confused me is that USED for tank/backups is at 30G even though the dataset itself looks empty, while the snapshots have low USED numbers yet contain all the files. I would have expected the USED number on the snapshots to account for the data, since it was deleted from tank/backups itself. Yaoi Gagarin fucked around with this message at 02:19 on May 13, 2023 |
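For anyone else who hits this: USED on a snapshot only counts blocks unique to that one snapshot, so data that several snapshots share gets charged to the dataset instead. A couple of commands make it easier to see where the space actually sits (a sketch, using the tank/backups dataset from above):
code:
# list the dataset's snapshots with their unique (USED) and total referenced sizes
zfs list -r -t snapshot -o name,used,referenced tank/backups

# break down how much of the dataset's USED is held by snapshots vs. live data
zfs get usedbysnapshots,usedbydataset tank/backups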
# ¿ May 13, 2023 01:26 |
|
Moey posted:Seems like the migration from TrueNAS Core to Scale is pretty straightforward. Thinking about giving it a rip on the next few days. I have done it twice in the last few days since I've been experimenting with different setups. Both times it went through flawlessly. Even kept my shares configured. Go for it imo
|
# ¿ May 14, 2023 05:23 |
|
BlankSystemDaemon posted:On a traditional filesystem, fragmentation happens because whenever you do a partial overwrite to an existing file, that file gets written in full in a new place on the physical platters. I don't think this is true. Most FS just mutate the file in-place. Not doing that is what makes COW, COW
|
# ¿ Jun 26, 2023 18:49 |
|
In ext* if you grow the file and there isn't enough room after it, a new extent gets allocated somewhere else for that part of the file. So you can get fragmentation within the file itself, not just in the free space. On spinny disks this is of course bad because now you need to move the head twice to read the entire file. On SSDs I don't think it matters at all
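If you want to see it for yourself, filefrag from e2fsprogs prints a file's extent map (the path is obviously a placeholder):
code:
# -v lists every extent with its physical offsets; lots of extents = fragmented file
filefrag -v /path/to/somefile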
|
# ¿ Jun 27, 2023 17:18 |
|
the user inside the container isn't uid 0, and the container itself has limited access to stuff, so i don't think it's any more vulnerable than just running a normal process as a normal user
|
# ¿ Jun 30, 2023 00:37 |
|
Windows 98 posted:
My dude I think you should make each vdev raidz2. You don't want to sit there restoring all that from backup if you get a double failure
|
# ¿ Aug 9, 2023 17:49 |
|
Just don't turn on dedupe or L2ARC
|
# ¿ Aug 11, 2023 21:50 |
|
The Meshify 2 XL is basically the Define 7 XL with a breathable front panel. I don't know if anyone's actually tested thermals with all 18 drives populated, though. I would probably upgrade to high static pressure fans if you fill it, and maybe rig up some cardboard so that the air is forced to go between the drives instead of escaping through the side
|
# ¿ Aug 18, 2023 02:07 |
|
Btrfs can do arbitrary disk mixing, but its raid5/6 modes still have the write hole problem
|
# ¿ Aug 27, 2023 23:35 |
|
maybe bcachefs will be the one
|
# ¿ Aug 28, 2023 00:06 |
|
Worth noting that the speed on a hard drive also depends on how far from the center the data is: the platters spin at a constant RPM, so the outer tracks pass more data per revolution and the start of the drive is noticeably faster than the end
|
# ¿ Sep 15, 2023 02:28 |
|
Nitrousoxide posted:Personally, I have a vm that I have docker or podman installed on and I do the hosting in those docker/podman containers. It's still the same container workflow, but you get the benefits of easily backing up the whole vm and individual containers which gives you a ton of flexibility on how you can restore stuff if (when) it goes wrong. A container blows up after you pull a new image and the config is hosed? Roll back just that container with your backup solution. You do a major version upgrade on the docker host and it fucks up the containerd engine? Roll that back from the VM interface in proxmox. You get new hardware because your server is old or failing? Just add it as a member of the proxmox cluster and migrate the vm's over to it and start them back up. Want to experiment with a service or a way to architecture your homelab? spin up some vm's with Proxmox and play around and nuke them if they don't serve your purpose. I tried this for my sonarr+sabnzbd setup but I had a lot of trouble with permissions errors. I had truenas core, hosting both the data and a VM running Fedora server, and then I tried to run the linuxserver.io images in podman. I'm sure I could have figured it out eventually, but I gave up because there's so many layers that could be configured wrong: ZFS permissions, NFS share permissions, NFS mount permissions, permissions inside the container. A lot of effort. Now I'm using Scale and while I don't love TrueCharts, at least it wasn't that hard to set it up
|
# ¿ Oct 14, 2023 00:36 |
|
Sub Rosa posted:I just built a ten* 3.5" disk NAS in a Fractal Node 804. Plus one nvme and already ran sata cables to where I can add two more 2.5inch drives later. As someone with an 804: how the gently caress did you fit all that in
|
# ¿ Nov 14, 2023 16:51 |
|
Speaking of ashift - what is a good value for an SSD?
|
# ¿ Dec 18, 2023 22:39 |
|
raidz expansion is in upstream zfs so truenas will eventually get that too
|
# ¿ Jan 8, 2024 21:23 |
|
does having 2 disks of redundancy solve that or do you need 3?
|
# ¿ Jan 11, 2024 00:57 |
|
Perhaps someone can recommend a case for me? I've done a lot of googling but cannot seem to find anything that meets all these requirements:
1. supports at least micro-ATX motherboards
2. has 8 3.5" hot swap bays
3. can use a normal ATX power supply
4. can keep the 8 hard drives at safe temperature
5. quiet
6. not a heavy rackmount box
The closest thing I have found is the Silverstone CS381, but I see lots of people on the internet saying their drives run hot in that thing. They also make a CS382 and it has fans directly on the drive cage, but they are half blocked by its SAS backplane. I think the latter is new enough that I can't find any info about how hot it runs, maybe half-blocked fans are OK?
|
# ¿ Jan 11, 2024 07:31 |
|
fletcher posted:I was on a quest for this as well, you can see my posts about the CS381 in this thread. I gave up on it though, the drives were running too hot for my tastes. Ended up getting a Node 804 - silent and drive temps are great. Just had to give up hot-swap - do you really need it? funny, I have a node 804 right now and I feel like it's such a pain to work with. drives are mounted upside down hanging from the top so you have to plug in cables from the bottom. I only have two 3.5" drives right now so it's not a huge deal, but even so I haven't been able to put them in adjacent slots because then my power supply cable bunches up. And I've got 3 2.5" drives just floating around too. Maybe on the weekend I'll tear everything out and see if I can find a neater way to route cables. as is I can't see myself cramming 8 drives in it even though it can theoretically handle that
|
# ¿ Jan 12, 2024 02:37 |
|
Stux posted:looking to put together a nas, largely for plex with some random storage on the side, and need some help picking a raid setup and a sanity check on the cpu ive been looking at. supposedly raidz1 and raid5 are both a bad idea on modern drives because resilvering takes so long that another drive might die in the process. if you have backups of this data, you can start with a mirror, and when it's full buy 3 more drives, destroy the mirror, and put all 5 into raidz2. also, openzfs added raidz expansion a few months ago, at some point that'll get enabled by distros and then you can grow your vdev one drive at a time until you fill your case.
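if it helps, the raw commands for that plan look roughly like this (pool and device names are made up, and on truenas you'd do all of this through the UI anyway):
code:
# start with a two-drive mirror
zpool create tank mirror /dev/sda /dev/sdb

# later: copy the data off, destroy the pool, and rebuild it as a five-drive raidz2
zpool destroy tank
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# once raidz expansion is enabled in your distro, growing by one disk should look something like
zpool attach tank raidz2-0 /dev/sdf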
|
# ¿ Feb 25, 2024 20:56 |
|
wait gently caress, they're killing podcasts? I actually use that
|
# ¿ Mar 16, 2024 04:05 |
|
Shumagorath posted:The last file system converter I remember was Windows 2K/XP FAT32 -> NTFS. Are they going to provide an offramp? just copy the files to your new storage pool?
|
# ¿ Mar 16, 2024 04:53 |
|
Can't hardlink but you could symlink, or use a bind mount. But if you want to "merge" the pools - why even make a second pool, you could put those drives in as a new vdev in the original pool?
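Rough examples of both, with made-up paths:
code:
# option 1, a symlink (some apps and SMB/NFS shares treat these differently from real dirs)
ln -s /mnt/pool2/media /mnt/pool1/media

# option 2, a bind mount (the target directory has to exist first)
mkdir -p /mnt/pool1/media
mount --bind /mnt/pool2/media /mnt/pool1/media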
|
# ¿ Mar 28, 2024 02:57 |
|
Would you mind pasting the output of `zpool status -v` here?
|
# ¿ Mar 28, 2024 11:19 |
|
mekyabetsu posted:I read about this, but I don't understand why it's a problem. I mean, obviously more data is going to be written to the larger drives because they're... bigger. Basically if you care a lot about throughput and iops you want all writes to be spread as evenly as possible among the vdevs. For example if you need to write 10 GB and you have 4 vdevs, the fastest thing is to have each vdev take 2.5 GB so they all finish at the same time. However zfs wants to balance the % used, so bigger vdevs get more writes, as do new empty vdevs. That means right after adding a new one, most of that 10 GB write will go to the new, empty vdev. This is slower because we aren't doing much in parallel. For home use this is not a problem.
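You can actually watch this with the per-vdev view, since the fill percentages are what the allocator is weighing (pool name is whatever yours is):
code:
# -v breaks capacity, free space, and fragmentation out per vdev instead of one pool total
zpool list -v tank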
|
# ¿ Apr 16, 2024 18:05 |
|
aside from a weekly scrub you should set up timed snapshots. there's scripts floating around somewhere that handle all the logic like keep X daily snapshots and Y monthly snapshots etc
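if you don't want to roll your own, sanoid and zfs-auto-snapshot are the usual picks, and truenas has built-in periodic snapshot tasks. the diy version is basically just a cron entry plus pruning logic, something like (dataset name is a placeholder):
code:
# /etc/cron.d/zfs-snapshots -- daily snapshot at 03:00 (% has to be escaped in cron)
0 3 * * * root zfs snapshot tank/backups@daily-$(date +\%Y-\%m-\%d)
# the scripts mostly exist to handle the "keep X, delete the rest" pruning side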
|
# ¿ May 2, 2024 22:45 |
|
i'd just wait for the correct heatsink, otherwise you'll have to clean off the thermal goop and reapply it
|
# ¿ May 14, 2024 04:03 |
|
afaik it is not a mirror of the metadata; when you make a special vdev, it holds the only copy of the pool's metadata, so losing it means losing the whole pool.
|
# ¿ May 15, 2024 05:40 |
|
It should probably throw a bunch of warnings at you if you try to make a special vdev without redundancy.
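For reference, adding one with redundancy looks roughly like this (device names are made up), and iirc plain zpool add already complains about a mismatched replication level unless you force it with -f:
code:
# mirror the special vdev so a single device failure can't take the pool with it
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1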
|
# ¿ May 15, 2024 21:46 |