|
BlankSystemDaemon posted:Now, you may rightly say, that this is something that WireGuard can get, and in principle I would agree - but Jason has unfortunately shown himself rather resistant to independent implementations, in three separate instances. I find it very interesting that you decide to blame Jason when it was Netgate's contractor pushing insecure, subpar code into FreeBSD that led to the whole shitstorm. Seems very disingenuous to me, but I guess pfSense must be defended at all costs as one of the last bastions of relevance for FreeBSD.
|
# ¿ Jun 7, 2021 15:44 |
|
|
Matt Zerella posted:Are you familiar with Virtualization? Honestly I don't recommend this approach at all. Why can't you recommend virtualization? It's pretty great. TrueNAS would be a poor fit here, because modeski wants JBOD and it doesn't cater to this use case at all. I thought OMV sounded like a good fit for them, so what does Unraid bring to the table other than a price tag to make it worth consideration in this case?
|
# ¿ Jul 8, 2021 08:59 |
|
Matt Zerella posted:Full drive size mix and match JBOD with Parity (max drive size is only limited by the size of your parity disk, and you can do dual parity if you want) Cache drives (can also be dual mirrored), Docker app based "plugin" system. A non poo poo community full of helpful people and actual support from the company who makes it. And it has virtualization on top of it. If their goal is to maximize available storage space as stated in the original post, it sounds to me like parity isn't really needed or wanted in this specific case. As for OMV, Docker support is there while cache drives are not. No idea about virtualization, but regardless of OS choice they should then go with the original idea of running their NAS OS on top of a hypervisor instead of going the opposite way about it. SolusLunes posted:Plus a feature-unrestricted trial. SolusLunes posted:The major downside with Unraid is simply how it stores data- it doesn't stripe in the main array, so you are limited in read/write speeds for a single file to single-disk speeds, and there isn't native ZFS support.
|
# ¿ Jul 8, 2021 16:23 |
|
Nitrousoxide posted:There can be reasons to go with the proxmox into a NAS VM approach. But you probably need to have a very specific purpose in mind for that. If for instance you're trying to build a home lab or something which requires you to want to share the hardware on your storage computer between multiple VMs through passthrough. If you're planning to do virtualization at all I'd argue you should go with a hypervisor like ESXi or Proxmox VE from the beginning instead of using a half-baked solution within your NAS OS of choice. If you already have the technical competency to go all-out with virtualization instead of a bare-metal installation for the OS, I'd argue there's very little benefit to the latter. EVIL Gibson posted:My personal main hangup is manually setting up a docker jail for something with a million settings; if I lose that .env file ... poo poo is not going to be just thrown but also being actively thrown down the gullet of every service that requires that vm to be up. This is the same whether you set up a physical or virtual server though. If you don't do backups and lose your settings there'll be a shitshow regardless. hbag posted:even if i got ionice working would it even loving work properly inside a docker container You can't adjust process nice values within a container without granting it the CAP_SYS_NICE capability. A better solution would probably be to use Docker's resource constraint capabilities, but assuming you're not using swarm mode you'll then need to use the Compose v2 spec: https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
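For reference, a minimal Compose v2 sketch of what those resource constraints look like - the service name, image and the actual limit values here are all made up for illustration:

```yaml
version: "2.4"
services:
  transcoder:                            # hypothetical service name
    image: example/heavy-worker:latest   # hypothetical image
    cpu_shares: 512      # relative CPU weight (default is 1024)
    cpu_quota: 50000     # hard cap: 50% of one core per 100ms scheduling period
    mem_limit: 1g        # container gets OOM-killed past this
    blkio_config:
      weight: 300        # relative block I/O weight (10-1000) - the ionice-ish knob
```

Note that the blkio weight only does anything on hosts running a weight-aware I/O scheduler (CFQ/BFQ), so depending on your kernel it may be a no-op.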
|
# ¿ Jul 9, 2021 14:46 |
|
Nam Taf posted:Am I right in thinking that you could just set up 2 firewall rules, one that takes 53 and one that takes 443, and have them both forward to the same internal IP/port for the VPN server? That way you can switch between the two and it should be functionally identical? You could, but generally you want to use UDP rather than TCP for a VPN link. If you direct 53/udp to your server's UDP port, and 443/tcp (and maybe also 53/tcp) to its TCP port, you'll probably have good results.
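If the firewall box in question happens to be Linux, the two rules could be sketched roughly like this - interface, addresses and the VPN server's actual listening ports are all invented for the example:

```shell
# hypothetical setup: VPN server at 192.168.1.10, WAN interface eth0
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53  -j DNAT --to-destination 192.168.1.10:1194
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:4443
```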
|
# ¿ Aug 4, 2021 12:41 |
|
hogofwar posted:I have built a server that I want to run some sort of expandable RAID on (with 3 12tb+ drives at least), I am currently running OMV, which allows for software RAID which I think is expandable (except for JBOD and 0), but I could use ZFS-based RAID? From my research using ZFS has many benefits but it's not easily growable, is that correct? If you want a storage solution with redundancy where you can later add one disk at a time to increase capacity, ZFS is probably not (yet) for you.
|
# ¿ Aug 19, 2021 15:52 |
|
Twerk from Home posted:Cross posting because I bet that people here will have some good ideas about how to upload huge files over HTTP. Any ideas? I'd use minio for this.
|
# ¿ Sep 4, 2021 19:54 |
|
Mephistopheles posted:[...]Old hardware is rather ancient[...] How many video streams are you expecting to be encoding simultaneously? It'd have to be more than a couple to bottleneck on that CPU - I was running Emby/Jellyfin on a Q6600 until finally replacing that old beast early this year, and while sweating hard it could handle a couple of transcoding streams. There's not a hard requirement on having a GPU although it'll obviously be more efficient. Scruff McGruff posted:ECC memory is unnecessary for a small home server IMO, you can upgrade it later if you want but don't worry about going out of your way for it now. It's really only needed if you're running critical hough uptime servers where a single wrong bit could cause irreplaceable damage. Saying it's unnecessary for home use is going a bit too far in the opposite direction. If you care about ensuring that the data in memory is not getting corrupted before being committed to disk, you'll want to be using ECC memory. If Intel weren't purposefully crippling their consumer hardware to segment it away from the enterprise market I think we'd likely be using ECC RAM in all our computers by now.
|
# ¿ Nov 4, 2021 12:35 |
|
Scruff McGruff posted:That's fair, I guess I should caveat it with "It's not worth paying the premium for it in version 1 of your home server" because if prices were more equivalent (and motherboard compatibility better) then definitely I'd have it in my server too. I just don't think it's worth prioritizing over other upgrades early on. Yeah, definitely agree with this. He's got 24GB of non-ECC RAM that will be used for the v1 server and that is totally fine. When choosing new hardware I think this is sensible to include in your list of requirements but obviously there's no hard requirement. Personally I went with AMD because I didn't want to pay Intel's ECC tax.
|
# ¿ Nov 4, 2021 13:13 |
|
XFS doesn't really bring that much to the table compared to ext4, but Red Hat pushes it heavily. Neither is especially good for the NAS use case.
|
# ¿ Feb 10, 2022 16:20 |
|
Kivi posted:I'm bit on the edge to migrate to something I've not used and reading on the net ZFS seems to require tons of extra horse power and hardware (1 GB of RAM per TB, log and cache disks) for benefits that are not applicable in my use case? What am I missing? I've been running a ZFS pool with 8 x 14 TB disks on a (virtual) machine with 2 CPU cores and 16 GB of RAM for the past year, with no special device disks either. Have yet to experience any issues with it. I went with ZFS because of its design focus on data integrity, but for me at least snapshots, file system level compression and being able to apply different properties per filesystem/dataset have really been killer features as well. I've also found the management interface quite nice to use.
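To illustrate the per-dataset properties bit - pool and dataset names invented for the example:

```shell
# compression and atime are inherited by child datasets unless overridden
zfs create tank/media
zfs set compression=lz4 tank/media
zfs set atime=off tank/media
# snapshots are cheap and taken per dataset
zfs snapshot tank/media@before-reorg
zfs list -t snapshot tank/media
```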
|
# ¿ Feb 15, 2022 10:25 |
|
Holy poo poo what are they doing
|
# ¿ Feb 19, 2022 14:09 |
|
priznat posted:The Node 804 case has been weirdly hard to find here, any other recommended mATX cases with lots of 3.5” bay spots? The Node 804 case sucks IMO so you're better off looking for alternatives. Individual drives are hard to access in the vertical cages, the adapters you need for mounting modern drives are made with poor accuracy and about half were absolutely awful to fit, but worst of all is that any and all drive movement and noise resonates loudly out through the cages and is somehow amplified by the side panel of the case. I've stuffed the NAS in a place where I can't hear the thing so it's OK, but it was driving me crazy while the thing sat in my living room during initial setup, and getting that lovely case is the one thing I really regret from my build spec.
|
# ¿ Mar 7, 2022 19:24 |
|
Klyith posted:1. Do you want to use true/freeNAS and do the setup and janitoring of the system, and have more ownership? What janitoring is there? If your needs are simple you can just install TrueNAS, set up your shares and be done with it. Nitrousoxide posted:If you can wait a bit, TrueNAS Scale will probably get a full release this year and it being linux (with docker built in) will make it a much smoother transition for you to migrate your services over to there. SCALE was released last month actually: https://www.truenas.com/docs/releasenotes/scale/22.02.0/ It comes with Kubernetes, not Docker, for its container orchestration.
|
# ¿ Mar 8, 2022 20:30 |
|
Klyith posted:C'mon, versus a commercial purpose-made device, you are far more likely to have indeterminate problems that you have to troubleshoot. For example, whatever was going on for this guy ITT. That's not to say it's bad or not worth doing, but it's more work than a NAS box you just plug drives into. In the example you posted, ZFS has faulted the disk and says it should be replaced. Our guy decided to not listen to this and start troubleshooting instead. Not really sure what the difference with an appliance is there, unless the appliance has stripped out the tooling that would be needed for troubleshooting. Klyith posted:(IDK maybe you feel "janitoring" is more of a negative epithet than I do.)
|
# ¿ Mar 8, 2022 23:44 |
|
Smashing Link posted:Anyone have any clever solutions to an OS disk for TrueNAS? I'm using a 256 GB NVME but it's overkill for the OS and uses the whole disk. I have read USB disks are not good either because TrueNAS does a lot of writing to the OS disk. You can install a hypervisor on the disk and put TrueNAS in an appropriately sized virtual disk instead. The flexibility this offers is really nice.
|
# ¿ Mar 21, 2022 17:12 |
|
withoutclass posted:I've been using the same USB stick for over 8 years now without a problem. I wouldn't do that in a production environment probably but for home use it works just fine. As a counterpoint I had three USB sticks die over the course of a year between 2019-2020. They were acting as boot devices for my previous server, but running Debian rather than TrueNAS.
|
# ¿ Mar 21, 2022 23:51 |
|
Poopernickel posted:I'm Linux-savvy, and good with networking. But I don't want to janitor my NAS, so I don't want to build one with an Rpi or a PC running FreeNAS or something. What is this janitorial work that comes with TrueNAS but not Synology?
|
# ¿ Mar 26, 2022 21:54 |
|
Poopernickel posted:I guess I assume it's like running OpenWRT on your router. Poopernickel posted:Definitely possible, and lots of features. Some will or won't work depending on hardware. Some will require a lot of messing around. I'll forget how to maintain the weird hacks I had to do to get everything just right. Probably I wind up installing a bunch of user-made packages that have wildly variable quality. If you install a bunch of unsupported third-party components this will likely result in the same kind of jank and amount of janitorial work regardless of going with TrueNAS or Synology. Poopernickel posted:If I do a bunch of research and buy exactly the right hardware, it'll work great (until it doesn't because something breaks in a software update). If I get it even slightly wrong, it's a world of pain. Possibly I have to do a bunch of research and build a custom setup.
|
# ¿ Mar 27, 2022 01:29 |
|
BlankSystemDaemon posted:No, it's an inherit property of ZFS, BTRFS, APFS, and basically everything that isn't a clone of a filesystem designed back in 1980 (FFS/UFS, which is still also in FreeBSD). Btrfs actually allows you to disable COW with the mount option nodatacow, see btrfs(5). This comes with the major caveat that you disable checksumming and compression, which is very important to consider alongside this note from the top of the section: btrfs(5) posted:Most mount options apply to the whole filesystem and only options in the first mounted subvolume will take effect. This is due to lack of implementation and may change in the future. This means that (for example) you can’t set per-subvolume nodatacow, nodatasum, or compress using mount options. It's as if they're trying their best to mess up their users' data.
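For completeness, the per-file escape hatch that does work on btrfs is the C file attribute, which has to be set while the file (or the directory it'll be inherited from) is still empty - with the same checksumming/compression caveats:

```shell
# disable COW for everything created under this directory from now on
mkdir /mnt/btrfs/vm-images
chattr +C /mnt/btrfs/vm-images
lsattr -d /mnt/btrfs/vm-images   # the 'C' flag should now be listed
```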
|
# ¿ Apr 20, 2022 16:59 |
|
BlankSystemDaemon posted:I'm not sure how that's better than simply disabling synchronous writes completely, thereby forcing the dirty data buffer to be used. Which would be as easy as setting sync=disabled for your scratch dataset, yeah? Can you explain why you argued on the previous page that primarycache should be disabled? And what about compression? From what I've read, lz4 basically just gives you better perf than uncompressed data on spinning rust, as it can be offloaded and allows reading/writing more data in less time (for data as well as metadata) - but I assume you'd know better than me, so it would be very interesting to hear your reasoning.
|
# ¿ Apr 20, 2022 20:16 |
|
Klyith posted:Two, ZFS and/or TrueNas or whichever distro set ZFS up. The ZFS advocates ITT give btrfs a lot of poo poo for having unstable, integrity-not-guaranteed features that can be turned on. If ZFS is critically dependent on ECC memory, that feature to do memory checksumming should be way more exposed so that anyone who doesn't have ECC will turn it on.
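For what it's worth, OpenZFS on Linux does expose an in-memory verification knob, but as a debug tunable rather than a production feature - which rather proves the point about it not being exposed enough. If I'm reading the zfs(4) man page right, the 0x10 bit (ZFS_DEBUG_MODIFY) makes ZFS checksum in-flight buffers, at a performance cost:

```shell
# debug tunable, not intended for production use
echo 0x10 > /sys/module/zfs/parameters/zfs_flags
```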
|
# ¿ Jun 14, 2022 21:34 |
|
I'll just use ECC instead of all this hassle, thanks.
|
# ¿ Jun 16, 2022 15:55 |
|
I built my NAS last year using an X570D4I-2T board. It's been fantastic once up and running, but potential buyers need to be aware that they come with idiosyncrasies like requiring an LGA115X CPU cooler despite being an AM4 board, or that it uses OCuLink for hooking up the HDDs. Better read the fine print if you're planning an ASRockRack system. The Node 804 however I would really not recommend as it's like an acoustic resonance superconductor of hard drive noise, and the hard drive cages are really awkward to work with too. It's the part of my NAS that I'm the least happy with for sure.
|
# ¿ Jun 30, 2022 18:46 |
|
BlankSystemDaemon posted:I still think FreeBSD is the better option, because Linux is still not at a point where the tooling is as integrated; you still can't easily do boot environments on Linux, You have the ZFSBootMenu project now that supports boot environments for Linux based systems. It's pretty cool and easy to get set up (for a nerd), although you need to jump through some pretty big hoops to not have to enter your decryption passphrase twice during boot.
|
# ¿ Aug 17, 2022 10:04 |
|
Ihmemies posted:I'd really like a hypervisor tho so I could do things like test new operating systems and stuff without breaking anything horribly. Files will mostly sit archived, idle, so it would be waste of hardware to not use the server for as many things as possible! I'm running TrueNAS CORE on top of ESXi with SATA controller passthrough to the VM. Then I run services in containers under a different VM. It's been chugging along for a year and a half without issues so far.
|
# ¿ Sep 16, 2022 17:40 |
|
Maybe Plex is blocked at your workplace because they don't want employees watching Plex while at work? You could always SSH tunnel home I suppose.
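The tunnel itself is a one-liner, assuming you have SSH access to some box at home (hostname invented here):

```shell
# forward local port 32400 to Plex on the home network, no remote shell needed
ssh -N -L 32400:localhost:32400 user@home.example.net
# then point the browser at http://localhost:32400/web
```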
|
# ¿ Oct 8, 2022 12:38 |
|
Generic Monk posted:the 4,8 disk stuff is bullshit based on a misunderstanding of how the technology works. use however many disks you want, though obviously not a ridiculous amount (idk what the recommended upper limit is with raidz2 - 10? 12?) RAIDZ1 is generally not recommended for pools made up of large disks due to risk of secondary failure during long resilver operations. Might be better off going with mirrored vdevs instead then. I built my pool with RAIDZ2 as being able to deal with any 2 disks dying is much more important for my use than performance. If perf was important I wouldn't be using HDDs, the way I see it.
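The two layouts for, say, six disks - device names are placeholders:

```shell
# RAIDZ2: survives any two disk failures, best capacity efficiency
zpool create tank raidz2 sda sdb sdc sdd sde sdf
# striped mirrors: better IOPS and faster resilvers, but capacity is halved
# and losing both disks of one pair loses the pool
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
```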
|
# ¿ Oct 13, 2022 15:07 |
|
Ihmemies posted:How would one copy files from a HDD and verify that they were copied successfully? rsync will verify that checksums match post-transfer, so you can be certain that the source data has been copied as-is to the destination. Verifying the integrity of the source data is another matter entirely. It can be automated to some degree, but it's challenging since the checks would have to be implemented specifically for each file format, and even then 100% assurance is impossible for many formats.
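If you'd rather not take the copy tool's word for it, an independent check is easy enough to script. A sketch that hashes both trees and reports anything that differs - directory layout and hash choice are arbitrary:

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so big files don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(src_dir: Path, dst_dir: Path) -> list:
    """Return relative paths that are missing or differ in dst_dir."""
    mismatches = []
    for src in src_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(src_dir)
        dst = dst_dir / rel
        if not dst.is_file() or sha256sum(src) != sha256sum(dst):
            mismatches.append(rel)
    return mismatches
```

Running rsync with --checksum in dry-run mode against the two trees is the lazier version of the same idea, if you'd rather stay in one tool.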
|
# ¿ Oct 14, 2022 13:07 |
|
Combat Pretzel posted:So apparently Kubernetes had announced deprecation of Dockershim, which allowed TrueNAS to use Docker as container engine. 1. Docker is the worst container runtime at this point, good riddance 2. I don't have any first-hand experience with using ZFS as storage backend but OverlayFS is what's generally being used all over the place. With the merging of this recent PR it should get less hacky 3. If TrueNAS SCALE lets you use Debian repos you can probably keep installing Docker and Podman via apt - or learn to use the container tooling that's actually supported on there
|
# ¿ Nov 13, 2022 15:21 |
|
Linux containers != Docker. If you want to use Docker to orchestrate containers you could benefit from a different host OS choice than TrueNAS SCALE where it's unsupported.
|
# ¿ Nov 13, 2022 16:08 |
|
Lowen SoDium posted:I tried Jellyfin (coming from Emby) but I was quickly turned off from it because they use something other than thetvdb for episode numbering (which is a GOOD thing because thetvdb operators are tools) and like half of my shows got their numbering all messed up because they don't match. That's a configuration issue on your side, there's a TVDB plugin (https://github.com/jellyfin/jellyfin-plugin-tvdb) which I've been using since I switched from Emby to Jellyfin back when it was still the primary metadata downloader in both. Guess TVDB backpedaled on their plans to charge normal users for API access because the plugin still just works for the time being.
|
# ¿ Nov 17, 2022 15:58 |
|
deong posted:Its at version 8.0.0.0 in the jellyfin plugin repo, so just go to plugins and click on the TVDB plugin. Not sure how far behind it is, but it's in the same major release. It's the same plugin as you're seeing in the repo, GitHub is just where the code is maintained. Was just telling Lowen SoDium it's an option.
|
# ¿ Nov 17, 2022 19:00 |
|
VostokProgram posted:This whole space is just begging for someone to create a convenient and easy to use wireguard endpoint in a box I'm pretty sure the GL.iNet guys have done this already. Checking out their website, this upcoming Brume 2 device looks very on point. Personally I use Tailscale with a self-hosted control server (headscale) as an always-on VPN for accessing my services, both from home and elsewhere. Very convenient, but perhaps not so easy (yet?) to host yourself.
|
# ¿ Nov 18, 2022 09:59 |
|
Gay Retard posted:If they're using any iOS devices, it's a Test Flight beta app, which I know concerned some of my more privacy centric friends due to more advanced logging with beta apps. Users also have to manually enter the Jellyfin server address when adding a new device. No, that's very outdated info; Jellyfin has been out on the iOS App Store for years at this point. It's also not what fletcher was asking about. fletcher posted:What's the Wireguard & Jellyfin setup like for the non-tech family members? The WireGuard mobile apps can scan in configs from QR codes, so presumably you'd send them one along with some instructions. They would need to install two apps, scan the QR code, and then, as Gay Retard wrote, in Jellyfin they would have to enter the domain name that the server runs on before logging in. It's not something I'd do but I'm sure they could manage without a degree in engineering.
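Generating the QR code is a one-liner with qrencode (packaged in most distro repos), fed a peer config file - the filename here is made up:

```shell
# render the peer's WireGuard config as a QR code right in the terminal
qrencode -t ansiutf8 < wg-phone.conf
```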
|
# ¿ Nov 19, 2022 10:14 |
|
BlankSystemDaemon posted:Mind you, I'm not saying BTRFS doesn't have any innovation - but if it's gonna catch up like Linux folks like to imagine it will, it's gonna have to pick up the pace a fair bit. Is there anyone left with hope for btrfs? I thought that ship had sailed by now. Maybe bcachefs will be good one day.
|
# ¿ Nov 20, 2022 21:01 |
|
On the other hand btrfs doesn't have any advantages outside of being "easier" to get started with on Linux. Yeah, sure, you've got btrfs in the kernel (unless you're on a recent RHEL distro), but what else? On Debian for instance ZFS is in contrib and getting your system ready for it is as easy as running "apt install zfs-dkms zfsutils-linux".
|
# ¿ Feb 10, 2023 19:54 |
|
Was reminded of the discussion on VPN vs port forwarding from the beginning of this month, and in particular the following post:Corb3t posted:Y'all should just open port 32400 and enjoy seamless streaming from any device. Plex supports 2FA - it's probably a lot easier convincing the people you share Plex with to turn that on instead of setting up a VPN or using a Plex-specific device. while reading today's golden nugget from the infosec thread: https://arstechnica.com/information-technology/2023/02/lastpass-hackers-infected-employees-home-computer-and-stole-corporate-vault/
|
# ¿ Feb 28, 2023 15:03 |
|
BlankSystemDaemon posted:Also, what the gently caress does that even mean. I assume it means they'll create a zpool with a single disk vdev and a single filesystem for use in the jankraid.
|
# ¿ Mar 21, 2023 11:13 |
|
|
Nitrousoxide posted:You can use docker compose in TrueNAS now. They don't require the use of kubernetes anymore. Is that just something for parsing Compose file YAML and using it to orchestrate k8s? All the links in that thread are dead so I can't really find out much about it from there. https://truecharts.org/news/docker-compose/ <- this turned up after some web searches So it's running Docker in a container, and then you attach a shell and run the compose CLI tool there? I guess that would work but it seems a bit messy. I don't understand why using something that's not Docker apparently is a non-starter for so many home users. The docker CLI is OK but not that great. Compose YAML is pretty poo poo. k8s YAML is even uglier, but it's not exactly hard to get a grasp of. There just seems to be this huge aversion to learning something new, like how'd you get started with Linux containers in the first place if you hate everything you don't know? Weird.
|
# ¿ Mar 21, 2023 16:00 |