|
Pardot posted: What benefit does that provide them over just running the binary directly?

In short, all of the cgroups stuff: per-container resource limits for CPU and memory, the ability to do port mapping, as well as getting it into whatever container-based orchestration service you're using.
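As a sketch of what that buys you (the image name "myapp" and the limit values here are made up, not from the thread):

```shell
# cgroups is what enforces --memory and --cpus below; -p does the port mapping.
docker run -d --name myapp \
  --memory=512m \
  --cpus=1.5 \
  -p 8080:80 \
  myapp:latest
# --memory: hard memory cap (the container is OOM-killed past it)
# --cpus:   CPU quota of 1.5 cores
# -p:       host port 8080 mapped to container port 80
```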
|
# ? Sep 15, 2021 16:13 |
|
|
SolusLunes posted: Containerization isn't really "installing a full OS", and tbh thinking of it that way's probably counterproductive (even if, in the technical sense, it's true-ish.)

Twerk from Home posted: For a single self-contained binary, you can use a FROM scratch docker container that has no files in it other than the binary. Go docker containers that don't need CGO can end up under 10MB total.

Pardot posted: What benefit does that provide them over just running the binary directly?

They're typically a lot more difficult to maintain, because you lose the ability to build a userland (there's no compiler or linker), and you don't have access to binary upgrades either, which is why they're only used for things you touch as infrequently as possible. The big advantage is that you've got basically no way to affect the host state without breaking out of the guest, and accomplishing that is the sort of thing that might even make APTs think twice: there's not a bunch of binaries whose state you can affect, and you're relying solely on remote code execution, privilege escalation, and a bunch of other stuff to even get access to the jump jail, while the host is still completely unaffected.

Twerk from Home posted: In short, all of the cgroups stuff, so per-resource limits for CPU, memory, the ability to do port mapping, as well as getting it into whatever container-based orchestration service you're using.

BlankSystemDaemon fucked around with this message at 16:25 on Sep 15, 2021 |
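For reference, the FROM scratch workflow looks roughly like this (the binary name "myserver" and the tag are placeholders, not from the thread):

```shell
# Build a fully static Go binary (CGO disabled, so no libc dependency),
# then package it in an image that contains nothing but that one file.
CGO_ENABLED=0 go build -o myserver .
cat > Dockerfile <<'EOF'
FROM scratch
COPY myserver /myserver
ENTRYPOINT ["/myserver"]
EOF
docker build -t myserver:scratch .
```

The resulting image has no shell, no package manager, and no other binaries, which is exactly why it is both tiny and hard to pivot around in.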
|
# ? Sep 15, 2021 16:21 |
|
BlankSystemDaemon posted: Why are cgroups tied to containers? In FreeBSD, rctl(8) can be applied to processes, jails, users, or login classes - singly and in combination.

cgroups isn't limited to containers, but a lot of the modern tooling for orchestrating processes at scale across a cluster assumes that everything is a container. This is more an issue of walking the common path vs building your own solution from lower-level primitives (cgroups itself).
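For comparison, rctl(8) rules on FreeBSD look like the following (the jail name, user, and PID are hypothetical):

```shell
# FreeBSD rctl(8): the same kinds of limits, applied without any container.
# Rule format is subject:subject-id:resource:action=amount.
rctl -a jail:myjail:memoryuse:deny=512m   # cap a jail's memory use
rctl -a user:alice:maxproc:deny=100       # cap a user's process count
rctl -a process:1234:pcpu:deny=50         # cap one process at 50% of a CPU
```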
|
# ? Sep 15, 2021 16:43 |
|
BlankSystemDaemon posted: Containerization is a form of virtualization, and it by definition needs to be isolated from the host, as one of the central ideas of virtualization is that guest state doesn't affect host state in a multi-tenancy environment.

You're absolutely right from a technical standpoint. But from a practical standpoint, it's a single program, architected in such a way that if and when it shits the bed, it only shits its own bed, instead of making GBS threads in everyone else's bed in amounts proportionate to the shittiness of the product. ...That was a terrible analogy, but I've found it easier to visualize containerization as "what if running programs on a machine was done in a sane manner?" Also, there's a desktop OS where absolutely everything is containerized, making it an enormous resource hog, sure, but damned if that doesn't sound interesting to me. Cannot for the life of me remember what it was called, though. SolusLunes fucked around with this message at 17:05 on Sep 15, 2021 |
# ? Sep 15, 2021 16:59 |
SolusLunes posted: You're absolutely right from a technical standpoint. But from a practical standpoint, it's a single program, architected in such a way that if and when it shits the bed, it only shits its own bed, instead of making GBS threads in everyone else's bed in amounts proportionate to the shittiness of the product.

Fedora Silverblue? The version with an immutable userspace that installs everything through flatpaks (except for stuff you install via rpm-ostree, which you should only use as a last resort since it requires a rebuild of the OS system image every time you install/update something). You can also install stuff via toolbox, which sort of works like a stripped-down VM that can interact with the UI of the host system in a more integrated way; that's where you install stuff not supported by flatpak which you don't want to install via rpm-ostree. Updating stuff installed via toolbox is a pain in the rear end, though, since you essentially have to either manually go into each toolbox one by one, or write a script to do so, updating them using rpm, as the host system can't just tell them en masse to do an update. Nitrousoxide fucked around with this message at 17:17 on Sep 15, 2021 |
|
# ? Sep 15, 2021 17:14 |
|
EVIL Gibson posted: For applications like Sonarr / Radarr, I can say yes to it.

The "service" in this case is a VNC interface to a tool that is either difficult or impossible to use without a GUI. Most of jlesage's containers provide this type of functionality for otherwise-headless systems trying to run things like crashplan, mkvtoolnix, handbrake, or even a loving Firefox install. It just so happens that the container also includes the tool itself. Also, if the tool is something you might only use once in a while, or once and not again, "docker stop container && docker rm container && docker image prune" means it's gone forever. No stray configs or logs or anything anywhere else in your system.
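Spelled out, the cleanup from the post is just (where "container" is whatever name you gave it):

```shell
# Throw away a one-off container, then drop any images nothing references.
docker stop container
docker rm container
docker image prune    # pass -a to also remove images unused by any container
# Caveat: named volumes survive this; remove them with `docker volume rm` too.
```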
|
# ? Sep 15, 2021 17:24 |
SolusLunes posted: You're absolutely right from a technical standpoint. But from a practical standpoint, it's a single program, architected in such a way that if and when it shits the bed, it only shits its own bed, instead of making GBS threads in everyone else's bed in amounts proportionate to the shittiness of the product.

It's the sort of thing that's better accomplished by kernel-enforced capabilities, which can be further enhanced by enforcing them in hardware such as ARM Morello. Are you talking about QubesOS? Because that explicitly uses a hardware-accelerated hypervisor (Xen, if memory serves) to accomplish the isolation, as the isolation found in docker simply isn't good enough. Docker can be made to have the same level of isolation as jails and bhyve/Xen offer on FreeBSD, but it involves a shitload of configuration and engineering effort to get there.

IOwnCalculus posted: The "service" in this case is a VNC interface to a tool that is either difficult or impossible to use without a GUI. Most of jlesage's containers provide this type of functionality for otherwise-headless systems trying to run things like crashplan, mkvtoolnix, handbrake, or even a loving Firefox install. It just so happens that the container also includes the tool itself.
|
|
# ? Sep 15, 2021 17:42 |
|
BlankSystemDaemon posted: I think you might be mistaking terminology, in so far as you're using containerization as a form of sandboxing - which is a specific form of virtualization, but for a single program.

That's fair re: terminology; I've learned everything I know about containerization from what I've gotten to work in my homelab. And yeah, QubesOS is it! rip Sun, taken from us too early.
|
# ? Sep 15, 2021 17:51 |
|
Pardot posted: What benefit does that provide them over just running the binary directly?

This doesn't apply to my NAS, but I use single binaries all the time at work for apps that rely on things like python. We use "gimme-aws-creds" constantly, but with multiple versions of ansible, so instead of having to install it in all my venvs with my different versions of ansible, I have a docker with an alias that mounts the needed files in my $HOME directory that'll get my token for me without having to worry about the needed dependencies. It's very nice, and the container exits when the binary is done working, so the overhead outside of docker desktop (lol) isn't much. It basically ensures dependency- or OS-specific things are present for the binary and keeps everything on your host system neat and clean. Docker has its faults and the company sucks, but it's incredibly handy even outside of running apps like radarr/sonarr. Entrypoint is what you're looking for here if you want to do any research.

BlankSystemDaemon posted: I think you might be mistaking terminology, in so far as you're using containerization as a form of sandboxing - which is a specific form of virtualization, but for a single program.

Can we take this to the linux thread or something? I get you love BSD and you bring a ton of good poo poo in here but cmon, do we really need this benign pissing match? Docker all the things on your NAS, who gives a poo poo about all of this.
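A minimal sketch of that alias pattern (the image name and the mounted config path are assumptions, not from the post):

```shell
# Make a containerized CLI behave like a local binary. --rm deletes the
# container the moment the tool exits, so nothing is left behind on the host.
# "myorg/gimme-aws-creds" is a hypothetical image name.
alias gimme-aws-creds='docker run --rm -it -v "$HOME/.aws:/root/.aws" myorg/gimme-aws-creds:latest'
```

Because the image pins the tool's interpreter and dependencies, the same alias works identically regardless of which python/ansible versions are installed on the host.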
|
# ? Sep 15, 2021 17:52 |
Well then, let me instead mention a cool new upcoming feature in ZFS. It's part of a much bigger change, but the part I'm highlighting is specifically for including physical paths (and the line below for enclosure paths), which I think I might've briefly mentioned earlier, but at the time couldn't remember if it was just in the planning stages or actually part of a pull request. So once that lands, if you have SES, you no longer have to use GPT, GEOM, or other labeling techniques on your disks; all of ZFS's administrative commands will list the physical/enclosure path of a given disk, for example in zpool status, zdb, and everywhere else that information appears.
|
|
# ? Sep 15, 2021 18:22 |
|
FYI Crashplan users: a policy change has been emailed to you. Deleted files are only retained for 90 days, starting mid-October.
|
# ? Sep 18, 2021 00:12 |
What is this "file deletion" that you speak of? I'm not sure I understand.
|
|
# ? Sep 18, 2021 12:16 |
|
The Consumer NAS/Storage Megathread: What is this "File Deletion" You Speak of?
|
# ? Sep 18, 2021 15:55 |
|
It's that thing that happens when you don't want to keep a file anymore. Not WORM... WTRM?
|
# ? Sep 18, 2021 16:17 |
Does not compute.
|
|
# ? Sep 18, 2021 16:22 |
|
File deletion? You mean deduplication, right?
|
# ? Sep 18, 2021 17:03 |
|
Legends tell of a forbidden spell, "unlink(2)", but no one has used such power in millennia...
|
# ? Sep 18, 2021 19:22 |
|
Why bother with legends, just look back in the logs retained for all time.
|
# ? Sep 18, 2021 23:06 |
Is there any compelling reason to update a Synology from DSM 6 to DSM 7? I'm not sure what will break since my NAS is pretty much just file storage and a Plex server now.
|
|
# ? Sep 19, 2021 20:39 |
|
tuyop posted:Is there any compelling reason to update a Synology from DSM 6 to DSM 7? I'm not sure what will break since my NAS is pretty much just file storage and a Plex server now. 6 won't be EOL until sometime in 2023
|
# ? Sep 20, 2021 02:13 |
|
tuyop posted: Is there any compelling reason to update a Synology from DSM 6 to DSM 7? I'm not sure what will break since my NAS is pretty much just file storage and a Plex server now.

They changed the way Plex works. It'll prompt you to change the service account. You might need to change the permissions on the folders Plex accesses as well.
|
# ? Sep 20, 2021 03:09 |
|
I'm possibly going to be traveling around to places that have slow or really low data cap internet connections for about a year. Staying a few weeks/months in various locations before moving on. Instead of a single external USB drive, or 2 where I manually keep them in sync for a backup, I was thinking a small 2 bay NAS might do the trick. Is this a decent idea? And would these be OK parts? 1x https://www.amazon.com/Synology-Bay-DiskStation-DS720-Diskless/dp/B087Z6SNC1 2x https://www.amazon.com/Seagate-IronWolf-RAID-Internal-Drive/dp/B07H7CKYGT
|
# ? Sep 20, 2021 23:37 |
Are you going to be moving the NAS while it is on? Like in a car or RV?
|
|
# ? Sep 20, 2021 23:46 |
|
Nope, attached to a router where I'm staying or directly to a laptop (is the latter possible?)
|
# ? Sep 20, 2021 23:54 |
|
If I wanted a fast NAS and was willing to splash for a couple terabytes of all flash, what's a sane way to do that? Is ZFS RAIDZ going to be a huge bottleneck for NVMe disks? Do SSDs fail so rarely that people just span them together with LVM or run RAID0? Are SATA disks still enough cheaper to make 2.5" SATA SSDs worth it instead of NVMe?
|
# ? Sep 21, 2021 00:07 |
|
Twerk from Home posted: If I wanted a fast NAS and was willing to splash for a couple terabytes of all flash, what's a sane way to do that?

what's your chassis going to look like, or are you flexible / haven't reached that decision point yet? it's all just a balancing act of cost, performance, density, and durability. The main feature points are SATA vs NVMe, TLC vs QLC, and DRAM vs DRAM-less. You could, for example, buy a 2TB WD 3D Blue 2.5" with TLC and DRAM for about $175, but that money would also get you an entry-level 2TB QLC without DRAM. A high-end (but not super premium) HP EX950 2TB would be about $270 for that same 2TB, but you get TLC and DRAM again.

2.5" is generally cheaper just because it can be physically larger; there's only so many chips you can fit onto a 2280 stick, and TLC 2280 sticks really top out at 2TB for the most part (there are some really high-end 4TB TLC sticks, but they are definitely premium-priced). In some cases this lets you go from what would have been QLC on an M.2 to TLC on SATA, but NVMe is probably still faster even with QLC. And you can find 4TB 2.5" drives with TLC a bit more easily (still expensive, but they're out there).

https://docs.google.com/spreadsheets/d/1B27_j9NDPU3cNlj2HKcrfpJKHkOf-Oi1DbuuQva2gT4/edit?usp=sharing https://pcpartpicker.com/products/internal-hard-drive/#f=3&t=0&A=1800000000000,18000000000000&sort=ppgb https://pcpartpicker.com/products/internal-hard-drive/#f=122080&t=0&A=1800000000000,18000000000000&sort=ppgb&D=1

Depending on your board/chassis/goals you can get there with either 2.5" or M.2 - there are HBAs like the HighPoint lineup that let you put 4x or 8x M.2 NVMe sticks on a single PCIe slot for about $300-400 a card (iirc), so if you really want to stack in the capacity and you want to stay NVMe it's still possible. And on the flip side you'll need more places to mount 2.5" drives as well, so make sure your chassis can handle that too.
There are a lot of off-the-shelf offerings there, but you'll have to watch the topology, because you may not be able to max all the drives at once anyway. https://www.highpoint-tech.com/USA_new/series-r1000-fan-overview.html

One way to tweak performance a bit on SSDs would be to use an Optane drive for ZFS SLOG. The latency there is way lower than normal flash drives, so it should really help to avoid slowdowns from the intent log. You don't need tons; even the lovely 32GB or 64GB drives are plenty for ZFS SLOG. But it all depends on how much writing you intend to do - even without a SLOG, NVMe is going to be very very fast on writes, and that's the performance bottleneck in ZFS; reads will happen at whatever speeds the array allows.

QLC also is known to have problems with big sustained writes when it's full, but again, once it's written, it's a normal SSD in terms of read performance. And QLC has significantly worse write endurance, but if this is just your plex library or steam library, it may not be churning that much. So how much that one matters depends again on how much you care about writes vs reads. Paul MaudDib fucked around with this message at 00:36 on Sep 21, 2021 |
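As a sketch, hanging a small Optane SLOG off an existing pool is just (the pool name "fastpool" and device names are placeholders):

```shell
# Add a low-latency log (SLOG) device to an existing pool, then check it.
zpool add fastpool log nvd4   # nvd4 = the small Optane device
zpool status fastpool         # a "logs" section should now list nvd4
```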
# ? Sep 21, 2021 00:33 |
|
Little late on the name change, but the thread was definitely due. Thanks y'all!
|
# ? Sep 21, 2021 00:52 |
I'm not sure I've ever been responsible for the name of a thread before.

Paul MaudDib posted: one way to tweak performance a bit on SSDs would be to use an Optane drive for ZFS SLOG. The latency there is way lower than normal flash drives so they should really help to avoid slowdowns from the intent log. You don't need tons, even the lovely 32GB or 64GB drives are plenty for ZFS SLOG. But it all depends on how much writing you intend to do - even without a SLOG NVMe is going to be very very fast on writes, and that's the performance bottleneck in ZFS, reads will happen at whatever speeds the array allows.

You also need it to be mirrored, because if the SLOG drive disappears, so does any data that was on it prior to being flushed to disk. The ZIL exists to replay lost transaction groups in case of a power outage, crash, or failure modes that aren't catastrophic enough to take down the entire array. While the system is in normal operational mode, the ZIL isn't used at all. You might be thinking of the dirty data buffer that ZFS has, which is a 5-second/1GB buffer (or until an administrative command is issued, since that triggers an automatic flush) where data is stored until it gets flushed to disk as a transaction group.
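Following that advice, a lone SLOG device can be turned into a mirror after the fact (pool and device names are placeholders):

```shell
# Attach a second log device so losing one doesn't lose un-flushed sync writes.
# nvd4 is the existing log device, nvd5 the new mirror half.
zpool attach fastpool nvd4 nvd5
zpool status fastpool         # the log vdev should now show as a mirror
```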
|
|
# ? Sep 21, 2021 07:38 |
|
droll posted:I'm possibly going to be traveling around to places that have slow or really low data cap internet connections for about a year. Staying a few weeks/months in various locations before moving on. Instead of a single external USB drive, or 2 where I manually keep them in sync for a backup, I was thinking a small 2 bay NAS might do the trick. I ran a Syno 4-bay off of literal RV deep cycle 12v batteries, like, wired straight up to the batteries with a cut-up wall wart cable and a fuse, for two years. They're remarkably resilient and I had like 700 days uptime at one point. This was an older DS410+ but they are very, very tolerant of some garbage lovely DC power.
|
# ? Sep 21, 2021 08:02 |
|
BlankSystemDaemon posted: I'm not sure I've ever been responsible for the name of a thread before.

AFAIK the ZIL always exists; it's just that if no SLOG device is provided, it is stored on the pool with everything else.

Twerk from Home posted: If I wanted a fast NAS and was willing to splash for a couple terabytes of all flash, what's a sane way to do that?

Whether you want redundancy, or just a striped/spanned config, really just comes down to: how painful is restoring in your backup scheme?

Redundant option: Since you want fast, and only "a couple terabytes", IMO a striped ZFS mirror with 4x 1 TB NVMe drives is the way to go. Meaning, 2 mirror vdevs, each with 2 of the drives. And you've got no need for an SLOG then, even if running a high-speed database, because your pool is already fast.

Non-redundant option: Buy 2x 1 TB NVMe drives, stripe them in ZFS, and just rely on restoring from backup. In this scenario your first-tier "backup" can even just be 2x spinning rust drives in a mirror within the same machine, on a second pool you never use directly. Then a cron job backs up the SSD pool to the HDD pool periodically using ZFS send/recv.

Obviously in either scenario you have more backups, like an offsite one or w/e, depending on how valuable this data is.
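A sketch of the non-redundant option plus its same-box backup (pool and device names are placeholders):

```shell
# Fast striped NVMe pool, backed up to a mirrored spinning-rust pool.
zpool create fast nvd0 nvd1             # stripe: fast, no redundancy
zpool create slow mirror ada0 ada1      # HDD mirror, used only as backup
# Periodic backup (e.g. from cron): snapshot the fast pool, then replicate.
zfs snapshot fast@nightly
zfs send fast@nightly | zfs recv -F slow/fast-backup
```

Subsequent runs would use incremental sends (`zfs send -i <previous> <current>`) against the last replicated snapshot rather than a full stream each time.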
|
# ? Sep 21, 2021 08:17 |
droll posted:Nope, attached to a router where I'm staying or directly to a laptop (is the latter possible?) I haven't done this but it looks possible! https://nascompares.com/answer/can-i-connect-synology-diskstation-nas-directly-to-a-pc-or-mac/ If you need wireless then I have had mine work when my internet connection is out but my router is still operating, so you can just plug it into any router you're bringing with you and access it that way.
|
|
# ? Sep 21, 2021 19:13 |
|
Twerk from Home posted: If I wanted a fast NAS and was willing to splash for a couple terabytes of all flash, what's a sane way to do that?

If you really want network-attached storage, then I think the main question is: what is your network setup? Unless you have 10Gbit or better, I don't think this is worth considering.
|
# ? Sep 22, 2021 01:01 |
droll posted:I'm possibly going to be traveling around to places that have slow or really low data cap internet connections for about a year. Staying a few weeks/months in various locations before moving on. Instead of a single external USB drive, or 2 where I manually keep them in sync for a backup, I was thinking a small 2 bay NAS might do the trick. Why not just get a raid enclosure?
|
|
# ? Sep 22, 2021 04:08 |
|
tuyop posted:Why not just get a raid enclosure? I thought NAS would make it easier to share content with my fellow travelers and family/friends along the way, rather than everyone having to connect and copy the data off. Is that the best RAID enclosure you recommend?
|
# ? Sep 22, 2021 18:02 |
droll posted: I thought NAS would make it easier to share content with my fellow travelers and family/friends along the way, rather than everyone having to connect and copy the data off. Is that the best RAID enclosure you recommend?

I haven't used a RAID enclosure before; it just seems much more straightforward to plug it into whatever hardware you need and copy/paste, while having redundant disks in case one of them shits the bed. I guess I don't know what workflow you're going to use with the NAS for sharing to someone else. The enclosure would just be a really big USB stick from the user perspective, and I think everyone else will understand what to do with that. Also saves $$$ spent on router, NAS, cables and adapters depending on the laptops you'll encounter.
|
|
# ? Sep 24, 2021 02:22 |
|
very much looking forward to the posts from forums user droll about how he lost all of his data because he thought a multi disk raid enclosure or NAS would be sufficient protection and didn't have a proactive backup scheme
|
# ? Sep 24, 2021 05:34 |
|
Crunchy Black posted:very much looking forward to the posts from forums user droll about how he lost all of his data because he thought a multi disk raid enclosure or NAS would be sufficient protection and didn't have a proactive backup scheme This is a weird post.
|
# ? Sep 24, 2021 05:36 |
|
Crunchy Black posted:very much looking forward to the posts from forums user droll about how he lost all of his data because he thought a multi disk raid enclosure or NAS would be sufficient protection and didn't have a proactive backup scheme Just use a single disk enclosure as the probability of disk failure is lower than two disks. You are also less likely to lose data on a disk if you only have one copy (compared to two copies).
|
# ? Sep 24, 2021 05:47 |
Help, I'm being triggered.
|
|
# ? Sep 24, 2021 09:10 |
|
|
|
I store my data on floppy disks that I keep in a drawer under the microwave. Try to steal that data, hackers!
|
# ? Sep 24, 2021 11:29 |