Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Pardot posted:

What benefit does that provide them over just running the binary directly?

In short, all of the cgroups stuff (per-resource limits for CPU and memory), plus port mapping, and the ability to drop it into whatever container-based orchestration service you're using.
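
Something like this, give or take (image name made up):

code:
# half a CPU core, 256MB of RAM, container port 8080 mapped to host port 80
docker run -d --cpus=0.5 --memory=256m -p 80:8080 myapp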


BlankSystemDaemon
Mar 13, 2009



SolusLunes posted:

Containerization isn't really "installing a full OS", and tbh thinking of it that way's probably counterproductive (even if, in the technical sense, it's true-ish.)

It's a tool that fits my needs for the server, and docker is the easiest way to utilize it, is all. Ease of use can certainly be a significant reason to do something in a specific way.
Containerization is a form of virtualization, and by definition it needs to be isolated from the host, since one of the central ideas of virtualization is that guest state doesn't affect host state in a multi-tenancy environment.

Twerk from Home posted:

For a single self-contained binary, you can use a FROM scratch docker container that has no files in it other than the binary. Go docker containers that don't need CGO can end up under 10MB total.
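
(For reference, a sketch of what such a Dockerfile looks like, assuming a statically-linked binary built with CGO_ENABLED=0:)

code:
# build stage produces a static binary; the final image contains only that file
FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]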

Pardot posted:

What benefit does that provide them over just running the binary directly?
In FreeBSD, this is called service jails, and involves building a static version of a piece of software (typically OpenSSH), letting the sshd and ssh binaries and their respective config files be the only files in the entire jail (all with the schg flag), then configuring it so that the only command that's allowed to be run is ssh, and finally running it at the highest securelevel.

They're typically a lot more difficult to maintain, because you lose the ability to build a userland (there's no compiler or linker) and you don't have access to binary upgrades either, which is why they're only used for things you touch as infrequently as possible.

The big advantage is that you've got basically no way to affect the host state without breaking out of the guest, and accomplishing that is the sort of thing that might even make APTs think twice: there's no pile of binaries whose state you can affect, so you're relying on remote code execution, privilege escalation, and a bunch of other steps just to get access to the jump jail, while the host remains completely unaffected.
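
(Very roughly, and with paths and addresses made up, the jail.conf(5) side of it looks something like this - the static build and flag work happen beforehand:)

code:
# /etc/jail.conf - hypothetical jump jail holding only the static binaries
jump {
    path = "/jails/jump";
    ip4.addr = "192.0.2.10";
    exec.start = "/sshd -f /sshd_config";
    securelevel = 3;    # highest securelevel
}

# beforehand: make everything in the jail immutable
chflags schg /jails/jump/*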

Twerk from Home posted:

In short, all of the cgroups stuff (per-resource limits for CPU and memory), plus port mapping, and the ability to drop it into whatever container-based orchestration service you're using.
Why are cgroups tied to containers? In FreeBSD, rctl(8) can be applied to processes, jails, users, or login classes - singly and in combination.
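
(rctl rules are one-liners, e.g.:)

code:
# cap a jail's memory, a user's process count, and a single process' CPU time
rctl -a jail:webjail:memoryuse:deny=1g
rctl -a user:alice:maxproc:deny=100
rctl -a process:1337:cputime:sigterm=3600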

BlankSystemDaemon fucked around with this message at 16:25 on Sep 15, 2021

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

BlankSystemDaemon posted:

Why are cgroups tied to containers? In FreeBSD, rctl(8) can be applied to processes, jails, users, or login classes - singly and in combination.

cgroups isn't limited to containers, but a lot of the modern tooling for orchestrating processes at scale across a cluster assumes that everything is a container. This is more an issue of walking the common path vs building your own solution from lower-level primitives (cgroups itself).
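
The lower-level primitive really is just a filesystem, for what it's worth - a rough sketch with cgroup v2, assuming it's mounted at /sys/fs/cgroup and you're root:

code:
# create a group, cap it at 20% of one CPU and 256MB, then put this shell in it
mkdir /sys/fs/cgroup/demo
echo "20000 100000" > /sys/fs/cgroup/demo/cpu.max
echo 268435456 > /sys/fs/cgroup/demo/memory.max
echo $$ > /sys/fs/cgroup/demo/cgroup.procs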

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

BlankSystemDaemon posted:

Containerization is a form of virtualization, and by definition it needs to be isolated from the host, since one of the central ideas of virtualization is that guest state doesn't affect host state in a multi-tenancy environment.

You're absolutely right from a technical standpoint. But from a practical standpoint, it's a single program, architected in such a way that if and when it shits the bed, it only shits its own bed, instead of making GBS threads in everyone else's bed in amounts proportionate to the shittiness of the product.

...That was a terrible analogy, but I've found it easier to visualize containerization as "what if running programs on a machine was done in a sane manner"?

Also, there's a desktop OS where absolutely everything is containerized, making it an enormous resource hog, sure, but damned if that doesn't sound interesting to me. Cannot for the life of me remember what it was called, though.

SolusLunes fucked around with this message at 17:05 on Sep 15, 2021

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



SolusLunes posted:

You're absolutely right from a technical standpoint. But from a practical standpoint, it's a single program, architected in such a way that if and when it shits the bed, it only shits its own bed, instead of making GBS threads in everyone else's bed in amounts proportionate to the shittiness of the product.

...That was a terrible analogy, but I've found it easier to visualize containerization as "what if running programs on a machine was done in a sane manner"?

Also, there's a desktop OS where absolutely everything is containerized, making it an enormous resource hog, sure, but damned if that doesn't sound interesting to me. Cannot for the life of me remember what it was called, though.

Fedora Silverblue? The version with an immutable base system that installs everything through flatpaks (except for stuff you install via rpm-ostree, which you should only use as a last resort, since it requires a rebuild of the OS image every time you install or update something).

You can also install stuff via toolbox, which works sort of like a stripped-down VM that can interact with the host system's UI in a more integrated way; that's where you put stuff that isn't supported by flatpak and that you don't want to install via rpm-ostree.

Updating stuff installed via toolbox is a pain in the rear end though, since you essentially have to go into each toolbox one by one (manually, or with a script) and update it using rpm, as the host system can't just tell them all en masse to do an update.
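
The script version looks roughly like this (a sketch, not tested - it assumes `toolbox list -c` prints the container name in the second column, and uses dnf inside each one):

code:
#!/bin/sh
# update every toolbox container, one at a time
for tb in $(toolbox list -c | awk 'NR>1 {print $2}'); do
    toolbox run -c "$tb" sudo dnf update -y
done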

Nitrousoxide fucked around with this message at 17:17 on Sep 15, 2021

IOwnCalculus
Apr 2, 2003





EVIL Gibson posted:

For applications like Sonarr / Radarr, I can say yes to it.

I cannot see why you need to docker a single binary/tool that does not provide a service.

Is it not the same thing as installing a full OS just for 'ls'?

The "service" in this case is a VNC interface to a tool that is either difficult or impossible to use without a GUI. Most of jlesage's containers provide this type of functionality for otherwise-headless systems trying to run things like crashplan, mkvtoolnix, handbrake, or even a loving Firefox install. It just so happens that the container also includes the tool itself.

Also, if the tool is something you might only use once in a while, or once and not again, "docker stop container && docker rm container && docker image prune" means it's gone forever. No stray configs or logs or anything anywhere else in your system.
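
(And if you know it's one-and-done up front, "docker run --rm" does the cleanup for you on exit - e.g., with one of the jlesage images mentioned above, host paths made up:)

code:
# container is deleted automatically when it exits; only the
# bind-mounted output directory survives on the host
docker run --rm -p 5800:5800 -v /tank/output:/output jlesage/handbrake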

BlankSystemDaemon
Mar 13, 2009



SolusLunes posted:

You're absolutely right from a technical standpoint. But from a practical standpoint, it's a single program, architected in such a way that if and when it shits the bed, it only shits its own bed, instead of making GBS threads in everyone else's bed in amounts proportionate to the shittiness of the product.

...That was a terrible analogy, but I've found it easier to visualize containerization as "what if running programs on a machine was done in a sane manner"?

Also, there's a desktop OS where absolutely everything is containerized- making it an enormous resource hog, sure, but damned if that doesn't sound interesting to me. Cannot for the life of me remember what it was called, though.
I think you might be mixing up terminology, insofar as you're using containerization to mean a form of sandboxing - which is a specific form of virtualization, but for a single program.
It's the sort of thing that's better accomplished by kernel-enforced capabilities, which can be further enhanced by enforcing them in hardware, such as ARM Morello.

Are you talking about QubesOS? Because that explicitly uses a hardware-accelerated hypervisor (Xen, if memory serves) to accomplish the isolation, as the isolation found in docker simply isn't good enough.
Docker can be made to have the same level of isolation as jails and bhyve/Xen offer on FreeBSD, but it involves a shitload of configuration and engineering effort to get there.
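
(The usual starting points look something like this - a rough sketch, and it still doesn't get you all the way there:)

code:
# drop all capabilities, forbid privilege escalation, read-only rootfs,
# run as an unprivileged user; the default seccomp profile stays on
docker run --cap-drop=ALL --security-opt no-new-privileges \
    --read-only --user 1000:1000 some-image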

IOwnCalculus posted:

The "service" in this case is a VNC interface to a tool that is either difficult or impossible to use without a GUI. Most of jlesage's containers provide this type of functionality for otherwise-headless systems trying to run things like crashplan, mkvtoolnix, handbrake, or even a loving Firefox install. It just so happens that the container also includes the tool itself.

Also, if the tool is something you might only use once in a while, or once and not again, "docker stop container && docker rm container && docker image prune" means it's gone forever. No stray configs or logs or anything anywhere else in your system.
So what you're saying is that Linux has reinvented X forwarding on a thin client connected to BigIron, like IRIX and Sun were working on back in the 90s? :allears:

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

BlankSystemDaemon posted:

I think you might be mixing up terminology, insofar as you're using containerization to mean a form of sandboxing - which is a specific form of virtualization, but for a single program.
It's the sort of thing that's better accomplished by kernel-enforced capabilities, which can be further enhanced by enforcing them in hardware, such as ARM Morello.

Are you talking about QubesOS? Because that explicitly uses a hardware-accelerated hypervisor (Xen, if memory serves) to accomplish the isolation, as the isolation found in docker simply isn't good enough.
Docker can be made to have the same level of isolation as jails and bhyve/Xen offer on FreeBSD, but it involves a shitload of configuration and engineering effort to get there.

So what you're saying is that Linux has reinvented X forwarding on a thin client connected to BigIron, like IRIX and Sun were working on back in the 90s? :allears:

That's fair re: terminology, I've learned everything I know about containerization by what I've gotten to work in my homelab.

and yeah, QubesOS is it! rip Sun, taken from us too early.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Pardot posted:

What benefit does that provide them over just running the binary directly?

This doesn't apply to my NAS, but I use single binaries all the time at work for apps that rely on things like python. We use "gimme-aws-creds" constantly, but with multiple versions of ansible, so instead of having to install it in all my venvs with my different versions of ansible, I have a docker image with an alias that mounts the needed files from my $HOME directory, and it'll get my token for me without having to worry about the needed dependencies.

It's very nice and the container exits when the binary is done working so the overhead outside of docker desktop (lol) isn't much.

It basically ensures any dependencies or OS-specific things are present for the binary, and keeps everything on your host system neat and clean.

Docker has its faults and the company sucks, but it's incredibly handy even outside of running apps like radarr/sonarr.

Entrypoint is what you're looking for here if you want to do any research.
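
(The shape of it, with the image name and mounted files made up for illustration:)

code:
# ENTRYPOINT in the image is the tool itself, so the alias behaves like
# a local command; the config mounts depend on what the tool needs
alias gimme-aws-creds='docker run --rm -it \
    -v "$HOME/.aws:/root/.aws" \
    -v "$HOME/.okta_aws_login_config:/root/.okta_aws_login_config" \
    mycorp/gimme-aws-creds'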


BlankSystemDaemon posted:

I think you might be mixing up terminology, insofar as you're using containerization to mean a form of sandboxing - which is a specific form of virtualization, but for a single program.
It's the sort of thing that's better accomplished by kernel-enforced capabilities, which can be further enhanced by enforcing them in hardware, such as ARM Morello.

Are you talking about QubesOS? Because that explicitly uses a hardware-accelerated hypervisor (Xen, if memory serves) to accomplish the isolation, as the isolation found in docker simply isn't good enough.
Docker can be made to have the same level of isolation as jails and bhyve/Xen offer on FreeBSD, but it involves a shitload of configuration and engineering effort to get there.

So what you're saying is that Linux has reinvented X forwarding on a thin client connected to BigIron, like IRIX and Sun were working on back in the 90s? :allears:

Can we take this to the linux thread or something? I get you love BSD and you bring a ton of good poo poo in here but cmon, do we really need this benign pissing match?

Docker all the things on your NAS, who gives a poo poo about all of this.

BlankSystemDaemon
Mar 13, 2009



Well then, let me instead mention a cool new upcoming feature in ZFS.

It's part of a much bigger change, but the part I'm highlighting is specifically for including physical paths (and the line below it, for enclosure paths), which I think I might've briefly mentioned earlier, though at the time I couldn't remember if it was just in the planning stages or actually part of a pull request.
So once that lands, if you have SES, you no longer have to use GPT, GEOM, or other labeling techniques on your disks; all of ZFS's administrative commands will list the physical/enclosure path of a given disk, for example in zpool status, zdb, and everywhere else that information appears.

Rooted Vegetable
Jun 1, 2002
FYI CrashPlan users: a policy change has been emailed to you - deleted files are only retained for 90 days, starting mid-October.

BlankSystemDaemon
Mar 13, 2009



What is this "file deletion" that you speak of? I'm not sure I understand.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice
The Consumer NAS/Storage Megathread: What is this "File Deletion" You Speak of?

Rooted Vegetable
Jun 1, 2002
It's that thing that happens when you don't want to keep a file anymore. Not WORM... WTRM?

BlankSystemDaemon
Mar 13, 2009



Does not compute.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
File deletion? You mean deduplication, right?

Yaoi Gagarin
Feb 20, 2014

Legends tell of a forbidden spell, "unlink(2)", but no one has used such power in millennia...

Rooted Vegetable
Jun 1, 2002
Why bother with legends, just look back in the logs retained for all time.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
Is there any compelling reason to update a Synology from DSM 6 to DSM 7? I'm not sure what will break since my NAS is pretty much just file storage and a Plex server now.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

tuyop posted:

Is there any compelling reason to update a Synology from DSM 6 to DSM 7? I'm not sure what will break since my NAS is pretty much just file storage and a Plex server now.

6 won't be EOL until sometime in 2023

TVGM
Mar 17, 2005

"It is not moral, it is not acceptable, and it is not sustainable that the top one-tenth of 1 percent now owns almost as much wealth as the bottom 90 percent"

Yam Slacker

tuyop posted:

Is there any compelling reason to update a Synology from DSM 6 to DSM 7? I'm not sure what will break since my NAS is pretty much just file storage and a Plex server now.

They changed the way Plex works. It'll prompt you to change the service account. You might need to change the permissions on the folders Plex accesses as well.

droll
Jan 9, 2020

by Azathoth
I'm possibly going to be traveling around to places that have slow or really low-data-cap internet connections for about a year, staying a few weeks or months in various locations before moving on. Instead of a single external USB drive, or two that I manually keep in sync for a backup, I was thinking a small 2-bay NAS might do the trick.

Is this a decent idea? And would these be OK parts?
1x https://www.amazon.com/Synology-Bay-DiskStation-DS720-Diskless/dp/B087Z6SNC1
2x https://www.amazon.com/Seagate-IronWolf-RAID-Internal-Drive/dp/B07H7CKYGT

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Are you going to be moving the NAS while it is on? Like in a car or RV?

droll
Jan 9, 2020

by Azathoth
Nope, attached to a router where I'm staying or directly to a laptop (is the latter possible?)

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
If I wanted a fast NAS and was willing to splash for a couple terabytes of all-flash, what's a sane way to do that?

Is ZFS RAIDZ going to be a huge bottleneck for NVMe disks? Do SSDs fail so rarely that people just span them together with LVM or run RAID0? Are SATA disks still enough cheaper to be worth doing 2.5" SATA SSDs instead of NVMe?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Twerk from Home posted:

If I wanted a fast NAS and was willing to splash for a couple terabytes of all-flash, what's a sane way to do that?

Is ZFS RAIDZ going to be a huge bottleneck for NVMe disks? Do SSDs fail so rarely that people just span them together with LVM or run RAID0? Are SATA disks still enough cheaper to be worth doing 2.5" SATA SSDs instead of NVMe?

what's your chassis going to look like, or are you flexible / haven't reached that decision point yet?

it's all just a balancing act of cost, performance, density, and durability. The main decision points are SATA vs NVMe, TLC vs QLC, and DRAM vs DRAM-less. You could, for example, buy a 2TB WD Blue 3D 2.5" with TLC and DRAM for about $175, but that same money would also get you an entry-level 2TB QLC drive without DRAM. A high-end (but not super-premium) HP EX950 2TB would be about $270 for the same 2TB, but you get TLC and DRAM again.

2.5" is generally cheaper just because it can be physically larger, there's only so many chips you can fit onto a 2280 stick, and TLC 2280 sticks really top out at 2TB for the most part (there are some really high-end 4TB TLC sticks but they are definitely premium-priced). In some cases this lets you go from what would have been QLC on a M.2 to TLC on a SATA but NVMe is probably still faster even with QLC. But you can find 4TB 2.5" drives with TLC a bit more easily (still expensive but they're out there).

https://docs.google.com/spreadsheets/d/1B27_j9NDPU3cNlj2HKcrfpJKHkOf-Oi1DbuuQva2gT4/edit?usp=sharing

https://pcpartpicker.com/products/internal-hard-drive/#f=3&t=0&A=1800000000000,18000000000000&sort=ppgb

https://pcpartpicker.com/products/internal-hard-drive/#f=122080&t=0&A=1800000000000,18000000000000&sort=ppgb&D=1

Depending on your board/chassis/goals you can get there with either 2.5" or M.2 - there are HBAs like the HighPoint lineup that let you put 4x or 8x M.2 NVMe sticks on a single PCIe slot for about $300-400 a card (iirc), so if you really want to stack in the capacity and stay NVMe, it's still possible. On the flip side, you'll need more places to mount 2.5" drives, so make sure your chassis can handle that too. There are a lot of off-the-shelf offerings there, but you'll have to watch the topology, because you may not be able to max out all the drives at once anyway.

https://www.highpoint-tech.com/USA_new/series-r1000-fan-overview.html

one way to tweak performance a bit on SSDs would be to use an Optane drive for the ZFS SLOG. The latency there is way lower than normal flash drives, so it should really help avoid slowdowns from the intent log. You don't need tons of capacity - even the lovely 32GB or 64GB drives are plenty for a ZFS SLOG. But it all depends on how much writing you intend to do; even without a SLOG, NVMe is going to be very, very fast on writes, and writes are the performance bottleneck in ZFS - reads will happen at whatever speed the array allows.

QLC is also known to have problems with big sustained writes when it's full, but again, once the data is written it's a normal SSD in terms of read performance. QLC drives also have significantly worse write endurance, but if this is just your plex or steam library, it may not be churning that much. So how much that matters depends, again, on how much you care about writes vs reads.

Paul MaudDib fucked around with this message at 00:36 on Sep 21, 2021

Internet Explorer
Jun 1, 2005





Little late on the name change, but the thread was definitely due. Thanks y'all!

BlankSystemDaemon
Mar 13, 2009



I'm not sure I've ever been responsible for the name of a thread before.

Paul MaudDib posted:

one way to tweak performance a bit on SSDs would be to use an Optane drive for the ZFS SLOG. The latency there is way lower than normal flash drives, so it should really help avoid slowdowns from the intent log. You don't need tons of capacity - even the lovely 32GB or 64GB drives are plenty for a ZFS SLOG. But it all depends on how much writing you intend to do; even without a SLOG, NVMe is going to be very, very fast on writes, and writes are the performance bottleneck in ZFS - reads will happen at whatever speed the array allows.
The Separate Intent Log only records synchronous writes - so unless you're dealing with databases, a short list of other userspace programs doing (A|F|O)_SYNC, or doing administrative tasks, there is absolutely no need to have one.
You also need it to be mirrored, because if the SLOG drive disappears, so does any data that was on it prior to being flushed to disk.

The ZIL exists to replay lost transaction groups in case of a power outage, crash, or failure modes that aren't catastrophic enough to take down the entire array. While the system is in normal operational mode, the ZIL isn't used at all.
You might be thinking of the dirty data buffer that ZFS has, which is a 5 second/1GB buffer (or until an administrative command is issued, since that triggers an automatic flush) where data is stored until it gets flushed to disk as a transaction group.
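
(If you do need one, adding a mirrored SLOG is a one-liner - device names made up:)

code:
# mirrored log vdev; synchronous writes land here instead of the main vdevs
zpool add tank log mirror /dev/nvd0 /dev/nvd1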

Jonny 290
May 5, 2005



[ASK] me about OS/2 Warp

droll posted:

I'm possibly going to be traveling around to places that have slow or really low-data-cap internet connections for about a year, staying a few weeks or months in various locations before moving on. Instead of a single external USB drive, or two that I manually keep in sync for a backup, I was thinking a small 2-bay NAS might do the trick.

Is this a decent idea? And would these be OK parts?
1x https://www.amazon.com/Synology-Bay-DiskStation-DS720-Diskless/dp/B087Z6SNC1
2x https://www.amazon.com/Seagate-IronWolf-RAID-Internal-Drive/dp/B07H7CKYGT

I ran a Syno 4-bay off of literal RV deep cycle 12v batteries, like, wired straight up to the batteries with a cut-up wall wart cable and a fuse, for two years. They're remarkably resilient and I had like 700 days uptime at one point. This was an older DS410+ but they are very, very tolerant of some garbage lovely DC power.

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

I'm not sure I've ever been responsible for the name of a thread before.

The Separate Intent Log only records synchronous writes - so unless you're dealing with databases, a short list of other userspace programs doing (A|F|O)_SYNC, or doing administrative tasks, there is absolutely no need to have one.
You also need it to be mirrored, because if the SLOG drive disappears, so does any data that was on it prior to being flushed to disk.

The ZIL exists to replay lost transaction groups in case of a power outage, crash, or failure modes that aren't catastrophic enough to take down the entire array. While the system is in normal operational mode, the ZIL isn't used at all.
You might be thinking of the dirty data buffer that ZFS has, which is a 5 second/1GB buffer (or until an administrative command is issued, since that triggers an automatic flush) where data is stored until it gets flushed to disk as a transaction group.

AFAIK the ZIL always exists; it's just that if no SLOG device is provided, it's stored on the pool with everything else.


Twerk from Home posted:

If I wanted a fast NAS and was willing to splash for a couple terabytes of all-flash, what's a sane way to do that?

Is ZFS RAIDZ going to be a huge bottleneck for NVMe disks? Do SSDs fail so rarely that people just span them together with LVM or run RAID0? Are SATA disks still enough cheaper to be worth doing 2.5" SATA SSDs instead of NVMe?

Whether you want redundancy, or just a striped/spanned config, really just comes down to: how painful is restoring in your backup scheme?

Redundant option: Since you want fast, and only "a couple terabytes", IMO a striped ZFS mirror with 4x 1TB NVMe drives is the way to go. Meaning, two mirror vdevs, each with two of the drives. And you've got no need for a SLOG then, even if running a high-speed database, because your pool is already fast.

Non-redundant option: Buy 2x 1TB NVMe drives, stripe them in ZFS, and just rely on restoring from backup. In this scenario your first-tier "backup" can even just be 2x spinning-rust drives in a mirror within the same machine, on a second pool you never use directly. Then a cron job backs the SSD pool up to the HDD pool periodically using ZFS send/recv.

Obviously in either scenario you have more backups like an offsite one or w/e depending on how valuable this data is.
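
(A sketch of both layouts plus the send/recv step, all names made up:)

code:
# redundant: two 2-way mirror vdevs, striped
zpool create fast mirror /dev/nvd0 /dev/nvd1 mirror /dev/nvd2 /dev/nvd3

# non-redundant: plain stripe, backed up to a spinning-rust pool on a cron job
zpool create fast /dev/nvd0 /dev/nvd1
zfs snapshot -r fast@nightly
zfs send -R fast@nightly | zfs recv -F backup/fast   # incrementals: zfs send -i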

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

droll posted:

Nope, attached to a router where I'm staying or directly to a laptop (is the latter possible?)

I haven't done this but it looks possible! https://nascompares.com/answer/can-i-connect-synology-diskstation-nas-directly-to-a-pc-or-mac/

If you need wireless: I've had mine keep working when my internet connection is out but my router is still up, so you can just plug it into any router you're bringing with you and access it that way.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Twerk from Home posted:

If I wanted a fast NAS and was willing to splash for a couple terabytes of all-flash, what's a sane way to do that?

Is ZFS RAIDZ going to be a huge bottleneck for NVMe disks? Do SSDs fail so rarely that people just span them together with LVM or run RAID0? Are SATA disks still enough cheaper to be worth doing 2.5" SATA SSDs instead of NVMe?

If you really want network-attached storage, then I think the main question is: what is your network setup? Unless you have 10Gbit or better, I don't think this is worth considering.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

droll posted:

I'm possibly going to be traveling around to places that have slow or really low-data-cap internet connections for about a year, staying a few weeks or months in various locations before moving on. Instead of a single external USB drive, or two that I manually keep in sync for a backup, I was thinking a small 2-bay NAS might do the trick.

Is this a decent idea? And would these be OK parts?
1x https://www.amazon.com/Synology-Bay-DiskStation-DS720-Diskless/dp/B087Z6SNC1
2x https://www.amazon.com/Seagate-IronWolf-RAID-Internal-Drive/dp/B07H7CKYGT

Why not just get a RAID enclosure?

droll
Jan 9, 2020

by Azathoth

tuyop posted:

Why not just get a RAID enclosure?

I thought a NAS would make it easier to share content with my fellow travelers and family/friends along the way, rather than everyone having to connect and copy the data off. Is there a particular RAID enclosure you'd recommend?

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

droll posted:

I thought NAS would make it easier to share content with my fellow travelers and family/friends along the way, rather than everyone having to connect and copy the data off. Is that the best RAID enclosure you recommend?

I haven't used a RAID enclosure before; it just seems much more straightforward to plug it into whatever hardware you need and copy/paste, while having redundant disks in case one of them shits the bed.

I guess I don't know what workflow you're going to use with the NAS for sharing with someone else.

The enclosure would just be a really big USB stick from the user's perspective, and I think everyone else will understand what to do with that. It also saves $$$ on a router, NAS, cables, and adapters, depending on the laptops you'll encounter.

Crunchy Black
Oct 24, 2017

by Athanatos
very much looking forward to the posts from forums user droll about how he lost all of his data because he thought a multi disk raid enclosure or NAS would be sufficient protection and didn't have a proactive backup scheme

droll
Jan 9, 2020

by Azathoth

Crunchy Black posted:

very much looking forward to the posts from forums user droll about how he lost all of his data because he thought a multi disk raid enclosure or NAS would be sufficient protection and didn't have a proactive backup scheme

This is a weird post.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Crunchy Black posted:

very much looking forward to the posts from forums user droll about how he lost all of his data because he thought a multi disk raid enclosure or NAS would be sufficient protection and didn't have a proactive backup scheme

Just use a single disk enclosure as the probability of disk failure is lower than two disks. You are also less likely to lose data on a disk if you only have one copy (compared to two copies).

BlankSystemDaemon
Mar 13, 2009



Help, I'm being triggered.


Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
I store my data on floppy disks that I keep in a drawer under the microwave. Try to steal that data, hackers!
