Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

You can absolutely encrypt swap on FreeBSD, but I know basically nothing about TrueNAS.
That's not quite how it works. ZFS per-dataset encryption CANNOT be used for FDE until the boot loader has been modified to support decrypting the relevant algorithms.
At least FreeBSD's standard loader (which TrueNAS uses, I believe) supports reading and working with ZFS (including boot environments) just fine, and it supports reading GELI-encrypted pools, but I don't think the code for reading per-dataset-encrypted filesystems has landed yet. Someone is working on it, though.

I would imagine L2ARC and SLOG devices could also be sources of information leak, but those can be encrypted with GELI too, just like swap devices can - at least on FreeBSD, though I assume you can do the same with TrueNAS on the command-line.
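For the curious, encrypted swap is a one-line change on FreeBSD: append .eli to the swap device in /etc/fstab and it gets set up as a geli(8) onetime provider at boot (the partition name below is just an example):

code:
# /etc/fstab - the .eli suffix gives geli(8) onetime swap encryption:
# a random key is generated at every boot and never stored anywhere
/dev/ada0p3.eli    none    swap    sw    0    0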

EDIT:

Don't. Swap is good if for no other reason than that not having a place to put kernel dumps is bad. Thankfully, FreeBSD supports encrypted kernel dumps. And if you really don't want swap, you can always netdump on FreeBSD.

In the TrueNAS case I believe dataset encryption can be a reasonable alternative to FDE, specifically for pool disks. Since TrueNAS only uses those disks for ZFS datasets, as long as you haven't manually put anything on a separate partition, everything on a given disk is encrypted. Someone who possesses the drives can distinguish zero and nonzero blocks, but that's all.

E: it says as much in the docs: https://www.truenas.com/docs/hub/initial-setup/storage/encryption/ (scroll to the picture of the warning dialog)
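(For reference, creating one of those encrypted datasets from the shell is roughly the below; pool and dataset names are just examples:)

code:
# create a natively encrypted dataset, prompting for a passphrase
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secure
# after an export or reboot, the key has to be loaded before it can be mounted
zfs load-key tank/secure
zfs mount tank/secure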

E2: swap is also good because it lets the OS trade rarely used memory pages for frequently used pages from the file cache. Though idk if FreeBSD does that.

Yaoi Gagarin fucked around with this message at 18:08 on Jan 1, 2021


Warbird
May 23, 2012

America's Favorite Dumbass

Lookit this fancy boy who doesn’t take a 12 gauge to their drives once they’re done.



They really should have that though.

IOwnCalculus
Apr 2, 2003





I mean, would using ATA Secure Erase not accomplish the same thing without waiting for a dban/nwipe run through the drive?
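(For anyone who hasn't done one: the whole dance with hdparm on a Linux box is roughly the below. The device name and password are placeholders, and the drive can't be in the "frozen" state:)

code:
# confirm the drive supports the ATA security feature set and isn't frozen
hdparm -I /dev/sdX | grep -A8 Security
# set a throwaway user password, then kick off the erase
# (use --security-erase-enhanced instead if the drive supports it)
hdparm --user-master u --security-set-pass hunter2 /dev/sdX
hdparm --user-master u --security-erase hunter2 /dev/sdX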

Also:


Warbird posted:

Lookit this fancy boy who doesn’t take a 12 gauge to their drives once they’re done.

Pretty much this. When I'm done with a drive it's because the drive is loving toast. I disassemble it for the magnets and chuck the rest without any worry that anyone is going to read anything from the remnants.

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice
The swap thing is just me being :tinfoil:.

More realistically, encrypted data and OS volumes with an unencrypted boot volume (like with FileVault 2) would be fine, but as far as I know Synology doesn't support that scenario either and won't in the near future.

Warbird
May 23, 2012

America's Favorite Dumbass

IOwnCalculus posted:

I mean, would using ATA Secure Erase not accomplish the same thing without waiting for a dban/nwipe run through the drive?

Would SE even work if the drive is FUBAR? Can’t zero out what you can’t interface with and whatnot. That said popping open one for magnets sounds fun. If someone is going to try and extract my data at that point they’re not going to be stopped by most means. OP does have legitimate concerns though.

H110Hawk
Dec 28, 2006
Swap is for cowards. Especially on a dedicated NAS device.

IOwnCalculus
Apr 2, 2003





Warbird posted:

Would SE even work if the drive is FUBAR? Can’t zero out what you can’t interface with and whatnot. That said popping open one for magnets sounds fun. If someone is going to try and extract my data at that point they’re not going to be stopped by most means. OP does have legitimate concerns though.

ATA Secure Erase doesn't actually physically zero anything, it just changes the encryption key on the drive - right?

At any rate, if you're that paranoid about data recovery on a drive you dispose of, physical destruction is far and away the best. If someone has the capability to look at a trash bag with the platters from four different hard drives mixed together and figure out how to read anything useful off of what was one-of-many vdevs in my zpool... they aren't going to waste that capability on me.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

E2: swap is also good because it lets the OS trade rarely used memory pages for frequently used pages from the file cache. Though idk if FreeBSD does that.
That's the point of any paging, and it has been a thing since before any of the modern OSes or their ancestors existed (it was first implemented in 1963).

IOwnCalculus posted:

I mean, would using ATA Secure Erase not accomplish the same thing without waiting for a dban/nwipe run through the drive?

Warbird posted:

Would SE even work if the drive is FUBAR? Can’t zero out what you can’t interface with and whatnot. That said popping open one for magnets sounds fun. If someone is going to try and extract my data at that point they’re not going to be stopped by most means. OP does have legitimate concerns though.
Decommissioning hard drives shouldn't involve the idea of DBANing, secure erasing, or anything else, because it turns out that with the equivalent of a scanning electron microscope but for electromagnetism, you can read data off disks that have had the data overwritten more than 7 times, even if it's entirely random data that's been written.
Physical destruction is the only method recommended by NIST and all other such agencies, and disassembling drives is always fun because it nets you magnets as you've both mentioned, but also lets you apply hammer. If you gotta do it at scale, there are companies who'll bring a mobile shredder to your door so you can have your ears melted off your face by the racket it makes.

Warbird
May 23, 2012

America's Favorite Dumbass

So bird, buck, or slugs then? Only cowards use drills.

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice

BlankSystemDaemon posted:

Decommissioning hard drives shouldn't involve the idea of DBANing, secure erasing, or anything else, because it turns out that with the equivalent of a scanning electron microscope but for electromagnetism, you can read data off disks that have had the data overwritten more than 7 times, even if it's entirely random data that's been written.

Crazy. News to me. All the more reason to start with everything encrypted then.


Warbird posted:

So bird, buck, or slugs then? Only cowards use drills.

What if you're using a Hole Hawg drill though?

H110Hawk
Dec 28, 2006

BlankSystemDaemon posted:

Decommissioning hard drives shouldn't involve the idea of DBANing, secure erasing, or anything else, because it turns out that with the equivalent of a scanning electron microscope but for electromagnetism, you can read data off disks that have had the data overwritten more than 7 times, even if it's entirely random data that's been written.

This is not a thing for a home user. If you're storing state secrets they should be encrypted, end of story. If you're worried about identity theft then a single pass of zeros is all you need. SSDs provide a mechanism for it as well. Physical destruction is a last resort. If you want a quick way to recycle it, hit the circuit board with the claw end of a hammer and be done with it.
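(A single pass really is just the below, with the device name as a placeholder; on SSDs, lean on the drive's built-in mechanism rather than writing zeros:)

code:
# one pass of zeros over the whole disk - plenty for identity-theft-level threats
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# on an SSD, discard every block instead of writing zeros
blkdiscard /dev/sdX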

insta
Jan 28, 2009
Yeah, that just sounds like a thing drive manufacturers would exploit for higher density storage.

BlankSystemDaemon
Mar 13, 2009



Warbird posted:

So bird, buck, or slugs then? Only cowards use drills.
I'm pretty confident that my post history in this thread includes a YouTube video of someone presenting ways to automate data destruction in the datacenter with remote triggers, testing out electromagnetic, explosive, and other methods.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

BlankSystemDaemon posted:

I'm pretty confident that my post history in this thread includes a YouTube video of someone presenting ways to automate data destruction in the datacenter with remote triggers, testing out electromagnetic, explosive, and other methods.
DEFCON 19. Shane Lawson and Deviant Ollam: https://youtu.be/1M73USsXHdc
DEFCON 23. ZoZ and crew: https://youtu.be/-bpX8YvNg6Y

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

That's the point of any paging, and it has been a thing since before any of the modern OSes or their ancestors existed (it was first implemented in 1963).


No, that is not the point of paging, it's a particular optimization only possible with paging + swap. Without swap any page in the working set, no matter how stale, must be backed by physical memory. If I allocate a 1GB buffer, write a byte to each page to force it to be allocated, and then never touch that buffer again while my program does other stuff for an hour, that entire time I'm wasting physical memory capacity. With swap the OS has somewhere to stash these rarely used pages. Because of this, swap can provide a benefit even when the working set is smaller than physical memory capacity: pushing rarely used pages to disk leaves more space for the file cache.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

No, that is not the point of paging, it's a particular optimization only possible with paging + swap. Without swap any page in the working set, no matter how stale, must be backed by physical memory. If I allocate a 1GB buffer, write a byte to each page to force it to be allocated, and then never touch that buffer again while my program does other stuff for an hour, that entire time I'm wasting physical memory capacity. With swap the OS has somewhere to stash these rarely used pages. Because of this, swap can provide a benefit even when the working set is smaller than physical memory capacity: pushing rarely used pages to disk leaves more space for the file cache.
Where are you going to page to, if not a swap device?
Are you talking about the user being able to prioritize things? If so, FreeBSD accomplishes this with the sysctls in the vm.swap_idle_* OID and vm.disable_swapspace_pageouts.
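Those are plain sysctls, for the curious:

code:
# inspect the idle-swapout knobs and the pageout kill switch
sysctl vm.swap_idle_enabled vm.swap_idle_threshold1 vm.swap_idle_threshold2
sysctl vm.disable_swapspace_pageouts
# e.g. turn on swapping out of long-idle processes
sysctl vm.swap_idle_enabled=1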

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

Where are you going to page to, if not a swap device?
Are you talking about the user being able to prioritize things? If so, FreeBSD accomplishes this with the sysctls in the vm.swap_idle_* OID and vm.disable_swapspace_pageouts.

Paging's most important purpose is to allow translation between virtual and physical addresses

H110Hawk
Dec 28, 2006
It's edging into :goonsay: levels of under-the-hood technicalities. When you page in/out you could be moving to various different devices, one of which is swap. I believe this is also how you make new copies of pages in your local NUMA node, and I assume it's how Intel handles their persistent memory DIMMs, confusingly named Optane.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

Paging's most important purpose is to allow translation between virtual and physical addresses
No, paging is the act of moving resident memory around - translation is handled by the page table/map (and hardware MMU, if available), which is another part of the VM subsystem that paging is also a part of.

In FreeBSD nomenclature (because that's what I know) it's the difference between src/sys/vm/vm_page.c which is the resident memory management module (and src/sys/vm/vm_pageout.c which handles swapping to disk, specifically) and, as an example, src/sys/amd64/amd64/pmap.c which is the pagemap for amd64.
The reason they live in different parts of the tree is that FreeBSD uses a machine-dependent/machine-independent code separation, which it inherited from BSD, to try to keep code duplication to a minimum.

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

No, paging is the act of moving resident memory around - translation is handled by the page table/map (and hardware MMU, if available), which is another part of the VM subsystem that paging is also a part of.

In FreeBSD nomenclature (because that's what I know) it's the difference between src/sys/vm/vm_page.c which is the resident memory management module (and src/sys/vm/vm_pageout.c which handles swapping to disk, specifically) and, as an example, src/sys/amd64/amd64/pmap.c which is the pagemap for amd64.
The reason they live in different parts of the tree is that FreeBSD uses a machine-dependent/machine-independent code separation, which it inherited from BSD, to try to keep code duplication to a minimum.

You know what, you're right. Paging does specifically refer to swapping. My bad :eng99:.

But even in the absence of a swap device the OS can map in parts of files to the fs cache, or to any VMA a process requests it to. My point was swap is good because it gives the OS the option to use physical memory in a more useful way, even when the working set does not exceed physical memory capacity.

ArcticZombie
Sep 15, 2010
You're disagreeing over a difference in meaning of paging. One of you is referring to the concept of memory pages, the other to the concept of moving a memory page (or pages) to secondary storage.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

You know what, you're right. Paging does specifically refer to swapping. My bad :eng99:.

But even in the absence of a swap device the OS can map in parts of files to the fs cache, or to any VMA a process requests it to. My point was swap is good because it gives the OS the option to use physical memory in a more useful way, even when the working set does not exceed physical memory capacity.
I don't know how Linux handles this because it doesn't have a unified buffer cache, but filesystem caching is part of that in FreeBSD, so it's subject to the exact same paging as anything else (depending on runtime configuration).
The exception is ZFS, but one of the smartest people I know, Jeff Roberson, may be working on integrating ARC and ZIL into the FreeBSD VM subsystem, which would give it a unique advantage over any other implementation of ZFS - it's not available in Solaris, and likely can't be done in Linux for both technical and political reasons, i.e. no unified buffer cache, and Linus would quite possibly lose a few marbles and/or hair over it.

lampey
Mar 27, 2012

BlankSystemDaemon posted:

because it turns out that with the equivalent of a scanning electron microscope but for electromagnetism, you can read data off disks that have had the data overwritten more than 7 times, even if it's entirely random data that's been written.

Physical destruction is the only method recommended by NIST and all other such agencies,

Neither of these is true. NIST 800-88 allows for many other options besides physical destruction, including a single pass of all zeroes. The idea that data can be recovered from a wiped drive through magnetic force microscopy is based on a 1996 research paper by Peter Gutmann. It claimed that, in theory, after a single pass of all zeros it is possible to recover a single bit of data 95% of the time. The paper was never corroborated, and in 2001 the author partially retracted it, conceding that 35 passes is never necessary. Later papers showed that in practice, for 2000-era high-density drives, the chance of recovering even a single bit after a zero wipe is no better than 50%.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



With Google Photos dropping their free uploading in a few months I'm starting to look at building a NAS. I have a bunch of shingled 8 TB hard drives from a dumb foray into crypto a few years ago. Obviously shingled hard drives have abysmal write speeds but have acceptable read speeds. If I were to shuck these drives and toss them in a FreeNAS setup using an old laptop of mine to drive it, would it work? This would mostly be for archival of my photos and maybe to run a single 4K-stream-capable Plex server.

Obviously it being capable of 4K would depend on the power of the laptop. But it is able to drive a 4K video locally right now in windows so I would imagine it might work.

Alternatively, could I just connect the drives via USB 3 instead of shucking them? That would simplify the set up at least.

It’s really not a huge deal for me if it’s not the quickest thing in the world. I am perfectly willing to accept a five second to 10 second buffering time to start a video or back up a picture. I would set up that stuff to run on a schedule when I’m not using my phone anyway so it’s not a big deal if it takes longer than ideal while it’s charging on my nightstand at night.

Nitrousoxide fucked around with this message at 04:24 on Jan 3, 2021

movax
Aug 30, 2008

OK, since I got some solid unblocked time for the first time in gently caress, 2 years or so, I started bringing up ESXi and my two project NAS boxes.

The first box, simple question really: I intend to ship it off to mini-PC colo land to act as a seed box / my own personal offsite storage solution. ASRock DeskMini box, two 7.68 TB SATA drives passed through to TrueNAS (via passing through the entire AHCI controller, not RDM, so no worries there). IIRC, ZFS / TrueNAS does not have a "JBOD" concept, right? So my only option for one contiguous big pool is RAID 0 / striping it? It will be a secondary/tertiary backup location, so if the stripe died, my other poo poo would have to all die at the same time. Some part of my brain keeps saying "doing a stripe for your backup location is a dumbass idea" so I figure I'd float it here for you all to tell me that it is, in fact, a dumb idea. In which case, I will probably just suck it up and have two pools, each ~7 TB in size. Or, to really navel-gaze, see how much data I may want to end up putting there / seeding, and just mirror. No L2ARC or SLOG planned here.
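(For what it's worth, the two layouts I'm weighing boil down to the difference below; pool and device names are just placeholders:)

code:
# stripe / "JBOD-ish": all the space, but losing either disk loses the pool
zpool create backup da0 da1
# mirror: half the space, survives a single disk failure
zpool create backup mirror da0 da1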

Second question, for the NAS build I have planned for home, I need a sanity check as well. Hardware wise, I've got the following drives all piled into it:

* 8x Exos X16 via SAS3008
* 2x 860 EVO SATA SSD (Intel PCH)
* 2x WD Blue SATA SSD (intel PCH)
* 1x Samsung PM1725A 7 TB NVMe (via x8 PCIe)

Storage pool-wise, I think I will do:

* RAID-Z2 of the Exos as my main spinny storage pile
* Stripe of mirrors for the SATA SSDs for ~3 TB of redundant storage (I just have the drives already, and the case has room, so... why not?)
* Maybe whatever spare / leftover storage as a non-RAID'd scratch storage option on the PM1725.

I think I understand the SLOG better now and why it exists; the spinning pool will basically only be offered via SMB / I plan for only async writes to it, so I don't think it'll benefit from having a SLOG. For the all-flash pool, they're all SATA, so that's going to be a limiting factor in performance anyway, but I do want to expose that pool via iSCSI for ESXi. Since this is a homelab, I was thinking of just partitioning off 20 GB or so (overkill) of the PM1725A and giving it as a SLOG to the all-flash pool, and then offering up the rest as scratch storage.

L2ARC, not going to touch for either pool; I'll reserve 32 GB for TrueNAS and call it good. IIRC, SLOGs and L2ARCs can be added/removed from pools more or less at will, right? (Assuming no active writes, etc etc).
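(From what I've read, adding and removing them later really is a one-liner each way; pool and partition names below are placeholders:)

code:
# attach a SLOG and an L2ARC device to an existing pool
zpool add flash log nvd0p2
zpool add flash cache nvd0p3
# and pull them back out again later
zpool remove flash nvd0p2
zpool remove flash nvd0p3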

The last thing, which I realized whilst configuring is... since I'm doing an ESXi setup, I need a datastore to host VMs on. While ESXi can run off the USB stick, I'll need TrueNAS at a minimum to live somewhere, and then make sure that the VMs boot in the right order for ESXi to use the iSCSI share(s) exported by TrueNAS to turn on the other VMs (kind of a bootstrap scenario). Since I'm going to give the entire PCH AHCI controller to TrueNAS, ESXi will have no disks to use for datastore.

So, I have one PCIe x1 3.0 slot left on my mobo... I can just chuck a P31 or some NVMe drive onto it to use as a datastore + swap. Or, I can dig up some HW RAID controller for RAID 1 of NVMe drives. If I keep the other VMs on the iSCSI pool exported by the TrueNAS VM, then I don't really need much storage here. And, if I don't bother RAIDing the NVMe drive, with TrueNAS's + ESXi config backed up elsewhere, it's just some minor downtime for my home machine while I replace it.

I see now why real enterprise environments w/ SANs are used to solve the ESXi datastore need..., or you have a hardware RAID controller in the ESXi host that presents storage for usage.

movax
Aug 30, 2008

Nitrousoxide posted:

With Google Photos dropping their free uploading in a few months I'm starting to look at building a NAS. I have a bunch of shingled 8 TB hard drives from a dumb foray into crypto a few years ago. Obviously shingled hard drives have abysmal write speeds but have acceptable read speeds. If I were to shuck these drives and toss them in a FreeNAS setup using an old laptop of mine to drive it, would it work? This would mostly be for archival of my photos and maybe to run a single 4K-stream-capable Plex server.

Obviously it being capable of 4K would depend on the power of the laptop. But it is able to drive a 4K video locally right now in windows so I would imagine it might work.

Alternatively, could I just connect the drives via USB 3 instead of shucking them? That would simplify the set up at least.

It’s really not a huge deal for me if it’s not the quickest thing in the world. I am perfectly willing to accept a five second to 10 second buffering time to start a video or back up a picture. I would set up that stuff to run on a schedule when I’m not using my phone anyway so it’s not a big deal if it takes longer than ideal while it’s charging on my nightstand at night.

I would try to avoid USB entirely, even if it's a low-stakes application such as the one you described. Does that old laptop have Thunderbolt? At least then you could run out to some kind of PCIe controller externally and mostly use your existing hardware.

Warbird
May 23, 2012

America's Favorite Dumbass

Without getting too into the weeds, what sort of crypto scheme were you running that left you with umpteen multiTB drives lying around? GPUs I could understand.

As others have said, try for SATA or the like over USB if possible for better speeds, but it wouldn't be the end of the world if you didn't.

movax
Aug 30, 2008

Warbird posted:

Without getting too into the weeds, what sort of crypto scheme were you running that left you with umpteen multiTB drives lying around? GPUs I could understand.

As others have said, try for SATA or the like over USB if possible for better speeds, but it wouldn't be the end of the world if you didn't.

One of my co-workers got into Chia, might be that?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
There were a few proof of capacity coins floating about for a time, like Chia or Burstcoin. They never caught on and are all basically worthless.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

Nitrousoxide posted:

With Google Photos dropping their free uploading in a few months I'm starting to look at building a NAS. I have a bunch of shingled 8 TB hard drives from a dumb foray into crypto a few years ago. Obviously shingled hard drives have abysmal write speeds but have acceptable read speeds. If I were to shuck these drives and toss them in a FreeNAS setup using an old laptop of mine to drive it, would it work? This would mostly be for archival of my photos and maybe to run a single 4K-stream-capable Plex server.

Obviously it being capable of 4K would depend on the power of the laptop. But it is able to drive a 4K video locally right now in windows so I would imagine it might work.

Alternatively, could I just connect the drives via USB 3 instead of shucking them? That would simplify the set up at least.

It’s really not a huge deal for me if it’s not the quickest thing in the world. I am perfectly willing to accept a five second to 10 second buffering time to start a video or back up a picture. I would set up that stuff to run on a schedule when I’m not using my phone anyway so it’s not a big deal if it takes longer than ideal while it’s charging on my nightstand at night.

My understanding (which is admittedly limited) is that SMR drives should not be used for NAS at all. It isn't just the slower performance; they are actively less reliable in a RAID config. The data shuffling that comes with shingling butts up directly against the distributed data writing of a RAID, especially in ZFS, which you'd get in Free/TrueNAS. It basically causes the drives to be thrashing around all the time, shortening their lifetimes that much more (and these are already old/used drives). So when one of them dies and you put in a new replacement drive, your pool will have to resilver the array to write all the necessary data to the new drive, which will thrash all your other drives as it reads through each one 100%. That's already a slow and vulnerable process, and because of shingling it can take days instead of hours.

If this is just a sandbox/proof of concept thing to play around with then sure, but I wouldn't start storing anything on an all-shingled RAID that I cared about losing. Welcome any corrections if any of the above is inaccurate.

lampey
Mar 27, 2012

SMR drives are fine for RAID configs that do not use distributed parity: RAID 0, 1, 10, SnapRAID, Unraid, spanned. And they can be acceptable for RAID 5; it depends more on the specific disks, how much you are writing, and whether the sustained write speed is still fast enough. Most of the 8 TB SMR drives have 20-25 GB of non-shingled space that is used as a cache before data is written to the shingled portion. With four drives in a RAID 10 or spanned setup etc., for writing files less than ~100 GB you will get speeds in the 125 MB/s range and wouldn't notice any difference from conventional drives. If you use a type of RAID with distributed parity you will hit the limit much faster: parity is computed multiple times as data is shuffled around, and ZFS specifically will split things into a greater number of smaller writes, which are more likely to need to rewrite shingled areas. For 8 TB SMR disks you still get ~30 MB/s per disk for sustained writes instead of 150-180 MB/s for a non-SMR drive, so with 4 disks you are still limited by gigabit networking and not the disks. Smaller and older disks are slower. The main time this is an issue is when you are rebuilding a failed disk, where you would otherwise be getting much faster speeds with the network out of the picture. It will take around 48-60 hours to rebuild an SMR drive compared to around 16 hours for a regular 8 TB SATA drive.

Specifically for FreeNAS with ZFS, the Seagate drives are fine. The WD Red (non-Plus) and other WD SMR drives have more problems, and it could take more than a week to resilver after a drive fails. You can mitigate this somewhat by tuning the ZFS pool, but I wouldn't use WD SMR drives with RAIDZ1/2.

BlankSystemDaemon
Mar 13, 2009



Or you could decide to vote with your wallet by not supporting companies pulling stupid poo poo like submarining SMR into their prosumer line of products.
That's also an option.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



movax posted:

I would try to avoid USB entirely, even if it's a low-stakes application such as the one you described. Does that old laptop have Thunderbolt? At least then you could run out to some kind of PCIe controller externally and mostly use your existing hardware.

Yeah, two Thunderbolt ports, so I could power it with one and use the other to drive a PCIe enclosure. Maybe that's the way to do it.

Maybe something like this?

https://eshop.macsales.com/item/AKi...cB&gclsrc=aw.ds

Warbird posted:

Without getting too into the weeds, what sort of crypto scheme were you running that left you with umpteen multiTB drives lying around? GPUs I could understand.

As others have said, try for SATA or the like over USB if possible for better speeds, but it wouldn't be the end of the world if you didn't.

As someone said before, they were for Burstcoin, a proof-of-capacity crypto where you process the math only once and write it to a hard drive for later searching. Its much lower power usage (since you're not driving GPUs with tons of power calculating stuff over and over) was appealing to me at the time. It never really took off, and they changed the math about a year into my foray into it, which invalidated all the stuff written to my hard drives and would have forced me to recalculate everything. Since it took my piddly GPU a month to do it last time, I just said gently caress it and put them into storage.

Takes No Damage posted:

My understanding (which is admittedly limited) is that SMR drives should not be used for NAS at all. It isn't just the slower performance; they are actively less reliable in a RAID config. The data shuffling that comes with shingling butts up directly against the distributed data writing of a RAID, especially in ZFS, which you'd get in Free/TrueNAS. It basically causes the drives to be thrashing around all the time, shortening their lifetimes that much more (and these are already old/used drives). So when one of them dies and you put in a new replacement drive, your pool will have to resilver the array to write all the necessary data to the new drive, which will thrash all your other drives as it reads through each one 100%. That's already a slow and vulnerable process, and because of shingling it can take days instead of hours.

If this is just a sandbox/proof of concept thing to play around with then sure, but I wouldn't start storing anything on an all-shingled RAID that I cared about losing. Welcome any corrections if any of the above is inaccurate.

Mirroring should be okay then, though, right? That's just a straightforward mirrored copy of what's on the other drive. Obviously mirroring doesn't scale well, but if I just use two of these then that might be okay? I've got no other use for them otherwise.

https://www.amazon.com/gp/product/B01HAPGEIE/ref=ppx_yo_dt_b_asin_title_o06_s00?ie=UTF8&psc=1

8TB will probably be sufficient for my needs for the time being.

lampey posted:

SMR drives are fine for RAID configs that do not use distributed parity: RAID 0, 1, 10, SnapRAID, Unraid, spanned. And they can be acceptable for RAID 5; it depends more on the specific disks, how much you are writing, and whether the sustained write speed is still fast enough. Most of the 8 TB SMR drives have 20-25 GB of non-shingled space that is used as a cache before data is written to the shingled portion. With four drives in a RAID 10 or spanned setup etc., for writing files less than ~100 GB you will get speeds in the 125 MB/s range and wouldn't notice any difference from conventional drives. If you use a type of RAID with distributed parity you will hit the limit much faster: parity is computed multiple times as data is shuffled around, and ZFS specifically will split things into a greater number of smaller writes, which are more likely to need to rewrite shingled areas. For 8 TB SMR disks you still get ~30 MB/s per disk for sustained writes instead of 150-180 MB/s for a non-SMR drive, so with 4 disks you are still limited by gigabit networking and not the disks. Smaller and older disks are slower. The main time this is an issue is when you are rebuilding a failed disk, where you would otherwise be getting much faster speeds with the network out of the picture. It will take around 48-60 hours to rebuild an SMR drive compared to around 16 hours for a regular 8 TB SATA drive.

Specifically for FreeNAS with ZFS, the Seagate drives are fine. The WD Red (non-Plus) and other WD SMR drives have more problems, and it could take more than a week to resilver after a drive fails. You can mitigate this somewhat by tuning the ZFS pool, but I wouldn't use WD SMR drives with RAIDZ1/2.

I'm not sure which drives these are (linked above) since they're still in their enclosures. My research says they are most likely Seagate Archive drives, but that other drives were also used in their manufacture.

Nitrousoxide fucked around with this message at 16:12 on Jan 3, 2021

redeyes
Sep 14, 2002

by Fluffdaddy
When Google said they were stopping the free photo storage I decided to do Nextcloud on a Linux server. Threw that through Cloudflare. Tossed apps on my phone. AWESOME!!

Rooted Vegetable
Jun 1, 2002
Guess it's time for my thrice-yearly battle with Nextcloud's Docker setup and Traefik. Anyone got a reliable guide to getting those two working together? Ideally considering Unraid too, but I think that's asking too much.

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good
https://youtu.be/fUPmVZ9CgtM

With a companion video for accessing it remotely via SWAG/Let's Encrypt in the video description.

Swap out SWAG with Traefik and you're set, yeah?
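For what it's worth, the Traefik side mostly comes down to a couple of labels on the Nextcloud container. A very rough, untested sketch (image tags, hostname, email, and paths are all placeholders):

code:
version: "3"
services:
  traefik:
    image: traefik:v2.3
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro
  nextcloud:
    image: nextcloud
    volumes:
      - ./nextcloud:/var/www/html
    labels:
      - traefik.enable=true
      - traefik.http.routers.nextcloud.rule=Host(`nc.example.com`)
      - traefik.http.routers.nextcloud.entrypoints=websecure
      - traefik.http.routers.nextcloud.tls.certresolver=le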

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



redeyes posted:

When Google said they were stopping the free photo storage I decided to do Nextcloud on a Linux server. Threw that through Cloudflare. Tossed apps on my phone. AWESOME!!

Did you just use the free Cloudflare setup?

Maybe I should just go with a proper NAS setup instead of trying to rig up something with these shingled drives. There are a few things that I would want to do on there, including media hosting for local streaming through Plex, backup of pictures and video taken by my phone, and hosting a handful of servers or modules like Homebridge that are currently running on a few Raspberry Pis.

I suppose I could still use the shingled drives to provide the offsite backup for my NAS, just periodically backing it up to them in case my house catches fire or something. Or would people recommend I use a cloud provider for the offsite backup portion of a secure and properly backed up NAS solution?

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
I hadn't considered using Plex and/or Nextcloud for my photo/video backup from my phone. Are there apps that mimic the Google Photos auto-backup feature but with NC or Plex as the destination?

Also, what function is Cloudflare serving in this setup? Dynamic DNS?

TraderStav fucked around with this message at 23:54 on Jan 3, 2021

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
I assume the Android app also does this, but the Nextcloud iOS app will auto-upload your pictures to a folder of your choice.


Warbird
May 23, 2012

America's Favorite Dumbass

Plex photo backup is sketchy af in my experience. Check out Resilio Sync or the Moments app if you have a Synology NAS.
