Actuarial Fables
Jul 29, 2014

Taco Defender

Siochain posted:

The MD1200 is hooked up via an LSI (rebranded as Dell) SAS2008 HBA (aka H200E) controller, which is running firmware 7.15.08.00-IT. This looks like it is already set up correctly for FreeNAS to use without issue. Or should I try to update the firmware/flash it to a stock LSI firmware?

That firmware is the one you want. The Initiator Target (IT) firmware presents the connected disks directly to the host OS without any modification.
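If you want to double-check from the OS, the LSI sas2flash utility (the same tool people use for crossflashing) will report the firmware version and whether it's the IT image. A rough sketch; controller numbering may differ on your box:

code:
sas2flash -listall      # enumerate SAS2 controllers with their firmware versions
sas2flash -c 0 -list    # full details for controller 0 - the output indicates IT vs IR firmware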

Siochain posted:

Also - should I install FreeNAS bare-metal and run VMs on it, or should I use VMware and install FreeNAS onto a VM? I do want to run a few small VMs on the system (Win 10, Linux, possibly Server 2016 or 2019) for lab/playing around.

It's pretty easy to virtualize/unvirtualize FreeNAS if all your disks are connected to an HBA, as the ZFS pool isn't tied to the OS and all the settings can be downloaded and applied to a new install - so if you change your mind later on it's not too difficult to switch. I would lean towards running FreeNAS in a VM under your hypervisor of choice if you're limited to a single system; I had a poor experience trying to get VMs stable on FreeNAS. Either way you'll need a drive independent from the ZFS pool to install FreeNAS/VMware/whatever onto.


BlankSystemDaemon
Mar 13, 2009



If you virtualize FreeNAS, the HBA needs to be passed through to the FreeNAS guest via VT-d - the CPUs in question are Westmere, so you'll need to look for that option in your firmware to confirm that it's enabled. Any other way of presenting the hardware involves some level of software caching, which is what you want to avoid at all costs.
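The hypervisor hasn't been picked yet; if it ends up being Linux/KVM, a quick sanity check that VT-d actually came up looks roughly like this (a sketch, not specific to any distro):

code:
dmesg | grep -i -e dmar -e iommu      # DMAR/IOMMU messages mean the firmware option took effect
ls /sys/kernel/iommu_groups/          # non-empty means the kernel built IOMMU groups for passthrough
# on Intel you may also need intel_iommu=on on the kernel command line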

dexefiend
Apr 25, 2003

THE GOGGLES DO NOTHING!
I am less of a Linux expert than most people in this thread.

I love Unraid. Nothing bad has happened to me.

I like the simplicity and the Community Apps/Docker container library is a great resource.

Watch the SpaceInvader Unraid videos on Youtube.

Rooted Vegetable
Jun 1, 2002
I spent my time learning Linux by way of setting up Ubuntu Server, OpenMediaVault, and Xubuntu on various NAS/server/desktop boxes. I still use Unraid, despite knowing the really complicated nerd-cred way. The old way may have done when I had time and was proving stuff to myself, but these days I'm happy just knowing it - and I have a kid. Unraid works reliably and stably. $89 for Pro well spent.

Kreeblah
May 17, 2004

INSERT QUACK TO CONTINUE


Taco Defender

Kreeblah posted:

Has anybody tried these things in an N54L? I've heard of problems with some older hosts not being able to get these drives to power on.

In case anybody else is wondering, these drives work fine in my N54L without loving with the 3.3V pin. That said, I am running a modded bios, though I don't know whether that makes any difference.

Kreeblah fucked around with this message at 00:40 on Sep 14, 2019

sockpuppetclock
Sep 12, 2010
Before I just go ahead, is there anything wrong with the QNAP TS-453Be?

Schadenboner
Aug 15, 2011

by Shine
WTF is Qtier? Does it have redundancy as an option/built-in or is it "JBOD but we move ur poo poo around based on last-opened date"?

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

sockpuppetclock posted:

Before I just go ahead, is there anything wrong with the QNAP TS-453Be?

No, it's pretty decent.

Schadenboner posted:

WTF is Qtier? Does it have redundancy as an option/built-in or is it "JBOD but we move ur poo poo around based on last-opened date"?

No, you have storage pools, so you might have 5 hard disks in a RAID 5/6 and a pair of SSDs in RAID 1. So you have typical redundancy; in fact I don't think you can enable Qtier on a single SSD. The access priority is a bit more sophisticated than last-opened date, as it looks at actual IO use over time.
https://www.qnap.com/solution/qtier-auto-tiering/en-us/

Schadenboner
Aug 15, 2011

by Shine
Goddamn, with 9 dicks crammed into it I'll have to call it "Yr Mum, lol" if I get one of those and I don't know if you can even have commas in device names?

:ohdear:

Wondering if the ARM won't have enough oomph, especially since the AMD option is only like 60 bucks more and the Intel is 20 bucks more than that.

Good find though. :wow:

ChiralCondensate
Nov 13, 2007

what is that man doing to his colour palette?
Grimey Drawer
pre:
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       1
...
  9 Power_On_Hours          0x0032   032   032   000    Old_age   Always       -       49964
:smug:

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
I love my unraid for loving around with homelab stuff. It's got a nice clean interface for Qemu/KVM and works pretty well passing real hardware to VM's including graphics cards.

The app store for ready made dockers is just plain nice too.

BlankSystemDaemon
Mar 13, 2009



ChiralCondensate posted:

pre:
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       1
...
  9 Power_On_Hours          0x0032   032   032   000    Old_age   Always       -       49964
:smug:
code:
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0026   055   054   000    Old_age   Always       -       18317
  3 Spin_Up_Time            0x0023   081   061   025    Pre-fail  Always       -       5767
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       153
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       71391
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       150
181 Program_Fail_Cnt_Total  0x0022   100   100   000    Old_age   Always       -       11126
191 G-Sense_Error_Rate      0x0022   100   100   000    Old_age   Always       -       12
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   064   060   000    Old_age   Always       -       28 (Min/Max 14/40)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   252   252   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   252   252   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       62
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       6202
:smugbert:
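For anyone wondering where tables like these come from, they're the vendor attribute section of smartmontools output; roughly the following (device name varies by OS - adaX on FreeBSD, sdX on Linux):

code:
smartctl -A /dev/ada0    # just the SMART attribute table, as pasted above
smartctl -a /dev/ada0    # full report: identify info, overall health, attributes, self-test log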

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

D. Ebdrup posted:

code:
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0026   055   054   000    Old_age   Always       -       18317
  3 Spin_Up_Time            0x0023   081   061   025    Pre-fail  Always       -       5767
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       153
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       71391
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       150
181 Program_Fail_Cnt_Total  0x0022   100   100   000    Old_age   Always       -       11126
191 G-Sense_Error_Rate      0x0022   100   100   000    Old_age   Always       -       12
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   064   060   000    Old_age   Always       -       28 (Min/Max 14/40)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   252   252   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   252   252   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       62
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       6202
:smugbert:

I wish I had the reports from the Synology that I built for my old job. Those drives had something like 95k hours when we finally retired the unit. IIRC, they were 1TB WD Greens.
I've got a D-Link DNS-323 that's been running non-stop for 10 years, but no way to run SMART diagnostics on the drives (Samsung 1TB EcoGreen F1's). I'd bet they're at nearly 90k.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
What is the best way you would all recommend to back up possibly up to 18TB?

System is ZFS so send/receive is possible, but would just an rsync work?

The cold storage is another question. Should it be a temp PC with 3x 10TB drives in RAIDZ1 (2 data + 1 parity), or would a plain stripe be fine since the drives would mostly be off?

I wish I could create a new zpool using the current ZFS pool and link the cold storage as a mirror, but not sure I would be able to put up with the constantly degraded state. Probably snapshots.

The two options for writing to the cold storage disks are either a spare PC with all of them mounted, snapshotting/syncing over SSH with the built-in zfs tools, or connecting them all via USB/eSATA to the main system - but I would have to mount a controller card to do the eSATA thing, unless it's possible to create the new zpool on the spare PC and maybe connect the disks one at a time so they each get their part.

BlankSystemDaemon
Mar 13, 2009



There's so little difference between rsync and zfs send when you're doing the first bulk transfer that whatever difference there is gets lost in the standard deviation of tcp over even a short bit of ethernet cable. The advantage zfs send has is that it can be incremental and can keep state - so subsequent transfers are MUCH quicker, and if it gets interrupted it won't have to start from the beginning.
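A minimal sketch of the incremental workflow (pool, dataset and snapshot names here are made up):

code:
zfs snapshot tank/data@monday
zfs send tank/data@monday | zfs recv backup/data              # first full transfer
zfs snapshot tank/data@tuesday
zfs send -i @monday tank/data@tuesday | zfs recv backup/data  # only the blocks changed since @monday
On newer OpenZFS an interrupted receive can also be resumed: receive with -s, then read the receive_resume_token property on the target and feed it to zfs send -t.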

I would personally lean towards the RAIDZ1 option, since even if you're not actively using the disks, you don't know at which point of the disks bathtub curve they're going to fail (it's the paradox of all disk-based storage, you can't know when they're going to fail or be decommissioned, until they have).

Remember that if your system has three USB ports free (or room for a multi-hub USB3 controller in a PCIe slot), it's perfectly possible to connect the disks via USB, back up to them that way, and disconnect the disks when they're not in use. The only reason to avoid USB in day-to-day usage is that the connectors can be a little fragile and it sucks to end up with a faulted pool because a connector came loose.

Kreeblah
May 17, 2004

INSERT QUACK TO CONTINUE


Taco Defender
Hmmm. Question for the ZFS folks in here. I swapped out all my drives in my array with some shucked 10TB Easystore drives (WD100EMAZ disks). However, I expected FreeBSD to just take care of setting them up properly when I did a zfs replace, so I didn't wipe any of the partitions before doing that. Now, when I boot, I get messages like:

code:
GEOM: ada0: corrupt or invalid GPT detected.
GEOM: ada0: GPT rejected -- may not be recoverable.
And, taking a look at these disks, I'm not totally sure what's going on with the partitioning. gpart doesn't recognize them at all, and fdisk gets me:

code:
******* Working on device /dev/ada0 *******
parameters extracted from in-core disklabel are:
cylinders=19377850 heads=16 sectors/track=63 (1008 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=19377850 heads=16 sectors/track=63 (1008 blks/cyl)

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 238 (0xee),(EFI GPT)
    start 1, size 4294967295 (2097151 Meg), flag 0
	beg: cyl 0/ head 0/ sector 2;
	end: cyl 1023/ head 255/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>
Does this matter? The pool itself is online and seems to work fine, so I'm not sure there's an actual problem. If there is an issue, though, is there a way to fix this without destroying/recreating my pool since I expanded it before noticing these errors?

My only thought here is that the ZFS resilver repartitioned the disks with a GPT scheme, but didn't wrap them in a FreeBSD-friendly layout. If so, that probably means that my first thought - creating a GPT partition and resilvering each disk again - might require enough extra space just for the partition layout that ZFS wouldn't let me use the disk for it. If that's the case, I might just let it ride until the next time I upgrade.

Kreeblah fucked around with this message at 09:32 on Sep 16, 2019

BlankSystemDaemon
Mar 13, 2009



Kreeblah posted:

:words: about ZFS
What's the output from 'gpart show ada0', and what output do you get from 'file -s /dev/ada0*'? For both the old drives and the new ones, please.
It sounds like you set it up for ZFS to use raw disks rather than partitions? How were the previous disks set up? Unless you've got a separate (pair of?) boot disk(s), you cannot use ZFS on the whole disk as the firmware won't know how to boot from the disks.
Assuming you have partition information on the old disks, what you need to do is use gpart to setup a similar set of partitions on the new disk then use zfs replace on the zfs partition you created with gpart, instead of the whole disk.

The reason ZFS can't deal with partition layouts is because there's a LOT of ways to lay out partitions depending on what platform the system is on, so ZFS can't assume any one platform.
On Solaris, OpenBoot could contain enough information in the firmware of the systems to be able to read ZFS whole-disks, so that wasn't an issue then. It's one of the few gotchas with ZFS that still haven't been completely ironed out, because there simply isn't any solution for it.
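A rough sketch of that gpart-then-replace approach on FreeBSD (the disk device, GPT label and pool name below are placeholders, and note the actual command is zpool replace rather than zfs replace):

code:
gpart create -s gpt ada4                          # write a fresh GPT to the new disk
gpart add -t freebsd-zfs -a 1m -l newdisk4 ada4   # single ZFS partition, 1MiB-aligned, with a GPT label
zpool replace tank <old-device> gpt/newdisk4      # resilver onto the labelled partition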

BlankSystemDaemon fucked around with this message at 10:14 on Sep 16, 2019

IOwnCalculus
Apr 2, 2003





I was going to post how I always just point ZoL at whole disks and never set up manual partitions, but decided to check the partition table of one just now.

code:
Device           Start         End     Sectors  Size Type
/dev/sdk1         2048 19532855295 19532853248  9.1T Solaris /usr & Apple ZFS
/dev/sdk9  19532855296 19532871679       16384    8M Solaris reserved 1

BlankSystemDaemon
Mar 13, 2009



Well, it's perfectly possible to use raw disks with OpenZFS, you just need somewhere to store the boot information.
A UEFI firmware with 512KB of programmable space on the SPI flash would be enough, since .EFI files are written in C and it's not difficult to make a loader that way; FreeBSD's standard loader in its EFI binary takes up 447KB as of 12.0-RELEASE, and has support for ZFS and boot environments.

There's even room in the OpenZFS implementation to put loader information on-disk, in case there's no option for modifying the firmware and you need to support the INT 0x13 disk calls that BIOSes make.
It just hasn't been taken advantage of yet, since there's no agreed-upon standard for how it should be used - but perhaps that's one of the things that can be improved now that everything is being unified into the new OpenZFS repo.

Kreeblah
May 17, 2004

INSERT QUACK TO CONTINUE


Taco Defender

D. Ebdrup posted:

What's the output from 'gpart show ada0', and what output do you get from 'file -s /dev/ada0*'? For both the old drives and the new ones, please.
It sounds like you set it up for ZFS to use raw disks rather than partitions? How were the previous disks set up? Unless you've got a separate (pair of?) boot disk(s), you cannot use ZFS on the whole disk as the firmware won't know how to boot from the disks.
Assuming you have partition information on the old disks, what you need to do is use gpart to setup a similar set of partitions on the new disk then use zfs replace on the zfs partition you created with gpart, instead of the whole disk.

The reason ZFS can't deal with partition layouts is because there's a LOT of ways to lay out partitions depending on what platform the system is on, so ZFS can't assume any one platform.
On Solaris, OpenBoot could contain enough information in the firmware of the systems to be able to read ZFS whole-disks, so that wasn't an issue then. It's one of the few gotchas with ZFS that still haven't been completely ironed out, because there simply isn't any solution for it.

I did this for ada4 since that's an easier disk to swap (no caddies to gently caress with). It's the same info for all the disks, though.

Old:
code:
[user@nas ~]$ sudo gpart show ada4
gpart: No such geom: ada4.
[user@nas ~]$ sudo file -s /dev/ada4*
/dev/ada4:     data
/dev/ada4.nop: data
New:
code:
[user@nas ~]$ sudo gpart show ada4
gpart: No such geom: ada4.
[user@nas ~]$ sudo file -s /dev/ada4*
/dev/ada4:     DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last)
/dev/ada4.nop: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last)
Which is . . . interesting. I guess the old ones just used the whole disk (which, thinking about it, makes some amount of sense, given that I don't think they came with any data on them at all, as opposed to these Easystore disks that were intended to be ready to go out of the box).

As far as booting goes, I have a separate UFS drive, so that's not impacted here.

BlankSystemDaemon
Mar 13, 2009



.nop devices are GEOM devices, so this makes no sense at all. I'm really not sure what the gently caress is up with it. Does 'camcontrol devlist' list the devices just fine?

Kreeblah
May 17, 2004

INSERT QUACK TO CONTINUE


Taco Defender

D. Ebdrup posted:

.nop devices are GEOM devices, so this makes no sense at all. I'm really not sure what the gently caress is up with it. Does 'camcontrol devlist' list the devices just fine?

Yup.

code:
[user@nas ~]$ sudo camcontrol devlist
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus3 target 0 lun 0 (pass3,ada3)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus5 target 0 lun 0 (pass4,ada4)
<SanDisk Cruzer Glide 1.00>        at scbus6 target 0 lun 0 (pass5,da0)
I'm pretty confused about this, too, but I think I'm just going to let it ride. It seems to be working fine.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I have a zpool that consists of two HP EX920 NVMe drives in a simple configuration (each disk as a vdev). I am getting really, really terrible performance off the pool... like deleting 114 files took 2-3 minutes. Moving 1 TB between two datasets on the pool, it took 20 minutes to move like 50 GB. Copying off it, it takes a couple of seconds to spin up to 100 MB/s of native gigabit Ethernet throughput (single direction only).

I think part of the problem might be the fact that I ran the pool raw for a while before creating a specific dataset? Like maybe ZFS makes some assumptions that the root dataset on a pool is not particularly big and can be pinned into cache or something?

zfs get all output (the one filesystem I created at the end is excluded):

code:
znvme  size                           1.86T                          -
znvme  capacity                       25%                            -
znvme  altroot                        -                              default
znvme  health                         ONLINE                         -
znvme  guid                           14661218058347980275           default
znvme  version                        -                              default
znvme  bootfs                         -                              default
znvme  delegation                     on                             default
znvme  autoreplace                    off                            default
znvme  cachefile                      -                              default
znvme  failmode                       wait                           default
znvme  listsnapshots                  off                            default
znvme  autoexpand                     off                            default
znvme  dedupditto                     0                              default
znvme  dedupratio                     1.00x                          -
znvme  free                           1.39T                          -
znvme  allocated                      479G                           -
znvme  readonly                       off                            -
znvme  comment                        -                              default
znvme  expandsize                     -                              -
znvme  freeing                        0                              default
znvme  fragmentation                  9%                             -
znvme  leaked                         0                              default
znvme  bootsize                       -                              default
znvme  checkpoint                     -                              -
znvme  feature@async_destroy          enabled                        local
znvme  feature@empty_bpobj            active                         local
znvme  feature@lz4_compress           active                         local
znvme  feature@multi_vdev_crash_dump  enabled                        local
znvme  feature@spacemap_histogram     active                         local
znvme  feature@enabled_txg            active                         local
znvme  feature@hole_birth             active                         local
znvme  feature@extensible_dataset     enabled                        local
znvme  feature@embedded_data          active                         local
znvme  feature@bookmarks              enabled                        local
znvme  feature@filesystem_limits      enabled                        local
znvme  feature@large_blocks           enabled                        local
znvme  feature@large_dnode            enabled                        local
znvme  feature@sha512                 enabled                        local
znvme  feature@skein                  enabled                        local
znvme  feature@device_removal         enabled                        local
znvme  feature@obsolete_counts        enabled                        local
znvme  feature@zpool_checkpoint       enabled                        local
znvme  feature@spacemap_v2            active                         local
code:
znvme                   type                  filesystem             -
znvme                   creation              Mon Jan 28  3:04 2019  -
znvme                   used                  758G                   -
znvme                   available             1.06T                  -
znvme                   referenced            112K                   -
znvme                   compressratio         1.00x                  -
znvme                   mounted               yes                    -
znvme                   quota                 none                   default
znvme                   reservation           none                   default
znvme                   recordsize            128K                   default
znvme                   mountpoint            /znvme                 default
znvme                   sharenfs              off                    default
znvme                   checksum              on                     default
znvme                   compression           off                    default
znvme                   atime                 on                     default
znvme                   devices               on                     default
znvme                   exec                  on                     default
znvme                   setuid                on                     default
znvme                   readonly              off                    default
znvme                   jailed                off                    default
znvme                   snapdir               hidden                 default
znvme                   aclmode               discard                default
znvme                   aclinherit            restricted             default
znvme                   createtxg             1                      -
znvme                   canmount              on                     default
znvme                   xattr                 off                    temporary
znvme                   copies                1                      default
znvme                   version               5                      -
znvme                   utf8only              off                    -
znvme                   normalization         none                   -
znvme                   casesensitivity       sensitive              -
znvme                   vscan                 off                    default
znvme                   nbmand                off                    default
znvme                   sharesmb              off                    default
znvme                   refquota              none                   default
znvme                   refreservation        none                   default
znvme                   guid                  9480060411453799612    -
znvme                   primarycache          all                    default
znvme                   secondarycache        all                    default
znvme                   usedbysnapshots       0                      -
znvme                   usedbydataset         112K                   -
znvme                   usedbychildren        758G                   -
znvme                   usedbyrefreservation  0                      -
znvme                   logbias               latency                default
znvme                   dedup                 off                    default
znvme                   mlslabel                                     -
znvme                   sync                  standard               default
znvme                   dnodesize             legacy                 default
znvme                   refcompressratio      1.00x                  -
znvme                   written               112K                   -
znvme                   logicalused           757G                   -
znvme                   logicalreferenced     43K                    -
znvme                   volmode               default                default
znvme                   filesystem_limit      none                   default
znvme                   snapshot_limit        none                   default
znvme                   filesystem_count      none                   default
znvme                   snapshot_count        none                   default
znvme                   redundant_metadata    all                    default
znvme/encode            type                  filesystem             -
znvme/encode            creation              Tue Apr 30  2:09 2019  -
znvme/encode            used                  758G                   -
znvme/encode            available             1.06T                  -
znvme/encode            referenced            758G                   -
znvme/encode            compressratio         1.00x                  -
znvme/encode            mounted               yes                    -
znvme/encode            quota                 none                   default
znvme/encode            reservation           none                   default
znvme/encode            recordsize            128K                   default
znvme/encode            mountpoint            /znvme/encode          default
znvme/encode            sharenfs              off                    default
znvme/encode            checksum              on                     default
znvme/encode            compression           off                    default
znvme/encode            atime                 on                     default
znvme/encode            devices               on                     default
znvme/encode            exec                  on                     default
znvme/encode            setuid                on                     default
znvme/encode            readonly              off                    default
znvme/encode            jailed                off                    default
znvme/encode            snapdir               hidden                 default
znvme/encode            aclmode               discard                default
znvme/encode            aclinherit            restricted             default
znvme/encode            createtxg             606225                 -
znvme/encode            canmount              on                     default
znvme/encode            xattr                 off                    temporary
znvme/encode            copies                1                      default
znvme/encode            version               5                      -
znvme/encode            utf8only              off                    -
znvme/encode            normalization         none                   -
znvme/encode            casesensitivity       sensitive              -
znvme/encode            vscan                 off                    default
znvme/encode            nbmand                off                    default
znvme/encode            sharesmb              off                    default
znvme/encode            refquota              none                   default
znvme/encode            refreservation        none                   default
znvme/encode            guid                  6250502280993705873    -
znvme/encode            primarycache          all                    default
znvme/encode            secondarycache        all                    default
znvme/encode            usedbysnapshots       0                      -
znvme/encode            usedbydataset         758G                   -
znvme/encode            usedbychildren        0                      -
znvme/encode            usedbyrefreservation  0                      -
znvme/encode            logbias               latency                default
znvme/encode            dedup                 off                    default
znvme/encode            mlslabel                                     -
znvme/encode            sync                  standard               default
znvme/encode            dnodesize             legacy                 default
znvme/encode            refcompressratio      1.00x                  -
znvme/encode            written               758G                   -
znvme/encode            logicalused           757G                   -
znvme/encode            logicalreferenced     757G                   -
znvme/encode            volmode               default                default
znvme/encode            filesystem_limit      none                   default
znvme/encode            snapshot_limit        none                   default
znvme/encode            filesystem_count      none                   default
znvme/encode            snapshot_count        none                   default
znvme/encode            redundant_metadata    all                    default
Nor is the disk overheating... it was at 38C while I was observing it strugglebussing to move data from dataset to dataset.

edit: right now I am getting 16 MB/s sustained read off it... with no contention.

(pay no attention to specific disk sizes, I'm moving stuff off the pool so I can try destroying and recreating it)

edit2: also this is a specific affliction that seems to come on with longer system uptimes... reboot and it goes away.

Paul MaudDib fucked around with this message at 06:38 on Sep 21, 2019

BlankSystemDaemon
Mar 13, 2009



I've had dying hard drives cause arrays to exhibit exactly that kind of behaviour, despite not showing any rising S.M.A.R.T. attributes (all of which were tracked with collectd) - so it may be worth trying to determine if one of the disks is failing?

I'm not sure how you managed to run a zpool without a dataset; as far as I know, datasets are either filesystems or volumes, and you can't really store anything on ZFS without using one or the other (the pool's root is itself a dataset).
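If it were my pool, I'd watch per-device activity and error counters while it's misbehaving; a sketch, device paths depend on the OS:

code:
zpool iostat -v znvme 5     # per-vdev ops/bandwidth every 5s; one device lagging badly points at hardware
zpool status -v znvme       # read/write/checksum error counters per device
smartctl -a /dev/nvme0      # NVMe health, media errors and error log entries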

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

D. Ebdrup posted:

There's so little difference between rsync and zfs send when you're doing the first bulk transfer that whatever difference there is gets lost in the standard deviation of tcp over even a short bit of ethernet cable. The advantage zfs send has is that it can be incremental and can keep state - so subsequent transfers are MUCH quicker, and if it gets interrupted it won't have to start from the beginning.

I would personally lean towards the RAIDZ1 option, since even if you're not actively using the disks, you don't know at which point of the disks bathtub curve they're going to fail (it's the paradox of all disk-based storage, you can't know when they're going to fail or be decommissioned, until they have).

Remember that if your system has three USB ports free (or room for a multi-hub USB3 controller in a PCIe slot), it's perfectly possible to connect the disks via USB, back up to them that way, and disconnect the disks when they're not in use. The only reason to avoid USB in day-to-day usage is that the connectors can be a little fragile and it sucks to end up with a faulted pool because a connector came loose.

Thanks for your recommendation!

Went RAIDZ1 with 3 10TB drives.

Shucked 2 of them from WD Elements and got 1 from Newegg. I had a new 2-drive USB dock, but I couldn't find the thick USB connector for the single dock I had before, so I just used the board that came with one of the Elements drives to connect it via USB.

Put both USB cables into a hub and connected it to the front panel, since it looks like any board-based USB on the system should be USB3. Made the snapshot, made a zpool/zfs dataset on the backup drives, and ran the following command:

code:
zfs send -vR  neriak/files@092119 | pv | zfs recv -F ebon_mask/neriak
It started the transfer at a whopping 20MBps :bahgawd:

Looks like all the traffic going across one hub was too much, even though the cable it used was blue, suggesting SuperSpeed (or at least USB3).

Separated the 2-drive dock and the single dock (since I just found the thick USB connector cable), put them in the back each on their own USB port, and now I'm getting 200-300MBps write transfers.
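For what it's worth, the next run only needs to ship the delta; roughly the following (the new snapshot name here is hypothetical):

code:
zfs snapshot -r neriak/files@100519
zfs send -vRI neriak/files@092119 neriak/files@100519 | pv | zfs recv -F ebon_mask/neriak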

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

8TB Easystore down to $121.09 at B&H. Is that the cheapest? I don't remember seeing under $129.99 for those previously but maybe I forgot a black friday deal.
https://www.bhphotovideo.com/c/prod...a338d3381850INT

astral
Apr 26, 2004

Rexxed posted:

8TB Easystore down to $121.09 at B&H. Is that the cheapest? I don't remember seeing under $129.99 for those previously but maybe I forgot a black friday deal.
https://www.bhphotovideo.com/c/prod...a338d3381850INT

Looks like it's over; I'm seeing it as a $139.99 Elements. But yeah, that would've been a great price!

el_caballo
Feb 26, 2001
Posting this here since it's so Unraid specific and also it's very long, so sorry. After Crashplan Home was discontinued, I took the Crashplan Small Biz discount for a year and now it's expired. So I just started setting up an Unraid server and installed a Duplicacy-Web GUI docker. I'm not sure I'm setting everything up correctly. The guide on the Duplicacy website only shows how to set up a local folder backup.

My backup needs are: Mainly just one cloud-mirrored folder on my Unraid server that catches 1) everything I drag and drop (most important stuff) 2) Windows desktop automated backups pushed into that Unraid folder 3) important local Unraid config files that the CA Backup plugin is backing up. I am not backing up any media bullshit except for photos.

This is also the first docker I've set up from "scratch," i.e., not checked out from Community Applications. I gave it the extra parameter "--hostname <randomnumbers>", which is what the docker passed to the Duplicacy customer license page on first activation so I just copied it. It seems to be persisting through docker restarts so I think that's fine? The docker may also be doing some machine-id thing that I don't understand. I gave it the Config, Log, and Cache host paths it needs and then for access, I just gave it the whole "mnt/user" path, so that it could see every Unraid share. Not sure if that's bad. I think that was it for docker setup besides the host/container ports. Not sure if it needs anything else.

The first thing I did in the web GUI was put in my Backblaze B2 bucket as a Duplicacy storage location (Same thing as repo? Web GUI doesn't seem to use the same terminology as CLI.): "b2_bucket". Then I made a new local Unraid share with the folder "backup_to_b2". Whenever it asked if I wanted to encrypt, I did.

Then I went to the "Backup" tab and created a backup from the folder "backup_to_b2" to the "b2_bucket" in the cloud. It ran fine and then I deleted my test files and restored them fine.

BUT, from reading around, it seems like most people only run backup jobs locally to a local storage and then copy that to the cloud. So would the correct setup be like this?
    1. Make a new "Main_Duplicacy_Backups" storage (aka repo?) location in Duplicacy.
    2. Make a backup job that backs up my "backup_to_b2" folder to this new "Main_Duplicacy_Backups" storage
    3. Make one or two new backup jobs that backup the various important folders in Unraid to "Main_Duplicacy_Backups"
    4. I'll probably install a 2nd Duplicacy license on my Windows machine and then backup poo poo like my Windows documents, appdata, photos, etc. to the "backup_to_b2" folder on the network (with a bunch of exclusions to weed out crap like .exe's etc.).
    5. Make a copy job that copies "Main_Duplicacy_Backups" to "b2_bucket".

I'm also not sure if I understand the best practices for backup, check, prune, etc. Assuming I want the "Main_Duplicacy_Backups" copied to the cloud at 12:00am daily, that means all of my backup jobs on all my machines run before that (maybe hourly I dunno), then a check job on the local "Main_Duplicacy_Backups", then the copy to the cloud "b2_bucket", then a check job on the cloud "b2_bucket", and then once a week, after all of the above, a prune job (with the default settings) on just the "Main_Duplicacy_Backups".

TenementFunster
Feb 20, 2003

The Cooler King
i’m eventually gonna need more space on my DS218+. are 16tb drives gonna keep being the largest size available for a while, or are larger drives coming soon to drive the price of the 16tb ones down?

alternatively, would i be better off buying the 5 bay DX517 expansion and filling it full of smaller drives i have on hand, or just selling the DS218+ and getting a larger unit?

a DS418 is $360, while a DX517 is $450. would i be able to use the DX517 with future, faster version of the DS line?

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

TenementFunster posted:

i’m eventually gonna need more space on my DS218+. are 16tb drives gonna keep being the largest size available for a while, or are larger drives coming soon to drive the price of the 16tb ones down?

alternatively, would i be better off buying the 5 bay DX517 expansion and filling it full of smaller drives i have on hand, or just selling the DS218+ and getting a larger unit?

a DS418 is $360, while a DX517 is $450. would i be able to use the DX517 with future, faster version of the DS line?

WD claims 20TB disks will be out next year, but there's no certain way to tell how it will impact pricing:
https://www.techradar.com/news/you-will-be-able-to-buy-a-20tb-hard-drive-in-2020

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
The DS218+ can't take expansion units. You can tell because 2 drives is its max. If you had the DS718+ it could accept an expansion unit (2+5).
https://www.synology.com/en-us/products/DS218+#specs
Versus:
https://www.synology.com/en-us/products/DS718+#specs

Also, if you are going to move to a new unit, keep within the same series: for example, instead of the DS418 you would want the DS918+, which has the same number of drives but is expandable, and since it is a plus series it has access to the same applications as the DS218+. The DS418 doesn't have access to a lot of the "business" rated applications, so if you are using them, they will suddenly be gone.

If you have an app you absolutely need to make sure is there: https://www.synology.com/en-us/dsm/packages
They have every package and the models that can use them.

CopperHound
Feb 14, 2012

Axe-man posted:

The DS218+ can't take expansion units. You can tell because 2 drives is its max. If you had the DS718+ it could accept an expansion unit (2+5).
https://www.synology.com/en-us/products/DS218+#specs
Versus:
https://www.synology.com/en-us/products/DS718+#specs
Sort of... You can use it with the DS218+, but it has to be a separate volume: https://www.synology.com/en-us/know...Expansion_Units

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Rexxed posted:

WD claims 20TB disks will be out next year, but there's no certain way to tell how it will impact pricing:
https://www.techradar.com/news/you-will-be-able-to-buy-a-20tb-hard-drive-in-2020
Google absolutely had 8 TB and larger hard drives far ahead of the consumer market in 2013 so they’ve probably already been rolling with those for a year or two by now minimum.

aluminumonkey
Jun 19, 2002

Reggie loves tacos
What are good 8+ drive cases that can do hotswap? I want to retire my Synology 1813+ and create one system that does both transcoding/downloading and NAS functionality. I already have a powerful enough system to do it but I don't know which case and which backplane/SATA expansion cards to get.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
If you want 8+ bays that are hotswap and not just 8+ bays total with a few being hotswap, I'd look at rackmount cases. Rosewill has some that do 12-15 drives and use 120mm fans so they aren't screamingly loud, and you can always just set it on its side if what you really want is a tower.

I had to upgrade recently because I wanted at least 9 drives total and even without the hotswap requirement, once you go past 8 your inexpensive options are pretty limited.

TenementFunster
Feb 20, 2003

The Cooler King

CopperHound posted:

Sort of... You can use it with ds218+, but it has to be a separate volume: https://www.synology.com/en-us/know...Expansion_Units
which i’m okay with, but would the dummy expansion bay work with future releases? like if i wanna upgrade to a DS222+ in a few years, will the DX517 work, or is there not cross-compatibility between generations?

Hughlander
May 11, 2005

Greetings from the past. I'm still catching up on a few months of the thread I missed so sorry if this was covered recently...


I have a 4 core Xeon with a SuperMicro board w/IPMI that's awesome. The biggest problem is that it's limited to 32 gigs of RAM. I use it as a ZFS server and my docker homelab. I have 2 SSDs for boot plugged into the on-board sata ports, then 8 more drives plugged into the built in LSI SAS ports, and another 8 in a JBOD external array plugged into a different LSI card. What I'd like to do is to move to a Ryzen 3900 with 128GB ram, but I don't see a good motherboard that would give IPMI or more than 6 ports. It seems like I'd need to get a second LSI card and flash it to IT mode.

That said does anyone have a recommendation for a Ryzen compatible motherboard that they'd use in my case? Or another solution to increase the amount of memory for the least amount of money?

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Hughlander posted:

Greetings from the past. I'm still catching up on a few months of the thread I missed so sorry if this was covered recently...


I have a 4 core Xeon with a SuperMicro board w/IPMI that's awesome. The biggest problem is that it's limited to 32 gigs of RAM. I use it as a ZFS server and my docker homelab. I have 2 SSDs for boot plugged into the on-board sata ports, then 8 more drives plugged into the built in LSI SAS ports, and another 8 in a JBOD external array plugged into a different LSI card. What I'd like to do is to move to a Ryzen 3900 with 128GB ram, but I don't see a good motherboard that would give IPMI or more than 6 ports. It seems like I'd need to get a second LSI card and flash it to IT mode.

That said does anyone have a recommendation for a Ryzen compatible motherboard that they'd use in my case? Or another solution to increase the amount of memory for the least amount of money?

I haven't done a lot of research, but I suspect your issue is that Ryzen is a desktop line of CPUs, while things like IPMI or extra SATA controllers/ports are typically features of workstation or server boards. Those are going to be Threadripper for workstation or EPYC for server. They also won't be too cheap unless you're looking at older models, and "older" is only a couple of years at this point.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Hughlander posted:

I have a 4 core Xeon with a SuperMicro board w/IPMI that's awesome. The biggest problem is that it's limited to 32 gigs of RAM.

What CPU and mainboard are you using that's limited to 32 gigs of RAM? You might be better off replacing the mainboard if it's slot-limited or density-limited; I'm having a hard time remembering how far back you'd have to go to get a Xeon that's limited to 32 gigs of RAM.


phosdex
Dec 16, 2005

It's probably socket 1150 Haswell. I have the same "problem". My solution is to keep that as mostly a storage server and build another box for VMs.

  • Reply