|
Siochain posted:The MD1200 is hooked up via an LSI (rebranded as Dell) SAS2008 HBA (aka H200E) controller, which is running firmware 7.15.08.00-IT. This looks like it is already set up correctly for FreeNAS to use without issue. Or should I try to update the firmware/flash it to a stock LSI firmware? That firmware is the one you want. The Initiator Target (IT) firmware presents the connected disks directly to the host OS without any modification. Siochain posted:Also - should I install FreeNAS bare-metal and run VMs on it, or should I use VMware and install FreeNAS onto a VM? I do want to run a few small VMs on the system (Win 10, Linux, possibly Server 2016 or 2019) for lab/playing around. It's pretty easy to virtualize/unvirtualize FreeNAS if all your disks are connected to an HBA: the ZFS pool isn't tied to the OS, and all the settings can be downloaded and applied to a new install, so if you change your mind later on it's not too difficult to switch. I would lean towards running FreeNAS in a VM under your hypervisor of choice if you're limited to a single system - I had a poor experience trying to get VMs stable on FreeNAS. Either way you'll need a drive independent from the ZFS pool to install FreeNAS/VMware/whatever.
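If you want to double-check what the card reports for yourself, the LSI sas2flash utility will show it - a sketch, assuming sas2flash is installed and the HBA is controller 0:

```shell
# List every LSI SAS2-family controller and its firmware version;
# an "IT" in the firmware product ID confirms Initiator Target mode
sas2flash -listall

# Full details (firmware, BIOS, NVDATA versions) for controller 0
sas2flash -list -c 0
```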
|
# ? Sep 13, 2019 21:57 |
|
|
# ? May 30, 2024 03:26 |
If you virtualize FreeNAS, the HBA needs to be passed through to the FreeNAS guest VM via VT-d - the CPUs in question are Westmere, which support it, so you'll need to look for that option in your firmware to confirm that it's enabled. Any other way of presenting the disks involves some level of software caching, which is what you want to avoid at all costs.
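If the hypervisor ends up being Linux/KVM, a quick sanity check that VT-d is actually active might look like this (on ESXi you'd check the DirectPath I/O device list in the UI instead):

```shell
# Confirm the IOMMU initialized; with VT-d disabled these print nothing
dmesg | grep -e DMAR -e IOMMU

# List IOMMU groups - the HBA needs to sit in a group you can pass
# through wholesale to the FreeNAS VM
find /sys/kernel/iommu_groups/ -type l
```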
|
|
# ? Sep 13, 2019 22:12 |
|
I am less of a Linux expert than most people in this thread. I love Unraid. Nothing bad has happened to me. I like the simplicity and the Community Apps/Docker container library is a great resource. Watch the SpaceInvader Unraid videos on Youtube.
|
# ? Sep 13, 2019 22:26 |
|
I spent my time learning Linux by way of setting up Ubuntu Server, OpenMediaVault, and Xubuntu on various NAS/server/desktop boxes. I still use Unraid, despite that "but I know the really complicated nerd-cred way" feeling. The old way may have done when I had time and was proving stuff to myself, but these days I'm happy I just know it and have a kid. Unraid works reliably and stably. $89 for Pro well spent.
|
# ? Sep 13, 2019 22:35 |
|
Kreeblah posted:Has anybody tried these things in an N54L? I've heard of problems with some older hosts not being able to get these drives to power on. In case anybody else is wondering, these drives work fine in my N54L without loving with the 3.3V pin. That said, I am running a modded bios, though I don't know whether that makes any difference. Kreeblah fucked around with this message at 00:40 on Sep 14, 2019 |
# ? Sep 14, 2019 00:34 |
|
Before I just go ahead, is there anything wrong with the QNAP TS-453Be?
|
# ? Sep 14, 2019 01:31 |
|
WTF is Qtier? Does it have redundancy as an option/built-in or is it "JBOD but we move ur poo poo around based on last-opened date"?
|
# ? Sep 14, 2019 01:40 |
|
sockpuppetclock posted:Before I just go ahead, is there anything wrong with the QNAP TS-453Be? No, it's pretty decent. Schadenboner posted:WTF is Qtier? Does it have redundancy as an option/built-in or is it "JBOD but we move ur poo poo around based on last-opened date"? No, you have storage pools, so you might have 5 hard disks in a RAID 5/6 and a pair of SSDs in RAID 1. So you have typical redundancy; in fact I don't think you can enable Qtier on a single SSD. The access priority is a bit more sophisticated than last-opened date, as it looks at actual IO use over time. https://www.qnap.com/solution/qtier-auto-tiering/en-us/
|
# ? Sep 14, 2019 21:42 |
|
Goddamn, with 9 dicks crammed into it I'll have to call it "Yr Mum, lol" if I get one of those, and I don't know if you can even have commas in device names? Wondering if the ARM one will have enough oomph, especially since the AMD option is only like 60 bucks more and the Intel is 20 bucks more than that. Good find though.
|
# ? Sep 14, 2019 22:24 |
|
pre:
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       1
...
  9 Power_On_Hours          0x0032   032   032   000    Old_age   Always       -       49964
|
# ? Sep 15, 2019 03:31 |
|
I love my Unraid for loving around with homelab stuff. It's got a nice clean interface for QEMU/KVM and works pretty well passing real hardware to VMs, including graphics cards. The app store of ready-made Docker containers is just plain nice too.
|
# ? Sep 15, 2019 07:06 |
ChiralCondensate posted:
code:
|
|
# ? Sep 15, 2019 12:25 |
|
D. Ebdrup posted:
I wish I had the reports from the Synology that I built for my old job. Those drives had something like 95k hours when we finally retired the unit. IIRC, they were 1TB WD Greens. I've got a D-Link DNS-323 that's been running non-stop for 10 years, but no way to run SMART diagnostics on the drives (Samsung 1TB EcoGreen F1's). I'd bet they're at nearly 90k.
|
# ? Sep 15, 2019 15:16 |
|
What is the best way you would all recommend to back up possibly up to 18TB? The system is ZFS, so send/receive is possible, but would just an rsync work? The cold storage is another question. Should it be something like a temp PC with 3 10TB drives in RAIDZ1 (2 data + 1 parity), or would a plain striped pool be fine since the drives would mostly be off? I wish I could create a new zpool from the current ZFS pool and link the cold storage as a mirror, but I'm not sure I could put up with the constantly degraded state. Probably snapshots. The two options for writing to the cold storage disks are either a spare PC with all of them mounted, snapshotting/syncing over SSH with the built-in zfs tools, or connecting them all via USB/eSATA on the main system - but I would have to mount a controller card to do the eSATA thing, unless it's possible to create the new zpool on the spare PC and maybe connect the drives one at a time so each can get its part.
|
# ? Sep 15, 2019 18:19 |
There's so little difference between rsync and zfs send when you're doing the first bulk transfer that whatever difference there is gets lost in the standard deviation of TCP over even a short bit of ethernet cable. The advantage zfs send has is that it can be incremental and can keep state - so subsequent transfers are MUCH quicker, and if it gets interrupted it won't have to start from the beginning. I would personally lean towards the RAIDZ1 option, since even if you're not actively using the disks, you don't know at which point of the disks' bathtub curve they're going to fail (it's the paradox of all disk-based storage: you can't know when a disk is going to fail or be decommissioned until it has). Remember that if your system has three USB ports free (or room for a multi-port USB3 controller in a PCIe slot), it's perfectly possible to connect the disks via USB, back up to them that way, and disconnect the disks when they're not in use. The only reason to avoid USB in day-to-day usage is that the connectors can be a little fragile, and it sucks to end up with a faulted pool because a connector came loose.
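The incremental workflow described above might look like this as a sketch - pool and snapshot names here are made up for illustration:

```shell
# First run: send a full stream of a recursive snapshot
zfs snapshot -r tank@backup-1
zfs send -R tank@backup-1 | zfs receive -Fu coldpool/tank

# Later runs: only ship the blocks changed since the previous snapshot
zfs snapshot -r tank@backup-2
zfs send -R -i tank@backup-1 tank@backup-2 | zfs receive -u coldpool/tank
```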
|
|
# ? Sep 15, 2019 18:44 |
|
Hmmm. Question for the ZFS folks in here. I swapped out all my drives in my array with some shucked 10TB Easystore drives (WD100EMAZ disks). However, I expected FreeBSD to just take care of setting them up properly when I did a zpool replace, so I didn't wipe any of the partitions before doing that. Now, when I boot, I get messages like: code:
code:
My only thought here is that the ZFS resilver repartitioned the disks with a GPT scheme, but didn't wrap them in a FreeBSD-friendly layout. If so, that probably means my first thought, to create a GPT partition and resilver each disk again, might actually require enough space just for the partition layout that ZFS wouldn't let me use the disk for it. If that's the case, I might just let it ride until the next time I upgrade. Kreeblah fucked around with this message at 09:32 on Sep 16, 2019 |
# ? Sep 16, 2019 09:17 |
Kreeblah posted:about ZFS It sounds like you set it up for ZFS to use raw disks rather than partitions? How were the previous disks set up? Unless you've got a separate (pair of?) boot disk(s), you cannot use ZFS on the whole disk, as the firmware won't know how to boot from the disks. Assuming you have partition information on the old disks, what you need to do is use gpart to set up a similar set of partitions on the new disk, then run zpool replace against the ZFS partition you created with gpart instead of the whole disk. The reason ZFS can't deal with partition layouts is that there are a LOT of ways to lay out partitions depending on what platform the system is on, so ZFS can't assume any one platform. On Solaris, OpenBoot contained enough information in the systems' firmware to be able to read ZFS whole-disks, so it wasn't an issue there. It's one of the few gotchas with ZFS that still hasn't been completely ironed out, because there simply isn't any one solution for it. BlankSystemDaemon fucked around with this message at 10:14 on Sep 16, 2019 |
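As a sketch of that procedure on FreeBSD - the device names, partition index, and GPT label below are illustrative, not from the original post:

```shell
# Copy the partition scheme from a surviving old disk (ada0) onto the
# new disk (ada4), then replace using the partition, not the raw disk
gpart backup ada0 | gpart restore -F ada4

# Optionally give the ZFS partition a GPT label (index 2 assumed here)
gpart modify -i 2 -l zfs-new0 ada4

# old-disk-id is whatever "zpool status" shows for the failed member
zpool replace tank old-disk-id gpt/zfs-new0
```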
|
# ? Sep 16, 2019 10:06 |
|
I was going to post how I always just point ZoL at whole disks and never set up manual partitions, but decided to check the partition table of one just now.code:
|
# ? Sep 16, 2019 18:28 |
Well, it's perfectly possible to use raw disks with OpenZFS, you just need somewhere to store the boot information. A UEFI firmware with 512KB of programmable space on the SPI flash would be enough, since .EFI files are written in C and it's not difficult to make a loader that way; FreeBSD's standard loader in its EFI binary takes up 447KB as of 12.0-RELEASE, and has support for ZFS and boot environments. There's even room in the OpenZFS implementation to put loader information on-disk, in case there's no option for modifying the firmware and you need to support the INT 0x13 disk calls that BIOSes make. It just hasn't been taken advantage of yet, since there's no agreed-upon standard for how it should be used - but perhaps that's one of the things that can be improved upon with the new OpenZFS repo where everything is being unified.
|
|
# ? Sep 16, 2019 19:02 |
|
D. Ebdrup posted:What's the output from 'gpart show ada0', and what output do you get from 'file -s /dev/ada0*'? For both the old drives and the new ones, please. I did this for ada4 since that's an easier disk to swap (no caddies to gently caress with). It's the same info for all the disks, though. Old: code:
code:
As far as booting goes, I have a separate UFS drive, so that's not impacted here.
|
# ? Sep 17, 2019 01:45 |
.nop devices are GEOM devices, so this makes no sense at all. I'm really not sure what the gently caress is up with it. Does 'camcontrol devlist' list the devices just fine?
|
|
# ? Sep 17, 2019 09:00 |
|
D. Ebdrup posted:.nop devices are GEOM devices, so this makes no sense at all. I'm really not sure what the gently caress is up with it. Does 'camcontrol devlist' list the devices just fine? Yup. code:
|
# ? Sep 17, 2019 17:01 |
|
I have a zpool that consists of two HP EX920 NVMe drives in a simple configuration (each disk as a top-level vdev). I am getting really, really terrible performance off the pool... like deleting 114 files took 2-3 minutes. Moving 1 TB between two datasets on the pool, it took 20 minutes to move like 50 GB. Copying off, it takes a couple seconds to spin up to 100 MB/s of native gigabit ethernet throughput (single direction only). I think part of the problem might be that I ran the pool raw for a while before creating a specific dataset? Like maybe ZFS makes some assumptions that the root dataset on a pool is not particularly big and can be pinned into cache or something? zfs get all output (the one filesystem I created at the end is excluded): code:
code:
edit: right now I am getting 16 MB/s sustained read off it... with no contention. (pay no attention to specific disk sizes, I'm moving stuff off the pool so I can try destroying and recreating it) edit2: also this is a specific affliction that seems to come on with longer system uptimes... reboot and it goes away. Paul MaudDib fucked around with this message at 06:38 on Sep 21, 2019 |
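Before destroying the pool, it may be worth watching where the latency actually sits - a few stock diagnostics (the arcstats path assumes ZFS on Linux):

```shell
# Per-vdev throughput and latency while reproducing the slow reads
zpool iostat -v 5

# Any resilver, scrub, or accumulating errors that would explain it?
zpool status -v

# On ZoL, check whether the ARC has collapsed at high uptime
grep -E '^(size|hits|misses)' /proc/spl/kstat/zfs/arcstats
```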
# ? Sep 21, 2019 05:12 |
I've had dying harddrives cause arrays to exhibit exactly that kind of behaviour despite not showing any rising S.M.A.R.T. attributes (all of which were tracked with collectd) - so it may be worth trying to determine if one of the disks is failing? I'm not sure how you managed to run a zpool without a dataset; as far as I know datasets are either filesystems or volumes, and you can't really store anything on ZFS without using one or the other (writing to the pool's root is still writing to a dataset - the root filesystem that gets created with the pool).
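Attribute tables aside, a long self-test plus the device error log sometimes catches a dying disk that still reports clean attributes - a sketch, with placeholder device names:

```shell
# Kick off long SMART self-tests on each pool member
for d in /dev/sda /dev/sdb; do
    smartctl -t long "$d"
done

# Hours later: the self-test and error logs often show what the
# attribute table doesn't
smartctl -l selftest /dev/sda
smartctl -l error /dev/sda
```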
|
|
# ? Sep 21, 2019 12:27 |
|
D. Ebdrup posted:There's so little difference between rsync and zfs send when you're doing the first bulk transfer that whatever difference there is gets lost in the standard deviation of tcp over even a short bit of ethernet cable. The advantage zfs send has is that it can be incremental and can keep state - so subsequent transfers are MUCH quicker, and if it gets interrupted it won't have to start from the beginning. Thanks for your recommendation! Went RAIDZ1 with 3 10TB drives. Shucked 2 of them from WD Elements units and got 1 from Newegg. I had a new 2-drive USB dock, but I couldn't find the thick USB connector for the 1-drive dock I had before, so I just used the board that came with one of the Elements drives to connect it via USB. Put both USB cables in a hub and connected it to the front panel, since it looks like any board-based USB on the system should be USB3. Made the snapshot, made a zpool/zfs on the backup drives, and ran the following command code:
Looks like all the traffic going across one hub was too much, even though the cable it used was blue, suggesting SuperSpeed (or at least USB3). Separated the 2-drive dock and the 1-drive dock (since I just found the thick USB connector cable) out into their own USB ports in the back, and now I'm getting 200-300MB/s write transfers.
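For anyone following along, a send/receive to a locally-attached backup pool is presumably something in this shape - pool and snapshot names are invented, and pv is optional but displays the MB/s figures being quoted:

```shell
# Replicate the snapshot to the USB-attached backup pool;
# pv just shows throughput as the stream passes through the pipe
zfs send -R tank@coldbackup | pv | zfs receive -Fu backup/tank
```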
|
# ? Sep 22, 2019 05:32 |
|
8TB Easystore down to $121.09 at B&H. Is that the cheapest? I don't remember seeing under $129.99 for those previously but maybe I forgot a black friday deal. https://www.bhphotovideo.com/c/prod...a338d3381850INT
|
# ? Sep 22, 2019 09:32 |
|
Rexxed posted:8TB Easystore down to $121.09 at B&H. Is that the cheapest? I don't remember seeing under $129.99 for those previously but maybe I forgot a black friday deal. Looks like it's over; I'm seeing it as a $139.99 Elements. But yeah, that would've been a great price!
|
# ? Sep 23, 2019 01:06 |
|
Posting this here since it's so Unraid specific and also it's very long, so sorry. After Crashplan Home was discontinued, I took the Crashplan Small Biz discount for a year and now it's expired. So I just started setting up an Unraid server and installed a Duplicacy-Web GUI docker. I'm not sure I'm setting everything up correctly. The guide on the Duplicacy website only shows how to set up a local folder backup. My backup needs are: Mainly just one cloud-mirrored folder on my Unraid server that catches 1) everything I drag and drop (most important stuff) 2) Windows desktop automated backups pushed into that Unraid folder 3) important local Unraid config files that the CA Backup plugin is backing up. I am not backing up any media bullshit except for photos. This is also the first docker I've set up from "scratch," i.e., not checked out from Community Applications. I gave it the extra parameter "--hostname <randomnumbers>", which is what the docker passed to the Duplicacy customer license page on first activation so I just copied it. It seems to be persisting through docker restarts so I think that's fine? The docker may also be doing some machine-id thing that I don't understand. I gave it the Config, Log, and Cache host paths it needs and then for access, I just gave it the whole "mnt/user" path, so that it could see every Unraid share. Not sure if that's bad. I think that was it for docker setup besides the host/container ports. Not sure if it needs anything else. The first thing I did in the web GUI was put in my Backblaze B2 bucket as a Duplicacy storage location (Same thing as repo? Web GUI doesn't seem to use the same terminology as CLI.): "b2_bucket". Then I made a new local Unraid share with the folder "backup_to_b2". Whenever it asked if I wanted to encrypt, I did. Then I went to the "Backup" tab and created a backup from the folder "backup_to_b2" to the "b2_bucket" in the cloud. It ran fine and then I deleted my test files and restored them fine. 
BUT, from reading around, it seems like most people only run backup jobs locally to a local storage and then copy that to the cloud. So would the correct setup be like this?
1. Make a new local Duplicacy storage on the Unraid array, named "Main_Duplicacy_Backups" 2. Make a backup job that backs up my "backup_to_b2" folder to this new "Main_Duplicacy_Backups" storage 3. Make one or two new backup jobs that back up the various important folders in Unraid to "Main_Duplicacy_Backups" 4. I'll probably install a 2nd Duplicacy license on my Windows machine and then back up poo poo like my Windows documents, appdata, photos, etc. to the "backup_to_b2" folder on the network (with a bunch of exclusions to weed out crap like .exe's etc.). 5. Make a copy job that copies "Main_Duplicacy_Backups" to "b2_bucket". I'm also not sure if I understand the best practices for backup, check, prune, etc. Assuming I want "Main_Duplicacy_Backups" copied to the cloud at 12:00am daily, that means all of my backup jobs on all my machines run before that (maybe hourly, I dunno), then a check job on the local "Main_Duplicacy_Backups", then the copy to the cloud "b2_bucket", then a check job on the cloud "b2_bucket", and then once a week, after all of the above, a prune job (with the default settings) on just "Main_Duplicacy_Backups".
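Roughly, in Duplicacy CLI terms that schedule looks like the sketch below - the web GUI runs the same operations; storage names are the ones from the post, and the -keep values are just Duplicacy's documented example, not a recommendation:

```shell
# Nightly, in order: back up, verify, copy to B2, verify the copy
duplicacy backup -storage Main_Duplicacy_Backups
duplicacy check -storage Main_Duplicacy_Backups
duplicacy copy -from Main_Duplicacy_Backups -to b2_bucket
duplicacy check -storage b2_bucket

# Weekly: prune only the local storage
duplicacy prune -storage Main_Duplicacy_Backups \
    -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7
```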
|
# ? Sep 23, 2019 19:08 |
|
i’m eventually gonna need more space on my DS218+. are 16tb drives gonna keep being the largest size available for a while, or are larger drives coming soon to drive the price of the 16tb drives down? alternatively, would i be better off buying the 5-bay DX517 expansion and filling it with smaller drives i have on hand, or just selling the DS218+ and getting a larger unit? a DS418 is $360, while a DX517 is $450. would i be able to use the DX517 with future, faster versions of the DS line?
|
# ? Sep 25, 2019 09:59 |
|
TenementFunster posted:i’m eventually gonna need more space on my DS218+. are 16tb drives gonna keep being the largest size available for awhile, or are larger drives soon to come down to drive the price of the 16tb drives down? WD claims 20TB disks will be out next year, but there's no certain way to tell how it will impact pricing: https://www.techradar.com/news/you-will-be-able-to-buy-a-20tb-hard-drive-in-2020
|
# ? Sep 25, 2019 10:54 |
|
The DS218+ can't do expansion bays - you can tell because 2 drives is its max. If you had the DS718+, it could accept expansion bays (2+5). https://www.synology.com/en-us/products/DS218+#specs Versus: https://www.synology.com/en-us/products/DS718+#specs Also, if you are going to move to a new unit, keep within the same series: for example, instead of the DS418 you would want the DS918+ - same number of drives, but expandable, and since it is a plus series it has access to the same applications as the DS218+. The DS418 doesn't have access to a lot of the "business" rated applications, so if you are using them, they will suddenly be gone. If you have an app you absolutely need, check https://www.synology.com/en-us/dsm/packages - they list every package and the models that can use them.
|
# ? Sep 25, 2019 11:29 |
|
Axe-man posted:The DS218+ can't do expansion bays. You can tell cause it just has 2 which is the max drives. If you had the DS718+ it could accept expansion bays (2+5).
|
# ? Sep 25, 2019 16:16 |
|
Rexxed posted:WD claims 20TB disks will be out next year, but there's no certain way to tell how it will impact pricing:
|
# ? Sep 25, 2019 17:29 |
|
What are good 8+ drive cases that can do hotswap? I want to retire my Synology 1813+ and build 1 system that handles both transcoding/downloading and NAS functionality. I already have a powerful enough system to do it, but I don't know which case and which backplane/SATA expansion cards to get.
|
# ? Sep 25, 2019 18:11 |
|
If you want 8+ bays that are hotswap and not just 8+ bays total with a few being hotswap, I'd look at rackmount cases. Rosewill has some that do 12-15 drives and use 120mm fans so they aren't screamingly loud, and you can always just set it on its side if what you really want is a tower. I had to upgrade recently because I wanted at least 9 drives total and even without the hotswap requirement, once you go past 8 your inexpensive options are pretty limited.
|
# ? Sep 25, 2019 18:19 |
|
CopperHound posted:Sort of... You can use it with ds218+, but it has to be a separate volume: https://www.synology.com/en-us/know...Expansion_Units
|
# ? Sep 25, 2019 19:09 |
|
Greetings from the past. I'm still catching up on a few months of the thread I missed so sorry if this was covered recently... I have a 4 core Xeon with a SuperMicro board w/IPMI that's awesome. The biggest problem is that it's limited to 32 gigs of RAM. I use it as a ZFS server and my docker homelab. I have 2 SSDs for boot plugged into the on-board sata ports, then 8 more drives plugged into the built in LSI SAS ports, and another 8 in a JBOD external array plugged into a different LSI card. What I'd like to do is to move to a Ryzen 3900 with 128GB ram, but I don't see a good motherboard that would give IPMI or more than 6 ports. It seems like I'd need to get a second LSI card and flash it to IT mode. That said does anyone have a recommendation for a Ryzen compatible motherboard that they'd use in my case? Or another solution to increase the amount of memory for the least amount of money?
|
# ? Sep 26, 2019 00:46 |
|
Hughlander posted:Greetings from the past. I'm still catching up on a few months of the thread I missed so sorry if this was covered recently... I haven't done a lot of research but I suspect your issues are that the Ryzen CPU is a desktop line of CPUs but things like IPMI or extra sata controllers/ports are typically features for workstation or server boards. Those are going to be Threadripper for workstation or EPYC for server. They also won't be too cheap unless you're looking at older models and older is only a couple of years at this point.
|
# ? Sep 26, 2019 01:56 |
|
Hughlander posted:I have a 4 core Xeon with a SuperMicro board w/IPMI that's awesome. The biggest problem is that it's limited to 32 gigs of RAM. What CPU and mainboard are you using that's limited to 32 gigs of RAM? You might be better off replacing the mainboard if it's slot-limited or density-limited; I'm having a hard time remembering how far back you'd have to go to get a Xeon that's limited to 32 gigs of RAM.
|
# ? Sep 26, 2019 05:07 |
|
|
It's probably Socket 1150 Haswell. I have the same "problem". My solution is to keep that box as mostly a storage server and build another one for VMs.
|
# ? Sep 26, 2019 05:15 |