Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast
I'm getting dangerously close to tight on my main 8x8tb Synology and looking at adding an expansion unit (DX517)

What's the best value in shuckables these days?

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
8tb WD externals. Or 10tb. But the 8s have better price per TB. Usual price is like $160.

Atomizer
Jun 24, 2007



Sniep posted:

I'm getting dangerously close to tight on my main 8x8tb Synology and looking at adding an expansion unit (DX517)

What's the best value in shuckables these days?

It's generally the WD Easystore/Elements/MyBook. Typical sale prices are 6/$100, 8/$130, 10/$160, and now 12/$200. You actually just missed out on a promo Amazon had for a few weeks, 15% credit (off of those aforementioned prices) if you have Prime.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast

sharkytm posted:

8tb WD externals. Or 10tb. But the 8s have better price per TB. Usual price is like $160.

Atomizer posted:

It's generally the WD Easystore/Elements/MyBook. Typical sale prices are 6/$100, 8/$130, 10/$160, and now 12/$200. You actually just missed out on a promo Amazon had for a few weeks, 15% credit (off of those aforementioned prices) if you have Prime.

Yeah ok so it's still 8s huh

I was hoping 12s were occasionally hitting best buy sales or something lol

e: I would be happy with $200 12tbs. How long ago was the last one of that deal?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I’d argue that the 8TB are no longer optimal when you consider the overhead cost per bay.

iirc you can get the 14TB WD externals for $170 or so which seems pretty sensible compared to 8TB at $105 per drive or something similar.

If you’ve got space and you’re outgrowing an 8-bay NAS it might be time to think about a rackable disk shelf. You can still get a NetApp DS4243 for about $300 loaded with 24x500 GB SAS drives. Replace the IOMs to let you use a standard SAS controller card and you’re at about $400 for 24 bays.

Paul MaudDib fucked around with this message at 07:24 on Dec 25, 2019

BeastOfExmoor
Aug 19, 2003

I will be gone, but not forever.

Sniep posted:

Yeah ok so it's still 8s huh

I was hoping 12s were occasionally hitting best buy sales or something lol

e: I would be happy with $200 12tbs. How long ago was the last one of that deal?

Best Buy has 12TB drives for $180 through Google Express at this moment:
https://www.google.com/shopping/pro...ontent=13713149

Externals have all been basically hovering around $15/TB this year with very occasional dips just under that.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast

Paul MaudDib posted:

iirc you can get the 14TB WD externals for $170 or so

now this i could get into.

Paul MaudDib posted:

If you’ve got space and you’re outgrowing an 8-bay NAS it might be time to think about a rackable disk shelf.

gonna stop ya right here, this is currently in my living room media unit so im adding another 5 bays (DX517) into my existing DS1817+, there's no rack mount anything going on here at least for the foreseeable future.


BeastOfExmoor posted:

Best Buy has 12TB drives for $180 through Google Express at this moment:

14tb for $170 sounds better, but thanks, good to know that this is all available

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I guess it was 12TB for $180 and 14TB for $200 but, point stands, the bays cost you something too and filling your bays with 8TB drives in 2020 is like filling your bays with 4TB drives in 2017.


https://reddit.com/r/DataHoarder/comments/ed6grj/wd_easystore_8_12_and_14tb_for_130_180_and_200/

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast
yeah no, i only intend on using 12s or 14s, so i guess i just wait until they go on sale again now?

my existing is 8x8 and that's ... gonna get fixed after i fill up this 5 bay addon unit

I absolutely get the per-bay cost part of this equation lol

Atomizer
Jun 24, 2007



Sniep posted:

Yeah ok so it's still 8s huh

I was hoping 12s were occasionally hitting best buy sales or something lol

e: I would be happy with $200 12tbs. How long ago was the last one of that deal?

It's indeed been Best Buy (either directly or via their eBay store) with the 12/14 TB external sales, aside from first-time Google Express offers. I had to look back at sale prices and I was wrong; about a week ago the 14 TB was actually $200, with the 12 for $180, and the latter has been that price a few times over the past month. Consider those the new sales on high-capacity drives, so if you're in the market just hang tight, I'll post them here.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast

Atomizer posted:

It's indeed been Best Buy (either directly or via their eBay store) with the 12/14 TB external sales, aside from first-time Google Express offers. I had to look back at sale prices and I was wrong; about a week ago the 14 TB was actually $200, with the 12 for $180, and the latter has been that price a few times over the past month. Consider those the new sales on high-capacity drives, so if you're in the market just hang tight, I'll post them here.

word, i've got like 7tb free i should be good for a decent while, just all the dials are RED OR ORANGE now DANGER WILL ROBINSON

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
I'm looking for 10s, I've got 6 and need 2 more to build a new array. Post em if you got em!

IOwnCalculus
Apr 2, 2003





Thermopyle posted:

It seems like he got caught up in making all these PPA's without understanding how OSS has always worked.

He did add me to the private PPA quickly when I requested, but apparently as of a few days ago he opened the public ones back up with a different salty-as-gently caress message.

PRADA SLUT
Mar 14, 2006

Inexperienced,
heartless,
but even so
Question on OS choice:

Use case is general NAS things, Plex things, TimeMachine network backups, Bittorrent sync, and running Docker containers for training TensorFlow models (Docker and a browser).

movax
Aug 30, 2008

priznat posted:

We are doing drive performance testing on arrays at work and found a big difference when filling all memory channels and, welp!

And yes we invalidate cache etc so it isn’t just local caching everything into RAM for the cache :haw:

Array of 16 Gen3 x4 nvram drives connected via switch :getin:

Which PLX switch is that running on! 8748? 8796?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

Which PLX switch is that running on! 8748? 8796?

Not a PLX :mrgw:

Microchip/microsemi 100 lane-er

movax
Aug 30, 2008

priznat posted:

Not a PLX :mrgw:

Microchip/microsemi 100 lane-er

Oh man, I forgot about those. Went to a training for the Switchtecs in San Jose a few years back, I thought the product wasn’t quite as mature as the 3rd gen PLX stuff.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

Oh man, I forgot about those. Went to a training for the Switchtecs in San Jose a few years back, I thought the product wasn’t quite as mature as the 3rd gen PLX stuff.

They’re in a lot of stuff now, but mostly big design-in stuff rather than motherboards and add-in cards, though there are some available!

Like these: https://www.serialcables.com/product/pci-ad-x16he-m/

Also the gen4 ones are coming to production very soon!

fatman1683
Jan 8, 2004
.
What are the largest drives that would be considered 'safe' to use in an 8-drive, RAIDZ2 vdev? Planning to finally get off my rear end and build a FreeNAS box this spring.

IOwnCalculus
Apr 2, 2003





Any. A single unrecoverable block during a rebuild doesn't nuke an entire ZFS array like it would with other RAID solutions.

H110Hawk
Dec 28, 2006

IOwnCalculus posted:

Any. A single unrecoverable block during a rebuild doesn't nuke an entire ZFS array like it would with other RAID solutions.

This is disingenuous. There is a point where the rebuild time for 0 data loss gets long enough that the statistical likelihood of a second (or third) disk going during it stops being negligible. Especially if you can't expand vdevs, meaning you're batching drives, so it's more likely you'll be rebuilding towards the end of life of your disks.

That being said it's a larger number than you're imagining, and it's a function of the device/density ratio, along with your sanity waiting for the rebuild to happen. I would try to think of it in common consumer enclosure sizes - 4, 8, I'll be generous and say 12. But the rebuild time on a 12x14tb array is going to be extraordinarily long. I am already a little nervous about my 8 disk array.

BlankSystemDaemon
Mar 13, 2009




There's also a calculator that can do Mean Time To Resilver and Mean Time To Data Loss calculations based on Mean Time Between Drive Failure and Mean Time To Physical Replacement.
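
For a rough feel of the math involved, the classic double-parity approximation is MTTDL ≈ MTBF³ / (N·(N−1)·(N−2)·MTTR²). A back-of-the-envelope sketch in shell (all numbers are hypothetical, and real calculators also model resilver time growing with drive capacity and URE probability, which drags the result way down):

```sh
# rough double-parity (raidz2) MTTDL approximation; every number here is hypothetical
# MTTDL ~= MTBF^3 / (N * (N-1) * (N-2) * MTTR^2)
N=8                # drives in the vdev
MTBF=1000000       # vendor-quoted MTBF per drive, in hours
MTTR=48            # hours to notice the failure, swap the drive, and finish the resilver
echo "($MTBF^3) / ($N * ($N-1) * ($N-2) * $MTTR^2) / 8760" | bc
# prints a ballpark MTTDL in years
```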

Crunchy Black
Oct 24, 2017

by Athanatos

fatman1683 posted:

What are the largest drives that would be considered 'safe' to use in an 8-drive, RAIDZ2 vdev? Planning to finally get off my rear end and build a FreeNAS box this spring.

Nothing for something you can't lose forever.

Raid/ZFS is not backup.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Crunchy Black posted:

Nothing for something you can't lose forever.

Raid/ZFS is not backup.

Are we ever going to stop saying this?

It's obvious to any person with brain cells that the only copy of your data is not a backup

movax
Aug 30, 2008

HalloKitty posted:

Are we ever going to stop saying this?

It's obvious to any person with brain cells that the only copy of your data is not a backup

The universe will always provide a better idiot...

fatman1683
Jan 8, 2004
.

D. Ebdrup posted:

There's also a calculator that can do Mean Time To Resilver and Mean Time To Data Loss calculations based on Mean Time Between Drive Failure and Mean Time To Physical Replacement.

Thanks for this. With drive MTBFs in the hundreds of thousands of hours, it seems like I'd have to get up into two-digit numbers of 10TB drives before MTTDL drops into the ~10 year range, which I feel pretty ok about.

I've also been looking at Xpenology, doing Btrfs over SHR-2, anyone have opinions about this setup? I'm mostly interested in the easy expandability (ZFS is probably still a couple of years away from having that in stable) and the more-refined user experience provided by a 'commercial' product.

Crunchy Black
Oct 24, 2017

by Athanatos

HalloKitty posted:

Are we ever going to stop saying this?

It's obvious to any person with brain cells that the only copy of your data is not a backup

my *non-necessary gender-binary* person,

have you seen the kind of bullshit idiocy we've seen come in this thread? Just covering our bases cause OP left a lot of them open.

e: that said with the above seems they know up from down, still. Bases. Cover them. IMO.

IOwnCalculus
Apr 2, 2003





H110Hawk posted:

This is disingenuous. There is a point where the rebuild time for 0 data loss gets long enough that the statistical likelihood of a second (or third) disk going during it stops being negligible. Especially if you can't expand vdevs, meaning you're batching drives, so it's more likely you'll be rebuilding towards the end of life of your disks.

How long are you guys seeing for rebuilds? My array is a cluster gently caress I've documented in here before because apparently I love abusing ZFS, but even that mess rarely has rebuild times exceeding 24-48 hours.

If your drives are so fragile you don't trust them for that, especially in a raidz2 array, it'd be faster and safer to build a new array and migrate the data.

The biggest fear I always had with rebuilds on large drives was a random URE on a "healthy" drive, because the expected URE rate is in the ballpark of a drive capacity. Same reason everyone was calling RAID5 dead a decade ago. ZFS can't save your data in that scenario but it will flag the bad data and not kill your array.
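
For scale, the commonly quoted consumer drive spec is one unrecoverable read error per 10^14 bits, which is why "ballpark of a drive capacity" holds for 10-14 TB drives:

```sh
# typical consumer spec: 1 URE per 10^14 bits (enterprise drives are usually rated 10^15)
echo '10^14 / 8 / 10^12' | bc -l    # bits -> bytes -> TB: ~12.5 TB read per expected URE
```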

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry

Crunchy Black posted:

Nothing for something you can't lose forever.

Raid/ZFS is not backup.

The day I stop seeing hospitals come to me for help saying that "buttt it was in a raid isn't that my backup" is the day I stop saying it.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Crunchy Black posted:

my *non-necessary gender-binary* person,

have you seen the kind of bullshit idiocy we've seen come in this thread? Just covering our bases cause OP left a lot of them open.

e: that said with the above seems they know up from down, still. Bases. Cover them. IMO.

I'm not having a go at you, just to make that clear;

Crunchy Black
Oct 24, 2017

by Athanatos

HalloKitty posted:

I'm not having a go at you, just to make that clear;

Nor I you! Call it a lovely pseudo snipe on my part lol

IOwnCalculus posted:

How long are you guys seeing for rebuilds? My array is a cluster gently caress I've documented in here before because apparently I love abusing ZFS, but even that mess rarely has rebuild times exceeding 24-48 hours.

If your drives are so fragile you don't trust them for that, especially in a raidz2 array, it'd be faster and safer to build a new array and migrate the data.

The biggest fear I always had with rebuilds on large drives was a random URE on a "healthy" drive, because the expected URE rate is in the ballpark of a drive capacity. Same reason everyone was calling RAID5 dead a decade ago. ZFS can't save your data in that scenario but it will flag the bad data and not kill your array.

I've never had a resilver take more than 12 hours but I also run pretty beefy hardware. That's always my concern when doing it as well--most of my data is pretty cold so what happens when I do need to rebuild and it hits a [potentially bad] sector that hasn't been touched in maybe years? (obviously I know the answer in ZFS speak here, just narrating internally) Some of these .isos are pretty difficult to find these days.

To this end I'm reevaluating my stack and I'm probably going to move to a more tiered setup that has a similarly sized ZFS2 that sits idle and gets spun up and cronned every week just for sanity's sake. But I don't want to do a full copy, just a sanity check on maybe 10% of files, and obviously not the same files every time. Is there a better utility/way to do this?

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Crunchy Black posted:

To this end I'm reevaluating my stack and I'm probably going to move to a more tiered setup that has a similarly sized ZFS2 that sits idle and gets spun up and cronned every week just for sanity's sake. But I don't want to do a full copy, just a sanity check on maybe 10% of files, and obviously not the same files every time. Is there a better utility/way to do this?

Snapshots. I use a Zraid1 that is mostly offline to store the snapshot of my main zraid2. The best thing about this is you can create dated snapshots and store only the relative changes between dates with references.

The first send will be huge and take forever. Over the next month, if 99% of the data didn't change, the next snapshot only takes up the space of the 1% that did; the fact that the older files are still there (or were deleted) is just metadata and doesn't take any more space.

Even if you just move files around to different directories within the same dataset, it costs almost nothing extra, because the blocks themselves haven't changed and the snapshot can still refer to them.


https://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/

You can use zfs send and recv between ZFS systems to ship the snapshots, and choose whether to do an incremental or a full backup.
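
A minimal sketch of that workflow (pool and dataset names are made up):

```sh
# first replication: full snapshot, big and slow
zfs snapshot tank/media@2020-01
zfs send tank/media@2020-01 | zfs recv backup/media

# later: incremental send only ships blocks changed since the previous snapshot
zfs snapshot tank/media@2020-02
zfs send -i tank/media@2020-01 tank/media@2020-02 | zfs recv backup/media
```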

BlankSystemDaemon
Mar 13, 2009




There's also a little trick that "they" don't tell you about, whereby if your database can fit into system memory (meaning you aren't using the RDBMS caching), you can prime the ARC with its contents by doing zfs send > /dev/null.

Also, it's raidz, not zraid.
Because terminology really matters to me.
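
In shell terms, that warm-up trick is roughly the following (dataset name made up; zfs send needs a snapshot to read from):

```sh
# stream the whole dataset once so its blocks land in the ARC; the stream itself goes nowhere
zfs snapshot tank/db@warm
zfs send tank/db@warm > /dev/null
zfs destroy tank/db@warm
```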

Hughlander
May 11, 2005

That’s pretty much what I do. I use syncoid to send snapshots to another local server for some datasets. Then others go to a cloud server on digital ocean with a few hundred gigs of attached volumes. Then daily a full backup using duplicacy to glacier and finally a hot backup to google drive using rclone.

Probably overkill doing glacier and drive but there you go. The rclone is also mounted on Windows with the google caching drive system.
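
If anyone wants to copy that pattern, the local and cloud legs look roughly like this (hosts, remotes, and dataset names are all made up):

```sh
# replicate a dataset's snapshots to another box (syncoid wraps zfs send/recv over ssh)
syncoid tank/photos root@othernas:backup/photos

# push a "hot" copy to Google Drive via an rclone remote named gdrive
rclone sync /tank/backups gdrive:backups
```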

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I've got the itch to upgrade my current NAS/lab off my current E3-1230v1. I'd like 8c/12t and 64GB RAM more or less as cheaply as possible. I've been snooping around eBay deals on E5-2650v2s, which would let me stay with much cheaper DDR3. Between the extra cost of the motherboard and RAM, upgrading to an E5-26xxv3 would probably be $200+. Is the performance difference really that notable? Similarly, are there any pre-builts that I should be looking at that would get me the same bulk parts cheap? I know there's usually a few Dell and HP workstation-class units, but I'm not sure which ones would be ideal in the Haswell-ish era.

This will largely be for ZFS storage, Plex, and a smattering (6?) of small-footprint VMs for poo poo like bittorrents, maybe pfsense if I get off my rear end, etc., and 2-3 VMs for dicking around with. This probably means I'll need to go with ESXi or similar as the hypervisor, since using my current FreeNAS OS for that sort of thing sounds like an exercise in frustration.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

fatman1683 posted:

I've also been looking at Xpenology, doing Btrfs over SHR-2, anyone have opinions about this setup?

I just started fiddling around with Xpenology a couple days ago, and ran into some problems creating SHR-2. Most of the guides and downloads I've found channeled me down the path of making DSM think the hardware is a DS3615xs, which is one of Synology's enterprise oriented boxes, and for whatever reason Synology disables creation of SHR-2 arrays on their enterprise line. Furthermore, although you're supposed to be able to edit a text config file in /etc to work around this (supposedly works even on real Synology enterprise hardware), so far I've been unable to get that to work.

I'll retry soon, but set up for a different consumer line DiskStation model.

Crunchy Black
Oct 24, 2017

by Athanatos
Ah yeah I figured I was overthinking this, thanks!

DrDork posted:

Want to upgrade from a consumer platform to a more server-y one.

Going to run some VMs.

Moving from Sandy/Ivy Bridge (v1/v2) to Haswell (v3) or Broadwell (v4) will net you a decent increase in PPW/$ and introduce a bunch of forward-looking virtualization features that improve performance and security. If you're not already bought into the ecosystem, I'd be hard-pressed to justify not going v3 or v4, especially because you can get a platform-lighting CPU like the 2603-v3 for ~$20 on eBay, and any CPU from those families will drop in if you need to upgrade. Remember, Sandy Bridge socket 2011 is not the same as Haswell/Broadwell 2011! At this rate, DDR3 is only going to get more expensive and DDR4 is only going to get cheaper.

You might get the chorus of "you can't run FreeNAS virtualized" here or elsewhere but look up the caveats and ensure you're up for the risks, if you'd like. It can be done but it's not officially supported and if something breaks, you're up poo poo creek.

movax
Aug 30, 2008

Crunchy Black posted:

You might get the chorus of "you can't run FreeNAS virtualized" here or elsewhere but look up the caveats and ensure you're up for the risks, if you'd like. It can be done but it's not officially supported and if something breaks, you're up poo poo creek.

Interesting, I have been very behind on keeping up with this thread and didn’t realize that was a thing. I have an ESXi box that I’ve been meaning to put FreeNAS and other OSes on (probably a Fedora VM to run Usenet / other SW / things that want Linux, not BSD) and was going to use HW pass through of my HBAs. No longer a good idea, wasn’t ever a good idea?

The NPC
Nov 21, 2010


For people running other services on their file servers, do you segregate them at all? What about containers? Do you run those on the host or make a VM to be the docker host?

Hughlander
May 11, 2005

FYI, I put together the new Docker/VM machine, separate from my NAS, that I've been talking about off and on for a while. Final tally:

Define R6 Case - Super overkill I'll probably replace this in a few years
Ryzen 3900x - 12 core goodness
4 sticks 32gig ECC memory - 128 gigs was key since I was memory bound on my old 32 gig NAS
2 1TB Intel M.2 drives - Only drives in the system, ZFS Raid 1, Syncoid backed up to main NAS
ASRock X470D4U2-2T - This guy actually exists. It's Micro ATX and so drat small. Takes 128 gigs memory, IPMI, and 2 10G ethernets! It's the full deal for what I wanted.
Seasonic PRIME Ultra 650W 80+ Titanium Power Supply - Overkill but so quiet I love it

I originally had a be quiet! CPU cooler, but the micro ATX is so small that it couldn't have all 4 ram sticks plugged in and fit, so I used the stock AMD cooler instead.

Setting it up was a super pain, mostly due to my mistakes. There was a short on the board that lost me about 3 days until I had it completely out of the case, and then swapping the coolers led to a bad thermal paste job, so I had to get more thermal paste and use rubbing alcohol to clean off the CPU. After that, installing Proxmox and having it boot from NVMe was a bit challenging, but now I've migrated all but one container from the old machine. I think I'm going to leave Plex on the NAS and put everything else on the new machine.

I'm really happy with that ASRock board and with the pre and post sales tech support I've gotten from them.

EDIT: Content

The NPC posted:

For people running other services on their file servers, do you segregate them at all? What about containers? Do you run those on the host or make a VM to be the docker host?

As above I'm in the middle of moving everything off my fileserver due to performance. But what I had done before was run ESXI with passing through the LSI controllers to FreeNAS. I got tired of paying the memory tax there, so I redid it as Proxmox keeping the same ZFS pools. In that I ran LXCs for things that needed them:
Plex - Didn't want its performance / uptime to be related to anything else.
AirPrint/Google Cloud Print - Passed the USB to it
Minecraft worlds - Each saved world is its own LXC since they have well-known ports that the LAN uses.

Then one catch-all Docker server (also an LXC, not a VM) that had ~50 containers running through 3-4 compose files. I was really sloppy with this and anything that needed an outside connection was in the same compose file, since that's where the nginx reverse proxy was.

My new machine I mentioned above is also Proxmox, but I'm installing Docker raw on it so I can use the Docker ZFS volume driver and get fine-grained control over snapshots.

With Proxmox, the only time I do VMs is if it's not Linux; otherwise it's either a Docker container (preferred) or an LXC (if something forces it to be).
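
For reference, assuming the built-in Docker ZFS storage driver is what's involved here, the setup is roughly (pool and dataset names made up):

```sh
# /var/lib/docker needs to live on a ZFS dataset for the zfs storage driver
zfs create -o mountpoint=/var/lib/docker rpool/docker

# tell dockerd to use the ZFS storage driver, then restart it
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "zfs"
}
EOF
systemctl restart docker
docker info | grep -i 'storage driver'   # should report: zfs
```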

Hughlander fucked around with this message at 21:17 on Dec 29, 2019
