movax
Aug 30, 2008

TBQH, I'm throwing money at the problem because I've got disposable income and I'm sort of rewarding myself for the years as a college student / shortly thereafter where I always had to compromise because I couldn't afford what I needed at the time. I'm just electing to nuke the problem "once", for me right now -- 8x8TB drives in a Fractal Node 804 knowing full well that expansion would require another investment, another chassis, etc. -- but also not planning on it. I have a serious loving data hoarding problem if I get past 70% utilization on a RAID-Z2 of these drives. I'm a single dude who uses the storage to store projects, files, videos, etc -- don't have multiple users pounding the device to watch poo poo or a high-performance database running on it.

Likewise, going for a Skylake Xeon so I can have ECC -- I design radiation-tolerant electronics for a living and know full well what the actual rates / effects of bit flips and such on systems are, and figured I can just pay an extra $200 and get ECC and not worry about it -- risk reduction with a small infusion of cash.

I realize it isn't an option for everyone, but that's my thought process on it. Very much overkill, but it's kind of fun in a way. Honestly, this started out with me looking at a QNAP, and I'd have probably gone for the QNAP if they had some kind of ZFS-style bit-rot defense for the data committed to disk -- their software suite would save me a ton of time.

Re: the SSD choices, I elected not to do any dedicated L2ARC or SLOG device for now. Picked up two 850 EVOs for the ESXi VM datastores, and a single 960 EVO M.2 w/ a PCIe x4 adapter to make available as pass-through for the Linux VM to use as a fast-rear end disk for unpack/unRAR/databases/Plex caching/whatever the gently caress. If I end up needing L2ARC or SLOG, I'll pick up a SAS expander and deal with it then.
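(For what it's worth, bolting L2ARC or SLOG onto a live pool later is painless -- a rough sketch, with the pool name and device nodes made up:)

code:

# add an SSD as L2ARC (read cache) to an existing pool -- "tank" and da9 are placeholders
zpool add tank cache da9

# or, with two spare SSDs, add a mirrored SLOG instead
zpool add tank log mirror da9 da10

# both can be removed again later without touching the data vdevs
zpool remove tank da9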

At least FreeNAS makes ZFS way easier to use now than rolling your own OpenSolaris install, which is what I did because I'm a dumb dumb.

BlankSystemDaemon
Mar 13, 2009



Doing things with overkill after you've had to restrict yourself can be quite cathartic, yes.

And speaking of QNAP, they do make a ZFS appliance, but I'm pretty sure it's in the "if you have to ask for a price, you can't afford it"-range.

Melp
Feb 26, 2004

You know the drill.
I built a ~60TB FreeNAS a few months back (posted about it itt), but I just finished a very detailed write-up on the whole process (~40 pages long):

http://jro.io/nas

I've posted this around a couple places, but you guys might be interested in it as well. Obviously, it's overkill (and then some) for standard home use, but I've had a ton of fun with it, and the hardware headroom lets me play around with lots of VMs and other stuff.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
I went way overkill on my FreeNAS setup, but I had the cash to buy it all upfront and wanted a project that I could just set and forget anyway. Now I have something super great that will continue to be useful for whatever bullshit I throw at it for 5-10+ years. :shrug:

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

Melp posted:

I built a ~60TB FreeNAS a few months back (posted about it itt), but I just finished a very detailed write-up on the whole process (~40 pages long):

http://jro.io/nas

I've posted this around a couple places, but you guys might be interested in it as well. Obviously, it's overkill (and then some) for standard home use, but I've had a ton of fun with it, and the hardware headroom lets me play around with lots of VMs and other stuff.

I read this the other day when you posted it elsewhere and it's a drat fine writeup.

movax
Aug 30, 2008

Melp posted:

I built a ~60TB FreeNAS a few months back (posted about it itt), but I just finished a very detailed write-up on the whole process (~40 pages long):

http://jro.io/nas

I've posted this around a couple places, but you guys might be interested in it as well. Obviously, it's overkill (and then some) for standard home use, but I've had a ton of fun with it, and the hardware headroom lets me play around with lots of VMs and other stuff.

I've only just clicked on the link to read it, but thanks for doing what I've always dreamt of doing: a "gently caress, here's what I learned from scraping the Internet for hours, parsing through various amounts of bullshit, and then actually trying to do it" writeup.

I haven't finished buying all the parts yet, but what I've got so far:

CPU: E3-1230v5
Cooler: Hyper212 EVO
Motherboard: Supermicro X11SSL-CF
RAM: 4x 16GB ECC DDR4 UDIMMs
HDD: 8x WD80EFZX (Target: 8-drive RAID-Z2)
SSD: 2x 850 EVO 500GB (RAID 1 for ESXi), 1x 960 EVO M.2 w/ adapter (for pass-through), 1x Supermicro DOM (for actual ESXi + w/e)
Case: Fractal Node 804
PSU: Corsair RM650x
Fans: Noctua NF-F12s and NF-S12As

To buy:
* SAS-HD Cables
* Goodies to crimp / solder custom lengths of SATA power connectors for the drives

Drives still cost more than everything else. My old (well, current I guess -- it's been off for 2 years because I don't have time to fix the OS) NAS is a Norco RPC-4020 w/ 3 6-drive RAID-Z2s made up of 2TB drives. I had kind of a violent reaction and went running away from a giant rack-mount case with tons of bays because I'd rather buy something that looks decent / doesn't scare women away because of a giant ugly server droning away in my closet.

The other difference in this build compared to my younger days is patience -- I'm just putting poo poo together until I find something missing, then I'll measure, order it, and wait for it to show up instead of doing fuckloads of analysis up front and maybe ending up with too short / too long cables. Ahh, youth.

e: Hey, are 512B vs. 4K sector drives still a thing to worry about? I remember much consternation when the transition was first happening.

movax fucked around with this message at 01:35 on Apr 12, 2017

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
That is a nice-looking build. I am curious why so much space/RAID for the ESXi drive(s)? Why not a DOM?

(I have not looked into this at all yet but I thought that was a popular solution)

movax
Aug 30, 2008

priznat posted:

That is a nice-looking build. I am curious why so much space/RAID for the ESXi drive(s)? Why not a DOM?

(I have not looked into this at all yet but I thought that was a popular solution)

Oops -- forgot to put the DOM on there. I'm going to install ESXi to that DOM and then probably leave 60GB of unused space on it (w/e, I suppose).

The RAID 1 pool (assuming I can do cheap / simple RAID 1 from the PCH controller without loving ESXi) would host the VMDKs for FreeNAS, domain controller, whatever else. Then I PCI pass-through my SAS3008 to FreeNAS along with the Reds, and PCI pass-through the 960 to my Linux VM as a mount for disk-intensive stuff. We'll see if this all falls apart when I actually turn it on...
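(The sanity check on the FreeNAS side is quick, at least -- roughly, and assuming FreeBSD's usual device names; if the pass-through worked, the HBA and the Reds should show up natively inside the VM:)

code:

# inside the FreeNAS VM: the SAS3008 should attach via the mpr(4) driver
dmesg | grep -i mpr

# and every Red hanging off it should enumerate as a native da device
camcontrol devlist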

I have no idea what disk space is needed for a Windows Server core installation, but it turned into one of those 'should I just spend an extra $150 to get 2 500GB drives instead of 2 250GB drives? Why the gently caress not!' moments.

IOwnCalculus
Apr 2, 2003





movax posted:

Drives still cost more than everything else. My old (well, current I guess -- it's been off for 2 years because I don't have time to fix the OS) NAS is a Norco RPC-4020 w/ 3 6-drive RAID-Z2s made up of 2TB drives. I had kind of a violent reaction and went running away from a giant rack-mount case with tons of bays because I'd rather buy something that looks decent / doesn't scare women away because of a giant ugly server droning away in my closet.

Two solutions here:

1) Get the girl first :v:
2) Put the whole mess out in the garage anyway. I actually haven't seen heat death yet on anything. Probably helps that I don't worry much about noise. Not full-on datacenter-loud, but louder than I'd like in my house.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

IOwnCalculus posted:

1) Get the girl first :v:

Real talk, right here. I dealt with the horrible mess of different-sized drives in my desktop until after I had been married for a little bit, then had a drive failure and lost 3TB of data. My wife signed off on me doing whatever I wanted to have data integrity, so long as it didn't send us to the poorhouse and I promised not to put it in our bedroom. Now I just need to get her to sign off on the upgrade...

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

movax posted:

TBQH, I'm throwing money at the problem because I've got disposable income and I'm sort of rewarding myself for the years as a college student / shortly thereafter where I always had to compromise because I couldn't afford what I needed at the time. I'm just electing to nuke the problem "once", for me right now -- 8x8TB drives in a Fractal Node 804 knowing full well that expansion would require another investment, another chassis, etc. -- but also not planning on it. I have a serious loving data hoarding problem if I get past 70% utilization on a RAID-Z2 of these drives. I'm a single dude who uses the storage to store projects, files, videos, etc -- don't have multiple users pounding the device to watch poo poo or a high-performance database running on it.

I did that in 2013 to protect myself from any chance of CryptoLocker: 4x3TB drives. poo poo fills up; it only took like a year or two.

Now I've bought a pair of 6TB Toshiba X300s and I'm deduping that poo poo; then the LVM array with the 4x3TB becomes my travel setup, and eventually I'll buy another 2x6TB for my home server, plus one for my desktop. I loving love those X300s -- fantastic drives and cheap as hell.

quote:

CPU: E3-1230v5
Cooler: Hyper212 EVO
Motherboard: Supermicro X11SSL-CF
RAM: 4x 16GB ECC DDR4 UDIMMs
HDD: 8x WD80EFZX (Target: 8-drive RAID-Z2)
SSD: 2x 850 EVO 500GB (RAID 1 for ESXi), 1x 960 EVO M.2 w/ adapter (for pass-through), 1x Supermicro DOM (for actual ESXi + w/e)
Case: Fractal Node 804
PSU: Corsair RM650x
Fans: Noctua NF-F12s and NF-S12As

To buy:
* SAS-HD Cables
* Goodies to crimp / solder custom lengths of SATA power connectors for the drives

This poo poo is all incredibly overkill. Are you going to be serving a high-demand Postgres instance off this server?

You need like 8 GB ECC memory, cheapo Xeon, maybe one 960 Evo as a cache drive. Even with 16 or 32 GB of RAM it's still super overkill. Does SAS help with the VMs somehow?

quote:

Drives still cost more than everything else. My old (well, current I guess -- it's been off for 2 years because I don't have time to fix the OS) NAS is a Norco RPC-4020 w/ 3 6-drive RAID-Z2s made up of 2TB drives. I had kind of a violent reaction and went running away from a giant rack-mount case with tons of bays because I'd rather buy something that looks decent / doesn't scare women away because of a giant ugly server droning away in my closet.

I don't know what you're talking about, man, bitches love that poo poo. Set up Sonarr to download all her Grimm and other chick shows and then see what she says :v:

Mr Shiny Pants
Nov 12, 2012
If you think ZFS is hard or overkill, I don't know what to tell you. Yes, you need to learn it, but everything is hard at first, and IT is ever-changing, so you're constantly learning anyway.
But your data means something to you, right? That's why you're thinking about a NAS in the first place, I'd guess. Why would you ever choose anything but ZFS -- and, by extension, ECC -- for your data?

As for hardware, I'm done dicking around with it. I can wait everywhere else; I don't want to wait on my servers or my own computer in my free time.

IOwnCalculus
Apr 2, 2003





Paul MaudDib posted:

This poo poo is all incredibly overkill. Are you going to be serving a high-demand Postgres instance off this server?

You need like 8 GB ECC memory, cheapo Xeon, maybe one 960 Evo as a cache drive. Even with 16 or 32 GB of RAM it's still super overkill. Does SAS help with the VMs somehow?


New Xeons don't really get much lower-end than the 1230 v5. And to properly use both memory channels you'd need at least two sticks, and there aren't many savings to be had in finding the smallest ones on the market.

SAS makes everything easier to cable at the very least, and gets him a separate controller to hand off using VT-d. Pure SATA controllers with large port counts are rare, expensive, lovely, and poorly supported. The only way to check all the boxes for less would be used gear.

JacksAngryBiome
Oct 23, 2014
Running ZFS on an HP MicroServer N54L I bought used for cheap, with 8GB of ECC RAM included. I think I paid $400, and half of that was for two new WD Reds at $100 a drive. It stores family pictures and media files just fine.

ZFS/FreeNAS doesn't really require a lot to run, and snapshots have saved me.
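(They're one-liners, too -- roughly, with a made-up dataset name:)

code:

# take a read-only, point-in-time snapshot -- "tank/photos" is a placeholder
zfs snapshot tank/photos@before-reorg

# see what snapshots exist
zfs list -t snapshot

# fat-fingered a delete? roll the dataset back
zfs rollback tank/photos@before-reorg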

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Mr Shiny Pants posted:

Why would you ever choose anything but ZFS -- and, by extension, ECC -- for your data?


Because there are other solutions that make tradeoffs in other areas that may or may not be more important to your needs?

eames
May 9, 2009

Is this little disclaimer new? :v:

[attached screenshot of the FreeNAS beta disclaimer]

Melp
Feb 26, 2004

You know the drill.

movax posted:

e: Hey, are 512B vs. 4K sector drives still a thing to worry about? I remember much consternation when the transition was first happening.
The drives having 512-byte vs. 4K sectors isn't an issue any more; all your drives will be 4K. The issue is that many drives still report themselves as 512B to the OS, so in the case of ZFS you can wind up with a vdev with ashift=9, and performance will go to poo poo.

The number of drives that misreport this information to the OS is so large that someone wrote a routine for the ZFS on Linux project to automatically check the make and model of your drive against a database of drives that are known to misreport their sector size. The database section in this routine can be helpful in figuring out if you need to manually correct the ashift value on your vdevs: https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c#L108
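(Checking and forcing it is quick, for what it's worth -- a rough sketch, pool and disk names made up; the create-time knob differs by platform:)

code:

# see what ashift each vdev actually got (9 = 512B sectors, 12 = 4K)
zdb -C tank | grep ashift

# ZFS on Linux: force 4K alignment explicitly at pool creation
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# FreeNAS/FreeBSD of this era: set the floor via sysctl before creating the pool
sysctl vfs.zfs.min_auto_ashift=12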

edit: For ZFS, more info on ashift here: http://open-zfs.org/wiki/Performance_tuning#Alignment_Shift_.28ashift.29

Melp fucked around with this message at 16:47 on Apr 12, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

eames posted:

Is this little disclaimer new? :v:

Not really. They've been pretty clear that 10/Corral is still technically a beta, albeit hopefully a pretty solid one given that they're slapping an RC title on it. But until it officially releases as a Stable version, it'll have that tag on there, just to remind people.

movax
Aug 30, 2008

Paul MaudDib posted:

This poo poo is all incredibly overkill. Are you going to be serving a high-demand Postgres instance off this server?

You need like 8 GB ECC memory, cheapo Xeon, maybe one 960 Evo as a cache drive. Even with 16 or 32 GB of RAM it's still super overkill. Does SAS help with the VMs somehow?

Oh, it's totally overkill. I picked the cheapest Xeon that had 4C/8T in LGA1151, and maxed out the RAM now because gently caress it, when has too much RAM ever been a bad thing? I'm not using SAS drives -- the motherboard just happens to come with a SAS controller; I got regular WD Red SATA drives. Makes it easy to go from one SAS-HD connector to 4x SATA, and to an expander in the future if I need it.

I'm on the fence about whether I'll use a VM on this machine to host a bunch of cross-compilers / FPGA sim stuff, or just run a local VM on my desktop to do it all, but hey -- options / spare CPU cycles are good to have! Real overkill is probably moving to 10GbE networking gear -- which in a few years may just be cheap enough to get for shits and giggles.

Melp posted:

The drives having 512 Byte vs. 4 KByte sectors isn't an issue any more; all your drives will be 4K. The issue is that many drives still report themselves as 512B to the OS, so in the case of ZFS, you can wind up with a vdev with ashift=9 and performance will go to poo poo. The number of drives that misreport this information to the OS is so large, someone wrote a routine for the ZFS on Linux project to automatically check the make and model of your drive against a database of drives that are known to misreport their sector size. The database section in this routine can be helpful in figuring out if you need to manually correct your ashift value on your vdevs: https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c#L108

edit: For ZFS, more info on ashift here: http://open-zfs.org/wiki/Performance_tuning#Alignment_Shift_.28ashift.29

Thanks -- looks like I'm good to go with my 8TB drives. IIRC, among my 2TB drives I have a mix of true 512B drives and "fake" 512B (512e) drives, so my pool was made of vdevs with two different ashifts.

SamDabbers
May 26, 2003



movax posted:

Real overkill is probably moving to 10GbE networking gear -- which in a few years may just be cheap enough to get for shits and giggles.

It already is cheap enough for both making GBS threads and giggling:

Two-pack of SFP+ NICs with 3 meter cables - $40 shipped
24-port GigE web-managed switch with two SFP+ ports - $130 shipped

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

SamDabbers posted:

It already is cheap enough for both making GBS threads and giggling:

Two-pack of SFP+ NICs with 3 meter cables - $40 shipped
24-port GigE web-managed switch with two SFP+ ports - $130 shipped

If only there were cheap RJ45 10GbE options. Fiber cable prices really kill my plans to shove the NAS on the other side of the house.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

DrDork posted:

If only there were cheap RJ45 10GbE options. Fiber cable prices really kill my plans to shove the NAS on the other side of the house.

Even those are getting more affordable, if you look at Xeon-D boards with multiple onboard 10GBase-T ports and the Ubiquiti 10GbE switch.

CheddarGoblin
Jan 12, 2005
oh
Fiber cable is cheap, and CAT6A is a huge pain in the rear end.

You can get 50-meter OM3 fiber cables on Amazon for like 50 bucks.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

CheddarGoblin posted:

Fiber cable is cheap, and CAT6A is a huge pain in the rear end.
Cat6A/E isn't that bad. I wired up most of a house using it a year or two ago, and it wasn't problematic at all.

CheddarGoblin posted:

You can get 50-meter OM3 fiber cables on Amazon for like 50 bucks.
Somehow I missed these. You are not helping me convince myself that Plex and torrents and whatnot don't really need 10Gb...

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

CheddarGoblin posted:

Fiber cable is cheap, and CAT6A is a huge pain in the rear end.

You can get 50-meter OM3 fiber cables on Amazon for like 50 bucks.

Including 2 transceivers? Those are the expensive parts, right?

evol262
Nov 30, 2010
#!/usr/bin/perl
Terrible self-promotion for any lazy goons who want to buy crap cheap.

I'm selling my Norco 4220+drives (as a complete system, really -- no need to supply your own CPU/memory/etc).

I can't be bothered parting it out on eBay unless I absolutely have to. Please reprimand me and delete this if I'm badposting.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Twerk from Home posted:

Including 2 transceivers? Those are the expensive parts, right?

Look on FS.com, they have dirt cheap compatible optics. Other companies have cheap compatible optics as well.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Moey posted:

Look on FS.com, they have dirt cheap compatible optics. Other companies have cheap compatible optics as well.

Huh, thanks for opening my eyes. 10G home network here I come.

Thanks Ants
May 21, 2004

#essereFerrari


How many actual makers of optics are there? Is SFP snobbery justified, or is it just a thing to bear in mind if you need vendor support?

CheddarGoblin
Jan 12, 2005
oh
Optics are a commodity now as far as I'm concerned; I've never noticed a difference between Cisco SFPs and cheapo ones. Other than forced incompatibility on Cisco's part.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
SFP+ has lower latency than RJ45 (10GBase-T) and needs less power, even with optical transceivers.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
This is somewhat off-topic, but are there QSFP28 modules that let you connect plain GbE if you have a high-end network card?

Want to test out some high-end cards, but just for functionality, not performance.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

DrDork posted:

Cat6A/E isn't that bad. I wired up most of a house using it a year or two ago, and it wasn't problematic at all.

Somehow I missed these. You are not helping me convince myself that Plex and torrents and whatnot don't really need 10Gb...

Heh, I've often thought of going 10gigE because I can get it straight out to the internet at that speed...

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
Looks like one of my 2TB drives in my ZFS NAS is dying. As part of a gradual upgrade, I'm going to replace it with a 4TB disk.

What 4TB NAS drive should I go for? I've been using WD Reds, and Amazon has the 4TB for $139.99 -- the same price I paid in Nov 2015, but better than it was a few months ago.
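(The swap itself is one command, for what it's worth -- a rough sketch with made-up pool/device names; the extra space only shows up once every disk in the vdev has been upsized:)

code:

# swap the dying disk for the new one; ZFS resilvers onto it
zpool replace tank da3 da8

# watch resilver progress
zpool status tank

# let the vdev grow once all of its member disks are larger
zpool set autoexpand=on tank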

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Adios FreeNAS 10

https://forums.freenas.org/index.php?threads/important-announcement-regarding-freenas-corral.53502/

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
https://forums.freenas.org/index.php?threads/important-announcement-regarding-freenas-corral.53502/

So poo poo has gotten real on the FreeNAS front. The announcement isn't 100% clear, but it appears that Corral development is being halted entirely, and they're going to focus on a new UI for 9.10 and backporting Corral features into it. So far, people seem really split between happiness that the devs admitted there was a problem and are trying to fix it, and anger at having bought into something that's going to be ripped away. I'm on the fence, personally. Hopefully they can get this all straightened out without a ton of user hassle.

E: f, b

IOwnCalculus
Apr 2, 2003





Ahaha. gently caress me. :shepicide:

eames
May 9, 2009

:suspense:

Counting the hours until "the community" announces a Corral fork. Anyway, based on my very brief experience with FreeNAS 10, this is a good decision.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

eames posted:

:suspense:

Counting the hours until "the community" announces a Corral fork. Anyway, based on my very brief experience with FreeNAS 10, this is a good decision.

I'm really, really saddened. I threw Corral on a Dell R710 over the weekend, and I'm actually pretty impressed despite some install bugs I encountered. I mean, it's a NAS with a hypervisor: I'm running Windows and Red Hat VMs on it, and Plex as a Docker container.

I don't know if I'll switch back to the 9.X dev fork...

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

CheddarGoblin posted:

Optics are a commodity now as far as I'm concerned; I've never noticed a difference between Cisco SFPs and cheapo ones. Other than forced incompatibility on Cisco's part.

Intel, HP, and Cisco all have transceiver brand lock-in: you have to use their lovely branded ones or it flat-out won't work, which made it loving hellish getting my HP switch to talk to my Intel NIC over a copper cable. I think Cisco still has the IOS command that lets you use non-branded ones, but Intel and HP's response was more or less 'eat a dick' when I asked them how to disable it.
