|
TBQH, I'm throwing money at the problem because I've got disposable income and I'm sort of rewarding myself for the years as a college student / shortly thereafter where I always had to compromise because I couldn't afford what I needed at the time. I'm just electing to nuke the problem "once", for me right now -- 8x8TB drives in a Fractal Node 804 knowing full well that expansion would require another investment, another chassis, etc. -- but also not planning on it. I have a serious loving data hoarding problem if I get past 70% utilization on a RAID-Z2 of these drives. I'm a single dude who uses the storage to store projects, files, videos, etc. -- I don't have multiple users pounding the device to watch poo poo or a high-performance database running on it.

Likewise, I'm going for a Skylake Xeon so I can have ECC -- I design radiation-tolerant electronics for a living and know full well what the actual rates / effects of bit flips and such on systems are, and figured I can just pay an extra $200, get ECC, and not worry about it -- risk reduction with a small infusion of cash. I realize it isn't an option for everyone, but that's my thought process on it. Very much overkill, but it's kind of fun in a way. Honestly, this started out with me looking at a QNAP, and I'd probably have gone for the QNAP if they had some kind of ZFS-style bit-rot defense for the data committed to disk -- their software suite would save me a ton of time.

re: the SSD choices, I elected not to do any dedicated L2ARC or SLOG device for now. Picked up two 850 EVOs for the ESXi VM datastores, and a single 960 EVO M.2 w/ a PCIe x4 adapter to make available as pass-through for the Linux VM to use as a fast-rear end disk for unpack/unRAR/databases/Plex caching/whatever the gently caress. If I end up needing L2ARC or SLOG, I'll pick up a SAS expander and deal with it then.
At least FreeNAS makes ZFS way easier to use now than rolling your own OpenSolaris install, which is what I did because I'm a dumb dumb.
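For what it's worth, deferring the SLOG/L2ARC decision really does cost nothing, because both can be bolted onto a live pool later. A rough sketch of what that looks like -- the pool name and device paths here are made up, and this obviously needs a machine with ZFS and the actual disks:

```shell
# Add an L2ARC (read cache) device to an existing pool -- no rebuild needed
zpool add tank cache ada1

# Add a SLOG (separate intent log) for sync writes; mirroring it is the
# usual recommendation so a dead log device can't eat in-flight sync writes
zpool add tank log mirror ada2 ada3

# Cache devices can be pulled back out later without touching the data vdevs
zpool remove tank ada1
```

Neither device type changes how the data vdevs are laid out, which is why "deal with it then" is a perfectly sound plan.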
|
# ? Apr 11, 2017 20:37 |
|
Doing things with overkill after you've had to restrict yourself can be quite cathartic, yes. And speaking of QNAP, they do make a ZFS appliance, but I'm pretty sure it's in the "if you have to ask for a price, you can't afford it" range.
|
|
# ? Apr 11, 2017 21:59 |
|
I built a ~60TB FreeNAS a few months back (posted about it itt), but I just finished a very detailed write-up on the whole process (~40 pages long): http://jro.io/nas

I've posted this around a couple of places, but you guys might be interested in it as well. Obviously, it's overkill (and then some) for standard home use, but I've had a ton of fun with it, and the hardware headroom lets me play around with lots of VMs and other stuff.
|
# ? Apr 11, 2017 22:06 |
|
I went way overkill on my FreeNAS setup, but I had the cash to buy it all upfront and wanted a project that I could just set and forget anyway. Now I have something super great that will continue to be useful for whatever bullshit I throw at it for 5-10+ years.
|
# ? Apr 11, 2017 22:25 |
|
Melp posted:I built a ~60TB FreeNAS a few months back (posted about it itt), but I just finished a very detailed write-up on the whole process (~40 pages long):

I read this the other day when you posted it elsewhere and it's a drat fine writeup.
|
# ? Apr 12, 2017 00:52 |
|
Melp posted:I built a ~60TB FreeNAS a few months back (posted about it itt), but I just finished a very detailed write-up on the whole process (~40 pages long):

I've only just clicked on the link to read it, but thanks for doing what I've always dreamt of and writing a "gently caress, here's what I've learned from scraping the Internet for hours, parsing through various amounts of bullshit, and what I learned when I actually tried to do it". I haven't finished buying all the parts yet, but here's what I've got so far:

CPU: E3-1230 v5
Cooler: Hyper 212 EVO
Motherboard: Supermicro X11SSL-CF
RAM: 4x 16GB ECC DDR4 UDIMMs
HDD: 8x WD80EFZX (Target: 8-drive RAID-Z2)
SSD: 2x 850 EVO 500GB (RAID 1 for ESXi), 1x 960 EVO M.2 w/ adapter (for pass-through), 1x Supermicro DOM (for actual ESXi + w/e)
Case: Fractal Node 804
PSU: Corsair RM650x
Fans: Noctua NF-F12s and NF-S12As

To buy:
* SAS-HD cables
* Goodies to crimp / solder custom lengths of SATA power connectors for the drives

Drives still cost more than everything else. My old (well, current I guess -- it's been off for 2 years because I don't have time to fix the OS) NAS is a Norco RPC-4020 w/ 3 6-drive RAID-Z2s made up of 2TB drives. I had kind of a violent reaction and went running away from a giant rack-mount case with tons of bays because I'd rather buy something that looks decent / doesn't scare women away because of a giant ugly server droning away in my closet.

The other difference in this build compared to my younger days is patience -- I'm just putting poo poo together until I find something missing, then I'll measure, order it, and wait for it to show up instead of doing fuckloads of analysis up front and maybe ending up with too-short / too-long cables. Ahh, youth.

e: Hey, is 512B vs 4K drives still a thing to worry about? I remember much consternation when the transition was first happening.

movax fucked around with this message at 01:35 on Apr 12, 2017
# ? Apr 12, 2017 01:16 |
|
That is a nice-looking build. I am curious why so much space/RAID for the ESXi drive(s)? Why not a DOM? (I have not looked into this at all yet, but I thought that was a popular solution.)
|
# ? Apr 12, 2017 01:34 |
|
priznat posted:That is a nice looking build, I am curious why so much space/raid for ESXi drive(s)? Why not a DOM?

Oops -- forgot to put the DOM on there. I'm going to install ESXi to that DOM and then probably leave 60GB of unused space on it (w/e, I suppose). The RAID 1 pool (assuming I can do cheap / simple RAID 1 from the PCH controller without loving ESXi) would host the VMDKs for FreeNAS, the domain controller, whatever else. Then I PCI pass-through my SAS3008 to FreeNAS along with the Reds, and PCI pass-through the 960 to my Linux VM as a mount for disk-intensive stuff. We'll see if this all falls apart when I actually turn it on...

I have no idea what disk space is needed for a Windows Server Core installation, but it turned into one of those 'should I just spend an extra $150 to get 2 500GB drives instead of 2 250GB drives? Why the gently caress not!' moments.
|
# ? Apr 12, 2017 01:38 |
|
movax posted:Drives still cost more than everything else. My old (well, current I guess -- it's been off for 2 years because I don't have time to fix the OS) NAS is a Norco RPC-4020 w/ 3 6-drive RAID-Z2s made up of 2TB drives. I had kind of a violent reaction and went running away from a giant rack-mount case with tons of bays because I'd rather buy something that looks decent / doesn't scare women away because of a giant ugly server droning away in my closet.

Two solutions here:
1) Get the girl first
2) Put the whole mess out in the garage anyway.

I actually haven't seen heat death yet on anything. Probably helps that I don't worry much about noise. Not full-on datacenter loud, but louder than I'd like in my house.
|
# ? Apr 12, 2017 03:08 |
|
IOwnCalculus posted:1) Get the girl first

Real talk, right here. I dealt with the horrible mess of different-sized drives in my desktop until after I had been married for a little bit, then had a drive fail and lost 3TB of data. My wife signed off on me doing whatever I wanted to have data integrity, so long as it didn't send us to the poorhouse and I promised not to put it in our bedroom. Now I just need to get her to sign off on the upgrade...
|
# ? Apr 12, 2017 04:22 |
|
movax posted:TBQH, I'm throwing money at the problem because I've got disposable income and I'm sort of rewarding myself for the years as a college student / shortly thereafter where I always had to compromise because I couldn't afford what I needed at the time. I'm just electing to nuke the problem "once", for me right now -- 8x8TB drives in a Fractal Node 804 knowing full well that expansion would require another investment, another chassis, etc. -- but also not planning on it. I have a serious loving data hoarding problem if I get past 70% utilization on a RAID-Z2 of these drives. I'm a single dude who uses the storage to store projects, files, videos, etc -- don't have multiple users pounding the device to watch poo poo or a high-performance database running on it.

I did that in 2013 to protect myself from any chance of cryptolocker: 4x 3TB drives. poo poo fills up; it took probably like a year or two. Now I've bought a pair of 6TB Toshiba X300s and I'm deduping that poo poo, and then the LVM array with the 4x 3TB becomes my travel setup and I eventually buy another 2x 6TB for my home server, plus one for my desktop. I loving love those X300s, fantastic drives and cheap as hell.

quote:CPU: E3-1230v5

This poo poo is all incredibly overkill. Are you going to be serving a high-demand Postgres instance off this server? You need like 8 GB of ECC memory, a cheapo Xeon, maybe one 960 Evo as a cache drive. Even with 16 or 32 GB of RAM it's still super overkill. Does SAS help with the VMs somehow?

quote:Drives still cost more than everything else. My old (well, current I guess -- it's been off for 2 years because I don't have time to fix the OS) NAS is a Norco RPC-4020 w/ 3 6-drive RAID-Z2s made up of 2TB drives. I had kind of a violent reaction and went running away from a giant rack-mount case with tons of bays because I'd rather buy something that looks decent / doesn't scare women away because of a giant ugly server droning away in my closet.
I don't know what you're talking about, man, bitches love that poo poo. Set up Sonarr to download all her Grimm and other chick shows and then see what she says
|
# ? Apr 12, 2017 08:22 |
|
If you think ZFS is hard or overkill, I don't know what to tell you. Yes, you need to learn it, but everything is hard at first, and IT is ever-changing, so you are constantly learning anyway. But your data means something to you, right? That is why you are thinking about a NAS in the first place, I guess. Why would you ever choose anything but ZFS, and by extension with ECC, for your data? As for hardware, I am done dicking around with it; I can wait everywhere else, but I don't want to wait on my servers or my own computer in my free time.
|
# ? Apr 12, 2017 08:28 |
|
Paul MaudDib posted:This poo poo is all incredibly overkill. Are you going to be serving a high-demand Postgres instance off this server?

New Xeons don't really get much lower-end than the 1230 v5. And to properly use both memory channels, you'd need at least two sticks, and there's not much savings to be had in finding the smallest ones on the market. SAS makes everything easier to cable at the very least, and gets him a separate controller to hand off using VT-d. Pure SATA controllers with large port counts are rare, expensive, lovely, and poorly supported. The only way you could check all the boxes for less would be used gear.
|
# ? Apr 12, 2017 08:57 |
|
Running ZFS on an HP MicroServer N54L I bought used for cheap, with 8 gigs of ECC RAM included. I think I paid $400, and half of that was for two new WD Reds at $100 a drive. It stores family pictures and media files just fine. ZFS/FreeNAS doesn't really require a lot to run, and snapshots have saved me.
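For anyone who hasn't used them, the snapshot workflow is about as simple as it gets -- a minimal sketch, with invented dataset and file names, that assumes a real ZFS pool to run against:

```shell
# Take a cheap, instant snapshot before doing anything risky
zfs snapshot tank/photos@pre-cleanup

# See existing snapshots and how much space they pin down
zfs list -t snapshot

# Pull a single deleted file back out of the hidden .zfs directory
cp /tank/photos/.zfs/snapshot/pre-cleanup/IMG_1234.jpg /tank/photos/

# Or throw away everything since the snapshot in one shot
zfs rollback tank/photos@pre-cleanup
```

Snapshots are copy-on-write, so they cost essentially nothing until the live data actually diverges from them.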
|
# ? Apr 12, 2017 09:10 |
|
Mr Shiny Pants posted:Why would you ever choose anything but ZFS, and by extension with ECC, for your data?

Because there are other solutions that make tradeoffs in other areas that may or may not be more important to your needs?
|
# ? Apr 12, 2017 15:05 |
|
Is this little disclaimer new?
|
# ? Apr 12, 2017 15:36 |
|
movax posted:e: Hey, is 512B vs 4K drives still a thing to worry about? I remember much consternation when the transition was first happening.

The drives having 512 Byte vs. 4 KByte sectors isn't an issue any more; all your drives will be 4K. The issue is that many drives still report themselves as 512B to the OS, so in the case of ZFS, you can wind up with a vdev with ashift=9 and performance will go to poo poo. The number of drives that misreport this information to the OS is so large, someone wrote a routine for the ZFS on Linux project to automatically check the make and model of your drive against a database of drives that are known to misreport their sector size. The database section in this routine can be helpful in figuring out if you need to manually correct your ashift value on your vdevs: https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c#L108

edit: For ZFS, more info on ashift here: http://open-zfs.org/wiki/Performance_tuning#Alignment_Shift_.28ashift.29

Melp fucked around with this message at 16:47 on Apr 12, 2017
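For reference, ashift is just the base-2 logarithm of the sector size ZFS assumes for a vdev, so the two values in question work out like this (the commented zpool/zdb lines are illustrative only -- they need real hardware and a ZFS install, and the pool name is made up):

```shell
# ashift=9  -> 512-byte sectors; ashift=12 -> 4K sectors
echo $((1 << 9))    # prints 512
echo $((1 << 12))   # prints 4096

# On a live system, force 4K alignment at creation time rather than trusting
# what the drive reports, then verify what the vdevs actually got:
#   zpool create -o ashift=12 tank raidz2 ada0 ada1 ada2 ada3
#   zdb -C tank | grep ashift
```

ashift is set per-vdev at creation and can't be changed afterwards, which is exactly why a misreporting drive is such a trap.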
# ? Apr 12, 2017 16:38 |
|
eames posted:Is this little disclaimer new?

Not really. They've been pretty clear that 10/Corral is still technically a beta, albeit hopefully a pretty solid one, in that they're slapping an RC title on it. But until it officially releases as a Stable version, it'll have that tag on there, just to remind people.
|
# ? Apr 12, 2017 18:29 |
|
Paul MaudDib posted:This poo poo is all incredibly overkill. Are you going to be serving a high-demand Postgres instance off this server?

Oh, it's totally overkill. I picked the cheapest Xeon that had 4C/8T in LGA1151, and maxed out RAM now because gently caress it, when has too much RAM ever been a bad thing? I'm not using SAS drives -- the motherboard just comes with a SAS controller, but I got regular WD Red SATA drives. That makes it easy to go one SAS-HD connector -> 4x SATA, and to an expander in the future if I need it. I'm on the fence about whether I'll use a VM on this machine to host a bunch of cross-compilers / FPGA sim stuff, or just run a local VM on my desktop to do it all, but hey -- options / spare CPU cycles are good to have! Real overkill is probably moving to 10GbE networking gear -- which in a few years may just be cheap enough to get for shits and giggles.

Melp posted:The drives having 512 Byte vs. 4 KByte sectors isn't an issue any more; all your drives will be 4K. The issue is that many drives still report themselves as 512B to the OS, so in the case of ZFS, you can wind up with a vdev with ashift=9 and performance will go to poo poo. The number of drives that misreport this information to the OS is so large, someone wrote a routine for the ZFS on Linux project to automatically check the make and model of your drive against a database of drives that are known to misreport their sector size. The database section in this routine can be helpful in figuring out if you need to manually correct your ashift value on your vdevs: https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c#L108

Thanks -- looks like I'm good to go with my 8TB drives. IIRC, with my 2TB drives I have a mix of true 512B drives and "fake" 512B drives, so my pool was made of vdevs with two different ashifts.
|
# ? Apr 12, 2017 19:29 |
|
movax posted:Real overkill is probably moving to 10GbE networking gear -- which in a few years may just be cheap enough to get for shits and giggles.

It already is cheap enough for both making GBS threads and giggling:
Two-pack of SFP+ NICs with 3 meter cables - $40 shipped
24-port GigE web-managed switch with two SFP+ ports - $130 shipped
|
# ? Apr 12, 2017 20:44 |
|
SamDabbers posted:It already is cheap enough for both making GBS threads and giggling:

If only there were cheap RJ45 10GbE options. Fiber cable prices really kill my plans to shove the NAS on the other side of the house.
|
# ? Apr 12, 2017 21:05 |
|
DrDork posted:If only there were cheap rj45 10GbE options. Fiber cable prices really kill my plans to shove the NAS on the other side of the house.

Even those are getting more affordable, if you look at Xeon-D boards with multiple onboard 10GBase-T ports and the Ubiquiti 10GbE switch.
|
# ? Apr 12, 2017 21:15 |
|
Fiber cable is cheap, and CAT6A is a huge pain in the rear end. You can get 50 meter OM3 fiber cables on amazon for like 50 bucks.
|
# ? Apr 12, 2017 21:17 |
|
n.. posted:Fiber cable is cheap, and CAT6A is a huge pain in the rear end.

Cat6A/E isn't that bad. I wired up most of a house using it a year or two ago, and it wasn't problematic at all.

n.. posted:You can get 50 meter OM3 fiber cables on amazon for like 50 bucks.
|
# ? Apr 12, 2017 21:29 |
|
n.. posted:Fiber cable is cheap, and CAT6A is a huge pain in the rear end.

Including 2 transceivers? Those are the expensive parts, right?
|
# ? Apr 12, 2017 21:33 |
|
Terrible self-promotion for any lazy goons who want to buy crap cheap. I'm selling my Norco 4220+drives (as a complete system, really -- no need to supply your own CPU/memory/etc). I can't be bothered parting it out on eBay unless I absolutely have to. Please reprimand me and delete this if I'm badposting.
|
# ? Apr 12, 2017 21:33 |
|
Twerk from Home posted:Including 2 transceivers? Those are the expensive parts, right?

Look on FS.com, they have dirt cheap compatible optics. Other companies have cheap compatible optics as well.
|
# ? Apr 12, 2017 21:47 |
|
Moey posted:Look on FS.com, they have dirt cheap compatible optics. Other companies have cheap compatible optics as well.

Huh, thanks for opening my eyes. 10G home network here I come.
|
# ? Apr 12, 2017 21:55 |
|
How many actual makers of optics are there? Is SFP snobbery justified, or is it just a thing to bear in mind if you need vendor support?
|
# ? Apr 12, 2017 22:31 |
|
Optics are a commodity now as far as I'm concerned, I've never noticed a difference between Cisco SFPs and cheapo ones. Other than forced incompatibility on Cisco's part.
|
# ? Apr 12, 2017 22:43 |
|
SFP+ has lower latency than 10GBase-T over RJ45, and needs less power, even with optical transceivers.
|
# ? Apr 13, 2017 00:15 |
|
This is somewhat off-topic, but are there QSFP28 modules that allow you to connect GbE if you have a high-end network card? I want to test out some high-end cards, but just for functionality, not for performance.
|
# ? Apr 13, 2017 00:18 |
|
DrDork posted:Cat6A/E isn't that bad. I wired up most of a house using it a year or two ago, and it wasn't problematic at all.

Heh, I've often thought of going 10gigE because I can get it straight out to the internet at that speed...
|
# ? Apr 13, 2017 00:23 |
|
Looks like one of the 2TB drives in my ZFS NAS is dying. As part of a gradual upgrade, I'm going to replace it with a 4TB disk. What 4TB NAS drive should I go for? I've been using WD Reds, and Amazon has the 4TB for $139.99 -- the same price I paid in Nov 2015, but better than the price it was a few months ago.
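The swap itself is only a couple of commands once the new disk is connected -- a sketch with invented pool/device names, which needs a real ZFS system to run:

```shell
# Confirm which disk is throwing errors
zpool status tank

# Swap the new 4TB in for the failing 2TB and let ZFS resilver onto it
zpool replace tank ada3 ada5

# With autoexpand on, the vdev grows automatically -- but only after *every*
# disk in it has been upsized; one 4TB drive alone adds no capacity
zpool set autoexpand=on tank
```

That last point is the catch with a gradual upgrade: the extra space only shows up once the last 2TB drive in the vdev has been replaced.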
|
# ? Apr 13, 2017 09:14 |
|
Adios FreeNAS 10 https://forums.freenas.org/index.php?threads/important-announcement-regarding-freenas-corral.53502/
|
# ? Apr 13, 2017 15:53 |
|
https://forums.freenas.org/index.php?threads/important-announcement-regarding-freenas-corral.53502/

So poo poo has gotten real on the FreeNAS front. The announcement isn't 100% clear, but it appears that Corral development is being halted entirely, and they're going to focus on a new UI for 9.10 and on backporting Corral features into it. So far, people seem really split between happiness that the devs are admitting there was a problem and trying to fix it, and anger at having bought into something that's going to be ripped away. I'm on the fence, personally. Hopefully they can get this all straightened out without a ton of user hassle.

E: f, b
|
# ? Apr 13, 2017 15:54 |
|
Ahaha. gently caress me.
|
# ? Apr 13, 2017 16:03 |
|
Counting the hours until "the community" announces a Corral fork. Anyway, based on my very brief experience with FreeNAS 10 this is a good decision.
|
# ? Apr 13, 2017 16:17 |
|
eames posted:Adios FreeNAS 10

I'm really, really saddened. I threw Corral on a Dell R710 over the weekend, and I'm actually pretty impressed despite some install bugs I encountered. I mean, it's a NAS with a hypervisor; I'm running Windows and Red Hat VMs on it, and Plex as a Docker container. I don't know if I'll switch back to the 9.x dev fork...
|
# ? Apr 13, 2017 17:00 |
|
n.. posted:Optics are a commodity now as far as I'm concerned, I've never noticed a difference between Cisco SFPs and cheapo ones. Other than forced incompatibility on Cisco's part.

Intel, HP, and Cisco all have transceiver brand lock-in: you have to use their lovely branded ones or it flat out won't work, which made it loving hellish getting my HP switch to talk to my Intel NIC using a copper cable. I think Cisco still has the IOS command that lets you use the non-branded ones, but Intel and HP's response was more or less 'eat a dick' when I asked them how to disable it.
|
# ? Apr 13, 2017 17:24 |