titaniumone
Jun 10, 2001

I don't understand the appeal of a lack rack. The entire point of rack mounting is space saving and ease of maintenance due to rails. Every lack rack I've seen just has a device screwed directly into the table legs, making it totally impossible to get at without unscrewing it from the legs.

Is it just some lovely cargo cult thing or am I missing something that makes it better than just putting your device on top of the table?


Thanks Ants
May 21, 2004

#essereFerrari


I guess you can take something out of the middle without moving everything else out of position, but I don't get it either. Lack tables are cheap poo poo and any screws you put into them are going to pull straight out under any sort of load. Proper racks get thrown out all the time; just keep an eye on Craigslist/Gumtree/eBay etc.

thebigcow
Jan 3, 2001

Bully!

titaniumone posted:

I don't understand the appeal of a lack rack. The entire point of rack mounting is space saving and ease of maintenance due to rails. Every lack rack I've seen just has a device screwed directly into the table legs, making it totally impossible to get at without unscrewing it from the legs.

Is it just some lovely cargo cult thing or am I missing something that makes it better than just putting your device on top of the table?

no, you're right

movax
Aug 30, 2008

thebigcow posted:

no, you're right

I could see a Lack rack being decent for only a switch or something, certainly not an actual rack server.

I got the deal of a lifetime from Craigslist; Dell PowerEdge 24U rack for $100.

stray
Jun 28, 2005

"It's a jet pack, Michael. What could possibly go wrong?"

SamDabbers posted:

Vanilla FreeBSD has the ability to emulate (most of) the Linux kernel API, which enables (most) 32-bit Linux binaries to run directly. You can use this to run a Linux userland in a jail, so yes, your assumption is correct :)
Ah, OK. I'm a bit sad that it can only handle 32-bit Linux, but hey, it's emulation and it's free, so who am I to bitch?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

stray posted:

Ah, OK. I'm a bit sad that it can only handle 32-bit Linux, but hey, it's emulation and it's free, so who am I to bitch?
It's not really emulation, in the same sense that Wine Is Not an Emulator. The FreeBSD/PC-BSD Linux compatibility layer is a set of API call wrappers linked to the native BSD calls underneath. In practice, like almost everything about open source stuff that isn't backed by a closed source company somewhere along its path, this actually doesn't solve that many problems, because a lot of applications will flat out not work: the wrapper set is incomplete, missing the very important epoll and inotify calls, both of which are needed to run Dropbox for Linux (someone managed to post all the unavailable calls, prefixed with DUMMY(), from the FreeBSD SVN). Epoll wasn't implemented as of FIVE YEARS ago when I first heard of the compatibility layer, and it's still not, so I doubt it will ever be done. Therefore, I can't use this for most of the stuff I want to use it for (namely, applications that are only available as closed source Linux binaries). Otherwise, it's back to trying to use FreeNAS as a virtualization host, which leads down four exasperating paths:

1. Run VMware ESXi and pass your HBAs through to a FreeNAS guest directly via VT-d. Really not an option if you just bought an i3-4130 (or most other i3 processors), which explicitly does NOT have the VT-d extensions necessary for VMware DirectPath I/O.

2. Run the BSD hypervisor, bhyve, somehow on FreeNAS. This is a lot harder than anything else in practice.

3. Go into mostly uncharted territory by running VirtualBox under FreeNAS 9+ (there are PBIs for FreeNAS 7, which might as well be considered dead).

4. Build a Rube Goldberg machine of networked machines to accomplish the basic service/task you're trying to do, e.g. SpiderOak + Dropbox on another machine.
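For reference, the stock compatibility layer discussed above takes only a few commands to enable on vanilla FreeBSD. This is a rough sketch from memory for the 9.x era; treat the package name and sysctl as illustrative, since both varied by release:

```shell
# Hedged sketch: enabling the 32-bit Linux binary compatibility layer
# ("linuxulator") on FreeBSD circa 9.x. Package names/sysctls are
# illustrative, not exact for every release.
kldload linux                              # load the compat kernel module now
echo 'linux_enable="YES"' >> /etc/rc.conf  # load it again on every boot
pkg install linux_base-f10                 # a minimal Linux userland under /compat/linux
sysctl compat.linux.osrelease              # kernel version reported to Linux binaries
```

Any Linux binary still fails the moment it touches an unimplemented syscall (epoll being the classic example), which is exactly the limitation being complained about here.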

The problem I mostly see is that the top FreeNAS forum users have gone whole-hog prosumer/small-business oriented, so they'd rather just push everyone to do the Right Thing (admittedly correct, but missing the point, as usual for most nerd-pedantic conversations) and run a separate virtualization server from the NAS/SAN stack. The Linux community's cowboy approach of "screw it, who cares if it's not practical, let's see if it even works!" has gotten them further in this respect, at the cost of the quality of what is implemented.


In short, my usual summation of "gently caress computers"

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.
I've been pondering these things too, and I ended up pulling the trigger and ordering a system based around a SuperMicro X10SL7-F MicroATX motherboard and an E3-1240v3 CPU. Anything Mini-ITX is either unavailable, crippled for virtualization, or consumer-oriented (or, according to the FreeNAS forum spergs, "low grade") stuff.

It should allow me to run ESXi, host FreeNAS inside, and forward the entire LSI controller: the whole shebang. It's annoyingly expensive, but at least it's not crippled or unsupported. And I hope I don't have to dig into any source code.

Edit: Hit enter too soon.

evol262
Nov 30, 2010
#!/usr/bin/perl

necrobobsledder posted:

It's not really emulation in the same way that Wine Is Not an Emulator. The FreeBSD/PC-BSD Linux compatibility layer is a set of API call wrappings that are linked to the native BSD calls underneath. In practice, like almost everything about Open Source stuff that isn't backed by a closed source company somewhere along its path, this actually doesn't solve that many problems because a lot of applications will flat out not work because this API wrapper set is incomplete, including the very important calls epoll and inotify, both of which are needed to run Dropbox for Linux (someone managed to post all the calls not available prefixed with DUMMY() in the FreeBSD SVN). Epoll wasn't implemented as of FIVE YEARS ago when I first heard of the compatibility layer, and it's still not, so I doubt it will ever be done.
The problem in this case is that kqueue is somewhat of a divergent path from both epoll and inotify. FreeBSD has the functionality, but it doesn't get targeted because it's small. BSD is the new Linux. The OSX dropbox client happily uses kqueue, though.

The assertion "open source stuff not backed by a closed-source company..." is 99% wrong, though. You should look at kernel, coreutils, or GCC commits by committer email address sometime.

necrobobsledder posted:

Therefore, I can't use this for most stuff I want to use this for (namely, applications that are only available as closed source Linux binaries). Otherwise, it's back to trying to use FreeNAS as a virtualization host which leads down four exasperating paths:
What closed-source binaries do you need? Dropbox and what else?

necrobobsledder posted:

1. VMware ESXi and map everything from your HBAs in FreeNAS down to your direct hardware via VT-d. Really not an option if you just bought an i3-4130 (or most other i3 processors) which explicitly does NOT have VT-d extensions necessary to run DirectPath I/O for VMware.
Not that you're wrong here, but you should point out that most consumer-grade motherboards don't have chipset support for vt-d either, and Intel doesn't require them to.

necrobobsledder posted:

2. Run the BSD hypervisor, bhyve, somehow on FreeNAS. This is a lot harder than anything else in practice.

3. Go into mostly uncharted territory by running Virtualbox under FreeNAS 9+ (there are PBIs for FreeNas 7 which might as well be considered dead).
Why would you not do this the opposite way and pass raw ZFS volumes to a FreeNAS guest from bhyve or whatever? FreeNAS is an appliance.

necrobobsledder posted:

The problems I mostly see is that the FreeNAS forum top users are all about going full hog prosumer / small business oriented and so they'd rather just push everyone to do the Right Thing (admittedly correct but missing the point as usual for most nerd pedantic conversations) and have a separate virtualization server from the NAS / SAN stack. The Linux community's cowboy approach of "screw it, who cares if it's not practical, let's see if it even works!" has gotten them further in this respect at the cost of quality of what is implemented.

The Linux community doesn't work that way either. There are a lot of projects which expect Linux or GCC despite using autotools, granted, but the assertion that Linux is somehow less "quality" is absurd.

You should virtualize FreeNAS on top of bhyve. Even vbox on FreeBSD would be fine. But it's intended to be at the top of the stack, not the bottom. Don't rail against the system because you're trying to put a square peg in a round hole.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Paul MaudDib posted:

What's the cheapest way to get a server rack? I've got some servers that I picked up cheap, but no rack. I've got a total of 5U of servers, 1x1U and 2x2U, so I probably want like 7U for a bit of airflow and expandability. Can I buy the rails and make one myself, or is there something cheap enough to make it not worth it? Craigslist maybe?

I made my own with wood and some rack posts. If I recall correctly, it cost me about $80 for the lumber and rack posts.

Yours will be a ton cheaper since you only want a 7U cabinet.

SamDabbers
May 26, 2003



Agrikk posted:

Yours will be a ton cheaper since you only want a 7U cabinet.

Way cheap if you only need a 2-poster.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

evol262 posted:

The assertion "open source stuff not backed by a closed-source company..." is 99% wrong, though. You should look at kernel, coreutils, or GCC commits by committer email address sometime.
I don't consider the e-mail address that relevant, particularly for the kernel. We may interpret the data differently, but I'd call it at least 50% closed-source-company-backed, because an e-mail address is just one part of a person's affiliation. Fundamentally, I think we're just disagreeing over semantics, and I'm a self-described grumpy troll. I work on an Apache Foundation project at work daily, but we have several commercial products that bring in the :10bux: and that's partly where we'll focus the open source efforts if my company's dumping developer dollars.

quote:

What closed-source binaries do you need? Dropbox and what else?
Currently just Dropbox, but I suspect I'll run into some issues trying to run Google Apps management apps (next on my list of attempts at cloud sync with better mobile app support) and some media converters like Handbrake. The only others on my radar would be icc, some VMware management client utilities I haven't tested (not the VMware Tools guest agent), or stuff that requires a rather involved stack, like iTunes on WINE under FreeBSD, which is really icky on a headless server but kind of required for the sort of media storage system I'm still working on to get both nerd practicality/features and wife-acceptability UX factors.

quote:

Not that you're wrong here, but you should point out that most consumer-grade motherboards don't have chipset support for vt-d either, and Intel doesn't require them to.
I dislike repeating myself in a thread this huge, and there are several posts in this thread that probably describe it, if not elsewhere on the Internet. I know I've posted about ECC / VT-x / VT-d support in better depth before this at the very least. I'm not exactly a reference, just a data point. It takes more effort than I'm willing to put in to properly compile all this info into something better.

quote:

Why would you not do this the opposite way and pass raw ZFS volumes to a FreeNAS guest from bhyve or whatever? FreeNAS is an appliance.
That's basically the option I mentioned via ESXi instead of bhyve, which is only possible with full VT-d support, because any extra layer between you and the disks is a potential stability risk, partly founded on paranoia as well as on the intrinsic simplicity of fewer moving parts being a Good Thing. There's a very prominent stickied post on their forums against virtualizing FreeNAS unless you really know what you're doing, which winds up leading you to question whether you're doing anything right, even as a professional, given how obtuse and complicated this can be. After reading through it, I don't see a lot of my own concerns addressed by other posters either, in terms of testing and professional production concerns.

quote:

The Linux community doesn't work that way either. There are a lot of projects which expect Linux or GCC despite using autotools, granted, but the assertion that Linux is somehow less "quality" is absurd.
The Linux community is nowadays large and diverse enough that any unilateral judgment call but the most useless is probably incorrect by default and should be an automatic troll alert, as I'll readily admit. But what I'm really getting at is that Linux is the go-to platform for support ahead of FreeBSD, implying first-run implementations with little maturity (there are exceptions, with features like jails drifting over to Linux as containers). Maturity and quality are synonymous when it comes to the thread's subject matter of Protect Ya Bits, I'd presume. I don't care if it was written in King Kernighan's C by the immaculate-conception spawn of Linus Torvalds, Bill Gates, and Richard Stallman - Baby Jesus is still a baby and isn't going to be the foundation for 10TB of anyone's data they would mind losing, right out of the gate.

quote:

You should virtualize FreeNAS on top of bhyve. Even vbox on freebsd would be fine. But it's intended to be at the top of the stack, not the bottom.
The stack placement I can understand and would debate a bit, but a more fundamental question is: why should someone virtualize an appliance that stores personal data on top of something that is still in pre-release form? http://www.phoronix.com/scan.php?page=news_item&px=MTUwODY. Some of the more respected FreeNAS forum posters are encouraging people to try running virtualized servers under jails in FreeNAS, so I'm not alone in trying exotic FreeNAS-inception sorts of ideas, but I'm pretty sure most users aren't talking about FreeNAS virtualization in any context other than testing or on top of ESXi (maybe KVM). I don't really consider FreeNAS just an appliance, given that it encourages various plugins within jails that have the same effective impact as appliances in themselves. That is, FreeNAS is not an atomic unit of a "business service."

And I'll have to go wash my eyes out after writing that term again in my life.

Psimitry
Jun 3, 2003

Hostile negotiations since 1978
I'm hoping someone on here might have some advice. I was having issues with my router (an Asus RT-N16 with DD-WRT installed) and its port forwarding. Actually, in truth, other than its basic functions (routing, network access), nothing in DD-WRT worked (QoS, protocol blocking, etc.). So I decided to switch back to my old D-Link DIR-655.

Once everything was set up, things appeared to be working OK. I was able to access my NAS4Free box through my bedroom HTPC, and life was good. However, for some reason my internet wasn't working on said HTPC. No worries. I'll just reboot.

After rebooting, everything on the internet side was working wonderfully. However, my NAS4Free box was suddenly missing. Nothing had changed, but I have problems accessing it from Windows 8 all the time (which is a whole separate post I'll make later). This time, though, I couldn't access it from the web GUI either.

I have a monitor plugged into it right now; it's up and running at 192.168.1.250, and it's plugged directly into the router. Connection lights are flashing, all should be good... except it's not. The router control panel says there are only two units currently connected to the LAN: my bedroom PC and my desktop.

I'm at a loss. Help?

IOwnCalculus
Apr 2, 2003





This reeks of an IP conflict. Did you assign 192.168.1.250 on the storage box directly? If it's in your router's DHCP range, one of your other computers is probably also getting that IP assigned by the router and it's making everything poo poo bricks.
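One quick way to test that duplicate-IP theory from any Linux box on the same LAN; the interface name and address below are placeholders for this particular network:

```shell
# Hedged sketch: duplicate address detection with iputils arping.
# -D probes from a 0.0.0.0 source; any reply means another host
# already claims the address.
arping -D -c 2 -I eth0 192.168.1.250
# See which MAC last answered for that IP in the local ARP cache:
arp -a | grep 192.168.1.250
```

If two different MACs ever show up for the same address, the conflict is confirmed.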

Psimitry
Jun 3, 2003

Hostile negotiations since 1978

IOwnCalculus posted:

This reeks of an IP conflict. Did you assign 192.168.1.250 on the storage box directly? If it's in your router's DHCP range, one of your other computers is probably also getting that IP assigned by the router and it's making everything poo poo bricks.

It was definitely something like that. I disconnected the entire network except for my desktop and the NAS box, rebooted the NAS, and it snagged a new IP (in this case 192.168.0.102). I went into the router and set a DHCP reservation for the NAS, so hopefully that should prevent the problem in the future. Thanks!

Psimitry
Jun 3, 2003

Hostile negotiations since 1978
Well... poo poo. Despite the DHCP reservation, the NAS stops connecting as soon as I connect the line for the rest of the house. Very strange...

(networking never was my strong suit)

Edit: finally got it. Shut my NAS down, disconnected it from the network, reconnected the rest of the house, started up every computer in the house so the router could assign IPs, saved reservations for every PC in the house, reconnected my NAS, started it up, got a new IP, saved the reservation, and everything is working (for now).

Yeesh. What a pain.

Psimitry fucked around with this message at 05:48 on Dec 31, 2013

evol262
Nov 30, 2010
#!/usr/bin/perl

Sorry, I wasn't trying to start a big thing or put you on your back foot, so I'll keep this short.

I'll preface this by saying that I work on virtualization for Redhat, which colors my impression on all of these things.

I didn't mean to imply that you didn't know about the chipset requirement for VT-d; I just like to mention it as often as possible, since people frequently conflate "my processor supports VT-d" with "I have VT-d support on this build," and I like to reiterate the difference. Nothing against you.

Bhyve is pretty stable and capable for a new hypervisor these days. It's solid on OpenBSD and FreeBSD guests, at least, and mostly Linux. The big difference for your case is that you can still use ZFS on the host and pass in raw zvols to get the advantages of ZFS, vanilla BSD, and FreeNAS all at once.

Believe me, I know it's more convenient to have a one-size-fits-all environment, but bhyve with zvols (or SmartOS with the same) is almost as good, and much better than RDM on VMware or wedging a hypervisor into FreeNAS. It's designed to make managing iSCSI, disks, and NFS easy. ZFS already makes disks so easy that FreeNAS is practically redundant. Let it just handle the NAS/SAN parts.
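The zvol arrangement being recommended here is only a couple of host-side commands; a sketch, with the pool and volume names invented for illustration:

```shell
# Hedged sketch: carve a raw zvol out of the host pool and hand it to
# the guest as its disk, keeping ZFS checksums and snapshots underneath.
zfs create -V 200G tank/vm/freenas0          # raw block volume for the guest
zfs snapshot tank/vm/freenas0@clean-install  # host-side snapshot of the guest disk
# The volume appears at /dev/zvol/tank/vm/freenas0 and can be attached
# to bhyve (or VirtualBox) as a virtio block device.
```

The point is that the guest sees a plain disk while the host retains all the ZFS data-integrity machinery, which is what makes this preferable to RDM or file-backed images.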

Lastly, my impression of open-source is heavily colored by involvement. I see a lot of closed source companies file RFEs and write code when the response is "patches welcome (and we won't support your product or feature without you doing the initial grunt work)". I don't see them do it otherwise. And when they do, it's generally "this barely works, but now upstream accepted it and the maintenance burden, so we'll take credit" Which is smart business, and I can't fault them, but it's really hard for me to say that a meaningful portion of development is done by closed source companies.

BlankSystemDaemon
Mar 13, 2009



If you're good enough at FreeBSD to set up bhyve and have prosumer/enterprise hardware, why are you running a NAS appliance? I thought the whole point of NAS appliances was to make it possible for less technically minded people to configure and maintain a NAS on consumer-grade hardware with a UI of some description.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

evol262 posted:

Believe me, I know it's more convenient to have a one size fits all environment, but bhyve with zvols (or smartos with the same) is almost as good, and much better than rdm on vmware or wedging a hypervisor into freenas. It's designed to make managing iscsi, disks, and NFS easy. ZFS already makes disks so easy freenas is practically redundant. Let it just handle the nas/San parts.

Lastly, my impression of open-source is heavily colored by involvement. I see a lot of closed source companies file RFEs and write code when the response is "patches welcome (and we won't support your product or feature without you doing the initial grunt work)". I don't see them do it otherwise. And when they do, it's generally "this barely works, but now upstream accepted it and the maintenance burden, so we'll take credit" Which is smart business, and I can't fault them, but it's really hard for me to say that a meaningful portion of development is done by closed source companies.
Well, I have a different view with a different involvement history. I have a VMware cert, worked at one of the vendors you're likely griping about, and I hack on an Apache big data project doing cross-platform features for work. Years ago it was Red Hat that kept pushing to add themselves and asking for something for nothing from us, when I knew they had more resources to put in than we did. I could never figure out if they were just clueless about my cues, being penny pinchers, or playing politics, which is odd given we didn't compete and had no intent to either.

But I get where you're coming from now, and it finally makes sense.

D. Ebdrup posted:

If you're good enough at FreeBSD to setup bhyve and have prosumer/enterprise, why are you running a NAS appliance? I thought the whole point of NAS appliances was to make it possible for people not as technically minded to configure and maintain a NAS on consumer-grade hardware with a UI of some description.
I know how to do most of what FreeNAS packages up for me (and some implementation / convention choices I take issue with), but the primary point of me using it is basically calculated laziness. I don't have to do anything besides install it and click a few times instead of trying to remember all the different flags for smbmount, zfs create, netatalk, etc. after something fried and I had to reinstall (configuration export in FreeNAS is neat but I don't know what it can save honestly - beats what I do by default). If all FreeNAS offered was literally checkboxes for CIFS, iSCSI, NFS, and AFP with CRUD for different RAID volumes, then I wouldn't be using it. When something goes wrong, I want to be down for as little as possible. I could just back up my NAS system disk but I've never been very good with remembering to backup everything there and correctly, so I just backup data and not configuration. A backup with good data is better than one that's going to screw stuff up when you restore it. I know IT best practices as a professional, but I'm not going to follow most of them at home besides the most critical.
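For a sense of what those checkboxes are wrapping, the hand-rolled equivalents look roughly like this; flags are from memory and the dataset names are invented:

```shell
# Hedged sketch: manual versions of what FreeNAS's UI automates.
zfs create -o compression=lz4 tank/media  # dataset backing a share
zfs set sharenfs=on tank/media            # NFS export via a ZFS property
zfs set sharesmb=on tank/media            # CIFS export (on FreeBSD this still
                                          # needs Samba configured underneath)
zfs snapshot -r tank@nightly              # the sort of task the UI schedules for you
```

Each line is trivial on its own; the "calculated laziness" argument is that remembering all of them correctly at 2 a.m. after a reinstall is not.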

I also get the impression that there's a company that offers service & support for a more enterprisey version of FreeNAS that is where a lot of the project support comes from.

evol262
Nov 30, 2010
#!/usr/bin/perl

necrobobsledder posted:

Well, I have a different view with a different involvement history. I have a VMware cert, worked at one of the vendors you're likely having gripes about, and I hack on an Apache Big Data project doing cross platform features for work. Years ago it was Redhat that kept pushing to add themselves and ask for something for nothing from us when I knew they had more resources to put in than we did. I could never figure out if they were just clueless about my cues, being penny pinchers, or playing politics, which is odd given we didn't compete and had no intent to either.
The vendors I have gripes about are mostly tier 1 OEMs. Still, you're right that we push for involvement and file RFEs. I guess the difference is that if you don't pick it up and we care, we actually hire or reassign someone to develop and maintain it, and they usually end up working on other features of the project too. We try to be a good citizen (better than we used to be). Vendors who won't be named hassle PMs through reps, offer to loan hardware, and do development as a last resort, usually as a one-off package that we're expected to integrate and maintain.

The work you do is almost exactly what I meant. Again, it's smart business, but you can bet

I have a VMware cert, too. No gripes. I like VMware. But RDM sucks. Especially for FreeNAS.

necrobobsledder posted:

I know how to do most of what FreeNAS packages up for me (and some implementation / convention choices I take issue with), but the primary point of me using it is basically calculated laziness. I don't have to do anything besides install it and click a few times instead of trying to remember all the different flags for smbmount, zfs create, netatalk, etc. after something fried and I had to reinstall (configuration export in FreeNAS is neat but I don't know what it can save honestly - beats what I do by default). If all FreeNAS offered was literally checkboxes for CIFS, iSCSI, NFS, and AFP with CRUD for different RAID volumes, then I wouldn't be using it.

I use FreeNAS for my setup as well, for the same reasons, albeit I virtualize it, and I only need to remember how to create raw zvols (I'm using illumos as a host). It's not perfect, but it's the least bad situation if you don't want to duplicate hardware and don't have passthrough support.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

evol262 posted:

The vendors I have gripes about are mostly tier 1 OEMS.
Yeah, keep going, you only have four left to go through, but it was one of them. Depending on how long you've been there, I may have even been on a call with you about libvirt bindings way back.

quote:

It's not perfect, but it's the least bad situation if you don't want to duplicate hardware and don't have passthrough support.
I think it'd be easier to work around the Dropbox snag by changing the workflow to push-to rather than pull-from with respect to the storage component. Far better to change one integration direction than to rework the whole thing and destabilize it. The software setup is beyond the scope of the thread entirely, though.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Anyone have any tips for making the monthly scrub of a 24TB zfs pool not consume so many resources?

It takes my system load to somewhere between 4 and 12 for ~55 hours at the beginning of every month, whereas the system load averages around 2 the rest of the month.
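ZFS-on-Linux of that era (0.6.x) exposed scrub throttling through kernel module parameters; the names changed between releases, so treat this as a sketch rather than gospel:

```shell
# Hedged sketch: throttling scrub I/O via ZoL module parameters.
cat /sys/module/zfs/parameters/zfs_scrub_delay       # ticks idled between scrub I/Os
echo 8 > /sys/module/zfs/parameters/zfs_scrub_delay  # larger = gentler but slower scrub
# Or just move the scrub to a quiet window via cron and watch progress:
zpool status tank | grep scan
```

Raising the delay trades scrub duration for interactive responsiveness, which may be the right trade if the box is otherwise busy at the start of the month.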

Thermopyle fucked around with this message at 18:08 on Jan 1, 2014

SamDabbers
May 26, 2003



Thermopyle posted:

Anyone have any tips for making the monthly scrub of a 24TB zfs pool not consume so many resources?

It takes my system load to somewhere between 4 and 12 for ~55 hours at the beginning of every month, whereas the system load averages around 2 the rest of the month.

What else besides storage-related (CIFS/NFS/iSCSI) processes is running on your server for it to have such a high load average? What are the hardware specs of the box? Are you using compression or dedup? Which OS?

My server takes about 8.5 hours to scrub a zpool with half that amount of raw space that's about 65% full.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

SamDabbers posted:

What else beside storage-related (cifs/nfs/iscsi) processes is running on your server for it to have such a high load average?

If you mean the average load of 2, I guess I wouldn't call that particularly high, but then maybe I'm just used to that after using the server for years for various projects. I'd say the majority of that load for the past couple months is a python application I developed which is doing near real-time analysis of usenet postings (I was curious about types of posts, types of videos and their codecs, and a bunch of nerdy stats). Most of the load here is CPU.

If you mean the load average during the scrub, it's all z_rd_int, and thus the reason for my question.

SamDabbers posted:

What are the hardware specs of the box?

It's an Intel Q6600 with a "measly" 4GB of RAM. (DDR2 is too expensive nowadays so I will eventually just upgrade the whole server)

SamDabbers posted:

Are you using compression or dedup?
Nope

SamDabbers posted:

Which OS?

Ubuntu Server 12.10

SamDabbers posted:

My server takes about 8.5 hours to scrub a zpool with half that amount of raw space that's about 65% full.
This pool is 90% full.

SamDabbers
May 26, 2003



Thermopyle posted:

4GB of RAM. (DDR2 is too expensive nowadays so I will eventually just upgrade the whole server)

This is likely the main contributor to your slow scrub times. ZFS can and will use all the memory you can give it; one rule of thumb I've read is to give it 1GB of RAM for each 1TB of raw disk space. I've also read that that's horseshit, but 8GB is considered the recommended minimum for ZFS on FreeBSD. It's likely that the system is swapping like crazy while the scrub is happening, so in lieu of upgrading the whole server or buying pricey DDR2, maybe you could stick an SSD in there for faster swap space? ZFS performance can also benefit from an SSD L2ARC (read cache) and ZIL (write log), but I'm not sure either of those will actually help a scrub.
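If the SSD route gets tried, attaching cache and log devices to an existing pool is a one-liner each; the device paths below are placeholders:

```shell
# Hedged sketch: attach SSD partitions to an existing pool named "tank".
zpool add tank cache /dev/sdc1  # L2ARC read cache: safe to lose, size freely
zpool add tank log /dev/sdc2    # SLOG: only accelerates synchronous writes
zpool status tank               # both devices show up in the pool layout
```

Note the asymmetry: losing an L2ARC device is harmless, while a dedicated log device holds in-flight synchronous writes, so people often mirror the log but not the cache.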

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

SamDabbers posted:

This is likely the main contributor to your slow scrub times. ZFS can and will use all the memory you can give it, and I've read that a rule of thumb for ZFS is to give it 1GB RAM for each 1TB of raw disk space. I've also read that that's horseshit, but 8GB RAM is considered the recommended minimum for ZFS on FreeBSD. It's likely that the system is swapping like crazy when the scrub is happening, so in lieu of upgrading the whole server or buying pricey DDR2, maybe you could stick an SSD in there for faster swap space? ZFS performance can also benefit from SSD L2ARC (read cache) and ZIL (write cache), but I'm not sure either of those will actually help for a scrub.

Good idea. I'll try that out.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


I just picked up a very, very, very cheap D-Link DSN-3400-10, new, on eBay. It's an SMB-targeted, 10GbE, 15-bay iSCSI SAN array, and it comes without disks, so assuming it's not a price error and I actually receive it, I'll have a bit of populating to do. I'll most likely run RAID 5 with a hot spare and carve it all into a single LUN for my Windows file server over 10GbE.

Has anyone had good luck with the Seagate NAS drive lineup? I'm torn between those and the WD Reds. They seem to dance around each other in price points so it will probably come down to how many good things I can hear about each brand.

SamDabbers
May 26, 2003



Cenodoxus posted:

Has anyone had good luck with the Seagate NAS drive lineup? I'm torn between those and the WD Reds. They seem to dance around each other in price points so it will probably come down to how many good things I can hear about each brand.

Get some of each for your array. That way you'll have less chance of a bad batch from one manufacturer causing you to lose data. Also, a 14-drive RAID 5 might not be such a good idea.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Cenodoxus posted:

I just picked up a very, very, very cheap D-Link DSN-3400-10 new on eBay.

Something tells me that isn't going to come through. Just for fun though, I threw my greenbacks in the ring as well.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done



Good god, nothing quite like statistics to strike fear into your heart. :gonk: I do have some concerns with the math and assumptions in the first article, but I'm glad you brought them up. I haven't seen those before.

I'll still have my data on an online backup plan (thank god iSCSI counts as "locally attached") so a URE would be a major annoyance at best, but I'd almost rather not tempt fate.

I know enterprises are in the clear for a while longer since SAS drives sit at around 1 in 10^16, but there must be dozens of goons in this thread with ankle fetish stashes on SATA arrays much bigger than 12TB. How do they cope? Also makes me wonder about the stability of 12TB+ ZFS instances.
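
Back-of-the-envelope on those URE numbers, using the approximation p ≈ 1 − exp(−bits × rate), which holds since the per-bit rate is tiny:

```shell
# Odds of hitting at least one URE while reading 12TB at the consumer
# 1-in-10^14 bits spec: p ~= 1 - exp(-bits * rate), printed as a percentage.
awk 'BEGIN { bits = 12e12 * 8; rate = 1e-14; printf "%.1f\n", (1 - exp(-bits * rate)) * 100 }'
```

Roughly a 60% chance of at least one unrecoverable read somewhere in a full 12TB pass, if you take the spec sheet at face value — which is exactly why those articles are scary.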

Moey posted:

Something tells me that isn't going to come through. Just for fun though, I threw my greenbacks in the ring as well.

My pessimistic side is waiting for the "Sorry, it's out of stock" email to come through. I found the seller's website and the price+shipping was the same, so it must not be a listing error.

I was on the fence about whether to bet my money on it, but eBay Buyer Protection made the decision easier.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Cenodoxus posted:

I was on the fence about whether to bet my money on it, but eBay Buyer Protection made the decision easier.

Pretty much nothing to lose. Would be real interesting to play with.

the_lion
Jun 8, 2010

On the hunt for prey... :D
Today I lost 6TB to two external seagate drives. Most of it I got back, because I tend to spread things to multiple disks.

I'm looking at the smart, cost effective way to backup. I tend to do a lot of large video projects and i'm on a mac.

Tape is probably out of my cost reach, but I was considering maybe a RAID setup? I read the OP, I probably just want RAID 1 since i'm backing projects up and not really revisiting them often.

Any advice besides avoid Drobo?

Mr Shiny Pants
Nov 12, 2012

the_lion posted:

Today I lost 6TB to two external seagate drives. Most of it I got back, because I tend to spread things to multiple disks.

I'm looking at the smart, cost effective way to backup. I tend to do a lot of large video projects and i'm on a mac.

Tape is probably out of my cost reach, but I was considering maybe a RAID setup? I read the OP, I probably just want RAID 1 since i'm backing projects up and not really revisiting them often.

Any advice besides avoid Drobo?

The easy way, get a Synology that can host at least four disks. Giving you Raid 5.

My personal preference: Get a HP Microserver insert 4 disks and run something like Freenas with ZFS as the filesystem.

Tornhelm
Jul 26, 2008

Mr Shiny Pants posted:

The easy way, get a Synology that can host at least four disks. Giving you Raid 5.

My personal preference: Get a HP Microserver insert 4 disks and run something like Freenas with ZFS as the filesystem.

Or even better, Grab a Microserver and put XPEnology (pretty much roll-your-own Synology) on it. Pretty much the best of both worlds.

the_lion
Jun 8, 2010

On the hunt for prey... :D

Mr Shiny Pants posted:

The easy way, get a Synology that can host at least four disks. Giving you Raid 5.

My personal preference: Get a HP Microserver insert 4 disks and run something like Freenas with ZFS as the filesystem.

Since i'm new to this, is there any downside to ZFS?

Tornhelm posted:

Or even better, Grab a Microserver and put XPEnology (pretty much roll-your-own Synology) on it. Pretty much the best of both worlds.

What do you mean by best of both worlds?
Also, with any of these options do I need to buy all the drives at the same time and of the same type? I was thinking of starting small and then adding later.

Tornhelm
Jul 26, 2008

the_lion posted:

What do you mean by best of both worlds?

XPEnology is pretty much the Synology operating system, built from the GPL source code that Synology provides. So when you run it on a Microserver (or any other computer, really), you're essentially getting a higher-powered NAS for cheaper than a real Synology box, with minimal effort to set it up.

If you're using any good NAS (Synology/QNAP/etc) then you don't need to buy all the drives at once, they *should* expand the storage to fit however many drives you have at the time.

Mr Shiny Pants
Nov 12, 2012

the_lion posted:

Since i'm new to this, is there any downside to ZFS?


What do you mean by best of both worlds?
Also, with any of these options do I need to buy all the drives at the same time and of the same type? I was thinking of starting small and then adding later.

ZFS is probably the best filesystem in existence right now. No hyperbole or anything. There are some contenders but it has proven to be stable and above all else it will protect your data like other filesystems will not.

Especially with these large drives (2TB and higher) data integrity should be your first priority.

If you care about your data, ZFS will do wonders.

Downsides: if you don't want to go with something like ZFS Guru or napp-it, you will need some command-line knowledge of Linux, Solaris or FreeBSD. Also RAM: ZFS wants RAM, and lots of it.
ZFS guru: http://zfsguru.com

Personally I run ZFS on Linux on Ubuntu and it works as advertised.
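
To give a flavour of the command-line side, a minimal sketch (pool name and FreeBSD-style device names da1-da4 are placeholders):

```shell
# Four-disk RAIDZ2 pool: any two drives can die without data loss
zpool create tank raidz2 da1 da2 da3 da4
zfs set compression=lz4 tank   # cheap CPU-side win on most data
zfs create tank/projects       # datasets are cheap; one per share/project
zpool scrub tank               # verify checksums across the whole pool
zpool status tank              # scrub progress and any repaired errors
```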

Mr Shiny Pants fucked around with this message at 14:29 on Jan 2, 2014

BlankSystemDaemon
Mar 13, 2009



Remember that RAID is not backup! It's just an acronym for Redundant Array of Inexpensive/Independent Disks. As such it only serves to protect your data from otherwise catastrophic hardware failure that would cause data loss within the scope of whatever parity level you're running at - i.e. one disk's worth of parity can protect against one disk failure, two disks' worth of parity can protect against two disk failures, and so forth.

Software RAID (ZFS, along with lesser alternatives) mitigates hardware failure even more, as it means the hardware you run your array on is irrelevant (which is not the case with hardware RAID - if your hardware RAID controller dies, you have to get one that works exactly the same way, which basically means one exactly like the one you had, as there's no guarantee that new controllers will work the way the old one did, even if they're by the same manufacturer).

If you really care about the data, a good backup strategy is at least one cold backup in weekly rotation on-location, at least one off-site cold backup on a fortnightly or monthly schedule, and an online solution like Carbonite or SpiderOak.
If it's easily replaceable data, you can get away with just a cold backup on a monthly schedule.
In all cases, however - it comes down to how easy the data is to replicate vs. the cost, in time and money, of maintaining the backup.

As for a recommendation for what to go with: A HP N36L/N40L/N54L Microserver (whatever's cheapest), 8GB ECC memory, 4 disks and a USB disk + FreeNAS and a weekend of reading documentation and trying things out is a good start.
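
For the cold-backup rotation, ZFS snapshots plus send/receive make it nearly painless. A sketch, assuming a dataset tank/projects and a pool called usbdisk on the USB disk (names hypothetical):

```shell
# First full replication to the backup pool
zfs snapshot tank/projects@weekly-2014-01-02
zfs send tank/projects@weekly-2014-01-02 | zfs receive usbdisk/projects
# Next week, send only the delta between the two snapshots
zfs snapshot tank/projects@weekly-2014-01-09
zfs send -i @weekly-2014-01-02 tank/projects@weekly-2014-01-09 | \
    zfs receive -F usbdisk/projects   # -F rolls back stray changes on the target
```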

BlankSystemDaemon fucked around with this message at 15:15 on Jan 2, 2014

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Cenodoxus posted:

I know enterprises are in the clear for a while longer since SAS drives sit at around 1 in 10^16, but there must be dozens of goons in this thread with ankle fetish stashes on SATA arrays much bigger than 12TB. How do they cope? Also makes me wonder about the stability of 12TB+ ZFS instances.

RAID 6. RAIDZ2. Multiple pools.

Oh, and for the question about downsides of ZFS: You still can't expand vdev's with new devices, which is still my biggest frustration having moved to ZFS from mdadm.

Thermopyle fucked around with this message at 15:27 on Jan 2, 2014

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

the_lion posted:

Since i'm new to this, is there any downside to ZFS?

1. Increased system requirements compared to a basic mdraid NAS (mdraid being what's been standard on Linux for about a decade and a half)
2. Increased urgency for ECC since there is actual evidence of people losing data on ZFS that ECC could have prevented (this should be baseline today IMO though given how unreliable storage itself is becoming while we rely upon it more)
3. Some OS restrictions: Solaris and FreeBSD have the best / most stable implementations, while Linux is nearly two generations or so behind them in its ZFS implementation. This leads to...
4. Hardware requirements get a little more restrictive than if you were using some Windows software RAID so you now need to pick out the parts, which are thankfully easy to acquire and inexpensive
5. ZFS design presently mandates that you can't incrementally add storage by adding disks one at a time in practice - you generally need to upgrade all the drives in an array at once or painfully slowly one at a time. This is pretty standard for most corporate / business environments but very difficult to justify oftentimes for home users. There's all sorts of implied risks with this sort of upgrade path if you don't plan carefully.
6. (A superficial nitpick IMO) Not technically as mature as the old standby Unix volume management systems such as LVM and SVM. ZFS tried to address the primary gripe with those - the sheer number of commands - by condensing everything into two now very overloaded commands, zpool and zfs. This is like saying that C is not as mature as Fortran though...
7. ZFS was intended for use in professional IT environments primarily, not your cheapo home machine with Linux ISOs and anime fandubs. Hence a great deal of the above "downsides" are not downsides to its target users and its future direction will reflect that presumption.

Otherwise, the criticisms against ZFS are endemic to software RAID or Unix-centric RAID systems as a whole.

Adbot
ADBOT LOVES YOU

BlankSystemDaemon
Mar 13, 2009



necrobobsledder posted:

5. ZFS design presently mandates that you can't incrementally add storage by adding disks one at a time in practice - you generally need to upgrade all the drives in an array at once or painfully slowly one at a time. This is pretty standard for most corporate / business environments but very difficult to justify oftentimes for home users. There's all sorts of implied risks with this sort of upgrade path if you don't plan carefully.
No. A zpool consists of vdevs and you can have millions of those. Need to add more storage? Just add an additional set of disks with their own parity to your existing pool. That way you're only limited by the amount of connectors (sata, esata, pci-ex, usb3, thunderbolt, et cetera) that you have in your system - and with up to 28 port pci-ex cards plus external cases, you can have quite a few vdevs.
Block pointer rewrite, the feature necessary for zfs to be able to grow a single vdev when adding additional disks, is coming - but nobody really knows when, and it's been in the pipeline for a long time.
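
Concretely, the two expansion paths look like this (pool and device names hypothetical):

```shell
# Pool "tank" has one 4-disk raidz1 vdev (da1-da4). You can't widen that vdev,
# but you can stripe a second vdev, with its own parity, next to it:
zpool add tank raidz1 da5 da6 da7 da8
# Or grow in place by swapping each disk for a bigger one, one at a time:
zpool replace tank da1 da9
zpool set autoexpand=on tank   # pool grows once every disk in the vdev is larger
```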

BlankSystemDaemon fucked around with this message at 23:56 on Jan 2, 2014

  • 1
  • 2
  • 3
  • 4
  • 5
  • Post
  • Reply