evil_bunnY
Apr 2, 2003

Internet Explorer posted:

Sounded like he was more talking about the hardware/software itself.
Yes. Sorry you felt this was directed at you, Cpt.Wacky, it's not what I meant.


in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

the spyder posted:

I support 1.4PB of ZFS-based storage across our two sites. In the field, we have over 4PB of ZFS storage. Our older systems are all based on NexentaStor and everything in the last year on OpenIndiana, both running Napp-it 0.8h. Half is linked via InfiniBand to our cluster; the other half is archival/local storage. I just ordered another 100TB of disks/chassis: one will be for consolidating our VMs' local storage and completely SSD-based, the other two for local backup storage. I am very eager to see what 22x 240GB Intel 520 series will do performance-wise. The other two are 48 3TB 7200rpm spindles. We have suffered some pretty bad disk failures thanks to the dance club downstairs, but not once lost data. (24 failed drives in 12 months.) If you have any specifics, let me know. Everything is based on Supermicro dual quad-core Xeons, 96GB+ of RAM, LSI HBAs, and Seagate/Intel drives.

How's the IB support working for you? NFS over IPoIB, I assume?

Cpt.Wacky
Apr 17, 2005

evil_bunnY posted:

Yes. Sorry you felt this was directed at you, Cpt.Wacky, it's not what I meant.

No problem, I'm a little sensitive about it since I've been blocked in all my attempts to get proper equipment and do things right, not just with storage.

evil_bunnY
Apr 2, 2003

Cpt.Wacky posted:

No problem, I'm a little sensitive about it since I've been blocked in all my attempts to get proper equipment and do things right, not just with storage.
Time to YOTJ the gently caress out!

the spyder
Feb 18, 2011

PCjr sidecar posted:

How's the IB support working for you? NFS over IPoIB, I assume?

Outside of some IP address issues in the field due to being on generator power, pretty decent. Our main app (don't laugh) is SMB and sees 3.2GB/s throughput. In house I am using NFS over copper 10GbE. Outside of rsyncing our primary/backup NAS, I can't max it out- yet.

Mierdaan
Sep 14, 2004

Pillbug

the spyder posted:

We have suffered some pretty bad disk failures thanks to the dance club downstairs

This is just wonderful.

Novo
May 13, 2003

Stercorem pro cerebro habes
Soiled Meat

the spyder posted:

I support 1.4PB of ZFS-based storage across our two sites. In the field, we have over 4PB of ZFS storage. Our older systems are all based on NexentaStor and everything in the last year on OpenIndiana, both running Napp-it 0.8h. Half is linked via InfiniBand to our cluster; the other half is archival/local storage. I just ordered another 100TB of disks/chassis: one will be for consolidating our VMs' local storage and completely SSD-based, the other two for local backup storage. I am very eager to see what 22x 240GB Intel 520 series will do performance-wise. The other two are 48 3TB 7200rpm spindles. We have suffered some pretty bad disk failures thanks to the dance club downstairs, but not once lost data. (24 failed drives in 12 months.) If you have any specifics, let me know. Everything is based on Supermicro dual quad-core Xeons, 96GB+ of RAM, LSI HBAs, and Seagate/Intel drives.

Is there a reason you used NexentaStor vs. OmniOS or FreeBSD? I am seriously considering FreeNAS right now because we're relatively small (only need ~20TB in the near future) and it seems that paid support is available. If I quit and the next person isn't familiar with ZFS, at least they'll be able to call support and use the web administration.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We've got 6,000 user profiles and growing, and the number and size have become too much to manage on a single server. We'd like to split them up onto multiple (virtual) servers and use DFS to create a single namespace. I was hoping that we could create a Profiles share on each server and then use DFS to merge them into a single giant profiles share. Now that we've dug into it, I guess that's not really how DFS works.

So I'm wondering if there's some easy way to manage all of this that I'm just missing?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

We've got 6000 and growing user profiles, and the number and size has become too much to manage on a single server
Why/how?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

FISHMANPET posted:

We've got 6000 and growing user profiles, and the number and size has become too much to manage on a single server. We'd like to split them up onto multiple (virtual) servers and use DFS to create a single namespace. I was hoping that we could create a Profiles share on each server, and then use DFS to merge them into a single giant profiles share. Now that we've dug into it, I guess that's not really how DFS works.

So I'm wondering if there's some easy way to manage all of this that I'm just missing?

Is this a single site? My answer would be to somehow split things up by site/department/job function, or something. Our larger call center has ~450 users at any given time and our profile server there is a loving beast. I can't imagine 6K profiles. Shoot me now.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Well the old server doesn't have enough space or IOPS, so we're migrating to a new server(s) with different backend storage. Performance and space won't be a problem anymore. But we currently have, and will continue to have, a problem with backups: mainly, a full backup of all the data takes longer than 24 hours, so it merges into the next day's diff, and all sorts of bad things happen. Ideally we'd like to split it into 7 servers, so we can do 1/7th of the data with a full backup each day.

Now we could do this just as easily by having 7 folders on a single LUN on a single server, or 7 LUNs on a single server, but my thinking is that if we spread it across multiple servers, we'd be able to lessen the impact of a single OS failure or reboot patches.

The argument for keeping it in a single namespace, either via DFS or just one big profile share, is ease of use and management. The user's profile path would always be \\server\profiles\%username% rather than \\server\profiles[1-7]\%username%.

Maybe I'm thinking about this all wrong?

E: These are for student labs at a University, there's no way to really separate them, because they're all the same to us, and all at the same site. On the flip side, we only have 300-400 machines, so it's not possible to have everybody logging in at one time.
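For what it's worth, the split described above can be kept deterministic without any DFS trickery: hash each username into one of the seven shares, so a given user's profile path never moves and each share gets its full backup on a different night. A rough sketch in Python (the server and share names are made up for illustration):

```python
import hashlib

NUM_SHARDS = 7  # one share per night of the full-backup rotation


def shard_for(username: str) -> int:
    """Deterministically map a username to one of the profile shares.

    Uses a stable digest (not Python's built-in hash(), which is salted
    per-process) so the mapping never changes between logons.
    """
    digest = hashlib.sha1(username.lower().encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS


def profile_path(username: str) -> str:
    # Hypothetical UNC layout: \\server\profiles1 .. \\server\profiles7
    return "\\\\server\\profiles%d\\%s" % (shard_for(username) + 1, username)
```

The trade-off is exactly the one noted above: the path becomes \\server\profilesN\%username% rather than a single \\server\profiles\%username% namespace, so something (a logon script, stamping the AD profile-path attribute) still has to apply the mapping.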

evil_bunnY
Apr 2, 2003

FISHMANPET posted:

We've got 6000 and growing user profiles

manage on a single server.
Kill yourself then pass me the gun.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

DFS won't do what you want unfortunately. I'm not sure what will to be honest. A file share load balancer or something that can keep track of what user maps where... not sure if something like that exists.

Exclusive
Jan 1, 2008

Yeah you want a file virtualization product like an F5 ARX to auto-magically turn your 7 filers into 1 share. DFS will work for your profiles but you still have to manage the DFS links for all the users. Well that doesn't really gain you anything I suppose.

Nebulis01
Dec 30, 2003
Technical Support Ninny

Windows Server 2012 with Scale-Out File Server on a Cluster Shared Volume could be a possible solution for you. Every server can access the shared resource, and you could add a number of hosts to the cluster.

http://technet.microsoft.com/en-us/library/hh831349.aspx

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Ugh, that sounds like way too much work.

Each user only has a 500MB quota, and our current storage is only around 800GB. We're splitting it into two servers, one for the roaming part of the profile and one for the redirected parts, each having 750GB of storage, so 1.5TB of storage total. The increased space, along with the increased IOPS capability of the new storage and the split, means our backup problems will probably be solved as well.
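Back-of-the-envelope numbers for the plan above (decimal units, purely illustrative): 6,000 users at a 500MB quota is a worst case of about 3TB against 1.5TB provisioned, which is ordinary quota over-subscription given that actual usage today is around 800GB:

```python
users = 6000
quota_gb = 0.5                            # 500MB per-user quota
worst_case_tb = users * quota_gb / 1000   # if every quota were 100% full
provisioned_tb = 2 * 750 / 1000           # two servers at 750GB each
current_usage_tb = 0.8                    # ~800GB actually in use today

print(worst_case_tb)    # 3.0
print(provisioned_tb)   # 1.5
```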

the spyder
Feb 18, 2011

Novo posted:

Is there a reason you used NexentaStor vs. OmniOS or FreeBSD? I am seriously considering FreeNAS right now because we're relatively small (only need ~20TB in the near future) and it seems that paid support is available. If I quit and the next person isn't familiar with ZFS, at least they'll be able to call support and use the web administration.

FreeNAS does not offer the performance or InfiniBand support we need.

the spyder
Feb 18, 2011

Mierdaan posted:

This is just wonderful.

We tossed SeisMac on a spare MBP and set it next to our two main storage NASes. It registered the equivalent of me shaking and tossing the MBP as hard as I could for 6 hours.
I am amazed these consumer 1.5TB Seagates have held up since 2009.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS.

The problem is we're going to buy the EMC anyway since upper management decided all storage is to be EMC now, but I just need to 'play the game' a bit.

evil_bunnY
Apr 2, 2003

Entry-level NetApp?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

skipdogg posted:

I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS.

The problem is we're going to buy the EMC anyway since upper management decided all storage is to be EMC now, but I just need to 'play the game' a bit.

Throw a NetApp FAS 2200 quote against it.

hackedaccount
Sep 28, 2009

the spyder posted:

We tossed SeisMac on a spare MBP and set it next to our two main storage NAS(s). It registered the equivalent of me shaking and tossing the MBP as hard as I could for 6 hours.
I am amazed these consumer 1.5TB Seagates have held up since 2009.

You should really drop the OpenIndiana guys a line about this. I'm sure they would enjoy writing a whitepaper about how their ZFS can survive a dance club.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

skipdogg posted:

I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS.

The problem is we're going to buy the EMC anyway since upper management decided all storage is to be EMC now, but I just need to 'play the game' a bit.

I am still very happy with my Oracle ZFS appliance purchase.

Syano
Jul 13, 2005

skipdogg posted:

I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS.

The problem is we're going to buy the EMC anyway since upper management decided all storage is to be EMC now, but I just need to 'play the game' a bit.

Comedy option: synology populated with your own disks

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

hackedaccount posted:

You should really drop the OpenIndiana guys a line about this. I'm sure they would enjoy writing a whitepaper about how their ZFS can survive a dance club.

There really aren't any OpenIndiana guys anymore; the OpenSolaris community's moved to Illumos-based distributions. You can see tumbleweeds drifting through OI's mailing lists and hg repos.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Isn't OpenIndiana part of Illumos?

What's currently the most active Solaris fork/Illumos distribution?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

What's currently the most active Solaris fork/Illumos distribution?
Technically, the most popular Illumos distributions are fairly single-purpose (Nexenta, SmartOS). Illumian's as much of a ghost town as OpenIndiana. The only active general-purpose platform development seems to be centered around OmniOS, which doesn't have much going on in terms of up-to-date userland.

If FreeBSD 9 had Crossbow, COMSTAR, and the OpenSolaris CIFS server there would be very little reason for anyone to ever run Illumos.

Vulture Culture fucked around with this message at 04:48 on Apr 23, 2013

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

If FreeBSD 9 had Crossbow, COMSTAR, and the OpenSolaris CIFS server there would be very little reason for anyone to ever run Illumos.
So if freebsd was solaris?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Misogynist posted:

Technically, the most popular Illumos distributions are fairly single-purpose (Nexenta, SmartOS). Illumian's as much of a ghost town as OpenIndiana. The only active general-purpose platform development seems to be centered around OmniOS, which is starting to look like abandonware as well.

FWIW OmniOS is being actively developed by OmniTI and it is cash-flow positive for them.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

So if freebsd was solaris?
The killer features compelling consideration of Solaris used to be ZFS and DTrace, and FreeBSD's support for those is quite excellent these days. So, more like "if FreeBSD had these specific features of use to some people." The lack of a good iSCSI target in FreeBSD is pretty awful, but Crossbow is a "nice-to-have" at best, and Samba 3.6+ has basically closed the gap with the Solaris kernel CIFS server in terms of missing features -- it does Windows Explorer ACL changes and stuff like that without a hitch in AD environments. Some more performance would be nice, but I haven't found it lacking, per se.

PCjr sidecar posted:

FWIW OmniOS is being actively developed by OmniTI and it is cash-flow positive for them.
Man, I basically lifted the "starting to look like abandonware" part from a different part of the post without noticing, sorry. What I meant is that the platform is new and the userland is pretty neglected, at least compared to what people might expect from a modern-day Linux/BSD. I do hope that some more momentum springs up behind it later; OmniTI being the Red Hat of Illumos would be a pretty cool thing for Oracle to have to fight with.

AIX is being actively developed by IBM and it's cash-flow positive for them, but it doesn't mean it has good long-term prospects as a general-purpose OS in most people's datacenters. Not intended as a slam -- Theo Schlossnagle is a really loving smart guy and OmniOS has some really great stuff going on -- but it's a product with very specific use cases. If you want something to make you more productive as an admin out of the box, I'm not at a point where I could comfortably recommend it to people who aren't looking to engineer their whole app stack from the ground up. It's basically the Slackware of Illumos distributions, and people are using it because nobody's released a Debian yet with enough momentum to keep the core bits of the platform from bitrotting.

I give him a lot of credit for the shitload of Illumos expertise they have behind what they do, though, and I hope they're successful at keeping it running (not least of all because of what the other Illumos distro commit histories look like).

Vulture Culture fucked around with this message at 04:52 on Apr 23, 2013

Moey
Oct 22, 2010

I LIKE TO MOVE IT
What's the preference with NAS storage vs. something that is raw block-based?

I've never worked in a huge environment but I would rather have raw block storage and throw a VM in front of whatever I need to present.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Moey posted:

What's the preference with nas storage vs something that is raw block based?

I've never worked in a huge environment but I would rather have raw block storage and throw a VM in front of whatever I need to present.

Not having to throw a VM in front of it to handle NFS/CIFS. If the filer can do it natively it saves me a VM to manage. I'm looking at 6 or 8 TB of bulk 7.2K NL-SAS for basically a big file dump for my engineers to move data around (in addition to about 1TB of fast for VM usage). If I can avoid throwing up a VM to present it to them, all the better in my book.

skipdogg fucked around with this message at 05:07 on Apr 23, 2013

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Moey posted:

What's the preference with nas storage vs something that is raw block based?

I've never worked in a huge environment but I would rather have raw block storage and throw a VM in front of whatever I need to present.
almost always better for the storage to be aware of the filesystem.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
e: dumb, nm

Vulture Culture fucked around with this message at 05:23 on Apr 23, 2013

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We got burnt so bad by a NAS device that was supposed to solve all our problems that I don't think we'll ever be putting a NAS component in our SAN.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

skipdogg posted:

Not having to throw a VM in front of it to handle NFS/CIFS. If the filer can do it natively it saves me a VM to manage. I'm looking at 6 or 8 TB of bulk 7.2K NL-SAS for basically a big file dump for my engineers to move data around (in addition to about 1TB of fast for VM usage). If I can avoid throwing up a VM to present it to them, all the better in my book.

Yea. On top of the fact that it's one less server to worry about patching, securing, and managing, you also get nice things like transparent deduplication, share- or even file-level cloning, snapshot backups with user-initiated restore, array-level replication for backup, multi-protocol access, and five-nines uptime without having to do Windows clustering (which sucks; clustering a file server sucks), plus a lot of other benefits. The better question is why you WOULDN'T want to use your dedicated file-sharing appliance to share files, and would instead prefer to run a more complex solution based around a general-purpose OS that also happens to share files?

FISHMANPET posted:

We got burnt so bad by a NAS device that was supposed to solve all our problems that I don't think we'll ever be putting a NAS component in our SAN.

There are bad implementations of just about anything out there, but there are perfectly good ones too. And there are also vendors that do both block and file natively and don't require a separate NAS component.

YOLOsubmarine fucked around with this message at 05:18 on Apr 23, 2013

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

NippleFloss posted:

There are bad implementations of just about anything out there, but there are perfectly good ones too. And there are also vendors that do both block and file natively and don't require a separate NAS component.

I don't get it, are you implying that important decisions about core infrastructure should be made on facts rather than gut feelings and intuition?

No, that would just be crazy.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Moey posted:

I've never worked in a huge environment but I would rather have raw block storage and throw a VM in front of whatever I need to present.
It depends on what you're trying to actually accomplish.

There's use cases to be made for both, for sure. If you're serving up small amounts of storage, your administrators get to use tools and technologies they already know, whether that's Windows file sharing or Samba or whatever else. You get something on the backend that's transparent and understandable, which is nice if the poo poo hits the fan and you need to use an off-the-shelf data recovery tool to pick through the wreckage. This kind of approach is quite appropriate to most small business environments.

On the other hand, you have a lot of drawbacks to rolling your own. Hardware NAS solutions often feature a huge amount of hardware protocol offload and some very fast backends. They give you out-of-the-box support for active-active NAS configurations, which are really difficult to get right if you try to do it yourself using a cluster filesystem on the backend and some file serving modules (nfsd, Samba, whatever) on the frontend that aren't cache-coherent. The filesystems typically scale a lot further than most things you'll find in operating systems you already manage, and probably handle snapshots a lot better. Performance metrics are typically better and more in-depth. Most larger-scale vendors will integrate in some kind of policy engine for tiering. They also tend to be a lot better at asynchronous replication than what you'll find in a standard OS configuration, though if your SAN supports block-level replication that can always be an option for you.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

FISHMANPET posted:

I don't get it, are you implying that important decisions about core infrastructure should be made on facts rather than gut feelings and intuition?

No, that would just be crazy.
Haha, point taken. I've been there before. You have my sympathies.

Misogynist posted:

They give you out-of-the-box support for active-active NAS configurations, which are really difficult to get right if you try to do it yourself using a cluster filesystem on the backend and some file serving modules (nfsd, Samba, whatever) on the frontend that aren't cache-coherent.

This is a really good point and something people usually miss when they talk about moving to VMware. There's more to fault tolerance than just making sure you can survive a hardware failure on a blade. Kernel panics, software installation or updates, OS patching... there are lots of reasons why you might STILL want a clustered solution even if you're running in a VMware cluster that protects against hardware errors, and they are not trivial to build on a general-purpose OS. Active-active NAS hardware makes it all easy and basically foolproof.

YOLOsubmarine fucked around with this message at 05:29 on Apr 23, 2013


Amandyke
Nov 27, 2004

A wha?
Just added 3.24 raw PB to a customer's environment in under 3 hours today. Yay scale-out nas!
