|
Internet Explorer posted:Sounded like he was more talking about the hardware/software itself.
|
# ? Apr 19, 2013 21:32 |
|
the spyder posted:I support 1.4PB of ZFS-based storage across our two sites. In the field, we have over 4PB of ZFS storage. Our older systems are all based on NexentaStor and everything in the last year on OpenIndiana- both running Napp-it 0.8h. Half is linked via InfiniBand to our cluster, the other half is archival/local storage. I just ordered another 100TB of disks/chassis; one will be for consolidating our VMs' local storage and will be completely SSD-based- the other two are for local backup storage. I am very eager to see what 22x240GB Intel 520 series drives will do performance-wise. The other two are 48x 3TB 7200rpm spindles. We have suffered some pretty bad disk failures thanks to the dance club downstairs, but not once lost data. (24 failed drives in 12 months.) If you have any specifics, let me know. Everything is based on Supermicro dual quad Xeons, 96GB+ of RAM, LSI HBAs, and Seagate/Intel drives. How's the IB support working for you? NFS over IPoIB, I assume?
|
# ? Apr 19, 2013 22:09 |
|
evil_bunnY posted:Yes. Sorry you felt this was directed at you, Cpt.Wacky, it's not what I meant. No problem, I'm a little sensitive about it since I've been blocked in all my attempts to get proper equipment and do things right, not just with storage.
|
# ? Apr 19, 2013 22:21 |
|
Cpt.Wacky posted:No problem, I'm a little sensitive about it since I've been blocked in all my attempts to get proper equipment and do things right, not just with storage.
|
# ? Apr 19, 2013 23:37 |
|
PCjr sidecar posted:How's the IB support working for you? NFS over IPoIB, I assume? Outside of some IP address issues in the field due to being on generator power, pretty decent. Our main app (don't laugh) is SMB and sees 3.2GB/s of throughput. In house I am using NFS over copper 10GbE. Outside of rsyncing our primary/backup NAS, I can't max it out- yet.
|
# ? Apr 20, 2013 05:58 |
|
the spyder posted:We have suffered some pretty bad disk failures thanks to the dance club downstairs This is just wonderful.
|
# ? Apr 20, 2013 14:11 |
|
the spyder posted:I support 1.4PB of ZFS-based storage across our two sites. In the field, we have over 4PB of ZFS storage. Our older systems are all based on NexentaStor and everything in the last year on OpenIndiana- both running Napp-it 0.8h. Half is linked via InfiniBand to our cluster, the other half is archival/local storage. I just ordered another 100TB of disks/chassis; one will be for consolidating our VMs' local storage and will be completely SSD-based- the other two are for local backup storage. I am very eager to see what 22x240GB Intel 520 series drives will do performance-wise. The other two are 48x 3TB 7200rpm spindles. We have suffered some pretty bad disk failures thanks to the dance club downstairs, but not once lost data. (24 failed drives in 12 months.) If you have any specifics, let me know. Everything is based on Supermicro dual quad Xeons, 96GB+ of RAM, LSI HBAs, and Seagate/Intel drives. Is there a reason you used NexentaStor vs. OmniOS or FreeBSD? I am seriously considering FreeNAS right now because we're relatively small (we only need ~20TB in the near future) and it seems that paid support is available. If I quit and the next person isn't familiar with ZFS, at least they'll be able to call support and use the web administration.
|
# ? Apr 22, 2013 16:31 |
|
We've got 6000 and growing user profiles, and the number and size has become too much to manage on a single server. We'd like to split them up onto multiple (virtual) servers and use DFS to create a single namespace. I was hoping that we could create a Profiles share on each server, and then use DFS to merge them into a single giant profiles share. Now that we've dug into it, I guess that's not really how DFS works. So I'm wondering if there's some easy way to manage all of this that I'm just missing?
|
# ? Apr 22, 2013 17:33 |
|
FISHMANPET posted:We've got 6000 and growing user profiles, and the number and size has become too much to manage on a single server
|
# ? Apr 22, 2013 18:27 |
|
FISHMANPET posted:We've got 6000 and growing user profiles, and the number and size has become too much to manage on a single server. We'd like to split them up onto multiple (virtual) servers and use DFS to create a single namespace. I was hoping that we could create a Profiles share on each server, and then use DFS to merge them into a single giant profiles share. Now that we've dug into it, I guess that's not really how DFS works. Is this a single site? My answer would be to somehow split things up by site/department/job function, or something. Our larger call center has ~450 users at any given time and our profile server there is a loving beast. I can't imagine 6K profiles. Shoot me now.
|
# ? Apr 22, 2013 21:19 |
|
Well, the old server doesn't have enough space or IOPS, so we're migrating to a new server (or servers) with different backend storage. Performance and space won't be a problem anymore. But we currently do, and will continue to, have a problem with backups: mainly that a full backup of all the data takes longer than 24 hours, so it merges into the next day's diff, and all sorts of bad things happen. Ideally we'd like to split it into 7 servers, so we can do 1/7th of the data with a full backup each day. Now we could do this just as easily by having 7 folders on a single LUN on a single server, or 7 LUNs on a single server, but my thinking is that if we spread it across multiple servers, we'd be able to lessen the impact of a single OS failure or a reboot for patches. The argument for keeping it in a single namespace, either via DFS or just one big profile share, is ease of use and management. The user's profile path would always be \\server\profiles\%username% rather than \\server\profiles[1-7]\%username%. Maybe I'm thinking about this all wrong? E: These are for student labs at a university; there's no way to really separate them, because they're all the same to us, and all at the same site. On the flip side, we only have 300-400 machines, so it's not possible to have everybody logging in at one time.
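The 1/7th-per-day split described above only works if a given user always lands on the same share. A minimal sketch of that mapping with a stable hash (the server and share names here are hypothetical, not from the post):

```python
import hashlib

NUM_SHARES = 7  # one share per weekday, so each gets a full backup once a week

def share_index(username: str) -> int:
    """Deterministically map a username to one of 7 profile shares.

    hashlib is used instead of Python's built-in hash(), which is
    salted per-process and would scatter users across shares on
    every restart.
    """
    digest = hashlib.md5(username.lower().encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARES + 1  # shares numbered 1..7

def profile_path(username: str) -> str:
    # Hypothetical UNC layout matching the \\server\profiles[1-7] scheme.
    return rf"\\server\profiles{share_index(username)}\{username}"

# Case-insensitive and stable across runs: both land on the same share.
assert share_index("jsmith") == share_index("JSMITH")
```

With ~6000 users this lands roughly 850 per share, so each night's full backup covers about a seventh of the data, as described above.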
|
# ? Apr 22, 2013 21:25 |
|
FISHMANPET posted:We've got 6000 and growing user profiles
|
# ? Apr 22, 2013 21:40 |
|
DFS won't do what you want unfortunately. I'm not sure what will to be honest. A file share load balancer or something that can keep track of what user maps where... not sure if something like that exists.
|
# ? Apr 22, 2013 21:46 |
|
Yeah you want a file virtualization product like an F5 ARX to auto-magically turn your 7 filers into 1 share. DFS will work for your profiles but you still have to manage the DFS links for all the users. Well that doesn't really gain you anything I suppose.
|
# ? Apr 22, 2013 21:49 |
|
FISHMANPET posted:Stuff Windows Server 2012 with Scale-Out File Server on a Cluster Shared Volume could be a possible solution for you. Every server can access the shared resource and you could add a number of hosts to the cluster: http://technet.microsoft.com/en-us/library/hh831349.aspx
|
# ? Apr 22, 2013 22:13 |
|
Ugh, that sounds like way too much work. Each user only has a 500MB quota, and our current storage is only around 800GB. We're splitting it into two servers, one for the roaming part of the profile and one for redirected parts, each having 750GB of storage, so 1.5TB of storage total. The increased space, along with the increased IOPS capability of the new storage and the single split, means our backup problems will probably be solved as well.
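A quick back-of-the-envelope check on that plan (a sketch using only figures from the posts above; assumes the worst case where every user fills their quota):

```python
# Figures from the thread: 6000 users, 500MB quota each,
# two new 750GB servers, ~800GB currently in use.
users = 6000
quota_gb = 0.5
provisioned_gb = 2 * 750
current_use_gb = 800

worst_case_gb = users * quota_gb              # 3000 GB if everyone hits quota
oversubscription = worst_case_gb / provisioned_gb

print(f"worst case:       {worst_case_gb:.0f} GB")
print(f"oversubscription: {oversubscription:.2f}x")  # 2.00x the quota ceiling
print(f"headroom today:   {provisioned_gb - current_use_gb} GB")
```

So the new storage is 2x oversubscribed against the quota ceiling, which is fine as long as average usage (currently ~136MB per user) stays well below the 500MB cap.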
|
# ? Apr 22, 2013 22:22 |
|
Novo posted:Is there a reason you used NexentaStor vs. OmniOS or FreeBSD? I am seriously considering FreeNAS right now because we're relatively small (only need ~20TB in the near future) and it seems that paid support is available. If I quit and the next person isn't familiar with ZFS, at least they'll be able to call support and use the web administration. FreeNAS does not offer the performance or InfiniBand support we need.
|
# ? Apr 22, 2013 23:51 |
|
Mierdaan posted:This is just wonderful. We tossed SeisMac on a spare MBP and set it next to our two main storage NASes. It registered the equivalent of me shaking and tossing the MBP as hard as I could for 6 hours. I am amazed these consumer 1.5TB Seagates have held up since 2009.
|
# ? Apr 22, 2013 23:54 |
|
I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS. The problem is we're going to buy the EMC anyway since upper management decided all storage is to be EMC now, but I just need to 'play the game' a bit.
|
# ? Apr 23, 2013 02:14 |
|
Entry level netapp?
|
# ? Apr 23, 2013 02:22 |
|
skipdogg posted:I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS. Throw NetApp FAS 2200 quote against it.
|
# ? Apr 23, 2013 02:40 |
|
the spyder posted:We tossed SeisMac on a spare MBP and set it next to our two main storage NASes. It registered the equivalent of me shaking and tossing the MBP as hard as I could for 6 hours. You should really drop the OpenIndiana guys a line about this. I'm sure they would enjoy writing a whitepaper about how their ZFS can survive a dance club.
|
# ? Apr 23, 2013 02:47 |
|
skipdogg posted:I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS. I am still very happy with my oracle zfs appliance purchase.
|
# ? Apr 23, 2013 03:14 |
|
skipdogg posted:I just got some quotes back on a VNXe3300 and either pricing is up over 25% in the last year or my VAR is being ridiculous. What competitor should I be beating them down with? LeftHand/P4xxx? I want NAS functionality on the filer, so EqualLogic is out. Compellent might be overkill. Straight NAS would be ok, I can do VMWare over NFS. Comedy option: a Synology populated with your own disks
|
# ? Apr 23, 2013 03:46 |
|
hackedaccount posted:You should really drop the OpenIndiana guys a line about this. I'm sure they would enjoy a writing a whitepaper about how their ZFS can survive a dance club. There really aren't any OpenIndiana guys anymore; the OpenSolaris community's moved to Illumos-based distributions. You can see tumbleweeds drifting through OI's mailing lists and hg repos.
|
# ? Apr 23, 2013 03:52 |
|
Isn't OpenIndiana part of Illumos? What's currently the most active Solaris fork/Illumos distribution?
|
# ? Apr 23, 2013 04:11 |
|
FISHMANPET posted:What's currently the most active Solaris fork/Illumos distribution? Technically, the most popular Illumos distributions are fairly single-purpose (Nexenta, SmartOS). Illumian's as much of a ghost town as OpenIndiana. The only active general-purpose platform development seems to be centered around OmniOS, which is starting to look like abandonware as well. If FreeBSD 9 had Crossbow, COMSTAR, and the OpenSolaris CIFS server there would be very little reason for anyone to ever run Illumos. Vulture Culture fucked around with this message at 04:48 on Apr 23, 2013 |
# ? Apr 23, 2013 04:22 |
|
Misogynist posted:If FreeBSD 9 had Crossbow, COMSTAR, and the OpenSolaris CIFS server there would be very little reason for anyone to ever run Illumos. So if freebsd was solaris?
|
# ? Apr 23, 2013 04:28 |
|
Misogynist posted:Technically, the most popular Illumos distributions are fairly single-purpose (Nexenta, SmartOS). Illumian's as much of a ghost town as OpenIndiana. The only active general-purpose platform development seems to be centered around OmniOS, which is starting to look like abandonware as well. FWIW, OmniOS is being actively developed by OmniTI and it is cash-flow positive for them.
|
# ? Apr 23, 2013 04:37 |
|
adorai posted:So if freebsd was solaris? PCjr sidecar posted:FWIW, OmniOS is being actively developed by OmniTI and it is cash-flow positive for them. AIX is being actively developed by IBM and it's cash-flow positive for them, but that doesn't mean it has good long-term prospects as a general-purpose OS in most people's datacenters. Not intended as a slam -- Theo Schlossnagle is a really loving smart guy and OmniOS has some really great stuff going on -- but it's a product with very specific use cases. If you want something to make you more productive as an admin out of the box, I'm not at a point where I could comfortably recommend it to people who aren't looking to engineer their whole app stack from the ground up. It's basically the Slackware of Illumos distributions, and people are using it because nobody's released a Debian yet with enough momentum to keep the core bits of the platform from bitrotting. I give him a lot of credit for the shitload of Illumos expertise they have behind what they do, though, and I hope they're successful at keeping it running (not least of all because of what the other Illumos distro commit histories look like). Vulture Culture fucked around with this message at 04:52 on Apr 23, 2013 |
# ? Apr 23, 2013 04:38 |
|
What's the preference with nas storage vs something that is raw block based? I've never worked in a huge environment but I would rather have raw block storage and throw a VM in front of whatever I need to present.
|
# ? Apr 23, 2013 04:56 |
|
Moey posted:What's the preference with nas storage vs something that is raw block based? Not having to throw a VM in front of it to handle NFS/CIFS. If the filer can do it natively it saves me a VM to manage. I'm looking at 6 or 8 TB of bulk 7.2K NL-SAS for basically a big file dump for my engineers to move data around (in addition to about 1TB of fast for VM usage). If I can avoid throwing up a VM to present it to them, all the better in my book. skipdogg fucked around with this message at 05:07 on Apr 23, 2013 |
# ? Apr 23, 2013 05:01 |
|
Moey posted:What's the preference with nas storage vs something that is raw block based?
|
# ? Apr 23, 2013 05:08 |
|
e: dumb, nm
Vulture Culture fucked around with this message at 05:23 on Apr 23, 2013 |
# ? Apr 23, 2013 05:12 |
|
We got burnt so bad by a NAS device that was supposed to solve all our problems that I don't think we'll ever be putting a NAS component in our SAN.
|
# ? Apr 23, 2013 05:13 |
|
skipdogg posted:Not having to throw a VM in front of it to handle NFS/CIFS. If the filer can do it natively it saves me a VM to manage. I'm looking at 6 or 8 TB of bulk 7.2K NL-SAS for basically a big file dump for my engineers to move data around (in addition to about 1TB of fast for VM usage). If I can avoid throwing up a VM to present it to them, all the better in my book. Yeah. On top of the fact that it's one less server to worry about patching, securing, and managing, you also get nice things like transparent deduplication, share- or even file-level cloning, snapshot backups with user-initiated restore, array-level replication for backup, multi-protocol access, 5 9s uptime without having to do Windows clustering (which sucks, clustering a file server sucks), and a lot of other benefits. The better question is why you WOULDN'T want to use your dedicated file sharing appliance to share files and would instead prefer to run a more complex solution based around a general-purpose OS that also happens to share files? FISHMANPET posted:We got burnt so bad by a NAS device that was supposed to solve all our problems that I don't think we'll ever be putting a NAS component in our SAN. There are bad implementations of just about anything out there, but there are perfectly good ones too. And there are also vendors that do both block and file natively and don't require a separate NAS component. YOLOsubmarine fucked around with this message at 05:18 on Apr 23, 2013 |
# ? Apr 23, 2013 05:16 |
|
NippleFloss posted:There are bad implementations of just about anything out there, but there are perfectly good ones too. And there are also vendors that do both block and file natively and don't require a separate NAS component. I don't get it, are you implying that important decisions about core infrastructure should be made on facts rather than gut feelings and intuition? No, that would just be crazy.
|
# ? Apr 23, 2013 05:21 |
|
Moey posted:I've never worked in a huge environment but I would rather have raw block storage and throw a VM in front of whatever I need to present. There are use cases to be made for both, for sure. If you're serving up small amounts of storage, your administrators get to use tools and technologies they already know, whether that's Windows file sharing or Samba or whatever else. You get something on the backend that's transparent and understandable, which is nice if the poo poo hits the fan and you need to use an off-the-shelf data recovery tool to pick through the wreckage. This kind of approach is quite appropriate for most small business environments. On the other hand, you have a lot of drawbacks to rolling your own. Hardware NAS solutions often feature a huge amount of hardware protocol offload and some very fast backends. They give you out-of-the-box support for active-active NAS configurations, which are really difficult to get right if you try to do it yourself using a cluster filesystem on the backend and some file serving modules (nfsd, Samba, whatever) on the frontend that aren't cache-coherent. The filesystems typically scale a lot further than most things you'll find in operating systems you already manage, and probably handle snapshots a lot better. Performance metrics are typically better and more in-depth. Most larger-scale vendors will integrate some kind of policy engine for tiering. They also tend to be a lot better at asynchronous replication than what you'll find in a standard OS configuration, though if your SAN supports block-level replication that can always be an option for you.
|
# ? Apr 23, 2013 05:23 |
|
FISHMANPET posted:I don't get it, are you implying that important decisions about core infrastructure should be made on facts rather than gut feelings and intuition? Misogynist posted:They give you out-of-the-box support for active-active NAS configurations, which are really difficult to get right if you try to do it yourself using a cluster filesystem on the backend and some file serving modules (nfsd, Samba, whatever) on the frontend that aren't cache-coherent. This is a really good point and something people usually miss when they talk about moving to VMware. There's more to fault tolerance than just making sure you can survive a hardware failure on a blade. Kernel panics, software installation or updates, OS patching... there are lots of reasons why you might STILL want a clustered solution even if you're running in a VMware cluster that protects against hardware errors, and they are not trivial to build on a general-purpose OS. Active-active NAS hardware makes it all easy and basically foolproof. YOLOsubmarine fucked around with this message at 05:29 on Apr 23, 2013 |
# ? Apr 23, 2013 05:23 |
|
|
Just added 3.24PB raw to a customer's environment in under 3 hours today. Yay scale-out NAS!
|
# ? Apr 23, 2013 05:40 |