|
skipdogg posted:Not to mention having to deal with Oracle. Nimble will probably be very responsive if you have issues. Oracle's bastard hardware division... probably a crap shoot. Working with their support has been enjoyable. They're also very proactive if something is wrong, even when we ignore it.
|
# ? Jan 16, 2014 20:39 |
|
|
El_Matarife posted:HORRIBLE VNX2 bug ETA 175619 https://support.emc.com/docu50194_V...e=en_US Yeah... This happened where I work. It ruined New Year's Eve and Day.
|
# ? Jan 16, 2014 21:29 |
Goon Matchmaker posted:Yeah... This happened where I work. It ruined New Year's Eve and Day. Yeah, I had to work New Year's Eve because of it. I felt bad for the RCM guys at EMC because they were slammed and catching a LOT of poo poo that wasn't their fault, just EMC's fault as a whole.
|
|
# ? Jan 16, 2014 22:30 |
|
Maneki Neko posted:I don't think we've seen anything that I would chalk up to cluster mode directly, but we've also had tons of problems (hopefully fixed in the 7.2 version we just upgraded to), so performance hasn't been a huge focus for us lately. I was hoping that with 2x the hardware resources available to the vserver (compared to running on one controller in 7-Mode) it would make good use of them, rather than just use them to provide node-level HA. Ah well, they took ages to properly multithread 7-Mode, so who knows, some time this decade perhaps.
|
# ? Jan 17, 2014 09:05 |
|
JockstrapManthrust posted:I was hoping that with 2x the hardware resources available (compared to running on one controller in 7-Mode) to the vserver that it would make good use of them rather than just use them to provide node-level HA. On a per-node level, cDOT performs marginally worse than 7-Mode due to the additional overhead of maintaining the various ring databases and an additional level of indirection in the storage layer. It's around a 10% decrease in maximum performance if you're doing indirect access, less if you're doing direct access. A cDOT cluster is much more similar to a VMware cluster than to an Isilon or EqualLogic. The vserver can use resources on any node, but volume performance is still limited by what is available on a single node, so single-filesystem performance will not benefit from additional nodes. The real benefits of cDOT are non-disruptive operations and a single namespace. There is a relatively new feature called Infinite Volumes that will stripe volumes across multiple nodes, but it isn't fully baked yet. I suspect it will eventually become a scale-out performance option and perhaps even the default volume type.
|
# ? Jan 17, 2014 22:38 |
|
I've been messing around with OpenFiler as a replacement for the lovely proprietary software we currently use, called Open-E. I got a chance to reinstall one of these systems about 8 months ago with OpenFiler, and so far it's been running like a champ, with no crashes or system hiccups yet. My company has 8 or so 24-disk Supermicro chassis w/ 24GB RAM, a decent CPU, and a kind of lovely RAID controller. What options would I have for software that would let me create a cluster out of these systems and let me tier them (some have 7.2K RPM 1TB disks, others have 15K RPM 400GB SAS drives)?
|
# ? Jan 23, 2014 02:38 |
|
Wicaeed posted:What options would I have for software that would let me create a cluster out of these systems and let me tier them (some have 7.2K RPM 1TB disks, others have 15K RPM 400GB SAS drives)? Not using OpenFiler, for starters, which is practically a dead project. You can hack this together with FreeNAS, Nexenta sort of does it, but LeftHand (not free) is your best bet without in-house expertise. If you have expertise, TSM, SAM, HP, and Storage Spaces on 2K12R2 (never used the last) are your best bets for tiering. If you're OK with tiering manually, Gluster/Lustre or a hacked-up ZFS setup might work for you.
|
# ? Jan 23, 2014 05:39 |
|
Wicaeed posted:What options would I have for software that would let me create a cluster out of these systems and let me tier them (some have 7.2K RPM 1TB disks, others have 15K RPM 400GB SAS drives)? If I had a shitload of storage servers, I wouldn't bother. I would burn a bunch of CDs with SmartOS on them, boot my storage servers, and build storage pools on each one. They would be standalone, and I would just create jobs to replicate the storage amongst them. If you really need HA, you are probably going to pay a premium for it. Answer these questions: how often do you see catastrophic failure that was completely unanticipated, and how much downtime can you tolerate on this storage? If the answers are "rarely" and "a few hours", then congrats: you don't need HA, you just need nearline backups, which SmartOS can provide.
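The replicate-amongst-standalone-boxes idea above boils down to periodic snapshot-and-send jobs. Here's a minimal sketch of one such job, assuming ZFS-style pools (which SmartOS uses); the pool name, peer hostname, and SSH setup are all hypothetical, and by default it only prints the commands it would run rather than executing anything:

```python
# Sketch of a nearline replication job: snapshot a pool and ship the
# snapshot to a peer box over ssh. Hostnames and pool names are made
# up; dry_run=True returns the command strings instead of running them.
import subprocess
from datetime import datetime, timezone

def replicate(pool, peer, dry_run=True):
    """Snapshot `pool` and send the snapshot to `peer` via ssh."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    snap = f"{pool}@repl-{stamp}"
    cmds = [
        ["zfs", "snapshot", snap],
        # zfs send ... | ssh peer zfs receive -F pool
        ["sh", "-c", f"zfs send {snap} | ssh {peer} zfs receive -F {pool}"],
    ]
    if dry_run:
        return [" ".join(c) for c in cmds]
    for c in cmds:
        subprocess.run(c, check=True)

for line in replicate("tank/vms", "storage2"):
    print(line)
```

A real job would use incremental sends (`zfs send -i`) after the first full copy and would be driven from cron; this just shows the shape of it.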
|
# ? Jan 23, 2014 05:48 |
|
What would be the best solution for clustered storage using regular hardware? For instance, right now I manage a bunch of Dell 2950s acting as NASes, and I'm constantly shuffling stuff around to make sure they never fill up. Is there some way I can create a system whereby I can just keep adding more 2950s to one large storage pool?
|
# ? Jan 29, 2014 02:57 |
|
GFS or Lustre; if you want NAS, a couple of gateways may be required.
|
# ? Jan 29, 2014 03:44 |
|
theperminator posted:What would be the best solution for clustered storage using regular hardware? This is the use case for Gluster, Lustre, Swift, and DFS. OCFS and GFS2 will not do what you want. What are your actual requirements for performance, management, and redundancy?
|
# ? Jan 29, 2014 04:59 |
|
^ To add on: what are your failover requirements for this clustered system?
|
# ? Jan 29, 2014 06:30 |
|
Is Ceph (in block storage mode) an option in this arena too? I'm asking, not offering it as a suggestion; I've read a little about it but haven't played around at all.
Docjowles fucked around with this message at 06:47 on Jan 29, 2014 |
# ? Jan 29, 2014 06:43 |
|
I want to store and archive backups of VMs. I have tried Swift, but it doesn't seem to handle it well when I'm doing backup jobs in the range of 1-4TB. I'd like it to be able to handle failure anywhere from one disk to a whole node, so Ceph block storage looks like it'd do the job. I might check that out, thanks!
|
# ? Jan 29, 2014 10:35 |
|
I am putting together some scratch storage that will be used for... well, stuff we don't want to load onto our main production storage. I am using some parts from some old servers, and I want to be able to present some NFS shares to the network so I can access them from client machines but also from ESXi hosts. What's the OS du jour for this sort of thing? FreeNAS? NAS4Free? Something else I haven't seen yet?
|
# ? Jan 29, 2014 17:22 |
|
Docjowles posted:Is ceph (in block storage mode) an option in this arena too? I'm asking, not offering it as a suggestion, have read a little about it but not played around at all. I always forget about Ceph since I haven't used it yet. Syano posted:I am putting together some scratch storage that will be used for... well stuff we dont want to load on to our main production storage. I am using some parts from some old servers and I want to be able to present some NFS shares to the network so I can access from client machines but also esxi hosts. Whats the OS du jour for this sort of thing? FreeNas? Nas4free? Something else I havent seen yet? Literally anything that can function as an NFS server. FreeNAS is a safe bet.
|
# ? Jan 29, 2014 17:54 |
|
demonachizer posted:What is the general opinion of Nimble with you guys? We are considering them for a project and like what we see so far, but are just wondering about real-world experiences too. You checked out PureStorage? It's a bunch of ex-EMC and ex-Veritas guys; EMC is actually suing them. They're pretty drat impressive, but they weren't going to be landing things like replication, iSCSI/NFS, and a few other checkbox features for six months when I saw them last, though they appear to have some of it now according to their site. Non-disruptive hardware upgrades are a pretty killer feature, plus 512B sectors that kill any alignment issues. Violin Memory, Texas Memory Systems (now IBM), Whiptail (now Cisco): the flash SAN market is really overflowing with potential options.
|
# ? Jan 29, 2014 21:46 |
|
El_Matarife posted:You check out PureStorage? It's a bunch of ex-EMC and ex-Veritas guys, EMC is actually suing them. We actually got scared off from a SAN on this project because people were quoting some pretty ridiculous prices for the support/warranty, like $5k+ per year, which was tough to swallow (that is possibly totally normal, though). We ended up just doing DAS and using Double-Take Availability for mirroring. I really wanted to get one of the Nimble units in so that we could build off of it, but we couldn't make it work budget-wise. We had a good idea of the hardware cost but had no clue that we were looking at $30k+ on the support for two units. I will check out Pure next round when we retire some of our other poo poo and start a VM project.
|
# ? Jan 29, 2014 23:14 |
|
Anyone have recommendations for backup storage target systems? Commvault's software dedupe and compression tech continues its march to mediocrity. I'm getting real tired of them moving the goalposts for the dedupe database's system requirements (it's now: just put it on FusionIO). Their storage efficiency has been poor: including compression, we're seeing worse than 4:1, with 420TB stored in 116TB of disk. Throughput has also been abysmal. If you add in the FusionIO cards, Commvault's very high support costs (we have capacity-based licensing), and the cost of the FAS2240s that we currently use, this environment has become very expensive and performs poorly by just about every measure. I had a positive experience with DataDomain in the past (2+ years ago): I got decent throughput and excellent compression ratios (>10:1). I hear that EMC is messing with their backup products and the future is murky for the DataDomain product line. They are trying to integrate all these disparate products they purchased and drive people into their complete data protection stack. Considering we're a NetApp/Commvault shop right now, that would lead to many complications for us. Anyone know what's going to happen with DataDomain? Are there other similar products (inline dedupe storage) out there worth considering?
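For what it's worth, here's the quick arithmetic behind the efficiency figure quoted above (420TB of backup data landing in 116TB of physical disk):

```python
# Effective dedupe+compression ratio: logical data stored divided by
# physical disk consumed. Numbers are the ones quoted in the post.
def dedupe_ratio(logical_tb, physical_tb):
    return logical_tb / physical_tb

ratio = dedupe_ratio(420, 116)
print(f"{ratio:.1f}:1")  # ~3.6:1, i.e. short of the 4:1 mark
```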
|
# ? Jan 30, 2014 00:34 |
|
demonachizer posted:We actually got scared off from a SAN on this project because people were quoting some pretty ridiculous prices for the support/warranty, like 5k+ per year which was tough to swallow. $5k/year is nothing. I've signed off on $250k+ for maintenance alone.
|
# ? Jan 30, 2014 00:37 |
|
Yeah... if $5k is an unreasonable amount of money to your company you are probably not in the market for a SAN.
|
# ? Jan 30, 2014 00:49 |
|
Expect maintenance on any enterprise piece of hardware to be ~18% of list per year, maybe a bit more/less depending on response times.
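As a back-of-envelope sketch of that rule of thumb (the list prices here are invented for illustration, not quotes from any vendor):

```python
# Yearly maintenance at ~18% of list price, per the rule of thumb above.
def yearly_maintenance(list_price, rate=0.18):
    return list_price * rate

for list_price in (30_000, 250_000):
    print(f"${list_price:,} list -> ${yearly_maintenance(list_price):,.0f}/yr")
# A ~$30k pair of entry arrays lands right around the $5k/yr figure
# mentioned earlier in the thread.
```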
|
# ? Jan 30, 2014 01:08 |
|
parid posted:Anyone have recommendations for backup storage target systems? Commvault's software dedupe and compression tech continues its march to mediocrity. I'm getting real tired of them moving the goal posts for dedupe database's system requirements (it's now: just put it on FusionIO). We, as an organization, have gone round and round with them on this as well...very annoying.
|
# ? Jan 30, 2014 02:22 |
|
parid posted:Anyone have recommendations for backup storage target systems? Commvault's software dedupe and compression tech continues its march to mediocrity. I'm getting real tired of them moving the goal posts for dedupe database's system requirements (it's now: just put it on FusionIO). Their storage efficiency has been poor. Including compression were seeing worse than 4:1, 420TB stored in 116TB of disk. Throughput has also been abysmal. If you add in the FusionIO cards, Commvault's very high support costs (we have capacity base licensing), and the cost of FAS2240's that we currently use, this environment has become very expensive and performs poorly in just about every measure. Really happy with ExaGrid at the moment personally. Not cheap though.
|
# ? Jan 30, 2014 03:04 |
|
TKovacs2 posted:Really happy with ExaGrid at the moment personally. Not cheap though. I'm not sure how we could be spending more than we are at the moment, so it's probably in range. How are your compression ratios in the real world? These guys love to promise the world, and it almost never lives up to it.
|
# ? Jan 30, 2014 04:05 |
|
Docjowles posted:Yeah... if $5k is an unreasonable amount of money to your company you are probably not in the market for a SAN. We had to OK about $350k in workstation purchases as part of the project after getting specs from a software vendor, so finance was already reeling a bit. And honestly, for this project we really don't need the performance of a SAN, but it was a good way to get a large SAN in the door to then build off of for consolidation and later virtualization projects as well. Just to give a picture, we currently have 7 or so different file servers of varying ages for different departments (weird budget poo poo, regulatory issues, and grant politics). My dream is to get a single SAN infrastructure and virtualize the 60+ application servers we have, but it is a tough sell, even though I know I could put together a pretty good proposal with some obvious savings over time. There is also a big concern about knowledge, since I am the only person who seems to care about learning virtualization. Everyone else is pretty old school (most have been there 12+ years). Misogynist posted:If it's scary to your financial officers for budgeting reasons and not your department for cost reasons, you should get a 3-year support contract rolled into the initial purchase. It will probably save you a few bucks if you can spare the cash now. We are in a weird position because we don't really have a budget of our own, so we sort of propose things to the finance director and work them out. We went with the three-year up front, but the added amount for the two SANs killed it. I can say that the guy I was working with made a big (appreciated) effort to get us where we needed to be, but it wasn't happening because of the aforementioned workstation costs. Demonachizer fucked around with this message at 05:52 on Jan 30, 2014 |
# ? Jan 30, 2014 05:47 |
|
ragzilla posted:Expect maintenance on any enterprise piece of hardware to be ~18% of list per year, maybe a bit more/less depending on response times.
|
# ? Jan 30, 2014 10:14 |
|
We are looking at a NetApp MetroCluster for our VMware cluster and will be using Commvault for backups. Any gotchas? Split brain? Ditch Commvault and go for Veeam? We looked at 3PAR also, but we liked the NetApp more because of the ease of snapshotting and the like. Any criticism is welcome; we haven't fully decided yet.
|
# ? Jan 30, 2014 19:03 |
|
Mr Shiny Pants posted:We are looking at a NetApp Metrocluster for our VMWare cluster and will be using commvault for backups. I'd say it's: PHDVirtual > CommVault > Veeam, personally. What are you backing up to?
|
# ? Jan 30, 2014 19:21 |
|
A smaller FAS in a colo. What is wrong with Veeam? We were pretty impressed when they demoed it; the SharePoint and Exchange stuff was excellent.
Mr Shiny Pants fucked around with this message at 19:43 on Jan 30, 2014 |
# ? Jan 30, 2014 19:38 |
|
Mr Shiny Pants posted:A smaller FAS in a Colo. If you're going FAS-to-FAS, I don't know why you'd mess around with any VM backup software. Use the NetApp vCenter plugin (VSC - Virtual Storage Console) to take your snapshots and then use either SnapVault or SnapMirror to send them off-site.
|
# ? Jan 30, 2014 19:43 |
|
madsushi posted:If you're going FAS-to-FAS, I don't know why you'd mess around with any VM backup software. Use the NetApp vCenter plugin (VSC - Virtual Storage Console) to take your snapshots and then use either SnapVault or SnapMirror to send them off-site. That is the idea. We still might need the software for some other machines not on the filers.
|
# ? Jan 30, 2014 19:46 |
|
Mr Shiny Pants posted:We are looking at a NetApp Metrocluster for our VMWare cluster and will be using commvault for backups. A MetroCluster is basically the same as a regular cluster, just stretched across fiber switches with a few minor extra rules (like for the split-brain case). Are you looking at MetroCluster because you need the whole two-separate-sites ability? Also, one thing to keep in mind is that new-feature support sometimes lags a little behind the regular FAS products. For example, you can mix shelves in a stack now (though it's not recommended), but you can't yet with MetroCluster.
|
# ? Jan 30, 2014 19:48 |
|
OldPueblo posted:A metrocluster is basically the same as a regular cluster, just stretched across fiber switches with a few minor extra rules (like in case of the split brain thing). Are you looking at metrocluster because you need the whole two separate sites ability? Also one thing to keep in mind is that new feature support sometimes lags a little behind the regular FAS products. For example you can mix shelves in a stack now (though not recommended), but you can't yet with metrocluster. We have two datacentres that are close by, and we run fiber between them. The MetroCluster gives us the ability to have a stretched VMware cluster on top. The idea is to have it physically separated but logically one cluster.
|
# ? Jan 30, 2014 19:53 |
|
First of all, I work for Inktank, the company supporting Ceph, so take what follows with the requisite grain of salt. I'd like to dig into the Ceph vs. Gluster thing a bit... so if that's not your cup of tea, feel free to breeze right on past this one (it's bound to be a bit of a WoT). evol262 posted:It's new and essentially has the same advantages and disadvantages as Gluster, except that it's newer, less stable, and arguably slower. It's mainline, though, and things should rapidly equalize. Sorry I'm a bit late to this comment (Dec of last year), but I really hate to see it characterized this way. While I realize this may have been flippant/off-the-cuff, each system has use cases where it shines. I'm a little frustrated with all the misleading marketing bombs that keep getting lobbed over the fence from Red Hat, but I suppose that's to be expected from any megacorp /rant. OK, on to the meat... Grand Unified Storage Debate If you haven't seen it, at LCA 2013 Sage (creator of Ceph) and John Mark Walker (Gluster community leader) debated the relative merits of each system. The best part is how both of these guys acknowledge the strengths and weaknesses of each. Architecture Ceph is built on a strongly consistent object storage system that was designed to provide native object, block, and file storage from its inception. The technology uses lightweight peer-to-peer software processes along with an extremely flexible algorithm (called CRUSH) for data placement. The software processes automatically handle all expansion, contraction, and rebalancing of the data within a cluster. Red Hat Storage Server 2.0 (RHSS) is based on GlusterFS, which was designed as a distributed POSIX filesystem and optimized for that use case. Additional capabilities have been added as plugins, but without consideration of the requirements this may place on the underlying storage system.
Data remirroring is not an automatic process when a node leaves/joins the cluster, which increases the ongoing management cost of a cluster. High-Level Comparison (summarized from a report by Hastexo)
* Ceph > Gluster in general data redundancy, distribution, and resilience
* Gluster > Ceph in terms of POSIX filesystem maturity
* Ceph > Gluster in distributed block device and RESTful object storage support
* Ceph > Gluster in availability and richness of APIs for programming use
* Ceph > Gluster in terms of integration with virtualization and cloud computing stacks
* Gluster > Ceph in asynchronous replication and hence its use in cross-datacenter disaster recovery
* Gluster > Ceph in ease of use wrt user experience
Now, this report was generated in September 2012, so obviously the gap between the systems in each of the respective areas has narrowed quite a bit. Overall it's still pretty open to what people prefer to use, and both options are definitely viable. For things like OpenStack cloud deployments Ceph is the clear leader, while RHSS (and by extension Gluster) still seems to have more pure enterprise storage deployments (purely anecdotal, no evidence to support this). Ceph has quite a few large production deployments, including places like CERN, Deutsche Telekom, Dreamhost, and the University of Alabama, so it has definitely met the bar in terms of stability and usability. The one caveat is that CephFS, the POSIX layer on top of the underlying object store, is still being called "nearly awesome" and isn't suggested for production deployment. That is scheduled to change this year with the "Giant" release. If anyone has more questions, I'm always happy to talk shop in our IRC channel (scuttlemonkey on irc.oftc.net #ceph). As you might see from my posting habits, I rarely come out of the woodwork here and mostly just lurk and cause a drain on the available bandwidth. :P Scuttlemonkey fucked around with this message at 20:26 on Jan 30, 2014 |
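To make the CRUSH point above a little more concrete: the core idea is that any client can compute which nodes hold an object from the object name alone, with no central lookup table to consult or keep consistent. This is NOT the real CRUSH algorithm (which handles weights, failure domains, and a placement hierarchy); it's a toy deterministic sketch using rendezvous hashing, with made-up node names:

```python
# Toy computed-placement sketch: hash (object, node) pairs and take the
# top-scoring nodes. Every client computes the same answer independently.
import hashlib

def place(obj_name, nodes, replicas=3):
    """Deterministically pick `replicas` nodes for an object."""
    def score(node):
        h = hashlib.sha256(f"{obj_name}:{node}".encode()).hexdigest()
        return int(h, 16)
    return sorted(nodes, key=score, reverse=True)[:replicas]

nodes = [f"osd{i}" for i in range(8)]
print(place("vm-backup-0042", nodes))  # same result on every client
```

A nice property of this family of schemes is that removing one node only remaps the objects that lived on it, rather than reshuffling everything, which is part of why such systems can rebalance automatically.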
# ? Jan 30, 2014 20:19 |
|
Ceph looks rad. Too bad I don't have any hardware to test it with. A VM is not the same as two physical boxes running Ceph.
|
# ? Jan 30, 2014 20:34 |
|
Admittedly, it was a flippant/off-the-cuff comment. And Ceph has a lot of advantages wrt Gluster, but Gluster also has a lot wrt Ceph. It's not so much a marketing bomb as:
It wasn't intended to be a "Gluster rocks, Ceph sucks" post. I really don't know what problem you had with my characterization of Ceph, which compared it favorably to Gluster except for stability. And honestly, CephFS isn't stable. But again, Ceph is improving rapidly, and there are very good reasons to pick it over Gluster. Just not any on that list.
|
# ? Jan 30, 2014 21:21 |
|
evol262 posted:Ceph's default blocksize can lead to bad performance comparisons on untuned ceph v. gluster systems. evol262 posted:Requiring a metadata server is reminiscent of lustre in a bad way. evol262 posted:Gluster.org and RHSS are not the same thing. Gluster doesn't automatically expand or contract, but that's an intentional design decision which matches other distributed filesystems and disk-level redundancy, up to and including ZFS pools. It's inconvenient and more work for administrators, but hardly a black mark. evol262 posted:It's extremely difficult to say that "ceph > gluster in cloud/virtualization" integration. Huh? Gluster and Ceph are both supported in Openstack. RHEV/oVirt have native Gluster support. Gluster's NFS driver lets it be used as a backing datastore for VMware. You can do NFS over rbd, but it's not native. Ceph's support on Openstack is very comparable to Gluster. "How many Openstack deployments are on Ceph vs Gluster" is a terrible metric for whether "x > y" unless you also intend to argue that "netapp = gluster" and "xen > vmware" based on numbers from the user survey. Keep in mind that the report summary I was drawing from wasn't mine... it was from a third party (which I can't find a public link to), so I felt it would be remiss to include some of it but not all. I think that, properly tuned, Gluster and Ceph are both amazingly-awesometastic™ options compared to the historically available ones. evol262 posted:And honestly, CephFS isn't stable. Ahhh, OK... your response makes way more sense now. With a more filesystem-centric view I totally get where you're coming from. I would have just amended your original statement to say "CephFS" instead of "Ceph" (which was the root of my frustration, which... admittedly is more related to my interactions with Jeff Darcy than with your statements), and I think there would have been no response incited.
Honestly, my biggest hope is that Red Hat/Gluster and Ceph/Inktank can really drive a wedge into the storage industry (those are some huge numbers, both in storage and in dollars) and start weaning people off the expensive, black-box, forklift options (/braces for NetApp and EMC fans...). Thanks for such a reasoned response; always love to see good technical discourse.
|
# ? Jan 30, 2014 21:58 |
|
Anyone had a good look at Pure yet?
|
# ? Jan 30, 2014 22:04 |
|
|
Vanilla posted:Anyone had a good look at Pure yet? Just waiting to get our final budget numbers for 2014 back (wtf, board), and then hopefully we'll be pulling the trigger on Pure for SQL backend storage.
|
# ? Jan 31, 2014 00:41 |