|
Kaddish posted:I'm not a big Netapp guy but isn't BackupExec NDMP capable? Oh, it is. I'm just not too keen on keeping BackupExec around if we move to something else for our VMware-level backups in our prod environment. We're pulling back our SAS Equallogic storage to build a small-capacity 3-node VMware Essentials Plus cluster in our office, and I had planned on moving the current production Backup Exec licensing into our much smaller office environment, where I feel it will work a bit better. So, I'm open to other (preferably better, but not horrifically expensive and complex) NDMP-capable backup options.
|
# ? Mar 6, 2015 21:03 |
|
|
|
TSM for VM seems to work well but that's ruled right out due to both of your requirements.
|
# ? Mar 6, 2015 21:10 |
|
Vanilla posted:That reminds me - for those of you who were following my job race between Nimble and Pure I did eventually join Pure and am loving the product and the company so far. If you go back something like 2 years you'll see me griping about how much of a cluster gently caress EMC gear is to set up and maintain. I think you and I talked about it a bit. There is so much better out there for most organizations.
|
# ? Mar 8, 2015 08:25 |
|
Vanilla posted:That reminds me - for those of you who were following my job race between Nimble and Pure I did eventually join Pure and am loving the product and the company so far. What area are you working in? I just met with some Pure folks last week.
|
# ? Mar 9, 2015 23:18 |
|
Internet Explorer posted:If you go back something like 2 years you'll see me griping about how much of a cluster gently caress EMC gear is to set up and maintain. I think you and I talked about it a bit. There is so much better out there for most organizations. Yup, ain't that the truth!
|
# ? Mar 9, 2015 23:30 |
|
Moey posted:What area are you working in? I just met with some Pure folks last week. Out in EMEA
|
# ? Mar 9, 2015 23:30 |
|
Vanilla posted:Out in EMEA Ahh, safe to assume I didn't meet with you then.
|
# ? Mar 10, 2015 00:34 |
|
I'm looking for a Thunderbolt 2 enabled 1u or 2u (or hell 3u ... we have the space for it) storage system that can't exceed 27" in depth. We ordered the LaCie 12TB 8Big rackmount not realizing it was too deep (just barely). It's for a mobile DIT cart that will be out in the field and has to be closed up for transport so it HAS to fit that depth and it needs to be rackmounted for stability. 4 or 8 bay is enough and in the $1500-2000 range is preferable. It can be relatively dumb storage. We'll just be writing to it via a Blackmagic Mini Record in. edit: And the reason it's Thunderbolt 2 is because the mobo on the DIT machine has 2 x Thunderbolt 2 ports.
|
# ? Mar 11, 2015 15:31 |
|
Not sure this thread is really the best place to ask about DAS solutions. I did find this comparison via google and it looks like there might be a few products that could fit your needs: http://wolfcrow.com/blog/a-comparison-of-10-thunderbolt-raid-storage-solutions/
|
# ? Mar 19, 2015 22:47 |
|
bull3964 posted:Speaking of OnTap, we just finished our Netapp 2554 install today. 20x 4tb SATA, 4x 400gb SSD, 48x 900gb SAS. There's definitely going to be a bit of a learning curve to this as it's not quite as point and shoot as the Equallogic or Pure I've used so far. I didn't see this addressed, but it's pretty important: when you go to provision the FlashPool you need to keep in mind what the workload will be. Certain workload profiles won't even leverage the FlashPool, so allocating it would be a waste. Also make sure your SSD aggr is RAID4 instead of the default RAID-DP, otherwise you'll lose two disks to parity and only have two for data use. How many nodes? Did you provision both nodes with the split disk types? The presence of SATA drives will change the performance characteristics (slightly) of the system vs one that is all SSD+SATA and the other all SAS. Also, learn and love QoS in 8.3. Just try and keep the policy names short or you'll never tell them apart, since the CLI truncates after 22 characters I think.
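For anyone following along at home, here's a rough sketch of what that looks like in the clustered Data ONTAP 8.3 CLI. The aggregate, policy-group, and SVM names (aggr_hybrid01, pg_sql, svm1) are made up for illustration, so double-check the syntax against your own system before running anything:

```
# Names below (aggr_hybrid01, pg_sql, svm1) are examples, not from the post.

# Mark an existing HDD aggregate as Flash Pool eligible:
storage aggregate modify -aggregate aggr_hybrid01 -hybrid-enabled true

# Add the four SSDs as a RAID4 raid group. The default RAID-DP would
# spend two of the four disks on parity instead of one:
storage aggregate add-disks -aggregate aggr_hybrid01 -disktype SSD -diskcount 4 -raidtype raid4

# QoS policy group, with a deliberately short name so CLI output
# doesn't truncate it into ambiguity:
qos policy-group create -policy-group pg_sql -vserver svm1 -max-throughput 5000iops
```

The -raidtype flag on add-disks is what gets you RAID4 for the cache tier; if you let it default you're stuck with the two-parity-disk layout toplitzin warns about.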
|
# ? Mar 20, 2015 23:31 |
|
I've got an EMC AX4 that is ancient and out of warranty, not in use in production any more but just sitting around. Anyway, I'm thinking of keeping it around in the office for VMware lab work etc. Does anyone know if it is possible to use non-EMC branded disks in this? I understand it will probably flat out refuse to without some kind of gently caress around with disk formatting or something, so I'm wondering if someone has done this before and if so, how?
|
# ? Mar 30, 2015 02:58 |
|
AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom firmware. Chances are the controller won't even identify the drive.
|
# ? Mar 30, 2015 14:44 |
|
Richard Noggin posted:AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom firmware. Chances are the controller won't even identify the drive. There are ways to brute force a firmware update onto a drive.
|
# ? Mar 30, 2015 17:59 |
|
toplitzin posted:I didn't see this addressed, but it's pretty important. When you go to provision the FlashPool you need to keep in mind what the workload will be. Certain workload profiles won't even leverage the FlashPool so allocating it would be a waste. Also make sure your SSD Aggr is raid 4 instead of the default DP, otherwise you'll lose two disks to parity and only have two for data use. Two nodes. One node owns the SAS aggregate and one node owns the SATA aggregate. It will, of course, fail over to the other node if necessary, but we split them so as to not cause any performance issues.
|
# ? Mar 30, 2015 18:18 |
|
Richard Noggin posted:AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom firmware. Chances are the controller won't even identify the drive. ...and I think they used a 520-byte block size rather than the usual 512. The disks must be pretty small? 146/300GB? Worth buying one just to try?
|
# ? Mar 30, 2015 18:56 |
|
Vanilla posted:...and I think they used a 520 block size rather than your usual 512. AX4s will use 500gb and 1tb SATA drives. They are formatted at 520 bytes per sector, though. They should be reasonably affordable on eBay.
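If anyone does go down the third-party-disk road, the 520-byte sector part can sometimes be handled from a Linux box with sg_format from sg3_utils. A hedged, destroys-all-data sketch (the device name is an example, and this only fixes sector size; the array can still reject the drive over firmware):

```
# WARNING: destructive. /dev/sg2 is an example device name -- verify
# which SCSI generic device maps to your drive before running anything.

# Check what the drive currently reports:
sg_readcap --long /dev/sg2

# Low-level format to 520 bytes per sector (can take hours on big disks):
sg_format --format --size=520 /dev/sg2
```

Even with the sector size right, the controller may still refuse a drive without EMC firmware, per Richard Noggin's point above.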
|
# ? Apr 1, 2015 03:35 |
|
Are there any dual-path, dual-controller DAS SAS JBODs that aren't horrible? We have some Intel JBOD2000s (224) and they're lousy. Fan controllers will increase fan speeds but never decrease them, power supplies get detected as failed when they're perfectly fine, they can only be turned on by pushing a button, etc. I'm having a hard time finding something that doesn't suck.
|
# ? Apr 1, 2015 17:31 |
|
Any suggestions for building out a cheap storage server or a SAN to use as virtual machine storage for a small five-to-ten server Hyper-V or XenServer setup? The servers will mostly be single-socket Xeon E3 servers with 32gb of ram and would use the storage server or SAN as storage. Ideally I would love to find some magical storage solution that would let me set up some sort of redundancy and allow me to add more SSDs as needed, as well as add a second device later on for redundancy. Really we probably don't need a real SAN; a regular storage server with four Intel DC S3500 1.2TB SSDs would be enough to handle the virtual machines. We are mainly planning on using it to offer hosted application services to our clients and would love to have failover features available for their virtual machines. Also, ideally I would love something that I can roll my own from, because it's something that I would enjoy more, and I would also love to be able to fix any problem without having to contact vendor support. Personally I would love to use something that is not hardware dependent, like Windows Storage Spaces, but I am worried that performance will be awful. It is something that I need to test, but I only have regular consumer-grade SSDs in my home lab. I have been using Storage Spaces on my backup server and it's been amazing as far as how easy it is to add more disk to a cluster. However, I have heard mixed reports about Storage Spaces performance. Do any of you knowledgeable fellows know of any magical product that exists that can do any of this stuff that is also relatively inexpensive? Also not sure if this is a question for the enterprise storage thread or the virtualization thread.
|
# ? Apr 1, 2015 19:25 |
|
Stealthgerbil posted:Any suggestions for building out a cheap storage server or a SAN to use as virtual machine storage for a small five to ten server hyper-V or xenserver setup?
|
# ? Apr 1, 2015 23:37 |
|
We're pondering dumping our current NetApp gear (still on 7-mode) and moving to hybrid Tegile (T3200s in particular) for VM storage, vs refreshing our current NetApp heads and going through the joy of a CDOT migration. I haven't seen much Tegile chat; anyone have any thoughts on their gear they want to share? I've heard good things about their all-flash arrays, but haven't really run across a lot of people running their stuff in a hybrid configuration. Really I'm just curious if it's as "set it and forget it" as the sales reps/engineers are claiming for NFS vmstores. Backup-wise we'll be going either Veeam or Commvault.
|
# ? Apr 9, 2015 18:33 |
|
We looked at Tegile vs. Nimble (vs. VNX lol) and went Nimble (which invalidates the following story because you need NFS), not because anyone we talked to hated Tegile, but because no one felt strongly about it. Nimble feedback is nothing but positive, and their install base is much larger. Tegile's supposedly shooting for an IPO at some point, but until then, Nimble is a little more transparent if you're into that. Tegile pushes their use of eMLC flash, which is dumb because the point of a storage array is to abstract away the underlying hardware. I spoke to some Tegile references and they were all like "yeah, I dunno, we set it up and it works, it wasn't too bad." So it is set it and forget it, and I was unable to find someone with juicy support stories, so that was my only real concern. Plenty of people vouched for Nimble's support experience, which I can also do at this point. So tl;dr, we looked at Tegile and it was unexciting, but not bad. And since you're looking for NFS, my Nimble story is pointless except to explain why we didn't go with Tegile.
|
# ? Apr 9, 2015 18:50 |
|
Erwin posted:We looked at Tegile vs. Nimble (vs. VNX lol) and went Nimble (which invalidates the following story because you need NFS), not because anyone we talked to hated Tegile, but because no one felt strongly about it. Nimble feedback is nothing but positive, and their install base is much larger. Tegile's supposedly shooting for an IPO at some point, but until then, Nimble is a little more transparent if you're into that. The "we set it up and it works" is all we've really gotten as well, which is certainly a good thing compared to my NetApp experiences. I liked Nimble as well, but Tegile overall seems like they have a more flexible line of arrays. Maneki Neko fucked around with this message at 09:01 on Apr 10, 2015 |
# ? Apr 10, 2015 07:13 |
|
The Nimble CS-420 we had in our lab pretty handily outperformed the Tegile HA2100 we had. The Tegile was running raid 10 on the backing storage and still couldn't keep up with the Nimble running raid 6. Raidz and raidz2 aren't particularly performant, and Tegile suffers the same problems, which means you're stuck dividing up capacity into different raid levels for different performance requirements versus just running everything off of one big pool. That's not simple and doesn't really fit with their message of easy setup and operation. The benefits of Tegile are multi-protocol and inline dedupe and compression. They're not any faster than anyone else, from my experience, so it basically comes down to whether you want a NetApp-lite experience for less money. Me, I'd either go with Nimble for simplicity or NetApp for multi-protocol. Tegile seems to split the difference in a not-very-compelling way. If it's all virtual then TinTri has a really nice offering that is simple, fast, and has some really nice features that all work on a per-VM basis, including per-VM QoS limits and guarantees now.
|
# ? Apr 11, 2015 00:54 |
|
NippleFloss posted:Me, I'd either go with Nimble for simplicity or NetApp for multi-protocol.
|
# ? Apr 11, 2015 03:13 |
|
Not forgetting a few other things, such as Nimble being comparatively cheap (compared against NetApp, that is) and including all software.
|
# ? Apr 11, 2015 17:06 |
|
I'm doing a comparative study of distributed filesystems (Ceph, GlusterFS, Lustre) for a local user group this week. Is there anything anyone here would want to see covered? (Of course I'm gonna share the slides.)
|
# ? Apr 11, 2015 18:17 |
|
A few months back, I think it was in this thread, someone needed to do large-bandwidth file storage/transfers for 4K video editing and you guys gave him great advice, but I can't find it. Anyone remember what I'm talking about?
|
# ? Apr 11, 2015 22:53 |
|
Cross posting this from the Enterprise Windows thread, because maybe people have seen something on the storage side: FISHMANPET posted:I'm gonna post this in the Storage thread too, but has anyone seen problems with slow storage performance on Server 2012 R2? I've got an open case with Microsoft but we're a month in and still seem to just be flailing randomly at even identifying a problem. I've heard mumblings of others having problems, but wondering if anyone has noticed anything.
|
# ? Apr 13, 2015 18:57 |
|
Misogynist posted:I'm doing a comparative study of distributed filesystems (Ceph, GlusterFS, Lustre) for a local user group this week. Is there anything anyone here would want to see covered? (Of course I'm gonna share the slides.) Is how to be a Lustre admin without killing yourself covered? But I honestly would love to see the slides. And a comparison to HDFS would be awesome.
|
# ? Apr 13, 2015 19:15 |
|
evol262 posted:Is how to be a Lustre admin without killing yourself covered?
|
# ? Apr 13, 2015 23:58 |
|
New TinTri OS adds support for VM level QoS including both limits and guarantees. That's a pretty great feature that no one else can match right now. Only Solidfire has workable QoS guarantees at all and those are at the volume level.
|
# ? Apr 14, 2015 18:35 |
|
FISHMANPET posted:Cross posting this from the Enterprise Windows thread, because maybe people have seen something on the storage side: VM or physical? What kind of storage?
|
# ? Apr 15, 2015 04:14 |
|
Cross posting this from the virtualization thread. Probably belongs here anyway: goobernoodles posted:One of my two offices has only one host on local storage running a DC and some file, print, and super low-end application servers. It's a small office with about 20-30 people. The long term plan is to replace the core server and storage infrastructure in our main office, then potentially bringing the SAN and servers to the smaller office to improve on their capacity as well as have enough resources to act as a DR site. Until then though, I was planning on loading up a spare host with 2.5" SAS or SATA drives in order to get some semblance of redundancy down there, as well as being able to spin up new servers to migrate the old 2003 servers to 2012. Right now, there's ~50Gb of free space on the local datastore. I'm looking for at least 1.2tb of space on the server I take down. I'm trying to decide on what makes the most sense from a cost, performance, resiliency and future usability standpoint. I'm trying to keep everything under a grand.
|
# ? Apr 15, 2015 04:16 |
|
Misogynist posted:I'm doing a comparative study of distributed filesystems (Ceph, GlusterFS, Lustre) for a local user group this week. Is there anything anyone here would want to see covered? (Of course I'm gonna share the slides.) What's your input on Gluster thus far? I'm considering evaluating it here in the next few weeks.
|
# ? Apr 16, 2015 00:35 |
|
mattisacomputer posted:VM or physical? What kind of storage? Physical server, attached with Fibre Channel to a Hitachi SAN. But apparently the same issue is happening on a local 10k SAS disk and also a FusionIO card. But, it turns out, this request is coming from a production system running unreleased Commvault software in an experimental configuration. I assumed that we were doing normal stuff and other customers were doing this just fine, but nobody is doing this at the scale we are. So tl;dr: maybe not a problem, backup guy is a poo poo.
|
# ? Apr 16, 2015 00:43 |
|
the spyder posted:What's your input on Gluster thus far? I'm considering evaluating it here in the next few weeks.
|
# ? Apr 16, 2015 05:05 |
|
Misogynist posted:Haven't run it in production. It seems to be the easiest distributed FS to plan and administer, since it only has one node type and there are few deployment gotchas. It seems to be a great fit overall for file-based workloads requiring high throughput from hundreds or thousands of clients, which covers most non-speciality HPC cluster use cases. It's a bad fit for client nodes that are throughput-constrained, because of the way it handles replication on the client rather than the server other than when it's healing a replication problem. It also doesn't seem to be quite as good a fit for mass object storage as Ceph, but that's a fairly specific use case for most on-premises environments. The client end is FUSE-based, which can result in some slowdown versus the CephFS client which is native and kernel-based. However, CephFS has an MDS component that's currently impossible to scale and a single point of failure, so I wouldn't recommend it for prime time. As far as I know, ceph RBD is still the component everyone loves and CephFS is questionably stable, but YMMV and it's under heavy development (we bought inktank, but ceph packages are still going through an overhaul to get into Fedora, which says a lot about how bad it was about shipping a ton of crap in /opt before)
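For anyone wanting to kick the tires on Gluster after all this, a minimal replicated volume looks roughly like the following. Hostnames and brick paths (server1, server2, /bricks/b1, gv0) are made up, and the syntax is from the 3.x era being discussed here:

```
# Run on one of the peers; all names here are examples.
gluster peer probe server2

# Two-way replicated volume across one brick per server:
gluster volume create gv0 replica 2 server1:/bricks/b1 server2:/bricks/b1
gluster volume start gv0

# FUSE mount from a client. This is the path where writes get fanned
# out to both replicas client-side, as described above:
mount -t glusterfs server1:/gv0 /mnt/gv0
```

The client-side fan-out is why throughput-constrained clients suffer: every write goes over the wire once per replica.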
|
# ? Apr 16, 2015 05:26 |
|
evol262 posted:As far as I know, ceph RBD is still the component everyone loves and CephFS is questionably stable, but YMMV and it's under heavy development (we bought inktank, but ceph packages are still going through an overhaul to get into Fedora, which says a lot about how bad it was about shipping a ton of crap in /opt before) Yahoo! just announced plans yesterday to use Ceph to underpin the media storage for Flickr and Tumblr: http://www.theregister.co.uk/2015/04/15/yahoo_plans_storage_service_on_ceph/ CephFS doesn't have any inherent stability problems that I'm aware of, and it's rather well-tested, but from my perspective, it's inadvisable to use a metadata server with a single point of faiure in a production setting. It was bad enough when Hadoop did it. Vulture Culture fucked around with this message at 05:50 on Apr 16, 2015 |
# ? Apr 16, 2015 05:48 |
|
Misogynist posted:Haven't run it in production. It seems to be the easiest distributed FS to plan and administer, since it only has one node type and there are few deployment gotchas. It seems to be a great fit overall for file-based workloads requiring high throughput from hundreds or thousands of clients, which covers most non-speciality HPC cluster use cases. It's a bad fit for client nodes that are throughput-constrained, because of the way it handles replication on the client rather than the server other than when it's healing a replication problem. It also doesn't seem to be quite as good a fit for mass object storage as Ceph, but that's a fairly specific use case for most on-premises environments. The client end is FUSE-based, which can result in some slowdown versus the CephFS client which is native and kernel-based. However, CephFS has an MDS component that's currently impossible to scale and a single point of failure, so I wouldn't recommend it for prime time. We've a couple of small gluster deployments and on the whole it works really well. The geo-rep feature is really good, and the 3.6 version of it, where it uses the volume change log rather than rsync, is a big improvement. A small problem with it is that upgrading between versions (3.4 -> 3.5 -> 3.6) can be a bit of an adventure, and the last time I checked the advice was to disconnect all the clients and offline the cluster before doing this. Small-file performance is pretty meh at the moment, but they are doing a lot of work to improve this right now. If you are willing to handle failover yourself you can use NFS to access the cluster rather than the fuse client.
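To illustrate the NFS option and geo-rep jre mentions: Gluster's built-in NFS server speaks NFSv3, so the mount is plain nfs rather than the FUSE client, and you're on the hook for failover if that one server dies. Names below (server1, gv0, slavehost, slavevol) are examples:

```
# NFSv3 mount against Gluster's built-in NFS server -- no FUSE client,
# but no transparent failover either; server1 is now a single point:
mount -t nfs -o vers=3,mountproto=tcp server1:/gv0 /mnt/gv0

# Geo-replication to a remote cluster (3.5+ syntax; names are examples):
gluster volume geo-replication gv0 slavehost::slavevol create push-pem
gluster volume geo-replication gv0 slavehost::slavevol start
```

With NFS the replication fan-out happens server-side, which is the usual workaround for the throughput-constrained clients Misogynist describes.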
|
# ? Apr 16, 2015 13:19 |
|
|
|
jre posted:We've a couple of small gluster deployments and on the whole its works really well. The geo-rep feature is really good and the 3.6 version of it where it uses the volume change log rather than rsync is a big improvment. A small problem with it is upgrading between versions ( 3.4 -> 3.5 -> 3.6) can be a bit of an adventure and the last time I checked the advice was to disconnect all the clients and offline the cluster before doing this. Small file performance is pretty meh at the moment but they are doing a lot of work to improve this right now. If you are willing to handle failover yourself you can use NFS to access the cluster rather than the fuse client.
|
# ? Apr 16, 2015 15:48 |