|
grobbendonk posted:I'm a team leader for the storage team for a fairly large UK company, and we're currently in the final stage of discussions with three of the larger vendors in the market (block based) to refresh our existing EMC estate (DMX / Clariion / VMAX). I've used it; it's a nice platform. It comes with SVC, which is pretty easy to implement. I work for IBM, so feel free to ask more specific questions. IBM is pretty conservative with its marketing material, so yes, the V7000 works as advertised.
|
# ? Jul 20, 2011 02:17 |
|
skipdogg posted:We use almost everything mentioned here in our backup scheme. I am in a smaller environment, but I had no idea tape had finally dropped to such an affordable price. It looks like I can pick up a refurb PE124T LTO4 SAS with 8 cartridges + 10 tapes for around $2500. I need to buy two media safes (one for here and one for the owner's house), but I am sure happy the subject was brought back up. This way I can keep monthly and weekly offsite backups, on top of the daily disk-to-disk snapshots. (Currently at 1.2TB of data, 2 tapes or so worth.)
|
# ? Jul 20, 2011 02:35 |
|
the spyder posted:I am in a smaller environment, but I had no idea tape had finally dropped to such an affordable price. It looks like I can pick up a refurb PE124T LTO4 SAS with 8 cartridges + 10 tapes for around $2500. I need to buy two media safes (one for here and one for the owner's house), but I am sure happy the subject was brought back up. This way I can keep monthly and weekly offsite backups, on top of the daily disk-to-disk snapshots. (Currently at 1.2TB of data, 2 tapes or so worth.) $2500 isn't steep at all, especially considering a major data loss can pretty much destroy most small businesses. Even having the boss keep a monthly set offsite at his house or in a safety deposit box can be the difference between a company surviving or not. Vanilla posted:This is a topic that has been brought up recently due to what happened to the Aussie company Distribute IT. Well, that's neat. How about I take the drives out of the DD and smash them; do they have an answer for that? I'm a really, really angry admin.
|
# ? Jul 20, 2011 04:19 |
|
the spyder posted:I am in a smaller environment, but I had no idea tape had finally dropped to such an affordable price. It looks like I can pick up a refurb PE124T LTO4 SAS with 8 cartridges + 10 tapes for around $2500. I need to buy two media safes (one for here and one for the owner's house), but I am sure happy the subject was brought back up. This way I can keep monthly and weekly offsite backups, on top of the daily disk-to-disk snapshots. (Currently at 1.2TB of data, 2 tapes or so worth.) There are probably cheaper routes to go if you were so inclined. If I'm not mistaken, IBM offers essentially a 2-drive stand-alone frame with an autoloader (like 6 or 10 slots) for less than that. Got into a discussion with one of the guys out here on the contract we're working on about the drop in tape prices. It was only about 5 or 10 years ago that new tapes would run you 75+ bucks each, but even Wal-Mart sells media now... you can get a full stock of media for a fraction of the cost a few years back. The flip side of the coin is that buying new from the manufacturers, they are raping people on licenses and support contracts. But I guess that's good for us 3rd-party folks.
|
# ? Jul 20, 2011 13:33 |
|
EVA4400 with 60x400GB 10k FC disks here. What kind of vRaid should we use for:
Random VMware application servers
Exchange database files
Smaller Oracle/MSSQL (~2GB express edition) servers
File servers
We're having some performance issues, where creating a new VM (Win 2008 R2) takes ~25 minutes (the "expanding" part during the initial install), and we see write latency going to 1-4k ms. We're mostly using vRaid5 except for the bigger database servers. Should we convert most VMFS disks to vRaid1? Has anyone here gotten a performance analysis from HP on an EVA system? Was it worth the money, or did you just get obvious results like "add more disks, use faster disks"?
|
# ? Jul 20, 2011 14:42 |
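The write-latency numbers in that EVA question line up with plain RAID write-penalty math. A rough sketch, assuming a rule-of-thumb ~150 IOPS per 10k FC spindle (a generic planning figure, not anything HP publishes):

```python
# Rough front-end write IOPS budget for the 60-spindle EVA described above.
# Write penalties: mirroring (vRaid1-style) costs 2 back-end I/Os per
# front-end write; RAID5-style costs 4 (read data, read parity, write both).

SPINDLES = 60
IOPS_PER_SPINDLE = 150  # assumption, not a measured or vendor number

def front_end_write_iops(write_penalty):
    """Back-end IOPS budget divided by the RAID write penalty."""
    return SPINDLES * IOPS_PER_SPINDLE // write_penalty

print(front_end_write_iops(2))  # mirroring: 4500 front-end write IOPS
print(front_end_write_iops(4))  # RAID5: 2250, half the write headroom
```

Once the sustained write rate exceeds that budget, queues build and latency climbs into the seconds, which is roughly what 1-4k ms looks like.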
|
skipdogg posted:
No, they'd like you to do that because they get to sell another one.
|
# ? Jul 21, 2011 00:00 |
|
zapateria posted:EVA4400 with 60x400GB 10k FC disks here. What kind of vRaid should we use for: So you are now in the EVA hell where HP refuses to explain how the secret sauce works and how to get predictable performance. Some people we talked to got this solution from HP: "buy a new EVA and migrate some apps to it".
|
# ? Jul 21, 2011 14:24 |
|
Is anyone familiar with the various solutions available for de-dupe backup storage? We use CommVault and Data Domains, but our DDs (DD510 and DD530) are coming to end of life in October; we outgrew them. I've been looking at the new Data Domains, FalconStor, Nimble and HP offerings, but it's a little off-putting when the sales engineer coming onsite to sell to me doesn't know whether their solution does fixed- or variable-length block deduplication. Our basic requirement is around a 15TB full plus 7TB of incremental backups at our colo that we want to replicate to our home office, where we can do weekly tape-outs. The home office needs to support the replicated colo data, as well as another 3TB of native file server data from the home office and another 4TB of replicated file server data from the remote offices. We also need to size this so that we won't need to upgrade for the next 3 years. My company is not so cheap that they won't buy the right solution, but we also don't throw money around like drunk monkeys. Can anyone who has been through a similar process guide me to what might be the best solution (bang for buck)? CommVault or another software solution plus some whitebox hardware, or even taking advantage of our SAN, would be a great alternative, but CommVault's de-dupe is fixed block length only as far as I know, and I'm only aware of one software solution that would support variable-length de-dupe (FalconStor's de-dupe virtual appliance).
|
# ? Jul 22, 2011 20:55 |
|
I have just bought Simpana and have yet to put it in production. Is there a reason you chose dedupe in the storage target instead of doing it in software on the media agent? I have limited targets to test against, but using a NetApp FAS dedupe system as a target was worse than plain hardware compression on tapes. It's probably some blocksize setting in Simpana or something? Setting up dedupe in Simpana got much better numbers, and we can use some random SAN disk as storage. With the pricing on disk from DD or NetApp, it just makes a lot more sense to dedupe in software?
|
# ? Jul 23, 2011 17:18 |
|
conntrack posted:I have just bought simpana and have yet to put it in production. Is there a reason you choose dedupe in the store instead of doing it in software on the media agent? Simpana is great. You won't regret choosing it as a backup solution. But for de-dupe, neither CommVault nor NetApp is in the same category as Data Domain. Data Domain is one of only a couple of solutions that do variable block-length deduplication on write (no post-process maintenance jobs). If you are happy with the capacity you'll need on your NetApp to do backups, then maybe you don't need to look at the Data Domain, but they are awesome products for backup. At one point last year, we were getting 28x compression on our Data Domains. Data Domains can ONLY do this, though: they like static data that doesn't change much (especially if you're replicating). Read up on variable block-length deduplication to see why it's almost always better FOR BACKUP DATA (not for everything).
|
# ? Jul 23, 2011 18:24 |
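To make the fixed- vs variable-length distinction concrete, here's a toy sketch. The boundary rule (a masked rolling sum) is a stand-in for the Rabin-style rolling hashes real dedupe engines use; everything here is illustrative, not any vendor's actual algorithm:

```python
import hashlib

def fixed_chunks(data, size=8):
    """Cut the stream every `size` bytes, regardless of content."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, window=4, mask=0x0F):
    """Toy content-defined chunking: cut wherever a sum over the last
    `window` bytes hits a boundary pattern. Real products use rolling
    hashes (e.g. Rabin fingerprints); this only shows the idea."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if sum(data[i - window:i]) & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def unique(chunks):
    return {hashlib.sha1(c).hexdigest() for c in chunks}

original = b"the quick brown fox jumps over the lazy dog " * 4
shifted = b"XX" + original  # two bytes inserted at the front

# A 2-byte insertion misaligns every fixed boundary, so nothing dedupes
# against the original. Content-defined boundaries resynchronize right
# after the insertion, so most chunks still match.
fixed_shared = unique(fixed_chunks(original)) & unique(fixed_chunks(shifted))
var_shared = unique(variable_chunks(original)) & unique(variable_chunks(shifted))
print(len(fixed_shared), len(var_shared))  # fixed shares 0 chunks
```

This alignment sensitivity is why fixed-block targets struggle with backup container files that shift between runs.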
|
Honky_Jesus posted:The newer models are pretty spiffy; I touch myself thinking about the SL8500. A place across the street just ripped out two SL8500s (I believe the running joke was that they were going to just put 'em down by the river for the homeless folks to live in). Went to an IBM; not sure what model it is. It's smaller, though, and I guess it works better with TSM, which was their reason for replacing it. I'm not willing to believe that a flagship library like an SL8500 had any problems with TSM that STK/Sun/Oracle engineers wouldn't fall all over themselves to fix, so I'm thinking there may have been something else going on. I told them I'd take them off their hands if I'd had the floor space... They seemed at least partially serious about just dumping them. Made me sad.
|
# ? Jul 25, 2011 19:43 |
|
Oh SH/SC goons, help me, for you are my only hope! We've been having issues with our DC recently due to their chillers; they had a tech doing some work on one of them today, swapping load between their two chillers. This caused the water to heat up and essentially turned the fans off in our racks. Being new to NetApps, I'm not sure if there are logs to track how hot it got during that time. I'm guessing probably not, so I think my question is: how hard is it to put this into Zenoss? I had a quick gander and we have the community NetApp ZenPack installed, but I'm not sure where to go from here. Sincerely, embarrassed backups monkey
|
# ? Jul 26, 2011 08:18 |
|
zapateria posted:EVA4400 with 60x400GB 10k FC disks here. What kind of vRaid should we use for: EVA4400 with 72 x 300GB FC 15k and EVA4100 with 36 x 300GB FC 10k here (used with vSphere 4.1). We asked for help with a performance issue and they performed a performance analysis. Basically, it was "buy more disks, sorry." What's your config for disk groups? (I'm assuming one disk group?)
|
# ? Jul 26, 2011 11:06 |
|
One disk group, 27 virtual disks.
|
# ? Jul 26, 2011 13:44 |
|
vomit-orchestra posted:oh SH/SC goons, help me for you are my only hope! If you have NOW access, you can look up the diag codes for your system. There is a chapter on environmental alarms such as temperature and voltage. I don't know if they list the exact temp, but it's a start. If no alarms were logged, I guess you were safe during the incident, unless you crashed, of course...
|
# ? Jul 28, 2011 11:49 |
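If the filer's CLI gives you temperature readings (7-mode has an `environment status` command, though the output format varies by model and ONTAP version), getting a number into a monitoring script is mostly just parsing. The sample text below is made up for illustration, not real ONTAP output:

```python
import re

# Made-up sample resembling environmental status output; real ONTAP
# output differs by model and version, so treat this as illustration only.
SAMPLE = """\
Channel: chassis
  Temperature: 34 C (normal)
  Temperature: 51 C (warning)
"""

def temps_celsius(output):
    """Pull every 'Temperature: NN C' reading out of the command output."""
    return [int(t) for t in re.findall(r"Temperature:\s+(\d+)\s*C", output)]

readings = temps_celsius(SAMPLE)
print(max(readings))  # hottest reported sensor; here 51
```

From there, a Zenoss command datasource (or the ZenPack's SNMP modeling, if it exposes environmental sensors) can graph and threshold the value.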
|
Spamtron7000 posted:Simpana is great. You won't regret choosing it as a backup solution. But for de-dupe, neither CommVault or NetApp is in the same category as Data Domain. Data Domain is one of only a couple of solutions that do variable block-length deduplication on write (no post-process maintenance jobs). If you are happy with the capacity you'll need on your NetApp to do backups then maybe you don't need to look at the Data Domain but they are awesome products for backup. You have some very good points. I had forgotten the variable vs. fixed issue. It's a good thing I read up on that again; that distinction makes it hard to compare the targets we have available at my site. It also explains why NetApp FAS systems dedupe like poo poo: Simpana stores container files that are never aligned the same way twice with the underlying filesystem. I will have to ask NetApp about the marketing crap they spewed at us about how the FAS systems would dedupe it to hell and back. It just won't work as they claim unless there are some tweaks to be done.
|
# ? Jul 28, 2011 11:54 |
|
My client has a NetApp FAS2020. It has 2 volumes with one aggregate, and shows 7 disks, 1 spare on the system status. If I go into the storage options, it shows that there are 14 134GB SAS drives installed. That would allow for almost 2TB of storage. However, it shows that only 4 of the disks are of type data, 7 of type partner, 1 dparity, 1 parity, and 1 spare. That seems to be an awful lot of redundancy when only about a quarter of the physical capacity can be used. I did not configure this system, and it actually is not being used at all right now. I would like to provision it as an ESXi datastore, but I need more than the 454 gigabytes available. Does the unit really need these partner disks, or can the entire array be reconfigured as a traditional RAID setup?
|
# ? Jul 28, 2011 16:40 |
|
Looks like the FAS 2020 is in HA mode and 7 drives are assigned to the head you are logged into, and the other 7 are assigned to the other head. This means the disks are in two separate aggregates. The 7 disks in head B are not "redundancy" for the disks in head A. You could change the ownership of the 7 drives in head B to head A. Then head A would have all the drives, you could stick them in one big RAID-DP group and then have 138GB x 11 = ~1.5TB of storage, with one parity, one dparity and one spare. Head B would be idle until a failover occurred.
|
# ? Jul 28, 2011 18:55 |
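The capacity arithmetic in that reply, spelled out (138GB is the right-sized figure the reply quotes, not the raw drive label):

```python
# RAID-DP usable capacity for the reassignment described above:
# all 14 drives on head A, minus one spare, one parity, one dparity.
DISK_GB = 138        # right-sized capacity quoted in the post
TOTAL_DISKS = 14

def usable_gb(disks, spares=1, parity_disks=2):
    """Data capacity of one RAID-DP group after spares and parity."""
    return (disks - spares - parity_disks) * DISK_GB

print(usable_gb(TOTAL_DISKS))  # 11 data disks -> 1518 GB, i.e. ~1.5 TB
```

Right-sizing and WAFL overhead shave more off in practice, so treat this as an upper bound.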
|
Cool, that makes sense. High availability would be nice, though, but it's FAR more likely that the site would lose network connectivity before losing an entire disk array. So I will just reprovision it, I guess.
|
# ? Jul 28, 2011 19:04 |
|
No, you already have high availability. If a head fails, the other head (the partner) will take over and there will be no interruption in service. Your current setup is like having two baskets with 6 eggs each, and you use both baskets at the same time. You want to change to a setup where you run one basket with 12 eggs and the other basket is empty. But in both situations you are running in HA, and if basket 1 failed, basket 2 would be there to pick up all the eggs, with nothing going down.
|
# ? Jul 28, 2011 19:08 |
|
Thanks for the help. Do you by chance have a guide that shows how to properly remove the partner drives so that they can be reassigned to head A? Will I have to wipe the entire aggregate? Or can I simply grow it by adding head B's disks?
|
# ? Jul 28, 2011 19:50 |
|
You also lose half your processing performance.
|
# ? Jul 28, 2011 20:16 |
|
conntrack posted:You also lose half your processing performance. That's not a problem; only one or two ESXi boxes will be connecting to it. Having more storage space is much more important than processing speed in this case.
|
# ? Jul 28, 2011 21:10 |
|
But you aren't gaining any space, you're just consolidating it. Just use the space on the partner filer.
|
# ? Jul 29, 2011 04:30 |
|
You will need to leave 3 disks attached to the partner, as it needs its own aggregate/root volume to run, and that requires at least 3 disks.
|
# ? Jul 29, 2011 05:59 |
|
madsushi posted:You will need to leave 3 disks attached to the partner, as it will needs its own aggregate/root volume to run and that requires at least 3 disks. Good point. I forgot this is a small 2020, and if there are no other disks to host the root volume, you can't do what I said and move all the disks to head A. Best practice is to keep the heads balanced anyway, so you should probably follow adorai's advice.
|
# ? Jul 29, 2011 16:43 |
|
zapateria posted:EVA4400 with 60x400GB 10k FC disks here. What kind of vRaid should we use for: VMDK files and Exchange should be on vRaid1, file servers can be on vRaid5, and Oracle/MSSQL would depend on the size and performance requirements of the database. Unless you're dealing directly with HP engineering to solve stability issues, HP support is worthless as gently caress. I was a partner doing OEM support for HP, and calling their internal support lines was never anything more than a waste of time. I should also ask, how many VMs are you running on this EVA? Are the VM hardware, drivers and critical updates all up to date? Have you used EVAperf to gather any performance metrics? 60 disks really may not be enough, although if you have everything as vRaid5 that'll definitely be causing some bottlenecks. Nomex fucked around with this message at 20:11 on Jul 29, 2011 |
# ? Jul 29, 2011 20:01 |
|
Can anyone recommend what type and model of new SAN I should look at? I've never had to purchase one before. I'm currently specifying an infrastructure refresh at my new job, looking to set up their first SAN and virtualize 9 servers. I would be after around 2 to 3TB of storage. Mostly SQL 2000 and a bunch of file servers, with around 100 users at most. We'll have 3 VMware or Xen hosts and would run iSCSI. Not sure if we can afford any kind of replication to another site. Most of the current servers are still Pentium III... so anything new is going to be a big improvement for them. What would be good switches for iSCSI? Exchange is staying physical for now, as it was sorted out at the end of last year.
|
# ? Aug 3, 2011 22:00 |
|
You can get the entry-level EqualLogic boxes fairly cheap, and for what you need they sound like they'll work pretty well. I have been using them for a while now and I think they're the best bang for the buck out there. If you are going to virtualize stuff, I would try to upgrade as much of the Windows 2000 stuff as you can, even if it's just to 2003. If you only have 100 users, I would virtualize the Exchange server in a heartbeat. If it's a newer server, you can convert it into an extra VMware or XenServer host. I am not as familiar with the network infrastructure side of things, but I think we are using 2 Cisco Catalyst 3750s in an HA configuration. I would recommend keeping the SAN/iSCSI traffic separate from your LAN traffic. The majority of the people in this thread will recommend VMware, but you can do what you're wanting with any of the big 3, really: VMware, Hyper-V, XenServer. I would just look at what costs the least and go from there. Most 100-user companies don't really care if a server goes down for a few minutes and you need to go in and boot it up on the other host. I personally would rather put that money into decent hardware.
|
# ? Aug 3, 2011 23:50 |
|
I don't keep up with this thread, so apologies if this was brought up already. Just trying to spread the word on this because we were caught off guard by it today: OSX Lion will cause EMC Celerra to panic without a fairly new patch. http://jasonnash.com/2011/07/27/read-this-if-you-have-an-emc-celerra-or-vnx-and-osx-lion-clients/
|
# ? Aug 5, 2011 02:26 |
|
warning posted:I don't keep up with this thread so apologies if this was brought up already... Just trying to spread the word on this because we were caught offguard by it today. Yeah saw this a while back, had to inform someone a few months ago that they needed to upgrade before deploying any 10.7. Patch was released almost 6 months ago as the issue was noted in the OSX Lion beta. Happened to quite a few vendors as it's a change in the Apple code.
|
# ? Aug 5, 2011 09:59 |
|
Vanilla posted:Yeah saw this a while back, had to inform someone a few months ago that they needed to upgrade before deploying any 10.7. Patch was released almost 6 months ago as the issue was noted in the OSX Lion beta. I've seen this happen twice so far; one yesterday caused a pretty major outage for a customer. Rather amusing issue if you're not the one having to fix it. I really don't think many EMC customers do FLARE updates on a regular basis; I regularly see Clariions running versions which are years old, so I think we'll be seeing this for a while yet. EqualLogic, on the other hand, have a new firmware every 3 weeks. They've got their customers pretty well trained by now.
|
# ? Aug 5, 2011 11:56 |
|
The Clariion shouldn't need a FLARE upgrade unless it's more than a couple of years old. If the Celerra is prior to 5.6.47.x, you'll need a control station upgrade on it. Otherwise they can just patch the Lion issue. Or tell your Lion users that they can't use CIFS.
|
# ? Aug 11, 2011 22:23 |
|
So I've read the last 35+ pages, but not the whole thread. I was recently given an MD3000 with 14 Barracuda ES 500GB 7200 RPM drives. It has a pair of single-port SAS blades and a single 1Gb iSCSI blade. It's obviously too loud for most people's home use, but occasionally I'd like to fire it up and use it to learn a bit about SANs. Also, it will make a forklift upgrade of my WHS v1 box to newer drives easier, since I'll have lots of space to copy things to; all my WHS drives are >95% full now. I don't even have a dumb GigE switch, as mine died, but is there a recommended, affordable managed gigabit switch for a home user who wants to use iSCSI? Along with the enclosure, my friend also gave me an LSI MegaRAID SAS 8888ELP and a SAS3442E. The RAID card could be added internally to one of my machines, possibly with some of the ES drives or even the older SATA drives from the home server, and the SAS HBA could be used to attach the MD3000 to the WHS machine. I guess I want to see if it's worth my time to learn on this old hardware in a home environment. I should mention some large aggregate storage at work would be beneficial, but this has sat idle for at least a year, so I'm not too gung-ho about using it for business data, even as a temporary backup location. Also, basically all my servers are PCI-X, the riser for the mail server to switch it from PCI-X to PCIe is pricey, and there's just no easy way to shoehorn it into the setup at work. Oddhair fucked around with this message at 00:25 on Aug 25, 2011 |
# ? Aug 24, 2011 19:14 |
|
I'm looking at an EMC VNX 5300. Being a relatively new product, however, I can't find much in the way of details. For example, I can't seem to find one massive PDF explaining everything I could ever want to know about the SAN. The last storage vendor I was looking at was IBM, and they had enough documentation to choke a horse. Due to political reasons, we are not getting the rebadged FAS3240. I'm looking at roughly 12TB raw SAS (VMs) and 40TB raw SATA (file storage). There are no incredibly heavy hitting VMs, we don't run a mail server, or high use DB server. I was thinking of NFS for vSphere, and possibly CIFS for file storage.
|
# ? Aug 25, 2011 20:32 |
|
quackquackquack posted:I'm looking at an EMC VNX 5300. Being a relatively new product, however, I can't find much in the way of details. For example, I can't seem to find one massive PDF explaining everything I could ever want to know about the SAN.
|
# ? Aug 25, 2011 21:03 |
|
http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf has a lot of details. I thought I had a flash drive from EMC World that has data sheets on every product, but I can't find it here in my office. Is anyone excited about the new 3PAR P10000? V400 and V800 submodels look awesome, no SAS drives though, and no 10Gbit out of the gate. Peer Motion looks sweet though.
|
# ? Aug 25, 2011 21:49 |
|
complex posted:http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf has a lot of details. (Channel your inner 80s Australian) That's not documentation. THIS is documentation: http://www.google.com/url?sa=t&sour...rL_7TnzQkteFWAA
|
# ? Aug 25, 2011 21:58 |
|
quackquackquack posted:(Channel your inner 80s Australian) That's not documentation. THIS is documentation: http://www.google.com/url?sa=t&sour...rL_7TnzQkteFWAA They were even nice enough to include something to cut out and stick in the spine of your half-inch 3-ring binder so you can easily identify what it is you just wasted 160 pages on.
|
# ? Aug 25, 2011 23:41 |
|
quackquackquack posted:I'm looking at an EMC VNX 5300. Being a relatively new product, however, I can't find much in the way of details. For example, I can't seem to find one massive PDF explaining everything I could ever want to know about the SAN. Step-by-step guides to most of the things you'd want to do: http://www.emc.com/vnxsupport The Related Documents section gets pretty thorough. Pantology fucked around with this message at 01:53 on Aug 26, 2011 |
# ? Aug 26, 2011 01:49 |