Vanilla
Feb 24, 2002

Hay guys what's going on in th

Misogynist posted:


Of course, I might be accepting a new job tomorrow, in which case I'd be learning the EMC side of things (particularly as it relates to Oracle). Having more experience never hurts :)

Symmetrix or Clariion user?

Vanilla
Feb 24, 2002

Hay guys what's going on in th
So I have a ton of perfmon stats from a certain server.

What tools do you use to analyse these? I know there's the Windows Performance Monitor tool but I've found it a bit 'hard'.

Do you know of any third-party tools for analysing perfmon outputs?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

soj89 posted:



Bottom line: what's the best RAID type to put in place? What about the controller type? From what I've been reading, it seems like RAID 1+0 is preferred over RAID 5 in any case. Would an i7 quad with 8 GB be overkill for the main file server? Especially since it's not particularly I/O intensive.

If it's a file server (typical 75% read, 25% write) and low IO I would go for RAID 6.

RAID 6 hurts on writes due to the double parity calc but given you are expecting low IO I would take the HA benefits of RAID 6.

Read IO is the same for R1 and R6 if you're looking at the same number of drives.
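
To put rough numbers on that write-penalty trade-off, here's a minimal sketch (Python) using the rule-of-thumb penalties mentioned elsewhere in the thread (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6 back-end IOs per host write); the 1,000 host IOPS and the exact mix are just assumed figures for illustration.

```python
# Rough back-end disk IOPS generated by a 75% read / 25% write workload under
# different RAID write penalties. All figures are illustrative assumptions.

def backend_iops(host_iops: float, read_fraction: float, write_penalty: int) -> float:
    """Back-end IOPS = reads (1 IO each) + writes * RAID write penalty."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * write_penalty

HOST_IOPS = 1000       # assumed host workload
READ_FRACTION = 0.75   # the typical file-server 75/25 mix mentioned above

for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(f"{name}: {backend_iops(HOST_IOPS, READ_FRACTION, penalty):.0f} back-end IOPS")
# RAID 10: 1250, RAID 5: 1750, RAID 6: 2250 - the double-parity penalty is why
# RAID 6 'hurts on writes', but at low IO the extra load rarely matters.
```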

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Misogynist posted:

This piece is wrong, wrong, wrong for some I/O profiles, especially as your array encompasses more and more disks.

In a RAID-1 (or RAID-0+1, etc.) array, if you're doing a large sequential read, the heads on both disks in the mirror set are in the same place (and passing over the same blocks), so you get zero speed benefit out of the mirror set even though you're technically reading from both disks at once. Versus RAID-0, your throughput is actually cut in half. For large sequential reads, your performance will almost always be better on an array without duplicated data, because you get to actually utilize more spindles.

With RAID-5/6, you do lose a spindle or two on each stripe (though not always the same spindle like with RAID-4) because you aren't reading real data off of the disks containing the parity information for a given stripe. This implies that for random workloads with a lot of seeks, RAID-0+1 will give you better IOPS.

RAID-5/6 for read throughput, RAID-0+1 for read IOPS.

It's a file server, so the IO profile is likely going to be highly random read, cache miss. The caveat is that it may be some kind of specific file server (CAD, images) dealing with huge files, in which case a boost in sequential read would be beneficial, as you note.

I do disagree that you lose a spindle or two with each stripe: the parity is distributed and does not consume whole drives, so all drives participate in random read operations without penalty, and worst-case random read IO for a 10k spindle is about 130 IOPS. However, I may be totally wrong here because I'm not factoring in the data stripe itself, but I've never seen it as a factor in IOPS comparison calcs!

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Jadus posted:

I'm curious what the general thought is when warranty expires on a SAN.
Let's say my company completely invests in a P2V conversion, purchasing a SAN with a 5 year warranty. After those 5 years are up, the SAN itself may be running perfectly, but the risk of having something like a controller or power supply fail and then waiting multiple business days for a replacement is pretty high.
I can't imagine it's typical to replace a multi-TB SAN every 5 years, and warranty extensions only last so long.
I suppose after those 5 years it may be a good idea to purchase a new SAN and mirror the existing one, for a fully redundant infrastructure, and then only make replacements on major failures.
What is normally done in this situation?

As already mentioned, the typical refresh rate is between 3 and 5 years. This goes for little SANs and also huge multi-petabyte arrays.
This involves massive amounts of work to migrate the arrays over.

This is typically because maintenance costs for anything over 5 years are horrifically expensive. I've seen 10-year-old arrays still maintained because the business has decided that it's too risky to move the key apps, and they'd happily pay through the nose just to not have to address the problem for another year.

Vendors want you refreshing arrays because it means money, but at the same time they really don't want to support old kit. It means more old spare parts, skills, etc., and the simple fact that as time goes on more parts are likely to fail.

Also, 3-5 years in the storage world is HUGE. The difference in capabilities between an array today and an array three years ago is massive. You'd want to refresh anyway to take advantage of the latest and greatest.

Don’t ever let contracts expire and try to take on support yourself. When it all goes wrong you need that vendor support because at the end of the day if you don’t meet expectations someone high up is going to say ‘you did WHAT?’..........and remember it’s not your money :)

oblomov posted:

I keep waiting for someone other than Sun to do a decent SSD caching deployment scenario with a SATA back-end. NetApp PAM cards are not quite the same (and super pricey). EMC is supposedly coming out (or maybe it's already out) with SSD caching, but we are not an EMC shop, so I am not up to date with all EMC developments.


Out already. It’s called FAST Cache and involves basically using Solid State drives as an extension of array cache, for both read and write. I think it boosts the cache on an array up to 2TB.

1000101 posted:

EMC sort of does it via what they call FAST; specifically with "sub LUN tiering."

Essentially they break the LUN up into 1GB pieces and promote data to its relevant spot. Where data lands depends on frequency of access. I dunno if writes default to SSD though and I don't have a Powerlink account to get the information.

That's a different feature to the caching mentioned above :)

Sub-Lun tiering will work out which portion of the LUN actually needs the performance and they will push that up to SSD or fast disk. Anything that doesn’t need the performance can get dumped down to SATA to save money.

Writes would be treated the same as they are today: they hit each cache and are then acknowledged to the host, then get pushed to disk.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

complex posted:

Anyone have any experience with 3Par gear? We're looking for alternatives to our NetApp. We use Fiber Channel.

Good kit. Simple to use, some good features, does the job but expensive - the cost is between mid tier and high end. Those that use it love it.

They get great survey ratings. Those industry surveys that ask if you'd use your current vendor again - 3PAR get 100%.

Also - 3PAR is now owned by HP just FYI.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

dj_pain posted:

So goons, I managed to get my hands on a Dell/EMC AX150i pretty cheap with 12x750GB drives. Now, looking at the price of 1TB/2TB drives, can I use larger drives in this SAN? And yes, I have been googling this :(

The highest I can find on a spec sheet is 750GB

http://japan.emc.com/collateral/hardware/data-sheet/c1111-clariion-ax150-ax150i.pdf

I'll have a look next week though.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Mako posted:

Does anyone happen to know anything about DataDomain licensing?

We have a couple of DDs that have small (250GB) drives in them. Is it possible to put larger drives (1TB, for example) in and be able to use them all in the filesystem?

Do I have to buy the drives FROM Data Domain or can I just get any old drive from Newegg?

You'd have to buy them from Data Domain; like many vendors, DD/EMC only let you put their own drives in the array.

Not saying it wouldn't work - I have no idea about that side of things - but if they found the drives (by coming to swap another drive, or by the Newegg drive messing up the array), your maintenance would be void.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Intraveinous posted:

I didn't bother with it, but I heard something about them stuffing a bunch of hook^H^H^H^H dancers in a Mini Cooper, and jumping a motorcycle over a bunch of Symmetrix cabinets... Sounds like a lot of fluff with not much substance to me.

That said, having not watched it, I don't know if there was any substance, but if I'm going to take my time to watch a webcast, I want to know what the product is, what it does, and how it does it. If I wanna watch stunts, I'll do that another time.

There were something like 41 new products announced, some big, some insignificant. I think later on they were obviously just padding the count for the sake of a bigger number.

High End

--Sub-LUN Fully Automated Storage Tiering announced for the VMAX. Moves data around in 7.5MB chunks between solid state drives, FC, and slower 1TB or 2TB drives. Been waiting about 18 months for that one!

Mid Range

--New mid-range arrays – the VNX range. Mostly a hardware upgrade: up to 1,000 drives, new FAST Cache limits of up to 2TB, new 6-core Intel CPUs, 6Gb SAS back end.
New software features such as application provisioning. For example, give it Exchange info such as mailboxes, size, etc. and it will provision the disk based on MS best practice.

Built in GUI tools such as power output, etc.

Low end

--New low-end arrays – the VNXe range, starting at sub-$10k. Up to 120 drives. Cheap, lots of good bits.

Data Domain / Backup

--New Data Domain range – DD890 and DD860 appliances. Bigger and faster than the previous range. Can store up to 14.2 petabytes logical at a rate of almost 15TB per hour.

--Also the new DD890 as a global dedup array. Up to 28.5PB logical at 26TB per hour.

--Native support for IBM iSeries announced.

--Data Domain archiver announced. Now use the DD as long term archive in addition to backup – directly archive through CIFS or NFS.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

three posted:

From using some of the IOPS calculators, am I correct in that 10K SAS disks in RAID-10 will provide better IOPS than 15K SAS disks in RAID-5?

Depends, but yes, it's quite possible.

In a read environment you can get around 130 IOPS (worst case) of random read IO from a 10k disk and 180 IOPS from a 15k disk.

However, once you add writes into the equation you are upping the number of back-end IOs. Each write in RAID 10 creates 2 IOs (a write to one disk, then the other). Each write in a RAID 5 environment means 4 IOs (read the old data and old parity, then write the new data and new parity).

So in a RAID 5 environment you may have faster disks, but it's possible a lot more back-end IOPS are consumed by the RAID 5 overhead.
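
As a worked example of those numbers, here's a minimal sketch (Python) of the usual IOPS-calculator arithmetic. The per-disk figures are the rough worst-case ones above (130 IOPS for 10k, 180 for 15k); the 8-disk group and 70/30 read/write mix are assumptions for illustration.

```python
# Front-end (host) IOPS a disk group can sustain, given per-disk IOPS and the
# RAID write penalty. Disk counts and workload mix are assumptions.

def host_iops(disks: int, per_disk_iops: int, write_fraction: float, write_penalty: int) -> float:
    raw = disks * per_disk_iops
    # Each host read costs 1 back-end IO, each host write costs `write_penalty`.
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

WRITE_FRACTION = 0.30  # assumed 70% read / 30% write mix

print("8x 10k SAS, RAID 10:", round(host_iops(8, 130, WRITE_FRACTION, 2)))  # ~800
print("8x 15k SAS, RAID 5: ", round(host_iops(8, 180, WRITE_FRACTION, 4)))  # ~758
# With enough writes in the mix, the slower disks in RAID 10 edge out the
# faster disks in RAID 5 - the effect described above.
```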

Vanilla fucked around with this message at 12:03 on Jan 26, 2011

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Hey guys, I came across someone asking for 'server offload backup solutions'. I've never heard this terminology before - are they talking about clones for backup?

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Thanks for the reply guys, I understand what they're after now.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Badgerpoo posted:

Currently one team does both the SAN switches and the storage, it's looking like the SAN switches would come to the Network team as we know how to connect things together properly. I suppose we would be classified as a medium/large enterprise? I have no experience with SAN/NAS systems so am wondering if doing this breaks something crucial in the whole process of running a cohesive service.

It's uncommon, as it normally falls to the storage guys.

However, I do know one major bank that does it this way. Networks look after both Ethernet and FC networks. FC is a piece of cake compared to IP anyway, and this does put them further along the road to converged networking, FCoE, etc. A lot of the push back I see on converged networks is simply down to internal staff politics.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Misogynist posted:

I dream of a day when all SANs do nothing but dynamically tier blocks across storage and anything more complicated than "I want this much storage with this class of service" all happens silently.

Edit: Basically Isilon I guess

It's what a lot of vendors are trying to do, hence the whole vBlock style approach to hardware that is VERY popular right now.

One block purchase and one interface to code against lets people create a portal that allows their apps teams to get their server, storage, DB and network 'automatically'.

Naturally some arrays are already doing the block based tiering (EMC, IBM, HDS, etc).

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Spamtron7000 posted:

Does anyone have straight information on EMC FAST and the element size? My understanding is that data tiering within a Clariion using FAST only occurs on a 1GB element size - meaning that moving a "hot spot" really means moving a 1GB chunk of data. I'm not a storage guy but it seems to me that this sucks for applications like Exchange and SQL that do a lot of random reads of 4k, 8k and 64k blocks, and that it would just result in unnecessarily dragging a lot of 1GB chunks around from array to array. Is this a conscious decision by EMC? Are they working on decreasing the element size or allowing some kind of manual configuration (ideally, per array)? Is it even worth considering enabling a FAST pool with SSD for anything other than sequential data?

For Clariion / VNX it is indeed 1GB chunks.

For VMAX it's 768KB chunks, BUT it moves these in groups of 10 (so ~7.5MB chunks).

I think it's down to array performance. More chunks mean more metadata to store and analyse, and the array already has enough to look after - even at night, when it's expected to move the chunks, the backups are running, batch jobs are kicking off, etc.

1GB chunks are still pretty small - compared to a 2TB drive that's a fraction of a percent - and I'm seeing some BIG Clariion arrays out there. That's a lot of chunks to look after...
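
For a feel of the scale, a quick back-of-the-envelope calc (Python; the 200TB pool size is just an assumption):

```python
# How many 1GB tiering chunks an array has to track. Pool size is an assumption.
GB_PER_TB = 1024

chunk_gb = 1      # Clariion/VNX relocation granularity
drive_tb = 2      # a 2TB high-capacity drive
pool_tb = 200     # assumed large pool

print(f"one chunk vs one 2TB drive: {chunk_gb / (drive_tb * GB_PER_TB):.3%}")          # ~0.049%
print(f"chunks to track in a {pool_tb}TB pool: {pool_tb * GB_PER_TB // chunk_gb:,}")   # 204,800
# Each chunk carries access statistics the array has to store and analyse,
# which is the trade-off behind keeping the granularity at 1GB.
```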

Hopefully with hardware enhancements this size will come down; granularity is always good as long as it can be handled.

The way I look at automated tiering is not from a performance improvement perspective but as a way to dump all that inactive data down to 2TB drives. If you're looking for an all-round performance boost, stick FAST Cache in there (SSDs used as cache); if you're looking to reduce costs, stick FAST on.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

conntrack posted:

I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups.

I know if I bought 2TB of SSD space for all that "automatic right performance" joy, the nightly batch processing system that just happens to be 2TB would push out the interactive systems. That system goes balls out after hours and in the day others take over during the shorter office hours window. Making a profile for that might be interesting.

You set the window of observation. For most places this would be 9-5 and everything overnight (backups etc) is ignored.

Additionally you can choose to ignore some LUNs, lock a LUN in place if the user would prefer a set performance level, etc.

Talking about EMC, not sure about all vendors.

quote:

The migration of blocks between the tiers takes some time after profiling. Just because you get some amount of SSD storage it still has to be right sized for the working set. I'd be interested in when work spent and dedicated 15k spindles costs more than autotiering and SSDs. From the white papers and sales guys it's hard to get solid info. "you started saving money just by talking to me now, wink wink".

It's the utilisation of larger 1TB and 2TB drives that generates the cost savings (in addition to reduced footprint, power, cooling, etc). This is why I see automated storage tiering more as a money saver than a performance improver.

A little SSD, a few slivers of fast disk and a ton of SATA. I've already seen it in action 10 times, because people have just been using fast FC drives for all data and it isn't needed.

Eventually the small, fast drives will go away, SSDs will be cheaper and it will be all SSD and large 2TB+ drives.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

complex posted:

First, before allowing anything to happen automatically, some systems will allow you to run in sort of a "recommendation" mode, saying "I think this change will be beneficial".

Also, if your tiering system does not take time-of-day changes (or weekly/monthly, whatever) into account, then of course it won't be able to adapt to cyclic events like you describe.

I always avoid the recommendation modes. There used to be an EMC Symmetrix tool called Symm Optimizer that would automate array performance balancing but on a 'dumb level' compared to the sub-LUN tiering we have today. It's been around for about 7/8 years.

It would move hot spots around to balance the array, it had to be the same RAID type, it moved whole hypers, it needed some swap space, etc.

Anyway, those that used recommendation mode never actually went through with the moves, as they were either too busy to even look at the array or didn't want to make changes in an enterprise environment.

Those that left Symm Optimizer to do its thang had an awesomely balanced array without hot spots.

These arrays have got so much bigger that people just have to bite the bullet and leave them to automate these things - there's a reason why it's called automated storage tiering and people have even less time today....

Vanilla fucked around with this message at 09:12 on Mar 4, 2011

Vanilla
Feb 24, 2002

Hay guys what's going on in th

conntrack posted:

You mean getting the same effect as giving the applications dedicated spindles? :)

If by the same effect you mean performance predictability, then yes. I've seen it once already on the Clariion, where automated tiering was enabled and the application owner saw some good performance improvements. For example, a run dropped from 50 minutes to 30 minutes because many key parts of the DB moved up to SSD.

However, the problem was the next day the run took 35 minutes, then 25 minutes the day after, then 28 minutes, and so it see-sawed for weeks. This is because some blocks get demoted and given to other apps.

They didn't like the see-sawing but wanted to keep the performance improvements - so they had the LUN locked in place at the current SSD/FC/SATA levels and got a generally consistent run time.


quote:

That depends on who you talk to, I personally share your view in this matter. A lot of people see it as a way to fire all those crusty storage guys though.

So generally people only give a select few important applications a portion of SSDs (because there are typically few SSDs in comparison to other drives, as they're so drat expensive right now). This means a few guys see performance improvements......but if they're the critical apps then it's all good.

Most LUNs don't see too big an improvement because all that is happening is that their inactive data is being dumped lower. However, it's very rare they will see any kind of performance degradation.

I don't see it as a way to fire all the crusty storage guys (that's cloud :) ) but as a way to free up the time of the storage guys. We all have way too much storage - someone with 250TB and a growth rate of 30% has almost 1PB after 5 years. We can't give 500TB of storage the same love and attention we did when it was only 50TB so we have to let the array do it.

quote:

Why doesn't the VMAX virtualize external storage? Anyone?

EMC has never seen an array as a technology to virtualise other arrays. There are a lot of questions, issues and downsides to such an approach.

EMC has Invista, which is a joint approach with Cisco and Brocade. Basically the virtualisation was done at the fabric layer in the blades.

Quite a few customers use it but it never really went mainstream due to a number of factors including other technologies that were coming up. By that I mean the main use case for Invista was data mobility without host downtime - this can now be done with PowerPath (host based), VMAX Federation (one VMAX to another without the host knowing) and so on. There were many ways to skin a cat.

The latest storage virtualisation technology is called VPLEX.....although the storage virtualisation is just a side show - the main use is to provide active/active data centers.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

conntrack posted:

The cost of "enterprise sata" sort of takes out the "save" part in "save money", so virtualizing midrange is looking better and better.

Edit: If the vmax would get the external capability i would definitely look more in to it.

It all depends on what you are trying to achieve.

30TB of SATA is a lot cheaper than 30TB of 450GB 15k on an enterprise array, more so once you factor in fewer DAEs and the surrounding hardware.

If you had two configurations - one with 100TB of FC storage using 450GB 15k drives and the other with 100TB of SSD, FC, SATA and auto-tiering - the second would cost less and likely provide more IOPS.

Consider the downsides to virtualisation through an enterprise array:
- Enterprise cache and FC ports are not just for your tier 1 apps but are also being given over to data that's just being fetched through the array.
- What happens if a 1TB LUN is corrupt and needs to be restored? You want me to push that data through my enterprise array in the middle of the day?!
- Limited use cases - look at the support matrix; only a few things are actually recommended and supported for this kind of virtualisation (archive, backup, file)
- etc

So it all depends on what you are trying to do - why does the array need to be virtualised for you?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

H110Hawk posted:

To give you an idea of the other end of the spectrum BlueArc tried to do this with their Data Migrator which would offload files from Fast -> Slow storage based on criteria you set. This happened at the file level, so if you had a bajillion files you wound up with a bajillion links to the slow storage. I'm not saying one way is better than the other, or one implementation is better than another, but there are extremes both ways with this sort of thing.

I for one would bet EMC has it less-wrong than BlueArc. Is their system designed for Exchange datastores? Is there a consideration in how you configure Exchange to deal with this?

File and block tiering are two different things.

EMC VNX can automatically tier file system LUNs just like normal block LUNs.

However, there is a tool called File Management Appliance. Based on policies (last accessed, size, last modified, etc) the appliance will scan the file system and move individual files from fast spindles to slow spindles.

Then it can move the files again to an archive, such as on Data Domain. It's not just an EMC tool - it can do the same for Netapp (well, the archive piece).

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Intraveinous posted:


Right now I've got budgetary quotes in hand from 3PAR and NTAP, and expect a Compellent one soon. Haven't talked to EMC yet. Anyone else I should be talking to? Am I making too big a deal out of the differences in management?

I think you are making too big a deal out of it. You'll be happy with any of those.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

H110Hawk posted:

Slick. I remain blessedly ignorant of how Exchange works. Do you have to tell it how to behave for 3-tier, or does it expect that to be handled behind the scenes by the block device?

It does it itself - it's come a really long way over the years. Like most apps, you still have to size it appropriately.

Exchange 2003 = nightmare. The number 1 application I saw people dedicate spindles to - it hates latency and doesn't share nicely with other apps.

Exchange 2007 = better, but still bad.

Exchange 2010 = really good. Just rolled out a 20,000-seat Exchange environment on 1TB SATA drives. Most 2010 environments are capacity bound rather than disk performance bound, so using SATA drives really helps.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

szlevi posted:

Moving data at sub-LUN vs file system level is a good question - personally I'd go with file system instead of sub-LUN due to the fact that the latter has no clue about the data but FS-level tiering at least doubles the latency and that's not always tolerable. Recently I met with F5 and I saw a demo of their appliance-based ARX platform and I really liked the granularity and added latency wasn't bad at all but the cost is just crazy, I don't see it ever making it into the mainstream... it was much more expensive than EMC's DiskXtender which is already in the ripoff-level price range.

The benefit with file-based tiering is that you can do it simply based on policy around metadata already contained in the file system. If a file hasn't been accessed in 6 months, move it down a tier. If it still hasn't been accessed after 9 months, move it to a suitable archive platform - the file metadata itself gives us a load of information which can be used to help move data around.
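
A minimal sketch of that kind of policy scan (Python), classifying files purely on last-access time. The 6/9-month thresholds match the example above; the share path and tier names are hypothetical placeholders, and it only prints rather than moves anything.

```python
# Classify files for tiering using only filesystem metadata (last access time).
# Thresholds follow the 6-month / 9-month example; paths and tiers are hypothetical.
import time
from pathlib import Path

DAY = 86400
NOW = time.time()

def tier_for(path: Path) -> str:
    age_days = (NOW - path.stat().st_atime) / DAY
    if age_days > 270:    # ~9 months untouched: candidate for the archive platform
        return "archive"
    if age_days > 180:    # ~6 months untouched: move down a tier
        return "slow-tier"
    return "fast-tier"    # recently accessed: leave it where it is

for f in Path("/mnt/fileserver/share").rglob("*"):  # hypothetical share path
    if f.is_file():
        print(tier_for(f), f)  # a real tool would move the file or leave a stub
```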

Sub-LUN tiering has no clue about the data simply due to its nature. The array has no idea and just moves things based on patterns it sees and policies you set.

If you just want to move files around then opt for lightweight tools that can move files and even leave stubs, without being intrusive to the environment. The F5 stuff is full-on file virtualisation and literally sits between the users and the files (i.e. just try and rip it out in a few years) - if it's file tiering you want, look elsewhere for similar functionality that is easier to deploy.

Use F5 for its other fancy file virtualisation features such as global namespace and easy migrations - the file archiving stuff is a side show :)

On the latency side, I've never cared if something increases file latency. Such latency is completely invisible to your general file users and only becomes an issue if you have an application that trawls through thousands of files.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Netapp question guys.

What's the write penalty on RAID DP?

I.e. RAID 1 = 2 IOs per write, R5 = 4, R6 = 6, but I've no idea what it is with RAID DP. I know there are two parity drives but I've no idea of the actual penalty.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Been a loooooong time since I touched on iSCSI so I have some 101 questions which I know the answers to but things change so worth asking!

-- With iSCSI it's still OK to just use the server's standard NICs, but you need an iSCSI TOE card if you want to boot from an iSCSI SAN?
-- All you need with iSCSI is the MS Initiator, which is free.
-- Generic IP switches are fine - or do they need certain 'things'?

Anyone know some free, accepted tools for copying LUNs off of DAS onto arrays? Robocopy still liked?

Any general rules for someone moving from just DAS onto an iSCSI array?

Vanilla
Feb 24, 2002

Hay guys what's going on in th
So with all the big vendors the salesman is totally in control of the price. The price he will submit depends heavily on the situation and you should never believe the 'best and final' line - It's bullshit.

Are you a huge Netapp fan with money? Hell you'll get a little discount.

Are you a non-Netapp account and in the middle of a refresh which may feature a 'price price price' vendor like Dell? You'll get a great discount. Netapp are under a lot of pressure - they have one revenue stream, a huge growth rate to meet and Wall Street to please. They will drop the price.

When Dell come up against EMC / Netapp they know they don't have the feature set to match so they just drop the price because often price is the biggest factor.

Some common sense tips:

- Research when end of quarter is for each vendor. You will get a better price the closer you leave it to their EoQ.
- Involve multiple vendors in the process and even inform the vendors who you are looking at (but share nothing more).
- When you've reached the point where you think you can't get the price down any more, that's fine - move on to other things. You got a great deal - why not secure the same discount levels for your upgrade costs for the next 3 years? This is actually really easy to do because sales people don't see / care that far ahead. They just want that deal.
- Don't believe the 'only valid for this quarter' crap. If you turned up three days into a new quarter and said 'I want to buy this now' they're not going to risk you going elsewhere. If they play hard, play hard back.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
I've always been very dubious of the 'treat snapshots as a backup' line. The safest method is backing up using a different medium (tape, backup disk, backup appliance).

Replicating it to another array isn't going to help you if a disgruntled employee wants to cause some damage or a hacker gains access to your arrays, a la Distribute IT:

http://www.theregister.co.uk/2011/06/21/hacks_wipe_aus_web_and_data/

Get tape in there somewhere and get it going to tape regularly!!

Vanilla fucked around with this message at 00:44 on Jul 16, 2011

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Xenomorph posted:

Ok, BACKUPS.

How do people with dozens/hundreds of terabytes handle backups? Multiple backup drives? HUGE backup drives? What about bandwidth? We back up over the network. Do you have backup drives attached locally to servers? Does each server get its own backup unit?

So when it comes to file backups I see a number of strategies.

1.) Archive. Archive anything that has not been modified or accessed in 30 days. Stick the archive on the other array and replicate it. Don't back it up (anything archived hasn't been modified so it will already exist in the backups).

This means you only back up the active data and not the stale data. This ends up being a fraction of the backup and also means people can access the older files, as it's an online archive.

Check out Enterprise Vault from Symantec.

2.) File-deduplication-based backup technologies such as Avamar, which only send changed data to the backup target. Not only for the big servers - they can also be deployed on laptops and desktops for local backups.

This may be out of your budget but there is a software only edition for smaller environments.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

skipdogg posted:

I'm personally a big advocate of offsite storage. If an unhappy admin went in and wiped out the SAN (which runs all our VMs and storage), and then nuked the DD, and we didn't have tapes, the company would be irreparably harmed. It wouldn't be able to recover.


This is a topic that has been brought up recently due to what happened to the Aussie company Distribute IT.

As far as I'm aware, Data Domain have the ability to dial into the box and recover any backups - it's something initiated by engineering. Even if someone deleted all the backups they could still get them back; feel free to ask them.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

grobbendonk posted:

I'm a team leader for the storage team for a fairly large UK company, and we're currently in the final stage of discussions with three of the larger vendors in the market (block based) to refresh our existing EMC estate (DMX / Clariion / VMAX).

I was wondering if anyone has had any practical experience with an IBM Storwize V7000 midrange disk system? Are there any things to be aware of, or does it do exactly as advertised?


The V7000 is alright. I've no practical experience - only what I've seen and heard. IBM are offering some rock-bottom prices to get footprint, and I think they updated the platform a few weeks ago.

- It doesn't have any kind of compression or deduplication like EMC or Netapp can offer.
- It doesn't offer any kind of built-in file capability like EMC or Netapp (even though you say it's for your block environment, the file capability often comes in handy somewhere over the next 3-5 years!).
- I recall it comes with SVC to allow you to virtualize any old arrays. This requires some effort, but I've not heard of anyone actually bothering to do so.
- As SVC sits over the top, expect any VMware plugins and features to be available for use quite some time after the actual VMware release (12 months+).
- It doesn't have the platform maturity of other vendors' offerings. Lacks many things which the others have.

Might be a contender for the Clariion footprint, but certainly not your DMX/VMAX. The new EMC VNX range is very good (I'm biased), so consider using the IBM proposition to beat down the price.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

skipdogg posted:



How about I take the drives out of the DD and smash them, do they have an answer for that? I'm a really really angry admin.

No, they'd like you to do that because they get to sell another one.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

warning posted:

I don't keep up with this thread so apologies if this was brought up already... Just trying to spread the word on this because we were caught offguard by it today.

OSX Lion will cause EMC Celerra to panic without a fairly new patch.

http://jasonnash.com/2011/07/27/read-this-if-you-have-an-emc-celerra-or-vnx-and-osx-lion-clients/

Yeah, saw this a while back - I had to inform someone a few months ago that they needed to upgrade before deploying any 10.7. The patch was released almost 6 months ago, as the issue was noted in the OS X Lion beta.

Happened to quite a few vendors as it's a change in the Apple code.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
So remember the VNXe range and the VNX are quite different, so that blog won't be entirely valid for the VNX.

Anyway, with regards to software upgrades it's not a power down of the array but an in-turn reboot of the storage processors. This is necessary for some larger software updates.

SP A reboots and SP B takes over. SP A comes back online and then SP B has the patch applied and SP A takes over until B is back. I/O continues and the array stays online.
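
Purely as illustration, the sequence reads something like this sketch (Python); the helper functions are hypothetical stand-ins for the array's own tooling, not a real API.

```python
# Illustrative sketch of an in-turn ("rolling") storage processor upgrade: the
# peer SP serves I/O while the other is patched. Helpers are hypothetical.

def rolling_upgrade(sps, trespass_luns, apply_patch, is_healthy):
    """Patch one SP at a time so the array stays online throughout."""
    for sp in sps:
        peer = next(p for p in sps if p != sp)
        trespass_luns(src=sp, dst=peer)   # peer takes over this SP's LUNs
        apply_patch(sp)                   # apply the update and reboot the idle SP
        if not is_healthy(sp):
            raise RuntimeError(f"{sp} did not come back healthy - stop and investigate")
        trespass_luns(src=peer, dst=sp)   # hand its LUNs back before moving on

# Example call (helpers would come from the array's management tooling):
# rolling_upgrade(["SP A", "SP B"], trespass_luns, apply_patch, is_healthy)
```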

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Internet Explorer posted:

So I guess the bottom line is does EMC support doing software updates on a SAN in production while I/O traffic is still flowing to it?

Absolutely, non-disruptive upgrades are a key selling point of most vendors.

Naturally you want to pick a time when traffic is relatively low (10pm rather than 10am), but it's very, very rare that an array needs to be fully cycled off to apply a software update. If that were the case then something has gone wrong.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

amishpurple posted:

Just want to chime in and suggest to anyone looking at purchasing a new SAN to look at Compellent (now owned by Dell). We bought a Compellent with 156 disks back in July and I've been so happy with it. Coming from an EMC CX3-40 there isn't a drat thing I miss about EMC or NaviSphere. Dell is so hungry to sell Compellent that you can pick one up for an incredible price too.

Navisphere is pretty much dead; a free upgrade to FLARE x would have put you on Unisphere, which is like a bajillion times better (edit: then again, not sure if the CX3 could have gone there)

http://www.youtube.com/watch?v=mACVdai9YwE

Do Compellent have the new 64-bit code out yet, out of interest?

Vanilla fucked around with this message at 00:53 on Sep 11, 2011

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Just FYI, I highly, highly recommend some FAST Cache in general - make sure it's part of your selection, as it's great.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

ZombieReagan posted:

Definitely, this...FAST-Cache will help keep you from having to add more drives to a pool just for IO most of the time. Just be aware that you have to add the SSD's in pairs (mirroring), and as far as I can tell you have to destroy the FAST-Cache group in order to expand it. Shouldn't be a major issue, just do it during off-peak times.

Still beats NetApp PAM-II cards accelerating reads only, and having to take a controller offline to plug it in. :cool:

Indeed, I was looking at a report last week where the FAST Cache was servicing about 80% of the busy IO without going to disk. Loads of FC drives sitting there at low utilisation - I wish they'd gone for all high-cap drives now!

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Serfer posted:

Things I hate? EMC's lower end hardware won't support FAST cache. It won't even support SSD's at all. It's loving stupid. We need a SAN at every one of our offices, but can't justify spending $45,000 on a higher end SAN for each location. So we have NX4's currently, and would like to eventually upgrade to VNXe, but without FAST cache, it's still a ridiculous proposition.

I'm actually looking at building our own poo poo with Gluster (I really don't want to do this). It will cost roughly the same, but could be 100% SSD. Why can't someone offer something for smaller offices that still have a really high IO need?

EMC is especially anal about flash drives and it's all to do with things like data protection, failure rates, reliability, etc.

The only vendor that churns out an SSD that most of the industry trusts for enterprise workloads is STEC (Zeus IOPS range). However, pretty much every other vendor goes to STEC also - HP, Netapp, HDS, etc. So they have these great drives in high demand and they can't make enough of them - result is a pretty high price.

Price has come down massively, but compared to a VNXe it's just not feasible. Spending $10,000 on a VNXe and then filling it with a few flash drives costing about $10,000 isn't going to make sense today - so it's not offered.

Given time and more SSD suppliers (rumours of a second are floating around) you'll start to see them more and more in different arrays and in greater numbers.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

adorai posted:

And do you use only one? What if the SSD you are using for write cache fails? I know in the sun world it is best practice to run a mirror of two (or more) for write cache.

So the minimum number of SSDs you'd need is three: one RAID 1 pair and one hot spare.

They probably don't cost 10k apiece - it actually depends on the model of array. The price changes so often that it's probably dropping a few percent a month. In another few years we'll be throwing them in like candy.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

incoherent posted:

Get Dell and EMC fighting. EMC took 15k off the top of a 36TB config (600GB 15k and 2TB NL-SAS blend in an EMC VNXe) just because we dropped the EQL name.

Do this. In addition, 80k euros isn't VNXe territory, but it's easily the lower end of the VNX range if you need to go there.
