Pile Of Garbage
May 28, 2007



Anyone here have any experience with FCIP (Fibre Channel over IP tunneling)? I'm going to be working on a project soon that involves merging two physically separate FC fabrics via FCIP for the purpose of volume copy/mirroring. We will be using IBM V7000 SANs connected via FC to IBM SAN06B-R Multi-Protocol Routers which will do the FCIP tunneling.

Any feedback or anecdotal accounts would be awesome. Alternatively I have extensive experience working with IBM hardware, including SANs, so I can answer questions anyone may have.
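
edit: for the curious, the FCIP side of the config on the SAN06B-R (aka Brocade 7800) looks to boil down to creating IP interfaces on the GbE ports and then building a tunnel between VE_Ports. A rough sketch of the Fabric OS commands (the addresses, VE_Port number and committed rate are all placeholder values):
pre:
portcfg ipif ge0 create 192.168.10.1 255.255.255.0 1500
portcfg fciptunnel 16 create 192.168.20.1 192.168.10.1 1000000
portshow fciptunnel all
The second command takes the remote IP, local IP and committed rate in Kb/s, and the same needs to be done on the MPR at the other end with the addresses reversed.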


Pile Of Garbage
May 28, 2007



Misogynist posted:

IBM pricing is all about your vendor relationship; smart shops don't pay retail with IBM. I'm not able to disclose the pricing that we got on our units, but considering the DS8000/SVC featureset (and SONAS if you go V7000 Unified) that you get in the units, IBM has a very competitive midrange offering on their hands.

Seconding this, as I already explained to evil_bunnY in the virtualisation thread. IBM Special Bid prices can be extremely good, especially if they think they can take business away from well-established HP/EMC/Dell customers.

Pile Of Garbage
May 28, 2007



I'm not too familiar with software RAID on Linux either, however I've just had a look at the man page for mdadm and you can use the --examine switch to get details on the components of an array. What output do you get when you run the following:
pre:
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1
mdadm --examine /dev/sde1
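If that doesn't turn up anything useful it would also be worth checking the kernel's view of the array (assuming it has assembled as /dev/md0, adjust to suit):
pre:
cat /proc/mdstat
mdadm --detail /dev/md0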

Pile Of Garbage
May 28, 2007



evil_bunnY posted:

It was v7K. Sorry :laugh:

Have you lodged the fault(s) with IBM yet? I'm interested to see how you get along as my support experience with V7000 hardware has been pretty poor.

The problem with getting support for the V7000 is that the department/team which handles it (Storage Central/zRTS) is one of those "don't call us, we'll call you" departments. All you can do is call the IBM support number, lodge a case with the National Contact Centre and pray that they got your details right and/or you get a callback.

Recently we experienced a "catastrophic failure" with a V7000 cluster that we deployed at a customer's site (Thankfully it wasn't in production yet). Around the end of April I received an alert stating that a PSU in the control enclosure had failed, so I dutifully logged it with the National Contact Centre and started the waiting game.

Then just over 30 minutes later the control enclosure started spewing out heaps of terrifying alerts ("The node is no longer a functional member of the cluster", "Enclosure electronics critical failure", etc.) and then went down hard and became completely inaccessible.

When I got onsite the fans on the control enclosure were running at 100%, the enclosure indicator light on the control enclosure was off and all the HDDs in the control enclosure had the amber fault lights lit. I cold-booted everything and it magically came back up fine.

So basically in the space of 5 minutes the entire cluster went down hard.

The first thing I did was get back onto IBM, which began a saga that lasted almost a month. The fault was escalated from level 2 to level 3 and then to product engineering/development in the UK. There were countless incidents of miscommunication between all levels and we only managed to get a final word on the fault after tearing the service delivery manager a new one.

In the end we finally received word back from development (Via a level 3 engineer) who stated that the failure was caused by a soft error experienced on one of the I2C buses. The control enclosure responded to this error by attempting to perform a hard reset, which failed, resulting in both canisters going down. The fix for this is expected to be included in the v6.4 code release (Release date TBA). However development assured me that the "condition is extremely rare and very unlikely to occur again".

This experience with the Storage Central/zRTS team and development has left quite the sour taste in my mouth. I still love the V7000 and this issue would not deter me from recommending it to others (If they can afford it) however I do not look forward to dealing with IBM support for anything beyond a HDD failure with any V7000 hardware.

Oh and sorry about the :words:. I get carried away sometimes.

Pile Of Garbage
May 28, 2007



Wicaeed posted:

Interesting, looks like Netgear is taking the plunge into Storage Area Networks

http://netgear.com/business/products/storage/ReadyDATA-family/RD521210.aspx#one

The hardware itself looks like a rebranded Supermicro chassis, although I'm curious simply because of how outstanding our lower-end Netgear ReadyNAS has been so far.

Do NOT get me started on Netgear's storage products. One of my boss's mates works at Netgear doing sales and several months ago he convinced my boss to go with Netgear NAS products for the purpose of providing off-site data replication for customers.

We purchased a ReadyNAS 4200 and the idea was that we would run the 4200 in our datacentre and deploy the smaller ReadyNAS Pro units at customer's sites which would then replicate back to the 4200.

Since deploying them we have had nothing but problems and my colleague has almost been driven to insanity from dealing with Netgear support. Things got so bad that earlier this week they flew one of their level 3 technicians over from Sydney to try and sort out all our problems.

evil_bunnY posted:

Lord save us!

That's a shitton of features out of the box though.

That's what we thought as well until we had a meeting with some QNAP reps today who took us through their product line and honestly it shits all over Netgear's offerings (Not sure about price though as we are still working on getting quotes).

Personally I think Netgear went to market far too early when they decided to break out from the consumer/prosumer market into the SMB/large-business space. All of the "new" features in the ReadyDATA 5200 have already been available in QNAP's products for quite some time. Netgear really needed to let their product mature before making their marketing push.

Oh and on the subject of the new ReadyDATA 5200: you cannot use ReadyNAS Replicate to replicate data between the 5200 and any other existing ReadyNAS products. The current line of ReadyNAS products run some flavour of Linux and use ext3 for the filesystem while the 5200s run Solaris and use ZFS, which is why they are not backwards compatible (Or so I've been told).

All the same the ReadyDATA 5200 may be able to fill a small niche. I was told that they expect pricing for a fully-populated 5200 enclosure to come in under $10k which is pretty cheap for a NAS which supports iSCSI and has 10GbE.

Pile Of Garbage
May 28, 2007



Pvt. Public posted:

I concur with what you have said. We were looking at the 5200s but I convinced QNAP to let me test drive one of their 2U units and I've not bothered to even consider anything else since. Amazingly easy to setup, fantastic interface and the drat thing just WORKS. Also, if you want rough pricing for QNAP, hit up CDW. They have most of their units in stock and their price isn't a long way off what you will pay, especially if you already have your pricing adjusted for you company. Also, Newegg has some QNAP stock.

This is great to hear. Both my colleague (The one who was driven to the brink of insanity by Netgear tech support) and I are extremely excited about the QNAP offerings. At the moment we are trying to get some proof-of-concept/demo units from them, which may be difficult as they have limited market penetration in Western Australia. If you can offer any info or advice regarding distributors in the WA or AU region feel free to PM or e-mail me at cheeseDOTcubeATgmailDOTcom.

Pile Of Garbage
May 28, 2007



complex posted:

As for "awful" VAAI support, which primitives are you missing that you wish you had? (Note: This is a test) I found VAAI support to be great.

According to the VMware Compatibility Guide the Nimble CS210 doesn't support any VAAI features in ESXi 5.0 or 5.0u1: http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=san&productid=21009 (Of course that info could be out of date).
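
If you want to see what the host itself thinks rather than trusting the HCL, ESXi 5.x can report VAAI support per-device (the naa identifier below is a placeholder for whatever your LUN shows up as):
pre:
esxcli storage core device list
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx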

Pile Of Garbage
May 28, 2007



Trojan posted:

There's something great to say about how far IBM's Midrange storage has come over the years. The step from FASTT to V7000 was extremely positive. The fact that now I just plug the expansion drawers in and it detects them, and configuration for the drives is a two click process? Insane. A monkey could deploy these things.

I've worked with V7000s in varying configurations for a while now and I agree they are brilliant devices (If you can afford them).

On the subject of IBM midrange storage systems, has anyone had a look at their new DCS3700 and DCS9900 high-density systems yet? 60 HDDs in 4U is pretty drat crazy. Also you can get the DCS9900 with either 8 x FC 8Gb or 4 x InfiniBand DDR host interfaces which is insane.

Pile Of Garbage
May 28, 2007



This talk of configuring iSCSI hosts to accommodate node failover reminds me of one of the reasons why I really prefer FC over iSCSI. Assuming the fabric is configured correctly, hosts will fail over between nodes as soon as they receive an RSCN (Registered State Change Notification). Propagation of RSCNs is pretty much instantaneous in a well-configured fabric, which makes everything extremely tolerant of failures.

Pile Of Garbage
May 28, 2007



Misogynist posted:

http://en.wikipedia.org/wiki/Internet_Storage_Name_Service#State_Change_Notification

Of course, very few people out there actually use iSNS, but the functionality is there for iSCSI initiators.

What is the main reason that people choose iSCSI over FC? The company that I previously worked for always deployed FC SANs so the bulk of my experience is with FC which I came to prefer over iSCSI. The majority of the talk in this thread seems to be around iSCSI devices so I'm just wondering what is the deciding factor to deploy iSCSI over FC.

Pile Of Garbage
May 28, 2007



adorai posted:

cost and simplicity. Why spend the extra for an fc switch and hba when iscsi works just fine?

See that's what I thought the main reason would be: the ability to leverage existing switching equipment. However what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC becomes cost-effective?

On that subject have 16Gb FC HBAs and switches hit the market yet or are vendors still finalising their designs?

Pile Of Garbage fucked around with this message at 12:55 on Aug 17, 2012

Pile Of Garbage
May 28, 2007



evil_bunnY posted:

When your IT crew's never touched FC it makes a lot of sense to not get into it.

Sorry, I'm really not sure what your point is here as the same can be said for iSCSI.

From a configuration perspective I've found FC much easier to configure. I've mainly worked with IBM SAN24B-4 FC switches and SAN06B-R MPRs which are basically re-branded Brocade 300 and 7800 series devices respectively and they are extremely easy to use (Great GUI, very logical CLI and Brocade provides great documentation). Once you understand the basic concepts of configuring a stable fabric you can easily scale that knowledge out. It only starts to get complicated when you start utilising more advanced features like FC-FC routing, fabric merging or FCIP.
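
To give you an idea of the CLI, this is roughly what a simple single-initiator zone looks like on the Brocade/IBM switches (the WWPNs and names here are made-up placeholders):
pre:
alicreate "esx01_hba0", "10:00:00:05:1e:01:02:03"
alicreate "v7000_n1p1", "50:05:07:68:01:40:12:34"
zonecreate "z_esx01_v7000", "esx01_hba0; v7000_n1p1"
cfgcreate "cfg_prod", "z_esx01_v7000"
cfgenable "cfg_prod"
Five commands and the host can see its storage, which is a big part of why I rate the Brocade CLI so highly.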

From my experience with iSCSI there are way more things that need to be considered in even simple deployments (i.e. VLAN tagging for iSCSI traffic segregation, link aggregation, MPIO drivers, jumbo frame support, etc.).

Of course as I said a few posts ago my experience with iSCSI is tiny when compared to my FC experience so feel free to shoot me down.

Pile Of Garbage
May 28, 2007



Xenomorph posted:

...and this? From what I saw, most of the iSCSI devices I looked at had an additional layer in-between: an OS running on the device (Linux) with its own file system, then a file created on that as a "virtual disk" is shared as a volume over iSCSI, which is then formatted by the initiator. The idea of some sort of nested file systems doesn't seem like a good idea to me. Is it making partitions instead?

What you are describing there is a NAS which is able to present shares as iSCSI targets. NASs are file-level storage devices where the disks are formatted with a file system and the device presents the storage as CIFS/NFS shares. On the other hand SANs are block-level storage devices which simply present slices of storage to hosts via iSCSI or FC.

e:fb (Sort of, NippleFloss's post is a bit more abstract)

Pile Of Garbage fucked around with this message at 08:32 on Sep 7, 2012

Pile Of Garbage
May 28, 2007



NippleFloss posted:

FC is more stable because it is a protocol designed from the ground up to provide storage service. It handles flow control better, handles device and link failure better, has lower header overhead, has lower overhead at the packet switching layer, etc....It's a storage protocol from the ground up, not a storage protocol layered on top of a multi-purpose networking protocol that was designed to allow the transmission and routing of large numbers of small data packets to many hosts.

I've always loved FC for exactly this reason but as you already pointed out it always comes down to price. It's always an uphill battle trying to convince the customer or your boss to take the plunge when the alternative is so competitively priced.

Of course it doesn't help when your sales guys insist on choosing 5m LC-LC fibre cables instead of the 1m ones when they draw up quotes (Which was the case at my previous employer). I mean bloody hell, your average 42U rack is just under 2m in height and I doubt you are really going to have your SAN or hosts more than 1m away from the switches. All the extra length does is make cable management a nightmare. Plus the 1m cables are around $50 cheaper.

That's the end of my rant.

Pile Of Garbage
May 28, 2007



luminalflux posted:

1m is usually too short if you want to have any semblance of cable management. All depending on where you put your switches and SAN equip, of course.

Plus, isn't the cost of fibre usually in the termination, not the length?

Yeah you're right. From my experience the SAN controllers and your host HBAs will usually come with SFPs, however you'll still need to buy them for the switches (Note I've only really dealt with IBM gear so it might be different with other vendors).

An 8-pack of 8Gb-SW 850nm SFP+s will cost around $4-5k and depending on what FC switches you are using it can get considerably more expensive from there (Brocade 300-series FC switches usually only have the first 8 ports licensed and you have to pay extra to activate more ports).

edit: 5m may not seem long until you start to rack everything, which is when you end up in a situation like this:


Pile Of Garbage fucked around with this message at 13:12 on Sep 11, 2012

Pile Of Garbage
May 28, 2007



Misogynist posted:

Most of that mess is your too-long, too-thick factory power cables, which can be replaced for $1.50 apiece. Clean that up and you'll have plenty of room to correctly dress your fiber at the side of the rack.

Yeah well that was when I was at my last job and the sales guys never considered the actual racking when putting quotes together. We were IBM partners so I think they just threw together builds in x-config, ran them by our local distributor (Avnet) and then presented them to the customer. I asked several times to be involved in the process but they were always too busy trying to score sales.

I don't want to drag the thread off-topic but I just wanted to mention that the re-branded Brocade FC gear that IBM sells isn't very practical rack-wise. In the middle of the photo are two IBM SAN24B-4 FC switches (Re-branded Brocade 300-series) which are half-length and have the FC+power connectors on the front. However just above them is a SAN06B-R MPR (Re-branded Brocade 7800-series) which is full-length and has the FC+Ethernet connectors on one end and the power connectors on the other :confused: That's why there is 1RU of space above it: so that FC cables can run through from the FC switches to the MPR.

Mierdaan posted:

What power cables do you normally get? I've bought shorter sizes before so I could deal with length better, but I thought the thickness was pretty standard.

I should note that some of the thick cables down the side are 16A C19/C20-terminated cables which connect the UPS at the bottom of the rack to the PDUs further up the rack. Also that picture was taken before I went to town on the whole thing with double-sided velcro.


To try and bring things back on topic before I piss everyone off with my tl;dr derail: earlier in the thread the Netgear ReadyNAS range of devices was mentioned and I voiced my extreme dissatisfaction with their "higher-end" business models like the 4200 (Basically they are shite).

However some time ago I inherited one of the low-end ReadyNAS business/prosumer models (A ReadyNAS Pro 2) and I have to say it's a pretty nifty device. I built a HTPC for my dad a while back and he used to have an external USB HDD plugged into it where all the media was kept. This was slow and clunky, so I grabbed the ReadyNAS, configured the disks as a RAID0 array, created one big share on the array and configured the device to present that share as an iSCSI target. Then I just plugged it straight into the 1Gb Ethernet port of the HTPC, configured the Windows iSCSI initiator and bam: 3.7TB of portable storage that is a heck of a lot faster than USB 2.0.

So yeah, for home use it's an alright device. It took all of 10 minutes to configure it for iSCSI and get it hooked up, and given its portability I can see it being useful in a situation where you need to move a large amount of data between different locations (As in plug it in at site A, copy the data on, take it over to site B, copy the data off).
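
The Windows side really is just a few commands from an elevated prompt if you don't want to use the iSCSI Initiator GUI (the portal address and target IQN below are placeholders, grab the real IQN from ListTargets):
pre:
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.1994-11.com.netgear:nas:vol1
After that it's just a matter of bringing the disk online and formatting it in Disk Management.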

Pile Of Garbage fucked around with this message at 16:08 on Sep 11, 2012

Pile Of Garbage
May 28, 2007



Corvettefisher posted:

We need the USB "SERVER DRIVES" pic, if only I could find it

I haven't seen that picture but I'm imagining a server with a bunch of 4-port USB PCI cards and a bazillion external USB HDDs plugged into it. Please tell me that is the case because it would be hilarious.

Pile Of Garbage
May 28, 2007



Moey posted:

Either that, oooorrrr a thumbdrive array.

http://gigaom.com/mobile/usb_thumb_drive/

I see your USB thumbdrive RAID array and raise you a 3.5" FDD RAID array (Had to use the Wayback Machine as the site is suspended because they didn't pay their bills; note that the site's stylesheet doesn't load on the cached copy so just hit Ctrl+A to see the text).

Pile Of Garbage
May 28, 2007



stevewm posted:

Speaking of USB hardrives...

At my work we use these: http://www.addonics.com/products/saturn_cipher_dcs.php for backup. Each one has a 1TB SATA drive in them. We use 5 of them that get rotated Monday-Friday. Have worked great for us. Much more reliable than the DDS and SLR tapes we used to use, faster too.

Wow, that site and product look pretty old, mainly due to them using beige-box PCs in their stock photos, comparing the size of their product to a VHS tape and using DES/3DES for encryption in the product itself...

I've had a look at the product specs for that device and am somewhat confused. From what I've been able to glean from their website they've reinvented the wheel by creating their own proprietary connector, namely USIB (Universal Storage Interface Bus), which has their fancy-pants 36-pin connector on one end and a USB/SATA connector on the other end.

All that aside, what do you use to back up data to the drives and how do you ensure that the volumes on the HDDs are properly dismounted before you remove them? Also why did you decide to replace your, I'm assuming, existing DDS/SLR tape backup solution with this instead of an LTO tape system?

Pile Of Garbage
May 28, 2007



Moey posted:

That is pretty neat that is has a hardware key. For encryption on our offsite drives, we are just using truecrypt volumes, which have been working great.

Yeah, the hardware key becomes less impressive when you check the specs, which state that the available encryption options are "64-bit DES, 128-bit TDES and 192-bit TDES". Those figures only make sense if you count the parity bits: the actual key size for DES is 56 bits and the sizes for TDES are 112 and 168 bits. Bottom line is that DES is poo poo but I should probably stop posting.

Pile Of Garbage
May 28, 2007



Syano posted:

Huh? How the crap do you get drinking MS kool aid out of me asking if someone can shed some light on the MS storage strategy for server 2012?

I think Corvettefisher may have jumped the gun a bit as you are asking a valid question. Still, I couldn't help grinning when I read "Having your hyper-v stores on an SMB share is pretty awesome too."

According to this TechNet article I'm led to believe that they are recommending the Scale-Out File Server poo poo as a replacement for SAN-based shared storage rather than a complement to it. Then again, I could be wrong as their wording is a bit vague. Edit: disregard that paragraph, I'm an idiot.

One thing to note is that their new "feature" which makes this possible, namely SMB 3.0 "Multichannel", only works when all your servers/clients are running Windows 8/Server 2012, which sort of kills the deal.
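
If anyone does end up testing it, Server 2012 at least makes it easy to check whether Multichannel is actually kicking in via PowerShell (run the first on the file server, the second on a client with an active SMB session):
pre:
Get-SmbServerConfiguration | Select-Object EnableMultiChannel
Get-SmbMultichannelConnection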

Also when I read "Scale-Out File Server" I think of IBM SONAS which really just shits all over Microsoft's idea (Although it costs the same as the GDP of Norway...)

Pile Of Garbage
May 28, 2007



Misogynist posted:

Hey, did I just come upon the only other SONAS user on the forums? :raise:

Unfortunately no. I wish I got to work with SONAS but the closest I've ever come is the Storwize V7000 (Just the normal one, not the Unified). I've read the SONAS Redbook cover to cover though! :shobon:

three posted:

What is the benefit of continuing the traditional SAN architecture?

I would rather have a resilient scale-out infrastructure that uses cheaper technology. Scale-out SANs are already very popular (e.g. Equallogic), so let's go a step further and push that into the server, make it resilient and highly available, and ditch the behemoth SAN architecture. Solid-state drives becoming affordable and easily obtainable makes this idea easier, as well.

Push everything into the software layer.

I've worked with SANs/NASs for several years and I like to think I'm somewhat on top of things but what the gently caress defines a "scale-out SAN"? A quick search on Google has simply led me to believe that it's just another lovely buzzword.

Pile Of Garbage
May 28, 2007




Oh I see what they're saying, despite the stupid names they've used (Although that's probably just my completely irrational Dell hatred talking). I've worked with IBM SVC (SAN Volume Controller) before, which does the same sort of thing (They call it "external storage virtualisation").

edit: vvv ahahahaha love it

Pile Of Garbage fucked around with this message at 06:38 on Nov 9, 2012

Pile Of Garbage
May 28, 2007



the spyder posted:

I racked one of my new internal 336TB (raw) ZFS SAN's this week and realized that my field engineers are not configuring hot spares.

I feel your pain. Some time ago at my previous employer I found that one of my colleagues wasn't configuring hot spares and, to make matters worse, was not configuring automatic alerting on the SAN. This feat of retardation was discovered when two HDDs in a RAID5 array on the SAN died, taking a whole LUN offline. We only discovered the failure by accident 3 months after it had happened and miraculously no production systems were harmed (The LUN was a VMFS datastore and the VMs on that datastore were not in production).

It shits me to tears when people just piss all over best practice. I had a few choice words to say to my colleague after that incident.

Pile Of Garbage
May 28, 2007



Misogynist posted:

Good to see management also wasn't auditing the monitoring on your tier 1 systems.

Well management wasn't actually auditing anything at all which is part of the reason I left that outfit.

Pile Of Garbage
May 28, 2007



Misogynist posted:

One of our NAS devices decided to randomly lose about 10-15 TB of data today out of a share. Not worried about it even a little bit.

Wow, do you know what caused it? What make/model is the NAS?

Pile Of Garbage
May 28, 2007



Misogynist posted:

I wish I could just delete everything; it would make my job a lot easier. We've had a sync running to our IBM SONAS so we can have twice as much of this delicious data!

Unngghh you lucky bastard! I've always wanted to work with an IBM SONAS setup. I went to a SONAS workshop hosted by IBM a while back and was quite impressed by the features and capabilities of GPFS. What do you think of it? Does it live up to all the hype?

Pile Of Garbage
May 28, 2007



underlig posted:

Lost password on an IBM DS3512. This link http://joeraff.org/tag/ds3512/ refers to a console cable you can hook up to the SAN to clear a locked state, and other sites tell me I can use the same to reset the password.

I just want to know if anyone's actually done it and how risk-free it is? The password reset shouldn't drop all the disks or whatever, but since this is our only SAN I can't really experiment.

IBM's manual says that port on the SAN is for support technicians only.

Is the device still under its factory warranty or covered by an extended ServicePac warranty agreement? If yes, call them up and log a fault so that they can send out a tech to do the procedure. From the looks of it you run the risk of really loving things up with that procedure. Also it says on that website that you need a username/password specific to the controller model and that you have to contact IBM to get it anyway.

Pile Of Garbage
May 28, 2007



Can anyone point me in the direction of some good resources for NetApp E5400 and HP EVA P6000 devices? I've already got a solid understanding of general NAS/SAN tech, however I've only really worked with IBM NAS/SAN devices before, so it'd be much appreciated if anyone could recommend some reading material which goes into the specific nuances of the aforementioned NetApp and HP devices.

Pile Of Garbage
May 28, 2007



So I'm currently supporting an environment which has a HP EVA SAN with two NetApp filers. NDMP backups are being done from the NetApp directly to tape.

Today it took two hours to restore a single folder containing two PDFs. Kill me.

Pile Of Garbage
May 28, 2007



Can anyone recommend a good iSCSI target solution for Linux that supports SCSI-3 Persistent Reservation? I've set up a small lab on my home PC to learn about Microsoft Failover Clustering and need to provision shared storage to the cluster nodes.

Pile Of Garbage
May 28, 2007



EoRaptor posted:

This is what snapshots are for? I don't understand how something ended up on tape but not in your daily snap(s)?

Misogynist posted:

Why would someone have the same retention policy for tape and on-disk snapshots? :confused:

The answer to both of these is :iiam:

We are in the process of completely overhauling the environment so hopefully things will get better. For backups they are using CommVault, which I've never really used before but it seems pretty solid so far (Given that I've only had about 4 weeks to work with it).

Pile Of Garbage
May 28, 2007



Misogynist posted:

LIO supports SCSI-3 PGRs, but my honest recommendation is to skip Linux here and go with COMSTAR from an Illumos derivative (OmniOS would probably be my choice). There is absolutely no comparison among other open-source stacks in terms of robustness, performance, and maturity.

Is what you're suggesting free? I'm really just looking to set up a small home lab environment and want to avoid purchasing anything (Hence why I'm going with Hyper-V).
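
edit: in case I do end up going the LIO route anyway, from what I can tell a file-backed clustering LUN via targetcli would look something like this (all the names, paths, sizes and IQNs are placeholders I've made up):
pre:
targetcli /backstores/fileio create name=csv1 file_or_dev=/var/targets/csv1.img size=40G
targetcli /iscsi create iqn.2012-12.local.lab:hyperv-cluster
targetcli /iscsi/iqn.2012-12.local.lab:hyperv-cluster/tpg1/luns create /backstores/fileio/csv1
targetcli /iscsi/iqn.2012-12.local.lab:hyperv-cluster/tpg1/acls create iqn.1991-05.com.microsoft:node1
targetcli /iscsi/iqn.2012-12.local.lab:hyperv-cluster/tpg1/acls create iqn.1991-05.com.microsoft:node2
targetcli saveconfig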

Pile Of Garbage
May 28, 2007



EoRaptor posted:

I'm not complaining that there are tapes, just that for deleted folders and other simple file recoveries, it's usually much, much faster to use a storage device's volume snapshot abilities to go back in time X hours/days and grab it.

Unless you have historical tape that goes back much farther than the snapshots are able to and these are very old files that went missing?

I'm really not sure why they aren't using snapshots on the NetApp controller but I'm guessing it's due to capacity limitations. The storage environment at the moment is a bit of a hodgepodge. There are two SANs, a HP EVA8100 and an EVA8400, which are presented to a NetApp V3410 which in turn provides block and file (CIFS) storage. To complicate things further they have two NetApp FAS3240 filers, with one providing NFS shares to VMware hosts and the other providing CIFS shares for backup staging.

I think they're planning on replacing the entire setup with a NetApp FAS6200-series or something.

Pile Of Garbage
May 28, 2007



NippleFloss posted:

Snapshots on user file shares are generally pretty thin. We retain 3 months worth of snapshots and we see around 5% of the total space consumed being held by snapshot blocks. You can almost always afford to enable them on low-change data like that. If the concern is running out of space due to snapshot usage you can set up snapshot autodelete to prune them when the volume gets near full. As mentioned, they are so quick to restore from that it's really a waste not to have at least a few available for quick restores of recently lost data. The fact that users can initiate their own restores through the previous versions functionality in windows or the ~snapshot directory makes it even better.

Yeah, having them set up would be a godsend, especially if users can use Previous Versions. The idiot who configured the NetApp didn't use any standard naming convention for the volumes and shares, so when a user gives me a UNC path to a file/folder to be restored I have to spend ages hunting around to find the actual volume that the share maps to.
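
At least mapping shares back to volumes can be done quickly from the controller itself, and assuming these are running 7-Mode the snapshot/Previous Versions side is only a few commands (volume and share names below are placeholders):
pre:
cifs shares
snap sched vol_users 0 2 6@8,12,16,20
snap reserve vol_users 5
options cifs.show_snapshot on
cifs shares lists every share along with the volume/qtree path it maps to, the snap commands set a nightly/hourly schedule with a modest reserve, and the last option exposes the ~snapshot directory to CIFS clients.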

Pile Of Garbage
May 28, 2007



parid posted:

Long shot, but do you have dedupe on? If not you might be able to find the space you need for snapshots in dedupe savings alone.

I'm pretty sure that dedupe is turned on but I'd have to check with the storage team. Is there a general best practice for snapshot capacity requirements based on volume size?
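
For what it's worth, if it's 7-Mode then checking is quick from the controller (volume name is a placeholder):
pre:
sis status /vol/vol_users
df -s /vol/vol_users
The first shows whether dedupe is enabled on the volume and the second shows how much space it is actually saving.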

Pile Of Garbage
May 28, 2007



I found a copy of "Designing Storage Area Networks" by Tom Clark at the office. Flicking through it I found the following page, which is pretty hilarious:



I like to think that it's based on a true story :allears:

Pile Of Garbage
May 28, 2007



demonachizer posted:

It satisfies the business needs of the project. If we have a location based disaster taking out both server rooms business is stopped anyway. But we also have daily shipments to servers 3 miles away and offsite backups to Iron Mountain for business continuity purposes. We are comfortable with a lot of downtime on paper really since these aren't our clinical systems and just for some administrative poo poo and stuff like print servers etc.

The purpose of the second room is just in case we have a flood in the primary room or something. The environment is not as important as the data and the data has a lot of redundancy built in. The file server that will be hosted will also be mirroring off to a physical box for backup to tape and for additional redundancy.

EDIT:

Funny story. I was just told that we need to start thinking about the environment that is replacing this one since we are EOLing it at 3 years.

You wouldn't happen to be doing this project for a company located in Subiaco, Western Australia that starts with an M?

Pile Of Garbage
May 28, 2007



demonachizer posted:

Nope. Other side of the planet.

My bad. On-topic question: what has your experience with Iron Mountain been like? I've had to start dealing with them very frequently in my new job and while they are extremely reliable, their TapeGuard website is poo poo, especially when you are dealing with a collection of 5,000+ tapes and are trying to manually recall a large number of them. Also the fact that they don't give you any details regarding scheduled or unscheduled deliveries until 2 hours before delivery is a bit crap.


Pile Of Garbage
May 28, 2007



Speaking of horrible websites, has anyone seen the web interface for a HP MSL6000-series tape library? That poo poo is atrocious.
