|
Sounds like bug 536445? I've heard of similar NVRAM battery issues. Even though that bug doesn't show an available fix, there is one: flash the battery firmware. See https://kb.netapp.com/support/index?page=content&id=2016592&actp=LIST_RECENT&viewlocale=en_US&searchid=1327443096712
|
# ? Jul 4, 2012 17:57 |
|
|
# ? May 10, 2024 00:43 |
|
I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iSCSI. I know I can get a 380 G8 with 12x 3TB drives in it, but what would I run on it? Nexenta adds $10k to the bill, and that's a hard pill to swallow. I don't know enough about the collection of OpenSolaris forks to know if they're at a point where they're usable for something like this with ZFS, or if I should just go with something I know better. Also looking at the Nexsan E18, and if anyone has other suggestions I'd love to hear them.
|
# ? Jul 6, 2012 17:31 |
|
Whitebox FreeBSD?
|
# ? Jul 6, 2012 17:47 |
|
KS posted:I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iSCSI. We've been pretty happy with OmniOS, and the support is much cheaper than Nexenta. Just make sure the hardware is on the Illumos (née OpenSolaris) HCL; the Dell R720XD's H310 is not as of this writing.
|
# ? Jul 6, 2012 18:15 |
|
KS posted:I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iSCSI. I'm looking at doing this same thing, and am leaning towards something like this guy did using a SuperMicro SC847 36-drive chassis and FreeNAS. Configured half full with 18 x 3TB drives it's about $8,000 from CDW, and would give over 40TB of usable space. It's definitely a 'roll your own' solution, and I'm not sure how fast it would be, but for that price the capacity can't be beat.
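For anyone pricing out a build like this, here's a rough back-of-the-envelope sketch of raidz usable capacity. It's illustrative only: it ignores the TB-vs-TiB gap, slop space, and metadata overhead, so real-world usable space comes in noticeably lower, and the two-vdev layout below is just one plausible way to arrange 18 drives.

```python
def raidz_usable_tb(drives, drive_tb, vdev_width, parity=2):
    """Rough usable capacity for a pool of equal-width raidz vdevs.

    parity=2 means raidz2. Ignores slop space, metadata, and the
    TB-vs-TiB conversion, so treat the result as an upper bound.
    """
    assert drives % vdev_width == 0, "sketch assumes equal-width vdevs"
    vdevs = drives // vdev_width
    return vdevs * (vdev_width - parity) * drive_tb

# 18 x 3TB drives laid out as two 9-wide raidz2 vdevs:
print(raidz_usable_tb(18, 3, 9))  # 42 -- roughly the "over 40TB" figure
```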
|
# ? Jul 6, 2012 19:21 |
|
Jadus posted:I'm looking at doing this same thing, and am leaning towards something like this guy did using a SuperMicro SC847 - 36 drive chassis and FreeNAS. We've been happy with that chassis w/ OpenSolaris for a few years for light/medium workloads.
|
# ? Jul 6, 2012 19:52 |
|
Nukelear v.2 posted:Anyone have ideas on why an EQL 6110XS might go non-responsive when doing 8k read IO? Seems to occur across all volumes on the unit. Doing some benchmarking with SQLIO and all my tests are good except for 8k random read. At 16k random read it was just under 16.5k IOPS, so I thought maybe the switches were flooding, so I dialed SQLIO back to a single thread and it still dies. Just as a follow-up on this: the host I was testing from did not have its NICs set for jumbo frames. Changing the MTU to 9000 resolved the issue. I'm not exactly sure why this would have happened, though; as far as I understand things it should just have been performance degradation. It's now happily pushing close to 33,000 IO/s at 255MB/s on my 8k random workload. Edit: Personally I think the Broadcom drivers are wrong when they label the default MTU as 1500 and are really sending 9216, since EQL says they can only support 9000 even on 10G. That makes more sense to me as the cause. Nukelear v.2 fucked around with this message at 21:20 on Jul 6, 2012 |
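The usual way to verify jumbo frames end-to-end is a don't-fragment ping sized to the smallest link MTU minus the IP/ICMP overhead (on Linux, `ping -M do -s 8972 <target>` for a 9000-byte MTU). A tiny sketch of the arithmetic behind that magic number, with the hop MTUs below invented to match the situation in the post:

```python
def max_ping_payload(path_mtus):
    """Largest ICMP payload that crosses every hop unfragmented:
    the smallest link MTU minus 20 bytes of IPv4 header and 8 bytes
    of ICMP header."""
    return min(path_mtus) - 28

# Host NIC still at 1500, switch at 9216, array at 9000:
print(max_ping_payload([1500, 9216, 9000]))  # 1472
# After fixing the host NIC to MTU 9000:
print(max_ping_payload([9000, 9216, 9000]))  # 8972
```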
# ? Jul 6, 2012 21:12 |
|
evil_bunnY posted:gently caress you system manager Heh, this is why I only use the CLI to manage filers. System Manager has limited usefulness compared to the CLI.
|
# ? Jul 7, 2012 05:05 |
|
The fun just never ends, guys. Starboard Storage finally pulled a permanent patch out of their collective Russian asses (did you know the company is run by Russians now? I have no problem with Russians, just thought it was interesting), and it made things worse, not better. I am Jack's complete lack of surprise.
|
# ? Jul 9, 2012 15:46 |
|
Powdered Toast Man posted:The fun just never ends, guys.
|
# ? Jul 9, 2012 21:16 |
|
Does anyone know what happens to a failed NetApp drive after it is returned? Is it sanitized / destroyed? We just had a drive fail and my boss asked; I haven't found an answer yet and thought someone here might know.
|
# ? Jul 9, 2012 23:06 |
|
nuckingfuts posted:Does anyone know what happens to a failed NetApp drive after it is returned? Is it sanitized / destroyed? We just had a drive fail and my boss asked, I haven't found an answer yet and thought someone here might know. I know that they're supposed to be sanitized; otherwise they can retain some of the ownership info they had previously (which happens all the time when I buy 3rd-party NetApp drives). I heard somewhere (not officially) that the good ones are repaired/reused as spares/replacements and the bad ones are canned.
|
# ? Jul 10, 2012 03:46 |
|
They are either overwritten or destroyed depending on whether they can be reconditioned. See KB article 3012103.
|
# ? Jul 10, 2012 04:15 |
|
nuckingfuts posted:Does anyone know what happens to a failed NetApp drive after it is returned? Is it sanitized / destroyed? We just had a drive fail and my boss asked, I haven't found an answer yet and thought someone here might know. I know they sanitize them at least, but most of the time the paranoid bunch like banks just get new drives and keep the old ones.
|
# ? Jul 10, 2012 08:12 |
|
Does anyone know if there's an HDS simulator available? I'm going for a Netapp/HDS interview and I don't really have much experience with HDS outside of the old HP XP arrays.
|
# ? Jul 18, 2012 15:48 |
|
So someone at work is looking at two 8-bay Drobos (this, to be specific), and I'm pretty sure that's an awful idea from what I've heard about Drobos. Would a pair of 8-bay Synology boxes be a better choice? I guess the plan is to mirror them, which Drobo has the capability to do. Beyond that I don't really know what the plan is. It looks like these are going to be backup space (so the second would be a backup of a backup?) for experiment data.
|
# ? Jul 18, 2012 21:11 |
|
Please God, don't get Drobos. Synology, QNAP, or even Buffalo would be infinitely better. I really love our 10 bay Synology NAS.
|
# ? Jul 18, 2012 21:14 |
|
I can confirm that Synology's products are excellent and so is their support. We are rolling them out for on-site software repository purposes at 130+ sites. It's only a single-drive model but it did some very specific things that no other device we found would do (primarily with FTP access for Wyse Device Manager, which we use to patch/image Wyse thin clients).
|
# ? Jul 18, 2012 21:22 |
|
Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad.
|
# ? Jul 18, 2012 21:25 |
|
Nomex posted:Does anyone know if there's an HDS simulator available? I'm going for a Netapp/HDS interview and I don't really have much experience with HDS outside of the old HP XP arrays. Feature-wise they don't really do anything out of the norm: ShadowImage is local LDEV mirroring, Universal Replicator is remote LDEV mirroring. There's really not much to say about it, honestly.
|
# ? Jul 18, 2012 21:32 |
|
FISHMANPET posted:Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad. My standard pre-purchase practice is to google "<x> sucks" and review the results. In this case there's a lot of material there. The drobosucks blogspot is pretty decent.
|
# ? Jul 18, 2012 22:40 |
|
Building hundreds of SnapMirror relationships so that I can migrate my data to a new NetApp sucks. What sucks worse is that our offsite NetApp is a 2050, so after we cut over it will be a race to upgrade our 3140 to ONTAP 8, reverse the SnapMirrors, and drive it to our DR site.
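Building hundreds of relationships by hand doesn't have to mean hand-typing them; a small script can emit the 7-Mode-style /etc/snapmirror.conf entries from a volume list. A hedged sketch, with the filer names and the nightly 23:00 schedule below made up for illustration:

```python
def snapmirror_conf_lines(volumes, src_filer, dst_filer, schedule="0 23 * *"):
    """One 7-Mode snapmirror.conf entry per volume: source, destination,
    arguments ('-' for defaults), then a cron-like minute/hour/day/weekday
    schedule. Mirrors each volume to a same-named volume on the destination."""
    return [f"{src_filer}:{v} {dst_filer}:{v} - {schedule}" for v in volumes]

# Hypothetical source filer and DR-bound 2050:
for line in snapmirror_conf_lines(["vol1", "vol2"], "filer1", "dr2050"):
    print(line)
```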
|
# ? Jul 20, 2012 22:53 |
|
adorai posted:Building hundreds of snapmirror relationships so that I migrate my data to a new netapp sucks. What sucks worse is our offsite netapp is a 2050 so after we cutover, it will be a race to upgrade our 3140 to ontap8, reverse the snapmirrors, and drive it to our DR site.
|
# ? Jul 21, 2012 03:15 |
|
NippleFloss posted:Protection manager could be used to do this somewhat trivially.
|
# ? Jul 21, 2012 03:21 |
|
FISHMANPET posted:Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad. NO NO NO DO NOT GET DROBO. Seriously, if you want to talk, answer my PM or email me at Corvttefish3r@gmail.com. I can help you out and offer a bunch of support for cheap.
|
# ? Jul 21, 2012 03:36 |
|
NippleFloss posted:Protection manager could be used to do this somewhat trivially. Protection Manager could be great... if it wasn't such a piece of poo poo. Let me count the ways:

1) What the gently caress is up with requiring 130% space on your destination volumes? Sometimes I want my 100GB volume to SnapVault to another 100GB volume, and I really don't enjoy the idea of requiring the destination volume to be 130GB. I end up making all of my SnapVault relationships manually and then importing them to get around this... but that's the OPPOSITE of what I want to be doing.

2) Speaking of, what's up with all of the arbitrary volume requirements? The language is different between my source and destination volumes, which doesn't matter at all for LUNs, but I guess that's a good enough reason to not let me set up a SnapVault relationship!

3) There needs to be a really simple SnapVault option in Protection Manager where PM goes and gets the last snapshot taken on the source and then copies it over to the destination. Requiring me to reconfigure every single SnapDrive and SnapManager instance is a huge task, whereas PM could EASILY be smart enough to grab the latest snapshot name to sync over.

I spoke with one of the OnCommand/PM project managers at Insight, and he was explaining how you could take 10 NetApps and put them into a big destination pool and let PM manage everything -- it would make all of the volumes 16TB and thin-provision everything. That sounds great... if you had 10 NetApps. If you're just trying to sync 1-2 NetApps to 1-2 other NetApps, PM simply doesn't give you the options or the flexibility (30% OVERHEAD REQUIRED) that I want. I am working on replacing the whole goddamn thing with a series of PowerShell scripts and calling it a day.
|
# ? Jul 21, 2012 04:10 |
|
Corvettefisher posted:NO NO NO DO NOT GET DROBO If you've got negative things to say about a storage vendor, say it in here so that other people may learn from your pain.
|
# ? Jul 22, 2012 17:00 |
|
Mierdaan posted:If you've got negative things to say about a storage vendor, say it in here so that other people may learn from your pain. PS: Drobos are bad enough when it's just one nerd's anime on there; putting VMs on the things will be like trying to swim in cowshit: slow, unsafe and very, very unpleasant.
|
# ? Jul 22, 2012 18:20 |
|
madsushi posted:Lots of words about Protection Manager It's definitely not a perfect product, and the earlier iterations were basically unusable, but it has improved to the point where it is functional and possibly even useful if you spend some time getting familiar with it. Regarding your specific issues:

1) There isn't a one-to-one ratio of source-to-destination size for SnapVaults, so it doesn't really make sense to size them at one-to-one. A vault destination will have a different number of snapshot copies than the source (generally more), and if dedupe is in use the data is initially re-inflated before being deduped on the destination. The extra size is accounting for that overhead. That said, it's not a strict 130%; the calculation is a bit more detailed than that, and differs depending on whether you're using 3.7 or 3.8 and up. There are some hidden options that can be changed to tune the calculation to provide more or less additional space. If you're interested I can provide them. Enabling Dynamic Secondary Sizing is probably the best way to go, provided you're on 3.8 or above.

2) SnapVault gets very unhappy when there are volume language mismatches between a source and destination. This isn't a Protection Manager issue; it's a WAFL issue, or, more generally, an issue with there not being a direct mapping of some characters from one language to another. If the destination volume doesn't support umlauts because of its language setting and there are files on the source that have umlauts, then it's going to fail.

3) The integration with SnapDrive and SnapManager is required because the vaults get cataloged in PM as being part of a SnapManager backup set. That allows you to do things like perform a restore from an archive transparently, or perform your validation on the secondary site. You can't do that if you don't have that catalog information, because you are performing your vaulting separately from your SM backups. Of course, for some people that would be just fine, and so the limitation sucks, but that's the rationale behind it.
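As a purely illustrative sketch of why the default sizing lands around 1.3x: the real Protection Manager calculation is more detailed and tunable, as noted above, and the two overhead factors below are invented for illustration.

```python
def vault_dest_size_gb(src_gb, snapshot_overhead=0.2, dedupe_inflation=0.1):
    """Naive stand-in for PM's default vault sizing: source size plus
    headroom for the extra snapshot copies kept on the secondary and for
    data re-inflating before the destination dedupe pass runs.
    The 0.2 and 0.1 factors are made up for illustration."""
    return src_gb * (1 + snapshot_overhead + dedupe_inflation)

print(vault_dest_size_gb(100))  # ~130
```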
|
# ? Jul 23, 2012 19:21 |
|
NippleFloss posted:It's definitely not a perfect product, and the earlier iterations were basically unusable, but it has improved to the point where it is functional and possibly even useful if you spend some time getting familiar with it. Regarding your specific issues:

1) I have never been able to make a volume smaller than 1.3x and still have Protection Manager accept it as a candidate for SnapVault. I opened a TAC case to see about reducing that down but never got anywhere. Sometimes my vault will be almost a mirror, sometimes I want the vault to store fewer snapshots than the source, etc. My destination filer has about 1.2x the space of my production filers, so making every volume start at 1.3x really doesn't work well. If you know of a way to get the minimum size under 1.3x, I am all ears and that would help quite a bit.

2) I get the volume language mismatch issue, but I am unhappy that there is not 1) an override, or 2) a button to "fix" it in PM, and 3) when I fix it myself, I have to wait 15-30 minutes before Protection Manager sees that I fixed the volume language manually.

3) Gotcha, restore from archive is actually a good point I did not think about.

I still use PM at several client sites simply because it's better than my batch files, but it feels like it's a lot of work/learning for small clients (1-2 NetApps) and there are so many little "gotchas" that make it difficult for me to teach others. Here's a day in the life:

1) (SD install) Enable SnapDrive integration with Protection Manager.
2) (SM config wizard) Enable SnapManager integration with Protection Manager.
3) (NetApp Management Console) Attach the newly-created dataset with some destination volumes. It wants to make them 1.3x? OK, we'll make them manually.
4) (OnCommand System Manager) Make the volume, turn off snapshots, turn on manual dedupe, make the qtree.
5) Wait 15 minutes for Protection Manager to rescan.
6) (NetApp Management Console) Try to attach the new qtrees to the dataset, but the volume language is wrong.
7) (ONTAP CLI) Change the volume language.
8) Wait 15 minutes for Protection Manager to rescan.
9) (NetApp Management Console) Attach the new qtrees (finally), assign a policy, initialize the SnapVault.
10) (SM backup wizard) Configure the backup jobs to archive.

All of that is due to Protection Manager, not including the steps needed to set up SD/SM and MPIO and the application database migration in the first place. Right now I can make a volume in 10 minutes and hand it off to a non-storage admin who can use SnapDrive and SnapManager to get their application set up quickly. With Protection Manager, a smart storage admin needs to spend an hour in so many different consoles just to set up replication. This is in contrast to SnapMirror, which is "mirror to this volume using OnCommand System Manager -- done".
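The "just grab the latest snapshot and vault it" logic madsushi wants from PM is genuinely small to script. A hedged sketch using a 7-Mode-style command; the snapshot names, timestamps, and secondary path below are invented, and in practice you'd feed it the parsed output of `snap list` rather than a hard-coded list:

```python
def snapvault_update_cmd(snaps, secondary_path):
    """Pick the newest source snapshot and build the 'snapvault update'
    command that would transfer it to the secondary qtree.
    snaps: iterable of (snapshot_name, created_epoch) pairs."""
    newest = max(snaps, key=lambda s: s[1])[0]
    return f"snapvault update -s {newest} {secondary_path}"

print(snapvault_update_cmd(
    [("sqlsnap__nightly.1", 1342742400), ("sqlsnap__nightly.0", 1342828800)],
    "/vol/dstvol/sqlqtree"))
# snapvault update -s sqlsnap__nightly.0 /vol/dstvol/sqlqtree
```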
|
# ? Jul 23, 2012 21:57 |
|
So today I turned this: (image) Into this: (image) 6 times. Across 3 sites. There's something great to say about how far IBM's midrange storage has come over the years. The step from FASTT to V7000 was extremely positive. The fact that now I just plug the expansion drawers in and it detects them, and configuration for the drives is a two-click process? Insane. A monkey could deploy these things.
|
# ? Jul 24, 2012 06:56 |
|
Nomex posted:Does anyone know if there's an HDS simulator available? I'm going for a Netapp/HDS interview and I don't really have much experience with HDS outside of the old HP XP arrays. I know they have lab environments for training courses that often sit unused; if they're eager to sell you something, maybe you could ask them for access to one of those to have a quick look at the different admin tools. Our shop has the whole range of HDS products (AMS, HUS, USPV, VSP, HNAS) and I can tell you that they are generally fast, reliable and easy to understand, but their GUI management tools are unbelievably slow and unsexy. At least HDS admits they need to improve them, and things are starting to look a lot better than they did a few years ago. And yeah, there's no amazing new technology in them; they do things the tried and trusted way and generally choose the simplest implementations, but in a SAN the fewer things that can go horribly wrong, the better.
|
# ? Jul 24, 2012 07:45 |
|
Trojan posted:There's something great to say about how far IBM's Midrange storage has come over the years. The step from FASTT to V7000 was extremely positive. The fact that now I just plug the expansion drawers in and it detects them, and configuration for the drives is a two click process? Insane. A monkey could deploy these things. I've worked with V7000s of varying configurations for a while now and I agree they are brilliant devices (if you can afford them). On the subject of IBM midrange storage systems, has anyone had a look at their new DCS3700 and DCS9900 high-density systems yet? 60 HDDs in 4U is pretty drat crazy. Also you can get the DCS9900 with either 8 x 8Gb FC or 4 x InfiniBand DDR host interfaces, which is insane.
|
# ? Jul 24, 2012 12:05 |
|
Trojan posted:There's something great to say about how far IBM's Midrange storage has come over the years. The step from FASTT to V7000 was extremely positive. The fact that now I just plug the expansion drawers in and it detects them, and configuration for the drives is a two click process? Insane. A monkey could deploy these things.
|
# ? Jul 24, 2012 13:16 |
|
cheese-cube posted:I've worked with V7000s of varying configurations for a while now and I agree they are brilliant devices (if you can afford them). The 9900 is a rebadged DataDirect Networks 9900, which is two generations behind DDN's current offering. You see them in HPC and broadcast; they do some interesting things to keep those pipes full that make them less well suited for general SAN workloads.
|
# ? Jul 24, 2012 18:11 |
|
They're useful as giant scratch storage, but my personal recommendation is to never store anything important on DDN.
|
# ? Jul 24, 2012 18:50 |
|
cheese-cube posted:On the subject of IBM midrange storage systems has anyone had a look at their new DCS3700 and DCS9900 high-density systems yet? 60 HDDs in 4U is pretty drat crazy. Also you can get the DCS9900 with either 8 x FC 8Gb or 4 x InfiniBand DDR host interfaces which is insane. Getting 76 DCS3700s up and running pretty soon; don't know too much about them just yet.
|
# ? Jul 24, 2012 21:09 |
|
The_Groove posted:Sometimes two at once resulting in 60 arrays with 1 failed drive, and another 60 with 2.
|
# ? Jul 24, 2012 21:11 |
|
One pair of controllers, but with 2 drive enclosures (daisy-chained) for each channel. So 1200 drives total.
|
# ? Jul 24, 2012 21:15 |
|
|
# ? May 10, 2024 00:43 |
|
I need to build a new HA SAN for our ESXi backend. Since we are a white-box shop (Supermicro) and huge ZFS users, I was planning on building an OpenIndiana Supermicro ZFS box. I made the terrible mistake of quoting some parts through my CDW rep and mentioning the above. He is now trying to shove a NetApp rep down my throat, claiming their new $8k entry-level box can do everything I want. For some reason, I highly doubt it.
the spyder fucked around with this message at 22:29 on Jul 24, 2012 |
# ? Jul 24, 2012 22:25 |