|
So we had a cache battery controller "failure" on one of our Dell SC4020s. It's less than a year old. Apparently there is a known issue with the firmware on the cache controller. Reseating the battery did not work, so the suggestion from Copilot is to reseat the controller. Even though this is a redundant system, I'm still wary of reseating a controller during normal hours, so I get to do some maintenance tonight. I'm just hoping this is not indicative of a larger issue.
|
# ? Jul 10, 2015 16:28 |
|
What does Dell say? If it's a known issue they should have something, I wouldn't pull a controller without their recommendation.
|
# ? Jul 10, 2015 19:16 |
|
sanchez posted: What does Dell say? If it's a known issue they should have something, I wouldn't pull a controller without their recommendation.

They said to pull the controller.
|
# ? Jul 10, 2015 19:25 |
|
How come Nimble isn't in the OP? What are goons' opinions on it?
|
# ? Jul 11, 2015 02:34 |
|
kiwid posted: How come Nimble isn't in the OP? What are goons' opinions on it?

OP was written a few years ago and I haven't had time to be a good curator. We like Nimble very much (speaking for the company I work for, not goons in general) because it's easy to use and it generally performs well at a pretty solid price point.

edit: My god, I wrote that poo poo in 2008. Is there interest in a refresh?
|
# ? Jul 11, 2015 03:13 |
|
Based on past experience, no one actually reads megathread OPs. kiwid is apparently the rare exception.
|
# ? Jul 11, 2015 03:35 |
|
kiwid posted: How come Nimble isn't in the OP? What are goons' opinions on it?

I've never seen anything negative about Nimble here, or anywhere for that matter. I love mine. I was promised 32,000 IOPS and can pull 40,000. I rarely think about it, which is a compliment.
|
# ? Jul 11, 2015 04:10 |
|
Biggest knocks against Nimble are that it's block-only and not particularly feature-rich. It's simple and performs fairly well, though. Tegile is winning a lot of deals over Nimble in my area lately and has multi-protocol support, which can be nice. I like Tintri a lot as well, though it's a bit more expensive than the other two.
|
# ? Jul 11, 2015 04:32 |
|
Nimble is just stupid fast per spindle.
|
# ? Jul 11, 2015 04:46 |
|
Never had any problems with my Nimble arrays. One of them is now pegging the CPU in the controller and latency is still sub-3ms (a lotta growth). Upgrading the controllers in early 2016.
|
# ? Jul 11, 2015 05:49 |
|
1000101 posted:edit:
|
# ? Jul 11, 2015 18:09 |
|
OldPueblo posted: You can still choose 7-mode if you like, just bear in mind there is no 8.3+ 7-mode. 8.2.x is the last 7-mode release and it'll continue to get bug fixes, etc., for years. You can also change your mind after it's delivered, really; just get with your licensing reps and get licenses switched. At some point there will probably be a platform that isn't supported for 7-mode, I'm just guessing. The default is cDOT now though, if that was your question.

Since 8.2, licenses work for either 7-mode or cDOT - so if you receive with cDOT and want to change your mind to 7-mode, you just unset a BIOS variable, netboot the filer to 8.2.3P2 or whatever you want, then reinitialize. But you shouldn't: cDOT works great for essentially everything that 7-mode did. If you have 7-mode, just set the BIOS variable, reinitialize, and reconfigure. You'll need a cluster license key, but whoever you bought the NetApp from can help you with that, for free.
|
# ? Jul 12, 2015 22:22 |
|
kiwid posted: How come Nimble isn't in the OP? What are goons' opinions on it?

Have two Nimble arrays that have been rock solid since we bought them. They require very little maintenance as well, which is a huge + in my book. Quite happy with them; however, dammit, I wish they did NFS as well.
|
# ? Jul 12, 2015 22:33 |
|
So the Compellent array that had a bad controller I posted about has now also had an SSD drive fail. I'm kinda surprised to see failures like this in something that hasn't even been running for a year. A Dell tech should be here in about 3 hours with the parts, at least.
|
# ? Jul 13, 2015 15:40 |
|
1000101 posted: OP was written a few years ago and I haven't had time to be a good curator. We like Nimble very much (speaking for the company I work for, not goons in general) because it's easy to use and it generally performs well at a pretty solid price point.

I remember first writing about Nimble in this thread like 2-3 years ago and goons going "not so sure 'bout those dudes," and now I feel all vindicated.
|
# ? Jul 13, 2015 17:52 |
|
Anybody have experience with SolidFire flash arrays? How well do the scale-out and volume QoS features work?
|
# ? Jul 17, 2015 14:10 |
|
Has anyone run storage performance benchmarks on a Cisco UCS box? I'm doing a side gig for a customer, setting up a database cluster on a UCS box (UCS C240 M3 SFF, dual 8-core/2.7 GHz, 128 GB RAM, 16x300GB) running VMware for them. They had a question on how to set up the storage and now I'm curious about RAID 10 vs RAID 6 (or whatever Cisco calls it on a UCS box). Has anyone ever compared the two? I'm also considering RAID 50 and RAID 60 to maximize usable disk space while retaining some level of performance. Where's the sweet spot?

Agrikk fucked around with this message at 14:27 on Jul 17, 2015 |
# ? Jul 17, 2015 14:24 |
|
You will almost certainly want to use RAID 10 unless they are super read heavy. The sweet spot depends on your read/write ratio, but if they only have that one box and all of the VMs are going to be on it, RAID 10 is the right choice.
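For anyone wanting to put numbers on the read/write ratio point, the textbook write-penalty arithmetic is a quick sanity check. This is a sketch with illustrative spindle counts and per-disk IOPS (10k SAS ballpark), using the standard write penalties (RAID 10 = 2, RAID 5/50 = 4, RAID 6/60 = 6) — not Cisco-specific figures:

```python
# Back-of-envelope effective IOPS for a 16-spindle array under different
# RAID levels. Each front-end write costs `write_penalty` back-end IOs,
# so the effective front-end IOPS shrinks as write fraction grows.

def effective_iops(spindles, iops_per_spindle, write_penalty, write_fraction):
    raw = spindles * iops_per_spindle
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

config = dict(spindles=16, iops_per_spindle=150)  # illustrative 10k SAS numbers
for name, penalty in [("RAID 10", 2), ("RAID 50", 4), ("RAID 60", 6)]:
    for wf in (0.3, 0.7):
        iops = effective_iops(write_penalty=penalty, write_fraction=wf, **config)
        print(f"{name} at {int(wf * 100)}% writes: ~{round(iops)} IOPS")
```

At a 30% write mix RAID 10 already delivers nearly double the effective IOPS of RAID 60 on the same spindles, which is why the write-heavy-database answer is almost always RAID 10.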
|
# ? Jul 17, 2015 14:37 |
|
RAID 10.
|
# ? Jul 17, 2015 21:33 |
|
RAID 10 or passthrough for VMware's VSAN.
|
# ? Jul 17, 2015 22:14 |
|
We do RAID 6 for our VDI stuff, but honestly RAID 10 would have been a better choice.
|
# ? Jul 18, 2015 00:29 |
|
I have learned that keeping on top of storage reclamation is probably a good idea. Over the weekend we came pretty close to being completely full on our tier 3 storage. After cleaning up some old data I thought I had cleaned up almost everything, but noticed usage on our Compellent arrays didn't change (after replay and DP)... File deletion does not zero blocks out. This is something I already knew, but it didn't really click until I saw the space discrepancy. I ended up having to use a combination of `esxcli storage vmfs unmap` and dd within our Linux guests (thick disks) to free up the blocks on the array. Here is the dd script I used: code:
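The code block didn't survive the quote, so here's a hypothetical reconstruction of the usual guest-side zero-fill pattern — not the poster's actual script; the mount point and block size are placeholders:

```shell
#!/bin/sh
# Hypothetical reconstruction of a guest-side zero-fill script.
# Inside each thick-disk Linux guest: write zeros until the filesystem
# fills, then delete the fill file. Once the freed blocks are zeroed,
# the array (or a follow-up `esxcli storage vmfs unmap` on the host)
# can reclaim them.
FS="${1:-/}"                  # filesystem to scrub; adjust per guest
FILL="$FS/zerofill.$$"

dd if=/dev/zero of="$FILL" bs=1M 2>/dev/null   # runs until ENOSPC
sync
rm -f "$FILL"
sync
```

Worth noting that this briefly fills the target filesystem to 100%, so it's an off-hours job — anything that needs free space while it runs will fail.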
|
# ? Jul 20, 2015 16:00 |
|
bigmandan posted: I have learned that keeping on top of storage reclamation is probably a good idea.

Nope, which is why thin provisioning on NFS is the tits. Some vendors have tools that simplify the guest zeroing portion, but the basic steps are the same.
|
# ? Jul 20, 2015 17:41 |
|
[Edit: gently caress me... thought this was the NAS thread. Sorry.]
Internet Explorer fucked around with this message at 01:28 on Jul 22, 2015 |
# ? Jul 22, 2015 01:19 |
|
If you're in the market for an all-flash array there are some pretty good deals to be had out there right now. Pure is running a promo where you can get an FA405 with 2.75TB raw for about $50k. NetApp is offering an 8020 AFF with 4.8TB raw for about $25k. Both arrays feature dedupe and compression, so data reduction rates are pretty good. Flash storage is definitely getting cheaper and cheaper.
|
# ? Jul 27, 2015 18:18 |
|
Also, Cisco has killed off Invicta (formerly Whiptail) less than two years after acquiring it, having done basically nothing at all with it.
|
# ? Jul 27, 2015 22:55 |
|
NippleFloss posted: Also, Cisco has killed off Invicta (formerly Whiptail) less than two years after acquiring it, having done basically nothing at all with it.

Cisco acqui-hires, they never keep a product line intact.
|
# ? Jul 28, 2015 03:11 |
|
Vulture Culture posted: Cisco acqui-hires, they never keep a product line intact.

All indications are that they intended to sell the product directly (they did this; there are Invicta customers out there and they tried to get us to quote it) as well as integrate it with UCS. They've given up on both and it's basically a dead product with no obvious landing spot for the people or IP they acquired.
|
# ? Jul 28, 2015 04:47 |
|
Vulture Culture posted: Cisco acqui-hires, they never keep a product line intact.

Think Meraki will stick around? They are probably making some good money off the licensing.
|
# ? Jul 28, 2015 05:08 |
|
I don't see Meraki going anywhere. It's already rebranded as Cisco Meraki, and fills a nice gap in their product line allowing them to compete with the Aerohives of the world.
|
# ? Jul 30, 2015 14:00 |
|
Figured this is the best thread to ask this in, but does anyone here back up to Amazon S3 or Amazon Glacier (not sure what the difference is currently)? If so, do you like it? Are there any caveats I should know about, or is it as simple as back up/archive your data and hope you don't have to touch it ever (we'd still be doing on-site backups)? Also, what the gently caress is a "request"? If I'm backing up one server, is that one request, or is a request done for each file, or what?

edit: Also, assuming poo poo hit the fan and we had to resort to disaster recovery by getting our data off Amazon Glacier. How do you actually do that, assuming you had 20TB on there? Would you be downloading that all over the WAN or would they send you a hard drive or something?

kiwid fucked around with this message at 22:01 on Jul 30, 2015 |
# ? Jul 30, 2015 21:57 |
|
kiwid posted: Figured this is the best thread to ask this in but does anyone here backup to Amazon S3 or Amazon Glacier (not sure what the difference is currently)?

You probably don't want to use Glacier for backup. There is a minimum four-hour wait before your request will even begin to be serviced. It's meant for archival data that you will access very infrequently and with a very generous RTO. It's also priced much higher per GET request, so restores can get expensive. Backing up to S3 is pretty common though, and a lot of backup vendors have configurations to allow that fairly trivially.

AWS stores objects, not files or blocks, so a request is just a request to store or retrieve an object. An object is just a blob of data identified by some metadata. How many requests are required to store or retrieve an object is going to be determined by how your backup software handles writing to the object store.
|
# ? Jul 30, 2015 22:41 |
|
kiwid posted: Figured this is the best thread to ask this in but does anyone here backup to Amazon S3 or Amazon Glacier (not sure what the difference is currently)?

Echoing what NippleFloss has already said: Glacier is for long-term archiving of data. Think processed log files, legal documents with 7-year retention times, and anything else you refer to once and then need to keep for a long time without accessing it. S3 is the more typical approach for backups, as a bucket can typically be mounted as a storage device in most cloud-aware software these days and integrates seamlessly with your existing backup infrastructure.

If cost is a thing, you can always look at S3's reduced redundancy option, which basically reduces availability from 11 nines to four (99.99%), I think. Have a look at the cost calculator for a better sense of cost differences between the various storage options: http://calculator.s3.amazonaws.com/index.html

Also, for larger backup or recovery jobs, have a look at AWS Import/Export. It is pretty much what you said: you dump your data to a hard drive and ship it to them. Or you ship them a hard drive and they dump your data back on it and ship it back. But the process can take several days and isn't designed to be a disaster recovery option for critical data.
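To put rough numbers on the S3-vs-Glacier question for the 20TB example, here's a quick sketch. The per-GB prices are illustrative 2015-era ballpark figures, not authoritative AWS pricing — use the calculator linked above for real numbers:

```python
# Rough monthly storage cost for 20 TB across tiers.
# Prices are illustrative placeholders, NOT current AWS list prices.
PRICE_PER_GB_MONTH = {
    "s3_standard": 0.03,   # ~2015-era ballpark
    "s3_rrs": 0.024,       # reduced redundancy, slightly cheaper
    "glacier": 0.01,       # cheapest at rest, but slow/expensive to restore
}

def monthly_cost(tb, tier):
    gb = tb * 1024
    return gb * PRICE_PER_GB_MONTH[tier]

for tier in PRICE_PER_GB_MONTH:
    print(f"{tier}: ${monthly_cost(20, tier):,.2f}/month for 20 TB")
```

Note this is storage-at-rest only; Glacier's retrieval and per-request charges are where a 20TB disaster-recovery pull would get expensive, which is the point made above.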
|
# ? Aug 4, 2015 16:18 |
|
Not entirely sure if this is the best thread to ask in, but it does involve a function of storage... We use NetApp filers and almost every user has a laptop. We end up with a bunch of orphaned lock files on our network storage because people tend to not actually close files before they disconnect their laptops from the network, or disconnect from the VPN if they're at home. In turn, this leads to the orphaned lock files being reused when someone else comes along and opens the file, so the wrong user is reported as having the file open for editing.

Basically, user A disconnects improperly and leaves a lock file on the network for a random Word file. User B comes along and opens that same file and the lock file is re-used (apparently). Now if another user tries to open the same file, they're told that user A has the file open, when actually user B has it open. The lock file also doesn't seem to always correctly clear after that even if user B closes the file properly (we sometimes find months-old lock files that aren't cleared).

The question being: short of beating it into the heads of users to stop disconnecting from the network without closing their files, is there any way to automatically clear these orphaned lock files rather than handling it on a case-by-case basis? Some function of NetApp that I haven't discovered, or anything else like it? Or in general, has anyone had similar problems and found a better solution? Thanks
|
# ? Aug 4, 2015 18:02 |
|
Agrikk posted: If cost is a thing, you can always look at S3's reduced redundancy option which basically reduces availability from 11 nines to four (99.99%) I think.

One minor point: both the reduced redundancy option and the standard option have the same availability target (99.99), the difference is that the standard option has 11 9s of durability while the reduced has only four. Durability concerns whether an object will be lost within a year, versus availability, which is the chance that it will be unavailable for some portion of the year.
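The practical gap between four and eleven nines of durability is easier to see if you multiply the per-object loss probability across a large object count — a sketch assuming losses are independent across objects, which is a simplification:

```python
# Expected annual object losses at a given durability level,
# treating losses as independent across objects.

def expected_losses(num_objects, durability):
    return num_objects * (1 - durability)

million = 1_000_000
print(expected_losses(million, 0.9999))     # four nines: ~100 lost objects/year
print(expected_losses(million, 1 - 1e-11))  # eleven nines: ~0.00001 lost objects/year
```

In other words, at four nines a million-object backup set expects to silently lose around a hundred objects a year, while at eleven nines the expected loss is effectively zero — which is why durability, not availability, is the number that matters for backups.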
|
# ? Aug 4, 2015 18:24 |
|
Agrikk posted:Echoing what NippleFloss has already said:
|
# ? Aug 4, 2015 18:37 |
|
My team is playing with the idea of going tapeless when we refresh our NetBackup environment. However, we'd like to do it without throwing hundreds of thousands of dollars at a particular storage vendor if we can help it (like Data Domain or something similar). Does anyone have experience setting up a high-density, cheap, non-performant storage array for the purposes of backups attached to a NetBackup media server? Preferably something with dedupe and compression?

We've also thought of just rolling some dense HP servers with a bunch of 6TB SATA drives in them and running something like OpenDedup on top, but I'm not sure if that would be more trouble than it'd be worth to maintain all that, and have to worry about setting up our own alerting and scheduling drive replacements and whatnot.
|
# ? Aug 13, 2015 21:38 |
|
So, thin provisioning (especially with virtualization). You have thin provisioning at the VMware level. You have thin provisioning at the storage level, sometimes even twice at the storage level (a NetApp thin-provisioned LUN inside of a thin-provisioned volume). The only reason to thin provision is to oversubscribe resources. The question is WHERE do you oversubscribe. What's best practice?

If you have dedupe/compression, does it make sense to even thin provision VMs at the VMware level anymore? At that point is it better to thick provision them and size the drives accordingly so you don't have to worry about oversubscribing each VMware volume, instead focusing all your attention on the storage level?
|
# ? Aug 14, 2015 16:19 |
|
I'm phone posting so I can't respond to all of your questions, but make sure you know how to reclaim space at the guest level, the host level, and the storage level. Read up on SCSI UNMAP and make sure your environment supports it. Otherwise you can paint yourself into a corner. I generally avoid thin provisioning at the storage level unless there's a specific need or benefit. Thin provisioning at the hypervisor level is a bit more flexible and does have some storage-motion-related benefits.
|
# ? Aug 14, 2015 21:22 |
|
If you don't thin provision at the storage level, though, how is dedupe providing you any benefits at all? If I thick provision a 10TB volume on NetApp, put two thick-provisioned 5TB LUNs on it, and then put fifty 100%-full but identical 100GB VMs on each LUN, I'll be wasting a ton of space. I will have reserved 10TB of storage on the SAN, both of my 5TB VMware volumes would be full, each guest would also be full, but I would only be consuming around 100GB (give or take) of real storage.
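The arithmetic in that scenario can be sketched out, using the numbers from the post and idealizing dedupe down to a single copy of the shared blocks:

```python
# Thick reservations vs. deduplicated physical consumption for
# 100 identical VMs (illustrative numbers from the scenario above).
# Units are GB throughout.

vm_size = 100
vms_per_lun = 50
luns = 2
volume_size = 10_000          # one thick-provisioned 10 TB volume

logical_written = vm_size * vms_per_lun * luns   # what the guests wrote
reserved = volume_size                           # what the SAN set aside
# Identical VMs dedupe to roughly one copy's worth of unique blocks.
physical_after_dedup = vm_size

print(f"reserved on SAN:       {reserved} GB")              # 10000 GB
print(f"logically written:     {logical_written} GB")       # 10000 GB
print(f"physical after dedupe: ~{physical_after_dedup} GB")
print(f"stranded reservation:  {reserved - physical_after_dedup} GB")
```

So thick provisioning at the storage level strands ~9.9TB of reservation that dedupe already freed, which is the crux of the question: the savings only become usable capacity if something in the stack is allowed to oversubscribe.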
|
# ? Aug 14, 2015 22:21 |