Wicaeed
Feb 8, 2005

Docjowles posted:

That's verbatim why my old boss forbade me from buying a SAN for our virtual environments. So instead we used a lovely hacked up DRBD solution that failed all the loving time, but hey, at least we didn't have a "single point of failure" :shepface:

Jesus christ this sounds like the line of thought from our company DBA Manager when it came time to rebuild our old Billing environment.

I had to sit down and explain to him (with drawings and everything) how a loving RAID array works (with hot spares!) and how redundant controllers, network links, switches, etc. work to make the disk array as redundant as possible.

He still wanted us to buy a second SAN of the same make/type and use it as a hotspare because reasons.

And then the turdnugget goes and builds a SQL cluster, but then places two standalone MSSQL servers in front of it so clients can connect to it instead of the cluster :psyduck:

Wicaeed
Feb 8, 2005
Is there anyone here who has production experience with EMC's ScaleIO product?

I'm specifically looking for information on mixing different hard drive types within the same physical chassis, as well as how ScaleIO handles mixed hardware (same server vendor but different HW generations).

Wicaeed
Feb 8, 2005

NippleFloss posted:

EMC has a few different products that compete in the scale out array space: Isilon, XtremIO and ScaleIO. Of those, only Isilon has enough market presence to make any real determinations about what it's good at, and that seems to be high throughput sequential IO streams. So massive scale archival data, object storage, streaming video, etc. It has proven to be less than stellar at running things like VMware or OLTP because (like many distributed systems) it is very metadata intensive, and the time required to a) query metadata to locate all of the pieces of an IO request and b) assemble those pieces from the various nodes they are located on incurs enough latency to make it inefficient for random IO where you can't do readahead to mask that latency. That sort of problem is solvable through things like a coherent global cache (like VMAX), but adding the hardware to work around that problem makes things significantly more expensive.

NetApp doesn't really do scale out like that. Clustered ONTAP can scale a namespace, but individual filesystems live on only a single controller, so single workload performance is limited by the controller that owns the volume unless you do striping at the host layer. They have a construct called an infinite volume that stripes IO from a single volume across multiple nodes, but it is meant for pretty limited use cases right now.

I'm not sure what the performance of Nimble's solution is like because they are also still pretty small. One interesting thing they do is provide a special multi-path drive that not only manages paths on the host, but also directs IO requests for specific LBAs to the node that owns that LBA, so there is no back end traffic to retrieve the data from a partner node and no need for global cache. I'm not sure how they do that, though (A round robin assignment of LBAs or blocks of LBAs to each node, perhaps?) and it could cause other issues.

Basically there is no perfect solution. All of these arrays are built to be good at one or several things, and the design decisions required to meet those goals involve trade-offs that make them less good at other things. Which is why it's important to pick a vendor based on what you actually want to do, and not based on synthetic benchmarks or innovative features that won't help with your workload.

Sounds like you know a bit about ScaleIO :)

Do things like FusionIO cards or the addition of the EMC XtremCache product make any sort of difference with regard to random IO performance?

Wicaeed
Feb 8, 2005

bull3964 posted:

On the subject of VSAN, I wonder if they are going to let people start using SSDs for actual storage rather than just cache. The new Intel PCIe SSDs were announced the other day and those seem perfect.

Two 400GB DC P3600s in a 3 host cluster would be screaming. That would give you 1.2TB of insanely fast storage for around $4500. Not a ton of raw storage to be sure, but if you had a handful of VMs, you could get 3 lower end Dell servers like the R420 and put together a fast as poo poo 3U SMB cluster for very little money or complexity. You could probably even get away with DC P3500s if your write load wasn't too high and save another $1200.

It actually wouldn't be a bad complement to a Dell VRTX since it has 8 PCIe bays internally. Set up a VSAN with PCIe SSDs across the blades you have installed (assign two PCIe slots to each) for OS volumes and then use the shared PERC8 mechanical drive backplane for bulk storage.

Curiously, EMC ScaleIO is going to start using some form of local memory as the cache for its storage in its latest release, allowing you to use local SSDs as a faster tier of storage.

Wicaeed
Feb 8, 2005

Moey posted:

Was about to say the same thing. Could probably get a Nimble CS220 for around the same price. Dead simple to work with and good performance.

You will spend more time racking the thing (their rails suck) than you will deploying it.

How well does their replication work? Do they support any form of active failover?

I just got tentative approval from my boss to quote out a secondary SAN for our currently planned MSSQL Billing environment, with a budget of $80k.

Right now we're thinking of purchasing a second copy of our Equallogic SAN to act as a backup in case of a primary array failure, but I'm fairly certain that Equallogic can't fail over seamlessly in any way. It also doesn't support a lot of advanced features such as compression or dedupe, and it has absolutely no flash to speak of.

Wicaeed
Feb 8, 2005
So I got to sit down for an hour with Nimble and go through a webex presentation about their product.

If half of what they are claiming is true, this should be a pretty simple sell to management, as long as it doesn't break the bank ($80k).

Wicaeed
Feb 8, 2005

bigmandan posted:

I sat through that same presentation a few days ago. It is pretty drat impressive. Ballpark figure for the CS220 was about $50-60k (Canadian monopoly money)

:stare: drat, that's quite a bit more expensive than Moey assumed near the top of this page (comparing it to an EMC VNX5200 + DAE for $22k), putting it (probably) right back into the territory of poo poo-that-I-want-but-couldn't-ever-get-budgeting-for.

Was that for a single unit?

Moey posted:

What are you looking to get?

I have found them very reasonable in terms of pricing. Bigmandan's price there seems a lot higher than what we paid for our CS220 even after converting it to USD.

Was that before or after blowjobs?

I'm going to talk to our VAR and see if he can throw a quick and dirty quote my way based on what we want (probably two CS220 shelves, one as primary, one as a backup). At this point I'm not holding my breath.

Wicaeed fucked around with this message at 22:59 on Jun 27, 2014

Wicaeed
Feb 8, 2005

Richard Noggin posted:

8 2TB NL-SAS drives for 30 VMs? Have you done any IOPS analysis? I've seen bigger setups with more drives and fewer VMs fail.

EMC has an offering called ScaleIO that uses the local storage of your hosts and combines it into a distributed storage solution.

From what I've seen it looks fairly nice, and it falls right around the $10k mark for their starter kit, but that doesn't include any of the required hardware/storage/etc. The kit lets you scale up to 12.5TB if memory serves.

Think of it as kind of like vSAN, but not limited to VMware hosts: you can use storage on non-VMware hosts and serve it up to VMware as long as the storage backend is on the same network.

And yeah you're gonna be sad with just 7.2k RPM NL-SAS drives.

How in the world do you swing a Nexus 5k but can't get more than $10k for storage?

Wicaeed fucked around with this message at 00:31 on Jun 28, 2014

Wicaeed
Feb 8, 2005

bigmandan posted:

It was for a single unit with 10 GbE and 3-year support, suggested retail with no discounts.

Well I just got a quote from our VAR and while it was cheaper than what you said in your post, it wasn't by much.

Wicaeed
Feb 8, 2005

Dilbert As gently caress posted:

This will be my next blog post.

Debunking the hype behind vSAN.

In all honesty it's an overpriced product that is way more hyped than it should be. It's cool and all, but the limitations and cost of long-term ownership far outweigh the cost of a real SAN.

Can you compare it to, or show parallels with, EMC ScaleIO and other scale-out storage systems as well?

Wicaeed
Feb 8, 2005

Dilbert As gently caress posted:

Uhh, does XtremIO count? That's what I am working with right now, and actively trying to counter because of how immature the product is... But sure.

I don't know what parallels I can show, but within the next week, check howtovirtual.com for some kind of post.

I don't think XtremIO and ScaleIO really do the same thing? XtremIO seems to be a fairly straightforward flash-based array, while ScaleIO is more of a software-defined, scale-out storage system.

For those of you with Nimble devices:

Does Nimble have something similar to Equallogic SyncRep? SyncRep lets you synchronize writes to two volumes across two (or more) separate Equallogic arrays. It allows for a fairly high degree of failover, and I'm currently looking at it for an MSSQL cluster.

Was curious if Nimble does anything like this.

Wicaeed
Feb 8, 2005

Nitr0 posted:

You can synchronize volumes to another array, yes.

Is this more of a take-a-snapshot-and-then-replicate setup, or something where any writes made to one array are also replicated to the second array and verified before any confirmation is sent back to the machine that did the writing?
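
For my own sanity, here's how I think about the difference. This is a purely illustrative Python sketch of the two write paths (not Nimble's or Equallogic's actual behavior or API), just showing when the host gets its write acknowledgement under each style:

code:
# Illustrative only -- not vendor code. Shows when the writing host gets its
# ack under snapshot/async replication vs. true synchronous replication.

class Array:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def commit(self, lba, data):
        self.blocks[lba] = data          # pretend this is a durable write
        return True

def async_snapshot_write(primary, partner, lba, data):
    primary.commit(lba, data)
    # Host is acknowledged immediately; the partner only catches up when the
    # next scheduled snapshot is shipped over, so it can lag by minutes/hours.
    return "ACK (partner syncs at next snapshot)"

def synchronous_write(primary, partner, lba, data):
    ok_primary = primary.commit(lba, data)
    ok_partner = partner.commit(lba, data)   # waits on the partner, adding its latency
    if ok_primary and ok_partner:
        return "ACK (data is on both arrays)"
    raise IOError("no ack until both arrays confirm the write")

if __name__ == "__main__":
    a, b = Array("primary"), Array("secondary")
    print(async_snapshot_write(a, b, lba=42, data=b"billing row"))
    print(synchronous_write(a, b, lba=42, data=b"billing row"))
The first style is basically what snapshot replication gives you (an RPO measured in your snapshot interval); the second is what SyncRep promises (zero data loss on failover, at the cost of the partner's latency on every write).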

Wicaeed
Feb 8, 2005
Does anyone use LSI MegaRAID Storage Manager anymore?

We have probably 500 or so endpoints that currently use this software. I was wondering if LSI has a product that can be used to administer (and deploy configs to) all of these endpoints en masse.

Googling hasn't really been of any help thus far.
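
The closest thing I've come up with in the meantime is just scripting the CLI over SSH across all of the endpoints. Rough sketch below; it assumes each endpoint has storcli64 installed at the usual path and that key-based SSH works, so treat the host list, user, and path as placeholders:

code:
# Rough sketch: pull controller/VD status from a pile of hosts over SSH.
# Assumes storcli64 exists on each endpoint and key-based SSH auth is set up.
import paramiko

HOSTS = ["endpoint-01.example.com", "endpoint-02.example.com"]   # placeholder list
STORCLI_CMD = "/opt/MegaRAID/storcli/storcli64 /c0 show"          # adjust path/controller

def query_host(hostname):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname, username="root", timeout=10)
    try:
        _stdin, stdout, stderr = client.exec_command(STORCLI_CMD)
        return stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    for host in HOSTS:
        out, err = query_host(host)
        print(f"===== {host} =====")
        print(out if out else err)
It works, but I'd still rather have a real central console than a pile of scripts.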

Wicaeed
Feb 8, 2005
drat,

Just got the quote from Nimble today, and while it's not $22k for a CS220G, it's almost exactly the same price as an Equallogic PS6210X plus a little bit more for support.

If we weren't dead set on using SyncRep for our new MSSQL cluster I think I could make a good business case for a new SAN vendor.

Unfortunately everything I've read says that Nimble doesn't have a comparable technology to SyncRep, which might be a deal breaker for us.

Goodbye, pipe dream :(

Wicaeed
Feb 8, 2005

NippleFloss posted:

Why not just use SQL 2012 availability groups and have much more transparent failover with any storage you choose?

I really don't know. For some reason we decided to separate our billing DB into read servers and write servers.

It seems incredibly backwards, but I'm not a DBA.

Wicaeed
Feb 8, 2005

NippleFloss posted:

You can do that with availability groups. Secondary copies are read-only by default. Separating reporting or backup onto a read-only copy of a DB is common, and trivially easy with native SQL 2012 tools. Even as an employee of a storage vendor with a pretty robust replication suite, I still recommend that our customers use native 2012 replication rather than our tools.

Probably more for the DBA thread, but correct me if I'm wrong: don't MSSQL HA Availability Groups use Cluster Shared Volumes for storage? Or can they use attached local storage as well?

The presentation I've been given says we are going to be using our SAN for write-intensive storage in a 2-node availability group, and then using another 2-node availability group for the read-intensive operations. Either way we are potentially wasting 50% of our resources since:

A) We have 4 nodes to use, and all 4 nodes have the same storage on them
B) Only two of those nodes (the read-intensive workloads) will be used in an availability group
C) Two nodes will have all of their local storage going to waste
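
For context on my own question above: from what I can tell, AGs don't need CSVs or shared storage at all; each replica keeps its own copy of the databases on whatever storage that node has, which is exactly why the local storage on those nodes wouldn't have to go to waste. A quick way to sanity-check a topology once it's built is to query the HADR DMVs; here's a minimal pyodbc sketch (the server name and trusted-connection bit are placeholders of mine, not anything from our environment):

code:
# Minimal sketch: list availability group replicas and their roles/sync health.
# Connection details are placeholders; assumes an ODBC Driver for SQL Server
# and a login that can read the HADR DMVs.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-billing-01;"          # hypothetical instance/listener name
    "Trusted_Connection=yes;"
)

query = """
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       rs.role_desc,                  -- PRIMARY / SECONDARY
       rs.synchronization_health_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar
     ON ar.group_id = ag.group_id
JOIN sys.dm_hadr_availability_replica_states rs
     ON rs.replica_id = ar.replica_id
"""

for row in conn.cursor().execute(query):
    print(f"{row.ag_name}: {row.replica_server_name} is {row.role_desc} "
          f"({row.synchronization_health_desc})")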

Wicaeed
Feb 8, 2005

Misogynist posted:

In SQL Server 2008, a common topology was something like this:

code:
              ,----------------.          ,----------------.
              | Shared storage |          | Shared storage |
,--------.    `----------------'          `----------------'    ,--------.
| pri-01 |---\        |                            |        /---| mir-01 |
`--------'    \   ,------.                      ,------.   /    `--------'
               |--| MSCS |----SQL Mirroring---->| MSCS |--|
,--------.    /   `------'                      `------'   \    ,--------.
| pri-02 |---/                                              \---| mir-02 |
`--------'                                                      `--------'
In 2012, there are a few different patterns you can use. MS details them here:

http://blogs.msdn.com/b/sqlcat/archive/2013/11/20/sql-server-2012-alwayson-high-availability-and-disaster-recovery-design-patterns.aspx

You never want all four nodes having the same storage. At most, you would have two HA clusters that each shares the same volumes between the cluster members, and use log shipping to your DR site.

DR site, what's that? :v:

This company has had to learn some hard lessons, and apparently is still learning them.

Wicaeed
Feb 8, 2005

Nebulis01 posted:

CSVs are only supported in SQL 2014 and above. Also, Availability Groups are only an option if you're running the Enterprise Edition of MSSQL; Standard only supports a Failover Cluster Instance based on Windows Server Failover Clustering, which requires shared storage.

It seems silly that you're willing to spend the $$ licensing Enterprise but not drop $22k on the Nimble/Equallogic box.

Our parent company literally showers us with license keys for anything Windows we want.

How I wish they would give us their Enterprise VMware licenses :(

Wicaeed
Feb 8, 2005

Maneki Neko posted:

I'm not willing to spend any real time on it, but I'm assuming it's something like this:



It's pretty much like this :)

Wicaeed
Feb 8, 2005

Nukelear v.2 posted:

I'll join the chorus: use SQL for this, much more flexibility.

We run the above model (using EQL, no syncrep) in 2012 still. Originally my plan was to use the AlwaysOn feature set to make the mirror side readable, but it turns out that you can't lash together two Windows HA SQL clusters at different locations into one cluster for AO.

Sadly plain mirroring means the second site isn't readable unless you want to snapshot it. But it's DR so not a big deal.

If he is doing this to have a read/write pair and doesn't need the ultra availability of a 4-node/2-SAN model, and just wants 2 nodes/2 SANs, then AlwaysOn would work out. If he wants less waste he could use one of the more traditional replication technologies (i.e. transactional replication) to push from a 2-node HA write cluster to a farm of read nodes with local storage (or their own shelves).

:hellyeah:

I was finally able to convince our DBA that it would be to his benefit to have all of the synchronous replication tech running in MSSQL as opposed to in our SAN, simply because they can manage it and troubleshoot any problems themselves.

My boss and I sat through an on-site Nimble demo today and he was super impressed. He sure likes his graphs :)

Wicaeed
Feb 8, 2005

Nukelear v.2 posted:

Having fixed that bit, EQL is still a very solid choice. We've run production SQL off PS6110XSes for a couple of years now with no downtime. The tiering between SSD and HDD works well and can be scaled out with more pure SSD boxes, big cheap archive boxes, or more hybrids. The benefits of a single-vendor certified stack are always nice. It's a bigger vendor that isn't likely to be bought up and disappear. Most importantly, you have two vendors who can deliver what you want, so you can price them against each other. Let them know you've got solutions from a couple of vendors, be knowledgeable about their relative strengths so that when they try to dog each other's products you know what is noise and what is real, then get ready for the discounts, so many discounts. I've never known Dell to lose on price. Either way, both are nice kit and you'll probably be happy with either.

Edit: Also sync rep sucks, async forever.

We'll see how it plays out. This is kind of the first time I've had quotes in hand from both vendors, and they are both quoting solutions that will work for us.

Equallogic is offering a lot more usable storage, and it's coming in well under our budget since we already have some Equallogic tech onsite, so we don't need to buy as much equipment. However, the available IOPS of their array is only about 3k for the shelf we're buying.

Nimble is offering something we haven't had before: incredibly fast storage for a decent price. The original plan was to run just our critical environments on Nimble, but with the IOPS we can get from their equipment, we could potentially host some of our less critical DBs on it as well.
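
The back-of-the-envelope math on why a spindle-only shelf tops out in that range is pretty simple. The per-drive numbers, RAID level, and write penalties below are just the usual rules of thumb I use, not anything out of a Dell or Nimble datasheet:

code:
# Rough spindle-count IOPS estimate -- rule-of-thumb numbers, not vendor ratings.
PER_DRIVE_IOPS = {"7.2k_nlsas": 75, "10k_sas": 140, "15k_sas": 180}
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(drives, drive_type, raid, read_pct):
    raw = drives * PER_DRIVE_IOPS[drive_type]
    write_pct = 1.0 - read_pct
    # Each host write turns into multiple back-end IOs depending on RAID level.
    backend_ios_per_host_io = read_pct + write_pct * RAID_WRITE_PENALTY[raid]
    return raw / backend_ios_per_host_io

# e.g. a 24 x 10K SAS shelf in RAID10 with a 70/30 read/write mix:
print(round(effective_iops(24, "10k_sas", "raid10", read_pct=0.7)))   # ~2600
A hybrid like the Nimble gets its much bigger number by absorbing random writes and serving hot reads from flash (as I understand their CASL pitch), so the spindle math above stops being the limiting factor.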

Wicaeed
Feb 8, 2005
Has anyone heard of/used any Skyera products? The Sr. Engineer at our parent company suggested we buy two Skyera Skyeagles instead of the two Nimble CS220Gs we're currently planning to purchase.

:catstare:

Yes I seriously just typed that.

Yes, their product appears to be over 10x what our budget would be.

Wicaeed
Feb 8, 2005

Moey posted:

Pretty interesting, but I am curious how their software behind everything is.

Honestly their website and the lack of real-world reviews make me kind of suspicious that the entire thing is snake oil.

Wicaeed
Feb 8, 2005

Moey posted:

Had a fun little support story with Nimble today. I was doing an update on an array from 1.4.x to 2.x, and said array hadn't had autosupport communication for a good chunk of time. The array was blocked from the update until support did a "fix". They had some bug where the firmware needed to be at a minimum level before it was updated to the latest. Their fix was to ssh into the array and run a touch command on a random rear end file. Without either this file or a recent timestamp, the pre-update check would fail.

Not the fanciest way to do it, but it seemed to work. They really don't trust their customers to handle poo poo on their own, and would rather their support handle it. Not a bad thing for a lot of people, but it does kinda suck when I can read and do a tiered update on my own.

Still loving these things.

Can't wait to start the implementation of what we were approved for. Furiously :f5:ing our PR system in the hope that it goes through soon.

Wicaeed
Feb 8, 2005
What's the go-to open source file server distro? Is it still a toss-up between Openfiler & FreeNAS?

I'm looking specifically to run a fair number of VMware VMs in a test environment off of a 24-disk Supermicro array, all connected through iSCSI.

I've been running a similar setup for about a year and a half with no hiccups. Performance isn't really a requirement, just stability/ease of use & simplicity of setup.

Wicaeed
Feb 8, 2005
:woop:

Finally got approval for our Nimble project (2x CS300 shelves). Can't wait to get them in house and start testing!

Wicaeed
Feb 8, 2005
Woo!

I have twins!!



edit: oh god tables

Wicaeed
Feb 8, 2005
Holy mother of gently caress EMC licensing :psypop:

We bought a product a month ago and got an activation email, had to go to their LAC website to activate our entitlements, got sent an email with a loving certificate saying we can use their software, and then had to call their licensing support rep only to be told it's a 48-hour turnaround to actually claim the license.

:rant:

Wicaeed
Feb 8, 2005
If I recall, someone in this thread said that they were getting PowerVault pricing from Dell for their Equallogic array lineup.

Would you be willing to drop some numbers?

I need something stupid simple for our datacenter for a small VMware deployment and Equallogic seems to fit the bill, for now at least. I need to keep it relatively cheap though, but with 10Gbit connectivity.

Wicaeed fucked around with this message at 04:07 on Jan 21, 2015

Wicaeed
Feb 8, 2005
Whoa, holy crap, just got pricing back for a single PS4210X with the following:

24 900GB 10K SAS Drives (21TB RAW)
Dual 10Gbit Controllers
3 years NBD support

17 Grand

:stare:

Not bad at all
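
Quick sanity check on raw vs. usable (the RAID level and spare count here are my own assumptions, not anything from the Dell quote):

code:
# Raw vs. rough usable capacity for the quoted shelf. RAID6 + 2 hot spares is
# an assumption on my part, not part of the quote.
drives, drive_size_tb = 24, 0.9          # 24 x 900GB 10K SAS

raw_tb = drives * drive_size_tb          # 21.6 TB raw, matches the quote
spares = 2
raid6_parity = 2
usable_tb = (drives - spares - raid6_parity) * drive_size_tb

print(f"raw: {raw_tb:.1f} TB, rough usable before formatting: {usable_tb:.1f} TB")
# -> raw: 21.6 TB, rough usable before formatting: 18.0 TB
Even at roughly 18TB usable, $17k works out to right around a dollar per usable GB before any thin provisioning.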

Wicaeed fucked around with this message at 00:12 on Jan 23, 2015

Wicaeed
Feb 8, 2005

bigmandan posted:

That seems like a pretty decent deal. What's your use case going to be?

Small, 3-host VMware deployment that probably won't run more than 20 VMs. For what we really need it's slightly overkill, but I'm not complaining.

Wicaeed
Feb 8, 2005

Maneki Neko posted:

I asked in the Virtualization thread, but possible this is a better place.

Anyone using any of the hyperconverged stuff (Simplivity, Nutanix, etc)? We're talking to Simplivity, but was just curious what people's real world experiences have been.

Simplivity seems to offer more flexibility than the "lol here's a block of crap and you'll like it" approach that some of the other hyperconverged vendors shoot for.

I'm assuming EMC ScaleIO falls into that hyperconverged storage range. I used it (briefly) and wasn't extremely impressed. It seems that for the same price Nutanix was offering everything ScaleIO had plus dedupe/compression.

Nebulis01 posted:

So we're looking for a new SAN; our requirements are pretty minimal. This is an internal Hyper-V cluster and a few SQL boxes.

Our existing stuff is about 4K IOPS, 250MB/sec, and ~4TB used; our existing infrastructure is 47% read, 53% write. We're looking for something that will handle 15K IOPS, give us 6-8TB usable with 10GigE, and allow us to put a trial 5-10 users onto VDI with room to expand that to 60-70 down the road. Our budget is in the $40k range for this project. I'm looking at multiple vendors for this and so far we've narrowed it to Tegile, Nimble and NetApp.


NetApp wants us on an FAS2552A outfitted with 4x 200GB SSD and 20x 1TB 2.5" 7.2K (not willing to promise an IOPS benchmark for the array other than to say 'fits your needs')
Nimble presented a CS300 array with 4x 160GB SSD and 20x 1TB 2.5" 7.2k (rating 30K IOPS)
Tegile is quoting an HA2100 with 3x 200GB SSD and 13x 2TB 3.5" 7.2K (rating 10K IOPS; moving to an HA2300 with 1TB 2.5" 7.2K gets us into the 30K range)

All of the quotes are in the same $40k+- range with 3 years of NBD support and a cold spare kit for HDD and SSD.

Are these prices too high (they seem a mite high to me)?

I'm also really looking for feedback on the quality of Tegile; the platform seems like a nice front end to what is essentially a commercial version of ZFS with some bells and whistles. But I'm hesitant to go with such an unknown and young company for such a mission critical piece of infrastructure.

Are there other vendors we should consider?

Our company purchased two Nimble CS300 arrays back around September and they fell right around the $40k-per-array mark, with support.

Wicaeed
Feb 8, 2005

kiwid posted:

How come Nimble isn't in the OP? What are Goons' opinions on it?

Have two Nimble arrays that have been rock solid since we bought them. They require very little maintenance as well, which is a huge + in my book.

Quite happy with them, however dammit I wish they did NFS as well :(

Wicaeed
Feb 8, 2005

Rhymenoserous posted:

Nimble specifically tells you to just roll thick provisioned clients; it will take care of the dedupe.

On my VMware & Nimble setup, our used vs. free doesn't match between what Nimble reports as used and what VMware sees. When we format a VM as thick, VMware reports that disk space as immediately used on the datastore, but if I go into the array management, I can see that it really is thin provisioned on the storage backend.

Are there any special tools required for VMware to know that it really is thin provisioned on the backend and to mark that capacity as free accordingly?

Wicaeed
Feb 8, 2005

Rhymenoserous posted:

They expressly told me not to do thin provisioning to avoid confusing scenarios like this. Also bear in mind what you are seeing on the array is post dedupe/compression/magic space maker.

That doesn't really make sense, them telling you not to thin provision, seeing as you're throwing away disk space (as far as VMware is concerned) on a datastore that uses nothing but thick provisioned VMs.

Unless I'm dumb :confused:

Wicaeed
Feb 8, 2005

NippleFloss posted:

In his scenario he is doing thick provisioning and is confused. It is confusing in either scenario because what ESX reports as used and what Nimble reports as used will never match. But thick provides other benefits on Nimble.


You're not throwing away any space, you're still thin provisioned on the storage layer where the blocks actually live. Thin or thick or eager zero thick all consume the same amount of space on the array, the only difference is how they appear to VMFS.

This is also why NFS is great for VMware, no issues translating thin provisioning from one file system layer to the next.

I know I'm not throwing away any space, and Nimble knows that too; I'm just curious why VMware doesn't. I mean, I know it's block-level storage, so whatever the storage is doing underneath doesn't really matter, but regardless it'd be nice to know! More importantly, VMware alerting doesn't tell you this either.

I'm surprised that Nimble or VMware hasn't released any tools to reconcile the difference between looking at storage from the VMware VMFS level vs. the Nimble OS level, or any other vendor's for that matter. Even with the Nimble vCenter integration it doesn't tell you this.

I've used a FreeNAS appliance I built from scratch before to store some VMs, so I know about the NFS/iSCSI difference from the VMware storage level, and having that information there is REALLY nice.
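
Since neither side surfaces it, the closest I've gotten is pulling the datastore summaries myself; provisioned vs. actually-used is all in there. A minimal pyVmomi sketch (the vCenter hostname and credentials are placeholders, and this only shows the VMFS-side view, so you'd still eyeball the Nimble UI for the post-compression numbers):

code:
# Minimal sketch: compare VMFS-level used vs. provisioned per datastore.
# vCenter hostname/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; verify certs properly in prod
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="changeme", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)

GB = 1024 ** 3
for ds in view.view:
    s = ds.summary
    used = s.capacity - s.freeSpace
    provisioned = used + (s.uncommitted or 0)   # thin VMDKs show up as 'uncommitted'
    print(f"{s.name}: capacity {s.capacity/GB:.0f} GB, "
          f"used {used/GB:.0f} GB, provisioned {provisioned/GB:.0f} GB")

view.Destroy()
Disconnect(si)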

Wicaeed fucked around with this message at 00:00 on Sep 2, 2015

Wicaeed
Feb 8, 2005
What's the opinion on EMC Isilon? The company I started working for has a 400-something TB setup that we just dropped $80k on for software renewals. The only problem is the guy who set the drat thing up has been gone for two years, and nobody has a clue how the thing runs.

Wicaeed
Feb 8, 2005

NippleFloss posted:

Looks like it was a rough quarter for Nimble. Transitioning out of the startup phase and getting to stable growth and profitability is tough, especially given how competitive the storage market is. They need to add something compelling to stay relevant long term, I think.

http://blogs.barrons.com/techtraderdaily/2015/11/19/nimble-plunges-31-as-q4-rev-view-misses-by-a-mile-ceo-cites-enterprise-disappointment/

Good for us, I guess, since we badly need to update our CS240s that are getting hammered in our datacenter; maybe we can get a good deal!

Now if only they would loving release the 2.3 firmware so I can bring VVOLs into our prod datacenter and manage everything through the web client :argh:

Wicaeed
Feb 8, 2005

Super pumped about this on one hand, but really hoping our backup project budget will fit in with the pricing for Veeam :(

NippleFloss posted:

Just in time for us to put our partnership with Nimble on hold. Timing!

Assuming this functions identically to the NetApp and HP functionality, this is actually a killer feature because you can do file-level restores of data from within a VM directly from Nimble snapshots (i.e. scheduled directly from the controller and not through Veeam) using the free version of Veeam. It's great for customers that want to use snapshots as backups, because otherwise doing file-level restores is a bit of a chore.

Curious to know why you guys are leaving Nimble, is it to do with their recent company performance or a technical reason?

Wicaeed
Feb 8, 2005
Funny, our company too is looking to switch to something less expensive than our current 300TB Isilon cluster after receiving a renewal quote that our budget couldn't match.

Need something in between that and just using a rather large VM or two to store all of our NFS data on Nimble.
