|
Docjowles posted:That's verbatim why my old boss forbade me from buying a SAN for our virtual environments. So instead we used a lovely hacked up DRBD solution that failed all the loving time, but hey, at least we didn't have a "single point of failure" Jesus christ this sounds like the line of thought from our company DBA Manager when it came time to rebuild our old Billing environment. I had to sit down and explain to him (with drawings and everything) how a loving RAID array works (with hotspares!) and how redundant controllers, network links, switches, etc work to make sure that the disk array was as redundant as possible. He still wanted us to buy a second SAN of the same make/type and use it as a hotspare, because reasons. And then the turdnugget goes and builds a SQL cluster, but then places two standalone MSSQL servers in front of it so clients can connect to those instead of the cluster.
|
# ¿ Apr 21, 2014 10:22 |
|
|
Is there anyone here that has production experience with EMC's ScaleIO product? I'm specifically looking for information regarding the mixing of different hard drive types within the same physical chassis, as well as how ScaleIO behaves when mixing hardware (same server vendor but different HW generations).
|
# ¿ Apr 24, 2014 01:25 |
|
NippleFloss posted:EMC has a few different products that compete in the scale out array space: Isilon, ExtremeIO and ScaleIO. Of those, only Isilon has enough market presence to make any real determinations about what it's good at, and that seems to be high throughput sequential IO streams. So massive scale archival data, object storage, streaming video, etc. It has proven to be less than stellar at running things like VMWare or OLTP because (like many distributed systems) it is very metadata intensive, and the time required to a) query metadata to locate all of the pieces of an IO request and b) assemble those pieces from the various nodes they are located on incurs enough latency to make it inefficient for random IO where you can't do readahead to mask that latency. That sort of problem is solvable through things like a coherent global cache (like VMAX), but adding the hardware to work around that problem makes things significantly more expensive. Sounds like you know a bit about ScaleIO. Do things like FusionIO cards or the addition of the EMC XtremCache product make any sort of difference with regard to random IO performance?
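The latency argument in that quote can be put into toy numbers (all invented, purely to illustrate the mechanism, not Isilon measurements): readahead lets a sequential stream amortize one metadata lookup over a long run of IOs, while random IO pays the lookup on every request.

```python
# Toy latency model for metadata-heavy scale-out storage.
# Both latency figures are invented for illustration.
METADATA_MS = 0.5   # cost of locating the pieces of a request
ASSEMBLE_MS = 1.0   # cost of gathering those pieces from their nodes

def avg_latency_ms(ios: int, ios_per_lookup: int) -> float:
    """Average per-IO latency when one metadata lookup is amortized over
    `ios_per_lookup` IOs (readahead batches them; random IO cannot)."""
    lookups = ios / ios_per_lookup
    return (lookups * METADATA_MS + ios * ASSEMBLE_MS) / ios

print(avg_latency_ms(1000, 64))  # sequential with readahead: ~1.01 ms per IO
print(avg_latency_ms(1000, 1))   # random IO: 1.5 ms per IO
```

Same toy hardware, 50% worse latency on random IO; a coherent global cache attacks the metadata term, which is exactly the trade-off the quote describes.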
|
# ¿ May 30, 2014 01:17 |
|
bull3964 posted:On the subject of VSAN, I wonder if they are going to let people start using SSDs for actual storage rather than just cache. The new intel PCIe SSDs were announced the other day and those seem perfect. Curiously, EMC ScaleIO is going to start using some form of local memory as the cache for its storage in its latest release, allowing you to use local SSDs as a faster tier of storage.
|
# ¿ Jun 5, 2014 09:58 |
|
Moey posted:Was about to say the same thing. Could probably get a Nimble CS220 for around the same price. Dead simple to work with and good performance. How well does their replication work? Do they support any form of active failover? I just got tentative approval from my boss to quote out a secondary SAN for our currently planned MSSQL Billing environment, with a budget of $80k. Right now we're thinking of purchasing a second copy of our Equallogic SAN to act as a backup in case of a primary array failure, but I'm fairly certain that Equallogic can't seamlessly fail over in any way. It also doesn't support a lot of advanced features such as compression or dedup, and has absolutely no flash to speak of.
|
# ¿ Jun 20, 2014 21:57 |
|
So I got to sit down for an hour with Nimble and go through a webex presentation about their product. If half of what they are claiming is true, this should be a pretty simple sell to Management, as long as it doesn't break the bank ($80k).
|
# ¿ Jun 27, 2014 22:32 |
|
bigmandan posted:I sat through that same presentation a few days ago. It is pretty drat impressive. Ballpark figure for the cs220 was about 50-60k (Canadian monopoly money) drat, that's quite a bit more expensive than Moey assumed near the top of this page (comparing it to an EMC VNX5200 + DAE for $22k), putting it (probably) right back into the territory of poo poo-that-I-want-but-couldn't-ever-get-budgeting-for. Was that for a single unit? Moey posted:What are you looking to get? Was that before or after blowjobs? I'm going to talk to our VAR and see if he can throw a quick and dirty quote my way based on what we want (probably two CS220 shelves, one as primary and one as a backup). At this point I'm not holding my breath. Wicaeed fucked around with this message at 22:59 on Jun 27, 2014 |
# ¿ Jun 27, 2014 22:52 |
|
Richard Noggin posted:8 2TB NL SAS drives for 30 VMs? Have you done any IOPS analysis? I've seen bigger setups with more drives and fewer VMs fail. EMC has an offering called ScaleIO that uses the local storage of your hosts and combines it into a distributed storage solution. From what I've seen it looks fairly nice, and it falls right around the $10k mark for their starter kit, but that doesn't include any of the required hardware/storage/etc. The kit lets you scale up to 12.5TB if memory serves. Think of it as kind of like vSAN, but not limited to VMware hosts: you can use storage on non-VMware hosts and serve it up to VMware as long as the storage backend is on the same network. And yeah, you're gonna be sad with just 7.2k RPM NL-SAS drives. How in the world do you swing a Nexus 5k but can't get more than 10k for storage? Wicaeed fucked around with this message at 00:31 on Jun 28, 2014 |
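If anyone wants to run that IOPS sanity check themselves, a back-of-envelope version looks like this. The per-drive IOPS figures are common rules of thumb, not vendor specs, and the 70/30 read/write mix is an assumption:

```python
# Back-of-envelope front-end IOPS for a RAID group.
# Per-drive figures are rough rules of thumb, not measured values.
DRIVE_IOPS = {"7.2k_nlsas": 75, "10k_sas": 140, "15k_sas": 180}

# Classic RAID write penalties: each front-end write costs this many backend IOs.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(drives: int, drive_type: str, raid: str, read_pct: float) -> float:
    """Sustainable front-end IOPS for a given read/write mix."""
    raw = drives * DRIVE_IOPS[drive_type]
    write_pct = 1.0 - read_pct
    return raw / (read_pct + write_pct * WRITE_PENALTY[raid])

# The setup being questioned: 8x NL-SAS in RAID6, assuming a 70/30 mix.
print(round(effective_iops(8, "7.2k_nlsas", "raid6", 0.7)))  # 240
```

Roughly 240 front-end IOPS split across 30 VMs is about 8 IOPS per VM, which is why the reaction above is warranted.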
# ¿ Jun 28, 2014 00:28 |
|
bigmandan posted:It was for a single unit with 10 GbE and 3 year support, suggested retail no discounts. Well I just got a quote from our VAR and while it was cheaper than what you said in your post, it wasn't by much.
|
# ¿ Jun 30, 2014 21:05 |
|
Dilbert As gently caress posted:This will be my next blog post. Can you compare it to vSAN, EMC ScaleIO, and other scale-out storage systems as well?
|
# ¿ Jul 2, 2014 01:20 |
|
Dilbert As gently caress posted:Uhh does ExtremeIO count? That's what I am working with right now, and actively pursuing to counter because of how immature the product is... But sure. I don't think ExtremeIO and ScaleIO really do the same thing? ExtremeIO seems to be a fairly straightforward flash-based array while ScaleIO is more a software-defined, scale-out storage system. For those of you with Nimble devices: does Nimble have something similar to Equallogic SyncRep? SyncRep lets you synchronize writes to two volumes across two (or more) separate Equallogic arrays. It allows for a fairly high degree of failover and I'm currently looking at it for a MSSQL cluster. Was curious if Nimble does anything like this.
|
# ¿ Jul 9, 2014 22:49 |
|
Nitr0 posted:You can synchronize volumes to another array, yes. Is this more of a take-a-snapshot-and-then-replicate setup, or something where any writes made to one array are also replicated to the second array and verified before any confirmation is sent back to the machine that did the writing?
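The difference being asked about, sketched as a toy model: generic async-vs-sync replication semantics only, not how Nimble or Equallogic actually implement anything.

```python
# Toy model of async vs sync replication ack semantics (illustrative only).
class Array:
    """Stands in for one storage array's committed blocks."""
    def __init__(self) -> None:
        self.blocks: dict[int, str] = {}

    def commit(self, lba: int, data: str) -> bool:
        self.blocks[lba] = data
        return True

def write_async(primary: Array, queue: list, lba: int, data: str) -> str:
    primary.commit(lba, data)
    queue.append((lba, data))  # shipped to the partner later, on a schedule
    return "ack"               # host is acked before the partner has the data

def write_sync(primary: Array, secondary: Array, lba: int, data: str) -> str:
    primary.commit(lba, data)
    if not secondary.commit(lba, data):   # partner must confirm first
        raise IOError("partner did not confirm; write not acknowledged")
    return "ack"               # ack implies both copies are committed

a, b, pending = Array(), Array(), []
write_async(a, pending, 0, "v1")
print(0 in b.blocks)  # False: the secondary only catches up when pending drains
write_sync(a, b, 1, "v2")
print(1 in b.blocks)  # True: the ack guaranteed the second copy exists
```

The snapshot-and-replicate flavor is the first function; SyncRep-style behavior is the second, and the price of that guarantee is that every write waits on the inter-array round trip.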
|
# ¿ Jul 10, 2014 19:45 |
|
Does anyone use LSI MegaRAID Storage Manager any more? We have probably 500 or so endpoints that currently run this software. I was wondering if LSI has a product that can be used to administer (and deploy configs to) all of these endpoints en masse. Googling hasn't really been of any help thus far.
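As far as I can tell MSM has no mass-administration mode, but if the endpoints have StorCLI installed you can fan the queries out yourself. A sketch only: the StorCLI install path, the host list, and working key-based ssh are all assumptions about your environment.

```python
# Sketch: run a StorCLI controller query against many endpoints over ssh.
# Assumes key-based ssh and the common Linux StorCLI install path below;
# adjust both for your environment.
import subprocess

STORCLI = "/opt/MegaRAID/storcli/storcli64"  # assumed install path

def build_cmd(host: str, storcli_args: list) -> list:
    """ssh command line for one endpoint; BatchMode avoids password prompts."""
    return ["ssh", "-o", "BatchMode=yes", host, STORCLI] + storcli_args

def poll_all(hosts, storcli_args=("/c0", "show")):
    """Collect (returncode, stdout) per host for a controller summary."""
    results = {}
    for host in hosts:
        proc = subprocess.run(build_cmd(host, list(storcli_args)),
                              capture_output=True, text=True, timeout=30)
        results[host] = (proc.returncode, proc.stdout)
    return results
```

For 500 endpoints you'd want to parallelize the loop and parse StorCLI's output into something alertable, but the shape is the same.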
|
# ¿ Jul 11, 2014 21:34 |
|
drat, just got the quote from Nimble today, and while it's not 22k for a CS220G, it's almost exactly the same price as an Equallogic PS6210X plus a little bit more for support. If we weren't dead set on using SyncRep for our new MSSQL cluster I think I could make a good business case for a new SAN vendor. Unfortunately everything I've read says that Nimble doesn't have a comparable technology to SyncRep, which might be a deal breaker for us. Goodbye, pipe dream.
|
# ¿ Jul 15, 2014 01:02 |
|
NippleFloss posted:Why not just use SQL 2012 availability groups and have much more transparent failover with any storage you choose? I really don't know. For some reason we decided to separate our billing DB into read servers and write servers. It seems incredibly backwards, but I'm not a DBA.
|
# ¿ Jul 15, 2014 06:22 |
|
NippleFloss posted:You can do that with availability groups. Secondary copies are read only by default. Separating reporting or backup on to a read only copy of a DB is common, and trivially easy with native SQL 2012 tools. Even as an employee of a storage vendor with a pretty robust replication suite I still recommend that our customers use native 2012 replication rather than our tools. Probably more for the DBA thread, but correct me if I'm wrong, don't MSSQL HA Availability Groups use Cluster Shared Volumes for storage? Or can they use attached local storage as well? The presentation I've been given says we are going to be using our SAN for write-intensive storage in a 2-node availability group, and then using another 2-node availability group for the read-intensive operations. Either way we are potentially wasting 50% of our resources, since: A) we have 4 nodes to use, and all 4 nodes have the same storage on them; B) only two of those nodes (the read-intensive workloads) will be used in an availability group; C) two nodes will have all of their local storage going to waste.
|
# ¿ Jul 15, 2014 07:01 |
|
Misogynist posted:In SQL Server 2008, a common topology was something like this: DR site, what's that? This company has had to learn some hard lessons, and apparently is still learning them.
|
# ¿ Jul 15, 2014 07:37 |
|
Nebulis01 posted:CSVs are only supported in SQL2014 and above, also Availability groups are only an option if you're running Enterprise Edition of MSSQL. Standard only supports using a Failover Clustered Instance based on Windows Server Failover Clustering which requires shared storage. Our parent company literally showers us with license keys for anything Windows we want. How I wish they would give us their Enterprise VMware licenses
|
# ¿ Jul 15, 2014 21:36 |
|
Maneki Neko posted:I'm not willing to spend any real time on it, but I'm assuming it's something like this: It's pretty much like this
|
# ¿ Jul 16, 2014 00:18 |
|
Nukelear v.2 posted:Will join the chorus, use SQL for this, much more flexibility. I was finally able to convince our DBA that it would be to his benefit to have all of the synchronous replication tech running in MSSQL as opposed to in our SAN, simply because they can manage it and troubleshoot any problems. Boss and I sat through an on-site Nimble demo today and he was super impressed. He sure likes his graphs.
|
# ¿ Jul 17, 2014 01:59 |
|
Nukelear v.2 posted:Having fixed that bit, Eql is still a very solid choice. We've run production SQL off PS6110XS's for a couple years now with no downtime. The tiering between SSD and HD works well and can be scaled out with more pure ssd boxes or big cheap archive boxes or more hybrids. The benefits of a single vendor certified stack are always nice. Bigger vendor who isn't likely to be bought up and disappear. Most importantly you have two vendors who can deliver what you want and you can price them against each other. Let them know you've got solutions from a couple vendors, be knowledgeable about their relative strengths so that when they try to dog each others products you know what is noise and what is real, then get ready for the discounts, so many discounts. I've never known Dell to lose on price. Either way, both are nice kit and you'll probably be happy with either. We'll see how it plays out. This is kind of the first time I've had quotes in hand from both vendors, and they are both quoting solutions that will work for us. Equallogic is offering a lot more usable storage, and coming in well under our budget as we already have some Equallogic tech onsite, thus we don't need to buy as much equipment. However, the available IOPS of their array is only about 3k for the shelf we are buying. Nimble is offering something we haven't had before: incredibly fast storage for a decent price. The original plan was to run just our critical environments on Nimble, but with the IOPS we can get from their equipment, we can potentially host some of our less critical DBs off of that SAN as well.
|
# ¿ Jul 17, 2014 20:32 |
|
Has anyone heard of/used any Skyera products? The Sr Engineer at our parent company suggested we buy two Skyera Skyeagles instead of following our current plan to purchase two Nimble CS220Gs. Yes, I seriously just typed that. Yes, their product would seem to be over 10x our budget.
|
# ¿ Jul 24, 2014 02:18 |
|
Moey posted:Pretty interesting, but I am curious how their software behind everything is. Honestly their website and lack of real world reviews makes me kind of suspicious that the entire thing is snake oil.
|
# ¿ Jul 24, 2014 20:49 |
|
Moey posted:Had a fun little support story with Nimble today. Was doing an update on an array from 1.4.x to 2.x and said array didn't have autosupport communication for a good chunk of time. This array was blocked from said update until support did a "fix". They had some bug where the firmware needed to be at a minimum level before it was updated to the latest. Their fix was to ssh into the array and run a touch command on a random rear end file. Without either this file or a recent timestamp, the pre-update check would fail. Can't wait to start the implementation of what we were approved for. Furiously refreshing our PR system in the hope that it goes through soon.
|
# ¿ Aug 20, 2014 06:05 |
|
What's the go-to open source file server distro? Is it still a toss-up between Openfiler and FreeNAS? I'm looking specifically to run a fair number of VMware VMs in a test environment off of a 24-disk Supermicro disk array, all connected through iSCSI. I've been running a similar setup for about a year and a half with no hiccups. Performance isn't really a requirement, just stability, ease of use, and simplicity of setup.
|
# ¿ Aug 26, 2014 23:55 |
|
Finally got approval for our Nimble project (2x CS300 shelves). Can't wait to get them in house and start testing!
|
# ¿ Sep 11, 2014 21:58 |
|
Woo! I have twins!! edit: oh god tables
|
# ¿ Oct 3, 2014 03:12 |
|
Holy mother of gently caress, EMC licensing. We bought a product a month ago and got an activation email, had to go to their LAC website to activate our entitlements, got sent an email with a loving certificate saying we can use their software, and then had to call their licensing support rep only to be told it's a 48-hour turnaround to actually claim the license.
|
# ¿ Nov 13, 2014 22:09 |
|
If I recall, someone in this thread said that they were getting PowerVault pricing from Dell for their Equallogic array lineup. Would you be willing to drop some numbers? I need something stupid simple for our datacenter for a small VMware deployment and Equallogic seems to fit the bill, for now at least. I need to keep it relatively cheap though, but with 10Gbit connectivity. Wicaeed fucked around with this message at 04:07 on Jan 21, 2015 |
# ¿ Jan 21, 2015 03:31 |
|
Woah holy crap, just got pricing back for a single PS4210X with the following: 24x 900GB 10K SAS drives (21TB raw), dual 10Gbit controllers, 3 years NBD support. 17 grand. Not bad at all. Wicaeed fucked around with this message at 00:12 on Jan 23, 2015 |
# ¿ Jan 23, 2015 00:08 |
|
bigmandan posted:That seems like a pretty decent deal. What's your use case going to be? Small, 3 host VMware deployment that probably won't run more than 20 VMs. For what we really need it's slightly overkill, but I'm not complaining.
|
# ¿ Jan 23, 2015 01:36 |
|
Maneki Neko posted:I asked in the Virtualization thread, but possibly this is a better place. I'm assuming EMC ScaleIO falls into that hyperconverged storage range. I used it (briefly) and wasn't extremely impressed. It seems that for the same price Nutanix was offering everything ScaleIO had plus dedup/compression. Nebulis01 posted:So we're looking for a new SAN; our requirements are pretty minimal. This is an internal Hyper-V cluster and a few SQL boxes. Our company purchased two Nimble CS300 arrays back around September and they fell right around the 40k-per-array mark, with support.
|
# ¿ Jan 23, 2015 21:36 |
|
kiwid posted:How come Nimble isn't in the OP? What are Goon's opinions on it? Have two Nimble arrays that have been rock solid since we bought them. They require very little maintenance as well, which is a huge + in my book. Quite happy with them, however dammit I wish they did NFS as well
|
# ¿ Jul 12, 2015 22:33 |
|
Rhymenoserous posted:Nimble specifically tells you to just roll thick provisioned client, it will take care of dedupe. On my VMware & Nimble setup, our used vs free does not match between what Nimble reports as used and what VMware sees. When we format a VM as thick, VMware reports that disk space as immediately used on the datastore, but if I go into management I can see that it really is thin provisioned on the storage backend. Are any special tools required for VMware to know that it really is thin provisioned on the backend and mark that capacity as free accordingly?
|
# ¿ Sep 1, 2015 19:11 |
|
Rhymenoserous posted:They expressly told me not to do thin provisioning to avoid confusing scenarios like this. Also bear in mind what you are seeing on the array is post dedupe/compression/magic space maker. That doesn't really make sense, them telling you not to thin provision, seeing as you're throwing away disk space (as far as VMware is concerned) on a datastore that uses nothing but thick provisioned VMs. Unless I'm dumb.
|
# ¿ Sep 1, 2015 22:43 |
|
NippleFloss posted:In his scenario he is doing thick provisioning and is confused. It is confusing in either scenario because what ESX reports as used and what Nimble reports as used will never match. But thick provides other benefits on Nimble. I know I'm not throwing away any space, Nimble knows that too, just curious why VMware doesn't. I mean I know it's block level storage, so whatever the storage is doing underneath doesn't really matter, but regardless it'd be nice to know! More importantly, VMware alerting doesn't tell you this either. I'm surprised that Nimble or VMware hasn't released any tools to reconcile the difference between looking at storage from a VMware VMFS level vs a Nimble OS level, or any other vendor for that matter. Even with the vCenter Nimble integration it doesn't tell you this. I've used a FreeNAS appliance I built from scratch before to store some VMs so I know about the NFS/iSCSI difference from the VMware storage level, and having that information there is REALLY nice. Wicaeed fucked around with this message at 00:00 on Sep 2, 2015 |
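The mismatch itself is just arithmetic: VMFS charges a thick VMDK its full provisioned size, while the array counts what it physically stores after thin provisioning and data reduction. A toy reconciliation, where the VM sizes and the reduction ratio are made up:

```python
# Why datastore "used" and array "used" never agree (made-up numbers).
vms = [
    # (provisioned_gb, guest_written_gb)
    (100, 30),
    (200, 120),
    (500, 80),
]
reduction_ratio = 1.5  # assumed compression/dedupe factor, not a real figure

vmfs_used_gb = sum(prov for prov, _ in vms)    # thick VMDKs reserve it all
written_gb = sum(w for _, w in vms)            # blocks the guests ever touched
array_used_gb = written_gb / reduction_ratio   # post-reduction footprint

print(vmfs_used_gb)           # 800: what the datastore reports as used
print(round(array_used_gb))   # 153: what the array reports as used
```

Nothing is lost; the two layers are simply counting different things, which is why no integration on either side rolls them up into a single number.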
# ¿ Sep 1, 2015 23:56 |
|
What's the opinion on EMC Isilon? The company I started working for has a 400-something TB setup that we just dropped 80k on for software renewals. Only problem is the guy who set the drat thing up has been gone two years, and nobody has a clue how this thing runs.
|
# ¿ Sep 25, 2015 10:07 |
|
NippleFloss posted:Looks like it was a rough quarter for Nimble. Transitioning out of the startup phase and getting to stable growth and profitability is tough, especially given how competitive the storage market is. They need to add something compelling to stay relevant long term, I think. Good for us I guess, since we badly need to update our CS240's that are getting hammered in our datacenter, maybe we can get a good deal! Now if only they would loving release the 2.3 firmware so I can bring VVOLs into our prod datacenter and manage everything through the web client.
|
# ¿ Nov 20, 2015 06:29 |
|
Thanks Ants posted:For people with Nimble appliances and Veeam: Super pumped about this on one hand, but really hoping our backup project budget will fit in with the pricing for Veeam NippleFloss posted:Just in time for us to put our partnership with Nimble on hold. Timing! Curious to know why you guys are leaving Nimble: is it to do with their recent company performance or a technical reason?
|
# ¿ May 3, 2016 05:26 |
|
|
Funny, our company too is looking to switch to something less expensive than our current 300TB Isilon cluster after receiving a renewal quote that our budget couldn't match. Need something in between that and just using a rather large VM or two to store all of our NFS data on Nimble.
|
# ¿ Jun 9, 2016 02:06 |