Richard Noggin
Jun 6, 2005
Redneck By Default
Very nice, thanks!

We're going to be moving our server lineup to ESX 3i, and we're trying to determine whether we can get by with DAS or if we need a SAN. We're a small company, but we have a couple of DB-centric apps running. Here's the setup:

1 SBS 2003 box, serving ~5 users
1 web-based management system with a SQL backend
1 Ticketing system with a SQL backend
1 Server 2k3 box as a backup DC

We'd create a new VM to host SQL for both the management and ticketing systems, so the load on those VMs would drop, with the new one picking up the slack.

The new server would be a DL360 G5 with dual quad-core Xeons and 16GB RAM. Our current switch is a POS Linksys 16-port managed gigabit job - it's fine for what we're doing now, but I'm not sure about iSCSI traffic. Our storage and IO needs aren't going to change a whole lot in the next few years.

The DAS we're looking at is an MSA60 with ~900GB usable space (12x 146GB SAS drives in RAID 10). For a SAN, we'd be looking at the MSA 2000 series running iSCSI, probably with a similar drive setup.

Any thoughts about what would be the best bang for the buck? The SAN alone is $15k+, while DAS + the server comes in at $11k.

edit: we're an HP shop.
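
To sanity-check the numbers, here's a minimal sketch (plain Python, using only the drive counts and prices quoted in this post; filesystem overhead and hot spares ignored) of where the ~900GB usable figure and the cost gap come from:

code:
# Usable-capacity and cost arithmetic for the two options above.
# Drive counts, sizes, and prices are the ones quoted in the post;
# filesystem overhead and hot spares are ignored for simplicity.

def raid10_usable_gb(drives, size_gb):
    """RAID 10 mirrors every drive, so usable space is half the raw total."""
    return drives * size_gb / 2

msa60_gb = raid10_usable_gb(12, 146)   # 12x 146GB SAS in RAID 10
print(f"MSA60 RAID 10 usable: ~{msa60_gb:.0f} GB")  # ~876 GB, i.e. the ~900GB figure

# Cost per usable GB, assuming the SAN gets the same drive layout
print(f"DAS + server: ${11000 / msa60_gb:.2f} per usable GB")
print(f"iSCSI SAN:    ${15000 / msa60_gb:.2f} per usable GB (array alone)")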

Richard Noggin fucked around with this message at 12:51 on Aug 29, 2008

Richard Noggin
Jun 6, 2005
Redneck By Default
I suspect the best use case for vSAN is extending the value of previous capex by repurposing hardware that already meets the requirements, rather than buying new.

Richard Noggin
Jun 6, 2005
Redneck By Default

sudo rm -rf posted:

I've got a budget of ~10k. Need to get an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts, with about 30 VMs. I can do 10GE, as it would hook into a couple of N5Ks.

Dilbert steered me away from hacking together a solution with some discount UCS 240s a while back and pointed me in the direction of Dell's MD series. I'm currently looking at the MD3800i with 8x 2TB 7.2k SAS drives; does that sound alright?

I was looking at the configuration options and wasn't really sure what this referred to:



I can't tell if that's needed or not.

8x 2TB NL-SAS drives for 30 VMs? Have you done any IOPS analysis? I've seen bigger setups fail with more drives and fewer VMs.
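
The kind of IOPS analysis I mean is sketched below (Python; the per-drive numbers are common rules of thumb rather than vendor specs, and the 25 IOPS per VM is a made-up placeholder - measure the real workload before trusting any of it):

code:
# Back-of-the-envelope IOPS check for 8x 7.2k NL-SAS behind 30 VMs.
# Per-drive IOPS and per-VM demand are illustrative assumptions only.

DRIVE_IOPS = {"7.2k NL-SAS": 75, "10k SAS": 125, "15k SAS": 175}
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_iops(drives, drive_type, raid_level, write_fraction):
    """Front-end IOPS after applying the RAID write penalty to the write share."""
    raw = drives * DRIVE_IOPS[drive_type]
    penalty = RAID_WRITE_PENALTY[raid_level]
    return raw / ((1 - write_fraction) + write_fraction * penalty)

array = effective_iops(8, "7.2k NL-SAS", "RAID5", write_fraction=0.3)
demand = 30 * 25   # 30 VMs at ~25 IOPS each (assumed, not measured)
print(f"Array sustains ~{array:.0f} IOPS; estimated demand ~{demand} IOPS")

With those (assumed) numbers the array comes up well short, which is why drive count and speed matter more than raw capacity here.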

Richard Noggin
Jun 6, 2005
Redneck By Default

sudo rm -rf posted:

No. This is all new to me (a theme for most of my posts in these forums, haha). How is that usually done?

But the VMs aren't all on at the same time - the majority of them are simple Windows 7 boxes used to RDP into our teaching environment. When we aren't in classes, there's not even a need to keep them spun up.


I work for Cisco, but we're a small part of it.

Then why get a SAN?

Richard Noggin
Jun 6, 2005
Redneck By Default

sudo rm -rf posted:

Because our Domain Controllers, DHCP Server, vCenter Server, AAA Server and workstations are also VMs, and at the moment everything is hosted on a single ESXi host. I can get more hosts, their cost isn't an issue - but additional hosts do not offer me much protection without being able to use HA and vMotion. That's a reasonable need, yeah?

See, that changes the game. You said earlier that you needed the storage for 30 infrequently used VMs that were basically jumpboxes. Without knowing the rest of your environment, I'd be inclined to tell you to put the workstation VMs on local storage and your servers on the SAN, but at least get 10K drives. Speaking to general cluster design, our standard practice is two-host clusters, adding more RAM or a second processor if need be before adding a third host.

Richard Noggin
Jun 6, 2005
Redneck By Default

Moey posted:

I still prefer a minimum of 3 hosts. That way if I put one into maintenance mode, I can still have another host randomly fail and poo poo will still restart. Not too likely, but servers are pretty cheap nowadays.

The amount of time that a host spends in maintenance mode is very, very small. Wouldn't you rather take that other $5k that you would have spent on a server and put it into better storage/networking?

Just to give everyone an idea, here's our "standard" two host cluster:

- 2x Dell R610, 64GB RAM, 1 CPU, 2x quad-port NICs, ESXi on redundant SD cards
- 2x Cisco 3750-X 24 port IP Base switches
- EMC VNXe3150, drive config varies, but generally at least 6x 600GB 10k SAS on dual SPs
- vSphere Essentials Plus

Richard Noggin
Jun 6, 2005
Redneck By Default

KS posted:

I tend to agree that running multiple small clusters is non-optimal. You have to reserve 1 host's worth -- or 1/n of the workload where n = number of hosts -- of capacity in the cluster in case of failure. The bigger your clusters get, the more efficient you are. It is not efficient to run a bunch of 2-node clusters at 50% util, compared to one big cluster at (n-1)/n percent util.

You also lose efficiency from all those unnecessary array controllers and switches. This is not how anyone sane scales out.

We don't run multiple small clusters. This would be per-client. Our clients, just like a lot of posters here, don't have unlimited resources, so it's a bang for the buck type of thing. The choice between better hardware and a server that sits idle all day long is pretty much a no-brainer.

Richard Noggin
Jun 6, 2005
Redneck By Default

NippleFloss posted:

If you don't have the equivalent, resource wise, of one server sitting idle then your cluster isn't going to survive a host failure gracefully. The problem is that in your cluster that means 50% of resources need to be reserved, whereas for larger clusters it's a smaller percentage for n+1 protection.

Your logic seems exactly backwards to me.

In large environments, yes, the design doesn't make sense, but the majority of our clients don't fall into that segment. Given a fixed budget for a high-availability setup, we've simply chosen to go quality over quantity. The workload can happily run on one host, so we have redundancy covered. For workloads that can't, yes, three hosts make sense.
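
Spelled out, the failover-capacity math both sides are pointing at looks like this (a quick Python illustration of the 1/n reservation rule; nothing vendor-specific):

code:
# For an n-host cluster that must survive one host failure, 1/n of total
# capacity has to stay free, so the maximum "safe" utilization is (n-1)/n.

for n in (2, 3, 4, 8, 16):
    print(f"{n:2d} hosts: reserve {1/n:.0%} for failover, run at up to {(n-1)/n:.0%}")

# 2 hosts: reserve 50%; 16 hosts: reserve ~6%.  Bigger clusters waste less,
# which is the efficiency argument above; the counter-argument is that a
# fixed-budget workload that fits comfortably on one beefy host already has
# its redundancy covered by the second.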

Richard Noggin
Jun 6, 2005
Redneck By Default

Cavepimp posted:

My VNX 5200 showed up yesterday and I already wish I had a second one to replace our VNXe 3300. The VNXe was easier to set up initially, but the stripped-down interface started to bother me over time, especially the missing performance-related info/reporting.

Did you purchase the monitoring suite for the VNX? Without it you don't get that info. I also really hate how EMC charges an arm and a leg just to be able to view performance info.

Richard Noggin
Jun 6, 2005
Redneck By Default

adorai posted:

We discovered recently that our Oracle array has disks which are consistently at or above 95% utilization. It's amazing that the array was able to mask this kind of problem from us long enough to get to this point. To make sure I wasn't lying to my users, I've been running in VDI since I noticed the issue while I work on quotes for more storage, and other than the big logon push in the morning the system stays pretty drat usable.

We've been burned a few times by Oracle storage (a 7120) and it got sent down to the minors to serve as a non-critical unit. We have a VNX 5200 instead, and love it. A shame, too - the 7120 was ridiculously expensive for what we got.

Richard Noggin
Jun 6, 2005
Redneck By Default

Martytoof posted:

What would you guys do if you were given an HP server with 8 empty 2.5" bays and asked to create a Veeam server on a tight budget? I'm thinking of going with 8 of these:

http://www.canadacomputers.com/product_info.php?cPath=15_1086_215_217&item_id=QR1669

I'm hoping that spread across eight spindles the 7200rpm drives won't be too much of a bog; all this server will do is run Veeam, so it'll obviously be write-heavy.

I'm going to be backing up about 1-2 TB worth of VMs (raw, likely much less after Veeam dedup), so I'd like to give myself extra room for retention. RAID-5 would give me 7TB to work with, though I'd likely use a different setup.

I had to fight to get even this server and a Veeam license on a very limited budget, so this client is obviously not terribly focused on backups (though I am adamant that they at least get something in place). Cost is likely to be a factor (I'll probably have to fight just to get eight of the drives linked above), but I'll sacrifice performance for having a backup.

We have Veeam repos on slower storage than that. Your bottleneck will most likely be the 1Gb Ethernet connection, not disk. Other factors aside, I'd say you could expect 70-90MB/s.
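
To put that 70-90MB/s in context, here's a rough backup-window estimate for the job described above (Python; the 3:1 dedup/compression ratio is an assumption for illustration, not something Veeam guarantees):

code:
# Rough full-backup window for ~2TB of raw VM data over 1GbE.
# The data-reduction ratio is an assumed figure; actual results vary.

LINE_RATE_MBPS = 125            # 1Gb/s Ethernet, theoretical ceiling in MB/s
REALISTIC_MBPS = (70, 90)       # the range quoted above

raw_tb = 2.0                    # upper end of the 1-2TB estimate
reduction = 3.0                 # assumed dedup + compression ratio
effective_mb = raw_tb * 1_000_000 / reduction

print(f"1GbE ceiling: {LINE_RATE_MBPS} MB/s; realistic: {REALISTIC_MBPS[0]}-{REALISTIC_MBPS[1]} MB/s")
for rate in REALISTIC_MBPS:
    hours = effective_mb / rate / 3600
    print(f"At {rate} MB/s: ~{hours:.1f} h for a full pass over {raw_tb:.0f} TB raw")

Either way the disks above will keep up; the wire is the ceiling.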

Richard Noggin
Jun 6, 2005
Redneck By Default

Internet Explorer posted:

On the Dell MD3200 series you can actually connect up to 4 hosts with shared storage and active/active controllers. For someone just looking at a small, no-frills VMware Essentials Plus setup, it's great.

I'll offer a counter-opinion: Dell MD32xx series arrays suck. They're overpriced, offer a crappy management solution (really? a 1GB download just to get the management software, PLUS having to upgrade said 1GB package with every firmware release?), and only offer block. For the same money or less you can get a VNXe3150 or 3200 with unified storage and built-in management. From casual observation, the EMCs perform better than the Dells too.

Oh - did I mention that Dell acknowledges there is a very real possibility of data corruption during firmware updates? Yeah...not impressed.

Richard Noggin
Jun 6, 2005
Redneck By Default

skooky posted:

Pretty much everything you posted just then is wrong.

Management software does not need to be updated with every firmware upgrade.

Having probably done 100s of MD3xxx firmware upgrades, I have not come across an issue once. The "possibility" is standard CYA stuff.


Interesting. Every one I've done has resulted in having to upgrade the array manager afterward. I've had several Dell techs speak adamantly of the "possibility", with enough conviction that we don't even think about doing the upgrade with any sort of IO to the SPs. EMC is plenty happy with active upgrades. But hey, it's really just Ford vs Chevy. :iiaca:

Richard Noggin
Jun 6, 2005
Redneck By Default

PCjr sidecar posted:

nah, just that operating near capacity brings most SANs to their knees

They're just not meant to handle a load like that.

Richard Noggin
Jun 6, 2005
Redneck By Default
We use Catalyst 3750-X switches in small storage environments with great success. We even break the rules and have them do double duty with L3 routing.

Richard Noggin
Jun 6, 2005
Redneck By Default

devmd01 posted:

I refuse to support any SAN in a production environment without a maintenance contract in place, unless its something they really, really don't give a poo poo about on there. Ultimately its my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage.

This, 1000x this. Unless you have a team with vendor-level knowledge of the product and keep spares, this is gospel.

Richard Noggin
Jun 6, 2005
Redneck By Default
What are people using for storage switches in very small deployments (2-3 hosts, 5-15 VMs, iSCSI SAN, vSphere)? We've been using Catalyst 3750-Xs stacked, but only because they've been solid.

Richard Noggin
Jun 6, 2005
Redneck By Default
Yup, 1GbE. We do have the option of going host-->controller and bypassing switching altogether with the VNXe3200s we usually deploy, but I'm not sure if that's a good idea.

Richard Noggin
Jun 6, 2005
Redneck By Default

NippleFloss posted:

4900 series switches will handle storage traffic better than 3750s owing to a much larger shared port buffer space. Bursty traffic or mismatched egress/ingress rates (all common for network storage) can overload the relatively small buffers on the 3750s and lead to packet drops.

Yeah, that I know, but at 4x the cost.

Richard Noggin
Jun 6, 2005
Redneck By Default

NippleFloss posted:

A refurbished 4948 is a few hundred bucks more than a refurb 3750X-24T.

Interesting. I'll have to check it out. List is like 2x I think.

Richard Noggin
Jun 6, 2005
Redneck By Default
AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom drive firmware. Chances are the controller won't even recognize the drive.

Richard Noggin
Jun 6, 2005
Redneck By Default
I don't see Meraki going anywhere. It's already been rebranded as Cisco Meraki, and it fills a nice gap in their product line, allowing them to compete with the Aerohives of the world.

Richard Noggin
Jun 6, 2005
Redneck By Default

Docjowles posted:

You already got some good answers, but another decent tool I ran across recently is Microsoft's DiskSpd (the replacement for SQLIO).

This is what Veeam recommends to test performance for their app.

Richard Noggin
Jun 6, 2005
Redneck By Default
HP just picked up SimpliVity for $650m. It will be interesting to see where they take the line, as I believe SimpliVity's stuff is built on top of both Dell and Cisco hardware.

Richard Noggin
Jun 6, 2005
Redneck By Default
Anyone have any experience with Nimble's Storage-On-Demand offering? If so, what's your environment like?

Richard Noggin
Jun 6, 2005
Redneck By Default

adorai posted:

1) I would buy another Nimble today, no problem. I don't care who owns them, the product is great.

This. Just got an AF1000 with 24 disks last week and have been putting it through its paces. I had to deal with support already (weird issue with a disk dropping off at every reboot), but they were fantastic and US-based. Also, 35K IOPS with our normal 75/25 workload in iometer is just :fap:

Richard Noggin
Jun 6, 2005
Redneck By Default

evil_bunnY posted:

Does anyone make a good continuous backup appliance I can point at a bunch of SMB shares? About 30TB worth

A good friend of mine is in sales at Rubrik. PM me and I can give you his email if you want.

Richard Noggin
Jun 6, 2005
Redneck By Default

Spring Heeled Jack posted:

Wow, HPE is about to lose a sale to Dell because CDW can’t manage to get us the pricing we’ve been asking about for like two weeks. I really liked the AF40, but holy hell, get your poo poo together.

I can't stand CDW. Perhaps it's because we're not a mega-enterprise, but their pricing has been abysmal and we've seen high AM turnover. SHI is now our preferred VAR.

Richard Noggin
Jun 6, 2005
Redneck By Default

YOLOsubmarine posted:

Also, Tintri’s new CEO quit after just three months. The local SE is telling us that they can’t get any answers from management about what to tell customers about the future of their support contracts. We’re gonna have a few real unhappy customers, including some pretty big IT companies.

They're about to be delisted from NASDAQ. They'll be out of cash by the end of the month if they don't get help. Don't hold your breath.

Richard Noggin
Jun 6, 2005
Redneck By Default

YOLOsubmarine posted:

They’ve got no chance of getting acquired until they go through bankruptcy. The question is whether they can manage a bankruptcy, restructuring, and acquisition in a way that allows them to maintain their support organization to honor support contracts.

They aren’t getting any more capital anywhere other than acquisition so that’s really the only option. Hopefully they manage to do something to make it right for existing customers.

Violin has been a penny stock for years but they still exist somehow, so Tintri still has a shot at bare subsistence.

I don't see any value for the investor. They weren't able to keep the product afloat and this has essentially poisoned the entire line. Would anyone really buy Tintri after this? I sure as hell wouldn't.

Richard Noggin
Jun 6, 2005
Redneck By Default

1000101 posted:

One nice thing about Pure is their evergreen support program. Basically if you stay current they'll keep your gear current without having to do hardware refreshes. It sounds insane but it ends up being a great way to keep customers giving you money for your product.

I'd say it's probably the most interesting part of buying Pure.

We have a similar deal with Nimble - we get free controller upgrades every three years, provided we keep maintenance on it. I don't have the specifics in front of me but I believe we need to renew for a minimum of three years at a time to be eligible for this.
