oblomov
Jun 20, 2002

Meh... #overrated

Erwin posted:

I have a new EMC AX4 iSCSI array in place and it seems quite a bit slower than I think it should be. Is there a reliable way to benchmark its performance and any statistics for similar devices that I can compare it to? I've tried googling around but I can't find any "here is what speed you should expect with this iSCSI array" information.

You really need to provide some more info. What exactly is slow (i.e. what throughput are you getting)? Which disks do you have in there? How is it connected (how many ports, what port speeds, what switch, what switch config, jumbo frames, etc.)? And what is it connected to (what server hardware, OS, and application)?
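
As for a quick benchmark, here's a minimal sketch of a sequential-throughput test against a filesystem living on the LUN; the path and sizes are made up for illustration, and a purpose-built tool (Iometer, iozone, etc.) will give you far more useful numbers than this.

code:

# Rough sequential write/read throughput test against a filesystem that lives
# on the iSCSI LUN. Path and sizes are made up; treat the results as a sanity
# check only, not a proper benchmark.
import os
import time

TEST_FILE = "/mnt/iscsi_lun/throughput_test.bin"   # hypothetical mount point
BLOCK_SIZE = 1024 * 1024                            # 1 MiB per write
BLOCK_COUNT = 2048                                  # 2 GiB total

def sequential_write():
    buf = os.urandom(BLOCK_SIZE)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(BLOCK_COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())                        # force the data out to the array
    elapsed = time.time() - start
    return (BLOCK_SIZE * BLOCK_COUNT) / elapsed / 1e6   # MB/s

def sequential_read():
    # Note: without dropping the OS page cache first, this mostly measures RAM.
    start = time.time()
    total = 0
    with open(TEST_FILE, "rb") as f:
        while True:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.time() - start
    return total / elapsed / 1e6

if __name__ == "__main__":
    print("write: %.0f MB/s" % sequential_write())
    print("read:  %.0f MB/s" % sequential_read())
    os.remove(TEST_FILE)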

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

Can anyone recommend me a good book on SAN architecture and implementation that doesn't assume I'm either retarded or non-technical management? I'm apparently now in charge of an IBM DS4800 and an IBM DS5100, which is nice because I'm no longer going to be talking out of my rear end in this thread, but it sucks because I'm a tiny bit in over my head with IBM Redbooks right now.

There is nothing on Amazon besides their "IBM Press" stuff? I usually just google for things, but then again I have never dealt with IBM, only NetApp, Dell (their crappy MD3000i setup), Equallogic, and some EMC. I've never had to get a book for any of those; between vendor docs and Google, it's been good enough.

oblomov
Jun 20, 2002

Meh... #overrated

brent78 posted:

Please explain. We are looking at picking up 6 shelves of LeftHand. I've used EqualLogic in the past and loved everything about them, except my boss is anti-Dell these days. If LeftHand sucks, please tell me before I get neck-deep in it.

I've got to pipe up and say it's a pleasure dealing with Equallogic (and NetApp) support. We haven't had any really weird calls, mostly a drive failure here and there and a couple of network-based shenanigans, but they are very quick to respond. Also, the modules seem pretty solid and easy to use.

For pricing, I agree with previous posters: don't even look at retail pricing for NetApp (or Equallogic). Get some competitive quotes and start talking to salespeople. I've used 2020s and 2050s and they are pretty nice units for what they go for; lately, however, we have been buying Equallogic for the low and low-mid tier instead, which turns out cheaper even with dedupe factored in (and for VMware you have vSphere thin provisioning now). For somewhat higher-end (mid- to high-level) SANs we have been going with NetApp. Not that Equallogic can't deliver mid-level SAN performance, but NetApp has a lot more flexibility in quite a few areas, all things considered.

Oh, and I am not sure about their SAN side, but dealing with HP sales is like pulling teeth. I am talking fairly high-end contracts too (not just a couple hundred $K).

oblomov
Jun 20, 2002

Meh... #overrated

Cyberdud posted:

How about this: the QNAP TS-859 Pro Turbo NAS, which supports jumbo frames (http://www.qnap.com/pro_detail_feature.asp?p_id=146)

It comes to around 1600 CAD and has 8 bays each, so we can purchase two of them.

Does Netgear make good switches? I saw a pretty affordable one that supports jumbo frames. What do you guys recommend?

It's decent if you want to run a small NAS for 5-10 people. I wouldn't run VMware off it; that's not what it's for. Netgear makes decent switches for your house or your dentist's office, but it's not enterprise gear (it's actually fine for low-end switching). Check out Dell or HP switches if Cisco is a bit too pricey (it is indeed). Do get something that supports flow control (send and receive), as mentioned. Going with either of those should save you a bit of cash.

For the SAN, depending on the load, check out the MD3000i from Dell or maybe the Equallogic 4000 series. Make sure to talk to a sales rep, and get quotes from HP/Cisco/IBM as well to pressure the salesperson; you can get a good discount that way.

oblomov
Jun 20, 2002

Meh... #overrated
Can anyone recommend an open source distributed file system that can stretch across multiple JBOD systems a la Google FS? I am looking for one that will stripe across different drives in this "storage cluster". I've looked at ParaScale, HDFS, OpenAFS, etc. HDFS seems the most promising of the bunch, but its target workload of huge files is not quite what I was looking for.

Basically, we have a potential project where we may need to store a whole bunch of archive/backup/tier 3 and 4 data on a fairly small budget, and I wanted to explore the possibility of "rolling my own" "storage cloud".
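
For what it's worth, the core idea I'm after is just chunking objects and spreading the chunks across the JBOD nodes. A toy sketch below (the mount points and chunk size are made up, and it ignores replication, metadata, and node failure, which are the hard parts HDFS/GlusterFS actually solve):

code:

# Toy illustration of striping a file across several JBOD mount points.
# Paths and chunk size are hypothetical.
import os

JBOD_MOUNTS = ["/mnt/jbod0", "/mnt/jbod1", "/mnt/jbod2", "/mnt/jbod3"]
CHUNK_SIZE = 64 * 1024 * 1024   # 64 MiB stripes

def store_striped(src_path, object_name):
    """Split src_path into chunks and round-robin them across the mounts.
    Returns a manifest (ordered list of chunk paths) needed to read it back."""
    manifest = []
    with open(src_path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            target_dir = JBOD_MOUNTS[index % len(JBOD_MOUNTS)]
            chunk_path = os.path.join(target_dir, "%s.%06d" % (object_name, index))
            with open(chunk_path, "wb") as dst:
                dst.write(chunk)
            manifest.append(chunk_path)
            index += 1
    return manifest

def read_striped(manifest, dst_path):
    """Reassemble the object by concatenating chunks in manifest order."""
    with open(dst_path, "wb") as dst:
        for chunk_path in manifest:
            with open(chunk_path, "rb") as src:
                dst.write(src.read())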

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

Ceph and Gluster are the two big ones that I'm aware of.

Edit: There's also MogileFS.

Cool, appreciate the info. GlusterFS seems promising; going to check that out along with Hadoop's HDFS and OpenAFS. MogileFS seemed interesting but requires their libraries to write to it, which is not quite what I was looking for. Going to check out ParaScale again as well, even though it's commercial. Ceph looks cool but seems a bit too raw even for a dev/test environment.

oblomov
Jun 20, 2002

Meh... #overrated

lilbean posted:

When zpools are concatenated the writes are striped across devices, so with multipathing and the striping it's pretty damned fast.

You sound bang on about 50K being low for the expansion (which is why I'm leaning towards the flash array).

$50K can get you 2 x Equallogic PS6010XV (you would need to beat up your Dell rep), each with 16 x 450GB SAS drives. Your writes will be striped across both units, so in effect you can have 28 drives (2 are hot spares per unit) striped in a single storage pool with 2 active controllers. What's your server-side OS?
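
Quick back-of-envelope on that config; the per-spindle IOPS figure is a generic 15K rule of thumb, not an Equallogic spec:

code:

# Back-of-envelope for 2 x PS6010XV striped into one pool.
# Per-spindle IOPS is a rough rule of thumb, not a vendor number.
MEMBERS = 2
DRIVES_PER_MEMBER = 16
HOT_SPARES_PER_MEMBER = 2
DRIVE_SIZE_GB = 450
IOPS_PER_15K_SPINDLE = 175          # conservative random-IO estimate

data_spindles = MEMBERS * (DRIVES_PER_MEMBER - HOT_SPARES_PER_MEMBER)
raw_capacity_tb = data_spindles * DRIVE_SIZE_GB / 1000.0
read_iops_estimate = data_spindles * IOPS_PER_15K_SPINDLE

print("active spindles in the pool:", data_spindles)            # 28
print("raw capacity (before RAID): %.1f TB" % raw_capacity_tb)  # 12.6 TB
print("rough aggregate read IOPS: ~%d" % read_iops_estimate)    # ~4900
# Write IOPS depend heavily on RAID level (RAID-10 vs RAID-50) because of the
# mirroring / parity write penalty, so treat this as a ceiling.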

IMO, a NetApp FAS3100 with a PAM card is going to run you a lot more than $100K. Hell, a shelf of 450GB fiber drives is going to run over $30K. However, a 2150 (or whatever the equivalent is right now) with a fiber shelf can probably be had for around $50K. The NFS license is pretty cheap for the 2000 series too.

oblomov
Jun 20, 2002

Meh... #overrated

EoRaptor posted:

Note: the multi-device striping works on any platform, but will only get full throughput where an optimized multipath driver is available. Currently, only Windows has such a driver, though a VMware one is in development. Other platforms have to wait for a redirect from the device they are hitting if the data they are seeking is elsewhere, leading to a latency hit at best.

Equallogic has some great ideas, but some really weird drawbacks. A single EQ device filled with SSDs might actually be a faster solution, though it depends on what you are bound by (I/O, throughput, latency, etc.). We are back at our SSD device discussion, however, and there are better players than Equallogic, I feel.

Well, there are workarounds in vSphere without the native driver (no idea why Dell delayed that until 4.1). I have a bunch of Equallogics running on MPIO just fine; you just need to follow the Dell/VMware guidelines on how many iSCSI vmkernel NICs to create for the physical ones and then do some command-line config on the ESX hosts, which is not particularly difficult.
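
The ESX-side piece boils down to creating one vmkernel port per physical uplink and then binding each one to the software iSCSI adapter. A sketch of that step (ESX 4.0-era esxcli syntax; the vmk/vmhba names are just examples, so check the Dell/VMware multipathing guide for your build before running anything):

code:

# Sketch of the per-host CLI step for iSCSI MPIO on ESX 4.0 with the software
# initiator: bind each iSCSI vmkernel port to the software iSCSI adapter.
# Names below are placeholders; verify the exact esxcli syntax for your build.
import subprocess

SW_ISCSI_ADAPTER = "vmhba33"            # software iSCSI HBA on this host
ISCSI_VMK_PORTS = ["vmk1", "vmk2"]      # one vmkernel port per physical uplink

def bind_iscsi_vmk_ports():
    for vmk in ISCSI_VMK_PORTS:
        # Equivalent to running on the host:
        #   esxcli swiscsi nic add -n vmk1 -d vmhba33
        subprocess.run(
            ["esxcli", "swiscsi", "nic", "add", "-n", vmk, "-d", SW_ISCSI_ADAPTER],
            check=True,
        )
    # Show what ended up bound to the adapter.
    subprocess.run(
        ["esxcli", "swiscsi", "nic", "list", "-d", SW_ISCSI_ADAPTER],
        check=True,
    )

if __name__ == "__main__":
    bind_iscsi_vmk_ports()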

That said, I'm just not sure of the state of Linux iSCSI initiators beyond simple failover. We usually use NetApp NFS for Linux beyond a few boxes with not-too-intensive IO. 10GB helps with Equallogic, but it does not address the MPIO volume-addressing limitations. You can configure most of that manually (in Windows at least, so I imagine Red Hat/SUSE have similar options). Red Hat/SUSE drivers are supposed to be coming out Q3/Q4 along with the vSphere 4.1 plugin, I believe.

oblomov
Jun 20, 2002

Meh... #overrated

EnergizerFellow posted:

Yeah, more like ~$200K. Nothing like overkill. ;)

Yeah, the FAS2040 is a much more reasonable option. Some back of the envelope numbers would put a FAS2040 w/ DS4243 tray, 15K spindles, and NFS option at ~$70K and ~5500 IOPS.

The FAS2040 is the only model I'd touch in that whole lineup. The FAS2020 and FAS2050 are effectively a major generation behind and I'd expect to get an EOL notice on them any day now. I'd also expect a FAS2040 variant with 10G and more NVRAM to show up soonish as well.

Oh yeah, forgot about the 2040. We have a few of the older FAS2020 and FAS2050 boxes around, and they are pretty nice. However, we started buying Equallogic lately for lower-end tasks like this one; they are very price-competitive (especially on the Windows side).

All of that said, I am exploring distributed/clustered file system options on home-built JBODs now for a couple of different things. I can certainly see why Google/Amazon use that sort of thing (well, and they have a few hundred devs working on the problem too :P). If only ZFS was cluster-aware...

oblomov
Jun 20, 2002

Meh... #overrated

EoRaptor posted:

Yeah, I didn't expect hard numbers, just that you had looked at the database performance counters, and know that you are i/o bound (i/o wait) not database limited (lock contention/memory limited)

For a server, look at a Xeon 56xx or 75xx series cpu, cram as much memory into the system as you can afford, and you should end up with a database monster. It's not going to be cheap, but the cpu and memory throughput is probably untouchable at the price point.

For memory throughput especially, the 75xx series is really, really good.

oblomov
Jun 20, 2002

Meh... #overrated

Klenath posted:

Who here has direct experience with the Dell EqualLogic product line?

I have 40+ of the 6000 series, a few 6010s, a few 6500s, and a bunch of 5000 series arrays all over the place. I've been running Equallogic for about a year and a half now.

quote:

Questions like:


What is the overall performance like, in your experience?

How well does an EqualLogic "group" scale, for those beyond one shelf?

How good is the snapshot capability (speed / IOPS hit on the original LUN / snap & present snap LUN to same or other host automatically or near-automatically)?

How easy / reliable is the array replication?

Does EqualLogic require retarded iSCSI drivers for multipathing like Dell's MD3000i (we have one of these things already, and I'm not very fond of it - partially for this reason)?

Can I realistically use hardware iSCSI offload with EqualLogic (many modern Dell 1U & 2U servers have optional iSCSI offload for onboard NICs, and it would be nice if we could leverage it)?

1. Performance is pretty good. You have to properly size your network throughput, your applications, your VMware (or Hyper-V, or whatever) virtual environments, etc. Keep in mind you need either a shitload of 1GB ports or a couple of 10GB ports per box (see the port-count sketch after this list), and make sure to have MPIO (active/active) on your clients (be careful with the virtualization config). Performance is directly related to scaling (which depends on your network setup; think Nexus/6509/6513 if Cisco, maybe the 4000 series but I haven't tried that), so if you want more IO, prepare to throw down more boxes (and of course it's up to you whether you want RAID-10 or whatever). I believe with the latest firmware you can have up to 12 boxes (or was it 8?) in the same pool. The group is just for management, really.

2. Snap hit is pretty light. The snapshot itself is very fast and I haven't really observed performance degradation during the process. That said, I don't take snapshots during really busy times.

3. Replication is very simple. However, don't look for compression or anything fancy like that. Look for Riverbed or similar devices to optimize WAN (if doing WAN replication). Setup is straightforward, haven't had it fail yet. I replicate fairly large volumes daily to tertiary storage. I don't replicate much across the WAN.

4. Yes, you want to use the "retard" drivers :). They work very, very well for Windows hosts. For Linux you can do the same (Windows too, but it's easier to use the Equallogic host utils), but you have to set up each LUN manually for multipathing. Unfortunately it's the same thing for vSphere 4.0: multipathing works well, but you have to do manual setup on the LUNs. Dell is coming out with a driver for 4.1, and the beta is supposed to kick off next quarter.

5. Umm... not sure? This is on the server side, so it depends on your hardware and OS; iSCSI is iSCSI, after all. I don't bother with offload since the CPU hit is minuscule, VMware does not do much offloading, and I don't have 10GB models running in non-VM environments. I also hate Dell on-board NICs with a passion, since they are all Broadcom (I still vividly remember the Server 2003 SP2 fiasco there) and, well, let's face it, Broadcom sucks. I normally use either Intel NICs (in 1/2U boxes) or integrated switches in blade enclosures.
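
On the sizing point in #1: a rough way to sanity-check how many 1GbE ports (or whether a couple of 10GbE) you need per box, given an IOPS target and an average IO size. The numbers here are purely illustrative assumptions:

code:

# Rough port-count sanity check for an iSCSI array member.
# IOPS target and average IO size are illustrative assumptions.
TARGET_IOPS = 10000
AVG_IO_SIZE_KB = 32
LINK_EFFICIENCY = 0.8          # leave headroom for protocol overhead / bursts

required_mbps = TARGET_IOPS * AVG_IO_SIZE_KB * 8 / 1000.0   # megabits/sec
gige_ports = required_mbps / (1000 * LINK_EFFICIENCY)
ten_gig_ports = required_mbps / (10000 * LINK_EFFICIENCY)

print("throughput needed: ~%.0f Mbit/s" % required_mbps)    # ~2560
print("1GbE ports needed: ~%.1f" % gige_ports)              # ~3.2 -> wire 4 with MPIO
print("10GbE ports needed: ~%.1f" % ten_gig_ports)          # ~0.3 -> one, two for HA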

oblomov
Jun 20, 2002

Meh... #overrated

Intraveinous posted:

Apologies in advance, as I'm way behind in this thread. If this has already been answered months ago, just feel free to ignore me.

The way we replicate our Oracle Databases is using Oracle Dataguard. Basically, you have a temporary LUN set up on the DR/Receiving side that receives a log ship on a set interval. Your shipping interval can be as short or long as you like, and since we've got 100Mbit connection to DR, we go on a constant (real time) shipping schedule, and if something makes it get behind, such is life. Our SLA is for DR to be within 5 minutes of Production. Main DB writes out logs as changes are made and ships them out to the DR site, where they are applied. Whole thing is handled nicely in Oracle Grid Control.

IANADBA, so I'm not sure if this requires RAC in order to work this way, or not, but that is what we are running currently. 4x node Integrity for our production systems, and 2x node for DR, running Oracle Database Enterprise and RAC 10g.

Just wanted to pipe up and say that we do the same thing for our Oracle setup on NetApp. We don't use SnapMirror and just use Data Guard on our production RAC cluster. That said, NetApp demoed SnapMirror and it looked like you could do async with that as well, with the Oracle Snapshot Manager.

oblomov
Jun 20, 2002

Meh... #overrated

rage-saq posted:

Seriously? So as you build your EQL "grid" up you are introducing more single points of failure? Holy cow...

Well, keep in mind that each box has 2 fully operational controllers. Also, you usually don't stripe volumes across every single box (depending on the number of units, i.e. if it's more than 4-5). Still, it is a point of failure that's not very clearly explained. The most frustrating thing sometimes is that there is no manual way to fail over between controllers, which is kind of nuts. Warts and all, though, I like Equallogic for its price/performance/ease of use. Plus, all licenses are free: Exchange, SQL, replication, and the upcoming VMware and Oracle ones.

oblomov
Jun 20, 2002

Meh... #overrated

TobyObi posted:

Already using it (in limited cases) along with 8Gbit FC.

Using 10GbE (over fiber, though) with NetApps and Equallogic. Works just peachy. Also using 10GbE from our VMware boxes to save on the number of cables coming out of the blade enclosures.

On Exchange 2010, we are going to go the no-backup route. However, we'll have 3 DAG copies, including 1 lagged copy over the WAN to a remote datacenter, and we have redundant message archiving infrastructure as well. MS is also not doing backups on Exchange 2010, and they don't even have lagged copies.

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

I can sort of understand not keeping backups if you're going with the lagged copies, but running without offline backups is insanely risky in the event that a sysadmin goes rogue and tries to trash the whole environment.

Well, yes, that's always a risk. We have gone through some risk/mitigation exercises and we shall see how it turns out. There is still a chance we'll do backups. Now, the more interesting problem with Exchange is that right now we are on physical boxes with local DAS storage and may be going virtual for Exchange 2010. That may make the SAN sizing calcs interesting.

I figure with the new and improved I/O handling in 2010, we could get away with running this off a SATA SAN, either NetApp or Equallogic. We'll see how our testing goes. I am not sure the virtual route will turn out to be cheaper, considering SAN costs, increased license costs (more smaller virtual servers vs. fewer larger hardware ones), and VMware costs, but we'll be doing the number crunching. Anyway, enough derailing the thread :).

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

We ended up going fully-virtual for our Exchange 2010 environment -- it really does gently caress-all in terms of CPU utilization and with the disk I/O reductions being what they are there's really no good reason not to consolidate it into the same VM hosting environment as everything else anymore. We just made sure to set up our DRS/HA hostgroups in 4.1 to keep the servers apart from one another and set our resource pools to prioritize Exchange appropriately. I think we're using 16GB RAM per mailbox server, which means Exchange takes up about 1/3 of the memory on our ESXi environment.

Right now we're running on FC because we had literally terabytes of spare FC capacity just sitting here, but I don't really see any compelling reason why we couldn't run on SATA with the I/O numbers we're pulling off the SAN.

Yeah, that's what we were thinking (minus the SAN). The thing is that with, say, 1500-2000 users per virtual mailbox server, that's a lot of VMs for the DAGs in each datacenter, especially with multiple copies. We'll eat up a lot of ESX "bandwidth", and if you count the SAN cost and VMware cost, the savings are not really there. MS now allows you to have multiple roles on your DAG nodes, so you can push your CPU utilization even higher. Plus, half the point is gone if you're not using DRS/vMotion anyway.
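
Rough numbers behind that concern, assuming each DAG member hosts one copy of its databases and using the 16GB-per-mailbox-server figure from above; every input here is an assumption to show the scaling, not our actual design:

code:

# Back-of-envelope for how many mailbox-server VMs (and how much RAM) a
# virtualized Exchange 2010 DAG design eats. All inputs are illustrative.
TOTAL_MAILBOXES = 20000
USERS_PER_MAILBOX_SERVER = 2000
DAG_COPIES = 3                      # e.g. 2 local copies + 1 lagged remote copy
RAM_PER_MAILBOX_VM_GB = 16

# Simplifying assumption: each DAG member hosts one copy of its databases,
# so the VM count scales linearly with the number of copies.
active_servers = TOTAL_MAILBOXES // USERS_PER_MAILBOX_SERVER
total_dag_vms = active_servers * DAG_COPIES
total_ram_gb = total_dag_vms * RAM_PER_MAILBOX_VM_GB

print("active mailbox servers:", active_servers)        # 10
print("total DAG member VMs:  ", total_dag_vms)         # 30
print("RAM just for Exchange:  %d GB" % total_ram_gb)   # 480 GB of ESX memory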

oblomov
Jun 20, 2002

Meh... #overrated

adorai posted:

I certainly agree with the idea, and as an individual system administrator I am all for combining the links, but it's hard to go against such an easy to implement best practice. However, I am very glad you posted the links, because we are rebuilding our entire VMware infrastructure over the next few weeks so we'll certainly be able to consider doing so.

Well, you would have multiple 10GB links per server, so you should still have MPIO. Here's the thing: look at switch/datacenter/cabling costs, and 10GB starts making sense. Our 2U VMware servers each used to have 8 cables (including the Dell DRAC) and now have 3. It's similar with our NFS/iSCSI storage. You would be surprised how much cabling, patch panels, and all that stuff cost, and how much of a pain in the rear it is to run, say, 100 cables from a blade enclosure.

We are going all 10GB for the new VMware and storage infrastructure, and the cost analysis works out.
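
The kind of math we did, with completely made-up unit prices (your switch, optic, and cabling costs will differ a lot, so treat this as the shape of the calculation rather than real numbers):

code:

# Simplified per-server connectivity cost comparison: 8 x 1GbE vs 2 x 10GbE,
# plus one management/DRAC connection in both designs. Prices are made up
# purely to show the shape of the calculation.
SERVERS = 50

COST_PER_1G_PORT = 150        # switch port + patch panel + cable + labor
LEGACY_CABLES_PER_SERVER = 8

COST_PER_10G_PORT = 500       # SFP+/CX4 port + cable + labor
TEN_GIG_CABLES_PER_SERVER = 2

COST_PER_DRAC_PORT = 100      # same in both designs

legacy_total = SERVERS * (LEGACY_CABLES_PER_SERVER * COST_PER_1G_PORT
                          + COST_PER_DRAC_PORT)
ten_gig_total = SERVERS * (TEN_GIG_CABLES_PER_SERVER * COST_PER_10G_PORT
                           + COST_PER_DRAC_PORT)

print("8 x 1GbE per server:  $%d" % legacy_total)    # $65,000 with these inputs
print("2 x 10GbE per server: $%d" % ten_gig_total)   # $55,000 with these inputs
# The crossover depends entirely on your real port costs; the point is that
# cabling, patch panels, and switch ports dominate, not the NICs themselves.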

oblomov
Jun 20, 2002

Meh... #overrated

adorai posted:

You can get a 3140 HA pair w/ ~10TB (usable) of FC and ~10TB (usable) sata and every licensed feature for well under $200k if you play your cards right. Figure another $30k per 5TB of FC or 10TB of SATA.

Wow, I would love to see where you are getting that pricing. A shelf of 24 x 600GB 10K SAS drives retails at $80K plus, and 2TB SATA at about the same. Then you throw in 3-year support and it's almost another 20% on top. Now, nobody pays retail, but still, you are getting a very, very good deal if you are paying $30K for that.

The pricing is even more ridiculous for a full 3140 HA pair with all licensing and 10TB SAS plus 10TB SATA. Now, NetApp is going to be discounting since new hardware is coming out soon, but still...

You are talking Equallogic pricing here, and while I like Equallogic, let's face it, nobody would be buying it if NetApp was priced the same.

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

Keep in mind that with each iteration, the disks increase in areal density and capacity, cache sizes increase, and RAID implementations get faster, so you'll never need quite as much disk with each new generation of SAN as you did with the previous one. If you can hold out until 3TB disks are supported by your vendor of choice, you might be really surprised by how low the costs can get. Plus, many vendors are doing interesting things with SSD, which can knock down your spindle count when you either put your data on SSD or use it as a layer-2 cache (assuming your platform supports that).

If this isn't you, you need to be seriously considering tiered storage, storage virtualization, or other things that really minimize the amount of capacity actually taken up by SAN data. SANs are expensive both in terms of up-front costs and maintenance.

I keep waiting for someone other than Sun to do a decent SSD caching deployment with a SATA back-end. NetApp PAM cards are not quite the same (and super pricey). EMC is supposedly coming out (or maybe it's already out) with SSD caching, but we are not an EMC shop, so I am not up to date on all the EMC developments. (There's a toy sketch of the caching idea at the end of this post.)

Equallogic has a mixed SSD/SAS array for VDI which we are about to start testing; not sure how that's going to work out at larger scale due to pricing. They really need dedupe and NFS as part of their feature suite.
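
To illustrate what I mean by an SSD read-cache layer in front of a SATA tier (this is just the concept, faked with dictionaries; it's not how any particular vendor implements it):

code:

# Toy model of an SSD layer-2 read cache sitting in front of a big SATA tier.
# Block access is faked with dictionaries; the point is just the
# promote-on-read / evict-least-recently-used behavior, not a real driver.
from collections import OrderedDict

class SSDCachedStore:
    def __init__(self, backing_store, ssd_capacity_blocks):
        self.backing = backing_store                  # dict: block_id -> data
        self.capacity = ssd_capacity_blocks
        self.ssd = OrderedDict()                      # LRU order of cached blocks

    def read(self, block_id):
        if block_id in self.ssd:                      # cache hit: serve from SSD
            self.ssd.move_to_end(block_id)
            return self.ssd[block_id]
        data = self.backing[block_id]                 # cache miss: hit SATA
        self._promote(block_id, data)
        return data

    def write(self, block_id, data):
        self.backing[block_id] = data                 # write-through to SATA
        if block_id in self.ssd:
            self.ssd[block_id] = data                 # keep the cache coherent
            self.ssd.move_to_end(block_id)

    def _promote(self, block_id, data):
        self.ssd[block_id] = data
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)              # evict the coldest block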

oblomov
Jun 20, 2002

Meh... #overrated

adorai posted:

That's basically the design philosophy of a SAN to begin with, everything is redundant.

But I agree with the sentiment: buy a 5-year-old SAN, then buy an identical unit for spare parts.

That's crazy talk, IMO. Yes, it's cheaper, but is it better? Say one fails; then you have to hope you can get another one quickly so you get your redundancy back. Plus, new apps/virtualization/etc. are pushing more and more data through the pipe. We used to think 4GB SAN fiber links would never be filled up, and now some of the VMware host configs we are testing push enough traffic to saturate a 10GB link.

I agree with what Misogynist said below: with faster hardware, better caching hardware and algorithms, more RAM, faster CPUs, etc., SAN hardware is becoming more efficient.

That said, what we do is put stuff in the lab after 4-5 years; it's perfect for that environment, and if it dies, oh well.

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

I was really pissed that the Unified Storage line didn't completely take off and dominate the industry in the low- to mid-end. If Sun was in a better position when that was released, and the IT world wasn't terrified of Sun being acquired and the vendor support stopping, they would have made a killing on it. The Fishworks analytics stuff is still the best in the industry.

Their software stack sucked when they released it: no decent client snapshots, no integration with most applications, etc. iSCSI support was iffy as well. But you are right from the standpoint of the hardware they had (even with horrible sales).

oblomov
Jun 20, 2002

Meh... #overrated

lilbean posted:

Wait what? The 4540 was one of the best things Sun ever made. Goddamnit.

There is always Equallogic... :P

oblomov
Jun 20, 2002

Meh... #overrated

H110Hawk posted:

50% off or you aren't even trying, and frankly you're wasting the sales guy's time. 60% plus lunch if you have the time to really turn the screws. Dinner and event tickets should follow the sale to discuss your upcoming projects.

Huh, and I thought our discount was good.... Going to have to talk to procurement.

oblomov
Jun 20, 2002

Meh... #overrated

Crowley posted:

I would too. I've been using EVAs for the better part of a decade without any issue at all.

Haven't used EVAs, but I've had a terrible time dealing with the HP sales team on desktop/laptop purchases. I'm talking a large-scale account with 8-figure sales a year, and we got poor responsiveness, slow ordering, lags on delivery, and just overall a bad experience.

On the other hand, EMC, NetApp, and Dell have always been prompt and responsive and have provided excellent support for pretty much anything we got from them. Now, with Dell we sometimes escalate through the TAM, but that's how it rolls, and it's still quick. Personally, this soured me enough on HP that I wouldn't look at them as a vendor for anything for a while.
