1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Corvettefisher posted:

Might be more of a networking question, but:

Is anyone here actively using FC for anything other than high-end transaction servers? I've had a few customers with existing 4Gb/s FC networks for their SAN looking to upgrade, and I usually just go with 10Gb to run iSCSI or FCoE. But I get those "Is this guy really suggesting that?" moments from some people on iSCSI/FCoE. It might just be dinosaurs in the IT management sector who haven't looked at what is going on around them and think FC @ 8Gb/s is the poo poo. Normally I'll sell the admins pretty fast when I show them the performance numbers, and sell the managers when I show them the cost, performance, and manageability of iSCSI/FCoE.

It just gets annoying having to repeat myself over and over; didn't know if anyone had some viewpoints they could shed light on.

If I'm using FCP today I'm not sure why I'd want to replace it with 10GbE and iSCSI, though I might consider FCoE at the edge if I'm trying to cut down on cabling costs or simplify my network design. Realistically speaking, most end hosts would be fine with just 4Gbps of storage bandwidth, and even then few come close to the upper ceiling of that.

Personally, if I were doing block storage today I'd probably still prefer 8Gb FCP in my core storage network, because it's very reliable and will generally push more traffic than 10GbE once I start bundling links together. I'm probably also invested in some tools that integrate nicely with native FCP that don't have an iSCSI equivalent yet.

It also means I can keep using any storage I'm trying to replace for other less important things. I don't buy the performance or management argument since in either case (iSCSI or FCoE) I still have to learn a new technology and truth be told we're talking about pretty high levels of bandwidth for your typical small to mid-sized customer.

Unless you're selling Brocade VCS Ethernet fabric switches (or maybe Juniper QFabric), in which case you can build a pretty awesome Ethernet storage network core.


1000101
May 14, 2003


Corvettefisher posted:

Those should be active/active, just not ALUA, so I doubt your systems knew much of what occurred. Personally I think they are nice for smaller customers.

FYI, the only really "active/active" storage array that EMC sells is the VMAX/Symmetrix line. VNX and VNXe's are more or less active/passive|passive/active and in fact do support ALUA.

quote:

Before I dive into this: do you work in an internal department of a company, or for an IT firm servicing different customers headed up against bid deals and SLAs?

I work for a professional services company with a large chunk of my client base in the fortune 500. I specifically handle architecture and design work and have done a lot of capacity planning over the years. I've worked with financial services, health care, commercial and retail customers.

1000101
May 14, 2003


madsushi posted:


I have a hard time looking at SANs that don't include an SSD/Flash-based read cache these days. The 3PAR (and Compellent, etc.) tiering isn't real-time and isn't going to get you anywhere near the same performance boost.

It's not real-time, but you can default writes to land in the highest tier and eventually move down as the data gets cold. Most data tends to be that way anyway, and when you're talking north of 100TB of storage it becomes cost prohibitive to put everything on flash or even 15K RPM SAS. Tiering in some form or another will always be here until we start seeing people being amazed that we used to store data on spinning ceramic platters.

I'd suspect what we'll actually see in the future is something more along the lines of policy-based data management built into the array: this application always lives here, and that application is archival so put it on cheap lovely disks. A sort of "manual" control that can be managed transparently to end hosts.

Either that, or scale-out storage just becomes the flavor of the week (something like this: http://www.yellow-bricks.com/2012/09/04/inf-sto2192-tech-preview-of-vcloud-distributed-storage/). One big happy tier capable of delivering as much IO and bandwidth as you need.

1000101
May 14, 2003


evil_bunnY posted:

I know what syncrep is, I thought the name was silly.

VPLEX differs from a lot of synchronous replication technologies in that all the copies can be active. Handy for things like multi-site vMotion with VMware, for example.

1000101
May 14, 2003

You could get a 48-port Nexus 5K UP, and then you have a migration path to FCoE and won't have to deal with learning a new switch platform or fighting potential interop issues.

1000101
May 14, 2003

Also note that the 5K can do native FC for the VMAX as well. We're using a VMAX 40K with a 5596UP and are pretty happy with the results thus far.

1000101
May 14, 2003


Dilbert As gently caress posted:

Pretty psyched, and while I just found out my budget I really want to do my absolute best and dedicate as much time to it as I can, because I would really like to use this as a VCDX defense some day.

You're looking at the VCDX in entirely the wrong way based on this post.

That said, you should look at either this:
http://www.brocade.com/products/all/switches/product-details/300-switch/index.page

or this:
http://www.brocade.com/products/all/switches/product-details/6505-switch/index.page

as alternatives to the MDS 9100 series.

quote:

What's making you lean to EMC for the storage?

It's possible, if this is for education use, that an EMC array may be needed to teach EMC-centric classes. Then again, it may not be, so this question should really be answered before cutting a PO.

1000101
May 14, 2003


three posted:

Do people still buy NetApp?

Yes.

1000101
May 14, 2003


three posted:

Do these people feel shame?

Not at all. It's still a solid and pretty easy to use product. I would buy it if I could.

1000101
May 14, 2003

Small business? I would opt for a shared-nothing design and depend on app-level clustering for anything that supports an important business process.

Cheap, relatively easy to support, and won't require much additional training for the end customer. Not as sexy as dropping in a NetApp or a Pure Storage box or something with 10GbE, but easier on the wallet in both upfront and ongoing costs.

Also, with respect to failure, your storage is only as "available" as people let it be. You could buy a pair of 8-engine VMAX 40Ks with VPLEX in front of them and still have a few outages a week.

Basically people and their training/skillsets have a lot to do with how often a system is up. Usually more than the actual technology they're maintaining.

1000101
May 14, 2003


Dilbert As gently caress posted:

Hersey and I are talking the sub-$25K level, or a 21-74 data worker SMB; not the SMB where you live.

I'm thinking the same thing: 3 ESXi servers and Essentials Plus, basically. Comedy VSAN option if there just has to be shared storage.

Edit: out here we'd just put it all in Amazon and use Office 365 for a business that size.

1000101 fucked around with this message at 16:40 on Apr 21, 2014

1000101
May 14, 2003


sudo rm -rf posted:

What kind of budget do you think would be the minimum required to get my DCs protected and plan for a minimal amount of growth?

For DCs you wouldn't need to worry too much, since you can just build one DC on each ESXi host and they'll all replicate to each other.

AAA could probably be handled similarly; you just need to define multiple AAA sources on your network devices.
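If it helps, a rough sketch of what multiple AAA sources look like on a Cisco IOS box. The server names, addresses, and key below are all placeholders; check the exact syntax against your IOS version:

```
aaa new-model
! Two TACACS+ sources, e.g. one AAA VM per ESXi host (placeholder addresses)
tacacs server AAA-VM1
 address ipv4 10.0.0.11
 key EXAMPLE-KEY
tacacs server AAA-VM2
 address ipv4 10.0.0.12
 key EXAMPLE-KEY
aaa group server tacacs+ AAA-VMS
 server name AAA-VM1
 server name AAA-VM2
aaa authentication login default group AAA-VMS local
```

The trailing 'local' fallback means you can still log in if every AAA source happens to be down at once.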


quote:

This sounds like a good plan, and our hosts are pretty beefy - the discount we get on UCS hardware is significant.

Edit: I really appreciate you guys walking me through this. I'm essentially a one man team, so I don't have a lot of options for assistance outside of my own ability to research.

I largely agree with his plan as well. Focus on what you need shared storage for (things like vCenter, maybe AAA, anything else) and keep throwaway items on local disk.

Also do you happen to be in the bay area?

1000101
May 14, 2003


Dilbert As gently caress posted:



On NetApp? Because this is the first time I have heard it talked about on NetApp; I know it exists with other vendors. It just hasn't been talked up, from what I've heard, on NetApp.


NetApp's been doing this at least since 2007 (probably longer). It's basically old hat at this point, and I'm shocked more storage doesn't do this today.

Regarding licensing: someone hasn't priced out a VMAX!

1000101
May 14, 2003


Dilbert As gently caress posted:

Doesn't the UCS chassis encapsulate/implement basically the same poo poo with IP traffic via FCoE? I imagine the VNXe 3200 is doing similar, but only abstracting the HW of RAID, protocol layers, and such to the software.

A UCS chassis provides power and plumbing to blades, and not a whole lot else. The intelligence all happens in the fabric interconnect, and no, it doesn't do anything with IP traffic via FCoE. IP traffic gets stuffed into Ethernet frames just like on every other server vendor's gear on the planet. FCoE is just wrapping Fibre Channel frames in Ethernet and putting them in a 'no drop' QoS class to guarantee delivery.

Once the FCoE frame reaches the FI it can (depending on whether it's in end-host or switched fabric mode):

- Be sent out a native FC uplink to a Fibre Channel switch for delivery (stripping off the Ethernet header, since it isn't needed anymore)

- Be sent out of an ethernet uplink via FCoE

- Be handed off to the FCF services on the FI where it'll go through the usual process of FCP delivery
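To make the "FCoE is just a wrapper" point concrete, here's a deliberately oversimplified Python sketch: it just prepends an Ethernet header carrying the FCoE Ethertype (0x8906) to an opaque FC frame. The real encapsulation also carries a version field, SOF/EOF delimiters, and padding, which are left out here, and the MAC addresses are made up:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned Ethertype for FCoE

def wrap_fcoe(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Prepend an Ethernet II header with the FCoE Ethertype.

    Simplified: real FCoE also inserts a version field, SOF/EOF
    delimiters, and pad bytes around the FC frame.
    """
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

def unwrap_fcoe(frame: bytes) -> bytes:
    """Strip the Ethernet header, as an FCF or native-FC uplink would."""
    _dst, _src, ethertype = struct.unpack("!6s6sH", frame[:14])
    if ethertype != FCOE_ETHERTYPE:
        raise ValueError("not an FCoE frame")
    return frame[14:]

# A made-up "FC frame" payload round-trips through the wrapper:
fc = b"\x22" * 36  # placeholder bytes standing in for a real FC frame
wire = wrap_fcoe(fc, b"\x0e\xfc\x00\x00\x00\x01", b"\x00\x25\xb5\x00\x00\x01")
assert unwrap_fcoe(wire) == fc
```

The point of the sketch: the FC frame itself is untouched, which is why the FI can strip the Ethernet header and hand the frame to a native FC uplink.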


quote:

Honestly would love to hear how it is working underneath, if it isn't doing a vSAN(abstracting the HW and pushing protocols and such to the software to make a unified solution); but that is all I can figure.

How you get data to the controller doesn't have to have anything to do with how the controller actually stores and retrieves the data. You're going to take a SCSI command and stick it in either an iSCSI packet or an FC frame and send it out the most appropriate HBA. Your storage network will take said packet and deliver it to the controller, which will read out that SCSI command and do something fancy with it. In most cases a LUN is probably just a file (or set of files) on a set of backend disks somewhere, being shuffled around however the controller sees fit.
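The "a LUN is probably just a file" idea is easy to sketch. This toy Python class (names are mine, not any vendor's) shows a controller translating LBA-based block reads and writes into plain seeks on a backing file:

```python
import os
import tempfile

class FileBackedLun:
    """Toy illustration of a LUN backed by a file: the 'controller'
    translates (LBA, block count) into seek/read/write on the file."""

    def __init__(self, path: str, blocks: int, block_size: int = 512):
        self.path, self.block_size = path, block_size
        with open(path, "wb") as f:
            f.truncate(blocks * block_size)  # sparse file, like thin provisioning

    def write_blocks(self, lba: int, data: bytes) -> None:
        assert len(data) % self.block_size == 0
        with open(self.path, "r+b") as f:
            f.seek(lba * self.block_size)
            f.write(data)

    def read_blocks(self, lba: int, count: int) -> bytes:
        with open(self.path, "rb") as f:
            f.seek(lba * self.block_size)
            return f.read(count * self.block_size)

lun = FileBackedLun(os.path.join(tempfile.mkdtemp(), "lun0.img"), blocks=128)
lun.write_blocks(4, b"\xab" * 512)
assert lun.read_blocks(4, 1) == b"\xab" * 512
```

The host asking for LBA 4 has no idea (and doesn't care) that the "disk" is really a file the controller is free to move, dedupe, or tier.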


edit for clarity.

1000101
May 14, 2003

Sounds like EMC is priced to win. Assuming you need more than 100MB/sec of throughput on a single port I'd probably go with the EMC if the price is right.

FC isn't terribly difficult to manage. You'll pretty much touch your FC switches only when you add more storage controllers or more hosts. One rule to remember is that you'll want to create one zone per host per switch. In that zone, make sure you've got the WWNs of an ESXi host and any storage ports you want it to talk to. Avoid an all-in-one zone.

EMC Unisphere, though... Unisphere makes me irrationally angry. I only touch it about once a year when I add some more hosts to the lab, and I always forget the process for adding hosts. It's more straightforward on probably every other vendor on the planet.

Is Nimble aware you're evaluating EMC as well? Maybe go back to Nimble/your VAR and ask for bigger discounts/free stuff to bring the configurations into balance.

1000101
May 14, 2003


Misogynist posted:

4948 doesn't stack, though, which can complicate topologies somewhat once you go beyond the port count of a single switch. Costs go up a bit once you throw the uplink SFP+ modules in the mix.

I don't think I'd ever use stacked 3750s for a storage network though.

re: SFP+ costs,

'no errdisable detect cause gbic-invalid'

and

'service unsupported-transceiver'

will be your best friends on a shoestring budget. Cisco criminally overcharges for optics.

1000101
May 14, 2003

FC is an easy protocol to scale on the cheap and some people like the idea of keeping the storage network separate anyway.

1000101
May 14, 2003


Mr Shiny Pants posted:

If you have diamonds why not go whole hog and get Infiniband?

It is probably the best interconnect, too bad it gets glossed over.

Mostly because not a lot of storage manufacturers use it as a host interconnect.

1000101
May 14, 2003


Strife posted:

I want to increase the bandwidth to the SAN but I know precisely dick about fiber zoning beyond "it's similar to a VLAN." The Brocade switches are only used for connecting the hosts to the 3PAR, so do I need to do anything beyond relying on their default configs? Is there some fiber equivalent of port aggregation or can I just plug them in?

Zoning is more closely related to creating access lists than to a VLAN. You generally want to make sure you're using single-initiator zones, with either single or multiple targets. By that I mean an initiator would basically be a server, and a target would be some sort of storage device, tape library, whatever.

So if I have, say, 2 servers and a storage array, I would define 2 zones using a naming convention like the one below:

SERVERHOSTNAME1_HBA0__3PARHOSTNAME
member SERVERHOSTNAME1_HBA0
member 3PARHOSTNAME_CONTROLLER1_PORT1
member 3PARHOSTNAME_CONTROLLER1_PORT2
member 3PARHOSTNAME_CONTROLLER2_PORT1
member 3PARHOSTNAME_CONTROLLER2_PORT2

SERVERHOSTNAME2_HBA0__3PARHOSTNAME
member SERVERHOSTNAME2_HBA0
member 3PARHOSTNAME_CONTROLLER1_PORT1
member 3PARHOSTNAME_CONTROLLER1_PORT2
member 3PARHOSTNAME_CONTROLLER2_PORT1
member 3PARHOSTNAME_CONTROLLER2_PORT2

Then add those zones to the zoneset and activate it. Brocade should support the notion of device aliases or fcaliases. Use them, or you will hate life when you go to troubleshoot and find yourself staring at WWPNs instead of something meaningful to you. Repeat this process for the B fabric.

Don't be tempted to throw all your servers in one big happy zone and call it a day as this can cause problems for you down the road.

edit: I could give you CLI syntax for a Cisco MDS, but I haven't touched a Brocade in a good long while. Should be mostly the same, though.
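From memory, the MDS version of the first zone above looks roughly like this (the VSAN number and pWWNs are placeholders; double-check against the NX-OS docs for your release):

```
device-alias database
  device-alias name SERVERHOSTNAME1_HBA0 pwwn 21:00:00:24:ff:00:00:01
  device-alias name 3PARHOSTNAME_CONTROLLER1_PORT1 pwwn 20:01:00:02:ac:00:00:01
device-alias commit

zone name SERVERHOSTNAME1_HBA0__3PARHOSTNAME vsan 10
  member device-alias SERVERHOSTNAME1_HBA0
  member device-alias 3PARHOSTNAME_CONTROLLER1_PORT1

zoneset name FABRIC_A vsan 10
  member SERVERHOSTNAME1_HBA0__3PARHOSTNAME
zoneset activate name FABRIC_A vsan 10
```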

quote:

You need to know how zoning is configured on your switches. You are most likely using soft zoning (cause no one uses hard/port zoning.. but be sure). With soft zoning, you are going to build a zone on your switch which will be a "container" that matches WWNs on the SAN ports to the WWNs on your VM hosts. Anyways, as long as your zones contain the WWNs for the ports you want to connect, the FC protocol will take care of everything else.... including multipathing/login/etc

I hate to be a pedantic dickface, but hard zoning/soft zoning really refers to how zoning gets enforced (is it just via the FCNS, or is the ASIC actually prevented from forwarding the traffic?). Even when you do WWN/alias-based zoning it's still enforced in hardware on Brocade and MDS switches (and probably QLogic and other OEM brands as well). People often mistake hard zoning for port-based zoning (which, I agree, you should not do!).

That's the end of my pedantry!

edit 2: here's an article that explains it for me! http://searchstorage.techtarget.com/tip/Zoning-part-2-Hard-zoning-vs-soft-zoning

1000101 fucked around with this message at 19:30 on May 7, 2015

1000101
May 14, 2003

Brocade FabricOS! In some ways I like it better than MDS, but yeah, you've got the gist of it. Brocade's GUI fabric manager is leaps and bounds better than DCNM, though.

If you can, consider adding some aliases for your devices to make those screens a little more readable.

Also, a reminder that you probably have 2 switches that aren't connected to each other, so you'll need to repeat the exercise on both sides. This is by design, to make sure you have redundancy in your storage network (if you screw up zoning somehow, for example, your servers will still have a path available).

1000101
May 14, 2003


kiwid posted:

How come Nimble isn't in the OP? What are Goon's opinions on it?

The OP was written a few years ago and I haven't had time to be a good curator. We like Nimble very much (speaking for the company I work for, not goons in general) because it's easy to use and it generally performs well at a pretty solid price point.

edit:
My god I wrote that poo poo in 2008. Is there interest in a refresh?

1000101
May 14, 2003

RAID 10 or passthrough for VMware's VSAN.

1000101
May 14, 2003


adorai posted:

why can't you just use a few robocopy threads to do it? Get them close to synced and then schedule a few hours to finalize.

This is pretty much what we've done for these large migrations. You probably don't have 100TB of change every day, so pre-stage as much as you can, then just do a final cutover.
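For reference, the pre-stage/cutover pattern with robocopy looks something like this (paths, thread count, and log locations are placeholders):

```
rem Pre-stage: copy everything, keep retries short, run multithreaded
robocopy \\oldfiler\share \\newfiler\share /E /COPYALL /MT:32 /R:1 /W:1 /LOG:c:\logs\prestage.log

rem Re-run nightly; subsequent passes only move changed files

rem Final cutover (source read-only): mirror to pick up deletes too
robocopy \\oldfiler\share \\newfiler\share /MIR /COPYALL /MT:32 /R:1 /W:1 /LOG:c:\logs\cutover.log
```

Be careful with /MIR on the cutover pass: it deletes anything on the destination that no longer exists on the source, which is exactly what you want here and exactly what you don't want if you point it at the wrong share.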

1000101
May 14, 2003


mayodreams posted:

I appreciate all of the input.

I know these systems are very much a balance of needs and workflows. In this case, there are two FC LUNs on SAS, the 9TB and a 1.5TB. The rest are SATA via iSCSI.

With our configuration, we are talking a lot of VM storage, so I am definitely concerned about the queue depth issues.

Large LUNs aren't necessarily a bad thing, depending on how many hosts you have, the size of your VMs, and most importantly whether your array supports VAAI. I have a number of customers on VMAX today that just use something like a 10TB datastore size.

You can track the queue via 'esxtop' during live troubleshooting and historically via vCenter performance charts.

The big metrics to watch out for though are going to be read and write latency.
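For reference, the interactive and batch ways to get at those numbers (field names as I remember them from recent ESXi builds, so verify against your version):

```
# Interactive: press 'd' (adapter view) or 'u' (device view) and watch
#   DAVG/cmd - latency from the device/array side
#   KAVG/cmd - time spent queued in the VMkernel
#   QUED     - commands currently sitting in the queue
esxtop

# Batch mode: one sample every 10 seconds for an hour, for offline analysis
esxtop -b -d 10 -n 360 > /tmp/esxtop-storage.csv
```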

1000101
May 14, 2003


KennyG posted:

FC Zoning.

Anyone use anything that's not the proprietary Brocade/Cisco tools that are insanely priced? I have a non-trivial zoneset to build (2 sites, 4 FC switches, VPLEXes, multiple RecoverPoint clusters, 8 storage arrays, and about 100 hosts). I currently use Excel, but simple mapping of groups to rules to generate single-initiator, single-target zones seems like a problem that was solved in 2004.

Ansible and a jinja2 template.
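A stdlib-only Python sketch of the idea; the real version would render a jinja2 template from an Ansible inventory, but the mapping of groups to zones is the same, and all the names below are placeholders:

```python
# Generate single-initiator zones from a simple inventory.
# In practice this would be a jinja2 template in an Ansible role;
# this stdlib-only sketch shows the same group-to-zone mapping.

INITIATORS = ["SERVERHOSTNAME1_HBA0", "SERVERHOSTNAME2_HBA0"]
TARGETS = [
    "3PARHOSTNAME_CONTROLLER1_PORT1",
    "3PARHOSTNAME_CONTROLLER1_PORT2",
    "3PARHOSTNAME_CONTROLLER2_PORT1",
    "3PARHOSTNAME_CONTROLLER2_PORT2",
]

def render_zones(initiators, targets, array="3PARHOSTNAME"):
    """One zone per initiator, each containing that initiator plus all targets."""
    zones = []
    for init in initiators:
        lines = ["{}__{}".format(init, array), "member {}".format(init)]
        lines += ["member {}".format(t) for t in targets]
        zones.append("\n".join(lines))
    return "\n\n".join(zones)

print(render_zones(INITIATORS, TARGETS))
```

Swap the inventory lists for vars files per fabric and you've replaced the Excel sheet; the output is just pasted (or pushed via Ansible's network modules) into the switch config.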


1000101
May 14, 2003


Aunt Beth posted:

Check out NetApp SolidFire line if all you're doing is iSCSI. Great performance and storage efficiency.

One nice thing about Pure is their Evergreen support program. Basically, if you stay current on support they'll keep your gear current without forklift hardware refreshes. It sounds insane, but it ends up being a great way to keep customers giving you money for your product.

I'd say it's probably the most interesting part of buying Pure.
