KillHour
Oct 28, 2007


Wicaeed posted:

Jesus Christ, this sounds like the line of thought from our company DBA Manager when it came time to rebuild our old billing environment.

I had to sit down and explain to him (with drawings and everything) how a loving RAID array works (with hotspares!) and how redundant controllers, network links, switches, etc. work to make the disk array as redundant as possible.

He still wanted us to buy a second SAN of the same make/type and use it as a hotspare, because reasons.

And then the turdnugget goes and builds a SQL cluster, but places two standalone MSSQL servers in front of it so clients connect to those instead of the cluster :psyduck:

Can I have this guy's contact info? I have some extremely expensive products he may be interested in.


YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Docjowles posted:

That's verbatim why my old boss forbade me from buying a SAN for our virtual environments. So instead we used a lovely hacked-up DRBD solution that failed all the loving time, but hey, at least we didn't have a "single point of failure" :shepface:

I hear it from time to time too, and it generally rests on a gross misunderstanding of both what the chassis actually does (it's all passive backplane connectors, not a single part that can fail) and what redundancy actually is, as Misogynist pointed out.

The "it's not redundant because it's all in one <thing>" objection applies to blade enclosures, racks, rooms, buildings, power grids, cities...

YOLOsubmarine fucked around with this message at 16:25 on Apr 21, 2014

Zephirus
May 18, 2004

BRRRR......CHK

Bitch Stewie posted:

Wow. I mean I really rate Synology, but no way would I want it as my production storage - no SLA, so when poo poo breaks you're basically on your own.

QNAP support is supposed to be even worse.


Confirming that, it's awful. And I've found the products to be very slow. The QNAP firmware is still md-raid underneath it all, so there's that.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

1000101 posted:

I'm thinking the same thing. Three ESXi servers and Essentials Plus, basically. Comedy vSAN option if there just has to be shared storage.

Edit: out here we'd just put it all in Amazon and use Office 365 for a business that size.

Yeah, the plan would be to toss Exchange into O365.

He wants the architecture to have a shared storage option, which I figure would be an MD3200i with dual SPs, plus a QNAP "backup" device for the worst case where both SPs fail; you could limp along on the QNAP for critical services until Dell comes out and repairs the SPs.

I'd like vSAN, but at $2.5k a proc? Wow, what was VMware/EMC thinking?

I mean, his plan would involve a primary QNAP using vSphere replication to a secondary QNAP, but hell, if he was relying on that, why not just get two beefy disk-filled hosts and vSphere replicate to a standby?

NippleFloss posted:

This is a fairly dumb statement, and it sounds like your friend has a really poor understanding of storage.

I understand where he is coming from, and I agree. However, without understanding and assessing the risk management needs of the SMB, it will be impossible to make a cost-effective solution. The chance of both SPs blowing, or one SP taking out the other, does exist, but it's extremely unlikely. Even if both go, so long as you have a plan in place to hit an acceptable recovery time for both the client and your practice, where is the issue?
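
Back of the envelope for that recovery time argument - every number below is an assumption I'm pulling out of the air for illustration, not a measurement:

code:
# Rough sketch: can you hit the agreed recovery time by limping on the QNAP
# if both SPs on the MD died? All values here are assumed placeholders.
critical_vm_footprint_gb = 500        # assumed size of the critical VMs
qnap_read_mb_per_s = 200              # assumed sequential read rate over iSCSI/GbE
register_and_boot_min = 30            # assumed time to re-register and boot VMs

warmup_min = critical_vm_footprint_gb * 1024 / qnap_read_mb_per_s / 60
rough_rto_min = warmup_min + register_and_boot_min
print(f"rough RTO on the QNAP: {rough_rto_min:.0f} minutes")

agreed_rto_min = 240                  # assumed recovery time agreed with the client
print("within the agreed RTO" if rough_rto_min <= agreed_rto_min else "blows the agreed RTO")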

Dilbert As FUCK fucked around with this message at 17:39 on Apr 21, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

If a consultant tried to sell me a reference architecture for a business based on prosumer stuff like QNAP or Synology I would fire them on the spot. Either pay for shared storage from a reputable vendor (your MD array suggestion with dual SPs is FINE) or just don't do shared storage, but don't put in bad hardware just to tick a meaningless checkbox for "better" redundancy. At the price point you're operating at there is no perfect solution so the best you can do is keep things simple and supportable.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

If a consultant tried to sell me a reference architecture for a business based on prosumer stuff like QNAP or Synology I would fire them on the spot. Either pay for shared storage from a reputable vendor (your MD array suggestion with dual SPs is FINE) or just don't do shared storage, but don't put in bad hardware just to tick a meaningless checkbox for "better" redundancy. At the price point you're operating at there is no perfect solution so the best you can do is keep things simple and supportable.

No I completely agree.

Probably just going to do a big dump of my plans in the what are you working on or vmware thread later tonight.

I don't mind using a QNAP for a this NAS is for backups or replicated data off the main prod MD/MSA, but not as a primary storage

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dilbert As gently caress posted:

I understand where he is coming from, and I agree. However, without understanding and assessing the risk management needs of the SMB, it will be impossible to make a cost-effective solution. The chance of both SPs blowing, or one SP taking out the other, does exist, but it's extremely unlikely. Even if both go, so long as you have a plan in place to hit an acceptable recovery time for both the client and your practice, where is the issue?

Well, the problem I have with that statement is that in basically every shared-chassis storage architecture the chassis has no effect on the SPs, so whether they are in one chassis or two or eight, they will fail in the same way. If an SP failure is going to blow up your other SP it will happen whether they are communicating through an interconnect across racks (I've had exactly this happen on a StorageTek 6140 array) or through an internal bus in a chassis backplane. Likewise, a power surge that fries the PSU on the chassis would also fry the PSUs on your two independent components. Any coupled architecture has some shared domain where a failure can propagate, but that is going to be a very low probability event (certainly lower than the probability of a single QNAP or Synology component failing and forcing VMs offline until the secondary copies can be spun up).
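
To put some made-up numbers on the "very low probability" part - the failure rates below are assumptions for illustration, not vendor figures:

code:
# Assumed annual failure rates, for illustration only.
p_sp = 0.02            # chance one SP fails in a year
p_coupled = 0.001      # chance of a chassis/backplane fault that takes out both SPs
p_nas = 0.05           # chance the single controller in a QNAP/Synology fails

# Ignoring repair windows: the dual-SP array only goes fully dark if both SPs
# fail independently, or a coupled fault hits the shared domain.
p_array_dark = p_sp ** 2 + p_coupled
print(f"dual-SP array fully offline: ~{p_array_dark:.4f} per year")
print(f"single-controller NAS down:  ~{p_nas:.4f} per year")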

I know you get this, I'm just really baffled by your friend's logic and how someone who has evidently been in the business for a while can think that way.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

I hear it from time to time too, and it generally rests on a gross misunderstanding of both what the chassis actually does (it's all passive backplane connectors, not a single part that can fail) and what redundancy actually is, as Misogynist pointed out.

The "it's not redundant because it's all in one <thing>" objection applies to blade enclosures, racks, rooms, buildings, power grids, cities...
For what it's worth, I have actually seen a passive backplane outright fail on an IBM DS, though I never got a response from the vendor on exactly how that happened. I raised a shitstorm when IBM failed to learn from this and actually removed the contra-rotating loop configuration from their V7000/V3700 kit because it was "too confusing."

Vulture Culture fucked around with this message at 18:28 on Apr 21, 2014

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:


I know you get this, I'm just really baffled by your friend's logic and how someone who has evidently been in the business for a while can think that way.

I think it's a case of "engineer tunnel vision": everything looks awesome on paper, the way things SHOULD work with a price to beat, while a gap keeps growing between that and what it actually can and will do.

Misogynist posted:

For what it's worth, I have actually seen a passive backplane outright fail on an IBM V7000, though I never got a response from the vendor on exactly how that happened. I raised a shitstorm when IBM failed to learn from this and actually removed the contra-rotating loop configuration from their V7000/V3700 kit because it was "too confusing."

This is why we both really hate single-chassis blade solutions.

Dilbert As FUCK fucked around with this message at 18:16 on Apr 21, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

For what it's worth, I have actually seen a passive backplane outright fail on an IBM DS, though I never got a response from the vendor on exactly how that happened. I raised a shitstorm when IBM failed to learn from this and actually removed the contra-rotating loop configuration from their V7000/V3700 kit because it was "too confusing."

I've seen it happen on arrays that have been improperly packed and moved very indelicately. Connectors get bent out of shape or wiggle loose from the motherboard and things start behaving strangely when you boot the system up. But then, that happens with any piece of computer equipment that you drop out the back of a truck. Obviously it's theoretically possible that an array could just have a spontaneous failure of the passive backplane (usually there is redundancy there too, so multiple failures, really) but it's such a low probability occurrence that worrying enough about it to build it into your design is going to be silly for 99.999% of customers. And those customers who do need to worry about it are going to have geo-redundant datacenters anyway, so the single chassis argument will be moot.

The passive backplane systems that NetApp sells meet the same five nines availability numbers as the separate chassis solutions because it's just not a common enough failure mode to cause a change of even .001% in the total numbers. I'd wager that the same is true of basically every vendor that does the same, as well as all of the blade chassis and networking gear (Nexus 7k and MDS switches, for instance) that use a similar design. It's mostly just FUD and engineers over-thinking things.
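
For reference, the arithmetic behind those availability figures:

code:
# Minutes of allowed downtime per year at three, four, and five nines.
for availability in (0.999, 0.9999, 0.99999):
    downtime_min = (1 - availability) * 365.25 * 24 * 60
    print(f"{availability:.5%} -> {downtime_min:6.1f} minutes/year")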

YOLOsubmarine fucked around with this message at 19:20 on Apr 21, 2014

Wicaeed
Feb 8, 2005
Is there anyone here that has production experience with EMC's ScaleIO product?

I'm specifically looking for information regarding mixing different hard drive types within the same physical chassis, as well as how ScaleIO works when mixing hardware (same server vendor but different HW generations).

parid
Mar 18, 2004
Looks like I might be getting into the HPC business soon. I have mostly been a NetApp admin so far. Any recommendations on what technologies/architectures to start learning about? I hear a lot about paralyzed file systems.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

parid posted:

I hear a lot about paralyzed file systems.

Heh.

Lustre is pretty popular. GPFS if you do business with IBM. Ceph if you're interested in emerging technologies.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

parid posted:

I hear a lot about paralyzed file systems.

This is awesome.

Pile Of Garbage
May 28, 2007



parid posted:

Looks like I might be getting into the HPC business soon. I have mostly been a NetApp admin so far. Any recommendations on what technologies/architectures to start learning about? I hear a lot about paralyzed file systems.

You lucky bastard. Big hardware gives me a big boner. IBM have a new draft of their "SONAS Concepts, Planning Architecture, and Planning Guide" Redbook which might be worth a look: http://www.redbooks.ibm.com/redpieces/pdfs/sg247963.pdf

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun

parid posted:

Looks like I might be getting into the HPC business soon. I have mostly been a NetApp admin so far. Any recommendations on what technologies/architectures to start learning about? I hear a lot about paralyzed file systems.
I work in HPC, I don't know if you'll still be in storage, but I'd get a little familiar with the main HPC/storage things I guess. Parallel filesystems, infiniband, SAS/fiber channel, cluster management things (xcat, etc.), MPI, job schedulers, your favorite monitoring/alerts framework, etc. HPC is generally IBM, Cray, SGI, and Dell's world, excluding some smaller integrators. So if you have an inkling of what type of system you have or will have you can start research on some of their specific offerings.

Our storage is 76 Netapp E5400's (IBM dcs3700), so that part may be familiar!

A "paralyzed" filesystem is an issue we see a lot, usually caused by some user job triggering the OOM-killer on a node (or hundreds of nodes). It's really the filesystem being "delayed for recovery" while GPFS tries to figure out what happened to the node and what to do with the open files and all these tokens that got orphaned. It's not a very fast process and can result in things like a specific file, directory, or entire filesystem being "hung" until the recovery finishes.

parid
Mar 18, 2004

The_Groove posted:

I work in HPC, I don't know if you'll still be in storage, but I'd get a little familiar with the main HPC/storage things I guess. Parallel filesystems, infiniband, SAS/fiber channel, cluster management things (xcat, etc.), MPI, job schedulers, your favorite monitoring/alerts framework, etc. HPC is generally IBM, Cray, SGI, and Dell's world, excluding some smaller integrators. So if you have an inkling of what type of system you have or will have you can start research on some of their specific offerings.

Our storage is 76 Netapp E5400's (IBM dcs3700), so that part may be familiar!

A "paralyzed" filesystem is an issue we see a lot, usually caused by some user job triggering the OOM-killer on a node (or hundreds of nodes). It's really the filesystem being "delayed for recovery" while GPFS tries to figure out what happened to the node and what to do with the open files and all these tokens that got orphaned. It's not a very fast process and can result in things like a specific file, directory, or entire filesystem being "hung" until the recovery finishes.

I don't have a lot of details. I wouldn't take responsibility for a whole cluster, but I might be asked to help them with storage, and I'd like to be useful if they do. Existing systems are getting long in the tooth and haven't been meeting their needs well. It's all NFS right now. I wouldn't be surprised if it was time to step up to something more specific to their use.

I have gotten the E series pitches before, so I'm familiar with their architecture. NetApp is a strong partner for our traditional IT needs, so I'm sure we will be talking to them at some point. I don't want to just assume they are going to be the best fit due to our success with a different need.

What kind of interconnects do you see? Between processing nodes and storage? Sounds like a lot of block-level stuff. Is that just due to performance drivers?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

parid posted:

I don't have a lot of details. I wouldn't take responsibility for a whole cluster, but I might be asked to help them with storage, and I'd like to be useful if they do. Existing systems are getting long in the tooth and haven't been meeting their needs well. It's all NFS right now. I wouldn't be surprised if it was time to step up to something more specific to their use.

I have gotten the E series pitches before, so I'm familiar with their architecture. NetApp is a strong partner for our traditional IT needs, so I'm sure we will be talking to them at some point. I don't want to just assume they are going to be the best fit due to our success with a different need.

What kind of interconnects do you see? Between processing nodes and storage? Sounds like a lot of block-level stuff. Is that just due to performance drivers?

No, block-level isn't really appropriate for HPC systems; multi-node access at the block level falls over badly for more than a few nodes. Lustre and GPFS expose a POSIX-like file system to all clients. For the end user, a GPFS or Lustre installation will look like an NFS or local file system.

A standard Lustre* installation would look like E-series** JBODs, connected via SAS to dedicated Lustre servers, which are exposed via the HPC interconnect*** to the main cluster processing nodes.**** If you're using Lustre, you'll need a metadata server with some high-IOPS disks connected to the HPC interconnect.

* (or GPFS native raid)
** (48-72 3.5" 7200 RPM SATA drives and enclosure with a passive backplane and SAS expander)
*** (Infiniband or 10GbE if you're lucky, GbE if you're not)
**** which will either use a native client (kernel modules or FUSE) or NFS exported from a server that has the native client mounted.
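
Rough sizing math for one of those JBOD-backed building blocks; drive size, per-drive streaming rate, and RAID layout are all assumptions for illustration:

code:
drives = 60                        # enclosure somewhere in the 48-72 drive range
drive_tb = 4                       # assumed 4 TB 7200 RPM SATA drives
stream_mb_per_s = 150              # assumed sequential rate per drive
data_disks, parity_disks = 8, 2    # assumed RAID-6 8+2 layout

usable_tb = drives * drive_tb * data_disks / (data_disks + parity_disks)
ceiling_gb_per_s = drives * stream_mb_per_s / 1024
print(f"usable capacity: ~{usable_tb:.0f} TB")
print(f"disk-side streaming ceiling: ~{ceiling_gb_per_s:.1f} GB/s (before SAS/network limits)")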

I would strongly suggest looking at preconfigured Lustre or GPFS appliances; conveniently, NetApp sells one:

http://www.netapp.com/us/media/ds-3243.pdf

but you can get them elsewhere.

parid
Mar 18, 2004
That's a great head start. I'll get to googling. Thanks!

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun

parid posted:

What kind of interconnects do you see? Between processing nodes and storage? Sounds like a lot of block-level stuff. Is that just due to performance drivers?
The ones I've worked with are 10GbE and "FDR-14" Infiniband @ 56Gbps. In the 10GbE setup the storage nodes (gpfs NSDs) had dual bonded 10G interfaces, but the clients were nodes in a power6 cluster that had weird routing and routed through 4 "I/O" nodes to get to the storage nodes. With infiniband we have a full fat-tree topology for the compute cluster and an extra switch hanging off the side where the storage nodes are connected. GPFS itself uses TCP/IP over 10GbE and both TCP/IP and RDMA over Infiniband.
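
Rough translation of those link rates into usable bytes per second; the overhead factors are approximations, not measurements:

code:
links_gbps = {
    "dual bonded 10GbE": 2 * 10,
    "FDR InfiniBand (4x)": 56,
}
efficiency = {
    "dual bonded 10GbE": 0.90,       # assumed TCP/IP + bonding overhead
    "FDR InfiniBand (4x)": 64 / 66,  # 64b/66b line encoding
}
for name, gbps in links_gbps.items():
    usable_gb_per_s = gbps * efficiency[name] / 8
    print(f"{name}: ~{usable_gb_per_s:.1f} GB/s usable")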

I came across a pretty good Parallel I/O presentation from a few years back that gives a decent overview of HPC storage: http://www.nersc.gov/assets/Training/pio-in-practice-sc12.pdf

orange sky
May 7, 2007

So, as a part of my company plan I must get a cert on storage. Which should I get, the EMC E10-001v2 or the HP ATP Storage Solutions V1? As in, who has the greatest market share? Or the best products? :confused: I have no bias towards either company, so I have no idea.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

orange sky posted:

So, as a part of my company plan I must get a cert on storage. Which should I get, the EMC E10-001v2 or the HP ATP Storage Solutions V1? As in, who has the greatest market share? Or the best products? :confused: I have no bias towards either company, so I have no idea.

EMC is the largest storage vendor by a wide margin, however that test is not very product focused from what I understand. If your goal isn't to certify in a particular technology then you might be better off with a vendor neutral SNIA cert. What kind of hardware does your company use?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

I'd lean towards the EMC exam.

Sickening
Jul 16, 2007

Black summer was the best summer.
Speaking of EMC, I am having a hell of a time with what should be one of the most basic things on my new VNX. For some reason I can't find a single place in Unisphere to change the network settings of my management port. Right now I am connecting to the service port, but I can't find the drat thing in the GUI.

orange sky
May 7, 2007

NippleFloss posted:

EMC is the largest storage vendor by a wide margin, however that test is not very product focused from what I understand. If your goal isn't to certify in a particular technology then you might be better off with a vendor neutral SNIA cert. What kind of hardware does your company use?

About half and half between EMC (VNX up to Symmetrix) and HP (up to 3PAR); that's why I don't know which to pick. I was leaning towards EMC because I like their book on storage (I'd started reading it before they told me I was gonna cert/work on this), but I see HP storage a lot in every data center I go to. I'll probably just go EMC because I like the company better.

E: and yes, it shouldn't be very product-focused, so I can just switch later I guess.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

EMC owns much more of the storage market than HP. If you're picking one for resume purposes, go EMC. It's much more common.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I read the EMC storage book and it used EMC equipment in examples but the concepts were all generalized.

Amandyke
Nov 27, 2004

A wha?

Sickening posted:

Speaking of EMC, I am having a hell of a time with what should be one of the most basic things on my new VNX. For some reason I can't find a single place in Unisphere to change the network settings of my management port. Right now I am connecting to the service port, but I can't find the drat thing in the GUI.

I am going to assume that this is a block only system.

There are a couple of ways to do it. With Unisphere, you would want to log in, click on System, then Hardware, then, on the right, click on SPA Network Settings. The window that pops up should allow you to change the IP on SP A, then just do the same for SP B.

Via CLI:
naviseccli -h ***SPA_IP*** -user sysadmin -password sysadmin -scope 0 networkadmin -set -ipv4 -address ***NEW_IP*** -subnetmask ***NEW_SUBNET*** -gateway ***NEW_GATEWAY***

Then just do the same for SPB, pointing the command at SPB's ip address.
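
And if you want to script both SPs in one pass, a rough sketch along the same lines - the IPs, mask, and gateway below are hypothetical placeholders, and it assumes naviseccli is on your PATH:

code:
import subprocess

# current management IP -> (new IP, new subnet mask, new gateway); placeholders only
changes = {
    "10.0.0.11": ("10.0.1.11", "255.255.255.0", "10.0.1.1"),   # SP A
    "10.0.0.12": ("10.0.1.12", "255.255.255.0", "10.0.1.1"),   # SP B
}
for current_ip, (new_ip, mask, gateway) in changes.items():
    subprocess.run([
        "naviseccli", "-h", current_ip, "-user", "sysadmin", "-password", "sysadmin",
        "-scope", "0", "networkadmin", "-set", "-ipv4",
        "-address", new_ip, "-subnetmask", mask, "-gateway", gateway,
    ], check=True)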

Sickening
Jul 16, 2007

Black summer was the best summer.

Amandyke posted:

I am going to assume that this is a block only system.

There are a couple of ways to do it. With Unisphere, you would want to log in, click on System, then Hardware, then, on the right, click on SPA Network Settings. The window that pops up should allow you to change the IP on SP A, then just do the same for SP B.

Via CLI:
naviseccli -h ***SPA_IP*** -user sysadmin -password sysadmin -scope 0 networkadmin -set -ipv4 -address ***NEW_IP*** -subnetmask ***NEW_SUBNET*** -gateway ***NEW_GATEWAY***

Then just do the same for SPB, pointing the command at SPB's ip address.

I figured that part out, at least. Them being called virtual adapters in that settings menu threw me off. The issue I am having now is that I can ping those addresses but not access Unisphere through them.

evil_bunnY
Apr 2, 2003

Sickening posted:

I figured that part out, at least. Them being called virtual adapters in that settings menu threw me off. The issue I am having now is that I can ping those addresses but not access Unisphere through them.
I don't know how it works out on your particular piece of hardware, but generally it's good practice not to make your management system available on the network that moves data.

parid
Mar 18, 2004
Real long shot here: has anyone set up IntelliSnap with Commvault 10 against a clustered ONTAP NetApp? Commvault's documentation has a lot of "check this box here" type instructions but nothing about the required configuration on the array, how it connects, with what protocols (pretty sure it's HTTP), or from where. This is all listed as supported, but it appears much of the config is hidden or hard-coded and not documented anywhere. Ultimately we want to snap Exchange 2013 DB iSCSI LUNs, but we're just doing file system now to see if we can get that working.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

Got a question that may be better suited for the Virtualization thread.

One of the sales guys came to me with an interesting idea today that I don't think is possible, but we'll see.

He has a client with a number of low/mid/high-tier FC storage systems, some no-names, some 3PAR, etc. He wants to create a vSphere VSAN cluster to act as a front end for all of this storage. About 60% of the hosts attached to this storage (6-8 different storage systems, all of different types) are ESXi of some sort, but the remaining are physical boxes, mostly Oracle DB servers.

His plan is to somehow create LUNs on the vSphere VSAN that can be presented to the physical devices through the FC network. I hadn't heard anything about this in the developments of VSAN, so I'm skeptical, but I really only deal with our IBM V7000s, so my storage experience isn't very widespread.

I had suggested some kind of storage virtualization appliance, like the IBM SVC, but cost is a big factor so they're trying to do this on the cheap (big surprise). Any ideas if there is any substance here?

madsushi
Apr 19, 2009

Baller.
#essereFerrari

mattisacomputer posted:

Got a question that may be better suited for the Virtualization thread.

One of the sales guys came to me with an interesting idea today that I don't think is possible, but we'll see.

He has a client with a number of low/mid/high-tier FC storage systems, some no-names, some 3PAR, etc. He wants to create a vSphere VSAN cluster to act as a front end for all of this storage. About 60% of the hosts attached to this storage (6-8 different storage systems, all of different types) are ESXi of some sort, but the remaining are physical boxes, mostly Oracle DB servers.

His plan is to somehow create LUNs on the vSphere VSAN that can be presented to the physical devices through the FC network. I hadn't heard anything about this in the developments of VSAN, so I'm skeptical, but I really only deal with our IBM V7000s, so my storage experience isn't very widespread.

I had suggested some kind of storage virtualization appliance, like the IBM SVC, but cost is a big factor so they're trying to do this on the cheap (big surprise). Any ideas if there is any substance here?

This definitely won't work because the FC LUNs from the mishmash of storage won't appear to be presented by controllers on the HCL. Even locally attached disks that aren't on the HCL don't work. See: trying to do anything with AHCI/Intel ICH10. vSAN is very particular about the hard drives it uses. Also you won't have any SSDs if you're all LUN-based and VMware requires at least one SSD per host in a vSAN.

THEN, on top of that, VMware explicitly does not recommend that you use it as a SAN. They don't want you to try to beef up a couple of boxes to act as a pseudo-SAN and then share that out to clients. Not designed for that use case.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

madsushi posted:

This definitely won't work because the FC LUNs from the mishmash of storage won't appear to be presented by controllers on the HCL. Even locally attached disks that aren't on the HCL don't work. See: trying to do anything with AHCI/Intel ICH10. vSAN is very particular about the hard drives it uses. Also you won't have any SSDs if you're all LUN-based and VMware requires at least one SSD per host in a vSAN.

THEN, on top of that, VMware explicitly does not recommend that you use it as a SAN. They don't want you to try to beef up a couple of boxes to act as a pseudo-SAN and then share that out to clients. Not designed for that use case.

Okay, so right off the bat VSAN won't work if it detects the LUNs are from FC storage and not local SAS controllers?

As for the SSDs, I know they're buying new hosts as well, which will have some local storage, including SSDs for that part of the requirement. Even if VMware doesn't recommend using it as a SAN for non-VMware applications, is it even possible? I can't see anywhere that it's even possible to present VSAN datastores as a LUN to another FC device over the fabric; if that's the case, it would kill this ridiculous idea right there and make them go the correct route.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

mattisacomputer posted:

Okay, so right off the bat VSAN won't work if it detects the LUNs are from FC storage and not local SAS controllers?

As for the SSDs, I know they're buying new hosts as well, which will have some local storage, including SSDs for that part of the requirement. Even if VMware doesn't recommend using it as a SAN for non-VMware applications, is it even possible? I can't see anywhere that it's even possible to present VSAN datastores as a LUN to another FC device over the fabric; if that's the case, it would kill this ridiculous idea right there and make them go the correct route.

Correct, vSAN won't let you take non-local disks and put them into a disk group, and there are even strict restrictions on which local controllers it can use.

To present vSAN space as a LUN, you'd need to make a VM in the vSAN, and then have that VM present its own local HD as a LUN. VMware isn't going to natively present a LUN to anything.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

Okay, I think I convinced him to kill the VMware VSAN idea. Moving forward, what would be the low cost version of something like an IBM SVC to virtualize/centralize all of this assorted FC storage?

Thanks Ants
May 21, 2004

#essereFerrari


I really don't want to say Openfiler, but…

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

mattisacomputer posted:

Okay, I think I convinced him to kill the VMware VSAN idea. Moving forward, what would be the low cost version of something like an IBM SVC to virtualize/centralize all of this assorted FC storage?

v7000 :v:

Mr Shiny Pants
Nov 12, 2012

mattisacomputer posted:

Okay, I think I convinced him to kill the VMware VSAN idea. Moving forward, what would be the low cost version of something like an IBM SVC to virtualize/centralize all of this assorted FC storage?

Solaris?


mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.


That's what I recommended, as I know it can do the job, but Sales Guy's response was "they're not gonna spend that kind of money." Either way, not my problem, but the project now has me interested in what alternatives there are to IBM SVC / V7000 that would provide similar features. Openfiler looks like it would be a great fit, so Sales Guy is going to reach out to them for more info - is there something we should be wary about?



As for Solaris, I could see using the magic of ZFS to do this, but they're not going to want to bring in Unix, as the customer is going to want something they feel comfortable with. Not saying it's a good reason to not consider Solaris, I can just hear Sales Guy in my head turning it down.
