in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

KillHour posted:

This thread is probably my best bet because it's storage heavy, even if we aren't using SANs.

The company I work for is looking into OEMing servers to rebrand as storage appliances for video security (NVRs). I've been put in charge of doing research on how they should be built out, and who we should go through. Right now, I'm leaning towards HP, due to their warranty (and because we've been using them for a while).

I've also been looking at Supermicro, due to their lower cost and ease of rebranding. Also, the largest OEM in the industry (BCD Video) uses HP, so going with another vendor would help differentiate us.

Anyways, I had a few questions.

For people who have worked with Supermicro, how is their support/reliability? We're a 10-man shop, so we really don't want to have to spend a lot of time on support calls, and since this is for security, these things have to be rock-solid.

Supermicro's support process is time-intensive. Good results for end users require an engaged, active, and knowledgeable reseller.

quote:

For people that have done OEM work in the past, who is easiest to work with? I've done some work with Dell in the past when I worked at Ingram, and it didn't go very well.

Secondly, while most of the systems will be 20TB or less (which I could shove in a DL380 12-bay, no problem), we will probably need to accommodate systems as large as 200TB or more. I could either go with external DAS units or use something like the ProLiant 1xSL4540 to get the job done. Is there a good reason to go with one over the other, aside from cost and rack density? What is the densest system out there outside of SANs? I know Supermicro has a 72 LFF disk system, and I've seen them advertise a 90 LFF disk system (but I can't find it on their website, is it new?).

JBODs + external servers are going to be a bit less dense than a server with a ton of drives. How many are you planning on buying? There are other vendors that sell those form factors, but they generally don't deliver an HP/Dell/IBM support experience.
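
(To put the density side of that in numbers, here's a back-of-the-envelope drive-count sketch. The 4TB drive size and the RAID-6 group layout are assumptions for illustration only, not a recommendation.)

code:
# Rough drive-count estimate for a target usable capacity.
# Drive size and RAID-6 group layout are assumptions for illustration only.
import math

def drives_needed(usable_tb, drive_tb=4.0, raid_group=12, parity_per_group=2):
    usable_fraction = (raid_group - parity_per_group) / raid_group  # e.g. 10/12 of raw
    raw_tb = usable_tb / usable_fraction
    return math.ceil(raw_tb / drive_tb)

for usable in (20, 200):
    print(f"{usable} TB usable -> ~{drives_needed(usable)} x 4TB drives")
# 20 TB -> ~6 drives (fits a 12-bay DL380); 200 TB -> ~60 drives (most of a 72-bay chassis)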

quote:

Also, one of the biggest issues with large camera systems is disk throughput. I see systems all the time that use 6 or 8 15k SAS drives in a RAID 10 just for 24 hours of storage so that they can offload the video to the slower 7200 RPM drives at night when there's less recording happening. Milestone (the VMS we're using) actually requires a "live" array for this reason. Is there a reason not to use SSDs instead of 15k SAS for something like this? It seems less expensive, and if I use a PCIe card like this, I can even save some drive bays.

Be mindful of SSD write lifecycles.
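
(On the throughput question above, a rough sketch of camera ingest rate versus what a small live tier can sustain. Every figure in it, camera count, per-stream bitrate, and per-drive throughput, is an assumption for illustration.)

code:
# Sustained-write sketch: camera ingest rate vs. a small RAID 10 live tier.
# Camera count, per-camera bitrate, and per-drive throughput are assumed figures.
cameras = 100
mbps_per_camera = 4                       # assumed average per-stream bitrate (Mbit/s)
ingest_mb_s = cameras * mbps_per_camera / 8

drives = 8
per_drive_mb_s = 200                      # assumed sequential write for one 15k SAS drive
raid10_write_mb_s = (drives / 2) * per_drive_mb_s  # mirrored pairs: half the spindles count for writes

print(f"ingest: {ingest_mb_s:.0f} MB/s, RAID 10 sustained write: {raid10_write_mb_s:.0f} MB/s")
# Raw sequential numbers look comfortable; the catch is that many concurrent camera
# streams behave more like random writes, which is what pushes people to 15k spindles or SSDs.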


CrazyLittle
Sep 11, 2001

Clapping Larry

PCjr sidecar posted:

Be mindful of SSD write lifecycles.

Are SAS SSDs significantly better in that regard? How are the big vendors doing it?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

PCjr sidecar posted:

Be mindful of SSD write lifecycles.

To be fair they have gotten much better in recent years, and generally a mechanical drive will fail before an SSD reaches its write limit.

IIRC, when SSDs hit that write limit you can still do reads, so it's a matter of copying the data off.

KillHour
Oct 28, 2007


PCjr sidecar posted:

Supermicro's support process is time-intensive. Good results for end users requires an engaged, active, and knowledgeable reseller.

We're very knowledgeable and more than capable of building and supporting systems. That being said, we have about 10,000 cameras out there, so we need to have reliable systems that we won't have to touch often and can be fixed quickly if they do break, or we'll drown. Also, we don't have the luxury of billing by the hour for support. Most of our customers pay for a yearly support contract, and we need to handle those quickly or we can lose a lot of money.

If it takes a week to process an RMA, that won't work for us.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

CrazyLittle posted:

Are SAS SSDs significantly better in that regard? How are the big vendors doing it?

Somewhat better, not immune. Better manufacturing/testing tolerances, better controllers, more spare cells at listed capacity, overprovisioning.

Dilbert As FUCK posted:

To be fair they have gotten much better in recent years, and generally a mechanical drive will fail before an SSD reaches its write limit.

IIRC, when SSDs hit that write limit you can still do reads, so it's a matter of copying the data off.

A rule of thumb is that the better eMLC SSDs can handle 10 full writes per day over 3 years. I'd be concerned if KH's live tier is sized close to his daily write size, especially if using cheaper SSDs.

Failure mode depends on the drive and manufacturer. We've seen failing writes, failing reads, the drive disappearing entirely, or taking the entire SAS expander or controller offline (that was a fun one.)

KillHour posted:

We're very knowledgeable and more than capable of building and supporting systems. That being said, we have about 10,000 cameras out there, so we need to have reliable systems that we won't have to touch often and can be fixed quickly if they do break, or we'll drown. Also, we don't have the luxury of billing by the hour for support. Most of our customers pay for a yearly support contract, and we need to handle those quickly or we can lose a lot of money.

If it takes a week to process an RMA, that won't work for us.

Unless you maintain an inventory of cold spare parts or work with a third party fulfillment service you probably don't want to go with SuperMicro.

KillHour
Oct 28, 2007


PCjr sidecar posted:

A rule of thumb is that the better eMLC SSDs can handle 10 full writes per day over 3 years. I'd be concerned if KH's live tier is sized close to his daily write size, especially if using cheaper SSDs.

Failure mode depends on the drive and manufacturer. We've seen failing writes, failing reads, the drive disappearing entirely, or taking the entire SAS expander or controller offline (that was a fun one.)

Wouldn't that mean that if I fill the drive once per day (which is pretty much what would happen), then I should expect about 30 years? I... don't see that being a problem. Also, I'd obviously use RAID 1. I just don't want to be going out there every year to swap out a drive.

PCjr sidecar posted:

Unless you maintain an inventory of cold spare parts or work with a third party fulfillment service you probably don't want to go with SuperMicro.

Thanks for this.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

KillHour posted:

Wouldn't that mean that if I fill the drive once per day (which is pretty much what would happen), then I should expect about 30 years? I... don't see that being a problem. Also, I'd obviously use RAID 1. I just don't want to be going out there every year to swap out a drive.

It's not necessarily a linear relationship and it's not a guarantee, but an average. IIRC, that's with the enterprise Intel drives. Generally speaking, an expected workload within an order of magnitude of predicted lifetime would make me nervous. Check out the SSD megathread OP for more info on lifespan, write averaging, etc.

I've heard some anecdotal evidence that some consumer hardware RAID cards don't play well with SSDs, but I don't know much about that.
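
(The arithmetic behind that 10 DWPD rule of thumb, just to make the extrapolation explicit. As noted above, it's a linear, nominal estimate, not a guarantee.)

code:
# Endurance arithmetic for a drive-writes-per-day (DWPD) rating.
# The 10 DWPD over a 3-year warranty figure is the eMLC rule of thumb quoted above;
# the linear extrapolation below is illustrative only, not a guarantee.
def rated_full_writes(dwpd=10, warranty_years=3):
    return dwpd * 365 * warranty_years

def nominal_lifetime_years(fills_per_day, dwpd=10, warranty_years=3):
    return rated_full_writes(dwpd, warranty_years) / (fills_per_day * 365)

print(rated_full_writes())                  # 10950 rated full-drive writes
print(nominal_lifetime_years(1.0))          # fill once per day -> ~30 years, nominally
print(nominal_lifetime_years(1.0 / 3))      # 3:1 live-tier sizing -> ~90 years, nominally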

KillHour
Oct 28, 2007


PCjr sidecar posted:

It's not necessarily a linear relationship and it's not a guarantee, but an average. IIRC, that's with the enterprise Intel drives. Generally speaking, an expected workload within an order of magnitude of predicted lifetime would make me nervous. Check out the SSD megathread OP for more info on lifespan, write averaging, etc.

I've heard some anecdotal evidence that some consumer hardware RAID cards don't play well with SSDs, but I don't know much about that.

Well, we'd definitely be using enterprise RAID cards and SSDs for this. Our typical sizing for live drives is around 3:1 (meaning in the end, we're really only writing 1/3 of the disk per day), but could go with a larger ratio if it became a problem. 16 720p cameras would probably consume < 50GB/day in a standard installation, and I'd put those on a pair of 120 or 240GB SSDs in RAID 1.

Edit: I'll check out the megathread, but maybe I'll look into SLC instead of MLC. Thanks for the help.

KillHour fucked around with this message at 03:13 on Mar 5, 2014
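
(Plugging those numbers into the same rule of thumb as a sanity check. The ~50GB/day for 16 cameras is the figure from the post above; the single 240GB drive and the 10 DWPD rating are assumptions.)

code:
# Daily-write check for the mirrored SSD live tier described above.
# The 50 GB/day total for 16 x 720p cameras is the figure from the post;
# the single 240 GB drive and the 10 DWPD rating are assumptions.
daily_write_gb = 50
ssd_capacity_gb = 240            # capacity of one drive in the RAID 1 pair
dwpd_rating = 10                 # eMLC rule-of-thumb rating discussed earlier

actual_dwpd = daily_write_gb / ssd_capacity_gb
print(f"actual drive writes per day: {actual_dwpd:.2f}")             # ~0.21
print(f"headroom vs. the rating: {dwpd_rating / actual_dwpd:.0f}x")  # ~48x under the rated workload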

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
You probably don't actually need to worry about your SSDs wearing out, and if you are worried, they're cheap enough that you can just send replacements every other year.

Sickening
Jul 16, 2007

Black summer was the best summer.

Sickening posted:

Anybody have any experience with EMC's pricing model? We haven't made it to the stage of the itemized quote yet, but I am interested to know where the money is made. The usual suspect is always support, but I am really curious about the pricing of their FAST Cache disks.

Quoting from me posting in the wrong drat thread.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Sickening posted:

Anybody have any experience with EMC's pricing model? We haven't made it to the stage of the itemized quote yet, but I am interested to know where the money is made. The usual suspect is always support, but I am really curious about the pricing of their FAST Cache disks.

In what sense?

They have margin everywhere, just like any such vendor, but I recall support actually being quite untouchable in terms of discount.

Are the FAST Cache disks appearing too expensive?

Also, what size FAST Cache disks are you going for, and how many of them?

Sickening
Jul 16, 2007

Black summer was the best summer.

Vanilla posted:

In what sense?

They have margin everywhere, just like any such vendor, but I recall support actually being quite untouchable in terms of discount.

Are the FAST Cache disks appearing too expensive?

Also, what size FAST Cache disks are you going for, and how many of them?

3rd day on the job, so I haven't been given any specifics yet. We have been quoted something like 60k for a 20 gig usable array. I haven't been given any of the specs so far, so I was just curious. My only real storage experience has been with NetApp.

AlternateAccount
Apr 25, 2005
FYGM

Sickening posted:

3rd day on the job, so I haven't been given any specifics yet. We have been quoted something like 60k for a 20 gig usable array. I haven't been given any of the specs so far, so I was just curious. My only real storage experience has been with NetApp.

I will give you an array with 20GB usable for half that, sign with me! ;)

Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug

Linux Nazi posted:

New toys arrived. Fully outfitted 5700s and frame-licensed vplex. x2 of course.

Still waiting on the bump up from 1gb to 10gb interconnects between our datacenters (which is essentially just updating the bandwidth statement), but we start going over the vplex design tomorrow with EMC to whiteboard and see how we need to cable everything up.

Should be interesting. The idea is a VMWare stretched / metro cluster. We are 99% virtualized, and we already have layer 2 spanning courtesy of OTV. With vplex taking care of the storage side, we can essentially put one datacenter's ESXi hosts into maintenance mode and go to lunch while we wait for things to gracefully vmotion to the other side of town.

Right now we are all RecoverPoint and SRM, it works pretty well, but failovers are a huge event.


I'm not sure if anybody is interested to hear about our VPLEX implementation now that we've completed it, but if there's any questions feel free to ask.

A quick rundown of our environment:

We are running 2 small datacenters that we manage, 40 miles apart in the Phoenix Metro. We run 10gb interconnect between DCs specifically to run VPLEX Metro (this alone took nearly 4 months to provision due to some difficulties with our provider...). We also have a pair of 4gb FC interconnects that facilitate cross-presenting the VPLEXes so they can behave as active/standby paths. As far as we've been told by EMC, we are the only company in the southwest region doing cross-presenting on VPLEX Metro.

We are 99% virtualized on VMWare 5.5 in a new VMWare cluster built specifically to leverage the VPLEX-provisioned datastores. We are right at 32 hosts, all gen8 HP BL460s, 256GB each and 10gb FlexFabric connectivity to our Cisco Nexus 7010 network and 9513 fabric switches.

Our only physical systems are 6x MSSQL 2008R2 WSCS nodes, which also use VPLEX provisioned luns. These are also on gen8s, only they run .5TB of memory in each.

Our underlying storage is a pair of (roughly) 200TB VNX 5700s. We are adding another 40TB soon.

We use OTV for layer 2 spanning, so we have IP portability that permits us to swing VMs seamlessly between datacenters.

I'm one of three members on our (small) Infrastructure team, and am in charge of the storage layer, VMWare, and microsoft tech (SQL, Exchange, AD). Though I have my CCNA, I do not involve myself with the network layer beyond basic management and on-call duties.


I would absolutely not not not consider myself a fantastic storage guy, I know enough to design and manage what I've got, largely by myself. I'm far stronger on the VMWare and MS side of things.

I know VPLEX gets brought up from time to time, and if anybody wants to hear the realities of implementing it, feel free to ask.


As far as seeing it at work: It's like some voodoo poo poo. When we demonstrated failing SQL between data centers without losing a single ping to the cluster, jaws hit the floor. Doing SQL DR the old way (with RPAs and 2 SQL clusters) was a messy, all-day / all-hands affair. Wildly painful.

VMWare works exactly as it should. VMs that participate in our spanned vlans are free to move between hosts without any limits. We leave DRS in fully automated mode, and literally have no idea which datacenter VMs are living in at any particular time. We pin non-OTV VMs to one side using simple DRS groups. We demonstrated this to the company originally using a terminal server running an HD YouTube video as we migrated it 40 miles away to our other datacenter, and didn't lose a single frame.


edit: forgot to include the not when speaking to my storage skills. Eek.

Blame Pyrrhus fucked around with this message at 00:01 on Mar 15, 2014
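
(For anyone wondering why 40 miles is comfortable for a stretched setup like this, a rough latency sanity check. The ~5 ms round-trip figure is the commonly cited budget for metro-distance synchronous setups; the fiber speed and straight-line distance are idealizations.)

code:
# Idealized round-trip time over ~40 miles of fiber vs. a typical metro/synchronous budget.
# The ~5 ms RTT budget is the commonly cited limit for this kind of setup; the fiber
# speed and the straight-line distance are idealizations.
distance_km = 40 * 1.609              # ~64 km between the two datacenters
fiber_km_per_ms = 200                 # light in fiber travels roughly 200 km per ms
rtt_ms = 2 * distance_km / fiber_km_per_ms

print(f"ideal fiber RTT: {rtt_ms:.2f} ms")   # ~0.64 ms, well inside a ~5 ms budget
# Real circuits add switching and routing hops, so actual RTT is higher, but at this
# distance the margin for synchronous writes and vMotion is still generous.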

Wicaeed
Feb 8, 2005

Linux Nazi posted:

Seriously Neat poo poo

Oh god I want your job :swoon:

evol262
Nov 30, 2010
#!/usr/bin/perl

Linux Nazi posted:

I'm not sure if anybody is interested to hear about our VPLEX implementation now that we've completed it, but if there's any questions feel free to ask.

Who's doing this in Phoenix, if you don't mind saying? There were very few competent companies last time I was on the market here.

Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug

evol262 posted:

Who's doing this in Phoenix, if you don't mind saying? There were very few competent companies last time I was on the market here.

We are doing it in-house. We deal mostly with a specific type of lending, and as a result write a lot of our own applications in-house as well.

I'm not wanting to sound paranoid, just that the internet is still the internet and I'd rather not mention the company.

That being said, we've demonstrated the tech to a few other local guys, we like showing it off. If anybody in the area ever wants to see it in action, just hit me up.

evol262
Nov 30, 2010
#!/usr/bin/perl

Linux Nazi posted:

We are doing it in-house. We deal mostly with a specific type of lending, and as a result write a lot of our own applications in-house as well.

I'm not wanting to sound paranoid, just that the internet is still the internet and I'd rather not mention the company.

That being said, we've demonstrated the tech to a few other local guys, we like showing it off. If anybody in the area ever wants to see it in action, just hit me up.

Phoenix being what it is, this gives me a pretty good idea of who you work for, but I am still local and may want to see it in action someday...

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Sickening posted:

3rd day on the job, so I haven't been given any specifics yet. We have been quoted something like 60k for a 20 gig usable array. I haven't been given any of the specs so far, so I was just curious. My only real storage experience has been with NetApp.

Depends entirely on what you loaded it with.

A friend of mine vouches for doing a bunch of SSDs as FAST Cache and then loading a VNXe up with as many 3TB drives as it takes. Granted, it is fast until you exhaust the cache or have to fetch a bunch of cold data. We get into a few discussions about that in classes.

If the thing is 20TB of 10k/15k storage plus FAST Cache and priority support, 60K isn't too out of the question on a VNX 5k; a bit high, mind you, but obtainable.

Are you buying direct or going through a distributor like Ingram Micro or Tech Data?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
So Dilbert advised looking into DataDomain. Spoke with EMC and they pretty much advised giving them raw backups and letting them compress.

Does anyone actually do this with VM backup software like Veeam/PHD Virtual?

I think I'd rather have some raw storage and let my software do it for cheaper.

parid
Mar 18, 2004

Moey posted:

So Dilbert advised looking into DataDomain. Spoke with EMC and they pretty much advised giving them raw backups and letting them compress.

Does anyone actually do this with VM backup software like Veeam/PHD Virtual?

I think I'd rather have some raw storage and let my software do it for cheaper.

The rub is who does the dedupe better. Datadomain's is pretty good.
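
(The way that trade-off usually gets evaluated is cost per protected TB after dedupe. A quick sketch of the comparison, with every price and ratio made up purely for illustration.)

code:
# Effective cost per protected TB: dedupe appliance vs. raw disk plus software dedupe.
# Every price and ratio below is a made-up placeholder purely to show the comparison.
def cost_per_protected_tb(price_usd, usable_tb, dedupe_ratio):
    return price_usd / (usable_tb * dedupe_ratio)

appliance = cost_per_protected_tb(price_usd=60_000, usable_tb=20, dedupe_ratio=10)
diy_disk  = cost_per_protected_tb(price_usd=15_000, usable_tb=40, dedupe_ratio=4)

print(f"appliance: ${appliance:,.0f}/TB protected")   # $300/TB with these placeholders
print(f"raw disk + software: ${diy_disk:,.0f}/TB")    # ~$94/TB with these placeholders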

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

So Dilbert advised looking into DataDomain. Spoke with EMC and they pretty much advised giving them raw backups and letting them compress.

Does anyone actually do this with VM backup software like Veeam/PHD Virtual?

I think I'd rather have some raw storage and let my software do it for cheaper.

If you have Essentials Plus or higher you already own VDP; ask your reseller for the license if needed.

The VDP/VDP-A are Avamar appliances and they loving own. VDP Advanced supports replication. It's probably the best software backup platform you can get.

Linux Nazi posted:

I'm not sure if anybody is interested to hear about our VPLEX implementation now that we've completed it, but if there's any questions feel free to ask.

Got any questions about VPLEX? VPLEX owns; I've worked on a 1gb metro line and even VNX and Avamar performed well. I'd love to assist with any considerations around VPLEX.

Dilbert As FUCK fucked around with this message at 12:59 on Mar 15, 2014

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Dilbert As FUCK posted:

If you have Essentials Plus or higher you already own VDP; ask your reseller for the license if needed.

The VDP/VDP-A are Avamar appliances and they loving own. VDP Advanced supports replication. It's probably the best software backup platform you can get.

Balls. Never really looked into VDP and we purchased (and got a great deal on) PHD Virtual.

I'll have to spin up VDP in my lab and play with it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

Balls. Never really looked into VDP and we purchased (and got a great deal on) PHD Virtual.

I'll have to spin up VDP in my lab and play with it.

If you do the VDP, you'll need to provision an NFS or SCSI LUN and remote replicate/snap it.

Also ask your MSP about vSphere Operations Manager; it owns hard and will help identify bottlenecks in your environment.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Sickening posted:

3rd day on the job, so I haven't been given any specifics yet. We have been quoted something like 60k for a 20 gig usable array. I haven't been given any of the specs so far, so I was just curious. My only real storage experience has been with NetApp.

Don't worry so much about where they make money - they will apply a hardware, software and services discount across the board.

Play the game - involve another party, like Dell. Watch them both drop their pants. Wait till end of quarter - watch the pants go down further. In budget and meets your requirements? Buy.

Bitch Stewie
Dec 17, 2011
Anyone have any experience with the lower end Hitachi HUS 100 boxes?

Sickening
Jul 16, 2007

Black summer was the best summer.
EMC folks, does Unisphere even make sense to purchase with one unit?

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

Sickening posted:

EMC folks, does Unisphere even make sense to purchase with one unit?

Unisphere in general, or added features? Basic management for the array is handled with Unisphere regardless, but there are added features and enablers you'd be buying licensing to use for non-basic packages. I'll go see if I can find my older EMC sales guides I've used in the past, but I'm pretty sure general Unisphere support comes with the array when you buy it. (I need to double-check this, as it may have changed and I haven't done a quote myself in a while, so I could be way wrong on this.)

Generally I would say no to the added packages if it's just one array. It just depends on what software licensing you are looking at getting into.

Also, are you looking to do a full unified box with the NAS portion or just block only?

Sickening
Jul 16, 2007

Black summer was the best summer.

Langolas posted:

Unisphere in general, or added features? Basic management for the array is handled with Unisphere regardless, but there are added features and enablers you'd be buying licensing to use for non-basic packages. I'll go see if I can find my older EMC sales guides I've used in the past, but I'm pretty sure general Unisphere support comes with the array when you buy it. (I need to double-check this, as it may have changed and I haven't done a quote myself in a while, so I could be way wrong on this.)

Generally I would say no to the added packages if it's just one array. It just depends on what software licensing you are looking at getting into.

Also, are you looking to do a full unified box with the NAS portion or just block only?

Block only. Unisphere showed up as its own line item for like 2k (as well as the other usual bs stuff they try to sneak by) and it had me a little confused. I asked why it was listed separately because I would assume that it's part of the VNX system, and he said it was a requirement. I understand licensing other features like FAST Suite and Local Protection, but the GUI?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
That's probably the management package, which can be handy depending on how large the virtual environment is and how you want to look at the data.

Dilbert As FUCK fucked around with this message at 21:50 on Mar 19, 2014

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

Probably just a line item that was already included in the overall quote and part of the package. Worth trying to knock it down a little if you push back some. Wouldn't hurt to bring it up.

Sickening
Jul 16, 2007

Black summer was the best summer.

Dilbert As FUCK posted:

That's probably the management package, which can be handy depending on how large the virtual environment is and how you want to look at the data.

It's for a very small environment.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
I thought Unisphere was mandatory?

Sickening
Jul 16, 2007

Black summer was the best summer.

Vanilla posted:

I thought Unisphere was mandatory?

It really might be. I have no clue about EMC practices. I am pretty sure none of the line item quotes I have gotten for my previous storage had the console as a separate 2k line item.

MC Fruit Stripe
Nov 26, 2002

around and around we go

Sickening posted:

Unisphere showed up as its own line item for like 2k (as well as the other usual bs stuff they try to sneak by)
Seriously, if anyone hasn't had the pleasure of an EMC quote, it is so depressing how much bullshit they try to throw on a quote - and we're talking tens of thousands of dollars worth of useless poo poo. You think buying a car is full of landmines, buy a SAN.

Internet Explorer
Jun 1, 2005

If you don't have Unisphere are you going to admin the entire thing by command line? It would be like ordering something without the web management interface. I don't think you can even buy it without Unisphere. It has to be just the way they are wording the quote.

Sickening
Jul 16, 2007

Black summer was the best summer.

Internet Explorer posted:

If you don't have Unisphere are you going to admin the entire thing by command line? It would be like ordering something without the web management interface.

If that's the case then that's fine. This particular quote had about a third of it being pointless bullshit, which is even more than I am used to with vendors. I just don't know this vendor well enough to know if the salesperson is full of poo poo. He told me the same thing.

Amandyke
Nov 27, 2004

A wha?

Sickening posted:

If that's the case then that's fine. This particular quote had about a third of it being pointless bullshit, which is even more than I am used to with vendors. I just don't know this vendor well enough to know if the salesperson is full of poo poo. He told me the same thing.

As far as I am aware there is no way to turn off Unisphere. It is the GUI for VNX.

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

Amandyke posted:

As far as I am aware there is no way to turn off Unisphere. It is the GUI for VNX.

This is correct. It runs off the management service on the storage processors themselves.

I stand by it being a piece of poo poo line item that you should fight them on, Sickening. It's not worth $2000 even with all the features.


Aquila
Jan 24, 2003

MC Fruit Stripe posted:

Seriously, if anyone hasn't had the pleasure of an EMC quote, it is so depressing how much bullshit they try to throw on a quote - and we're talking tens of thousands of dollars worth of useless poo poo. You think buying a car is full of landmines, buy a SAN.

That was my experience with Hitachi as well.

gently caress SANs.
