evil_bunnY
Apr 2, 2003

My biggest beef with netapp is that 80% of the fancy poo poo that makes it worth the expense is tribal knowledge in the heads of netapp prof services and consultants.


Docjowles
Apr 9, 2009

three posted:

This is a pretty stupid glittering generality.

NetApp has been falling behind for quite some time, both technologically (e.g. still no equivalent product to XtremIO/Pure) and in terms of people exiting the company (e.g. Vaughn Stewart). They've plateaued and will be in some serious trouble if they don't get their poo poo together.

They recently launched the EF550 all-flash array, so they're at least trying to play in that space.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

three posted:

NetApp has been falling behind for quite some time, both technologically (e.g. still no equivalent product to XtremIO/Pure) and in terms of people exiting the company (e.g. Vaughn Stewart). They've plateaued and will be in some serious trouble if they don't get their poo poo together.

XtremIO barely exists as a product at this point. The AFA market is so small and changes so quickly that there's not really much information on who currently owns it (amusingly, it was assumed to be Violin just a year ago), but NetApp has had all-flash E-Series products available for more than a year and, surprising even to me, apparently people buy them. I know we have at least a few pretty sizable deployments of EF series boxes out there. They're pretty no-frills relative to OnTAP, but then most of the AFA market right now has huge feature gaps as well, and E-Series is much more suited to working with flash than OnTAP. Longer term there's FlashRay, which is the in-house developed AFA that provides an experience and feature set more like OnTAP. That should release this year (maybe?). If it's good the company will be fine; if it's bad then there's definitely trouble on the horizon.

evil_bunnY posted:

My biggest beef with netapp is that 80% of the fancy poo poo that makes it worth the expense is tribal knowledge in the heads of netapp prof services and consultants.

Not that I don't think there's a ton of complexity that could easily be hidden from customers by some smarter use of our tools (of which we have too many already), but what specific things did you have in mind? This "it's too complex for general purpose admins" talk comes up a lot internally, especially when we're competing against guys like Nimble who basically have no knobs to turn.

Docjowles posted:

They recently launched the EF550 all-flash array, so they're at least trying to play in that space.

That's an update of the previous EF-540 which was released about a year ago. We actually have a decent bit of market share, it's just that it's E-Series so no one notices or cares.

evil_bunnY
Apr 2, 2003

NippleFloss posted:

Not that I don't think there's a ton of complexity that could easily be hidden from customers by some smarter use of our tools (of which we have too many already), but what specific things did you have in mind? This "it's too complex for general purpose admins" talk comes up a lot internally, especially when we're competing against guys like Nimble who basically have no knobs to turn.
The specific thing I ran into was the management suite needing a very specific OS/Java/browser version and still being an unreliable POS (disconnects, tab switching causing the whole shebang to desync), so I ended up just doing 99% over SSH.

Things like snapmirror setup in vmware envts, and anything on 8, are much more of a pain than they need to be.

What's the recommended UI these days? I barely touch the thing aside from monitoring anymore because it just annoys me. And their sales rep for sweden is a gigantic douchebag.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Anyone here actually front their storage with Nexenta? I'm curious on the reasoning and performance.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
In my experience, NetApp:

*has one of the worst sales teams (pushy, uninformed, lots of pricing games)
*has the most flexible product (you can do drat near anything, with the right license)
*is very hard to size/scale properly (and sales will gently caress it up every time)
*has one of the best management suites available (NetApp System Manager is baller, NetApp VSC is THE BEST SAN-VMware plugin available)
*is only price competitive if you beat the poo poo out of sales which is really, really annoying
*has amazing hardware support and availability
*has the best software available (SnapManager is by far the most powerful SQL backup software, especially with cloning, scripting, etc)
*has the worst possible software support (good luck finding anyone that knows anything advanced in SnapManager!!)

So if you want to use NetApp, you need somebody that knows it very well (especially sizing), can troubleshoot their software installation/config problems themselves, and will beat up sales for you. The upside is that the box can do everything at once and enables you to do some really cool things (that can save the business time/money), and the hardware is rock solid.

KennyG
Oct 22, 2002
Here to blow my own horn.
Linux nazi: I am about to drop some huge coin on a VPLEX implementation (geo, not metro) that has now gotten political. I have lost a lot of my trust in my EMC rep in the past 4 days. Would you be up for a phone call? Serious business. I'd love to get a bit of a trip report about the gotchas.

Our environment:
2x bricks on 8Gb FC
300TB Isilon
10 VMware hosts
2 MSSQL hosts w/ .75TB RAM
3 misc apps on physical hosts for edge cases
9-node Hadoop w/ 200TB local disk
400Mbps net hosting private apps


DR order coming in 24-72 hours:
600 miles away
100Mbps
Same VMware/SQL nodes as above (no Hadoop/misc)
40TB VNX
200TB Isilon
VPLEX/RecoverPoint

KennyG
Oct 22, 2002
Here to blow my own horn.
Reading the above posts and knowing what I know about XIO, I am excited for the line. It's only 2/3rds the product of Pure right now, but it's rapidly improving. I would be surprised if Dell didn't buy Pure. They already OEM their boxes and, let's face it, Dell is a dead man walking in the storage game right now. Once Dell gets their ownership situated I would expect them to pick up a new vendor to unfuck their storage offering.

I demoed a lot of storage between Thanksgiving and Xmas, and Pure was the most feature complete for someone with high performance requirements and 150k plus to spend. EMC is more spendy, but I think it has more potential over time given the clustering; active/active with 2 nodes or more can melt the doors off of Pure. Heaven help anyone who owns Violin stock. The CEO is a crook and the technology is getting eaten alive by the disk AFAs. While only 70% of the performance, they can be half the price and twice the capacity with a lot more features, which fits a lot more use cases than Violin.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evil_bunnY posted:

The specific thing I ran into was the management suite needing a very specific OS/Java/browser version and still being an unreliable POS (disconnects, tab switching causing the whole shebang to desync), so I ended up just doing 99% over SSH.

Things like snapmirror setup in vmware envts, and anything on 8, are much more of a pain than they need to be.

What's the recommended UI these days? I barely touch the thing aside from monitoring anymore because it just annoys me. And their sales rep for sweden is a gigantic douchebag.

There are some changes coming to System Manager concurrent with OnTAP 8.3 that may address some of those issues for you. That said, I do almost everything over SSH because I started life as a Unix admin and it's just faster and more comfortable for me. System Manager is okay for a lot of day-to-day things, but it still has some gaps and you really need to make sure you're always on the most recent version. As far as SnapMirror in VMware environments, you can set VSC to kick off updates of relationships, which avoids the issues of locked snapshots causing problems. We use SnapMirror VERY heavily at my customer on all sorts of data and it generally works fine as long as you're mindful of scheduling conflicts. We manage it with Protection Manager, which is painful to learn but works alright once it's set up.

As far as the recommended UI, that's still System Manager. The issue I have with the management suite is we've got System Manager for day-to-day administration, OnCommand Unified Manager for alerting and reporting, Workflow Automation for process enablement, OnCommand Insight for virtual performance management, OnCommand Insight Balance ALSO for virtual performance management, NetApp Management Console for Performance Advisor and Protection Manager, SnapProtect for complex application-integrated snapshot workflows, SnapCreator for other types of application-integrated snapshot workflows... there's just way too much poo poo and it's all in different containers. The number of tools needs to be cut to a third and a lot of the functionality consolidated. The SnapManager products also need to move to an agent-based architecture with a centralized server for scheduling and management.

madsushi posted:

In my experience, NetApp:

*has one of the worst sales teams (pushy, uninformed, lots of pricing games)
*has the most flexible product (you can do drat near anything, with the right license)
*is very hard to size/scale properly (and sales will gently caress it up every time)
*has one of the best management suites available (NetApp System Manager is baller, NetApp VSC is THE BEST SAN-VMware plugin available)
*is only price competitive if you beat the poo poo out of sales which is really, really annoying
*has amazing hardware support and availability
*has the best software available (SnapManager is by far the most powerful SQL backup software, especially with cloning, scripting, etc)
*has the worst possible software support (good luck finding anyone that knows anything advanced in SnapManager!!)

So if you want to use NetApp, you need somebody that knows it very well (especially sizing), can troubleshoot their software installation/config problems themselves, and will beat up sales for you. The upside is that the box can do everything at once and enables you to do some really cool things (that can save the business time/money), and the hardware is rock solid.

Sales teams are pretty hit or miss. NetApp relies on channel partners a whole lot, which I think compounds the problem because a lot of them aren't very well trained and their incentives don't necessarily line up with the customer's needs, so you get a lot of used-car-salesman style tactics. As far as sizing goes, I think the difficulties get overplayed somewhat. You've basically got a disk resource bucket and a compute resource bucket, and if you don't overload either then you'll be fine. There are some corner cases you can hit, and there have been some feature updates to WAFL that should probably be enabled by default to provide more consistency but aren't, due to excessive caution... but ultimately the vast majority of performance problems I run into come down to trying to do more IO than the disks can handle. The big issue with NetApp performance is that because of the way WAFL is designed it doesn't really fall over gradually, it falls off a cliff all at once. It falls off later than many other arrays would, but it falls a lot faster once it does. So you go from "things are fine" to "EVERYTHING IS ON FIRE!" in record time. That makes sizing properly much more critical, and it makes it much more obvious when it wasn't done properly.


KennyG posted:

Reading the above posts and knowing what I know about XIO, I am excited for the line. It's only 2/3rds the product of Pure right now, but it's rapidly improving. I would be surprised if Dell didn't buy Pure. They already OEM their boxes and, let's face it, Dell is a dead man walking in the storage game right now. Once Dell gets their ownership situated I would expect them to pick up a new vendor to unfuck their storage offering.

I demoed a lot of storage between Thanksgiving and Xmas, and Pure was the most feature complete for someone with high performance requirements and 150k plus to spend. EMC is more spendy, but I think it has more potential over time given the clustering; active/active with 2 nodes or more can melt the doors off of Pure. Heaven help anyone who owns Violin stock. The CEO is a crook and the technology is getting eaten alive by the disk AFAs. While only 70% of the performance, they can be half the price and twice the capacity with a lot more features, which fits a lot more use cases than Violin.

I'm skeptical that taking flash, whose primary benefit is very low latency, and putting it into an architecture that is guaranteed to increase latency due to all of the required backend traffic to reassemble data is going to end up being the best way to leverage flash. Loosely coupled scale-out may make more sense, both from a latency perspective and because it will lessen metadata requirements, leaving more RAM available for other storage operations. I also think that the new 3PAR AFAs may very well be the best all-flash product on the market right now. They're getting really, really great performance on top of having a fairly mature feature set for an AFA.

Eikre
May 2, 2009
I'm searching for a server-client backup solution that will upstage to hotswapped HDDs. Right now, the backup suite we're running is Tolis Group's BRU, but the client-side daemon has trouble with Windows 8 and will only back up according to a schedule (which is problematic because I have users on laptops with inconsistent office hours, but no VPN to keep them connected). Also, BRU is all about tape backups, and we wanna try using RDX cartridges (essentially e-SATA drives with armor) instead.

It would be nice to find another monolithic solution like BRU that could do what I want, but I must not be doing research in the right place because options are not forthcoming. I was also beginning to see some virtue in switching to a two-part process; I could use something rsync-esque to mirror all the valuable data to the middle-stage server whenever it was available, and then make weekly snapshots of that to the RDX carts using a single-machine archival protocol.

Whatever I end up doing, though, needs to support Windows, Linux, AND OSX clients. The server's OS is less important; I'd prefer Linux, but I can do winserv.

Anyway, I thought I'd pass the query by the goon hivemind, because it usually knows the consensus on best practices. Hoping someone has a tip.
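
For the two-part idea, something like this rough sketch is what I had in mind (every hostname, path, and schedule here is a placeholder, and Windows clients would need an rsync port of some sort on their end):

code:
#!/usr/bin/env python3
"""Sketch of the two-stage idea: rsync clients to a staging server whenever
they're reachable, then dump a weekly archive of the stage onto the RDX cart.
Every host, path, and schedule here is a placeholder."""
import datetime
import pathlib
import subprocess

STAGE = pathlib.Path("/srv/backup/stage")    # middle-stage pool on the server
RDX_MOUNT = pathlib.Path("/mnt/rdx")         # wherever the RDX cartridge mounts
CLIENTS = ["alice@laptop1:/home/alice", "bob@laptop2:/Users/bob"]  # made up

def mirror_clients():
    """Stage 1: pull whatever clients happen to be online; run this often."""
    for src in CLIENTS:
        host = src.split("@")[1].split(":")[0]
        dest = STAGE / host
        dest.mkdir(parents=True, exist_ok=True)
        # -a preserves metadata, --delete mirrors removals; offline hosts just fail quietly
        subprocess.run(["rsync", "-a", "--delete", "-e", "ssh", src + "/", str(dest)],
                       check=False)

def weekly_archive():
    """Stage 2: write a dated tarball of the whole stage onto the RDX cart."""
    stamp = datetime.date.today().isoformat()
    subprocess.run(["tar", "-czf", str(RDX_MOUNT / f"stage-{stamp}.tar.gz"),
                    "-C", str(STAGE), "."], check=True)

if __name__ == "__main__":
    mirror_clients()

Cron stage 1 every hour or so, and run stage 2 on the weekend after swapping carts.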

Docjowles
Apr 9, 2009

Oh hey I thought I was the only person on earth to ever use BRU. It's been years but it can definitely do disk-to-disk backup, if I understand your question. I had it backing up to a NAS, no tape involved. Or is it that you're wanting to do disk-to-disk-to-disk, with another HD as the final target instead of a tape?

Eikre
May 2, 2009
The second bit. Middle-stage to a 2.5TB server; upstage to 750GB RDX carts. Even if that wasn't a requirement, though, BRU's other shortcomings make it desirable to get out of.

orange sky
May 7, 2007

So are there any blogs/news websites or feeds I should follow regarding IT infrastructure in general? Thanks in advance.

Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug

KennyG posted:

Linux nazi: I am about to drop some huge coin on a VPLEX implementation (geo, not metro) that has now gotten political. I have lost a lot of my trust in my EMC rep in the past 4 days. Would you be up for a phone call? Serious business. I'd love to get a bit of a trip report about the gotchas.

Our environment:
2x bricks on 8Gb FC
300TB Isilon
10 VMware hosts
2 MSSQL hosts w/ .75TB RAM
3 misc apps on physical hosts for edge cases
9-node Hadoop w/ 200TB local disk
400Mbps net hosting private apps


DR order coming in 24-72 hours:
600 miles away
100Mbps
Same VMware/SQL nodes as above (no Hadoop/misc)
40TB VNX
200TB Isilon
VPLEX/RecoverPoint

Haven't been keeping up with thread. Drop me a PM if you like, or shoot me an email at: jr at pipefl dot com.

I'll answer anything I know how!

Picardy Beet
Feb 7, 2006

Singing in the summer.
I had a surprising call from my salesperson at HP; I've never heard him that embarrassed. And now I understand why: one of the possible integrators managed to propose a solution based on 6 P4000 nodes, quoting 180,000 € - twice the price of a 3PAR solution, and twice the given budget.
The technical director for this integrator seemed completely dogmatic to me. He was centered on the P4000 only, even after I explained I was an FC shop. And coming from L'Oréal, he really doesn't care about costs. My industry doesn't have the same margins; this'll be a shock to him.

lampey
Mar 27, 2012

Why isn't there an equivalent demo VM for EMC, like NetApp and EqualLogic have?
EqualLogic has http://sostechblog.com/2012/01/14/demo-virtual-equallogic-ps6000-array-available/
NetApp has one with fewer restrictions, too. We used them in training at my last job, and we also had older spare NetApps. Where I'm at now, everything is used in production.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

orange sky posted:

So are there any blogs/news websites or feeds I should follow regarding IT infrastructure in general? Thanks in advance.

http://rogerluethy.wordpress.com
http://storagearchitect.blogspot.com/
http://glinden.blogspot.com/
https://www-304.ibm.com/connections/blogs/tivolistorage/feed/entries/atom?lang=en_us
http://storagemojo.com/
http://stevetodd.typepad.com/my_weblog/
https://www.ibm.com/developerworks/community/blogs/accelerate?lang=en
http://virtualgeek.typepad.com/virtual_geek/
https://storagebuddhist.wordpress.com
https://www.ibm.com/developerworks/community/blogs/InsideSystemStorage
http://purestorageguy.com
http://www.datacenterknowledge.com
https://www.ibm.com/developerworks/community/blogs/storagevirtualization?lang=en
http://www.storagebod.com/wordpress
http://highscalability.com/blog/
http://aussiestorageblog.wordpress.com
http://blog.scummins.com

Modern Pragmatist
Aug 20, 2008
I work in a research group at a university and we're going to roll our own server for managing our research projects etc., so it's going to be running a PACS server, basic file sharing services, and a few internal websites. We expect approximately 20 people using it regularly throughout the work day.

The goal is to run ZFS with several 6-disk vdevs in raidz2.

We currently have the following configuration (with regards to storage):

3 x 6-disk vdevs in raidz2 (4TB, 7200RPM, 120MB cache SAS disks) (48TB effective)
2 x 400GB SAS SSD cache drives (one read, one write)
2 x 500GB SAS 7200RPM, 64MB cache boot disks (RAID 1)
LSI 9211-8i SAS controller

We are weighing some of the pros and cons of substituting in different components.

1) Would it be any more beneficial to have 5 x 6-disk vdevs of 600GB 15K drives? I feel that if your cache is big enough the actual disk speed shouldn't be much of a factor.

2) Would it be worth upgrading to four total SSD cache drives? (2 read, 2 write)

Any other general advice about this setup would be extremely helpful. We've been talking with a consultant about this system and I just wanted to hear an unbiased opinion.

If this is the wrong thread to ask, let me know and I can take this question elsewhere.

thebigcow
Jan 3, 2001

Bully!
How much RAM is this thing going to have? Remember, information about L2ARC takes up space in ARC.

If your setup requires SSDs for L2ARC and SLOG to get the performance you need, then definitely do at least a mirror for each; otherwise losing the single disk will make it useless until you get another one in there.

Read up on SLOG before you put an SSD in for it. It only improves writes in certain situations. Losing it no longer hoses a zpool, but I don't know if the system is usable until it is replaced.

When you read guides on ZFS, ask yourself if this person's only concern is maximum space for :filez:, or if it's real information about running a gigantic file share. Lots of old Sun developer blog URLs are broken, but if you replace Sun with Oracle in the URL it should still come up.
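
To put a rough number on the "L2ARC eats ARC" point: every record cached on the L2ARC SSD keeps a small header in RAM. The per-record header size below is just an assumed ballpark (it varies by ZFS version), so treat this as napkin math only:

code:
# Napkin math: ARC RAM consumed by L2ARC headers.
HEADER_BYTES = 70             # assumed per-record overhead; check your ZFS version
L2ARC_BYTES = 400 * 1024**3   # one 400GB cache SSD
RECORDSIZE = 8 * 1024         # small records (databases); default is 128K

records = L2ARC_BYTES / RECORDSIZE
ram_gb = records * HEADER_BYTES / 1024**3
print(f"{records/1e6:.0f}M records -> ~{ram_gb:.1f} GB of ARC just for headers")
# ~3.4 GB at 8K records, but only ~0.2 GB at the 128K default recordsize.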

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
If it were me, I would run 2x 8-disk z2 volumes and 2 hot spares. Same usable space, similar performance, more safety.
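
Quick sanity check on the "same usable space" bit (raw capacity only; ignores ZFS overhead and the TB/TiB haircut):

code:
# Raw usable capacity of the two raidz2 layouts with 4TB drives.
DRIVE_TB = 4

def raidz2_usable(vdevs, disks_per_vdev):
    # each raidz2 vdev gives up two disks' worth of parity
    return vdevs * (disks_per_vdev - 2) * DRIVE_TB

print(raidz2_usable(3, 6))   # 3 x 6-disk raidz2: 48 TB from 18 disks
print(raidz2_usable(2, 8))   # 2 x 8-disk raidz2: 48 TB from 16 disks

Same 48TB either way; the 2x8 layout just gets there with two fewer drive bays.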

Modern Pragmatist
Aug 20, 2008
Right now we're looking at 128 GB of RAM, but that's purely what the hardware guys recommended. We have no problem spending more if necessary.

The setup doesn't necessarily require an SSD for L2ARC or SLOG; we were just thinking this would give us a pretty big performance boost. Would it be better to use mirrored 15k spinning disks for these? Is there a more widely used option that we haven't considered?

How would the 2 x 8 disk z2 be any more safe? As I understood it, all z2s should have the same data safety.

Our original configuration also includes 2 hot spares that I failed to list.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Modern Pragmatist posted:

Our original configuration also includes 2 hot spares that I failed to list.
My suggestion was only safer because it included hot spares, whereas your original description did not. If you already have hot spares, yours will be just fine.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Modern Pragmatist posted:

Right now we're looking at 128 GB of RAM, but that's purely what the hardware guys recommended. We have no problem spending more if necessary.

The ARC works well, give it as much as you can cram in.

quote:

The setup doesn't necessarily require an SSD for L2ARC or SLOG; we were just thinking this would give us a pretty big performance boost. Would it be better to use mirrored 15k spinning disks for these? Is there a more widely used option that we haven't considered?

Heh, no; don't bother with 15K drives in any case. I'm going to disagree with thebigcow slightly: if you're really concerned about degraded performance in case of an SSD failure, it's better to add them as two separate devices and let ZFS balance the load across the two. Better peak performance, and in case of a failure your writes are only as bad as they would be when mirrored. Frankly, the SLOG and L2ARC failure modes we've seen are clean enough (not in Oracle Solaris (lol), but Illumos is OK) that I wouldn't be too worried, especially for a departmental server. Writes go sync (a big deal for NFS clients, mostly), and the read cache falls back to ARC only.

L2ARCs mostly make sense when your average working set is larger than RAM but small and nonrandom enough that the L2ARC can effectively cache warm data.

My favorite SLOG device: http://www.hgst.com/solid-state-storage/enterprise-ssd/sas-ssd/zeusram-sas-ssd

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I don't know if I am getting trolled or not.

A friend of mine, who I'm helping do reference architectures for VMware server virtualization, is incredibly smart, so I'm not sure why stuff is being designed this way.

He wants to design basic "vblocks" for SMB customers: he told me 10-15 servers, and maybe a VDI-in-a-box, so anywhere up to 20ish desktops before scaling out another host. I did this a year or so ago at a consulting gig, which is kinda funny because at the time he shot down the idea of reference designs or the like. He's looking to hit a 20k solution for customers that includes network + storage + servers AND the services to set it up; he's gauging 15k for hardware and 5k for setup, with some bumping room on either.


Currently he is looking at storage from QNAP and Synology; this is a real head scratcher to me for multiple reasons: first off it's a single controller, secondly drives are going to need to be bought 'off the shelf', and thirdly failover is going to be a huge, troublesome issue. His plan is to address the single-controller issue with vSphere replication to a second QNAP/Synology box for when/if poo poo goes south, and to run off that box until the primary is back up and running. The second problem I see is: who do you call when poo poo goes south at the HW/drive level? Disk replacements can take a week or more. I'm glad he's looking at enterprise drives, but it's still a when, not an if, and QNAP's support ain't the same level as the EMC support he works with a bunch.


I mentioned going dual controller, single SAN, but that got shot down because "I have a hard time calling it redundant because it is same chassis", and yes, while I can see that, I also realize the risk of both controllers blowing is most likely much lower than the risk of facing issues with the dual QNAP + vSphere replication setup; and even so, nothing is preventing anyone from having a primary MSA 1040 or MD3200i replicated to a QNAP or the like for the everything-goes-to-poo-poo moment. I mean, sure, will it probably run like poo poo? Yes! But it's still going to keep your head above water until a tech from HP can come out there and swap controllers/drives etc.




I don't have anything against single controller configurations, they work well in their use case, but an MSA 1040/MD3200i is going to be so much better for the client's business continuity and the MSP/VAR's manageability than some QNAP or Synology.

Am I overthinking here?


For cost reference, the QNAP he is looking at is in the 2-3k ballpark for the barebones unit, about 100ish bucks a pop for 1TB drives, and about 16GB of RAM (QNAP uses an ARC); about 4k total per NAS.


The MSA 1040 starts @ 6500 MSRP, and an MD3200i w/ 4x 300GB 15K drives for OS RAID 10 + 8x 1TB RAID 6 drives for data/archive, dual SPs @ 1GB cache each, and a 3-year warranty is 10k MSRP.

Dilbert As FUCK fucked around with this message at 22:50 on Apr 19, 2014

Thanks Ants
May 21, 2004

#essereFerrari


When you say 10-15 servers I'm assuming guests here? When I think SMB virtualization I find it hard to ignore VRTX, especially as smaller premises will benefit from the reduced noise levels of the box. I don't know if it would hit requirements for VDI, but I also wouldn't be trying to do VDI on a 20k budget.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Caged posted:

When you say 10-15 servers I'm assuming guests here? When I think SMB virtualization I find it hard to ignore VRTX, especially as smaller premises will benefit from the reduced noise levels of the box. I don't know if it would hit requirements for VDI, but I also wouldn't be trying to do VDI on a 20k budget.

Physical servers that will become VMs, or existing guests that will move to this platform. The VRTX is not an option; he likes it and so do I, but it's not an option.

His plan is to leverage VDI as add-on value; he wants to have a foundation able to support a small VDI deployment alongside the server environment.


The ballpark we are working on is
1-2 Domain Controllers
1-2 File Servers
1 Print Server
1-2 Application server
1 vCenter
1 VDP or backup appliance
1 vSphere replication appliance

Don't even get me started on why dual 6-core with HT is a hard requirement when you could easily get away with 2x 4-core + HT, 1x 6-core at a high clock with HT, or a 2.2GHz 8-core for like 8 out of ten environments, even when calculating in failover, since RAM will be the bottleneck; and we can solve that by forcing small pages if needed.
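
Napkin math on why (the per-VM vCPU counts here are just my assumptions, not anything Hersey actually spec'd):

code:
# Rough vCPU-to-core ratio for the roster above on the smaller CPU options.
# Per-VM vCPU counts are assumptions for illustration only.
vms = {
    "domain controllers":  (2, 2),   # (count, vCPUs each)
    "file servers":        (2, 2),
    "print server":        (1, 1),
    "app servers":         (2, 2),
    "vCenter":             (1, 2),
    "VDP/backup":          (1, 4),
    "vSphere replication": (1, 2),
}
total_vcpu = sum(count * vcpus for count, vcpus in vms.values())

for label, cores in [("1x 6-core", 6), ("2x 4-core", 8), ("1x 8-core", 8), ("2x 6-core", 12)]:
    print(f"{label}: {total_vcpu} vCPU / {cores} cores = {total_vcpu / cores:.1f}:1")
# ~21 vCPU on 8 cores is only ~2.6:1 oversubscription, which light SMB workloads
# generally tolerate fine; RAM runs out long before CPU does.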

Dilbert As FUCK fucked around with this message at 23:01 on Apr 19, 2014

Thanks Ants
May 21, 2004

#essereFerrari


I'm not the most experienced person ever when it comes to this stuff but it just seems strange to want to have a reference bundle presumably to reduce the admin overhead and then cheap out on arguably the most important component when these sorts of problems have already been solved.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Caged posted:

I'm not the most experienced person ever when it comes to this stuff but it just seems strange to want to have a reference bundle presumably to reduce the admin overhead and then cheap out on arguably the most important component when these sorts of problems have already been solved.

He doesn't like the fact that VNXe's start at 10k, even though he's a hardcore EMC guy and constantly teaches never to cheap out on storage. So I'm not sure if this is just prepping me to see what responses I give as to why designs are bad, and to defend my own for VCDX, or what.


but holy poo poo I am so confused right now.

Thanks Ants
May 21, 2004

#essereFerrari


I'm not even sure you could hit $15k for hardware unless you went down the VSA path.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Caged posted:

I'm not even sure you could hit $15k for hardware unless you went down the VSA path.

oh herp derp I can't read line items today

Dilbert As FUCK fucked around with this message at 23:35 on Apr 19, 2014

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Redundancy isn't an independent characteristic of risk management, it's part of a system -- and it still amounts to basic math. If you have two of something and each of them is four times as likely to fail as one of something else, which is the better investment? This is very hard for people to understand when they haven't been working in the field because there's no easy number you can point to that describes the likelihood of failure, but optimizing for a vague hand-wavey idea of redundancy at the expense of all else is a really stupid move.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Misogynist posted:

Redundancy isn't an independent characteristic of risk management, it's part of a system -- and it still amounts to basic math. If you have two of something and each of them is four times as likely to fail as one of something else, which is the better investment? This is very hard for people to understand when they haven't been working in the field because there's no easy number you can point to that describes the likelihood of failure, but optimizing for a vague hand-wavey idea of redundancy at the expense of all else is a really stupid move.

This dude is a VCDX/EMC (whatever cert is above EMCISA). I'm dumbfounded that I have to explain this, so I'm questioning my methods right now (even if plenty of the things I've said, then second-guessed with "oh, maybe he is right", have come full circle in the end); maybe I am a bit smarter than I give myself credit for.

I agree redundancy is most definitely not an independent characteristic; redundancy is a formula of multiple variables that equates to a solution fitting the budgetary, operational, end-user applicability, and business continuity aspects of the client and its end goal, for both the seller and the client. If you stop focusing on what the client's needs are and on the levels of complexity and failure you're introducing, what good is the solution?


I think it's the tunnel vision of an engineering position getting the best of someone not used to an MSP/VAR atmosphere. I mean poo poo, I want to go to Varrow in 2 years, but I'd work for this dude; we are polar opposites on things but we both help each other see the other side to make a manageable solution.


Maybe this is all just some dumb elaborate ruse to get me to finish my VCDX path.

Dilbert As FUCK fucked around with this message at 06:13 on Apr 20, 2014

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

Redundancy isn't an independent characteristic of risk management, it's part of a system -- and it still amounts to basic math. If you have two of something and each of them is four times as likely to fail as one of something else, which is the better investment? This is very hard for people to understand when they haven't been working in the field because there's no easy number you can point to that describes the likelihood of failure, but optimizing for a vague hand-wavey idea of redundancy at the expense of all else is a really stupid move.
It's not that easy either. Even if each node is four times as likely to fail as the single unit, that doesn't mean the chance of a dual-node failure is higher than that of the single unit failing.
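
Toy numbers, assuming the failures are independent (which shared firmware bugs or a shared power feed can easily break):

code:
# Single well-built unit with annual failure probability p, vs. a pair of
# cheaper units that are each 4x as likely to fail. The pair only causes an
# outage when BOTH die. Assumes independent failures.
def outage_probability(p):
    single = p
    pair = (4 * p) ** 2
    return single, pair

for p in (0.01, 0.05, 0.10):
    single, pair = outage_probability(p)
    print(f"p={p:.2f}: single unit {single:.4f}, cheap pair {pair:.4f}")
# At p=0.01 the pair wins (0.0016 vs 0.0100); the crossover is at p=1/16,
# and at p=0.10 the single unit is the safer buy (0.1000 vs 0.1600).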

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
Small business? I would opt for a shared nothing design and depend on app level clustering for anything that supports an important business process.

Cheap, relatively easy to support, and won't require much additional training for the end customer. Not as sexy as dropping in a NetApp or a Pure Storage box or something with 10GbE, but easier on the wallet in both up-front and ongoing costs.

Also with respect to failure, your storage is only as "available" as people let it be. You could buy a pair of 8 engine VMAX 40K's with VPLEX in front of them and still have a few outages a week.

Basically people and their training/skillsets have a lot to do with how often a system is up. Usually more than the actual technology they're maintaining.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

1000101 posted:

Small business? I would opt for a shared nothing design and depend on app level clustering for anything that supports an important business process.

Cheap, relatively easy to support, and won't require much additional training for the end customer. Not as sexy as dropping in a NetApp or a Pure Storage box or something with 10GbE, but easier on the wallet in both up-front and ongoing costs.

Also with respect to failure, your storage is only as "available" as people let it be. You could buy a pair of 8 engine VMAX 40K's with VPLEX in front of them and still have a few outages a week.

Basically people and their training/skillsets have a lot to do with how often a system is up. Usually more than the actual technology they're maintaining.

Hersey and I are talking the sub-$25K, 21-74 data worker SMB level; not the kind of SMB where you live.

Bitch Stewie
Dec 17, 2011
Wow. I mean I really rate Synology, but no way would I want it as my production storage - no SLA, so when poo poo breaks you're basically on your own.

QNAP support is supposed to be even worse.

What I don't get is why even look at shared storage with the small workload you've mentioned?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Dilbert As gently caress posted:

Hersey and I are talking the sub-$25K, 21-74 data worker SMB level; not the kind of SMB where you live.

I'm thinking the same thing. 3 ESXi servers and Essentials Plus, basically. Comedy VSAN option if there just has to be shared storage.

Edit: out here we'd just put it all in Amazon and use Office 365 for a business that size.

1000101 fucked around with this message at 16:40 on Apr 21, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dilbert As gently caress posted:

"I have a hard time calling it redundant because it is same chassis"

This is a fairly dumb statement, and it sounds like your friend has a really poor understanding of storage.

Docjowles
Apr 9, 2009

NippleFloss posted:

This is a fairly dumb statement, and it sounds like your friend has a really poor understanding of storage.

That's verbatim why my old boss forbade me from buying a SAN for our virtual environments. So instead we used a lovely hacked-up DRBD solution that failed all the loving time, but hey, at least we didn't have a "single point of failure" :shepface:


Wicaeed
Feb 8, 2005

Docjowles posted:

That's verbatim why my old boss forbade me from buying a SAN for our virtual environments. So instead we used a lovely hacked-up DRBD solution that failed all the loving time, but hey, at least we didn't have a "single point of failure" :shepface:

Jesus christ this sounds like the line of thought from our company DBA Manager when it came time to rebuild our old Billing environment.

I had to sit down and explain to him (with drawings and everything) how a loving RAID array works (with hot spares!) and how redundant controllers, network links, switches, etc. work to make the disk array as redundant as possible.

He still wanted us to buy a second SAN of the same make/type and use it as a hotspare because reasons.

And then the turdnugget goes and builds a SQL cluster, but then places two standalone MSSQL servers in front of it so clients can connect to it instead of the cluster :psyduck:
