Rhymenoserous
May 23, 2008
I think you mean your backup SAM-SD. Right terms people...


Rhymenoserous
May 23, 2008

KS posted:

I have a dedicated VM with Java 6 Update 29 just for that. My desktop kept getting infected when I tried to use 6U29+firefox to browse the web, and it doesn't play well in Chrome.

Still better than the Hitachi or HP interfaces I came from.

Which is funny because Chrome has been the most reliable browser when it comes to getting to my array. Firefox derps up too much.

Rhymenoserous
May 23, 2008

Amandyke posted:

EMC Goon checking in. I'd be happy to check into any SR's/issues any of you seem to be having. Just a lowly CE but I can help if you think your SR's getting hung up, or at least I could let you know what the current status of it is.

As far as I'm concerned, CEs are the only people that actually do any work at EMC; everyone else that comes out to the job site is just there to argue about where to go for lunch.

Rhymenoserous
May 23, 2008

Goon Matchmaker posted:

What does EMC recommend as the amount of free space to leave for LUN snapshots on the VNX platform? 25%?

15-25% is what the EMC CERTIFIED INSTRUCTOR told me in class, depending on change rate. If you don't know what your baseline change rate is, or it's particularly high, default to 25%.
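For a rough sense of how change rate maps into that 15-25% band, here's a back-of-envelope sketch. The formula and safety factor are my own rule of thumb, not EMC guidance:

```python
def snapshot_reserve_pct(daily_change_pct, days_retained, safety=1.25):
    """Naive reserve estimate: daily change rate times the retention
    window, padded by a safety factor, capped at 100%."""
    return min(100.0, daily_change_pct * days_retained * safety)

# ~2%/day churn with a week of snaps lands inside the 15-25% band
print(snapshot_reserve_pct(2, 7))
```

If you don't know your change rate, this is exactly why you default to the top of the band.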

Rhymenoserous
May 23, 2008
You say 2Gb+ fibre channel. Are you good at that stuff? Because you can easily run iSCSI off a 10Gb network and simplify the setup.

Instead of naming the kind of hardware you want, why not post a list of needs?

E.g. I need X amount of space, I'll be running X VMs, X of those will be Exchange/SQL, and the rest file storage.

I need to be able to retain x number of snapshots with or without the ability to replicate.

EDIT: Price range? Most of the 2U storage boxes in a decent price range that aren't overglorified DAS really only do iSCSI (which is why I asked above why you want FC).

Rhymenoserous
May 23, 2008
Also I wouldn't tie myself to a solution because "I have drives lying around that will fit it". I'd sell that poo poo on eBay and use the money to help fund what you actually need.

Rhymenoserous
May 23, 2008

Moey posted:

I have not worked with EMC before, but that seems quite high.

Edit: Actually looking around online, 600GB 15k 2.5" drives do seem to be quite pricey.

Edit 2: The unit you mention takes 3.5" disks. That seems insanely high.

On the lower end systems EMC gets you on the price of the system. On the higher end ones they drat near give the hardware away, provided you sign away your entire company's firstborn children to their unholy embrace.

Rhymenoserous
May 23, 2008

Xenomorph posted:

My experience with a "file system on a file system" are things like Wubi/Ubuntu (ext3 loop device on top of NTFS) and virtual machines, where I/O suffers. I'm guessing I shouldn't have to worry about that since the internal I/O of a modern RAID setup won't be my bottle neck (1 Gb Ethernet will).

I just wasn't sure if that is how all iSCSI-compatible devices do it.

Next question: how important is it to have a dual-controller RAID?

Is something like the QNAP TS-EC1279U-RP terrible?

The primary use I'm looking for is just data storage. People want to move stuff of their desktops to a shared drive.

I know you are concerned about overhead with iSCSI, but it's virtually nonexistent on a modern system. I mean, I'm presenting blocks of iSCSI to 4 ESXi boxes and using those blocks to run Windows servers (i.e. the OS is coming into the box over iSCSI). Each one of those Windows servers is talking to its own storage over Microsoft's iSCSI initiator (my storage has app-aware backups if you do it this way, as opposed to setting up a big virtual storage pool). And at least two of these pools are database servers that at any given moment have ~200 people hitting them almost constantly, because our ERP vendor never clued into the idea that if I'm not actively executing a query, a db call/connection is absolutely unnecessary (gently caress these guys seriously).

So as long as you're just sharing out chunks of storage to do, well... whatever the hell you are doing (I have no clue), you will be fine. Sidenote: if you are doing Linux stuff I'd get a device that can do NFS as well as iSCSI. iSCSI plays really nice with Windows; NFS is a breeze with Linux.

Rhymenoserous
May 23, 2008

Amandyke posted:

iSCSI works great no matter what OS you're running...

I've never used it on *nix because NFS on *nix is just so drat easy.

Rhymenoserous
May 23, 2008

ZombieReagan posted:

I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately.

EMC is going to try and push some poo poo like Replication Manager on you, and trust me, you'd rather kill yourself than ever try to figure out what the gently caress is wrong with Replication Manager.

Rhymenoserous
May 23, 2008

paperchaseguy posted:

i think you're wanted here

Goddammit. Just mousing over that makes my blood pressure rise.

Somewhere on Spiceworks right now, someone in a multi-million-dollar company asking "What is a good first SAN/NAS" is being told "No bro, you don't need all that, just slap together a throwaway server and some Trend Micro storage rack you found at the dump".

Rhymenoserous fucked around with this message at 14:06 on Oct 3, 2012

Rhymenoserous
May 23, 2008

Misogynist posted:

Technically, it's two different departments in the same organization.

Two units is more than one. Sky is the limit.

Rhymenoserous
May 23, 2008

Vanilla posted:

Are these an EMC part?

If so the rep is right, he can't sell them, *but* he can try other things to get them to you.

Firstly see if you can find the EMC Part number either on it, in a manual or somewhere.

1.) Get them from customer services directly or via the rep.
2.) Get the part from the factory. It doesn't take much for a grunt to grab a handful of these and put them in a box.

Speak to your TC, not your rep.

He's right. Your TC probably has a small box of these in his car (Mine did).

Rhymenoserous
May 23, 2008
I've heard great things about Unitrends, but any full-fledged backup solution is going to be pricey. Honestly I'd look back into the SAN market. It's getting very competitive, and prices are plummeting compared to, say, five years ago.

Rhymenoserous
May 23, 2008
Whoa, when did EMC buy Data Domain?

Rhymenoserous
May 23, 2008

Corvettefisher posted:

Holy poo poo Avamar systems are expensive.

90k starting?
:drat:

It's EMC dude. The VNX and VNXe series are actually the odd ones out due to being affordable. Everything else EMC does is expensive as hell. And that's before they charge you for the software that you thought came with it.

Rhymenoserous
May 23, 2008

Amandyke posted:

That said, you're a fool if you're paying list. 50% off is the de facto starting point for negotiations.

True: with a big enough enterprise they'll practically hand you the hardware, but you are going to eat it in support and software costs. I don't really miss dealing with EMC.

Rhymenoserous
May 23, 2008

The_Groove posted:

Every time I have DDN gear fail, it just makes me more impressed by it. I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance. A "disk chip" in each controller had failed. DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on!

I have to say this post has left an impression on me too.

Rhymenoserous
May 23, 2008
I'm going to bet on the answer being "Because gently caress tape"

Rhymenoserous
May 23, 2008

Amandyke posted:

Oh, if that's all that's being backed up, you could just use USB thumb drives then.

In RAID 5 with a hot spare. How many USB ports do you have?

Rhymenoserous
May 23, 2008
Your sense of humor is broken.

Rhymenoserous
May 23, 2008

Wompa164 posted:

Can anyone recommend a good NetApp and/or Compellent reseller and integrator?

I work for a small eDiscovery company looking to step up our data game. We're looking for an end-to-end company to help us figure out our data management needs, come up with the proposed hardware, and integrate it into our environment.

We've been in talks with one such company (http://www.ostusa.com/) and they've given us a quote which includes a (presumably) lower-tier NetApp storage device, which given people's experiences on here, sounds like a good vendor.

I know this question is a little out of place but I'm not a data/systems administrator by trade so I'm doing my best to help out on this project. Thanks in advance!

I worked for an e-discovery company for almost three years, and storage can quickly become a nightmare in that environment. I can answer generalist questions, but honestly in-depth questions depend entirely on whether your discovery software dedupes, what size projects you tend to take on, etc.

What software are you using? What is its back end? If you are using something like, say, iPro, are you keeping the SQL databases on your old storage? On the server? Planning on putting it on the new storage?

Rhymenoserous
May 23, 2008

Wompa164 posted:

Hey, thanks for the reply.

I should clarify by saying that we do computer forensics and eDiscovery. Our projects are highly varied but it's become increasingly clear over the last year or two that we need to invest in managed data. Right now we work off of bare desktop drives and I am trying to push the company towards a centralized infrastructure for both processing and evidence storage. Lately most of our work comes in the form of indexing and searching client data with NUIX. The solution doesn't need to be highly optimized, but it needs to be reliable and flexible with room for future expansion.

Our projects vary from 200GB to 10TB on average. I'm looking to get a small/medium array in place (~20-30TB) for active projects which can then be offloaded to tape for archival. Any insight you can provide would be excellent, thanks!

A 20-30TB array with enough speed to not drag down your projects is going to be a bit on the expensive side. In my experience e-discovery is very I/O-intensive.

Any of the big names, EMC, NetApp, etc., will do the trick, but you definitely need to tell the architect/company you are paying to do this exactly what it is you are doing with the filesystem.

What is your budget like?
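To put the "very I/O-intensive" point in numbers, the usual spindle-count arithmetic looks something like this. The per-drive IOPS figures and RAID write penalties below are rough textbook values, not vendor specs:

```python
# Rough per-spindle IOPS by drive class (ballpark figures)
PER_DRIVE_IOPS = {"7.2k_nl_sas": 75, "10k_sas": 125, "15k_sas": 175}
# Back-end writes generated per front-end write
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def usable_iops(drives, drive_type, raid_level, read_fraction):
    """Front-end IOPS an array can serve once the RAID write
    penalty is applied to the write portion of the workload."""
    raw = drives * PER_DRIVE_IOPS[drive_type]
    penalty = RAID_WRITE_PENALTY[raid_level]
    return raw / (read_fraction + (1 - read_fraction) * penalty)

# e.g. 24 x 10k SAS in RAID 5 with a 70/30 read/write mix
print(round(usable_iops(24, "10k_sas", "raid5", 0.7)))
```

Run numbers like these against your indexing workload before anyone quotes you hardware; it makes the budget conversation a lot shorter.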

Rhymenoserous
May 23, 2008

Moey posted:

Do you know what kind of IOPS you need from the setup? Also is that $20k just for the storage, or for servers/storage/licensing? Do you have include switches as well for your storage network?

I'm seconding all these questions, because if it's 20k for everything you may as well just wave off now.

Rhymenoserous
May 23, 2008
Unless you can get a great deal on refurb gear I just don't see it happening.

Rhymenoserous
May 23, 2008

three posted:

I am really looking forward to VMware making the VSA/etc awesome so that it is actually feasible for most environments. It's a long way away with all the limitations it has now, but I think it's the future. Getting rid of the SAN would be awesome.

It will never happen; there are a ton of reasons to go with a shared chunk of external storage outside of the capacity arguments that VSAs are likely to solve. For a small business, though, I see the VSA as a godsend.

Rhymenoserous
May 23, 2008

three posted:

I disagree. SANs aren't used because people love them for virtualization, they're used because they're a requirement for HA/DRS/etc. Nutanix is already working in this space. It's just a matter of time.

Do you really think an "all in one" virtualization box is going to set the world all atwitter? I'm kind of skeptical.

Rhymenoserous
May 23, 2008

paperchaseguy posted:

hey guys i was gonna roll my own SAM-SAMBA-SOC (Scott Allen Miller SAMBA Scale Out Cloud) for my 15000 user Exchange 2005 production environment do you have any hardware to recommend? My budget is $650.

Man, we're getting a lot of mileage out of me posting that SAM-SD poo poo here, aren't we?

Rhymenoserous
May 23, 2008

NippleFloss posted:

I could have happily lived the rest of my life without knowing about Scott Allen Miller but now I know about him and I can't unknow about him and that makes me angry.

Thanks Rhymenoserous, thanks.

Did you know he writes tech blogs? Man, he has an entire series on how RAID 5 is not a backup. Neat, huh!

Rhymenoserous
May 23, 2008

three posted:

The performance difference between RDM and VMFS is negligible: http://www.vkernel.com/files/docs/white-papers/mythbusting-goes-virtual.pdf

It would have been nice if he had charted the iSCSI results considering he did test them.

Rhymenoserous
May 23, 2008
I'm going to say just buy another server with plenty of storage in it.

Rhymenoserous
May 23, 2008

FISHMANPET posted:

I'm currently figuring out how to do bare metal backups on some machines. Some critical workstations, and every server (including the storage servers). And it's being backed up to a Drobo. I want to kill myself.

But it leads me to something I've been wondering for a while. What's the backup "paradigm" when everything is virtualized? I'm used to a "backup" server with a tape drive connected to it, but that's a hard work load to virtualize (due to the physical connection to the tape drive) so I'm not sure what the proper way to do it is. Should there just be another SAN, preferably offsite, that everything gets backed up to? Is there a good way to use a tape drive in a virtualized environment? With things like ZFS snapshots and Volume Shadow Copy should I not even worry about backups for file recovery, and worry about backups for disaster recovery?

Here's how I do it. Or I should say how it's going to be done when I'm all finished.

I'm running 4 ESXi hosts over a 10GbE connection for my storage; the hosts are in a DRS cluster, so if any one host goes down, things automigrate around to bring everything back online. Entirely hands off, I love it.

The 10GbE storage links to my Nimble storage device, which holds all of my "on-site snaps". Depending on the storage pool I'm keeping everything from hourly snapshots to daily, and I generally keep a month or so worth lying around, again dependent on app.

Offsite I'll have a second storage appliance from Nimble that is going to be catching replicas from the primary (this is an array function), and two ESXi hosts that are basically just waiting for me to bring VMs online. I'm keeping the DR site fairly cheap though, so I'm unlikely to use DRS, or even use vCenter at all there.

I've been thinking about Veeam, but it's really been getting poo poo reviews lately.
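One sanity check worth running before committing to an offsite replica setup like the one above: can your WAN link actually move a day's change set inside the replication window? A quick sketch, using decimal units and assuming no dedupe or compression (real arrays usually ship much less):

```python
def replication_mbps(gb_changed_per_day, window_hours):
    """Average megabits/second needed to ship one day's changed
    data within the given window (decimal GB, no compression)."""
    megabits = gb_changed_per_day * 8000  # 1 GB = 8000 megabits
    return megabits / (window_hours * 3600)

# 100 GB/day of churn trickled over a full 24-hour window
print(round(replication_mbps(100, 24), 1))
```

Double-digit daily churn on a T1-class link is how DR replication plans quietly die, so do this math before buying the second appliance.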

Rhymenoserous
May 23, 2008

three posted:

Management... auditing? What is this madness?

Auditing? But I pay my taxes...

Rhymenoserous
May 23, 2008
Another nimble buddy! Hey :buddy:!

EDIT: They have expansion cabinets now, and the pricing looks good.

Rhymenoserous
May 23, 2008

GrandMaster posted:

We've been looking at nimble boxes too, but I was surprised at how expensive they were considering it's full of lovely SATA. We are looking at ~100TB of storage, and it came in more expensive than Compellent, VNX5500 & FAS3250 boxes with similar capacity - the other boxes take up more space but I've got much more confidence around the performance since they all have truckloads of 15K SAS & SSD caching.

I'm concerned about how some of the workloads would perform on a Nimble like some of our OLTP/OLAP etc apps. I'm sure VMware/VDI would run pretty quick though.

At the 100TB mark I'd be looking at a bigger vendor too. To me Nimble's products make sense at the medium-business level, i.e. "I need 20TB of storage for VMs", etc.

Rhymenoserous
May 23, 2008

skipdogg posted:

Looking for a good intro to SAN book. I know the absolute basics, but I guess I'm looking for more detail. I know what a LUN is, but what purpose does it have, why are they created. I know that iSCSI and Fibre Channel are connection protocols, but what makes them different and the advantages and disadvantages of each. Basically a good foundation book with some general best practices. It doesn't have to be vendor specific, but we're an EMC shop if it matters.

1. A LUN is, for all intents and purposes, a block of unformatted storage that you present to a device as if it were a disk. Its purpose is to provide a lump of remote storage to a device so it appears as if it were just a new blank hard drive.

2. iSCSI goes over Ethernet and is tied to whatever speeds that standard can do. Back when 1GbE was pretty much as blazing fast as standard network equipment would go, people would use fibre channel to get high-speed access to their storage devices. With 10GbE reaching commodity pricing, I can't think of a single loving reason to deal with fibre channel ever again unless I had already sunk money into it.
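For concreteness, here are the nominal data rates behind that argument. The FC figures are effective rates (encoding overhead already accounted for); these are ceilings, and real throughput is lower:

```python
def ethernet_mb_per_s(gbit):
    """Raw line rate of an Ethernet link in MB/s (decimal)."""
    return gbit * 1000 / 8

links = {
    "1GbE iSCSI": ethernet_mb_per_s(1),    # 125 MB/s
    "10GbE iSCSI": ethernet_mb_per_s(10),  # 1250 MB/s
    "4Gb FC": 400,  # nominal effective data rate
    "8Gb FC": 800,
}
for name, mbs in links.items():
    print(f"{name}: {mbs:.0f} MB/s")
```

Which is the whole point: 10GbE iSCSI out-runs 8Gb FC on paper, without a second fabric to buy and manage.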

Rhymenoserous
May 23, 2008

Syano posted:

Powervault kits are great. You can add shelves any time you need up to like 192 total drives or something like that. You can't go wrong with them for small deployments

EDIT: I just reread some of your environment. I run a 425-user mail system, along with about 30 more guests, on a Dell MD3200i using nearline SAS drives. Granted my environment is pretty low IOPS, but still. Definitely look at solutions from EqualLogic, NetApp, etc., but don't count out the PowerVaults because someone told you they suck. They absolutely are fine for smallish environments.

Don't work with many small/medium businesses, do you? I had 10-year-old Dell PowerEdge servers running mission-critical stuff when I first started working at my current shop, and the prevailing attitude towards hardware purchases was generally "Just use one of the old ones lying around".

Case in point: about half a year before I was hired (call it three years ago) they purchased an entirely new software package that drove... well, everything from inventory to service calls. Rather than purchase a new server and new storage for their new program, they just took down half of the old ERP software's cluster, reinstalled Windows on it, appropriated half of its storage, and set up the new system on it. Naturally it ran like poo poo.

The excuse I heard was "Well we already owned an extra server we didn't need".

Trying to pry money out of people like this is hard. My entire first year here was spent convincing the owner that just because you spent 10k on something five years ago doesn't mean it's worth a poo poo now. Servers don't have some intrinsic value that sticks around past the point where they can do any of the jobs you need done. This isn't a car you can restore and then keep on using as a daily driver.

Now I have a rack of lovely HP servers running ESX backed by a SAN. Much better than the bare-metal RAID 0 shitboxes with DAS that I was dealing with before.

Rhymenoserous fucked around with this message at 23:01 on Oct 22, 2013

Rhymenoserous
May 23, 2008

Thanks Ants posted:

"Oh wow a chance to run OpenFiler in production on some DL380s I got off eBay! Finally I can save someone else's money and it will only cost me some of my worthless time."

That's a Scott Allen Miller Storage Device, you goony gently caress.

Rhymenoserous
May 23, 2008

KennyG posted:

Ken Rockwell's alter ego.

Although, thought exercise: people seeking no-cost IT advice from a crowd full of marginally educated people without any budget are likely going to recommend the most inexpensive solution, drawbacks be damned, in every instance. It would be a shock if the results were any different. Frankly, the :10bux: to get in here will skew the results back in favor of "if you want quality, you have to pay for it." There is a lot of snake oil and FUD out there, but in the end, you have to know that storing fractional petabytes with any kind of performance requirement is going to cost more than Backblaze.

I like the warm comfort of knowing that if I have major issues with a device, I can have a technical expert on site within 12-24 hours (depending on severity) to help me deal with it. The thought of having OpenFiler go tits up on some shitbox running production data, with me as the last line of support, actually gives me the heebie-jeebies.


Rhymenoserous
May 23, 2008

KennyG posted:

I work in Ediscovery. KjATstillabower.com if you want to chat

I also used to work in the IT end of ediscovery.

EDIT: I should say that to "do it right" requires throwing a big pile of money at the problem.
