|
I think you mean your backup SAM-SD. Right terms, people...
|
# ¿ Aug 31, 2012 19:45 |
|
|
KS posted:I have a dedicated VM with Java 6 Update 29 just for that. My desktop kept getting infected when I tried to use 6U29 + Firefox to browse the web, and it doesn't play well in Chrome. Which is funny, because Chrome has been the most reliable browser when it comes to getting to my array. Firefox derps up too much.
|
# ¿ Aug 31, 2012 21:32 |
|
Amandyke posted:EMC Goon checking in. I'd be happy to check into any SRs/issues any of you seem to be having. Just a lowly CE, but I can help if you think your SR is getting hung up, or at least I can let you know its current status. As far as I'm concerned, CEs are the only people who actually do any work at EMC; everyone else who comes out to the job site is just there to argue about where to go for lunch.
|
# ¿ Sep 5, 2012 14:44 |
|
Goon Matchmaker posted:What does EMC recommend as the amount of free space to leave for LUN snapshots on the VNX platform? 25%? 15-25% was what the EMC CERTIFIED INSTRUCTOR told me in class, depending on change rate. If you don't know your baseline change rate, or it's particularly high, default to 25%.
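Just to put numbers on that rule of thumb, here's a back-of-the-envelope sketch. The change-rate figures and the 15-25% band are taken from the post above; the formula itself is an illustration, not an EMC sizing formula:

```python
def snapshot_reserve_pct(daily_change_pct: float, retention_days: int,
                         floor: float = 15.0, cap: float = 25.0) -> float:
    """Estimate snapshot reserve as a percent of the LUN: daily change
    rate times retention, clamped to the 15-25% band mentioned above."""
    raw = daily_change_pct * retention_days
    return max(floor, min(cap, raw))

print(snapshot_reserve_pct(3.0, 7))   # 21.0 -> inside the band
print(snapshot_reserve_pct(10.0, 7))  # 25.0 -> high change rate hits the cap
```

If you genuinely don't know your change rate, this collapses to "use the 25% cap," which is exactly the advice above.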
|
# ¿ Sep 6, 2012 14:47 |
|
You say 2Gb+ Fibre Channel. Are you good at that stuff? Because you can easily run iSCSI off a 10G network and simplify the setup. Instead of listing the kind of hardware you want, why not post a list of needs instead: I need X much space, I'll be running X number of VMs, X of those will be Exchange/SQL and the rest file storage, and I need to retain X number of snapshots with or without the ability to replicate. EDIT: Price range? Most of the 2U storage boxes in a decent range that aren't overglorified DAS really only do iSCSI (which is why I asked above why you want FC).
|
# ¿ Sep 6, 2012 17:50 |
|
Also I wouldn't tie myself to a solution because "I have drives lying around that will fit it". I'd sell that poo poo on eBay and use the proceeds to help fund what you actually need.
|
# ¿ Sep 6, 2012 17:55 |
|
Moey posted:I have not worked with EMC before, but that seems quite high. On the lower end systems EMC gets you on the price of the system. On the higher end ones they drat near give the hardware away, provided you sign your entire company's firstborn children over to their unholy embrace.
|
# ¿ Sep 7, 2012 14:20 |
|
Xenomorph posted:My experience with a "file system on a file system" is things like Wubi/Ubuntu (ext3 loop device on top of NTFS) and virtual machines, where I/O suffers. I'm guessing I shouldn't have to worry about that since the internal I/O of a modern RAID setup won't be my bottleneck (1Gb Ethernet will). I know you are concerned about overhead with iSCSI, but it's virtually nonexistent on a modern system. I'm presenting iSCSI blocks to 4 ESXi boxes and using those blocks to run Windows servers (i.e. the OS is coming into the box over iSCSI). Each one of those Windows servers is talking to its own storage over Microsoft's iSCSI initiator (my storage has app-aware backups if you do it this way, as opposed to setting up a big virtual storage pool). At least two of these pools are database servers that at any given moment have ~200 people hitting them almost constantly, because our ERP vendor never clued into the idea that if I'm not actively executing a query, a db call/connection is absolutely unnecessary (gently caress these guys, seriously). So as long as you're just sharing out chunks of storage to do, well... whatever the hell you are doing (I have no clue), you will be fine. Sidenote: if you are doing Linux stuff I'd get a device that can do NFS as well as iSCSI. iSCSI plays really nicely with Windows; NFS is a breeze with Linux.
|
# ¿ Sep 7, 2012 14:26 |
|
Amandyke posted:iSCSI works great no matter what OS you're running... I've never used it on *nix because NFS on *nix is just so drat easy.
|
# ¿ Sep 7, 2012 16:41 |
|
ZombieReagan posted:I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors' arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately. EMC is going to try to push some poo poo like Replication Manager on you, and trust me, you'd rather kill yourself than ever try to figure out what the gently caress is wrong with Replication Manager.
|
# ¿ Sep 18, 2012 17:52 |
|
paperchaseguy posted:i think you're wanted here Goddammit. Just mousing over that makes my blood pressure rise. Somewhere on Spiceworks right now, someone in a multi-million-dollar company asking "What is a good first SAN/NAS?" is being told "No bro, you don't need all that, just slap together a throwaway server and some trend micro storage rack you found at the dump". Rhymenoserous fucked around with this message at 14:06 on Oct 3, 2012 |
# ¿ Oct 3, 2012 14:01 |
|
Misogynist posted:Technically, it's two different departments in the same organization. Two units is more than one. Sky is the limit.
|
# ¿ Oct 4, 2012 21:43 |
|
Vanilla posted:Are these an EMC part? He's right. Your TC probably has a small box of these in his car (Mine did).
|
# ¿ Oct 8, 2012 20:00 |
|
I've heard great things about Unitrends, but honestly any full-fledged backup solution is going to be pricey. I'd look back into the SAN market honestly; it's getting very competitive and prices are plummeting compared to, say, five years ago.
|
# ¿ Oct 16, 2012 17:24 |
|
Whoa, when did EMC buy Data Domain?
|
# ¿ Oct 16, 2012 21:53 |
|
Corvettefisher posted:Holy poo poo Avamar systems are expensive. It's EMC, dude. The VNX and VNXe series are actually the odd ones out for being affordable. Everything else EMC does is expensive as hell, and that's before they charge you for the software you thought came with it.
|
# ¿ Oct 19, 2012 20:27 |
|
Amandyke posted:That said you're a fool if you're paying list. 50% off is the de facto starting point for negotiations. True: with a big enough enterprise they'll practically hand you the hardware, but you are going to eat it in support and software costs. I don't really miss dealing with EMC.
|
# ¿ Oct 19, 2012 21:33 |
|
The_Groove posted:Every time I have DDN gear fail, it just makes me more impressed by it. I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance. A "disk chip" in each controller had failed. DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on! I have to say this post has left an impression on me too.
|
# ¿ Oct 24, 2012 20:36 |
|
I'm going to bet on the answer being "Because gently caress tape"
|
# ¿ Oct 25, 2012 16:28 |
|
Amandyke posted:Oh, if that's all that's being backed up, you could just use USB thumb drives then. In RAID 5 with a hot spare. How many USB ports do you have?
|
# ¿ Oct 29, 2012 17:02 |
|
Your sense of humor is broken.
|
# ¿ Oct 29, 2012 20:18 |
|
Wompa164 posted:Can anyone recommend a good NetApp and/or Compellent reseller and integrator? I worked for an e-discovery company for almost three years, and storage can quickly become a nightmare in that environment. I can answer generalist questions, but honestly the in-depth answers depend entirely on whether your discovery software dedupes, what size projects you tend to take on, etc. What software are you using? What is its backend? If you are using something like, say, iPro, are you keeping the SQL databases on your old storage? On the server? Planning on putting them on the new storage?
|
# ¿ Oct 30, 2012 21:36 |
|
Wompa164 posted:Hey, thanks for the reply. A 20-30TB array with enough speed to not drag down your projects is going to be a bit on the expensive side. In my experience e-discovery is very IO-intensive. Any of the big names (EMC, NetApp, etc.) will do the trick, but you definitely need to tell your architect/the company you are paying to do this exactly what it is you are doing with the filesystem. What is your budget like?
|
# ¿ Oct 31, 2012 17:33 |
|
Moey posted:Do you know what kind of IOPS you need from the setup? Also is that $20k just for the storage, or for servers/storage/licensing? Do you have to include switches as well for your storage network? I'm seconding all these questions, because if it's $20k for everything you may as well just wave off now.
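To frame why the IOPS question matters so much, here's a rough spindle-math sketch. The per-drive IOPS figure and the RAID write penalties are generic rules of thumb I'm assuming for illustration, not vendor sizing guidance:

```python
def drives_needed(total_iops: int, read_fraction: float,
                  write_penalty: int, drive_iops: int = 175) -> int:
    """Ceiling of backend IOPS over per-drive IOPS. Write penalty is
    ~2 for RAID 10 and ~4 for RAID 5 (rule-of-thumb values)."""
    reads = total_iops * read_fraction
    writes = total_iops * (1 - read_fraction)
    backend = reads + writes * write_penalty  # writes multiply on the backend
    return int(-(-backend // drive_iops))     # ceiling division

print(drives_needed(5000, 0.7, write_penalty=4))  # 55 drives for RAID 5
print(drives_needed(5000, 0.7, write_penalty=2))  # 38 drives for RAID 10
```

Fifty-odd 15K spindles plus enclosures, controllers, and switches is how a "modest" IOPS target blows through a $20k all-in budget.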
|
# ¿ Nov 6, 2012 22:23 |
|
Unless you can get a great deal on refurb gear I just don't see it happening.
|
# ¿ Nov 6, 2012 22:48 |
|
three posted:I am really looking forward to VMware making the VSA/etc awesome so that it is actually feasible for most environments. It's a long way away with all the limitations it has now, but I think it's the future. Getting rid of the SAN would be awesome. It will never happen; there are a ton of reasons to go with a shared chunk of external storage outside of the capacity arguments that VSAs are likely to solve. For a small business, though, I see VSA as a godsend.
|
# ¿ Nov 8, 2012 17:52 |
|
three posted:I disagree. SANs aren't used because people love them for virtualization, they're used because they're a requirement for HA/DRS/etc. Nutanix is already working in this space. It's just a matter of time. Do you really think an "all in one" virtualization box is going to set the world all atwitter? I'm kind of skeptical.
|
# ¿ Nov 8, 2012 20:09 |
|
paperchaseguy posted:hey guys i was gonna roll my own SAM-SAMBA-SOC (Scott Allen Miller SAMBA Scale Out Cloud) for my 15000 user Exchange 2005 production environment do you have any hardware to recommend? My budget is $650. Man, we're getting a lot of mileage out of me posting that SAM-SD poo poo here, aren't we?
|
# ¿ Nov 9, 2012 21:14 |
|
NippleFloss posted:I could have happily lived the rest of my life without knowing about Scott Allen Miller but now I know about him and I can't unknow about him and that makes me angry. Did you know he writes tech blogs? Man, he has an entire series on how RAID 5 is not a backup. Neat, huh?
|
# ¿ Nov 9, 2012 22:55 |
|
three posted:The performance difference between RDM and VMFS is negligible: http://www.vkernel.com/files/docs/white-papers/mythbusting-goes-virtual.pdf It would have been nice if he had charted the iSCSI results considering he did test them.
|
# ¿ Nov 13, 2012 18:30 |
|
I'm going to say just buy another server with plenty of storage in it.
|
# ¿ Nov 20, 2012 22:03 |
|
FISHMANPET posted:I'm currently figuring out how to do bare metal backups on some machines. Some critical workstations, and every server (including the storage servers). And it's being backed up to a Drobo. I want to kill myself. Here's how I do it. Or I should say, how it's going to be done when I'm all finished. I'm running 4 ESXi hosts over a 10GE connection to my storage; the hosts are in a DRS cluster, so if any one host goes down things automigrate around to bring everything back online. Entirely hands off, I love it. The 10GE storage links to my Nimble storage device, which holds all of my "on site snaps". Depending on the storage pool I'm keeping everything from hourly snapshots to daily, and I generally keep a month or so worth lying around, again dependent on the app. Offsite I'll have a second storage appliance from Nimble that is going to be catching replicas from the primary (this is an array function), and two ESXi hosts that are basically just waiting for me to bring VMs online. I'm keeping the DR site fairly cheap though, so I'm unlikely to use DRS, or even use vCenter at all there. I've been thinking about Veeam but it's really getting poo poo reviews lately.
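The retention schedule described above (hourlies kept for a day, dailies kept for a month or so) boils down to a prune rule. Here's a minimal sketch with assumed windows; this is an illustration of the policy, not Nimble's actual implementation:

```python
from datetime import datetime, timedelta

def snapshots_to_keep(snaps, now, hourly_window_h=24, daily_window_d=30):
    """Keep every snapshot from the last day, plus the newest snapshot
    of each calendar day for the last month; everything else is pruned."""
    keep, dailies_seen = set(), set()
    for ts in sorted(snaps, reverse=True):
        if now - ts <= timedelta(hours=hourly_window_h):
            keep.add(ts)                       # inside the hourly window
        elif now - ts <= timedelta(days=daily_window_d):
            if ts.date() not in dailies_seen:  # newest snap of that day
                dailies_seen.add(ts.date())
                keep.add(ts)
    return keep

now = datetime(2012, 11, 20, 12, 0)
hourly = [now - timedelta(hours=i) for i in range(72)]
print(len(snapshots_to_keep(hourly, now)))  # 28: 25 hourlies + 3 dailies
```

The nice part of doing this on the array (or a replica target) instead of in backup software is that expiry is just metadata; nothing gets copied around.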
|
# ¿ Nov 20, 2012 22:23 |
|
three posted:Management... auditing? What is this madness? Auditing? But I pay my taxes...
|
# ¿ Nov 21, 2012 15:34 |
|
Another Nimble buddy! Hey! EDIT: They have expansion cabinets now, and the pricing looks good.
|
# ¿ Dec 3, 2012 20:25 |
|
GrandMaster posted:We've been looking at Nimble boxes too, but I was surprised at how expensive they were considering it's full of lovely SATA. We are looking at ~100TB of storage, and it came in more expensive than Compellent, VNX5500 & FAS3250 boxes with similar capacity - the other boxes take up more space but I've got much more confidence in the performance since they all have truckloads of 15K SAS & SSD caching. At the 100TB mark I'd be looking at a bigger vendor too. To me Nimble's products make sense at the medium-business level, i.e. "I need 20TB of storage for VMs" etc.
|
# ¿ Dec 4, 2012 16:32 |
|
skipdogg posted:Looking for a good intro to SAN book. I know the absolute basics, but I guess I'm looking for more detail. I know what a LUN is, but what purpose does it have, why are they created. I know that iSCSI and Fibre Channel are connection protocols, but what makes them different and the advantages and disadvantages of each. Basically a good foundation book with some general best practices. It doesn't have to be vendor specific, but we're an EMC shop if it matters. 1. A LUN is, for all intents and purposes, a block of unformatted storage that you present to a device as if it were a disk. Its purpose is to provide a lump of remote storage to a device so it appears as just a new blank hard drive. 2. iSCSI goes over Ethernet and is tied to whatever speeds that standard can do. Back when 1GE was pretty much as blazing fast as standard network equipment would go, people would use Fibre Channel to get high-speed access to their storage devices. With 10GE reaching commodity pricing I can't think of a single loving reason to deal with Fibre Channel ever again unless I had already sunk money into it.
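To put rough numbers on point 2, here's the raw line-rate arithmetic (bits divided by 8, ignoring encoding and protocol overhead, so real-world throughput is lower on every transport):

```python
# Nominal link rates in Gbit/s for common storage transports of the era.
links_gbps = {"1GbE iSCSI": 1, "4Gb FC": 4, "8Gb FC": 8, "10GbE iSCSI": 10}

for name, gbps in sorted(links_gbps.items(), key=lambda kv: kv[1]):
    mb_s = gbps * 1000 / 8  # raw ceiling before encoding/protocol overhead
    print(f"{name}: ~{mb_s:.0f} MB/s")
# 10GbE's raw ceiling (~1250 MB/s) clears even 8Gb FC (~1000 MB/s),
# which is the whole "why bother with FC anymore" argument above.
```

That's the capacity side only; FC still gets argued for on latency and lossless fabrics, but for most shops commodity 10GE closed the gap.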
|
# ¿ Dec 14, 2012 19:58 |
|
Syano posted:Powervault kits are great. You can add shelves any time you need, up to like 192 total drives or something like that. You can't go wrong with them for small deployments Don't work with many small/medium businesses, do you? I had 10 year old Dell PowerEdge servers running mission critical stuff when I first started working at my current shop, and the prevailing attitude towards hardware purchases was generally "just use one of the old ones lying around". Case in point: about half a year before I was hired (call it three years ago) they purchased an entirely new software package that drove... well, everything from inventory to service calls. Rather than purchase a new server and new storage for their new program, they just took down half of the old ERP software's cluster, reinstalled Windows on it, appropriated half of its storage and set up the new system on it. Naturally it ran like poo poo. The excuse I heard was "well, we already owned an extra server we didn't need". Trying to pry money out of people like this is hard. My entire first year here was spent convincing the owner that just because you spent 10k on something five years ago doesn't mean it's worth a poo poo now. Servers don't have some intrinsic value that sticks around after they are past the point where they can do any of the jobs you need done. This isn't a car you can restore and then keep on using as a daily driver. Now I have a rack of lovely HP servers running ESX backed by a SAN. Much better than the baremetal RAID 0 shitboxes with DAS that I was dealing with before. Rhymenoserous fucked around with this message at 23:01 on Oct 22, 2013 |
# ¿ Oct 22, 2013 22:59 |
|
Thanks Ants posted:"Oh wow a chance to run OpenFiler in production on some DL380s I got off eBay! Finally I can save someone else's money and it will only cost me some of my worthless time." That's a Scott Allen Miller Storage Device, you goony gently caress.
|
# ¿ Dec 17, 2014 22:57 |
|
KennyG posted:Ken Rockwell's alter ego. I like the warm comfort of knowing that if I have major issues with a device I can have a technical expert on site within 12-24 hours (depending on severity) to help me deal with it. The thought of OpenFiler going tits up on some shitbox running production data, with me as the last line of support, actually gives me the heebie-jeebies.
|
# ¿ Dec 18, 2014 16:34 |
|
|
KennyG posted:I work in Ediscovery. KjATstillabower.com if you want to chat I also used to work on the IT end of e-discovery. EDIT: I should say that doing it right requires throwing a big pile of money at the problem.
|
# ¿ Dec 29, 2014 18:35 |