|
unknown posted:How many people are doing boot-from-SAN (i.e. no hard drives on the physical server)? I've done a few good-sized network upgrades. At two remote co-located sites we did a blade chassis and a small SAN and did boot-from-SAN. Each site has an extra blade on hand in case a server were to fail, so we could remap the LUN and get it back up and running remotely in a short period of time. We did that before the customer had gone through testing and validation of virtualization, which is how all their new deployments are done now.
|
# ¿ Aug 29, 2008 22:32 |
|
|
KS posted:Update: Open up a case with HP, something is seriously not right here.
|
# ¿ Sep 2, 2008 17:38 |
|
unknown posted:What are people using for doing their I/O tests of boxes? I prefer IOmeter. Here are some ideas towards creating some workloads to help evaluate storage performance. code:
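The post's code block didn't survive the archive. As a stand-in, here's a rough sketch of the kind of IOmeter-style access specifications you might define when evaluating storage; every name and percentage below is my own illustrative guess, not the original poster's:

```python
# Illustrative IOmeter-style access specifications (assumed values).
ACCESS_SPECS = [
    # (name, block_kb, pct_of_mix, read_pct, random_pct)
    ("db-oltp",      8, 40,  70, 100),
    ("logs",        64, 10,   0,   0),
    ("fileserver",  64, 30,  80,  75),
    ("backup",     256, 20, 100,   0),
]

def weighted_avg_block_kb(specs):
    """Weighted-average block size of the whole mix, in KB."""
    total = sum(pct for _, _, pct, _, _ in specs)
    return sum(kb * pct for _, kb, pct, _, _ in specs) / total

for name, kb, pct, read, rand in ACCESS_SPECS:
    print(f"{name}: {kb}KB, {pct}% of mix, {read}% read, {rand}% random")
print(round(weighted_avg_block_kb(ACCESS_SPECS), 1))
```

A mix like this is useful because a single-block-size test rarely resembles a real server's I/O.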
|
# ¿ Sep 2, 2008 19:30 |
|
KS posted:performance woes Actually I just remembered something: I had some similar performance issues on some BL25ps because of some stock settings of the HBA driver/BIOS. Are you using QLogic or Emulex? What is your maximum queue depth and/or execution throttle? You might want to try messing with that figure to see if you can improve your single-server scenario. edit: Also, there's a bunch of storage guys in #shsc on irc.synirc.org. You should stop by and chat; we might be able to give you other pointers. rage-saq fucked around with this message at 20:06 on Sep 2, 2008 |
# ¿ Sep 2, 2008 19:49 |
|
echo465 posted:Anyone using a Promise SAN? We're using a Promise VTrak M610i, a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (which really only give you 930GB or so) and you've got 13TB of RAID-6 storage for around $7500. Get ready to have fun when the controller shits itself and annihilates all the data on your array, and Promise has no idea why and offers no recourse for fixing it or preventing it from happening again (which it will). I've had 3 different customers with different Promise units, and every one of them had something along these lines happen; Promise basically told them to go gently caress themselves. I can't tell people enough to avoid this kind of crap.
|
# ¿ Sep 6, 2008 23:41 |
|
Alowishus posted:
I'm not an EMC guy (I've had very little experience with it, actually) but I'd definitely say you are at a 2k block size. Lots of larger SANs don't give you the option of choosing your block size. HP EVA, NetApp and I think HDS are all 4k blocks and you can't change it.
|
# ¿ Sep 17, 2008 02:34 |
|
H110Hawk posted:I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work. Get an HP Smart Array P800 card and then attach up to 8 MSA60 12xLFF enclosures. The P800 is about $1k, each shelf is about $3k, and then add your 1TB LFF SATA drives and you are good to go. If you need more, attach another P800 and more shelves, etc.
|
# ¿ Sep 17, 2008 20:36 |
|
Alowishus posted:Just out of curiosity, in an HP environment, how does one add their own 1TB LFF SATA drives? Is there a source for just the drive carriers? Because using HP's 1TB SATA drives @~$700/ea makes this quite an expensive solution. I don't know if there's a SKU for the correct drive trays; generally I just tell customers to buy the HP drives and that's the end of it.
|
# ¿ Sep 19, 2008 01:18 |
|
Catch 22 posted:I didn't calculate that number, that comes from my "Storage Specialist" at Dell telling me my peak IOPS to use an Equallogic. I calculated MY IOPS for my environment, and I added on to my real number to account for the RAID overhead. Yeah, if he's telling you 3k IOPS with that Equallogic, he clearly has NO idea what he's talking about. That's just peak IOPS for that unit under the right configuration, which is NOT how someone who knows how to design storage is going to talk about the speed of an array with you. Someone who knows what they are doing, and isn't just some sales drone parroting marketing information and talking points from his 'training classes', is going to talk with you about what kind of IOPS you currently push, where you are going, and what kind of setup you need to achieve that. That being said, I've set up a few Equallogics for customers who purchased them before engaging my services, and I don't really like them. Silly setup constraints, like not being able to use all of the drives in the chassis in one array, among other things. Their cabling setup of only 2x 1GbE despite having 3 ports was odd, and their web management page was alright but left a lot to be desired. One of the customers I set up with said Equallogic is, 2 months later, already moving to a better SAN because it just wasn't cutting it as configured. And about his comment of an Equallogic box running a 2000-user Exchange 2007 environment, I'm going to either call bullshit or say it ran unacceptably slow. I've done a few 2000-user Exchange designs, and the last design used 16x 146GB 15k HDD in RAID10 for the database and then a small LUN on a separate 6x 300GB 15k RAID10 disk group for the log files. I really doubt a single Equallogic box would be able to handle it acceptably unless it was a REALLY lightly used Exchange server. rage-saq fucked around with this message at 18:17 on Sep 19, 2008 |
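As a rough sanity check on the 16-disk RAID10 figure above, here's a back-of-envelope spindle estimate. The per-user IOPS figure, per-disk IOPS and write ratio are assumed values in the spirit of 2007-era heavy-profile Exchange sizing, not numbers from the post:

```python
import math

def exchange_disk_estimate(users, iops_per_user=1.0, disk_iops=175,
                           write_pct=0.4, raid10_write_penalty=2):
    """Back-of-envelope spindle count for an Exchange database volume.
    All default figures are assumptions, not vendor numbers."""
    frontend = users * iops_per_user
    # each logical write costs two backend I/Os in RAID10
    backend = frontend * ((1 - write_pct) + write_pct * raid10_write_penalty)
    return math.ceil(backend / disk_iops)

print(exchange_disk_estimate(2000))  # lands at 16 disks, matching the design above
```

Dial the assumptions up or down and the spindle count moves fast, which is exactly why guessing at IOPS is dangerous.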
# ¿ Sep 19, 2008 18:12 |
|
brent78 posted:Anyone else going to be at the HP Storage Symposium in Colorado Springs next month? Send me a PM, I'm staying at the Broadmoor (as is everyone else I imagine) Hmm, are you going through a distributor or through HP? One of my distributor guys whispered something about another storage event soon (last one was in SF).
|
# ¿ Sep 24, 2008 04:37 |
|
Kullrock posted:What is LFF short for? Can you use regular 1TB SATA disks on those, or do you need to pay out your rear end for HP's own disks? LFF = Large Form Factor, aka 3.5". SFF = Small Form Factor, aka 2.5". I believe you could use any old 1TB drives in there as long as you found the empty trays and purchased them, but I don't recommend it to my customers as they wouldn't get HP warranty coverage. Plus HP has Dual Port 1TB 7200 SAS, with more uniform, higher performance and reliability than standard SATA. Dual Port means your shelves can have two data paths all the way from the drive to the shelf to the controller(s), giving you fibre-channel-like availability for a fraction of the cost. HP's LFF drive lineup looks like this right now. code:
|
# ¿ Sep 29, 2008 17:10 |
|
Mierdaan posted:I was originally talking about the Dell MD3000i; I don't know what EQL product number that is (it is EQL hardware, right? No, the MD3000i is a box from LSI; Equallogic is a totally different product, but I wouldn't buy either. They are both rather immature in a number of ways (web management, strange limitations, and non-competitive price/performance).
|
# ¿ Sep 29, 2008 17:12 |
|
Catch-22, what kind of configuration are you looking at and what kind of features? I can estimate how much a good HP setup would cost, and worst case you could use that to get a better price from EMC.
|
# ¿ Sep 29, 2008 17:22 |
|
Mierdaan posted:Thanks rage-saq. What would you say is the best bang for your buck in the $15-$25k range? As previously mentioned we don't currently need anything with crazy good performance, but we don't want to handicap ourselves later. Only need about 3-4TB usable space, with the ability to scale up later. What kind of features? How many hosts? What kind of disk configuration? HP has a really great lineup with the MSA2000. There is an FC model, an iSCSI model and a SAS model (still a SAN; think FC without the FC), with both single and dual controller/fabric versions, and they are all pretty awesome from a management and price/performance standpoint.
|
# ¿ Sep 29, 2008 17:26 |
|
Mierdaan posted:Snapshots are really all we'd be looking for right now. Don't need replication as I'm still working on getting the first device in the door The MSA2000 series can do snapshots and volume copies, but not array-based replication. Additionally, the MSA2000 series is a block-level virtualized storage system, so you don't necessarily have to carve up disk groups based on I/O patterns like you do with traditional disk-stripe arrays. The MSA2000 main controller shelf holds 12 disks, and additional shelves are 12 disks each. Also, transaction logs are nearly 100% sequential, and an array that operates in nearly 100% sequential mode will perform ~2.5x as fast, so you can really get by with a lot fewer disks and consolidate your LUNs there. You could probably get the space and the performance you need with 8x 450GB in RAID10 for data and 4x 146GB in RAID10 for logs. At 20 hosts you are looking at either FC or iSCSI, and given your tight budget you'll probably want to go iSCSI. For reference, a dual-controller MSA2012i with 8x 450GB and 4x 146GB would come in around $17k or so.
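A quick sketch of the usable capacity behind that proposed layout (RAID10 mirrors everything, so usable space is half of raw; the helper below is my own, not an MSA tool):

```python
def raid10_usable_gb(disks, disk_gb):
    """RAID10 mirrors every disk, so usable space is half of raw."""
    assert disks % 2 == 0, "RAID10 needs an even disk count"
    return disks * disk_gb // 2

# the layout suggested above
print(raid10_usable_gb(8, 450))  # database disk group
print(raid10_usable_gb(4, 146))  # transaction log disk group
```

That's roughly 1.8TB of fast database space plus a small dedicated log volume, which fits the 3-4TB-with-room-to-grow requirement once you add a shelf.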
|
# ¿ Sep 29, 2008 17:45 |
|
Catch 22 posted:Oops, sorry, forgot that. What's your AIM? Some of those other part numbers you mentioned don't really mean anything to me because I'm not well versed in EMC part numbers. Or you can come into #shsc on irc.synirc.org; we've a few storage pros in there quite regularly.
|
# ¿ Sep 29, 2008 17:50 |
|
BonoMan posted:I guess I can ask this part of the question as well...what's a good pipeline for backing up from a SAN or NAS? You need to hire a consultant to come in and help you understand what you want. You could be talking about something in the 40-50k range (unless you need new servers too) or you could be talking about something in the 300k+ range. A qualified consultant will help you figure out what you want vs what you want to pay for.
|
# ¿ Sep 29, 2008 17:57 |
|
BonoMan posted:Yeah we have one coming in next week. Like I said this is just me putting feelers out there to see what other folks use. It depends on a lot of aspects. A few things to start thinking about now:
1: Recovery Point Objective. What is acceptable data loss? The closer to 0 you get, the more money you spend. 0 data loss is almost cost-exorbitant; it requires a few hundred grand in equipment to achieve and it's still not guaranteed.
2: Recovery Time Objective. What is acceptable downtime? Again, the closer to 0 you get, the more the costs go up, exponentially. An RTO of 0 means server redundancy, storage redundancy (two SANs) and application redundancy (example: Microsoft Exchange Cluster Continuous Replication).
3: IOPS. The amount of IOPS you need is ultimately going to determine the expense of the array. If you've got something that needs a sustained random IOPS level of about 2000 IOPS on a database set of 300GB or more, you are talking about getting about 20 disks just for that one application.
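The ~20-disk figure for a 2000 IOPS random workload can be sketched like this. The per-disk IOPS value and the RAID5-style write penalty are my assumptions; swap in your own numbers:

```python
import math

def spindles_needed(host_iops, read_pct, disk_iops=175, write_penalty=4):
    """Disks needed to sustain a random workload. write_penalty=4 models
    RAID5 (assumed here; use 2 for RAID10)."""
    write_pct = 1.0 - read_pct
    backend = host_iops * (read_pct + write_pct * write_penalty)
    return math.ceil(backend / disk_iops)

# ~2000 sustained random IOPS at 70% read on 15k spindles
print(spindles_needed(2000, 0.70))
```

The write penalty is what surprises people: the backend array does far more I/O than the hosts ever see.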
|
# ¿ Sep 29, 2008 18:33 |
|
Catch 22 posted:Call EMC, they will come out, do metrics for you, go over everything they think you need, and they will do you right. You can then use this info to compare at other products. Be frank with them and do not think of them as "demoing" a product to you. EMC actually came out onsite and did some performance monitoring to determine IOPS usage patterns before you gave them any money? I've not actually heard of them doing this, just coming up with random guesstimates for customers based off a little input from the customer. They were so horribly wrong (short by about 40% in some cases) that I ended up fixing their order at the last minute before customers placed it (and sometimes after, fixing it by buying more disks). Moral of the story: you can't cheat by guessing at IOPS patterns; you really need to know what your usage patterns look like. Some applications (like Exchange) have decent guidelines, but they are just that, guidelines. I've seen the 'high' Microsoft Exchange estimates be short by about 50% of actual usage, and I've also seen people's mail systems come in 20% under the 'low' guideline. SQL is impossible to guideline; you need to do a heavy-usage-case scenario where you record lots of logs to determine what to expect.
|
# ¿ Sep 30, 2008 03:31 |
|
Mierdaan posted:Can you explain this a little more? I've always felt like I was missing something by carving up disk groups like I did before, so I'm glad to know I am. I just don't understand quite what. Reposting this question on a new page. Is rage-saq just talking about the ability to automatically move more frequently-used data to faster disks? That makes sense if you have slower spindles in your device, but if you cram the whole thing full of 15k RPM SAS drives I'm not quite sure what this accomplishes. Again, totally missing something. Well, there are two basic RAID types used today. The old way: disk RAID. A 32kb, 64kb or 128kb stripe (sometimes per LUN, sometimes not) runs through the entire disk group. Sometimes you make multiple disk groups and group those together for performance; different vendors have different strategies. What are the downsides? The stripe must be maintained, so performance and reliability follow traditional models, and consolidating I/O patterns (random vs sequential) onto particular disk groups is of great concern. The newer way: block RAID. Data is virtualized down to blocks, which are then spread out over the disk group. The advantage is the blocks aren't tied to particular disks, so they can be spread out to maintain redundancy and improve performance as needed. That means you can mix I/O patterns without seeing a performance penalty, because the blocks will get spread out more and optimized so more disks are driving the I/O. It also means it doesn't matter so much that all the disks match in size and spindle speed, though to make your RAID meet sizing and redundancy requirements you won't be able to use the fullest extent of your capacity when you mix spindles. Not all block storage systems will automatically migrate data across different disk spindle speeds like Compellent and 3PAR do; that is still fairly new.
I know HP is working on including migration in EVAs eventually, but right now it's still a best practice to have the same spindle speed per disk group in an EVA. Drobo uses virtualized storage to spread redundancy blocks everywhere, which is how they get their crazy recovery thing going. EMC unfortunately is very pompous and is under the misguided opinion that block virtualization is a bad idea, which is why they don't have it. A lot of industry experts disagree.
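A toy illustration of the difference described above, entirely my own sketch and nothing like a real array's placement logic:

```python
def fixed_stripe(lun_blocks, disks):
    """Traditional disk RAID: block i always lands on disk i % N,
    no matter how busy that disk is."""
    return [b % disks for b in range(lun_blocks)]

def virtualized(lun_blocks, disks, load):
    """Block RAID: each block goes wherever makes sense, here modelled
    as the currently least-loaded disk."""
    placement = []
    for _ in range(lun_blocks):
        target = min(range(disks), key=lambda d: load[d])
        load[target] += 1
        placement.append(target)
    return placement

print(fixed_stripe(6, 3))        # [0, 1, 2, 0, 1, 2] regardless of load
hot = [5, 0, 0]                  # disk 0 is already hammered
print(virtualized(6, 3, hot))    # new blocks avoid the busy disk
```

The fixed stripe can't route around a hot spindle; the virtualized layout can, which is why mixed I/O patterns hurt it less.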
|
# ¿ Sep 30, 2008 15:42 |
|
ghostinmyshell posted:I'm not happy with Lefthands support. As soon as LH can identify the problem is not in relation to their software, you enter the bermuda triangle dealing with HP for the hardware side of things. Sounds like someone is wishing they got a 6-hour Call-To-Repair warranty instead. It's mighty expensive though, nearly double 4-hour response!
|
# ¿ Dec 30, 2008 23:43 |
|
oblomov posted:Rage, can you clarify on this? Is this through HP or Lefthand support? Personally, I think I am going to go with NetApp after all. Well, in a few weeks their support will be through HP. 6-hour Call To Repair is expensive because it is the highest level of support you can get: you call in an issue and HP basically promises it is resolved within 6 hours. That means they stock spare parts for you at the local warehouse, and make sure they have enough certified techs in your area to cover you should a problem arise. This is basically pulling out all the stops so your poo poo gets fixed ASAP.
|
# ¿ Jan 4, 2009 16:38 |
|
oblomov posted:Hmm... I will hit up my rep on this. The other route is frankly unacceptable. One of the big reasons for even looking at something other than NetApp is less downtime and quicker potential fixes. If it takes 2 days to repair a SAN, that's simply unacceptable. The fact that their support is going to be through HP is what concerns me most, actually. IMO, HP support is simply not up to par considering their pricing. HP support options like 4-hour 24/7 etc. are just as good as anyone else's these days on Standard equipment; if you really need insane support you buy 6-hour CTR. LeftHand should be getting shelved under Standard equipment, so the same ProLiant-level warranty options will apply. If you REALLY want crazy support, you need to step up to Enterprise equipment; stuff like EVAs, ESLs and XPs have insane warranty options. This is the kind of equipment that competes directly with NetApp, as NetApp predominantly makes Enterprise-level equipment. EVAs are awesome (installing one right now as a matter of fact) and have all sorts of warranty/support/prefailure replacement + auto dial-home that just don't appear on the LeftHand/MSA-level Standard storage lines. If you get an XP you can get a support contract with HP for seven nines uptime. That's about 3 seconds of downtime per year. This is what runs the NASDAQ and that's the kind of warranty they have. rage-saq fucked around with this message at 02:39 on Jan 5, 2009 |
# ¿ Jan 5, 2009 02:36 |
|
Misogynist posted:IT people who have no balls (or female equivalent). They are called Thatchers. skullone posted:And whatever NAS/SAN you get, it'll suck. There will always be odd performance problems that you'll have to spend hours troubleshooting before your vendor will listen to what you're saying, only to have them say it's a known problem and a patch will be ready in a few weeks. The fact that there are consultants/vendors out there that allow this kind of behavior is appalling. You are dropping a lot of cash for a very advanced piece of equipment that is just supposed to work; if it doesn't, you should return it and get something that does. I do enterprise storage design/consulting/implementation primarily around HP products and I can honestly say all of my deployments work 100% as advertised with no mysterious performance/reliability problems. That's the whole loving point of doing this. If the product couldn't deliver as advertised I would be the first one trying to get the customer a refund, as well as not recommending it in the future. My personal opinion is that this is what you get when you go with generic server equipment and then use some kind of general-purpose operating system + software package to accomplish this kind of low-level stuff. A lot of Sun's entry products utilizing ZFS seem to fit this bill, along with other stuff I'm not a fan of like LeftHand etc. Your mileage may vary of course.
|
# ¿ Jan 28, 2009 20:51 |
|
Mierdaan posted:Well, we've been hosed. Reason number 5234634 not to deal with those big fulfillment warehouses like CDW, PC Mall etc. for configs. The guy you are talking to really has no clue about any high-level technical stuff; he's just a guy that knows sales stuff and is 'good with computers'. Always always always hire a consultant to come out and do the design/config work for you. It's not free, but at least it will be right and you will have significant recourse if they screw up the config. Because you took your PC Mall config, which was hosed up, directly to another vendor who didn't have any part in the config, you are pretty much hosed.
|
# ¿ Jan 29, 2009 16:18 |
|
mkosmo posted:Hey Storage Gurus, I have a question for you if you will permit: First, my two cents: your EMC engineer who sold and implemented the system for you should have properly designed your system to meet your necessary workload performance requirements. Now on to the real question of why this is a difficult thing to achieve. >350MB/s is quite a bit of throughput, but throughput alone is not the most important factor when it comes to disk performance. It can actually be extremely misleading. Disk performance is a very careful balancing act of the right RAID level at the right block size for an optimal IOPS & throughput level to best meet your I/O pattern. For example, 350MB/s @ 8 IOPS would be a pretty poorly performing disk system, whereas 80MB/s @ 5000 IOPS could be an extremely well performing disk system. A few things that will affect your performance that you'll want to look into. Be sure to use IOMeter to do your performance testing; anything else is basically full of lies (I'm looking at you, HDTach). 1: I/O pattern. This means what % is read, what % is write, and what % is random vs sequential. 2: Block sizes. The block size of your stripe (I think EMC calls this element size?) and the block size of your filesystem's partition. For the same number of disks in a disk group you will have to balance the block size for both IOPS and throughput (MB/s). The smaller the block size, the more IOPS you are going to get, but at a lower MB/s. Conversely, the larger the block size, the more MB/s you are going to get, but at a much lower IOPS. Additionally, you are going to get better IOPS AND MB/s the more sequential your workload is, and even more if your workload is more read than write. An 8kb block size on both the disk and filesystem side is one of the better balanced configurations. With a 24x 450GB 15k disk group on an HP EVA4400 I've gotten in excess of 6000 IOPS (some of that figure would be cached performance) at 250-300 MB/s in certain access patterns.
An 8kb block size suits database servers and anything with a highly random, mildly write-heavy pattern, like operating system drives and things like that. For fileservers you can increase your block size to 32kb or higher and get good throughput at a lower IOPS rate (which is generally fine for a file server). IOMeter is going to become your best friend for evaluating your disk/filesystem configurations to see if you get the performance you need to meet your workload. rage-saq fucked around with this message at 22:30 on Feb 4, 2009 |
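The IOPS-versus-throughput tradeoff described above is just arithmetic. Illustrative numbers, not benchmarks:

```python
def mbps(iops, block_kb):
    """Throughput implied by an IOPS figure at a given block size."""
    return iops * block_kb / 1024.0

# same disk group, illustrative figures: small blocks buy IOPS at the
# expense of MB/s, large blocks the reverse
small = mbps(6000, 8)    # lots of IOPS, modest throughput
large = mbps(2000, 64)   # fewer IOPS, more throughput
print(round(small, 1), round(large, 1))
```

This is why a raw MB/s number tells you almost nothing without the block size and IOPS behind it.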
# ¿ Feb 4, 2009 22:28 |
|
InferiorWang posted:Thanks fellas. What I'd really like to do is have an iSCSI SAN, 2-4 TB. I'd like to host a modest amount of vmware guests on it hosting primarily file shares, home directories, and groupwise email for the staff numbering 250-300. Then I'd like to do it all over again with a completely redundant SAN at one of our other schools in some sort of fail over configuration. Being able to do snapshots and have some sort of monitoring dashboard is something I'd like as well. Pick a big number and get them to approve that, then show off how good you are by coming in under budget!
|
# ¿ Mar 18, 2009 22:42 |
|
InferiorWang posted:I might be better off saying it's cheap and ending up over budget. Otherwise I'm likely to get this response: I'd advise against getting any kind of advice on important system design from what is essentially the Best Buy of product fulfillment centers. Call in a consultant and engage with them in a discovery and project assessment for virtualizing your infrastructure.
|
# ¿ Mar 19, 2009 17:29 |
|
BonoMan posted:Being new to SAN-type storage... what are options that don't require per-license usage? StorNext is totally not required for general SAN usage; it is for very specialized scenarios. When you map a volume off a SAN to a system, it appears as any regular local drive it has exclusive block-level access to. A SAN's ability to share the same volume between two servers is great, but without a shared filesystem it would be useless: two systems would write to the file table at the same time and just destroy the whole thing. StorNext provides that shared filesystem, so all the computers that have the volume mapped locally use a centralized locking system and don't write to the FAT/same files at the same time, which would otherwise destroy the filesystem.
|
# ¿ Jun 10, 2009 05:56 |
|
bmoyles posted:Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently but retrieved much less frequently, especially as time goes on. What do you go for these days? If you want file presentation instead of block presentation from your unit, the new HP Extreme Data Storage 9100 is pretty badass. Really low cost per GB, very fast NFS and CIFS, managed through one console. It's also extremely dense due to their super fancy new disk shelves that hold 82 LFF disks in a 5U shelf. DreamWorks just purchased a few petabytes of it.
|
# ¿ Jun 15, 2009 23:53 |
|
Erwin posted:Anybody familiar with Scale Computing? They caught our eye because they were originally in the financial market, which is what we do (even though they failed at it). Their system starts with 3 1U units, either 1TB, 2TB, or 4TB per unit, and then you can expand 1U at a time, mixing and matching per-U capacities. Skip it. IMO the only clustered systems (as opposed to traditional aggregate arrays like HP EVA, EMC CX, NetApp etc) worth getting into are LeftHand or some kind of Solaris ZFS system.
|
# ¿ Nov 11, 2009 20:14 |
|
1000101 posted:EMC RecoverPoint, and potentially NetApp SnapMirror, can both give you application-consistent replication. I'm pretty sure synchronous replication (where the target ACKs a mirrored write before the source side continues) combined with 70ms of latency would destroy nearly any production database's performance. At a certain point you can't overcome latency.
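Why 70ms kills synchronous replication is simple arithmetic: a writer that must wait for the remote acknowledgement before issuing its next write is capped by the round trip. A minimal sketch (my own, with an assumed outstanding-I/O parameter):

```python
def max_sync_write_iops(rtt_ms, outstanding=1):
    """Upper bound on acknowledged writes/sec when every write must
    round-trip to the remote array before the next one is issued."""
    return outstanding * 1000.0 / rtt_ms

# At 70ms RTT a single-threaded writer tops out around 14 writes/sec,
# no matter how fast the disks on either end are.
print(round(max_sync_write_iops(70)))
```

More outstanding I/Os raise the ceiling, but a database flushing its log serially gets the worst case.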
|
# ¿ Dec 22, 2009 06:59 |
|
three posted:Hrms. What would you suggest as the best route to copy the database off-site to another SAN? We actually just want a snapshot every hour or so, not live replication. I'm very new to Oracle, but have good MySQL experience, and to a lesser extent MSSQL experience. Could we just do something similar to a dump and import like would be done with MySQL? Although that wouldn't be optimal since we'd have to lock the entire DB to dump it, no? This kind of advanced high availability is not as simple as many salespeople would like it to sound. Some options, like application-specific mirroring (I'm not an Oracle pro, but I know MS SQL 2005/2008 has both sync and async database mirroring with failover), would give you some technical avenues toward improving the situation. If the people who bought it weren't smart enough to know what they were asking for, or didn't know enough to ask the right questions and keep the wool from being pulled over their eyes, then there's not much you can do from a business/customer-service standpoint. However, if the RFP/requirements sheet/whatever was specific enough that it called for these specific features, and you didn't get them due to whatever kind of fuckup, then you should take some hints from comments people have already made and make DELL fix this. As people have already mentioned, find out WHO made the promises, get emails of them if possible, and hold them to it. If you are a vendor/consultant/etc and you make claims about XYZ with vanilla ice cream on the side, and it doesn't do any of that, you can be sure as hell you can LEGALLY get out of paying the agreed-upon price. That's how this industry works. If you have already paid, they would be REALLY REALLY DUMB not to do EVERYTHING they possibly can to make it better. These kinds of empty promises and bad service follow-up are every competitor's and lawyer's dream situation, and every product/service/sales manager's worst nightmare.
I'm a consultant and you'd better bet this is one of the highest things on my list of "Things Not To Do". I do everything I can to make sure I understand the technical requirements, and make sure the customer understands the differences and why it's important these things are defined and not glossed over. If I promise solution X and can only deliver Y because I hosed up for whatever reason, you'd better bet my boss is going to chew my rear end out because he has to pay to fix it, and I may be working on a new resume sooner than I had wanted. I will say I'm not surprised; Dell's internal Equallogic sales reps are notoriously clueless (getting someone barely qualified for sales, and barely qualified for basic end-user technical support, to do advanced enterprise design is their SOP) and I've had many customers buy the heaping load of bullshit they get fed by somebody who clearly doesn't understand what they are talking about, only to be disappointed later. A great specific example is when they get sold a single SATA shelf and are promised 50k IOPS because that is the maximum value on the spec sheet, while the sales rep says it's going to meet the performance requirements for a write-heavy SQL database for this and god knows what other reason. If you don't press to get real technical knowledge, they don't throw someone qualified at you. 1000101 posted:Clearly this is the case but I'm not sure how Oracle handles it. Is Oracle going to acknowledge every write? If so then yeah 70ms would make the DBA want to blow his brains out. Or does it just ship logs on some kind of interval? More of an Oracle question than a storage question but it would be helpful in deciding if you should use array-based replication or application-based. I suppose that's the real question: *WHERE* are you doing your replication and what does your application support?
If Oracle doesn't care how it got the database, so long as it is crash-consistent with the log files, you could get away with some pretty dirty array-based replication techniques that do a synchronous replication amounting to *COPY BLOCK X, WAIT FOR CONFIRMATION THAT BLOCK X WAS WRITTEN BEFORE SENDING BLOCK Y*, as opposed to more advanced things like array agents for Oracle, or Oracle itself doing the replication. Different techniques allow for different options, but at a fundamental level you can't have *true* synchronous replication that doesn't hold up processing on the source end until the target matches the source. Otherwise it's not no-data-loss crash-consistent, which defeats the point of synchronous replication, and that's exactly the kind of scenario where something like a 70ms delay would really kill you. It's just the nature of the beast.
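The *copy block X, wait for ack, send block Y* loop described above can be sketched like so. This is a toy in-memory model of the idea, not how any real array implements it:

```python
def replicate_sync(blocks, send, wait_ack):
    """Naive synchronous replication: ship block X and stall until the
    target confirms it before shipping block Y."""
    for seq, block in enumerate(blocks):
        send(seq, block)
        if not wait_ack(seq):
            raise RuntimeError(f"no ack for block {seq}; halting source I/O")

# toy in-memory 'target array'
target = {}
def send(seq, block): target[seq] = block
def wait_ack(seq): return seq in target

replicate_sync([b"a", b"b", b"c"], send, wait_ack)
print(sorted(target))
```

The key property is in the loop structure: the source cannot run ahead of the target, which is exactly why link latency gates the source's write rate.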
|
# ¿ Dec 23, 2009 08:09 |
|
Insane Clown Pussy posted:So, calling around trying to find a replacement for our old LeftHand boxes. I need something in the 2-4TB range for 2-3 ESX hosts running a typical 6-8 server Windows environment (files, Exchange, SQL, SharePoint etc) Just get another LeftHand node; that way you can use your existing LeftHand stuff for secondary storage/backups/etc. It's a good platform and if your infrastructure is right it will perform more than adequately for the kind of environment you are talking about.
|
# ¿ Mar 3, 2010 07:27 |
|
|
EoRaptor posted:Most of these have already been answered, but I'll add some things Dell/Equallogic isn't very public about : Seriously? So as you build your EQL "grid" up you are introducing more single points of failure? Holy cow...
|
# ¿ May 25, 2010 17:33 |