|
KS posted:Update: Open up a case with HP, something is seriously not right here.
|
# ? Sep 2, 2008 17:38 |
|
What are people using for doing their I/O tests of boxes? bonnie++ or just straight 'dd' or any other package?
|
# ? Sep 2, 2008 18:57 |
|
unknown posted:What are people using for doing their I/O tests of boxes? I prefer IOmeter. Here are some ideas towards creating some workloads to help evaluate storage performance. code:
|
# ? Sep 2, 2008 19:30 |
|
KS posted:performance woes Actually, I just remembered something: I had some similar performance issues on some BL25ps because of some stock settings of the HBA driver/BIOS. Are you using QLogic or Emulex? What is your maximum queue depth and/or execution throttle? You might want to try messing with that figure to see if you can improve your single-server scenario. edit: Also, there's a bunch of storage guys in #shsc on irc.synirc.org. You should stop by and chat; we might be able to give you other pointers. rage-saq fucked around with this message at 20:06 on Sep 2, 2008 |
# ? Sep 2, 2008 19:49 |
|
Anyone making SANs with SFF SAS drives yet? We're trying to standardize our environment around 72/146 GB 2.5" SFF SAS drives (300 GB by end of year). unknown posted:What are people using for doing their I/O tests of boxes? Small blocks = higher IOPS; big blocks = higher throughput. To simulate our SQL workload, we use 8k blocks, 60% write, 40% read, 100% random. brent78 fucked around with this message at 05:39 on Sep 3, 2008 |
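For anyone wanting to reproduce that profile without IOmeter: on Linux, a fio job can approximate the same 8k, 60% write, 100% random mix. fio isn't something anyone in the thread mentioned — it's my stand-in, and the target path and sizes are placeholders.

```shell
# A fio job approximating the SQL workload above:
# 8k blocks, 60% write / 40% read, 100% random.
cat > sql-sim.fio <<'EOF'
[sql-sim]
# mixed random read/write; 60% of the mix is writes
rw=randrw
rwmixwrite=60
bs=8k
# async I/O, bypassing the page cache so the array does the work
ioengine=libaio
direct=1
# outstanding I/Os -- tune this alongside your HBA queue depth
iodepth=16
size=4g
filename=/mnt/test/fio.dat
runtime=120
time_based=1
EOF
fio sql-sim.fio
```

Watch the IOPS and latency columns in fio's summary rather than raw MB/s; for a workload like this, throughput numbers are nearly meaningless.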
# ? Sep 3, 2008 01:35 |
|
rage-saq posted:What is your maximum queue depth and/or execution throttle? You might want to try messing with that figure to see if you can improve your single server scenario. Messing around with this a bit today. We're using QLogic mezzanine cards in a mix of 20P G3s and 25Ps. BL20P default was queue depth 16 and execution throttle 16. Dell server with QLA2640s was queue depth 16 and execution throttle 255. I've been tweaking queue depth a bunch but not execution throttle. I'll have to try more settings tomorrow. I started messing with I/O schedulers too. Out of the box it was using CFQ on the 8 paths underlying the mpath device. I think there's some performance to be gained here. This thread, especially jcstroo's post, drives me crazy. I'll hang out in #SHSC, thanks.
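For anyone else chasing the same knobs on Linux, a rough sketch of where they live. The device names are examples, and the qla2xxx option name should be double-checked against your driver version:

```shell
# Show and switch the I/O scheduler on one underlying path (sdb is an example)
cat /sys/block/sdb/queue/scheduler       # prints e.g. "noop deadline [cfq]"
echo noop > /sys/block/sdb/queue/scheduler

# Queue depth per device, as the SCSI layer sees it
cat /sys/bus/scsi/devices/1:0:0:1/queue_depth

# QLogic queue depth is a module option on the qla2xxx driver,
# e.g. in /etc/modprobe.conf (option name may vary by driver version):
# options qla2xxx ql2xmaxqdepth=32
```

With multipath on top, remember the scheduler is set per underlying path, not on the mpath device itself.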
|
# ? Sep 3, 2008 02:25 |
|
rage-saq posted:I prefer IOmeter. Here are some ideas towards creating some workloads to help evaluate storage performance. brent78 posted:Also recommending IOMeter. But more importantly, try to simulate the actual workload that you expect to use. I recently had a vendor tell me "you should be getting at least 8,000 IOPS on that LUN, not sure why you're only seeing 8,000". As it turns out, their test was performed using 512B blocks, 100% sequential, 100% read. Well duh. Unfortunately I'm running FreeBSD - although I can probably get IOmeter working under the Linux emulation. I just haven't seen any 'success' stories while googling so far. So I'm looking for other industry-standard ones. And yes, I'm well aware of testing my own load, not someone else's marketing-focused one. In one of our applications, bonnie++ is actually fairly close to what we're doing (numerous files).
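For reference, a typical bonnie++ invocation for this kind of test might look like the following. The mount point is an example, -s should be at least twice RAM to defeat caching, and -n drives the small-file create/stat/delete phase that matches the "numerous files" workload:

```shell
# Sequential and seek tests over a 16GB working set, plus a
# small-file phase (128 x 1024 files). -u is needed when run as root.
bonnie++ -d /mnt/testvol -s 16g -n 128 -u nobody
```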
|
# ? Sep 3, 2008 05:28 |
|
We've only recently begun to virtualize, with ESXi. Probably going to be buying a starter kit for 3 servers + VMM here in a bit from VMware; right now I am just digging the free version and working with the evaluation of the rest. I'm using 6 servers, Dell PE2950s and 1950s - they have the newer CPUs that support Intel VT. All have 8GB of RAM except the 2950 running our primary SQL 2005 server - it has 16GB. They host our custom web-based application(s), so there are 2 IIS servers, 2 Active Directory DCs, a Backup Exec server, 2 SQL 2005 servers (one primary, the other mirrored), and an ISA 2006 firewall, plus a file storage host. Pretty simple stuff. I've also inherited (from the previous CEO) a Dell/EMC AX150i iSCSI SAN with 8 of 12 bays filled with 500GB SATA drives. Finally, I have two Dell 5724 iSCSI-optimized switches to tie it all together outside of my regular network. The fun part has been trying to fit them all together. So far I've managed to get several of our lighter-load servers virtualized, but I am unsure if I should do so with the SQL Server - we don't have an enormous load on it (it hosts our customer management apps and product catalogs, and runs just fine on a 2950 w/ RAID 5 local storage, PERC 5i controller). Is my setup too low-rent for such a situation? Any general pointers on how I should set up ESXi with the AX150i/my config?
|
# ? Sep 3, 2008 05:52 |
|
Mierdaan posted:Thanks for the thread, 1000101. Floating this question again
|
# ? Sep 4, 2008 18:12 |
|
I can answer most of your EMC questions (features, functionality, why EMC over XYZ) from a sales perspective - I'm not that technical! I can give opinions on other vendors in general.
|
# ? Sep 4, 2008 18:49 |
|
Mierdaan posted:Thanks for the thread, 1000101. I've been looking at the MD3000i as well, mainly because it is the only iSCSI filer that Dell sells that doesn't use insanely expensive disks. On paper, it looks quite good; however, some of the wording is a bit confusing. For example, they claim more performance by adding a secondary controller, and they only mention cache when you have a dual-controller setup. I searched and searched and could not find any mention of cache per controller, only when used in a dual-controller setup, so you'll probably want that second controller just in case. Those disks should yield a healthy 170 IOPS per disk (vs. ~130 for 10k), so that would be ideal for a database. If your database can live fine on 680 IOPS, it's probably good enough. That extra drive should be used as a universal hot-spare, though.
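The arithmetic behind that 680 figure, as I read it - a sketch only, since the post doesn't say how many spindles are in play (680/170 implies four):

```shell
# Rough aggregate random IOPS: per-disk IOPS x number of data disks.
# RAID write penalties would reduce the effective number further.
iops_per_disk=170
disks=4
total=$(( iops_per_disk * disks ))
echo "$total IOPS"
```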
|
# ? Sep 4, 2008 19:17 |
|
optikalus posted:I've been looking at the MD3000i as well, mainly because it is the only iSCSI filer that Dell sells that doesn't use insanely expensive disks. Have you looked at any vendors outside of Dell? The reseller we work with is pretty big on Dell, so I don't know how biased they are; they're claiming wondrous things about the 3000i. optikalus posted:Those disks should yield a healthy 170 IOPS per disk (vs. ~130 for 10k), so that would be ideal for a database. If your database can live fine on 680 IOPS, it's probably good enough. optikalus posted:That extra drive should be used as a universal hot-spare, though.
|
# ? Sep 4, 2008 19:29 |
|
Mierdaan posted:Have you looked at any vendors outside of Dell? The reseller we work with is pretty big on Dell, so I don't know how biased they are; they're claiming wondrous things about the 3000i. Not really, as I don't have the capital to plop down $25k unfinanced on a filer (Dell offers $1 buy-out leasing terms). HP has a similar product, but I dislike HP for various reasons and I didn't immediately see any financing options. I would love an EMC or NetApp, but it's just out of my reach at the moment. Actually, I did have an EMC IP4700 for a few days -- it took a 4' drop off my cart on the way to my car. Flattened. Oops.
|
# ? Sep 4, 2008 19:51 |
|
optikalus posted:Not really as I don't have the capital to plop down $25k unfinanced on a filer (Dell offers $1 buy-out leasing terms). HP has a similar product, but I dislike HP for various reasons and I didn't immediately see any financing options. My only problem with the MD3000 (SAS) we use is that the controller card isn't compatible with FreeBSD. It only works with RHEL/CentOS and Windows 2003/2008.
|
# ? Sep 6, 2008 19:11 |
|
dexter posted:My only problem with the MD3000 (SAS) we use is that the controller card isn't compatible with FreeBSD. It only works with RHEL/CentOS and Windows 2003/2008. The PERC4/DC isn't even compatible with RHEL. I found that out the hard way with my PV220S. I could transfer enough data to fill the card's cache at speed, but any further writes were dog slow - like 10MB/s or less (initial write was only 90MB/s). The Adaptec I replaced my PERC with can max out at 300MB/s reads and something like 200MB/s writes. LSI can suck it.
|
# ? Sep 6, 2008 20:47 |
|
Anyone have experience with LeftHand Networks, specifically their rebranded HP (or Dell 2950) appliances? Went to a couple of demos and their premise seems pretty slick - kind of like 3PAR, but cheaper. I especially liked the ability to add/remove nodes almost on the fly.
|
# ? Sep 6, 2008 22:14 |
|
Anyone using a Promise SAN? We're using a Promise VTrak M610i, which is a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (that really only format to 930GB or so), and you've got 13TB of RAID-6 storage for around $7500.
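The capacity math behind those numbers, sketched out: a "1TB" drive formats to roughly 930GB, and RAID-6 gives up two disks to parity.

```shell
# Usable RAID-6 capacity: (bays - 2 parity) x formatted capacity per drive.
# Stay in GB so the shell's integer arithmetic works.
bays=16
parity=2
gb_per_drive=930
usable_gb=$(( (bays - parity) * gb_per_drive ))
echo "${usable_gb} GB usable"   # roughly the 13TB quoted above
```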
|
# ? Sep 6, 2008 23:10 |
|
echo465 posted:Anyone using a Promise SAN? We're using a Promise VTrak M610i, which is a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (that really only format to 930GB or so), and you've got 13TB of RAID-6 storage for around $7500. Get ready to have fun when the controller shits itself and annihilates all the data on your array, and Promise has no idea why and offers no recourse for fixing it or preventing it from happening again (which it will). I've had 3 different customers with different Promise units, and they all had something along these lines happen; Promise basically told them to go gently caress themselves. I can't tell people enough to avoid this kind of crap.
|
# ? Sep 6, 2008 23:41 |
|
echo465 posted:Anyone using a Promise SAN? We're using a Promise VTrak M610i, which is a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (that really only format to 930GB or so), and you've got 13TB of RAID-6 storage for around $7500.
|
# ? Sep 6, 2008 23:48 |
|
rage-saq posted:Get ready to have fun when the controller shits itself and annihilates all the data on your array, and Promise has no idea why and offers no recourse for fixing it or preventing it from happening again (which it will). My google skills must be failing me, because most of the whining about the Promise VTrak line that I'm finding is from when Apple discontinued the Xserve RAID and told everyone to buy Promise instead. I'm interested in hearing if this is a widespread problem. echo465 fucked around with this message at 06:37 on Dec 15, 2015 |
# ? Sep 8, 2008 15:30 |
|
echo465 posted:My google skills must be failing me, because most of the whining about the Promise VTrak line that I am finding is from when Apple discontinued the Xserve RAID and told everyone to buy Promise instead. I'm interested in hearing if this is a widespread problem.
|
# ? Sep 8, 2008 16:59 |
|
I was caretaker for 4 VTrak M500i units at a contract gig once, filled with 500GB Seagate SATA drives. One of the units had the controller take a poo poo twice, about 6 months apart. After a very stressful morning on the phone with Promise, they had me force each drive online in the web management tool, and that caused the controller to magically fix itself somehow and everything was hunky-dory. No data loss or anything. I used the exact same procedure for the second instance and that also fixed everything. I think I was just very, very lucky.
|
# ? Sep 8, 2008 20:26 |
|
Misogynist posted:People in the enterprise world tend to have better things to do than bitch about their hardware on the Internet. Oh I don't know, alt.sysadmin.recovery is probably a good resource if you remember to ROT13 the brand name before you search.
|
# ? Sep 8, 2008 22:30 |
|
So, nobody has any experience with LeftHand? Googling didn't turn up much. I guess I'll ask them for some references and put the product in the lab for some stress testing.
|
# ? Sep 12, 2008 01:40 |
|
How can I definitively determine the block size of an EMC Clariion CX3 RAID Group? I've got a script that is pulling a "navicli getall" report and parsing it to produce a web-based report about free space. Unfortunately, all I get from the report for each RAID Group is this: code:
code:
|
# ? Sep 16, 2008 22:33 |
|
Alowishus posted:
I'm not an EMC guy (I've had very little experience with it, actually), but I'd definitely say you are at a 2k block size. Lots of larger SANs don't give you the option of choosing your block size. HP EVA, NetApp, and I think HDS are all 4k blocks and you can't change it.
|
# ? Sep 17, 2008 02:34 |
|
I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours; it just needs to work. Our requirements:

- Hardware RAID, dual parity preferred (RAID6), BBU
- Cheap!
- Runs on, or is attachable to, Debian Etch with a 2.6 kernel
- Power-dense per GB
- Cheap!

To give you an idea, we currently have a 24-bay server with a 3ware 9690SA hooked up to a SAS/SATA backplane, and have stuffed it full of 1TB disks. People are using SFTP/FTP and soon rsync to upload data, but once the disks are full they are likely to simply stay that way with minimal I/O. We are looking for the cheapest/densest cost per gig. If that means buying an external array to attach to this system via SAS/FC, OK! If it's just buying more of these systems, then so be it. I've started poking around at Dot Hill, LSI, and just a JBOD-looking system, but I figured I would ask here as well, since LSI makes it hard to contact a sales rep, and Dot Hill stuff on CDW has no pricing. The ability to buy the unit without any disks is a big plus; we are frequently able to purchase disks well below retail. I need about 50-100TB usable in the coming month or two, then it will scale back. I am putting these in 60amp/110v four-post 19" racks. Edit: Oh, and I will murder anyone who suggests Coraid, and not just because it is neither power-dense nor hardware RAID.
|
# ? Sep 17, 2008 17:23 |
|
Alowishus posted:How can I definitively determine the block size of an EMC Clariion CX3 RAID Group? Unless I'm wrong, you can set the block size on a LUN-by-LUN basis depending on your requirements: 2k, 4k, 8k, 16k, 32k, etc. This is the size you are likely looking for. The overall background block size is fixed at 520 bytes: 512 bytes of data plus 8 bytes of Clariion data to ensure integrity, but those last 8 bytes are user-transparent. Have you tried looking at the LUN via Navisphere?
|
# ? Sep 17, 2008 18:47 |
|
H110Hawk posted:I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work. Get an HP SmartArray P800 card and then attach up to 8 MSA60 12xLFF enclosures. The P800 is about $1k, each shelf is about $3k, and then add your 1TB LFF SATA drives and you are good to go. If you need more, attach another P800 and more shelves, etc.
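Back-of-the-envelope on that build-out, using the prices quoted above. The per-drive cost is my placeholder, not a quoted figure:

```shell
# One P800 plus eight MSA60 shelves of 12 x 1TB LFF SATA drives each.
controller=1000
shelf=3000
shelves=8
drive=250            # placeholder price per 1TB SATA drive
drives_per_shelf=12
total=$(( controller + shelf * shelves + drive * drives_per_shelf * shelves ))
raw_tb=$(( drives_per_shelf * shelves ))
echo "\$${total} for ${raw_tb}TB raw"   # before any RAID overhead
```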
|
# ? Sep 17, 2008 20:36 |
|
This is for the 50GB backup offer, I presume?
|
# ? Sep 17, 2008 21:05 |
|
complex posted:This is for the 50GB backup offer, I presume? Hrm? rage-saq posted:Get an HP SmartArray P800 card and then attach up to 8 MSA60 12xLFF enclosures. Right now our current theory is a 3ware 9690SA card with these: http://www.siliconmechanics.com/i20206/4u-24drive-jbod-sas_sata.php So your solution is about 2x the cost. It's a Supermicro backplane; we're getting a demo unit in about 5-10 days. Any horror stories about the card? Backplane? Is there something cheaper per rack U per GB? (Or moderately close, monthly cost of the rack and all.)
|
# ? Sep 17, 2008 22:06 |
|
H110Hawk posted:I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work. You could also use Solaris 10 with Samba if OpenSolaris makes you uncomfortable, but OpenSolaris is just so much goddamn nicer, especially with the built-in CIFS server, and just as stable. If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of an rear end in a top hat. Vulture Culture fucked around with this message at 00:00 on Sep 18, 2008 |
# ? Sep 17, 2008 23:54 |
|
Misogynist posted:If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of an rear end in a top hat. I have 26 thumpers.
|
# ? Sep 18, 2008 02:10 |
|
Anyone have recommendations for reading material for SAN's?
|
# ? Sep 18, 2008 17:10 |
|
dexter posted:My only problem with the MD3000 (SAS) we use is that the controller card isn't compatible with FreeBSD. It only works with RHEL/CentOS and Windows 2003/2008. Thanks for that. Dell was trying to sell us those explicitly *for* use with FreeBSD. I wish I could stab whoever in procurement decided we had to use Dell for all of our FreeBSD boxes. H110Hawk posted:I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work. http://www.ixsystems.com/products/colossus.html Absorbs Quickly fucked around with this message at 17:31 on Sep 18, 2008 |
# ? Sep 18, 2008 17:22 |
|
1000101 posted:So this poo poo all sounds fancy and expensive; how much does it cost man!?!?! I collected some IOPS data for my environment, and we are looking at an average of 300, top end 600 IOPS, excluding when my backup is running. I am looking at an 8TB EqualLogic SATA SAN now (3,000 IOPS), and it's 50K+, or 60K+ for SAS. Mind you this is SATA, and I assume you are talking about SCSI/SAS high-end enterprise stuff. You say 30K for comparable specs? What am I getting hosed on, or what are you leaving out? I was thinking 30K when I started this trek, and I don't know what happened now that I hold this quote in my hands.
|
# ? Sep 18, 2008 18:06 |
|
Wanted to chime in this thread and say that if anyone needs any used NAS/SAN hardware, I stock this stuff in my warehouse. Mostly NetApp, low to mid-high end equipment (960s, 980s, 3020s, and I've got one 6080), plus HP and Sun, although we do get the occasional EMC system in. I stock used disk shelves and individual drives too, so if you're looking for a cheap spare or something, let me know. I've worked with a bunch of goons in the past and no one's ever said anything bad about me to my face. Just shoot me an IM/PM if you need help with anything; I don't care if it's just to see if your current vendor is screwing you over or if you just want to talk about feelings. Because I do. fake edit: Sorry if this sounds spammy. There's not really any way to say "HEY I CAN HELP YOU! DO BUSINESS WITH ME" without doing so.
|
# ? Sep 18, 2008 18:11 |
|
Oh man, this thread is a welcome sight. I'm just about to get a Clariion AX4 configured w/ 11 400GB 10k SAS drives and 2 heads to replace our crap-rear end Ubuntu NFS server that's currently providing backend storage to 4 ESX frontends. Any setup tips on carving this thing up once it ships to us? How should I connect VM -> storage? Expose a LUN for each VM and use RDM in VMware? One giant VMFS volume on a RAID-10 on the AX4? A mix of both? Nothing? OH GOD THE CHOICES ARE KILLING ME
|
# ? Sep 18, 2008 18:55 |
|
M@ posted:Wanted to chime in this thread and say that if anyone needs any used NAS/SAN hardware, I stock this stuff in my warehouse. Mostly Netapp, low to mid-high end equipment (960s, 980s, 3020s, and I've got one 6080 ) and HP, and Sun, although we do get the occasional EMC system in. I stock used disk shelves and individual drives too, so if you're looking for a cheap spare or something, let me know.
|
# ? Sep 18, 2008 20:26 |
|
Auslander posted:Oh man, this thread is a welcome sight. Haha, I'm right where you are. We're looking at the Dell MD3000i and the AX4 right now, and the jump from crappy DAS to a real (if low-end) SAN is just imposing.
|
# ? Sep 18, 2008 20:48 |