|
Misogynist posted:So, in the last two weeks my old IBM DS4800/DS5100 storage arrays have: I've worked with DS-series devices a fair bit in the past and have seen the bolded issues before on systems running older firmware; however, I've never seen LUN corruption. Were you running the latest recommended firmware levels, or have you said "gently caress it" and migrated off them (a valid option IMO)? I hope your ServicePacs were current
|
# ? Oct 8, 2013 10:34 |
|
Misogynist posted:So loving glad to be done with this poo poo.
|
# ? Oct 8, 2013 11:00 |
|
Amandyke posted:The power supply for the peer SP should keep both SP's running as power is shared over the backplane. Unless I'm missing something drastic in the design of the ax 4 as compared to any other clariion. Just an update: you were right, after switching off the SPS it just kept chugging along using power from the other PSU. Awesome!
|
# ? Oct 8, 2013 11:13 |
|
evil_bunnY posted:How are you 7k's? cheese-cube posted:I've worked with DS-series devices a fair bit in the past and have seen the bolded issues before on systems running older firmware however I've never seen LUN corruption. Were you running the latest recommended firmware levels or have you said "gently caress-it" and migrated off them (A valid option IMO)?
|
# ? Oct 8, 2013 14:27 |
|
Misogynist posted:No issues to report with the IBM-developed gear. Uhrm... firmware? Our SAN was set up by consultants 3+ years ago and I don't think anyone's touched the configuration since then. I'll have to check that tomorrow; depending on what version of IBM DS we have, where can I see what the recommended firmware is? The only thing I know is "not good" with it is that the controllers' clocks differ from each other and from the server with the DS software on it
|
# ? Oct 8, 2013 16:10 |
|
underlig posted:The only thing i know is "not good" with it is that the controllers clocks are different between each other and to the server with the DS-software on it
|
# ? Oct 8, 2013 16:53 |
|
Misogynist posted:Are you randomly converting between base-2 (TiB) and base-10 (TB) in the middle of your calculations? Yes; going from 300GB to 268GB involves a conversion from gigabytes to gibibytes. madsushi posted:Anytime someone says "IOPS" I just assume the worst (4K) and remember that 10,000 IOPS @ 4K is only 40 MB/s or 320 Mbps, which is pretty awful bandwidth. Psh, I prefer to use 512B IO sizes and assume the IO is sequential so I can tell my customer we can do like 10 million IOPS. Without parameters I figure I'm free to assume the best.
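The arithmetic behind these two posts is easy to sketch. The IOPS and block-size figures below are the ones quoted in-thread; the base-10 to base-2 conversion alone takes a 300 GB drive to roughly 279 GiB (any further gap is typically formatting overhead).

```python
# IOPS <-> bandwidth, and GB <-> GiB, using the figures quoted above.

def iops_to_mbs(iops: int, block_bytes: int) -> float:
    """Throughput in MB/s (base-10 megabytes) for a given IOPS and block size."""
    return iops * block_bytes / 1_000_000

def gb_to_gib(gb: float) -> float:
    """Convert base-10 gigabytes to base-2 gibibytes."""
    return gb * 1e9 / 2**30

mb_s = iops_to_mbs(10_000, 4096)   # 10,000 IOPS at 4 KiB -> ~41 MB/s
mbit_s = mb_s * 8                  # -> ~328 Mbps, matching the "~320 Mbps" above
gib = gb_to_gib(300)               # 300 GB -> ~279 GiB
```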
|
# ? Oct 8, 2013 18:05 |
|
I have a dell equallogic ps4000e on which I made a 2.5TB volume. The SAN is 16 SAS disks configured as RAID 50 with two hot spares. I have presented the volume to a server 2008 guest OS running on esxi 5.1 using a hardware iSCSI adapter as a pRDM. I am copying files from the volume to the volume just to test replication and noticed the throughput in windows is 40 MB/s give or take. Is this within the range I should expect for a same disk to disk copy?
|
# ? Oct 8, 2013 19:11 |
|
demonachizer posted:I have a dell equallogic ps4000e on which I made a 2.5TB volume. The san is 16 disks SAS configured as RAID 50 with two hot spares. I have presented the volume to a server 2008 guest OS running on esxi 5.1 using a hardware iSCSI adapter as a pRDM. Copying files is a really bad way to test storage performance, especially when you're copying to and from the same place. Most copy commands are single-threaded, so they have low concurrency and will drive limited throughput because the IO is serial and blocking. Copy throughput is also dependent on things like the size and number of files, as well as how the files are laid out across the volume. Copy operations can also make use of the filesystem cache, so writes may not actually be flushed to disk at the rate it appears. Lots of problems with doing things this way. I would recommend using a benchmark tool to get more useful values.
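The concurrency point above follows from Little's Law: sustained throughput is roughly outstanding IOs x IO size / latency. A minimal sketch, with illustrative (not EqualLogic-specific) numbers, shows why a blocking copy that keeps one IO in flight reports far less than the array's ceiling:

```python
# Little's Law sketch: throughput = concurrency * io_size / latency.
# The 64 KiB IO size and 2 ms latency below are illustrative assumptions.

def throughput_mb_s(outstanding_ios: int, io_bytes: int, latency_s: float) -> float:
    """Sustained throughput in MB/s for a given queue depth, IO size, latency."""
    return outstanding_ios * io_bytes / latency_s / 1_000_000

serial = throughput_mb_s(1, 64 * 1024, 0.002)     # blocking copy, ~1 IO in flight
parallel = throughput_mb_s(32, 64 * 1024, 0.002)  # benchmark tool, deep queue
```

At these numbers the serial copy tops out around 33 MB/s, in the neighbourhood of the 40 MB/s observed above, while the same array could sustain far more under a deeper queue.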
|
# ? Oct 8, 2013 19:39 |
|
NippleFloss posted:Copying files is a really bad way to test storage performance, especially when you're to and from the same place. Most copy commands are single threaded so they have low concurrency and will drive limited throughput because the IO is serial and blocking. Copy throughput is also dependent on things like size and number of files, as well as how the files are laid out across the volume. Copy operations can also make use of filesystem cache so it may not actually flush all writes to disk at the rate it appears to. Lots of problems with doing things this way. I would recommend using a benchmark tool to get more useful values. Yeah, I am going to use iometer in a bit; was just testing replication settings. Another quick question, sorry: I downsized the volume to 1.8 TB, which opens up the option of vRDM instead of pRDM. I am hoping to figure out if there are any drawbacks to using vRDM. This is just going to be a file store for a file server, nothing intensive as far as read/write.
|
# ? Oct 8, 2013 19:46 |
|
Misogynist posted:We're probably about a year behind on firmware. We don't do any kind of maintenance often because our ERP admin doesn't really know what he's doing with FC on his servers How bad is it when you say "doesn't know what he's doing with FC"? If you're talking really bad then please run SAN Health and post the sanitised results. Just for laughs.
|
# ? Oct 8, 2013 20:08 |
|
demonachizer posted:Yeah I am going to use iometer in a bit. Was just testing replication settings. Why not just connect directly from the guest OS using the software iSCSI initiator?
|
# ? Oct 8, 2013 21:51 |
|
NippleFloss posted:Why not just connect directly from the guest OS using the software iSCSI initiator? I've read in a bunch of places that that's not a good idea. Is it actually a good way to go about it? No effects on vMotion etc.?
|
# ? Oct 8, 2013 22:49 |
|
demonachizer posted:I read that that is not a good idea a bunch of places. Is that a good way to go about it? No effects on vMotion etc.? No, that's typically the best idea. You remove all of the VMware overhead and let the guest handle everything. The only time I have found mapping to ESX first useful is when you want to use a VMware snapshot on that drive (which is not often).
|
# ? Oct 8, 2013 23:00 |
|
I was going to ask this in the virtualisation thread but I have 200 posts to catch up on in there and it's on-topic now. What's the best way to connect to iSCSI storage from within a guest OS? When I've set up VMware and iSCSI the storage has always been on its own network on NICs dedicated to that task. Do I need to link the storage network physically to the same network that the VMs use, or what?
|
# ? Oct 8, 2013 23:22 |
|
Caged posted:I was going to ask this in the virtualisation thread but I have 200 posts to catch up on in there and it's on-topic now. What's the best way to connect to iSCSI storage from within a guest OS? When I've set up VMware and iSSCI the storage has always been on its own network on NICs dedicated to that task. Do I need to link the storage network physically to the same network that the VMs use or what? You could add another network interface, push it to your VSS/vDS for storage, VLAN it, and push it to your storage. But a better question would be: what are you trying to do?
|
# ? Oct 8, 2013 23:37 |
|
Caged posted:I was going to ask this in the virtualisation thread but I have 200 posts to catch up on in there and it's on-topic now. What's the best way to connect to iSCSI storage from within a guest OS? When I've set up VMware and iSSCI the storage has always been on its own network on NICs dedicated to that task. Do I need to link the storage network physically to the same network that the VMs use or what? With software iSCSI, it usually looks a little something like this: guests connect to the VM port groups and the host uses the VMkernel port in the same VLAN. Just give the guest a dedicated NIC (or two for redundancy) for iSCSI.
|
# ? Oct 8, 2013 23:54 |
|
It was purely hypothetical. Everything I've experienced personally or read about has iSCSI storage down as being on its own network with dual paths etc, I was just wondering what the 'cleanest' way of connecting iSCSI targets to guests was without loving up the whole point of having a separate storage network.
|
# ? Oct 8, 2013 23:55 |
|
KS posted:With software iscsi, it usually looks a little something like this: So in this case would you give your guest 2 iSCSI nics, one on iscsi_B and one on iscsi_A?
|
# ? Oct 9, 2013 00:00 |
|
KS posted:With software iscsi, it usually looks a little something like this: Cheers for that - it's how I had an idea it might be done but I wasn't sure if VMkernel NICs should only carry that type of traffic.
|
# ? Oct 9, 2013 00:02 |
|
YAY, just got a "dilbert can you size out what this CC environment needs?" It's a pilot for 1 of 4 sites but holy poo poo it is already working well on 7.2k disk and some cool stuff I do in the background. I wish I could use this as my VCDX stuff but no idea if it fits the requirements. Right now my budget only grants me enough for:
1) VNXe 3300 with 25 600GB@15k drives w/ 8Gb FC to storage (oh wait poo poo does the VNXe 3300 do FC? need to research), or
2) VNX 5300 with 25 300GB@10k drives + FAST Cache + 2+4GB 1Gb/e, or
3) another vendor like NetApp/HP.
I lean a bit more towards the VNX 5300 due to FAST Cache, biting the bullet on GbE, but I am looking at MAX 200 VDI desktops, plus vApps for Windows Admin courses, Security (CISSP/SEC+), UNIX, EMC, and VMware courses. Basically I am working with a FAS 2040 and PS4000, all 1TB 7.2k disks, only hosting ICM/VCAP and a handful of Linux courses, and I plan to use these as dummy storage for classes that are not scheduled at the time of use. I.e. for an ICM class on Thursday at 6-9:30, svMotion migrates the lab vApps to the 15k/10k disks for use, then migrates them back to 7.2k when class hours are over. FISHMANPET posted:So in this case would you give your guest 2 iSCSI nics, one on iscsi_B and one on iscsi_A? You could for Round Robin purposes, but that looks like a dedicated iSCSI network so you would need to have a NIC for outbound. Dilbert As FUCK fucked around with this message at 00:14 on Oct 9, 2013 |
# ? Oct 9, 2013 00:05 |
|
Caged posted:It was purely hypothetical. Everything I've experienced personally or read about has iSCSI storage down as being on its own network with dual paths etc, I was just wondering what the 'cleanest' way of connecting iSCSI targets to guests was without loving up the whole point of having a separate storage network.
|
# ? Oct 9, 2013 00:09 |
|
FISHMANPET posted:So in this case would you give your guest 2 iSCSI nics, one on iscsi_B and one on iscsi_A? I would, yeah, and run appropriate MPIO on the guest. We use this a fair amount to present SAN snapshots to dev machines, etc. Our actual config is somewhat more complex, as we run network and storage over the same 10GE pipes, using a distributed switch and LACP teams to a pair of Nexus 5Ks. You can LACP the network/NFS vlans and still use MPIO for the iscsi stuff if you do your bindings correctly.
|
# ? Oct 9, 2013 00:12 |
|
NippleFloss posted:Copying files is a really bad way to test storage performance, especially when you're to and from the same place. Most copy commands are single threaded so they have low concurrency and will drive limited throughput because the IO is serial and blocking. Copy throughput is also dependent on things like size and number of files, as well as how the files are laid out across the volume. Copy operations can also make use of filesystem cache so it may not actually flush all writes to disk at the rate it appears to. Lots of problems with doing things this way. I would recommend using a benchmark tool to get more useful values. Ha, yeah, I've had this so many times. A customer copies to a Windows server and gets x MB/s, then copies to their nice new NAS and gets even a bit less, so they're furious! The difference is that the NAS box will handle ten of those threads at that speed. 1:1 can be a little worse on NAS depending on the previous box, but it's a workhorse.
|
# ? Oct 9, 2013 15:36 |
|
Has anyone here ever added a shelf to a Dell MD3200i? The official documentation says you have to shut down the system to do it but I've seen anecdotal evidence that this just isn't the case. Wanted to know if someone has actually ever done it.
|
# ? Oct 9, 2013 19:24 |
|
Syano posted:Has anyone here ever added a shelf to a Dell MD3200i? The official documentation says you have to shut down the system to do it but I've seen anecdotal evidence that this just isn't the case. Wanted to know if someone has actually ever done it. I've got an MD3220i and over the years have expanded twice: once with an MD1220 and once with an MD1200. Both times I did not shut down and suffered no ill effects; I had the extra capacity added to my disk groups within a few minutes.
|
# ? Oct 9, 2013 19:28 |
|
Syano posted:Has anyone here ever added a shelf to a Dell MD3200i? The official documentation says you have to shut down the system to do it but I've seen anecdotal evidence that this just isn't the case. Wanted to know if someone has actually ever done it.
|
# ? Oct 9, 2013 19:32 |
|
evil_bunnY posted:I did some time ago to a 3000i and I wouldn't think of doing it online. Risk of corruption too high in your eyes or is there something else more sinister?
|
# ? Oct 9, 2013 19:48 |
|
Syano posted:Risk of corruption too high in your eyes or is there something else more sinister?
|
# ? Oct 9, 2013 20:03 |
|
theperminator posted:Yeah there's no shared backplane on the AX4, just two SP's with their own Serial/Power/Ethernet it's probably a pretty budget unit compared to what a lot of you guys are running. I just replaced an SPS yesterday on an AX-4. I can confirm that neither of the SPs powered off during the procedure. Only the power supply for the SPS that was replaced turned off, as its input power was taken away. Both SPs stayed up. Edit: Should have scrolled all the way down before I replied. Glad it worked out for you though!
|
# ? Oct 10, 2013 01:37 |
|
How often do you guys have data scheduled to replicate between SANs, normally? I am using some equallogic ps4000e SANs and have scheduled replication happening once an hour. I notice that sometimes I am eating up my replication reserve within that hour, and it seems I have two options: replicate far more frequently or increase the replication reserve. I have 6 defined volumes, so I have the replication staggered with one firing every 10 minutes, but I am wondering if I am being way too conservative and should just have them all replicate every 10 minutes. Currently I have my replication reserve set to around 50% of my volume size, which seems like a lot... I figure my replication reserve should be set to the maximum percent of change to a volume that I could expect in a time period, so shortening the interval between reps might make sense. Are there drawbacks to this? EDIT: If some of this poo poo makes no sense because of the terminology, say so and I will try to explain what I understand of the way asynchronous replication works between equallogic SANs.
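The intuition in the post above — reserve should cover the change you expect between cycles — can be sketched as arithmetic. The change rate and headroom multiplier below are illustrative assumptions, not EqualLogic defaults; measure your actual rate of change first:

```python
# Reserve must absorb the data that changes between two replication cycles.
# change_gb_per_hour and the 2x headroom are illustrative assumptions.

def reserve_pct_needed(change_gb_per_hour: float, interval_min: float,
                       volume_gb: float, headroom: float = 2.0) -> float:
    """Replication reserve as a percentage of volume size."""
    changed_gb = change_gb_per_hour * interval_min / 60
    return 100 * headroom * changed_gb / volume_gb

# A 2.5 TB volume changing 50 GB/hour:
hourly = reserve_pct_needed(50, 60, 2500)    # replicating hourly -> ~4%
ten_min = reserve_pct_needed(50, 10, 2500)   # every 10 minutes -> under 1%
```

Under these assumptions a 50% reserve is indeed very generous, and shortening the interval shrinks the reserve you need roughly in proportion.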
|
# ? Oct 10, 2013 15:09 |
|
It really depends on your rate of change and the amount of bandwidth that you have. Also, if you have any Windows 2003 servers with lots of files, make sure you turn off the last-accessed timestamp. Equallogics have pretty large block sizes, and changing that timestamp on a ton of files can cause a lot of extra overhead in your replication. Sorry I can't give you a more detailed answer, I'm short on time right now. http://www.las-solanas.com/storage_virtualization/ntfs_san_performance_best_practice.php
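The overhead mechanism above is worth putting numbers on: a metadata-only write dirties a whole replication page, so a scan that touches many files replicates far more than the bytes that actually changed. The ~15 MB page size is an assumption about EqualLogic granularity (the linked article discusses the actual figure); check your array:

```python
# Metadata churn on a large-page replicator: each dirtied page is
# re-replicated in full. 15 MB page size is an assumed figure.

PAGE_MB = 15

def churn_gb(pages_dirtied: int, page_mb: float = PAGE_MB) -> float:
    """GB replicated purely from dirtied pages, regardless of bytes changed."""
    return pages_dirtied * page_mb / 1024

# A nightly scan that dirties 5,000 metadata pages replicates ~73 GB
# of "changes" even though almost no real data moved.
overhead = churn_gb(5000)
```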
|
# ? Oct 10, 2013 23:30 |
|
Anyone using Force10 S4810Ps in production and have any love/hate comments? We're looking at moving to 10gig with redundant switches, finally. Our requirements are pretty light since we're running fine on a single 3560E right now.
|
# ? Oct 11, 2013 18:36 |
|
Mierdaan posted:Anyone using Force10 S4810Ps in production and have any love/hate comments? We're looking at moving to 10gig with redundant switches, finally. Our requirements are pretty light since we're running fine on a single 3560E right now. We've got a pair of them connecting our VMware cluster with our SAN. Any problems we've had I think stem from the fact that we're a Cisco shop and so our admin is really confused by them.
|
# ? Oct 11, 2013 18:43 |
|
FISHMANPET posted:We've got a pair of them connecting our VMware cluster with our SAN. Any problems we've had I think stem from the fact that we're a Cisco shop and so our admin is really confused by them. Anything in particular he's confused by? I've never used Force10 gear before, but I'm not particularly entrenched in IOS either.
|
# ? Oct 11, 2013 19:02 |
|
We accidentally caused a network loop somehow, and now we've put the whole thing behind a different router for ~reasons~. We've got it set up so that the only stuff on 10GbE is our VMware and SAN, and we have a 1Gb connection to the rest of our network. Our admin has been doing Cisco for... 20+ years and she's pretty ingrained in IOS. I think it's just a problem of it being different; she's not the best at picking up new skills.
|
# ? Oct 11, 2013 19:12 |
|
Mierdaan posted:Anyone using Force10 S4810Ps in production and have any love/hate comments? We're looking at moving to 10gig with redundant switches, finally. Our requirements are pretty light since we're running fine on a single 3560E right now. We have a pair of them at our primary site and another pair at our DR site. I love them, had them for about two years now with no issues whatsoever. I don't do any routing on ours, they just have a single VLAN for all of our EqualLogic arrays and Windows hosts (carries iSCSI traffic only). With the latest version of the firmware, I believe there's some goofy poo poo that goes on if you stack them. I saw the warning when I upgraded the firmware on our DR pair last week, but since they're not stacked, I didn't look into it too much. I don't do network stuff on a day to day basis, but we have plenty of Cisco and Dell switches around here as well and they all seem pretty similar to me as far as the CLI goes.
|
# ? Oct 14, 2013 16:28 |
|
Note to self: to move an eql member to a different pool, use "Modify Member Configuration" rather than "Delete Member" from within the pool... Setting up a new SAN, so luckily no data is involved, but now I have to drive back to the DC to plug in with serial again... God I'm poo poo at my job.
|
# ? Oct 15, 2013 06:35 |
|
I have a question for you storage folks that relates to multiple data centres: if you wanted to run two geographically distinct data centres, but with dark fibre between them, would you want the storage to be HA, or would you rely on the services doing their thing? We want to be able to have services active at both sites and be able to fail them over with little/no downtime. The networking for this is easy, but our current storage (Nimble) apparently doesn't do HA. The best solution I can think of currently is to have two Nimbles and just make the LUNs primary at either site, dependent on where the service currently lives. The problem with this is that the replication runs over a 1Gb interface, so we won't be able to fail services over very smoothly. What do other people do?
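The 1Gb concern above is easy to quantify: before a LUN can flip primary sites, the outstanding replication delta has to land on the peer. A rough sketch with illustrative (not Nimble-specific) numbers:

```python
# Time for a failover to drain the replication delta over the link.
# The 80% link efficiency figure is an illustrative assumption.

def sync_seconds(delta_gb: float, link_gbps: float = 1.0,
                 efficiency: float = 0.8) -> float:
    """Seconds to push delta_gb of outstanding changes across the link."""
    return delta_gb * 8 / (link_gbps * efficiency)

# 100 GB behind on a 1 Gb link: ~1000 s, i.e. roughly 17 minutes of
# waiting before the "little/no downtime" failover can complete.
wait = sync_seconds(100)
```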
|
# ? Oct 15, 2013 09:08 |
|
|
theperminator posted:Note to self: to move an eql member to a different pool, use "Modify Member configuration" rather than "Delete Member" from within the pool... Yeah this seems like something you may want to research before doing.
|
# ? Oct 15, 2013 11:52 |