|
For those of you running Equallogic SANs, are you keeping them all in the same 'Group' and just splitting things up into different pools or running completely separate groups?
|
# ? Feb 7, 2011 20:14 |
|
You don't see any risk in getting Sun gear from Oracle? Everything Oracle touches gets expensive, and is the market even sure they will continue supporting Solaris and ZFS for any longer period of time? They are remaking OpenSolaris and telling the open-source folks to follow orders or get stuffed. Oracle isn't old tech-loving Sun. But then again I might be out of the loop?
|
# ? Feb 7, 2011 21:12 |
|
conntrack posted:You don't see any risk in getting sun gear from oracle? Oracle basically told the opensource guys to gently caress right off. I wouldn't get one of the Oracle rebranded Sun servers unless it was a great deal. Oracle has a wonderful habit of loving anyone who doesn't have 7 figures to spend.
|
# ? Feb 7, 2011 22:42 |
|
Got my first toe in the water today with enterprise storage beyond raid arrays in servers, w000! Boss: "We had a failed drive on the san, I already swapped in our cold spare, I need you to call it in. Here's how you log in to CommandView EVA, and let me give you a 2 minute tutorial on what buttons NOT to press. Have fun!"
|
# ? Feb 8, 2011 20:07 |
|
three posted:For those of you running Equallogic SANs, are you keeping them all in the same 'Group' and just splitting things up into different pools or running completely separate groups? EqualLogic kind of sells 'all in one group' as their way of easing management and allowing the group to spread load around; however, it's not required to do it. I can't say what the right choice for you is. You need to decide if you want isolated workloads, a separate fabric, or ease of management and responsive allocation.
|
# ? Feb 8, 2011 21:17 |
|
devmd01 posted:Got my first toe in the water today with enterprise storage beyond raid arrays in servers, w000! Please tell me your boss ungrouped and ejected the disk properly before swapping it.
|
# ? Feb 8, 2011 21:41 |
|
quote:let me give you a 2 minute tutorial on what buttons NOT to press. Have fun! All of them. Remember: The enterprise storage device is always right. Arguing with the storage device will result in forfeiture of your weekend, especially on a Monday.
|
# ? Feb 13, 2011 17:03 |
|
three posted:To relate this back to storage: storage is the #1 bottleneck people run into with VDI. We've been using Equallogic units, and we plan to add more units to our group to increase IOPS/Capacity as needed. (Currently our users are on the same SAN(s) as our server virtualization, and this is why I want to move them to their own group.) devmd01 posted:Got my first toe in the water today with enterprise storage beyond raid arrays in servers, w000!
|
# ? Feb 15, 2011 12:48 |
|
evil_bunnY posted:Your EVA doesn't self-report? For shame. HP hounded me until I'd let them come in and install their silly SIM Remote Reporting service, since they were shutting down the old phone-home systems. Once I let them come and set it up, it's far worse off than it ever was before. It loses connectivity with their servers for at least a few days every week, and likes to do things like ignore failed disks when they happen, then submit 5 duplicate tickets for the failure a few days after I manually log a case, get the replacement and install it. It usually decides to do this at 3AM on a Tuesday so I get phone calls from India until I wake up, log in, check the EVA, and call them back to say it's a false report. So if it's anything like mine, the EVA is *supposed* to self report, but doesn't do a reliable job of it. Meeting with HP/3PAR, NTAP, and Compellent this week to look at some different storage. Should be interesting.
|
# ? Feb 15, 2011 16:38 |
|
I have a question for you NAS/SAN goons. Where I work, we have separate Network and Systems teams (I am a network guy). The systems team currently has a bunch of different storage systems that no one person seems to know all about. They have made a proper hash of connecting them together whenever they have had free rein with the fiber channel connections. We are considering taking over the actual fiber channel network side of things and, among other things, removing the approx 60% of fiber that doesn't even connect to anything. How feasible is it to demarcate the control of a SAN in this way? Does anyone know of any organisation that has one team do the fiber channel and another the system itself? I can give specific system names/types if it helps (as far as I know).
|
# ? Feb 15, 2011 22:08 |
|
Do you mean one team does the SAN switches, and another team does the storage? That's kind of rare, though one place I went to had us put zoning in a spreadsheet and hand it off to the customer, while the contract team put together the storage for the hosts. In larger shops it is quite common to have a separate storage team that does the switches and arrays. Now with virtualization, the storage teams are tending to get merged back in with the server teams.
|
# ? Feb 15, 2011 22:58 |
|
Currently one team does both the SAN switches and the storage, it's looking like the SAN switches would come to the Network team as we know how to connect things together properly. I suppose we would be classified as a medium/large enterprise? I have no experience with SAN/NAS systems so am wondering if doing this breaks something crucial in the whole process of running a cohesive service.
|
# ? Feb 16, 2011 00:55 |
|
That would be a little out of the ordinary as usually the network team doesn't know much about SANs. Sounds like management is trying to put a cap on something they understand (loose cables) while risking something they don't understand (putting the SAN in inexperienced hands). From what you've said it doesn't sound like a good idea. It could just be that the storage people need some procedural help, to know to remove cables once they aren't being used. What exactly is going wrong with the cabling? Just how much hardware do you have?
|
# ? Feb 16, 2011 01:31 |
|
Badgerpoo posted:Currently one team does both the SAN switches and the storage, it's looking like the SAN switches would come to the Network team as we know how to connect things together properly. I suppose we would be classified as a medium/large enterprise? I have no experience with SAN/NAS systems so am wondering if doing this breaks something crucial in the whole process of running a cohesive service. I'm working for a vendor these days, but I was part of a storage team in my last job, and we did everything storage related, basically from the storage arrays and tape libraries to the fibre cards in the servers. I'm not sure how well what you describe would work; it might be OK as long as the network guys know something about storage. It's similar to, but not exactly the same as, normal networking. I think to manage a fibre network you either need a storage guy who knows some networking or a networking guy who knows some storage.
|
# ? Feb 16, 2011 09:27 |
|
In my experience the SAN switches and disk arrays are managed by a single team (and this is in anywhere from Fortune 10 companies to 30-person companies). At most places the guys who run the Ethernet also run the fiber, but do not have any type of access to the switches or arrays. The Fibre Channel protocol itself is very easy to understand if you know TCP/IP, but networking and storage are two very different beasts.
|
# ? Feb 16, 2011 09:42 |
|
The problem we have is that the systems team itself doesn't really know a great deal about SANs either. They seem to have been sold a bunch of different systems over the years and they've never quite worked properly. Fiber channel and ethernet networking are converging to an extent as there is increasing technology crossover (ethernet is emulating some of the features of fiber channel with larger layer 2 domains, kinda). The idea is that we would learn Fiber Channel properly and do it properly, but would leave the management of the SAN itself to the systems guys. Operationally, when Systems want a new link(s) they would request it from us, and we would then implement it in the best way.
|
# ? Feb 16, 2011 10:35 |
|
Badgerpoo posted:The problem we have is that the systems team itself doesn't really know a great deal about SANs either. They seem to have been sold a bunch of different systems over the years and they've never quite worked properly. Fibre channel isn't that hard really. Just remember the basic rules and stick to them: keep simple, obvious naming standards; make sure it's flexible enough that you don't need to have exceptions; one initiator per zone; and the main one, redundancy, redundancy, redundancy. If you've got an existing environment which hasn't been designed properly, you really need to redo large amounts of it to introduce some sanity, and doing so without having an impact on production can be difficult. Start by doing a full audit: list everything you've got, what servers are using what storage on what arrays, and how they're currently connected. Then you need to put a design together: work out how everything should be connected, what your zones should be, and how everything should go together. That's the easy bit. Next you need to get from where you are to where you need to be without having a huge impact on production. Get that right and your next job is going to pay a fuckload more than you're getting now.
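The "one initiator per zone, obvious naming" rule above is mechanical enough to script against your audit spreadsheet. A minimal sketch (all aliases and WWPNs below are invented for illustration; a real fabric would feed this from the switch name server and your audit):

```python
# Sketch of single-initiator zoning: one zone per host HBA, each zone
# containing that one initiator plus the array target ports it should see.
# Zone names encode the initiator alias so they stay self-documenting.
# Every alias/WWPN here is hypothetical.

def single_initiator_zones(initiators, targets):
    """initiators/targets: dicts mapping alias -> WWPN.
    Returns dict of zone name -> member WWPN list (initiator first)."""
    zones = {}
    for host_alias, host_wwpn in initiators.items():
        zone_name = f"z_{host_alias}"
        zones[zone_name] = [host_wwpn] + list(targets.values())
    return zones

hosts = {"esx01_hba0": "10:00:00:00:c9:aa:bb:01",
         "esx02_hba0": "10:00:00:00:c9:aa:bb:02"}
arrays = {"eva_ctrl_a": "50:00:1f:e1:00:00:00:01",
          "eva_ctrl_b": "50:00:1f:e1:00:00:00:02"}

for name, members in sorted(single_initiator_zones(hosts, arrays).items()):
    print(name, members)
```

The payoff is that every zone has exactly one initiator, so a misbehaving HBA can only disrupt its own zone, and the zone name tells you which host to go look at.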
|
# ? Feb 16, 2011 11:37 |
|
Badgerpoo posted:Currently one team does both the SAN switches and the storage, it's looking like the SAN switches would come to the Network team as we know how to connect things together properly. I suppose we would be classified as a medium/large enterprise? I have no experience with SAN/NAS systems so am wondering if doing this breaks something crucial in the whole process of running a cohesive service. It's uncommon, as it normally falls to the storage guys. However, I do know one major bank that does it this way. Networks look after both ethernet and FC networks. FC is a piece of cake compared to IP anyway, and this does put them further along the lines with converged networking, FCoE, etc. A lot of the push back I see on converged networks is simply down to internal staff politics.
|
# ? Feb 16, 2011 12:04 |
|
Incidentally, this storage/network split is why I think Cisco didn't really understand the market when they decided to have the Nexus series also do FC. I've never seen anyone actually use it outside of a UCS stack. Vanilla posted:A lot of push back I see on converged networks is simply down to internal staff politics. Vulture Culture fucked around with this message at 13:11 on Feb 16, 2011 |
# ? Feb 16, 2011 13:08 |
|
If it's just the cables that are the problem, maybe assign fibre cable management to the storage team.
|
# ? Feb 16, 2011 13:24 |
|
This is the sort of thing that starts long department wars. Networking thinks they should own everything with cables and starts to make a stink about things. Results in pulled fibers (you don't have OSPF on the storage NETWORK?), poo poo switches getting bought because the SAN doesn't affect their operations, and so on. I have heard horror stories. Then again, if the storage "team" is an old crusty guy that makes pretty christmas trees with the FC cabling, a revolution might be warranted.
|
# ? Feb 16, 2011 14:06 |
|
To me it doesn't sound like the network team needs to take over the switches, it sounds like the SAN team needs to hire somebody competent.
|
# ? Feb 16, 2011 14:30 |
|
I've been asked for a car analogy from my manager for any Snapmanager product for NetApp because the person who is signing the checks can't understand the justification for SQL/Exchange. Spending lots of money to save our jobs when "OH poo poo" hits doesn't fly I guess.
|
# ? Feb 16, 2011 17:38 |
|
ghostinmyshell posted:I've been asked for a car analogy from my manager for any Snapmanager product for NetApp because the person who is signing the checks can't understand the justification for SQL/Exchange. I guess you did the "company down and nobody working" calculation and it didn't bite? I feel your pain.
|
# ? Feb 16, 2011 17:58 |
|
conntrack posted:i guess you did the "company down and nobody working" calculation and it didn't bite? i feel you pain. ROI: You still have all your email. Maybe the car analogy can be: It's like cloning your family every morning so when they die in a fiery car crash on the way to work/school you aren't left a bitter empty shell of a man.
|
# ? Feb 16, 2011 18:31 |
|
SnapManager is like keeping 255 spare engines, transmissions, and alternators in your trunk, although they're magic parts and weigh almost nothing. If one of these parts breaks, you can simply swap in one of your many spares in a moment's notice. Without SnapManager, if your engine breaks, you have to take it to a costly repair shop, wait a very long time, and you don't end up with exactly the same engine you had before.
|
# ? Feb 16, 2011 18:36 |
|
Maneki Neko posted:ROI: You still have all your email. I have printed this to hard copy.
|
# ? Feb 16, 2011 19:29 |
|
I would play a video of a car crash and then play it back in reverse frame by frame
|
# ? Feb 16, 2011 21:07 |
|
Nomex posted:To me it doesn't sound like the network team needs to take over the switches, it sounds like the SAN team needs to hire somebody competent.
|
# ? Feb 18, 2011 12:20 |
|
That would be nice, one of the SANs failed again on Monday...
|
# ? Feb 18, 2011 13:11 |
|
I've been getting Autosupport alerts all night about failing power supplies and spare disks for a FAS3170 at a credit union in California. I've never heard of this credit union; I have no clue why I'm listed as a contact for it. I hope their actual storage admin knows
|
# ? Feb 23, 2011 19:11 |
|
Mierdaan posted:I've been getting Autosupport alerts all night about failing power supplies and spare disks for a FAS3170 at a credit union in California. I've never heard of this credit union; I have no clue why I'm listed as a contact for it. Are you bob@bob.com ?
|
# ? Feb 23, 2011 19:24 |
|
three posted:Are you bob@bob.com ? Edit that out please, I don't want people knowing my email address... Seriously, no I have a pretty specific email address that you wouldn't get by hammering on a keyboard. I'm also already in Netapp's system for my own filers, so clearly some bits got lost somewhere. edit: Ed, from the credit union's mailroom just called me to let me know I have a package waiting. I told him to give it to the IT department.
|
# ? Feb 23, 2011 19:38 |
|
Mierdaan posted:edit: Ed, from the credit union's mailroom just called me to let me know I have a package waiting. I told him to give it to the IT department. You should tell him he's at the wrong address.
|
# ? Feb 23, 2011 19:59 |
|
MEAT TREAT posted:You should tell him he's at the wrong address. Don't do that because the technician following the part... wait, yes definitely do this.
|
# ? Feb 23, 2011 20:48 |
|
How do you guys do IOPS sizing when looking at a new array? Is a ballpark number of "150 IOPS per 15K RPM disk times the number of disks" enough for a rough number? Of course, that would be the low watermark, while any caching would just be gravy on top.
|
# ? Feb 23, 2011 22:32 |
|
Most major vendors have sizing tools and are usually willing to assist you with purchasing the right-sized solution for what you're doing. You can't really ballpark IOPS, as there's a ton of factors that will change the answer, including RAID level, block size, application, array features, etc.
|
# ? Feb 23, 2011 23:40 |
|
As a very rough rule of thumb, I use 120 IOPS/10k disk, 180 IOPS/15k disk, 60 IOPS/5k SATA. But yes, any major vendor will help you size it if you can collect some iostat data or give some good projections.
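To show why these per-spindle numbers are only a floor, here's a back-of-the-envelope calculator using the figures above plus the textbook RAID write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6). The penalty values are the common rule of thumb, not vendor data; cache, block size, and the actual workload will move the real answer, as already noted.

```python
# Rough front-end IOPS estimate from spindle count, disk type, RAID level,
# and write percentage. Treat the result as a floor: no cache, no tiering.

PER_DISK_IOPS = {"15k": 180, "10k": 120, "sata": 60}   # numbers from the post
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}  # back-end IOs per write

def usable_iops(disks, disk_type, raid, write_pct):
    """write_pct: fraction of front-end IOs that are writes (0.0-1.0)."""
    raw = disks * PER_DISK_IOPS[disk_type]
    read_frac = 1 - write_pct
    # Each front-end write costs WRITE_PENALTY back-end IOs; reads cost 1.
    return raw / (read_frac + write_pct * WRITE_PENALTY[raid])

# 24 x 15k disks in RAID 10 at a 30% write mix:
print(round(usable_iops(24, "15k", "raid10", 0.30)))  # -> 3323
```

The same 24 spindles in RAID 5 at the same write mix land well under 2,700, which is exactly the "RAID level changes the answer" point made above.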
|
# ? Feb 24, 2011 00:49 |
|
If you're sizing for a major app like Exchange or Oracle, the vendor will also be able to help you with your projections.
|
# ? Feb 24, 2011 03:53 |
|
This is for VMware, so I have the burden (luxury?) of just assuming all IO is random.
|
# ? Feb 24, 2011 15:09 |