|
TobyObi posted:However, what I am trying to figure out is: am I limited to using it as a NAS device, i.e. NFS only, or will the optional FC card allow me to use it as an FC target in some way?

There is no support for this whatsoever if you want to use plain Solaris 10.
|
# ? Apr 2, 2010 16:33 |
|
I'm curious to hear other people's feedback here... I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really, even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb? Why create a partition table with one giant partition of type lvm, when you can just pvcreate the root block device and skip all that? What do the extra steps buy you besides extra steps and the potential to break a LUN up into parts (something I have no intention of ever doing)?
|
# ? Apr 2, 2010 19:15 |
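A minimal sketch of the two routes being compared, with hypothetical device names; since pvcreate and friends need root and a spare disk, the script only prints the commands rather than running them:

```shell
# Print (don't run) both routes; /dev/sdb is a placeholder device.
cmds=$(cat <<'EOF'
# partitionless: LVM label straight onto the raw block device
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -n vol0 -l 100%FREE data

# the extra-step route: partition table first, then LVM on the partition
fdisk /dev/sdb        # create one partition of type 8e (Linux LVM)
pvcreate /dev/sdb1
EOF
)
echo "$cmds"
```

One practical argument for the partition-table route comes up a few posts later: an unlabeled disk is easy to mistake for a blank one.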
|
Misogynist posted:It's not straightforward or particularly well-documented whatsoever, but the COMSTAR stack in OpenSolaris will let you run it as an FC target through ZFS. The process is almost exactly the same as setting up an iSCSI target, except you're zoning it out to WWNs instead of IQNs. I haven't used it personally, and can't speak for its performance or reliability, but my iSCSI experiences using COMSTAR have been extremely positive.

I figured that would be the answer. I've already got an interesting device utilising COMSTAR and FC (and it has been rock solid), but for this, I think NFS over 10Gb ethernet is going to be easier, considering raw device access isn't a necessity, and the whole Oracle having OpenSolaris up in the air bit.
|
# ? Apr 2, 2010 23:35 |
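For anyone curious, the COMSTAR FC-target flow sketched from memory, and hedged accordingly (the pool name, WWN, and GUID below are invented; verify the subcommands against the COMSTAR docs before trusting any of it), looks a lot like the iSCSI flow:

```shell
# Print (don't run) a rough COMSTAR FC-target setup; all names are invented.
cmds=$(cat <<'EOF'
zfs create -V 200G tank/lun0                        # zvol to back the LUN
sbdadm create-lu /dev/zvol/rdsk/tank/lun0           # register it as a logical unit
stmfadm create-hg esx-hosts                         # host group: the WWN "zoning" step
stmfadm add-hg-member -g esx-hosts wwn.2100001b320a0b0c
stmfadm add-view -h esx-hosts 600144f0...           # expose the LU GUID to the group
EOF
)
echo "$cmds"
```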
|
StabbinHobo posted:I'm curious to hear other people's feedback here...
|
# ? Apr 3, 2010 01:34 |
|
TobyObi posted:I figured that would be the answer.

bmoyles posted:Speaking as someone who has nuked 1TB of production porn (Playboy) because a drive without a partition table looked just like the new drive I was going to format for a quick BACKUP of said data, it can be helpful.

StabbinHobo posted:I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb?

Real question: My role has apparently been hugely expanded regarding management of our SAN. I've got most of the basics down, but can anyone recommend any really good books to start with that don't assume I'm a non-technical manager or some kind of moron? Something that pragmatically covers LAN-free backups, best practices for remote mirroring, and that kind of stuff is a big plus for me.

Vulture Culture fucked around with this message at 21:10 on Apr 9, 2010 |
# ? Apr 9, 2010 20:56 |
|
Misogynist posted:I'm using both NFS and iSCSI extensively in my VMware test lab, and I don't really have any complaints about the way either one is implemented in OpenSolaris. I don't think there's necessarily any benefit to FC unless you're connecting up with an existing fabric.

To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.
|
# ? Apr 9, 2010 22:54 |
|
TobyObi posted:To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.
|
# ? Apr 9, 2010 23:39 |
|
StabbinHobo posted:I'm curious to hear other people's feedback here...

I've read somewhere that when making a RAID it's a good idea to make the partition a little bit smaller than the size of the disk, in case your replacement disk is a few sectors smaller than the failed disk.
|
# ? Apr 9, 2010 23:53 |
|
FISHMANPET posted:I've read somewhere that when making a RAID it's a good idea to make a partition a little bit smaller than the size of the disk, in case your replacement disk is a few sectors smaller than the failed disk.

I ran into this problem on my PowerVault 220S with 14 146GB drives. One of the drives failed (Fujitsu), so I replaced it with a new Fujitsu, and the Adaptec RAID card would not use the drive: it was 2MB smaller than the other drives. I had to move all the content to a new filer, swap all the users over, down the PV220, rebuild the array, move all the content back, and swap everyone back over. My Areca RAID cards have an option to truncate the disk capacity to the nearest specified capacity round (I used 10GB), so a 250GB drive will probably only get 240GB, but this allows for plenty of variation between actual 250GB drive capacities. I don't think this has anything to do with the partition size, though.
|
# ? Apr 10, 2010 00:42 |
|
adorai posted:I think you would be very surprised by how much utilization you would see with iscsi. None of our VMware hosts come even close to saturating a single gigabit link with iscsi traffic. Even without 10Gb ethernet, I think it's worthwhile to consider the benefits of iscsi, which pretty much comes down to port cost and management.

We're already saturating 4Gb FC links, and we're migrating to 8Gb with this upgrade that is being worked on. Sadly, single gig links aren't going to cut it. If they did, my life would be easy...
|
# ? Apr 10, 2010 00:56 |
|
TobyObi posted:We're already saturating 4Gb FC links, and we're migrating to 8Gb with this upgrade that is being worked on.
|
# ? Apr 10, 2010 01:11 |
|
TobyObi posted:We're already saturating 4Gb FC links, and we're migrating to 8Gb with this upgrade that is being worked on.
|
# ? Apr 10, 2010 01:13 |
|
TobyObi posted:To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.
|
# ? Apr 10, 2010 05:28 |
|
adorai posted:Are you saturating 4Gb links on the SAN side or on the host side? By using trunking or a few 10Gb ports for your SAN you can do it cheaply. Our SANs obviously generate a LOT more iscsi traffic than any individual host.

Misogynist posted:This might have been true in the 3.x days, but 4.0 has iSCSI MPIO that's worked very well in our testing. (We still mostly use NFS on the development side because it's easy as hell to provision new VMs. Also, we can't afford Storage VMotion.)

SAM-QFS. Archiving file system. Constant data movement up and down tiers.
|
# ? Apr 10, 2010 09:19 |
|
optikalus posted:I ran into this problem on my PowerVault 220S with 14 146GB drives. One of the drives failed (Fujitsu), so I replaced it with a new Fujitsu and the Adaptec RAID card would not use the drive. It was 2MB smaller than the other drives.

You have to be sure you read the guaranteed sector count on any disk you purchase to replace an existing one. You are correct, it's not the partition size, but the size of whatever "thing" your array sees when building itself. This could be an exported multi-disk device (think raid10), a partition/file (when doing testing of raid subsystems), or the raw block device itself. You can add a failsafe to this by lowering the used sector count in your raid controller software for each disk while building your array. Even if your array asks you "How many gigs do you want to use on this disk?" there is typically a way to see the actual block/sector counts. Where do you set it? 1% under should be totally safe, but an easy way to tell is to look at similar-size disks from all the major disk manufacturers, pick the smallest number, and reduce that by a tiny percentage. Even then, just pay attention to the spec sheet when ordering and send the disk back if it doesn't match spec. If you want an example of this, look at a NetApp sysconfig -r output and compare the logical to physical sector counts. You will see logical is far lower than physical. This helps with block remapping, and with the fact that NetApp doesn't control the manufacturing process and will send you Hitachi, Seagate, or Fujitsu disks as replacements.

H110Hawk fucked around with this message at 22:23 on Apr 10, 2010 |
# ? Apr 10, 2010 22:18 |
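The Areca-style truncation mentioned above is just integer rounding; a quick sketch (the function name is ours and the byte counts are illustrative):

```shell
# Round a raw capacity in bytes down to a multiple of a coarser "round",
# the way an array controller's truncate-capacity option would.
truncate_capacity() {
  echo $(( $1 / $2 * $2 ))
}

# A "250GB" drive that actually reports 250,059,350,016 bytes, truncated to a
# 10GB (10^10-byte) round, loses the odd tail and compares equal across vendors:
truncate_capacity 250059350016 10000000000   # prints 250000000000
```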
|
H110Hawk posted:You have to be sure you read the Guaranteed Sector Count on any disk you purchase to replace an existing one. You are correct, it's not the partition size, but the size of whatever "thing" your array sees when building itself. This could be an exported multi-disk device (think raid10), a partition/file (when doing testing of raid subsystems), or the raw block device itself.
|
# ? Apr 10, 2010 22:20 |
|
Can anyone recommend me a good book on SAN architecture and implementation that doesn't assume I'm either retarded or non-technical management? I'm apparently now in charge of an IBM DS4800 and an IBM DS5100, which is nice because I'm no longer going to be talking out of my rear end in this thread, but it sucks because I'm a tiny bit in over my head with IBM Redbooks right now.
|
# ? Apr 12, 2010 06:11 |
|
Sorry to bump again, but is anyone managing an IBM SAN using IBM Systems Director? I installed the SANtricity SMI-S provider on a host and connected it up to the SAN and can see all the relevant details if I look at the instance view in the included WBEM browser. However, when I try to connect to it using IBM Director, it can't discover it, even when given the server's IP address directly. Anyone have any ideas?
|
# ? Apr 21, 2010 22:18 |
|
Misogynist posted:Can anyone recommend me a good book on SAN architecture and implementation that doesn't assume I'm either retarded or non-technical management? I'm apparently now in charge of an IBM DS4800 and an IBM DS5100, which is nice because I'm no longer going to be talking out of my rear end in this thread, but it sucks because I'm a tiny bit in over my head with IBM Redbooks right now.

There is nothing on Amazon besides their "IBM Press" stuff? I usually just google for things, but then again I have never dealt with IBM, only NetApp, Dell (their crappy MD3000i setup), Equallogic and some EMC. I've never had to get a book for anything there; usually between vendor docs and google, it's been good enough.
|
# ? Apr 22, 2010 01:38 |
|
brent78 posted:Please explain. We are looking at picking up 6 shelves of Lefthand. I've used EqualLogic in the past and loved everything about them, except my boss is anti-Dell these days. If LeftHand sucks, please tell me before I get neck deep in it.

I've got to pipe up and say it's a pleasure dealing with Equallogic (NetApp too) support. We haven't had any really weird calls, mostly a drive failure here and there and a couple of network-based shenanigans, but they are very quick to respond. Also, the modules seem pretty solid and easy to use. For pricing, I agree with previous posters: don't even look at retail pricing for NetApp (or Equallogic). Get some competitive quotes and start talking to sales people. I've used 2020s and 2050s and they are pretty nice units for what they go for; however, lately we have been buying Equallogic for low and low-mid level instead, which turns out cheaper even with dedupe (and for VMware you've got vSphere thin provisioning now). For a bit higher-end (mid to high level SANs), we have been going NetApp. Not that Equallogic can't deliver mid-level SAN performance, but NetApp has a lot of flexibility in quite a few areas, all being said and done. Oh, and I am not sure about SAN, but dealing with HP sales is like pulling teeth. I am talking fairly high end contracts too (not just a couple hundred $K).
|
# ? Apr 22, 2010 01:45 |
|
oblomov posted:There is nothing on Amazon besides their "IBM Press" stuff? I usually just google for things, but then again I have never dealt with IBM, only NetApp, Dell (their crappy MD3000i setup), Equallogic and some EMC. I've never had to get a book for anything there, usually between vendor docs and google, it's been good enough.

Of course, I might be accepting a new job tomorrow, in which case I'd be learning the EMC side of things (particularly as it relates to Oracle). Having more experience never hurts.
|
# ? Apr 22, 2010 02:11 |
|
The IBM N series is just a rebranded NetApp if that helps you any.
|
# ? Apr 22, 2010 06:32 |
|
Misogynist posted:
Symmetrix or Clariion user?
|
# ? Apr 22, 2010 14:10 |
|
My company wants to consolidate their data (and VMs) and I've been put in charge of the project. There are so many choices out there, and the budget I was given is limited. What do you suggest for a small company that wants around 2-4 TB that can survive one drive failure? We are talking about around 10 VMs being run by two VMware servers. And how much is it going to cost me?
|
# ? Apr 22, 2010 15:05 |
|
Cyberdud posted:My company wants to consolidate their data (and VMs) and I've been put in charge of the project.

What kind of budget? This could cost 10K or 200K; it depends on the company's needs, tolerance for downtime, budget, and skill set.
|
# ? Apr 22, 2010 15:38 |
|
skipdogg posted:What kind of budget?

Let's say the less expensive the better; I don't think we could go above 15-20K. Also, skill set with SAN/NAS is nonexistent, so I'm willing to learn as much as possible.

EDIT: also would be looking for a Gigabit switch supporting jumbo frames.

Cyberdud fucked around with this message at 16:38 on Apr 22, 2010 |
# ? Apr 22, 2010 16:08 |
|
Cyberdud posted:let's say the less expensive the better, i don't think we could go above 15-20k. Also skill set with SAN/NAS is nonexistant, so i'm willing to learn as much as possible.

Jumbo frames AND flow control are what you want, although many entry-level / mid-level switches don't support both (like the ProCurve 2800 series, much to my disappointment). That said, I'm running 4 ESX hosts with ~30 VMs off round-robin iSCSI with the default 1500-byte MTU without issue.
|
# ? Apr 22, 2010 16:54 |
|
How about this: the QNAP TS-859 Pro Turbo NAS, which supports jumbo frames (http://www.qnap.com/pro_detail_feature.asp?p_id=146). It comes to around 1600 CAD and has 8 bays, so we can purchase two of them. Does Netgear make good switches? I saw a pretty affordable one that supports jumbo frames. What do you guys recommend?
|
# ? Apr 22, 2010 17:13 |
|
Cyberdud posted:Does Netgear make good switches?
|
# ? Apr 22, 2010 21:22 |
|
Cyberdud posted:What do you suggest for a small company who wants around 2-4 Tb that can survive one drive failure.
|
# ? Apr 22, 2010 23:20 |
|
Cyberdud posted:How about this : QNAP TS-859 Pro turbo NAS which supports jumbo frames (http://www.qnap.com/pro_detail_feature.asp?p_id=146)

It's decent if you want to run a small NAS for 5-10 people, but I wouldn't run VMware from it; it's not meant for that. Netgear makes decent switches for your house or your dentist's office, not for enterprise gear (it's actually decent for low-end switching). Check out Dell or HP switches if Cisco is a bit too pricey (it is indeed). Do get something that supports flow control (send and receive) as mentioned. Going with either of these should save you a bit of cash. For the SAN, depending on the load, check out the MD3000i from Dell or maybe the Equallogic 4000 series. Make sure to talk to a sales rep; also get quotes from HP/Cisco/IBM and pressure the sales guy/girl, you can get a good discount that way.
|
# ? Apr 23, 2010 01:59 |
|
It's funny how it's advertised as a VMWARE READY NAS. I don't get it.

oblomov posted:It's decent if you want to run a small NAS for 5-10 people. I wouldn't run VMware from it, this is not for this. Netgear makes decent switches for your house or your dentist's office, not for enterprise gear (it's actually decent for low end switching). Check out Dell or HP switches if Cisco is a bit too pricey (it is indeed). Do get something that supports flow control (send and receive) as mentioned. Going with either of these should save you a bit of cash.

Any explanation of why that QNAP couldn't run VMware?

Cyberdud fucked around with this message at 16:33 on Apr 23, 2010 |
# ? Apr 23, 2010 16:01 |
|
Cyberdud posted:Any explanation on why that QNAP couldn't run vmware?

Look at its specs: it only has one power supply. It might be able to function as a VMware SAN technically, but I would never ever let it near any production use.
|
# ? Apr 23, 2010 17:11 |
|
Cyberdud posted:let's say the less expensive the better, i don't think we could go above 15-20k. Also skill set with SAN/NAS is nonexistant, so i'm willing to learn as much as possible.

We did a similar project with Dell kit here; here's how I would do it for ~25k.

1 x MD3000i, dual controller, use RAID10, pick drive speed and size according to needs.
2 x PowerConnect 6224, can't let that switch be a single point of failure
1 x RPS-600
2/3 x R610's, loaded to the gills with RAM and NICs. You want at least 2 interfaces for iSCSI, 2 for VM management and then whatever else you need for production
1 x vSphere Essentials Plus bundle for 3 hosts

http://www.delltechcenter.com/page/VMware+ESX+4.0+and+PowerVault+MD3000i

That will walk you through setting the iSCSI side up. For some reason the images on it aren't loading for me right now.

Edit: In the cover-your-rear end approach to architecture, present them a feasible option like this, and if they come back and need to go cheaper, tell them what you can remove, how much it will save, and what the ramifications are. So when your single switch with no RPS fails, you can't be blamed for designing a bad solution. Protip, you will be blamed anyway.

Nukelear v.2 fucked around with this message at 18:23 on Apr 23, 2010 |
# ? Apr 23, 2010 18:16 |
|
Nukelear v.2 posted:Protip, you will be blamed anyway.

Which is why you should propose something a bit overkill, and then bitch and moan about bringing it down to whatever level it is that you actually need.
|
# ? Apr 23, 2010 18:46 |
|
Cyberdud posted:It's funny how it's advertised as a VMWARE READY NAS. I don't get it.

Do you really think that you are going to get good performance off an NFS server running embedded Linux on an Intel Atom? This might work okay for one or two hosts, but you're talking 10. The Sun storage system that adorai recommended will run miles around this, not to mention you will get ZFS, which is a far superior file system.
|
# ? Apr 23, 2010 19:07 |
|
Cyberdud posted:It's funny how it's advertised as a VMWARE READY NAS. I don't get it.

It's not enterprise hardware. It's perfectly fine for small business or consumer-grade use. You can run VMware as a consumer. There's a whole thread for consumer storage, NAS, and iSCSI.
|
# ? Apr 23, 2010 19:18 |
|
Cyberdud, I think you may want to go through the following exercise:

1) figure out how much it will cost your company per hour of downtime
2) figure out what your company's tolerance is for downtime, given the cost

You need to have a conversation with management about the managey, business stuff like this because ultimately you need to be accountable for your design decisions. Finally, you need to document this stuff. It sounds like you work for a pretty small shop and you guys may be pretty informal about decisions like this, but I can definitely tell you that this exercise is worth performing. Not only that, future employers would consider this sort of exercise positively in your favour. Also, if the whole setup blows up in your face, you can pull the report out and show it to management and tell them why you made the decisions you did.
|
# ? Apr 25, 2010 04:15 |
|
So I have a ton of perfmon stats from a certain server. What tools do you use to analyse these? I know there's the Windows Performance Monitor tool, but I've found it a bit 'hard'. Do you know of any third-party tools for analysing perfmon outputs?
|
# ? Apr 28, 2010 20:43 |
Vanilla posted:So I have a ton of perfmon stats from a certain server.

Export CSVs and you can probably feed the data into esxplot: http://labs.vmware.com/flings/esxplot

I regularly use this to parse through 1-2GB of esxtop data at a time when doing performance troubleshooting. It might work with generic Windows counters too; I'm guessing it just plots whatever is in the CSV.
|
# ? Apr 28, 2010 22:15 |
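If esxplot doesn't pan out, a perfmon CSV is simple enough to slice directly; a sketch with invented sample data (real exports have one timestamp column followed by one quoted column per counter, in the usual PDH-CSV shape):

```shell
# Average the second column of a perfmon-style CSV (first row is headers,
# first column is the timestamp). Sample data below is made up for illustration.
cat > sample.csv <<'EOF'
"(PDH-CSV 4.0)","\\HOST\Processor(_Total)\% Processor Time"
"04/28/2010 20:00:01","12.5"
"04/28/2010 20:00:16","37.5"
EOF

avg=$(awk -F'","' 'NR > 1 { gsub(/"/, "", $2); sum += $2; n++ } END { printf "%.1f", sum / n }' sample.csv)
echo "$avg"   # 25.0
```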