|
three posted:vSphere is a significantly more well-rounded and feature-rich product than its competition, in my opinion. But, you know how many people actually USE all of those features? Not many. There are still people in 2013 that don't have DRS enabled. DRS was brand new to me when I installed VMware 5.0 for the first time and upon reading up on it I was like "Oh hell yeah" and turned that poo poo on. I won't lie and say I make use of every feature set that VMware has, but I'm making a good attempt at it.
|
# ? Apr 2, 2013 17:02 |
|
|
whaam posted:So I've got 4 HP DL360 G8s with the HP ESXi 5.1 ISO installed and their 4-port onboard NICs seem to randomly drop from 1000 full to 10 full, this is with auto-negotiation turned on for them all. I thought it might be related to the network runs being too close to the power so I moved them, anyone else seen behaviour like this in ESXi 5.1? This is happening randomly across the ports, and each is going to a different switch as well so it's not that. What physical switch are you using? In my experience, autoneg tomfoolery is fixed with an update at the switch level. We have boatloads of Gen8 HPs with no issues like you're describing, although I haven't upgraded us to 5.1 yet (we're still on 5.0).
|
# ? Apr 2, 2013 17:46 |
|
Does anyone have experience with setting up a VMUG, or attending one? We used to have a group around the area, but salesmen started filtering in and the main organizer moved out of state, so it ended. My new place is more focused on engineering and IT services than on selling the latest Cisco/Dell/HP hardware, so we don't have to worry about sales pitches, and I think it would be a worthwhile experience to try and start it up. I've spoken to about 10 people I know who would love to start one back up in my area. The good thing about my new place is they are always looking to sponsor community events like this, so I am fairly sure they would say yes. However, I wasn't sure if anyone had some words of wisdom before I bring the "Hey, why don't we do it?" to some people.
|
# ? Apr 2, 2013 19:46 |
|
I've got what I suspect are disk IO problems, but I'm not sure how to confirm it well enough to go to management and try to get new storage. When I have to reboot a simple VM (a domain controller, for instance) it can take over half an hour from when the BIOS is done loading (which only takes a few seconds) to when I get a login screen on the console of the VM. From what I can tell we're fine on both memory and CPU, so I think it's disk. We've got a pretty lovely RAID that can basically only deliver about 150-200 IOPS, and I think it's way overloaded. I also see a lot of short disconnects of some of the LUNs in the VMware console. Is there a single place I can go in VMware to see total demand on the disk from a single host? (No vCenter, so per host is the least granular I can get.)
|
# ? Apr 2, 2013 20:53 |
|
FISHMANPET posted:Is there a single place I can go in VMware to see total demand on the disk from a single host? (No vCenter, so per host is the least granular I can get). Hosts and Clusters -> Select your host -> Performance Tab -> Advanced Button -> Select various options in the dropdown. "Disk", "Datastore", and "Storage Adapter" should all give some insight. My guess: there's some massive read latency talking to your VM datastore(s). These graphs should show it happening in a nice shiny labelled format for you to present to whomever pays for your storage backend.
|
# ? Apr 2, 2013 21:05 |
|
Also, when SSH'd into a host you can nab some stuff via esxtop by pressing D. Have any paths changed for that host => datastore? It isn't trying to use a downed path, is it? If you are seeing disconnects from storage, see if other VMs on that storage device have the same problem. Also, what path selection policy are you using? Do you have anything in the CD-ROM drive that is trying to be read? For a domain controller, 200 IOPS isn't all that bad, especially if AD/DNS is all that's running on the VM. Dilbert As FUCK fucked around with this message at 21:22 on Apr 2, 2013 |
# ? Apr 2, 2013 21:19 |
|
200 IOPS runs 4 domain controllers, 2 file servers (the actual files are on different LUNs, but the base OS is run off this LUN), and a host of other things, coming out to about 20 machines. E: which is to say, at least according to my intuition, that 200 is not nearly enough for all that.
|
# ? Apr 2, 2013 21:31 |
|
SSH to the host, run esxtop, and hit V to go to the VM disk page. High numbers in the LAT/rd and LAT/wr should be enough to show a problem. Also, theoretically the CMDS/s column should fairly consistently add up to 200 if you're pegged, I think? If you want to be more thorough, there's a KB article on exactly this.
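If you want numbers you can hand to management rather than watching a live screen, esxtop also has a batch mode that dumps every counter to CSV. A minimal sketch; the interval, sample count, and output path here are just examples:

```shell
# Capture every counter esxtop tracks: one sample every 5 seconds, 60 samples (~5 min)
esxtop -b -d 5 -n 60 > /tmp/esxtop-capture.csv

# Copy the CSV off the host and load it into perfmon or a spreadsheet, then
# filter for the per-VM LAT/rd and LAT/wr columns and the per-device latency counters
```

That gives you a timestamped record of the latency spikes instead of a number you glimpsed once on the console.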
|
# ? Apr 2, 2013 21:43 |
|
Really it depends; if all they are hosting is just the Windows OS boot VMDKs, then depending on the file servers' needs, it very well could be enough. esxtop should give you an idea if you are queueing disk writes/reads. Do other VMs experience the same issue when rebooted on that LUN/datastore? If so I would assume one of two things: you have a path-to-storage issue, or you do not have enough IOPS to handle requests. http://kb.vmware.com/kb/1008205 is a good article on it.
|
# ? Apr 2, 2013 21:46 |
|
Yeah, I've seen this on other VMs too. Also a lot of them are just "slow" when I try to use them on the console, but from what I can tell it's not a CPU or memory problem anywhere, which leads me back to disk. The RAID is kind of a piece of crap so I'm willing to blame the disconnects on the RAID being awful. And the topology is also awful: each host has a gigabit Ethernet cable that plugs into a dumb pocket switch, and that pocket switch has a gigabit cable that goes into the iSCSI RAID.
|
# ? Apr 2, 2013 21:53 |
|
FISHMANPET posted:The RAID is kind of a piece of crap so I'm willing to blame the disconnects on the RAID being awful. And the topology is also awful. Each host has a gigabit Ethernet cable that plugs into a dumb pocket switch, and that pocket switch has a gigabit cable that goes into the iSCSI RAID.
|
# ? Apr 2, 2013 22:01 |
|
Less Fat Luke posted:What in the gently caress. What is your back end storage? Also what is a pocket switch?
|
# ? Apr 2, 2013 22:10 |
|
One of those tiny netgear style switches you can buy from newegg for $30 I'd assume.
|
# ? Apr 2, 2013 22:13 |
|
Yup. The backend is some awful Sun StorageTek RAID, running with 3x2TB Western Digital Black drives in a RAID 5.
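For a rough sense of what that array can actually deliver, here's a back-of-envelope sketch. The per-drive figure and read/write mix are rule-of-thumb assumptions (roughly 75-100 random IOPS for a 7200RPM SATA drive, and a RAID 5 write penalty of 4 backend I/Os per front-end write), not measurements:

```python
# Back-of-envelope IOPS estimate for a small RAID 5 set of SATA spindles.
# Assumptions (rules of thumb, not measurements): ~80 random IOPS per
# 7200RPM SATA drive, RAID 5 write penalty of 4.

def raid5_effective_iops(drives, iops_per_drive=80, read_fraction=0.7):
    """Rough front-end random IOPS for a RAID 5 set at a given read/write mix."""
    backend = drives * iops_per_drive  # total raw IOPS the spindles can deliver
    write_penalty = 4                  # read old data + old parity, write new data + new parity
    # Each front-end read costs 1 backend I/O; each front-end write costs 4.
    cost_per_frontend_io = read_fraction * 1 + (1 - read_fraction) * write_penalty
    return backend / cost_per_frontend_io

total = raid5_effective_iops(drives=3)
print(round(total))          # ~126 front-end IOPS at a 70/30 read/write mix
print(round(total / 20, 1))  # ~6.3 IOPS per VM spread across 20 machines
```

Even with generous assumptions, three SATA spindles in RAID 5 land right in that 150-200 IOPS ballpark, and a write-heavy burst (like a VM booting) makes it considerably worse.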
|
# ? Apr 2, 2013 22:17 |
|
sanchez posted:One of those tiny netgear style switches you can buy from newegg for $30 I'd assume. At my previous job my old boss tried to get away with unmanaged netgear switches for the iscsi network. After replacing them with managed switches that were full wire speed, all of our random iscsi dropping issues went away.
|
# ? Apr 2, 2013 22:17 |
|
Moey posted:Also what is a pocket switch? gently caress, this horrible place is rubbing off on me, I never even realized that this wasn't real IT lingo and just poo poo we make up for the gently caress of it.
|
# ? Apr 2, 2013 22:19 |
|
FISHMANPET posted:Yup. The backend is some awful Sun StorageTek RAID, running with 3x2TB Western Digital Black drives in a RAID 5. Oh boy, SATA DRIVES TOO? The only way it could have been better is if those were the Green series. I'd love to hear whose idea it was to implement that. "BUT IT'S 4000GB STORAGE! THAT'S GOTTA BE FAST BECAUSE LARGE NUMBERS!"
|
# ? Apr 2, 2013 22:38 |
|
In our meager defence this was our first VMware deployment, and also never in the history of our organization has performance of storage actually been a bottleneck.
|
# ? Apr 2, 2013 22:50 |
|
Moey posted:What is your back end storage? Also what is a pocket switch? It wasn't a criticism, more a question of how the cost of the VMware licenses stacks up against even a single actual switch.
|
# ? Apr 2, 2013 23:01 |
|
FISHMANPET posted:In our meager defence this was our first VMware deployment and also never in the history of our organization has performance of storage actually been a bottleneck. Doing new things isn't an excuse for doing them poorly. It's not like virtualization is a tiny niche without documented best practices.
|
# ? Apr 2, 2013 23:01 |
|
We're colossally stupid, basically. Expertise is shunned in favor of outdated groupthink. But then it turns into poo poo That Pisses Me Off, so I'll take the links and try some stuff on our servers tomorrow. Fun fact: we have two servers licensed for ESX 4.x Standard, but we didn't spring for vCenter because it was $5k (on top of a $200k hardware buy) and what was the point? The reason we have only 3 disks in the RAID is that we didn't want to buy disks from Sun, so we bought empty trays and filled them with disks from Newegg. We thought it would be as easy as buying Dell trays, but nobody uses this product, so we could only find 3 trays in the country. So we bought 6 disks and we just have a pile of cold spares.
|
# ? Apr 2, 2013 23:34 |
|
Get out before you turn into one of them (it's started already). I am not joking. Get the gently caress out.
|
# ? Apr 2, 2013 23:38 |
|
three posted:Doing new things isn't an excuse for doing them poorly. It's not like virtualization is a tiny niche without documentation of best practices.
|
# ? Apr 3, 2013 00:53 |
|
three posted:Doing new things isn't an excuse for doing them poorly. It's not like virtualization is a tiny niche without documentation of best practices. You would be surprised how many "VCPs" I have met that make designs similar to Fish's setup... Single hosts with Essentials kits tacked on for "High Availability" on a single host running software RAID for a multisite deployment, DROBO NAS appliances for View deployments, Production_Supercritical_data RAID arrays put in JBOD, designs that only use DAS, people putting iSCSI storage on boxes without proper bindings, limits on resources causing host swapping when plenty of resources were available... I don't even want to start on the interviews... My favorite is still when people come to me and boast about how many gijjabytes they have in the back room, almost 10 whole TB of storage, and still slow. GOTTA GET ANOTHER 10!
|
# ? Apr 3, 2013 01:24 |
|
Corvettefisher posted:My favorite is still when people come to me and boast about how many gijjabytes they have in the back room, almost 10 whole TB of storage, and still slow. GOTTA GET ANOTHER 10! My boss has been doing exactly the same thing. Going on and on about how ReadyNAS boxes are crap (hint: they are, but not as bad as one would think) and how they can't handle any load. We had been fighting over iSCSI MPIO for a while. He didn't believe me, and in fact argued against me almost to the point of calling me stupid, when I said I could literally double the performance of a ReadyNAS by getting rid of the EtherChannel setup and breaking it into two separate 1Gb links on different subnets (he wouldn't let me use VLANs). I finally got my hands on a spare ReadyNAS 4200 loaded with 16x 7200RPM SATA drives and was given permission to test my "theory." After blowing away the configuration on the 4200, setting up a proper RAID 6 array, none of that X-RAID2 poo poo, and upgrading the firmware on it, I put my theory to the test. My boss's face when I presented the benchmark results: I believe the next words out of his mouth were "Draw up a plan to get this in prod. NOW." I've now got our entire development environment running off of one ReadyNAS with a load greater than what the previous ReadyNAS had on it, and it's not even sweating.
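For anyone wanting to try the same two-subnet MPIO setup, the ESXi-side half looks roughly like this. The adapter name, vmkernel ports, and device ID below are placeholders for whatever your host actually shows:

```shell
# Bind one vmkernel port per iSCSI subnet to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Switch the LUN from fixed/MRU to round robin so both links carry I/O
esxcli storage nmp device set -d naa.XXXX -P VMW_PSP_RR
```

Without the port binding step, the host only ever logs in over one path, which is why the EtherChannel setup never got past single-link throughput.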
|
# ? Apr 3, 2013 01:55 |
|
Corvettefisher posted:My favorite is still when people come to me, and boast about how gijjabytes they have in the back room almost 10 whole TB of storage and still slow GOTTA GET ANOTHER 10! ... but 20tb isn't an impressive amount of storage these days. I mean, it's a lot, but it's not "holy cow I've gotta tell someone about THIS, they'll never believe it!" amazing. I guess maybe if it was 20tb of SSD storage then we can talk. E: Oh never mind, I totally misread what you were getting at. Disregard.
|
# ? Apr 3, 2013 03:40 |
|
Hi, I'm the dork who tried to P2V his laptop and got stumped by the RSA soft token not loading in the VM. I gave up on that adventure; in fact I totally forgot I asked the question. The question was asked right as I was ramping up on my support of vCenter Configuration Manager, and poo poo got so busy so fast that I forgot about all the tertiary crap. I went from "I'm going to be so smooth and work from home through my P2V VPN" to "gently caress my life, how the hell do I handle all of the poo poo they're throwing at me? I'm not doing anything beyond sleeping when I get home." Now that I have a bit of a handle on the product, I'm looking to expand. I just bought 2 Dell PowerEdge 2950s for home so I can play with vCloud Director. 24GB of RAM for each host should be enough to stage a small environment for testing. I'm still learning stuff from vCenter down. I came into this job with no virtualization experience at all, and I barely have any to this day. My job is a lot of SQL and UI troubleshooting; I never go down into vSphere/ESXi. I hope this home lab will give me a good platform from which to learn. I chose vCD as a focus because the product looks useful, and AppDirector sits on top of that and I want/need to learn more about it. And I suppose Data Director too, but I'm not sure I have an urge to play with database deployment right now. Data Director confuses me almost as much as DVS networking. :P A DVS is a requirement for vCD, and despite going through an ICM class, I know gently caress all about all this fancy networking. I have one vCD cell up and running in my lab at work, but heck if I know what ANY of it does, because all I did was follow the docs like a good little boy and it installed. I understand the base concept of Cloud Director, but trying to solidify all of its concepts in my head is daunting. Hell, I understand AppDirector better than Cloud Director, and that just seems wrong on its face.
I don't have anything to add or questions to ask; but I'm going to have a poo poo-ton of asinine questions going forward from next weekend when the servers are finally here.
|
# ? Apr 3, 2013 05:40 |
|
Is it ok to admit I love vDS'es mostly because I'm lazy?
|
# ? Apr 3, 2013 08:59 |
|
evil_bunnY posted:Is it ok to admit I love vDS'es mostly because I'm lazy? Just say you love it for NIOC.
|
# ? Apr 3, 2013 13:24 |
|
evil_bunnY posted:Is it ok to admit I love vDS'es mostly because I'm lazy? Anyone that doesn't admit to that is a liar. IT as a whole is driven by laziness.
|
# ? Apr 3, 2013 14:12 |
|
I'm trying to set up Hyper-V on Windows 8 Pro and I want to do the following things:
- Run Windows 7 or at least XP along with a host of malware analysis labs
- Run Windows Update natively on the VM (unless there's a better way to do it without direct internet access)
- Run phone-home software in such a way that it can't establish a connection without my saying so
Can I somehow get away with an internal virtual switch for the VM? I only have my motherboard's onboard NIC in this box and don't want to virtualize Windows 8's access to it. Is the best solution to just walk over to the store and buy another physical NIC so I can use an external switch on it?
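One way to square that circle without a second NIC, if I'm reading the Hyper-V options right: an external switch created with -AllowManagementOS keeps the host's own connectivity through a virtual adapter on the same switch, and a separate internal switch gives the VM a leg with no internet path at all. A sketch; the switch and VM names here are made up:

```powershell
# External switch on the lone physical NIC; the management OS keeps its
# connectivity through a virtual adapter on the same switch.
New-VMSwitch -Name "LabExternal" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Internal switch: VMs and host can talk to each other, but there's no
# path to the physical network.
New-VMSwitch -Name "LabInternal" -SwitchType Internal

# Park the VM on the internal switch by default, and flip it to the
# external one only when you deliberately want it online (e.g. Windows Update).
Connect-VMNetworkAdapter -VMName "MalwareLab" -SwitchName "LabInternal"
Connect-VMNetworkAdapter -VMName "MalwareLab" -SwitchName "LabExternal"
```

Flipping the adapter between switches is effectively the "can't phone home without my saying so" control.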
|
# ? Apr 3, 2013 15:29 |
|
ragzilla posted:Anyone that doesn't admit to that is a liar. IT as a whole is driven by laziness. That and hate-driven development. I'd be lying if I said a lot of my priorities weren't set by the "what is annoying the poo poo out of me lately?" method.
|
# ? Apr 3, 2013 16:03 |
|
Docjowles posted:That and hate-driven development. I'd be lying if I said a lot of my priorities weren't set by the "what is annoying the poo poo out of me lately?" method. I wish my boss would understand those two things.
|
# ? Apr 3, 2013 18:47 |
|
If anyone is wondering what cool poo poo the UCS platform offers, there is now an emulator out for it. Video in the link. http://wahlnetwork.com/2013/04/01/cisco-ucs-platform-emulator-walkthrough-video/ Wasn't sure if I should post this here or in the Cisco thread, might just crosspost in both. Dilbert As FUCK fucked around with this message at 13:16 on Apr 4, 2013 |
# ? Apr 4, 2013 01:51 |
|
What am I missing with UCS? It seems 10-15% more expensive, and the only tangible benefits for my ESXi hosts seem to be faster OS setup and simpler cabling, both of which only matter on day 1 of setup. How are other people finding value to justify the expense? Judging purely by their customers, I think there is just something I'm missing.
|
# ? Apr 4, 2013 02:05 |
|
parid posted:What am I missing with UCS? It seems 10-15% more expensive and the only tangible benefit for my ESXi hosts seems to be faster OS setup and simpler cabling, both of which only impact day 1 of setup. How are other people finding value to justify the expense? Judging purely by their customers, I think there is just something I'm missing. Now there is an emulator to find out! In all honesty I sell them, and from a VMware perspective I would. They are good if you have to spend a budget so you get the same budget next year. Dilbert As FUCK fucked around with this message at 02:16 on Apr 4, 2013 |
# ? Apr 4, 2013 02:13 |
|
The reason UCS is popular is because VARs have high $$$ motivation to sell them, and Cisco has been literally giving them away to try to get into the server market. I don't think from a technical perspective they're THAT amazing although they do some neat stuff. A lot of the functionality they spearheaded is just being mimicked by Dell, etc. I think they do make things a bit overcomplicated, as well.
|
# ? Apr 4, 2013 02:21 |
|
parid posted:What am I missing with UCS? It seems 10-15% more expensive and the only tangible benefit for my ESXi hosts seems to be faster OS setup and simpler cabling, both of which only impact day 1 of setup. How are other people finding value to justify the expense? Judging purely by their customers, I think there is just something I'm missing. Need to upgrade firmware? Spend an hour with an HP firmware DVD moving server to server? With UCS, apply a new firmware bundle to the server and set it to apply at next reboot. Reboot hosts during a maintenance window and enjoy your bugfixed firmware. Unified fabric also gives some nice approaches to hardware sparing (assuming you use boot from SAN). The FC WWNs and Ethernet MACs are part of the blade 'personality'. So whereas before you may have had an extra VMware host in each of 2-3 clusters to provide your N+2 availability (so you can maintain N+1 when you put a host in maintenance mode) and another blade as a cold spare for Oracle RAC (or something else compute heavy), with UCS you can maintain 1 or 2 of those redundant servers and move them around as needed. This also lends itself well to quick upgrades if you have spare hardware of a higher spec in your chassis: power off the old server, reapply the personality to the higher-spec one, power on. Of slightly less interest, assuming you're using VICs, you can add a new SAN fabric as a vSAN and present it to servers without having to add physical HBAs (just present a new vHBA to the server). A server can have HBAs in both the old and new vSANs; migrate data from the old SAN to the new, remove the old vHBAs, and decommission the old fabric (useful if you have to return ex-lease stuff and don't want the 2 fabrics to touch at all). Separate NICs for iSCSI/NFS, frontend, and vMotion traffic? No problem. You can even apply QoS to them, in case you want to guarantee your vMotion traffic exactly 2.5Gbps. Want to limit a specific application? Present a new pair of vNICs to your ESX host and apply a 100Mbps policy to it.
|
# ? Apr 4, 2013 02:56 |
|
List price might be more expensive, they are reasonably competitive with normal discounts.
|
# ? Apr 4, 2013 03:00 |
|
|
ragzilla posted:With UCS apply a new firmware bundle to the server and set it to apply at next reboot. quote:with UCS you can maintain 1 or 2 of those redundant servers and move it around as needed The hardware detaching is cool, but how often does anyone need to use it? It'd be easier to just use auto deploy for ESXi hosts, and only bad people run physical workloads nowadays. (Plus who boots Windows from SAN? Although, technically you could just swap the drives.) quote:Present a new pair of vNICs to your ESX host and apply a 100mbps policy to it.
|
# ? Apr 4, 2013 03:22 |