|
How often do host failures really even happen in this day and age? Especially since storage is out of the equation with VMware.
|
# ? Aug 16, 2013 16:45 |
|
|
# ? May 29, 2024 22:59 |
|
three posted:"Hey guys vSMP FT is coming any day now!!" It really is. I actually thought it was already released.
|
# ? Aug 16, 2013 16:54 |
|
Didn't they show it off last year at VMworld, but it had some ridiculous network requirements? Like two 10Gb links?
|
# ? Aug 16, 2013 18:05 |
|
Does anybody make good use of vApps and also use Windows servers that are joined to a domain? It seems like each vApp should have its very own domain controller to keep the whole self-contained thing going, and it makes it much easier to clone, but it also seems like a bunch of unnecessary domain controllers. Our environment isn't that big and we don't have more than a few vApps, so either way works for us, I'm just curious what others are doing.
|
# ? Aug 16, 2013 18:31 |
|
Erwin posted:Does anybody make good use of vApps and also use Windows servers that are joined to a domain? It seems like each vApp should have its very own domain controller to keep the whole self-contained thing going, and it makes it much easier to clone, but it also seems like a bunch of unnecessary domain controllers. Depending on how you have it set up, you could easily tell the vApp to deploy on a virtual network that does have the capability to contact the domain. It doesn't have to be a segmented network.
|
# ? Aug 16, 2013 20:53 |
|
Yes, I know, but that wasn't really my question. Basically I want to clone a set of production VMs into an isolated network for testing/demo. So I can either have Prod1 vApp and Prod2 vApp on the production network, each with a DC also on the production network, and just clone the vApps, or I can have them without domain controllers and clone the vApp plus a domain controller into an isolated network. I guess it's a question of an extra step vs. unnecessary VMs.
|
# ? Aug 16, 2013 21:23 |
|
Sorry, my brain is a bit fried after 3 days of vCloud classes (which were awesome by the way, thanks KS) and work stuff. Couldn't you just give the DC two network uplinks, one to production and the other to the vApp segmented network? The problem there would be DNS: if vApp1 and vApp2 are clones and the VMs have similar naming conventions, there may be a problem. Personally, I would probably have the testing/dev vApps fully fenced/separated so there is no chance of gently caress-ups to production. It's a bit of an extra step but seems like it would be safer.
|
# ? Aug 16, 2013 21:31 |
|
Has anyone seen issues with VMware Round Robin path selection when using two separate paths to a single switch? I have a small setup with two PowerConnect 2724 switches acting as the networking for my VMware cluster, and they both connect using separate (single) interfaces to a Force10 S50 switch. On the S50 I have a disk array with a 4-port port group configured using LACP. I've tested this using Windows Server 2008 R2 MPIO when directly connected to the Force10, and I get transfer speeds approaching the interface maximum (with Windows MPIO each gig link is at 99% utilization). My VMware servers, however, have a strange issue: even though they have separate paths to the Force10 switch, when VMware Round Robin load balancing is active I get ~40% of my usual link speed per path (~80% utilization overall). When I disable one of the paths, I get almost 100% utilization. Wouldn't it stand to reason that because there are two separate paths, I should see at least some improvement in transfer speed over a single 1Gbit link? I've attached a graph that shows this. I'm tempted to say the issue is these switches, but it's just extremely weird behavior that I've never seen before. Wicaeed fucked around with this message at 03:52 on Aug 17, 2013 |
# ? Aug 17, 2013 03:46 |
|
Try setting the number of IOPS before path switching to 1. See a post of mine earlier in this thread on how to do that. I was experiencing something similar with a ReadyNAS until I set the IOPS to 1.
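For reference, on ESXi 5.x that per-device setting can be changed with esxcli. This is just a sketch of the approach, not the exact post being referenced, and the naa.xxxxxxxxxxxxxxxx device ID is a placeholder you'd replace with your own LUN's identifier:

```shell
# List devices using the Round Robin PSP to find your LUN's NAA ID
esxcli storage nmp device list | grep -B1 "Round Robin"

# Switch from the default (1000 IOPS per path before rotating) to 1 IOPS
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=iops --iops=1 --device=naa.xxxxxxxxxxxxxxxx
```

With the default of 1000, one path soaks up sustained sequential traffic while the other sits idle, which matches the "one link pegged, the other doing nothing" symptom.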
|
# ? Aug 17, 2013 04:30 |
|
FISHMANPET posted:It's not a network issue, it's a storage issue. The disk had to be attached with a software initiator in the guest OS, and the way that's required to be done locks the VM to a single host. It's a combination of the Windows storage requirements for clustered file services and the way VMware implements those requirements. You could always do this. It required a physical mode RDM so you could pass the PR (persistent reservation) bit to the LUN. We have numerous SQL clusters that have the same storage requirements. There's a slew of requirements to make it work, but it works: the SCSI adapter needs to operate in shared mode, the cluster nodes can't both be on the same VMware host, you need to find a good place to house the shared VMDK proxy files for the RDMs, and so on.
|
# ? Aug 18, 2013 05:25 |
|
Came across a much more detailed writeup of that new Dell VRTX blade chassis people were excited about a couple months back. Certainly intriguing for dumping IT-infrastructure-in-a-box at a small business or branch office. Edit: Article is older than I realized, saw it linked elsewhere and assumed it was new. Oh well, still new to me!
|
# ? Aug 19, 2013 17:44 |
|
Crossposting this to the IT certification thread since I've forgotten where most of us are, but if you're taking the Stanly CC VCP-DCV course make sure that you log into the system today or tomorrow per the email you should have received or you'll risk being dropped from the course.
|
# ? Aug 19, 2013 20:27 |
|
My Google-fu is failing me; is it possible to run VMware Workstation 9 virtual NICs/networks in promiscuous mode on a Windows 8 x64 host? I see lots of references to setting it up under a Linux host, but not Windows.
|
# ? Aug 20, 2013 20:01 |
|
I think it just works on Windows, really. Promiscuous mode only ever came up as an issue when I was running Linux as the host OS (nested ESXi was the use case that demanded this capability). You could probably check the vmware.log file for the VM and see if it uses the privilege, too. Are you seeing it not work, I suppose?
|
# ? Aug 21, 2013 03:59 |
|
It's apparently not working. For development purposes I'm trying to diagnose a problem with a heartbeat service on a Linux guest that won't see that the link is up. The difference between it and physical hardware is the promiscuous flag when running "ip link" on the guest. I was positive that I'd seen this before when working with the heartbeat protocol in the ESXi environment. However, in reviewing my notes, it seems the problem wasn't having the NIC/network in promiscuous mode but setting "Route based on IP Hash" in the VM network properties (although, if I remember right, I only needed to set it once I attempted to deploy across separate physical NICs). It looks like there's no equivalent for that setting under Workstation.
|
# ? Aug 21, 2013 16:39 |
|
What's the best way to cold clone a VM onto ESXi? We've got some machines running on Hyper-V and we don't need to V2V while they're live; we can just shut them off for the copy, as that seems safest. But VMware Converter no longer supports cold cloning, it seems. Should I just hot clone everything, or is there another tool? E: Welp, should have read more closely; it can still cold clone VMs, it just can't cold clone physical machines. Which we're not doing anyway. FISHMANPET fucked around with this message at 23:04 on Aug 21, 2013 |
# ? Aug 21, 2013 23:02 |
|
Sorry Cheesus, I'm not sure. Indeed network configuration is rather spartan in the Hosted products such as Workstation, Player, and Fusion. And perhaps there were some additional policies available before. With this said, it might be configurable in the command-line/config files? If I have some time tomorrow, I can do some digging. Unfortunately it's a bit hectic lately and there are some pretty big issues to prioritize so I can't make any guarantees. Hope someone else knows.
|
# ? Aug 22, 2013 03:44 |
|
Does anyone familiar with KVM know if it's like VMware, in that provisioning too many vCPUs actually makes performance worse for the guest? I assume so, but I have not been able to Google up a definitive answer. Just a couple of one-sentence "size your VMs appropriately" non-tips.
|
# ? Aug 22, 2013 19:12 |
|
Docjowles posted:Does anyone familiar with KVM know if it's like VMware in that provisioning too many vCPU's actually makes performance worse for the guest? I assume so but I have not been able to Google up a definitive answer. Just a couple one-sentence "size your VM's appropriately" non-tips. It's like that in just about every hypervisor, and with most over-provisioning of virtual hardware. The underlying host CPU still has to account for all the virtual hardware being emulated; even things as small as virtual floppy drives need to be computed. How much performance degradation you see depends on load and on how far you over-provision. Also, remember that even an idle vCPU still has to be scheduled to process its idle/WAIT instructions.
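One quick way to see over-provisioning from inside a KVM guest is steal time: the ticks a vCPU spent runnable while the hypervisor was running something else. A minimal sketch, parsing a hypothetical sample of the aggregate "cpu" line (on a real guest you'd read the live /proc/stat):

```shell
# Hypothetical sample line; on a real Linux guest use: grep '^cpu ' /proc/stat
line="cpu  10132153 290696 3084719 46828483 16683 0 25195 175688 0 0"

# Fields after "cpu": user nice system idle iowait irq softirq steal guest guest_nice,
# so field 9 is steal time (ticks the vCPU waited while the host ran other work)
steal=$(echo "$line" | awk '{print $9}')
total=$(echo "$line" | awk '{s=0; for(i=2;i<=NF;i++) s+=$i; print s}')
echo "steal ticks: $steal of $total total"
```

If steal is a noticeable fraction of the total and climbing, the host is over-committed and the guest is paying for it.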
|
# ? Aug 22, 2013 19:35 |
|
Fuckin' vmware support taking a week between replies. I just want my machines to replicate, my remote hosts to not refuse to connect to vcenter server, and upgrades to not cause a PSOD, is this too much to ask? The PSOD issue has been open for like a month, and I'm lucky to get two replies a week.
|
# ? Aug 22, 2013 23:53 |
|
Serfer posted:Fuckin' vmware support taking a week between replies. I just want my machines to replicate, my remote hosts to not refuse to connect to vcenter server, and upgrades to not cause a PSOD, is this too much to ask? The PSOD issue has been open for like a month, and I'm lucky to get two replies a week. What version are you running? What are the hosts? What does your management network look like? Let me know, I should be able to help.
|
# ? Aug 22, 2013 23:59 |
|
https://www.moscone.com/site/do/event/view;jsessionid=979A237740DCC069EEA26A92BF9E96E6?id=727&nav.type=0&nav.base=1402&nav.filter=1402 OH SON OF A BITCH, PEX is in SF next year. Oh well.
|
# ? Aug 23, 2013 00:07 |
|
Serfer posted:Fuckin' vmware support taking a week between replies. I just want my machines to replicate, my remote hosts to not refuse to connect to vcenter server, and upgrades to not cause a PSOD, is this too much to ask? The PSOD issue has been open for like a month, and I'm lucky to get two replies a week. Have you talked to your engineer's manager? It's worth escalating if you haven't gotten a response in this long. Generally, when you start raising a stink, VMware will address things in a slightly more timely fashion. Otherwise you're just letting your TSE get away with not doing his job. quote:https://www.moscone.com/site/do/eve...nav.filter=1402 Why is this bad?
|
# ? Aug 23, 2013 00:26 |
|
1000101 posted:Why is this bad? The flight and hotel cost more than Vegas, which was the main reason I couldn't be at VMworld. Even a Motel 6 + flight was like $1400.
|
# ? Aug 23, 2013 01:40 |
|
Need some hive mind input. Going V2V from Xen to VMware. Surprise: Xen does not clone well to VMware, so the support links say hot cloning is the answer to avoid driver issues blowing up your VMware VM. Luckily we are not moving Exchange or any large SQL databases, so hot cloning is a bit more acceptable in this environment. However, VMware still states they don't recommend P2Ving a domain controller. I swear I have cloned DCs in the past, but I could be remembering wrong. Would you avoid cloning the DCs? One of them is loaded with certificate services, which IIRC are a PITA to move.
|
# ? Aug 23, 2013 01:46 |
|
Stugazi posted:Need some hive mind input. Going V2V from Xen to Vmware. Surprise, Xen does not clone well to VMW so the support links say hot clone is the answer to avoid driver issues blowing up your VMware VM. I would avoid hot cloning a DC; I would use the cold clone CD.
|
# ? Aug 23, 2013 01:50 |
|
Dilbert As gently caress posted:I would avoid HOT cloning a DC, I would use the cold clone CD Any reason you can't just spin up a new DC? Unless you've done a ton of weird OS-level customizations, it should take like 20 minutes to spin a whole new VM from a template and replicate over the data from AD.
|
# ? Aug 23, 2013 01:52 |
|
Dilbert As gently caress posted:The flight and Hotel cost more than vegas which was the main reason I couldn't be a VMworld. Even a motel 6 + flight was like 1400. Seems like you could get your work to chip in for this considering the amount of VMware work you do?
|
# ? Aug 23, 2013 02:13 |
|
Docjowles posted:Seems like you could get your work to chip in for this considering the amount of VMware work you do? For PEX, yes, probably a good chunk of it. For VMworld, I've only been at my current job ~4 months, so it's a bit harder for them to justify a ~$4k expenditure.
|
# ? Aug 23, 2013 02:20 |
|
Dilbert As gently caress posted:For PEx yes proably a good chunk of it, VMworld, I have only been at my current ~4 months bit harder for them to justify a ~4k expenditure for VMworld. If they pay for the pass then they should be able to spring for the travel. It's a drop in the bucket, and if you work for a VMware partner it's pretty easy to get at least one comped pass. Hotels aren't too different in cost from Vegas.
|
# ? Aug 23, 2013 06:57 |
|
And SF is a drat sight better than loving Vegas for normal people. Misogynist posted:Any reason you can't just spin up a new DC? Unless you've done a ton of weird OS-level customizations, it should take like 20 minutes to spin a whole new VM from a template and replicate over the data from AD.
|
# ? Aug 23, 2013 12:02 |
|
Yeah, I probably just need to bust some balls a bit harder. But no, Vegas is much cheaper for me to get to: Luxor + flight is like ~$800, and I just looked it up for SF in Feb, flight + hotel is ~$1200.
|
# ? Aug 23, 2013 14:12 |
|
Dilbert As gently caress posted:What version are you running? What are the hosts? What does your management netwrok look like, let me know I should be able to help We kinda went through the PSOD stuff back here. It's 5.1u1, and the hosts are Dell R610s, if that makes a difference. Here's the networking setup: the management server is in 192.168.100.x. Routing between the two sites works fine; the ESXi host can ping any other machine in 100 except the vCenter server. Rebooting ESXi fixes the issue and allows it to reconnect to vCenter, but I can't reboot all of them without scheduling downtime.
|
# ? Aug 23, 2013 17:01 |
|
Do you really use fault tolerance? Or is that network set up just in case? I'm just curious because I've never met anyone who actually uses it.
|
# ? Aug 23, 2013 17:18 |
|
Wait, why do you have vMotion and FT sharing the same VSS on two 1Gb links? That seems like it would cause a mess unless you've got some cool traffic shaping set up.
|
# ? Aug 23, 2013 17:33 |
|
Erwin posted:Do you really use fault tolerance? Or is that network set up just in case? I'm just curious because I've never met anyone who actually uses it. Dilbert As gently caress posted:Wait, why do you have vMotion and FT sharing the same VSS on 2 1/g links? that seems like it would cause a mess unless you got some cool traffic shaping setup FT is just set up just in case; it's not used, so it shouldn't be any issue.
|
# ? Aug 23, 2013 17:40 |
|
Serfer posted:FT is just setup in just in case, it's not used, so it shouldn't be any issue. What about vMotion? It will try to use all available bandwidth on the NICs attached. Or do you have it set up with NIC 1 active for management and NIC 2 as standby, and NIC 2 active for vMotion with NIC 1 as standby?
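For what it's worth, that active/standby split can be set per port group from the host's command line. A sketch assuming a standard vSwitch on ESXi 5.x; the port group and vmnic names here are placeholders for whatever your environment actually uses:

```shell
# Management: vmnic0 active, vmnic1 standby
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="Management Network" \
  --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# vMotion: the inverse, so each traffic type gets a dedicated NIC until one fails
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="vMotion" \
  --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

Inverting the active/standby order per port group keeps a vMotion from flooding the same physical link the management traffic rides on, while still surviving a NIC failure.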
|
# ? Aug 23, 2013 17:48 |
|
Dilbert As gently caress posted:What about vMotion? It will try to use all available bandwidth of the nics attached, or do you have it setup as Nic 1 Active for MGM with 2 as standby, and Nic2 for Management with Standby to nic 1 Nothing special set up for vMotion; if it uses a bunch of bandwidth, it's usually not a big deal for my environment. I'd worry if we vMotioned often, but it's not terribly common.
|
# ? Aug 23, 2013 17:58 |
|
I just found out that the infrastructure team of the company at which I am contracting has created "units" of hardware that they can offer in their IaaS plan. This doesn't seem so bad, but with the basic unit being 1 core and 2GB of RAM, it has caused some strange and horrible configurations to appear on the VMs people are requesting. I was just on a VM that was configured with 64GB of RAM. And 32 CPUs. When I asked about it, they said that it was a SQL server requiring 64GB of RAM, and since they're paying for the cores, why not add the CPUs as well? Another machine was configured with 20GB of RAM. And of course five cores, because they paid for 'em. Five cores? These two examples seem really wrong to me, but I can't find any supporting documentation other than "provision your CPU cores properly". Can anyone add justification for why these two configurations are okay or horribly broken, and what the impact of these configurations would be?
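If you want something concrete to show them, look at the vSphere scheduler's ready and co-stop counters on the hosts: a wide VM like that 32-vCPU box only makes progress when enough physical cores are free to schedule its vCPUs, so high %RDY/%CSTP is the smoking gun for over-sized guests. A sketch of capturing it with esxtop (the output file name is just an example):

```shell
# Interactively: run esxtop, press 'c' for the CPU view, and watch %RDY
# (vCPU was runnable but not given a physical core) and %CSTP (co-stop:
# a multi-vCPU VM's cores waiting on each other to be co-scheduled).
esxtop

# Batch mode: one sample every 5 seconds, 12 samples, saved for later analysis
esxtop -b -d 5 -n 12 > esxtop-capture.csv
```

If those numbers are low, the padded configs are merely wasteful; if they're high, the extra vCPUs are actively making the VMs slower.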
|
# ? Aug 23, 2013 21:37 |
|
|
# ? May 29, 2024 22:59 |
|
They have dumb architects and should do something like Amazon does: https://aws.amazon.com/ec2/instance-types/#instance-details
|
# ? Aug 23, 2013 21:40 |