|
RDMs are basically dumb conduits that pass block-level storage from the VM to your storage device with no real understanding of what is in it or how big it is. You can't just convert from one to the other; if you want to put it in a VMDK, you need to mount up a new empty one, stop whatever services you need to in the OS, manually copy the data over from the RDM, then swap the drive letters around.
|
# ? Jul 25, 2014 19:44 |
|
angus725 posted:We'll be deploying HyperV as our main hypervisor. Bought a Windows 8 Pro laptop from Lenovo a few days ago to take advantage of the Windows Server management software. Control Panel -> Programs and Features -> Turn Windows features on and off -> Hyper-V This will enable Win 8.1 Pro's Client Hyper-V, which is exactly where you will run your VMs. You can then export a VM straight to server Hyper-V, modulo some stuff like live migration.
|
# ? Jul 25, 2014 19:49 |
|
BangersInMyKnickers posted:RDMs are basically dumb conduits that pass block-level storage from the VM to your storage device with no real understanding of what is in it or how big it is. You can't just convert from one to the other; if you want to put it in a VMDK, you need to mount up a new empty one, stop whatever services you need to in the OS, manually copy the data over from the RDM, then swap the drive letters around. That's not true according to VMware: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005241 There are a ton of links on how to do this, including the storage migration method we tried. As well, our VM admin has done this in the past on another system. I'm not sure why the error is happening this time. e: Just found this one: http://blogs.vmware.com/vsphere/2012/02/migrating-rdms-and-a-question-for-rdm-users.html and it seems it might have to be a cold migration.
|
# ? Jul 25, 2014 19:54 |
|
vSphere 5 Due to security restrictions, we cannot have our servers auto-login at boot. Can I script logging on to a Windows server through vSphere 5? Google isn't giving me any good results beyond booting the server, not logging it in. Edit: I don't want the server to auto log on at boot; I need to be able to trigger this login via a script after reboots etc. are done Hadlock fucked around with this message at 20:29 on Jul 25, 2014 |
# ? Jul 25, 2014 20:17 |
|
Is there a reason you can't just do it at the guest level? http://support.microsoft.com/kb/324737/en-us
|
# ? Jul 25, 2014 20:26 |
|
Server has to log in to the console as a specific admin for legacy apps to run. But it can't be set to auto-login, because then a hacker would have access to the core system; VMware admins and my group have separate access to avoid this sort of thing. Otherwise the hacker could just reboot the machine and have full access. The login needs some element of control. Since VMware seems to have an auto-logon-to-domain functionality, it seems like I should be able to trigger that at some point other than immediately after power-on.
|
# ? Jul 25, 2014 21:02 |
|
Hadlock posted:Server has to log in to the console as a specific admin for legacy apps to run. Wouldn't it be kind of a moot point regardless if a hacker was able to compromise the system? Either way, have you looked at this: https://www.vmware.com/support/developer/PowerCLI/PowerCLI55/html/Invoke-VMScript.html You need VMware Tools running on the VM in question, but it should be running on most any VM anyway.
|
# ? Jul 25, 2014 21:12 |
|
CLAM DOWN posted:That's not true according to VMware: You're right, this absolutely should work and it's something I do all the time, most recently with a 1.6TB RDM with over 7 million faxes on it that couldn't be down for any length of time. It has to be in virtual mode, but you mentioned that. It should not have to be a cold migration. I haven't seen your specific error message before and you may need to contact VMware support or do some Googling, but I did have one suggestion. If you hit "advanced" on the migrate screen you can migrate only the RDM drives of the VM while leaving any VMDKs in place. I've found this helps sometimes.
|
# ? Jul 25, 2014 21:17 |
|
KS posted:You're right, this absolutely should work and it's something I do all the time, most recently with a 1.6TB RDM with over 7 million faxes on it that couldn't be down for any length of time. It has to be in virtual mode, but you mentioned that. It should not have to be a cold migration. Thanks for confirming I'm not crazy and that it's possible! We were indeed doing that "advanced" method, leaving all VMDKs in place and only migrating the virtual RDM to thick provisioning. We're going to test a cold migration the same way once we have a chance to reboot the machine, but you might be right that it may require a support call. Thanks!
|
# ? Jul 25, 2014 21:27 |
|
Dilbert As gently caress posted:Wouldn't it be kind of a moot point regardless if a hacker was able to compromise the system? Yes, it would be a moot point, but I am unable to convince the auditors of that. That looks like a really great VMware pass-through for PowerShell, but unfortunately I don't believe it's possible to log in a machine at the lock screen using console scripting... unless I am missing something here. Edit: the reason why this matters is that on patch weekend, someone draws the short straw and has to spend 4 hours logging all of the servers back in at 3am on a Saturday night after midnight Hadlock fucked around with this message at 21:53 on Jul 25, 2014 |
# ? Jul 25, 2014 21:42 |
|
Upgraded the production vCenter to 5.5U1c last night. Just in time for 6 to come out! That was such a simple and clean upgrade, very happy after my last "fun" time going from 4 to 5.
|
# ? Jul 25, 2014 21:54 |
|
Hadlock posted:Yes it would be a moot point but I am unable to convince the auditors of that. Off topic but what kind of legacy apps are you running that need a user logged in? I know they are out there, but gently caress.
|
# ? Jul 25, 2014 22:06 |
|
Moey posted:Off topic but what kind of legacy apps are you running that need a user logged in? I know they are out there, but gently caress. Custom business software that hasn't caught up with the idea of server farms etc. They are stuck on 2003, converting to 2012 next year and skipping 2008 completely. In the meantime we're running software designed for 1-2 servers, and leaning heavily on advances in SSD and database caching until next year.
|
# ? Jul 25, 2014 22:20 |
|
Can I only bind iSCSI to one vSwitch? I've got two vSwitches: vSwitch0: VM Network, 192.168.100.0/24 & VMkernel management, 192.168.100.x/32. vSwitch1: VMkernel management, 10.0.0.x/32 This is less than ideal for networking reasons, but let's just assume that I'm happy with the setup for now. vSwitch1's VMkernel has iSCSI port binding, and on the 10.0.0.0 network I've got an EMC SAN feeding my host LUNs. vSwitch0 has a Synology box on which I'd also like to create a LUN to feed my host as a tier 2 datastore, onto which I can have my fileserver VM archive old data that will be accessed infrequently. Again, vSwitch1 has the iSCSI port binding and I can't enable the same on vSwitch0 -- the option is grayed out. I obviously don't want to remove vSwitch1's iSCSI binding because then -- hello -- my datastores disappear and I'm a lot less happy about the situation than even now. Moreover, its "iSCSI port binding" checkbox is also grayed out, though still checked; I'm assuming this is to keep me from disabling its port binding while it's in use. So my google-fu is feeling kind of weak, but according to a few things I've read this should absolutely be possible .. not sure what I could be doing wrong.
|
# ? Jul 27, 2014 23:02 |
|
How many physical NICs are backing vSwitch0 and that VMkernel? I believe you can only have 1 physical NIC per iSCSI VMkernel port. So you might have to make a new VMkernel port, assign it only 1 NIC (overriding the vSwitch config) and then turn it on.
|
# ? Jul 27, 2014 23:35 |
|
Just signed out of my VPN so I'll check tomorrow cause I'm so lazy, but thanks for the info. I'll see if that's the case.
|
# ? Jul 27, 2014 23:38 |
|
Martytoof posted:Just signed out of my VPN so I'll check tomorrow cause I'm so lazy, but thanks for the info. I'll see if that's the case. Can you do a screenshot of how stuff is laid out? Or a drawing or something? You can do two on the same vSwitch, but you have to specify the failover order on the NICs so that you bind the kernels correctly: one NIC will be active and the other unused, and vice versa.
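To make that concrete, a hedged CLI sketch with made-up names (portgroups iSCSI-1/iSCSI-2, uplinks vmnic0/vmnic1, VMkernel ports vmk1/vmk2, software iSCSI adapter vmhba33); RUN defaults to echo, so it only prints the esxcli commands rather than running them:

```shell
#!/bin/sh
# Hedged sketch, not verified against your setup: pin each iSCSI portgroup to a
# single active uplink, then bind both VMkernel ports to the software iSCSI HBA.
# All adapter/port/uplink names here are examples.
RUN=${RUN:-echo}   # RUN=echo = dry run; set RUN= on a real ESXi 5.x host

bind_iscsi_ports() {
    # override failover order so each iSCSI portgroup has exactly one active NIC
    $RUN esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name=iSCSI-1 --active-uplinks=vmnic0
    $RUN esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name=iSCSI-2 --active-uplinks=vmnic1
    # bind both VMkernel ports to the software iSCSI adapter
    $RUN esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    $RUN esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
}

bind_iscsi_ports
```

The binding will refuse to apply unless each VMkernel port really does have exactly one active uplink, which is the symptom described above.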
|
# ? Jul 28, 2014 05:36 |
|
Successfully deployed ESXi 5.5 with 4 VMs running (Server 2012 R2, OEL 7, RHEL 7) inside on my test box. Just trying to get the general idea of virtualization. What's next?
|
# ? Jul 29, 2014 00:13 |
|
Fiendish Dr. Wu posted:Successfully deployed ESXi 5.5 with 4 VMs running (Server 2012 R2, OEL 7, RHEL 7) inside on my test box. Deploy some nested ESXi and a VCSA.
|
# ? Jul 29, 2014 00:18 |
|
Download the blueprint and look over topics that seem interesting to you, note them, find out their requirements and try to set them up. The blueprint is located here: http://mylearn.vmware.com/mgrReg/plan.cfm?plan=45082&ui=www_cert Dilbert As FUCK fucked around with this message at 00:48 on Jul 29, 2014 |
# ? Jul 29, 2014 00:45 |
|
Okay virtualization guys, the Linux guys chased me away so you're my last hope. I am trying to figure out the best practice for building Linux templates for deployment with VMware. I believe I have a process down, but the part still tripping me up is the partitioning and swap. On a physical server I would use LVM: 200MB /boot, memory-sized swap, and then max out the rest for / with ext4. What is the best filesystem and partition structure for a Linux template in VMware? I think that going with one template with 200MB /boot, / maxed out, on ext4 is the way to go, and then adding a swap file after deployment.
|
# ? Jul 29, 2014 03:54 |
|
Make everything in LVM so you can easily grow the partitions later.
|
# ? Jul 29, 2014 04:03 |
|
I'd vouch for sticking with an LVM layout as well, and ext4 when possible. For swap, you can add another VMDK, dedicate it as swap, and then place it where you need depending on your storage setup. There isn't too much in the way of performance tuning for Linux like there is for Windows. What are the bulk of the servers spun up off the template going to be used for? Aside from that: turn off X/Gnome/KDE/whatever if you don't need it, use paravirtual whenever possible, and turn off any hard disk indexing services. Other than that I can't say I do too much else with it, since most modern distros actually have customizations built in and will automatically work to accommodate virtualization. IIRC I installed Fedora 20 in Workstation and when I did a yum update -y it actually installed open-vm-tools and such.
|
# ? Jul 29, 2014 04:22 |
|
ghostinmyshell posted:Okay virtualization guys the linux guys chased me away so you're my last hope. I dunno what Linux thread you asked in, but Linux is actually very good about this. What distro? E: to elaborate, if it has grub2, just put /boot on LVM (or use one large / partition). Don't use a swap file. You need swap, but loopback is not a great choice. Partition layout and filesystem don't matter. And the kernel mostly has support for virtio. (open)VMware tools are widely available. These are irrelevant concerns. They don't matter. If performance is really a concern, try to keep to the same library versions so same-page sharing works. Stagger cron jobs (you don't need to disable m/slocate, but make sure they don't all kick off at the same time on all systems; same for yum-cron or apt's equivalent). Use paravirt or host-accelerated devices where possible, even if it means you need VMware Tools by default. Instead of a template, I'd preseed or kickstart. If you want a template, resize / on first boot with cloud-init. If it's a very old kernel, watch for clock drift and use elevator=deadline. evol262 fucked around with this message at 05:12 on Jul 29, 2014 |
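For the last tweak, a hedged dry-run sketch using grubby (the RHEL/CentOS tool for editing kernel boot args); RUN defaults to echo, so it only prints what it would do:

```shell
#!/bin/sh
# Hedged sketch: add elevator=deadline to every installed kernel's boot args,
# per the old-guest-kernel advice above. Uses grubby as shipped on RHEL/CentOS;
# RUN defaults to echo, so nothing is modified unless you clear it (run as root).
RUN=${RUN:-echo}

set_deadline() {
    $RUN grubby --update-kernel=ALL --args="elevator=deadline"
}

set_deadline
```

On distros without grubby you'd edit the kernel line in the bootloader config by hand instead.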
# ? Jul 29, 2014 04:31 |
|
Thanks guys, I started googling some of the stuff mentioned and it looks like it will help me out. Sorry for not being specific, but I need to make Ubuntu and CentOS server images. My predecessor, who didn't do any of this poo poo, said never use LVM for virtual machines, but I'll take your word for it. I would much prefer to kickstart/preseed these things from scratch, but we don't have DHCP in this environment and the one or two guides I tried following with a static IP method didn't work out too well.
|
# ? Jul 29, 2014 05:31 |
|
ghostinmyshell posted:Thanks guys, I started googling some of the stuff mentioned and it looks like it will help me out. Sorry for not being specific, but I need to make Ubuntu and CentOS server images. My predecessor, who didn't do any of this poo poo, said never use LVM for virtual machines, but I'll take your word for it. If you don't use LVM you need to do fun stuff like nuking and re-creating the partition table live, instead of just pvcreate / vgextend / lvresize / resize2fs. Nuking and recreating works, most of the time, if you're just growing the VMDK and the partition you want to expand is the last one on it, but I wouldn't rely on it. DHCP + kickstart/preseed is easy with something like Cobbler or Foreman.

evol262 posted:Stagger cronjobs (you don't need to disable m/slocate, but make sure they don't all kick off at the same time on all systems; same for yum-cron or apt's equivalent). Use paravirt or host-accelerated devices where possible, even if it means you need VMware tools by default. Wisdom. We had huge IO issues until we started splaying our search engine reindex jobs so they didn't all kick off exactly every 5 minutes.
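A dry-run sketch of that grow path (the device, VG, and LV names are examples, and RUN defaults to echo so it only prints the commands; clear it and run as root on a real guest):

```shell
#!/bin/sh
# Hedged sketch of the LVM grow path named above: add a new virtual disk as a
# PV, extend the VG, grow the LV into the free space, then grow ext4 online.
RUN=${RUN:-echo}   # RUN=echo = dry run

grow_lv() {
    pv=$1; vg=$2; lv=$3
    $RUN pvcreate "$pv"               # initialize the new VMDK as a physical volume
    $RUN vgextend "$vg" "$pv"         # add it to the volume group
    $RUN lvresize -l +100%FREE "$lv"  # grow the logical volume into all free space
    $RUN resize2fs "$lv"              # grow ext4 online to fill the LV
}

grow_lv /dev/sdc vg_root /dev/vg_root/lv_root
```

No partition-table surgery and no downtime, which is the whole argument for LVM in templates.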
|
# ? Jul 29, 2014 06:37 |
|
luminalflux posted:Wisdom. We had huge IO issues until we started splaying our search engine reindex jobs so they didn't all kick off exactly every 5 minutes. For some reason this reminded me of how locusts stay in the ground for a prime number of years to avoid syncing up with a specific predator, and also to avoid competing with another group of alternating locusts... I would imagine setting your jobs to kick off after a prime number of minutes or seconds would be beneficial.
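That splaying can be sketched with a tiny helper; a hedged example, where the helper name (splay_seconds) and the 120-second window are made up, deriving a stable per-host offset from the hostname:

```shell
#!/bin/sh
# splay_seconds: map a hostname to a stable offset in [0, max) seconds, so
# identical cron entries on many VMs each wait a different, but consistent,
# amount of time before a shared job kicks off.
splay_seconds() {
    host=$1
    max=$2
    # cksum is POSIX; its first field is a CRC of the input string
    crc=$(printf '%s' "$host" | cksum | cut -d' ' -f1)
    echo $(( crc % max ))
}

# e.g. a wrapper script that a */5 crontab entry calls:
#   sleep "$(splay_seconds "$(uname -n)" 120)" && /usr/local/bin/reindex
splay_seconds "$(uname -n)" 120
```

A hash beats a random sleep here because the offset is repeatable per host, so the schedule stays predictable while the fleet stays spread out.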
|
# ? Jul 29, 2014 08:50 |
|
ghostinmyshell posted:Thanks guys, I started googling some of the stuff mentioned and it looks like it will help me out. Sorry for not being specific, but I need to make Ubuntu and CentOS server images. My predecessor, who didn't do any of this poo poo, said never use LVM for virtual machines, but I'll take your word for it. PXE through static doesn't work unless you have DHCP and can set a reservation. Which is probably what your environment should be doing anyway. CentOS, at least, will not need DHCP. Add a kickstart to the ISO and make booting with it the default option. Yes, this works. Never tried with Ubuntu, but I'd be surprised if you couldn't put a preseed on it.
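A minimal sketch of what that kickstart's network line can look like for the static, no-DHCP case -- every address and name below is a placeholder:

```text
# ks.cfg fragment: static addressing, no DHCP required (all values are examples)
network --device=eth0 --bootproto=static --ip=192.0.2.50 --netmask=255.255.255.0 --gateway=192.0.2.1 --nameserver=192.0.2.10 --hostname=template01

# point the installer at the file from the ISO's boot line:
#   ks=cdrom:/ks.cfg
```

With the file baked into the ISO and the boot parameter set as the default entry, the install never touches the network until Anaconda configures it statically.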
|
# ? Jul 29, 2014 14:56 |
|
We are having an issue on our VMware 4.6 thin clients where users trying to lock their Win 7 VDIs use ctrl+alt+del and end up locking the thin client OS (Win 7 Embedded) rather than the VDI OS. Is there a workaround for this that does not involve disabling ctrl+alt+del on the thin client? We would still need to log off the generic user on the thin client and switch to admin on it.
|
# ? Jul 29, 2014 17:48 |
|
Disable ctrl+alt+delete through PCoIP GPO on the virtual desktop, and force the users to use ctrl+alt+insert within the virtual desktop. Normal ctrl+alt+del will then only work for the thin client. Or disable ctrl+alt+delete on the thin client through GPO. You can still logoff thin clients with shift+logoff without C+A+D.
|
# ? Jul 29, 2014 19:15 |
|
Just speaking from a Windows + GPO standpoint, I don't think that's possible? The whole point of Ctrl + Alt + Delete is that it's a secure attention sequence intercepted by the kernel, so it should supersede any VDI client (or fake login screen) on the host machine. Windows + L is a faster way to lock anyway.
|
# ? Jul 29, 2014 19:48 |
|
Roargasm posted:Just speaking from a Windows + GPO standpoint, I don't think that's possible? The whole point of sending Crtl + Alt + Delete is because it's a kernel interrupt, so it should supersede any VDI client (or fake login screen) on the host machine. Windows + L is a faster way to lock anyway. You're right. So I'd do the former: Forcing ctrl alt insert via PCoIP settings plus user education.
|
# ? Jul 29, 2014 20:38 |
|
Does vCenter Server care what types of licensed systems it manages? For example, if I have a vCenter Essentials Plus licensed vCenter server, can I manage an ESXi Enterprise licensed host from that vCenter, assuming it is not using any of the Enterprise licensed features?
|
# ? Jul 30, 2014 00:41 |
|
Wicaeed posted:Does vCenter Server care what types of licensed systems it manages? Only vCenter Essentials can manage Essentials and Essentials Plus hosts, due to how the licensing is structured, while vCenter Standard can manage Standard through Enterprise Plus.
|
# ? Jul 30, 2014 01:25 |
|
Dilbert As gently caress posted:Only vCenter Essentials can manage Essentials and Essentials plus due to how the licensing is structured, while vCenter standard can manage Standard-Enterprise+. So if I have a product key for VMware vSphere 5 Enterprise Plus (unlimited cores per CPU), what determines how many hosts I can attach to vCenter Standard?
|
# ? Jul 30, 2014 01:33 |
|
Wicaeed posted:So if I have a product key for VMware vSphere 5 Enterprise Plus (unlimited cores per CPU), what determines how many hosts I can attach to vCenter Standard? Nothing, really, besides crazy-high maximums and the performance of your vCenter server (it has to keep up):
Max 1,000 hosts connected to a single vCenter
Max 10,000 powered-on VMs connected to a single vCenter
Max 500 hosts per datacenter object in vCenter
|
# ? Jul 30, 2014 01:41 |
|
Does anyone mind the new OP going live the day NDA is released?
|
# ? Jul 31, 2014 04:20 |
|
Dilbert As gently caress posted:Does anyone mind the new OP going live the day NDA is released? Other than my "Virtualization Megathread shouldn't be a VMware marketing brochure" refrain (please at least touch on other hypervisors and paradigms [OpenStack, Docker, Citrix VDI], even though we all know it'll be mostly VMware talk), no.
|
# ? Jul 31, 2014 06:20 |
|
evol262 posted:Other than my "Virtualization Megathread shouldn't be a VMware marketing brochure" refrain (please at least touch on other hypervisors and paradigms [OpenStack, Docker, Citrix VDI], even though we all know it'll be mostly VMware talk), no. Look, I've learned a lot since the last OP was posted. I am including OpenStack, Citrix, Hyper-V, and welp, I dunno enough about Docker and poo poo to post about it. Feel free to post what you want and I will include it. My main concern is that people get useful information out of it.
|
# ? Jul 31, 2014 06:39 |
|
I'm curious to see Hyper-V expanded this time around
|
# ? Jul 31, 2014 06:59 |