|
What's the guest?
|
# ? Aug 19, 2016 22:45 |
|
|
|
Thermopyle posted:So, I have this weird thing happening every once in a while (at intervals from 10 minutes to a couple of days). I do lots of work in a plain old VMware VM hosted with Workstation on my Windows 10 machine. Only while that machine is running, I get system-wide "freezes". And by freeze, I mean things stop happening. The mouse works, and keyboard input is buffered and gets dumped when the machine unfreezes, and my CPU goes to 0.
|
# ? Aug 19, 2016 23:03 |
|
evol262 posted:What's the guest? Ubuntu 14.04. anthonypants posted:Virus scanner? I suppose that's possible, but I haven't been able to pin it down to that. I use whatever you call the thing that's built in to Win10 (Defender?).
|
# ? Aug 19, 2016 23:14 |
|
can you adjust the cpu share you give the guest? Restrict it to a single core? What's Ubuntu doing when that happens? Anything in /var/log/messages or whatever the Ubuntu equivalent is?
|
# ? Aug 19, 2016 23:24 |
|
Winkle-Daddy posted:can you adjust the cpu share you give the guest? Restrict it to a single core? However many cores I assign doesn't seem to make a difference. Hard to say if this happened during or immediately before/after, but here's what's going on at the right time in /var/log/syslog (Ubuntu puts messages here): code:
|
# ? Aug 20, 2016 00:05 |
|
Thermopyle posted:So, I have this weird thing happening every once in a while (intervals from 10 minutes to a couple days). I do lots of work in a plain old VMware vm hosted with Workstation on my Windows 10 machine. Only while that machine is running, I get system-wide "freezes". And by freeze, I mean things stop happening. The mouse works, and keyboard input is buffered and gets dumped when the machine unfreezes, and my CPU goes to 0. This is probably not the cause, but I've seen random freezing like that when hard disks are having read errors or CRC errors. The affected disks were usually Seagate 2TB disks and would report no bad sectors but have large numbers of read errors (after a while data began corrupting, so the controller was hiding the bad sectors or something). I'd give your disks' SMART data a look with Crystal Disk Info.
|
# ? Aug 20, 2016 01:05 |
|
Disk error/retry/fail cycle on the host box could cause that. Check the logs for block errors.
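If it helps, here's a rough sketch of that check for the Linux guest's syslog (the log path and error strings are assumptions; exact kernel wording varies by version):

```python
import re

# Messages the Linux block layer commonly logs on failing reads
# (patterns are assumptions; exact wording varies by kernel version)
ERROR_RE = re.compile(r"blk_update_request|I/O error|ata\d+.*(UNC|ICRC)")

def find_block_errors(log_path="/var/log/syslog"):
    """Return log lines that look like disk/block-layer errors."""
    with open(log_path, errors="replace") as log:
        return [line.rstrip() for line in log if ERROR_RE.search(line)]
```

If that turns anything up, SMART data for the offending disk is the next place to look.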
|
# ? Aug 21, 2016 13:44 |
|
Rexxed posted:This is probably not the cause, but I've seen random freezing like that when hard disks are having read errors or CRC errors. The affected disks were usually Seagate 2TB disks and would report no bad sectors but have large numbers of read errors (after a while data began corrupting, so the controller was hiding the bad sectors or something). I'd give your disks' SMART data a look with Crystal Disk Info. This put me on the right track...I think. It made me think about an older SSD I had in the system, so I removed it and I haven't encountered the problem in a while, so hopefully that was the issue. Turns out it probably wasn't a virtualization issue, but thread delivers anyway (hopefully)!
|
# ? Aug 24, 2016 15:40 |
|
Not sure where to put this, but since it's dealing with virtualization I figured it fit here... We are in the planning stages of setting up a 3-host VM setup for our primary business application, running Server 2012 R2 with Hyper-V as host. There will be 6 Windows VMs total spread across the 3 hosts (and several non-Windows VMs), but we want the ability to move them between hosts if needed. From what I understand reading MS licensing documentation, going the Standard route, one Standard license allows you to run the host + 2 VMs on a single-processor host. If you want additional Windows VMs you need additional Standard licenses (each of which allows 2 more Windows VMs on that host). You can move a VM to another host, but that host must be licensed for the total number of VMs that could possibly run on it. i.e., if you have 6 total Windows VMs in your environment, each host must be licensed for 6 Windows VMs. So if I am understanding this correctly, we would need 9x Standard licenses total, 3 assigned to each host. Are there any MS licensing experts in the house?
|
# ? Aug 30, 2016 19:47 |
|
stevewm posted:Not sure where to put this, but since it's dealing with virtualization I figured it fit here... MS licensing is an arcane art where asking three people gets you five answers. You might want to look at https://www.microsoft.com/en-us/Licensing/learn-more/brief-server-virtual-environments.aspx I don't see how you need to license every host for six Windows VMs. When you move two VMs from one host to another, don't the Windows licenses transfer with them, so it's fine?
|
# ? Aug 30, 2016 20:11 |
|
Riso posted:MS licensing is an arcane art where asking three people gets you five answers. From what I was able to understand reading the document MS has for licensing in virtual environments, unless each host is licensed for all possible VMs, you can only transfer a license every 90 days. (Not that they would have a way of enforcing that.)
|
# ? Aug 30, 2016 20:14 |
|
Correct me if I'm wrong, but couldn't you just install the three hosts with the free Hyper-V Server 2012 product and then apply six Windows Server Standard licenses directly to the guests? Assuming you can deal with the UI limitations of the Hyper-V Server product of course. edit: Another thought would be whether it needs to be licensed for a worst case scenario or just what you expect to have happen in the real world. Odds are you won't ever be running on one host, or if you happened to be through some unexpected dual host failure it would be a situation you'd be trying to get out of so fast an auditor who happened to be there at just the wrong time's head would spin. If you assume two of the three hosts will always be up they'd only need two licenses each, again for a total of six. wolrah fucked around with this message at 20:31 on Aug 30, 2016 |
# ? Aug 30, 2016 20:25 |
|
No point using the free version. Windows Standard gives you two virtualised licenses in addition to running the physical hyper-v host.
|
# ? Aug 30, 2016 21:12 |
|
I like that every IT thread is complaining about how retarded MS virtual licensing is at the same time today
|
# ? Aug 30, 2016 21:37 |
|
This is an okay breakdown: http://www.altaro.com/hyper-v/virtual-machine-licensing-hyper-v/ Key point is this: quote:The rules don’t change for a cluster. Each host needs to have sufficient licenses to run the maximum number of virtual machines it can be realistically expected to ever run. So, if you have six guests in a two-node cluster, then you’re going to need three Standard licenses for each host, winding up at a total of six licenses. If you’re using Datacenter edition to cover unlimited guests for one host, then you need a separate Datacenter license for the other(s) as well.
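For what it's worth, the worst-case arithmetic from that quote can be sketched as a tiny calculation (a hypothetical helper; the 2-VMs-per-Standard-license ratio comes from the 2012 R2 terms discussed above):

```python
import math

def standard_licenses(total_windows_vms, hosts, vms_per_license=2):
    """Worst-case Standard licensing: every host must be licensed for
    every Windows VM that could land on it after a failover."""
    per_host = math.ceil(total_windows_vms / vms_per_license)
    return hosts * per_host

print(standard_licenses(6, 2))  # -> 6 (six guests, two-node cluster: 3 per host)
print(standard_licenses(6, 3))  # -> 9 (three hosts: 3 per host)
```

That matches the article's two-node example and scales the same way for a third host.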
|
# ? Aug 30, 2016 22:26 |
|
Docjowles posted:I like that every IT thread is complaining about how retarded MS virtual licensing is at the same time today In other news, I didn't actually physically check when my boss told me power-loss emergency cooling was automated, and it turns out the power sensor valve is actually a manual valve. Mother. Fucker. Also a satellite UPS failed to recharge after power was restored, but I'm not in charge of those so whatevs. Of course I was suddenly made in charge when it turned out nobody else knew a thing about contingency planning. Thanks.
|
# ? Aug 30, 2016 22:27 |
|
BangersInMyKnickers posted:So if you can get away with the free version of Hyper-V
|
# ? Aug 31, 2016 04:06 |
|
BangersInMyKnickers posted:This is an okay breakdown: http://www.altaro.com/hyper-v/virtual-machine-licensing-hyper-v/ So exactly what I was thinking...
|
# ? Aug 31, 2016 13:11 |
|
Vulture Culture posted:Does this product still exist? Hyper-V server is still a thing, yup.
|
# ? Aug 31, 2016 13:16 |
|
BangersInMyKnickers posted:This is an okay breakdown: http://www.altaro.com/hyper-v/virtual-machine-licensing-hyper-v/ If I'm understanding you correctly, you're saying that because you get 1 physical + 2 virtual instances, you can use all 3 as licenses for 3 VMs on a single host. It doesn't work that way - you can't apply licenses directly to a VM; you ALWAYS license the hardware it runs on. He would need a total of 9 licenses for 6 VMs in a 3-host cluster.
|
# ? Aug 31, 2016 14:09 |
|
Richard Noggin posted:..... He would need a total of 9 licenses for 6 VMs in a 3 host cluster. Exactly what I was thinking from reading the document. Apparently I need to get better VARs... Both of the ones I typically deal with were absolutely clueless on this point and seemed weirded out that someone would use Hyper-V. Then they tried to sell me VMware licenses... One even said I would need Standard licenses for every VM I ran, even Linux VMs.
|
# ? Aug 31, 2016 15:30 |
|
Who the gently caress VAR tried to sell MS licenses for nix systems?
|
# ? Aug 31, 2016 15:44 |
|
Potato Salad posted:Who the gently caress VAR tried to sell MS licenses for nix systems? One that gets commission per license sold?
|
# ? Aug 31, 2016 15:56 |
|
Potato Salad posted:Who the gently caress VAR tried to sell MS licenses for nix systems?
|
# ? Aug 31, 2016 16:26 |
|
Potato Salad posted:Who the gently caress VAR tried to sell MS licenses for nix systems? Like I said, I need a better one... It seems some have been pushing VMware so long they are clueless about anything else. They actually said I would need Windows licenses for every VM I ran under Hyper-V, even if it wasn't a Windows VM!
|
# ? Aug 31, 2016 16:40 |
|
So anything exciting at VMworld this year? First year I've skipped in a while, and so far it doesn't look like anything super exciting beyond "WE'LL MANAGE ALL YOUR CLOUDS LOL"
|
# ? Aug 31, 2016 16:42 |
|
stevewm posted:Exactly what I was thinking from reading the document. Microsoft licensing is (mostly) hypervisor-agnostic. I say mostly because there are specific rights to use Hyper-V, but when you're talking about Microsoft-based operating systems installed in a virtual machine, they do not care what the underlying hypervisor is; the same license terms apply.
|
# ? Aug 31, 2016 17:52 |
|
VMworld is meh. Woohoo, html5 next generation, yay everyone loves the host web client. Still waiting for 'em to price their poo poo more competitively.
|
# ? Aug 31, 2016 20:10 |
|
Maneki Neko posted:So anything exciting at VMworld this year? First year I've skipped in a while, and so far it doesn't look like anything super exciting beyond "WE'LL MANAGE ALL YOUR CLOUDS LOL" The cross-cloud architecture is their big announcement, and honestly it's pretty neat. Being able to easily move workloads between public clouds, or between public and private clouds, and to have consistently enforced, centrally managed security policies via NSX that apply no matter where the workload lives, is cool. No big announcements other than that, just a lot of focus on automation, SDN, and VSAN.
|
# ? Aug 31, 2016 22:07 |
|
Potato Salad posted:Who the gently caress VAR tried to sell MS licenses for nix systems?
|
# ? Sep 2, 2016 21:22 |
|
I'd like to automate the creation of VMs, Vagrant-style. These would be Linux machines with a desktop environment, probably Ubuntu 16.04. I'd need to install various software and configure various things in the VM. I've found some Google results for using Vagrant for this purpose, but it doesn't seem like a very "supported" use case, and googling "automated VM creation" leads me to all sorts of results. I don't care if this is VirtualBox or VMware. Currently I use a base snapshot of a VMware machine, but I think I'd like to be able to do things programmatically. What's the One True Way of doing this?
|
# ? Sep 2, 2016 23:08 |
|
Thermopyle posted:I'd like to automate the creation of VM's in Vagrant style. These would be Linux machines with a desktop environment. Probably Ubuntu 16.04. Would need to install various software and config various things in the VM. What exactly are you trying to automate? Creating the VM? Installing the software? Building images to launch through Vagrant? Each of these has a different answer. If you're just trying to have a VM you run on your desktop that you can delete or recreate easily, Vagrant is a great solution: "vagrant up; vagrant destroy -f". If you're trying to automate software install, then you want something like puppet/chef/salt/etc, and if you're trying to create the image that you launch out of Vagrant (or many other platforms) you want packer.
|
# ? Sep 3, 2016 02:11 |
|
ILikeVoltron posted:What exactly are you trying to automate? Creating the VM? Installing the software? Building images to launch through vagrant? Each of these are different answers. terraform or packer
|
# ? Sep 5, 2016 08:37 |
|
ILikeVoltron posted:What exactly are you trying to automate? Creating the VM? Installing the software? Building images to launch through vagrant? Each of these are different answers. Creating the VM, installing the software.
|
# ? Sep 5, 2016 15:15 |
|
If anyone is running a Dell/VMware stack and using the Fault Resilient Memory (FRM) feature, check your kernel logs from startup to make sure the kernel pages are being correctly replicated. It will throw warnings if they are not, but there are no other outward signs that the feature is broken on the 13G servers with certain configs. FRM was developed and introduced on the 12G servers, and at that time Intel processors had a single NUMA node per socket. With the processors on the 13G there are now internally two nodes per socket package, with each node getting its own memory controllers connected by the internal crossbar. These internal nodes are masked with the default BIOS config, I assume because most OSes don't know how to optimize for the new architecture and will weigh bandwidth/latency on the crossbar the same as the QPI even though the crossbar is considerably faster. With our guest VMs we don't go over 3 vCPU per VM so it isn't a concern, so I change the memory config to "Cluster on Die" to improve memory performance. This exposes all four NUMA nodes and the hypervisor can now attempt to prevent a guest from spanning the crossbar. To support FRM with this new architecture, Dell split FRM into two modes: NUMA FRM and FRM. In FRM mode with Cluster on Die, one node from each socket has a designated protected memory space to replicate kernel pages in, while the other node on the socket operates normally. This gets you a single copy of the kernel pages on the second socket, which seems ideal as it minimizes memory overhead while possibly keeping your hypervisor up and running in the event of RAM failure. However, VMware's implementation has no idea what to make of this and attempts to replicate the kernel pages to all three exposed nodes, failing on all three.
If you put the hardware in NUMA FRM then the hardware creates a protected memory space on all nodes, and in that situation VMware successfully replicates to one other node, but the other protected spaces are effectively wasted and you're eating an extra 32GB of memory overhead per host. Dell sees the problem and is working on it, but VMware has a huge case of red-rear end at the moment claiming that it's working "as it should" (it clearly is not) and we're slugging it out trying to get them to fix their poo poo. I suspect they're treating the system as 4-socket and haven't updated their FRM code since the 12G servers came out and they don't want to bother, so it's probably going to be months before patches are out to do anything about it.
|
# ? Sep 6, 2016 15:24 |
|
jaegerx posted:terraform or packer I legit don't understand what you're asking or answering here? Thermopyle posted:Creating the VM, installing the software. Likely vagrant and some form of provisioning, https://www.vagrantup.com/docs/provisioning/ I'd personally use puppet because it's what I know but if you're weak on that (or chef, ansible, salt, etc) you could just write some shell scripts that'll kick off when the vagrant vm is launched. (ie: yum install foo / apt-get install foo)
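Something like this hypothetical Vagrantfile is roughly what that shell-provisioned setup might look like (the box name, memory size, and package list are placeholders, not a tested config):

```ruby
# Vagrantfile - assumes the "ubuntu/xenial64" box for Ubuntu 16.04
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |vb|
    vb.gui = true      # show the console so the desktop is usable
    vb.memory = "4096"
  end
  # Plain shell provisioning; swap in puppet/chef/salt here if preferred
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y ubuntu-desktop  # placeholder package list
  SHELL
end
```

Then "vagrant up" builds the whole thing from scratch and "vagrant destroy -f" throws it away.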
|
# ? Sep 6, 2016 16:51 |
|
ILikeVoltron posted:I legit don't understand what you're asking or answering here? There's no One True Way in this case, only a half-dozen different ways of varying effectiveness. If the software is installed the same way every time, just use docker and bake it into an image. Or use another product: chef can do it; puppet can do it with a little more work. Almost any tool that allows you to clone VMs can get you an image, and then any configuration management tool like chef/ansible/salt/puppet/whatever can do your modifications. What varies is whether you prefer push/pull and whether you want the server to be headless. Bhodi fucked around with this message at 17:24 on Sep 6, 2016 |
# ? Sep 6, 2016 17:20 |
|
Alright, thanks for the insight guys. On to other things: My home server in its current (and old) configuration just has Ubuntu Server installed on it and a ton of little apps and scripts running on it, such that I can't ever recall what I've got going on. I'm thinking about turning all of its services into VMs and containers in the vain hope that it will: 1) just help me keep it organized so I can remember what the hell I've done on the machine, 2) help me learn, 3) make it easier to update software and try new things without loving up my poo poo. I currently run one VM on it via KVM so I can access some old Windows stuff, but I'm thinking about going all in. The most important service the machine provides is file serving via Samba to the rest of the house from zfs pools. Questions:
|
# ? Sep 6, 2016 17:47 |
|
I think containerization is the thing for you if you're running a bunch of scripts and homebrew apps with a short memory budget. The memory overhead of a little Ubuntu Server VM may be relatively small, but if you're running 6-8 apps on that machine, you'll get into the red kinda quick. You also seem to be maxing memory out sometimes, particularly in buffering. I'm guessing the zpool is backed by spinning disks? Not knowing what you're running on the machine, I'd have to guess that the buffering is write activity on your smb/zpool setup. Is the write performance of your zpool ever a problem? A roommate uses ZFS/FreeNAS to serve media and all of our centralized backups in addition to a pair of jails. It's a 16GB machine, and when we're doing big writes, our monitoring VM will see the storage stack gobble up huge amounts of RAM. I say this because you may notice an occasional performance degradation on your smb/zpool storage service if my assumption about what's occasionally eating your RAM is correct. This would make precisely divvying up your memory to guest VMs a rather crucial task. Got any monitoring on the memory consumption of each of your services?
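As a crude starting point for that kind of monitoring, here's a rough sketch that parses /proc/meminfo to split app memory from buffers/cache (assumes a Linux host; illustrative only, not a full monitoring setup):

```python
def meminfo(path="/proc/meminfo"):
    """Parse a /proc/meminfo-style file into {field: kB} (Linux only)."""
    info = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # values are in kB
    return info

# On the server itself, something like:
#   m = meminfo()
#   apps = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
#   print(f"apps: {apps} kB, buffers/cache: {m['Buffers'] + m['Cached']} kB")
```

Run that from cron every few minutes and you'd at least see whether the zpool buffering or the apps are eating the RAM.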
|
# ? Sep 6, 2016 18:20 |
|
|
|
Potato Salad posted:I think containerization is the thing for you if you're running a bunch of scripts and homebrew apps with a short memory budget. The memory overhead of a little Ubuntu Server VM may be relatively small, but if you're running 6-8 apps on that machine, you'll get into the red kinda quick. You also seem to be maxing memory out sometimes, particularly in buffering. I'm guessing the zpool is backed by spinning disks? Not knowing what you're running on the machine, I'd have to guess that the buffering is write activity on your smb/zpool setup. I don't have any monitoring that granular, though that is something I should get. I've actually only got 12GB in there right now, but I don't mind dropping a little bit of cash to bump it up to 16GB. Anyway, you're right about the zpool. The buffering maxing out memory usage happens while scrubbing my pools, which really puts a hurt on system performance. However, I don't ever push the pools' R/W performance since they just serve media and occasionally something automated copies new media onto them, and because it's automated and happens in the background I never notice how the performance actually is.
|
# ? Sep 6, 2016 18:30 |