|
Kernel samepage merging (KSM) is really good for VM density. Running 6-8 VMs on 16GB of memory is 100% doable without breaking a sweat, mostly. However, running ZFS on the same host is going to be a problem unless you really limit the ARC, because ZFS is a memory pig. For VM vs. container, ask yourself "will I need to treat this like a real system regularly?" If no (plex, consul, redis, whatever): container. If yes (one nginx/apache server to rule them all that you keep touching to mess with SSL certs/vhosts/whatever), or it needs a lot of privileged stuff (virtual machines, ZFS, whatever): VM.
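If you do run ZFS alongside VMs on Linux, the usual way to limit the ARC is a module option. A minimal sketch -- the 4 GiB cap below is an arbitrary example, size it for your box:

```
# /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296
```

Takes effect at module load; on a running system you can also echo the same value into /sys/module/zfs/parameters/zfs_arc_max.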
|
# ? Sep 6, 2016 18:31 |
|
|
|
Bhodi posted:I've never used it, but packer is basically docker plus some ansible-esque configuration goo read from a config file and applied to the image during build-time; terraform is a manager tool by the same company and has plugins to manage infrastructure like DNS, network switches as well as the stuff you build with packer. It's an answer and they seem to do what was asked for, though not the tools I'd recommend. Packer is a tool for starting from some kind of machine base state, applying configurations, and producing a binary artifact. (Docker is one of those possible backends if you have a hate-boner for Dockerfiles for some reason, but that's the only relationship between the two.) The fundamental unit of production in Packer is the virtual machine -- VirtualBox, VMware Fusion/Workstation, EC2, Azure, etc. You spin up a virtual machine -- if you have R/W console access to it to send keystrokes, like through VirtualBox or VMware Fusion, you can start from a completely empty box and an ISO -- and then you apply your configurations to it and save it somewhere as a new image, potentially packaging it for Vagrant in the process. Terraform is largely for interacting with API-driven cloud technologies and not random on-premises stuff (though there's no reason why it couldn't, architecturally), so if you want to manage DNS, it had better be through Route53 or CloudFlare or UltraDNS. It can manage virtual networks in OpenStack Neutron and it could conceivably do VMware NSX (it doesn't right now), but at this moment there is no technology in Terraform to manage a single vendor's physical switches. Ansible or Puppet are much better options for that if you have a supported switch.
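For the curious, a minimal Packer template (JSON, as of 2016) looks something like this. Treat it as a sketch, not a working config -- the ISO URL, credentials, and package choices are placeholders:

```json
{
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "http://releases.ubuntu.com/16.04/ubuntu-16.04-server-amd64.iso",
    "iso_checksum_type": "none",
    "ssh_username": "packer",
    "ssh_password": "packer",
    "shutdown_command": "sudo shutdown -P now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update", "sudo apt-get -y install nginx"]
  }],
  "post-processors": ["vagrant"]
}
```

`packer build template.json` spins up the VM, runs the provisioners against it, and (with the vagrant post-processor) spits out a .box at the end. A real unattended install would also need a `boot_command` to drive the installer.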
|
# ? Sep 6, 2016 19:04 |
|
Vulture Culture posted:Packer is a tool for starting from some kind of machine base state, applying configurations, and producing a binary artifact. (Docker is one of those possible backends if you have a hate-boner for Dockerfiles for some reason, but that's the only relationship between the two.) The fundamental unit of production in Packer is the virtual machine -- VirtualBox, VMware Fusion/Workstation, EC2, Azure, etc. You spin up a virtual machine -- if you have R/W console access to it to send keystrokes, like through VirtualBox or VMware Fusion, you can start from a completely empty box and an ISO -- and then you apply your configurations to it and save it somewhere as a new image, potentially packaging it for Vagrant in the process. This seems to be exactly what docker does, so I'm a bit confused here now.
|
# ? Sep 6, 2016 19:12 |
|
Bhodi posted:This seems to be exactly what docker does, so I'm a bit confused here now. As I've said, I've never used it, and now I'm wondering why you ever would given there are other more common tools in that arena.
|
# ? Sep 6, 2016 19:21 |
|
Vulture Culture posted:Docker has nothing to do with producing virtual machine images. Docker is a highly opinionated toolset for producing container images and running instances of them.
|
# ? Sep 6, 2016 19:25 |
|
Bhodi posted:I guess I consider container images to be similar enough to binary artifacts to lump them together. I think we're splitting hairs at this point. You're right on opinionated, though! No splitting hairs, you're just wrong. Containers != VMs. VM Images != Containers in any way.
|
# ? Sep 6, 2016 20:20 |
|
VMware literally calls virtual machines containers
|
# ? Sep 6, 2016 20:55 |
|
Bhodi posted:VMware literally calls virtual machines containers Think of it this way -- containers are glorified chroots. It's more complicated than that, but it's a simple enough distinction.
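A rough illustration of the "glorified chroot" point -- this is approximately what a container runtime does under the hood. It needs root, and /srv/rootfs is a hypothetical extracted root filesystem, so treat it as a sketch rather than a recipe:

```shell
# New PID and mount namespaces, then pivot into a different root.
# The process still shares the host kernel -- no emulated hardware,
# no guest kernel, unlike a VM.
sudo unshare --pid --fork --mount-proc chroot /srv/rootfs /bin/sh
```

Real runtimes add network/user namespaces, cgroups, and layered filesystems on top, but the kernel-sharing part is the fundamental difference from a VM.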
|
# ? Sep 6, 2016 20:59 |
|
Bhodi posted:VMware literally calls virtual machines containers No we don't. I work on the monitor team. We have never called a VM a container.
|
# ? Sep 6, 2016 21:05 |
|
DevNull posted:No we don't. I work on the monitor team. We have never called a VM a container. Why is everyone crawling out of the woodwork on this? These terms may have specific definitions in your mind, but they get conflated all the time, and honestly I ran out of "give a poo poo" about 5 years ago, so let's just say "running things in things", ok?
|
# ? Sep 6, 2016 21:14 |
|
Bhodi posted:you call the vmx the container and it has a version and contains the vm, which is listed under virtual machines in vsphere I have worked at VMware for 9 years and have never heard someone call the vmx a container. Everyone calls the vmx the VM. The version is the hardware version, which exposes hardware capabilities to the guest OS and is quite different from a container version.
|
# ? Sep 6, 2016 21:29 |
|
If VMware called VMs containers then the new "vSphere Integrated Containers" product sure would be a confusing name.
|
# ? Sep 6, 2016 21:34 |
|
It must be just us internally, then. Googling it brings up (non-VMware) hits, so we probably aren't the only ones. I'll stop the dumb derail and only refer to something as a container in the future if it shares kernel space with the host.
Bhodi fucked around with this message at 21:43 on Sep 6, 2016 |
# ? Sep 6, 2016 21:40 |
|
Bhodi posted:you call the vmx the container and it has a version and contains the vm, which is listed under virtual machines in vsphere
|
# ? Sep 6, 2016 22:55 |
|
Oh hey, what's this? Someone who has no business speccing up servers has ordered something to host VMs on and deployed it. Let's take a look! Single quad-core Xeon E3-1xxx, 32GB RAM, running 6 VMs each with 4 vCPUs allocated. Let me know how you get on with that.
|
# ? Sep 6, 2016 23:20 |
|
"Frank, dude, Frank, have you seen the new hard drives? Holy poo poo one WDRed can hold 16gb" "poo poo, Sam, pair of those in raid and she's good to go."
|
# ? Sep 7, 2016 00:24 |
|
Can you assholes stop triggering the rest of the thread thanks
|
# ? Sep 7, 2016 10:57 |
|
Please stop
|
# ? Sep 7, 2016 15:21 |
|
What's good, economical hardware for a poor man's VM host? Something I can scour ebay for or something new, don't care. I'm not really familiar with PC hardware that isn't desktop-oriented.
|
# ? Sep 8, 2016 23:06 |
|
Tell us which hypervisor mang
|
# ? Sep 8, 2016 23:19 |
|
Thermopyle posted:What's good, economical hardware for a poor man's VM host? Something I can scour ebay for or something new, don't care. I'm not really familiar with PC hardware that isn't desktop-oriented. Used Precision towers are a cheap way to get a Xeon and ECC memory with at least some overlap on VMware's support matrix. Hyper-V will run on practically anything if you go that route.
|
# ? Sep 8, 2016 23:26 |
|
evil_bunnY posted:Tell us which hypervisor mang Well, I'm just experimenting right now. I've been using KVM recently on an ubuntu host. I'll probably stick with that for a while.
|
# ? Sep 8, 2016 23:48 |
|
Dunno what your workload is, but you probably don't give a drat about ECC at home. For KVM/Hyper-V lab hosts, I'd probably go with the cheapest Haswell+ box you can find with an i5/i7, as much memory as possible, and a couple of SSDs (for a single host -- using networked storage is obviously better). For a "poor man's VM host" running in a lab (your house), desktop stuff is often preferable anyway, since it's quiet. VPro/IPMI are nice if you can get them in a cheap/quiet platform. VPro also (generally) implies VT-d, if you care. An Optiplex 9020 with an i7 bumped to 32GB of memory and a couple of SSDs plus an extra dual-port Intel NIC (if you care) will cost you well under $1000. Unless you have a specific need for some functionality or capacity, this is probably more than you'll use in a casual lab anyway.
|
# ? Sep 9, 2016 00:25 |
|
what are you trying to accomplish? http://www.newegg.com/Product/Product.aspx?Item=N82E16813157616 http://www.newegg.com/Product/Product.aspx?Item=N82E16820011361 will probably run KVM and will almost certainly run VMware. Any old case and power supply will do. At that point, just toss a little storage in it.
|
# ? Sep 9, 2016 00:37 |
|
adorai posted:what are you trying to accomplish? I talked a bit about it on the previous page, but 50% learning/experimenting, 50% making my home server better and more maintainable. The biggest resource hog on my current server is ZFS with ~30TB of storage. There's the ongoing RAM usage of ARC and then some pretty intense usage during periodic scrubs. Also, I do a bit of transcoding/streaming with Emby, so I'm a little hesitant about using a Kabini. Because of the ton of hard drives, I probably need to stay with purchasing components to put into my massive case rather than buying a prebuilt system. Of course, for the right price I can strip a prebuilt apart and shove it in my current case, or turn my current case into an external drive enclosure...
|
# ? Sep 9, 2016 16:20 |
|
I use an i7 NUC with a big SSD in it for QEMU+KVM. Takes up slightly more room than my mouse on my desk. I wouldn't use it for a big farm of Windows servers, but it's fine for all my Linux/BSD/whatever stuff.
|
# ? Sep 9, 2016 16:42 |
|
Is your case big enough to hold an SSI-EEB board? Transcode all the things like it's going out of style, and plenty of RAM for ZFS...
|
# ? Sep 9, 2016 16:44 |
|
That's an awful lot of energy draw though. On the flip side are single-socket boards with processors like the E3-1265L V3 running only 40-ish watts idle (I may or may not be partial to my supermicro uATX + 1265L v3 VMware lab).
|
# ? Sep 9, 2016 17:38 |
|
That config idles at about 90W, and it's way more than twice the machine that single-socket E3 is, so you'd really be coming out ahead on energy efficiency vs. running two E3 boxes. That's an extra 1.2 kWh per day (50 W * 24 h / 1000), and at $0.10/kWh that amounts to an extra $44/year to run it. It's insignificant if you aren't running a datacenter full of them.
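The arithmetic, for anyone who wants to plug in their own numbers -- the 50 W delta and $0.10/kWh rate are the figures from this post, not universal constants:

```shell
#!/bin/sh
# Yearly cost of an extra 50 W of continuous draw at $0.10/kWh.
watts=50
rate_per_kwh=0.10
awk -v w="$watts" -v r="$rate_per_kwh" 'BEGIN {
    kwh_day = w * 24 / 1000
    printf "%.1f kWh/day, $%.2f/year\n", kwh_day, kwh_day * 365 * r
}'
# prints: 1.2 kWh/day, $43.80/year
```

Swap in your local utility rate and the idle-draw difference between the boxes you're comparing.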
|
# ? Sep 9, 2016 17:52 |
|
drat your math!
|
# ? Sep 9, 2016 17:59 |
|
Real late to the game here, but just using the ESXi web client for the first time ever: How does a product this bad ever get shipped by anyone? It takes days to figure out all the magical combinations of settings you're not allowed to change at the same time or it will just silently forget to apply them. e: for gently caress's sake, never, ever try to change a portgroup on an existing vNIC and add a new vNIC at the same time or you're gonna be editing VMX files over SSH Vulture Culture fucked around with this message at 20:57 on Sep 9, 2016 |
# ? Sep 9, 2016 20:54 |
|
Just wait until you try to configure hardware passthrough in the web console Or manage serial devices.
|
# ? Sep 9, 2016 21:33 |
|
You talking about the host web client or the vCenter web client?
|
# ? Sep 9, 2016 21:35 |
|
NippleFloss posted:You talking about the host web client or the vCenter web client? The ESXi web client would have to be the former. Isn't it in beta/development/testing?
|
# ? Sep 9, 2016 21:48 |
|
anthonypants posted:The ESXi web client would have to be the former. Isn't it in beta/development/testing? No, it was a fling in 5.5 that is now included in 6. I've never had any issues with it, but then I don't have to do much in it other than occasionally power on a VM or connect to a VM console, since I use vCenter.
|
# ? Sep 9, 2016 22:10 |
|
It's fairly clear uncommon functions have not been fully implemented and tested yet. I'd rather it was in this state than not out at all, though. Works fine for day to day management.
|
# ? Sep 10, 2016 12:36 |
|
With regards to hardware for a home lab slash server... So, it looks like a used Xeon E5-2670 is a great deal: 8 cores at 2.6GHz for around 80 bucks. The main problem is finding a cheap motherboard to put the thing in. I'm kind of wanting ECC support, but AFAICT that means no X79 chipset mobos, which means server-class motherboards, which means $400+ and a new power supply. I don't really have to have ECC, as I'm not a ZFS ECC alarmist for my usage like some, but it'd be nice. Anyway, does anyone have any ideas about a cheapish motherboard to put one of these in? Used or not, I don't really care. Also, is there a better thread for this? edit: Another question...how do I create a VM for Ubuntu 16.04 on a 14.04 host using virt-manager? In the OS Version selection it only goes up to 14.04... Thermopyle fucked around with this message at 20:59 on Sep 11, 2016
# ? Sep 11, 2016 20:39 |
|
Quoting myself:SamDabbers posted:Is your case big enough to hold an SSI-EEB board? Transcode all the things like it's going out of style, and plenty of RAM for ZFS... Intel S2600CP2J motherboard, dual E5-2670s, and 128GB ECC RAM for $500. Edit: I recently built this exact machine, and it's a beast for how little it cost.
SamDabbers fucked around with this message at 21:46 on Sep 11, 2016 |
# ? Sep 11, 2016 21:13 |
|
SamDabbers posted:Quoting myself: Well poo poo, I looked at that but missed that it included CPUs and RAM. Thanks. I'll have to see what the PSU situation is like and also measure my case up, but I think it will work... edit: For anyone else considering something similar here's a video modifying an ATX case to take an SSI-EEB motherboard. Thermopyle fucked around with this message at 23:52 on Sep 11, 2016 |
# ? Sep 11, 2016 21:25 |
|
|
|
Thermopyle posted:how do I create a VM for Ubuntu 16.04 on a 14.04 host using virt-manager? In the OS Version selection it only goes up to 14.04...
Just install it. AFAIK that selection doesn't do anything special while creating the VM, and 16.04 is basically the same as 14.04, just a newer version.
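You can also skip the wizard entirely with virt-install. A sketch -- the VM name and ISO path below are made up, and since --os-variant only tunes defaults like the disk/NIC models, the closest available entry is fine:

```shell
virt-install \
    --name xenial-test \
    --ram 2048 --vcpus 2 \
    --disk size=20 \
    --cdrom ~/isos/ubuntu-16.04-server-amd64.iso \
    --os-variant ubuntu14.04
```

Once it's installed, the guest neither knows nor cares what the host thought it was.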
|
# ? Sep 12, 2016 07:10 |