|
evol262 posted:Tooling took a while to get there with containers also. I'm not saying this is super practical yet, but the isolation offered by using virt is an active area of research. Tooling comes later, mostly. isn't that basically intel's clear containers?
|
# ? Nov 6, 2017 02:56 |
|
|
minato posted:But in my experience SWEs don't want to know all that stuff, they just want to push a button and get their feature into prod. They're actively resistant to learning about how the sausage is made. They just want a magic "PaaS 2.0" where they click a button and get a deployment pipeline, telemetry, logs, alerts, & reliability. They don't want to know anything about configuration, auto-scaling, backups, availability-zones, security, load-balancing or service meshes; that's just an opaque implementation detail to them. And I can see their point. https://www.youtube.com/watch?v=VQ7kpxPXTm4 PCjr sidecar posted:isn't that basically intel's clear containers?
|
# ? Nov 6, 2017 03:40 |
|
Vulture Culture posted:Brendan Burns did a really interesting talk about this at Velocity NYC this year. The videos are paywalled in O'Reilly Safari, but there's an older and less-fleshed-out version of his talk from KubeCon last year:
|
# ? Nov 6, 2017 06:03 |
|
I’m trying to learn VMware basics (presales and VTSP stuff for now) from scratch using the courses on the VMware site. Would this be a case where I’d be served by the book mentioned in the OP?
|
# ? Nov 6, 2017 22:59 |
|
I’m gonna be a hipster, but unless you have a compelling reason like your current job site having VMware with no plans to send it to the cloud, you’d be better served spending that time learning the basics of AWS. Maybe Azure if you’re a Windows-heavy shop.
|
# ? Nov 7, 2017 00:04 |
|
Punkbob posted:I’m gonna be a hipster, but unless you have a compelling reason like your current job site having VMware with no plans to send it to the cloud, you’d be better served spending that time learning the basics of aws. VMware ain’t that hard to learn and there’s still a ton of it out there. The human brain is big enough to learn both things and depending on location VMware is going to be a much more employable skill in the short term.
|
# ? Nov 7, 2017 00:28 |
|
The whole notion that VMs are going away everywhere and are being replaced by containers or serverless or four-line Perl scripts or whatever it is this week is rather annoying. It's something you should keep on top of, like the rest of the field, but VMs aren't going away for a long time. The landscape needs to mature a whole hell of a lot more before it makes any sense for businesses to start completely rearchitecting their apps. The architecture isn't useless by any means; it's great in certain circumstances, but not every app benefits enough from such a large architectural change to make it worth it. Cattle are coming eventually, but nobody's gassing all of their pets.
|
# ? Nov 7, 2017 03:33 |
|
Seriously, there are still places that run everything on bare metal.
|
# ? Nov 7, 2017 04:22 |
|
I know that people run on prem. But running a single-node ESXi host isn’t going to be much more useful for learning basic concepts than what you could do with VMware Workstation/Fusion. Using AWS or equivalent you can cheaply mess with some advanced concepts that are transferable. To me, knowing how to architect a redundant system is more of an abstract thing than mastering managing a single host.
|
# ? Nov 7, 2017 09:11 |
|
Punkbob posted:I know that people run on prem. But running a single node ESXi node isn’t going to be as useful to get basic concepts past what you could with a VMware workstation/fusion. I have no idea why you think you can't design redundant systems with a single host. Sure, you could do the same in workstation, but there's no real reason not to pretend to do the 'real thing'. Nested virtualization is a real thing which lets you set up whatever labbing environments you want to test failures. If you think the average AWS 'admin' has any idea how the redundancy in AWS actually works beyond 'scale my app out' and "don't keep all my critical stuff in the same AZ", I don't know what to tell you. I'd probably argue that in-house sysadmins (virtualization guys or not) have a much better idea of how to design redundant systems (storage, network, compute, etc) than the "AWS handles all that stuff for me!" crowd. Not to mention that what qualifies as 'redundant' differs between the two.
|
# ? Nov 7, 2017 13:28 |
|
I’d argue that knowing the concepts behind virtualization is useful, but learning the nitty gritty of ESXi is not useful. I would never hire someone to build out a VMware environment just because they built a home lab; learning the nitty gritty of ESXi is putting the cart before the horse. I’d be much more inclined to hire a sysadmin who got the concepts behind the workflows it enables, as well as having a strong background in being a server janitor. A home lab is a small bonus in an interview, but it can also be a major turn-off if a candidate doesn’t get “it”.
|
# ? Nov 7, 2017 14:47 |
|
It shows passion and drive. Everything else can be taught.
|
# ? Nov 7, 2017 15:01 |
|
Punkbob posted:I’d argue that knowing the concepts behind virtualization is useful, but learning the nitty gritty of esxi is not useful. I would never hire someone who built a home lab to build out a VMware environment. Learning the nitty gritty of esxi is putting the cart before the horse. I’d be much more inclined to hire a sysadmin that got the concepts behind the workflows that it enables as well as having a strong background in being a server janitor. A home lab is a small bonus in an interview but it can also be a major turn off of a candidate if they don’t get “it”. If someone is looking for their first real sysadmin job they’re much more likely to be put in front of VCenter than the AWS management console, and knowing your way around it will be of great benefit to that person’s rapid advancement on to other, more interesting things.
|
# ? Nov 7, 2017 17:06 |
|
Punkbob posted:I’d argue that knowing the concepts behind virtualization is useful, but learning the nitty gritty of esxi is not useful. I would never hire someone who built a home lab to build out a VMware environment. Learning the nitty gritty of esxi is putting the cart before the horse. I’d be much more inclined to hire a sysadmin that got the concepts behind the workflows that it enables as well as having a strong background in being a server janitor. A home lab is a small bonus in an interview but it can also be a major turn off of a candidate if they don’t get “it”. I can't imagine a scenario where having a home lab (be it physical hardware at home or a setup in a cloud service) would be a negative for me as an interviewer, since it demonstrates at least some drive to learn outside of work/school environments. Even if I disagree with the relevance of what they're doing with the lab, the existence of it isn't a negative.
|
# ? Nov 7, 2017 17:45 |
|
adorai posted:Seriously, there are still places that run everything on bare metal. And there are still use cases for running certain things on bare metal, even in virt-heavy environments.
|
# ? Nov 7, 2017 17:47 |
|
Punkbob posted:I’d argue that knowing the concepts behind virtualization is useful, but learning the nitty gritty of esxi is not useful. I would never hire someone who built a home lab to build out a VMware environment. Learning the nitty gritty of esxi is putting the cart before the horse. I’d be much more inclined to hire a sysadmin that got the concepts behind the workflows that it enables as well as having a strong background in being a server janitor. A home lab is a small bonus in an interview but it can also be a major turn off of a candidate if they don’t get “it”. Great. Don't hire them. But AWS/GCE skills and architecture are only vaguely in the same domain as traditional, on-prem virt (self-hosted OpenStack is the only place I'd consider both). Believe it or not, lots of people still run on-premises. Frankly, targeting application resiliency, region resiliency, and scale-out don't have much to do with virtualization in the classic sense, and labbing in ESXi teaches something totally different. The point is not "my lab in ESXi mirrors production issues", but "I've touched vCenter and I have a vague idea of how to set up vswitches, LUNs, etc". Other than building images, which is similar across both, the concepts and workflows behind managing your own storage, network, and compute resources vs "gimme another SDN, some buckets/volumes, and here's some cash for a larger instance" aren't comparable. AWS admins are better suited as ex-devops guys who can help structure the application for scale and failure, not new/ex admins who know the underlying resources more than they understand the application.
|
# ? Nov 7, 2017 17:49 |
|
VMware's Black Friday sales have started. When Workstation 14 was announced, the NIC bandwidth control was one of the better features I'd seen in it for several releases, since I work on a networking product. Now in the post-Net Neutrality era, I bet it will come in even more handy.
|
# ? Nov 22, 2017 18:15 |
|
This might be a better thread to ask in: what would be a good backup solution to daily back up VMs from my Win10 PC that I use as a Hyper-V host?
|
# ? Nov 23, 2017 22:48 |
|
Mayne posted:This might be a better thread to ask in: what would be a good backup solution to daily back up VMs from my Win10 PC that I use as a Hyper-V host? Are you willing to spend money? That will be a big factor in the options available to you. There is a free edition of Veeam; I don't know if it runs on non-server versions of Windows 10.
|
# ? Nov 24, 2017 09:54 |
|
Mr Shiny Pants posted:Are you willing to spend money? That will be a big factor in the options available to you. Veeam Endpoint does, I believe.
|
# ? Nov 24, 2017 12:38 |
|
bobfather posted:Veeam Endpoint does, I believe. Endpoint works for clients/physical servers, not VM hosts. https://hyperv.veeam.com/free-hyper-v-backup/ does Hyper-V backup, but features are limited (no scheduling, for instance).
|
# ? Nov 24, 2017 12:45 |
|
SlowBloke posted:Endpoint works for clients/physical servers not vm-hosts, https://hyperv.veeam.com/free-hyper-v-backup/ does hyper-v backup but features are limited(no scheduling for instance) I think he's just looking to back up his Hyper-V guests. As a free solution Endpoint Backup would work fine for him. He'd just have to be willing to install it on all his Windows guests.
|
# ? Nov 24, 2017 12:50 |
|
bobfather posted:I think he's just looking to backup his Hyper-V guests. Nothing stops him from installing Veeam Backup Free on the Hyper-V hosts; if it's a homelab, it's certainly less hassle than multiple Veeam Endpoint installs (I wouldn't do it in a prod environment).
|
# ? Nov 24, 2017 12:56 |
|
SlowBloke posted:Nothing stops him from installing veeam backup free on the hyper-v hosts, if it's a homelab it's certainly less hassle than multiple veeam endpoint installs(I wouldn't do it on a prod enviroment). That's fair. I use ESXi free so Veeam Backup Free was never really an option for me. That said, in a small environment Endpoint backup is literally set once and forget forever.
|
# ? Nov 24, 2017 13:36 |
|
Mayne posted:This might be a better thread to ask in: what would be a good backup solution to daily back up VMs from my Win10 PC that I use as a Hyper-V host? Mount your storage as iSCSI from a ZFS host, take incremental filesystem snapshots
|
# ? Nov 24, 2017 16:39 |
|
Vulture Culture posted:Mount your storage as iSCSI from a ZFS host, take incremental filesystem snapshots Then ship it off to S3 for cold storage.
|
# ? Nov 24, 2017 22:22 |
|
I'm looking for some virtualisation / containerisation advice. I'm a developer and I regularly run low-traffic, low-availability-need apps for a variety of business and personal reasons. I'll soon be building one or more physical servers for my home office and I'm looking for the simplest way to run my apps.

I've worked with docker, which is fine, but the orchestration is crummy. I've worked with triton, which is fine, but it gives you a VM on which you still have to set up the environment (either via puppet/chef or manually). I've briefly looked at k8s, but its relationship with bare metal vs e.g. vmware/openstack is difficult for me to comprehend.

I won't be running this on a thousand hosts. I don't want to spend a bunch on licences. I can write puppet if I have to, but I'd prefer not to, because it's a lot of work. I'd like the flexibility to choose on-prem vs cloud hosting without dramatically changing the mechanism of packaging/deployment.

Something I've found is typical whenever I do any research: there's always some fresh new way of doing this that's *just* around the corner, never actually available. Last round it was joyent's public cloud, this time it's AWS EKS. Any ideas?
|
# ? Dec 7, 2017 06:19 |
|
Jaded Burnout posted:Any ideas? Terraform has a Docker provider. One Terraform configuration can spin up infrastructure locally or on any remote docker host on any provider. If you need better availability, Terraform also has a Kubernetes provider. edit: Alternatively you can use docker-compose and then use the same .yml file against AWS ECS. Erwin fucked around with this message at 06:39 on Dec 7, 2017 |
# ? Dec 7, 2017 06:36 |
|
Erwin posted:Terraform has a Docker provider. One Terraform configuration can spin up infrastructure locally or on any remote docker host on any provider. I've used Terraform for some basic stuff against AWS and that's fine, I guess I can look into how well it handles docker stuff. Erwin posted:edit: Alternatively you can use docker-compose and then use the same .yml file against AWS ECS. I've tried docker-compose at various times, including sitting face to face with the authors when it was still called fig, and I still couldn't get it to orchestrate in the way I wanted, e.g. some nodes waiting for dependent resources to come online, but maybe that's a pipe dream and too heavily influenced by the way I was trying to do dev work at the time.
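For what it's worth, later compose file versions grew a way to express that "wait for dependencies" behaviour via healthchecks. A minimal sketch, with the caveat that the app image and the pg_isready check are placeholder examples, not a tested config:

```yaml
version: "2.1"
services:
  db:
    image: postgres:9.6
    healthcheck:
      # dependents aren't started until this check passes
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  app:
    image: myorg/myapp:latest   # hypothetical image
    depends_on:
      db:
        condition: service_healthy
```

`docker-compose up` should then hold back `app` until the `db` healthcheck succeeds, which is roughly the dependency-ordering behaviour described above.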
|
# ? Dec 7, 2017 08:05 |
|
Does anyone have experience with doing nested virtualization in Linux guests in VMware Workstation? I have a Xenial guest in Workstation 12 that I use for multi-VM Vagrant environments, using both VirtualBox and KVM through libvirt. The nested VMs are very unstable; if they have multiple cores allocated, it's a 100% guarantee that processes in them will segfault constantly. With only one core allocated, stability improves but is still not great. I have various Ansible playbooks and build scripts for building these environments, and it sucks when they constantly fall over and explode. By way of comparison, the same multi-VM environments are totally stable non-nested. I've tested on VirtualBox on a Mac and also with KVM on a regular Linux server and everything is fine there. I would like to set up another nested virtualization test using VMware Fusion on a Mac but haven't had time to do so yet. My main incentives for getting the nested virt working are that the desktop workstation I have is much more powerful than my MBP and has a lot more memory, so I can build much bigger and more elaborate test environments (and I don't want to tie up a $20k server for my virt experiments when I can get them all done on a much cheaper workstation). However, it has to run Windows, so I can't put Linux right on the machine.
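For reference, these are the nested-virt knobs involved in this kind of setup as far as I know, sketched as a Vagrantfile for the inner VMs. The box name is just an example and none of this is a fix for the instability, only the settings needed to make nesting work at all:

```ruby
# Vagrantfile for the nested (inner) VMs -- box name is an example.
# The *outer* Workstation guest also needs VT-x/EPT exposed to it,
# i.e. vhv.enable = "TRUE" in its .vmx (the "Virtualize Intel
# VT-x/EPT or AMD-V/RVI" checkbox in the VM's processor settings).
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1604"

  config.vm.provider :libvirt do |lv|
    lv.nested = true                  # expose VT-x/AMD-V to nested guests
    lv.cpu_mode = "host-passthrough"  # don't mask CPU features
  end

  config.vm.provider :virtualbox do |vb|
    # VirtualBox 6.0+ only
    vb.customize ["modifyvm", :id, "--nested-hw-virt", "on"]
  end
end
```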
|
# ? Dec 21, 2017 23:26 |
|
chutwig posted:Does anyone have experience with doing nested virtualization in Linux guests in VMware Workstation? The title of the thread was supposed to be a joke. Or so I thought.
|
# ? Dec 21, 2017 23:34 |
|
Volguus posted:The title of the thread was supposed to be a joke. Or so I thought. I work on the monitor team at VMware and we run several machines dedicated to nested testing. Mostly ESX in ESX, but we have a few WS in ESX as well. It is slower, but we expect the same correctness.
|
# ? Dec 21, 2017 23:39 |
|
DevNull posted:I work on the monitor team at VMware and we run several machines dedicated to nested testing. Mostly ESX in ESX, but we have a few WS in ESX as well. It is slower, but we expect the same correctness. You, I understand. You are supposed to test all kinds of wacky configurations your customers may have. It's the customers that I don't get . But anyway, is VM in VM @ VMWare now supported? I tried it back in the early 2000s (2003 maybe?) when VMs were the new cool thing and VMWare workstation caught me trying to install it in an VMWare VM and basically refused to let me do it. Again, not that I would have had a need for it, but experimenting is fun.
|
# ? Dec 22, 2017 01:41 |
|
Volguus posted:You, I understand. You are supposed to test all kinds of wacky configurations your customers may have. It's the customers that I don't get. But anyway, is VM in VM @ VMware now supported? I tried it back in the early 2000s (2003 maybe?) when VMs were the new cool thing and VMware Workstation caught me trying to install it in a VMware VM and basically refused to let me do it. Again, not that I would have had a need for it, but experimenting is fun. Not only are nested VMs fine, but they’re a fine way to simulate weird network topologies for an amateur homelabber.
|
# ? Dec 22, 2017 01:56 |
|
Anyone have experience with vSAN & AppVolumes? My new company is getting into using AppVolumes, which I've done before, but the majority of their storage is all vSAN based, which means there isn't a (decent) storage device (or devices, depending on how big we scale) for the AppVolumes AppStacks outside of the vSAN datastore itself. I can't seem to find any good reference for using vSAN as the AppStack storage other than poo poo that says you can... I have the following concerns:

1) The AppStack vmdk's aren't associated with a VM, so I can't assign a specific Storage Policy to them. Does this mean the default storage policy applies (which in our case basically mirrors)? Or does no storage policy apply (meaning it lands on whichever host services the request for uploading the file)?

2) If the vmdk only really resides on the local storage of one (or two, depending on question 1) host(s), won't all reads to the AppStack VMDK hammer that single host instead of being distributed across the cluster? This would be my biggest concern.

3) Would there be a way to configure AppVolumes or vSAN to distribute the vmdk's evenly across the cluster so that the benefit of vSAN servicing (in most cases) the closest storage to the VM is maintained? I.e., VM1 has AppStack1 assigned to it, and it reads the VMDK copy off the disk on the host that VM1 is running on.

I'm trying to either resolve these (as I see them) issues, or if I can't, I want to make the case that we'd need dedicated shared storage with decent read performance.
|
# ? Dec 22, 2017 19:39 |
|
It’s going to use the default storage policy for the AppStack vmdks. Since AppStacks are read-only, they should end up in the cache layer for VSAN on whatever host owns the frequently accessed apps and stay there. Writable volumes aren’t one-to-many, so those won’t create any single-host traffic patterns. If you really want to force a VMDK to stripe across multiple hosts, you can crank up the stripes setting in your default VSAN policy to force each object to stripe across multiple magnetic drives. If you set it to a value larger than the number of magnetic drives in a single host, it will force it onto multiple hosts. VSAN also distributes reads in a round-robin fashion across replica copies based on the block offset, so if your FTT is set to two you’re getting at least two hosts active in servicing reads for that AppStack VMDK.
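That offset-based round-robin can be illustrated with a toy model. To be clear, this is only a sketch of the idea, not VSAN's actual placement logic, and the 1 MiB chunk size is an assumption made up for the example:

```python
from collections import Counter

# Toy model of offset-based round-robin reads across replica copies.
# NOT VSAN's real algorithm -- the 1 MiB chunk size is invented; the
# point is only that different block offsets map to different replicas.
STRIPE_UNIT = 1 << 20  # assumed 1 MiB chunk size

def replica_for_read(block_offset, num_replicas):
    """Pick which replica copy services a read at this offset."""
    return (block_offset // STRIPE_UNIT) % num_replicas

# Simulate sequential reads over a 64 MiB read-only AppStack with 2 copies
reads = Counter(
    replica_for_read(off, 2) for off in range(0, 64 << 20, STRIPE_UNIT)
)
print(reads)  # reads split evenly across both replicas
```

Since any FTT of one or more means at least two replica copies, even a single AppStack VMDK ends up with multiple hosts servicing its reads, which is the behaviour described above.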
|
# ? Dec 23, 2017 19:55 |
|
YOLOsubmarine posted:It’s going to use the default storage policy for the AppStack vmdks. Perfect, that was what I was figuring but couldn't find any proof for some reason. Thanks so much for the info!
|
# ? Dec 23, 2017 20:29 |
|
Volguus posted:You, I understand. You are supposed to test all kinds of wacky configurations your customers may have. It's the customers that I don't get. But anyway, is VM in VM @ VMware now supported? I tried it back in the early 2000s (2003 maybe?) when VMs were the new cool thing and VMware Workstation caught me trying to install it in a VMware VM and basically refused to let me do it. Again, not that I would have had a need for it, but experimenting is fun. I am pretty sure that it is supported. You can select the guest type as vmkernel from the UI. It is way better now that everything is running HV instead of binary translation. Not only is it good for experimenting, but it can also help test your deployment when moving to a new version. Lots of places deploy into VMs to make sure everything is compatible before rolling it out on their hardware.
|
# ? Dec 23, 2017 21:35 |
|
I'm using an old desktop (AMD FX-8350) as my "server". My plan was to install Windows Server and run some VMs in Hyper-V, but Hyper-V apparently doesn't have some of the features I wanted (USB passthrough, etc.), so I installed VMware Workstation, which apparently has an issue with Windows Server 2016's Credential Guard feature. Before I go through the rigmarole of turning that off, I figured I'd pop in here and see if there's some other path I should be taking. Should I be running another VM product on the bare metal and then VMs on top of that? I'd like to run a couple Windows VMs, a Linux VM, and perhaps an OSX VM if I can manage to get that going on an AMD processor.
|
# ? Dec 28, 2017 06:31 |
|
|
BeastOfExmoor posted:I'm using an old desktop (AMD FX-8350) as my "server". My plan was to install Windows Server and run some VM's in Hyper-V, but Hyper-V apparently doesn't have some of the features I wanted (USB passthrough, etc.) so I installed VMWare workstation, which apparently has an issue with Windows Server 2016's Credential Guard feature. Before I go through the rigmarole of turning that off I figured I'd pop in here and see if there's some other path I should be taking? Should I be running another VM product on the bare metal and then VM's on top of that? I'd like to run a couple Windows VM's, a Linux VM, and perhaps an OSX VM if I can managed to get that going on an AMD processor. I've got an AMD FX 8300 running ESXi. The onboard network card drivers for the MSI motherboard I have it in weren't recognized; I could've added them to the ISO through some method, but I just put an Intel NIC in there. I haven't tried to virtualize macOS on it, but I've run FreeBSD, Linux, Windows XP, 7, 8, and 10 without issue.
|
# ? Dec 28, 2017 09:18 |