|
I finally set up and tried out Docker recently and it's extremely cool. Instead of using an Ubuntu VM with ~1 GB of memory to run an nginx proxy with SSL, I'm running a container that uses ~30 MB! I'm having one weird issue I wanted to ask about here, though. Windows 10 Pro volume mounting with Docker seems to have a lot of issues based on the search results I see, but the problem I'm having seems weirdly specific. I have the restart policy for my containers set to always, so when Docker starts, my containers start up, but any volumes mounted from the Windows host are empty. If I restart a container after Docker has finished starting normally, the volume mount works correctly. Background info: the host is Windows 10 Pro, file sharing on the host is enabled, drives are shared within the Docker settings, and the test command "docker run --rm -v c:/Users:/data alpine ls /data" works as you would expect. One other question: if I can't get host volume mounting working, the workaround seems pretty obvious, just don't mount volumes from my Windows host. What would be the best way to manage a volume for a container so that it won't get erased or overwritten by Docker?
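To make the question concrete, the Docker-managed setup I'm imagining looks something like this (volume, path, and file names are just placeholders):

```shell
# Sketch of the workaround: let Docker manage the volume instead of the Windows host.
# Named volumes live in Docker's own storage and persist across container restarts
# and removals; nothing erases them unless you run `docker volume rm`.
docker volume create nginx-certs

# Mount the named volume in place of the host path:
docker run -d --restart always -v nginx-certs:/etc/nginx/certs nginx

# Get files into the volume by copying through a container that mounts it:
docker cp ./fullchain.pem <container-id>:/etc/nginx/certs/
```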
|
# ? Jul 25, 2017 03:01 |
|
|
Running Linux containers on Windows is a bit of a hack at best (it still uses a VM to host your containers, so it cannot really use all the Windows integration features). Stick to the same host and container operating system for the time being. Maybe once Microsoft's support for Linux software on Windows progresses further in a couple of years, it might become wise to mix and match.
|
# ? Jul 25, 2017 08:17 |
|
Am I asking for trouble if I use vcsa 6.5 now, or has the mile-long list of known bugs settled out?
|
# ? Jul 27, 2017 00:46 |
|
Potato Salad posted:Am I asking for trouble if I use vcsa 6.5 now, or has the mile-long list of known bugs settled out? We got caught by a bug migrating from 6.1 where the migration fails. VMware told us to wait until the next VCSA release this month.
|
# ? Jul 27, 2017 02:14 |
|
Potato Salad posted:Am I asking for trouble if I use vcsa 6.5 now, or has the mile-long list of known bugs settled out? It's fine. It was basically fine on release.
|
# ? Jul 27, 2017 05:31 |
|
My only complaint from upgrading to 6.5 from 6.1 is that they did something to break the active memory stat, so every VM eventually reports 100% active memory, and support claims to have no idea that this is an issue or how to fix it, despite my seeing it in every 6.5 deployment I've looked at.
|
# ? Jul 27, 2017 10:44 |
|
EssOEss posted:Running Linux containers on Windows is a bit of a hack at best (it still uses a VM to host your containers, so it cannot really use all the Windows integration features). Stick to the same host and container operating system for the time being. Maybe once Microsoft's support for Linux software on Windows progresses further in a couple of years, it might become wise to mix and match. Just to clarify, this means Windows on Windows and Linux on Linux, not something insane like matching distros. Also, docker on Mac is generally ok because so many devs use that and it's well tested. Also if running docker don't use red hat. Use Ubuntu, or coreos.
|
# ? Jul 27, 2017 13:05 |
|
Punkbob posted:Also if running docker don’t use red hat. Use Ubuntu, or coreos. For what possible reason? docker-latest is in EL distros.
|
# ? Jul 27, 2017 16:16 |
|
Is there a legitimate case in VMware for having a chain of five or six snapshots for a VM, all of which are over a year old? It looks like it's for VDI, so I'm thinking, maybe they do different versions, but my understanding is that if you clone a machine with snapshots, you just get the consolidated form. I'm trying to figure out how this could possibly make sense.
|
# ? Jul 27, 2017 19:35 |
|
evol262 posted:For what possible reason? docker-latest is in EL distros. Kernel fixes that are becoming more and more necessary for docker are not tho. Redhat Atomic aims to fix this but the docker running world has standardized on Debian/Ubuntu or CoreOS. And if you are running docker in prod you should be using kubernetes.
|
# ? Jul 27, 2017 19:45 |
|
Dr. Arbitrary posted:Is there a legitimate case in VMware for having a chain of five or six snapshots for a VM, all of which are over a year old? Nope.
|
# ? Jul 27, 2017 19:47 |
|
If someone has those six different snapshots deployed in Horizon simultaneously, sure
|
# ? Jul 27, 2017 21:26 |
|
Potato Salad posted:If someone has those six different snapshots deployed in Horizon simultaneously, sure Gonna bet a nickel this is the case.
|
# ? Jul 27, 2017 21:35 |
|
Punkbob posted:Kernel fixes that are becoming more and more necessary for docker are not tho. Redhat Atomic aims to fix this but the docker running world has standardized on Debian/Ubuntu or CoreOS. Those get backported to RHEL, FYI. There were significant rebases in 7.3 and 7.4. No, AUFS is still not supported. That's fine. The docker team is better about not relying on arbitrary kernel versions to detect features (and instead doing it the proper way), and the RHEL platform team is doing better about backporting those from kernel 4 faster.

Atomic is a plain respin of RHEL+ostree. It doesn't have any docker-specific fixes other than an out-of-the-box layout for docker storage on LVM (which anyone can do).

If you're running a large docker deployment in production, yes, you should use kubernetes, which runs on basically every distro without problems. Openshift is an extremely large public PaaS which wraps kubernetes on RHEL (or CentOS). It's a major growth sector for Red Hat, and a big part of the reason they're better about fixes.

Most shops don't want to run CoreOS as a 'standard' unless their entire workload is containerized (it probably isn't), because managing separate environments is obnoxious. This is the same reason why I wouldn't say Ubuntu. If you run Ubuntu everywhere else, great. Run Ubuntu. If you run CentOS or RHEL, there is no reason why you can't run docker or kubernetes on 7.3 or 7.4. It's more important for a lot of people to 'standardize' on what works for their business, which often means not having admins split knowledge between a bunch of different distros.

If you haven't used `docker-latest` (which went public in June or so), you should.

evol262 fucked around with this message at 00:09 on Jul 28, 2017 |
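For anyone who wants to try it, the switch is roughly this (a sketch, assuming a RHEL/CentOS 7.3+ box with the extras repo enabled):

```shell
# Rough sketch: swap the stock docker daemon for docker-latest on RHEL/CentOS 7.3+.
sudo yum install -y docker-latest

# Don't run both daemons against the same storage at once:
sudo systemctl stop docker
sudo systemctl disable docker
sudo systemctl enable docker-latest
sudo systemctl start docker-latest
```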
# ? Jul 28, 2017 00:07 |
|
Punkbob posted:And if you are running docker in prod you should be using kubernetes. My team has been running docker in prod on Mesos + Marathon for about a year and a half now so there's definitely viable options to Kubernetes if it's not your thing.
|
# ? Jul 28, 2017 00:12 |
|
I mean lol if you think that someone running rhel is running 7.3.

Putting my professional hat on: OpenShift is ok, but I still am not convinced that it makes sense vs mainline kubernetes; if that is something you need to sell to higher-ups, then that's what you have to do. I'm also not a super big fan of RHEL in general, but that bias might grow out of users of RHEL being super conservative.

If you are running kubes I think that you also really need to step back and not manage the base OS too much. If you really need custom images you need to build a repeatable process using Packer and trigger frequent CI builds for the base OS. I've got a bunch of experience trying to manage what are basically dumb hosts via things like Chef and Ansible for docker orchestration using Mesos, and the Packer+kubes method is just so much better.
|
# ? Jul 28, 2017 00:23 |
|
Cidrick posted:My team has been running docker in prod on Mesos + Marathon for about a year and a half now so there's definitely viable options to Kubernetes if it's not your thing. I am actually just leaving a job where I pushed mesos really hard, and I have to admit kubes is a revelation. I don't know why mesos doesn't have daemon sets; I mean, there is that framework that says it can do the same, but kubes has first class support. Edit: Kubes not docker freeasinbeer fucked around with this message at 00:34 on Jul 28, 2017 |
# ? Jul 28, 2017 00:25 |
|
Punkbob posted:I am actually just leaving a job where I pushed mesos really hard, and I have to admit docker is a revelation. I don't know why mesos doesn't have daemon sets, I mean there is that framework that says it can do the same, but kubes has first class support You can just run docker on top of mesos. Or kubernetes on mesos. Or marathon. I mean, I really like mesos (spark and chronos in particular), but it's really overblown unless you actually have it on every host as a 'datacenter operating system' or whatever their marketing schtick is.

Punkbob posted:I mean lol if you think that someone running rhel is running 7.3. Note also that layered products (Atomic, for example) don't continue support for EUS Z-streams. Atomic has been 7.3 since 7.3 released, etc.

Punkbob posted:Putting my professional hat on. Open shift is ok, but I still am not convinced that it makes sense vs mainline kubernetes, but if that is something you need to sell to higher ups then that's what you have to do. I'm also not a super big fan of RHEL in general but that bias might grow out of an issue with users of RHEL being super conservative. If you are running kubes I think that you also really need to step back and not manage the base OS too much. If you really need custom images you need to build a repeatable process using packer and trigger frequent ci builds for the base OS. Openshift makes sense for customers because it's PaaS. That's about it. It's user-friendly kubernetes with good tooling which handles all the obnoxious 'build your image from a git repo/source, test it, then deploy in stages' parts for you. Plus a reasonably good abstraction around autorouting pods (which kubernetes does pretty well on its own, to be fair). It's an add-on to mainline kubernetes. Not a competitor. The selling point is that developers really can manage their own environments with nothing more than a CNAME from the devops/ops/whatever team, which isn't true for base kubernetes. And they don't need to know anything about kubernetes or containers. Just 'I want to deploy a node/java/ruby/python/whatever app from this git repo' and it does everything for them.

If you're running kubernetes, you shouldn't manage the OS too much. You still need to install it. That's the thing. For customers who have existing standardizations or hardening scripts which they require on a corporate level, managing preseed+kickstart+cloud-init (for Ubuntu+RHEL+CoreOS) is obnoxious. Standardize on one thing and use it everywhere.

Punkbob posted:I've got a bunch of experience trying to manage what are basically dumb hosts via things like chef and ansible for docker orchestration using Mesos and the packer+kubes method is just so much better. Eventually, for people who need/want private/hybrid cloud infrastructures, a proper greenfield deployment tool which makes it as 'hands-off' as AWS is to customers is probably the end state. Mirantis does well at this. Containers will have a place in that, as will traditional virt, and 'private cloud' where you need something more complex than a container but more anonymous than a pet.
|
# ? Jul 28, 2017 00:53 |
|
Punkbob posted:I am actually just leaving a job where I pushed mesos really hard, and I have to admit kubes is a revelation. I don't know why mesos doesn't have daemon sets, I mean there is that framework that says it can do the same, but kubes has first class support Can you elaborate a little on Kubernetes being a revelation? I haven't played with Kubernetes at all, frankly, because I've been so invested in learning Mesos. I'll admit it's not perfect, but I'm averse to throwing that expertise away in favor of the new hotness without understanding the differences.
|
# ? Jul 28, 2017 03:26 |
|
Cidrick posted:Can you elaborate a little on Kubernetes being a revelation? I haven't played with Kubernetes at all, frankly, because I've been so invested in learning Mesos. I'll admit it's not perfect, but I'm adverse to throwing that expertise away in favor of the new hotness without understanding the differences. Mesos is a "datacenter OS". It wants to own all the hosts, pool their resources, and schedule things (spark, batch jobs, containers, etc) on top of them. Kubernetes is an availability layer. Mesos tries to keep everything at maximum usage if it possibly can, so all of your layers get CPU/mem time without you managing or worrying about it. Kubernetes is less ambitious. Kubernetes is etcd+calico+skydns and some tooling which lets you say "this group of containers (docker by default) comprises a discrete application -- make sure it stays up, scale it according to the rules I set, put an HA proxy in front of it at defined entrypoints, attach persistent storage if it's needed, and give me an API". Kubernetes defines "pods" as the unit, which are logical groups of container(s). Broadly, think of it like AWS compute+CloudFormation or OpenStack+Heat for containers. And it's multi-tenant.
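As a rough sketch of the "give me an API" part (the app name is made up; kubectl circa 1.7):

```shell
# Declare a replicated app and let kubernetes keep it running:
kubectl run my-nginx --image=nginx --replicas=3 --port=80   # creates a Deployment

# Put a stable service endpoint in front of the pods:
kubectl expose deployment my-nginx --port=80

# Change the rules later; kubernetes reconciles to the new state:
kubectl scale deployment my-nginx --replicas=5
```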
|
# ? Jul 28, 2017 03:51 |
|
Cidrick posted:Can you elaborate a little on Kubernetes being a revelation? I haven't played with Kubernetes at all, frankly, because I've been so invested in learning Mesos. I'll admit it's not perfect, but I'm adverse to throwing that expertise away in favor of the new hotness without understanding the differences. I could go into a bunch of detail, but it boils down to mesos being a great distributed task engine and kubernetes being a great docker orchestration tool. And therein lies the rub: kubernetes is just so far ahead, but it's super opinionated docker management. Mesos doesn't care, but you have to build everything yourself. Also, DC/OS rubs me the wrong way in some respects and is actively abandoning the open source side of the mesos ecosystem to make a unified platform. If that's its goal, I'll use kubernetes.
|
# ? Jul 28, 2017 03:58 |
|
evol262 posted:Mesos is a "datacenter OS". It wants to own all the hosts, pool their resources, and schedule things (spark, batch jobs, containers, etc) on top of them. Kubernetes is an available layer. Mesos tries to keep everything at maximum usage if it possibly can, so all of your layers get CPU/mem time without you managing or worrying about it. Kubernetes is less ambitious. Kubernetes is etcd+calico+skydns and some tooling which lets you say "this group of containers (docker by default) comprises a discrete application -- make sure it stays up, scales according to the rules I set, put a ha proxy in front of it at defined entrypoints, attach persistent storage if it's needed, and give me an API" Kubernetes defines "pods" as the unit, which are logical groups of container(s). Broadly, think of it like AWS compute+cloudformation or openstack+heat for containers. And it's multi-tenant. I agree with the broad strokes of this, but mesos itself isn't aiming to do that; the ecosystem might be, and mesos might even have some bindings for some of it, but in the mesos world you need to build so much of it by hand that I don't have to worry about with kubes. All of the orchestration is on a single plane, not bits and bobs spread everywhere like in mesos. In some respects that freedom is awesome because you can customize stuff for your use case, but it's a double-edged sword, as you have to maintain a bunch of different tools on hosts to make sure things run. Pods in and of themselves are pretty awesome because they make it super easy to build sidekick containers that do just one thing and can be shared across many pods. Marathon is getting pods in the latest releases but it's so much simpler to use kubes.
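To illustrate the sidekick thing, a pod with two containers sharing a volume looks roughly like this (image names are placeholders):

```shell
# Rough sketch: one pod, an app container plus a log-shipping sidekick,
# sharing an emptyDir volume. Image names are made up.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidekick
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app
      image: example/my-app:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper
      image: example/my-log-shipper:latest
      volumeMounts:
        - name: logs
          mountPath: /logs
EOF
```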
|
# ? Jul 28, 2017 04:08 |
|
There's also Nomad if you don't need all the complexity of Kubernetes or Marathon. The integration with the rest of the Hashicorp stack is pretty nice too. Kubernetes is emerging as the de facto standard, though, so you have to weigh that simplicity versus how likely you are to just hire people who are already good with Kubernetes in the future.
|
# ? Aug 1, 2017 22:55 |
|
Dr. Arbitrary posted:Is there a legitimate case in VMware for having a chain of five or six snapshots for a VM, all of which are over a year old? I know I'm late on this, but this is clearly from someone who thinks snapshots are backups. Since they are sorely mistaken, good luck consolidating those fuckers in any reasonable amount of time.
|
# ? Aug 3, 2017 01:45 |
|
mayodreams posted:I know I'm late on this, but this is clearly from someone who thinks snapshots are backups. Since they are sorely mistaken, good luck consolidating those fuckers in any reasonable amount of time. We keep 10 or so snapshots of our VDI base images. It has been useful for testing in the past, and the performance impact is nil since they are consolidated on the linked clones.
|
# ? Aug 3, 2017 04:49 |
|
1) Motherboard recommendations for a home ESXi host? Also, is the onboard RAID controller on whatever the answer to #1 is adequate, or should I be looking at additional pci-x controllers?
|
# ? Aug 4, 2017 18:51 |
|
cr0y posted:pci-x What year is it? Edit: Look into ASRock Rack if you actually want a server board, then eBay for a used supported CPU.
|
# ? Aug 4, 2017 19:49 |
|
Moey posted:What year is it?
|
# ? Aug 4, 2017 19:51 |
|
Moey posted:What year is it? You know what i meant guys. (pcie) I don't really need a "server" board, do I? Im not planning on any crazy workloads, just want to be as esx conpatible as possible.
|
# ? Aug 4, 2017 19:53 |
|
I'm currently running a desktop mobo in my home server, but will be going Micro-ATX 2011-3 ASRock Rack and a used CPU whenever I decide to upgrade it. IPMI would be nice.
|
# ? Aug 4, 2017 19:54 |
|
cr0y posted:You know what i meant guys. (pcie)
|
# ? Aug 4, 2017 19:57 |
|
Unless you want to deal with injecting drivers to get your NIC/storage controller working (or add-in cards for that stuff), spend the little extra on a server board. If it is super minimal use/single host, you could always grab a NUC.
|
# ? Aug 4, 2017 20:00 |
|
anthonypants posted:"as esx conpatible as possible" would be a server board, yes. ...this is a really good point.
|
# ? Aug 4, 2017 20:06 |
|
ESXi 5.5 and newer will install and run on just about anything with an Intel CPU that supports VT-x and an Intel NIC.
|
# ? Aug 5, 2017 04:14 |
|
Depends on what kind of CPU you want but you might find it more cost effective to get a cheap home-office server like a Poweredge T30 or the Lenovo equivalent (TS140-150?) than to actually build something around a server-chipset motherboard. I really doubt you're going to run into compatibility issues with whatever random desktop you want to run on, especially if you're OK popping in a NIC if the one you have isn't supported.
|
# ? Aug 5, 2017 06:10 |
|
Eletriarnation posted:Depends on what kind of CPU you want but you might find it more cost effective to get a cheap home-office server like a Poweredge T30 than to actually build something around a server-chipset motherboard. Does VMware still blacklist Realtek NICs? That might be something to watch out for.
|
# ? Aug 5, 2017 06:12 |
|
You can inject drivers into your ESXi image as well.
|
# ? Aug 5, 2017 07:31 |
I installed ESXi on Lenovo Carbon black (laptop) and it worked just fine. I've also done home labs with random desktops I've built.
|
|
# ? Aug 5, 2017 17:31 |
|
VMware is passé. Just buy a few i3 NUCs. Most of the time the single-host stuff is useful to a point, but to be well-rounded you really need more than one host.
|
# ? Aug 5, 2017 21:47 |
|
|
When I was doing more lab stuff at home I had a virtualized ESXi cluster running on my single physical host.
|
# ? Aug 5, 2017 23:29 |