THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
I finally set up and tried out Docker recently and it's extremely cool. Instead of using an Ubuntu VM with ~1 GB of memory to run an nginx proxy with SSL, I'm running a container that's using ~30 MB! I'm having one weird issue I wanted to ask about here, though. Windows 10 Pro volume mounting with Docker seems to have a lot of issues based on the search results I see, but the problem I'm having seems weirdly specific.

I have the restart policy for my containers set to always, so my containers start up when Docker does, but any volumes mounted from the Windows host come up empty. If I restart a container after Docker has finished starting, the volume mount works correctly.

Background info: the host is Windows 10 Pro, file sharing on the host is enabled, the drives are shared within the Docker settings, and the test command "docker run --rm -v c:/Users:/data alpine ls /data" works as you would expect.
One other question: if I can't get host volume mounting working, the workaround seems pretty obvious, just don't mount volumes from my Windows host. What would be the best way to manage a volume for a container so that it won't get erased or overwritten by Docker?
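
On that last question, a named volume is the usual answer: Docker manages the backing storage itself, the data survives container removal, and it only goes away if you delete it explicitly. A minimal sketch, with made-up volume and container names:

code:
# create a volume managed by Docker, independent of any container
docker volume create proxy-certs

# mount it by name instead of by a Windows host path
docker run -d --restart always --name proxy -v proxy-certs:/etc/nginx/certs nginx

# removing the container leaves the data intact; only an explicit
# "docker volume rm proxy-certs" (or "docker volume prune") deletes it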

EssOEss
Oct 23, 2006
128-bit approved
Running Linux containers on Windows is a bit of a hack at best (it still uses a VM to host your containers, so it cannot really use all the Windows integration features). Stick to the same host and container operating system for the time being. Once Microsoft's support for Linux software on Windows progresses further in a couple of years, mixing and matching might become wise.

Potato Salad
Oct 23, 2014

nobody cares


Am I asking for trouble if I use VCSA 6.5 now, or has the mile-long list of known bugs settled out?

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Potato Salad posted:

Am I asking for trouble if I use VCSA 6.5 now, or has the mile-long list of known bugs settled out?

We got caught by a bug where the migration from 6.1 fails. VMware told us to wait until the next VCSA release this month.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Potato Salad posted:

Am I asking for trouble if I use VCSA 6.5 now, or has the mile-long list of known bugs settled out?

It's fine. It was basically fine on release.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

My only complaint after upgrading from 6.1 to 6.5 is that they did something to break the active memory stat, so every VM eventually reports 100% active memory. Support claims to have no idea that this is an issue or how to fix it, despite my seeing it in every 6.5 deployment I've looked at.

freeasinbeer
Mar 26, 2015

by Fluffdaddy

EssOEss posted:

Running Linux containers on Windows is a bit of a hack at best (it still uses a VM to host your containers, so it cannot really use all the Windows integration features). Stick to the same host and container operating system for the time being. Once Microsoft's support for Linux software on Windows progresses further in a couple of years, mixing and matching might become wise.

Just to clarify, this means Windows containers on Windows and Linux containers on Linux, not something insane like matching distros. Also, Docker on Mac is generally OK, because so many devs use it and it's well tested.

Also, if you're running Docker, don't use Red Hat. Use Ubuntu or CoreOS.

evol262
Nov 30, 2010
#!/usr/bin/perl

Punkbob posted:

Also, if you're running Docker, don't use Red Hat. Use Ubuntu or CoreOS.

For what possible reason? docker-latest is in EL distros.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
Is there a legitimate case in VMware for having a chain of five or six snapshots for a VM, all of which are over a year old?

It looks like it's for VDI, so I'm thinking maybe they keep different versions, but my understanding is that if you clone a machine with snapshots, you just get the consolidated form.

I'm trying to figure out how this could possibly make sense.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
evol262 posted:

For what possible reason? docker-latest is in EL distros.

The kernel fixes that are becoming more and more necessary for Docker are not, though. Red Hat Atomic aims to fix this, but the Docker-running world has standardized on Debian/Ubuntu or CoreOS.

And if you are running Docker in prod, you should be using Kubernetes.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dr. Arbitrary posted:

Is there a legitimate case in VMware for having a chain of five or six snapshots for a VM, all of which are over a year old?

Nope.

Potato Salad
Oct 23, 2014

nobody cares


If someone has those six different snapshots deployed in Horizon simultaneously, sure :suicide:

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin

Potato Salad posted:

If someone has those six different snapshots deployed in Horizon simultaneously, sure :suicide:

Gonna bet a nickel this is the case.

evol262
Nov 30, 2010
#!/usr/bin/perl

Punkbob posted:

The kernel fixes that are becoming more and more necessary for Docker are not, though. Red Hat Atomic aims to fix this, but the Docker-running world has standardized on Debian/Ubuntu or CoreOS.

And if you are running Docker in prod, you should be using Kubernetes.

Those get backported to RHEL, FYI. There were significant rebases in 7.3 and 7.4.

No, AUFS is still not supported. That's fine. The Docker team is getting better about not relying on arbitrary kernel versions to detect features (and instead detecting them the proper way), and the RHEL platform team is doing better about backporting those fixes from the 4.x kernels faster.

Atomic is a plain respin of RHEL+ostree. It doesn't have any Docker-specific fixes other than an out-of-the-box layout for Docker storage on LVM (which anyone can do).
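
For reference, that out-of-the-box layout is roughly what docker-storage-setup produces on any RHEL/CentOS 7 box. A sketch, assuming a spare disk at /dev/sdb (the device name is hypothetical):

code:
# /etc/sysconfig/docker-storage-setup
DEVS=/dev/sdb
VG=docker-vg

# then, before the docker daemon first starts, carve out the devicemapper thin pool:
docker-storage-setup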

If you're running a large Docker deployment in production, yes, you should use Kubernetes. Which runs on basically every distro without problems. OpenShift is an extremely large public PaaS which wraps Kubernetes on RHEL (or CentOS). It's a major growth sector for Red Hat, and a big part of the reason they're better about fixes.

Most shops don't want to run CoreOS as a 'standard' unless their entire workload is containerized (it probably isn't), because managing separate environments is obnoxious. This is the same reason why I wouldn't say Ubuntu. If you run Ubuntu everywhere else, great. Run Ubuntu. If you run CentOS or RHEL, there is no reason why you can't run Docker or Kubernetes on 7.3 or 7.4. It's more important for a lot of people to 'standardize' on what works for their business, which often means not having admins split knowledge between a bunch of different distros.

If you haven't used `docker-latest` (which went public in June or so), you should.
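
Switching an EL7 host over is a package swap; a sketch, assuming docker-latest is available in your subscribed repos:

code:
yum install -y docker-latest

# stop the stock daemon (if present) and run the newer-versioned one instead
systemctl stop docker && systemctl disable docker
systemctl enable docker-latest && systemctl start docker-latest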

evol262 fucked around with this message at 00:09 on Jul 28, 2017

Cidrick
Jun 10, 2001

Praise the siamese

Punkbob posted:

And if you are running Docker in prod, you should be using Kubernetes.

My team has been running Docker in prod on Mesos + Marathon for about a year and a half now, so there are definitely viable alternatives to Kubernetes if it's not your thing.
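
For anyone who hasn't seen it, a Marathon app is just JSON posted to its REST API. A minimal sketch, with a made-up Marathon endpoint:

code:
cat > nginx.json <<'EOF'
{
  "id": "/nginx",
  "cpus": 0.5,
  "mem": 128,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:1.13",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 80, "hostPort": 0 } ]
    }
  }
}
EOF

# Marathon schedules the two instances somewhere on the Mesos cluster
curl -X POST -H 'Content-Type: application/json' \
    -d @nginx.json http://marathon.example.com:8080/v2/apps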

freeasinbeer
Mar 26, 2015

by Fluffdaddy
I mean, lol if you think that someone running RHEL is running 7.3.

Putting my professional hat on: OpenShift is OK, but I'm still not convinced that it makes sense vs. mainline Kubernetes; if that's something you need to sell to higher-ups, then that's what you have to do. I'm also not a super big fan of RHEL in general, but that bias might grow out of an issue with RHEL users being super conservative. If you are running kubes, I think you also really need to step back and not manage the base OS too much. If you really need custom images, you need to build a repeatable process using Packer and trigger frequent CI builds for the base OS.

I've got a bunch of experience trying to manage what are basically dumb hosts via things like Chef and Ansible for Docker orchestration using Mesos, and the Packer+kubes method is just so much better.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Cidrick posted:

My team has been running Docker in prod on Mesos + Marathon for about a year and a half now, so there are definitely viable alternatives to Kubernetes if it's not your thing.

I am actually just leaving a job where I pushed Mesos really hard, and I have to admit kubes is a revelation. I don't know why Mesos doesn't have daemon sets; I mean, there is that framework that says it can do the same, but kubes has first-class support.


Edit: Kubes not docker

freeasinbeer fucked around with this message at 00:34 on Jul 28, 2017
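
For reference, the daemon sets in question: a Kubernetes DaemonSet runs one copy of a pod on every node in the cluster. A minimal sketch against a 2017-era cluster (API version per Kubernetes 1.7; the image choice is arbitrary):

code:
kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-collector
        image: fluentd:v0.14
        # one copy of this pod runs on every node in the cluster
EOF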

evol262
Nov 30, 2010
#!/usr/bin/perl

Punkbob posted:

I am actually just leaving a job where I pushed Mesos really hard, and I have to admit docker is a revelation. I don't know why Mesos doesn't have daemon sets; I mean, there is that framework that says it can do the same, but kubes has first-class support.

You can just run Docker on top of Mesos. Or Kubernetes on Mesos. Or Marathon. I mean, I really like Mesos (Spark and Chronos in particular), but it's really overblown unless you actually have it on every host as a 'datacenter operating system' or whatever their marketing schtick is.

Punkbob posted:

I mean, lol if you think that someone running RHEL is running 7.3.

I'm a developer at Red Hat. An awful lot of people are running 7.3. I can't (won't) give percentages, but the percentage of people (and important customers) who upgrade to a new release within a month of it going public is significant enough that getting support for RHV/oVirt and OpenStack on new releases is something we specifically target.

Note also that layered products (Atomic, for example) don't continue support for EUS Z-streams. Atomic has been 7.3 since 7.3 released, etc.

Punkbob posted:

Putting my professional hat on: OpenShift is OK, but I'm still not convinced that it makes sense vs. mainline Kubernetes; if that's something you need to sell to higher-ups, then that's what you have to do. I'm also not a super big fan of RHEL in general, but that bias might grow out of an issue with RHEL users being super conservative. If you are running kubes, I think you also really need to step back and not manage the base OS too much. If you really need custom images, you need to build a repeatable process using Packer and trigger frequent CI builds for the base OS.

OpenShift makes sense for customers because it's PaaS. That's about it. It's user-friendly Kubernetes with good tooling which handles all the obnoxious 'build your image from a git repo/source, test it, then deploy in stages' parts for you. Plus a reasonably good abstraction around autorouting pods (which Kubernetes does pretty well on its own, to be fair).

It's an add-on to mainline Kubernetes, not a competitor. The selling point is that developers really can manage their own environments with nothing more than a CNAME from the devops/ops/whatever team, which isn't true for base Kubernetes. And they don't need to know anything about Kubernetes or containers. Just 'I want to deploy a node/java/ruby/python/whatever app from this git repo' and it does everything for them.

If you're running Kubernetes, you shouldn't manage the OS too much. You still need to install it. That's the thing. For customers who have existing standardizations or hardening scripts which they require at a corporate level, managing preseed+kickstart+cloud-init (for Ubuntu+RHEL+CoreOS) is obnoxious. Standardize on one thing and use it everywhere.

Punkbob posted:

I've got a bunch of experience trying to manage what are basically dumb hosts via things like Chef and Ansible for Docker orchestration using Mesos, and the Packer+kubes method is just so much better.

I'm not arguing that at all, though there's no reason you can't just use the official Kubernetes Ansible playbooks to deploy. The thing is that a huge number of people are just 'dipping their toes' into containerization, because a lot of their ancient monolithic apps don't fit the use case. By the time they rewrite them, containers will no longer be cool, and they'll chase the next hype train. See 'private cloud'.

Eventually, for people who need/want private/hybrid cloud infrastructures, the endgame is probably a proper greenfield deployment tool which makes it as 'hands-off' for customers as AWS is. Mirantis does well at this. Containers will have a place in that, as will traditional virt and 'private cloud', for when you need something more complex than a container but more anonymous than a pet.

Cidrick
Jun 10, 2001

Praise the siamese

Punkbob posted:

I am actually just leaving a job where I pushed Mesos really hard, and I have to admit kubes is a revelation. I don't know why Mesos doesn't have daemon sets; I mean, there is that framework that says it can do the same, but kubes has first-class support.


Edit: Kubes not docker

Can you elaborate a little on Kubernetes being a revelation? I haven't played with Kubernetes at all, frankly, because I've been so invested in learning Mesos. I'll admit it's not perfect, but I'm averse to throwing that expertise away in favor of the new hotness without understanding the differences.

evol262
Nov 30, 2010
#!/usr/bin/perl

Cidrick posted:

Can you elaborate a little on Kubernetes being a revelation? I haven't played with Kubernetes at all, frankly, because I've been so invested in learning Mesos. I'll admit it's not perfect, but I'm averse to throwing that expertise away in favor of the new hotness without understanding the differences.

Mesos is a "datacenter OS". It wants to own all the hosts, pool their resources, and schedule things (Spark, batch jobs, containers, etc.) on top of them. Kubernetes is available as a layer. Mesos tries to keep everything at maximum usage if it possibly can, so all of your layers get CPU/mem time without you managing or worrying about it.

Kubernetes is less ambitious.

Kubernetes is etcd+calico+skydns and some tooling which lets you say "this group of containers (docker by default) comprises a discrete application -- make sure it stays up, scales according to the rules I set, put an HA proxy in front of it at defined entrypoints, attach persistent storage if it's needed, and give me an API."

Kubernetes defines "pods" as the unit, which are logical groups of container(s). Broadly, think of it like AWS compute+CloudFormation or OpenStack+Heat for containers. And it's multi-tenant.
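
As a concrete taste of that description, the basic kubectl loop (commands current as of the Kubernetes 1.7 era; names and counts are invented):

code:
# run a container group as a deployment; kubernetes restarts it if it dies
kubectl run webapp --image=nginx:1.13 --replicas=3

# scale according to the rules you set
kubectl scale deployment webapp --replicas=5

# put a load-balanced entrypoint in front of it
kubectl expose deployment webapp --port=80 --type=NodePort

# and it's all an API underneath
kubectl get deployment webapp -o json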

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Cidrick posted:

Can you elaborate a little on Kubernetes being a revelation? I haven't played with Kubernetes at all, frankly, because I've been so invested in learning Mesos. I'll admit it's not perfect, but I'm averse to throwing that expertise away in favor of the new hotness without understanding the differences.

I could go into a bunch of detail, but it boils down to Mesos being a great distributed task engine and Kubernetes being a great Docker orchestration tool. And therein lies the rub: Kubernetes is just so far ahead, but it's super opinionated Docker management. Mesos doesn't care, but you have to build everything yourself.

Also, DC/OS rubs me the wrong way in some respects and is actively abandoning the open source side of the Mesos ecosystem to make a unified platform. If that's its goal, I'll use Kubernetes.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
evol262 posted:

Mesos is a "datacenter OS". It wants to own all the hosts, pool their resources, and schedule things (Spark, batch jobs, containers, etc.) on top of them. Kubernetes is available as a layer. Mesos tries to keep everything at maximum usage if it possibly can, so all of your layers get CPU/mem time without you managing or worrying about it.

Kubernetes is less ambitious.

Kubernetes is etcd+calico+skydns and some tooling which lets you say "this group of containers (docker by default) comprises a discrete application -- make sure it stays up, scales according to the rules I set, put an HA proxy in front of it at defined entrypoints, attach persistent storage if it's needed, and give me an API."

Kubernetes defines "pods" as the unit, which are logical groups of container(s). Broadly, think of it like AWS compute+CloudFormation or OpenStack+Heat for containers. And it's multi-tenant.

I agree with the broad strokes of this, but Mesos itself isn't aiming to do that. The ecosystem might be, and Mesos might even have some bindings for some of it, but in the Mesos world you need to build by hand so much of what I don't have to worry about with kubes. All of the orchestration is on a single plane, not bits and bobs spread everywhere like in Mesos. In some respects that freedom is awesome, because you can customize stuff for your use case, but it's a double-edged sword: you have to maintain a bunch of different tools on hosts to make sure things run.

Pods in and of themselves are pretty awesome because they make it super easy to build sidecar containers that do just one thing and can be shared across many pods. Marathon is getting pods in its latest releases, but it's so much simpler to use kubes.
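
A sketch of that sidecar pattern: two containers in one pod sharing an emptyDir volume, one serving traffic and one shipping its logs. All names here are invented:

code:
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-shipper
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.13
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  # the sidecar: does one thing, and the pattern is reusable in any pod
  - name: log-shipper
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
EOF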

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
There's also Nomad if you don't need all the complexity of Kubernetes or Marathon. The integration with the rest of the HashiCorp stack is pretty nice too. Kubernetes is emerging as the de facto standard, though, so you have to weigh that simplicity against how likely you are to just hire people who are already good with Kubernetes in the future.
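
Nomad's getting-started loop is correspondingly short; "nomad init" writes an example job spec you can run as-is:

code:
nomad init               # writes an example job file, example.nomad
nomad run example.nomad  # schedule it on the cluster (or a local dev agent)
nomad status example     # per-allocation status for the job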

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Dr. Arbitrary posted:

Is there a legitimate case in VMware for having a chain of five or six snapshots for a VM, all of which are over a year old?

It looks like it's for VDI, so I'm thinking maybe they keep different versions, but my understanding is that if you clone a machine with snapshots, you just get the consolidated form.

I'm trying to figure out how this could possibly make sense.

I know I'm late on this, but this is clearly from someone who thinks snapshots are backups. Since they are sorely mistaken, good luck consolidating those fuckers in any reasonable amount of time.
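
If you inherit one of those chains, the ESXi shell will at least show you the damage before you commit to a long consolidation. A sketch (the VM ID 42 is made up; look yours up first):

code:
vim-cmd vmsvc/getallvms              # find the VM's numeric ID
vim-cmd vmsvc/snapshot.get 42        # dump the snapshot tree for VM 42
vim-cmd vmsvc/snapshot.removeall 42  # delete all snapshots, consolidating the chain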

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

mayodreams posted:

I know I'm late on this, but this is clearly from someone who thinks snapshots are backups. Since they are sorely mistaken, good luck consolidating those fuckers in any reasonable amount of time.

We keep 10 or so snapshots of our VDI base images. It has been useful for testing in the past, and the performance impact is nil since they are consolidated on the linked clones.

cr0y
Mar 24, 2005



1) Motherboard recommendations for a home ESXi host? Also, is the onboard RAID controller in whatever the answer to #1 is adequate, or should I be looking at additional pci-x controllers?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

What year is it?

Edit:

Look into ASRock Rack if you actually want a server board, then eBay for a used supported CPU.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Moey posted:

What year is it?

I don't think anyone ever really knew what PCI-Extended was.

cr0y
Mar 24, 2005



Moey posted:

What year is it?

Edit:

Look into ASRock Rack if you actually want a server board, then eBay for a used supported CPU.

You know what I meant, guys. (PCIe)

I don't really need a "server" board, do I? I'm not planning on any crazy workloads, just want to be as ESX compatible as possible.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I'm currently running a desktop mobo in my home server, but will be going Micro-ATX 2011-3 ASRock Rack and a used CPU whenever I decide to upgrade it.

IPMI would be nice.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

cr0y posted:

You know what I meant, guys. (PCIe)

I don't really need a "server" board, do I? Im not planning on any crazy workloads, just want to be as esx conpatible as possible.
"as esx conpatible as possible" would be a server board, yes.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Unless you want to deal with injecting drivers to get your NIC/storage controller working (or add in cards for that stuff), spend the little extra on a server board.

If it's super minimal use/single host, you could always grab a NUC.

cr0y
Mar 24, 2005



anthonypants posted:

"as esx conpatible as possible" would be a server board, yes.

...this is a really good point.

Kazinsal
Dec 13, 2011
ESXi 5.5 and newer will install and run on just about anything with an Intel CPU that supports VT-x and an Intel NIC.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Depends on what kind of CPU you want, but you might find it more cost-effective to get a cheap home-office server like a PowerEdge T30 or the Lenovo equivalent (TS140-150?) than to actually build something around a server-chipset motherboard.

I really doubt you're going to run into compatibility issues with whatever random desktop you want to run on, especially if you're OK popping in a NIC if the one you have isn't supported.

Methanar
Sep 26, 2013

by the sex ghost

Eletriarnation posted:

Depends on what kind of CPU you want, but you might find it more cost-effective to get a cheap home-office server like a PowerEdge T30 than to actually build something around a server-chipset motherboard.

I really doubt you're going to run into compatibility issues with whatever random desktop you want to run on, especially if you're OK popping in a NIC if the one you have isn't supported.

Does VMware still blacklist Realtek NICs? That might be something to watch out for.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
You can inject drivers into your ESXi image as well.
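
Building a driver into the install ISO usually means the PowerCLI Image Builder tooling, but on a host that's already up, a NIC driver VIB can be added from the ESXi shell. A sketch; the datastore path and VIB name are hypothetical, and community VIBs are typically unsigned:

code:
esxcli software vib install -v /vmfs/volumes/datastore1/net55-r8168.vib \
    --no-sig-check   # community VIBs usually aren't signed
reboot               # NIC drivers take effect after a reboot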

milk milk lemonade
Jul 29, 2016
I installed ESXi on a Lenovo Carbon laptop and it worked just fine. I've also done home labs with random desktops I've built.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
VMware is passé.

Just buy a few i3 NUCs. Most of the time the single-host stuff is useful to a point, but to be well-rounded you really need more than one host.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
When I was doing more lab stuff at home I had a virtualized ESXi cluster running on my single physical host.
