Cidrick
Jun 10, 2001

Praise the siamese

TeMpLaR posted:

Anyone ever heard of a place that uses NFS for all VM guest OSes but uses software iSCSI inside the guest for all additional drives?

I've seen that done for Exchange before, but not for everything. Any ideas why any place would do this and not convert everything that isn't a cluster over to NFS?

I am finding lots of examples of how to stop doing it, saying that it is antiquated. Maybe this is just a status quo kind of thing.

I recently started in a shop and inherited this kind of mess. Most of our Microsoft infrastructure is running on NFS-backed datastores, but our company DFS infrastructure for home drives, shared drives, and whatnot is all connected to the same NetApp via iSCSI. I believe it was done for control reasons - really, political reasons. The team that owned the Microsoft stuff at the time wanted to control their own vfiler, so they were given their own aggr and vfiler on the NetApp along with the keys to do whatever they needed with it, and the storage team got to stay hands-off on supporting that slice of the NetApp.

There's not really any technical reason why you'd want to do that, though. Not that I can think of, anyway.

Cidrick
Jun 10, 2001

Praise the siamese

minato posted:

evol262 had a really good overview of OpenStack somewhere on one of these threads, but search is failing me.

Found it.

I needed to look it up, too. I started digging through the RDO docs and using packstack to play around with an OpenStack installation, but there's an overwhelming amount of OpenStack information out there and very little of it is concise. My shop uses CloudStack at the moment, and it's fine, but we're looking at trying out OpenStack, mostly because of vendor and community support (and future support).

Cidrick
Jun 10, 2001

Praise the siamese

evol262 posted:

open-vm-tools (depending on distro) is basically all the LGPL-ed parts of vmware tools, and large parts of vmxnet and other bits are mainline.

Holy crap. How did I not know this existed until now? Thanks.

Cidrick
Jun 10, 2001

Praise the siamese
Are there any good design docs for setting up a distributed virtualization (oVirt + KVM or otherwise) cluster using all local disk, with shared storage running on GlusterFS bricks? I'd kind of like to try it out in our lab environment as a POC since we have a bunch of old Hadoop machines lying around with fat local SATA drives that I would love to start stacking VMs on. I have zero experience with Gluster but I'd like to start playing with it.

I'm not so much looking for a step-by-step guide on what to do as a "here's how you should lay things out and here's how you should scale it" type of write-up.

Cidrick
Jun 10, 2001

Praise the siamese

evol262 posted:

oVirt is always KVM. For better or worse, it's not something you can just stick on top of an existing environment, though. It expects to be on a dedicated virt setup and to own all the relevant components. vdsm (essentially the glue between libvirt/network/storage and the web ui) in particular doesn't play well. Migrating from a plain libvirt environment to ovirt involves standing up one host and virt-v2v'ing machines. If you're starting from scratch, though...

I don't really have any plans to migrate from anything, so this would be from scratch, and the oVirt piece is just for my POC environment. My long-term goal is to design a compute and storage platform for our internal cloud (which is already running on CloudStack + KVM, all on local disk on 1U pizza boxes) and move it onto a shared storage pool sitting on top of Gluster. That would give us the flexibility of shared storage so we can juggle machines around the environment without throwing tons of money at Hitachi for another array.

Don't get me wrong - all of our Hitachi arrays have been rock solid, but they're very expensive to get going and to manage, and they're difficult to scale. I'd much rather buy more cheap servers and shove them into the cluster, since we can get a new server in a couple of days, whereas adding shelves to a Hitachi array (or Nimble, or NetApp, or whatever you end up using) takes weeks or months depending on the vendor and how much of a pain in the rear end the procurement department is feeling like being that week. Yes, I realize that commodity hardware with a bunch of local disk is not going to be as rock solid as a dedicated storage array, but my hope is that Gluster is robust enough nowadays to gracefully handle hardware failures in the environment with all its self-healing features.

evol262 posted:

What kind of use case are you going for? Ceph RBD isn't supported by oVirt, but it's significantly better at some workloads as long as you're ok doing a little extra work. It's especially good at tiering storage and letting you configure fast/slow pools, or splitting up pools of disks on a single chassis, which gluster is frankly poo poo at

If you've got a bunch of identical machines with no other constraints, though, gluster's pretty great. You basically want to set up the disks for optimal local performance (hardware RAID or mdraid or whatever), mount them somewhere, and use that as a volume. Change the volume ownership to the kvm or qemu user.

This is pretty much the use case, yes. Our local disk footprint for our app tier is both minimal and ephemeral, which is how we get away with running everything on local disk without any shared storage. The only thing our apps really do is log to disk, which we're shipping off to a logstash environment anyway, so performance isn't a huge concern. Or at least, not at the forefront of our requirements. The VMs are all qcow2s based off CentOS images that I maintain, so the image deduplication is already handled for us in that regard.

I hadn't looked at Ceph though. This is still just a twinkle in my eye at the moment so I'm trying to figure out what's out there.
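
Just so I'm sure I follow the bring-up you're describing, here's a rough sketch of how I'd script it. The hostnames, brick path, and volume name are all made up, and the ownership bit assumes oVirt's usual vdsm/kvm uid and gid of 36 - I haven't actually run this yet:

code:
#!/usr/bin/env python3
"""Rough sketch of the Gluster bring-up described above.

Assumes the local disks are already RAIDed, formatted, and mounted at
/bricks/brick1 on every node. Hostnames, brick path, and volume name
are hypothetical."""
import subprocess

NODES = ["gluster01", "gluster02", "gluster03"]  # hypothetical hosts
BRICK = "/bricks/brick1"                         # pre-mounted local RAID volume
VOLUME = "vmstore"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Join the nodes into one trusted pool (run this from gluster01).
for node in NODES[1:]:
    run(["gluster", "peer", "probe", node])

# One replica-3 volume with a brick on each node.
bricks = [f"{node}:{BRICK}" for node in NODES]
run(["gluster", "volume", "create", VOLUME, "replica", "3"] + bricks)

# "Change the volume ownership to the kvm or qemu user" - on oVirt nodes
# vdsm/kvm are uid/gid 36, so set the gluster ownership options to match.
run(["gluster", "volume", "set", VOLUME, "storage.owner-uid", "36"])
run(["gluster", "volume", "set", VOLUME, "storage.owner-gid", "36"])

run(["gluster", "volume", "start", VOLUME])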

evol262 posted:

I'd also say that if you have the budget, I'd probably just use VMware + vSAN. The gluster support is ok, and there are a ton of integration pieces slated for oVirt 3.6 which will make it a lot nicer, but that caveat goes everywhere.

VMware is off the table completely for us. As much as I've loved working with VMware over the years, my company's relationship with them is basically irreparably poisoned, so I'm looking at open source alternatives. We had about a hundred blades running ESXi in a nicely partitioned farm, all backed by a couple of NetApp heads, and it worked fairly well, but now I'm faced with moving everything onto CloudStack, so I'm trying to come up with a platform that scales nicely without a single pair of NetApps being the choke point for all storage in the environment.

Cidrick
Jun 10, 2001

Praise the siamese

evol262 posted:

Openstack!

Seriously. I work on both products, and I'd recommend oVirt if it fit what it sounds like you want (and if you want to use traditional virt, feel free to contradict me), but...

Heh, fair enough. Honestly, I'm less concerned about migrating from CloudStack to OpenStack, because most everyone I work with in operations at my company is on board with doing that; my concern is coming up with a scaled storage model that will work with both CloudStack AND OpenStack while we transition. I mostly wanted to dick with oVirt + Gluster just to learn the ropes of a distributed storage environment without having to go whole-hog with a full infrastructure stack, but I suppose I had better just dive in, because you're absolutely right, there's not really a reason to play with oVirt at this point.

How stable and mature is Ceph? Does it do all the replication and self-healing of distributed storage that Gluster does? I admittedly know very little about it.

Cidrick
Jun 10, 2001

Praise the siamese
Are you running virt-sysprep against the template when you're done with the image?

Cidrick
Jun 10, 2001

Praise the siamese

Martytoof posted:

I'm gonna feel real stupid if that's listed in VMware's guide because I swear I read that thing head to toe. I haven't done either and will try that today, thanks.

Derp, I missed that you said it was for VMware. It probably won't fix the problem of the customizations not working, but it WILL at least simplify making your base template generic by doing all that udev rule removal stuff for you.
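
For reference, the way I run it is dead simple - something like the sketch below against the template's disk image once it's powered off. The path is made up, and virt-sysprep --list-operations will show exactly what your version strips out (udev rules, SSH host keys, machine-id, logs, and so on):

code:
"""Minimal sketch: run virt-sysprep against a powered-off template image.
The image path is hypothetical; virt-sysprep works on anything libguestfs
can open (qcow2, raw, vmdk, ...)."""
import subprocess

TEMPLATE = "/templates/centos7-template.vmdk"  # hypothetical path

subprocess.run(["virt-sysprep", "-a", TEMPLATE], check=True)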

Cidrick
Jun 10, 2001

Praise the siamese

stubblyhead posted:

We don't have these in our lab so I can't confirm, but esxtop might provide you with that info.

Off the top of my head I don't know about iSCSI, but with FC, esxtop (in the storage device tab) will display a KAVG value that shows how much time disk operations are spending in the hypervisor kernel layer.

Cidrick
Jun 10, 2001

Praise the siamese

Wicaeed posted:

Is LACP usage still only officially supported with the vSphere Enterprise license?

It's been a while since I fudged around with it, but I recall that the free (and lower-licensed) tiers never really worked "right" or something.

Or maybe I'm just misremembering.

Real 802.3ad LACP link aggregation only works if you're using distributed vSwitches, which are only available on Enterprise Plus.

Most people don't need true LACP link aggregation anyway, and the standard vSwitch active/active NIC teaming is simpler to configure. I'm not a networking expert, but I think the main benefit LACP gives you is better load balancing on the trunk, since it's considered one logical "link" and you let the magic of the protocol balance traffic across the physical links for you.

Cidrick
Jun 10, 2001

Praise the siamese
We're using Mesos + Marathon, aka "DC/OS by hand". The OS powering the Mesos slaves/agents is currently CentOS on-premises, but we're starting to deploy in AWS and using CoreOS for the slaves/agents there instead, since it's trivial to spin up and stop a bunch of Mesos slaves using cloud-init. The slaves, which are what actually run Docker, only have an eth0 and a docker0 interface; docker0 NATs any traffic it gets to the containers based on the random ports that Marathon assigns to them.

We use Bamboo with haproxy under the covers, using a pre-baked CentOS 7 AMI that does some light self-discovery to figure out how haproxy should route traffic. We put a load balancer (or an ELB in AWS) in front of multiple haproxy boxes, and then point a wildcard DNS record at the load balancer. Bamboo generates the haproxy config, and haproxy matches the Host header in the request to the application name as Marathon knows it, then load balances the traffic to your containers.

Basically, if you have a containerized app called shitbox and you tell Marathon to deploy three of them, three containers get spun up on Mesos slaves on random ports and Bamboo generates an haproxy listener and backend. Then, when you make an HTTP call to "shitbox.us-west-2.some.tld", haproxy matches that host header and load balances across the three containers on whatever random ports they happen to be listening on.
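
To make that concrete, the Marathon half of it is just an app definition POSTed to its REST API. Roughly something like this - the Marathon URL, Docker image, and container port are all made up, and hostPort 0 is what gets you the random host ports that Bamboo/haproxy then route to:

code:
"""Rough sketch: deploy three instances of a containerized app through
Marathon's /v2/apps API. Endpoint, image, and ports are hypothetical."""
import requests

MARATHON = "http://marathon.internal:8080"  # hypothetical endpoint

app = {
    "id": "/shitbox",
    "instances": 3,
    "cpus": 0.25,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "registry.internal/shitbox:latest",  # hypothetical image
            "network": "BRIDGE",
            "portMappings": [
                # hostPort 0 tells Mesos/Marathon to pick a random host port
                {"containerPort": 8080, "hostPort": 0, "protocol": "tcp"},
            ],
        },
    },
}

resp = requests.post(f"{MARATHON}/v2/apps", json=app, timeout=10)
resp.raise_for_status()
print("deployment submitted:", resp.status_code)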

It's pretty tidy, and using Bamboo's haproxy templating you can do some pretty cool stuff based on host headers or uri matching or by extracting environment variables from Marathon. I'll admit I haven't played with Kubernetes at all, but someone else on my team did about 6 months ago and decided it wasn't as good as Mesos + Marathon, whatever that means.

We've been running Docker on Mesos (with CentOS) for a little over a year now with no major issues, aside from some strange docker bugs here and there, and some learning about the perils of using devicemapper-loopback as your docker storage driver. We use OverlayFS now :)
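
If anyone else trips over the devicemapper-loopback thing, the fix on our end boiled down to pinning the storage driver in /etc/docker/daemon.json. A rough sketch of that, assuming a kernel with overlay support (and check whether your Docker version wants "overlay" or "overlay2"):

code:
"""Sketch: pin Docker's storage driver via /etc/docker/daemon.json.
Assumes the kernel has overlay support; needs root, and the docker
daemon has to be restarted afterwards for it to take effect."""
import json

config = {"storage-driver": "overlay2"}  # or "overlay" on older Docker versions

with open("/etc/docker/daemon.json", "w") as f:
    json.dump(config, f, indent=2)

# After restarting dockerd, `docker info` should report the new storage driver.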

Cidrick
Jun 10, 2001

Praise the siamese

Mr Shiny Pants posted:

This is cool, I was wondering how service discovery would work.

We put a Docker container with a Consul agent on each Mesos slave; the agents phone home to join the Consul cluster. We run the Consul container in host networking mode so that any other container on the host can query the agent via localhost:8500.
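
So from inside any container on the box, service discovery is just a call to the local agent's HTTP API. Something like this (the service name is made up, and the passing filter only returns healthy instances):

code:
"""Sketch: look up a service through the local Consul agent running in
host networking mode on the same box. The service name is hypothetical."""
import requests

CONSUL = "http://localhost:8500"  # host-mode agent on every Mesos slave

resp = requests.get(
    f"{CONSUL}/v1/health/service/shitbox",
    params={"passing": "true"},  # only instances with passing health checks
    timeout=5,
)
resp.raise_for_status()

for entry in resp.json():
    # Service.Address can be empty, in which case fall back to the node address.
    addr = entry["Service"]["Address"] or entry["Node"]["Address"]
    port = entry["Service"]["Port"]
    print(f"shitbox instance at {addr}:{port}")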

Cidrick
Jun 10, 2001

Praise the siamese

Beefstorm posted:

So, docker container. It's basically an OS, an application, and only the bits required to make that application purr; all inside a VM.

Is that a very broken way to describe it?

EDIT: I noticed something.

How is it not a VM? Or is that just one of those things people don't want you to call a 'VM' when it is actually a 'Container'?

I guess I'm just missing fundamental differences.

A virtual machine has its own completely virtualized hardware and BIOS, along with all of the bloat that it brings.

A container shares the host OS's kernel and all of the resources that come with it - CPU, memory, disk, network, and so on. It does come with its own binaries and libraries, which are most of what make up a traditional operating system, but it's not quite what I would describe as a "full" operating system, since you don't have to sweat things like configuring interfaces or storage. The containerization platform (Docker, in this case) abstracts all of that out for you.

The most common metaphor I see to describe the differences is to imagine your host hardware as a high-rise building:

- in a traditional bare metal world, your building looks like a hostel, with everyone sharing the space and resources and falling on top of one another
- in a virtualized world, your building looks like a condo, where every unit is self-sufficient with its own circuit breakers, water heater, furnace, and so on
- in a container world, your building is more like an apartment building, where every unit shares the same basic plumbing, power, trash services, and so on
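
One concrete way to see the "shares the host kernel" part: ask a container for its kernel version and compare it to the host's. There's only one kernel in the picture, so they match. A quick sketch, assuming Docker is installed and can pull the alpine image:

code:
"""Tiny demonstration that a container runs on the host's kernel.
Assumes Docker is installed locally and can pull the alpine image."""
import platform
import subprocess

host_kernel = platform.release()

container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print("host kernel:     ", host_kernel)
print("container kernel:", container_kernel)  # same string - no guest kernel, no BIOS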

The folks at Red Hat love their coloring books, so here's their take on the metaphor in PDF form, along with some basic pros and cons of each approach.

Edit: Dan Walsh's take likens the virtual machine approach to more of a duplex than a condo building, but you get the idea.

Cidrick fucked around with this message at 21:24 on Apr 26, 2017

Cidrick
Jun 10, 2001

Praise the siamese

Moey posted:

The coloring book thing is great.

I have copies of both this and the SELinux coloring book in my desk at work.

To date, nary a speck of color on any of the pages, though :(

Cidrick
Jun 10, 2001

Praise the siamese

Punkbob posted:

And if you are running docker in prod you should be using kubernetes.

My team has been running Docker in prod on Mesos + Marathon for about a year and a half now, so there are definitely viable alternatives to Kubernetes if it's not your thing.

Cidrick
Jun 10, 2001

Praise the siamese

Punkbob posted:

I am actually just leaving a job where I pushed Mesos really hard, and I have to admit kubes is a revelation. I don't know why Mesos doesn't have daemon sets; I mean, there is that framework that says it can do the same, but kubes has first-class support.

Edit: Kubes, not Docker

Can you elaborate a little on Kubernetes being a revelation? I haven't played with Kubernetes at all, frankly, because I've been so invested in learning Mesos. I'll admit it's not perfect, but I'm averse to throwing that expertise away in favor of the new hotness without understanding the differences.
