KS
Jun 10, 2003
Outrageous Lumpwad

Internet Explorer posted:

Yes, that is what I was trying to say earlier. Depending on the vendor, the verbiage can change. You can hard-set it in UEFI/BIOS or you can set UEFI/BIOS to allow the OS to manage it, which ESXi supports.


"Supports"

I've seen so many more PSODs on HP ESXi servers set to OS control that I don't even mess with it any more. Just static high performance in the BIOS.
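
For what it's worth, you can sanity-check which policy the host itself is applying from the ESXi shell. I'm going from memory on the 5.x advanced option path, so double-check it on your build:

code:
# Show the current CPU power policy (ESXi 5.x advanced option)
esxcli system settings advanced list -o /Power/CpuPolicy

# Force ESXi's own policy to High Performance ("static"), which is
# roughly what the static BIOS setting achieves from the other side
esxcli system settings advanced set -o /Power/CpuPolicy -s "static"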


Kerpal
Jul 20, 2003

Well that's weird.

evol262 posted:

CPU cstates shouldn't require messing with EFI unless you're completely disabling them. I'd be utterly amazed if there weren't a tunable for it somewhere in vmkernel

I would've expected this as well when modifying it within vSphere client. However, upon checking the BIOS/EFI, I did note that it was on System DBPM for the CPU. I'm assuming this is how Dell shipped it. There is also an OS DBPM option, which probably explains why changing the profile in ESXi made no difference. I imagine if you're running a large datacenter with huge clusters, fine tuning the configuration would be vital. This is one server running in an office building so if it works, good enough.

Moey posted:

I thought it could be hard-set in the BIOS, or turned to "guest OS control"?

This is what Dell calls OS DBPM, I believe.

Richard Noggin
Jun 6, 2005
Redneck By Default
Oh great. The VMware/e1000 bug that's been around for over a year now, and was supposedly fixed in 5.5 U2? Not so much. We patched to 5.5 U2 a couple of months ago because of this, and we just had another host crash. They're telling us to go to P03. Adding to the confusion, the original KB article hasn't been updated, yet it's the first result of a Google search for "VMware e1000 psod".

I wish I was still on 5.1

Kachunkachunk
Jun 6, 2011

KS posted:

"Supports"

I've seen so many more PSODs on HP ESXi servers set to OS control that I don't even mess with it any more. Just static high performance in the BIOS.
You were probably looking at inexplicable CPU timeout PSODs, and/or poor VM performance at times, right? http://kb.vmware.com/kb/1018206

HP boxes since the Gen6 era have been notorious for lengthy and frequent SMI activity when ESXi's power-saving settings are set to anything other than High Performance (i.e., disabling ESXi's power control). Even then, HP officially recommends using Static High Performance and disabling all power-saving functionality anyway. I'm not totally convinced it's HP's problem, but nor do I think it's necessarily VMware's. Seems like a disagreement on the approach.
I haven't seen any Dells do it, but they're in the KB as well.

Edit: Sweet, new avatar for newbies!

Internet Explorer
Jun 1, 2005

The whole thing has been a problem for such a long time, I can only imagine it's Intel/AMD/Dell/HP/VMware/whoever-the-fuck-else not being able to agree on anything. Shit never seems to work right until you set it in BIOS/UEFI.

quicksand
Nov 21, 2002

A woman is only a woman, but a good cigar is a smoke.

Richard Noggin posted:

Oh great. The VMware/e1000 bug that's been around for over a year now, and was supposedly fixed in 5.5 U2? Not so much. We patched to 5.5 U2 a couple of months ago because of this, and we just had another host crash. They're telling us to go to P03. Adding to the confusion, the original KB article hasn't been updated, yet it's the first result of a Google search for "VMware e1000 psod".

I wish I was still on 5.1

I've had to update 7 hosts at 4 clients to 5.5u3 to fix this, and so far it has fixed every single one.

Fingers crossed it stays unfucked!

evol262
Nov 30, 2010
#!/usr/bin/perl

Internet Explorer posted:

The whole thing has been a problem for such a long time, I can only imagine it's Intel/AMD/Dell/HP/VMware/whoever-the-fuck-else not being able to agree on anything. Shit never seems to work right until you set it in BIOS/UEFI.

There's a pretty unified interface for this. Maybe VMware does it badly? It's not "not being able to agree", since it works fine in Linux and BSD

Kerpal posted:

I would've expected this as well when modifying it within vSphere client. However, upon checking the BIOS/EFI, I did note that it was on System DBPM for the CPU. I'm assuming this is how Dell shipped it. There is also an OS DBPM option, which probably explains why changing the profile in ESXi made no difference. I imagine if you're running a large datacenter with huge clusters, fine tuning the configuration would be vital. This is one server running in an office building so if it works, good enough.


This is what Dell calls OS DBPM, I believe.
That may just be Dell being Dell. I don't even know why they'd have an option to ignore OS scaling unless it was to hard-lock it to some value and make admins set that. Defaulting to it is dumb.

Wicaeed
Feb 8, 2005
For anyone using Veeam Free Edition, is there any way to run backups on an automated schedule?

I understand completely that this is part of the paid suite, but is there a workaround that can be used in the interim to run them on a schedule?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Wicaeed posted:

For anyone using Veeam Free Edition, is there any way to run backups on an automated schedule?

I understand completely that this is part of the paid suite, but is there a workaround that can be used in the interim to run them on a schedule?

If Veeam has an API or something, you could script a cron job to kick this off. Or if Veeam just runs on a Windows machine and has any sort of CLI, you could use a scheduled task.
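
As a sketch of the scheduled-task route, assuming Veeam exposed any kind of command-line entry point (the wrapper script below is made up; it would contain whatever command Veeam actually ships, if any):

code:
rem Create a nightly task that runs a hypothetical backup wrapper at 22:00
schtasks /Create /SC DAILY /ST 22:00 /TN "Veeam Nightly" /TR "C:\Scripts\veeam-nightly.cmd"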

Wicaeed
Feb 8, 2005

1000101 posted:

If Veeam has an API or something, you could script a cron job to kick this off. Or if Veeam just runs on a Windows machine and has any sort of CLI, you could use a scheduled task.

Nope, looks like that is all part of the Veeam paid edition. Oh well.

Separate topic, but what is the going rate for vRealize Orchestrator? How is it licensed?

In the past month or two I've had to set up some test environments for new products coming down the pipeline, and it's a very labor-intensive process for me (installing the OS, configuring settings in VMware: networking, storage, etc.).

It looks like most of the VMware stuff could potentially be automated with Orchestrator, and maybe some of the OS configuration (IPs, networking, and so on) as well.

parid
Mar 18, 2004

Wicaeed posted:

Separate topic, but what is the going rate for vRealize Orchestrator? How is it licensed?

Pretty sure the license is included with vCenter. How handy are you with JavaScript?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

parid posted:

Pretty sure the license is included with vCenter. How handy are you with JavaScript?

It is indeed included with your vCenter license. Just need to go turn it on and set it up.

Wicaeed
Feb 8, 2005

parid posted:

Pretty sure the license is included with vCenter. How handy are you with JavaScript?

:monocle:

So it is!

And JavaScript? Never touched it :ohdear:

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Wicaeed posted:

:monocle:

So it is!

And JavaScript? Never touched it :ohdear:

There's a fair amount you can cover in vCO without resorting to JavaScript, and plenty of samples out there to get you through the stuff you do need. Plenty of info here: http://www.vcoteam.info/

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof
I'm being forced headlong into this thread as my company has finally begun to implement "The Cloud"
The decision has come down that we will be using OpenStack managed by Foreman.
I've been told that OpenStack makes everything highly available and redundant, and none of our VMs will ever go down. We can add nodes at will and scale VMs vertically and horizontally as needed.
After about 10 minutes of research I know this is not the case, and I'm afraid that our CIO has no idea what he's doing.

I've had some limited experience with running VMware on a single computer, so I understand how you can divide up a single piece of metal into multiple servers, but I am struggling to get a grasp on IaaS.
I've read through a lot of this thread, including this post and beyond. I understand that there are different services in OpenStack: Nova, Cinder, Horizon, and so on. However, I do not understand how they work.

For example: let's say I have a bunch of VMs running and have maxed out my resources. What happens when one of my nodes (a physical server) dies? Is all of that computing power distributed equally between all of the metal, or does OpenStack just choose which server to run the VM on based on the number of cores?

I am but a lowly helpdesk monkey and barely have a grasp on basic networking. I have no idea where to begin learning this. Is there a resource that can help me get a grasp on this ASAP?

Internet Explorer
Jun 1, 2005

Good luck with that.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to a pair of really nice gaudy shoes?

Someone will probably comment on this better than I can, however I'd say that, to an extent, OpenStack is analogous to AWS or Azure but run in-house. Things like Nova, Swift, Glance, etc. are the "chunks" that make it work.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I'd run with it.

Success or fail, you'll probably get a chance to work with some neat technology and it'll be a good career boost.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

GnarlyCharlie4u posted:

I've been told that OpenStack makes everything highly available and redundant, and none of our VMs will ever go down. We can add nodes at will and scale VMs vertically and horizontally as needed.
After about 10 minutes of research I know this is not the case, and I'm afraid that our CIO has no idea what he's doing.
You are correct.

I feel that a generous description of OpenStack is that it's a collection of software for launching and managing KVM/QEMU instances, and the hairy networking between them. It doesn't provide the magical ability to keep VMs up without downtime, it doesn't automatically provision new VMs for you when you need more compute resources, and it can't (AFAIK) migrate instances between compute nodes like VMware's vMotion does.

The whole point of it is to allow you to spin up and tear down VM instances at will, so it can be used as a platform for vertical/horizontal scaling. But a basic installation provides no intelligence in that regard; it won't monitor resource thresholds for you and automatically adjust to the capacity you need. You have to use a higher-level system (e.g. a PaaS) to do that. The Heat component helps perform some of this, but it's relatively new and I'm not familiar enough with it to comment on it.

quote:

For example: let's say I have a bunch of VMs running and have maxed out my resources. What happens when one of my nodes (a physical server) dies? Is all of that computing power distributed equally between all of the metal, or does OpenStack just choose which server to run the VM on based on the number of cores?
When a physical Compute node falls over, the instances running on that node will obviously stop running. The OpenStack software will notice the node is down, but won't magically migrate and restart the instances. AFAIK (and I might be wrong), you'll have to wait until the node comes back up to use those instances again. And if the node is truly dead (e.g. the drive exploded) then you'll have to recreate those instances from scratch, and OS will provision them on another Compute node. I believe this is why evol262 recommends that OS be used for cattle, not pets: if something dies, you're expected to just shrug your shoulders and provision a new one, rather than spending time/money/energy taking your instance to the "vet" to be resuscitated.

When OS decides which Compute node it'll provision your instance on, it takes into account the desired "flavor" (#CPUs / memory / disk) but I don't believe you can give it any more hints than that. From a tenant's POV, they're supposed to be completely unaware of the underlying hardware. (I'm not sure that's practically true, I don't have enough experience to say that authoritatively).
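
To make that concrete, the tenant-side request really is just a flavor plus an image; the names here are examples:

code:
# List available flavors, then boot an instance; the scheduler
# alone decides which Compute node it lands on
nova flavor-list
nova boot --flavor m1.small --image cirros-0.3.2-x86_64 my-instance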

quote:

I am but a lowly helpdesk monkey and barely have a grasp on basic networking. I have no idea where to begin learning this. Is there a resource that can help me get a grasp on this ASAP?
Basic networking, or OpenStack networking? Because if you want to know how OpenStack networking works, then poke around Red Hat's site. OpenStack's documentation is absolutely terrible, but Red Hat and their RDO/"packstack" OpenStack distribution do a reasonable job of making it easier to understand what's happening under the covers. For example, this article is 1000x better than anything you'll find in the OpenStack docs: https://openstack.redhat.com/Networking_in_too_much_detail

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

Tab8715 posted:

Someone will probably comment on this better than I can, however I'd say that, to an extent, OpenStack is analogous to AWS or Azure but run in-house. Things like Nova, Swift, Glance, etc. are the "chunks" that make it work.

This is about all I understand of it.

Dr. Arbitrary posted:

I'd run with it.

Success or fail, you'll probably get a chance to work with some neat technology and it'll be a good career boost.
I'm plowing ahead full tilt.
It's nearly impossible to get my boss to change his mind about things, even when he KNOWS he is wrong.
I'm just trying to 1) keep him from breaking all of the things and 2) actually see a project to fruition for once.

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

minato posted:

You are correct.
Fuck.

minato posted:

Basic networking, or OpenStack networking? Because if you want to know how OpenStack networking works, then poke around Red Hat's site. OpenStack's documentation is absolutely terrible, but Red Hat and their RDO/"packstack" OpenStack distribution do a reasonable job of making it easier to understand what's happening under the covers. For example, this article is 1000x better than anything you'll find in the OpenStack docs: https://openstack.redhat.com/Networking_in_too_much_detail
Thank you so much, this is a good place to start.
It appears my boss has COMPLETELY the wrong idea about what OpenStack is and how it works.

Thanks Ants
May 21, 2004

#essereFerrari


It sounds like your CIO thinks OpenStack is just like vSphere but without the licensing costs.

Docjowles
Apr 9, 2009

GnarlyCharlie4u posted:

I'm being forced headlong into this thread as my company has finally begun to implement "The Cloud"
The decision has come down that we will be using OpenStack managed by Foreman.
I've been told that OpenStack makes everything highly available and redundant, and none of our VMs will ever go down. We can add nodes at will and scale VMs vertically and horizontally as needed.
After about 10 minutes of research I know this is not the case, and I'm afraid that our CIO has no idea what he's doing.

lol. Amazon has instances randomly die from time to time, and they're freaking Amazon. Glad to see you recognize this is the case ;)

Have you looked at oVirt (or RHEV, Red Hat's paid and supported product built on it)? That might be a better fit depending on what you need.

My company runs our own OpenStack private cloud in production, with about 270 VMs currently (and more on the way). Happy to answer questions. The main thing I'll warn you about is that it is a LOOOOOOOT of work to set up, tune, and maintain. We have one engineer who spends about 90% of his time doing nothing but babysitting OpenStack, with the rest of us backing him up. When it's humming along, it is pretty dang awesome. When it fails, prepare to break out the hard liquor, because the traditional "ask Google" option often does not exist. You may be the only person to ever have the problem you're seeing, or at least to talk about it publicly. Often the only relevant search result is a link to the source code. OpenStack is absolutely not as turnkey as VMware. What you "save" on licensing you will end up spending in man-hours.

Regarding your specific question, OpenStack does not have HA features like VMware out of the box. Read up on the "cattle vs pets" metaphor. If a VM dies, or a whole compute node, it will not automatically reboot the affected guests on another machine. OpenStack is designed to be a cloud computing platform, and in the cloud, failure is meant to be expected. Your application should be architected such that it doesn't care that webserver1234 randomly vanished. Servers which Absolutely Cannot Go Down :colbert: are not good candidates for OpenStack. You can run them there, but be prepared for sadness.

I don't mean to make this sound like all doom and gloom. OpenStack, especially beginning with the Icehouse release, is really impressive. I love running it. But don't kid yourself that it is going to be easy to set up or maintain. It's a complex, always-evolving beast.

minato posted:

I feel that a generous description of OpenStack is that it's a collection of software for launching and managing KVM/QEMU instances, and the hairy networking between them. It doesn't provide the magical ability to keep VMs up without downtime, it doesn't automatically provision new VMs for you when you need more compute resources, and it can't (AFAIK) migrate instances between compute nodes like VMware's vMotion does.

The whole point of it is to allow you to spin up and tear down VM instances at will, so it can be used as a platform for vertical/horizontal scaling. But a basic installation provides no intelligence in that regard; it won't monitor resource thresholds for you and automatically adjust to the capacity you need. You have to use a higher-level system (e.g. a PaaS) to do that. The Heat component helps perform some of this, but it's relatively new and I'm not familiar enough with it to comment on it.

OpenStack actually does support live migration (what VMware calls vMotion). It works great. I'm not sure if there's an analog to Storage vMotion, or if live migration works with local storage; I'd guess not. We run all of our VMs on shared storage, so I've never looked into it.
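
For example (instance and host names are placeholders):

code:
# OpenStack's rough equivalent of vMotion, assuming shared storage
nova live-migration my-instance compute-02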

With Heat + Ceilometer you can set up autoscaling based on various metrics (all VM's in a pool have been at < 10% idle CPU for 10 minutes: boot another), although we are not doing this in production yet. On my 2015 wishlist.

Dr. Arbitrary posted:

I'd run with it.

Success or fail, you'll probably get a chance to work with some neat technology and it'll be a good career boost.

This. There aren't a ton of OpenStack jobs out there, but there are even fewer candidates who are actually proficient with it. Get that on your resume/LinkedIn and watch recruiters start blowing you up.

Docjowles fucked around with this message at 23:56 on Dec 26, 2014

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
evol262's post about OpenStack is a very good one, and I'd suggest you get your CIO to read it carefully, with giant <blink> tags around the Neutron networking section, because getting networking right is very difficult. The engineers who manage my company's on-premise OS clusters have to be very experienced and competent, and your company will similarly have to invest a lot of time/money in managing a local OS installation. It was certainly beyond me; as a DevOps person who just wanted to play around with OS before I got tenant access to our clusters, I got a sandbox cluster installed on one machine via packstack fairly painlessly, but installing a multi-node sandbox was an exercise in frustration that never fully worked properly.

If it doesn't have to be on-premise, then let Rackspace or some other OpenStack provider deal with the pain. Also, be prepared for OpenStack version migrations (e.g. Havana to Icehouse) to be measured in quarters rather than days.

If you're going to play around with OS to get a feel for it, then definitely use packstack to ease some of that pain. And even better, there are Docker images of various OpenStack components you can find at index.docker.io which will avoid the very lengthy install times. I concur with evol262 that you should initially select GRE or VLANs to let the Compute nodes communicate; they're easier than the alternatives.
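
For reference, the all-in-one sandbox really is about this short, assuming you've already pointed yum at the RDO repo:

code:
# Single-machine RDO sandbox (not for production)
yum install -y openstack-packstack
packstack --allinone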

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

Thanks Ants posted:

It sounds like your CIO thinks OpenStack is just like vSphere but without the licensing costs.
That's pretty much it.
He wants all the things. For free. Always.
Free things are far superior. :colbert:

Docjowles posted:

lol. Amazon has instances randomly die from time to time, and they're freaking Amazon. Glad to see you recognize this is the case ;)
Have you looked at oVirt (or RHEV, Red Hat's paid and supported product built on it)? That might be a better fit depending on what you need.
No. I'll check out oVirt tonight. RHEV is out of the question though. Every time I show him something that costs any amount of money, he wants all of those features, but for free.
So unless it's coming out of my pocket, it's not happening.
I totally understand the pets vs cattle reference, which is why I started all this posting in the first place. I don't want to waste time herding cats.

Docjowles posted:

I don't mean to make this sound like all doom and gloom. OpenStack, especially beginning with the Icehouse release, is really impressive. I love running it. But don't kid yourself that it is going to be easy to set up or maintain. It's a complex, always-evolving beast.
I'm not. We're talking about a man who migrates mission-critical things to the latest release candidate of CentOS every time one drops.

minato posted:

evol262's post about OpenStack is a very good one, and I'd suggest you get your CIO to read it carefully, with giant <blink> tags around the Neutron networking section, because getting networking right is very difficult. The engineers who manage my company's on-premise OS clusters have to be very experienced and competent, and your company will similarly have to invest a lot of time/money in managing a local OS installation. It was certainly beyond me; as a DevOps person who just wanted to play around with OS before I got tenant access to our clusters, I got a sandbox cluster installed on one machine via packstack fairly painlessly, but installing a multi-node sandbox was an exercise in frustration that never fully worked properly.

If it doesn't have to be on-premise, then let Rackspace or some other OpenStack provider deal with the pain. Also, be prepared for OpenStack version migrations (e.g. Havana to Icehouse) to be measured in quarters rather than days.

If you're going to play around with OS to get a feel for it, then definitely use packstack to ease some of that pain. And even better, there are Docker images of various OpenStack components you can find at index.docker.io which will avoid the very lengthy install times. I concur with evol262 that you should initially select GRE or VLANs to let the Compute nodes communicate; they're easier than the alternatives.
Thanks for the tip. It absolutely does have to be on premise though, and no money can be spent on this project. Man hours don't matter, because I'm salaried.

Again, thank you all so much for your help, and for confirming my worst fears. Time to break out the coffee rum and burn the rest of my PTO reading up on OS. :thumbsup:
I'll be back with more stupid questions.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

GnarlyCharlie4u posted:

No. I'll check out oVirt tonight. RHEV is out of the question though. Every time I show him something that costs any amount of money, he wants all of those features, but for free.
So unless it's coming out of my pocket, it's not happening.
oVirt is pretty nice as a vSphere replacement, but you need to keep in mind it does require Linux knowledge to implement and support. Most corporate environments, unless they are already deep in Linux knowledge, will benefit more from vSphere than from oVirt, for the simple fact that you can hire a vSphere schmuck off the street far more easily than you can find someone who can support oVirt.
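
To give a feel for it: the initial install is only a couple of commands, assuming the oVirt repo is already configured; it's everything around it (storage domains, networking, troubleshooting) that needs the Linux chops:

code:
# Install and configure the oVirt management engine on a host
yum install -y ovirt-engine
engine-setup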

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

adorai posted:

oVirt is pretty nice as a vSphere replacement, but you need to keep in mind it does require Linux knowledge to implement and support. Most corporate environments, unless they are already deep in Linux knowledge, will benefit more from vSphere than from oVirt, for the simple fact that you can hire a vSphere schmuck off the street far more easily than you can find someone who can support oVirt.

My boss has a Linux superiority complex. (See above post about CentOS.) So that's probably a good thing.
My experience with Linux has been limited, but I'm not afraid to learn the hard way.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

GnarlyCharlie4u posted:

My boss has a Linux superiority complex. (See above post about CentOS.) So that's probably a good thing.
My experience with Linux has been limited, but I'm not afraid to learn the hard way.
ESXi is built on a Linux base. It's strayed pretty far from it in recent years, but early versions of ESX, up to 3.5 I believe, shipped on Red Hat.

Seriously, get your boss to reconsider going from zero to OpenStack. It's like giving a 16-year-old the keys to your helicopter instead of the Honda Civic.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to a pair of really nice gaudy shoes?


Not sure if this is the place, but how do you end up with cattle as opposed to pets? How do you create an application where it doesn't matter if it crashes and you just re-create it?

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
For designing apps as cattle, look at the 12 Factor principles for a guide. That was based on how Heroku advised its users to design their apps.

Most of the time it boils down to making your app stateless, and to have its config injected easily. This is quite straightforward for (say) a web service.
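
Concretely, "config injected easily" can be as simple as this; Docker is just for illustration here, and the image name and hostnames are made up:

code:
# Same artifact everywhere; the environment decides dev vs. prod
docker run -e DATABASE_URL=postgres://db.prod.example.com:5432/app \
           -e LOG_HOST=logs.prod.example.com \
           my-app:latest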

Not everything can be 12-factor. Like, your database server can't easily treat its filesystem as an external service, but it can come pretty close if it's backed by a grunty NetApp filer.

For hosts, you just need to start ripping out all the stuff that makes a specific host a special snowflake. No meticulously provisioned hostnames/IPs, no hand-installed services.

You can expect to have some long-lived hosts (e.g. a monitoring server or a load balancer) but they should be easily and quickly replaceable if they fail. You can get the high-availability with something like keepalived, and spinning up a new one with the right software installed should be a fast automated process anyway (e.g. use Puppet or Docker).

minato fucked around with this message at 09:38 on Dec 27, 2014

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to a pair of really nice gaudy shoes?


Will this eventually become standard in application development? It seems like a great concept.

jre
Sep 2, 2011

To the cloud ?



Tab8715 posted:

Will this eventually become standard in application development? It seems like a great concept.

No, not everything can be shoehorned into this model; anything that relies heavily on transactions or state, for instance. It's difficult to make non-trivial apps that can just scale out, and most things don't need that level of reliability.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Distributed systems are an order of magnitude harder to maintain, operate, and debug than non-distributed systems. You shouldn't use them unless you actually need the scale they permit. If you're using Cassandra as the backend for your office lunch-ordering application, you've probably fucked up.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

minato posted:

For designing apps as cattle, look at the 12 Factor principles for a guide. That was based on how Heroku advised its users to design their apps.

Most of the time it boils down to making your app stateless, and to have its config injected easily. This is quite straightforward for (say) a web service.

Not everything can be 12-factor. Like, your database server can't easily treat its filesystem as an external service, but it can come pretty close if it's backed by a grunty NetApp filer.

For hosts, you just need to start ripping out all the stuff that makes a specific host a special snowflake. No meticulously provisioned hostnames/IPs, no hand-installed services.

You can expect to have some long-lived hosts (e.g. a monitoring server or a load balancer) but they should be easily and quickly replaceable if they fail. You can get the high-availability with something like keepalived, and spinning up a new one with the right software installed should be a fast automated process anyway (e.g. use Puppet or Docker).

I like this, except for the env variables as config, since it's hard to version that. I wish there were a better way :(


Tab8715 posted:

Not sure if this is the place, but how do you end up with cattle as opposed to pets? How do you create an application where it doesn't matter if it crashes and you just re-create it?

The state that can be lost is not the eternal state

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Malcolm XML posted:

I like this, except for the env variables as config, since it's hard to version that. I wish there were a better way :(
I think that's not a hard-and-fast rule; it's more about being easy to inject config at runtime. Env vars are just one way of doing that.

We're using a combination of files and environment vars to configure containers at spinup time. Files are for static configuration like tuning parameters and SSL certs. Those files are handled by the config management system (Puppet / Chef / etc) so it's versioned, and we can have dev/staging/prod/qa variations. The env vars are used for configuration determined at runtime, e.g. the IP/port of a log server.


12-factor is definitely useful when your application outgrows a single server and you need to make it distributed. I see a growing trend towards a Data-Center Operating System (DCOS). Apache Mesos + Marathon (which drive Twitter and AirBnB) is the most mature example of this.

A DCOS is analogous to a kernel's scheduler, but spread across many hosts. With the kernel, you submit a process to be run and it decides when and where to run it: what core will do the work, what memory area to use, etc. As the job submitter, you don't know or care about any of that; it's all abstracted away from you. With a DCOS it's the same thing: you submit a job to the DCOS, and the DCOS decides which of its array of hosts will get the job. (This sounds like a PaaS, but to my mind it's spiritually closer to an OS's scheduler.)

This works well when the DCOS can quickly send a job to a specific host. It's one reason why containerization technologies like Docker are so big right now, because the app container is so lightweight (relative to a VM image) that it can be deployed on any host in the cluster very quickly. I can send a command to a virgin host that will download my 5MB app container and have it running within a few seconds. With the old way, we'd have to spin up a new VM that contained the app, or maintain a host with Puppet where we'd install/configure the app before running it.
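
e.g., assuming a private registry (the names are made up):

code:
# Virgin host: fetch and start a small app container in seconds
docker pull registry.example.com/my-app:1.0
docker run -d -p 8080:8080 registry.example.com/my-app:1.0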

Mr Shiny Pants
Nov 12, 2012
I don't exactly know where to put this but here goes:

Does anyone know if Linux bridging works under Hyper-V? I've been trying to set up bridging for LXC and I am having a hell of a time getting the containers to work.

I've tried the following Linux sysctls to disable any filtering:
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0

But I still can't get DHCP to work or ping another machine (other than the Linux host) from a container.

I've created a br0 with brctl and everything looks right, but it does not work.
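
Roughly the standard recipe, in case anyone spots something off (interface names are examples):

code:
brctl addbr br0          # create the bridge
brctl addif br0 eth0     # enslave the NIC Hyper-V presents to the VM
ip link set br0 up
brctl show               # bridge and member interface both listed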

This is making me doubt the networking part of Hyper-V.

Any clues?

Thanks Ants
May 21, 2004

#essereFerrari


Is this any good?

http://www.cloudbase.it/hyper-v-promiscuous-mode/

I had to do something similar in vSphere to get a VPN concentrator to work properly.

Mr Shiny Pants
Nov 12, 2012

Thanks Ants posted:

Is this any good?

http://www.cloudbase.it/hyper-v-promiscuous-mode/

I had to do something similar in vSphere to get a VPN concentrator to work properly.

No dice. I hoped that would be it, but it does not seem to work.

It's also pretty painful to pinpoint where it goes wrong.

If anyone else has an idea, please let me know.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Mr Shiny Pants posted:

Does anyone know if Linux bridging works under Hyper-V? I've been trying to set up bridging for LXC and I am having a hell of a time getting the containers to work.
Whenever I have had trouble with bridging in linux, tcpdump has been instrumental in troubleshooting where the problem lies.
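
e.g., to watch where the DHCP requests die (interface names are examples):

code:
# Watch DHCP on the bridge, then on the underlying NIC, from the Linux VM
tcpdump -ni br0 port 67 or port 68
tcpdump -ni eth0 port 67 or port 68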


Malcolm XML
Aug 8, 2009

I always knew it would end like this.

minato posted:

I think that's not a hard-and-fast rule; it's more about being easy to inject config at runtime. Env vars are just one way of doing that.

We're using a combination of files and environment vars to configure containers at spinup time. Files are for static configuration like tuning parameters and SSL certs. Those files are handled by the config management system (Puppet / Chef / etc) so it's versioned, and we can have dev/staging/prod/qa variations. The env vars are used for configuration determined at runtime, e.g. the IP/port of a log server.


12-factor is definitely useful when your application outgrows a single server and you need to make it distributed. I see a growing trend towards a Data-Center Operating System (DCOS). Apache Mesos + Marathon (which drive Twitter and AirBnB) is the most mature example of this.

A DCOS is analogous to a kernel's scheduler, but spread across many hosts. With the kernel, you submit a process to be run and it decides when and where to run it: what core will do the work, what memory area to use, etc. As the job submitter, you don't know or care about any of that; it's all abstracted away from you. With a DCOS it's the same thing: you submit a job to the DCOS, and the DCOS decides which of its array of hosts will get the job. (This sounds like a PaaS, but to my mind it's spiritually closer to an OS's scheduler.)

This works well when the DCOS can quickly send a job to a specific host. It's one reason why containerization technologies like Docker are so big right now, because the app container is so lightweight (relative to a VM image) that it can be deployed on any host in the cluster very quickly. I can send a command to a virgin host that will download my 5MB app container and have it running within a few seconds. With the old way, we'd have to spin up a new VM that contained the app, or maintain a host with Puppet where we'd install/configure the app before running it.

Yeah, I mean, when I deploy a service it just gets distributed across a number of instances and scales up and down as needed, and it's only the one service that we fucked up architecting that's causing us pain.

Re "DCOS" Azure has been doing this for years at least internally, god knows if/when they will expose it.

Basically the way to make your service scale is to not do anything that would cause it to not scale.

  • Reply