Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

StabbinHobo posted:

if you're taking a paycut to get into devops you're either a surgeon or doing it wrong

Seriously guy, $25k? Lol.


Cancelbot
Nov 22, 2006

Canceling spam since 1928

Docjowles posted:

Don’t just set up kubernetes ECS to run your company’s old rear end Java .NET monolith.

Aw gently caress. We're moving around 150 Windows servers to AWS, and then containerising the hell out of them so the infra teams only have to manage ~10 much larger servers and the developers get much faster deploys. Octopus Deploy loving hates AWS; it pretends to like it, but it has a deep-seated hatred of servers that self-obliterate when the auto scaling group has had enough.

So we're mounting a two-pronged attack with AWS CodeDeploy & Docker

Cancelbot fucked around with this message at 14:21 on Oct 1, 2018

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I can understand taking a pay cut to go from a larger company to a start-up, since at least equity is non-zero compensation. But unless you're already in a really high pay bracket / very specialized (certs and speaking engagements and all that stuff that should push you into $200k base almost everywhere in the US for almost any engineer, devops or not), a pay cut to go to a non-start-up company doesn't make sense. Even in Atlanta's crappy, underpaid market I was getting substantially better offers for devops consulting left and right than I had for defense work in the DC area only a couple years prior.

Helianthus Annuus
Feb 21, 2006

can i touch your hand
Grimey Drawer

Warbird posted:

I think I'm going to accept that full-time consulting gig tomorrow. Pay cut or no, I think it would be more beneficial for my career by way of establishing a solid base and having the ability to branch out. My contracting firm also recommended I commit tax fraud so I could get extra cash, so it might be best to not be associated with them.

Silver lining: since I don't much care for pissing off the IRS, the pay cut is only $25k or so, which is about where I'd be if I converted at my current place.

pls keep looking, there are absolutely companies out there that can afford to pay you what you make now

your new gig isn't low balling you because they're short on money, they're low balling you because they think you're an easy mark. you should feel insulted. if you accept this pay cut, i promise they will continue to treat you like a mark for as long as you let them

Helianthus Annuus
Feb 21, 2006

can i touch your hand
Grimey Drawer

LochNessMonster posted:

As others have mentioned, just start playing with stuff and go from there. Methanar made a huge effort-post in the general IT thread some time ago (months, probably?) about a good way to get started learning devops skills. If you want I can repost it here; it was an excellent post and has already helped several goons on their way.

can you please repost this?

12 rats tied together
Sep 7, 2006

Docjowles posted:

Are you vaguely aware of how to write and operate modern applications, where modern is like TYOOL 2012? It is that. https://12factor.net/. Plus the usual "make your app stateless!!!" "ok but the state has to live someplace, what do we do with that?" "???????????"

I wasn't familiar with this part of 12 factor but actually it's right here:

quote:

Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.

This is pretty reasonable; I'm not sure why anyone would object to this.
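
To make that concrete, here's a minimal sketch of the pattern in Python (assuming Flask and redis-py; the REDIS_URL variable and endpoint are made up). The counter lives in Redis, a stateful backing service, so any number of identical stateless processes can serve any request:

code:
import os

import redis
from flask import Flask

app = Flask(__name__)

# State lives in a backing service (Redis), not in process memory, so any
# copy of this process can handle any request and can be killed at will.
store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

@app.route("/hits")
def hits():
    # INCR is atomic in Redis, so concurrent processes don't race each other.
    return {"hits": int(store.incr("hits"))}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)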

Cancelbot posted:

So we're mounting a two-pronged attack with AWS CodeDeploy & Docker

AWS CodeDeploy is really good. Docker is not a requirement in this scenario: since you're running in EC2 you already have just about everything Docker gives you from an orchestration perspective, it's just a matter of arranging the nuts and bolts to your liking. Sometimes Docker makes sense, but most often I've found that if the push towards containers comes from the development half of an organization (I'm generalizing, I know), it ends up an untenable mess of bullshit within anywhere from 1 to 6 months.

I strongly recommend that, if you can, you approach Docker / containers as an organization with a shared contract of "just put your containers here: _____". That underscore can be Nexus, AWS ECR, Docker Hub, whatever, but starting with instructions for a developer to push their containers and trigger a deploy is, in my experience, the best overall approach for operational sanity. From that point you can build your orchestration assuming that someone checks in code -> a container is built and pushed -> our automation takes over from here.
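
As a rough illustration of the "automation takes over from here" step, here's a hedged sketch in Python with boto3, assuming the target happens to be ECS (the family, cluster, and service names are all made up): once CI pushes a new image tag, copy the live task definition with the new image and point the service at the new revision.

code:
import boto3

ecs = boto3.client("ecs")

def deploy(image_uri: str,
           family: str = "web-app",    # hypothetical task definition family
           cluster: str = "prod",      # hypothetical cluster name
           service: str = "web-app"):  # hypothetical service name
    """Roll the ECS service onto a freshly pushed container image."""
    # Copy the current task definition, swapping in the image CI just pushed.
    current = ecs.describe_task_definition(taskDefinition=family)["taskDefinition"]
    containers = current["containerDefinitions"]
    containers[0]["image"] = image_uri

    new_td = ecs.register_task_definition(
        family=family,
        containerDefinitions=containers,
    )["taskDefinition"]

    # ECS performs the rolling deploy once the service points at the new revision.
    ecs.update_service(
        cluster=cluster,
        service=service,
        taskDefinition=new_td["taskDefinitionArn"],
    )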

Your organization is probably different, but in all of the orgs I've worked in, once developers (or really, anybody beholden to a PM / business planning) get involved in orchestration (even via something like a helm chart), operational sanity gets thrown out the window immediately in favor of shipping features before an arbitrary deadline decided by someone who barely knows what a container is. Container troubleshooting is the worst kind of troubleshooting, so definitely fight stuff like that tooth and nail, IMO.

You also mentioned that most of your services are ~100MB of RAM and <1% CPU? ECS and Fargate are both excellent services, but I'd highly recommend engaging your TAM or some other kind of AWS support before deciding on anything; hopefully they can work with you guys to spin up a proof of concept applicable to what you're hoping to accomplish with EC2/Docker.

It's really hard to beat either of those services though, assuming you are only running on EC2. In particular, I would start off this migration journey by thinking about IAM. Getting a container scheduled onto an instance is the easy part -- an intern can bang that out in half a day. The hard part is usually managing secrets, credentials, and AWS API access.
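
For the secrets half of that, one common pattern (sketched below in Python; the secret name is hypothetical) is to give the task or instance an IAM role and pull credentials from Secrets Manager at startup, so nothing sensitive is baked into the image or the task definition:

code:
import json

import boto3

def get_db_credentials(secret_id: str = "prod/webapp/db") -> dict:
    """Fetch credentials at startup using the task role / instance profile."""
    # boto3 resolves credentials from the ECS task role or EC2 instance
    # profile automatically, so the container never sees long-lived keys.
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials()
    print("connecting as", creds.get("username"))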

12 rats tied together fucked around with this message at 21:12 on Oct 1, 2018

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
is it possible to have a cloudbuild.yaml in your git repo that works for both your prod ci/cd pipeline *and* devs using "cloud-build-local" against their own test envs/gcp-projects?

related q: y'all just yolo a kubectl command at the end of the steps or have a better deploy method?

Cancelbot
Nov 22, 2006

Canceling spam since 1928


Thanks!

So right now we have big EC2 instances with anything from 5 to 30(!!) independent services on them; this is a holdover from our pre-AWS days when we had around 10 physical servers in which to do all our public hosting. We are going for a "mega-cluster" approach to Docker, but the end goal is each team having an AWS account and looking after their own stack, which can take whatever form they like as long as it's secure & budgeted for; four of our 15 teams are nearly finished with that process and the results are promising. From that we've seen a high variance in implementation details too: EC2 + RDS as a traditional "lift and shift" for some, but others are rebuilding things in lambda + S3 as they don't even want to give a poo poo about an instance going down.

Our old QA environment (~150 servers) will be the primary target of the "containerise everything" push, as it's all either too small or too loosely coupled to require a more significant investment in EC2 instances. The real ballache, which I think I mentioned in previous posts, is that nearly all of this is Windows/.NET and Fargate doesn't support that yet, but it's what we'd use.

We've literally just triggered our activation of AWS Enterprise support, and as soon as our TAM is on board, how we deploy is the first thing we're going after. IAM is deployed fairly effectively in the places we can see (2FA, Secrets Manager, least privilege, etc.), but it's going to get more chaotic when the developers really see the "power" of AWS -- and by "power" I mean "look at all this horrific poo poo I can cobble together, disregarding Terraform/CloudFormation" -- so we're working hard on building or buying some hefty governance tools that will slap down silly poo poo as best we can.

Edit: lol our Director has just asked us how quickly we can move everything from eu-west-1 (Ireland) to eu-west-2 (London) in the event of a no-deal Brexit.

Cancelbot fucked around with this message at 11:29 on Oct 2, 2018

12 rats tied together
Sep 7, 2006

Cancelbot posted:

end goal is each team having an AWS account

Awesome, this is a pretty solid idea all around. Orgs I've worked in that tackled multi-account either early on or from the start are generally much healthier from an operational standpoint 2-3 years down the line than orgs that had "the aws account" until some external factor abruptly forced them into multiple accounts.

Cancelbot posted:

but some are rebuilding things in lambda + S3 as they don't even want to give a poo poo about an instance going down.

This is another great idea. My current org has been going through an "oh this lambda stuff is pretty good huh" phase for the past 12-15 months, and one thing that caught most of the developers off-guard is that an S3 event notification can only have one target per unique combo of prefix and suffix. Once people realized they could spin up functions in response to objects being placed (our platform is basically a huge S3 state machine), tons of people wanted to run functions off of the same prefix, so we naively implemented the first one, failed on the follow-ups, and then everything had to be delayed while we rewrote them all to subscribe to SNS topics.
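
For anyone who hasn't hit this: the workaround is to point the bucket's single notification at an SNS topic and have every interested function subscribe to that topic. A rough Python sketch of what each subscriber's handler ends up looking like (the event shapes are the standard S3/SNS ones; everything else is illustrative):

code:
import json

def handler(event, context):
    """Lambda entry point for the S3 -> SNS -> many-Lambdas fanout."""
    for record in event["Records"]:
        # The original S3 event notification is wrapped inside the SNS message body.
        s3_event = json.loads(record["Sns"]["Message"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # Real work for this particular subscriber goes here.
            print(f"object created: s3://{bucket}/{key}")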


Moving an application's infrastructure from one region to another would take us probably 15 minutes to 2 hours in my current role, depending on the application. CloudFormation has a lot of built-in helpers here now that we used to use ansible for, in particular StackSets. As long as you build all of your cfn templates assuming that region (and possibly vpc id) is a type of primary key that you'll use to look up subnet ids, AMIs, and security group ids, you're like 90% done with just being able to swap eu-west-1 to eu-west-2 in github and pushing a button.
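
A minimal sketch of that "region as primary key" idea in Python with boto3 (all of the IDs below are placeholders): keep one lookup table keyed by region and feed it into the same template as stack parameters, so moving regions is just changing the key.

code:
import boto3

# Hypothetical lookup table: region is the primary key for everything
# environment-specific (VPC, AMI, security group, and so on).
REGION_DEFAULTS = {
    "eu-west-1": {"VpcId": "vpc-11111111", "AmiId": "ami-11111111", "SecurityGroupId": "sg-11111111"},
    "eu-west-2": {"VpcId": "vpc-22222222", "AmiId": "ami-22222222", "SecurityGroupId": "sg-22222222"},
}

def deploy(region: str, stack_name: str, template_body: str) -> None:
    """Create the same CloudFormation stack in whichever region you point it at."""
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": key, "ParameterValue": value}
            for key, value in REGION_DEFAULTS[region].items()
        ],
    )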

Biggest time sinks in my experience are, of course, application configuration, and if you need to move anything heavy (especially Redshift clusters) that can take a couple hours. It's not too bad -- it really helps if you have your basic network configs in CloudFormation, though.

LochNessMonster
Feb 3, 2005

I need about three fitty


Helianthus Annuus posted:

can you please repost this?

Took me a bit, it’s already a year old but here goes.

Methanar posted:

What do you want to do?

I know that's a hard question to answer in the very beginning when you're not even entirely sure what the hype behind a particular technology is. I know nothing about your work environment or what your workloads are.

The power of containers is the automation tooling surrounding them. A plain old Docker container running somewhere, doing something, handled by systemd or whatever, is actually pretty boring. I guess you might be able to make things a bit quicker by pulling down an haproxy container image from a public repo or whatever, but that's not the point.

Containers are great because they are the perfect primitive for building upon. What can be built on top of containers? Immutable infrastructure, applications that can be deployed with all of their dependencies bundled with them, intelligent automatic resource scheduling, CI/CD pipelines, blue/green deployments, off the top of my head.

The reality is, if you're the kind of windows admin that I was, the value isn't there for you. Whatever it was that I did at previous jobs had literally zero use whatsoever for any of the concepts I just named. But maybe you're not the kind of windows admin I was, or you don't want to be. If you don't know what you want out of containers, or more importantly, the larger superset that containers are part of, other than that you want them -- that is perfectly okay.

A good place to start is to just make an account with either Google Cloud Platform or AWS. I'm actually going to recommend GCP here. I've been spending an awful lot of time recently immersed in GCP and it's very approachable compared to AWS. Kubernetes is also a Google product and thus is a first-class citizen in GCP.

Great, you've made your account and are ready to start. Here is where that hard question comes in: what do you want to do? You're entering ~Devops~ territory here. You're not a windows admin anymore working with pre-packaged applications that are built for you. In Devops land, being familiar and comfortable with software development is now an unavoidable necessity, because delivering the software that your organization produces is the point. So, naturally, I guess the first thing to do is write a hello world micro-service application in the language of your choice. Golang, nodejs, python, ruby. Pick one and follow a guide on the internet.

Your hello world application can be simple, but use many pieces. Find a guide that involves multiple external components, maybe Redis or MySQL. Say ultimately you get several pieces to your new micro-service oriented distributed system: a front end, a piece dedicated to db access, something in the background that handles logging, maybe an internal request router, maybe something that procedurally generates a bitmap image, a message bus, redis, and your DB daemon. Now it's time to publish your application to the world. Each micro service is self contained and stateless, which means they are a perfect fit for being in a container!

But wait, writing and developing code is hard. The code you write sucks and is actually full of bugs. What a perfect time to set up a CI/CD pipeline to make your software developer life easier. Like any good developer you've been using Git as your version control system. Why not build a Jenkins server, in a container naturally https://hub.docker.com/r/jenkins/jenkins/, that will automatically build, compile and test your code for you every time you commit a branch? Jenkins can spawn MORE containers where your code will be built and run against synthetic tests you write to be sure you haven't introduced regressions. https://techbeacon.com/beginners-guide-kick-starting-your-ci-pipeline-jenkins

Finally: you have a sane build system like any good developer, your code is bug free and ready for the world. Maybe you start off pushing the containers produced by Jenkins to your VMs by hand, because hey, there's only like 7 of them, right? But you continue to grow and your app is pretty popular. It's starting to get hard and expensive to provision all the necessary machines you need to power your bitmap generator. You notice that your application has clearly defined times of the week of peak traffic. Wouldn't it be great if you could size the amount of compute resources you were buying from Google according to your real-time traffic load? Enter: Kubernetes.

Kubernetes is a Big Deal. It's actually the technology underlying Google's Container Engine that's been open sourced.
Kubernetes is a system for managing containerized applications across a cluster of nodes, explicitly designed to address the disconnect between the way that modern, distributed systems are designed and the underlying physical infrastructure. Applications comprised of different services should still be managed as a single application (when it makes sense). Kubernetes provides a layer over the infrastructure to allow for this type of management. Scaling traffic up and down according to load, logically grouping containers together, software-defined networking and so much more are now possible.

Logically grouping containers together: maybe it just always makes sense for your bitmap generator to have 4 micro-services running on the same host to minimize Inter-Process Communication (IPC) latency. Kubernetes can do that. Maybe you always want X amount of microservices running on different underlying hardware to be resilient to datacenter mishaps. Kubernetes can do that. Since Kubernetes is now in front of your apps providing load balancing services, you can do things like blue/green deployments. Let's say parts of your application are stateful -- how do you deploy new code? How about just building an entire new parallel environment that you send new users to while the existing stateful sessions just naturally drain off of the old environment. How about running as many versions of the code you write at once?

Containers are the fundamental unit making up larger systems. This is why saying you want to do containers or devops is meaningless. Because it's not something you apt-get install or curl | bash. Devops is to technology-focused companies as the scientific method was to chemists.


This is why containers and the Devops concept/mentality/paradigm/thing is useless to the kind of internal IT windows admin that I was. We didn't write code, we didn't open source software that we were empowered to orchestrate. Running large distributed systems was not our business. If you want to 'get in on this container thing' you need to evaluate what you're doing with it. Maybe you're not satisfied with being an internal windows admin anymore and that's why you're interested. Excellent! The new world of online services is big and scary, but it's here, and more accessible than ever. Join a mailing list! Go to the Kubernetes github and open every link in a tab and read it all! Write your hello world app! Learn to program! (I've got another huge rant about 'learn to program') Read my posts!

Internet Explorer
Jun 1, 2005





Thanks for resurrecting that. I was thinking about it the other day and just didn't take the time to find it. My biggest problem as a Windows admin is that I just don't have anything pushing me to play with CI/CD or whatever else you want to call it. Even doing something at home, I am having a hard time thinking of a fun project to get me started since I don't currently do any coding.

itskage
Aug 26, 2003


What's the current hot poo poo for web e2e tests?

We're still using selenium. Is that still relevant? I'm trying to evaluate this before we start writing a ton of new ones for a new project.

Doom Mathematic
Sep 2, 2008

itskage posted:

What's the current hot poo poo for web e2e tests?

We're still using selenium. Is that still relevant? I'm trying to evaluate this before we start writing a ton of new ones for a new project.

Last time I checked it was Cypress. I could be wrong though.

Helianthus Annuus
Feb 21, 2006

can i touch your hand
Grimey Drawer

LochNessMonster posted:

Took me a bit, it’s already a year old but here goes.

cool, thanks for digging that up

I think he makes a good point about how containers abstract away all the runtime details, which lets you treat containers as atomic units of infrastructure that can all use the same logic for deployment, orchestration, etc. But I think the appeal of containers is much stronger for people working with open-source software, where the runtime requirements can be very heterogeneous.

I assume a windows guy at a .NET shop (or even a java shop) is in a situation where everything is much more homogeneous, and many of the tools needed for CI/CD are already available and applicable to your use case without extensive modification or configuration. If that's right, then containers don't really add a lot of value. But I've personally never worked at a windows shop, so I don't actually know.

But in the open source world at least, the big idea is to achieve something similar by containerizing whatever bizarre software stack you run, and then you too can use off-the-shelf tooling for your CI/CD needs (Kubernetes).

I should point out that containers are not the only way to pull this off. You can achieve something similar in AWS EC2 by baking AMIs when you build your software, and then treating each EC2 instance running some version of your AMI as your atomic unit of infrastructure. But in this case, you're using AWS-specific tooling to achieve CI/CD instead of something more provider-agnostic, like k8s.
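
A hedged sketch of that AMI-baking flow in Python with boto3 (the instance ID and naming scheme are made up; in practice a CI job or Packer owns this step): snapshot the fully built instance into an AMI, wait for it to become available, and from then on that image is the unit you launch and never mutate.

code:
import boto3

ec2 = boto3.client("ec2")

def bake_ami(build_instance_id: str, version: str) -> str:
    """Turn a fully built instance into an immutable, versioned AMI."""
    image = ec2.create_image(
        InstanceId=build_instance_id,  # e.g. the throwaway CI build box
        Name=f"webapp-{version}",      # hypothetical naming scheme
        Description="Baked at build time; instances launched from this are never mutated",
    )
    ami_id = image["ImageId"]

    # Block until the AMI is usable, then hand it to your launch template / ASG.
    ec2.get_waiter("image_available").wait(ImageIds=[ami_id])
    return ami_id

if __name__ == "__main__":
    print(bake_ami("i-0123456789abcdef0", "1.4.2"))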

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





itskage posted:

What's the current hot poo poo for web e2e tests?

We're still using selenium. Is that still relevant? I'm trying to evaluate this before we start writing a ton of new ones for a new project.

puppeteer is good if you can live with chrome only

LochNessMonster
Feb 3, 2005

I need about three fitty


Doom Mathematic posted:

Last time I checked it was Cypress. I could be wrong though.

Can confirm, Cypress is pretty cool.

12 rats tied together
Sep 7, 2006

Helianthus Annuus posted:

You can achieve something similar in AWS EC2 by baking AMIs when you build your software, and then treating each EC2 instance running some version of your AMI as your atomic unit of infrastructure. But in this case, you're using AWS specific tooling to achieve CI/CD instead of something more provider-agnostic, like k8s.

I agree mostly, except I think it's a mindset thing more than a tech thing. You don't need to bake AMIs to have immutable infrastructure; the only thing you need to do is not mutate your infrastructure.

Last job we jokingly called it 'deterministic infrastructure' in that we assumed two servers, when fed the same inputs, resulted in the same outputs at least at a service level. This is ~usually true.

You don't need k8s or ec2; you can provision a server with PXE and run a post-provisioning task. You just need cloud-init, something that can talk IPMI (ansible, for example), and your PXE distribution tool of choice, and you're pretty much done.

Boot a network image, server comes up and requests a deploy for whatever it's supposed to be, deploy kicks off and sets up monitoring etc, and then ideally your node starts responding affirmatively to some kind of health check and you're in business.

The post-provisioning step can include literally just dropping a docker-compose file into a server and configuring an upstart service that runs it. I've worked places where this is how we shipped some applications and it works great, assuming you don't need any of the fancier k8s features.

Not that there isn't value in k8s or ec2 outside of "this is a hardware abstraction" -- it's just important to note IMO that none of the tech is magic or even that hard to replicate in a physical DC, if it makes business sense to do so.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I've got a five-digit number of server instances booting off of read-only NFS and running a bootstrap script to deploy services into tmpfs. We don't even cloud-init, we just key off a couple of DHCP fields.

smackfu
Jun 7, 2004

I like the “log to stdout” part of the 12 factor app because it encourages less stupidity. I’m using a package at work where you provide a logging type and it has six options!
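
In Python the whole "logging type" decision collapses to a couple of lines -- a minimal sketch of the 12-factor approach, where the process just writes to stdout and whatever runs it captures and routes the stream:

code:
import logging
import sys

# 12-factor style: write the log stream to stdout and let whatever runs the
# process (Docker, systemd, a PaaS) capture and route it; no file handlers,
# no "logging type" option in application config.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

logging.getLogger(__name__).info("request handled")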

Warbird
May 23, 2012

America's Favorite Dumbass

So I'm poking and prodding at Terraform. I'm a bit gun-shy about hooking it up to a cloud platform because I'm a huge baby and also cheap as hell. I assume that having AWS/Azure as the provider would still be much cheaper than upgrading my home server (old rear end laptop) and handling any necessary licensing fun, even accounting for colossal screwups?


And I turned down the consulting thing. Still reading through the list of concepts, it's been quite a week.

SeaborneClink
Aug 27, 2010

MAWP... MAWP!

Warbird posted:

So I'm poking and prodding at Terraform. I'm a bit gun-shy about hooking it up to a cloud platform because I'm a huge baby and also cheap as hell. I assume that having AWS/Azure as the provider would still be much cheaper than upgrading my home server (old rear end laptop) and handling any necessary licensing fun, even accounting for colossal screwups?


And I turned down the consulting thing. Still reading through the list of concepts, it's been quite a week.

$300 GCP sign up credit, ready go.

Lily Catts
Oct 17, 2012

Show me the way to you
(Heavy Metal)
The job talk was kinda cool, which reminds me that I make below $20k a year in an infrastructure/support/somewhat DevOps role that I moved into from a database programmer role that was literally killing me--same company! I'm looking for work in another country so I could earn decently for a change.

Warbird
May 23, 2012

America's Favorite Dumbass

SeaborneClink posted:

$300 GCP sign up credit, ready go.

My man! I hadn’t even considered Gcloud. Well I know what I’m doing for a bit.

LochNessMonster
Feb 3, 2005

I need about three fitty


Warbird posted:

My man! I hadn’t even considered Gcloud. Well I know what I’m doing for a bit.

So did you take the paycut?

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS

Vulture Culture posted:

I've got a five-digit number of server instances booting off of read-only NFS and running a bootstrap script to deploy services into tmpfs. We don't even cloud-init, we just key off a couple of DHCP fields.

nfs is making a comeback

Scruff_McGee
Mar 11, 2007

Thats a purdy smile

LochNessMonster posted:

Took me a bit, it’s already a year old but here goes.


Ah thank you for this, I've bookmarked the quote and subscribed to this thread. I'm in the position of being lead of our infrastructure team supporting our financial software as a service, a role I inherited after our previous lead left last year. Basically owning all the hardware for our datacenter, networking + firewalls + load balancing, AWS, datacenter VMs, and supporting software like Splunk, LDAP, DNS, OSSEC and Puppet. The past year as lead has basically been struggling to keep up with fires and new requests, and now I'm looking to actually improve our infrastructure a bit. During this time we have started to implement Docker, but due to my inexperience and direction from Dev, our current implementation of Docker is...not ideal. We currently provision a new VM for each new container, because per Dev requirements each container needs a unique DNS name, and ports for things like SSH are hardcoded (needs port 22). Additionally, all services communicate using external URLs to our own web application.

Now there are a lot of areas for improvement for my team, but simplifying deployment is what I'm going to try to tackle next. I've been messing around with Kubernetes both on AWS and a local cluster, and I'm hoping I can build a case to push all containers into a cluster. Since our application is a mix of containers and standalone VMs, I want to try to set up a deployment where all containers are on a cluster, load balanced with F5 against NodePorts. Internal cluster communication uses kube-dns, any communication external to the cluster uses the F5 LB address, and traffic going into the cluster uses the F5 LB against NodePorts on services... any potential issues with this setup?

I feel a bit out of my depth, and don't have many resources to talk to on Kubernetes or even best practices for infrastructure - would this be a good time to hire a consultant?

Thanks Thread

geeves
Sep 16, 2004

StabbinHobo posted:

nfs is making a comeback

Don't use nfs and gitlab w/ Postgres. Just learned that the hard way.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
you're pretty much on a good track, only thing i'd say is have the f5 point to the nginx ingress controller instead of rolling your own nodeport/proxy-container solution:
https://github.com/kubernetes/ingress-nginx

also start with some kind of rancher/openshift distro for a datacenter, doing it all yourself is just too much.

Warbird
May 23, 2012

America's Favorite Dumbass

LochNessMonster posted:

So did you take the paycut?

Nope. Just found out that our PO is going to be taking off every M/F for the rest of the year and oh man does that untracked time off sound better now. I’m still convinced it’s a trap though.

Mao Zedong Thot
Oct 16, 2008


Scruff_McGee posted:

Ah thank you for this, I've bookmarked the quote and subscribed to this thread. I'm in the position of being lead of our infrastructure team supporting our financial software as a service, a role I inherited after our previous lead left last year. Basically owning all the hardware for our datacenter, networking + firewalls + load balancing, AWS, datacenter VMs, and supporting software like Splunk, LDAP, DNS, OSSEC and Puppet. The past year as lead has basically been struggling to keep up with fires and new requests, and now I'm looking to actually improve our infrastructure a bit. During this time we have started to implement Docker, but due to my inexperience and direction from Dev, our current implementation of Docker is...not ideal. We currently provision a new VM for each new container, because per Dev requirements each container needs a unique DNS name, and ports for things like SSH are hardcoded (needs port 22). Additionally, all services communicate using external URLs to our own web application.

Now there are a lot of areas for improvement for my team, but simplifying deployment is what I'm going to try to tackle next. I've been messing around with Kubernetes both on AWS and a local cluster, and I'm hoping I can build a case to push all containers into a cluster. Since our application is a mix of containers and standalone VMs, I want to try to set up a deployment where all containers are on a cluster, load balanced with F5 against NodePorts. Internal cluster communication uses kube-dns, any communication external to the cluster uses the F5 LB address, and traffic going into the cluster uses the F5 LB against NodePorts on services... any potential issues with this setup?

I feel a bit out of my depth, and don't have many resources to talk to on Kubernetes or even best practices for infrastructure - would this be a good time to hire a consultant?

Thanks Thread

If you control your network, run k8s on baremetal and use kube-router/BGP to advertise services/pods.

If you don't, just use an ingress controller in k8s.

Scruff_McGee
Mar 11, 2007

Thats a purdy smile

Mao Zedong Thot posted:

If you control your network, run k8s on baremetal and use kube-router/BGP to advertise services/pods.

If you don't, just use an ingress controller in k8s.

It'd be a hard sell to commit an entire UCS blade to k8s; the current plan was to look into a solution like Rancher where we can set up a cluster of VMs locally and a cluster of AMIs on AWS. Didn't know about kube-router and BGP -- does that go both ways? I need to do some research. Thanks!

LochNessMonster
Feb 3, 2005

I need about three fitty


Warbird posted:

Nope. Just found out that our PO is going to be taking off every M/F for the rest of the year and oh man does that untracked time off sound better now. I’m still convinced it’s a trap though.

Good to hear that, man. Start learning topics you feel you lack in during the M/F your boss is not in, and search for a company that wants to bring you in as the puppet guru but still wants to teach you other devops stuff.

geeves posted:

Don't use nfs and gitlab w/ Postgres. Just learned that the hard way.

That goes for any persistent data that requires lots of writes.

Source: also learned it the hard way.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
We've been testing NFS on gluster (ganesha) for our k8s persistent storage and early results are promising. We don't plan on using it as anything more than a file store though, as it's beginning to feel like an abstraction layer cake with each layer doing its own I/O buffering, and that's making me nervous.

offlining a gluster node during write-read tests and watching everything freeze for several seconds then recover cleanly was pretty cool.

anyone who's done this already, is there any reason NOT to have a peer heal back into the cluster automatically on boot?

Bhodi fucked around with this message at 13:12 on Oct 4, 2018

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Ganesha owns, pNFS owns. I'm so happy there's a decent user-space NFS implementation these days so Ceph and Gluster and friends can do all their dumb POSIX filesystem stuff

Bhodi posted:

anyone whose done this already, is there any reason NOT to have a peer heal back into the cluster automatically on boot?

Split-brains can be hard to resolve in the general case, and there are certain networking topologies where you don't want to do this because of the possibility of really bad failover/failback loops. In a typical configuration where you're not e.g. mirroring between datacenters you probably don't have to worry much. Ceph and Gluster in particular are both pretty resilient pieces of software.

Vulture Culture fucked around with this message at 14:48 on Oct 4, 2018

22 Eargesplitten
Oct 10, 2010



I was in here a while ago realizing how far over my head I was, but now I'm doing something that should be simple: a basic web server using Express and Node on my Pi. It won't be running anything particularly intensive; I'm just building a really basic website for now until I get external hosting set up. As I understand it, Express can be lightweight, so it seems like a good fit for the Pi (3 B+). I got the official Node package, but I'm not seeing an official Express or NPM package. Are those included in there? I know Express is a framework, but I'm not sure if I need to download that separately given that you have to install it from NPM normally.

Now that I'm actually on a PC rather than my phone, I do see the Bitnami Express package, but so far I've only downloaded official packages. Is that what I would need? Also, the Mongo-Express package, is that going to give me what I need out of the box? I've been loving around with this for long enough as-is, I just want to get going so I can start coding.

The Fool
Oct 16, 2003


NPM installs with node, and express is distributed as an npm package.

Any other questions related to this part of your project may be better suited for the JavaScript thread

22 Eargesplitten
Oct 10, 2010



Thanks. I was concerned about the Docker package aspect, since I didn't see an npm package.

I think I have everything I need from Docker at this point, so hopefully past there everything else can go in the JS thread.

And there will be a lot of everything else.

LochNessMonster
Feb 3, 2005

I need about three fitty


22 Eargesplitten posted:

Thanks. I was concerned about the Docker package aspect, since I didn’t see a npm package.

I think I have everything I need from Docker at this point, so hopefully past there everything else can go in the JS thread.

And there will be a lot of everything else.

Just to make sure you're doing it right: you're not spinning up a node image and ssh-ing into the container to install express manually, right?

The idea is that you do this in your dockerfile so each container you start has the exact same setup (without you manually doing stuff to make everything work).

While knowing virtually nothing about node, it will probably look something like this

code:
# base image
FROM node:8

# work out of a dedicated app directory
WORKDIR /usr/src/app

# copy your local version of the project into the container's filesystem
COPY . .

# install express
RUN npm install express

# run config commands if necessary
# RUN <express config commands>

# make the node port reachable from the host
EXPOSE 3000

# start the default express binary (googled this, might be wrong and you want to do npm start or something)
CMD [ "bin/www" ]

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Containers built like that can end up pretty large, and depending on how well things are mapped out in your layers you may be duplicating 50% or more of your 3 GB containers. I'm helping a couple teams transition from a shared snowflake Jenkins setup where everyone was shoving all their dependencies into $HOME/.m2 and .npm and so forth. It turns out that each of our builds sucks down 4 - 9 GB of JARs and NPM packages, and the dot directory is about 95 GB, with our fat JARs for different releases taking up half of it. It's certainly faster to download layers than to grab packages from repos, but it's still annoying for developers used to the dependencies already being in a shared home directory where incremental builds start instantly.


22 Eargesplitten
Oct 10, 2010



LochNessMonster posted:

Just to make sure you're doing it right: you're not spinning up a node image and ssh-ing into the container to install express manually, right?

The idea is that you do this in your dockerfile so each container you start has the exact same setup (without you manually doing stuff to make everything work).

While knowing virtually nothing about node, it will probably look something like this

code:
# base image
FROM node:8

# work out of a dedicated app directory
WORKDIR /usr/src/app

# copy your local version of the project into the container's filesystem
COPY . .

# install express
RUN npm install express

# run config commands if necessary
# RUN <express config commands>

# make the node port reachable from the host
EXPOSE 3000

# start the default express binary (googled this, might be wrong and you want to do npm start or something)
CMD [ "bin/www" ]

Thanks, that makes some sense. I think I'm probably better off just going with traditional, non-containerized stuff at this point. It seems I'm still in over my head, and it's not like this server has to be HA; we can just restart the Pi and browse facebook or watch cat videos on youtube for as long as it takes to come back up. Docker is really cool, but I think I'm getting sucked in by the cool factor rather than being sensible.

Oh well, I'll eventually learn to use this stuff.
