LochNessMonster
Feb 3, 2005

I need about three fitty


I’m setting up a side project at home with about 10 different microservices. Code is managed in Gitlab and consists of python apps, haproxy, webservers and databases all running in docker containers.

What would you guys pick for a CI tool to deploy this? I could go with Jenkins as that is what I’m using at work but I’d rather learn something new that is not a pos.


LochNessMonster


freeasinbeer posted:

Gitlab CI isn’t bad fwiw. If you are looking for dirt cheap and unmanaged, google container builder is decent, but it doesn’t have native gitlab support.

If windows, teamcity.

Otherwise the best self-hosted options are drone (simpler) or concourse (more full-featured).


Gyshall posted:

Gitlab CI is the stones. Give it a try.

Gitlab CI it is, project is already in Gitlab so makes sense. Heard some good things about it before so let’s see how this works.
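For one of the python services, a first stab at a .gitlab-ci.yml could look something like this (just a sketch: the pytest step is an assumption about how the apps are tested, and the CI_REGISTRY_* variables are GitLab's predefined ones for the built-in registry):

```yaml
stages:
  - test
  - build

test:
  stage: test
  image: python:3
  script:
    - pip install -r requirements.txt
    - pytest

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```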

LochNessMonster


22 Eargesplitten posted:

Going through the Docker tutorials I don't even know enough Linux shell to know what some of these commands are doing, so welp. :smith: Guess I'm not doing this stuff without learning a lot more Linux fundamentals.

Get the Sander van Vugt book for RHCSA and start working on Linux. It’s really fundamental and an RHCSA level of understanding will get you a long way.

On the side you can still explore Docker.

LochNessMonster


22 Eargesplitten posted:

Thanks, I'll swing by after work and pick it up from the library. I'm going through a Linux basics course on Lynda since the RHCSA course recommended a year of Linux experience or basics coursework before taking it. It looks like at least in the Denver/Boulder area of CO a junior Linux admin should make around what I'm making now. I would like to make more, but I'm willing to put that off a year or so if it means getting on the right track.

If you don’t start at the bottom (basic linux stuff) you’ll need a lot more time figuring out (basic linux) stuff while working on Docker, K8s and probably a lot of other stuff too.

I get you want to get on the gravy train but to effectively do that you need some Linux experience. This does not have to be 1 year of experience but you should be comfortable with a lot of cli work. It’s not rocket science but it does take some time.

Personally I think RHCSA is a pretty good foundation.

LochNessMonster


Helianthus Annuus posted:

An sftp server with some bash to set up and maintain this directory structure: ${REPO_NAME}/${GIT_SHA}/${MY_COOL_ARTIFACT}

I’m tempted to replace Nexus with this.
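For reference, the quoted ${REPO_NAME}/${GIT_SHA}/${MY_COOL_ARTIFACT} layout only takes a few lines of shell to maintain. A local sketch (the names and /tmp paths are made up for the demo; a real version would sftp/scp instead of cp):

```shell
#!/bin/sh
set -eu
# Hypothetical values; a CI job would get these from its environment
REPO_NAME="myrepo"
GIT_SHA="abc123"          # normally $(git rev-parse HEAD)
ARTIFACT="app.tar.gz"

# Stand-in artifact so the demo is self-contained
touch "/tmp/${ARTIFACT}"

# Maintain the ${REPO_NAME}/${GIT_SHA}/ layout and drop the artifact in
mkdir -p "/tmp/store/${REPO_NAME}/${GIT_SHA}"
cp "/tmp/${ARTIFACT}" "/tmp/store/${REPO_NAME}/${GIT_SHA}/"
```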

LochNessMonster


Blue Ocean is pretty nice, too bad Jenkins/Groovy are lovely.

LochNessMonster


Mega Comrade posted:

I will die on this beautiful hill of plugins.

If you’re forced to work with it, you will. Sooner rather than later.

LochNessMonster


Vulture Culture posted:

Jenkins is so much easier if you deliberately avoid any plugin you don't absolutely have to use

I’ve taken that particular piece of advice very seriously.

Trying to figure out how groovy uses escape characters in specific corner cases is mindblowing though.

Groovy code:
node {
    echo 'No quotes, pipeline command in single quotes'
    sh 'echo $BUILD_NUMBER'
    echo 'Double quotes are silently dropped'
    sh 'echo "$BUILD_NUMBER"'
    echo 'Even escaped with a single backslash they are dropped'
    sh 'echo \"$BUILD_NUMBER\"'
    echo 'Using two backslashes, the quotes are preserved'
    sh 'echo \\"$BUILD_NUMBER\\"'
    echo 'Using three backslashes still results in preserving the single quotes'
    sh 'echo \\\"$BUILD_NUMBER\\\"'
    echo 'To end up with \" use \\\\\\\" (yes, seven backslashes)'
    sh 'echo \\\\\\"$BUILD_NUMBER\\\\\\"'
    echo 'This is fine and all, but we cannot substitute Jenkins variables in single quote strings'
    def foo = 'bar'
    sh 'echo "${foo}"'
    echo 'This does not interpolate the string but instead tries to look up "foo" on the command line, so use double quotes'
    sh "echo \"${foo}\""
    echo 'Great, more escaping is needed now. How about just concatenate the strings? Well that gets kind of ugly'
    sh 'echo \\\\\\"' + foo + '\\\\\\"'
    echo 'We still needed all of that escaping and mixing concatenation is hideous!'
    echo 'There must be a better way, enter dollar slashy strings (actual term)'
    def command = $/echo \\\"${foo}\\\"/$
    sh command
    echo 'String interpolation works out of the box as well as environment variables, escaped with double dollars'
    def vash = $/echo \\\"$$BUILD_NUMBER\\\" ${foo}/$
    sh vash
    echo 'It still requires escaping the escape but that is just bash being bash at that point'
    echo 'Slashy strings are the closest to raw shell input with Jenkins, although the non dollar variant seems to give an error but the dollar slash works fine'
}

LochNessMonster


Bhodi posted:

This is a crime. Did you have to figure all that out on your own?

I found this on github somewhere after I’d already figured out half of it by myself. I didn’t know about “dollar slashy strings” though.

I inherited a (scripted) pipeline that’s mostly shell scripts strung together by groovy functions.

It actually works surprisingly well, and I’m perfectly comfortable with shell scripting and the unique features/behaviour you’ll run into there. But combining it with groovy can only be described as its own layer of hell.

LochNessMonster


A full build for us takes about 10-15 minutes and includes several automated tests with quality gates and deploying to docker. Artifacts are stored in Nexus and they’re tagged by release number and git commit, I think. Nexus usually ‘just works’ so I don’t touch it all that much.

An initial build of a new branch takes an additional 15-20 minutes since we do a restore of our production data backup so each branch can be tested 100% production-like. This is kept persistent for the duration of the branch. With new builds only the missing incremental backups are restored. The upside is that this tests our backups tens of times per day. The downside is that our NFS server takes a “Rodney King”-like beating when we spin up too many new builds at the same time.

It also gives me the peace of mind that whatever goes wrong, I can completely rebuild my production environment in less than half an hour with a single button press and without needing anyone else’s help.

Still there are lots of things I’d like to improve.

LochNessMonster


Warbird posted:

Ok goons, continue doing only puppet work at a job I largely don't like or take a 50k take home pay cut to take a swing at DevOps consulting at a place I've been wanting to go? And no, they won't come up a drat penny.

Stick around and continue looking for a similar DevOps job you do like.

If you don’t have to, never take a paycut. Your negotiation position completely depends on your ability to walk away from the offer if you don’t like it. Read more about that in the negotiation thread in BFC.

Don’t settle for a job at a run-of-the-mill consultancy firm. The hourly rates are usually through the roof, so if they don’t make you a decent offer it teaches you a few things about the way they run their company. You will not, for the life of you, get a decent raise after you start working there. They’re penny pinchers. If they really want you, they’ll come up with a better offer. Otherwise they’re just interested in putting a warm body in a seat. For me this would already be a red flag and I wouldn’t want to work for them.

It sounds like you have a rather cushy job which you don’t have to leave straight away. Take this interview as experience. Update your resume and LinkedIn profile and start looking at companies you’d like to work at for opportunities.

LochNessMonster


Warbird posted:

Thanks, I appreciate the breakdown! My entrance into Ops/DevOps was by complete chance and I never got a chance to become acquainted with most of what you listed; too many fires to put out. I'll set aside this weekend and read up on those points.

You are me, 3-4 years ago. I had a lot of experience in one specific area and wanted to branch out and learn more skills than just being a 1 trick pony (I wasn’t, but felt like one).

I got a gig at a consultancy firm that saw the potential of getting me a gig in my current expertise but also letting me touch new stuff. Getting new roles becomes easier and easier as you’re exposed to more and more technologies.

To speed things up be sure to always have a project you pick up at home or if you have downtime at the office. I don’t mean you need to spend 10 hours each week in your spare time, but looking into stuff you don’t know yet and hear a lot about certainly helps in getting a better view of the big picture.

As others have mentioned, just start playing with stuff and go from there. Methanar made a huge effort-post in the general IT thread some time ago (months probably?) about a good way to get started learning devops skills. If you want I can repost it here, it was an excellent post and helped several goons on their way already.

LochNessMonster


Warbird posted:

I think I’m going to accept that full time consulting gig tomorrow. Pay cut or no I think it would be more beneficial for my career by way of establishing a solid base and having the ability to branch out. My contracting firm also recommended I commit tax fraud so I could get extra cash, so it might be best to not be associated with them.

Silver lining: Since I don’t much care for pissing off the IRS the pay cut is only 25k or so. Which would be about where I would be if I converted at my current place.

That’d be a really dumb move. Find a consultancy gig that’ll at least pay you the same as you earn now.

By accepting the offer your new employer will also know that you’re a weak negotiator and will still accept their terms even if it hurts your own bottom line (like it does now). The fact that they won’t come up a dime now means you’re not likely to ever get a meaningful raise, because they know you’ll accept their terms anyway.

Do yourself a favor and read the negotiation thread in BFC before accepting this offer. There are so many goons who earn thousands of dollars a year more thanks to the advice there.

This job offer is not unique, you can get a dozen like it in a matter of days. The IT market is booming, at least see if you can find another interview or two to see what other companies are willing to pay you. Lowball offer + not willing to negotiate = major red flag!

Edit: negotiation thread link

LochNessMonster fucked around with this message at 05:25 on Oct 1, 2018

LochNessMonster


Helianthus Annuus posted:

can you please repost this?

Took me a bit, it’s already a year old but here goes.

Methanar posted:

What do you want to do?

I know that's a hard question to answer in the very beginning when you're not even entirely sure what the hype behind a particular technology is. I know nothing about your work environment or what your workloads are.

The power of containers is the automation tooling surrounding them. A plain old docker container running somewhere doing something, handled by systemd or whatever, is actually pretty boring. I guess you might be able to make things a bit quicker by pulling down an haproxy image from a public repo or whatever, but that's not the point.

Containers are great because they are the perfect primitive for building upon. What can be built on top of containers? Immutable infrastructures, applications that can be deployed with all of their dependencies bundled with them, intelligent automatic resource scheduling, CI/CD pipelines, blue/green deployments, off the top of my head.

The reality is if you're the kind of windows admin that I was, the value isn't there for you. Whatever it was that I did at previous jobs had literally zero use whatsoever for any of the concepts I just named. But maybe you're not the kind of windows admin I was, or you don't want to be. If you don't know what you want out of containers, or more importantly, the larger superset that containers are part of, other than that you want them, that is perfectly okay.

A good place to start is to just make an account with either Google Cloud Platform or AWS. I'm actually going to recommend GCP here. I've been spending an awful lot of time recently immersed in GCP and it's very approachable compared to AWS. Kubernetes is also a Google product and thus is a first-class citizen in GCP.

Great, you've made your account and are ready to start. Here is where that hard question comes in: what do you want to do? You're entering ~Devops~ territory here. You're not a windows admin anymore working with pre-packaged applications that are built for you. In Devops land, being familiar and comfortable with software development is now an unavoidable necessity, because delivering software that your organization produces is the point. So naturally, I guess the first thing to do is write a hello world micro-service application in the language of your choice. Golang, nodejs, python, ruby. Pick one and follow a guide on the internet.

Your hello world application can be simple, but use many pieces. Find a guide that involves multiple external components, maybe Redis or MySQL. Say ultimately you get 5 pieces to your new micro-service oriented distributed system. A front end, a piece dedicated to db access, something in the background that handled logging, maybe an internal request router, maybe something that procedurally generates a bitmap image, a message bus, redis and your DB daemon. Now, it's time to publish your application to the world. Each micro service is self contained and stateless which means they are a perfect fit for being in a container!

But wait, writing and developing code is hard. The code you write sucks and is actually full of bugs. What a perfect time to set up a CI/CD pipeline to make your software developer lives easier. Like any good developer you've been using Git as your version control system. Why not build a Jenkins server, in a container naturally https://hub.docker.com/r/jenkins/jenkins/, that will automatically build, compile and test your code for you every time you commit a branch? Jenkins can spawn MORE containers where your code will be built and be ran against synthetic tests you write to be sure you haven't introduced regressions. https://techbeacon.com/beginners-guide-kick-starting-your-ci-pipeline-jenkins

Finally: you have a sane build system like any good developer, your code is bug free and ready for the world. Maybe you start off pushing the containers produced by Jenkins to your VMs by hand, because hey, there's only like 7 of them, right? But you continue to grow and your app is pretty popular. It's starting to get hard and expensive to provision all the necessary machines you need to power your bitmap generator. You notice that your application has clearly defined times of the week of peak traffic. Wouldn't it be great if you could size the amount of compute resources you were buying from Google according to your real time traffic load? Enter: Kubernetes.

Kubernetes is a Big Deal. It's actually the technology underlying Google's Container Engine that's been open sourced.
Kubernetes is a system for managing containerized applications across a cluster of nodes, explicitly designed to address the disconnect between the way that modern, distributed systems are designed and the underlying physical infrastructure. Applications comprised of different services should still be managed as a single application (when it makes sense). Kubernetes provides a layer over the infrastructure to allow for this type of management. Scaling traffic up and down according to load, logically grouping containers together, software defined networking and so much more are now possible.

Logically grouping containers together: maybe it just always makes sense for your bitmap generator to have 4 micro-services running on the same host to minimize InterProcess Communication (IPC) latency. Kubernetes can do that. Maybe you always want X amount of microservices running on different underlying hardware to be resilient to datacenter mishaps. Kubernetes can do that. Since Kubernetes is now in front of your apps providing load balancing services, you can do things like blue/green deployments. Let's say parts of your application are stateful, how do you deploy new code? How about just building an entire new parallel environment that you send new users to while the existing stateful sessions just naturally drain off of the old environment. How about running as many versions of the code you write at once?

Containers are the fundamental unit making up larger systems. This is why saying you want to do containers or devops is meaningless. Because it's not something you apt-get install or curl | bash. Devops is to technology-focused companies as the scientific method was to chemists.


This is why containers and the Devops concept/mentality/paradigm/thing is useless to the kind of internal IT windows admin that I was. We didn't write code, we didn't open source software that we were empowered to orchestrate. Running large distributed systems was not our business. If you want to 'get in on this container thing' you need to evaluate what you're doing with it. Maybe you're not satisfied with being an internal windows admin anymore and that's why you're interested. Excellent! The new world of online services is big and scary, but it's here, and more accessible than ever. Join a mailing list! Go to the Kubernetes github and open every link in a tab and read it all! Write your hello world app! Learn to program! (I've got another huge rant about 'learn to program') Read my posts!

LochNessMonster


Doom Mathematic posted:

Last time I checked it was Cypress. I could be wrong though.

Can confirm, Cypress is pretty cool.

LochNessMonster


Warbird posted:

My man! I hadn’t even considered Gcloud. Well I know what I’m doing for a bit.

So did you take the paycut?

LochNessMonster


Warbird posted:

Nope. Just found out that our PO is going to be taking off every M/F for the rest of the year and oh man does that untracked time off sound better now. I’m still convinced it’s a trap though.

Good to hear, man. Start learning topics you feel you’re lacking in during the M/F your boss is out, and search for a company that wants to bring you in as the puppet guru but still wants to teach you other devops stuff.

geeves posted:

Don't use nfs and gitlab w/ Postgres. Just learned that the hard way.

That goes for any persistent data that requires lots of writes.

Source: also learned it the hard way.

LochNessMonster


22 Eargesplitten posted:

Thanks. I was concerned about the Docker package aspect, since I didn’t see a npm package.

I think I have everything I need from Docker at this point, so hopefully past there everything else can go in the JS thread.

And there will be a lot of everything else.

Just to make sure you’re doing it right: you’re not spinning up a node image and ssh-ing into the container to install express manually, right?

The idea is that you do this in your Dockerfile so each container you start has the exact same setup (without you manually doing stuff to make everything work).

While knowing virtually nothing about node, it will probably look something like this:

code:
# base image
FROM node:8

# copy your local version of the project into the containers filesystem
ADD . /usr/src/app

# install express
RUN npm install <express package>

# run config commands if necessary
RUN <express config commands>

# make node port reachable to the host
EXPOSE 3000

# start default express binary (googled this, might be wrong and you want to do npm start or something)
CMD [ "bin/www" ]

LochNessMonster


I’d use ansible. No clients to set up, just ssh keys. It works with yaml files, so it’s not too difficult to manage either.
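A playbook to get started could be as small as this (a sketch; the webservers group and haproxy are just stand-ins for whatever the hosts actually run):

```yaml
# site.yml - run with: ansible-playbook -i inventory site.yml
- hosts: webservers
  become: true
  tasks:
    - name: Ensure haproxy is installed
      package:
        name: haproxy
        state: present

    - name: Ensure haproxy is running and enabled
      service:
        name: haproxy
        state: started
        enabled: true
```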

LochNessMonster


Gyshall posted:

Think of Vagrant like a shittier Terraform but more for your local virtualbox/hyperv or whatever.

I hardly use Vagrant at all anymore since Docker accomplishes the same thing.

Same, Vagrant is just a larger/slower alternative to docker containers for me at this point.

LochNessMonster


Spring Heeled Jack posted:

Those using docker swarm, what do you do to deploy/update services in a pipeline? We have build, tag, and push to our private repo in bamboo which then creates a release in octopus.

What I’m mostly seeing is to ssh into a manager node and run the stack deploy command from there, which is fine but it seems like there should be a better way. (Or there would be a better way by now if k8s hadn’t eaten its lunch).

You don’t have to ssh into a manager node, you can run the command from your build server with
code:
docker --tlsverify -H tcp://ip:port stack deploy --compose-file compose.yml stackname

LochNessMonster fucked around with this message at 16:51 on Jan 9, 2019

LochNessMonster


Grump posted:

Unfortunately, this is going way over my head. I know so very little about docker, and had to rely on a very hand-holdy tutorial to get me a working docker-compose file.

Like.....do I still need nginx or apache? Or am I just running docker-compose on the server? And the request to the IP address will just work?

Your image is a template. You can export it to a registry (or tar file) and import it to any other computer/server in the world and a container created from that template will run exactly the same on each machine.

Flask is your web/app server and is already installed in your container. If you start a container with that image it’ll come up on port 5000 on that machine. If you use multiple containers with the same ports exposed you want to start diving deeper into docker (ingress and orchestration) but for now this should be enough.
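Publishing that port is the part docker-compose handles for you. Something like this (a sketch; the service and image names are placeholders) maps the host’s port 80 to Flask’s 5000 so a plain request to the server’s IP works without nginx or apache in front:

```yaml
version: '3.7'
services:
  web:
    image: my-flask-app    # placeholder for your built image
    ports:
      - "80:5000"
```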

LochNessMonster


necrobobsledder posted:

(never, ever use git-flow for infrastructure repositories - learn from my lack of backbone, friends)

I’m using git flow for our (small scale) infra repos and am wondering what kind of problems I’d be running into in the future and what alternatives there are. Care to elaborate on this?

LochNessMonster


Pretty happy I stumbled into this thread and read about Caddyserver. I was trying to build a forwarding proxy for some legacy apps that aren't maintained anymore but need to send data to an API that (rightfully) only takes data from a cert-based authenticated source. I was trying to do this with nginx but found out that it doesn't play nice with https for this use case, so someone built an nginx mod for it: https://github.com/chobits/ngx_http_proxy_connect_module

Caddyserver sounds like a way better alternative. I'm going to give that a go.

LochNessMonster


Probably a stupid question, but is anyone familiar with the placement preferences and constraints of a swarm cluster?

What I’d like to do is place a specific service with just 1 container always on host1 of our dev swarm cluster. If host1 is not available it can run on any other node. Preferably it gets placed back on host1 when it’s up again, but I’m not bothered if that last step doesn’t happen until the container is killed.

Something like this should do that, right? Deploy to any swarm host with the dev label and try to place it on host1 if possible.


YAML code:
deploy:
  placement:
    constraints:
      - node.labels.type == dev
    preferences:
      - spread: unique.label.host1

LochNessMonster


NihilCredo posted:

That sounds almost exactly like the use case described in the Docker Compose reference. Is it not working as expected?

Need to see for myself but I haven’t had time to test yet. A coworker has been fiddling with this for a few days and claims it isn’t working. Figured I’d ask here in case I’m missing something obvious.

LochNessMonster


Did a quick test on the swarm stack deploy; below is the exact compose file I'm testing with.

code:
version: '3.7'
services:
  preftest:
    image: nginx:alpine
    deploy:
      placement:
        preferences:
          - spread: node.labels.rack == 1
        constraints:
          - node.labels.acceptance == true
It will ignore the preference completely and deploy to any node that matches the constraint. The node it deployed to did not have a rack label at all, and the node that does have the rack label (with value 1) was up and running and not starved for resources.

When removing the constraint it'll deploy to any node in the cluster.

Tried switching the order of preferences/constraints but it has the same result. Either I'm missing something obvious or the preferences are not working properly. I've removed the label entirely, re-added it without a value, and changed the placement preference to spread: node.labels.rack == true, but that didn't change anything either. The strange thing about the label is that a docker node ls -f "Label=rack" doesn't return anything either. It might be that the label is not being recognized or something, but it certainly shows up when doing a node inspect.

edit:

Saw a PR concerning documentation about this feature on github, mentioning that nodes with no label assigned are treated as if they had the label with no value. So I assigned rack=false to all other nodes in the cluster and it still placed the container on nodes other than the preferred one. Adding a constraint on rack=1 to the compose file works properly, so it's not a label thing.

The preference functionality just seems to be not working at all.

edit 2:

Figured that "spread" might only work for n > 1 and gave some other nodes the true value for this label, but that didn't change anything at all.
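One thing that might be worth double checking: the compose file reference shows spread taking a bare label key (it spreads tasks evenly over the values of that label), not an == expression the way constraints do. So the preference block would read something like (sketch):

```yaml
deploy:
  placement:
    preferences:
      - spread: node.labels.rack
    constraints:
      - node.labels.acceptance == true
```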

LochNessMonster fucked around with this message at 13:25 on May 31, 2019

LochNessMonster


Cancelbot posted:

Very much this. I'm our "DevOps Lead" with a few engineers, but our role is to get everyone to adopt the mindset through tooling, process, and culture changes. The aim is to shrink & eliminate the team as the capability & autonomy of the wider IT department grows. I suspect (hope) this will happen in 12-18 months if we keep doing things right.

Same. Our front end devs acted really surprised when I started asking them what they’d tried to do to solve their failing builds. I’m currently building a team that will improve the release automation as well as educate and help all developers adopt the mindset of ‘you ship it, you run it’. We’ve got the shipping part down; the running-it mindset is slowly getting there.

Too bad C level fired the 3 project teams that were actually doing this and helping all new teams do the same. Reason: they have some major bonuses coming up after finalizing a major acquisition and need to keep costs in check. This basically means that for the rest of the year they’ll Thanos-snap half of all projects and no budgets can be changed anymore.

LochNessMonster


Ape Fist posted:

The thing is if I actually telnet into it and run the command manually it works fine?

Please tell me you don’t actually have TELNET enabled on a server in tyool 2019....

LochNessMonster


I’m also using s6-overlay for this. Will look into this solution too, thanks for sharing!

We’re moving away from NFS mounted volumes for persistent data entirely though. We’ve had so many issues with it over the last 2 years.

LochNessMonster


PBS posted:

Yeah it's a bear, I remember it being a nightmare in my home lab, but I finally got it all working and it's Just Worked ever since.

This is just for our swarm clusters which run things we can't put in kubernetes for reasons that are too sad for me to try to explain, nfs is really the only shared file system available to them atm.

It's really staggering when I look back and see how much time I've wasted working around all our snowflake requirements or just general dumb practices we're dragged along with.

I’m also using it on Swarm because, as you said, there is no alternative. I think the majority of my issues have been with NFS. Our decision to disable the routing mesh and running our own ingress service is a close second though.

In hindsight we should’ve picked K8s over Swarm but 2-3 years back it was more of a coin toss.

LochNessMonster


I’m finally moving my platform from a trainwreck MSP to AWS and that means also migrating some services that are not mine.

To do that I’m creating an EKS cluster per project which needs to be provisioned by Terraform. Any configuration will be done by Ansible and code deployment by Jenkins.

This means that my IaC configs need to be reusable for projects other than my own, so instead of having it in my application’s codebase I’m creating a separate IaC repo.

I was wondering if there are any standards / best practices on how to structure the code?

I was thinking something along the lines of

code:
Project1/
	Terraform/
		main.tf
		variables.tf
		output.tf
	Ansible/
		playbooks/
			roles/
	Jenkins/
Project2/
	Terraform/
	Ansible/
	Jenkins/

LochNessMonster fucked around with this message at 07:02 on Sep 13, 2019

LochNessMonster


12 rats tied together posted:

My personal preference would be:
code:
iac/
  ansible/
    playbooks/
      roles/
        terraform-thing-1/ (tasks/, templates/, handlers/, etc)
        terraform-thing-2/
        ansible-thing-1/
        ansible-thing-2/
        jenkins-config/
        [etc]
      project-1.yaml
      project-2.yaml
Put all of your orchestration and config (terraform, ansible, and jenkins) into a playbook named after each project. Use ansible's terraform and jenkins_job modules to run your terraform operations and configure jenkins from ansible-playbook. Use task tags to support least-resistance code paths through your playbooks: you probably don't need to run terraform all the time, so it shouldn't run unless requested with --tags terraform.

How do you differentiate between dev/test/uat/prod environments? Let ansible take care of that based on the inventory / group vars which are used to deploy this?
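For illustration, the tag-gated terraform task 12 rats describes might look something like this inside project-1.yaml (a sketch; the project path is invented):

```yaml
- hosts: localhost
  tasks:
    - name: Apply terraform for project 1
      terraform:
        project_path: ../terraform/project-1   # hypothetical path
        state: present
      tags:
        - terraform
```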

LochNessMonster


Running into a terraform issue and as I'm pretty new to it, I can't wrap my head around it. I'm using a module to fill my variables.tf, but when running terraform init I'm getting the following error (management is the name of my module):

code:
[me@box]$ terraform init                                                                                                                                                                                                        
Initializing modules...
Downloading /path/Terraform for management...
- management in .terraform/modules/management 
Downloading /path/Terraform for management.management...
- management.management in .terraform/modules/management.management
Downloading /path/Terraform for management.management.management...
- management.management.management in .terraform/modules/management.management.management
Downloading /path/Terraform for management.management.management.management...
- management.management.management.management in .terraform/modules/management.management.management.management
Downloading /path/Terraform for management.management.management.management.management...
- management.management.management.management.management in .terraform/modules/management.management.management.management.management

<this continues for some time> 


Terraform tried to remove
.terraform/modules/management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management
in order to reinstall this module, but encountered an error: unlinkat
This appears to be the following bug

I'm running the init command from path/, file structure is as follows:
code:
path/module.tf
path/terraform/main.tf
path/terraform/variables.tf
The issue on the issue tracker says this happens when 2 or more modules depend on each other, but as all values in the module (the only one I'm using) are hardcoded, I'm not seeing on which other module this would depend. This seems like such trivial behaviour that I feel I must be doing something wrong, but I have no clue what.

edit: I'm an idiot. I had module.tf in both path/ and path/terraform/. Removing the latter solved the issue.
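For anyone hitting the same thing: the recursion happens because the module ends up containing a declaration of itself. Roughly (the names and paths below are from my setup, sketched from memory):

```hcl
# path/module.tf -- fine on its own:
module "management" {
  source = "./terraform"
}

# A second copy of this file inside path/terraform/ makes the
# "management" module declare a "management" module inside itself,
# so `terraform init` resolves management.management.management...
# forever. Deleting path/terraform/module.tf breaks the loop.
```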

LochNessMonster fucked around with this message at 15:01 on Sep 19, 2019

LochNessMonster


Apparently our SonarQube license is based on a server ID which changes for each machine. I'd like to move this away from a 24/7 machine as we only utilize SQ during business hours, which means terminating the machine saves me 100ish hours of compute time each week. How do you typically handle these kinds of licenses with regard to immutable infra?
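(For reference: 24/7 is 168 hours a week, so running only ~60 business hours is where the 100ish comes from.) A scheduled stop/start is one way to get most of the savings; a hypothetical crontab, assuming an EC2 instance and the AWS CLI. Stopping rather than terminating keeps the same machine, which presumably keeps the server ID stable too, though it does trade away the immutability:

```text
# Stop SonarQube at 19:00 on weekdays, start it again at 07:30
0  19 * * 1-5  aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
30  7 * * 1-5  aws ec2 start-instances --instance-ids i-0123456789abcdef0
```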

LochNessMonster


Gyshall posted:

We run SQ in a container and don't have that issue. What's your setup look like?

Currently it’s at an MSP on a virtual server with a dedicated DB server. Looking to deploy it to an EKS cluster.

Last time the MSP migrated it to a new server we had to request a new license because the new install used a new Server ID which caused the old license to stop working.

LochNessMonster


Dynatrace is a lot more mature than AppDynamics and NewRelic. I’d pick it any day over the others.

LochNessMonster


Did the training myself a while back because my employer paid for it and lots of customers are "using/implementing" it as their way of working. It's complete garbage and combines all the buzzwords into something that gives management control over the teams' backlogs, which makes it the exact opposite of Agile. Commonly referred to as Stupid Agile For Enterprises. PI events are complete garbage cram sessions that try to put people on the spot for "giving commitment to the PI goals", which can be held over their heads later to force them into delivering garbage.

The exam is web-based and not proctored. The exam questions and answers are literally one Google search away. Bad companies will like it on your resume. If you're into consulting it'll be a good HR check.

LochNessMonster


FISHMANPET posted:

We did 3 days of PI Planning (our first time) last week over Zoom. We're infrastructure, so the overall "product" is poorly defined. One of the biggest actual theoretical advantages of safe is coordinating dependencies between teams, but nearly all our features are independent. I don't think sprints really work for infrastructure, when there's generally more external dependencies that can't be managed, and a lot of the work (even the planned work) is much more reactive and dependent on getting feedback from customers. We've also all been scrambled up out of our traditional domain based teams (linux, windows server, database) into a bunch of generic jack of all trades teams with a little bit of everything.

Love Safe!

I find Kanban a more practical approach for Infra/Platform teams. Especially in teams with lots of fires to put out, you don't have to modify the sprint goal by removing user stories to make room for bug fixes. And just like you said, there are so many dependencies, fixed time windows, and troubleshooting to be done that it's hard to properly estimate time spent, let alone commit to fixing something in a specific sprint.


LochNessMonster


New Relic's APM is very nice, but their infra monitoring is pretty crappy.

Datadog's customer support blows big time. I'm also not much of a fan of their tooling in general.

I really like the way Elastic is moving forward. Especially now that they're working on a unified agent so I don't have to deploy the metric, file, log, and audit beats separately. Currently using it as a log and metrics platform. Might incorporate APM in the near future as well.
