|
I’m setting up a side project at home with about 10 different microservices. Code is managed in Gitlab and consists of python apps, haproxy, webservers and databases all running in docker containers. What would you guys pick for a CI tool to deploy this? I could go with Jenkins as that is what I’m using at work but I’d rather learn something new that is not a pos.
|
# ¿ Jun 24, 2018 16:12 |
|
|
# ¿ May 4, 2024 00:30 |
|
freeasinbeer posted:Gitlab CI isn’t bad fwiw. If you are looking for dirt cheap and unmanaged google container builder is decent. But doesn’t have native gitlab support.

Gyshall posted:Gitlab CI is the stones. Give it a try.

Gitlab CI it is, the project is already in Gitlab so it makes sense. I’ve heard some good things about it before, so let’s see how this works.
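For future reference, a bare-bones .gitlab-ci.yml for one of the dockerized services could look something like this (stage names and the deploy step are placeholders, not my actual config):

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # CI_REGISTRY_* and CI_COMMIT_SHA are GitLab's predefined variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  script:
    - ./deploy.sh "$CI_COMMIT_SHA"   # placeholder deploy script
  only:
    - master
```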
|
# ¿ Jun 25, 2018 13:27 |
|
22 Eargesplitten posted:Going through the Docker tutorials I don't even know enough Linux shell to know what some of these commands are doing, so welp. Guess I'm not doing this stuff without learning a lot more Linux fundamentals.

Get the Sander van Vugt book for RHCSA and start working on Linux. It’s really fundamental and an RHCSA level of understanding will get you a long way. On the side you can still explore Docker.
|
# ¿ Jul 29, 2018 08:40 |
|
22 Eargesplitten posted:Thanks, I'll swing by after work and pick it up from the library. I'm going through a Linux basics course on Lynda since the RHCSA course recommended a year of Linux experience or basics coursework before taking it. It looks like at least in the Denver/Boulder area of CO a junior Linux admin should make around what I'm making now. I would like to make more, but I'm willing to put that off a year or so if it means getting on the right track.

If you don’t start at the bottom (linux basic stuff) you’ll need a lot more time figuring out (basic linux) stuff while working on Docker, K8s and probably a lot of other stuff too. I get you want to get on the gravy train but to effectively do that you need some Linux experience. This does not have to be 1 year of experience but you should be comfortable with a lot of cli work. It’s not rocket science but it does take some time. Personally I think RHCSA is a pretty good foundation.
|
# ¿ Jul 30, 2018 23:16 |
|
Helianthus Annuus posted:An sftp server with some bash to set up and maintain this directory structure: ${REPO_NAME}/${GIT_SHA}/${MY_COOL_ARTIFACT}

I’m tempted to replace Nexus with this.
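Something like this, presumably: a toy sketch of the directory-structure half (names invented, and the sftp transport left out):

```shell
# Toy sketch of a publish script maintaining the
# ${REPO_NAME}/${GIT_SHA}/${MY_COOL_ARTIFACT} layout; in real life this
# would run against an sftp-mounted path instead of a local tempdir.
set -eu

publish() {
  store=$1 repo=$2 sha=$3 artifact=$4
  mkdir -p "${store}/${repo}/${sha}"
  cp "$artifact" "${store}/${repo}/${sha}/"
}

store=$(mktemp -d)
printf 'build output\n' > my-cool-artifact.tar
publish "$store" myrepo 0d1e2f3 my-cool-artifact.tar
ls "${store}/myrepo/0d1e2f3"
```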
|
# ¿ Sep 12, 2018 19:40 |
|
Blue Ocean is pretty nice, too bad Jenkins/Groovy are lovely.
|
# ¿ Sep 18, 2018 20:25 |
|
Mega Comrade posted:I will die on this beautiful hill of plugins.

If you are forced to work with it you will. Rather sooner than later.
|
# ¿ Sep 19, 2018 12:10 |
|
Vulture Culture posted:Jenkins is so much easier if you deliberately avoid any plugin you don't absolutely have to use

I’ve taken that particular piece of advice very seriously. Trying to figure out how groovy uses escape characters in specific corner cases is mindblowing though. Shell session code:
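To show the root of the pain with plain shell only (no Jenkins involved): every interpreter layer that touches a string eats one level of quoting/expansion, and with Jenkins you get Groovy string interpolation as an extra layer on top of that.

```shell
# Each interpreter layer consumes one level of quoting/expansion.
set -eu
GREETING=outer

# one layer: this shell expands $GREETING itself
one=$(echo "$GREETING")

# two layers: single quotes stop the outer shell, so the inner sh -c
# gets the literal text $GREETING; the variable isn't exported, so
# the inner shell expands it to nothing
two=$(sh -c 'echo "$GREETING"')

# export it and the inner layer sees the value again
export GREETING
three=$(sh -c 'echo "$GREETING"')

printf '%s|%s|%s\n' "$one" "$two" "$three"
```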
|
# ¿ Sep 19, 2018 16:04 |
|
Bhodi posted:This is a crime. Did you have to figure all that out on your own?

I found this on github somewhere when I had already figured out half of it by myself. I didn’t know about “dollar slashy string”. I inherited a (scripted) pipeline that’s mostly shell scripts strung together by groovy functions. It actually works surprisingly well and I’m perfectly comfortable with shell scripting and the unique features/behaviour you’ll run into with that. But combining it with groovy can only be described as its own layer in hell.
|
# ¿ Sep 19, 2018 17:40 |
|
A full build for us takes about 10-15 minutes and includes several automated tests with quality gates and deploying to docker. Artifacts are stored in Nexus and they’re tagged by release number and git commit, I think. Nexus usually ‘just works’ so I don’t touch it all that much. An initial build of a new branch will take an additional 15-20 minutes since we do a restore of our production data backup so each branch can be tested 100% production-like. This is kept persistent for the duration of the branch. With new builds only the missing incremental backups are restored. The upside is that this tests our backups tens of times per day. The downside is that our NFS server takes a severe beating when we spin up too many new builds at the same time. It also gives me the peace of mind that whatever goes wrong, I can completely rebuild my production environment in less than half an hour with a single button press and without needing anyone else’s help. Still there are lots of things I’d like to improve.
|
# ¿ Sep 27, 2018 07:08 |
|
Warbird posted:Ok goons, continue doing only puppet work at a job I largely don't like or take a 50k take home pay cut to take a swing at DevOps consulting at a place I've been wanting to go? And no, they won't come up a drat penny.

Stick around and keep looking for a similar DevOps job you do like. If you don’t have to, never take a paycut. Your negotiating position depends entirely on your ability to walk away from an offer you don’t like. Read more about that in the negotiation thread in BFC. Don’t settle for a job at a run-of-the-mill consultancy firm. The hourly rates are usually through the roof, so if they don’t make you a decent offer it teaches you a few things about the way they run their company. You will not, for the life of you, get a decent raise after you start working there. They’re penny pinchers. If they really want you, they’ll come up with a better offer. Otherwise they’re just interested in putting a warm body in a seat. For me this would already be a red flag and I wouldn’t want to work for them. It sounds like you have a rather cushy job which you don’t have to leave straight away. Take this interview as experience. Update your resume and LinkedIn profile and start looking at companies you’d like to work at for opportunities.
|
# ¿ Sep 28, 2018 09:03 |
|
Warbird posted:Thanks, I appreciate the breakdown! My entrance into Ops/DevOps was by complete chance and I never got a chance to become acquainted with most of what you listed; too many fires to put out. I'll set aside this weekend and read up on those points.

You are me, 3-4 years ago. I had a lot of experience in one specific area and wanted to branch out and learn more skills than just being a 1 trick pony (I wasn’t, but felt like one). I got a gig at a consultancy firm that saw the potential of getting me a gig in my current expertise but also let me touch new stuff. Getting new roles becomes easier and easier as you’re exposed to more and more technologies. To speed things up, be sure to always have a project you pick up at home or if you have downtime at the office. I don’t mean you need to spend 10 hours each week in your spare time, but looking into stuff you don’t know yet and hear a lot about certainly helps in getting a better view of the big picture. As others have mentioned, just start playing with stuff and go from there. Methanar made a huge effort-post in the general IT thread some time ago (months probably?) about a good way to get started learning devops skills. If you want I can repost it here, it was an excellent post and has helped several goons on their way already.
|
# ¿ Sep 29, 2018 07:57 |
|
Warbird posted:I think I’m going to accept that full time consulting gig tomorrow. Pay cut or no I think it would be more beneficial for my career by way of establishing a solid base and having the ability to branch out. My contracting firm also recommended I commit tax fraud so I could get extra cash, so it might be best to not be associated with them.

That’d be a really dumb move. Find a consultancy gig that’ll at least pay you the same as you earn now. By accepting the offer your new employer will also know that you’re a weak negotiator and will still accept their terms even if it hurts your own bottom line (like it does now). The fact that they won’t come up a dime now means you’re not likely to ever get a meaningful raise, because they know you’ll accept their terms anyway. Do yourself a favor and read the negotiation thread in BFC before accepting this offer. There are so many goons who earn thousands of dollars a year more thanks to the advice there. This job offer is not unique, you can get a dozen like it in a matter of days. The IT market is booming; at least see if you can find another interview or two to see what other companies are willing to pay you. Lowball offer + not willing to negotiate = major red flag!

Edit: negotiation thread link

LochNessMonster fucked around with this message at 05:25 on Oct 1, 2018 |
# ¿ Oct 1, 2018 05:22 |
|
Helianthus Annuus posted:can you please repost this?

Took me a bit, it’s already a year old but here goes.

Methanar posted:What do you want to do?
|
# ¿ Oct 2, 2018 19:44 |
|
Doom Mathematic posted:Last time I checked it was Cypress. I could be wrong though.

Can confirm, Cypress is pretty cool.
|
# ¿ Oct 2, 2018 21:50 |
|
Warbird posted:My man! I hadn’t even considered Gcloud. Well I know what I’m doing for a bit.

So did you take the paycut?
|
# ¿ Oct 3, 2018 21:27 |
|
Warbird posted:Nope. Just found out that our PO is going to be taking off every M/F for the rest of the year and oh man does that untracked time off sound better now. I’m still convinced it’s a trap though.

Good to hear that man. Start learning topics you feel you lack in during the M/F your boss is not in, and search for a company that wants to bring you in as the puppet guru but still wants to teach you other devops stuff.

geeves posted:Don't use nfs and gitlab w/ Postgres. Just learned that the hardway.

That goes for any persistent data that requires lots of writes to it. Source: also learned it the hard way.
|
# ¿ Oct 4, 2018 06:43 |
|
22 Eargesplitten posted:Thanks. I was concerned about the Docker package aspect, since I didn’t see a npm package.

Just to make sure you’re doing it right. You’re not spinning up a node image and ssh-ing into the container to install express manually, right? The idea is that you do this in your dockerfile so each container you start has the exact same setup (without you manually doing stuff to make everything work). While knowing virtually nothing about node it will probably look something like this code:
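A minimal sketch of what I mean (untested, the node specifics are guesses on my end):

```dockerfile
# Start from the official node base image and bake the dependencies into
# the image at build time instead of installing them in a running container.
FROM node:10

WORKDIR /app

# install express (and any other deps) during the build
COPY package.json ./
RUN npm install

# copy the app itself
COPY . .

EXPOSE 3000
CMD ["node", "index.js"]
```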
|
# ¿ Oct 8, 2018 08:41 |
|
I’d use ansible. No clients to set up, just ssh keys. Works with yaml files so not too difficult to manage either.
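For example, a playbook this small already does useful work (host group and package are placeholders); run it with ansible-playbook -i inventory playbook.yml:

```yaml
# playbook.yml: ensure nginx is present and running on all web hosts
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Start and enable nginx
      service:
        name: nginx
        state: started
        enabled: true
```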
|
# ¿ Oct 10, 2018 21:53 |
|
Gyshall posted:Think of Vagrant like a shittier Terraform but more for your local virtualbox/hyperv or whatever.

Same, Vagrant is just a larger/slower alternative to docker containers for me at this point.
|
# ¿ Nov 28, 2018 21:45 |
|
Spring Heeled Jack posted:Those using docker swarm, what do you do to deploy/update services in a pipeline? We have build, tag, and push to our private repo in bamboo which then creates a release in octopus.

You don’t have to ssh into a manager node, you can run the command from your build server with code:
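Something along these lines from the build agent (hostnames and image names are examples; the command is echoed here instead of executed):

```shell
# Point the docker CLI at a swarm manager so the build server can run
# the update itself instead of ssh-ing into a manager node.
export DOCKER_HOST="tcp://swarm-manager.example.com:2376"

BUILD_NUMBER=42   # normally injected by the CI server
IMAGE="registry.example.com/myapp:${BUILD_NUMBER}"
UPDATE_CMD="docker service update --with-registry-auth --image ${IMAGE} myapp"

# in a real pipeline you would run $UPDATE_CMD; echoed here for illustration
echo "$UPDATE_CMD"
```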
LochNessMonster fucked around with this message at 16:51 on Jan 9, 2019 |
# ¿ Jan 9, 2019 16:48 |
|
Grump posted:Unfortunately, this is going way over my head. I know so very little about docker. And had to rely on a very hand-holdy tutorial to get my a working docker-compose file

Your image is a template. You can export it to a registry (or tar file) and import it to any other computer/server in the world and a container created from that template will run exactly the same on each machine. Flask is your web/app server and is already installed in your container. If you start a container with that image it’ll come up on port 5000 on that machine. If you use multiple containers with the same ports exposed you want to start diving deeper into docker (ingress and orchestration) but for now this should be enough.
|
# ¿ Jan 20, 2019 09:53 |
|
necrobobsledder posted:(never, ever use git-flow for infrastructure repositories - learn from my lack of backbone, friends)

I’m using git flow for our (small scale) infra repos and am wondering what kind of problems I’d be running into in the future and what alternatives there are. Care to elaborate on this?
|
# ¿ Jan 21, 2019 07:09 |
|
Pretty happy I stumbled into this thread and read about the Caddyserver. I was trying to build a forwarding proxy for some legacy apps that aren't maintained anymore but need to send data to an API that (rightfully) only takes data from a cert-based authenticated source. I was trying to do this with nginx but found out that it doesn't play nice with https when trying to do this. So someone built an nginx mod for this: https://github.com/chobits/ngx_http_proxy_connect_module Caddyserver sounds like a way better alternative. I'm going to give that a go.
|
# ¿ May 17, 2019 13:34 |
|
Probably a stupid question but is anyone familiar with the placement preferences and constraints of a swarm cluster? What I’d like to do is place a specific service with just 1 container always on host1 of our dev swarm cluster. If host1 is not available it can run on any other node. Preferably it gets placed back on host1 when it’s up again but I’m not bothered if that last step doesn’t happen until the container is killed. Something like this should do that right? Deploy to any swarm host with the dev label and try to place it on host1 if possible. YAML code:
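Concretely, this is the shape I have in mind (image and label names made up to match our cluster):

```yaml
version: "3.7"
services:
  myservice:
    image: registry.example.com/myservice:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.env == dev       # only run on dev swarm nodes
        preferences:
          - spread: node.labels.rack     # prefer the node labeled for host1
```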
|
# ¿ May 27, 2019 17:02 |
|
NihilCredo posted:That sounds almost exactly like the use case described in the Docker Compose reference. Is it not working as expected?

Need to see for myself but haven’t had time to test yet. Coworker has been fiddling with this for a few days and claims this isn’t working. Figured I’d ask here if I’m missing something obvious or not.
|
# ¿ May 28, 2019 18:37 |
|
Quick test on the swarm stack deploy; below is the exact compose file I'm testing with. code:
When removing the constraint it'll deploy to any node in the cluster. Tried to switch the order of preferences/constraints but it has the same result. Either I'm missing something obvious or the preferences are not working properly. I've removed the label entirely and re-added it without value, and changed the placement preference to spread: node.labels.rack == true, but that didn't change anything either. The strange thing about the label is that a docker node ls -f "Label=rack" doesn't return anything either. It might be that the label is not being recognized or something, but it certainly shows up when doing a node inspect.

edit: Saw a PR concerning documentation about this feature on github mentioning that nodes assigned no label are being treated as if they had the label but with no value assigned. So I assigned rack=false values to all other nodes in the cluster and it still placed the container on other nodes than the preferred one. Adding a rack=1 constraint to the compose file works properly, so it's not a label thing. The preference functionality just seems to be not working at all.

edit 2: Figured that "spread" might only work for n > 1 and gave some other nodes the true value for this label but that didn't change anything at all.

LochNessMonster fucked around with this message at 13:25 on May 31, 2019 |
# ¿ May 31, 2019 12:27 |
|
Cancelbot posted:Very much this. I'm our "DevOps Lead" with a few engineers, but our role is to get everyone to adopt the mindset through tooling, process, and culture changes. The aim is to shrink & eliminate the team as the capability & autonomy of the wider IT department grows.

Same. Our front end devs acted really surprised when I started to ask them what they tried to do to solve their failing builds. I’m currently building a team that will improve the release automation as well as educate and help all developers to adopt the mindset of ‘you ship it, you run it’. We’ve got the shipping part down; the running it mindset is slowly getting there. Too bad C level fired the 3 project teams that were actually doing this and helping all new teams to do the same. Reason: they have some major bonuses coming up after finalizing a major acquisition and need to keep the costs in check. This basically means the rest of the year they’ll Thanos snap half of all projects and no budgets can be changed anymore.
|
# ¿ Jul 4, 2019 21:19 |
|
Ape Fist posted:The thing is if I actually telnet into it and run the command manually it works fine?

Please tell me you don’t actually have TELNET enabled on a server in tyool 2019....
|
# ¿ Jul 7, 2019 07:08 |
|
I’m also using s6-overlay for this. Will look into this solution too, thanks for sharing! We’re moving away from NFS mounted volumes for persistent data entirely though. We’ve had so many issues with it over the last 2 years.
|
# ¿ Sep 5, 2019 05:07 |
|
PBS posted:Yeah it's a bear, I remember it being a nightmare in my home lab, but I finally got it all working and it's Just Worked ever since.

I’m also using it on Swarm because, as you said, there is no alternative. I think the majority of my issues have been with NFS. Our decision to disable the routing mesh and run our own ingress service is a close second though. In hindsight we should’ve picked K8s over Swarm but 2-3 years back it was more of a coin toss.
|
# ¿ Sep 5, 2019 05:47 |
|
I’m finally moving my platform from a trainwreck MSP to AWS and that means also migrating some services that are not mine. To do that I’m creating an EKS cluster per project which needs to be provisioned by Terraform. Any configuration will be done by Ansible and code deployment by Jenkins. This means that my IaC configs need to be reusable for other projects than my own, so instead of having it in my application's codebase I’m creating a separate IaC repo. I was wondering if there are any standards / best practices on how to structure the code? I was thinking something along the lines of code:
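Roughly this layout (directory names are just a first guess):

```
iac-repo/
├── terraform/
│   ├── modules/
│   │   └── eks-cluster/
│   └── environments/
│       ├── dev/
│       └── prod/
├── ansible/
│   ├── inventories/
│   ├── group_vars/
│   └── playbooks/
└── jenkins/
    └── Jenkinsfile
```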
LochNessMonster fucked around with this message at 07:02 on Sep 13, 2019 |
# ¿ Sep 12, 2019 18:32 |
|
12 rats tied together posted:My personal preference would be:

How do you differentiate between dev/test/uat/prod environments? Let ansible take care of that based on the inventory / group vars which are used to deploy this?
|
# ¿ Sep 13, 2019 07:00 |
|
Running into a terraform issue and as I'm pretty new to it, I can't wrap my head around it. I'm using a module to fill my variables.tf, but when running terraform init I'm getting the following error (management is the name of my module):code:
I'm running the init command from path/, file structure is as follows: code:
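Simplified, the relevant part of the tree (per the edit below, module.tf existed in both directories):

```
path/
├── module.tf          # references the management module
├── variables.tf
└── terraform/
    ├── module.tf      # stray duplicate; removing this fixed terraform init
    └── variables.tf
```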
edit: I'm an idiot. I had module.tf in both path/ and path/terraform/. Removing the latter solved the issue. LochNessMonster fucked around with this message at 15:01 on Sep 19, 2019 |
# ¿ Sep 19, 2019 14:46 |
|
Apparently our SonarQube license is based on a server id which changes for each machine. I'd like to move this away from a 24/7 machine as we only utilize SQ during business hours, which means terminating the machine saves me 100ish hours of compute time each week. How do you typically handle these kinds of licenses with regards to immutable infra?
|
# ¿ Sep 26, 2019 09:20 |
|
Gyshall posted:We run SQ in a container and don't have that issue. What's your setup look like?

Currently it’s at an MSP on a virtual server with a dedicated DB server. Looking to deploy it to an EKS cluster. Last time the MSP migrated it to a new server we had to request a new license because the new install used a new Server ID, which caused the old license to stop working.
|
# ¿ Sep 26, 2019 20:10 |
|
Dynatrace is a lot more mature than AppDynamics and NewRelic. I’d pick it any day over the others.
|
# ¿ Sep 27, 2019 20:08 |
|
Did the training myself a while back because my employer paid for it and lots of customers are "using/implementing" it as their way of working. It's complete garbage and combines all the buzzwords into something that gives management control over team backlogs, which means it's the exact opposite of Agile. Commonly referred to as Stupid Agile For Enterprises. PI events are complete garbage cram sessions which try to put people on the spot for "giving commitment to the PI goals", which can be held over their heads later to force them into delivering garbage. The exam is web-based and not proctored. The exam questions and answers are literally 1 google search away. Bad companies will like it on your resume. If you're into consulting it'll be a good HR check.
|
# ¿ Oct 2, 2020 10:44 |
|
FISHMANPET posted:We did 3 days of PI Planning (our first time) last week over Zoom. We're infrastructure, so the overall "product" is poorly defined. One of the biggest actual theoretical advantages of safe is coordinating dependencies between teams, but nearly all our features are independent. I don't think sprints really work for infrastructure, when there's generally more external dependencies that can't be managed, and a lot of the work (even the planned work) is much more reactive and dependent on getting feedback from customers. We've also all been scrambled up out of our traditional domain based teams (linux, windows server, database) into a bunch of generic jack of all trades teams with a little bit of everything.

I find Kanban a more practical approach for Infra/Platform related teams. Especially in teams where there's lots of fires to be put out, you don't have to modify the sprint goal by removing user stories to make room / adjust for bug fixes. And just like you said, there are so many dependencies, fixed time windows and troubleshooting to be done it's hard to properly estimate time spent, let alone say you're going to fix it in a specific sprint.
|
# ¿ Oct 8, 2020 08:02 |
|
|
New Relic's APM is very nice, infra monitoring pretty crappy. Datadog's customer support blows big time. Also not that much of a fan of their tooling in general. I really like the way Elastic is moving forward. Especially now they’re working on a unified agent so I don’t have to deploy metric, file, log, audit beats separately. Currently using it as a log and metrics platform. Might incorporate APM in the near future as well.
|
# ¿ Nov 14, 2020 11:02 |