|
StabbinHobo posted:if you're taking a paycut to get into devops you're either a surgeon or doing it wrong Seriously guy, $25k? Lol.
|
# ? Oct 1, 2018 11:21 |
|
|
Docjowles posted:Don’t just set up Aw gently caress. We're moving around 150 Windows servers to AWS, and then containerising the hell out of them so the infra teams only have to manage ~10 much larger servers, and the developers get much faster deploys. Octopus Deploy loving hates AWS; it pretends to like it, but it has a deep-seated hatred of servers that self-obliterate when the auto scaling group has had enough. So we're mounting a two-pronged attack with AWS CodeDeploy & Docker Cancelbot fucked around with this message at 14:21 on Oct 1, 2018 |
# ? Oct 1, 2018 14:17 |
|
I can understand pay cuts to go from a larger company to a start-up, but at least equity is non-zero compensation. Unless you’re in a really high pay bracket already / very specialized (certs and speaking engagements and all that stuff that should push you into $200k base almost everywhere in the US for almost any engineer, devops or not), a pay cut doesn’t make sense to go to a non-start-up company. Even in Atlanta’s crappy underpaid market I got substantially better offers for devops consulting left and right than previous engagements for companies in defense in the DC area only a couple years prior.
|
# ? Oct 1, 2018 16:45 |
|
Warbird posted:I think I’m going to accept that full time consulting gig tomorrow. Pay cut or no I think it would be more beneficial for my career by way of establishing a solid base and having the ability to branch out. My contracting firm also recommended I commit tax fraud so I could get extra cash, so it might be best to not be associated with them. pls keep looking, there are absolutely companies out there that can afford to pay you what you make now your new gig isn't low balling you because they're short on money, they're low balling you because they think you're an easy mark. you should feel insulted. if you accept this pay cut, i promise they will continue to treat you like a mark for as long as you let them
|
# ? Oct 1, 2018 18:21 |
|
LochNessMonster posted:As others have mentioned, just start playing with stuff and go from there. Methanar made a huge effort-post in the general IT thread some time ago (months, probably?) about a good way to get started learning devops skills. If you want I can repost it here, it was an excellent post and helped several goons on their way already. can you please repost this?
|
# ? Oct 1, 2018 18:23 |
|
Docjowles posted:Are you vaguely aware of how to write and operate modern applications, where modern is like TYOOL 2012? It is that. https://12factor.net/. Plus the usual "make your app stateless!!!" "ok but the state has to live someplace, what do we do with that?" "???????????" I wasn't familiar with this part of 12 factor but actually it's right here: quote:Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database. Cancelbot posted:So we're mounting a two-pronged attack with AWS CodeDeploy & Docker I strongly recommend that, if you can, you approach Docker / containers as an organization with a shared contract of "just put your containers here: _____". That underscore can be nexus, AWS ECR, dockerhub, whatever, but starting with instructions for a developer to push their containers and trigger a deploy is, in my experience, the best overall approach for operational sanity. From that point you can build your orchestration assuming that someone checks in code -> a container is built and pushed, our automation takes over from here. Your organization is probably different, but in all of the orgs I've worked in, once developers (or really, anybody beholden to a PM / business planning) get involved in orchestration (even via something like a helm chart), operational sanity gets thrown out the window immediately in favor of shipping features before an arbitrary deadline that was decided by someone who barely knows what a container is. Container troubleshooting is the worst kind of troubleshooting, so definitely fight stuff like that tooth and nail, IMO. You also mentioned that most of your services are 100 MB RAM and <1% CPU?
ECS/Fargate are both excellent services, but I'd highly recommend engaging your TAM or some other kind of AWS support before deciding on anything, and hopefully they can work with you guys to spin up a proof of concept with something applicable to what you guys are hoping to accomplish with EC2/Docker. It's really hard to beat either of those services though assuming you are only running on EC2. In particular, I would start off this migration journey by thinking about IAM. Getting a container scheduled onto an instance is the easy part -- an intern can bang that out in half a day. The hard part is usually managing secrets, credentials, and AWS API access. 12 rats tied together fucked around with this message at 21:12 on Oct 1, 2018 |
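To make the "think about IAM first" point concrete, here's a rough CloudFormation sketch of an ECS task that gets its AWS API access from a task role instead of instance credentials. Every name here (role, policy, family, secret path, image URI) is a placeholder, not a reference to anybody's actual stack:

```yaml
# Hypothetical sketch -- all names and the image URI are made up.
TaskRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: read-app-secrets
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: secretsmanager:GetSecretValue
              Resource: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:myapp/*"

AppTask:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: myapp
    TaskRoleArn: !GetAtt TaskRole.Arn
    ContainerDefinitions:
      - Name: myapp
        Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest
        Memory: 128
```

The point being: the container itself never sees long-lived credentials, and the least-privilege policy lives next to the task definition where it can be reviewed.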
# ? Oct 1, 2018 21:09 |
|
is it possible to have a cloudbuild.yaml in your git repo that works for both your prod ci/cd pipeline *and* devs using "cloud-build-local" against their own test envs/gcp-projects? related q: y'all just yolo a kubectl command at the end of the steps or have a better deploy method?
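For what it's worth, user substitutions are one way to make a single cloudbuild.yaml serve both cases. A sketch -- the project layout, deployment names, and cluster names are invented, and I haven't verified exactly which built-in substitutions cloud-build-local populates for you, so you may need to pass some explicitly:

```yaml
# cloudbuild.yaml -- _DEPLOY_ENV is a user substitution; override locally with
#   cloud-build-local --substitutions=_DEPLOY_ENV=dev --dryrun=false .
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/myapp', 'myapp=gcr.io/$PROJECT_ID/myapp:$SHORT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_DEPLOY_ENV}-cluster'
substitutions:
  _DEPLOY_ENV: prod
images:
  - 'gcr.io/$PROJECT_ID/myapp:$SHORT_SHA'
```

The prod trigger gets the defaults, devs override `_DEPLOY_ENV` (and their own `$PROJECT_ID`) locally, and yes, this is still the "yolo a kubectl at the end" deploy method.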
|
# ? Oct 2, 2018 00:02 |
|
12 rats tied together posted:Awesome stuff Thanks! So right now we have big EC2 instances with anything from 5 to 30(!!) independent services on them, this is a holdover from our pre-AWS days where we had around 10 physical servers in which to do all our public hosting. We are going for a "mega-cluster" approach to Docker, but the end goal is each team having an AWS account and they look after their stack, which can take whatever form they like as long as it's secure & budgeted for; four of our 15 teams are nearly finished with that process and the results are promising. From that we've seen a high variance in implementation details too; EC2 + RDS as a traditional "lift and shift", but some are rebuilding things in lambda + S3 as they don't even want to give a poo poo about an instance going down. Our old QA environment (~150 servers) will be the primary target of the "containerise everything" push, as it's either too small or too loosely coupled to justify a more significant investment in EC2 instances. The real ballache that I think I put in previous posts is nearly all of this is Windows/.NET and Fargate as yet doesn't support that, but it is what we'd use. We've literally just triggered our activation of AWS Enterprise and as soon as our TAM is on board we are going after how we deploy as the first thing we do. IAM is deployed fairly effectively in the places we can see (2FA, Secrets Manager, least privilege etc.) but it's going to get more chaotic when the developers really see the "power" of AWS, and by "power" I mean "look at all this horrific poo poo I can cobble together, disregarding Terraform/CloudFormation", so we're working hard on building or buying some hefty governance tools that will slap down silly poo poo as best we can. Edit: lol our Director has just asked us how quickly we can move everything from eu-west-1 (Ireland) to eu-west-2 (London) in the event of a no-deal Brexit. Cancelbot fucked around with this message at 11:29 on Oct 2, 2018 |
# ? Oct 2, 2018 08:17 |
|
Cancelbot posted:end goal is each team having an AWS account Cancelbot posted:but some are rebuilding things in lambda + S3 as they don't even want to give a poo poo about an instance going down. Moving an application's infrastructure from one region to another would take us probably 15 minutes to 2 hours in my current role, depending on the application. CloudFormation has a lot of built-in helpers here now that we used to use ansible for, in particular StackSets. As long as you build all of your cfn templates assuming that region (and possibly vpc id) is a type of primary key that you'll use to look up subnet ids, AMIs, and security group ids, you're like 90% done with just being able to swap eu-west-1 to eu-west-2 in github and then pushing a button. Biggest time sinks in my experience are, of course, application configuration, and if you need to move anything heavy (especially redshift clusters) that can take a couple hours. It's not too bad though -- really helps if you have your basic network configs in CloudFormation.
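The "region as primary key" idea in template terms is just a Mappings block keyed by region -- a minimal sketch, with placeholder AMI IDs:

```yaml
Mappings:
  RegionMap:
    eu-west-1:
      AMI: ami-0aaaaaaaaaaaaaaaa   # placeholder
    eu-west-2:
      AMI: ami-0bbbbbbbbbbbbbbbb   # placeholder

Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      # AWS::Region resolves to wherever the stack is launched,
      # so the same template works unchanged in either region.
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
      InstanceType: t3.micro
```

Do the same for subnet and security group lookups and the Brexit migration really does become "launch the same stacks in eu-west-2".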
|
# ? Oct 2, 2018 19:13 |
|
Helianthus Annuus posted:can you please repost this? Took me a bit, it’s already a year old but here goes. Methanar posted:What do you want to do?
|
# ? Oct 2, 2018 19:44 |
|
Thanks for resurrecting that. I was thinking about it the other day and just didn't take the time to find it. My biggest problem as a Windows admin is that I just don't have anything pushing me to play with CI/CD or whatever else you want to call it. Even doing something at home, I am having a hard time thinking of a fun project to get me started since I don't currently do any coding.
|
# ? Oct 2, 2018 19:59 |
|
What's the current hot poo poo for web e2e tests? We're still using selenium. Is that still relevant? I'm trying to evaluate this before we start writing a ton of new ones for a new project.
|
# ? Oct 2, 2018 20:04 |
|
itskage posted:What's the current hot poo poo for web e2e tests? Last time I checked it was Cypress. I could be wrong though.
|
# ? Oct 2, 2018 20:58 |
|
LochNessMonster posted:Took me a bit, it’s already a year old but here goes. cool, thanks for digging that up I think he makes a good point about how containers abstract away all the runtime details, which lets you treat containers as atomic units of infrastructure that can all use the same logic for deployment, orchestration, etc. But I think the appeal of containers is much stronger for people working with open-source software, where the runtime requirements can be very heterogeneous. I assume a windows guy at a .NET shop (or even a java shop) is in a situation where everything is much more homogeneous, and many of the tools needed for CI/CD are already available and applicable to your use case without the need for extensive modification or configuration. If that's right, then containers don't really add a lot of value. But I've personally never worked at a windows shop, so I don't actually know. But in the open source world at least, the big idea is to achieve something similar by containerizing whatever bizarre software stack you run, and then you too can use off-the-shelf tooling for your CI/CD needs (Kubernetes). I should point out that containers are not the only way to pull this off. You can achieve something similar in AWS EC2 by baking AMIs when you build your software, and then treating each EC2 instance running some version of your AMI as your atomic unit of infrastructure. But in this case, you're using AWS-specific tooling to achieve CI/CD instead of something more provider-agnostic, like k8s.
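The AMI-baking flow usually means something like Packer in the build stage. A rough sketch -- region, source AMI, and the package being installed are all placeholders, not a real pipeline:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0aaaaaaaaaaaaaaaa",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "myapp-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo yum install -y /tmp/myapp-1.2.3.rpm"]
  }]
}
```

CI builds the artifact, Packer bakes it into a versioned AMI, and a deploy is just swapping the AMI ID in your launch configuration and rolling the auto scaling group.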
|
# ? Oct 2, 2018 21:27 |
|
itskage posted:What's the current hot poo poo for web e2e tests? puppeteer is good if you can live with chrome only
|
# ? Oct 2, 2018 21:35 |
|
Doom Mathematic posted:Last time I checked it was Cypress. I could be wrong though. Can confirm, Cypress is pretty cool.
|
# ? Oct 2, 2018 21:50 |
|
Helianthus Annuus posted:You can achieve something similar in AWS EC2 by baking AMIs when you build your software, and then treating each EC2 instance running some version of your AMI as your atomic unit of infrastructure. But in this case, you're using AWS specific tooling to achieve CI/CD instead of something more provider-agnostic, like k8s. I agree mostly except I think it's a mindset thing more than a tech thing. You don't need to bake AMIs to have immutable infrastructure, the only thing you need to do is not mutate your infrastructure. Last job we jokingly called it 'deterministic infrastructure' in that we assumed two servers, when fed the same inputs, resulted in the same outputs at least at a service level. This is ~usually true. You don't need k8s or ec2, you can provision a server with PXE and run a post-provisioning task. You just need cloud-init, something that can talk IPMI (ansible, for example), and your PXE distribution tool of choice and you're pretty much done. Boot a network image, server comes up and requests a deploy for whatever it's supposed to be, deploy kicks off and sets up monitoring etc, and then ideally your node starts responding affirmatively to some kind of health check and you're in business. The post-provisioning step can include literally just dropping a docker compose file into a server and configuring an upstart service that runs it. I've worked places where this is how we shipped some applications and it works great assuming you don't need any of the fancier k8s features. Not that there isn't value in k8s or ec2 outside of "this is a hardware abstraction" -- it's just important to note IMO that none of the tech is magic or even that hard to replicate in a physical DC, if it makes business sense to do so.
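That post-provisioning payload can be as small as a cloud-config that drops the compose file and starts it. A sketch -- the paths and image name are made up, and I'm cheating by running docker-compose straight from runcmd instead of wiring up the upstart service:

```yaml
#cloud-config
write_files:
  - path: /opt/myapp/docker-compose.yml
    content: |
      version: "2"
      services:
        myapp:
          image: registry.example.com/myapp:1.0.0
          ports:
            - "8080:8080"
          restart: always

runcmd:
  # bring the service up on first boot; "restart: always" covers reboots
  - docker-compose -f /opt/myapp/docker-compose.yml up -d
```

Feed that as user-data (or via your PXE tooling's equivalent) and the box converges to the same state every time it's provisioned, which is the whole "deterministic infrastructure" trick.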
|
# ? Oct 2, 2018 23:06 |
|
I've got a five-digit number of server instances booting off of read-only NFS and running a bootstrap script to deploy services into tmpfs. We don't even cloud-init, we just key off a couple of DHCP fields.
|
# ? Oct 3, 2018 03:27 |
|
I like the “log to stdout” part of the 12 factor app because it encourages less stupidity. I’m using a package at work where you provide a logging type and it has six options!
|
# ? Oct 3, 2018 12:29 |
|
So I'm poking and prodding at Terraform. I'm a bit gunshy about hooking it up to a cloud platform because I'm a huge baby and also cheap as hell. I assume that having AWS/Azure as the provider would still be much cheaper than upgrading my home server (old rear end laptop) and handling any necessary licensing fun, even accounting for colossal screwups? And I turned down the consulting thing. Still reading through the list of concepts, it's been quite a week.
|
# ? Oct 3, 2018 17:28 |
|
Warbird posted:So I'm poking and prodding at Terraform. I'm a bit gunshy about hooking it up to a cloud platform because I'm a huge baby and also cheap as hell. I assume that having AWS/Azure as the provider would still be much cheaper than upgrading my home server (old rear end laptop) and handling any necessary licensing fun even accounting for colossal screwups? $300 GCP sign up credit, ready go.
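And if you go the GCP route, a near-free Terraform starting point looks roughly like this -- project ID is a placeholder, and an f1-micro in a US region fell under the always-free tier at the time, but check current pricing before trusting that:

```hcl
provider "google" {
  project = "my-sandbox-project"   # placeholder -- your GCP project ID
  region  = "us-central1"
}

resource "google_compute_instance" "lab" {
  name         = "tf-lab"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
    access_config {}   # ephemeral external IP
  }
}
```

`terraform plan` is free to run as many times as you like; nothing costs money until you `apply`, and `terraform destroy` cleans up after a session so the credit lasts.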
|
# ? Oct 3, 2018 17:32 |
|
The job talk was kinda cool, which reminds me that I make below $20k a year in an infrastructure/support/somewhat DevOps role that I moved into from a database programmer role that was literally killing me--same company! I'm looking for work in another country so I could earn decently for a change.
|
# ? Oct 3, 2018 17:46 |
|
SeaborneClink posted:$300 GCP sign up credit, ready go. My man! I hadn’t even considered Gcloud. Well I know what I’m doing for a bit.
|
# ? Oct 3, 2018 18:15 |
|
Warbird posted:My man! I hadn’t even considered Gcloud. Well I know what I’m doing for a bit. So did you take the paycut?
|
# ? Oct 3, 2018 21:27 |
|
Vulture Culture posted:I've got a five-digit number of server instances booting off of read-only NFS and running a bootstrap script to deploy services into tmpfs. We don't even cloud-init, we just key off a couple of DHCP fields. nfs is making a comeback
|
# ? Oct 4, 2018 01:55 |
|
LochNessMonster posted:Took me a bit, it’s already a year old but here goes. Ah thank you for this, I've bookmarked the quote and subscribed to this thread. I'm in the position of being lead of our infrastructure team supporting our financial software as a service, inherited after our previous lead left last year. Basically owning all the hardware for our datacenter, networking + firewalls + loadbalancing, AWS, datacenter VMs, and supporting software like Splunk, LDAP, DNS, OSSEC and Puppet. The past year as lead has basically been struggling to keep up with fires and new requests, and now I'm looking to actually improve our infrastructure a bit. During this time we have started to implement Docker, but due to my inexperience and direction from Dev, our current implementation of Docker is...not ideal. We currently provision a new VM for each new container, because per Dev requirements each container needs a unique DNS name, and ports for things like ssh are hardcoded (needs port 22). Additionally all services communicate using external URLs to our own web application. Now there are a lot of areas for improvement for my team, but simplifying deployment is what I'm going to try to tackle next. I've been messing around with Kubernetes both on AWS and a local cluster, and I'm hoping I can build a case to push all containers into a cluster. Since our application is a mix of containers and standalone VMs, I want to try to set up a deployment where all containers are on a cluster, load balanced with F5 against NodePorts. Internal cluster communication uses kubedns, any communication external to the cluster uses F5 LB address, traffic going into the cluster uses F5 LB against NodePort on services... any potential issues with this setup? I feel a bit out of my depth, and don't have many resources to talk to on Kubernetes or even best practices for infrastructure - would this be a good time to hire a consultant? Thanks, thread
|
# ? Oct 4, 2018 02:56 |
|
StabbinHobo posted:nfs is making a comeback Don't use nfs and gitlab w/ Postgres. Just learned that the hard way.
|
# ? Oct 4, 2018 03:33 |
|
you're pretty much on a good track, only thing i'd say is have the f5 point to the nginx ingress controller instead of rolling your own nodeport/proxy-container solution: https://github.com/kubernetes/ingress-nginx also start with some kind of rancher/openshift distro for a datacenter, doing it all yourself is just too much.
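To sketch what that looks like: the F5 points at the ingress controller's NodePort service, and per-app routing moves into Ingress resources. Hostnames and service names below are placeholders, and the apiVersion was extensions/v1beta1 in k8s releases current at the time:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: myapp.internal.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp
              servicePort: 8080
```

Adding an app then becomes "create a Service + Ingress" instead of touching the F5 config, which keeps the load balancer boring.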
|
# ? Oct 4, 2018 03:33 |
|
LochNessMonster posted:So did you take the paycut? Nope. Just found out that our PO is going to be taking off every M/F for the rest of the year and oh man does that untracked time off sound better now. I’m still convinced it’s a trap though.
|
# ? Oct 4, 2018 04:08 |
|
Scruff_McGee posted:Ah thank you for this, I've bookmarked the quote and subscribed to this thread. I'm in the position of being lead of our infrastructure team supporting our financial software as a service, inherited after our previous lead left last year. Basically owning all the hardware for our datacenter, networking + firewalls + loadbalancing, AWS, datacenter VMs, and supporting software like Splunk, LDAP, DNS, OSSEC and Puppet. The past year as lead has basically been struggling to keep up with fires and new requests, and now I'm looking to actually improve our infrastructure a bit. During this time we have started to implement Docker, but due to my inexperience and direction from Dev, our current implementation of Docker is...not ideal. We currently provision a new VM for each new container, because per Dev requirements each container needs a unique DNS name, and ports for things like ssh are hardcoded (needs port 22). Additionally all services communicate using external URLs to our own web application. If you control your network, run k8s on baremetal and use kube-router/BGP to advertise services/pods. If you don't, just use an ingress controller in k8s.
|
# ? Oct 4, 2018 04:09 |
|
Mao Zedong Thot posted:If you control your network, run k8s on baremetal and use kube-router/BGP to advertise services/pods. It'd be a hard sell to commit an entire UCS blade to k8s; the current plan was to look into a solution like Rancher where we can set up a cluster of VMs locally, and a cluster of AMIs on AWS. Didn't know about kube-router and BGP - does that go both ways? I need to do some research. Thanks!
|
# ? Oct 4, 2018 04:12 |
|
Warbird posted:Nope. Just found out that our PO is going to be taking off every M/F for the rest of the year and oh man does that untracked time off sound better now. I’m still convinced it’s a trap though. Good to hear that man. Start learning topics you feel you lack in during the M/F your boss is not in, and search for a company that wants to bring you in as the Puppet guru but still wants to teach you other devops stuff. geeves posted:Don't use nfs and gitlab w/ Postgres. Just learned that the hard way. That goes for any persistent data that requires lots of writes to it. Source: also learned it the hard way.
|
# ? Oct 4, 2018 06:43 |
|
We've been testing nfs on gluster (ganesha) for our k8s persistent storage and early results are promising. We don't plan on using it as anything more than a file store though, as it's beginning to feel like an abstraction layer cake with each layer doing its own I/O buffering, and that's making me nervous. Offlining a gluster node during write-read tests and watching everything freeze for several seconds then recover cleanly was pretty cool. anyone who's done this already, is there any reason NOT to have a peer heal back into the cluster automatically on boot? Bhodi fucked around with this message at 13:12 on Oct 4, 2018 |
# ? Oct 4, 2018 13:06 |
|
Ganesha owns, pNFS owns. I'm so happy there's a decent user-space NFS implementation these days so Ceph and Gluster and friends can do all their dumb POSIX filesystem stuff. Bhodi posted:anyone who's done this already, is there any reason NOT to have a peer heal back into the cluster automatically on boot? Vulture Culture fucked around with this message at 14:48 on Oct 4, 2018 |
# ? Oct 4, 2018 14:45 |
|
I was in here a while ago realizing how far over my head I was, but now I'm doing something that should be simple. Basic web server using Express and Node on my Pi. It won't be running anything particularly intensive, I'm just building a really basic website for now until I get external hosting set up. As I understand it, Express can be lightweight, so it seems like a good fit for the Pi (3 B+). I got the official Node package, but I'm not seeing an official Express or NPM package. Are those included in there? I know Express is a framework, but I'm not sure if I need to download that separately given that you have to install it from NPM normally. Now that I'm actually on a PC rather than my phone, I do see the Bitnami Express package, but so far I've only downloaded official packages. Is that what I would need? Also, the Mongo-Express package, is that going to give me what I need out of the box? I've been loving around with this for long enough as-is, I just want to get going so I can start coding.
|
# ? Oct 7, 2018 18:30 |
|
NPM installs with node, and express is distributed as an npm package. Any other questions related to this part of your project may be better suited for the JavaScript thread
|
# ? Oct 8, 2018 00:28 |
|
Thanks. I was concerned about the Docker package aspect, since I didn’t see an npm package. I think I have everything I need from Docker at this point, so hopefully past there everything else can go in the JS thread. And there will be a lot of everything else.
|
# ? Oct 8, 2018 03:19 |
|
22 Eargesplitten posted:Thanks. I was concerned about the Docker package aspect, since I didn’t see an npm package. Just to make sure you’re doing it right: you’re not spinning up a node image and sshing into the container to install express manually, right? The idea is that you do this in your dockerfile so each container you start has the exact same setup (without you manually doing stuff to make everything work). While knowing virtually nothing about node, it will probably look something like this:
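A minimal Dockerfile along those lines, assuming a standard npm project with a `server.js` entrypoint (file names and port are my assumptions):

```dockerfile
# Assumes package.json / package-lock.json at the repo root and server.js as entrypoint
FROM node:8-alpine

WORKDIR /usr/src/app

# Copy manifests first so the npm install layer is cached between code changes
COPY package.json package-lock.json ./
RUN npm install --production

# Now copy the application source
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

`docker build -t myapp .` then `docker run -p 3000:3000 myapp` and every container comes up identical, no ssh required.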
|
# ? Oct 8, 2018 08:41 |
|
Containers built like that can kinda be large, and depending upon how well things are mapped out in your layers, you may be duplicating 50% or more of your 3 GB containers. I’m helping a couple teams transition from a shared snowflake Jenkins setup and everyone was shoving all their dependencies into $HOME/.m2 and .npm and so forth. It turns out that each of our builds sucks down 4 - 9 GB of JARs and NPM packages and the dot directory is about 95 GB with our fat JARs for different releases taking up half of it. It’s certainly faster to download layers than to grab packages from repos, but it’s still annoying for developers used to the dependencies already being in a shared home directory where incremental builds start instantly.
|
# ? Oct 8, 2018 17:13 |
|
|
LochNessMonster posted:Just to make sure you’re doing it right. You’re not spinning up a node image and ssh into the container to install express manually right? Thanks, that makes some sense. I think I'm probably better off just going with traditional, non-containerized stuff at this point. It seems I'm still in over my head and it's not like this server has to be HA, we can just restart the Pi and browse facebook or watch cat videos on youtube for as long as it takes to come back up. Docker is really cool, but I think I'm getting sucked in by the cool factor rather than being sensible. Oh well, I'll eventually learn to use this stuff.
|
# ? Oct 8, 2018 21:18 |