|
Tacos Al Pastor posted:Maybe this is the wrong place to ask this question but I don't see a Docker thread per se. Is there a way to combine Docker Hub container images into one container image? I want to be able to pull Python and Selenium/Cypress/Robot Framework in one docker pull.

What do you actually need to solve for? What you're asking for doesn't exist as such, but there might be a few different ways to get acceptable results.
|
# ? Jul 10, 2023 21:51 |
|
|
I think what you want to do is just make your own Dockerfile. Start FROM whichever base image makes sense, then install or COPY in the poo poo you want from the various other images, and build and run that new mega image.
|
# ? Jul 10, 2023 21:55 |
|
^^ Yes

Tacos Al Pastor posted:Maybe this is the wrong place to ask this question but I don't see a Docker thread per se. Is there a way to combine Docker Hub container images into one container image? I want to be able to pull Python and Selenium/Cypress/Robot Framework in one docker pull.

code:
FROM selenium/standalone-chrome:latest
USER root
RUN apt-get update && \
    apt-get install -y python3 python3-pip npm && \
    pip3 install robotframework && \
    npm install -g cypress

Or whatever. I'd pick your FROM container to be whichever has the fiddliest install + the most dependencies packed in, or yeah just roll your own, it's not that hard.
|
# ? Jul 10, 2023 22:12 |
|
Hadlock posted:^^ Yes

This is exactly what I want! Thanks guys. Yes, I want a separate container to run side by side with the containers we already have running for our app (web app, database, etc.).
|
# ? Jul 11, 2023 03:41 |
|
One other thing you can do in your Dockerfile is use multiple FROM ... AS stages combined with the COPY --from instruction. This is an optimization, but if there's already a public image that has the stuff you want, you can just copy the relevant files from it. Like, as an example, grabbing the terraform binary out of the terraform:1.4.2 image. This way you know exactly what version you're getting, and Docker won't have to rebuild the layer if apt/yum/whatever decides to update the version it provides. This style of building tends to be faster and more reproducible, assuming the images you want to crib from exist. Hopefully that makes sense. I would provide a better example but phone posting. The Docker docs or any number of blog posts can fill in the blanks. What Hadlock posted is totally fine too.

Docjowles fucked around with this message at 04:26 on Jul 11, 2023 |
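A minimal sketch of that pattern, in case it helps (the binary path inside the hashicorp/terraform image is an assumption here; check the actual image before copying from it):

```dockerfile
# Stage 1: pull a published image only to grab one binary out of it
FROM hashicorp/terraform:1.4.2 AS terraform

# Stage 2: the image we actually ship
FROM python:3.11-slim

# Only the terraform binary lands in the final image; none of the
# first stage's other layers come along for the ride.
COPY --from=terraform /bin/terraform /usr/local/bin/terraform
```

Because the source image is pinned to 1.4.2, rebuilds are reproducible and the copy layer stays cached until you bump the tag.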
# ? Jul 11, 2023 04:21 |
|
Docjowles posted:One other thing you can do in your Dockerfile is use multiple FROM ... AS stages combined with the COPY --from instruction. This is an optimization, but if there's already a public image that has the stuff you want, you can just copy the relevant files from it. Like, as an example, grabbing the terraform binary out of the terraform:1.4.2 image. This way you know exactly what version you're getting, and Docker won't have to rebuild the layer if apt/yum/whatever decides to update the version it provides. This style of building tends to be faster and more reproducible, assuming the images you want to crib from exist.

You're talking about multi-stage builds, right? They're good for keeping your image sizes at sane, reasonable levels and you can just cherry pick exactly what you need from each given image.
|
# ? Jul 13, 2023 17:55 |
|
Necronomicon posted:You're talking about multi-stage builds, right? They're good for keeping your image sizes at sane, reasonable levels and you can just cherry pick exactly what you need from each given image. yeah
|
# ? Jul 13, 2023 19:07 |
|
don’t think about it too much though, disk is cheap. especially don’t do something stupid like use a base container with a nonstandard libc implementation so you can save 100s of megabytes
|
# ? Jul 14, 2023 01:21 |
|
gently caress off alpine
|
# ? Jul 14, 2023 02:22 |
|
Today on "I thought you assholes said the cloud would be better", devs are complaining that the latency between the parts of the preprod environment that we've migrated to AWS and those that have not yet is unacceptable and their apps and tests are constantly timing out. Our latency to us-east-1 is ~14ms and short of changing the laws of physics I can't do much here buddy. I'm sorry you no longer enjoy sub-millisecond latency between boxes in the same rack but that's kind of what we signed up for here. 14ms is still pretty fast! Most everything will be in the cloud eventually, but in the meantime maybe adjust the 1ms timeout on your test idk.
|
# ? Jul 18, 2023 16:55 |
|
setting up a direct connect will get you there in about 1ms. maybe 2 if you're unlucky.
|
# ? Jul 18, 2023 17:09 |
|
I forgot to include that in my rant but yes of course I am aware of direct connect and we have multiple 10gb circuits in use. However all of our poo poo is in us-east-1 but not all of our physical locations are anywhere near us-east-1. So there is "the packets have to traverse the country" latency that not even Amazon's network backbone can totally eliminate. I am sympathetic to the complaint that latency is worse in this transitional period but from a network standpoint I don't think we can do anything more about it.
|
# ? Jul 18, 2023 17:14 |
|
certainly, and 14 ms is also totally fine of course. had an interesting scenario at work recently where cross AZ traffic latency increase was perhaps intolerable for some applications and we did have to work to eliminate it however we could, but the application team was easy to work with and understanding of the limitations in place, especially with how finite aws becomes if you have a complicated placement strategy
|
# ? Jul 18, 2023 17:28 |
|
Glad you got to multi-AZ deployments. Have fun supporting GraphQL forever now.
|
# ? Jul 18, 2023 17:33 |
|
Docjowles posted:Today on "I thought you assholes said the cloud would be better", devs are complaining that the latency between the parts of the preprod environment that we've migrated to AWS and those that have not yet is unacceptable and their apps and tests are constantly timing out. Our latency to us-east-1 is ~14ms and short of changing the laws of physics I can't do much here buddy. I'm sorry you no longer enjoy sub-millisecond latency between boxes in the same rack but that's kind of what we signed up for here. 14ms is still pretty fast! I've seen this going past me, too. My team isn't officially responsible but it's amazing to watch people complain about how their new cloud systems don't perform, and then they post a flow diagram showing that a process requiring 5 calls is now basically alternating between on-prem and cloud with each call, so yeah, you're picking up a fair degree of latency there, duh.
|
# ? Jul 18, 2023 18:55 |
|
North America is about 14 light-milliseconds wide.

RFC 1925 posted:No matter how hard you push and no matter what the priority, you can't increase the speed of light.
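Back-of-envelope math, with the caveat that light in fiber only does about two-thirds of c and real paths aren't straight lines:

```python
# Physical floor on one-way network latency: distance / signal speed.
C_VACUUM_KM_S = 299_792.458           # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3  # light in glass fiber, roughly

def one_way_latency_ms(distance_km: float, speed_km_s: float = C_FIBER_KM_S) -> float:
    """Minimum one-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000.0

# ~4200 km (roughly coast to coast) at vacuum c is about 14 ms one way;
# over real fiber it's more like 21 ms, before any routing or queuing.
print(round(one_way_latency_ms(4200, C_VACUUM_KM_S), 1))  # 14.0
print(round(one_way_latency_ms(4200), 1))                 # 21.0
```

So a 14 ms round trip to us-east-1 is already pretty close to the physics.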
|
# ? Jul 18, 2023 21:15 |
|
I want to know whose brilliant idea it was to make EKS pod security groups not work in any reasonable way unless you change four distinct settings in the VPC CNI. The SNAT one just absolutely boggles the mind e: in before "there is nothing reasonable about security groups" Vulture Culture fucked around with this message at 22:28 on Jul 19, 2023 |
# ? Jul 19, 2023 22:20 |
|
Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue.
|
# ? Jul 19, 2023 23:09 |
|
At $old_job I was roped into plenty of meetings with our TAM and their lead RDS people to try to shift hundreds of on-prem Oracle databases into AWS. This was back when the Oracle RDS offering was just getting off the ground, so we're talking single-digit TB support and a long list of caveats. They worked their asses off to try to make it happen, but there were two bears they could never outrun - the size and growth rate of our biggest instances, and our DBAs building 15 years of critical processes around Oracle tech debt. Nothing moves faster than the speed of light, but in those meetings, our goalposts got close.
|
# ? Jul 20, 2023 03:58 |
FISHMANPET posted:Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue. Sounds like you’ll be putting those dbs in OCI before long
|
|
# ? Jul 20, 2023 13:27 |
|
FISHMANPET posted:Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue. This sounds like the cloud strategy should be "don't" but that's probably not a popular thing for a tech executive to say mods???
|
# ? Jul 20, 2023 16:47 |
|
Docjowles posted:This sounds like the cloud strategy should be "don't" but that's probably not a popular thing for a tech executive to say We're a large public research university whose last CIO got an article about him in the Wall Street Journal when we fired him. We appointed an interim who was, and I say this respectfully, a professional seat warmer. It was his job, when some high-level leader left, to just sit in the chair, keep things afloat, until a true replacement could be found. He's been around forever and knows everybody, he's very good at that, so he was a natural fit for interim CIO. I suspect the number one requirement (though explicitly unstated) when hiring a new CIO was "keep us out of the wall street journal" and so we made the interim permanent, and his "don't rock the boat, stay the course" style is not really great when one of his senior directors comes in and convinces him to make a cloud push. So he's somehow simultaneously directing us to upend the entirety of our operations but also not actually disrupting anything, which works out... about as well as you'd expect.
|
# ? Jul 20, 2023 18:15 |
|
FISHMANPET posted:Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue.
|
# ? Jul 20, 2023 18:26 |
Stack and Outpost are dumb as poo poo. There really isn’t a use case for them
|
|
# ? Jul 20, 2023 19:14 |
|
Docjowles posted:Today on "I thought you assholes said the cloud would be better", devs are complaining that the latency between the parts of the preprod environment that we've migrated to AWS and those that have not yet is unacceptable and their apps and tests are constantly timing out. Our latency to us-east-1 is ~14ms and short of changing the laws of physics I can't do much here buddy. I'm sorry you no longer enjoy sub-millisecond latency between boxes in the same rack but that's kind of what we signed up for here. 14ms is still pretty fast! I have some fond memories of my devs finding out that AWS was more than 1ms away, sometimes had packet loss, sometimes instances had to be retired, sometimes DX or VPN had maintenance, etc. Turns out there's a lot of work writing code that has some weird thing called "partition tolerance".
|
# ? Jul 21, 2023 16:46 |
|
madsushi posted:I have some fond memories of my devs finding out that AWS was more than 1ms away, sometimes had packet loss, sometimes instances had to be retired, sometimes DX or VPN had maintenance, etc. Turns out there's a lot of work writing code that has some weird thing called "partition tolerance".
|
# ? Jul 24, 2023 15:42 |
|
Much of my career dealing with careless organizations centers around basically nobody designing any software for common, routine failures like hard drive failures, memory going bad, a switch going flaky, etc., and troubleshooting random AF software problems in prod that point to issues like an obscure bug in some switch, because a legacy application from like 1998 relied upon certain hardware behavior that doesn't exist anymore 10+ years later. Most organizations don't have the resources to have developers (because most orgs can't find developers that aren't trash in the first place) spend effort on anything besides features, frankly, so oftentimes throwing money at much more plentiful sysadmins / ops was the only viable path toward keeping things running. And now organizations trying to get rid of their sysadmins and datacenters are finding the much more cruel, desperate reality that cloud-aware people and software are both more expensive and rarer to set up than their old trash n-tier apps from 2004 whose engineers are all long gone and offshored.

Granted, I am familiar with many organizations that were so trash at their datacenters that even an AWS instance in us-east-1 that would randomly go down and recover with instance recovery spanked their old datacenters' reliability, so even a naive cloud-washed lift and shift really was justifiable as a business (I once measured routinely one nine of reliability, based upon e-mails complaining about something being down rather than even goddamn Nagios). It's mostly an indictment of their lovely datacenter management and organizational ossification over decades rather than a ringing endorsement of cloud. In fact, these same organizations almost always repeat the same problems of mismanagement and micromanaging, with massive sprawl, so their cloud environments end up the same way, with AWS and Azure capturing all their growing legacy costs.

In this respect it's better to outsource the poo poo you're clearly not good at to someone better. Oftentimes Bad Companies (these same ones, usually) have outsourced things they're somewhat OK at and cut them down into being untenable. It's not like there was a good bureaucratic reason to ensure things went south, but CIO gonna CIO for that hefty bonus payout I guess. As such I am still holding firm to the idea that AWS's massive business success is essentially monetizing the most profitable, scalable, low-effort parts of handling low-maturity organizations' technical debt. You don't need to do even 90% of the work if the customer is happy enough with 80% of the important stuff with 40% less personnel involved on their end (remember: they can't hire nor retain anyone competent; basically anyone posting in this thread is more competent and talented than 99% of the folks I've seen in these environments). People are really bad at estimating the last-mile 20% of the effort, which AWS will make vaguely possible while staying very, very far away from it, which is great politically as well as in terms of pure business.
|
# ? Jul 24, 2023 19:11 |
|
What systems do people use for automated semantic versioning of repositories? This is a general thing - it can include everything from docker images, to terraform modules, to CI/CD templates, to the actual codebases for our services (mostly python within my SRE/devops group, JS/Python/Golang for the rest of our org). We use Gitlab, and I've got an engineer who's presented me a custom solution that you can include in a pipeline, but I'd rather use something off the shelf like commit-analyzer or GitVersion instead. Thoughts?
|
# ? Aug 1, 2023 00:59 |
|
we rolled our own that reads commit messages formatted in the conventional commit style to generate the next semantic version and tag the repo in ado
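for the curious, the core of that kind of tool fits on a page. a rough sketch of the idea (illustrative, not our actual implementation; assumes conventional-commit headers with the usual feat/fix rules and "!" or a BREAKING CHANGE footer for majors):

```python
import re

# Conventional-commit-driven semver bumping: scan the commits since the
# last tag, pick the biggest bump they imply, emit the next version.
def next_version(current: str, commit_messages: list[str]) -> str:
    major, minor, patch = map(int, current.split("."))
    bump = "patch"
    for msg in commit_messages:
        header = msg.splitlines()[0]
        # "feat(api)!: ..." or a BREAKING CHANGE footer means major
        if re.match(r"^\w+(\(.+\))?!:", header) or "BREAKING CHANGE:" in msg:
            return f"{major + 1}.0.0"
        if header.startswith("feat"):
            bump = "minor"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("1.2.3", ["fix: handle empty response"]))     # 1.2.4
print(next_version("1.2.3", ["fix: x", "feat: new endpoint"]))   # 1.3.0
print(next_version("1.2.3", ["feat(api)!: drop v1 endpoints"]))  # 2.0.0
```

the rest of a real tool is mostly plumbing: reading `git log` since the last tag and pushing the new tag back.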
|
# ? Aug 1, 2023 01:28 |
|
I don't remember seeing gitlog, but we definitely looked at commit-analyzer and a few others. Pretty much all of the options we looked at had way more features than we needed, and were also missing stuff that we wanted, like Teams integration.
|
# ? Aug 1, 2023 01:39 |
|
The Iron Rose posted:What systems do people use for automated semantic versioning of repositories? This is a general thing - it can include everything from docker images, to terraform modules, to CI/CD templates, to the actual codebases for our services (mostly python within my SRE/devops group, JS/Python/Golang for the rest of our org). I just use gitversion. It's fine, it gets the job done and doesn't require any hand holding once you get it set up which takes like 15 minutes.
|
# ? Aug 1, 2023 03:10 |
|
I tried using semantic versioning with CoreRoller but it was a real pain in the rear end. Linking everything to git SHAs is really nice if your company will allow you to do it that way. https://github.com/coreroller/coreroller
|
# ? Aug 1, 2023 06:21 |
|
The Iron Rose posted:What systems do people use for automated semantic versioning of repositories? This is a general thing - it can include everything from docker images, to terraform modules, to CI/CD templates, to the actual codebases for our services (mostly python within my SRE/devops group, JS/Python/Golang for the rest of our org).
|
# ? Aug 1, 2023 14:24 |
|
Does it have to be semantic versioning? I've used GitVersion for some PowerShell stuff where the version has certain constraints. In places where it matters less, like container images for internal apps, I use the Azure DevOps build number, which is based on the date so it's easily sortable.
|
# ? Aug 1, 2023 14:36 |
|
How would you write a DevOps resume differently if you were targeting contractor roles?
|
# ? Aug 1, 2023 22:03 |
|
Hadlock posted:How would you write a DevOps resume differently if you were targeting contractor roles
|
# ? Aug 1, 2023 22:22 |
|
The company I work for is going all in on AWS account segmentation. As a long-time Terraform guy, what should I know about CloudFormation StackSets?
|
# ? Aug 8, 2023 17:08 |
|
I have it on my ToDo list to look at StackSets, because they'd make it trivial for me to ensure various IAM users/roles/policies/groups exist in each linked account, e.g. stuff we need for basic admin tasks. Definitely beats writing some Ansible automation to STS:AssumeRole into each account and set it all up, especially given the number of accounts we manage. But I don't think I'd want to touch it for any compute infra.
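For that IAM use case, the template you'd hand to a StackSet is just ordinary CloudFormation; a sketch (the role name, account ID, and policy choice below are all made up for illustration):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: IAM role stamped into every linked account via a StackSet
Resources:
  OrgAdminRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: org-admin-access  # illustrative name
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: 'arn:aws:iam::111111111111:root'  # management account (made up)
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/ReadOnlyAccess'
```

With service-managed permissions and auto-deployment enabled, accounts that join the target OU later pick up the stack automatically, which is the part the Ansible approach makes you build yourself.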
|
# ? Aug 8, 2023 21:11 |
|
They're bad and you shouldn't use them, mostly.

minato posted:[..] beats writing some Ansible automation to STS:AssumeRole into each account and set it all up [...]

This is exactly what stacksets do. They even have required permissions.
|
# ? Aug 8, 2023 21:25 |
|
|
If any of you use Moq in your testing, you should probably yank it out, or at the very least pin it at version 4.18. The project owner intentionally hid email-harvesting malware in a minor update yesterday. https://github.com/moq/moq/issues/1372
|
# ? Aug 10, 2023 03:59 |