The Fool
Oct 16, 2003


Tacos Al Pastor posted:

Maybe this is the wrong place to ask this question but I don't see a Docker thread per se. Is there a way to combine Docker Hub container images into one container image? I want to be able to pull python and selenium/cypress/robot framework in a single docker pull.

What do you actually need to solve for? What you're asking for doesn't exist as such, but there might be a few different ways to get acceptable results


Docjowles
Apr 9, 2009

I think what you want to do is just make your own Dockerfile. Use a FROM instruction to pick a base image, COPY the poo poo you want out of the various other images into your new mega image, and then build and run that.

Hadlock
Nov 9, 2004

^^ Yes

Tacos Al Pastor posted:

Maybe this is the wrong place to ask this question but I don't see a Docker thread per se. Is there a way to combine Docker Hub container images into one container image? I want to be able to pull python and selenium/cypress/robot framework in a single docker pull.

FROM selenium/standalone-chrome:latest

RUN apt-get update && \
    apt-get install -y python3 python3-pip npm && \
    pip3 install robotframework && \
    npm install -g cypress

Or whatever. I'd pick your FROM container to be whichever has the fiddliest install + the most dependencies packed in

or yeah just roll your own it's not that hard

code:
FROM alpine:latest

RUN apk update && apk add python3 py3-pip nodejs npm && \
    pip3 install robotframework selenium && npm install -g cypress
or whatever. Then just dig through the dockerfile of each official container and cherry pick whatever is in there for setup that's important

Tacos Al Pastor
Jun 20, 2003

Hadlock posted:

^^ Yes

FROM selenium/standalone-chrome:latest

RUN apt-get update && \
    apt-get install -y python3 python3-pip npm && \
    pip3 install robotframework && \
    npm install -g cypress

Or whatever. I'd pick your FROM container to be whichever has the fiddliest install + the most dependencies packed in

or yeah just roll your own it's not that hard

code:
FROM alpine:latest

RUN apk update && apk add python3 py3-pip nodejs npm && \
    pip3 install robotframework selenium && npm install -g cypress
or whatever. Then just dig through the dockerfile of each official container and cherry pick whatever is in there for setup that's important

This is exactly what I want! Thanks guys. Yes, I want a separate container running side by side with the containers we already have for our app (web app, database, etc).

Docjowles
Apr 9, 2009

One other thing you can do in your Dockerfile is use multiple FROM ... AS stages combined with the COPY --from instruction. This is an optimization, but if there's already a public image that has the stuff you want, you can just copy the relevant files from it. As an example, grabbing the terraform binary out of the terraform:1.4.2 image. This way you know exactly what version you're getting, and Docker won't have to rebuild the layer if apt/yum/whatever decides to update the version it provides. This style of building tends to be faster and more reproducible, assuming the images you want to crib from exist.

Hopefully that makes sense. I would provide a better example but phone posting :effort: The docker docs or any number of blog posts can fill in the blanks. What Hadlock posted is totally fine too
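The pattern, sketched minimally (the /bin/terraform path inside the hashicorp/terraform image is an assumption; check it against whatever tag you actually pin):

```dockerfile
# Stage we only raid for files; nothing else from it ships in the final image.
FROM hashicorp/terraform:1.4.2 AS tf

FROM ubuntu:22.04
# Copy just the pinned binary out of the first stage.
COPY --from=tf /bin/terraform /usr/local/bin/terraform
```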

Docjowles fucked around with this message at 04:26 on Jul 11, 2023

Necronomicon
Jan 18, 2004

Docjowles posted:

One other thing you can do in your Dockerfile is use multiple FROM ... AS stages combined with the COPY --from instruction. This is an optimization, but if there's already a public image that has the stuff you want, you can just copy the relevant files from it. As an example, grabbing the terraform binary out of the terraform:1.4.2 image. This way you know exactly what version you're getting, and Docker won't have to rebuild the layer if apt/yum/whatever decides to update the version it provides. This style of building tends to be faster and more reproducible, assuming the images you want to crib from exist.

Hopefully that makes sense. I would provide a better example but phone posting :effort: The docker docs or any number of blog posts can fill in the blanks. What Hadlock posted is totally fine too

You're talking about multi-stage builds, right? They're good for keeping your image sizes at sane, reasonable levels and you can just cherry pick exactly what you need from each given image.

Docjowles
Apr 9, 2009

Necronomicon posted:

You're talking about multi-stage builds, right? They're good for keeping your image sizes at sane, reasonable levels and you can just cherry pick exactly what you need from each given image.

yeah

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
don’t think about it too much though, disk is cheap. especially don’t do something stupid like use a base container with a nonstandard libc implementation so you can save 100s of megabytes

vanity slug
Jul 20, 2010

gently caress off alpine

Docjowles
Apr 9, 2009

Today on "I thought you assholes said the cloud would be better", devs are complaining that the latency between the parts of the preprod environment that we've migrated to AWS and those that have not yet is unacceptable and their apps and tests are constantly timing out. Our latency to us-east-1 is ~14ms and short of changing the laws of physics I can't do much here buddy. I'm sorry you no longer enjoy sub-millisecond latency between boxes in the same rack but that's kind of what we signed up for here. 14ms is still pretty fast!

Most everything will be in the cloud eventually, but in the meantime maybe adjust the 1ms timeout on your test idk.

12 rats tied together
Sep 7, 2006

setting up a direct connect will get you there in about 1ms. maybe 2 if you're unlucky.

Docjowles
Apr 9, 2009

I forgot to include that in my rant but yes of course I am aware of direct connect and we have multiple 10gb circuits in use. However all of our poo poo is in us-east-1 but not all of our physical locations are anywhere near us-east-1. So there is "the packets have to traverse the country" latency that not even Amazon's network backbone can totally eliminate.

I am sympathetic to the complaint that latency is worse in this transitional period but from a network standpoint I don't think we can do anything more about it.

12 rats tied together
Sep 7, 2006

certainly, and 14 ms is also totally fine of course.

had an interesting scenario at work recently where a cross-AZ latency increase was borderline intolerable for some applications, and we did have to work to eliminate it however we could. the application team was easy to work with and understanding of the limitations in place, though, especially with how finite aws becomes once you have a complicated placement strategy

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Glad you got to multi-AZ deployments. Have fun supporting GraphQL forever now.

Zorak of Michigan
Jun 10, 2006


Docjowles posted:

Today on "I thought you assholes said the cloud would be better", devs are complaining that the latency between the parts of the preprod environment that we've migrated to AWS and those that have not yet is unacceptable and their apps and tests are constantly timing out. Our latency to us-east-1 is ~14ms and short of changing the laws of physics I can't do much here buddy. I'm sorry you no longer enjoy sub-millisecond latency between boxes in the same rack but that's kind of what we signed up for here. 14ms is still pretty fast!

Most everything will be in the cloud eventually, but in the meantime maybe adjust the 1ms timeout on your test idk.

I've seen this going past me, too. My team isn't officially responsible but it's amazing to watch people complain about how their new cloud systems don't perform, and then they post a flow diagram showing that a process requiring 5 calls is now basically alternating between on-prem and cloud with each call, so yeah, you're picking up a fair degree of latency there, duh.

Doom Mathematic
Sep 2, 2008
North America is about 14 light-milliseconds wide.

RFC 1925 posted:

No matter how hard you push and no matter what the priority, you can't increase the speed of light.
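Quick sanity check on that figure (the ~4,500 km coast-to-coast width is a round-number assumption):

```python
# Back-of-the-envelope check on the "14 light-milliseconds" figure.
# Assumed coast-to-coast width of North America (round number): 4,500 km.
C_KM_PER_S = 299_792  # speed of light in vacuum
WIDTH_KM = 4_500

one_way_ms = WIDTH_KM / C_KM_PER_S * 1000
print(f"vacuum one-way: {one_way_ms:.1f} ms")  # ~15 ms

# Light in fiber travels at roughly 2/3 c, and real routes aren't great
# circles, so actual cross-country round trips land well north of this.
fiber_one_way_ms = one_way_ms / (2 / 3)
print(f"fiber one-way, straight line: {fiber_one_way_ms:.1f} ms")
```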

Vulture Culture
Jul 14, 2003

I want to know whose brilliant idea it was to make EKS pod security groups not work in any reasonable way unless you change four distinct settings in the VPC CNI. The SNAT one just absolutely boggles the mind

e: in before "there is nothing reasonable about security groups"
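for reference, the knobs in question live in the aws-node daemonset's environment. a from-memory sketch, so treat the exact names and values as assumptions to verify against the VPC CNI docs for your version:

```yaml
# amazon-vpc-cni-k8s (aws-node daemonset) env sketch -- verify against the
# CNI release notes for your version before applying.
env:
  - name: ENABLE_POD_ENI                     # prerequisite for pod security groups
    value: "true"
  - name: POD_SECURITY_GROUP_ENFORCING_MODE  # "standard" vs "strict" changes SG semantics
    value: "standard"
  - name: AWS_VPC_K8S_CNI_EXTERNALSNAT       # the SNAT toggle in question
    value: "true"
  - name: DISABLE_TCP_EARLY_DEMUX            # set on the aws-vpc-cni-init container
    value: "true"
```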

Vulture Culture fucked around with this message at 22:28 on Jul 19, 2023

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


At $old_job I was roped into plenty of meetings with our TAM and their lead RDS people to try to shift hundreds of on-prem Oracle databases into AWS. This was back when the Oracle RDS offering was just getting off the ground, so we're talking single-digit TB support and a long list of caveats. They worked their asses off to try to make it happen, but there were two bears they could never outrun - the size and growth rate of our biggest instances, and our DBAs building 15 years of critical processes around Oracle tech debt.

Nothing moves faster than the speed of light, but in those meetings, our goalposts got close.

i am a moron
Nov 12, 2020

"I think if there’s one thing we can all agree on it’s that Penn State and Michigan both suck and are garbage and it’s hilarious Michigan fans are freaking out thinking this is their natty window when they can’t even beat a B12 team in the playoffs lmao"

FISHMANPET posted:

Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue.

Sounds like you’ll be putting those dbs in OCI before long

Docjowles
Apr 9, 2009

FISHMANPET posted:

Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue.

This sounds like the cloud strategy should be "don't" but that's probably not a popular thing for a tech executive to say


mods???

FISHMANPET
Mar 3, 2007


Docjowles posted:

This sounds like the cloud strategy should be "don't" but that's probably not a popular thing for a tech executive to say

mods???

We're a large public research university whose last CIO got an article about him in the Wall Street Journal when we fired him. We appointed an interim who was, and I say this respectfully, a professional seat warmer. It was his job, when some high-level leader left, to just sit in the chair, keep things afloat, until a true replacement could be found. He's been around forever and knows everybody, he's very good at that, so he was a natural fit for interim CIO. I suspect the number one requirement (though explicitly unstated) when hiring a new CIO was "keep us out of the wall street journal" and so we made the interim permanent, and his "don't rock the boat, stay the course" style is not really great when one of his senior directors comes in and convinces him to make a cloud push. So he's somehow simultaneously directing us to upend the entirety of our operations but also not actually disrupting anything, which works out... about as well as you'd expect.

Vulture Culture
Jul 14, 2003


FISHMANPET posted:

Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy because everything major we do is in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow, they literally fail. We could do some direct connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern waiting for the laws of physics to change to allow our move to the cloud to continue.
Congrats, you found the one use case for Outposts

i am a moron
Nov 12, 2020

Stack and Outpost are dumb as poo poo. There really isn’t a use case for them

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Docjowles posted:

Today on "I thought you assholes said the cloud would be better", devs are complaining that the latency between the parts of the preprod environment that we've migrated to AWS and those that have not yet is unacceptable and their apps and tests are constantly timing out. Our latency to us-east-1 is ~14ms and short of changing the laws of physics I can't do much here buddy. I'm sorry you no longer enjoy sub-millisecond latency between boxes in the same rack but that's kind of what we signed up for here. 14ms is still pretty fast!

Most everything will be in the cloud eventually, but in the meantime maybe adjust the 1ms timeout on your test idk.

I have some fond memories of my devs finding out that AWS was more than 1ms away, sometimes had packet loss, sometimes instances had to be retired, sometimes DX or VPN had maintenance, etc. Turns out there's a lot of work writing code that has some weird thing called "partition tolerance".
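That "lot of work" starts with not assuming the call succeeds. A toy version of the kind of wrapper cloud-side callers suddenly need, a budgeted retry instead of a bare call (names here are illustrative, not from any particular library):

```python
import time

# Toy retry wrapper: bounded attempts with exponential backoff,
# instead of assuming the network never drops a request.
def call_with_retries(fn, attempts=3, backoff_s=0.05):
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError as e:
            last_err = e
            time.sleep(backoff_s * (2 ** i))  # 0.05s, 0.1s, 0.2s, ...
    raise last_err

# Flaky stub standing in for a cross-region call: fails twice, then succeeds.
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_retries(flaky))  # ok
```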

Vulture Culture
Jul 14, 2003


madsushi posted:

I have some fond memories of my devs finding out that AWS was more than 1ms away, sometimes had packet loss, sometimes instances had to be retired, sometimes DX or VPN had maintenance, etc. Turns out there's a lot of work writing code that has some weird thing called "partition tolerance".
It's super easy if you don't care about performance at all: you just make every read and every write require a quorum, and never accept any out-of-order transactions. Done!

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Much of my career dealing with careless organizations centers around basically nobody designing any software for common, routine failures like hard drive failures, memory going bad, a switch going flaky, etc., and troubleshooting random AF software problems in prod that point to issues like an obscure bug in some switch, because a legacy application from like 1998 relied upon a certain hardware implementation that doesn't exist anymore 10+ years later. Most organizations don't have the resources to have developers (because most orgs can't find developers that aren't trash in the first place) spend effort on anything besides features, frankly, so oftentimes throwing money at much more plentiful sysadmins / ops was the only viable path toward keeping things running.

And now organizations trying to get rid of their sysadmins and datacenters are finding the much more cruel, desperate reality that cloud-aware people and software are both probably more expensive and rarer to set up than their old trash n-tier apps from 2004, whose engineers are all long gone and offshored. Granted, I am familiar with many organizations that were so trash at their datacenters that even an AWS instance in us-east-1 that would randomly go down and recover with instance recovery spanked their old datacenters' reliability, so even a naive cloud-washed lift and shift really was justifiable as a business (I once measured routinely 1 9 of reliability, based upon e-mails complaining about something being down rather than even goddamn Nagios). That's mostly an indictment of their lovely datacenter management and organizational ossification over decades rather than a ringing endorsement of cloud. In fact, these same organizations are almost always repeating the same mismanagement and micromanaging, with massive sprawl, so their cloud environments are going to be the same thing, with AWS and Azure capturing all their growing legacy costs. In this respect it's better to outsource the poo poo you're clearly not good at to someone better. Oftentimes Bad Companies (these same ones, usually) have instead outsourced things they're somewhat OK at and cut them down into being untenable. It's not like there was a good bureaucratic reason to ensure things went south, but CIO gonna CIO for that hefty bonus payout, I guess.

As such I am still holding firm to the idea that AWS's massive business success is essentially monetizing the most profitable, scalable, low-effort parts of handling low-maturity organizations' technical debt. You don't need to do even 90% of the work if the customer is happy enough with 80% of the important stuff and 40% less personnel involved on their end (remember: they can't hire nor retain anyone competent - basically anyone posting in this thread is more competent and talented than 99% of the folks I've seen in these environments). And people are really bad at estimating the remaining 20% of last-mile effort, which AWS will make vaguely possible while staying very, very far away from it - which is great politically as well as in terms of pure business.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
What systems do people use for automated semantic versioning of repositories? This is a general thing - it can include everything from docker images, to terraform modules, to CI/CD templates, to the actual codebases for our services (mostly python within my SRE/devops group, JS/Python/Golang for the rest of our org).

We use Gitlab, and I've got an engineer who's presented me a custom solution that you can include in a pipeline, but I'd rather use something off the shelf like commit-analyzer or GitVersion instead.

Thoughts?

The Fool
Oct 16, 2003


we rolled our own tool that reads commit messages formatted in the conventional commit style to generate the next semantic version and tag the repo in ADO
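The core of that kind of tool is small. A minimal sketch of conventional-commit parsing into a semver bump, assuming the usual rules (feat → minor, fix/other → patch, "!" or "BREAKING CHANGE" → major); a real implementation handles scopes, pre-releases, and tag discovery:

```python
import re

# Minimal conventional-commits -> next-semver sketch.
# Assumed rules: breaking change -> major, "feat" -> minor, anything else -> patch.
def next_version(current: str, commits: list[str]) -> str:
    major, minor, patch = map(int, current.split("."))
    bump = "patch" if commits else None
    for msg in commits:
        header = msg.splitlines()[0]
        # "feat!:" / "feat(api)!:" or a BREAKING CHANGE footer means major.
        if "BREAKING CHANGE" in msg or re.match(r"^\w+(\(.+\))?!:", header):
            bump = "major"
            break
        if header.startswith("feat"):
            bump = "minor"
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return current

print(next_version("1.4.2", ["fix: null deref", "feat: add teams webhook"]))  # 1.5.0
```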

The Fool
Oct 16, 2003


I don't remember seeing gitlog, but we definitely looked at commit-analyzer and a few others

pretty much all of the options we looked at had way more features than we needed, and were also missing stuff that we wanted, like Teams integration

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

The Iron Rose posted:

What systems do people use for automated semantic versioning of repositories? This is a general thing - it can include everything from docker images, to terraform modules, to CI/CD templates, to the actual codebases for our services (mostly python within my SRE/devops group, JS/Python/Golang for the rest of our org).

We use Gitlab, and I've got an engineer who's presented me a custom solution that you can include in a pipeline, but I'd rather use something off the shelf like commit-analyzer or GitVersion instead.

Thoughts?

I just use gitversion. It's fine, it gets the job done, and it doesn't require any hand holding once you get it set up, which takes like 15 minutes.

Hadlock
Nov 9, 2004

I tried using semantic versioning with CoreRoller but it was a real pain in the rear end. Linking everything to git SHAs is really nice if your company will allow you to do it that way

https://github.com/coreroller/coreroller

Vulture Culture
Jul 14, 2003


The Iron Rose posted:

What systems do people use for automated semantic versioning of repositories? This is a general thing - it can include everything from docker images, to terraform modules, to CI/CD templates, to the actual codebases for our services (mostly python within my SRE/devops group, JS/Python/Golang for the rest of our org).

We use Gitlab, and I've got an engineer who's presented me a custom solution that you can include in a pipeline, but I'd rather use something off the shelf like commit-analyzer or GitVersion instead.

Thoughts?
I use Semantic Release across about 100 repos, works great. Version bumps can be tricky if you're working with version strings embedded in files in boutique ways, but that's going to be true of anything that does this

FISHMANPET
Mar 3, 2007

Does it have to be semantic versioning? I've used Gitversion for some PowerShell stuff where the version has certain constraints. In places where it matters less, like container images for internal apps, I use the azure DevOps build number, which is based on the date so it's easily sortable.
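The date-based scheme is about two lines. A sketch along the lines of the Azure DevOps default build number format; the per-day "rev" counter is supplied by the CI system and hardcoded here for illustration:

```python
from datetime import datetime, timezone

# Date-based build number in the shape of $(Date:yyyyMMdd).$(Rev:r).
def build_number(now: datetime, rev: int) -> str:
    return f"{now:%Y%m%d}.{rev}"

tags = [
    build_number(datetime(2023, 7, 10, tzinfo=timezone.utc), 2),
    build_number(datetime(2023, 7, 11, tzinfo=timezone.utc), 1),
]
print(sorted(tags) == tags)  # True: lexicographic order matches chronological here
# Caveat: that breaks once rev hits double digits; zero-pad the counter if you care.
```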

Hadlock
Nov 9, 2004

How would you write a DevOps resume differently if you were targeting contractor roles?

Vulture Culture
Jul 14, 2003


Hadlock posted:

How would you write a DevOps resume differently if you were targeting contractor roles?
Like for a consultancy? Emphasize writing skills and make it obvious that they can trust you in front of clients the day you walk in the door

Vulture Culture
Jul 14, 2003

The company I work for is going all in on AWS account segmentation. As a long-time Terraform guy, what should I know about CloudFormation StackSets?

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
I have it on my ToDo list to look at StackSets, because they'd make it trivial for me to ensure various IAM users/roles/policies/groups exist in each linked account, e.g. stuff we need for basic admin tasks. Definitely beats writing some Ansible automation to STS:AssumeRole into each account and set it all up, especially given the number of accounts we manage.

But I don't think I'd want to touch it for any compute infra.
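For flavor, the assume-role loop being replaced, sketched in boto3 rather than Ansible. The role name is the AWS Organizations default and both it and the session name are placeholders:

```python
def role_arn(account_id: str, role_name: str = "OrganizationAccountAccessRole") -> str:
    # Default role name is the AWS Organizations-created one; adjust for your accounts.
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def session_for(account_id: str, role_name: str = "OrganizationAccountAccessRole"):
    """Assume the admin role in a linked account and return a boto3 Session for it."""
    import boto3  # imported here so role_arn stays usable without the SDK installed

    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn(account_id, role_name),
        RoleSessionName="bootstrap-iam",
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

You'd loop this over every linked account and create the shared IAM users/roles/policies with each session, which is exactly the plumbing StackSets is supposed to own for you.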

12 rats tied together
Sep 7, 2006

They're bad and you shouldn't use them, mostly.

minato posted:

[..] beats writing some Ansible automation to STS:AssumeRole into each account and set it all up [...]

This is exactly what stacksets do. They even have required permissions.


Collateral Damage
Jun 13, 2009

If any of you use Moq in your testing you should probably yank it out, or at the very least pin it at version 4.18

The project owner intentionally hid email-harvesting malware in a minor update yesterday.

https://github.com/moq/moq/issues/1372
