trem_two
Oct 22, 2002

it is better if you keep saying I'm fat, as I will continue to score goals
Fun Shoe

Vulture Culture posted:

External DNS is extremely easy to get working with the AWS Load Balancer Controller nowadays. It mostly Just Works. The documentation is very good, and most of it is superfluous if you understand what you're doing (assuming roles into the correct accounts if the zone is hosted in a different account, etc.). Enable the registry to use TXT records for ownership, then keep an eye on the logs to see what's failing.

I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though yeah?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Configuration is also mounted on volumes. Ultimately there's a separation between data, behavior, and secrets that matters for scalability and security alike. If you need to rotate secrets, it's a purely operational change separate from code, and it's not like the actual business data of the application is supposed to be impacted, after all.
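A minimal sketch of that split in pod terms (all names here are hypothetical): config from a ConfigMap volume, secrets from a Secret volume, business data on its own persistent volume.

code:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.2.3   # behavior: the image itself
      volumeMounts:
        - name: config                         # non-secret config, changes without a rebuild
          mountPath: /etc/app
          readOnly: true
        - name: secrets                        # secrets, rotated independently of code
          mountPath: /etc/app/secrets
          readOnly: true
        - name: data                           # business data, outlives the pod
          mountPath: /var/lib/app
  volumes:
    - name: config
      configMap:
        name: app-config
    - name: secrets
      secret:
        secretName: app-secrets
    - name: data
      persistentVolumeClaim:
        claimName: app-data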

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

trem_two posted:

I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though yeah?

We use external-dns pretty extensively in our EKS clusters and haven't had any issues running it in UPSERT mode. Our use case is about as simple as it gets, though: services run on a single cluster with a classic ELB + Nginx ingress controller setup, the Route 53 hosted zones (one public, one private) are in the same account, and it only manages ~70 records.

We even used it to migrate traffic from one cluster to another as servers became ready. The external-dns deployment on the new cluster used a different TXT owner ID and we put an extra step at the end of the service's deployment pipeline to update the TXT record associated with its ingress once it had deployed to the new cluster.
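For reference, the relevant knobs are just container args on the external-dns deployment, something like this (owner ID and domain are placeholders; flag names are from the upstream docs):

code:
args:
  - --source=ingress
  - --provider=aws
  - --registry=txt               # keep ownership info in TXT records
  - --txt-owner-id=cluster-a     # unique per cluster, so two clusters don't fight over records
  - --policy=upsert-only         # create/update only, never delete
  - --domain-filter=example.com  # only touch this zone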

whats for dinner fucked around with this message at 23:53 on Oct 31, 2022

Methanar
Sep 26, 2013

by the sex ghost

Methanar posted:

https://community.cloudflare.com/t/txt-record-breaks-wildcard-cname/150786/4

To quote RFC 1912:
“A common mistake is thinking that a wildcard MX for a zone will apply to all hosts in the zone. A wildcard MX will apply only to names in the zone which aren’t listed in the DNS at all.”

That is, if there is a wildcard MX for *.example.com, and an A record (but no MX record) for www.example.com, the correct response (as per RFC 1034) to an MX request for www.example.com is “no error, but no data”; this is in contrast to the possibly expected response of the MX record attached to *.example.com.


With the TXT record set, an A query for that name comes back empty (NOERROR, no data) and does not actually get wildcarded to anything useful at all.
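Roughly, with a hypothetical zone (external-dns's ownership TXT content paraphrased):

code:
; the wildcard that's supposed to catch everything
*.example.com.     300  IN  CNAME  lb.example.com.

; external-dns drops its ownership record at a concrete name
foo.example.com.   300  IN  TXT    "heritage=external-dns,external-dns/owner=cluster-a"

; foo.example.com now exists in the zone (TXT only), so per RFC 1034 the
; wildcard no longer applies to it: an A/CNAME query for foo.example.com
; returns NOERROR with an empty answer instead of the wildcard target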

I've been burned by external-dns multiple times. This one was a particularly unpleasant surprise if you use wildcards.
Also it once deleted all of my A records because I had not properly set --registry=txt


So don't make either of those mistakes, I guess.

Methanar fucked around with this message at 00:04 on Nov 1, 2022

LochNessMonster
Feb 3, 2005

I need about three fitty


fletcher posted:

It's a common pattern to pull down some config at runtime. You wouldn't want secrets baked into the images, for example.

This is probably the best solution. Currently everything besides secrets is baked into the image, meaning we’ve got 1 image per customer. Which is fine for customers, just not for us.

minato posted:

The build is config-dependent, or the runtime is config-dependent? Because if it's the latter, you should absolutely be injecting the config in at runtime, not during build. While it can be convenient to ship config baked into the container, it results in situations like this where now you need 1 container per config. If the config was injected at runtime, you only need 1 build.

This is the main issue I’m trying to sort. My predecessors baked runtime config into the build process, and did so in a pretty convoluted way that makes it more difficult to separate the two properly. It wasn’t that big of a deal until things started to scale.

I think I’m going to see if I can push the client config changes to an S3 bucket and have the app pull from there on start.
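Something like this as the container entrypoint could work (bucket/key layout hypothetical; boto3 assumed available in the image):

code:
# entrypoint.py: fetch per-customer config at startup instead of baking it in
import os
import boto3

BUCKET = os.environ["CONFIG_BUCKET"]      # e.g. acme-app-config
CUSTOMER = os.environ["CUSTOMER_ID"]      # which customer this instance serves
LOCAL_PATH = "/etc/app/config.yaml"

s3 = boto3.client("s3")
s3.download_file(BUCKET, f"{CUSTOMER}/config.yaml", LOCAL_PATH)

# hand off to the actual application process
os.execvp("app-server", ["app-server", "--config", LOCAL_PATH])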

Junkiebev
Jan 18, 2002


Feel the progress.

LochNessMonster posted:

Is there a good way to start Azure DevOps pipelines in batches? I'm trying to find a way to trigger over 1k downstream pipelines after my initial pipeline runs successfully.

I'm not sure if our Azure DevOps infra will like it if I start them all at once as we share the build agents company wide. On busy days we're already running into some limitations where we see 30+ min of queues. Bad scaling/sizing on their part, I know, but I don't want to make the problem worse. The plan is to start this process outside of business hours to minimize impact on the rest of the organization, but you just know there's going to be one day that somebody can't deploy a hotfix for a prio 1 incident because there's a 4 hour queue for the build agents.

The main pipeline creates a feature branch and updates a config file with versions for each downstream repo, which will build on commit. The downstream repos are managed in a config file in the main repo, so it's iterable. The only thing I've come up with so far is to externalize updating the downstream repos so it can be done in batches. Was hoping I'm missing something and there's an easier way.

We've used Scale-Set Build Agents to great effect
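If you do end up batching the triggers yourself, the Pipelines REST API is enough to do it; a rough sketch (org/project/pipeline IDs, PAT handling, and the pacing are all placeholders):

code:
# kick off downstream pipelines in batches via the Azure DevOps REST API
import time
import requests

ORG, PROJECT, PAT = "my-org", "my-project", "<personal-access-token>"
PIPELINE_IDS = [101, 102, 103]   # read these from the config file in the main repo
BATCH_SIZE = 50

for i in range(0, len(PIPELINE_IDS), BATCH_SIZE):
    for pid in PIPELINE_IDS[i:i + BATCH_SIZE]:
        url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/"
               f"pipelines/{pid}/runs?api-version=7.0")
        resp = requests.post(url, json={}, auth=("", PAT))  # PAT basic auth, empty username
        resp.raise_for_status()
    time.sleep(600)   # crude pacing so the shared agent pool isn't flooded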

trem_two
Oct 22, 2002

it is better if you keep saying I'm fat, as I will continue to score goals
Fun Shoe

whats for dinner posted:

We use external-dns pretty extensively in our EKS clusters and haven't had any issues running it in UPSERT mode. Our use case is about as simple as it gets, though: services run on a single cluster with a classic ELB + Nginx ingress controller setup, the Route 53 hosted zones (one public, one private) are in the same account, and it only manages ~70 records.

We even used it to migrate traffic from one cluster to another as servers became ready. The external-dns deployment on the new cluster used a different TXT owner ID and we put an extra step at the end of the service's deployment pipeline to update the TXT record associated with its ingress once it had deployed to the new cluster.

Cool, good to know, thanks. That sounds fairly similar to my use case.

LochNessMonster
Feb 3, 2005

I need about three fitty


Junkiebev posted:

We've used Scale-Set Build Agents to great effect

Thanks, that sounds like something I’ll probably use in the future. For now I’m rewriting that part to load config at runtime instead of at build time.

Sylink
Apr 17, 2004

trem_two posted:

I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though yeah?

Related to my earlier posts, I got external-dns working pretty easily after reading through everything again. So my ingress is nice and smooth: I just deploy it and the ALB stuff magically appears with new/updated Route 53 records. It's really nice actually.

EDIT: I solved my problem; it wasn't worth reading about here.

Sylink fucked around with this message at 03:05 on Nov 4, 2022

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

trem_two posted:

I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though yeah?
No issues, but when I first installed it into production I also made sure to deny route53:ChangeResourceRecordSets (with a ForAnyValue:StringEquals condition matching DELETE on route53:ChangeResourceRecordSetsActions) until I fully understood what it was going to do and when
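Roughly this shape, in case anyone wants to copy it (zone ARN left wildcarded here; same condition key as above):

code:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NoRecordDeletes",
      "Effect": "Deny",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/*",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "route53:ChangeResourceRecordSetsActions": ["DELETE"]
        }
      }
    }
  ]
}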

Music Theory
Aug 7, 2013

Avatar by Garden Walker
I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform?

I'd also like to automatically generate a github release for certain commit tags, if that's possible -- I haven't yet figured out how to do that in github actions.

e: I'm leaning toward buildbot, since I've got some spare server resources for self-hosting and I like that it uses python

Music Theory fucked around with this message at 23:22 on Nov 6, 2022

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Music Theory posted:

I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform?

I'd also like to automatically generate a github release for certain commit tags, if that's possible -- I haven't yet figured out how to do that in github actions.

e: I'm leaning toward buildbot, since I've got some spare server resources for self-hosting and I like that it uses python

Depends. Argo. Jenkins.

Methanar
Sep 26, 2013

by the sex ghost

Music Theory posted:

I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform?

I'd also like to automatically generate a github release for certain commit tags, if that's possible -- I haven't yet figured out how to do that in github actions.

e: I'm leaning toward buildbot, since I've got some spare server resources for self-hosting and I like that it uses python

Just lock yourself into github for personal stuff, it's fine.

Any job you get you're just going to use whatever the company already uses anyway so whatever.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Music Theory posted:

I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform?

I'd also like to automatically generate a github release for certain commit tags, if that's possible -- I haven't yet figured out how to do that in github actions.

e: I'm leaning toward buildbot, since I've got some spare server resources for self-hosting and I like that it uses python
Building releases off the commit tag seems sensible but is usually the wrong flow, because that version identifier is probably supposed to end up embedded in your code somewhere. The tag is usually an incidental artifact that's generated as part of your software release process. That's the biggest reason you're finding poor support for this workflow in your tools.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Yeah, I've found producing tags as part of a release to work better than producing releases from tags. Creating the git tag is part of publishing a release, so it should come after you build the thing which will be published.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
it’s also surprisingly kinda hard to build deterministic artifacts

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Deterministic and/or reproducible builds are really hard. This describes just the packages for Debian: https://www.qubes-os.org/news/2021/10/08/reproducible-builds-for-debian-a-big-step-forward/

With that said, it's a holy grail of sorts for many, but not necessarily very pragmatic in practice. I'm untangling a horribly overengineered process at a 100+ engineer, 12+ year org that's consumed many, many engineers' hours, all because engineers n years ago wanted to be able to reproduce every package in an entire OS release, up and down the stack, in case we had to go back to a specific package combination. Guess how many times that's been necessary for a customer requirement or to handle a nasty bug? Guess how many times people bitch about how hard it is to build a basic release? Yeah...

So I'm going with building a release from scratch, providing a package list of every artifact, then tagging all of that, as an audit trail rather than a reproducibility trail. Because ain't nobody got time to do anything besides check out a commit off trunk, create a hotfix branch, test it, and merge it back before you sign off and go drinking. If you want the same packages back you can go through the manifest and download the packages you care about yourself (we're not going wild over OpenSSL and glibc differences... yet).

The merge back and generating all the artifacts like manifests beyond the manual "fix this broken thing" commit is easy enough to deploy, validate, and sign as a pipeline, compared to trying to get every drat commit ID perfect. God forbid you ever do a rebase anywhere, drop a ref, or screw up git filter-repo and be unable to get back some ancient drunken commit that your company depended upon and didn't realize until some routine maintenance or migration exposed it.

vanity slug
Jul 20, 2010

just use nix op

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Most industries/organizations are way more interested in knowing where a specific dependency version was in use at the time of a breach than they are in actually rebuilding software from years ago. Long-term reproducibility is mostly important for folks who have an interest in reproducing results of specific analyses (like peer-reviewed science), and occasionally important for folks maintaining long-term support releases of complex and interdependent software ecosystems (like Linux distributions). My hypothesis is that most people/teams/orgs would be better off investing into managing records of their dependencies than actually being continuously able to rebuild with a specific dependency set at a later point in time.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Plorkyeran posted:

Yeah, I've found producing tags as part of a release to work better than producing releases from tags. Creating the git tag is part of publishing a release, so it should come after you build the thing which will be published.
Since I posted the first reply from a phone and didn't get to talk much about actual workflows I've found to work well, I'll take the opportunity now to pitch SemVer and Semantic Release. The idea is that you write your commit messages using a specific convention (e.g. the Conventional Commits / Angular style), and the release automations figure out the rest of your release workstream. If you've committed a fix, you automatically build and release a patch release. If you've committed a feature, you have a minor release. If you've made a breaking change, you now have a major release.

These workflows are really nice because they work well with the assumptions made by other folks' automated dependency management workstreams, like Dependabot or Renovate. Your continuous releases will be continually incorporated by your downstream consumers, which helps you get feedback on your changes really quickly. The downside is that you need to be confident in the quality of your code and releases.

My team has been using semantic-release for basically everything—probably thirty or so distinct internal products—since the beginning of the year, and I can't even remember the last time I deliberately released a piece of software.
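The setup itself is tiny: a commit message convention plus a config along these lines (the plugin list here is the common baseline set, not necessarily what any given project runs):

code:
# commits drive the version bump (Conventional Commits / Angular style):
#   fix: handle empty TXT records        -> patch release
#   feat: support multiple hosted zones  -> minor release
#   feat!: drop support for v1 config    -> major release (or a BREAKING CHANGE: footer)

# .releaserc.yml
branches:
  - main
plugins:
  - "@semantic-release/commit-analyzer"
  - "@semantic-release/release-notes-generator"
  - "@semantic-release/github"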

madmatt112
Jul 11, 2016

Is that a cat in your pants, or are you just a lonely excuse for an adult?

Jenkins sucks don’t ever use it

madmatt112
Jul 11, 2016

Is that a cat in your pants, or are you just a lonely excuse for an adult?

Reject modernity, embrace monke. Return to computer-less paperwork.

kalel
Jun 19, 2012

madmatt112 posted:

Jenkins sucks don’t ever use it

yeah use cloudbees instead. Lol

Junkiebev
Jan 18, 2002


Feel the progress.

man i am feeling a bit burnt out of late - i just got a ticket complaining that a build pipeline which used to take 90 seconds took 110 seconds *once*

npm is involved - the gently caress do you want, guy? i don't control The Internet

Junkiebev fucked around with this message at 06:57 on Nov 15, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

we've made life too easy for these assholes

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The problem wasn’t that computers were a mistake but that users exist. No more users, no more tickets. QED

Hadlock
Nov 9, 2004

Junkiebev posted:

man i am feeling a bit burnt out of late - i just got a ticket complaining that a build pipeline which used to take 90 seconds took 110 seconds *once*

npm is involved - the gently caress do you want, guy? i don't control The Internet

Enjoy hosting a private npm repository in perpetuity

Warbird
May 23, 2012

America's Favorite Dumbass

:mods:

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Hadlock posted:

Enjoy hosting a private npm repository in perpetuity

To be fair, a proxy repo is a must for any serious software project. If you can't deploy a hotfix because some rando 3rd party repo is throwing 5xx errors, it's no bueno. We use nexus, and we proxy docker, helm, maven, nuget, pypi, rubygems, yum, and npm. Makes it easy!
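Pointing clients at it is mostly a one-liner per ecosystem, e.g. (Nexus hostname and repo paths hypothetical; these are two separate files):

code:
# .npmrc (per project or per build agent): route npm through the proxy/group repo
registry=https://nexus.example.com/repository/npm-group/

# /etc/docker/daemon.json: pull-through cache for Docker Hub images
{
  "registry-mirrors": ["https://nexus.example.com"]
}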

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

fletcher posted:

To be fair, a proxy repo is a must for any serious software project. If you can't deploy a hotfix because some rando 3rd party repo is throwing 5xx errors, it's no bueno. We use nexus, and we proxy docker, helm, maven, nuget, pypi, rubygems, yum, and npm. Makes it easy!
Artifactory also works fine for this. It's worth it just to stop constantly running into rate-limiting errors from Docker Hub the second you introduce CI for a couple of container images.

Warbird
May 23, 2012

America's Favorite Dumbass

I’ve never really messed with that before now. I assume it just locally caches an image on first pull so you don’t blow out your allotment in 2 minutes?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Warbird posted:

I’ve never really messed with that before now. I assume it just locally caches an image on first pull so you don’t blow out your allotment in 2 minutes?

It depends on what sort of scale you operate at. Sure, local cache is good even if you have a proxy repo, if only to limit the number of times your proxy repo is hit as well. Where does that local cache live though? Is it available to all of your build nodes? How long does that cache live for?

Warbird
May 23, 2012

America's Favorite Dumbass

Nice try, sign the SOW and I’ll tell you. :v:

madmatt112
Jul 11, 2016

Is that a cat in your pants, or are you just a lonely excuse for an adult?

Anybody ever used kaniko inside a Jenkins kubernetes agent, specifically to build an intermediate container? One job, one jenkinsfile, one Dockerfile. Kaniko builds off the dockerfile, and then I want to use that image to spin up a container and do the business-end of my task.

No matter how I do it, the Jenkins kubernetes agent doesn’t seem to be able to use that just-built image. Irrespective of whether I push it to a registry and tell Jenkins to pull ‘er down, or I tell it to just use the local image.

Plank Walker
Aug 11, 2005
What's the right way to set up a Gitlab workflow with access to AWS for deployments? I'm considering baking the permissions needed for deployments into a role that the runners have, but the runners are also deployed via Gitlab, so at some point there seems to be a need to store AWS credentials as project variables to bootstrap everything, and that seems like a pain from a security and key rotation perspective.

The Fool
Oct 16, 2003


What do you normally use to manage secrets? I'd be very surprised if gitlab couldn't reach in to it

Plank Walker
Aug 11, 2005

The Fool posted:

What do you normally use to manage secrets? I'd be very surprised if gitlab couldn't reach in to it

AWS secrets manager, which completes this circular dependency.

Looks like setting up OpenID to get temporary credentials could work, then there'd be no keys to store/rotate: https://docs.gitlab.com/ee/ci/cloud_services/aws/
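From that doc, the job side would end up looking roughly like this once the GitLab OIDC identity provider and role trust policy exist in AWS (role ARN and image are placeholders; uses the CI_JOB_JWT_V2 token the doc describes):

code:
deploy:
  image: registry.example.com/aws-cli-bash:latest   # anything with the aws cli + bash
  script:
    - >
      STS=($(aws sts assume-role-with-web-identity
      --role-arn "arn:aws:iam::123456789012:role/gitlab-deploy"
      --role-session-name "gitlab-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
      --web-identity-token "${CI_JOB_JWT_V2}"
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    - export AWS_ACCESS_KEY_ID="${STS[0]}"
    - export AWS_SECRET_ACCESS_KEY="${STS[1]}"
    - export AWS_SESSION_TOKEN="${STS[2]}"
    - aws sts get-caller-identity   # sanity check before the real deploy steps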

12 rats tied together
Sep 7, 2006

used to use oidc heavily for github actions -> aws and its a great experience all around. simple to configure and understand, easy to debug because its all just IAM roles in AWS, i highly recommend it

these days, our gitlab runners are aws nodes, so we just give them instance profiles, which is even easier

StumblyWumbly
Sep 12, 2007

Batmanticore!
I work on embedded software, and we do hardware-in-the-loop tests using GitHub Actions. When we open the PR, we build the embedded code and send the binaries to some Raspi test runners that are connected to the hardware. It works out really well, but I'd like to expand the configurations we test. Ideally I'd like to set it up so that when no other test is active, we automatically deploy the latest release or development code and run some random configuration or long-term test.

Anyone have thoughts on the best way to do this? Easiest might be to have something local that sends API calls to GitHub that trigger the test action and sends out an email if there's an error. Almost all the processing should be handled by the runners, so that should be pretty cheap.

Are there any easier or more common solutions to this kind of thing?
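The "something local that sends API calls" bit can be pretty small if the test workflow has a workflow_dispatch trigger; a rough sketch (repo/workflow names and inputs are placeholders):

code:
# kick off the HIL workflow via workflow_dispatch; checking the result and
# emailing on failure would mean polling the runs API afterwards (left out here)
import os
import random
import requests

OWNER, REPO, WORKFLOW = "my-org", "firmware", "hil-test.yml"
TOKEN = os.environ["GITHUB_TOKEN"]

url = (f"https://api.github.com/repos/{OWNER}/{REPO}/"
       f"actions/workflows/{WORKFLOW}/dispatches")
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/vnd.github+json"},
    json={"ref": "main",
          "inputs": {"config": random.choice(["config-a", "config-b", "long-soak"])}},
)
resp.raise_for_status()   # GitHub returns 204 No Content on success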

The NPC
Nov 21, 2010


For anyone managing an openshift cluster, how much do you fiddle with resource requests and limits on operators?

We are running a (what I feel is small) cluster with 3x 8 cpu 32 gb ram worker nodes. Our actual utilization is really low, but we are already running into scheduling issues because of cpu requests. Is anyone else modifying operator configs to have significantly fewer resources?

I don't see this talked about much and assume I am doing something wrong at this point.
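For context, the kind of override I mean is the OLM Subscription's config stanza, which overrides the resources on the operator's own deployment (operator name and values hypothetical):

code:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: some-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: some-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    resources:
      requests:
        cpu: 50m        # operators often request far more than they actually use
        memory: 128Mi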
