|
Vulture Culture posted:External DNS is extremely easy to get working with the AWS Load Balancer Controller nowadays. It mostly Just Works. The documentation is very good, and most of it is superfluous if you understand what you're doing (assuming roles into the correct accounts if the zone is hosted in a different account, etc.). Enable the registry to use TXT records for ownership, then keep an eye on the logs to see what's failing. I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though, yeah?
|
# ? Oct 31, 2022 19:11 |
|
|
Configuration is also mounted on volumes. Ultimately there's a separation between data, behavior, and secrets that is important for scalability and security alike. If you need to rotate secrets, it's a purely operational change separate from code, and the actual business data of the application isn't supposed to be impacted, after all.
|
# ? Oct 31, 2022 19:31 |
|
trem_two posted:I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though yeah? We use external-dns pretty extensively in our EKS clusters and haven't had any issues running it in UPSERT mode. Our use case is about as simple as it gets, though: services run on a single cluster with a classic ELB + Nginx ingress controller setup, the Route 53 hosted zones (one public, one private) are in the same account, and it only manages ~70 records. We even used it to migrate traffic from one cluster to another as servers became ready. The external-dns deployment on the new cluster used a different TXT owner ID, and we put an extra step at the end of the service's deployment pipeline to update the TXT record associated with its ingress once it had deployed to the new cluster. whats for dinner fucked around with this message at 23:53 on Oct 31, 2022 |
# ? Oct 31, 2022 23:48 |
|
Methanar posted:https://community.cloudflare.com/t/txt-record-breaks-wildcard-cname/150786/4 I've been burned by external-dns multiple times. This one was a particularly unpleasant surprise if you use wildcards. Also it once deleted all of my A records because I had not properly set --registry=txt So don't make either of those mistakes, I guess. Methanar fucked around with this message at 00:04 on Nov 1, 2022 |
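For reference, this is roughly the flag combination that avoids both traps (a sketch only, with placeholder zone and owner values, not a drop-in config):

```yaml
# Sketch: external-dns container args; values below are placeholders.
args:
  - --source=ingress
  - --provider=aws
  - --registry=txt              # track record ownership via TXT records
  - --txt-owner-id=my-cluster   # must be unique per cluster/deployment
  - --policy=upsert-only        # only create/update, never delete records
  - --domain-filter=example.com # limit which zones external-dns may touch
```

The upsert-only policy means stale records linger instead of being cleaned up, but that's usually a better failure mode than waking up to an empty zone.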
# ? Nov 1, 2022 00:02 |
|
fletcher posted:It's a common pattern to pull down some config at runtime. You wouldn't want secrets baked into the images, for example. This is probably the best solution. Currently everything besides secrets is baked into the image, meaning we’ve got 1 image per customer. Which is fine for customers, just not for us. minato posted:The build is config-dependent, or the runtime is config-dependent? Because if it's the latter, you should absolutely be injecting the config in at runtime, not during build. While it can be convenient to ship config baked into the container, it results in situations like this where now you need 1 container per config. If the config was injected at runtime, you only need 1 build. This is the main issue I’m trying to sort. My predecessors have baked runtime config into the build process, and have done so in a pretty convoluted way that makes it more difficult to separate the two properly. It wasn’t that big of a deal, until things started to scale. I think I’m going to see if I can push the client config changes to an s3 bucket and pull from there on start.
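The runtime side can stay dead simple. A hedged sketch of the merge order (none of these names are the real app, and the fetch step is injected so it could be boto3, `aws s3 cp`, or a mounted volume):

```python
import json
import os

# Hypothetical sketch: one image for all customers, config injected at
# container start. fetch_remote is injected so the source (S3, HTTP, a
# mounted volume) is swappable; these names are illustrative only.

DEFAULTS = {"feature_x": "off", "timeout_s": "30"}

def load_config(fetch_remote, env=os.environ):
    """Merge: baked-in defaults < remote customer config < env overrides."""
    config = dict(DEFAULTS)
    remote = fetch_remote()  # e.g. an S3 object body as a JSON string, or None
    if remote:
        config.update(json.loads(remote))
    for key in list(config):
        env_key = "APP_" + key.upper()  # APP_TIMEOUT_S beats everything else
        if env_key in env:
            config[key] = env[env_key]
    return config
```

The point of the precedence order is that operators can override anything without cutting a new image or re-uploading customer config.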
|
# ? Nov 1, 2022 07:55 |
|
LochNessMonster posted:Is there a good way to start Azure DevOps pipelines in batches. I'm trying to find a way to trigger over 1k downstream pipelines after my initial pipeline runs successfully. We've used Scale-Set Build Agents to great effect
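If you do end up scripting the fan-out yourself, a simple batching helper keeps 1k trigger calls from hammering the API all at once (the trigger function here is a stand-in for whatever actually starts one pipeline, e.g. a call to the Azure DevOps Runs REST endpoint):

```python
import time

def run_in_batches(pipeline_ids, trigger, batch_size=50, pause_s=0.0):
    """Trigger pipelines in fixed-size batches.

    trigger is injected (it would wrap the actual REST call in practice),
    which also keeps this sketch testable without touching any API.
    """
    for start in range(0, len(pipeline_ids), batch_size):
        for pid in pipeline_ids[start:start + batch_size]:
            trigger(pid)
        if pause_s and start + batch_size < len(pipeline_ids):
            time.sleep(pause_s)  # crude pacing between batches
```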
|
# ? Nov 1, 2022 17:02 |
|
whats for dinner posted:We use external-dns pretty extensively in our EKS clusters and haven't had any issues running it in UPSERT mode. Our use case is about as simple as it gets, though: services run on a single cluster with a classic ELB + Nginx ingress controller setup and route 53 hosted zones (one public, one private) are in the same account and it only manages ~70 records. Cool, good to know, thanks. That sounds fairly similar to my use case.
|
# ? Nov 1, 2022 17:34 |
|
Junkiebev posted:We've used Scale-Set Build Agents to great effect Thanks, that sounds like something I’ll probably use in the future. For now I’m rewriting the part to load config on runtime instead of build.
|
# ? Nov 2, 2022 10:26 |
|
trem_two posted:I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though yeah? Related to my earlier posts, I got external-dns working pretty easily after reading through everything again. So my ingress is nice and smooth: I just deploy it and the ALB stuff magically appears with new/updated Route 53 records. It's really nice actually. EDIT: I solved my problem, which wasn't worth reading about here. Sylink fucked around with this message at 03:05 on Nov 4, 2022 |
# ? Nov 3, 2022 19:27 |
|
trem_two posted:I really wanted to use external-dns, but there were some issues in the GitHub repo that were a bit frightening, including issues that were closed due to being “stale” since nobody picked them up. You’ve had a good experience with external-dns though yeah?
|
# ? Nov 5, 2022 18:16 |
|
I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform? I'd also like to automatically generate a github release for certain commit tags, if that's possible -- I haven't yet figured out how to do that in github actions. e: I'm leaning toward buildbot, since I've got some spare server resources for self-hosting and I like that it uses python Music Theory fucked around with this message at 23:22 on Nov 6, 2022 |
# ? Nov 6, 2022 23:18 |
|
Music Theory posted:I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform? Depends. Argo. Jenkins.
|
# ? Nov 7, 2022 01:20 |
|
Music Theory posted:I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform? Just lock yourself into github for personal stuff, it's fine. Any job you get you're just going to use whatever the company already uses anyway so whatever.
|
# ? Nov 7, 2022 01:47 |
|
Music Theory posted:I've never really done any build/test automation beyond some simple github actions, but I want to get better at it & I don't want to be locked in to github stuff. Is there a good replacement for github actions that isn't tied to a specific platform?
|
# ? Nov 7, 2022 20:07 |
|
Yeah, I've found producing tags as part of a release to work better than producing releases from tags. Creating the git tag is part of publishing a release, so it should come after you build the thing which will be published.
|
# ? Nov 7, 2022 22:26 |
|
it’s also surprisingly kinda hard to build deterministic artifacts
|
# ? Nov 7, 2022 22:51 |
|
Deterministic and/or reproducible builds are really hard. This describes just the packages for Debian: https://www.qubes-os.org/news/2021/10/08/reproducible-builds-for-debian-a-big-step-forward/

With that said, it's a holy grail of sorts for many, but not necessarily very pragmatic in practice. I'm untangling a horribly overengineered process for a 100+ engineer, 12+ year org. It's consumed many, many engineer-hours, and it's overcomplicated because engineers n years ago wanted to be able to reproduce every package in an entire OS release, up and down the stack, in case we had to go back to a specific package combination. Guess how many times that's been necessary for a customer requirement or to handle a nasty bug? Guess how many times people bitch about how hard it is to build a basic release? Yeah...

So I'm going with: build a release from scratch, provide a package list of every artifact, then tag all of that as an audit trail rather than a reproducibility trail. Because ain't nobody got time to do anything besides check out a commit off trunk, create a hotfix branch, test it, and merge it back before you sign off and go drinking. If you want the same packages back, you can go through the manifest and download the packages you care about yourself (we're not going wild over OpenSSL and glibc differences... yet).

The merge back and generating all the artifacts like manifests, beyond the manual "fix this broken thing" commit, is easy enough to deploy, validate, and sign as a pipeline compared to trying to get every drat commit ID perfect. God forbid you ever do a rebase anywhere, drop a ref, or screw up git filter-repo and be unable to get back some ancient drunken commit that your company depended upon and didn't realize until some routine maintenance or migration exposed it.
|
# ? Nov 8, 2022 13:11 |
|
just use nix op
|
# ? Nov 8, 2022 17:50 |
|
Most industries/organizations are way more interested in knowing where a specific dependency version was in use at the time of a breach than they are in actually rebuilding software from years ago. Long-term reproducibility is mostly important for folks who have an interest in reproducing results of specific analyses (like peer-reviewed science), and occasionally important for folks maintaining long-term support releases of complex and interdependent software ecosystems (like Linux distributions). My hypothesis is that most people/teams/orgs would be better off investing into managing records of their dependencies than actually being continuously able to rebuild with a specific dependency set at a later point in time.
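To sketch that record-keeping idea in concrete terms (the manifest format below is invented, but the introspection is all Python standard library):

```python
import importlib.metadata
import json
import sys

def dependency_manifest():
    """Snapshot the interpreter version and every installed distribution.

    Store something like this alongside each release artifact; answering
    "which version of X were we running during the breach window" then
    becomes a grep through manifests instead of an archaeology project.
    """
    packages = sorted(
        (dist.metadata["Name"], dist.version)
        for dist in importlib.metadata.distributions()
        if dist.metadata["Name"]  # skip rare broken/nameless dists
    )
    return {
        "python": sys.version.split()[0],
        "packages": [{"name": n, "version": v} for n, v in packages],
    }

# json.dumps(dependency_manifest()) is what you'd attach to the release
```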
|
# ? Nov 9, 2022 18:25 |
|
Plorkyeran posted:Yeah, I've found producing tags as part of a release to work better than producing releases from tags. Creating the git tag is part of publishing a release, so it should come after you build the thing which will be published. These workflows are really nice because they work well with the assumptions made by other folks' automated dependency management workstreams, like Dependabot or Renovate. Your continuous releases will be continually incorporated by your downstream consumers, which helps you get feedback on your changes really quickly. The downside is that you need to be confident in the quality of your code and releases. My team has been using semantic-release for basically everything—probably thirty or so distinct internal products—since the beginning of the year, and I can't even remember the last time I deliberately released a piece of software.
|
# ? Nov 9, 2022 18:31 |
Jenkins sucks don’t ever use it
|
|
# ? Nov 11, 2022 00:36 |
Reject modernity, embrace monke. Return to computer-less paperwork.
|
|
# ? Nov 11, 2022 00:36 |
|
madmatt112 posted:Jenkins sucks don’t ever use it yeah use cloudbees instead. Lol
|
# ? Nov 11, 2022 01:03 |
|
man i am feeling a bit burnt out of late - i just got a ticket complaining that a build pipeline which used to take 90 seconds took 110 seconds *once*. npm is involved - the gently caress do you want, guy? i don't control The Internet Junkiebev fucked around with this message at 06:57 on Nov 15, 2022 |
# ? Nov 15, 2022 06:54 |
|
we've made life too easy for these assholes
|
# ? Nov 15, 2022 07:02 |
|
The problem wasn’t that computers were a mistake but that users exist. No more users, no more tickets. QED
|
# ? Nov 15, 2022 21:07 |
|
Junkiebev posted:man i am feeling a bit burnt out of late - i just got a ticket complaining that a build pipeline which used to take 90 seconds took 110 seconds *once* Enjoy hosting a private npm repository in perpetuity
|
# ? Nov 15, 2022 23:04 |
|
|
# ? Nov 15, 2022 23:13 |
Hadlock posted:Enjoy hosting a private npm repository in perpetuity To be fair, a proxy repo is a must for any serious software project. If you can't deploy a hotfix because some rando 3rd party repo is throwing 5xx errors, it's no bueno. We use nexus, and we proxy docker, helm, maven, nuget, pypi, rubygems, yum, and npm. Makes it easy!
|
|
# ? Nov 16, 2022 02:08 |
|
fletcher posted:To be fair, a proxy repo is a must for any serious software project. If you can't deploy a hotfix because some rando 3rd party repo is throwing 5xx errors, it's no bueno. We use nexus, and we proxy docker, helm, maven, nuget, pypi, rubygems, yum, and npm. Makes it easy!
|
# ? Nov 16, 2022 15:03 |
|
I’ve never really messed with that before now. I assume it just locally caches an image on first pull so you don’t blow out your allotment in 2 minutes?
|
# ? Nov 17, 2022 01:00 |
Warbird posted:I’ve never really messed with that before now. I assume it just locally caches an image on first pull so you don’t blow out your allotment in 2 minutes? It depends on what sort of scale you operate at. Sure, local cache is good even if you have a proxy repo, if only to limit the number of times your proxy repo is hit as well. Where does that local cache live though? Is it available to all of your build nodes? How long does that cache live for?
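The mechanics are the same at every layer though, and the expiry question is the whole game. A toy illustration (purely illustrative Python; a real image cache is your registry proxy, not a dict):

```python
import time

class TTLCache:
    """Minimal time-based cache: fresh entries are served locally,
    stale or missing ones fall through to the upstream fetch."""

    def __init__(self, fetch, ttl_s, clock=time.monotonic):
        self._fetch = fetch      # e.g. "pull from the proxy repo"
        self._ttl = ttl_s        # "how long does that cache live?"
        self._clock = clock      # injectable so expiry is testable
        self._entries = {}       # key -> (value, stored_at)

    def get(self, key):
        hit = self._entries.get(key)
        if hit and self._clock() - hit[1] < self._ttl:
            return hit[0]        # fresh: no upstream traffic
        value = self._fetch(key) # stale or missing: go upstream once
        self._entries[key] = (value, self._clock())
        return value
```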
|
|
# ? Nov 17, 2022 01:12 |
|
Nice try, sign the SOW and I’ll tell you.
|
# ? Nov 17, 2022 02:29 |
Anybody ever used kaniko inside a Jenkins kubernetes agent, specifically to build an intermediate container? One job, one Jenkinsfile, one Dockerfile. Kaniko builds off the Dockerfile, and then I want to use that image to spin up a container and do the business end of my task. No matter how I do it, the Jenkins kubernetes agent doesn't seem to be able to use that just-built image, irrespective of whether I push it to a registry and tell Jenkins to pull ‘er down, or tell it to just use the local image.
|
|
# ? Nov 18, 2022 23:58 |
|
What's the right way to set up a Gitlab workflow with access to AWS for deployments? I'm considering baking the permissions needed for deployments into a role that the runners have, but the runners are also deployed via Gitlab, so at some point it seems AWS credentials need to be stored as project variables to bootstrap everything, and that seems like a pain from a security and key-rotation perspective.
|
# ? Nov 21, 2022 17:09 |
|
What do you normally use to manage secrets? I'd be very surprised if gitlab couldn't reach in to it
|
# ? Nov 21, 2022 17:28 |
|
The Fool posted:What do you normally use to manage secrets? I'd be very surprised if gitlab couldn't reach in to it AWS secrets manager, which completes this circular dependency. Looks like setting up OpenID to get temporary credentials could work, then there'd be no keys to store/rotate: https://docs.gitlab.com/ee/ci/cloud_services/aws/
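A minimal sketch of what that looks like in .gitlab-ci.yml (the role ARN, audience, and script are placeholders, and the id_tokens keyword needs a reasonably recent GitLab):

```yaml
deploy:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.example.com  # must match the IAM OIDC provider's audience
  script:
    # Exchange the short-lived CI JWT for temporary AWS credentials;
    # in practice you'd parse the output into AWS_* env vars.
    - >
      aws sts assume-role-with-web-identity
      --role-arn arn:aws:iam::123456789012:role/gitlab-deploy
      --role-session-name "gitlab-${CI_PIPELINE_ID}"
      --web-identity-token "${GITLAB_OIDC_TOKEN}"
      --duration-seconds 3600
    - ./deploy.sh  # no long-lived keys stored in project variables
```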
|
# ? Nov 21, 2022 17:56 |
|
used to use oidc heavily for github actions -> aws and it's a great experience all around. simple to configure and understand, easy to debug because it's all just IAM roles in AWS. i highly recommend it. these days, our gitlab runners are aws nodes, so we just give them instance profiles, which is even easier
|
# ? Nov 21, 2022 18:19 |
|
I work on embedded software, and we do hardware-in-the-loop tests using GitHub Actions. When we open a PR, we build the embedded code and send the binaries to some Raspi test runners that are connected to the hardware. It works out really well, but I'd like to expand the configurations we test. Ideally I'd like to set it up so that when no other test is active, we automatically deploy the latest release or development code and run some random configuration or long-term test. Anyone have thoughts on the best way to do this? Easiest might be to have something local that sends API calls to GitHub to trigger the test action and sends out an email if there's an error. Almost all the processing should be handled by the runners, so that should be pretty cheap. Are there any easier or more common solutions to this kind of thing?
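That local scheduler mostly just needs to hit GitHub's workflow_dispatch endpoint. A sketch that builds the request without sending it (the repo, workflow file, and token handling are all placeholders), using only the standard library:

```python
import json
import urllib.request

def build_dispatch_request(owner, repo, workflow_file, ref, inputs, token):
    """Build (but don't send) a workflow_dispatch call to the GitHub API.

    Endpoint: POST /repos/{owner}/{repo}/actions/workflows/{file}/dispatches
    A local scheduler could fire this whenever no HIL test is active, then
    poll the runs API (or let a notification step inside the workflow
    itself send the failure email).
    """
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           f"/actions/workflows/{workflow_file}/dispatches")
    body = json.dumps({"ref": ref, "inputs": inputs}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/vnd.github+json")
    return req  # urllib.request.urlopen(req) would actually trigger the run
```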
|
# ? Nov 28, 2022 23:48 |
|
|
For anyone managing an OpenShift cluster, how much do you fiddle with resource requests and limits on operators? We are running a (what I feel is small) cluster with 3x 8-CPU / 32 GB RAM worker nodes. Our actual utilization is really low, but we are already running into scheduling issues because of CPU requests. Is anyone else modifying operator configs to have significantly fewer resources? I don't see this talked about much and assume I am doing something wrong at this point.
|
# ? Nov 29, 2022 20:09 |