Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Hadlock posted:

Counterpoint

Half of our nodes in an AZ went down due to a power outage

I did not know this until I was debugging a single Jenkins failure when it happened, and then saw the notification from AWS. The load got shunted to healthy (online) nodes and we hardly saw a blip

If you were curious, I've been working with Kubernetes for the past 4 years, and for the most part I like it, but let's not fool ourselves: it's not always the appropriate tool for every workload or every organization.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


I feel filthy. I had to put a windows VM in one of my K8s clusters.

Zorak of Michigan
Jun 10, 2006


Every post it's a new low with you.

jaegerx
Sep 10, 2012


Zorak of Michigan posted:

Every post it's a new low with you.

i'm a whore

Wizard of the Deep
Sep 25, 2005

Another productive workday

jaegerx posted:

I feel filthy. I had to put a windows VM in one of my K8s clusters.

We all know there are some things XP just does better, but that doesn't mean you should be proud of it.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
What’s everyone’s favourite API gateway/authentication service?

We’ve a customer-facing webapp and a bunch of API services in AWS/GCP/Azure, some of which are internal and some of which I want to be customer-facing. Multi-cloud, so I can’t use the native cloud provider offerings.

Currently evaluating Kong (which highkey sucks in my tests so far), and Cloudflare Workers/Access/API Gateway, but very very open to other alternatives.

vanity slug
Jul 20, 2010

Apigee is neat.

The Fool
Oct 16, 2003


Apigee is what we used until someone high up made the decision to migrate to Azure APIM.



I don't touch either one so I don't actually have any specific opinions about the products.

Docjowles
Apr 9, 2009

Interesting to hear that Kong still sucks. I haven’t used it in several years but it was such a colossal pain in the rear end to operate. Figured maybe they would have made some effort to improve in the meantime.

Hughmoris
Apr 21, 2007
Let's go to the abyss!
Rookie question about IaC:

As I learn this tech, what is good/best practice for building up a project with terraform and testing as I go? Do I just iterate on the main.tf file and build on as I go?

Ex: Do I build out my TF resources for S3 buckets and then apply/verify they work? Then edit the file and add on my permissions and update my stack to verify those work? Then add on my Lambda etc??

jaegerx
Sep 10, 2012


Hughmoris posted:

Rookie question about IaC:

As I learn this tech, what is good/best practice for building up a project with terraform and testing as I go? Do I just iterate on the main.tf file and build on as I go?

Ex: Do I build out my TF resources for S3 buckets and then apply/verify they work? Then edit the file and add on my permissions and update my stack to verify those work? Then add on my Lambda etc??

yes

i am a moron
Nov 12, 2020

"I think if there’s one thing we can all agree on it’s that Penn State and Michigan both suck and are garbage and it’s hilarious Michigan fans are freaking out thinking this is their natty window when they can’t even beat a B12 team in the playoffs lmao"
Hit apply. Error? Make change based on error messages. Hit apply again. Error? Make change based on new error message.

If it wouldn’t get me in trouble I’d post commit logs that are hundreds of ‘changed false to true’ ‘added missing args’ ‘misspelled true’ before a success. At least with TFC you can get spec plans working and see it’s going to fail before you complete the pull request lmao
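As a sketch of that iterate-and-apply loop, assuming a hypothetical S3 setup (names, region, and the second resource are placeholders, not anything from this thread):

```hcl
# main.tf -- iteration 1: just the bucket. `terraform apply`, verify, commit.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "my-team-artifacts-example" # placeholder name
}

# Iteration 2, added only after the bucket applies cleanly -- then apply again:
resource "aws_s3_bucket_public_access_block" "artifacts" {
  bucket                  = aws_s3_bucket.artifacts.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```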

The Fool
Oct 16, 2003


my commit history is hundreds of one line commits tweaking one value or another, sometimes the same thing repeatedly


no one ever sees them though because I squash before I merge

i am a moron
Nov 12, 2020

I work with a team that freely embarrasses themselves but we all do it so no one can judge. Someone did bring up squashing the other day but we decided it’s funnier this way.

Also, keeping a long-running project is a good way to learn: don’t pin your provider version and just ride out major changes. I don’t ever pin versions unless the current one is bugged for something I’ve deployed, or once we realize a rework on a new version of the provider will take way longer than the urgency of whatever change/addition we’re making
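For concreteness, the pin lives in the required_providers block; a minimal sketch (the 2.99 constraint is just an example echoing the thread):

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Omit `version` to ride out provider changes as they land, or pin
      # once upgrading becomes a bigger job than the change you're making:
      version = "~> 2.99"
    }
  }
}
```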

The Fool
Oct 16, 2003


i am a moron posted:

rework on a new version of the provider will take way longer than the urgency of whatever change/addition we’re making

we have a ~70 module library using azurerm

Erwin
Feb 17, 2006

Make sure you destroy everything and recreate it regularly to ensure that works. Rolling forward with your Terraform and never starting over won’t guarantee it’ll work from scratch next time you want to reuse the module.

i am a moron
Nov 12, 2020


The Fool posted:

we have a ~70 module library using azurerm

Yea I think half of our modules are pinned on 2.99, anything not consumed by our team or very adjacent teams isn’t being hosed with at the moment because we don’t want to block anyone

The Iron Rose
May 12, 2012


i am a moron posted:

I work with a team that freely embarrasses themselves but we all do it so no one can judge. Someone did bring up squashing the other day but we decided it’s funnier this way.

Also, keeping a long-running project is a good way to learn: don’t pin your provider version and just ride out major changes. I don’t ever pin versions unless the current one is bugged for something I’ve deployed, or once we realize a rework on a new version of the provider will take way longer than the urgency of whatever change/addition we’re making

this is the way


Hughmoris posted:

Rookie question about IaC:

As I learn this tech, what is good/best practice for building up a project with terraform and testing as I go? Do I just iterate on the main.tf file and build on as I go?

Ex: Do I build out my TF resources for S3 buckets and then apply/verify they work? Then edit the file and add on my permissions and update my stack to verify those work? Then add on my Lambda etc??

never use public terraform modules. private modules maintained by your org are fine.
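A sketch of what that looks like in practice, assuming a hypothetical org-private registry (host, org, and variable names are made up):

```hcl
module "artifact_bucket" {
  # Org-maintained module from a private registry -- placeholder source:
  source  = "app.terraform.io/example-org/s3-bucket/aws"
  version = "1.2.0"

  bucket_name = "my-team-artifacts-example" # hypothetical module input
}
```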

SurgicalOntologist
Jun 17, 2004

Random question: Why don't you see Jobs used more in Kubernetes?

Of course not for your typical frontend/backend webapps, but for microservices like a mail-sending service, or anything else that would typically have queue logic, Jobs would make sense, right? But I hardly see them discussed. We have some services that idle most of the time, then get a request and have to perform a task. It seems much harder to figure out the best scaling parameters as a Deployment with HPA than as a Job, which in a sense scales automatically.

Obviously it wouldn't make sense if you are running over a certain number of these tasks all the time, so maybe a mail service is not the best example (maybe also not a long-running enough task for spinning up a container to make sense), but I assume most organizations have at least some (lowercase j) jobs that run infrequently.

Another drawback is not getting FIFO queuing if that's a requirement.

Did I answer my own question -- is it just that the Venn diagram of relatively long duration, relatively low frequency, and no requirement for a real queue is a pretty small overlap?
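For reference, a minimal Job manifest for the kind of one-shot task described above; the name, image, and args are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: send-mail-batch            # placeholder
spec:
  completions: 1
  backoffLimit: 3                  # retry failed pods up to 3 times
  ttlSecondsAfterFinished: 600     # garbage-collect the finished Job after 10 min
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.example.com/mail-worker:latest  # placeholder
          args: ["--drain-queue-once"]                    # placeholder
```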

Hadlock
Nov 9, 2004

Writing code that can be handled by twelve lines of bash, but baking it into the monolith to make everything more complex and brittle than it needs to be, and thus justifying your job as a subpar junior developer, is a time-honored tradition

Moving that kind of job to the actual scheduler puts it in a gray area: does the infra team need to fix these when they break, who writes them, and how does it fit into our monitoring and alerting story?

K8s also supports cron jobs as a drop-in replacement, which is handy for decommissioning your "prod utils" server, but I've never seen a Job preemptively created by a developer

And yeah once your monolith has the celery/redis pattern, everything just gets thrown in there until it starts creating deadlocks

I think the real answer here is that developers don't even know it's an option, and/or don't trust k8s enough compared to existing solutions
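The CronJob variant mentioned above, as a sketch for replacing a "prod utils" box cron entry (schedule, names, and image are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup            # placeholder
spec:
  schedule: "0 3 * * *"            # 03:00 daily
  concurrencyPolicy: Forbid        # skip a run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: registry.example.com/cleanup:latest  # placeholder
```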

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
IME "job" patterns usually fall under batch processes for data pipelines, such as in ETL or OLAP workloads. By the time an organization has the resources for a K8s setup that isn't trash-tier, engineers are probably looking to move on from low-maturity Jenkins jobs on a cron timer and toward more domain-specific ecosystems like Spark, Airflow, Luigi, Argo, etc.

jaegerx
Sep 10, 2012


https://registry.terraform.io/providers/hashicorp/helm/latest/docs

How late to the party am I on just finding this?

Hadlock
Nov 9, 2004

Looks like v0.10 was released 4 years ago

i am a moron
Nov 12, 2020

Not too late, but I had a principal engineer interrogating a team about why they were doing this a while back. I think they were questioning why you’d do it in TF and not in your manifests or something; I didn’t pay too close attention and I’m not a k8s person, but I think the team in question stopped doing helm in TF

Hadlock
Nov 9, 2004

Helm in TF is really useful for cluster provisioning where you install cert-manager, external DNS manager and stuff like maybe flux or whatever bootstrappy utility stuff always gets installed. I wouldn't use it to install your monolith helm chart and do your auto deploy of prod, but it does have its place
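That bootstrap pattern with the hashicorp/helm provider might look roughly like this; the kubeconfig wiring and chart version handling are simplified assumptions:

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # placeholder; usually wired to the cluster resource
  }
}

# Bootstrap-only charts like cert-manager, not your app's deploy pipeline.
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  set {
    name  = "installCRDs"
    value = "true"
  }
}
```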

luminalflux
May 27, 2005



Hadlock posted:

Helm in TF is really useful for cluster provisioning where you install cert-manager, external DNS manager and stuff like maybe flux or whatever bootstrappy utility stuff always gets installed. I wouldn't use it to install your monolith helm chart and do your auto deploy of prod, but it does have its place

It's basically useful to bootstrap what you need into the cluster until Argo/Flux can take the wheel. It's super brittle and flips the gently caress out if you change values / parameters in the cluster without it knowing.

i am a moron
Nov 12, 2020


Hadlock posted:

Helm in TF is really useful for cluster provisioning where you install cert-manager, external DNS manager and stuff like maybe flux or whatever bootstrappy utility stuff always gets installed. I wouldn't use it to install your monolith helm chart and do your auto deploy of prod, but it does have its place

It was for AKS so none of this applied, someone just saw TF could do helm and was like oh cool

The Iron Rose
May 12, 2012

I would avoid doing helm in terraform.

I still like ansible better.

jaegerx
Sep 10, 2012


The Iron Rose posted:

I would avoid doing helm in terraform.

I still like ansible better.

I’m trying to save keystrokes and this almost kinda works for me.

Methanar
Sep 26, 2013

by the sex ghost
We just use a hack script that renders and dumps YAML to a static file during the AMI bake process, to be applied later by kubectl apply from user-data at init.

It made a lot of sense in helm 2's day.
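A rough sketch of that bake-then-apply pattern (paths and chart names are invented for illustration):

```shell
# At AMI bake time: render the chart to a static manifest; no cluster needed.
helm template bootstrap ./charts/bootstrap \
  --values values-prod.yaml \
  > /opt/bootstrap/manifests.yaml

# At first boot, from user-data, once the control plane is reachable:
kubectl apply -f /opt/bootstrap/manifests.yaml
```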

Hadlock
Nov 9, 2004

Recruiter spam for senior roles at the beginning of the year: top of range was $200-205k, and maybe 1% of spam was over $210k.

I'd say 10% of recruiter spam is offering north of $235k for top of range now, and 1% is $240k or higher

necrobobsledder
Mar 21, 2005
I was seeing 10% or so above 200k base this past year. The number of blockchain companies messaging me went up for reasons I can’t fathom though.

Hadlock
Nov 9, 2004

9/10 times there's a problem with CircleCI, it seems like the problem is a micro gently caress up on their end, which isn't shown on their status page. Seems like a couple times a month

Blinkz0rz
May 27, 2001

Lol, we rolled out a ServiceEntry for our Istio setup and it broke all egress traffic across all of our prod environments. Extremely good poo poo.

Methanar
Sep 26, 2013

I'm going to take a month-long holiday at some point and spend the time taking a part-time job at Pizza Hut just for the fun of it. 4-5 hours a day just hanging out making min wage with a bunch of other pizza makers.

I just want to go into the back of a store and make pizzas all day and not have to worry about loving anything.

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

I've joked to a buddy several times that there's a pizza place near me that I want to work at for 3-6 months just to learn how they make their pizza.

xzzy
Mar 5, 2009

Happiness Commando posted:

I've joked to a buddy several times that there's a pizza place near me that I want to work at for 3-6 months just to learn how they make their pizza.

Mom said her first job was at a KFC because her mom wanted to get the recipe for their 11 herbs and spices. I guess they figured it was made from scratch in each restaurant.

It didn't work.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Does anyone have a recommended tool for performance regression testing specifically? I know that performance consistency is going to be weird when running shared-core test instances on commodity cloud providers, but I'm looking to spot big regressions and overall trends, not small differences. Ideally something that is easily integrated with Github Actions, but anything at all would be of interest to me as this is a space I know very little about.

Specifically, I've got an application that got significant performance improvement from reducing branching in one place, and reorganizing some loops such that they get vectorized. Both of these optimizations are a little bit fragile, and performance regression testing seems the only way to automatically determine if the vectorization gets broken or an unpredictable conditional gets added, both of which sink performance on this specific hot loop.

El Grillo
Jan 3, 2008
Fun Shoe
We have a large-ish repo, and relatively small server storage space (due to cost). We only need to distribute the repo folders to our servers; we don't need to send the ~14gig .git directory. Is there any way to do this, whilst still having it as a repo on the servers so that we can do small updates to servers without having to just redownload the whole repo every time?

The only suggestions I see on the net seem to be to do a git clone --depth 1, and then basically deregister the repo on the server and delete the .git directory. But that still requires you to have enough space to get the .git directory in the first place, and a shallow clone doesn't help us much (it only reduces that directory by about 1 gig).

I suspect the answer is 'no' but figured here of all places someone would be able to give a definitive answer.

Twerk from Home
Jan 17, 2009


El Grillo posted:

We have a large-ish repo, and relatively small server storage space (due to cost). We only need to distribute the repo folders to our servers; we don't need to send the ~14gig .git directory. Is there any way to do this, whilst still having it as a repo on the servers so that we can do small updates to servers without having to just redownload the whole repo every time?

The only suggestions I see on the net seem to be to do a git clone --depth 1, and then basically deregister the repo on the server and delete the .git directory. But that still requires you to have enough space to get the .git directory in the first place, and a shallow clone doesn't help us much (it only reduces that directory by about 1 gig).

I suspect the answer is 'no' but figured here of all places someone would be able to give a definitive answer.

I've got a couple of thoughts. How big is the actual total distributable directory you're wanting to send out? Is it almost 14GB, or bigger, since .git is compressed and your working tree isn't? If you have huge files in your git history that were checked in in the past and then removed, and you are confident you will no longer need them, you can clean them out easily with a history-rewriting tool like the BFG Repo-Cleaner: https://rtyley.github.io/bfg-repo-cleaner/

Alternatively, you could start using a flow with git archive, which will create a zip or tar of the entire contents of the repository without the .git directory, and then distribute that archive to the servers.

I'm assuming that big size is driven by some kind of huge binary files in the repository; you could also separate those out with Git Large File Storage: https://git-lfs.github.com/, or even just have a script that downloads those files over HTTPS from some server that offers them, run immediately after checkout.

Does any of that sound reasonable?
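A self-contained illustration of the git archive option, using a throwaway repo in a temp directory (everything here is demo scaffolding):

```shell
set -e
# Throwaway repo purely to demonstrate; in a real setup you'd run `git archive`
# inside your existing repo on a build host (or via `--remote=` where the
# hosting server supports it).
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
echo "payload" > app.txt
git add app.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm "add app.txt"

# Export HEAD's tree only: the artifact contains no .git directory and no history.
git archive --format=tar.gz -o ../release.tar.gz HEAD
tar -tzf ../release.tar.gz   # lists: app.txt
```

On the servers this pairs naturally with just unpacking the tarball; the trade-off is exactly the one described above -- you lose git pull-style incremental updates.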
