tango alpha delta
Sep 9, 2011

Ask me about my wealthy lifestyle and passive income! I love bragging about my wealth to my lessers! My opinions are more valid because I have more money than you! Stealing the fruits of the labor of the working class is okay, so long as you don't do it using crypto. More money = better than!
I think I remember how to set up a container registry for Kubernetes in Artifactory, but the devs at my last job would usually keep a local repo on their dev machines, IIRC.
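Something like this, if memory serves (the hostname and repo name here are made up):

code:

# assuming an Artifactory-hosted Docker registry at
# artifactory.example.com with a local repo named docker-local
docker login artifactory.example.com
docker tag myapp:latest artifactory.example.com/docker-local/myapp:latest
docker push artifactory.example.com/docker-local/myapp:latest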

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Oysters Autobio posted:

is this the right thread for docker questions?

I'm looking to whip up some demo streamlit apps to get some stuff in front of teammates and other users to try.

Another guy on the data science team has already stood one up on an on-prem server, and while chatting he mentioned (and showed me) that I should be able to ssh in and, if I wanted to, mount a volume for my own docker containers and sort of "carve out my own space"
Depending on how cowboy/undisciplined your data science folks are with the infrastructure/ops side of things, you may also want to consider adding ngrok to their toolkits, so they can help each other debug and demo their locally developed work or whatever is hanging out in a cloud Jupyter notebook. Granted, this may open up the developer's machine for attack, but sometimes, in the evil, devious, passive-aggressive recesses of my soul, I think letting people get hacked may be the only way to stop them from pushing obviously insecure stuff into a cloud or trying to get around security measures I've taken. What's the amount of data your team members usually work with? That largely determines what makes sense to do locally and what's unavoidably remote-heavy.
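For scale, the ngrok part of that is about two commands (assuming streamlit's default port of 8501):

code:

# run the app locally, then tunnel it; the URL ngrok prints
# is shareable with teammates for debugging and demos
streamlit run app.py &
ngrok http 8501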

At my company I'm helping out on occasion with the pachyderm clusters used by our data science and ML folks, and that may be worth taking a look at for a decent compromise between infrastructure folks and data science people. On the other hand, I have reservations about recommending anything that was acquired by HPE in the same way I have difficulty recommending Broadcom-acquired stacks.

Hadlock
Nov 9, 2004

necrobobsledder posted:

At my company I'm helping out on occasion with the pachyderm clusters used by our data science and ML folks, and that may be worth taking a look at for a decent compromise between infrastructure folks and data science people. On the other hand, I have reservations about recommending anything that was acquired by HPE in the same way I have difficulty recommending Broadcom-acquired stacks.

If it makes you feel any better, the cluster provisioning design and execution were done by a goon, and I don't think anything has changed since the HPE acquisition (yet)

Warbird
May 23, 2012

America's Favorite Dumbass

My current client/role has me playing around with a K8s-ed out Gitlab instance for CICD stuff and I'm getting legit mad at how much better this is than Jenkins in more or less every way possible. Holy hell.

Junkiebev
Jan 18, 2002


Feel the progress.

I have a vendor application I’m trying to instrument for observability in a series of k8s clusters running Prometheus Operator. It’s already instrumented for prom, but the poo poo thing is it can only push in Prometheus remote write binary format. I would just throw in the towel and point it at the Prometheus instance for the cluster, but multiple copies of the app run in each cluster and they collide.

Generally, I would use the Prometheus push gateway (and target that with a service monitor) as an intermediary, but it’s not a fit: wrong protocol, plus no TTL settings. Maybe the aggregation gateway is the move?

It would be very inefficient to run Prometheus sidecars with ephemeral data directories and minuscule TTL settings, but that would almost certainly work.
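For reference, the sidecar version is roughly this much config (those are real Prometheus flags; the tiny retention is the wasteful part):

code:

# per-pod Prometheus that accepts the app's remote-write pushes
prometheus \
  --config.file=/etc/prometheus/empty.yml \
  --web.enable-remote-write-receiver \
  --storage.tsdb.path=/tmp/prom-data \
  --storage.tsdb.retention.time=15m
# the app pushes to http://localhost:9090/api/v1/write, and the
# cluster Prometheus scrapes this sidecar's /federate endpoint
# (via a service monitor) to pick the series up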

Junkiebev fucked around with this message at 15:48 on Mar 17, 2024

madmatt112
Jul 11, 2016

Is that a cat in your pants, or are you just a lonely excuse for an adult?

quote:

it’s not a fit: wrong protocol, plus no TTL settings.

IDGI why isn’t it a fit?

Cheston
Jul 17, 2012

(he's got a good thing going)
How do I handle non-idempotent e2e tests? I.E. how do I make an end-to-end test for a site's invite code flow work both in dev and in production? I need a valid invite code in the database to run the test, and I need to actually consume that code to finish the test. Locally I can just reset postgres between runs, but I can't do that in production. Do I make an API for the e2e test to hit to make sure the right data is populated? I don't have a lot of backend experience and I feel like I'm doing something wrong.

**this is serverless nextjs (vercel), the backend is supabase (managed postgres). there's currently no staging environment, just dev and prod.

Cheston fucked around with this message at 20:19 on Mar 17, 2024

Junkiebev
Jan 18, 2002


Feel the progress.

madmatt112 posted:

IDGI why isn’t it a fit?

Because I’d have to rely on a scrape config which somehow only took the “latest” results. Is that a thing?

zokie
Feb 13, 2006

Out of many, Sweden

Cheston posted:

How do I handle non-idempotent e2e tests? I.E. how do I make an end-to-end test for a site's invite code flow work both in dev and in production? I need a valid invite code in the database to run the test, and I need to actually consume that code to finish the test. Locally I can just reset postgres between runs, but I can't do that in production. Do I make an API for the e2e test to hit to make sure the right data is populated? I don't have a lot of backend experience and I feel like I'm doing something wrong.

**this is serverless nextjs (vercel), the backend is supabase (managed postgres). there's currently no staging environment, just dev and prod.

Ideally you want your tests to behave like your users as much as possible, but if the invite code is sent by text or email I would try to avoid that. Having a special API that is protected somehow would be one way of doing it, and probably what I would do if I wanted to automate the process. But since that test being green doesn’t really show you that actual people can accept invites and register, maybe it’s not a good test?

What type of invite codes are they? If they come from one user to send to a friend, you should be able to get them by acting as a real user and then using them in another clean context. If they are more like verifying that an email address is correct, then maybe just leave it to manual testing, unless you want to automate email access too. If you’re not checking that those emails get sent out, then you can’t be sure that everything works. But it all depends on what you are trying to do, of course.
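If you do go the protected-API route, the shape is something like this (the endpoint, header, and code value are all made up):

code:

# seed step before the e2e run: create a known invite code
curl -fsS -X POST "$BASE_URL/api/test/seed-invite" \
  -H "Authorization: Bearer $E2E_SEED_TOKEN" \
  -d '{"code": "e2e-invite-001"}'
# the test consumes e2e-invite-001, and a teardown step deletes it
# so the run stays repeatable in prod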

kaaj
Jun 23, 2013

don't stop, carry on.
You could technically get an email address to send invite codes to, and then build automation to act on that? It shouldn’t be that hard, although it would involve some work. I’d at least consider it; I really dislike having hidden APIs to do stuff if there’s a way to avoid them.

madmatt112
Jul 11, 2016

Is that a cat in your pants, or are you just a lonely excuse for an adult?

Junkiebev posted:

Because I’d have to rely on a scrape config which somehow only took the “latest” results. Is that a thing?

I think that’s the idea - every time you scrape, you get the most recent data point, and store it with a timestamp. Repeat ad nauseam and eventually you have a TSDB.
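Mechanically, a scrape is just this (hostname and metric name made up):

code:

# one scrape = one HTTP GET of the current values; Prometheus
# timestamps whatever comes back and appends it to the TSDB
curl -s http://my-app.example:8080/metrics | grep my_app_requests_total
# repeat every scrape_interval and that counter becomes a time series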

Hadlock
Nov 9, 2004

Warbird posted:

My current client/role has me playing around with a K8s-ed out Gitlab instance for CICD stuff and I'm getting legit mad at how much better this is than Jenkins in more or less every way possible. Holy hell.

Not that I'm a big fan of Jenkins, but what do you like about it over Jenkins?

Warbird
May 23, 2012

America's Favorite Dumbass

Mostly that it’s not loving Jenkins.

LochNessMonster
Feb 3, 2005

I need about three fitty


Hadlock posted:

Not that I'm a big fan of Jenkins, but what do you like about it over Jenkins?

Probably something like not having to deal with this.

code:

node {
    echo 'No quotes, command in single quotes'
    sh 'echo $BUILD_NUMBER'
    echo 'Double quotes are silently dropped'
    sh 'echo "$BUILD_NUMBER"'
    echo 'Even escaped with a single backslash they are dropped'
    sh 'echo \"$BUILD_NUMBER\"'
    echo 'Using two backslashes, the quotes are preserved'
    sh 'echo \\"$BUILD_NUMBER\\"'
    echo 'Using three backslashes still results in preserving the quotes'
    sh 'echo \\\"$BUILD_NUMBER\\\"'
    echo 'To end up with \" use \\\\\\" (yes, six backslashes)'
    sh 'echo \\\\\\"$BUILD_NUMBER\\\\\\"'
    echo 'This is fine and all, but we cannot substitute Jenkins variables in single quote strings'
    def foo = 'bar'
    sh 'echo "${foo}"'
    echo 'This does not interpolate the string but instead tries to look up "foo" on the command line, so use double quotes'
    sh "echo \"${foo}\""
    echo 'Great, more escaping is needed now. How about just concatenate the strings? Well that gets kind of ugly'
    sh 'echo \\\\\\"' + foo + '\\\\\\"'
    echo 'We still needed all of that escaping and mixing concatenation is hideous!'
    echo 'There must be a better way, enter dollar slashy strings (actual term)'
    def command = $/echo \\\"${foo}\\\"/$
    sh command
    echo 'String interpolation works out of the box as well as environment variables, escaped with double dollars'
    def vash = $/echo \\\"$$BUILD_NUMBER\\\" ${foo}/$
    sh vash
    echo 'It still requires escaping the escape but that is just bash being bash at that point'
    echo 'Slashy strings are the closest to raw shell input with Jenkins, although the non-dollar variant seems to give an error but the dollar slashy works fine'
}

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

LochNessMonster posted:

Probably something like not having to deal with this.

code:
...
This is a lot of words to say "I didn't know about """triple quoting""" until today"

TBH, a big part of the problem with interpolating vs. not interpolating variables is that people don't pay attention to the point at which they've stopped providing variables to things and have started dynamically generating code, as though landing in this place is somehow the fault of variable interpolation. Don't do that. There's a hundred ways not to do that. Provide things that need interpolation as inputs to your shell script, or export them as environment variables. Have your Jenkinsfile be Jenkins code and have your shell scripts be shell scripts. I heartily recommend folks don't generate shell scripts and then get annoyed about how hard this obviously hard problem is.
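In other words, keep the script dumb enough to run by hand. A minimal sketch (build.sh and FOO are placeholders):

code:

#!/bin/sh
# build.sh: a plain script you can run and test locally; Jenkins
# only supplies values through the environment, e.g. in the
# Jenkinsfile: withEnv(["FOO=bar"]) { sh './build.sh' }
set -eu
echo "building ${FOO:?FOO must be set} for build ${BUILD_NUMBER:?}"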

Vulture Culture fucked around with this message at 14:50 on Mar 18, 2024

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
it’s always funny to me when people say they are trying to solve the problem of groovy’s terrible syntax by introducing another flavor of turing complete yaml

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
the problem jenkins is solving for everyone is centralized job scheduling, orchestration, code reuse, testable pipelines/components, custom plugins, etc.

from what I know (very little), the yamlshit generally does not solve all of these problems or does not do so as well as jenkins does, and until it does I don’t care if I have to write php, golang, whatever in my pipelines; the language is just a means to those greater ends

The Fool
Oct 16, 2003


I don't use jenkins specifically, but for cicd pipelines generally, I feel like having a bunch of logic and/or inline code in your pipeline definition is a huge anti-pattern.

A pipeline should just call a list of tasks; logic to determine whether a task should run is fine, but not more than that.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Obviously you'd do it better greenfield if possible, but since you're likely just shoving some already-designed scripts into jenkins for the ease-of-execution GUI, RBAC, and logging, you work with what you're given.

The real crime here is how much developing pipelines sucks as a development workflow. There's no breakpointing, no IDE, a special one-off language, and no real way to troubleshoot beyond executing the script over and over again. Just getting a simple maintenance task done means covering everything in echoes, executing it 10 times (+1 if you're providing parameters in the pipeline which modify the job itself), while hitting your shin on every single dereference failure and syntax error read through the job console log.

Good luck trying to develop a jenkins library, because the best our modern systems have to offer are an attached job-configuration GUI, a separate execute-job GUI, and clicking into the console every time to see what happened.

This is why most people recommend leaning more on shell scripts, not less, getting those working separately with a faster execute/fix loop and then using jenkins glue as the thinnest coordination/execution piece possible.

Bhodi fucked around with this message at 15:14 on Mar 18, 2024

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

my homie dhall posted:

it’s always funny to me when people say they are trying to solve the problem of groovy’s terrible syntax by introducing another flavor of turing complete yaml

The most damning criticism of Groovy is what its creator said:

quote:

I can honestly say if someone had shown me the Programming in Scala book by Martin Odersky, Lex Spoon & Bill Venners back in 2003 I'd probably have never created Groovy.

The Fool
Oct 16, 2003


Bhodi posted:

This is why most people recommend leaning more on shell scripts, not less, getting those working separately with a faster execute/fix loop and then using jenkins glue as the thinnest coordination/execution piece possible.

Yeah, this is part of my point. Develop your scripts in whatever language (gently caress bash), test them independently, then when it comes time to put them in a pipeline you have reusable blocks that you can call out in steps.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
pre:
make ci
or gtfo

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

my homie dhall posted:

the problem jenkins is solving for everyone is centralized job scheduling, orchestration, code reuse, testable pipelines/components, custom plugins, etc.

from what I know (very little), the yamlshit generally does not solve all of these problems or does not do so as well as jenkins does, and until it does I don’t care if I have to write php, golang, whatever in my pipelines; the language is just a means to those greater ends
I'm all for shooting down useless technology migrations, but Jenkins is quite bad at every one of these things except job scheduling. There's a few ways of obtusely handling code reuse if you have admin on the server too, but it's mostly a fight between the sandbox and people who nonstop hassle you to turn it off

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Agreed it's bad, but I figure a lot of teams need to set up an automation platform for CICD and/or random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease, and Jenkins is just simple enough that one person can get it set up "enough" for a team within a short time. The better alternatives (a k8s cluster? Ansible Automation Platform?) are complex enough that they need an Ops team.

The Fool
Oct 16, 2003


minato posted:

Agreed it's bad, but I figure a lot of teams need to set up an automation platform for CICD and/or random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease, and Jenkins is just simple enough that one person can get it set up "enough" for a team within a short time. The better alternatives (a k8s cluster? Ansible Automation Platform?) are complex enough that they need an Ops team.

Azure DevOps, GitLab, GitHub Actions, or CircleCI?

All of those are super easy for a small team to set up

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

The Fool posted:

Azure DevOps, GitLab, GitHub Actions, or CircleCI?

All of those are super easy for a small team to set up
Yes, for CICD. But not for

quote:

random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease

We did a survey of the hundreds of Jenkins instances we discovered running within the company, and many of them were being used for non-CICD purposes.

The Fool
Oct 16, 2003


you say that but I have absolutely done plenty of bullshit task automation in azure devops and github actions

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
This isn't entirely on topic for this thread but idk this feels like the right audience for the question.

We're a SaaS company in what our CEO calls the "growth stage" that sells our product to other businesses. Our product sends out some emails to customers as part of its normal operation. We also utilize a number of SaaS products that send email on our behalf. I've got two different problems to solve here that are related insomuch as vendors seem to wrap them both up in a single product offering.

We're at our limit for SPF lookups but need to add another vendor tool that will send mail as us.
We also receive a deluge of DMARC aggregate reports that we all just filter to the trash, but the emails we send are important enough to our customers that we think it's time to buy some kind of tool to ingest these reports to be sure we're behaving properly.

I'm kind of confused and overwhelmed trying to find vendors in this space. We're not running business-to-consumer mass email campaigns where we send out millions of emails to consumers. We're not an MSP managing an infinite number of domains for our customers. We're not some massive enterprise that needs some massive email security suite (I'm looking at you, Proofpoint). I just want some console to process our DMARC reports and also something to manage our SPF records.

Am I correct to be looking for a single vendor for this? Does anyone have advice for vendors in this space?

I have come across this list of vendors providing DMARC/SPF solutions, but even with that list I'm a bit overwhelmed. Is SPF Flattening good enough for us? Should I look for a vendor that does SPF macros? Does it matter?

FISHMANPET fucked around with this message at 16:50 on Mar 19, 2024

The Fool
Oct 16, 2003


FISHMANPET posted:


We're at our limit for SPF lookups but need to add another vendor tool that will send mail as us.


does include: not work for you, or is that what you mean by lookups?

vanity slug
Jul 20, 2010

I've used Dmarcian before and it works well, I'm using PowerDMARC for my personal domains.

If you're hitting SPF lookup limits you could go for SPF flattening, but in the long run you're better off splitting senders off onto their own subdomains.
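The subdomain split looks roughly like this (vendor names made up), with each record staying well under the 10-lookup budget:

code:

$ dig +short TXT example.com
"v=spf1 include:_spf.google.com -all"
$ dig +short TXT marketing.example.com
"v=spf1 include:_spf.somemailvendor.example -all"
# each include:/a/mx/redirect costs a lookup, counted recursively
# through every included record, against a budget of 10 per domain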

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Yeah, with our existing include records we've hit the limit of 10 lookups. We tried to add another tool for marketing and failed validation because it brought us up to 14 lookups, so we removed that to get back to 10 and are figuring out how to move forward from here.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

Vulture Culture posted:

I'm all for shooting down useless technology migrations, but Jenkins is quite bad at every one of these things except job scheduling. There's a few ways of obtusely handling code reuse if you have admin on the server too, but it's mostly a fight between the sandbox and people who nonstop hassle you to turn it off

I agree, but what platform is doing all of them better?

Hadlock
Nov 9, 2004

minato posted:

for CICD and/or random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease,

Jenkins is not what pops to mind when I think of "review logs with ease"

12 rats tied together
Sep 7, 2006

everybody being unsure about why we're actually running that garbage absolutely tracks w/ my experience of jenkins as well TBH

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

my homie dhall posted:

I agree, but what platform is doing all of them better?
I guess what I mean is that all of these features are means to an end, and there's ways of getting them that don't rely on having totally insecure, unstable (or easily destabilized) core platforms. The Jenkins model of extensibility is like a Windows 98 kernel where you just keep shoving in driver after driver: no good will ever come of this.

I don't know that GitHub Actions or GitLab CI do "run thing at time" better, but gently caress, at least they have working access control.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
We’re migrating some k8s clusters, which historically have been split between individual dev teams, and would like to start building resources in the same cluster to save some money and operational overhead.

This means NetworkPolicies, namespaced roleBindings, resource quotas/separate node pools, et cetera. My inclination is to require namespaces in the form <service-team>. I’m aware that partial matches are prohibited, so this is really just a logical organization thing so it’s obvious that “redis” is actually owned by “team foo”.

We’re currently using the Azure CNI rather than cilium, which can potentially change.

Any advice here? Mostly in terms of naming conventions. every deployment is tagged with service and team labels/annotations.

The Iron Rose fucked around with this message at 18:39 on Mar 19, 2024

George Wright
Nov 20, 2005

The Iron Rose posted:

We’re migrating some k8s clusters, which historically have been split between individual dev teams, and would like to start building resources in the same cluster to save some money and operational overhead.

This means NetworkPolicies, namespaced roleBindings, resource quotas/separate node pools, et cetera. My inclination is to require namespaces in the form <service-team>. I’m aware that partial matches are prohibited, so this is really just a logical organization thing so it’s obvious that “redis” is actually owned by “team foo”.

We’re currently using the Azure CNI rather than cilium, which can potentially change.

Any advice here? Mostly in terms of naming conventions. every deployment is tagged with service and team labels/annotations.

I don’t like to embed team information in namespace names. This should be metadata attached to the namespace by way of labels. Services can change hands, teams can rename themselves, reorgs can delete teams. It’s easier to update metadata than it is to move a service to a new namespace. In most cost tracking services the team label automatically gets applied to any objects within that namespace, so we don’t worry about annotations on the deployments or pods.

We heavily use rbac-manager for managing access to namespaces; it creates rolebindings between namespaces and groups from our directory. Our directory structure sucks in that our IT people only offer team-based groups, but this is better than a single flat group. Granting a team access is just adding an annotation to the namespace.

We don’t do much network segmentation so I can’t offer insight there.
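Concretely, the label approach is just (team names made up):

code:

# team is metadata on the namespace, not part of its name
kubectl create namespace redis
kubectl label namespace redis team=foo
# a reorg becomes a relabel instead of a service migration:
kubectl label --overwrite namespace redis team=bar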

Hadlock
Nov 9, 2004

George Wright posted:

This should be metadata attached to the namespace by way of labels.

This is the way

Anything you'd instinctively reach toward redis or SQLite for, to manage rarely updated cluster or deployment data, is almost always better done via metadata/annotations or metadata/labels, depending on your use case. Annotations (not labels) in particular can hold long, nested, structured data if you want to get really wild/sloppy, but you can't select on them to cull/tail logs, etc.
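For example (annotation key and values made up):

code:

# annotations can hold structured blobs, but you can't select on them
kubectl annotate deployment myapp example.com/notes='{"owner":"team-foo","ticket":"OPS-123"}'
# labels are what selectors (and log tooling) can actually filter by
kubectl get pods -l team=foo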

The NPC
Nov 21, 2010


If you are storing team info in metadata, what are your namespace naming conventions? If there isn't team1-redis and team2-redis how do you prevent collisions?

For the record we are using team-app-env so we have webdev-homepage-dev, webdev-homepage-uat, finance-batch-dev etc. with each of these tied to an AD group for permissions. We include the environment in the name because we have 1 nonprod cluster.

Hadlock
Nov 9, 2004

If the zookeeper app needs redis, there's a redis deployment in the zookeeper-dev, zookeeper-staging, and zookeeper-prod namespaces (prod should be on a different cluster). If the platform team or the backend team owns zookeeper, that's fine, just update rbac for that user group

It would have to be a company-wide, ultra-high-performance HA redis cluster to need its own namespace. Deployments are plenty enough organizational division in 85% of cases

In my namespaces you have front end, back end, redis, memcached, some kind of queue server, all together. Most services are pretty low-demand (max 100MB memory) in the lower environments, so you just get your own dedicated redis and your dev environment closely mimics prod down to the config level

Cluster-wide stuff like Prometheus and Loki lives in a shared metrics namespace

Edit: teams don't get their own namespace playgrounds to build weird poo poo that sucks up resources and causes problems. Only services! If team B wants a slack bot/service, it gets its own CI/CD and namespace and grafana dashboard just like prod. You can have any color car you want so long as it's black: you can deploy any service you want as long as it follows the deploy-and-monitor pattern of prod

Also empty quoting from the oldie proggy thread:


Hadlock fucked around with this message at 01:38 on Mar 20, 2024
