|
I think I remember how to set up a kubernetes container in Artifactory, but the devs in my last job would usually keep a local repo on their dev machine, IIRC.
|
# ? Mar 14, 2024 07:06 |
|
|
Oysters Autobio posted:is this the right thread for docker questions? At my company I'm helping out on occasion with the pachyderm clusters used by our data science and ML folks, and that may be worth taking a look at for a decent compromise between infrastructure folks and data science people. On the other hand, I have reservations about recommending anything that was acquired by HPE in the same way I have difficulty recommending Broadcom-acquired stacks.
|
# ? Mar 14, 2024 21:46 |
|
necrobobsledder posted:At my company I'm helping out on occasion with the pachyderm clusters used by our data science and ML folks, and that may be worth taking a look at for a decent compromise between infrastructure folks and data science people. On the other hand, I have reservations about recommending anything that was acquired by HPE in the same way I have difficulty recommending Broadcom-acquired stacks. If it makes you feel any better, the cluster provisioning design and execution was done by a goon, and I don't think anything has changed since the HPE acquisition (yet)
|
# ? Mar 14, 2024 21:58 |
|
My current client/role has me playing around with a K8s-ed out Gitlab instance for CICD stuff and I'm getting legit mad at how much better this is than Jenkins in more or less every way possible. Holy hell.
|
# ? Mar 14, 2024 22:44 |
|
I have a vendor application I’m trying to instrument for observability in a series of k8s clusters running Prometheus Operator. It’s already instrumented for prom, but the poo poo thing is it can only push in Prometheus remote write binary format. I would just throw in the towel and point it at the Prometheus instance for the cluster, but multiple app copies are running in a cluster and they collide. Generally, I would use the Prometheus push gateway (and target that with a service monitor) as an intermediary, but it’s not a fit on protocol + no TTL settings. Maybe the aggregation gateway is the move? It would be very inefficient to run Prometheus sidecars with ephemeral data directories and minuscule TTL settings, but that would almost certainly work. Junkiebev fucked around with this message at 15:48 on Mar 17, 2024 |
# ? Mar 17, 2024 15:41 |
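For what it’s worth, newer Prometheus (2.33+) can itself act as a remote-write receiver via `--web.enable-remote-write-receiver`, which exposes `/api/v1/write`. A sketch of the sidecar idea under that assumption — the container name, image tag, and retention values are illustrative, not anyone’s actual config:

```yaml
# Sidecar in each app pod: accepts the vendor app's remote-write pushes
# and holds them briefly so the cluster Prometheus can scrape them back out.
- name: prom-sidecar
  image: prom/prometheus:v2.45.0
  args:
    - --web.enable-remote-write-receiver   # app pushes to localhost:9090/api/v1/write
    - --storage.tsdb.retention.time=15m    # keep the buffer tiny
    - --storage.tsdb.path=/prom-data       # back with an emptyDir volume
  ports:
    - containerPort: 9090
```

A ServiceMonitor pointed at port 9090 then picks the series up with per-pod labels attached at scrape time, which is what keeps the multiple app copies from colliding.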
quote:it’s not a fit on protocol + no TTL settings. IDGI why isn’t it a fit?
|
|
# ? Mar 17, 2024 18:09 |
|
How do I handle non-idempotent e2e tests? i.e. how do I make an end-to-end test for a site's invite code flow work both in dev and in production? I need a valid invite code in the database to run the test, and I need to actually consume that code to finish the test. Locally I can just reset postgres between runs, but I can't do that in production. Do I make an API for the e2e test to hit to make sure the right data is populated? I don't have a lot of backend experience and I feel like I'm doing something wrong. **this is serverless nextjs (vercel), the backend is supabase (managed postgres). there's currently no staging environment, just dev and prod. Cheston fucked around with this message at 20:19 on Mar 17, 2024 |
# ? Mar 17, 2024 20:15 |
|
madmatt112 posted:IDGI why isn’t it a fit? Because I’d have to rely on a scrape config which somehow only took the “latest” results. Is that a thing?
|
# ? Mar 17, 2024 20:30 |
|
Cheston posted:How do I handle non-idempotent e2e tests? I.E. how do I make an end-to-end test for a site's invite code flow work both in dev and in production? I need a valid invite code in the database to run the test, and I need to actually consume that code to finish the test. Locally I can just reset postgres between runs, but I can't do that in production. Do I make an API for the e2e test to hit to make sure the right data is populated? I don't have a lot of backend experience and I feel like I'm doing something wrong. Ideally you want your tests to behave like your users as much as possible. But if the invite code is sent by text or email I would try to avoid that. Having a special API that is protected somehow would be one way of doing it, probably what I would do if I wanted to automate that process. But since that test being green doesn’t really show you that actual people can accept invites and register, maybe it’s not a good test? What type of invite codes are they? If they come from another user to send to a friend you should be able to get them by acting as a real user and then using them in another clean context. If they are more like verifying email is correct then maybe just leave it to manual testing unless you want to automate email access also. If you're not checking that those emails get sent out then you can’t be sure that everything works. But it all depends on what you are trying to do of course
|
# ? Mar 17, 2024 20:51 |
|
You could technically get an email address to send invite codes to and then automation to act on that? It shouldn’t be that hard although it would involve some work. I’d at least consider that, I really dislike having hidden APIs to do stuff if there’s a way to avoid them.
|
# ? Mar 17, 2024 21:24 |
Junkiebev posted:Because I’d have to rely on a scrape config which somehow only took the “latest” results. Is that a thing? I think that’s the idea - every time you scrape, you get the most recent data point, and store it with a timestamp. Repeat ad nauseam and eventually you have a TSDB.
|
|
# ? Mar 17, 2024 22:16 |
|
Warbird posted:My current client/role has me playing around with a K8s-ed out Gitlab instance for CICD stuff and I'm getting legit mad at how much better this is than Jenkins in more or less every way possible. Holy hell. Not that I'm a big fan of Jenkins but what do you like about it over Jenkins?
|
# ? Mar 17, 2024 22:36 |
|
Mostly that it’s not loving Jenkins.
|
# ? Mar 18, 2024 02:02 |
|
Hadlock posted:Not that I'm a big fan of Jenkins but what do you like about it over Jenkins Probably something like not having to deal with this. code:
|
# ? Mar 18, 2024 10:52 |
|
LochNessMonster posted:Probably something like not having to deal with this. TBH, a big part of the problem with interpolating vs. not interpolating variables is that people don't pay attention to at what point they've stopped providing variables to things and have started dynamically generating code, as though landing in this place is the fault of variable interpolation somehow. Don't do that. There's a hundred ways not to do that. Provide things that need interpolation as inputs to your shell script, or export them as environment variables. Have your Jenkinsfile be Jenkins code and have your shell scripts be shell scripts. I heartily recommend folks don't generate shell scripts and get annoyed about how hard this obviously hard problem is Vulture Culture fucked around with this message at 14:50 on Mar 18, 2024 |
# ? Mar 18, 2024 14:44 |
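The env-var route looks roughly like this — a sketch, not anyone's actual pipeline, and `BUILD_TARGET`/`deploy.sh` are made-up names:

```groovy
pipeline {
  agent any
  environment {
    BUILD_TARGET = 'staging'  // hypothetical input
  }
  stages {
    stage('Deploy') {
      steps {
        // Single-quoted Groovy string: Jenkins does NOT interpolate it.
        // The shell reads $BUILD_TARGET from its environment at runtime,
        // so the Jenkinsfile never generates shell code.
        sh './deploy.sh "$BUILD_TARGET"'
      }
    }
  }
}
```

The double-quoted alternative (`sh "./deploy.sh ${BUILD_TARGET}"`) splices the value into the script text before the shell ever sees it, which is exactly the "dynamically generating code" trap.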
|
it’s always funny to me when people say they are trying to solve the problem of groovy’s terrible syntax by introducing another flavor of turing complete yaml
|
# ? Mar 18, 2024 14:58 |
|
the problem jenkins is solving for everyone is centralized job scheduling, orchestration, code reuse, testable pipelines/components, custom plugins, etc. from what I know (very little), the yamlshit generally does not solve all of these problems or does not do so as well as jenkins does, and until it does I don’t care if I have to write php, golang, whatever in my pipelines, the language is just a means to those greater ends
|
# ? Mar 18, 2024 15:07 |
|
I don't use jenkins specifically, but for cicd pipelines generally, I feel like having a bunch of logic and/or inline code in your pipeline definition is a huge anti-pattern. Pipeline should just call a list of tasks, logic to determine if the task should be run is fine, but not more than that.
|
# ? Mar 18, 2024 15:11 |
|
Obviously you greenfield it better if possible but since you're likely just shoving some already-designed scripts into jenkins for ease-of-execution GUI, RBAC, and logging, you work with what you're given. The real crime here is how much developing pipelines sucks as a development workflow. There's no breakpointing, no IDE, a special one-off language and no real way to troubleshoot beyond executing the script over and over again. Just trying to get a simple maintenance task done means covering everything in echoes, executing it 10 times (+1 if you're providing parameters in the pipeline which modify the job itself) while hitting your shin on every single dereference failure and syntax error read through the job console log. Good luck trying to develop a jenkins library because the best our modern systems have to offer are an attached job configuration window GUI, a separate execute job window GUI and clicking into the console every time to see what happened. This is why most people recommend leaning more on shell scripts, not less, getting those working separately with a faster execute/fix loop and then using jenkins glue as the thinnest coordination/execution piece possible. Bhodi fucked around with this message at 15:14 on Mar 18, 2024 |
# ? Mar 18, 2024 15:12 |
|
my homie dhall posted:it’s always funny to me when people say they are trying to solve the problem of groovy’s terrible syntax by introducing another flavor of turing complete yaml The most damning criticism of Groovy is when the creator said quote:I can honestly say if someone had shown me the Programming in Scala book by Martin Odersky, Lex Spoon & Bill Venners back in 2003 I'd probably have never created Groovy.
|
# ? Mar 18, 2024 15:15 |
|
Bhodi posted:This is why most people recommend leaning more on shell scripts, not less, getting those working separately with a faster execute/fix loop and then using jenkins glue as the thinnest coordination/execution piece possible. Yeah, this is part of my point. Develop your scripts in whatever language (gently caress bash), test them independently, then when it comes time to put them in a pipeline you have reusable blocks that you can call out in steps.
|
# ? Mar 18, 2024 15:17 |
|
pre:make ci
|
# ? Mar 18, 2024 15:32 |
|
my homie dhall posted:the problem jenkins is solving for everyone is centralized job scheduling, orchestration, code reuse, testable pipelines/components, custom plugins, etc. I'm all for shooting down useless technology migrations, but Jenkins is quite bad at every one of these things except job scheduling. There's a few ways of obtusely handling code reuse if you have admin on the server too, but it's mostly a fight between the sandbox and people who nonstop hassle you to turn it off
|
# ? Mar 18, 2024 17:27 |
|
Agreed it's bad but I figure a lot of teams need to set up an automation platform for CICD and/or random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease, and Jenkins is just simple enough that 1 person can get it set up "enough" for a team within a short time. The better alternatives (a k8s cluster? Ansible Automation Platform?) are complex enough that they need an Ops team.
|
# ? Mar 18, 2024 17:34 |
|
minato posted:Agreed it's bad but I figure a lot of teams need to setup an automation platform for CICD and/or random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease, and Jenkins is just simple enough that 1 person can get it setup "enough" for a team within a short time. The better alternatives (a k8s cluster? Ansible Automation Platform?) are complex enough that they need an Ops team. Azure DevOps, GitLab, GitHub Actions, or CircleCI? All of those are super easy for a small team to set up
|
# ? Mar 18, 2024 17:37 |
|
The Fool posted:Azure Devops , GitLab, Github Actions, or CircleCI? quote:random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease We did a survey of the hundreds of Jenkins instances we discovered running within the company, and many of them were being used for non-CICD purposes.
|
# ? Mar 18, 2024 20:15 |
|
you say that but I have absolutely done plenty of bullshit task automation in azure devops and github actions
|
# ? Mar 18, 2024 20:18 |
|
This isn't entirely on topic for this thread but idk this feels like the right audience for the question. We're a SaaS company in what our CEO calls the "growth stage" that sells our product to other businesses. Our product sends out some emails to customers as part of its normal operation. We also utilize a number of SaaS products that send email on our behalf. I've got two different problems to solve here that are related insomuch as vendors seem to wrap them both up in a single product offering. We're at our limit for SPF lookups but need to add another vendor tool that will send mail as us. We also receive a deluge of DMARC aggregate reports that we all just filter to the trash, but the emails we send are important enough to our customers that we think it's time to buy some kind of tool to ingest these reports to be sure we're behaving properly. I'm kind of confused and overwhelmed trying to find vendors in this space. We're not running business2consumer mass email campaigns where we send out millions of emails to consumers. We're not an MSP managing an infinite number of domains for our customers. We're not some massive enterprise that needs some massive email security suite (I'm looking at you Proofpoint). I just want some console to process our DMARC reports and also something to manage our SPF records. Am I correct to be looking for a single vendor for this? Does anyone have advice for vendors in this space? I have come across this list of vendors providing DMARC/SPF solutions, but even with that list I'm a bit overwhelmed. Is SPF Flattening good enough for us? Should I look for a vendor that does SPF macros? Does it matter? FISHMANPET fucked around with this message at 16:50 on Mar 19, 2024 |
# ? Mar 18, 2024 20:57 |
|
FISHMANPET posted:
does include: not work for you, or is that what you mean by lookups?
|
# ? Mar 18, 2024 21:06 |
|
I've used Dmarcian before and it works well, I'm using PowerDMARC for my personal domains. If you're hitting SPF lookup limits you could go for SPF flattening, but in the long run you're better off splitting stuff off to their own subdomains.
|
# ? Mar 18, 2024 21:07 |
|
Yeah, with our existing include records we've hit the limit of 10 lookups, we tried to add another tool for marketing and failed validation because it brought us up to 14 lookups, so we removed that to get back to 10 and are figuring out how we move forward here.
|
# ? Mar 18, 2024 21:09 |
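The ceiling being hit here is RFC 7208's limit of 10 DNS-querying terms per evaluation: `include`, `a`, `mx`, `ptr`, `exists`, and the `redirect` modifier each cost a lookup, while `ip4`/`ip6`/`all` are free — which is why flattening includes into literal `ip4` ranges buys headroom. A rough counter as a sketch (note the real limit is recursive, so lookups inside each `include` target count too; this only checks the top-level record):

```python
def spf_lookups(record: str) -> int:
    """Count the DNS-lookup-costing terms in one SPF record string."""
    count = 0
    for term in record.split():
        mech = term.lstrip("+-~?")  # drop qualifiers, e.g. ~all -> all
        if mech in ("a", "mx", "ptr"):
            count += 1
        elif mech.startswith(("a:", "mx:", "ptr:", "include:",
                              "exists:", "redirect=")):
            count += 1
    return count

record = "v=spf1 include:_spf.google.com include:sendgrid.net ip4:203.0.113.0/24 ~all"
print(spf_lookups(record))  # 2 -- the ip4 block and ~all don't count
```

Splitting marketing and transactional mail onto their own subdomains, as suggested above, resets the budget to 10 per domain instead of 10 shared across everything.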
|
Vulture Culture posted:I'm all for shooting down useless technology migrations, but Jenkins is quite bad at every one of these things except job scheduling. There's a few ways of obtusely handling code reuse if you have admin on the server too, but it's mostly a fight between the sandbox and people who nonstop hassle you to turn it off I agree, but what platform is doing all of them better?
|
# ? Mar 19, 2024 01:57 |
|
minato posted:for CICD and/or random jobs that need (a) non-technical people to launch them and (b) people to look at output logs with relative ease, Jenkins is not what pops to mind when I think of "review logs with ease"
|
# ? Mar 19, 2024 02:35 |
|
everybody being unsure about why we're actually running that garbage absolutely tracks w/ my experience of jenkins as well TBH
|
# ? Mar 19, 2024 02:41 |
|
my homie dhall posted:I agree, but what platform is doing all of them better? I don't know that GitHub Actions or GitLab CI do "run thing at time" better, but gently caress, at least they have working access control.
|
# ? Mar 19, 2024 14:20 |
|
We’re migrating some k8s clusters, which historically have been split between individual dev teams, and would like to start building resources in the same cluster to save some money and operational overhead. This means NetworkPolicies, namespaced roleBindings, resource quotas/separate node pools, et cetera. My inclination is to require namespaces in the form <service-team>. I’m aware that partial matches are prohibited, so this is really just a logical organization thing so it’s obvious that “redis” is actually owned by “team foo”. We’re currently using the Azure CNI rather than cilium, which can potentially change. Any advice here? Mostly in terms of naming conventions. every deployment is tagged with service and team labels/annotations. The Iron Rose fucked around with this message at 18:39 on Mar 19, 2024 |
# ? Mar 19, 2024 18:36 |
|
The Iron Rose posted:We’re migrating some k8s clusters, which historically have been split between individual dev teams, and would like to start building resources in the same cluster to save some money and operational overhead. I don’t like to embed team information in namespace names. This should be metadata attached to the namespace by way of labels. Services can change hands, teams can rename themselves, reorgs can delete teams. It’s easier to update metadata than it is to move a service to a new namespace. In most cost tracking services the team label automatically gets applied to any objects within that namespace, so we don’t worry about annotations on the deployments or pods. We heavily use rbac-manager for managing access to namespaces. It is used to create rolebindings between namespaces and groups from our directory. Our directory structure sucks in that our IT people only offer team based groups, but this is better than a single flat group. To grant a team access is to add an annotation to the namespace. We don’t do much network segmentation so I can’t offer insight there.
|
# ? Mar 19, 2024 22:57 |
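The label-over-name approach, sketched with plain Kubernetes objects — the team name, group, and label values are made up, and rbac-manager would replace the hand-written RoleBinding with its own resource:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: redis                # named for the service, not the owner
  labels:
    team: foo                # ownership lives in metadata instead
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-foo-edit
  namespace: redis
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                 # built-in aggregated role
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: team-foo           # group from the directory
```

When the service changes hands, the fix is a label edit and a new RoleBinding rather than a namespace migration.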
|
George Wright posted:This should be metadata attached to the namespace by way of labels. This is the way. Anything you'd instinctively reach towards Redis or SQLite to manage rarely updated cluster or deployment data is almost always better done via metadata/annotations or metadata/labels depending on your use case. Annotations (not labels) in particular can hold long, nested, structured data if you want to get really wild/sloppy, but you can't select based on them to cull/tail logs etc
|
# ? Mar 19, 2024 23:38 |
|
If you are storing team info in metadata, what are your namespace naming conventions? If there isn't team1-redis and team2-redis how do you prevent collisions? For the record we are using team-app-env so we have webdev-homepage-dev, webdev-homepage-uat, finance-batch-dev etc. with each of these tied to an AD group for permissions. We include the environment in the name because we have 1 nonprod cluster.
|
# ? Mar 20, 2024 00:42 |
|
|
If the zookeeper app needs redis, there's a redis deployment in zookeeper-dev, zookeeper-staging, and zookeeper-prod namespaces (prod should be on a different cluster). If the platform team or the backend team owns zookeeper, that's fine, just update rbac for that user group. It would have to be a company-wide ultra-high-performance HA redis cluster to need its own namespace. Deployments are plenty enough organizational division in 85% of cases. In my namespaces you have front end, back end, redis, memcached, some kind of queue server all together. Most services are pretty low demand (max 100MB memory) in the lower environments so you just get your own dedicated redis and your dev environment closely mimics prod down to the config level. Cluster wide stuff like Prometheus and Loki lives in a shared metrics namespace. Edit: teams don't get their own namespace playgrounds to build weird poo poo that sucks up resources and causes problems. Only services! If team B wants a slack bot/service it gets its own CI/CD and namespace and grafana dashboard just like prod. Also empty quoting from the oldie proggy thread: gbut posted:lol Hadlock fucked around with this message at 01:38 on Mar 20, 2024 |
# ? Mar 20, 2024 01:35 |