Junkiebev
Jan 18, 2002


Feel the progress.

Spring Heeled Jack posted:

As a VMware shop I’m super interested in whatever their plans are for the integrated k8s product, whenever it actually surfaces.

But for now we use AKS and it’s been pretty good due to the above (ignoring everything else about actually running k8s).

However we’re getting to the point where we’d like some clusters on-prem, because that’s where our big boy DBs are and our devs need a playground for modernizing our old rear end LOB apps. I’ve started looking at Rancher’s offerings and they seem pretty turnkey once you get an infrastructure deployment pipeline going.

Rancher owns incredibly hard


Junkiebev
Jan 18, 2002


Feel the progress.

Blinkz0rz posted:

What's the current approach in terms of k8s and organizing it around applications: one giant cluster that houses everything or a bunch of smaller clusters focused around domains?

All my stuff goes in either the production or pre-production cluster, as appropriate per region, unless the service accounts have goofy-rear end RBAC requirements (AWX/dapr/etc). If you go down that road you need to become positively hitleresque with admission controllers (e.g. OPA Gatekeeper, with your policies published in a company-readable git repo), and it'd be reasonable to enforce repo access with something like Artifactory.

e: additionally, tainted node-pools for heavy compute teams are your friend, and you're going to want something like Kubecost for chargebacks/showbacks - learn to love the poo poo out of metadata
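on the workload side that looks roughly like the sketch below - pool/label names are made up, and it assumes the dedicated node pool was created with a matching taint and label:
code:
# hypothetical pod for a heavy-compute team; assumes the dedicated pool carries
# the taint team=heavycompute:NoSchedule and the label team=heavycompute
apiVersion: v1
kind: Pod
metadata:
  name: batch-cruncher
  labels:
    cost-center: heavy-compute-team   # metadata kubecost can roll up for chargebacks
spec:
  nodeSelector:
    team: heavycompute                # only land on the dedicated pool
  tolerations:
    - key: team
      operator: Equal
      value: heavycompute
      effect: NoSchedule              # tolerate the pool's taint
  containers:
    - name: cruncher
      image: registry.example.com/cruncher:1.0
      resources:
        requests:
          cpu: "4"
          memory: 8Gi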

Junkiebev fucked around with this message at 03:43 on Oct 12, 2021

Junkiebev
Jan 18, 2002


Feel the progress.

What’s the Goon Take on Nomad? Seems dumb when we have 3 dozen K8s clusters to choose from, but New Dev Team insists it’s “Better” in ways they can’t articulate.

They’ve never used it, or containers, before

Junkiebev
Jan 18, 2002


Feel the progress.

Warbird posted:

Are there any hot tips on integrating Jenkins with AD? We’ve been bickering with our client’s AD team to help sort out what’s not playing nice, but they’re being stubborn about it. Everything seems fine, but whenever the app tries to connect, the connection just gets reset.

For auth? Are you using LDAPS? Is the certificate valid?

Junkiebev
Jan 18, 2002


Feel the progress.

Where do you see Traefik requiring cluster admin? I'm guessing you are following an implementation guide of some sort, but the RBAC in the helm chart looks fairly non-threatening...

https://github.com/traefik/traefik-helm-chart/blob/master/traefik/templates/rbac/clusterrole.yaml

That said, unless you are doing something particularly nifty, probably just use nginx as an ingress-controller
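the boring version covers 90% of cases - a minimal sketch assuming ingress-nginx is installed (host/service names are made up):
code:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx        # handled by the ingress-nginx controller
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp      # existing Service in the same namespace
                port:
                  number: 80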

Junkiebev fucked around with this message at 20:37 on Feb 25, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

we are running Kubernetes on containerd on Windows Server 2022, authing via gMSA, in production :getin:
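for the curious, the pod side of the gMSA bit is pretty tame once the GMSACredentialSpec CRD and its admission webhook are installed - a rough sketch with made-up names:
code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-iis-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-iis-app
  template:
    metadata:
      labels:
        app: legacy-iis-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      securityContext:
        windowsOptions:
          gmsaCredentialSpecName: my-gmsa-credspec   # hypothetical GMSACredentialSpec resource
      containers:
        - name: app
          image: registry.example.com/legacy-iis-app:ltsc2022   # placeholder Windows image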

Junkiebev fucked around with this message at 23:59 on Mar 4, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

LochNessMonster posted:

Just curious, how do windows licenses work for containers?

It’s quite murky! The AD-joined host talks to the volume licensing servers, but as for the pods? :iiam:

Junkiebev
Jan 18, 2002


Feel the progress.

Is there a tool for beautifying Terraform HCL? I’m inheriting a dog’s breakfast with inconsistent *everything* and would prefer not to have to rewrite a bunch of it just to make it legible

Junkiebev
Jan 18, 2002


Feel the progress.

Walked posted:

hclfmt is around: https://github.com/fatih/hclfmt

however, the generally-used version of the tool (above) has a major bug with consecutive commented lines and has also been abandoned (but mostly works fine)

I found, buried in one of the Hashi repos, that they seem to have either rewritten or forked the tool above (tbh I didn't look closely), but you have to compile it yourself:
https://github.com/hashicorp/hcl/tree/main/cmd/hclfmt

this version fixes the issues I had

Thanks!

Junkiebev
Jan 18, 2002


Feel the progress.

Blinkz0rz posted:

What I'd love to have is a way to mash different docker compose stacks together with shared dependencies but I don't think that's possible.

We end up using kustomize base+overlays for loads of stuff where helm charts are too much of a pain in the rear end for the value they provide.
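for reference, the base+overlays shape is roughly this (paths/names made up) - base holds the shared manifests, each overlay just patches what differs:
code:
# layout:
#   base/kustomization.yaml          <- shared Deployment/Service/ConfigMap
#   overlays/dev/kustomization.yaml
#   overlays/prod/kustomization.yaml
#
# overlays/dev/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: myapp-dev
patches:
  - target:
      kind: Deployment
      name: myapp
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 1
then kubectl kustomize overlays/dev (or kustomize build overlays/dev) renders the whole thing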

Junkiebev
Jan 18, 2002


Feel the progress.

quarantinethepast posted:

I've got what should be a basic question that I can't figure out.

I've got an EC2 node added as a Jenkins agent on my company's master, and I can confirm that I can reach the master from the EC2 node on ports 443 and 50000:
code:
[ec2-user@ip-my-ip ~]$ telnet master-2.jenkins.[mycompany].com 50000
...
Connected to master-2.jenkins.[mycompany].com.
Escape character is '^]'
However, when I try to start the Jenkins agent like so:

code:
java -jar agent.jar -jnlpUrl https://master-2.jenkins.[mycompany].com/computer/[agent-name]/jenkins-agent.jnlp -secret @secret_file -workDir "/home/ec2-user/jenkins"
I get the error:
code:
java.io.IOException: https://master-2.jenkins.[mycompany].com/ provided port:50000 is not reachable
        at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:311)
        at hudson.remoting.Engine.innerRun(Engine.java:724)
        at hudson.remoting.Engine.run(Engine.java:540)
How can it be that port 50000 is open on the master but JNLP still considers it unreachable? It seems like a total contradiction, I know.

“It’s always DNS”*

Is that URL resolvable on your EC2 instance?

*unless you have a next-gen firewall in between them that allows connections but blocks based on protocol**

**unless it’s in k8s: then it’s RBAC

Junkiebev
Jan 18, 2002


Feel the progress.

Anyone using https://buildpacks.io/ for templated-building? Is that a Cool Way To Be?

Junkiebev
Jan 18, 2002


Feel the progress.

Lady Radia posted:

it's super frustrating that k8s lives up to the hype for the most part lol, i wish it were worse to work with and that rancher just didnt work half the time so i could argue against it.

Yea both k8s and rancher are Dope

Junkiebev
Jan 18, 2002


Feel the progress.

duck monster posted:

This is fun. A deploy script that uses an IMAGEVERSION var in .deploy to drive a few things in k8s

code:
source .deploy
export API_IMAGE=registry.digitalocean.com/<stuff goes here>:$IMAGEVERSION
doctl registry login
docker build -t <stuff here>:$IMAGEVERSION .
docker tag <stuff here>:$IMAGEVERSION $API_IMAGE
docker push $API_IMAGE
pushd <k8s dir>
./yq e -i '.spec.template.spec.containers[0].image = strenv(API_IMAGE)' api/api-deployment.yaml         # Update the k8s yaml
kubectl replace -f api/api-deployment.yaml
popd
There's a bit more to it that'd get me in trouble to reveal, but that's a nifty little script we put on a git hook for a deploy branch and magic! Instant deployments. Just update IMAGEVERSION in .deploy in the git repo, push to the deploy branch, and you're good to go.

Next step is to get some CI in on it (probably Jenkins) to run the tests and make sure we're not pushing hot garbage. I think. If the boss will let me.

you should use kustomize for this imho - it's k8s native and base+overlays is slick and easy to understand

it spits out the full set of k8s manifests and you can publish that as a release artifact
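the yq-and-kubectl-replace step collapses into an images transformer, roughly like this (untested; image/file names are made up):
code:
# kustomization.yaml sitting next to api/api-deployment.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - api/api-deployment.yaml
images:
  - name: registry.digitalocean.com/myreg/api   # image name exactly as written in the manifest
    newTag: "1.2.3"                              # or set from CI with kustomize edit set image
then the deploy hook becomes kustomize edit set image registry.digitalocean.com/myreg/api:$IMAGEVERSION followed by kustomize build . | kubectl apply -f -, and the rendered manifests are the artifact you publish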

Junkiebev fucked around with this message at 21:38 on Jun 17, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

it’s fun explaining promise-theory to BC/DR staff and I get to do it 2 times a week

Junkiebev
Jan 18, 2002


Feel the progress.

is Microsoft going to start hard-charging into GitHub and let AzDO wither on the vine? I'd venture that they are!

Junkiebev
Jan 18, 2002


Feel the progress.

Methanar posted:

I have just spent the last 90 minutes conclusively proving that something should not work. And yet it does.

thread title

Junkiebev
Jan 18, 2002


Feel the progress.

it looks like Docker's ONBUILD instruction is falling out of fashion (because it's not OCI-compatible?) - does anyone know what new thing is replacing it, functionally? It was handy as a one-liner to reference a build image for various frameworks

Junkiebev
Jan 18, 2002


Feel the progress.

chutwig posted:

I would suggest using k3s so that the kubelet deals with talking to the container runtime and you deal with the relatively standardized Kubernetes API. Dealing with podman/containerd directly is a pain in the rear end.

this + use kustomize so you don't have to write out entire-rear end manifests

Junkiebev
Jan 18, 2002


Feel the progress.

I can't decide if something is a crazy anti-pattern for Terraform.

I have a bunch of vCenters (some linked, but links don't propagate tag categories or values).
I have a tag category (department number - single cardinality) and tags (the actual department number values) I'd like to put on them in a uniform way so they can be applied to VMs and such.
What I'm thinking is:
- JSON with the vCenter URIs, available via REST call
- hard-code the tag categories in the TF module
- JSON with the tag values, available via REST call
- tagging done in a Terraform module with a provider populated by variables provided in main.tf
- for-each the vCenters, run the module
- within the module, for-each the tags and create them

Is this madness because it's not super declarative, or shrewd? I'm sure I'd end up using dynamic blocks, but you can't initialize or reference different providers within a dynamic block afaik.

Junkiebev fucked around with this message at 04:09 on Sep 10, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

12 rats tied together posted:

I would hope that your resources accept lists of tags,

nope

12 rats tied together posted:

The VMware API is, as I recall, complete dogshit

yep

Junkiebev
Jan 18, 2002


Feel the progress.

New Yorp New Yorp posted:

Honestly I don't get what problem you're trying to solve. What's the thing that's preventing you from just having a set of tags defined that are applied to all of the resources that need tags? Is this some AWS thing I'm missing because I don't use AWS?

In order to assign a tag to a resource in vSphere, the tag category [key] and tag value [value] must pre-exist, and be eligible for assignment to that "type" of resource

I would like to create a tag category called "Department", with a cardinality of 1

I would like to create possible values from a list (of 300 or so) so that values exist uniformly across several vCenters.

I'm not trying to assign tags to anything - I'm trying to create them identically, so that they are able to be used, in several vCenters.

Junkiebev fucked around with this message at 04:31 on Sep 10, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

12 rats tied together posted:

Ah, got it, and the way that you "assign" a tag to "a vCenter" is to create it under a particular provider, where the provider has your admin access to that vCenter baked in?

Well, that's the kicker - the provider has the vCenter address as a property, so I'd need to instantiate the provider within the module I'd be calling, in either a dynamic block or a for_each, which makes it a bit dicey if a vCenter is removed at a later date (which doesn't happen often, but does happen)

Junkiebev
Jan 18, 2002


Feel the progress.

honestly I could solve this entire god drat problem w/ a PowerShell script, but then some jerk would need to own it and that jerk would be me (thread title)

Junkiebev
Jan 18, 2002


Feel the progress.

Wizard of the Deep posted:

Comedy option: Route 53 DNS TXT entry with a TTL of 60 seconds.

(I don't have anything helpful. My brain just conjured up that mess and I thought I shouldn't be the only one to suffer)

add it to a FROM scratch docker image :unsmigghh:

Junkiebev fucked around with this message at 21:31 on Sep 21, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

MightyBigMinus posted:

just put it behind a CDN and use the purge function when it changes

it will cost you dollars in bandwidth. whole dollars!

this is the real answer fyi

Junkiebev
Jan 18, 2002


Feel the progress.

Warbird posted:


Isn't that what Packer is all about?


Packer boots an ISO in an infrastructure provider, does stuff to it - generally including "generalizing" it (sysprep, etc.) - and shits out a templatized image into the media library of the provider you chose. The thing you're trying to do is a perfect use case, assuming you're going to do it repeatedly.

Junkiebev
Jan 18, 2002


Feel the progress.

Methanar posted:

It's 12:38, past midnight.

You're a bit bummed out.
You lay down in bed, it's not that comfortable and your tinnitus is flaring up.
You close your eyes and wait for the next work day.
Your phone chimes.
"Maybe it's a bumble match"
It's not.

"I'm sure it's everything is fine."
code:
{
timestamp: 1665392458894,
status: 999,
error: "None",
message: "No message available"
}
it's 3:05 AM
Spinnaker was very not fine.
I am very not fine.

#HugOps

Junkiebev
Jan 18, 2002


Feel the progress.

Junkiebev posted:

I can't decide if something is a crazy anti-pattern for Terraform.

I have a bunch of vCenters (some linked, but links don't propagate tag categories or values).
I have a tag category (department number - single cardinality) and tags (the actual department number values) I'd like to put on them in a uniform way so they can be applied to VMs and such.
What I'm thinking is:
- JSON with the vCenter URIs, available via REST call
- hard-code the tag categories in the TF module
- JSON with the tag values, available via REST call
- tagging done in a Terraform module with a provider populated by variables provided in main.tf
- for-each the vCenters, run the module
- within the module, for-each the tags and create them

Is this madness because it's not super declarative, or shrewd? I'm sure I'd end up using dynamic blocks, but you can't initialize or reference different providers within a dynamic block afaik.

i came up with a remarkably cursed solution for this: a shell script that parses the JSON and dynamically builds providers.tf and main.tf in the module that does the tagging, plus a thrice-damned dynamic map iteration, which makes me nauseous but also :smug:

poo poo works, ship it

code:
/*
    ___           __     __________ __        __      _       __ 
   /   |         / /__  / __/ __/ //_/       / /___  (_)___  / /_
  / /| |    __  / / _ \/ /_/ /_/ ,<     __  / / __ \/ / __ \/ __/
 / ___ |   / /_/ /  __/ __/ __/ /| |   / /_/ / /_/ / / / / / /_  
/_/  |_|   \____/\___/_/ /_/ /_/ |_|   \____/\____/_/_/ /_/\__/  
                                                                                                                                                                                        
*/

Junkiebev fucked around with this message at 17:48 on Oct 10, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

LochNessMonster posted:

Is there a good way to start Azure DevOps pipelines in batches? I'm trying to find a way to trigger over 1k downstream pipelines after my initial pipeline runs successfully.

I'm not sure our Azure DevOps infra will like it if I start them all at once, as we share the build agents company-wide. On busy days we're already running into some limitations where we see 30+ minute queues. Bad scaling/sizing on their part, I know, but I don't want to make the problem worse. The plan is to start this process outside of business hours to minimize impact on the rest of the organization, but you just know there's going to be one day when somebody can't deploy a hotfix for a prio 1 incident because there's a 4-hour queue for the build agents.

The main pipeline creates a feature branch and updates a config file with versions for each downstream repo, which builds on commit. The downstream repos are managed in a config file in the main repo, so it's iterable. The only thing I've come up with so far is externalizing the downstream repo updates so they can be done in batches. I was hoping I'm missing something and there's an easier way.

We've used Scale-Set Build Agents to great effect

Junkiebev
Jan 18, 2002


Feel the progress.

man, i am feeling a bit burnt out lately - i just got a ticket complaining that a build pipeline which used to take 90 seconds took 110 seconds *once*

npm is involved - the gently caress do you want, guy? i don't control The Internet

Junkiebev fucked around with this message at 06:57 on Nov 15, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

we've made life too easy for these assholes

Junkiebev
Jan 18, 2002


Feel the progress.

Twerk from Home posted:

When first putting CPU limits in place, be aware that it can wreck your latency if your application has more threads than its CPU allocation, which almost everything on the JVM or CLR will.

https://danluu.com/cgroup-throttling/

I do think that in the big picture we are heading for some type of CPU pinning, but most of the orchestration platforms don't do core pinning comfortably yet.

They need to make an nproc equivalent for k8s

E: can you pull limits/requests from the downward api?

Junkiebev fucked around with this message at 08:49 on Dec 2, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

luminalflux posted:

Yes. We do this to pass down limits and requests set on the application container in the pod to our Ansible init container, along with pod labels and annotations. Since Ansible is rendering configuration based on the number of CPUs and amount of memory as an init container, we couldn't use the automaxprocs-style parsing.

(later on we'll get rid of the Ansible init container, but for now this keeps configuration similar between our EC2 deploys and Ansible deploys while migrating)
code:
  spec:
      volumes:
        - name: podinfo
          downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels
              - path: "annotations"
                fieldRef:
                  fieldPath: metadata.annotations
              - path: cpu_requests
                resourceFieldRef:
                  resource: requests.cpu
                  containerName: application
              - path: cpu_limits
                resourceFieldRef:
                  resource: limits.cpu
                  containerName: application
      initContainers:
        - name: ansible
          volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo

noice

Junkiebev
Jan 18, 2002


Feel the progress.

The NPC posted:

Thanks for the links and advice everyone. Looks like our use case (charge back on a shared cluster) is one of the few reasons to set cpu limits.

we get around this at my company with node pools - common pool? lol QOS. dedicated compute? you can only sit on your own balls, but it costs more.

Junkiebev
Jan 18, 2002


Feel the progress.

MightyBigMinus posted:

sure but latency is a function of distance so none of the rube goldberg poo poo is going to matter

in a world other than this, simply stating this might matter

Junkiebev
Jan 18, 2002


Feel the progress.

"how can we cut the latency between London and SGX in half?"
"errr - Plate Techtonics?"

Junkiebev
Jan 18, 2002


Feel the progress.

Sylink posted:

in kubernetes, let's say you're doing a rolling update on pods whose resource requests limit how many pods fit per node, but you need to get around that to update the pods. how do you do it?

I'm in a situation where, on a rolling update, some of the new pods are stuck pending because the scheduler sees the existing pods eating the resources, when obviously in the final state they'd replace those pods.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable

.spec.strategy.type: RollingUpdate

.spec.strategy.rollingUpdate.maxUnavailable: 1

I'm assuming your replicas are <4, because the default is 25% and the absolute number is calculated from percentage, rounding down.

k8s is weird
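concretely, the stanza looks something like this (names made up; if the nodes are genuinely full you may also want maxSurge: 0 so a slot is freed before the replacement gets scheduled):
code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # kill one old pod first, freeing its requests
      maxSurge: 0          # don't try to schedule an extra pod onto an already-full pool
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:2.0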

Junkiebev fucked around with this message at 05:49 on Dec 21, 2022

Junkiebev
Jan 18, 2002


Feel the progress.

depending on your workload, you might want to ask your doctor if StatefulSets are right for you!
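a bare-bones sketch of what you get (names made up): stable pod identities (db-0, db-1, ...) plus a PVC per replica via volumeClaimTemplates:
code:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service providing stable per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: registry.example.com/db:1.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi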


Junkiebev
Jan 18, 2002


Feel the progress.


this word does not exist in the Quran, so I deny it!
