FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I work for what's basically an MSP providing IaaS to our (internal) customers, so devops is kind of tricky for us: there's no dev, only ops. Nonetheless we've automated a ton of stuff with PowerShell and Azure Automation runbooks, and have written and open-sourced a number of PowerShell modules that we use all over our code. We ended up hacking together the barest form of a build "pipeline" before any of us knew what that even meant. Everything is in GitHub (internal code in our on-prem GitHub Enterprise, open-sourced code in public GitHub), each repo is hooked into an Azure runbook via webhook, and whenever something gets updated it gets deployed: either copied to a set of servers to run, or copied automatically into the Azure runbook.

Well, I've been aware of CI for a little while now and finally found enough info about building a pipeline for PowerShell modules. I spent all day Friday on it and now have one of our modules plugged into both AppVeyor and Azure Pipelines (I used a tutorial written for AppVeyor, then found Azure Pipelines and got that working as well). It doesn't deploy yet, but it does run some pretty basic Pester tests to ensure the code is valid PowerShell and that it passes PSScriptAnalyzer checks. So now I'm one of you, I guess.


FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Pile Of Garbage posted:

Nice, sounds pretty cool. Where are the tests being executed, inside containers or some serverless dealio?

I... don't know? They execute in whatever Appveyor or Azure Pipelines runs in.

This is the module I've been working with: https://github.com/umn-microsoft-automation/UMN-SCOM/tree/build-pipeline
My YAML files for AppVeyor and Azure are essentially the same: they specify a Windows Server image and then call build.ps1 in the Build directory, which calls psake.ps1, which runs the tests.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
OK, wondering if this workflow is possible in GitHub (or, I guess more specifically, if it's possible to enforce it with protected branches) and also if it's sane.

We've got a number of runbooks and scripts that do things like build new machines, delete machines, and all sorts of stuff like that. It can be difficult to test some of these with Pester (at least with our limited Pester skills), so changes may require manual testing. What I'd like to do is set up CI with protected branches so the workflow looks like this:
  • You make whatever changes you want in your branch; every commit to your branch calls the build pipeline, which runs some automated Pester tests
  • Create a protected "Test" branch that you can only open a pull request against if your branch builds successfully (CI automatically assigns a status to each commit, so I know this part can be done)
  • Next I'd like code that gets merged to Test to pick up a "pending" status and deploy to some kind of test environment. I think I can do this with a webhook that calls a runbook, which then reaches out with the GitHub API and sets the status to "pending" for "manual test" or whatever I want to call it.
  • Once manual tests are completed, whoever ran them does something that fires off another API call to set the status to "Success"
  • Protect the master branch such that it can only receive merges from Test, and only when the status is Success

Am I crazy? Am I sane? Am I an idiot?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Bhodi posted:

The first two bullet points work fine with GitHub; you're describing tags, and your test env is labeling all your commits to your feature branches with whether they passed your tests. You can absolutely restrict pull requests to only tags. Depending on the frequency of commits / size of your dev team you may not need the two-tiered approach that you laid out, and if you do, it's more commonly implemented as unit testing feature branches (your Test), then merging into dev if passing, and then periodically tagging dev branch commits for integration testing (this would be manual in your case; sometimes it's weekly or daily or whatever) as a prerequisite for merging a release into master. If it ends up failing, you just do an additional commit into dev from your feature branch and kick the test off again - it's not really necessary to track it back to the commit of the feature branch like you're suggesting.

The benefit of doing it this way is that you can test multiple feature commits at the same time on a periodic basis, it conveniently follows common business requirements like sprints and quarterly releases, and if you have REALLY long tests you can tune the auto testing to fit them instead of having them queue behind each other as devs frantically try to get their features in at 3 PM on a Friday before the end of the sprint.

We're way more on the ops side than the dev side, so we basically have zero formal software development process requirements. And generally the changes we're working on are small enough that only one person is working on them. We don't do "releases"; we just push code when we write it. And we've never used tags (should we be?).

I'm sure this isn't unique, but a lot of our code depends on a ton of other stuff, so integration testing requires basically a mirror of a lot of our environment, and each runbook needs something radically different. Testing our self-service server builds requires a separate form to accept submissions. Testing code that runs during our server build requires modifications to our server build process. Testing code that updates our inventory requires a bunch of test Google documents, etc etc. So it would be nice to have all those environments set up so that upon doing something (pushing to a specific test branch), the code gets deployed in whatever way is appropriate to test it; after that we can fill out the form and submit a server request, or build a server that will run the test code, or modify the test Google documents instead of the prod ones...

Maybe we're small enough that I'm overthinking it. Maybe I should just start setting up those test scenarios and set up our deploy automation to start doing deploys when it sees commits to branches other than master.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
You can use libgit2 or one of its language-specific bindings, which will give you more control over your commits, including the "who" of the commit. You may even be able to do that with the regular git tools.

The "who" of a commit is a property of the commit itself, not of the push to the remote, so you can create your commits as whoever you want, push them to the remote with a shared account, and they'll show up as committed by whoever you specified.

Fake edit: you said svn not git because it's 2002 I guess

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I haven't used Terraform or Jenkins specifically, but if you want a human to verify the plan output, could you put an approval step in your pipeline? A human looks at the plan, says "yes, this is good," and approves it, and the pipeline still does the automated deployment, rather than you doing the whole deployment manually.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Zorak of Michigan posted:

Re container chat, my org is still in its infancy in containerizing workloads. I've been advocating Kubernetes because, when I tinkered with Swarm, I couldn't imagine it scaling up to the number of different teams I would hope would eventually be using our container environment. Is there something easier to live with for an on-prem deployment than Kubernetes that can still support multiple siloed teams deploying to it?

Piggybacking on this and its answers: what about products that offer k8s on-prem, like OpenShift, or that product VMware just bought whose name escapes me, or, I don't know, other vendors?

We're a big public university, so there could legitimately be a lot of research applications that could use autoscaling and other features. But if we have it, we'd also get a lot of simpler "line of business" apps that may not need those capabilities; if they're there, though, they'll get used, and then for no real reason we'll be depending on them and stuck with tooling more complicated than we need.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Be careful, Terraform can't make subscriptions.

We do the same thing. We already had a pretty robust provisioning automation system built up around Google Forms, Google Sheets, and Azure Automation, that we were able to easily expand for cloud provisioning.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Are you using Azure DevOps for code storage as well, or just for pipelines, with your code somewhere else like GitHub? And to be more specific, are builds and tests failing on code that's supposed to be "production" ready? Because obviously you won't know for sure whether it builds and passes tests until you've run it through a pipeline that builds it and runs the tests, but you don't have to "commit to master," so to speak, to get that.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We did 3 days of PI Planning (our first time) last week over Zoom. We're infrastructure, so the overall "product" is poorly defined. One of the biggest actual theoretical advantages of SAFe is coordinating dependencies between teams, but nearly all our features are independent. I don't think sprints really work for infrastructure, where there are generally more external dependencies that can't be managed, and a lot of the work (even the planned work) is much more reactive and dependent on getting feedback from customers. We've also all been scrambled up out of our traditional domain-based teams (Linux, Windows Server, database) into a bunch of generic jack-of-all-trades teams with a little bit of everything.

Love SAFe!

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
One of our features is to migrate systems into a central management tool, and one of the user stories is to make any permissions adjustments if system owners want to see some data about their systems in the tool that they can't currently. I have no idea how much work it will take, or when I'll even be able to do it, because we're dependent on system owners actually migrating, and then it depends what their wants are and how hard they are to implement!

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
GitHub has an on-prem offering as well, GitHub Enterprise Server. The latest version even supports GitHub Actions, so you get some CI/CD built in (though we haven't upgraded to that version, so I don't have any experience with it yet).

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Are you sure you couldn't do that with filters, or setting up a Google Group?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

New Yorp New Yorp posted:

It's a fork. They've diverged quite a bit, but I don't know how applicable the divergence is to this particular case.

The build systems may have diverged but the build agents are identical. When you click through the Azure DevOps documentation about what's installed on the runners, it takes you to the GitHub Actions repo.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I've actually had pretty good success with Microsoft support, both on the Azure side and the non-Azure side. Maybe we're a big enough customer that we're getting some behind-the-scenes white glove treatment? It's been a while since I've done a non-Azure support case, but back then we'd very often have to bug our TAM to escalate the ticket out of Tier 1 hell; from that point on, though, things have been pretty good.

I've also had really fantastic support from GitHub for our GitHub Enterprise Server instance. Pretty much every issue I've ever had has been "known," and if they don't have an immediate fix, there's at least been a workaround available.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

luminalflux posted:

Most everything does OIDC now thankfully. I can't imagine having to build support for the eldritch horrors that SAML contains

*Laughs in higher Ed/Shibboleth*

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I'm in the market for a new job. I'm currently a "devops engineer" but with skills gaps that you could drive a truck through. I'm a pretty quick learner, we just have pretty backwards infrastructure so I don't do a lot of modern stuff. I'm ~12 years into my IT career, so not a "junior" by any stretch, though I don't think I'd hit the bullet points for many "senior" devops engineer positions. Anyone have tips on the kinds of things I should be focused on learning to help me get my foot in the door?

I've got lots of experience with Azure DevOps, some container experience, some Azure App Service experience, some other Azure services, and over a decade of more complex "sysadmin" experience. I started my career on Solaris and Linux, ended up deep in the Windows world, but am still pretty passable at Linux. I can pretty quickly learn just about anything technical thrown at me, I'm just not sure what I should be throwing at myself.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Based on some of the recent discussion here, is there much I should be looking at besides deploying to Kubernetes? Build some test clusters using AKS and/or EKS?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Man, I really wish I got to work with smart people like all of you and solve these kinds of interesting problems. I guess my frustration over the state of my org currently is one of the reasons I'm getting pushed out of it! We're nowhere near the organizational maturity to try any of this, and leadership is incredibly allergic to any actual improvements, so things keep getting worse and worse, and they all get more and more stubborn.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So I'm doing some Terraform for the first time. What I'm managing is Azure DevOps pipelines (there's a provider for that!). For each customer we onboard into this particular service, we set up a pipeline for them and grant some access to it. I've got a "proof of concept" in a single main.tf file, with 2 data sources and 4 resources needed to create a new pipeline. Right now all the pipelines would share the two data sources and 1 of the resources, and the other 3 resources are unique to each pipeline. So for each new pipeline I'd need 3 new resources.

I know I could just copy/paste, but I also know that's a really bad idea. I'm just not sure what the right approach actually is. My guess is a local module where the pipeline-specific resources are defined; then I define all my pipelines in variables.tf as a map and use for_each to iterate over them. Am I on the right track here? Is there another way to be doing this?
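Something like this is what I'm picturing, as a very rough sketch (the module path, variable names, and project data source are made up for illustration, not our actual config):

variable "pipelines" {
  # One entry per onboarded customer pipeline (illustrative shape)
  type = map(object({
    repo_id   = string
    yml_path  = string
    aad_group = string
  }))
}

data "azuredevops_project" "main" {
  name = "MyProject" # hypothetical project name
}

module "customer_pipeline" {
  source   = "./modules/customer-pipeline" # local module holding the 3 per-pipeline resources
  for_each = var.pipelines

  project_id = data.azuredevops_project.main.id # shared by every pipeline
  name       = each.key
  repo_id    = each.value.repo_id
  yml_path   = each.value.yml_path
  aad_group  = each.value.aad_group
}

Each module instance would then hold its own copy of the three per-pipeline resources, addressed as module.customer_pipeline["whatever"].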

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

12 rats tied together posted:

resource for_each with a map argument is best way to get started on this type of thing

This would make sense (to me) if I were just creating multiple copies of the same resource, but what I'm doing is creating multiple copies of a set of resources. Each thing comprises an azuredevops_build_definition, an azuredevops_group, and an azuredevops_build_definition_permissions, and they depend on each other's outputs. The permissions resource needs the ID of the group and the ID of the build definition, so I'm not sure how I could effectively pull that off in a for_each without grouping those resources into a single kind of entity, and (to my very limited knowledge) a module is the only way to do that.

It doesn't look like a for expression can define resources, so I'm not sure how else to group these other than a module.
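To make that concrete, each onboarded pipeline is a set roughly like this (simplified, with placeholder values, and the attribute names are approximate rather than exactly what I have):

data "azuredevops_project" "main" {
  name = "MyProject" # hypothetical
}

resource "azuredevops_group" "site" {
  origin_id = "00000000-0000-0000-0000-000000000000" # object ID of an existing Azure AD group (placeholder)
}

resource "azuredevops_build_definition" "site" {
  project_id = data.azuredevops_project.main.id
  name       = "customer-docs-site"

  repository {
    repo_type = "TfsGit"
    repo_id   = "11111111-1111-1111-1111-111111111111" # placeholder repo ID
    yml_path  = "azure-pipelines.yml"
  }
}

resource "azuredevops_build_definition_permissions" "site" {
  project_id          = data.azuredevops_project.main.id
  principal           = azuredevops_group.site.id            # needs the group's ID...
  build_definition_id = azuredevops_build_definition.site.id # ...and the definition's ID
  permissions = {
    ViewBuilds = "Allow" # permission key names approximate
  }
}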

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Ooooook, that makes sense. I'll give that a try.

Though, shouldn't it be each.value1.value and each.value2.value? Looks like you transposed the order of the values.

E: and I was planning on taking advantage of some "default" values in the module variables, but it looks like I can just move to conditional expressions in the actual resource block: if the value is set, use it, otherwise use this default value.
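For my own notes, the shape I'm going to try is roughly this (a sketch with placeholder values; resource arguments are approximate, not my actual code):

locals {
  pipelines = {
    "site-a" = { repo_id = "repo-guid-a", aad_group_id = "aad-guid-a", yml_path = "docs.yml" }
    "site-b" = { repo_id = "repo-guid-b", aad_group_id = "aad-guid-b", yml_path = null } # null -> use the default
  }
}

data "azuredevops_project" "main" {
  name = "MyProject" # hypothetical
}

resource "azuredevops_group" "pipeline" {
  for_each  = local.pipelines
  origin_id = each.value.aad_group_id # reference to an existing Azure AD group
}

resource "azuredevops_build_definition" "pipeline" {
  for_each   = local.pipelines
  project_id = data.azuredevops_project.main.id
  name       = each.key

  repository {
    repo_type = "TfsGit"
    repo_id   = each.value.repo_id
    # "if the value is set, use it, otherwise use this default"
    yml_path  = coalesce(each.value.yml_path, "azure-pipelines.yml")
  }
}

resource "azuredevops_build_definition_permissions" "pipeline" {
  for_each            = local.pipelines
  project_id          = data.azuredevops_project.main.id
  principal           = azuredevops_group.pipeline[each.key].id            # cross-reference by map key
  build_definition_id = azuredevops_build_definition.pipeline[each.key].id
  permissions = {
    ViewBuilds = "Allow" # permission key names approximate
  }
}

No module needed; the three resources stay lined up because they all iterate over the same map and index each other with each.key.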

FISHMANPET fucked around with this message at 20:15 on Feb 21, 2023

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Well, success. That was, all-in-all, much easier than I thought it would be. Each "pipeline" is defined via 8 values in a local map, and my actual resource definition is only about 60 lines of code.

I'll have to do a little work because of an edge case I just discovered with provider weirdness, but this is all pretty slick.

Is there a way to force Terraform to verify that its current stored state actually aligns with the state of the actual objects? I know it should be doing that, but because of the provider weirdness things got out of sync.

Basically, I used the same group for two different items and then removed one of the items. So it removed the group definition entirely, but unfortunately it doesn't know that, and in doing so it also removed the access permissions I set and a group membership I set. I'm going to work around this in a way that should prevent it from happening entirely, but I'm still kind of curious whether there's a way to force Terraform to sync its state.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Like I said, it's... provider weirdness.

You can create an azuredevops_group resource, which can either be a group internal to Azure DevOps or a reference to an Azure AD group. Each element in my local map had a reference to a particular Azure AD group. When I created everything, Terraform saw two distinct group resources, but in Azure DevOps it was only a single group. I imagine they had the same ID in Terraform, but it still treated them as two distinct objects. If I remove one of my items from the local map, Terraform sees that there's something in state that should no longer be there - the second reference to that group - so it deletes the group. As far as it knows, nothing has happened to the group reference in the first element, so it happily keeps the descriptor for that reference in the state for the first element, even though it no longer maps to anything that actually exists in Azure DevOps. And the provider is... not smart enough to realize that. So I guess it's really a provider bug that it's not syncing state the way Terraform should.
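The workaround I have in mind is basically to stop declaring the AAD-backed group once per map entry and instead key the group resources by the AAD group itself, so each underlying group only lives at one address in state. Something like this sketch (approximate):

locals {
  # Distinct AAD groups referenced by any pipeline, so each is created exactly once
  aad_group_ids = toset([for p in local.pipelines : p.aad_group_id])
}

resource "azuredevops_group" "aad" {
  for_each  = local.aad_group_ids
  origin_id = each.value
}

# The per-pipeline permissions then point at the shared group resource:
#   principal = azuredevops_group.aad[each.value.aad_group_id].id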

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
It is kind of a weird fit for terraform, though it might still be easier than other alternatives...

Basically I need an easy, repeatable way to create a pipeline definition with defined access control. The "offering" I'm building out is an internal service that builds and serves static documentation sites built with MkDocs. The building is done using a template in Azure DevOps that their repo connects to, and as long as they're using that template, the pipeline will automatically build and deploy their site when they make a commit. So whenever we onboard a new "site" I need to create a pipeline definition, create a "Group" inside Azure DevOps, and grant that group specific permissions on the pipeline definition so they can see their linting failures.

I could do it all with the API obviously, since this provider is just a wrapper around the API. I'm already deploying the assets in Azure that actually run the service (Azure App Service, Storage Account, etc.) via Terraform, so it wasn't a huge lift to try Terraform here as well. Not sure how I'll proceed at this point. I'm pretty sure I know how to work around the "provider weirdness" (which is just a subset of the general Terraform problem of storing state for the same object in multiple places) to prevent the issue I ran into. Or I could use Terraform in a "stateless" way and just use it to create new resources, instead of writing a script to do the same thing via the API.

And yeah, the tale of "it works perfectly except for this weird rough spot" is pretty much a tale as old as time in IT, you'll never get rid of all the rough spots, you just decide what rough spots are easiest for you to deal with and build in that direction.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
OK, another probably dumb Terraform question. Is there a way to reuse some settings definitions between resources of different types?

I'm setting up an Azure App Service app, and it has a concept of "slots". A slot is just another App Service app, but it has a connection back to the original "app," and in Terraform it's a different resource type. I'd like to keep them identical, especially because slots can swap around - it's possible for the main app to "swap" with one of the slots, and then that could throw state into a tizzy. I suppose I could just write expressions like "https_only = azurerm_linux_web_app.app.https_only" for every setting. But is there some way I can define a block outside of an individual resource and just "insert" it into my resource definitions?
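The closest thing I've come up with so far is pulling the shared values into locals and referencing them from both resources, something like this sketch (argument list trimmed way down, names hypothetical, and it assumes a resource group and App Service plan defined elsewhere):

locals {
  shared_app_settings = {
    "WEBSITE_RUN_FROM_PACKAGE" = "1"
  }
  https_only = true
}

resource "azurerm_linux_web_app" "app" {
  name                = "my-app"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  service_plan_id     = azurerm_service_plan.plan.id

  https_only   = local.https_only
  app_settings = local.shared_app_settings

  site_config {
    always_on = true
  }
}

resource "azurerm_linux_web_app_slot" "staging" {
  name           = "staging"
  app_service_id = azurerm_linux_web_app.app.id

  https_only   = local.https_only
  app_settings = local.shared_app_settings

  site_config {
    always_on = true
  }
}

That covers plain arguments like https_only and app_settings; for the nested site_config block itself there doesn't seem to be any way to drop in a whole predefined block, short of a dynamic block driven by a local.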

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I won't be managing the swapping of slots or the app settings via Terraform. I just don't want to get into a situation where I've applied different settings to my staging slot and the "production" slot, and then they get swapped, and now Terraform is all mad because the state's messed up.

Once I get this running, the odds of me touching it again are also very slim. I'm mostly going down this path because I burnt myself by not turning on "https_only" when I hand-created the app the first time (and it's also an opportunity to play around with Terraform).

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

The Fool posted:

this isn't necessarily true
it is the default behavior but both azure and terraform support moving slots between asps

one of the teams I support does this to manage down time when doing upgrades and pre-deploy sku changes

Are you able to actually manage this via Terraform? I think I can create a slot and specify a different App Service plan at creation time, but if I specify the App Service plan that the main app is running on, it seems to store that as null in the state. Then if you try to change the App Service plan (even if the value is the one it's actually running on), the plan will fail, because I think it tries to validate the old value of empty string and fails. I'm considering filing a bug about it, unless there's some aspect I'm missing.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
What if your hiring pipeline is empty because all your jobs are being camped by people who would have trouble with Duplo blocks? Which IaC tool makes sense there?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Fun fact, working for a 150 year old public research institution means that no matter how badly IT fails the "business" we can never truly fail and force any kind of reckoning or change.

Anyways I'm curious about the problems with Terraform and HCL. I suspect we'll always be fractured and independent enough that problems at scale will never ever really come up.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Feral Integral posted:

What's the latest poo poo people are using for CI these days? Last thing I worked on was jenkins and a bunch of plugins

I'm "still" using Azure DevOps. We don't have access to GitHub Actions, but even if we did, a lot of the stuff I'm doing in Azure DevOps doesn't seem to be possible in GitHub Actions. Microsoft isn't really saying much about what their preferred future is, considering they have two competing products (Azure DevOps Pipelines and GitHub Actions). In true Microsoft fashion, they're not going to truly "deprecate" anything when people are using it and, importantly, paying for it, but it does seem like a lot of their tutorials are more and more geared towards deploying the thing in GitHub Actions instead of Azure DevOps Pipelines. But they don't really seem to have feature parity either. Azure DevOps does feel much more "enterprisey" to me, which is beneficial for some of the stuff I'm doing with it.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I doubt anyone is excited about the source code management or project tracking features of Azure DevOps. I use DevOps Pipelines with GitHub Enterprise, which works quite well.

Are there any indications that they're working on getting more "enterprise" features into Actions? Some of the approval and resource protection features are pretty critical for some stuff I've built, and I just don't see any equivalents in Actions currently.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I haven't looked much, because my little app still isn't using enough space to hit the fixed allocation, but there are some options for auto-pruning old images.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
The impression I've gotten all along, without digging in very much, is that GitHub Actions is really about the "CI" of "CI/CD". It's great at running tests on your code, and building an artifact, and pushing that artifact into your artifact store. Which is great for open source software where "deploying" means publishing to a package manager. But the options for actually deploying your artifact on your infrastructure seem somewhat limited with Actions, and that's an area where Azure Pipelines has a big leg up. Maybe the real answer is that you should be using some entirely different tool for CD, but the fact is Azure Pipelines does CD, and going from Azure Pipelines -> GitHub Actions + some other tool feels like a step backward.

I could be way off base though, I've literally never touched GitHub Actions myself, just looked in from the outside.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Someone is building out a pipeline in our shared Azure DevOps org for building reference Windows images. The job consumes all our parallel runs for about 3 solid hours. They're running it during the day.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We "support" Windows Server 2016, 2019, and 2022, Core and Desktop experience. We build a new image from scratch each month with all the latest updates on it, I don't know exactly what the job is doing, this is a new process using Packer, but each OS version takes about an hour. And we have 2 consecutive jobs in our environment, so 3 hours.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
The limit is 1 free self-hosted parallel job for private projects.

Once the pipeline is built and working I don't care if it runs overnight, because nothing urgent is happening overnight where you're gonna block up the queue. This is just the "building and testing the pipeline" stage. There are about 8 different ways to handle this that don't gum up the whole org during the workday; I just have to convince them to try... any of them.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We could pay for more; it's $40/month per concurrent cloud job, and concurrent on-prem jobs are $15/month. Two has been fine for years - I think maybe 1-2% of the time we're actually using both jobs. So it came as quite a shock to suddenly see the whole thing stopped up like this.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Nobody's ready to admit it, but a large portion of our org's "cloud strategy" (which is effectively just the word "cloud" written on a cocktail napkin) is in jeopardy, because everything major we do lives in Oracle databases on-prem, and apparently they're very sensitive to latency. So it's not that queries run from cloud apps are slow; they literally fail. We could do some direct-connect stuff to get latency down, but without acknowledging the problem we'll never put in the effort to solve it. So we'll just be stuck in a holding pattern, waiting for the laws of physics to change so our move to the cloud can continue.


FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Docjowles posted:

This sounds like the cloud strategy should be "don't" but that's probably not a popular thing for a tech executive to say

mods???

We're a large public research university whose last CIO got an article written about him in the Wall Street Journal when we fired him. We appointed an interim who was, and I say this respectfully, a professional seat warmer. It was his job, when some high-level leader left, to just sit in the chair and keep things afloat until a true replacement could be found. He's been around forever, knows everybody, and is very good at that, so he was a natural fit for interim CIO. I suspect the number one requirement (though never explicitly stated) when hiring a new CIO was "keep us out of the Wall Street Journal," so we made the interim permanent, and his "don't rock the boat, stay the course" style is not really great when one of his senior directors comes in and convinces him to make a cloud push. So he's somehow simultaneously directing us to upend the entirety of our operations while also not actually disrupting anything, which works out... about as well as you'd expect.
