Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

New Yorp New Yorp posted:

What's the pattern for handling provider versioning within Terraform modules?

Let's say I have module A that is consumed by base. base says it needs provider version ~> 3.0. A says it needs provider version >= 3.0.0. They both have lock files that point to the same version, 3.5.

I upgrade base from 3.5 to 3.8. base immediately breaks because Module A is on 3.5. base can't simultaneously use 3.5 and 3.8. Is the trick to simply exclude lock files from version control for modules? That seems very wrong.

[edit] let's say that module A is being sourced out of a Git repo reference to a tag, not anything fancy like Terraform cloud.
I believe lock files are ignored for everything except the root module; only the version constraints are considered when evaluating provider versions against child modules, so the root module's lock file is what actually pins the version.
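
Rough analogy, if it helps (Python's ~= specifier behaves like Terraform's ~>, and the constraints/versions are the ones from your example). This is not how Terraform itself evaluates anything, just the intersect-the-constraints idea:

code:
# Rough analogy using Python's packaging library, NOT Terraform's own logic:
# constraints from the root module and every child module are intersected,
# and only the root module's .terraform.lock.hcl pins the final selection.
# Constraints/versions below are the hypothetical ones from the question.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

root_constraint = SpecifierSet("~=3.0")        # base: ~> 3.0   (>= 3.0, < 4.0)
module_a_constraint = SpecifierSet(">=3.0.0")  # module A: >= 3.0.0

combined = root_constraint & module_a_constraint

for candidate in ["3.5", "3.8", "4.0"]:
    verdict = "ok" if Version(candidate) in combined else "rejected"
    print(f"provider {candidate}: {verdict}")
# 3.5 and 3.8 both satisfy the combined constraints; which one you actually
# run with is decided solely by the root module's lock file.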

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
The problem with most of the managed services in the major clouds is that, aside from opaque repackagings of other services, most of what they give you isn't a fully baked product. They're building blocks with few opinions, which are great when you're trying to construct business processes and absolutely awful when you're trying to build humane user experiences for your developers. The most valuable products are still the things that could have come from a 2004-era Google paper: S3, DynamoDB, anything built intentionally to let you forget about scale. Once cross-cutting concerns like backup and data retention get involved, every other product somehow manages to make scaling harder just by existing.

This isn't a feather in the cap of on-prem in any way; it's more that cloud fails to add value more often than it actually adds it.

FISHMANPET posted:

Meanwhile the new place is a young tech company where I think the biggest benefit of the cloud early on was that they could start without a huge upfront investment. And now they're cloud native and also worldwide, but still with a relatively small compute footprint, and it just won't ever make sense for them to go physical.
You'd think that would be universal. The biggest footprint I ever managed (~15k physical servers, plus overflow into public cloud to the tune of another ~10k instances) was at a 30-person startup that started that way but ended up colocating a custom hardware platform in custom racks. We bombed due to top-line monetization problems, but the move bought us two years of runway.

Vulture Culture fucked around with this message at 14:29 on Jan 26, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Resdfru posted:

Not discounting what you're saying but wanted to point out that aws orgs and control tower can make 'do thing in a bunch of accounts' pretty quick and easy
Yeah, scaling across a ton of accounts is only a problem if you're doing it with mismatched automation tech. The bigger problem is visibility, and AWS still cares as little about this today as they did 10 years ago. (AWS Config could be really good at this, and the fact that it isn't is evidence that AWS wants your users to be unhappy.)

Micro accounts are containers for data planes, and if you're trying to manage containers using Puppet or something, you're obviously missing out on some of the benefits of that approach.

e: I'll give 12 rats that Amazon's underinvestment in Resource Access Manager is also an obnoxious complication

Vulture Culture fucked around with this message at 22:10 on Jan 27, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I made this for a slide deck so now I have to subject you to it also

[attached image]

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Fool posted:

the issue is that there are actually a million different ways to do this, from rolling a web app with an API shim, to config files in a repo, to forms being submitted to a webhook.
Has anyone here used Step Functions for this kind of stuff? It seems like it would be useful for serverless workflow automations, but I have yet to build anything real on top of them.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Applications have dependencies, and it's managing the entire stack of dependencies that makes this a hard problem, not making a single leaf route invoke the right thing.

In simpler times, most of those dependencies were configurations and libraries and code modules; now they're mostly living organisms, tardigrades floating in clouds.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
There's also a major education issue in the poorer and less medically serviced parts of the world, where you have this horrendous combination of live virus vaccines and people who don't get their full course of immunizations. So you end up with places like rural India where there are minor outbreaks due to vaccine-derived polio strains—something not rare in immunocompromised people but where it won't spread through communities—and it creates this perfect shitstorm.

What doesn't help our chances in the US is that we see dead diseases popping up in the same communities over and over, and they happen to be ones where large crowds of almost universally unvaccinated people get together for large ceremonies and observances. Though, I guess that's going to be common in the political religions too in a generation.

Vulture Culture fucked around with this message at 14:16 on Jan 31, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Fool posted:

I think there's room for profitable companies in this space, but Hashicorp (and others) keep making pants on head stupid decisions for investor story time. No-one wants to make a "reliable product or service that makes a little bit of money consistently"
HashiCorp once had more engineers working on Sentinel than we have engineers in our entire TechOps organization (and maybe they still do?), only for customers to still complain that they wanted Open Policy Agent support instead, so take that as you will when they hand you an arbitrarily large number based on a totally different licensing model than last year's quote

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

I'm not super happy with ArgoCD but I'm too far along the implementation path to back out and switch to flux because I need to get this delivered

ArgoCD is pretty good for what it does. But then to update the image tag of the container you need to.... Install a third party plugin that's v0.12 and loudly points out that it could change at any time?

Looks like there's a PR ready to merge but the guy who maintains the plugin has abandoned it and wants someone else to take over the plugin, but doesn't offer any way to contact them :cripes: also a bunch of proceduralists are adding red tape

Third, there's no first-class support for AWS ECR, gently caress me, guys come on. Ok fine I'll install a weird third party helm chart to get the ecr login secret, I guess. Now I have to create a local fork of this third party chart to support my CD system

I'm all for "do one thing, and do it well" but it doesn't seem like these functions need to be independent of the main helm chart; you've already broken ArgoCD into five+ services

Of interest, it looks like the guys who started ArgoCD gave up on it, literally forked argocd-image-updater and built a new CD system on top of it, Kargo (although they've since fully rewritten the image updater code). Kargo is too new for my tastes but I'm not loving this "band of merry helm charts" approach to building a functional CD system; flux would have been a very good choice at this point I think.
The documentation and UX for ArgoCD and Flux both paint a picture where ArgoCD is a lot more batteries-included than Flux is, and I was very surprised to find in practice that the opposite is true

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
e: nm, this thread already talked about the Weaveworks shutdown

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I mean, if races don't matter, the easiest option is to run some privileged DaemonSets that set your sysctls how you need them. If races do matter, you can still use this approach; you just need to set taints on your nodes and have the DaemonSets clear the taints when they're done with a first run.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Today I learned that all our AWS infra is built with Ansible. Which is obviously possible, because they've done it, but holy moly is it a mindfuck trying to actually figure out how anything is put together.
It's not easier with Terraform written by lots of different teams

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

How do you approach CloudFront distributions with IaC for k8s? I guess our front end gets compiled to a stack of files on S3 served via CloudFront

Spinning up the S3 bucket it points at is cake

I'm looking at doing an AWS ACK controller for both S3 and CloudFront (basically, AWS resources as k8s CRDs), but the way external-dns is wired is to look for ingress controllers and point DNS to a load balancer

https://github.com/aws-controllers-k8s

I could do this at the terraform level but I'm super loath to do that because I lose a lot of flexibility for my front end team to do copies of prod in dev without a lot of manual drudgery

The big downside to ACK is that documentation is extremely sparse besides the simple S3 example, or maybe I'm not looking hard enough
Are you doing some kind of micro-account segmentation for ephemeral environments? If not, why create separate buckets and distributions for each temporary deployment instead of prefixing assets with a Git SHA or whatever?
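
To be concrete about what I mean by prefixing, a rough boto3 sketch; the bucket name and build directory are placeholders, and pointing CloudFront or your router at the prefix is left out:

code:
# Hypothetical sketch: publish a front-end build under a commit-keyed prefix
# in one shared bucket, instead of a bucket + distribution per ephemeral
# environment. Bucket and build directory below are placeholders.
import mimetypes
import pathlib
import subprocess

import boto3

BUCKET = "my-frontend-assets"      # placeholder
BUILD_DIR = pathlib.Path("dist")   # placeholder: compiled front-end output

def publish_build() -> str:
    sha = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    s3 = boto3.client("s3")
    for path in BUILD_DIR.rglob("*"):
        if not path.is_file():
            continue
        key = f"{sha}/{path.relative_to(BUILD_DIR)}"
        content_type, _ = mimetypes.guess_type(path.name)
        s3.upload_file(
            str(path), BUCKET, key,
            ExtraArgs={"ContentType": content_type or "application/octet-stream"},
        )
    # Each ephemeral environment then serves from /<sha>/ behind the same
    # distribution (e.g. via an origin path or a tiny edge rewrite).
    return sha

if __name__ == "__main__":
    print("published under prefix:", publish_build())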

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
How are y'all working around AWS's design decision of RAM-shared resources not having any tags visible in other accounts?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Docjowles posted:

We went the route of many microsegmented accounts for better and worse. Whether an account is dev/stage/prod and who owns it and so on are known simply by some metadata on the account itself. So we have not found being militant about tagging to be useful at all. Instead we have many other problems :pseudo:

Really the only time lack of tags has come up is if we try to engage with a cloud vendor that expects there to be many, rigorously maintained tags and their software just can't handle any other asset tracking strategy.
Yeah this is really more for stuff like transit gateways and VPC Lattice service networks that are shared out to whole Organizations or OUs

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
The main security benefit of micro-segmentation is that you consistently get to use managed IAM policies, rather than rewriting them all into tag-based policies that fail open if you leave out a condition. It's up to you what kind of benefit that confers. For any migration to go smoothly you're probably going to find yourself adopting zero-trust (VPC Lattice or some other service mesh, OIDC federation out of all your managed platforms, etc.), at which point there's actually less benefit still on the table for all the remaining parts of the migration.

Put another way: micro-segmentation is a killer feature for pure-play AWS, but the more of this functionality you have rolled into high-level abstractions through an internal developer platform or portal, the less useful it's going to be to you.

I view it as pretty similar to containerization: there are some apps that adapt really well into a native container orchestration world, and some legacy/enterprise apps that are best left for now as fat containers that mix too many concerns. The goal is to minimize the number of changes moving each piece of your larger system into a new account/VPC environment, because doing the opposite usually goes poorly unless you have central ops doing all the lift-and-shift work.

Vulture Culture fucked around with this message at 17:48 on Feb 23, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

So I ran across a blog post the other day that had an interesting term, "reference architecture", specific to platform architecture/DevOps, and that's sent me deep down a philosophical rabbit hole. I've really been struggling to find/define "best practices" or "state of the art". I think it's loosely defined as "containers using GitOps and IaC".

Should reference architectures be opinionated, like Ruby on Rails? Or left wide open?
Should reference architectures be cross-cloud (AWS/GCP/Azure etc)?
Should reference architectures support all deployment types? CRUD, stream processing, LLM, etc?
Where does the definition of a reference architecture start and stop? Is it a helm chart, or terraform that deploys the cluster + bootstrappy charts to provide XYZ base functionality? Plus flux/Argo?
Are IAM/secrets management/password rotation part of the reference architecture?
How do you encode/validate best practices across all "layers" of the reference architecture?
Which DNS provider(s) would you support?
Is GHA/Jenkins/Spinnaker part of this? It's turtles all the way down; where do you draw the line?

I'm pretty close, I think, to publishing a generic "reference architecture" similar to what I've built at work that uses terraform, k8s, ArgoCD, GitHub actions, but it lacks ownership of IAM and doesn't have any automation of secrets management beyond basic kms access for one user per environment

Most of the "blogs" or medium.com articles I've seen are written by guys who are trying to build a reputation and seem like they just barely know what the hell they're doing, or the toy demo they're deploying only works in a vacuum and is not extensible and in general garbage and you spend an inordinate amount of time splicing a working answer into your existing IaC

Nobody here has pointed me at a reference architecture but I guess I'll take a stab at it one more time: does anyone have a favorite reference architecture they like, or have seen that's moderately kept up to date?

Open to any and all commentary on the subject up to and including "this is a stupid idea, nobody is only creating a crud app, a simplified reference architecture is a stupid idea it's barely better than the medium.com articles"
The key problem is that most cloud-based businesses are also bought into the idea that verbal recitation of Conway's Law will solve all of their problems, and there is no cloud architecture in existence resilient enough to withstand a vanity reorg

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It sounds like it's just NLP-enhanced search results, nothing more or less

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
There's basically no benefit. Containers make deployment faster, which you don't often want to do with a database until you reach a certain level of sharding. Deploying quickly, even to track patch releases, comes with its own drawbacks: restarting a database flushes the in-memory cache, and your performance probably suffers from cold starts.

Container tech has gotten a lot better in the last decade, but there's still overhead. The one place in your stack you might really need to max your vertical scale is probably the worst spot for it, especially if you're licensing your database tech by the CPU. It's not like this type of isolation confers many advantages on a system that isn't running multiple workloads in the first place.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

Anything that's everyone's responsibility quickly becomes in practice nobody's responsibility nor accountability
I view this through the lens of people who have to pre-divide chores with their spouses and if that's you, bless you, but there are absolutely other ways to do it

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
A ton of companies are just building Microservice Jenga at this point, and the next big-paying job is just going to be fixing that

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Docjowles posted:

:catstare: What the actual gently caress, did these guys walk out of a portal from 2005 lol. That's about the last time I encountered FTPing code to "the server" as a deployment strategy
Most academic HPC work is still running off of cluster-attached NFS volumes, as far as I know. There's career software engineers and there's everyone else

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Yeah, this is a really sensible approach.

A lot of service and platform designs fail to consider who is going to be making what kind of changes to the software at what pace, so we end up making precisely the wrong decisions. Core business logic gets broken into dozens and dozens of tiny independently-shipped parts that are substantially harder to change and test for people who are close to, but not on, the owning team. At the same time, we're continuing to build a lot of the central platforms that run the business as monoliths, when there's absolutely no benefit to doing so because the people making contributions are coming from everywhere, and nobody has any context about the system anyway.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Something else to remember about Conway's Law is that it's really mentally tempting to simplify that organizational design into a reporting-structure org chart, but the question of "who's doing the work?" goes far beyond the nominal conversations about ownership that people like to start from.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NihilCredo posted:

https://resume.ingy.net/

at some point it is time to admit you have a problem
Everything about this shouts "Perl developer" in the loudest voice you've ever heard


CPAN? More like CPEP :smug:

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

LochNessMonster posted:

Probably something like not having to deal with this.

code:
...
This is a lot of words to say "I didn't know about """triple quoting""" until today"

TBH, a big part of the problem with interpolating vs. not interpolating variables is that people don't pay attention to the point at which they've stopped providing variables to things and have started dynamically generating code, as though landing in this place is somehow the fault of variable interpolation. Don't do that. There's a hundred ways not to do that. Provide things that need interpolation as inputs to your shell script, or export them as environment variables. Have your Jenkinsfile be Jenkins code and have your shell scripts be shell scripts. I heartily recommend folks don't generate shell scripts and then get annoyed about how hard this obviously hard problem is.
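
Purely as an illustration of the principle (in Python rather than a Jenkinsfile, with a made-up script path and variable names): keep the script static and hand it data through argv or the environment.

code:
# Illustration of the principle: the shell script stays static and receives
# data via argv/env, rather than having values interpolated into a generated
# script string. Script path and variable names are made up.
import os
import subprocess

def run_deploy(image_tag: str, environment: str) -> None:
    env = os.environ.copy()
    env["IMAGE_TAG"] = image_tag      # the script reads $IMAGE_TAG
    env["DEPLOY_ENV"] = environment   # ...and $DEPLOY_ENV

    # Argument list, no shell=True: nothing here is re-parsed as code,
    # so quoting and escaping never become your problem.
    subprocess.run(
        ["./scripts/deploy.sh", "--tag", image_tag],
        env=env,
        check=True,
    )

if __name__ == "__main__":
    run_deploy(image_tag="v1.2.3", environment="staging")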

Vulture Culture fucked around with this message at 14:50 on Mar 18, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

my homie dhall posted:

the problem jenkins is solving for everyone is centralized job scheduling, orchestration, code reuse, testable pipelines/components, custom plugins, etc.

from what I know (very little), the yamlshit generally does not solve all of these problems or does not do so as well as jenkins does, and until it does I don’t care if I have to write php, golang, whatever in my pipelines, the language is just a means to those greater ends
I'm all for shooting down useless technology migrations, but Jenkins is quite bad at every one of these things except job scheduling. There's a few ways of obtusely handling code reuse if you have admin on the server too, but it's mostly a fight between the sandbox and people who nonstop hassle you to turn it off

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

my homie dhall posted:

I agree, but what platform is doing all of them better?
I guess what I mean is that all of these features are means to an end, and there's ways of getting them that don't rely on having totally insecure, unstable (or easily destabilized) core platforms. The Jenkins model of extensibility is like a Windows 98 kernel where you just keep shoving in driver after driver: no good will ever come of this.

I don't know that GitHub Actions or GitLab CI do "run thing at time" better, but gently caress, at least they have working access control.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

Deployments are plenty enough organizational division in 85% of cases
I agree with the rest of your post, but could you clarify this? K8s RBAC is a problem that leaks sewage all over use cases that rely on partial match.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

George Wright posted:

If you’re handling PII or you’ve got a reliable, well used, integrated, and supported PKI, then you should terminate at the pod. Otherwise it’s easier to terminate at the LB and let your cloud provider deal with certs.
Fun fact: the HIPAA Security Rule and many similar compliance regimes don't actually require encryption in transit for your private network. Terminating on-host is something you do to pass third-party audits.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

neosloth posted:

We tried to run istio and it caused a bunch of outages and upgrade headaches with no tangible benefit. It sounds cool tho
Most people shouldn't operate their own data planes period.

I'm again going to say that if your host offers VPC Lattice or something like it, and you aren't either using it or trying to get it adopted, it's out of stubbornness and not because you're looking out for your users

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
If you live in the US, continue all your open interview loops until you have a signed letter with a start date. In America, even a signed letter isn't a guarantee that you're going to work a single day on the job.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Iron Rose posted:

Architecture is critically important but it shouldn’t be considered a separate job or role, it should be something all engineers do as and when it’s required of them. Architecture without skin in the game leads to bad advice, no accountability, and worse relationships between teams. I do not believe it is a skillset you can divorce from engineering implementation.
Disagree. Single teams need great design and rarely need architecture at all. I can virtually guarantee that any given company, especially ones where performance management is taken seriously, has plenty of people (and possibly too many) worried about how to improve things at their team's scope. Architecture needs to be focused on how the company's systems interoperate to meet higher-level goals. If you need architects at all, you demonstrably and verifiably have a problem where 100% of the person's skin in the game is built upon doing the right thing for the company despite all the people fighting you to do otherwise. Match the scope to the role.

Architecture functions often work more like product functions than engineering ones: there's a hundred ways to do it, and until you get someone in the org who's showstopper good and sets the bar for everyone else, it's going to tread water.

Getting incentives right matters for an org that's actually moving in a coherent direction, but you should rely on emergence until you see someone succeed vibrantly. I've seen architects with local "skin in the game" flounder and fail to set their own local priorities effectively, and I've seen architects with great product, project, or program management skills really flourish despite having none. It's all circumstantial.

Vulture Culture fucked around with this message at 18:53 on Apr 2, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Iron Rose posted:

How do yall handle caching authentication tokens between multiple pods/processes/etc? Current practice is to just toss a 5min TTL JWT into the cluster-local Redis so the authentication service doesn't get swamped with requests. This cluster runs probably 30k pods every day that need a half dozen tokens from a half dozen services each, and we get hella rate limited by everything from MS' management plane to our own internal Keycloak auth endpoints if we don't leverage a shared token cache. Throwing the token in Redis doesn't feel especially secure, but it does sure reduce the number of 429s we get!
Typically, you'd build your broader distributed system so that you use your IdP, instead of circumventing it by using your JWT as a private-but-not-secret session identifier. Normally, you'd do it like this:

  • You have a single auth domain, and an application instance (pod, whatever) in this auth domain receives a single identity.
  • The issued token is a JWT or something similar signed by the IdP. By having it signed by a trusted broker, each application verifying the validity of the token just needs to verify the cryptographic signature and interpret the JWT claims. The token is never passed back to the auth service except for sensitive uses that need an explicit check for whether an auth token has been revoked.
  • The initial authentication exchange returns two tokens: a short-lived auth token, and a longer-lived refresh token. Given a refresh token, the IdP can reissue an auth token without consulting any external state or authority. This eliminates bottlenecks to horizontal scale.

In this configuration, you might hit a bottleneck if you're cold-starting your whole system, but in general, your IdP should be able to scale to basically a limitless number of token refreshes without much CPU effort beyond cryptographically signing the JWTs on refresh.
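
If it helps, a minimal sketch of the verification side with PyJWT, assuming RS256-signed tokens and a JWKS endpoint on the IdP; the URL, issuer, and audience are placeholders:

code:
# Minimal sketch of local token verification with PyJWT, assuming the IdP
# signs RS256 JWTs and publishes its keys at a JWKS endpoint. The URL,
# issuer, and audience are placeholders.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://idp.example.internal/.well-known/jwks.json"  # placeholder
ISSUER = "https://idp.example.internal"                          # placeholder
AUDIENCE = "my-service"                                          # placeholder

_jwks = PyJWKClient(JWKS_URL)  # fetches the IdP's published signing keys

def verify(token: str) -> dict:
    """Validate signature + standard claims locally; no call back to the IdP."""
    signing_key = _jwks.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=ISSUER,
        audience=AUDIENCE,
    )

# When verify() raises jwt.ExpiredSignatureError, the caller exchanges its
# longer-lived refresh token for a new access token instead of re-authenticating.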

Secrets management, where you're exchanging your application identity for a credential to a different system/application, is another ballgame.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

madmatt112 posted:

Does anyone have a link/resource about why Broadcom is such a widely vilified company? The only thing I know them for is like mobile chips or the like. Did they do something specific with a software company that I’m not aware of?
They basically treat their acquisitions the same way a private equity firm would: vulture capitalism.

https://www.crn.com/news/virtualization/2024/broadcom-tells-partner-negotiating-for-charity-vmware-is-not-for-everybody

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

Developers want better visibility into their build and deploy process (of course, for good reason); most of this happens either in GitHub Actions or ArgoCD

I've identified, I think, 13 distinct build and deploy tasks across the front and back end for our monolith, touching a bunch of services across multiple vendors (GitHub, AWS, Cloudflare, code analysis tools, etc)

How do you add visibility into this at your place? What kind of pattern?

I'm thinking of two Slack channels: one "short" channel with the high-level green or red light for the overall deploy, so maximum two kinds of alerts, and a "verbose" channel that includes all 13 Slack messages, each with a link to the query in the logging system where the problem can be better inspected
Treat each task as a span in a distributed trace and use your existing distributed tracing practice to monitor your deployment health

This is more or less how Datadog's CI/CD monitoring product handles it, anyway, just with a pretty bow on it
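
A hedged sketch of that with the OpenTelemetry Python SDK, assuming you already have an exporter/collector configured somewhere; the task names are made up:

code:
# Hypothetical sketch: one deploy = one trace, each of the ~13 build/deploy
# tasks = one child span. Assumes an OpenTelemetry exporter/collector is
# configured elsewhere; task names are made up.
from opentelemetry import trace

tracer = trace.get_tracer("deploy-pipeline")

def run_task(name: str) -> None:
    """Placeholder for actually invoking one build/deploy task."""
    ...

def deploy(git_sha: str) -> None:
    with tracer.start_as_current_span("deploy", attributes={"git.sha": git_sha}):
        for task in ("build-frontend", "push-image", "invalidate-cdn"):
            with tracer.start_as_current_span(task) as span:
                try:
                    run_task(task)
                except Exception as exc:
                    span.record_exception(exc)
                    span.set_status(trace.Status(trace.StatusCode.ERROR))
                    raise

# The "short" view is the status of the root span; the "verbose" view is the
# individual child spans, queryable in whatever tracing backend you already use.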

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
And it's official

https://www.hashicorp.com/blog/hashicorp-joins-ibm

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cyril Sneer posted:

Hiya, I'm setting up a little vanity web server on my own hardware (a lower-end mini PC). Separately from this I have my development machine. I'm trying to put together my own little cutesy ci/cd workflow but need some (a lot) of guidance. I'm comfortable with git but not much beyond that.

(1) One approach would be to simply install Python (using FastAPI + uvicorn here), git, and just run the code.

- This basically gets me all the way there, but I'd still be manually using git to pull any changes -- automating this would be nice!
- Keeping the two Python environments the same could require some manual tinkering.

(2) Install Docker and run it as a Docker image.

- This is interesting in that I don't have to install anything (other than Docker)
- The deployment process is even more vague to me. If I build the image on my dev machine, how do I get it onto the server?
- Part of the site uses a local database -- would this have to be in the image, or can it stay outside (i.e., can code running inside access outside, local files)?
- Performance. I'm a bit concerned how this might run on my low-spec hardware.

I'm on a bit of a learning journey here. Hope this isn't too simplistic a question!
Welcome to a cool journey!

Generally, you have somewhere to host your artifacts. This might be something like Docker Hub, GitHub, or GitLab's container registries. You build somewhere, you push to a central location. Then you have your deployment target initiate an image pull from that repository and launch the container. You can use something like Ansible to initiate that image pull and container creation over an SSH connection without needing to install any agents or other dependencies on the server.

You can run a database service in a container too, but you generally don't want this to be in the same container as your app. You want one container for each service, one container for your database, and you want to network them together. (It will be fast, like connecting to localhost over TCP, because in a single-host configuration your traffic will never leave the host.) And make sure you mount a data volume into your DB container, otherwise you'll be deleting all your data every time you restart the database!
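
Since you're already in Python land, here's a rough sketch of that flow with the Docker SDK for Python (docker-py); the image name, network, and volume are placeholders, and plain docker run commands or Ansible's docker modules get you to the same place:

code:
# Hypothetical sketch using the Docker SDK for Python (docker-py): pull the
# app image from a registry, then run it alongside a database container on a
# shared network with a persistent volume. Names, tags, and paths are placeholders.
import docker

client = docker.from_env()

APP_IMAGE = "registry.example.com/myuser/mysite:latest"  # placeholder

client.networks.create("mysite-net", driver="bridge")

db = client.containers.run(
    "postgres:16",
    name="mysite-db",
    network="mysite-net",
    environment={"POSTGRES_PASSWORD": "change-me"},  # placeholder secret
    # Persistent volume so restarting the DB doesn't wipe your data
    volumes={"mysite-dbdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    detach=True,
)

client.images.pull(APP_IMAGE)
app = client.containers.run(
    APP_IMAGE,
    name="mysite-app",
    network="mysite-net",
    environment={"DATABASE_URL": "postgresql://postgres:change-me@mysite-db/postgres"},
    ports={"8000/tcp": 8000},  # expose uvicorn's port on the host
    detach=True,
)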

Performance hit from Docker should be really negligible, if your host is already running Linux. If you're running Docker Desktop under Windows or macOS on the mini PC you're using as a deployment target, things get a lot more complicated for you.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Fool posted:

our terraform enterprise license agreement is up next year and management is freaking out about how ibm might make changes

unrelated, how is spacelift?
The product seems solid but the UX on it feels wobbly as gently caress in the same way as, like, ArgoCD. Definitely not as polished as TFC but it will get the job done

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

"Might"

They need to make $6 billion in something like ~3 years in fees to not call it a loss
I was trying to buy HPC software from Platform Computing when IBM acquired them. IIRC, despite being a life sciences nonprofit and having Janis Landry-Lane, actual head of IBM life sciences, on our account, the quote added an entire 0
