Methanar
Sep 26, 2013

by the sex ghost

Hadlock posted:

Hi this is your annual reminder to do a quick audit of all your S3 buckets and make sure they're not world read/writable

You should have a script that does this every 5 seconds tbh
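Something in that spirit, as a rough boto3 sketch (bucket policies and cross-account grants are left out, and it assumes credentials are already configured):

code:
import boto3
from botocore.exceptions import ClientError

# Rough audit sketch: flag buckets whose ACL grants access to AllUsers /
# AuthenticatedUsers, or that have no public access block configured.
# Bucket policies and cross-account grants are deliberately out of scope here.
s3 = boto3.client("s3")

PUBLIC_GRANTEES = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    grants = s3.get_bucket_acl(Bucket=name)["Grants"]
    acl_public = any(g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES for g in grants)
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())
    except ClientError:
        fully_blocked = False  # no public access block set on this bucket
    if acl_public or not fully_blocked:
        print(f"AUDIT: {name} may be publicly readable/writable")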


Votlook
Aug 20, 2005

Hadlock posted:

My company has institutionalized "git push --force" :shrug:

I've got management buy in to wage a holy war against this, so that's a plus

If you're having problems cloning a giant repo, try a shallow clone (git clone --depth 1)

I force push to my feature branches all the time, but always take care to disallow force pushing to master/main/develop.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Methanar posted:

You should have a script that does this every 5 seconds tbh

Pretty sure we had to audit for changes to this as part of our SOC2 work. Definitely remember PagerDuty alerts going off for S3 bucket and CloudTrail changes hooked up via SNS. This applies to all of our accounts, developer or not. This whole "prod" vs "non-prod" distinction is kinda irrelevant to me honestly because each account is an attack vector basically.
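The SNS hookup looks roughly like this as a boto3 sketch; the rule name and topic ARN are placeholders, and it assumes CloudTrail management events are already flowing to EventBridge:

code:
import json
import boto3

# Sketch of the alerting wiring: an EventBridge rule that matches
# CloudTrail-recorded S3 bucket changes and forwards them to an SNS topic
# (which PagerDuty can subscribe to). Names/ARNs below are placeholders.
events = boto3.client("events")

RULE_NAME = "s3-bucket-change-alerts"                              # hypothetical
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"   # placeholder

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": [
            "PutBucketAcl",
            "PutBucketPolicy",
            "DeleteBucketPolicy",
            "PutBucketPublicAccessBlock",
        ],
    },
}

events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "sns-target", "Arn": TOPIC_ARN}])
The SNS topic policy also has to allow events.amazonaws.com to publish to it.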

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

necrobobsledder posted:

This whole "prod" vs "non-prod" distinction is kinda irrelevant to me honestly because each account is an attack vector basically.
Yeah, these days the attackers just want to mine coins, so arguably it's the non-prod accounts that need more security oversight because Devs Do Dumb poo poo all the time that expose accounts. "I was setting up CI and forwarding the logs to a bucket, and then I wanted to see the logs so I made the bucket public, it's just harmless CI logs right? Whoops I added set -o xtrace to debug a CI script and now all our cloud API keys got logged, sooo-rrreeee"

12 rats tied together
Sep 7, 2006

it's possible to structure your accounts such that a dev account being compromised is 0-risk to production accounts, but it doesn't stop AWS from sending you a bunch of nasty emails about terminating all of your accounts anyway, and then you'll have to actually learn and use the organizations api instead of running random python scripts from bored AWS SAs

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
By all means separate accounts to minimize blast radius. The AWS practice of having a login account and role-hopping into others is certainly a bit vulnerable depending on the assume-role admittance criteria, but I'm mostly challenging the intuitive idea that dev accounts are less important to protect than production. That idea leads to the anti-pattern of pretending non-prod accounts can be set up like total garbage and pose little risk to the company, when developers oftentimes keep credentials for all their environments sitting in ~/.aws/ and many attacks require little dwell time now. It's why we regularly nuke our non-prod accounts and rebuild from scratch as well.

I'm not going to blindly advocate for the "test in prod, just stop deluding yourself that you really need a separate dev and prod account" trend, but it's good to occasionally revisit your assumptions and whether it matches your reality.

Docjowles
Apr 9, 2009

necrobobsledder posted:

I'm mostly challenging the intuitive idea that dev accounts are less important to protect than production. That idea leads to the anti-pattern of pretending non-prod accounts can be set up like total garbage and pose little risk to the company, when developers oftentimes keep credentials for all their environments sitting in ~/.aws/ and many attacks require little dwell time now. It's why we regularly nuke our non-prod accounts and rebuild from scratch as well.

Yeah, I've had this exact conversation lol. "Well our security tool charges us per account so please only deploy it to the high value production accounts." Uh, ok, but pretty sure if the "low value dev accounts" have Direct Connect back to our data centers and are capable of running buttcoin miners and so on, the people compromising them will have a different interpretation of whether they're high value targets than you seem to :thunk:

barkbell
Apr 14, 2006

woof
are there good resources for how to build a devops pipeline? i understand there are a lot of unique issues that each pipeline needs to address, but i'm just thinking like some best practices kinda stuff.

Hadlock
Nov 9, 2004

Pipeline for what stack? How long is a piece of string? A maven containerized stack on GKE is going to look worlds apart from deploying node.js to raw EC2 instances

barkbell
Apr 14, 2006

woof
Mine is scala, angular, psql

I meant more like: before having automatic deployments on commit, the team needs strong test coverage to have confidence things won't break, etc. Things like that to keep in mind, and figuring out what enables what.

Methanar
Sep 26, 2013

by the sex ghost

barkbell posted:

automatic deployments on commit

Don't do this. At least only do it on a real semver tagged release.

Hadlock
Nov 9, 2004

Methanar posted:

Don't do this. At least only do it on a real semver tagged release.

Care to expand on this? Do you recommend a manual QA team, release manager, compatibility matrix? Is this best for non-monorepo?

Methanar
Sep 26, 2013

by the sex ghost

Hadlock posted:

Care to expand on this? Do you recommend a manual QA team, release manager, compatibility matrix? Is this best for non-monorepo?

I'm not a release expert, but I can't imagine a world where yolo deploying every commit, passing tests or not, is a good idea.

How do you handle any sort of schema update? Or dependency management at all. You probably don't want to have your app try to call APIs from some upstream that aren't implemented yet and have somebody typing git push be the gate keeper of that. And if you're constantly deploying everything across your engineering org how do you ever know what's really live anywhere.

Also, how do multiple people working on a feature/branch work? How do you enforce that everybody involved is always properly rebased in exactly the right manner with no possibility of ambiguity or accidental regression breaking somebody else.

Methanar fucked around with this message at 19:05 on Oct 8, 2021

Hadlock
Nov 9, 2004

About half the people I've talked to deploy to production on merge to master. You need an eng org that's on top of their poo poo with strong policy enforcement and culture for it to work though. Front end teams are especially well suited for this

If you have PHI/SOX requirements there are some regulatory hurdles that make a more traditional release-manager-y semver more favorable

Schema changes can be scary but for a stable product, dangerous ones seem to clump together into once or twice a quarter events and, again, with good culture and strong policies these are well advertised and get vertical and horizontal buy-in before the merge happens

TL;DR make sure you don't have poo poo eng culture and/or management

The Fool
Oct 16, 2003


We release on semver tags, and semver tags are generated automatically from master on a nightly basis.
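For what it's worth, a nightly auto-tag job along those lines can be pretty small. Here's a hedged sketch, assuming vX.Y.Z tags and a checkout of master with push rights:

code:
import re
import subprocess

# Sketch of a nightly auto-semver job: find the latest vX.Y.Z tag reachable
# from the current HEAD (master), bump the patch number, and push the new tag
# only if there are new commits since the last tag. Tag format is an assumption.
def git(*args: str) -> str:
    return subprocess.run(("git",) + args, check=True,
                          capture_output=True, text=True).stdout.strip()

latest = git("describe", "--tags", "--abbrev=0", "--match", "v*")
major, minor, patch = map(int, re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", latest).groups())
new_tag = f"v{major}.{minor}.{patch + 1}"

if git("rev-list", f"{latest}..HEAD", "--count") != "0":
    git("tag", "-a", new_tag, "-m", f"nightly release {new_tag}")
    git("push", "origin", new_tag)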

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Continuous Deployment and multiple releases per day are a thing, but for any important workload they require a massive infrastructure of guardrails in case Something Goes Wrong, which can generally only be justified by companies with lots of $$$ to throw at the problem.

- does the system have monitoring and alerting?
- can it roll back easily? If not, are new features at least gated by config-driven feature-flags?
- can you pause the CD pipeline in case of extraordinary events?
- is there a progressive series of tests gating release? (e.g. lint -> unit -> integration -> smoke -> canary with shadow traffic -> canary with real traffic)
- is the release gate based on binary "it works / it fails" or statistics + thresholds "it works > x% of the time" / "metric X differs by < y% compared with the same time last week"? (see the sketch at the end of this post)
- if a build fails a release gate, is there a process between Dev & Ops to figure out why? It may not always be the developer's fault.
- does the code automatically handle different versions of other systems it interacts with (databases, APIs, etc)
- is the commit-to-master rate infrequent enough that the deploy rate is manageable? If it's super frequent, you will be batching features in a single deployment; do you have a way of isolating any problems found?
- Can you track bugs found in prod to a specific deployment? Your logging will probably need to log which version of the software is running at any given time. Knowing what version of a dependency was running is also critical, so mono-repos & vendored dependencies can help here.
- Is your latency between commit-time and deploy-time very long? Long build + deploy causes issues when debugging issues and deploying fixes.
- Are there legal/audit (SOX) issues your pipeline needs to comply with?

I have worked on a system such as this, and it was hell. Echoing what Methanar said, do not do this if you can avoid it. (Or only if the app is not important)
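To make the statistics-based gate from the list above concrete, here's a toy sketch; fetch_error_rate() is a stand-in for whatever your metrics backend actually exposes:

code:
from datetime import datetime, timedelta

MAX_RELATIVE_INCREASE = 0.10  # tolerate up to 10% worse than the same window last week

def fetch_error_rate(start: datetime, end: datetime) -> float:
    """Placeholder: query your metrics system for errors/requests over [start, end)."""
    raise NotImplementedError

def release_gate_passes(now: datetime, window: timedelta = timedelta(minutes=30)) -> bool:
    # Compare the canary's error rate against the same window one week earlier,
    # rather than using a binary "it works / it fails" check.
    current = fetch_error_rate(now - window, now)
    week_ago = now - timedelta(days=7)
    baseline = fetch_error_rate(week_ago - window, week_ago)
    if baseline == 0:
        return current == 0
    return (current - baseline) / baseline <= MAX_RELATIVE_INCREASE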

barkbell
Apr 14, 2006

woof
e: ^ thanks. these are the kind of questions i need to think about

From my understanding, a big part of building out the devops pipeline is culture and workflow. I just am looking for like: here's a book/blog writer/whatever for how to approach and think about devops

12 rats tied together
Sep 7, 2006

every commit of your main branch should be deployable, but that doesn't mean deploying every commit, unless you commit straight to main i guess, but don't do that

OP I recommend Martin Fowler: https://martinfowler.com/bliki/ContinuousDelivery.html

barkbell
Apr 14, 2006

woof
do people not like trunk-based git workflows? it seems good to me

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Forgot to mention:
- Is your repo shared with other products / teams? (Not uncommon in a monorepo situation) If so, how will you know if any commits to master affect your product? (conventions help, and build deps tools like Bazel can answer these questions, but Bazel is not pleasant to work with).
- Many feature branch CI systems will perform a final round of tests before merging with master to ensure that master remains in a deployable state. What's your plan if master moves faster than the time these tests take to run? (Again, not uncommon in a large monorepo shared by multiple teams)
- Do you ignore commits-to-master that don't functionally change the code? E.g. comment or README.md updates? (see the sketch below)
- If an undeployable commit lands on master, it will block deploys of all subsequent commits. Can you be sure that you can identify the bad commit quickly and revert it? On a weekend?


(I am getting PTSD flashbacks just thinking about this)
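For the docs-only question above, a quick sketch of the kind of check a pipeline can run before deciding to deploy (the path patterns are assumptions about the repo layout):

code:
import subprocess

# Decide whether a commit only touches documentation, so the pipeline can skip
# the deploy stage. DOC_DIRS / DOC_SUFFIXES are assumptions about repo layout.
DOC_DIRS = ("docs/",)
DOC_SUFFIXES = (".md", ".rst", ".txt")

def is_docs_only(commit: str = "HEAD") -> bool:
    changed = subprocess.run(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", commit],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    return bool(changed) and all(
        path.startswith(DOC_DIRS) or path.endswith(DOC_SUFFIXES) for path in changed
    )

if is_docs_only():
    print("docs-only change, skipping deploy")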

Hadlock
Nov 9, 2004

barkbell posted:

e: ^ thanks. these are the kind of questions i need to think about

From my understanding, a big part of building out the devops pipeline is culture and workflow. I just am looking for like: here's a book/blog writer/whatever for how to approach and think about devops

Yeah, devops is steering the culture ship away from the rocks. You need management that cares AND has been bitten by this before so they actually understand why they should care. Occasionally you can do a groundswell movement; I was able to militantly enforce git flow at one company to the point that everything from our documentation on up is formal gitflow/semver. Just checked, six years later it's still in place. That kind of groundswell is hard and needs to happen early in a small company. Management buy-in in a large org is mandatory if the groundswell did not happen.

You should buy and read "The Phoenix Project" it's a fictional how/why and actually a good read. It's strictly process/situation, not tech but really enforces the devops mindset and factoring in unplanned work + how/why to plan for break-fix

12 rats tied together
Sep 7, 2006

trunk based development where everyone contributes to the trunk through short-lived feature branches is just github flow with some more words on the end, and it's fine. trunk based development where everyone commits right to main is bad

barkbell
Apr 14, 2006

woof
gotcha, ya i meant more like small branches going straight back into main. i dont imagine a good flow with just committing straight to main

ill check out phoenix project

thanks for all the help so far, im sure ill post more dumb questions in the future

Hadlock
Nov 9, 2004

It's good that you're doing your research, but I'm genuinely curious, how did you end up in charge of designing CI/CD rather than just hiring someone

barkbell
Apr 14, 2006

woof
i did some work on the ci/cd stuff at my last job, but requirements for what i needed to know were much more known and defined for me. new job saw that and wanted me to help on their devops stuff which is not fully fleshed out yet. theres another dev working on it as well, and we have a consultant to help a bit too. so i'm not alone on this, i just want to investigate how things are done elsewhere, bring some ideas to the table where i can, and just generally get good at it

e: oh and the job before that i did a pipeline for an application but it was very low stakes in terms of users and stuff

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
We have the develop branch set to deploy to the test environment on commit, and master set to deploy to production on commit, but nobody can push to master directly, and PR to master requires approval from x reviewers. Works for us so far but we have a very small environment compared to the stuff some of you guys deal with.

I do need to automate the rest of the environment as well though, like making sure security groups and other resources are set right. Is Pulumi good for that?

12 rats tied together
Sep 7, 2006

Pulumi is good for that, yeah. I would describe pulumi/cdk as basically a new generation of infrastructure code tooling; they have their own set of problems, but they're an entire leap ahead in functionality compared to cloudformation/terraform/etc.

In particular, Pulumi programs written in python/node are also valid modules and python/node modules can be imported via relative path, which gives you a crazy amount of composition, inheritance, and orchestration control built right into the tool instead of needing ansible, terragrunt, or some other kind of wrapper. C#/golang feel bad to write in but they also feel bad to write in general so that's probably not a Pulumi issue.

A feature I'm a big fan of is Component Resources, along with the opts resource attribute: basically you get a lot of control over logically grouping resources in "pulumi preview" output, and it's really easy to bundle up a group of extremely verbose dependencies into something a lot more DRY. Debug logging/debugging in general is better than terraform but worse than ansible. Making programs easier to debug is on the roadmap, as I understand it.


Conceptually Pulumi is ~reasonably similar to terraform. The main things I would mention after using it for a couple months are:

1- The model for getting outputs from resources into downstream resources is a little funky. In terraform you just go "${aws_vpc.some_vpc.id}", or whatever, and you can just use that wherever else you want to in that workspace. In Pulumi you can ~sometimes do this, but often you need to use a mechanism similar to javascript promises, where you call a "resolve all these values" function and pass them into an anonymous function, where you can actually refer to them (see the sketch at the end of this post).

2- Pulumi resource naming (the URN, the value in state) cares a lot more about your setup than terraform does, which basically just looks at your current working directory. Pulumi programs exist inside of a stack, which exist inside of a project, and the names of each must be explicitly configured and don't have any bearing on the filesystem unless you make them. Stacks also must be uniquely named in a Project, and if you rename either Stack or Project you have to do state surgery because both are interpolated into the resource URN.

3- Pulumi uses the AWS golang sdk which has some weird opinions about auth, especially role assumption auth, and doesn't follow the standards set by the canonical python sdk. Since Pulumi is language agnostic, it uses the golang sdk even if you aren't writing golang, and it kinda sucks to inherit problems specific to that set of maintainers that you otherwise would not.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
We ship a virtual appliance as part of our application and have built it with packer, the virtualbox ova builder, and a bunch of shell scripts that were written a few years ago and basically not touched since then. VirtualBox sucks majorly and we get broken builds every now and then because our build machines fail to release resources and vboxmanage shits the bed.

Other than swapping out the scripts for chef/ansible/salt is there a better way to do this? I'd love to be able to get rid of virtualbox specifically.

FamDav
Mar 29, 2008

Methanar posted:

I'm not a release expert, but I can't imagine a world where yolo deploying every commit, passing tests or not, is a good idea.

if your automation isn't telling you what will be issues before they reach your customers, invest more in automation.

quote:

How do you handle any sort of schema update? Or dependency management at all. You probably don't want to have your app try to call APIs from some upstream that aren't implemented yet and have somebody typing git push be the gate keeper of that. And if you're constantly deploying everything across your engineering org how do you ever know what's really live anywhere.

first, you don't let people push without a code review that includes release/revert instructions. api not in production? don't ship it, or put it behind a feature flag.

and you know what’s live everywhere by tracking state and querying it. a variety of deployment systems exist to manage this task, many open source.

quote:

Also, how do multiple people working on a feature/branch work? How do you enforce that everybody involved is always properly rebased in exactly the right manner with no possibility of ambiguity or accidental regression breaking somebody else.

well, avoid feature branches and embrace feature flags. and if someone is dumb enough to do a long-lived feature branch and screws things up on merge, then that's what the tests, staging environments, etc. are for.

luminalflux
May 27, 2005



Methanar posted:

I'm not a release expert, but I can't imagine a world where yolo deploying every commit, passing tests or not, is a good idea.

Hi hello I do the release engineering among all other stuff I deal with, and we do CI/CD where if a merge to main passes tests, bucko it's getting deployed whether you want to or not.

Basic flow for our main app:
  • CI runs on each push to your branch
  • If CI is green and you get approval, you can merge to main
  • CI runs on main branch. If this goes green, it merges that SHA to the release branch.
  • Release branch gets deployed to staging
  • CI runs browser-based integration tests against staging to make sure that the backend app and react client play well and we didn't make important butans unclickable
  • If this goes green, and deploys aren't frozen, CD kicks off a deploy to Spinnaker
  • Spinnaker bakes an AMI with the new version of the app (*)
  • Spinnaker makes a new autoscaling group with the new AMI, and attaches it to the loadbalancer
  • When healthchecks go green on the new ASG, Spinnaker drops out the old ASG.

(*) this is actually done as part of CI running on main, but it looks better this way when written out

If there's any sort of emergency "oh poo poo we hosed up", we can freeze deploys and then revert to an older version until we figure out what the gently caress.
Other services deploy to staging, and then have a "deploy latest staging to production" pipeline they can run, but in 99% of cases we're always running the tip of the release branch.

quote:

How do you handle any sort of schema update?

Schema updates are tricky. We kick off ours manually for a couple reasons: one is that we use gh-ost since our database is Large McHuge and some DDL changes take a couple days, and need a human to go "yeah ok you can cut over to the new table now". It also makes it easier to deal with than 64 instances trying to run "rake migrate up" at the same time. Other teams have a process where they boot one instance to run flyway migrations if they're not using the gh-ost flow.

Generally the way the application handles this is that it's compatible for a couple schema versions forwards and backwards, and we flip feature flags when the new migration has run.

quote:

Or dependency management at all.

What kind of dependencies? OS (apt/yum) or language dependencies (npm/pypi/maven) are handled in that we build a new AMI for each version, and deploy that AMI. Same for our container based deploys.


quote:

You probably don't want to have your app try to call APIs from some upstream that aren't implemented yet and have somebody typing git push be the gate keeper of that

Feature flags are very useful here, and this is generally what we do when rolling out new services or service versions. Client service gets deployed with the ability to call the Dickbutt service despite it not being ready. When the Dickbutt service team finally gets it deployed, the client team flips a feature flag. Works for mobile apps as well.

The key here is being able to degrade gracefully if components aren't available or behaving.
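As a sketch of that flag-plus-graceful-degradation pattern (the flag client and Dickbutt client here are hypothetical stand-ins for whatever is actually in use):

code:
import logging

log = logging.getLogger(__name__)

# flags.is_enabled() and dickbutt_client are stand-ins for the real
# feature-flag system and service client.
def get_recommendations(user_id, flags, dickbutt_client, fallback=()):
    if not flags.is_enabled("use-dickbutt-service"):
        return fallback  # upstream not rolled out yet; behave as before
    try:
        return dickbutt_client.recommendations(user_id, timeout=0.5)
    except Exception:
        log.warning("dickbutt service unavailable, degrading gracefully")
        return fallback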

quote:

And if you're constantly deploying everything across your engineering org how do you ever know what's really live anywhere.

It's not super hard to build a service that gathers info on what apps are deployed onto which hosts running which versions. I did this in a couple days using Consul and next.js. Datadog and Sentry also gather this for us, so you can either use their built-in views for seeing what's running where, or build your own dashboards on top of this.
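A stripped-down sketch of that kind of inventory, assuming each service registers its version in Consul's ServiceMeta (how the version gets into the registration is up to your setup):

code:
import requests

CONSUL = "http://localhost:8500"  # assumed local agent

# Walk Consul's catalog and print service / node / version triples.
services = requests.get(f"{CONSUL}/v1/catalog/services").json()
for name in services:
    for instance in requests.get(f"{CONSUL}/v1/catalog/service/{name}").json():
        version = instance.get("ServiceMeta", {}).get("version", "unknown")
        print(f"{name}\t{instance['Node']}\t{version}")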

quote:

Also, how do multiple people working on a feature/branch work? How do you enforce that everybody involved is always properly rebased in exactly the right manner with no possibility of ambiguity or accidental regression breaking somebody else.

This is 90% a process issue. We have checks for "branch must be up to date to merge" in github, but that's just a final check. Getting a good process into place for how to handle multiple engineers working on the same branch is mostly a people issue.

luminalflux fucked around with this message at 21:27 on Oct 10, 2021

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

luminalflux posted:

Hi hello I do the release engineering among all other stuff I deal with, and we do CI/CD where if a merge to main passes tests, bucko it's getting deployed whether you want to or not.

Basic flow for our main app:
  • CI runs on each push to your branch
  • If CI is green and you get approval, you can merge to main
  • CI runs on main branch. If this goes green, it merges that SHA to the release branch.
  • Release branch gets deployed to staging
  • CI runs browser-based integration tests against staging to make sure that the backend app and react client play well and we didn't make important butans unclickable
  • If this goes green, and deploys aren't frozen, CD kicks off a deploy to Spinnaker
  • Spinnaker bakes an AMI with the new version of the app (*)
  • Spinnaker makes a new autoscaling group with the new AMI, and attaches it to the loadbalancer
  • When healthchecks go green on the new ASG, Spinnaker drops out the old ASG.

how long does this process take end to end?

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
Also, why use a separate release branch from main? We do basically the same thing minus that aspect so I’m curious to know what some of the gains and drawbacks are.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
What's the current approach in terms of k8s and organizing it around applications: one giant cluster that houses everything or a bunch of smaller clusters focused around domains?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Blinkz0rz posted:

What's the current approach in terms of k8s and organizing it around applications: one giant cluster that houses everything or a bunch of smaller clusters focused around domains?

I'm wary of one giant anything -- too many eggs in one basket. Kubernetes is generally reliable, but unless you've worked out the kinks and can bring a production cluster online via automation in under 30 minutes, I would tend toward the latter.

Hadlock
Nov 9, 2004

It depends. How long is a piece of string

I typically use one cluster, several namespaces for a single monolith + supporting services, probably a second cluster for tooling, a third for analytics

If you were a multinational corp like Mastercard or IBM you might have a diverse enough workload for multiple clusters for the primary workload(s), several per region

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Release branches are kind of an artifact of git-flow workflows, in my experience with infra teams that use the process. Decoupling infrastructure releases from software releases is probably a good idea if the teams are also mostly decoupled. I've found that for smaller organizations the overhead of all these processes likely outweighs the benefits, so processes that can scale down as well as up are important. Right now, with a team of a handful of people, we're having to maintain roughly... 3 dozen repositories, and every release it's insane to do PRs for all these repos and do back-merges per git-flow standards that made sense when there were also 3 dozen+ engineers doing the work.

But we really just have dev, feature or hot fix branches, release branches, and tags with some automation that performs a lot of chores for us, but said automation has been fallible as we've tried to downscale.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
I'd look at it the way I'd divvy up billing accounts and bare metal instances; look at the requirements of the users + finance + security + ops:
- do you have to account for $ costs between different user groups?
- is the cluster for prod, vs (say) sandbox?
- does a user group have specific weird use cases that might impact other user groups?
- is authn or authz going to be different for each user group?
- Will some users demand admin level?
- Will users demand modern features? How will this affect upgrading the cluster itself?

If any of these are up in the air, then I'd be tempted to stand up a cluster per domain until those answers shake out. It's relatively easy to stand up a cluster these days; the downside is that you'd be creating 3x control plane nodes per cluster, which is not cost-efficient. (That said, there are 'hyper-cluster' technologies in the works which avoid proliferating control plane hosts by co-locating multiple independent control planes on the same 3x hosts, but I haven't looked into it.)

12 rats tied together
Sep 7, 2006

You can get everything you need organizationally with a single cluster, but you don't want your clusters to constantly go split-brain, so line them up based on connectivity. Cluster per AWS region is a safe watermark.

To some degree it also depends on how much you want to be manually janitoring each cluster, if you're using e.g. AWS EKS as a stand-in for an ec2 ASG, go ahead and create tons of them. If you have to stand up the CNI by hand, allocate IP addresses by hand, etc, you could totally automate that stuff end-to-end but you'll probably have a better time going mono-cluster.

luminalflux
May 27, 2005



my homie dhall posted:

how long does this process take end to end?

Spinnaker pipeline takes ~20 minutes to run. The leadup to that (unittests, staging deploy, smoketests) adds another ~30-50 minutes. So about ~60-70 minutes looking at git history vs when it actually was marked "deploy finished" in slack.

The Iron Rose posted:

Also, why use a separate release branch from main? We do basically the same thing minus that aspect so I’m curious to know what some of the gains and drawbacks are.

Mostly an easy way to signal "this is known tested". We generally can rely on "tip of release branch is deployable". It predates me, but it doesn't bother me and I kinda like it for that.


Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Lots of helpful replies.

My team is responsible for building a bunch of hub services for the rest of our product suite and while some of them work together and logically make sense to co-tenant in the same cluster, we have some others that could probably fit in a separate namespace or a separate cluster without any issue either way.

Being the good friend to Ops that I try to be, I want to reduce the complexity for them to operate the cluster(s) while at the same time making sure that we're not overburdening a single cluster given our traffic patterns. Not an urgent decision by any means, just curious what the pros and cons are for either approach.

Also bumping this Q which fell through the cracks:

Blinkz0rz posted:

We ship a virtual appliance as part of our application and have built it with packer, the virtualbox ova builder, and a bunch of shell scripts that were written a few years ago and basically not touched since then. VirtualBox sucks majorly and we get broken builds every now and then because our build machines fail to release resources and vboxmanage shits the bed.

Other than swapping out the scripts for chef/ansible/salt is there a better way to do this? I'd love to be able to get rid of virtualbox specifically.
