Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mr Shiny Pants posted:

So something like: rsync the required files or updates and update from the local files?

Is this something I need to build myself, or are there some nice utilities available? I can't be the only one dealing with something like this.
This could really be as simple as a systemd unit that downloads a compose file and runs it, if you're capable of running your reporting out of band
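
Rough shape of what I mean (a sketch; the unit name, paths, and compose URL are all made up):

code:
# /etc/systemd/system/app-refresh.service -- oneshot: fetch the compose file, then (re)apply it
[Unit]
Description=Refresh and apply the application compose file
Wants=network-online.target
After=network-online.target docker.service
[Service]
Type=oneshot
# -z only re-downloads when the remote copy is newer than the local one
ExecStart=/usr/bin/curl -fsSL -z /opt/app/docker-compose.yml -o /opt/app/docker-compose.yml https://config.example.internal/fleet/docker-compose.yml
ExecStart=/usr/bin/docker-compose -f /opt/app/docker-compose.yml up -d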


Mr Shiny Pants
Nov 12, 2012

Vulture Culture posted:

This could really be as simple as a systemd unit that downloads a compose file and runs it, if you're capable of running your reporting out of band

Ok, I can figure this out. We have no CM tooling currently available. Is there something in Salt or something else that can schedule connectivity?

Like check for updates at some interval? Or wait for the network to become available?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mr Shiny Pants posted:

Ok, I can figure this out. We have no CM tooling currently available. Is there something in Salt or something else that can schedule connectivity?

Like check for updates at some interval? Or wait for the network to become available?
If you don't like cron, systemd can do both of these things. If you need continuous reconfiguration based on the link state of an always-online system, you probably want something like an rtnetlink listener unless a dhclient hook would be good enough
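
To make that concrete with the unit from my last post (still all hypothetical names): a timer covers "check at some interval", and the Wants=network-online.target on the service covers "wait for the network" at boot.

code:
# /etc/systemd/system/app-refresh.timer -- run the refresh unit periodically
[Unit]
Description=Periodic application refresh
[Timer]
OnBootSec=5min
OnUnitActiveSec=30min
[Install]
WantedBy=timers.target
Enable it with systemctl enable --now app-refresh.timer and the scheduling half is done.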

Erwin
Feb 17, 2006

Do you have control over the connection and purposefully bring it down to limit costs, or is it a random as-available thing? As much as I wouldn't recommend Chef to anyone these days, this sounds like something Chef is more suited for than Ansible, if you wanted to use an existing CM tool. Since Chef runs as an agent in a 'pull' model, each client would hit the Chef server whenever they had connectivity to grab any new configuration. Looks like the Docker cookbook can manage containers as well, but I think I'd separate system configuration and container orchestration and update a compose file like VC said.

If you have the budget for it, I know first-hand that DC/OS runs on cruise ships with sporadic satellite connections. I have strong opinions on DC/OS, and simple container orchestration wouldn't be a use case I'd pick it for, but I know it's successfully solving the problem you are trying to solve.
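
If you did go the Chef route, the pull half can literally be a cron entry on each box (a sketch; install path and interval are whatever suits you). A run that fails while the link is down is harmless; the next run with connectivity converges:

code:
# /etc/cron.d/chef-client -- hypothetical pull-mode schedule
*/30 * * * * root /usr/bin/chef-client --once -L /var/log/chef-client.log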

Mr Shiny Pants
Nov 12, 2012

Erwin posted:

Do you have control over the connection and purposefully bring it down to limit costs, or is it a random as-available thing? As much as I wouldn't recommend Chef to anyone these days, this sounds like something Chef is more suited for than Ansible, if you wanted to use an existing CM tool. Since Chef runs as an agent in a 'pull' model, each client would hit the Chef server whenever they had connectivity to grab any new configuration. Looks like the Docker cookbook can manage containers as well, but I think I'd separate system configuration and container orchestration and update a compose file like VC said.

If you have the budget for it, I know first-hand that DC/OS runs on cruise ships with sporadic satellite connections. I have strong opinions on DC/OS, and simple container orchestration wouldn't be a use case I'd pick it for, but I know it's successfully solving the problem you are trying to solve.

I will take a look at this, thanks for the suggestions guys. It is brought down to limit costs, so we have control over it.
How dumb would it be to run Jenkins agents remotely and have them do the hard work? In a slave mode they do almost exactly what I want.

Erwin
Feb 17, 2006

Mr Shiny Pants posted:

How dumb would it be to run Jenkins agents remotely and have them do the hard work? In a slave mode they do almost exactly what I want.

Gross. Besides, Jenkins agents are meant to run pipelines that start and finish, not indefinite services. Or do you mean have a Jenkins agent at each site that orchestrates other servers?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
OK, wondering if this workflow is possible in Github (or, I guess more specifically, if it's possible to enforce it with protected branches) and also if it's sane.

We've got a number of runbooks and scripts that do stuff like build new machines or delete machines or all sorts of things like that. It can be difficult to test some of these with Pester (at least with our limited Pester skills), so the changes may require manual testing. What I'd like to do is set up CI with protected branches so the workflow looks like this:
  • You make whatever changes you want in your branch; every commit to your branch will call the build pipeline that will run some automated Pester tests
  • Create a protected "Test" branch that you can only make a pull request against if your branch builds successfully (CI automatically assigns a status to each branch so this part I know can be done)
  • Next I'd like code that gets merged to Test to add a "pending" status and deploy to some kind of test environment. I think I can do this with a webhook that will call a runbook that will then reach out with the Github API and set the status to "pending" for "manual test" or whatever I want to call it (see the curl sketch after this list).
  • Once manual tests are completed, whoever is running the test would do something that would fire off another API call to set the status to "Success"
  • Protect the master branch such that it can only receive merges from Test and only when the status is Success
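
For the pending/success part, I think it's just the commit status API, so the runbook (and whoever finishes the manual test) would run something like this (owner, repo, SHA, and token are all placeholders):

code:
# mark the commit as waiting on a manual test
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d '{"state":"pending","context":"manual-test","description":"awaiting manual verification"}' \
  https://api.github.com/repos/OWNER/REPO/statuses/COMMIT_SHA
# and flip it once the manual test passes
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d '{"state":"success","context":"manual-test","description":"manual test passed"}' \
  https://api.github.com/repos/OWNER/REPO/statuses/COMMIT_SHA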

Am I crazy? Am I sane? Am I an idiot?

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
The first two bullet points work fine with github; you're describing tags, and your test env is labeling all your commits to your feature branches with whether they passed your tests. You can absolutely restrict pull requests to only tags. Depending on the frequency of commits / size of your dev team, you may not need the two-tiered approach that you laid out, and if you do, it's more commonly implemented as unit testing feature branches (your Test), then merging into dev if passing, and then periodically tagging dev branch commits for integration testing (this would be manual in your case, sometimes it's weekly or daily or whatever) as a prerequisite for merging a release into master. If it ends up failing, you just do an additional commit into dev from your feature branch and kick the test off again - it's not really necessary to track it back to the commit of the feature branch like you're suggesting.

The benefit of doing it this way is that you can test multiple feature commits at the same time on a periodic basis, it conveniently follows common business requirements like sprints and quarterly releases, and if you have REALLY long tests you can tune the auto testing to fit them instead of having them queue behind each other as devs frantically try and get their features in at 3pm on a friday before the end of the sprint.

Bhodi fucked around with this message at 03:23 on Mar 13, 2019

Hadlock
Nov 9, 2004

Mr Shiny Pants posted:

Let's say I will have some Linux machines that will be deployed around the world with intermittent connectivity (think satellites) running docker on some Linux variant.

What would be a good way to keep these under control and up to date?

Would it be possible to just run regular docker on them and push updates using something like Ansible?

Check out CoreOS and their cloud config files

Basically you pull down a new cloud config file, then reboot the server. The server executes the cloud config file (run these containers with these arguments) and away you go. If the cloud config file fails, it boots from the previous safe config. Maybe set up a cron job to do updates at predicted connectivity periods.

This sort of follows the telecom/networking model of "new update, but fail back to the previous version if it doesn't work"

CoreOS uses Omaha protocol to poll for updates, but that is an entire other rabbit hole.

TL;DR CoreOS was designed from the ground up to do exactly what you plan on doing
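
For a rough idea, the CoreOS-era cloud-config looked about like this (a sketch; unit and image names invented):

code:
#cloud-config
coreos:
  units:
    - name: reporting.service
      command: start
      content: |
        [Unit]
        Description=Reporting container
        After=docker.service
        Requires=docker.service
        [Service]
        ExecStartPre=-/usr/bin/docker rm -f reporting
        ExecStart=/usr/bin/docker run --name reporting registry.example.internal/reporting:latest
        Restart=always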

Mr Shiny Pants
Nov 12, 2012

Hadlock posted:

Check out CoreOS and their cloud config files

Basically you pull down a new cloud config file, then reboot the server. The server executes the cloud config file (run these containers with these arguments) and away you go. If the cloud config file fails, it boots from the previous safe config. Maybe set up a cron job to do updates at predicted connectivity periods.

This sort of follows the telecom/networking model of "new update, but fail back to the previous version if it doesn't work"

CoreOS uses Omaha protocol to poll for updates, but that is an entire other rabbit hole.

TL;DR CoreOS was designed from the ground up to do exactly what you plan on doing

I'll have a look, thanks.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
CoreOS Container Linux is probably not what you want. It's primarily a minimal OS to run docker containers and little else. The double-buffered root is for regular updates of the kernel + the minimal toolset, not the apps installed. And the cloud-config stuff was replaced by Ignition, which is really just a small tool for injecting the necessary custom systemd units and mounting volumes and so on.

12 rats tied together
Sep 7, 2006

Docjowles posted:

Yeah, it’s that. It spends 15 minutes refreshing the state. And we haven’t even imported all the zones we would have in production yet, lol, this is just a subset for a test.

Probably going to end up writing our own tool to do this which isn’t terribly hard. I just always prefer to use popular off the shelf stuff first if possible.

I was wondering if there was some obvious workaround or something since I assume we are not the first team wanting to manage large zones via terraform. But maybe I am uniquely dumb :pseudo:

I just wanted to echo that I hated doing anything route53 with Terraform. The ansible -> jinja2 for loop -> cloudformation template approach is managing something like 3k records in <30 sec using Route53 RecordSets.

It looks like Terraform doesn't support the RecordSet resource, so my unironic recommendation would be to use the cloudformation terraform resource to push a stack composed of RecordSet objects.
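
i.e. render the records with whatever templating you like and push them as one stack (a sketch, 0.11-style syntax, file name invented); the template body itself is just an AWS::Route53::RecordSetGroup full of records:

code:
resource "aws_cloudformation_stack" "zone_records" {
  name          = "example-zone-records"
  template_body = "${file("${path.module}/rendered-recordsets.json")}"
}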

Mr Shiny Pants posted:

Let's say I will have some Linux machines that will be deployed around the world with intermittent connectivity ( think satellites ) running docker on some Linux variant.

What would be a good way to keep these under control and up to date?

Would it be possible to just run regular docker on them and push updates using something like Ansible?

Sure, this sounds fine. Definitely go with the compose file though instead of "docker run whatever".

If you can distribute the ansible repo (including secrets), I'd suggest tinkering with ansible-pull to see if it works better than push-mode updates. Ansible is a great tool but it's not ~super good at handling hosts that are expected to be consistently unreachable.
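
In cron terms the pull side would be roughly this (repo URL, playbook name, and paths are placeholders); a run that fails while offline just gets retried at the next interval:

code:
# /etc/cron.d/ansible-pull -- each box pulls the repo and applies its own config
*/30 * * * * root /usr/local/bin/ansible-pull -U https://git.example.internal/fleet-config.git -i localhost, local.yml >> /var/log/ansible-pull.log 2>&1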

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Bhodi posted:

The first two bullet points work fine with github; you're describing tags, and your test env is labeling all your commits to your feature branches with whether they passed your tests. You can absolutely restrict pull requests to only tags. Depending on the frequency of commits / size of your dev team, you may not need the two-tiered approach that you laid out, and if you do, it's more commonly implemented as unit testing feature branches (your Test), then merging into dev if passing, and then periodically tagging dev branch commits for integration testing (this would be manual in your case, sometimes it's weekly or daily or whatever) as a prerequisite for merging a release into master. If it ends up failing, you just do an additional commit into dev from your feature branch and kick the test off again - it's not really necessary to track it back to the commit of the feature branch like you're suggesting.

The benefit of doing it this way is that you can test multiple feature commits at the same time on a periodic basis, it conveniently follows common business requirements like sprints and quarterly releases, and if you have REALLY long tests you can tune the auto testing to fit them instead of having them queue behind each other as devs frantically try and get their features in at 3pm on a friday before the end of the sprint.

We're way more on the ops side than dev side, so we basically have zero formal software development process requirements. And generally the changes we're working on are small enough that only one person is working on them. We don't do "releases" we just push code when we write it. And we've never used tags (should we be?).

I'm sure this isn't unique but a lot of our code depends on a ton of other stuff so integration testing requires we have basically a mirror of a lot of our environment, and each runbook needs something radically different. Testing our self service server builds requires a separate form to accept submissions from. Testing code that runs during our server build requires modifications to our server build process. Testing code that updates our inventory requires a bunch of test Google documents, etc etc. So it would be nice to have all those environments setup so upon doing something (pushing to a specific test branch) the code gets deployed in whatever way is appropriate to test it so after that we can fill out the form and submit a server request, or build a server that will run the test code, or modify the test google documents instead of the prod...

Maybe we're small enough that I'm overthinking it. Maybe I should just start setting up those test scenarios and set up our deploy automation to start doing deploys when it sees commits to branches other than master.

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
I'm likely to betray a lot of ignorance on these topics as I explain what I'm thinking about. All of this is my first foray into online deployment of anything other than a static site. I'm overwhelmed with options in front of me. The components of the project are:

- A vue-cli static site that uses vue-router
- A CouchDB database
- A node express server that receives and responds to requests from the web client, making some DB communication in between

In the future, there will be a background task that forever crunches away at the data in the couchdb. Probably this will be baked into the existing express app.

I'm looking to host over https on a digital ocean Ubuntu droplet. The website, couchdb, and server should all be served over https.

A suggestion I've come across is to use nginx as a 'reverse proxy' for all of these services, so that domain.com/ serves the vue-cli page, domain.com/couch/ routes to the database, and domain.com/api routes to the express server. The advantage here is that only the nginx server itself needs to be configured for https - one such configuration instead of three. Do things 'just work' under such a configuration? E.g., I'm using file inputs on the site to upload attachments to the couch db, and I'm using the client-side pouchdb to sync data sets with the couch db.


Vue-cli's deployment guide suggests nginx behind (inside of?) a docker container. Why would they suggest that instead of just giving instructions for nginx? Is it possible for me to include all three of my pieces inside a docker image for a one-line deployment?
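
The shape I have in mind is roughly this, if I'm reading the nginx docs right (ports and paths are whatever my services actually use):

code:
server {
    listen 443 ssl;
    server_name domain.com;
    ssl_certificate     /etc/ssl/domain.com.crt;
    ssl_certificate_key /etc/ssl/domain.com.key;
    # the built vue-cli bundle
    location / {
        root /var/www/site;
        try_files $uri $uri/ /index.html;
    }
    # trailing slashes on proxy_pass strip the prefix before forwarding
    location /couch/ {
        proxy_pass http://127.0.0.1:5984/;
    }
    location /api/ {
        proxy_pass http://127.0.0.1:3000/;
    }
}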

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

FISHMANPET posted:

We're way more on the ops side than dev side, so we basically have zero formal software development process requirements. And generally the changes we're working on are small enough that only one person is working on them. We don't do "releases" we just push code when we write it. And we've never used tags (should we be?).

I'm sure this isn't unique but a lot of our code depends on a ton of other stuff so integration testing requires we have basically a mirror of a lot of our environment, and each runbook needs something radically different. Testing our self service server builds requires a separate form to accept submissions from. Testing code that runs during our server build requires modifications to our server build process. Testing code that updates our inventory requires a bunch of test Google documents, etc etc. So it would be nice to have all those environments setup so upon doing something (pushing to a specific test branch) the code gets deployed in whatever way is appropriate to test it so after that we can fill out the form and submit a server request, or build a server that will run the test code, or modify the test google documents instead of the prod...

Maybe we're small enough that I'm overthinking it. Maybe I should just start setting up those test scenarios and set up our deploy automation to start doing deploys when it sees commits to branches other than master.
Whoops, totally missed this, I kinda forgot this thread existed. Yeah, I think your gut is probably right: the best thing to do is set up a simple case and try it out. You can always add more complexity later, but it's very, very difficult to reduce complexity once it's been added. If you've got manual steps you're going to find it difficult to close the CI pipeline loop.

Tags are good, but only if you care about seeing whether a specific commit passed tests at a glance. Making releases in github does functionally the same thing.


For my own stuff (I may have asked this before): does anyone have good terraform whitepapers on infra design for multiple environments and app deployment with a CI/CD pipeline? We're building out a new env from scratch and it's been decreed that we're not going to be using ansible and jenkins will be the orchestrator, so I need to figure out a way to wedge absolutely everything I can into a terraform git repo, including application configuration. Does it even have a templating feature? I'll probably be leveraging our existing chef infra for the hard stuff but woof, it's going to suck to split code like that.

Bhodi fucked around with this message at 22:22 on Apr 21, 2019

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Terraform has some template stuff and I've used it with cloud-init and launch configurations for basic config-management-type stuff, but that's not really what it's designed to do, and it'll be extremely frustrating if you ever need to scale out.

You'll have better luck using Chef for all of the configuration stuff or even a 3rd party service like AWS SSM, Consul, or Zookeeper.

Nomnom Cookie
Aug 30, 2009



Bhodi posted:

For my own stuff (I may have asked this before): does anyone have good terraform whitepapers on infra design for multiple environments and app deployment with a CI/CD pipeline? We're building out a new env from scratch and it's been decreed that we're not going to be using ansible and jenkins will be the orchestrator, so I need to figure out a way to wedge absolutely everything I can into a terraform git repo, including application configuration. Does it even have a templating feature? I'll probably be leveraging our existing chef infra for the hard stuff but woof, it's going to suck to split code like that.

Work uses terraform with puppet so I'm sure some of the details won't apply, but the basic scheme we have is like this:

1. Terraform template populates userdata with facts (node attributes? Dunno the chef equivalent) for env, role, sub-role, etc. instance_role=${instance_role} is about the limit of our template complexity, which is as it should be.
2. Instance pops up with puppet agent baked into the AMI, classes to apply and all configs are determined by a global/env/role cascade set up in hiera.

It's possible to lean more heavily on terraform for CM with its provisioner stuff. It's a terrible idea in almost all cases (we have a terraform module that uses provisioners to bootstrap the puppet master and that's about it for provisioners) but you can use it. But don't use it.
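
Step 1 in terraform terms is about this much (a sketch, 0.11-style syntax, with invented variable and file names):

code:
data "template_file" "userdata" {
  template = "${file("${path.module}/userdata.sh.tpl")}"
  vars {
    environment   = "${var.environment}"
    instance_role = "${var.instance_role}"
  }
}

resource "aws_instance" "node" {
  # puppet agent is already baked into the AMI
  ami           = "${var.baked_ami_id}"
  instance_type = "t3.small"
  user_data     = "${data.template_file.userdata.rendered}"
}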

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Kevin Mitnick P.E. posted:

Work uses terraform with puppet so I'm sure some of the details won't apply, but the basic scheme we have is like this:

1. Terraform template populates userdata with facts (node attributes? Dunno the chef equivalent) for env, role, sub-role, etc. instance_role=${instance_role} is about the limit of our template complexity, which is as it should be.
2. Instance pops up with puppet agent baked into the AMI, classes to apply and all configs are determined by a global/env/role cascade set up in hiera.

It's possible to lean more heavily on terraform for CM with its provisioner stuff. It's a terrible idea in almost all cases (we have a terraform module that uses provisioners to bootstrap the puppet master and that's about it for provisioners) but you can use it. But don't use it.
With the Chef provisioner, you can directly use the environment and run_list arguments to specify how the node should identify itself during bootstrap.

The Terraform provisioner stuff is totally fine but you should consider adopting the following completely ridiculous pattern because of quirks in Terraform's flow when things go wrong: do not attach your provisioner directly to an instance resource. For each instance you are provisioning, create a null_resource to host the instance's provisioner, and feed it the host of the SSH connection directly. This way, you can taint and untaint nodes and provisioners separately, which will save your rear end if something goes unexpectedly wrong during provisioning of a big dumb ugly set of resources.
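
In terraform terms, roughly (a sketch; a placeholder remote-exec stands in for whichever provisioner you're actually running, chef or otherwise):

code:
resource "aws_instance" "node" {
  ami           = "${var.ami_id}"
  instance_type = "t3.small"
}

resource "null_resource" "node_provision" {
  # taint/untaint this independently of the instance itself
  triggers {
    instance_id = "${aws_instance.node.id}"
  }
  connection {
    type = "ssh"
    host = "${aws_instance.node.private_ip}"
    user = "ubuntu"
  }
  provisioner "remote-exec" {
    inline = ["echo provisioned"]
  }
}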

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Newf posted:

A suggestion I've come across is to use nginx as a 'reverse proxy' for all of these services.
I recommend Caddy Server instead of Nginx, because Caddy out-of-the-box automatically provisions and renews your HTTPS certificates via LetsEncrypt. (Nginx has a plugin to do it, but it's extra work to install. On the flipside, Caddy doesn't have any binary distributables so it must be built from Go source, but that's literally 2 commands).
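
For your three pieces the entire Caddyfile is about this much (a sketch; ports are whatever your services listen on), and the HTTPS part just happens:

code:
domain.com {
    # the built vue-cli bundle
    root /var/www/site
    # pass these prefixes to the local services
    proxy /couch localhost:5984
    proxy /api localhost:3000
}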

Newf posted:

Vue-cli's deployment guide suggests nginx behind (inside of?) a docker container. Why would they suggest that instead of just giving instructions for nginx?
Because putting it into a container means that what you build on your laptop == what you deploy on the server, and it's also easier to install and run.

Newf posted:

Is it possible for me to include all three of my pieces inside a docker image for a one-line deployment?
Yes, but that's not best practice. Generally each docker container should only contain one component. That way you can upgrade/restart/scale up each component individually while not affecting other components.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Bhodi posted:

For my own stuff (I may have asked this before): does anyone have good terraform whitepapers on infra design for multiple environments and app deployment with a CI/CD pipeline? We're building out a new env from scratch and it's been decreed that we're not going to be using ansible and jenkins will be the orchestrator, so I need to figure out a way to wedge absolutely everything I can into a terraform git repo, including application configuration. Does it even have a templating feature? I'll probably be leveraging our existing chef infra for the hard stuff but woof, it's going to suck to split code like that.

I would recommend spending some time getting to know Packer so that you can build AMIs. There's a temptation to use Chef to both lay down your baseline and then configure instance-specific settings, which is problematic because it takes more time to run and you need to add in a reboot for security patches/kernel updates to take effect, which is annoying to deal with. Moving the baseline setup into Packer means you spend less time converging, don't need to reboot, minimize the complexity of the cookbooks, and have a known good AMI built on a schedule that you can use across the rest of your infrastructure.

Are you planning to use Chef in client-server mode or chef-zero mode? If client-server, have you considered how you plan to handle key distribution or reaping dead nodes from the Chef server? There are valid reasons to not use Ansible or SaltStack, but Jenkins probably isn't the right tool to replace them.
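
The baseline Packer template really is this small (a sketch; region, source AMI, and the shell step are placeholders) - hang your hardening steps or a chef-solo run off the provisioners list and build it on a schedule:

code:
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t3.small",
    "ssh_username": "ec2-user",
    "ami_name": "baseline-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo yum -y update"]
  }]
}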

Lily Catts
Oct 17, 2012

Show me the way to you
(Heavy Metal)
When applying for DevOps/SRE jobs, how do you prepare? Have you ever made a project portfolio (i.e. here is a web app deployed in AWS and here is its CI/CD pipeline)?

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

minato posted:

I recommend Caddy Server instead of Nginx, because Caddy out-of-the-box automatically provisions and renews your HTTPS certificates via LetsEncrypt. (Nginx has a plugin to do it, but it's extra work to install. On the flipside, Caddy doesn't have any binary distributables so it must be built from Go source, but that's literally 2 commands).

I'm intrigued. I can't overstate how intimidated I am working in this domain. The config files here look a lot easier to grok than with nginx. By 2 commands, you mean `curl`ing the install script and running it with bash? https://getcaddy.com seems to have worked for me.

minato posted:

Yes, but that's not best practice. Generally each docker container should only contain one component. That way you can upgrade/restart/scale up each component individually while not affecting other components.

Say a project has three components that are highly coupled and the work of a single author. Does it start to look like a better practice then?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Newf posted:

Say a project has three components that are highly coupled and the work of a single author. Does it start to look like a better practice then?

Not really. If you want to go down the road of using docker, then use it properly. If you don't want to use docker properly, don't shoehorn things into docker.

I'm being bitchy about this because my work life is an unending nightmare of tools awkwardly and improperly being shoehorned into the stack for no reason other than to say that tool is in use.

New Yorp New Yorp fucked around with this message at 05:31 on Apr 22, 2019

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

New Yorp New Yorp posted:

Not really. If you want to go down the road of using docker, then use it properly. If you don't want to use docker properly, don't shoehorn things into docker.

I'm being bitchy about this because my work life is an unending nightmare of tools awkwardly and improperly being shoehorned into the stack for no reason other than to say that tool is in use.

Trust me, I'm going to have as few moving parts as possible. Just asking for the security of being shut down.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

+1 for Caddy. It's one of the most hilariously easy to configure pieces of software I've ever used.

About the official binaries, they are only free for non-commercial use. If you want to use it commercially without paying $25/instance/mo. (but please see if your company can afford it, it's made by two students from the U.S. Midwest and IMO they deserve it), you can either compile it yourself or use the unofficial docker image, which is automatically compiled from the Apache-licensed source code.

Hadlock
Nov 9, 2004

Newf posted:


- A vue-cli static site that uses vue-router
- A CouchDB database
- A node express server that receives and responds to requests from the web client, making some DB communication in between

Not what you were asking for exactly, but I would run the jwilder/nginx reverse proxy container and volume-mount the SSL cert in

Then run the three services as containers and add the appropriate -e SITE=website.com to the front-facing website container (so that nginx detects it and writes the correct rules for it), and then configure all three services to talk to each other over localhost 0.0.0.0
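
In compose terms, roughly (a sketch; image names made up, and check the proxy image's README for the exact env var, I believe it keys off VIRTUAL_HOST):

code:
version: "3"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
  web:
    image: example/vue-site
    environment:
      - VIRTUAL_HOST=website.com
  api:
    image: example/express-api
  couch:
    image: couchdb:2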

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Yeah, I'd try to get this working without docker first. Docker solves a problem you don't have yet, and if you're not familiar with it then it's just gonna introduce more headaches.

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

NihilCredo posted:

+1 for Caddy. It's one of the most hilariously easy to configure pieces of software I've ever used.

About the official binaries, they are only free for non-commercial use. If you want to use it commercially without paying $25/instance/mo. (but please see if your company can afford it, it's made by two students from the U.S. Midwest and IMO they deserve it), you can either compile it yourself or use the unofficial docker image, which is automatically compiled from the Apache-licensed source code.

Yes, I've noticed the licensing fees. This is a solo side-project. I do have commercialization in mind but not any time in the next six months.

Currently in my DigitalOcean console:

pre:
Serving HTTPS on port 443
https://eduquilt.com

Serving HTTPS on port 80
https://eduquilt.com
With the Caddyfile:

code:
eduquilt.com
proxy /couch localhost:5984
proxy /express localhost:3000
In the meantime, neither http://eduquilt.com nor http://159.203.60.117 is resolving for me.

Would Caddy know / report an error if it didn't have access to those ports? Or could it fail silently?

edit: ufw firewall is running on my server. ufw status returned:

pre:
OpenSSH
5984
443
OpenSSH (v6)
5984 (v6)
443 (v6)
I've opened up port 80, and now eduquilt.com is giving: 404 Site https://www.eduquilt.com is not served on this interface

Newf fucked around with this message at 07:19 on Apr 22, 2019

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
You'll need to run Caddy as root so it can bind to 80/443. It will throw an error if it can't bind to the ports or auto-provision an SSL certificate. You may need to open 80/443 on the host's firewall.

Methanar
Sep 26, 2013

by the sex ghost

minato posted:

You'll need to run Caddy as root so it can bind to 80/443. It will throw an error if it can't bind to the ports or auto-provision an SSL certificate. You may need to open 80/443 on the host's firewall.

Don't do this. Give caddy the capability to bind to 443 instead


sudo setcap cap_net_bind_service+ep $(which caddy)

SeaborneClink
Aug 27, 2010

MAWP... MAWP!
You probably shouldn't be recommending people willingly violate Caddy's license, especially if the pretense is there that someone has the intention of using it on a for-profit venture.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

SeaborneClink posted:

You probably shouldn't be recommending people willingly violate Caddy's license, especially if the pretense is there that someone has the intention of using it on a for-profit venture.

Nobody is violating anything. The code is Apache, the binary is commercial. Matt Holt (Caddy founder) says so himself:

mholt posted:

Yep, these are good questions. The FAQ on the licenses page answers them:

quote:

Which license do I need?
If your company uses official Caddy binaries internally, in production, or distributes Caddy, a commercial license is required. This includes companies that use Caddy for research. The personal license is appropriate for academic research, personal projects, websites that aren't for profit, and development at home.

Is Caddy open source?
Yes, it is. Caddy's source code is licensed under Apache 2.0, which requires attribution and stating changes made to the code when forking it, using it in your own projects, or distributing it. This website distributes official, compiled Caddy binaries, which are licensed differently.

Does this license apply to the source code?
These licenses do NOT apply to the source code. If you use Caddy source code in your work or product, you must give attribution and state all changes. Please email us if you would like to purchase a custom license to the source code, which can waive these requirements!

If I build Caddy from source, which license applies?
The source code is Apache 2.0 licensed. It requires that you give attribution and state changes. Building from source does not give you permission to white-label Caddy in your own work. You will also have to manage Caddy plugins on your own.

It's similar to how vscode and Microsoft Visual Studio Code, OpenJDK and OracleJDK, etc, are licensed. Open source project but commercial finished product.

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
OK. I had to re-download caddy with the specific digitalOcean plugin.

Progress is being made, but I am extremely sleepy and need to call it tonight.

code:
eduquilt.com, [url]www.eduquilt.com[/url] {
  # proxy /couch localhost:5984
  proxy /express localhost:3000
}

couch.eduquilt.com {
  proxy / localhost:5984
}
The proxy /couch -> localhost:5984 on eduquilt.com/couch was doing *something*, but not working. Visiting eduquilt.com/couch returned {"error":"not_found","reason":"Database does not exist."}, as if I'd visited localhost:5984/incorrectDatabaseName instead of localhost:5984.
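
(I think that's just the /couch prefix being passed through to CouchDB, which then reads "couch" as a database name - it looks like the proxy directive has a `without` subdirective to strip the prefix, something like:)

code:
proxy /couch localhost:5984 {
  without /couch
}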

The proxy set up at couch.eduquilt.com is working correctly though, after I set up an A record for 'couch' on Digital Ocean.


What remains is that the www subdomain isn't doing anything. No idea why. Maybe I should pull it to another line?

e: SomethingAwful is wrapping those [url] tags in my caddy file...
e2: the www subdomain is working now. Huzzah.
fake edit3: please don't do stuff on the site - it isn't actually meant to be live right now, but I need to get things in order for a closed beta of sorts

Newf fucked around with this message at 09:17 on Apr 22, 2019

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Methanar posted:

Don't do this. Give caddy the capability to bind to 443 instead


sudo setcap cap_net_bind_service+ep $(which caddy)
Yeah this is the Correct way to do things.

Newf posted:

fake edit3: please don't do stuff on the site - it isn't actually meant to be live right now, but I need to get things in order for a closed beta of sorts
Even if goons are nice and respect this, other people won't. Malicious folks are crawling the web all the time looking for poorly-secured installations to break into and leverage as bots or bitcoin miners or any manner of nefarious things. If you want things locked down while you're sorting things out, your best bet is to (temporarily) IP-filter traffic to your own IPs. You can do this at the firewall level, or if you compiled Caddy with the ipfilter plugin then just add a block to your Caddyfile:

code:
example.com {
  ipfilter / {
    rule allow
    ip 1.2.3.4/32
    ip 5.6.7.8/32
  }
}
For more advanced protection, you might want to put the site behind CloudFlare (which has a free tier). CloudFlare has a web-application-firewall feature to block a lot of malicious traffic. The idea is that your site's DNS points at the CloudFlare service, which is then configured to proxy safe traffic to your web host. You can then IP-restrict your webhost traffic so that it only accepts requests originating from CloudFlare. And I think you can go one better; CloudFlare has a feature (dunno if it's free) where they effectively set up a VPN between them and your webhost, so you never have to directly expose your webhost to the internet at all.

Nomnom Cookie
Aug 30, 2009



Am I reading these posts correctly? A database is being exposed to the internet? Don't do that. ssh -L 5984:localhost:5984 my.cloud.butt or something

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

chutwig posted:

I would recommend spending some time getting to know Packer so that you can build AMIs. There's a temptation to use Chef to both lay down your baseline and then configure instance-specific settings, which is problematic because it takes more time to run and you need to add in a reboot for security patches/kernel updates to take effect, which is annoying to deal with. Moving the baseline setup into Packer means you spend less time converging, don't need to reboot, minimize the complexity of the cookbooks, and have a known good AMI built on a schedule that you can use across the rest of your infrastructure.

Are you planning to use Chef in client-server mode or chef-zero mode? If client-server, have you considered how you plan to handle key distribution or reaping dead nodes from the Chef server? There are valid reasons to not use Ansible or SaltStack, but Jenkins probably isn't the right tool to replace them.
We're not planning to build a separate AMI per app. We've got two known-good patched and security-vetted AMIs generated through a different process: one for docker (contains a much larger partition for the image cache) and one for general use - both run chef-client on boot. We have several apps that are already using chef (server) to provision, and so we'll be leveraging previously written cookbooks for those apps. We're planning to use only terraform plans to build out hosts for apps, and so the decommissioning of both dns and chef node+client will happen in the plan itself (through the chef provider?). Most plans will either build a box and launch a local docker container through the docker provider or run the desired chef app cookbook from the runlist. My tentative plan is to have a separate testing tfvars file that we can hook into a ci pipeline in our test space for testing pull requests, and have the chef cookbook repo and the terraform repo both execute the same test via a Jenkinsfile in the root of the terraform repo.

Bhodi fucked around with this message at 20:43 on Apr 22, 2019

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

Kevin Mitnick P.E. posted:

Am I reading these posts correctly? A database is being exposed to the internet? Don't do that. ssh -L 5984:localhost:5984 my.cloud.butt or something

Newf posted:

I'm likely to betray a lot of ignorance on these topics as I explain what I'm thinking about. All of this is my first foray into online deployment of anything other than a static site.

Maybe you're reading it right!

I've since turned off public access to :5984, so that db access is proxied through Caddy.

Maybe also worth pointing out is that user auth is directly built into couch db - the database itself is my auth layer and manages user accounts.


Docjowles
Apr 9, 2009

You know how everyone's name, address, social security number, etc, has been leaked in the last couple years? A lot of that was because of dumbasses putting databases directly on public IPs with no authentication.

I am pointing this out as a positive thing. You are now better at operations and security than a large swathe of the IT workforce. Be proud! Also, be horrified that the people in charge of our PII can't even meet this standard. But mostly proud.

Nomnom Cookie
Aug 30, 2009



CouchDB’s authorization mechanism seems quite limited, but I can imagine a case where a logged in user is meant to have global write access. The idea of putting a DB on the internet still gives me the willies, though. (Reverse proxy is close enough to count.)


Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
level up and put this in a vpc, only expose load balancer to www
