|
Mr Shiny Pants posted:So something like: rsync the required files or updates and update from the local files?
|
# ? Mar 11, 2019 19:02 |
|
Vulture Culture posted:This could really be as simple as a systemd unit that downloads a compose file and runs it, if you're capable of running your reporting out of band. Ok, I can figure this out. We have no CM tooling currently available, is there something in Salt or something else that can schedule connectivity? Like check for updates at some interval? Or wait for the network to become available?
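For what it's worth, that unit could be as small as something like this, paired with a timer so missed connectivity windows just retry on the next tick (URL, paths, and unit names are all made up; this is a sketch, not a tested config):

```ini
# stack-update.service -- sketch; fetch the latest compose file, then bring
# the stack up. Ordering after network-online means it waits for a link.
[Unit]
Description=Fetch latest compose file and (re)start the stack
Wants=network-online.target
After=network-online.target docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/curl -fsSL https://config.example.com/docker-compose.yml -o /opt/stack/docker-compose.yml
ExecStart=/usr/bin/docker-compose -f /opt/stack/docker-compose.yml up -d

# stack-update.timer -- separate file; fires shortly after boot and then
# every 15 minutes, so a window with no connectivity just means the next
# run picks up the new config.
[Unit]
Description=Periodic stack update

[Timer]
OnBootSec=2min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target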
|
# ? Mar 11, 2019 19:56 |
|
Mr Shiny Pants posted:Ok, I can figure this out. We have no CM tooling currently available, is there something in Salt or something else that can schedule connectivity?
|
# ? Mar 11, 2019 21:04 |
|
Do you have control over the connection and purposefully bring it down to limit costs, or is it a random as-available thing? As much as I wouldn't recommend Chef to anyone these days, this sounds like something Chef is more suited for than Ansible, if you wanted to use an existing CM tool. Since Chef runs as an agent in a 'pull' model, each client would hit the Chef server whenever they had connectivity to grab any new configuration. Looks like the Docker cookbook can manage containers as well, but I think I'd separate system configuration and container orchestration and update a compose file like VC said. If you have the budget for it, I know first-hand that DC/OS runs on cruise ships with sporadic satellite connections. I have strong opinions on DC/OS and simple container orchestration wouldn't be a use case I'd pick it for, but I know it's successfully solving the problem you are trying to solve.
|
# ? Mar 12, 2019 14:53 |
|
Erwin posted:Do you have control over the connection and purposefully bring it down to limit costs, or is it a random as-available thing? As much as I wouldn't recommend Chef to anyone these days, this sounds like something Chef is more suited for than Ansible, if you wanted to use an existing CM tool. Since Chef runs as an agent in a 'pull' model, each client would hit the Chef server whenever they had connectivity to grab any new configuration. Looks like the Docker cookbook can manage containers as well, but I think I'd separate system configuration and container orchestration and update a compose file like VC said. I will take a look at this, thanks for the suggestions guys. It is brought down to limit costs, so we have control over it. How dumb would it be to run Jenkins agents remotely and have them do the hard work? In a slave mode they do almost exactly what I want.
|
# ? Mar 12, 2019 16:33 |
|
Mr Shiny Pants posted:How dumb would it be to run Jenkins agents remotely and have them do the hard work? In a slave mode they do almost exactly what I want. Gross. Besides, Jenkins agents are meant to run pipelines that start and finish, not indefinite services. Or do you mean have a Jenkins agent at each site that orchestrates other servers?
|
# ? Mar 12, 2019 17:00 |
|
OK, wondering if this workflow is possible in GitHub (or, I guess more specifically, if it's possible to enforce it with protected branches) and also if it's sane. We've got a number of runbooks and scripts that do stuff like build new machines or delete machines or all sorts of things like that. It can be difficult to test some of these with Pester (at least with our limited Pester skills) so the changes may require manual testing. What I'd like to do is set up CI with protected branches so the workflow looks like this:
Am I crazy? Am I sane? Am I an idiot?
|
# ? Mar 12, 2019 23:53 |
|
The first two bullet points work fine with GitHub; you're describing tags, and your test env is labeling all your commits to your feature branches with whether they passed your tests. You can absolutely restrict pull requests to only tags. Depending on the frequency of commits / size of your dev team you may not need the two-tiered approach that you laid out, and if you do it's more commonly implemented as unit testing feature branches (your Test), then merging into dev if passing and then periodically tagging dev branch commits for integration testing (this would be manual in your case, sometimes it's weekly or daily or whatever) as a prerequisite for merging a release into master. If it ends up failing, you end up just doing an additional commit into dev from your feature branch and kick the test off again - it's not really necessary to track it back to the commit of the feature branch like you're suggesting. The benefit of doing it this way is that you can test multiple feature commits at the same time on a periodic basis, it conveniently follows common business requirements like sprints and quarterly releases, and if you have REALLY long tests you can tune the auto testing to fit them instead of having them queue behind each other as devs frantically try and get their features in at 3pm on a Friday before the end of the sprint. Bhodi fucked around with this message at 03:23 on Mar 13, 2019 |
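The tag-gating flow can be sketched with plain git (repo layout and the tag name are illustrative; in practice your CI would create the tag after a green run, and the protected branch would only accept merges from such tags):

```shell
# Sketch: after the test suite passes, label the exact commit that was
# tested; a protected branch can then require merges to come from these
# tags. Everything here is a throwaway demo repo.
set -e
repo="$(mktemp -d)"
git init -q "$repo"
cd "$repo"
git config user.email "ci@example.com"
git config user.name "CI"
git commit -q --allow-empty -m "feature work"
# CI would run this step only on a green test run:
git tag -a "ci-passed-$(git rev-parse --short HEAD)" -m "tests green"
git tag --list 'ci-passed-*'
```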
# ? Mar 13, 2019 03:08 |
|
Mr Shiny Pants posted:Let's say I will have some Linux machines that will be deployed around the world with intermittent connectivity ( think satellites ) running docker on some Linux variant. Check out CoreOS and their cloud config files. Basically you pull down a new cloud config file, then reboot the server. The server executes the cloud config file (run these containers with these arguments) and away you go. If the cloud config file fails, it boots from the previous safe config. Maybe set up a cron job to do updates at predicted connectivity periods. This sort of follows the telecom/networking model of "new update, but fail back to the previous version if it doesn't work". CoreOS uses the Omaha protocol to poll for updates, but that is an entire other rabbit hole. TL;DR CoreOS was designed from the ground up to do exactly what you plan on doing
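For reference, a Container Linux cloud-config along those lines looked roughly like this (image and unit names are placeholders; a sketch, not a tested config):

```yaml
#cloud-config
coreos:
  units:
    - name: myapp.service
      command: start
      content: |
        [Unit]
        Description=Run the app container
        After=docker.service
        Requires=docker.service

        [Service]
        Restart=always
        ExecStartPre=-/usr/bin/docker rm -f myapp
        ExecStart=/usr/bin/docker run --name myapp example/image:latest
```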
|
# ? Mar 13, 2019 06:49 |
|
Hadlock posted:Check out CoreOS and their cloud config files I'll have a look, thanks.
|
# ? Mar 14, 2019 06:52 |
|
CoreOS Container Linux is probably not what you want. It's primarily a minimal OS to run docker containers and little else. The double-buffered root is for regular updates of the kernel + the minimal toolset, not the apps installed. And the cloud-config stuff was replaced by Ignition, which is really just a small tool for injecting the necessary custom systemd units and mounting volumes and so on.
|
# ? Mar 14, 2019 07:42 |
|
Docjowles posted:Yeah, it’s that. It spends 15 minutes refreshing the state. And we haven’t even imported all the zones we would have in production yet, lol, this is just a subset for a test. I just wanted to echo that I hated doing anything route53 with Terraform and the ansible -> jinja2 for loop -> cloudformation template approach is managing something like 3k records in <30sec using Route53 RecordSets It looks like Terraform doesn't support the RecordSet resource so my unironic recommendation would be to use the cloudformation terraform resource to push a stack composed of RecordSet objects. Mr Shiny Pants posted:Let's say I will have some Linux machines that will be deployed around the world with intermittent connectivity ( think satellites ) running docker on some Linux variant. Sure, this sounds fine. Definitely go with the compose file though instead of "docker run whatever". If you can distribute the ansible repo (including secrets), I'd suggest tinkering with ansible-pull to see if it works better than push-mode updates. Ansible is a great tool but it's not ~super good at handling hosts that are expected to be consistently unreachable.
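The ansible-pull side can be as small as a cron entry on each remote box (repo URL and playbook name here are hypothetical): ansible-pull clones or updates the repo and runs the playbook locally, and a tick with no connectivity just fails quietly and retries on the next one.

```
# /etc/cron.d/ansible-pull -- sketch; every 15 minutes, pull the config
# repo and apply local.yml against localhost, logging output
*/15 * * * * root ansible-pull -U https://git.example.com/site-config.git -i localhost, local.yml >> /var/log/ansible-pull.log 2>&1
```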
|
# ? Mar 27, 2019 01:08 |
|
Bhodi posted:The first two bullet points work fine with github; you're describing tags, and your test env is labeling all your commits to your feature branches with whether they passed your tests. You can absolutely restrict pull requests to only tags. Depending on the frequency of commits / size of your dev team you may not need the two-tiered approach that you laid out, and if you do it's more commonly implemented as unit testing feature branches (your Test), then merging into dev if passing and then periodically tagging dev branch commits for integration testing (this would be manual in your case, sometimes it's weekly or daily or whatever) as a pre-requisite for merging a release into master. If it ends up failing, you end up just doing an additional commit into dev from your feature branch and kick the test off again - it's not really necessary to track it back to the commit of the feature branch like you're suggesting. We're way more on the ops side than dev side, so we basically have zero formal software development process requirements. And generally the changes we're working on are small enough that only one person is working on them. We don't do "releases" we just push code when we write it. And we've never used tags (should we be?). I'm sure this isn't unique but a lot of our code depends on a ton of other stuff so integration testing requires we have basically a mirror of a lot of our environment, and each runbook needs something radically different. Testing our self service server builds requires a separate form to accept submissions from. Testing code that runs during our server build requires modifications to our server build process. Testing code that updates our inventory requires a bunch of test Google documents, etc etc. 
So it would be nice to have all those environments set up so upon doing something (pushing to a specific test branch) the code gets deployed in whatever way is appropriate to test it, so after that we can fill out the form and submit a server request, or build a server that will run the test code, or modify the test google documents instead of the prod... Maybe we're small enough that I'm overthinking it. Maybe I should just start setting up those test scenarios and set up our deploy automation to start doing deploys when it sees commits to branches other than master.
|
# ? Mar 27, 2019 03:57 |
|
I'm likely to betray a lot of ignorance on these topics as I explain what I'm thinking about. All of this is my first foray into online deployment of anything other than a static site, and I'm overwhelmed with the options in front of me. The components of the project are:

- A vue-cli static site that uses vue-router
- A CouchDB database
- A node express server that receives and responds to requests from the web client, making some DB communication in between

In the future, there will be a background task that forever crunches away at the data in the couchdb. Probably this will be baked into the existing express app. I'm looking to host over https on a DigitalOcean Ubuntu droplet. The website, couchdb, and server should all be served over https. A suggestion I've come across is to use nginx as a 'reverse proxy' for all of these services, so that domain.com/ serves the vue-cli page, domain.com/couch/ routes to the database, and domain.com/api routes to the express server. The advantage here is that only the nginx server itself needs to be configured for https - one such configuration instead of three. Do things 'just work' under such a configuration? E.g. I'm using file inputs on the site to upload attachments to the couch db, and I'm using the client-side pouchdb to sync data sets with the couch db. Vue-cli's deployment guide suggests nginx behind (inside of?) a docker container. Why would they suggest that instead of just giving instructions for nginx? Is it possible for me to include all three of my pieces inside a docker image for a one-line deployment?
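For reference, the single-HTTPS-endpoint layout being described could look roughly like this in nginx (ports and paths are assumptions: CouchDB's default 5984, an express app on 3000, made-up certificate paths):

```nginx
server {
    listen 443 ssl;
    server_name domain.com;
    ssl_certificate     /etc/ssl/fullchain.pem;
    ssl_certificate_key /etc/ssl/privkey.pem;

    # built vue-cli output, with the history-mode vue-router fallback
    location / {
        root /var/www/vue-dist;
        try_files $uri $uri/ /index.html;
    }

    # domain.com/couch/ -> CouchDB
    location /couch/ {
        proxy_pass http://127.0.0.1:5984/;
        proxy_set_header Host $host;
    }

    # domain.com/api/ -> express server
    location /api/ {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
    }
}
```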
|
# ? Apr 21, 2019 21:49 |
|
FISHMANPET posted:We're way more on the ops side than dev side, so we basically have zero formal software development process requirements. And generally the changes we're working on are small enough that only one person is working on them. We don't do "releases" we just push code when we write it. And we've never used tags (should we be?). Tags are good, but only if you care about seeing whether a specific commit passed tests at a glance. Making releases in github does functionally the same thing. For my own stuff, I may have asked this before: anyone have good terraform whitepapers on infra design for multiple environments and app deployment with a CI/CD pipeline? We're building out a new env from scratch and it's been decreed that we're not going to be using ansible and jenkins will be the orchestrator so I need to figure out a way to wedge absolutely everything I can into a terraform git repo including application configuration. Does it even have a templating feature? I'll probably be leveraging our existing chef infra for the hard stuff but woof, it's going to suck to split code like that. Bhodi fucked around with this message at 22:22 on Apr 21, 2019 |
# ? Apr 21, 2019 22:18 |
|
Terraform has some template stuff and I've used it with cloud-init and launch configurations for basic config management type stuff but that's not really what it's designed to do and it'll be extremely frustrating if you ever need to scale out. You'll have better luck using Chef for all of the configuration stuff or even a 3rd party service like AWS SSM, Consul, or Zookeeper.
|
# ? Apr 21, 2019 23:02 |
|
Bhodi posted:For my own stuff, I may have asked this before anyone have good terraform whitepapers on infra design for multiple environments and app deployment with a CI/CD pipeline? We're building out a new env from scratch and it's been decreed that we're not going to be using ansible and jenkins will the the orchestrator so I need to figure out a way to wedge absolutely everything I can into a terraform git repo including application configuration. Does it even have a templating feature? I'll probably be leveraging our existing chef infra for the hard stuff but woof, it's going to suck to split code like that. Work uses terraform with puppet so I'm sure some of the details won't apply, but the basic scheme we have is like this: 1. Terraform template populates userdata with facts (node attributes? Dunno the chef equivalent) for env, role, sub-role, etc. instance_role=${instance_role} is about the limit of our template complexity, which is as it should be. 2. Instance pops up with puppet agent baked into the AMI, classes to apply and all configs are determined by a global/env/role cascade set up in hiera. It's possible to lean more heavily on terraform for CM with its provisioner stuff. It's a terrible idea in almost all cases (we have a terraform module that uses provisioners to bootstrap the puppet master and that's about it for provisioners) but you can use it. But don't use it.
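Step 1 looks roughly like this in Terraform 0.12's templatefile (on 0.11 it'd be a template_file data source); resource names, the template file, and the facts themselves are all illustrative:

```hcl
locals {
  # Render userdata with a couple of role facts; the template only does
  # instance_role=${instance_role}-style substitution, nothing fancier.
  userdata = templatefile("${path.module}/userdata.sh.tpl", {
    instance_role = "reporting"
    environment   = "prod"
  })
}

resource "aws_instance" "node" {
  ami           = var.ami_id
  instance_type = "t3.small"
  user_data     = local.userdata
}
```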
|
# ? Apr 22, 2019 01:55 |
|
Kevin Mitnick P.E. posted:Work uses terraform with puppet so I'm sure some of the details won't apply, but the basic scheme we have is like this: The Terraform provisioner stuff is totally fine but you should consider adopting the following completely ridiculous pattern because of quirks in Terraform's flow when things go wrong: do not attach your provisioner directly to an instance resource. For each instance you are provisioning, create a null_resource to host the instance's provisioner, and feed it the host of the SSH connection directly. This way, you can taint and untaint nodes and provisioners separately, which will save your rear end if something goes unexpectedly wrong during provisioning of a big dumb ugly set of resources.
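Concretely, the shape being described looks something like this (resource names, SSH user, and the bootstrap script are made up):

```hcl
resource "aws_instance" "node" {
  ami           = var.ami_id
  instance_type = "t3.small"
}

# The provisioner lives on a null_resource keyed to the instance, so you
# can `terraform taint null_resource.provision_node` and re-run just the
# provisioning step without destroying the instance itself.
resource "null_resource" "provision_node" {
  triggers = {
    instance_id = aws_instance.node.id
  }

  connection {
    type = "ssh"
    host = aws_instance.node.private_ip
    user = "ec2-user"
  }

  provisioner "remote-exec" {
    inline = ["sudo /opt/bootstrap.sh"]
  }
}
```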
|
# ? Apr 22, 2019 02:05 |
|
Newf posted:A suggestion I've come across is to use nginx as a 'reverse proxy' for all of these services. I recommend Caddy Server instead of Nginx, because Caddy out-of-the-box automatically provisions and renews your HTTPS certificates via LetsEncrypt. (Nginx has a plugin to do it, but it's extra work to install. On the flipside, Caddy doesn't have any binary distributables so it must be built from Go source, but that's literally 2 commands). Newf posted:Vue-cli's deployment guide suggests nginx behind (inside of?) a docker container. Why would they suggest that instead of just giving instructions for nginx? Newf posted:Is it possible for me to included all three of my pieces inside a docker image for a one-line deployment? Yes, but that's not best practice. Generally each docker container should only contain one component. That way you can upgrade/restart/scale up each component individually while not affecting other components.
|
# ? Apr 22, 2019 02:09 |
|
Bhodi posted:For my own stuff, I may have asked this before anyone have good terraform whitepapers on infra design for multiple environments and app deployment with a CI/CD pipeline? We're building out a new env from scratch and it's been decreed that we're not going to be using ansible and jenkins will the the orchestrator so I need to figure out a way to wedge absolutely everything I can into a terraform git repo including application configuration. Does it even have a templating feature? I'll probably be leveraging our existing chef infra for the hard stuff but woof, it's going to suck to split code like that. I would recommend spending some time getting to know Packer so that you can build AMIs. There's a temptation to use Chef to both lay down your baseline and then configure instance-specific settings, which is problematic because it takes more time to run and you need to add in a reboot for security patches/kernel updates to take effect, which is annoying to deal with. Moving the baseline setup into Packer means you spend less time converging, don't need to reboot, minimize the complexity of the cookbooks, and have a known good AMI built on a schedule that you can use across the rest of your infrastructure. Are you planning to use Chef in client-server mode or chef-zero mode? If client-server, have you considered how you plan to handle key distribution or reaping dead nodes from the Chef server? There are valid reasons to not use Ansible or SaltStack, but Jenkins probably isn't the right tool to replace them.
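A minimal Packer template for that baseline AMI might look like this (region, the source AMI filter, and the inline steps are placeholders; real baselines would run your hardening/agent install here):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami_filter": {
      "filters": { "name": "ubuntu/images/*ubuntu-bionic-18.04-amd64-server-*" },
      "owners": ["099720109477"],
      "most_recent": true
    },
    "instance_type": "t3.small",
    "ssh_username": "ubuntu",
    "ami_name": "base-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get -y dist-upgrade"
    ]
  }]
}
```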
|
# ? Apr 22, 2019 02:35 |
|
When applying for DevOps/SRE jobs, how do you prepare? Have you ever made a project portfolio (i.e. here is a web app deployed in AWS and here is its CI/CD pipeline)?
|
# ? Apr 22, 2019 03:26 |
|
minato posted:I recommend Caddy Server instead of Nginx, because Caddy out-of-the-box automatically provisions and renews your HTTPS certificates via LetsEncrypt. (Nginx has a plugin to do it, but it's extra work to install. On the flipside, Caddy doesn't have any binary distributables so it must be built from Go source, but that's literally 2 commands). I'm intrigued. I can't overstate how intimidated I am working in this domain. The config files here look a lot easier to grok than with nginx. By 2 commands, you mean `curl`ing the install script and running it with bash? https://getcaddy.com seems to have worked for me. minato posted:Yes, but that's not best practice. Generally each docker container should only contain one component. That way you can upgrade/restart/scale up each component individually while not affecting other components. Say a project has three components that are highly coupled and the work of a single author. Does it start to look like a better practice then?
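For the layout discussed upthread, a Caddyfile (v1 syntax, current at the time) could be about this short; hostname, ports, and the site root are assumptions:

```
domain.com {
    root /var/www/vue-dist

    proxy /api localhost:3000 {
        transparent
    }

    proxy /couch localhost:5984 {
        transparent
    }
}
```

Certificates for the listed hostnames get provisioned automatically; there's no separate TLS config to write in the common case.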
|
# ? Apr 22, 2019 05:05 |
|
Newf posted:Say a project has three components that are highly coupled and the work of a single author. Does it start to look like a better practice then? Not really. If you want to go down the road of using docker, then use it properly. If you don't want to use docker properly, don't shoehorn things into docker. I'm being bitchy about this because my work life is an unending nightmare of tools awkwardly and improperly being shoehorned into the stack for no reason other than to say that tool is in use. New Yorp New Yorp fucked around with this message at 05:31 on Apr 22, 2019 |
# ? Apr 22, 2019 05:29 |
|
New Yorp New Yorp posted:Not really. If you want to go down the road of using docker, then use it properly. If you don't want to use docker properly, don't shoehorn things into docker. Trust me, I'm going to have as few moving parts as possible. Just asking for the security of being shut down.
|
# ? Apr 22, 2019 05:47 |
|
+1 for Caddy. It's one of the most hilariously easy to configure pieces of software I've ever used. About the official binaries, they are only free for non-commercial use. If you want to use it commercially without paying $25/instance/mo. (but please see if your company can afford it, it's made by two students from the U.S. Midwest and IMO they deserve it) you can either compile it yourself or use the unofficial docker image which is automatically compiled from the Apache-licensed source code.
|
# ? Apr 22, 2019 05:48 |
|
Newf posted:
Not what you were asking for exactly, but I would run the jwilder/nginx reverse proxy container and volume mount in the ssl cert. Then run the three services as containers and add the appropriate -e SITE=website.com to the front-facing website container (so that nginx detects it and writes the correct rules for it), and then configure all three services to talk to each other over localhost 0.0.0.0
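As a sketch, that wiring in a compose file might look like this (image names and cert paths are made up; note the upstream jwilder/nginx-proxy image keys off a VIRTUAL_HOST environment variable):

```yaml
version: "3"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # the proxy watches the docker socket and rewrites its config as
      # containers come and go
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro

  web:
    image: myorg/vue-site
    environment:
      - VIRTUAL_HOST=website.com

  api:
    image: myorg/express-api

  couch:
    image: couchdb:2
```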
|
# ? Apr 22, 2019 05:55 |
|
Yeah I'd try to get this working without docker first. That solves a problem you don't have yet, and if you're not familiar with it then it's just gonna introduce more headaches.
|
# ? Apr 22, 2019 05:56 |
|
NihilCredo posted:+1 for Caddy. It's one of the most hilariously easy to configure pieces of software I've ever used. Yes, I've noticed the licensing fees. This is a solo side-project. I do have commercialization in mind but not any time in the next six months. Currently in my DigitalOcean console: pre:
Serving HTTPS on port 443 https://eduquilt.com
Serving HTTPS on port 80 https://eduquilt.com
code:
Would Caddy know / report an error if it didn't have access to those ports? Or could it fail silently? edit: ufw firewall is running on my server. ufw status returned: pre:
OpenSSH
5984
443
OpenSSH (v6)
5984 (v6)
443 (v6)
Newf fucked around with this message at 07:19 on Apr 22, 2019 |
# ? Apr 22, 2019 06:16 |
|
You'll need to run Caddy as root so it can bind to 80/443. It will log an error if it can't bind to the ports or auto-provision an SSL certificate. You may need to open 80/443 on the host's firewall.
|
# ? Apr 22, 2019 06:43 |
|
minato posted:You'll need to run Caddy as root so it can bind to 80/443. It will log an error if it can't bind to the ports or auto-provision an SSL certificate. You may need to open 80/443 on the host's firewall. Don't do this. Give caddy the capability to bind to 443 instead: sudo setcap cap_net_bind_service+ep $(which caddy)
|
# ? Apr 22, 2019 06:55 |
|
You probably shouldn't be recommending people willingly violate Caddy's license, especially if the pretense is there that someone has the intention of using it in a for-profit venture.
|
# ? Apr 22, 2019 07:30 |
|
SeaborneClink posted:You probably shouldn't be recommending people willing violate Caddy's license, especially if the pretense is there that someone has the intention of using it on a for profit venture. Nobody is violating anything. The code is Apache, the binary is commercial. Matt Holt (Caddy founder) says so himself: mholt posted:Yep, these are good questions. The FAQ on the licenses page answers them:
|
# ? Apr 22, 2019 08:35 |
|
OK. I had to re-download caddy with the specific DigitalOcean plugin. Progress is being made, but I am extremely sleepy and need to call it tonight. code:
The proxy set up at couch.eduquilt.com is working correctly though, after I set up an A record for 'couch' on Digital Ocean. What remains is that the www subdomain isn't doing anything. No idea why. Maybe I should pull it to another line? e: SomethingAwful is wrapping those [url] tags in my caddy file... e2: the www subdomain is working now. Huzzah. fake edit3: please don't do stuff on the site - it isn't actually meant to be live right now, but I need to get things in order for a closed beta of sorts Newf fucked around with this message at 09:17 on Apr 22, 2019 |
# ? Apr 22, 2019 09:09 |
|
Methanar posted:Don't do this. Give caddy the capability to bind to 443 instead Newf posted:fake edit3: please don't do stuff on the site - it isn't actually meant to be live right now, but I need to get things in order for a closed beta of sorts code:
|
# ? Apr 22, 2019 15:45 |
|
Am I reading these posts correctly? A database is being exposed to the internet? Don't do that. ssh -L 5984:localhost:5984 my.cloud.butt or something
|
# ? Apr 22, 2019 19:46 |
|
chutwig posted:I would recommend spending some time getting to know Packer so that you can build AMIs. There's a temptation to use Chef to both lay down your baseline and then configure instance-specific settings, which is problematic because it takes more time to run and you need to add in a reboot for security patches/kernel updates to take effect, which is annoying to deal with. Moving the baseline setup into Packer means you spend less time converging, don't need to reboot, minimize the complexity of the cookbooks, and have a known good AMI built on a schedule that you can use across the rest of your infrastructure. Bhodi fucked around with this message at 20:43 on Apr 22, 2019 |
# ? Apr 22, 2019 20:39 |
|
Kevin Mitnick P.E. posted:Am I reading these posts correctly? A database is being exposed to the internet? Don't do that. ssh -L 5984:localhost:5984 my.cloud.butt or something Newf posted:I'm likely to betray a lot of ignorance on these topics as I explain what I'm thinking about. All of this is my first foray into online deployment of anything other than a static site. Maybe you're reading it right! I've since turned off public access to :5984, so that db access is proxied through Caddy. Maybe also worth pointing out is that user auth is directly built into couch db - the database itself is my auth layer and manages user accounts.
|
# ? Apr 23, 2019 01:34 |
|
You know how everyone's name, address, social security number, etc, has been leaked in the last couple years? A lot of that was because of dumbasses putting databases directly on public IPs with no authentication. I am pointing this out as a positive thing. You are now better at operations and security than a large swathe of the IT workforce. Be proud! Also, be horrified that the people in charge of our PII can't even meet this standard. But mostly proud.
|
# ? Apr 23, 2019 07:15 |
|
CouchDB’s authorization mechanism seems quite limited, but I can imagine a case where a logged in user is meant to have global write access. The idea of putting a DB on the internet still gives me the willies, though. (Reverse proxy is close enough to count.)
|
# ? Apr 23, 2019 08:04 |
|
|
level up and put this in a vpc, only expose load balancer to www
|
# ? Apr 23, 2019 11:21 |