Rocko Bonaparte posted:Well, I managed to get vagrant to boot up VMs for testing our code individually, but I'm trying to figure out how to combine all this so I launch all test VMs simultaneously. Vagrant has a multi-vm mode where I just specify all the different configurations in one Vagrantfile. I tried this and it appeared to run them serially. I put my test in the provisioning logic, so I'm not too surprised something like that might happen. However, I wanted to double-check if that's the case. Generally, does vagrant provision boxes sequentially? You may want to look into Test Kitchen: https://kitchen.ci/ It's built by the Chef guys but I don't think you necessarily have to use Chef with it. It works great for spinning up a bunch of VMs in parallel, running stuff on them, and then tearing them down.
|
|
# ? Aug 2, 2019 20:47 |
|
I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow?
|
# ? Aug 14, 2019 17:39 |
|
rt4 posted:I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow?
|
# ? Aug 14, 2019 22:20 |
|
rt4 posted:I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow? They're all bad in their own unique ways, sorry
|
# ? Aug 14, 2019 22:22 |
|
rt4 posted:I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow? What is it you don't like about Ruby and/or what problems are you having with Puppet's performance? Also what are you using Puppet for? If you give some more details then people might be able to point you in the direction of a better tool.
|
# ? Aug 14, 2019 22:35 |
|
CFEngine 2, Salt, and Ansible are all options to avoid Ruby. I use Salt these days over Ansible when it comes to configuration management. Ansible can be sped up with modes that basically turn it into another agent-based management system, but trying to upgrade playbooks to support new features without yet again adjusting my own playbooks got old. Also, Ansible doesn't really have anything like the Reactor system that Salt has, and deploying StackStorm alongside Ansible for configuration management seems like a hack.
|
# ? Aug 14, 2019 23:08 |
|
rt4 posted:I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow? a 5,000 line, hand-crafted bash script called /usr/local/bin/fix_everything.sh that you run on an hourly cron
|
# ? Aug 14, 2019 23:53 |
|
necrobobsledder posted:cfEngine2, Salt, and Ansible are all options to avoid Ruby. I use Salt these days over Ansible when it comes to configuration management. Ansible can be sped up with modes that turn it into another agent-based management system basically, but just trying to upgrade playbooks to support new features without dealing with adjusting my own playbooks yet again got old. Also, Ansible doesn't really have anything like the Reactor system that Salt has and deploying Stackstorm alongside Ansible for configuration management seems like a hack. Same w/r/t ansible vs salt. I've fallen in love with salt reactor, help
|
# ? Aug 15, 2019 17:32 |
|
Hi thread. Not sure whether this is the right place to ask, I checked over in the virtualization thread and was directed here. I have a scenario where I would like to provide three similar Linux environments for development:
And a nice way to keep everything in sync. Surely I'm not the first guy to think of this. Please enlighten me.
|
# ? Aug 16, 2019 12:31 |
|
Vagrant with the ansible provisioner is the simplest toolchain that can do everything you need here, IMO, and is a very common choice for setting up interactive development environments.
|
# ? Aug 16, 2019 16:19 |
|
I'm getting really fed up with declarative poo poo for systems management and just want to go back to procedural. Things really do run in cycles, don't they? We're back to fancy shell scripts.
|
# ? Aug 22, 2019 03:13 |
|
IMO declarative is only good when the dependencies between objects are too complex to be reasonably understood, or when there are so many objects that writing procedural logic becomes too error-prone. Gaining experience with a system mitigates both of these concerns, so it makes sense to me that people begin to prefer procedural tooling over time. It does highlight, though, that taking care in your tooling choices is critical to sanely running long-term infrastructure.

Going back to Terraform vs Ansible: it's not impossible to write a declarative tool that gives you the level of introspection and control over the graph walk that Ansible gives you by virtue of being a procedural tool, the features are just not present yet. Similarly, it's definitely not impossible to write a procedural tool (say, a script) that is bullshit garbage, doesn't log anywhere, requires you to step through with a debugger, etc. It's important to choose tools that let you do things, definitely, but it's also important to consider what percentage of things that are possible with a tool are actually shipped features on it.

I think a fundamental of imperative tooling that is often forgotten is that it can also be used to organize, link, and run instances of declarative tooling. In my experience it's also very common for people to leap at declarative tools, unfortunately, as a substitute for actually understanding the underlying provider. This tends to work about as well as you would expect. An example that comes up a lot for me at work is "you must make sure the subnet exists before you can create an ec2 instance in it" -- this is a feature, in my opinion, not a bug.
|
# ? Aug 22, 2019 15:07 |
|
"Oh hey I haven't looked at the CI thread in a while. Maybe I should close it since I've just about finished up my prototype..."fletcher posted:You may want to look into test kitchen: https://kitchen.ci/ Gaaaaaaaaaaah why did I have to see this now?
|
# ? Aug 22, 2019 22:24 |
|
Helianthus Annuus posted:a 5,000 line, hand-crafted bash script called /usr/local/bin/fix_everything.sh that you run on an hourly cron don't forget > /dev/null 2>&1
|
# ? Aug 24, 2019 21:22 |
|
Anyone else going to be at Hashiconf next week in Seattle? I have other plans afterward besides happy hour but I can certainly hang out with some of you fine folks at lunch instead of awkwardly hanging out alone.
|
# ? Sep 3, 2019 01:11 |
|
necrobobsledder posted:Anyone else going to be at Hashiconf next week in Seattle? I have other plans afterward besides happy hour but I can certainly hang out with some of you fine folks at lunch instead of awkwardly hanging out alone. Things I miss from being a block from the convention center... random lunches with Goons... I'm down in Renton now, about 15mi to the south, and wish I could have a drink with ya.
|
# ? Sep 3, 2019 03:00 |
|
For those using docker, how do you handle running a container with arbitrary UID/GIDs? I see run/compose lets you specify a user to execute as, but that doesn't address file permissions inside the container at all. Due to security requirements we've got completely different dev/stg/prd environments, each with its own AD domain. Additionally there are images we'd like to be able to share with different departments, but which will have a different user/group running in them. We'd like to avoid having to build an image per env + per user, but do need to be able to control who it's running as due to external mounts. I've been looking around for a solution that isn't essentially just run as root, chown everything, downgrade to a less privileged user, exec. I must be missing something because this seems like a fairly obvious thing you might need to do, but I've been pulling my hair out trying to find an answer. PBS fucked around with this message at 02:45 on Sep 5, 2019 |
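For reference, the "run as root, chown everything, downgrade, exec" pattern described above can be sketched in a few lines. This is just an illustration of the idea, assuming a POSIX filesystem; the function names are made up, and real entrypoints usually do this in shell or with purpose-built tools like fixuid, which are far more efficient:

```python
import os

def fix_tree_ownership(root, uid, gid):
    """Recursively chown a tree to the target uid/gid, skipping entries
    that already match (roughly what s6's fix-attrs script does)."""
    for dirpath, dirnames, filenames in os.walk(root):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (uid, gid):
                os.lchown(path, uid, gid)

def drop_privileges(uid, gid):
    """Give up root before exec'ing the real process; setgid must
    happen before setuid or the gid change will fail."""
    os.setgid(gid)
    os.setuid(uid)
```

An entrypoint built on this would fix ownership on the mounted volumes, drop privileges, and then `os.execv()` the actual workload.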
# ? Sep 5, 2019 02:43 |
|
PBS posted:For those using docker, how do you handle running a container with arbitrary UID/GIDs? I see run/compose lets you specify a user to execute as, but that doesn't address file permissions inside the container at all.
|
# ? Sep 5, 2019 03:14 |
|
Vulture Culture posted:I don't know what filesystem you're running against or what your permissions look like, but could something like fixuid help you out here as opposed to dynamically chowning the permissions on the mount? Taking a look at that now, but to answer your question: the file system is NFSv4, permissions are standard POSIX with AD users/groups with unique IDs. OS is CentOS/RHEL 7.5, using an LDAP provider with sssd. Edit: That looks somewhat similar to what I'm already doing, just a cleaner way of doing it. Right now I'm using the s6-overlay and a script it has called fix-attrs. This is roughly equivalent to chowning everything, but not necessarily precisely the same. It's worth a shot though given it lets me set a user other than root. PBS fucked around with this message at 03:42 on Sep 5, 2019 |
# ? Sep 5, 2019 03:22 |
|
I'm also using s6-overlay for this. Will look into this solution too, thanks for sharing! We're moving away from NFS mounted volumes for persistent data entirely though. We've had so many issues with it over the last 2 years.
|
# ? Sep 5, 2019 05:07 |
|
LochNessMonster posted:I知 also using s6-overlay for this. Will look into this solution too, thanks for sharing! Yeah it's a bear, I remember it being a nightmare in my home lab, but I finally got it all working and it's Just Worked ever since. This is just for our swarm clusters which run things we can't put in kubernetes for reasons that are too sad for me to try to explain, nfs is really the only shared file system available to them atm. It's really staggering when I look back and see how much time I've wasted working around all our snowflake requirements or just general dumb practices we're dragged along with.
|
# ? Sep 5, 2019 05:19 |
|
PBS posted:Yeah it's a bear, I remember it being a nightmare in my home lab, but I finally got it all working and it's Just Worked ever since. I'm also using it on Swarm because, as you said, there is no alternative. I think the majority of my issues have been with NFS. Our decision to disable the routing mesh and run our own ingress service is a close second though. In hindsight we should've picked K8s over Swarm, but 2-3 years back it was more of a coin toss.
|
# ? Sep 5, 2019 05:47 |
|
I bet on K8s over Swarm back in 2014 mostly because I saw that most shops that would benefit are big behemoths that had been burned on dumpster fires like OpenStack, and Swarm seemed too simplistic by comparison. More complex doesn't mean better obviously, but big and complicated seems to get the most funding in a marketplace over the simplest solutions like Nomad.
|
# ? Sep 5, 2019 14:00 |
|
I'm trying to set up a workflow where an ECS task is kicked off to process some data when it's uploaded to S3. Is this workflow still relevant: https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/ ? Is the SQS portion necessary for passing around the object/filename, or is there something simpler I could do with event logs?
|
# ? Sep 5, 2019 15:32 |
|
tortilla_chip posted:I'm trying to setup a workflow where an ECS task is kicked off to process some data when it's uploaded to S3. Is this workflow still relevant https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/ ? Is the SQS portion necessary for passing around the object/filename, or is there something simpler I could do with event logs? I think you can accomplish it in a more straightforward way using cloudwatch events now: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-ECS.html
|
# ? Sep 5, 2019 15:41 |
|
Thanks! This appears to be what I'm looking for. Still not quite clear on how to reference the object name from the event and pass that in as an override or environment variable to the task, but I'll keep digging.
|
# ? Sep 5, 2019 16:08 |
|
You might actually be able to dig up the event structure for s3:PutObject in the documentation, but if you can't, you can instead point your event rule at a CloudWatch Logs group first and then just trigger the event a few times to get some examples. That's usually what I do when I'm struggling to find something in the documentation, anyway.
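As a concrete sketch of referencing the object name: the documented S3 event notification format puts the bucket and key under `Records[0].s3`. A hedged Python example of turning that into ECS container overrides (the container name and environment variable names are placeholders, and the actual `ecs.run_task` wiring is omitted):

```python
def build_overrides(event, container_name="processor"):
    """Extract bucket/key from an S3 notification event and shape them
    into the containerOverrides structure that an ECS RunTask call
    accepts. 'processor' is a hypothetical container name; use whatever
    your task definition calls the container."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {
        "containerOverrides": [{
            "name": container_name,
            "environment": [
                {"name": "S3_BUCKET", "value": bucket},
                {"name": "S3_KEY", "value": key},
            ],
        }]
    }
```

The processing container then reads `S3_BUCKET`/`S3_KEY` from its environment at startup.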
|
# ? Sep 5, 2019 18:05 |
|
I'm finally moving my platform from a trainwreck MSP to AWS and that means also migrating some services that are not mine. To do that I'm creating an EKS cluster per project which needs to be provisioned by Terraform. Any configuration will be done by Ansible and code deployment by Jenkins. This means that my IaC configs need to be reusable for other projects than my own, so instead of having it in my application's codebase I'm creating a separate IaC repo. I was wondering if there are any standards / best practices on how to structure the code? I was thinking something along the lines of code:
LochNessMonster fucked around with this message at 07:02 on Sep 13, 2019 |
# ? Sep 12, 2019 18:32 |
|
My personal preference would be:code:
|
# ? Sep 12, 2019 19:03 |
|
12 rats tied together posted:you probably don't need to run terraform all the time, so it shouldn't run unless requested with --tags terraform. We actually like running terraform every time, just to hit plan with '-detailed-exitcode' set so we can kill the build when drift is detected. It's a nice "wtf" infrastructure linter.
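For reference, `terraform plan -detailed-exitcode` returns 0 for no changes, 1 for an error, and 2 for succeeded-with-changes, which is what makes it usable as a drift detector. A rough Python sketch of that kind of CI check (the extra invocation flags are illustrative):

```python
import subprocess

# Documented -detailed-exitcode meanings:
# 0 = no changes, 1 = error, 2 = succeeded with changes (drift).
def classify_plan(returncode):
    return {0: "clean", 1: "error", 2: "drift"}.get(returncode, "unknown")

def check_drift(workdir):
    """Run a plan purely to detect drift; a CI job would fail the
    build on anything but a clean plan."""
    proc = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
    )
    return classify_plan(proc.returncode)
```

A CI wrapper would call `check_drift` and fail the build whenever the result isn't `"clean"`.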
|
# ? Sep 12, 2019 19:30 |
|
12 rats tied together posted:My personal preference would be: How do you differentiate between dev/test/uat/prod environments? Let ansible take care of that based on the inventory / group vars which are used to deploy this?
|
# ? Sep 13, 2019 07:00 |
|
JehovahsWetness posted:We actually like running terraform every time, just to hit plan with '-detailed-exitcode' set so we can kill the build when drift is detected. It's a nice "wtf" infrastructure linter. That's totally fair. If you're having ansible run terraform for you I believe you need to force the task to be in check_mode (by appending check_mode: yes to it) in order to have it run terraform plan for you. You should still be able to register the tasks, though, and then trigger an assertion suite on every playbook run. Triggering assertion suites is a super useful pattern in general! LochNessMonster posted:How do you differentiate between dev/test/uat/prod environments? Let ansible take care of that based on the inventory / group vars which are used to deploy this? Yeah, that is correct. There are basically 2 high-level ways you can approach multi-environments in ansible but they both hinge on the inventory being different in each environment. The first, and probably simplest, one would be to maintain a playbook for each different environment. So, you'd have prod-project-1, test-project-1, staging-project-1, etc. I usually like to leave a comment at the top of these playbooks indicating what inventory should be used to run it. There's a fair amount of copy paste involved in this approach but it is definitely the most intuitive and I've seen that it's easiest to get other people actually involved when the project looks like this. They usually end up wanting to remove the duplicate code eventually anyway, which doubles as a nice learning exercise. The second would be to have just a single playbook per project and ensuring that your playbooks can always run in every supported environment. This is definitely ideal, but take care that your playbook doesn't turn into a rat's nest of conditionally executed tasks. Another gotcha is that ansible group_vars are great but they are hyper-focused on which hosts your play is targeting. 
For managing cloud orchestration, you usually target your laptop (connection: local) or a CI host so you can take advantage of your existing aws credentials, installed python modules, etc. Targeting your laptop means it's non-obvious how you would pull variables from group_vars/production vs group_vars/staging.

I've seen this handled in 2 different ways: you can add localhost to real groups inside your inventory, so that ansible will load the correct group_vars in each context. If you have localhost orchestration and remote configuration in the same playbook, you'll have to use "delegate_to:" and similar to ensure that you don't try to install apt packages on your laptop or whatever. You can usually come up with some combination of nested groups that will allow you to sidestep this issue, or you can use a host pattern to exclude localhost on some plays.

The other approach, and my personal preference, is to put all of the data that your orchestration needs into a nested yaml dict. Pass an "environment" or "stage" variable to your playbooks, and use that variable as a top-level key lookup. You can put this yaml dict in group_vars/all/ or you can put it somewhere silly like your project root and then manually load it in your playbooks using the include_vars module. Quick example: code:
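A minimal Python sketch of that environment-keyed dict pattern (the keys and values are invented for illustration; in Ansible the dict would live in group_vars and the lookup would be a Jinja2 expression):

```python
# One nested dict for all environments; the stage/environment variable
# is used as the top-level key. All keys and values are illustrative.
CONFIG = {
    "staging": {
        "vpc_cidr": "10.1.0.0/16",
        "instance_type": "t3.small",
    },
    "production": {
        "vpc_cidr": "10.2.0.0/16",
        "instance_type": "m5.large",
    },
}

def lookup(stage, key):
    """Rough equivalent of {{ config[stage][key] }} in a playbook."""
    return CONFIG[stage][key]
```

The same playbook logic then runs in every environment; only the `stage` value passed in changes.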
|
# ? Sep 13, 2019 15:37 |
|
You can have your Ansible playbooks look up variables from Consul as well as from your local variables (https://docs.ansible.com/ansible/latest/plugins/lookup/consul_kv.html), but I think this approach is kinda clunky, and Ansible being so hell-bent on locally backed variable data doesn't work for a lot of scenarios. I am trying to move our Salt states to look up pillar data from Consul and it's just a lot more configurable for me, and state composition is clearer to me than Ansible's (maybe I have too much Kool-Aid in my system?). The idea is that with a single KV store you can let different groups use whatever front-end tool they want for their orchestration or configuration and focus on behaviors, while the data itself is decoupled in the KV store, canonicalized, and ACLed off to keep people from shooting off their foot or face. This approach may be a Bad Idea for some companies but I think it's good for building consensus around configuration data.
|
# ? Sep 13, 2019 17:06 |
|
It's something that comes up a decent amount at my current employer, ansible does fully support reading almost all of its data from remote sources. The consul_kv lookup plugin exists, you can also write vars plugins yourself, but probably most useful and least obvious is that variable values in ansible can be interpolations themselves. So, you don't need to repeatedly run the plugin in your playbooks, you can just put it in your group_vars like normal: code:
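A toy Python model of that lazy-interpolation behavior, to show why the lookup can live in group_vars without being evaluated up front. This is an illustration of the idea, not Ansible's implementation; `consul_get` stands in for the real consul_kv lookup plugin:

```python
import functools

# Toy model of Ansible's lazy variable templating: a variable's value
# can itself be an expression (here, a callable) that is only resolved
# when the variable is actually used.
class LazyVars(dict):
    def __getitem__(self, key):
        value = super().__getitem__(key)
        return value() if callable(value) else value

def consul_get(key, store):
    """Stand-in for the consul_kv lookup plugin; 'store' fakes Consul."""
    return store[key]

fake_consul = {"app/port": "8080"}
hostvars = LazyVars(
    app_port=functools.partial(consul_get, "app/port", fake_consul),
    app_name="myapp",  # plain values pass through untouched
)
```

Accessing `hostvars["app_port"]` triggers the lookup at use time, just as referencing a templated var does in a play.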
I've found that a lot of people reach for storing vars values in consul or some other type of key-value store at first, but after actually using ansible's inventory structure in production for a bit, that desire usually ends up disappearing or getting backburnered because it's really not that useful or valuable compared to how group_vars already works. Given the project's momentum and massive number of contributors, you can take the lack of any vars plugin other than group_vars as pretty decent proof of this: if it were actually useful to do, it would be in the project by now. I'm definitely on the chugging-ansible-kool-aid side of the spectrum, but I usually see the desire transform from "how can we get ansible to read from consul?" to "how do we install, configure, and populate consul keys with ansible?". Ultimately I'm of the opinion that you should not let people use whatever orchestration/configuration tools they want, that you should have one single tool, and if your one single tool can't do everything you need it to, you should throw it out.
|
# ? Sep 13, 2019 20:00 |
|
12 rats tied together posted:The second would be to have just a single playbook per project and ensuring that your playbooks can always run in every supported environment. This is definitely ideal, but take care that your playbook doesn't turn into a rat's nest of conditionally executed tasks. Environment selection shouldn't involve conditionally executed tasks because you want it to run the same regardless of where you are running it. The best approach is to just use variables with sane defaults which get overwritten based on the environment being targeted. 12 rats tied together posted:Another gotcha is that ansible group_vars are great but they are hyper-focused on which hosts your play is targeting. For managing cloud orchestration, you usually target your laptop (connection: local) or a CI host so you can take advantage of your existing aws credentials, installed python modules, etc. Targeting your laptop means it's non-obvious how you would pull variables from group_vars/production vs group_vars/staging. You're conflating things here. The connection keyword specifies the connection plugin which Ansible uses when executing tasks. The chosen plugin has no bearing on the host(s) which the playbook is being executed against. Also, availability of installed Python modules is not an issue because no actual code is executed remotely. Also IMO if you're using Ansible against actual hosts instead of just doing cloud orchestration then you should deffo be using AWX/Tower. Being able to leverage credential injection and inventory scripts makes it mad useful. Edit: 12 rats tied together posted:You can put this yaml dict in group_vars/all/ or you can put it somewhere silly like your project root and then manually load it in your playbooks using the include_vars module. If you have YAML files that you want to explicitly load as part of a play with include_vars you can just put them inside a vars folder in the root of the project. 
They can then be included by referencing just the file name. This isn't mentioned in the directory layout portion of the doco but it does work and is preferable to leaving non-playbook YAML files in the project root. Pile Of Garbage fucked around with this message at 10:55 on Sep 14, 2019 |
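The "sane defaults overwritten per environment" idea mentioned above can be sketched like this in Python (the variable names and values are invented; in Ansible this behavior falls out of variable precedence between role defaults and inventory group_vars):

```python
# Sane defaults, selectively overwritten per environment.
# All names and values here are illustrative.
DEFAULTS = {"replicas": 1, "debug": True, "region": "us-east-1"}

ENV_OVERRIDES = {
    "production": {"replicas": 3, "debug": False},
    "uat": {"replicas": 2},
}

def settings_for(env):
    """Every environment runs the same logic; only the data differs."""
    merged = dict(DEFAULTS)
    merged.update(ENV_OVERRIDES.get(env, {}))
    return merged
```

No conditional tasks are needed: the play always reads the merged settings, and targeting a different environment only swaps the override data.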
# ? Sep 14, 2019 10:46 |
|
Pile Of Garbage posted:You're conflating things here. The connection keyword specifies the connection plugin which Ansible uses when executing tasks. The chosen plugin has no bearing on the host(s) which the playbook is being executed against. Also availability of installed Python modules is not an issue because no actual code is executed remotely. You can also exploit this behavior to "fake multithread" a bunch of async local tasks by running them against a real group but with connection: local, but I've never found it to be very useful compared to normal async. Pile Of Garbage posted:If you have YAML files that you want to explicitly load as part of a play with include_vars you can just put them inside a vars folder in the root of the project. They can then be included by referencing just the file name. This isn't mentioned in the directory layout portion of the doco but it does work and is preferable to leaving non-playbook YAML files in the project root.
|
# ? Sep 14, 2019 20:27 |
|
Running into a terraform issue and as I'm pretty new to it, I can't wrap my head around it. I'm using a module to fill my variables.tf, but when running [fixed]terraform init[/fixed] I'm getting the following error (management is the name of my module):code:
I'm running the init command from path/, file structure is as follows: code:
edit: I'm an idiot. I had module.tf in both path/ and path/terraform/. Removing the latter solved the issue. LochNessMonster fucked around with this message at 15:01 on Sep 19, 2019 |
# ? Sep 19, 2019 14:46 |
|
.
Hadlock fucked around with this message at 22:46 on Sep 19, 2019 |
# ? Sep 19, 2019 22:38 |
|
Two words: local-exec resources. I'm moving towards more use of auto variables.tf emitted from external systems like Consul and using data providers instead of hacks like that nowadays
|
# ? Sep 19, 2019 22:50 |
|
12 rats tied together posted:The local connection plugin specifically subverts the normal behavior here and executes tasks directly on the controller, regardless of hosts in the play. This is specific to the local connection plugin, and you normally use it to avoid needing to SSH to your own laptop, an AWX worker container, CI host, etc. Yes, that is what the local connection plugin does; however, it does not "subvert" normal behaviour in any way. More specifically, it doesn't alter the other functions of Ansible such as gathering facts. A better way of explaining it is that it merely modifies the location from which tasks are executed. From my experience the best way of selecting connection plugins is to simply set the ansible_connection variable on the hosts in the inventory, or on the template itself if you're using AWX. Earlier this year I was writing playbooks for applying configuration to Cisco IOS devices with a requirement to support ancient Telnet-only devices. I achieved this using the telnet module and liberal use of the ansible_connection variable at the host level in the inventory.
|
# ? Sep 20, 2019 14:16 |