fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Rocko Bonaparte posted:

Well, I managed to get vagrant to boot up VMs for testing our code individually, but I'm trying to figure out how to combine all this so I launch all test VMs simultaneously. Vagrant has a multi-vm mode where I just specify all the different configurations in one Vagrantfile. I tried this and it appeared to try to run them serially. I put my test in the provisioning logic, so I am not too surprised something like that might happen. However, I wanted to double check if that's the case. Generally, does vagrant provision boxes sequentially?

My other situation is figuring out the best way to run each of the different VMs in a different mode. Specifically, I need to test both for Python 2 and Python 3, and I'm trying to do them on separate VMs. The stuff I am running can switch modes basically with an environment variable. Does anybody know a good way to duplicate each VM with a different toggle like that?

I'm trying to avoid having to write a bunch of automation around this. I'm already bummed about the multi-vm sequential thing and hope I'm just wrong. Being able to kick the whole thing off from one Vagrantfile would be great.

You may want to look into test kitchen: https://kitchen.ci/

It's built by the Chef guys but I don't think you necessarily have to use Chef with it. It works great for spinning up a bunch of VMs in parallel, running stuff on them, and then tearing them down.
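
If it helps, a minimal .kitchen.yml for that kind of setup looks roughly like this (just a sketch, not tested here; the suite names and script paths are made up):

code:
# .kitchen.yml (hypothetical sketch)
driver:
  name: vagrant

provisioner:
  name: shell

platforms:
  - name: ubuntu-18.04

suites:
  - name: py2
    provisioner:
      script: test/run_tests_py2.sh
  - name: py3
    provisioner:
      script: test/run_tests_py3.sh
Then kitchen test -c 2 brings both VMs up in parallel, runs the scripts, and destroys everything afterwards.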

spiritual bypass
Feb 19, 2008

Grimey Drawer
I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

rt4 posted:

I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow?
There's nothing particularly slow about Puppet itself, and you should profile what's actually going wrong before you spend a month reimplementing the same exact problem in another tool.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

rt4 posted:

I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow?

They're all bad in their own unique ways, sorry

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

rt4 posted:

I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow?

What is it you don't like about Ruby and/or what problems are you having with Puppet's performance? Also what are you using Puppet for?

If you give some more details then people might be able to point you in the direction of a better tool.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
CFEngine 2, Salt, and Ansible are all options to avoid Ruby. I use Salt these days over Ansible when it comes to configuration management. Ansible can be sped up with modes that basically turn it into another agent-based management system, but having to adjust my own playbooks yet again every time I upgraded to pick up new features got old. Also, Ansible doesn't really have anything like the Reactor system that Salt has, and deploying Stackstorm alongside Ansible for configuration management seems like a hack.
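
For anyone who hasn't seen it, the Reactor is just a mapping from event tags to SLS files in the master config, something like this (the tag and paths here are made up):

code:
# /etc/salt/master (hypothetical sketch)
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/highstate_on_start.sls

# /srv/reactor/highstate_on_start.sls: apply state to whichever minion fired the event
run_highstate:
  local.state.apply:
    - tgt: {{ data['id'] }}
The nice part is reacting to arbitrary custom events, not just minion starts; Ansible has nothing built in that sits and listens like that.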

Helianthus Annuus
Feb 21, 2006

can i touch your hand
Grimey Drawer

rt4 posted:

I use Puppet at work but I just don't like Ruby. What alternatives might I consider that aren't so drat slow?

a 5,000 line, hand-crafted bash script called /usr/local/bin/fix_everything.sh that you run on an hourly cron

Potato Salad
Oct 23, 2014

nobody cares


necrobobsledder posted:

CFEngine 2, Salt, and Ansible are all options to avoid Ruby. I use Salt these days over Ansible when it comes to configuration management. Ansible can be sped up with modes that basically turn it into another agent-based management system, but having to adjust my own playbooks yet again every time I upgraded to pick up new features got old. Also, Ansible doesn't really have anything like the Reactor system that Salt has, and deploying Stackstorm alongside Ansible for configuration management seems like a hack.

Same w/r/t ansible vs salt.

I've fallen in love with salt reactor, help

bolind
Jun 19, 2005



Pillbug
Hi thread.

Not sure whether this is the right place to ask, I checked over in the virtualization thread and was directed here.

I have a scenario where I would like to provide three similar Linux environments for development:

  1. A bare bones environment with a couple of compilers, custom tools, static code analyzers and such.
  2. A superset of 1) with editors, graphical tools etc.
  3. A Windows (VirtualBox? Docker for Windows?) version of 2)

And a nice way to keep everything in sync.

Surely I'm not the first guy to think of this. Please enlighten me.

12 rats tied together
Sep 7, 2006

Vagrant with the ansible provisioner is the simplest toolchain that can do everything you need here, IMO, and is a very common choice for setting up interactive development environments.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
I'm getting really fed up with declarative poo poo for systems management and just want to go back to procedural

Things really do run in cycles, don't they. We're back to fancy shell scripts

12 rats tied together
Sep 7, 2006

IMO declarative is only good when the dependencies between objects are too complex to be reasonably understood or when there are so many objects that writing procedural logic becomes too error prone. Gaining experience with a system mitigates both of these concerns, so it makes sense to me that people begin to prefer procedural tooling over time.

It does highlight though that taking care in your tooling choices is critical to sanely running long term infrastructure. Going back to Terraform v Ansible, it's not impossible to write a declarative tool that gives you the level of introspection and control over the graph walk that Ansible gives you by virtue of being a procedural tool, the features are just not present yet. Similarly, it's definitely not impossible to write a procedural tool (say, a script) that is bullshit garbage, doesn't log to anywhere, requires you to step through with a debugger, etc.

It's important to choose tools that let you do things, definitely, but it's also important to consider what percentage of things that are possible with a tool are actually shipped features on it.

I think a fundamental of imperative tooling that is often forgotten is that it can also be used to organize, link, and run instances of declarative tooling. In my experience it's also very common for people to leap at declarative tools, unfortunately, as a substitute for actually understanding the underlying provider. This tends to work about as well as you would expect.

An example that comes up a lot for me at work is "you must make sure the subnet exists before you can create an ec2 instance in it" -- this is a feature, in my opinion, not a bug. :shobon:

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
"Oh hey I haven't looked a the CI thread in awhile. Maybe I should close it since I've just about finished up my prototype..."

fletcher posted:

You may want to look into test kitchen: https://kitchen.ci/

It's built by the Chef guys but I don't think you necessarily have to use Chef with it. It works great for spinning up a bunch of VMs in parallel, running stuff on them, and then tearing them down.

Gaaaaaaaaaaah why did I have to see this now?

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS

Helianthus Annuus posted:

a 5,000 line, hand-crafted bash script called /usr/local/bin/fix_everything.sh that you run on an hourly cron

don't forget > /dev/null 2>&1

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Anyone else going to be at Hashiconf next week in Seattle? I have other plans afterward besides happy hour but I can certainly hang out with some of you fine folks at lunch instead of awkwardly hanging out alone.

Hughlander
May 11, 2005

necrobobsledder posted:

Anyone else going to be at Hashiconf next week in Seattle? I have other plans afterward besides happy hour but I can certainly hang out with some of you fine folks at lunch instead of awkwardly hanging out alone.

Thing I miss from being a block from the convention center... Random lunches with Goons... I'm down in Renton now, about 15mi to the south, and wish I could have a drink with ya.

PBS
Sep 21, 2015
For those using docker, how do you handle running a container with arbitrary UID/GIDs? I see run/compose lets you specify a user to execute as, but that doesn't address file permissions inside the container at all.

Due to security requirements we've got completely different dev/stg/prd environments, each with its own AD domain. Additionally there are images we'd like to be able to share with different departments, but they will have a different user/group running in them.

We'd like to avoid having to build an image per env + per user, but do need to be able to control who it's running as due to external mounts.

I've been looking around for a solution that isn't essentially just run as root, chown everything, downgrade to a less privileged user, exec. I must be missing something because this seems like a fairly obvious thing you might need to do, but I've been pulling my hair out trying to find an answer.

PBS fucked around with this message at 02:45 on Sep 5, 2019

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

PBS posted:

For those using docker, how do you handle running a container with arbitrary UID/GIDs? I see run/compose lets you specify a user to execute as, but that doesn't address file permissions inside the container at all.

Due to security requirements we've got completely different dev/stg/prd environments, each with its own AD domain. Additionally there are images we'd like to be able to share with different departments, but they will have a different user/group running in them.

We'd like to avoid having to build an image per env + per user, but do need to be able to control who it's running as due to external mounts.

I've been looking around for a solution that isn't essentially just run as root, chown everything, downgrade to a less privileged user, exec. I must be missing something because this seems like a fairly obvious thing you might need to do, but I've been pulling my hair out trying to find an answer.
I don't know what filesystem you're running against or what your permissions look like, but could something like fixuid help you out here as opposed to dynamically chowning the permissions on the mount?
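
If I'm reading the fixuid README right, you bake a small config into the image and then just pass the UID/GID at runtime, roughly like this (image name, paths, and variables are made up):

code:
# /etc/fixuid/config.yml baked into the image (hypothetical)
user: app
group: app
paths:
  - /home/app
  - /data

# docker-compose.yml (hypothetical); fixuid runs as the entrypoint
version: "3.7"
services:
  app:
    image: registry.example.com/app:latest
    user: "${RUN_UID}:${RUN_GID}"
    volumes:
      - /srv/nfs/share:/data
As I understand it, it only remaps that one user and fixes ownership under the listed paths, so it's less heavy-handed than recursively chowning the whole filesystem.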

PBS
Sep 21, 2015

Vulture Culture posted:

I don't know what filesystem you're running against or what your permissions look like, but could something like fixuid help you out here as opposed to dynamically chowning the permissions on the mount?

Taking a look at that now, but to answer your question,

The file system is nfs4, permissions are standard posix with ad user/groups with unique ids. OS is centos/rhel 7.5, using an ldap provider with sssd.

Edit: That looks somewhat similar to what I'm already doing, just a cleaner way of doing it. Right now I'm using the s6-overlay and a script it has called fix-attrs. This is roughly equivalent to chowning everything, but not necessarily precisely the same. It's worth a shot though given it lets me set a user other than root.

PBS fucked around with this message at 03:42 on Sep 5, 2019

LochNessMonster
Feb 3, 2005

I need about three fitty


I'm also using s6-overlay for this. Will look into this solution too, thanks for sharing!

We're moving away from NFS mounted volumes for persistent data entirely though. We've had so many issues with it over the last 2 years.

PBS
Sep 21, 2015

LochNessMonster posted:

I'm also using s6-overlay for this. Will look into this solution too, thanks for sharing!

We're moving away from NFS mounted volumes for persistent data entirely though. We've had so many issues with it over the last 2 years.

Yeah it's a bear, I remember it being a nightmare in my home lab, but I finally got it all working and it's Just Worked ever since.

This is just for our swarm clusters which run things we can't put in kubernetes for reasons that are too sad for me to try to explain, nfs is really the only shared file system available to them atm.

It's really staggering when I look back and see how much time I've wasted working around all our snowflake requirements or just general dumb practices we're dragged along with.

LochNessMonster
Feb 3, 2005

I need about three fitty


PBS posted:

Yeah it's a bear, I remember it being a nightmare in my home lab, but I finally got it all working and it's Just Worked ever since.

This is just for our swarm clusters which run things we can't put in kubernetes for reasons that are too sad for me to try to explain, nfs is really the only shared file system available to them atm.

It's really staggering when I look back and see how much time I've wasted working around all our snowflake requirements or just general dumb practices we're dragged along with.

I'm also using it on Swarm because, as you said, there is no alternative. I think the majority of my issues have been with NFS. Our decision to disable the routing mesh and run our own ingress service is a close second though.

In hindsight we should've picked K8s over Swarm, but 2-3 years back it was more of a coin toss.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I bet on K8s over Swarm back in 2014, mostly because I saw that most shops that would benefit are big behemoths that were burned on dumpster fires like OpenStack, and Swarm seemed too simplistic by comparison. More complex doesn't mean better, obviously, but big and complicated seems to get the most funding in a marketplace over the simplest solutions like Nomad.

tortilla_chip
Jun 13, 2007

k-partite
I'm trying to set up a workflow where an ECS task is kicked off to process some data when it's uploaded to S3. Is this workflow still relevant https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/ ? Is the SQS portion necessary for passing around the object/filename, or is there something simpler I could do with event logs?

JHVH-1
Jun 28, 2002

tortilla_chip posted:

I'm trying to set up a workflow where an ECS task is kicked off to process some data when it's uploaded to S3. Is this workflow still relevant https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/ ? Is the SQS portion necessary for passing around the object/filename, or is there something simpler I could do with event logs?

I think you can accomplish it in a more straightforward way using cloudwatch events now:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-ECS.html
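
One gotcha: S3 object-level API calls only show up in CloudWatch Events if CloudTrail data events are enabled for the bucket. In CloudFormation terms the rule ends up looking something like this (sketch only, every name/ARN here is a placeholder):

code:
# Hypothetical CloudFormation sketch (assumes CloudTrail data events on the bucket)
S3UploadRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source: ["aws.s3"]
      detail-type: ["AWS API Call via CloudTrail"]
      detail:
        eventName: ["PutObject", "CompleteMultipartUpload"]
        requestParameters:
          bucketName: ["my-upload-bucket"]
    Targets:
      - Id: run-processing-task
        Arn: arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster
        RoleArn: arn:aws:iam::123456789012:role/ecs-events-invoke
        EcsParameters:
          TaskDefinitionArn: arn:aws:ecs:us-east-1:123456789012:task-definition/process-upload
          TaskCount: 1
Getting the uploaded object's key into the task is the fiddly part; that's where an input transformer on the target comes in, or you just have the task work out what's new by itself.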

tortilla_chip
Jun 13, 2007

k-partite
Thanks! This appears to be what I'm looking for. Still not quite clear on how to reference the object name from the event and pass that in as an override or environment variable to the task, but I'll keep digging.

12 rats tied together
Sep 7, 2006

You might actually be able to dig up the event structure for s3:PutObject in the documentation, but if you can't you can instead point your event rule at a cloudwatch logs group first and then just trigger the event a few times to get some examples.

That's usually what I do when I'm struggling to find something in the documentation, anyway :shobon:

LochNessMonster
Feb 3, 2005

I need about three fitty


I知 finally moving my platform from a trainwreck MSP to AWS and that means also migrating some services that are not mine.

To do that I知 creating an EKS cluster per project which needs to be provisioned by Terraform. Any configuration will be done by Ansible and code deployment by Jenkins.

This means that my IaC configs need to be reusable for other projects than my own so instead of having it in my applications codebase I知 creating a seperate IaC repo.

I was wondering if there are any standards / best practices on how to structure the code?

I was thinking something along the lines of

code:
Project1/
	Terraform/
		main.tf
		variables.tf
		output.tf
	Ansible/
		playbooks/
			roles/
	Jenkins/
Project2/
	Terraform/
	Ansible/
	Jenkins/

LochNessMonster fucked around with this message at 07:02 on Sep 13, 2019

12 rats tied together
Sep 7, 2006

My personal preference would be:
code:
iac/
  ansible/
    playbooks/
      roles/
        terraform-thing-1/ (tasks/, templates/, handlers/, etc)
        terraform-thing-2/
        ansible-thing-1/
        ansible-thing-2/
        jenkins-config/
        [etc]
      project-1.yaml
      project-2.yaml
Put all of your orchestration and config (terraform, ansible, and jenkins) into a playbook named after each project. Use ansible's terraform and jenkins_job modules to run your terraform operations and configure jenkins from ansible-playbook. Use task tags to support least-resistance code paths through your playbooks: you probably don't need to run terraform all the time, so it shouldn't run unless requested with --tags terraform.
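
Loosely, each project playbook ends up shaped like this (module options trimmed, paths and names are hypothetical):

code:
# playbooks/project-1.yaml (hypothetical sketch)
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: run terraform for project 1
      terraform:
        project_path: terraform/project-1
        state: present
      tags: [terraform]

    - name: configure the jenkins job
      jenkins_job:
        name: project-1-deploy
        config: "{{ lookup('template', 'jenkins/project-1-config.xml.j2') }}"
        url: https://jenkins.example.com
        user: "{{ jenkins_user }}"
        password: "{{ jenkins_password }}"
      tags: [jenkins]
Running ansible-playbook playbooks/project-1.yaml does everything; add --tags terraform (or --tags jenkins) to touch only that slice.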

JehovahsWetness
Dec 9, 2005

bang that shit retarded

12 rats tied together posted:

you probably don't need to run terraform all the time, so it shouldn't run unless requested with --tags terraform.

We actually like running terraform every time, just to hit plan with '-detailed-exitcode' set so we can kill the build when drift is detected. It's a nice "wtf" infrastructure linter.

LochNessMonster
Feb 3, 2005

I need about three fitty


12 rats tied together posted:

My personal preference would be:
code:
iac/
  ansible/
    playbooks/
      roles/
        terraform-thing-1/ (tasks/, templates/, handlers/, etc)
        terraform-thing-2/
        ansible-thing-1/
        ansible-thing-2/
        jenkins-config/
        [etc]
      project-1.yaml
      project-2.yaml
Put all of your orchestration and config (terraform, ansible, and jenkins) into a playbook named after each project. Use ansible's terraform and jenkins_job modules to run your terraform operations and configure jenkins from ansible-playbook. Use task tags to support least-resistance code paths through your playbooks: you probably don't need to run terraform all the time, so it shouldn't run unless requested with --tags terraform.

How do you differentiate between dev/test/uat/prod environments? Let ansible take care of that based on the inventory / group vars which are used to deploy this?

12 rats tied together
Sep 7, 2006

JehovahsWetness posted:

We actually like running terraform every time, just to hit plan with '-detailed-exitcode' set so we can kill the build when drift is detected. It's a nice "wtf" infrastructure linter.

That's totally fair. If you're having ansible run terraform for you I believe you need to force the task to be in check_mode (by appending check_mode: yes to it) in order to have it run terraform plan for you. You should still be able to register the tasks, though, and then trigger an assertion suite on every playbook run. Triggering assertion suites is a super useful pattern in general!
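
Something like this (untested sketch, names made up):

code:
# drift check: the terraform module in check mode behaves like plan
- name: plan terraform (no changes applied)
  terraform:
    project_path: terraform/project-1
    state: present
  check_mode: yes
  register: tf_plan

- name: fail the run if drift is detected
  assert:
    that:
      - not tf_plan.changed
    fail_msg: "terraform plan shows drift, go look at the infrastructure"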

LochNessMonster posted:

How do you differentiate between dev/test/uat/prod environments? Let ansible take care of that based on the inventory / group vars which are used to deploy this?

Yeah, that is correct. There are basically 2 high-level ways you can approach multi-environments in ansible but they both hinge on the inventory being different in each environment.

The first, and probably simplest, one would be to maintain a playbook for each different environment. So, you'd have prod-project-1, test-project-1, staging-project-1, etc. I usually like to leave a comment at the top of these playbooks indicating what inventory should be used to run it. There's a fair amount of copy paste involved in this approach but it is definitely the most intuitive and I've seen that it's easiest to get other people actually involved when the project looks like this. They usually end up wanting to remove the duplicate code eventually anyway, which doubles as a nice learning exercise.

The second would be to have just a single playbook per project and to ensure that your playbooks can always run in every supported environment. This is definitely ideal, but take care that your playbook doesn't turn into a rat's nest of conditionally executed tasks.

Another gotcha is that ansible group_vars are great but they are hyper-focused on which hosts your play is targeting. For managing cloud orchestration, you usually target your laptop (connection: local) or a CI host so you can take advantage of your existing aws credentials, installed python modules, etc. Targeting your laptop means it's non-obvious how you would pull variables from group_vars/production vs group_vars/staging.

I've seen this handled in 2 different ways: you can add localhost to real groups inside your inventory, so that ansible will load the correct group_vars in each context. If you have localhost orchestration and remote configuration in the same playbook, you'll have to use "delegate_to:" and similar to ensure that you don't try to install apt packages on your laptop or whatever. You can usually come up with some combination of nested groups that will allow you to sidestep this issue, or you can use a host pattern to exclude localhost on some plays.

The other approach, and my personal preference, is to put all of the data that your orchestration needs into a nested yaml dict. Pass an "environment" or "stage" variable to your playbooks, and use that variable as a top-level key lookup. You can put this yaml dict in group_vars/all/ or you can put it somewhere silly like your project root and then manually load it in your playbooks using the include_vars module.

Quick example:

code:
- name: manage terraform vpc stack
  terraform:
    project_path: terraform/aws/whatever
    variables:
      aws_account_id: "{{ aws_accounts[environment].id }}"
      peered_vpcs: "{{ aws_vpc_peerings[environment] }}"
    [etc]
Another nice side effect of using ansible to run your terraform is that you can j2 template your terraform configuration before feeding it into "project_path:". So, you can do for-real for loops, you can do variable substitution in areas that terraform does not support (for example: module source/version, terraform config values), and instead of nesting modules you can use j2 template inheritance to push repetitive declarations into the same state to better take advantage of terraform's dag walk.
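
For completeness, the nested dict those lookups hit would sit in group_vars/all/ (or get pulled in with include_vars) and look something like this; aws_accounts and aws_vpc_peerings are just the hypothetical names from the example above:

code:
# group_vars/all/aws.yaml (hypothetical data, keyed by environment)
aws_accounts:
  staging:
    id: "111111111111"
  production:
    id: "222222222222"

aws_vpc_peerings:
  staging: []
  production:
    - vpc-0abc1234
    - vpc-0def5678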

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You can have your Ansible playbooks look up variables from Consul as well as from your local variables (https://docs.ansible.com/ansible/latest/plugins/lookup/consul_kv.html), but I think this approach is kinda clunky, and Ansible being so hell-bent on locally backed variable data doesn't work for a lot of scenarios. I am trying to move our Salt states to look up pillar data from Consul and it's just a lot more configurable for me, and state composition is clearer to me than Ansible's (maybe I have too much Kool Aid in my system?). The idea is that with a single KV store you can let different groups use whatever front-end tool they want for their orchestration or configuration, and they can focus upon behaviors while the data itself is decoupled via the KV store, canonicalized, and ACLed off to keep people from shooting off their foot or face. This approach may be a Bad Idea for some companies but I think it's good for building consensus on configuration data.
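
For anyone curious, the Salt side of that is the consul ext_pillar in the master config, which (if I'm remembering the docs right) looks roughly like this; the profile name, host, and root are made up:

code:
# /etc/salt/master (hypothetical sketch)
my_consul_config:
  consul.host: consul.example.com
  consul.port: 8500

ext_pillar:
  - consul: my_consul_config root=salt/pillar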

12 rats tied together
Sep 7, 2006

It's something that comes up a decent amount at my current employer, ansible does fully support reading almost all of its data from remote sources. The consul_kv lookup plugin exists, you can also write vars plugins yourself, but probably most useful and least obvious is that variable values in ansible can be interpolations themselves.

So, you don't need to repeatedly run the plugin in your playbooks, you can just put it in your group_vars like normal:

code:
(group_vars/production/$application/common.yaml)
my_config_setting: "{{ lookup('consul_kv', 'my/key', whatever) }}"

(group_vars/test/$application/common.yaml)
my_config_setting: hardcoded-string
This works with any type of interpolation (variable reference, any filter, any lookup, etc) and it works everywhere in ansible-playbook, even in the inventory files, which IMO is a little non-obvious from the documentation. However, no matter how remote/dynamic your variable data gets you will always have some local ansible data. It just doesn't make sense to put something like the value for "ansible_ssh_become_method" in consul.

I've found that a lot of people reach for storing vars values in consul or some other type of key value store at first but after actually using ansible's inventory structure for a bit in production, that desire usually ends up disappearing or getting backburnered immediately because it's really not that useful or valuable compared to how group_vars already works. Given the project's momentum and massive amount of contributors you can take the lack of existence of any vars plugin other than group_vars as pretty decent proof of this, if it was actually useful to do it would be in the project by now. I'm definitely on the chugging ansible kool aid side of the spectrum but I usually see the desire transform from "how can we get ansible to read from consul?" to "how do we install, configure, and populate consul keys with ansible?".
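
The populating side is just the consul_kv module in a loop, e.g. (my_app_settings being a hypothetical dict out of group_vars):

code:
# hypothetical: seed consul from the same group_vars data
- name: populate consul keys
  consul_kv:
    key: "config/{{ item.key }}"
    value: "{{ item.value }}"
    host: consul.example.com
  loop: "{{ my_app_settings | dict2items }}"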

Ultimately I'm of the opinion that you should not let people use whatever orchestration/configuration tools they want to, that you should have one single tool, and if your one single tool can't do everything you need it to you should throw it out.

Pile Of Garbage
May 28, 2007



12 rats tied together posted:

The second would be to have just a single playbook per project and to ensure that your playbooks can always run in every supported environment. This is definitely ideal, but take care that your playbook doesn't turn into a rat's nest of conditionally executed tasks.

Environment selection shouldn't involve conditionally executed tasks because you want it to run the same regardless of where you are running it. The best approach is to just use variables with sane defaults which get overwritten based on the environment being targeted.
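
i.e. something like this (made-up variable):

code:
# roles/myapp/defaults/main.yml (the sane default)
myapp_replica_count: 1

# group_vars/production/myapp.yml (overridden only where prod differs)
myapp_replica_count: 5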

12 rats tied together posted:

Another gotcha is that ansible group_vars are great but they are hyper-focused on which hosts your play is targeting. For managing cloud orchestration, you usually target your laptop (connection: local) or a CI host so you can take advantage of your existing aws credentials, installed python modules, etc. Targeting your laptop means it's non-obvious how you would pull variables from group_vars/production vs group_vars/staging.

You're conflating things here. The connection keyword specifies the connection plugin which Ansible uses when executing tasks. The chosen plugin has no bearing on the host(s) which the playbook is being executed against. Also availability of installed Python modules is not an issue because no actual code is executed remotely.

Also IMO if you're using Ansible against actual hosts instead of just doing cloud orchestration then you should deffo be using AWX/Tower. Being able to leverage credential injection and inventory scripts makes it mad useful.

Edit:

12 rats tied together posted:

You can put this yaml dict in group_vars/all/ or you can put it somewhere silly like your project root and then manually load it in your playbooks using the include_vars module.

If you have YAML files that you want to explicitly load as part of a play with include_vars you can just put them inside a vars folder in the root of the project. They can then be included by referencing just the file name. This isn't mentioned in the directory layout portion of the doco but it does work and is preferable to leaving non-playbook YAML files in the project root.
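
In other words, per the above (hypothetical file name):

code:
# vars/network.yaml sits next to the playbooks, then in a play:
- name: load the shared network data
  include_vars: network.yaml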

Pile Of Garbage fucked around with this message at 10:55 on Sep 14, 2019

12 rats tied together
Sep 7, 2006

Pile Of Garbage posted:

You're conflating things here. The connection keyword specifies the connection plugin which Ansible uses when executing tasks. The chosen plugin has no bearing on the host(s) which the playbook is being executed against. Also availability of installed Python modules is not an issue because no actual code is executed remotely.
The local connection plugin specifically subverts the normal behavior here and executes tasks directly on the controller, regardless of hosts in the play. This is specific to the local connection plugin, and you normally use it to avoid needing to SSH to your own laptop, an AWX worker container, CI host, etc.

You can also exploit this behavior to "fake multithread" a bunch of async local tasks by running them against a real group but with connection: local, but I've never found it to be very useful compared to normal async.
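
For the curious, that pattern is basically the standard async fire-and-poll, just with connection: local so every "host" runs on the controller (the script name here is made up):

code:
- hosts: app_servers
  connection: local
  gather_facts: false
  tasks:
    - name: kick off one local call per inventory host, in parallel
      command: ./provision_dns.sh {{ inventory_hostname }}
      async: 300
      poll: 0
      register: job

    - name: wait for them all to finish
      async_status:
        jid: "{{ job.ansible_job_id }}"
      register: result
      until: result.finished
      retries: 60
      delay: 5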

Pile Of Garbage posted:

If you have YAML files that you want to explicitly load as part of a play with include_vars you can just put them inside a vars folder in the root of the project. They can then be included by referencing just the file name. This isn't mentioned in the directory layout portion of the doco but it does work and is preferable to leaving non-playbook YAML files in the project root.
Cool, this is good to know. I've been putting them inside the root of group_vars, so they aren't loaded automatically, but a vars folder would be preferable.

LochNessMonster
Feb 3, 2005

I need about three fitty


Running into a terraform issue and as I'm pretty new to it, I can't wrap my head around it. I'm using a module to fill my variables.tf, but when running terraform init I'm getting the following error (management is the name of my module):

code:
[me@box]$ terraform init                                                                                                                                                                                                        
Initializing modules...
Downloading /path/Terraform for management...
- management in .terraform/modules/management 
Downloading /path/Terraform for management.management...
- management.management in .terraform/modules/management.management
 Downloading /path/Terraform for management.management.management... 
- management.management.management in .terraform/modules/management.management.management
Downloading /path/Terraform for management.management.management.management...                                                                                                                            
- management.management.management.management in .terraform/modules
/management.management.management.management
Downloading /path/Terraform for management.management.management.management.management...                                                                                                                  
- management.management.management.management.management in .terraform/modules/management.management.management.management.management          

<this continues for some time> 


Terraform tried to remove
.terraform/modules/management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management.management
in order to reinstall this module, but encountered an error: unlinkat
This appears to be the following bug

I'm running the init command from path/, file structure is as follows:
code:
path/module.tf
path/terraform/main.tf
path/terraform/variables.tf
The issue on the issue tracker says this happens when 2 or more modules depend on each other, but as all values in the module (the only one I'm using) are hardcoded, I'm not seeing on which other module this would depend. This seems like such trivial behaviour that I feel I must be doing something wrong, but I have no clue what.

edit: I'm an idiot. I had module.tf in both path/ and path/terraform/. Removing the latter solved the issue.

LochNessMonster fucked around with this message at 15:01 on Sep 19, 2019

Hadlock
Nov 9, 2004

.

Hadlock fucked around with this message at 22:46 on Sep 19, 2019

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Two words: local-exec resources. I'm moving towards more use of auto variables.tf emitted from external systems like Consul and using data providers instead of hacks like that nowadays

Pile Of Garbage
May 28, 2007



12 rats tied together posted:

The local connection plugin specifically subverts the normal behavior here and executes tasks directly on the controller, regardless of hosts in the play. This is specific to the local connection plugin, and you normally use it to avoid needing to SSH to your own laptop, an AWX worker container, CI host, etc.

Yes, that is what the local connection plugin does; however, it does not "subvert" normal behaviour in any way. More specifically it doesn't alter the other functions of Ansible, such as gathering facts. A better way of explaining it is that it merely modifies the location which tasks are executed from.

From my experience the best way of selecting connection plugins is to simply set the ansible_connection variable on the hosts in the inventory or on the template itself if you're using AWX. Earlier this year I was writing playbooks for applying configuration to Cisco IOS devices with a requirement to support ancient Telnet-only devices. I achieved this using the telnet module and liberal use of the ansible_connection variable at the host level in the inventory.
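
A rough YAML-inventory sketch of that arrangement (hostnames made up; as far as I understand the telnet module drives its own session, so the connection value on those hosts mostly just keeps Ansible from trying to SSH):

code:
# inventory/network.yaml (hypothetical)
all:
  children:
    ios_ssh:
      hosts:
        core-sw-01:
      vars:
        ansible_connection: network_cli
        ansible_network_os: ios
    ios_telnet_only:
      hosts:
        ancient-sw-99:
      vars:
        ansible_connection: local   # plays against these use the telnet module directly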

  • Reply