SeaborneClink
Aug 27, 2010

MAWP... MAWP!

CyberPingu posted:

That's still a super bad posture to take imo. But I guess I'm just super glad I don't work at the place you do.


spiritual bypass
Feb 19, 2008

Grimey Drawer
"I fixed the key spray problem" actually sounds like a good line to put on a resume before leaving that lovely workplace behind

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

CyberPingu posted:

Yeah I went down the output and dependency route. Cheers.
FYI we've got a massive list of terraform problems we've discovered, and this one is at the top of it. The only other choice is using a remote state store rather than dependencies, but be aware dependencies may not work as expected unless you create a dummy dependency variable within the module itself to force terraform to do the right thing.
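
For anyone who hasn't seen it, the dummy-variable workaround looks roughly like this (variable and resource names are made up; this is the commonly suggested shape, not our exact code):

code:
# inside the module: an otherwise-unused variable that upstream resources feed into
variable "dependencies" {
  description = "Opaque values used only to force ordering between modules"
  type        = list(string)
  default     = []
}

resource "null_resource" "module_depends_on" {
  triggers = {
    deps = join(",", var.dependencies)
  }
}

# module resources then add depends_on = [null_resource.module_depends_on]
# so terraform won't touch them until every value in var.dependencies is known
The caller passes in attributes of whatever the module should wait on, e.g. something like dependencies = [aws_vpc.main.id].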

here's the GitHub issue - feel free to throw your pleas on top of the multi-year thread with hundreds of posts:
https://github.com/hashicorp/terraform/issues/17101

I've got a dozen GitHub issues right behind that one that we've run into and documented, and the last time we had HashiCorp on the phone for renewal talks we pinned them to the wall about their lack of responsiveness to these kinds of multi-year, architecture-destroying issues. I could write hundreds of words on the problems we've discovered and our lovely band-aid workarounds as we try to move to code-driven deployment.

I almost want to type it up and publish it in blog format to at least make people aware of the absolutely massive list of gotchas that terraform has, things we wish we'd known a year ago when we decided to use the new hotness (pronounced hot mess).

the tl;dr is modules suck, fail at encapsulation and only have partial functionality - critical things like looping and dependencies are missing or only partially implemented.

Bhodi fucked around with this message at 17:44 on Feb 27, 2020

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
I need some Ansible help, I think this is the place to ask?

Scenario:

I have multiple servers with a consistent app root directory and variable subdirectories.

Like this:

Host1:
/app/subfolder{1,2,3...}

Host2:
/app/subfolder{5,3,9...}

I only care about the first level of subdirs as the layout is standardized underneath them.

I need to copy a static list of files to each of those subdirectories.

I'm trying to stay DRY here. The way I originally did this was to run a find command on the /app folders, register the output, and then have 5 tasks that copy each of the files to the target servers using with_items, which isn't very DRY.

Is there a way I can collapse these 5 tasks down into one using the output from the find task and a static list of files? I've read around but I keep getting confused by documentation and tips that use both with_* and loop.
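
(A single-task version of this, using made-up names for the registered find output and the static file list, might look like the sketch below; untested, just the shape of it.)

code:
# hypothetical names: app_dirs is the registered output of the find task,
# static_files is the list of files to copy
- name: find first-level app subdirectories
  find:
    paths: /app
    file_type: directory
  register: app_dirs

- name: copy every static file into every subdirectory
  copy:
    src: "{{ item.1 }}"
    dest: "{{ item.0.path }}/"
  loop: "{{ app_dirs.files | product(static_files) | list }}"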

12 rats tied together
Sep 7, 2006

Without putting too much thought or work into it, that sounds like something you would normally do with rsync, which is supported through ansible via synchronize.

You will probably have to commit the resulting structure to your ansible repository in some way, my initial attempt here would be something like:

code:
roles/mything/files/app/
  /subfolder1/some_static_file
  /subfolder2/some_static_file
  [etc]
And then something like:
code:
roles/mything/tasks/main.yaml:
- name: sync static files to app servers
  synchronize:
    dirs: yes
    src: ./app/  # or maybe "{{ role_path }}/files/app"
    dest: /app/
    recursive: yes
  delegate_to: localhost
You'll need the delegate_to for ansible to pick up the static files you have in your git repository. You might have to fight with ansible to make sure it picks up the correct path to your app subfolders, which I would normally shove into a role, but that implementation is up to you. You can run with -vvv (or maybe 4 vs? I don't remember) to have ansible show you where it's looking for files with synchronize, or if synchronize doesn't support that level of verbosity, you can append a debugger to your task with the "always" setting and poke around the internals.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

12 rats tied together posted:

Without putting too much thought or work into it, that sounds like something you would normally do with rsync, which is supported through ansible via synchronize.

You will probably have to commit the resulting structure to your ansible repository in some way, my initial attempt here would be something like:

code:
roles/mything/files/app/
  /subfolder1/some_static_file
  /subfolder2/some_static_file
  [etc]
And then something like:
code:
roles/mything/tasks/main.yaml:
- name: sync static files to app servers
  synchronize:
    dirs: yes
    src: ./app/  # or maybe "{{ role_path }}/files/app"
    dest: /app/
    recursive: yes
  delegate_to: localhost
You'll need the delegate_to for ansible to pick up the static files you have in your git repository. You might have to fight with ansible to make sure it picks up the correct path to your app subfolders, which I would normally shove into a role, but that implementation is up to you. You can run with -vvv (or maybe 4 vs? I don't remember) to have ansible show you where it's looking for files with synchronize, or if synchronize doesn't support that level of verbosity, you can append a debugger to your task with the "always" setting and poke around the internals.

Jesus, that's perfect, thank you.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Matt Zerella posted:

I need some Ansible help, I think this is the place to ask?

Scenario:

I have multiple servers with a consistent app root directory and variable subdirectories.

Like this:

Host1:
/app/subfolder{1,2,3...}

Host2:
/app/subfolder{5,3,9...}

I only care about the first level of subdirs as the layout is standardized underneath them.

I need to copy a static list of files to each of those subdirectories.

I'm trying to stay DRY here. The way I originally did this was to run a find command on the /app folders, register the output, and then have 5 tasks that copy each of the files to the target servers using with_items, which isn't very DRY.

Is there a way I can collapse these 5 tasks down into one using the output from the find task and a static list of files? I've read around but I keep getting confused by documentation and tips that use both with_* and loop.
The previous advice in this thread is good, but also, is this a problem that you can solve more simply with containers?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Vulture Culture posted:

The previous advice in this thread is good, but also, is this a problem that you can solve more simply with containers?

*theme to 2001: A Space Odyssey plays as a giant monolith rises in the distance*

12 rats tied together
Sep 7, 2006

Feel free to use ansible to build your containers too, and then you never have to worry about wasting implementation code on a suboptimal infrastructure type. I use this pattern for a bunch of CI projects at my employer; it's like 3 LoC to swap between configuring a remote system and building a container image of the result of configuring a remote system.
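
A sketch of what that swap can look like, with made-up inventory names and assuming the docker connection plugin:

code:
# inventory: the same play/role runs against either group
[app_servers]
app01.example.com ansible_connection=ssh

[image_build]
# a running container started from the base image, e.g.
#   docker run -d --name buildc centos:7 sleep infinity
buildc ansible_connection=docker
Run the play against [image_build], then docker commit buildc myapp:latest captures the configured container as an image; the only lines that change between "configure a server" and "build an image" are the inventory entries.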

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Container infrastructure has historically been iffy for me because I was on horribly managed, archaic systems that explicitly forbade Docker or LXC, where running them required multiple approval levels, not to mention the approval processes for getting private registries set up. I don't have to do that crap now, thankfully, but old habits die hard when you're used to avoiding anything that might mean more paperwork.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else?

Docjowles
Apr 9, 2009

necrobobsledder posted:

Container infrastructure has historically been iffy for me because I was on horribly managed, archaic systems that explicitly forbade Docker or LXC, where running them required multiple approval levels, not to mention the approval processes for getting private registries set up. I don't have to do that crap now, thankfully, but old habits die hard when you're used to avoiding anything that might mean more paperwork.

I'm glad I have these forums to give me a reality check every time I feel like my company is doing something very dumb. It can always be so much worse :allears:

JehovahsWetness
Dec 9, 2005

bang that shit retarded

Blinkz0rz posted:

What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else?

All of our projects that use Jenkins have Jenkins Job Builder definitions in the repo to define the jobs, vars, schedules, views, etc., and they all just point to Jenkinsfiles that are pulled from the repo at job runtime. Works pretty well since we can apply the JJB definitions in our CI/CD pipeline and nobody makes anything by hand.
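
For anyone who hasn't used JJB, a minimal definition along those lines (repo URL and job name invented) looks something like:

code:
# jobs/myapp.yaml, applied with `jenkins-jobs update jobs/`
- job:
    name: myapp-pipeline
    project-type: pipeline
    pipeline-scm:
      scm:
        - git:
            url: https://github.com/example/myapp.git
            branches:
              - '*/master'
      script-path: Jenkinsfile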

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Blinkz0rz posted:

What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else?
Jenkins Job Builder is what those of us forced to use Jenkins have been doing, at least at my company, alongside a hodgepodge of Travis CI.

Docjowles posted:

I'm glad I have these forums to give me a reality check every time I feel like my company is doing something very dumb. It can always be so much worse :allears:
I suspect the people who have had to deal with all that are too busy drinking themselves to an early grave or practicing whiteboarding to post on these forums. But really, that's started shifting as every other shop fired the people who kept stopping engineers from getting anything done, which is the other problem with the security and process theater that plagues our F100s and public sector.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
+1 for Jenkins Job Builder. We basically have Jenkins running as a 100% stateless pipeline itself, including all jobs/plugins/jenkins version upgrades. It works quite well.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Blinkz0rz posted:

What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else?
To go against the crowd, we (I) decided against jenkins job builder because we already have consistent repos for building docker containers, terraform, stuff like that, connected to github with a stub Jenkinsfile launching a shared library, and our "end users" of jenkins are developers somewhat unfamiliar with the garbage that is build engineering.

Blue Ocean's made pretty large strides in the ease of setting up new Jenkinsfile-style jobs; setting up a new repo is pretty much 4 clicks of the next button, which was a perfect low-effort solution to our end users' needs.
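
The stub ends up being a couple of lines per repo, something like this (library and step names are invented; the real entry point is a global var in the shared library's vars/ directory):

code:
// Jenkinsfile in each application repo
@Library('org-shared-pipeline') _

// standardPipeline() lives in the shared library (vars/standardPipeline.groovy)
// and contains the actual declarative pipeline
standardPipeline(appName: 'myapp')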

SurgicalOntologist
Jun 17, 2004

We use Gitlab CI with GCP and I have a very lightweight container I'm trying to deploy as a review app (i.e., different deployment for each merge request). My first approach was with Google Cloud Run which has the disadvantage that it doesn't allow you to route URLs or requests to specific revisions of the container. So we always get the last-built version instead of the branch-specific version. This is No Good.

Looks like this feature is missing in Cloud Run, and I can't figure out a good solution that isn't complete overkill for this little service (it's a mock of a more heavyweight service for the purpose of serving fake data to tests, etc.).

I think I have to give each revision a unique service name so that the containers don't conflict with each other. The downside of that is then I get an unpredictable URL each time, so I have to get the URL (presumably I can query it with gcloud) and then change an nginx config or something. It hardly seems worth it. Is there a better way? Is this something worth branching out to AWS for?

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

Bhodi posted:

To go against the crowd, we (I) decided against jenkins job builder because we already have consistent repos for building docker containers, terraform, stuff like that, connected to github with a stub Jenkinsfile launching a shared library, and our "end users" of jenkins are developers somewhat unfamiliar with the garbage that is build engineering.

Blue Ocean's made pretty large strides in the ease of setting up new Jenkinsfile-style jobs; setting up a new repo is pretty much 4 clicks of the next button, which was a perfect low-effort solution to our end users' needs.

Can also +1 the Jenkins Shared Library approach - the thing that ultimately drove us away from plain Jenkinsfiles is the domain knowledge of Groovy and so on that they require. Our engineers are much more comfortable working with YAML than they are with Groovy or whatever else, so we try to match that.

Eventually I am going to yank out Jenkins and just move to some other minimal CI solution with a YAML back end; hopefully JJB is the bridge that gets us there.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Shared libraries come with their own baggage of fun, and while Jenkins is JVM-based, it's never been easy for me to write pipelines or jobs in anything except Groovy. If I want to try Kotlin for our jobs, that's not going to be convenient, since I'd be re-running jobs repeatedly to figure out yet another class loader problem from using anything other than plain old Java and Jenkins classes. It's user-hostile while being "user friendly" in its own universe of misery.

Seriously, what about this makes one think “easy to test CI”? https://medium.com/disney-streaming/testing-jenkins-shared-libraries-4d4939406fa2

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

necrobobsledder posted:

Shared libraries come with their own baggage of fun, and while Jenkins is JVM-based, it's never been easy for me to write pipelines or jobs in anything except Groovy. If I want to try Kotlin for our jobs, that's not going to be convenient, since I'd be re-running jobs repeatedly to figure out yet another class loader problem from using anything other than plain old Java and Jenkins classes. It's user-hostile while being "user friendly" in its own universe of misery.

Seriously, what about this makes one think “easy to test CI”? https://medium.com/disney-streaming/testing-jenkins-shared-libraries-4d4939406fa2
gently caress that garbage. You CAN use shared libraries to import groovy and do native tests and go down the groovy rabbithole but we only use it for a "load and execute this shared declarative jenkinsfile from this one repo" stub.

The jenkinsfile itself is similar to the one below, with a bunch of predefined steps that lint branches, deploy and test a dev instance/container on PRs, and tag/upload on the master branch. It's just 200 lines of generic code that pulls in config files/secrets from vault and runs shell commands. You'd be insane to try and develop your actual tests in groovy. You can use conditional build steps to skip specific stages, but in the end jenkins is best used for just some really simple flow control goo that's executed through hooks from your source control, like line 80 in this:

https://github.com/jenkinsci/pipeline-examples/blob/master/declarative-examples/jenkinsfile-examples/mavenDocker.groovy

We use a jenkins docker image and launch all our jobs as docker containers. For our tests, all jenkins does is execute test/*.sh and if anything returns non-zero the job fails. The scripts in the test directory can execute complex rspec code or sometimes something as basic as a curl test against a built container. This decoupling of tests has the advantage of being tool-agnostic and allows you to test outside of the jenkins pipeline with just a local docker daemon and a local checkout of the repo.

All of this could easily be ported to concourse or whatever CI hotness you choose. It CAN be very complex but it really shouldn't be.
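
A minimal sketch of that test/*.sh convention as a declarative stage (the layout is illustrative, not the actual pipeline):

code:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // run every script under test/; any non-zero exit fails the job
                sh '''
                    set -e
                    for t in test/*.sh; do
                        "$t"
                    done
                '''
            }
        }
    }
}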

Bhodi fucked around with this message at 05:08 on Mar 3, 2020

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.
Yo yo CI crew.


I'm building a Terraform module right now where I'm passing in an object map. I'm stealing some stuff from Gruntwork, and the map they have is

code:
variable "metric_map" {
  description = "A map of filter metrics."
  type = map(object({
    pattern     = string
    description = string
  }))
}
I need to pass a large number of values into this, which I currently have in a .tfvars file. Is this the best way to go about it, or can I just plug all the values into the main map part? If so, how?
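
For reference, the .tfvars side of that map nests like this (only one entry shown; a real file would carry the whole list):

code:
# terraform.tfvars
metric_map = {
  "UnauthorizedAPICalls" = {
    pattern     = "{($.errorCode= \"*UnauthorizedOperation\") || ($.errorCode= \"AccessDenied*\")}"
    description = "A user has made an unauthorized API call"
  }
  # ...more entries, one block per metric filter
}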

Docjowles
Apr 9, 2009

https://www.terraform.io/docs/configuration/variables.html

tfvars is a good idea, I think, in the name of separating data from code and making your terraform reusable. You could dump them in the “default” section of the variable definition but that’s kind of lovely.

Other options include passing them to terraform on the command line or as environment variables if that suits your workflow better for whatever reason.

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.

Docjowles posted:

https://www.terraform.io/docs/configuration/variables.html

tfvars is a good idea, I think, in the name of separating data from code and making your terraform reusable. You could dump them in the “default” section of the variable definition but that’s kind of lovely.

Other options include passing them to terraform on the command line or as environment variables if that suits your workflow better for whatever reason.

Yeah, the issue atm is that you need to pass the tfvars in at the command line, which isn't really reusable. I'm fine with them being dumped in the default section as no one is going to see this really; they are just going to be calling it when putting an HCL file in our live repo.

Docjowles
Apr 9, 2009

Terraform will auto-load tfvars files in the current directory if they match a naming convention. See the doc I linked. You don't have to pass them on the command line.
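
Specifically, files matching these names in the working directory are loaded automatically on plan/apply:

code:
terraform.tfvars
terraform.tfvars.json
*.auto.tfvars
*.auto.tfvars.json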

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.
Ahhhhh right

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.
Ok 2nd question,

This module is going to be applied to several accounts and sets up a lot of cloudwatch alarms. Since the definition for these alarms is configured inside a variable as part of the object map, I can't add a 2nd variable inside that to get the account it's been applied to, so when the alarm triggers and sends a slack notification, I have no idea what account it was triggered on. Is there a way to resolve that?

Docjowles
Apr 9, 2009

You can dynamically get the current AWS account number from the caller_identity data source. Would referencing that in your module help?

TheCog
Jul 30, 2012

I AM ZEPA AND I CLAIM THESE LANDS BY RIGHT OF CONQUEST
Anyone here used terratest for testing terraform-managed infrastructure? If so, any advice/pitfalls to avoid?

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.
I've got that in already; it's just how to pass that into the below so that it reads like

code:
"UnauthorizedAPICalls" = {
    pattern = "{($.errorCode= \"*UnauthorizedOperation\") || ($.errorCode= \"AccessDenied*\")}"
    description = "A user has made an unauthorized API call"
  }
description = "A user in account <foo> has made an unauthorized API call"

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

CyberPingu posted:

I've got that in already; it's just how to pass that into the below so that it reads like

code:
"UnauthorizedAPICalls" = {
    pattern = "{($.errorCode= \"*UnauthorizedOperation\") || ($.errorCode= \"AccessDenied*\")}"
    description = "A user has made an unauthorized API call"
  }
description = "A user in account <foo> has made an unauthorized API call"

code:
data "aws_caller_identity" "current" {}

...

description = "A user in account ${data.aws_caller_identity.current} has made an unauthorized API call"
That'll give you the account number. If you maintain a map of number to friendly name you'll be in good shape.

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.

Blinkz0rz posted:

code:
data "aws_caller_identity" "current" {}

...

description = "A user in account ${data.aws_caller_identity.current} has made an unauthorized API call"
That'll give you the account number. If you maintain a map of number to friendly name you'll be in good shape.

You can't add a variable into a variable though.


code:

  on terraform.tfvars line 4:
   4:     description = "A user has in account ${data.aws_caller_identity.current} made an unauthorized API call"

Variables may not be used here.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

CyberPingu posted:

You can't add a variable into a variable though.


code:

  on terraform.tfvars line 4:
   4:     description = "A user has in account ${data.aws_caller_identity.current} made an unauthorized API call"

Variables may not be used here.

Can you use join to compose the whole string?

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Why not just change the code so that description is just the specific message for the thing that's alerting and format the message with the account in the thing that loads the description?

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.

deedee megadoodoo posted:

Why not just change the code so that description is just the specific message for the thing that's alerting and format the message with the account in the thing that loads the description?

The thing that loads the description is actually pulled from another module.

Warbird
May 23, 2012

America's Favorite Dumbass

Does anyone know where I can get some documentation on the secure-scripts plugin for Jenkins? It seems pretty straightforward but doesn’t seem to be 100% working on the Jenkins 2.X container we’re pulling from RH.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

CyberPingu posted:

The thing that loads the description is actually pulled from another module.
Shouldn't matter if you use a format string instead of interpolation:
https://www.terraform.io/docs/configuration/functions/format.html
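
A sketch of that inside the module, reusing the metric_map variable from earlier (the local name is made up):

code:
data "aws_caller_identity" "current" {}

locals {
  # append the account id to each description passed in via the map
  metric_descriptions = {
    for name, metric in var.metric_map :
    name => format("%s (account %s)", metric.description, data.aws_caller_identity.current.account_id)
  }
}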

Boz0r
Sep 7, 2006
The Rocketship in action.
I have a couple of .NET MVC/WebAPI projects, both Core and Framework on Azure, calling each other with http requests. I'd like my build pipeline to start these web services, run some integration tests, and kill the services/servers when they're done.
How do I do this?

Some of the services make external calls that I use Moq to mock in my unit tests.
How would I do the same thing in those integration tests?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Boz0r posted:

I have a couple of .NET MVC/WebAPI projects, both Core and Framework on Azure, calling each other with http requests. I'd like my build pipeline to start these web services, run some integration tests, and kill the services/servers when they're done.
How do I do this?

Some of the services make external calls that I use Moq to mock in my unit tests.
How would I do the same thing in those integration tests?

Can you run the environment in containers? If so, docker compose can help you here.

If not, how do you run the integration tests locally? If you have to manually go and run things and set up an environment that will make the tests pass, then that's a problem. Solve it for a developer's local machine and you've solved the problem universally. Generally, running integration tests after deployment against a dev environment is an okay approach for this kind of thing. Also keep in mind that integration tests are mostly intended to verify that services are communicating properly -- failure means "these things can't talk". Proving they can talk in a local environment isn't giving you much of a signal.
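
A docker-compose sketch of that kind of setup (service names, build paths, and the test image are all hypothetical):

code:
# docker-compose.yml
version: "3.8"
services:
  service-a:
    build: ./ServiceA              # one of the MVC/WebAPI apps
  service-b:
    build: ./ServiceB
    environment:
      SERVICEA_URL: http://service-a   # services talk over the compose network
  integration-tests:
    build: ./IntegrationTests      # image whose entrypoint runs `dotnet test`
    depends_on:
      - service-a
      - service-b
Running docker-compose up --exit-code-from integration-tests gives the pipeline a pass/fail exit code, and docker-compose down tears the services back down afterwards.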

Boz0r
Sep 7, 2006
The Rocketship in action.

New Yorp New Yorp posted:

Can you run the environment in containers? If so, docker compose can help you here.

I don't know. I don't know a lot about Azure yet. Most of the apps are pretty simple MVC apps.

New Yorp New Yorp posted:

If not, how do you run the integration tests locally? If you have to manually go and run things and set up an environment that will make the tests pass, then that's a problem. Solve it for a developer's local machine and you've solved the problem universally. Generally, running integration tests after deployment against a dev environment is an okay approach for this kind of thing. Also keep in mind that integration tests are mostly intended to verify that services are communicating properly -- failure means "these things can't talk". Proving they can talk in a local environment isn't giving you much of a signal.

We haven't made any integration tests yet. I'm trying to get the team to prioritize it :).


New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Boz0r posted:

I don't know. I don't know a lot about Azure yet. Most of the apps are pretty simple MVC apps.


We haven't made any integration tests yet. I'm trying to get the team to prioritize it :).

Containers / Docker Compose have nothing to do with Azure.

Don't prioritize integration tests, prioritize unit tests. Unit tests verify correct behavior of units of code (classes, methods, etc). Integration tests verify that the correctly-working units of code can communicate to other correctly-working units of code (service A can talk to service B). Both serve an important purpose, but the bulk of your test effort should go into unit tests.

New Yorp New Yorp fucked around with this message at 14:55 on Mar 9, 2020
