|
CyberPingu posted:That's still a super bad posture to take imo. But I guess I'm just super glad I don't work at the place you do.
|
# ? Feb 27, 2020 12:19 |
|
|
|
"I fixed the key spray problem" actually sounds like a good line to put on a resume before leaving that lovely workplace behind
|
# ? Feb 27, 2020 12:21 |
|
CyberPingu posted:Yeah I went down the output and dependency route. Cheers. Here's the GitHub issue - feel free to throw your pleas on top of the multi-year thread with hundreds of posts: https://github.com/hashicorp/terraform/issues/17101 I've got a dozen GitHub issues right behind that that we've run into and documented, and last time we had hashicorp on a phone for renewal talks we pinned them to the wall about their lack of responsiveness to these kinds of multi-year, architecture-destroying issues. I could write hundreds of words on the problems we've discovered and our lovely band-aid workarounds as we try to move to code-driven deployment. I almost want to type it up and publish in blog format to at least make people aware of the absolutely massive list of gotchas that terraform has, things we wish we knew a year ago when we decided to use the new hotness (pronounced hot mess). The tl;dr is modules suck, fail at encapsulation and only have partial functionality - critical things like looping and dependencies are missing or only partially implemented. Bhodi fucked around with this message at 17:44 on Feb 27, 2020
# ? Feb 27, 2020 16:55 |
|
I need some Ansible help, I think this is the place to ask? Scenario: I have multiple servers with a consistent app root directory and variable subdirectories. Like this: Host1: /app/subfolder{1,2,3...} Host2: /app/subfolder{5,3,9...} I only care about the first level of subdirs as the layout is standardized underneath them. I need to copy a static list of files to each of those subdirectories. I'm trying to stay DRY here. The way I originally did this was to run a find command on the /app folders, register the output, then run 5 tasks that copy each of the files to the target servers using with_items, which isn't very DRY. Is there a way I can collapse those 5 tasks down into one, using the output from the find task and a static list of files? I've read around but I keep getting confused between the documentation and the tips for using with_ versus loop.
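For reference, the shape of the single collapsed task I'm imagining would be something like this - paths, variable names, and the file list are all made up, and it leans on the Jinja2 `product` filter to cross the find results with the static file list:

```yaml
# One copy task instead of five: cross every discovered subdirectory
# with every file in a static list using the product filter.
- name: Find first-level app subdirectories
  find:
    paths: /app
    file_type: directory
  register: app_dirs

- name: Copy the static file set into every subdirectory
  copy:
    src: "{{ item.1 }}"
    dest: "{{ item.0.path }}/"
  loop: "{{ app_dirs.files | product(static_files) | list }}"
  vars:
    static_files:
      - files/config.yml
      - files/app.properties
```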
|
# ? Feb 27, 2020 17:45 |
|
Without putting too much thought or work into it, that sounds like something you would normally do with rsync, which is supported through ansible via synchronize. You will probably have to commit the resulting structure to your ansible repository in some way, my initial attempt here would be something like: code:
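Something along these lines - a rough sketch only, assuming the standard file set is committed to the ansible repo under a directory like files/standard/ (all names invented):

```yaml
# Discover the first-level subdirectories, then rsync the committed
# standard file set into each one via the synchronize module.
- name: Find first-level app subdirectories
  find:
    paths: /app
    file_type: directory
  register: app_dirs

- name: Rsync the standard file set into each subdirectory
  synchronize:
    src: files/standard/
    dest: "{{ item.path }}/"
  loop: "{{ app_dirs.files }}"
```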
|
# ? Feb 27, 2020 17:57 |
|
12 rats tied together posted:Without putting too much thought or work into it, that sounds like something you would normally do with rsync, which is supported through ansible via synchronize. Jesus, that's perfect, thank you.
|
# ? Feb 27, 2020 18:02 |
|
Matt Zerella posted:I need some Ansible help, I think this is the place to ask?
|
# ? Feb 29, 2020 18:56 |
|
Vulture Culture posted:The previous advice in this thread is good, but also, is this a problem that you can solve more simply with containers? *theme to 2001: A Space Odyssey plays as a giant monolith rises in the distance*
|
# ? Feb 29, 2020 20:25 |
|
Feel free to use ansible to build your containers too, and then you never have to worry about wasting implementation code on a suboptimal infrastructure type. I use this pattern for a bunch of CI projects at my employer; it's like 3 LoC to swap between configuring a remote system or building a container image of the result of configuring a remote system.
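The swap is basically just the connection setting. An inventory sketch (host names, address, and container name all invented):

```yaml
# The same play can target either a remote host over SSH or a running
# container via the docker connection plugin; only the connection
# settings differ between the two hosts.
all:
  hosts:
    remote-server:
      ansible_host: 203.0.113.10    # normal SSH-managed box
    build-target:
      ansible_connection: docker    # a running container by name
```

Run the play against `build-target`, then `docker commit` the container to capture the configured result as an image.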
|
# ? Feb 29, 2020 21:06 |
|
Container infrastructure has historically been iffy for me because I was stuck on horribly managed, archaic systems that explicitly forbade Docker or LXC, where running them required layers of approval, not to mention the approval processes for getting private registries set up. I mean, I don't have to deal with that crap now, thankfully, but old habits die hard when you're used to avoiding anything that might mean more paperwork.
|
# ? Mar 2, 2020 14:54 |
|
What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else?
|
# ? Mar 2, 2020 14:57 |
|
necrobobsledder posted:Container infrastructure has historically been iffy for me because I was stuck on horribly managed, archaic systems that explicitly forbade Docker or LXC, where running them required layers of approval, not to mention the approval processes for getting private registries set up. I mean, I don't have to deal with that crap now, thankfully, but old habits die hard when you're used to avoiding anything that might mean more paperwork. I'm glad I have these forums to give me a reality check every time I feel like my company is doing something very dumb. It can always be so much worse.
|
# ? Mar 2, 2020 15:35 |
|
Blinkz0rz posted:What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else? All of our projects that use Jenkins have Jenkins Job Builder definitions in the repo to define the jobs, vars, schedules, views, etc and they all just point to Jenkinsfiles that are pulled from the repo @ job runtime. Works pretty well since we can apply the JJB definitions in our CI/CD pipeline and nobody makes anything by hand.
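A minimal JJB definition in that spirit - job name, repo URL, and schedule are all invented, and the job itself only points at the repo's Jenkinsfile so the real logic stays in source control:

```yaml
# Hypothetical Jenkins Job Builder pipeline job definition.
- job:
    name: example-app-deploy
    project-type: pipeline
    pipeline-scm:
      scm:
        - git:
            url: https://example.com/org/example-app.git
            branches:
              - master
      script-path: Jenkinsfile
    triggers:
      - timed: "H 2 * * *"   # nightly rebuild, hash-spread
```

Applying definitions like this from CI (`jenkins-jobs update`) is what keeps anyone from making jobs by hand.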
|
# ? Mar 2, 2020 15:36 |
|
Blinkz0rz posted:What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else? Docjowles posted:I'm glad I have these forums to give me a reality check every time I feel like my company is doing something very dumb. It can always be so much worse
|
# ? Mar 2, 2020 15:43 |
|
+1 for Jenkins Job Builder. We basically have Jenkins running as a 100% stateless pipeline itself, including all jobs/plugins/jenkins version upgrades. It works quite well.
|
# ? Mar 2, 2020 16:36 |
|
Blinkz0rz posted:What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else? Blue Ocean's made pretty large strides in ease of setting up new Jenkinsfile-style jobs; setting up a new repo is pretty much 4 clicks of the Next button, which was a perfect low-effort solution for our end users' needs.
|
# ? Mar 2, 2020 17:11 |
|
We use Gitlab CI with GCP and I have a very lightweight container I'm trying to deploy as a review app (i.e., different deployment for each merge request). My first approach was with Google Cloud Run which has the disadvantage that it doesn't allow you to route URLs or requests to specific revisions of the container. So we always get the last-built version instead of the branch-specific version. This is No Good. Looks like this feature is missing in Cloud Run, and I can't figure out a good solution that isn't complete overkill for this little service (it's a mock of a more heavyweight service for the purpose of serving fake data to tests, etc.). I think I have to give each revision a unique service name so that the containers don't conflict with each other. The downside of that is then I get an unpredictable URL each time, so I have to get the URL (presumably I can query it with gcloud) and then change an nginx config or something. It hardly seems worth it. Is there a better way? Is this something worth branching out to AWS for?
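The unique-service-name workaround I'm picturing would look something like this in the GitLab CI job - a shell sketch only, with the service name, image path, and region all assumed:

```shell
# Deploy one Cloud Run service per merge request, then read back the
# generated URL instead of trying to predict it.
SERVICE="mock-svc-mr-${CI_MERGE_REQUEST_IID}"

gcloud run deploy "$SERVICE" \
  --image "gcr.io/$GCP_PROJECT/mock-svc:$CI_COMMIT_SHORT_SHA" \
  --region us-central1 --platform managed --allow-unauthenticated

# The assigned URL can be queried after deploy:
URL=$(gcloud run services describe "$SERVICE" \
  --region us-central1 --platform managed \
  --format 'value(status.url)')
echo "Review app: $URL"
```

The matching cleanup job would be `gcloud run services delete "$SERVICE"` when the MR closes.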
|
# ? Mar 2, 2020 18:53 |
|
Bhodi posted:To go against the crowd, we(I) decided against jenkins job builder because we already have consistent repos for building docker containers, terraform, stuff like that, connected to github and a stub Jenkinsfile launching a shared library, and our "end user" of jenkins are developers somewhat unfamiliar with the garbage that is build engineering. Can also +1 the Jenkins Shared Library approach - the thing that ultimately drove us away from Jenkinsfiles/whatever is domain knowledge of groovy and so on. Our engineers are much more comfortable working with YAML than they are with Groovy or whatever else, so we try to match that. Eventually I am going to yank out Jenkins and just move to some other minimal CI solution with a YAML back end, hopefully JJB is the bridge that gets us there.
|
# ? Mar 2, 2020 19:55 |
|
Shared libraries come with their own baggage of fun and while Jenkins is JVM based it’s never been easy for me to write pipelines or jobs in anything except Groovy. Like if I want to try Kotlin for our jobs, that’s not going to be convenient as I re-run jobs repeatedly to figure out another class loader problem when using anything other than plain old Java and Jenkins classes. It’s user hostile while being “user friendly” in its own universe of misery. Seriously, what about this makes one think “easy to test CI”? https://medium.com/disney-streaming/testing-jenkins-shared-libraries-4d4939406fa2
|
# ? Mar 3, 2020 00:25 |
|
necrobobsledder posted:Shared libraries come with their own baggage of fun and while Jenkins is JVM based it’s never been easy for me to write pipelines or jobs in anything except Groovy. Like if I want to try Kotlin for our jobs, that’s not going to be convenient as I re-run jobs repeatedly to figure out another class loader problem when using anything other than plain old Java and Jenkins classes. It’s user hostile while being “user friendly” in its own universe of misery. The Jenkinsfile itself is similar to below, with a bunch of predefined steps which lint branches, deploy and test a dev instance/container on PRs, and tag/upload on the master branch. It's just 200 lines of generic code that pulls in config files/secrets from vault and runs shell commands. You'd be insane to try and develop your actual tests in groovy. You can use conditional build steps to skip specific stages but in the end jenkins is best used for just some really simple flow control goo that's executed through hooks from your source control, like line 80 in this: https://github.com/jenkinsci/pipeline-examples/blob/master/declarative-examples/jenkinsfile-examples/mavenDocker.groovy We use a jenkins docker image and launch all our jobs as docker containers. For our tests, all jenkins does is execute test/*.sh and if anything returns non-zero the job fails. The scripts in the test directory can execute complex rspec code or sometimes something as basic as a curl test against a built container. This decoupling of tests has the advantage of being tool-agnostic and allows you to test outside of the jenkins pipeline with just a local docker daemon and a local checkout of the repo. All of this could easily be ported to concourse or whatever CI hotness you choose. It CAN be very complex but it really shouldn't be. Bhodi fucked around with this message at 05:08 on Mar 3, 2020
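For illustration, a thin declarative Jenkinsfile in that spirit - stage names, scripts, and the build image are all invented; Jenkins is only flow-control glue and the real logic lives in shell scripts under test/:

```groovy
// Sketch of a minimal "flow control goo" Jenkinsfile (Docker Pipeline
// plugin assumed for the containerized agent).
pipeline {
    agent { docker { image 'build-image:latest' } }
    stages {
        stage('Lint') {
            steps { sh './lint.sh' }
        }
        stage('Test') {
            steps {
                // any non-zero exit from a test script fails the job
                sh 'for t in test/*.sh; do "$t" || exit 1; done'
            }
        }
        stage('Publish') {
            when { branch 'master' }   // tag/upload only on master
            steps { sh './publish.sh' }
        }
    }
}
```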
# ? Mar 3, 2020 04:25 |
Yo yo CI crew. I'm building a Terraform module right now where I'm passing in an object map. I'm stealing some stuff from Gruntwork, and the map they have is code:
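Not the exact map from the post, but the general shape of a Gruntwork-style alarm object map is something like this (all field names and values illustrative only):

```hcl
# Illustrative sketch: a map of alarm definitions keyed by name,
# typed with Terraform 0.12+ object types.
variable "alarms" {
  type = map(object({
    metric_name = string
    namespace   = string
    threshold   = number
    description = string
  }))
  default = {
    high_cpu = {
      metric_name = "CPUUtilization"
      namespace   = "AWS/EC2"
      threshold   = 80
      description = "CPU over 80%"
    }
  }
}
```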
|
|
# ? Mar 5, 2020 11:02 |
|
https://www.terraform.io/docs/configuration/variables.html tfvars is a good idea, I think, in the name of separating data from code and making your terraform reusable. You could dump them in the “default” section of the variable definition but that’s kind of lovely. Other options include passing them to terraform on the command line or as environment variables if that suits your workflow better for whatever reason.
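Roughly like this, e.g. (variable name and values invented):

```hcl
# variables.tf -- the declaration lives with the module code
variable "instance_type" {
  type        = string
  description = "EC2 instance type for the app tier"
  default     = "t3.micro"   # fallback if no tfvars value is supplied
}

# terraform.tfvars (and any *.auto.tfvars) in the working directory is
# auto-loaded by terraform, so the data can live separately, e.g.:
#   instance_type = "t3.medium"
```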
|
# ? Mar 5, 2020 13:19 |
Docjowles posted:https://www.terraform.io/docs/configuration/variables.html Yeah the issue atm is that you need to pass the tfvars in at the command line, which isn't really reusable. I'm fine with them being dumped in the default section as no one is going to see this really, they are just going to be calling it when putting an hcl file in our live repo
|
|
# ? Mar 5, 2020 13:25 |
|
Terraform will auto load tfvars files in the current directory if you match a naming convention. See the doc I linked. You don’t have to pass them on the command line.
|
# ? Mar 5, 2020 13:43 |
Ahhhhh right
|
|
# ? Mar 5, 2020 13:46 |
Ok, 2nd question. This module is going to be applied to several accounts and sets up a lot of cloudwatch alarms. Since the definition for these alarms is configured inside a variable as part of the object map, I can't add a 2nd variable inside that to get the account that it's been applied to, so when the alarm triggers and sends a slack notification, I have no idea what account it's been triggered on. Is there a way to resolve that?
|
|
# ? Mar 5, 2020 13:55 |
|
You can dynamically get the current AWS account number from the caller_identity data source. Would referencing that in your module help?
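e.g. (output name invented, just to show the reference):

```hcl
# Look up the account the module is currently being applied to.
data "aws_caller_identity" "current" {}

# The account number is then available anywhere in the module as
# data.aws_caller_identity.current.account_id, e.g.:
output "account_id" {
  value = data.aws_caller_identity.current.account_id
}
```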
|
# ? Mar 5, 2020 14:01 |
|
Anyone here used terratest for testing terraform managed infrastructure? If so, any advice/pitfalls to avoid?
|
# ? Mar 5, 2020 14:05 |
I've got that in already, it's just how to pass that into the below so that it reads like: code:
|
|
# ? Mar 5, 2020 14:05 |
|
CyberPingu posted:I've got that in already, it's just how to pass that into the below so that it reads like: code:
|
# ? Mar 5, 2020 14:14 |
Blinkz0rz posted:
You can't add a variable into a variable though. code:
|
|
# ? Mar 5, 2020 14:17 |
|
CyberPingu posted:You cant add a variable into a variable though. Can you use join to compose the whole string?
|
# ? Mar 5, 2020 14:48 |
|
Why not just change the code so that description is just the specific message for the thing that's alerting and format the message with the account in the thing that loads the description?
|
# ? Mar 5, 2020 14:52 |
deedee megadoodoo posted:Why not just change the code so that description is just the specific message for the thing that's alerting and format the message with the account in the thing that loads the description? The thing that loads the description is actually pulled from another module.
|
|
# ? Mar 5, 2020 16:01 |
|
Does anyone know where I can get some documentation on the secure-scripts plugin for Jenkins? It seems pretty straightforward but doesn’t seem to be 100% working on the Jenkins 2.X container we’re pulling from RH.
|
# ? Mar 6, 2020 04:57 |
|
CyberPingu posted:The thing that loads the description is actually pulled from another module. https://www.terraform.io/docs/configuration/functions/format.html
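Something like this, e.g. - a sketch only, assuming a plain description string in the alarm map and a caller_identity lookup (names illustrative):

```hcl
data "aws_caller_identity" "current" {}

locals {
  # Prepend the account id to the alarm description at apply time,
  # so the map itself stays free of per-account data.
  alarm_description = format(
    "[account %s] %s",
    data.aws_caller_identity.current.account_id,
    var.alarms["high_cpu"].description,
  )
}
```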
|
# ? Mar 9, 2020 11:38 |
|
I have a couple of .NET MVC/WebAPI projects, both Core and Framework on Azure, calling each other with http requests. I'd like my build pipeline to start these web services, run some integration tests, and kill the services/servers when they're done. How do I do this? Some of the services make external calls that I use Moq to mock in my unit tests. How would I do the same thing in those integration tests?
|
# ? Mar 9, 2020 13:15 |
|
Boz0r posted:I have a couple of .NET MVC/WebAPI projects, both Core and Framework on Azure, calling each other with http requests. I'd like my build pipeline to start these web services, run some integration tests, and kill the services/servers when they're done. Can you run the environment in containers? If so, docker compose can help you here. If not, how do you run the integration tests locally? If you have to manually go and run things and set up an environment that will make the tests pass, then that's a problem. Solve it for a developer's local machine and you've solved the problem universally. Generally, running integration tests after deployment against a dev environment is an okay approach for this kind of thing. Also keep in mind that integration tests are mostly intended to verify that services are communicating properly -- failure means "these things can't talk". Proving they can talk in a local environment isn't giving you much of a signal.
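A compose sketch for that kind of setup - service names, paths, and the stub image are all invented, and the external dependency is faked with an HTTP stub server instead of Moq, since Moq only works in-process:

```yaml
# Hypothetical docker-compose.yml: both services, an HTTP stub for the
# external calls, and a test runner that exercises them over HTTP.
services:
  api-core:
    build: ./src/Api.Core
    environment:
      EXTERNAL_SERVICE_URL: http://external-stub:8080
  api-framework:
    build: ./src/Api.Framework
    depends_on: [api-core]
  external-stub:
    image: wiremock/wiremock:latest   # canned responses replace Moq
  tests:
    build: ./tests/Integration
    depends_on: [api-core, api-framework]
```

Then `docker compose up --exit-code-from tests` starts everything, runs the tests, propagates their exit code to the pipeline, and tears the environment down - the same command works on a developer's machine and in CI.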
|
# ? Mar 9, 2020 14:07 |
|
New Yorp New Yorp posted:Can you run the environment in containers? If so, docker compose can help you here. I don't know. I don't know a lot about Azure yet. Most of the apps are pretty simple MVC apps. New Yorp New Yorp posted:If not, how do you run the integration tests locally? If you have to manually go and run things and set up an environment that will make the tests pass, then that's a problem. Solve it for a developer's local machine and you've solved the problem universally. Generally, running integration tests after deployment against a dev environment is an okay approach for this kind of thing. Also keep in mind that integration tests are mostly intended to verify that services are communicating properly -- failure means "these things can't talk". Proving they can talk in a local environment isn't giving you much of a signal. We haven't made any integration tests yet. I'm trying to get the team to prioritize it.
|
# ? Mar 9, 2020 14:38 |
|
|
|
Boz0r posted:I don't know. I don't know a lot about Azure yet. Most of the apps are pretty simple MVC apps. Containers / Docker Compose have nothing to do with Azure. Don't prioritize integration tests, prioritize unit tests. Unit tests verify correct behavior of units of code (classes, methods, etc). Integration tests verify that the correctly-working units of code can communicate to other correctly-working units of code (service A can talk to service B). Both serve an important purpose, but the bulk of your test effort should go into unit tests. New Yorp New Yorp fucked around with this message at 14:55 on Mar 9, 2020 |
# ? Mar 9, 2020 14:53 |