Methanar
Sep 26, 2013

by the sex ghost

fletcher posted:

Hmm interesting, thanks for the suggestion! Are there any downsides to using WSL1 ?

https://docs.microsoft.com/en-us/windows/wsl/compare-versions

The disk io performance is worse. But other than that, it's fine. You might also be limited to an old version of ubuntu, no idea if it was ever upgraded past ubuntu 16.04, or even 14.04 for that matter. If you have anything you want to compile to native code, you probably will still want to do that on a real linux machine as well instead of fake wsl. This includes c libraries like librdkafka.

Methanar fucked around with this message at 03:40 on Jan 22, 2021


Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Honestly if you don't need the /mnt/c stuff you're possibly better off just running a small Linux vm vs WSL1 or 2.

Or I guess if you need Vagrant then you'll probably want WSL.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Yeah it sounds like I might be better off sticking with the VirtualBox VM for now. Maybe in a few months Microsoft and/or VirtualBox will have released updates that make it easier and I can revisit WSL2. A colleague mentioned he didn't run into any VirtualBox / WSL2 issues with his Windows Insider build of windows...sounds like things are getting close. It's also nice having the same OS in the dev VM that is on the dedicated server.

Hadlock
Nov 9, 2004

I've heard some pretty good things about WSL2 from my peers

My personal experience with WSL1, using vscode and navigating the windows/linux filesystems between the ps1 console and vscode's inbuilt terminal, was not good. From what I understand most of this is solved with WSL2, but I haven't had to mix and match since 2 came out

Hadlock fucked around with this message at 03:54 on Jan 22, 2021

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

fletcher posted:

Yeah it sounds like I might be better off sticking with the VirtualBox VM for now. Maybe in a few months Microsoft and/or VirtualBox will have released updates that make it easier and I can revisit WSL2. A colleague mentioned he didn't run into any VirtualBox / WSL2 issues with his Windows Insider build of windows...sounds like things are getting close. It's also nice having the same OS in the dev VM that is on the dedicated server.

I doubt there will be such an update. This has been an issue between Hyper-V and VirtualBox for years, and the same issue exists between KVM and VirtualBox on the Linux side. My understanding is that this is a technical limitation of the CPU VT-x features: only one piece of virtualization software can use them at a time.

Can VirtualBox and KVM run alongside each other?
Using Hyper-V with Oracle VM VirtualBox

Super-NintendoUser
Jan 16, 2004

COWABUNGERDER COMPADRES
Soiled Meat
I'm about ready to pull my hair out over this stupid jenkins job. I'm pretty new at jenkins, can someone give me a hand with this?

Basically, I have a RHEL server that I need jenkins to connect to via SSH and run some commands on. The problem is that I need it to use some environment variables on the target host, but I can't get the jenkins job to load them.

code:
 pipeline {
  agent {
    node {
      label 'centos7_base'
    }
  }
  environment {
    OWNER="jmjf@company.com"
    SSH = "ssh -o StrictHostKeyChecking=no ${params.USERNAME}@${params.HOST}"
    ENV_USER = "${params.USERNAME}"
  }
  parameters {
    string(name: 'HOST', defaultValue: 'host.mycompany.corp', description: 'Host to build on')
    choice(name: 'USERNAME', choices: ['user_media_1', 'user_media_2'], description: 'Username to manage')
    string(name: 'SERVLET_VERSION', defaultValue: '', description: 'Which version of servlet to pull from Nexus')
  }
  stages {
    stage('ssh') {
      steps {
        container('shell'){
          sshagent (['QA-sshkey-admin-host01']) {
            sh "$SSH source /home/$ENV_USER/.bash_profile && env"
            sh "$SSH source /home/$ENV_USER/.bash_profile && echo $SOME_ENV_VAR"
          }
        }
      }
    }
  }
}
So with that, the "env" command returns the entire list of env variables correctly, but I just can't get the $SOME_ENV_VAR to resolve. I tried every combination of escapes, double quotes, and single quotes, but I can't figure it out. Anyone have any insights?

edit: of course right after I post this I figure it out. The problem is that the job takes a while to build, so it's a nightmare to debug. This works:
code:
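            // Groovy expands $SSH and $ENV_USER in the double-quoted string before the step runs;
            // the escaped \${SOME_ENV_VAR} plus the single quotes hand the whole
            // "source ... && echo ..." command through to the remote shell, which expands the
            // variable only after .bash_profile has been sourced on the target host.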
            sh "$SSH 'source /home/$ENV_USER/.bash_profile && echo \${SOME_ENV_VAR}' "
I figured it was something stupid with the quotes and escaping.

Super-NintendoUser fucked around with this message at 18:46 on Jan 25, 2021

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Welcome to Jenkins quote hell. Enjoy your stay.

it only gets worse from here

Hadlock
Nov 9, 2004

If you have shell access on the other end, you can dump everything into a CSV (or whatever), pipe that through base64, then expand it out on the other side in a safe space and awk/sed your way to victory

Super-NintendoUser
Jan 16, 2004

COWABUNGERDER COMPADRES
Soiled Meat

Hadlock posted:

If you have shell access on the other end, you can dump everything into a CSV (or whatever), pipe that through base64, then expand it out on the other side in a safe space and awk/sed your way to victory

xzzy
Mar 5, 2009

Based on the snippet posted it might not be the best choice for this specific case, but the envinject plugin is pretty great in general for managing environment variables in a jenkins job. Users can enter variables in the gui or specify them in a file to be imported so it generally satisfies the gui types and the shell types.

Hadlock
Nov 9, 2004

Our analytics team has sworn off django(???) and they're doing all their migrations via tribal knowledge manually at the command line

I've been able to walk them back from the ledge; they're willing to look at a third party migration solution, but Django ORM is a four letter word so now we have to use something else. The only tool anybody on the engineering team has used for this (including myself) is a product called flyway. I used it at a Java house and it was wildly successful, but I wasn't directly attached to the integration of our product with flyway

The only problem I can see with flyway is that it's heavily slanted towards Java apps, and all of their workflow is in Python

TL;DR how do I migrations good

I was going to post this in the general programming questions thread but honestly I don't trust generic programmer advice on database stuff

Option B: if there are just absolutely no off-the-shelf tools for python migrations, I guess I could write a crude system myself, but as much as my office embraces "not written here" that sounds like a tremendous waste of time
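For what it's worth, the crude version really is only about a page of Python: keep numbered .sql files in a directory, track which ones have been applied in a table, and run the rest in order. A rough sketch (sqlite3 is just a stand-in for whatever database the analytics team actually uses; the file layout and table name are made up):

code:
# crude_migrate.py - minimal forward-only SQL migration runner (a sketch, not battle-tested)
import pathlib
import sqlite3

MIGRATIONS_DIR = pathlib.Path("migrations")  # e.g. 0001_create_events.sql, 0002_add_index.sql
DB_PATH = "analytics.db"                     # stand-in; point this at the real database

def applied_versions(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY, applied_at TEXT)"
    )
    return {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}

def migrate():
    conn = sqlite3.connect(DB_PATH)
    done = applied_versions(conn)
    for path in sorted(MIGRATIONS_DIR.glob("*.sql")):
        version = path.stem.split("_")[0]    # leading number is the version
        if version in done:
            continue
        print(f"applying {path.name}")
        conn.executescript(path.read_text())  # run the migration itself
        conn.execute(
            "INSERT INTO schema_migrations (version, applied_at) VALUES (?, datetime('now'))",
            (version,),
        )
        conn.commit()                         # only record it once the script succeeded
    conn.close()

if __name__ == "__main__":
    migrate()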

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Hadlock posted:

Our analytics team has sworn off django(???) and they're doing all their migrations via tribal knowledge manually at the command line

I've been able to walk them back from the ledge; they're willing to look at a third party migration solution, but Django ORM is a four letter word so now we have to use something else. The only tool anybody on the engineering team has used for this (including myself) is a product called flyway. I used it at a Java house and it was wildly successful, but I wasn't directly attached to the integration of our product with flyway

The only problem I can see with flyway is that it's heavily slanted towards Java apps, and all of their workflow is in Python

TL;DR how do I migrations good

I was going to post this in the general programming questions thread but honestly I don't trust generic programmer advice on database stuff

Option B: if there are just absolutely no off-the-shelf tools for python migrations, I guess I could write a crude system myself, but as much as my office embraces "not written here" that sounds like a tremendous waste of time

What was the issue they had with Django migrations? I mean conceptually Flyway is not really much different, right? I don't have a ton of experience with Flyway but when I came across it a few years ago it sounded very similar to Django migrations. So without knowing what the hangup was specifically with Django migrations, it's hard to say whether something else won't run into the same problems.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
I use alembic for a very basic data model and it suits my needs

abraham linksys
Sep 6, 2010

:darksouls:
If your engineers want to write SQL migrations in a Python DSL, I'd check out Alembic (though I dunno how easily it can be used "directly" instead of with SQLAlchemy models). Otherwise, if they want to write migrations in SQL, Flyway is great even if the binary is huge since it bundles a JVM; on the plus side, Java's just an implementation detail in its usage (except for the occasional stack trace if things go completely awry).

There are a few other alternatives to Flyway written in various languages, but again, since the migrations are written in SQL, this is really just an implementation detail.
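To make that concrete: an Alembic revision is just a Python file with upgrade/downgrade functions, and you can ignore SQLAlchemy models entirely and hand-write the DDL, or drop to raw SQL with op.execute. A rough sketch of what one looks like (the table and revision ids here are made up; `alembic revision` normally generates them for you):

code:
"""add events table"""
from alembic import op
import sqlalchemy as sa

revision = "20210131_01"
down_revision = None  # first migration in the chain

def upgrade():
    op.create_table(
        "events",
        sa.Column("id", sa.BigInteger, primary_key=True),
        sa.Column("name", sa.Text, nullable=False),
        sa.Column("created_at", sa.DateTime, server_default=sa.text("now()")),
    )
    # or skip the DSL and hand-write SQL:
    op.execute("CREATE INDEX ix_events_name ON events (name)")

def downgrade():
    op.drop_table("events")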

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Is there a tool that will look up all of our resources in AWS and generate some sort of a map or report that shows what resources are linked to what other resources, so we can clean up the ones that aren't being used by anything?
Ideally I'd like to see all resources that seem orphaned which can easily be cleaned up, and also maybe drill down from a VPC to see all the resources attached to it, which I can then either tie back to a running instance or delete with confidence.

We don't have anyone with any specific AWS training on staff so there's been a lot of trial and error in getting things set up, especially earlier on when we were starting to move our services over there, so there are a number of resources that were maybe not fully set up or not fully deleted which I'm trying to identify and clean up, both from a cost-saving POV as well as general OCD.

Hadlock
Nov 9, 2004

Lucidchart has something that will work, but they're charging an arm and a leg for it

You might also look at tools that scan to see what ports are open to the internet on which hosts, and what S3 buckets are set to world readable (you'd be surprised how often this happens; aws recently changed their workflow to make it harder to gently caress up this bad, but people are gonna always take shortcuts). Not a week goes by without some world-readable s3 bucket being discovered full of database backups, like the RNC political donor database (this actually happened)
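If you just want a quick and dirty version of the world-readable bucket scan without buying a tool, the ACL half of it is a few lines of boto3. ACL grants are only one of the ways a bucket ends up public (bucket policies and the account-level public access block are separate checks), so treat this as a sketch rather than an audit:

code:
# List buckets whose ACL grants anything to "everyone" or "any AWS user".
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS
    ]
    if public:
        print(f"{name}: publicly granted {', '.join(sorted(set(public)))}")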

Docjowles
Apr 9, 2009

I haven’t run it myself but I’ve come across this thing before https://github.com/duo-labs/cloudmapper

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Resource cleanup is a really common problem, and there are a bunch of tools that will help you find and destroy resources (AWSNuke, CloudCustodian, and a bunch of hacky little tools that various teams have written for themselves like aws-terminator). But I don't know how many of them are good at creating some sort of dependency graph; that's inherently a tricky problem because there isn't always a strict tree hierarchy, and it's not always possible to know if something is used or not (e.g. a Route 53 Hosted Zone isn't explicitly linked to an EC2 resource that might be using it).

AWS is super-annoying in this regard, and I cynically feel they do this deliberately to nickel-and-dime you on dangling resources. Until recently, if you tried to destroy a VPC in the console and there were attached resources, it would error out but wouldn't tell you what they were, setting off a tedious hunt for esoteric resource types. The console now helps to destroy some of them, but its dependency-finding algorithm is opaque. (Azure is easier here; everything must be created in a Resource Group, and if you nuke a Resource Group it destroys all resources inside it, so you don't have to unwind the dependency tree.)

One way you can mitigate this is to enforce good tagging. You can tag almost any resource, there's a single API to bulk tag resources (though it's not 100% comprehensive, and you have the fun of learning all about ARNs), and you can implement policies that resources can't be created without certain tags, and/or that tags are auto-applied. Then when you go to clean up, it's just a matter of searching for resources with the right tags and nuking them all.
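That single API is the Resource Groups Tagging API, and it covers the search side of the cleanup too. A boto3 sketch of both halves (the tag keys/values are whatever convention you pick, these are made up); actual deletion is still per-service, which is where the aws-nuke style tools come in:

code:
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# 1) Find everything already marked for teardown (tag convention is made up).
paginator = tagging.get_paginator("get_resources")
doomed = []
for page in paginator.paginate(TagFilters=[{"Key": "lifecycle", "Values": ["delete-me"]}]):
    doomed.extend(r["ResourceARN"] for r in page["ResourceTagMappingList"])
print(f"{len(doomed)} resources tagged lifecycle=delete-me")

# 2) Bulk-apply a tag to a pile of ARNs (the API caps how many ARNs one call
#    will take, so send small batches).
def tag_in_batches(arns, tags, batch_size=20):
    for i in range(0, len(arns), batch_size):
        resp = tagging.tag_resources(ResourceARNList=arns[i : i + batch_size], Tags=tags)
        if resp.get("FailedResourcesMap"):
            print("failed:", resp["FailedResourcesMap"])

# tag_in_batches(doomed, {"owner": "platform-team"})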

Pile Of Garbage
May 28, 2007



You need to approach it from both ends. Reviewing the bills and Cost Explorer will tell you exactly what is costing you money but you'll need to talk to whoever is using the resources to get the full story.

I've been doing a fair amount of cost control within AWS over the past month with full access to billing at the org root account but limited access to the accounts under it. I'm no expert so my process was to simply look for anomalies and then engage stakeholders directly to discuss them.

Just today I was reviewing Kinesis costs and noticed extended shard hours costs across a bunch of accounts. After discussing with the stakeholder we came to the conclusion that extended retention was required for production but for dev/test they could alter their CI/CD pipelines to not set the retention beyond 24 hours to reduce the cost.
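For reference, if the pipelines don't get changed that particular fix is one API call per stream. Something like the sketch below, run only against dev/test credentials (the "every stream in the account" assumption is mine), walks the streams and drops retention back to the 24 hour default:

code:
# Drop extended retention back to the 24h default on every stream in the current account/region.
# Run with dev/test credentials only; production wants its extended retention kept.
import boto3

kinesis = boto3.client("kinesis")

paginator = kinesis.get_paginator("list_streams")
for page in paginator.paginate():
    for name in page["StreamNames"]:
        summary = kinesis.describe_stream_summary(StreamName=name)["StreamDescriptionSummary"]
        hours = summary["RetentionPeriodHours"]
        if hours > 24:
            print(f"{name}: {hours}h -> 24h")
            kinesis.decrease_stream_retention_period(StreamName=name, RetentionPeriodHours=24)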

Service Control Policies, Resource Tagging Policies and general infrastructure standards in your org can only go so far. To get real savings you'll need to get everyone involved.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Thanks for the suggestions, especially CloudMapper - it has a `find_unused` command which should hopefully do at least some of what I'm looking for.
I think enforcing tagging on everything going forward is a good start also - it'd at least help to identify things that should be cleaned up at some point.

Mr Shiny Pants
Nov 12, 2012
I am looking to automate a lot of our stuff, especially the deployment of resources.

Is Terraform still the way to go? Or are there better alternatives?

I really like what I've seen so far of Terraform.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Mr Shiny Pants posted:

I am looking to automate a lot of our stuff, especially the deployment of resources.

Is Terraform still the way to go? Or are there better alternatives?

I really like what I've seen so far of Terraform.

Needs more context. What clouds are you using? Are you targeting Kubernetes? What's your technology stack? What are you using for CI? What automation do you already have in place? Are you the only person who will be supporting this, or will it be a team effort?

Hadlock
Nov 9, 2004

Terraform is still good, better since v0.11 or 0.12 when they sort of committed (briefly) to a 1.0 spec

Terraform is better if you're doing Career-Driven Development as it's a transferable skill and most everybody interviewing you will nod their head when those words come out of your mouth

edit: kind of wondering what happened to terraform 1.0? I guess they're afraid that if they publish a stable spec, the community will fork it and add all the good features people have been wanting for years and leave them in the dust? Terraform is like this close >.< to being truly great but has a bunch of weird gotchas related to opinionated decisions the community has zero control over

Hadlock fucked around with this message at 20:48 on Jan 30, 2021

vanity slug
Jul 20, 2010

at least 0.14 has forward compatible state files

JehovahsWetness
Dec 9, 2005

bang that shit retarded
My favorite TF bit is that this line has been in the docs since 2017:

quote:

The current implementation of Terraform import can only import resources into the state. It does not generate configuration. A future version of Terraform will also generate configuration.

Any day now! (We have had teams use GoogleCloudPlatform/terraformer to gen a bunch of config but I've never hosed with it.)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
One person on one of our teams did a Terraform configuration generator and it took them like three days to do it. This should not be difficult for a company whose Sentinel engineering team is bigger than most software companies

Mr Shiny Pants
Nov 12, 2012

New Yorp New Yorp posted:

Needs more context. What clouds are you using? Are you targeting Kubernetes? What's your technology stack? What are you using for CI? What automation do you already have in place? Are you the only person who will be supporting this, or will it be a team effort?

Zilch at the moment. I am not looking for anything that only works on a specific cloud; we have way too many configurations running for that to work. Looking at Terraform, the provider model, and the huge open-source effort writing providers behind it, that gives a lot of possibilities (GCP, vSphere, libvirt, etc.). No CI at the moment, but if I can help it, probably Drone. It will be a team effort; I am just asking you guys for some insights.


Hadlock posted:

Terraform is still good, better since v0.11 or 0.12 when they sort of committed (briefly) to a 1.0 spec

Terraform is better if you're doing Career-Driven Development as it's a transferable skill and most everybody interviewing you will nod their head when those words come out of your mouth

edit: kind of wondering what happened to terraform 1.0? I guess they're afraid that if they publish a stable spec, the community will fork it and add all the good features people have been wanting for years and leave them in the dust? Terraform is like this close >.< to being truly great but has a bunch of weird gotchas related to opinionated decisions the community has zero control over

I am not exactly doing this for my career, I just happen to really like what I've seen of it, especially the provider stuff, and it is nice to get behind a technology that won't be dead just after we took an interest in it. :D


Vulture Culture posted:

One person on one of our teams did a Terraform configuration generator and it took them like three days to do it. This should not be difficult for a company whose Sentinel engineering team is bigger than most software companies

Could you expand on this? Edit: Generating Terraform configs from already running deployments I guess?

Mr Shiny Pants fucked around with this message at 18:38 on Jan 31, 2021

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





terraform consumes so much time and attention at every place i've been that used it that i'm convinced it's a scam to ensure full employment of programmers who don't want to program

Mr Shiny Pants
Nov 12, 2012

the talent deficit posted:

terraform consumes so much time and attention at every place i've been that used it that i'm convinced it's a scam to ensure full employment of programmers who don't want to program

Hmmm, that's not good. I am wondering if it would be a good fit to provision VMs and the like, and use something like Ansible for configuration.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

the talent deficit posted:

terraform consumes so much time and attention at every place i've been that used it that i'm convinced it's a scam to ensure full employment of programmers who don't want to program

For us terraform has been more of a "set it and forget it" type of experience, it's been great

Methanar
Sep 26, 2013

by the sex ghost

Mr Shiny Pants posted:

Hmmm, that's not good. I am wondering if it would be a good fit to provision VMs and the like, and use something like Ansible for configuration.

:monocle:

Mr Shiny Pants
Nov 12, 2012

idgi.

fletcher posted:

For us terraform has been more of a "set it and forget it" type of experience, it's been great

Maybe we should hold a poll. :)

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Mr Shiny Pants posted:

Hmmm, that's not good. I am wondering if it would be a good fit to provision VMs and the like, and use something like Ansible for configuration.

Consider treating VMs as cattle vs VMs as pets. Use something like Packer to create immutable VM images that can be freely created or destroyed with minimal ongoing configuration management.

The Fool
Oct 16, 2003


the talent deficit posted:

terraform consumes so much time and attention at every place i've been that used it that i'm convinced it's a scam to ensure full employment of programmers who don't want to program

We have a team (that I am on) of 8 people whose primary responsibility is janitoring terraform

However, we support multiple teams of developers and create and maintain modules because they’re not allowed to deploy resources directly

The Fool
Oct 16, 2003



Terraform for infrastructure deployment then ansible or some other config management is a pretty common pattern and probably what you should be doing

12 rats tied together
Sep 7, 2006

Mr Shiny Pants posted:

Maybe we should hold a poll. :)

This is something I could post about for hours, but I've already done that a bunch of times ITT so I'll just try to summarize my take on this for you: Terraform is a fine tool for simple workloads, it's especially nice as a "high floor" tool where it's impossible for you to be under a certain level of productivity and still count as "using Terraform".

It gets worse the more you rely on it, and especially as the complexity of your deployments gets higher. If you're using it as a feature team, for a volunteer/personal project, or a small infrastructure deployment as part of a PaaS consumer team (ex: dba, stream processing team, etc), it is good enough to basically be an emerging best practice.

If you're on an infrastructure engineering team providing that PaaS abstraction to other feature teams, it's a really bad tool and you shouldn't use it, you'll be able to come up with something way better yourselves.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





12 rats tied together posted:

This is something I could post about for hours, but I've already done that a bunch of times ITT so I'll just try to summarize my take on this for you: Terraform is a fine tool for simple workloads, it's especially nice as a "high floor" tool where it's impossible for you to be under a certain level of productivity and still count as "using Terraform".

It gets worse the more you rely on it, and especially as the complexity of your deployments gets higher. If you're using it as a feature team, for a volunteer/personal project, or a small infrastructure deployment as part of a PaaS consumer team (ex: dba, stream processing team, etc), it is good enough to basically be an emerging best practice.

If you're on an infrastructure engineering team providing that PaaS abstraction to other feature teams, it's a really bad tool and you shouldn't use it, you'll be able to come up with something way better yourselves.

this is basically where i land. if you can do it in an afternoon terraform is fine (but also most things are going to be fine and it comes down mostly to taste and experience). if you are writing terraform to enable other teams to write more terraform you end up with awful messes

The Fool
Oct 16, 2003


The Fool posted:

We have a team (that I am on) of 8 people whose primary responsibility is janitoring terraform

However, we support multiple teams of developers and create and maintain modules because they’re not allowed to deploy resources directly

12 rats tied together posted:

If you're on an infrastructure engineering team providing that PaaS abstraction to other feature teams, it's a really bad tool and you shouldn't use it, you'll be able to come up with something way better yourselves.

the talent deficit posted:

this is basically where i land. if you can do it in an afternoon terraform is fine (but also most things are going to be fine and it comes down mostly to taste and experience). if you are writing terraform to enable other teams to write more terraform you end up with awful messes

:hmmyes:

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Most of our Terraform is set it and forget it here, but if you need to upgrade from 0.11 to 0.12 the lift to redo your whole infrastructure may be so high that it cripples your velocity, so nobody will ever upgrade until the fateful day that Hashicorp announces the 0.11 EOL date.

We had maybe three engineers trying to do testing of infrastructure and a CI/CD workflow around Terraform (or any other tool that could supplant it, like Pulumi honestly), but we timebox stuff to avoid bikeshedding things to death, and if something is super important, like a companywide CI transition, we'll get volunteers from various teams to bang their heads on the effort for a year, depending upon the back-of-the-napkin math we did on a build vs. buy decision.


Building your own tooling as a feature team is a Bad Idea when you should be relying more upon other teams; that's not much of a contest. However, things aren't so cut and dried when you're a tooling and platform team with limited bandwidth. We do a ton of open source ecosystem investment, partly because we hire back from these same communities, and building proprietary stuff for what amounts to internal bikeshedding is silly. We have shied away from it culturally after some serious technical debt incurred from too much proprietary tooling investment that didn't make sense for our hiring guidelines, and maintaining proprietary stuff is what makes people despair here, so we opt for OSS first as a strong rule.


With that said, there's hardly anything out there that's super strong at orchestrating complex deployments out of the box besides Ansible and Saltstack. The configuration and deployment ecosystem for non-containerized software has been pretty much frozen in time since 2015, and there's not much else that can be written out as a general solution, because everyone presumes you're a SaaS-only shop these days, or that a shop running anything on-premise is loaded with cash, a bank or F500 that doesn't blink at paying $100k+ for Terraform and Vault enterprise editions.


Hadlock
Nov 9, 2004

12 rats tied together posted:

This is something I could post about for hours, but I've already done that a bunch of times ITT so I'll just try to summarize my take on this for you: Terraform is a fine tool for simple workloads, it's especially nice as a "high floor" tool where it's impossible for you to be under a certain level of productivity and still count as "using Terraform".

It gets worse the more you rely on it, and especially as the complexity of your deployments gets higher. If you're using it as a feature team, for a volunteer/personal project, or a small infrastructure deployment as part of a PaaS consumer team (ex: dba, stream processing team, etc), it is good enough to basically be an emerging best practice.

If you're on an infrastructure engineering team providing that PaaS abstraction to other feature teams, it's a really bad tool and you shouldn't use it, you'll be able to come up with something way better yourselves.

the talent deficit posted:

this is basically where i land. if you can do it in an afternoon terraform is fine (but also most things are going to be fine and it comes down mostly to taste and experience). if you are writing terraform to enable other teams to write more terraform you end up with awful messes

:same:

If your team has the budget for managed services (terraform etc for standing up managed kubernetes + cloud-level permissions, managed databases, and so on), that's great. Then let teams run wild on top of that and get work done

We have a team of four, soon to be six; I think I am the only person on the team who is not a dedicated infrastructure-as-code janitor. We're trying to bake everything into this twisted spaghetti cloud formation, and one of our new guys was so frustrated with the disaster that he quit rather than keep working on it, and it's only going to get more weird, more twisted, and more spaghetti. There's no end goal; this is, somehow, considered internally as really good, or so upper management has been led to believe. I don't recommend that level of stuff at all

It's taken us six months to roll out a postgres prometheus exporter, just about the simplest app deploy, because our existing internal cloud formation prometheus exporter templates have a bunch of weird dependencies on the cloud formation for our monolith, so it's either build out a whole new set of cloud formation templates from scratch, or build an entire monolith environment stack and try to turn off all the hidden default "on" stuff until you just have the exporter. It is so, so, so hosed

Hadlock fucked around with this message at 01:57 on Feb 2, 2021
