Boz0r
Sep 7, 2006
The Rocketship in action.

New Yorp New Yorp posted:

Containers / Docker Compose have nothing to do with Azure.

Don't prioritize integration tests, prioritize unit tests. Unit tests verify correct behavior of units of code (classes, methods, etc). Integration tests verify that the correctly-working units of code can communicate to other correctly-working units of code (service A can talk to service B). Both serve an important purpose, but the bulk of your test effort should go into unit tests.
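A toy sketch of the distinction quoted above (everything here is invented for illustration):

```python
# A "unit": pure logic with no external dependencies.
def total_with_tax(subtotal: float, rate: float) -> float:
    return round(subtotal * (1 + rate), 2)

# Unit test: fast, isolated, no services involved.
assert total_with_tax(10.00, 0.25) == 12.50

# An integration test, by contrast, would verify that correctly-working
# units can talk to each other -- e.g. that a checkout service can
# actually reach a tax service. That needs both services running,
# which is part of why there should be fewer of those than unit tests.
```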

We've made a lot of unit tests. I haven't checked the actual percentage, but our code coverage is pretty good.


New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Boz0r posted:

We've made a lot of unit tests. I haven't checked the actual percentage, but our code coverage is pretty good.

Be aware that code coverage is a largely meaningless number. The only value it provides is telling you which code nobody has even attempted to test. It does not tell you that the tests are well-written or valid, or that the code under test implements requirements correctly.
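A toy illustration of that point (all names invented): both branches of `divide()` get executed, so line coverage reports 100%, but the first "test" would pass even if the function were completely wrong.

```python
def divide(a, b):
    if b == 0:
        return None
    return a / b

def test_divide_coverage_only():
    # Executes every line, asserts nothing: 100% coverage, zero verification.
    divide(4, 2)
    divide(4, 0)

def test_divide_behavior():
    # Actually pins down the contract.
    assert divide(4, 2) == 2.0
    assert divide(4, 0) is None

test_divide_coverage_only()
test_divide_behavior()
```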

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!
We've switched to just measuring the code coverage of changed lines at PR time, as a reminder to the dev of what tests they might be missing. Not sure it's really made any difference but I just checked the overall coverage number for the first time in months and it's slightly up, so I suppose that's good.

Woof Blitzer
Dec 29, 2012

I'm setting up a Jenkins deployment for the first time, is there anything I should be aware of with master/node communication over a VPN/firewall?

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
1) why
2) are you planning on JNLP/SSH communication? If the latter, you can configure keep alive settings and so on. JNLP can be pretty brittle in latency sensitive connections.

Woof Blitzer
Dec 29, 2012


Gyshall posted:

1) why
2) are you planning on JNLP/SSH communication? If the latter, you can configure keep alive settings and so on. JNLP can be pretty brittle in latency sensitive connections.

So you can use SSH for node control then? I used JNLP but encountered some trouble, I'm taking a look at it next week so it might be something minor idk. As for the why: big company fuckery. I can place the master in the datacenter but for dumb reasons it's easier to use a server in the office.

Docjowles
Apr 9, 2009

Yes, you can configure nodes to talk to the master over SSH. Works fine (to the extent that anything in Jenkins works).

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
If anyone wants them, Jeff Geerling made his books pay what you want down to free on Leanpub: https://leanpub.com/u/geerlingguy

The Ansible for Kubernetes book isn't done yet but you get the updates as he goes.

Pile Of Garbage
May 28, 2007



Woof Blitzer posted:

So you can use SSH for node control then? I used JNLP but encountered some trouble, I'm taking a look at it next week so it might be something minor idk. As for the why: big company fuckery. I can place the master in the datacenter but for dumb reasons it's easier to use a server in the office.

IIRC JNLP is only required for Windows nodes (last I checked). As long as it's a flat L3 network between the master and node with no NAT or proxies, it will work fine. It's just a matter of getting traffic allowed through whatever firewalls are in the way.

Also, unless your setup is super complicated, it really is preferable to have your master and node on the same network. Pretty sure you can run a node on the master as well. The only reason to have nodes elsewhere is if they need access to specific resources.

Boz0r
Sep 7, 2006
The Rocketship in action.
I have a C# solution on ADO with a bunch of projects, some target .NET Framework and some target .NET Core. On my own machine, all bins are put in the /bin/ folder, but when ADO builds it, the core projects get put in /bin/Release/ or something similar, which breaks some of my scripts. How do I fix this?

EDIT: Fixed it. One of the projects' csproj only had a debug conditional with an absolute path, the others had one for both configurations.
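For anyone hitting the same thing, the shape of the fix is giving both configurations a conditional `OutputPath` instead of a Debug-only one with an absolute path (MSBuild property names are real; the paths are illustrative):

```xml
<!-- Both configurations get the same relative output path, so local
     builds and ADO builds land in the same place -->
<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
  <OutputPath>bin\</OutputPath>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <OutputPath>bin\</OutputPath>
</PropertyGroup>
```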

Boz0r fucked around with this message at 09:05 on Mar 20, 2020

Ghost of Reagan Past
Oct 7, 2003

rock and roll fun
So right now I have an application up on a server on DigitalOcean. It's just up directly: not behind a load balancer, no separate database instance, etc. It doesn't get any traffic today, but I'm anticipating traffic in the next week or so, and I have time, so I'd like to do it right. Right now I've just SSH'd in and set everything up by hand. My gut tells me that I should Dockerize the application and use Terraform to deploy it to however many instances I feel like, with a database off in the background? But as someone who has very little "devops" knowledge, what might be the right approach? I'm willing to move it off DigitalOcean to, say, Heroku, but that's pricier, and this is a good opportunity to play with different tools.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Ghost of Reagan Past posted:

So right now I have an application up on a server on DigitalOcean. It's just up directly: not behind a load balancer, no separate database instance, etc. It doesn't get any traffic today, but I'm anticipating traffic in the next week or so, and I have time, so I'd like to do it right. Right now I've just SSH'd in and set everything up by hand. My gut tells me that I should Dockerize the application and use Terraform to deploy it to however many instances I feel like, with a database off in the background? But as someone who has very little "devops" knowledge, what might be the right approach? I'm willing to move it off DigitalOcean to, say, Heroku, but that's pricier, and this is a good opportunity to play with different tools.

Terraform isn't really going to deploy it for you. You'd want something like Ansible to make a pass at it after you've stood up the infrastructure with terraform.

Another option on DO is to use their hosted Kubernetes.
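To make that split concrete: the Terraform half can be as small as this sketch (the resource type and arguments are real DigitalOcean provider ones, but the name, region, size, and image slug are illustrative), with Ansible or similar doing the app deployment afterwards.

```hcl
# Terraform stands up the infrastructure only; deploying the app itself
# is a separate step (Ansible, a user-data script, etc.)
resource "digitalocean_droplet" "app" {
  name   = "app-01"
  image  = "ubuntu-20-04-x64" # illustrative image slug
  region = "nyc3"
  size   = "s-1vcpu-1gb"
}
```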

Nomnom Cookie
Aug 30, 2009



Ghost of Reagan Past posted:

So right now I have an application up on a server on DigitalOcean. It's just up directly: not behind a load balancer, no separate database instance, etc. It doesn't get any traffic today, but I'm anticipating traffic in the next week or so, and I have time, so I'd like to do it right. Right now I've just SSH'd in and set everything up by hand. My gut tells me that I should Dockerize the application and use Terraform to deploy it to however many instances I feel like, with a database off in the background? But as someone who has very little "devops" knowledge, what might be the right approach? I'm willing to move it off DigitalOcean to, say, Heroku, but that's pricier, and this is a good opportunity to play with different tools.

If your foreseeable needs would be covered by a few pets, then something like Terraform + Ansible would be fine. If you expect to have more than a few instances, look at Kubernetes on DO. AFAIK k8s is basically how DO supports non-toy use cases, so that is also the way to go if you need autoscaling.

Alternately, have you considered a serverless thing like Lambda, if your app can be adapted to work in it? The best devops is avoided devops.

Ghost of Reagan Past
Oct 7, 2003

rock and roll fun
Alright thanks. I'll look into this stuff. I've only limited exposure to devops/deployment tools, and have never remotely spun up servers from scratch, so it's a learning experience!

I might be able to do serverless but I'd probably have to do some rearchitecting and I don't know how expensive that would get.

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.
Can someone explain something to me in as ELI5 terms as possible?

I currently have an object map as a variable, like this:

code:

variable "metric_map" {
  description = "A map of filter metrics."
  type = map(object({
    pattern     = string
    description = string
  }))
}
I need to define a load of defaults for that and keep them in the body of the vars.tf file, so not in a terraform.tfvars file.

I'm just wondering what the syntax would be for declaring a mass amount of defaults like this:


code:
metric_map = {
  "UnauthorizedAPICalls" = {
    pattern = "{($.errorCode= \"*UnauthorizedOperation\") || ($.errorCode= \"AccessDenied*\")}"
    description = "A user in the account has made an unauthorized API call"
  }
  "ConsoleSignInWithoutMFA" = {
    pattern = "{($.eventName= \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\")}"
    description = "User without MFA has signed into the console"
  }
  "RootUserConsoleSignIn" = {
    pattern = "{$.userIdentity.type = \"Root\" && $.eventType = \"AwsConsoleSignIn\"}"
    description = "Root user signed in to the console"
  },
}
E. nevermind, sorted this pretty easily, just needed a default inside the variable block:

code:

variable "metric_map" {
  description = "A map of filter metrics."
  type = map(object({
    pattern     = string
    description = string
  }))
  default = {<values for map>}
}
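Filled in with one of the entries from the question, the whole declaration reads (same syntax, just with one default inlined):

```hcl
variable "metric_map" {
  description = "A map of filter metrics."
  type = map(object({
    pattern     = string
    description = string
  }))
  default = {
    "RootUserConsoleSignIn" = {
      pattern     = "{$.userIdentity.type = \"Root\" && $.eventType = \"AwsConsoleSignIn\"}"
      description = "Root user signed in to the console"
    }
  }
}
```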

CyberPingu fucked around with this message at 12:14 on Mar 24, 2020

Walked
Apr 14, 2003

It's been a while since I did a survey of the field for CI/CD systems; just changed jobs and get to do it again.

Any new players in the last 1-2 years worth checking out? We were on CircleCI at my last place and it was fine; I don't mind using it again, but I want to be sure I'm not missing anything making waves more recently.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Walked posted:

It's been a while since I did a survey of the field for CI/CD systems; just changed jobs and get to do it again.

Any new players in the last 1-2 years worth checking out? We were on CircleCI at my last place and it was fine; I don't mind using it again, but I want to be sure I'm not missing anything making waves more recently.

GitHub Actions. It's almost completely identical to the YAML-based pipelines in Azure DevOps, yet for some insane reason just different enough that it's not directly cross-compatible.

Mr Shiny Pants
Nov 12, 2012

Walked posted:

It's been a while since I did a survey of the field for CI/CD systems; just changed jobs and get to do it again.

Any new players in the last 1-2 years worth checking out? We were on CircleCI at my last place and it was fine; I don't mind using it again, but I want to be sure I'm not missing anything making waves more recently.

I've been using Drone at my work. I like it, especially when paired with Gitea.

Walked
Apr 14, 2003

Mr Shiny Pants posted:

I've been using Drone at my work. I like it, especially when paired with Gitea.

Drone always stuck out to me as a sweet option, but I never heard of anyone else using it.

I’ll give it another peek

Mr Shiny Pants
Nov 12, 2012

Walked posted:

Drone always stuck out to me as a sweet option, but I never heard of anyone else using it.

I’ll give it another peek

Me neither, but we did not have anything here yet so I was free to choose whatever struck my fancy. And it did. :) I like to host my own stuff so that was a big plus.

Compared to all the other stuff I saw (GitLab, Circle, and Jenkins) it feels really clean. IMHO though.

It's lightweight. One thing that took me a while to figure out is that everything needs to be done through containers. Copy something? Run a container, etc. etc.

Took me a day to set it up, and now it pulls repos that have a release flag set, compiles everything inside a container, and deploys to a Docker host. Pretty sweet.

Mr Shiny Pants fucked around with this message at 22:04 on Mar 27, 2020

SAVE-LISP-AND-DIE
Nov 4, 2010
I'm looking for resources on static analysis tools that I can run as part of my CI pipelines for security purposes. Are there any industry standards? I'm interested in .NET Core and Nodejs primarily.

Pile Of Garbage
May 28, 2007



SAVE-LISP-AND-DIE posted:

I'm looking for resources on static analysis tools that I can run as part of my CI pipelines for security purposes. Are there any industry standards? I'm interested in .NET Core and Nodejs primarily.

I'm not sure of any standards for static analysis; I guess the closest thing might be the OWASP Code Review Guide (not sure if that's the latest version, the OWASP website appears to be in a state of flux).

Really it's much like selecting a linter: it depends on the language(s) you're targeting and the tool(s) you're using to run your pipelines. Both OWASP and NIST maintain lists that are worth a look. Off the top of my head, GitLab has static analysis built into their CI/CD runner, but only in the paid Enterprise Edition.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

SAVE-LISP-AND-DIE posted:

I'm looking for resources on static analysis tools that I can run as part of my CI pipelines for security purposes. Are there any industry standards? I'm interested in .NET Core and Nodejs primarily.

SonarQube is a popular one. We use it for Java and a few other languages; I believe it has support for .NET and Node.js.

Boz0r
Sep 7, 2006
The Rocketship in action.
We've got 10+ build pipelines in ADO with solutions that include a bunch of common projects. I'd like to trigger these builds on changes to the individual projects, but also the common projects. Is there a better way of setting up some trigger dependencies instead of adding all the paths manually?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Boz0r posted:

We've got 10+ build pipelines in ADO with solutions that include a bunch of common projects. I'd like to trigger these builds on changes to the individual projects, but also the common projects. Is there a better way of setting up some trigger dependencies instead of adding all the paths manually?

Use versioned packages for common dependencies. You shouldn't force consumers of a common library to take a new version; they should opt in on their own schedule.
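In NuGet terms, that's just each consumer holding an explicit version in its csproj (the package name here is invented) and bumping it deliberately:

```xml
<!-- The consumer opts in to upgrades by editing this version on its own schedule -->
<ItemGroup>
  <PackageReference Include="MyCompany.Common" Version="1.4.2" />
</ItemGroup>
```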

Boz0r
Sep 7, 2006
The Rocketship in action.

New Yorp New Yorp posted:

Use versioned packages for common dependencies. You shouldn't force consumers of a common library to take a new version; they should opt in on their own schedule.

That's the plan for the future, but in this initial phase we're making a shitload of changes.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Even still, isn't it more chaotic to always use broken dependencies? Why not pin package versions?

Boz0r
Sep 7, 2006
The Rocketship in action.

Gyshall posted:

Even still, isn't it more chaotic to always use broken dependencies? Why not pin package versions?

But that would make sense, and our customer doesn't like that kind of thing.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

New Yorp New Yorp posted:

Use versioned packages for common dependencies. You shouldn't force consumers of a common library to take a new version; they should opt in on their own schedule.
You aren't wrong, but this is a really big "it depends" that's a suboptimal choice in a lot of circumstances/organization designs

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The combinatorics of testing so many upstream changes at a time is not feasible for most organizations that should be doing them, unfortunately. People have enough problems understanding how Jenkins matrix jobs work let alone the sheer amount of tests and configuration pedantry that should be run to make fully reproducible builds of all their software artifacts. It's sad how little progress has been made here as an industry when I've been talking about doing this for... ugh, 15+ years now even back into my college days. The fundamental problem is more around people than technology limitations and is increasingly more obvious as I keep fumbling along from company to company.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug


Vulture Culture posted:

You aren't wrong, but this is a really big "it depends" that's a suboptimal choice in a lot of circumstances/organization designs

I strongly disagree unless you're talking about a very small set of consumers. Having a dozen applications start rolling out new versions because a shared dependency was updated is only going to cause pain. Breaking changes aside, the risk of introducing a new bug or fixing a bug that's being treated as correct behavior by the consumer is so high. Nothing is better than scrambling to fix a bunch of applications because someone else needed a change made to a common dependency.

Being able to say "I am using version X of this dependency and it works correctly at this point in time" is wonderful. It also makes tracing bugs easier, since you can pinpoint the build where the bug was discovered and work backwards from there to find when it was introduced, which is especially important if the bug results in incorrect data that has to be audited and corrected.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

New Yorp New Yorp posted:

I strongly disagree unless you're talking about a very small set of consumers. Having a dozen applications start rolling out new versions because a shared dependency was updated is only going to cause pain. Breaking changes aside, the risk of introducing a new bug or fixing a bug that's being treated as correct behavior by the consumer is so high. Nothing is better than scrambling to fix a bunch of applications because someone else needed a change made to a common dependency.

Being able to say "I am using version X of this dependency and it works correctly at this point in time. " Is wonderful. It also makes tracing bugs easier since you can pinpoint the build where the bug was discovered and work backwards from there to find when it was introduced, which is especially important if the bug results in incorrect data that has to be audited and corrected.
I guess I should clarify that I'm talking about bad systems and the roadmap to fixing them, and that versioned libraries are a desired end state, but not a next step. Out there in the world, there's a lot of tightly-coupled bullshit that falls into one of the following situations:

  • Libraries without solid API contracts (protoduction)
  • Libraries that broker client access to systems without solid API contracts
  • Libraries that should be services, directly accessing data/databases without fixed and supportable schemas

The further you get into weird integrations with line-of-business garbage, the more of these you run into, and versioning libraries can make all these problems worse. And the way you have to deal with these and get yourself out of the hole is by doing ~four things: stabilizing the API, providing a mechanism to coordinate updates between consumers when there's a change, testing updates to these weird integrations across the board when something about the global system is about to change, and refactoring your library out into a service with a reasonable contract so you limit these problems and you can move to a supportable end state.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
That sounds like an absolute hellscape

The Fool
Oct 16, 2003


Gyshall posted:

That sounds like an absolute hellscape

Also known as “the real world” for a ton of enterprises

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Fool posted:

Also known as “the real world” for a ton of enterprises
It's true for almost every ERP integration in history but also represents about 90% of production configuration management code. And don't get me started on scientific computing pipelines

Vulture Culture fucked around with this message at 06:07 on Apr 5, 2020

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
Given that I'm using systemd to 'serviceize' an express app on a DO droplet, what should be my procedure for updating the app?

I've got a myApp.service file in /etc/systemd/system with:

code:
[Service]
ExecStart=/usr/bin/node /path/to/app.js
Restart=always
User=nobody
Group=nogroup
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/path/to
StartLimitIntervalSec=0

[Install]
WantedBy=multi-user.target
When I have a newly built app.js (and /node_modules, etc), is it safe / best practices / ok practices / not too stupid to just put the updated files in place and then run "systemctl restart myApp"? Is reload more appropriate?

Methanar
Sep 26, 2013

by the sex ghost

Newf posted:

Given that I'm using systemd to 'serviceize' an express app on a DO droplet, what should be my procedure for updating the app?

I've got a myApp.service file in /etc/systemd/system with:

When I have a newly built app.js (and /node_modules, etc), is it safe / best practices / ok practices / not too stupid to just put the updated files in place and then run "systemctl restart myApp"? Is reload more appropriate?

I like the pattern of dropping new code on the filesystem and then having a symlink pointing to the version you want.
code:
 /path/to/app.js 
would be a symlink that you repoint from /path/v1/app.js to /path/v2/app.js. You'd need to make sure your workdir is also a symlink.

Replacing files that are in use is a bit dangerous, and it can slow down rollbacks if you need one.
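The flip itself can be done atomically with `ln -sfn`. A sketch (paths are made up, demoing in /tmp so nothing real is touched):

```shell
# Each release lives in its own directory; "current" is a symlink you repoint.
mkdir -p /tmp/deploy-demo/v1 /tmp/deploy-demo/v2

# first release
ln -sfn /tmp/deploy-demo/v1 /tmp/deploy-demo/current

# deploy v2: -n replaces the link itself rather than writing inside its
# target, so the switch is one atomic step and rollback is just pointing
# the link back at v1
ln -sfn /tmp/deploy-demo/v2 /tmp/deploy-demo/current
readlink /tmp/deploy-demo/current   # prints /tmp/deploy-demo/v2

# then restart the service so the new code is actually loaded, e.g.:
# systemctl restart myApp
```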

Methanar fucked around with this message at 06:50 on Apr 12, 2020

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

Methanar posted:

I like the pattern of dropping new code on the filesystem and then having a symlink pointing to the version you want.
code:
 /path/to/app.js 
would be a symlink that you repoint from /path/v1/app.js to /path/v2/app.js. You'd need to make sure your workdir is also a symlink.

Replacing files that are in use is a bit dangerous, and it can slow down rollbacks if you need one.

This is a better pattern than what I have going now for sure, thanks. To clarify, I still have to restart the service, because it's running something in-memory that needs to be reloaded. Correct?

Methanar
Sep 26, 2013

by the sex ghost

Newf posted:

This is a better pattern than what I have going now for sure, thanks. To clarify, I still have to restart the service, because it's running something in-memory that needs to be reloaded. Correct?

Yes.


JHVH-1
Jun 28, 2002
I was managing an express app, and before we moved to Docker containers we were using pm2, and it worked well. Pretty sure it had a watch command to check for changes, but you can also use the restart command after you deploy your code. Plus it can run an app pool if you want to run multiple copies on a multi-core system, and it provides a graceful shutdown mechanism.
