|
New Yorp New Yorp posted:Containers / Docker Compose have nothing to do with Azure. We've made a lot of unit tests. I haven't checked the actual percentage, but our code coverage is pretty good.
|
# ? Mar 9, 2020 15:28 |
|
|
Boz0r posted:We've made a lot of unit tests. I haven't checked the actual percentage, but our code coverage is pretty good. Be aware that code coverage is a largely meaningless number. The only value it provides is telling you which code nobody has even attempted to test. It does not tell you that the tests are well written and valid, or that the code under test implements requirements correctly.
|
# ? Mar 9, 2020 15:58 |
|
We've switched to just measuring the code coverage of changed lines at PR time, as a reminder to the dev of what tests they might be missing. Not sure it's really made any difference but I just checked the overall coverage number for the first time in months and it's slightly up, so I suppose that's good.
|
# ? Mar 13, 2020 23:39 |
|
I'm setting up a Jenkins deployment for the first time, is there anything I should be aware of with master/node communication over a VPN/firewall?
|
# ? Mar 16, 2020 01:39 |
|
1) Why? 2) Are you planning on JNLP or SSH communication? If the latter, you can configure keep-alive settings and so on. JNLP can be pretty brittle over latency-sensitive connections.
|
# ? Mar 16, 2020 02:49 |
|
Gyshall posted:1) why So you can use SSH for node control then? I used JNLP but encountered some trouble, I'm taking a look at it next week so it might be something minor idk. As for the why: big company fuckery. I can place the master in the datacenter but for dumb reasons it's easier to use a server in the office.
|
# ? Mar 16, 2020 04:20 |
|
Yes you can configure nodes to talk to the master over SSH, works fine (to the extent that anything in Jenkins works).
|
# ? Mar 16, 2020 14:38 |
|
If anyone wants them, Jeff Geerling made his books pay what you want down to free on Leanpub: https://leanpub.com/u/geerlingguy The Ansible for Kubernetes book isn't done yet but you get the updates as he goes.
|
# ? Mar 16, 2020 21:55 |
|
Woof Blitzer posted:So you can use SSH for node control then? I used JNLP but encountered some trouble, I'm taking a look at it next week so it might be something minor idk. As for the why: big company fuckery. I can place the master in the datacenter but for dumb reasons it's easier to use a server in the office. IIRC JNLP is only required for Windows nodes (Last I checked). As long as it's a flat L3 network between the master and node with no NAT or proxies it will work fine. Just a matter of getting traffic allowed via whatever firewalls are in the way. Also unless your setup is super complicated it really is preferable to have your master and node on the same network. Pretty sure you can run node on master as well. Only reason to have nodes elsewhere is if they need access to specific resources.
|
# ? Mar 17, 2020 11:49 |
|
I have a C# solution on ADO with a bunch of projects, some target .NET Framework and some target .NET Core. On my own machine, all bins are put in the /bin/ folder, but when ADO builds it, the core projects get put in /bin/Release/ or something similar, which breaks some of my scripts. How do I fix this? EDIT: Fixed it. One of the projects' csproj only had a debug conditional with an absolute path, the others had one for both configurations. Boz0r fucked around with this message at 09:05 on Mar 20, 2020 |
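For anyone hitting the same symptom: old-style Framework projects carry per-configuration <OutputPath> property groups, while SDK-style (.NET Core) projects default to bin\$(Configuration)\$(TargetFramework)\. A sketch of forcing a flat bin\ in an SDK-style csproj (the property names are real MSBuild ones; the flat-bin layout itself is just this thread's convention, not a recommendation):

```xml
<PropertyGroup>
  <!-- Put binaries straight in bin\ for every configuration -->
  <OutputPath>bin\</OutputPath>
  <!-- Stop SDK-style projects from appending netcoreapp3.1 etc. to the path -->
  <AppendTargetFrameworkToOutputPath>false</AppendTargetFrameworkToOutputPath>
</PropertyGroup>
```

An unconditional property group like this applies to both Debug and Release, which avoids the "only Debug had the override" trap described above.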
# ? Mar 20, 2020 08:38 |
|
So right now I have an application up on a server on DigitalOcean. It's just up directly: not behind a load balancer, without a separate database instance, etc. It doesn't get any traffic today but I'm anticipating it getting traffic in the next week or so, and since I have time I'd like to do it right. Right now I just ssh'ed in and set everything up by hand. My gut tells me that I should Dockerize the application and use Terraform to deploy it to however many instances I feel like, with a database off in the background? But as someone who has very little "devops" knowledge, what might be the right approach? I'm willing to move it off DigitalOcean to, say, Heroku, but that's pricier and this is a good opportunity to play with different tools.
|
# ? Mar 21, 2020 22:26 |
|
Ghost of Reagan Past posted:So right now I have an application up on a server on Digitalocean. It's just up directly, not behind a load balancer, without a database instance, etc. It doesn't get any traffic today but I'm anticipating it getting traffic in the next week or so. I have time so I'd like to do it right. Right now I ssh'ed in and set everything but I'd like to do it right. My gut tells me that I should Dockerize the application and use Terraform to deploy it to however many instances I feel like, with a database off in the background? But as someone who has very little "devops" knowledge, what might be the right approach? I'm willing to move it off Digitalocean to, say, Heroku, but that's pricier and this is a good opportunity to play with different tools. Terraform isn't really going to deploy it for you. You'd want something like Ansible to make a pass at it after you've stood up the infrastructure with terraform. Another option on DO is to use their hosted Kubernetes.
|
# ? Mar 21, 2020 23:53 |
|
Ghost of Reagan Past posted:So right now I have an application up on a server on Digitalocean. It's just up directly, not behind a load balancer, without a database instance, etc. It doesn't get any traffic today but I'm anticipating it getting traffic in the next week or so. I have time so I'd like to do it right. Right now I ssh'ed in and set everything but I'd like to do it right. My gut tells me that I should Dockerize the application and use Terraform to deploy it to however many instances I feel like, with a database off in the background? But as someone who has very little "devops" knowledge, what might be the right approach? I'm willing to move it off Digitalocean to, say, Heroku, but that's pricier and this is a good opportunity to play with different tools. If your foreseeable needs would be covered by a few pets, then something like Terraform + Ansible would be fine. If you expect to have several or more instances, then look at Kubernetes on DO. AFAIK k8s is basically how DO supports non-toy use cases, so that's also the way to go if you need autoscaling. Alternately, have you considered a serverless option like Lambda, if your app can be adapted to work in it? The best devops is avoided devops.
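For the Terraform half of that, a minimal sketch of standing up droplets with DO's provider might look like the following. Everything here (names, region, size, image slug, variable name) is an assumption for illustration, and note it only provisions the machines; the app deploy itself would still be Ansible or similar:

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}

resource "digitalocean_droplet" "app" {
  count  = 2                  # "however many instances I feel like"
  name   = "myapp-${count.index}"
  image  = "ubuntu-20-04-x64" # assumed image slug
  region = "nyc1"
  size   = "s-1vcpu-1gb"
}
```

Bumping `count` and re-running `terraform apply` is the whole scaling story at this size, which is part of why the pets-vs-k8s distinction above matters.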
|
# ? Mar 22, 2020 03:48 |
|
Alright thanks. I'll look into this stuff. I've only limited exposure to devops/deployment tools, and have never remotely spun up servers from scratch, so it's a learning experience! I might be able to do serverless but I'd probably have to do some rearchitecting and I don't know how expensive that would get.
|
# ? Mar 22, 2020 19:04 |
Can someone explain something to me in as ELI5 terms as possible? I currently have an object map as a variable like this code:
I'm just wondering what the syntax would be for declaring a large number of defaults like this code:
code:
CyberPingu fucked around with this message at 12:14 on Mar 24, 2020 |
|
# ? Mar 24, 2020 10:56 |
|
It's been a while since I did a survey of the field for CI/CD systems; just changed jobs and get to do it again. Any new players in the last 1-2 years worth checking out? We were on CircleCI at my last place and it was fine; don't mind using it again, but want to be sure I'm not missing anything making waves more recently.
|
# ? Mar 26, 2020 03:28 |
|
Walked posted:It's been a while since I did a survey of the field for CI/CD systems; just changed jobs and get to do it again. GitHub Actions. It's almost completely identical to the YAML-based pipelines in Azure DevOps, yet for some insane reason it's just different enough that it's not directly cross-compatible.
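To illustrate the near-miss similarity, here's the same trivial build in both systems (project name assumed); the structure matches but nearly every key is renamed (`on` vs `trigger`, `runs-on` vs `pool`, `run` vs `script`):

```yaml
# GitHub Actions (.github/workflows/ci.yml)
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: dotnet build MyApp.sln

# The Azure DevOps equivalent (azure-pipelines.yml), for comparison:
# trigger: [main]
# pool:
#   vmImage: ubuntu-latest
# steps:
#   - script: dotnet build MyApp.sln
```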
|
# ? Mar 27, 2020 00:30 |
|
Walked posted:It's been a while since I did a survey of the field for CI/CD systems; just changed jobs and get to do it again. I've been using Drone at my work. I like it, especially when paired with Gitea.
|
# ? Mar 27, 2020 21:29 |
|
Mr Shiny Pants posted:I've been using Drone at my work. I like it, especially when paired with Gitea. Drone always stuck out to me as a sweet option, but I've never heard of anyone else using it. I'll give it another peek
|
# ? Mar 27, 2020 21:34 |
|
Walked posted:Drone always stuck out to me as a sweet option; but never heard anyone else using it. Me neither, but we didn't have anything here yet so I was free to choose whatever struck my fancy. And it did. I like to host my own stuff, so that was a big plus. Compared to all the other stuff I saw (GitLab, Circle, and Jenkins) it feels really clean. IMHO though. It's lightweight; one thing that took me a while to figure out is that everything needs to be done through containers. Copy something? Run a container, etc. Took me a day to set it up and now it pulls repos that have a release flag set, compiles everything inside a container, and deploys to a Docker host. Pretty sweet. Mr Shiny Pants fucked around with this message at 22:04 on Mar 27, 2020 |
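For anyone curious what the "everything runs in a container" model looks like in practice, a minimal .drone.yml sketch (image names, commands, and repo are examples, not a real setup):

```yaml
kind: pipeline
type: docker
name: default

steps:
  - name: build
    image: golang:1.14        # every step runs inside some image
    commands:
      - go build ./...
      - go test ./...

  - name: publish
    image: plugins/docker     # Drone's docker plugin builds and pushes an image
    settings:
      repo: registry.example.com/myapp
      tags: latest
    when:
      event: [tag]            # roughly the "release flag" idea described above
```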
# ? Mar 27, 2020 21:58 |
|
I'm looking for resources on static analysis tools that I can run as part of my CI pipelines for security purposes. Are there any industry standards? I'm interested in .NET Core and Nodejs primarily.
|
# ? Apr 2, 2020 12:17 |
|
SAVE-LISP-AND-DIE posted:I'm looking for resources on static analysis tools that I can run as part of my CI pipelines for security purposes. Are there any industry standards? I'm interested in .NET Core and Nodejs primarily. I'm not sure of any standards for static analysis; I guess the closest thing might be the OWASP Code Review Guide (not sure if that's the latest version; the OWASP website appears to be in a state of flux). Really it's much like selecting a linter: it depends on the language(s) you're targeting and the tool(s) you're using to run your pipelines. Both OWASP and NIST maintain lists of tools that are worth a look. Off the top of my head, GitLab has static analysis built into their CI/CD runner, though only in the paid Enterprise Edition.
|
# ? Apr 2, 2020 12:41 |
SAVE-LISP-AND-DIE posted:I'm looking for resources on static analysis tools that I can run as part of my CI pipelines for security purposes. Are there any industry standards? I'm interested in .NET Core and Nodejs primarily. Sonarqube is a popular one. We use it for Java and a few other languages, I believe it has support for .NET and nodejs.
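For .NET the usual shape is SonarQube's dotnet-sonarscanner wrapping the build; as a pipeline fragment it might look like this sketch (project key, server URL, and token variable names are placeholders):

```yaml
steps:
  - script: |
      dotnet tool install --global dotnet-sonarscanner
      dotnet sonarscanner begin /k:"my-project" /d:sonar.host.url="$(SONAR_URL)" /d:sonar.login="$(SONAR_TOKEN)"
      dotnet build MySolution.sln
      dotnet sonarscanner end /d:sonar.login="$(SONAR_TOKEN)"
```

The begin/end pair is the important part: the scanner hooks the compiler, so the analyzed build has to happen between them.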
|
|
# ? Apr 2, 2020 19:11 |
|
We've got 10+ build pipelines in ADO with solutions that include a bunch of common projects. I'd like to trigger these builds on changes to the individual projects, but also the common projects. Is there a better way of setting up some trigger dependencies instead of adding all the paths manually?
|
# ? Apr 3, 2020 09:49 |
|
Boz0r posted:We've got 10+ build pipelines in ADO with solutions that include a bunch of common projects. I'd like to trigger these builds on changes to the individual projects, but also the common projects. Is there a better way of setting up some trigger dependencies instead of adding all the paths manually? Use versioned packages for common dependencies. You shouldn't force consumers of common libraries to take a new version of a common library, they should opt in on their own schedule.
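Concretely, "opt in on their own schedule" just means each consumer pins an exact version and bumps it deliberately, e.g. in a consumer's csproj (package name and version hypothetical):

```xml
<ItemGroup>
  <!-- Pinned: this app stays on 1.2.3 until its owners choose to upgrade -->
  <PackageReference Include="MyCompany.Common" Version="1.2.3" />
</ItemGroup>
```

A change to the common library then produces a new package version in the feed without triggering anyone else's builds.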
|
# ? Apr 3, 2020 16:17 |
|
New Yorp New Yorp posted:Use versioned packages for common dependencies. You shouldn't force consumers of common libraries to take a new version of a common library, they should opt in on their own schedule. That's the plan for the future, but in this initial phase we're making a shitload of changes.
|
# ? Apr 3, 2020 18:03 |
|
Even still, isn't it more chaotic to always use broken dependencies? Why not pin package versions?
|
# ? Apr 4, 2020 00:19 |
|
Gyshall posted:Even still, isn't it more chaotic to always use broken dependencies? Why not pin package versions? But that would make sense, and our customer doesn't like that kind of thing.
|
# ? Apr 4, 2020 10:17 |
|
New Yorp New Yorp posted:Use versioned packages for common dependencies. You shouldn't force consumers of common libraries to take a new version of a common library, they should opt in on their own schedule. You aren't wrong, but this is a really big "it depends" that's a suboptimal choice in a lot of circumstances/organization designs.
|
# ? Apr 4, 2020 16:42 |
|
The combinatorics of testing so many upstream changes at a time is not feasible for most organizations that should be doing them, unfortunately. People have enough problems understanding how Jenkins matrix jobs work let alone the sheer amount of tests and configuration pedantry that should be run to make fully reproducible builds of all their software artifacts. It's sad how little progress has been made here as an industry when I've been talking about doing this for... ugh, 15+ years now even back into my college days. The fundamental problem is more around people than technology limitations and is increasingly more obvious as I keep fumbling along from company to company.
|
# ? Apr 4, 2020 17:11 |
|
Vulture Culture posted:You aren't wrong, but this is a really big "it depends" that's a suboptimal choice in a lot of circumstances/organization designs I strongly disagree unless you're talking about a very small set of consumers. Having a dozen applications start rolling out new versions because a shared dependency was updated is only going to cause pain. Breaking changes aside, the risk of introducing a new bug, or of fixing a bug that's being treated as correct behavior by a consumer, is high. Nothing is better than scrambling to fix a bunch of applications because someone else needed a change made to a common dependency. Being able to say "I am using version X of this dependency and it works correctly at this point in time" is wonderful. It also makes tracing bugs easier, since you can pinpoint the build where the bug was discovered and work backwards from there to find when it was introduced, which is especially important if the bug results in incorrect data that has to be audited and corrected.
|
# ? Apr 4, 2020 18:22 |
|
New Yorp New Yorp posted:I strongly disagree unless you're talking about a very small set of consumers. Having a dozen applications start rolling out new versions because a shared dependency was updated is only going to cause pain. Breaking changes aside, the risk of introducing a new bug or fixing a bug that's being treated as correct behavior by the consumer is so high. Nothing is better than scrambling to fix a bunch of applications because someone else needed a change made to a common dependency.
The further you get into weird integrations with line-of-business garbage, the more of these you run into, and versioning libraries can make all these problems worse. The way you deal with them and get yourself out of the hole is by doing roughly four things: stabilizing the API, providing a mechanism to coordinate updates between consumers when there's a change, testing updates to these weird integrations across the board when something about the global system is about to change, and refactoring your library out into a service with a reasonable contract so you limit these problems and can move to a supportable end state.
|
# ? Apr 5, 2020 00:59 |
|
That sounds like an absolute hellscape
|
# ? Apr 5, 2020 02:04 |
|
Gyshall posted:That sounds like an absolute hellscape Also known as “the real world” for a ton of enterprises
|
# ? Apr 5, 2020 04:25 |
|
The Fool posted:Also known as “the real world” for a ton of enterprises Vulture Culture fucked around with this message at 06:07 on Apr 5, 2020 |
# ? Apr 5, 2020 06:04 |
|
Given that I'm using systemd to 'serviceize' an express app on a DO droplet, what should be my procedure for updating the app? I've got a myApp.service file in /etc/systemd/system with: code:
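In case it helps others reading along, a typical unit for this kind of setup looks roughly like the following. Every path, name, and user here is an assumption for illustration, not Newf's actual file:

```ini
[Unit]
Description=myApp express service
After=network.target

[Service]
ExecStart=/usr/bin/node /srv/myApp/app.js
WorkingDirectory=/srv/myApp
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```

After editing the unit file itself, run `systemctl daemon-reload`; after deploying new app code, a plain `systemctl restart myApp` is enough.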
|
# ? Apr 12, 2020 05:39 |
|
Newf posted:Given that I'm using systemd to 'serviceize' an express app on a DO droplet, what should be my procedure for updating the app? I like the pattern of dropping new code on the filesystem and then having a symlink pointing to the version you want. code:
Replacing files that are running is a bit dangerous and possibly slows down rollbacks if necessary. Methanar fucked around with this message at 06:50 on Apr 12, 2020 |
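A minimal sketch of that symlink pattern with plain coreutils (all paths here are made up; substitute your real deploy root):

```shell
set -eu
base=$(mktemp -d)                          # stand-in for something like /opt/myApp
mkdir -p "$base/releases/v1" "$base/releases/v2"

# Point "current" at the live release. The -n flag replaces the link itself
# rather than creating a new link inside the old target directory.
ln -sfn "$base/releases/v1" "$base/current"

# Deploy v2: drop the new files alongside the old, then flip the link.
# Rollback is the same command with the previous path.
ln -sfn "$base/releases/v2" "$base/current"
readlink "$base/current"
```

The systemd unit's ExecStart would reference the `current` path, followed by a service restart after each flip.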
# ? Apr 12, 2020 06:47 |
|
Methanar posted:I like the pattern of dropping new code on the filesystem and then having a symlink pointing to the version you want. This is a better pattern than what I have going now for sure, thanks. To clarify, I still have to restart the service, because it's running something in-memory that needs to be reloaded. Correct?
|
# ? Apr 12, 2020 22:06 |
|
Newf posted:This is a better pattern than what I have going now for sure, thanks. To clarify, I still have to restart the service, because it's running something in-memory that needs to be reloaded. Correct? Yes.
|
# ? Apr 12, 2020 22:53 |
|
|
I was managing an express app, and before we moved to Docker containers we were using pm2 and it worked well. Pretty sure it had a watch option to check for changes, but you can also use the restart command after you deploy your code. Plus it can run a pool of app instances if you want to run multiple copies on a multi-core system, and it provides a graceful shutdown mechanism.
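pm2's process file captures most of that in one place; a sketch of an ecosystem.config.js (app name, script path, and timeout are assumptions):

```javascript
// Hypothetical pm2 ecosystem file for an express app
const config = {
  apps: [{
    name: 'myApp',
    script: './app.js',
    instances: 'max',        // one worker per CPU core
    exec_mode: 'cluster',    // cluster mode shares the listen port across workers
    watch: false,            // set true to restart on file changes
    kill_timeout: 5000       // grace period (ms) for shutdown handlers
  }]
};

module.exports = config;
```

Then `pm2 start ecosystem.config.js` brings the pool up, and `pm2 reload myApp` does a rolling restart after a deploy.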
|
# ? Apr 12, 2020 22:55 |