beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
I run a small consulting business, doing development for a bunch of different clients. I've been looking at setting up a CI/build server to get things a bit more organized. Most of our dev is C# - typically MVC for web, a mixture of WCF and Web API for services, and some Windows services/console apps as well. However, we also do a reasonable amount of work for a couple of clients in C on the Raspberry Pi (we currently build everything using the Code::Blocks IDE on the Pi itself, but do most of the dev on Windows under Visual Studio), and some embedded C dev on credit card terminals as well (NetBeans IDE with Cygwin+ARM cross compilers). There's also one Java project using Eclipse and Maven and Spring and some other junk.

What I'm hoping to find is some setup which can do a nightly (as well as on-demand) build of all the projects that have changed, along with updating the version numbers for all the assemblies/projects being built, and committing the updated version resources back to source control. I know about the x.x.*.* versioning that Visual Studio can do, but I want something a bit more deterministic. Post-build: if it's a web project, deploy to IIS somewhere; if it's a service, stop the existing one, deploy, and restart; and if it's just a standalone app or one of the Pi/credit card projects, just push it to a dedicated share with the current version number in the name somewhere. What I want is for all the nightly builds/deploys to update the dev environment automatically. Then, once we're ready to move something into test, just change the deployment URLs and credentials and initiate an on-demand build, and have everything update the test environment as well. I should be able to create some shell scripts or makefiles or whatever for the Raspberry Pi and credit card projects if there aren't any already, as well as writing some helper apps to update the build numbers for the non .NET projects where necessary. Source control is svn and git at the moment, with potentially some clients using TFS in the future.

Is there a single product that can do this? Or a set of products that work reasonably well together? We've had a few instances in the past with source control mismanagement and having difficulty tying specific versions of binaries to source control revisions, so I'm wanting to automate as much of this as possible.
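
For the version-stamping part specifically, this is roughly the kind of helper I'm picturing running on the build server for the .NET projects - a script that writes a deterministic version into AssemblyInfo.cs and commits it back. The path, version scheme and commit message are just placeholders:

code:
# Sketch only - project path, version scheme and repo layout are placeholders
$assemblyInfo = 'MyProject\Properties\AssemblyInfo.cs'
$buildNumber  = $env:BUILD_NUMBER          # assuming the CI server exposes a build counter like this
$newVersion   = "1.2.$buildNumber.0"

# Stamp the deterministic version into AssemblyInfo.cs
(Get-Content $assemblyInfo) `
    -replace 'AssemblyVersion\(".*"\)', "AssemblyVersion(`"$newVersion`")" `
    -replace 'AssemblyFileVersion\(".*"\)', "AssemblyFileVersion(`"$newVersion`")" |
    Set-Content $assemblyInfo

# Commit the updated version resource back to source control (svn here; git would be similar)
svn commit $assemblyInfo -m "CI: bump version to $newVersion"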

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

Ithaqua posted:

This is not a good practice, IMO. You should be building a set of binaries, testing them against a dev/integration environment, then promoting those binaries to the next environment in the release path. There are tools out there to help you manage releases like this. Overextending the build process to deploy software is really painful and inflexible.

That definitely makes sense. Not sure what I was thinking, wanting to rebuild for the test environment rather than just re-deploy.

I think it's time to stop staring at Wikipedia feature tables trying to find something that does everything I need, and just install Jenkins and TeamCity in VMs and see which is easier to configure and manage for my needs.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Not sure if this is the right thread for it, but do you guys have any recommendations for free Windows server monitoring tools? Free because I'll need to monitor like 2-3 servers at the most, and budget is tight at the moment. Not even sure what it is I need yet, but I'm guessing a view of CPU, RAM, disk and network utilization for a given period? After years of letting my clients' IT depts worry about production environments, I'm finally getting close to hosting a couple of products of my own and aside from watching logs and keeping up with the backups and updates, I also want to make sure I'm not over/under-utilizing my hardware, and also keep an eye out for unusual spikes in resource usage.
The servers will basically be running IIS and SQL Server, and maybe one or two Windows services, and will mostly be servicing mobile apps and 3rd-party systems via web API calls.
Feel free to call me out if my approach is totally wrong also. I'm a dev, not an IT guy, and I plan to hire one once these products are making enough to pay for someone, but in the meantime it'll just be me logging in via remote desktop every day and being thankful they haven't been owned by CryptoLocker yet.
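
In the meantime, I guess I can at least eyeball the basics myself from PowerShell with Get-Counter while I evaluate proper tools - something like this (counter paths are the standard Windows ones; the output file is just an example):

code:
# Sample CPU, memory, disk and network every 5 seconds for a minute and save to a perfmon log
Get-Counter -Counter @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\LogicalDisk(_Total)\% Free Space',
    '\Network Interface(*)\Bytes Total/sec'
) -SampleInterval 5 -MaxSamples 12 |
    Export-Counter -Path C:\perflogs\baseline.blg -FileFormat BLG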

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

Internet Explorer posted:

PRTG has a 100 sensor free version.

EssOEss posted:

I use Prometheus as my go-to monitoring solution. It is entirely free and very capable.

Thanks for both these suggestions! I think I'll install PRTG for now, and play around with Prometheus until I have worked out how to configure it properly, and then decide which to stick with.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
The past few pages of container-talk have convinced me to finally try to learn exactly how they work and start using them myself. Been reading through some documentation while waiting for things to download, but there's something that's confusing me so far:

I'm developing on Windows 10 and deploying to Windows Server. Docker says that it doesn't require a VM and can run directly off the underlying OS. But since there's no Windows 10 base image, it looks like I have to download a Windows Server Core image. This makes sense.

So is Docker going to run Windows Server in a VM off Windows 10, and then run my containers in the VM?
On my target machine, if the Windows Server version matches the docker base image OS version, will it run the containers directly off the underlying OS via the docker engine, or will it still create a Windows Server VM on my Windows Server machine regardless?
If it can create VMs when the OS that the container is configured for doesn't match the underlying OS, can I run containers for Linux and Windows on the same box or is it limited to just one OS type per docker engine? Related, could I run containers for Windows Server Core and Windows Server Nano side by side on the same box, even though they are different base images?

All very basic stuff I'm sure, but I've not had a chance to get started with containers before now, and I want to make sure I understand things properly and not half-assed.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

New Yorp New Yorp posted:

Stop thinking in terms of VMs. Containers aren't VMs, they are isolation layers. Windows containers run on the Hyper-V hypervisor to get access to system resources (CPU, memory, disk, etc). The "base image" is more of a set of basic capabilities than it is a full OS. This is why containers start in a few seconds instead of a minute or two -- starting a container doesn't involve booting up a full kernel, it just hooks into the already-running kernel. This is, of course, a massive simplification.

Sure, but my understanding/experience of Hyper-V so far has been as a means to run VMs, hence my confusion. I was also trying to work out how it would present the Server Core base image to the container when it's actually running on Windows 10, but I guess since Windows 10 and Server Core largely share the same kernel, that makes it a lot easier.
This does make things clearer for me though, thanks!
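
For anyone else getting tripped up on the same thing: from what I can tell, the isolation mode is per-container. On a Windows Server host whose version matches the base image, containers can share the host kernel directly (process isolation), while on Windows 10 each Windows container runs inside its own lightweight Hyper-V utility VM (hyperv isolation). Roughly like this (the image tag is just an example - use whichever Server Core image you pulled):

code:
# Windows Server host matching the base image: containers share the host kernel
docker run --rm --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

# Windows 10 (or mismatched versions): each container gets a lightweight Hyper-V utility VM
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

And Linux containers on the same box seem to go through a separate Linux VM entirely - Docker for Windows switches between Linux and Windows container modes rather than running both kinds under one engine at once.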

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Azure VSTS question

I'm slowly trying to bring my dev practices closer to tyool 2018. This week's goal is to get all my common/shared DLLs building automatically and publishing to a private NuGet repo (all using Visual Studio Online).

I have a new git repo created for the shared projects, and I'm starting with a common one with no other dependencies to make things as simple as possible, but I'm hitting a snag: the dll is signed, and the .pfx is protected by a password. Obviously when building locally this works fine because I've given VS the password. But the Azure build agent doesn't have the password, so I get the following error in the build log:

quote:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(3156,5): Error MSB3325: Cannot import the following key file: blah.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_3F96EE9FFF3951D5

I found some snackoverflow posts which aren't entirely helpful either. For example, this one suggests importing the signing cert in a step prior to the build, which I've done, similar to this:

code:
$pfxpath = 'path.to.pfx'
$password = 'password'

Add-Type -AssemblyName System.Security

# Load the password-protected .pfx and keep the private key available after import
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import($pfxpath, $password, [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]"PersistKeySet")

# Add it to the current user's personal ("My") certificate store
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList "My", "CurrentUser"
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]"ReadWrite")
$store.Add($cert)
$store.Close()

This runs fine, but it doesn't seem to import the cert to the place the build tool expects to find it, because I still get the error:

quote:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(3156,5): Error MSB3325: Cannot import the following key file: blah.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_13620F4AB3C1C9B0

I'm guessing I need to modify my PowerShell script to install the cert into the specified key container name, but that name changes each time and I don't know how to query it.

So what's the best way to get VSO to build a signed assembly? I don't mind using a non-password-protected .pfx, but that means I can't check it into git, and I don't know how to get the pfx into the build step without it being in the repo. Any ideas?

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

uncurable mlady posted:

i dealt with that exact same problem; you can't import that pfx in a non-interactive fashion in any way, so your only option is to create a new keyfile (the extension MS uses for this is snk, i believe), change the path in the csproj to point to the snk file, then commit that to vcs.

ed: you can always host the snk file on s3 or the azure equiv. then download it as part of the build if you're unable to check it into source control.

Thanks! That worked perfectly
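
For anyone who runs into the same MSB3325 error: the approach that worked was keeping the key out of the repo, hosting it somewhere private, and pulling it down in a build step before MSBuild runs, with the csproj's AssemblyOriginatorKeyFile pointing at the downloaded .snk. A rough sketch of that pre-build step (the URL is a placeholder - in practice it'd be something access-controlled, like a SAS URL or a secure file):

code:
# Pre-build step: fetch the strong-name key from private storage (URL is a placeholder)
$keyUrl  = 'https://example.blob.core.windows.net/build-secrets/signing.snk'
$keyPath = Join-Path $env:BUILD_SOURCESDIRECTORY 'signing.snk'   # Build.SourcesDirectory on the agent

Invoke-WebRequest -Uri $keyUrl -OutFile $keyPath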

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
I need some direction with moving some existing web apps from VMs into AWS. Not sure if there's a better thread for it than this.
We have a CodeIgniter PHP app on a VM. I have so far been able to create and deploy a Docker image containing the app into Elastic Beanstalk. There are two issues that I still need to work out though. Be aware that I have almost no experience with either Docker or EB, and neither does anyone else on our team.

Issue 1: CodeIgniter stores stuff like database credentials in files like application/config/database.php. The Docker image I've created has a database.php inside it which connects to the db in the test environment, and it works. But I'd obviously want to be able to deploy the same Docker image into the prod environment once we're ready to migrate over. What's the best way to inject settings into a container? Do I set environment variables and then run scripts via the Dockerfile that copy the environment variables out into the proper places, or is there a better way to do this?
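
My current thinking for issue 1, for anyone who wants to sanity-check it: bake no credentials into the image at all, have database.php read everything via getenv(), and supply the values per environment - locally with docker run -e, and in EB via environment properties, which as far as I can tell get passed into the container as environment variables. Locally that would look something like this (the variable names, values and image name are all placeholders):

code:
# Local smoke test: same image for every environment, credentials supplied from outside
docker run --rm -p 8080:80 `
    -e DB_HOSTNAME=test-db.internal `
    -e DB_USERNAME=app `
    -e DB_PASSWORD=not-the-real-password `
    my-codeigniter-app:latest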

Issue 2: the app in EB works great for me, but others are having difficulty whenever they submit a form, so they can't even log in. The AWS logs show an "upstream sent too big header while reading response header from upstream" error, which apparently means the client is trying to send session data that's too big for the EB Nginx to handle. We apparently came across the same issue on our VM, so the Nginx config there was updated to increase the proxy buffer size. Problem is, the Nginx instance that's forwarding requests to my web app sits outside my Docker container, so how would I update the config for something outside my environment?

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Thanks for all the suggestions, much appreciated!

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Is there a tool that will look up all of our resources in AWS and generate some sort of map or report showing which resources are linked to which other resources, so we can clean up the ones that aren't being used by anything?
Ideally I'd like to see all resources that seem orphaned which can easily be cleaned up, and also maybe drill down from a VPC to see all the resources attached to it, which I can then either tie back to a running instance or delete with confidence.

We don't have anyone with any specific AWS training on staff so there's been a lot of trial and error in getting things set up, especially earlier on when we were starting to move our services over there, so there are a number of resources that were maybe not fully set up or not fully deleted which I'm trying to identify and clean up, both from a cost-saving POV as well as general OCD.
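
In the meantime, I'm guessing some of the low-hanging fruit can be spotted with plain AWS CLI queries - e.g. EBS volumes and network interfaces whose status is "available" (i.e. not attached to anything):

code:
# Unattached EBS volumes ("available" = not attached to any instance)
aws ec2 describe-volumes --filters Name=status,Values=available --query "Volumes[].VolumeId" --output table

# Unattached network interfaces
aws ec2 describe-network-interfaces --filters Name=status,Values=available --query "NetworkInterfaces[].NetworkInterfaceId" --output table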

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Thanks for the suggestions, especially CloudMapper - it has a `find_unused` command which should hopefully do at least some of what I'm looking for.
I think enforcing tagging on everything going forward is a good start also - it'd at least help to identify things that should be cleaned up at some point.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
We have the develop branch set to deploy to the test environment on commit, and master set to deploy to production on commit, but nobody can push to master directly, and PR to master requires approval from x reviewers. Works for us so far but we have a very small environment compared to the stuff some of you guys deal with.

I do need to automate the rest of the environment as well though, like making sure security groups and other resources are set right. Is Pulumi good for that?

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
I hope this is the right thread for this… I’m trying to get a bunch of metrics from our services into Prometheus and so far everything is fine, but I’m confused about how to partition my data in Prometheus.

We have client software that pushes data to the backend from each of our customers, so I have a counter for incoming_transaction_count. But now I want to be able to tell if any customer hasn't pushed transactions in the past hour and alert on that, so I thought I'd label each incoming_transaction_count with the customer id. But the Prometheus docs say that labels shouldn't be used for high-cardinality values, and we're looking at thousands of distinct customer ids. They also say that metric names shouldn't be procedurally generated, so I shouldn't create distinct counters for each customer id either.

I know these are all guidelines and I’m free to disregard them, but they’re there for a reason so I’d rather set things up properly from the start if there’s a better way, although I’m not seeing how given the two guidelines above.
For now I have a couple of metrics I would want to track per customer, but that would probably grow to at least 10-20 for at least 2000 customers. I know Prometheus will be able to handle this sort of load without much stress but I also don’t want to do things inefficiently out of ignorance.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

my homie dhall posted:

Your concern from the load perspective is how many time series you are creating. If the problems you're trying to solve with prometheus involve slicing and dicing at the customer level there's really no way around adding a unique per-customer tag to your metrics. So for every metric you push with those tags you just need to be aware of the fact that you're creating (customer count)x the number of time series. As long as you are judicious with which metrics you're tagging (ie not pushing the 1000s of metrics that might come from something like node exporter) per-customer, my guess is that you'll probably be fine.

That’s exactly what I wanted to hear, thanks.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

El Grillo posted:

These are all super helpful, sorry for the delay in responding.

The actual repo files are just over 8GB.

Unfortunately the size is not due to large individual files that can be pruned or split off in some way (in fact, by necessity due to other systems we're using there is no individual file over 15mb and there are about 150k files total). This fact combined with many commits, and some of those commits making changes to thousands of files at a time, is I presume why the .git is ~14gb even when using shallow clone.

All that being said, I'm ignorant enough to not be sure yet whether the code suggestion above from Quebec Bagnet would work but I'm pretty sure it wouldn't because --depth 1 simply prevents it from pulling versions older than the most recent, but the most recent contains all that history of commits anyway right?

The ideal solution might be some way to get clone to ignore all the commit history before the latest tagged release, because we essentially do not need easy access to that previous history once we've reached a new tagged release.
Maybe the ultimate solution is that we just use rclone to get the latest tagged release each time we deploy.. that would mean downloading the whole 8gigs+ each time though right? Because rclone won't diff and update individual files?
Then we could keep production servers at low drive volumes, and only need to have a larger drive to contain the whole repo on our test server which we need to rapidly deploy new commits to (i.e. needs to have the whole repo so we can simply update it rapidly at every new commit as we're approaching release and are debugging).

In any scenario, rapid deployment is pretty important so ideally we don't want to have to do a full blank slate download of 150k files (the 8GB) to all servers at every tagged release. But it might simply be that there is no software solution that allows us to just get the latest commits (properly diff'd so we're not doing a full clone every time) without also having the whole .git folder locally too.

edit:

Ah we are not using docker but maybe we could. I will pass this along to our admin see what he thinks. Shallow doesn't do the job sadly (see above).
We can push using git because no need to push from the deployment side.

You said initially that the destinations all have low storage space due to cost so I’m guessing there are a lot of destinations where an extra 10gb per node will add up. In that case, what if you provision one new node with 16gb of space, clone the repo there including whatever extra history comes along, and use that to publish the resulting 8gb of actual data/code that you want to distribute to the other nodes?

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
What's the best way to get Prometheus metrics out of ECS services into Grafana Cloud without using Cloudwatch?

We have a couple of services with auto-scaling rules spread across a couple of clusters. Each task is a .NET Core web service. I want to expose some custom metrics from these services so we can track a bunch of stats for support. It's easy to scrape the metrics endpoints when there's only one instance of each task running, but if it scales up then I don't know how I'd scrape each distinct container from Grafana Cloud's Prometheus.
Is my best bet to add Grafana Agent to my containers and use remote_write to push the metrics to Prometheus instead of trying to scrape?

As you can probably tell, we are a small dev team who are not at all mature in terms of cloud engineering capabilities.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

Extremely Penetrated posted:

We use New Relic instead of Prometheus, but yeah the pattern is to run an OpenTel collector agent as a sidecar to your .NET app's container. That's not the same as installing the agent in your Dockerfile, which I suspect is what you were thinking. AWS has a sample task definition. If you're using Fargate your options are pretty limited and this one is probably the least poo poo.

Thanks, that looks helpful. Yeah, we are using Fargate, and I was originally looking at installing the agent in my Dockerfile… lots more research to do!
