Zephirus
May 18, 2004

BRRRR......CHK

The Fool posted:

Messing around in my lab and trying to figure out a reliable way to deploy a node app to Windows.

No containers, just a Windows Server VM with nothing installed.

I'm using Azure DevOps Pipelines, and deploying the code from the repo isn't a problem; I'm just not sure how to reliably ensure Node is present, and if the app is already running, how to reload it.

Put a DevOps agent on it.

PowerShell release task:

- Install Chocolatey
- choco install nodejs

npm/yarn/whatever to deploy packages, or distribute files from build artifacts via the artifact download task.
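A minimal sketch of those release steps as an Azure DevOps pipeline fragment; the artifact name, the install path, and using PM2 to (re)start the app are all assumptions, not the only way to do it:

```yaml
steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifact: app                       # hypothetical artifact name
      path: $(Pipeline.Workspace)/app
  - powershell: |
      # Bootstrap Chocolatey if it's missing, then make sure Node is present
      if (-not (Get-Command choco -ErrorAction SilentlyContinue)) {
        Set-ExecutionPolicy Bypass -Scope Process -Force
        iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
      }
      choco install nodejs-lts -y
    displayName: Ensure Node
  - powershell: |
      # Hypothetical app path and process name; PM2 is one way to handle reloads
      Copy-Item "$(Pipeline.Workspace)/app/*" C:\apps\myapp -Recurse -Force
      npm install -g pm2
      pm2 restart myapp 2>$null
      if ($LASTEXITCODE -ne 0) { pm2 start C:\apps\myapp\server.js --name myapp }
    displayName: Deploy and (re)start
```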

Zephirus
May 18, 2004

BRRRR......CHK
There are CSI drivers for Azure Files, which is essentially an SMB3 share, and there are existing custom drivers for it in AKS; however, both of these are limited to actual Azure Files resources and don't support custom CIFS shares.

Also, echoing all the other comments about Windows containers: it's not good.

You will have to keep your Windows container base images in lockstep with the version on your Windows nodes, as there's no Hyper-V isolation, so you have to run all the containers in process isolation mode (and you can't do that with mismatched OS versions). When/if you upgrade your nodes, you'll have to be very careful with scheduling to avoid version mismatches; usually you create a new node pool and ensure newly built containers are deployed onto it, which you'll have to do every 6 months or so if you're on SAC.
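One common way to handle the scheduling side of a node pool upgrade (a sketch; the `win2022` pool name is a hypothetical label for the newly created pool):

```yaml
# Hypothetical Deployment fragment: pin Windows pods to the new node pool
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: windows
        agentpool: win2022   # label of the newly created node pool
```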

Zephirus
May 18, 2004

BRRRR......CHK

Wicaeed posted:

We use LaunchDarkly, it seems to work well.

Question Time: What's the most sane way to deploy Helm charts these days?

I'm building a Rancher cluster for an IT-Ops team to run some hosted apps (Atlassian Jira/Confluence/Stash) and some day run monitoring tools like Prometheus as well; however, nobody on this team wants to use the existing Platform-team-owned CI/CD environment that has historically been used with K8s thus far.

I'm thinking of exploring GitHub Actions, GitLab Runners and maybe even BitBucket Agents to see if any of these will meet their requirements:

* Should be able to be hosted internally (i.e., on a private, internet-connected subnet in our datacenter) as a VM
* Ideally, only the Runner itself would need to be hosted on-prem. The management plane can live entirely in the cloud w/o issue.

GitHub/Bitbucket work well in this form in my experience; GitLab is IMO a total shitshow for running on-prem agents from cloud. If you don't have a k8s cluster you have to rely on their wonky fork of docker-machine; if you do have one, you have to rely on their k8s executor, which is just as shoddy.

I would implore you to reconsider running Atlassian apps as containers unless you rarely change versions. Jira and Confluence are particularly fussy about upgrading between container versions; both have required manual SQL fuckery between minor versions for us recently.

If you want to keep running them licensed beyond next year you'll need to move to the Data Center SKU, which requires shared storage (and a metric wheelbarrow of cash); that may be an issue depending on your container stack.

Zephirus
May 18, 2004

BRRRR......CHK

Blinkz0rz posted:

My rule tends to be “consume helm charts, don’t write them” and it’s worked out pretty well so far.

We're at about 4 vendors now where we have to maintain custom forks of their Helm charts, because they don't consider customisations that might need to happen, or that it might not be running in EKS or GCP, or that maybe we don't want to run the DB in k8s, or similar. The situation isn't ideal, to be honest.

12 rats tied together posted:

to give terraform a little bit of credit, they do explicitly tell you not to do that in the docs these days.

unfortunately they were 4-5 years too late and now it's endemic.

I'm a fair way through undoing this mentality on our TF estate. All our infra was in modules, and it just made it impossible to update providers without breaking everything.
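The alternative is to pin providers once at the root and let child modules inherit them, rather than declaring versions inside every module; a minimal sketch (version constraints are illustrative):

```hcl
# Root module: declare and pin providers once; child modules inherit them
terraform {
  required_version = ">= 1.5"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}
```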

I also want to throw a hat into the TF deployment ring for Atlantis, which works really well for us.

Zephirus
May 18, 2004

BRRRR......CHK

The Fool posted:

app service is lovely to manage in terraform. Just deploy the base resource, certs and custom domain bindings if you need them, add app settings to lifecycle ignore, and for slots only create them, don't manage swapping them in terraform

deployments, slot management, and app settings should be managed out of band

Seconding this; slot movement/promotion etc. are deployment tasks. FYI, if, like some of our devs, you're considering using slots for development, be aware they all share the app service plan, so if you cake the CPU or memory with your dev or UAT slot it'll break your prod one too.
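In azurerm 3.x terms, that advice looks roughly like this (a sketch, not a complete configuration; names are hypothetical):

```hcl
resource "azurerm_linux_web_app" "app" {
  # ... name, resource_group_name, location, service_plan_id ...

  lifecycle {
    # App settings are pushed by the deployment pipeline, not Terraform
    ignore_changes = [app_settings]
  }
}

resource "azurerm_linux_web_app_slot" "staging" {
  name           = "staging"
  app_service_id = azurerm_linux_web_app.app.id
  # Created here, but swapped by the release pipeline, never by Terraform
}
```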

Zephirus
May 18, 2004

BRRRR......CHK

The Fool posted:

I was flying all day, but I can double check the behavior tomorrow. It for sure works on 2.x versions of the provider, was taken away with 3.x, then added back very recently.

I genuinely had no idea you could do this, that's actually really useful.

Zephirus
May 18, 2004

BRRRR......CHK
That's not really the take we get from MS. The heavy steer we got from our MS reps and others inside MS when we switched SCM tools was that ADO is basically in LTS and features are going to trickle in, while focus and resources were targeted at GitHub.

We went GHE about 8 months ago, and I've no complaints; EMUs are a bit clunky, but once they're working it's no fuss.

I don't hear anything different these days when we talk to them about the remaining ADO accounts we have.

Zephirus
May 18, 2004

BRRRR......CHK

i am a moron posted:

There are people at MS who came from GH who say that; if you ask around, it's far from settled internally. I had to escalate a conversation where someone from MS said that exact thing to a client (this was two years ago as well); their official stance is that both products are remaining. There are people inside the company who want to kill AzDO and have been repeating this for multiple years. AzDO is still getting features added, things like the cloud agents are being updated, and I still think GHE isn't at feature parity.

AFAIK the builds for the GH cloud agents and the ADO cloud agents are exactly the same, but yeah, there are some things that aren't at parity, mostly everything on the testing side. Boards/Projects is something that is better on the GitHub side IMO; I think they want to snag some Atlassian refugees, so work is going in there.

Zephirus
May 18, 2004

BRRRR......CHK

New Yorp New Yorp posted:

The underlying agent is a fork of the Azure pipelines agent but they are very different implementations. Actions is a generic work flow runner that responds to many different events. Azure pipelines is specifically for continuous delivery scenarios.

GH has a limited security model that's not well suited for highly regulated organizations that require very granular permissions and audit trails.

I think I've confused things here. I don't mean the literal agent, which is different; I mean the cloud agent image, which is shared across GH Actions and ADO cloud agents (https://github.com/actions/runner-images).

Without being a GitHub apologist, I think there's a big difference between enterprise management with a full GHEC enterprise plus Enterprise Managed Users, and a standard GitHub org with SAML. It's a different solution designed for orgs that don't want to work like normal GitHub, and closer to the ADO model (though the requirement to set up teams to do AAD-group-based granular permission management is annoying).

Zephirus
May 18, 2004

BRRRR......CHK
Our flow for Azure cost is basically:
  • Call the Cost Management export API for each subscription to dump cost data to a storage account
  • Import that data into a central database
  • Write horrible SQL queries to make some sense of the data
  • Use those to send a fancy spreadsheet showing cost deltas for division resources, which gets ignored
  • Repeat every month

which is nasty, but is basically the same as our AWS cost reporting.
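The delta-computation step can be sketched in plain Python; this assumes a simplified CSV with hypothetical column names (the real Cost Management export schema differs and, as noted, has changed over time):

```python
import csv
import io
from collections import defaultdict

def monthly_deltas(csv_text):
    """Sum cost per resource group per month, then diff consecutive months.

    Assumes hypothetical columns: BillingMonth, ResourceGroup,
    CostInBillingCurrency. Real export schemas vary.
    """
    totals = defaultdict(lambda: defaultdict(float))
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["ResourceGroup"]][row["BillingMonth"]] += float(row["CostInBillingCurrency"])

    deltas = {}
    for rg, months in totals.items():
        ordered = sorted(months)  # month keys sort chronologically as YYYY-MM
        deltas[rg] = {m2: months[m2] - months[m1]
                      for m1, m2 in zip(ordered, ordered[1:])}
    return deltas

sample = """BillingMonth,ResourceGroup,CostInBillingCurrency
2023-01,rg-app,100.0
2023-01,rg-app,50.0
2023-02,rg-app,175.0
"""
print(monthly_deltas(sample))  # → {'rg-app': {'2023-02': 25.0}}
```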

We did get screwed when azure updated all the billing export formats not that long ago.

The Python SDK documentation has been really bad, especially since there are still two different authentication libraries and you have to check which one your API uses. I think most of it is auto-generated.

Zephirus
May 18, 2004

BRRRR......CHK

Gucci Loafers posted:

Has anyone here worked with Azure Functions and Cosmos DB? I don't know if I'm finally losing it, but I find the concept, or at least the ability to implement a binding, freaking impossible. All I am trying to do is simply have my function app with an HTTP trigger query a single row (or document or whatever Cosmos DB calls it) and increase its value. For whatever reason, the current tutorials no longer work and I don't get how I am supposed to decipher their documentation. Where does the code go exactly? How do I interpret the below article? Or is it because I am not a dev and don't know enough C#? :smith:

Azure Cosmos DB trigger and bindings

Every time I've done anything more than 'put object' to Cosmos DB in Functions, I've created a CosmosClient using the SDK rather than using bindings. I'm not sure how much overhead this adds if you're doing something like durable functions, but it's easier for me than messing with extra inbound and outbound bindings.

https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-dotnet?pivots=devcontainer-codespace#authenticate-the-client
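A sketch of that read-modify-replace pattern. With the real SDK you'd get the container from `CosmosClient(url, credential)` via `get_database_client()`/`get_container_client()`; here an in-memory stand-in mimics the container's `read_item`/`replace_item` calls so the example runs without an Azure account. All names are hypothetical:

```python
class FakeContainer:
    """Stand-in exposing the read_item/replace_item shape of the SDK's container client."""

    def __init__(self, items):
        # Index documents by (id, partition key), like Cosmos does logically
        self._items = {(i["id"], i["pk"]): i for i in items}

    def read_item(self, item, partition_key):
        return dict(self._items[(item, partition_key)])

    def replace_item(self, item, body):
        self._items[(item, body["pk"])] = dict(body)
        return dict(body)

def increment_counter(container, item_id, partition_key):
    # One round trip to read the document, one to write it back;
    # no input/output bindings involved.
    doc = container.read_item(item=item_id, partition_key=partition_key)
    doc["count"] = doc.get("count", 0) + 1
    return container.replace_item(item=item_id, body=doc)

container = FakeContainer([{"id": "hits", "pk": "hits", "count": 41}])
print(increment_counter(container, "hits", "hits")["count"])  # → 42
```

Note this naive read-then-replace can lose updates under concurrency; the real SDK lets you pass the document's etag as a precondition on `replace_item` to get optimistic concurrency.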
