freeasinbeer
Mar 26, 2015

by Fluffdaddy

busfahrer posted:

I'm trying to deploy the docker example voting app on minikube. Using ingress-nginx, I'm routing / to the voting service and /result/ to the result service, rewriting to /. When I access the result service via http://.../result, it can't find the stylesheets for example, because they are linked at /stylesheets/ as opposed to ./stylesheets/. So this brings me to the general question: Is it possible to let each container think they live at / while still getting things to work, using nginx routing?

Yep:

https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#rewrite
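For illustration, a minimal sketch of that annotation applied to the voting app (service name and port are assumptions based on the example app, not verified here):

code:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: result-rewrite
  annotations:
    # keep only what the second capture group matched, so the result
    # container is proxied as if it lived at /
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /result(/|$)(.*)
        backend:
          serviceName: result   # assumed service name from the voting app
          servicePort: 80
The capture-group style needs ingress-nginx 0.22 or newer; the / route to the vote service would live in its own plain Ingress so it isn't rewritten.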


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The island is a big problem. Fine for a POC but a huge risk beyond that point.

LochNessMonster posted:

I’m using git flow for our (small scale) infra repos and am wondering what kind of problems I’d be running into in the future and what alternatives there are. Care to elaborate on this?
My reasoning is the same as for any other major decision: what does it take to implement before it delivers anything valuable, does it help the team enough to justify the effort put in, and does it degrade gracefully when you (inevitably) screw it up?

Branching conventions for infrastructure code are similar to Conway's Law in that they can provide structure and reduce bikeshedding so things get done - part of the success of REST, despite the surprising amount of bikeshedding around it. The issue is that, until a few posts ago, I'd yet to see a company say "gosh, how we laid out our repo made our deployments work" or "git-flow helped us stay focused in ops when we were pretty awful before," as opposed to "we got everyone all-in to fix our deployments" or "we started doing TDD for our infrastructure code." Unlike a lot of other practices in software, adopting git-flow doesn't fix bad ops practices, and in more fundamentally flawed situations it exacerbates problems that are too important to allow delays over - more so than in development. Because I presume things are bad, I choose the least-risk option, and git-flow is not least-risk.

FWIW, everything after this is probably more or less inflammatory bitching, but I want to caution against relying on the false gods of standards and to encourage seeking answers to the more meaningful questions.

Getting Started / Time to Value

Let's say you have the following directory layout:

code:
envs/
- dev.yaml
- qa.yaml
- prod.yaml
If you have two separate feature branches X and Y and one development environment, as this layout indicates, how do you test the features in X and Y independently without stomping on each other in a shared environment? The usual answers I've seen are something like "don't deploy feature branches to shared environments, dummy" and "always rebase before merging" - and how are those PR-merge-fail-in-develop cycles going to keep your development branch clean either (of course we have a job that reverts bad merges! who doesn't? /s)? How many feature branches are in progress against live infrastructure at any given moment, and how are they separated? Let's presume that's all negotiated, though - what happens when an error forces you to wipe out the environment (very common with our CloudFormation stacks)? How well does this approach of non-deployable feature branches tolerate long release cycles where nothing gets deployed? These questions can all be answered within git-flow, but half the time the discussion floats back to git-flow mechanics rather than whether the branching model is solving the actual problem.

What ultimately must happen with git-flow in an infrastructure repository is thus:

1. Strict / rigorous process or tooling to enforce it: git hooks, CI-based scripts, Jenkinsfiles, etc. Now your operations culture must accept this in place of more documentation, or you've just created double work for yourselves. Not a problem if it's an O(log n) level of documentation / time relative to the automated work, but with asymmetrically skilled teams it becomes O(n^2).
2. Strictly defined / relatively easy deployments of an environment to validate the health of the code. In their absence come superstition / religion, lack of trust, and decay in structure.
3. Deployment infrastructure that accommodates the git-flow style release promotion process.

Does it deliver value / how does it degrade?

These kinds of conditions can prevent innovation for inexperienced / undisciplined teams, raise the floor significantly for hiring operations engineers (not a problem at $hot_startup, a showstopper at $mediocre_inc / the later years of $notsohot_startup), and usually cause bikeshedding when you try to adapt to use cases that don't fit the branching model. Git-flow was meant to describe the typical open-source-style collaboration methodology for structured software releases - who releases infrastructure changes at the pace and in the style of software development again? Rare is the shop that keeps ops engineers in a 70%+ project mode. As the software degrades, more commits land on hotfix/* than develop, and what used to be structure becomes friction and a liability (blaming the "complexity" of a branching convention rather than inappropriate hiring - I think some of us have seen this appalling view). You're not using git-flow anymore if you just push directly to master, disable the Jenkins jobs from the previous guy, and so on.

The branching model isn't hard to understand, but what are you doing with it in terms of infrastructure code? Does it help you avoid outages? Does separating release/* from larger merges to master make sense for your infrastructure? Does it make communication among team members clearer? I have yet to see a branching model deliver a clear win on anything that other practices couldn't accomplish.

Solutions / approaches then?

In another interpretation of git-flow for infrastructure, without the directory layout above, you map an environment to a branch / tag (tags get deployed to prod), and you're probably fine as long as the number of branches stays fairly low or you have a fairly nimble, lightweight footprint. This approach can make infrastructure costs grow linearly with the number of environments, especially if each environment must match a production-like configuration.

Another method is to merge to master in a more mono-repo style approach and what's live is what's in master, and all other branches and tags are not meant to be deployed automatically anywhere. You might deploy a feature branch targeting a non-prod environment if you're experimenting (deploy feature-idk-wtf-is-going-on to funzone) and multiple releases are developed like they're separate features. This is not git-flow either (where's that develop branch we always base off of for releases?) and has caused less pilot error than git-flow conventions in operations work.

Most of my experience is in situations where the deployment story is complete mid-90s-style trash and my predecessors' infrastructure-as-code efforts failed because the ideals of the system didn't match the reality of the software, so a system that demands absolute compliance in order to succeed is not something I can recommend (no different to me than crap like CMDBs and late-90s ASPs). Something that "sort of works" and has lower collective cognitive overhead beats a lot of stuff that would work but.... A good example is the Ansible decision at my current place: it's really, really awkward to ssh to even a development system, and reshaping things to make Ansible viable would take too long before it delivered any value, so we've deployed Salt instead - nothing controversial here.

Sorry if anything's unclear; this was pieced together in the 15 minutes a day I get between yet another random breaking change and more deployments, where every other time we find something wrong I have to cherry-pick and back-merge several conflicting releases across 4-5 branches while people are frustrated about why it all takes so long.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
What's a good way to handle getting JaCoCo code coverage files off of instances that may come and go regularly due to autoscaling?

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Hadlock posted:

Something has gone horribly wrong with the implementation if you can only deploy one thing to one cluster :psyduck:

The flip side of that argument is that multi-tenancy in Kubernetes is strange, bulkheading is good, and clusters should be cheap. Things like the ClusterAPI model exist to bring the whole undercloud/overcloud notion into the Kubernetes world, so that you have very few handspun control plane clusters and all your other clusters are spawned declaratively from these ones, whether into AWS or OpenStack or bare metal through Packet or your own nightmare sauce that calls Foreman/Cobbler/Ironic.

Hadlock
Nov 9, 2004

chutwig posted:

The flip side of that argument is that multi-tenancy in Kubernetes is strange, bulkheading is good, and clusters should be cheap.

I have been easing my company into the boiling pot very, very slowly

They seem to really like the idea that 1 namespace = one server, and 1 cluster = one data center

Almost exactly one year ago I was at lunch and suggested we build one reporting server per stack, and our VP of engineering looked at me like I was crazy and said something to the effect of "that will never ever happen"

Today that same VP of engineering held a meeting and said we would convert from our bare-metal data center to Kubernetes clusters in AWS

If I'd started out telling them that we should spin up one cluster per stack, this project would have never gotten off the ground. To them that's like saying 1 stack per data center.

Also, our stack fits very comfortably on 2 x t2.xlarge, so it's kind of overkill to have one master for a 2-node cluster, plus if you want decent HA you would need a third node... Hosting 20 environments on one cluster with two nodes of overflow, you still get good HA and only need one master... We are very sensitive to price: apparently one guy on the board was at another company whose ops department lost track of spending to the tune of a quarter mil per month, and my AWS budget is closely watched as a result

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read
Are there any good online resources regarding networking patterns in container orchestration? We're very much coming from a place of VLAN network-segmentation with ACLs and we're wondering how this translates into the wonderful world of containers.

Assuming we have two separate apps running in a single swarm cluster that share services between them (say a mobile app backend and a component of our customer-facing website that share a common user info API), is there any reason to separate the networks? Or just run them all together under a single overlay? Does k8s handle this kind of thing any differently?

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

Spring Heeled Jack posted:

Are there any good online resources regarding networking patterns in container orchestration? We're very much coming from a place of VLAN network-segmentation with ACLs and we're wondering how this translates into the wonderful world of containers.

Assuming we have two separate apps running in a single swarm cluster that share services between them (say a mobile app backend and a component of our customer-facing website that share a common user info API), is there any reason to separate the networks? Or just run them all together under a single overlay? Does k8s handle this kind of thing any differently?

K8s will run everything on the same overlay. If you want to restrict connectivity in that overlay you would use something like a service mesh or NetworkPolicy resources.
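For illustration, a minimal NetworkPolicy sketch (namespace and label names here are made up): default-deny ingress for a namespace, then explicitly allow traffic from pods in the same namespace.

code:
# deny all ingress to pods in this namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev            # hypothetical namespace
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes:
  - Ingress
---
# then allow traffic only from pods in the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: dev
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}       # any pod in this namespace
Note this only does anything if the cluster's network plugin actually enforces NetworkPolicy (Calico, Cilium, etc.); plugins that don't support it silently ignore the resource.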

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

Ploft-shell crab posted:

K8s will run everything on the same overlay. If you want to restrict connectivity in that overlay you would use something like a service mesh or NetworkPolicy resources.

So I guess that goes into my next question,

We currently have separate dev/test/mock networks each with their own DB copy that gets refreshed from prod. From what I’ve read the common practice is to just run all of these environments on the same swarm (I would imagine on separate overlays), removing the need for us to run like 5 IIS servers per environment.

So when we deploy to one of these environments, we have some variable replacement going on with the env files to change the DB strings and other environment specific values.

Are there any other security practices we can put in place for traffic leaving the swarm (while only maintaining one swarm on a single subnet) to prevent dev containers from reaching the test DB, or vice versa? Or do we just rely on variable replacement to ensure connections are set and go where they should? I know this will face scrutiny from our sec guys, so I'm trying to find answers to questions I know they will have.

Methanar
Sep 26, 2013

by the sex ghost

Spring Heeled Jack posted:

So I guess that goes into my next question,

We currently have separate dev/test/mock networks each with their own DB copy that gets refreshed from prod. From what I’ve read the common practice is to just run all of these environments on the same swarm (I would imagine on separate overlays), removing the need for us to run like 5 IIS servers per environment.

So when we deploy to one of these environments, we have some variable replacement going on with the env files to change the DB strings and other environment specific values.

Are there any other security practices we can put in place for traffic leaving the swarm (while only maintaining one swarm on a single subnet) to prevent dev containers from reaching the test DB, or vice versa? Or do we just rely on variable replacement to ensure connections are set and go where they should? I know this will face scrutiny from our sec guys, so I'm trying to find answers to questions I know they will have.

Just have your dev stuff point to a dev service for L4 routing to dev backend DBs.


Label everything for the environment that it's in.

Then set NetworkPolicies to disallow anything that isn't itself dev-tagged from speaking to dev.

https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/advanced-policy

If you're using Calico as your network plugin, their docs are really good for how to do this.



Alternatively different clusters for each environment but I'm not entirely convinced that's the right answer in general.
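As a rough sketch of the label-based version (namespace and label values are hypothetical): dev namespaces get an environment label, and dev pods only accept traffic from namespaces labelled dev.

code:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dev-only-from-dev
  namespace: voting-dev            # hypothetical dev namespace
spec:
  podSelector: {}                  # every pod in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          environment: dev         # only namespaces labelled environment=dev
Calico also has its own GlobalNetworkPolicy type if you'd rather express the rule once cluster-wide instead of per namespace, per the docs linked above.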

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

Methanar posted:

Just have your dev stuff point to a dev service for L4 routing to dev backend DBs.


Label everything for the environment that it's in.

Then set NetworkPolicies to disallow anything that isn't itself dev-tagged from speaking to dev.

https://docs.projectcalico.org/v3.5/getting-started/kubernetes/tutorials/advanced-policy

If you're using Calico as your network plugin, their docs are really good for how to do this.



Alternatively different clusters for each environment but I'm not entirely convinced that's the right answer in general.

So we’re using swarm (on prem) for everything at the moment since we have the need for a few windows containers (hopefully not for much longer). I’m guessing that limits me from most of the options you’re talking about? Or are there comparable solutions?

Methanar
Sep 26, 2013

by the sex ghost

Spring Heeled Jack posted:

So we’re using swarm (on prem) for everything at the moment since we have the need for a few windows containers (hopefully not for much longer). I’m guessing that limits me from most of the options you’re talking about? Or are there comparable solutions?

Oh sorry I saw something about k8s above. I don't know enough about swarm to be too helpful there.

In k8s at least microsegmentation of workloads with calico is actually pretty good if that's helpful. I did a big thing a few weeks ago demoing it out.

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

Methanar posted:

Oh sorry I saw something about k8s above. I don't know enough about swarm to be too helpful there.

In k8s at least microsegmentation of workloads with calico is actually pretty good if that's helpful. I did a big thing a few weeks ago demoing it out.

Yeah our end goal is to use one of the managed k8s services for everything, but containers are still a new thing here so we started with swarm since it was simple to get up and running, met most of our needs, and we didn’t have to deal with managing WAN connections to a cloud provider as of yet.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Spring Heeled Jack posted:

Yeah our end goal is to use one of the managed k8s services for everything, but containers are still a new thing here so we started with swarm since it was simple to get up and running, met most of our needs, and we didn’t have to deal with managing WAN connections to a cloud provider as of yet.

K8s is pretty quick to set up now. It's harder than swarm but still not too bad. Are you on premises or in the cloud?

If in cloud, just jump on GKE, or if your devs are real lazy, OpenShift on premises.

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

jaegerx posted:

K8s is pretty quick to set up now. It's harder than swarm but still not too bad. Are you on premises or in the cloud?

If in cloud, just jump on GKE, or if your devs are real lazy, OpenShift on premises.

On prem. I had a small cluster up and running but started hitting snags when I introduced a Server 2019 host which is ‘supposed’ to work fine. Maybe I’ll give it a go again, I had to jump on some other projects in the meantime.

Methanar
Sep 26, 2013

by the sex ghost
Yeah GKE is really, really easy to get started with. Knowing nothing, I wouldn't bother with swarm on-prem initially if you've got real short term goals of running on managed k8s.

K8s in general on-prem is hard for a lot of reasons. Using someone else's managed offering instantly solves most of them.

On the other hand, windows containers.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Oh man. Containers on windows you’re on your own. Good luck buddy. May you boldly go into the unknown.

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

jaegerx posted:

Oh man. Containers on windows you’re on your own. Good luck buddy. May you boldly go into the unknown.

It was literally only used for a single service, which has since been ported to an open-source variant that can run on .NET Core. We're a .NET shop so we have a lot of legacy monolithic crap, but all new stuff is done on Core.

It uhh hasn’t been too bad so far. We have SA with Microsoft so we actually get support through the official channels which has helped diagnose a few weird issues with the 2019 base images and swarm networking.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Then ignore GKE and go with AKS. The Azure CLI is loving god awful though. It's the worst of the big 3 providers and yes, I'll fight you about it.

Methanar
Sep 26, 2013

by the sex ghost
Azure is trash and I will never use it again or condone anyone using it

do not use azure this is not the general IT thread

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


I'd argue with him but he's not wrong, azure is trash.

Hadlock
Nov 9, 2004

Has anyone played around with Loki, the Grafana/Prometheus-ized version of Kibana/Splunk/Graylog?

In my vast backlog of new technologies I'm supposed to bring to fruition is a unified logging infrastructure... right now we have some hacky homebrew tail-the-logs-to-a-crude-web-interface things, and I am not allowed to use Kibana because they charge ~$16,000 for LDAP integration with their xpass plugin or whatever it's called

Grafana 6 is supposed to come out at the end of February, along with support for Loki, which was Alpha at Thanksgiving 2018... and I presume will get a very crude 1.0 release around the same time as Grafana 6 is released

In addition to not being allowed to use Kibana, Graylog is a pain in the rear end because it requires Elasticsearch no older than 5, and there are no good Helm charts for that without their xpass plugin for ~reasons~

Literally anything would be a step up from where we're at, and I think realistically we only generate maybe 20-100GB a day in logs, so it's not like we need a lot of resources to pull this off. At least Loki is Kubernetes-native and uses the same query language (conceptually, anyway) as Prometheus...

Docjowles
Apr 9, 2009

Can you not put kibana behind Apache/nginx to do LDAP for you? That’s the usual workaround for Elastic’s poo poo authentication story.

Pile Of Garbage
May 28, 2007



Hadlock posted:

I am not allowed to use Kibana because they charge ~$16,000 for LDAP integration with their xpass plugin or whatever it's called

X-Pack is what it's called. Also yeah the cost is bullshit.

Docjowles posted:

Can you not put kibana behind Apache/nginx to do LDAP for you? That's the usual workaround for Elastic's poo poo authentication story.

That's what we do: nginx reverse-proxy with LDAP auth configured. However, we only do it for Kibana. The main issue is having no authentication for Elasticsearch itself. If you can reach the Elasticsearch endpoint (usually tcp/9200) then you can manipulate it all you want via the API. We haven't tried putting Elasticsearch behind the nginx reverse-proxy but I'm pretty sure it would break poo poo...

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

Pile Of Garbage posted:

X-Pack is what it's called. Also yeah the cost is bullshit.


That's what we do: nginx reverse-proxy with LDAP auth configured. However, we only do it for Kibana. The main issue is having no authentication for Elasticsearch itself. If you can reach the Elasticsearch endpoint (usually tcp/9200) then you can manipulate it all you want via the API. We haven't tried putting Elasticsearch behind the nginx reverse-proxy but I'm pretty sure it would break poo poo...

Yeah, just limit the endpoints that can talk to it. But also Kibana has that dev console which also lets you gently caress with poo poo, so :shrug:

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Isn't it possible to put the Elasticsearch API behind a gateway such as Kong, or even API Gateway, if you wanted to try some poor-man's ACLs on the endpoint?

Also, I got the impression X-Pack is part of the offering from the Elastic cloud-hosted stack - anyone know off the top of their head if that's included? The page made it hard enough to even get to the pricing calculator, let alone tell me what's actually included.

Pile Of Garbage
May 28, 2007



Janitor Prime posted:

Yeah, just limit the endpoints that can talk to it. But also Kibana has that dev console which also lets you gently caress with poo poo, so :shrug:

The dev console in Kibana works via that same Elasticsearch endpoint. Restricting connections to Elasticsearch will work but might not be scalable or sufficient for some environments. Putting Logstash in front of Elasticsearch can simplify that but if you're using Beat agents then they'll be gimped when not connecting directly to Elasticsearch.

But yeah this is all just mainly a situational thing. Also ELK annoys me so I might be biased :lol:

necrobobsledder posted:

Isn't it possible to put the Elasticsearch API behind a gateway such as Kong, or even API Gateway, if you wanted to try some poor-man's ACLs on the endpoint?

Yeah that's what Janitor Prime and myself mentioned earlier. The issue with putting Elasticsearch behind such a gateway is that Logstash and Beat agents don't have a mechanism to authenticate with that gateway.

necrobobsledder posted:

Also, I got the impression X-Pack is part of the offering from the Elastic cloud-hosted stack - anyone know off the top of their head if that's included? The page made it hard enough to even get to the pricing calculator, let alone tell me what's actually included.

Like most enterprise licensing stuff Elastic make things obtuse as hell but authentication is definitely a paid feature as detailed here: https://www.elastic.co/subscriptions

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Pile Of Garbage posted:

Yeah that's what Janitor Prime and myself mentioned earlier. The issue with putting Elasticsearch behind such a gateway is that Logstash and Beat agents don't have a mechanism to authenticate with that gateway.
Completely, totally incorrect. Logstash's ES output in HTTP mode will respect username and password in any URL in the hosts option, or via the user and password fields. If you'd rather use a token than basic auth, you can use custom_headers. Beats has username and password options for Elasticsearch, or it can do TLS-based client authentication against Logstash's Beats input.
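For example, the Beats side is just a couple of lines of config (host and credentials below are placeholders):

code:
# filebeat.yml - authenticate directly against a secured Elasticsearch
output.elasticsearch:
  hosts: ["https://es.example.internal:9200"]   # placeholder endpoint
  username: "beats_writer"                      # placeholder account
  password: "${BEATS_PASSWORD}"                 # from the Beats keystore or environment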

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Pile Of Garbage posted:

Yeah that's what Janitor Prime and myself mentioned earlier. The issue with putting Elasticsearch behind such a gateway is that Logstash and Beat agents don't have a mechanism to authenticate with that gateway.
I meant that you could make complex rules using more sophisticated services beyond nginx or haproxy. I'd keep Logstash and other machine-level services routed privately, lock down access to the machine like we all should be doing, and hide any APIs or UIs from direct user access, including by operators. Maybe I'm misunderstanding your restrictions, but I don't see why this access and routing split is unacceptable, unless you're worried about people logging into machines and masquerading as service accounts to execute their queries and bypass any user authentication methods?

quote:

Like most enterprise licensing stuff Elastic make things obtuse as hell but authentication is definitely a paid feature as detailed here: https://www.elastic.co/subscriptions
It's not clear which subscription I'd be getting with the cloud-hosted option https://www.elastic.co/cloud/elasticsearch-service unless it means they deploy the basic open-source license by default and you have to pay on top of what they charge (those instance costs look suspiciously cheap compared to what I'd have to pay). We're using Splunk now, and while it costs a bunch, we don't use 80%+ of the features you get with it, and I'm shopping for anything else because evidently I'm getting more grief over the Splunk license costs than over whether it's providing any value. Even if the total Elastic licensing and instance cost is comparable to Splunk, I still might jump ship because Splunk have been complete twats, asking for a PO over less than $80k of invoices and nearly causing us a visibility outage by dropping that on us at the last minute.

Walked
Apr 14, 2003

necrobobsledder posted:



Another method is to merge to master in a more mono-repo style approach and what's live is what's in master, and all other branches and tags are not meant to be deployed automatically anywhere. You might deploy a feature branch targeting a non-prod environment if you're experimenting (deploy feature-idk-wtf-is-going-on to funzone) and multiple releases are developed like they're separate features. This is not git-flow either (where's that develop branch we always base off of for releases?) and has caused less pilot error than git-flow conventions in operations work.


Little late but this is how we approach things. It works pretty well for us.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

jaegerx posted:

I'd argue with him but he's not wrong, azure is trash.

They have a product called DevOps tho

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Gyshall posted:

They have a product called DevOps tho

...which isn't really part of Azure (i.e. developed by a different team), just sharing the Azure brand. It's excellent.

Warbird
May 23, 2012

America's Favorite Dumbass

So my new workplace is pretty nice and I got a pay raise from the last position. Everything's great except that GitHub is blocked on the network for christ knows what reason, as is the specific tool I was hired to work with. I've been advised to do research on my personal laptop and just email myself the code snippets I'm interested in.

Fukkin what. This is still a net improvement, but what are we doing here people?

Walked
Apr 14, 2003

Warbird posted:

So my new workplace is pretty nice and I got a pay raise from the last position. Everything's great except that GitHub is blocked on the network for christ knows what reason, as is the specific tool I was hired to work with. I've been advised to do research on my personal laptop and just email myself the code snippets I'm interested in.

Fukkin what. This is still a net improvement, but what are we doing here people?

The gently caress.

I've run into something similar when I was working on a DoD installation once, way back in the day, where I'd research poo poo from home and then email myself PowerShell snippets. It was real dumb.

But this sounds even less logical. I don't understand. And I'm normally pretty understanding of corporate policy and restriction.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Warbird posted:

So my new workplace is pretty nice and I got a pay raise from the last position. Everything's great except that GitHub is blocked on the network for christ knows what reason, as is the specific tool I was hired to work with. I've been advised to do research on my personal laptop and just email myself the code snippets I'm interested in.

Fukkin what. This is still a net improvement, but what are we doing here people?

Enjoy ur job in Russia

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Warbird posted:

So my new workplace is pretty nice and I got a pay raise from the last position. Everything's great except that GitHub is blocked on the network for christ knows what reason, as is the specific tool I was hired to work with. I've been advised to do research on my personal laptop and just email myself the code snippets I'm interested in.

Fukkin what. This is still a net improvement, but what are we doing here people?

what on earth? quit.

Warbird
May 23, 2012

America's Favorite Dumbass

Walked posted:

The gently caress.

I've run into something similar when I was working on a DoD installation once, way back in the day, where I'd research poo poo from home and then email myself PowerShell snippets. It was real dumb.

But this sounds even less logical. I don't understand. And I'm normally pretty understanding of corporate policy and restriction.

Believe me no one else in Ops is a fan either. It's a large financial institution so some dimwit in security likely had a decent business case that got approved at a high level. If I worked in office it would be one thing, but it's remote so it just lets me gently caress around on my personal laptop and move stuff over once I find something useful. Contract's not too long so I can bounce if it gets too stupid. I'm also going to start expensing my laptop so eh.

geeves
Sep 16, 2004

Warbird posted:

So my new workplace is pretty nice and I got a pay raise from the last position. Everything's great except that GitHub is blocked on the network for christ knows what reason, as is the specific tool I was hired to work with. I've been advised to do research on my personal laptop and just email myself the code snippets I'm interested in.

Fukkin what. This is still a net improvement, but what are we doing here people?

This happened at a healthcare provider I worked with briefly many years ago. Over Christmas, while everyone was gone, they blocked a lot of social media sites and locked down a lot of stuff. GitHub was considered social media, and guess where some of their code was stored. (Nothing HIPAA-related, etc.)

I was already planning on leaving; this just pushed me out the door, and I resigned and cited this reason. Many other devs did too. The department they had just built up over the previous 6 months lost 20 people over the next couple of weeks - down from 30 to 10 devs.

Word from the inside was that they didn't learn their lesson and only doubled down on locking down the internet, and installed JAMF, which they used to remove Subversion, seeing as that could be a vector to steal code and send it to their biggest competitor.

The VP of Operations was quickly fired for attempting to run his draconian fiefdom and costing them millions in lost time.

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read
What’s everyone using for a container registry service? Rolling your own? We’ve been using ACR but the features seem lacking.

Also management wants something with a backup/restore option because ‘what if someone gets in and deletes all of our images?’, or is the right answer to that ‘we would just rebuild the image from the repo’?

freeasinbeer
Mar 26, 2015

by Fluffdaddy
We are moving to GCR; it's cheap, multi-region, and not overcomplicated.


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Spring Heeled Jack posted:

What’s everyone using for a container registry service? Rolling your own? We’ve been using ACR but the features seem lacking.

Also management wants something with a backup/restore option because ‘what if someone gets in and deletes all of our images?’, or is the right answer to that ‘we would just rebuild the image from the repo’?
If someone wanted to delete all your images maliciously, why wouldn't they also delete all your backups maliciously? If you're concerned, set up a backup job to periodically copy them to a secondary account with no other access.
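A rough sketch of that kind of backup job as a Kubernetes CronJob using skopeo (image names, project names and schedule are all placeholders, and registry credentials are omitted):

code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: registry-backup
spec:
  schedule: "0 3 * * *"                # nightly
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mirror
            image: quay.io/skopeo/stable   # or wherever you get skopeo from
            # copies one repo/tag into a locked-down backup project; a real
            # job would loop over an image list and mount pull/push creds
            command:
            - skopeo
            - copy
            - docker://gcr.io/prod-project/myapp:1.2.3
            - docker://gcr.io/backup-project/myapp:1.2.3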

freeasinbeer posted:

We are moving to GCR; it's cheap, multi-region, and not overcomplicated.
GCR is our registry of choice too. It's not the friendliest UX around, and I wouldn't choose it for hosting public images, but it's way cheaper and faster than Docker Hub.
