Hadlock
Nov 9, 2004

My guess is that somebody ran a report against all their vendors and sent out the results to everyone as part of a checkbox exercise to justify their job's existence

What your manager should have said is that TLS 1.2 is the current global standard and does not have an EOL, but that you intend to switch to TLS 1.3 to stay in compliance when that changes. The checkbox guy would have his response and be happy and go away. Maybe sprinkle in some stuff about customer compatibility

Only if they really, really pushed on this should this request have gotten to your desk. I would push back on wasting any more time on this task

Edit: your manager sucks at protecting you and your time

Hadlock fucked around with this message at 16:51 on Oct 27, 2021

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
YMMV if you operate under PCI DSS, FedRAMP, or some other external ATO or security framework, which may mandate these sorts of things, but if that's the case it should be coming from your security department or your own security tools, not from an external customer.

TBH I kinda disagree with it being a waste of time and just checking a box. Normally I advocate going with defaults, but security hardening is something everyone should be comfortable doing and something you should already have BEEN doing. Don't assume sane defaults when it comes to configuring external connections; a customer calling you out on it is a failure of your company's process, not just them being annoying.

Bhodi fucked around with this message at 16:59 on Oct 27, 2021

xzzy
Mar 5, 2009

Hadlock posted:

Only if they really, really pushed on this should this request have gotten to your desk. I would push back on wasting any more time on this task

Edit: your manager sucks at protecting you and your time

This is how security works. No one understands what any of the reports mean, they sit around and let nessus do its thing, wait for it to spit out a scary pdf and roll the turd downhill until someone makes the error go away.

hey mom its 420
May 12, 2007

thanks for the help guys! super helpful.

It's just a web app, and 95% of our traffic uses TLS 1.3 anyway, so if some customers complain we'll just tell them to update their browsers. So I guess it'll be good to restrict the list a bit.
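
For posterity, if it's nginx terminating TLS, the restriction is apparently a one-liner (untested, and assuming nginx):

code:
ssl_protocols TLSv1.2 TLSv1.3;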

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Bhodi posted:

TBH I kinda disagree with it being a waste of time and just checking a box. Normally I advocate going with defaults, but security hardening is something everyone should be comfortable doing and something you should already have BEEN doing. Don't assume sane defaults when it comes to configuring external connections; a customer calling you out on it is a failure of your company's process, not just them being annoying.

That's true. It was always so embarrassing when someone installed Apache on RHEL7 and opened it to the internet with default TLS settings, and some time later CERT-FI would send an email about another POODLE-vulnerable web server on our network.

Anyone know of a user-agent translator? Feed it your Apache or Nginx logs and it would tell you what kinds of devices your customers are using and what percentage of them a given TLS cutoff would lock out.
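
Short of a real tool, I guess you could approximate it by logging the negotiated protocol yourself and tallying it up; in nginx that'd be something like (untested):

code:
log_format tlsinfo '$ssl_protocol $ssl_cipher "$http_user_agent"';
access_log /var/log/nginx/tls.log tlsinfo;

Then a sort | uniq -c over the first column tells you how much traffic each protocol version actually carries.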

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Historically, for me, the most lax customers, the ones furthest behind in technology, tend to be super important clients that are like 30% of total company revenue, so I have to do horrible things like scanning for the IP ranges advertised from their networks and making an nginx rule that offers certain ciphers only to them, while everyone else gets what I meant to do. Freakin' enterprise, I tell ya.

Hadlock
Nov 9, 2004

Gob bless ole wanda tingle in payroll, still using a Windows 98 machine because she "has it like she wants it" and nobody wants to be the guy to delay everyone's paycheck because Wanda couldn't get everything just right without her fluffy pooch desktop wallpaper

She'll upgrade when she's dead good and ready to :colbert:

^ this is a true story from 2013, at a publicly traded company

Hadlock fucked around with this message at 21:35 on Oct 27, 2021

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

necrobobsledder posted:

Historically, for me, the most lax customers, the ones furthest behind in technology, tend to be super important clients that are like 30% of total company revenue, so I have to do horrible things like scanning for the IP ranges advertised from their networks and making an nginx rule that offers certain ciphers only to them, while everyone else gets what I meant to do. Freakin' enterprise, I tell ya.

cripes

12 rats tied together
Sep 7, 2006

i worked at a place where we had a legacy, enterprise integration that boiled down to "execute whatever comes into this tcp port as if it were ruby". external software security audit had a field day with that one

Scikar
Nov 20, 2005

5? Seriously?

In the past it was always legacy mission-critical apps requiring ancient versions of IE that seemed to get in the way, but lately I think a lot of enterprises have come under pressure to finally virtualize the browser or isolate those machines. The next pain in the rear end is old mobile devices that are stuck on like Android 4.0, but if you only serve desktops I really think it's gotten a lot better in the last couple of years.

Methanar
Sep 26, 2013

by the sex ghost
lol us-gov-west-1 is dead

madmatt112
Jul 11, 2016

Is that a cat in your pants, or are you just a lonely excuse for an adult?

“AWS - at least we’re cheaper than Azure!”

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
I hope this is the right thread for this… I’m trying to get a bunch of metrics from our services into Prometheus and so far everything is fine, but I’m confused about how to partition my data in Prometheus.

We have client software that pushes data to the backend from each of our customers, so I have a counter for incoming_transaction_count. But now I want to be able to tell if any customer hasn't pushed transactions in the past hour and alert on that, so I thought I'd label each incoming_transaction_count with the customer id. But the Prometheus docs say that labels shouldn't be used for high-cardinality values, and we're looking at thousands of distinct customer ids. They also say that the metric name shouldn't be procedurally generated, so I shouldn't create distinct counters for each customer id either.

I know these are all guidelines and I’m free to disregard them, but they’re there for a reason so I’d rather set things up properly from the start if there’s a better way, although I’m not seeing how given the two guidelines above.
For now I have a couple of metrics I would want to track per customer, but that would probably grow to at least 10-20 for at least 2000 customers. I know Prometheus will be able to handle this sort of load without much stress but I also don’t want to do things inefficiently out of ignorance.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
Your concern from the load perspective is how many time series you are creating. If the problems you're trying to solve with prometheus involve slicing and dicing at the customer level there's really no way around adding a unique per-customer tag to your metrics. So for every metric you push with those tags you just need to be aware of the fact that you're creating (customer count)x the number of time series. As long as you are judicious with which metrics you're tagging (ie not pushing the 1000s of metrics that might come from something like node exporter) per-customer, my guess is that you'll probably be fine.
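
For the specific "customer went quiet" alert, a rough sketch of the rule, assuming you went with a customer_id label on your counter (names are whatever you actually picked, untested):

code:
groups:
  - name: customer-ingest
    rules:
      # fires per customer_id series whose counter hasn't moved in an hour
      - alert: CustomerStoppedPushing
        expr: increase(incoming_transaction_count[1h]) == 0
        for: 5m
        annotations:
          summary: "customer {{ $labels.customer_id }} pushed nothing in the past hour"

One caveat: if a customer's series goes stale entirely, increase() returns nothing rather than 0, so you may want an absent()-style check on top of this.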

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

my homie dhall posted:

Your concern from the load perspective is how many time series you are creating. If the problems you're trying to solve with prometheus involve slicing and dicing at the customer level there's really no way around adding a unique per-customer tag to your metrics. So for every metric you push with those tags you just need to be aware of the fact that you're creating (customer count)x the number of time series. As long as you are judicious with which metrics you're tagging (ie not pushing the 1000s of metrics that might come from something like node exporter) per-customer, my guess is that you'll probably be fine.

That’s exactly what I wanted to hear, thanks.

22 Eargesplitten
Oct 10, 2010



Docker question. I have a container that runs a python script that needs to accept a file as an argument. The problem is that the file is going to be on the user's computer. Is there a way to do this other than to set up volumes? That seems like overkill and also messy since it could result in new associations being created every single time the container is run.

Votlook
Aug 20, 2005

22 Eargesplitten posted:

Docker question. I have a container that runs a python script that needs to accept a file as an argument. The problem is that the file is going to be on the user's computer. Is there a way to do this other than to set up volumes? That seems like overkill and also messy since it could result in new associations being created every single time the container is run.

Can you make the script accept input from stdin, so that you can pipe the contents of the file to the script in the container, with something like 'cat myfile.txt | docker run -i myimage python myscript.py'?

Otherwise volumes are the way to go.
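
The script side is trivial, something like (script name made up):

code:
# myscript.py - read the payload from stdin instead of from a file path
import sys

data = sys.stdin.read()
print(f"received {len(data)} bytes")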

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
You can also bind-mount the file into the container, e.g. -v /some/path/outside/container:/some/path/inside/container:ro,Z (the "ro" means read-only, and the "Z" helps deal with SELinux issues).

If the external file is small, you can also shove it into an environment variable and pass it in that way.
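
e.g. something like this (untested, names made up):

code:
docker run --rm -e PAYLOAD="$(cat myfile.txt)" myimage python myscript.py

and the script reads it back out of os.environ["PAYLOAD"].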

22 Eargesplitten
Oct 10, 2010



Thanks, I'll look at both options. Either pass it via stdin or pass it as an environment variable, since in almost every situation the file is going to be less than 100-200 characters.

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
Hopefully my question isn't too stone-age for people who understand modern devops processes to answer.

I have a page running on a DO droplet whose components are a vue spa, an express app, and couchdb. It's all served via caddyserver. The configuration of the droplet has been ad hoc and by hand (build locally, ftp, etc), so I'm looking to get the configuration under version control and automate updates through gh actions.

Right now I'm trying to set up an action that detects if my Caddyfile has changed, scp's the new one to the droplet, and restarts the caddyserver. Problem is that the Caddyfile lives in a root-owned location (/etc/caddy/Caddyfile), so my naive attempts are failing.

https://github.com/NiloCK/vue-skuilder/blob/master/.github/workflows/deploy-caddyfile.yml is my work-in-progress, and you can feel free to have a laugh at my flailing prior attempts.

Any advice?

The Fool
Oct 16, 2003


My potentially naive solution would be to move your caddyfile to a non-privileged location, then create a symbolic link to the privileged location.
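
Something like (paths hypothetical):

code:
sudo mv /etc/caddy/Caddyfile /home/deploy/caddy/Caddyfile
sudo ln -s /home/deploy/caddy/Caddyfile /etc/caddy/Caddyfile

After that your action can scp straight over the copy in the home directory without root.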

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

The Fool posted:

My potentially naive solution would be to move your caddyfile to a non-privileged location, then create a symbolic link to the privileged location.

This is a good idea, and lines up well with the way I upload new builds of the SPA (also based on advice from this thread. Maybe from you!).

Permission issues may still remain around running the `caddy reload` command.

Maybe I should actually look at Caddy's API for management as well.

barkbell
Apr 14, 2006

woof
devops really exposes bad process in the team, huh

12 rats tied together
Sep 7, 2006

the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit

Walked
Apr 14, 2003

12 rats tied together posted:

the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit

Might put this on my resume tbh

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Newf posted:

This is a good idea, and lines up well with the way I upload new builds of the SPA (also based on advice from this thread. Maybe from you!).

Permission issues may still remain around running the `caddy reload` command.

Maybe I should actually look at Caddy's API for management as well.

Literally the first quickstart example on the documentation page is how to update the Caddyfile via HTTP POST, so yeah:

https://caddyserver.com/docs/quick-starts/api

Use SSH local port forwarding to remotely access port 2019 on the server and you're good to go.

e: Note that the quickstart example uses the caddyfile in raw JSON format. Assuming you have a regular Caddyfile in your repo, you need to set the Content-Type header to 'text/caddyfile'. (This is explained in the documentation for the /load endpoint.)
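
i.e. roughly (untested, droplet address made up):

code:
# forward the admin port, then POST the new Caddyfile to /load
ssh -N -L 2019:localhost:2019 deploy@your-droplet &
curl localhost:2019/load \
  -H "Content-Type: text/caddyfile" \
  --data-binary @Caddyfile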

NihilCredo fucked around with this message at 18:55 on Nov 19, 2021

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Devops is the process of automating everything and trading off all engineering problems until all your technical debt is running K8S and getting paged for that all the time in creative ways instead.

12 rats tied together
Sep 7, 2006

one of my favorite things to do at work in recent years is to point the bullshit questioner gun at kubernetes and watch things get really uncomfortable when it becomes obvious who is resume boosting for their next gig at the expense of trying to solve the problem they are presented with

e: basically, i agree with necrobobsledder.

12 rats tied together fucked around with this message at 20:33 on Nov 19, 2021

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

Devops is the process of automating everything and trading off all engineering problems until all your technical debt is running K8S and getting paged for that all the time in creative ways instead.
What, it's bad to stuff your infrastructure into a God Object?

e: More people need to realize that Kubernetes is the cloud-native supported target for commercial off-the-shelf software that you host yourself. ISVs have every reason to target the most hybrid, agnostic approach possible. Your 100-engineer company doesn't. It's much closer to a replacement for your old vSphere cluster than a useful deployment target. It's enterprise tech, not startup tech.

Vulture Culture fucked around with this message at 22:36 on Nov 19, 2021

zokie
Feb 13, 2006

Out of many, Sweden

12 rats tied together posted:

the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit

Maybe not the right thread but I hope it fits: we have another team managing an ES cluster we use for regular logging stuff and real user metrics. They started with just the one, but now they have set up a test cluster for their testing and a "pre" cluster that "is a prod environment for your non-prod environments", and they want us to have only our prod environment logging &c. to the prod cluster.

I am flat out refusing, that means 100% more work for us. Not just maintaining reports and dashboards, but also just uuugh. I’m pretty sure Support/Application Management barely knows this stuff exists and me and the rest of our team use it for like 80% test stuff and only ever check the prod data if some issue is escalated all the way to us.

Also it’s not like we treat all environments equal, we purge test indexes much faster.

Am I crazy? Isn’t this just useless?

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

zokie posted:

Maybe not the right thread but I hope it fits: we have another team managing an ES cluster we use for regular logging stuff and real user metrics. They started with just the one, but now they have set up a test cluster for their testing and a "pre" cluster that "is a prod environment for your non-prod environments", and they want us to have only our prod environment logging &c. to the prod cluster.

I am flat out refusing, that means 100% more work for us. Not just maintaining reports and dashboards, but also just uuugh. I’m pretty sure Support/Application Management barely knows this stuff exists and me and the rest of our team use it for like 80% test stuff and only ever check the prod data if some issue is escalated all the way to us.

Also it’s not like we treat all environments equal, we purge test indexes much faster.

Am I crazy? Isn’t this just useless?

It sounds like what you're asking is if your infrastructure team should have N > 1 instances of critical infrastructure, which to me seems in your best interest. In doing so, they should be taking steps to make sure this transition is as transparent as possible, meaning they should be providing a way for you to replicate and update whatever existing tooling you have across any new instances they decide to bring up.

Hadlock
Nov 9, 2004

12 rats tied together posted:

the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit

This guy has seen some poo poo

Hadlock
Nov 9, 2004

Vulture Culture posted:

What, it's bad to stuff your infrastructure into a God Object?

e: More people need to realize that Kubernetes is the cloud-native supported target for commercial .... It's enterprise tech, not startup tech.

I mean, if you start with k8s and only design 12-factor apps, it's pretty straightforward

Baking 11 years' worth of badly written code and questionable architecture decisions that "mostly work as long as everything is on the same server" into 12-factor and then containerizing, then deploying k8s via terraform is painful, sure

Once you learn the 12-factor/container/k8s pattern and train up engineering to deploy new services in a sane and consistent manner, managed k8s is like greased lightning. Sorry you feel otherwise

zokie
Feb 13, 2006

Out of many, Sweden

my homie dhall posted:

It sounds like what you're asking is if your infrastructure team should have N > 1 instances of critical infrastructure, which to me seems in your best interest. In doing so, they should be taking steps to make sure this transition is as transparent as possible, meaning they should be providing a way for you to replicate and update whatever existing tooling you have across any new instances they decide to bring up.

It’s an elastic search [bold]cluster[/bold], we might have selected to use something like Application Insights or one of the dozens managed ES providers that exist, and if we did we wouldn’t use pre-azure.com or pre-aws.com for our non prod environments.

spiritual bypass
Feb 19, 2008

Grimey Drawer
Anybody in here self-hosting their Terraform state? We work in a tough regulatory environment and basically need to keep it on our own servers. I'm not sure which storage is most ergonomic and safe.

Zorak of Michigan
Jun 10, 2006


12 rats tied together posted:

the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit

My job is currently readying for war over whether "governance to keep people from blindly including any NPM package they want, directly from public repos into builds" is or is not stupid bullshit. I'm not sure "we probably wouldn't detect NPM attacks in a timely fashion anyway" is as good a defense as they think it is.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

cum jabbar posted:

Anybody in here self-hosting their Terraform state? We work in a tough regulatory environment and basically need to keep it on our own servers. I'm not sure which storage is most ergonomic and safe.
It's not very large and it's just a plaintext file, so anything that has file locking, that you keep appropriate backups of, and that ideally has versioning will work fine.
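
If you want locking without standing up anything new, one sketch: Terraform ships a pg backend that locks properly, so any Postgres box you already run and back up would do (connection string hypothetical):

code:
terraform {
  backend "pg" {
    conn_str = "postgres://terraform@db.internal/terraform_backend"
  }
}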

Docjowles
Apr 9, 2009

Zorak of Michigan posted:

My job is currently readying for war over whether "governance to keep people from blindly including any NPM package they want, directly from public repos into builds" is or is not stupid bullshit. I'm not sure "we probably wouldn't detect NPM attacks in a timely fashion anyway" is as good a defense as they think it is.

Extremely relatable. We are working on porting some services to AWS and in the process trying to shore up some highly questionable (/nonexistent) operational and security practices while we have the chance to rebuild greenfield. The strenuous pushback from very senior engineers of “well our on prem situation is a total dumpster fire anyway so I don’t see the point in making this one thing better” has been something to behold.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Last time I had to deal with that, we compromised with a local mirror which had scheduled syncing to a staging area and went to our security team for scan/approval before going live (just symlinking to the staging directory). It made security and auditing happy because there were appropriate approvals, and the real security ended up being the ~1-2 week delay from live, which allowed us to just not do the sync that week when something dumb hit the news. Didn't protect from the whole "this thing had been compromised for 6+ months" problem or npm dependency bloat, but it would catch the hacked-account uploads that get noticed and reverted quickly upstream.
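
The mechanics were roughly this shape (paths and schedule made up):

code:
# weekly cron: pull upstream into staging; go-live is a symlink flip after approval
rsync -a --delete mirror.example.com::npm/ /srv/npm-staging/
# ...scan + approval happen out-of-band, then:
ln -sfn /srv/npm-staging /srv/npm-live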

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Bhodi posted:

Last time I had to deal with that, we compromised with a local mirror which had scheduled syncing to a staging area and went to our security team for scan/approval before going live (just symlinking to the staging directory). It made security and auditing happy because there were appropriate approvals, and the real security ended up being the ~1-2 week delay from live, which allowed us to just not do the sync that week when something dumb hit the news. Didn't protect from the whole "this thing had been compromised for 6+ months" problem or npm dependency bloat, but it would catch the hacked-account uploads that get noticed and reverted quickly upstream.

A 1-2 week delay between wanting to use a dependency and getting approval?

If so, that's nuts. Putting security as a primary blocker is fundamentally unscalable and only gives engineering orgs ammo to push back against better security measures.

That's why good security teams have increasingly focused on shifting security left, putting the onus on tooling that developers work with directly, like npm audit, Snyk, etc., and adding gates in the CI/CD pipeline to prevent malicious packages from sneaking through.

The only downside is that it has to be implemented alongside decent endpoint visibility tools and a solid active-response process in the event that a malicious package is identified on a dev's endpoint, but if you're putting up a local mirror hopefully you have enough opsec to have those tools and processes in place already.
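
The cheap version of that CI gate is just something like (hypothetical workflow step):

code:
# GitHub Actions step (sketch): fail the build on known-vulnerable dependencies
- name: Audit dependencies
  run: npm audit --audit-level=high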
