|
My guess is that somebody ran a report against all their vendors and sent out the results to everyone as part of a check-box exercise to justify their job's existence. What your manager should have said is that TLS 1.2 is the current global standard and has no EOL, but that you intend to switch to TLS 1.3 to stay in compliance when that changes. The check-box guy would have his response, be happy, and go away. Maybe sprinkle in some stuff about customer compatibility. Only if they really, really pushed on this should this request have gotten to your desk. I would push back on wasting any more time on this task. Edit: your manager sucks at protecting you and your time Hadlock fucked around with this message at 16:51 on Oct 27, 2021 |
# ? Oct 27, 2021 16:48 |
|
|
|
YMMV if you operate under PCI DSS, FedRAMP, or some other external ATO or security framework, which may mandate these sorts of things, but if that's the case it should be coming from your security department or your own security tools, not from an external customer. TBH I kinda disagree with it being a waste of time and just checking a box. Normally I advocate going with defaults, but security hardening is something everyone should be comfortable doing and something you should have already BEEN doing. Don't assume sane defaults when it comes to configuration of external connections; a customer calling you out on it is a failure of your company's process, not just them being annoying. Bhodi fucked around with this message at 16:59 on Oct 27, 2021 |
# ? Oct 27, 2021 16:52 |
|
Hadlock posted:Only if they really really pushed on this, should this request have gotten to your desk. I would push back on wasting any more time on this task This is how security works. No one understands what any of the reports mean, they sit around and let nessus do its thing, wait for it to spit out a scary pdf and roll the turd downhill until someone makes the error go away.
|
# ? Oct 27, 2021 16:53 |
|
thanks for the help guys! super helpful. It's just a web app, and 95% of our traffic uses TLS1.3 anyway, so if some customers complain we'll just tell them to update their browsers. so I guess it'll be good to restrict the list a bit.
|
# ? Oct 27, 2021 16:59 |
|
Bhodi posted:TBH I kinda disagree with it being a waste of time and just checking a box. Normally I advocate with going with defaults but security hardening is something everyone should be comfortable with doing and something you should have already BEEN doing. Don't assume sane defaults when it comes to configuration of external connections and a customer calling you out on it is a failure of your company process, not just them being annoying. That's true. It was always so embarrassing when someone installed Apache on RHEL7 and opened it to the internet with default TLS settings, and some time later CERT-FI sent an email about another POODLE-vulnerable web server on our network. Anyone know of a user agent translator? Feed it your Apache or Nginx logs and it tells you what kinds of devices your customers are using and what percentage of them a given TLS cutoff would lock out.
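Not aware of an off-the-shelf translator, but you can get part of the answer from the logs themselves. This sketch assumes you've extended your nginx log_format to append $ssl_protocol (the default combined format doesn't include it):

```shell
# Assumes a custom log_format whose last field is the negotiated protocol, e.g.:
#   log_format tls '$remote_addr "$http_user_agent" $ssl_protocol';
# Tally requests per TLS version, most common first:
awk '{print $NF}' access.log | sort | uniq -c | sort -rn
```

The count next to each TLSv1.x line tells you how much traffic a given cutoff would lose; cross-referencing the user agents of the old-protocol lines tells you who.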
|
# ? Oct 27, 2021 18:05 |
|
Historically, for me, the most lax customers that are furthest behind in technology tend to be the super important clients that make up like 30% of total company revenue, so I have to do horrible things like scanning for the IP ranges advertised from their networks and making an nginx rule that offers certain ciphers only to them while everyone else gets what I actually meant to do. Freakin' enterprise, I tell ya.
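For anyone wondering what that hack looks like: ssl_ciphers can't vary per connection inside one server block, so the usual shape is a second listener reserved for the pinned ranges. A rough sketch, with made-up IPs, ranges, and hostnames (cert directives omitted):

```nginx
# strict endpoint for everyone
server {
    listen 203.0.113.10:443 ssl;
    server_name app.example.com;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
}

# legacy endpoint, reachable only from the enterprise customer's ranges
server {
    listen 203.0.113.11:443 ssl;
    server_name legacy.example.com;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # deliberately looser
    allow 198.51.100.0/24;                 # their advertised range
    deny all;
}
```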
|
# ? Oct 27, 2021 20:08 |
|
Gob bless ole wanda tingle in payroll, still using a Windows 98 machine because she "has it like she wants it" and nobody wants to be the guy to delay everyone's paycheck because Wanda couldn't get everything just right without her fluffy pooch desktop wallpaper. She'll upgrade when she's ^ this is a true story from 2013 for a publicly traded company Hadlock fucked around with this message at 21:35 on Oct 27, 2021 |
# ? Oct 27, 2021 21:33 |
|
necrobobsledder posted:Historically for myself the most lax customers that are furthest behind in technology tend to be super important clients that are like 30% of the total company revenue so I have to do horrible things like scanning for their IP ranges advertised from their networks and make an nginx rule that offers certain ciphers only for them while everyone else gets what I meant to do. Freakin' enterprise I tell ya. cripes
|
# ? Oct 28, 2021 00:40 |
|
i worked at a place where we had a legacy, enterprise integration that boiled down to "execute whatever comes into this tcp port as if it were ruby". external software security audit had a field day with that one
|
# ? Oct 28, 2021 00:54 |
|
It was always legacy mission critical apps requiring ancient versions of IE that seemed to get in the way in the past, but lately I think a lot of enterprises have come under pressure to finally virtualize the browser or isolate those machines. The next pain in the rear end is old mobile devices that are stuck on like Android 4.0, but if you only serve desktops I really think it's gotten a lot better in the last couple of years.
|
# ? Oct 28, 2021 06:46 |
|
lol us-gov-west-1 is dead
|
# ? Nov 2, 2021 19:30 |
“AWS - at least we’re cheaper than Azure!”
|
|
# ? Nov 5, 2021 02:12 |
|
I hope this is the right thread for this… I'm trying to get a bunch of metrics from our services into Prometheus and so far everything is fine, but I'm confused about how to partition my data in Prometheus. We have client software that pushes data to the backend from each of our customers, so I have a counter for incoming_transaction_count. But now I want to be able to tell if any customer hasn't pushed transactions in the past hour and alert on that, so I thought I'd label each incoming_transaction_count with the customer id.

But the Prometheus docs say that labels shouldn't be used for high-cardinality data, and we're looking at thousands of distinct customer ids. They also say that metric names shouldn't be procedurally generated, so I shouldn't create distinct counters per customer id either. I know these are all guidelines and I'm free to disregard them, but they're there for a reason, so I'd rather set things up properly from the start if there's a better way, although I'm not seeing one given the two guidelines above.

For now I have a couple of metrics I'd want to track per customer, but that would probably grow to at least 10-20 for at least 2000 customers. I know Prometheus will be able to handle this sort of load without much stress, but I also don't want to do things inefficiently out of ignorance.
|
# ? Nov 8, 2021 08:57 |
|
Your concern from the load perspective is how many time series you are creating. If the problems you're trying to solve with prometheus involve slicing and dicing at the customer level there's really no way around adding a unique per-customer tag to your metrics. So for every metric you push with those tags you just need to be aware of the fact that you're creating (customer count)x the number of time series. As long as you are judicious with which metrics you're tagging (ie not pushing the 1000s of metrics that might come from something like node exporter) per-customer, my guess is that you'll probably be fine.
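For the alerting half of the question: using the metric name from the post and a hypothetical customer_id label, the stale-customer check could be sketched as a rule like this. One caveat: increase() returns nothing for a series with no samples in the window, so a customer whose series goes fully stale needs absent()-style handling or a "last push timestamp" gauge instead.

```yaml
# sketch of a Prometheus alerting rule; names and thresholds are illustrative
groups:
  - name: customer-ingest
    rules:
      - alert: CustomerStoppedPushing
        # fires per customer_id whose counter did not grow in the last hour
        expr: increase(incoming_transaction_count[1h]) == 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "No transactions from customer {{ $labels.customer_id }} for 1h+"
```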
|
# ? Nov 8, 2021 12:26 |
|
my homie dhall posted:Your concern from the load perspective is how many time series you are creating. If the problems you're trying to solve with prometheus involve slicing and dicing at the customer level there's really no way around adding a unique per-customer tag to your metrics. So for every metric you push with those tags you just need to be aware of the fact that you're creating (customer count)x the number of time series. As long as you are judicious with which metrics you're tagging (ie not pushing the 1000s of metrics that might come from something like node exporter) per-customer, my guess is that you'll probably be fine. That’s exactly what I wanted to hear, thanks.
|
# ? Nov 8, 2021 14:03 |
|
Docker question. I have a container that runs a python script that needs to accept a file as an argument. The problem is that the file is going to be on the user's computer. Is there a way to do this other than to set up volumes? That seems like overkill and also messy since it could result in new associations being created every single time the container is run.
|
# ? Nov 8, 2021 21:01 |
|
22 Eargesplitten posted:Docker question. I have a container that runs a python script that needs to accept a file as an argument. The problem is that the file is going to be on the user's computer. Is there a way to do this other than to set up volumes? That seems like overkill and also messy since it could result in new associations being created every single time the container is run. Can you make the script accept input from stdin, so that you can pipe the content of the file to the script in the container, with something like 'cat myfile.txt | docker run -i python myscript.py'? Otherwise volumes are the way to go.
|
# ? Nov 8, 2021 21:17 |
|
You can also bind-mount the file into the container, e.g. -v /some/path/outside/container:/some/path/inside/container:ro,Z (the "ro" means read only, and the "Z" helps deal with selinux issues). If the external file is small, you can also shove it into an environment variable and pass it in that way.
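All three options side by side, with made-up image and script names. Note the colon syntax belongs to -v; the --mount flag takes key=value pairs instead:

```shell
# 1) stdin: no mount needed, the script reads sys.stdin
cat input.txt | docker run --rm -i myimage python myscript.py

# 2) bind-mount the single file, read-only
docker run --rm \
  -v "$(pwd)/input.txt:/data/input.txt:ro" \
  myimage python myscript.py /data/input.txt

# 3) tiny files: pass the content through an environment variable
docker run --rm -e INPUT="$(cat input.txt)" myimage python myscript.py
```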
|
# ? Nov 8, 2021 22:21 |
|
Thanks, I'll look at both options. Either pass it as stdin or pass it as an environmental variable since the file is in almost every situation going to be less than 100-200 characters.
|
# ? Nov 8, 2021 23:38 |
|
Hopefully my question isn't too stone-age for people who understand modern devops processes to answer. I have a page running on a DO droplet whose components are a Vue SPA, an Express app, and CouchDB. It's all served via caddyserver. The configuration of the droplet has been ad hoc and by hand (build locally, FTP, etc.), so I'm looking to get the configuration under version control and automate updates through GH Actions. Right now I'm trying to set up an action that detects if my Caddyfile has changed, scp's the new one to the droplet, and restarts the caddyserver. Problem is that the Caddyfile lives in a permissioned location (/etc/caddy/Caddyfile), so my naive attempts are failing. https://github.com/NiloCK/vue-skuilder/blob/master/.github/workflows/deploy-caddyfile.yml is my work in progress, and you can feel free to have a laugh at my flailing prior attempts. Any advice?
|
# ? Nov 19, 2021 14:53 |
|
My potentially naive solution would be to move your caddyfile to a non-privileged location, then create a symbolic link to the privileged location.
|
# ? Nov 19, 2021 15:42 |
|
The Fool posted:My potentially naive solution would be to move your caddyfile to a non-privileged location, then create a symbolic link to the privileged location. This is a good idea, and lines up well with the way I upload new builds of the SPA (also based on advice from this thread. Maybe from you!). Permission issues may still remain around running the `caddy reload` command. Maybe I should actually look at Caddy's API for management as well.
|
# ? Nov 19, 2021 15:48 |
|
devops really exposes bad process in the team, huh
|
# ? Nov 19, 2021 18:07 |
|
the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit
|
# ? Nov 19, 2021 18:36 |
|
12 rats tied together posted:the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit Might put this on my resume tbh
|
# ? Nov 19, 2021 18:40 |
|
Newf posted:This is a good idea, and lines up well with the way I upload new builds of the SPA (also based on advice from this thread. Maybe from you!). Literally the first quickstart example in the documentation page is how to update the caddyfile via HTTP POST, so yeah: https://caddyserver.com/docs/quick-starts/api Use SSH local port forwarding to remotely access port 2019 on the server and you're good to go. e: Note that the quickstart example uses the caddyfile in raw JSON format. Assuming you have a regular Caddyfile in your repo, you need to set the Content-Type header to 'text/caddyfile'. (This is explained in the documentation for the /load endpoint.) NihilCredo fucked around with this message at 18:55 on Nov 19, 2021 |
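Put together, the deploy step could look roughly like this (droplet hostname and user are placeholders; assumes Caddy's admin API is on its default localhost:2019 on the droplet):

```shell
# open an SSH tunnel to the droplet's Caddy admin API
ssh -N -L 2019:localhost:2019 deploy@my-droplet &
tunnel_pid=$!
sleep 2   # give the tunnel a moment to come up

# push the plain-text Caddyfile to the /load endpoint; Caddy reloads on success
curl -fsS localhost:2019/load \
  -H "Content-Type: text/caddyfile" \
  --data-binary @Caddyfile

kill "$tunnel_pid"
```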
# ? Nov 19, 2021 18:50 |
|
Devops is the process of automating everything and trading off all engineering problems until all your technical debt is running K8S and getting paged for that all the time in creative ways instead.
|
# ? Nov 19, 2021 19:46 |
|
one of my favorite things to do at work in recent years is to point the bullshit questioner gun at kubernetes and watch things get really uncomfortable when it becomes obvious who is resume boosting for their next gig at the expense of trying to solve the problem they are presented with e: basically, i agree with necrobobsledder. 12 rats tied together fucked around with this message at 20:33 on Nov 19, 2021 |
# ? Nov 19, 2021 20:29 |
|
necrobobsledder posted:Devops is the process of automating everything and trading off all engineering problems until all your technical debt is running K8S and getting paged for that all the time in creative ways instead. e: More people need to realize that Kubernetes is the cloud-native supported target for commercial off-the-shelf software that you host yourself. ISVs have every reason to target the most hybrid, agnostic approach possible. Your 100-engineer company doesn't. It's much closer to a replacement for your old vSphere cluster than a useful deployment target. It's enterprise tech, not startup tech. Vulture Culture fucked around with this message at 22:36 on Nov 19, 2021 |
# ? Nov 19, 2021 22:30 |
|
12 rats tied together posted:the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit Maybe not the right thread but I hope it fits: we have another team managing an ES cluster we use for regular logging stuff and real user metrics. They started with just the one, but now they have set up a test cluster for their own testing and a "pre" cluster that "is a prod environment for your non-prod environments", and they want us to just have our prod environment logging &c to the prod cluster. I am flat out refusing; that means 100% more work for us. Not just maintaining reports and dashboards, but also just uuugh. I'm pretty sure Support/Application Management barely knows this stuff exists, and me and the rest of our team use it for like 80% test stuff and only ever check the prod data if some issue is escalated all the way to us. Also it's not like we treat all environments equally; we purge test indexes much faster. Am I crazy? Isn't this just useless?
|
# ? Nov 19, 2021 22:38 |
|
zokie posted:Maybe not the right thread but it hope it fits: we have another team managing a ES cluster we use for regular logging stuff and real user metrics. They started with just the one, but now they have setup a test cluster for their testing and a pre cluster that “is a prod environment for your non prod environments” and they want us to just have our prod environment logging &c to the prod cluster. It sounds like what you're asking is if your infrastructure team should have N > 1 instances of critical infrastructure, which to me seems in your best interest. In doing so, they should be taking steps to make sure this transition is as transparent as possible, meaning they should be providing a way for you to replicate and update whatever existing tooling you have across any new instances they decide to bring up.
|
# ? Nov 20, 2021 03:52 |
|
12 rats tied together posted:the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit This guy has seen some poo poo
|
# ? Nov 20, 2021 07:24 |
|
Vulture Culture posted:What, it's bad to stuff your infrastructure into a God Object? I mean, if you start with k8s and only design 12-factor apps, it's pretty straightforward. Getting 11 years' worth of badly written code and questionable architecture decisions that "mostly work as long as everything is on the same server" into 12-factor shape and then containerizing, then deploying k8s via terraform is painful, sure. Once you learn the 12-factor/container/k8s pattern and train up engineering to deploy new services in a sane and consistent manner, managed k8s is like greased lightning. Sorry you feel otherwise
|
# ? Nov 20, 2021 07:32 |
|
my homie dhall posted:It sounds like what you're asking is if your infrastructure team should have N > 1 instances of critical infrastructure, which to me seems in your best interest. In doing so, they should be taking steps to make sure this transition is as transparent as possible, meaning they should be providing a way for you to replicate and update whatever existing tooling you have across any new instances they decide to bring up. It's an Elasticsearch CLUSTER; we might have selected something like Application Insights or one of the dozens of managed ES providers that exist, and if we had, we wouldn't use pre-azure.com or pre-aws.com for our non-prod environments.
|
# ? Nov 20, 2021 08:58 |
|
Anybody in here self-hosting their Terraform state? We work in a tough regulatory environment and basically need to keep it on our own servers. I'm not sure which storage is most ergonomic and safe.
|
# ? Nov 20, 2021 13:39 |
|
12 rats tied together posted:the question at the heart of devops is "what if we didn't have to do all that stupid bullshit?" which serves the dual function of highlighting all of the stupid bullshit as well as making enemies of people whose entire career is doing stupid bullshit My job is currently readying for war over whether "governance to keep people from blindly including any NPM package they want, directly from public repos into builds" is or is not stupid bullshit. I'm not sure "we probably wouldn't detect NPM attacks in a timely fashion anyway" is as good a defense as they think it is.
|
# ? Nov 20, 2021 16:09 |
|
cum jabbar posted:Anybody in here self-hosting their Terraform state? We work in a tough regulatory environment and basically need to keep it on our own servers. I'm not sure which storage is most ergonomic and safe.
|
# ? Nov 20, 2021 16:13 |
|
Zorak of Michigan posted:My job is currently readying for war over whether "governance to keep people from blindly including any NPM package they want, directly from public repos into builds" is or is not stupid bullshit. I'm not sure "we probably wouldn't detect NPM attacks in a timely fashion anyway" is as good a defense as they think it is. Extremely relatable. We are working on porting some services to AWS and in the process trying to shore up some highly questionable (/nonexistent) operational and security practices while we have the chance to rebuild greenfield. The strenuous pushback from very senior engineers of “well our on prem situation is a total dumpster fire anyway so I don’t see the point in making this one thing better” has been something to behold.
|
# ? Nov 20, 2021 16:33 |
|
Last time I had to deal with that, we compromised on a local mirror with scheduled syncing to a staging area, which went to our security team for scan/approval before going live (just symlinking to the staging directory). It made security and auditing happy because there were appropriate approvals, and the real security ended up being the ~1-2 week delay from live, which allowed us to just not do the sync that week when something dumb hit the news. It didn't protect against the whole "this thing had been compromised for 6+ months" problem or npm dependency bloat, but it would catch the hacked-account uploads that get spotted and reverted quickly.
|
# ? Nov 20, 2021 16:45 |
|
|
|
Bhodi posted:Last time I had to deal with that, we compromised with a local mirror which had scheduled syncing to a staging area and went to our security team for scan/approval before going live (just simlinking to the staging directory). It made security and auditing happy because there was appropriate approvals and the real security ended up being the ~1-2 week delay from live which allowed us to just not do the sync that week when something dumb hits the news. Didn't protect from the whole "this thing had been compromised for 6+ months" or npm dependency bloat but it would catch those quickly caught and reverted hacked account uploads. A 1-2 week delay from wanting to use a dependency to approval? If so, that's nuts. Putting security in as a primary blocker is fundamentally unscalable and only gives engineering orgs ammo to push back against better security measures. That's why good security teams have an increased focus on shifting security left, putting the onus on tooling that developers work with directly, like npm audit, Snyk, etc., and adding gates in the CI/CD pipeline to prevent malicious packages from sneaking through. The only downside is that it has to be implemented alongside decent endpoint visibility tools and a solid active-response process in the event that a malicious package is identified on a dev's endpoint, but if you're putting up a local mirror, hopefully you have enough opsec to have those tools and processes in place already.
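As a concrete (hypothetical) example of that kind of CI gate, a GitHub Actions step that fails the build on known-bad advisories might look like:

```yaml
# sketch of a workflow job step; the audit level threshold is illustrative
- name: Audit dependencies
  run: |
    npm ci
    npm audit --audit-level=high   # nonzero exit on high/critical advisories
```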
|
# ? Nov 20, 2021 17:02 |