|
Biowarfare posted:Why are they not being reeducated or terminated instead? Lol, just lol if you think rules apply to Shelly in Accounts Receivable. Her mess of a filing system is a form of job security and she knows it.
|
# ? Nov 4, 2020 19:05 |
|
|
Volmarias posted:Lol, just lol if you think rules apply to Shelly in Accounts Receivable. Her mess of a filing system is a form of job security and she knows it. This type of personality screams "upset because infosec removed my malware emoji toolbar from IE"
|
# ? Nov 4, 2020 19:09 |
|
Biowarfare posted:Why are they not being reeducated or terminated instead? I've never worked anywhere that would fire someone for failing a training. The same small subset of users will continuously fail, get the same refresher training or 1-on-1 with some poor IT staff member, and then immediately click on whatever stupid poo poo comes their way. Has anyone seen someone fired for falling for phishing emails?!?
|
# ? Nov 4, 2020 19:29 |
|
BaseballPCHiker posted:I've never worked anywhere that would fire someone for failing a training. The same small subset of users will continuously fail, get the same refresher training or 1-on-1 with some poor IT staff member, and then immediately click on whatever stupid poo poo comes their way. It isn't the failing a training part, it's the part where they're continually a liability to the company, wasting security's time making custom rulesets, repeatedly refusing to learn from mistakes and continuing to be a massive risk even after multiple attempts at education?
|
# ? Nov 4, 2020 19:32 |
|
Biowarfare posted:It isn't the failing a training part, it's the part where they're continually a liability to the company, wasting security's time making custom rulesets, repeatedly refusing to learn from mistakes and continuing to be a massive risk even after multiple attempts at education? Counterpoint: the IT department is a cost center, while Jeff in Sales brings in hella cash. So what if he's brought our entire company to its knees on multiple occasions?
|
# ? Nov 4, 2020 19:57 |
|
If you make security punitive you end up with resentful users who never report things they should and actively try to bypass controls because they don’t trust “the man”
|
# ? Nov 4, 2020 20:10 |
|
Yeah, punitive reactions to phishing tests will just bite security in the rear end.
|
# ? Nov 4, 2020 20:30 |
|
Bob Morales posted:They end up sending messages that don't have things like incorrect domains, invalid users, etc. Sorry, I don't follow you. I use kb4. Can you restate?
|
# ? Nov 4, 2020 20:35 |
|
Biowarfare posted:It isn't the failing a training part, it's the part where they're continually a liability to the company, wasting security's time making custom rulesets, repeatedly refusing to learn from mistakes and continuing to be a massive risk even after multiple attempts at education? Is it really wasting security's time if it's literally part of their job description?
|
# ? Nov 4, 2020 20:38 |
|
droll posted:Sorry, I don't follow you. I use kb4. Can you restate? KnowBe4 can send an unflawed message, even though it's not from your server (because you whitelist them or set up SPF records, etc.). No typos in the domains or usernames. Things people would normally be told to look for.
|
# ? Nov 4, 2020 20:53 |
|
Bob Morales posted:KnowBe4 can send an unflawed message, even though it's not from your server (because you whitelist them or set up SPF records, etc.) Can is the operative word. You pick the template or create your own, so you can definitely choose not to. I'm assuming you mean the display name on the From: when you refer to 'username'.
|
# ? Nov 4, 2020 21:21 |
|
I don't understand, is this a non-tech company thing? I've definitely seen people walked out for repeated security mishaps - the key word here being repeated, where they adamantly refuse to learn after being educated, but I'm surrounded by engineers and little to no sales/marketing/etc in my circle. The punitive thing isn't that they fell for it. It's that they are repeatedly falling for it, many times, putting your network, assets, and company data at risk, and all the education in the world hasn't helped. If an SDE causes repeated prod incidents and refuses to learn from them, I'd expect them to be PIP'd and terminated after multiple times. Why is this different for someone that potentially has access to PII and hasn't learnt after many attempts?
|
# ? Nov 4, 2020 21:37 |
|
Biowarfare posted:I don't understand, is this a non-tech company thing? I've definitely seen people walked out for repeated security mishaps - the key word here being repeated, where they adamantly refuse to learn after being educated, but I'm surrounded by engineers and little to no sales/marketing/etc in my circle. The punitive thing isn't that they fell for it. It's that they are repeatedly falling for it, many times, putting your network, assets, and company data at risk, and all the education in the world hasn't helped. Most non engineering companies may not even consider information security something they have to worry about. Too small to be a target, they've got some kind of fancy firewall their managed service provider sold them, etc. These kinds of risks are also rarely listened to, and decisions are made based on people's personal fiefdoms. A security team, if they exist, may simply not have the power to enforce termination for certain or even any employees, especially if it's for failing tests and potentially causing a problem, rather than actually having caused issues yet. It's different for you, because you're in an engineering role, and expected to know better. Not loving up a prod deployment is in fact part of your job, so doing just that means that you're unable to perform your primary job. Your boss has the wherewithal and ability to do something about you, because you make them look bad. Someone in a totally different organization may be very well protected politically for some reason or another, and failing a phishing test multiple times just means giving extra special care via increased filtering as the best possible punishment.
|
# ? Nov 4, 2020 23:48 |
|
Realistically, if you're counting on phishing training to keep your users from getting owned, you've given up the game. Training reduces incident volume and lets your hunt team get signatures to look for, nothing more. There will always be users who screw up. Many times it's not even their fault when they do so. The thought that someone who is otherwise good at their job would get fired for failing a phishing test is insane. No user can be perfect. If your posture relies on people being perfect, you've failed as a security org.
|
# ? Nov 5, 2020 04:59 |
|
"your hunt team" Is that something like .00001% of orgs have or something? Like, can you count the hunt teams in the united states on two hands?
|
# ? Nov 5, 2020 05:03 |
|
The hell's a hunt team? Some sort of offensive cybersec dudes? Why would a business want that kind of liability?
|
# ? Nov 5, 2020 05:07 |
|
I think it's that team Dick Cheney was on
|
# ? Nov 5, 2020 05:20 |
|
Defenestrategy posted:The hell's a hunt team? Some sort of offensive cybersec dudes? Why would a business want that kind of liability? I think MS has one, that’s the only example I can think of off the top of my head. E: https://en.m.wikipedia.org/wiki/Microsoft_Digital_Crimes_Unit The Fool fucked around with this message at 05:33 on Nov 5, 2020 |
# ? Nov 5, 2020 05:22 |
|
The team that looks for poo poo, usually because an alert fires. The analysts in the SOC count.
|
# ? Nov 5, 2020 05:31 |
|
Mandiant, Microsoft, Amazon, Google, even Symantec is running one these days
Potato Salad fucked around with this message at 10:47 on Nov 5, 2020 |
# ? Nov 5, 2020 05:53 |
|
Hunting is when you look for malicious activity based on certain indicators, without necessarily having any indication from an alert or somesuch that you've been compromised. Usually you have a bunch of IoCs* for one or more intrusion sets that you want to focus on, and you go digging inside your network for them. *Indicators of Compromise, so file hashes, IP addresses, Yara rules, Snort rules, certain event log codes, etc.
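A minimal sketch of that kind of indicator sweep, assuming telemetry has already been exported as a list of dicts. The field names ("file_hash", "dest_ip", "host") and the sample records are made up for illustration; real EDR/SIEM schemas vary.

```python
# Crude IoC sweep: match telemetry records against known-bad hashes and IPs.
# Indicator values are placeholders (the hash is the widely published MD5 of
# the EICAR test file; the IP is from the TEST-NET-3 documentation range).
BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}
BAD_IPS = {"203.0.113.7"}

def hunt(records):
    """Return records whose file hash or destination IP matches an IoC."""
    hits = []
    for rec in records:
        if rec.get("file_hash") in BAD_HASHES or rec.get("dest_ip") in BAD_IPS:
            hits.append(rec)
    return hits

telemetry = [
    {"host": "ws01", "file_hash": "44d88612fea8a8f36de82e1278abb02f", "dest_ip": "10.0.0.5"},
    {"host": "ws02", "file_hash": "deadbeef", "dest_ip": "203.0.113.7"},
    {"host": "ws03", "file_hash": "cafebabe", "dest_ip": "10.0.0.9"},
]
matches = hunt(telemetry)  # ws01 (hash hit) and ws02 (IP hit)
```

In practice this is the bottom-of-the-pyramid stuff; a real hunt would lean on behavioral indicators rather than flat hash/IP lists.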
|
# ? Nov 5, 2020 07:23 |
|
Volmarias posted:Most non engineering companies may not even consider information security something they have to worry about. Too small to be a target, they've got some kind of fancy firewall their managed service provider sold them, etc. These kinds of risks are also rarely listened to, and decisions are made based on people's personal fiefdoms. Thanks, I appreciate this detailed explanation that tells me I'm way too deep in the tech bubble and have forgotten how most other industries work. Stuff like checking email headers is pretty much second nature at this point and I just realised that it's incredibly unreasonable to expect Shelly in Accounts Receivable to be able to read through something like that, or salespeople, etc. Impotence fucked around with this message at 09:01 on Nov 5, 2020 |
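For what it's worth, the kind of header sanity check being described can be sketched with Python's stdlib email parser. The sample message, the domains, and the spf=fail convention (in the style of RFC 8601 Authentication-Results headers) are all illustrative:

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical phish: display domain doesn't match the bounce domain,
# and the receiving server recorded an SPF failure.
RAW = """\
From: "IT Support" <helpdesk@example.com>
Return-Path: <bounce@attacker.example.net>
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=attacker.example.net
Subject: Password reset

Click here.
"""

def header_red_flags(raw):
    """Crude phishing heuristics: From/Return-Path mismatch, SPF failure."""
    msg = message_from_string(raw)
    flags = []
    from_dom = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    rp_dom = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    if from_dom and rp_dom and from_dom != rp_dom:
        flags.append("from/return-path mismatch")
    auth = msg.get("Authentication-Results", "")
    if "spf=fail" in auth or "spf=softfail" in auth:
        flags.append("spf failure")
    return flags
```

Which is exactly the point above: nobody in Accounts Receivable is going to do this by hand, so the mail gateway has to do it for them.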
# ? Nov 5, 2020 08:37 |
|
Defenestrategy posted:The hell's a hunt team? Some sort of offensive cybersec dudes? Why would a business want that kind of liability?
|
# ? Nov 5, 2020 09:49 |
|
Well poo poo, I can write a half-assed effortpost on threat hunting to distract myself from politics. A bunch of people have hit on a lot of the basics for what threat hunting is and all that - basically proactively going out and looking through your security telemetry for indicators of malicious activity. It's kind of the inverse of typical secops, where you're playing whack-a-mole with alerts. Hunting can be a really formal thing, where you'd have a dedicated team attached to your SOC, or something informal, like one person getting inquisitive on a Friday afternoon for shits and grins. Most security teams are so strapped for resources and have so many other fundamental problems that hunting is not really a thing. But it's a loving fantastic buzzword, so everyone thinks it's right for them and vendors love to talk about how they can help you do it. In bigger enterprise SOC environments where you've got a reasonably mature security program, you may see a division carved off as a formal hunt team. If you contract out your front line security monitoring to an MSSP, then they may claim to do some of this too. The academic idea is basically that you'd take a hypothesis for what a bad guy would do on your network, and then, using whatever tools are at your disposal, you'd look for indicators of malicious activity consistent with your hypothesis. If you go back to the MITRE ATT&CK discussion from a few pages back, that's a pretty common point of reference for this, and this is one of the more common applied uses of the framework. So it's kind of meant to be a bit of a scientific process, which is cool as a concept, and it's way more engaging than waiting for alerts to roll in passively. It also pairs up with some of the concepts laid out in David Bianco's Pyramid of Pain model, which is worth a read. Anyway, with how all of this works, there's an unspoken expectation that you'd have access to legit threat intel too.
Not ephemeral poo poo like IP and hash lists (the bottom of the pyramid model!) but more of the ATT&CK-like stuff. If you have reliable threat intel (and let's be clear: few do), then they can say "hey, we have high confidence that criminal actors are targeting companies like ours to collect trade secrets, and these are the tactics that have been observed that are consistent with these actors". So the idea would be that your hunters could take that intel and then start looking for all of those tactics - for example, maybe this is rudimentary poo poo like looking for evidence of passing the hash in Windows event logs. And if they find something malicious, then your incident response processes (basically everything after the alert) would kick in to deal with the threat. I'm oversimplifying a bunch. Realistically, you can hunt using pretty much anything, from looking at your AV logs to running WMI queries across the network and analyzing the output. I'm a shithead consultant so I see a bit of everything; most of the time the most common approach I see combines an EDR product (Carbon Black or CrowdStrike, Defender ATP, etc.) and a decent SIEM. The SIEM part is still sometimes a bitch because log sources are pretty inconsistent - not collecting logs from user workstations results in a massive visibility gap, which is not great when the bad guys target users first. Sysmon events fed into an ELK stack is also a really great one. I've also seen some really awesome stuff being done with tools like osquery and Kolide too.
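To make the pass-the-hash example concrete: that hunt is usually approached as a filter over 4624 logon events. This is a rough sketch over pre-parsed event dicts; the field names (EventID, LogonType, AuthenticationPackageName, TargetUserName) mirror real Windows Security log fields, but the heuristic itself is deliberately crude and not a vetted detection rule.

```python
def possible_pth(events):
    """Flag 4624 logons that look NTLM-over-the-network-ish.

    Crude heuristic only: EventID 4624, logon type 3 (network), NTLM
    auth package, excluding ANONYMOUS LOGON noise. A real detection
    needs far more context (account baselines, source hosts, etc.).
    """
    hits = []
    for ev in events:
        if (
            ev.get("EventID") == 4624
            and ev.get("LogonType") == 3
            and ev.get("AuthenticationPackageName") == "NTLM"
            and ev.get("TargetUserName", "").upper() != "ANONYMOUS LOGON"
        ):
            hits.append(ev)
    return hits

# Synthetic sample events for illustration:
sample = [
    {"EventID": 4624, "LogonType": 3, "AuthenticationPackageName": "NTLM", "TargetUserName": "shelly"},
    {"EventID": 4624, "LogonType": 3, "AuthenticationPackageName": "Kerberos", "TargetUserName": "jeff"},
    {"EventID": 4624, "LogonType": 3, "AuthenticationPackageName": "NTLM", "TargetUserName": "ANONYMOUS LOGON"},
]
suspects = possible_pth(sample)
```

The interesting part of the hunt isn't the filter, it's deciding which of the survivors are legacy apps doing NTLM legitimately versus someone replaying a stolen hash.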
|
# ? Nov 6, 2020 03:06 |
|
My follow-up question: what does the path to getting paid to do that look like? Because that sounds fun
|
# ? Nov 6, 2020 03:44 |
|
Defenestrategy posted:My follow-up question: what does the path to getting paid to do that look like? Because that sounds fun keyword "analyst". has overlap with forensics and general detection & response paths, as well. IME a lot of places have people that do the things that I'm comfortable calling threat hunting, but it's generally part of broader analyst/d&r positions Oct's right that having actual threat intel is helpful when you're looking at legit nation-states and big organized crime stuff, but it's not insane for even small orgs to have non-conclusive IOCs fire that occasion someone to go look (as opposed to mostly foolproof IOCs, where it's basically "you're owned, survey the damage and reformat" every time) e: not to be confused with people with the title "analyst" that are analyzing actual malware samples, though. there's a lot of overloaded terms. a huge portion of infosec jobs are in this space, it's not like some crazy super-niche thing or whatever. of course, jobs that do only hunting are more rare Achmed Jones fucked around with this message at 03:58 on Nov 6, 2020 |
# ? Nov 6, 2020 03:53 |
|
Defenestrategy posted:My follow-up question: what does the path to getting paid to do that look like? Because that sounds fun I'd say that it is part of the progression that comes with working in a really big enterprise SOC (where they don't care about skill so much as your ability to close alerts quickly) or working for a MSSP as an analyst in their SOC, sucking the correct amount of management butthole, and proving you're competent. Typically hunters are the people in the SOC who have shown they are good at using the available tools and just generally know their poo poo. Getting past the automated HR keyword screening is probably the biggest hurdle for getting started. You can also start doing this poo poo independently wherever you currently are and start to build a solid resume that way by playing up how you built your employer's threat hunting program from scratch. Recruiters love that poo poo. Outside of that, networking helps a lot. Black Hills Information Security runs a pretty sizable Discord and is a great resource for career development. If you follow them on LinkedIn they sometimes post the link to join (and then delete it later on). The community there is really helpful when you're getting started and there are tons of discussions around building a career and getting further into it.
|
# ? Nov 6, 2020 04:14 |
|
Woops! https://twitter.com/mlqxyz/status/1326223139496468481 tl;dr: Intel's power monitoring, combined with the current implementation of the Linux driver, allows side-channel attacks, albeit at lower bandwidth than an attached oscilloscope.
|
# ? Nov 10, 2020 20:17 |
|
How do you manage phone lines? I’ve started dabbling in OSINT stuff and obviously I’d prefer not to use my personal number. Is a TracFone the way to go? Are there free services that are recommended?
|
# ? Nov 10, 2020 22:56 |
|
Looking for some sort of tool to help manage policy compliance. Something where we can specify some baseline policies, and track the compliance of vendors and subsidiaries and be able to generate reports for management. Anything like that out there? I feel something like this should exist, but my google is failing me.
|
# ? Nov 10, 2020 23:49 |
|
The Fool posted:Looking for some sort of tool to help manage policy compliance. Something where we can specify some baseline policies, and track the compliance of vendors and subsidiaries and be able to generate reports for management. Like are you talking about document management or something else? Security policies? What are you wanting these reports to report exactly?
|
# ? Nov 11, 2020 00:14 |
|
Sickening posted:Like are you talking about document management or something else? Security policies? What are you wanting these reports to report exactly? I was thinking something like an internal policy version of MS's compliance score. My bumbling VP is trying to put together a 'cybersecurity risk and compliance' report for the board.
|
# ? Nov 11, 2020 00:47 |
|
The Fool posted:I was thinking something like an internal policy version of MS's compliance score. My bumbling VP is trying to put together a 'cybersecurity risk and compliance' report for the board. Yes and no. For instance, Microsoft is touting their Cloud App Security service as a CASB. It's really not what I would call a CASB, but it does have certain perks. Under Security Configuration Apps (under connected apps), you can add several Azure tenants and subscriptions, as well as many GCP/AWS accounts, to this service. This gives you the ability to dashboard and create policies against those spaces. It also gives you the ability to create reports, in a limited capacity, to show things that aren't up to baseline. So all your security centers will show up there, as well as their assorted flavors in GCP/AWS. Outside the guardrails you should already have implemented in those spaces, this gives you another layer of alerting, automated response, and dashboarding. I also mention this space because you are probably already licensed for it. There is an issue of managing those baselines for all your separate Azure spaces and different platforms like GCP/AWS (and that is a large conversation). But outside of that, this gives you a central dashboard to connect them all. The reporting isn't super good, but it's good enough to basically automate some reporting showing who isn't managing their poo poo correctly. Configuring guardrails and setting up baselines in these cloud spaces is something I am super passionate about and often something people don't even implement in these spaces. ESPECIALLY Azure.
|
# ? Nov 11, 2020 02:14 |
|
In January I will be the CTO of a 6 person nonprofit where the main product is a webapp connecting low-income high schoolers with free tutoring. In the meantime I’m advising and getting familiar with things. Last night they got their first security researcher reporting a vulnerability that will be disclosed once the researcher finds out if there’s something in it for them, and today I found out BugCrowd starts at $15k/yr. My entire tech budget for next year is $10k. When I google disclosure programs it’s all BugCrowd and HackerOne etc. Anyone have resources for setting up your own responsible disclosure process? Especially for a small org? I was really hoping I’d have a year or two before we started getting these.
|
# ? Nov 11, 2020 03:52 |
|
You just do the things that they do yourself. You handle NDA, triage, payouts, etc. It's not uncommon at all. e: I answered the wrong question, but your best bet is to sign up for one of those and see what the process entails. Then you throw up a page detailing your bug bounty program or whatever (if you want to advertise it, but you probably don't). It's also ok not to pay for bugs if you can't afford it. Just set up security@ and write a page about how to report vulns they find. Set up a hall of fame page or whatever if you want to give public acknowledgment instead of cash. Achmed Jones fucked around with this message at 04:13 on Nov 11, 2020 |
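If you go the security@ route, it is also cheap to publish a machine-findable pointer: the security.txt convention (served from /.well-known/) is what researchers increasingly check first. A minimal example, with placeholder addresses and dates:

```text
# Served at https://example.org/.well-known/security.txt
Contact: mailto:security@example.org
Expires: 2021-12-31T00:00:00Z
Acknowledgments: https://example.org/security/thanks
Preferred-Languages: en
```

One static file plus a monitored mailbox covers most of what a paid platform's intake process does for a six-person org.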
# ? Nov 11, 2020 04:11 |
|
This is pretty normal. I would not expect a low income helping nonprofit to have a hackerone program, but I would also appreciate them answering security@. I'd also consider people trying to extort a bug bounty from a low income helping nonprofit to be assholes.
|
# ? Nov 11, 2020 04:28 |
|
Anyone know of a trustworthy-ish ~$5/month paid SOCKS/HTTP(S) proxy I could use to change the IP my traffic comes from? Public proxies are worthlessly slow. I have several web scrapers running on AWS scraping government data. One very small government website completely blocks AWS. Like... can't even go to the root domain. I don't want to configure a VPN just for this one site, and a VPN would likely invite hard-to-diagnose errors from AWS.
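Not an endorsement of any particular provider, but on the plumbing side: requests can talk to a SOCKS endpoint directly once the PySocks extra is installed (pip install "requests[socks]"). The helper below just builds the proxies mapping; the hostname, port, and credentials in the usage comment are placeholders.

```python
def socks_proxies(host, port, user=None, password=None):
    """Build a requests-style proxies mapping for a SOCKS5 endpoint.

    socks5h:// resolves DNS through the proxy too, which matters if the
    target blocks by resolver as well as by source IP.
    """
    auth = f"{user}:{password}@" if user else ""
    url = f"socks5h://{auth}{host}:{port}"
    return {"http": url, "https": url}

# Usage sketch (requires requests with the [socks] extra; placeholder endpoint):
# import requests
# session = requests.Session()
# session.proxies.update(socks_proxies("proxy.example.net", 1080, "me", "secret"))
# resp = session.get("https://smalltown.example.gov/data")
```

Keeping it session-scoped like this means only the one blocked site's scraper goes through the proxy, so the rest of the AWS-hosted scrapers are untouched.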
|
# ? Nov 11, 2020 15:54 |
|
CarForumPoster posted:Anyone know of a trustworthyish ~$5/month paid socks/HTTP proxy I could use to change the IP my traffic is from? Public proxies are worthlessly slow. They're probably blocking it by ASN and might block the VPN too. Are you also considering whether the traffic is from that government's jurisdiction? Gonna say Mullvad, though.
|
# ? Nov 11, 2020 15:56 |
|
Achmed Jones posted:It's also ok not to pay for bugs if you can't afford it. Just set up security@ and write a page about how to report vulns they find. Set up a hall of fame page or whatever if you want to give public acknowledgment instead of cash. Biowarfare posted:I would not expect a low income helping nonprofit to have a hackerone program, but I would also appreciate them answering security@. This. You can argue for/against BBounties forever but the fact of the matter is, if a researcher discloses irresponsibly citing your lack of BB setup as the reason, they're a giant rear end in a top hat.
|
# ? Nov 11, 2020 16:01 |
|
|
The Fool posted:Looking for some sort of tool to help manage policy compliance. Something where we can specify some baseline policies, and track the compliance of vendors and subsidiaries and be able to generate reports for management. I asked a friend who works in compliance and she mentioned AuditBoard and LogicGate. The latter her team uses for risk management, but it has a policy compliance module. Overall she's been happy with LogicGate but had a hard time getting non-infosec folks to adopt it, so they're currently trying to see if they can recreate the workflows in Jira.
|
# ? Nov 11, 2020 16:14 |