22 Eargesplitten
Oct 10, 2010



We do have AWS, could put it on there I guess.


Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day
I work for a company that has a product competitive with Splunk and others. Is it bad form to talk about how it compares to others here?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Lucid Nonsense posted:

I work for a company that has a product competitive with Splunk and others. Is it bad form to talk about how it compares to others here?

gently caress Splunk, go for it.

evil_bunnY
Apr 2, 2003

Lucid Nonsense posted:

I work for a company that has a product competitive with Splunk and others. Is it bad form to talk about how it compares to others here?
Maybe if it weren't Splunk. Bonus points if you're at least semi-candid about its design limitations.

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day
Scalability is an issue the other products have that we don't.

(If anyone has experience that contradicts the info above, please let me know. We want this to be as accurate as possible.)

If you're already pot-committed with Splunk, we can deduplicate the info you're sending to it, reducing the number of Splunk servers required and cutting the license costs. We have a few customers who have saved enough on Splunk to pay for our software and still see a significant net cost reduction.

As a standalone product, we have triggers, automations, alerting, and easy-to-build dashboards and reports, and setup takes about 15 minutes.

I manage the development team and customer service, so I can answer any questions anyone has.

22 Eargesplitten
Oct 10, 2010



For those that have used Hashivault, how many man-hours did it take to implement? Were you using Enterprise or open source?

If Hashivault could be stored either on our network or Amazon’s butt, and the open source one met our needs and was low maintenance, there’s actually a chance that they would use it. Looks like the AD credential login method is available on the open source version?

I see the Enterprise version also supports MFA for LDAP; is that purely SMS, or does it have options like Google’s MFA keygen app that I forget the name of?

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

evil_bunnY posted:

Maybe if it weren't Splunk. Bonus points if you're at least semi-candid about its design limitations.

I'll be as candid as I can without pissing off the boss.

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

Lucid Nonsense posted:

Scalability is an issue the other products have that we don't.

(If anyone has experience that contradicts the info above, please let me know. We want this to be as accurate as possible.)

If you're already pot-committed with Splunk, we can deduplicate the info you're sending to it, reducing the number of Splunk servers required and cutting the license costs. We have a few customers who have saved enough on Splunk to pay for our software and still see a significant net cost reduction.

As a standalone product, we have triggers, automations, alerting, and easy-to-build dashboards and reports, and setup takes about 15 minutes.

I manage the development team and customer service, so I can answer any questions anyone has.

Can your product fork data to a third party, or is it the type where once you send data to it, that's it?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Lucid Nonsense posted:

Scalability is an issue the other products have that we don't.

(If anyone has experience that contradicts the info above, please let me know. We want this to be as accurate as possible.)

If you're already pot-committed with Splunk, we can deduplicate the info you're sending to it, reducing the number of Splunk servers required and cutting the license costs. We have a few customers who have saved enough on Splunk to pay for our software and still see a significant net cost reduction.

As a standalone product, we have triggers, automations, alerting, and easy-to-build dashboards and reports, and setup takes about 15 minutes.

I manage the development team and customer service, so I can answer any questions anyone has.

So I know plenty of products make similar claims that they can ingest many TB/day on a small amount of hardware, but that's typically because they aren't doing the indexing processing up front like Splunk does. And that means you're just dumping the data down an archival disk hole and queries will take forever. Performance characteristics for search speed would be more important to me than ingest.
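A toy contrast of the two approaches, in Python (made-up data, purely to show why ingest benchmarks alone mislead):

code:
# Index-at-ingest vs. dump-and-scan: same data, very different search costs.
events = [f"host{i} disk error" if i % 1000 == 0 else f"host{i} ok"
          for i in range(100_000)]

# Splunk-style: pay the indexing cost up front at ingest time.
index = {}
for pos, line in enumerate(events):
    for word in line.split():
        index.setdefault(word, []).append(pos)
hits_indexed = index.get("error", [])   # near-instant lookup at search time

# Dump-style: ingest is nearly free, but every query is a full scan.
hits_scanned = [p for p, line in enumerate(events) if "error" in line]

assert len(hits_indexed) == len(hits_scanned)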

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Also I think your Splunk sizing numbers are way off. We were doing 500GB/day with 4 search heads and 4 indexers. And those were 5 years old and fairly dinky boxes.

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

Lain Iwakura posted:

Can your product fork data to a third party, or is it the type where once you send data to it, that's it?

Here's how event reception works for us:

Event comes in -> parsermodule -> forwardermodule -> indexing/storage/trigger actions

So, we can forward incoming events before they go into storage. Getting at the data after it's stored would require exporting from the UI or using API calls. The parsermodule lets you rewrite info in events, or use our event enrichment to add useful info like device IDs, device location, etc., before the event is forwarded.
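To make that concrete, the receive path looks roughly like this. A sketch only, with illustrative module, field, and variable names rather than our actual internals:

code:
# Sketch of the receive path: parse -> enrich -> forward -> store/trigger.
stored_events = []       # stands in for indexing/storage
forwarded_events = []    # stands in for the forwardermodule's destinations
DEVICE_DB = {"sw01": {"device_id": 42, "location": "rack 3"}}

def parse(raw_event):
    # parsermodule: structure the raw line; fields can be rewritten here
    host, _, message = raw_event.partition(" ")
    return {"host": host, "message": message}

def enrich(event):
    # event enrichment: add device ID, location, etc. before forwarding
    event.update(DEVICE_DB.get(event["host"], {}))
    return event

def receive(raw_event, triggers=()):
    event = enrich(parse(raw_event))
    forwarded_events.append(event)   # forwarded *before* it hits storage
    stored_events.append(event)      # searchable from this point on
    for matches, action in triggers:
        if matches(event):
            action(event)            # e.g. send the alert email

receive("sw01 disk error on /dev/sda",
        triggers=[(lambda e: "disk error" in e["message"],
                   lambda e: print("ALERT:", e))])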

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

BangersInMyKnickers posted:

So I know plenty of products make similar claims that they can ingest many TB/day on a small amount of hardware, but that's typically because they aren't doing the indexing processing up front like Splunk does. And that means you're just dumping the data down an archival disk hole and queries will take forever. Performance characteristics for search speed would be more important to me than ingest.

Ours is all real time. When an event comes in, it's searchable in the UI immediately. When you create an alert for, say, disk errors, an email is sent when the event is ingested (or within milliseconds). For higher-scale environments, it does take some pretty beefy hardware.

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

BangersInMyKnickers posted:

Also I think your Splunk sizing numbers are way off. We were doing 500GB/day with 4 search heads and 4 indexers. And those were 5 years old and fairly dinky boxes.

That is possible. We based our sizing on what data we could get from existing customers who were moving away from Splunk or reducing its footprint. It's possible their environments were overbuilt, so I'd like to hear about anyone else's experience here.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Lucid Nonsense posted:

Ours is all real time. When an event comes in, it's searchable in the UI immediately. When you create an alert for, say, disk errors, an email is sent when the event is ingested (or within milliseconds). For higher-scale environments, it does take some pretty beefy hardware.

Yeah, that makes sense for dashes if that's all you need it for. Our IR people want to dig through months of data after an identified incident, and that needs fast disk to shovel all that data back into memory for analysis. Are you able to do that with the appropriate hardware, or is data essentially archived once it falls out of RAM?

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

BangersInMyKnickers posted:

Yeah, that makes sense for dashes if that's all you need it for. Our IR people want to dig through months of data after an identified incident, and that needs fast disk to shovel all that data back into memory for analysis. Are you able to do that with the appropriate hardware, or is data essentially archived once it falls out of RAM?

Live data and archive retention are user configurable (I think the defaults are 7/365 days). You can restore archived data from the command line with something like "logzilla archive restore --from-date 2017-08-11 --to-date 2017-08-12" (or using unix timestamps for more specific ranges). That puts it back into live data, and it will be re-archived when the autoarchive runs overnight. How much you can restore depends on your hardware, of course.
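If you need a window tighter than whole days, unix timestamps are just epoch seconds, which you can compute with anything handy; for example, in Python (illustration only; check the CLI's help output for the exact flag names it expects):

code:
# Compute epoch-second bounds for a precise restore window (illustrative).
from datetime import datetime, timezone

start = datetime(2017, 8, 11, 14, 30, tzinfo=timezone.utc)
end = datetime(2017, 8, 11, 16, 0, tzinfo=timezone.utc)
print(int(start.timestamp()), int(end.timestamp()))  # 1502461800 1502467200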

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Lucid Nonsense posted:

Scalability is an issue with the other products that we don't have.



(If anyone has experience that contradicts the info above, please let me know. We want this to be as accurate as possible)

If you're already pot committed with Splunk, we can deduplicate the info you're sending to it, reducing the number of Splunk servers required, and cutting the license costs. We have a few customers who have saved enough on Splunk to pay for our software and still see a significant net cost reduction.

As a standalone product, we have triggers, automations, alerting, easy to build dashboards and reports, and set up takes about 15 minutes.

I manage the developement team and customer service, so I can answer any questions any has.

your mention of "deduplication" scares me. in a court case i need to return exact logs and not reconstructed data. i know zip algos are just about doing the same thing but those are not streams and won't change when created.

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

EVIL Gibson posted:

your mention of "deduplication" scares me. in a court case i need to return exact logs and not reconstructed data. i know zip algos are just about doing the same thing but those are not streams and won't change when created.

It's configurable. The default is a 60-second window, but it can be set to 0 to disable it.
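Conceptually, the window works like this. A rough sketch, not our actual implementation; duplicates inside the window are suppressed, and a window of 0 stores everything verbatim:

code:
# Sketch of time-window deduplication (illustrative only).
import time

class Deduper:
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.last_seen = {}   # event key -> (first timestamp, count)

    def accept(self, event_key, now=None):
        if self.window == 0:
            return True                    # dedup disabled: store everything
        now = time.time() if now is None else now
        ts, count = self.last_seen.get(event_key, (None, 0))
        if ts is not None and now - ts < self.window:
            self.last_seen[event_key] = (ts, count + 1)
            return False                   # duplicate: only the count changes
        self.last_seen[event_key] = (now, 1)
        return True                        # first occurrence in this window

d = Deduper(window_seconds=60)
print(d.accept("sw01: link down", now=0))    # True  (stored)
print(d.accept("sw01: link down", now=30))   # False (suppressed)
print(d.accept("sw01: link down", now=90))   # True  (window expired)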

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

Lucid Nonsense posted:

Live data and archive retention are user configurable (I think the defaults are 7/365 days). You can restore archived data from the command line with something like "logzilla archive restore --from-date 2017-08-11 --to-date 2017-08-12" (or using unix timestamps for more specific ranges). That puts it back into live data, and it will be re-archived when the autoarchive runs overnight. How much you can restore depends on your hardware, of course.

This sounds really janky for our incident response team. They typically want to treat older data as if it were fresh, with the understanding that it's stored on slower but larger disks: "cold storage" if you must. Having to explicitly restore the data into the live environment will be frustrating.

Even with good teams, detecting an incident can take up to, or even exceed, six months. If you're telling them they must unarchive the data (and presumably wait) in order to figure out what is going on, that isn't very effective.


Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

Lain Iwakura posted:

This sounds really janky for our incident response team. They typically want to treat older data as if it were fresh, with the understanding that it's stored on slower but larger disks: "cold storage" if you must. Having to explicitly restore the data into the live environment will be frustrating.

Even with good teams, detecting an incident can take up to, or even exceed, six months. If you're telling them they must unarchive the data (and presumably wait) in order to figure out what is going on, that isn't very effective.

My dev team and I agree. We've told management that it would be fairly easy to implement an 'Include Archives' option in searches for this, but they push back that it would hurt perceptions of our performance. If a customer requested it, we'd have more ammo to argue back.
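The shape of what we pitched is basically a per-search flag, something like this (illustrative names, not a shipping API):

code:
# Sketch of an opt-in 'Include Archives' search flag (illustrative only).
# The default path stays fast; archive search is an explicit, slower choice.
def search(query, live_index, archive_store, include_archives=False):
    results = [e for e in live_index if query in e["message"]]
    if include_archives:
        # slower path: the user knowingly opted in to scanning cold data
        results += [e for e in archive_store if query in e["message"]]
    return results

live = [{"message": "disk error on sw01"}]
cold = [{"message": "disk error on sw07 (archived)"}]
print(search("disk error", live, cold))                         # live only
print(search("disk error", live, cold, include_archives=True))  # full dataset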

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Competing on performance is a trap that way, yeah.

IME the more separate you can make the tasks or scenarios that have very different performance expectations, the easier it is to keep them separate in users' and customers' thinking. You might even run a second head in "investigation mode" that searches archives, so that regular users aren't just a checkbox away from a slow experience. (And you'd want to make archive search as fast as possible too, of course.)

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

Lucid Nonsense posted:

My dev team and I agree. We've told management that it would be fairly easy to implement an 'Include Archives' option in searches for this, but they push back that it would hurt perceptions of our performance. If a customer requested it, we'd have more ammo to argue back.

I couldn't evaluate your product for this very reason.

For me to have my IR team switch from having on-demand data to having to request an archive that may or may not go far enough back would be highly problematic. Also, for legal reasons I would want zero deduplication; even the hint that the tool can do it easily or by mistake would throw my entire legal and risk team into a tizzy, and I like to avoid meetings with them when possible.

I need to keep IR as simple as possible because they're usually working on a time crunch, have too many people sticking their noses into their business, and want to work with systems that deliver data when they need it. I couldn't suggest that your product is ready for a DFIR team, but it's probably fine for someone who just needs operational data. I don't see it as at all beneficial to security until this archiving situation is sorted.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Lucid Nonsense posted:

My dev team and I agree. We've told management that it would be fairly easy to implement an 'Include Archives' option in searches for this, but they push back that it would hurt perceptions of our performance. If a customer requested it, we'd have more ammo to argue back.

I can tell you conclusively that you would have been ruled out for not meeting requirements if you had been on our RFP. Seamlessly (or nearly seamlessly) searching through the full dataset is a hard requirement.

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender
I would seriously suggest working with a few different IR teams who use different products to see how they use these tools, as it'll demonstrate how incapable your product is in the security space.

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

Lain Iwakura posted:

I would seriously suggest working with a few different IR teams who use different products to see how they use these tools, as it'll demonstrate how incapable your product is in the security space.

Thanks for the feedback. See any other fatal flaws?

I'm going to take everyone's comments to the CEO and push to implement archive searching. If they agree, we could probably have it in production in 4-6 weeks.

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

Lucid Nonsense posted:

Thanks for the feedback. See any other fatal flaws?

I'm going to take everyone's comments to the CEO and push to implement archive searching. If they agree, we could probably have it in production in 4-6 weeks.

I know I am nitpicking but your website is terrible.

[screenshot of the site]

As much as I hate LogRhythm, I can at least say their website is functional, tells me what the product does right off the bat, and most importantly doesn't have a chat window popping up the moment I view the site:

[screenshot of LogRhythm's site]

If I view Splunk's, they don't even have a chat applet:

[screenshot of Splunk's site]

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Chat applet aggressiveness likely correlates inversely with how mature the outside sales organization is.

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

Subjunctive posted:

Chat applet aggressiveness likely correlates inversely with how mature the outside sales organization is.

Outside sales is how I gauge what my time with a vendor will be like. I've had salespeople banned from my company due to their poor behaviour elsewhere.

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day
My involvement with sales/marketing is limited to doing product demos and providing support, fortunately. I'll mention the chat thing, though. It was just added to the site last week.

MF_James
May 8, 2008
I CANNOT HANDLE BEING CALLED OUT ON MY DUMBASS OPINIONS ABOUT ANTI-VIRUS AND SECURITY. I REALLY LIKE TO THINK THAT I KNOW THINGS HERE

INSTEAD I AM GOING TO WHINE ABOUT IT IN OTHER THREADS SO MY OPINION CAN FEEL VALIDATED IN AN ECHO CHAMBER I LIKE

22 Eargesplitten posted:

What’s a good on-site password manager (preferably one capable of storing extra information related to the accounts) that allows multiple login accounts? Access logging would be a nice feature too.

My company stores admin account information on SharePoint for servers that contain extremely valuable client data. I’m not sure whether that is better or worse than it sounds to me, but it doesn’t sound good. That’s potentially millions in fines, and even more from legal fees, damages from lawsuits, and a destroyed reputation if someone gets onto the SharePoint and takes the credentials.

Granted, you need an account that can access the servers hosting the VMs, but still.

My old company used AuthAnvil; I think it's relatively cheap, and they also have a 2FA offering.

geonetix
Mar 6, 2011


Lain Iwakura posted:

Outside sales is how I gauge what my time with a vendor will be like. I've had salespeople banned from my company due to their poor behaviour elsewhere.

My favorite salesperson keeps spilling the beans about his other customers. It’s great. He was really upset when we told him he wasn’t welcome at our meetings anymore.

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

geonetix posted:

My favorite salesperson keeps spilling the beans about his other customers. It’s great. He was really upset when we told him he wasn’t welcome at our meetings anymore.

I had one go on about a former colleague of mine and how hot she is. Telling a lesbian that her former coworker is hot is pretty loving gross, so I requested he never return.

Another was a former coworker of mine who was constantly hostile towards my boss (and was a complete prick back when we worked together) and who later found himself banned. He snuck into my office via another sales meeting, and now his employer is not welcome to submit RFPs.

Judge Schnoopy
Nov 2, 2005

dont even TRY it, pal

Lain Iwakura posted:

I know I am nitpicking but your website is terrible.

Not only do I never want the 'world's first' anything, network event orchestration platforms definitely already exist.

I wouldn't give your product a second look if you claim to be the only one in a space that is well populated. Head in the sand, oblivious, untested in the real world, etc.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

YosCrossPoastin

BangersInMyKnickers posted:

If anyone needs to push this manually via reg keys for non-GPO systems:

NLA Required:
reg add "HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 1 /f

128-bit encryption only:
reg add "HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v MinEncryptionLevel /t REG_DWORD /d 3 /f

TLS 1.0 only:
reg add "HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v SecurityLayer /t REG_DWORD /d 2 /f

Encrypted RPC Calls:
reg add "HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v fEncryptRPCTraffic /t REG_DWORD /d 1 /f

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

Judge Schnoopy posted:

Not only do I never want the 'world's first' anything, network event orchestration platforms definitely already exist.

I wouldn't give your product a second look if you claim to be the only one in a space that is well populated. Head in the sand, oblivious, untested in the real world, etc.

I'm not a fan of the marketing, but they're trying to define us as something different from the rest of the pack. When people ask me, I say we're a logging tool with extra features. Basically, we're a step above network automation, but not at the level of orchestration.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
I may have alluded to it before but I also work for a company that sells a SaaS SIEM with a focus on ID&R as well as a few other products. I don't work on the product but the SIEM engineering team is colocated with my team and I used to support their ingestion and analytics pipeline in my previous role.

Not sure whether I want to name the company but you can probably figure it out based on my posting history.

AMA if you're curious. DM me if you want more info.

Beccara
Feb 3, 2005
SaaS SIEM is something I'm having to look into now; I'm working at an MSP with a little under 150 high-value clients. We tried AlienVault (total sale: USD 183,320.00), and even under their MSP model it was $20k+/mth. Does your company have something priced to scale down well enough?

Mustache Ride
Sep 11, 2001



22 Eargesplitten posted:

For those that have used Hashivault, how many man-hours did it take to implement? Were you using Enterprise or open source?

If Hashivault could be stored either on our network or Amazon’s butt, and the open source one met our needs and was low maintenance, there’s actually a chance that they would use it. Looks like the AD credential login method is available on the open source version?

I see the Enterprise version also supports MFA for LDAP; is that purely SMS, or does it have options like Google’s MFA keygen app that I forget the name of?

I've done a 3-cluster Enterprise deployment on prem and it took a month to stand up, tear down, add features, puppetize (seriously, don't do this), and all the rest.

If I had a choice I would have stood it up in AWS with Hashi's Terraform AWS module for Vault, configured all the poo poo for the dev teams, and probably been out in like 2 weeks max. Don't do on prem; Hashi has all the tools to stand it up in AWS with no problem.

Enterprise is needed if you want to do MFA (Okta, Duo, PingID), use Namespaces, or do cluster replication. Otherwise go open source.
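If you want to sanity-check the AD/LDAP login path on the open source version before committing, hvac makes it a couple of lines from Python. A minimal sketch, assuming a recent hvac and that the LDAP auth method is already enabled and pointed at your AD on the server side; the URL and credentials are placeholders:

code:
# Sketch: verify Vault's LDAP (AD) auth from Python with hvac.
# Placeholders throughout; assumes 'vault auth enable ldap' and the
# auth/ldap/config setup were already done on the server.
import hvac

client = hvac.Client(url="https://vault.example.com:8200")
resp = client.auth.ldap.login(username="jdoe", password="hunter2")
client.token = resp["auth"]["client_token"]  # belt and braces; hvac usually sets it
print(client.is_authenticated())             # True if the AD bind worked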

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
The Microsoft Sentinel demo did not go well, unfortunately. :(

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Beccara posted:

SaaS SIEM is something I'm having to look into now; I'm working at an MSP with a little under 150 high-value clients. We tried AlienVault (total sale: USD 183,320.00), and even under their MSP model it was $20k+/mth. Does your company have something priced to scale down well enough?

Sadly I have no idea as I have no exposure to our pricing in any way. If you're curious, though, DM me your email and I'll pass it along to our sales folks.


CLAM DOWN
Feb 13, 2007




CommieGIR posted:

The Microsoft Sentinel demo did not go well, unfortunately. :(

F
