Space Gopher
Jul 31, 2006

PBS posted:

I get what you're saying (maybe), but in this case the password itself is 2fa to get.

If I'm answering my own question I'd say no, the weblogin itself isn't 2fa because you're just entering a single passcode to authenticate. (Even though the passcode itself is 2fa to get)

I'm wondering if this kind of login would be considered secure overall, and what any obvious pitfalls to it would be.

Whether "this kind of login" is secure depends on the infrastructure surrounding it. RSA's implementation is secure 2FA, but it would be possible to present something that looks almost identical to the user that is not 2FA.

With the SecurID setup, the password/PIN/thing-you-know is managed by the server. The token is independent and has no relationship to it. Enter a wrong PIN, and the only way to find out is to actually try it against the server. Even if you compromise the token system, and can get or provision a token associated with arbitrary credentials, the system still won't authenticate you without the correct PIN.
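
To make that concrete, here's a minimal sketch of a server-side check where both factors live on the server. This is generic RFC 6238 TOTP plus a PIN check, not RSA's proprietary algorithm, and the credential store is hypothetical:

code:

import hashlib, hmac, struct, time

def totp(seed: bytes, digits: int = 6, step: int = 30) -> str:
    # standard RFC 6238: HMAC the current 30-second window counter
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return f"{code:0{digits}d}"

def authenticate(user: str, pin: str, token_code: str, db: dict) -> bool:
    # both factors are checked server-side: compromising token provisioning
    # still leaves the PIN check standing, and vice versa
    record = db[user]  # hypothetical store: {"pin_hash": ..., "salt": ..., "seed": ...}
    pin_ok = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", pin.encode(), record["salt"], 100_000),
        record["pin_hash"])
    token_ok = hmac.compare_digest(totp(record["seed"]), token_code)
    return pin_ok and token_ok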

But, take something that looks superficially similar from a client perspective: I have an "authenticator" app on my phone, which has a passcode set. I have to enter the passcode to get access to the app, so it might seem just as secure - but the authentication is all contained in my phone. If I can provision another token, the whole system is broken. So, that's not 2FA at all.

Having all the factors wrapped up in one number is a red herring. The same thing ultimately happens when you send an auth request to most any server that's not using challenge/response: all your credentials are wrapped up in a single request, encrypted, and sent along.

The pitfalls are about what you'd expect. PINs aren't very strong passwords, and all the communication happens over a single channel (which isn't inherently a bad thing, but now that "user has an always-on, always connected computer in their pocket, accessible over a different network, which can inform them of login attempts" is a reasonable assumption in many cases, we might as well use it).

Space Gopher
Jul 31, 2006

Sentient Data posted:

So, this is apparently interesting: http://www.apple.com/customer-letter/


vvv: They're limited to numeric passwords? I'm too used to Android where at-rest pops up a full keyboard and at least suggests going beyond just numbers

E2: though actually, why do they allow the OS to be upgraded when the phone is still locked? I thought all data connections were a no-go while the lock screen is up. It seems like part of the firmware should be a hook that says "Forced software upgrade without unlocking the phone? fine, but wipe all user data before writing the new os/firmware"

iOS can do passwords of essentially arbitrary length on a 77-character keyboard. But very few people do it, because unless you've pissed off the NSA, "4-6 digit numeric PIN, 10 attempts erases it" is good enough to secure data on the phone.

The hardware itself can be put into a low-level recovery mode called DFU. This enforces signing requirements, but otherwise lets you write pretty much whatever you want to the firmware. The normal use would be to wipe and restore the firmware even if the OS layer is hosed, but with images of all the software that should be on the device, it wouldn't be too hard to come up with a few strategic jump instructions around "if failed attempts > 10, wipe device" and "if attempts in past few minutes > threshold, delay next attempt".

Space Gopher
Jul 31, 2006

flosofl posted:

Nothing to do with the apologists or nothing to do with Lastpass? Because it has everything to do with Lastpass. It's Lastpass that's generating those glyphs, and it's Lastpass that's being shown in the screenshot on that site.

So what are you saying?

Space Gopher
Jul 31, 2006

NFX posted:

Is LessPass and LastPass the same thing?

No, they're completely different.

LastPass is a conventional password manager which lets you define passwords and encrypts them behind a master key. It's had some serious problems in its implementation, but there's nothing special about its theory of operation.

LessPass is basically hash(domain + username + masterPassword). People come up with this idea every once in a while because it "solves" a lot of known problems with traditional password managers, and they're not experienced or careful enough to see the new, bigger problems it introduces.
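The whole "password manager" fits in a few lines, which is why people keep reinventing it. A sketch of the general shape (not LessPass's actual algorithm, which is fancier about character sets and counters):

code:

import hashlib

def site_password(domain: str, username: str, master: str, length: int = 16) -> str:
    # deterministic: the same three inputs always yield the same password,
    # so there's no vault to store or sync - and also no way to change one
    # site's password without changing the master or bolting on a counter
    digest = hashlib.pbkdf2_hmac(
        "sha256", master.encode(), (domain + username).encode(), 100_000)
    alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789!@#$%"
    return "".join(alphabet[b % len(alphabet)] for b in digest[:length])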

The weird glyphs are a mnemonic device to validate your password. LessPass doesn't ever give you a "your password was wrong" prompt - a different master password is just a different input to the hash function and gives you a different output. So, they give you a different hash function with a limited output set and map it to some icons, and you remember that your password confirmation is "blue building orange heart black car" or whatever. If you don't see those icons then you know your password is wrong.

Of course, that's an information leak. Especially when you provide confirmation for each character as it's entered, which makes it trivial to break by hand. But what else would you expect from a password manager which doesn't ever let you change your password?

Space Gopher
Jul 31, 2006

Boris Galerkin posted:

(On the other hand some companies we worked with did request we have our IT guys encrypt everything onto enterprise grade hard drives and physically mail those as opposed to sending data ~via the cloud~ so it wasn't all bad .)

Yeah, it was.

"The cloud" is just a buzzword for a family of applications and hosted services. It can range from "hey, I put the user passwords and stored credit cards on Dropbox, password is sup3rs3cr3t" to a secure application that happens to be hosted on the same platform Amazon or Google use for their own sensitive information.

If an organization completely rejects integrating with anything hosted in the cloud, it's possible that they're dealing with incredibly sensitive information that can't be trusted to outside systems. If that's what is going on, they should be sending the hard drives handcuffed to people with guns, not through the mail. That's bad. But, it's a lot more likely that somebody, somewhere, heard that "the cloud isn't secure" and now they don't use the cloud, because they believe the foundation of good security is Some Guy Said. That's even worse.

Space Gopher
Jul 31, 2006

Thanks Ants posted:

Are these things accessible because they use UPnP or are people port forwarding?

They're mostly consumer routers.

hobbesmaster posted:

Those AWS connected devices all need to be running a Linux too. At least they offer a C version along with the JavaScript! Except everyone is using the JavaScript. That's why your pet food dispenser needs a cortex A15 more powerful than your typical desktop from 15 years ago to run.

Amazon is now pushing something called "green grass" which allows AWS lambdas to be run directly on the devices or some such. Minimum requirements? At least one Armv6 core at 1GHz, 256MB ram.

Amazon's model for Greengrass stuff is decent. You have a hub with reasonable intelligence that can sustain some functionality if the connection goes down, and acts as a central communication point for all the dumb sensors and whatnot running on the local network. The dumb stuff talks to the hub via purely local MQTT, instead of exposing a buggy http server to the world via UPNP or something. Done right, the sensors can be dumber and lower-powered, the hub is a smaller, better-hardened potential attack surface, and the whole system doesn't poo poo itself when connectivity drops for a minute. Running the same code locally and externally means that it's easy to develop as a single platform, and in a larger commercial/industrial setting, you can tune what runs best locally versus in the cloud with minimal hassle.
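
The sensor side of that model stays tiny. A sketch using the third-party paho-mqtt client; the hub hostname and topic are made up:

code:

import json, time
import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

client = mqtt.Client()
client.connect("greengrass-hub.local", 1883)  # hypothetical hub on the LAN
client.loop_start()

while True:
    # purely local publish - nothing on the sensor is exposed to the internet
    reading = {"device": "feeder-01", "weight_g": 412, "ts": int(time.time())}
    client.publish("local/feeder-01/weight", json.dumps(reading))
    time.sleep(60)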

(meanwhile, back in reality, consumer IOT devices will continue to be put together by absolute poo poo-tier developers copy-pasting outdated versions of bad sample code, commercial systems will continue to be implemented on tight schedules that abandon security as unfinished /*FIXME!*/ tech debt that never leaves the backlog, and like many other AWS services, Greengrass is supposed to provide a convenient path to vendor lock-in - but none of that is really a surprise)

Space Gopher
Jul 31, 2006

hobbesmaster posted:

The problem is that it's the gateways that are misconfigured not the cortex m0 talking to it over ble/zigbee/whatever. Possibly harder is the need to get consumers to use a separate IoT gateway from whatever cheap thing they got from their ISP. The average consumer wifi router doesn't meet the minimum requirements.

Yes - and the selling point here is that Amazon manages the actual communication logistics, including auth and encryption. You write application code and put it on their managed platform. Amazon's not perfect, but they're way better than random poo poo-tier IOT developer #420 who's rollin' their own auth with "if (password == "s|_|pers3cret")..."
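
For contrast, the standard-library version that poo poo-tier developer should have written - a sketch, assuming salted PBKDF2 hashes at rest:

code:

import hashlib, hmac

def check_password(supplied: str, salt: bytes, stored_hash: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", supplied.encode(), salt, 100_000)
    # compare_digest takes the same time no matter where the values differ,
    # unlike == or a hand-rolled strcmp that bails at the first mismatch
    return hmac.compare_digest(candidate, stored_hash)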

As for the hardware, it's kind of a moot point. It's not like the platform software is going to be installed and configured on old routers. It's mainly targeted at commercial operations that are OK with a small appliance to run their cloud enabled HVAC/lighting system or whatever. Any consumer bridge that does adopt it, though, isn't going to have too much trouble. It'll run on a Raspberry Pi Zero, which retails for all of :10bux:.

hobbesmaster posted:

Only in modern processors you'd use in phones. Node.js on a arm9 is an order of magnitude slower than anything else.

99.9% of the time speed won't matter. You can seamlessly call larger-scale services, so you put your dumb "is this even worth caring about?" conditionals and low-latency responses close to the device, and push anything that requires actual work out into the massive compute fabric you're connected to (all at a very low price per gigabyte-second of AWS Lambda time, of course, so cheap that you should never worry about future costs).

Node is lovely for anything complex, but it's easy to write, it's easy to make calls out to the world, and there's a robust API ecosystem for whatever integration you want to do. It's good enough for if-conditional-then-call and queuing up events to POST everything about what you do to gently caress-yo-privacy.net at one hour intervals.

Space Gopher
Jul 31, 2006

Furism posted:

Is there any downside to using TrueCrypt over Veracrypt?

The very first thing on the TrueCrypt project homepage sums it up:

quote:

WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues

This page exists only to help migrate existing data encrypted by TrueCrypt.

VeraCrypt was forked from TrueCrypt, and is still actively developed and tested. TrueCrypt's encryption isn't horribly broken (as far as anyone knows and has publicly disclosed), but there's a known privilege-escalation issue in its Windows drivers that will never be fixed.

Space Gopher
Jul 31, 2006

Thanks Ants posted:

If you put your instance behind CloudFront then you can use a free AWS cert.

Yeah, but then you still need to encrypt the CloudFront->EC2 connection, which happens over the public internet.

AWS's cert management is great if it matches what you want to do, and don't mind the lock-in, but it is very opinionated about where you terminate SSL.

Space Gopher
Jul 31, 2006

Rufus Ping posted:

I'm pretty sure this isn't how DNS works?

How is the additional data being incorporated into the query and what is its (legitimate) purpose? I've never heard of anything like this

You send queries to a special DNS server that knows KNXGKYLLPFCE4UZB.mydomain.com isn't a real subdomain lookup, but a few bytes of encoded data. (Preventing issues with caching, etc., is left as an exercise for the reader.)

Your server returns responses in the TXT record, or as a CNAME pointing at another encoded domain.

The technique is really slow, but most people trust the DNS infrastructure to not do anything nefarious, and it'll happily route traffic for you even if IP-level stuff is blocked.
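
The client end is small. A sketch using the third-party dnspython package (the domain matches the example above; the framing/encoding scheme is made up):

code:

import base64
import dns.resolver  # third-party: pip install dnspython

def exchange(payload: bytes, domain: str = "mydomain.com") -> bytes:
    # pack a few bytes into a DNS label: base32 stays within the legal
    # character set, and a single label can carry at most 63 characters
    label = base64.b32encode(payload).decode().rstrip("=")
    answer = dns.resolver.resolve(f"{label}.{domain}", "TXT")
    # the server smuggles its reply back in the TXT record data
    return b"".join(answer[0].strings)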

Space Gopher
Jul 31, 2006

wyoak posted:

Yeah the client isn't going to be sending the server a TXT field, and if the queries are using common domains I don't even understand how the client and server would be talking to each other (unless the client is sending the DNS packets directly to the server, which seems to negate the biggest advantage of DNS tunneling to begin with).

All DNS messages have the same format, and there's theoretically nothing to stop a client from appending answer sections with TXT records to its request (except that it doesn't make any sense, of course). Back when dinosaurs roamed the earth, this was actually valid under the RFC as an inverse query, although I'm not sure that anyone ever bothered to implement it and it was officially deprecated something like fifteen years ago.

I guess if you control the local client and its local DNS server, it'd be enough for a covert channel that might look kind of like legitimate traffic to somebody who wasn't inspecting too closely.

Space Gopher fucked around with this message at 03:07 on Jul 22, 2017

Space Gopher
Jul 31, 2006

Boris Galerkin posted:

God this seems so much more complicated that it should be. Like are you serious, signing up for the wrong thing can result in me waiving my rights? The gently caress?

I know this is probably all by design to fool people into doing it to help their bottom line but god drat.

Pretty much any time you click on an "I agree to the terms and conditions" checkbox, you're waiving your right to sue in a real courtroom, and your right to join in a class action lawsuit.

The Supreme Court has found that this is perfectly legal, because you freely entered into a contract and you were obviously aware of all the legal implications. If it's attached to, say, the bill payment process for the only electricity provider in the area, well, you can always buy a generator or drop off your payment in person past the alligator moat, right?

Equifax was a rare exception, because they don't have a direct consumer touch point where they can force you to accept those T&Cs. But, well, they're trying their best to thread the needle of "force consumers into giving up as many rights as possible" and "keep some positive PR when actual lawyers begin to actually read those draconian contracts and explain what's going on in them."

Space Gopher
Jul 31, 2006

Methylethylaldehyde posted:

There are SMS to email services you can get, which neatly fixes that issue. No actual human being to sweet talk, no ability to get the sim changed over to a lovely burner phone.

No. There are known vulnerabilities in SMS and call routing systems that can hijack the SMS before it ever gets to your email gateway. It's been used in the wild to hit bank accounts.

Space Gopher
Jul 31, 2006

Avenging_Mikon posted:

There’s highly secure enterprises in Vancouver?

HSBC has their Canadian HQ there, and HSBC's most profitable clientele tends to demand discreet, effective security.

Space Gopher
Jul 31, 2006

apropos man posted:

Would it be possible to have two versions of the kernel: one for Vee-Emming and one for plain desktop/laptop use? I don't wanna lose up to 30% performance.

This is probably a bad idea, but fuckit: hit post.

This is a bad idea.

It looks like VM escapes (or guest-to-host/cross-guest reads, or whatever) are one possibility for this attack. They're getting a lot of attention because so many public-facing services are isolated with VMs. But that doesn't mean they're the only bad thing that can happen. Unless your single-tenant no-VM desktop is air gapped, physically secured so only you can use it, and runs only carefully audited software, you need to be able to isolate unprivileged code from kernel space.

apropos man posted:

Wtf is that supposed to mean? I'd be better off with Win10 on it? Chortle.

Clearly your choice of OS is superior to that of the unwashed masses. Chortle.

Space Gopher
Jul 31, 2006

EVIL Gibson posted:

Edit: Posted a little bit more, but can we get a return of the Turbo button again? It used to bump the processor speed at the risk of a possibility of inaccurate results, compared to the slower speed which was always accurate.

Imagining someone will make a Linux version where you can bind a trigger to run faster in Insecure mode and slow in Secure mode.

This is an incredibly bad idea and nobody's going to do it.

First of all, that's not how the turbo button worked. Turbo on would run the processor at its full rated speed, without any impact to accuracy. Turbo off would downclock it to match an 8088, for legacy applications that needed precise timing (back before anybody had spare cycles to spend checking a software timer).

Second, it'd be incredibly irresponsible to release a kernel with known security flaws, unless you happen to like botnets.

Third, there's not much overlap between use cases expected to see a heavy performance penalty, and use cases that would ever in a million years want to enable "insecure mode." It's not a straight 30% performance hit. It's a performance hit specifically on context switches between privileged and unprivileged space. If you just hang around all day in unprivileged space crunching numbers, you won't see any discernible impact. The most significant hit happens when user code gets chatty with drivers - which is exactly what you'd expect to see in a server. That's exposed to a network. Which should never, ever run "insecure mode" anything.

Space Gopher
Jul 31, 2006

Absurd Alhazred posted:

I'm going to rephrase my question from earlier: regardless of what your program is, isn't it effectively calling some kind of graphics API every frame, meaning 60+ times per second for many video games, but something like 30 times per second at least? Isn't that going to slow down since that's going through the kernel, even if you somehow manage to reduce everything to one big draw call a frame? Heck, your windows manager is doing that all the time. Am I missing something?

Yes, there will be some slowdown. You're probably assuming the performance hit is nastier than it is. 60 times a second really isn't that much in this situation. The scheduler alone will run at a much higher frequency.

More important, context switches from user space to kernel space are already pretty expensive, and they tend to be associated with expensive operations. The Meltdown mitigation adds another step to the process of switching context. That's not nothing, but to get the worst case performance hit, you have to do literally nothing but hand control back and forth between your code and the kernel in a tight loop.

In something like a game, you're going to be doing a lot more computational heavy lifting on both sides of that call, so the overall impact is smaller. In a basic CRUDish web server, on the other hand, your main functionality is computationally cheap validation and mapping, and dispatching external I/O for a lot of small requests (both to the DB/downstream services and to the client). Lots of tiny bits of work handed back and forth between the kernel and user space, and comparatively less pure computation in user space, mean more of a performance hit.
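
You can watch this effect yourself with a trivial micro-benchmark: nothing but user/kernel transitions in a tight loop, which is the pathological case described above:

code:

import os, time

def syscalls_per_second(n: int = 200_000) -> float:
    # os.getppid() is a real syscall with essentially no work attached,
    # so this loop is almost pure context-switch overhead
    start = time.perf_counter()
    for _ in range(n):
        os.getppid()
    return n / (time.perf_counter() - start)

print(f"{syscalls_per_second():,.0f} syscalls/sec")  # compare pre/post mitigation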

Space Gopher
Jul 31, 2006

Thermopyle posted:

Eagerly awaiting patches from Asus for my 5 year old self-built system.


Hahaha, I will be waiting forever.

Microcode updates can go through the OS. Windows updates occasionally include them.

Space Gopher
Jul 31, 2006

Truga posted:

Yeah, if your server just unconditionally 301s all http traffic to https (which it should), I don't think there's a way to downgrade that in any way.

A sophisticated MITM could run a proxy that fetches content from your server over https, modifies it to suppress https links, and serves that to the victim over plain http. No certs mean you just blindly trust DNS to tell you what the authoritative server is. Or, they could just build a fake copy of your site, basically using their control over DNS to enhance a phishing attack.

HSTS makes sure that https connectivity is a latching operation - if you connect to the legit site once, it won't let you be fooled by an HTTP MITM ever again. Since a lot of threats involve rogue access points for mobile devices, it provides a useful enhancement.
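
In practice the server side is one header plus the 301. A minimal WSGI sketch (hypothetical app, not tied to any particular framework):

code:

def app(environ, start_response):
    if environ.get("wsgi.url_scheme") != "https":
        # unconditional upgrade for anyone arriving over plain http
        location = "https://" + environ["HTTP_HOST"] + environ.get("PATH_INFO", "/")
        start_response("301 Moved Permanently", [("Location", location)])
        return [b""]
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        # the latch: any browser that has seen this header refuses plain
        # http to this host for the next year
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ])
    return [b"hello over https\n"]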

Space Gopher
Jul 31, 2006

EssOEss posted:

No auditing - that would indeed be too much of a hassle. If someone infiltrated the system and got my build process to sign their malicious code, I doubt I would ever notice (maybe if Windows Defender catches it by coincidence during the signing process). I accept this risk.


I do not accept this definition. A signature says who the code came from, that is all. What is the logic here? If you draw other implications from this signature, your security model is a bit dubious (though I can accept drawing negative implications from a *lack* of any accepted signature).

A code signing cert is supposed to say that the signing organization has tested and validated a given release. You might not "accept this definition," but the rest of the industry does.

You're turning it into "this was built on a certain build automation server."

On that note, does anybody have good resources on securing Jenkins and friends when they're used for deployment automation in web apps? It's always worried me that a lot of these systems get godlike permissions (especially w/r/t AWS/Azure/GCP accounts!) but tend to have lovely security.

Space Gopher
Jul 31, 2006

bitprophet posted:

Do you mean specifically hardening the Jenkins servers/services themselves, or securing the overall workflow? Your 2nd comment implies you're at least thinking about the latter, in which case you should take a look at secrets management systems like Vault. Having a tool in charge of distributing & rotating secrets, and enforcing that they are on short-lived leases, is a big step up from "meh I just dropped my, or a similarly long-lived, AWS API secret into Jenkins' config, now an attacker gets to be god forever if they break in". Instead, they only get to be god for, say, 15 minutes, or an hour, instead of retaining those privileges for weeks/months until they're ready to leverage them.

Related, it doesn't require use of a secrets store (tho they often make the process easier) but another relatively low hanging fruit option is to follow principle of least privilege and only give Jenkins API keys that do exactly and only what it really needs to do.

You may think "ugh, my deployment needs instance creation, listing, modification and termination, plus all the same for volumes, plus most of that for AMIs, and ... being explicit is too much work, I'm just gonna give it a full admin role." Resisting that temptation and handing out only what you need, means that if Jenkins starts working for the enemy it doesn't have e.g. the ability to assign admin privileges to other users, or destroy backups, or etc. An attacker that can nuke instances is one thing, an attacker that can lock you out of the system or create a quietly unnoticed backdoor is much worse.

I'm talking about securing the overall workflow.

For most of the systems I'm talking about, they're native to one cloud platform provider or another, and they provide decent management infrastructure. For instance, in AWS, I don't bake creds into the system; it gets an IAM role and manages its own temporary credentials (with EC2 systems manager as a pretty decent secret store). And sure, I follow the principle of least privilege and enumerate everything the deployment automation can do - but a deployment automation system, by definition, is going to be able to affect everything I care about.

Jenkins or whatever other orchestration system might not be able to affect its own environment, but it's still going to be able to put code onto what I really care about: the application servers (or configs for "serverless" services, or whatever) that talk to the world, and whatever data stores they're using. The Jenkins environment going down, even if it's totally unrecoverable, isn't that bad an outcome. The real nightmare is somebody using production systems to serve coinhive.js or whatever malware to end users. I'm not in a PCI/HIPAA/whatever space right now, but obviously there are even worse outcomes when you're dealing with sensitive personal data.

Has anybody managed to actually crack this nut in a way that's manageable, lets application developers automate a decent chunk of their own infrastructure deployment (ideally with CFN/ARM/etc type templating), and doesn't give anybody with the ability to crack open an orchestration system godlike powers?

RFC2324 posted:

I don't need EV, I just think everything should be https, so thats cool. Next step is to figure out how to setup lets encrypt on AWS for the people I can't convince to buy a long term cert.

ACM certs are free and rotate themselves seamlessly. As long as you're OK with Amazon lock-in, terminating TLS at the load balancer, and never getting to touch the private key yourself, they are really, really good.

Space Gopher
Jul 31, 2006

EssOEss posted:

I agree absolutely - digital signatures are there to prove who something came from. That's not the claim that was made in the above discussion, though, which was that having a digital signature means that software is "tested" or "validated" and implies something positive security-wise about it.

This is, however, absolute fantasy - just like a lock does not mean to the user that you can shove your password onto the website, signed code does not make it secure. What is trusted is the identity, not the fact that something is signed. You should not install drivers signed by Beanie Babies LLC just like you should not put any passwords into https://facebook.notascammer.ipromise.ru even if it has a pretty green lock.

What you're saying is that you plan to be untrustworthy, and your users should accept that.

The point of a signed binary release is that it says, "this is a legitimate piece of software put out by EssOEss; if you trust that person/company/OU, then you can trust this software."

What you want to do is turn that statement into, "this came from my automated build pipeline, gently caress if I know what's in there, but good luck."

The equivalent in a web context is Facebook allowing people to deploy random poo poo straight from source control to a public-facing server with a *.facebook.com cert and key.

Space Gopher fucked around with this message at 08:24 on Mar 1, 2018

Space Gopher
Jul 31, 2006

EssOEss posted:

I agree.


This seems to be an exaggeration. Of course I know and trust what is in there - why would I not? The mere fact that I do not want to implement some bothersome "user has to manually unlock signing key every 24 hours" process or some bureaucratic auditing scheme (that would become a pointless formality once the person doing it gets fed up) does not mean that my build pipeline is suddenly filled with malware.

I accept the risk that if that happens, it would be hard to notice fast enough, but that does not mean it is going to happen. Indeed, a large part of the reason I accept the risk is that the probability is almost infinitesimally low.

The point is to have a system that's resilient against compromise, at least a little bit.

Say I'm some evil malicious hacker who breaks into your build system from some faraway place. Maybe I'm the guy who owns https://facebook.notascammer.ipromise.ru. I know, you don't think it could ever happen, just like everybody else who's ever been compromised, but I've got control of your build system and a burning desire to steal your identity and compromise your users.

If you have automated code signing with the same cert you use for releases, that's it, I win. I can sign anything I please by pushing it to your build system. As far as the world knows, it's totally legit and you've signed off on it.

If you have automated code signing with an internal-only cert, and a separate system for signing release builds that puts a human in the loop to push a button, then my job gets a whole lot harder. I need to somehow socially engineer someone in your organization into pushing the "release" button when there's not actually a release, or wait and pray that you don't find the intrusion before your next release and whatever secret-ingredient scripting I've dropped into your pipeline works as expected the first time. Maybe I can still do that, but you get another significant chance to stop me before I get to what I want and you get your fifteen minutes of fame on security twitter and Ars Technica.

apseudonym posted:

Don't touch the poop thread

Somebody already did; trustico.com is 503ing.

e: according to the twitter thread it was executing the injected commands as root, too. jesus christ.

Space Gopher fucked around with this message at 16:52 on Mar 1, 2018

Space Gopher
Jul 31, 2006

Rocko Bonaparte posted:

Well I figured out my problem with KeePass was that I never actually saved the database after adding a bunch of stuff, and it didn't seem to prompt me over that when I closed it in my original assessment. After learning to be religious with the CTRL-S combo with it, stuff persists just fine. I'm wondering now if there's an Android app that would sync with a KeePass database kept on my own VPS. That would mean HTTP or preferably SCP. I know DropBox and Google Drive have been supported forever, but I'd rather not use those if I can help it. Has anything updated in the Android space to support SCP? Everything I see is from 2014.

You can set most keepass clients to save the database automatically on change or close.

What's your goal with keeping the file on a VPS? If you're not using AWS, Azure, or GCP, your database is probably more secure on Google Drive than it is on your VPS provider. If you're worried about somebody getting access to your database and brute-forcing it, just use a keyfile that you copy manually between authorized clients.

Space Gopher
Jul 31, 2006

PBS posted:

My company has recently started forcing cache-control headers that turn off all client/server side caching for all of our webapps via our shared apache servers.

Is this common, or is it as dumb as it seems?

If you're only talking about APIs, it might make sense as step zero in figuring out a caching strategy. API response caching gets important at scale, but "don't cache anything" is a safe default while you catalog your endpoints and try to understand what requests you can serve from cache and what needs to hit the backing server every time.

If it includes static content it is the second dumbest possible option; I hope you like pointless infrastructure load.

(But watch out for the even dumber option of "oh crap, we can't sustain this load, better set client-side max-age to some very high value everywhere!" that you might see in the backlash - if you're not set up with sensible revision control already then you're going to have a very bad time rolling out hotfixes to static content)
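
For contrast, step one of an actual caching strategy usually looks like explicit per-route policies rather than a blanket rule. A sketch, with made-up paths and lifetimes:

code:

# longest-prefix match over a handful of explicit policies
CACHE_POLICIES = {
    "/static/": "public, max-age=31536000, immutable",  # fingerprinted assets
    "/api/":    "no-store",                             # per-user, never cache
}

def cache_control_for(path: str) -> str:
    for prefix, policy in CACHE_POLICIES.items():
        if path.startswith(prefix):
            return policy
    return "no-cache"  # default: reuse only after revalidating (e.g. ETag)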

Space Gopher
Jul 31, 2006

Subjunctive posted:

Doesn’t Facebook do continuous deployment now?

CD shouldn't let anybody go straight from a dev branch, all the way out to production, without any checks. There should be human code reviews and multiple layers of automated tests. Merging to master might end up automatically initiating a global prod deploy, but any sane CI/CD pipeline will make sure that there's an audit trail and quality gates before that merge happens, and more quality gates between the merge and the final big push.

If a single person can push random poo poo (as in, potentially untested or failing code - including code with obvious or maliciously placed vulnerabilities) all the way through your release process, then you've got a deeply dysfunctional environment.

Assuming FB hasn't overhauled their web release pipeline since https://code.facebook.com/posts/270314900139291/rapid-release-at-massive-scale/ , it looks like they have a robust quality gate system. They run automated tests before allowing code into master, then go through a couple of canary stages (employees-only, then 2% of global traffic) before the whole world gets a given quasi-CD release. I'm guessing that there's some level of two-person-rule code review in the merge to master, too.

Space Gopher
Jul 31, 2006

Cup Runneth Over posted:

https://www.pcgamer.com/juggling-passwords-is-a-chore-and-soon-you-might-not-have-to/

Thoughts? Is biometric verification a valid replacement for passwords?

PC Gamer is not the best place to get your security news.

The idea isn't to use biometrics as passwords directly.

Instead, you have a strong token you persist onto some secure device - a U2F device, secure enclave in some larger system, etc. Then, you use biometrics to authenticate yourself to the device, which then goes through a mutual authentication process with the website.

The end result is that, to log in, you need to have an enrolled device in your possession, that in turn needs to see your biometric identifier before it will do anything. Somebody with only a perfect image of your fingerprint/retina/whatever won't be able to do anything with it, because they don't have the device that actually has the token. Somebody who pickpockets your device won't be able to do anything with it, barring hardware level insecurity, because they need your biometric identifier to unlock the device. And, a phisher won't be able to trick you into revealing your token, because the device needs to see proof that the system asking for the token is the one that issued it in the first place.

There are a lot of places where weaknesses might theoretically crop up, but the system is way, way better than a username/password combo.

e: I should point out that this is a simplified version of what's actually going on, which is actually based around key exchange. But the core idea remains the same: you use biometrics to authenticate yourself to a device that then has the knowledge to do mutual auth with whoever you're talking to.
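
Stripped to a skeleton, the exchange looks like this. A sketch using the third-party cryptography package; the biometric gate is a stand-in for whatever the device's secure hardware actually does:

code:

import os
from cryptography.hazmat.primitives.asymmetric import ed25519  # pip install cryptography

# enrollment: the device generates a keypair; only the public half leaves it
device_key = ed25519.Ed25519PrivateKey.generate()  # lives in the secure element
server_pubkey = device_key.public_key()            # stored by the website

def user_passed_biometric_check() -> bool:
    return True  # stand-in for the on-device fingerprint/face check

def device_sign(challenge: bytes) -> bytes:
    # the key is only usable after the local biometric unlock
    assert user_passed_biometric_check()
    return device_key.sign(challenge)

# login: the server sends a fresh random challenge, the device signs it
challenge = os.urandom(32)
server_pubkey.verify(device_sign(challenge), challenge)  # raises if forged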

Space Gopher fucked around with this message at 21:50 on Mar 4, 2019

Space Gopher
Jul 31, 2006

RFC2324 posted:

are you saying someones password hashed to Penis1?

Are you saying you trust a client to handle hashing a password?

The mistake is logging sensitive request bodies. There's nothing wrong with sending unhashed passwords over https, as long as you don't store them.
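
That is: TLS protects the password in transit, and the server should hash it the moment it arrives, so there's never anything sensitive to log or store. A sketch using the standard library's scrypt:

code:

import hashlib, hmac, os

def store(password: str) -> tuple[bytes, bytes]:
    # hash at rest with a per-user salt; the plaintext is never persisted
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)  # allow the 16 MiB working set
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    return hmac.compare_digest(candidate, stored)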

Space Gopher
Jul 31, 2006

Arsenic Lupin posted:

I need to persuade my husband that LastPass is more secure than 1Password. Anybody got cites?

https://bugs.chromium.org/p/project-zero/issues/detail?id=1209

https://bugs.chromium.org/p/project-zero/issues/detail?id=1217

https://bugs.chromium.org/p/project-zero/issues/detail?id=1225

Take a special look at the first two, where LastPass was hit with a critical vulnerability, rolled out a duct-taped fix, said everything was good - and the same exploit with a tiny bit of extra special-case bypass popped right back up again. Tavis wrote the following in his next bug report for that reason:

quote:

NOTE: Please ***do not*** release a patch until you're confident all cases have been fixed. Releasing a patch that just fixes the single case that I've made a demo for will make it very easy to identify the vulnerability and for someone to simply exploit any of the hundreds of others of cases where you've made this mistake. Please communicate with me on your plan to release fixes so that we can make sure the process goes smoothly.

The bugs are a couple of years old at this point, but general consensus on LastPass is that they're much more focused on "look at our good security!" PR than on actual security.

Space Gopher
Jul 31, 2006

The caller ID system allows callers to send whatever they please as the "from" address. There are legitimate uses for this - for instance, an outbound-only customer service return call might show up with the main customer service phone number in caller ID - but there's no authentication, and it's heavily used by scammers.

Your number was almost certainly used by a scammer in outbound caller ID. There's nothing you can do about it and the phone companies effectively don't care - they're already spinning up revenue streams for spam call blocking services.

Space Gopher
Jul 31, 2006

wolrah posted:

SHAKEN/STIR definitely looks like it will resolve this if it gets sufficiently wide adoption, but at this point I'm not sure how long that'll take.

SHAKEN/STIR is great and all, but remember, it terminates at the carrier level.

There's nothing to stop Verizon or AT&T from putting the functionality to block spoofed CID detected by new auth mechanisms behind an "enhanced spam call blocking service" package for a monthly fee.

And would you look at that, they're already selling those services.

Space Gopher
Jul 31, 2006

Internet Explorer posted:

Alright, gotta ask a question that is beyond my expertise and I don't have too much time to look into it.

We have a project we're implementing that is not using SSL/TLS for its web traffic and it uses pass-through auth using local AD (KERBEROS/NTLM I assume). I'm trying to find out if this could be used for a pass the hash type attack. I know it would be trivial to log into the website and application as a user because that cookie is unencrypted. What I'm trying to figure out is if this config could lead to AD login tokens being replayed. Anyone have any thoughts?

The specific concern of "if I get a dump of just the unencrypted traffic between a client and this web app, then I can turn that into credentials that let me access everything else the client can do over AD" is mitigated by Kerberos and even NTLM. NTLM transmits a password hash, so it's theoretically vulnerable to pass-the-hash, but that hash is encrypted when it goes over the wire. Kerberos hands out tickets that are tightly scoped.

But, it's 2020. There's no excuse for unencrypted http for pictures of your cat, let alone anything behind access control. Fix your spec.

Space Gopher
Jul 31, 2006

Tobermory posted:

Oh, we're putting people who know what they're doing in charge of things. I'm supposed to help them out in finding some of those people. It's the standard issue where an organization needs technical expertise, but doesn't have enough technical expertise to know what to ask for. Getting pointed in the right direction is incredibly helpful.

Pissing off organized crime, human traffickers, and repressive governments is the kind of thing that might get productive attention from big names in the infosec/crypto worlds, especially if your organization or people in it have a solid track record in that space.

Also, you're talking about entering an adversarial space, with some of the most determined and well-funded adversaries on the planet. You're not going to be able to out-hire them on the open market. Your best shot is to take advantage of highly talented people who are more driven by ideals than money.

I'd drop an email to Bruce Schneier with the same basic ask you have here: what you're trying to do, what kind of help you need (probably just recommended tools and introductions to anyone who can help), and why it's important. He might or might not be able to give you a meaningful response, but it's at least worth the ask.

Space Gopher
Jul 31, 2006

Wiggly Wayne DDS posted:

why are you latching onto this being a personal bias at all? there was nothing stopping him from sinkholing the domain and getting the same effect - it only needed to resolve. instead he put a lot of effort into keeping the company's servers up to watch all of the traffic coming in. can you see why this comes off badly?

you keep creating these motivations and capabilities that just don't add up to the reality of what happened. it's an issue with a lot of security news though, people rush in having read an article that glosses over the details and shout down anyone informed telling them that's not how it happened

speaking of glossing over details

quote:

It took a few hours longer for Hutchins and his colleagues at Kryptos Logic to understand that WannaCry was still a threat. In fact, the domain that Hutchins had registered was still being bombarded with connections from WannaCry-infected computers all over the globe as the remnants of the neutered worm continued to spread: It would receive nearly 1 million connections over the next two days. If their web domain went offline, every computer that attempted to reach the domain and failed would have its contents encrypted, and WannaCry's wave of destruction would begin again.

the kill switch wasn't just "this record can be resolved in DNS", it was getting a response from a server at that domain. servers go down, botnet goes active again, so it was pretty important to keep those servers up!

Wired really wants to tell a rehabilitation story and it's hard to see exactly how legit it really is, but "he is obviously sketchy because he had to keep the servers alive" is just plain wrong

Space Gopher
Jul 31, 2006

D. Ebdrup posted:

Technically speaking, no language is Turing complete - because the termination checker can't check itself.

That's not what Turing completeness means, at all. You're mixing up Turing completeness with the halting problem.

A language is Turing complete if you can use it to write an emulator for arbitrary Turing machines. A Turing machine is just a finite state machine that takes input from a single spot on an infinite tape, and can optionally overwrite the current tape symbol, shift its position, or halt based on FSM state transitions. The "infinite tape length" and "unlimited possible states" requirements are typically waived when discussing real-world systems, for obvious reasons. That's the entire definition in simple terms.

Almost all programming languages are Turing complete. It takes some effort to design a non-Turing complete language, because you can get to a Turing machine with nothing but variable storage, conditionals, and jump instructions (or, if you feel like goto is harmful, loop structures that don't put predefined fixed bounds on the number of trips through the loop, or recursion).
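
To make "write an emulator for arbitrary Turing machines" concrete, here's a sketch: a dict for the tape, a loop, and conditionals really are all it takes. (The step cap is only so the demo can't spin forever - deciding whether it would is exactly the halting problem.)

code:

def run_tm(rules, state, tape_str, halt_states, max_steps=10_000):
    # rules: (state, symbol) -> (next_state, written_symbol, move), move in {-1, +1}
    tape = dict(enumerate(tape_str))  # sparse "infinite" tape; "_" is the blank
    pos = 0
    for _ in range(max_steps):
        if state in halt_states:
            break
        state, tape[pos], move = rules[(state, tape.get(pos, "_"))]
        pos += move
    return state, "".join(tape[i] for i in sorted(tape))

# toy machine: walk right, turning every 0 into a 1, halt at the first blank
rules = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("done", "_", +1),
}
print(run_tm(rules, "scan", "0010", {"done"}))  # -> ('done', '1111_')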

The halting problem is separate. It says that there's no algorithm that can take in a program for a Turing-complete system and an input to that program, and reliably answer the question of whether the program eventually halts. This isn't a limitation of Turing machines; it's just provably mathematically impossible. The concept of a Turing machine doesn't involve a "termination checker," because that's something that can't exist, even in a mathematical abstraction where the Turing machine can have an infinite length tape and an arbitrary number of states for the head.

Space Gopher
Jul 31, 2006

D. Ebdrup posted:

I would be 100% okay with SMS 2FA if the standard had just recommended that every message be prepended with something like "Authentication Code:" so that iOS and Android could look for that in SMS messages and then blur the rest of the contents of the SMS from being displayed on the lock screen. Heck, it should be possible to implement the information as meta-data in the message itself.
I'm half-convinced that would solve every problem with SMS 2FA that doesn't involve the targeted attacks, for example where accounts are stolen via social engineering.

The big threat with SMS 2FA isn't somebody reading the code off your lock screen. If you're worried about that, just set your phone to not display SMS previews on the lock screen at all, which will stop exposing both SMS 2FA codes and that message from your ex that says "hey this is awkward but you might wanna get tested." More generally, starting off with "you have to find a way to get the person's phone in your hands" is not a feature of a strong attack.

The problem with SMS 2FA is that phone numbers are not strongly tied to hardware, people, or cryptographic secrets. Phone provider CSRs are willing to help an attacker with a SIM swap, because they're judged on fast resolutions and survey scores, not security. Anyone who can break into not-particularly-secure provider customer accounts can set up call forwarding, and many services that do SMS 2FA also wire it up to a "call me" option that reads the code over a text-to-speech engine. SS7 attacks can redirect incoming SMSes directly to an attacker using the same mechanisms that let your phone number work overseas, and larger-scale organized crime treats access to SS7 as a commodity. There are a lot of ways to compromise 2FA SMS before your phone is ever involved, and that's the reason that SMS is not a good 2FA mechanism.

Space Gopher fucked around with this message at 17:15 on Jul 4, 2020

Space Gopher
Jul 31, 2006

You can make a "SIEM tool" by hiring a low-level tech, assigning them to poke through reports from endpoint security software, and telling them to log anything that looks weird into a shared Excel doc.

It won't do much good, but that kind of setup is a lot more common in 800-person companies than a full Splunk system.

Space Gopher
Jul 31, 2006

Volmarias posted:

What is even the point of this program?

The idea is that a safety net with some holes is more useful than no safety net at all.

Space Gopher
Jul 31, 2006

Ynglaur posted:

I'll be honest, I don't see how a salesperson buying someone lunch is bribing them. I mean, it's lunch. If they show up with the keys to a new car, or a suitcase full of cash gift cards, I get it. But lunch? Human beings can share a meal and not be signing over each other's honor.

It's not a bribe in the sense of "I choose your product even though I know it might not be the best, you give me a comically large sack of cash with a dollar sign on it," but people are much, much friendlier to a sales pitch if you give them small, low-value gifts. This has been true for as long as sales has existed, and offers of food in particular speak to something pretty deep in our brains and social order. Keep in mind that the root of "companion" is literally just Latin for "with bread."

BlankSystemDaemon posted:

Getting to eat well on the company dime sounds like the least bad part about being a sales person, honestly.

I'm a couple steps removed from sales, but thanks to a bunch of work travel and sales-related work, I've had plenty of opportunity to dine well on the company dime.

It's fun at first but it honestly gets kind of old. Like work travel, there's a wide gulf between "doing a fun thing for your own enjoyment" and "doing an ostensibly fun thing but actually it's work."

e:

evil_bunnY posted:

My one friend in sales had *so* much trouble keeping fit lol

oh god a thousand times this, I'll have the salad, thanks, and can you please go light on the dressing?

Space Gopher
Jul 31, 2006

BaseballPCHiker posted:

I'm sure others will have better ideas but I would run Wireshark, take a good long packet capture, then see where your traffic is going. If you aren't able to install Wireshark you could put in more effort by getting an old fashioned hub, and connecting your work computer to it, then another computer to the hub running Wireshark.

The answer to this is going to be "to the corporate VPN," whether it's logging and transmitting every single click and keystroke, or just phoning home once a day to see what updates are whitelisted by their MDM setup. The entire point of the VPN is that the traffic is opaque to anybody using, say, Wireshark to sniff and analyze it.

You might be able to make some inferences based on traffic volume, but outside of that, it's not going to be very helpful without some way to MITM the VPN - which would be very noticeable to anybody looking for it.
