|
Funny how people get so hung up on crypto, when most programming interviews focus heavily on forcing people to prove they are willing and eager to implement other things you should never ever implement yourself*, such as basic data structures. How many major security busts over the years have been caused by people implementing their own crypto? Not many, compared to the number caused by things like input validation or string handling. (Or by using crypto implemented by a trained professional who knew exactly what they were doing, except for goto fail.)
|
# ? Aug 25, 2016 20:46 |
|
|
# ? May 31, 2024 10:23 |
|
Yeah, if I were asked how much I knew about crypto, I'd say "a) not much, b) don't roll your own, c) that being said, here's roughly how I understand things to work...". Just like if someone asked me to sort a list, my first response would be "use the library sort() method" before I went into how to actually implement that method. You want to show you're aware that there are existing solutions and that you'd use them normally, but you also need to show that you know (broadly) how such solutions function, because that can be important when it comes to correctly using them.
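Sketching that kind of answer in Python (my illustration, not the poster's code): in practice you'd just call `sorted()`, but you should be able to produce the textbook algorithm behind it.

```python
def merge_sort(items):
    """Textbook top-down merge sort: O(n log n), stable."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves, taking the smaller head each time.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these two is already empty
    merged.extend(right[j:])
    return merged

data = [5, 3, 8, 1, 9, 2]
assert merge_sort(data) == sorted(data)  # in real code, just call sorted()
```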
|
# ? Aug 25, 2016 21:06 |
|
Soricidus posted:Funny how people get so hung up on crypto, when most programming interviews focus heavily on forcing people to prove they are willing and eager to implement other things you should never ever implement yourself*, such as basic data structures. Maybe, but knowing how to implement a data structure is more a testament to knowing the properties that make it useful in certain situations. Waving your hands and saying "I'm going to use a map for this" is cool, but is that a hash_map or an RBT or tree-like implementation? Understanding the underlying implementation and what it excels and fails at is probably a really useful skill to have when you write software, similar to crypto. You should not try to reinvent any of these things, but they aren't inherently bad interview questions, because inventing them isn't the goal of the interview.
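A quick Python illustration of why the distinction matters (names made up): a hash map gives O(1) average lookup but no usable ordering, while a tree-like structure, approximated here with a sorted list plus binary search, supports range queries.

```python
import bisect

# Hash map: O(1) average lookup, but no ordering you can exploit.
ages = {"carol": 41, "alice": 30, "bob": 25}
assert ages["bob"] == 25

# Tree-like alternative (here: sorted list + binary search), as you'd
# want for a range query such as "all keys up to 'bz'".
keys = sorted(ages)                       # ['alice', 'bob', 'carol']
lo = bisect.bisect_left(keys, "a")        # first key >= 'a'
hi = bisect.bisect_right(keys, "bz")      # first key > 'bz'
in_range = keys[lo:hi]
assert in_range == ["alice", "bob"]
```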
|
# ? Aug 25, 2016 21:46 |
|
Today's adventure: a unit test which doesn't work unless other unit tests are run beforehand.
|
# ? Aug 25, 2016 22:29 |
|
qntm posted:Today's adventure: a unit test which doesn't work unless other unit tests are run beforehand. This is why one must randomize (and better, parallelize) their tests.
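A minimal sketch of the failure mode (all names hypothetical): module-level state leaks between tests, so they pass in declaration order and fail once the order changes. Randomizing or parallelizing surfaces this; here a simple reversed order is enough.

```python
cache = {}  # module-level state shared between tests -- the real culprit

def test_warm_cache():
    cache["user"] = "alice"
    assert cache["user"] == "alice"

def test_greeting():
    # Hidden dependency: only passes if test_warm_cache ran first.
    assert cache.get("user") == "alice"

def run(tests):
    """Simulate a fresh test process, then run tests in the given order."""
    cache.clear()
    failures = []
    for t in tests:
        try:
            t()
        except AssertionError:
            failures.append(t.__name__)
    return failures

# In declaration order everything "passes"...
assert run([test_warm_cache, test_greeting]) == []
# ...but any other order exposes the dependency.
assert run([test_greeting, test_warm_cache]) == ["test_greeting"]
```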
|
# ? Aug 25, 2016 22:31 |
|
Munkeymon posted:Perl is basically Calvinball: The Language
|
# ? Aug 25, 2016 22:32 |
|
Soricidus posted:Funny how people get so hung up on crypto, when most programming interviews focus heavily on forcing people to prove they are willing and eager to implement other things you should never ever implement yourself*, such as basic data structures. Knowing things about data structures is important in writing software because they affect the performance and correctness of your code, regardless of who writes them. Similarly, implementing something higher-level like an authentication token scheme may require you to understand the security properties of HMACs even though you hopefully won't implement that part yourself.
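As a hedged sketch of that last point, here's a toy HMAC-signed token built on Python's stdlib (the format and names are mine, not any real scheme; a production system should use a vetted token library). The security-relevant details are that the tag covers the whole payload and that verification uses a constant-time compare.

```python
import base64
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical; in reality, from config/KMS

def sign_token(payload):
    """Return b64(payload).b64(tag) where tag = HMAC-SHA256(secret, payload)."""
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload) + b"." + base64.urlsafe_b64encode(tag)

def verify_token(token):
    """Return the payload if the tag checks out, else None."""
    try:
        payload_b64, tag_b64 = token.rsplit(b".", 1)
        payload = base64.urlsafe_b64decode(payload_b64)
        tag = base64.urlsafe_b64decode(tag_b64)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    # compare_digest, not ==, to avoid a timing side channel.
    return payload if hmac.compare_digest(tag, expected) else None

token = sign_token(b"user=alice")
assert verify_token(token) == b"user=alice"
assert verify_token(token[:-2] + b"xx") is None  # tampered tag rejected
```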
|
# ? Aug 25, 2016 23:58 |
|
i'm not saying knowing about data structures isn't important or that the interviews are necessarily bad. it just seems weird that some people try to shut down any discussion of crypto by shouting about how you should never touch it, when it's probably not even in the top 10 of things that idiots regularly gently caress up with dire consequences (site hacked and all customer data stolen, etc). it's silly to single out crypto when the mantra should be "don't reinvent any wheels at all".
|
# ? Aug 26, 2016 00:46 |
|
It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer.
|
# ? Aug 26, 2016 00:57 |
|
Isn't using crypto components poorly also known as "bad crypto"? I agree that it's not about the underlying algorithms and rolling them yourself; it's about the combination or integration thereof. So yeah, maybe companies should interview about integration, but they won't, because their integration tests are nonexistent.
|
# ? Aug 26, 2016 01:06 |
|
Plorkyeran posted:It's actually hard to think of any major publicised fuckups that were the result of bad crypto Do we count hashing here?
|
# ? Aug 26, 2016 01:17 |
|
Plorkyeran posted:It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer. Where do dual_ec_drbg and the PS3 nonce fall?
|
# ? Aug 26, 2016 01:32 |
|
Voted Worst Mom posted:Maybe, but knowing how to implement a data structure is more a testament to knowing the properties that make it useful in certain situations. Waving your hands and saying "I'm going to use a map for this" is cool, but is that a hash_map or an RBT or tree-like implementation? Understanding the underlying implementation and what it excels and fails at is probably a really useful skill to have when you write software, similar to crypto. You should not try to reinvent any of these things, but they aren't inherently bad interview questions, because inventing them isn't the goal of the interview. To be honest I think it's more than that; if a person can write a FizzBuzz, implement a linked list from scratch, and write a merge sort, they can probably be, at the absolute minimum, a mediocre programmer in the face of more complex things. Let's be honest, there's a ton of work out there for anybody mediocre or higher. How much of programming is just storing, sorting, or manipulating data, flow control, and writing for loops?
|
# ? Aug 26, 2016 01:57 |
|
Plorkyeran posted:It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer. Trucha signing bug is probably the most notorious.
|
# ? Aug 26, 2016 03:45 |
|
There's also cryptocat, telegram, the horrible magento encryption...
|
# ? Aug 26, 2016 03:48 |
|
You could argue that the constant refrain of "seriously don't do this" is actually a big reason that crypto-related fuckups are relatively few and far-between.
|
# ? Aug 26, 2016 03:50 |
|
How many iPhone and Android apps do you suppose are running with less than secure crypto because developers don't want to go through export compliance? I'm guessing that's going to be a significant source of problems.
|
# ? Aug 26, 2016 03:59 |
|
Doing some maintenance work on a WebApi project developed by two guys who are no longer around. It has two controllers. Each expects credentials to be passed in differently. Here's the validation for each:code:
code:
|
# ? Aug 26, 2016 11:03 |
|
Just spent half an hour trying to work out why a method appeared to be only half-populating a list when it appeared impossible that it could be bailing early, then realised that ~80 lines of it was actually a lambda. (C#)
|
# ? Aug 26, 2016 16:28 |
|
Hammerite posted:Just spent half an hour trying to work out why a method appeared to be only half-populating a list when it appeared impossible that it could be bailing early, then realised that ~80 lines of it was actually a lambda. (C#) Can we see this?
|
# ? Aug 26, 2016 17:04 |
|
Plorkyeran posted:It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer.
|
# ? Aug 26, 2016 19:53 |
|
I'd call DRM fuckups public services tbh. Always roll your own DRM, please, media companies!
|
# ? Aug 26, 2016 20:21 |
|
the 3DS had a keyscrambler which took two known inputs and "scrambled" them into a key, which was used directly inside the same AES engine for hardware decryption. So theoretically we would need to decap the chip to figure out the keyscrambler. And then Nintendo accidentally published some full keys to an alpha channel, and then for the release channel realized their mistake and published the two scrambled halves, lol. But they did it twice, so we now know the rotates / shifts / xors the keyscrambler uses.
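The general shape of such a keyscrambler in Python, with made-up constants (not Nintendo's actual values): rotate, xor, add. The point of the story is the last two lines: once the rotate/xor/add structure is known, a single published input/output triple lets you solve for the "secret" additive constant outright.

```python
MASK = (1 << 128) - 1

def rotl128(x, n):
    """Rotate a 128-bit value left by n bits."""
    n %= 128
    return ((x << n) | (x >> (128 - n))) & MASK

# Hypothetical constant -- the real console uses its own secret value.
C = 0x0123456789ABCDEF0123456789ABCDEF

def scramble(key_x, key_y):
    """Toy keyscrambler in the reported rotate/xor/add style."""
    return rotl128(((rotl128(key_x, 2) ^ key_y) + C) & MASK, 87)

kx = 0x11112222333344445555666677778888
ky = 0x99990000AAAABBBBCCCCDDDDEEEEFFFF
normal = scramble(kx, ky)

# With one (key_x, key_y, normal_key) triple, just invert the public
# structure: undo the final rotate, subtract the xor term, out falls C.
recovered_C = (rotl128(normal, 128 - 87) - (rotl128(kx, 2) ^ ky)) & MASK
assert recovered_C == C
```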
|
# ? Aug 26, 2016 22:46 |
|
Nintendo is a neverending source of "bad crypto" and I made some effortposts in the past about it. Suspicious Dish posted:Also, because I want a quick thing I can cross-post to the Security Fuckup Megathread again, let's talk about 3DS savefiles for a minute. I already talked about this with a friend, so I'm mostly going to be summarizing from our Skype logs.
|
# ? Aug 26, 2016 22:48 |
|
Suspicious Dish posted:Nintendo is a neverending source of "bad crypto" and I made some effortposts in the past about it.
|
# ? Aug 27, 2016 01:26 |
|
Plorkyeran posted:It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer. there's the perennial "someone implemented crypto X and reused an IV/nonce Y", like lots of RC4 impls, or ECDSA (sony, bitcoin): if you re-use the random nonce, the transactions you sign can leak your private key.

there's the "not using a secure equality function" ones, where either there's a timing attack or, my favourite, where strcmp was used to check signatures, and, yeah, well, null bytes broke everything hilariously badly (nintendo). or that time every debian user had very bad gpg keys because someone broke the code.

there's the "my random number generator is poo poo" ones too, including a game show. but most of these can be handwaved as "bad crypto implementations" and not "bad crypto".

for bad crypto, you have things like dvd encryption, which spawned DeCSS, or PURPLE/ENIGMA for some less computery variants. but bad crypto rarely makes its way into bad crypto implementations: most academic stuff is built very slowly and very incrementally. there's a battery of standard tests and analysis, and crypto breaks are often theoretical before they're practical: there's usually enough time to move off the bad ideas before they fall apart. in the end the implementation errors usually happen way before the design errors. still, there's good reasons we use tls 1.3 and not ssl 1.0
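The Sony/Bitcoin nonce-reuse break fits in a few lines of Python, since the recovery needs only the mod-n algebra (all numbers here are invented; in real ECDSA, r is the x-coordinate of kG, and it's identical across the two signatures precisely because k was reused).

```python
# secp256k1 group order; the curve itself never appears in the recovery.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = 0xC0FFEE            # the private key (made up)
k = 0xDEADBEEF          # the nonce, WRONGLY reused for two messages
r = 0x1234567890ABCDEF  # stand-in for x(kG); shared because k is shared

z1, z2 = 0x1111, 0x2222  # the two message hashes (made up)

def sign(z):
    # s = k^-1 * (z + r*d) mod n  -- standard ECDSA signing equation
    return (pow(k, -1, n) * (z + r * d)) % n

s1, s2 = sign(z1), sign(z2)

# Attacker sees (r, s1, z1) and (r, s2, z2). Same r means same k, so:
#   s1 - s2 = k^-1 * (z1 - z2)  =>  k = (z1 - z2) / (s1 - s2) mod n
k_rec = ((z1 - z2) * pow(s1 - s2, -1, n)) % n
#   s1 * k = z1 + r*d  =>  d = (s1*k - z1) / r mod n
d_rec = ((s1 * k_rec - z1) * pow(r, -1, n)) % n
assert (k_rec, d_rec) == (k, d)  # full private key recovered
```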
|
# ? Aug 27, 2016 02:50 |
|
tef posted:or that time every debian user had very bad gpg keys because someone broke the code To recap the story, back in 2006 the Debian maintainer for the OpenSSL packages broke libssl's entropy pool in such a way that it was only seeded by the current process ID (PID), and so contained only ~32000 unique states. This was discovered (or at least, made public) in 2008. Thus, for almost two years every private key generated with openssl, including SSH keys and X.509 certificate keys, came from a limited key set, which was quickly enumerated after the bug was made public. GPG keys were actually unaffected, since GnuPG doesn't use OpenSSL due to licensing issues. The maintainer, of course, got a well-deserved public flogging, but at the time most people dismissed OpenSSL's role in the debacle. The back story was that libssl was using, among other sources, reads from uninitialized memory to seed its entropy pool. The thought was that uninitialized memory might contain randomness, and even if it doesn't, using it as part of the seed wasn't harmful. The issue is that this is about the only "legitimate" reason to use uninitialized memory (which is to say that the practice is still quite dubious even if it isn't invalid). Thus, Valgrind and other programs that check for memory use errors were complaining when run on programs using libssl, or on programs using other libraries that, internally, use libssl. This resulted in user complaints and spurious bug reports to Debian from people misinterpreting Valgrind's warnings. So, the Debian OpenSSL maintainer was trying to do right by Debian users and reduce the volume of spurious reports by removing the part of libssl that used uninitialized memory. OpenSSL does (did?) have an #ifdef ("PURIFY") for this purpose, but it wasn't documented and wasn't obvious to the maintainer. He attempted to patch libssl himself, and failed. 
However, before doing so, he posted to the openssl-dev mailing list with a plea to check his work, and basically made an effort to do the right thing. But he didn't get a useful response, and so the rest is history. Years later, Heartbleed happens. In the aftermath we learn that OpenSSL was being maintained by just two guys who were overloaded and generally way in over their heads. Nobody external had done a serious code review of the project, and when the OpenBSD/LibreSSL guys did, they found that, unsurprisingly, OpenSSL was buggy and full of poo poo code, and poo poo legacy code, for no particularly great reason. Personally I view the Debian situation as an early warning that the OpenSSL project had, at best, questionable management, and one that came six years before Heartbleed really blew that open. ExcessBLarg! fucked around with this message at 06:10 on Aug 27, 2016 |
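A toy model of the Debian bug in Python (using the stdlib random module as a stand-in for libssl's PRNG, and a symmetric 128-bit value instead of an RSA key): when the only entropy is a PID, an attacker just enumerates the PID space.

```python
import random

def generate_key(pid):
    """Stand-in for 'key generated by a process whose only entropy is its PID'."""
    rng = random.Random(pid)   # seeded solely by the PID, as in the bug
    return rng.getrandbits(128)

# A victim generated a key somewhere; the attacker only sees the key.
victim_key = generate_key(12345)

# Default Linux PIDs topped out at 32768, so brute force is instant:
recovered_pid = next(pid for pid in range(1, 32769)
                     if generate_key(pid) == victim_key)
assert recovered_pid == 12345
```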
# ? Aug 27, 2016 06:07 |
|
ExcessBLarg! posted:The issue is that this is about the only "legitimate" reason to use uninitialized memory (which is to say that the practice is still quite dubious even if it isn't invalid). It is invalid. Reading uninitialized memory is undefined behavior, which means the compiler could do anything at all. Even assuming best intentions, compilers are allowed to assume that undefined behavior never happens, so if a sufficiently smart compiler detected that access to uninitialized memory was inevitable along one code path, it could optimize it away entirely, and could excise arbitrary amounts of legitimate entropy-gathering code along with it. You could reasonably end up with no entropy at all, after mixing in entropy tainted by undefined behavior. This wasn't an issue in this case, but it's a potential time bomb that should be avoided.
|
# ? Aug 27, 2016 06:50 |
|
eth0.n posted:It is invalid. Reading uninitialized memory is undefined behavior, which means the compiler could do anything at all. Even assuming best intentions, compilers are allowed to assume that undefined behavior never happens, so if a sufficiently smart compiler detected that access to uninitialized memory was inevitable along one code path, it could optimize it away entirely, and could excise arbitrary amounts of legitimate entropy gathering code along with it. You could reasonably end up with no entropy at all, after mixing in entropy tainted by undefined behavior. No it's not. Reading from uninitialized memory returns an indeterminate value, which isn't the same as undefined behavior: you can treat the read as if there was any value stored in it (including a trap value), but you're not allowed to just optimize it away or format the hard disk or something.
|
# ? Aug 27, 2016 06:59 |
|
vOv posted:No it's not. Reading from uninitialized memory returns an indeterminate value, which isn't the same as undefined behavior: you can treat the read as if there was any value stored in it (including a trap value), but you're not allowed to just optimize it away or format the hard disk or something. Ah, OK, misunderstood this. Interestingly, it looks like it could be undefined behavior if implemented using uninitialized local variables that don't have their address taken. I.e., C++ code:
C++ code:
|
# ? Aug 27, 2016 07:11 |
|
Still, undefined behavior and indeterminate values are verifiable details. It's entirely possible to productively rely on "undefined" behavior in a language as long as you're able to influence and/or keep abreast of implementation details in the environments your library is compiled or executed in. Sure, there's always the chance your library has a compatibility issue when compiled with Borland Turbo C or some other uncommon compiler, but if you're willing to work with the core development team of a library, compiler, OS etc you can get away with a lot of behavior other people can't. I'm not sure OpenSSL or this specific example falls in this camp because it seems to have been horribly understaffed and mismanaged, but it's not entirely wrong to rely on undefined things as long as you know how to handle it and have a backup plan in case it needs to be changed for some reason. Hell, this is the reason GCC has several flags to disable certain optimizations (and security code often needs to be compiled with very selective optimizations anyway lest it "optimize" your code into something timing attack sensitive). E: That said, never rely on undefined behavior in your own code unless you're doing something wonky for fun. I've personally worked on a few libraries that relied on not-strictly defined behavior in Go (low-level performance-sensitive numeric code that involved writing assembly, and OpenGL/CL bindings and corresponding math libraries that had data structures that could be sent to the GPU), but in those cases we had to be actively scanning for things that might affect it, and in a few cases had to outright ask the core developers whether certain bits of compiler behavior (mostly behavior regarding struct layout, alignment, field order, and such) were likely to change even if the spec is mum on them. 
In one case we (and most libraries dealing with FFI C code) had to rewrite some of the library to prevent certain behavior when changes needed to be made to improve the GC. This is more possible with newer languages with a single canonical compiler* and more communicative core devs like Rust or Go, though. * Okay, Go has `go tool compile` and gccgo, technically, but very few people actually use gccgo for anything. Linear Zoetrope fucked around with this message at 07:46 on Aug 27, 2016 |
# ? Aug 27, 2016 07:32 |
|
vOv posted:No it's not. Reading from uninitialized memory returns an indeterminate value, which isn't the same as undefined behavior: you can treat the read as if there was any value stored in it (including a trap value), but you're not allowed to just optimize it away or format the hard disk or something. You absolutely can optimize it away. Trap values are not required to behave like a single consistent value.
|
# ? Aug 27, 2016 09:23 |
|
ExcessBLarg! posted:Personally I view the Debian situation as an early warning that the OpenSSL project had, at best, questionable management, and one that came six years before Heartbleed really blew that open. OpenSSL may be a turd of a project, but commenting out lines to make warnings go away with no oversight is beyond questionable management.
|
# ? Aug 27, 2016 15:12 |
|
xtal posted:Can we see this? I'd probably better not post it, even though it's worthless code we're looking to replace with a new system - it would reveal a bit too much about our systems if I posted it. Basically it pops up a dialog box that tells the user "something is happening, please wait", and the lambda is a task being done by the dialog box that involves loading some rows from a database. If something throws in the lambda (like, say, because a stored procedure doesn't exist, because we're now pointing to a partially populated test database), it would quit and just load whatever rows had been encountered so far. All other rows get silently discarded! I say "the lambda"; it's now a regular method on the class that gets called in a one-liner lambda expression, because I refactored it when it became clear it was a pain in the rear end to debug.
|
# ? Aug 27, 2016 16:16 |
|
Hammerite posted:I'd probably better not post it, even though it's worthless code we're looking to replace with a new system - it would reveal a bit too much about our systems if I posted it. I just did this, but it was for a 300-or-so-line C++ lambda that was called precisely once, immediately after its definition was closed.
|
# ? Aug 27, 2016 16:35 |
|
tef posted:OpenSSL may be a turd of a project but commenting out lines to make warnings go away with no oversight is beyond questionable management
|
# ? Aug 27, 2016 19:01 |
|
rjmccall posted:You absolutely can optimize it away. Trap values are not required to behave like a single consistent value. Hmm, I guess you're right. Reading from a trap representation is undefined, and the only type guaranteed not to have any trap representations is unsigned char (the code in question just used char).
|
# ? Aug 27, 2016 19:22 |
|
ExcessBLarg! posted:So when you get regular bug reports from angry users running Valgrind on their own programs, which complains about libssl reading from uninitialized memory, what would you do? Tell them not to use OpenSSL. It shouldn't be the job of the distributions to paper over such things. Take it to upstream, and use something else if upstream turns out to not care about quality.
|
# ? Aug 27, 2016 19:32 |
|
Compilers really will optimise that sort of thing away:code:
Assembly output posted:foo(int, int): # @foo(int, int)
|
# ? Aug 27, 2016 22:00 |
|
|
|
Athas posted:Tell them not to use OpenSSL. It shouldn't be the job of the distributions to paper over such things. Take it to upstream, and use something else if upstream turns out to not care about quality. As for alternatives, there weren't many in 2006. The most capable alternative at the time was GnuTLS, which is its own can of worms. The problem was worse than "don't use OpenSSL", though. People running Valgrind were interpreting the libssl reads as bugs, and so filing bug reports accordingly. So there's a squelch issue there. Furthermore, it wasn't just affecting people who used OpenSSL directly, but any library that itself used OpenSSL.
|
# ? Aug 27, 2016 22:37 |