Soricidus
Oct 21, 2010
freedom-hating statist shill
Funny how people get so hung up on crypto, when most programming interviews focus heavily on forcing people to prove they are willing and eager to implement other things you should never ever implement yourself*, such as basic data structures.

How many major security busts down the years have been caused by people implementing their own crypto? Not many, compared to the number that have been caused by things like input validation or string handling. (Or using crypto implemented by a trained professional who knew exactly what they were doing except goto fail)


TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
Yeah, if I were asked how much I knew about crypto, I'd say "a) not much, b) don't roll your own, c) that being said, here's roughly how I understand things to work...". Just like if someone asked me to sort a list, my first response would be "use the library sort() method" before I went into how to actually implement that method. You want to show you're aware that there are existing solutions and that you'd use them normally, but you also need to show that you know (broadly) how such solutions function, because that can be important when it comes to correctly using them.

Coffee Mugshot
Jun 26, 2010

by Lowtax

Soricidus posted:

Funny how people get so hung up on crypto, when most programming interviews focus heavily on forcing people to prove they are willing and eager to implement other things you should never ever implement yourself*, such as basic data structures.

How many major security busts down the years have been caused by people implementing their own crypto? Not many, compared to the number that have been caused by things like input validation or string handling. (Or using crypto implemented by a trained professional who knew exactly what they were doing except goto fail)

Maybe, but knowing how to implement a data structure is more of a testament to knowing the properties that make it useful in certain situations. Waving your hands and saying "I'm going to use a map for this" is cool, but is that a hash_map or an RBT or some tree-like implementation? Understanding the underlying implementation and what it excels and fails at is probably a really useful skill to have when you write software, similar to crypto. You should not try to reinvent any of these things, but they aren't inherently bad interview questions, because inventing them isn't the goal of the interview.

qntm
Jun 17, 2009
Today's adventure: a unit test which doesn't work unless other unit tests are run beforehand.

xtal
Jan 9, 2011

by Fluffdaddy

qntm posted:

Today's adventure: a unit test which doesn't work unless other unit tests are run beforehand.

This is why you should randomize (and, better, parallelize) your tests.
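A minimal sketch of the failure mode in plain Python (the test names and the shared cache are made up): one test quietly warms a shared cache that a second test depends on, so the suite only passes in one order. Randomizing the order (e.g. random.shuffle on the test list) surfaces the hidden coupling.

```python
# A hypothetical shared cache that one test warms up and another
# silently depends on -- classic hidden test coupling.
cache = {}

def test_warms_cache():
    cache["user"] = "alice"
    assert cache["user"] == "alice"

def test_depends_on_cache():
    # Passes only if test_warms_cache ran first.
    assert cache.get("user") == "alice"

def run(tests):
    # Tiny stand-in for a test runner: record pass/FAIL per test.
    results = []
    for t in tests:
        try:
            t()
            results.append((t.__name__, "pass"))
        except AssertionError:
            results.append((t.__name__, "FAIL"))
    return results

tests = [test_warms_cache, test_depends_on_cache]
print(run(tests))                    # declaration order: both pass
cache.clear()
print(run(list(reversed(tests))))    # reversed: the dependent test fails
```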

No Safe Word
Feb 26, 2005

Munkeymon posted:

Perl is basically Calvinball: The Language
this is really really good, quoting for the new page

Series DD Funding
Nov 25, 2014

by exmarx

Soricidus posted:

Funny how people get so hung up on crypto, when most programming interviews focus heavily on forcing people to prove they are willing and eager to implement other things you should never ever implement yourself*, such as basic data structures.

How many major security busts down the years have been caused by people implementing their own crypto? Not many, compared to the number that have been caused by things like input validation or string handling. (Or using crypto implemented by a trained professional who knew exactly what they were doing except goto fail)

Knowing things about data structures is important in writing software because they affect the performance and correctness of your code, regardless of who writes them. Similarly, implementing something higher-level like an authentication token scheme may require you to understand the security properties of HMACs even though you hopefully won't implement that part yourself.
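A sketch of that kind of token scheme using Python's stdlib hmac module (the token format and secret here are made up, not any real system's): the server signs a readable payload, and verification recomputes the tag with a constant-time compare.

```python
import hmac
import hashlib

SECRET = b"server-side secret"  # hypothetical key, never sent to clients

def issue_token(user_id: str) -> str:
    # Sign the payload: anyone can read it, only the key holder can mint it.
    tag = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{tag}"

def verify_token(token: str) -> bool:
    user_id, _, tag = token.rpartition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest, not ==, to avoid a timing side channel.
    return hmac.compare_digest(tag, expected)

t = issue_token("alice")
print(verify_token(t))                              # True
flipped = t[:-1] + ("0" if t[-1] != "0" else "1")   # tamper with one hex digit
print(verify_token(flipped))                        # False
```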

Soricidus
Oct 21, 2010
freedom-hating statist shill
i'm not saying knowing about data structures isn't important or that the interviews are necessarily bad. it just seems weird that some people try to shut down any discussion of crypto by shouting about how you should never touch it, when it's probably not even in the top 10 of things that idiots regularly gently caress up with dire consequences (site hacked and all customer data stolen, etc). it's silly to single out crypto when the mantra should be "don't reinvent any wheels at all".

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer.

Coffee Mugshot
Jun 26, 2010

by Lowtax
Isn't using crypto components poorly also known as "bad crypto"? I agree that it's not about the underlying algorithms and rolling them yourself; it's about the combination or integration thereof. So yeah, maybe companies should interview about integration, but they won't, because their integration tests are nonexistent.

spiritual bypass
Feb 19, 2008

Grimey Drawer

Plorkyeran posted:

It's actually hard to think of any major publicised fuckups that were the result of bad crypto

Do we count hashing here?

JawnV6
Jul 4, 2004

So hot ...

Plorkyeran posted:

It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer.

Where do dual_ec_drbg and the PS3 nonce fall?

ToxicSlurpee
Nov 5, 2003

-=SEND HELP=-


Pillbug

Voted Worst Mom posted:

Maybe, but knowing how to implement a data structure is more of a testament to knowing the properties that make it useful in certain situations. Waving your hands and saying "I'm going to use a map for this" is cool, but is that a hash_map or an RBT or some tree-like implementation? Understanding the underlying implementation and what it excels and fails at is probably a really useful skill to have when you write software, similar to crypto. You should not try to reinvent any of these things, but they aren't inherently bad interview questions, because inventing them isn't the goal of the interview.

To be honest I think it's more than that; if a person can write a FizzBuzz, implement a linked list from scratch, and write a merge sort they probably can be, at the absolute minimum, a mediocre programmer in the face of more complex things. Let's be honest, there's a ton of work out there for anybody mediocre or higher.

How much of programming is just storing, sorting, or manipulating data, flow control, and writing for loops?
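For reference, the FizzBuzz screen mentioned above fits in a few lines of Python (this is the standard formulation of the exercise, nothing project-specific):

```python
def fizzbuzz(n: int) -> str:
    # The classic interview screen: multiples of 3 -> "Fizz",
    # multiples of 5 -> "Buzz", both -> "FizzBuzz", else the number.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print([fizzbuzz(i) for i in range(1, 16)])
```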

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Plorkyeran posted:

It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer.

The Trucha signing bug is probably the most notorious.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
There's also cryptocat, telegram, the horrible magento encryption...

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
You could argue that the constant refrain of "seriously don't do this" is actually a big reason that crypto-related fuckups are relatively few and far-between.

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane
How many iPhone and Android apps do you suppose are running with less than secure crypto because developers don't want to go through export compliance? I'm guessing that's going to be a significant source of problems.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Doing some maintenance work on a WebApi project developed by two guys who are no longer around. It has two controllers. Each expects credentials to be passed in differently. Here's the validation for each:

code:
        public int PerformChannelAuthentication(ChannelAuth credentials, string methodName, int accessLevel, out int valDays)
        {
            valDays = 90;
            return 0;
            // actual code that does some sort of credential checking commented out, either because it didn't work, or they couldn't be bothered to set everything up in the db. 
        }

code:
        public bool ValidateChannelCredentials(string name, string pass)
        {
            if(name.ToLower() != "test" || pass.ToLower() != "test")
            {
                return false;
            }
            return true;
        }

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
Just spent half an hour trying to work out why a method appeared to be only half-populating a list when it appeared impossible that it could be bailing early, then realised that ~80 lines of it was actually a lambda. (C#)

xtal
Jan 9, 2011

by Fluffdaddy

Hammerite posted:

Just spent half an hour trying to work out why a method appeared to be only half-populating a list when it appeared impossible that it could be bailing early, then realised that ~80 lines of it was actually a lambda. (C#)

Can we see this?

ExcessBLarg!
Sep 1, 2001

Plorkyeran posted:

It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer.
Are timing and other side-channel attacks "bad crypto"? You can implement a crypto algorithm that's entirely correct on paper, but if you neglect timing issues it can be broken wide open. There are a number of examples, but the one I recall is the brute force of the PSP master key, which a timing attack made a lot easier by breaking it down into four 32-bit keys instead of one 128-bit key.
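A sketch of the timing problem in Python: an early-exit comparison leaks how long the matching prefix is, which lets an attacker recover a secret piecewise instead of brute-forcing it whole. This is the general shape of the attack, not the PSP specifics.

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so running time leaks
    # how much of the guess is correct -- a guess-one-byte-at-a-time
    # attack then recovers the secret in linear rather than
    # exponential work.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # Touches every byte regardless of where mismatches are.
    return hmac.compare_digest(a, b)

secret_mac = b"correct-mac-value"
print(leaky_equals(secret_mac, b"correct-mac-value"))   # True, but timing-unsafe
print(constant_time_equals(secret_mac, b"wrong-guess!!"))
```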

Soricidus
Oct 21, 2010
freedom-hating statist shill
I'd call DRM fuckups public services tbh. Always roll your own DRM, please, media companies!

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
the 3DS had a keyscrambler which took two known inputs and "scrambled" them into a key, which was directly used inside the same AES engine for hardware decryption. so theoretically we would need to decap the chip to figure out the keyscrambler. and then Nintendo accidentally published some full keys to an alpha channel, and then for the release channel realized their mistake and published the two scrambled halves, lol. but they did it twice, so we now know the rotates / shifts / xors the keyscrambler uses.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Nintendo is a neverending source of "bad crypto" and I made some effortposts in the past about it.

Suspicious Dish posted:

Also, because I want a quick thing I can cross-post to the Security Fuckup Megathread again, let's talk about 3DS savefiles for a minute. I already talked about this with a friend, so I'm mostly going to be summarizing from our Skype logs.

For everyone that knows a basic bit of crypto, you probably know about the one-time pad. For everyone else, basically, you take some plaintext, and then XOR it with a random key that's just as long, and you have some encrypted ciphertext. And if you then XOR it with the random key again, you get back your plaintext. This is 100% secure and can't be broken, assuming your key is truly random and you NEVER USE THE KEY AGAIN. Have a visual demonstration, from the excellent Crypto 101 book by my good friend Laurens van Houtven.



If you reuse the key, then you can XOR the two ciphertexts together and learn quite a lot about the plaintexts, even if you can't fully recover them.

(Also, if you don't know, the "XOR" operator can be thought of as a bitwise != operator, or a "give me all the places where these bits differ". So, the only place where the ciphertexts are different is where the pictures are different, since the keys are the same.)

The XOR operator is denoted by this funny symbol: ⊕

This will come in handy later. This has been your basic introduction to crypto.
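A quick sketch of the above in Python, including exactly what goes wrong on key reuse (the messages are made up; the key cancels out of the XOR of the two ciphertexts):

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # XOR two equal-length byte strings.
    return bytes(d ^ k for d, k in zip(data, key))

p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"
key = os.urandom(len(p1))      # truly random, as long as the message

c1 = xor(p1, key)              # encrypt
assert xor(c1, key) == p1      # XOR with the key again: decrypted

c2 = xor(p2, key)              # the sin: same key, second message

# XOR the two ciphertexts and the key cancels out, leaving
# p1 XOR p2 -- the attacker learns this without ever seeing the key.
assert xor(c1, c2) == xor(p1, p2)
```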



The 3DS is a very, very smart system in terms of its encryption. It has a hardware keyscrambler as part of its AES module. Instead of putting the decryption keys in RAM and doing AES in software, the 3DS takes two inputs, keyX and keyY. It slaps them in a register of a custom hardware module of the 3DS. That hardware then does some complex magic, which nobody knows the details of, to produce a key. Once it does that, you feed it your encrypted content and it gives you the decrypted content. So, we see two random inputs, and then your encrypted blocks, and out pops the decrypted stuff. We never see the actual decryption key, much less how it's derived from the two inputs. The only way to figure out exactly what it's doing is to decap the chip and look at how the silicon lines connect together. There was a "chip decap" fundraiser for this, but it was run by a scammer and he ran off with thousands of dollars (lol homebrew community).

So, theoretically unbreakable... right? Let's take a look at how the savegames are encrypted: http://3dbrew.org/wiki/Savegames

quote:

On the 3DS savegames are stored much like on the DS, that is on a FLASH chip in the gamecart. On the DS these savegames were stored in plain-text but on the 3DS a layer of encryption was added. This is AES-CTR, as the contents of several savegames exhibit the odd behavior that xor-ing certain parts of the savegame together will result in the plain-text appearing.

The reason this works is because the stream cipher used has a period of 512 bytes. That is to say, it will repeat the same keystream after 512 bytes. The way you encrypt with a stream cipher is you XOR your data with the keystream as it is produced. Unfortunately, if your streamcipher repeats and you are encrypting a known plain-text (in our case, zeros) you are basically giving away your valuable keystream.

lol.

OK, so this is some complex stuff. Let's go over how AES works.

AES is what's known as a "block cipher". Basically, it takes some block of 16 bytes, and scrambles it together with a key to produce a new block.

Now let's apply this to a big file. The most obvious and naive thing to do is to simply split the file up into blocks of 16 bytes each, and then run AES over each of them. This is given the dumb fancy name "Electronic Codebook" or ECB and YOU SHOULD NEVER USE IT HOLY poo poo WHY DOES IT HAVE A COOL FANCY CYBERPUNK NAME. From Wikipedia:



Yeah, great encryption there buddy. Basically, since the keys are the same, and the plaintexts are the same, the ciphertext is going to be the same for every block that's the same.
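A sketch of the structural problem in Python. Note the "cipher" here is just a keyed hash standing in for AES, not a real block cipher; the point is purely that ECB maps identical plaintext blocks to identical ciphertext blocks:

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # NOT a real cipher -- a keyed-hash stand-in for a 16-byte block
    # cipher, enough to show the problem: same key + same block in,
    # same block out, deterministically.
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: chop into 16-byte blocks, encrypt each independently.
    blocks = [plaintext[i:i + 16] for i in range(0, len(plaintext), 16)]
    return b"".join(toy_block_encrypt(key, b) for b in blocks)

key = b"0123456789abcdef"
pt = b"SAME BLOCK HERE!" * 3        # three identical 16-byte blocks
ct = ecb_encrypt(key, pt)

# All three ciphertext blocks come out identical -- the penguin
# outline in the famous Wikipedia ECB image survives for exactly
# this reason.
assert ct[0:16] == ct[16:32] == ct[32:48]
```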

Here's the "flow" diagram of ECB, showing that all blocks are independent:



So, the smarter way to do it is to use "CBC", or "cipher block chaining". Basically, you take the output of the last block, XOR it together with the plaintext of the next block, and then scramble that together with your key. This is what you should use.



However, this still has a downside. The issue is that if you want random access into the file (I want to look at some portion in the middle), you have to decrypt the entire file up until then. Not optimal for savefiles, where you want to load and save from your SD card directly, rather than load the entire file into memory.

But that's OK. There's yet another mode called "CTR" which allows random access. Basically, instead of chaining blocks together, you feed a counter value into the AES as your plaintext, and then get a random block out. You then XOR that with your plaintext to create your ciphertext:



This is also really easy to do random seeking with. Divide the seek location by 16, and there's your block number, and then you feed that into AES.

Except... wait a minute. The XOR is after the AES? What? What's going on here...?

Oh... oh my. This is... it's a one-time pad!!! AES is just used to generate a key for your one-time pad! And since the counter is always changing, so is your key! Clever! So it's still unbreakable!

quote:

The reason this works is because the stream cipher used has a period of 512 bytes. That is to say, it will repeat the same keystream after 512 bytes.

WHAT! No no no no no no no Nintendo, no!!!! Jesus Christ, Nintendo! The counter resets to 0 after every 512 bytes, or every 32 blocks.
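A sketch of the savegame bug in Python, with a keyed hash standing in for the AES keystream generator (the wrap-at-32-blocks counter is the bug being described, and the mostly-zero savegame is what hands over the keystream):

```python
import hashlib

def keystream_block(key: bytes, counter: int) -> bytes:
    # Stand-in keystream generator; the real 3DS uses AES-CTR, but a
    # keyed hash of the counter has the same structure for this demo.
    return hashlib.sha256(key + counter.to_bytes(4, "big")).digest()[:16]

def broken_ctr_encrypt(key: bytes, plaintext: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(plaintext), 16):
        block_no = (i // 16) % 32   # THE BUG: counter wraps every 32 blocks = 512 bytes
        ks = keystream_block(key, block_no)
        out += bytes(p ^ k for p, k in zip(plaintext[i:i + 16], ks))
    return bytes(out)

key = b"savegame-key"

# Savegames are mostly zero bytes; zeros XOR keystream = keystream.
zeros = bytes(1024)
ct = broken_ctr_encrypt(key, zeros)
keystream = ct[:512]                 # recovered for free
assert ct[512:1024] == keystream     # period 512: it just repeats

# Any real data encrypted under the same repeating keystream is now open:
secret = b"PLAYER GOLD:9999" + bytes(1008)
ct2 = broken_ctr_encrypt(key, secret)
recovered = bytes(a ^ b for a, b in zip(ct2, keystream * 2))
assert recovered == secret
```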


Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Suspicious Dish posted:

Nintendo is a neverending source of "bad crypto" and I made some effortposts in the past about it.
Thank you for this excellent writeup! I'd always wondered what the deal with all the different modes was. Now I finally know enough to implement my own cryptosystem. :downs:

tef
May 30, 2004

-> some l-system crap ->

Plorkyeran posted:

It's actually hard to think of any major publicised fuckups that were the result of bad crypto, while there's countless examples from security problems at pretty much every other layer.

there's the perennial 'someone implemented crypto X and reused an IV/Nonce Y', like lots of RC4 impls, or EC-DSA (sony, bitcoin)

(if you re-use a random number, the bitcoin transaction you sign can leak your private key)

there's the "not using a secure equality function" ones where either there's a timing attack, or my favourite, where strcmp was used to check signatures, and yeah, well, null bytes broke everything hilariously badly (nintendo)
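A sketch of the strcmp class of bug, emulated in Python (the "signatures" are made-up bytes): strcmp stops at the first NUL byte, so a forged signature only has to match up to a NUL to be accepted.

```python
def strcmp_like_equal(a: bytes, b: bytes) -> bool:
    # Emulates C strcmp semantics: comparison stops at the first NUL
    # byte, as if both strings ended there.
    a = a.split(b"\x00", 1)[0]
    b = b.split(b"\x00", 1)[0]
    return a == b

expected_sig = b"\x00" + b"\x8f" * 19   # a signature that happens to start with NUL
forged_sig   = b"\x00" * 20             # attacker-supplied garbage

# A length-aware memcmp-style comparison correctly rejects the forgery...
assert expected_sig != forged_sig
# ...but strcmp treats both as the empty string and accepts it.
assert strcmp_like_equal(expected_sig, forged_sig)
```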

or that time every debian user had very bad gpg keys because someone broke the code

there's "my random number generator is poo poo" ones too, including a game show, but most of these can be handwaved as "bad crypto implementations" and not "bad crypto"


for bad crypto, you have things like dvd encryption, which spawned things like DeCSS, or PURPLE/ENIGMA for some less computery variants

but bad crypto rarely makes its way into bad crypto implementations; most academic stuff is built very slowly and very incrementally. there's a battery of standard tests and analysis, and crypto breaks are often theoretical before they're practical: there's usually enough time to move off the bad ideas before they fall apart

in the end the implementation errors usually happen way before the design errors

still there's good reasons we use tls 1.3 and not ssl 1.0

ExcessBLarg!
Sep 1, 2001

tef posted:

or that time every debian user had very bad gpg keys because someone broke the code
I've been defensive of the Debian maintainer that broke OpenSSL and, in light of Heartbleed, the whole situation makes more sense.

To recap the story, back in 2006 the Debian maintainer for the OpenSSL packages broke libssl's entropy pool in such a way that it was only being seeded by the current process ID (PID), and so only contained ~32000 unique values. This was discovered (or at least, made public) in 2008. Thus, for almost two years every private key generated with openssl, including SSH keys and X.509 certificate keys, came from a limited key set, which was quickly enumerated after the bug was made public. GPG keys were actually unaffected since GnuPG doesn't use OpenSSL due to licensing issues.
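A sketch of why a PID-only seed is fatal, in Python, with a toy key derivation standing in for OpenSSL's keygen: the entire keyspace is small enough to precompute, so any observed key can be matched by table lookup.

```python
import random

def derive_key(pid: int) -> bytes:
    # Toy stand-in for key generation whose only entropy is the PID:
    # seed a PRNG with the pid and read out 16 "key" bytes.
    rng = random.Random(pid)
    return bytes(rng.randrange(256) for _ in range(16))

# PIDs traditionally fit in [1, 32768), so the whole keyspace is
# enumerable in seconds:
all_keys = {derive_key(pid) for pid in range(1, 32768)}
print(len(all_keys))                  # at most 32767 distinct keys, ever

# An attacker who sees any victim's key finds it by lookup:
victim_key = derive_key(1337)         # hypothetical victim's PID
assert victim_key in all_keys
```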

The maintainer, of course, got a much-deserved public flogging, but at the time most people dismissed OpenSSL's role in the debacle.

The back story was that libssl was using, among other sources, reads from uninitialized memory to seed its entropy pool. The thought was that uninitialized memory might contain randomness, and even if it doesn't, using it as part of the seed wasn't harmful. The issue is that this is about the only "legitimate" reason to use uninitialized memory (which is to say that the practice is still quite dubious even if it isn't invalid). Thus, Valgrind and other programs that check for memory use errors were complaining when run on programs using libssl, or when run on programs that use other libraries that, internally, use libssl. This resulted in user complaints and spurious bug reports made to Debian due to misinterpretation of Valgrind's complaints of memory use errors.

So, the Debian OpenSSL maintainer was trying to do right by Debian users and reduce the volume of spurious reports by removing the part of libssl that used uninitialized memory. OpenSSL does (did?) have an #ifdef ("PURIFY") for this purpose, but it was undocumented and not obvious to the maintainer. He attempted to patch libssl himself and failed. However, before doing so, he posted to the openssl-dev mailing list with a plea to check his work and basically made an effort to do the right thing. But he didn't get a useful response, and so the rest is history.

Years later Heartbleed happens. In the aftermath we learn that OpenSSL was being maintained by just two guys who were overloaded and generally way in over their heads. Nobody externally had done a serious code review of the project, and when the OpenBSD/LibreSSL guys did, they found that, unsurprisingly, OpenSSL was buggy and full of poo poo code, and poo poo legacy code, for no particularly great reason. Personally I view the Debian situation as an early warning that the OpenSSL project had, at best, questionable management, one that came six years before Heartbleed really blew that open.

ExcessBLarg! fucked around with this message at 06:10 on Aug 27, 2016

eth0.n
Jun 1, 2012

ExcessBLarg! posted:

The issue is that this is about the only "legitimate" reason to use uninitialized memory (which is to say that the practice is still quite dubious even if it isn't invalid).

It is invalid. Reading uninitialized memory is undefined behavior, which means the compiler could do anything at all. Even assuming best intentions, compilers are allowed to assume that undefined behavior never happens, so if a sufficiently smart compiler detected that access to uninitialized memory was inevitable along one code path, it could optimize it away entirely, and could excise arbitrary amounts of legitimate entropy-gathering code along with it. You could reasonably end up with no entropy at all, after mixing in entropy tainted by undefined behavior.

This wasn't an issue in this case, but it's a potential time-bomb that should be avoided.

vOv
Feb 8, 2014

eth0.n posted:

It is invalid. Reading uninitialized memory is undefined behavior, which means the compiler could do anything at all. Even assuming best intentions, compilers are allowed to assume that undefined behavior never happens, so if a sufficiently smart compiler detected that access to uninitialized memory was inevitable along one code path, it could optimize it away entirely, and could excise arbitrary amounts of legitimate entropy-gathering code along with it. You could reasonably end up with no entropy at all, after mixing in entropy tainted by undefined behavior.

This wasn't an issue in this case, but it's a potential time-bomb that should be avoided.

No it's not. Reading from uninitialized memory returns an indeterminate value, which isn't the same as undefined behavior: you can treat the read as if there was any value stored in it (including a trap value), but you're not allowed to just optimize it away or format the hard disk or something.

eth0.n
Jun 1, 2012

vOv posted:

No it's not. Reading from uninitialized memory returns an indeterminate value, which isn't the same as undefined behavior: you can treat the read as if there was any value stored in it (including a trap value), but you're not allowed to just optimize it away or format the hard disk or something.

Ah, OK, misunderstood this.

Interestingly, it looks like it could be undefined behavior if implemented by using uninitialized local variables that don't have their address taken. I.e.,

C++ code:
char entropy[8];
add_entropy(&entropy);
is not undefined behavior, but:

C++ code:
long entropy;
add_entropy(entropy);
would be. I assume OpenSSL did the former, though.

Linear Zoetrope
Nov 28, 2011

A hero must cook
Still, undefined behavior and indeterminate values are, in practice, verifiable implementation details. It's entirely possible to productively rely on "undefined" behavior in a language as long as you're able to influence and/or keep abreast of implementation details in the environments your library is compiled or executed in. Sure, there's always the chance your library has a compatibility issue when compiled with Borland Turbo C or some other uncommon compiler, but if you're willing to work with the core development team of a library, compiler, OS, etc., you can get away with a lot of behavior other people can't.

I'm not sure OpenSSL or this specific example falls in this camp because it seems to have been horribly understaffed and mismanaged, but it's not entirely wrong to rely on undefined things as long as you know how to handle it and have a backup plan in case it needs to be changed for some reason. Hell, this is the reason GCC has several flags to disable certain optimizations (and security code often needs to be compiled with very selective optimizations anyway lest it "optimize" your code into something timing attack sensitive).

E: That said, never rely on undefined behavior in your own code unless you're doing something wonky for fun. I've personally worked on a few libraries that relied on not-strictly defined behavior in Go (low-level performance-sensitive numeric code that involved writing assembly, and OpenGL/CL bindings and corresponding math libraries that had data structures that could be sent to the GPU), but in those cases we had to be actively scanning for things that might affect it, and in a few cases had to outright ask the core developers whether certain bits of compiler behavior (mostly behavior regarding struct layout, alignment, field order, and such) were likely to change even if the spec is mum on them. In one case we (and most libraries dealing with FFI C code) had to rewrite some of the library to prevent certain behavior when changes needed to be made to improve the GC. This is more possible with newer languages with a single canonical compiler* and more communicative core devs like Rust or Go, though.

* Okay, Go has `go tool compile` and gccgo, technically, but very few people actually use gccgo for anything.

Linear Zoetrope fucked around with this message at 07:46 on Aug 27, 2016

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

vOv posted:

No it's not. Reading from uninitialized memory returns an indeterminate value, which isn't the same as undefined behavior: you can treat the read as if there was any value stored in it (including a trap value), but you're not allowed to just optimize it away or format the hard disk or something.

You absolutely can optimize it away. Trap values are not required to behave like a single consistent value.

tef
May 30, 2004

-> some l-system crap ->

ExcessBLarg! posted:

Personally I view the Debian situation as a early warning that the OpenSSL project had, at best, questionable management, and one that came six years before Heartbleed really blew that open.

OpenSSL may be a turd of a project but commenting out lines to make warnings go away with no oversight is beyond questionable management

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

xtal posted:

Can we see this?

I'd probably better not post it, even though it's worthless code we're looking to replace with a new system - it would reveal a bit too much about our systems if I posted it.

Basically it pops up a dialog box that tells the user "something is happening, please wait" and the lambda is a task being done by the dialog box that involves loading some rows from a database. If something throws in the lambda (like, say, because a stored procedure doesn't exist, because we're now pointing to a partially populated test database) it would quit and just load whatever rows had been encountered so far. All other rows get silently discarded!

I say "the lambda", it's now a regular method on the class that gets called in a one-liner lambda expression, because I refactored it when it became clear it was a pain in the rear end to debug.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man

Hammerite posted:

I'd probably better not post it, even though it's worthless code we're looking to replace with a new system - it would reveal a bit too much about our systems if I posted it.

Basically it pops up a dialog box that tells the user "something is happening, please wait" and the lambda is a task being done by the dialog box that involves loading some rows from a database. If something throws in the lambda (like, say, because a stored procedure doesn't exist, because we're now pointing to a partially populated test database) it would quit and just load whatever rows had been encountered so far. All other rows get silently discarded!

I say "the lambda", it's now a regular method on the class that gets called in a one-liner lambda expression, because I refactored it when it became clear it was a pain in the rear end to debug.

I just did this, but it was for a 300 or so line c++ lambda that was called precisely once, immediately after its definition was closed.

ExcessBLarg!
Sep 1, 2001

tef posted:

OpenSSL may be a turd of a project but commenting out lines to make warnings go away with no oversight is beyond questionable management
So when you get regular bug reports from angry users running Valgrind on their own programs, which complain about libssl reading from uninitialized memory, what would you do?

vOv
Feb 8, 2014

rjmccall posted:

You absolutely can optimize it away. Trap values are not required to behave like a single consistent value.

Hmm, I guess you're right. Reading from a trap representation is undefined, and the only type guaranteed not to have any trap representations is unsigned char (the code in question just used char).

Athas
Aug 6, 2007

fuck that joker

ExcessBLarg! posted:

So when you get regular bug reports from angry users running Valgrind on their own programs, which complains about libssl reading from uninitialized memory, what would you do?

Tell them not to use OpenSSL. It shouldn't be the job of the distributions to paper over such things. Take it to upstream, and use something else if upstream turns out to not care about quality.

Qwertycoatl
Dec 31, 2008

Compilers really will optimise that sort of thing away:
code:
int foo(int what, int x)
{
  int y;         // never initialized
  return y ^ x;  // indeterminate value: the compiler discards the whole expression
}

Assembly output posted:

foo(int, int): # @foo(int, int)
retq


ExcessBLarg!
Sep 1, 2001

Athas posted:

Tell them not to use OpenSSL. It shouldn't be the job of the distributions to paper over such things. Take it to upstream, and use something else if upstream turns out to not care about quality.
The Debian maintainer of the OpenSSL packages telling users to get lost really isn't viable. It is the job of a package maintainer to patch programs in the necessary ways to make them fit well with the rest of the distribution, which may include handling bugs upstream won't acknowledge. Debian might not do this as much as Ubuntu or Mint or something, but they still do it.

As for alternatives, there weren't many in 2006. The most capable alternative at the time was GnuTLS, which is its own can of worms.

The problem was worse than "don't use OpenSSL" though. People running Valgrind were interpreting the libssl reads as bugs, and so filing bug reports accordingly. So there's a squelch issue there. Furthermore it wasn't just affecting people who used OpenSSL directly, but any library that itself used OpenSSL.
