nielsm
Jun 1, 2009



Knyteguy posted:

flip = !flip;

Thanks for the descriptive global variable that sets itself seemingly randomly.

You really need to encapsulate that logic.

C++ code:
#include <cstdio>

class {
  bool f;  // zero-initialized to false, since flip has static storage duration
public:
  operator bool () const { return f; }    // peek at the flag
  bool operator () () { return f = !f; }  // toggle and return the new value
} flip;

int main() {
  if (flip) { printf("butt"); }
  if (flip()) { printf("hole"); }
}


QuarkJets
Sep 8, 2008

Sinestro posted:

What if the algorithm that you need is well-described in literature but there's no implementation available for the language that you have to use? Are you supposed to just throw up your hands and walk away until a cryptography übermensch walks by and magics it away?

Stop writing PHP-based cryptography, I guess

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

One of the things that makes crypto trickier than most software development is that it's pretty hard to test. Producing expected output, not crashing, not leaking memory, having good performance...those aren't the characteristics that matter. Does your key generation produce something strong enough? Does the cipher mode you've chosen match the data model? Is there a collision attack that can subvert your verification? And if you find a bug like those, what does your test suite look like, such that it prevents you from picking another collision-prone hash in the future?

There's no general principle of software development competence that tells you "RC4 is bad". If you choose an inefficient search algorithm or cache implementation, you get degraded performance, but no loss of correctness. If you choose a weak cryptosystem you lose correctness, and the whole point of encrypting data is that it's important to keep secret, so the stakes are always high.

I worked for a while at a company whose product was entirely built around crypto, with multiple published cryptographers, and we got it wrong. I've worked with teams responsible for TLS for hundreds of millions of users on client and server, and we got it wrong. Implementation error is a very high risk, because the interactions between the primitives are basically impossible to reason about deeply and thoroughly, and the effects of subtle errors are so serious. If my hash function is biased for a map implementation, I get unbalanced buckets. If it's biased as part of a crypto flow, I am unlikely to be able to tell, and the effects could be severe.

The right approach for anyone who is not actively working on a necessary-because-not-existing crypto tool is to work at a higher level of abstraction as suggested earlier. Use something that gives you "secure stream"; don't roll your own key management to feed into OpenSSL (which itself has too sharp an API, IMO). Use a library to deal with data at rest, or better, the facilities of a filesystem or database that has that support baked in. You will want to do your own "simpler" thing, because you don't need all the capabilities of whatever thing exists and you'd like to improve performance. That is the devil talking, waiting to post to full-disclosure@ with ASCII art in the footer. You'll get it wrong, and probably always worry that you did (if you're not a loser) unless you have Thomas Ptacek or such auditing you. And the first thing they'll tell you is "don't roll your own, idiot".
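To make "work at a higher level of abstraction" concrete: a rough sketch of what that looks like with libsodium (my example choice of high-level library; treat this as an illustration, not vetted reference code). One call for authenticated secret-key encryption, one to verify and decrypt, and the library picks the primitives and checks the MAC for you.

code:
#include <sodium.h>
#include <stdio.h>

int main(void) {
    if (sodium_init() < 0) return 1;               /* library must be initialized once */

    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    crypto_secretbox_keygen(key);                  /* random key */
    randombytes_buf(nonce, sizeof nonce);          /* fresh nonce for every message */

    const unsigned char msg[] = "attack at dawn";
    unsigned char ct[crypto_secretbox_MACBYTES + sizeof msg];
    crypto_secretbox_easy(ct, msg, sizeof msg, nonce, key);

    unsigned char out[sizeof msg];
    if (crypto_secretbox_open_easy(out, ct, sizeof ct, nonce, key) != 0) {
        fprintf(stderr, "forged or corrupted ciphertext\n");   /* tampering is detected, not ignored */
        return 1;
    }
    printf("%s\n", out);
    return 0;
}
No cipher modes, no padding, no hand-rolled MAC-then-encrypt decisions to get backwards.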

If someone proposes to roll their own in your place of work, demand to see their cypherpunks@ posting history. Also their Matasano crypto challenge results, and a blood test. (If they aren't familiar with both, and can't tell you which parts of Applied Cryptography's advice no longer apply, have security escort them from the building.)

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

QuarkJets posted:

Stop writing PHP-based cryptography, I guess

There are implementations of basically everything in C, so people can write extensions to get to it.

But ask yourself "why do I want to use crypto?" If the answer is "protect data against $attack", ask how you'll be confident that what you implemented actually did so.

Dylan16807
May 12, 2010

Sinestro posted:

What if the algorithm that you need is well-described in literature but there's no implementation available for the language that you have to use? Are you supposed to just throw up your hands and walk away until a cryptography übermensch walks by and magics it away?

Is it being used in a context where it's fed private data that it could leak through timing attacks? If so, terrible idea to implement it yourself. If not, you can likely handle it, but put plenty of warning labels on it.
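To be concrete about the timing-attack part, a toy sketch (not audited; a real library's CRYPTO_memcmp or sodium_memcmp is what you'd actually reach for): the naive comparison bails out at the first mismatching byte, so response time tells an attacker how much of their guess was right, while the constant-time version touches every byte no matter what.

code:
#include <stddef.h>
#include <string.h>

/* Leaky: memcmp stops at the first differing byte, so timing reveals
 * how many leading bytes of the attacker's guess were correct. */
int check_mac_leaky(const unsigned char *a, const unsigned char *b, size_t n) {
    return memcmp(a, b, n) == 0;
}

/* Constant-time-ish: always scans all n bytes and only inspects the
 * accumulated difference at the end. (A sufficiently clever compiler
 * can still ruin your day, which is rather the point of this thread.) */
int check_mac_ct(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}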

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

Sinestro posted:

What if the algorithm that you need is well-described in literature but there's no implementation available for the language that you have to use? Are you supposed to just throw up your hands and walk away until a cryptography übermensch walks by and magics it away?

Good sign that you've chosen the wrong language for your task

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Sinestro posted:

What if the algorithm that you need is well-described in literature but there's no implementation available for the language that you have to use? Are you supposed to just throw up your hands and walk away until a cryptography übermensch walks by and magics it away?

Stop using whatever shitty toy programming language you're using that can't call C libraries?

Karate Bastard
Jul 31, 2007

Soiled Meat

Sinestro posted:

What if the algorithm that you need is well-described in literature but there's no implementation available for the language that you have to use? Are you supposed to just throw up your hands and walk away until a cryptography übermensch walks by and magics it away?

If you are looking for a fun pastime or to improve humanity? Implement it, post it, don't use it. Otherwise, take the advice of the gents above.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Post it and tell people you're using it for something where people will die if it's insecure; wait for unsolicited security audits to roll in.

Karate Bastard
Jul 31, 2007

Soiled Meat
No, people* give zero shits about life and death. Post that you're providing "scatologically impervious microtransactions" or "trusted arbitrage microsecond trading", or "guild messaging" or something interesting. While you're really channelling /dev/random or dickbutt.jpg.

*shitlords.

Deus Rex
Mar 5, 2005

Knyteguy posted:

flip = !flip;

Thanks for the descriptive global variable that sets itself seemingly randomly.

Huh? It's just inverting the flag that indicates a person's Pinoy ancestry.

evensevenone
May 12, 2001
Glass is a solid.
While we're at it can the Serious Crypto People stop writing crypto libraries in C? I mean C is fun and all, but basically 100% of major security issues this year have been due to the typical trivial C fuckups that never occur in other languages.

I mean they'll probably all switch to Haskell or something, but whatever.

Progressive JPEG
Feb 19, 2003

evensevenone posted:

While we're at it can the Serious Crypto People stop writing crypto libraries in C? I mean C is fun and all, but basically 100% of major security issues this year have been due to the typical trivial C fuckups that never occur in other languages.

I mean they'll probably all switch to Haskell or something, but whatever.

The Heartbleed stuff would've been quickly discovered in a memory debugger (or via periodic segfaults) if OpenSSL hadn't decided to implement their own bespoke memory pool. The same memory pool strategy is frequently used even in "safe" GCed languages where the programmer is worried about triggering the GC too often, but OpenSSL depended on "freed" data still being accessible (!!) in this pool, and therefore blew up if anyone attempted to disable the pool to verify correct behavior. Meanwhile it was this same memory pool that kept data around for access by the bug, instead of giving it back to the system, which would've made the bug cause segfaults rather than consistent data exposure. Heartbleed was far beyond a typical trivial C fuckup -- their problem was systematic and reflects OpenSSL's horseshit codebase and development model. The vulnerability would've worked regardless of the language.
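To illustrate the freelist problem with a toy sketch (this is not OpenSSL's actual allocator, just the shape of the failure): a pool that recycles buffers without scrubbing them hands stale secrets straight back, and because the memory is never really returned, Valgrind/ASan see nothing wrong.

code:
#include <stdio.h>
#include <string.h>

/* Toy fixed-size pool: "freeing" just pushes the slot index onto a freelist,
 * so the old contents survive until something overwrites them. */
#define SLOT_SIZE 64
#define SLOTS     8

static char pool[SLOTS][SLOT_SIZE];
static int  freelist[SLOTS];
static int  top;

static void  pool_init(void)    { for (int i = 0; i < SLOTS; i++) freelist[top++] = i; }
static char *pool_alloc(void)   { return top ? pool[freelist[--top]] : NULL; }
static void  pool_free(char *p) { freelist[top++] = (int)((p - (char *)pool) / SLOT_SIZE); }

int main(void) {
    pool_init();

    char *a = pool_alloc();
    strcpy(a, "-----BEGIN RSA PRIVATE KEY-----");
    pool_free(a);                       /* no scrubbing on "free" */

    char *b = pool_alloc();             /* the same slot comes straight back */
    printf("recycled buffer still contains: %s\n", b);   /* stale secret, no crash */
    return 0;
}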

Shellshock meanwhile was effectively a parser implementation error, not a buffer overrun.

But all this nitpicking aside I do feel like the world would be a better place if glibc's default malloc was more like OpenBSD's.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Shellshock was ultimately just the danger of obscure clever features rather than a bug. The intended functionality is already such a major security vulnerability that the actual exploits found were pretty irrelevant.

There was nothing C-specific about goto fail. I don't know of any languages where it's impossible to cause a major bug by accidentally duplicating a line.

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."

Plorkyeran posted:

I don't know of any languages where it's impossible to cause a major bug by accidentally duplicating a line.

In anything with statement numbers you'd either get a compiler warning or the duplicated line would have no effect. :v:

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Is it even possible to write a serious crypto lib in non-unsafe Haskell? The entire design of the language kind of falls apart once you consider CPU time spent to be an observable side effect.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Internet Janitor posted:

In anything with statement numbers you'd either get a compiler warning or the duplicated line would have no effect. :v:
Good point! Clearly we should do all crypto work in GW-BASIC.

Could still be foiled by an overly-helpful editor that automatically renumbers lines, though.

Centripetal Horse
Nov 22, 2009

Fuck money, get GBS

This could have bought you a half a tank of gas, lmfao -
Love, gromdul

Yeah, the person used a complete word that has some vague association with the variable's purpose. That would fly right under my horror radar.

evensevenone
May 12, 2001
Glass is a solid.

Plorkyeran posted:

Shellshock was ultimately just the danger of obscure clever features rather than a bug. The intended functionality is already such a major security vulnerability that the actual exploits found were pretty irrelevant.

There was nothing C-specific about goto fail. I don't know of any languages where it's impossible to cause a major bug by accidentally duplicating a line.

Well, the first shellshock wasn't a C thing, but after looking at bash for like a day, they discovered 3 more serious vulnerabilities, 2 of which were buffer overflows.

Goto fail was absolutely a C thing. That design pattern (goto fail, and return values to indicate success/fail) is used because C doesn't have exceptions, and the duplicated line was executed because if statements are allowed to take a single unbraced statement. Which is cute, but dumb.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

evensevenone posted:

Well, the first shellshock wasn't a C thing, but after looking at bash for like a day, they discovered 3 more serious vulnerabilities, 2 of which were buffer overflows.

Goto fail was absolutely a C thing. That design pattern (goto fail, and return values to indicate success/fail) is used because C doesn't have exceptions, and the duplicated line was executed because if statements are allowed to take a single unbraced statement. Which is cute, but dumb.

If exceptions weren't literally the worst part about programming with modern languages I'd agree with you.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

evensevenone posted:

Well, the first shellshock wasn't a C thing, but after looking at bash for like a day, they discovered 3 more serious vulnerabilities, 2 of which were buffer overflows.
Even with zero bugs in the implementation, passing user-controlled environment variables to bash was an inherently unsafe thing to do. A code-execution exploit in your thing designed to execute code is not an escalation of privileges.

evensevenone posted:

Goto fail was absolutely a C thing. That design pattern (goto fail, and return values to indicate success/fail) is used because C doesn't have exceptions, and the duplicated line was executed because if statements are allowed to take a single unbraced statement. Which is cute, but dumb.
There would be literally zero difference in what would have happened if the goto statement had instead been a throw statement, and given that we don't know how the duplicated line came about, we really can't say that it would not have ended up outside the braces had the if been braced. There are certainly languages where it'd be less likely to result in an actual bug, but resilience against editor or merge fuckups is a very strange criterion to judge the security of languages on.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

Plorkyeran posted:

Even with zero bugs in the implementation, passing user-controlled environment variables to bash was an inherently unsafe thing to do. A code-execution exploit in your thing designed to execute code is not an escalation of privileges.

There would be literally zero difference in what would have happened if the goto statement had instead been a throw statement, and given that we don't know how the duplicated line came about, we really can't say that it would not have ended up outside the braces had the if been braced. There are certainly languages where it'd be less likely to result in an actual bug, but resilience against editor or merge fuckups is a very strange criterion to judge the security of languages on.

Are you sure about this? I'll admit to just reading about this bug now, but from what I can gather:

code:
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    ...
 
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...
 
fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}
It's not just the errant extra "goto fail;", it's that SSLHashSHA1.update will probably pass and return an error code of 0 (success), which is subsequently sent back to the caller at the "return err;" line. So even if SSLHashSHA1.final would have failed, it won't be reported.

If you replace this with exceptions you'll either be catching and re-throwing the ones from update, final, etc. or you'll let them propagate up from those functions. And even if that wasn't the case somehow, you wouldn't have the situation where throwing an exception appears to be success to the caller; you'd discover the bug the first time you ran the code. The only way it could be the same is if you just used throw-catch to replace goto-fail, but still used error codes as function return values. Which doesn't make much sense to me.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Progressive JPEG posted:

Meanwhile it was this same memory pool that kept data around for access by the bug, instead of giving it back to the system, which would've made the bug cause segfaults rather than consistent data exposure.

Freeing data on the heap doesn't necessarily mean freeing pages. I actually don't know what happens if you access free'd heap memory that's within a still-mapped page.

For secure memory, you actually need to implement your own memory pool, because you need to call mlock on all pages you allocate. Using glibc's standard heap implementation means that the pages can be swapped out to disk, and as far as I know there's no way to have it mlock on every page it maps.
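Roughly what that looks like in practice (a Linux-flavored sketch with most error handling elided, not production code): get whole pages from the kernel, mlock them so they can't hit swap, and scrub them before handing them back.

code:
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Allocate len bytes of page-backed memory pinned in RAM. */
static void *secure_alloc(size_t len) {
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return NULL;
    if (mlock(p, len) != 0) {           /* pinning can fail if RLIMIT_MEMLOCK is low */
        munmap(p, len);
        return NULL;
    }
    return p;
}

/* Scrub, unpin, release. */
static void secure_free(void *p, size_t len) {
    if (!p) return;
    memset(p, 0, len);                  /* best effort; see the zeroing caveats further down-thread */
    munlock(p, len);
    munmap(p, len);
}

int main(void) {
    unsigned char *key = secure_alloc(4096);
    if (!key) { perror("secure_alloc"); return 1; }
    /* ... derive and use the key ... */
    secure_free(key, 4096);
    return 0;
}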

sarehu
Apr 20, 2007

(call/cc call/cc)

Suspicious Dish posted:

Freeing data on the heap doesn't necessarily mean freeing pages. I actually don't know what happens if you access free'd heap memory that's within a still-mapped page.

Nothing happens. Maybe you corrupt the heap, if an allocation happened between the free and the write. Or eventually something gets allocated on top of what you wrote.

Valgrind would catch the error though.

Athas
Aug 6, 2007

fuck that joker

Plorkyeran posted:

Is it even possible to write a serious crypto lib in non-unsafe Haskell? The entire design of the language kind of falls apart once you consider CPU time spent to be an observable side effect.

Right, don't write crypto in Haskell. Most high-level languages are probably going to be vulnerable to side channel attacks. Also, don't store keys or the like in memory that you cannot zero as soon as you are done with it. (This also means "don't store keys in movable memory, such as many GC'ed heaps".) Most crypto is quite simple in terms of data structures, so there is really no point in implementing the algorithms in languages more expressive than C anyway.

It depends on how paranoid you are, of course.

Vanadium
Jan 8, 2005

Pure C is probably too high level too, who knows whether the compiler won't optimize some zeroing of memory away or will optimize some "temporary" copies into existence??

sarehu
Apr 20, 2007

(call/cc call/cc)
Yes, see http://www.daemonology.net/blog/2014-09-04-how-to-zero-a-buffer.html and then http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-insufficient.html. I wouldn't be able to tell you how far those posts descend into spergatory, or if they enter it at all.
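The workaround the first post lands on is roughly the volatile-function-pointer trick below (sketch only; where they exist, explicit_bzero() or C11's optional memset_s() are the less fragile choices, and the second post explains why even this isn't the whole story).

code:
#include <string.h>

/* The compiler can't prove this pointer always points at memset,
 * so it can't delete the "dead" store that wipes a dying buffer. */
static void *(*volatile memset_v)(void *, int, size_t) = memset;

static void secure_zero(void *p, size_t n) {
    memset_v(p, 0, n);
}

void handle_password(void) {
    char pw[128];
    /* ... read and verify the password ... */
    secure_zero(pw, sizeof pw);   /* a plain memset here is frequently optimized away */
}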

shodanjr_gr
Nov 20, 2007

sarehu posted:

Yes, see http://www.daemonology.net/blog/2014-09-04-how-to-zero-a-buffer.html and then http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-insufficient.html. I wouldn't be able to tell you how far those posts descend into spergatory, or if they enter it at all.

quote:

(I know that at least one developer, when confronted by this problem, decided to sanitize his stack by zeroing until he triggered a page fault — but that is an extreme solution, and is both non-portable and very clear C "undefined behaviour".)

:chanpop:

feedmegin
Jul 30, 2008


Uh, won't zeroing the stack until a page fault on a modern operating system basically just make the kernel allocate a shit-ton of RAM to the process until the stack hits the heap or the system runs out of RAM, whichever comes first? Generally there isn't a syscall or anything to extend the stack, it's something the kernel handles automagically.

(And assuming a single-threaded process and 64 bits, the stack and the heap are likely to be a long way apart...)

zergstain
Dec 15, 2005

I'm pretty certain stack size is set at link time. My ld man page says the default is 8MB.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
It's specified by RLIMIT_STACK. POSIX even says:

quote:

This is the maximum size of the initial thread's stack, in bytes. The implementation does not automatically grow the stack beyond this limit. If this limit is exceeded, SIGSEGV shall be generated for the thread.
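If you're curious what the limit actually is on your box, it's a one-liner to ask (POSIX getrlimit; prints RLIM_INFINITY as a huge number if the limit is unset):

code:
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) { perror("getrlimit"); return 1; }
    /* rlim_cur is the soft limit actually enforced; rlim_max is the ceiling
     * an unprivileged process may raise the soft limit to via setrlimit(). */
    printf("stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    printf("stack hard limit: %llu bytes\n", (unsigned long long)rl.rlim_max);
    return 0;
}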

b0lt
Apr 29, 2005

feedmegin posted:

Uh, won't zeroing the stack until a page fault on a modern operating system basically just make the kernel allocate a shit-ton of RAM to the process until the stack hits the heap or the system runs out of RAM, whichever comes first? Generally there isn't a syscall or anything to extend the stack, it's something the kernel handles automagically.

(And assuming a single-threaded process and 64 bits, the stack and the heap are likely to be a long way apart...)

Pretty much every implementation will have a PROT_NONE guard page at the end of each stack.
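Toy demo of the mechanism, if anyone wants to poke at it (Linux-ish mmap/mprotect; real thread stacks are set up by the kernel or pthreads, not like this): the PROT_NONE page makes a runaway write fault instead of silently marching into whatever happens to be mapped next.

code:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t len  = 16 * page;

    /* Reserve a fake "stack" region and turn its last page into a guard page. */
    unsigned char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }
    if (mprotect(region + len - page, page, PROT_NONE) != 0) { perror("mprotect"); return 1; }

    memset(region, 0, len - page);      /* fine: stops short of the guard page */
    printf("zeroed right up to the guard page\n");

    /* region[len - page] = 0;          <- uncomment to eat a SIGSEGV */
    munmap(region, len);
    return 0;
}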

JawnV6
Jul 4, 2004

So hot ...

feedmegin posted:

(And assuming a single-threaded process and 64 bits, the stack and the heap are likely to be a long way apart...)

Not as far as you'd think. Canonical addresses shrink the space way down, you'll hit a GP# with a naive traversal.

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



Afaik zeroing memory is fine as long as it's already allocated to the process, so there is a certain sense in it. Once you touch memory outside that, you're lucky if all you get is a soft crash (assuming old OSes).

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

feedmegin posted:

Uh, won't zeroing the stack until a page fault on a modern operating system basically just make the kernel allocate a shit-ton of RAM to the process until the stack hits the heap or the system runs out of RAM, whichever comes first? Generally there isn't a syscall or anything to extend the stack, it's something the kernel handles automagically.

On Windows, you'd hit the guard page at the end of the stack, get your guard page exception, and stop writing. Then everything would work fine until your stack needed to legitimately grow into that space, at which point it would just segfault because you used up the guard page the first time, and so the allocator wouldn't realize that it needs to allocate more stack...

Space Whale
Nov 6, 2014
Excuse my ignorance, but:

Wouldn't the OS itself, in serving up virtual memory, be able to (and better placed to) zero stuff out, if security is that important?

I thought almost no applications ever saw memory without going through redirection from the OS anyway.

shodanjr_gr
Nov 20, 2007

Space Whale posted:

Excuse my ignorance, but:

Wouldn't the OS itself, in serving up virtual memory, be able to (and better placed to) zero stuff out, if security is that important?

I thought almost no applications ever saw memory without going through redirection from the OS anyway.

This whole discussion concerns exploit mitigation within a single process. When you get memory from the OS (e.g. when malloc has to request fresh pages) the page of memory is zeroed out anyway (so that no information leaks between processes).

My understanding, however, is that freeing some random memory via free() on a pointer will not necessarily lead to the OS kernel reclaiming the page (and zeroing it out). Consequently, if someone manages to exploit some other bug in your application, they can use that to potentially read remnants of sensitive stuff (e.g. crypto keys, strings describing favorite types of pornography, etc).

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
So memory is virtual. Each process gets an entire address space to fool around with. Two programs can have different mappings for memory, and these mappings don't have to be mapped directly to RAM. You can say "this memory address is on disk" (swapped out) or "this memory address doesn't actually exist" (unallocated). The kernel keeps this mapping for every single process, and when it switches between processes, it pokes something known as the "page tables" through a variety of internal CPU functions which sets up the memory mappings currently in use. So, the page tables say "this address points to this RAM address" or "this address is non-existent" or similar.

For efficiency reasons, it doesn't actually tell it every single address. It tells it in 4k chunks known as "pages". So you specify "the 4096-8192 range is mapped to this space in physical memory". And that's the lowest memory allocation you can do: 4k.

The C runtime has more granular memory allocation functions, accessed through malloc and free. The C runtime sets up a data structure known as a "heap", and allows you to make very small allocations. The kernel doesn't know anything about these, it just knows about the 4k chunks that the process asked for. When you've freed up enough so that you don't need the 4k page anymore, you can simply tell the kernel about that.

There are two questions:

1. What's in the contents of a new page (4k region that's mapped into my process) when I get it from the kernel? Since the kernel might recycle RAM that was used by other processes, it will zero it first to make sure no private data from another process leaks into it.

2. Will the C runtime make sure to zero out contents when I free stuff from its internal heap? The answer is most likely "no".
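A sketch of both points, if you want to convince yourself (Linux-flavored; heap behavior is allocator-dependent, so don't rely on it): a page fresh from the kernel arrives zeroed, while free() makes no promise at all about the bytes you put in a heap chunk.

code:
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* 1. A brand-new anonymous page from the kernel is zero-filled,
     *    precisely so nothing leaks between processes. */
    size_t len = 4096;
    unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }
    for (size_t i = 0; i < len; i++)
        assert(page[i] == 0);
    printf("fresh page from the kernel: all zeroes\n");
    munmap(page, len);

    /* 2. The C heap makes no such promise: free() just hands the chunk back
     *    to the allocator's bookkeeping, contents intact. Scrub sensitive
     *    buffers yourself before freeing them. */
    char *secret = malloc(64);
    if (!secret) return 1;
    strcpy(secret, "hunter2");
    memset(secret, 0, 64);              /* before free(), not after */
    free(secret);
    return 0;
}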

b0lt
Apr 29, 2005

Suspicious Dish posted:

1. What's in the contents of a new page (4k region that's mapped into my process) when I get it from the kernel? Since the kernel might recycle RAM that was used by other processes, it will zero it first to make sure no private data from another process leaks into it.

Also, the kernel won't zero out a released page immediately; it'll add it to a list of pages to zero out when there are spare cycles. If there's no memory pressure and there are processes busy doing stuff, you could hypothetically do a cold boot attack and recover data from pages that were unmapped but not explicitly zeroed.


Evil_Greven
Feb 20, 2007

Whadda I got to,
whadda I got to do
to wake ya up?

To shake ya up,
to break the structure up!?
How about a security horror to send off 2014?

quote:

The Internet Systems Consortium (ISC) has taken the site down for maintenance because they "believe we may be infected with malware."
