anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Absurd Alhazred posted:

Doesn't git also use SHA1? :smith:

wolrah posted:

Linus responded with his thoughts here: http://marc.info/?l=git&m=148787047422954

tl;dr:


Doing something like digest authentication would avoid this problem, but since most logins over HTTP(S) use HTML password fields rather than HTTP authentication, that would have required extra work, and I'm sure most people just assumed SSL would protect them.
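
Roughly the difference, as a sketch (example.com stands in for any real site):

code:
import requests
from requests.auth import HTTPDigestAuth

# A typical HTML login form posts the password itself in the request body, so any
# bug that leaks decrypted traffic leaks the password directly:
requests.post("https://example.com/login",
              data={"user": "alice", "password": "hunter2"})

# HTTP digest auth instead sends a hash of the password combined with a server
# nonce, so a leaked request doesn't hand over the password verbatim:
requests.get("https://example.com/protected",
             auth=HTTPDigestAuth("alice", "hunter2"))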

Absurd Alhazred
Mar 27, 2010

by Athanatos
I was thinking of the possibility of repo corruption, not data transfer security.

Max Facetime
Apr 18, 2009

did someone say docker??!?!

-* runs completely out-of-breath to the thread *-

-* collapses from the exertion *-

-* dies, and leaves this for loot: *-

[post attachment not shown]

Loving Africa Chaps
Dec 3, 2007


We had not left it yet, but when I would wake in the night, I would lie, listening, homesick for it already.

https://twitter.com/eBay/status/834417166313193472

:bravo:

Absurd Alhazred
Mar 27, 2010

by Athanatos

Guess it's time to stop using eBay.

spankmeister
Jun 15, 2008






Absurd Alhazred posted:

I was thinking of the possibility of repo corruption, not data transfer security.

Git signs commits, not individual files. Committing the colliding files will result in different hashes for both of them, because it also hashes some metadata along with the content of the commit, and this metadata will be different every time.

Getting a collision in the commit is much more difficult because you can't just randomly put data in there like you can with a PDF.

But don't take my word for it, read Linus' comments on it.
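
For reference, a sketch of what actually goes into a commit hash — the field values below are made up; a real one can be dumped with `git cat-file -p HEAD`:

code:
import hashlib

# Hypothetical commit body: tree and parent hashes, author/committer lines with
# timestamps, then the message. All of this feeds into the commit id.
body = (
    b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"
    b"parent 3b18e512dba79e4c8300dd08aeb37f8e728b8dad\n"
    b"author A U Thor <author@example.com> 1487980800 +0000\n"
    b"committer A U Thor <author@example.com> 1487980800 +0000\n"
    b"\n"
    b"add colliding file\n"
)

# Git hashes "commit <size>\0" + body, so identical file contents still produce
# different commit hashes whenever the metadata differs.
print(hashlib.sha1(b"commit %d\x00" % len(body) + body).hexdigest())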

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.

Arsenic Lupin posted:

Yeah, I was coming here to post that. WE HAVE KNOWN BETTER FOR loving DECADES. Does Cloudflare have code review? Is it entirely done by drunks?
ragel isn't made by cloudflare, the bug is in code it generates

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

zen death robot posted:

Yeah Lowtax was thinking we weren't affected and I just said it's impossible to rule it out and it's safer to assume everyone was affected and to tell ppl to change passwords. It's the responsible thing to do.

No reasonable person should think worse of us for warning users imo.

You could invalidate session cookies and force people to log in again. Only people who happened to actually log in during the impacted period could have had a password leaked, but anyone who browsed the forums at all would have had their auth cookie leaked.

If I log out but someone's stolen my cookie, can they still access the site? Or does logging out clear the session server-side? Different backends handle it differently; I have no idea how Radium set up SA's persistent logins.

E: not being as demanding and clarifying my Q

Harik fucked around with this message at 11:48 on Feb 25, 2017
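
For what it's worth, a toy sketch of a server-side session store where logging out does kill a stolen cookie — the names and in-memory storage here are hypothetical, and whether SA's backend works this way is exactly the open question above:

code:
import secrets

sessions = {}  # session id -> username; a real backend would use a database

def login(username: str) -> str:
    sid = secrets.token_hex(16)
    sessions[sid] = username
    return sid  # this value becomes the auth cookie

def logout(sid: str) -> None:
    # Forgetting the session server-side makes any stolen copy of the cookie useless.
    sessions.pop(sid, None)

def is_authenticated(sid: str) -> bool:
    return sid in sessions

sid = login("harik")
stolen = sid          # attacker captured the cookie via the leak
logout(sid)
print(is_authenticated(stolen))  # False: the stolen cookie no longer works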

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

spankmeister posted:

Git signs commits, not individual files. Committing the colliding files will result in different hashes for both of them, because it also hashes some metadata along with the content of the commit, and this metadata will be different every time.

Getting a collision in the commit is much more difficult because you can't just randomly put data in there like you can with a PDF.

But don't take my word for it, read Linus' comments on it.

git hashes blobs, trees, and commits

right now they don't collide because the type prefix on the files changes the preamble to the collision: http://stackoverflow.com/a/42435393

so somebody just has to figure out how to collide input strings to sha1 that are prefixed with "blob filesize\0"
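
A quick sketch of that preamble, which should line up with what `git hash-object` produces:

code:
import hashlib

def git_blob_sha1(data: bytes) -> str:
    # Git hashes "blob <size>\0" + contents, so two files that collide as raw SHA-1
    # inputs don't automatically collide as git blobs: the prefix shifts everything.
    return hashlib.sha1(b"blob %d\x00" % len(data) + data).hexdigest()

# Should match: echo hello | git hash-object --stdin
print(git_blob_sha1(b"hello\n"))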

spankmeister
Jun 15, 2008






Cocoa Crispies posted:

git hashes blobs, trees, and commits

right now they don't collide because the type prefix on the files changes the preamble to the collision: http://stackoverflow.com/a/42435393

so somebody just has to figure out how to collide input strings to sha1 that are prefixed with "blob filesize\0"

Yeah, but then you need to know the size of the colliding blocks beforehand, because when you add blocks the filesize changes, so the preamble changes, so the hash changes, so you need more/different colliding blocks, so the filesize changes, so the hash changes, etc.

Not impossible, just a lot harder.

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

spankmeister posted:

Yeah, but then you need to know the size of the colliding blocks beforehand, because when you add blocks the filesize changes, so the preamble changes, so the hash changes, so you need more/different colliding blocks, so the filesize changes, so the hash changes, etc.

Not impossible, just a lot harder.

I'm curious to know if the shattered.io team started with a fixed size, because if they did, it's fundamentally the same problem, just with a larger fixed header

NFX
Jun 2, 2008

Fun Shoe

Absurd Alhazred posted:

Guess it's time to stop using eBay.

they (and paypal) have been like this for a long time. troy hunt has yelled at them about it before. while searching for that one i came across this, which is even better: https://www.troyhunt.com/its-not-about-supporting-password/

quote:

We'd lose our security certificate if we allowed pasting. It could leave us open to a "brute force" attack. Thanks ^Steve

Dylan16807
May 12, 2010

spankmeister posted:

Yeah, but then you need to know the size of the colliding blocks beforehand, because when you add blocks the filesize changes, so the preamble changes, so the hash changes, so you need more/different colliding blocks, so the filesize changes, so the hash changes, etc.

Not impossible, just a lot harder.

you just set the filesize to 10KB or whatever. once you have the collision you add identical data to both files until the size is right.
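
That works because SHA-1 is a Merkle-Damgard construction: two equal-length messages that collide keep colliding when the same data is appended to both. A sketch, assuming you've downloaded the two proof-of-concept PDFs from shattered.io:

code:
import hashlib

a = open("shattered-1.pdf", "rb").read()
b = open("shattered-2.pdf", "rb").read()

# The files differ but share a SHA-1...
assert a != b
assert hashlib.sha1(a).hexdigest() == hashlib.sha1(b).hexdigest()

# ...and appending the same suffix to both preserves the collision, because the
# internal hash state is already identical by the end of the differing blocks.
suffix = b"\x00" * 10240
assert hashlib.sha1(a + suffix).hexdigest() == hashlib.sha1(b + suffix).hexdigest()
print("still colliding")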

spankmeister
Jun 15, 2008






Dylan16807 posted:

you just set the filesize to 10KB or whatever. once you have the collision you add identical data to both files until the size is right.

I don't think it quite works like that.

You start with file A and file B. You want to give file B the same hash as A, so you add blocks to B until it has the same hash. Now B has the same hash but is larger than A. You can now start to add the same data to A and B, but B will always be larger, or the number of "equal" blocks will be different to account for the size difference, and therefore the hash will be different.

A Pinball Wizard
Mar 23, 2005

I know every trick, no freak's gonna beat my hands

College Slice
https://mackeeper.com/blog/post/334-extensive-breach-at-intl-airport

quote:

In what should be considered a complete compromise of network integrity, New York’s Stewart International Airport was recently found exposing 760 gigs of backup data to the public internet. No password. No username. No authentication whatsoever.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

spankmeister posted:

I don't think it quite works like that.

You start with file A and file B. You want to give file B the same hash as A, so you add blocks to B until it has the same hash. Now B has the same hash but is larger than A. You can now start to add the same data to A and B, but B will always be larger, or the number of "equal" blocks will be different to account for the size difference, and therefore the hash will be different.

that's not how this attack works. the file size is the same, there is just a pair of blocks somewhere in the middle that differs between a and b.

minivanmegafun
Jul 27, 2004

why the gently caress does mackeeper run a blog. do they actually have competent people working on their garbage?

spankmeister
Jun 15, 2008






Jabor posted:

that's not how this attack works. the file size is the same, there is just a pair of blocks somewhere in the middle that differs between a and b.

oh right, that makes sense.

Cybernetic Vermin
Apr 18, 2005

Jabor posted:

that's not how this attack works. the file size is the same, there is just a pair of blocks somewhere in the middle that differs between a and b.

which makes sense since sha-1 also hashes the length of the input (as part of a padding suffix that always gets appended), so the attack no doubt gets unnecessarily messy if one tries to vary that bit at the same time

as such i don't think linus' argument makes sense at all; given what they had to do, it would have been no harder to achieve the same attack against git. granted there doesn't seem to be an obvious endgame to attacking git either, but it is just as broken as anything using sha-1.
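
The length-in-the-padding bit, sketched out (this is the standard SHA-1 padding rule, not anything git-specific):

code:
import struct

def sha1_padding(msg_len_bytes: int) -> bytes:
    # SHA-1 appends a 0x80 byte, enough zero bytes to reach 56 mod 64, then the
    # 64-bit message length in bits, so the input length is always hashed too.
    zeros = (56 - (msg_len_bytes + 1)) % 64
    return b"\x80" + b"\x00" * zeros + struct.pack(">Q", msg_len_bytes * 8)

# A padded message is always a whole number of 64-byte blocks:
msg = b"blob 10240\x00" + b"A" * 10240
print(len(msg + sha1_padding(len(msg))) % 64)  # 0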

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
i posted about this in some other thread, but git is mostly fine because, despite being a dvcs, there's usually a trusted central repository with restricted push access, and a repository won't accept new commits that have the same hash as an existing commit. so the only attacks are if: a project admin intentionally corrupts an existing commit; a project contributor exactly predicts an incoming commit and intentionally pre-collides it; or there's an intermediary repository vulnerable to those attacks that people regularly clone instead of the central one. those are serious concerns, but they do first assume that one of a small number of accounts is compromised, and the only killer one is when it's an admin. also it's not too hard to detect, because the actual source trees will hash differently, and not even a perfect pre-image attack will make them hash the same after an arbitrary patch is applied

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
obviously they should move to a new hash function, tho

FlapYoJacks
Feb 12, 2009

Jet fuel can't melt Buffalo Nas'

wolrah
May 8, 2006
what?

rjmccall posted:

i posted about this in some other thread, but git is mostly fine because, despite being a dvcs, there's usually a trusted central repository with restricted push access, and a repository won't accept new commits that have the same hash as an existing commit. so the only attacks are if: a project admin intentionally corrupts an existing commit; a project contributor exactly predicts an incoming commit and intentionally pre-collides it; or there's an intermediary repository vulnerable to those attacks that people regularly clone instead of the central one. those are serious concerns, but they do first assume that one of a small number of accounts is compromised, and the only killer one is when it's an admin. also it's not too hard to detect, because the actual source trees will hash differently, and not even a perfect pre-image attack will make them hash the same after an arbitrary patch is applied

The intermediary repository issue does come into play for targeted attacks; for example, if a company maintains their own local clone for update management and/or internal patches, then it's a very interesting target.

That said, after digging a bit into the architecture of Git I find myself more and more in agreement with even Linus' statements from years ago where he downplayed the importance of the algorithm almost altogether. As far as I've been able to figure, even if you have the ability to generate arbitrary collisions on demand, it'd still be nearly impossible to remain undetected for any extended period of time. I can't really come up with any way to make it work without being easily detected unless the attacker has both privileged access to the filesystem of the trusted repository and the ability to commit without review. Anyone who gets a clean copy of that commit before you make your changes is a liability as far as being discovered, and that most importantly includes the person who committed it. Even if you had the ability to generate arbitrary collisions instantly the moment a random commit goes through, you'd still have to destroy their copies if you wanted to get away with it long term.

The only even vaguely plausible situation I can come up with for this to work in a non-targeted manner requires being able to do arbitrary collisions in negligible time and maintaining your own mirror repository where you're redoing the attack on every single commit, so it looks like a legit mirror but isn't, and then getting that repository used in a tutorial, docker setup, etc.

I may be wrong in my understanding though, and of course Git should migrate to something else regardless, if only because it doesn't look good to be using a broken algorithm.

Arsenic Lupin
Apr 12, 2012

This particularly rapid💨 unintelligible 😖patter💁 isn't generally heard🧏‍♂️, and if it is🤔, it doesn't matter💁.


Malloc Voidstar posted:

ragel isn't made by cloudflare, the bug is in code it generates
Excellent point that I totally missed. You can't code-review generated code, and how many people are qualified to code-review the lexer to make sure it will never generate bad code? (That's a real question.)

Cybernetic Vermin
Apr 18, 2005

mostly though it is a rather subtle task to judge in what ways the assumption of hashes not colliding has snuck into the code. while the overall architecture may stand up reasonably well to collisions, i am not that confident in any sweeping judgement when it comes to a system and broader ecosystem most certainly not *engineered* to survive that scenario

Absurd Alhazred
Mar 27, 2010

by Athanatos

Cybernetic Vermin posted:

mostly though it is a rather subtle task to judge in what ways the assumption of hashes not colliding has snuck into the code. while the overall architecture may stand up reasonably well to collisions, i am not that confident in any sweeping judgement when it comes to a system and broader ecosystem most certainly not *engineered* to survive that scenario

Couldn't you just integration test while swapping in a "hashes everything to 4" function instead of the supposedly secure one?
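
Something like this, maybe — a toy monkeypatch, not how git's own test suite actually works:

code:
import hashlib
from unittest import mock

def constant_sha1(data=b""):
    # Stand-in "everything hashes to 4" function: every input gets the same digest.
    fake = mock.Mock()
    fake.hexdigest.return_value = "4" * 40
    fake.digest.return_value = b"\x44" * 20
    return fake

# Run the integration tests with the real hash swapped out and see which
# "hashes never collide" assumptions fall over:
with mock.patch("hashlib.sha1", constant_sha1):
    print(hashlib.sha1(b"anything").hexdigest())  # '444...4'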

Migishu
Oct 22, 2005

I'll eat your fucking eyeballs if you're not careful

Grimey Drawer

:sloppy:

wolrah
May 8, 2006
what?
Linus just posted a more detailed followup regarding Git and SHA1: https://plus.google.com/+LinusTorvalds/posts/7tp2gYWQugL

pseudorandom name
May 6, 2007

huh. that SHA-1 variant that detects collisions and just hashes it some more is interesting

Raere
Dec 13, 2007

git is overcomplicated and overengineered, I'm surprised that it uses sha1

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

Linoos posted:

But if you use git for source control like in the kernel, the stuff you really care about is source code, which is very much a transparent medium. If somebody inserts random odd generated crud in the middle of your source code, you will absolutely notice.

uh huh

pseudorandom name
May 6, 2007

tbf git is completely unsuited for the storage of anything besides plain text

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

the bold part was in his original btw

redleader
Aug 18, 2005

Engage according to operational parameters
i'm not surprised that ragel was the origin of buttbleed. it's an unreadable garbage language that compiles to garbage c

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

i would really like to know what the honest internal feelings are of the cloudflare guy who raged at tavis about being blocked lol

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

quote:

That may be a bit of a red herring though and only part of the puzzle. Within the backups I was able to locate an email chain indicating that AvPORTS purchased at least one Buffalo Terastation backup NAS device in March of 2016.

Those of you that keep up with my work may recall this same make and model of NAS device being at the center of a recently reported Ameriprise Financial data breach. In fact, I have made several other recent breach findings involving this particular device.

My hypothesis is that there may be a default opening of port 873 on some number of Buffalo Terastations. Keep in mind that port 873 had been intentionally opened on Stewart International’s firewall during part of the experiment with ShadowProtect.


Paging Larches

ratbert90 posted:

Jet fuel can't melt Buffalo Nas'

moron izzard
Nov 17, 2006

Grimey Drawer
did the deadline for requiring stores to use chip readers in the US get pushed back? because I still see places of varying size (including cinemark) with a chunk of cardboard stuck up the chip reader slot saying "swipe instead"

susan b buffering
Nov 14, 2016

A Yolo Wizard posted:

did the deadline for requiring stores to use chip readers in the US get pushed back? because I still see places of varying size (including cinemark) with a chunk of cardboard stuck up the chip reader slot saying "swipe instead"

the deadline got pushed back in december iirc

Daman
Oct 28, 2011
hey has anyone used splunks universal forwarder as an alternative to expensive endpoint security poo poo (carbon black)?

it says it can log new processes, services, logins, runkeys, etc which is probably enough to detect if an endpoint got owned.

is this good enough? budget is $0, and there's like no trail for these things in the corporation at present. only other things I can think to do is run LimaCharlie or Eljefe on hosts, and that would only serve to tell us they did double click the exe they downloaded, or the webapp on this server was popped because a process spawned as a child of php-fpm. it would also serve to make another server exist, to promptly break when I'm not there to babysit the company in a few months... idk how robust those are


kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Daman posted:

hey has anyone used splunks universal forwarder as an alternative to expensive endpoint security poo poo (carbon black)?

it says it can log new processes, services, logins, runkeys, etc which is probably enough to detect if an endpoint got owned.

is this good enough? budget is $0, and there's like no trail for these things in the corporation at present. only other things I can think to do is run LimaCharlie or Eljefe on hosts, and that would only serve to tell us they did double click the exe they downloaded, or the webapp on this server was popped because a process spawned as a child of php-fpm. it would also serve to make another server exist, to promptly break when I'm not there to babysit the company in a few months... idk how robust those are

have fun with your 500mb/day ingestion limit!

our ops dude set up splunk to do audit logging of AD alone and we blew past 5gb/day with just AD lol
