|
Absurd Alhazred posted:Doesn't git also use SHA1? wolrah posted:Linus responded with his thoughts here: http://marc.info/?l=git&m=148787047422954
|
# ? Feb 25, 2017 09:13 |
|
|
|
I was thinking of the possibility of repo corruption, not data transfer security.
|
# ? Feb 25, 2017 09:20 |
|
did someone say docker??!?! -* runs completely out-of-breath to the thread *- -* collapses from the exertion *- -* dies, and leaves this for loot: *-
|
# ? Feb 25, 2017 10:09 |
|
https://twitter.com/eBay/status/834417166313193472
|
# ? Feb 25, 2017 10:22 |
|
Guess it's time to stop using eBay.
|
# ? Feb 25, 2017 10:23 |
|
Absurd Alhazred posted:I was thinking of the possibility of repo corruption, not data transfer security. Git signs commits, not individual files. Committing the colliding files will result in different hashes for both of them because it also hashes some metadata along with the content of the commit, and this metadata will be different every time. Getting a collision in the commit is much more difficult because you can't just randomly put data in there like with a PDF. But don't take my word for it, read Linus' comments on it.
|
# ? Feb 25, 2017 10:41 |
|
Arsenic Lupin posted:Yeah, I was coming here to post that. WE HAVE KNOWN BETTER FOR loving DECADES. Does Cloudflare have code review? Is it entirely done by drunks?
|
# ? Feb 25, 2017 11:25 |
|
zen death robot posted:Yeah Lowtax was thinking we weren't affected and I just said it's impossible to rule it out and it's safer to assume everyone was affected and to tell ppl to change passwords. It's the responsible thing to do. If I log out but someone's stolen my cookie, can they still access the site? Or does logging out clear the session server-side? Different backends handle it differently, I have no idea how Radium set up SA's persistent logins. E: not being as demanding and clarifying my Q Harik fucked around with this message at 11:48 on Feb 25, 2017 |
# ? Feb 25, 2017 11:41 |
|
spankmeister posted:Git signs commits, not individual files. Committing the colliding files will result in different hashes for the both of them because it also hashes some metadata along with the content of the commit, this metadata will be different every time. git hashes blobs, trees, and commits. right now they don't collide because the type prefix on the files changes the preamble to the collision: http://stackoverflow.com/a/42435393 so somebody just has to figure out how to collide input strings to sha1 that are prefixed with "blob filesize\0"
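For the curious, the blob preamble mentioned above is easy to reproduce; a minimal sketch (the function name is mine, but the "blob <size>\0" header is exactly what git prepends before hashing):

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    # git hashes "blob <size in bytes>\0" + content, not the raw bytes,
    # which is why the shattered.io PDFs don't collide as git blobs as-is
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# reproduces the well-known hash of git's empty blob
print(git_blob_sha1(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

The same result can be checked against `git hash-object` on any file.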
|
# ? Feb 25, 2017 12:06 |
|
Cocoa Crispies posted:git hashes blobs, trees, and commits Yeah, but then you need to know the size of the colliding blocks beforehand, because when you add blocks the filesize changes, so the preamble changes, so the hash changes, so you need more/different colliding blocks, so the filesize changes, so the hash changes, etc. Not impossible, just a lot harder.
|
# ? Feb 25, 2017 12:14 |
|
spankmeister posted:Yeah but then you need to know the size of the colliding blocks beforehand because when you add blocks the filesize changes so the preamble changes so the hash changes so you need more/different colliding blocks so the filesize changes so the hash changes etc.. I'm curious to know if the shattered.io team started with a fixed size, because if they did, it's fundamentally the same problem, just with a larger fixed header
|
# ? Feb 25, 2017 12:55 |
|
Absurd Alhazred posted:Guess it's time to stop using eBay. they (and paypal) have been like this for a long time. troy hunt has yelled at them about it before. while searching for that one i came across this, which is even better: https://www.troyhunt.com/its-not-about-supporting-password/ quote:We'd lose our security certificate if we allowed pasting. It could leave us open to a "brute force" attack. Thanks ^Steve
|
# ? Feb 25, 2017 13:05 |
|
spankmeister posted:Yeah but then you need to know the size of the colliding blocks beforehand because when you add blocks the filesize changes so the preamble changes so the hash changes so you need more/different colliding blocks so the filesize changes so the hash changes etc.. you just set the filesize to 10KB or whatever. once you have the collision you add identical data to both files until the size is right.
|
# ? Feb 25, 2017 13:47 |
|
Dylan16807 posted:you just set the filesize to 10KB or whatever. once you have the collision you add identical data to both files until the size is right. I don't think it quite works like that. You start with file A and file B. You want to give file B the same hash as A, so you add blocks to B until it has the same hash. Now B has the same hash but is larger than A. You can now start to add the same data to A and B, but B will always be larger, or the number of "equal" blocks will be different to account for the size difference, and therefore the hash will be different.
|
# ? Feb 25, 2017 14:04 |
|
https://mackeeper.com/blog/post/334-extensive-breach-at-intl-airport quote:In what should be considered a complete compromise of network integrity, New York’s Stewart International Airport was recently found exposing 760 gigs of backup data to the public internet. No password. No username. No authentication whatsoever.
|
# ? Feb 25, 2017 14:19 |
|
spankmeister posted:I don't think it quite works like that. that's not how this attack works. the file size is the same, there is just a pair of blocks somewhere in the middle that differs between a and b.
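The same-length detail can be illustrated with a toy Merkle–Damgård hash (a deliberately weak 16-bit stand-in, not real SHA-1): once two same-length inputs collide, appending an identical suffix to both keeps them colliding, because the internal chaining state is already equal.

```python
import hashlib
from itertools import product

def compress(state: bytes, block: bytes) -> bytes:
    # toy compression function: SHA-256 truncated to 16 bits,
    # weak enough that collisions can be found by brute force
    return hashlib.sha256(state + block).digest()[:2]

def toy_hash(msg: bytes, block_size: int = 4) -> bytes:
    # Merkle-Damgard chaining, the same shape as SHA-1's
    state = b"\x00\x00"
    for i in range(0, len(msg), block_size):
        state = compress(state, msg[i:i + block_size])
    return state

# brute-force two different single blocks with the same hash
seen, a, b = {}, None, None
for pair in product(range(256), repeat=2):
    block = bytes(pair) + b"\x00\x00"
    h = compress(b"\x00\x00", block)
    if h in seen:
        a, b = seen[h], block
        break
    seen[h] = block

suffix = b"identical trailing data, any amount of it"
assert a != b and toy_hash(a) == toy_hash(b)
# the collision survives appending the same suffix to both files
assert toy_hash(a + suffix) == toy_hash(b + suffix)
```

This only demonstrates the extension property; the hard part shattered.io solved is producing the colliding block pair for real SHA-1 in the first place.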
|
# ? Feb 25, 2017 15:07 |
|
why the gently caress does mackeeper run a blog. do they actually have competent people working on their garbage?
|
# ? Feb 25, 2017 15:08 |
|
Jabor posted:that's not how this attack works. the file size is the same, there is just a pair of blocks somewhere in the middle that differs between a and b. oh right, that makes sense.
|
# ? Feb 25, 2017 15:11 |
|
Jabor posted:that's not how this attack works. the file size is the same, there is just a pair of blocks somewhere in the middle that differs between a and b. which makes sense, since sha-1 also hashes the length of the input (as part of a padding suffix that always gets appended), so the attack no doubt gets unnecessarily messy if one tries to vary that bit at the same time. as such i don't think linus' argument makes sense at all; given what they had to do, it would have been no harder to achieve the same attack against git. granted there doesn't seem to be an obvious endgame to attacking git either, but it is just as broken as anything using sha-1.
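The length-in-the-padding detail is visible in a from-scratch SHA-1 (a straightforward textbook implementation, checked against hashlib, not an optimized one): the final 8 bytes of padding are the message length in bits, so inputs of different lengths never even feed the same final block to the compression function.

```python
import hashlib
import struct

def sha1_pad(msg: bytes) -> bytes:
    # SHA-1 appends 0x80, zero padding, then the 64-bit message
    # length in bits -- the input length is baked into the hash
    bitlen = len(msg) * 8
    msg += b"\x80"
    msg += b"\x00" * ((56 - len(msg) % 64) % 64)
    return msg + struct.pack(">Q", bitlen)

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(msg: bytes) -> str:
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    padded = sha1_pad(msg)
    for off in range(0, len(padded), 64):
        w = list(struct.unpack(">16I", padded[off:off + 64]))
        for t in range(16, 80):
            w.append(rotl(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))
        a, b, c, d, e = h
        for t in range(80):
            if t < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif t < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif t < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = ((rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF,
                             a, rotl(b, 30), c, d)
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, [a, b, c, d, e])]
    return "".join("%08x" % x for x in h)

assert sha1(b"abc") == hashlib.sha1(b"abc").hexdigest()
# the last 8 padding bytes are the bit length (3 bytes = 24 bits)
assert sha1_pad(b"abc")[-8:] == struct.pack(">Q", 24)
```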
|
# ? Feb 25, 2017 15:15 |
|
i posted about this in some other thread, but git is mostly fine because, despite being a dvcs, there's usually a trusted central repository with restricted push access, and a repository won't accept new commits that have the same hash as an existing commit. so the only attacks are if: a project admin intentionally corrupts an existing commit; a project contributor exactly predicts an incoming commit and intentionally pre-collides it; or there's an intermediary repository vulnerable to those attacks that people regularly clone instead of the central one. those are serious concerns, but they do first assume that one of a small number of accounts is compromised, and the only killer one is when it's an admin. also it's not too hard to detect because the actual source trees will hash differently, and not even a perfect pre-image attack will make them hash the same after an arbitrary patch is applied
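The "won't accept new commits that have the same hash" behavior is the usual first-write-wins rule for content-addressed stores; a hypothetical sketch of the semantics (class and method names are mine, not git's):

```python
import hashlib

class ObjectStore:
    """Toy content-addressed store with git-like semantics: writing an
    object whose hash is already present is a silent no-op, so a later
    colliding object can never replace an earlier, honest one."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        # same key derivation as a git blob: "blob <size>\0" + content
        key = hashlib.sha1(b"blob %d\x00" % len(data) + data).hexdigest()
        self._objects.setdefault(key, data)  # first write wins
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key]
```

This is why a collision only bites when the attacker's version reaches a repository before the honest one does.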
|
# ? Feb 25, 2017 16:09 |
|
obviously they should move to a new hash function, tho
|
# ? Feb 25, 2017 16:12 |
|
Jet fuel can't melt Buffalo Nas'
|
# ? Feb 25, 2017 17:53 |
|
rjmccall posted:i posted about this in some other thread, but git is mostly fine because, despite being a dvcs, there's usually a trusted central repository with restricted push access, and a repository won't accept new commits that have the same hash as an existing commit. so the only attacks are if: a project admin intentionally corrupts an existing commit; a project contributor exactly predicts an incoming commit and intentionally pre-collides it; or there's an intermediary repository vulnerable to those attacks that people regularly clone instead of the central one. those are serious concerns, but they do first assume that one of a small number of accounts is compromised, and the only killer one is when it's an admin. also it's not too hard to detect because the actual source trees will hash differently, and not even a perfect pre-image attack will make them hash the same after an arbitrary patch is applied The intermediary repository issue does come into play for targeted attacks, for example if a company maintains their own local clone for update management and/or internal patches, then it's a very interesting target. That said, after digging a bit into the architecture of Git I find myself more and more in agreement with even Linus' statements from years ago where he downplayed the importance of the algorithm almost altogether. As far as I've been able to figure, even if you have the ability to generate arbitrary collisions on demand it'd still be nearly impossible to remain undetected for any extended period of time. I can't really come up with any ways to make it work without being easily detected unless the attacker has both privileged access to the filesystem of the trusted repository and the ability to commit without review. Anyone who gets a clean copy of that commit before you make your changes is a liability as far as being discovered, and that most importantly includes the person who committed it.
Even if you had the ability to generate arbitrary collisions instantly the moment a random commit goes through, you'd still have to destroy their copies if you wanted to get away with it long term. The only even vaguely plausible situation I can come up with for this to work in a non-targeted manner requires being able to do arbitrary collisions in insignificant time periods and maintaining your own mirror repository where you're redoing the attack on every single commit so it looks like a legit mirror but isn't, and you use that repository in a tutorial, docker setup, etc. I may be wrong in my understanding though, and of course Git should migrate to something else regardless, if only because it doesn't look good to be using a broken algorithm.
|
# ? Feb 25, 2017 18:11 |
|
Malloc Voidstar posted:ragel isn't made by cloudflare, the bug is in code it generates
|
# ? Feb 25, 2017 18:23 |
|
mostly though it is a rather subtle task to judge in what ways the assumption of hashes not colliding has snuck into the code. while the overall architecture may stand up reasonably well to collisions, i am not that confident of any sweeping judgement when it comes to a system and broader ecosystem most certainly not *engineered* to survive that scenario
|
# ? Feb 25, 2017 18:25 |
|
Cybernetic Vermin posted:mostly though it is a rather subtle task to judge in what ways the assumption of hashes not colliding has snuck into the code, while the overall architecture may stand up reasonably well to collisions i am not that confident of any sweeping judgement when it comes to a system and broader ecosystem most certainly not *engineered* to survive that scenario Couldn't you just integration test while swapping in a "hashes everything to 4" function instead of the supposedly secure one?
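The "hashes everything to 4" idea is essentially dependency-injecting a degenerate hash; a toy sketch (the dedupe routine is hypothetical, just something with a hidden no-collision assumption baked in):

```python
def constant_hash(data: bytes) -> str:
    # degenerate stand-in: every input "collides" with every other
    return "4"

def dedupe_by_hash(items, hash_fn):
    # keeps the first item seen for each hash value -- silently
    # assumes distinct content never shares a hash
    seen = {}
    for item in items:
        seen.setdefault(hash_fn(item), item)
    return list(seen.values())

# swapping in the constant hash makes the hidden assumption visible:
# b"b" is silently dropped as a "duplicate" of b"a"
assert dedupe_by_hash([b"a", b"b"], constant_hash) == [b"a"]
```

The caveat from the quoted post still applies: passing such a test only tells you the code paths you exercised survive collisions, not that the whole ecosystem does.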
|
# ? Feb 25, 2017 18:38 |
|
|
|
Linus just posted a more detailed followup regarding Git and SHA1: https://plus.google.com/+LinusTorvalds/posts/7tp2gYWQugL
|
# ? Feb 25, 2017 22:30 |
|
huh. that SHA-1 variant that detects collisions and just hashes it some more is interesting
|
# ? Feb 25, 2017 22:46 |
|
git is overcomplicated and overengineered, I'm surprised that it uses sha1
|
# ? Feb 25, 2017 23:37 |
|
Linoos posted:But if you use git for source control like in the kernel, the stuff you really care about is source code, which is very much a transparent medium. If somebody inserts random odd generated crud in the middle of your source code, you will absolutely notice. uh huh
|
# ? Feb 25, 2017 23:48 |
|
tbf git is completely unsuited for the storage of anything besides plain text
|
# ? Feb 25, 2017 23:50 |
|
the bold part was in his original btw
|
# ? Feb 25, 2017 23:51 |
|
i'm not surprised that ragel was the origin of buttbleed. it's an unreadable garbage language that compiles to garbage c
|
# ? Feb 26, 2017 01:40 |
|
i would really like to know what the honest internal feelings are of the cloudflare guy who raged at tavis about being blocked lol
|
# ? Feb 26, 2017 01:47 |
|
quote:That may be a bit of a red herring though and only part of the puzzle. Within the backups I was able to locate an email chain indicating that AvPORTS purchased at least one Buffalo Terastation backup NAS device in March of 2016. Paging Larches ratbert90 posted:Jet fuel can't melt Buffalo Nas'
|
# ? Feb 26, 2017 02:19 |
|
did the deadline for requiring stores to use chip readers in the US get pushed back? because I still see places of varying size (including cinemark) with a chunk of cardboard stuck up the chip reader slot saying "swipe instead"
|
# ? Feb 26, 2017 06:13 |
|
A Yolo Wizard posted:did the deadline for requiring stores to use chip readers in the US get pushed back? because I still see places of varying size (including cinemark) with a chunk of cardboard stuck up the chip reader slot saying "swipe instead" the deadline got pushed back in december iirc
|
# ? Feb 26, 2017 06:32 |
|
hey has anyone used splunk's universal forwarder as an alternative to expensive endpoint security poo poo (carbon black)? it says it can log new processes, services, logins, runkeys, etc. which is probably enough to detect if an endpoint got owned. is this good enough? budget is $0, and there's like no trail for these things in the corporation at present. only other things I can think to do is run LimaCharlie or Eljefe on hosts, and that would only serve to tell us they did double click the exe they downloaded, or that the webapp on this server was popped because a process spawned as a child of php-fpm. it would also serve to make another server exist, to promptly break when I'm not there to babysit the company in a few months... idk how robust those are
|
# ? Feb 26, 2017 06:43 |
|
|
|
Daman posted:hey has anyone used splunks universal forwarder as an alternative to expensive endpoint security poo poo (carbon black)? have fun with your 500mb/day ingestion limit! our ops dude set up splunk to do audit logging of AD alone and we blew past 5gb/day with just AD lol
|
# ? Feb 26, 2017 06:50 |