hashes and collisions had been on the back-burner for me for a bit, but the recent hullabaloo has bumped them back up...
so the thought that keeps coming back (reminder: /me != math guy) came from my experiences w/ gentoo... either the kernel or portage (but not both ;) used .sig files that contained multiple hashes to verify download integrity.
so say you've got a 1/x chance of collision in md5 and a 1/y chance of collision in sha1 (assuming x & y are both reasonably large numbers), then isn't the likelihood of getting a collision of *both* hashes on the same file exponentially *smaller* than getting a collision on either hash individually?
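a quick back-of-the-envelope, under the naive assumption that the two hashes behave independently (they don't exactly, but it's close enough for this argument) -- the 2^64 / 2^80 figures below are just illustrative birthday-bound ballparks for 128-bit and 160-bit digests, not the real cost of the published attacks:

```latex
P(\text{both collide}) \approx \frac{1}{x}\cdot\frac{1}{y} = \frac{1}{xy},
\qquad \text{e.g. } x = 2^{64},\; y = 2^{80} \;\Rightarrow\; P \approx \frac{1}{2^{144}}
```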
so if we're really worried about the apparently real weaknesses in md5 and the emerging practical weaknesses in sha1 (via Xiaoyun Wang & her students, iirc), why not just start checking multiple hashes each time we verify integrity?
no new technology needed, just parse more than one value before you let that if/then evaluate to true, right? something like the sketch below.
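a minimal sketch of that idea in python -- the file name, the expected digests, and the md5+sha1 pairing here are my own illustrative assumptions, not gentoo's actual .sig format:

```python
import hashlib

def verify_multi_hash(path, expected):
    """Check a downloaded file against *every* expected digest.

    `expected` maps hashlib algorithm names to hex digests,
    e.g. {"md5": "...", "sha1": "..."}.  All of them must match.
    """
    # one hasher per algorithm, all fed from a single pass over the file
    hashers = {name: hashlib.new(name) for name in expected}
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            for h in hashers.values():
                h.update(chunk)
    # the if/then only goes true when every digest matches
    return all(hashers[name].hexdigest() == digest.lower()
               for name, digest in expected.items())

# hypothetical usage -- these digests are just the md5/sha1 of an empty
# file, purely illustrative
if verify_multi_hash("portage-2.0.51.tar.bz2",
                     {"md5": "d41d8cd98f00b204e9800998ecf8427e",
                      "sha1": "da39a3ee5e6b4b0d3255bfef95601890afd80709"}):
    print("all hashes match")
else:
    print("at least one hash failed -- reject the download")
```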
1 comment:
I think this is actually a basic practice in forensics. When you make a hash of forensic data with hash algorithm A, you should always make a second hash of the same data with algorithm B -- that way, if either algorithm is devalued, you'll still have a secondary hash.