Assuming the file names can differ and the hashes aren't already sitting in a db somewhere:
find / -type f -exec md5sum {} + | sort > md5sums
uniq -c -w 32 md5sums | egrep -v "^ *1 " | awk '{print $2}' > dupes  # hashes seen more than once
grep -f dupes md5sums
Not sure if a SHA sum would be faster, but md5sum is embedded in muscle memory for me. There's probably a one-liner to be had out of this.
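Something like this might do it as a single pipeline (assuming GNU uniq, where -w 32 compares only the hash field and --all-repeated prints every duplicated line, grouped):

find /home -type f -exec md5sum {} + | sort | uniq -w 32 --all-repeated=separate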
You might also want to scope that find to stay out of /proc, or to stick to /home. Exercise for the interested reader ;)
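For anyone who'd rather skip the exercise, something along these lines should work with GNU find (untested here, -prune idiom assumed):

find / -path /proc -prune -o -type f -exec md5sum {} +   # skip /proc entirely
find /home -type f -exec md5sum {} +                     # or just stay inside /home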
Sean
On Fri, Mar 25, 2011 at 3:01 PM, Kevin McGregor <kevin.a.mcgregor@gmail.com> wrote:
Would anyone like to volunteer suggestions for a utility that searches a filesystem for duplicate files? I'm running Ubuntu 10.04 on my server, and I'm sure I have lots of duplication, which I'd like to get rid of. I'm interested in both CLI and GUI solutions.
Kevin
Roundtable mailing list
Roundtable@muug.mb.ca
http://www.muug.mb.ca/mailman/listinfo/roundtable