You have a bunch of files (for instance, JPEGs). Over time they get moved around, you receive copies from family members, and before you know it you have the same file stored multiple times. You can of course sort out these duplicates manually, but you can also automate duplicate detection.
After a short search, I found this solution on LinuxQuestions.org:
tmp=$(mktemp)
find . -type f |xargs md5sum > $tmp
awk '{ print $1 }' $tmp |sort |uniq -d |while read f; do
    grep "^$f" $tmp
    echo ""
done
This outputs a list of duplicate files once it has run to completion.
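For two identical files, the output looks something like this (the paths are hypothetical; the hash shown happens to be the MD5 of an empty file):

d41d8cd98f00b204e9800998ecf8427e  ./photos/IMG_0001.jpg
d41d8cd98f00b204e9800998ecf8427e  ./backup/IMG_0001.jpg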
However, it borks when it encounters whitespace or special characters such as the apostrophe. A solution:
#!/bin/bash
tmp=$(mktemp)
find . -type f | sed -e "s/'/\\\'/g" |xargs -I{} md5sum {} > $tmp
awk '{ print $1 }' $tmp |sort |uniq -d | while read f; do
    grep "^$f" $tmp
    echo ""
done
The -I{} and {} make sure the input to md5sum is terminated only by newlines, not by whitespace. Also, the sed -e "s/'/\\\'/g" part replaces every occurrence of the apostrophe (') with its escaped version (\'), just as you would when entering it on the command line.
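An alternative worth mentioning (a minimal sketch of a common idiom, not from the original post) is to sidestep the escaping altogether by letting find and xargs exchange NUL-delimited filenames:

#!/bin/bash
tmp=$(mktemp)
# -print0 / -0: filenames are separated by NUL bytes, so spaces,
# quotes and apostrophes need no escaping at all
find . -type f -print0 | xargs -0 md5sum > "$tmp"
awk '{ print $1 }' "$tmp" | sort | uniq -d | while read -r f; do
    grep "^$f" "$tmp"
    echo ""
done

Note that the temporary file is still parsed line by line, so filenames that contain newlines would still confuse the grep stage.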
This is able to traverse deep into directory structures, and it accepted every filename I encountered in my dataset. It is, however, quite CPU intensive, as it calculates the MD5 hash of every file. If you only want to compare based on filename, the whole operation becomes a lot more lightweight.
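A minimal sketch of that filename-only variant (assuming GNU find, whose -printf '%f' prints just the basename):

# List every basename that occurs more than once, then show where each lives
find . -type f -printf '%f\n' | sort | uniq -d | while read -r name; do
    find . -type f -name "$name"
    echo ""
done

This never reads file contents, so it is cheap, but it also flags files that merely share a name without being identical, and basenames containing glob characters could confuse the -name test.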
Duplicate detection with locate/mlocate.db
Actually, it is not necessary to index all files manually; there is a good chance this is already being done by the updatedb cron job. For instance,
skidder@spetznas:~$ locate fstab
/etc/fstab
and it also finds some other files containing the string fstab. Unfortunately, mlocate.db is a very simple list of filenames only – a file size and an MD5 hash per entry would greatly ease duplicate detection. So far I have not found a way to do this more efficiently than the shell script posted above.
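That said, one standard trick to at least reduce the hashing work (my own sketch, not part of the solution above) is to use the file size as a cheap pre-filter: duplicates necessarily have equal sizes, so only files whose sizes collide need to be hashed. Assuming GNU find and GNU uniq:

#!/bin/bash
tmp=$(mktemp)
# Record "size path" for every file (GNU find's -printf assumed)
find . -type f -printf '%s %p\n' > "$tmp"
# Only sizes that occur more than once can belong to duplicates
awk '{ print $1 }' "$tmp" | sort -n | uniq -d | while read -r size; do
    # Hash just the files of this size; -w32 compares only the
    # 32-character MD5 prefix, -D prints all repeated lines
    grep "^$size " "$tmp" | cut -d' ' -f2- | while IFS= read -r file; do
        md5sum "$file"
    done | sort | uniq -w32 -D
    echo ""
done

As with the scripts above, filenames containing newlines will still break the line-oriented parsing, but for a typical photo collection this skips the vast majority of the MD5 computations.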