
Re: [tlug] Seeking recommendations for file consolidation
On Fri, 2006-08-18 at 19:34 +0900, Zev Blut wrote:
> I really am willing to provide the beer and pizza if Stephen is
> willing to take part in realizing his git idea. Plus, it will give me
> a good excuse to learn more about git ;-)
The question I have about a git-based approach is: how does git tell the
difference between a genuinely new binary file and a file that has been
renamed and modified between versions, and how does it know that such a
file should not be treated as new? Perhaps there is a good way of doing
this, but the general problem (i.e. not ignoring corner cases) would
appear to be unsolvable. Is this not the case?
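(From what little I have read, git does not actually record renames at
all; it reconstructs them after the fact by content similarity whenever
you ask for a diff or a log, with a threshold you can tune. Roughly, with
the path here being purely illustrative:

git diff -M50% --summary HEAD~1 HEAD    # treat >=50%-similar files as renames
git log --follow -- some/renamed-file   # follow one path across renames

If that is right, rename detection is heuristic by design, which only
strengthens the corner-case worry above.)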
If you just want duplicates hard-linked to the same inode, will this not
work?
#!/bin/bash
# Replace duplicate files (identical md5sums) with hard links to one copy.
XXXSUM=/usr/bin/md5sum
mkdir .sums
# -print0 / read -d '' keeps filenames containing spaces intact
find ./* -type f -print0 | while IFS= read -r -d '' file
do
    filesum=$("${XXXSUM}" "$file" | sed 's/^\([^ ][^ ]*\) .*$/\1/')
    if [ -f ".sums/$filesum" ]
    then
        echo "$file duplicate."
    fi
    mv -u "$file" ".sums/$filesum"   # first (or newer) copy becomes the master
    rm "$file" 2>/dev/null           # already gone unless mv -u skipped it
    ln ".sums/$filesum" "$file"      # relink the path to the master copy
done
rm -rf .sums                         # drop the extra links; the data survives
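As a quick sanity check after it runs, every path that got consolidated
should now share an inode with its twins, i.e. show a link count greater
than one:

find . -type f -links +1 -exec ls -li {} +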
Edward