Re: [tlug] how to tune reiser4 for millions of files?



On 2010-02-01 10:03 +0100 (Mon), Michal Hajek wrote:

> But the used part of filesystem is 113GB big, which means data have
> something like 80GB. File sizes are about 10-12KB. So once again, Curt
> made quite a good guess. I may try to run "du -sk" overnight.  

It's not worth it: "probably no more than 80GB +/- 20%" is accurate
enough for our needs, and du won't give you the actual data size anyway,
any more than df will. Both report block usage, which may be
significantly higher than the actual data size.
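
For what it's worth, you can see the difference per file straight from
stat(): st_size is the data in the file, while st_blocks (counted in
512-byte units) is what the filesystem has actually allocated, and the
latter is what du and df add up. A rough Python sketch, with the path
purely as a placeholder:

    import os

    # Compare a file's apparent size with the space actually allocated
    # for it on disk; du and df report the latter.
    st = os.stat("/path/to/somefile")      # placeholder path
    apparent = st.st_size                  # bytes of data in the file
    allocated = st.st_blocks * 512         # st_blocks is in 512-byte units
    print("data: %d bytes, on disk: %d bytes" % (apparent, allocated))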

(That, by the way, is another reason to rewrite the program doing the
writing: if you have an average of 1K per file on a filesystem with
4K blocks or fragments, you will write four times as many blocks as you
would otherwise need.)
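
A quick back-of-the-envelope version of that, with the file count made
up purely for illustration:

    # With 4K blocks, a file holding only 1K of data still occupies a
    # whole block, so the allocation is four times the data.
    files = 8 * 10**6            # assumed count, for illustration only
    data = files * 1024          # ~8 GB of actual data
    allocated = files * 4096     # ~32 GB of blocks written
    print(allocated / data)      # -> 4.0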

> ls: memory exhausted

Use ulimit to increase the amount of memory allocated to ls, and make
sure you're running on a 64-bit machine.
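
ulimit in the shell is just a front end to setrlimit(2), and limits you
raise there are inherited by child processes such as ls, so you can also
check or bump them programmatically. A minimal sketch using Python's
resource module; picking RLIMIT_AS (what bash's ulimit -v adjusts) is my
guess at the limit being hit here:

    import resource

    # Inspect the per-process address-space limit, then raise the soft
    # limit to the hard limit; child processes inherit the new limit.
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    print("RLIMIT_AS soft=%s hard=%s" % (soft, hard))
    resource.setrlimit(resource.RLIMIT_AS, (hard, hard))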

> Also, I do not know why [find] took so much more time [than ls].

I mentioned this before: when you use xargs, you're forking off many
more processes and doing all sorts of other work to build the
command-line arguments and so on. That is, assuming my edit of your
statement above was really what you meant.
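
Most of that cost is fork/exec overhead: every process spawned is a full
fork and exec, which dwarfs the work of reading a directory entry. A
rough way to see the gap (the 1000 iterations and the /bin/true target
are arbitrary choices of mine):

    import os, subprocess, time

    # Time 1000 fork+exec round trips against a single in-process
    # directory read, to show where the pipeline loses its time.
    t0 = time.time()
    for _ in range(1000):
        subprocess.call(["/bin/true"])   # one fork+exec per iteration
    t1 = time.time()
    names = os.listdir(".")              # one read, no extra processes
    t2 = time.time()
    print("forks: %.2fs  listdir: %.4fs" % (t1 - t0, t2 - t1))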

cjs
-- 
Curt Sampson         <cjs@example.com>         +81 90 7737 2974
             http://www.starling-software.com
The power of accurate observation is commonly called cynicism
by those who have not got it.    --George Bernard Shaw

