
Re: [tlug] distributed file systems



> If you can require all your users to submit new data to one server
> (and later delete it there), while still letting them read from ANY
> server, it will simplify the problem a great deal. You'll have one
> master RW server and many RO slave servers that sync from the master.

Yes, this is true and was in fact our first thought. The problem is
that the admin is slow as well, and, especially at the beginning, we
don't want to give anybody any reason whatsoever not to use it. If we
can't find any other reasonable solution, we will go with the one-RW,
many-RO approach.

> You can use an rsync script triggered by inotify (or dnotify) to
> replicate the changes. Or you can run a cron job or an endless-loop
> script to do `rsync -HavP --delete master:/dir1/ /dir1/`, of course
> with properly set up ssh public-key authentication.

I have something like this running now for backups, and if I didn't
know of any other tools, this is what I would do to push out the
changes from RW to RO; a rough sketch of the inotify-triggered
variant is below.
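
For reference, a minimal sketch of that variant, assuming inotifywait
from the inotify-tools package; the paths and slave hostnames are
made up:

    #!/bin/sh
    # Runs on the RW master: block until something under /dir1
    # changes, then push the tree to each RO slave over ssh.
    while inotifywait -r -e modify -e create -e delete -e move /dir1; do
        for host in slave1 slave2 slave3; do
            rsync -HaP --delete /dir1/ "$host":/dir1/
        done
    done

One caveat: events that arrive while the rsyncs are running are
missed, so a production version would want to coalesce changes.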

> BTW, how quickly is "fairly quickly" ? A minute, 5 s, 1 s, less?

Short enough that it doesn't appear broken to anyone: they refresh it
a few times, and "oh, yeah, there it is." Probably anything under
10 seconds is OK.

> I haven't seen exactly the same problem, but I use subversion for
> tasks like that, almost never looking at the revision history, but
> utilizing its efficient delta-transfer algorithm.

Interesting. OK, since both you and cjs have mentioned it, I will try
it out. The nice thing about this is that I don't have to make any
system- or application-level changes to try it. I would want it
running in some sort of endless loop if I could, like the one
sketched below.
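
If it works out, the loop on each RO server would presumably be
something as simple as this (the working-copy path and poll interval
are just guesses):

    #!/bin/sh
    # Runs on each RO server: /dir1 is assumed to be a subversion
    # working copy of the master repository; poll every 5 seconds
    # to stay under the ~10 second freshness target.
    while true; do
        svn update -q /dir1
        sleep 5
    done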

Cheers,
Sach

