
Re: [tlug] distributed file systems



It sort of sounds like, apart from your 50 text files, you could just
keep the uploaded files in one place and run a caching proxy in each
of your remote locations (or use a CDN and rely on the fact that they
already run caching proxies near each of your remote locations).
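As a rough illustration of that first option, here is a minimal nginx caching-proxy config you might run at each remote location; the origin hostname, paths, cache zone sizes, and validity times are all assumptions for the sketch, not anything from your setup.

```nginx
# Hypothetical caching proxy for one remote location.
# origin.example.com, /uploads/, and the cache parameters are illustrative.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=uploads:10m
                 max_size=10g inactive=7d;

server {
    listen 80;

    location /uploads/ {
        proxy_pass        http://origin.example.com;
        proxy_cache       uploads;
        proxy_cache_valid 200 1h;   # keep successful responses for an hour
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

Requests for previously-fetched files would then be served locally, with only cache misses going back to the single master location.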

Failing that, if you really need to be able to upload from multiple
locations to make the uploads faster, do you have the option of
keeping the files in separate namespaces, so that for any given
file you know the master lives in location X and the slaves /
cached copies live in locations Y and Z? If so, you could still do
something along the lines of master/slaves (one filesystem plus
caching proxies, or one filesystem plus a CDN).
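The namespace idea can be as simple as routing on a path prefix. A tiny sketch, where the prefixes (tokyo/, osaka/) and hostnames are invented for illustration:

```shell
# Hypothetical: map a file path to the location that holds its master copy,
# based on a per-location namespace prefix. All names here are made up.
master_for() {
  case "$1" in
    tokyo/*) echo "fileserver-tokyo.example.com" ;;
    osaka/*) echo "fileserver-osaka.example.com" ;;
    *)       echo "fileserver-default.example.com" ;;
  esac
}

master_for "tokyo/uploads/report.pdf"   # routes to the Tokyo master
```

Uploads go straight to the master for that prefix; everything else reads through its local cache.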

Then do something else for your 50 or so pseudo-database-y text files
(version control, a "click here to rsync" button, or whatever).

Edmund Edgar

