Mailing List Archive


Re: [tlug] Help understanding a disk near-disaster

David J Iannucci writes:
 > On Mon, Sep 10, 2012, at 17:19, Stephen J. Turnbull wrote:
 > >  > That was going to be my guess. You had a filesystem mounted by 2
 > >  > Linux installations at the same time. That's a big problem unless
 > >  > the filesystem is one of the clustered filesystems,
 > >
 > > Er, even then it would still be a big problem because in his setup the
 > > two installations each think they monopolize the metadata for the
 > > underlying data store.

The previous poster assumes that the filesystem is mounted in two
places at one time; I'm maintaining that assumption.  But with cluster
FSes there's a further consideration besides the VFS-level data and
metadata that one needs to be concerned about, namely setting up both
OSes to have different datastores for the cluster FS local cache.

 > Thanks for the thoughts, Steve. But I'd like to understand a little
 > better... if I've umounted, doesn't that basically "free" the filesystem
 > from the OS that had mounted it, i.e. back to a "neutral" state?

Yes.  But note that it is not a question of what state the filesystem
is in.  For example, you could do a sync(1) to ensure that the OS
metadata cache and the filesystem metadata are identical, but you'd
still have problems without the umount.  What umount does is tell the
OS to sync, then discard its metadata cache completely and reread it
from the disk on the next mount.
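The sync-then-umount sequence can be sketched as a small shell script;
/mnt/shared is a hypothetical example path, adjust it to your setup
(and note the umount itself needs root):

```shell
#!/bin/sh
# release_fs: sync, then unmount, so the OS forgets its cached metadata
# for the filesystem and will reread it from disk on the next mount.
release_fs() {
    fs=$1
    if grep -qs " $fs " /proc/mounts; then
        sync               # flush the cached metadata to disk first
        umount "$fs"       # drop the cache; next mount rereads from disk
        echo "unmounted $fs"
    else
        echo "$fs is not mounted; nothing to do"
    fi
}

# Example path only; prints a skip message when /mnt/shared isn't mounted.
release_fs /mnt/shared
```

(Strictly speaking the explicit sync is redundant, since umount syncs
before releasing the filesystem, but it makes the two steps visible.)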

 > What sort of metadata remain "tied" to the original OS,

None, at that point (but you've probably already figured that out).

BTW, you might have a little easier time visualizing what's going on
if you phrase it as "free the OS from worrying about the (now
unmounted) filesystem."

 > I guess [a virtualized OS] would do for me, but if what I really
 > want is a "live boot"?

Several options:

[totally lo-tech] Just shut the main OS all the way down and live with
the startup cost.

[higher-tech] umount the shared filesystem by hand before
hibernating.  But is this unreliable (it's something you have to
remember to do) and annoying procedure really preferable to a full
shutdown?

[even-higher-tech but not as robust] Put a script in your hibernate
configuration that unmounts that filesystem before hibernation, then
remounts it afterward.  (Probably you can do this with udev.)  The
reason it's not robust is that you can't umount if any files on the
FS are in use, whether held open by an editor (desktop apps like
*Office tend to keep files open) or by a shell or background process
cd'ed into a directory on that FS.  You could use lsof to find those
processes and kill -9 them, but that's not very nice.
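On a systemd-based distribution, one place for such a hook is
/etc/systemd/system-sleep/, whose scripts are called with $1 set to
"pre" or "post" around suspend/hibernate.  A minimal sketch, assuming
a hypothetical mountpoint /mnt/shared (this is a config fragment, not
something to run as-is):

```shell
#!/bin/sh
# Hypothetical hook, e.g. /etc/systemd/system-sleep/unmount-shared.sh
# (mark it executable).  systemd calls it with $1 = pre|post and
# $2 = suspend|hibernate|hybrid-sleep.
FS=/mnt/shared   # example path; adjust to your setup

case "$1" in
    pre)
        # Fails (leaving the FS mounted) if any process holds files
        # open on it -- exactly the non-robustness described above.
        umount "$FS" || logger "system-sleep: could not umount $FS (files in use?)"
        ;;
    post)
        mount "$FS"
        ;;
esac
```

If the pre-hook's umount fails, hibernation still proceeds with the
filesystem mounted, so this only narrows the window rather than
closing it.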

[best?] Find a virtual OS host that will handle live boots in a VM.

