Mailing List Archive


Re: [tlug] Help with fsck and ocfs2 (or even ext4?)...

Hi Jim,

On Tue, Sep 28, 2021 at 6:00 PM Jim Blackson <> wrote:
> Raymond Wan <> wrote:
> > ... I'm more concerned about data loss and how I can get this file system back into read/write mode.
> Raymond Wan <> wrote:
> > The data itself isn't "raw" data, but processed data that represents
> > about 9 months of processing time...I can't lose it or else I'm doomed...
> > Wait -- is this mailing list public?  Oops...  :-P
> If data loss is a primary concern, please do not fsck around with (write
> to) the SAN.  Worst case is losing the SAN, so how about shutting down
> the SAN, carefully label and record the position of each HDD/SSD in the
> SAN (so you know which disk goes in which slot in which array), then
> "dd/ddrescue" duplicate each and every HDD to another set of matching
> HDDs.  This new set is your backup.
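Jim's duplication step above might look like the following sketch. The device names are placeholders, and ddrescue's map file lets an interrupted copy resume; the runnable part uses an ordinary file as a stand-in for a disk:

```shell
# Hedged sketch of per-disk duplication; device names are placeholders.
# On the real SAN, each source disk would be copied to a matching spare:
#   ddrescue -f -n /dev/sdX /dev/sdY sdX.map   # map file allows resume
# Demonstrated below on an ordinary file standing in for a disk:
dd if=/dev/urandom of=disk.img bs=1M count=4 status=none   # fake "disk"
dd if=disk.img of=backup.img bs=1M conv=fsync status=none  # duplicate it
cmp -s disk.img backup.img && echo "copy verified"         # byte-for-byte check
```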

Sorry for the late reply and thanks a lot for your advice!  After
Christian's first reply, I had already started shifting away from
OCFS2 and over to NFS/ext4.  That meant no more fsck runs, and
copying files to external hard drives so that I could reformat.
So, in terms of the danger of data loss, I am fine now!  Thank you!

Having said that, when I first started running fsck.ocfs2, I wasn't
aware of the point you raise above -- namely, the harm that each
additional fsck could itself be doing.

Your instructions are very helpful, but I neglected to mention one
important point: we don't have enough spare disk space.  Indeed,
that's a chronic problem with my employer -- having only "just enough"
space.  I have raised it with them for some time, but I think they
consider extra capacity unnecessary, and there's only so much I can do
to draw their attention to the problem.  Hopefully, I can use what has
happened as further evidence that we need more hardware resources...

> After that, you can reboot the SAN and immediately copy off as many
> files as you can.
> Once your critical data is copied and verified, then you can debug the
> SAN and file system with a little less fear. :-)
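The copy-and-verify step Jim describes might look like this sketch; all file and directory names are placeholders:

```shell
# Sketch of "copy and verify": checksum the originals, copy, then check
# the copies against the manifest. All file names are placeholders.
mkdir -p src dst
echo "processed results" > src/run1.dat
(cd src && sha256sum run1.dat) > manifest.sha256  # record checksums
cp -a src/run1.dat dst/run1.dat                   # copy to backup disk
(cd dst && sha256sum -c ../manifest.sha256)       # prints "run1.dat: OK"
```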

So, back to one of my earlier ideas.  If I knew that some part of the
directory structure was corrupted, would it be possible to "edit" the
on-disk data structures of the directories (i.e., using debugfs.ocfs2,
for example, which I believe has an ext4 equivalent, debugfs) to "fix"
the problem?  If this were to happen again, I would like to consider
this as one option.
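For the ext4 side, a minimal sketch of that idea using debugfs (the ext4 counterpart of debugfs.ocfs2), run against a throwaway image file rather than a real device. Inspect read-only first; open with -w only once a full backup exists:

```shell
# Sketch only: debugfs can inspect (and, with -w, edit) ext4 metadata.
# fs.img stands in for the real block device; never experiment on the
# only copy of the data.
dd if=/dev/zero of=fs.img bs=1M count=8 status=none
mkfs.ext4 -q -F fs.img                # throwaway filesystem image
debugfs -w -R "mkdir /data" fs.img    # write mode: create a dir to examine
debugfs -R "ls -l /" fs.img           # read-only: list root entries
debugfs -R "stat /data" fs.img        # inode details for a suspect dir
```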

> As for SANs, sorry I'm not familiar with ocfs2 and don't know your
> configuration.  However, many commercial SANs I have seen come in for
> recovery are composed of 5 or 6 layers.  One challenge is identifying
> the source of errors while not making things worse.
> The lowest layer is the individual SSD/HDD drives themselves.  These
> are formed into hardware or software low-level RAIDs, then formed
> again into a few large RAIDs.  These second-level RAIDs are gathered
> into "pools" or "tiers" of storage for use by the SAN system.  The pools
> are often divided into system, snapshot, and logical volumes by the SAN
> software.  The logical volumes are for user data; system for mapping
> user blocks to pool storage, and snapshot area for internal system
> backups if configured.
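On a Linux-based stack, those layers can be walked bottom-up with standard tools. A sketch, assuming md RAID plus LVM underneath; the device and array names are placeholders, and most of these need root on the real hardware:

```shell
# Hedged sketch: inspecting each layer without writing anything.
smartctl -a /dev/sda                  # layer 1: individual drive health
mdadm --detail /dev/md0               # layer 2: low-level RAID state
pvs; vgs; lvs                         # pools and logical volumes (LVM)
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT    # the whole stack at a glance
```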

So, the problem was at one of the upper layers.  The hard disks appear
to be healthy.  While one server was writing to the OCFS2 file system,
it was restarted because it froze.  However, whether something within
the SAN caused it to freeze, I don't know.  Earlier in this thread,
Fernando mentioned that the I/O access times were slow -- perhaps some
misconfiguration of the VLAN caused the writing to take too long,
making the server freeze.
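A quick way to quantify the slowness Fernando saw is to time small synchronous writes; if a probe like this reports seconds rather than milliseconds, a frozen writer becomes plausible. The target path is a placeholder:

```shell
# Rough write-latency probe: 100 x 4 KiB synchronous writes. dd reports
# the elapsed time and throughput on its last (stderr) line.
dd if=/dev/zero of=probe.dat bs=4k count=100 oflag=dsync 2>&1 | tail -n 1
rm -f probe.dat
```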

> The worst case is a failed rebuild of a live low-level RAID.  A rebuild
> overwrites all the data on that RAID; a failure will corrupt the
> second-level RAID, which corrupts the pool, which corrupts the system
> mapping and logical user data. When that happens you don't know what you
> have, and you don't know where it is.
> Is your SAN healthy?  One possibility is a failing HDD; trying to read
> bad sectors causes an access delay, bad data causing a bad inode number?

The SAN is healthy.  All of the lights are fine and the hard drives appear fine.

For now, I've moved everything off to external hard drives,
re-formatted, and have moved everything back.  Users have begun to use
this ext4/NFS combination and it seems ok so far.  Perhaps some day,
in another job, I'll revisit OCFS2.  But for now, I'll stick with this
and am very glad that those 7 months of data processing time do not
have to be redone!
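For anyone retracing the move, the replacement setup might look roughly like this on the server side; the device name, mount point, and client subnet are all assumptions:

```shell
# Sketch of the ext4-over-NFS replacement (server side). Run only after
# the data has been copied off; mkfs destroys whatever is on the volume.
mkfs.ext4 /dev/mapper/san_vol                    # placeholder device
mkdir -p /export/data
mount /dev/mapper/san_vol /export/data
echo "/export/data 192.0.2.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra                                     # publish the export
```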

Thank you for your help and reply!!

