
Re: [tlug] Help with fsck and ocfs2 (or even ext4?)...



Hoi,

On Mon, Sep 20, 2021 at 01:21:17AM +0800, Raymond Wan wrote:
> [..]
> Actually, the vendor of the SAN performed the initial installation (I won't
> say who the vendor was, but let's say their name rhymes with "Dell" :-P ).
> And they used ext4. Since they're the experts, I didn't question it.  Within
> minutes of using it on our cluster, files started mysteriously disappearing.
> It was quite frustrating.

ext4 is fine - as long as you ensure that, at any time, only one of the
systems that can "see" that block device actually has it mounted.
Mounting and writing to the block device from multiple systems at once is
asking for havoc, because each system "thinks" it has exclusive access.
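
A minimal sketch of the only safe ext4 pattern, assuming the SAN LUN shows
up as /dev/sdb1 (device and mount point names are just placeholders):

  # node-a is the ONLY system allowed to have it mounted:
  mount -t ext4 /dev/sdb1 /mnt/shared

  # before node-b may touch it, node-a has to cleanly unmount first:
  umount /mnt/shared
  # ...and only then, on node-b:
  mount -t ext4 /dev/sdb1 /mnt/shared

Anything less strict and each node caches and rewrites metadata behind the
other's back, which is one way files end up "mysteriously disappearing".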

> I asked on ServerFault and a couple of people clarified to me that ext4
> wouldn't work.  I still don't understand it...I thought a SAN could look
> after the disk the same way a server looks after an ext4 disk that is NFS
> exported...

SAN here means that only block devices are handed out. If multiple systems
need access to the same device, they have to coordinate, and that is what
ocfs2 or gfs2 provides. With NFS, again only one system accesses the block
device directly, and the locking/coordination is done as part of NFS.
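
For comparison, a very rough ocfs2 sketch (the o2cb cluster stack and
/etc/ocfs2/cluster.conf setup is omitted; device name and label are made up):

  # format the shared LUN once, with slots for up to 4 nodes:
  mkfs.ocfs2 -L shared_vol -N 4 /dev/sdb1

  # then every node running the o2cb cluster stack may mount it at the
  # same time - the cluster stack coordinates the locking:
  mount -t ocfs2 /dev/sdb1 /mnt/shared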

If you want to reproduce that havoc of ext4 mounted from multiple systems
at once, or try out gfs2/ocfs2 safely, I recommend this setup (a rough
sketch follows below):
- a Linux system acting as hypervisor
- multiple KVM guests
- the hypervisor exporting one or more iSCSI devices
- the guests accessing these - they are the shared block devices.
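
Very roughly, and leaving out the portal/ACL details - the names, IQN and
addresses below are invented for the sketch:

  # on the hypervisor: export a file-backed LUN over iSCSI (targetcli)
  targetcli /backstores/fileio create shared0 /var/lib/iscsi/shared0.img 10G
  targetcli /iscsi create iqn.2021-09.local.lab:shared0

  # on each KVM guest: discover and log in to the same target
  iscsiadm -m discovery -t sendtargets -p 192.168.122.1
  iscsiadm -m node -T iqn.2021-09.local.lab:shared0 -p 192.168.122.1 --login

After the login every guest sees the same LUN as a local /dev/sdX - mount it
as ext4 from two guests at once and watch it break, or put ocfs2/gfs2 on it
and watch it behave.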

NFS would be easier to operate, but when the one NFS server is not
clustered and goes down, the whole storage is unavailable.
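
If you go that route, the whole thing is just an export line plus a client
mount, e.g. (path, hostname and subnet are placeholders):

  # /etc/exports on the single NFS server:
  /srv/shared  192.168.122.0/24(rw,sync,no_subtree_check)

  # on each client:
  mount -t nfs nfs-server:/srv/shared /mnt/shared
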
Chris

