Mailing List Archive



Re: [tlug] solaris 10 ZFS and Japanese



Rabbs helped me find some unofficial answers to my questions (below)!  
(Thanks!)  There's no official ZFS release schedule yet, but I think 
I'll wait a while for it.

Cheers,
11011011

>1.  How easy (or possible) is it to install the root/boot filesystem on ZFS RAID1?
It is part of the design, and it should be very straightforward.  Having
said that, because the root filesystem is affected, it's part of the
boot re-architecture project (currently going into Nevada, the
OpenSolaris development builds, which are also released as Solaris
Express).  You should be able to get RAID-1-like functionality very
easily out of the box.  If you need to add disks (e.g. upgrade to
faster/larger disks) you should even be able to do that without taking
the OS down.  This also has implications for Live Upgrade (i.e. the
ability to upgrade the OS while it's running, then reboot into the
upgrade during a short service outage).
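For flavor, the administration this design aims at might look something like the following. This is a hypothetical sketch using the zpool syntax of the day; the pool name and device names are made up, and root-pool boot support was not yet shipped at the time of writing:

```shell
# Create a mirrored (RAID-1-like) pool from two whole disks.
zpool create tank mirror c0t0d0 c0t1d0

# Later, swap in a larger/faster disk without taking the OS down;
# ZFS resilvers the data onto the replacement in the background.
zpool replace tank c0t0d0 c2t0d0

# Or turn an existing single disk into one side of a mirror after the fact.
zpool attach tank c0t1d0 c3t0d0

# Check resilver progress and overall pool health.
zpool status tank
```

These commands need a live Solaris/ZFS system and real disks, so treat them as an illustration of the intended workflow rather than something to paste in blindly.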


>2.  Does the mirrored self-healing data work with all IDE hardware (i.e.,
>what does "repair" mean and how does it relocate unwritable disk blocks)?
Yes, it works with all hardware. It's built into the design of the 
filesystem, so it doesn't matter whether you're using a USB memory stick 
or a 2-petabyte SAN; as long as it's a block device, Solaris will be 
able to handle it. Relocating is actually nothing new: it's a matter of 
recalculating the missing data (e.g. one bit has flipped, so figure out 
what it should be from a checksum; the checksum and reconstruction 
algorithms are where the real magic of ZFS's self-healing lies) and then 
writing it somewhere else. I assume the bad block is then marked, but I 
don't know that for certain. DOS had features like this back in the 
80s; they just weren't as sophisticated, nor done on the fly. But that 
was also in the days of horribly unreliable MFM hard drives.
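The read-verify-repair loop described above can be sketched in a few lines of shell. This is a toy model, not ZFS code: two ordinary files stand in for the mirrored copies of a block, and sha256sum stands in for ZFS's per-block checksums:

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)

# Write a "block" and mirror it, recording its checksum at write time.
printf 'important data' > "$workdir/copy1"
cp "$workdir/copy1" "$workdir/copy2"
good_sum=$(sha256sum "$workdir/copy1" | cut -d' ' -f1)

# Simulate silent corruption: one character flips in copy1, with no
# I/O error reported by the "hardware".
printf 'imp0rtant data' > "$workdir/copy1"

# On read, verify the checksum; if it fails, heal from the intact mirror.
if [ "$(sha256sum "$workdir/copy1" | cut -d' ' -f1)" != "$good_sum" ]; then
    cp "$workdir/copy2" "$workdir/copy1"
fi

cat "$workdir/copy1"   # reads back the repaired data
```

The point the toy captures is that the corruption is caught by the checksum on read, not by the device, which is exactly why this works on any block device.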

Obviously, there are levels to this self-healing. You can get some of 
it with just one device, but if the entire device fails, you have a 
problem. So you'd need two or more devices (behind different 
controllers, on different arrays, different switches, different power 
supplies) to get the ultimate reliability. I have personally seen, and 
been involved in, situations where a certain vendor's array firmware 
silently corrupted data! ZFS would make some of these errors detectable 
at the time of the write.
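Hunting down exactly that kind of silent corruption maps onto a ZFS scrub, which reads every allocated block and verifies it against its checksum. A hedged sketch of the commands (the pool name is an assumption):

```shell
# Walk every allocated block in the pool and verify its checksum;
# on a mirror, copies that fail are rewritten from a good copy.
zpool scrub tank

# Report any checksum errors found, and which device returned bad data.
zpool status -v tank
```

Running a scrub periodically is how you catch firmware-style silent corruption before the other mirror side also degrades.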


