- use raid_init_component_labels instead of hand-rolling a component_label.
(missed doing this for the rf_used_spare case when updating component labels).
- fix serial number initialization in raid_init_component_labels
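To illustrate the idea, a minimal sketch (all struct layouts and field names here are invented stand-ins, not RAIDframe's actual definitions): one helper fills in the common label fields, taking the serial number from the RAID set itself rather than a hand-rolled or stale value.

    /* Illustrative sketch only; layouts are invented, not RAIDframe's. */
    struct component_label {
        int version;
        int serial_number;
        int mod_counter;
        int clean;
        int status;
    };

    struct rf_raid {
        int serial_number;      /* serial number of the whole RAID set */
        int mod_counter;
    };

    void
    raid_init_component_labels(struct rf_raid *raidPtr,
        struct component_label *clabel)
    {
        clabel->version = 1;
        /* the fix: take the serial number from the set itself */
        clabel->serial_number = raidPtr->serial_number;
        clabel->mod_counter = raidPtr->mod_counter;
        clabel->clean = 0;
        clabel->status = 0;
    }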
Fix the handling of component labels in the event
of a real component failure (or a simulated failure):
- add 'numNewFailures' to keep track of the number of disk failures
since mod_counter was last updated for each component label.
- make sure we call rf_update_component_labels() upon any component failure,
real or simulated.
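Roughly, the failure path now looks like the sketch below (the struct and the simplified signature are stand-ins for the real driver code, which lives in the RAIDframe sources):

    /* Simplified sketch of the failure path; names mirror the text above. */
    struct rf_raid {
        int numFailures;        /* total failed components */
        int numNewFailures;     /* failures since mod_counter last bumped */
    };

    void rf_update_component_labels(struct rf_raid *);  /* simplified */

    void
    component_failed(struct rf_raid *raidPtr)
    {
        raidPtr->numFailures++;
        raidPtr->numNewFailures++;
        /* update labels on every failure, real or simulated */
        rf_update_component_labels(raidPtr);
    }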
- make current default label values available everywhere
- make sure numBlocks and blockSize get initialized in all component labels
- check that the component size is less than or equal to the partition size
when autoconfiguring (see the sketch after this list)
- make component_label variables more consistent (==> clabel)
- re-work incorrect component configuration code
- re-work disk configuration code
- clean up the initial configuration of raidPtr info
- add auto-detection of components and RAID sets (Disabled, for now)
- allow / on RAID sets (Disabled, for now)
- rename "config_disk_queue" to "rf_ConfigureDiskQueue" and properly prototype
in rf_diskqueue.h
- protect some headers with #if _KERNEL (XXX this needs to be fixed properly)
and clean up header formatting.
- expand the component labels (yes, they should be backward/forward compatible)
- other bits and pieces (some function names are still bogus, and will get
changed soon)
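For the size check mentioned above, the comparison amounts to something like this (field names are stand-ins for the component-label and disklabel fields):

    #include <stdint.h>

    struct clabel_info { uint32_t numBlocks; };  /* from the component label */
    struct part_info   { uint32_t p_size;    };  /* from the disklabel */

    /* Reject a component whose label claims more blocks than the
     * partition actually has; equality is fine. */
    static int
    component_size_ok(const struct clabel_info *clabel,
        const struct part_info *part)
    {
        return clabel->numBlocks <= part->p_size;
    }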
- fire up a new thread for parity re-writes, copybacks, and reconstructs.
The ioctl's which trigger these actions now return immediately.
- add progress accounting for the above actions (see the sketch after this list).
- minor rototillage of rf_netbsdkintf.c to deal with all of the above,
and clean up arguments. While we're here, clean up raidstrategy() and nuke
a bunch of unused debugging stuff.
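The progress accounting can be pictured as in this hedged sketch (rf_progress and rewrite_parity_stripe are invented names, not the driver's actual structures): the worker thread updates the counters while the triggering ioctl returns at once, and a status ioctl just reads completed/total back.

    #include <stdint.h>

    struct rf_progress {
        uint64_t completed;     /* units of work finished so far */
        uint64_t total;         /* total units of work */
        int      in_progress;   /* nonzero while the thread runs */
    };

    void rewrite_parity_stripe(uint64_t stripe);  /* invented helper */

    /* Body of the kernel thread fired off by the ioctl. */
    static void
    parity_rewrite_thread(struct rf_progress *p, uint64_t nstripes)
    {
        p->total = nstripes;
        p->in_progress = 1;
        for (p->completed = 0; p->completed < p->total; p->completed++)
            rewrite_parity_stripe(p->completed);
        p->in_progress = 0;
    }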
RAIDframe + softdeps now play very nicely together.
A while back, a fix went into the RAIDframe driver to stop it from eating
too much kernel memory when writing data. But that fix had a nasty
side-effect of hurting write performance (*much* more than I thought it
would). These changes nuke that "fix", and instead put in a more reasonable
mechanism for limiting the number of simultaneous I/Os in flight for each
RAID device.
The result is a noticeable improvement in write throughput. The End.
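The mechanism is essentially an in-flight counter per device, as in this hedged sketch (the struct and the limit are invented; tsleep() and wakeup() are the standard kernel primitives):

    #include <sys/param.h>
    #include <sys/proc.h>

    #define RAID_MAX_OUTSTANDING 32          /* invented limit */

    struct raid_softc { int outstanding; };  /* invented, simplified */

    static void
    raid_start_io(struct raid_softc *rs)
    {
        /* hold new requests while the device is at its limit */
        while (rs->outstanding >= RAID_MAX_OUTSTANDING)
            tsleep(&rs->outstanding, PRIBIO, "raidio", 0);
        rs->outstanding++;
        /* dispatch the request to the component disks here */
    }

    static void
    raid_done_io(struct raid_softc *rs)
    {
        rs->outstanding--;
        wakeup(&rs->outstanding);            /* let a held request in */
    }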
Maintain a clean bit for RAID 0 sets as well. This is somewhat bogus, as
RAID 0 does not have any parity, but it is slightly cleaner than other
solutions, and it makes the handling of clean bits for RAID 0 consistent
with the handling of clean bits at other RAID levels.
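In effect (hedged sketch, names invented):

    /* RAID 0 has no parity to verify, so its "parity" is always clean. */
    static int
    raid_parity_clean(int raid_level, int parity_good)
    {
        if (raid_level == 0)
            return 1;
        return parity_good;
    }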
Remove PCATCH from the tsleep()'s (they probably shouldn't have been there
in the first place!). Making parity re-writing and copybacks interruptible
will require re-designing how a few things are done (e.g. how memory
is freed for structures shipped off to routines that run asynchronously
relative to the calling routine). Fix a few other tsleep()'s while we're at it.
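The change itself is just the flag removal shown in this sketch (the struct is an invented stand-in, and the old flags are assumed to have looked like PRIBIO | PCATCH):

    #include <sys/param.h>
    #include <sys/proc.h>

    struct rf_raid { int parity_rewrite_in_progress; };  /* invented */

    static void
    wait_for_parity_rewrite(struct rf_raid *raidPtr)
    {
        /* was: tsleep(..., PRIBIO | PCATCH, ...) -- a caught signal could
         * abort the wait, which the cleanup paths could not cope with */
        while (raidPtr->parity_rewrite_in_progress)
            tsleep(&raidPtr->parity_rewrite_in_progress, PRIBIO,
                "rfprw", 0);
    }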
Improve the clean-bit support a little: clean bits now also get frobbed
on partition opening/closing, rather than just at device configuration and
device unconfiguration. Schedule the shutdownhooks() stuff for nukage at a
later date, since it isn't really necessary any more.
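A hedged sketch of the open/close frobbing (all names invented; the real work happens through the component labels):

    struct raid_softc { int opens; int parity_good; };  /* invented */

    void clear_clean_bits(struct raid_softc *);  /* invented helpers */
    void set_clean_bits(struct raid_softc *);

    static void
    raidopen_hook(struct raid_softc *rs)
    {
        if (rs->opens++ == 0)
            clear_clean_bits(rs);   /* first open: set may get dirtied */
    }

    static void
    raidclose_hook(struct raid_softc *rs)
    {
        if (--rs->opens == 0 && rs->parity_good)
            set_clean_bits(rs);     /* last close: safe to mark clean */
    }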