device in the rare event that the one it wants is already in use.
Thanks to Wolfgang Stukenbrock for noticing the bug and filing the PR.
This fix addresses PR#14862.
the stuff that used to live in rf_types.h, rf_raidframe.h, rf_layout.h,
rf_netbsd.h, rf_raid.h, rf_decluster.h, and a few other places.
Believe it or not, when this is all done, things will be cleaner.
No functional changes to RAIDframe.
being shut down, then unconfigure the RAID set too. This fixes a number
of issues with doing proper unconfigures/shutdowns of multi-level RAID
sets.
Thanks to Jason Thorpe and Bill Squier for the ideas/suggestions on
how/where to do this, and to Bill Squier for testing.
Thus we choose "4 * number_of_columns" as a more reasonable
value (until someone comes up with something better).
This should properly address PR#11989.
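An illustrative sketch of the sizing rule; the helper below is
hypothetical, not the committed code:

    /*
     * Sketch: scale the cap on outstanding requests with the
     * width of the array instead of using a fixed constant.
     */
    static int
    rf_default_openings(int num_columns)
    {
            return 4 * num_columns;
    }
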
the number of partitions is > OLDMAXPARTITIONS. This is better
than silently truncating the label: we don't want to throw away
partitions when an old disklabel binary is used on a label with
> 8 partitions. From Enami Tsugutomo.
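A hedged sketch of the check; the errno shown is an assumption, not
necessarily what the committed change returns:

    /* Refuse to convert a label that an old binary cannot represent,
     * rather than silently dropping partitions past the 8th. */
    if (lp->d_npartitions > OLDMAXPARTITIONS)
            return (ENOTTY);        /* assumed error code */
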
rather than assigning to the whole field, set or clear individual flags,
so that the B_BUSY and B_INVAL flags remain set. This allows us
to assert in brelse() that B_BUSY is set, which is the purpose of
all this.
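Illustratively (the exact flags touched vary by call site):

    /* Set or clear only the bits of interest... */
    bp->b_flags |= B_READ;
    bp->b_flags &= ~B_DONE;
    /* ...rather than "bp->b_flags = B_READ;", which would clobber
     * B_BUSY and B_INVAL.  That in turn permits, in brelse(): */
    KASSERT(bp->b_flags & B_BUSY);
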
reported by 'systat iostat' and friends are now much more correct for
RAIDframe devices. Thanks to Andrew Doran for poking me about this,
and for suggestions on and review of the changes.
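The accounting boils down to bracketing each transfer with the
disk(9) hooks; a sketch using the modern signatures, with the softc
field name assumed:

    /* Mark the device busy when the transfer starts... */
    disk_busy(&rs->sc_dkdev);               /* sc_dkdev: assumed field */
    /* ...and account for the bytes actually moved on completion. */
    disk_unbusy(&rs->sc_dkdev, bp->b_bcount - bp->b_resid,
        (bp->b_flags & B_READ) != 0);
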
long as either the master or the mirror is available for each
of the N/2 'rows' of the set. (No, RAIDframe doesn't do N-way mirroring..)
Thanks to Manuel Bouyer for noting the problem.
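A sketch of the per-pair availability test; the layout and names here
are hypothetical, not RAIDframe's actual structures:

    /*
     * A 2N-component RAID 1 set is configurable as long as each
     * master/mirror pair has at least one live member.
     */
    static int
    raid1_pairs_ok(const int *alive, int ncols)
    {
            int i;

            for (i = 0; i < ncols / 2; i++)
                    if (!alive[2 * i] && !alive[2 * i + 1])
                            return 0;       /* both halves failed */
            return 1;
    }
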
renegades, and must be handled correctly. In particular, they should
be added to their old auto-config set, but then immediately released.
Doing otherwise means that they potentially end up in a
different (and competing!) RAID set which may auto-configure in place
of the correct one, and cause all sorts of chaos at auto-configure
time.
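A hedged sketch of that handling; both helpers are hypothetical
stand-ins for the real auto-configuration code:

    /* Match the failed component to the set its label claims... */
    cset = rf_find_matching_set(clabel);     /* hypothetical helper */
    if (cset != NULL) {
            rf_add_to_set(cset, clabel);     /* hypothetical helper */
            /*
             * ...then release it immediately, so it can never seed
             * a competing set at auto-configure time.
             */
            rf_release_component(cset, clabel);
    }
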
change these from bp->b_un.b_addr to bp->b_data, as well. This also
allows us more flexibility to experiment with other data buffer types
hung off of struct buf.
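The mechanical part of the change is a one-for-one substitution, e.g.:

    /* before: reach through the union member */
    memcpy(bp->b_un.b_addr, src, len);
    /* after: b_data leaves the buffer representation opaque */
    memcpy(bp->b_data, src, len);
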
- use raid_init_component_labels instead of hand-rolling a component_label.
(missed doing this for the rf_used_spare case when updating component labels).
- fix serial number initialization in raid_init_component_labels
of a real component failure (or a simulated failure):
- add 'numNewFailures' to keep track of the number of disk failures
since mod_counter was last updated for each component label.
- make sure we call rf_update_component_labels() upon any component failure,
real or simulated.
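A sketch of the bookkeeping on a failure event (the second argument
to rf_update_component_labels() is assumed):

    /* On any component failure, real or simulated: */
    raidPtr->numNewFailures++;  /* failures since last mod_counter bump */
    rf_update_component_labels(raidPtr, RF_NORMAL_COMPONENT_UPDATE);
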
- make current default label values available everywhere
- make sure numBlocks and blockSize in component labels get initialized
for all component labels
- check that the component size is less than or equal to the partition size
when autoconfiguring (see the sketch after this list)
- make component_label variables more consistent (==> clabel)
- re-work incorrect component configuration code
- re-work disk configuration code
- cleanup initial configuration of raidPtr info
- add auto-detection of components and RAID sets (Disabled, for now)
- allow / on RAID sets (Disabled, for now)
- rename "config_disk_queue" to "rf_ConfigureDiskQueue" and properly prototype
in rf_diskqueue.h
- protect some headers with #if _KERNEL (XXX this needs to be fixed properly)
and cleanup header formatting.
- expand the component labels (yes, they should be backward/forward compatible)
- other bits and pieces (some function names are still bogus, and will get
changed soon)
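For the component-size check flagged above, a hedged sketch; the
helper and its parameters are hypothetical:

    /*
     * A label claiming more blocks than its partition actually has
     * cannot be trusted; skip such a component at autoconfig time.
     */
    static int
    rf_component_fits(const RF_ComponentLabel_t *clabel, uint64_t psize)
    {
            return clabel->numBlocks <= psize;
    }
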
- fire up a new thread for parity re-writes, copybacks, and reconstructs.
The ioctls that trigger these actions now return immediately (see the
sketch after this list).
- add progress accounting for the above actions.
- minor rototillage of rf_netbsdkintf.c to deal with all of the above.
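A sketch of the fire-and-return pattern, shown with the modern
kthread(9) interface rather than the one the original code used;
rf_RewriteParityThread stands for the thread body described above:

    /* requires <sys/kthread.h> */
    error = kthread_create(PRI_NONE, 0, NULL,
        rf_RewriteParityThread, raidPtr, NULL, "raidparity");
    if (error)
            return (error);
    return (0);     /* ioctl returns; progress is polled separately */
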
and cleanup arguments. While we're here, cleanup raidstrategy(), and nuke
a bunch of unused debugging stuff.
RAIDframe + softdeps now play very nicely together.
RAIDframe driver to stop it from eating too much kernel memory when
writing data. But that fix had a nasty side-effect of hurting write
performance (*much* more than I thought it would). These changes nuke
that "fix", and instead put in a more reasonable mechanism for limiting
the number of simultaneous I/Os that can be in flight for each RAID device.
The result is a noticeable improvement in write throughput. The End.
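A hedged sketch of such a throttle, written with modern
synchronization primitives and hypothetical softc fields (the
committed code predates these interfaces):

    /* Admission: block once 'openings' I/Os are already in flight. */
    mutex_enter(&rs->sc_iolock);
    while (rs->sc_inflight >= rs->sc_openings)
            cv_wait(&rs->sc_iocv, &rs->sc_iolock);
    rs->sc_inflight++;
    mutex_exit(&rs->sc_iolock);

    /* Completion: retire the request and wake one waiter. */
    mutex_enter(&rs->sc_iolock);
    rs->sc_inflight--;
    cv_signal(&rs->sc_iocv);
    mutex_exit(&rs->sc_iolock);
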
clean bit. This is somewhat bogus as RAID 0 does not have any parity,
but it is slightly cleaner than other solutions, and makes the handling
of clean bits for RAID 0 consistent with the handling of clean bits at
other RAID levels.