- add two new members to the component label:
        u_int numBlocksHi
        u_int partitionSizeHi
  and store the top 32 bits of the real number of blocks and
  partition size. Modify rf_print_component_label(),
  rf_does_it_fit(), rf_AutoConfigureDisks() and
  rf_ReconstructFailedDiskBasic().
- call disk_blocksize() after disk_attach() [ from mlelstv ]
- shift the block number relative to DEV_BSHIFT in raidstart()
  and InitBP() so that accesses work for non-512-byte devices
  (see the sketch below). [ from mlelstv ]
- update rf_getdisksize() to use the new getdisksize() [ from
  mlelstv. this part needs a separate change for netbsd-5. ]
reviewed by: oster, christos and darrenr
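A minimal sketch of the label change and the block-number shift,
assuming a raidPtr->logBytesPerSector field holding log2 of the
component sector size (treat any name not listed above as an
assumption, not the committed code):

    /* store a 64-bit block count (nblk) across the old 32-bit
       field and the new Hi member */
    clabel->numBlocks   = nblk & 0xffffffff;
    clabel->numBlocksHi = nblk >> 32;

    /* raidstart()/InitBP(): bp->b_blkno counts DEV_BSIZE (512-byte)
       units; rescale it to the component's native sector size */
    raid_addr = bp->b_blkno << DEV_BSHIFT >> raidPtr->logBytesPerSector;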
For RAID sets that can never have a parity map, make the parity map
ioctls fail with EINVAL.
This makes `raidctl -m` print a scary-looking error on such sets, which
is an improvement over the previous behavior of falsely claiming that
the parity map would be enabled on the next configuration.
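A minimal sketch of the check, assuming the in-core parity map is a
pointer that is NULL on such sets (names are illustrative):

    /* in each parity map ioctl handler: sets that can never have
       a parity map simply reject the request */
    if (raidPtr->parity_map == NULL)
            return EINVAL;      /* raidctl -m reports the error */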
first component is missing. (Since the merging just ORs the maps,
this isn't a big deal; it merely overestimates the amount of
checking that needs to be done.)
Drastically reduces the amount of time spent rewriting parity after an
unclean shutdown by keeping better track of which regions might have had
outstanding writes. Enabled by default; can be disabled on a per-set
basis, or tuned, with the new raidctl(8) commands.
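The bookkeeping can be sketched as a per-region dirty bitmap;
everything below (names, region count) is a hypothetical
illustration, not the committed implementation:

    #include <stdint.h>

    #define PM_REGIONS 4096                  /* assumed region count */

    struct pm_sketch {
            uint32_t dirty[PM_REGIONS / 32]; /* persisted across crashes */
            unsigned wcount[PM_REGIONS];     /* writes in flight per region */
    };

    /* mark a region dirty (and flush the bit to disk) before its
       first outstanding write; after an unclean shutdown only dirty
       regions need their parity rewritten */
    static void
    pm_write_begin(struct pm_sketch *pm, unsigned r)
    {
            if (pm->wcount[r]++ == 0)
                    pm->dirty[r / 32] |= 1u << (r % 32);
    }

    /* the bit is cleared lazily once the region has been idle for a
       while, so steady writes to one region cost one flush, not one
       flush per write */
    static void
    pm_write_done(struct pm_sketch *pm, unsigned r)
    {
            pm->wcount[r]--;
    }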
Discussed on tech-kern@ to a general air of approval; exhortations to
commit from mrg@, christos@, and others.
Thanks to Google for their sponsorship, oster@ for mentoring the
project, assorted developers for trying very hard to break it, and
probably more I'm forgetting.
partutil.c::getdiskinfo to use it to get disk geometry info.
Use the DIOCGWEDGEINFO ioctl to get information about the partition
size; if the disk driver doesn't support it, fall back to the old
DIOCGDINFO. This adds support for wedge-like devices (LVM logical
volumes, ZFS zvol partitions) to newfs and other tools.
No objections on tech-userlevel@.
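A userland sketch of the fallback, assuming an already-open raw
device descriptor (error handling and unit normalization trimmed;
partutil.c does more):

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/disk.h>
    #include <sys/disklabel.h>
    #include <sys/dkio.h>
    #include <stdint.h>

    /* prefer wedge info; fall back to the disklabel */
    static int64_t
    part_size(int fd)
    {
            struct dkwedge_info dkw;
            struct disklabel lab;

            if (ioctl(fd, DIOCGWEDGEINFO, &dkw) == 0)
                    return (int64_t)dkw.dkw_size;     /* DEV_BSIZE units */
            if (ioctl(fd, DIOCGDINFO, &lab) == 0)
                    return (int64_t)lab.d_secperunit; /* d_secsize units */
            return -1;
    }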
There are still about 1600 left, but they have ',' or /* ... */
in the actual variable definitions, which my awk script doesn't handle.
There are also many that need () -> (void).
(The script does handle misordered arguments.)
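For context, the conversion in question, on a hypothetical function
(K&R definition to ANSI, including the misordered-argument case the
script handles):

    /* before: K&R definition; declarations need not match the
       parameter order */
    int
    f(a, b)
            char *b;
            int a;
    {
            return a + *b;
    }

    /* after: ANSI definition */
    int
    f(int a, char *b)
    {
            return a + *b;
    }

    /* empty parameter lists become explicit */
    int g(void);        /* was: int g(); */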
disk queue is being held. Work around this by dropping the lock before
bdev_strategy(), and re-grabbing the lock afterwards. This is a
temporary measure until I get to gutting this queue locking code.
There has been some success with this in addressing PR#39993.
This patch is from Antti Kantee. Thanks!
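The workaround pattern, sketched with a hypothetical lock member
(the real code operates on RAIDframe's per-queue lock):

    /* don't call into the underlying driver with the queue lock held */
    mutex_exit(&queue->lock);
    bdev_strategy(bp);          /* may sleep or take driver locks */
    mutex_enter(&queue->lock);  /* queue state must be re-checked here */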
we need to account for that. Failure to do so means we can end up
waiting forever for writes we think are outstanding, but which have
already completed.
Addresses the RAIDframe part of PR#40569. Thanks to Matthias Scheler
for reporting the issue and verifying the fix.
device doesn't support flushing the cache. Fixes an issue (reported
privately) where ST39120A drives are not capable of flushing the
cache, and RAIDframe was incessantly complaining.
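A sketch of the quieted path, assuming the flush is issued via
DIOCCACHESYNC on the component vnode (locals are illustrative):

    int force = 1;
    error = VOP_IOCTL(vp, DIOCCACHESYNC, &force, FWRITE, NOCRED);
    if (error != 0 && error != ENODEV && error != EOPNOTSUPP)
            printf("raid%d: cache flush failed (%d)\n", unit, error);
    /* ENODEV/EOPNOTSUPP: the device can't flush; stay quiet */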
it's not on a free list.
Also change buf_init() to not automatically mark buffers 'busy', since
this only makes sense for bufcache buffers.
Mark all buf_init'd buffers 'busy' in the places where they ought to be
flagged as such, so as not to confuse the buffer cache.
Fixes PR 38923.
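Illustrative sketch only; the flag and helper names below match the
NetBSD buffer cache of that era but should be treated as assumptions:

    /* buf_init() no longer sets 'busy', so a buffer that will be
       visible to the buffer cache must be marked explicitly */
    buf_t *bp = getiobuf(NULL, true);   /* getiobuf() buf_init()s it */
    bp->b_cflags |= BC_BUSY;            /* mark before handing it off */
    /* ... perform the I/O ... */
    putiobuf(bp);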
- Don't bother taking the v_interlock or bumping b_vp->v_numoutput --
there won't be any other writers for this bp, and so there's no point
doing this locking song'n'dance.
Patch from Juergen Hannken-Illjes. Thanks!!!
Addresses PR#38856. With this change I've been unable to
replicate the hard hangs.
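The removed pattern, for reference (the usual accounting when other
writers can see the vnode's buffers; a private bp has exactly one
writer, so none of this is needed):

    mutex_enter(vp->v_interlock);   /* assumed kmutex_t * interlock */
    vp->v_numoutput++;              /* writes-in-flight accounting */
    mutex_exit(vp->v_interlock);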
Reconmap used to have one pointer for every reconstruction unit. This
does not scale well in the land of 1TB disks, where some 100MB+ of
"status pointers" are required for typical configurations. Convert
the reconstruction code to use a "sliding status window" which will
scale nicely regardless of the number of stripes/reconstruction units
in the RAID set. Convert the main reconstruction loop to rebuild the
array in chunks rather than in one big lump.
As part of these changes, introduce a function to kick any waiters on
the head separation callback list, and use that in the main
reconstruction event queue to wake up the waiters if things have
stalled. (I believe this may fix a race condition that could occur,
at least at the very end of a disk, during reconstruction under heavy
IO load.)
Thanks to Brian Buhrow for all his help, support, and patience in
testing these changes.
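A sketch of the sliding status window, with hypothetical names (the
real structures live in RAIDframe's reconmap code):

    #include <stdint.h>
    #include <string.h>

    /* keep status pointers only for a fixed-size window of
       reconstruction units (RUs), not for the whole disk; slide the
       window forward as earlier chunks finish */
    struct recon_window {
            void    **status;   /* w->size entries */
            uint64_t  lo;       /* first RU covered */
            uint64_t  size;     /* window size, independent of disk size */
    };

    static void *
    rw_lookup(struct recon_window *w, uint64_t ru)
    {
            if (ru < w->lo || ru >= w->lo + w->size)
                    return NULL;    /* outside the window */
            return w->status[ru - w->lo];
    }

    /* precondition (assumed): every RU below new_lo is finished,
       and new_lo - w->lo <= w->size */
    static void
    rw_slide(struct recon_window *w, uint64_t new_lo)
    {
            uint64_t shift = new_lo - w->lo;

            memmove(w->status, w->status + shift,
                (w->size - shift) * sizeof(*w->status));
            memset(w->status + (w->size - shift), 0,
                shift * sizeof(*w->status));
            w->lo = new_lo;
    }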