deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map). Try to deal with this:
* Group all information about the backend allocator for a pool in a
separate structure. The pool references this structure, rather than
the individual fields.
* Change the pool_init() API accordingly, and adjust all callers; a
  sketch of the reworked interface follows this entry.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
to become available, but will still fail if it cannot allocate KVA
space for the pages. If this happens, carefully drain all pools using
the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
some pages, and use that information to make draining easier and more
efficient.
* Get rid of PR_URGENT. There was only one use of it, and it could be
dealt with by the caller.
From art@openbsd.org.
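A minimal sketch of what the reworked interface could look like; the field
and parameter names used here (pa_alloc, pa_free, pa_pagesz, pa_list,
palloc) are illustrative assumptions, not quoted from the change:

    #include <sys/queue.h>

    struct pool;                                /* from <sys/pool.h> */

    /*
     * One structure describes a backend allocator; every pool using that
     * backend points at it and is linked on pa_list, so the backend can
     * drain all of them when KVA runs short.
     */
    struct pool_allocator {
            void    *(*pa_alloc)(struct pool *, int);   /* get backing pages */
            void    (*pa_free)(struct pool *, void *);  /* free backing pages */
            unsigned int       pa_pagesz;               /* backend page size */
            TAILQ_HEAD(, pool) pa_list;                 /* pools sharing this backend */
    };

    /* pool_init() now takes the allocator structure instead of the
     * individual alloc/free hooks: */
    void    pool_init(struct pool *pp, size_t size, unsigned int align,
                unsigned int ioff, int flags, const char *wchan,
                struct pool_allocator *palloc);
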
device in the rare event that the one it wants is already in use.
Thanks to Wolfgang Stukenbrock for noticing the bug and filing the PR.
This fix addresses PR#14862.
the stuff that used to live in rf_types.h, rf_raidframe.h, rf_layout.h,
rf_netbsd.h, rf_raid.h, rf_decluster.h, and a few other places.
Believe it or not, when this is all done, things will be cleaner.
No functional changes to RAIDframe.
being shut down, then unconfigure the RAID set too. This fixes a number
of issues with doing proper unconfigures/shutdowns of multi-level RAID
sets.
Thanks to Jason Thorpe and Bill Squier for the ideas/suggestions on
how/where to do this, and to Bill Squier for testing.
Thus we choose "4 * number_of_columns" as a more reasonable
value (until someone comes up with something better).
This is intended to properly address PR#11989.
the number of partitions is > OLDMAXPARTITIONS. This is better
than silently truncating the label (don't want to silently throw
away partitions when using an old disklabel binary on a label with
> 8 partitions). From Enami Tsugutomo.
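A sketch of the check rather than the literal diff; the error code chosen
here (EINVAL) and the surrounding names are assumptions for illustration:

    /* When handing a label to an old 8-partition binary, refuse labels
     * the old structure cannot represent instead of silently dropping
     * the extra partitions. */
    if (lp->d_npartitions > OLDMAXPARTITIONS)
            return (EINVAL);
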
rather than assigning to the whole field, set or clear individual flags,
so that the B_BUSY and B_INVAL flags will remain set.
This allows us to make the assertion in brelse() that B_BUSY is set,
which is the purpose of all this.
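The pattern in question, sketched with example flag names (B_READ, B_DONE,
and B_ERROR stand in for whatever bits a particular caller actually
manipulates):

    #include <sys/buf.h>

    /* Old style: assigning the whole field wipes B_BUSY and B_INVAL. */
    bp->b_flags = B_READ;

    /* New style: only the intended bits change, so B_BUSY and B_INVAL
     * keep whatever values they already had. */
    bp->b_flags |= B_READ;
    bp->b_flags &= ~(B_DONE | B_ERROR);

    /* ...which is what makes the assertion in brelse() safe: */
    KASSERT(bp->b_flags & B_BUSY);
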
reported by 'systat iostat' and friends are now much more correct for
RAIDframe devices. Thanks to Andrew Doran for poking me about this,
and for suggestions on and review of the changes.
long as either the master or the mirror is available for each
of the N/2 'rows' of the set. (No, RAIDframe doesn't do N-way mirroring..)
Thanks to Manuel Bouyer for noting the problem.
renegades, and must be handled correctly. In particular, they should
be added to their old auto-config set, but then immediately released.
Failing to do so means that they potentially end up in a
different (and competing!) RAID set which may auto-configure in place
of the correct one, and cause all sorts of chaos at auto-configure
time.
change these from bp->b_un.b_addr to bp->b_data, as well. This also
allows us more flexibility to experiment with other data buffer types
hung off of struct buf.
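The shape of the change as a sketch; the memcpy() call and the dest
variable are invented for illustration:

    /* Old: reach through the union member. */
    memcpy(dest, bp->b_un.b_addr, bp->b_bcount);

    /* New: use b_data, which leaves room to hang other data buffer
     * types off struct buf later. */
    memcpy(dest, bp->b_data, bp->b_bcount);
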