pool each time a new array was configured. This caused grief for
things like 'vmstat -m', which would end up looping. Make RAIDframe
only initialize PSS bits once.
Pointed out by simonb@. Fix tested by simonb@. Thanks!
render the RAID set completely dead. Instead, we retry the IO a
maximum of RF_RETRY_THRESHOLD times (currently '5'), and then just
return an IO error if the IO fails. This should reduce the damage
caused by having multiple disks appear to fail when the culprit is
really something else (power, controllers, etc.)
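
A minimal sketch of the bounded-retry idea (only RF_RETRY_THRESHOLD comes
from the description above; the structure and function names here are
illustrative, not the actual driver code):

	#define RF_RETRY_THRESHOLD 5	/* give up after this many attempts */

	struct io_req {			/* hypothetical per-I/O state */
		int retry_count;
		int error;
	};

	/*
	 * Completion handler: returns non-zero if the caller should resubmit
	 * the I/O, zero if the result (success or a real error) is final.
	 */
	static int
	io_done(struct io_req *req, int error)
	{
		if (error != 0 && req->retry_count < RF_RETRY_THRESHOLD) {
			req->retry_count++;
			return (1);	/* retry rather than failing the component */
		}
		req->error = error;
		return (0);
	}
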
1) Introduce functions to allocate and free the emergency IO buffers.
2) Make sure we free any allocated emergency buffers in the event that
we bail out during configuration, or when we unconfigure an array.
3) If we run out of memory trying to allocate a given type of buffer,
don't continue to try to allocate more of those buffers.
(Partially addresses PR#25787)
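
A sketch of that pattern with hypothetical names (the real routines hang
these buffers off the RAID set's structures): an allocation failure stops
the loop instead of being retried, and everything that was allocated can be
torn down again, whether configuration bailed out or the array is being
unconfigured.

	struct emerg_buf {		/* hypothetical list element */
		struct emerg_buf *next;
		void		 *data;
	};

	static struct emerg_buf *
	alloc_emergency_buffers(size_t bufsize, int wanted)
	{
		struct emerg_buf *head = NULL, *eb;
		void *data;
		int i;

		for (i = 0; i < wanted; i++) {
			data = malloc(bufsize, M_RAIDFRAME, M_NOWAIT);
			if (data == NULL)
				break;	/* out of memory: stop asking for more */
			eb = malloc(sizeof(*eb), M_RAIDFRAME, M_NOWAIT);
			if (eb == NULL) {
				free(data, M_RAIDFRAME);
				break;
			}
			eb->data = data;
			eb->next = head;
			head = eb;
		}
		return (head);		/* possibly shorter than 'wanted' */
	}

	static void
	free_emergency_buffers(struct emerg_buf *head)
	{
		struct emerg_buf *eb;

		while ((eb = head) != NULL) {
			head = eb->next;
			free(eb->data, M_RAIDFRAME);
			free(eb, M_RAIDFRAME);
		}
	}
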
write paths within RAIDframe. They also resolve the "panics with
RAID 5 sets with more than 3 components" issue which was present
(briefly) in the commits which were previously supposed to address
the malloc() issue.
With this new code the 5-component RAID 5 set panics are now gone.
It is now also possible to swap to RAID 5.
The changes made are:
1) Introduce rf_AllocStripeBuffer() and rf_FreeStripeBuffer() to
allocate/free one stripe's worth of space. rf_AllocStripeBuffer() is
used in rf_MapUnaccessedPortionOfStripe() where it is not sufficient to
allocate memory using just rf_AllocBuffer(). rf_FreeStripeBuffer() is
called from rf_FreeRaidAccDesc(), well after the DAG is finished.
2) Add a set of emergency "stripe buffers" to struct RF_Raid_s.
Arrange for their initialization in rf_Configure(). In low-memory
situations these buffers will be handed out by rf_AllocStripeBuffer()
and returned to the list by rf_FreeStripeBuffer().
3) Move RF_VoidPointerListElem_t *iobufs from the dagHeader into
struct RF_RaidAccessDesc_s. This is more consistent with the
original code, and will not result in items being freed "too early".
4) Add a RF_RaidAccessDesc_t *desc to RF_DagHeader_s so that we have a
way to find desc->iobufs.
5) Arrange for desc in the DagHeader to be initialized in InitHdrNode().
6) Don't cleanup iobufs in rf_FreeDAG() -- the freeing is now delayed
until rf_FreeRaidAccDesc() (which is how the original code handled the
allocList, and for which there seem to be some subtle, undocumented
assumptions).
7) Rename rf_AllocBuffer2() to be rf_AllocBuffer() and remove the
former rf_AllocBuffer(). Fix all callers of rf_AllocBuffer().
(This was how it was *supposed* to be after the last time these
changes were made, before they were backed out).
8) Remove RF_IOBufHeader and all references to it.
9) Remove desc->cleanupList and all references to it.
Fixes PR#20191
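
The heart of points 3, 4 and 6 is an ownership question: buffers are
tracked on the access descriptor and live until rf_FreeRaidAccDesc(); the
DAG header only carries a back-pointer so DAG code can reach that list.
Roughly (the field layout shown is illustrative, not the literal
structures):

	typedef struct RF_VoidPointerListElem_s RF_VoidPointerListElem_t;
	struct RF_VoidPointerListElem_s {
		void			 *p;	/* an allocated buffer */
		RF_VoidPointerListElem_t *next;
	};

	struct RF_RaidAccessDesc_s {
		/* ... */
		RF_VoidPointerListElem_t *iobufs;	/* freed in rf_FreeRaidAccDesc(),
							   not in rf_FreeDAG() */
	};

	struct RF_DagHeader_s {
		/* ... */
		struct RF_RaidAccessDesc_s *desc;	/* set in InitHdrNode(); gives the
							   DAG code a path to desc->iobufs */
	};
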
sufficient to clobber this nasty little bug. The behaviour observed
was a panic when doing a 'raidctl -f' on a component when DAGs were
in flight for the given RAID set. Unfortunately, the faulty behaviour
was very intermittent, and it was difficult not only to reliably
reproduce the bug (or to determine when it was fixed!) but even to
figure out what might be the cause of the problem.
The real issue was that ci_vp for the failed component was being
set to NULL in rf_FailDisk(), but with DAGs still in flight, some
of them were still expecting to use ci_vp to determine where to
read to/write from!
The fix is to call rf_SuspendNewRequestsAndWait() from rf_FailDisk()
to make sure the RAID set is quiet and all IOs have completed before
mucking with ci_vp and other data structures. rf_ResumeNewRequests()
is then used to continue on as usual.
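
In outline (paraphrased, not the verbatim source), rf_FailDisk() now
brackets the dangerous part like this:

	rf_SuspendNewRequestsAndWait(raidPtr);	/* quiesce: block new I/O and
						   wait for in-flight DAGs */

	/* ... mark the component failed, clear ci_vp, etc. ... */

	rf_ResumeNewRequests(raidPtr);		/* back to business as usual */
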
used in the event that we can't malloc a buffer of the appropriate
size in the traditional way. rf_AllocIOBuffer() and rf_FreeIOBuffer()
deal with allocating/freeing these structures. These buffers are
kept on the 'iobuf' list. iobuf_count keeps track of how
many buffers are available, and numEmergencyBuffers is the effective
"high-water" mark for the freelist. The buffers allocated by
rf_AllocIOBuffer() are stripe-unit sized, which is the maximum
size requested by any of the callers.
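
Behavioural sketch of the free-list (locking and the actual list plumbing
are elided; list_get()/list_put() are hypothetical helpers, and the field
names follow the description above):

	static void *
	alloc_io_buffer(RF_Raid_t *raidPtr, size_t size)
	{
		void *p;

		p = malloc(size, M_RAIDFRAME, M_NOWAIT);
		if (p == NULL && raidPtr->iobuf_count > 0) {
			/* low memory: hand out a pre-allocated,
			   stripe-unit sized emergency buffer instead */
			p = list_get(&raidPtr->iobuf);
			raidPtr->iobuf_count--;
		}
		return (p);
	}

	static void
	free_io_buffer(RF_Raid_t *raidPtr, void *p)
	{
		if (raidPtr->iobuf_count < raidPtr->numEmergencyBuffers) {
			/* keep the free-list stocked up to its high-water mark */
			list_put(&raidPtr->iobuf, p);
			raidPtr->iobuf_count++;
		} else
			free(p, M_RAIDFRAME);
	}
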
Add an iobufs entry to RF_DagHeader_s. Use it for keeping track of
buffers that get allocated from the free-list.
Add a "generic list" pool (VoidPointerListElement Pool) for elements
used to maintain a list of allocated memory. [It is somewhat less
than ideal to add another little pool to handle this...]
Teach rf_AllocBuffer() to use the new rf_AllocIOBuffer(). Modify
other Mallocs to use rf_AllocIOBuffer(), and to update dag_h->iobufs as
appropriate.
Update rf_FreeDAG() to handle cleanup of dag_h->iobufs.
While here, add some missing pool_destroy() calls for a number of pools.
With these changes, it should (in theory) be possible to swap on
RAID 5 sets again. That said, I've not had any success there yet --
but the last issue I saw at least wasn't in RAIDframe. :-}
[There is room for this code to become a bit more concise, but I
wanted to do a checkpoint here with something known to work :) ]
rf_PrintUserStats() was meant for the simulator, and doesn't provide
any real info in kernel-space, especially for reconstructs.
Reconstructing actually renders the stats even more useless, since it
resets them all to zero before the reconstruct starts!
- since rf_PrintUserStats() is no longer used, nuke it along with the
routines that feed it. Nothing was using this code, and if we ever
need it again, we know where to find it.
by RAIDframe. Convert all other RAIDframe global pools to use pools
defined within this new structure.
- Introduce rf_pool_init(), used for initializing a single pool in
RAIDframe. Teach each of the configuration routines to use
rf_pool_init().
- Cleanup a few pool-related comments.
- Cleanup revent initialization and #defines.
- Add a missing pool_destroy() for the reconbuffer pool.
(Saves another 1K off of an i386 GENERIC kernel, and makes
stuff a lot more readable)
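
Sketched out (the structure name, member names, and the exact
rf_pool_init() argument list are guesses based on the description;
pool_init() is used with its NetBSD signature of this era):

	struct RF_Pools_s {
		struct pool alloclist;	/* allocation-list elements */
		struct pool revent;	/* reconstruction events */
		/* ... one member per RAIDframe global pool ... */
	};

	struct RF_Pools_s rf_pools;

	static void
	rf_pool_init(struct pool *p, size_t size, const char *w_chan, size_t xmax)
	{
		pool_init(p, size, 0, 0, 0, w_chan, NULL);
		pool_sethiwat(p, xmax);
	}

	/* so each configuration routine just does something like:
	 *	rf_pool_init(&rf_pools.revent, sizeof(RF_ReconEvent_t),
	 *	    "rf_revent", 100);
	 */
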
- introduce RF_MIN_*'s, as necessary. These will indicate the
low-water mark for pools as well as the pool_prime() value.
- add pool_setlowat() for the critical pools.
- pool_prime() and pool_setlowat() the raidframe_cbufpool.
- re-order some pool_prime()'s and pool_sethiwat()'s for clarity.
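
For example (RF_MIN_FREE_BUFS is a stand-in for one of the new RF_MIN_*
constants, and the value is made up), at configuration time a critical
pool gets:

	#define RF_MIN_FREE_BUFS 16	/* low-water mark and pool_prime() count */

	pool_prime(&raidframe_cbufpool, RF_MIN_FREE_BUFS);	/* pre-allocate */
	pool_setlowat(&raidframe_cbufpool, RF_MIN_FREE_BUFS);	/* and try to stay there */
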
memory. Since we now only ever "return(0)", just return (void)
instead.
Cleanup all uses of rf_ShutdownCreate() to not worry about
it ever failing. Shaves another 600 bytes off of an i386 GENERIC kernel.
dynamically allocated variable-sized array (dagArray). Convert code
to use the new linked list stuff instead of the array stuff (the ratio
of one dagList per stripe still applies). The big advantage is in
being able to more efficiently allocate the dagLists on-the-fly, and
not have to know the size(s) of the array beforehand.
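
Rough sketch of the new shape (type and field names are approximate): the
fixed-size dagArray becomes a singly-linked list with one dagList per
stripe, so the list can grow as the DAGs for an access are built instead
of being sized in advance.

	typedef struct RF_DagList_s RF_DagList_t;
	struct RF_DagList_s {
		/* ... DAGs and status for one stripe's worth of the access ... */
		RF_DagList_t *next;	/* dagList for the next stripe */
	};

	/* walking every stripe's dagList is then just: */
	static void
	walk_dagLists(RF_DagList_t *head)	/* hypothetical walker */
	{
		RF_DagList_t *dl;

		for (dl = head; dl != NULL; dl = dl->next) {
			/* process one stripe's worth of DAGs */
		}
	}
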
skrueger-at-europe-dot-com. (It turns out that the mutex used to
serve two different purposes, not just one, and for its current use,
it's actually misnamed. Will fix that some other time.)
can't fail. Simplify life in rf_BootRaidframe(), and then nuke
rf_lkmgr_mutex_init(). Cleanup rf_threadstuff.h a bit more too.
rf_threadstuff.c is about to Go Away.
(other than NULL when raidPtr is initialized). That means
SignalReconDone() never does anything useful. Bye-bye!
Say good-bye to recon_done_procs and recon_done_procs_mutex (and its
initializer) as well.
Mash DO_RAID_COND in rf_driver.c out of existence.
- Nuke (already #if 0'ed) _rf_create_managed_lkmgr_mutex() while we're
busy here.
simplify DO_INIT in rf_engine.c
rf_mutex_init(m)
now. The rest of the fluff is no longer needed.
It also cannot fail, so error checking on rf_create_managed_mutex()
is just wasting space.
Nuke the #define's associated with rf_create_managed_mutex().
Convert rf_create_managed_mutex(listp,m) to just rf_mutex_init(m).
Remove wasteful "error checking" and simplify all instances where this
is called. (another 0.3K saved in the binary, but the real savings
is in code readability!)
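
A representative (made-up) call site, before and after:

	/* before: "error checking" for a call that cannot fail */
	rc = rf_create_managed_mutex(listp, &desc->mutex);
	if (rc) {
		/* unreachable cleanup and error return */
		return (rc);
	}

	/* after: */
	rf_mutex_init(&desc->mutex);
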
to make things look far more complicated than they really are. It was
also impossible for any of the mutex/cond initializations in
init_rad() to actually fail, making the "error detection code"
unneeded. Collapse the little work done by init_rad into
rf_AllocRaidAccDesc(), and nuke init_rad() and clean_rad(). Save
another 0.25K in GENERIC.
[To be accurate/complete, init_rad() and clean_rad() *ARE* used in the
simulator version of RAIDframe. But we're so far removed from that
now that there is no point pretending otherwise.]
- all freelists converted to pools
- initialization of structure members in certain cases where
code was relying on specific allocation and usage properties
to keep structures in a "known state" (that doesn't work with
pools!).
- make most pool_get() be "PR_WAITOK" until they can be analyzed
further, and/or have proper error handling added.
- all RF_Mallocs zero the space returned, so there is no difference
between RF_Calloc and RF_Malloc. In fact, all the RF_Calloc()'s
tend to do is get things horribly confused.
Make RF_Malloc() the "general memory allocator", with
RF_MallocAndAdd() the "general memory allocator with
allocation list".
- some of these RF_Malloc's et al. are destined to disappear.
- remove rf_rdp_freelist entirely (it's not used anywhere!)
- remove: #include "rf_freelist.h"
- to the files that were relying on the above, add: #include "rf_general.h"
- add: #include "rf_debugMem.h" to rf_shutdown.h to make it happy
about the loss of: #include "rf_freelist.h".
This shrinks an i386 GENERIC kernel by approx 5K. RAIDframe now
weighs in at about 162K on i386.
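
In other words, the two surviving allocators amount to roughly the
following (a paraphrase, not the literal rf_debugMem.h contents; the name
of the list-recording helper is a guess):

	#define RF_Malloc(_p_, _size_, _cast_) do {				\
		_p_ = _cast_ malloc((u_long)(_size_), M_RAIDFRAME, M_WAITOK);	\
		memset((char *)(_p_), 0, (_size_));	/* always zeroed */	\
	} while (0)

	/* same thing, but also record the allocation on the list _lst_ so
	   it is released when that allocation list is freed */
	#define RF_MallocAndAdd(_p_, _size_, _cast_, _lst_) do {		\
		RF_Malloc(_p_, _size_, _cast_);					\
		rf_AddToAllocList(_lst_, (void *)(_p_), (int)(_size_));		\
	} while (0)
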
of strenuous agreement, and some general agreement, this commit is
going ahead because it's now starting to block some other changes I
wish to make.]
Remove most of the support for the concept of "rows" from RAIDframe.
While the "row" interface has been exported to the world, RAIDframe
internals have really only supported a single row, even though they
have feigned support of multiple rows.
Nothing changes in configuration land -- config files still need to
specify a single row, etc. All auto-config structures remain fully
forward/backwards compatible.
The only visible difference to the average user should be a
reduction in the size of a GENERIC kernel (i386) by 4.5K. For those
of us trolling through RAIDframe kernel code, a lot of the driver
configuration code has become a LOT easier to read.
failing a component that has been spared, or "double-failing"
an already failed component. XXX This isn't the right place to fix
this, but better here than nowhere (and I'm hoping to move it sometime
soon).
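
In outline, the guard amounts to something like this (simplified; the
status names are RAIDframe's component-state values, and the exact
placement is as described above):

	if (raidPtr->Disks[col].status == rf_ds_spared ||
	    raidPtr->Disks[col].status == rf_ds_failed) {
		/* already failed, or already replaced by a spare:
		   refuse to "fail" it (again) */
		return (EINVAL);
	}
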