memory. Since we now only ever "return(0)", just return (void)
instead.
Clean up all uses of rf_ShutdownCreate() so they no longer worry about
it ever failing. Shaves another 600 bytes off of an i386 GENERIC kernel.
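As a sketch, the callers go from something like this (the callback name
and the error path here are placeholders, not code from the tree):

    rc = rf_ShutdownCreate(listp, rf_ShutdownSomething, arg);
    if (rc) {
            /* complain, unwind, and return rc... */
    }

to simply:

    rf_ShutdownCreate(listp, rf_ShutdownSomething, arg);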
dynamically allocated variable-sized array (dagArray). Convert code
to use the new linked list stuff instead of the array stuff (the ratio
of one dagList per stripe still applies). The big advantage is in
being able to more efficiently allocate the dagLists on-the-fly, and
not have to know the size(s) of the array beforehand.
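For illustration only, the shape of the new per-stripe list (the field
and type names below are assumptions, not the actual RF_DagList_t layout):

    struct dagList {
            struct dagHeader *dags;   /* the DAG(s) built for this stripe */
            int numDags;
            struct dagList *next;     /* the next stripe's dagList */
    };

    /* append one more stripe's dagList as it gets built on-the-fly */
    static void
    daglist_append(struct dagList **headp, struct dagList *dl)
    {
            dl->next = NULL;
            while (*headp != NULL)
                    headp = &(*headp)->next;
            *headp = dl;
    }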
VOP_STRATEGY(bp) is replaced by one of two new functions:
- VOP_STRATEGY(vp, bp) Call the strategy routine of vp for bp.
- DEV_STRATEGY(bp) Call the d_strategy routine of bp->b_dev for bp.
DEV_STRATEGY(bp) is used only for block-to-block device situations.
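Call-site sketch (setup omitted; 'devvp' here stands for the vnode of
the underlying block device):

    /* old world:  VOP_STRATEGY(bp); */

    /* file systems and anything else holding a vnode: */
    VOP_STRATEGY(devvp, bp);

    /* block-to-block device code that only has bp->b_dev: */
    DEV_STRATEGY(bp);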
give it back if we don't need it. If we don't allocate it before
we take our lock, LOCKDEBUG (rightfully) complains that we're trying
to grab something from the pool with PR_WAITOK. This code (and the
PR_WAITOK in particular) really needs to be revisited at some point.
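The pattern, sketched with placeholder pool and lock names (these are
not the actual RAIDframe identifiers):

    item = pool_get(&some_pool, PR_WAITOK);   /* may sleep -- do it unlocked */
    RF_LOCK_MUTEX(some_mutex);
    if (!needed) {
            RF_UNLOCK_MUTEX(some_mutex);
            pool_put(&some_pool, item);       /* give it back */
            return;
    }
    /* ... use 'item' while holding the lock ... */
    RF_UNLOCK_MUTEX(some_mutex);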
skrueger-at-europe-dot-com. (It turns out that the mutex used to
serve two different purposes, not just one, and for its current use,
it's actually misnamed. Will fix that some other time.)
Collapse the related variables down to zero. That means 'flags' is 0
as well. Nuke the extraction macros and a bunch of the variables, and
replace the now-always-zero 'flags' too.
The compiler already knew that these chunks of code
could never be reached (since lu_flag was always 0), and so
was already ignoring them.
No functional changes.
rf_enableAtomicRMW changes.]
Clean up rf_enableAtomicRMW and its use. According to the comments, we
can't set this to anything other than zero anyway. Shaves off another
900 bytes. lu_flag's days are numbered now, as are the middle
parameters of RF_CREATE_PARAM3.
debugging printf, and in rf_netbsdkintf.c. We can do the calculations
inside of RF_DEBUG_RECON for the one debugging printf, and only
perform the percentCompleted calculation "on demand" in the
rf_netbsdkintf.c case. Shaves a few more bytes off an i386 GENERIC
kernel, and ever-so-slightly decreases the amount of work performed
during a reconstruct.
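The on-demand calculation itself is trivial; something along these lines
(the function and parameter names are illustrative):

    /* compute percentCompleted only when someone actually asks for it */
    static int
    recon_percent_completed(uint64_t rus_done, uint64_t rus_total)
    {
            return (rus_total == 0) ? 100 : (int)((rus_done * 100) / rus_total);
    }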
rf_DecrAccessesCountState wasn't in the correct spot in
RF_AccessState_e. Following up on that has resulted in one other
correction. Changing orderings of these states is tricky, and
shouldn't be attempted without some thorough analysis. For the
changes committed, the following analysis is offered:
1) RAIDframe uses a little state machine to take care of building,
executing, and processing the DAGs used to direct IO.
2) The rf_DecrAccessesCountState state is handled by the function
rf_State_DecrAccessCount(). The purpose of this state is to
decrement the number of "accesses-in-flight".
3) rf_Cleanup_State is handled by rf_State_Cleanup(). Its job is to
do general cleanup of DAG arrays and any stripe locks.
4) DefaultStates[] in rf_layout.c indicates that the right spot
for rf_DecrAccessesCountState is just before rf_Cleanup_State.
Analysis of code for both states indicates that the order doesn't
matter too much, although rf_State_DecrAccessCount() should probably
take place *after* rf_State_Cleanup() to be more correct.
5) Comments in rf_State_ProcessDAG() indicate that the next state
should be rf_Cleanup_State. However: it attempts to get there by using
desc->state++;
which actually takes it to just rf_DecrAccessesCountState! This turned
out to be OK before, since rf_Cleanup_State would follow right after,
and all would be taken care of (albeit in arguably the "less correct"
order).
6) With the current ordering, if we head directly to rf_Cleanup_State
(as we do, for example, if multiple components fail in a RAID 5 set),
then we'll actually miss going through rf_DecrAccessesCountState, and
could end up never being able to reach quiescence! Perhaps not too
big of a deal, given that the RAID set is pretty much toast by the
point at which such a drastic state change happens, but might as well
have this correct.
The changes made are:
1) Since having rf_State_DecrAccessCount() come after
rf_State_Cleanup() is just fine, change rf_layout.c to reflect that
rf_DecrAccessesCountState comes after rf_Cleanup_State (i.e. they swap
positions in the state list). This means that going to
rf_Cleanup_State after bailing on a failed DAG access will do all the
right things -- the state will get cleaned up, and then the access
counts will get decremented properly. The comment in
rf_State_ProcessDAG() is now actually correct -- the next state *will*
be rf_Cleanup_State.
2) Move rf_DecrAccessesCountState in RF_AccessState_e to just after
rf_CleanupState. This puts RF_AccessState_e in sync with
DefaultStates[]. Fortunately, these states are rarely referred to by
name, and so this change ends up being mostly cosmetic -- it really
only fixes cleanup behaviour for the recent "Failed to create a DAG"
changes.
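A toy model of the mechanics involved (names simplified; the real ones
live in RF_AccessState_e and DefaultStates[]):

    enum toy_state { PROCESS_DAG, CLEANUP, DECR_ACCESS_COUNT, LAST_STATE };

    struct toy_desc {
            enum toy_state state;
    };

    static void
    toy_process_dag_done(struct toy_desc *desc)
    {
            /*
             * Like rf_State_ProcessDAG(), just advance one slot.  With
             * CLEANUP placed immediately after PROCESS_DAG, "state++"
             * really does land on the cleanup state, and the access
             * count gets decremented in the state that follows it.
             */
            desc->state++;
    }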
~forever. This requires a number of things:
1) If we can't create a DAG, set desc->numStripes to 0 in
rf_SelectAlgorithm. This will ensure that we don't attempt to free
any dagArray[] elements in rf_StateCleanup.
2) Modify rf_State_CreateDAG() to not panic in the event of a DAG
failure. Instead, set the bp->b_flags and bp->b_error, and set things
up to skip to rf_State_Cleanup().
3) Need to mark desc->status as "bad" so that we actually stop looking
for a different DAG (which we won't find... no matter how many times
we try).
4) rf_State_LastState() will then do the biodone(), and return EIO for
the IO in question.
5) Remove some " || 1 "'s from ProcessNode(). These were for
debugging, and we don't need the failure notices spewing
over and over again as the failing DAGs are processed.
6) Needed to change
        if (asmap->numDataFailed + asmap->numParityFailed > 1)
to
        if ((asmap->numDataFailed + asmap->numParityFailed > 1) ||
            (raidPtr->numFailures > 1)) {
in rf_raid5.c so that it doesn't try to return
rf_CreateNonRedundantWriteDAG as the creation function. (A sketch of
this style of guard follows the list.)
7) Note that we can't apply the above change to the RAID 1 code as
with the silly "fake 2-D" RAID 1 sets, it is possible to have 2 failed
components in the RAID 1 set, and that would stop them from working.
(I really don't know why/how those "fake 2-D" RAID 1 sets even work
with all the "single-fault" assumptions present in the rest of the
code.)
8) Needed to protect rf_RAID0DagSelect() in a similar way -- it should
return NULL as the createFunc.
9) No point printing out "Multiple disks failed..." a zillion times.
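A sketch of the guard described in 6) and 8) (the names are taken from
the text above, but the body is illustrative, not the literal diff):

    if ((asmap->numDataFailed + asmap->numParityFailed > 1) ||
        (raidPtr->numFailures > 1)) {
            /* No usable DAG exists: hand back no creation function
               and let the state machine bail out to cleanup. */
            *createFuncp = NULL;
            return;
    }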
RF_DAG_RETURN_DAG
RF_DAG_RETURN_ASM
RF_DAG_TEST_ACCESS
and the code that goes with them. A couple more of these
can probably go too, but I might need them in a bit.
bp->b_proc for mapping userspace buffers to kernelspace in the
original rf_kintf.c. That means bp isn't of any use in RF_BZERO()
for us, and the macro can be replaced with just the memset().
No functional changes.
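In other words (argument names and order assumed), each

    RF_BZERO(bp, addr, nbytes);

becomes a plain

    memset(addr, 0, nbytes);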
was just an accident in the first place. Clean up function decls and
a few comments. [ok.. so I wasn't going to fix this many.. but once
you're on a roll....]
data structures allowed! Punt.
accessTraceBufCount, rf_accessTraceBufSize, and
rf_stopCollectingTraces are similarly declared, initialized, and then
never changed. Punt.
rf_ShutdownAccessTrace() now does nothing. Remove it, and the
callback setup stuff from rf_ConfigureAccessTrace().
can't fail. Simplify life in rf_BootRaidframe(), and then nuke
rf_lkmgr_mutex_init(). Clean up rf_threadstuff.h a bit more too.
rf_threadstuff.c is about to Go Away.
(other than NULL when raidPtr is initialized). That means
SignalReconDone() never does anything useful. Bye-bye!
Say good-bye to recon_done_procs and recon_done_procs_mutex (and its
initializer) as well.
Mash DO_RAID_COND in rf_driver.c out of existence.
- Nuke (already #if 0'ed) _rf_create_managed_lkmgr_mutex() while we're
busy here.
simplify DO_INIT in rf_engine.c
here" department.
remove _rf_init_threadgroup() and rf_destroy_threadgroup() which were
already #if 0'ed.
rf_cond_destroy() does nothing. Nuke it, and all callers.
rf_cond_init() doesn't deserve to be a separate function any more.
Fix up the remaining 3 callers, and nuke rf_cond_init().
Another 0.4K goes "poof", but still no functionality lost!
rf_mutex_init(m)
now. The rest of the fluff is no longer needed.
It also cannot fail, so error checking on rf_create_managed_mutex()
is just wasting space.
Nuke the #define's associated with rf_create_managed_mutex().
Convert rf_create_managed_mutex(listp,m) to just rf_mutex_init(m).
Remove wasteful "error checking" and simplify all instances where this
is called. (another 0.3K saved in the binary, but the real savings
is in code readability!)
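The conversion, sketched (the error path shown is typical of the old
callers, and the mutex field name is a placeholder):

    /* before */
    rc = rf_create_managed_mutex(listp, &desc->mutex);
    if (rc) {
            /* print an "unable to init mutex" message, unwind, return rc */
    }

    /* after */
    rf_mutex_init(&desc->mutex);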
neither of these ever fails, no need to have a return value. That
makes all the "error detection" on these functions completely
unneeded. But since we're here, if we don't have a return value, then
why not make these macros? My.. look how things keep shrinking, with
no loss in functionality!
clean_rad() were -- these days they only serve to clutter things up.
Remove the functions, and put the 2 lines of actual useful initialization
into rf_AllocMCPair().
to make things look far more complicated than they really are. It was
also impossible for any of the mutex/cond initializations in
init_rad() to actually fail, making the "error detection code"
unneeded. Collapse the little work done by init_rad into
rf_AllocRaidAccDesc(), and nuke init_rad() and clean_rad(). Save
another 0.25K in GENERIC.
[To be accurate/complete, init_rad() and clean_rad() *ARE* used in the
simulator version of RAIDframe. But we're so far removed from that
now that there is no point pretending otherwise.]
- all freelists converted to pools
- initialization of structure members in certain cases where
code was relying on specific allocation and usage properties
to keep structures in a "known state" (that doesn't work with
pools!).
- make most pool_get() be "PR_WAITOK" until they can be analyzed
further, and/or have proper error handling added (see the sketch
after this entry).
- all RF_Mallocs zero the space returned, so there is no difference
between RF_Calloc and RF_Malloc. In fact, all the RF_Calloc()'s
tend to do is get things horribly confused.
Make RF_Malloc() the "general memory allocator", with
RF_MallocAndAdd() the "general memory allocator with
allocation list".
- some of these RF_Malloc's et al. are destined to disappear.
- remove rf_rdp_freelist entirely (it's not used anywhere!)
- remove: #include "rf_freelist.h"
- to the files that were relying on the above, add: #include "rf_general.h"
- add: #include "rf_debugMem.h" to rf_shutdown.h to make it happy
about the loss of: #include "rf_freelist.h".
This shrinks an i386 GENERIC kernel by approx 5K. RAIDframe now
weighs in at about 162K on i386.
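What a converted allocation site roughly looks like (the struct and pool
names are placeholders; note the explicit memset, per the "known state"
point above):

    fp = pool_get(&rf_foo_pool, PR_WAITOK);   /* may sleep, never returns NULL */
    memset(fp, 0, sizeof(*fp));               /* pools hand back uninitialized memory */
    /* ... use fp ... */
    pool_put(&rf_foo_pool, fp);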
of strenuous agreement, and some general agreement, this commit is
going ahead because it's now starting to block some other changes I
wish to make.]
Remove most of the support for the concept of "rows" from RAIDframe.
While the "row" interface has been exported to the world, RAIDframe
internals have really only supported a single row, even though they
have feigned support of multiple rows.
Nothing changes in configuration land -- config files still need to
specify a single row, etc. All auto-config structures remain fully
forward/backwards compatible.
The only visible difference to the average user should be a
reduction in the size of a GENERIC kernel (i386) by 4.5K. For those
of us trolling through RAIDframe kernel code, a lot of the driver
configuration code has become a LOT easier to read.
be inserted into ktrace records. The general change has been to replace
"struct proc *" with "struct lwp *" in various function prototypes, pass
the lwp through and use l_proc to get the process pointer when needed.
Bump the kernel rev up to 1.6V
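The shape of that change, as a sketch (the function below is a
placeholder, not one of the converted prototypes):

    /* old:  int foo_ioctl(dev_t dev, u_long cmd, void *data, int flag, struct proc *p); */
    int
    foo_ioctl(dev_t dev, u_long cmd, void *data, int flag, struct lwp *l)
    {
            struct proc *p = l->l_proc;   /* when the proc is still needed */
            /* ... */
            return 0;
    }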