Commit Graph

16 Commits

Author SHA1 Message Date
oster 85611189b6 These changes complete the effective removal of malloc() from all
write paths within RAIDframe.  They also resolve the "panics with
RAID 5 sets with more than 3 components" issue that was present
(briefly) in the earlier commits that were supposed to address
the malloc() issue.

With this new code the 5-component RAID 5 set panics are now gone.

It is also now possible to swap to RAID 5.

The changes made are:

1) Introduce rf_AllocStripeBuffer() and rf_FreeStripeBuffer() to
allocate/free one stripe's worth of space.  rf_AllocStripeBuffer() is
used in rf_MapUnaccessedPortionOfStripe() where it is not sufficient to
allocate memory using just rf_AllocBuffer().  rf_FreeStripeBuffer() is
called from rf_FreeRaidAccDesc(), well after the DAG is finished.

2) Add a set of emergency "stripe buffers" to struct RF_Raid_s.
Arrange for their initialization in rf_Configure().  In low-memory
situations these buffers will be returned by rf_AllocStripeBuffer()
and re-populated by rf_FreeStripeBuffer().

3) Move RF_VoidPointerListElem_t *iobufs from the dagHeader into
struct RF_RaidAccessDesc_s.  This is more consistent with the
original code, and will not result in items being freed "too early".

4) Add a RF_RaidAccessDesc_t *desc to RF_DagHeader_s so that we have a
way to find desc->iobufs.

5) Arrange for desc in the DagHeader to be initialized in InitHdrNode().

6) Don't cleanup iobufs in rf_FreeDAG() -- the freeing is now delayed
until rf_FreeRaidAccDesc() (which is how the original code handled the
allocList, and for which there seem to be some subtle, undocumented
assumptions).

7) Rename rf_AllocBuffer2() to be rf_AllocBuffer() and remove the
former rf_AllocBuffer().  Fix all callers of rf_AllocBuffer().
(This was how it was *supposed* to be after the last time these
changes were made, before they were backed out).

8) Remove RF_IOBufHeader and all references to it.

9) Remove desc->cleanupList and all references to it.

Fixes PR#20191
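
A minimal sketch of the scheme described above, covering the emergency
stripe buffers of items 1 and 2 and the relocation of the iobufs list in
items 3 and 4.  Only the names taken from this message (rf_AllocStripeBuffer(),
rf_FreeStripeBuffer(), struct RF_Raid_s, RF_RaidAccessDesc_s, RF_DagHeader_s,
iobufs) are real; every structure layout, field, and helper below is an
assumption made for illustration, not the actual RAIDframe code, and locking
is omitted.

    /* Simplified, userland-style model of the emergency stripe-buffer
     * fallback and the desc-owned iobufs list.  Illustrative only. */
    #include <stdlib.h>

    struct stripebuf {
        struct stripebuf *next;
        void             *data;
        int               emergency;  /* came from the pre-allocated pool? */
    };

    struct raid_soft {                /* stand-in for struct RF_Raid_s */
        size_t            stripe_size;
        struct stripebuf *em_bufs;    /* emergency buffers, filled at configure time */
    };

    struct acc_desc {                 /* stand-in for struct RF_RaidAccessDesc_s */
        struct raid_soft *raid;
        void             *iobufs;     /* item 3: freed with the desc, not the DAG */
    };

    struct dag_header {               /* stand-in for RF_DagHeader_s */
        struct acc_desc  *desc;       /* item 4: lets DAG code reach desc->iobufs */
    };

    /* Item 1: try an ordinary allocation first, fall back to the emergency pool. */
    static struct stripebuf *
    alloc_stripe_buffer(struct raid_soft *r)
    {
        struct stripebuf *sb = calloc(1, sizeof(*sb));

        if (sb != NULL && (sb->data = malloc(r->stripe_size)) != NULL)
            return sb;                /* normal path */
        free(sb);
        sb = r->em_bufs;              /* low-memory path */
        if (sb != NULL)
            r->em_bufs = sb->next;
        return sb;
    }

    /* Called from descriptor teardown, well after the DAG is finished. */
    static void
    free_stripe_buffer(struct raid_soft *r, struct stripebuf *sb)
    {
        if (sb->emergency) {          /* item 2: re-populate the emergency pool */
            sb->next = r->em_bufs;
            r->em_bufs = sb;
        } else {
            free(sb->data);
            free(sb);
        }
    }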
2004-04-09 23:10:16 +00:00
oster 0ff2145648 For each RAID set, pre-allocate a number of "emergency buffers" to be
used in the event that we can't malloc a buffer of the appropriate
size in the traditional way.  rf_AllocIOBuffer() and rf_FreeIOBuffer()
deal with allocating/freeing these structures.  These buffers are
stored on the 'iobuf' list.  iobuf_count keeps track of how
many buffers are available, and numEmergencyBuffers is the effective
"high-water" mark for the freelist.  The buffers allocated by
rf_AllocIOBuffer() are stripe-unit sized, which is the maximum
size requested by any of the callers.

Add an iobufs entry to RF_DagHeader_s.  Use it for keeping track of
buffers that get allocated from the free-list.

Add a "generic list" pool (VoidPointerListElement Pool) for elements
used to maintain a list of allocated memory.  [It is somewhat less
than ideal to add another little pool to handle this...]

Teach rf_AllocBuffer() to use the new rf_AllocIOBuffer().  Modify
other Mallocs to use rf_AllocIOBuffer(), and to update dag_h->iobufs as
appropriate.

Update rf_FreeDAG() to handle cleanup of dag_h->iobufs.

While here, add some missing pool_destroy() calls for a number of pools.

With these changes, it should (in theory) be possible to swap on
RAID 5 sets again.  That said, I've not had any success there yet --
but the last issue I saw at least wasn't in RAIDframe. :-}

[There is room for this code to become a bit more concise, but I
wanted to do a checkpoint here with something known to work :) ]
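
A rough sketch, under assumed names, of the free-list bookkeeping described
above: buffers live on an 'iobuf' list, iobuf_count tracks how many are
available, and numEmergencyBuffers acts as the high-water mark.  The
structures and functions below are illustrative stand-ins, not the real
rf_AllocIOBuffer()/rf_FreeIOBuffer().

    #include <stdlib.h>

    struct iobuf {
        struct iobuf *next;
        void         *buf;            /* one stripe-unit worth of space */
    };

    struct iobuf_pool {
        struct iobuf *head;           /* the 'iobuf' free list */
        int           count;          /* iobuf_count: buffers currently available */
        int           high_water;     /* numEmergencyBuffers */
    };

    static struct iobuf *
    alloc_iobuffer(struct iobuf_pool *p)
    {
        struct iobuf *ib = p->head;

        if (ib == NULL)
            return NULL;              /* pool exhausted; caller must cope */
        p->head = ib->next;
        p->count--;
        return ib;
    }

    static void
    free_iobuffer(struct iobuf_pool *p, struct iobuf *ib)
    {
        if (p->count < p->high_water) {   /* keep it for the next emergency */
            ib->next = p->head;
            p->head = ib;
            p->count++;
        } else {                          /* above the high-water mark */
            free(ib->buf);
            free(ib);
        }
    }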
2004-03-20 04:22:05 +00:00
oster 1a3e20d5d9 Introduce a dual-purpose pool for providing pointer and param "caches"
for RF_DagNode_t's.  Scale the structure size based on RF_MAXCOL.
Use the new allocation method in InitNode().  Note that we can't get
rid of the mallocs in there until we can prove that this new
allocation method is a strict upper bound.  Unless someone tries
running a RAID set with 40 components, the mallocs here shouldn't
be an issue.  (And if someone does make a set with 40 components,
they will run into other issues with other constants long before
then.)
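
A hypothetical illustration of the dual-purpose cache: each pool item carries
a fixed number of slots scaled by RF_MAXCOL, and InitNode()-style code only
falls back to malloc() when a node needs more slots than that.  The RF_MAXCOL
value and every name below are assumptions made for the example.

    #include <stdlib.h>

    #define RF_MAXCOL 40                 /* illustrative value only */

    struct node_cache {                  /* one pool item, usable for pointers or params */
        void *slots[RF_MAXCOL + 1];
    };

    /* Return space for 'n' entries: the cached array when it fits (the common
     * case), malloc() otherwise.  The caller must remember which one it got. */
    static void **
    get_node_array(struct node_cache *nc, int n)
    {
        if (n <= RF_MAXCOL + 1)
            return nc->slots;
        return malloc((size_t)n * sizeof(void *));   /* rare, very wide sets */
    }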
2004-03-19 17:01:26 +00:00
oster b2c52e1175 Take care of six more mallocs:
- Pull rf_FreePhysDiskAddr() out from under a #ifdef, since we're now
going to use it.

- Add a pda_cleanup_list into the DAG header.  Use it in rf_FreeDAG() to
cleanup any PDA's that get allocated but have no "easy" way of being
located and freed when the DAG completes.

- numStripeUnitsAccessed is a per-stripe value, and has a maximum
value equal to the number of columns (thus limited by RF_MAXCOL).
Use this knowledge to set an upper bound on overlappingPDAs, and stuff
it on the stack instead of malloc'ing it all the time!  This costs us
a whopping 40 bytes on the stack, but saves a malloc() and a free().
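
A small sketch of the last item, assuming the bound stated above: since
numStripeUnitsAccessed can never exceed the column count, overlappingPDAs
fits in a fixed array on the stack.  The function and surrounding code are
hypothetical; only the variable names come from the message.

    #define RF_MAXCOL 40                       /* illustrative bound on columns */

    static void
    build_stripe_dag(int numStripeUnitsAccessed)
    {
        /* Was malloc'd and free'd on every call; now 40 bytes of stack. */
        char overlappingPDAs[RF_MAXCOL];
        int  i;

        for (i = 0; i < numStripeUnitsAccessed; i++)
            overlappingPDAs[i] = 0;            /* nothing marked overlapping yet */
        /* ... construct the DAG, consulting overlappingPDAs[] as needed ... */
    }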
2004-03-19 15:16:18 +00:00
oster d4fe1a2103 - Introduce a 'dagnode' pool. Initialize it and allow for cleanup.
Provide rf_AllocDAGNode() and rf_FreeDAGNode() to handle
allocation/freeing.

- Introduce a "nodes" linked list of RF_DagNode_t's into the DAG header.
Initialize nodes in InitHdrNode().  Arrange for nodes cleanup in rf_FreeDAG().

- Add a "list_next" to RF_DagNode_t to keep track of nodes on the
above "nodes" list.  (This is distinct from the "next" field of
RF_DagNode_t, which keeps track of the firing order of nodes.)
"list_next" gets used in the cleanup routines, and in traversing
through a set of nodes that belong to a particular set of nodes
(e.g. those belonging to xorNodes for a given DAG).

- use rf_AllocDAGNode() instead of mallocs of variable-sized arrays of
RF_DagNode_t's.  Mostly mechanical changes to convert the DAG construction
from "access nodes via an array index" to "access nodes via a 'nextnode'
pointer".

- rework a couple of tricky spots where assumptions about the node order
were being abused.

- performance remains consistent with performance before these changes.

[Thanks to Simon Burge (simonb at you.know.where) for looking over
the mechanical changes to make sure I didn't biff anything.]
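
A simplified model of the "nodes" list and the separate "list_next" link
described above.  Structure contents, the use of calloc() in place of the
dagnode pool, and the function names are assumptions; only the field names
"next", "list_next", and "nodes" are taken from the message.

    #include <stdlib.h>

    struct dag_node {
        struct dag_node *next;        /* firing order within the DAG */
        struct dag_node *list_next;   /* membership in the header's "nodes" list */
        /* ... function pointers, params, results ... */
    };

    struct dag_header {
        struct dag_node *nodes;       /* every node allocated for this DAG */
    };

    /* Allocate a node (a pool in the real code) and thread it onto the
     * header's list so cleanup can find it regardless of firing order. */
    static struct dag_node *
    alloc_dag_node(struct dag_header *h)
    {
        struct dag_node *n = calloc(1, sizeof(*n));

        if (n != NULL) {
            n->list_next = h->nodes;
            h->nodes = n;
        }
        return n;
    }

    /* rf_FreeDAG()-style cleanup walks list_next, not next. */
    static void
    free_dag_nodes(struct dag_header *h)
    {
        struct dag_node *n, *nn;

        for (n = h->nodes; n != NULL; n = nn) {
            nn = n->list_next;
            free(n);
        }
        h->nodes = NULL;
    }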
2004-03-18 16:40:05 +00:00
oster 6e2928d6d5 resultNum isn't used anywhere. Good-bye. 2004-03-04 00:56:13 +00:00
oster 8b515e1496 rf_bwd1 and rf_bwd2 are holdovers from the "backward" error recovery.
Nuke them, and the little bit of code associated with them.
2004-03-04 00:54:30 +00:00
oster c7eaad6a14 Use RF_ACC_TRACE to #if out more chunks of code related only
to access tracing.  (not turned on yet)
2004-03-01 23:30:57 +00:00
oster 24099528e9 Use a dynamically allocated linked list of dagLists instead of using a
dynamically allocated variable-sized array (dagArray).  Convert code
to use the new linked list stuff instead of the array stuff (the ratio
of one dagList per stripe still applies).  The big advantage is in
being able to more efficiently allocate the dagLists on-the-fly, and
not have to know the size(s) of the array beforehand.
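
A brief sketch, with assumed names, of the array-to-list change: each stripe's
dagList is linked in as it is created, so nothing needs to know the total
number of stripes up front the way the old dagArray did.

    #include <stdlib.h>

    struct dag_list {
        struct dag_list *next;        /* the next stripe's dagList */
        /* ... per-stripe DAG bookkeeping ... */
    };

    /* Link in a dagList for one more stripe (prepended here for brevity). */
    static struct dag_list *
    new_dag_list(struct dag_list **headp)
    {
        struct dag_list *dl = calloc(1, sizeof(*dl));

        if (dl != NULL) {
            dl->next = *headp;
            *headp = dl;
        }
        return dl;
    }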
2004-02-27 02:55:17 +00:00
oster f0efca630a Nuke a couple of unneeded #defines. 2002-09-23 23:53:54 +00:00
oster 752e8eb5c8 - remove #include "rf_memchunk.h"
- nuke the call to rf_ConfigureMemChunk() from rf_driver.c
2002-08-02 03:42:33 +00:00
oster fcc4232f71 Nuke stuff dealing with the experimental memChunk code. It's unused, and
currently only contributing to bloat.
2002-08-02 03:32:56 +00:00
oster 765e00d3de Step 2 of the disentanglement. We now look to <dev/raidframe/*> for
the stuff that used to live in rf_types.h, rf_raidframe.h, rf_layout.h,
rf_netbsd.h, rf_raid.h, rf_decluster.h, and a few other places.
Believe it or not, when this is all done, things will be cleaner.

No functional changes to RAIDframe.
2001-10-04 15:58:51 +00:00
oster 0014588545 Phase 2 of the RAIDframe cleanup. The source is now closer to KNF
and is much easier to read.  No functionality changes.
1999-02-05 00:06:06 +00:00
oster 1eecf8e491 RAIDframe cleanup, phase 1. Nuke simulator support, user-land driver,
out-dated comments, and other unneeded stuff.  This helps prepare
for cleaning up the rest of the code, and adding new functionality.

No functional changes to the kernel code in this commit.
1999-01-26 02:33:49 +00:00
oster 38a3987b69 RAIDframe, version 1.1, from the Parallel Data Laboratory at
Carnegie Mellon University.  Full RAID implementation, including
levels 0, 1, 4, 5, 6, parity logging, and a few other goodies.
Ported to NetBSD by Greg Oster.
1998-11-13 04:20:26 +00:00