Commit Graph

29 Commits

Author SHA1 Message Date
martin
ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
yamt
b290d1aea3 don't include <sys/namei.h> unnecessarily. 2007-11-13 11:39:41 +00:00
christos
ecdff16f80 - use dk_lookup instead of our home-spun version.
- allow raid to be configured in a wedge
- allow wedges to be configured in a raid
- add autoconfiguration of wedges in a raid
2006-08-27 05:07:12 +00:00
oster
55da57e0c1 Remove the component buffer bits, now that I know there is a
"private" structure in struct buf that can be used to keep track of
the request associated with this buffer (the buffer used here is one
allocated from rf_CreateDiskQueueData(), so it's ours to do with what
we please).  Shrinks code a little, reduces the run-time memory
footprint a bit, and simplifies both rf_DispatchKernelIO() and
KernelWakeupFunc().

Thanks to yamt for his "why is rf_DispatchKernelIO using another buf"
question which prompted me to revisit this code.
2006-01-07 16:08:44 +00:00
christos
95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
oster
96ba5552fa Re-work the handling of incoming I/O in RAIDframe:
- introduce rf_buf_queue_check() which checks to see if there
is work to do in the incoming buffer queue
- rf_RaidIOThread() is now responsible for calling raidstart(), and is
also now the only place that calls raidstart()
- raidstrategy() now just queues requests in buf_queue
and signals rf_RaidIOThread() that work has arrived

Hopefully addresses PR#30233
2005-09-25 19:47:17 +00:00
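
The queueing scheme this commit describes is a single-consumer pattern: the
strategy routine only appends to buf_queue and signals, and the I/O thread is
the one place that drains the queue and starts requests.  A minimal user-space
sketch of that shape (not the RAIDframe code; strategy, io_thread and
buf_queue_check are hypothetical stand-ins for raidstrategy(),
rf_RaidIOThread() and rf_buf_queue_check()):

    /*
     * Illustrative sketch only: enqueue-and-signal on one side, a single
     * dedicated thread dispatching queued work on the other.
     */
    #include <pthread.h>
    #include <stdlib.h>

    struct io_req { struct io_req *next; /* request details elided */ };

    static struct io_req *buf_queue_head, *buf_queue_tail;
    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  q_cv   = PTHREAD_COND_INITIALIZER;

    /* analogue of raidstrategy(): queue the request, wake the thread, return */
    void strategy(struct io_req *req)
    {
        pthread_mutex_lock(&q_lock);
        req->next = NULL;
        if (buf_queue_tail != NULL)
            buf_queue_tail->next = req;
        else
            buf_queue_head = req;
        buf_queue_tail = req;
        pthread_cond_signal(&q_cv);
        pthread_mutex_unlock(&q_lock);
    }

    /* analogue of rf_buf_queue_check(): is there work waiting? (lock held) */
    static int buf_queue_check(void) { return buf_queue_head != NULL; }

    static void start_io(struct io_req *req) { /* dispatch one request */ free(req); }

    /* analogue of rf_RaidIOThread(): the only place queued work is started */
    void *io_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&q_lock);
        for (;;) {
            while (!buf_queue_check())
                pthread_cond_wait(&q_cv, &q_lock);
            struct io_req *req = buf_queue_head;
            buf_queue_head = req->next;
            if (buf_queue_head == NULL)
                buf_queue_tail = NULL;
            pthread_mutex_unlock(&q_lock);
            start_io(req);              /* run without the queue lock held */
            pthread_mutex_lock(&q_lock);
        }
    }
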
christos
ba7574326e - avoid variable shadowing
- add a lot of const
- remove parameters from function declarations
2005-05-29 22:03:09 +00:00
perry
f31bd063e9 nuke trailing whitespace 2005-02-27 00:26:58 +00:00
oster
3140947870 Reconstruction Descriptors are only allocated once per reconstruction,
and don't need their own pool or freelist or anything fancier than a
malloc/free.
2005-01-22 02:22:44 +00:00
oster
85611189b6 These changes complete the effective removal of malloc() from all
write paths within RAIDframe.  They also resolve the "panics with
RAID 5 sets with more than 3 components" issue which was present
(briefly) in the commits which were previously supposed to address
the malloc() issue.

With this new code the 5-component RAID 5 set panics are now gone.

It is now also possible to swap to RAID 5.

The changes made are:

1) Introduce rf_AllocStripeBuffer() and rf_FreeStripeBuffer() to
allocate/free one stripe's worth of space.  rf_AllocStripeBuffer() is
used in rf_MapUnaccessedPortionOfStripe() where it is not sufficient to
allocate memory using just rf_AllocBuffer().  rf_FreeStripeBuffer() is
called from rf_FreeRaidAccDesc(), well after the DAG is finished.

2) Add a set of emergency "stripe buffers" to struct RF_Raid_s.
Arrange for their initialization in rf_Configure().  In low-memory
situations these buffers will be returned by rf_AllocStripeBuffer()
and re-populated by rf_FreeStripeBuffer().

3) Move RF_VoidPointerListElem_t *iobufs from the dagHeader into
struct RF_RaidAccessDesc_s.  This is more consistent with the
original code, and will not result in items being freed "too early".

4) Add a RF_RaidAccessDesc_t *desc to RF_DagHeader_s so that we have a
way to find desc->iobufs.

5) Arrange for desc in the DagHeader to be initialized in InitHdrNode().

6) Don't cleanup iobufs in rf_FreeDAG() -- the freeing is now delayed
until rf_FreeRaidAccDesc() (which is how the original code handled the
allocList, and for which there seem to be some subtle, undocumented
assumptions).

7) Rename rf_AllocBuffer2() to be rf_AllocBuffer() and remove the
former rf_AllocBuffer().  Fix all callers of rf_AllocBuffer().
(This was how it was *supposed* to be after the last time these
changes were made, before they were backed out).

8) Remove RF_IOBufHeader and all references to it.

9) Remove desc->cleanupList and all references to it.

Fixes PR#20191
2004-04-09 23:10:16 +00:00
oster
0ff2145648 For each RAID set, pre-allocate a number of "emergency buffers" to be
used in the event that we can't malloc a buffer of the appropriate
size in the traditional way.  rf_AllocIOBuffer() and rf_FreeIOBuffer()
deal with allocating/freeing these structures.  These buffers are
stored on the 'iobuf' list.  iobuf_count keeps track of how
many buffers are available, and numEmergencyBuffers is the effective
"high-water" mark for the freelist.  The buffers allocated by
rf_AllocIOBuffer() are stripe-unit sized, which is the maximum
size requested by any of the callers.

Add an iobufs entry to RF_DagHeader_s.  Use it for keeping track of
buffers that get allocated from the free-list.

Add a "generic list" pool (VoidPointerListElement Pool) for elements
used to maintain a list of allocated memory.  [It is somewhat less
than ideal to add another little pool to handle this...]

Teach rf_AllocBuffer() to use the new rf_AllocIOBuffer().  Modify
other Mallocs to use rf_AllocIOBuffer(), and to update dag_h->iobufs as
appropriate.

Update rf_FreeDAG() to handle cleanup of dag_h->iobufs.

While here, add some missing pool_destroy() calls for a number of pools.

With these changes, it should (in theory) be possible to swap on
RAID 5 sets again.  That said, I've not had any success there yet --
but the last issue I saw at least wasn't in RAIDframe. :-}

[There is room for this code to become a bit more concise, but I
wanted to do a checkpoint here with something known to work :) ]
2004-03-20 04:22:05 +00:00
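
The "emergency buffer" idea above is a pre-allocated reserve consulted only
when a normal allocation fails; freeing an emergency buffer puts it back on
the reserve rather than releasing it.  A rough single-threaded sketch of that
fallback (illustrative only -- the names, the size, and the omitted locking
are assumptions, not the actual rf_AllocIOBuffer()/rf_FreeIOBuffer() code):

    #include <stdlib.h>

    #define NUM_EMERGENCY_BUFFERS 4      /* high-water mark for the freelist */
    #define STRIPE_UNIT_SIZE      (64 * 1024)

    struct iobuf {
        struct iobuf *next;
        int           is_emergency;      /* came from the reserve? */
        char          data[STRIPE_UNIT_SIZE];
    };

    static struct iobuf *iobuf_freelist;
    static int iobuf_count;              /* buffers currently on the freelist */

    /* Pre-allocate the reserve at configure time, before memory gets tight. */
    int init_emergency_buffers(void)
    {
        for (iobuf_count = 0; iobuf_count < NUM_EMERGENCY_BUFFERS; iobuf_count++) {
            struct iobuf *b = malloc(sizeof(*b));
            if (b == NULL)
                return -1;
            b->is_emergency = 1;
            b->next = iobuf_freelist;
            iobuf_freelist = b;
        }
        return 0;
    }

    struct iobuf *alloc_io_buffer(void)
    {
        struct iobuf *b = malloc(sizeof(*b));   /* normal path */
        if (b != NULL) {
            b->is_emergency = 0;
            return b;
        }
        b = iobuf_freelist;                     /* low-memory fallback */
        if (b != NULL) {
            iobuf_freelist = b->next;
            iobuf_count--;
        }
        return b;                               /* may still be NULL */
    }

    void free_io_buffer(struct iobuf *b)
    {
        if (b->is_emergency) {                  /* refill the reserve */
            b->next = iobuf_freelist;
            iobuf_freelist = b;
            iobuf_count++;
        } else {
            free(b);
        }
    }
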
oster
1a3e20d5d9 Introduce a dual-purpose pool for providing pointer and param "caches"
for RF_DagNode_t's.  Scale the structure size based on RF_MAXCOL.
Use the new allocation method in InitNode().  Note that we can't get
rid of the mallocs in there until we can prove that this new
allocation method is a strict upper bound.  Unless someone tries
running a RAID set with 40 components, the mallocs here shouldn't
be an issue.  (and if someone does make a set with 40 components
they will run into other issues with other constants long before
then)
2004-03-19 17:01:26 +00:00
oster
208b461a96 Introduce 3 more pools and 6 functions to handle allocating/freeing
elements from the pools.

Re-work rf_SelectAlgorithm() to get rid of all the 8 malloc's, and to
use the new functions to get/put these 'support structures'.  I'm not
overly happy with some of the variable names, but them's the breaks.

In the process of changing things, fix a bug:
 - in the case where we can't create a dag, free asmh_b and blockFuncs
too!!

[if you were able to look at the source code related to these changes,
and comprehend what was going on without having your eyes bleed or
getting dizzy, please contact me...  I'm sure I'll have more code
which would benefit by you having a look at it before I commit it :) ]
2004-03-19 02:27:44 +00:00
oster
d4fe1a2103 - Introduce a 'dagnode' pool. Initialize it and allow for cleanup.
Provide rf_AllocDAGNode() and rf_FreeDAGNode() to handle
allocation/freeing.

- Introduce a "nodes" linked list of RF_DagNode_t's into the DAG header.
Initialize nodes in InitHdrNode().  Arrange for nodes cleanup in rf_FreeDAG().

- Add a "list_next" to RF_DagNode_t to keep track of nodes on the
above "nodes" list.  (This is distinct from the "next" field of
RF_DagNode_t, which keeps track of the firing order of nodes.)
"list_next" gets used in the cleanup routines, and in traversing
through a set of nodes that belong to a particular set of nodes
(e.g. those belonging to xorNodes for a given DAG).

- use rf_AllocDAGNode() instead of mallocs of variable-sized arrays of
RF_DagNode_t's.  Mostly mechanical changes to convert the DAG construction
from "access nodes via an array index" to "access nodes via a 'nextnode'
pointer".

- rework a couple of tricky spots where assumptions about the node order
were being abused.

- performance remains consistent with performance before these changes.

[Thanks to Simon Burge (simonb at you.know.where) for looking over
the mechanical changes to make sure I didn't biff anything.]
2004-03-18 16:40:05 +00:00
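
The key data-structure point in this commit is that each node carries two
independent links: "next" orders node firing within the DAG, while
"list_next" threads every node allocated for a DAG onto a list in the header
so cleanup can walk all of them without knowing the DAG's shape.  A small
illustrative sketch (hypothetical names, plain malloc standing in for the
dagnode pool):

    #include <stdlib.h>

    struct dag_node {
        struct dag_node *next;       /* firing order within the DAG */
        struct dag_node *list_next;  /* all nodes owned by this DAG header */
        /* node payload elided */
    };

    struct dag_header {
        struct dag_node *nodes;      /* every node allocated for this DAG */
    };

    /* Allocate one node and thread it onto the header's cleanup list. */
    struct dag_node *alloc_dag_node(struct dag_header *hdr)
    {
        struct dag_node *n = calloc(1, sizeof(*n));
        if (n == NULL)
            return NULL;
        n->list_next = hdr->nodes;
        hdr->nodes = n;
        return n;
    }

    /* Free every node in one pass, regardless of how the DAG was wired up. */
    void free_dag(struct dag_header *hdr)
    {
        struct dag_node *n = hdr->nodes;
        while (n != NULL) {
            struct dag_node *list_next = n->list_next;
            free(n);
            n = list_next;
        }
        hdr->nodes = NULL;
    }
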
oster
bce42a3095 Move pss_pool to rf_pools. Will save a bit of extra memory at
run-time, and we can only do one reconstruction at a time anyway.
Nuke pss_issued_pool - move it to an internal structure in pss.
2004-03-08 02:25:27 +00:00
oster
f95359dd19 - Introduce rf_pools which contains all of the various global pools used
by RAIDframe.  Convert all other RAIDframe global pools to use pools
defined within this new structure.
- Introduce rf_pool_init(), used for initializing a single pool in
RAIDframe.  Teach each of the configuration routines to use
rf_pool_init().
- Cleanup a few pool-related comments.
- Cleanup revent initialization and #defines.
- Add a missing pool_destroy() for the reconbuffer pool.

(Saves another 1K off of an i386 GENERIC kernel, and makes
stuff a lot more readable)
2004-03-07 22:15:19 +00:00
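
The shape of this change is one structure holding every RAIDframe pool plus a
small helper that hides the repeated initialization boilerplate.  A sketch of
that idea follows; the member names, the helper's signature, and the
pool_init(9) call shape are assumptions about the interface of that era, not
quotes of the actual code:

    #include <sys/pool.h>

    /* Placeholder payload types so the sketch is self-contained. */
    struct rf_dagnode { int dummy; };
    struct rf_revent  { int dummy; };

    /* One structure collecting the formerly-global pools (members illustrative). */
    struct rf_pools_s {
        struct pool dagnode;
        struct pool revent;
        /* ... one member per former global pool ... */
    };
    static struct rf_pools_s rf_pools;

    /* Wrap the repeated pool_init()/pool_setlowat() boilerplate in one helper. */
    static void
    rf_pool_init(struct pool *p, size_t size, const char *wchan, int lowat)
    {
        pool_init(p, size, 0, 0, 0, wchan, NULL);  /* pre-NetBSD-4 call shape */
        pool_setlowat(p, lowat);                   /* keep a few items in reserve */
    }

    /* Called once at configuration time; pool_destroy() pairs with this at shutdown. */
    static void
    rf_configure_pools(void)
    {
        rf_pool_init(&rf_pools.dagnode, sizeof(struct rf_dagnode), "rfdagnd", 8);
        rf_pool_init(&rf_pools.revent,  sizeof(struct rf_revent),  "rfrevnt", 8);
    }
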
oster
765e00d3de Step 2 of the disentanglement. We now look to <dev/raidframe/*> for
the stuff that used to live in rf_types.h, rf_raidframe.h, rf_layout.h,
rf_netbsd.h, rf_raid.h, rf_decluster.h, and a few other places.
Believe it or not, when this is all done, things will be cleaner.

No functional changes to RAIDframe.
2001-10-04 15:58:51 +00:00
oster
39a667120f In the event that an up-to-date component cannot be located for a specific
position, see if there is a failed component still hanging around that
we can use instead (but still mark it as failed).  This leads to more
reasonable behaviour (and fewer surprises!) when autoconfiguring and
failed (or previously failed) components are still on the system.
2000-05-28 22:53:49 +00:00
oster
b97e06092d Shuffle some prototypes to a more appropriate location. 2000-03-27 03:01:33 +00:00
oster
2ee63332b3 Add bits for eventual support of deleting components and moving
hot-spares into the main set.
2000-03-26 22:38:28 +00:00
oster
e0ab2f3d0f Correct a comment. 2000-02-23 00:37:11 +00:00
oster
fb56415023 Add a few comments, and an indicator of whether or not an autoconfig set
is 'rootable'.
2000-02-22 03:39:47 +00:00
oster
445591e874 Get recent changes into the tree:
- make component_label variables more consistent (==> clabel)
- re-work incorrect component configuration code
- re-work disk configuration code
- cleanup initial configuration of raidPtr info
- add auto-detection of components and RAID sets (Disabled, for now)
- allow / on RAID sets (Disabled, for now)
- rename "config_disk_queue" to "rf_ConfigureDiskQueue" and properly prototype
in rf_diskqueue.h
- protect some headers with #if _KERNEL  (XXX this needs to be fixed properly)
  and cleanup header formatting.
- expand the component labels (yes, they should be backward/forward compatible)
- other bits and pieces (some function names are still bogus, and will get
changed soon)
2000-02-13 04:53:57 +00:00
ad
daf0e5b05c Replace two instances of TNF copyright with one (was replicated for two
separate contributors).
1999-05-13 21:46:17 +00:00
oster
98d8c12355 Update for recent changes including component label support, clean
bits, rebuilding components in-place, adding hot spares, shutdownhooks, etc.
1999-03-02 03:18:48 +00:00
oster
be9eca67c8 Cleanup/remove unused cruft. First kick at component labels and clean bits.
Still work in progress.  New code is there, but not enabled yet.
1999-02-23 23:57:53 +00:00
oster
0014588545 Phase 2 of the RAIDframe cleanup. The source is now closer to KNF
and is much easier to read.  No functionality changes.
1999-02-05 00:06:06 +00:00
oster
1eecf8e491 RAIDframe cleanup, phase 1. Nuke simulator support, user-land driver,
out-dated comments, and other unneeded stuff.  This helps prepare
for cleaning up the rest of the code, and adding new functionality.

No functional changes to the kernel code in this commit.
1999-01-26 02:33:49 +00:00
oster
38a3987b69 RAIDframe, version 1.1, from the Parallel Data Laboratory at
Carnegie Mellon University.  Full RAID implementation, including
levels 0, 1, 4, 5, 6, parity logging, and a few other goodies.
Ported to NetBSD by Greg Oster.
1998-11-13 04:20:26 +00:00