(tightly scoped) reason for recording the node address by recording
the assigned number. Dink pci/if_bge.c to match, since ic/ath.c was
used as the archetype.
yesterday's sys/dev/ic/ath.c) to match today's ath.c driver.
Commit now in the hope that Andrew Brown will pick up this file for
any more pending changes.
commit. The rate adaptation code expects them. Usually wi gets
lucky, and an Rx/Alloc/Info event triggers the interrupt handler,
but I had not intended for wi to count on it.
Without Tx/TxExc interrupts enabled, wi will sometimes exhaust all
its rssdescs and cease transmitting. Usually it sets IFF_OACTIVE
in that situation.
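For illustration, enabling the Tx/TxExc events amounts to including them in
the interrupt-enable mask the driver writes at (re)initialization time. This
is only a sketch: the register and event-bit names (WI_INT_EN, WI_EV_*) are
assumed from the wi headers, and WI_INTRS_SKETCH is a made-up name rather
than the driver's own mask.

/* Sketch only: event bits and register name assumed; mask name made up. */
#define WI_INTRS_SKETCH \
	(WI_EV_RX | WI_EV_ALLOC | WI_EV_INFO | WI_EV_TX | WI_EV_TX_EXC)

static void
wi_enable_intrs_sketch(struct wi_softc *sc)
{
	/* Rx/Alloc/Info as before, plus Tx and TxExc, so every transmit
	 * completion (good or bad) reclaims its descriptor and feeds the
	 * rate adaptation code directly. */
	CSR_WRITE_2(sc, WI_INT_EN, WI_INTRS_SKETCH);
}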
(c) TNF
line from 4-clause UCB to 3-clause UCB license; in other words,
remove UCB's ad clause from the license TNF grants.
There is no point in TNF demanding that UCB's ad clause be followed
when even UCB doesn't demand it any longer.
Ok'd by board@ and agc@.
RAID5 sets with more than 3 drives. Still need to figure out why
the original changes were losing, but need the version in tree reliable
first!
Huge THANKS to Juergen Hannken-Illjes for helping track down
the changes that were causing the lossage.
ExtremeRAID with NV cache) and "dumb" (3ware 6410) ld providers.
Instead, use the default buffer queue policy.
With the 3ware adapter, three extractions of pkgsrc took 329 seconds
using the read-priority strategy instead of 331 with FCFS -- but with
a dramatic improvement in perceived system response (latency for
I/O outside the main stream).
With the Mylex adapter, the improvement was dramatic: switching from
FCFS to read priority cut the time from 381 seconds to 135 seconds!
There was a less-noticeable improvement in perceived latency as well.
The other disk drivers currently hard-wired to FCFS or another policy
should probably be changed as well.
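For readers unfamiliar with the policies: FCFS issues requests strictly in
arrival order, while a read-priority queue hands out any pending read before
pending writes, so synchronous reads don't sit behind a long stream of
asynchronous writes. A minimal sketch of the idea (all names illustrative;
this is not the actual bufq implementation):

#include <sys/queue.h>

struct buf_stub {
	int			b_read;		/* 1 = read, 0 = write */
	TAILQ_ENTRY(buf_stub)	b_entry;
};

struct rp_queue {
	TAILQ_HEAD(, buf_stub)	rq_reads;	/* pending reads */
	TAILQ_HEAD(, buf_stub)	rq_writes;	/* pending writes */
};

static void
rp_init(struct rp_queue *q)
{
	TAILQ_INIT(&q->rq_reads);
	TAILQ_INIT(&q->rq_writes);
}

static void
rp_put(struct rp_queue *q, struct buf_stub *bp)
{
	if (bp->b_read)
		TAILQ_INSERT_TAIL(&q->rq_reads, bp, b_entry);
	else
		TAILQ_INSERT_TAIL(&q->rq_writes, bp, b_entry);
}

static struct buf_stub *
rp_get(struct rp_queue *q)
{
	struct buf_stub *bp;

	/* Reads first; writes only when no read is pending. */
	if ((bp = TAILQ_FIRST(&q->rq_reads)) != NULL)
		TAILQ_REMOVE(&q->rq_reads, bp, b_entry);
	else if ((bp = TAILQ_FIRST(&q->rq_writes)) != NULL)
		TAILQ_REMOVE(&q->rq_writes, bp, b_entry);
	return bp;
}

A real read-priority policy also has to bound write starvation (for example
by letting a batch of writes through every so often); the sketch omits that.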
sufficient to clobber this nasty little bug. The behaviour observed
was a panic when doing a 'raidctl -f' on a component when DAGs were
in flight for the given RAID set. Unfortunately, the faulty behaviour
was very intermittent, and it was difficult not only to reliably
reproduce the bug (or to tell when it was fixed!) but even to figure
out what might be causing the problem.
The real issue was that ci_vp for the failed component was being
set to NULL in rf_FailDisk(), but with DAGs still in flight, some
of them were still expecting to use ci_vp to determine where to
read from/write to!
The fix is to call rf_SuspendNewRequestsAndWait() from rf_FailDisk()
to make sure the RAID set is quiet and all IOs have completed before
mucking with ci_vp and other data structures. rf_ResumeNewRequests()
is then used to continue on as usual.
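To make the ordering concrete, here is a rough sketch of the pattern; the
component-state manipulation is elided and the argument list is an
assumption, and only rf_SuspendNewRequestsAndWait(), rf_ResumeNewRequests()
and ci_vp come from the change itself:

static void
fail_disk_sketch(RF_Raid_t *raidPtr, int fcol)
{
	/* Block new I/O and wait for every in-flight DAG to complete. */
	rf_SuspendNewRequestsAndWait(raidPtr);

	/*
	 * Only now is it safe to touch the component state: mark
	 * component fcol failed, bump the failure count, and clear
	 * ci_vp.  Nothing can still be dereferencing it.
	 */
	/* ... mark component fcol failed, NULL out ci_vp, etc. ... */

	/* Let I/O continue against the now-degraded set. */
	rf_ResumeNewRequests(raidPtr);
}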
If raidPtr->numFailures isn't initialized properly, then all sorts of
whacky things can happen, including incorrect DAGs being generated.
(Triggering this problem is a little esoteric, which is why this bug has
been in hiding for so long -- I only saw it after rebooting with a
degraded RAID 5 set that was autoconfigured, rebuilding the failed
component, and then failing the component while IO was happening to
the RAID set.)
rf_dagutils.h... missed this one from yesterday. sorry folks :( ]
Change signature of rf_AllocBuffer() to take a dag_h and buffer size
instead of a PDA and an alloclist. This lets us do the vple dance
inside of rf_AllocBuffer().
Clean up usage of rf_AllocIOBuffer() and use rf_AllocBuffer() instead.
Fix all uses of rf_AllocBuffer() to conform to the new way of doing
things.
instead of a PDA and an alloclist. This lets us do the vple dance
inside of rf_AllocBuffer().
Clean up usage of rf_AllocIOBuffer() and use rf_AllocBuffer() instead.
Fix all uses of rf_AllocBuffer() to conform to the new way of doing
things.
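In other words, the call shape changes roughly like this. The prototype,
the helper name alloc_for_pda(), and the field names in the caller are
approximations from memory, not copied from the tree; the point is that the
per-caller "vple" bookkeeping against an alloclist now happens once, inside
rf_AllocBuffer(), against the dag_h:

/* new shape (approximate): callers pass the DAG header and a byte count */
void *rf_AllocBuffer(RF_Raid_t *raidPtr, RF_DagHeader_t *dag_h, int size);

/* typical caller, before and after (approximate) */
static void *
alloc_for_pda(RF_Raid_t *raidPtr, RF_DagHeader_t *dag_h,
    RF_PhysDiskAddr_t *pda)
{
	/* was (roughly): rf_AllocBuffer(raidPtr, pda, allocList) */
	return rf_AllocBuffer(raidPtr, dag_h,
	    pda->numSector << raidPtr->logBytesPerSector);
}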
used in the event that we can't malloc a buffer of the appropriate
size in the traditional way. rf_AllocIOBuffer() and rf_FreeIOBuffer()
deal with allocating/freeing these structures. These buffers are
kept on the 'iobuf' list. iobuf_count keeps track of how
many buffers are available, and numEmergencyBuffers is the effective
"high-water" mark for the freelist. The buffers allocated by
rf_AllocIOBuffer() are stripe-unit sized, which is the maximum
size requested by any of the callers.
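The freelist itself is just a classic pre-allocated pool with a high-water
mark. A sketch of the shape (structure and helper names are illustrative and
locking is omitted; only the 'iobuf' list, iobuf_count and
numEmergencyBuffers correspond to names in the change):

#include <sys/queue.h>
#include <sys/malloc.h>

struct emerg_buf {
	void			*eb_buf;	/* one stripe unit of space */
	SLIST_ENTRY(emerg_buf)	 eb_next;
};

struct emerg_pool {
	SLIST_HEAD(, emerg_buf)	 ep_free;	/* cf. the 'iobuf' list */
	int			 ep_count;	/* cf. iobuf_count */
	int			 ep_high;	/* cf. numEmergencyBuffers */
};

/* Grab an emergency buffer, or NULL if the freelist is empty. */
static struct emerg_buf *
emerg_get(struct emerg_pool *ep)
{
	struct emerg_buf *eb;

	if ((eb = SLIST_FIRST(&ep->ep_free)) != NULL) {
		SLIST_REMOVE_HEAD(&ep->ep_free, eb_next);
		ep->ep_count--;
	}
	return eb;
}

/* Return a buffer; keep at most ep_high of them cached. */
static void
emerg_put(struct emerg_pool *ep, struct emerg_buf *eb)
{
	if (ep->ep_count < ep->ep_high) {
		SLIST_INSERT_HEAD(&ep->ep_free, eb, eb_next);
		ep->ep_count++;
	} else {
		free(eb->eb_buf, M_DEVBUF);
		free(eb, M_DEVBUF);
	}
}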
Add an iobufs entry to RF_DagHeader_s. Use it for keeping track of
buffers that get allocated from the free-list.
Add a "generic list" pool (VoidPointerListElement Pool) for elements
used to maintain a list of allocated memory. [It is somewhat less
than ideal to add another little pool to handle this...]
Teach rf_AllocBuffer() to use the new rf_AllocIOBuffer(). Modify
other Mallocs to use rf_AllocIOBuffer(), and to update dag_h->iobufs as
appropriate.
Update rf_FreeDAG() to handle cleanup of dag_h->iobufs.
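Putting the pieces together, continuing the illustrative names from the
sketch above: dag_iobufs stands in for the new dag_h->iobufs field, the
allocation/cleanup functions are made-up stand-ins for the real ones, and
ordinarily-malloc'd memory is tracked separately (via the vple list) in the
real code.

struct dag_header_sketch {
	SLIST_HEAD(, emerg_buf)	dag_iobufs;	/* cf. dag_h->iobufs */
	/* ... the rest of RF_DagHeader_s ... */
};

static void *
dag_alloc_buffer(struct emerg_pool *ep, struct dag_header_sketch *dag_h,
    size_t size)
{
	struct emerg_buf *eb;
	void *p;

	/* Normal path: plain allocation (tracked on the vple list in
	 * the real code). */
	if ((p = malloc(size, M_DEVBUF, M_NOWAIT)) != NULL)
		return p;

	/* Emergency path: borrow a stripe-unit sized buffer and
	 * remember it on the DAG header so it can be given back. */
	if ((eb = emerg_get(ep)) == NULL)
		return NULL;
	SLIST_INSERT_HEAD(&dag_h->dag_iobufs, eb, eb_next);
	return eb->eb_buf;
}

/* The rf_FreeDAG() side: hand every borrowed buffer back to the pool. */
static void
dag_free_iobufs(struct emerg_pool *ep, struct dag_header_sketch *dag_h)
{
	struct emerg_buf *eb;

	while ((eb = SLIST_FIRST(&dag_h->dag_iobufs)) != NULL) {
		SLIST_REMOVE_HEAD(&dag_h->dag_iobufs, eb_next);
		emerg_put(ep, eb);
	}
}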
While here, add some missing pool_destroy() calls for a number of pools.
With these changes, it should (in theory) be possible to swap on
RAID 5 sets again. That said, I've not had any success there yet --
but the last issue I saw at least wasn't in RAIDframe. :-}
[There is room for this code to become a bit more concise, but I
wanted to do a checkpoint here with something known to work :) ]