Fix a nasty little bug that I've been chasing over the past 12 hours.

If raidPtr->numFailures isn't initialized properly, then all sorts of
wacky things can happen, including incorrect DAGs being generated.
(Triggering this problem is a little esoteric, which is why this bug has
been in hiding for so long -- I only saw it after rebooting with a
degraded RAID 5 set that was autoconfigured, rebuilding the failed
component, and then failing the component while I/O was happening to
the RAID set.)
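
For readers who don't live in RAIDframe: DAG selection decides how an
I/O against the set is carried out, and it branches on how many
components have failed. The sketch below is a minimal, hypothetical
illustration of the failure mode -- struct raid and select_dag() are
made up for this example, and only raidPtr->numFailures corresponds to
the real field. If autoconfiguration leaves the counter at 0 on a
degraded set, the fault-free path is chosen even though a component is
gone.

    #include <stdio.h>

    /* Hypothetical stand-in for the RAID softc; only numFailures
     * mirrors the real structure member. */
    struct raid {
            int numFailures;        /* count of failed components */
    };

    /* Hypothetical stand-in for RAIDframe's DAG selection logic. */
    static const char *
    select_dag(const struct raid *raidPtr)
    {
            if (raidPtr->numFailures == 0)
                    return "fault-free DAG"; /* read data columns directly */
            return "degraded DAG";           /* reconstruct from parity */
    }

    int
    main(void)
    {
            struct raid r = { 0 };  /* autoconfig never set numFailures */

            /* A component has actually failed, but the stale counter
             * makes us pick the wrong DAG. */
            printf("stale counter: %s\n", select_dag(&r));

            r.numFailures = 1;      /* what this fix now records */
            printf("after the fix: %s\n", select_dag(&r));
            return 0;
    }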
oster 2004-03-21 06:32:03 +00:00
parent 492aa07868
commit 3dd7f5503f
1 changed file with 5 additions and 3 deletions


@@ -1,4 +1,4 @@
-/* $NetBSD: rf_disks.c,v 1.50 2004/03/13 03:32:08 oster Exp $ */
+/* $NetBSD: rf_disks.c,v 1.51 2004/03/21 06:32:03 oster Exp $ */
 /*-
  * Copyright (c) 1999 The NetBSD Foundation, Inc.
  * All rights reserved.
@@ -67,7 +67,7 @@
  ***************************************************************/
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: rf_disks.c,v 1.50 2004/03/13 03:32:08 oster Exp $");
+__KERNEL_RCSID(0, "$NetBSD: rf_disks.c,v 1.51 2004/03/21 06:32:03 oster Exp $");
 
 #include <dev/raidframe/raidframevar.h>
 
@@ -524,8 +524,10 @@ rf_AutoConfigureDisks(RF_Raid_t *raidPtr, RF_Config_t *cfgPtr,
 
 	/* XXX fix for n-fault tolerant */
 	/* XXX this should probably check to see how many failures
	   we can handle for this configuration! */
-	if (numFailuresThisRow > 0)
+	if (numFailuresThisRow > 0) {
 		raidPtr->status = rf_rs_degraded;
+		raidPtr->numFailures = numFailuresThisRow;
+	}
 
 	/* close the device for the ones that didn't get used */
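
A note on the new braces: in C, an un-braced if guards only the next
statement, so adding the numFailures assignment without braces would
have set it unconditionally. A standalone sketch of that pitfall
(the variables here are local to the example, not the driver's):

    #include <stdio.h>

    int
    main(void)
    {
            int numFailuresThisRow = 0;     /* healthy row */
            int status = 0;
            int numFailures = -1;

            /* The indentation suggests both assignments are guarded,
             * but without braces only the first one is. */
            if (numFailuresThisRow > 0)
                    status = 1;
                    numFailures = numFailuresThisRow;   /* always runs */

            /* Prints "status=0 numFailures=0": numFailures was
             * clobbered even though no component failed. */
            printf("status=%d numFailures=%d\n", status, numFailures);
            return 0;
    }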