~forever. This requires a number of things:
1) If we can't create a DAG, set desc->numStripes to 0 in
rf_SelectAlgorithm. This will ensure that we don't attempt to free
any dagArray[] elements in rf_StateCleanup.
2) Modify rf_State_CreateDAG() to not panic in the event of a DAG
creation failure. Instead, set bp->b_flags and bp->b_error, and
arrange to skip ahead to rf_State_Cleanup(). (A sketch of this error
path follows the list.)
3) Need to mark desc->status as "bad" so that we actually stop looking
for a different DAG (which we won't find, no matter how many times
we try).
4) rf_State_LastState() will then do the biodone(), and return EIO for
the IO in question.
5) Remove some " || 1 "'s from ProcessNode(). These were for
debugging, and we don't need the failure notices spewing
over and over again as the failing DAGs are processed.
6) Needed to change
    if (asmap->numDataFailed + asmap->numParityFailed > 1)
to
    if ((asmap->numDataFailed + asmap->numParityFailed > 1) ||
        (raidPtr->numFailures > 1)) {
in rf_raid5.c so that it doesn't try to return
rf_CreateNonRedundantWriteDAG as the creation function.
7) Note that we can't apply the above change to the RAID 1 code:
with the silly "fake 2-D" RAID 1 sets it is possible to have 2 failed
components in the RAID 1 set, and the extra check would stop them
from working.
(I really don't know why/how those "fake 2-D" RAID 1 sets even work
with all the "single-fault" assumptions present in the rest of the
code.)
8) Needed to protect rf_RAID0DagSelect() in a similar way -- it should
return NULL as the createFunc.
9) No point printing out "Multiple disks failed..." a zillion times.
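To illustrate items 2) through 4), here is a minimal, self-contained
sketch of the intended error path. The struct and field names below
are simplified stand-ins rather than the real RAIDframe declarations,
and the B_ERROR value is assumed for the sketch:

    #include <errno.h>
    #include <stdbool.h>

    struct io_buf {                 /* stands in for struct buf */
        int b_flags;
        int b_error;
    };
    #define B_ERROR 0x01            /* assumed value, sketch only */

    struct access_desc {            /* stands in for the access descriptor */
        int            numStripes;
        bool           bad;         /* stands in for desc->status == "bad" */
        struct io_buf *bp;
    };

    /* Called when no DAG could be created for this access. */
    static void
    dag_creation_failed(struct access_desc *desc)
    {
        desc->numStripes = 0;          /* item 1: nothing in dagArray[] to free */
        desc->bp->b_flags |= B_ERROR;  /* item 2: flag the buffer as failed... */
        desc->bp->b_error = EIO;       /* ...and carry EIO back to the caller */
        desc->bad = true;              /* item 3: stop retrying DAG creation */
        /* item 4: the state machine then skips ahead to cleanup, where
         * biodone() completes the I/O with the error set above. */
    }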
- strong alias __sigprocmask14 to pthread_sigmask
- call _sys___sigprocmask14 where appropriate
- make pthread_sigmask apply the signal mask immediately instead of
lazily when pthreads haven't been started yet (see the sketch after
this list)
- add pt_stackinfo to struct __pthread_st
- add pthread__stackinfo_offset returning the offset from ss_sp to
pt_stackinfo
- pass stackinfo_offset to sa_register and set SA_FLAG_STACKINFO to
make the kernel use it
- call pthread__sa_recycle in pthread__resolve_locks; g/c recycleq and
pthread__recycle_bulk
- return stack in pthread__sa_recycle by incrementing sasi_stackgen
- make pthread__sa_recycle debugging output formatting conditional on
pthread__debug_newline
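As a rough illustration of the first three items above, here is a
sketch of pthread_sigmask forwarding to the raw syscall stub and being
exported as __sigprocmask14 via a strong alias. This is not the actual
libpthread source: the threaded (lazy) case and the error-number
conversion are omitted.

    #include <sys/cdefs.h>
    #include <signal.h>
    #include <pthread.h>

    /* Hidden syscall stub named above; declared here for the sketch. */
    int _sys___sigprocmask14(int, const sigset_t *, sigset_t *);

    int
    pthread_sigmask(int how, const sigset_t *set, sigset_t *oset)
    {
        /* Before any thread has been created there is nothing to defer:
         * apply the mask right away instead of setting it lazily. */
        return _sys___sigprocmask14(how, set, oset);
    }
    __strong_alias(__sigprocmask14, pthread_sigmask)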
libpthread passes an offset between ss_sp and struct sa_stackinfo_t
(located in struct
__pthread_st) when calling sa_register. The kernel increments the
sast_gen counter in struct sastack when an upcall stack is used.
libpthread increments the sasi_stackgen counter in struct
sa_stackinfo_t when an upcall stack is freed. The kernel compares the
two counters to decide if a stack is free or in use.
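A minimal sketch of that handshake, with both structures stripped
down to just the counters; whether the kernel's test is exact equality
is an assumption here, since the log only says the counters are
compared:

    #include <stdbool.h>

    struct sa_stackinfo_t {             /* userland (libpthread) side */
        unsigned int sasi_stackgen;     /* bumped when a stack is freed */
    };

    struct sastack {                    /* kernel side */
        unsigned int sast_gen;          /* bumped when a stack is used */
    };

    /* A stack is free when every use recorded by the kernel has been
     * matched by a free recorded by libpthread. */
    static bool
    stack_is_free(const struct sastack *sast,
        const struct sa_stackinfo_t *sasi)
    {
        return sast->sast_gen == sasi->sasi_stackgen;
    }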
- add struct sa_stackinfo_t with sasi_stackgen to count stack use in
userland
- add sast_gen to struct sastack to count stack use in kernel
- add SA_FLAG_STACKINFO to enable the stackinfo_offset argument in the
sa_register syscall
- add sa_stackinfo_offset to struct sadata for offset between ss_sp
and struct sa_stackinfo_t
- add ssize_t stackinfo_offset argument to sa_register, initialize
struct sadata's sa_stackinfo_offset from it if SA_FLAG_STACKINFO is
set (see the sketch after this list)
- add sa_getstack, sa_getstack0, sa_stackused and sa_setstackfree
functions to find/use/free upcall stacks and use these where
appropriate
- don't record stack for upcall in sa_upcall0
- pass sau to sa_switchcall instead of l2 (l2 = curlwp in sa_switchcall)
- add sa_vp_blocker to struct sadata to pass recently blocked lwp to
sa_switchcall
- delay finding a stack for blocked upcalls to sa_switchcall
- add sa_stacknext to struct sadata pointing to next most likely free
upcall stack; also g/c sa_stackslist in struct sadata and sast_list
in struct sastack
- add L_SA_WOKEN flag: LWP is on sa_woken queue
- add L_SA_RECYCLE flag: LWP should be recycled in sa_setwoken
- replace l_upcallstack with L_SA_WOKEN/L_SA_RECYCLE/L_SA_BLOCKING
flags
- g/c now unused sast_blocker in struct sastack
- make sa_switchcall, sa_upcall0 and sa_upcall_getstate static in
kern_sa.c
- call sa_upcall_userret only once in userret
- split sa_makeupcalls out of sa_upcall_userret and use to process
the sa_upcalls queue
- on process exit: mark LWPs sleeping in saunblock interruptible; also
there are no LWPs sleeping on l->l_upcallstack anymore; also clear
sa_wokenq_head to prevent unblocked upcalls
additional changes:
- cleanup timerupcall sa_vp == curlwp check
- add a check in sa_yield for the case where we didn't block on our
way here and we would no longer be the LWP on the VP
- invalidate sa_vp_ofaultaddr after resolving pagefault
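As referenced in the list above, a sketch of the SA_FLAG_STACKINFO
handling around sa_register. The field and flag names follow the log;
the flag's value, the helper's name and everything around it are
simplified assumptions:

    #include <sys/types.h>

    #define SA_FLAG_STACKINFO 0x02      /* assumed value, sketch only */

    struct sadata {                     /* heavily trimmed */
        int     sa_flag;
        ssize_t sa_stackinfo_offset;    /* ss_sp -> struct sa_stackinfo_t */
    };

    /* Hypothetical helper: record the offset only when userland passed
     * SA_FLAG_STACKINFO to sa_register. */
    static void
    sa_record_stackinfo(struct sadata *sa, int flags,
        ssize_t stackinfo_offset)
    {
        sa->sa_flag = flags;
        if (flags & SA_FLAG_STACKINFO)
            sa->sa_stackinfo_offset = stackinfo_offset;
        else
            sa->sa_stackinfo_offset = 0;   /* stack accounting unused */
    }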
(for rstartd.real) separate from the LIBDIR (for the configuration),
and install rstartd.real into /usr/X11R6/libexec.
(Installing binaries under /usr/X11R6/lib/X11 (which may be a
symlink to /etc/X11) is Bad.)
Override BINDIR after the .include of <bsd.own.mk>, so that rstartd.real
is installed in the correct location.
brief summary:
* Add NetBSD RCS Ids. Change to use a date based version number.
* Remove unused variables and functions.
* Move towards NetBSD code style.
* Add missing GNU options (except for --include, --exclude and
--line-buffered)
* Bug fixes
* Bug fixes and changes from OpenBSD's src/usr.bin/grep
A full list of changes can be viewed in the NetBSD CVS repository at
othersrc/usr.bin/grep. A ChangeLog is also available at:
ftp://ftp.NetBSD.org/pub/NetBSD/misc/cjep/grep-ChangeLog.txt
If you want to help out, please let me (cjep@) know so that we can
organise our efforts efficiently.
- add PTHREAD_PID_DEBUG which prints the pid before each debuglog
line (see the sketch after this list)
- output thread returned in pthread__next
- add asserts in pthread__sched akin to asserts in pthread__sched_bulk:
check if scheduled thread is at front/end of queue
- pthread__upcall: output event/interrupted LWP count instead of LWPid
of the first event/interrupted LWP (since unblock upcalls can have
multiple event LWPs).
- pthread__find_interrupted: output LWPid here
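For the first item, a standalone sketch of what a pid-prefixed debug
line amounts to; this is not libpthread's actual debuglog machinery,
just the idea behind PTHREAD_PID_DEBUG:

    #include <stdarg.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative only: the real debug output does not go to stderr. */
    static void
    debuglog(const char *fmt, ...)
    {
        va_list ap;

    #ifdef PTHREAD_PID_DEBUG
        /* Prefix each line with the pid so output from several
         * processes can be told apart. */
        fprintf(stderr, "%d: ", (int)getpid());
    #endif
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
    }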