tsleep() instead of DELAY. Also, keep retrying the buffer flush as long as the
number of dirty buffers decreases (20 rounds may not be enough for a
very large buffer cache).
Using tsleep() instead of DELAY() gives other kernel threads a chance to run,
which is needed for RAIDframe. With this change I've not been able to
reproduce the 'dirty buffer not flushed' problem with RAIDframe.
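A minimal sketch of the retry loop described above, assuming a hypothetical
count_dirty_buffers() helper; the constants and names are illustrative, not
the committed code:

	/*
	 * Keep flushing while progress is made, sleeping with tsleep()
	 * instead of spinning in DELAY() so other kernel threads (e.g.
	 * the RAIDframe daemon) can run and complete the I/O.
	 */
	void
	flush_dirty_buffers(void)
	{
		int iter, nbusy, lastbusy;

		nbusy = count_dirty_buffers();	/* hypothetical helper */
		lastbusy = nbusy + 1;
		for (iter = 0; nbusy > 0 && (iter < 20 || nbusy < lastbusy);
		    iter++) {
			/* ... schedule async writes of dirty buffers ... */
			lastbusy = nbusy;
			(void)tsleep(&nbusy, PRIBIO, "bflush", hz / 4);
			nbusy = count_dirty_buffers();
		}
		if (nbusy > 0)
			printf("giving up: %d dirty buffers remain\n", nbusy);
	}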
with the following modifications to the initial patch:
- rename SHOLD and P_HOLD to SSUSPEND and P_SUSPEND to avoid confusion with
  PHOLD()
- don't deal with SSUSPEND/P_SUSPEND in fork1(); if we get here while the
  scheduler is suspended we're forking proc0, which can't have P_SUSPEND set.
sched_suspend() suspends the scheduling of user processes by removing all
processes from the run queues and changing their state from SRUN to
SSUSPEND. It also marks all user processes except curproc P_SUSPEND.
When a process has to be put in SRUN and is marked P_SUSPEND, it is placed in
the SSUSPEND state instead.
sched_resume() places all SSUSPEND processes back in SRUN and clears the
P_SUSPEND flag.
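A rough sketch of the pair, with the process-list iteration helpers and the
system-process test left hypothetical; the real code runs with the scheduler
locked and uses the kernel's own run-queue primitives:

	void
	sched_suspend(void)
	{
		struct proc *p;

		for (p = first_proc(); p != NULL; p = next_proc(p)) {
			if (p == curproc || is_system_proc(p))	/* hypothetical */
				continue;
			p->p_flag |= P_SUSPEND;
			if (p->p_stat == SRUN) {
				remrunqueue(p);		/* off the run queue */
				p->p_stat = SSUSPEND;
			}
		}
	}

	void
	sched_resume(void)
	{
		struct proc *p;

		for (p = first_proc(); p != NULL; p = next_proc(p)) {
			p->p_flag &= ~P_SUSPEND;
			if (p->p_stat == SSUSPEND) {
				p->p_stat = SRUN;
				setrunqueue(p);		/* back on the run queue */
			}
		}
	}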
to update it, so don't bother with <machine/atomic.h>.
Remove kernel_lock_release_all() and kernel_lock_acquire_count() (which
didn't do spinlock accounting correctly), and replace them with
spinlock_release_all() and spinlock_acquire_count().
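The intended usage pattern, sketched with paraphrased signatures (a hold
count returned by the release side and consumed by the acquire side):

	/*
	 * Drop every spin lock before blocking, remembering how many
	 * times the kernel lock was held, then restore that count
	 * afterwards so the spinlock accounting stays correct.
	 */
	void
	block_example(void)
	{
		int count;

		count = spinlock_release_all(&kernel_lock);
		/* ... sleep or switch away ... */
		spinlock_acquire_count(&kernel_lock, count);
	}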
- Periodically invoke roundrobin() from hardclock() on all CPUs rather
  than from a timer callout; this allows time-slicing on non-primary CPUs.
- Make pscnt per-CPU.
- Notice psdiv changes on each CPU, and adjust pscnt at that point.
  Also, invoke setstatclockrate() from the clock interrupt when each CPU
  notices the divisor change, rather than when starting/stopping the
  profiling clock (see the statclock() sketch after this list).
- In simple_lock_switchcheck(), allow/enforce exactly one lock to be
held: sched_lock.
- Per e-mail to tech-smp from Bill Sommerfeld, r/w spin locks have
an interlock at splsched(), rather than splhigh().
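A sketch of the per-CPU statclock divisor handling referenced above, with
the per-CPU field names paraphrased (spc_pscnt/spc_psdiv standing in for
the per-CPU state, psdiv for the global divisor the profiling code sets):

	void
	statclock(struct clockframe *frame)
	{
		struct cpu_info *ci = curcpu();
		struct schedstate_percpu *spc = &ci->ci_schedstate;

		/* Each CPU notices a psdiv change on its own tick. */
		if (spc->spc_psdiv != psdiv) {
			spc->spc_psdiv = psdiv;
			spc->spc_pscnt = psdiv;
			if (psdiv == 1)
				setstatclockrate(stathz);
			else
				setstatclockrate(profhz);
		}

		/* Count down the per-CPU divisor; work only at zero. */
		if (--spc->spc_pscnt > 0)
			return;
		spc->spc_pscnt = spc->spc_psdiv;

		/* ... charge statistics to curproc as before ... */
	}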
in the non-MULTIPROCESSOR case (LOCKDEBUG requires it). The scheduler
lock is held upon entry to mi_switch() and cpu_switch(), and
cpu_switch() releases the lock before returning.
Largely from Bill Sommerfeld, with some minor bug fixes and
machine-dependent code hacking from me.
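A sketch of the handoff protocol (function and lock names paraphrased):
the caller takes the scheduler lock, mi_switch() is entered with it held,
and the machine-dependent cpu_switch() drops it before returning in the
new context:

	void
	yield_sketch(void)
	{
		int s;

		s = splsched();
		simple_lock(&sched_lock);
		/* ... put curproc back on a run queue ... */
		mi_switch();	/* calls cpu_switch(), which releases
				   sched_lock before returning */
		splx(s);
	}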
- LOCK_ASSERT(), which expands to KASSERT() if LOCKDEBUG is defined.
- New simple_lock_held(), which tests whether the calling CPU holds
  the specified simple lock.
From Bill Sommerfeld, modified slightly by me.
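A sketch of the two additions, assuming the LOCKDEBUG conditional described
above (the non-LOCKDEBUG expansion compiles away, so simple_lock_held()
need only exist under LOCKDEBUG):

	#if defined(LOCKDEBUG)
	#define	LOCK_ASSERT(x)	KASSERT(x)
	#else
	#define	LOCK_ASSERT(x)	/* nothing */
	#endif

	/* Typical use: document and enforce a locking invariant. */
	void
	remrunqueue_checked(struct proc *p)
	{
		LOCK_ASSERT(simple_lock_held(&sched_lock));
		remrunqueue(p);
	}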