Commit Graph

99 Commits

jdolecek
e9e91a0fb5 split off thread-specific state from struct sigacts into struct sigctx, leaving
only the signal handler array shareable between threads;
move other miscellaneous signal state from struct proc to struct sigctx

This addresses kern/10981 by Matthew Orgass.
2000-12-22 22:58:52 +00:00
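
A minimal sketch of the split described in this commit; the field
names are illustrative, not the actual NetBSD layout:

    /* Shared between threads: only the signal handler array. */
    struct sigacts {
            struct sigaction ps_sigact[NSIG];       /* signal handlers */
    };

    /* Per-thread signal state, split out of sigacts and proc. */
    struct sigctx {
            sigset_t ps_sigmask;            /* current signal mask */
            sigset_t ps_siglist;            /* pending signals */
            stack_t  ps_sigstk;             /* signal stack state */
    };
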
jdolecek
ab8c5177be use the SIGACTION() macro to get at the appropriate sigaction
structure
2000-11-12 18:17:56 +00:00
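
The macro presumably just hides the indirection through the shared
structure; a hypothetical shape:

    /* Fetch the sigaction for signal `sig' of process `p'. */
    #define SIGACTION(p, sig)       ((p)->p_sigacts->ps_sigact[(sig)])
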
enami
6cf8248614 Also stop runnable but swapped-out user processes in suspendsched(). 2000-09-23 01:00:35 +00:00
enami
48b7bc7f16 The struct prochd isn't a proc. Start scanning from prochd.ph_link instead
of &prochd.
2000-09-15 06:36:25 +00:00
thorpej
03810b147f Make sure to lock the proclist when we're traversing allproc. 2000-09-14 19:13:29 +00:00
bouyer
ca5824ec3b Implement suspendsched() by putting all sleeping and runnable processes
in the SSTOP state, except P_SYSTEM processes and curproc. We have no way to
recover the original state of a process, so scheduling can't be restarted;
this can only be used at shutdown time.

XXX suspendsched() should also deal with processes running on other CPUs.
I don't know how to do that, and as long as we have a kernel big lock,
this shouldn't be a problem.
2000-09-05 16:27:51 +00:00
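
A sketch of the one-way suspension this describes; the allproc walk
and the helper names are assumed for illustration:

    void
    suspendsched(void)
    {
            struct proc *p;

            proclist_lock_read();
            for (p = allproc.lh_first; p != NULL; p = p->p_list.le_next) {
                    if ((p->p_flag & P_SYSTEM) || p == curproc)
                            continue;
                    if (p->p_stat == SRUN) {
                            remrunqueue(p);         /* off the run queue */
                            p->p_stat = SSTOP;
                    } else if (p->p_stat == SSLEEP)
                            p->p_stat = SSTOP;      /* won't return to SRUN */
                    /* The original state is not saved anywhere, so this
                     * cannot be undone -- shutdown use only. */
            }
            proclist_unlock_read();
    }
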
bouyer
aacf1f7a6a Back out the suspendsched()/resumesched() thing, per request of Jason Thorpe &
Bill Sommerfeld. suspendsched() will be implemented in a different way.
2000-09-05 16:20:27 +00:00
bouyer
6720d310ef wakeup()->sched_wakeup() 2000-09-01 17:14:04 +00:00
bouyer
629150f864 Add the sched_suspend/sched_resume functions, as discussed on tech-kern,
with the following modifications to the initial patch:
- rename SHOLD and P_HOST to SSUSPEND and P_SUSPEND to avoid confusion with
  PHOLD()
- don't deal with SSUSPEND/P_SUSPEND in fork1(); if we get here while the
  scheduler is suspended we're forking proc0, which can't have P_SUSPEND set.

sched_suspend() suspends the scheduling of user processes by removing all
processes from the run queues and changing their state from SRUN to
SSUSPEND; it also marks all user processes except curproc P_SUSPEND.
When a process marked P_SUSPEND would be put in SRUN, it is placed in
the SSUSPEND state instead.
sched_resume() places all SSUSPEND processes back in SRUN and clears the
P_SUSPEND flag.
2000-08-31 14:36:19 +00:00
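
The SRUN interception might look like this at the point where a
process is made runnable (sketch):

    /* setrunnable() path: a P_SUSPEND process parks in SSUSPEND
     * instead of going onto a run queue. */
    if (p->p_flag & P_SUSPEND)
            p->p_stat = SSUSPEND;
    else {
            p->p_stat = SRUN;
            setrunqueue(p);
    }
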
sommerfeld
bdc30aed03 Since the spinlock count is per-cpu, we don't need atomic operations
to update it, so don't bother with <machine/atomic.h>

Flush kernel_lock_release_all() and kernel_lock_acquire_count() (which
didn't do spinlock accounting correctly), and replace them with
spinlock_release_all() and spinlock_acquire_count().
2000-08-26 19:26:43 +00:00
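
Since only the owning CPU ever touches its own count, a plain
increment is enough; a sketch with an assumed field name:

    /* No <machine/atomic.h> needed: the counter is CPU-private. */
    static __inline void
    spinlock_count_inc(struct cpu_info *ci)
    {
            ci->ci_spin_locks++;    /* field name assumed for illustration */
    }
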
sommerfeld
340951f9d1 On second thought.. pass cpu_info * to roundrobin() explicitly. 2000-08-26 04:01:16 +00:00
sommerfeld
ec08310fab More MP clock/scheduler changes:
- Periodically invoke roundrobin() from hardclock() on all CPUs rather
  than from a timer callout; this allows time-slicing on non-primary CPUs.
- Make pscnt per-CPU.
- Notice psdiv changes on each CPU, and adjust pscnt at that point.
Also, invoke setstatclockrate() from the clock interrupt when each CPU
notices the divisor change, rather than when starting/stopping the
profiling clock.
2000-08-26 03:34:36 +00:00
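
The per-CPU time-slicing might hang off hardclock() roughly like
this (sketch; the tick-counter name is assumed):

    /* hardclock(), now running on every CPU: */
    if (--ci->ci_schedstate.spc_rrticks <= 0)
            roundrobin(ci);         /* slice expired on this CPU */
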
thorpej
4db6fc7542 Make need_resched() take a "struct cpu_info *" argument.  This
gives a primitive form of processor affinity.  Its use in
roundrobin() still needs some work.
2000-08-25 01:04:06 +00:00
thorpej
4f944290a2 Correct a comment. 2000-08-24 06:14:34 +00:00
sommerfeld
6d8ab92a1a Move kernel_lock release/switch/reacquire from ltsleep() to
mi_switch(), so we don't botch the locking around preempt() or
yield().
2000-08-24 02:37:27 +00:00
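
Concentrating the lock juggling in mi_switch() means every path into
a context switch handles it the same way; roughly (names from the
spinlock_* commit above, exact signatures assumed):

    /* mi_switch(): drop all spin locks (including kernel_lock)
     * across the switch, then take them back on return. */
    count = spinlock_release_all();
    cpu_switch(p);
    spinlock_acquire_count(count);
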
thorpej
f759220f40 Define the MI parts of the "big kernel lock" perimeter. From
Bill Sommerfeld.
2000-08-22 17:28:28 +00:00
thorpej
a86d1f4891 Add a lock around the scheduler, and use it as necessary, including
in the non-MULTIPROCESSOR case (LOCKDEBUG requires it).  Scheduler
lock is held upon entry to mi_switch() and cpu_switch(), and
cpu_switch() releases the lock before returning.

Largely from Bill Sommerfeld, with some minor bug fixes and
machine-dependent code hacking from me.
2000-08-20 21:50:06 +00:00
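
The resulting protocol, as a sketch (the SCHED_LOCK() macro name is
assumed):

    int s;

    SCHED_LOCK(s);          /* raise spl and take the scheduler lock */
    mi_switch(p);           /* entered with the scheduler lock held */
    /* cpu_switch() released the lock before returning, so there is
     * no matching SCHED_UNLOCK() on this path. */
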
thorpej
b9d2d53fb8 Add a DIAGNOSTIC or LOCKDEBUG check for held spin locks. 2000-08-07 22:10:52 +00:00
thorpej
b24441d4d1 It doesn't make sense to charge simple locks to procs, because
simple locks are held by CPUs.  Remove p_simple_locks (which was
unused anyway, really), and add a LOCKDEBUG check for held simple
locks in mi_switch().  Grow p_locks to an int to take up the space
previously used by p_simple_locks so that the proc structure doesn't
change size.
2000-08-07 21:55:22 +00:00
nathanw
729e93de71 principal -> principle (in a comment) 2000-08-02 20:21:36 +00:00
mrg
32aa199ccf remove include of <vm/vm.h> 2000-06-27 17:41:07 +00:00
sommerfeld
e964d558a7 Fix assorted bugs around shutdown/reboot/panic time.
- add a new global variable, doing_shutdown, which is nonzero if
  vfs_shutdown() or panic() have been called.
- in panic, set RB_NOSYNC if doing_shutdown is already set on entry,
  so we don't reenter vfs_shutdown if we panicked there.
- in vfs_shutdown, don't use proc0's process for sys_sync unless
  curproc is NULL.
- in lockmgr, attribute successful locks to proc0 if doing_shutdown
  && curproc==NULL, and panic if we can't get the lock right away; avoids the
  spurious lockmgr DIAGNOSTIC panic from the ddb reboot command.
- in subr_pool, deal with curproc==NULL in the doing_shutdown case.
- in mfs_strategy, bitbucket writes if doing_shutdown, so we don't
  wedge waiting for the mfs process.
- in ltsleep, treat ((curproc == NULL) && doing_shutdown) like the
  panicstr case.

Appears to fix: kern/9239, kern/10187, kern/9367.
May also fix kern/10122.
2000-06-10 18:44:43 +00:00
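
The panic()-side interlock from the first two items, sketched:

    int doing_shutdown;     /* nonzero once vfs_shutdown() or panic() runs */

    /* In panic(): if a shutdown is already in progress, don't sync --
     * we may have panicked inside vfs_shutdown() itself. */
    if (doing_shutdown)
            bootopt |= RB_NOSYNC;
    doing_shutdown = 1;
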
thorpej
fcc7898856 Change tsleep() to ltsleep(), which takes an interlock argument. The
interlock is released once the scheduler is locked, so that a race
between a sleeper and an awakener is prevented in a multiprocessor
environment.  Provide a tsleep() macro that provides the old API.
2000-06-08 05:50:37 +00:00
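
The compatibility macro is simply the new function with a NULL
interlock (sketch matching the description above):

    /* Old API: sleep with no interlock to release. */
    #define tsleep(ident, prio, wmesg, timo) \
            ltsleep((ident), (prio), (wmesg), (timo), NULL)
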
thorpej
956b3ca3b3 Track which process a CPU is running/has last run on by adding a
p_cpu member to struct proc.  Use this in certain places when
accessing scheduler state, etc.  For the single-processor case,
just initialize p_cpu in fork1() to avoid having to set it in the
low-level context switch code on platforms which will never have
multiprocessing.

While I'm here, comment a few places where there are known issues
for the SMP implementation.
2000-05-31 05:02:31 +00:00
thorpej
6d02ce1e66 All users of the old sleep() are now gone; nuke it. 2000-05-27 05:00:47 +00:00
sommerfeld
40339b39f9 Reduce use of curproc in several places:
- Change the ktrace interface to pass in the current process, rather than
  p->p_tracep, since the various ktr* functions need curproc anyway.

- Add curproc as a parameter to mi_switch(), since all callers had it
  handy anyway.

- Add a second proc argument for inferior(), since callers all had
  curproc handy.

Also, miscellaneous cleanups in ktrace:

- ktrace now always uses file-based, rather than vnode-based, I/O
  (simplifies, increases type safety); eliminate KTRFLAG_FD & KTRFAC_FD.
  Do non-blocking I/O, and yield a finite number of times when receiving
  EWOULDBLOCK before giving up.

- move code duplicated between sys_fktrace and sys_ktrace into ktrace_common.

- simplify the interface to ktrwrite()
2000-05-27 00:40:29 +00:00
thorpej
a7d0570e67 First sweep at scheduler state cleanup. Collect MI scheduler
state into global and per-CPU scheduler state:

	- Global state: sched_qs (run queues), sched_whichqs (bitmap
	  of non-empty run queues), sched_slpque (sleep queues).
	  NOTE: These may collectively move into a struct schedstate
	  at some point in the future.

	- Per-CPU state, struct schedstate_percpu: spc_runtime
	  (time process on this CPU started running), spc_flags
	  (replaces struct proc's p_schedflags), and
	  spc_curpriority (usrpri of processes on this CPU).

	- Every platform must now supply a struct cpu_info and
	  a curcpu() macro.  Simplify existing cpu_info declarations
	  where appropriate.

	- All references to per-CPU scheduler state now made through
	  curcpu().  NOTE: this will likely be adjusted in the future
	  after further changes to struct proc are made.

Tested on i386 and Alpha.  Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
2000-05-26 21:19:19 +00:00
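
From the field list above, the per-CPU state looks roughly like this
(types assumed):

    /* Per-CPU scheduler state, one instance per struct cpu_info. */
    struct schedstate_percpu {
            struct timeval spc_runtime;     /* when curproc started running */
            int            spc_flags;       /* replaces proc's p_schedflags */
            u_char         spc_curpriority; /* usrpri of curproc */
    };
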
thorpej
8964c35eca Introduce a new process state distinct from SRUN called SONPROC
which indicates that the process is actually running on a
processor.  Test against SONPROC as appropriate rather than
combinations of SRUN and curproc.  Update all context switch code
to properly set SONPROC when the process becomes the current
process on the CPU.
2000-05-26 00:36:42 +00:00
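
Tests that previously had to combine run state with identity can now
check a single field (sketch):

    /* Before: (p->p_stat == SRUN && p == curproc) */
    if (p->p_stat == SONPROC) {
            /* p is executing on some CPU right now */
    }
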
augustss
264f1d27c6 Get rid of register declarations. 2000-03-30 09:27:11 +00:00
simonb
bb1fc886cf endtsleep() is prototyped at the top of the file; delete the duplicate
declaration inside tsleep().
2000-03-28 22:04:46 +00:00
thorpej
68054a285a Track if a process has been through a round-robin cycle without yielding
the CPU, and mark that it should yield if that happens.

Based on a discussion with Artur Grabowski.
2000-03-23 20:37:58 +00:00
thorpej
b667a5a357 New callout mechanism with two major improvements over the old
timeout()/untimeout() API:
- Clients supply callout handle storage, thus eliminating problems of
  resource allocation.
- Insertion and removal of callouts is constant time, important as
  this facility is used quite a lot in the kernel.

The old timeout()/untimeout() API has been removed from the kernel.
2000-03-23 06:30:07 +00:00
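
Typical use of the new API -- the caller supplies the handle, so
arming a callout can't fail for lack of resources (sketch; func and
arg stand for the handler and its argument):

    struct callout c;

    callout_init(&c);                       /* caller-owned storage */
    callout_reset(&c, hz, func, arg);       /* fire in ~1 second */
    callout_stop(&c);                       /* constant-time removal */
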
fvdl
0b1963121a Add Kirk McKusick's soft updates code to the trunk. Not enabled by
default, as the copyright on the main file (ffs_softdep.c) is such
that it has been put into gnusrc. options SOFTDEP will pull this
in. This code also contains the trickle syncer.

Bump version number to 1.4O
1999-11-15 18:49:07 +00:00
ross
1ff9cacc0c Back out a small and unfinished piece of the old scheduler rototill. 1999-10-14 05:59:57 +00:00
thorpej
11cae42531 Centralize the declaration and clearing of `cold'. 1999-09-17 19:59:35 +00:00
thorpej
6266379c9d Be slightly more informative in the tsleep() diagnostics. 1999-09-15 21:54:57 +00:00
thorpej
1bd7bb28ea Implement wakeup_one(), which wakes up the highest priority process
first in line for the specified identifier.  For use in places where
you don't want a Thundering Herd.

While here, add an optimization to wakeup() suggested by Ross Harvey.
1999-07-26 23:00:58 +00:00
thorpej
ea8fb3e04a Turn the proclist lock into a read/write spinlock. Update proclist locking
calls to reflect this.  Also, block statclock rather than softclock in
the proclist locking functions, to address a problem reported on
current-users by Sean Doran.
1999-07-25 06:30:33 +00:00
thorpej
01a8cffe77 Add a read/write lock to the proclists and PID hash table. Use the
write lock when doing PID allocation, and during the process exit path.
Use a read lock everywhere else, including within schedcpu() (interrupt
context).  Note that holding the write lock implies blocking schedcpu()
from running (blocks softclock).

PID allocation is now MP-safe.

Note this actually fixes a bug on single processor systems that was probably
extremely difficult to tickle; it was possible that schedcpu() would run
off a bad pointer if the right clock interrupt happened to come in the
middle of a LIST_INSERT_HEAD() or LIST_REMOVE() to/from allproc.
1999-07-22 21:08:30 +00:00
thorpej
2715b812d1 Rework the process exit path, in preparation for making process exit
and PID allocation MP-safe.  A new process state is added: SDEAD.  This
state indicates that a process is dead, but not yet a zombie (has not
yet been processed by the process reaper).

SDEAD processes exist on both the zombproc list (via p_list) and deadproc
(via p_hash; the proc has been removed from the pidhash earlier in the exit
path).  When the reaper deals with a process, it changes the state to
SZOMB, so that wait4 can process it.

Add a P_ZOMBIE() macro, which treats a proc in SZOMB or SDEAD as a zombie,
and update various parts of the kernel to reflect the new state.
1999-07-22 18:13:36 +00:00
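
The natural reading of the P_ZOMBIE() macro described above (sketch):

    /* Dead-but-unreaped (SDEAD) counts as a zombie too. */
    #define P_ZOMBIE(p) \
            ((p)->p_stat == SZOMB || (p)->p_stat == SDEAD)
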
mrg
48c12bfeed revert previous. oops. 1999-04-21 02:37:07 +00:00
mrg
58540a2274 properly test the msgsz as "msgsz - len". from PR#7386 1999-04-21 02:31:49 +00:00
mrg
d2397ac5f7 completely remove Mach VM support. all that is left is the
header files, as UVM still uses (most of) these.
1999-03-24 05:50:49 +00:00
ross
e47e3c9f45 schedclk() -> schedclock(), for consistency with hardclock(), statclock(), ...
update comments for recent scheduler mods
1999-02-28 18:14:57 +00:00
ross
b4a33c4e60 Scheduler bug fixes and reorganization
* fix the ancient nice(1) bug, where nice +20 processes incorrectly
  steal 10-20% of the CPU (or even more, depending on load average)
* provide a new schedclk() mechanism with a new clock at schedhz, so high
  platform hz values don't cause nice +0 processes to look like they are
  niced
* change the algorithm slightly, and reorganize the code a lot
* fix percent-CPU calculation bugs, and eliminate some no-op code

=== nice bug === Correctly divide the scheduler queues between niced and
compute-bound processes. The current nice weight of two (sort of; see
`algorithm change' below) neatly divides the USRPRI queues in half; this
should have been used to clip p_estcpu, instead of UCHAR_MAX.  Besides
being the wrong amount, clipping an unsigned char to UCHAR_MAX is a no-op,
and it was done after decay_cpu(), which can only _reduce_ the value.  It
has to be kept <= NICE_WEIGHT * PRIO_MAX - PPQ, or processes can
scheduler-penalize themselves onto the same queue as nice +20 processes.
(Or even a higher one.)

=== New schedclk() mechanism === Some platforms should be cutting down
stathz before hitting the scheduler, since the scheduler algorithm only
works right in the vicinity of 64 Hz. Rather than prescale hz, then scale
back and forth by 4 every time p_estcpu is touched (each occurrence an
abstraction violation), use p_estcpu without scaling and require schedhz
to be generated directly at the right frequency. Use a default of stathz
(well, actually, profhz) / 4, so nothing changes unless a platform defines
schedhz and a new clock.  Define these for alpha, where hz==1024 and nice
was totally broken.

=== Algorithm change === The nice value used to be added to the
exponentially-decayed scheduler history value p_estcpu, in _addition_ to
being incorporated directly (with greater weight) into the priority
calculation.  At first glance, it appears to be a pointless increase of
1/8 the nice effect (pri = p_estcpu/4 + nice*2), but it's actually at
least 3x that, because it will ramp up linearly but be decayed only
exponentially, thus converging to an additional .75 nice for a load
average of one. I killed this; it makes the behavior hard to control,
almost impossible to analyze, and the effect (~~nothing at all for the
first second, then somewhat increased niceness after three seconds or
more, depending on load average) pointless.

=== Other bugs === hz -> profhz in the p_pctcpu = f(p_cpticks) calculation.
Collect scheduler functionality. Try to put each abstraction in just one
place.
1999-02-23 02:56:03 +00:00
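
The clipping fix from the `nice bug' section, sketched with the
constants named in the log (the macro name is illustrative):

    /* Clip p_estcpu where it grows (not after decay_cpu(), which only
     * reduces it), so a process can never penalize itself onto the
     * nice +20 queues or beyond. */
    #define ESTCPULIM(e)    min((e), NICE_WEIGHT * PRIO_MAX - PPQ)

    p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);
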
chs
61458d7dfa LOCKDEBUG enhancements for non-MP:
keep a list of locked locks, and use it to print where a lock was
locked when we either go to sleep with a lock held or try to free
a locked lock.
1998-11-04 06:19:55 +00:00
mycroft
fb526e055c Substantial signal handling changes:
* Increase the size of sigset_t to accommodate 128 signals -- adding new
  versions of sys_setprocmask(), sys_sigaction(), sys_sigpending() and
  sys_sigsuspend() to handle the changed arguments.
* Abstract the guts of sys_sigaltstack(), sys_setprocmask(), sys_sigaction(),
  sys_sigpending() and sys_sigsuspend() into separate functions, and call them
  from all the emulations rather than hard-coding everything.  (Avoids using
  the stackgap crap for these system calls.)
* Add a new flag (p_checksig) to indicate that a process may have signals
  pending and userret() needs to do the full (slow) check.
* Eliminate SAS_ALTSTACK; it's exactly the inverse of SS_DISABLE.
* Correct emulation bugs with restoring SS_ONSTACK.
* Make the signal mask in the sigcontext always use the emulated mask format.
* Store signals internally in sigaction structures, rather than maintaining a
  bunch of little sigsets for each SA_* bit.
* Keep track of where we put the signal trampoline, rather than figuring it out
  in *_sendsig().
* Issue a warning when a non-emulated sigaction bit is observed.
* Add missing emulated signals, and a native SIGPWR (currently not used).
* Implement the `not reset when caught' semantics for relevant signals.

Note: Only code touched by the i386 port has been modified.  Other ports and
emulations need to be updated.
1998-09-11 12:50:05 +00:00
jonathan
466e784ee1 defopt DDB. 1998-07-04 22:18:13 +00:00
thorpej
808867c7cf defopt KTRACE 1998-06-25 21:17:15 +00:00
fvdl
e5bc90f40c Merge with Lite2 + local changes 1998-03-01 02:20:01 +00:00