Commit Graph

138 Commits

Author SHA1 Message Date
fvdl
404fa205d1 Fix (bogus) uninitialized variable warning. 2003-10-26 20:55:57 +00:00
itojun
311078a035 Fix the "truncated output from pty" problem. Fix by enami:
http://mail-index.netbsd.org/tech-kern/2003/09/06/0002.html
2003-09-08 11:14:18 +00:00
agc
aad01611e7 Move UCB-licensed code from 4-clause to 3-clause licence.
Patches provided by Joel Baker in PR 22364, verified by myself.
2003-08-07 16:26:28 +00:00
matt
79fccc7527 Improve _lwp_wakeup so that when it wakes a thread, the target thread sees
ltsleep as having been interrupted and thus will not treat it as
a spurious wakeup.  (This makes syscalls cancellable for libpthread.)
2003-07-28 23:35:20 +00:00
matt
0c7a583f3a Add support for storing the priority mask in sched_whichqs in MSB order
(enabled by defining __HAVE_BIGENDIAN_BITOPS in <machine/types.h>).  The
default is still LSB ordering.  This change will allow the powerpc MD
implementations of setrunqueue/remrunqueue to be nuked.
2003-07-18 01:02:31 +00:00
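
A minimal sketch of what the two bit orderings described above mean when picking the highest-priority non-empty run queue; the helper name and the 32-queue assumption are illustrative, not the committed kernel code:

    #include <strings.h>        /* ffs() */

    #define RUNQUE_NQS  32      /* one bit per run queue in sched_whichqs */

    static inline int
    highest_runqueue(unsigned int whichqs)
    {
    #ifdef __HAVE_BIGENDIAN_BITOPS
        /* MSB ordering: queue 0 lives in the most significant bit, which
         * maps naturally onto count-leading-zeros instructions (powerpc). */
        int i;

        for (i = 0; i < RUNQUE_NQS; i++)
            if (whichqs & (0x80000000U >> i))
                return i;
        return -1;
    #else
        /* Default LSB ordering: ffs() returns 1 for the lowest set bit. */
        return whichqs != 0 ? ffs((int)whichqs) - 1 : -1;
    #endif
    }
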
fvdl
4bd1a8dcf8 Changes from Stephan Uphoff to patch problems with LWPs blocking when they
shouldn't, and MP.
2003-07-17 18:16:58 +00:00
fvdl
d5aece61d6 Back out the lwp/ktrace changes. They contained a lot of collateral damage,
and need to be examined and discussed more.
2003-06-29 22:28:00 +00:00
darrenr
960df3c8d1 Pass lwp pointers throughout the kernel, as required, so that the lwpid can
be inserted into ktrace records.  The general change has been to replace
"struct proc *" with "struct lwp *" in various function prototypes, pass
the lwp through and use l_proc to get the process pointer when needed.

Bump the kernel rev up to 1.6V
2003-06-28 14:20:43 +00:00
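
The shape of that change, as a hedged kernel-style sketch (the function is hypothetical; only the struct lwp argument and the l_proc/l_lid pattern are the point):

    #include <sys/proc.h>
    #include <sys/lwp.h>

    /* before: int example_op(struct proc *p, int flags); */
    int example_op(struct lwp *l, int flags);

    int
    example_op(struct lwp *l, int flags)
    {
        struct proc *p = l->l_proc;   /* recover the process when needed */

        /* l->l_lid is the lwp id that can now go into ktrace records */
        return (p != NULL && flags >= 0) ? 0 : -1;
    }
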
nathanw
08cf80dc5c Whitespace police. 2003-06-26 02:09:27 +00:00
nathanw
169a6757eb For now, disable voluntary mid-operation preempt() for SA processes;
it doesn't interact well with SA's idea of what's running.
2003-06-26 02:08:19 +00:00
simonb
8ba8e8a81b Sprinkle a little white-space. 2003-05-20 13:48:08 +00:00
matt
d97fd7e0ed In setrunnable, give more information in the panic message so we can
figure out WTF went wrong.
2003-05-08 02:04:11 +00:00
pk
e148067544 ltsleep(): deal with PNOEXITERR after re-taking the interlock (if necessary). 2003-02-04 20:15:59 +00:00
yamt
05628fc8d1 constify wait channels of ltsleep/wakeup. they are never dereferenced. 2003-02-04 13:41:48 +00:00
yamt
41ad61ee76 make KSTACK_CHECK_* compile after sa merge. 2003-01-22 12:52:14 +00:00
christos
d00d441811 step 4: don't dereference l if you are going to test whether it is NULL a couple
of lines below.
2003-01-21 00:02:07 +00:00
thorpej
e0d8d366df Merge the nathanw_sa branch. 2003-01-18 10:06:22 +00:00
thorpej
b9cdec841a Pass the process priority we want to compare to resched_proc(). Restores
resetpriority() behavior.  Thanks to Enami Tsugutomo for pointing out my
mistake.
2003-01-15 07:12:20 +00:00
pk
da40c6dd43 schedcpu(): after updating the process CPU tick counters, we no longer need
to run at splstatclock(); continue at splsched().
2003-01-12 01:48:56 +00:00
thorpej
85c31b11c3 * Move the resched check from setrunnable() and resetpriority() to
a new inline, resched_proc().
* When performing the resched check, check the priority against the
  current priority on the CPU the process last ran on, not always the
  current CPU.
2002-12-29 17:40:26 +00:00
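
A hedged sketch of the resched_proc() idea above (field names follow the usual NetBSD scheduler state, but this is illustrative rather than the exact committed code):

    static __inline void
    resched_proc(struct proc *p, u_char pri)
    {
        struct cpu_info *ci;

        /* Compare against the CPU the process last ran on (p->p_cpu),
         * not necessarily the CPU executing this code. */
        ci = (p->p_cpu != NULL) ? p->p_cpu : curcpu();
        if (pri < ci->ci_schedstate.spc_curpriority)
            need_resched(ci);
    }
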
thorpej
63ccfc36f6 Add a comment about affinity to awaken(). 2002-12-29 02:08:39 +00:00
gmcgarry
f8f2c11fe0 Re-add yield(). Only used by compat code at the moment. 2002-12-21 23:52:05 +00:00
gmcgarry
a5424a9df1 Remove yield() until the scheduler supports the sched_yield(2) system
call.
2002-12-20 05:43:09 +00:00
nisimura
da22f42379 Add some informative comments about setrunqueue and remrunqueue. 2002-11-03 13:59:12 +00:00
gmcgarry
395d77f3dc Back out __HAVE_CHOOSEPROC stuff. 2002-09-29 21:11:36 +00:00
gmcgarry
6a6ea308fd Separate the scheduler from the context switching code.
This is done by adding an extra argument to mi_switch() and
cpu_switch() which specifies the new process.  If NULL is passed,
then the new function chooseproc() is invoked to wait for a new
process to appear on the run queue.

Also provides an opportunity for optimisations if "switching to self".

Also added are C versions of the setrunqueue() and remrunqueue()
low-level primitives if __HAVE_MD_RUNQUEUE is not defined by MD code.

All these changes are contingent upon the __HAVE_CHOOSEPROC flag being
defined by MD code to indicate that cpu_switch() supports the changes.
2002-09-22 05:36:48 +00:00
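
A control-flow-only sketch of the arrangement described above (the real change also covers accounting, spl levels and the scheduler lock, and was contingent on __HAVE_CHOOSEPROC):

    void
    mi_switch(struct proc *p, struct proc *newp)
    {
        if (newp == NULL) {
            /* No explicit target: chooseproc() waits for a runnable
             * process to appear on the run queue and dequeues it. */
            newp = chooseproc();
        }

        if (newp == p)
            return;             /* "switching to self" optimisation */

        cpu_switch(p, newp);    /* MD context switch to the chosen process */
    }
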
matt
48bbf5f234 Use the queue macros from <sys/queue.h> instead of referring to the queue
members directly.  Use *_FOREACH whenever possible.
2002-09-04 01:32:31 +00:00
briggs
487de1e6b9 Only include sys/pmc.h if PERFCTRS is defined. 2002-08-07 11:13:40 +00:00
briggs
0b956d0b8b Implement pmc(9) -- An interface to hardware performance monitoring
counters.  These counters do not exist on all CPUs, but where they do
exist, they can be used for counting events such as dcache misses that
would otherwise be difficult or impossible to instrument by code
inspection or hardware simulation.

pmc(9) is meant to be a general interface.  Initially, the Intel XScale
counters are the only ones supported.
2002-08-07 05:14:47 +00:00
yamt
d96bff0e27 add KSTACK_CHECK_MAGIC. discussed on tech-kern. 2002-07-02 20:27:44 +00:00
thorpej
e839580821 Move kernel_lock manipulation into functions so that they will
show up in a profile.
2002-05-21 01:38:26 +00:00
kleink
e9d7166203 asm -> __asm. 2001-11-30 16:21:16 +00:00
lukem
adc783d537 add RCSIDs 2001-11-12 15:25:01 +00:00
chs
e7d9abce3e in ltsleep(), assert that the interlock is held (if one is given). 2001-09-25 01:38:38 +00:00
chs
187cadcb77 don't define bpendtsleep in profiling kernels since it confuses gprof. 2001-05-28 22:20:03 +00:00
jdolecek
27706951af Slightly improve the comment for ltsleep(); the previous formulation might
be understood incorrectly (at least, it confused me at first, before
I looked at the actual code).
2001-04-27 08:00:03 +00:00
thorpej
a18eaa4cb6 Make sure there is a curproc in ltsleep(). 2001-04-20 17:58:49 +00:00
thorpej
7200d34a76 Whenever ps_sigcheck is set to true, signotify() the process, and
wrap this all up in a CHECKSIGS() macro.  Also, in psignal1(),
signotify() SRUN and SIDL processes if __HAVE_AST_PERPROC is defined.

Per discussion w/ mycroft.
2001-01-14 22:31:58 +00:00
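
A hedged guess at the shape of such a macro, purely to illustrate the description above (the real definition may differ):

    /* If a signal check has been requested for p, notify it. */
    #define CHECKSIGS(p)                            \
    do {                                            \
        if ((p)->p_sigctx.ps_sigcheck)              \
            signotify(p);                           \
    } while (/* CONSTCOND */ 0)
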
sommerfeld
aace86f946 MULTIPROCESSOR: The two calls to psignal() inside mi_switch() are
inside the scheduler lock perimeter and should be sched_psignal() instead.
2001-01-01 16:02:51 +00:00
jdolecek
e9e91a0fb5 split off thread-specific stuff from struct sigacts into struct sigctx, leaving
only the signal handler array sharable between threads
move other random signal stuff from struct proc to struct sigctx

This addresses kern/10981 by Matthew Orgass.
2000-12-22 22:58:52 +00:00
jdolecek
ab8c5177be use the SIGACTION() macro to get at the appropriate sigaction
structure
2000-11-12 18:17:56 +00:00
enami
6cf8248614 Also stop runnable but swapped-out user processes in suspendsched(). 2000-09-23 01:00:35 +00:00
enami
48b7bc7f16 The struct prochd isn't a proc. Start scanning from prochd.ph_link instead
of &prochd.
2000-09-15 06:36:25 +00:00
thorpej
03810b147f Make sure to lock the proclist when we're traversing allproc. 2000-09-14 19:13:29 +00:00
bouyer
ca5824ec3b Implement suspendsched() by putting all sleeping and runnable processes
in SSTOP state, except P_SYSTEM and curproc processes. We have no way to
find the original state of the process, so we can't restart scheduling;
this can only be used at shutdown time.

XXX suspendsched() should also deal with processes running on other CPUs.
I don't know how to do that, and as long as we have a kernel big lock,
this shouldn't be a problem.
2000-09-05 16:27:51 +00:00
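
A simplified sketch of the behaviour described above (locking is reduced to the proclist lock, and the run-queue manipulation done by the real code is omitted):

    void
    suspendsched(void)
    {
        struct proc *p;

        proclist_lock_read();
        LIST_FOREACH(p, &allproc, p_list) {
            if ((p->p_flag & P_SYSTEM) != 0 || p == curproc)
                continue;
            if (p->p_stat == SSLEEP || p->p_stat == SRUN) {
                /* One-way: the previous state is not recorded, so
                 * scheduling cannot be resumed afterwards. */
                p->p_stat = SSTOP;
            }
        }
        proclist_unlock_read();
    }
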
bouyer
aacf1f7a6a Back out the suspendsched()/resumesched() thing, per request of Jason Thorpe &
Bill Sommerfeld. suspendsched() will be implemented in a different way.
2000-09-05 16:20:27 +00:00
bouyer
6720d310ef wakeup()->sched_wakeup() 2000-09-01 17:14:04 +00:00
bouyer
629150f864 Add the sched_suspend/sched_resume functions, as discussed on tech-kern,
with the following modifications to the initial patch:
- rename SHOLD and P_HOST to SSUSPEND and P_SUSPEND to avoid confusion with
  PHOLD()
- don't deal with SSUSPEND/P_SUSPEND in fork1(); if we get here while the
  scheduler is suspended we're forking proc0, which can't have P_SUSPEND set.

sched_suspend() suspends the scheduling of user processes by removing all
processes from the run queues and changing their state from SRUN to
SSUSPEND. It also marks all user processes but curproc P_SUSPEND.
When a process has to be put in SRUN and is marked P_SUSPEND, it is placed in
the SSUSPEND state instead.
sched_resume() places all SSUSPEND processes back in SRUN and clears the
P_SUSPEND flag.
2000-08-31 14:36:19 +00:00
sommerfeld
bdc30aed03 Since the spinlock count is per-cpu, we don't need atomic operations
to update it, so don't bother with <machine/atomic.h>

Flush kernel_lock_release_all() and kernel_lock_acquire_count() (which
didn't do spinlock accounting correctly), and replace them with
spinlock_release_all() and spinlock_acquire_count().
2000-08-26 19:26:43 +00:00
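
The reasoning above, as a sketch (the counter field name is illustrative): each CPU only ever updates its own count, so a plain increment suffices and no <machine/atomic.h> primitive is needed.

    static __inline void
    spinlock_count_inc(struct cpu_info *ci)
    {
        ci->ci_spin_locks++;    /* touched only by this CPU: no atomics */
    }

    static __inline void
    spinlock_count_dec(struct cpu_info *ci)
    {
        ci->ci_spin_locks--;
    }
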
sommerfeld
340951f9d1 On second thought... pass cpu_info * to roundrobin() explicitly. 2000-08-26 04:01:16 +00:00