Commit Graph

126 Commits

Author SHA1 Message Date
ad
3c32363107 PR kern/36183 problem with ptrace and multithreaded processes
Fix the crashy test case that Thor provided.
2009-02-04 21:17:39 +00:00
wrstuden
04ca26c586 Tweak change to move SA support from userret() to lwp_userret().
1) Since we want to check for upcalls only once, take LW_SA_UPCALL
out of the while(l->l_flags & LW_USERRET) loop.

2) Since the goal is to keep SA code out of userret() (and especially
all the emulations that include userret() but will never do SA),
ALWAYS set LW_SA_UPCALL when we set SAVP_FLAG_NOUPCALLS. Drop the
test for it in lwp_userret() since it will never be set bare.

3) Adapt sa_upcall_userret() to clear LW_SA_UPCALL if it's no longer
needed. If we have gained upcalls since sa_yield(), we will deliver
them next time around.

Tested by skrll@.
2008-10-28 22:11:36 +00:00
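A schematic sketch of the reshaped path (not the literal NetBSD code; the
loop body and the exact placement of the check are elided/assumed):

    /*
     * Sketch only: the LW_SA_UPCALL check runs once, hoisted out of
     * the LW_USERRET work loop, and sa_upcall_userret() clears
     * LW_SA_UPCALL itself when no upcalls remain (point 3 above).
     */
    void
    lwp_userret(struct lwp *l)
    {
        while (l->l_flag & LW_USERRET) {
            /* signals, suspension, exit processing, ... */
        }
        if (l->l_flag & LW_SA_UPCALL)   /* once, not per iteration */
            sa_upcall_userret(l);
    }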
ad
a9327b33e2 Undo revivesa damage to userret(). 2008-10-21 11:51:23 +00:00
wrstuden
fc7511b00e Merge wrstuden-revivesa into HEAD. 2008-10-15 06:51:17 +00:00
rmind
337b081fed - Replace lwp_t::l_sched_info with union: pointer and timeslice.
- Change minimal time-quantum to ~20 ms.
- Thus remove unneeded pool in M2, and unused sched_lwp_exit().
- Do not increase l_slptime twice for SCHED_4BSD (regression fix).
2008-10-07 09:48:27 +00:00
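A minimal sketch of the new member (names assumed, not copied from the
tree):

    /* Sketch: one word serves both schedulers, so SCHED_M2 no longer
     * needs a separate pool for its per-LWP data. */
    struct lwp {
        /* ... */
        union {
            void   *info;       /* scheduler-private pointer */
            u_int   timeslice;  /* time-quantum, now >= ~20 ms */
        } l_sched;              /* replaces l_sched_info */
        /* ... */
    };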
rmind
4f91cff093 - Disallow setting of affinity for zombie LWPs.
- Fix a possible NULL dereference when an LWP is exiting.
- Fix the inheritance of affinity.
2008-07-14 01:19:37 +00:00
rmind
30dfdb2897 lwp_migrate: if LWP is still on the CPU (LP_RUNNING), it must be handled
like LSONPROC.  Should fix PR/38588.  OK by <ad>.
2008-07-02 19:53:12 +00:00
rmind
160268aca6 Remove proc_representative_lwp(), use a simple LIST_FIRST() instead.
OK by <ad>.
2008-07-02 19:49:58 +00:00
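The replacement pattern, as a minimal sketch (p_lock handling elided):

    /* Any LWP serves as well as a "representative" one, so take the
     * first entry on the process's LWP list. */
    struct lwp *l = LIST_FIRST(&p->p_lwps);
    KASSERT(l != NULL);    /* a live process has at least one LWP */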
ad
b7f5063255 lwp_lock_retry: return a pointer to the lock acquired. No functional change. 2008-06-16 09:45:20 +00:00
rmind
481ae1556f - Add general cpuset macros.
- Use kcpuset name for kernel-only functions.
- Use cpuid_t to specify CPU ID.
- Unify all cpuset users.

API is expected to be stable now.
2008-06-16 01:41:20 +00:00
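For the userland half, a minimal sketch against the documented cpuset(3)
and pthread(3) interfaces (error handling trimmed):

    #include <pthread.h>
    #include <sched.h>

    /* Pin the calling thread to CPU 0. */
    int
    pin_to_cpu0(void)
    {
        cpuset_t *cset;
        int error;

        cset = cpuset_create();
        if (cset == NULL)
            return -1;
        cpuset_zero(cset);
        cpuset_set(0, cset);            /* cpuid_t 0 */
        error = pthread_setaffinity_np(pthread_self(),
            cpuset_size(cset), cset);
        cpuset_destroy(cset);
        return error;
    }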
christos
f30b5785d5 Don't expose struct cpuset, share the l_affinity flag and only allocate it
if we need to. This is not a compatible change, but the syscalls are new
enough and they don't need to be versioned. Approved by rmind.
2008-06-15 20:32:57 +00:00
ad
ea8a92578d If vfork(), we want the LWP to run fast and on the same CPU
as its parent, so that it can reuse the VM context and cache
footprint on the local CPU.
2008-06-02 13:58:07 +00:00
ad
2feabc3836 PR kern/38812 race between lwp_exit_switchaway and exit1/coredump
Move the LWP RUNNING and TIMEINTR flags into the thread-private flag word.
2008-05-31 21:26:01 +00:00
rmind
29170d3854 Simplification of running LWP migration. Removes double-locking in
mi_switch(), migration for LSONPROC is now performed via idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), misc.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
ad
208df81d99 Move lwp_exit_switchaway() into kern_synch.c. Instead of always switching
to the idle loop, pick a new LWP from the run queue.
2008-05-27 17:51:17 +00:00
ad
93e0e98369 Take the mutex pointer and waiters count out of sleepq_t: the values can
be or are maintained elsewhere. Now a sleepq_t is just a TAILQ_HEAD.
2008-05-26 12:08:38 +00:00
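With those fields gone the type collapses to a queue head; a minimal
sketch:

    #include <sys/queue.h>

    /* A sleep queue is now just the list head; the lock and the
     * waiter accounting live elsewhere. */
    typedef TAILQ_HEAD(sleepq, lwp) sleepq_t;

    static inline void
    sleepq_init_sketch(sleepq_t *sq)    /* hypothetical helper */
    {
        TAILQ_INIT(sq);
    }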
ad
245f0726ac Reduce ifdefs due to MULTIPROCESSOR slightly. 2008-05-19 17:06:02 +00:00
ad
a4e0004be3 LOCKDEBUG: try to speed it up a bit by not using so much global state.
This will break the build briefly but will be followed by another commit
to fix that.
2008-05-06 18:40:57 +00:00
rmind
0fe9197c91 lwp_suspend: check for LW_* flags in l_flag, not l_stat. 2008-05-01 21:25:23 +00:00
martin
ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad
4c7ba24481 Add MI code to support in-kernel preemption. Preemption is deferred by
one of the following:

- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.

Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tuneable
via sysctl.
2008-04-28 15:36:01 +00:00
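A minimal sketch of the second deferral method, bracketing with
kpreempt_disable()/kpreempt_enable() (the function shown is
hypothetical):

    #include <sys/param.h>
    #include <sys/systm.h>

    /* Hypothetical example: keep per-CPU state consistent without
     * raising IPL or taking kernel_lock. */
    static void
    update_percpu_counter(uint64_t *ctr)
    {
        kpreempt_disable();    /* preemption deferred from here... */
        (*ctr)++;              /* safe: we stay on this CPU */
        kpreempt_enable();     /* ...to here; may yield if pending */
    }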
ad
1f8aca087d Disable preemption during the final stages of LWP exit. 2008-04-27 11:39:20 +00:00
ad
c925598aae lwp_startup: spl0 after pmap_activate, otherwise we could be preempted
without a pmap active.
2008-04-25 14:34:41 +00:00
ad
3cef738139 lwp_userret: don't drop p_lock while holding a scheduler lock. 2008-04-24 21:47:11 +00:00
ad
284c2b9aef Merge proc::p_mutex and proc::p_smutex into a single adaptive mutex, since
we no longer need to guard against access from hardware interrupt handlers.

Additionally, if cloning a process with CLONE_SIGHAND, arrange to have the
child process share the parent's lock so that signal state may be kept in
sync. Partially addresses PR kern/37437.
2008-04-24 18:39:20 +00:00
ad
6d70f903e6 Network protocol interrupts can now block on locks, so merge the globals
proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:

- Inspecting process state requires thread context, so signals can no longer
  be sent from a hardware interrupt handler. Signal activity must be
  deferred to a soft interrupt or kthread.

- As the proc state locking is simplified, it's now safe to take exit()
  and wait() out from under kernel_lock.

- The system spends less time at IPL_SCHED, and there is less lock activity.
2008-04-24 15:35:27 +00:00
rmind
5c0e3318e2 Adjust comments: spc_mutex is now always a per-CPU lock, L_INMEM -> LW_INMEM,
L_WSUSPEND -> LW_WSUSPEND, and remove white-spaces, while here.
2008-04-15 18:54:30 +00:00
ad
be04ac4896 Make rusage collection per-LWP and collate in the appropriate places.
Cloned threads need a little more work, but the locking needs to
be fixed first.
2008-03-27 19:06:51 +00:00
ad
25b10dbb15 lwp_ctl_alloc: initialize lcp_kaddr to vm_map_min(kernel_map), in order to
prevent uvm_map() from spuriously failing.
2008-03-23 16:39:34 +00:00
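The pattern, sketched (the exact flag arguments in the revision may
differ):

    /* uvm_map() reads *startp as a hint even when the VA is not
     * fixed; seeding it with vm_map_min(kernel_map) keeps the hint
     * inside the map so the call cannot fail spuriously. */
    lcp->lcp_kaddr = vm_map_min(kernel_map);
    error = uvm_map(kernel_map, &lcp->lcp_kaddr, PAGE_SIZE, NULL,
        UVM_UNKNOWN_OFFSET, PAGE_SIZE,
        UVM_MAPFLAG(UVM_PROT_RW, UVM_PROT_RW, UVM_INH_NONE,
            UVM_ADV_RANDOM, 0));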
ad
5214147407 LWP_CACHE_CREDS: instead of testing (l_cred != p_cred), use a per-LWP
flag bit to indicate a pending cred update. Avoids touching one item of
shared state in the syscall path.
2008-03-22 17:53:34 +00:00
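A before/after sketch of the syscall path (LPR_CRMOD as the flag name is
an assumption):

    /* Before: every syscall touches shared process state. */
    if (l->l_cred != l->l_proc->p_cred)
        lwp_update_creds(l);

    /* After: only a thread-private flag word is read unless an
     * update is actually pending (flag name assumed). */
    if (__predict_false(l->l_prflag & LPR_CRMOD))
        lwp_update_creds(l);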
ad
a9ca7a3734 Catch up with descriptor handling changes. See kern_descrip.c revision
1.173 for details.
2008-03-21 21:54:58 +00:00
ad
c42a4d1422 Add a boolean parameter to syncobj_t::sobj_unsleep. If true we want the
existing behaviour: the unsleep method unlocks and wakes the swapper if
needs be. If false, the caller is doing a batch operation and will take
care of that later. This is kind of ugly, but it's difficult for the caller
to know which lock to release in some situations.
2008-03-17 16:54:51 +00:00
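Sketch of the hook with the added parameter (member layout abbreviated
and assumed):

    /* Abbreviated layout; only the changed member matters here. */
    typedef struct syncobj {
        u_int   sobj_flag;
        /* cleanup: true  -> unlock and wake the swapper as before;
         *          false -> a batching caller does that later. */
        void    (*sobj_unsleep)(struct lwp *, bool cleanup);
    } syncobj_t;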
ad
32b8f98e7d lwp_ctl_exit: fix a use-after-free that caused the following:
_lwp_ctl()	<- works
execve()	<- tears down lwpctl state
_lwp_ctl()	<- fails erroneously
2008-03-07 18:06:04 +00:00
rmind
9850c0557d sys__sched_getparam and sys__sched_getaffinity: Do not assume that LWP
with LID=1 exists, use LIST_FIRST(&p->p_lwps) instead.
Fixes PR/37987 by <yamt>.

While here, adjust license.
2008-02-22 22:32:49 +00:00
yamt
1a5be26b7e wrap a long line. 2008-01-28 12:23:42 +00:00
yamt
21832401a4 lwp_free: add assertions. 2008-01-28 10:24:45 +00:00
rmind
5c71a4d49f Implementation of processor-sets, affinity and POSIX real-time extensions.
Add schedctl(8) - a program to control scheduling of processes and threads.

Notes:
- This is supported only by SCHED_M2;
- The LWP migration mechanism will be revisited;

Proposed on: <tech-kern>. Reviewed by: <ad>.
2008-01-15 03:37:10 +00:00
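The user-visible side can be exercised with the portable pthread(3)
calls these extensions back (policy and priority choices arbitrary):

    #include <pthread.h>
    #include <sched.h>

    /* Request round-robin real-time scheduling at the minimum RT
     * priority; needs privilege, and at this point a kernel running
     * SCHED_M2. */
    int
    make_self_rr(void)
    {
        struct sched_param sp;

        sp.sched_priority = sched_get_priority_min(SCHED_RR);
        return pthread_setschedparam(pthread_self(), SCHED_RR, &sp);
    }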
ad
79aa087ae2 - lwp_exit: if the LWP has a name, rename it to "(zombie)".
- lwp_free: don't leak l_name.
2008-01-12 18:06:40 +00:00
yamt
2e8a5bee68 lwp_ctl_alloc: fix error handling. 2008-01-07 11:41:29 +00:00
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad
ea3f10f7e0 Merge more changes from vmlocking2, mainly:
- Locking improvements.
- Use pool_cache for more items.
2007-12-26 16:01:34 +00:00
yamt
949e16d902 use binuptime for l_stime/l_rtime. 2007-12-22 01:14:53 +00:00
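A sketch of interval accounting with the bintime clock (helper name
hypothetical):

    #include <sys/time.h>

    /* Measure an interval with the monotonic, high-resolution
     * clock now used for l_stime/l_rtime. */
    static struct bintime
    measure_interval(void (*fn)(void))
    {
        struct bintime start, end;

        binuptime(&start);
        (*fn)();
        binuptime(&end);
        bintime_sub(&end, &start);    /* end -= start */
        return end;
    }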
yamt
555511ff96 include <sys/user.h>. 2007-12-13 05:25:03 +00:00
yamt
0c38201391 add ddb "whatis" command, inspired by the Solaris ::whatis dcmd. 2007-12-13 02:45:09 +00:00
ad
27f20ecf58 Soft interrupts can now take proclist_lock, so there is no need to
double-lock alllwp or allproc.
2007-12-03 20:26:24 +00:00
ad
9027344d32 For the slow path soft interrupts, arrange to have the priority of a
borrowed user LWP raised into the 'kernel RT' range if the LWP sleeps
(which is unlikely).
2007-12-03 17:14:59 +00:00
ad
e2aaefb859 - mi_switch: adjust so that we don't have to hold the old LWP locked across
context switch, since cpu_switchto() can be slow under certain conditions.
  From rmind@ with adjustments by me.
- lwpctl: allow LWPs to reregister instead of returning EINVAL. Just return
  their existing lwpctl user address.
2007-12-02 14:55:32 +00:00
skrll
1587bff9f0 Explicitly include <uvm/uvm_object.h> 2007-11-13 11:38:35 +00:00
yamt
a05590a3d6 lwp_ctl_alloc: fix a mutex_enter/exit mismatch. 2007-11-13 08:38:06 +00:00
ad
b668a9a05f Add _lwp_ctl() system call: provides a bidirectional, per-LWP communication
area between processes and the kernel.
2007-11-12 23:11:58 +00:00
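A minimal userland sketch against the _lwp_ctl(2) interface (error
handling trimmed):

    #include <sys/lwpctl.h>
    #include <lwp.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct lwpctl *lc;

        /* Map the shared page; the kernel keeps lc_curcpu current
         * for this LWP from then on, with no further syscalls. */
        if (_lwp_ctl(LWPCTL_FEATURE_CURCPU, &lc) != 0)
            return 1;
        printf("LWP last ran on CPU %d\n", lc->lc_curcpu);
        return 0;
    }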