Commit Graph

49 Commits

Author SHA1 Message Date
skrll
1c518780b3 Follow the correct locking protocol when creating an LWP and the process
is stopping.

Problem found by running the gdb testsuite (gdb didn't have pthreads
support)

Thanks to rmind for help with this.
2010-06-06 07:46:17 +00:00
rmind
13f624ca0f Remove lwp_uc_pool, replace it with kmem(9), plus add some consistency.
As discussed a while ago with ad@.
2010-04-23 19:18:09 +00:00
rmind
b9a294cf04 - Move inittimeleft() and gettimeleft() to subr_time.c, where they belong.
- Move abstimeout2timo() there too and export it.  Use it in lwp_park().
2009-11-01 21:46:09 +00:00
rmind
30d0b02e57 Make lwp_park_sobj and lwp_park_tab static.
Wrap long lines while here.
2009-10-22 13:12:47 +00:00
rmind
40cf6f3659 Remove uarea swap-out functionality:
- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side note, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans of the LWP list, and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code.  Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.

Tested on x86, mips, acorn32 (thanks to <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).

Discussed on <tech-kern>, reviewed by <ad>.
2009-10-21 21:11:57 +00:00
ad
ead83a47c8 _lwp_setprivate: provide the value to MD code if a hook is present.
This will be used to support TLS. The MD method must match the ELF TLS spec
for that CPU architecture (if there is a spec).

At this time it is only implemented for i386, where it means setting the
per-thread base address for %gs. Please implement this for your platform!
2009-03-29 09:24:52 +00:00
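For illustration, a minimal userland sketch of how a thread library might install its per-thread pointer through this interface; the _lwp_setprivate() prototype is from <lwp.h>, while the TLS block layout here is only a stand-in for whatever the ELF TLS spec dictates:

    #include <lwp.h>
    #include <stdlib.h>

    /* Illustrative thread control block; the real layout is fixed by
     * the ELF TLS spec for the architecture (variant II on i386,
     * where the base address lands in %gs). */
    struct tcb {
            struct tcb *tcb_self;   /* first word points back at itself */
            void *tcb_dtv;          /* dynamic thread vector (placeholder) */
    };

    static void
    install_tls(void)
    {
            struct tcb *tcb = calloc(1, sizeof(*tcb));

            tcb->tcb_self = tcb;
            /* Hand the address to the kernel; with the MD hook present
             * it also becomes the CPU's thread pointer. */
            _lwp_setprivate(tcb);
    }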
christos
461a86f9bd merge christos-time_t 2009-01-11 02:45:45 +00:00
ad
ad507e54f8 _lwp_kill: set SI_LWP in the siginfo, not SI_USER. 2008-10-16 08:47:07 +00:00
wrstuden
fc7511b00e Merge wrstuden-revivesa into HEAD. 2008-10-15 06:51:17 +00:00
ad
93e0e98369 Take the mutex pointer and waiters count out of sleepq_t: the values
either are, or can be, maintained elsewhere.  Now a sleepq_t is just a TAILQ_HEAD.
2008-05-26 12:08:38 +00:00
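In <sys/queue.h> terms the type now reduces to roughly the following (a sketch, not the verbatim header):

    #include <sys/queue.h>

    /* A sleep queue is now just a tail queue of LWPs; the interlock
     * and the count of waiters live with the owning wait object. */
    typedef TAILQ_HEAD(sleepq, lwp) sleepq_t;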
martin
ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad
284c2b9aef Merge proc::p_mutex and proc::p_smutex into a single adaptive mutex, since
we no longer need to guard against access from hardware interrupt handlers.

Additionally, if cloning a process with CLONE_SIGHAND, arrange to have the
child process share the parent's lock so that signal state may be kept in
sync. Partially addresses PR kern/37437.
2008-04-24 18:39:20 +00:00
ad
6d70f903e6 Network protocol interrupts can now block on locks, so merge the globals
proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:

- Inspecting process state requires thread context, so signals can no longer
  be sent from a hardware interrupt handler. Signal activity must be
  deferred to a soft interrupt or kthread.

- As the proc state locking is simplified, it's now safe to take exit()
  and wait() out from under kernel_lock.

- The system spends less time at IPL_SCHED, and there is less lock activity.
2008-04-24 15:35:27 +00:00
ad
c42a4d1422 Add a boolean parameter to syncobj_t::sobj_unsleep. If true we want the
existing behaviour: the unsleep method unlocks and wakes the swapper if
needs be. If false, the caller is doing a batch operation and will take
care of that later. This is kind of ugly, but it's difficult for the caller
to know which lock to release in some situations.
2008-03-17 16:54:51 +00:00
ad
7b0b5fdc9d +2008 for the copyright 2008-03-12 11:05:01 +00:00
ad
727b89a296 Add a preemption counter to lwpctl_t, to allow user threads to detect that
they have been preempted.
2008-03-12 11:00:43 +00:00
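A hedged sketch of what this enables, assuming the counter is the lc_pctr field of the shared lwpctl area (see the _lwp_ctl() entry further down for how that area is obtained): sample the counter around a region of code and compare.

    #include <sys/lwpctl.h>

    /* Returns nonzero if fn() ran without the calling LWP being
     * preempted; 'ctl' is the LWP's shared lwpctl area. */
    static int
    ran_unpreempted(lwpctl_t *ctl, void (*fn)(void *), void *arg)
    {
            int pctr = ctl->lc_pctr;

            fn(arg);
            /* The kernel bumps lc_pctr whenever the LWP is preempted. */
            return pctr == ctl->lc_pctr;
    }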
ad
60c1b8843d Make schedstate_percpu::spc_lwplock an externally allocated item. Remove
the hacks in sparc/cpu.c to reinitialize it. This should be in its own
cache line but that's another change.
2008-02-14 14:26:57 +00:00
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
dsl
7e2790cf6f Convert all the system call entry points from:
    int foo(struct lwp *l, void *v, register_t *retval)
to:
    int foo(struct lwp *l, const struct foo_args *uap, register_t *retval)
Fix up compat code to not write into 'uap' and (in some cases) to actually
pass a correctly formatted 'uap' structure with the right name to the
next routine.
A few 'compat' routines that just call standard ones have been deleted.
All the 'compat' code compiles (along with the kernels required to test
build it).
98% done by automated scripts.
2007-12-20 23:02:38 +00:00
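For illustration, a converted entry point now looks roughly like this; sys__lwp_suspend is used as an example with its body elided, and SCARG is NetBSD's usual accessor for syscall arguments:

    #include <sys/param.h>
    #include <sys/lwp.h>
    #include <sys/syscallargs.h>

    int
    sys__lwp_suspend(struct lwp *l, const struct sys__lwp_suspend_args *uap,
        register_t *retval)
    {
            /* {
                    syscallarg(lwpid_t) target;
            } */
            lwpid_t target = SCARG(uap, target);

            /* ... look up and suspend 'target'; 'uap' is now typed and
             * const, so compat code can no longer scribble on it ... */
            return 0;
    }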
ad
208085366c sys__lwp_create: set the correct lock when the LWP is created suspended. 2007-12-02 15:49:38 +00:00
ad
b668a9a05f Add _lwp_ctl() system call: provides a bidirectional, per-LWP communication
area between processes and the kernel.
2007-11-12 23:11:58 +00:00
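A minimal userland sketch of mapping the communication area; the prototype and feature flags follow <sys/lwpctl.h>, though treat the exact names as assumptions here:

    #include <lwp.h>
    #include <sys/lwpctl.h>
    #include <err.h>

    static lwpctl_t *
    map_lwpctl(void)
    {
            lwpctl_t *ctl;

            /* Ask for the current-CPU and preemption-counter fields. */
            if (_lwp_ctl(LWPCTL_FEATURE_CURCPU | LWPCTL_FEATURE_PCTR,
                &ctl) != 0)
                    err(1, "_lwp_ctl");
            /* The kernel keeps *ctl up to date as this LWP migrates
             * between CPUs or is preempted. */
            return ctl;
    }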
ad
93070c013f Fix error in previous. 2007-11-07 00:56:27 +00:00
ad
f1c6cfe4f6 Add _lwp_setname, _lwp_getname. Proposed on tech-kern. 2007-11-07 00:37:21 +00:00
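Usage is straightforward; a small sketch using the documented prototypes, where _lwp_self() yields the calling LWP's id:

    #include <lwp.h>
    #include <stdio.h>

    int
    main(void)
    {
            char name[32];

            /* Name the calling LWP, then read the name back. */
            _lwp_setname(_lwp_self(), "worker-0");
            _lwp_getname(_lwp_self(), name, sizeof(name));
            printf("%s\n", name);
            return 0;
    }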
ad
d831186d55 Merge scheduler changes from the vmlocking branch. All discussed on
tech-kern:

- Invert priority space so that zero is the lowest priority. Rearrange
  number and type of priority levels into bands. Add new bands like
  'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
  sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
2007-11-06 00:42:39 +00:00
ad
513227e941 - Fix sleepq_block() to return EINTR if the LWP is cancelled. Pointed out
by yamt@.

- Introduce SOBJ_SLEEPQ_LIFO, and use for LWPs sleeping via _lwp_park.
  libpthread enqueues most waiters in LIFO order to try and wake LWPs that
  ran recently, since their working set is more likely to be in cache.
  Matching the order of insertion reduces the time spent searching queues
  in the kernel.

- Do not boost the priority of LWPs sleeping in _lwp_park, just let them
  sleep at their user priority level. LWPs waiting for some I/O event in
  the kernel still wait with kernel priority and get woken more quickly.
  This needs more evaluation and is to be revisited, but the effect on a
  variety of benchmarks is positive.

- When waking LWPs, do not send an IPI to remote CPUs or arrange for the
  current LWP to be preempted unless (a) the thread being awoken has kernel
  priority and has higher priority than the currently running thread or (b)
  the remote CPU is idle.
2007-09-06 23:58:56 +00:00
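Mechanically, the LIFO change is just the insertion point on the queue; a sketch in <sys/queue.h> terms, with the linkage field name assumed:

    #include <sys/queue.h>

    /*
     * FIFO: enqueue at the tail, so the longest sleeper wakes first.
     * LIFO (SOBJ_SLEEPQ_LIFO): enqueue at the head, so the LWP that
     * ran most recently, with the warmest cache, wakes first and is
     * also found soonest when the queue is searched.
     */
    #define SLEEPQ_INSERT(sq, l, lifo)                              \
    do {                                                            \
            if (lifo)                                               \
                    TAILQ_INSERT_HEAD((sq), (l), l_sleepchain);     \
            else                                                    \
                    TAILQ_INSERT_TAIL((sq), (l), l_sleepchain);     \
    } while (0)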
rmind
d2142b3188 sys__lwp_suspend: Handle the possible problem where the target LWP might
exit via lwp_exit() before suspending.  In such a case, the LWP might already be
freed after cv_wait_sig(), so checking the list of LWPs via lwp_find() is necessary.

Possible problem caught by Andrew Doran.
2007-08-15 02:50:40 +00:00
ad
830ab6bb3c - Fix a bug with _lwp_park() where if the computed wakeup time was under
1 microsecond into the future, the thread could enter an untimed sleep.
- Change the signature of _lwp_park() to accept an lwpid_t and second
  hint pointer, but do so in a way that remains compatible with older
  pthread libraries. This can be used to wake another thread before the
  calling thread goes to sleep, saving at least one syscall + involuntary
  context switch. This turns out to be a fairly large win on the condvar
  benchmarks that I have tried.
- Mark some more syscalls MP safe.
2007-08-07 19:00:42 +00:00
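A hedged sketch of the pattern this enables, using the four-argument signature described above (the argument order is an assumption based on that description; later revisions of the syscall added further parameters):

    #include <lwp.h>
    #include <time.h>

    /*
     * Wake 'peer' and put the calling LWP to sleep in one syscall,
     * replacing an _lwp_unpark() + _lwp_park() pair and the context
     * switch in between.
     */
    static int
    unpark_then_park(lwpid_t peer, const void *peer_hint,
        const void *my_hint, const struct timespec *ts)
    {
            /* A NULL 'ts' means sleep until unparked. */
            return _lwp_park(ts, peer, my_hint, peer_hint);
    }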
rmind
00cdc8df70 sys__lwp_suspend: implement waiting for target LWP status changes (or
process exiting). Removes XXXLWP.

Reviewed by <ad> some time ago.
2007-08-02 01:48:44 +00:00
ad
d028f9dec2 KNF 2007-08-01 23:24:26 +00:00
rmind
9fe9a06d4b Fix a problem in sys__lwp_create() where an invalid new_lwp would
leak an LWP and memory.
Reviewed by <ad>.
2007-07-11 00:17:23 +00:00
dsl
7ba299c5d4 Split sys__lwp_park() so that the compat/netbsd32 code can copyin and convert
its timeout then call the standard function.
2007-06-03 09:50:12 +00:00
yamt
f03010953f merge yamt-idlelwp branch. asked by core@. some ports still need work.
from doc/BRANCHES:

	idle lwp, and some changes depending on it.

	1. separate context switching and thread scheduling.
	   (cf. gmcgarry_ctxsw)
	2. implement idle lwp.
	3. clean up related MD/MI interfaces.
	4. make scheduler(s) modular.
2007-05-17 14:51:11 +00:00
rmind
7b9af0160d Handle newlwp() error case. Currently, newlwp() cannot fail, but this
will likely change in the future.
2007-03-24 16:43:56 +00:00
ad
fed1793605 Improvements to lwp_wait1(), for PR kern/35932:
- Better detect simple cycles of threads calling _lwp_wait and return
  EDEADLK. Does not handle deeper cycles like t1 -> t2 -> t3 -> t1.
- If there are multiple threads in _lwp_wait, then make sure that
  targeted waits take precedence over waits for any LWP to exit.
- When checking for deadlock, also count the number of zombies currently
  in the process as potentially reapable. Whenever a zombie is murdered,
  kick all waiters to make them check again for deadlock.
- Add more comments.

Also, while here:

- LOCK_ASSERT -> KASSERT in some places
- lwp_free: change boolean arguments to type 'bool'.
- proc_free: let lwp_free spin waiting for the last LWP to exit, there's
  no reason to do it here.
2007-03-21 18:25:59 +00:00
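A hedged sketch of the simple-cycle test; the struct lwp field names (l_lid, l_waitingfor) are assumptions modeled on the description above, and deeper cycles such as t1 -> t2 -> t3 -> t1 would pass this check, as the entry notes:

    #include <sys/lwp.h>

    /* True if 'waiter' waiting on 'target' would close a two-LWP
     * cycle: the target is already in _lwp_wait() on the waiter. */
    static int
    wait_would_deadlock(struct lwp *waiter, struct lwp *target)
    {
            return target->l_waitingfor == waiter->l_lid;
    }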
ad
e0fd341348 Changes to LWP wakeup:
- Don't bother sorting the sleep queues, since user space controls the
  order of removal.
- Change setrunnable(t) to lwp_unsleep(t). No functional change from the
  perspective of user applications.
- Minor cosmetic changes.
2007-03-20 23:25:17 +00:00
ad
06aeb1d344 - Remove the LWP counters. The race between park/unpark rarely occurs
so it's not worth counting.

- lwp_wakeup: set LW_UNPARKED on the target. Ensures that _lwp_park will
  always be awoken even if another system call eats the wakeup, e.g. as a
  result of an intervening signal. To deal with this correctly for other
  system calls will require a different approach.

- _lwp_unpark, _lwp_unpark_all: use setrunnable if the LWP is not parked
  on the same sync queue: (1) simplifies the code a bit, as there is no point
  in doing anything special for this case; (2) makes it possible for p_smutex
  to be replaced by p_mutex and (3) restores the guarantee that the 'hint'
  argument really is just a hint.
2007-03-14 23:58:24 +00:00
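A sketch of the race that LW_UNPARKED closes; that _lwp_park() then fails with EALREADY is an assumption based on the documented interface:

    /*
     * LWP A (about to park)         LWP B (waker)
     * ---------------------         -------------
     * decides to sleep
     *                               _lwp_unpark(A): A is not on the
     *                               sleep queue yet, so the kernel
     *                               sets LW_UNPARKED on A instead
     * enters _lwp_park(): sees
     * LW_UNPARKED, clears it and
     * returns EALREADY at once;
     * the wakeup is not lost even
     * if another syscall or an
     * intervening signal ate it
     */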
yamt
b84c74b2d4 sys__lwp_park: whitespace. no functional change. 2007-03-14 23:07:27 +00:00
yamt
b1b0d0db04 sys__lwp_park: don't restart on signals. PR/35969 from Andrew Doran. 2007-03-14 23:00:32 +00:00
yamt
27515959ec fix typos in comments. 2007-03-09 05:00:26 +00:00
ad
61a2eec6c3 _lwp_wakeup: set the cancellation pending if the LWP is not sleeping. 2007-03-02 21:06:27 +00:00
ad
4cbc498383 sys__lwp_park: explicitly drop the kernel lock, for the benefit of compat32.
XXX The stack gap stuff is not MP or MT safe and needs to go away.
2007-03-02 16:14:37 +00:00
ad
8a9f592723 sys__lwp_park: on a !MULTIPROCESSOR kernel the LWP is already locked. 2007-03-02 16:09:53 +00:00
ad
0bffc80584 Fix a couple of races with LWP park/unpark. 2007-03-01 14:55:06 +00:00
yamt
e781af39bd implement priority inheritance. 2007-02-26 09:20:52 +00:00
thorpej
dd962f8680 Pick up some additional files that were missed before due to conflicts
with the newlock2 merge:

Replace the Mach-derived boolean_t type with the C99 bool type.  A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 23:48:10 +00:00
cube
632ece3eaf Introduce a new member to struct emul, e_startlwp, to be used by
sys__lwp_create.  It allows that syscall to be used under COMPAT_NETBSD32.

The libpthread regression tests now pass on amd64 and sparc64.
2007-02-19 15:10:02 +00:00
pavel
934634a18c Change the process/lwp flags seen by userland via sysctl back to the
P_*/L_* naming convention, and rename the in-kernel flags to avoid
conflict. (P_ -> PK_, L_ -> LW_ ). Add back the (now unused) LSDEAD
constant.

Restores source compatibility with pre-newlock2 tools like ps or top.

Reviewed by Andrew Doran.
2007-02-17 22:31:36 +00:00
ad
d91014721f Add uvm_kick_scheduler() (MP safe) to replace wakeup(&proc0). 2007-02-15 20:21:13 +00:00
ad
b07ec3fc38 Merge newlock2 to head. 2007-02-09 21:55:00 +00:00