The patch below (hopefully) improves some signaling problems
found by Nathan.
It also cleans up the sa_upcall_userret() function, removing any
sleep calls that use PCATCH.
Unblocked threads now only use an upcall stack after they
acquire the virtual CPU.
This prevents unblocked threads from stealing all available
upcall stacks.
Tested by Nick Hudson.
be inserted into ktrace records. The general change has been to replace
"struct proc *" with "struct lwp *" in various function prototypes, pass
the lwp through, and use l_proc to get the process pointer when needed
(see the sketch below).
Bump the kernel rev up to 1.6V.
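For illustration, a minimal sketch of the conversion pattern, using a
hypothetical ktrfoo() function (not from the actual change):

    /* Hypothetical example of the prototype conversion:
     *   before: int ktrfoo(struct proc *p, int arg);
     *   after:  int ktrfoo(struct lwp *l, int arg);
     */
    int
    ktrfoo(struct lwp *l, int arg)
    {
            struct proc *p = l->l_proc;  /* process pointer when needed */

            /* per-thread state comes from l, per-process state from p */
            return (0);
    }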
be returning because the code path that calls it will very likely call
setrunnable() again on the returned thread, leading to a panic because
the returned thread is already at LSRUN. This fixes a problem where NetBSD
would panic when using gdb (5.3) on a process with multiple LWPs, like this:
% gdb program
(gdb) run
^C
(gdb) quit
sa_switch() invocations while exiting. Test P_SA instead of L_SA, out
of paranoia. Avoids a possible remrunqueue panic reported by Havard
Eidnes.
Release the kernel lock before calling the userret function to exit in
sigexit(). Problem noted by Paul Kranenburg.
timeout
the semantics of the 'timeout' parameter differ from POSIX for the syscall
(it is not const, and may be modified by the kernel if interrupted from
the wait); libc will provide an appropriate wrapper, as sketched below.
Since sigwaitinfo(2) will be implemented as a wrapper around sigtimedwait()
too, remove its reserved slot and move the sigqueue slot 'up', freeing
slot #246.
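A minimal sketch of such a wrapper, assuming __sigtimedwait() is the
name of the raw syscall stub (illustrative, not the actual libc source):

    #include <signal.h>
    #include <time.h>

    int __sigtimedwait(const sigset_t *, siginfo_t *, struct timespec *);

    /* Copy the caller's timeout so the kernel can modify the copy
     * (e.g. to report time remaining) without violating the POSIX
     * const contract. */
    int
    sigtimedwait(const sigset_t *set, siginfo_t *info,
        const struct timespec *timeout)
    {
            struct timespec ts, *tsp = NULL;

            if (timeout != NULL) {
                    ts = *timeout;  /* kernel may scribble on this copy */
                    tsp = &ts;
            }
            return __sigtimedwait(set, info, tsp);
    }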
* Change the semantics of proc_unstop() slightly, so that it is
  responsible for making all stopped LWPs runnable, instead of
  all-but-one. The return value is an LWP that can be interrupted if
  doing so is necessary to take a signal. Adjust callers of
  proc_unstop() to the new, simpler semantics (see the caller sketch
  after this list).
* When a non-continue signal is delivered to a stopped process and
  there is an LWP sleeping interruptibly, call setrunnable() (by way
  of the 'out:' target in psignal1) instead of calling unsleep(), so
  that it becomes LSSTOP in issignal() and continuable by
  proc_unstop(). Addresses PR kern/19990 by Martin Husemann, with
  suggestions from enami tsugutomo.
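A hedged sketch of the caller pattern under the new semantics (the
surrounding function is invented for illustration):

    /* Illustrative only: continuing a stopped process under the new
     * proc_unstop() semantics. */
    static void
    continue_process(struct proc *p, int signum)
    {
            struct lwp *l;

            /* proc_unstop() makes all stopped LWPs runnable and returns
             * one LWP that may be interrupted to take the signal. */
            l = proc_unstop(p);
            if (l != NULL && signum != 0)
                    setrunnable(l);
    }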
kqueue provides a stateful and efficient event notification framework.
Currently supported events include socket, file, directory, fifo,
pipe, tty and device changes, and monitoring of processes and signals.
kqueue is supported by all writable filesystems in the NetBSD tree
(with the exception of Coda) and by all device drivers supporting poll(2).
Based on work done by Jonathan Lemon for FreeBSD; the initial NetBSD
port was done by Luke Mewburn and Jason Thorpe.
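As a usage illustration, a minimal user-level consumer that watches
standard input for readability via kqueue(2)/kevent(2):

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct kevent ev;
            int kq, n;

            if ((kq = kqueue()) == -1)
                    err(1, "kqueue");

            /* Register interest in stdin becoming readable. */
            EV_SET(&ev, STDIN_FILENO, EVFILT_READ, EV_ADD, 0, 0, 0);
            if (kevent(kq, &ev, 1, NULL, 0, NULL) == -1)
                    err(1, "kevent: register");

            /* Block until one event arrives; ev.data holds the
             * number of bytes available to read. */
            n = kevent(kq, NULL, 0, &ev, 1, NULL);
            if (n == -1)
                    err(1, "kevent: wait");
            if (n > 0)
                    printf("%lld bytes ready on fd %lu\n",
                        (long long)ev.data, (unsigned long)ev.ident);

            close(kq);
            return 0;
    }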
This is done by adding an extra argument to mi_switch() and
cpu_switch() which specifies the new process. If NULL is passed,
then the new function chooseproc() is invoked to wait for a new
process to appear on the run queue.
This also provides an opportunity for optimisations if "switching to self".
Also added are C versions of the setrunqueue() and remrunqueue()
low-level primitives if __HAVE_MD_RUNQUEUE is not defined by MD code.
All these changes are contingent upon the __HAVE_CHOOSEPROC flag being
defined by MD code to indicate that cpu_switch() supports the changes.
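A sketch of the resulting dispatch logic, written in C for clarity
(the real cpu_switch() is machine-dependent, often assembly; this is
illustrative only):

    void
    cpu_switch(struct proc *old, struct proc *new)
    {
            if (new == NULL)
                    new = chooseproc();  /* wait for a runnable process */

            if (new == old)
                    return;              /* "switch to self" fast path */

            /* ... save old's context, resume new's context ... */
    }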
* struct sigacts gets a new sigact_sigdesc structure (sketched after
  this list), which has the sigaction and the trampoline/version.
  Version 0 means "legacy kernel provided trampoline". Other versions
  are coordinated with machine-dependent code in libc.
* sigaction1() grows two more arguments -- the trampoline pointer and
the trampoline version.
* A new __sigaction_sigtramp() system call is provided to register a
trampoline along with a signal handler.
* The handler is no longer passed to sendsig() functions. Instead,
  sendsig() looks up the handler by peeking in the sigacts for the
  process getting the signal (since it has to look in there for the
  trampoline anyway).
* Native sendsig() functions now select the appropriate trampoline and
its arguments based on the trampoline version in the sigacts.
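A sketch of the per-signal descriptor this describes (field names
approximate the text; see sys/signalvar.h for the authoritative layout):

    struct sigact_sigdesc {
            struct sigaction sd_sigact; /* the registered handler */
            const void      *sd_tramp;  /* trampoline entry point */
            int              sd_vers;   /* 0 = legacy kernel-provided */
    };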
Changes to libc to use the new facility will be checked in later. Kernel
version not bumped; we will ride the 1.6C bump made recently.
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map). Try to deal with this:
* Group all information about the backend allocator for a pool in a
  separate structure (sketched after this entry). The pool references
  this structure, rather than the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
  to become available, but will still fail if it cannot allocate KVA
  space for the pages. If this happens, carefully drain all pools using
  the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
some pages, and use that information to make draining easier and more
efficient.
* Get rid of PR_URGENT. There was only one use of it, and it could be
dealt with by the caller.
From art@openbsd.org.
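A hedged sketch of the grouped allocator and the adjusted pool_init()
call (names follow the pool(9) interface of the time; treat the details
as illustrative):

    struct pool_allocator {
            void *(*pa_alloc)(struct pool *, int);   /* obtain a page */
            void  (*pa_free)(struct pool *, void *); /* release a page */
            unsigned int pa_pagesz;                  /* page size used */
            /* ... plus linkage for the list of pools sharing this
             * allocator, used when draining to recover KVA ... */
    };

    /* pool_init() now takes the allocator structure instead of
     * individual alloc/free function pointers: */
    pool_init(&some_pool, sizeof(struct some_item), 0, 0, 0,
        "somepl", &pool_allocator_nointr);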
ltsleep() is calling CURSIG() which can call issignal() and issignal()
could not deal with being called from a locked context. This happens
when a process receives SIGTTIN, and issignal() calls psignal() to
post SIGCHLD to the parent.
XXX: It is really messy to have issignal() handle the job control
functionality, and the whole signal interlocking protocol needs to
be redesigned. For now this fix (provided by enami) does the trick.
I've been running with this fix for weeks, and atatat has stress-tested
the kernel running ~30 make kernels...
wrap this all up in a CHECKSIGS() macro. Also, in psignal1(),
signotify() SRUN and SIDL processes if __HAVE_AST_PERPROC is defined.
Per discussion w/ mycroft.
only the signal handler array is sharable between threads;
move other random signal stuff from struct proc to struct sigctx
(sketched below).
This addresses kern/10981 by Matthew Orgass.
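Purely illustrative sketch of the resulting split; the field names
here are hypothetical, not the exact NetBSD definitions:

    struct sigacts {                      /* sharable between threads */
            struct sigaction ps_sigact[NSIG];  /* handler array */
    };

    struct sigctx {                       /* private signal state */
            sigset_t ps_sigmask;          /* current signal mask */
            sigset_t ps_siglist;          /* pending signals */
            struct sigaltstack ps_sigstk; /* alternate signal stack */
    };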
in the non-MULTIPROCESSOR case (LOCKDEBUG requires it). The scheduler
lock is held upon entry to mi_switch() and cpu_switch(), and
cpu_switch() releases the lock before returning (see the sketch below).
Largely from Bill Sommerfeld, with some minor bug fixes and
machine-dependent code hacking from me.
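A sketch of the caller-side protocol this establishes (SCHED_LOCK and
SCHED_ASSERT_UNLOCKED as in this MP work; the function itself is
invented for illustration):

    /* Illustrative only: sleeping via mi_switch() under the new rules. */
    static void
    sleep_and_switch(struct proc *p)
    {
            int s;

            SCHED_LOCK(s);        /* splsched + acquire sched_lock */
            /* ... put p on a sleep queue, set p_stat ... */
            mi_switch(p);         /* sched_lock released in cpu_switch() */
            SCHED_ASSERT_UNLOCKED();
            splx(s);              /* restore interrupt priority only */
    }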
- Change the ktrace interface to pass in the current process, rather than
  p->p_tracep, since the various ktr* functions need curproc anyway.
- Add curproc as a parameter to mi_switch() since all callers had it
handy anyway.
- Add a second proc argument for inferior() since callers all had
curproc handy.
Also, miscellaneous cleanups in ktrace:
- ktrace now always uses file-based, rather than vnode-based, I/O
  (simplifies, increases type safety); eliminate KTRFLAG_FD & KTRFAC_FD.
  Do non-blocking I/O, and yield a finite number of times when receiving
  EWOULDBLOCK before giving up (see the sketch after this list).
- move code duplicated between sys_fktrace and sys_ktrace into ktrace_common.
- simplify interface to ktrwrite()
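A hedged sketch of the bounded retry described above; KTRACE_MAX_YIELDS
and ktrwrite_once() are hypothetical names used only for illustration:

    /* Illustrative only: bounded non-blocking write with yields. */
    #define KTRACE_MAX_YIELDS 10          /* hypothetical bound */

    int tries, error;

    for (tries = 0; tries < KTRACE_MAX_YIELDS; tries++) {
            error = ktrwrite_once(fp, kte); /* hypothetical helper */
            if (error != EWOULDBLOCK)
                    break;
            yield();      /* let the trace-file reader drain */
    }
    if (error == EWOULDBLOCK)
            error = EIO;  /* gave up; drop this record */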
which indicates that the process is actually running on a
processor. Test against SONPROC as appropriate rather than
combinations of SRUN and curproc. Update all context switch code
to properly set SONPROC when the process becomes the current
process on the CPU.
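For illustration, the simplified test (a sketch; context-switch code
sets p_stat to SONPROC when the process becomes current):

    /* Illustrative only: "is this process running on a CPU right now?" */
    static int
    proc_is_running(struct proc *p)
    {
            /* previously: return (p->p_stat == SRUN && p == curproc); */
            return (p->p_stat == SONPROC);
    }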