Merge proclist_mutex and proclist_lock into a single adaptive mutex
(proc_lock).
Implications:
- Inspecting process state requires thread context, so signals can no longer
be sent from a hardware interrupt handler. Signal activity must be
deferred to a soft interrupt or kthread.
- As the proc state locking is simplified, it's now safe to take exit()
and wait() out from under kernel_lock.
- The system spends less time at IPL_SCHED, and there is less lock activity.
- Socket layer becomes MP safe.
- Unix protocols become MP safe.
- Allows protocol processing interrupts to safely block on locks.
- Fixes a number of race conditions.
With much feedback from matt@ and plunky@.
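For illustration, a minimal sketch of the first implication above: signal
activity deferred from a hardware interrupt handler to a soft interrupt via
softint(9). The foo_* driver, its softc layout and the SOFTINT_SERIAL level
are assumptions for the example, not part of this change.

    #include <sys/param.h>
    #include <sys/intr.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/signal.h>
    #include <sys/signalvar.h>

    struct foo_softc {
            void            *sc_sih;        /* soft interrupt handle */
            struct proc     *sc_owner;      /* process to notify, if any */
    };

    /* Soft interrupt: runs in thread context, so taking proc_lock is fine. */
    static void
    foo_softintr(void *arg)
    {
            struct foo_softc *sc = arg;

            mutex_enter(proc_lock);
            if (sc->sc_owner != NULL)
                    psignal(sc->sc_owner, SIGIO);
            mutex_exit(proc_lock);
    }

    /* Hardware interrupt: do not touch proc state here, just defer. */
    static int
    foo_intr(void *arg)
    {
            struct foo_softc *sc = arg;

            softint_schedule(sc->sc_sih);
            return 1;
    }

    /*
     * At attach time:
     *      sc->sc_sih = softint_establish(SOFTINT_SERIAL | SOFTINT_MPSAFE,
     *          foo_softintr, sc);
     */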
Two fixes for the failure of wpa_supplicant(8) to re-key promptly, as
reported in http://mail-index.netbsd.org/tech-net/2008/04/18/msg000459.html:
- Make bpf's read timeout work more correctly with select/poll.
- A fix for catchpacket() which delays calling bpf_wakeup() until
the state has been updated.
(rev 1.125): correct the check for fd_getsock() failure in
gre_socreate().
The second bug is more complicated to fix. Since rev 1.125,
gre_reconf() is using the file descriptor table of the current
process instead of process 0's (the kernel's).
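For reference, the corrected fd_getsock() check amounts to the usual errno
pattern. The sketch below is illustrative only (the helper name and the
fd_putfile() release are assumptions, not the literal gre_socreate() code):

    #include <sys/param.h>
    #include <sys/filedesc.h>

    /* Hypothetical helper showing the fd_getsock() failure check. */
    static int
    foo_getsock(unsigned fd, struct socket **sop)
    {
            int error;

            /* fd_getsock() returns 0 on success and an errno on failure;
             * use *sop only when it succeeded. */
            error = fd_getsock(fd, sop);
            if (error != 0)
                    return error;

            /* ... use *sop ..., then drop the file reference: */
            fd_putfile(fd);
            return 0;
    }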
1. Please don't cast function pointers to (void *); use the full function
prototype cast instead. This matters on archs where a function pointer is
not a regular pointer.
2. Compare pointers to NULL, not 0.
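A hypothetical before/after for the two points above (the foo_* names and
the callback type are made up for the example):

    #include <sys/param.h>
    #include <sys/errno.h>

    struct foo_softc;

    typedef int (*foo_fn_t)(void *);

    /* Driver-private handler taking its own softc type. */
    static int
    foo_handler(struct foo_softc *sc)
    {
            (void)sc;
            return 0;
    }

    static foo_fn_t foo_cb;

    static void
    foo_register(void)
    {
            /*
             * Discouraged:  foo_cb = (void *)foo_handler;
             * A function pointer is not necessarily representable as a
             * data pointer, so cast via the full function prototype.
             */
            foo_cb = (int (*)(void *))foo_handler;
    }

    static int
    foo_call(struct foo_softc *sc)
    {
            if (foo_cb == NULL)             /* compare to NULL, not 0 */
                    return ENXIO;
            return (*foo_cb)(sc);
    }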
Use fewer 'error = ...; break;' statements and more 'return ...;'
statements. Make the SIOCSIFFLAGS case clearer by using a switch
statement instead of an if-else if-else chain.
Shorten a staircase, and remove two unnecessary curly
braces.
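For example, the SIOCSIFFLAGS case can take roughly this shape (a sketch;
the foo_stop()/foo_init()/foo_update() helpers are hypothetical and assumed
to be defined elsewhere in the driver):

    #include <sys/param.h>
    #include <sys/socket.h>
    #include <net/if.h>

    void foo_stop(struct ifnet *, int);
    int  foo_init(struct ifnet *);
    int  foo_update(struct ifnet *);

    /* Hypothetical handling of a change to the interface flags. */
    static int
    foo_set_flags(struct ifnet *ifp)
    {
            int error = 0;

            switch (ifp->if_flags & (IFF_UP | IFF_RUNNING)) {
            case IFF_RUNNING:
                    /* Marked down while running: stop the interface. */
                    foo_stop(ifp, 1);
                    break;
            case IFF_UP:
                    /* Marked up but not running: start the interface. */
                    error = foo_init(ifp);
                    break;
            case IFF_UP | IFF_RUNNING:
                    /* Up and running: apply any changed flags. */
                    error = foo_update(ifp);
                    break;
            case 0:
                    break;
            }
            return error;
    }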
- Add a lot of missing selinit() and seldestroy() calls.
- Merge selwakeup() and selnotify() calls into a single selnotify().
- Add an additional 'events' argument to the selnotify() call. It
indicates which events (POLLIN, POLLOUT, etc.) happened. If unknown,
zero may be used.
Note: please pass an appropriate value of 'events' where possible.
Proposed on: <tech-kern>
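A minimal usage sketch of the interface described above. The foo_* driver is
hypothetical, and the three-argument selnotify(sip, events, knhint) form is
assumed here; only the 'events' argument is what this change introduces.

    #include <sys/param.h>
    #include <sys/select.h>
    #include <sys/poll.h>

    /* Hypothetical driver using the reworked selnotify() interface. */
    struct foo_softc {
            struct selinfo  sc_rsel;
    };

    static void
    foo_attach(struct foo_softc *sc)
    {
            selinit(&sc->sc_rsel);          /* previously often missing */
    }

    static void
    foo_rxdone(struct foo_softc *sc)
    {
            /* One call instead of selwakeup() + selnotify(); pass which
             * events occurred, or 0 if unknown. */
            selnotify(&sc->sc_rsel, POLLIN | POLLRDNORM, 0);
    }

    static void
    foo_detach(struct foo_softc *sc)
    {
            seldestroy(&sc->sc_rsel);       /* previously often missing */
    }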
the opportunity to handle an ioctl before generic ifioctl handling
occurs. This will ease extending the kernel and sharing of code
between drivers.
First steps: Make the signature of ifioctl_common() match struct
ifnet->if_ioctl. Convert SIOCSIFCAP and SIOCSIFMTU to the new
ifioctl() regime, throughout the kernel.
1 Extract subroutine if_dl_create() from if_alloc_sadl().
if_dl_create() allocates a link-layer ifaddr.
2 Extract subroutine ifioctl_common() from ifioctl(). ifioctl_common()
will be the basis for an ifnet "superclass" whose functions
drivers may inherit. Very simple drivers may set ifnet->if_ioctl
= ifioctl_common. More sophisticated drivers will set ifnet->if_ioctl
= driver_ioctl. driver_ioctl() will call ifioctl_common() to
re-use the common code.
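In code, the "inheritance" might look roughly like the sketch below. Only
ifioctl_common() and its if_ioctl-compatible signature come from this change;
the foo_ioctl() driver and its MTU limit are hypothetical.

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/socket.h>
    #include <sys/sockio.h>
    #include <net/if.h>

    /* A more sophisticated driver's ioctl, re-using the common code. */
    static int
    foo_ioctl(struct ifnet *ifp, u_long cmd, void *data)
    {
            struct ifreq *ifr = data;
            int error;

            switch (cmd) {
            case SIOCSIFMTU:
                    /* Hypothetical hardware limit, checked before the
                     * common code updates ifp->if_mtu. */
                    if (ifr->ifr_mtu > 9000)
                            return EINVAL;
                    /* FALLTHROUGH */
            default:
                    error = ifioctl_common(ifp, cmd, data);
                    break;
            }
            return error;
    }

    /*
     * A very simple driver instead sets, at attach time:
     *      ifp->if_ioctl = ifioctl_common;
     */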
reference, but mark the cache 'invalid'. Let the next user of the
route cache check whether or not the cache is valid, and update
the rtentry reference if necessary. In this way, avoid hairy
splnet()/splx() protection of route caches, which I never did trust.
In rtcache_lookup2(), use the return values of rtcache_validate()
and _rtcache_init() instead of looking at _ro_rt. Also, check the
return code of rtcache_setdst() for an error.
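A sketch of the consumer-side check described above. The foo_output() wrapper,
its error handling, and the choice of rtcache_lookup() as the refresh path are
illustrative assumptions; only rtcache_validate() and the general scheme come
from this change.

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/socket.h>
    #include <net/route.h>

    /* Hypothetical consumer of a cached route. */
    static int
    foo_output(struct route *ro, const struct sockaddr *dst)
    {
            struct rtentry *rt;

            /* rtcache_validate() yields the cached rtentry only while the
             * cache is still marked valid. */
            if ((rt = rtcache_validate(ro)) == NULL) {
                    /* The cache was invalidated: refresh the reference. */
                    rt = rtcache_lookup(ro, dst);
            }
            if (rt == NULL)
                    return EHOSTUNREACH;

            /* ... use rt ... */
            return 0;
    }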