than one active reference to a file descriptor. It should dislodge threads
sleeping while holding a reference to the descriptor. Implemented only for
sockets but should be extended to pipes, fifos, etc.
Fixes the case of a multithreaded process doing something like the
following, which would have hung until the process got a signal.
thr0 accept(fd, ...)
thr1 close(fd)
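A minimal userland sketch of that scenario (program and names are illustrative, error handling omitted): one thread blocks in accept() on a listening socket while another thread closes the descriptor; before this change the accepting thread could sleep in the kernel indefinitely.

#include <sys/socket.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static int lfd;

static void *
acceptor(void *arg)
{
	(void)arg;
	/* thr0: sleeps in the kernel waiting for a connection. */
	(void)accept(lfd, NULL, NULL);
	return NULL;
}

int
main(void)
{
	struct sockaddr_in sin;
	pthread_t t;

	memset(&sin, 0, sizeof(sin));
	sin.sin_len = sizeof(sin);
	sin.sin_family = AF_INET;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	(void)bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
	(void)listen(lfd, 5);

	(void)pthread_create(&t, NULL, acceptor, NULL);
	sleep(1);	/* give thr0 time to block in accept() */

	/* thr1: with the restart support this dislodges thr0. */
	(void)close(lfd);
	(void)pthread_join(t, NULL);
	return 0;
}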
Merge proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:
- Inspecting process state requires thread context, so signals can no longer
be sent from a hardware interrupt handler. Signal activity must be
deferred to a soft interrupt or kthread (a sketch of that pattern follows
this list).
- As the proc state locking is simplified, it's now safe to take exit()
and wait() out from under kernel_lock.
- The system spends less time at IPL_SCHED, and there is less lock activity.
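One way to defer that signal work, as the list above notes, is a soft interrupt. A generic softint(9) sketch of the pattern follows; deliver_pending_signal() is a placeholder for the real signal delivery code (which runs in thread context and may take the process locks), not the code introduced by this change.

#include <sys/intr.h>

static void *sig_sih;

static void
deliver_pending_signal(void *arg)
{
	(void)arg;
	/* Thread context: safe to inspect process state and post
	   the signal here. */
}

static void
example_init(void)
{
	/* Level chosen arbitrarily for the sketch. */
	sig_sih = softint_establish(SOFTINT_CLOCK,
	    deliver_pending_signal, NULL);
}

static void
example_hardintr(void *arg)
{
	(void)arg;
	/* Hardware interrupt context: no process inspection here,
	   just hand the work off to the soft interrupt. */
	softint_schedule(sig_sih);
}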
failure of wpa_supplicant(8) to re-key promptly, as reported in
http://mail-index.netbsd.org/tech-net/2008/04/18/msg000459.html
- Make bpf's read timeout work more correctly with select/poll (a userland
example follows this entry).
- A fix for catchpacket() which delays calling bpf_wakeup() until
the state has been updated.
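A hedged userland example of the select/poll interaction mentioned above (the interface name "wm0" and the device path are placeholders, error handling omitted): set a BPF read timeout with BIOCSRTIMEOUT, then wait in select().

#include <sys/ioctl.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <net/if.h>
#include <net/bpf.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct timeval tv = { 1, 0 };	/* 1 second read timeout */
	struct ifreq ifr;
	fd_set rset;
	int fd;

	fd = open("/dev/bpf0", O_RDONLY);	/* device path may differ */

	memset(&ifr, 0, sizeof(ifr));
	strlcpy(ifr.ifr_name, "wm0", sizeof(ifr.ifr_name));
	(void)ioctl(fd, BIOCSETIF, &ifr);
	(void)ioctl(fd, BIOCSRTIMEOUT, &tv);

	FD_ZERO(&rset);
	FD_SET(fd, &rset);
	/* Wait for the descriptor to become readable; the fix above is
	   about making this play well with the read timeout. */
	(void)select(fd + 1, &rset, NULL, NULL, NULL);

	(void)close(fd);
	return 0;
}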
1. Please don't cast function pointers to (void *); use the full function
prototype cast (see the example after this list). This is for archs where a
function pointer is not a regular pointer.
2. Compare pointers to NULL, not 0.
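An illustration of both rules, with made-up names (handler_fn, widget_handler, and run are not from the tree):

#include <stdio.h>

typedef int (*handler_fn)(void *);

struct widget {
	int w_busy;
};

static int
widget_handler(struct widget *w)
{
	return w->w_busy;
}

static int
run(struct widget *w)
{
	handler_fn h;

	/*
	 * Discouraged: h = (void *)widget_handler; a function pointer
	 * need not be representable as a plain data pointer on every
	 * architecture.  Preferred: cast with the full prototype.
	 */
	h = (handler_fn)widget_handler;

	/* Compare pointers to NULL, not 0. */
	if (h != NULL)
		return (*h)(w);
	return 0;
}

int
main(void)
{
	struct widget w = { 1 };

	printf("%d\n", run(&w));
	return 0;
}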
- Add a lot of missing selinit() and seldestroy() calls.
- Merge selwakeup() and selnotify() calls into a single selnotify().
- Add an additional 'events' argument to the selnotify() call. It
indicates which event (POLLIN, POLLOUT, etc.) happened. If unknown,
zero may be used.
Note: please pass an appropriate value of 'events' where possible (see the
example below).
Proposed on: <tech-kern>
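A sketch of driver-side usage under the new interface, assuming the prototype void selnotify(struct selinfo *, int, long) and a hypothetical softc (exact headers may vary by release):

#include <sys/select.h>		/* struct selinfo, selinit(), selnotify() */
#include <sys/poll.h>

struct mydev_softc {
	struct selinfo sc_rsel;
	/* ... */
};

static void
mydev_attach(struct mydev_softc *sc)
{
	selinit(&sc->sc_rsel);		/* pair with seldestroy() */
}

static void
mydev_detach(struct mydev_softc *sc)
{
	seldestroy(&sc->sc_rsel);
}

static void
mydev_rxintr(struct mydev_softc *sc)
{
	/* Data became readable; pass the matching events, or 0 if the
	   event type is not known. */
	selnotify(&sc->sc_rsel, POLLIN | POLLRDNORM, 0);
}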
compatibility with the older ioctls. This avoids stack smashing and
abuse of "struct sockaddr" when ioctls placed "struct sockaddr_foo's" that
were longer than "struct sockaddr".
XXX: Some of the emulations might be broken; I tried to add code for
them but I did not test them.
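To see why the old scheme was unsafe: several sockaddr variants are larger than the generic struct sockaddr, so copying one into a field sized for the latter overruns it. A small userland check (the sizes in the comment are typical, not guaranteed):

#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

int
main(void)
{
	/* e.g. 16 bytes vs. 28 bytes */
	printf("sizeof(struct sockaddr)     = %zu\n",
	    sizeof(struct sockaddr));
	printf("sizeof(struct sockaddr_in6) = %zu\n",
	    sizeof(struct sockaddr_in6));
	return 0;
}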
Rather than calling microtime() in catchpacket(), make catchpacket()
take a timeval indicating when the packet was captured. Move
microtime() to the calling functions and grab the timestamp as soon
as we know that we're going to call catchpacket() at least once.
This means that we call microtime() once per matched packet, as
opposed to once per matched packet per bpf listener. It also means
that we return the same timestamp to all bpf listeners, rather than
slightly different ones.
It would be more accurate to call microtime() even earlier for all
packets, as you have to grab (1+#listener) locks before you can
determine if the packet will be logged. You could always grab a
timestamp before the locks, but microtime() can be costly, so this
didn't seem like a good idea.
(I guess most ethernet interfaces will have a bpf listener these
days because of dhclient. That means that we could be doing two bpf
locks on most packets going through the interface.)
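The shape of the change, heavily simplified (the real catchpacket() takes more arguments and the loop lives in the bpf tap paths; bpf_deliver_all() is invented for illustration): the caller takes one timestamp and hands the same timeval to every matching listener.

#include <sys/types.h>
#include <sys/time.h>

struct bpf_d;				/* per-listener state */

static void
catchpacket(struct bpf_d *d, const u_char *pkt, u_int pktlen,
    struct timeval *tv)
{
	/* Copy the packet into d's store buffer, stamping it with *tv
	   instead of calling microtime() here. */
}

static void
bpf_deliver_all(struct bpf_d *listeners[], int n,
    const u_char *pkt, u_int pktlen)
{
	struct timeval tv;
	int i, stamped = 0;

	for (i = 0; i < n; i++) {
		/* (filter match check elided) */
		if (!stamped) {
			microtime(&tv);	/* once per matched packet */
			stamped = 1;
		}
		catchpacket(listeners[i], pkt, pktlen, &tv);
	}
}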
and net.bpf.peers sysctls respectively.
A new structure was added to describe the external (user viewable)
representation of a BPF file; a new entry was added to the bpf_d
structure to store the PID of the calling process; a simple_lock was added
to protect insertion and removal against the net.bpf.peers sysctl handler.
This idea came from FreeBSD (Christian S.J. Peron), but while it is also
implemented with sysctls, it differs a bit.
Reviewed by: christos@ and atatat@ (who gave me the tip for the net.bpf.peers
sysctl helper function).
The __UNCONST macro is now used only where necessary and the RW macros
are gone. Most of the changes here are to consumers of the
sysctl_createv(9) interface, which now takes a pair of const pointers
that previously were not const.
Add bpf_deliver prototype.
Rename bpf_measure to m_length and move it to sys/sys/mbuf.h. I
make m_length an inline function in the header file to preserve
its performance characteristics, for better or for worse.
Optimize m_length: use the cached length in m_pkthdr.len if M_PKTHDR is set.
In bpf_deliver, zero the on-stack mbuf before we do anything else
with it.
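For reference, a sketch of what such an inline might look like (not the actual sys/sys/mbuf.h code): prefer the cached header length, otherwise walk the chain.

#include <sys/mbuf.h>

static __inline u_int
m_length_sketch(const struct mbuf *m)
{
	const struct mbuf *m0;
	u_int len;

	if ((m->m_flags & M_PKTHDR) != 0)
		return m->m_pkthdr.len;		/* cached total length */

	len = 0;
	for (m0 = m; m0 != NULL; m0 = m0->m_next)
		len += m0->m_len;		/* sum the chain */
	return len;
}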