* When opening a live pcap, obtain the list of supported DLTs from
the BPF.
* Add pcap_list_datalinks() to obtain the list of DLTs supported by
  the interface associated with the pcap descriptor.
* Add pcap_set_datalink() to set the current DLT of the pcap.
* Bump shlib 1.2 -> 1.3; new functions added.
From David Young <dyoung@ojctech.com>, with some minor changes by me.
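A minimal sketch of how the two new calls fit together (the interface
name and target DLT are illustrative, and error handling is abbreviated):

    #include <pcap.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p;
        int *dlts, n, i;

        /* "wi0" is only an example interface name */
        if ((p = pcap_open_live("wi0", 68, 0, 1000, errbuf)) == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        /* enumerate the DLTs the interface supports */
        if ((n = pcap_list_datalinks(p, &dlts)) == -1) {
            pcap_perror(p, "pcap_list_datalinks");
            return 1;
        }
        for (i = 0; i < n; i++)
            printf("supported DLT: %d\n", dlts[i]);
        free(dlts);     /* the list is allocated for the caller */

        /* switch the capture to 802.11 framing, if supported */
        if (pcap_set_datalink(p, DLT_IEEE802_11) == -1)
            pcap_perror(p, "pcap_set_datalink");

        pcap_close(p);
        return 0;
    }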
* bssid xx:xx:xx:xx:xx:xx -- set the desired BSSID of an 802.11
interface.
* -bssid -- unset the desired BSSID of an 802.11 interface, so
the interface will choose automatically (default).
* channel x -- set the channel (radio frequency) of an 802.11 interface.
Current BSSID and channel are now reported in the 802.11 status
display, if supported by the interface.
Above changes from David Young <dyoung@ojctech.com>, with some slight
changes by me (use ethers(3) functions rather than hand-parsing/printing
the 802.11 address).
Document bssid/-bssid/channel, and clean up markup of parentheticals
in the manual page.
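For example (interface name, address, and channel are illustrative):

    ifconfig wi0 bssid 00:02:2d:01:02:03    # pin to one access point
    ifconfig wi0 channel 11                 # tune the radio to channel 11
    ifconfig wi0 -bssid                     # revert to automatic selection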
MKDYNAMICROOT is now enabled by default. This means that:
+ /bin and /sbin (and the few programs in /usr/* which were statically
linked) are now dynamically linked.
+ The shared libraries that are needed by the /bin and /sbin programs
  are now installed into /lib (with compatibility symlinks from
/usr/lib). These are:
c crypt edit ipsec kvm m m387 termcap termlib util z
+ The shared linker is now in /libexec/ld.elf_so, and
/usr/libexec/ld.elf_so is a symlink to the former.
If you want the prior behaviour of "some applications statically linked,
the rest dynamically linked", set MKDYNAMICROOT=no in your mk.conf(5).
If you have a philosophical objection to dynamic libraries, continue
to set LDSTATIC=-static in your mk.conf(5), and please don't waste any
more time in trying to convince us why dynamic libraries are 3v1l.
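For instance, in mk.conf(5):

    MKDYNAMICROOT=no    # restore the partly-static /bin and /sbin
    LDSTATIC=-static    # or: statically link everything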
reduce differences between the kqueue branch and -current and thus make
testing easier
change HISTORY to clearly state this interface is only available with the
experimental kernel branch
add Jason Thorpe and me to AUTHORS
update .Dd
flag the vm_map before we start tearing it down. use this to skip
the pmap_update() at the end of all the removes, which allows pmaps
to optimize pmap tear-down. also, use the new pmap_remove_all() hook to
let the pmap implementation know what we're up to.
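in kernel terms the new path looks roughly like this (a sketch, assuming
the flag is the VM_MAP_DYING map flag; locking and details elided):

    #include <uvm/uvm.h>

    void
    teardown_sketch(struct vm_map *map)
    {
        map->flags |= VM_MAP_DYING;     /* removes may skip pmap_update() */
        pmap_remove_all(map->pmap);     /* pmap can optimize tear-down */
        uvm_unmap(map, vm_map_min(map), vm_map_max(map));
    }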
- use struct vm_page_md for attaching pv entries to struct vm_page
- change pseg_set()'s return value to indicate whether the spare page
was used as an L2 or L3 PTP.
- use a pool for pv entries instead of malloc() (see the sketch below).
- put PTPs on a list attached to the pmap so we can free them
more efficiently (by just walking the list) in pmap_destroy().
- use the new pmap_remove_all() interface to avoid flushing the cache and TLB
for each pmap_remove() that's done as we are tearing down an address space.
- in pmap_enter(), handle replacing an existing mapping more efficiently
than just calling pmap_remove() on it. also, skip flushing the
TSB and TLB if there was no previous mapping, since there can't be
anything we need to flush. also, preload the TSB if we're pre-setting
the mod/ref bits.
- allocate hardware contexts like the MIPS pmap:
allocate them all sequentially without reuse, then once we run out
just invalidate all user TLB entries and flush the entire L1 dcache.
- fix pmap_extract() for the case where the va is not page-aligned and
nothing is mapped there.
- fix calculation of TSB size. it was comparing physmem (which is
in units of pages) to constants that only make sense if they are
in units of bytes.
- avoid sleeping in pmap_enter(), instead let the caller do it.
- use pmap_kenter_pa() instead of pmap_enter() where appropriate.
- remove code to handle impossible cases in various functions.
- tweak asm code to pipeline a little better.
- remove many unnecessary spls and membars.
- lots of code cleanup.
- no doubt other stuff that I've forgotten.
the result of all this is that a fork+exit microbenchmark is 34% faster
and a fork+exec+exit microbenchmark is 28% faster.
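as a rough illustration of the pv-entry pool and the per-pmap PTP list
(the shapes below are illustrative, not the actual sparc64 code):

    #include <sys/pool.h>
    #include <sys/queue.h>

    struct pv_entry { int pv_dummy; /* ... */ };
    static struct pool pv_pool;     /* set up with pool_init(9) at bootstrap */

    struct ptp { LIST_ENTRY(ptp) ptp_list; /* ... */ };
    struct pmap_sketch { LIST_HEAD(, ptp) pm_ptps; /* ... */ };

    /* pv entries come from the pool rather than malloc(9) */
    static struct pv_entry *
    pv_alloc(void)
    {
        return pool_get(&pv_pool, PR_NOWAIT);
    }

    static void
    pv_free(struct pv_entry *pv)
    {
        pool_put(&pv_pool, pv);
    }

    /* pmap_destroy(): free every PTP by just walking the list */
    static void
    ptp_free_all(struct pmap_sketch *pm)
    {
        struct ptp *p;

        while ((p = LIST_FIRST(&pm->pm_ptps)) != NULL) {
            LIST_REMOVE(p, ptp_list);
            /* ... return the underlying page to UVM ... */
        }
    }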
Introduce cpu_idle() and a new cpu_switch() to replace the old cpu_switch(),
which did the lot. Runs leaner without overly blocking interrupts.
Includes cleanup of the RAS code to make use of callee-saved registers.
Benchmarks on a DX4 @ 100MHz reveal a slight performance improvement,
but probably not statistically significant. More TBD to verify this.
Changes passed a pounding on Athlon @ 1GHz too.
Allow the caller to specify which process to switch to. This is done by
adding an extra argument to mi_switch() and cpu_switch() which specifies
the new process. If NULL is passed, then the new function chooseproc()
is invoked to wait for a new process to appear on the run queue.
Also provides an opportunity for optimisations if "switching to self".
Also added are C versions of the setrunqueue() and remrunqueue()
low-level primitives if __HAVE_MD_RUNQUEUE is not defined by MD code.
All these changes are contingent upon the __HAVE_CHOOSEPROC flag being
defined by MD code to indicate that cpu_switch() supports the changes.
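A sketch of the new contract (simplified shapes, not the actual NetBSD
code):

    #include <stddef.h>

    struct proc { int p_pid; };

    /* wait until a process appears on the run queue, then return it */
    static struct proc *
    chooseproc(void)
    {
        /* ... sleep until the run queue is non-empty ... */
        return NULL;
    }

    static void
    cpu_switch(struct proc *old, struct proc *new)
    {
        if (new == NULL)
            new = chooseproc();     /* pick the next runnable process */
        if (new == old)
            return;                 /* "switching to self" fast path */
        /* ... save old's context, restore new's ... */
    }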
Rely on the default value. It should actually be extracted
from the bootpath instead, but that involves translating
from Apple partition map entries to NetBSD disklabel entries.
(as Solaris, Linux, and HP-UX all mention they need zlib and it should
be part of libnbcompat, maybe this is a hint for us to get a move on
and do that :)
- consistently support __hpux (which the HP compilers define) as well
as __hpux__ (not sure which compilers set this, but retained anyway)
- fix a typo in the definition of signal(). arguably the codebase should
just be converted to sigaction()...
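For reference, a minimal sketch of the sigaction(2) equivalent of
signal(SIGINT, handler) (the handler name is illustrative):

    #include <signal.h>
    #include <stddef.h>

    static void
    handler(int sig)
    {
        /* ... */
    }

    static int
    install_handler(void)
    {
        struct sigaction sa;

        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;   /* keep BSD signal() restart semantics */
        return sigaction(SIGINT, &sa, NULL);
    }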
- Add an e_fault hook to struct emul, pointing to the emulation-specific
memory fault handler. IRIX uses irix_vm_fault, and all other emulations
use NULL, which means to use uvm_fault.
- While we are there, explicitly set to NULL the uninitialized fields in
struct emul (e_fault and e_sysctl) on most ports.
- e_fault is used by the trap handler, for now only on mips. In order to avoid
intrusive modifications in UVM, the function pointed to by e_fault does not
have exactly the same prototype as uvm_fault (see the dispatch sketch below):
int uvm_fault __P((struct vm_map *, vaddr_t, vm_fault_t, vm_prot_t));
int e_fault __P((struct proc *, vaddr_t, vm_fault_t, vm_prot_t));
- In IRIX share groups, all the VM space is shared, except one page.
This forces us to have different VM spaces and to synchronize modifications
to the VM space across share group members. We need an IRIX-specific hook
to the page fault handler in order to propagate VM space modifications
caused by page faults.
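A sketch of the resulting trap handler dispatch, using the prototypes
above (variable names are illustrative):

    /* use the emulation-specific fault handler if one is registered,
     * otherwise fall back to uvm_fault() */
    if (p->p_emul->e_fault != NULL)
        rv = (*p->p_emul->e_fault)(p, va, fault_type, access_type);
    else
        rv = uvm_fault(map, va, fault_type, access_type);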