virtual memory reservation and a private pool of memory pages -- by a scheme
based on memory pools.
This allows better utilization of memory because buffers can now be allocated
with a granularity finer than the system's native page size (useful for
filesystems with e.g. 1k or 2k fragment sizes). It also avoids fragmentation
of virtual-to-physical memory mappings (due to the former fixed virtual
address reservation), resulting in better utilization of MMU resources on some
platforms. Finally, the scheme is more flexible by allowing run-time decisions
on the amount of memory to be used for buffers.
On the other hand, the effectiveness of the LRU queue for buffer recycling
may be somewhat reduced compared to the traditional method since, due to the
nature of the pool-based memory allocation, the actual least recently used
buffer may release its memory to a pool different from the one needed by a
newly allocated buffer. However, this effect will kick in only if the
system is under memory pressure.
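
As a rough illustration of the size-class idea (the class bounds and the
helper below are assumptions for illustration, not the actual bufcache code),
a buffer request is rounded up to the smallest pool class that fits, instead
of to a whole page:

/*
 * Illustrative only: buffer memory is drawn from a set of pools whose
 * item sizes are powers of two, so a 1k or 2k buffer no longer has to
 * consume a whole page.
 */
#include <stddef.h>

#define BUF_MINSIZE   512               /* assumed smallest size class */
#define BUF_MAXSIZE   (64 * 1024)       /* assumed largest size class */

/* Round a requested buffer size up to the size class it would be
 * allocated from; sizes above the largest class would be handled
 * separately. */
static size_t
buf_roundsize(size_t size)
{
        size_t poolsize = BUF_MINSIZE;

        while (poolsize < size && poolsize < BUF_MAXSIZE)
                poolsize <<= 1;
        return poolsize;
}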
- nothing needs to be done if ci_want_resched is already set.
- if the cpu isn't running any lwp, send a no-op ipi to it
so that it can resume immediately from halting in the idle loop
without having to wait until the next clock tick (see the sketch below).
Some advice from Stephan Uphoff.
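
A minimal sketch of the logic above; the structure layout and the
send_noop_ipi() helper are illustrative stand-ins, not the actual MD code:

struct lwp;                             /* opaque here */

struct cpu_info {
        volatile int ci_want_resched;   /* reschedule already requested? */
        struct lwp  *ci_curlwp;         /* LWP currently running, or NULL */
};

void send_noop_ipi(struct cpu_info *);  /* assumed MD primitive */

void
need_resched(struct cpu_info *ci)
{
        if (ci->ci_want_resched)
                return;                 /* already set: nothing to do */
        ci->ci_want_resched = 1;
        if (ci->ci_curlwp == NULL) {
                /*
                 * The CPU is halted in its idle loop; a no-op IPI makes it
                 * resume immediately instead of waiting for the next tick.
                 */
                send_noop_ipi(ci);
        }
}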
- pmap_enter: zap PTE and read attributes atomically to
eliminate a race window which could cause loss of attributes
(see the sketch below).
- reduce the number of TLB shootdowns by using some assumptions
about PTE handling.
for more details, see "SMP improvements for pmap" thread on port-i386@
around May 2003.
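
The "zap and read atomically" point amounts to fetching and clearing the PTE
in a single atomic exchange, so referenced/modified bits set concurrently by
the MMU cannot be lost between a separate read and write. A hedged sketch
(the bit values are the usual i386 ones; the function itself is illustrative):

#include <stdint.h>

typedef uint32_t pt_entry_t;

#define PG_U    0x020           /* accessed (referenced) bit */
#define PG_M    0x040           /* dirty (modified) bit */

static pt_entry_t
pmap_zap_pte(volatile pt_entry_t *pte)
{
        /* Atomically replace the PTE with an invalid entry and return the
         * previous contents, attribute bits included. */
        return __atomic_exchange_n(pte, (pt_entry_t)0, __ATOMIC_SEQ_CST);
}

/* The caller then records (old_pte & (PG_U|PG_M)) as the page's attributes. */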
address currently in effect does not always work: There might be more
instances of the code segment selector in other threads, on other CPUs
and in *jmp_bufs.
So always check whether the CS needs updating, and update it if it is
not already set to the "BIG" value.
This code needs more cleanup; this is considered a stopgap fix only.
Gone are the old kern_sysctl(), cpu_sysctl(), hw_sysctl(),
vfs_sysctl(), etc. routines, along with sysctl_int() et al. Now all
nodes are registered with the tree, nodes can be added (or removed)
easily, and I/O to and from the tree is handled generically.
Since the nodes are registered with the tree, the mapping from name to
number (and back again) can now be discovered, instead of having to be
hard coded. Adding new nodes to the tree is likewise much simpler --
the new infrastructure handles almost all the work for simple types,
and just about anything else can be done with a small helper function.
All existing nodes are where they were before (numerically speaking),
so all existing consumers of sysctl information should notice no
difference.
PS - I'm sorry, but there's a distinct lack of documentation at the
moment. I'm working on sysctl(3/8/9) right now, and I promise to
watch out for buses.
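
Until those man pages appear, here is a rough sketch of registering a node
with the new dynamic tree; the sysctl_createv() argument order shown is
approximate, and the "example" leaf and example_value variable are made up
for illustration:

#include <sys/sysctl.h>

static int example_value;

SYSCTL_SETUP(sysctl_example_setup, "example sysctl subtree setup")
{
        /* make sure the parent node exists in the dynamic tree ... */
        sysctl_createv(clog, 0, NULL, NULL,
            CTLFLAG_PERMANENT,
            CTLTYPE_NODE, "kern", NULL,
            NULL, 0, NULL, 0,
            CTL_KERN, CTL_EOL);

        /* ... then hang an integer leaf off it; the node number is
         * assigned dynamically via CTL_CREATE instead of being hard coded */
        sysctl_createv(clog, 0, NULL, NULL,
            CTLFLAG_PERMANENT,
            CTLTYPE_INT, "example", NULL,
            NULL, 0, &example_value, 0,
            CTL_KERN, CTL_CREATE, CTL_EOL);
}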
bswapl, and bf_cbc.S uses it. Unfortunately, this means that GENERIC
will no longer use the asm code -- though it will still use the asm
for the basic Blowfish transform. This won't slow down the KAME IPsec
(since it rolls its own CBC) but may slow down fast-ipsec in kernels
that have I386_CPU defined.
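
For context, bswapl first appeared on the 80486, which is why kernels built
with I386_CPU must fall back to a portable multi-instruction swap; an
illustrative comparison (not the actual bf_cbc.S or fast-ipsec code):

#include <stdint.h>

static inline uint32_t
swap32_portable(uint32_t x)             /* runs on any IA-32 CPU */
{
        return (x >> 24) | ((x >> 8) & 0x0000ff00U) |
               ((x << 8) & 0x00ff0000U) | (x << 24);
}

#ifndef I386_CPU
static inline uint32_t
swap32_bswapl(uint32_t x)               /* single instruction, 486 and later */
{
        __asm volatile("bswapl %0" : "+r" (x));
        return x;
}
#endif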
(it only allowed booting an nfs /netbsd automatically)
To make it work for people who can't tell the DHCP server to pass
the right kernel file to pxeboot, without losing flexibility for
people who can, do the following:
Use the filename given by the DHCP server if it contains a ":". A ":"
was already used to separate the filesystem and the filename, so we don't
lose anything. On the other hand, a path to pxeboot usually doesn't contain a ":",
so it should still work if we got the old pxeboot filename again.
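
The heuristic boils down to a single check (illustrative code, not the
literal pxeboot source; the function and parameter names are made up):

#include <string.h>

static const char *
choose_bootfile(const char *dhcp_filename, const char *default_kernel)
{
        /* A name containing ':' already looks like a "filesystem:filename"
         * specification, so trust it; anything else is probably just the
         * path to pxeboot itself, so fall back to the default kernel. */
        if (dhcp_filename != NULL && strchr(dhcp_filename, ':') != NULL)
                return dhcp_filename;
        return default_kernel;
}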
install media and the kernels (and sysinst) will still run on a 16MB system.
(They haven't run on an 8MB system for a while - might affect 12MB though.)
The additional space in the root filesystem lets sysinst core dump properly!
Make absolutely sure the high 16 bits of returned values are zero.
Ralf's list says that some BIOSes need %eax = 0x0000e820 in getmementry.
Add a few comments.
Might fix problems with memory size detection on some systems.
Remove p_raslock and rename p_lwplock to p_lock (one lock is enough).
Simplify the window test when adding a RAS and correct the test on
VM_MAXUSER_ADDRESS (see the sketch below).
Avoid unpredictable branch in i386 locore.S
(pad fields left in struct proc to avoid kernel bump)
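
A hedged sketch of the kind of window test meant above: a restartable atomic
sequence must be a non-empty window of bounded size that ends at or below
VM_MAXUSER_ADDRESS. The constants and the function name are illustrative,
not the actual kern_ras.c code:

#include <errno.h>
#include <stdint.h>

#ifndef VM_MAXUSER_ADDRESS
#define VM_MAXUSER_ADDRESS 0xbfc00000UL /* illustrative i386-like value */
#endif
#define RAS_MAXWINDOW      4096         /* assumed upper bound on a window */

static int
ras_check_window(uintptr_t start, uintptr_t end)
{
        if (start >= end)                       /* empty or inverted window */
                return EINVAL;
        if (end - start > RAS_MAXWINDOW)        /* implausibly large window */
                return EINVAL;
        if (end > VM_MAXUSER_ADDRESS)           /* must lie in user space */
                return EINVAL;
        return 0;
}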
it can now accommodate multiple _CIDs
sizeof(ACPI_DEVICE_INFO) should not be used
* make ad_devinfo member in acpi_devnode a pointer
* implement acpi_match_hid() to simplify matching devices;
_CIDs are now taken into account as well as _HID (see the sketch below)
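
A hedged sketch of the matching helper; the structure layout is simplified
and the real acpi_match_hid() in sys/dev/acpi may differ in detail:

#include <string.h>

struct acpi_devinfo {
        const char  *hid;        /* _HID string */
        const char **cids;       /* NULL-terminated list of _CID strings */
};

static int
acpi_match_hid_sketch(const struct acpi_devinfo *di, const char * const *ids)
{
        int i, j;

        for (i = 0; ids[i] != NULL; i++) {
                if (di->hid != NULL && strcmp(di->hid, ids[i]) == 0)
                        return 1;               /* matched the _HID */
                if (di->cids == NULL)
                        continue;
                for (j = 0; di->cids[j] != NULL; j++)
                        if (strcmp(di->cids[j], ids[i]) == 0)
                                return 1;       /* matched one of the _CIDs */
        }
        return 0;                               /* no match */
}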
containing signal posting, kernel-exit handling and sa_upcall processing.
XXX the pc532, sparc, sparc64 and vax ports should have their
XXX userret() code rearranged to use this.
uvm_swapout_threads could swap out LWPs which are running on another CPU:
- uvm_swapout_threads considers LWPs running on another CPU for swapout
if their l_swtime is high
- uvm_swapout_threads considers LWPs on the runqueue for swapout if their
l_swtime is high, but these LWPs might be running by the time uvm_swapout
is called (see the sketch below)
symptoms of failure: panic in setrunqueue
fixes PR kern/23095
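
A hedged sketch of the check the fix calls for: only LWPs that are neither
on a CPU nor on the run queue, and that have been idle long enough, are
swap-out candidates. The state names follow the NetBSD convention, but the
values and struct layout here are simplified for illustration:

#define LSRUN     2     /* on the run queue (illustrative value) */
#define LSONPROC  7     /* running on a CPU (illustrative value) */

struct lwp {
        int          l_stat;    /* LWP state */
        unsigned int l_swtime;  /* time in the current swap state */
};

static int
lwp_swappable(const struct lwp *l, unsigned int minslp)
{
        if (l->l_stat == LSONPROC)      /* running right now on some CPU */
                return 0;
        if (l->l_stat == LSRUN)         /* might be running before swapout */
                return 0;
        return l->l_swtime >= minslp;   /* only long-idle LWPs qualify */
}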