If pages being loaded into the VM system intersect with any of these
ranges, the intersecting pages will be placed on a lower-priority
free list to protect them, in an effort to avoid bouncing file system
buffers (it's not clear how much of a win this is, and it'll be
pointless with a unified buffer cache, but what the heck).
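As a rough sketch of the idea only (the range table, its contents, and
the helper name below are illustrative assumptions, not the actual code):

    /*
     * Hypothetical sketch: pick a free list for a page based on whether
     * its physical address falls inside a protected range.
     */
    struct prot_range {
            paddr_t start;                  /* inclusive */
            paddr_t end;                    /* exclusive */
    };

    static const struct prot_range prot_ranges[] = {
            { 0x000000, 0x1000000 },        /* e.g. the low 16MB */
    };

    static int
    page_free_list(paddr_t pa)
    {
            size_t i;

            for (i = 0; i < sizeof(prot_ranges) / sizeof(prot_ranges[0]); i++)
                    if (pa >= prot_ranges[i].start && pa < prot_ranges[i].end)
                            return (1);     /* lower-priority list */
            return (VM_FREELIST_DEFAULT);   /* highest priority */
    }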
There is at least one default free list, and 0 - N additional free
lists, in order of descending priority.
A new page allocation function, uvm_pagealloc_strat(), has been added,
providing three page allocation strategies:
- normal: high -> low priority free list walk, taking the
page off the first free list that has one.
- only: attempt to allocate a page only from the specified free
list, failing if that free list has none available.
- fallback: if `only' fails, fall back on `normal'.
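In outline, the three strategies relate roughly like this (a minimal
sketch; take_page_from() is a hypothetical stand-in for removing a page
from one free list's queue, and the UVM_PGA_STRAT_ONLY/FALLBACK names
are assumed to follow the UVM_PGA_STRAT_NORMAL naming):

    struct vm_page *
    alloc_by_strategy(int strat, int free_list)
    {
            struct vm_page *pg;
            int lcv;

            switch (strat) {
            case UVM_PGA_STRAT_NORMAL:
                    /* high -> low priority walk; list 0 is highest */
                    for (lcv = 0; lcv < VM_NFREELIST; lcv++)
                            if ((pg = take_page_from(lcv)) != NULL)
                                    return (pg);
                    return (NULL);
            case UVM_PGA_STRAT_ONLY:
                    /* only the requested list; fail if it's empty */
                    return (take_page_from(free_list));
            case UVM_PGA_STRAT_FALLBACK:
                    /* `only' first, then fall back on `normal' */
                    if ((pg = take_page_from(free_list)) != NULL)
                            return (pg);
                    return (alloc_by_strategy(UVM_PGA_STRAT_NORMAL, 0));
            default:
                    return (NULL);
            }
    }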
uvm_pagealloc(...) is provided for normal use (and is a synonym for
uvm_pagealloc_strat(..., UVM_PGA_STRAT_NORMAL, 0); the free list argument
is ignored for the `normal' case).
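For example (the leading arguments are elided as `...' above, so the
uobj/off/anon arguments shown here, and the VM_FREELIST_FIRST16 name,
are assumptions):

    /* take a page from the first16 list only, or fail */
    pg = uvm_pagealloc_strat(uobj, off, NULL,
        UVM_PGA_STRAT_ONLY, VM_FREELIST_FIRST16);

    /* prefer first16, but fall back on the normal walk */
    pg = uvm_pagealloc_strat(uobj, off, NULL,
        UVM_PGA_STRAT_FALLBACK, VM_FREELIST_FIRST16);

    /* same as uvm_pagealloc_strat(..., UVM_PGA_STRAT_NORMAL, 0) */
    pg = uvm_pagealloc(uobj, off, NULL);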
uvm_page_physload() now specifies which free list the pages will be
loaded onto. This means that some platforms which have multiple physical
memory segments may define additional vm_physsegs if they wish to break
individual physical segments into differing priorities.
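A sketch of what such a platform might do at bootstrap time (the 16MB
split point, the FIRST16_END/seg_*/avail_* names, and the
VM_FREELIST_FIRST16 list are illustrative assumptions; only the new
final free-list argument comes from the change described above):

    #define FIRST16_END     0x1000000       /* hypothetical split point */

    if (seg_start < FIRST16_END && seg_end > FIRST16_END) {
            /* split one physical segment into two vm_physsegs */
            uvm_page_physload(atop(seg_start), atop(FIRST16_END),
                atop(avail_start), atop(FIRST16_END), VM_FREELIST_FIRST16);
            uvm_page_physload(atop(FIRST16_END), atop(seg_end),
                atop(FIRST16_END), atop(avail_end), VM_FREELIST_DEFAULT);
    } else {
            uvm_page_physload(atop(seg_start), atop(seg_end),
                atop(avail_start), atop(avail_end), VM_FREELIST_DEFAULT);
    }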
Machine-dependent code must define _at least_ the following constants
in <machine/vmparam.h>:
VM_NFREELIST: the number of free lists the system will have
VM_FREELIST_DEFAULT: the default free list (should always be 0,
but is defined in machdep code so that it's with all of the
other free list-related constants).
Additional free list names may be defined by machine-dependent code, but
they will only be used by machine-dependent code (e.g. for loading the
vm_physsegs).
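So a two-list port's <machine/vmparam.h> might contain something like
this (the extra list name is an illustrative assumption):

    #define VM_NFREELIST            2
    #define VM_FREELIST_DEFAULT     0
    #define VM_FREELIST_FIRST16     1       /* MD-only; e.g. low 16MB */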
Make the bitmap of found TLSB nodes available for
use by an error handler:
* There can be only one TurboLaser, and we'll overload it
* with a bitmap of found turbo laser nodes. Note that
* these are just the actual hard TL node IDs that we
* discover here, not the virtual IDs that get assigned
* to CPUs. During TLSB specific error handling we
* only need to know which actual TLSB slots have boards
* in them (irrespective of how many CPUs they have).
Store into the integer tlsb_found a bitmap of found nodes and export
it to the rest of the kernel. This is so that at machine check
time when we're doing some TLSB/KFTXX error handling we don't
have to attempt a badaddr to look at all TLSB nodes (which will
just be too sad to try...).
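In sketch form (the probe and machine-check contexts below are assumed;
only tlsb_found and the hard node IDs come from the text above):

    int tlsb_found;                 /* bitmap of hard TLSB node IDs */

    /* at autoconfiguration time, for each node that answers: */
    tlsb_found |= (1 << node);

    /* at machine check time, instead of badaddr()'ing every slot: */
    if (tlsb_found & (1 << node)) {
            /* a board is really in this slot; safe to examine it */
    }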
The machine check handler is not really needed for this platform,
except that we might want to handle PCI errors which get reported
through here. In any case, this prints out more info than usual. Will
probably need to detune this to be based upon DIAGNOSTIC defines.
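One plausible way to detune it (a sketch, not the actual change):

    #ifdef DIAGNOSTIC
            /* verbose: dump the extended error info */
            printf("machine check: full TLSB/KFTXX register dump ...\n");
    #else
            /* terse: one-line summary */
            printf("machine check\n");
    #endif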