big-endian. i386, pc532 and vax still include <machine/byte_swap.h>
and define macros for the {n,h}to{h,n}*() functions. mips also
defines some endian-independent assembly-code aliases for unaligned
memory accesses.
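For illustration only (the actual header contents are MD), a little-endian
port's macros might look like this, assuming byte-swap helpers named
bswap16()/bswap32() in <machine/byte_swap.h>:

	/* sketch: a little-endian port; exact contents are MD */
	#include <machine/byte_swap.h>

	#define	ntohl(x)	bswap32((u_int32_t)(x))
	#define	htonl(x)	bswap32((u_int32_t)(x))
	#define	ntohs(x)	bswap16((u_int16_t)(x))
	#define	htons(x)	bswap16((u_int16_t)(x))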
that is, the priority is raised. Add a new spllowersoftclock() to provide the
atomic drop-to-softclock semantics that the old splsoftclock() provided,
and update calls accordingly.
This fixes a problem with using the "rnd" pseudo-device from within
interrupt context to extract random data (e.g. from within the softnet
interrupt) where doing so would incorrectly unblock interrupts (causing
all sorts of lossage).
XXX 4 platforms do not have priority-raising capability: newsmips, sparc,
XXX sparc64, and VAX. These platforms still have this bug until their
XXX spl*() functions are fixed.
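To illustrate the new semantics (a sketch, not the literal diff):

	/* splsoftclock() now only ever raises the priority: */
	s = splsoftclock();
	/* ... touch callout/rnd state ... */
	splx(s);

	/* code that wants the old drop semantics, e.g. the softclock
	 * dispatch path, must now say so explicitly: */
	spllowersoftclock();	/* atomically drop to softclock level */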
- fix emitrules(), like emitfiles(), to deal with the prefix (otherwise it
would attempt to find the file in the normal base for the NORMAL_C rule).
- add emitincludes() which adds include directives for each prefix to the
$INCLUDES variable in the makefile.
- add %INCLUDES to each Makefile.arch to deal with the above.
this makes "prefix" actually work in a usable manner, and now i can move
on to fixing compiler warnings (errors) in the ESP code. :)
- remove "need-flag" for mac68k esp driver, as it is not used in anywhere
and conflicts with IPsec ESP header.
This should be the only MD change needed for IPv6 support, except for kernel
config files. Very sorry if you have any compilation problems with it (I
believe it is okay). If your favorite arch is not included here, please add
a call to ip6intr() from its softintr handler.
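For ports with a netisr-style softnet handler, the hookup looks roughly
like this (a sketch; the surrounding dispatch code is MD):

	#ifdef INET6
		if (netisr & (1 << NETISR_IPV6)) {
			netisr &= ~(1 << NETISR_IPV6);
			ip6intr();
		}
	#endif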
has PAGEABLE and INTRSAFE flags. PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that. INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now). This will eventually
change how these maps are locked, as well.
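For example, creating an interrupt-safe submap now looks roughly like this
(a sketch, assuming uvm_km_suballoc() grew a flags argument in place of the
old "pageable" boolean):

	kmem_map = uvm_km_suballoc(kernel_map, &kmembase, &kmemlimit,
	    (vsize_t)(nkmempages << PAGE_SHIFT), VM_MAP_INTRSAFE,
	    FALSE, &kmem_map_store);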
managed pages, into KVA space. Since the pages are managed, we should
use pmap_enter(), not pmap_kenter_pa().
Also, when entering the mappings, enter with an access_type of
VM_PROT_READ | VM_PROT_WRITE. We do this for a couple of reasons:
(1) On systems that have H/W mod/ref attributes, the hardware
may not be able to track mod/ref done by a bus master.
(2) On systems that have to do mod/ref emulation, this prevents
a mod/ref page fault from potentially happening while in an
interrupt context, which can be problematic.
This latter change is fairly important if we ever want to be able to
transfer DMA-safe memory pages to anonymous memory objects; we will need
to know that the pages are modified, or else data could be lost!
Note that while the pages are unowned (i.e. "just DMA-safe memory pages"),
they won't consume any swap resources, as the mappings are wired, and
the pages aren't on the active or inactive queues.
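The per-page entry thus becomes, schematically (assuming the six-argument
pmap_enter() of this era):

	/* sketch: map one managed DMA-safe page, pre-setting mod/ref */
	pmap_enter(pmap_kernel(), va, pa,
	    VM_PROT_READ | VM_PROT_WRITE,	/* protection */
	    TRUE,				/* wired */
	    VM_PROT_READ | VM_PROT_WRITE);	/* access_type */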
directly, call the function pointer (*if_input)(ifp, m). The input routine
expects the packet header to be at the head of the packet, and will adjust
as necessary. Privatize the layer 2 input and output routines, allowing
*_ifattach() to set them up as appropriate.
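A driver's receive path then ends with something like this sketch:

	m->m_pkthdr.rcvif = ifp;
	/* link-layer header is at the head of the mbuf; the input
	 * routine adjusts as necessary */
	(*ifp->if_input)(ifp, m);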
the child inherits the stack pointer from the parent (traditional
behavior). Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.
This is required for clone(2).
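MD code then does something to this effect (a sketch for a downward-growing
stack; the trapframe field name is hypothetical):

	/* child's initial SP is the high end of the supplied area */
	if (stack != NULL)
		tf->tf_sp = (u_long)stack + stacksize;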
necessary because some interrupt handlers may cause additional faults (e.g.
through R/M emulation) and thereby trash the previous fault state.
From Richard Earnshaw.
has already been called and thus we know the values the timers are using.
This also ensures that clock_sc will always be valid when we try to use
it to read the timer registers.
Fixes PR7357.
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
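A pmap's page allocation then looks roughly like this (the caller and panic
message are illustrative):

	struct vm_page *pg;

	pg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE);
	if (pg == NULL)
		panic("pmap_enter: no free pages");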
dereference it when doing handled or modified emulation. This can happen
if faults are taken during early parts of the bootstrap, resulting in
recursive faulting.
* Count page table pages in the RSS.
* Rather than patching it up, panic if access_type has bits not in prot, as
this should now be impossible.
* Add page table reference counting, but currently disabled as it still has
some issues.
When we put a page on the collection list, we must subtract NPVPPG from the
total free count: one for each pv_entry that's free in that page, and one for
each free pv_entry in other pages that we're going to eat by moving the ones
in the page being collected.
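That is, the total drop is (free entries in this page) + (NPVPPG - that
count) = NPVPPG. Schematically (field names illustrative, after the old
pv_page allocator):

	pv_nfree -= NPVPPG;	/* pgi_nfree free entries in this page,
				 * plus NPVPPG - pgi_nfree free entries
				 * elsewhere consumed by relocating the
				 * in-use ones */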
pmap_find_pv(), pmap_clean_page() and pmap_remove_all() are only called on
managed pages, after VM initialization. Panic if this invariant is violated.
Also, panic if we try to enter a PT page through pmap_enter(), rather than
silently patching it up.
pmap_initialized is now #ifdef DIAGNOSTIC.
* Map the message buffer with access_type = VM_PROT_READ|VM_PROT_WRITE `just
because'.
* Map the file system buffers with access_type = VM_PROT_READ|VM_PROT_WRITE to
avoid possible problems with pagemove().
* Do not use VM_PROT_EXEC with either of the above.
* Map pages for /dev/mem with access_type = prot. Also, DO NOT use
pmap_kenter() for this, as we DO NOT want to lose modification information.
* Map pages in dumpsys() with VM_PROT_READ.
* Map pages in m68k mappedcopyin()/mappedcopyout() and writeback() with
access_type = prot.
* For now, bus_dma*(), pmap_map(), vmapbuf(), and similar functions still use
access_type = 0. This should probably be revisited.
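For instance, the /dev/mem case becomes, schematically:

	/* sketch: use pmap_enter(), not pmap_kenter(), so mod/ref is
	 * preserved; access_type is the protection actually requested */
	pmap_enter(pmap, va, pa, prot, TRUE, prot);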
emulation of managed pages. This required the following `interesting' changes:
* File system buffers must be entered with an access type of
VM_PROT_READ|VM_PROT_WRITE, so that the pages will be accessible immediately.
Otherwise we would have to teach pagemove() to update the R/M information.
Since they're never eligible for paging, the latter is overkill.
* We must ensure that pages allocated before the pmap is completely set up
(that is, pages allocated early by the VM system) are not eligible for R/M
emulation, since the memory needed for this isn't available. We do this by
allocating the pmap's internal memory with uvm_pageboot_alloc(); see the
sketch after this list. This also fixes an absolutely horrible hack where
the pmap only worked because page 0 happened to be mapped.
Also:
* Push the wired page counting into the p->v list maintenance functions. This
avoids code duplication, and fixes some cases where we were confused about
which pages to do it with.
* Fix lots of problems associated with pmap_nightmare() (and rename it to
pmap_vac_me_harder()).
* Since the early pages are no longer considered `managed', just make
pmap_*_pv() panic if !pmap_initialized.
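The sketch referred to above, for the uvm_pageboot_alloc() item (array and
size names are illustrative):

	/* uvm_pageboot_alloc() steals wired memory before uvm_init(),
	 * so the pmap's attribute array never depends on R/M emulation
	 * itself */
	pmap_attrs = (char *)uvm_pageboot_alloc(npages * sizeof(char));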
"BUS_SPACE_ALIGNED_POINTER()".
Normally equal to the param.h "ALIGNED_POINTER()", but it obeys the
additional alignment requirements of the bus_space_xxx_n() macros.
(BUS_SPACE_DEBUG)
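A port with no stricter bus_space alignment rules can presumably define it
directly in terms of the existing macro; a sketch:

	#define	BUS_SPACE_ALIGNED_POINTER(p, t)	ALIGNED_POINTER(p, t)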
PTEs is ignored when in kernel mode. Hack around this just like we do on the
i386, by adding a prepass to copyout() to check for write permission on the
destination pages.
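Conceptually, the prepass looks like this (a sketch; the PTE-writability
check is MD and the helper name is hypothetical):

	vaddr_t va;

	for (va = trunc_page((vaddr_t)udaddr); va < (vaddr_t)udaddr + len;
	    va += PAGE_SIZE) {
		if (!pte_allows_user_write(va) &&	/* hypothetical */
		    uvm_fault(&curproc->p_vmspace->vm_map, va, 0,
		    VM_PROT_WRITE) != KERN_SUCCESS)
			return (EFAULT);
	}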
* Don't bother pulling PT_M and PT_H bits from pv_flags; they can't ever be
set there!
* Actually make pmap_clear_reference() do something useful.
* Also set the referenced bit (PT_H) when emulating a write fault.
If we're doing modified bit emulation, we must revoke write permission in
pmap_clear_modify(). This is non-negotiable. I will revoke write permission
in pmap_clear_modify(), or suffer the wrath of a thousand bricks.
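Schematically (helper names hypothetical; the point is that clearing PT_M
is paired with write revocation, so the next store re-faults into modified
emulation):

	void
	pmap_clear_modify(paddr_t pa)
	{
		pmap_clear_attr(pa, PT_M);	/* hypothetical: clear M */
		pmap_revoke_write(pa);		/* hypothetical: remove write
						 * permission from all mappings
						 * of the page */
	}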