directly. The old code was totally bogus for the new pmap. New code
lifted from SH5 port.
Fixes panics in ffs_balloc_ufs2() seen while stress-testing a file
system on an XScale-based server platform.
IST_EDGE_{FALLING,RISING,BOTH}, or IST_LEVEL_{LOW,HIGH}. This
argument is valid only for GPIO interrupts (IRQ0..7).
+ Don't clear the IIC interrupt pending bits in the interrupt handler.
Since clearing these bits immediately starts the next IIC transmission,
the IIC driver should handle them itself.
http://mail-index.netbsd.org/source-changes/2003/05/08/0068.html
There were some side-effects that I didn't anticipate, and fixing them
is proving to be more difficult than I thought, so just eject for now.
Maybe one day we can look at this again.
Fixes PR kern/21517.
isa_dmamap_create() calls to their open/close entrypoints. This worked
with some luck, but broke on i386 when _bus_dmamap_create started
to allocate bounce buffers upfront, since memory below 16M may well
not be available when the sound device is opened for the Nth time.
To fix this, create a new, simple interface, isa_drq_alloc/isa_drq_free,
as wrappers around the existing bitmask macros. These are expected
to be used before an isa_dmamap_create call, and after an
isa_dmamap_destroy call, respectively. For the sb and ad1848 drivers,
they're deferred until open/close.
All isa_dmamap_create calls can now use BUS_DMA_ALLOCNOW and be done
at attach time.
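A hedged sketch of the calling pattern described above; the exact
prototypes, flags, and softc fields (sc_ic, sc_drq, sc_maxsize) are
assumptions for illustration, not quotes from the sb or ad1848 drivers:

	/* Before creating the DMA map, reserve the DRQ channel: */
	if (isa_drq_alloc(sc->sc_ic, sc->sc_drq) != 0)
		return EBUSY;		/* channel already in use */
	if (isa_dmamap_create(sc->sc_ic, sc->sc_drq, sc->sc_maxsize,
	    BUS_DMA_NOWAIT | BUS_DMA_ALLOCNOW) != 0) {
		isa_drq_free(sc->sc_ic, sc->sc_drq);
		return ENOMEM;
	}

	/* ...and on teardown, release the channel after the map: */
	isa_dmamap_destroy(sc->sc_ic, sc->sc_drq);
	isa_drq_free(sc->sc_ic, sc->sc_drq);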
space is advertised to UVM by making virtual_avail and virtual_end
first-class variables exported by UVM. Machine-dependent code is
responsible for initializing them before main() is called. Anything
that steals KVA must adjust these variables accordingly.
This reduces the number of instances of this info from 3 to 1, and
simplifies the pmap(9) interface by removing the pmap_virtual_space()
function call, and removing two arguments from pmap_steal_memory().
This also eliminates some kludges such as having to burn kernel_map
entries on space used by the kernel and stolen KVA.
This also eliminates use of VM_{MIN,MAX}_KERNEL_ADDRESS from MI code,
thus giving MD code greater flexibility over the bounds of the managed
kernel virtual address space if a given port's specific platforms can
vary in this regard (this is especially true of the evb* ports).
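A minimal sketch of the MD side under this scheme; the platform
constants (KERNEL_VM_BASE, KERNEL_VM_SIZE) and the message-buffer
example are illustrative, not taken from any particular port:

	/* UVM exports these; MD code must fill them in before main(). */
	extern vaddr_t virtual_avail, virtual_end;

	void
	pmap_bootstrap(void)		/* argument list varies per port */
	{
		/* Advertise the managed kernel virtual address range. */
		virtual_avail = KERNEL_VM_BASE;
		virtual_end = KERNEL_VM_BASE + KERNEL_VM_SIZE;

		/* Anything that steals KVA must adjust the bounds, e.g.: */
		msgbufaddr = (void *)virtual_avail;
		virtual_avail += round_page(MSGBUFSIZE);
	}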
Also in the ARM32_PMAP_NEW case, reclaim the USPACE-bytes of wasted space
at the top of the user address space that hasn't been needed for a very,
very long time.
for L2 allocation. This avoids potential recursive calls into
uvm_km_kmemalloc() via the pool allocator.
Bug spotted by Allen Briggs while trying to boot on a machine with 512MB
of memory.
breakpoint address before it's used. Currently a no-op on all but sh5.
This is useful on sh5, for example, to mask off the instruction
type encoding in the bottom two address bits, and makes it possible
to do "db> break $rXX" instead of manually munging the address.
by the application, all NetBSD interfaces are made visible, even
if some other feature-test macro (like _POSIX_C_SOURCE) is defined.
<sys/featuretest.h> defines _NETBSD_SOURCE if none of _ANSI_SOURCE,
_POSIX_C_SOURCE and _XOPEN_SOURCE is defined, so as to preserve
existing behaviour.
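In other words, the compatibility default amounts to roughly this
(a sketch, not a verbatim quote of <sys/featuretest.h>):

	#if !defined(_ANSI_SOURCE) && !defined(_POSIX_C_SOURCE) && \
	    !defined(_XOPEN_SOURCE) && !defined(_NETBSD_SOURCE)
	#define	_NETBSD_SOURCE	1
	#endif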
This has two major advantages:
+ Programs that require non-POSIX facilities but define _POSIX_C_SOURCE
can trivially be overruled by putting -D_NETBSD_SOURCE in their CFLAGS.
+ It makes most of the #ifs simpler, in that they're all now ORs of the
various macros, rather than having checks for (!defined(_ANSI_SOURCE) ||
!defined(_POSIX_C_SOURCE) || !defined(_XOPEN_SOURCE)) all over the place.
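As an illustration of the second point (the guarded declaration is a
hypothetical stand-in), a header test can now read:

	#if (_POSIX_C_SOURCE - 0) >= 199506L || (_XOPEN_SOURCE - 0) >= 500 || \
	    defined(_NETBSD_SOURCE)
	int	some_posix_function(void);	/* hypothetical declaration */
	#endif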
I've tried not to change the semantics of the headers in any case where
_NETBSD_SOURCE wasn't defined, but there were some places where the
current semantics were clearly mad, and retaining them was harder than
correcting them. In particular, I've mostly normalised things so that
_ANSI_SOURCE gets you the smallest set of stuff, then _POSIX_C_SOURCE,
_XOPEN_SOURCE and _NETBSD_SOURCE in that order.
Tested by building for vax, encouraged by thorpej, and uncontested in
tech-userlevel for a week.
* Define a new "MMU type", ARM_MMU_SA1. While the SA-1's MMU is basically
compatible with the generic, the SA-1 cache does not have a write-through
mode, and it is useful to have an indication of this.
* Add a new PMAP_NEEDS_PTE_SYNC indicator, and try to evaluate it at
compile time. We evaluate it like so:
- If SA-1-style MMU is the only type configured -> 1
- If SA-1-style MMU is not configured -> 0
- Otherwise, defer to a run-time variable.
If PMAP_NEEDS_PTE_SYNC might evaluate to true (SA-1 only or run-time
check), then we also define PMAP_INCLUDE_PTE_SYNC so that e.g. assembly
code can include the necessary run-time support; a sketch of this
selection logic appears at the end of this entry. PMAP_INCLUDE_PTE_SYNC
largely replaces the ARM32_PMAP_NEEDS_PTE_SYNC manual setting Steve
included with the original new pmap.
* In the new pmap, make pmap_pte_init_generic() check to see if the CPU
has a write-back cache. If so, init the PT cache mode to C=1,B=0 to get
write-through mode. Otherwise, init the PT cache mode to C=1,B=1.
* Add a new pmap_pte_init_arm8(). Old pmap, same as generic. New pmap,
sets page table cacheability to 0 (ARM8 has a write-back cache, but
flushing it is quite expensive).
* In the new pmap, make pmap_pte_init_arm9() reset the PT cache mode to
C=1,B=0: the write-back check in generic gets it wrong for ARM9, because
we use write-through mode all the time on ARM9 right now. (What this
really tells me is that the test for write-through cache is less
than perfect, but we can fix that later.)
* Add a new pmap_pte_init_sa1(). Old pmap, same as generic. New pmap,
does generic initialization, then resets page table cache mode to
C=1,B=1, since C=1,B=0 does not produce write-through on the SA-1.
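A hedged sketch of the compile-time selection described above; the
spellings of the MMU-type counts and of the run-time variable
(pmap_needs_pte_sync) are assumptions following the naming in this
entry, not verbatim quotes:

	#if (ARM_MMU_SA1 == 1) && (ARM_MMU_GENERIC + ARM_MMU_XSCALE) == 0
	/* SA-1-style MMU is the only type configured. */
	#define	PMAP_NEEDS_PTE_SYNC	1
	#define	PMAP_INCLUDE_PTE_SYNC
	#elif (ARM_MMU_SA1 == 0)
	/* No SA-1-style MMU configured. */
	#define	PMAP_NEEDS_PTE_SYNC	0
	#else
	/* Mixed configuration: defer to a run-time variable. */
	#define	PMAP_NEEDS_PTE_SYNC	pmap_needs_pte_sync
	#define	PMAP_INCLUDE_PTE_SYNC
	#endif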