the functionality of ixp425_bs_tag.
- Add missing stream_{read,write}_1 ops to ixp425_bs_tag.
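  For illustration, a hedged sketch of how the 1-byte stream ops end up
  wired into the tag; the member and handler names below are assumptions,
  not the actual ixp425_space.c.  Byte-wide stream and normal accesses
  need no byte swapping, so they can share a handler.

    /*
     * Illustrative only: member/handler names are assumed, and the
     * existing 1-byte handlers (ixp425_bs_r_1/ixp425_bs_w_1) are
     * taken as given.
     */
    struct bus_space ixp425_bs_tag = {
        /* ... mapping, barrier and multi-word methods elided ... */
        .bs_r_1  = ixp425_bs_r_1,   /* read_1 */
        .bs_w_1  = ixp425_bs_w_1,   /* write_1 */
        .bs_sr_1 = ixp425_bs_r_1,   /* stream_read_1: same handler */
        .bs_sw_1 = ixp425_bs_w_1,   /* stream_write_1: same handler */
    };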
- Re-work the delay() implementation to use the free-running
  Time-Stamp counter. This removes the need to bootstrap TMR0 early on.
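  A hedged sketch of the approach, not the committed code: spin on the
  free-running Time-Stamp counter instead of programming TMR0.  The
  register-read helper and the tick rate below are made-up names.

    #include <sys/types.h>

    extern uint32_t ixp425_read_timestamp(void);  /* hypothetical: read TS counter */
    #define TS_TICKS_PER_US 66                    /* assumed counter frequency */

    void
    delay(u_int usec)
    {
        uint32_t first, last, delta, ticks;

        ticks = usec * TS_TICKS_PER_US;
        first = ixp425_read_timestamp();
        delta = 0;

        while (delta < ticks) {
            last = ixp425_read_timestamp();
            /* free-running counter: unsigned subtraction handles wrap */
            delta += last - first;
            first = last;
        }
    }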
the idle loop. They seem to have gone AWOL sometime in the past.
Fixes port-arm/23390.
- While here, tidy up the idle loop.
- Add a cheap DIAGNOSTIC check for run queue sanity.
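  A hedged sketch of what such a cheap DIAGNOSTIC run-queue check can
  look like; the committed check may differ in detail.  At the time the
  run queues held LWPs, and an empty queue points back at its own head.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/sched.h>

    #ifdef DIAGNOSTIC
    static void
    idle_checkrunq(void)
    {
        int q;

        for (q = 0; q < RUNQUE_NQS; q++) {
            struct prochd *ph = &sched_qs[q];

            /* a bit set in sched_whichqs must mean a non-empty queue */
            if ((sched_whichqs & (1U << q)) != 0 &&
                ph->ph_link == (void *)ph)
                panic("idle: runqueue %d bit set but queue empty", q);
        }
    }
    #endif /* DIAGNOSTIC */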
returns non-zero and we want to shortcut out. This avoids a bogus pagefault
condition being detected in sa_switch().
Many thanks to Christian Limpach for finding this, obviating my band-aid
patch to kern_sa.c (posted on tech-kern).
the call to data_abort_fixup() as the fixup routines also try to
de-reference the fault pc.
- If a fault came from kernel mode, and the fault address looks to be in
the kernel's address space, and pcb_onfault is *set*, check the
instruction which caused the fault. If it's LDR{B,}T or STR{B,}T
(see the sketch after this list), then one of the copy in/out
routines is trying to read/write a kernel address with the wrong
privilege. If that address is actually
mapped, we could end up in an infinite loop because we failed to
notice that it's really a 'user mode' access. Yay for "crashme".
I suspect this also fixes PR port-arm/23052.
Note: This *could* be fixed by adding sanity checks to copyin et al,
but that would add extra overhead to the non-error path...
- Fix a couple of __predict_false cases.
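  A hedged sketch of the LDR{B,}T/STR{B,}T test described in the list
  above; the helper name is made up.  In the abort handler it would be
  applied to the instruction at tf->tf_pc when the fault came from
  kernel mode with pcb_onfault set.

    #include <sys/types.h>

    static __inline int
    insn_is_unprivileged_xfer(u_int insn)
    {
        /*
         * ARM single data transfer: bits 27:26 == 01.  With P (bit 24)
         * clear and W (bit 21) set, this is the "translated" form
         * (LDRT/LDRBT/STRT/STRBT), i.e. a user-privilege access such
         * as copyin()/copyout() touching a kernel address.
         */
        return ((insn & 0x0d200000) == 0x04200000);
    }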
to determine if a fault is read or write, make sure tf->tf_pc is 32-bit
aligned before dereferencing it.
Otherwise, deliver an illegal instruction signal to the process. We don't
support execution of Thumb code at this time.
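  A hedged sketch of that alignment check, with the instruction fetch
  factored into a helper; the helper name and error handling are
  illustrative (the caller posts SIGILL on failure rather than trying
  to disassemble).

    #include <sys/types.h>
    #include <sys/errno.h>
    #include <machine/frame.h>      /* struct trapframe, tf_pc */

    static int
    fetch_fault_insn(struct trapframe *tf, u_int *insnp)
    {
        if ((tf->tf_pc & 3) != 0) {
            /*
             * PC is not 32-bit aligned: Thumb code (unsupported) or
             * garbage.  Don't dereference it; the caller delivers
             * SIGILL instead.
             */
            return EFAULT;
        }
        *insnp = *(u_int *)tf->tf_pc;
        return 0;
    }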
copyin() or copyout().
uvm_useracc() tells us whether the mapping permissions allow access to
the desired part of an address space, and many callers assume that
this is the same as knowing whether an attempt to access that part of
the address space will succeed. however, access to user space can
fail for reasons other than insufficient permission, most notably that
paging in any non-resident data can fail due to i/o errors. most of
the callers of uvm_useracc() make the above incorrect assumption. the
rest are all misguided optimizations, which optimize for the case
where an operation will fail. we'd rather optimize for operations
succeeding, in which case we should just attempt the access and handle
failures due to insufficient permissions the same way we handle i/o
errors. since there appear to be no good uses of uvm_useracc(), we'll
just remove it.
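  To make the replacement pattern concrete, a hedged sketch (the
  function name and argument structure are made up): callers stop
  asking uvm_useracc() whether an access would be permitted and simply
  attempt the copy, handling the error return.

    #include <sys/param.h>
    #include <sys/systm.h>

    /* hypothetical argument structure, for illustration only */
    struct foo_args {
        int fa_flags;
    };

    int
    copy_foo_args(void *uaddr, struct foo_args *fa)
    {
        int error;

        /*
         * Old style (removed): ask uvm_useracc() whether the access
         * would be permitted, then assume the copy cannot fail.
         *
         *      if (!uvm_useracc(uaddr, sizeof(*fa), B_READ))
         *              return EFAULT;
         *
         * New style: just attempt the access.  copyin() reports both
         * permission problems and I/O errors from paging in data.
         */
        error = copyin(uaddr, fa, sizeof(*fa));
        return error;
    }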
alignment fault checking if necessary.
This option gets the acorn32 port working again.
XXX: Richard Earnshaw suggested enabling alignment faults for
XXX: userland only on acorn32. Need to investigate this.
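  A hedged sketch of what enabling alignment fault checking amounts to:
  setting the A bit in the CP15 control register during early CPU
  setup.  The option name and helper below are hypothetical; the
  constant and cpu_control() interface are the usual arm ones, but
  treat the call as illustrative.

    #include <machine/armreg.h>
    #include <machine/cpufunc.h>

    static void
    cpu_setup_alignment_faults(void)        /* hypothetical helper */
    {
    #ifdef CPU_ENABLE_ALIGNMENT_FAULTS      /* hypothetical option name */
        /* set the A bit in the CP15 control register */
        cpu_control(CPU_CONTROL_AFLT_ENABLE, CPU_CONTROL_AFLT_ENABLE);
    #endif
    }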
Remove p_raslock and rename p_lwplock to p_lock (one lock is enough).
Simplify the window test when adding a RAS and correct the test against VM_MAXUSER_ADDRESS.
Avoid unpredictable branch in i386 locore.S
(pad fields left in struct proc to avoid kernel bump)
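  A hedged sketch of the kind of window validation meant above; the
  helper name and argument types are illustrative, not kern_ras.c
  itself.

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <uvm/uvm_extern.h>

    static int
    ras_check_window(vaddr_t start, vaddr_t end)
    {
        /*
         * The sequence must be non-empty, must not wrap, and must lie
         * entirely below VM_MAXUSER_ADDRESS.
         */
        if (start >= end || end > VM_MAXUSER_ADDRESS)
            return EINVAL;
        return 0;
    }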
containing signal posting, kernel-exit handling and sa_upcall processing.
XXX the pc532, sparc, sparc64 and vax ports should have their
XXX userret() code rearranged to use this.
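  A hedged sketch of the shape of such a userret().  The flag and field
  names follow the conventions of the time but should be read as
  illustrative, and the SA/priority handling is simplified.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/signalvar.h>
    #include <sys/savar.h>
    #include <machine/cpu.h>

    static __inline void
    userret(struct lwp *l)
    {
        struct proc *p = l->l_proc;
        int sig;

        /* post any signals that became pending while in the kernel */
        while ((sig = CURSIG(l)) != 0)
            postsig(sig);

        /* deliver pending scheduler-activations upcalls */
        if ((p->p_flag & P_SA) != 0 && (l->l_flag & L_SA_UPCALL) != 0)
            sa_upcall_userret(l);

        /* kernel-exit bookkeeping: recompute the user priority */
        l->l_priority = l->l_usrpri;
        curcpu()->ci_schedstate.spc_curpriority = l->l_priority;
    }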
- Assume a permission fault is always the result of an attempted
write, so no need to disassemble the opcode.
(as discussed with Richard Earnshaw/Jason Thorpe a week or two ago)
- Split out non-MMU data aborts into separate functions, and deal
correctly with XScale imprecise aborts. Specifically, the old code
made no attempt to handle the double abort faults which can occur
as a result of two consecutive external (imprecise) aborts. This
was easy to provoke by read(2)ing from a /dev/mem offset which caused
an external abort. With the old code, this would bring the system
down instantly, with little clue as to why. (hint: tf_spsr held
PSR_ABT32_MODE...)
- Re-write badaddr_read() to use pcb_onfault instead of adding extra
  overhead to data_abort_handler() (see the sketch after this list).
  A side effect of this is that it now benefits from the XScale
  double abort recovery.
- Invoke the cpu-specific prefetch/data abort fixup routines only if
the host cpu actually needs it. On other cpus, the code is optimised
away.
- Sprinkle __predict_{false,true} in all the right places.
- G/C some excess debugging baggage.
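  A hedged sketch of a pcb_onfault-based badaddr_read(), as referenced
  in the list above.  The real routine sets pcb_onfault to a small
  assembler recovery stub; onfault_setjmp() below is a hypothetical
  stand-in so the control flow can be shown in C.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <machine/pcb.h>

    extern int onfault_setjmp(struct pcb *);    /* hypothetical helper */

    int
    badaddr_read(void *addr, size_t size, void *rptr)
    {
        struct pcb *pcb = curpcb;
        int rv = 0;

        if (onfault_setjmp(pcb) != 0) {
            /* the data abort handler routed us here: bad address */
            pcb->pcb_onfault = NULL;
            return 1;
        }

        /* attempt the access; a fault is caught via pcb_onfault */
        switch (size) {
        case sizeof(uint8_t):
            *(uint8_t *)rptr = *(volatile uint8_t *)addr;
            break;
        case sizeof(uint32_t):
            *(uint32_t *)rptr = *(volatile uint32_t *)addr;
            break;
        default:
            rv = 1;
            break;
        }

        pcb->pcb_onfault = NULL;
        return rv;          /* 0 == address responded */
    }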
needless duplication.
Additionally, merge AST handling into the same code.
exception.S and the generic irq_dispatch.S routines have been updated
to use the macros.
XXX: I have patches for the non-generic IRQ dispatch routines, but they
need testing by someone with hardware.
flushed on every context switch as an indicator that a mapping is
not resident in the cache.
Instead, use the per-pmap flag maintained by the cpu_switch/pmap code.
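  A hedged sketch of the pattern meant above; the cache-state field
  name follows the arm32 pmap of the time but should be treated as
  illustrative rather than a quote of the real code.

    #include <uvm/uvm_extern.h>
    #include <machine/pmap.h>
    #include <machine/cpufunc.h>

    static __inline void
    pmap_clean_range_if_cached(pmap_t pm, vaddr_t va, vsize_t len)
    {
        /*
         * Only clean the D-cache if the cpu_switch/pmap bookkeeping
         * says this pmap's mappings may actually be resident in the
         * cache; don't assume a context switch flushed it for us.
         */
        if (pm->pm_cstate.cs_cache_d)
            cpu_dcache_wbinv_range(va, len);
    }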