Commit Graph

138 Commits

Author SHA1 Message Date
perry 5f1c88d70d Remove leading __ from __(const|inline|signed|volatile) -- it is obsolete. 2005-12-24 20:06:46 +00:00
christos 95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
scw 5536ecb20f Implement pmap_collect() for arm32. 2005-12-10 21:19:57 +00:00
cube 9f1eb3e30f Change all archs that did:
#define clockframe somethingelse

to:

struct clockframe {
	struct somethingelse cf_se;
};

and change access macros accordingly.

That means that, at least for this particular issue, things will no
longer blow up if you don't have the actual definition of struct
clockframe before including systm.h.
2005-08-11 20:32:55 +00:00
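
A minimal sketch of the shape of the change above, assuming a
machine-dependent struct trapframe and an illustrative CLKF_PC()
access macro (this is not the committed code):

	struct trapframe {		/* illustrative MD frame */
		unsigned long tf_pc;
		unsigned long tf_spsr;
	};

	struct clockframe {		/* was: #define clockframe trapframe */
		struct trapframe cf_tf;
	};

	/* access macros now reach through the wrapper member */
	#define CLKF_PC(cf)	((cf)->cf_tf.tf_pc)

Since systm.h only passes struct clockframe pointers around, a forward
declaration is enough there; consumers no longer need the full MD
definition first.
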
uwe fd4f28440c Catch up with constification. 2005-06-02 19:33:04 +00:00
joff 193578fd49 Bump UPAGES back down to 8KB now that the real issue with ep93xx interrupt handling has been found 2004-12-29 04:47:44 +00:00
joff 23fe936a38 Bump default U-area size from 8KB to 64KB; 8KB is too little to even successfully boot a tsarm SBC 2004-12-23 04:39:41 +00:00
scw 9950f14938 Always disable interrupts at the start of DO_AST_AND_RESTORE_ALIGNMENT_FAULTS.
This addresses #2 of port-arm/23581 by Richard Earnshaw.

Many thanks to Richard for spotting the cause of this problem.
2004-04-27 07:13:16 +00:00
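
The fix above is in assembly, but its logic can be rendered in C
roughly as follows (every name here -- ast_pending, intr_disable(),
intr_enable(), ast() -- is illustrative, not the kernel's):

	extern volatile int ast_pending;
	void intr_disable(void);
	void intr_enable(void);
	void ast(void);

	void
	exception_return_path(void)
	{
		for (;;) {
			intr_disable();		/* close the window before checking */
			if (!ast_pending)
				break;		/* return to user with IRQs off */
			intr_enable();
			ast();			/* handle it, then re-check */
		}
	}

If interrupts were still enabled at the check, an interrupt could post
an AST just after it, and the return to user mode would miss it.
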
scw fbdf861fac The last cpsr_all change was misguided. Just use cpsr_c wherever possible. 2003-12-15 09:18:21 +00:00
scw 7cf5104b5a - For consistency, use cpsr_all instead of cpsr.
- Make sure IRQs are enabled before handling ASTs.
2003-12-01 08:48:33 +00:00
scw 571f89c4ad Slight re-org of the alignment/AST exit macro to better mimic the
original behaviour with respect to cpsr/I32_bit handling.
2003-11-14 16:57:28 +00:00
scw 7a55b436b2 Move the alignment fault enable/disable code into macros to avoid
needless duplication.

Additionally, merge AST handling into the same code.

exception.S and the generic irq_dispatch.S routines have been updated
to use the macros.

XXX: I have patches for the non-generic IRQ dispatch routines, but they
need testing by someone with hardware.
2003-10-30 08:57:24 +00:00
scw 52c15bbd20 Don't drop to spl0 in cpu_switch/cpu_switchto. Do it in the idle loop
instead.

With this change, we no longer need to save the current interrupt level
in the switchframe. This is no great loss since both cpu_switch and
cpu_switchto are always called at splsched, so the process' spl is
effectively saved somewhere in the callstack.

This fixes an evbarm problem reported by Allen Briggs:

        lwp gets into sa_switch -> mi_switch with newl != NULL
            when it's the last element on the runqueue, so it
            hits the second bit of:
                if (newl == NULL) {
                        retval = cpu_switch(l, NULL);
                } else {
                        remrunqueue(newl);
                        cpu_switchto(l, newl);
                        retval = 0;
                }

        mi_switch calls remrunqueue() and cpu_switchto()

        cpu_switchto unlocks the sched lock
        cpu_switchto drops CPU priority
        softclock is received
        schedcpu is called from softclock
        schedcpu hits the first if () {} block here:
                if (l->l_priority >= PUSER) {
                        if (l->l_stat == LSRUN &&
                            (l->l_flag & L_INMEM) &&
                            (l->l_priority / PPQ) != (l->l_usrpri / PPQ)) {
                                remrunqueue(l);
                                l->l_priority = l->l_usrpri;
                                setrunqueue(l);
                        } else
                                l->l_priority = l->l_usrpri;
                }

        Since mi_switch has already run remrunqueue, the LWP has been
            removed, but it's not been put back on any queue, so the
            remrunqueue panics.
2003-10-23 08:59:10 +00:00
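
A hedged sketch of the new arrangement: the switch primitives run
entirely at splsched(), and only the idle loop drops the IPL.
Everything except spl0()/splsched() is an illustrative name:

	void spl0(void);
	void splsched(void);
	int  runnable_lwp_exists(void);		/* illustrative */
	void switch_to_next_lwp(void);		/* illustrative */

	void
	idle_loop(void)
	{
		for (;;) {
			spl0();			/* interrupts (and softclock) run here */
			while (!runnable_lwp_exists())
				;		/* wait for work */
			splsched();		/* block the scheduler clock ... */
			switch_to_next_lwp();	/* ... before touching run queues */
		}
	}

Because cpu_switch()/cpu_switchto() no longer lower the IPL, schedcpu
can no longer run between remrunqueue() and the actual switch, which
closes the panic window described above.
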
scw 063066a055 On Xscale, define PMAP_UAREA() and use it to tweak uarea mappings so
they use the mini D$.

This results in a small performance boost on xscale platforms, since
flushing the main cache on a context switch won't affect the kernel
stack/pcb.
2003-10-13 20:50:34 +00:00
rearnsha d4e1e335e8 Add support for ARM10 class processors. 2003-09-06 09:10:46 +00:00
bsh d8193564ca Protect with #ifndef _LOCORE so that assembler code can share
definitions in this file, such as PMAP_DOMAIN_KERNEL.
2003-06-18 02:58:09 +00:00
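
The pattern, sketched (PMAP_DOMAIN_KERNEL's value here is
illustrative):

	#define PMAP_DOMAIN_KERNEL	0	/* visible to .S files too */

	#ifndef _LOCORE
	/* C-only material the assembler cannot digest: types, prototypes */
	struct pmap;
	void pmap_example_proto(struct pmap *);	/* illustrative */
	#endif /* !_LOCORE */

Assembly sources define _LOCORE before including the header, so they
see only the bare constants.
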
thorpej 452a8fdae2 Rename IPL_IMP -> IPL_VM. 2003-06-16 20:00:56 +00:00
thorpej cf8a25bdfc Add another devmap routine that allows bootstrap code to register
a devmap reflecting mappings that are created by really early
bootstrap code before pmap_devmap_bootstrap() is called.
2003-06-15 18:18:16 +00:00
thorpej 87d5bba5b3 Replace the ad-hoc "section mapping table" for static device mappings
with a more generic "devmap" structure that can also handle mappings
made with large and small pages.  Add new pmap routines to enter these
mappings during bootstrap (and "remember" the devmap), and routines to
look up the static mappings once the kernel is running.
2003-06-15 17:45:21 +00:00
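
A sketch of what such a devmap table looks like; the field names are
modeled on the description above but should be treated as
illustrative, and the addresses are made up:

	typedef unsigned long vaddr_t, paddr_t, psize_t;	/* stand-ins */

	struct pmap_devmap {
		vaddr_t	pd_va;		/* KVA of the static mapping */
		paddr_t	pd_pa;		/* physical base of the device */
		psize_t	pd_size;	/* pmap picks section/large/small pages */
		int	pd_prot;	/* VM_PROT_* bits */
		int	pd_cache;	/* cacheability */
	};

	static const struct pmap_devmap board_devmap[] = {
		/* va            pa            size      prot        cache */
		{ 0xfd000000UL,  0x80000000UL, 0x100000, 0x3 /*RW*/, 0 /*off*/ },
		{ 0, 0, 0, 0, 0 },	/* zero size terminates the table */
	};

Board start-up code hands a table like this to the bootstrap routine,
and the pmap both enters the mappings and remembers them for the
later lookups mentioned above.
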
thorpej 1963a8521c Use virtual_avail and virtual_end to compute the size of the available
kernel VM space for VM_MAX_KERNEL_BUF, and move the definition into
generic ARM code.
2003-05-22 05:25:48 +00:00
thorpej c8bed530ac Remove #ifdefs supporting the old pmap, switching fully to the new. 2003-05-21 18:04:42 +00:00
thorpej 46ffc57a80 VM_{MIN,MAX}* are now the same for ARM32_PMAP_NEW with both new and
old VM layout, so merge the two cases.
2003-05-04 01:54:32 +00:00
thorpej bbba90a2fb Don't expose KERNEL_TEXT_BASE outside of board-specific code. This gives
individual board start-up code more flexibility about where the kernel
starts in the kernel address space.
2003-05-03 18:25:28 +00:00
thorpej aae7e372b7 Reduce differences between ARM32_NEW_VM_LAYOUT and not; always pass
the start and end of the kernel managed virtual address space to
pmap_bootstrap() in the new pmap.
2003-05-03 03:49:03 +00:00
thorpej 79a7aff0fd Don't need to reserve a page of space before KERNEL_BASE in the
ARM32_NEW_VM_LAYOUT case.
2003-05-02 23:26:47 +00:00
thorpej 4eeee795e8 Eliminate PTE_BASE and the PT-PT completely in the ARM32_PMAP_NEW case.
Also in the ARM32_PMAP_NEW case, reclaim the USPACE bytes of wasted space
at the top of the user address space that hasn't been needed for a very,
very long time.
2003-05-02 23:22:33 +00:00
scw 6b08b996ba Fix the bug reported by Richard Earnshaw in port-arm32/21349.
Make sure to check the access permissions before doing
ref/mod/domain fixups. This is particularly important
on machines with ARM_VECTORS_LOW.
2003-04-28 15:57:23 +00:00
thorpej bbef46a7e9 Some ARM32_PMAP_NEW-related cleanup:
* Define a new "MMU type", ARM_MMU_SA1.  While the SA-1's MMU is basically
  compatible with the generic, the SA-1 cache does not have a write-through
  mode, and it is useful to have an indication of this.
* Add a new PMAP_NEEDS_PTE_SYNC indicator, and try to evaluate it at
  compile time.  We evaluate it like so:
  - If SA-1-style MMU is the only type configured -> 1
  - If SA-1-style MMU is not configured -> 0
  - Otherwise, defer to a run-time variable.
  If PMAP_NEEDS_PTE_SYNC might evaluate to true (SA-1 only or run-time
  check), then we also define PMAP_INCLUDE_PTE_SYNC so that e.g. assembly
  code can include the necessary run-time support.  PMAP_INCLUDE_PTE_SYNC
  largely replaces the ARM32_PMAP_NEEDS_PTE_SYNC manual setting Steve
  included with the original new pmap.
* In the new pmap, make pmap_pte_init_generic() check to see if the CPU
  has a write-back cache.  If so, init the PT cache mode to C=1,B=0 to get
  write-through mode.  Otherwise, init the PT cache mode to C=1,B=1.
* Add a new pmap_pte_init_arm8().  Old pmap, same as generic.  New pmap,
  sets page table cacheability to 0 (ARM8 has a write-back cache, but
  flushing it is quite expensive).
* In the new pmap, make pmap_pte_init_arm9() reset the PT cache mode to
  C=1,B=0, because the write-back check in generic gets it wrong for ARM9:
  we use write-through mode all the time on ARM9 right now.  (What
  this really tells me is that the test for write-through cache is less
  than perfect, but we can fix that later.)
* Add a new pmap_pte_init_sa1().  Old pmap, same as generic.  New pmap,
  does generic initialization, then resets page table cache mode to
  C=1,B=1, since C=1,B=0 does not produce write-through on the SA-1.
2003-04-22 00:24:48 +00:00
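
The compile-time evaluation described above can be sketched like this
(the ARM_MMU_* counts come from the kernel config machinery; treat the
exact spellings as illustrative):

	#if (ARM_MMU_SA1 == 1) && (ARM_MMU_GENERIC + ARM_MMU_XSCALE) == 0
	#define	PMAP_NEEDS_PTE_SYNC	1	/* SA-1 only: always sync */
	#define	PMAP_INCLUDE_PTE_SYNC
	#elif (ARM_MMU_SA1 == 0)
	#define	PMAP_NEEDS_PTE_SYNC	0	/* no SA-1 configured: never */
	#else
	extern int pmap_needs_pte_sync;		/* mixed kernel: run-time check */
	#define	PMAP_NEEDS_PTE_SYNC	pmap_needs_pte_sync
	#define	PMAP_INCLUDE_PTE_SYNC
	#endif
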
thorpej 8896997409 Gah, fix *another* typo. 2003-04-18 23:45:50 +00:00
thorpej 21e8a3bc0f Oops, fix typo. 2003-04-18 22:44:54 +00:00
thorpej 08330568d0 Define two new macros to test if a mapping is mappable with an L1 Section
mapping or an L2 Large Page mapping.
2003-04-18 22:39:56 +00:00
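
These tests boil down to alignment-and-size checks; a sketch, using
the standard ARM granularities (macro spellings illustrative):

	#define	L1_S_SIZE	0x00100000	/* 1MB section */
	#define	L2_L_SIZE	0x00010000	/* 64KB large page */

	#define	L1_S_MAPPABLE_P(va, pa, size)	\
		((((va) | (pa)) & (L1_S_SIZE - 1)) == 0 && (size) >= L1_S_SIZE)

	#define	L2_L_MAPPABLE_P(va, pa, size)	\
		((((va) | (pa)) & (L2_L_SIZE - 1)) == 0 && (size) >= L2_L_SIZE)
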
scw 41a1932e58 Add the generic arm32 bits of the new pmap, contributed by Wasabi Systems.
Some features of the new pmap are:

 - It allows L1 descriptor tables to be shared efficiently between
   multiple processes. A typical "maxusers 32" kernel, where NPROC is set
   to 532, requires 35 L1s. A "maxusers 2" kernel runs quite happily
   with just 4 L1s. This completely solves the problem of running out
   of contiguous physical memory for allocating new L1s at runtime on a
   busy system.

 - Much improved cache/TLB management "smarts". This change ripples
   out to encompass the low-level context switch code, which is also
   much smarter about when to flush the cache/TLB, and when not to.

 - Faster allocation of L2 page tables and associated metadata thanks,
   in part, to the pool_cache enhancements recently contributed to
   NetBSD by Wasabi Systems.

 - Faster VM space teardown due to accurate referenced tracking of L2
   page tables.

 - Better/faster cache-alias tracking.

The new pmap is enabled by adding options ARM32_PMAP_NEW to the kernel
config file, and making the necessary changes to the port-specific
initarm() function. Several ports have already been converted and will
be committed shortly.
2003-04-18 11:08:24 +00:00
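
Enabling it is a one-line config change plus the initarm() updates;
for example (file path illustrative):

	# sys/arch/evbarm/conf/MYBOARD
	options 	ARM32_PMAP_NEW
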
thorpej a0aee79a1d Add the ability for pool caches to cache the physical address of
objects.  Clients of the pool_cache API must consistently use
the "paddr" variants or not, otherwise behavior is undefined.

Enable this on Alpha, ARM, MIPS, and x86.  Other platforms must
define POOL_VTOPHYS() in the appropriate manner in order to enable
the feature.

Part 1 of a series of simple patches contributed by Wasabi Systems
to improve network performance.
2003-04-09 18:22:13 +00:00
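
A usage sketch of the paddr-aware calls as described (the signatures
here are reconstructions, not authoritative):

	typedef unsigned long paddr_t;
	struct pool_cache;
	void	*pool_cache_get_paddr(struct pool_cache *, int, paddr_t *);
	void	 pool_cache_put_paddr(struct pool_cache *, void *, paddr_t);

	void
	example(struct pool_cache *pc, int flags)
	{
		paddr_t pa;
		void *obj;

		/* the cache returns the physical address along with the
		 * object, saving a later pmap lookup (useful for DMA) */
		obj = pool_cache_get_paddr(pc, flags, &pa);

		/* ... use obj and pa ... */

		pool_cache_put_paddr(pc, obj, pa);	/* must stay paired */
	}
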
thorpej cc2c493bc4 Use PAGE_SIZE rather than NBPG. 2003-04-02 07:35:54 +00:00
chris c9033077aa Garbage-collect pmap_map; the last (and only?) use has been removed. 2003-03-23 15:59:23 +00:00
thorpej 78ea2dd367 Use __LDPGSZ (which must be == USRTEXT) as the text address for a.out
executables, and eliminate the USRTEXT constant, which was only used
by the a.out exec code.
2002-12-10 05:14:24 +00:00
lukem 0635de35a3 Remove KDIR=, since SYS_INCLUDE=symlinks and KDIR are not supported any more. 2002-11-26 23:30:07 +00:00
chris 747fcfc089 Fix the PTE_FLUSH_RANGE macro; it should have had a cnt parameter. 2002-11-12 09:46:37 +00:00
bjh21 a531a4ae8e Undo recent cpu_switch register usage changes in order to decrease nathanw_sa
merge pain.
2002-10-19 00:10:53 +00:00
bjh21 3d1b6867f0 In cpu_switch(), stack more registers at the start of the function,
and hence save fewer into the PCB.  This should give me enough free
registers in cpu_switch to tidy things up and support MULTIPROCESSOR
properly.  While we're here, make the stacked registers into an
APCS stack frame, so that DDB backtraces through cpu_switch() will
work.

This also affects cpu_fork(), which has to fabricate a switchframe and
PCB for the new process.
2002-10-18 21:32:57 +00:00
bjh21 441e8907fe Switch to using the MI C versions of setrunqueue() and remrunqueue().
GCC produces almost exactly the same instructions as the hand-assembled
versions, albeit in a different order.  It even found one place where it
could shave an instruction off.  Its insistence on creating a stack frame might slow
things down marginally, but not, I think, enough to matter.
2002-10-15 20:53:38 +00:00
thorpej 7bbf61fd89 Add support for restartable atomic sequences on 26-bit ARM.
Compile-tested only.

Now that all ARM systems have RAS, move __HAVE_RAS from arm/arm32/types.h
to arm/types.h.
2002-10-07 02:48:38 +00:00
chs c081614ea2 It really helps to get the stub right before cutting and pasting it 27 times.
Alas, I did not.  Doh.
2002-09-22 07:53:39 +00:00
chs 55e1f79335 add pmap_remove_all() hook (empty on most platforms so far). 2002-09-22 07:17:08 +00:00
simonb eb4524608c Only need to define __HAVE_MD_RUNQUEUE once here... 2002-09-22 05:56:32 +00:00
gmcgarry dca80f08fd Add __HAVE_MD_RUNQUEUE flag for MD code to override MI run queue primitives. 2002-09-22 04:11:32 +00:00
thorpej 212cb9f78d Add machine-dependent bits of RAS for arm32. 2002-08-31 03:07:32 +00:00
thorpej aafe6e006c Define macros describing the 4M super-sections that our pmap
actually uses (since we allocate PT pages in 4K chunks, rather
than 1K chunks).
2002-08-24 02:48:50 +00:00
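
A sketch of the sort of constants this implies (names and exact
definitions illustrative): a 4KB PT page holds four 1KB L2 tables, so
each allocation maps 4MB of virtual space.

	#define	L1_SUPERSEC_SIZE	(4UL * 1024 * 1024)	/* VA per PT page */
	#define	L1_SUPERSEC_OFFSET	(L1_SUPERSEC_SIZE - 1)
	#define	L1_SUPERSEC_FRAME	(~L1_SUPERSEC_OFFSET)	/* base mask */
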
thorpej 77a6866508 Enable caching on kernel and user page tables. This saves having
to do uncached memory access during VM operations (which can be
quite expensive on some CPUs).

We currently write back PTEs as soon as they're modified; there is
some room for optimization (to write them back in larger chunks).
For PTEs in the APTE space (i.e. PTEs for pmaps that describe another
process's address space), PTEs must also be evicted from the cache
completely (PTEs in PTE space will be evicted during a context switch).
2002-08-24 02:16:30 +00:00
thorpej 6cc7c1c1ff * Add PTE_SYNC() and PTE_SYNC_RANGE() macros. These don't actually do
anything yet.
* Use PTE_SYNC() and PTE_SYNC_RANGE() in some obvious places, i.e.
  where vtopte() is used.
2002-08-22 01:13:53 +00:00
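
Once PTEs live in cacheable memory (see the caching commit above),
these placeholders grow into write-backs of the cache lines holding
the PTEs, so the hardware table walker sees the update.  A sketch;
only cpu_dcache_wb_range() is a real kernel hook, the rest is
illustrative:

	typedef unsigned int pt_entry_t;
	void cpu_dcache_wb_range(unsigned long va, unsigned long size);

	#define	PTE_SYNC(pte)						\
		cpu_dcache_wb_range((unsigned long)(pte), sizeof(pt_entry_t))

	#define	PTE_SYNC_RANGE(pte, cnt)				\
		cpu_dcache_wb_range((unsigned long)(pte),		\
		    (cnt) * sizeof(pt_entry_t))
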