Commit Graph

468 Commits

Author SHA1 Message Date
bjh21 85386dce51 Use cpu_number() to find curpcb rather than assuming we're on CPU 0. 2002-10-13 14:24:09 +00:00
bjh21 75248cc7a1 It appears that MI code requires ci_cpuid to be the CPU number of the CPU
in question, whereas the ARM code was using it to hold the model
identification.  To fix this, rename:

ci_cpuid -> ci_arm_cpuid
ci_cputype -> ci_arm_cputype (for consistency)
ci_cpurev -> ci_arm_cpurev (ditto)
ci_cpunum -> ci_cpuid

This makes top(1) give correct CPU numbers in its "STATE" column (all 0 for
now).
2002-10-13 12:24:57 +00:00
bjh21 d8fd346734 Remember the location of each CPU's idle PCB in struct cpu_info.
Move allocation of the idle PCB from hydra.c to cpu.c and add some
extra initialisation from cpu_fork().
2002-10-12 21:06:46 +00:00
bjh21 a7385c575f Move curpcb into struct cpu_info in MULTIPROCESSOR kernels. 2002-10-12 12:20:08 +00:00
bjh21 6ae19cc8cd Use ADR rather than an explicit ADD from PC. 2002-10-09 22:28:03 +00:00
bjh21 67ba9f99bf Remove an outdated register assignment comment. 2002-10-08 23:48:24 +00:00
bjh21 3832819227 Minimal changes to allow a kernel with "options MULTIPROCESSOR" to compile
and boot multi-user on a single-processor machine.  Many of these changes
are wildly inappropriate for actual multi-processor operation, and correcting
this will be my next task.
2002-10-05 13:46:57 +00:00
bjh21 b828507087 constify various string tables. 2002-10-01 22:33:10 +00:00
provos 0f09ed48a5 remove trailing \n in panic(). approved perry. 2002-09-27 15:35:29 +00:00
thorpej 71404bb533 Don't include <sys/map.h>. 2002-09-25 22:21:01 +00:00
chs f01058c887 rename the existing pmap_remove_all() here to pmap_page_remove()
(ala the x86 pmap) to avoid conflicting with the new pmap interface
function of the same name.
2002-09-22 07:56:57 +00:00
nathanw 2cab03d64a In the fault handler, record growth of the stack, so that core dumps
actually contain the entire stack.
2002-09-21 00:29:04 +00:00
gehenna 77a6b82b27 Merge the gehenna-devsw branch into the trunk.
This merge changes the device switch tables from static arrays to
tables dynamically generated by config(8).

- All device switches are defined as constant structures in device drivers.

- A new grammar, ``device-major'', is introduced to ``files''.

	device-major <prefix> char <num> [block <num>] [<rules>]

- All device major numbers must be listed in the port-dependent majors.<arch>
  using this grammar.

- A new naming convention is added.
  The name of the device switch must be <prefix>_[bc]devsw for auto-generation
  of device switch tables.

- Backward compatibility for loading block/character device switches
  via the LKM framework is broken. This is necessary to convert
  between block/character device majors and device names at runtime.

- The restriction on assigning device majors via LKM is completely removed.
  We don't need to reserve LKM entries for dynamic loading of device switches.

- At compile time, the list of device major numbers is packed into the kernel,
  and the LKM framework refers to it to assign device major numbers dynamically.
2002-09-06 13:18:43 +00:00
jdolecek 8839507f5b whitespace fix past __KERNEL_RCSID() 2002-09-05 18:34:00 +00:00
thorpej 212cb9f78d Add machine-dependent bits of RAS for arm32. 2002-08-31 03:07:32 +00:00
thorpej 139cdc3125 Make nbuf, nswbuf, and bufpages unsigned. Make all operations on these
variables unsigned, and update places where their values are printed.
2002-08-25 20:21:33 +00:00
thorpej ffdedb6d80 In pmap_map_in_l1() and pmap_unmap_in_l1(), make sure that the VA
that is passed in is already aligned to a 4M super-section.
2002-08-24 03:10:40 +00:00
thorpej d158b3a37a When we allocate a PTP, make sure the offset we specify is for
the 4M super-section that the PTP will map, not some random 1M
chunk of it.  This gives the PTP hint code a much better chance
of working properly, and allows us to tidy up the code that
flushes a PTP from the cache in pmap_destroy().
2002-08-24 02:50:53 +00:00
thorpej 77a6866508 Enable caching on kernel and user page tables. This saves having
to do uncached memory access during VM operations (which can be
quite expensive on some CPUs).

We currently write-back PTEs as soon as they're modified; there is
some room for optimization (to write them back in larger chunks).
For PTEs in the APTE space (i.e. PTEs for pmaps that describe another
process's address space), PTEs must also be evicted from the cache
completely (PTEs in PTE space will be evicted during a context switch).
2002-08-24 02:16:30 +00:00
thorpej 6cc7c1c1ff * Add PTE_SYNC() and PTE_SYNC_RANGE() macros. These don't actually do
anything yet.
* Use PTE_SYNC() and PTE_SYNC_RANGE() in some obvious places, i.e.
  where vtopte() is used.
2002-08-22 01:13:53 +00:00
thorpej 574a9cc019 Use a pool cache for PT-PTs. 2002-08-21 21:22:52 +00:00
thorpej 5fddbbe3d5 Do cached memory access to L1 tables, making sure to write-back the
cache after any L1 table modifications.
2002-08-21 18:34:31 +00:00
thorpej 003b8e8bca More local label fixups. 2002-08-17 16:36:31 +00:00
briggs 20267a208f Do not trim 'offset' from 'len' in _bus_dmamap_sync_linear(). 2002-08-17 05:14:10 +00:00
briggs d86c947b8c Inline bus_dma_inrange() and bus_dmamap_sync_*(). 2002-08-17 01:15:15 +00:00
thorpej 50fe583069 Must ... micro ... optimize!
* Save an instruction in the transition from idle to have-process-to-
  switch-to, and eliminate two instructions that cause datadep-stalls
  on StrongARM and XScale (one in each idle block).
* Rearrange some other instructions to avoid datadep-stalls on StrongARM
  and XScale.
* Since cpu_do_powersave == 0 is by far the common case, avoid a
  pipeline flush by reordering the two idle blocks.
2002-08-17 01:08:21 +00:00
thorpej ebff575bc3 * Add a new machdep.powersave sysctl, which controls the use of
the CPU's "sleep" function in the idle loop.
* Default all CPUs to not use powersave, except for the PDA processors
  (SA11x0 and PXA2x0).

This significantly reduces interrupt latency in high-performance
applications (and was good to squeeze another ~10% out of an XScale
IOP on a Gig-E benchmark).
2002-08-16 15:25:53 +00:00
briggs fa81e3d75e * Use local label names (.Lfoo vs. Lfoo or foo)
* When moving from cpsr, use "cpsr" instead of "cpsr_all" (which is
   provided, but doesn't make sense since mrs doesn't support fields
   like msr does).
2002-08-15 01:37:01 +00:00
thorpej ad73349331 We only need to modify the CPSR's control field, so use cpsr_c rather
than cpsr_all.
2002-08-14 23:23:06 +00:00
chris f4c605201d Tweak asm to avoid a couple of stalls. 2002-08-14 23:07:36 +00:00
thorpej b45159bad0 When doing PREREAD sync operations, if the start and end addresses
of the range are aligned to a cacheline boundary, then do a dcache-inv
operation, rather than a dcache-wbinv operation.

XXX It could be a little smarter (align using wbinv, inv, then finish
up using wbinv), but even this simple change is good for a nearly 40%
improvement in my test case on XScale.
2002-08-14 22:56:55 +00:00
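As an illustrative aside, the alignment test described above boils down to something like the sketch below (arm_dcache_align_mask and the cpu_dcache_*() wrapper names are assumptions, not the committed code):

	/* Sketch only: if both the start address and the length are
	 * cache-line aligned, a plain invalidate is sufficient for
	 * PREREAD; otherwise fall back to write-back + invalidate. */
	if (((va | len) & arm_dcache_align_mask) == 0)
		cpu_dcache_inv_range(va, len);
	else
		cpu_dcache_wbinv_range(va, len);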
briggs a957deca48 G/c cowfault. 2002-08-14 21:52:36 +00:00
thorpej 203dd6b325 * Add an ARM32_DMAMAP_COHERENT flag to indicate that a loaded DMA
map contains "coherent" (non-cached in ARM-land) mappings.
* Set ARM32_DMAMAP_COHERENT in the map at the start of a load operation,
  and clear it in _bus_dmamap_load_buffer() if we encounter any cacheable
  mappings.
* In _bus_dmamap_sync(), if the map is marked COHERENT, skip any cache
  flushing.
2002-08-14 20:50:37 +00:00
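As an illustrative aside, the effect on the sync path is roughly the fragment below; the _dm_flags field name is an assumption used only for illustration:

	/* Sketch only: a map loaded entirely from non-cached ("coherent")
	 * mappings needs no cache maintenance at sync time. */
	if (map->_dm_flags & ARM32_DMAMAP_COHERENT)
		return;
	/* ... otherwise write back / invalidate the affected range ... */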
thorpej d00a4a068d When making a mapping "coherent", clear *ALL* the cache bits, not
just L2_B and L2_C.
2002-08-14 19:21:50 +00:00
thorpej 98d6ec0b89 Add the brutal hack that allows us to limp along using the read/write
cache line allocation policy on XScale CPUs: in pmap_enter(), if the
pmap is the kernel pmap, clear the X-bit in the PTE, thus disabling
read/write-allocate for managed kernel mappings.

Yes, this is ugly.  But it makes userland code run with r/w-allocate,
which is a huge improvement on systems with low core memory performance.
2002-08-13 03:36:30 +00:00
thorpej d7be866fc8 Rearrange the beginning of cpu_switch() slightly to reduce data-dep
stalls on StrongARM and XScale.
2002-08-12 21:00:12 +00:00
bjh21 664bea62e3 __KERNEL_RCSID 2002-08-12 20:19:04 +00:00
bjh21 ca86069053 When pcb_onfault is set, pass the error code we get from uvm_fault()
(or EFAULT if we never called uvm_fault) to the onfault handler in R0,
in case it wants to use it.
2002-08-12 20:17:37 +00:00
thorpej 3d6f9f69ab Make a slight tweak to register usage to save an instruction. 2002-08-12 19:33:01 +00:00
bjh21 657216ff0f Remove a file which was accidentally resurrected. 2002-08-11 23:20:11 +00:00
bjh21 206c97ccc2 Move the arm32 copystr.S from arch/arm/arm32 to arch/arm/arm and add support
for 26-bit modes (basically saving R14 when we might get a page fault).
Use it on all ARM architectures now.
2002-08-11 23:17:24 +00:00
bjh21 b6228a7d06 New, improved version of copyin(), copyout(), and kcopy() by Allen Briggs.
This version works on both 26-bit and 32-bit machines.  For large copies,
it's up to three times as fast as the old arm32 version and five times as
fast as the old arm26 version.  For small copies it seems to be even faster
(getrusage() is apparently over ten times faster on an ARM610).

Hooray for Allen!
2002-08-11 21:19:12 +00:00
thorpej 76730bd0cc Tidy up pmap_clean_page() a little, and reenable some code that was
disabled previously: Skip cleaning mappings which are read-only, because
the pmap (now) does clean pages on a r/w -> r/o transition.
2002-08-10 00:48:35 +00:00
thorpej 006a578742 Clean up some warts in pmap_protect(). 2002-08-10 00:11:51 +00:00
thorpej 15a5e8f238 cpu_fork(): If PMCs are not enabled in the parent, clear the machine-
dependent PMC state in the child.
2002-08-09 23:44:17 +00:00
thorpej 6ce0a206cc Add an XSCALE_CACHE_READ_WRITE_ALLOCATE option for people who
want to play fast-and-loose.
2002-08-09 21:49:09 +00:00
thorpej 884bc64586 Add some code, conditional on PMAP_ALIAS_DEBUG, that can be used to
hunt for virtual aliases between managed (pmap_enter) and non-managed
(pmap_kenter_pa) mappings.
2002-08-09 18:22:59 +00:00
thorpej c979315325 Reduce stalls on StrongARM and XScale by waiting one insn before using
the result of a load.
2002-08-09 06:18:24 +00:00
thorpej afe3274eed Use ldrbt/strbt. Some other random cleanup. 2002-08-09 06:03:02 +00:00
thorpej 410785d6f0 Use ldrt/strt. 2002-08-09 04:13:20 +00:00
thorpej fdcc8560e4 Speed up bcopy_page() on the XScale slightly by using the "pld"
insn (prefetch) to look-ahead to the next chunk while we copy the
current chunk.

This could probably use a bit more tuning.
2002-08-07 16:21:29 +00:00
briggs 0b956d0b8b Implement pmc(9) -- An interface to hardware performance monitoring
counters.  These counters do not exist on all CPUs, but where they
do exist, can be used for counting events such as dcache misses that
would otherwise be difficult or impossible to instrument by code
inspection or hardware simulation.

pmc(9) is meant to be a general interface.  Initially, the Intel XScale
counters are the only ones supported.
2002-08-07 05:14:47 +00:00
thorpej 26bc8b27f4 - pmap_remove(): unmap the PTEs *after* we have finished with the
page tables.
- pmap_enter(): if making a mapping for the same PA rw->ro, write-back
  the cache before doing so.
- pmap_clearbit(): if revoking REF on a page, make sure to wbinv the
  cache if the page has write permission, else inv the cache if the page's
  PTE is valid (XXX we actually wbinv in this case, as well, due to lack
  of idcache_inv_range()).  Only flush the TLB if the PTE changed.
2002-08-06 21:43:51 +00:00
thorpej 0886c8cc0f Rearrange the exit path so that we don't do an idcache_wbinv_all *twice*
when a process exits.
2002-08-06 19:20:29 +00:00
thorpej 62d83d05b1 * Pass proc0 to switch_exit(), to make this a little more like the
nathanw_sa branch.
* In switch_exit(), set the outgoing-proc register to NULL (rather than
  proc0) so that we actually use the "exiting process" optimization in
  cpu_switch().
2002-08-06 17:44:35 +00:00
thorpej f7328ddbe7 Add dmoverio. 2002-08-02 00:50:25 +00:00
thorpej dce4476374 Overhaul how DMA ranges work in the ARM bus_dma implementation.
A new "arm32_dma_range" structure now describes a DMA window, with
a system address base, bus address base, and length.  In addition to
providing info about which memory regions are legal for DMA, the new
structure provides address translation support, as well.

As before, if a tag does not list any ranges, then all addresses are
considered valid, and no DMA address translation is performed.

This allows us to remove a large chunk of code which was duplicated and
tweaked slightly (to do the address translation) from the stock ARM
bus_dma in the XScale IOP and ARM Integrator ports.

Test compiled on all ARM platforms, test booted on Intel IQ80321 and Shark.
2002-07-31 17:34:23 +00:00
thorpej 79af00bddb Move the calls to uvm_page_physload() out of pmap_bootstrap() and
into platform-specific initialization code, giving platform-specific
code control over which free list a given chunk of memory gets put
onto.

Changes are essentially mechanical.  Test compiled for all ARM
platforms, test booted on Intel IQ80321 and Shark.

Discussed some time ago on port-arm.
2002-07-31 00:20:51 +00:00
thorpej d3aa5664b7 Move the uvm_setpagesize() call to platform-dependent code in preparation
for other changes to pmap_bootstrap().
2002-07-30 16:16:38 +00:00
thorpej 3dcad9ac9e Don't use pmap_kenter_pa() in pmap_map(); doing so causes an assertion
failure in pmap_kenter_pa().
2002-07-30 16:07:23 +00:00
thorpej 3ab4598cc0 Add sysmon at cdev 101. 2002-07-29 18:26:58 +00:00
thorpej 7b652cb939 Change the way that DMA map syncs are done. Instead of remembering
the virtual address for each DMA segment, just cache a pointer to the
original buffer/buftype used to load the DMA map, and use that.  This
lets us shrink the bus_dma_segment_t down from 12 bytes to 8, and the
cache flushing is also more efficient.

Tested on an i80321 -- changes to others are mechanical.
2002-07-28 17:54:05 +00:00
briggs c13ee269dd Handle i80200 step D0 and i80321 step B0 2002-07-22 18:17:42 +00:00
ichiro 6349df15da cdev_tty_init(NIXPCOM,ixpcom) move to end of cdevsw array 2002-07-22 01:12:24 +00:00
simonb 895a23e8ae Add an "#ifndef NIXPCOM" check so that this builds on non-evbarm. 2002-07-20 00:26:51 +00:00
thorpej 3912e469dd Rename cdev_systrace_init() to cdev_clonemisc_init(), so it can
be properly used by any misc. cloning device.  While here, correct
a comment to indicate that "open" is the only entry point and that
everything else is handled with fileops.
2002-07-19 16:38:14 +00:00
ichiro 2255ed4ecb add ixpcom to cdevsw 2002-07-16 14:20:04 +00:00
ichiro 83c0b66d47 add cpu id for "PXA250/210 3rd version CPUcore",
needed by many XScale-core PDAs.
2002-07-10 07:00:50 +00:00
thorpej 47506c123a Add kttcp device. 2002-06-30 23:30:07 +00:00
briggs 1b3d605b4e Remove complaint about bus_dmamap_destroy() being called for a map with valid
mappings.  bus_dma(9) states: "In the event that the DMA handle contains
a valid mapping, the mapping will be unloaded via the same mechanism
used by bus_dmamap_unload()."  And some drivers do mean to skip the
unload step.
2002-06-28 15:21:00 +00:00
thorpej 43e7ad972b Garbage-collect sigframe references. 2002-06-23 00:16:59 +00:00
christos 3b50728cf4 MD systrace gluons. 2002-06-17 16:32:57 +00:00
thorpej ffe1440f29 Add the CPU ID for the 600MHz i80321 part. 2002-06-07 18:25:28 +00:00
drochner d2b9876081 move initialization of the "struct pglist" returned by uvm_pglistalloc()
from the calling code into uvm_pglistalloc() itself for consistency
and easier error handling
2002-06-02 14:44:35 +00:00
lukem 06de426449 SIMPLEQ rototill:
- implement SIMPLEQ_REMOVE(head, elm, type, field).  whilst it's O(n),
  this mirrors the functionality of SLIST_REMOVE() (the other
  singly-linked list type) and FreeBSD's STAILQ_REMOVE()
- remove the unnecessary elm arg from SIMPLEQ_REMOVE_HEAD().
  this mirrors the functionality of SLIST_REMOVE_HEAD() (the other
  singly-linked list type) and FreeBSD's STAILQ_REMOVE_HEAD()
- remove notes about SIMPLEQ not supporting arbitrary element removal
- use SIMPLEQ_FOREACH() instead of home-grown for loops
- use SIMPLEQ_EMPTY() appropriately
- use SIMPLEQ_*() instead of accessing sqh_first,sqh_last,sqe_next directly
- reorder manual page; be consistent about how the types are listed
- other minor cleanups
2002-06-01 23:50:52 +00:00
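A small usage sketch of the <sys/queue.h> macros this rototill touches; "struct foo" and its fields are made-up names:

	struct foo { int f_val; SIMPLEQ_ENTRY(foo) f_next; };
	SIMPLEQ_HEAD(fooq, foo) head = SIMPLEQ_HEAD_INITIALIZER(head);
	struct foo *f;

	SIMPLEQ_FOREACH(f, &head, f_next)	/* instead of home-grown loops */
		if (f->f_val == 42)
			break;
	if (f != NULL)
		SIMPLEQ_REMOVE(&head, f, foo, f_next);	/* O(n), like SLIST_REMOVE() */
	if (!SIMPLEQ_EMPTY(&head))
		SIMPLEQ_REMOVE_HEAD(&head, f_next);	/* no elm argument any more */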
ichiro 4c034ead9b Make this compile when DEBUG is defined. 2002-05-25 07:58:35 +00:00
chris a9e806ee0c Implement scheduler lock protocol, this fixes PR arm/10863.
Also add correct locking when freeing pages in pmap_destroy (fix from potr)

This now means that arm32 kernels can be built with LOCKDEBUG enabled. (only tested on cats though)
2002-05-14 19:22:34 +00:00
matt 0a6d35b7ed Nuke local extern label_t *db_recover; it's now in <ddb/db_extern.h> 2002-05-13 20:30:07 +00:00
ichiro be557a5f28 change IXP12x0 steppings.
define CPU_IXP12X0
2002-05-12 15:05:41 +00:00
thorpej 22cea0e73c Add IXP1200 steppings. 2002-05-10 17:50:25 +00:00
jdolecek f2f12a240b Update to md(4) changes: memory_disk_size is now md_root_size, and
type is size_t
2002-05-05 16:26:30 +00:00
thorpej 860fe83065 Add support for the Intel PXA210 and PXA250. From Hiroyuki Bessho, PR 16617. 2002-05-03 03:28:48 +00:00
rjs 9646735a82 Enable CPU_CLASS_SA1 for SA1100 and SA1110. 2002-05-02 22:57:36 +00:00
thorpej 8bd36dc909 Make a comment describe what the code actually does. 2002-04-25 23:23:23 +00:00
thorpej 2c0a144aa4 * pmap_clean_page(): Clean up a comment.
* pmap_protect(): write back the range when doing a r/w -> r/o
  transition.  (Still leave the block concerned with this in
  pmap_clean_page() disabled, for now.)
* pmap_pte_init_xscale(): Disable read/write-allocate for now, until
  we figure out why sometimes cache lines of NULs get deposited into
  file data.  Also, make sure ECC protection of page table access is
  disabled for now.
* xscale_setup_minidata(): Make sure the mini-data cache is configured
  write-back with read/write-allocate.
2002-04-24 17:35:10 +00:00
wiz d79f4782b6 Complete renaming of pms to opms (it was partly named pms, externally and
internally).  Move arm/iomd/pms* to arm/iomd/opms*. Mechanical change,
tested by cross-compiling a kernel from i386.

Approved by christos.

XXX: What are arm/arm32/conf.c and arm/include/conf.h good for?
2002-04-19 01:04:38 +00:00
thorpej 10c0c20ad4 Default all XScale core processors to the read/write-allocate write-back
cache mode.  Add a new XSCALE_CACHE_WRITE_THROUGH option for people who
are paranoid about the cache-related errata (you *do* have to line up
the planets correctly to trip them, but having the option is useful).
2002-04-12 21:52:45 +00:00
thorpej 32a0860797 Centralize ARM CPU configuration information by adding a new header
file, <arm/cpuconf.h>, which pulls in "opt_cputypes.h" and then defines
the following:
* CPU_NTYPES -- how many CPU types are configured into the kernel.  What
  you really want to know is "== 1" or "> 1".
* Defines ARM_ARCH_2, ARM_ARCH_3, ARM_ARCH_4, ARM_ARCH_5, depending
  on which ARM architecture versions are configured (based on CPU_*
  options).  Also defines ARM_NARCH to determine how many architecture
  versions are configured.
* Defines ARM_MMU_MEMC, ARM_MMU_GENERIC, ARM_MMU_XSCALE depending on
  which classes of ARM MMUs are configured into the kernel, and ARM_NMMUS
  to determine how many MMU classes are configured.

Remove the needless inclusion of "opt_cputypes.h" in several places.
Convert remaining users to <arm/cpuconf.h>.
2002-04-12 18:50:29 +00:00
thorpej 27d98ca694 Remove the Control register handling from arm32_vector_init(). Apparently,
the ARM6 and ARM7 do completely the wrong thing if you read this register,
so we have to handle this a different way.
2002-04-10 21:45:43 +00:00
thorpej 59c9e94b72 vm_offset_t -> vaddr_t,paddr_t 2002-04-10 19:35:22 +00:00
thorpej ad2350dccf On XScale processors where we use write-back caching, use a
read/write-allocate line allocation policy.

On the i80321, this improves nearly every lmbench benchmark, dramatically
so for the ones that are sensitive to memory bandwidth (100-300% improvement
for these).
2002-04-10 17:39:31 +00:00
thorpej 2b924304ab Add a new function, pmap_alloc_ptpt(), that allocates the PTPT and
maps it the way we want, rather than using uvm_km_zalloc() and playing
the "revoke cacheability" song-and-dance.
2002-04-10 17:08:13 +00:00
thorpej cad393fa1c pmap_alloc_l1pt(): Just enter the mappings for the L1 table by
hand, rather than calling pmap_kenter_pa() and then revoking
cacheability in the PTE.
2002-04-10 15:56:21 +00:00
thorpej cd0e28f1e7 Use L2_S_CACHE_MASK in places where we revoke cacheability. 2002-04-10 15:44:23 +00:00
thorpej 668547d841 pmap_kenter_pa(): Obey the "prot" argument, rather than simply making
all mappings r/w (!!).
2002-04-10 04:40:58 +00:00
thorpej 6e52cbf89e In pmap_copy_page_xscale(), put the source page in the mini-data
cache, as well.  The mini-data cache is 2-way, so src and dst won't
clobber each other, and the smallness of the cache doesn't matter,
since we access each page once sequentially.

While we still have to do the initial clean of the source page, this
saves another 4K of main D$ pollution, and also means we don't have
to do 2 cache passes after the copy is complete (i.e. we can skip the
invalidation of the source page in the main cache, since it's no longer
there).
2002-04-10 01:30:42 +00:00
thorpej 2092e78cec Add separate pmap_{zero,copy}_page() functions for generic ARM
vs. XScale.  Use the mini-data cache for the destination on XScale,
thus saving tossing out 4K of possible-useful data from the main
data cache each time.

This significantly improves every test in lmbench.
2002-04-10 00:45:43 +00:00
thorpej da162bee90 * Move the code that cleans the XScale mini-data cache into its
own function.
* Add a new function which sets up the mini-data cache clean area
  properly.
2002-04-09 23:44:00 +00:00
thorpej 1b20a04772 * Split pte_cache_mode into pte_l1_s_cache_mode, pte_l2_l_cache_mode,
and pte_l2_s_cache_mode.  The cache-meaningful bits are different
  for these descriptor types on some processor models.
* Add pte_*_cache_mask, corresponding to each above, which has a mask
  of the cache-meaningful bits, and define those for generic and XScale
  MMU classes.  Note, the L2_S_CACHE_MASK_xscale definition requires
  use of the Extended Small Page L2 descriptor (the "X" bit overlaps
  with AP bits otherwise).
2002-04-09 22:37:00 +00:00
thorpej c535f4ffc4 Define 2 classes of ARM MMUs:
1. Generic (compatible with ARM6)
2. XScale (can be used as generic, but also has certain nifty extensions).

Define abstract PTE bit definitions for each MMU class.  If only one MMU
class is configured into the kernel (based on CPU_* options), then we
get the constants for that MMU class.  Otherwise we indirect through
variables set up via set_cpufuncs().

XXX The XScale bits are currently the same as the generic bits.  Baby steps.
2002-04-09 21:00:42 +00:00
thorpej 7b422802f6 L2_TYPE_S -> L2_S_PROTO 2002-04-09 19:44:22 +00:00
thorpej aee5994fce Use abstract names for the protection and PTE type bits in
L1 and L2 descriptors.  This will allow us to support different
PTE layouts that enable the use of extensions on different
processor models.
2002-04-09 19:37:14 +00:00
thorpej 4d78508c9d Back-out rev 1.75 (pmap_extract() rewrite), and fix the (minor)
bug that revision intended to fix properly.
2002-04-05 22:17:41 +00:00
thorpej 991426d348 * Rewrite the 32-bit ARM pte.h based on the ARM architecture manual.
Significant cleanup, here, including better PTE bit names.
* Add XScale PTE extensions (ECC enable, write-allocate cache mode).
* Mechanical changes everywhere else to update for new pte.h.  While
  doing this, two bugs (as a result of typos) were fixed in

	arm/arm32/bus_dma.c
	evbarm/integrator/int_bus_dma.c
2002-04-05 16:58:01 +00:00
skrll c0e4084210 Fix compile problem when DDB not defined. 2002-04-04 12:39:55 +00:00
thorpej ce482eca0a Eliminate a mask against PD_MASK. 2002-04-04 05:42:29 +00:00
thorpej 60b63aec95 There is no need to mask VAs and PAs w/ PG_FRAME to clear
the lower bits; UVM provides us page-aligned addresses for
everything.  For the paranoid, we'll leave KDASSERT()'s in
that check for this if the kernel is built with DEBUG.

Low-hanging fruit that shaves some cycles.
2002-04-04 04:43:20 +00:00
thorpej e539ef03aa Rename flags that are really part of the pv_entry/mdpage into
pmap.h and give them more descriptive names and better comments:
* PT_M  -> PVF_MOD (page is modified)
* PT_H  -> PVF_REF (page is referenced)
* PT_W  -> PVF_WIRED (mapping is wired)
* PT_Wr -> PVF_WRITE (mapping is writable)
* PT_NC -> PVF_NC (mapping is non-cacheable; multiple mappings)
2002-04-04 04:25:44 +00:00
thorpej 263270d684 Catch a couple more vector page mapping manipulations. 2002-04-04 02:06:46 +00:00
thorpej 20b1bb2655 Clean up handling of the vector page on 32-bit ARM systems:
* Don't refer to VA 0, instead refer to a new variable: vector_page
* Delete the old zero_page_*() functions, replacing them with a new
  one: vector_page_setprot().
* When manipulating vector page mappings in user pmaps, only do so if
  the vector page is below KERNEL_BASE (if it's above KERNEL_BASE, the
  vector page is mapped by the kernel pmap).
* Add a new function, arm32_vector_init(), which takes the virtual
  address of the vector page (which MUST be valid when the function
  is called) and a bitmask of vectors the kernel is going to take
  over, and performs all vector page initialization, including setting
  the V bit in the CPU Control register ("relocate vectors to high
  address"), if necessary.
2002-04-03 23:33:26 +00:00
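A hypothetical call from machine-dependent startup code; the ARM_VECTORS_HIGH and ARM_VEC_ALL constant names are assumptions used only for illustration:

	/* Vector page already mapped at the high address; the kernel
	 * takes over every vector. */
	arm32_vector_init(ARM_VECTORS_HIGH, ARM_VEC_ALL);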
thorpej 7739f7410a Always provide kernel_text. 2002-04-03 17:30:50 +00:00
reinoud 943880cea2 Rototill and fix the pmap_extract function. It wouldn't even return data
when the part being queried was mapped with a section (!), giving weird
results, and it had become a mess of goto's.

Complete rewrite that cleans up the `goto' jungle entirely ... ripped out all
the goto's. The resulting code is much easier to read and might even have a
small performance gain.
2002-04-03 15:59:58 +00:00
lukem d213d804f7 Rename MEMORY_DISK_SIZE (formerly MINIROOTSIZE) to MEMORY_DISK_ROOT_SIZE,
which was suggested by Izumi Tsutsui <tsutsui@ceres.dti.ne.jp> as
being more consistent with what it's controlling...
2002-04-02 05:30:34 +00:00
thorpej 243dc1d498 Rename the ARM sysarch calls from arm32* -> arm* 2002-03-30 06:23:39 +00:00
thorpej 863afc5d41 Fix a printf format. 2002-03-29 00:48:58 +00:00
thorpej c915b880c5 The 80321 manual lies; it does have a CPU ID distinct from the 80200.
Add that CPU ID, and add a case for it.
2002-03-27 01:34:47 +00:00
thorpej 41f47f03e7 Restructure a few things in order to support other XScale core
I/O processors:
* The i80200 and the i80321 have the same CPU ID, so split the
  CPU_XSCALE option into CPU_XSCALE_80200 and CPU_XSCALE_80321
  options, and don't let them both be defined at the same time.
  XXX May want to revisit this in the future.
* Split some registers common between the i80200 and i80321 into
  <arm/xscale/xscalereg.h>.
* Rename a few existing functions.
2002-03-26 19:29:44 +00:00
thorpej 3964313f67 Fix reporting of the kernel virtual address space range to UVM. 2002-03-25 22:11:12 +00:00
thorpej a2a309d02a * Some cleanup.
* Delete the call to pmap_copy() in pmap.h
2002-03-25 19:53:38 +00:00
thorpej b17e7a03c2 Clean up pmap_map_ptes() and pmap_unmap_ptes() a little, and add
a debug assertion that curproc is never NULL if mapping a non-current
pmap.
2002-03-25 17:50:12 +00:00
thorpej a2d8f71d01 The target page of pmap_zero_page(), pmap_pageidlezero(), and
pmap_copy_page() will never have any mappings.  Therefore, it
is unnecessary to do a cache clean for that page.

Add assertions in #ifdef DEBUG that assert this invariant.

This shaves some cycles off the frequently-called pmap_zero_page()
and pmap_copy_page() (no need to look up the dst page's vm_page
structure, and one less function call to clean the page).
2002-03-25 17:33:26 +00:00
thorpej 75cb2c6554 * Clean up some comments/whitespace.
* Don't construct a fake trap frame and pass it to main(); that hasn't
  been needed for some time.
* panic if main() returns.
2002-03-25 16:58:18 +00:00
thorpej a61914be93 Garbage-collect fetchuserword(); nothing uses it any more. 2002-03-25 16:32:55 +00:00
thorpej dbe6d8291b * Fix use of pmap_curmaxkvaddr.
* Use the PTP hint in the pmap.
2002-03-25 04:51:19 +00:00
thorpej 8500c97458 Move some private pmap data structures into pmap.c 2002-03-25 03:00:28 +00:00
thorpej da2944b10e In the Prefetch Abort handler, just do the uvm_fault() dance
directly, rather than doing a data access to fetch the page,
which meant we had to take another fault (!!).
2002-03-25 01:53:36 +00:00
thorpej a4652c81cf Only check for SA110 bugs on SA110 CPUs with step <= K. 2002-03-24 22:03:23 +00:00
thorpej ea553e2681 Cache the cpu type and cpu revision in cpu_info. 2002-03-24 22:02:58 +00:00
thorpej 186c0135d6 Garbage-collect pmap_pte() (and good riddance!) 2002-03-24 21:32:18 +00:00
thorpej ea95b58d21 * Only check for SA110 rev K bug if we're on an SA110 (XXX should also
check stepping).
* In said check, don't use pmap_pte().
* Garbage-collect some useless debug code.
2002-03-24 21:27:57 +00:00
chris 03345d6008 remove pointless pg = NULL in else part of if (pg != NULL) 2002-03-24 21:10:25 +00:00
thorpej bf3ea66d5c pmap_enter(): Use pmap_map_ptes() correctly. 2002-03-24 20:48:59 +00:00
chris 434f6391ea Update pmap_copy_page to map in the src read-only and only invalidate it after the copy; since the src is never written, there is no need to write it back. 2002-03-24 18:05:45 +00:00
thorpej a6d59cb039 pmap_allocpagedir(): Don't use pmap_pte(), and simplify a little. 2002-03-24 06:07:00 +00:00
thorpej b812152b34 pmap_handled_emulation(): Fix locking protocol botch.
XXX Should we traverse the PV list and enable all PTEs?
2002-03-24 05:55:31 +00:00
thorpej 6fbfe41621 pmap_handled_emulation(): Use pmap_map_ptes() correctly. 2002-03-24 05:52:10 +00:00
thorpej ec75dcf496 pmap_modified_emulation(): Use pmap_map_ptes() correctly. 2002-03-24 05:39:53 +00:00
thorpej 0aef2cab11 pmap_unwire(): Use pmap_map_ptes() correctly. 2002-03-24 05:28:46 +00:00
thorpej 11df08a743 pmap_clearbit(): Use pmap_map_ptes() correctly. 2002-03-24 05:15:59 +00:00
thorpej eb638f9bc5 Use pmap_is_curpmap() consistently. 2002-03-24 04:56:49 +00:00
thorpej 242f080390 Clean up the PTP allocation functions a bit. 2002-03-24 04:49:16 +00:00
thorpej aa1563948c * arm_byte_to_page() -> arm_btop()
* arm_page_to_byte() -> arm_ptob()
2002-03-24 03:37:18 +00:00
thorpej 48d8c5fdd9 Remove some redundant tests in pmap_enter(). 2002-03-24 03:25:10 +00:00
thorpej e80bfdc1a3 Garbage-collect the "pagehook" stuff. 2002-03-23 19:21:58 +00:00
thorpej 0ba36d6f6f * Rename PROCESS_PAGE_TBLS_BASE -> PTE_BASE
* Rename ALT_PAGE_TBLS_BASE -> APTE_BASE
* Garbage-collect PAGE_TABLE_SPACE_START
2002-03-23 02:22:56 +00:00
briggs 47c8167bc7 Fix typo: ISDNCTL -> NISDNCTL. 2002-03-18 22:46:57 +00:00
bjh21 a12e90b08f Only put the CPU type into cpu_model, not the state of the control register.
Instead, print the control register state on the next line at startup.
2002-03-16 18:47:51 +00:00
martin 94881fb123 Rename ISDN devices, per discussion on tech-kern. The network devices
become ippp (ISDN ppp) and irip (ISDN raw IP). The character device now
are called: /dev/isdn (isdnd <-> kernel communication), /dev/isdnctl (dialing
and other control), /dev/isdntrc* (tracing), /dev/isdnbchan* (raw B channel
access, i.e. for user land PPP) and /dev/isdntel* (telephone devices, i.e.
for answering machines).
2002-03-16 16:55:51 +00:00
bjh21 57eb77d59f Add CPU ID for the ARM1022ES.
Also add a CPU class for ARM10E processors in general.
2002-03-16 14:41:15 +00:00
reinoud aefe920476 Serious bug fix: a userland program could panic the kernel when it
issued an instruction that caused the late abort handler to be called
and for which the kernel had no support built in.

It now only panics when this happens in the kernel; otherwise it delivers
a SEGV signal to the process.
2002-03-15 22:19:49 +00:00
reinoud b91c20709e When ARMFPE wasn't enabled, the `usearmfpe' flag was statically initialised
but not used, resulting in a compiler error. Splitting the declaration
from the initialisation solves this.

It would be better not to declare the flag at all when ARMFPE isn't enabled,
but that would just add to the #ifdef jungle.
2002-03-11 11:50:12 +00:00
lukem cd19d52695 * rename MINIROOTSIZE to MEMORY_DISK_SIZE, so that all md(4) options
are now consistently named
* fold opt_mdsize.h into opt_md.h
2002-03-10 19:56:37 +00:00
bjh21 a42e17ae9a __RCSID -> __KERNEL_RCSID 2002-03-10 15:47:43 +00:00
bjh21 3a0f83d390 Re-work the way that FPAs are handled. If ARMFPE isn't configured, don't
even bother probing for an FPA.  If ARMFPE is configured, always use it,
even if there's an FPA (since it provides the FPA support code).  Move all
printfs about FPAs into armfpe_init.c.

This means I can delete the last two elements from struct _cpu, so that the
structure, and the whole of <arm/cpus.h> is redundant and can be deleted.
2002-03-10 15:29:53 +00:00
bjh21 9bb7807c7b Remove fpu_model from struct _cpu. Instead, have initialise_arm_fpe()
printf() the FPE version number itself.
2002-03-10 11:32:00 +00:00
bjh21 63231772e8 Add a ci_dev element to struct cpu_info, pointing to the device that
corresponds to the CPU.
2002-03-10 11:06:01 +00:00
bjh21 60219ba2a6 Kill the fpu_flags element from struct _cpu. It was only ever set to 0
anyway.
2002-03-10 00:44:09 +00:00
bjh21 01b68bd7de Clean up inline assembler. Rather than saving R0, copying FPSR to R0,
copying it to the output register and then restoring R0, just copy the
FPSR straight to the output.
2002-03-10 00:09:24 +00:00
bjh21 aeece3b5bd Remove the cpu_model member from struct _cpu, and just use the cpu_model
variable directly.  While we're at it, make cpu_model rather larger.
2002-03-09 23:49:15 +00:00
bjh21 09dd49a342 Remove the cpu_class element from struct _cpu, and make it a local variable
in identify_arm_cpu(), since it's almost unused elsewhere.

Change the detection of bugged StrongARMs to use the cpu ID rather than the
class.  This turns "almost" into "entirely".
2002-03-09 23:24:11 +00:00
bjh21 1c1e3f8439 Replace cpu_id and cpu_ctrl in struct _cpu with ci_cpuid and ci_ctrl in
struct cpu_info.  Also kill the cpuctrl global while we're here, and make
identify_arm_cpu() take a struct cpu_info * as an argument alongside the CPU
number.
2002-03-09 21:30:57 +00:00
bjh21 20917f120c Move arm700bugcount into struct cpu_info, and attach it in
identify_master_cpu().
2002-03-09 19:11:20 +00:00
thorpej a180cee23b Pool deals fairly well with physical memory shortage, but it doesn't
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map).  Try to deal with this:

* Group all information about the backend allocator for a pool in a
  separate structure.  The pool references this structure, rather than
  the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
  to become available, but will still fail if it cannot allocate KVA
  space for the pages.  If this happens, carefully drain all pools using
  the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
  some pages, and use that information to make draining easier and more
  efficient.
* Get rid of PR_URGENT.  There was only one use of it, and it could be
  dealt with by the caller.

From art@openbsd.org.
2002-03-08 20:48:27 +00:00
tsutsui 3c8b0446fe Change type of dumpmag to u_int32_t since it is actually
a 32bit unsigned magic number.
As per discussion on tech-kern, and fixes port-sparc64/11949.
2002-03-06 13:10:18 +00:00
chris 09b5f7b740 Mostly style changes to stop us directly referencing tqh_first, and use TAILQ_FIRST instead. Based on rev 1.130 of the i386 pmap.c. 2002-03-06 10:55:21 +00:00
thorpej 5658662324 * Make pmap_is_{modified,referenced}() macros in pmap.h that just
test the attributes in the vm_page_md directly.
* Clean up pmap_clear_{modified,referenced}().
* Delete now-unused pmap_testbit().
2002-03-05 04:48:03 +00:00
thorpej a92da3d4a5 Switch back to using vm_page_md (thanks chuq for finding the bug
in the code that made it unstable before!)
2002-03-05 04:19:59 +00:00
simonb 6f0fb25121 Don't need to declare phys_map - it is declared in <uvm/uvm_extern.h>. 2002-03-04 02:43:22 +00:00
chris 1181e367e0 Implement pmap_growkernel for arm32 based ports.
Note that this has been compiled on some systems (cats, IQ80310, IPAQ, netwinder and shark; note that shark's build is currently broken due to other reasons), but only actually run on cats.
Shark doesn't make use of the functionality as I believe there has to be a correlation between OFW and the kernel tables so that calls into OFW work.
2002-03-03 11:22:58 +00:00
chris a973797a7a Remove ref to VM_MAXKERN_ADDRESS, it's not used in this file 2002-03-02 15:35:05 +00:00
christos e8116a8f5b - Use DEV_ constants, instead of documenting the numbers!
- Delete cdev_decl(mm); where appropriate, and other hand-crufting [hi powerpc!]
2002-02-27 01:20:51 +00:00
simonb d9ab16ba2f Purge CLSIZE, CLSIZELOG2 and MCLOFSET.
Be consistent in the way that MSIZE, MCLSHIFT, MCLBYTES and NMBCLUSTERS
  are defined.
Remove old VM constants from cesfic port.
Bump MSIZE to 256 on mipsco (the only one that wasn't already 256).
2002-02-26 15:13:19 +00:00
thorpej bb84e85802 Change pmap_map_entry() to work like pmap_map_chunk(): take a pointer
to the L1 table and a virtual address, and no pointer to the L2 table.
The L2 table will be looked up by pmap_map_entry(), which will panic
if there is no L2 table for the requested VA.

NOTE: IT IS EXTREMELY IMPORTANT THAT THE CORRECT VIRTUAL ADDRESS
BE PROVIDED TO pmap_map_entry()!  Notably, the code that mapped
the kernel L2 tables into the kernel PT mapping L2 table was not
passing actual virtual addresses, but rather offsets into the range
mapped by the L2 table.  I have fixed up all of these call sites,
and tested the resulting kernel on both an IQ80310 and a Shark.
Other portmasters should examine their pmap_map_entry() calls if
their new kernels fail.
2002-02-22 04:49:19 +00:00
thorpej 77e3a89912 When reporting there is no VM map for a fault, also report the
faulting address.
2002-02-22 03:24:09 +00:00
thorpej 79738a99e9 Keep track of which kernel PTs are available during bootstrap,
and let pmap_map_chunk() lookup the correct one to use for the
current VA.  Eliminate the "l2table" argument to pmap_map_chunk().

Add a second L2 table for mapping kernel text/data/bss on the
IQ80310 (fixes booting kernels with ramdisks).
2002-02-21 21:58:00 +00:00
thorpej 6d35f61035 In pmap_map_chunk(), if we can't use a section mapping, then
make sure that the L1 slot for the current VA points to an L2
table, and panic if it doesn't.
2002-02-21 06:36:11 +00:00
thorpej 15e0450397 Always pass the L1 table to pmap_map_chunk(). This allows pmap_map_chunk()
to perform some error checking.
2002-02-21 05:25:23 +00:00
thorpej 454e106a48 map_chunk() -> pmap_map_chunk(), and move it to pmap.c 2002-02-21 02:52:19 +00:00
thorpej 425011f621 map_pagetable() -> pmap_link_l2pt(), and move it to pmap.c 2002-02-20 20:41:15 +00:00
thorpej c44b9117f0 Collapse map_entry{,ro,nc}() into a single pmap_map_entry() that
takes a prot and a "cacheable" indicator.
2002-02-20 02:32:56 +00:00
thorpej 9c31f51c34 Rename map_section() to pmap_map_section(), move it to pmap.c, and give it
an extra argument (prot - specifies protection of the mapping).
2002-02-20 00:10:15 +00:00
bjh21 e6e848ef6d Our assembler handles FPA instructions fine, so don't use .word for them. 2002-02-17 20:41:02 +00:00
bjh21 cb7a3d0674 ANSIfy, and other KNF cleanup. 2002-02-17 19:53:44 +00:00
bjh21 561984015b Undo part of rev 1.8: SWP instructions really do both read and write
the referenced address.
2002-02-14 11:59:26 +00:00
chs b744097a5f allow writing to write-only mappings. fixes PR 3493. 2002-02-14 07:08:02 +00:00
reinoud a74d22be50 Add some extra comments for the `booted_kernel' variable. 2002-02-10 13:20:26 +00:00
thorpej da13cb2fb5 Back out all the vm_page_md changes. They are causing some
mysterious problems (a similar change to the i386 pmap causes
mysterious problems there, as well), and the issue needs to
be investigated more.
2002-02-06 17:41:42 +00:00
thorpej 4611193917 Efficiency tweaks, some made possible by vm_page_md. 2002-02-06 17:32:35 +00:00
thorpej 58eebd58b3 Use vm_page_md rather than pmap_physseg. Saves lots of cycles in
common operations.
2002-02-05 21:14:36 +00:00
thorpej 9485327397 Allow platforms to use an extra level of indirection for FIQs,
enabled by defining __ARM_FIQ_INDIRECT in <machine/types.h>.
This is needed for OpenFirmware systems (like the Shark), where
the OFW vector page is used, and kernel entries merely patched
into it.
2002-02-05 18:26:07 +00:00
chris 3ead7271d5 Fix the type of irqmasks (any reason it's even been added as an extern when it's in irqhandler.h with the correct type and array size?) 2002-01-31 09:43:42 +00:00
thorpej 2bc996b0bc New interrupt framework for NetBSD/evbarm, and accompanying new
interrupt code for the IQ80310 board support package.

XXX The Integrator board support package still uses the old-style
arm32 interrupt code, so some compatibility hacks have been added
for it.  When the Integrator uses new-style interrupts, those hacks
can go away.
2002-01-30 03:59:39 +00:00
thorpej cb51977892 When initializing sf->sf_spl, simply always assume that 0 is
equivalent to spl0().
2002-01-29 23:02:48 +00:00
bjh21 e4b1cbedfc Add revision->stepping maps for the SA-110, SA-1100 and SA-1110.
Those for the SA-1100 and SA-1110 are from Intel's documentation.
The mapping for the SA-110 is from various sources on the net, since Intel
don't seem to document it.

Also, change the layout of the maps to have four steppings per line,
so they aren't quite so unwieldy.
2002-01-27 14:43:47 +00:00
thorpej 08342df793 Overhaul bus_dmamap_sync for the ARM:
* Track which process (XXX really, vmspace) owns the mapping.  When
  we sync the map, if the mapping doesn't belong to the kernel or to
  the current process (XXX really, vmspace), then no cache fobbing
  is necessary, since the cache is Wb-Inv'd on context switch (XXX need
  to revisit this when we support FCSE).
* Be smarter about which cache operation we do when sync'ing the map:
  - PREREAD -- Invalidate D$ (XXX right now, we actually do Wb-Inv)
  - PREWRITE -- Write-back D$ (note, we do NOT invalidate here)
  - PREREAD|PREWRITE -- Wb-Inv D$

More work is needed here.  In particular, a version for CPUs
with write-through caches should be provided, to eliminate
the write-back steps (which are noops on such CPUs, but skipping
two branches would be nice).
2002-01-25 20:57:41 +00:00
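As an aside, the op-to-cache-operation mapping described above looks roughly like the sketch below (the cpu_dcache_*() wrapper names are assumed; va/len denote the affected part of a segment):

	switch (ops & (BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE)) {
	case BUS_DMASYNC_PREREAD:
		/* should be invalidate-only; wb+inv for now (see XXX) */
		cpu_dcache_wbinv_range(va, len);
		break;
	case BUS_DMASYNC_PREWRITE:
		cpu_dcache_wb_range(va, len);	/* write back, no invalidate */
		break;
	case BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE:
		cpu_dcache_wbinv_range(va, len);
		break;
	}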
thorpej 2c23251a7a ANSI'ify function decls. 2002-01-25 19:37:49 +00:00
thorpej 4e990d9ccb Overhaul of the ARM cache code. This is mostly a simplification
pass.  Rather than providing a whole slew of cache operations that
aren't ever used, distill them down to some useful primitives:

	icache_sync_all         Synchronize I-cache
	icache_sync_range       Synchronize I-cache range

	dcache_wbinv_all        Write-back and Invalidate D-cache
	dcache_wbinv_range      Write-back and Invalidate D-cache range
	dcache_inv_range        Invalidate D-cache range
	dcache_wb_range         Write-back D-cache range

	idcache_wbinv_all       Write-back and Invalidate D-cache,
				Invalidate I-cache
	idcache_wbinv_range     Write-back and Invalidate D-cache,
				Invalidate I-cache range

Note: This does not yet include an overhaul of the actual asm files
that implement the primitives.  Instead, we've provided a safe default
for each CPU type, and the individual CPU types can now be optimized
one at a time.
2002-01-25 19:19:22 +00:00
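Typical call sites, assuming the usual cpu_*() wrappers around these primitives (only the primitive names come from the commit; the wrapper spellings are assumptions):

	cpu_dcache_wb_range(va, len);	/* dcache_wb_range: push dirty lines */
	cpu_dcache_inv_range(va, len);	/* dcache_inv_range: drop stale lines */
	cpu_idcache_wbinv_all();	/* idcache_wbinv_all: e.g. at context switch */
	cpu_icache_sync_range(va, len);	/* icache_sync_range: after writing code */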
thorpej c2004821b2 Use a table to look up stepping names. Add a generic stepping
table ("rev 0", "rev 1", etc.) and an i80200 stepping table that
has the stepping names that appear in the i80200 manuals/errata..
2002-01-24 20:14:19 +00:00
thorpej e594c94727 Some prototype cleanup. 2002-01-20 03:41:47 +00:00
thorpej 940aa6cbf5 Add cpwait's after TLB operations. 2002-01-17 23:56:01 +00:00