physical memory start. Garbage-collect some cruft while here.
* Move the kernel up to 0xc0000000, giving a 1G/3G kernel/user split.
* Adjust the Integrator startup code accordingly.
Note that this has been compiled on several systems: cats, IQ80310, IPAQ, netwinder, and shark (shark's build is currently broken for unrelated reasons), but only actually run on cats.
Shark doesn't make use of the functionality, as I believe the kernel tables have to stay correlated with OFW's so that calls into OFW work.
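As a rough sketch (the macro names are assumptions, not taken from this change), the split amounts to something like the following in the port's vmparam.h:

	/* Hypothetical sketch of the 1G/3G split. */
	#define KERNEL_BASE		0xc0000000	/* kernel owns the top 1G */
	#define VM_MAXUSER_ADDRESS	((vaddr_t) KERNEL_BASE)	/* user VA: 0..3G */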
to the L1 table and a virtual address, and no pointer to the L2 table.
The L2 table will be looked up by pmap_map_entry(), which will panic
if there is no L2 table for the requested VA.
NOTE: IT IS EXTREMELY IMPORTANT THAT THE CORRECT VIRTUAL ADDRESS
BE PROVIDED TO pmap_map_entry()! Notably, the code that mapped
the kernel L2 tables into the kernel PT mapping L2 table was not
passing actual virtual addresses, but rather offsets into the range
mapped by the L2 table. I have fixed up all of these call sites,
and tested the resulting kernel on both an IQ80310 and a Shark.
Other portmasters should examine their pmap_map_entry() calls if
their new kernels fail.
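To make the fix concrete, here is a hedged before/after sketch; the variable names (l1pagetable, kernel_ptpt_va, kernel_pt_pa, idx) are invented for illustration, and the signature pmap_map_entry(l1pt, va, pa, prot, cache) is assumed from the description above:

	/* Wrong (pre-fix): an offset into the L2 table's range, not a VA. */
	pmap_map_entry(l1pagetable, idx * NBPG, kernel_pt_pa,
	    VM_PROT_READ|VM_PROT_WRITE, 0);	/* non-cached */

	/* Right: the actual virtual address being mapped. */
	pmap_map_entry(l1pagetable, kernel_ptpt_va + idx * NBPG, kernel_pt_pa,
	    VM_PROT_READ|VM_PROT_WRITE, 0);	/* non-cached */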
and let pmap_map_chunk() look up the correct one to use for the
current VA. Eliminate the "l2table" argument to pmap_map_chunk().
Add a second L2 table for mapping kernel text/data/bss on the
IQ80310 (fixes booting kernels with ramdisks).
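A hedged sketch of a caller after this change (the addresses, sizes, and cache flag are placeholders; the signature pmap_map_chunk(l1pt, va, pa, size, prot, cache) is assumed):

	/* No more "l2table" argument: pass the L1 table and a real VA,
	 * and pmap_map_chunk() finds the covering L2 table for each VA. */
	pmap_map_chunk(l1pagetable, KERNEL_BASE, physical_start,
	    kerneltextsize, VM_PROT_READ|VM_PROT_WRITE, PT_CACHEABLE);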
* Track which process (XXX really, vmspace) owns the mapping. When
we sync the map, if the mapping doesn't belong to the kernel or to
the current process (XXX really, vmspace), then no cache fobbing
is necessary, since the cache is Wb-Inv'd on context switch (XXX need
to revisit this when we support FCSE).
* Be smarter about which cache operation we do when sync'ing the map:
- PREREAD -- Invalidate D$ (XXX right now, we actually do Wb-Inv)
- PREWRITE -- Write-back D$ (note, we do NOT invalidate here)
- PREREAD|PREWRITE -- Wb-Inv D$
More work is needed here. In particular, a version for CPUs
with write-through caches should be provided, to eliminate
the write-back steps (which are no-ops on such CPUs, but skipping
two branches would be nice).
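As a minimal sketch of the per-op logic described above (the owner check and the loop over DMA segments are elided; the cpu_dcache_*_range() names follow the primitives listed in the next entry):

	switch (ops & (BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE)) {
	case BUS_DMASYNC_PREREAD:
		/* XXX should be invalidate-only; we do Wb-Inv for now */
		cpu_dcache_wbinv_range(va, len);
		break;
	case BUS_DMASYNC_PREWRITE:
		cpu_dcache_wb_range(va, len);	/* write-back, no invalidate */
		break;
	case BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE:
		cpu_dcache_wbinv_range(va, len);
		break;
	}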
pass. Rather than providing a whole slew of cache operations that
aren't ever used, distill them down to some useful primitives:
	icache_sync_all        Synchronize I-cache
	icache_sync_range      Synchronize I-cache range
	dcache_wbinv_all       Write-back and Invalidate D-cache
	dcache_wbinv_range     Write-back and Invalidate D-cache range
	dcache_inv_range       Invalidate D-cache range
	dcache_wb_range        Write-back D-cache range
	idcache_wbinv_all      Write-back and Invalidate D-cache,
	                       Invalidate I-cache
	idcache_wbinv_range    Write-back and Invalidate D-cache,
	                       Invalidate I-cache range
Note: This does not yet include an overhaul of the actual asm files
that implement the primitives. Instead, we've provided a safe default
for each CPU type, and the individual CPU types can now be optimized
one at a time.
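For illustration, the primitives map naturally onto a per-CPU function table along these lines (a sketch only; the struct and field names are assumptions, and vaddr_t/vsize_t are the usual NetBSD types):

	/* Sketch of a per-CPU cache-ops table matching the list above. */
	struct cache_ops {
		void	(*icache_sync_all)(void);
		void	(*icache_sync_range)(vaddr_t, vsize_t);
		void	(*dcache_wbinv_all)(void);
		void	(*dcache_wbinv_range)(vaddr_t, vsize_t);
		void	(*dcache_inv_range)(vaddr_t, vsize_t);
		void	(*dcache_wb_range)(vaddr_t, vsize_t);
		void	(*idcache_wbinv_all)(void);
		void	(*idcache_wbinv_range)(vaddr_t, vsize_t);
	};

Each CPU type points these at its safe default handlers until optimized versions are written.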
<arm/arm32/vmparam.h> (mostly the stuff that's tied to the pmap
implementation).
- Since the MMU definitions in pte.h are specific to ARM processors
that support 32-bit mode, move pte.h to <arm/arm32/pte.h>.
- Make the Netwinder startup file build again (use PT_B|PT_C, rather
than PT_CACHEABLE, since the latter expands to a variable these days).
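The reason, in sketch form (the variable name below is an assumption): the startup code needs a constant expression, while PT_CACHEABLE is now resolved at run time:

	#define PT_B		0x04			/* bufferable bit */
	#define PT_C		0x08			/* cacheable bit */
	#define PT_CACHEABLE	(pte_cache_mode)	/* runtime variable; name assumed */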
On platforms which load the kernel without symbols directly from firmware
(possibly in, e.g., S-Record format), call ddb_init() with empty arguments,
so that it will search any compiled-in SYMTAB_SPACE. On all other platforms,
if __ELF__, also call ddb_init() with empty arguments until ELF bootloaders
that pass symbol information are ready.
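Concretely, the empty-argument call looks something like this in a port's machdep code (a sketch; it assumes ddb_init(symsize, start, end) treats zero/NULL arguments as "search for a compiled-in symbol table"):

	#ifdef DDB
		/* No symbols from the bootloader: let ddb_init() find any
		 * table reserved via "options SYMTAB_SPACE". */
		ddb_init(0, NULL, NULL);
	#endif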