Commit Graph

843 Commits

Author SHA1 Message Date
gmcgarry
dca80f08fd Add __HAVE_MD_RUNQUEUE flag for MD code to override MI run queue primitives. 2002-09-22 04:11:32 +00:00
nathanw
2cab03d64a In the fault handler, record growth of the stack, so that core dumps
actually contain the entire stack.
2002-09-21 00:29:04 +00:00
manu
e77de5cb68 Initial APM support (enough to get battery level) 2002-09-16 19:52:52 +00:00
skrll
1f4f5626a4 Fix typos in comment. 2002-09-15 20:11:55 +00:00
gehenna
77a6b82b27 Merge the gehenna-devsw branch into the trunk.
This merge changes the device switch tables from a static array to tables
generated dynamically by config(8).

- All device switches are defined as constant structures in the device drivers.

- The new grammar ``device-major'' is introduced to ``files''.

	device-major <prefix> char <num> [block <num>] [<rules>]

- All device major numbers must be listed in the port-dependent majors.<arch>
  using this grammar.

- Added a new naming convention.
  The name of the device switch must be <prefix>_[bc]devsw for auto-generation
  of device switch tables.

- Backward compatibility for loading a block/character device switch via the
  LKM framework is broken. This is necessary to convert between block/character
  device majors and device names at runtime.

- The restriction on assigning device majors via LKM is completely removed.
  We no longer need to reserve LKM entries for dynamically loaded device switches.

- At compile time, the list of device major numbers is packed into the kernel,
  and the LKM framework refers to it when assigning device majors dynamically.
2002-09-06 13:18:43 +00:00
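
For illustration, a minimal, hypothetical sketch of the new convention for a
character device driver with prefix ``foo'' (the entry-point names and the
major number are made up, and most cdevsw fields are elided):

	/* In the driver: the switch must be named <prefix>_cdevsw
	 * (<prefix>_bdevsw for a block device) so that the tables
	 * generated by config(8) can reference it. */
	const struct cdevsw foo_cdevsw = {
		foo_open, foo_close, foo_read, foo_write, foo_ioctl,
		/* ... remaining entry points (stop, tty, poll, mmap, ...) ... */
	};

The corresponding hypothetical line in the port's majors.<arch> would then be:

	device-major	foo	char 100	foo
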
jdolecek
8839507f5b whitespace fix past __KERNEL_RCSID() 2002-09-05 18:34:00 +00:00
manu
9d459610ba When the serial port was not checked in hpcboot on hpcarm, writing to
/dev/ttyS0 crashed the kernel. This is because sacom_filltx uses some
uninitialized static variables. Pulling the values from the softc instead
fixes the problem (this is what was done before the driver was moved
from /sys/arch/hpcarm to /sys/arch/arm, anyway).
2002-09-02 05:27:39 +00:00
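
The general shape of this kind of fix, sketched with assumed structure and
field names (per-unit state lives in the driver's softc rather than in
file-scope statics):

	/* Before: file-scope statics, only valid if the probe code ran. */
	/* static bus_space_tag_t sacom_iot; */
	/* static bus_space_handle_t sacom_ioh; */

	/* After: always take the values from the attached softc. */
	static void
	sacom_filltx(struct sacom_softc *sc)
	{
		bus_space_tag_t iot = sc->sc_iot;	/* assumed field names */
		bus_space_handle_t ioh = sc->sc_ioh;

		/* ... push pending output through (iot, ioh) ... */
	}
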
thorpej
212cb9f78d Add machine-dependent bits of RAS for arm32. 2002-08-31 03:07:32 +00:00
briggs
37019d791a Use generic_bs_sr_4 for bus_space_set_region_4. 2002-08-29 17:29:34 +00:00
briggs
043080912d Add generic_bs_sr_4 2002-08-29 17:27:48 +00:00
thorpej
70b58c9c1e In bounds_check_with_label(), look for the label sector in RAW_PART,
not "a".
2002-08-27 17:30:02 +00:00
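
A sketch of the intended computation (RAW_PART and LABELSECTOR are the
standard NetBSD constants; the variable names are illustrative):

	/* The disklabel lives at a fixed offset inside the raw partition,
	 * not inside partition "a", which need not even exist. */
	daddr_t labelsect =
	    lp->d_partitions[RAW_PART].p_offset + LABELSECTOR;
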
thorpej
139cdc3125 Make nbuf, nswbuf, and bufpages unsigned. Make all operations on these
variables unsigned, and update places where their values are printed.
2002-08-25 20:21:33 +00:00
thorpej
ffdedb6d80 In pmap_map_in_l1() and pmap_unmap_in_l1(), make sure that the VA
that is passed in is already aligned to a 4M super-section.
2002-08-24 03:10:40 +00:00
thorpej
d158b3a37a When we allocate a PTP, make sure the offset we specify is for
the 4M super-section that the PTP will map, not some random 1M
chunk of it.  This gives the PTP hint code a much better chance
of working properly, and allows us to tidy up the code that
flushes a PTP from the cache in pmap_destroy().
2002-08-24 02:50:53 +00:00
thorpej
aafe6e006c Define macros describing the 4M super-sections that our pmap
actually uses (since we allocate PT pages in 4K chunks, rather
than 1K chunks).
2002-08-24 02:48:50 +00:00
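
A sketch of the kind of definitions and checks the last two commits describe;
the macro names here are hypothetical, but the arithmetic is as stated (a 4K
PT page maps four 1M L1 sections, i.e. 4M of VA):

	#define	L1_SSEC_SIZE	(4 * 1024 * 1024)	/* hypothetical names */
	#define	L1_SSEC_OFFSET	(L1_SSEC_SIZE - 1)
	#define	L1_SSEC_FRAME	(~L1_SSEC_OFFSET)

	/* e.g. in pmap_map_in_l1(): insist on super-section alignment. */
	KASSERT((va & L1_SSEC_OFFSET) == 0);
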
thorpej
77a6866508 Enable caching on kernel and user page tables. This saves having
to do uncached memory access during VM operations (which can be
quite expensive on some CPUs).

We currently write back PTEs as soon as they're modified; there is
some room for optimization (to write them back in larger chunks).
For PTEs in the APTE space (i.e. PTEs for pmaps that describe another
process's address space), PTEs must also be evicted from the cache
completely (PTEs in PTE space will be evicted during a context switch).
2002-08-24 02:16:30 +00:00
briggs
02aeef1d79 Handle copies to unaligned addresses a bit better. 2002-08-22 05:01:02 +00:00
thorpej
6cc7c1c1ff * Add PTE_SYNC() and PTE_SYNC_RANGE() macros. These don't actually do
anything yet.
* Use PTE_SYNC() and PTE_SYNC_RANGE() in some obvious places, i.e.
  where vtopte() is used.
2002-08-22 01:13:53 +00:00
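
A minimal sketch of such placeholder macros, and of what they are expected to
grow into once the page tables are cached (cpu_dcache_wb_range() is the
existing arm cpufunc interface; the final definitions may differ):

	/* For now: no-ops that simply mark every place a PTE is modified. */
	#define	PTE_SYNC(pte)			/* nothing */
	#define	PTE_SYNC_RANGE(pte, cnt)	/* nothing */

	/* Later, with cacheable page tables, PTE_SYNC() can become e.g.: */
	#define	PTE_SYNC(pte)							\
		cpu_dcache_wb_range((vaddr_t)(pte), sizeof(pt_entry_t))
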
thorpej
574a9cc019 Use a pool cache for PT-PTs. 2002-08-21 21:22:52 +00:00
thorpej
5fddbbe3d5 Do cached memory access to L1 tables, making sure to write back the
cache after any L1 table modifications.
2002-08-21 18:34:31 +00:00
briggs
88452ee2b5 Coalesced writes do not always work on XScale systems. If
XSCALE_NO_COALESCE_WRITES is set, disable write coalescing; otherwise, enable it.
2002-08-20 02:30:51 +00:00
briggs
50e0ea7aa2 Enable branch prediction and write coalescing on XScale. 2002-08-20 02:00:46 +00:00
thorpej
a7d44c2503 Use separate function pointers for dmamap_sync pre- vs post- operations.
Change the bus_dmamap_sync() macro to test the ops argument against pre-
and post- constants.  The compiler will optimize out dead code because
of the constants.  Since post- operations are not needed on ARM (except
for ISA bounce buffers), this eliminates a large number of function calls
that are no-ops, each of which costs at least 6 cycles just in the call
and return overhead (not to mention whatever other useless work the
compiler decides to do in the callee).
2002-08-17 20:46:26 +00:00
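
The idea, sketched with assumed member names for the two hooks (the
BUS_DMASYNC_* constants are the standard ones):

	#define	bus_dmamap_sync(t, m, o, l, ops)				\
	do {									\
		if ((ops) & (BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE))	\
			(*(t)->_dmamap_sync_pre)((t), (m), (o), (l), (ops));	\
		if ((ops) & (BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE))	\
			(*(t)->_dmamap_sync_post)((t), (m), (o), (l), (ops));	\
	} while (/*CONSTCOND*/0)

When ops is a compile-time constant containing only PRE operations, the second
test is dead code and the post call disappears entirely.
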
briggs
126f6cf9bc Add a new option EVBARM_BOARDTYPE to differentiate between different
evbarm ports.  Inline _splraise/_spllower/splx for i80321 and iq80310
for better performance.
2002-08-17 16:42:20 +00:00
thorpej
003b8e8bca More local label fixups. 2002-08-17 16:36:31 +00:00
briggs
20267a208f Do not trim 'offset' from 'len' in _bus_dmamap_sync_linear(). 2002-08-17 05:14:10 +00:00
thorpej
7cbd25232f Use correct-for-ELF local labels. 2002-08-17 03:14:47 +00:00
briggs
d86c947b8c Inline bus_dma_inrange() and bus_dmamap_sync_*(). 2002-08-17 01:15:15 +00:00
thorpej
50fe583069 Must ... micro ... optimize!
* Save an instruction in the transition from idle to have-process-to-
  switch-to, and eliminate two instructions that cause datadep-stalls
  on StrongARM and XScale (one in each idle block).
* Rearrange some other instructions to avoid datadep-stalls on StrongARM
  and XScale.
* Since cpu_do_powersave == 0 is by far the most common case, avoid a
  pipeline flush by reordering the two idle blocks.
2002-08-17 01:08:21 +00:00
chris
1334ab7d1e Following Jason's change to _xscale, convert bpl's to bhi's; this saves looping more than needed in some cases. 2002-08-17 01:02:38 +00:00
thorpej
ebff575bc3 * Add a new machdep.powersave sysctl, which controls the use of
the CPU's "sleep" function in the idle loop.
* Default all CPUs to not use powersave, except for the PDA processors
  (SA11x0 and PXA2x0).

This significantly reduces interrupt latency in high-performance
applications (and was good to squeeze another ~10% out of an XScale
IOP on a Gig-E benchmark).
2002-08-16 15:25:53 +00:00
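
A small userland example of flipping the new knob (error handling omitted;
sysctlbyname(3) is the standard interface, and the node name is as above):

	#include <sys/param.h>
	#include <sys/sysctl.h>

	int
	set_powersave(int on)
	{
		/* machdep.powersave: 0 = spin in the idle loop,
		 * 1 = use the CPU's sleep function. */
		return sysctlbyname("machdep.powersave", NULL, NULL,
		    &on, sizeof(on));
	}
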
briggs
b84fadb38b i80200_extirq_dispatch takes a struct irqframe * now. 2002-08-16 04:55:48 +00:00
thorpej
cb80293b4b If __ARMEB__ is defined, always set CPU_CONTROL_BEND_ENABLE in
the CPU control register.
2002-08-16 00:06:26 +00:00
briggs
c0366588ce Use local label names (.Lfoo vs. (Lfoo or foo)) 2002-08-15 01:38:16 +00:00
briggs
fa81e3d75e * Use local label names (.Lfoo vs. (Lfoo or foo))
* When moving from cpsr, use "cpsr" instead of "cpsr_all" (which is
   provided, but doesn't make sense since mrs doesn't support fields
   like msr does).
2002-08-15 01:37:01 +00:00
thorpej
45adf20cfe Whitespace. 2002-08-14 23:53:07 +00:00
thorpej
4706ae8670 Use cpsr_c rather then cpsr_all where appropriate. 2002-08-14 23:33:11 +00:00
thorpej
278ecc271f Fix some whitespace. 2002-08-14 23:30:21 +00:00
thorpej
323a5902ee Garbage-collect some unused routines. 2002-08-14 23:24:46 +00:00
thorpej
ad73349331 We only need to modify the CPSR's control field, so use cpsr_c rather
than cpsr_all.
2002-08-14 23:23:06 +00:00
chris
f4c605201d Tweak asm to avoid a couple of stalls. 2002-08-14 23:07:36 +00:00
thorpej
b45159bad0 When doing PREREAD sync operations, if the start and end addresses
of the range are aligned to a cacheline boundary, then do a dcache-inv
operation, rather than a dcache-wbinv operation.

XXX It could be a little smarter (align using wbinv, inv, then finish
up using wbinv), but even this simple change is good for a nearly 40%
improvement in my test case on XScale.
2002-08-14 22:56:55 +00:00
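
A sketch of the check (arm_dcache_align_mask and the cpu_dcache_*_range()
calls are the existing arm cpufunc interface; the surrounding code is
simplified):

	case BUS_DMASYNC_PREREAD:
		if (((addr | len) & arm_dcache_align_mask) == 0) {
			/* Whole cache lines only: invalidating cannot throw
			 * away dirty data outside the buffer. */
			cpu_dcache_inv_range(addr, len);
		} else {
			/* Partial lines at either end: write back as well. */
			cpu_dcache_wbinv_range(addr, len);
		}
		break;
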
thorpej
8df22142b8 Fix a fencepost in the cache flush routines, caused by using the wrong
condition on a branch (bpl where bhi should have been used).  The error
caused one more line than intended to be flushed, which is particularly
bad if you're doing a dcache-invalidate operation.
2002-08-14 22:53:19 +00:00
briggs
4bb5ae3d09 Inline SetCPSR calls where it seems prudent to do so. This avoids two
branches and allows the compiler to better utilize registers around
calls to disable/enable/restore_interrupts().
2002-08-14 21:55:52 +00:00
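
A typical caller that benefits (I32_bit and the interrupt helpers are the
existing arm interfaces):

	u_int save = disable_interrupts(I32_bit); /* formerly an out-of-line SetCPSR() */
	/* ... short critical section ... */
	restore_interrupts(save);

With SetCPSR() inlined, the compiler can keep `save' and nearby values in
registers instead of spilling them around a function call.
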
briggs
a957deca48 G/c cowfault. 2002-08-14 21:52:36 +00:00
thorpej
203dd6b325 * Add an ARM32_DMAMAP_COHERENT flag to indicate that a loaded DMA
map contains "coherent" (non-cached in ARM-land) mappings.
* Set ARM32_DMAMAP_COHERENT in the map at the start of a load operation,
  and clear it in _bus_dmamap_load_buffer() if we encounter any cacheable
  mappings.
* In _bus_dmamap_sync(), if the map is marked COHERENT, skip any cache
  flushing.
2002-08-14 20:50:37 +00:00
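
A sketch of the flow with an assumed private-flags member (the flag name is
from the commit):

	/* _bus_dmamap_load*(): assume coherent until proven otherwise. */
	map->_dm_flags |= ARM32_DMAMAP_COHERENT;

	/* _bus_dmamap_load_buffer(): any cacheable mapping clears it. */
	if (cacheable)
		map->_dm_flags &= ~ARM32_DMAMAP_COHERENT;

	/* _bus_dmamap_sync(): nothing to flush for coherent maps. */
	if (map->_dm_flags & ARM32_DMAMAP_COHERENT)
		return;
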
thorpej
eeebe88acf Don't need to frob CPSR in _splraise(). 2002-08-14 19:47:18 +00:00
thorpej
d00a4a068d When making a mapping "coherent", clear *ALL* the cache bits, not
just L2_B and L2_C.
2002-08-14 19:21:50 +00:00
thorpej
201e41fc31 * Rename "word" -> 16, and "long" -> 32, as suggested by Ben Harris.
* Replace __byte_swap_32_variable() with a C version from Richard
  Earnshaw that generates nearly identical assembly (and it would be
  exactly identical with the addition of another peephole to GCC ARM
  back-end).
2002-08-14 15:08:57 +00:00
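
A C formulation along these lines (a sketch, not necessarily the exact code
committed) maps onto the classic four-instruction ARM sequence:

	static __inline uint32_t
	__byte_swap_32_variable(uint32_t x)
	{
		uint32_t t;

		t  = x ^ ((x << 16) | (x >> 16));	/* eor t, x, x, ror #16 */
		t &= ~0x00ff0000U;			/* bic t, t, #0x00ff0000 */
		x  = (x << 24) | (x >> 8);		/* mov x, x, ror #8 */
		return x ^ (t >> 8);			/* eor x, x, t, lsr #8 */
	}
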
thorpej
da5ef20b1a Byte-swapping optimizations, enabled if compiling with GCC:
* Byte-swap 16-bit and 32-bit constants at compile-time.
* Inline 16-bit and 32-bit variable byte-swaps.  These take 3 and 4
  insns, respectively, and inlining saves the minimum 6 cycle penalty
  to call/return from the byte swap function.
2002-08-13 22:41:36 +00:00
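
A sketch of the usual GCC pattern for the constant case (the helper and macro
names follow the previous commit but are assumptions about the exact header
contents):

	#define	__byte_swap_32_constant(x)					\
		((((x) & 0x000000ffU) << 24) |					\
		 (((x) & 0x0000ff00U) <<  8) |					\
		 (((x) & 0x00ff0000U) >>  8) |					\
		 (((x) & 0xff000000U) >> 24))

	#define	bswap32(x)							\
		(__builtin_constant_p((x)) ?					\
		    __byte_swap_32_constant((x)) :				\
		    __byte_swap_32_variable((x)))

The 16-bit case is analogous, using the 3-instruction inline variable swap.
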