Modernise the linker script and make sure we have a symbol after the
link_sets.
Make sure that pmap_bootstrap doesn't tell uvm that some of the kernel
phys pages are free. Previously uvm would very likely hand out the phys
pages used for the link_sets to a MALLOC() allocation and they'd get overwritten.
Thanks to David H. Gutteridge for testing various things.
modifies machine/db_machdep.h: BKPT_SET(inst) becomes BKPT_SET(inst, addr) for all archs,
i.e. it passes the breakpoint address as well.
Patch from cherry@mahiti.org
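A hedged illustration of the interface change (BKPT_INST stands for the port's
breakpoint instruction; the real definitions live in each machine/db_machdep.h):

    /* before: the breakpoint word was chosen from nothing but the opcode */
    /* #define BKPT_SET(inst)       (BKPT_INST) */

    /* after: the address is available too, so a port (e.g. sh5) can vary
     * the instruction based on where the breakpoint is being planted */
    #define BKPT_SET(inst, addr)    (BKPT_INST)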
trace_is_enabled() to return TRUE if SYSCALL_DEBUG is defined, and g/c
all of the SYSCALL_DEBUG handling from individual system call dispatch
routines.
- proc_is_traced_p() -> trace_is_enabled(), to match trace_enter() and
trace_exit().
- trace_is_enabled() becomes a real function.
- Remove unnecessary include files from various files that used to care
  about KTRACE and SYSTRACE, but no longer do.
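A hedged sketch of what a per-port syscall dispatcher looks like after the change
(argument lists are abbreviated and are not the exact kernel prototypes):

    if (__predict_false(trace_is_enabled(p))) {
            /* covers ktrace, systrace and (if configured) SYSCALL_DEBUG */
            if ((error = trace_enter(/* lwp, code, args, ... */)) != 0)
                    goto out;
    }

    error = (*callp->sy_call)(l, args, rval);

    out:
    if (__predict_false(trace_is_enabled(p)))
            trace_exit(/* lwp, code, args, rval, error */);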
sys/bswap.h in order to pick up the MD inline routines and the constant
folding definitions in the right order.
Code can include either sys/bswap.h or machine/bswap.h with the same effect.
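For example (bswap32() is the existing byte-swap routine; only the include
shown is what this change is about):

    #include <sys/types.h>
    #include <sys/bswap.h>          /* or <machine/bswap.h>; same effect */

    uint32_t
    example(uint32_t v)
    {
            uint32_t fixed = bswap32(0x12345678);   /* constant-folding definition */
            return bswap32(v) ^ fixed;              /* MD inline routine at run time */
    }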
i/o is done. Instead, pass an opaque cookie which is then passed to a
new routine, coredump_write, which does the actual i/o. This allows the
method of doing i/o to change without affecting any future MD code.
Also, make netbsd32_core.c [re]use core_netbsd.c (in a similar manner that
core_elf64.c uses core_elf32.c) and eliminate that code duplication.
cpu_coredump{,32} is now called twice, first with a NULL iocookie to fill
the core structure and a second time to actually write the MD parts of the
coredump. The i/o is no longer random access and is suitable for shipping
over a stream.
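A hedged sketch of the two-pass pattern; the struct names and the exact
coredump_write() prototype are approximations of what is described above:

    int
    cpu_coredump(struct lwp *l, void *iocookie, struct core *chdr)
    {
            struct md_coredump md_core;     /* MD register/FPU state */

            if (iocookie == NULL) {
                    /* first call: only describe the MD segment in the header */
                    /* ... fill in chdr ... */
                    return 0;
            }

            /* second call: hand the data to the MI code; no random access */
            /* ... fill in md_core ... */
            return coredump_write(iocookie, UIO_SYSSPACE, &md_core,
                sizeof(md_core));
    }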
managed. after the yamt-km changes, the sti driver needs to add execute
permission to the (now unmanaged) mapping for its copy of the card firmware,
and this appears to work fine already.
if it is, then the instruction is an FPU instruction and we must not check
bits 23-25 since they are not a UID field in this format.
this fixes the spurious SIGILLs sometimes triggered by xmpyu instructions.
distinction between signalling NaNs and quiet NaNs back into the
machine-dependent headers; treat the implementation of __nanf in the
same spirit.
IEEE 754 leaves the distinction between signalling NaNs and quiet NaNs
to the implementation, and contrary to what our headers used to suggest,
they're not identical in the interpretation of the fraction's MSb; in due
course, make those of hppa, mips, sh3, and sh5 reflect reality.
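For a single-precision float only the fraction's MSb differs; a hedged
illustration using the now-common convention (MSb set means quiet), which the
legacy hppa/mips/sh convention inverts:

    #include <stdint.h>

    union ieee_single { uint32_t bits; float f; };

    /* sign=0, exponent=0xff; bit 22 (fraction MSb) picks quiet vs signalling */
    union ieee_single qnan = { .bits = 0x7fc00000 };  /* quiet (common convention) */
    union ieee_single snan = { .bits = 0x7fa00000 };  /* signalling (common convention) */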
- don't use managed mappings/backing objects for wired memory allocations.
save some resources like pv_entry. also fix (most of) PR/27030.
- simplify kernel memory management API.
- simplify pmap bootstrap of some ports.
- some related cleanups.
to four (adding size and direction).
In order for topdown uvm to be an option on ports using PMAP_PREFER,
they will need to "prefer" lower addresses if topdown is being used.
Additionally, at least one port also needs to know the size.
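A hedged sketch of the widened hook; the wrapper name pmap_prefer() and the
exact parameter order are assumptions, not a specific port's code:

    /* old:  PMAP_PREFER(fileoffset, &va)                */
    /* new:  PMAP_PREFER(fileoffset, &va, size, topdown) */
    #define PMAP_PREFER(fo, vap, sz, td)    pmap_prefer((fo), (vap), (sz), (td))
    /* with td != 0 the port should bias toward lower addresses, and it may
     * use sz when choosing the preferred alignment */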
- add a RAS hook in cpu_switch().
- fill in the definition of struct mcontext.
- implement cpu_upcall(), cpu_getmcontext(), cpu_setmcontext() and
cpu_switchto().
- for now, force the right privilege bits and space regs in setcontext().
- use correct values for __SIMPLELOCK_*.
- move the user stack to start at a multiple of the pthread stack size
so that libpthread can use the sp-masking trick.
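The sp-masking trick depends only on stacks being aligned to their (power of
two) size; a hedged user-level illustration, where the constants and the
helper are hypothetical:

    #include <stdint.h>

    #define PT_STACKSIZE    (1UL << 18)             /* example: 256 KB */
    #define PT_STACKMASK    (PT_STACKSIZE - 1)

    struct __pthread_st;                            /* per-thread data at the stack base */

    static inline struct __pthread_st *
    pthread__self_from_sp(void *sp)
    {
            /* masking the stack pointer yields the stack base, hence "self" */
            return (struct __pthread_st *)((uintptr_t)sp & ~PT_STACKMASK);
    }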
- move per VP data into struct sadata_vp referenced from l->l_savp
  (a rough sketch of the structure follows this list)
* VP id
* lock on VP data
* LWP on VP
* recently blocked LWP on VP
* queue of LWPs woken which ran on this VP before sleep
* faultaddr
* LWP cache for upcalls
* upcall queue
- add current concurrency and requested concurrency variables
- make process exit run LWP on all VPs
- make signal delivery consider all VPs
- make timer events consider all VPs
- add sa_newsavp to allocate new sadata_vp structure
- add sa_increaseconcurrency to prepare new VP
- make sys_sa_setconcurrency request new VP or wakeup idle VP
- make sa_yield lower current concurrency
- set sa_cpu = VP id in upcalls
- maintain cached LWPs per VP
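A hedged sketch of the per-VP structure described above; field names and
types are illustrative, not the exact ones in the tree:

    struct sadata_vp {
            int               savp_id;         /* VP id */
            struct simplelock savp_lock;       /* lock on this VP's data */
            struct lwp       *savp_lwp;        /* LWP running on this VP */
            struct lwp       *savp_blocker;    /* recently blocked LWP on this VP */
            struct lwplist    savp_woken;      /* woken LWPs that ran here before sleeping */
            vaddr_t           savp_faultaddr;  /* fault address to report in upcalls */
            struct lwplist    savp_lwpcache;   /* cached LWPs for upcalls */
            struct saupcalls  savp_upcalls;    /* pending upcall queue */
    };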
use this in mbus_dmamem_map() to fix corruption of DMA memory.
note that this TLB bit is ignored on some CPUs (PA7100 and probably
others of that era), so this doesn't fix the problem in general,
but it does work on newer models and will make things easier later.
process context ('reaper').
From within the exiting process context:
* deactivate pmap and free vmspace while we can still block
* introduce MD cpu_lwp_free() - this cleans all MD-specific context (such
as FPU state), and is the last potentially blocking operation;
all of cpu_wait(), and most of cpu_exit(), are now folded into cpu_lwp_free()
* process is now immediately marked as zombie and made available for pickup
by parent; the remaining last lwp continues the exit as fully detached
* MI (rather than MD) code bumps uvmexp.swtch; cpu_exit() is now the same
  for both 'process' and 'lwp' exit
uvm_lwp_exit() is modified to never block; the u-area memory is now
always just linked to the list of available u-areas. Introduce (blocking)
uvm_uarea_drain(), which is called to release the excess u-area memory;
this is called by parent within wait4(), or by pagedaemon on memory shortage.
uvm_uarea_free() is now private function within uvm_glue.c.
MD process/lwp exit code now always calls lwp_exit2() immediately after
switching away from the exiting lwp.
g/c now-unneeded routines and variables, including the reaper kernel thread.
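A hedged outline of the new exit path; function names are from the log,
everything else is heavily simplified:

    /* in the exiting lwp's own context, while blocking is still allowed */
    pmap_deactivate(l);     /* pmap/vmspace released early */
    cpu_lwp_free(l, 1);     /* MD cleanup (FPU state, ...); last blocking op */
    /* mark the process a zombie; parent may already collect it in wait4() */
    cpu_exit(l);            /* common for process and lwp exit; switches away */

    /* the MD switch code calls lwp_exit2(l) right after switching away;
     * the u-area is only queued, and uvm_uarea_drain() frees the excess
     * later, from wait4() or the pagedaemon */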
precision back to machine-dependent headers. C99 has no strict
requirement as to which extended-precision type, if any, `long double'
must match, and even between 80-bit formats there are differences in
implementation (m68k vs. x86).
* On arm, consider __VFP_FP__.
* _UC_MACHINE_PC() - access the program counter
* _UC_MACHINE_INTRV() - access the integer return value register
* _UC_MACHINE_SET_PC() - set the program counter (this requires
special handling on some platforms).
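A hedged example of MI use of the new accessors; the "+ 4" step is purely
illustrative and not meaningful on every architecture:

    #include <ucontext.h>

    static void
    skip_one_instruction(ucontext_t *uc)
    {
            long pc = _UC_MACHINE_PC(uc);       /* read the program counter */
            long rv = _UC_MACHINE_INTRV(uc);    /* integer return value register */

            (void)rv;
            _UC_MACHINE_SET_PC(uc, pc + 4);     /* illustrative fixed-size step */
    }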
http://mail-index.netbsd.org/source-changes/2003/05/08/0068.html
There were some side-effects that I didn't anticipate, and fixing them
is proving to be more difficult than I thought, so just eject for now.
Maybe one day we can look at this again.
Fixes PR kern/21517.
space is advertised to UVM by making virtual_avail and virtual_end
first-class variables exported by UVM. Machine-dependent code is
responsible for initializing them before main() is called. Anything
that steals KVA must adjust these variables accordingly.
This reduces the number of instances of this info from 3 to 1, and
simplifies the pmap(9) interface by removing the pmap_virtual_space()
function call, and removing two arguments from pmap_steal_memory().
This also eliminates some kludges such as having to burn kernel_map
entries on space used by the kernel and stolen KVA.
This also eliminates use of VM_{MIN,MAX}_KERNEL_ADDRESS from MI code,
thus giving MD code greater flexibility over the bounds of the managed
kernel virtual address space if a given port's specific platforms can
vary in this regard (this is especially true of the evb* ports).
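A hedged before/after sketch of the bootstrap-time interface; the prototypes
are approximations of the pmap(9) API and may differ in detail:

    /* before: MD code answered a query and took the bounds as out-args */
    pmap_virtual_space(&vstart, &vend);
    va = pmap_steal_memory(size, &vstart, &vend);

    /* after: the bounds are first-class variables, set by MD code before
     * main(); anything stealing KVA simply adjusts them */
    extern vaddr_t virtual_avail, virtual_end;
    va = pmap_steal_memory(size);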
breakpoint address before it's used. Currently a no-op on all but sh5.
This is useful on sh5, for example, to mask off the instruction
type encoding in the bottom two address bits, and makes it possible
to do "db> break $rXX" instead of manually munging the address.
by the application, all NetBSD interfaces are made visible, even
if some other feature-test macro (like _POSIX_C_SOURCE) is defined.
<sys/featuretest.h> defines _NETBSD_SOURCE if none of _ANSI_SOURCE,
_POSIX_C_SOURCE and _XOPEN_SOURCE is defined, so as to preserve
existing behaviour.
This has two major advantages:
+ Programs that require non-POSIX facilities but define _POSIX_C_SOURCE
can trivially be overruled by putting -D_NETBSD_SOURCE in their CFLAGS.
+ It makes most of the #ifs simpler, in that they're all now ORs of the
various macros, rather than having checks for (!defined(_ANSI_SOURCE) ||
!defined(_POSIX_C_SOURCE) || !defined(_XOPEN_SOURCE)) all over the place.
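A representative example, not a specific header; getloadavg() merely stands
in for any declaration visible only outside strict ANSI/POSIX:

    /* before: negated checks scattered through the headers */
    #if !defined(_ANSI_SOURCE) && !defined(_POSIX_C_SOURCE) && \
        !defined(_XOPEN_SOURCE)
    int     getloadavg(double [], int);
    #endif

    /* after: a plain OR of the macros that should expose it */
    #if defined(_NETBSD_SOURCE)
    int     getloadavg(double [], int);
    #endif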
I've tried not to change the semantics of the headers in any case where
_NETBSD_SOURCE wasn't defined, but there were some places where the
current semantics were clearly mad, and retaining them was harder than
correcting them. In particular, I've mostly normalised things so that
_ANSI_SOURCE gets you the smallest set of stuff, then _POSIX_C_SOURCE,
_XOPEN_SOURCE and _NETBSD_SOURCE in that order.
Tested by building for vax, encouraged by thorpej, and uncontested in
tech-userlevel for a week.
possible to use alternate system call tables. This is useful for
correctly displaying the arguments in traces of Mach binaries.
If NULL is given, then the regular system call table for the process is used.
original system call number, which can be negative for a Mach trap.
We cannot just replace code by realcode, because ktrsyscall uses it as
an index in the system call table, thus crashing the kernel when the
value is negative.
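A hedged sketch of the resulting interface; parameter names and order are
approximations of what is described above:

    void    ktrsyscall(struct proc *p, register_t code, register_t realcode,
                const struct sysent *callp, register_t args[]);

    /* 'code' still indexes the (possibly alternate) table 'callp'; 'realcode'
     * is what ends up in the ktrace record and may be negative for a Mach
     * trap.  A NULL 'callp' means the process's regular system call table. */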
since BTLB entries can be scarce and very little of an I/O subsystem normally
needs to be mapped.
Instead, the pmap now allows mappings of I/O space to be entered with
pmap_kenter_pa. bus_space mappings for small amounts of I/O space (as for
virtually all devices) are made this way, with BTLB entries still used for
large mappings for things like framebuffers.
This has led to more, and cleaner, uses of bus_space(9) and has caused
some autoconf cleanup. Also, kgdb is now attached and connected before
autoconfiguration, which is much earlier than before.
in struct hppa_cpu_info or anywhere else, now there are just hppa_btlb_*
functions. Added support for machines with split I/D and variable-range
BTLBs. Added support for purging BTLB entries.
to deal with aliasing of regular memory pages, because many processors don't
support it.
Now, the pmap marks all mappings of a page that has any non-equivalent
aliasing and any writable mapping, and the fault handlers watch for this
and flush other mappings out of the TLB and cache before (re)entering a
conflicting mapping.
When a page has non-equivalent aliasing, only one writable mapping at
a time may be in the TLB and cache. If no writable mapping is in the
TLB and cache, any number of read-only mappings may be.
The PA7100LC/PA7300LC fault handlers have not been converted yet.
counters. These counters do not exist on all CPUs, but where they
do exist, can be used for counting events such as dcache misses that
would otherwise be difficult or impossible to instrument by code
inspection or hardware simulation.
pmc(9) is meant to be a general interface. Initially, the Intel XScale
counters are the only ones supported.
maps it with BTLB entries, to minimize the number of BTLB entries
needed.
Because the CPU type was often guessed incorrectly, the mapping of
HP board number to system name now includes information about the
expected CPU type.
* struct sigacts gets a new sigact_sigdesc structure, which has the
sigaction and the trampoline/version. Version 0 means "legacy kernel
provided trampoline". Other versions are coordinated with machine-
dependent code in libc.
* sigaction1() grows two more arguments -- the trampoline pointer and
the trampoline version.
* A new __sigaction_sigtramp() system call is provided to register a
trampoline along with a signal handler.
* The handler is no longer passed to sendsig() functions. Instead,
sendsig() looks up the handler by peeking in the sigacts for the
process getting the signal (since it has to look in there for the
trampoline anyway).
* Native sendsig() functions now select the appropriate trampoline and
its arguments based on the trampoline version in the sigacts.
Changes to libc to use the new facility will be checked in later. Kernel
version not bumped; we will ride the 1.6C bump made recently.
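A hedged sketch of the new system call; the exact prototype may differ:

    int     __sigaction_sigtramp(int sig, const struct sigaction *act,
                struct sigaction *oact, const void *tramp, int vers);

    /* vers == 0 requests the legacy kernel-provided trampoline; any other
     * version selects a libc-provided trampoline at 'tramp' whose calling
     * convention is agreed between libc and the MD kernel code. */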