not support a value (e.g., it's to be used as "options FOO" instead of
"options FOO=xxx"). options that take a value were converted to
defparam recently.
- minor whitespace & formatting cleanups
contain information suitable for allowing other parts of the kernel
to determine if a memory region is aligned to the largest data cache
line size present in the system.
Add a mips_dcache_compute_align() function which must be called whenever
one of the data cache line size variables is changed, in order to
compute mips_dcache_align and mips_dcache_align_mask.
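For illustration, a minimal sketch of such a function (the per-level
line-size variable names are assumptions, not taken from this change):

	extern u_int mips_pdcache_line_size;	/* assumed name */
	extern u_int mips_sdcache_line_size;	/* assumed name */
	u_int mips_dcache_align, mips_dcache_align_mask;

	void
	mips_dcache_compute_align(void)
	{
		u_int align;

		/* largest data cache line size present in the system */
		align = mips_pdcache_line_size;
		if (mips_sdcache_line_size > align)
			align = mips_sdcache_line_size;

		mips_dcache_align = align;
		mips_dcache_align_mask = align - 1;
	}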
executable mappings. Stop overloading VTEXT for this purpose (VTEXT
also has another meaning).
- Rename vn_marktext() to vn_markexec(), and use it when executable
mappings of a vnode are established.
- In places where we want to set VTEXT, set it in v_flag directly, rather
than making a function call to do this (it no longer makes sense to
use a function call, since we no longer overload VTEXT with VEXECMAP's
meaning).
VEXECMAP suggested by Chuq Silvers.
the etc Makefile override that by putting USETOOLS into $.MAKEOVERRIDES
This way the default for kernel compiles is still to use the installed
toolchain instead of depending on $TOOLDIR. $TOOLDIR can be used by
simply adding USETOOLS=yes to the command line as usual.
Adjust each port's template to set the default "no" setting, and also pull in
bsd.own.mk if they weren't already, to ensure they'll build correctly
with the new toolchain setup.
COP0_SYNC
On the R5900, mtc0, tlbr, tlbp, tlbwi and tlbwr must be followed by sync.p.
If MIPS3_5900 is defined, COP0_SYNC is defined as sync.p; otherwise it is empty.
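A minimal sketch of how the option might be spelled in a shared header
(assuming it is used as an assembler macro after the affected instructions):

	#ifdef MIPS3_5900
	#define	COP0_SYNC	sync.p	/* R5900: required after mtc0/tlb ops */
	#else
	#define	COP0_SYNC		/* nothing */
	#endif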
IPL_ICU_MASK
Mask interrupts directly at the ICU instead of via SR.IM.
I've added this feature to support software interrupts for the R5900,
and this option may be useful for platforms which have a cascaded ICU.
kernel compile flags as well as "-mlong-calls" so that calls from the
LKM in KSEG2 work to the kernel in KSEG0.
MIPS LKMs now build and can be loaded with the right Magick command line
args to modload(8). Changes to modload coming...
Thanks to Chris Demetriou for pointing out the -mlong-calls gcc option
that had been staring me in the face all along.
use command line parameters to ld(1) instead to set the endian format.
Clean up some endian decisions in mips/conf/Makefile.mips.
Wrap some long lines.
format specific.
Struct emul has an e_setregs hook back, which points to the emulation-specific
setregs function.  es_setregs of struct execsw now points only to an
optional executable-specific setup function (this is only used for
ECOFF).
This will allow improvements to the pmaps so that they can more easily defer
expensive operations, e.g. TLB/cache flushes, until the last possible moment.
Currently this is a no-op on most platforms, so they should see no difference.
Reviewed by Jason.
need to check for curproc being non-NULL since none of the pmap
interfaces which are legal to use in interrupt handlers use this macro.
- use the hit op when flushing the cache in pmap_kremove().
- avoid trusting the optimizer in pmap_clear_reference().
- fix pmap_clear_modify() to reset the mod-bit emulation so we can
detect further modifications to the page, also flushing the cache
for any mappings which might have dirty lines.
passed which is larger than an int but has int alignment. As well as
fixing the described problem, this is the same way it is handled in the
Irix and Ultrix header files.
Problem and suggested solution by Uros Prestor on the port-mips mailing
list.
remove some checks for impossible conditions.
in pmap_enter(), only call pmap_remove() to remove an existing mapping
if there actually is an existing mapping.
in pmap_remove_pv(), don't flush the MIPS1 cache when removing the last mapping.
this was added in rev 1.97, to avoid stale data being left in the cache
when the page is zeroed bypassing the cache in pmap_zero_page_uncached().
we've since found that bypassing the cache for idle-loop page zeroing
doesn't work very well anyway, so we don't do that anymore.
so now we can remove the extra cache-flush.
remove pmap_zero_page_uncached() while I'm thinking of it.
various other cleanup.
interrupt to sneak in after EXL had been set; the interrupt EPC was stale
as PC isn't saved if EXL is set, causing the eret to return to the wrong
place and leading to kernel-mode TLB misses on user addresses. The bug
was discovered by the Japanese NetBSD/*mips folks and the same fix was
found independently by shinohara-san (shin@netbsd.org).
- Use pools for pmap structures and pv_entry structures.
- Remove a bunch of splvm()/splx(), no longer needed now that
pmap_kenter_pa() and pmap_kremove() are as they should be.
Mostly from Chuck Silvers.
there are just far too many combinations to handle with magic
#ifdefs in any sane way. Also, add a HitFlushDCache op to the
"locoresw", and fill it in as appropriate (it's NULL on MIPS-I,
so watch out).
These changes ensure that my R4600 Indy (with 2-way cache) gets
the correct cache ops when the kernel is built with only MIPS3
support, resulting in a kernel that is significantly more stable.
distinguished by SYSID register in the system controller. Note
that PRiD 0x20 is for a standalone RC32364 processor which has the
same 32300 core inside.  It is better to name them by the MIPS32 ISA.
(I have a nice 'install' target for cobalts here, but that only works there.
I guess I'll put that into htdocs now that the cobalt port uses Makefile.mips)
What's wrong: the initial SR value in pcb0 gets overwritten before
the first kthread_create1() is called.  For a normal process which
has user mode it doesn't matter, because proc_trampoline() makes
the process run at spl0 during the exception return path to user mode;
however, kthreads stay in kernel mode and are mistakenly left in the
splhigh condition.  The trouble is visible as severe clock drift when
system activity is high.
is created.  System kthreads are mistakenly left in splhigh state.
pcb0 has an initial SR value for the spl0 condition, which is expected to
be propagated to all of its children.
- use "U" suffix for unsigned constants
- use "L" suffix for long constants
- use "UL" suffix for unsigned long constants
- use hexadecimal instead of decimal
Fixes build problems with vi (now that warnings/errors are enabled) on
mips, powerpc and arm platforms.
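A small illustration of the style (the names and values are examples only,
not the actual constants):

	#define	EXAMPLE_MAX	0x7fffffff	/* hexadecimal, plain int */
	#define	EXAMPLE_UMAX	0xffffffffU	/* "U" for unsigned */
	#define	EXAMPLE_LMAX	0x7fffffffL	/* "L" for long */
	#define	EXAMPLE_ULMAX	0xffffffffUL	/* "UL" for unsigned long */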
each vm_page structure. Add a VM_MDPAGE_INIT() macro to init this
data when pages are initialized by UVM. These macros are mandatory,
but ports may #define them to nothing if they are not needed/used.
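For example, a port with no machine-dependent per-page data could, as noted
above, define them away:

	#define	VM_MDPAGE_MEMBERS	/* nothing */
	#define	VM_MDPAGE_INIT(pg)	/* nothing */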
This deprecates struct pmap_physseg. As a transitional measure,
allow a port to #define PMAP_PHYSSEG so that it can continue to
use it until its pmap is converted to use VM_MDPAGE_MEMBERS.
Use all this stuff to eliminate a lot of extra work in the Alpha
pmap module (it's smaller and faster now). Changes to other pmap
modules will follow.
to <sys/types.h> and <sys/stdint.h>.
* Add a new C99 <stdint.h> header, which provides integer types of
explicit width, related limits and integer constant macros.
* Extend <inttypes.h> to provide <stdint.h> definitions and format
macros for printf() and scanf().
* Add C99 strtoimax() and strtoumax() functions.
* Use the latter within scanf().
* Add C99 %j, %t and %z printf()/scanf() conversions for
intmax_t, pointer-type and size_t arguments.
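A hedged usage sketch of the new facilities (plain C99, not code from this
change):

	#include <inttypes.h>
	#include <stddef.h>
	#include <stdio.h>

	int
	main(void)
	{
		intmax_t  big = INTMAX_C(1) << 40;	/* constant macro */
		size_t    len = sizeof(big);
		ptrdiff_t off = 4;

		printf("%jd %zu %td\n", big, len, off);	/* %j, %z, %t */
		printf("%" PRIx32 "\n", (uint32_t)0xdeadbeef);	/* format macro */
		printf("%jd\n", strtoimax("123", NULL, 10));	/* strtoimax() */
		return 0;
	}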
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).
These calls are relatively conservative. It may be possible to
optimize these a little more.
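For reference, the pmap(9) prototypes of the interfaces listed above
(as documented around this time; hedged, not copied from this change):

	int	pmap_enter(pmap_t, vaddr_t, paddr_t, vm_prot_t, int /* flags */);
	void	pmap_remove(pmap_t, vaddr_t /* sva */, vaddr_t /* eva */);
	void	pmap_protect(pmap_t, vaddr_t, vaddr_t, vm_prot_t);
	void	pmap_kenter_pa(vaddr_t, paddr_t, vm_prot_t);
	void	pmap_kremove(vaddr_t, vsize_t);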
calling pmap_steal_memory() directly. On these platforms, since
uvm_pageboot_alloc() is a wrapper around pmap_steal_memory(), there
is no functional change. This is merely for API consistency.
which have pmap_steal_memory(). This is to reduce the API differences
between pmaps that implement pmap_steal_memory() and pmaps which do
not.
Note that pmap_steal_memory() needs to adjust *vstartp and/or
*vendp only if it used addresses within the range provided to UVM
via the pmap_virtual_space() call. I.e. it is not necessary to do
so in any current pmap_steal_memory() implementation.
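The pmap(9) prototype in question, for reference (hedged):

	vaddr_t	pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp);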
call overhead is incurred as we start sprinkling pmap_update() calls
throughout the source tree (no pmaps currently defer operations, but
we are adding the infrastructure to allow them to do so).
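A minimal sketch of why no call overhead is incurred: a pmap that defers
nothing can make the call vanish entirely, e.g.:

	#define	pmap_update()	/* nothing; no operations are deferred */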
when the error is ERESTART.  Otherwise, the user process will re-issue the
syscall with a broken system call number, get a SIGSYS signal, and terminate.
Patch made by Jason R Thorpe <thorpej@zembu.com>; tested by me.
and link it directly to db_command_table[] so that it's not necessary
to do this at runtime. Make db_machine_command_table[] const on all ports.
g/c now unneeded stuff, like db_machine_commands_install(), db_machine_init()
Patch written by enami.
as hacked by mycroft.
- Use syscall_intern() to give a process a plain or fancy
syscall based on ktrace flags.
- Avoid copying from the trapframe into a local array as much
as possible.
Yields roughly 5% improvement on a 25MHz R3000 (DECstation 5000/200)
on a simple syscall benchmark.
There's still some work that can be done using __HAVE_MINIMAL_EMUL.
XXX if you have libc after citrus locale import, please recompile libc,
and your applications that use mbstate_t (rather rare). really sorry
for the mess.
only signal handler array sharable between threads
move other random signal stuff from struct proc to struct sigctx
This addresses kern/10981 by Matthew Orgass.
* move all exec-type specific information from struct emul to execsw[] and
provide single struct emul per emulation
* elf:
- kern/exec_elf32.c:probe_funcs[] is gone, execsw[] now has one entry
per emulation and contains pointer to respective probe function
- interp is allocated via MALLOC() rather than on stack
- elf_args structure is allocated via MALLOC() rather than malloc()
* ecoff: the per-emulation hooks moved from alpha and mips specific code
to OSF1 and Ultrix compat code as appropriate, execsw[] has one entry per
emulation supporting ecoff with appropriate probe function
* the makecmds/probe functions don't set emulation, pointer to emulation is
part of appropriate execsw[] entry
* constify couple of structures
running with multi-way caches. Since we know the ops will mostly
hit as we just dirtied those lines a single hit op is cheaper than
an index op for each way.
only allowing one mapping at a time instead of mapping uncached. Done
by removing conflicting mappings from the pmap when entering a new
mapping. UVM will remember and re-fault the requested page when needed
for the original mapping. Originally done to support our internal machine
that does not support uncached memory completely. Not enabled by default
currently. It may make sense to try on the cobalt or sgi ports.
the page is wired down. Flushing both halves of a wired TLB entry resulted
in hangs when programs called for and released kernel memory
soon after being invoked. In particular, we see this when single-stepping
a process using GDB.
It would be better if we could arrange to use both halves of the TLB
entry for the PCB, but for some reason we frequently end up with things
on an odd page boundary.
pmap_enter() cannot allocate the segmap, return failure if PMAP_CANFAIL
is set, instead of sleeping; otherwise panic.  Both alpha and i386 do this.
Do not pmap_enter_pv() until after this is done, so the data structures are
not partially allocated.  This should prevent pmap_page_protect() from
getting stuck when called from the pagedaemon.
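A hedged sketch of the failure path (only PMAP_CANFAIL and pmap_enter_pv()
are taken from this entry; the other names are illustrative):

	if (segtab == NULL) {			/* could not allocate the segmap */
		if (flags & PMAP_CANFAIL)
			return (ENOMEM);	/* let the caller cope */
		panic("pmap_enter: cannot allocate segmap");
	}
	/* only now is it safe to pmap_enter_pv() */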
for mips_read_statusreg (which was apparently never implemented).
Provide prototypes and implementations for mips_cp0_cause_write,
mips_cp0_status_read, and mips_cp0_status_write. (Writing can, of
course, be quite dangerous.)
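Roughly, the prototypes described above (the exact integer types are hedged):

	u_int32_t mips_cp0_status_read(void);
	void	  mips_cp0_status_write(u_int32_t);
	void	  mips_cp0_cause_write(u_int32_t);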
with a MIPS4 option at this point -- all the code except for one single
spot is conditionalized with MIPS3. So, don't even pretend about
MIPS4 for now, until it all gets cleaned up.
for mips3 (and later) 'ld' and 'sd' instructions. These currently
only are properly implemented for the _MIPS_BSD_API_LP32 and
_MIPS_BSD_API_LP32_64CLEAN 'API's. They're pretty messy, but when you
need them, you really need them.
you do not save it and pass it along in rval the system will start
to fail running user programs. This finishes the suggestion by cgd to
not save some registers on syscall entry.
that the page being zero'd was not completed and that page zeroing
should be aborted. This may be used by machine-dependent code doing
slow page access to reduce the latency of running a process that has
become runnable while in the middle of doing a slow page zero.
in syscall() anymore.  By definition, the processor had SR_INT_IE turned
on prior to the syscall exception.  The MIPS1 assembler hook arranges
to enable the bit on its own.  MIPS3 achieves the same effect by
turning off the EXL bit.
fact that the direct-mapped cache causes address aliasing effects.
- Just turn on the processor master interrupt mask IEc (SR_INT_IE) bit prior
  to calling the syscall() kernel entry point.  IEp is always 1 in this case
  by definition.
and data cache sizes). R4000 uses 2^(12+IC) and 2^(12+DC). IDT32364
uses 2^(9+IC) and 2^(9+DC).
abstract around the problem by making the base a parameter to the
MIPS3_CONFIG_CACHE_SIZE macro. we pass the base down from mips_vector_init
to mips3_vector_init and to mips3_ConfigCache (where it is used).
XXX: someone with an MIPS3_4100 should switch to this and get rid
of the ugly ifdefs in cpuregs.h
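A hedged sketch of the parameterization (the macro shape is illustrative,
not the exact definition):

	/* size = base << field; the base is now an argument */
	#define	MIPS3_CONFIG_CACHE_SIZE(config, mask, shift, base)	\
		((base) << (((config) & (mask)) >> (shift)))

	/* R4000-class: 2^(12+IC) -> base 4096; IDT 32364: 2^(9+IC) -> base 512 */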
a whole 0.01us in lmbench lat_syscall null on our 250MHz QED system.
$at is still saved just to be safe, although it looks like it does
not need to be. $v1 is used in syscall(), although I'm not sure why.
process's segtab, retiring 'pcb_segtab' field from 'struct pcb'.
This would be another MULTIPROCESSOR-unfriendly spot, and the necessity
might be eliminated when the way PTEs are held is redesigned.
it look at casual inspection like 1 nop is needed but play other tricks.
Still, it has been reduced by 1 nop.  Hopefully this covers the NEC 41[x]1;
could not find info for those processors.
in the non-MULTIPROCESSOR case (LOCKDEBUG requires it). Scheduler
lock is held upon entry to mi_switch() and cpu_switch(), and
cpu_switch() releases the lock before returning.
Largely from Bill Sommerfeld, with some minor bug fixes and
machine-dependent code hacking from me.
- MB_LEN_MAX is increased to 32.
- To ensure binary compatibility for old executables
under multibyte locale, versioned setlocale is added.
- __mb_len_cur definition is added in setlocale.c
  and enabled in stdlib.h.
It is also important for multibyte locale support,
but I just forgot.
the _2way and mips3_FlushDCache(). This lets all mips3 cache ops accept
user virtual addresses w/o a tlb miss.  Since this is now done in both
ICache flush routines, no need to do it in pmap.c. Fixed R4400
stability problems with setregs() cache flushing.
is illegal to flush on user addresses. In theory the race exists
on MIPS1, but it is rather unlikely in common use. I have
seen it with regress/sys/kern/sigtramp on a QED 5231 system.
Previously we jal'd to panic(), which never cleared the TLB fault, so if
something in the course of shutdown (like a doshutdownhooks() callback) missed
in K2, it would panic again.  Fix by setting EPC to panic() and doing an eret.
By default this is off, and only slightly changes the code to load SR when
a temp register is available. This can be used by the platform code to
handle slow to clear interrupts (our case) or to mask off any interrupt
at run-time.  This can be very useful for embedded platforms
that have less than desirable interrupt properties.
it would hang up.  logstacktrace() was actually the same as stacktrace(), so
just make it an XLEAF() for now. Include some DDB code for KGDB compilation.
on mips3 systems, until the kernel actually hooks the vectors.
This makes it easier to debug early problems if the firmware
provides an exception handler.
This keeps the SR management more contained in locore, and should
be roughly the same performance as the .text size is less. Talked
to simonb and he was ok with this change.
jhawk. This callback is used by platform code to manage things like
watchdogs that should be disabled while in ddb. Done as a callback
for processors such as mips that support lots of different systems.
Add ddb support for QED opcodes, fill in enough routines so "next" usually
works, kdbpoke support for any size. Add callback that ports can hook when
entering and leaving ddb. This can be used for things like turning
off watchdogs while in ddb.
This lets mips ports have additional machdep sysctl. Define CPUISMIPS3
for MIPS1+MIPS2 as cpu_arch >= 3 to support mips4. Add cpu_intr()
prototype so this is defined in one place.
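A hedged sketch of the described definition:

	#define	CPUISMIPS3	(cpu_arch >= 3)	/* true for mips3 and mips4 */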
* put #includes of opt headers and headers to get protos used by
net/netisr_dispatch.h in net/netisr.h (if !defined(_LOCORE)) (rather than
in netisr_dispatch.h itself, and potentially nowhere, respectively).
* require netisr.h to be included before netisr_dispatch.h.
* minor additional cleanup of both netisr.h and netisr_dispatch.h.
* clean up uses to remove now-unnecessary header file inclusions, and
local prototypes of the fns.
* convert netisr dispatch implementations which didn't use
netisr_dispatch.h (pc532) to use it.
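A hedged sketch of the required include order at a dispatch site:

	#include <net/netisr.h>		/* now pulls in opt headers and protos */
	#include <net/netisr_dispatch.h>	/* must come after netisr.h */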
<vm/pglist.h> -> <uvm/uvm_pglist.h>
<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
<vm/vm_object.h> -> nothing
<vm/vm_pager.h> -> into <uvm/uvm_pager.h>
also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
"off_t" and the return value is a "paddr_t" to allow mappings
at offsets past 2^31 bytes. Somewhat inspired by FreeBSD, which
only changed the offset to a "vm_offset_t".
Includes updates for the i386, pc532 and sh3 mmap from Jason Thorpe.
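A hedged sketch of the widened entry point (the driver name is illustrative):

	paddr_t	foo_mmap(dev_t dev, off_t off, int prot);
				/* offset is now off_t, return value paddr_t */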
family) that usually occur in cold marine waters and often have barbels
and three dorsal fins.
code: a set of instructions for a computer.
The latter is more appropriate in the comment corrected here.
use them.  Rename them to match the names in See MIPS Run; they're not
as orthogonal as values or'd together might make you think... Finally,
actually use them for every bloody cache op.
Merge Kernel MCOUNT and user MCOUNT.
The earlier code which was inserted to call _mcount in profiling
assembler routines is busted badly. This gets it working with PIC
code and should work with any arbitrary assembler routine.
and rename it to MIPS3_TLB_WIRED_UPAGES.
The value of wired register becomes variable on arc port,
and arc is the only mips3 port which uses the wired TLB entries 2..7.
asm statements, obsoleting the asm routines in locore.S.  They are
designed to work in symmetry, as the names suggest.  savefpregs()
does not clear the global variable fpcurproc.  Both would be no-ops when
the NOFPU global symbol is defined.
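Hedged prototypes of the pair (the loadfpregs() name is assumed from the
symmetry, not taken from this entry):

	void	savefpregs(struct proc *);	/* FPA registers -> pcb_fpregs */
	void	loadfpregs(struct proc *);	/* pcb_fpregs -> FPA registers */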
- MDP_FPUSED flag is not turned on for FPA-less processors like Vr4100
and TX3900 even when processes execute FP insns.
- Nuke external function reference of savefpregs() which is already defined
in mips/cpu.h.
- Adjust the comment that says letting user processes change the CP0 status
  register freely might be dangerous.
- MDP_FPUSED flag indicates the process has executed at least one
FP insn during its life time.
- pcb_fpregs storage is guaranteed zero-initialized.  If the process is the
  FPA owner, savefpregs() must be called to synchronize it with the FPA contents.
- There is no need to save the FPA contents into pcb_fpregs before the whole
  storage is overwritten by process_write_fpregs().
doing a cpu_set_kpc(), just pass the entry point and argument all
the way down the fork path starting with fork1(). In order to
avoid special-casing the normal fork in every cpu_fork(), MI code
passes down child_return() and the child process pointer explicitly.
This fixes a race condition on multiprocessor systems; a CPU could
grab the newly created process (which has been placed on a run queue)
before cpu_set_kpc() would be performed.
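A hedged sketch of the resulting MD hook (argument names are illustrative):

	void	cpu_fork(struct proc *p1, struct proc *p2, void *stack,
		    size_t stacksize, void (*func)(void *), void *arg);

	/* for a normal fork, MI code passes func = child_return, arg = p2 */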
- Change ktrace interface to pass in the current process, rather than
p->p_tracep, since the various ktr* functions need curproc anyway.
- Add curproc as a parameter to mi_switch() since all callers had it
handy anyway.
- Add a second proc argument for inferior() since callers all had
curproc handy.
Also, miscellaneous cleanups in ktrace:
- ktrace now always uses file-based, rather than vnode-based I/O
(simplifies, increases type safety); eliminate KTRFLAG_FD & KTRFAC_FD.
Do non-blocking I/O, and yield a finite number of times when receiving
EWOULDBLOCK before giving up.
- move code duplicated between sys_fktrace and sys_ktrace into ktrace_common.
- simplify interface to ktrwrite()
state into global and per-CPU scheduler state:
- Global state: sched_qs (run queues), sched_whichqs (bitmap
of non-empty run queues), sched_slpque (sleep queues).
NOTE: These may collectively move into a struct schedstate
at some point in the future.
- Per-CPU state, struct schedstate_percpu: spc_runtime
(time process on this CPU started running), spc_flags
(replaces struct proc's p_schedflags), and
spc_curpriority (usrpri of processes on this CPU).
- Every platform must now supply a struct cpu_info and
a curcpu() macro. Simplify existing cpu_info declarations
where appropriate.
- All references to per-CPU scheduler state now made through
curcpu(). NOTE: this will likely be adjusted in the future
after further changes to struct proc are made.
Tested on i386 and Alpha. Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
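A hedged sketch of the per-CPU state, built only from the member names
mentioned above (the field types are assumptions):

	struct schedstate_percpu {
		struct timeval	spc_runtime;	/* when curproc started running */
		int		spc_flags;	/* replaces proc's p_schedflags */
		u_char		spc_curpriority; /* usrpri of proc on this CPU */
	};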
from db_stack_trace_cmd() to db_stack_trace_print(),
and add an additional argument, a function pointer for an
output routine (i.e. printf() or db_printf()).
Add db_stack_trace_cmd() in db_command.[ch], calling
db_stack_trace_print() with db_printf() as the printer.
Move count==-1 special handling from db_stack_trace_print() [nee
db_stack_trace_cmd()] to db_stack_trace_cmd() [nascent here].
Again, I'm unable to test compilation on all affected platforms,
so advance apologies for potential brokenness.
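A hedged sketch of the resulting interface (the first four arguments follow
the usual ddb command signature):

	void	db_stack_trace_print(db_expr_t addr, boolean_t have_addr,
		    db_expr_t count, char *modif,
		    void (*pr)(const char *, ...));	/* printf or db_printf */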
which indicates that the process is actually running on a
processor. Test against SONPROC as appropriate rather than
combinations of SRUN and curproc. Update all context switch code
to properly set SONPROC when the process becomes the current
process on the CPU.
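A hedged illustration of the new check:

	if (p->p_stat == SONPROC) {
		/* p is executing on some CPU right now */
	}
	/* previously inferred from SRUN combined with a curproc comparison */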
- Add 16 bytes to the stack on entry to _mcount so we don't
overflow it.
- Use inline interrupt {dis,en}abling instead of calling
a profiled function in locore.