add "%" prefix to register names in assembly code.
change assembly functions to return values in %a0 instead of %d0.
C symbols no longer prepend an underscore, adjust assembly code for this.
32-bit values are now 32-bit aligned instead of 16-bit aligned,
adjust structure packing and padding to override this where necessary.
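A hedged, hypothetical illustration of the alignment point (the structure
and field names are made up):

    #include <sys/types.h>

    /*
     * Under the old rules this structure was 6 bytes with "val" at
     * offset 2; with 32-bit values now aligned to 32 bits the compiler
     * would insert 2 bytes of padding before "val".  Where the old
     * layout must be preserved (on-disk or on-wire formats), packing
     * overrides the new default.
     */
    struct old_layout {
        u_int16_t tag;
        u_int32_t val;
    } __attribute__((__packed__));  /* keep the original 6-byte layout */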
make EXEC_ELF std, make EXEC_AOUT and COMPAT_AOUT_M68K optional.
use the MI loadfile() instead of several home-grown versions.
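A minimal sketch of a boot loader using the MI loadfile(); the marks/flags
interface shown here (MARK_*, LOAD_KERNEL) is assumed from the libsa
loadfile.h and may differ in detail from port to port:

    #include <lib/libsa/stand.h>
    #include <lib/libsa/loadfile.h>

    int
    load_kernel(const char *path)
    {
        u_long marks[MARK_MAX];

        marks[MARK_START] = 0;          /* load at the default offset */
        if (loadfile(path, marks, LOAD_KERNEL) == -1)
            return -1;                  /* open/read/format error */
        /* marks[MARK_ENTRY] now holds the kernel entry point. */
        return 0;
    }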
state into global and per-CPU scheduler state:
- Global state: sched_qs (run queues), sched_whichqs (bitmap
of non-empty run queues), sched_slpque (sleep queues).
NOTE: These may collectively move into a struct schedstate
at some point in the future.
- Per-CPU state, struct schedstate_percpu: spc_runtime
(time process on this CPU started running), spc_flags
(replaces struct proc's p_schedflags), and
spc_curpriority (usrpri of processes on this CPU).
- Every platform must now supply a struct cpu_info and
a curcpu() macro. Simplify existing cpu_info declarations
where appropriate.
- All references to per-CPU scheduler state now made through
curcpu(). NOTE: this will likely be adjusted in the future
after further changes to struct proc are made.
Tested on i386 and Alpha. Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
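A hedged sketch of the shape this gives each port; the field names follow
the text above, while the member types and the cpu_info_store arrangement
are only illustrative:

    #include <sys/types.h>
    #include <sys/time.h>

    /* Per-CPU scheduler state, embedded in each port's struct cpu_info. */
    struct schedstate_percpu {
        struct timeval spc_runtime;  /* time curproc started running here */
        int spc_flags;               /* replaces struct proc's p_schedflags */
        u_char spc_curpriority;      /* usrpri of curproc on this CPU */
    };

    struct cpu_info {
        struct schedstate_percpu ci_schedstate;  /* scheduler state */
        /* ... machine-dependent members ... */
    };

    /*
     * A uniprocessor port can satisfy the curcpu() requirement with a
     * single static instance, e.g.:
     */
    extern struct cpu_info cpu_info_store;
    #define curcpu()    (&cpu_info_store)

    /*
     * Per-CPU scheduler state is then always reached through curcpu():
     *
     *     curcpu()->ci_schedstate.spc_curpriority = p->p_usrpri;
     */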
contains the values __SIMPLELOCK_LOCKED and __SIMPLELOCK_UNLOCKED, which
replace the old SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED. These files
are also required to supply inline functions __cpu_simple_lock(),
__cpu_simple_lock_try(), and __cpu_simple_unlock() if locking is to be
supported on that platform (i.e. if MULTIPROCESSOR is defined in the
_KERNEL case). Change these functions to take an int * (&alp->lock_data)
rather than the struct simplelock * itself.
These changes make it possible for userland to use the locking primitives
by including <machine/lock.h>.
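A hedged sketch of what such a <machine/lock.h> might look like; GCC's
__sync builtins stand in here for whatever atomic test-and-set sequence
the port really uses:

    #ifndef _EXAMPLE_MACHINE_LOCK_H_
    #define _EXAMPLE_MACHINE_LOCK_H_

    #define __SIMPLELOCK_LOCKED   1
    #define __SIMPLELOCK_UNLOCKED 0

    static __inline void
    __cpu_simple_lock(volatile int *alp)
    {
        /* Spin until the previous value was UNLOCKED, i.e. we got the lock. */
        while (__sync_lock_test_and_set(alp, __SIMPLELOCK_LOCKED)
            != __SIMPLELOCK_UNLOCKED)
            continue;
    }

    static __inline int
    __cpu_simple_lock_try(volatile int *alp)
    {
        /* Non-zero return means the lock was acquired. */
        return __sync_lock_test_and_set(alp, __SIMPLELOCK_LOCKED)
            == __SIMPLELOCK_UNLOCKED;
    }

    static __inline void
    __cpu_simple_unlock(volatile int *alp)
    {
        __sync_lock_release(alp);   /* stores __SIMPLELOCK_UNLOCKED (0) */
    }

    #endif /* _EXAMPLE_MACHINE_LOCK_H_ */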
that is, priority is raised. Add a new spllowersoftclock() to provide the
atomic drop-to-softclock semantics that the old splsoftclock() provided,
and update calls accordingly.
This fixes a problem with using the "rnd" pseudo-device from within
interrupt context to extract random data (e.g. from within the softnet
interrupt) where doing so would incorrectly unblock interrupts (causing
all sorts of lossage).
XXX 4 platforms do not have priority-raising capability: newsmips, sparc,
XXX sparc64, and VAX. These platforms still have this bug until their
XXX spl*() functions are fixed.
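A hedged sketch of the resulting usage pattern; the function and the data
it protects are hypothetical, only splsoftclock(), spllowersoftclock(),
and splx() are the interfaces discussed above:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <machine/intr.h>   /* spl*(); exact header varies by port */

    /*
     * Protecting data shared with the softclock interrupt is now safe
     * even from within another interrupt handler, because splsoftclock()
     * only ever raises the priority.
     */
    void
    example_touch_callout_state(void)
    {
        int s;

        s = splsoftclock();     /* raise (never lower) to softclock level */
        /* ... modify state shared with softclock() ... */
        splx(s);                /* restore the previous priority */
    }

    /*
     * Only code that genuinely wants to drop the priority down to the
     * softclock level should be converted to the new spllowersoftclock().
     */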
minor of libc and the major of libutil). For little-endian architectures
merge the bswap() assembly versions with the ntoh* and hton* functions using
symbol aliasing. Use symbol renaming for the bswap functions in this case to avoid
namespace pollution.
Declare bswap* in machine/bswap.h, not machine/endian.h. For little-endian
machines, common code for the inline macros goes in machine/byte_swap.h.
Sync libkern with libc.
Adjust #include in kernel sources for machine/bswap.h.
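To illustrate the little-endian aliasing idea in C (the committed versions
are the assembly ones mentioned above; this sketch only shows the shape of
the arrangement):

    #include <sys/cdefs.h>
    #include <sys/types.h>

    u_int32_t
    bswap32(u_int32_t x)
    {
        return ((x & 0xff000000U) >> 24) |
               ((x & 0x00ff0000U) >>  8) |
               ((x & 0x0000ff00U) <<  8) |
               ((x & 0x000000ffU) << 24);
    }

    /*
     * On a little-endian machine ntohl() and htonl() are exactly this
     * byte swap, so they are exported as aliases of the one
     * implementation rather than as separate copies.
     */
    __weak_alias(ntohl, bswap32)
    __weak_alias(htonl, bswap32)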
as with user-land programs, include files are installed by each directory
in the tree that has includes to install. (This allows more flexibility
as to what gets installed, makes 'partial installs' easier, and gives us
more options as to which machines' includes get installed at any given
time.) The old SYS_INCLUDES={symlinks,copies} behaviours are _both_
still supported, though at least one bug in the 'symlinks' case is
fixed by this change. Include files can't be built before installation,
so directories that have includes as targets (e.g. dev/pci) have to move
those targets into a different Makefile.