copyin() or copyout().
uvm_useracc() tells us whether the mapping permissions allow access to
the desired part of an address space, and many callers assume that
this is the same as knowing whether an attempt to access that part of
the address space will succeed. however, access to user space can
fail for reasons other than insufficient permission, most notably that
paging in any non-resident data can fail due to i/o errors. most of
the callers of uvm_useracc() make the above incorrect assumption. the
rest are all misguided optimizations, which optimize for the case
where an operation will fail. we'd rather optimize for operations
succeeding, in which case we should just attempt the access and handle
failures due to insufficient permissions the same way we handle i/o
errors. since there appear to be no good uses of uvm_useracc(), we'll
just remove it.
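For illustration, the preferred pattern looks roughly like this (the
helper and its names are hypothetical; only copyin(9) and its error
return come from the real interface):

    #include <sys/param.h>
    #include <sys/systm.h>

    /*
     * Old pattern (removed): call uvm_useracc() first, then copy,
     * assuming the copy cannot fail.  That misses I/O errors hit
     * while paging in non-resident data.
     *
     * New pattern: just attempt the copy and handle any error,
     * whether it is EFAULT from bad permissions or a failure
     * during page-in.
     */
    int
    fetch_user_buf(const void *uaddr, void *kbuf, size_t len)
    {
            int error;

            error = copyin(uaddr, kbuf, len);
            if (error != 0)
                    return error;   /* EFAULT or an I/O error */
            /* ... use kbuf ... */
            return 0;
    }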
General idea: only consider the LWP on the VP for signal delivery, all
other LWPs are either asleep, or running in the window between waking up
and repossessing the VP.
- in kern_sig.c:kpsignal2: handle all states the LWP on the VP can be in
- in kern_sig.c:proc_stop: only try to stop the LWP on the VP. All other
LWPs will suspend in sa_vp_repossess() until the VP-LWP donates the VP.
Restore original behaviour (before SA-specific hacks were added) for
non-SA processes.
- in kern_sig.c:proc_unstop: only return the LWP on the VP
- handle sa_yield as case 0 in sa_switch instead of clearing L_SA, add an
L_SA_YIELD flag
- replace sa_idle by L_SA_IDLE flag since it was either NULL or == sa_vp
Also don't output itimerfire overrun warning if the process is already
exiting.
Also g/c sa_woken because it's not used.
Also g/c some #if 0 code.
Right now the only flag is used to indicate if a ksiginfo_t is a
result of a trap. Add a predicate macro to test for this flag.
* Add initialization macros for ksiginfo_t's.
* Add accessor macro for ksi_trap. Expands to 0 if the ksiginfo_t was
not the result of a trap. This matches the sigcontext trapcode semantics.
* In kpsendsig(), use KSI_TRAP_P() to select the lwp that gets the signal.
Inspired by Matthias Drochner's fix to kpsendsig(), but correctly handles
the case of non-trap-generated signals that have a > 0 si_code.
This patch fixes a signal delivery problem with threaded programs noted by
Matthias Drochner on tech-kern.
As discussed on tech-kern. Reviewed and OK'd by Christos.
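Roughly, the macros could look like the following sketch (KSI_TRAP_P,
ksi_flags and ksi_trap are from the description above; the flag value,
the KSI_TRAPCODE name and the bodies are illustrative, not the exact
definitions):

    /* Sketch only; the real definitions live in the signal headers. */
    #define KSI_TRAP        0x01    /* ksiginfo_t was generated by a trap */

    #define KSI_INIT(ksi)   memset((ksi), 0, sizeof(*(ksi)))
    #define KSI_INIT_TRAP(ksi)                                      \
    do {                                                            \
            KSI_INIT(ksi);                                          \
            (ksi)->ksi_flags = KSI_TRAP;                            \
    } while (/*CONSTCOND*/ 0)

    /* Predicate: was this ksiginfo_t generated by a trap? */
    #define KSI_TRAP_P(ksi) (((ksi)->ksi_flags & KSI_TRAP) != 0)

    /*
     * Accessor for ksi_trap: 0 when the signal did not come from a
     * trap, matching the sigcontext trapcode semantics.
     */
    #define KSI_TRAPCODE(ksi) (KSI_TRAP_P(ksi) ? (ksi)->ksi_trap : 0)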
is set to 0, the function will always return 0 (no packets/events
are permitted)." Before this patch, ppsratecheck returned 1 once
a second when maxpps was 0.
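A sketch of the intended behaviour (not the actual kernel code; the
function body is illustrative):

    #include <sys/time.h>
    #include <sys/systm.h>

    int
    ppsratecheck_sketch(struct timeval *lasttime, int *curpps, int maxpps)
    {
            struct timeval tv, delta;
            int rv;

            microtime(&tv);
            timersub(&tv, lasttime, &delta);
            if (delta.tv_sec >= 1) {        /* new one-second interval */
                    *lasttime = tv;
                    *curpps = 0;
            }

            if (maxpps == 0)
                    rv = 0;         /* never permit anything */
            else if (maxpps < 0)
                    rv = 1;         /* negative means no limit */
            else
                    rv = (*curpps < maxpps);

            (*curpps)++;            /* count the attempt either way */
            return rv;
    }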
syscall sys_timer_settime()) to take an absolute value for realtime
timers. This avoids a pair of gratuitous conversions with the
possibility that the timer's intermediate value would be 0.0, which
would signal timer_settime() to cancel the timer.
Adjust callers of timer_settime() to compensate; catch the case where
sys_timer_settime(), given an absolute time value of "now" and a virtual
timer, would likewise be subtracted down to a timer-cancelling 0.0.
This should fix the bug seen in libpthread's nanosleep() where certain
applications, such as xmms, would wedge with unexpired userlevel
alarms.
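A sketch of the kind of guard involved (hypothetical helper; the real
code lives in the timer_settime()/sys_timer_settime() path):

    #include <sys/time.h>

    /*
     * Convert an absolute expiration time to a relative interval
     * without accidentally producing 0.0, which timer_settime()
     * treats as "cancel the timer".
     */
    static void
    abs_to_rel(const struct timespec *abs, const struct timespec *now,
        struct timespec *rel)
    {
            rel->tv_sec = abs->tv_sec - now->tv_sec;
            rel->tv_nsec = abs->tv_nsec - now->tv_nsec;
            if (rel->tv_nsec < 0) {
                    rel->tv_sec--;
                    rel->tv_nsec += 1000000000;
            }
            if (rel->tv_sec < 0 ||
                (rel->tv_sec == 0 && rel->tv_nsec == 0)) {
                    /* Already expired or exactly "now": fire as soon
                     * as possible instead of cancelling the timer. */
                    rel->tv_sec = 0;
                    rel->tv_nsec = 1;
            }
    }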
fire close together, the second (and every other) timer would be
added to the mask incorrectly - the timerid value would be shifted twice,
and sa_upcall() would later kill the process with SIGILL.
mechanism by keeping a list (bitset) of which timers have fired and using
that list in the upcall (Does this sound familiar? SEND HELP NEED SIGINFO).
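A sketch of the bookkeeping (names are hypothetical): record each
expired POSIX timer in a bitset and hand the whole set to the upcall,
instead of re-shifting a partially built mask on every expiry.

    #include <sys/types.h>

    static uint32_t fired_timers;   /* bit i set => timer id i fired */

    static void
    timer_fired(int timerid)
    {
            fired_timers |= 1U << timerid;  /* idempotent on re-fire */
    }

    static uint32_t
    timers_collect(void)
    {
            uint32_t set = fired_timers;

            fired_timers = 0;               /* consumed by the upcall */
            return set;
    }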
Provoke the idle LWP into running again with setrunnable(sa->sa_idle)
instead of a wakeup() call, since we know what it is.
This is a followup to PR/14558.
- itimerfix(9) limited the number of seconds to 100M, before I changed
it to 1000M for PR/14558.
- nanosleep(2) documents a limit of 1000M seconds.
- setitimer(2), select(2), and other library functions that indirectly
  use setitimer(2), for example alarm(3), don't specify a limit.
So it only seems appropriate that any positive number of seconds in
struct timeval should be accepted by any code that uses itimerfix(9)
directly, except nanosleep(2) which should check for 1000M seconds
manually. This change makes the manual pages of select(2), nanosleep(2),
setitimer(2), and alarm(3) consistent with the code.
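A sketch of the resulting division of labour (the 1000M constant is
from the text; the function bodies are illustrative only):

    #include <sys/time.h>
    #include <sys/errno.h>

    int
    itimerfix_sketch(const struct timeval *tv)
    {
            /* Accept any non-negative, normalized timeval; no
             * arbitrary cap on tv_sec. */
            if (tv->tv_sec < 0 || tv->tv_usec < 0 ||
                tv->tv_usec >= 1000000)
                    return EINVAL;
            return 0;
    }

    int
    nanosleep_limit_sketch(const struct timespec *ts)
    {
            /* nanosleep(2) documents its own limit of 1000M seconds,
             * so it checks that limit itself. */
            if (ts->tv_sec > 1000000000 || ts->tv_nsec < 0 ||
                ts->tv_nsec >= 1000000000)
                    return EINVAL;
            return 0;
    }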
checks root privs, and a lower part that does the actual job. The lower part
will be called by the upcoming clockctl driver. Approved by Christos.
Also fixed a few cosmetic things.
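A sketch of the split (hypothetical names; the privilege check is shown
with the old suser() interface for illustration):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>
    #include <sys/time.h>

    int     settime_lower(const struct timeval *);  /* does the actual job */

    int
    settime_upper(struct proc *p, const struct timeval *tv)
    {
            int error;

            /* Root check lives only in the upper, syscall-facing half;
             * the clockctl driver can call the lower half directly. */
            if ((error = suser(p->p_ucred, &p->p_acflag)) != 0)
                    return error;
            return settime_lower(tv);
    }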
p_cpu member to struct proc. Use this in certain places when
accessing scheduler state, etc. For the single-processor case,
just initialize p_cpu in fork1() to avoid having to set it in the
low-level context switch code on platforms which will never have
multiprocessing.
While I'm here, comment a few places where there are known issues
for the SMP implementation.
state into global and per-CPU scheduler state:
- Global state: sched_qs (run queues), sched_whichqs (bitmap
of non-empty run queues), sched_slpque (sleep queues).
NOTE: These may collectively move into a struct schedstate
at some point in the future.
- Per-CPU state, struct schedstate_percpu: spc_runtime
(time process on this CPU started running), spc_flags
(replaces struct proc's p_schedflags), and
spc_curpriority (usrpri of processes on this CPU).
- Every platform must now supply a struct cpu_info and
a curcpu() macro. Simplify existing cpu_info declarations
where appropriate.
- All references to per-CPU scheduler state now made through
curcpu(). NOTE: this will likely be adjusted in the future
after further changes to struct proc are made.
Tested on i386 and Alpha. Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
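A sketch of the new per-CPU state and how it is reached (the spc_*
names are from the list above; member types and the cpu_info layout
are illustrative):

    #include <sys/types.h>
    #include <sys/time.h>

    struct schedstate_percpu {
            struct timeval  spc_runtime;    /* when the current process
                                               started running on this CPU */
            int             spc_flags;      /* replaces p_schedflags */
            u_char          spc_curpriority; /* usrpri of the process
                                                on this CPU */
    };

    /* Each platform supplies a struct cpu_info holding one of these,
     * plus a curcpu() macro returning a pointer to its own cpu_info. */
    struct cpu_info {
            struct schedstate_percpu ci_schedstate;
            /* ... machine-dependent members ... */
    };

    /* All per-CPU scheduler state is then reached via curcpu(), e.g.
     *      curcpu()->ci_schedstate.spc_curpriority = p->p_usrpri;
     */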
timeout()/untimeout() API:
- Clients supply callout handle storage, thus eliminating problems of
resource allocation.
- Insertion and removal of callouts is constant time, important as
this facility is used quite a lot in the kernel.
The old timeout()/untimeout() API has been removed from the kernel.
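A sketch of typical client use (the foo_* driver is hypothetical; the
handle lives in the client's own softc, so arming a callout never has
to allocate anything):

    #include <sys/callout.h>
    #include <sys/kernel.h>         /* hz */

    struct foo_softc {
            struct callout  sc_timer;       /* client-supplied storage */
    };

    static void
    foo_timeout(void *arg)
    {
            struct foo_softc *sc = arg;

            /* ... periodic work ... */
            callout_reset(&sc->sc_timer, hz, foo_timeout, sc); /* re-arm */
    }

    static void
    foo_attach(struct foo_softc *sc)
    {
            callout_init(&sc->sc_timer);
            callout_reset(&sc->sc_timer, hz, foo_timeout, sc);
    }

    static void
    foo_detach(struct foo_softc *sc)
    {
            callout_stop(&sc->sc_timer);    /* constant-time removal */
    }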
that is, the priority is raised. Add a new spllowersoftclock() to provide the
atomic drop-to-softclock semantics that the old splsoftclock() provided,
and update calls accordingly.
This fixes a problem with using the "rnd" pseudo-device from within
interrupt context to extract random data (e.g. from within the softnet
interrupt) where doing so would incorrectly unblock interrupts (causing
all sorts of lossage).
XXX 4 platforms do not have priority-raising capability: newsmips, sparc,
XXX sparc64, and VAX. These platforms still have this bug until their
XXX spl*() functions are fixed.
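A sketch of the difference in use (spl prototypes are machine-dependent;
the rnd example follows the description above; bodies are illustrative):

    /*
     * Safe from any context now: splsoftclock() only ever raises the
     * priority, so calling it from e.g. the softnet interrupt no
     * longer unblocks interrupts.
     */
    void
    rnd_extract_sketch(void)
    {
            int s;

            s = splsoftclock();
            /* ... touch state shared with softclock ... */
            splx(s);
    }

    /*
     * Code that really wants the old "drop to softclock level"
     * behaviour must now ask for it explicitly.
     */
    void
    softclock_entry_sketch(void)
    {
            spllowersoftclock();
            /* ... run softclock work ... */
    }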