Commit Graph

11 Commits

Author SHA1 Message Date
thorpej
40460ae8eb Need to provide CACHELINESIZE in _STANDALONE environments, too. 2000-11-16 19:02:33 +00:00
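
A hedged sketch of what such a change usually amounts to in the machine-dependent param.h: the constant becomes visible to standalone (bootloader) builds as well as the kernel. The guard macros are the standard NetBSD ones; the value 32 is only illustrative.

	#if defined(_KERNEL) || defined(_STANDALONE)
	#define	CACHELINESIZE	32	/* illustrative value, not the committed one */
	#endif
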
thorpej
4db6fc7542 Make need_resched() take a "struct cpu_info *" argument. This
gives a primitive form of processor affinity.  Its use in
roundrobin() still needs some work.
2000-08-25 01:04:06 +00:00
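
A minimal sketch of the interface change, assuming the declaration style of the era; need_resched() and struct cpu_info are from the commit, everything else (the curcpu() prototype, the caller) is illustrative.

	struct cpu_info;			/* per-port CPU state, opaque here */
	struct cpu_info *curcpu(void);		/* a macro in the real headers */

	/* before: void need_resched(void); */
	void	need_resched(struct cpu_info *);

	/*
	 * Hypothetical caller: ask the current CPU to reschedule, which is
	 * where the primitive processor affinity comes from.
	 */
	static void
	example_kick_scheduler(void)
	{
		need_resched(curcpu());
	}
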
thorpej
58e7a6954b Add spllock(). See spl(9) for details. 2000-08-22 19:46:26 +00:00
thorpej
23a7f255d4 Make sure we provide splsched() as described in spl(9). 2000-08-21 02:06:31 +00:00
thorpej
a7d0570e67 First sweep at scheduler state cleanup. Collect MI scheduler
state into global and per-CPU scheduler state:

	- Global state: sched_qs (run queues), sched_whichqs (bitmap
	  of non-empty run queues), sched_slpque (sleep queues).
	  NOTE: These may collectively move into a struct schedstate
	  at some point in the future.

	- Per-CPU state, struct schedstate_percpu: spc_runtime
	  (time process on this CPU started running), spc_flags
	  (replaces struct proc's p_schedflags), and
	  spc_curpriority (usrpri of processes on this CPU).

	- Every platform must now supply a struct cpu_info and
	  a curcpu() macro.  Simplify existing cpu_info declarations
	  where appropriate.

	- All references to per-CPU scheduler state now made through
	  curcpu().  NOTE: this will likely be adjusted in the future
	  after further changes to struct proc are made.

Tested on i386 and Alpha.  Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
2000-05-26 21:19:19 +00:00
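
A rough C rendering of the state listed above. The struct, global, and field names come from the commit message; the field types, the ci_schedstate member, and the uniprocessor-style curcpu() are assumptions about how the pieces fit together.

	#include <sys/types.h>
	#include <sys/time.h>

	/*
	 * Global run-queue bitmap named above; the run/sleep queue arrays
	 * (sched_qs, sched_slpque) need types from the real proc.h, so only
	 * the bitmap is sketched here, and its type is a guess.
	 */
	extern volatile u_int32_t sched_whichqs;

	/* Per-CPU scheduler state. */
	struct schedstate_percpu {
		struct timeval	spc_runtime;	/* when curproc started running here */
		int		spc_flags;	/* replaces struct proc's p_schedflags */
		u_char		spc_curpriority; /* usrpri of curproc on this CPU */
	};

	/*
	 * Assumed per-port glue: struct cpu_info carries the state and
	 * curcpu() finds the right one (a trivial UP placeholder here).
	 */
	struct cpu_info {
		struct schedstate_percpu ci_schedstate;	/* member name is a guess */
		/* ...machine-dependent fields... */
	};
	extern struct cpu_info cpu_info_store;
	#define	curcpu()	(&cpu_info_store)

	/* MI code would then read/write e.g. curcpu()->ci_schedstate.spc_flags. */
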
thorpej
28fb7c1eb8 Define cpu_number() as discussed on tech-smp. 1999-08-10 21:08:05 +00:00
thorpej
eb20bbc780 Change the semantics of splsoftclock() to be like other spl*() functions,
that is, priority is raised.  Add a new spllowersoftclock() to provide the
atomic drop-to-softclock semantics that the old splsoftclock() provided,
and update calls accordingly.

This fixes a problem with using the "rnd" pseudo-device from within
interrupt context to extract random data (e.g. from within the softnet
interrupt) where doing so would incorrectly unblock interrupts (causing
all sorts of lossage).

XXX 4 platforms do not have priority-raising capability: newsmips, sparc,
XXX sparc64, and VAX.  These platforms still have this bug until their
XXX spl*() functions are fixed.
1999-08-05 18:08:08 +00:00
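
A short sketch of the resulting usage pattern under the usual spl(9) conventions; the two function names come from the commit message, while the prototypes and surrounding code are illustrative.

	/* Assumed prototypes, normally provided by <machine/intr.h>. */
	int	splsoftclock(void);
	void	spllowersoftclock(void);
	void	splx(int);

	static void
	example_touch_softclock_state(void)
	{
		int s;

		s = splsoftclock();	/* now raises, like the other spl*()s */
		/* ...e.g. feed the rnd pseudo-device, adjust callouts... */
		splx(s);		/* restore the previous level */
	}

	static void
	example_hardclock_tail(void)
	{
		/* the old drop-to-softclock behaviour, where it really is wanted: */
		spllowersoftclock();
	}
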
ws
5423093850 Modify syncicache on PowerPC from an inline to a real function.
Support different cache line sizes with the same object code in userland.
While here, move the function to implementation name space.
1999-04-17 21:16:45 +00:00
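
A hedged sketch of why a run-time cache-line size lets the same userland object code serve every CPU: the loop stride becomes a variable rather than a compile-time constant baked into an inline. The __syncicache name follows the "implementation name space" remark; the signature, the line-size variable, and the body are illustrative, not the committed function.

	#include <stddef.h>

	static size_t cacheline_size = 32;	/* discovered at startup; 32 is a placeholder */

	void
	__syncicache(void *from, size_t len)
	{
		char *p = from;
		size_t off;

		/* Push modified data-cache blocks out to memory... */
		for (off = 0; off < len; off += cacheline_size)
			__asm__ __volatile__("dcbst 0,%0" : : "r"(p + off) : "memory");
		__asm__ __volatile__("sync");

		/* ...then invalidate the matching instruction-cache blocks. */
		for (off = 0; off < len; off += cacheline_size)
			__asm__ __volatile__("icbi 0,%0" : : "r"(p + off));
		__asm__ __volatile__("sync; isync");
	}
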
thorpej
aefc208b70 asm -> __asm__, volatile -> __volatile 1997-11-05 04:19:04 +00:00
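
This is the usual move to GCC's underscored keywords, which remain available when the plain spellings are hidden (e.g. under -ansi). A representative before/after; the eieio example is illustrative, not taken from the diff.

	static __inline void
	eieio_example(void)
	{
		/* before: asm volatile("eieio"); */
		__asm__ __volatile("eieio");
	}
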
thorpej
ab473e98d6 Definitions for machine_vec interface, from Wolfgang Solfrank. 1997-04-16 22:54:21 +00:00
ws
5804d3f648 PowerPC port 1996-09-30 16:34:14 +00:00