Commit Graph

133 Commits

Author SHA1 Message Date
ad 321c12209b Don't swap kernel stacks of realtime threads. 2008-06-25 19:20:56 +00:00
ad c0cd593d89 uvm_swapout: try to lock the vm_map before calling pmap_collect. 2008-06-16 10:19:57 +00:00
ad 6bb1c8bc6f swappable: invert previous so we check for SACTIVE or SSTOP. 2008-06-09 11:52:34 +00:00
ad a3196ec57e swappable: return false if l->l_proc->p_stat == SDYING. 2008-06-09 11:51:43 +00:00
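
A minimal C sketch of the check these two swappable commits converge on, assuming NetBSD's struct proc/struct lwp fields; the further per-LWP conditions in the real swappable() are omitted:

    #include <sys/param.h>
    #include <sys/lwp.h>
    #include <sys/proc.h>

    /* Sketch only: a process is a swap candidate while SACTIVE or SSTOP;
     * the inverted test also rejects SDYING, which the earlier check let
     * through.  Other conditions are elided. */
    static bool
    swappable(struct lwp *l)
    {
        struct proc *p = l->l_proc;

        if (p->p_stat != SACTIVE && p->p_stat != SSTOP)
            return false;
        return true;
    }
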
ad 0cd7bfa598 uvm_proc_exit: use macros to disable preemption. 2008-06-09 11:49:54 +00:00
ad cbbf514e2c - vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can
use both types of list.

- Make page coloring and idle zero state per-CPU.

- Maintain per-CPU page freelists. When freeing, put pages onto the local
  CPU's lists and the global lists. When allocating, prefer to take pages
  from the local CPU. If none are available, take from the global list as
  is done now. Proposed on tech-kern@.
2008-06-04 12:45:28 +00:00
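
A sketch of the union described in the commit above, using the <sys/queue.h> macros; member names follow the commit, while the elided fields and comments are illustrative. A page is only ever on one kind of queue at a time, so the two link types can share storage:

    #include <sys/queue.h>

    struct vm_page_sketch {
        union {
            TAILQ_ENTRY(vm_page_sketch) queue; /* paging queues (TAILQ) */
            LIST_ENTRY(vm_page_sketch)  list;  /* per-CPU/global free lists */
        } pageq;
        union {
            TAILQ_ENTRY(vm_page_sketch) queue; /* object's page queue */
            LIST_ENTRY(vm_page_sketch)  list;
        } listq;
        /* pa, flags, wire_count, ... elided */
    };
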
ad 2feabc3836 PR kern/38812 race between lwp_exit_switchaway and exit1/coredump
Move the LWP RUNNING and TIMEINTR flags into the thread-private flag word.
2008-05-31 21:26:01 +00:00
ad 7d6bc631b9 Disable preemption while swapping pmap. 2008-04-27 11:39:46 +00:00
ad 6d70f903e6 Network protocol interrupts can now block on locks, so merge the globals
proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:

- Inspecting process state requires thread context, so signals can no longer
  be sent from a hardware interrupt handler. Signal activity must be
  deferred to a soft interrupt or kthread.

- As the proc state locking is simplified, it's now safe to take exit()
  and wait() out from under kernel_lock.

- The system spends less time at IPL_SCHED, and there is less lock activity.
2008-04-24 15:35:27 +00:00
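
A hedged sketch of the pattern the commit above describes, against NetBSD's mutex(9) and softint(9) APIs; the handler names and the pending-signal bookkeeping are invented for illustration. An adaptive mutex (IPL_NONE) may sleep, so it cannot be taken from a hardware interrupt, and the signal work moves to a soft interrupt:

    #include <sys/mutex.h>
    #include <sys/intr.h>

    kmutex_t proc_lock;             /* the merged adaptive mutex */
    static void *sigsoft;           /* softint(9) cookie */
    static void example_deliver(void *);

    void
    example_init(void)
    {
        mutex_init(&proc_lock, MUTEX_DEFAULT, IPL_NONE);
        sigsoft = softint_establish(SOFTINT_CLOCK | SOFTINT_MPSAFE,
            example_deliver, NULL);
    }

    /* Hardware interrupt: cannot touch proc_lock, so defer the work. */
    void
    example_hardintr(void)
    {
        /* record the pending signal somewhere interrupt-safe, then: */
        softint_schedule(sigsoft);
    }

    /* Soft interrupt runs in thread context: taking proc_lock is legal. */
    static void
    example_deliver(void *arg)
    {
        (void)arg;
        mutex_enter(&proc_lock);
        /* look up the target process and post the signal here */
        mutex_exit(&proc_lock);
    }
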
yamt 1d737efc42 fix the order of printf arguments. 2008-04-11 15:47:15 +00:00
christos 9741037836 - use uarea_swapin, rather than duplicating the code.
- use __func__ where appropriate.
2008-04-11 15:31:37 +00:00
christos c7a574ed9e make this compile 2008-03-29 20:40:52 +00:00
dholland 132066c541 Fix broken build. hi skrll :-) 2008-03-29 20:15:54 +00:00
skrll 38b9530a91 Fix unused variable when DEBUG isn't defined. 2008-03-29 18:49:13 +00:00
ad be04ac4896 Make rusage collection per-LWP and collate in the appropriate places.
Cloned threads need a little more work, but the locking needs to
be fixed first.
2008-03-27 19:06:51 +00:00
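
A sketch of the collation step, assuming the per-LWP l_ru field and the 4.4BSD ruadd() helper from <sys/resourcevar.h>; the locking, which the commit says still needs fixing, is omitted:

    #include <sys/resource.h>
    #include <sys/resourcevar.h>
    #include <sys/lwp.h>
    #include <sys/proc.h>

    /* On LWP exit, fold the thread's own counters into the process total. */
    void
    example_lwp_collate(struct lwp *l)
    {
        struct proc *p = l->l_proc;

        ruadd(&p->p_stats->p_ru, &l->l_ru);
    }
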
yamt 569bb6b4dc update comment 2008-02-29 12:08:04 +00:00
yamt f7048cc403 uvm_uarea_init: fix compilation where PAGE_SIZE is not a constant. (sparc)
reported by Tom Spindler.
2008-02-08 11:49:40 +00:00
yamt cb964fe917 uvm_uarea_init: make #if about PR_NOALIGN clearer and add a comment
to explain it.
2008-02-07 12:21:24 +00:00
yamt 52838e34f5 remove a special allocator for uareas, which is no longer necessary.
use pool_cache instead.
2008-01-28 12:22:46 +00:00
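
A sketch of what "use pool_cache instead" amounts to against NetBSD's pool_cache(9) API; USPACE/USPACE_ALIGN are the MD uarea size and alignment, and the flags and wchan string here are illustrative, not the committed code:

    #include <sys/pool.h>
    #include <uvm/uvm_extern.h>

    static pool_cache_t uarea_cache;

    void
    example_uarea_init(void)
    {
        uarea_cache = pool_cache_init(USPACE, USPACE_ALIGN, 0, 0,
            "uarea", NULL, IPL_NONE, NULL, NULL, NULL);
    }

    vaddr_t
    example_uarea_alloc(void)
    {
        return (vaddr_t)pool_cache_get(uarea_cache, PR_WAITOK);
    }

    void
    example_uarea_free(vaddr_t va)
    {
        pool_cache_put(uarea_cache, (void *)va);
    }
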
ad 4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad d831186d55 Merge scheduler changes from the vmlocking branch. All discussed on
tech-kern:

- Invert priority space so that zero is the lowest priority. Rearrange
  number and type of priority levels into bands. Add new bands like
  'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
  sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
2007-11-06 00:42:39 +00:00
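
Illustration only: after the inversion, 0 is the weakest claim on the CPU and larger numbers preempt smaller ones. The bands below and their boundaries are invented for the sketch; the real values live in sys/param.h:

    /* Hypothetical band layout; higher value == higher priority. */
    #define EX_PRI_USER_MIN       0     /* time-sharing user threads */
    #define EX_PRI_USER_MAX      63
    #define EX_PRI_KTHREAD_MIN   64     /* ordinary kernel threads */
    #define EX_PRI_KTHREAD_MAX  127
    #define EX_PRI_USER_RT_MIN  128     /* POSIX real-time user threads */
    #define EX_PRI_USER_RT_MAX  191
    #define EX_PRI_KERN_RT_MIN  192     /* the new 'kernel real time' band */
    #define EX_PRI_KERN_RT_MAX  223
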
ad f5096e38d8 uvm_swapin: disable the swaplock assertion. uvm_lwp_hold() can't take
the lock yet.
2007-09-21 00:18:35 +00:00
ad 09e7c477d8 Include sys/cpu.h for CPU_INFO_FOREACH. 2007-08-18 10:07:55 +00:00
ad ee1c78315b Fix error in previous. 2007-08-18 00:31:32 +00:00
ad b5866eb299 Make the uarea cache per-CPU and drain in batches of 4. 2007-08-18 00:21:10 +00:00
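
A self-contained sketch of the drain policy (all names hypothetical): frees collect on the local CPU's list, and once a high-water mark is crossed a batch of 4 goes back to the global pool, so the global lock is taken once per four frees rather than once per free:

    #include <stddef.h>

    struct ex_cache {
        void *freelist;     /* intrusive stack of free uareas */
        int   nfree;
        int   hiwat;        /* per-CPU high-water mark */
    };

    /* Return one uarea to the global pool (global locking not shown). */
    static void
    ex_global_free(void *ua)
    {
        (void)ua;
    }

    static void
    ex_drain(struct ex_cache *c, int batch)
    {
        while (batch-- > 0 && c->freelist != NULL) {
            void *ua = c->freelist;
            c->freelist = *(void **)ua;
            c->nfree--;
            ex_global_free(ua);
        }
    }

    void
    ex_uarea_free(struct ex_cache *c, void *ua)
    {
        *(void **)ua = c->freelist;     /* link through the area itself */
        c->freelist = ua;
        if (++c->nfree > c->hiwat)
            ex_drain(c, 4);             /* "drain in batches of 4" */
    }
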
ad 63873b9b21 Revert unintentionally committed change. 2007-07-14 22:27:15 +00:00
ad 88ab7da936 Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
2007-07-09 20:51:58 +00:00
yamt f03010953f merge yamt-idlelwp branch. asked by core@. some ports still need work.
from doc/BRANCHES:

	idle lwp, and some changes depending on it.

	1. separate context switching and thread scheduling.
	   (cf. gmcgarry_ctxsw)
	2. implement idle lwp.
	3. clean up related MD/MI interfaces.
	4. make scheduler(s) modular.
2007-05-17 14:51:11 +00:00
rmind 60e35a7f80 Export uvm_uarea_free() to the rest of the kernel.
Make things compile again.
2007-03-24 21:15:39 +00:00
christos 53524e44ef Kill caddr_t; there will be some MI fallout, but it will be fixed shortly. 2007-03-04 05:59:00 +00:00
thorpej b3667ada6d TRUE -> true, FALSE -> false 2007-02-22 06:05:00 +00:00
thorpej 712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00
ad 7df3c36a6c uvm_kick_scheduler(): do nothing until the swap subsystem is initialized. 2007-02-19 01:35:19 +00:00
pavel 934634a18c Change the process/lwp flags seen by userland via sysctl back to the
P_*/L_* naming convention, and rename the in-kernel flags to avoid
conflict. (P_ -> PK_, L_ -> LW_ ). Add back the (now unused) LSDEAD
constant.

Restores source compatibility with pre-newlock2 tools like ps or top.

Reviewed by Andrew Doran.
2007-02-17 22:31:36 +00:00
ad d91014721f Add uvm_kick_scheduler() (MP safe) to replace wakeup(&proc0). 2007-02-15 20:21:13 +00:00
ad b07ec3fc38 Merge newlock2 to head. 2007-02-09 21:55:00 +00:00
chs 33c1fd1917 add support for O_DIRECT (I/O directly to application memory,
bypassing any kernel caching for file data).
2006-10-05 14:48:32 +00:00
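
A minimal userland sketch of using the new flag; direct I/O generally requires block-aligned buffers, offsets, and lengths, and the 512-byte alignment and file name here are assumptions:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        void *buf;
        int fd;

        if (posix_memalign(&buf, 512, 65536) != 0)
            return 1;
        fd = open("datafile", O_RDONLY | O_DIRECT);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        if (read(fd, buf, 65536) == -1)     /* bypasses the page cache */
            perror("read");
        close(fd);
        free(buf);
        return 0;
    }
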
matt 9e0ec4816e Make PTRACE and COREDUMP optional. Keep them enabled by default (the
status quo) by putting them in conf/std.
2006-08-29 23:34:48 +00:00
yamt 9606b0accf uvm_swapin: process -> lwp in a comment. 2006-06-13 13:22:06 +00:00
yamt 1075c99d89 introduce macros, UAREA_TO_USER and USER_TO_UAREA,
to convert uarea VA into a pointer to struct user and vice versa,
so that MD code can change the layout in uarea.
2006-05-22 13:43:54 +00:00
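
A sketch of the default (identity) layout the macros permit; a port that places struct user elsewhere in the uarea redefines both:

    /* Default layout: struct user sits at the start of the uarea. */
    #define UAREA_TO_USER(ua)   ((struct user *)(ua))
    #define USER_TO_UAREA(u)    ((vaddr_t)(u))
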
drochner e10923fd37 -clean up the interface to uvm_fault: the "fault type" didn't serve
any purpose (done by a macro, so we don't save any cycles for now)
-kill vm_fault_t; it is not needed for real faults, and for simulated
 faults (wiring) it can be replaced by UVM internal flags
-remove <uvm/uvm_fault.h> from uvm_extern.h again
2006-03-15 18:09:25 +00:00
perry 3d4ed1fbc7 __inline__ -> inline 2005-12-24 23:41:33 +00:00
christos 95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
chs da3f825c26 remove the assertion in uvm_swapout_threads() about LSONPROC lwps
not running on the same CPU as the swapper.  l_stat is protected by
sched_lock, which isn't held here, so we can race with that lwp
starting to run and see its l_cpu not updated yet, as in PR 31870.
we check l_stat again in uvm_swapout() while holding sched_lock,
so the race itself is harmless.
2005-10-24 00:26:37 +00:00
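
A sketch of the double-check the commit above relies on, in the era's sched_lock style; the helper name is hypothetical. The unlocked test is only a cheap filter, and the test that matters is repeated with sched_lock held, so a stale l_stat or l_cpu seen without the lock is harmless:

    static void
    ex_try_swapout(struct lwp *l)
    {
        int s;

        if (l->l_stat == LSONPROC)      /* unlocked hint: may be stale */
            return;

        SCHED_LOCK(s);
        if (l->l_stat != LSONPROC) {
            /* authoritative under sched_lock: swap the lwp out here */
        }
        SCHED_UNLOCK(s);
    }
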
thorpej e569facced Use ANSI function decls. 2005-06-27 02:19:48 +00:00
matt e1245a3c46 Rework the coredump code to have no explicit knowledge of how coredump
I/O is done.  Instead, pass an opaque cookie which is then passed to a
new routine, coredump_write, which does the actual I/O.  This allows the
method of doing I/O to change without affecting any future MD code.
Also, make netbsd32_core.c [re]use core_netbsd.c (in a similar manner to
how core_elf64.c uses core_elf32.c) and eliminate that code duplication.
cpu_coredump{,32} is now called twice: first with a NULL iocookie to fill
the core structure, and a second time to actually write the MD parts of
the coredump.  All I/O is no longer random access and is suitable for
shipping over a stream.
2005-06-10 05:10:12 +00:00
matt bb583a82ce Make sure state.end has a valid initial value. 2005-06-07 22:02:48 +00:00
matt 25a0e29a75 When writing coredumps, don't write zero uninstantiated demand-zero pages.
Also, with ELF core dumps, trim trailing zeroes from sections.  These two
changes can shrink coredumps by over 50% in size.
2005-06-02 17:01:43 +00:00
nathanw 92279c692f uvm_coredump_walkmap(): Set UVM_COREDUMP_NODUMP on regions whose
protection does not include VM_PROT_READ, so that the core dumping
doesn't error out with EFAULT when trying to write that region.

Addresses PR kern/30143; approach suggested by chs@.
2005-05-06 19:34:47 +00:00
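
A sketch of the check, with field and flag names taken from the commit message and UVM; the walkmap loop around it is assumed:

    #include <sys/param.h>
    #include <uvm/uvm_extern.h>
    #include <uvm/uvm_map.h>

    static int
    ex_region_dump_flags(const struct vm_map_entry *entry)
    {
        int flags = 0;

        if ((entry->protection & VM_PROT_READ) == 0)
            flags |= UVM_COREDUMP_NODUMP;   /* unreadable: skip, don't EFAULT */
        return flags;
    }
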
yamt 6b2d8b66a4 merge yamt-km branch.
- don't use managed mappings/backing objects for wired memory allocations.
  save some resources like pv_entry.  also fix (most of) PR/27030.
- simplify kernel memory management API.
- simplify pmap bootstrap of some ports.
- some related cleanups.
2005-04-01 11:59:21 +00:00