Commit Graph

6328 Commits

Author SHA1 Message Date
ad
13c7f6ff40 Move lib/libkern/rb.h to sys/rb.h, so it can be used by kernel header
files.
2008-06-04 14:31:15 +00:00
rmind
af4e8e9ea7 Check the result of allocation in the cases where the size is passed by the user. 2008-06-04 13:02:41 +00:00
ad
cbbf514e2c - vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can
use both types of list.

- Make page coloring and idle zero state per-CPU.

- Maintain per-CPU page freelists. When freeing, put pages onto the local
  CPU's lists and the global lists. When allocating, prefer to take pages
from the local CPU. If none are available, take from the global list as
  done now. Proposed on tech-kern@.
2008-06-04 12:45:28 +00:00
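
A minimal userland sketch of the per-CPU freelist idea in the commit above, under one simplified reading of it (the names are illustrative and not the actual uvm code; the sketch assumes the caller stays pinned to the given CPU, as the kernel arranges by disabling preemption):

    #include <pthread.h>
    #include <stddef.h>

    #define NCPU 4

    struct page { struct page *next; };

    static struct page *local_free[NCPU];   /* per-CPU lists: no shared lock */
    static struct page *global_free;        /* shared fallback list */
    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    page_free(int cpu, struct page *pg)
    {
        /* Freed pages stay on the freeing CPU's list, keeping them cache-warm. */
        pg->next = local_free[cpu];
        local_free[cpu] = pg;
    }

    static struct page *
    page_alloc(int cpu)
    {
        struct page *pg = local_free[cpu];

        if (pg != NULL) {                    /* fast path: local, no global lock */
            local_free[cpu] = pg->next;
            return pg;
        }
        pthread_mutex_lock(&global_lock);    /* slow path: global list, as before */
        pg = global_free;
        if (pg != NULL)
            global_free = pg->next;
        pthread_mutex_unlock(&global_lock);
        return pg;
    }

The point of the split is that the common allocate/free path touches only CPU-local data and never contends on the global lock.
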
ad
621389e8b2 Fix broken enable test; fixes random coredumps. 2008-06-04 12:26:20 +00:00
ad
65b594ff61 Make sure the PAX flags are copied/zeroed correctly. 2008-06-04 12:16:06 +00:00
ad
8dd7771d61 Disable the wakeup assertion for the time being because the tty code
triggers it.
2008-06-04 11:22:55 +00:00
ad
cee82bbafa Don't use proc specificdata for the PAX stuff. Speeds up mmap() and others. 2008-06-03 22:15:14 +00:00
ad
d191ae6b1b Don't use proc specificdata. Speeds up mmap() and others. 2008-06-03 22:14:24 +00:00
ad
1b23b70818 vfs_cache:
- Don't use goto in critical paths; it can confuse the compiler.
- Sprinkle some branch hints.
- Make namecache stats per-CPU and collate once per second.
- Use vtryget().
2008-06-03 15:50:22 +00:00
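
One way to picture the "per-CPU stats, collated once per second" part of the commit above is the sketch below; the structure and function names are invented for illustration and are not the real namecache code:

    #include <stdint.h>

    #define NCPU 4

    struct nchstats_pcpu {
        uint64_t hits;
        uint64_t misses;
        char pad[64 - 2 * sizeof(uint64_t)];   /* keep each CPU on its own cache line */
    };

    static struct nchstats_pcpu pcpu_stats[NCPU];
    static uint64_t global_hits, global_misses;

    static inline void
    cache_record_hit(int cpu)
    {
        pcpu_stats[cpu].hits++;                /* no atomic op, no shared cache line */
    }

    /* Called from a once-per-second timer to fold the per-CPU counters
     * into the globally visible statistics. */
    static void
    cache_stats_collate(void)
    {
        uint64_t h = 0, m = 0;

        for (int i = 0; i < NCPU; i++) {
            h += pcpu_stats[i].hits;
            m += pcpu_stats[i].misses;
        }
        global_hits = h;
        global_misses = m;
    }
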
ad
457a3f8942 vtryget: try to get an initial reference to a vnode without taking its
interlock. Only works if v_usecount is already non-zero, which is nearly
always true for busy files like libc.so or ld_elf.so.
2008-06-03 14:54:12 +00:00
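
The lock-free reference grab can be approximated in userland with C11 atomics; this is only a sketch of the technique (the function name and types are assumptions, not the vnode code itself):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Bump a reference count only if it is already non-zero; otherwise the
     * caller must fall back to the slow path that takes the interlock. */
    static bool
    ref_tryget(atomic_uint *usecount)
    {
        unsigned old = atomic_load(usecount);

        while (old != 0) {
            /* The CAS succeeds only if nobody changed the count under us. */
            if (atomic_compare_exchange_weak(usecount, &old, old + 1))
                return true;        /* got a reference without any lock */
            /* on failure, 'old' has been reloaded; retry unless it is now 0 */
        }
        return false;               /* count was 0: use the locked path */
    }
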
dyoung
b5bf86befb Before freeing a ktr_desc, destroy its condition variables. 2008-06-03 05:53:09 +00:00
ad
a51fec9810 Avoid assertion failures. From drochner@. 2008-06-02 22:56:09 +00:00
drochner
99cb76cad6 Adjust a KASSERT to reality after the atomic vnode usecount changes. 2008-06-02 19:20:22 +00:00
ad
3a8db3158e Use atomics to maintain v_usecount. 2008-06-02 16:25:34 +00:00
ad
30115e937a Most contention on proc_lock is from getppid(), so cache the parent's PID. 2008-06-02 16:18:09 +00:00
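
A minimal sketch of that idea (the field names are assumptions, not the real struct proc layout): if the parent's PID is copied into the child whenever the parent is set or changed, getppid() becomes a plain read and never touches proc_lock.

    #include <sys/types.h>

    struct proc_sketch {
        pid_t p_ppid_cached;        /* updated at fork and on reparent */
    };

    static pid_t
    getppid_sketch(const struct proc_sketch *p)
    {
        return p->p_ppid_cached;    /* no global lock acquired */
    }
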
ad
fb8c6fcc83 Don't needlessly acquire v_interlock. 2008-06-02 16:16:27 +00:00
ad
0d151db6c9 Don't needlessly acquire v_interlock. 2008-06-02 16:00:33 +00:00
ad
8ad974fae5 vn_marktext, vn_lock: don't needlessly acquire v_interlock. 2008-06-02 15:29:18 +00:00
ad
ea8a92578d In the vfork() case, we want the LWP to run quickly and on the same CPU
as its parent, so that it can reuse the VM context and cache
footprint on the local CPU.
2008-06-02 13:58:07 +00:00
ad
c3e93e738f Make trap counters per-cpu, like syscalls. 2008-06-01 21:24:15 +00:00
ad
736a4d9b78 Kill devsw_lock and just use specfs_lock. The two would need merging
in order to prevent unloading of modules while a device that they provide
is still open.
2008-05-31 21:34:42 +00:00
ad
2feabc3836 PR kern/38812 race between lwp_exit_switchaway and exit1/coredump
Move the LWP RUNNING and TIMEINTR flags into the thread-private flag word.
2008-05-31 21:26:01 +00:00
dyoung
8f34c216d0 Add printf_tolog(), which writes only to the kernel message buffer.
I believe this is the safest way to log serious conditions
indicated by an NMI.
2008-05-31 20:27:24 +00:00
ad
4c57df4a3c - Put in place module compatibility check against __NetBSD_Version__,
as discussed on tech-kern.

- Remove unused module_jettison().
2008-05-31 20:14:38 +00:00
ad
bda19becba Fix wmesg for !LOCKDEBUG. 2008-05-31 16:25:23 +00:00
ad
506130b087 biodone2: if the buffer is async or has a callback method, assert that
there are no waiters on b_done (threads in biowait()).
2008-05-31 13:41:44 +00:00
ad
7442154bd9 - Give each condition variable its own sleep queue head. Helps the system
to scale more gracefully when there are thousands of active threads.
  Proposed on tech-kern@.

- Use LOCKDEBUG to catch some errors in the use of condition variables:

  freeing an active CV
  re-initializing an active CV
  using multiple distinct mutexes during concurrent waits
  not holding the interlocking mutex when calling cv_broadcast/cv_signal
  waking waiters and destroying the CV before they run and exit it
2008-05-31 13:36:25 +00:00
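
A rough userland illustration of the first point, giving every CV its own queue head instead of hashing into a shared table of sleep queues, plus the kind of misuse check described above; all names here are invented for the sketch and the real kern_condvar.c differs:

    #include <assert.h>
    #include <pthread.h>
    #include <sys/queue.h>

    struct waiter {
        TAILQ_ENTRY(waiter) w_entry;       /* per-thread queue linkage */
    };

    struct cv_sketch {
        pthread_mutex_t cv_lock;           /* protects the queue below */
        TAILQ_HEAD(, waiter) cv_waiters;   /* this CV's own queue head: no shared hash bucket */
    };

    static void
    cv_init_sketch(struct cv_sketch *cv)
    {
        pthread_mutex_init(&cv->cv_lock, NULL);
        TAILQ_INIT(&cv->cv_waiters);
    }

    static void
    cv_destroy_sketch(struct cv_sketch *cv)
    {
        /* The sort of error listed above: destroying a CV that still
         * has waiters queued on it. */
        assert(TAILQ_EMPTY(&cv->cv_waiters));
        pthread_mutex_destroy(&cv->cv_lock);
    }
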
ad
e10320350c Use __noinline. 2008-05-31 13:31:25 +00:00
ad
a6f1414cd1 tsleep -> kpause 2008-05-31 13:26:29 +00:00
ad
7b8f512433 LOCKDEBUG:
- Tweak it so it can also catch common errors with condition variables.
  The change to kern_condvar.c is not included in this commit and will
  come later.

- Don't call kmem_alloc() if operating in interrupt context; just fail
  the allocation and disable debugging for the object. This makes it safe
  to do mutex_init/rw_init/cv_init in interrupt context when running
  a LOCKDEBUG kernel.
2008-05-31 13:15:21 +00:00
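
The second point can be pictured like this; the function names are stand-ins, not the real LOCKDEBUG interface, and malloc() stands in for kmem_alloc():

    #include <stdbool.h>
    #include <stdlib.h>

    struct lockdebug_sketch {
        void *ld_lock;                      /* the lock object being tracked */
    };

    /* Stand-in for the kernel's "am I in interrupt context?" test. */
    static bool
    in_interrupt_context(void)
    {
        return false;
    }

    static struct lockdebug_sketch *
    lockdebug_alloc_sketch(void *lock)
    {
        struct lockdebug_sketch *ld;

        if (in_interrupt_context())
            return NULL;                    /* don't allocate here: run undebugged */

        ld = malloc(sizeof(*ld));
        if (ld != NULL)
            ld->ld_lock = lock;
        return ld;                          /* NULL means debugging stays off for this object */
    }
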
ad
10d96b47b0 shmrealloc: destroy condition variables before freeing them. 2008-05-31 13:11:14 +00:00
ad
deda5b9d55 Hold proc_lock when sleeping on p_waitcv, not proc::p_lock. 2008-05-31 13:04:14 +00:00
ad
5b4d14b9f1 Add a comment to turnstile_block:
        * NOTE: if you get a panic in this code block, it is likely that
        * a lock has been destroyed or corrupted while still in use.  Try
        * compiling a kernel with LOCKDEBUG to pinpoint the problem.
2008-05-31 12:03:15 +00:00
freza
1531f6d32e Any time we remove an event from the queue, make sure we 1. release the
event plist and 2. free the drvctl_event struct.

Discussed with jmcneill@.
2008-05-30 15:30:37 +00:00
ad
13cf4bcc55 PR kern/38663 Kernel preemption can't be enabled on x86 because of amd64
FPU handling

Remove the ifdef(i386); kernel preemption works on amd64 now.
2008-05-30 12:18:14 +00:00
rmind
1be38c90d8 do_sys_accept: release the reference to sock in a few error paths.
Should fix PR/38790; report and test case by Nicolas Joly.
2008-05-30 09:49:01 +00:00
rmind
a68758f8bd sched_idle: initialise 'tci' to NULL; avoids a compiler warning. 2008-05-30 08:31:42 +00:00
ad
e98d2c1016 lwp_exit_switchaway: set l_lwpctl->lc_curcpu = EXITED, not NONE. 2008-05-29 23:29:59 +00:00
rmind
29170d3854 Simplification of running-LWP migration. Removes double-locking in
mi_switch(); migration for LSONPROC is now performed via the idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), plus miscellaneous fixes.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
rmind
15e36ef766 sys_shmget: use the correct size variable for uobj_wirepages().
Adjust shm_memlock() for consistency too.

Fixes PR/38782, reported by Adam Hoka.
2008-05-29 21:38:18 +00:00
joerg
157262cae8 Explicitly compute the next interval using 64-bit arithmetic if the time
was either stepped backwards or the timer has overflowed. This fixes
PR 26470.
2008-05-29 15:27:51 +00:00
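
The 64-bit recomputation amounts to jumping straight to the next future expiry instead of repeatedly adding the interval in narrower arithmetic; the variable names below are assumptions, not the code touched by the commit:

    #include <stdint.h>

    /* All times are in the same unit (e.g. ticks); interval is assumed non-zero. */
    static uint64_t
    next_expiry(uint64_t now, uint64_t last, uint64_t interval)
    {
        if (now < last)                        /* the clock was stepped backwards */
            return now + interval;             /* rearm relative to "now" */

        uint64_t missed = (now - last) / interval;
        return last + (missed + 1) * interval; /* first expiry strictly in the future */
    }
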
pooka
de98844194 Mark pread/pwrite rump syscalls.
From Arnaud Ysmal.
2008-05-29 12:01:37 +00:00
ad
79d0501e52 Disable zero copy if MULTIPROCESSOR, until it is fixed:
- The TLB coherency overhead on MP systems is really expensive.
- It triggers a race in the VM system (grep kpause uvm/*).
2008-05-28 21:01:42 +00:00
dyoung
801c395b47 Run shutdown hooks whether or not the kernel has panicked. Restores
the shutdown behavior NetBSD had for more than 6 years before rev 1.176.
Ok joerg@.

It is essential that we restore some hardware to initial conditions
before rebooting, in order to avoid interfering with the BIOS
bootstrap.  For example, if NetBSD gives control back to the Soekris
comBIOS while the kernel text is write-protected, the BIOS bootstrap
hangs during the power-on self-test, "POST: 0123456789bcefghip".
In principle, bus masters can also interfere with BIOS boot.
2008-05-28 15:40:58 +00:00
ad
b5d8351e8e PR kern/38355 lockf deadlock detection is broken after vmlocking
- Fix it; tested with Sun's libMicro.
- Use pool_cache.
- Use a global lock, so the deadlock detection code is safer.
2008-05-28 13:35:32 +00:00
ad
5831c8ac63 Pull in sys/evcnt.h. 2008-05-27 22:05:50 +00:00
ad
f79b59f700 #ifdef strikes again 2008-05-27 21:36:03 +00:00
ad
4c634c7155 Sigh. The previous change did bad things to MySQL sysbench. Continue stealing
jobs from sched_nextlwp, but also do it in the idle loop. In sched_nextlwp
use trylock; in the idle LWP, try harder.
2008-05-27 19:05:52 +00:00
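
A small sketch of the two-tier stealing policy (the names are illustrative, not the scheduler's actual code): the hot path only trylocks a victim's run queue and gives up immediately, while the idle LWP, having nothing better to do, is willing to wait for the lock.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct lwp_sketch { int l_id; };

    struct runqueue {
        pthread_mutex_t rq_lock;
        struct lwp_sketch *rq_head;     /* simplified: at most one queued LWP */
    };

    static struct lwp_sketch *
    steal_from(struct runqueue *victim, bool from_idle)
    {
        struct lwp_sketch *l;

        if (from_idle)
            pthread_mutex_lock(&victim->rq_lock);        /* idle loop: try harder */
        else if (pthread_mutex_trylock(&victim->rq_lock) != 0)
            return NULL;                                 /* hot path: never wait */

        l = victim->rq_head;
        victim->rq_head = NULL;
        pthread_mutex_unlock(&victim->rq_lock);
        return l;
    }
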
ad
208df81d99 Move lwp_exit_switchaway() into kern_synch.c. Instead of always switching
to the idle loop, pick a new LWP from the run queue.
2008-05-27 17:51:17 +00:00
ad
aa7e99c693 Replace a couple of tsleep calls with cv_wait. 2008-05-27 17:50:03 +00:00