Commit Graph

6319 Commits

Author SHA1 Message Date
ad
457a3f8942 vtryget: try to get an initial reference to a vnode without taking its
interlock. Only works if v_usecount is already non-zero, which is nearly
always true for busy files like libc.so or ld_elf.so.
2008-06-03 14:54:12 +00:00
dyoung
b5bf86befb Before freeing a ktr_desc, destroy its condition variables. 2008-06-03 05:53:09 +00:00
ad
a51fec9810 Avoid assertion failures. From drochner@. 2008-06-02 22:56:09 +00:00
drochner
99cb76cad6 adjust a KASSERT to reality after atomic vnode usecount changes 2008-06-02 19:20:22 +00:00
ad
3a8db3158e Use atomics to maintain v_usecount. 2008-06-02 16:25:34 +00:00
ad
30115e937a Most contention on proc_lock is from getppid(), so cache the parent's PID. 2008-06-02 16:18:09 +00:00
ad
fb8c6fcc83 Don't needlessly acquire v_interlock. 2008-06-02 16:16:27 +00:00
ad
0d151db6c9 Don't needlessly acquire v_interlock. 2008-06-02 16:00:33 +00:00
ad
8ad974fae5 vn_marktext, vn_lock: don't needlessly acquire v_interlock. 2008-06-02 15:29:18 +00:00
ad
ea8a92578d If vfork(), we want the LWP to run fast and on the same CPU
as its parent, so that it can reuse the VM context and cache
footprint on the local CPU.
2008-06-02 13:58:07 +00:00
ad
c3e93e738f Make trap counters per-cpu, like syscalls. 2008-06-01 21:24:15 +00:00
ad
736a4d9b78 Kill devsw_lock and just use specfs_lock. The two would need merging
in order to prevent unload of modules when a device that they provide
is still open.
2008-05-31 21:34:42 +00:00
ad
2feabc3836 PR kern/38812 race between lwp_exit_switchaway and exit1/coredump
Move the LWP RUNNING and TIMEINTR flags into the thread-private flag word.
2008-05-31 21:26:01 +00:00
dyoung
8f34c216d0 Add printf_tolog(), which writes only to the kernel message buffer.
I believe this is the safest way to log serious conditions
indicated by NMI.
2008-05-31 20:27:24 +00:00
ad
4c57df4a3c - Put in place module compatibility check against __NetBSD_Version__,
as discussed on tech-kern.

- Remove unused module_jettison().
2008-05-31 20:14:38 +00:00
ad
bda19becba Fix wmesg for !LOCKDEBUG. 2008-05-31 16:25:23 +00:00
ad
506130b087 biodone2: if the buffer is async or has a callback method, assert that
there are no waiters on b_done (threads in biowait()).
2008-05-31 13:41:44 +00:00
ad
7442154bd9 - Give each condition variable its own sleep queue head. Helps the system
to scale more gracefully when there are thousands of active threads.
  Proposed on tech-kern@.

- Use LOCKDEBUG to catch some errors in the use of condition variables:

  freeing an active CV
  re-initializing an active CV
  using multiple distinct mutexes during concurrent waits
  not holding the interlocking mutex when calling cv_broadcast/cv_signal
  waking waiters and destroying the CV before they run and exit it
2008-05-31 13:36:25 +00:00
ad
e10320350c Use __noinline. 2008-05-31 13:31:25 +00:00
ad
a6f1414cd1 tsleep -> kpause 2008-05-31 13:26:29 +00:00
ad
7b8f512433 LOCKDEBUG:
- Tweak it so it can also catch common errors with condition variables.
  The change to kern_condvar.c is not included in this commit and will
  come later.

- Don't call kmem_alloc() if operating in interrupt context, just fail
  the allocation and disable debugging for the object. Makes it safe
  to do mutex_init/rw_init/cv_init in interrupt context, when running
  a LOCKDEBUG kernel.
2008-05-31 13:15:21 +00:00
ad
10d96b47b0 shmrealloc: destroy condition variables before freeing them. 2008-05-31 13:11:14 +00:00
ad
deda5b9d55 Hold proc_lock when sleeping on p_waitcv, not proc::p_lock. 2008-05-31 13:04:14 +00:00
ad
5b4d14b9f1 Add a comment to turnstile_block:
* NOTE: if you get a panic in this code block, it is likely that
* a lock has been destroyed or corrupted while still in use.  Try
* compiling a kernel with LOCKDEBUG to pinpoint the problem.
2008-05-31 12:03:15 +00:00
freza
1531f6d32e Any time we remove an event from the queue, make sure we 1. release the
event plist and 2. free the drvctl_event struct.

Discussed with jmcneill@.
2008-05-30 15:30:37 +00:00
ad
13cf4bcc55 PR kern/38663 Kernel preemption can't be enabled on x86 because of amd64
FPU handling

Remove ifdef(i386), kernel preemption works on amd64 now.
2008-05-30 12:18:14 +00:00
rmind
1be38c90d8 do_sys_accept: release the reference to sock in a few error paths.
Should fix PR/38790, report and test-case by Nicolas Joly.
2008-05-30 09:49:01 +00:00
rmind
a68758f8bd sched_idle: initialise 'tci' to NULL, avoids compiler warning. 2008-05-30 08:31:42 +00:00
ad
e98d2c1016 lwp_exit_switchaway: set l_lwpctl->lc_curcpu = EXITED, not NONE. 2008-05-29 23:29:59 +00:00
rmind
29170d3854 Simplification of running LWP migration. Removes double-locking in
mi_switch(); migration for LSONPROC is now performed via the idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), misc.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
rmind
15e36ef766 sys_shmget: use the correct size variable for uobj_wirepages().
Adjust shm_memlock() for consistency too.

Fixes PR/38782, reported by Adam Hoka.
2008-05-29 21:38:18 +00:00
joerg
157262cae8 Explicitly compute the next interval using 64-bit arithmetic if the time
was either stepped backwards or the timer has overflowed. This fixes
PR 26470.
2008-05-29 15:27:51 +00:00
pooka
de98844194 Mark pread/pwrite rump syscalls.
From Arnaud Ysmal.
2008-05-29 12:01:37 +00:00
ad
79d0501e52 Disable zero copy if MULTIPROCESSOR, until it is fixed:
- The TLB coherency overhead on MP systems is really expensive.
- It triggers a race in the VM system (grep kpause uvm/*).
2008-05-28 21:01:42 +00:00
dyoung
801c395b47 Run shutdown hooks whether or not the kernel has panicked. Restores
NetBSD's shutdown behavior of more than 6 years before rev 1.176.
Ok joerg@.

It is essential that we restore some hardware to initial conditions
before rebooting, in order to avoid interfering with the BIOS
bootstrap.  For example, if NetBSD gives control back to the Soekris
comBIOS while the kernel text is write-protected, the BIOS bootstrap
hangs during the power-on self-test, "POST: 0123456789bcefghip".
In principle, bus masters can also interfere with BIOS boot.
2008-05-28 15:40:58 +00:00
ad
b5d8351e8e PR kern/38355 lockf deadlock detection is broken after vmlocking
- Fix it; tested with Sun's libMicro.
- Use pool_cache.
- Use a global lock, so the deadlock detection code is safer.
2008-05-28 13:35:32 +00:00
ad
5831c8ac63 Pull in sys/evcnt.h. 2008-05-27 22:05:50 +00:00
ad
f79b59f700 #ifdef strikes again 2008-05-27 21:36:03 +00:00
ad
4c634c7155 Sigh. The previous change did bad things to MySQL sysbench. Continue stealing
jobs from sched_nextlwp, but also do it in the idle loop. In sched_nextlwp
use trylock, in the idle LWP try harder.
2008-05-27 19:05:52 +00:00
ad
208df81d99 Move lwp_exit_switchaway() into kern_synch.c. Instead of always switching
to the idle loop, pick a new LWP from the run queue.
2008-05-27 17:51:17 +00:00
ad
aa7e99c693 Replace a couple of tsleep calls with cv_wait. 2008-05-27 17:50:03 +00:00
ad
234470c22e tsleep -> kpause 2008-05-27 17:49:07 +00:00
ad
ec32985f61 Use pool_cache. 2008-05-27 17:48:27 +00:00
ad
f90f3a01ea Use kmem_alloc/free. 2008-05-27 17:42:14 +00:00
ad
81fa379a0b PR kern/38707 scheduler related deadlock during build.sh
- Fix performance regression introduced by the workaround by making job
  stealing a lot simpler: if the local run queue is empty, let the CPU enter
  the idle loop. In the idle loop, try to steal a job from another CPU's run
  queue if we are idle. If we succeed, re-enter mi_switch() immediately to
  dispatch the job.

- When stealing jobs, consider a remote CPU to have one less job in its
  queue if it's currently in the idle loop. It will dispatch the job soon,
  so there's no point sloshing it about.

- Introduce a few event counters to monitor what's happening with the run
  queues.

- Revert the idle CPU bitmap change. It's pointless considering NUMA.
2008-05-27 14:48:52 +00:00
ad
f985e88f2b Start profiling clock on new process before setting it running, in case
there is a preemption.
2008-05-27 14:18:51 +00:00
christos
b51f45d9e6 More fixes needed in the error paths for the chroot code to work. 2008-05-26 18:20:36 +00:00
rmind
06171502fc Adjust and thus unify my license. 2008-05-26 17:45:51 +00:00
ad
c9ac92b592 Use pool_cache for sockets. 2008-05-26 17:21:18 +00:00
ad
46540aaf0e brelse: always wakeup on b_busy, in case BC_WANTED is not set. 2008-05-26 14:56:55 +00:00