Increment the lwp's context switch counter when it is scheduled onto
a CPU.  This fixes some heavy-load problems with the pool code: rump
kernels essentially lied about context switches, which kept the pool
code from doing a proper backdown from the fastpath when a context
switch happened while taking a lock.
pooka 2010-08-15 20:23:04 +00:00
parent a89bed9efc
commit e46b5d009d
1 changed file with 3 additions and 2 deletions
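
For context, the backdown the message refers to follows a common pattern in the pool code: snapshot curlwp->l_ncsw before taking a lock and compare it afterwards; if the counter moved, the lwp context switched while blocking, its view of the per-CPU cache may be stale, and the fast path must be retried.  The sketch below only illustrates that pattern; the function name example_lock_or_backdown and its exact shape are illustrative, not the actual subr_pool.c code.  Because the rump scheduler's fast path never bumped l_ncsw, a check like this could never fire under rump, which is what the one-line change below addresses.

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/mutex.h>

/*
 * Illustrative sketch, not the real pool code: take a lock, but tell
 * the caller to back down and retry if a context switch happened while
 * we were blocked, since any per-CPU state cached before the call may
 * now belong to a different CPU.
 */
static bool
example_lock_or_backdown(kmutex_t *lock)
{
	uint64_t ncsw;

	ncsw = curlwp->l_ncsw;		/* snapshot before possibly blocking */
	mutex_enter(lock);
	if (__predict_false(curlwp->l_ncsw != ncsw)) {
		/* We slept and were rescheduled: per-CPU view is suspect. */
		mutex_exit(lock);
		return false;		/* caller re-fetches per-CPU state and retries */
	}
	return true;			/* same scheduling epoch, fastpath may proceed */
}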

@@ -1,4 +1,4 @@
-/* $NetBSD: scheduler.c,v 1.17 2010/07/11 16:20:39 pooka Exp $ */
+/* $NetBSD: scheduler.c,v 1.18 2010/08/15 20:23:04 pooka Exp $ */
 
 /*
  * Copyright (c) 2010 Antti Kantee. All Rights Reserved.
@@ -26,7 +26,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: scheduler.c,v 1.17 2010/07/11 16:20:39 pooka Exp $");
+__KERNEL_RCSID(0, "$NetBSD: scheduler.c,v 1.18 2010/08/15 20:23:04 pooka Exp $");
 
 #include <sys/param.h>
 #include <sys/atomic.h>
@@ -317,6 +317,7 @@ rump_schedule_cpu_interlock(struct lwp *l, void *interlock)
 fastlane:
 	l->l_cpu = l->l_target_cpu = rcpu->rcpu_ci;
 	l->l_mutex = rcpu->rcpu_ci->ci_schedstate.spc_mutex;
+	l->l_ncsw++;
 }
 
 void