Commit Graph

151 Commits

matt
dd6af2c565 Allow the MD code to decide whether to panic when cpu_uarea_alloc() would
return NULL.  If NULL is returned, just allocate the standard way.
2011-07-02 01:26:29 +00:00
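
A sketch of the MI-side pattern this describes: ask the optional MD hook first, and fall back to the standard wired kernel-map allocation when it returns NULL. Only cpu_uarea_alloc() and HAVE_CPU_UAREA_ROUTINES appear in the commits; the wrapper name, the hook's signature and the exact uvm_km_alloc() flags are assumptions, not the actual uvm_glue.c code.

    vaddr_t
    uarea_alloc(void)
    {
    #ifdef HAVE_CPU_UAREA_ROUTINES
            /* The MD hook may panic instead of returning NULL. */
            void *va = cpu_uarea_alloc(false);      /* signature assumed */
            if (va != NULL)
                    return (vaddr_t)va;
    #endif
            /* "The standard way": a wired kernel-map allocation. */
            return uvm_km_alloc(kernel_map, USPACE, USPACE_ALIGN,
                UVM_KMF_WIRED | UVM_KMF_WAITVA);
    }
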
rmind
e225b7bd09 Welcome to 5.99.53! Merge rmind-uvmplock branch:
- Reorganize locking in UVM and provide extra serialisation for pmap(9).
  New lock order: [vmpage-owner-lock] -> pmap-lock.

- Simplify locking in some pmap(9) modules by removing P->V locking.

- Use lock object on vmobjlock (and thus vnode_t::v_interlock) to share
  the locks amongst UVM objects where necessary (tmpfs, layerfs, unionfs).

- Rewrite and optimise x86 TLB shootdown code, make it simpler and cleaner.
  Add TLBSTATS option for x86 to collect statistics about TLB shootdowns.

- Unify /dev/mem et al in MI code and provide required locking (removes
  kernel-lock on some ports).  Also, avoid cache-aliasing issues.

Thanks to Andrew Doran and Joerg Sonnenberger, as their initial patches
formed the core changes of this branch.
2011-06-12 03:35:36 +00:00
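
The new lock order can be illustrated with a minimal sketch; the lock variables are hypothetical stand-ins for the page owner's lock (e.g. an object's vmobjlock) and the per-pmap lock:

    /* Illustration of the [vmpage-owner-lock] -> pmap-lock ordering only. */
    static void
    pv_update(kmutex_t *owner_lock, kmutex_t *pmap_lock)
    {
            mutex_enter(owner_lock);  /* first: [vmpage-owner-lock] */
            mutex_enter(pmap_lock);   /* then: pmap-lock, never the reverse */
            /* ... manipulate the page's P->V state here ... */
            mutex_exit(pmap_lock);
            mutex_exit(owner_lock);
    }
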
drochner
f376d5576a make this build w/o HAVE_CPU_UAREA_ROUTINES 2011-02-18 10:43:52 +00:00
matt
4f38307d62 Add support for cpu-specific uarea allocation routines. Allows different
allocation for user and system lwps.  MIPS will use this to map uareas of
system lwps using direct-mapped addresses (to reduce the overhead of
switching to kernel threads).  ibm4xx could use it to map uareas via
direct-mapped addresses and avoid the problem of having the kernel stack
not in the TLB.
2011-02-17 19:27:13 +00:00
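
A hedged sketch of what the MD side of the hook might look like on a port with a direct-mapped segment, using MIPS KSEG0 as the commit does; it assumes USPACE == PAGE_SIZE and a cpu_uarea_alloc(bool) signature, and is not the actual MIPS code:

    #include <uvm/uvm.h>
    #include <mips/cpuregs.h>       /* MIPS_PHYS_TO_KSEG0 */

    void *
    cpu_uarea_alloc(bool system)    /* signature assumed */
    {
            struct vm_page *pg;

            if (!system)
                    return NULL;    /* user lwps take the standard path */
            pg = uvm_pagealloc(NULL, 0, NULL, 0);
            if (pg == NULL)
                    return NULL;    /* MI caller falls back (or MD panics) */
            /* Direct-mapped: the kernel stack is never absent from the TLB. */
            return (void *)MIPS_PHYS_TO_KSEG0(VM_PAGE_TO_PHYS(pg));
    }
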
chuck
40ec801a13 update license clauses on my code to match the new-style BSD licenses.
based on second diff that rmind@ sent me.

no functional change with this commit.
2011-02-02 15:25:27 +00:00
rmind
7146b2f61d Retire struct user, remove sys/user.h inclusions.  Mark the sys/user.h
header as obsolete.  Remove the USER_TO_UAREA/UAREA_TO_USER macros.

Various #include fixes and review by matt@.
2011-01-14 02:06:22 +00:00
rmind
5f0ac9a4fa - Merge sched_pstats() and uvm_meter()/uvm_loadav().  Avoids a double
loop through all LWPs and duplicated locking overhead.

- Move sched_pstats() from soft-interrupt context to the process 0 main loop.
  Avoids a blocking effect on real-time threads.  Mostly fixes PR/38792.

Note: it might be worth moving the loop above PRI_PGDAEMON.  Also,
sched_pstats() might be cleaned up slightly.
2010-04-16 03:21:49 +00:00
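
For context, uvm_loadav() computes the classic 4BSD load averages by fixed-point exponential decay; a self-contained sketch of that calculation, with the traditional constants for 5-second sampling:

    #include <stdint.h>

    #define FSHIFT  11              /* fixed-point shift, as in 4BSD */
    #define FSCALE  (1 << FSHIFT)

    /* exp(-1/12), exp(-1/60), exp(-1/180): 1-, 5- and 15-minute decay. */
    static const uint32_t cexp[3] = {
            (uint32_t)(0.9200444146293232 * FSCALE),
            (uint32_t)(0.9834714538216174 * FSCALE),
            (uint32_t)(0.9944598480048967 * FSCALE),
    };

    /* Decay each average toward the current run-queue length nrun. */
    static void
    loadav_update(uint32_t avg[3], uint32_t nrun)
    {
            for (int i = 0; i < 3; i++)
                    avg[i] = (cexp[i] * avg[i] +
                        nrun * FSCALE * (FSCALE - cexp[i])) >> FSHIFT;
    }
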
jym
2c546c52cc Change the RSS (resident set size) limit. Instead of setting it arbitrarily
to the total free memory available to the system, use the smaller of
VM_MAXUSER_ADDRESS and total free memory (an RSS limit bigger than
VM_MAXUSER_ADDRESS has no real meaning).

Fix a possible integer overflow when ptoa(uvmexp.free) is bigger than 4GB
with a 32-bit vaddr_t.

Reviewed by bouyer@.

See also http://mail-index.netbsd.org/tech-kern/2010/02/24/msg007395.html
2010-02-25 23:10:49 +00:00
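
The overflow-safe computation can be sketched as below (the function name is hypothetical; uvmexp.free, atop()/ptoa(), MIN() and VM_MAXUSER_ADDRESS are the real kernel names). Comparing in page units first means ptoa() is only ever applied to a value no larger than atop(VM_MAXUSER_ADDRESS), which by definition fits a vaddr_t:

    rlim_t
    rss_limit_default(void)         /* name hypothetical */
    {
            psize_t freepg = uvmexp.free;               /* free memory, in pages */
            psize_t userpg = atop(VM_MAXUSER_ADDRESS);  /* user VA span, in pages */

            /* An RSS limit above VM_MAXUSER_ADDRESS has no real meaning. */
            return ptoa(MIN(freepg, userpg));
    }
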
rmind
1069745866 Replace a few USER_TO_UAREA/UAREA_TO_USER uses, reduce sys/user.h inclusions. 2009-12-17 01:25:10 +00:00
rmind
7255a0d9d4 Add uvm_lwp_getuarea() and uvm_lwp_setuarea(). OK matt@. 2009-11-21 17:45:02 +00:00
rmind
40cf6f3659 Remove uarea swap-out functionality:
- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side note, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans of the LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code.  Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.

Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).

Discussed on <tech-kern>, reviewed by <ad>.
2009-10-21 21:11:57 +00:00
matt
0ad7ebe8ea Revert change to printf. (why can't __func__ be concatenated with other strings?) 2009-08-10 16:50:18 +00:00
matt
8e96f9bb1a Only swap out uareas if VMSWAP_UAREA is defined (which it should be by default).
If it's not defined and PMAP_MAP_POOLPAGE is defined and USPACE == PAGE_SIZE,
then allocate/map USPACE via uvm_pagealloc/PMAP_MAP_POOLPAGE.

On platforms like MIPS with 16KB pages, this means that uareas (and hence lwp
kernel stacks) will always be accessible, since they will be in KSEG0.
2009-08-09 22:19:09 +00:00
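
Condensed, the added allocation path might read like this fragment (assumed to sit inside the uarea allocator; only the macro names come from the commit message):

    #if !defined(VMSWAP_UAREA) && defined(PMAP_MAP_POOLPAGE)
    #if USPACE == PAGE_SIZE
            /* One page covers the whole uarea: grab a page, direct-map it. */
            struct vm_page *pg = uvm_pagealloc(NULL, 0, NULL, 0);
            vaddr_t uarea = 0;

            if (pg != NULL)
                    uarea = PMAP_MAP_POOLPAGE(VM_PAGE_TO_PHYS(pg));
    #endif
    #endif
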
rmind
5c68e5d0ee Ephemeral mapping (emap) implementation.  The concept is based on the idea
that the activity of other threads will, as a side effect, perform the TLB
flush for processes using emap.  To track that, global and per-CPU generation
numbers are used.  This idea was suggested by Andrew Doran; various
improvements to it by me.  Notes:

- For now, zero-copy on pipe is not yet enabled.
- TCP socket code would likely need more work.
- Additional UVM loaning improvements are needed.

Proposed on <tech-kern>, silence there.
Quickly reviewed by <ad>.
2009-06-28 15:18:50 +00:00
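
The generation-number bookkeeping might look roughly like the following sketch; all names are hypothetical and this shows only the idea, not the committed code:

    /* Bumped each time an ephemeral mapping is torn down. */
    static volatile unsigned long emap_gen;

    /* Per-CPU: the global generation this CPU is known to have reached. */
    struct emap_cpu {
            unsigned long   ec_gen;
    };

    /*
     * A mapping torn down at generation g needs no explicit TLB shootdown
     * once every CPU's ec_gen has reached g: each CPU has flushed its TLB
     * since then as a side effect of other activity.
     */
    static int
    emap_reusable(unsigned long g, const struct emap_cpu *ec, unsigned ncpu)
    {
            for (unsigned i = 0; i < ncpu; i++)
                    if (ec[i].ec_gen < g)
                            return 0;
            return 1;
    }
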
rmind
523acc7d68 Avoid a few #ifdef KSTACK_CHECK_MAGIC. 2009-04-16 00:17:19 +00:00
mrg
fcc023545e - add a new RLIMIT_AS (aka RLIMIT_VMEM) resource that limits the total
address space available to processes.  this limit exists in most other
modern unix variants, and like most of them, our defaults are unlimited.
remove the old mmap / rlimit.datasize hack.

- adds the VMCMD_STACK flag to all the stack-creation vmcmd callers.
it is currently unused, but was added a few years ago.

- add a pair of new process size values to kinfo_proc2{}. one is the
total size of the process memory map, and the other is the total size
adjusted for unused stack space (since most processes have a lot of
this...)

- patch sh and csh to notice RLIMIT_AS.  (in some cases, the alias
RLIMIT_VMEM was already present and used if available.)

- patch ps, top and systat to notice the new k_vm_vsize member of
kinfo_proc2{}.

- update irix, svr4, svr4_32, linux and osf1 emulations to support
this information.  (freebsd could be done, but it's best left
as part of the full update of compat/freebsd.)


this addresses PR 7897.  it also gives correct memory usage values,
which have never been entirely correct (since mmap), and have been
very incorrect since jemalloc() was enabled.

tested on i386 and sparc64, build tested on several other platforms.

thanks to many folks for feedback and testing, but most especially
chuq and yamt for critical suggestions that led to this patch not
having a special ugliness i wasn't happy with anyway :-)
2009-03-29 01:02:48 +00:00
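
The kind of check the new resource enables, as a sketch (the function name is hypothetical; the proc and vm_map fields are the real ones):

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/proc.h>
    #include <sys/resource.h>

    int
    as_limit_check(struct proc *p, vsize_t grow)
    {
            struct vm_map *map = &p->p_vmspace->vm_map;

            if (map->size + grow > p->p_rlimit[RLIMIT_AS].rlim_cur)
                    return ENOMEM;  /* would exceed RLIMIT_AS */
            return 0;
    }
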
yamt
cec5254ed7 uvm_swapin: uncomment an assertion which is now ok. 2009-01-31 09:13:09 +00:00
ad
92ce8c6a3d Make the emulations, exec formats, coredump, NFS, and the NFS server
into modules. By and large this commit:

- shuffles header files and ifdefs
- splits code out where necessary to be modular
- adds module glue for each of the components
- adds/replaces hooks for things that can be installed at runtime
2008-11-19 18:35:57 +00:00
ad
321c12209b Don't swap kernel stacks of realtime threads. 2008-06-25 19:20:56 +00:00
ad
c0cd593d89 uvm_swapout: try to lock the vm_map before calling pmap_collect. 2008-06-16 10:19:57 +00:00
ad
6bb1c8bc6f swappable: invert previous so we check for SACTIVE or SSTOP. 2008-06-09 11:52:34 +00:00
ad
a3196ec57e swappable: return false if l->l_proc->p_stat == SDYING. 2008-06-09 11:51:43 +00:00
ad
0cd7bfa598 uvm_proc_exit: use macros to disable preemption. 2008-06-09 11:49:54 +00:00
ad
cbbf514e2c - vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can
use both types of list.

- Make page coloring and idle zero state per-CPU.

- Maintain per-CPU page freelists. When freeing, put pages onto the local
  CPU's lists and the global lists. When allocating, prefer to take pages
  from the local CPU. If none are available take from the global list as
  done now. Proposed on tech-kern@.
2008-06-04 12:45:28 +00:00
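
The allocation preference can be sketched with hypothetical data structures (stand-ins for struct vm_page and the real free lists):

    #include <stddef.h>
    #include <sys/queue.h>

    struct page {                           /* stand-in for struct vm_page */
            LIST_ENTRY(page) list;          /* the new LIST_ENTRY union arm */
    };

    struct pgflist {
            LIST_HEAD(, page) pages;
    };

    /* Prefer pages freed on this CPU; fall back to the global list. */
    static struct page *
    pgfl_get(struct pgflist *local, struct pgflist *global)
    {
            struct page *pg;

            if ((pg = LIST_FIRST(&local->pages)) == NULL)
                    pg = LIST_FIRST(&global->pages);
            if (pg != NULL)
                    LIST_REMOVE(pg, list);
            return pg;
    }
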
ad
2feabc3836 PR kern/38812 race between lwp_exit_switchaway and exit1/coredump
Move the LWP RUNNING and TIMEINTR flags into the thread-private flag word.
2008-05-31 21:26:01 +00:00
ad
7d6bc631b9 Disable preemption while swapping pmap. 2008-04-27 11:39:46 +00:00
ad
6d70f903e6 Network protocol interrupts can now block on locks, so merge the globals
proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:

- Inspecting process state requires thread context, so signals can no longer
  be sent from a hardware interrupt handler. Signal activity must be
  deferred to a soft interrupt or kthread.

- As the proc state locking is simplified, it's now safe to take exit()
  and wait() out from under kernel_lock.

- The system spends less time at IPL_SCHED, and there is less lock activity.
2008-04-24 15:35:27 +00:00
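
The deferral pattern this imposes can be sketched with the softint(9) API (handler names hypothetical; proc_lock is the mutex this commit introduces):

    #include <sys/intr.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    static void *sig_sih;
    static void  sig_softintr(void *);

    void
    example_attach(void)
    {
            sig_sih = softint_establish(SOFTINT_CLOCK, sig_softintr, NULL);
    }

    int
    example_hardintr(void *arg)
    {
            /* Hardware interrupt context: may not inspect proc state. */
            softint_schedule(sig_sih);
            return 1;
    }

    static void
    sig_softintr(void *arg)
    {
            /* Thread context: taking proc_lock and signalling is now legal. */
            mutex_enter(proc_lock);
            /* ... look up the process and post the signal ... */
            mutex_exit(proc_lock);
    }
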
yamt
1d737efc42 fix the order of printf arguments. 2008-04-11 15:47:15 +00:00
christos
9741037836 - use uarea_swapin, rather than duplicating the code.
- use __func__ where appropriate.
2008-04-11 15:31:37 +00:00
christos
c7a574ed9e make this compile 2008-03-29 20:40:52 +00:00
dholland
132066c541 Fix broken build. hi skrll :-) 2008-03-29 20:15:54 +00:00
skrll
38b9530a91 Fix unused variable when DEBUG isn't defined. 2008-03-29 18:49:13 +00:00
ad
be04ac4896 Make rusage collection per-LWP and collate in the appropriate places.
Cloned threads need a little more work, but the locking needs to be
fixed first.
2008-03-27 19:06:51 +00:00
yamt
569bb6b4dc update comment 2008-02-29 12:08:04 +00:00
yamt
f7048cc403 uvm_uarea_init: fix compilation where PAGE_SIZE is not a constant. (sparc)
reported by Tom Spindler.
2008-02-08 11:49:40 +00:00
yamt
cb964fe917 uvm_uarea_init: make #if about PR_NOALIGN clearer and add a comment
to explain it.
2008-02-07 12:21:24 +00:00
yamt
52838e34f5 remove a special allocator for uareas, which is no longer necessary.
use pool_cache instead.
2008-01-28 12:22:46 +00:00
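
The replacement might be set up like this sketch with the pool_cache(9) API (the exact flags and IPL are assumptions; USPACE/USPACE_ALIGN are the usual uarea constants):

    #include <sys/pool.h>

    static pool_cache_t uarea_cache;

    void
    uarea_pool_init(void)
    {
            uarea_cache = pool_cache_init(USPACE, USPACE_ALIGN, 0, 0,
                "uarea", NULL, IPL_NONE, NULL, NULL, NULL);
    }

    vaddr_t
    uarea_get(void)
    {
            return (vaddr_t)pool_cache_get(uarea_cache, PR_WAITOK);
    }

    void
    uarea_put(vaddr_t va)
    {
            pool_cache_put(uarea_cache, (void *)va);
    }
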
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad
d831186d55 Merge scheduler changes from the vmlocking branch. All discussed on
tech-kern:

- Invert priority space so that zero is the lowest priority. Rearrange
  number and type of priority levels into bands. Add new bands like
  'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
  sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
2007-11-06 00:42:39 +00:00
ad
f5096e38d8 uvm_swapin: disable the swaplock assertion. uvm_lwp_hold() can't take
the lock yet.
2007-09-21 00:18:35 +00:00
ad
09e7c477d8 Include sys/cpu.h for CPU_INFO_FOREACH. 2007-08-18 10:07:55 +00:00
ad
ee1c78315b Fix error in previous. 2007-08-18 00:31:32 +00:00
ad
b5866eb299 Make the uarea cache per-CPU and drain in batches of 4. 2007-08-18 00:21:10 +00:00
ad
63873b9b21 Revert unintentially committed change. 2007-07-14 22:27:15 +00:00
ad
88ab7da936 Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
2007-07-09 20:51:58 +00:00
yamt
f03010953f merge yamt-idlelwp branch. asked by core@. some ports still need work.
from doc/BRANCHES:

	idle lwp, and some changes depending on it.

	1. separate context switching and thread scheduling.
	   (cf. gmcgarry_ctxsw)
	2. implement idle lwp.
	3. clean up related MD/MI interfaces.
	4. make scheduler(s) modular.
2007-05-17 14:51:11 +00:00
rmind
60e35a7f80 Export uvm_uarea_free() to the rest of the kernel.
Make things compile again.
2007-03-24 21:15:39 +00:00
christos
53524e44ef Kill caddr_t; there will be some MI fallout, but it will be fixed shortly. 2007-03-04 05:59:00 +00:00
thorpej
b3667ada6d TRUE -> true, FALSE -> false 2007-02-22 06:05:00 +00:00
thorpej
712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00