Commit Graph

770 Commits

Author SHA1 Message Date
simonb 19f85c9cf1 Fix a typo. 2004-04-05 01:39:07 +00:00
pk daff668b49 Use maxdmap and maxsmap instead of MAXDSIZ and MAXSSIZ. 2004-04-04 18:21:48 +00:00
yamt dabac1bc03 uvm_map_findspace: don't return unaligned address if alignment is specified.
discussed on tech-kern@.
2004-03-30 12:59:09 +00:00
he 8721979173 Conditionalize a few more declarations, as they may be defined as macros:
pmap_collect, pmap_reference, and pmap_remove (observed lossage for vax).
2004-03-27 00:59:55 +00:00
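
The guard idiom referred to here, in a minimal sketch (the prototypes follow the MI pmap interface, but treat details as illustrative):

    /* Declare only if <machine/pmap.h> did not already define a macro. */
    #ifndef pmap_reference
    void    pmap_reference(pmap_t);
    #endif
    #ifndef pmap_remove
    void    pmap_remove(pmap_t, vaddr_t, vaddr_t);
    #endif
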
atatat 19af35fd0d Rework sysctl_createv() and its flags. The flags have all been renamed,
and sysctl_createv() now takes more arguments.
2004-03-24 15:34:46 +00:00
junyoung 325f5482a8 Nuke __P(). 2004-03-24 07:55:01 +00:00
junyoung 1e2b269ded - Nuke __P().
- Drop trailing spaces.
2004-03-24 07:50:48 +00:00
junyoung 7e0c058612 Drop trailing spaces. 2004-03-24 07:47:32 +00:00
junyoung ff32ba0bff pmap_copy() and pmap_update() might be defined as macros in <machine/pmap.h>. 2004-03-23 14:15:59 +00:00
mycroft 2fef5d8dfc Something I posted to tech-kern a long time ago...
Simplify uvm_map_extract() slightly by eliminating "oldstart".
2004-03-17 23:58:12 +00:00
jdolecek 43bafa5c97 fix typo in comment 2004-03-14 16:47:23 +00:00
pooka c5e500a486 Reflect dropping mappings in map_size.
Avoids panic on DIAGNOSTIC kernels.

ok by chs
2004-03-11 15:03:47 +00:00
dbj 5dc123ec73 add debugging assertion ensuring UBC_FAULTBUSY is only used with UBC_WRITE 2004-03-05 20:44:01 +00:00
yamt 9359a18b6a uvm_fault: check loan_count of the neighboring object page properly.
PR/24595 from Stephan Uphoff.
2004-03-02 11:43:44 +00:00
dsl bcaf57b039 Fix prev. so it compiles 2004-02-14 16:40:22 +00:00
jdolecek af46922ada add compat hook in check for zerodev; use this hook to recognize
the old ARM /dev/zero minor mapping #ifdef COMPAT_16.
Fixes the second part of PR kern/23581 by Richard Earnshaw.
2004-02-14 12:20:14 +00:00
drochner 41b5db514b make this compile whether DIAGNOSTIC is defined or not 2004-02-13 17:17:04 +00:00
yamt 546aea4d9c when breaking a loan from uobj,
insert the replacement page into the same position
as the original page on the object memq so that
genfs_putpages (and lfs) won't be confused.

noted by Stephan Uphoff (PR/24328)
2004-02-13 13:47:16 +00:00
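
A hedged sketch of the positional replacement described above, using the queue(3) macros; the memq/listq names follow the UVM of this era but should be treated as illustrative:

    /*
     * Put the replacement page exactly where the original sat on
     * the object's page queue, then take the original off it.
     */
    TAILQ_INSERT_AFTER(&uobj->memq, oldpg, newpg, listq);
    TAILQ_REMOVE(&uobj->memq, oldpg, listq);
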
wiz d20841bb64 Uppercase CPU, plural is CPUs. 2004-02-13 11:36:08 +00:00
yamt babebaa9f9 uvm_loanentry: add a missing uvmfault_unlockall. 2004-02-13 11:15:30 +00:00
matt a78a1b0777 Back out the changes in
http://mail-index.netbsd.org/source-changes/2004/01/29/0027.html
since they don't really fix the problem.

Incorporate one fix:  Mark uvm_map_entry's that were created with
UVM_FLAG_NOMERGE so that they will not be used as future merge
candidates.
2004-02-10 01:30:49 +00:00
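
A minimal sketch of that marking, assuming a per-entry flag named UVM_MAP_NOMERGE (illustrative):

    /* at entry creation: */
    if (flags & UVM_FLAG_NOMERGE)
            entry->flags |= UVM_MAP_NOMERGE;

    /* later, when looking for merge candidates: */
    if (entry->flags & UVM_MAP_NOMERGE)
            goto nomerge;           /* never coalesce this entry */
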
dbj c32c758d2d s/fauling/faulting/ 2004-02-10 00:40:06 +00:00
yamt 1e18e59746 - borrow vmspace0 in uvm_proc_exit instead of uvmspace_free.
the latter is not an appropriate place to do so, and it broke vfork.
- deactivate pmap before calling cpu_exit() to keep a balance of
  pmap_activate/deactivate.
2004-02-09 13:11:21 +00:00
yamt 8fb96e0be4 introduce a new patchable variable, uvm_debug_check_rbtree,
which is zero by default.
perform rbtree sanity checks only when it is nonzero,
because the check is very heavyweight, especially when
there are many entries.
2004-02-07 13:22:19 +00:00
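
This is the usual patchable-variable idiom; a sketch, with a hypothetical name for the check routine:

    int uvm_debug_check_rbtree = 0;         /* patchable; off by default */

    /* at each sanity-check site: */
    if (uvm_debug_check_rbtree)
            uvm_rb_tree_sanity(map);        /* expensive full-tree walk */
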
yamt a45adbd9c7 don't deactivate pmap in exit1 because we'll touch the pmap later.
instead, borrow vmspace0 immediately before destroying the pmap
in uvmspace_free.
2004-02-07 10:05:52 +00:00
yamt 4124096ea8 uvm_kmapent_alloc:
in the case that there are no cached entries,
if kmem_map is already up, allocate an entry from it
so that we won't try to vm_map_lock recursively.
XXX assuming usage pattern of kmem_map.
2004-02-07 08:02:21 +00:00
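
A hedged outline of that fallback (the helper name is hypothetical):

    /* uvm_kmapent_alloc: the entry cache is empty */
    if (kmem_map != NULL) {
            /*
             * kmem_map is up: take the kva from it rather than from
             * the map being operated on, so the same map is never
             * vm_map_lock'ed recursively.
             */
            va = alloc_kva_from(kmem_map);
    } else {
            /* early boot: fall back to the static bootstrap entries */
    }
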
he be19fc25f3 Since the playstation2 port still uses a variant of gcc 2.95.2,
change to use a zero-sized array instead of a C99 flexible array
member in a struct.

OK'ed by yamt.
2004-02-02 23:13:44 +00:00
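
The two spellings side by side, in a minimal sketch (struct and member names illustrative):

    struct hdr {
            int     count;
    #if 0   /* C99 flexible array member; gcc 2.95.2 rejects it */
            char    data[];
    #else   /* zero-sized array, the GNU C extension 2.95 does accept */
            char    data[0];
    #endif
    };
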
yamt 6f979c2127 uvm_loanuobjpages: fix a comment. 2004-01-30 12:01:27 +00:00
yamt c5ffc97d8e remove wrong assertions.
sparc's alloc_cpuinfo_global_va() partially unmaps kva range in kernel_map.

noted by Juergen Hannken-Illjes on current-users@.
2004-01-30 11:56:39 +00:00
tls aeaf748ff2 Buffer cache fixes to avoid thrashing between high and low water marks
and uncontrolled growth.

The key fix is from Dan Carasone, who noticed that buf_canfree() was
counting in _bytes_ but freeing in _buffers_, which caused the instant
drop to lowater observed by some users.

We now control the rate of growth; the probability of getting a new
allocation is inversely proportional to the current size of the
cache.  This idea is from a long-ago conversation with Kirk McKusick
and, if memory serves, was used for the file-system cache in some
other BSD variant at some point in history.

With growth and shrinkage more or less dealt with, we return the
default maximum cache size to 15%.  The default _minimum_ cache size
is raised from 1/16 of the maximum cache size to 1/8, since 1/16 was
chosen when the maximum size was 30% of memory.

Finally, after observing the behaviour of the pagedaemon and the
buffer cache drainer under pathological workloads (e.g. a benchmark
that steps through 75% of available memory backwards) I have moved
the call to buf_drain() to the beginning of the pagedaemon from the
end; if the pagedaemon bogs down, it still won't get run as often
as it should, but at least this way it will see the state of the
free count and free target _before_ the scan step does its thing.
2004-01-30 11:32:16 +00:00
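
A hedged sketch of the growth policy, not the committed code; bufmem and bufmem_hiwater stand for the current and maximum cache sizes:

    extern u_long bufmem, bufmem_hiwater;

    /*
     * Grant a new allocation with probability proportional to the
     * remaining headroom, so growth slows as the cache fills:
     * P(grow) = (bufmem_hiwater - bufmem) / bufmem_hiwater
     */
    int
    buf_cangrow(void)
    {
            return (random() % bufmem_hiwater >= bufmem);
    }
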
yamt d6e6e2e5c8 some English fixes from Soren Jacobsen. 2004-01-29 12:07:29 +00:00
yamt 20c5bc5099 - split uvm_map() into two functions for the following:
- for in-kernel maps, disable map entry merging so that
  unmap operations won't block. (workaround for PR/24039)
- for in-kernel maps, allocate kva for vm_map_entry from
the map itself and eliminate MAX_KMAPENT and
  uvm_map_entry_kmem_pool.
2004-01-29 12:06:02 +00:00
hannken 3db4e2acd8 Make VOP_STRATEGY(bp) a real VOP as discussed on tech-kern.
VOP_STRATEGY(bp) is replaced by one of two new functions:

- VOP_STRATEGY(vp, bp)  Call the strategy routine of vp for bp.
- DEV_STRATEGY(bp)      Call the d_strategy routine of bp->b_dev for bp.

DEV_STRATEGY(bp) is used only when going directly to a block device.
2004-01-25 18:06:48 +00:00
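
Call sites change accordingly (straight from the description above):

    /* old: */
    VOP_STRATEGY(bp);

    /* new: file system I/O goes through the vnode's strategy routine, */
    VOP_STRATEGY(vp, bp);
    /* while direct block-device I/O dispatches on bp->b_dev: */
    DEV_STRATEGY(bp);
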
dbj 5b4c9f2016 rearrange struct uvm_history to put the struct simplelock at the end.
This avoids problems with the kernel-grovelling vmstat -u/-U when
using LOCKDEBUG, which changes the size of struct simplelock.
Replaced the original location of the simplelock with "int unused"
so that binary compatibility will be retained with old vmstat.
2004-01-24 21:29:03 +00:00
yamt b81e2fa5c5 uvm_coredump_walkmap: use UVM_OBJ_IS_DEVICE macro. 2004-01-16 12:43:16 +00:00
yamt 7f20b0c529 bump vnode hold count for page cache as well
to resolve unfairness between page cache and traditional buffer cache.
pointed out by enami tsugutomo on current-users@.
2004-01-14 11:28:04 +00:00
yamt 0cad61498f sysctl_vm_updateminmax: fix swapped filemin and execmin.
the problem reported by Vesbula on current-users@.
2004-01-11 18:42:25 +00:00
yamt 7266a95907 store an i/o priority hint in struct buf for buffer queue discipline. 2004-01-10 14:39:50 +00:00
yamt 4b651870d9 #if 0 out unused ubc_flush(). 2004-01-07 12:18:16 +00:00
yamt 59afac32fe - get pages to loan out in uvm_loanuobjpages() rather than
having the caller (nfsd, in this case) do so.
- tweak locking so that nfs loaned READ works on layered filesystems.
2004-01-07 12:17:10 +00:00
chs 7662f44874 fix lock initialization in uvm_anon_add(). from PR 23831. 2004-01-06 15:56:49 +00:00
jdolecek 089abdad44 Rearrange the process exit path to avoid the need to free resources
from a different process context ('reaper').

From within the exiting process context:
* deactivate pmap and free vmspace while we can still block
* introduce MD cpu_lwp_free() - this cleans all MD-specific context (such
  as FPU state), and is the last potentially blocking operation;
  all of cpu_wait(), and most of cpu_exit(), is now folded into cpu_lwp_free()
* process is now immediately marked as zombie and made available for pickup
  by parent; the remaining last lwp continues the exit as fully detached
* MI (rather than MD) code bumps uvmexp.swtch, cpu_exit() is now same
  for both 'process' and 'lwp' exit

uvm_lwp_exit() is modified to never block; the u-area memory is now
always just linked to the list of available u-areas. Introduce (blocking)
uvm_uarea_drain(), which is called to release the excess u-area memory;
this is called by parent within wait4(), or by pagedaemon on memory shortage.
uvm_uarea_free() is now private function within uvm_glue.c.

MD process/lwp exit code now always calls lwp_exit2() immediately after
switching away from the exiting lwp.

g/c now unneeded routines and variables, including the reaper kernel thread
2004-01-04 11:33:29 +00:00
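
A hedged sketch of the non-blocking u-area recycling (freelist layout and names illustrative):

    static vaddr_t uvm_uareas;              /* freelist, chained in place */

    static void
    uvm_uarea_free(vaddr_t uaddr)           /* never blocks */
    {
            *(vaddr_t *)uaddr = uvm_uareas;
            uvm_uareas = uaddr;
    }

    void
    uvm_uarea_drain(void)                   /* may block */
    {
            while (uvm_uareas != 0) {
                    vaddr_t ua = uvm_uareas;

                    uvm_uareas = *(vaddr_t *)ua;
                    /* really unmap and free the kva and pages here */
            }
    }
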
pk 70f20a1217 Replace the traditional buffer memory management -- based on fixed per buffer
virtual memory reservation and a private pool of memory pages -- by a scheme
based on memory pools.

This allows better utilization of memory because buffers can now be allocated
with a granularity finer than the system's native page size (useful for
filesystems with e.g. 1k or 2k fragment sizes).  It also avoids fragmentation
of virtual to physical memory mappings (due to the former fixed virtual
address reservation) resulting in better utilization of MMU resources on some
platforms.  Finally, the scheme is more flexible by allowing run-time decisions
on the amount of memory to be used for buffers.

On the other hand, the effectiveness of the LRU queue for buffer recycling
may be somewhat reduced compared to the traditional method since, due to the
nature of the pool based memory allocation, the actual least recently used
buffer may release its memory to a pool different from the one needed by a
newly allocated buffer. However, this effect will kick in only if the
system is under memory pressure.
2003-12-30 12:33:13 +00:00
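
A hedged sketch of the pool scheme; the pool table and index helper are illustrative, pool_get(9) is the real interface:

    /* one pool per buffer-memory chunk size, e.g. 1k, 2k, 4k, 8k */
    static struct pool bmempools[4];

    /* attach 'size' bytes of data space to a buffer: */
    bp->b_data = pool_get(&bmempools[buf_poolidx(size)], PR_WAITOK);
    bp->b_bufsize = size;
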
simonb 2b9ac03f55 No need to break a line - the full line is less than 80 chars long. 2003-12-21 11:38:46 +00:00
simonb b9fbceaf46 Unindent a code block that doesn't need to be indented. 2003-12-19 06:02:50 +00:00
pk 3c96ae431b * Introduce uvm_km_kmemalloc1() which allows alignment and preferred offset
to be passed to uvm_map().

* Turn all uvm_km_valloc*() macros back into (inlined) functions to retain
  binary compatibility with any 3rd party modules.
2003-12-18 15:02:04 +00:00
pk 60181444ca Condense all existing variants of uvm_km_valloc into a single function:
uvm_km_valloc1(), and use it to express all of
	uvm_km_valloc()
	uvm_km_valloc_wait()
	uvm_km_valloc_prefer()
	uvm_km_valloc_prefer_wait()
	uvm_km_valloc_align()
in terms of it by macro expansion.
2003-12-18 08:15:42 +00:00
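
A hedged sketch of the macro expansion; the uvm_km_valloc1() argument list and the wait flag are illustrative, UVM_UNKNOWN_OFFSET being the usual "no preferred offset" value:

    #define uvm_km_valloc(map, size) \
            uvm_km_valloc1(map, size, 0, UVM_UNKNOWN_OFFSET, 0)
    #define uvm_km_valloc_align(map, size, align) \
            uvm_km_valloc1(map, size, align, UVM_UNKNOWN_OFFSET, 0)
    #define uvm_km_valloc_wait(map, size) \
            uvm_km_valloc1(map, size, 0, UVM_UNKNOWN_OFFSET, UVM_KMV_WAIT)
    /* UVM_KMV_WAIT: hypothetical "sleep until kva is available" flag */
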
tsutsui 082caf94ca Allow sysctl(8) to update vm.{anon,exec,file}{min,max}.
XXX needs sysctl(9) man page to confirm this change.
2003-12-07 00:40:43 +00:00
atatat 13f8d2ce5f Dynamic sysctl.
Gone are the old kern_sysctl(), cpu_sysctl(), hw_sysctl(),
vfs_sysctl(), etc, routines, along with sysctl_int() et al.  Now all
nodes are registered with the tree, and nodes can be added (or
removed) easily, and I/O to and from the tree is handled generically.

Since the nodes are registered with the tree, the mapping from name to
number (and back again) can now be discovered, instead of having to be
hard coded.  Adding new nodes to the tree is likewise much simpler --
the new infrastructure handles almost all the work for simple types,
and just about anything else can be done with a small helper function.

All existing nodes are where they were before (numerically speaking),
so all existing consumers of sysctl information should notice no
difference.

PS - I'm sorry, but there's a distinct lack of documentation at the
moment.  I'm working on sysctl(3/8/9) right now, and I promise to
watch out for buses.
2003-12-04 19:38:21 +00:00
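
A hedged example of registering one node with the new interface; the argument list loosely follows the sysctl_createv() of this period (a later commit above renames the flags and adds arguments), and the backing variable is illustrative:

    static int vm_anonmax;          /* illustrative backing variable */

    sysctl_createv(clog, 0, NULL, NULL,
            CTLFLAG_PERMANENT|CTLFLAG_READWRITE,
            CTLTYPE_INT, "anonmax", NULL,
            NULL, 0, &vm_anonmax, 0,
            CTL_VM, VM_ANONMAX, CTL_EOL);
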
yamt f7d48e3571 mincore: don't treat an aobj as a device mapping. 2003-11-29 19:06:48 +00:00