Commit Graph

355 Commits

Author SHA1 Message Date
enami 332c98526a - Move the comment noting that a call to
uvm_map_pageable(map, ...) implies unlocking the passed map so that it
  sits just before the function call.
- If we bail out before calling uvm_map_pageable(), unlock the map
  ourselves to prevent a ``locking against myself'' panic.  Such a panic
  is caused, for example, when cdrecord is invoked with too large a fifo size.
2000-05-23 02:19:20 +00:00
thorpej 1410091b4e Clean up a comment. 2000-05-20 19:54:01 +00:00
thorpej cd737c4016 Remove VM_PROT_EXECUTE from the permissions used to map the page
for pager I/O -- it is not needed, and including it leads to
unnecessary I-cache flushes.
2000-05-20 03:36:06 +00:00
thorpej 646555bbd5 Clean up some indentation lossage in uvm_map_extract(). 2000-05-19 17:43:55 +00:00
thorpej f636538446 NULL != 0 2000-05-19 04:34:39 +00:00
thorpej 655b21e17d Tell uvm_pagermapin() the direction of the I/O so that it can map
with only the protection that it needs.
2000-05-19 03:45:04 +00:00
thorpej f3b078d268 __predict_false() an error check. 2000-05-08 23:13:42 +00:00
thorpej d45c9982df __predict_false() DIAGNOSTIC error checks. 2000-05-08 23:11:53 +00:00
thorpej 39294d89e5 __predict_false() out-of-resource conditions and DIAGNOSTIC error checks. 2000-05-08 23:10:20 +00:00
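For readers unfamiliar with the annotation used in the three commits above: __predict_false() is NetBSD's wrapper around GCC's __builtin_expect(). A minimal, runnable user-space sketch of the pattern:

	#include <stdio.h>
	#include <stdlib.h>

	/* As defined in NetBSD's <sys/cdefs.h> (for GCC). */
	#define __predict_false(exp)	__builtin_expect((exp) != 0, 0)

	static void *
	alloc_or_die(size_t len)
	{
		void *p = malloc(len);

		/*
		 * Mark the out-of-resource path as unlikely so the
		 * compiler keeps the common path straight-line.
		 */
		if (__predict_false(p == NULL)) {
			fprintf(stderr, "out of memory\n");
			abort();
		}
		return p;
	}

	int
	main(void)
	{
		free(alloc_or_die(128));
		return 0;
	}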
thorpej b5b82faa4a uvm_map_setup(): We almost never set up an interrupt-safe map, but we
set up quite a few regular ones (at every fork!), so put interrupt-
safe map setup in the slow path with a __predict_false().

uvm_map_reference(): __predict_false() the check for NULL map.
uvm_map_deallocate(): Likewise.
2000-05-08 22:59:35 +00:00
thorpej 9ec517a68e Changes necessary to implement pre-zero'ing of pages in the idle loop:
- Make page free lists have two actual queues: known-zero pages and
  pages with unknown contents.
- Implement uvm_pageidlezero().  This function attempts to zero up to
  the target number of pages until the target has been reached (currently
  target is `all free pages') or until whichqs becomes non-zero (indicating
  that a process is ready to run).
- Define a new hook for the pmap module for pre-zero'ing pages.  This is
  used to zero the pages using uncached access.  This allows us to zero
  as many pages as we want without polluting the cache.

In order to use this feature, each platform must add the appropriate
glue in their idle loop.
2000-04-24 17:12:00 +00:00
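To make the shape of that per-platform glue concrete, here is a hedged sketch (not any particular port's code; the surrounding idle-loop structure varies by machine):

	/*
	 * Sketch of the idle-loop hook described above: while no
	 * process is runnable, donate the spare cycles to the page
	 * zeroer, which checks whichqs between pages.
	 */
	void
	idle(void)
	{
		while (whichqs == 0)		/* nothing ready to run */
			uvm_pageidlezero();	/* zero some free pages */
		/* ... fall into cpu_switch() to run the new process ... */
	}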
chs d444bb4032 undo rev 1.13, which is to say, don't block interrupts while deactivating
one pmap and activating another.  this isn't actually necessary (since
pmap_activate() and pmap_deactivate() affect only user-level mappings,
which cannot be accessed from interrupts anyway), and pmap_activate()
is very slow on old sun4c sparcs so we can't block interrupts for this long.
this fixes PR 8322.
2000-04-16 20:52:29 +00:00
mrg 6b7f13609a remove <vm/vm_swap.h> and <vm/vm_conf.h> 2000-04-15 18:08:12 +00:00
pk 741c324930 Finish previous. 2000-04-11 08:12:14 +00:00
chs 8724ec3b5c avoid declaring "i" as a local variable in a macro;
it's too easy to shadow another local.
2000-04-11 02:34:19 +00:00
chs 66014d2dff sparc -> __sparc__
print lock status in uvm_object_printit().
2000-04-10 02:21:26 +00:00
chs 061ecbff46 tidy. 2000-04-10 02:20:06 +00:00
thorpej eeb3a38cfc Use UVM_PGA_ZERO in the promote-zero-fault case of uvm_fault(). 2000-04-10 01:17:41 +00:00
thorpej 345b3d2136 Use UVM_PGA_ZERO in a few (easy) places. 2000-04-10 00:32:46 +00:00
thorpej 2c48131727 Add UVM_PGA_ZERO which instructs uvm_pagealloc{,_strat}() to return a
zero'd, ! PG_CLEAN page, as if it were uvm_pagezero()'d.
2000-04-10 00:28:05 +00:00
chs d75d6fb164 restore a brelvp() that I removed in a moment of overzealousness.
Debugged by:  Brian Grayson <bgrayson@netbsd.org>
2000-04-07 08:27:28 +00:00
chs 60a7a67f71 remove uvm_shareprot(). no longer needed since the demise of share maps. 2000-04-03 08:09:02 +00:00
chs 03a4ef3a79 remove the "shareprot" pagerop. it's not needed anymore since
share maps are long gone.
2000-04-03 07:35:23 +00:00
thorpej 2bc5adb20e Instead of checking vm_physmem[<physseg>].pgs to determine if
uvm_page_init() has completed, add a boolean uvm.page_init_done,
and test against that.  Use this same boolean (rather than
pmap_initialized) in pmap_growkernel() to determine if we are
being called via uvm_page_init() to grow the kernel address space.

This fixes a problem on some i386 configurations where pmap_init()
itself needed the kernel page table to be grown, and since
pmap_initialized was not yet set to TRUE, pmap_growkernel() was
choosing the wrong code path.

Fix tested by Havard Eidnes.
2000-04-02 20:39:14 +00:00
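The test described above reduces to something like this sketch inside pmap_growkernel():

	/*
	 * Sketch: distinguish the bootstrap call made from within
	 * uvm_page_init() from calls made during normal operation.
	 */
	if (uvm.page_init_done == FALSE) {
		/* bootstrapping: steal physical pages directly */
	} else {
		/* normal path: allocate pages via uvm_pagealloc() */
	}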
augustss 641df97d12 Remove more register declarations. 2000-03-30 12:31:50 +00:00
simonb 171910636e Delete redundant decl of aobj_pager - it's in <uvm/uvm_aobj.h>. 2000-03-30 02:49:55 +00:00
simonb b2a7dc176d Remove redundant decl for uvmspace_fork() - it's in <uvm/uvm_extern.h>. 2000-03-29 04:05:47 +00:00
simonb 0fd09c8496 Don't need to include <sys/conf.h> here. 2000-03-29 03:43:33 +00:00
kleink 7e35a43e67 In mmap(), bail out with EOVERFLOW when mapping a regular file and the file
offset plus mapping length cannot be represented in an off_t.
2000-03-28 18:45:19 +00:00
kleink a51ef2c3f9 Kill duplicate uvn_attach() prototype (public, already in uvm_vnode.h). 2000-03-27 16:58:23 +00:00
kleink 6e5b64c8a0 Merge parts of chs-ubc2 into the trunk:
Add a new type voff_t (defined as a synonym for off_t) to describe offsets
into uvm objects, and update the appropriate interfaces to use it, the
most visible effect being the ability to mmap() file offsets beyond
the range of a vaddr_t.

Originally by Chuck Silvers; blame me for problems caused by merging this
into non-UBC.
2000-03-26 20:54:45 +00:00
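The new type itself is one line (the message describes it as a synonym for off_t):

	/* Offset into a uvm object: wide enough for large file offsets. */
	typedef off_t voff_t;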
kleink 08025fbc20 Kill duplicate udv_attach() prototype; it's a public interface, and declared
in uvm_device.h.
2000-03-26 20:46:59 +00:00
soren 95054da1a1 Fix doubled 'the's in comments. 2000-03-13 23:52:25 +00:00
thorpej 9671588a30 Allocate the page buckets out of kernel_map, not kmem_map. Saves 16
or so kmem_map pages on a 32MB SPARCstation 2.
2000-02-13 03:34:40 +00:00
thorpej eb9cbbe294 Add some very simple code to auto-size the kmem_map. We take the
amount of physical memory, divide it by 4, and then allow machine
dependent code to place upper and lower bounds on the size.  Export
the computed value to userspace via the new "vm.nkmempages" sysctl.

NKMEMCLUSTERS is now deprecated and will generate an error if you
attempt to use it.  The new option, should you choose to use it,
is called NKMEMPAGES, and two new options NKMEMPAGES_MIN and
NKMEMPAGES_MAX allow the user to configure the bounds in the kernel
config file.
2000-02-11 19:22:52 +00:00
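A sketch of the sizing rule described above (the option names come from the message; the clamping order is illustrative):

	/* Default: one quarter of physical memory, in pages. */
	nkmempages = physmem / 4;

	/* Let machine-dependent configuration bound the result. */
	#ifdef NKMEMPAGES_MIN
	if (nkmempages < NKMEMPAGES_MIN)
		nkmempages = NKMEMPAGES_MIN;
	#endif
	#ifdef NKMEMPAGES_MAX
	if (nkmempages > NKMEMPAGES_MAX)
		nkmempages = NKMEMPAGES_MAX;
	#endif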
thorpej fe551f0e64 Fix a bug in disksort_*() which caused non-optimal ordering when multiple
active partitions were on a single spindle.  Add a b_rawblkno member to
struct buf which contains the non-partition-relative block number to sort
by.
2000-02-07 20:16:47 +00:00
chs 26c744c85b remove a debug printf that has outlived its usefulness. 2000-01-28 08:02:48 +00:00
thorpej 0b0aecffd6 Update for sys/buf.h/disksort_*() changes. 2000-01-21 23:43:10 +00:00
chs 16f0ca3612 add support for ``swapctl -d'' (removing swap space).
improve handling of i/o errors in swap space.

reviewed by:  Chuck Cranor
2000-01-11 06:57:49 +00:00
wrstuden 56f2ef9f29 Revert rev 1.28 -> 1.29. The VOP_CLOSE call was happening with the vnode
already locked, so don't lock it here.
2000-01-04 21:37:54 +00:00
eeh c0ac678704 I should have made uvm_page_physload() take paddr_t's instead of vaddr_t's.
Also, add uvm_coredump32().
1999-12-30 16:09:47 +00:00
thorpej 7287dd22c6 Remove a piece of code introduced in rev 1.36 that I didn't intend to
commit.
1999-12-11 05:38:41 +00:00
fvdl 5277c5e448 CL* clearout 1999-12-04 23:14:40 +00:00
drochner 38e73c0c99 in uvm_page_physget(), try the vm_physmem[] chunks in the order of their
"free_list" attributes, to save DMA memory
1999-12-01 16:08:32 +00:00
thorpej 63494b0b50 Avoid an integer overflow on systems w/ more than 2G of RAM. 1999-11-30 18:34:23 +00:00
drochner b1f2453dee add a diagnostic panic to catch illegal memory ranges passed to
uvm_page_physload()
1999-11-24 18:28:49 +00:00
fvdl 0b1963121a Add Kirk McKusick's soft updates code to the trunk. Not enabled by
default, as the copyright on the main file (ffs_softdep.c) is such
that it has been put into gnusrc. options SOFTDEP will pull this
in. This code also contains the trickle syncer.

Bump version number to 1.4O
1999-11-15 18:49:07 +00:00
thorpej 1da427a80a Change the pmap_enter() API slightly; pmap_enter() now returns an error
value (KERN_SUCCESS or KERN_RESOURCE_SHORTAGE) indicating if it succeeded
or failed.  Change the `wired' and `access_type' arguments to a single
`flags' argument, which includes the access type, and flags:

	PMAP_WIRED	the old `wired' boolean
	PMAP_CANFAIL	pmap_enter() is allowed to fail

If PMAP_CANFAIL is not specified, the pmap should behave as it always
has in the face of a drastic resource shortage: fall over dead.

Change the fault handler to deal with failure (which indicates resource
shortage) by unlocking everything, waiting for the pagedaemon to free
more memory, then retrying the fault.
1999-11-13 00:24:38 +00:00
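Under the new API a fault handler's call looks roughly like this (a sketch; the access type is folded into the flags word as the message describes):

	int error;

	error = pmap_enter(pmap, va, pa, VM_PROT_READ | VM_PROT_WRITE,
	    VM_PROT_READ | VM_PROT_WRITE | PMAP_CANFAIL);
	if (error != KERN_SUCCESS) {
		/*
		 * KERN_RESOURCE_SHORTAGE: unlock everything, wait for
		 * the pagedaemon to free memory, then retry the fault.
		 */
	}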
thorpej a25b1ab916 Always pass all arguments to uvm_sleep(). 1999-11-13 00:21:17 +00:00
thorpej 8e930a51fe Const poison uvm_wait(). 1999-11-04 21:51:42 +00:00
ross 0f2e70dfa4 Patch from chuq for uvm r/w map oscillation bug.
Fixes the XalphaNetBSD slowdown.
1999-10-24 16:29:23 +00:00
chs b16ae5a8a5 put various debugging printfs under #ifdef DEBUG. 1999-10-19 16:04:45 +00:00
wrstuden e682a080e9 In spec_close(), if we're not doing a non-blocking close and VXLOCK is
not set, unlock the vnode before calling the device's close routine and
relock it after it returns. tty close routines will sleep waiting for
buffers to drain, which often won't happen, as the other side needs
to grab the vnode lock first.

Make all unmount routines lock the device vnode before calling VOP_CLOSE().
1999-10-16 23:53:26 +00:00
chs f3a668ed84 eliminate the PMAP_NEW option by making it required for all ports.
ports which previously had no support for PMAP_NEW now implement
the pmap_k* interfaces as wrappers around the non-k versions.
1999-09-12 01:16:55 +00:00
thorpej 23e83a7ac7 When handling the MADV_FREE case, if the amap or aobj has more than
one reference, go through the deactivate path; the page may actually
be in use by another process.

Fixes kern/8239.
1999-08-21 02:19:05 +00:00
ross 7c367407aa In uvm_anon_init() and uvm_anon_add(), initialize the ref count lock. 1999-08-14 06:25:48 +00:00
thorpej 050aaac26e Fix the error recovery in uvm_map_pageable_all(). 1999-08-03 00:38:33 +00:00
thorpej ea8fb3e04a Turn the proclist lock into a read/write spinlock. Update proclist locking
calls to reflect this.  Also, block statclock rather than softclock
in the proclist locking functions, to address a problem reported on
current-users by Sean Doran.
1999-07-25 06:30:33 +00:00
thorpej 3f176180d5 Garbage collect thread_sleep()/thread_wakeup() left over from the old
Mach VM code.  Also nuke iprintf(), which was no longer used anywhere.

Add proclist locking where appropriate.
1999-07-22 22:58:38 +00:00
thorpej 2c668fb0d4 0 -> FALSE in a few places. 1999-07-22 21:27:32 +00:00
cgd 4eb46531af make sure 'wide' fault handling is actually done only once per fault.
('narrow' was mistakenly set to FALSE instead of TRUE.)  Committed after
discussion with chuq.
1999-07-19 19:02:22 +00:00
thorpej 5310e69363 Fix PR #8023 from Bernd Ernesti: when MADV_FREE'ing a region which spanned
more than one VM map entry, a typo caused amap_unadd() to attempt to
remove anons from the wrong amap.  Fix that typo.
1999-07-19 17:45:23 +00:00
chs a8f10f9e37 allow uvm_km_alloc_poolpage1() to use kernel-reserve pages. 1999-07-18 22:55:30 +00:00
thorpej 5ee6f3960d Rework uvm_map_protect():
- Fix some locking bugs; a couple of places would return an error condition
  without unlocking the map.
- Deal with maps marked WIREFUTURE; if making an entry VM_PROT_NONE ->
  anything else, and it is not already marked as wired, wire it.
1999-07-18 00:41:56 +00:00
thorpej b6f435026c Add a set of "lockflags", which can control the locking behavior
of some functions.  Use these flags in uvm_map_pageable() to determine
if the map is locked on entry (replaces an already present boolean_t
argument `islocked'), and if the function should return with the map
still locked.
1999-07-17 21:35:49 +00:00
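A hedged sketch of a call under the new scheme (the UVM_LK_* flag names and argument order are assumptions based on the message):

	/*
	 * The map is already locked on entry; ask uvm_map_pageable()
	 * to leave it locked on exit as well.
	 */
	error = uvm_map_pageable(map, start, end, FALSE /* wire */,
	    UVM_LK_ENTER | UVM_LK_EXIT);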
thorpej fcc55e7687 Garbage-collect uvm_km_get(); nothing actually uses it. 1999-07-17 06:41:36 +00:00
thorpej a448b59581 Implement uao_flush(). This is pretty much identical to the "amap flush"
code in uvm_map_clean().
1999-07-17 06:06:36 +00:00
thorpej 8e06a75bcb Fix an operator precedence error which caused msync(2) to fail to pass
the PGO_CLEANIT flag to the object pagers.  Fixes PR #7978, from
Matthias Pfaller.
1999-07-14 21:06:30 +00:00
kleink e79a283e47 XSH5: change function signature to `void *sbrk(intptr_t)'. 1999-07-12 21:55:19 +00:00
thorpej ff05773b4a Back out the change I made yesterday. It seems to cause some trouble
for some folks.
1999-07-11 17:47:12 +00:00
thorpej a0555db3e0 Simplify uvm_fault_unwire_locked() a little. 1999-07-10 21:46:56 +00:00
thorpej c0389be5da Make a comment reflect reality. 1999-07-10 20:40:23 +00:00
thorpej d75fb0f6b0 Slightly better test for "object with no real pages": test for a NULL
pgo_releasepg rather than checking whether the pager is the device pager. 1999-07-10 20:29:24 +00:00
1999-07-10 20:29:24 +00:00
thorpej 3ebbe095e0 Change the pmap_extract() interface to:
boolean_t pmap_extract(pmap_t, vaddr_t, paddr_t *);
This makes it possible for the pmap to map physical address 0.
1999-07-08 18:05:21 +00:00
thorpej 6885fbe3d1 Teeny bit of style policing. 1999-07-08 01:02:44 +00:00
thorpej ec74ea9486 Correct a comment. 1999-07-08 00:52:45 +00:00
thorpej 4ef1f3670d Fix a thinko which could cause a NULL pointer deref, in the PGO_FREE
case.
1999-07-07 21:51:35 +00:00
thorpej 62dcdc109b In the PGO_FREE case of uvm_map_clean()'s amap cleaning, skip wired
pages.

XXX This should be handled better in the future, probably by marking the
XXX page as released, and making uvm_pageunwire() free the page when
XXX the wire count on a released page reaches zero.
1999-07-07 21:04:22 +00:00
thorpej 4e398a6ded Add some more meat to madvise(2):
* Implement MADV_DONTNEED: deactivate pages in the specified range,
  semantics similar to Solaris's MADV_DONTNEED.
* Add MADV_FREE: free pages and swap resources associated with the
  specified range, causing the range to be reloaded from backing
  store (vnodes) or zero-fill (anonymous), semantics like FreeBSD's
  MADV_FREE and like Digital UNIX's MADV_DONTNEED (isn't it SO GREAT
  that madvise(2) isn't standardized!?)

As part of this, move the non-map-modifying advice handling out of
uvm_map_advise(), and into sys_madvise().

As another part, implement general amap cleaning in uvm_map_clean(), and
change uvm_map_clean() to only push dirty pages to disk if PGO_CLEANIT
is set in its flags (and update sys___msync13() accordingly).  XXX Add
a patchable global "amap_clean_works", defaulting to 1, which can disable
the amap cleaning code, just in case problems are unearthed; this gives
a developer/user a quick way to recover and send a bug report (e.g. boot
into DDB and change the value).

XXX Still need to implement a real uao_flush().

XXX Need to update the manual page.

With these changes, rebuilding libc will automatically cause the new
malloc(3) to use MADV_FREE to actually release pages and swap resources
when it decides that can be done.
1999-07-07 06:02:21 +00:00
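From userland, the new advice is exercised as in this minimal runnable sketch (the mapping size is arbitrary):

	#include <sys/mman.h>

	#include <err.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int
	main(void)
	{
		size_t len = 16 * 1024;
		char *p;

		p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_ANON | MAP_PRIVATE, -1, 0);
		if (p == MAP_FAILED)
			err(1, "mmap");
		memset(p, 0xa5, len);

		/*
		 * Declare the contents disposable: pages and swap may
		 * be freed, and the next touch is zero-fill rather
		 * than pagein.
		 */
		if (madvise(p, len, MADV_FREE) == -1)
			err(1, "madvise");
		return 0;
	}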
thorpej f631c1adae Update a comment in uao_flush(). 1999-07-07 05:32:26 +00:00
thorpej 121fe0bc26 Don't bother returning the "slot" number from amap_add():
* Nothing currently uses this return value.
* It's arguably an abstraction violation.

Fix amap_unadd()'s API to be consistent w/ amap_add()'s: rather than
take a vm_amap * and a slot number, take a vm_aref * and an offset.

It's now actually possible to use amap_unadd() to remove an anon from
an amap.
1999-07-07 05:31:40 +00:00
cgd c1b7b40399 from the comment added to the code:
> XXX (in)sanity check.  We don't do proper datasize checking
> XXX for anonymous (or private writable) mmap().  However,
> XXX we know that if we're trying to allocate more than the amount
> XXX remaining under our current data size limit, _that_ should
> XXX be disallowed.
This is one link on the chain of lossage known as PR#7897.  It's
definitely not the right fix, but it's better than nothing.
1999-07-06 02:31:05 +00:00
cgd 5cc6a54251 fix allocation handling bugs in amap_alloc1(). if the first or second
sub-structure malloc() failed, it was quite likely that the function
would return success incorrectly.  This is the direct cause of the bug
reported in PR#7897.  (Thanks to chs for helping to track it down.)
1999-07-06 02:15:53 +00:00
thorpej 3c83723113 Bring in additional uvmexp members from chs-ubc2, so that VM stats can
be read no matter which kernel you're running.
1999-07-02 23:20:58 +00:00
thorpej 11c67d01a5 Fix a corner case locking error, which could lead to map corruption in
SMP environments.  See comments in <vm/vm_map.h> for details.
1999-07-01 20:07:05 +00:00
thorpej c859e43fbb Fix typo. From Bill Studenmund. 1999-07-01 18:40:39 +00:00
thorpej abb48c5b71 Protect prototypes, certain macros, and inlines from userland. 1999-06-21 17:25:11 +00:00
thorpej 72fcd1784e Fix a typo. 1999-06-19 00:11:17 +00:00
thorpej 9e9f068f43 Add the guts of mlockall(MCL_FUTURE). This requires that a process's
"memlock" resource limit be passed to uvm_mmap().  Update all calls accordingly.
1999-06-18 05:13:45 +00:00
thorpej 01ac9b6529 In sys_mmap():
- rather than treating MAP_COPY like MAP_PRIVATE by sheer virtue of it not
  being MAP_SHARED, actually convert the MAP_COPY flag into MAP_PRIVATE.
- return EINVAL if MAP_SHARED and MAP_PRIVATE are both included in flags.
1999-06-17 21:05:19 +00:00
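The normalization described above is two small steps (a sketch of the logic, not the verbatim kernel code):

	/* Treat MAP_COPY as an explicit MAP_PRIVATE. */
	if (flags & MAP_COPY)
		flags = (flags & ~MAP_COPY) | MAP_PRIVATE;

	/* The two sharing modes are mutually exclusive. */
	if ((flags & MAP_SHARED) && (flags & MAP_PRIVATE))
		return (EINVAL);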
thorpej 0288ffb53a pmap_change_wiring() -> pmap_unwire(). 1999-06-17 19:23:20 +00:00
thorpej f5a527bb4e Remove pmap_pageable(); no pmap implements it, and it is not really useful,
because pmap_enter()/pmap_change_wiring() (soon to be pmap_unwire())
communicate the information in greater detail.
1999-06-17 18:21:21 +00:00
thorpej 12347b2657 Make uvm_vslock() return the error code from uvm_fault_wire(). All places
which use uvm_vslock() should now test the return value.  If it's not
KERN_SUCCESS, wiring the pages failed, so the operation which is using
uvm_vslock() should error out.

XXX We currently just EFAULT a failed uvm_vslock().  We may want to do
more about translating error codes in the future.
1999-06-17 15:47:22 +00:00
thorpej 1f97ad987f In uvm_useracc(), make sure we have a read lock on the map before
calling uvm_map_checkprot().
1999-06-17 05:57:33 +00:00
thorpej f274deb90a The i386 and pc532 pmaps are officially fixed. 1999-06-17 00:24:10 +00:00
thorpej d1d9b366cd When unwiring a range in uvm_fault_unwire_locked(), don't call
pmap_change_wiring(...,FALSE) unless the map entry claims the address
is unwired.  This fixes the following scenario, as described on
tech-kern@netbsd.org on Wed 6/16/1999 12:25:23:

	- User mlock(2)'s a buffer, to guarantee it will never become
	  non-resident while he is using it.

	- User then does physio to that buffer.  Physio calls uvm_vslock()
	  to lock down the pages and ensure that page faults do not happen
	  while the I/O is in progress (possibly in interrupt context).

	- Physio does the I/O.

	- Physio calls uvm_vsunlock().  This calls uvm_fault_unwire().

	  >>> HERE IS WHERE THE PROBLEM OCCURS <<<

	  uvm_fault_unwire() calls pmap_change_wiring(..., FALSE),
	  which now gives the pmap free rein to recycle the mapping
	  information for that page, which is illegal; the mapping is
	  still wired (due to the mlock(2)), but now access of the
	  page could cause a non-protection page fault (disallowed).

	  NOTE: This could eventually lead to a panic when the user
	  subsequently munlock(2)'s the buffer and the mapping info
	  has been recycled for use by another mapping!
1999-06-16 23:02:40 +00:00
thorpej b861180119 * Rename uvm_fault_unwire() to uvm_fault_unwire_locked(), and require that
the map be at least read-locked to call this function.  This requirement
  will be taken advantage of in a future commit.
* Write a uvm_fault_unwire() wrapper which read-locks the map and calls
  uvm_fault_unwire_locked().
* Update the comments describing the locking constraints of uvm_fault_wire()
  and uvm_fault_unwire().
1999-06-16 22:11:23 +00:00
thorpej 42c671ffba Modify uvm_map_pageable() and uvm_map_pageable_all() to follow POSIX 1003.1b
semantics.  That is, regardless of the number of mlock/mlockall calls,
an munlock/munlockall actually unlocks the region (i.e. sets wiring count
to 0).

Add a comment describing why uvm_map_pageable() should not be used for
transient page wirings (e.g. for physio) -- note, it's currently only
(ab)used in this way by a few pieces of code which are known to be
broken, i.e. the Amiga and Atari pmaps, and i386 and pc532 if PMAP_NEW is
not used.  The i386 GDT code uses uvm_map_pageable(), but in a safe
way, and could be trivially converted to use uvm_fault_wire() instead.
1999-06-16 19:34:24 +00:00
thorpej 23c6eb95d3 Remove an incorrect-and-no-longer-relevant comment. 1999-06-16 18:43:28 +00:00
minoura ff8fb3ef82 Remove extra ]. 1999-06-16 17:25:39 +00:00