Commit Graph

402 Commits

Author SHA1 Message Date
thorpej 37247109d1 When considering a page for deactivation, check to see if the
page has been referenced since the last time it was considered.
If it was, don't deactivate the page.
2001-01-25 00:24:48 +00:00
mycroft 91a4c18e32 Put back the pmap_is_referenced() check from the original UVM code in the
inactive list scans.  Without this, the referenced bit was essentially ignored.
2001-01-25 00:10:03 +00:00
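A minimal sketch of the check these two changes restore, as it would sit in the pagedaemon's queue scans (locking and the surrounding loop omitted; this is an illustration, not the committed code):

    if (pmap_is_referenced(pg)) {
        /* touched since the last scan: keep it active */
        uvm_pageactivate(pg);
    } else {
        /* clear the hardware bit so the next pass sees fresh state */
        pmap_clear_reference(pg);
        uvm_pagedeactivate(pg);
    }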
thorpej ad7259d7c6 Change uvm_analloc() to return a locked anon, update all callers,
and fix an anon locking protocol error in uvm_loanzero().
2001-01-23 02:27:39 +00:00
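The caller-side pattern implied by the new contract looks roughly like this (field names as in that era's uvm_anon.h; a sketch only):

    struct vm_anon *anon;

    anon = uvm_analloc();           /* now returned with an_lock held */
    if (anon == NULL)
        return ENOMEM;
    /* ... set up anon->u.an_page etc. while still locked ... */
    simple_unlock(&anon->an_lock);  /* caller drops the lock when done */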
thorpej 13759f5310 Sprinkle some assertions:
amap_free(): Assert that the amap is locked.
amap_share_protect(): Assert that the amap is locked.
amap_wipeout(): Assert that the amap is locked.
uvm_anfree(): Assert that the anon has a reference count of 0 and is
              not locked.
uvm_anon_lockloanpg(): Assert that the anon is locked.
anon_pagein(): Assert that the anon is locked.
uvmfault_anonget(): Assert that the anon is locked.
uvm_pagealloc_strat(): Assert that the uobj or the anon is locked

And fix the problems these have uncovered:
amap_cow_now(): Lock the new anon after allocating it, and unref and
                unlock it (rather than lock!) before freeing it in case
                of an error condition.  This should fix a problem reported
		by Dan Carosone using cdrecord on an i386 MP kernel.
uvm_fault(): Case1B -- Lock the new anon after allocating it, and unlock
             it later when we unlock the old anon.
	     Case2 -- Lock the new anon after allocating it, and unlock
	     it later by passing it to uvmfault_unlockall() (we set anon
	     to NULL if we're not doing a promote fault).
2001-01-23 01:56:16 +00:00
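In code, the sprinkled assertions are one-liners on entry; a sketch, assuming the LOCK_ASSERT/simple_lock_held lockdebug facilities of the time:

    /* uvm_anfree(): anon must be unreferenced and unlocked */
    KASSERT(anon->an_ref == 0);
    LOCK_ASSERT(!simple_lock_held(&anon->an_lock));

    /* uvmfault_anonget(), anon_pagein(), ...: anon must be locked */
    LOCK_ASSERT(simple_lock_held(&anon->an_lock));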
thorpej f4395a4eae splimp() -> splvm() 2001-01-14 02:10:01 +00:00
pk f134ba4486 atop(): cast argument to `paddr_t' (instead of `u_long') to avoid
truncating the address.
2001-01-09 13:55:20 +00:00
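The fix amounts to widening the cast inside the macro, roughly (exact original macro body and the PAGE_SHIFT name are assumptions):

    /* before: addresses wider than u_long were truncated */
    #define atop(x)    ((u_long)(x) >> PAGE_SHIFT)
    /* after: */
    #define atop(x)    (((paddr_t)(x)) >> PAGE_SHIFT)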
chs f0ff6fc897 in uvn_flush(), when PGO_SYNCIO is specified then we should wait for
pending i/os to complete before returning even if PGO_CLEANIT is not
specified.  this fixes two races:

 (1) NFS write rpcs vs. setattr operations which truncate the file.
     if the truncate doesn't wait for pending writes to complete then
     a later write rpc completion can undo the effect of the truncate.
     this problem has been reported by several people.

 (2) write i/os in disk-based filesystem vs. the disk block being
     freed by a truncation, allocated to a new file, and written
     again with different data.  if the disk driver reorders the requests
     and does the second i/o first, the old data will clobber the new,
     corrupting the new file.  I haven't heard of anyone experiencing
     this problem yet, but it's fixed now anyway.
2001-01-08 06:21:13 +00:00
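A sketch of the wait this adds to uvn_flush(), using the usual vnode pending-output idiom (simplified; treat the exact placement and wmesg as assumptions):

    if (flags & PGO_SYNCIO) {
        s = splbio();
        while (vp->v_numoutput != 0) {
            vp->v_flag |= VBWAIT;
            tsleep(&vp->v_numoutput, PRIBIO + 1, "uvnflush", 0);
        }
        splx(s);
    }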
thorpej 4d4b2b5626 Nevermind that it's silly to include PROT_EXEC even if a vnode
doesn't have the exec bit set, we need to have PROT_EXEC set
in order for some expected mmap/mprotect behavior to work, so
do the last bit slightly differently: if udv_attach() fails, and
the protection (NOT maxprot) doesn't include PROT_EXEC, then clear
PROT_EXEC from maxprot and try udv_attach() again.

Sigh, mmap really needs to be rototilled.
2001-01-08 01:35:03 +00:00
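A sketch of the retry as it would sit in uvm_mmap() (the udv_attach() argument list shown here is an assumption):

    uobj = udv_attach((void *)&vp->v_rdev, maxprot, foff, size);
    if (uobj == NULL && (prot & VM_PROT_EXECUTE) == 0) {
        /* caller never asked for exec: strip it and retry */
        maxprot &= ~VM_PROT_EXECUTE;
        uobj = udv_attach((void *)&vp->v_rdev, maxprot, foff, size);
    }
    if (uobj == NULL)
        return (EINVAL);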
thorpej 781516b080 Only include PROT_EXEC in maxprot if the user specified PROT_EXEC
in the mmap() call.  maxprot is used to create device mappings,
and always including PROT_EXEC causes the mapping to fail on the Alpha
when mapping a non-RAM offset of /dev/mem (which may be sparse, so
instruction fetch from there is disallowed).
2001-01-07 06:16:46 +00:00
enami f306f72978 Use cast where appropriate to avoid integer overflow. 2001-01-04 06:07:18 +00:00
chs 1e651a1688 remove some more leftovers from Mach. 2000-12-28 08:24:55 +00:00
chs 89b005fc27 when we fail to allocate anons to represent new swap space,
just return an error rather than panicking.
2000-12-27 09:17:04 +00:00
chs 910b4f2e20 fix some types so that files larger than 4GB work. 2000-12-27 09:01:45 +00:00
chs de569051ad VOP_GETPAGES() returns an E* error code, not a VM_PAGER_* error code. 2000-12-27 04:44:42 +00:00
enami 6ff137de16 Place the name of the extent in struct swapdev instead of dynamically
allocating it.
2000-12-23 12:13:05 +00:00
enami 4e59adc1bb s/UBC_WINSIZE/ubc_winsize/g except the variable initialization. 2000-12-21 03:37:59 +00:00
chs fc03073896 expose the tunables ubc_nwins and ubc_winsize in uvm_param.h.
add the space used by UBC mappings to the initial PTE calculations
for pmaps that do that (mips and alpha).
2000-12-21 00:52:01 +00:00
chs 34a059b354 in uvn_flush(), don't deactivate busy pages. 2000-12-16 06:17:09 +00:00
chs cf25b3fa04 continue processing the inactive queue past the free target when
we're enforcing the limit on the number of vnode pages.
2000-12-13 17:03:32 +00:00
enami 4625dcde2e Use a single const char array instead of over 200 string constants. 2000-12-13 08:06:11 +00:00
chs 837f5c9bd6 we don't need VM_PROT_EXECUTE for UBC mappings. 2000-12-10 19:28:09 +00:00
chs cae7ac2e3a in uvm_pagermapin(), for now, don't pass the flag to pmap_enter()
which presets the page modified bit if the page is already initialized.
we don't actually want to modify such pages.
2000-12-09 23:26:27 +00:00
chs a8609aaac8 in uvn_findpage(), only increment the counter of vnode pages
if we succeed in allocating a page.

from Lars Heidieker <lars@heidieker.de> in PR 11636.
2000-12-06 03:37:30 +00:00
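The fix is simply moving the increment below the allocation check, roughly (counter name taken from the uvmexp change further down this log):

    pg = uvm_pagealloc(uobj, offset, NULL, 0);
    if (pg == NULL)
        return (0);          /* nothing allocated, nothing counted */
    uvmexp.vnodepages++;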
chs eeabe3f90d make sure that pages are on a paging queue before unlocking them. 2000-12-01 09:54:42 +00:00
chs 024f8bed4a add new uvmexp fields for uvmexp_print(). 2000-12-01 09:48:56 +00:00
simonb 33999a4224 Move uvm_pgcnt_vnode and uvm_pgcnt_anon into uvmexp (as vnodepages and
anonpages), and add vtextpages which is currently unused but will be
used to track the number of pages used by vtext vnodes.
2000-11-30 11:04:43 +00:00
simonb 3c1b8a2b35 Add a vm.uvmexp2 sysctl that uses an ABI-safe 'struct uvmexp_sysctl'. 2000-11-29 09:52:18 +00:00
chs e9037d16c5 allow building without SOFTDEP by adding the pageiodone hook to bio_ops. 2000-11-27 18:26:38 +00:00
chs aeda8d3b77 Initial integration of the Unified Buffer Cache project. 2000-11-27 08:39:39 +00:00
chs c29a1b4461 allow ports to override PAGER_MAP_SIZE in machine/vmparam.h.
some ports (such as arm32) don't have enough KVA for the
increased default size once the UBC mapping is also present.
2000-11-27 08:19:50 +00:00
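The override uses the usual preprocessor pattern, sketched here (the 4MB figure is only an illustrative value, not the arm32 setting):

    /* machine/vmparam.h on a KVA-starved port, e.g.: */
    #define PAGER_MAP_SIZE  (4 * 1024 * 1024)

    /* uvm_pager.h keeps the MI default only if the port said nothing: */
    #ifndef PAGER_MAP_SIZE
    #define PAGER_MAP_SIZE  (16 * 1024 * 1024)
    #endif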
chs 1ec37ad27a use queue.h macros and KASSERT(). 2000-11-27 07:47:42 +00:00
nisimura 10571faa84 Introduce uvm_km_valloc_align() and use it to grab the process's USPACE
aligned on a USPACE boundary in kernel virtual address space.  It's beneficial
for the MIPS R4000's paired TLB entry design.
2000-11-27 04:36:40 +00:00
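A sketch of the intended call (the signature is assumed to parallel uvm_km_valloc() with an extra alignment argument):

    vaddr_t uaddr;

    /* USPACE-aligned KVA so the R4000 can cover the u-area with one
       paired TLB entry */
    uaddr = uvm_km_valloc_align(kernel_map, USPACE, USPACE);
    if (uaddr == 0)
        panic("uarea: out of kernel VA");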
chs 2ed28d2c7a lots of cleanup:
use queue.h macros and KASSERT().
address amap offsets in pages instead of bytes.
make amap_ref() and amap_unref() take an amap, offset and length
  instead of a vm_map_entry_t.
improve whitespace and comments.
2000-11-25 06:27:59 +00:00
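The reworked prototypes would look roughly like this (the trailing flags argument is an assumption):

    void amap_ref(struct vm_amap *amap, vaddr_t offset, vsize_t len, int flags);
    void amap_unref(struct vm_amap *amap, vaddr_t offset, vsize_t len, int flags);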
soren 2a6c823e89 Typo in comment. 2000-11-24 23:30:01 +00:00
chs b5142d6841 increase PAGER_MAP_SIZE to 16MB and move it to uvm_pager.h
since the alpha and mips pmaps use it.
2000-11-24 22:41:38 +00:00
chs f9fb6f5a55 g/c unused pager ops "asyncget" and "aiodone". 2000-11-24 20:34:01 +00:00
chs ccbcd7c873 use queue.h macros and other misc cleanup. 2000-11-24 18:54:31 +00:00
chs 55a751c9d5 add ddb commands "show uvmexp" and "show ncache".
the former used to be "call uvm_dump", the latter is new.
2000-11-24 07:25:50 +00:00
chs 0a54af033a cleanup: use queue.h macros and KASSERT(). 2000-11-24 07:07:27 +00:00
mrg 45d83e5996 add SWAP_GETDUMPDEV command support. 2000-11-17 11:39:39 +00:00
chs 1fd1a318cb in swap_off(), reverse the order of vrele() and VOP_CLOSE() so that
devices will actually be notified if this is the last close.
this allows raidframe swap devices to be marked clean.
also, move the corresponding vref() into swap_on() for symmetry
and improve some comments.
2000-11-13 14:50:55 +00:00
christos 413d7641a1 Give a hint to the user on why we failed. 2000-11-09 19:15:28 +00:00
ad 642267bcc7 Update for hashinit() change. 2000-11-08 14:28:12 +00:00
thorpej 0a2fa5320b Back out rev. 1.83 -- it's causing problems with some pmap
implementations, so we'll have to spend a little more time
working on the problem.
2000-10-16 23:17:54 +00:00
thorpej 76589fafd4 - uvmspace_share(): If p2 has a vmspace already, make sure to deactivate
it and free it as appropriate.  Activate p2's new address space once
  it references p1's.
- uvm_fork(): Make sure the child's vmspace is NULL before calling
  uvmspace_share() (the child doesn't have one already in this case).

These changes do not change the behavior for the current use of
uvmspace_share() (vfork(2)), but make it possible for an already
running process (such as a kernel thread) to properly attach to
another process's address space.
2000-10-11 17:27:58 +00:00
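A sketch of the described behavior (reference-count locking omitted; not the committed code):

    void
    uvmspace_share(struct proc *p1, struct proc *p2)
    {
        struct vmspace *ovm = p2->p_vmspace;

        if (ovm != NULL) {
            pmap_deactivate(p2);     /* stop using the old space */
            uvmspace_free(ovm);      /* and drop the reference to it */
        }
        p2->p_vmspace = p1->p_vmspace;
        p1->p_vmspace->vm_refcnt++;
        pmap_activate(p2);           /* now running on p1's address space */
    }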
thorpej 47a2016cdc - Change SAVE_HINT() to take a "check" value. This value is compared
to the contents of the hint in the map, and the hint saved in the
  map only if the two values match.  When an unconditional save is
  required, the "check" value passed should be map->hint (and the
  compiler will optimize the test away).  When deleting a map entry,
  the new SAVE_HINT() will only change the hint if the entry being
  deleted was the hint value (thus preserving any meaningful hint
  that may have been there previously, rather than stomping on it).
- Add a missing hint update when deleting the map entry in
  uvm_map_entry_unlink().  This is the fix for kern/11125, from
  ITOH Yasufumi <itohy@netbsd.org>.
2000-10-11 17:21:11 +00:00
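A sketch of the reworked macro and the two call patterns described (hint locking omitted):

    #define SAVE_HINT(map, check, value)                        \
    do {                                                        \
        if ((map)->hint == (check))                             \
            (map)->hint = (value);                              \
    } while (/*CONSTCOND*/ 0)

    /* unconditional save: the check value is the current hint */
    SAVE_HINT(map, map->hint, new_entry);

    /* uvm_map_entry_unlink(): only displace the hint if it was this entry */
    SAVE_HINT(map, entry, entry->prev);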
mrg 4f75145ec1 s/vm/uvm/ in a bunch of error messages. 2000-10-05 00:37:50 +00:00
mrg dd521daa8b clean up a comment. 2000-10-03 20:50:49 +00:00
eeh 1ecf6779be Add support for variable end of user stacks needed to support COMPAT_NETBSD32:
`struct vmspace' has a new field `vm_minsaddr' which is the user TOS.

	PS_STRINGS is deprecated in favor of curproc->p_pstr which is derived
	from `vm_minsaddr'.

	Bump the kernel version number.
2000-09-28 19:05:06 +00:00
enami 9308eaef21 splstatclock is insufficient to protect run queues. Acquire scheduler
lock instead.
2000-09-23 00:43:10 +00:00
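A sketch of the change, assuming the SCHED_LOCK/SCHED_UNLOCK macros of the SMP scheduler-lock interface:

    int s;

    SCHED_LOCK(s);          /* was: s = splstatclock(); */
    /* ... examine or modify the run queues ... */
    SCHED_UNLOCK(s);        /* was: splx(s); */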