Commit Graph

133 Commits

Author SHA1 Message Date
yamt a8acd82f94 update a comment; malloc doesn't use uvm_kernacc anymore. 2005-02-08 08:22:37 +00:00
chs 7c203c91d4 reduce the size of user coredump files by not dumping regions of
the address space that have never been touched (such as much of the
virtual space allocated for pthread stacks).
2005-01-21 03:24:40 +00:00
yamt 5469c2b7c1 add assertions. 2004-05-12 20:09:50 +00:00
pk 9b96c17df2 Make uvm_uarea_free an inline function. 2004-05-02 13:04:57 +00:00
pk daff668b49 Use maxdmap and maxsmap instead of MAXDSIZ and MAXSSIZ. 2004-04-04 18:21:48 +00:00
junyoung 1e2b269ded - Nuke __P().
- Drop trailing spaces.
2004-03-24 07:50:48 +00:00
yamt 1e18e59746 - borrow vmspace0 in uvm_proc_exit instead of uvmspace_free.
the latter is not an appropriate place to do so, and it broke vfork.
- deactivate pmap before calling cpu_exit() to keep a balance of
  pmap_activate/deactivate.
2004-02-09 13:11:21 +00:00
yamt b81e2fa5c5 uvm_coredump_walkmap: use UVM_OBJ_IS_DEVICE macro. 2004-01-16 12:43:16 +00:00
jdolecek 089abdad44 Rearrange the process exit path to avoid the need to free resources from a
different process context ('reaper').

From within the exiting process context:
* deactivate pmap and free vmspace while we can still block
* introduce MD cpu_lwp_free() - this cleans all MD-specific context (such
  as FPU state), and is the last potentially blocking operation;
  all of cpu_wait(), and most of cpu_exit(), are now folded into cpu_lwp_free()
* the process is now immediately marked as a zombie and made available for pickup
  by the parent; the last remaining lwp continues the exit fully detached
* MI (rather than MD) code bumps uvmexp.swtch; cpu_exit() is now the same
  for both 'process' and 'lwp' exit

uvm_lwp_exit() is modified to never block; the u-area memory is now
always just linked to the list of available u-areas. Introduce (blocking)
uvm_uarea_drain(), which is called to release the excess u-area memory;
this is called by the parent within wait4(), or by the pagedaemon on memory shortage.
uvm_uarea_free() is now private function within uvm_glue.c.

MD process/lwp exit code now always calls lwp_exit2() immediately after
switching away from the exiting lwp.

g/c now unneeded routines and variables, including the reaper kernel thread
2004-01-04 11:33:29 +00:00
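A minimal sketch of the split described above; the free-list layout here is illustrative, not the actual NetBSD code:

    /*
     * uvm_uarea_free() runs in the exiting LWP's context, so it must
     * never block: it only links the u-area onto a free list.
     */
    static vaddr_t uarea_freelist;          /* illustrative list head */

    static void
    uvm_uarea_free(vaddr_t uaddr)           /* now private to uvm_glue.c */
    {
        *(vaddr_t *)uaddr = uarea_freelist; /* link through the memory itself */
        uarea_freelist = uaddr;
    }

    /*
     * uvm_uarea_drain() may block; the parent calls it from wait4() and
     * the pagedaemon calls it on memory shortage to release excess u-areas.
     */
    void uvm_uarea_drain(boolean_t empty);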
pk 70f20a1217 Replace the traditional buffer memory management -- based on fixed per buffer
virtual memory reservation and a private pool of memory pages -- by a scheme
based on memory pools.

This allows better utilization of memory because buffers can now be allocated
with a granularity finer than the system's native page size (useful for
filesystems with e.g. 1k or 2k fragment sizes).  It also avoids fragmentation
of virtual to physical memory mappings (due to the former fixed virtual
address reservation) resulting in better utilization of MMU resources on some
platforms.  Finally, the scheme is more flexible by allowing run-time decisions
on the amount of memory to be used for buffers.

On the other hand, the effectiveness of the LRU queue for buffer recycling
may be somewhat reduced compared to the traditional method since, due to the
nature of the pool based memory allocation, the actual least recently used
buffer may release its memory to a pool different from the one needed by a
newly allocated buffer. However, this effect will kick in only if the
system is under memory pressure.
2003-12-30 12:33:13 +00:00
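A hedged illustration of the idea using the pool(9) API; the pool name and fragment size are hypothetical:

    /* a pool of 1k buffer fragments: finer-grained than PAGE_SIZE */
    struct pool buf1k_pool;

    pool_init(&buf1k_pool, 1024, 0, 0, 0, "buf1k", NULL);

    void *frag = pool_get(&buf1k_pool, PR_WAITOK); /* may sleep for memory */
    /* ... hang frag off a struct buf ... */
    pool_put(&buf1k_pool, frag);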
chs e07f0b9362 eliminate uvm_useracc() in favor of checking the return value of
copyin() or copyout().

uvm_useracc() tells us whether the mapping permissions allow access to
the desired part of an address space, and many callers assume that
this is the same as knowing whether an attempt to access that part of
the address space will succeed.  however, access to user space can
fail for reasons other than insufficient permission, most notably that
paging in any non-resident data can fail due to i/o errors.  most of
the callers of uvm_useracc() make the above incorrect assumption.  the
rest are all misguided optimizations, which optimize for the case
where an operation will fail.  we'd rather optimize for operations
succeeding, in which case we should just attempt the access and handle
failures due to insufficient permissions the same way we handle i/o
errors.  since there appear to be no good uses of uvm_useracc(), we'll
just remove it.
2003-11-13 03:09:28 +00:00
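The replacement pattern, sketched with hypothetical variable names:

    /* before: a pre-flight check that cannot predict whether the
       access will actually succeed */
    if (!uvm_useracc((caddr_t)uaddr, len, B_READ))
        return (EFAULT);

    /* after: just attempt the access and propagate whatever it reports,
       covering permission failures and i/o errors during page-in alike */
    error = copyin(uaddr, kbuf, len);
    if (error)
        return (error);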
yamt 933834a7ae revert rev.1.70 as it was not needed.
uvm_map_lookup_entry() should handle addresses out of the map.
2003-11-03 04:39:11 +00:00
jdolecek 5e94c73334 kill unneeded SYSVSHM includes
use ANSI C function definition for uvm_lwp_exit()
2003-11-02 16:53:43 +00:00
yamt 922ad03e28 don't try to lookup addresses out of the map in uvm_coredump_walkmap(). 2003-11-01 10:43:27 +00:00
cl e30be76fce simplify tests:
The case where l_stat == LSONPROC and l_cpu == curcpu cannot happen
because the pagedaemon is the LWP on curcpu and the pagedaemon is a
kernel thread and the code is only used by the pagedaemon.

See also the updated patch in PR kern/23095, which I meant to check in
originally.
2003-10-24 13:07:33 +00:00
cl ed9c2d7075 don't uvm_swapout LWPs which are LSONPROC on another cpu.
uvm_swapout_threads will swap out LWPs which are running on another CPU:
- uvm_swapout_threads considers LWPs running on another CPU for swapout
  if their l_swtime is high
- uvm_swapout_threads considers LWPs on the runqueue for swapout if their
  l_swtime is high but these LWPs might be running by the time uvm_swapout
  is called

symptoms of failure: panic in setrunqueue

fixes PR kern/23095
2003-10-19 17:45:35 +00:00
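The added guard amounts to something like this sketch inside uvm_swapout_threads():

    /* never pick an LWP that is running on another CPU right now */
    if (l->l_stat == LSONPROC && l->l_cpu != curcpu())
        continue;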
scw 4355b16f71 In uvm_lwp_fork(), check if PMAP_UAREA() is defined and if so, invoke it
with the KVA of the newly-wired uarea.

This is useful on some architectures (e.g. xscale) where the uarea mapping
can be tweaked to use the mini-data cache instead of the main cache.
2003-10-13 20:43:03 +00:00
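A sketch of the hook site in uvm_lwp_fork(); the macro name is from the commit, the surrounding code is illustrative:

    #ifdef PMAP_UAREA
        /* let the pmap retune the uarea mapping, e.g. to use
           xscale's mini-data cache; uarea_kva is a hypothetical
           name for the KVA of the newly-wired uarea */
        PMAP_UAREA(uarea_kva);
    #endif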
fvdl d5aece61d6 Back out the lwp/ktrace changes. They contained a lot of collateral damage,
and need to be examined and discussed more.
2003-06-29 22:28:00 +00:00
darrenr 960df3c8d1 Pass lwp pointers throughout the kernel, as required, so that the lwpid can
be inserted into ktrace records.  The general change has been to replace
"struct proc *" with "struct lwp *" in various function prototypes, pass
the lwp through and use l_proc to get the process pointer when needed.

Bump the kernel rev up to 1.6V
2003-06-28 14:20:43 +00:00
atatat a57bcda26a Rework the way in which the map is traversed when dumping core. Now
we read-lock the map and call uvm_map_lookup_entry() instead of simply
walking from the header to the next and to the next, etc.

Dumping from sparsely populated amaps could cause faults that would
result in amaps being split, which (in turn) resulted in the core
dumping routines dumping some regions of memory twice.  This made the
core file too large, the headers inconsistent, gdb unable to work
properly, and so on.

Addresses PR 19260.
2003-02-14 16:25:12 +00:00
yamt 41ad61ee76 make KSTACK_CHECK_* compile after sa merge. 2003-01-22 12:52:14 +00:00
thorpej b78f59b443 Merge the nathanw_sa branch. 2003-01-18 08:51:40 +00:00
chs 4b2625143d change uvm_uarea_alloc() to indicate whether the returned uarea is already
backed by physical pages (ie. because it reused a previously-freed one),
so that we can skip a bunch of useless work in that case.
this fixes the underlying problem behind PR 18543, and also speeds up fork()
quite a bit (eg. 7% on my pc, 1% on my ultra2) when we get a cache hit.
2002-11-17 08:32:43 +00:00
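A sketch of how the caller can use the new return value; the signature is as suggested by the message, so treat it as illustrative:

    vaddr_t uaddr;

    if (uvm_uarea_alloc(&uaddr)) {
        /* cache hit: the uarea is already backed by physical pages,
           so skip allocating and wiring them */
    } else {
        /* fresh KVA: pages still have to be allocated and wired */
    }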
chs 2b73cf7ece encapsulate knowledge of uarea allocation in some new functions. 2002-09-22 07:20:29 +00:00
yamt d96bff0e27 add KSTACK_CHECK_MAGIC. discussed on tech-kern. 2002-07-02 20:27:44 +00:00
matt 357945ce6f When core dumping a process, don't dump maps backed by the device pager.
(move the pagerops externs to uvm_object.h and out of the C files).
2002-05-15 06:57:49 +00:00
chs 43973be0c5 introduce a new UVM fault type, VM_FAULT_WIREMAX. this is different
from VM_FAULT_WIRE in that when the pages being wired are faulted in,
the simulated fault is at the maximum protection allowed for the mapping
instead of the current protection.  use this in uvm_map_pageable{,_all}()
to fix the problem where writing via ptrace() to shared libraries that
are also mapped with wired mappings in another process causes a
diagnostic panic when the wired mapping is removed.

this is a really obscure problem so it deserves some more explanation.
ptrace() writing to another process ends up down in uvm_map_extract(),
which for MAP_PRIVATE mappings (such as shared libraries) will cause
the amap to be copied or created.  then the amap is made shared
(ie. the AMAP_SHARED flag is set) between the kernel and the ptrace()d
process so that the kernel can modify pages in the amap and have the
ptrace()d process see the changes.  then when the page being modified
is actually faulted on, the object page (from the shared library vnode)
is copied to a new anon page and inserted into the shared amap.
to make all the processes sharing the amap actually see the new anon
page instead of the vnode page that was there before, we need to
invalidate all the pmap-level mappings of the vnode page in the pmaps
of the processes sharing the amap, but we don't have a good way of
doing this.  the amap doesn't keep track of the vm_maps which map it.
so all we can do at this point is to remove all the mappings of the
page with pmap_page_protect(), but this has the unfortunate side-effect
of removing wired mappings as well.  removing wired mappings with
pmap_page_protect() is a legitimate operation, it can happen when a file
with a wired mapping is truncated.  so the pmap has no way of knowing
whether a request to remove a wired mapping is normal or when it's due to
this weird situation.  so the pmap has to remove the weird mapping.
the process being ptrace()d goes away and life continues.  then,
much later when we go to unwire or remove the wired vm_map mapping,
we discover that the pmap mapping has been removed when it should
still be there, and we panic.

so where did we go wrong?  the problem is that we don't have any way
to update just the pmap mappings that need to be updated in this
scenario.  we could invent a mechanism to do this, but that is much
more complicated than this change and it doesn't seem like the right
way to go in the long run either.

the real underlying problem here is that wired pmap mappings just
aren't a good concept.  one of the original properties of the pmap
design was supposed to be that all the information in the pmap could
be thrown away at any time and the VM system could regenerate it all
through fault processing, but wired pmap mappings don't allow that.
a better design for UVM would not require wired pmap mappings,
and Chuck C. and I are talking about this, but it won't be done
anytime soon, so this change will do for now.

this change has the effect of causing MAP_PRIVATE mappings to be
copied to anonymous memory when they are mlock()d, so that uvm_fault()
doesn't need to copy these pages later when called from ptrace(), thus
avoiding the call to pmap_page_protect() and the panic that results
from this when the mlock()d region is unlocked or freed.  note that
this change doesn't help the case where the wired mapping is MAP_SHARED.

discussed at great length with Chuck Cranor.
fixes PRs 10363, 12554, 12604, 13041, 13487, 14580 and 14853.
2001-12-31 22:34:39 +00:00
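The core of the change, sketched; max_protection and protection are the real vm_map_entry fields, while the surrounding logic is illustrative:

    /* when wiring, simulate the fault at the maximum protection
       allowed for the mapping, not the current protection */
    vm_prot_t prot = (fault_type == VM_FAULT_WIREMAX) ?
        entry->max_protection : entry->protection;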
thorpej 06920aef28 Move the code that walks the process's VM map during a coredump
into uvm_coredump_walkmap(), and use callbacks into the coredump
routine to do something with each section.
2001-12-10 01:52:26 +00:00
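A hypothetical shape for the callback interface; the argument types are guessed from the message, not the actual prototype:

    /* the coredump routine supplies a function that is invoked once
       per map entry while uvm_coredump_walkmap() traverses the map */
    int uvm_coredump_walkmap(struct proc *p,
        int (*func)(struct proc *, struct vm_map_entry *, void *),
        void *cookie);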
lukem b616d1ca1d add RCSIDs, and in some cases, slightly cleanup #include order 2001-11-10 07:36:59 +00:00
chs d8cbdbb0da in uvm_exit(), don't bother to unwire the uarea before we free it,
the pages will be freed anyway.
2001-11-06 05:34:42 +00:00
chs a467bddfdc bump the rusage counter for "swaps" when we swap out a process.
addresses PR 6170.
2001-09-23 07:10:08 +00:00
chs 64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to eg. an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages().
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
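For example, the genfs_node rule means a filesystem's private vnode data must now begin like this (hypothetical filesystem name):

    struct foofs_node {
        struct genfs_node fn_gnode; /* must be the first member for
                                       genfs_{get,put}pages() */
        /* filesystem-specific fields follow */
    };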
chris 0e7661f023 Update pmap_update() to take the updated pmap as an argument.
This will allow improvements to the pmaps so that they can more easily defer
expensive operations, e.g. TLB/cache flushes, until the last possible moment.

Currently this is a no-op on most platforms, so they should see no difference.

Reviewed by Jason.
2001-09-10 21:19:08 +00:00
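The new call shape, sketched:

    void pmap_update(pmap_t pmap);  /* was: pmap_update(void) */

    pmap_enter(pmap, va, pa, prot, flags);
    pmap_update(pmap);              /* the pmap now knows exactly whose
                                       deferred TLB/cache work to flush */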
chs 821ec03ed9 replace vm_map{,_entry}_t with struct vm_map{,_entry} *. 2001-06-02 18:09:08 +00:00
lukem d84d2c6c85 add missing #include "opt_kgdb.h" 2001-05-30 15:24:23 +00:00
chs 3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
thorpej 1c3a62e066 Sprinkle pmap_update() calls after calls to:
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).

These calls are relatively conservative.  It may be possible to
optimize these a little more.
2001-04-24 04:30:50 +00:00
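The pattern being added, per pmap(9); pmap_update() still took no argument at this point, and the mapping call is a sketch:

    pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE);
    pmap_update();      /* push out anything the pmap deferred */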
thorpej 0115ec662c The pmap_update() call at the end of uvm_swapout_threads() is
completely useless.  Nuke it.
2001-04-21 17:38:24 +00:00
chs ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00
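A converted call site looks roughly like this sketch (arguments elided):

    /* before */
    rv = uvm_map(/* ... */);
    if (rv != KERN_SUCCESS)
        return (rv == KERN_NO_SPACE ? ENOMEM : EFAULT);

    /* after: UVM returns errno values directly */
    error = uvm_map(/* ... */);
    if (error)
        return (error); /* ENOMEM, EFAULT, EACCES, EINVAL */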
eeh ec22628573 Move maxdmap and maxsmap where they belong and make them big enough. 2001-02-06 19:54:43 +00:00
chs 2ed28d2c7a lots of cleanup:
use queue.h macros and KASSERT().
address amap offsets in pages instead of bytes.
make amap_ref() and amap_unref() take an amap, offset and length
  instead of a vm_map_entry_t.
improve whitespace and comments.
2000-11-25 06:27:59 +00:00
thorpej 76589fafd4 - uvmspace_share(): If p2 has a vmspace already, make sure to deactivate
it and free it as appropriate.  Activate p2's new address space once
  it references p1's.
- uvm_fork(): Make sure the child's vmspace is NULL before calling
uvmspace_share() (the child doesn't have one already in this case).

These changes do not change the behavior for the current use of
uvmspace_share() (vfork(2)), but make it possible for an already
running process (such as a kernel thread) to properly attach to
another process's address space.
2000-10-11 17:27:58 +00:00
enami 9308eaef21 splstatclock is insufficient to protect run queues. Acquire scheduler
lock instead.
2000-09-23 00:43:10 +00:00
thorpej 9bd3060650 Remove a totally unnecessary splhigh/spl0 pair. 2000-08-21 02:29:32 +00:00
sommerfeld a7449460ec add comment warning about possible unlock/sleep race 2000-08-12 17:46:25 +00:00
mrg dea44a9ec4 remove include of <vm/vm.h> 2000-06-27 17:29:17 +00:00
mrg 2f159a1bac remove/move more mach vm header files:
<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
simonb 58d0ed9dd2 Set p->p_addr to NULL after it gets freed. 2000-06-18 05:20:27 +00:00
thorpej b0afc900f5 Change UVM_UNLOCK_AND_WAIT() to use ltsleep() (it is now atomic, as
advertised).  Garbage-collect uvm_sleep().
2000-06-08 05:52:34 +00:00
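The atomicity comes from ltsleep()'s interlock argument; a sketch of the idiom, with a hypothetical simplelock and predicate:

    simple_lock(&obj->slock);
    while (OBJ_IS_BUSY(obj)) {
        /* drops obj->slock and sleeps as one atomic step, then
           reacquires it on wakeup (unless PNORELOCK is given) */
        ltsleep(obj, PVM, "objwait", 0, &obj->slock);
    }
    /* still holding obj->slock here */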
thorpej e03e9e8086 Rather than starting init and creating kthreads by forking and then
doing a cpu_set_kpc(), just pass the entry point and argument all
the way down the fork path starting with fork1().  In order to
avoid special-casing the normal fork in every cpu_fork(), MI code
passes down child_return() and the child process pointer explicitly.

This fixes a race condition on multiprocessor systems; a CPU could
grab the newly created process (which has been placed on a run queue)
before cpu_set_kpc() would be performed.
2000-05-28 05:48:59 +00:00
thorpej 8964c35eca Introduce a new process state distinct from SRUN called SONPROC
which indicates that the process is actually running on a
processor.  Test against SONPROC as appropriate rather than
combinations of SRUN and curproc.  Update all context switch code
to properly set SONPROC when the process becomes the current
process on the CPU.
2000-05-26 00:36:42 +00:00
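A sketch of the kind of test being converted:

    /* before: "running" had to be inferred from SRUN and curproc */
    running = (p == curproc || p->p_stat == SRUN);

    /* after: the process state says it directly */
    running = (p->p_stat == SONPROC);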
augustss 641df97d12 Remove more register declarations. 2000-03-30 12:31:50 +00:00
kleink 6e5b64c8a0 Merge parts of chs-ubc2 into the trunk:
Add a new type voff_t (defined as a synonym for off_t) to describe offsets
into uvm objects, and update the appropriate interfaces to use it, the
most visible effect being the ability to mmap() file offsets beyond
the range of a vaddr_t.

Originally by Chuck Silvers; blame me for problems caused by merging this
into non-UBC.
2000-03-26 20:54:45 +00:00
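The visible effect, sketched:

    typedef off_t voff_t;       /* 64-bit even where vaddr_t is 32-bit */

    /* an mmap()able object offset beyond the range of a 32-bit vaddr_t */
    voff_t off = (voff_t)1 << 40;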
thorpej 1da427a80a Change the pmap_enter() API slightly; pmap_enter() now returns an error
value (KERN_SUCCESS or KERN_RESOURCE_SHORTAGE) indicating if it succeeded
or failed.  Change the `wired' and `access_type' arguments to a single
`flags' argument, which includes the access type, and flags:

	PMAP_WIRED	the old `wired' boolean
	PMAP_CANFAIL	pmap_enter() is allowed to fail

If PMAP_CANFAIL is not specified, the pmap should behave as it always
has in the face of a drastic resource shortage: fall over dead.

Change the fault handler to deal with failure (which indicates resource
shortage) by unlocking everything, waiting for the pagedaemon to free
more memory, then retrying the fault.
1999-11-13 00:24:38 +00:00
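The new contract, sketched; the error constants were still the KERN_* values at this point:

    int rv;

    /* the access type and the flags now travel together */
    rv = pmap_enter(pmap, va, pa, VM_PROT_READ,
        VM_PROT_READ | PMAP_WIRED | PMAP_CANFAIL);
    if (rv != KERN_SUCCESS) {
        /* resource shortage: unlock everything, wait for the
           pagedaemon to free memory, then retry the fault */
    }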
thorpej ea8fb3e04a Turn the proclist lock into a read/write spinlock. Update proclist locking
calls to reflect this.  Also, block statclock rather than softclock in
the proclist locking functions, to address a problem reported on
current-users by Sean Doran.
1999-07-25 06:30:33 +00:00
thorpej 3f176180d5 Garbage collect thread_sleep()/thread_wakeup() left over from the old
Mach VM code.  Also nuke iprintf(), which was no longer used anywhere.

Add proclist locking where appropriate.
1999-07-22 22:58:38 +00:00
thorpej 3ebbe095e0 Change the pmap_extract() interface to:
boolean_t pmap_extract(pmap_t, vaddr_t, paddr_t *);
This makes it possible for the pmap to map physical address 0.
1999-07-08 18:05:21 +00:00
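The new shape is what lets physical address 0 be mapped; previously a zero return was ambiguous. A sketch:

    paddr_t pa;

    if (pmap_extract(pmap, va, &pa)) {
        /* a mapping exists; pa is valid and may legitimately be 0 */
    }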
thorpej 12347b2657 Make uvm_vslock() return the error code from uvm_fault_wire(). All places
which use uvm_vslock() should now test the return value.  If it's not
KERN_SUCCESS, wiring the pages failed, so the operation which is using
uvm_vslock() should error out.

XXX We currently just EFAULT a failed uvm_vslock().  We may want to do
more about translating error codes in the future.
1999-06-17 15:47:22 +00:00
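Callers now look roughly like this sketch; per the XXX above, a failure currently collapses to EFAULT:

    if (uvm_vslock(p, uaddr, len, VM_PROT_WRITE) != KERN_SUCCESS)
        return (EFAULT);    /* wiring the pages failed */
    /* ... access the wired user memory ... */
    uvm_vsunlock(p, uaddr, len);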
thorpej 1f97ad987f In uvm_useracc(), make sure we have a read lock on the map before
calling uvm_map_checkprot().
1999-06-17 05:57:33 +00:00
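Sketched, the fix simply brackets the check with the map's read lock:

    vm_map_lock_read(map);
    rv = uvm_map_checkprot(map, start, end, prot);
    vm_map_unlock_read(map);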
thorpej f274deb90a The i386 and pc532 pmaps are officially fixed. 1999-06-17 00:24:10 +00:00
thorpej 8d8badbd8f Make uvm_fault_unwire() take a vm_map_t, rather than a pmap_t, for
consistency.  Use this opportunity for checking for intrsafe map use
in this routine (which is illegal).
1999-05-28 20:49:51 +00:00
thorpej b2e9c635ec Pass an access_type to uvm_vslock(). 1999-05-26 01:05:24 +00:00
thorpej 9d0ea0969e - uvm_fork()/uvm_swapin(): pass VM_PROT_READ|VM_PROT_WRITE access_type
to uvm_fault_wire(), to guarantee that the kernel stacks will not
  cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
1999-05-26 00:33:52 +00:00
thorpej c10a926030 Allow the caller to specify a stack for the child process. If NULL,
the child inherits the stack pointer from the parent (traditional
behavior).  Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.

This is required for clone(2).
1999-05-13 21:58:32 +00:00
thorpej 2835fc6e46 Pull signal actions out of struct user, make them a separate proc
substructure, and allow them to be shared.

Required for clone(2).
1999-04-30 21:23:49 +00:00
mycroft 31a2536cd0 Add a new `access type' argument to pmap_enter(). This indicates what type of
memory access a mapping was caused by.  This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information.  On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault.  On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
1999-03-26 21:58:39 +00:00
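At this point pmap_enter() still returned void and kept its separate `wired' argument; the new parameter rides along like this sketch:

    /* from uvm_fault(): report the access that caused the mapping so
       the pmap can preset referenced/modified state */
    pmap_enter(pmap, va, pa, prot, wired, access_type);

    /* most other callers simply pass 0 */
    pmap_enter(pmap, va, pa, prot, wired, 0);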
mrg a0139bc39d remove now >1 year old pre-release message. 1999-03-25 18:48:49 +00:00
chs e2d0bfbb09 remove a debugging printf. 1999-03-15 07:55:19 +00:00
tron c71ccab136 Defopt SYSVMSG, SYSVSEM and SYSVSHM. 1998-10-19 22:21:19 +00:00
thorpej 28904fca48 Implement uvm_exit(), which frees VM resources when a process finishes
exiting.
1998-09-08 23:44:21 +00:00
eeh a2dd74ed79 Merge paddr_t changes into the main branch. 1998-08-13 02:10:37 +00:00
perry 2c8717021d bzero->memset, bcopy->memcpy, bcmp->memcmp 1998-08-09 22:36:37 +00:00
kleink afeaa5bb57 Use size_t to pass the length of the memory region to operate on to chgkprot(),
kernacc(), useracc(), vslock() and vsunlock(); (unsigned) ints are not
adequate on all platforms.
1998-05-09 15:04:39 +00:00
kleink d9066c40e9 Make uvm_vsunlock() actually use the proc * passed to it; per discussion
with Jason Thorpe.
1998-05-08 17:41:41 +00:00
thorpej 73863dd3c9 Pass vslock() and vsunlock() a proc *, rather than implicitly operating
on curproc.
1998-04-30 06:28:57 +00:00
thorpej fe97b1da8e Oops, fix a typo. 1998-04-09 00:24:05 +00:00
thorpej 2018d40811 Allocate kernel virtual address space for the U-area before allocating
the new proc structure when performing a fork.  This makes it much
easier to abort a fork operation and return an error if we run out
of KVA space.

The U-area pages are still wired down in {,u}vm_fork(), as before.
1998-04-09 00:23:38 +00:00
mrg 8106d13596 KNF. 1998-03-09 00:58:55 +00:00
mrg d90485202c - add defopt's for UVM, UVMHIST and PMAP_NEW.
- remove unnecessary UVMHIST_DECL's.
1998-02-10 14:08:44 +00:00
mrg 1f6b921cf7 restore rcsids 1998-02-07 11:07:38 +00:00
chs 732a925b1b add locking of kernel_map in uvm_kernacc().
check return value of uvm_fault_wire() in uvm_fork().
enable swapping.
1998-02-07 02:26:04 +00:00
thorpej 9eb328b495 RCS ID police. 1998-02-06 22:26:13 +00:00
mrg f2caacc717 initial import of the new virtual memory system, UVM, into -current.
UVM was written by chuck cranor <chuck@maria.wustl.edu>, with some
minor portions derived from the old Mach code.  i provided some help
getting swap and paging working, and other bug fixes/ideas.  chuck
silvers <chuq@chuq.com> also provided some other fixes.

this is the UVM kernel code portion.


this will be KNF'd shortly.  :-)
1998-02-05 06:25:08 +00:00