Commit Graph

124 Commits

Author SHA1 Message Date
yamt
8bf7662829 merge yamt-splraiseipl branch.
- finish implementing splraiseipl (and makeiplcookie).
  http://mail-index.NetBSD.org/tech-kern/2006/07/01/0000.html
- complete workqueue(9) and fix its ipl problem, which is reported
  to cause audio skipping.
- fix netbt (at least compilation problems) for some ports.
- fix PR/33218.
2006-12-21 15:55:21 +00:00
elad
b8e4702fb2 Back out uvm_is_swap_device(). 2006-12-07 14:06:51 +00:00
elad
a6c2dfb16d Introduce uvm_is_swap_device(), to check whether the passed
struct vnode * is used as a swap device.

Okay mrg@.
2006-12-01 16:06:09 +00:00
yamt
9de1cdc8fe move some knowledge about vnode into uvm_vnode.c. 2006-10-12 10:14:20 +00:00
yamt
0b30e1501c uobj_wirepages and uobj_unwirepages from Mindaugas. PR/34771.
(commented out in files.uvm for now because there is no user in tree.)

http://mail-index.netbsd.org/tech-kern/2006/09/24/0000.html
http://mail-index.netbsd.org/tech-kern/2006/10/10/0000.html
2006-10-12 10:11:57 +00:00
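A minimal usage sketch of how an in-tree consumer might wire a range of an
object's pages; the three-argument prototype (object, start offset, end
offset) and the helper below are assumptions for illustration, not taken
from the commit:

	#include <sys/param.h>
	#include <uvm/uvm_extern.h>

	/* sketch only: wire [off, off+len) of a uvm_object so its pages
	 * stay resident; the exact prototype is assumed here. */
	static int
	example_wire_range(struct uvm_object *uobj, off_t off, off_t len)
	{
		int error;

		error = uobj_wirepages(uobj, off, off + len);
		if (error)
			return error;
		/* ... use the wired pages ... */
		uobj_unwirepages(uobj, off, off + len);
		return 0;
	}
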
chs
33c1fd1917 add support for O_DIRECT (I/O directly to application memory,
bypassing any kernel caching for file data).
2006-10-05 14:48:32 +00:00
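For illustration only (not part of the change itself), a userland caller
would request uncached file I/O via the open(2) flag; alignment
requirements for application buffers depend on the file system:

	#include <fcntl.h>

	/* open a file for direct i/o: data moves straight between the
	 * application buffer and the device, bypassing the page cache. */
	int
	open_uncached(const char *path)
	{
		return open(path, O_RDWR | O_DIRECT);
	}
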
yamt
9d3e3eab23 merge yamt-pdpolicy branch.
- separate the page replacement policy from the rest of the kernel
- implement an alternative replacement policy
2006-09-15 15:51:12 +00:00
cherry
8a4036de78 Bump the kernel aobj to 64 bits.
See: http://mail-index.netbsd.org/tech-kern/2006/03/07/0007.html
2006-09-01 20:39:05 +00:00
he
5ea0e70c68 Rearrange included headers and/or add include of <sys/types.h> and
<sys/lock.h>, so that the mipsco port can build again, ref.
  http://mail-index.netbsd.org/port-mips/2006/08/04/0000.html
Reviewed by thorpej
2006-08-04 22:42:36 +00:00
drochner
ef8848c74a Introduce a UVM_KMF_EXEC flag for uvm_km_alloc() which enforces an
executable mapping. Up to now, only R+W was requested from pmap_kenter_pa.
On most CPUs, we get an executable mapping anyway, due to lack of
hardware support or due to laziness in the pmap implementation. Only
alpha obeys VM_PROT_EXECUTE, afaics.
2006-07-05 14:26:42 +00:00
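A rough sketch of a caller asking for an explicitly executable allocation;
the UVM_KMF_WIRED | UVM_KMF_EXEC combination and the uvm_km_alloc()
prototype shown are assumptions based on the post-yamt-km interface, not
part of the commit:

	#include <sys/param.h>
	#include <uvm/uvm_extern.h>

	/* sketch: one wired, executable kernel page (e.g. for generated
	 * code); previously only R+W was requested from pmap_kenter_pa. */
	static vaddr_t
	example_alloc_exec_page(void)
	{
		return uvm_km_alloc(kernel_map, PAGE_SIZE, 0,
		    UVM_KMF_WIRED | UVM_KMF_EXEC);
	}
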
yamt
c876210968 UVM_MAPFLAG: add missing parens. 2006-05-19 15:08:14 +00:00
elad
fc9422c9d9 integrate kauth. 2006-05-14 21:31:52 +00:00
drochner
e10923fd37 - clean up the interface to uvm_fault: the "fault type" didn't serve
  any purpose (done by a macro, so we don't save any cycles for now)
- kill vm_fault_t; it is not needed for real faults, and for simulated
  faults (wiring) it can be replaced by UVM internal flags
- remove <uvm/uvm_fault.h> from uvm_extern.h again
2006-03-15 18:09:25 +00:00
yamt
ec5a93183a merge yamt-uio_vmspace branch.
- use vmspace rather than proc or lwp where appropriate.
  a vmspace is more natural for specifying an address space
  (and less likely to be abused for random purposes).
- fix a swdmover race.
2006-03-01 12:38:10 +00:00
simonb
8b00a8c689 Make a note that some counters should be 64-bit as they wrap far too
quickly.
2006-02-10 00:53:04 +00:00
yamt
e8e7ab8637 implement compat_linux mremap. 2006-01-21 13:34:15 +00:00
yamt
4fce5d6f5e make length of inactive queue tunable by sysctl. (vm.inactivepct) 2005-12-21 12:19:04 +00:00
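As a hypothetical userland example (not from the commit), the new knob can
be read with sysctl(3); the variable name vm.inactivepct is the one
introduced here, the rest is illustrative:

	#include <sys/param.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	int
	main(void)
	{
		int pct;
		size_t len = sizeof(pct);

		/* read the target length of the inactive queue (percent) */
		if (sysctlbyname("vm.inactivepct", &pct, &len, NULL, 0) == 0)
			printf("vm.inactivepct = %d\n", pct);
		return 0;
	}
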
yamt
221616873d merge yamt-readahead branch. 2005-11-29 22:52:02 +00:00
yamt
74be61b2e6 remove one of duplicated forward decl. of vmspace. pointed by Dheeraj S. 2005-09-01 02:21:12 +00:00
yamt
b34f62d5e9 put back uvm_fault.h for now as it's needed for some ports. 2005-09-01 02:16:46 +00:00
yamt
be5d1db4a4 don't include uvm_fault.h unnecessarily. 2005-08-27 16:11:32 +00:00
matt
e1245a3c46 Rework the coredump code to have no explicit knowledge of how coredump
i/o is done.  Instead, pass an opaque cookie which is then passed to a
new routine, coredump_write, which does the actual i/o.  This allows the
method of doing i/o to change without affecting any future MD code.
Also, make netbsd32_core.c [re]use core_netbsd.c (in a similar manner that
core_elf64.c uses core_elf32.c) and eliminate that code duplication.
cpu_coredump{,32} is now called twice, first with a NULL iocookie to fill
the core structure and a second time to actually write the MD parts of the
coredump.  All i/o is no longer random access and is suitable for shipping
over a stream.
2005-06-10 05:10:12 +00:00
matt
25a0e29a75 When writing coredumps, don't write zero uninstantiated demand-zero pages.
Also, with ELF core dumps, trim trailing zeroes from sections.  These two
changes can shrink coredumps by over 50% in size.
2005-06-02 17:01:43 +00:00
yamt
627b0d5099 remove anon related statistics which are no longer used. 2005-05-15 08:01:06 +00:00
yamt
6b2d8b66a4 merge yamt-km branch.
- don't use managed mappings/backing objects for wired memory allocations.
  save some resources like pv_entry.  also fix (most of) PR/27030.
- simplify kernel memory management API.
- simplify pmap bootstrap of some ports.
- some related cleanups.
2005-04-01 11:59:21 +00:00
fvdl
c487efe4a7 Fix some things regarding COMPAT_NETBSD32 and limits/VM addresses.
* For sparc64 and amd64, define *SIZ32 VM constants.
* Add a new function pointer to struct emul, pointing at a function
  that will return the default VM map address. The default function
  is uvm_map_defaultaddr, which just uses the VM_DEFAULT_ADDRESS
  macro. This gives emulations control over the default map address,
  and allows things to be mapped at the right address (in 32bit range)
  for COMPAT_NETBSD32.
* Add code to adjust the data and stack limits when a COMPAT_NETBSD32
  or COMPAT_SVR4_32 binary is executed.
* Don't use USRSTACK in kern_resource.c, use p_vmspace->vm_minsaddr
  instead (emulations might have set it differently)
* Since this changes struct emul, bump kernel version to 3.99.2

Tested on amd64, compile-tested on sparc64.
2005-03-26 05:12:34 +00:00
yamt
22099ab744 in uvm_unmap_remove, always wakeup va waiters if any.
uvm_km_free_wakeup is now a synonym of uvm_km_free.
2005-01-13 11:50:32 +00:00
chs
8975a0856f adjust the UBC mapping code to support non-vnode uvm_objects.
this means we can no longer look at the vnode size to determine how many
pages to request in a fault, which is good since for NFS the size can change
out from under us on the server anyway.  there's also a new flag UBC_UNMAP
for ubc_release(), so that the file system code can make the decision about
whether to cache mappings for files being used as executables.
2005-01-09 16:42:43 +00:00
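A minimal sketch of the decision the file system code can now make,
assuming the two-argument ubc_release() prototype; the helper and its
is_executable hint are hypothetical:

	#include <uvm/uvm_extern.h>

	/* sketch: release a UBC window obtained earlier from ubc_alloc();
	 * passing UBC_UNMAP tells UBC not to keep the mapping cached,
	 * e.g. for files being used as executables. */
	static void
	example_ubc_done(void *win, int is_executable)
	{
		ubc_release(win, is_executable ? UBC_UNMAP : 0);
	}
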
yamt
a880e5e2b5 in the case of !PMAP_MAP_POOLPAGE, gather pool backend allocations into
large chunks for kernel_map and kmem_map to ease kva fragmentation.
2005-01-01 21:08:02 +00:00
yamt
95c82bfee4 introduce vm_map_kernel, a subclass of vm_map, and
move some kernel-only members of vm_map to it.
2005-01-01 21:02:12 +00:00
yamt
1207308b90 for in-kernel maps,
- allocate kva for vm_map_entry from the map itself and
  remove the static limit, MAX_KMAPENT.
- keep merged entries for later splitting to fix allocate-to-free problem.
  PR/24039.
2005-01-01 21:00:06 +00:00
thorpej
6c08646cb8 Garbage-collect pagemove(); nothing uses it anymore (YAY!!!) 2004-08-28 22:12:40 +00:00
pk
2fb3dac280 Since a vmspace always includes a vm_map, we can re-use the vm_map's
reference count lock to also protect the vmspace's reference count.
2004-05-04 21:33:40 +00:00
junyoung
325f5482a8 Nuke __P(). 2004-03-24 07:55:01 +00:00
jdolecek
43bafa5c97 fix typo in comment 2004-03-14 16:47:23 +00:00
yamt
546aea4d9c when breaking a loan from uobj,
insert the replacement page into the same position
as the original page on the object memq so that
genfs_putpages (and lfs) won't be confused.

noted by Stephan Uphoff (PR/24328)
2004-02-13 13:47:16 +00:00
jdolecek
089abdad44 Rearrange the process exit path to avoid the need to free resources from
a different process context ('reaper').

From within the exiting process context:
* deactivate pmap and free vmspace while we can still block
* introduce MD cpu_lwp_free() - this cleans all MD-specific context (such
  as FPU state), and is the last potentially blocking operation;
  all of cpu_wait(), and most of cpu_exit(), is now folded into cpu_lwp_free()
* process is now immediately marked as zombie and made available for pickup
  by the parent; the remaining last lwp continues the exit as fully detached
* MI (rather than MD) code bumps uvmexp.swtch; cpu_exit() is now the same
  for both 'process' and 'lwp' exit

uvm_lwp_exit() is modified to never block; the u-area memory is now
always just linked to the list of available u-areas. Introduce (blocking)
uvm_uarea_drain(), which is called to release excess u-area memory;
this is called by the parent within wait4(), or by the pagedaemon on memory
shortage. uvm_uarea_free() is now a private function within uvm_glue.c.

MD process/lwp exit code now always calls lwp_exit2() immediately after
switching away from the exiting lwp.

g/c now unneeded routines and variables, including the reaper kernel thread
2004-01-04 11:33:29 +00:00
pk
3c96ae431b * Introduce uvm_km_kmemalloc1() which allows alignment and preferred offset
to be passed to uvm_map().

* Turn all uvm_km_valloc*() macros back into (inlined) functions to retain
  binary compatibility with any 3rd party modules.
2003-12-18 15:02:04 +00:00
pk
60181444ca Condense all existing variants of uvm_km_valloc into a single function:
uvm_km_valloc1(), and use it to express all of
	uvm_km_valloc()
	uvm_km_valloc_wait()
	uvm_km_valloc_prefer()
	uvm_km_valloc_prefer_wait()
	uvm_km_valloc_align()
in terms of it by macro expansion.
2003-12-18 08:15:42 +00:00
chs
e07f0b9362 eliminate uvm_useracc() in favor of checking the return value of
copyin() or copyout().

uvm_useracc() tells us whether the mapping permissions allow access to
the desired part of an address space, and many callers assume that
this is the same as knowing whether an attempt to access that part of
the address space will succeed.  however, access to user space can
fail for reasons other than insufficient permission, most notably that
paging in any non-resident data can fail due to i/o errors.  most of
the callers of uvm_useracc() make the above incorrect assumption.  the
rest are all misguided optimizations, which optimize for the case
where an operation will fail.  we'd rather optimize for operations
succeeding, in which case we should just attempt the access and handle
failures due to insufficient permissions the same way we handle i/o
errors.  since there appear to be no good uses of uvm_useracc(), we'll
just remove it.
2003-11-13 03:09:28 +00:00
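A minimal sketch of the preferred pattern: just attempt the access and let
copyin(9) report failure (typically EFAULT), whether it comes from
insufficient permissions or from an I/O error while paging; the argument
struct and helper are hypothetical:

	#include <sys/param.h>
	#include <sys/systm.h>

	struct example_args {		/* hypothetical argument block */
		int	ea_flags;
		size_t	ea_len;
	};

	/* no uvm_useracc() pre-check; the copy itself is the check. */
	static int
	example_fetch_args(const void *uaddr, struct example_args *kargs)
	{
		return copyin(uaddr, kargs, sizeof(*kargs));
	}
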
pk
5869d91cb9 Introduce uvm_swapisfull(), which computes the available swap space by
taking into account swap devices that are in the process of being removed.
2003-08-11 16:33:30 +00:00
agc
aad01611e7 Move UCB-licensed code from 4-clause to 3-clause licence.
Patches provided by Joel Baker in PR 22364, verified by myself.
2003-08-07 16:26:28 +00:00
fvdl
d5aece61d6 Back out the lwp/ktrace changes. They contained a lot of collateral damage,
and need to be examined and discussed more.
2003-06-29 22:28:00 +00:00
darrenr
960df3c8d1 Pass lwp pointers throughout the kernel, as required, so that the lwpid can
be inserted into ktrace records.  The general change has been to replace
"struct proc *" with "struct lwp *" in various function prototypes, pass
the lwp through and use l_proc to get the process pointer when needed.

Bump the kernel rev up to 1.6V
2003-06-28 14:20:43 +00:00
thorpej
36da248c07 Back out the following change:
http://mail-index.netbsd.org/source-changes/2003/05/08/0068.html

There were some side-effects that I didn't anticipate, and fixing them
is proving to be more difficult than I thought, so just eject for now.
Maybe one day we can look at this again.

Fixes PR kern/21517.
2003-05-10 21:10:23 +00:00
thorpej
b77900c3c2 Simplify the way the bounds of the managed kernel virtual address
space are advertised to UVM by making virtual_avail and virtual_end
first-class variables exported by UVM.  Machine-dependent code is
responsible for initializing them before main() is called.  Anything
that steals KVA must adjust these variables accordingly.

This reduces the number of instances of this info from 3 to 1, and
simplifies the pmap(9) interface by removing the pmap_virtual_space()
function call, and removing two arguments from pmap_steal_memory().

This also eliminates some kludges such as having to burn kernel_map
entries on space used by the kernel and stolen KVA.

This also eliminates use of VM_{MIN,MAX}_KERNEL_ADDRESS from MI code,
thus giving MD code greater flexibility over the bounds of the managed
kernel virtual address space if a given port's specific platforms can
vary in this regard (this is especially true of the evb* ports).
2003-05-08 18:13:12 +00:00
wiz
6a8058b52a Misc fixes from jmc@openbsd. 2003-05-03 19:01:05 +00:00
thorpej
b193480908 Add extensible malloc types, adapted from FreeBSD. This turns
malloc types into a structure, a pointer to which is passed around,
instead of an int constant.  Allow the limit to be adjusted when the
malloc type is defined, or with a function call, as suggested by
Jonathan Stone.
2003-02-01 06:23:35 +00:00
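A short sketch of the new style, with a hypothetical driver-private type;
MALLOC_DEFINE and the pointer-based type argument follow the
FreeBSD-derived interface described above:

	#include <sys/param.h>
	#include <sys/malloc.h>

	/* hypothetical malloc type for an example driver */
	MALLOC_DEFINE(M_EXAMPLEDEV, "exampledev", "example driver structures");

	static void *
	example_alloc(size_t sz)
	{
		/* the type is now a struct passed by pointer, not an int */
		return malloc(sz, M_EXAMPLEDEV, M_WAITOK);
	}
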
thorpej
b78f59b443 Merge the nathanw_sa branch. 2003-01-18 08:51:40 +00:00
thorpej
8ae922d8a7 Define a UVM_FLAG_NOWAIT, which indicates that we're not allowed
to sleep.  Define UVM_KMF_NOWAIT in terms of UVM_FLAG_NOWAIT.

From Manuel Bouyer.  Fixes a problem where any mapping with read
protection was treated as being created in a "nowait" context, causing
spurious failures.
2002-12-11 07:10:20 +00:00