in the non-MULTIPROCESSOR case (LOCKDEBUG requires it). Scheduler
lock is held upon entry to mi_switch() and cpu_switch(), and
cpu_switch() releases the lock before returning.
Largely from Bill Sommerfeld, with some minor bug fixes and
machine-dependent code hacking from me.
<vm/pglist.h> -> <uvm/uvm_pglist.h>
<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
<vm/vm_object.h> -> nothing
<vm/vm_pager.h> -> into <uvm/uvm_pager.h>
also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
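For a typical source file, the mechanical part of the conversion looks
roughly like this (hypothetical example, not an actual diff from the tree):

    /* before */
    #include <vm/vm.h>
    #include <vm/vm_page.h>         /* redundant with <vm/vm.h> */
    #include <vm/vm_kern.h>
    #include <vm/pglist.h>

    /* after */
    #include <vm/vm.h>
    #include <uvm/uvm_extern.h>     /* absorbs the old <vm/vm_kern.h> */
    #include <uvm/uvm_pglist.h>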
with procfs's cmdline - from the PR:
The cmdline implementation in procfs is bogus. It's possible that
part of the fix is a workaround of a UVM problem - that is, when
(internally) accessing the top of the process VM (the end of the
args) a request for I/O of a PAGE_SIZE'd block starting at less
than a PAGE_SIZE from the end of the mem space returns EINVAL
rather than the data that is available. Whether this is a bug
in UVM or not depends upon how it is defined to work, and I was
unable to determine that. (Simon Burge found that problem, and
provided the basis of the workaround/fix).
Then, the cmdline function is unable to read more than one
page of args, and a good thing too, as the way it is written,
attempting to get more than that would reference into la-la land.
And, on an attempt to read a lot of data when the above is
fixed, most of the data won't be returned, only the final block
of any read.
Tested on alpha, pmax, i386 and sparc.
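For reference, a minimal userland check of the interface being fixed here
(reads /proc/1/cmdline and prints the NUL-separated arguments; error
handling kept to a minimum):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[4096];
        ssize_t i, n;
        int fd;

        if ((fd = open("/proc/1/cmdline", O_RDONLY)) == -1) {
            perror("open");
            return 1;
        }
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            for (i = 0; i < n; i++)
                putchar(buf[i] == '\0' ? ' ' : buf[i]);
        putchar('\n');
        close(fd);
        return 0;
    }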
in vfs_detach(). vfs_done may free a filesystem's global resources,
typically those allocated in the respective filesystem's init function.
Needed so that filesystems which went in via LKM have a chance to
clean up after themselves before unloading. This fixes random panics
seen when an LKM for a filesystem using pools was loaded and unloaded
several times.
For each leaf filesystem, add an appropriate vfs_done routine.
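A leaf filesystem's done routine just tears down what its init routine
set up; for a filesystem using pools that amounts to something like this
(sketch, with made-up names):

    #include <sys/pool.h>

    struct pool examplefs_node_pool;    /* initialized in examplefs_init() */

    void
    examplefs_done(void)
    {

        /* release global resources allocated by examplefs_init() */
        pool_destroy(&examplefs_node_pool);
    }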
UFS code, and I forgot to rename the "ihash" variable, causing
weird effects, because 3/4 of the UFS hash table would become
unreachable after procfs was loaded as an LKM.
process is about to exec a sugid binary.
To speed things up, use hashing for vnode allocation, like other
filesystems do. This avoids walking the whole procfs node list in the
revoke case too.
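In outline, the hashing change means keeping the nodes in pid-keyed
buckets rather than on one long list; the structure below is illustrative
only (the names are made up, not the actual procfs code):

    #include <sys/types.h>
    #include <sys/queue.h>

    struct pfsnode_example {
        LIST_ENTRY(pfsnode_example) pfs_hash;   /* hash chain */
        pid_t                       pfs_pid;    /* process this node refers to */
        int                         pfs_type;   /* which file under /proc/<pid> */
    };

    LIST_HEAD(pfs_hashhead, pfsnode_example);
    struct pfs_hashhead *pfs_hashtbl;           /* allocated with hashinit(9) */
    u_long pfs_hashmask;

    /* both allocation and revoke-on-exec scan one bucket, not the whole list */
    #define PFSHASH(pid)    (&pfs_hashtbl[(pid) & pfs_hashmask])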
Problem turned out to be due to improper handling of reads beyond EOF:
they should just return without error with the uio unchanged, and the
caller will recognize this as a zero-byte return (EOF).
The previous fix to protect directory reads against bogus uio_offset
values returned EINVAL, which broke mount -o union: the union mount
only union'ed in the lower directory if the upper directory cleanly
returned EOF.
While we're here, protect kernfs as well.
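The intended EOF semantics, roughly (a sketch, not the literal diff;
"filesize" stands for whatever bound the filesystem knows for the object):

    if (uio->uio_offset < 0)
        return (EINVAL);    /* still reject genuinely bogus offsets */
    if (uio->uio_offset >= filesize)
        return (0);         /* at/past EOF: no error, uio left unchanged,
                             * so the caller sees a zero-byte read */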
call with F_FSCTL set and F_SETFL calls generate calls to a new
fileop fo_fcntl. Add genfs_fcntl() and soo_fcntl() which return 0
for F_SETFL and EOPNOTSUPP otherwise. Have all leaf filesystems
use genfs_fcntl().
Reviewed by: thorpej
Tested by: wrstuden
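The default behaviour described above boils down to the following (a
simplified stand-in; the real genfs_fcntl()/soo_fcntl() take the usual
vnode-op/fileop argument structures, elided here):

    #include <sys/fcntl.h>
    #include <sys/errno.h>

    int
    example_fcntl(unsigned int command)
    {

        if (command == F_SETFL)
            return (0);         /* accept, nothing special to do */
        return (EOPNOTSUPP);    /* everything else is not supported */
    }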
and PID allocation MP-safe. A new process state is added: SDEAD. This
state indicates that a process is dead, but not yet a zombie (has not
yet been processed by the process reaper).
SDEAD processes exist on both the zombproc list (via p_list) and the
deadproc list (via p_hash; the proc has been removed from the pidhash
earlier in the exit path). When the reaper deals with a process, it
changes the state to SZOMB so that wait4 can process it.
Add a P_ZOMBIE() macro, which treats a proc in SZOMB or SDEAD as a zombie,
and update various parts of the kernel to reflect the new state.
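The macro itself is presumably along these lines:

    /* dead-but-not-yet-reaped counts as a zombie for most purposes */
    #define P_ZOMBIE(p)     ((p)->p_stat == SZOMB || (p)->p_stat == SDEAD)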
Update coda to the new struct lock in struct vnode.
Make fdescfs, kernfs, portalfs, and procfs actually lock their vnodes.
It's not that hard.
Make unionfs set v_vnlock = NULL so any overlaid fs will call its
VOP_LOCK.
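The unionfs part of that is a one-liner in the node setup path, roughly:

    /*
     * No shared lock is exported via v_vnlock, so VOP_LOCK() on a
     * vnode stacked on top of unionfs falls through to that
     * filesystem's own lock routine instead of being short-circuited.
     */
    vp->v_vnlock = NULL;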
array of ARG_MAX size. ARG_MAX is currently 256k, which causes a rather
serious stack overflow (kernel stacks are not very large, usually 8k).
Fixes memory corruption problems observed after accessing /proc/1/cmdline
during tests. The problem in my case manifested itself as massive lossage
in ffs_sync(), resulting in a crash and, sometimes, pooched file systems.
XXX This could, and probably should, be rewritten to use a much smaller
temporary buffer, and a loop around uiomove().
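The rewrite suggested in the XXX would look something like this (sketch
only; "len" is the size of the argument area, "uio" the caller's uio, and
the chunk size is arbitrary):

    char buf[256];      /* small temporary buffer instead of ARG_MAX */
    size_t chunk;
    int error = 0;

    while (len > 0 && uio->uio_resid > 0 && error == 0) {
        chunk = len < sizeof(buf) ? len : sizeof(buf);
        /* fetch the next `chunk' bytes of the target process'
         * argument area into buf (e.g. via uvm_io()) ... */
        error = uiomove(buf, chunk, uio);
        len -= chunk;
    }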
- Don't error out on P_SYSTEM or SZOMB processes; instead, do what ps(1)
would do, i.e. print the p_comm in parentheses.
- Use uvm_io() (or procfs_rwmem() if !UVM) to read the target process's
ps_strings and argument vector. Using copyin() is problematic, because
it operates on the current process's address space! That is, the old
code would always get the `cmdline' of the process reading the file,
not that of the target process.
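Roughly, the two cases above come out as (sketch; "arg"/"maxlen" are just
placeholders for the temporary buffer and its size):

    if (P_ZOMBIE(p) || (p->p_flag & P_SYSTEM)) {
        /*
         * Nothing sensible to read from user space; follow ps(1)
         * and return "(p_comm)" instead.
         */
        len = snprintf(arg, maxlen, "(%s)", p->p_comm);
    } else {
        /*
         * Fetch ps_strings and the argument vector out of the
         * *target* process via uvm_io() (procfs_rwmem() if !UVM);
         * copyin() would read curproc's address space instead.
         */
    }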
with UVM and separate I&D-Cache). Mostly by Michael Hitch, but pass struct
proc * instead of the pmap. Reason: said machine will need a method to do
the syncing operation for "curproc", too; this way more code can be shared.
as with user-land programs, include files are installed by each directory
in the tree that has includes to install. (This allows more flexibility
as to what gets installed, makes 'partial installs' easier, and gives us
more options as to which machines' includes get installed at any given
time.) The old SYS_INCLUDES={symlinks,copies} behaviours are _both_
still supported, though at least one bug in the 'symlinks' case is
fixed by this change. Include files can't be built before installation,
so directories that have includes as targets (e.g. dev/pci) have to move
those targets into a different Makefile.
(thus causing s_leader to become NULL) by storing the session ID separately
in the session structure. Export the session ID to userspace in the
eproc structure.
Submitted by Tom Proett <proett@nas.nasa.gov>.
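Schematically (assuming the obvious member names; details of the setsid(2)
and sysctl paths elided):

    /* at session creation: record the ID itself, not just a pointer
     * to the leader, so it survives the leader exiting */
    sess->s_sid = p->p_pid;

    /* when filling in the eproc for sysctl/kvm consumers */
    ep->e_sid = sess->s_sid;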