* Remove the "lwp *" argument that was added to vget(). Turns out
that nothing actually used it!
* Remove the "lwp *" arguments that were added to VFS_ROOT(), VFS_VGET(),
and VFS_FHTOVP(); all they did was pass it to vget() (which, as noted
above, didn't use it). A prototype sketch follows this list.
* Remove all of the "lwp *" arguments to internal functions that were added
just to appease the above.
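As a rough illustration of the result (the exact NetBSD prototypes vary by
version and are not quoted here), the change boils down to dropping the
trailing "lwp *" parameter from these interfaces:

    /* Approximate post-change shapes; argument lists are version-dependent. */
    int vget(struct vnode *vp, int lockflags);                      /* lwp arg gone */
    int VFS_ROOT(struct mount *mp, struct vnode **vpp);             /* lwp arg gone */
    int VFS_VGET(struct mount *mp, ino_t ino, struct vnode **vpp);  /* lwp arg gone */
    int VFS_FHTOVP(struct mount *mp, struct fid *fhp, struct vnode **vpp);
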
be inserted into ktrace records. The general change has been to replace
"struct proc *" with "struct lwp *" in various function prototypes, pass
the lwp through and use l_proc to get the process pointer when needed.
Bump the kernel rev up to 1.6V
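A minimal sketch of the conversion pattern described above; the function name
is invented for illustration, and only the l_proc usage comes from the change
itself:

    /* Before: void example_op(struct proc *p); */
    void
    example_op(struct lwp *l)
    {
            struct proc *p = l->l_proc;   /* recover the process only when needed */

            /* ... existing code keeps using p; l is available wherever the
             * per-thread identity (e.g. for ktrace records) matters ... */
    }
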
malloc types into a structure, a pointer to which is passed around,
instead of an int constant. Allow the limit to be adjusted when the
malloc type is defined, or with a function call, as suggested by
Jonathan Stone.
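Roughly, the idea looks like the sketch below; the structure and field names
here are illustrative stand-ins, not the actual sys/malloc.h definitions:

    /* Illustrative only: a malloc type carries its own name, limit and
     * statistics, and callers pass a pointer to it instead of an int
     * constant such as M_TEMP. */
    struct malloc_type_sketch {
            const char *name;    /* short description of the type */
            u_long      limit;   /* adjustable when defined, or later by a call */
            /* ... per-type allocation statistics ... */
    };
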
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map). Try to deal with this:
* Group all information about the backend allocator for a pool in a
separate structure. The pool references this structure, rather than
the individual fields.
* Change the pool_init() API accordingly, and adjust all callers; a sketch
of the new shape follows this list.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
to become available, but will still fail if it cannot allocate KVA
space for the pages. If this happens, carefully drain all pools using
the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
some pages, and use that information to make draining easier and more
efficient.
* Get rid of PR_URGENT. There was only one use of it, and it could be
dealt with by the caller.
From art@openbsd.org.
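The overall shape of the change is sketched below; structure and parameter
names are approximations, not the exact pool(9) definitions of that era:

    /* Sketch: all backend-allocator information grouped in one structure
     * that many pools can share and be drained through. */
    struct pool_allocator_sketch {
            void *(*pa_alloc)(struct pool *, int);    /* obtain a page of KVA/memory */
            void  (*pa_free)(struct pool *, void *);  /* return it */
            /* plus a list linking every pool using this allocator, so they
             * can all be drained when the backing map (e.g. kmem_map) runs
             * out of KVA */
    };

    /* pool_init() then takes a pointer to the allocator instead of separate
     * alloc/free function pointers; something along the lines of:
     *   pool_init(&somepool, size, align, ioff, flags, "wchan", &allocator);
     */
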
vnode, we should not attempt to remove the namecache entry. this is because
vget() can sleep (eg. if VXLOCK is set because the vnode is being reclaimed),
and so multiple threads can end up in this context at the same time.
if this happens, each thread ends up removing the cache entry, but
the code to remove the entry assumes that the entry is still valid.
so we should just leave the (now stale) entry in the cache.
if another thread finds the entry again before it is reused,
that thread will notice that the entry is stale and remove it safely.
fixes PR 14042.
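A simplified sketch of the code path being described (paraphrased, not the
literal vfs_cache.c code):

    /* In cache_lookup(), after finding an entry for the name: */
    error = vget(vp, flags);        /* can sleep, e.g. while VXLOCK is set */
    if (error != 0) {
            /* Do NOT remove the namecache entry here: several threads may
             * have slept in vget() for this same entry, and the removal
             * code assumes the entry is still valid.  Leave the stale entry
             * in place; the next thread to find it will see that it is
             * stale and remove it safely.  Then fall back to a real lookup. */
    }
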
mappings (vnode -> name) in the reverse mapping hash table. Without
this option, there is no change; only directories will be entered to
speed up getcwd. This is an option because it will cause getcwd
to hit longer hash chains, and at the moment its usefulness is
still limited.
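The idea, in outline (the names below are illustrative, not the actual
vfs_cache.c symbols): a second hash table is keyed by vnode, so a vnode can
be mapped back to its parent directory and component name without reading
the directory:

    /* Illustrative reverse-map entry: with the option enabled, every vnode
     * (not just directories) gets one, which is what "vnode -> name"
     * lookups such as getcwd walk. */
    struct ncrev_sketch {
            struct vnode *vp;        /* the vnode being named */
            struct vnode *dvp;       /* its parent directory */
            char          name[64];  /* the component name of vp within dvp */
    };
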
getnewvnode() has been changed to virtually guarantee that we'll have more
vnodes than "desired", so previously there would always be more vnodes
than namecache entries. this fixes PR 9792.
If the entry is found in name cache, cache_lookup() does all the
necessary locking now, simplifying the interface and making the
code easier to follow and maintain.
The code now also removes the entry from the cache when it's either invalid
(vget() fails) or the vnode has been recycled while waiting for the lock.
In that case, the unlock/relock of the directory vnode has been eliminated too.
Both changes could lead to a slight performance improvement in some cases.
Furthermore, an obscure bug has been found and eliminated for ISDOTDOT in the
lockparent && ISLASTCN case: if vget() succeeded but the re-lock of the
directory vnode did not, we returned the error with the '..' vnode still
locked.
For simplicity, cache_lookup() now returns 0 if a positive entry was found
in the cache, -1 if it was not found, and ENOENT or the error returned by the
locking functions in any other case.
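A hedged sketch of how a caller interprets this convention (the surrounding
lookup code is simplified, and the exact argument list may differ):

    /* Inside a filesystem's lookup routine: */
    error = cache_lookup(dvp, vpp, cnp);
    if (error >= 0) {
            /* 0: positive hit, with the necessary locking already done by
             * cache_lookup(); ENOENT or another error: negative hit or a
             * locking failure.  Either way the lookup is finished. */
            return (error);
    }
    /* -1: not in the cache; fall through to scanning the directory itself. */
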
Many thanks to Bill Studenmund and especially Charles Hannum
for invaluable advice and code to get this right.
Tested by: jdolecek
Reviewed by: wrstuden, mycroft
Add kernel implementation of getcwd() which uses this cache, falling
back to reading the filesystem on a cache miss.
Along for the ride: add new VOP_FSYNC flag FSYNC_RECLAIM indicating
that a reclaim is being done, so only a "shallow" fsync is needed.
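As a sketch, the reclaim-time call then looks something like the line below;
the exact VOP_FSYNC() argument list varies between NetBSD versions, so treat
this as an approximation:

    /* During vnode reclaim: ask the filesystem for a shallow sync only. */
    error = VOP_FSYNC(vp, cred, FSYNC_WAIT | FSYNC_RECLAIM, 0, 0);
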
The only benefit this provides is that we don't use kmem_map to map the memory
used for name cache entries (though this is a savings of 13 virtual pages on my
PPro) since vnodes are never freed (they have their own freelist).
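A minimal sketch of what that amounts to; the pool variable and wait-channel
name here are illustrative, and pool_init()'s argument list is approximate
(it has changed across pool(9) revisions):

    /* Initialization, done once when the namecache is set up: */
    pool_init(&namecache_pool, sizeof(struct namecache), 0, 0, 0,
        "ncachepl", NULL);

    /* Entries then come from the pool instead of the kernel malloc: */
    ncp = pool_get(&namecache_pool, PR_WAITOK);
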
suggested fix was correct: the suggested change to cache_lookup would cause
the counters to be incremented when doingcache was zero, and the suggested
change to cache_enter was prone to a race condition (e.g. if doingcache
became 1 between the cache_lookup and cache_enter).
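For illustration, the race being described would look roughly like this
(paraphrased; the argument lists are the historical three-argument forms and
may differ):

    if (doingcache)                          /* doingcache is 0 here ... */
            error = cache_lookup(dvp, vpp, cnp);
    /* ... some other context flips doingcache to 1 ... */
    if (doingcache)                          /* ... so this branch is taken */
            cache_enter(dvp, vp, cnp);       /* cache_enter() runs even though
                                              * the matching cache_lookup()
                                              * was skipped. */
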