getnewvnode() has been changed to virtually guarantee that we'll have more
vnodes than "desired", so that, as before, there will always be more vnodes
than namecache entries. This fixes PR 9792.

If the entry is found in the name cache, cache_lookup() now does all the
necessary locking, simplifying the interface and making the code easier
to follow and maintain.
The code now also removes the entry from the cache when it is invalid
(vget() fails) or the vnode has been recycled while waiting for the lock.
In that case, the unlock/relock of the directory vnode has been eliminated
too. Both changes could lead to a slight performance improvement in some
cases.
Furthermore, an obscure bug has been found and eliminated for ISDOTDOT in
the lockparent && ISLASTCN case: if the vget() succeeded but the re-lock
of the directory vnode failed, we returned the error with the '..' vnode
still locked.
For simplicity, cache_lookup() now returns 0 if a positive entry was found
in the cache, -1 if nothing was found, and ENOENT or the error returned by
the locking functions in any other case.
Many thanks to Bill Studenmund and especially Charles Hannum
for invaluable advice and code to get this right.
Tested by: jdolecek
Reviewed by: wrstuden, mycroft

Add kernel implementation of getcwd() which uses this cache, falling
back to reading the filesystem on a cache miss.
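A hedged sketch of that cache-first walk follows; cache_revlookup() is the
reverse-lookup entry point this work added, while getcwd_walk_up() and the
simplified getcwd_scandir() signature are illustrative stand-ins for the
real code, with error handling abbreviated:

#include <sys/param.h>
#include <sys/vnode.h>

/* Stand-in for the fallback that scans the parent directory. */
int getcwd_scandir(struct vnode *, struct vnode **, char **, char *);

/*
 * Walk from lvp up toward rvp (normally the root), prepending one
 * "/name" component per step into the buffer that *bpp points into.
 */
static int
getcwd_walk_up(struct vnode *lvp, struct vnode *rvp, char **bpp,
    char *bufp)
{
    struct vnode *uvp;
    int error;

    while (lvp != rvp) {
        uvp = NULL;
        /* Fast path: ask the name cache for parent and name. */
        error = cache_revlookup(lvp, &uvp, bpp, bufp);
        if (error == -1)
            /* Miss: fall back to reading the parent directory. */
            error = getcwd_scandir(lvp, &uvp, bpp, bufp);
        if (error) {
            vrele(lvp);
            return error;
        }
        vrele(lvp);
        lvp = uvp;      /* continue one level up */
    }
    return 0;
}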
Along for the ride: add new VOP_FSYNC flag FSYNC_RECLAIM indicating
that a reclaim is being done, so only a "shallow" fsync is needed.
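Illustratively, the reclaim path can then ask for only a shallow sync; this
one-liner assumes the contemporary four-argument VOP_FSYNC() form, which
changed in later releases:

    /* In the reclaim path: the vnode is going away anyway, so the
     * flag lets the filesystem skip the expensive "deep" sync. */
    error = VOP_FSYNC(vp, cred, FSYNC_WAIT | FSYNC_RECLAIM, p);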
The only benefit this provides is that we no longer use kmem_map to map the
memory used for name cache entries (though this saves 13 virtual pages on
my PPro), since vnodes are never freed (they have their own freelist).
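As a hedged illustration of the allocation side, a dedicated pool is one
way to keep namecache-entry memory out of kmem_map; the call below follows
the modern pool(9) interface and is not necessarily the code as committed:

#include <sys/namei.h>
#include <sys/pool.h>

static struct pool namecache_pool;

void
namecache_pool_setup(void)
{
    /* Back namecache entries with their own pool so their pages
     * are not mapped through kmem_map. */
    pool_init(&namecache_pool, sizeof(struct namecache), 0, 0, 0,
        "ncachepl", &pool_allocator_nointr, IPL_NONE);
}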
The suggested fix was not correct: the suggested change to cache_lookup()
would cause the counters to be incremented when doingcache was zero, and
the suggested change to cache_enter() was prone to a race condition (e.g.
if doingcache became 1 between the cache_lookup() and the cache_enter()).
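For reference, this hedged excerpt, modeled on the 4.4BSD-derived code,
shows how consulting doingcache exactly once per lookup avoids both
problems: the counters are never touched when caching is off, and clearing
MAKEENTRY makes the later cache_enter() a no-op for this lookup, so a
mid-lookup change of doingcache cannot race:

int
cache_lookup(struct vnode *dvp, struct vnode **vpp,
    struct componentname *cnp)
{
    if (!doingcache) {
        /* Suppress the later cache_enter() for this lookup and
         * bail out before any statistics are updated. */
        cnp->cn_flags &= ~MAKEENTRY;
        return -1;      /* report a plain cache miss */
    }
    /* ... normal hash lookup, statistics, and locking ... */
}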