Commit Graph

61 Commits

Author SHA1 Message Date
yamt
ce02ffbc68 introduce a new function, cache_lookup_raw(), for filesystems which
want more flexible namecache handling.
it just looks up a dnlc entry and vget()s the resulting vnode,
i.e. no automatic entry removal and no automatic vnode locking.

discussed on tech-kern@.
2004-06-27 08:50:44 +00:00
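
A sketch of the resulting split, seen from the caller's side; the prototypes are assumptions based on the description above, not copied from the tree:

    /* classic entry point: may remove entries and locks the result */
    int cache_lookup(struct vnode *dvp, struct vnode **vpp,
        struct componentname *cnp);

    /* assumed raw variant: dnlc lookup plus vget(), nothing else */
    int cache_lookup_raw(struct vnode *dvp, struct vnode **vpp,
        struct componentname *cnp);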
yamt
8a2c13021f cache_lookup: avoid grabbing two vnodes' v_interlock at the same time;
just hold a reference (usecount) to the vnode instead.
2004-06-19 18:49:47 +00:00
yamt
68b4772ef6 redo the previous change (rev.1.58; overwrite a duplicate entry rather than leave it)
differently, so that entries entered while we're doing pool_get() are
checked as well.  pointed out by Paul Kranenburg on source-changes@.
2004-05-07 12:05:41 +00:00
yamt
8d615f3e18 cache_enter: when we find a duplicate entry,
simply overwrite it rather than leaving a stale entry.
2004-05-06 22:02:02 +00:00
pk
5c36071518 cache_enter: concurrent lookups in the same directory may race for a
cache entry. Upon detection, free our tentative entry and return.
2004-05-02 12:00:34 +00:00
simonb
b5d0e6bf06 Initialise (most) pools from a link set instead of explicit calls
to pool_init.  Untouched pools are ones that are either in arch-specific
code, or aren't initialised during initial system startup.

 Convert struct session, ucred and lockf to pools.
2004-04-25 16:42:40 +00:00
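
In outline, the link-set approach looks like this (a sketch using the generic __link_set macros from <sys/cdefs.h>; the set name and descriptor layout here are illustrative, not the actual ones):

    /* hypothetical per-pool descriptor, collected into a link set */
    struct pool_init_desc {
        struct pool *pd_pool;       /* pool to initialise */
        size_t pd_size;             /* object size */
        const char *pd_wchan;       /* wait channel name */
    };
    __link_set_decl(pools, struct pool_init_desc);

    /* contributed at compile time, next to the pool's owner */
    static struct pool session_pool;
    static struct pool_init_desc session_desc =
        { &session_pool, 512 /* sizeof(struct session) */, "sessionpl" };
    __link_set_add_data(pools, session_desc);

    /* walked once at boot, replacing scattered pool_init() calls */
    struct pool_init_desc * const *pd;
    __link_set_foreach(pd, pools)
        pool_init((*pd)->pd_pool, (*pd)->pd_size, 0, 0, 0,
            (*pd)->pd_wchan, &pool_allocator_nointr);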
yamt
4972de4cca make cache_purge more controllable.
namely, allow the following operations:
	- purge only an entry specified by a component name.
	- purge only child entries.
	- purge only parent entries.
no objections on tech-kern@.
2004-04-05 10:20:52 +00:00
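
A sketch of the more selective interface this suggests; the flag and function names are assumptions modeled on the list above:

    /* assumed flags for a cache_purge1()-style entry point */
    #define PURGE_PARENTS   1   /* purge entries whose target is vp */
    #define PURGE_CHILDREN  2   /* purge entries whose directory is vp */

    /* cnp != NULL: purge only the entry for that component name */
    void cache_purge1(struct vnode *vp, const struct componentname *cnp,
        int flags);

    /* the old all-or-nothing behaviour, expressed in the new terms */
    #define cache_purge(vp) \
        cache_purge1((vp), NULL, PURGE_PARENTS | PURGE_CHILDREN)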
yamt
b3d9c9a976 remove an obsolete comment. pointed out by enami@ 2003-09-01 12:13:55 +00:00
yamt
a01f6f4ea5 - make this a bit more MP-friendly.
(although there are no actual changes while still under the kernel lock)
- remove a test that isn't meaningful anymore.
2003-08-08 20:19:56 +00:00
yamt
1f08924c29 arrange the namecache LRU before vget() (and before releasing namecache_slock),
since our namecache entry can go away while we're sleeping on the vnode.

the bug was pointed out by enami tsugutomo.
tested by Matthias Scheler.
PR/22363.
2003-08-08 20:18:19 +00:00
agc
aad01611e7 Move UCB-licensed code from 4-clause to 3-clause licence.
Patches provided by Joel Baker in PR 22364, verified by myself.
2003-08-07 16:26:28 +00:00
yamt
428f108569 remove remaining v_id. 2003-07-31 15:43:06 +00:00
yamt
2dc3c0a90c for NCHASH, obtain bits from the vnode pointer as well
to achieve a better hash distribution.
2003-07-31 15:14:08 +00:00
yamt
378afd773a when casting a pointer to an integer,
cast it to uintptr_t so that 64-bit archs will be happy.

pointed out by Juergen Hannken-Illjes.
2003-07-31 15:13:05 +00:00
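
For example (a generic illustration of the rule, not the actual diff): a pointer does not fit in a 32-bit integer on LP64 platforms, so the conversion has to go through uintptr_t.

    #include <stdint.h>

    /* (unsigned)p would truncate and warn on 64-bit archs;
       going via uintptr_t is well-defined everywhere */
    unsigned
    ptrbits(const void *p)
    {
        return (unsigned)((uintptr_t)p >> 3);
    }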
yamt
cc104d0635 eliminate v_id. 2003-07-30 12:10:57 +00:00
yamt
b0cdf0a26d maintain a list of the namecache entries attached to each vnode;
this makes vnodes freeable.
2003-07-30 12:09:46 +00:00
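
Conceptually (a minimal sketch with <sys/queue.h>; the field names are assumptions): each entry sits on two per-vnode lists, so a vnode being freed can find and invalidate every entry that mentions it, instead of relying on a v_id generation number.

    #include <sys/queue.h>

    struct vnode;

    struct namecache {
        LIST_ENTRY(namecache) nc_vlist;   /* on the target's list */
        LIST_ENTRY(namecache) nc_dvlist;  /* on the directory's list */
        struct vnode *nc_dvp;             /* directory vnode */
        struct vnode *nc_vp;              /* result; NULL if negative */
    };

    /* each vnode then carries two LIST_HEADs, one for each role */
    LIST_HEAD(nchead, namecache);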
fvdl
d5aece61d6 Back out the lwp/ktrace changes. They contained a lot of collateral damage,
and need to be examined and discussed more.
2003-06-29 22:28:00 +00:00
thorpej
a06b275edc Undo part of the ktrace/lwp changes. In particular:
* Remove the "lwp *" argument that was added to vget().  Turns out
  that nothing actually used it!
* Remove the "lwp *" arguments that were added to VFS_ROOT(), VFS_VGET(),
  and VFS_FHTOVP(); all they did was pass it to vget() (which, as noted
  above, didn't use it).
* Remove all of the "lwp *" arguments to internal functions that were added
  just to appease the above.
2003-06-29 18:43:21 +00:00
darrenr
960df3c8d1 Pass lwp pointers throughout the kernel, as required, so that the lwpid can
be inserted into ktrace records.  The general change has been to replace
"struct proc *" with "struct lwp *" in various function prototypes, pass
the lwp through and use l_proc to get the process pointer when needed.

Bump the kernel rev up to 1.6V
2003-06-28 14:20:43 +00:00
fvdl
dd522b0702 Fix a missing namecache_slock unlock. From Stephan Uphoff. 2003-05-21 09:36:06 +00:00
enami
ca4393664d ... and there's no need to acquire the lock while freeing the old hash,
which no one refers to any more.
2003-03-02 13:26:22 +00:00
jmc
ff58e08182 Move the simple_lock acquisition after the hashinit() calls, to avoid
possibly sleeping/malloc'ing with a simplelock held.
2003-02-20 02:49:51 +00:00
pk
87978ebf11 Make cache insertion, removal and lookup MP-safe. 2003-02-14 21:39:46 +00:00
thorpej
b193480908 Add extensible malloc types, adapted from FreeBSD. This turns
malloc types into a structure, a pointer to which is passed around,
instead of an int constant.  Allow the limit to be adjusted when the
malloc type is defined, or with a function call, as suggested by
Jonathan Stone.
2003-02-01 06:23:35 +00:00
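
The shape of the new interface, sketched with an illustrative type (modeled on FreeBSD's macro, as the message says; M_EXAMPLE is made up):

    /* before: malloc types were plain int constants such as M_TEMP */
    MALLOC_DEFINE(M_EXAMPLE, "example", "example structures");

    void *p = malloc(128, M_EXAMPLE, M_WAITOK);
    /* ... */
    free(p, M_EXAMPLE);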
matt
48bbf5f234 Use the queue macros from <sys/queue.h> instead of referring to the queue
members directly.  Use *_FOREACH whenever possible.
2002-09-04 01:32:31 +00:00
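
The shape of that change, on a generic hash-chain walk (illustrative, not the actual diff):

    /* before: walking the chain through the members by hand */
    for (ncp = nchashtbl[h].lh_first; ncp != NULL;
        ncp = ncp->nc_hash.le_next)
        /* ... */;

    /* after: the same loop via the <sys/queue.h> macro */
    LIST_FOREACH(ncp, &nchashtbl[h], nc_hash)
        /* ... */;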
thorpej
79111bb802 Fix signed/unsigned comparison warnings from GCC 3.3. 2002-08-26 01:21:58 +00:00
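
A typical instance of the warning and its fix (illustrative):

    #include <stddef.h>

    void
    clear(char *buf, size_t len)
    {
        size_t i;    /* was "int i": the int vs. size_t comparison
                        below draws a signed/unsigned warning */

        for (i = 0; i < len; i++)
            buf[i] = '\0';
    }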
thorpej
a180cee23b Pool deals fairly well with physical memory shortage, but it doesn't
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map).  Try to deal with this:

* Group all information about the backend allocator for a pool in a
  separate structure.  The pool references this structure, rather than
  the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
  to become available, but will still fail if it cannot allocate KVA
  space for the pages.  If this happens, carefully drain all pools using
  the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
  some pages, and use that information to make draining easier and more
  efficient.
* Get rid of PR_URGENT.  There was only one use of it, and it could be
  dealt with by the caller.

From art@openbsd.org.
2002-03-08 20:48:27 +00:00
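
Roughly, the grouped backend looks like this (a sketch; the field details, pool name, and wchan are assumptions based on the list above):

    struct pool_allocator {
        void    *(*pa_alloc)(struct pool *, int);   /* get a page */
        void    (*pa_free)(struct pool *, void *);  /* put a page */
        unsigned int pa_pagesz;                     /* page size */
        TAILQ_HEAD(, pool) pa_list;   /* all pools on this backend */
    };

    /* pool_init() now takes the allocator rather than bare functions */
    pool_init(&nc_pool, sizeof(struct namecache), 0, 0, 0,
        "ncachepl", &pool_allocator_nointr);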
enami
8d6b971560 KNF. 2001-12-10 01:49:26 +00:00
enami
df8cfd38a2 Test ".." correctly when creating reverse cache entry. 2001-12-08 04:09:56 +00:00
lukem
adc783d537 add RCSIDs 2001-11-12 15:25:01 +00:00
chs
a54f8441f8 in cache_lookup(), if we get a cache hit but then fail to vget() the found
vnode, we should not attempt to remove the namecache entry.  this is because
vget() can sleep (e.g. if VXLOCK is set because the vnode is being reclaimed),
and so multiple threads can end up in this context at the same time.
if this happens, each thread ends up removing the cache entry, but
the code to remove the entry assumes that the entry is still valid.
so we should just leave the (now stale) entry in the cache.
if another thread finds the entry again before it is reused,
that thread will notice that the entry is stale and remove it safely.
fixes PR 14042.
2001-10-27 04:53:38 +00:00
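
The resulting rule, in code form (a sketch; the miss label is hypothetical):

    vp = ncp->nc_vp;
    if (vget(vp, LK_EXCLUSIVE) != 0) {
        /*
         * vget() may have slept, so ncp may be stale or even
         * reused by now: leave the entry alone and treat this
         * as a plain cache miss.
         */
        goto miss;
    }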
chs
7bb91a0959 resize the namecache hash table also when desiredvnodes changes. 2001-09-24 06:01:13 +00:00
fvdl
2c310ee4d5 Depending on the NAMECACHE_ENTER_REVERSE option, always enter reverse
mappings (vnode -> name) in the reverse mapping hash table. Without
this option, there is no change; only directories will be entered to
speed up getcwd. This is an option because it will cause getcwd
to hit longer hash chains, and at the moment its usefulness is
still limited.
2001-03-29 22:39:23 +00:00
chs
55a751c9d5 add ddb commands "show uvmexp" and "show ncache".
the former used to be "call uvm_dump", the latter is new.
2000-11-24 07:25:50 +00:00
chs
ab077e1ed4 change cache_purgevfs() from O(N^2) to O(N).
use queue.h macros where possible.
2000-11-24 05:02:23 +00:00
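
The shape of that change (a sketch; cache_free() is a hypothetical helper): instead of restarting the scan from the head after every removal, which is quadratic when many entries match, walk the LRU once and pick up the successor before unlinking.

    struct namecache *ncp, *next;

    for (ncp = TAILQ_FIRST(&nclruhead); ncp != NULL; ncp = next) {
        next = TAILQ_NEXT(ncp, nc_lru);    /* grab before unlinking */
        if (ncp->nc_dvp != NULL && ncp->nc_dvp->v_mount == mp)
            cache_free(ncp);
    }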
ad
642267bcc7 Update for hashinit() change. 2000-11-08 14:28:12 +00:00
chs
9431f1857b change "nextvnodeid" from a global in namei.h to a static in
the one function that uses it.
2000-04-16 21:41:49 +00:00
chs
d0fb21715e limit the number of namecache entries to numvnodes rather than desiredvnodes.
getnewvnode() has been changed to virtually guarantee that we'll have more
vnodes than "desired", so previously there would always be more vnodes
than namecache entries.  this fixes PR 9792.
2000-04-16 21:39:57 +00:00
augustss
264f1d27c6 Get rid of register declarations. 2000-03-30 09:27:11 +00:00
soren
93c531f872 Tiny comment update. 2000-03-23 14:32:41 +00:00
mycroft
34b45d9bd7 Obey negative cache entries for intermediate directories during a create. 1999-09-10 23:24:23 +00:00
jdolecek
026b142488 Change cache_lookup() as per discussion on tech-kern & ICB:
If the entry is found in name cache, cache_lookup() does all the
necessary locking now, simplifying the interface and making the
code easier to follow and maintain.

The code now also removes the entry from the cache when it's either invalid
(vget() fails) or the vnode has been recycled while waiting for the lock.
In that case, the unlock/relock of the directory vnode has been eliminated too.
Both changes could lead to a slight performance improvement in some cases.

Furthermore, an obscure bug has been found and eliminated for ISDOTDOT in the
lockparent && ISLASTCN case: if the vget() succeeded but the re-lock
of the directory vnode did not, we returned the error with the '..' vnode still
locked.

For simplicity, cache_lookup() now returns 0 if a positive entry was found
in the cache, -1 if nothing was found, and ENOENT or the error returned by the
locking functions in any other case.

Many thanks to Bill Studenmund and especially Charles Hannum
for invaluable advice and code to get this right.

Tested by: jdolecek
Reviewed by: wrstuden, mycroft
1999-09-05 14:22:34 +00:00
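
From a filesystem's point of view, the simplified contract looks like this (a sketch of a caller, following the conventions quoted above):

    error = cache_lookup(dvp, vpp, cnp);
    if (error == 0)
        return (0);      /* positive hit; *vpp locked for the caller */
    if (error > 0)
        return (error);  /* ENOENT (negative hit) or a locking error */
    /* error == -1: not in the cache; do the real directory lookup */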
sommerfe
095cd96cd5 Change namei cache to record vnode->(parent,name) entries (for directories).
Add kernel implementation of getcwd() which uses this cache, falling
back to reading the filesystem on a cache miss.
Along for the ride: add new VOP_FSYNC flag FSYNC_RECLAIM indicating
that a reclaim is being done, so only a "shallow" fsync is needed.
1999-03-22 17:01:55 +00:00
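
In outline, the kernel getcwd() then works like this (a sketch of the approach, not the actual code; cache_revlookup() is the reverse-map query described above):

    /* walk from vp towards the root, prepending one name per step */
    while (vp != rootvnode) {
        if (cache_revlookup(vp, &dvp, &bp, path) == 0) {
            /* cache hit: dvp and the component name were found */
        } else {
            /* miss: look up ".." to get dvp, then scan it on
               disk for the entry whose vnode is vp */
        }
        vp = dvp;    /* continue with the parent */
    }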
thorpej
77ecf8273e Use the pool allocator and "nointr" pool page allocator for name cache entries.
The only benefit this provides is that we don't use kmem_map to map the memory
used for name cache entries (though this is a 13 virtual page savings on my
PPro) since vnodes are never freed (they have their own freelist).
1998-09-01 04:33:56 +00:00
perry
275d1554aa Abolition of bcopy, ovbcopy, bcmp, and bzero, phase one.
   bcopy(x, y, z) -> memcpy(y, x, z)
 ovbcopy(x, y, z) -> memmove(y, x, z)
    bcmp(x, y, z) -> memcmp(x, y, z)
    bzero(x, y)   -> memset(x, 0, y)
1998-08-04 04:03:10 +00:00
perry
730baa7431 fix sizeofs so they comply with the KNF style guide. yes, it is pedantic. 1998-07-31 22:50:48 +00:00
fvdl
e5bc90f40c Merge with Lite2 + local changes 1998-03-01 02:20:01 +00:00
chs
f64abc7b4c add flags arg to hashinit(), to pass to malloc(). 1998-02-07 02:44:44 +00:00
christos
e630447d8c First pass at prototyping 1996-02-04 02:17:43 +00:00
ws
166530c153 Distribute cache entries more evenly 1995-09-08 14:15:07 +00:00