the directory cache as a translation table. See nfs_subs.c for comments.
Makes the code a bit more complex to look at than I would have liked,
but doesn't affect the speed of the default behavior.
* Optimize caching behavior a bit when buffers are invalidated.
* Save some RPCs in readdir operations by not bothering if there is
a small amount left to do to fill the buffer. It'll be done in the
next RPC with a larger chunk anyway. Wastes a bit of buffer space
but is faster.
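A minimal sketch of that early-out (the names and the threshold here are
invented for illustration, not the actual code):

    #include <stddef.h>

    #define DIRBLKSIZ     8192   /* hypothetical size of the buffer being filled */
    #define RPC_THRESHOLD  512   /* hypothetical "not worth another RPC" cutoff */

    /*
     * If only a small tail of the directory buffer is left unfilled,
     * skip the extra RPC: the wasted tail is cheaper than the round
     * trip, and the entries arrive in the next, larger request anyway.
     * Assumes bytes_filled <= DIRBLKSIZ.
     */
    static int
    readdir_should_stop(size_t bytes_filled)
    {
        return (DIRBLKSIZ - bytes_filled) < RPC_THRESHOLD;
    }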
* Make n_vattr an allocated vattr struct. This avoids nfsnode bloat,
and is friendlier to the malloc routines.
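Roughly, the change looks like this (structures abbreviated; a sketch
under assumed field names, not the real code):

    #include <stdlib.h>
    #include <string.h>

    struct vattr { long va_size; /* ... many more fields ... */ };

    struct nfsnode {
        struct vattr *n_vattr;   /* a pointer now, not an embedded struct */
        /* ... */
    };

    /* Allocating the vattr separately keeps struct nfsnode small and
     * hands malloc a friendlier (smaller, rounder) allocation size. */
    static int
    nfsnode_init(struct nfsnode *np)
    {
        np->n_vattr = malloc(sizeof(*np->n_vattr));
        if (np->n_vattr == NULL)
            return -1;
        memset(np->n_vattr, 0, sizeof(*np->n_vattr));
        return 0;
    }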
directory cookie that may be thrown back at us from userspace, up
to a size limit. Fixes double entry problem.
* Split flags for internal and external use in the NFS mount structure.
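For illustration, a split along these lines (the bit values and names
below are invented, not the actual flag definitions):

    /* External bits: settable from userspace via the mount arguments. */
    #define NFSMNT_SOFT      0x00000001
    #define NFSMNT_NQNFS     0x00000002
    #define NFSMNT_EXTERNAL  0x0000ffff

    /* Internal bits: private kernel state, never taken from userspace. */
    #define NFSMNT_HASAUTH   0x00010000
    #define NFSMNT_INTERNAL  0xffff0000

    /* Only the external bits may be replaced from the mount args;
     * the internal state survives untouched. */
    static unsigned int
    merge_mount_flags(unsigned int cur, unsigned int from_user)
    {
        return (cur & NFSMNT_INTERNAL) | (from_user & NFSMNT_EXTERNAL);
    }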
* Fix some buffer structure fields that weren't being used correctly.
* Fix missing directory cache inval call in nfs_open.
* Limit on NFS_DIRBLKSIZ no longer needed, bumped to the more reasonable
value of 8k.
* Various other things that I forget, all somehow related to the dir
caching.
From Olaf Seibert <rhialto@polder.ubc.kun.nl> (PR 3687)
* Make an attempt to check the maximum file size before attempting
a write to the server, since write RPCs will typically happen
asynchronously and the process would never see the error.
Fixes problems with files unexpectedly truncated at 4G; see the
sketch after this list.
* Pass up errors in nfs_writerpc correctly.
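A sketch of the pre-write check from the first item (nm_maxfilesize is
assumed to hold the server's limit, e.g. 2^32-1 bytes; the names are
illustrative only):

    #include <stdint.h>
    #include <errno.h>

    static uint64_t nm_maxfilesize = 0xffffffffULL;  /* assumed server limit */

    /*
     * Because the write RPC usually completes asynchronously, an error
     * from the server would never reach the writing process.  Checking
     * the range up front lets write(2) fail synchronously with EFBIG
     * instead of the file being silently truncated at the limit.
     */
    static int
    nfs_check_write_range(uint64_t offset, uint64_t len)
    {
        if (offset > nm_maxfilesize || len > nm_maxfilesize - offset)
            return EFBIG;
        return 0;
    }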
architectures), truncate them intelligently instead.
The truncation is done centrally in vnode_pager.c.
This prevents wrap-around effects when parts of large (>2^32 byte) files
are mmapped.
Don't allow mmapping above the numerical range of vm_offset_t.
This is considered a temporary solution until the vm system handles the
object sizes/offsets more cleanly.
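The idea, sketched with invented helper names (vm_offset_t taken as
32 bits, as on the affected architectures):

    #include <stdint.h>
    #include <errno.h>

    typedef uint32_t vm_offset_t;                  /* 32 bits here */
    #define VM_OFFSET_MAX ((uint64_t)(vm_offset_t)-1)

    /* Clamp an object size to what vm_offset_t can address, instead of
     * letting the high bits be chopped off and the offsets wrap. */
    static uint64_t
    clamp_object_size(uint64_t file_size)
    {
        return file_size > VM_OFFSET_MAX ? VM_OFFSET_MAX : file_size;
    }

    /* Refuse a mapping whose end would lie beyond vm_offset_t's range. */
    static int
    check_mmap_range(uint64_t off, uint64_t len)
    {
        if (len > VM_OFFSET_MAX || off > VM_OFFSET_MAX - len)
            return EOVERFLOW;
        return 0;
    }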
Improve the queuing algorithms used by NFS' asynchronous i/o. The
existing mechanism uses a global queue for some buffers and the
vp->b_dirtyblkhd queue for others. This turns sequential writes into
randomly ordered writes to the server, affecting both read and write
performance. The existing mechanism also copes badly with hung
servers, tending to block accesses to other servers when all the iods
are waiting for a hung server.
The new mechanism uses a queue for each mount point. All asynchronous
i/o goes through this queue which preserves the ordering of requests.
A simple mechanism ensures that the iods are shared out fairly between
active mount points.
Reviewed/integrated/approved by Frank van der Linden <fvdl@netbsd.org>
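In outline, the per-mount queue works something like this (the field
names follow the description above, but the code is only a sketch of
the mechanism, not the implementation):

    #include <sys/queue.h>

    struct buf {
        TAILQ_ENTRY(buf) b_freelist;      /* queue linkage */
        /* ... */
    };

    struct nfsmount {
        TAILQ_HEAD(, buf) nm_bufq;        /* per-mount async i/o queue;
                                             TAILQ_INIT()ed at mount time */
        int nm_bufqlen;                   /* requests waiting on nm_bufq */
        int nm_bufqiods;                  /* iods serving this mount */
        /* ... */
    };

    /*
     * Queue at the tail so requests reach the server in issue order,
     * and only claim another iod if this mount is below its fair share,
     * so one hung server can't soak up the whole iod pool.
     */
    static void
    nfs_asyncio_sketch(struct nfsmount *nmp, struct buf *bp,
                       int numiods, int nmounts)
    {
        int fair_share = numiods / (nmounts > 0 ? nmounts : 1);

        TAILQ_INSERT_TAIL(&nmp->nm_bufq, bp, b_freelist);
        nmp->nm_bufqlen++;

        if (nmp->nm_bufqiods < fair_share)
            nmp->nm_bufqiods++;           /* wake an idle iod here */
    }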
* Never change the NQNFS flag and/or version when just doing an update mount.
Fixes a problem that made diskless booting impossible under some
circumstances.
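A sketch of the guard (flag values invented for illustration):

    #define MNT_UPDATE     0x0001   /* this is an update, not a new mount */
    #define NFSMNT_NQNFS   0x0100

    /*
     * On an update mount, ignore any NQNFS change in the incoming
     * arguments and keep the protocol the mount was established with,
     * so e.g. a diskless root can't have its protocol flipped mid-run.
     */
    static unsigned int
    update_mount_flags(unsigned int cur, unsigned int args, int mntflags)
    {
        if (mntflags & MNT_UPDATE)
            args = (args & ~NFSMNT_NQNFS) | (cur & NFSMNT_NQNFS);
        return args;
    }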