Do a little mbuf rework while here. Change all uses of MGET*(*, M_WAIT, *)
to m_get*(M_WAIT, *). These are not performance critical and making them
call m_get saves considerable space. Add an m_clget analogue of MCLGET and
make the corresponding change for M_WAIT uses.
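For illustration, a before/after sketch of the conversion using the standard
mbuf(9) interface; the wrapper functions here are made up:

    #include <sys/mbuf.h>

    /* Old style: MGET/MCLGET expand inline at every call site. */
    struct mbuf *
    example_old(void)
    {
            struct mbuf *m;

            MGET(m, M_WAIT, MT_DATA);
            MCLGET(m, M_WAIT);
            return m;
    }

    /* New style: plain function calls, so each call site is much smaller. */
    struct mbuf *
    example_new(void)
    {
            struct mbuf *m;

            m = m_get(M_WAIT, MT_DATA);
            m_clget(m, M_WAIT);
            return m;
    }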
Modify netinet, gem, fxp, tulip, nfs to support MBUFTRACE.
Begin to change netstat to use sysctl.
Change the malloc types into a structure, a pointer to which is passed around,
instead of an int constant. Allow the limit to be adjusted when the
malloc type is defined, or with a function call, as suggested by
Jonathan Stone.
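As a rough sketch, a consumer looks something like this under the new scheme
(the malloc type here is made up; the limit can also be given at definition
time or adjusted later with a function call, not shown):

    #include <sys/param.h>
    #include <sys/malloc.h>

    /* Defines and registers a struct malloc_type instead of an int constant. */
    MALLOC_DEFINE(M_EXAMPLE, "example", "example allocations");

    void *
    example_alloc(size_t len)
    {
            /* The type argument is now a pointer to the structure above. */
            return malloc(len, M_EXAMPLE, M_WAITOK);
    }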
Add an option to compile in only that required to support NFSv2 mounts.
It's not finished yet, but it already provides some 44k of savings in code
size on arm26. More savings, and some
documentation, are still to come.
When mounting an NFS filesystem, if the number of i/o threads is -1, meaning
it's never been set, then set it to 4. You can override this by setting it
to some other
number (including 0) before or after mounting, of course.
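Roughly, the mount path now does something like the following; the variable
and function names here are assumptions, not the actual ones:

    /* Tunable number of NFS asynchronous i/o threads; -1 means never set. */
    int example_nfs_niothreads = -1;

    static void
    example_mount_start_iothreads(void)
    {
            /* First mount with the tunable untouched: pick a sane default. */
            if (example_nfs_niothreads == -1)
                    example_nfs_niothreads = 4;
            /* ... start or stop i/o threads to match the count ... */
    }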
Thanks to whoever it was that suggested this on ICB... sorry I don't
remember who.
When closing a file, if its reference count is 0, wait for the use count
to drain before finishing the close.
This is necessary in order for multiple processes to safely share file
descriptor tables.
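The shape of the change, with made-up structure and field names standing in
for the real file/descriptor code:

    #include <sys/param.h>
    #include <sys/systm.h>

    struct example_file {
            int     ef_count;       /* references from descriptor tables */
            int     ef_usecount;    /* threads currently inside an operation */
    };

    static void
    example_closef(struct example_file *fp)
    {
            /*
             * Another process sharing the descriptor table may still be in
             * the middle of an operation on this file; sleep until the use
             * count drains (whoever drops it does the wakeup) before
             * tearing the file down.
             */
            while (fp->ef_usecount > 0)
                    (void)tsleep(&fp->ef_usecount, PSOCK, "closef", 0);
            /* ... release the file structure ... */
    }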
Remember every directory cookie that may be thrown back at us from userspace,
up to a size limit. This fixes the double entry problem.
* Split flags for internal and external use in the NFS mount structure.
* Fix some buffer structure fields that weren't being used correctly.
* Fix a missing directory cache invalidation call in nfs_open.
* The limit on NFS_DIRBLKSIZ is no longer needed, so it has been bumped to
the more reasonable value of 8k.
* Various other things that I forget, all related to the directory caching
somehow.
Split the client and server/shared data initialization into separate
functions, and call the server/shared initialization directly from main().
Problem noted in PR #1308 (Kenneth Stailey) and PR #1780 (Chris Demetriou).
Fix suggested in PR #1780 by Chris Demetriou, and munged a bit by me,
and OK'd by Frank van der Linden <fvdl@netbsd.org>.
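In outline, with invented function names:

    static void
    example_init_shared(void)
    {
            /* ... data needed by both the client and the server ... */
    }

    int
    main(void)
    {
            /*
             * The server/shared initialization is now called directly from
             * main() instead of relying on the client-side init having run.
             */
            example_init_shared();
            /* ... rest of server startup ... */
            return 0;
    }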
Improve the queuing algorithms used by NFS' asynchronous i/o. The
existing mechanism uses a global queue for some buffers and the
vp->b_dirtyblkhd queue for others. This turns sequential writes into
randomly ordered writes to the server, hurting both read and write
performance. The existing mechanism also copes badly with hung
servers, tending to block accesses to other servers when all the iods
are waiting for a hung server.
The new mechanism uses a queue for each mount point. All asynchronous
i/o goes through this queue, which preserves the ordering of requests.
A simple mechanism ensures that the iods are shared out fairly between
active mount points.
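A minimal sketch of the per-mount queue idea, with illustrative names rather
than the actual NFS structures:

    #include <sys/queue.h>
    #include <sys/systm.h>

    struct example_req {
            TAILQ_ENTRY(example_req) r_chain;       /* queue linkage */
            /* ... describes one asynchronous i/o request ... */
    };

    struct example_nfsmount {
            TAILQ_HEAD(, example_req) nm_bufq;      /* pending async i/o, FIFO */
            int nm_bufqlen;                         /* requests currently queued */
            int nm_bufqiods;                        /* iods attached to this mount */
    };

    /*
     * Queue one request for the iods; the queue is assumed to have been
     * TAILQ_INIT()ed at mount time.  Every request for a given mount goes
     * through the same FIFO, so sequential writes reach the server in
     * order; and because each mount has its own queue and its own count of
     * attached iods, a hung server only ties up the iods already bound to
     * it rather than the whole pool.
     */
    static void
    example_nfs_asyncio(struct example_nfsmount *nmp, struct example_req *req)
    {
            TAILQ_INSERT_TAIL(&nmp->nm_bufq, req, r_chain);
            nmp->nm_bufqlen++;
            wakeup(&nmp->nm_bufq);          /* wake an idle iod, if any */
    }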
Reviewed/integrated/approved by Frank van der Linden <fvdl@netbsd.org>