run through copy-on-write. Call fscow_run() with valid data where possible.
The LP_UFSCOW hack is no longer needed to protect ffs_copyonwrite() against
endless recursion.
- Add a flag B_MODIFY to bread(), breada() and breadn(). If set the caller
intends to modify the buffer returned.
- Always run copy-on-write on buffers returned from ffs_balloc().
- Add new function ffs_getblk() that gets a buffer, assigns a new blkno,
may clear the buffer and runs copy-on-write. Process possible errors
from getblk() or fscow_run(). Part of PR kern/38664.
Welcome to 4.99.63
Reviewed by: YAMAMOTO Takashi <yamt@netbsd.org>
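As a hedged illustration of the B_MODIFY item above, here is a minimal
sketch of the read-for-modify pattern it enables. example_modify_block()
is a hypothetical caller, and the exact bread()/brelse() error-handling
conventions vary between NetBSD versions, so treat this as illustrative
rather than the committed code:

    #include <sys/param.h>
    #include <sys/buf.h>
    #include <sys/kauth.h>
    #include <sys/vnode.h>

    /*
     * Hypothetical caller: read a block we intend to modify.  Passing
     * B_MODIFY lets the buffer cache run copy-on-write (fscow_run())
     * on the buffer before we change its contents.
     */
    int
    example_modify_block(struct vnode *vp, daddr_t lbn, int size,
        kauth_cred_t cred)
    {
        struct buf *bp;
        int error;

        error = bread(vp, lbn, size, cred, B_MODIFY, &bp);
        if (error != 0)
            return error;    /* assumed: bread() released bp on error */

        /* ... modify bp->b_data here ... */

        return bwrite(bp);   /* bwrite() releases the buffer */
    }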
Make VFS hooks dynamic while we're here and say farewell to VFS_ATTACH and
VFS_HOOKS_ATTACH linksets.
As a consequence, most of the file systems can now be loaded as new style
modules.
Quick sanity check by ad@.
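As a sketch of what "new style module" means here, the linkset entry
gives way to a MODULE() declaration plus a modcmd handler. xyzfs is a
placeholder file system and the error-code choices are illustrative:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/module.h>
    #include <sys/mount.h>

    extern struct vfsops xyzfs_vfsops;   /* defined with the rest of the fs */

    /* Replaces the old VFS_ATTACH(xyzfs_vfsops) linkset entry. */
    MODULE(MODULE_CLASS_VFS, xyzfs, NULL);

    static int
    xyzfs_modcmd(modcmd_t cmd, void *arg)
    {
        switch (cmd) {
        case MODULE_CMD_INIT:
            return vfs_attach(&xyzfs_vfsops);   /* register on load */
        case MODULE_CMD_FINI:
            return vfs_detach(&xyzfs_vfsops);   /* fails while mounted */
        default:
            return ENOTTY;
        }
    }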
The general trend is to remove it from all kernel interfaces and
this is a start. In case the calling lwp is desired, curlwp should
be used.
quick consensus on tech-kern
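A hypothetical before/after of that pattern; example_ioctl() is not a
real interface, it just shows an lwp argument dropped in favour of
curlwp:

    #include <sys/param.h>
    #include <sys/lwp.h>
    #include <sys/proc.h>
    #include <sys/systm.h>
    #include <sys/vnode.h>

    /* Before: int example_ioctl(struct vnode *, u_long, void *,
     *             struct lwp *);
     * After: the lwp argument is gone from the interface.
     */
    int
    example_ioctl(struct vnode *vp, u_long cmd, void *data)
    {
        /* Only fetch the calling lwp in the rare spot that needs it. */
        struct lwp *l = curlwp;

        printf("example_ioctl: called from pid %d\n",
            (int)l->l_proc->p_pid);
        return 0;
    }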
before copying them out; rather, just use a single one. Further, follow
the example of tmpfs and others by simply allocating on the stack.
This should have the side-effect of silencing false Coverity reports like
CID 4559 and 4554.
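A hedged sketch of that pattern with invented names (example_rec,
example_fill): one struct on the stack, zeroed per iteration, copied
out record by record instead of a heap-allocated array:

    #include <sys/param.h>
    #include <sys/systm.h>

    struct example_rec {            /* illustrative record type */
        uint64_t er_id;
        uint64_t er_value;
    };

    static void
    example_fill(struct example_rec *rec, size_t i)
    {
        rec->er_id = i;             /* stand-in for the real data */
        rec->er_value = i * 2;
    }

    static int
    example_copyout_list(void *uaddr, size_t count)
    {
        struct example_rec rec;     /* single struct on the stack */
        size_t i;
        int error;

        for (i = 0; i < count; i++) {
            memset(&rec, 0, sizeof(rec));   /* no kernel stack leaks */
            example_fill(&rec, i);
            error = copyout(&rec, (char *)uaddr + i * sizeof(rec),
                sizeof(rec));
            if (error != 0)
                return error;
        }
        return 0;
    }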
knew what it was supposed to be used for and wrstuden gave a go-ahead
* while rototilling, convert file systems which went easily to
use VFS_PROTOS() instead of manually prototyping the methods
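For illustration, with xyzfs as a placeholder: the hand-written
prototypes collapse into one VFS_PROTOS() invocation (the exact set of
methods and signatures the macro expands to depends on the kernel
version):

    #include <sys/mount.h>

    /* Before: every vfs method prototyped by hand, e.g.:
     *   int xyzfs_mount(struct mount *, const char *, void *, size_t *);
     *   int xyzfs_start(struct mount *, int);
     *   ... one line per method ...
     */

    /* After: the macro from <sys/mount.h> emits the whole set. */
    VFS_PROTOS(xyzfs);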
need to understand the locking around that field. Instead of setting
B_ERROR, set b_error. b_error is 'owned' by whoever completes
the I/O request.
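A minimal sketch of the completion-side convention; example_iodone() is
hypothetical, but the field assignment mirrors the rule stated above:

    #include <sys/param.h>
    #include <sys/buf.h>

    /*
     * Whoever completes the I/O owns b_error, so it assigns the field
     * directly; readers no longer need locking rules for a B_ERROR bit.
     */
    static void
    example_iodone(struct buf *bp, int error)
    {
        if (error != 0)
            bp->b_error = error;  /* was: bp->b_flags |= B_ERROR; */
        biodone(bp);
    }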
fs code is a kernel buffer, pass through the length of the buffer as well.
Since the length of the userspace buffer isn't (yet) passed through the mount
system call, add a field to the vfsops structure containing the default length.
Split sys_mount() for calls from compat code.
Ride one of the recent kernel version changes - old fs LKMs will load, but
sys_mount() will reject any attempt to use them.
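A hedged sketch of the shape this gives a file system; xyzfs and its
args struct are invented, and the vfs_min_mount_data field name should
be checked against your sys/mount.h:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/mount.h>

    struct xyzfs_args {             /* illustrative mount argument struct */
        int xa_version;
    };

    static int
    xyzfs_mount(struct mount *mp, const char *path, void *data,
        size_t *data_len)
    {
        struct xyzfs_args *args = data;

        /* The caller now tells us how big the kernel copy really is. */
        if (args == NULL || *data_len < sizeof(*args))
            return EINVAL;
        /* ... validate arguments and mount ... */
        return 0;
    }

    struct vfsops xyzfs_vfsops = {
        .vfs_name = "xyzfs",
        .vfs_min_mount_data = sizeof(struct xyzfs_args), /* default length */
        .vfs_mount = xyzfs_mount,
        /* ... remaining methods ... */
    };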
1) Comply with the way buffercache(9) is intended to be used. Now we
read in single blocks of EFS_BB_SIZE, never taking in variable
length extents with a single bread() call.
2) Handle symlinks with more than one extent. There's no reason for
this to ever happen, but it's handled now.
3) Finally, add a hint to our iteration initialiser so we can start
from the desired offset, rather than naively looping through from
the beginning each time. Since we can binary search the correct
location quickly, this improves large sequential reads by about
40% with 128MB files. Improvement should increase with file size.
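To make point 1) concrete, here is a hedged sketch of reading an extent
one basic block at a time; the function and variable names are invented,
EFS_BB_SIZE is taken from the efs headers, and the bread() prototype
follows the same era as the changes above:

    #include <sys/param.h>
    #include <sys/buf.h>
    #include <sys/kauth.h>
    #include <sys/vnode.h>
    #include <fs/efs/efs.h>

    /*
     * Hypothetical reader: fetch an extent block by block so every
     * bread() call asks the buffer cache for a fixed EFS_BB_SIZE unit,
     * never a variable-length run.
     */
    static int
    example_read_extent(struct vnode *vp, daddr_t start, int nblocks)
    {
        struct buf *bp;
        int i, error;

        for (i = 0; i < nblocks; i++) {
            error = bread(vp, start + i, EFS_BB_SIZE, NOCRED, 0, &bp);
            if (error != 0)
                return error;
            /* ... consume bp->b_data ... */
            brelse(bp, 0);
        }
        return 0;
    }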
When reading a file, we would erroneously iterate to the next extent
before having filled the entire uio request. This led to unnecessary
extent iteration and excessive calls to efs_read.
Sequential read performance has doubled in the uncached case and
quadrupled when data is buffered.
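A hedged sketch of the corrected loop shape; everything here (the
helper, the bookkeeping variables) is invented to show the order of
operations, not the committed efs_read():

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/uio.h>

    /* Hypothetical: step the iterator to the next extent's data. */
    void example_next_extent(char **datap, size_t *leftp, bool *eofp);

    static int
    example_read(struct uio *uio, char *data, size_t extent_left, bool eof)
    {
        size_t len;
        int error = 0;

        while (uio->uio_resid > 0 && !eof) {
            if (extent_left == 0) {
                /* Only advance once this extent is fully consumed. */
                example_next_extent(&data, &extent_left, &eof);
                continue;
            }
            len = MIN(uio->uio_resid, extent_left);
            error = uiomove(data, len, uio);
            if (error != 0)
                break;
            data += len;
            extent_left -= len;
        }
        return error;
    }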