be digging itself deeper into a hole, it forks off a subprocess
that locates files with too many discontinuities and rewrites them, if
there is enough room.
Optionally the user can manually coalesce files by running with "-c".
The recent change to lfs_markv is required for the coalescer to do anything.
All of "digging itself deeper", "too many discontinuities", and "enough room"
need to be better defined.
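A minimal sketch of what counting "discontinuities" might look like, assuming
the file's on-disk block addresses have already been fetched (e.g. via
lfs_bmapv) into an array; the threshold and helper names are hypothetical,
not the actual coalescer code:

    /*
     * Hypothetical sketch: count places where logically consecutive file
     * blocks are not physically adjacent on disk.  Addresses are assumed
     * to be in units where adjacent blocks differ by exactly 1.
     */
    #include <stdint.h>
    #include <stddef.h>

    static size_t
    count_discontinuities(const int64_t *addrs, size_t nblocks)
    {
        size_t i, breaks = 0;

        for (i = 1; i < nblocks; i++) {
            if (addrs[i] == 0 || addrs[i - 1] == 0)
                continue;               /* hole: not counted here */
            if (addrs[i] != addrs[i - 1] + 1)
                breaks++;
        }
        return breaks;
    }

    /*
     * Hypothetical policy for "too many discontinuities": more than one
     * break per 16 blocks on average.  The real threshold is one of the
     * things the text above says still needs to be defined.
     */
    static int
    worth_coalescing(const int64_t *addrs, size_t nblocks)
    {
        return nblocks > 0 &&
            count_discontinuities(addrs, nblocks) * 16 > nblocks;
    }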
because of the disklabel.
Fix a problem with inode block handling that sometimes caused the wrong
blocks to be read, resulting in either cleaning failures or panics on v2
file systems.
Kernels and tools understand both v1 and v2 filesystems; newfs_lfs
generates v2 by default. Changes for the v2 layout include:
- Segments of non-power-of-two size and arbitrary block offset, so these can
be matched to convenient physical characteristics of the partition (e.g.,
stripe or track size and offset).
- Address by fragment instead of by disk sector, paving the way for
non-512-byte-sector devices. In theory fragments can be as large
as you like, though in reality they must be smaller than MAXBSIZE.
- Use serial number and filesystem identifier to ensure that roll-forward
doesn't get old data and think it's new (see the sketch after this list).
Roll-forward is enabled for v2 filesystems, though not for v1 filesystems
by default.
- The inode free list is now a tailq, paving the way for undelete (undelete
is not yet implemented, but can be added later without further
non-backwards-compatible changes to the on-disk structures).
- Inode atime information is kept in the Ifile, instead of on the inode;
that is, the inode is never written *just* because atime was changed.
Because of this the inodes remain near the file data on the disk, rather
than wandering all over as the disk is read repeatedly. This speeds up
repeated reads by a small but noticeable amount.
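A minimal sketch of the kind of acceptance test the serial number and
filesystem identifier make possible during roll-forward; the structure and
field names here are assumptions for illustration, not the on-disk format:

    #include <stdint.h>

    struct ps_summary {                 /* partial-segment summary (assumed shape) */
        uint32_t ss_ident;              /* filesystem identifier at write time */
        uint64_t ss_serial;             /* monotonically increasing serial number */
    };

    /*
     * Accept a partial segment for roll-forward only if it was written by
     * this filesystem and is newer than everything already rolled forward.
     * Without these fields, stale data left over from an earlier life of
     * the segment could be mistaken for new writes.
     */
    static int
    rollforward_accepts(const struct ps_summary *ss, uint32_t fs_ident,
        uint64_t last_serial)
    {
        if (ss->ss_ident != fs_ident)
            return 0;                   /* written by some other filesystem */
        if (ss->ss_serial <= last_serial)
            return 0;                   /* old data: do not roll forward */
        return 1;
    }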
Other changes of note include:
- The ifile written by newfs_lfs can now be of arbitrary length; it is no
longer restricted to a single indirect block.
- Fixed an old bug where ctime was changed every time a vnode was created.
I need to look more closely to make sure that the times are only updated
during write(2) and friends, not after-the-fact during a segment write,
and certainly not by the cleaner.
avoiding needless looping (possibly infinite looping) on certain kinds of
errors.
Get rid of erroneous free() in error return from add_segment.
Patch from Jesse Off <joff@gci-net.com> (PR #11547).
Let lfs_cleanerd record its pid in /var/run like other daemons. Make
mount_lfs not start another cleaner when updating the mount, unless it is
being upgraded from read-only to read-write; when downgrading to read-only,
kill the cleaner using the recorded pids.
instead, if the segment doesn't have many live blocks, copy them to a
more appropriately sized chunk of memory and release the original.
This should prevent the cleaner from distending itself when cleaning many
segments with only one or two live blocks each, as when using the "-b" option.
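A minimal sketch of the buffer-shrinking idea, assuming the live blocks'
byte offsets within the segment are already known; the function and
parameter names are hypothetical, not the cleaner's actual code:

    #include <stdlib.h>
    #include <string.h>

    /*
     * `seg' holds an entire segment read from disk; `live' is an array of
     * `nlive' byte offsets of the live blocks, each `bsize' bytes long.
     * Returns a compact buffer holding only the live blocks, or NULL if
     * allocation fails (in which case the caller keeps `seg').
     */
    static void *
    compact_live_blocks(void *seg, const size_t *live, size_t nlive,
        size_t bsize)
    {
        unsigned char *dst;
        size_t i;

        dst = malloc(nlive * bsize);
        if (dst == NULL)
            return NULL;
        for (i = 0; i < nlive; i++)
            memcpy(dst + i * bsize, (unsigned char *)seg + live[i], bsize);
        free(seg);                      /* release the full-segment buffer */
        return dst;
    }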
Kernel:
* Add runtime quantity lfs_ravail, the number of disk blocks reserved
  for writing (see the sketch after this list). Writes to the filesystem
  first reserve a maximum number of blocks before the write is allowed
  to proceed; after the blocks are allocated, the reserved total is
  reduced by a corresponding amount. If the lfs_reserve function cannot
  immediately reserve the requested number of blocks, the inode is
  unlocked and the thread sleeps until the cleaner has made enough space
  available for the blocks to be reserved. In this way large files can
  be written to the filesystem (or smaller files can be written to a
  nearly-full but thoroughly clean filesystem) and the cleaner can still
  function properly.
* Remove explicit switching on dlfs_minfreeseg from the kernel code; it
is now merely a fs-creation parameter used to compute dlfs_avail and
dlfs_bfree (and used by fsck_lfs(8) to check their accuracy). Its
former role is better assumed by a properly computed dlfs_avail.
* Bounds-check inode numbers submitted through lfs_bmapv and lfs_markv.
This prevents a panic, but, if the cleaner is feeding the filesystem
the wrong data, you are still in a world of hurt.
* Cleanup: remove explicit references of DEV_BSIZE in favor of
btodb()/dbtob().
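A minimal sketch of the reservation pattern described in the first item
above, written as a user-level analogue with a mutex and condition variable
standing in for the kernel's inode lock and sleep/wakeup; lfs_ravail and
lfs_reserve behave along these lines, but the code here is illustrative only:

    #include <pthread.h>

    static pthread_mutex_t avail_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  avail_cv   = PTHREAD_COND_INITIALIZER;
    static long fs_avail;               /* free disk blocks (simplified) */
    static long fs_ravail;              /* blocks currently reserved for writes */

    /* Reserve the worst-case number of blocks a write might need. */
    static void
    reserve_blocks(long nblocks)
    {
        pthread_mutex_lock(&avail_lock);
        while (fs_avail - fs_ravail < nblocks)
            pthread_cond_wait(&avail_cv, &avail_lock);  /* wait for cleaner */
        fs_ravail += nblocks;
        pthread_mutex_unlock(&avail_lock);
    }

    /* As blocks are actually allocated, release the matching reservation. */
    static void
    unreserve_blocks(long nblocks)
    {
        pthread_mutex_lock(&avail_lock);
        fs_ravail -= nblocks;
        pthread_cond_broadcast(&avail_cv);
        pthread_mutex_unlock(&avail_lock);
    }

    /* The cleaner would call something like this after freeing segments. */
    static void
    cleaner_freed_blocks(long nblocks)
    {
        pthread_mutex_lock(&avail_lock);
        fs_avail += nblocks;
        pthread_cond_broadcast(&avail_cv);
        pthread_mutex_unlock(&avail_lock);
    }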
lfs_cleanerd:
* Make -n mean "send N segments' blocks through a single call to
lfs_markv". Previously it had meant "clean N segments though N calls
to lfs_markv, before looking again to see if more need to be cleaned".
The new behavior gives better packing of direct data on disk with as
little metadata as possible, largely alleviating the problem that the
cleaner can consume more disk through inefficient use of metadata than
it frees by moving dirty data away from clean "holes" to produce
entirely clean segments.
* Make -b mean "read as many segments as necessary to write N segments
of dirty data back to disk", rather than its former meaning of "read
as many segments as necessary to free N segments worth of space". The
new meaning, combined with the new -n behavior described above,
further aids in cleaning storage efficiency as entire segments can be
written at once, using as few blocks as possible for segment summaries
and inode blocks.
* Make the cleaner take note of segments which could not be cleaned due
  to error, and not attempt to clean them until they are entirely free of
  dirty blocks (see the sketch after this list). This prevents the case
  in which a cleanerd running
with -n 1 and without -b (formerly the default) would spin trying
repeatedly to clean a corrupt segment, while the remaining space
filled and deadlocked the filesystem.
* Update the lfs_cleanerd manual page to describe all the options,
including the changes mentioned here (in particular, the -b and -n
flags were previously undocumented).
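A minimal sketch of the "remember segments that failed to clean" bookkeeping
mentioned above; the table and its bound are assumptions, not the actual
lfs_cleanerd data structure:

    #include <stddef.h>

    #define MAXBAD  128                 /* assumed bound for the sketch */

    static unsigned long bad_segs[MAXBAD];
    static size_t nbad;

    /* Remember a segment that returned an error while being cleaned. */
    static void
    mark_segment_bad(unsigned long segnum)
    {
        if (nbad < MAXBAD)
            bad_segs[nbad++] = segnum;
    }

    /* Skip such segments when choosing cleaning candidates. */
    static int
    segment_is_bad(unsigned long segnum)
    {
        size_t i;

        for (i = 0; i < nbad; i++)
            if (bad_segs[i] == segnum)
                return 1;
        return 0;
    }

    /* Once a bad segment holds no dirty blocks at all, drop its entry. */
    static void
    forget_if_empty(unsigned long segnum, size_t dirty_bytes)
    {
        size_t i;

        if (dirty_bytes != 0)
            return;
        for (i = 0; i < nbad; i++)
            if (bad_segs[i] == segnum) {
                bad_segs[i] = bad_segs[--nbad];
                break;
            }
    }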
fsck_lfs:
* Check, and optionally fix, lfs_avail (to an exact figure) and
lfs_bfree (within a margin of error) in pass 5.
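A minimal sketch of what the pass 5 comparison amounts to: lfs_avail must
match the recomputed figure exactly, while lfs_bfree only needs to fall
within a margin of error; names and the fix-up mechanics are illustrative,
not fsck_lfs's actual code:

    #include <stdio.h>
    #include <stdlib.h>

    static void
    check_counts(long *sb_avail, long *sb_bfree,
        long computed_avail, long computed_bfree, long bfree_slop, int dofix)
    {
        /* lfs_avail is checked to an exact figure. */
        if (*sb_avail != computed_avail) {
            printf("AVAIL %ld SHOULD BE %ld\n", *sb_avail, computed_avail);
            if (dofix)
                *sb_avail = computed_avail;     /* would rewrite superblock */
        }
        /* lfs_bfree only has to be within a margin of error. */
        if (labs(*sb_bfree - computed_bfree) > bfree_slop) {
            printf("BFREE %ld SHOULD BE %ld\n", *sb_bfree, computed_bfree);
            if (dofix)
                *sb_bfree = computed_bfree;
        }
    }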
newfs_lfs:
* Reduce the default dlfs_minfreeseg to 1/20 of the total segments.
* Add a warning if the sgs disklabel field is 16 (the default for FFS'
cpg, but not usually desirable for LFS' sgs: 5--8 is a better range).
* Change the calculation of lfs_avail and lfs_bfree, corresponding to
the kernel changes mentioned above.
mount_lfs:
* Add -N and -b options to pass corresponding -n and -b options to
lfs_cleanerd.
* Default to calling lfs_cleanerd with "-b -n 4".
[All of these changes were largely tested in the 1.5 branch, with the
idea that they (along with previous un-pulled-up work) could be applied
to the branch while it was still in ALPHA2; however my test system has
experienced corruption on another filesystem (/dev/console has gone
missing :^), and, while I believe this unrelated to the LFS changes, I
cannot with good conscience request that the changes be pulled up.]
newfs_lfs gives lfs_minfreeseg a value of 1/8 of the total segments on
the disk, based on rough empirical data, but this should be refined in
the future.
is not monotonically increasing (e.g. clock is slaved to another system)
the optimization will result in segments being treated as corrupt
(uncleanable). If enough such "bad" segments were created, the cleaner would
clean continuously, and after some time the system would panic with "no
clean segments".
(Legitimately old partial-segments are relatively rare, and will have their
blocks culled by lfs_bmapv.)
contents, a substantial optimization if the workload is right: if enough
empty segments are available, the cleaner never has to read or write *any*
blocks except those on the Ifile. When the cleaner wakes up it marks all
empty segments clean before deciding whether any further segments need to
be cleaned.
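A minimal sketch of the empty-segment shortcut: any dirty segment with no
live bytes can be marked clean outright, without reading or writing its
contents. The structure here loosely mirrors the Ifile's segment-usage
entries, but the names are illustrative:

    #include <stddef.h>

    struct seg_usage {
        size_t  su_nbytes;              /* live bytes in the segment */
        int     su_dirty;               /* nonzero if the segment needs cleaning */
    };

    /* Returns the number of segments reclaimed without any data I/O. */
    static size_t
    mark_empty_segments_clean(struct seg_usage *segusage, size_t nsegs)
    {
        size_t i, reclaimed = 0;

        for (i = 0; i < nsegs; i++) {
            if (segusage[i].su_dirty && segusage[i].su_nbytes == 0) {
                segusage[i].su_dirty = 0;       /* no contents to move */
                reclaimed++;
            }
        }
        return reclaimed;
    }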
Fixed overflow bugs in the cleaner's handling of the cost/benefit metric
for empty segments.
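A minimal sketch of an overflow-safe cost/benefit computation in the spirit
of the classic LFS cleaning policy (free space times age, divided by cleaning
cost); promoting the intermediates to 64 bits is the sort of fix implied
here, though the actual metric and types used by the cleaner may differ:

    #include <stdint.h>
    #include <time.h>

    /*
     * benefit/cost = (1 - u) * age / (1 + u), with u = live/size; scaling
     * by the segment size gives (size - live) * age / (size + live).
     * Doing the arithmetic in 64 bits avoids the 32-bit overflow that
     * bites when a segment's age (seconds) is multiplied by its free
     * space (bytes), especially for entirely empty segments.
     */
    static uint64_t
    cost_benefit(uint32_t seg_size, uint32_t live_bytes, time_t mod_time,
        time_t now)
    {
        uint64_t freebytes = (uint64_t)seg_size - live_bytes;
        uint64_t age = (now > mod_time) ? (uint64_t)(now - mod_time) : 0;

        return freebytes * age / ((uint64_t)seg_size + live_bytes);
    }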
(Fix a sort-of-LP64 egcs printf warning.)
It's unfortunate that off_t and quad_t don't print with %q. I wonder
what would happen if alpha changed these from long -> long long? It's
the same actual size in bits either way.