# $NetBSD: TODO,v 1.10 2005/12/11 12:25:26 christos Exp $
- Lock audit. Need to check locking for multiprocessor case in particular.
- Get rid of lfs_segclean(); the kernel should clean a dirty segment IFF it
  has passed two checkpoints containing zero live bytes.
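
  One possible shape for that bookkeeping, as a compilable sketch.  The
  SEGUSE_EMPTY_LAST_CKP flag and this struct layout are invented for
  illustration (the real SEGUSE carries su_nbytes and a dirty flag, but
  no memory of the previous checkpoint):

    #include <stdint.h>

    #define SEGUSE_DIRTY          0x01  /* segment contains data */
    #define SEGUSE_EMPTY_LAST_CKP 0x02  /* zero live bytes at prior ckp */

    struct seguse {
            uint32_t su_nbytes;         /* live bytes in this segment */
            uint32_t su_flags;
    };

    /*
     * Run once per dirty segment at every checkpoint.  Returns nonzero
     * if the segment has now been empty across two consecutive
     * checkpoints and may be marked clean without lfs_segclean().
     */
    static int
    seg_reclaimable_at_checkpoint(struct seguse *su)
    {
            int reclaim = 0;

            if ((su->su_flags & SEGUSE_DIRTY) && su->su_nbytes == 0) {
                    if (su->su_flags & SEGUSE_EMPTY_LAST_CKP)
                            reclaim = 1;  /* empty at two ckps in a row */
                    else
                            su->su_flags |= SEGUSE_EMPTY_LAST_CKP;
            } else {
                    su->su_flags &= ~SEGUSE_EMPTY_LAST_CKP;
            }
            return reclaim;
    }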
- Now that our cache is basically all of physical memory, we need to make
  sure that segwrite is not starving other important things. Need a way
  to prioritize which blocks are most important to write, and write only
  those, saving the rest for later. Does this change our notion of what
  a checkpoint is?
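
  One way such a priority scheme might look; every name here is invented,
  and a partial flush under this scheme is only a checkpoint if the Ifile
  still describes a consistent state, which is exactly the question above:

    /* Rank dirty blocks so segwrite can flush the important ones first. */
    enum wprio {
            WPRIO_CKP = 0,  /* Ifile blocks, segment summaries */
            WPRIO_META,     /* inode blocks, indirect blocks */
            WPRIO_AGED,     /* data that has been dirty a long time */
            WPRIO_FRESH     /* recently dirtied data; can wait */
    };

    /*
     * Pick a cutoff from memory pressure: segwrite writes all classes
     * numerically at or below the value returned here.
     */
    static enum wprio
    wprio_cutoff(int free_pages, int lowat, int hiwat)
    {
            if (free_pages < lowat)
                    return WPRIO_FRESH;  /* severe: flush everything */
            if (free_pages < hiwat)
                    return WPRIO_AGED;   /* moderate: all but fresh data */
            return WPRIO_CKP;            /* background trickle only */
    }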
- Investigate alternate inode locking strategy: Inode locks are useful
  for locking against simultaneous changes to inode size (balloc,
  truncate, write) but because the assignment of disk blocks is also
  covered by the segment lock, we don't really need to pay attention to
  the inode lock when writing a segment, right? If this is true, the
  locking problem in lfs_{bmapv,markv} goes away and lfs_reserve can go,
  too.
- Get rid of DEV_BSIZE, pay attention to the media block size at mount time.
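
  A sketch of the mount-time query, modeled on what ffs_mountfs() already
  does; this assumes the DIOCGPART/struct partinfo interface and the
  proc-based VOP_IOCTL of kernels of this era:

    #include <sys/param.h>
    #include <sys/disklabel.h>
    #include <sys/fcntl.h>
    #include <sys/vnode.h>

    /* Ask the driver for the real sector size; fall back to the old
     * DEV_BSIZE assumption only if the query fails. */
    static int
    lfs_media_secsize(struct vnode *devvp, struct ucred *cred, struct proc *p)
    {
            struct partinfo dpart;

            if (VOP_IOCTL(devvp, DIOCGPART, (caddr_t)&dpart, FREAD,
                cred, p) != 0)
                    return DEV_BSIZE;
            return dpart.disklab->d_secsize;  /* what the media really uses */
    }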
- More fs ops need to call lfs_imtime. Which ones? (Blackwell et al., 1995)
- lfs_vunref_head exists so that vnodes loaded solely for cleaning can
  be put back on the *head* of the vnode free list. Make sure we
  actually do this, since we now take IN_CLEANING off during segment write.
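
  The intended effect, as a simplified sketch (the real function must
  also cope with VXLOCK, held buffers, and the rest of vrele()'s duties);
  the point is TAILQ_INSERT_HEAD rather than _TAIL, so cleaning-only
  vnodes are recycled first and don't push out vnodes with useful cache:

    static void
    lfs_vunref_head(struct vnode *vp)
    {
            simple_lock(&vp->v_interlock);
            if (--vp->v_usecount > 0) {
                    simple_unlock(&vp->v_interlock);
                    return;
            }
            /* Unused now; make it first in line for reuse. */
            simple_lock(&vnode_free_list_slock);
            TAILQ_INSERT_HEAD(&vnode_free_list, vp, v_freelist);
            simple_unlock(&vnode_free_list_slock);
            simple_unlock(&vp->v_interlock);
    }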
- The cleaner could be enhanced to be controlled from other processes,
  and possibly perform additional tasks (one possible control interface
  is sketched after this list):
  - Backups. At a minimum, turn the cleaner off and on to allow
    effective live backups. More aggressively, the cleaner itself could
    be the backup agent, and dump_lfs would merely be a controller.
  - Cleaning time policies. Be able to tweak the cleaner's thresholds
    to allow more thorough cleaning during policy-determined idle
    periods (regardless of actual idleness) or put off until later
    during short, intensive write periods.
  - File coalescing and placement. During periods we expect to be idle,
    coalesce fragmented files into one place on disk for better read
    performance. Ideally, move files that have not been accessed in a
    while to the extremes of the disk, thereby shortening seek times for
    files that are accessed more frequently (though how the cleaner
    should communicate "please put this near the beginning or end of the
    disk" to the kernel is a very good question; flags to lfs_markv?).
  - Versioning. When it cleans a segment it could write data for files
    that were less than n versions old to tape or elsewhere. Perhaps it
    could even write them back onto the disk, although that requires
    more thought (and kernel mods).
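
  The control interface mentioned above might look like this; every
  command, structure, and fcntl name here is invented, nothing like it
  exists yet:

    enum lfs_cleaner_cmd {
            LFS_CL_SUSPEND,     /* stop cleaning, e.g. during a backup */
            LFS_CL_RESUME,
            LFS_CL_SET_POLICY,  /* aggressive vs. lazy thresholds */
            LFS_CL_COALESCE     /* request a file-coalescing pass */
    };

    struct lfs_cleaner_ctl {
            enum lfs_cleaner_cmd cc_cmd;
            int cc_arg;         /* policy number, target segment, ... */
    };

    /*
     * A process would hand this to the filesystem with something like
     * a fcntl on the ifile descriptor:
     *
     *      struct lfs_cleaner_ctl cc = { LFS_CL_SUSPEND, 0 };
     *      fcntl(ifile_fd, LFCNCLEANERCTL, &cc);
     *
     * where LFCNCLEANERCTL is likewise hypothetical.
     */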
- Move lfs_countlocked() into vfs_bio.c, to replace count_locked_queue;
  perhaps keep the name, replace the function. Could it count referenced
  vnodes as well, if it was in vfs_subr.c instead?
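
  A sketch of the merged function as it might sit in vfs_bio.c: one walk
  of the locked queue serving both callers (count_locked_queue() today
  reports only a buffer count; lfs_countlocked() also wants bytes):

    void
    count_locked_queue(int *countp, long *bytesp)
    {
            struct buf *bp;
            int n = 0;
            long nbytes = 0;

            TAILQ_FOREACH(bp, &bufqueues[BQ_LOCKED], b_freelist) {
                    n++;
                    nbytes += bp->b_bufsize;
            }
            if (countp != NULL)
                    *countp = n;
            if (bytesp != NULL)
                    *bytesp = nbytes;
    }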
- Why not delete the lfs_bmapv call, just mark everything dirty that
  isn't deleted/truncated? Get some numbers about what percentage of
  the stuff that the cleaner thinks might be live is live. If it's
  high, get rid of lfs_bmapv.
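
  Hypothetical instrumentation for getting those numbers: tally, inside
  lfs_bmapv, how many blocks the cleaner presented versus how many were
  confirmed live (names invented):

    #include <stdint.h>

    struct bmapv_stats {
            uint64_t bs_presented;  /* blocks the cleaner thought live */
            uint64_t bs_live;       /* blocks lfs_bmapv confirmed live */
    };

    /* Live fraction in percent; if this stays near 100, the cleaner's
     * own accounting is accurate enough and lfs_bmapv can be dropped. */
    static unsigned
    bmapv_live_pct(const struct bmapv_stats *bs)
    {
            if (bs->bs_presented == 0)
                    return 100;
            return (unsigned)(bs->bs_live * 100 / bs->bs_presented);
    }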
- There is a nasty problem in that it may take *more* room to write the
  data to clean a segment than is returned by the new segment because of
  indirect blocks in segment 2 being dirtied by the data being copied
  into the log from segment 1. The suggested solution at this point is
  to detect it when we have no space left on the filesystem, write the
  extra data into the last segment (leaving no clean ones), make it a
  checkpoint and shut down the file system for fixing by a utility
  reading the raw partition. The argument is that this should never
  happen, and that it is practically impossible to fix in the kernel,
  since the cleaner would theoretically have to build a model of the
  entire filesystem in memory to detect the condition occurring. A
  file coalescing cleaner will help avoid the problem, and one that
  reads/writes from the raw disk could fix it.
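
  The detection half of the suggested solution, sketched with invented
  names (write_checkpoint() and alloc_clean_segment() are stand-ins for
  the real segment machinery):

    struct lfs_state {
            int nclean;   /* clean segments remaining */
            int ronly;    /* filesystem forced read-only for repair */
    };

    static void write_checkpoint(struct lfs_state *);   /* stand-in */
    static int alloc_clean_segment(struct lfs_state *); /* stand-in */

    static int
    lfs_newseg_for_cleaning(struct lfs_state *fs)
    {
            if (fs->nclean == 0) {
                    /* Out of clean segments while cleaning: take the
                     * emergency exit described above. */
                    write_checkpoint(fs);  /* last segment becomes a ckp */
                    fs->ronly = 1;         /* shut down for offline repair */
                    return -1;             /* effectively ENOSPC */
            }
            return alloc_clean_segment(fs);
    }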
- Need to keep vnode v_numoutput up to date for pending writes?
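
  The convention used elsewhere in the kernel, as a sketch (this assumes
  the old single-argument VOP_STRATEGY; lfs_issue_write is an invented
  name): count the write out before issuing it, and biodone()'s call to
  vwakeup() counts it back in, so fsync/vflushbuf can sleep until
  v_numoutput reaches zero. LFS must do the same for every write it
  schedules itself.

    static void
    lfs_issue_write(struct vnode *vp, struct buf *bp)
    {
            int s;

            s = splbio();
            vp->v_numoutput++;  /* pairs with vwakeup() at biodone() */
            splx(s);
            VOP_STRATEGY(bp);   /* fire and forget; B_ASYNC already set */
    }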
- If a file that is being executed is deleted, the version number isn't
  updated, and fsck_lfs has to figure this out; the case is the same as
  an inode that no directory references, so the file should be
  reattached into lost+found.
- Currently there's no notion of write error checking.
  + Failed data/inode writes should be rescheduled (kernel-level
    bad-block handling).
  + Failed superblock writes should cause selection of a new superblock
    for checkpointing.
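
  A sketch of the superblock failover, with simplified types; the
  on-disk superblock already records the replica addresses (cf.
  dlfs_sboffs[]), but the sb_state bookkeeping here is invented:

    #include <stdint.h>

    #define LFS_MAXNUMSB 10  /* as in the on-disk format */

    struct sb_state {
            int32_t sboffs[LFS_MAXNUMSB]; /* disk addresses of replicas */
            int     cur;                  /* replica used for checkpoints */
            int     bad[LFS_MAXNUMSB];    /* replicas that failed a write */
    };

    /* On a failed superblock write, mark the replica bad and move the
     * checkpoint target to the next good one. */
    static int
    sb_write_failed(struct sb_state *sb)
    {
            int i, next;

            sb->bad[sb->cur] = 1;
            for (i = 1; i < LFS_MAXNUMSB; i++) {
                    next = (sb->cur + i) % LFS_MAXNUMSB;
                    if (!sb->bad[next]) {
                            sb->cur = next;
                            return 0;
                    }
            }
            return -1;  /* no good replica left: filesystem in trouble */
    }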
- Future fantasies:
  - unrm, versioning
  - transactions
  - extended cleaner policies (hot/cold data, data placement)
- Problem with the concept of multiple buffer headers referencing the segment:
  Positives:
    Don't lock down 1 segment per file system of physical memory.
    Don't copy from buffers to segment memory.
    Don't tie down the bus to transfer 1M.
    Works on controllers that cannot handle large transfers.
    Disk can start writing immediately instead of waiting 1/2 rotation
      and the full transfer.
  Negatives:
    Have to do segment write then segment summary write, since the latter
      is what verifies that the segment is okay. (Is there another way
      to do this?)
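
  The ordering constraint behind that negative, as a sketch with
  invented names: the data portion of the segment goes out as N ordinary
  buffers, and only when the last one lands may the summary be written,
  because the summary is what declares the segment valid.

    struct seg_io {
            int si_pending;  /* data buffers still in flight */
            int si_error;    /* some data write failed */
    };

    /* Per-buffer completion callback: start the summary write only
     * after every data buffer has completed successfully. */
    static void
    seg_data_done(struct seg_io *si, int error, void (*write_summary)(void))
    {
            if (error)
                    si->si_error = 1;
            if (--si->si_pending == 0 && !si->si_error)
                    write_summary();  /* the second write, alas */
    }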
- The algorithm for selecting the disk addresses of the super-blocks
  has to be available to the user program which checks the file system.
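
  A placement rule that kernel, newfs_lfs, and fsck_lfs could all share;
  the spacing rule here is invented for illustration (LFS_MAXNUMSB is 10
  in the on-disk format, and today the addresses live in dlfs_sboffs[],
  which a checker cannot trust if the primary superblock is damaged):

    #include <stdint.h>

    #define LFS_MAXNUMSB 10

    /* Recompute the superblock replica addresses from the geometry
     * alone: one replica at the start of every (nseg/LFS_MAXNUMSB)-th
     * segment. */
    static void
    lfs_sb_addrs(uint32_t nseg, int32_t seg0_addr, uint32_t seg_size_frags,
        int32_t sboffs[LFS_MAXNUMSB])
    {
            uint32_t stride = nseg / LFS_MAXNUMSB;
            int i;

            if (stride == 0)
                    stride = 1;
            for (i = 0; i < LFS_MAXNUMSB; i++)
                    sboffs[i] = seg0_addr +
                        (int32_t)(i * stride * seg_size_frags);
    }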