Incorporate the enhanced ffs_dirpref() by Grigoriy Orlov, as found in
FreeBSD (three commits; the initial work, man page updates, and a fix
to ffs_reload()), with the following differences:
- Be consistent between newfs(8) and tunefs(8) as to the options which
set and control the tuning parameters for this work (avgfilesize & avgfpdir)
- Use u_int16_t instead of u_int8_t to keep track of the number of
contiguous directories (suggested by Chuck Silvers)
- Work within our FFS_EI framework
- Ensure that fs->fs_maxclusters and fs->fs_contigdirs don't point to
the same area of memory
The new algorithm yields a marked performance increase, especially when
performing tasks such as untarring pkgsrc.tar.gz.
The original FreeBSD commit messages are attached:
=====
mckusick 2001/04/10 01:39:00 PDT
Directory layout preference improvements from Grigoriy Orlov <gluk@ptci.ru>.
His description of the problem and solution follow. My own tests show
speedups on typical filesystem intensive workloads of 5% to 12% which
is very impressive considering the small amount of code change involved.
------
One day I noticed that some file operations run much faster on
small file systems than on big ones. I've looked at the ffs
algorithms, thought about them, and redesigned the dirpref algorithm.
First I want to describe the results of my tests. These results are old
and I have improved the algorithm since these tests were done. Nevertheless
they show how big the performance speedup may be. I have done two file/directory
intensive tests on two OpenBSD systems with the old and new dirpref algorithms.
The first test is "tar -xzf ports.tar.gz", the second is "rm -rf ports".
The ports.tar.gz file is the ports collection from the OpenBSD 2.8 release.
It contains 6596 directories and 13868 files. The test systems are:
1. Celeron-450, 128Mb, two IDE drives, the system at wd0, file system for
test is at wd1. Size of test file system is 8 Gb, number of cg=991,
size of cg is 8m, block size = 8k, fragment size = 1k OpenBSD-current
from Dec 2000 with BUFCACHEPERCENT=35
2. PIII-600, 128Mb, two IBM DTLA-307045 IDE drives at i815e, the system
at wd0, file system for test is at wd1. Size of test file system is 40 Gb,
number of cg=5324, size of cg is 8m, block size = 8k, fragment size = 1k
OpenBSD-current from Dec 2000 with BUFCACHEPERCENT=50
You can get more info about the test systems and methods at:
http://www.ptci.ru/gluk/dirpref/old/dirpref.html
Test Results

              tar -xzf ports.tar.gz               rm -rf ports
mode      old dirpref new dirpref speedup  old dirpref new dirpref speedup
First system
normal        667        472        1.41       477        331        1.44
async         285        144        1.98       130         14        9.29
sync          768        616        1.25       477        334        1.43
softdep       413        252        1.64       241         38        6.34
Second system
normal        329         81        4.06      263.5       93.5       2.81
async         302        25.7      11.75       112        2.26      49.56
sync          281        57.0       4.93       263        90.5       2.9
softdep       341        40.6       8.4        284        4.76      59.66

The "old dirpref" and "new dirpref" columns give the test time in seconds;
speedup is the ratio of old time to new time, i.e. old dirpref / new dirpref.
------
Algorithm description
The old dirpref algorithm is described in comments:
/*
* Find a cylinder to place a directory.
*
* The policy implemented by this algorithm is to select from
* among those cylinder groups with above the average number of
* free inodes, the one with the smallest number of directories.
*/
A new directory is allocated in a different cylinder group than its
parent directory, resulting in a directory tree that is spread across
all the cylinder groups. This spreading out results in non-optimal
access to the directories and files. When we have a small filesystem
it is not a problem, but when the filesystem is big the performance
degradation becomes very apparent.
What do I mean by a big file system?
1. A big filesystem is a filesystem which occupies 20-30 percent or more
of total drive space, i.e. the first and last cylinders are physically
located relatively far from each other.
2. It has a relatively large number of cylinder groups, for example
more cylinder groups than 50% of the buffers in the buffer cache.
The first results in long access times, while the second results in
many buffers being used by metadata operations. Such operations use
cylinder group blocks and on-disk inode blocks. The cylinder group
block (fs->fs_cblkno) contains struct cg, inode and block bit maps.
It is 2k in size for the default filesystem parameters. If new and
parent directories are located in different cylinder groups then the
system performs more input/output operations and uses more buffers.
On filesystems with many cylinder groups, lots of cache buffers are
used for metadata operations.
My solution for this problem is very simple: allocate many directories
in one cylinder group. I also take steps to ensure that the new allocation
method does not cause excessive fragmentation, and that directory inodes
are not located far from their files' inodes and data.
The algorithm is:
/*
* Find a cylinder group to place a directory.
*
* The policy implemented by this algorithm is to allocate a
* directory inode in the same cylinder group as its parent
* directory, but also to reserve space for its files inodes
* and data. Restrict the number of directories which may be
* allocated one after another in the same cylinder group
* without intervening allocation of files.
*
* If we allocate a first level directory then force allocation
* in another cylinder group.
*/
My early versions of dirpref gave good results for a wide range of
file operations and different filesystem capacities, except in one case:
applications that create their entire directory structure first
and only later fill this structure with files.
My solution for such and similar cases is to limit the number of
directories which may be created one after another in the same cylinder
group without intervening file creations. For this purpose, I allocate
an array of counters at mount time. This array is linked to the superblock
fs->fs_contigdirs[cg]. Each time a directory is created the counter
increases and each time a file is created the counter decreases. A 60Gb
filesystem with 8mb/cg requires 10kb of memory for the counters array.
maxcontigdirs is the maximum number of directories which may be created
without an intervening file creation. I found in my tests that the best
performance occurs when I restrict the number of directories in one cylinder
group such that all its files may be located in the same cylinder group.
There may be some deterioration in performance if all the file inodes
are in the same cylinder group as their containing directory, but their
data partially resides in a different cylinder group. The maxcontigdirs
value is calculated to try to prevent this condition. Since there is
no way to know how many files and directories will be allocated later
I added two optimization parameters in superblock/tunefs. They are:
int32_t fs_avgfilesize; /* expected average file size */
int32_t fs_avgfpdir; /* expected # of files per directory */
These parameters have reasonable defaults but may be tweaked for special
uses of a filesystem. They are only needed in rare cases, such as tuning
a filesystem being used to store a squid cache.
I have been using this algorithm for about 3 months. I have done
a lot of testing on filesystems with different capacities, average
file sizes, average numbers of files per directory, and so on. I think
this algorithm has no negative impact on filesystem performance. It
works better than the default one in all cases. The new dirpref
greatly improves untarring/removing/copying of big directories and
decreases load on cvs servers, among other things. The new dirpref doesn't
speed up compilation, but it doesn't slow it down either.
Obtained from: Grigoriy Orlov <gluk@ptci.ru>
=====
=====
iedowse 2001/04/23 17:37:17 PDT
Pre-dirpref versions of fsck may zero out the new superblock fields
fs_contigdirs, fs_avgfilesize and fs_avgfpdir. This could cause
panics if these fields were zeroed while a filesystem was mounted
read-only, and then remounted read-write.
Add code to ffs_reload() which copies the fs_contigdirs pointer
from the previous superblock, and reinitialises fs_avgf* if necessary.
Reviewed by: mckusick
=====
=====
nik 2001/04/10 03:36:44 PDT
Add information about the new options to newfs and tunefs which set the
expected average file size and number of files per directory. Could do
with some fleshing out.
=====
/*	$NetBSD: ffs_vfsops.c,v 1.85 2001/09/06 02:16:02 lukem Exp $	*/

/*
 * Copyright (c) 1989, 1991, 1993, 1994
 *	The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)ffs_vfsops.c	8.31 (Berkeley) 5/20/95
 */

#if defined(_KERNEL_OPT)
#include "opt_ffs.h"
#include "opt_quota.h"
#include "opt_compat_netbsd.h"
#include "opt_softdep.h"
#endif

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/namei.h>
#include <sys/proc.h>
#include <sys/kernel.h>
#include <sys/vnode.h>
#include <sys/socket.h>
#include <sys/mount.h>
#include <sys/buf.h>
#include <sys/device.h>
#include <sys/mbuf.h>
#include <sys/file.h>
#include <sys/disklabel.h>
#include <sys/ioctl.h>
#include <sys/errno.h>
#include <sys/malloc.h>
#include <sys/pool.h>
#include <sys/lock.h>
#include <sys/sysctl.h>

#include <miscfs/specfs/specdev.h>

#include <ufs/ufs/quota.h>
#include <ufs/ufs/ufsmount.h>
#include <ufs/ufs/inode.h>
#include <ufs/ufs/dir.h>
#include <ufs/ufs/ufs_extern.h>
#include <ufs/ufs/ufs_bswap.h>

#include <ufs/ffs/fs.h>
#include <ufs/ffs/ffs_extern.h>

/* how many times ffs_init() was called */
int ffs_initcount = 0;

extern struct lock ufs_hashlock;

extern struct vnodeopv_desc ffs_vnodeop_opv_desc;
extern struct vnodeopv_desc ffs_specop_opv_desc;
extern struct vnodeopv_desc ffs_fifoop_opv_desc;

const struct vnodeopv_desc * const ffs_vnodeopv_descs[] = {
	&ffs_vnodeop_opv_desc,
	&ffs_specop_opv_desc,
	&ffs_fifoop_opv_desc,
	NULL,
};

struct vfsops ffs_vfsops = {
	MOUNT_FFS,
	ffs_mount,
	ufs_start,
	ffs_unmount,
	ufs_root,
	ufs_quotactl,
	ffs_statfs,
	ffs_sync,
	ffs_vget,
	ffs_fhtovp,
	ffs_vptofh,
	ffs_init,
	ffs_done,
	ffs_sysctl,
	ffs_mountroot,
	ufs_check_export,
	ffs_vnodeopv_descs,
};

struct pool ffs_inode_pool;

/*
 * Called by main() when ffs is going to be mounted as root.
 */
int
ffs_mountroot()
{
	struct fs *fs;
	struct mount *mp;
	struct proc *p = curproc;	/* XXX */
	struct ufsmount *ump;
	int error;

	if (root_device->dv_class != DV_DISK)
		return (ENODEV);

	/*
	 * Get vnodes for rootdev.
	 */
	if (bdevvp(rootdev, &rootvp))
		panic("ffs_mountroot: can't setup bdevvp's");

	if ((error = vfs_rootmountalloc(MOUNT_FFS, "root_device", &mp))) {
		vrele(rootvp);
		return (error);
	}
	if ((error = ffs_mountfs(rootvp, mp, p)) != 0) {
		mp->mnt_op->vfs_refcount--;
		vfs_unbusy(mp);
		free(mp, M_MOUNT);
		vrele(rootvp);
		return (error);
	}
	simple_lock(&mountlist_slock);
	CIRCLEQ_INSERT_TAIL(&mountlist, mp, mnt_list);
	simple_unlock(&mountlist_slock);
	ump = VFSTOUFS(mp);
	fs = ump->um_fs;
	memset(fs->fs_fsmnt, 0, sizeof(fs->fs_fsmnt));
	(void)copystr(mp->mnt_stat.f_mntonname, fs->fs_fsmnt, MNAMELEN - 1, 0);
	(void)ffs_statfs(mp, &mp->mnt_stat, p);
	vfs_unbusy(mp);
	inittodr(fs->fs_time);
	return (0);
}

/*
 * VFS Operations.
 *
 * mount system call
 */
int
ffs_mount(mp, path, data, ndp, p)
	struct mount *mp;
	const char *path;
	void *data;
	struct nameidata *ndp;
	struct proc *p;
{
	struct vnode *devvp;
	struct ufs_args args;
	struct ufsmount *ump = NULL;
	struct fs *fs;
	size_t size;
	int error, flags;
	mode_t accessmode;

	error = copyin(data, (caddr_t)&args, sizeof (struct ufs_args));
	if (error)
		return (error);

#if !defined(SOFTDEP)
	mp->mnt_flag &= ~MNT_SOFTDEP;
#endif

	/*
	 * If updating, check whether changing from read-only to
	 * read/write; if there is no device name, that's all we do.
	 */
	if (mp->mnt_flag & MNT_UPDATE) {
		ump = VFSTOUFS(mp);
		fs = ump->um_fs;
		if (fs->fs_ronly == 0 && (mp->mnt_flag & MNT_RDONLY)) {
			flags = WRITECLOSE;
			if (mp->mnt_flag & MNT_FORCE)
				flags |= FORCECLOSE;
			if (mp->mnt_flag & MNT_SOFTDEP)
				error = softdep_flushfiles(mp, flags, p);
			else
				error = ffs_flushfiles(mp, flags, p);
			if (error == 0 &&
			    ffs_cgupdate(ump, MNT_WAIT) == 0 &&
			    fs->fs_clean & FS_WASCLEAN) {
				if (mp->mnt_flag & MNT_SOFTDEP)
					fs->fs_flags &= ~FS_DOSOFTDEP;
				fs->fs_clean = FS_ISCLEAN;
				(void) ffs_sbupdate(ump, MNT_WAIT);
			}
			if (error)
				return (error);
			fs->fs_ronly = 1;
			fs->fs_fmod = 0;
		}

		/*
		 * Flush soft dependencies if disabling it via an update
		 * mount. This may leave some items to be processed,
		 * so don't do this yet XXX.
		 */
		if ((fs->fs_flags & FS_DOSOFTDEP) &&
		    !(mp->mnt_flag & MNT_SOFTDEP) && fs->fs_ronly == 0) {
#ifdef notyet
			flags = WRITECLOSE;
			if (mp->mnt_flag & MNT_FORCE)
				flags |= FORCECLOSE;
			error = softdep_flushfiles(mp, flags, p);
			if (error == 0 && ffs_cgupdate(ump, MNT_WAIT) == 0)
				fs->fs_flags &= ~FS_DOSOFTDEP;
			(void) ffs_sbupdate(ump, MNT_WAIT);
#elif defined(SOFTDEP)
			mp->mnt_flag |= MNT_SOFTDEP;
#endif
		}

		/*
		 * When upgrading to a softdep mount, we must first flush
		 * all vnodes. (not done yet -- see above)
		 */
		if (!(fs->fs_flags & FS_DOSOFTDEP) &&
		    (mp->mnt_flag & MNT_SOFTDEP) && fs->fs_ronly == 0) {
#ifdef notyet
			flags = WRITECLOSE;
			if (mp->mnt_flag & MNT_FORCE)
				flags |= FORCECLOSE;
			error = ffs_flushfiles(mp, flags, p);
#else
			mp->mnt_flag &= ~MNT_SOFTDEP;
#endif
		}

		if (mp->mnt_flag & MNT_RELOAD) {
			error = ffs_reload(mp, ndp->ni_cnd.cn_cred, p);
			if (error)
				return (error);
		}
		if (fs->fs_ronly && (mp->mnt_flag & MNT_WANTRDWR)) {
			/*
			 * If upgrade to read-write by non-root, then verify
			 * that user has necessary permissions on the device.
			 */
			devvp = ump->um_devvp;
			if (p->p_ucred->cr_uid != 0) {
				vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY);
				error = VOP_ACCESS(devvp, VREAD | VWRITE,
				    p->p_ucred, p);
				VOP_UNLOCK(devvp, 0);
				if (error)
					return (error);
			}
			fs->fs_ronly = 0;
			fs->fs_clean <<= 1;
			fs->fs_fmod = 1;
			if ((fs->fs_flags & FS_DOSOFTDEP)) {
				error = softdep_mount(devvp, mp, fs,
				    p->p_ucred);
				if (error)
					return (error);
			}
		}
		if (args.fspec == 0) {
			/*
			 * Process export requests.
			 */
			return (vfs_export(mp, &ump->um_export, &args.export));
		}
		if ((mp->mnt_flag & (MNT_SOFTDEP | MNT_ASYNC)) ==
		    (MNT_SOFTDEP | MNT_ASYNC)) {
			printf("%s fs uses soft updates, ignoring async mode\n",
			    fs->fs_fsmnt);
			mp->mnt_flag &= ~MNT_ASYNC;
		}
	}
	/*
	 * Not an update, or updating the name: look up the name
	 * and verify that it refers to a sensible block device.
	 */
	NDINIT(ndp, LOOKUP, FOLLOW, UIO_USERSPACE, args.fspec, p);
	if ((error = namei(ndp)) != 0)
		return (error);
	devvp = ndp->ni_vp;

	if (devvp->v_type != VBLK) {
		vrele(devvp);
		return (ENOTBLK);
	}
	if (major(devvp->v_rdev) >= nblkdev) {
		vrele(devvp);
		return (ENXIO);
	}
	/*
	 * If mount by non-root, then verify that user has necessary
	 * permissions on the device.
	 */
	if (p->p_ucred->cr_uid != 0) {
		accessmode = VREAD;
		if ((mp->mnt_flag & MNT_RDONLY) == 0)
			accessmode |= VWRITE;
		vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY);
		error = VOP_ACCESS(devvp, accessmode, p->p_ucred, p);
		VOP_UNLOCK(devvp, 0);
		if (error) {
			vrele(devvp);
			return (error);
		}
	}
	if ((mp->mnt_flag & MNT_UPDATE) == 0) {
		error = ffs_mountfs(devvp, mp, p);
		if (!error) {
			ump = VFSTOUFS(mp);
			fs = ump->um_fs;
			if ((mp->mnt_flag & (MNT_SOFTDEP | MNT_ASYNC)) ==
			    (MNT_SOFTDEP | MNT_ASYNC)) {
				printf("%s fs uses soft updates, "
				    "ignoring async mode\n",
				    fs->fs_fsmnt);
				mp->mnt_flag &= ~MNT_ASYNC;
			}
		}
	}
	else {
		if (devvp != ump->um_devvp)
			error = EINVAL;	/* needs translation */
		else
			vrele(devvp);
	}
	if (error) {
		vrele(devvp);
		return (error);
	}
	(void) copyinstr(path, fs->fs_fsmnt, sizeof(fs->fs_fsmnt) - 1, &size);
	memset(fs->fs_fsmnt + size, 0, sizeof(fs->fs_fsmnt) - size);
	memcpy(mp->mnt_stat.f_mntonname, fs->fs_fsmnt, MNAMELEN);
	(void) copyinstr(args.fspec, mp->mnt_stat.f_mntfromname, MNAMELEN - 1,
	    &size);
	memset(mp->mnt_stat.f_mntfromname + size, 0, MNAMELEN - size);
	if (mp->mnt_flag & MNT_SOFTDEP)
		fs->fs_flags |= FS_DOSOFTDEP;
	else
		fs->fs_flags &= ~FS_DOSOFTDEP;
	if (fs->fs_fmod != 0) {	/* XXX */
		fs->fs_fmod = 0;
		if (fs->fs_clean & FS_WASCLEAN)
			fs->fs_time = time.tv_sec;
		else
			printf("%s: file system not clean (fs_clean=%x); please fsck(8)\n",
			    mp->mnt_stat.f_mntfromname, fs->fs_clean);
		(void) ffs_cgupdate(ump, MNT_WAIT);
	}
	return (0);
}

/*
 * Reload all incore data for a filesystem (used after running fsck on
 * the root filesystem and finding things to fix). The filesystem must
 * be mounted read-only.
 *
 * Things to do to update the mount:
 * 1) invalidate all cached meta-data.
 * 2) re-read superblock from disk.
 * 3) re-read summary information from disk.
 * 4) invalidate all inactive vnodes.
 * 5) invalidate all cached file data.
 * 6) re-read inode data for all active vnodes.
 */
int
ffs_reload(mountp, cred, p)
	struct mount *mountp;
	struct ucred *cred;
	struct proc *p;
{
	struct vnode *vp, *nvp, *devvp;
	struct inode *ip;
	void *space;
	struct buf *bp;
	struct fs *fs, *newfs;
	struct partinfo dpart;
	int i, blks, size, error;
	int32_t *lp;
	caddr_t cp;

	if ((mountp->mnt_flag & MNT_RDONLY) == 0)
		return (EINVAL);
	/*
	 * Step 1: invalidate all cached meta-data.
	 */
	devvp = VFSTOUFS(mountp)->um_devvp;
	vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY);
	error = vinvalbuf(devvp, 0, cred, p, 0, 0);
	VOP_UNLOCK(devvp, 0);
	if (error)
		panic("ffs_reload: dirty1");
	/*
	 * Step 2: re-read superblock from disk.
	 */
	if (VOP_IOCTL(devvp, DIOCGPART, (caddr_t)&dpart, FREAD, NOCRED, p) != 0)
		size = DEV_BSIZE;
	else
		size = dpart.disklab->d_secsize;
	error = bread(devvp, (ufs_daddr_t)(SBOFF / size), SBSIZE, NOCRED, &bp);
	if (error) {
		brelse(bp);
		return (error);
	}
	fs = VFSTOUFS(mountp)->um_fs;
	newfs = malloc(fs->fs_sbsize, M_UFSMNT, M_WAITOK);
	memcpy(newfs, bp->b_data, fs->fs_sbsize);
#ifdef FFS_EI
	if (VFSTOUFS(mountp)->um_flags & UFS_NEEDSWAP) {
		ffs_sb_swap((struct fs*)bp->b_data, newfs);
		fs->fs_flags |= FS_SWAPPED;
	}
#endif
	if (newfs->fs_magic != FS_MAGIC || newfs->fs_bsize > MAXBSIZE ||
	    newfs->fs_bsize < sizeof(struct fs)) {
		brelse(bp);
		free(newfs, M_UFSMNT);
		return (EIO);	/* XXX needs translation */
	}
	/*
	 * Copy pointer fields back into superblock before copying in	XXX
	 * new superblock. These should really be in the ufsmount.	XXX
	 * Note that important parameters (eg fs_ncg) are unchanged.
	 */
	newfs->fs_csp = fs->fs_csp;
	newfs->fs_maxcluster = fs->fs_maxcluster;
Incorporate the enhanced ffs_dirpref() by Grigoriy Orlov, as found in
FreeBSD (three commits; the initial work, man page updates, and a fix
to ffs_reload()), with the following differences:
- Be consistent between newfs(8) and tunefs(8) as to the options which
set and control the tuning parameters for this work (avgfilesize & avgfpdir)
- Use u_int16_t instead of u_int8_t to keep track of the number of
contiguous directories (suggested by Chuck Silvers)
- Work within our FFS_EI framework
- Ensure that fs->fs_maxclusters and fs->fs_contigdirs don't point to
the same area of memory
The new algorithm has a marked performance increase, especially when
performing tasks such as untarring pkgsrc.tar.gz, etc.
The original FreeBSD commit messages are attached:
=====
mckusick 2001/04/10 01:39:00 PDT
Directory layout preference improvements from Grigoriy Orlov <gluk@ptci.ru>.
His description of the problem and solution follow. My own tests show
speedups on typical filesystem intensive workloads of 5% to 12% which
is very impressive considering the small amount of code change involved.
------
One day I noticed that some file operations run much faster on
small file systems then on big ones. I've looked at the ffs
algorithms, thought about them, and redesigned the dirpref algorithm.
First I want to describe the results of my tests. These results are old
and I have improved the algorithm after these tests were done. Nevertheless
they show how big the perfomance speedup may be. I have done two file/directory
intensive tests on a two OpenBSD systems with old and new dirpref algorithm.
The first test is "tar -xzf ports.tar.gz", the second is "rm -rf ports".
The ports.tar.gz file is the ports collection from the OpenBSD 2.8 release.
It contains 6596 directories and 13868 files. The test systems are:
1. Celeron-450, 128Mb, two IDE drives, the system at wd0, file system for
test is at wd1. Size of test file system is 8 Gb, number of cg=991,
size of cg is 8m, block size = 8k, fragment size = 1k OpenBSD-current
from Dec 2000 with BUFCACHEPERCENT=35
2. PIII-600, 128Mb, two IBM DTLA-307045 IDE drives at i815e, the system
at wd0, file system for test is at wd1. Size of test file system is 40 Gb,
number of cg=5324, size of cg is 8m, block size = 8k, fragment size = 1k
OpenBSD-current from Dec 2000 with BUFCACHEPERCENT=50
You can get more info about the test systems and methods at:
http://www.ptci.ru/gluk/dirpref/old/dirpref.html
Test Results
tar -xzf ports.tar.gz rm -rf ports
mode old dirpref new dirpref speedup old dirprefnew dirpref speedup
First system
normal 667 472 1.41 477 331 1.44
async 285 144 1.98 130 14 9.29
sync 768 616 1.25 477 334 1.43
softdep 413 252 1.64 241 38 6.34
Second system
normal 329 81 4.06 263.5 93.5 2.81
async 302 25.7 11.75 112 2.26 49.56
sync 281 57.0 4.93 263 90.5 2.9
softdep 341 40.6 8.4 284 4.76 59.66
"old dirpref" and "new dirpref" columns give a test time in seconds.
speedup - speed increasement in times, ie. old dirpref / new dirpref.
------
Algorithm description
The old dirpref algorithm is described in comments:
/*
* Find a cylinder to place a directory.
*
* The policy implemented by this algorithm is to select from
* among those cylinder groups with above the average number of
* free inodes, the one with the smallest number of directories.
*/
A new directory is allocated in a different cylinder groups than its
parent directory resulting in a directory tree that is spreaded across
all the cylinder groups. This spreading out results in a non-optimal
access to the directories and files. When we have a small filesystem
it is not a problem but when the filesystem is big then perfomance
degradation becomes very apparent.
What I mean by a big file system ?
1. A big filesystem is a filesystem which occupy 20-30 or more percent
of total drive space, i.e. first and last cylinder are physically
located relatively far from each other.
2. It has a relatively large number of cylinder groups, for example
more cylinder groups than 50% of the buffers in the buffer cache.
The first results in long access times, while the second results in
many buffers being used by metadata operations. Such operations use
cylinder group blocks and on-disk inode blocks. The cylinder group
block (fs->fs_cblkno) contains struct cg, inode and block bit maps.
It is 2k in size for the default filesystem parameters. If new and
parent directories are located in different cylinder groups then the
system performs more input/output operations and uses more buffers.
On filesystems with many cylinder groups, lots of cache buffers are
used for metadata operations.
My solution to this problem is very simple: I allocate many directories
in one cylinder group. I also take some steps so that the new allocation
method does not cause excessive fragmentation, and so that directory
inodes are not located far from the inodes and data of their files.
The algorithm is:
/*
* Find a cylinder group to place a directory.
*
* The policy implemented by this algorithm is to allocate a
* directory inode in the same cylinder group as its parent
* directory, but also to reserve space for its files inodes
* and data. Restrict the number of directories which may be
* allocated one after another in the same cylinder group
* without intervening allocation of files.
*
* If we allocate a first level directory then force allocation
* in another cylinder group.
*/
My early versions of dirpref gave me good results for a wide range of
file operations and filesystem capacities, except in one case:
those applications that create their entire directory structure first
and only later fill this structure with files.
My solution for such cases is to limit the number of directories
which may be created one after another in the same cylinder
group without intervening file creations. For this purpose, I allocate
an array of counters at mount time. This array is linked to the superblock
fs->fs_contigdirs[cg]. Each time a directory is created the counter
increases and each time a file is created the counter decreases. A 60Gb
filesystem with 8mb/cg requires 10kb of memory for the counters array.
maxcontigdirs is the maximum number of directories which may be created
without an intervening file creation. I found in my tests that the best
performance occurs when I restrict the number of directories in one cylinder
group such that all its files may be located in the same cylinder group.
There may be some deterioration in performance if all the file inodes
are in the same cylinder group as their containing directory, but their
data partially resides in a different cylinder group. The maxcontigdirs
value is calculated to try to prevent this condition. Since there is
no way to know how many files and directories will be allocated later
I added two optimization parameters in superblock/tunefs. They are:
int32_t fs_avgfilesize; /* expected average file size */
int32_t fs_avgfpdir; /* expected # of files per directory */
These parameters have reasonable defaults but may be tweaked for special
uses of a filesystem. They are only necessary in rare cases, such as
tuning a filesystem that is used to store a squid cache.
I have been using this algorithm for about 3 months. I have done
a lot of testing on filesystems with different capacities, average
filesize, average number of files per directory, and so on. I think
this algorithm has no negative impact on filesystem performance. It
works better than the default one in all cases. The new dirpref
will greatly improve untarring/removing/copying of big directories,
decrease load on cvs servers and much more. The new dirpref doesn't
speed up compilation, but it doesn't slow it down either.
Obtained from: Grigoriy Orlov <gluk@ptci.ru>
=====
=====
iedowse 2001/04/23 17:37:17 PDT
Pre-dirpref versions of fsck may zero out the new superblock fields
fs_contigdirs, fs_avgfilesize and fs_avgfpdir. This could cause
panics if these fields were zeroed while a filesystem was mounted
read-only, and then remounted read-write.
Add code to ffs_reload() which copies the fs_contigdirs pointer
from the previous superblock, and reinitialises fs_avgf* if necessary.
Reviewed by: mckusick
=====
=====
nik 2001/04/10 03:36:44 PDT
Add information about the new options to newfs and tunefs which set the
expected average file size and number of files per directory. Could do
with some fleshing out.
=====
[ffs_reload() excerpt from the annotated source; per-line dates and
annotation columns removed:]

	newfs->fs_contigdirs = fs->fs_contigdirs;
	newfs->fs_ronly = fs->fs_ronly;
	memcpy(fs, newfs, (u_int)fs->fs_sbsize);
	if (fs->fs_sbsize < SBSIZE)
		bp->b_flags |= B_INVAL;
	brelse(bp);
	free(newfs, M_UFSMNT);
	mountp->mnt_maxsymlinklen = fs->fs_maxsymlinklen;
	ffs_oldfscompat(fs);
[ffs_reload() continues, followed by the start of ffs_mountfs();
annotation columns removed:]

	/* An old fsck may have zeroed these fields, so recheck them. */
	if (fs->fs_avgfilesize <= 0)
		fs->fs_avgfilesize = AVFILESIZ;
	if (fs->fs_avgfpdir <= 0)
		fs->fs_avgfpdir = AFPDIR;

	ffs_statfs(mountp, &mountp->mnt_stat, p);
	/*
	 * Step 3: re-read summary information from disk.
	 */
	blks = howmany(fs->fs_cssize, fs->fs_fsize);
	space = fs->fs_csp;
	for (i = 0; i < blks; i += fs->fs_frag) {
		size = fs->fs_bsize;
		if (i + fs->fs_frag > blks)
			size = (blks - i) * fs->fs_fsize;
		error = bread(devvp, fsbtodb(fs, fs->fs_csaddr + i), size,
		    NOCRED, &bp);
		if (error) {
			brelse(bp);
			return (error);
		}
#ifdef FFS_EI
		if (UFS_FSNEEDSWAP(fs))
			ffs_csum_swap((struct csum *)bp->b_data,
			    (struct csum *)space, size);
		else
#endif
			memcpy(space, bp->b_data, (size_t)size);
		space = (char *)space + size;
		brelse(bp);
	}
	if ((fs->fs_flags & FS_DOSOFTDEP))
		softdep_mount(devvp, mountp, fs, cred);
	/*
	 * We no longer know anything about clusters per cylinder group.
	 */
	if (fs->fs_contigsumsize > 0) {
		lp = fs->fs_maxcluster;
		for (i = 0; i < fs->fs_ncg; i++)
			*lp++ = fs->fs_contigsumsize;
	}

loop:
	simple_lock(&mntvnode_slock);
	for (vp = mountp->mnt_vnodelist.lh_first; vp != NULL; vp = nvp) {
		if (vp->v_mount != mountp) {
			simple_unlock(&mntvnode_slock);
			goto loop;
		}
		nvp = vp->v_mntvnodes.le_next;
		/*
		 * Step 4: invalidate all inactive vnodes.
		 */
		if (vrecycle(vp, &mntvnode_slock, p))
			goto loop;
		/*
		 * Step 5: invalidate all cached file data.
		 */
		simple_lock(&vp->v_interlock);
		simple_unlock(&mntvnode_slock);
		if (vget(vp, LK_EXCLUSIVE | LK_INTERLOCK))
			goto loop;
		if (vinvalbuf(vp, 0, cred, p, 0, 0))
			panic("ffs_reload: dirty2");
		/*
		 * Step 6: re-read inode data for all active vnodes.
		 */
		ip = VTOI(vp);
		error = bread(devvp, fsbtodb(fs, ino_to_fsba(fs, ip->i_number)),
		    (int)fs->fs_bsize, NOCRED, &bp);
		if (error) {
			brelse(bp);
			vput(vp);
			return (error);
		}
		cp = (caddr_t)bp->b_data +
		    (ino_to_fsbo(fs, ip->i_number) * DINODE_SIZE);
#ifdef FFS_EI
		if (UFS_FSNEEDSWAP(fs))
			ffs_dinode_swap((struct dinode *)cp,
			    &ip->i_din.ffs_din);
		else
#endif
			memcpy(&ip->i_din.ffs_din, cp, DINODE_SIZE);
		ip->i_ffs_effnlink = ip->i_ffs_nlink;
		brelse(bp);
		vput(vp);
		simple_lock(&mntvnode_slock);
	}
	simple_unlock(&mntvnode_slock);
	return (0);
}

/*
 * Common code for mount and mountroot
 */
int
ffs_mountfs(devvp, mp, p)
	struct vnode *devvp;
	struct mount *mp;
	struct proc *p;
{
	struct ufsmount *ump;
	struct buf *bp;
	struct fs *fs;
	dev_t dev;
	struct partinfo dpart;
	void *space;
	int blks;
	int error, i, size, ronly;
#ifdef FFS_EI
	int needswap;
#endif
	int32_t *lp;
	struct ucred *cred;
	u_int64_t maxfilesize;					/* XXX */
	u_int32_t sbsize;

	dev = devvp->v_rdev;
	cred = p ? p->p_ucred : NOCRED;
	/*
	 * Disallow multiple mounts of the same device.
	 * Disallow mounting of a device that is currently in use
	 * (except for root, which might share swap device for miniroot).
	 * Flush out any old buffers remaining from a previous use.
	 */
	if ((error = vfs_mountedon(devvp)) != 0)
		return (error);
	if (vcount(devvp) > 1 && devvp != rootvp)
		return (EBUSY);
	vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY);
	error = vinvalbuf(devvp, V_SAVE, cred, p, 0, 0);
	VOP_UNLOCK(devvp, 0);
	if (error)
		return (error);

	ronly = (mp->mnt_flag & MNT_RDONLY) != 0;
	error = VOP_OPEN(devvp, ronly ? FREAD : FREAD|FWRITE, FSCRED, p);
	if (error)
		return (error);
	if (VOP_IOCTL(devvp, DIOCGPART, (caddr_t)&dpart, FREAD, cred, p) != 0)
		size = DEV_BSIZE;
	else
		size = dpart.disklab->d_secsize;

	bp = NULL;
	ump = NULL;
	error = bread(devvp, (ufs_daddr_t)(SBOFF / size), SBSIZE, cred, &bp);
	if (error)
		goto out;

	fs = (struct fs*)bp->b_data;
	if (fs->fs_magic == FS_MAGIC) {
		sbsize = fs->fs_sbsize;
#ifdef FFS_EI
		needswap = 0;
	} else if (fs->fs_magic == bswap32(FS_MAGIC)) {
		sbsize = bswap32(fs->fs_sbsize);
		needswap = 1;
#endif
	} else {
		error = EINVAL;
		goto out;
	}
	if (sbsize > MAXBSIZE || sbsize < sizeof(struct fs)) {
		error = EINVAL;
		goto out;
	}

	fs = malloc((u_long)sbsize, M_UFSMNT, M_WAITOK);
	memcpy(fs, bp->b_data, sbsize);
#ifdef FFS_EI
	if (needswap) {
		ffs_sb_swap((struct fs*)bp->b_data, fs);
		fs->fs_flags |= FS_SWAPPED;
	}
#endif
	ffs_oldfscompat(fs);

	if (fs->fs_bsize > MAXBSIZE || fs->fs_bsize < sizeof(struct fs)) {
		error = EINVAL;
		goto out;
	}
	/* make sure cylinder group summary area is a reasonable size. */
	if (fs->fs_cgsize == 0 || fs->fs_cpg == 0 ||
	    fs->fs_ncg > fs->fs_ncyl / fs->fs_cpg + 1 ||
	    fs->fs_cssize >
	    fragroundup(fs, fs->fs_ncg * sizeof(struct csum))) {
		error = EINVAL;		/* XXX needs translation */
		goto out2;
	}
	/* XXX updating 4.2 FFS superblocks trashes rotational layout tables */
	if (fs->fs_postblformat == FS_42POSTBLFMT && !ronly) {
		error = EROFS;		/* XXX what should be returned? */
		goto out2;
	}

	ump = malloc(sizeof *ump, M_UFSMNT, M_WAITOK);
	memset((caddr_t)ump, 0, sizeof *ump);
	ump->um_fs = fs;
	if (fs->fs_sbsize < SBSIZE)
		bp->b_flags |= B_INVAL;
	brelse(bp);
	bp = NULL;
	fs->fs_ronly = ronly;
	if (ronly == 0) {
		fs->fs_clean <<= 1;
		fs->fs_fmod = 1;
	}
	size = fs->fs_cssize;
	blks = howmany(size, fs->fs_fsize);
	if (fs->fs_contigsumsize > 0)
		size += fs->fs_ncg * sizeof(int32_t);
[ffs_mountfs() continues; annotation columns removed:]

	size += fs->fs_ncg * sizeof(*fs->fs_contigdirs);
	space = malloc((u_long)size, M_UFSMNT, M_WAITOK);
	fs->fs_csp = space;
	for (i = 0; i < blks; i += fs->fs_frag) {
		size = fs->fs_bsize;
		if (i + fs->fs_frag > blks)
			size = (blks - i) * fs->fs_fsize;
		error = bread(devvp, fsbtodb(fs, fs->fs_csaddr + i), size,
		    cred, &bp);
		if (error) {
			free(fs->fs_csp, M_UFSMNT);
			goto out2;
		}
#ifdef FFS_EI
		if (needswap)
			ffs_csum_swap((struct csum *)bp->b_data,
			    (struct csum *)space, size);
		else
#endif
			memcpy(space, bp->b_data, (u_int)size);

		space = (char *)space + size;
		brelse(bp);
		bp = NULL;
	}
	if (fs->fs_contigsumsize > 0) {
2001-09-06 06:16:00 +04:00
		fs->fs_maxcluster = lp = space;
1994-12-14 16:03:35 +03:00
		for (i = 0; i < fs->fs_ncg; i++)
			*lp++ = fs->fs_contigsumsize;
2001-09-06 06:16:00 +04:00
		space = lp;
1994-12-14 16:03:35 +03:00
	}
2001-09-06 06:16:00 +04:00
	size = fs->fs_ncg * sizeof(*fs->fs_contigdirs);
	fs->fs_contigdirs = space;
	space = (char *)space + size;
	memset(fs->fs_contigdirs, 0, size);
	/* Compatibility for old filesystems - XXX */
	if (fs->fs_avgfilesize <= 0)
		fs->fs_avgfilesize = AVFILESIZ;
	if (fs->fs_avgfpdir <= 0)
		fs->fs_avgfpdir = AFPDIR;
1994-06-08 15:41:58 +04:00
	mp->mnt_data = (qaddr_t)ump;
	mp->mnt_stat.f_fsid.val[0] = (long)dev;
1995-11-12 01:00:15 +03:00
	mp->mnt_stat.f_fsid.val[1] = makefstype(MOUNT_FFS);
1994-06-08 15:41:58 +04:00
	mp->mnt_maxsymlinklen = fs->fs_maxsymlinklen;
2000-11-27 11:39:39 +03:00
	mp->mnt_fs_bshift = fs->fs_bshift;
	mp->mnt_dev_bshift = DEV_BSHIFT;	/* XXX */
1994-06-08 15:41:58 +04:00
	mp->mnt_flag |= MNT_LOCAL;
1998-03-18 18:57:26 +03:00
#ifdef FFS_EI
	if (needswap)
		ump->um_flags |= UFS_NEEDSWAP;
#endif
1994-06-08 15:41:58 +04:00
	ump->um_mountp = mp;
	ump->um_dev = dev;
	ump->um_devvp = devvp;
	ump->um_nindir = fs->fs_nindir;
2000-11-27 11:39:39 +03:00
	ump->um_lognindir = ffs(fs->fs_nindir) - 1;
1994-06-08 15:41:58 +04:00
	ump->um_bptrtodb = fs->fs_fsbtodb;
	ump->um_seqinc = fs->fs_frag;
	for (i = 0; i < MAXQUOTAS; i++)
		ump->um_quotas[i] = NULLVP;
1999-11-15 21:49:07 +03:00
	devvp->v_specmountpoint = mp;
1994-12-14 16:03:35 +03:00
	ump->um_savedmaxfilesize = fs->fs_maxfilesize;	/* XXX */
	maxfilesize = (u_int64_t)0x80000000 * fs->fs_bsize - 1;	/* XXX */
	if (fs->fs_maxfilesize > maxfilesize)	/* XXX */
		fs->fs_maxfilesize = maxfilesize;	/* XXX */
1999-11-15 21:49:07 +03:00
	if (ronly == 0 && (fs->fs_flags & FS_DOSOFTDEP)) {
		error = softdep_mount(devvp, mp, fs, cred);
		if (error) {
2001-09-02 05:58:30 +04:00
			free(fs->fs_csp, M_UFSMNT);
1999-11-15 21:49:07 +03:00
			goto out;
		}
	}
1994-06-08 15:41:58 +04:00
	return (0);
1998-03-18 18:57:26 +03:00
out2:
	free(fs, M_UFSMNT);
1994-06-08 15:41:58 +04:00
out:
1999-11-15 21:49:07 +03:00
	devvp->v_specmountpoint = NULL;
1994-06-08 15:41:58 +04:00
	if (bp)
		brelse(bp);
1999-10-17 03:53:26 +04:00
	vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY);
1994-12-14 16:03:35 +03:00
	(void)VOP_CLOSE(devvp, ronly ? FREAD : FREAD|FWRITE, cred, p);
1999-10-17 03:53:26 +04:00
	VOP_UNLOCK(devvp, 0);
1994-06-08 15:41:58 +04:00
	if (ump) {
		free(ump, M_UFSMNT);
		mp->mnt_data = (qaddr_t)0;
	}
	return (error);
}

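Under FFS_EI, the summary-loading loop in ffs_mountfs() above calls ffs_csum_swap() whenever the on-disk byte order differs from the host's, so a filesystem written on an opposite-endian machine can still be mounted. A simplified stand-alone illustration of that idea (the struct layout here is an abbreviated stand-in, not the full struct csum from <ufs/ffs/fs.h>):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Abbreviated stand-in for the on-disk cylinder-group summary record. */
struct csum {
	int32_t cs_ndir;	/* number of directories */
	int32_t cs_nbfree;	/* number of free blocks */
	int32_t cs_nifree;	/* number of free inodes */
	int32_t cs_nffree;	/* number of free frags */
};

/* Reverse the byte order of a 32-bit word. */
static uint32_t
swap32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0xff00) |
	    ((x << 8) & 0xff0000) | (x << 24);
}

/*
 * Sketch of what ffs_csum_swap() does: copy an array of summary
 * records, byte-swapping every 32-bit counter along the way.
 */
static void
csum_swap(const struct csum *src, struct csum *dst, size_t size)
{
	size_t i, n = size / sizeof(struct csum);

	for (i = 0; i < n; i++) {
		dst[i].cs_ndir = (int32_t)swap32((uint32_t)src[i].cs_ndir);
		dst[i].cs_nbfree = (int32_t)swap32((uint32_t)src[i].cs_nbfree);
		dst[i].cs_nifree = (int32_t)swap32((uint32_t)src[i].cs_nifree);
		dst[i].cs_nffree = (int32_t)swap32((uint32_t)src[i].cs_nffree);
	}
}
```

Because the swap is its own inverse, applying it twice returns the original values — which is why the same routine serves both the read and write paths.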
/*
 * Sanity checks for old file systems.
 *
 * XXX - goes away some day.
 */
1996-02-10 01:22:18 +03:00
int
1994-06-08 15:41:58 +04:00
ffs_oldfscompat(fs)
	struct fs *fs;
{
	int i;

	fs->fs_npsect = max(fs->fs_npsect, fs->fs_nsect);	/* XXX */
	fs->fs_interleave = max(fs->fs_interleave, 1);		/* XXX */
	if (fs->fs_postblformat == FS_42POSTBLFMT)		/* XXX */
		fs->fs_nrpos = 8;				/* XXX */
	if (fs->fs_inodefmt < FS_44INODEFMT) {			/* XXX */
1994-10-28 22:59:21 +03:00
		u_int64_t sizepb = fs->fs_bsize;		/* XXX */
1994-06-08 15:41:58 +04:00
								/* XXX */
		fs->fs_maxfilesize = fs->fs_bsize * NDADDR - 1;	/* XXX */
		for (i = 0; i < NIADDR; i++) {			/* XXX */
			sizepb *= NINDIR(fs);			/* XXX */
			fs->fs_maxfilesize += sizepb;		/* XXX */
		}						/* XXX */
		fs->fs_qbmask = ~fs->fs_bmask;			/* XXX */
		fs->fs_qfmask = ~fs->fs_fmask;			/* XXX */
	}							/* XXX */
	return (0);
}
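The loop in ffs_oldfscompat() derives fs_maxfilesize from the NDADDR direct block pointers plus the capacity of each of the NIADDR indirect levels. The same computation as a stand-alone sketch, using the classic 8 KB-block geometry (2048 block pointers per indirect block) as an example — the function name and parameters here are illustrative, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

#define NDADDR	12	/* direct block pointers in a UFS1 inode */
#define NIADDR	3	/* levels of indirect blocks */

/*
 * Largest byte offset addressable through NDADDR direct pointers
 * plus single, double, and triple indirect blocks, where each
 * indirect block holds `nindir` pointers.
 */
static uint64_t
ffs_max_file_size(uint64_t bsize, uint64_t nindir)
{
	uint64_t size = bsize * NDADDR - 1;	/* direct blocks */
	uint64_t sizepb = bsize;		/* bytes per pointer, per level */
	int i;

	for (i = 0; i < NIADDR; i++) {
		sizepb *= nindir;		/* each level multiplies capacity */
		size += sizepb;
	}
	return size;
}
```

For bsize = 8192 and nindir = 2048 this comes to roughly 64 TB, well above the 2^31-blocks clamp that the mount path applies immediately afterwards (maxfilesize = 0x80000000 * fs_bsize - 1), which is why that clamp exists.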
/*
 * unmount system call
 */
int
ffs_unmount(mp, mntflags, p)
	struct mount *mp;
	int mntflags;
	struct proc *p;
{
2000-03-30 16:41:09 +04:00
	struct ufsmount *ump;
	struct fs *fs;
1994-12-14 16:03:35 +03:00
	int error, flags;
1994-06-08 15:41:58 +04:00

	flags = 0;
1995-01-18 09:19:49 +03:00
	if (mntflags & MNT_FORCE)
1994-06-08 15:41:58 +04:00
		flags |= FORCECLOSE;
1999-11-15 21:49:07 +03:00
	if (mp->mnt_flag & MNT_SOFTDEP) {
		if ((error = softdep_flushfiles(mp, flags, p)) != 0)
			return (error);
	} else {
		if ((error = ffs_flushfiles(mp, flags, p)) != 0)
			return (error);
	}
1994-06-08 15:41:58 +04:00
	ump = VFSTOUFS(mp);
	fs = ump->um_fs;
1995-04-13 01:21:00 +04:00
	if (fs->fs_ronly == 0 &&
	    ffs_cgupdate(ump, MNT_WAIT) == 0 &&
	    fs->fs_clean & FS_WASCLEAN) {
2000-06-16 02:35:37 +04:00
		if (mp->mnt_flag & MNT_SOFTDEP)
			fs->fs_flags &= ~FS_DOSOFTDEP;
1995-04-13 01:21:00 +04:00
		fs->fs_clean = FS_ISCLEAN;
		(void) ffs_sbupdate(ump, MNT_WAIT);
	}
1999-10-20 18:32:09 +04:00
	if (ump->um_devvp->v_type != VBAD)
1999-11-15 21:49:07 +03:00
		ump->um_devvp->v_specmountpoint = NULL;
1999-10-17 03:53:26 +04:00
	vn_lock(ump->um_devvp, LK_EXCLUSIVE | LK_RETRY);
1994-12-14 16:03:35 +03:00
	error = VOP_CLOSE(ump->um_devvp, fs->fs_ronly ? FREAD : FREAD|FWRITE,
1994-06-08 15:41:58 +04:00
	    NOCRED, p);
1999-10-17 03:53:26 +04:00
	vput(ump->um_devvp);
2001-09-02 05:58:30 +04:00
	free(fs->fs_csp, M_UFSMNT);
1994-06-08 15:41:58 +04:00
	free(fs, M_UFSMNT);
	free(ump, M_UFSMNT);
	mp->mnt_data = (qaddr_t)0;
	mp->mnt_flag &= ~MNT_LOCAL;
	return (error);
}
/*
 * Flush out all the files in a filesystem.
 */
1996-02-10 01:22:18 +03:00
int
1994-06-08 15:41:58 +04:00
ffs_flushfiles(mp, flags, p)
2000-03-30 16:41:09 +04:00
	struct mount *mp;
1994-06-08 15:41:58 +04:00
	int flags;
	struct proc *p;
{
	extern int doforce;
2000-03-30 16:41:09 +04:00
	struct ufsmount *ump;
1996-02-10 01:22:18 +03:00
	int error;
1994-06-08 15:41:58 +04:00

	if (!doforce)
		flags &= ~FORCECLOSE;
	ump = VFSTOUFS(mp);
#ifdef QUOTA
	if (mp->mnt_flag & MNT_QUOTA) {
1996-02-10 01:22:18 +03:00
		int i;

		if ((error = vflush(mp, NULLVP, SKIPSYSTEM|flags)) != 0)
1994-06-08 15:41:58 +04:00
			return (error);
		for (i = 0; i < MAXQUOTAS; i++) {
			if (ump->um_quotas[i] == NULLVP)
				continue;
			quotaoff(p, mp, i);
		}
		/*
		 * Here we fall through to vflush again to ensure
		 * that we have gotten rid of all the system vnodes.
		 */
	}
#endif
1999-11-15 21:49:07 +03:00
	/*
	 * Flush all the files.
	 */
1994-06-08 15:41:58 +04:00
	error = vflush(mp, NULLVP, flags);
1999-11-15 21:49:07 +03:00
	if (error)
		return (error);
	/*
	 * Flush filesystem metadata.
	 */
	vn_lock(ump->um_devvp, LK_EXCLUSIVE | LK_RETRY);
2000-09-20 02:04:08 +04:00
	error = VOP_FSYNC(ump->um_devvp, p->p_ucred, FSYNC_WAIT, 0, 0, p);
1999-11-15 21:49:07 +03:00
	VOP_UNLOCK(ump->um_devvp, 0);
1994-06-08 15:41:58 +04:00
	return (error);
}
|
|
|
|
|
|
|
|
/*
 * Get file system statistics.
 */
int
ffs_statfs(mp, sbp, p)
	struct mount *mp;
	struct statfs *sbp;
	struct proc *p;
{
	struct ufsmount *ump;
	struct fs *fs;

	ump = VFSTOUFS(mp);
	fs = ump->um_fs;
	if (fs->fs_magic != FS_MAGIC)
		panic("ffs_statfs");
#ifdef COMPAT_09
	sbp->f_type = 1;
#else
	sbp->f_type = 0;
#endif
	sbp->f_bsize = fs->fs_fsize;
	sbp->f_iosize = fs->fs_bsize;
	sbp->f_blocks = fs->fs_dsize;
	sbp->f_bfree = fs->fs_cstotal.cs_nbfree * fs->fs_frag +
	    fs->fs_cstotal.cs_nffree;
	sbp->f_bavail = (long) (((u_int64_t) fs->fs_dsize * (u_int64_t)
	    (100 - fs->fs_minfree) / (u_int64_t) 100) -
	    (u_int64_t) (fs->fs_dsize - sbp->f_bfree));
	sbp->f_files = fs->fs_ncg * fs->fs_ipg - ROOTINO;
	sbp->f_ffree = fs->fs_cstotal.cs_nifree;
	if (sbp != &mp->mnt_stat) {
		memcpy(sbp->f_mntonname, mp->mnt_stat.f_mntonname, MNAMELEN);
		memcpy(sbp->f_mntfromname, mp->mnt_stat.f_mntfromname,
		    MNAMELEN);
	}
	strncpy(sbp->f_fstypename, mp->mnt_op->vfs_name, MFSNAMELEN);
	return (0);
}

/*
 * Go through the disk queues to initiate sandbagged IO;
 * go through the inodes to write those that have been modified;
 * initiate the writing of the super block if it has been modified.
 *
 * Note: we are always called with the filesystem marked `MPBUSY'.
 */
int
ffs_sync(mp, waitfor, cred, p)
	struct mount *mp;
	int waitfor;
	struct ucred *cred;
	struct proc *p;
{
	struct vnode *vp, *nvp;
	struct inode *ip;
	struct ufsmount *ump = VFSTOUFS(mp);
	struct fs *fs;
	int error, allerror = 0;

	fs = ump->um_fs;
	if (fs->fs_fmod != 0 && fs->fs_ronly != 0) {		/* XXX */
		printf("fs = %s\n", fs->fs_fsmnt);
		panic("update: rofs mod");
	}
	/*
	 * Write back each (modified) inode.
	 */
	simple_lock(&mntvnode_slock);
loop:
	for (vp = LIST_FIRST(&mp->mnt_vnodelist); vp != NULL; vp = nvp) {
		/*
		 * If the vnode that we are about to sync is no longer
		 * associated with this mount point, start over.
		 */
		if (vp->v_mount != mp)
			goto loop;
		simple_lock(&vp->v_interlock);
		nvp = LIST_NEXT(vp, v_mntvnodes);
		ip = VTOI(vp);
		if (vp->v_type == VNON ||
		    ((ip->i_flag &
		    (IN_ACCESS | IN_CHANGE | IN_UPDATE | IN_MODIFIED |
		    IN_ACCESSED)) == 0 &&
		    LIST_EMPTY(&vp->v_dirtyblkhd) &&
		    vp->v_uvm.u_obj.uo_npages == 0)) {
			simple_unlock(&vp->v_interlock);
			continue;
		}
		simple_unlock(&mntvnode_slock);
		error = vget(vp, LK_EXCLUSIVE | LK_NOWAIT | LK_INTERLOCK);
		if (error) {
			simple_lock(&mntvnode_slock);
			if (error == ENOENT)
				goto loop;
			continue;
		}
		if ((error = VOP_FSYNC(vp, cred,
		    waitfor == MNT_WAIT ? FSYNC_WAIT : 0, 0, 0, p)) != 0)
			allerror = error;
		vput(vp);
		simple_lock(&mntvnode_slock);
	}
	simple_unlock(&mntvnode_slock);
	/*
	 * Force stale file system control information to be flushed.
	 */
	if (waitfor != MNT_LAZY) {
		if (ump->um_mountp->mnt_flag & MNT_SOFTDEP)
			waitfor = MNT_NOWAIT;
		vn_lock(ump->um_devvp, LK_EXCLUSIVE | LK_RETRY);
		if ((error = VOP_FSYNC(ump->um_devvp, cred,
		    waitfor == MNT_WAIT ? FSYNC_WAIT : 0, 0, 0, p)) != 0)
			allerror = error;
		VOP_UNLOCK(ump->um_devvp, 0);
	}
#ifdef QUOTA
	qsync(mp);
#endif
	/*
	 * Write back modified superblock.
	 */
	if (fs->fs_fmod != 0) {
		fs->fs_fmod = 0;
		fs->fs_time = time.tv_sec;
		if ((error = ffs_cgupdate(ump, waitfor)))
			allerror = error;
	}
	return (allerror);
}

/*
 * Look up a FFS dinode number to find its incore vnode, otherwise read it
 * in from disk.  If it is in core, wait for the lock bit to clear, then
 * return the inode locked.  Detection and handling of mount points must be
 * done by the calling routine.
 */
int
ffs_vget(mp, ino, vpp)
	struct mount *mp;
	ino_t ino;
	struct vnode **vpp;
{
	struct fs *fs;
	struct inode *ip;
	struct ufsmount *ump;
	struct buf *bp;
	struct vnode *vp;
	dev_t dev;
	int error;
	caddr_t cp;

	ump = VFSTOUFS(mp);
	dev = ump->um_dev;

	if ((*vpp = ufs_ihashget(dev, ino, LK_EXCLUSIVE)) != NULL)
		return (0);

	/* Allocate a new vnode/inode. */
	if ((error = getnewvnode(VT_UFS, mp, ffs_vnodeop_p, &vp)) != 0) {
		*vpp = NULL;
		return (error);
	}

	/*
	 * If someone beat us to it while sleeping in getnewvnode(),
	 * push back the freshly allocated vnode we don't need, and return.
	 */
	do {
		if ((*vpp = ufs_ihashget(dev, ino, LK_EXCLUSIVE)) != NULL) {
			ungetnewvnode(vp);
			return (0);
		}
	} while (lockmgr(&ufs_hashlock, LK_EXCLUSIVE|LK_SLEEPFAIL, 0));

	/*
	 * XXX MFS ends up here, too, to allocate an inode.  Should we
	 * XXX create another pool for MFS inodes?
	 */
	ip = pool_get(&ffs_inode_pool, PR_WAITOK);
	memset((caddr_t)ip, 0, sizeof(struct inode));
	vp->v_data = ip;
	ip->i_vnode = vp;
	ip->i_fs = fs = ump->um_fs;
	ip->i_dev = dev;
	ip->i_number = ino;
	LIST_INIT(&ip->i_pcbufhd);
#ifdef QUOTA
	{
		int i;

		for (i = 0; i < MAXQUOTAS; i++)
			ip->i_dquot[i] = NODQUOT;
	}
#endif
	/*
	 * Put it onto its hash chain and lock it so that other requests for
	 * this inode will block if they arrive while we are sleeping waiting
	 * for old data structures to be purged or for the contents of the
	 * disk portion of this inode to be read.
	 */
	ufs_ihashins(ip);
	lockmgr(&ufs_hashlock, LK_RELEASE, 0);

	/* Read in the disk contents for the inode, copy into the inode. */
	error = bread(ump->um_devvp, fsbtodb(fs, ino_to_fsba(fs, ino)),
	    (int)fs->fs_bsize, NOCRED, &bp);
	if (error) {
		/*
		 * The inode does not contain anything useful, so it would
		 * be misleading to leave it on its hash chain.  With mode
		 * still zero, it will be unlinked and returned to the free
		 * list by vput().
		 */
		vput(vp);
		brelse(bp);
		*vpp = NULL;
		return (error);
	}
	cp = (caddr_t)bp->b_data + (ino_to_fsbo(fs, ino) * DINODE_SIZE);
#ifdef FFS_EI
	if (UFS_FSNEEDSWAP(fs))
		ffs_dinode_swap((struct dinode *)cp, &ip->i_din.ffs_din);
	else
#endif
		memcpy(&ip->i_din.ffs_din, cp, DINODE_SIZE);
	if (DOINGSOFTDEP(vp))
		softdep_load_inodeblock(ip);
	else
		ip->i_ffs_effnlink = ip->i_ffs_nlink;
	brelse(bp);

	/*
	 * Initialize the vnode from the inode, check for aliases.
	 * Note that the underlying vnode may have changed.
	 */
	error = ufs_vinit(mp, ffs_specop_p, ffs_fifoop_p, &vp);
	if (error) {
		vput(vp);
		*vpp = NULL;
		return (error);
	}
	/*
	 * Finish inode initialization now that aliasing has been resolved.
	 */
	ip->i_devvp = ump->um_devvp;
	VREF(ip->i_devvp);
	/*
	 * Ensure that uid and gid are correct. This is a temporary
	 * fix until fsck has been changed to do the update.
	 */
	if (fs->fs_inodefmt < FS_44INODEFMT) {			/* XXX */
		ip->i_ffs_uid = ip->i_din.ffs_din.di_ouid;	/* XXX */
		ip->i_ffs_gid = ip->i_din.ffs_din.di_ogid;	/* XXX */
	}							/* XXX */
	uvm_vnp_setsize(vp, ip->i_ffs_size);

	*vpp = vp;
	return (0);
}

/*
 * File handle to vnode
 *
 * Have to be really careful about stale file handles:
 * - check that the inode number is valid
 * - call ffs_vget() to get the locked inode
 * - check for an unallocated inode (i_mode == 0)
 * - check that the given client host has export rights and return
 *   those rights via. exflagsp and credanonp
 */
int
ffs_fhtovp(mp, fhp, vpp)
	struct mount *mp;
	struct fid *fhp;
	struct vnode **vpp;
{
	struct ufid *ufhp;
	struct fs *fs;

	ufhp = (struct ufid *)fhp;
	fs = VFSTOUFS(mp)->um_fs;
	if (ufhp->ufid_ino < ROOTINO ||
	    ufhp->ufid_ino >= fs->fs_ncg * fs->fs_ipg)
		return (ESTALE);
	return (ufs_fhtovp(mp, ufhp, vpp));
}

/*
 * Vnode pointer to File handle
 */
/* ARGSUSED */
int
ffs_vptofh(vp, fhp)
	struct vnode *vp;
	struct fid *fhp;
{
	struct inode *ip;
	struct ufid *ufhp;

	ip = VTOI(vp);
	ufhp = (struct ufid *)fhp;
	ufhp->ufid_len = sizeof(struct ufid);
	ufhp->ufid_ino = ip->i_number;
	ufhp->ufid_gen = ip->i_ffs_gen;
	return (0);
}

void
ffs_init()
{
	if (ffs_initcount++ > 0)
		return;

	softdep_initialize();
	ufs_init();

	pool_init(&ffs_inode_pool, sizeof(struct inode), 0, 0, 0, "ffsinopl",
	    0, pool_page_alloc_nointr, pool_page_free_nointr, M_FFSNODE);
}

void
ffs_done()
{
	if (--ffs_initcount > 0)
		return;

	/* XXX softdep cleanup ? */
	ufs_done();
	pool_destroy(&ffs_inode_pool);
}

int
ffs_sysctl(name, namelen, oldp, oldlenp, newp, newlen, p)
	int *name;
	u_int namelen;
	void *oldp;
	size_t *oldlenp;
	void *newp;
	size_t newlen;
	struct proc *p;
{
	extern int doclusterread, doclusterwrite, doreallocblks, doasyncfree;
	extern int ffs_log_changeopt;

	/* all sysctl names at this level are terminal */
	if (namelen != 1)
		return (ENOTDIR);		/* overloaded */

	switch (name[0]) {
	case FFS_CLUSTERREAD:
		return (sysctl_int(oldp, oldlenp, newp, newlen,
		    &doclusterread));
	case FFS_CLUSTERWRITE:
		return (sysctl_int(oldp, oldlenp, newp, newlen,
		    &doclusterwrite));
	case FFS_REALLOCBLKS:
		return (sysctl_int(oldp, oldlenp, newp, newlen,
		    &doreallocblks));
	case FFS_ASYNCFREE:
		return (sysctl_int(oldp, oldlenp, newp, newlen, &doasyncfree));
	case FFS_LOG_CHANGEOPT:
		return (sysctl_int(oldp, oldlenp, newp, newlen,
		    &ffs_log_changeopt));
	default:
		return (EOPNOTSUPP);
	}
	/* NOTREACHED */
}

/*
 * Write a superblock and associated information back to disk.
 */
int
ffs_sbupdate(mp, waitfor)
	struct ufsmount *mp;
	int waitfor;
{
	struct fs *fs = mp->um_fs;
	struct buf *bp;
	int i, error = 0;
	int32_t saved_nrpos = fs->fs_nrpos;
	int64_t saved_qbmask = fs->fs_qbmask;
	int64_t saved_qfmask = fs->fs_qfmask;
	u_int64_t saved_maxfilesize = fs->fs_maxfilesize;
	u_int8_t saveflag;

	/* Restore compatibility to old file systems.		   XXX */
	if (fs->fs_postblformat == FS_42POSTBLFMT)		/* XXX */
		fs->fs_nrpos = -1;				/* XXX */
	if (fs->fs_inodefmt < FS_44INODEFMT) {			/* XXX */
		int32_t *lp, tmp;				/* XXX */
								/* XXX */
		lp = (int32_t *)&fs->fs_qbmask;	/* XXX nuke qfmask too */
		tmp = lp[4];					/* XXX */
		for (i = 4; i > 0; i--)				/* XXX */
			lp[i] = lp[i-1];			/* XXX */
		lp[0] = tmp;					/* XXX */
	}							/* XXX */
	fs->fs_maxfilesize = mp->um_savedmaxfilesize;		/* XXX */

	bp = getblk(mp->um_devvp, SBOFF >> (fs->fs_fshift - fs->fs_fsbtodb),
	    (int)fs->fs_sbsize, 0, 0);
	saveflag = fs->fs_flags & FS_INTERNAL;
	fs->fs_flags &= ~FS_INTERNAL;
	memcpy(bp->b_data, fs, fs->fs_sbsize);
#ifdef FFS_EI
	if (mp->um_flags & UFS_NEEDSWAP)
		ffs_sb_swap(fs, (struct fs*)bp->b_data);
#endif

	fs->fs_flags |= saveflag;
	fs->fs_nrpos = saved_nrpos;			/* XXX */
	fs->fs_qbmask = saved_qbmask;			/* XXX */
	fs->fs_qfmask = saved_qfmask;			/* XXX */
	fs->fs_maxfilesize = saved_maxfilesize;		/* XXX */

	if (waitfor == MNT_WAIT)
		error = bwrite(bp);
	else
		bawrite(bp);
	return (error);
}

int
ffs_cgupdate(mp, waitfor)
	struct ufsmount *mp;
	int waitfor;
{
	struct fs *fs = mp->um_fs;
	struct buf *bp;
	int blks;
	void *space;
	int i, size, error = 0, allerror = 0;

	allerror = ffs_sbupdate(mp, waitfor);
	blks = howmany(fs->fs_cssize, fs->fs_fsize);
	space = fs->fs_csp;
	for (i = 0; i < blks; i += fs->fs_frag) {
		size = fs->fs_bsize;
		if (i + fs->fs_frag > blks)
			size = (blks - i) * fs->fs_fsize;
		bp = getblk(mp->um_devvp, fsbtodb(fs, fs->fs_csaddr + i),
		    size, 0, 0);
#ifdef FFS_EI
		if (mp->um_flags & UFS_NEEDSWAP)
			ffs_csum_swap((struct csum*)space,
			    (struct csum*)bp->b_data, size);
		else
#endif
			memcpy(bp->b_data, space, (u_int)size);
		space = (char *)space + size;
		if (waitfor == MNT_WAIT)
			error = bwrite(bp);
		else
			bawrite(bp);
	}
	if (!allerror && error)
		allerror = error;
	return (allerror);
}