to the server. If "2" is specified, a separate connection is used
for data and directory operations. Using two connections can
significantly increase directory operation performance on a saturated
link, in some cases by up to 30x.
with the ffs kernel module and follows the trend of retiring ufs.
It also makes it possible to get rid of a special-case kludge in runtime
module loading, since ufs was not really a module. librumpfs_ufs is now
obsolete and ffs consumers should be linked solely against
librumpfs_ffs.
return something other than just 0.
caveat: statvfs is done for the mountpoint path, so it might not give
the truth about a directory inside the mountpoint.
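
For illustration, a minimal client-side sketch of the caveat (the mount
path is made up); both calls report the mount root's figures, since
statvfs is only done for the mountpoint path:

#include <stdio.h>
#include <sys/statvfs.h>

int
main(void)
{
        struct statvfs root, sub;

        /* hypothetical psshfs mount at /mnt/remote */
        statvfs("/mnt/remote", &root);
        /* a directory which lives on a different file system on the
         * server still reports the mount root's figures */
        statvfs("/mnt/remote/otherfs/dir", &sub);

        printf("free blocks: root %lu, subdir %lu\n",
            (unsigned long)root.f_bfree, (unsigned long)sub.f_bfree);
        return 0;
}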
made to fail. Specifically, change
.ifdef(SYMBOL) -> .ifdef SYMBOL or .if defined(SYMBOL),
and correspondingly for .ifndef.
Also correct one error in lib/libm/Makefile (.ifdef (${MKCOMPLEX} != "no")?!?).
experimental option -p, which tries to reestablish the connection
to the sftp server in case it is lost. This currently has a few
limitations (documented in the man page), but generally works for some
use cases.
Better support might eventually emerge, but since that requires a
plunge into the depths of puffs_framebuf, I need quite a bit of
Fernet Branca to build up my courage before attempting it.
the other mount binaries do. Now syspuffs can be used to run all
puffs file systems as utilities. This includes fuse file systems
and becomes interesting with the fs-utils project. We can now do
e.g. this:
ReFUSE ntfs-3g:
golem> echo hello | fsu_write/fsu_write ntfs-3g puffs ~/img/ntfs.img dafile
golem> fsu_cat/fsu_cat ntfs-3g puffs ~/img/ntfs.img dafile
hello
golem>
puffs sysctlfs:
golem> fsu_ls/fsu_ls mount_sysctlfs puffs sysctl -l ddb
total 0
-r-xr-xr-x 1 pooka users 1 Sep 2 22:11 commandonenter
-r-xr-xr-x 1 pooka users 2 Sep 2 22:11 fromconsole
-r-xr-xr-x 1 pooka users 3 Sep 2 22:11 lines
-r-xr-xr-x 1 pooka users 8 Sep 2 22:11 maxoff
-r-xr-xr-x 1 pooka users 3 Sep 2 22:11 maxwidth
-r-xr-xr-x 1 pooka users 2 Sep 2 22:11 onpanic
-r-xr-xr-x 1 pooka users 3 Sep 2 22:11 radix
-r-xr-xr-x 1 pooka users 2 Sep 2 22:11 tabstops
-r-xr-xr-x 1 pooka users 2 Sep 2 22:11 tee_msgbuf
The same works for psshfs, etc.
In other words, this provides total integration of "normal"
in-kernel file systems and puffs/fuse file systems at the ukfs
library level.
Note: implementation is still "first stab" and the fs-utils usage
will no doubt change.
This is fairly annoying when browsing a hierarchy with multiple file
systems mounted, since at least inode 2 (the usual root inode) is common
to most of them. Compensate by also comparing the modification time
(a rough sketch follows below). Not perfect, but ....
* Don't loop eternally if we attempt to read at or past EOF. Fixes
reading files on a non-cached mount.
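
A rough sketch of the comparison from the first item, with made-up
names; the real code compares cached directory nodes rather than
struct stat, but the principle is the same:

#include <stdbool.h>
#include <sys/stat.h>

/*
 * Inode numbers repeat across mounted file systems (inode 2 being
 * the usual root inode), so treat two entries as the same node only
 * if the modification time matches as well.
 */
static bool
same_node(const struct stat *a, const struct stat *b)
{
        return a->st_ino == b->st_ino && a->st_mtime == b->st_mtime;
}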
Use this routine both in mount_fs and rump_fs to provide equivalent
command line parameters and therefore usage interchangeability.
While doing this, combine some common mountgoop into mountprog.h
private non-installed build infrastructure from sys/rump.
breakdown of commit:
* install relevant headers into /usr/include/rump
* build sys/rump/librump/rumpuser and sys/rump/librump/rumpkern
from src/lib and install as librumpuser and librump, respectively
+ this retains the ability to test a librump build with just the
kernel sources at hand
* move sys/rump/fs/lib/libukfs and sys/rump/fs/lib/libp2k to src/lib
for general consumption, they are not kernel-space dwellers anyway
* build and install sys/rump/fs/lib/lib$fs as librumpfs_$fs
* add chapter 3 manual pages for rump, rumpuser, ukfs and p2k
* build and install userspace kernel file system daemons if MKPUFFS=yes
is specified
* retire fsconsole for now, it will make a comeback with an actually
implemented version shortly
nodes when doing readdir. This makes pwd work again for cases
where getcwd() actually has to do the "READDIR + compare inode
numbers" trick.
Yet another problem reported by jmmv.
a node always has the inode number set. And since I'm feeling
generous, sprinkle a few comments around the affected areas (mostly
so that I'd remember what in the world the code is trying to do).
Ok, ok, a few more words about it: stop holding puffs_cc as a holy
value and passing it around to almost every possible place (popquiz:
which kernel variable does this remind you of?). Instead, pass
the natural choice, puffs_usermount, and fetch puffs_cc via
puffs_cc_getcc() only in routines which actually need it. This
not only simplifies code, but (thanks to the introduction of
puffs_cc_getcc()) enables constructs which weren't previously sanely
possible, say layering as a curious example.
There's still a little to do on this front, but this was the major
fs interface blast.
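
A hedged sketch of what the new convention looks like; puffs_cc_getcc()
and puffs_cc_yield() are the libpuffs routines in question, the
surrounding function is made up:

#include <puffs.h>

/*
 * Interface routines now take the puffs_usermount.  Only the routines
 * which actually need to block fetch the calling context themselves.
 */
static void
wait_for_reply(struct puffs_usermount *pu)
{
        struct puffs_cc *pcc = puffs_cc_getcc(pu);

        /* ... queue the outgoing request here ... */

        /* yield this request's continuation until the reply arrives */
        puffs_cc_yield(pcc);
}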
handle open is requested, it is waited for only if the node was
not previously successfully opened. The actual wait for the file
handle happens only when the file handle is actually needed (read
or write). This in turn has the effect that reading cached files
will be quick instead of waiting for the file handle from the sftp
server first. The wait previously could be very long if there were
several hundred k of outstanding requests on a limited-bandwidth
link.
The code is in some need of serious handholding, but it works, so
I'll leave that as "future work".
the directory which contained the file before a getattr on the file
itself, the locally cached mtime would be updated without invalidating
the kernel page cache. Thus incorrect data would be returned when
the node was read afterwards as the node size wouldn't match the
data length in the page cache.
Fix the problem by making all vattr-setting routines use the same code.
Problem noticed again by jmmv & atf (and again by running atf over
psshfs ... sometimes you're the windshield, sometimes you're the bug)
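
A hedged sketch of the idea behind the fix, with illustrative names;
puffs_inval_pagecache_node() is the libpuffs page cache invalidation
call, and the point is that every cached-attribute update goes through
one routine:

#include <puffs.h>

static void
update_cached_va(struct puffs_usermount *pu, struct puffs_node *pn,
        const struct vattr *newva)
{
        struct vattr *va = &pn->pn_va;

        /* if the size or mtime changed, the kernel page cache for the
         * node can no longer be trusted, so invalidate it */
        if (va->va_size != newva->va_size ||
            va->va_mtime.tv_sec != newva->va_mtime.tv_sec)
                puffs_inval_pagecache_node(pu, pn);

        *va = *newva;
}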
against cached mtime instead of attrread - attrread can be reset
these days by sending SIGHUP.
Problem noticed by jmmv & atf (well.. namely by using atf through psshfs).
servers. Calling daemon() (i.e. fork()ing) inside a library can
cause nice surprises for e.g. threaded programs. As discussed with
Greg Oster & others.
for each node. Setting this to a small number can improve
interactive performance on low-bandwidth links when performing
bulk data reads. Of course I could also open separate pipes for
bulk and other traffic, but this was quicker and less intrusive and doesn't
require authenticating twice.
anymore do this if we fail to set size.
The whole lookup procedure should be done in a smarter fashion,
but this is the quickie fix to get things working again.
to signal no references, as that is not currently supported for
node_rename(). The removed node will not immediately be reclaimed,
but we can live with that for now.
While here, refactor the removal code a bit so it can be shared between
remove and rmdir.
fixes PR kern/36637 by reinoud
portalfs. Uses the same config files etc. as the "regular"
mount_portal, and actually compiles a lot of the code directly from
under src/sbin/mount_portal.
Doesn't yet support concurrent access, but I have a pretty clear
vision of how to neatly fix that.
FORTIFY_SOURCE feature of libssp, thus checking the size of arguments to
various string and memory copy and set functions (as well as a few system
calls and other miscellany) where known at function entry. Red Hat has
evidently built all "core system packages" with this option for some time.
This option should be used at the top of Makefiles (or Makefile.inc where
this is used for subdirectories) but after any setting of LIB.
This is only useful for userland code, and cannot be used in libc or in
any code which includes the libc internals, because it overrides certain
libc functions with macros. Some effort has been made to make USE_FORT=yes
work correctly for a full-system build by having the bsd.sys.mk logic
disable the feature where it should not be used (libc, libssp itself,
the kernel) but no attempt has been made to build the entire system with
USE_FORT and doing so will doubtless expose numerous bugs and misfeatures.
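
For illustration, a minimal made-up example of the kind of bug the
fortified functions catch: with the feature enabled, the compiler
routes strcpy() through a size-checked variant when the destination
size is known at the call site, and the overflow below aborts the
program instead of silently corrupting memory.

#include <string.h>

int
main(void)
{
        char buf[8];

        /* well past 8 bytes: without fortification this overflows
         * silently; with it, the checked copy aborts at runtime */
        strcpy(buf, "this string does not fit");
        return 0;
}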
Adjust the system build so that all programs and libraries that are setuid,
directly handle network data (including serial comm data), perform
authentication, or appear likely to have (or have a history of having)
data-driven bugs (e.g. file(1)) are built with USE_FORT=yes by default,
with the exception of libc, which cannot use USE_FORT and thus uses
only USE_SSP by default. Tested on i386 with no ill results; setting
USE_FORT=no per-directory or in a system build will disable the feature
if desired.
be reclaimed from under us while we are warming the getattr cache.
Shuffle some code to prevent the effects. Theoretically the race
is still possible, but I don't think it will happen in practice.
In any case, the code could benefit from some more dusting.
getattr are usually still outstanding when we already would like
the result. Instead of issuing another stat, which will be serviced
only after all the other entries in the directory, record all the
outgoing readdir getattr buffers, and if we encounter an outstanding
request when we need to fetch attrs, do a puffs_framev_framebuf_ccpromote()
and wait for it instead of firing off the second query. This shaves
almost 10% off the time for ls -lR.
Also, get rid of the SUPERREADDIR conditional, since it has penetrated
the code quite a bit and the #ifdef SUPERREADDIRs were starting to
look like tagliatelle alla bolognese (n.b. I love how it looks,
but I wouldn't like it either if my tagliatelle alla bolognese
looked like psshfs code). Maybe it should be re-introduced in the
form of a switch?
a bit differently: when reading the directory, store all getattr
caching queries and fire them off only when the directory read is
complete. That way the common sequence is not [readdir, lots of
async getattr requests, readdir EOF] but rather [readdir, readdir
EOF, lots of async getattr]. This speeds up ls -lR by about 25%
(on my LAN).
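
A self-contained toy sketch of the reordering (all names made up); the
only point is that the cache-warming requests are collected during the
readdir phase and fired off only after the directory read is complete:

#include <stdio.h>

#define MAXENT 128

static const char *pending[MAXENT];
static int npending;

/* readdir phase: just record the name, do not send anything yet */
static void
queue_getattr(const char *name)
{
        if (npending < MAXENT)
                pending[npending++] = name;
}

/* readdir EOF reached: now fire off the queued requests */
static void
flush_getattrs(void)
{
        for (int i = 0; i < npending; i++)
                printf("async getattr: %s\n", pending[i]);
        npending = 0;
}

int
main(void)
{
        static const char *dirents[] = { "file1", "file2", "file3" };

        for (size_t i = 0; i < sizeof(dirents) / sizeof(dirents[0]); i++)
                queue_getattr(dirents[i]);
        flush_getattrs();
        return 0;
}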
equal, larger, respectively instead of 0/1 for non/equal. This
will allow sorting the buffers for faster matching in libpuffs.
While here, change the name from respcmp to framecmp, as that better
reflects the purpose.
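
A hedged sketch of the new contract (the frame contents are made up):
like a qsort(3) comparator, framecmp now returns a negative, zero or
positive value, which is what lets libpuffs keep the outstanding
buffers sorted:

#include <stdint.h>

/* toy stand-in for whatever identifies a request/response pair */
struct frame {
        uint32_t reqid;
};

/* smaller / equal / larger => negative / zero / positive, instead of
 * merely distinguishing equal from not equal as before */
static int
framecmp(const struct frame *f1, const struct frame *f2)
{
        if (f1->reqid < f2->reqid)
                return -1;
        if (f1->reqid > f2->reqid)
                return 1;
        return 0;
}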
NOTE! there is no obvious way to make compilation fail for file
systems which may already be using this feature (although I don't
think there are any outside our tree, as the feature is two weeks
old). Nevertheless, non-updated file systems will fail very quickly.