PUFFS_KFLAG_WTCACHE. Second, create separate fids for reading and
writing. If opening for read, open a read-only fid; if for write,
a write-only fid. Use these for the actual I/O. When the
open-count for a node drops to zero, clunk both. This avoids hitting
the fid limit when accessing large directory hierarchies.
Two problems remain:
* does not take credentials into account, although we can only mount
the remote 9P file server with one set of credentials, so not a
huge worry
* doesn't work for the open/mmap/close/access_memory_window case, but
that will require some further kernel changes
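
A minimal sketch of the fid scheme described above; the p9_* helpers,
the flag values and the node layout are invented stand-ins for the
real mount_9p code, not copied from it:

  #include <stdint.h>

  typedef uint32_t p9fid_t;
  #define P9FID_NONE   ((p9fid_t)~0u)
  #define P9_OREAD     0
  #define P9_OWRITE    1
  #define NODE_FREAD   0x1
  #define NODE_FWRITE  0x2

  /* hypothetical stand-ins for the real Twalk/Topen/Tclunk code */
  extern p9fid_t p9_walk_open(p9fid_t base, int p9mode);
  extern void    p9_clunk(p9fid_t fid);

  struct p9node {
          p9fid_t fid_base;   /* walked at lookup time, never opened */
          p9fid_t fid_read;   /* read-only fid */
          p9fid_t fid_write;  /* write-only fid */
          int     opencount;
  };

  static void
  node_open(struct p9node *pn, int fflags)
  {

          if ((fflags & NODE_FREAD) && pn->fid_read == P9FID_NONE)
                  pn->fid_read = p9_walk_open(pn->fid_base, P9_OREAD);
          if ((fflags & NODE_FWRITE) && pn->fid_write == P9FID_NONE)
                  pn->fid_write = p9_walk_open(pn->fid_base, P9_OWRITE);
          pn->opencount++;
  }

  static void
  node_close(struct p9node *pn)
  {

          /* last opener gone: clunk both i/o fids */
          if (--pn->opencount > 0)
                  return;
          if (pn->fid_read != P9FID_NONE) {
                  p9_clunk(pn->fid_read);
                  pn->fid_read = P9FID_NONE;
          }
          if (pn->fid_write != P9FID_NONE) {
                  p9_clunk(pn->fid_write);
                  pn->fid_write = P9FID_NONE;
          }
  }
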
Works, but lots of little things to nibble on:
* fix permissions to work better
* limit the amount of open files required
* do constant folding with psshfs code
* support authentication
etcetc.
only take the bare essentials, which currently means removing
"maxreqlen" from the argument list (all current callers I'm aware
of set it to 0 anyway). Introduce puffs_init(), which provides a
context for setting various parameters and puffs_domount(), which
can be used to mount the file system. Keep puffs_mount() as a
shortcut for the above two for simple file systems.
Bump development ABI version to 13. After all, it's Friday the 13th.
Watch out! Bad things can happen on Friday the 13th. --No carrier--
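
As a sketch of the resulting flow; the argument lists below are
guessed from the description above, not taken from the real header:

  #include <err.h>
  #include <stdint.h>
  #include <puffs.h>

  static struct puffs_usermount *
  mount_myfs(struct puffs_ops *pops, const char *mountpath, int mntflags,
      void *priv, uint32_t pflags)
  {
          struct puffs_usermount *pu;

          /* two-step variant: adjust context parameters in between */
          pu = puffs_init(pops, "myfs", priv, pflags);
          if (pu == NULL)
                  err(1, "puffs_init");
          /* ... set optional parameters on pu here ... */
          if (puffs_domount(pu, mountpath, mntflags) == -1)
                  err(1, "puffs_domount");
          return pu;
  }

For a simple file system, puffs_mount() keeps doing both steps in
one call, now minus the maxreqlen argument.
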
been nodetofh translated even if they are not valid on the sftp
server anymore, because some nfs client might still be clinging on
to the file handle we are reclaiming now.
Now, when I say support, I mean "support", due to the limitations
of the backend. File handles are valid only for one session, since
nodes can only be identified by pathnames and pathnames don't (all)
fit into the nfs file handle space. Additionally, we can't detect
if a pathname is completely replaced by another file (if it's done
via some other route than through our mount, of course). But then
again, that's an inherent problem with sshfs even without nfs.
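
The retention side of this is roughly the following; the PSN_HASFH
flag and the node layout are invented for illustration:

  /* simplified sketch; set PSN_HASFH when nodetofh hands out a handle */
  #define PSN_HASFH 0x01

  struct exported_node {
          int flags;
          /* ... */
  };

  static int
  reclaim_sketch(struct exported_node *en)
  {

          if (en->flags & PSN_HASFH)
                  return 0;  /* keep: an nfs client may still use the fh */
          /* ... normal node teardown ... */
          return 0;
  }
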
accessors for interesting data in it. Namely, you can now get
pu->pu_privdata with puffs_getspecific(), pu->pu_pn_root with
puffs_set/getroot() and pu->pu_maxreqlen with puffs_getmaxreqlen().
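
Usage then looks along these lines (a sketch; prototypes abbreviated,
see the header for the real ones):

  #include <puffs.h>

  static void
  accessor_example(struct puffs_usermount *pu, struct puffs_node *rootnode)
  {
          void *priv;
          struct puffs_node *root;
          size_t maxreq;

          priv = puffs_getspecific(pu);     /* was pu->pu_privdata */
          puffs_setroot(pu, rootnode);      /* was pu->pu_pn_root = ... */
          root = puffs_getroot(pu);
          maxreq = puffs_getmaxreqlen(pu);  /* was pu->pu_maxreqlen */

          (void)priv; (void)root; (void)maxreq;
  }
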
(that's what you get when you copypaste code, a cid with a pin
to burst your bubble, that's what you get for all your troubles, I'll
never copypaste again)
CID 4461
this can happen legally when a file is removed from backing
storage without using this sshfs instance, a readdir is executed
for the parent directory, and only then is the node reclaimed.
* now that there is a mechanism in place which does not require a
pcc to do an sftp transaction, do not yield() in operations where
the final transaction is one whose return value we don't care
about (e.g. close handle). A speedup at no cost.
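
In outline, with invented buffer helpers (SSH_FXP_CLOSE itself is
the real sftp opcode):

  #include <stddef.h>
  #include <stdint.h>

  #define SSH_FXP_CLOSE 4   /* sftp protocol opcode */

  struct sftpbuf;
  struct psshfs_ctx;

  /* hypothetical request-buffer helpers */
  extern struct sftpbuf *sftpbuf_new(uint8_t type, uint32_t reqid);
  extern void sftpbuf_put_string(struct sftpbuf *, const void *, size_t);
  extern void sftpbuf_send(struct psshfs_ctx *, struct sftpbuf *);
  extern uint32_t nextreqid(struct psshfs_ctx *);

  /* fire off SSH_FXP_CLOSE without waiting for the status reply:
   * nothing useful can be done with it, so the caller never yields */
  static void
  closehandle_nowait(struct psshfs_ctx *pctx, const char *handle,
      size_t handlelen)
  {
          struct sftpbuf *b;

          b = sftpbuf_new(SSH_FXP_CLOSE, nextreqid(pctx));
          sftpbuf_put_string(b, handle, handlelen);
          sftpbuf_send(pctx, b);
          /* reply is matched by request id and dropped on arrival */
  }
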
reclaimed nodes hanging until all their children have been reclaimed
and then reclaim everything we can as far up toward the root as possible.
This is because the file system structures are currently interlinked
in a fashion which would make dotdot lookup based purely on a path,
rather than on the node's in-memory parent pointer, very difficult.
Yes, this deserves a closer look some day.
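
The deferral looks roughly like this (member and helper names
invented):

  struct psshfs_node {
          struct psshfs_node *parent;   /* kept for dotdot lookup */
          int childcount;
          int reclaimed;
          /* ... */
  };

  extern void free_node(struct psshfs_node *);  /* hypothetical */

  /* mark the node reclaimed; free it only once childless, then let
   * the freeing cascade up through empty reclaimed ancestors */
  static void
  reclaim_cascade(struct psshfs_node *psn)
  {
          struct psshfs_node *parent;

          psn->reclaimed = 1;
          while (psn != NULL && psn->reclaimed && psn->childcount == 0) {
                  parent = psn->parent;
                  free_node(psn);
                  if (parent != NULL)
                          parent->childcount--;
                  psn = parent;
          }
  }
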
directory entries already in readdir and caches the results instead
of waiting for each individual getattr from the kernel. For
high-latency links the difference in "ls -l" is quite astounding
and even on my lan "ls -lR" is faster than for nfs in a normal
directory hierarchy (i.e. not one artificially set up to have thousands
of files per directory).
TODO: implement some sort of bandwidth/latency measurement in the
code and enable or disable this option based on that information
(and a command-line flag).
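
A sketch of the caching, with invented names; the key point is that
the SSH_FXP_READDIR reply already carries the attributes along with
each name, so there is nothing extra to fetch:

  #include <puffs.h>
  #include <time.h>

  struct attrcache {
          struct vattr va;    /* parsed from the readdir reply */
          time_t fetched;     /* for judging staleness */
          int valid;
  };

  extern int do_sftp_lstat(const char *, struct vattr *); /* hypothetical */

  static int
  getattr_cached(const char *path, struct attrcache *ac,
      struct vattr *vap, time_t maxage)
  {

          if (ac->valid && time(NULL) - ac->fetched < maxage) {
                  *vap = ac->va;  /* no server round trip */
                  return 0;
          }
          return do_sftp_lstat(path, vap);  /* miss: SSH_FXP_LSTAT */
  }
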
should make this work with the IETF standard some day, also.
* kludge with writes and permissions a bit to be able to flush data
cached in ubc to files which already have r/o permissions in
the backend
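
Roughly, with invented helper names:

  #include <sys/types.h>
  #include <sys/stat.h>

  /* hypothetical sftp helpers */
  extern int do_sftp_setmode(const char *, mode_t); /* SSH_FXP_SETSTAT */
  extern int do_sftp_write(const char *, const void *, size_t, off_t);

  /* temporarily grant owner write permission so cached pages can be
   * flushed to a file which is r/o in the backend, then restore */
  static int
  flush_readonly(const char *path, mode_t curmode, const void *buf,
      size_t len, off_t off)
  {
          int rv;

          if ((curmode & S_IWUSR) == 0)
                  (void)do_sftp_setmode(path, curmode | S_IWUSR);
          rv = do_sftp_write(path, buf, len, off);
          if ((curmode & S_IWUSR) == 0)
                  (void)do_sftp_setmode(path, curmode);  /* back to r/o */
          return rv;
  }
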