handle open is requested, it is waited for only if the node was
not previously successfully opened. The actual wait for the file
handle happens only when the file handle is actually needed (read
or write). This in turn has the effect that reading cached files
will be quick instead of waiting for the file handle from the sftp
server first. The wait previously could be very long if there were
several hundred k of outstanding requests on a limited-bandwidth
link.
The code is in some need of serious handholding, but it works, so
I'll leave that as "future work".
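Roughly the idea, sketched with made-up names (this is not the actual
psshfs code; the helpers stand in for the real request machinery):
open only records that a handle was requested, and the wait moves to
the first read or write that really needs the handle.

    struct lazy_node {
        int     fh_valid;       /* an sftp handle is already held */
        int     fh_waiting;     /* open sent, reply not yet waited for */
        void    *fh;            /* the sftp file handle, once received */
        int     fh_reqid;       /* request id of the outstanding open */
    };

    /* hypothetical helpers standing in for the real request machinery */
    int     send_sftp_open_async(struct lazy_node *);
    void    *wait_sftp_reply(int reqid);

    static void
    lazy_open(struct lazy_node *ln)
    {
        /* fire off an open only if the node wasn't opened before */
        if (ln->fh_valid || ln->fh_waiting)
            return;
        ln->fh_reqid = send_sftp_open_async(ln);
        ln->fh_waiting = 1;
        /* no waiting here, so cached reads can proceed immediately */
    }

    static void *
    lazy_gethandle(struct lazy_node *ln)
    {
        /* wait only when a read or write actually needs the handle */
        if (!ln->fh_valid && ln->fh_waiting) {
            ln->fh = wait_sftp_reply(ln->fh_reqid);
            ln->fh_valid = 1;
            ln->fh_waiting = 0;
        }
        return ln->fh;
    }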
servers. Calling daemon() (i.e. fork()ing) inside a library can
cause nice surprises for e.g. threaded programs. As discussed with
Greg Oster & others.
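A minimal sketch of the alternative (run_file_server() is just a
stand-in for whatever enters the puffs main loop, not a real libpuffs
call): the file server program detaches itself, before any threads
exist, instead of having the library fork() behind its back.

    #include <stdlib.h>
    #include <unistd.h>
    #include <err.h>

    /* hypothetical stand-in for entering the puffs main loop */
    int run_file_server(void);

    int
    main(void)
    {
        /* ... set up and mount the file system ... */

        /* detach in the program itself, before any threads are running */
        if (daemon(1, 0) == -1)
            err(EXIT_FAILURE, "daemon");

        return run_file_server();
    }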
for each node. Setting this to a small number can improve interactive
performance on low-bandwidth links when performing bulk data reads.
Of course I could also open separate pipes for bulk and other traffic,
but this was quicker and less intrusive and doesn't
require authenticating twice.
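As a sketch with invented names (not the psshfs implementation): a
bulk reader simply stops issuing new sftp reads once it has the
configured number in flight.

    #include <stddef.h>
    #include <sys/types.h>

    struct throttle_node {
        int     reqs_outstanding;   /* sftp requests currently in flight */
        int     reqs_max;           /* the new per-node tunable */
    };

    /* hypothetical helpers */
    void    send_read_request(struct throttle_node *, off_t, size_t);
    void    wait_for_any_reply(struct throttle_node *); /* decrements reqs_outstanding */

    static void
    throttled_bulk_read(struct throttle_node *tn, off_t off, size_t len, size_t chunk)
    {
        size_t done;

        for (done = 0; done < len; done += chunk) {
            /* don't let one bulk reader hog the whole pipe */
            while (tn->reqs_outstanding >= tn->reqs_max)
                wait_for_any_reply(tn);

            send_read_request(tn, off + (off_t)done, chunk);
            tn->reqs_outstanding++;
        }
    }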
only take the bare essentials, which currently means removing
"maxreqlen" from the argument list (all current callers I'm aware
of set it to 0 anyway). Introduce puffs_init(), which provides a
context for setting various parameters, and puffs_domount(), which
can be used to mount the file system. Keep puffs_mount() as a
shortcut for the above two for simple file systems.
Bump development ABI version to 13. After all, it's Friday the 13th.
Watch out! Bad things can happen on Friday the 13th. --No carrier--
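The intended calling convention, roughly (the argument lists below are
approximations; the real prototypes have changed between puffs
versions, so check puffs(3) for the one you build against):

    #include <err.h>
    #include <puffs.h>
    #include <stdlib.h>

    static void
    mount_the_long_way(struct puffs_ops *pops, const char *mountpath, void *priv)
    {
        struct puffs_usermount *pu;

        /* puffs_init() hands back a context for tweaking parameters... */
        pu = puffs_init(pops, "example", priv, 0);
        if (pu == NULL)
            err(EXIT_FAILURE, "puffs_init");

        /* ... adjust whatever the file system needs on pu here ... */

        /* ... and puffs_domount() performs the actual mount. */
        if (puffs_domount(pu, mountpath, 0) == -1)
            err(EXIT_FAILURE, "puffs_domount");
    }

Simple file systems can keep calling puffs_mount(), which just does
the two steps back to back.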
Now, when I say support, I mean "support", due to the limitations
of the backend. File handles are valid only for one session, since
nodes can only be identified by pathnames and pathnames don't (all)
fit into the nfs file handle space. Additionally, we can't detect
if a pathname is completely replaced by another file (if it's done
via some other route than through our mount, of course). But then
again, that's an inherent problem with sshfs even without nfs.
accessors for interesting data in it. Namely, you can now get
pu->pu_privdata with puffs_getspecific(), pu->pu_pn_root with
puffs_set/getroot() and pu->pu_maxreqlen with puffs_getmaxreqlen().
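Hedged usage sketch (see puffs(3) for the exact prototypes in a given
libpuffs version):

    #include <stddef.h>
    #include <puffs.h>

    static void
    poke_at_usermount(struct puffs_usermount *pu, struct puffs_node *newroot)
    {
        void *priv;
        struct puffs_node *root;
        size_t maxreq;

        priv = puffs_getspecific(pu);       /* was pu->pu_privdata */
        puffs_setroot(pu, newroot);         /* was pu->pu_pn_root */
        root = puffs_getroot(pu);
        maxreq = puffs_getmaxreqlen(pu);    /* was pu->pu_maxreqlen */

        (void)priv; (void)root; (void)maxreq;
    }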
this can happen legally when a file is removed from the backing
storage without going through this sshfs instance, a readdir is then
executed for the parent directory, and only after that is the node
reclaimed.
* now that there is a mechanism in place which does not require a
pcc to do an sftp transaction, do not yield() in operations where
we don't care about the return value of the final transaction
(e.g. close handle). Speedup benefit for no cost.
reclaimed nodes hanging until all their children have been reclaimed
and then reclaim everything we can as far up towards the root as possible.
This is because the file system structures are currently interlinked
in a fashion which would make dotdot lookup based purely on a path,
instead of on an in-memory parent node pointer, very difficult.
Yes, this deserves a closer look some day.
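The reclaim strategy, sketched with invented names and fields (not the
actual psshfs structures): a reclaimed node with live children is left
hanging; once its last child goes away, it is freed together with
every already-reclaimed ancestor on the way up.

    #include <stddef.h>

    struct reclaim_node {
        struct reclaim_node *parent;
        int     nchildren;  /* live (unreclaimed) children */
        int     reclaimed;  /* the kernel already reclaimed this node */
    };

    /* hypothetical helper */
    void    free_node(struct reclaim_node *);

    static void
    node_reclaim(struct reclaim_node *rn)
    {
        struct reclaim_node *parent;

        rn->reclaimed = 1;

        /* children still need the parent pointer for dotdot lookup,
         * so leave the node hanging for now */
        if (rn->nchildren > 0)
            return;

        /* free this node and walk up towards the root, freeing every
         * already-reclaimed ancestor that just lost its last child */
        while (rn != NULL) {
            parent = rn->parent;
            free_node(rn);
            if (parent == NULL)
                break;
            if (--parent->nchildren > 0 || !parent->reclaimed)
                break;
            rn = parent;
        }
    }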
directory entries already in readdir and caches the results instead
of waiting for each individual getattr from the kernel. For
high-latency links the difference in "ls -l" is quite astounding
and even on my lan "ls -lR" is faster than for nfs in a normal
directory hierarchy (i.e. not one artificially set up to have thousands
of files per directory).
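As a sketch with invented names (not the psshfs code): readdir fills
each entry's attribute cache while it is already talking to the
server, so the getattrs the kernel fires for "ls -l" are answered
locally.

    #include <sys/stat.h>

    struct dirent_node {
        struct stat va;     /* cached attributes */
        int     va_valid;
    };

    /* hypothetical helpers */
    struct dirent_node  *lookup_or_create_child(const char *name);
    int                 fetch_attrs_via_sftp(const char *name, struct stat *);

    static void
    readdir_entry(const char *name)
    {
        struct dirent_node *dn = lookup_or_create_child(name);

        /* fill the cache now instead of waiting for a later getattr */
        if (fetch_attrs_via_sftp(name, &dn->va) == 0)
            dn->va_valid = 1;
    }

    static int
    cached_getattr(struct dirent_node *dn, struct stat *sb)
    {
        if (dn->va_valid) {
            *sb = dn->va;   /* no network round trip */
            return 0;
        }
        return -1;          /* fall back to a real sftp stat */
    }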
TODO: implement some sort of bandwidth/latency measurement in the
code and enable or disable this option based on that information
(and a command-line flag).