loops where vnodes can get removed or added while the loop is running.
This could lead to panics on unmount, since nodes were skipped or
TAILQ_NEXT(0xdeadbeef, ...) was dereferenced.
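A minimal sketch of the hazard and the usual removal-safe rewrite; the
list head and field names below are the generic mount/vnode ones, not
necessarily the lists touched here:

    struct vnode *vp, *nvp;

    /* Unsafe when the body can unlink or free vp: the update step then
     * evaluates TAILQ_NEXT() on memory that is already gone. */
    for (vp = TAILQ_FIRST(&mp->mnt_vnodelist); vp != NULL;
        vp = TAILQ_NEXT(vp, v_mntvnodes))
            vgone(vp);              /* may remove vp from the list */

    /* Removal-safe variant: fetch the successor before touching vp. */
    for (vp = TAILQ_FIRST(&mp->mnt_vnodelist); vp != NULL; vp = nvp) {
            nvp = TAILQ_NEXT(vp, v_mntvnodes);
            vgone(vp);
    }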
all waiters *before* trying to get the syncer lock necessary for
dounmount(). This prevents a deadlock if the userspace server dies
while the syncer is running.
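A hypothetical sketch of the ordering; the field and function names are
made up and the dounmount() call is schematic, only the order of
operations matters:

    static void
    server_died(struct puffs_mount *pmp)
    {
            /* 1. mark the mount dead and wake everyone blocked on
             *    outstanding requests, so they error out instead of
             *    sleeping forever */
            pmp->pmp_dying = 1;                     /* assumed flag */
            wakeup(&pmp->pmp_req_waiters);          /* assumed wait channel */

            /* 2. only now contend for the syncer lock and unmount; the
             *    syncer can no longer be stuck on a dead server while
             *    we wait for its lock */
            dounmount(pmp->pmp_mp, MNT_FORCE, curlwp);
    }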
getnewvnode() while holding on to any vnode lock deadlocks the
system if the file system is being forcibly unmounted.
Normal file systems don't trigger this problem for two reasons:
1) they don't hold on to vnode locks while idling who-knows-where, so
the race doesn't trigger
2) they aren't usually unmounted with FORCE; puffs is, in case "someone"
manages to make a crashy userspace server
Nevertheless, a real solution is slowly being braised.
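For illustration only (this is not the change committed here), the
usual way to avoid holding a vnode lock across getnewvnode(); the
identifiers are assumed:

    /* drop the held lock so a forced unmount can drain vnodes without
     * deadlocking against us, then re-take it and revalidate */
    VOP_UNLOCK(dvp, 0);
    error = getnewvnode(VT_PUFFS, mp, puffs_vnodeop_p, &vp);
    vn_lock(dvp, LK_EXCLUSIVE | LK_RETRY);
    if (error)
            return error;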
It contains the VFS attachment and userspace message-passing interface.
This work was initially started and completed for Google SoC 2005
and tweaked to work a bit better in the past few weeks. While
being far from complete, it is functional and stable enough to host
a fairly general-purpose in-memory file system in userspace. Even so,
puffs should be considered experimental: neither interface binary
compatibility, nor crash-freedom, nor the absence of security
implications should be relied upon just yet.
The GSoC project was mentored by William Studenmund and the final
review for the code was done by Christos.
vnodes were synced and processed backwards. This meant that the last
accessed node was processed first and the earliest last.
An extra benefit is the removal of the ugly hack from the Berkeley days
on LFS.
In the process, I've also replaced the various hand-written loop
variations with the TAILQ_FOREACH() macro.
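For illustration, the shape of that replacement (generic list and field
names, not LFS's own):

    /* hand-written traversal of the kind that was removed ... */
    for (vp = TAILQ_FIRST(&mp->mnt_vnodelist); vp != NULL;
        vp = TAILQ_NEXT(vp, v_mntvnodes)) {
            /* ... process vp ... */
    }

    /* ... now spelled with the queue(3) iterator macro, walking the
     * list from head to tail instead of backwards */
    TAILQ_FOREACH(vp, &mp->mnt_vnodelist, v_mntvnodes) {
            /* ... process vp ... */
    }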
the udf_verbose variable. So when something goes wrong, it can be examined
on the spot without needing to reboot with a new kernel and possibly losing
state.
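The gate looks roughly like this; the macro and category names below are
assumed, udf_verbose is the run-time knob mentioned above:

    extern int udf_verbose;                 /* settable at run time */

    #define UDF_DEBUG_VOLUMES       0x01    /* example category bit (assumed) */

    #define UDF_DPRINTF(category, args)                     \
            do {                                            \
                    if (udf_verbose & UDF_DEBUG_##category) \
                            printf args;                    \
            } while (/*CONSTCOND*/ 0)

    /* usage: UDF_DPRINTF(VOLUMES, ("anchor at sector %d\n", sector)); */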
. get rid of struct adirent which didn't match struct dirent anymore
. fix cookies, move all the code handling them to the end of the function
Includes many minor changes to the code of this function.
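A rough sketch of the "cookies at the end" shape (not the udf code
itself; the bookkeeping array is assumed):

    /* copy out the directory entries first, then hand back exactly one
     * seek cookie per entry that actually made it into the uio */
    if (ap->a_ncookies != NULL) {
            off_t *cookies;
            int i;

            cookies = malloc(ncookies * sizeof(*cookies), M_TEMP, M_WAITOK);
            for (i = 0; i < ncookies; i++)
                    cookies[i] = entry_offset[i];   /* assumed array */
            *ap->a_cookies = cookies;
            *ap->a_ncookies = ncookies;
    }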
"Add a 3rd entry in the cache, which keeps the end position
from just before extending a file.
This has the desired effect of keeping the write speed constant."
And yes, that helps a lot copying large files... always at full speed
now. This closes my PR kern/30868 "Poor performance copying large files
on msdosfs".
Also combine two if-statements that tested the same condition into one.
All that from Rhialto, thank you very much.
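For context, a sketch of what the third cache entry amounts to; the
slot layout follows msdosfs's per-denode FAT cache, but the name of the
new slot is my assumption:

    struct fatcache {
            u_long  fc_frcn;        /* file relative cluster number */
            u_long  fc_fsrcn;       /* file system relative cluster number */
    };

    #define FC_SIZE         3       /* number of cache slots; was 2 */
    #define FC_LASTMAP      0       /* last known mapping */
    #define FC_LASTFC       1       /* last cluster of the file */
    #define FC_NEXTTOLASTFC 2       /* (assumed name) end position recorded
                                       just before the file was extended */

With the old end position cached, an extending write can resume the FAT
walk near the previous end of the file instead of starting from the
beginning of the chain every time.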
If you perform this request on a directory with exactly 50 files
(plus '.' and '..' which brings the total to 52 objects), the first
reply for the SMB server completely satisfies the query (server
side is Windows 2000 Professional).
The smbfs client then performs a TRANS2_FIND_NEXT2 using the last
file name as the resume key. The response returns a SearchCount
of zero (ctx->f_ecnt == 0) and an EndOfSearch code of zero.
Any attempt to get more entries with calls to TRANS2_FIND_NEXT2
results in Badfid (bad file descriptor). I suspect that a returned
SearchCount of zero means that end-of-search has been reached and
the Sid is now closed.
The solution is to set "SMB_RDD_EOF | SMB_RDD_NOCLOSE" after getting
back a zero SearchCount. I've tested this in the field on quite a
few systems, aggressively accessing Windows shares over smbfs, and
it appears flawless.
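The change is essentially this (a sketch; the surrounding findnext code
and the f_flags field are reproduced from memory):

    if (ctx->f_ecnt == 0) {
            /* a zero SearchCount means the server has ended the search
             * and the Sid is gone: mark EOF and never send a FindClose */
            ctx->f_flags |= SMB_RDD_EOF | SMB_RDD_NOCLOSE;
            return ENOENT;
    }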
I was initially concerned about the possibility of resource exhaustion
on the Windows server. I was afraid that by not officially closing the
search, it would leave a resource hanging and, over time, exhaust
some sort of "open search table" limit. I've since convinced myself
this is NOT the case.
Windows needs to be able to handle clients that come and go over
time. If the search is not closed, Windows will close it if it
finds it needs more resources. I've tested this on directory
searches descending into tens of thousands of folders, with hundreds
of thousands of files.
it was initialised quite late due to its reliance on disc data, the mount
process could have stopped before initialising it and thus could panic
again, only now for uninitialising a not-yet-initialised pool! *sigh*
the same memory block allocated as before and it bombs out on its
descriptor pool already being initialised. It turns out that the pool was
not always destroyed. This fix ought to clean it up whatever the cause of
the mishap that results in a reject.
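The defensive shape of the fix, sketched with invented names:

    /* remember whether the descriptor pool was set up and tear it down
     * exactly once, regardless of which (error) path brought us here */
    if (ump->um_pool_initialised) {
            pool_destroy(&ump->um_desc_pool);
            ump->um_pool_initialised = 0;
    }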
253 of the superblock be zero. Searching the net failed to find any
justification for checking these bytes; all available references say
that they are part of the boot code and not BOOTSIG2 and BOOTSIG3.
Modify the MSDOS 7.1 bootsector definition to have 420 bytes of boot
code and no BOOTSIG[23], rather than 418 bytes of boot code, to follow
available references and apparent Windows practice. A test build
showed that these defines are not used other than in the check removed
by this commit.
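Roughly, the resulting layout (a sketch with approximate field names,
not copied from bootsect.h):

    struct bootsector710_sketch {
            uint8_t bsJump[3];
            uint8_t bsOEMName[8];
            uint8_t bsBPB[53];
            uint8_t bsExt[26];
            uint8_t bsBootCode[420];        /* was 418, with BOOTSIG2 and
                                               BOOTSIG3 taking the other
                                               two bytes */
            uint8_t bsBootSectSig0;         /* 0x55 */
            uint8_t bsBootSectSig1;         /* 0xaa */
    };                                      /* 512 bytes total */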
Patch tested on netbsd-3, and enabled mounting of a 4 GB CF formatted
under Windows XP and then in a digital camera. The CF was previously
unmountable.
Concept approved on tech-kern by christos@ and martin@.