- Add proper TCP state tracking as described in Guido van Rooij's
paper, plus handle the TCP Window Scaling option.
- Completely rework npf_cache_t, reduce granularity, simplify code.
- Add npf_addr_t as an abstraction and amend the session handling code,
as well as the NAT code et al., to use it. The design is now prepared
for IPv6 support.
- Handle IPv4 fragments, i.e. perform packet reassembly.
- Add support for IPv4 ID randomization and minimum TTL enforcement.
- Add support for TCP MSS "clamping" (a sketch follows this list).
- Random bits for IPv6. Various fixes and clean-up.
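For illustration, a minimal sketch of the MSS clamping idea (not NPF's
actual code; all names are made up): walk the TCP options of a SYN and
rewrite an MSS that exceeds the configured maximum, leaving the
incremental checksum fixup (RFC 1624) to the caller.

#include <stdbool.h>
#include <stdint.h>

static bool
tcp_mss_clamp(uint8_t *opts, int optlen, uint16_t maxmss, uint16_t *oldmss)
{
	int i = 0;

	while (i + 1 < optlen) {
		const uint8_t kind = opts[i];

		if (kind == 0)			/* end of option list */
			break;
		if (kind == 1) {		/* no-op padding */
			i++;
			continue;
		}
		const uint8_t len = opts[i + 1];
		if (len < 2 || i + len > optlen)
			break;			/* malformed option */
		if (kind == 2 && len == 4) {	/* MSS option */
			const uint16_t mss = (opts[i + 2] << 8) | opts[i + 3];
			if (mss > maxmss) {
				*oldmss = mss;
				opts[i + 2] = maxmss >> 8;
				opts[i + 3] = maxmss & 0xff;
				return true;	/* caller updates th_sum */
			}
			return false;
		}
		i += len;
	}
	return false;
}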
+ add support for partial blocks, defined in RFC 4880 and used fairly
extensively by GnuPG when the input size is not known in advance
(e.g. for encrypted compressed data, as produced by default by gpg -e)
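As a rough illustration of the wire format (a sketch of RFC 4880
section 4.2.2, not the code in this commit): a new-format length octet
in [224, 254] announces a partial chunk of 1 << (octet & 0x1F) octets,
and chunks repeat until a regular one-, two- or five-octet length
closes the packet, so the writer never needs the total size up front.

#include <stddef.h>
#include <stdint.h>

/*
 * Decode one new-format length field.  Returns the number of length
 * octets consumed, or 0 if 'avail' is too short; sets *partial for a
 * partial body length.
 */
static size_t
pgp_decode_len(const uint8_t *p, size_t avail, uint32_t *len, int *partial)
{
	if (avail < 1)
		return 0;
	*partial = 0;
	if (p[0] < 192) {			/* one-octet length */
		*len = p[0];
		return 1;
	}
	if (p[0] < 224) {			/* two-octet length */
		if (avail < 2)
			return 0;
		*len = ((uint32_t)(p[0] - 192) << 8) + p[1] + 192;
		return 2;
	}
	if (p[0] < 255) {			/* partial body length */
		*len = (uint32_t)1 << (p[0] & 0x1f);
		*partial = 1;
		return 1;
	}
	if (avail < 5)				/* five-octet length */
		return 0;
	*len = ((uint32_t)p[1] << 24) | ((uint32_t)p[2] << 16) |
	    ((uint32_t)p[3] << 8) | p[4];
	return 5;
}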
local definitions of ASM_PREFERRED_EH_DATA_FORMAT and
ASM_MAYBE_OUTPUT_ENCODED_ADDR_RTX, and make it obvious we're not using
a local ASM_OUTPUT_INTERNAL_LABEL.
This fixes the current build problems (and probably more).
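For context, such target-header overrides usually take this shape (a
hypothetical sketch of the common ELF idiom, not the exact definitions
touched here):

/* Prefer PC-relative encodings for EH data when generating PIC. */
#define ASM_PREFERRED_EH_DATA_FORMAT(CODE, GLOBAL)		\
  (flag_pic							\
   ? (((GLOBAL) ? DW_EH_PE_indirect : 0)			\
      | DW_EH_PE_pcrel | DW_EH_PE_sdata4)			\
   : DW_EH_PE_absptr)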
commit to this file: we must restore the PID value (that is, the
current address space ID) before touching memory, or the memory writes
might go to arbitrary wrong places or fault.
I'm not completely convinced this function (or the other functions in
this file) handles pipeline hazards safely, but I don't have
authoritative mips1 documentation any more, so I'm not going to meddle.
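A hypothetical sketch of the invariant described above (the function
name is made up; EntryHi is CP0 register 10 and holds the PID/ASID on
mips1):

#include <stdint.h>

static inline void
mips1_restore_asid(uint32_t saved_entryhi)
{
	/* Restore the address space ID before any mapped-memory access. */
	__asm__ __volatile__(
	    "mtc0	%0, $10\n\t"	/* EntryHi <- saved PID */
	    "nop\n\t"			/* pad the cp0 write hazard */
	    "nop"
	    : : "r" (saved_entryhi) : "memory");
}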
Fix a deadlock where one thread has busy pages and wants the wapbl
lock as reader from wapbl_begin(), another thread has the wapbl lock
as reader and waits for a page from the first thread, and a third
thread then calls wapbl_flush() and wants the wapbl lock as writer.
Move the wapbl_begin() up to a point where genfs_getpages() has no busy
pages yet.
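Schematically, the new ordering looks like this (stand-in names, not
the actual genfs code):

struct wapbl;
struct vnode;

int  wapbl_begin(struct wapbl *, const char *, int);
void wapbl_end(struct wapbl *);
void busy_pages(struct vnode *);
void unbusy_pages(struct vnode *);

static int
getpages_path(struct wapbl *wl, struct vnode *vp)
{
	/*
	 * Take the wapbl lock as reader *before* busying any pages, so
	 * a writer queued behind us can never wait on pages we hold.
	 */
	int error = wapbl_begin(wl, __FILE__, __LINE__);
	if (error)
		return error;
	busy_pages(vp);
	/* ... do the paging work ... */
	unbusy_pages(vp);
	wapbl_end(wl);
	return 0;
}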
protect the wl_dealloc* members. Take the mutex here and change the
lock requirements of these fields to "writer lock or mutex".
This error led to file system corruption and "freeing free block"
panics.
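A minimal sketch of that rule with pthreads stand-ins for the kernel
primitives: a path holding only the reader lock must take the mutex
before touching the dealloc fields.

#include <pthread.h>
#include <stdint.h>

struct journal {
	pthread_mutex_t	mtx;
	uint64_t	dealloc_blks[64];	/* "writer lock or mutex" */
	int		dealloc_cnt;		/* "writer lock or mutex" */
};

/* Called with the journal rwlock held only as reader. */
static int
register_deallocation(struct journal *j, uint64_t blk)
{
	int ok = 0;

	pthread_mutex_lock(&j->mtx);
	if (j->dealloc_cnt < 64) {
		j->dealloc_blks[j->dealloc_cnt++] = blk;
		ok = 1;
	}
	pthread_mutex_unlock(&j->mtx);
	return ok;
}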
Yesterday I thought I had committed the increased timeout, and when
the test was still failing for the autotests n hours later I noticed
I had actually failed to commit it. I did manage to commit something
in the evening, but the autotests were still failing this morning, so
I noticed I had increased the timeout of the wrong test. I wonder
what will go wrong this time...
(and p.s.: 10240 is probably slow because it's O(n^2) with quite a
large constant)
The server must of course have some disks configured. Let's say
we have this simple server whose disks are a few sparse host files:
#include <rump/rump.h>
#include <unistd.h>

int
main(void)
{
	rump_init();
	rump_pub_etfs_register("/disk1", "./disk1.img", RUMP_ETFS_BLK);
	rump_pub_etfs_register("/disk2", "./disk2.img", RUMP_ETFS_BLK);
	rump_pub_etfs_register("/disk3", "./disk3.img", RUMP_ETFS_BLK);
	rump_pub_etfs_register("/disk4", "./disk4.img", RUMP_ETFS_BLK);
	pause();
	return 0;
}
And we run the server:
mainbus0 (root)
Kernelized RAIDframe activated
/disk1: hostpath ./disk1.img (97 GB)
/disk2: hostpath ./disk2.img (97 GB)
/disk3: hostpath ./disk3.img (97 GB)
/disk4: hostpath ./disk4.img (97 GB)
We can then configure the raid against the server:
> ./raidctl -c theraid.conf raid0
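theraid.conf could look roughly like this (a plausible raidctl(8)
config matching the four components above, not necessarily the file
actually used):

START array
# numRow numCol numSpare
1 4 0

START disks
/disk1
/disk2
/disk3
/disk4

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100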
And lo, we have evidence of a level 1 RAID in the server dmesg:
raid0: RAID Level 1
raid0: Components: /disk1 /disk2 /disk3 /disk4
raid0: Total Sectors: 409599744 (199999 MB)
Yea, I initialized it already in a previous run:
> ./raidctl -S raid0
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.