1. Don't force use of "for" when "while" works better.
2. No need to check c != '\0' when we also check (c == ' ' || c == '\t')
3. Use the size of the buffer we're using, rather than a different one
(not really a concern, they're the same size)
5. Don't use fscanf() to read file data, use fgets() & sscanf() (see the
sketch below).
5. After using a pointer as a char *, validate alignment before switching
to int * (can only fail if kernel #define gets set stupidly). Or #6...
6. Validate sparemap file name isn't too long for assigned space.
7. recognise that strlen() returns size_t - don't shove it into an int.
8. On out of mem, be more clear which allocation failed in warning msg.
ATF tests all pass. But I don't think they use sparemap files.
Coincidentally, the change also works around a gcc 5.1 bug which causes
a segmentation fault when trying to compile the longer version (guess
the compiler got exhausted, or something).
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66345
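For illustration, a minimal sketch of the fgets()/sscanf() pattern from
item 4, which also keeps the strlen() result in a size_t per item 7; the
function name, field layout, and buffer sizes here are assumptions for the
example, not the real sparemap format:

#include <stdio.h>
#include <string.h>

/*
 * Read one "name blockno" entry from an already-open file.
 * Returns 0 on success, -1 on EOF, malformed input, or a too-long name.
 */
static int
read_spare_entry(FILE *fp, char *name, size_t namelen, int *blkno)
{
	char line[1024], word[1024];

	/* fgets() bounds the read to one line; no fscanf() on file data */
	if (fgets(line, sizeof(line), fp) == NULL)
		return -1;
	if (sscanf(line, "%1023s %d", word, blkno) != 2)
		return -1;
	/* strlen() returns a size_t; compare it to a size_t, not an int */
	if (strlen(word) >= namelen)
		return -1;
	strcpy(name, word);
	return 0;
}

The same shape also covers item 6: a too-long name is rejected instead of
overflowing the caller's buffer.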
Replace atoi(3) with strtol(3), and check that numbers are valid,
positive, and in int32_t range. The previous lack of checking could
silently lead to the same serial being set on all RAID volumes, for
instance because the given numbers were bigger than INT_MAX. The
consequence was an awful mess when RAIDframe would mix up volumes...
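Roughly the shape of the new check (a sketch with a hypothetical helper
name, not the committed code):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a positive decimal number that must fit in an int32_t. */
static int
parse_positive_int32(const char *s, int32_t *out)
{
	char *ep;
	long val;

	errno = 0;
	val = strtol(s, &ep, 10);
	if (ep == s || *ep != '\0')	/* empty string or trailing junk */
		return -1;
	if (errno == ERANGE || val <= 0 || val > INT32_MAX)
		return -1;
	*out = (int32_t)val;
	return 0;
}

Unlike atoi(3), this rejects trailing junk and anything outside
(0, INT32_MAX] instead of silently wrapping or truncating.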
component in a failed RAID set in order to avoid potentially configuring
RAID 1 sets with erroneous values taken from random extent data in the
replacement component partitions.
The server must of course have some disks configured. Let's say
we have this simple server, with its disks backed by a few sparse host files:
#include <unistd.h>
#include <rump/rump.h>

int
main(void)
{
	/* bootstrap the rump kernel in this process */
	rump_init();

	/* expose the host image files as block devices inside the rump kernel */
	rump_pub_etfs_register("/disk1", "./disk1.img", RUMP_ETFS_BLK);
	rump_pub_etfs_register("/disk2", "./disk2.img", RUMP_ETFS_BLK);
	rump_pub_etfs_register("/disk3", "./disk3.img", RUMP_ETFS_BLK);
	rump_pub_etfs_register("/disk4", "./disk4.img", RUMP_ETFS_BLK);

	pause();
}
And we run the server:
mainbus0 (root)
Kernelized RAIDframe activated
/disk1: hostpath ./disk1.img (97 GB)
/disk2: hostpath ./disk2.img (97 GB)
/disk3: hostpath ./disk3.img (97 GB)
/disk4: hostpath ./disk4.img (97 GB)
We can then configure the RAID against the server:
> ./raidctl -c theraid.conf raid0
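theraid.conf itself isn't shown here, but as a rough sketch a level 1
configuration over the four disks could look something like this (the
layout and queue numbers are illustrative guesses; see raidctl(8) for the
real format):

START array
# numRow numCol numSpare
1 4 0

START disks
/disk1
/disk2
/disk3
/disk4

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100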
And lo, we have evidence of a level 1 RAID in the server dmesg:
raid0: RAID Level 1
raid0: Components: /disk1 /disk2 /disk3 /disk4
raid0: Total Sectors: 409599744 (199999 MB)
Yea, I initialized it already in a previous run:
> ./raidctl -S raid0
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
"marked clean" after however much inactivity; it is *actually* clean
as soon as the component disks all do their thing (on the order of ms,
usually), just the same as before.
The bikeshed is now less of a taupe and more of an ecru.
Drastically reduces the amount of time spent rewriting parity after an
unclean shutdown by keeping better track of which regions might have had
outstanding writes. Enabled by default; can be disabled on a per-set
basis, or tuned, with the new raidctl(8) commands.
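Something along these lines, assuming the new knobs are the -m (show
parity map status) and -M (enable/disable/tune) options of raidctl(8):

raidctl -m raid0        # show the parity map status for this set
raidctl -M no raid0     # turn the parity map off for this set
raidctl -M yes raid0    # turn it back on

Tuning the map parameters (number of regions and the like) presumably
goes through -M as well.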
Discussed on tech-kern@ to a general air of approval; exhortations to
commit from mrg@, christos@, and others.
Thanks to Google for their sponsorship, oster@ for mentoring the
project, assorted developers for trying very hard to break it, and
probably more I'm forgetting.
component label. raidctl(8) should now print the correct number of
blocks for RAID sets larger than 1TB.
Patch supplied by Bernhard Moellemann in PR bin/40479.