to ensure it's been properly extended. Clears up some problems at certain
block sizes which showed up during creation of atf tests, which is done
using file-backed file systems.
when testing that the last sector of the new size is writeable, make
sure we're ACTUALLY writing in the new space, instead of possibly
overwriting something in the existing fs.
Discovered while writing tests - tests which uncovered file corruption at
certain block sizes.
XXX should rewrite writeat() to expect fs blocks instead of disk blocks.
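A minimal sketch of such a probe (hypothetical helper and names, not the
actual resize_ffs code, which goes through writeat()):

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Hypothetical illustration only: write one sector at the very end of the
 * grown size, so the probe lands in the newly added space instead of
 * overwriting blocks of the existing file system.
 */
static int
probe_last_sector(int fd, uint64_t newsize_secs, uint32_t secsize)
{
        char buf[8192];         /* assumes secsize <= 8192 */
        off_t off = (off_t)(newsize_secs - 1) * secsize;

        if (secsize > sizeof(buf))
                return -1;
        memset(buf, 0, secsize);
        return pwrite(fd, buf, secsize, off) == (ssize_t)secsize ? 0 : -1;
}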
OK mlelstv@
for KEYGEN_RANDOMKEY.
Print a warning if such a refusal is made---this will help the user understand
why there is an error.
Patch provided by: Taylor R Campbell <campbell+netbsd@mumble.net>.
- sync usage comment with current reality
- sort includes
- wrap lines
- use EXIT_FAILURE consistently
- make error messages consistent: Cannot->Can't
- Remove "Old FFSv1 macros" in favor of system macros in ufs/ffs/fs.h.
Leave dblksize() because it uses the on-disk dinode structure.
More cleanup is needed.
No functional changes intended.
were commented out with XXX and a notation to "fix once fsck is fixed."
fsck seems to have been fixed for this particular issue sometime in the
7 years since the code was brought into the tree.
Update cg_old_niblk instead of cg_niblk, since this tool
currently supports FFSv1 only.
With these two changes, I can grow a file system and have the result
be clean according to fsck_ffs. Shrinking still results in an unclean
file system.
OK mhitch@
While I'm here, fix a typo in an error message.
The server must of course have some disks configured. Let's say
we have this simple server with disks as a few sparse host files:
#include <rump/rump.h>
#include <unistd.h>

int
main(void)
{
        /* bootstrap the rump kernel in this process */
        rump_init();
        /* expose the host image files as block devices in the rump namespace */
        rump_pub_etfs_register("/disk1", "./disk1.img", RUMP_ETFS_BLK);
        rump_pub_etfs_register("/disk2", "./disk2.img", RUMP_ETFS_BLK);
        rump_pub_etfs_register("/disk3", "./disk3.img", RUMP_ETFS_BLK);
        rump_pub_etfs_register("/disk4", "./disk4.img", RUMP_ETFS_BLK);
        /* keep the server alive */
        pause();
}
And we run the server:
mainbus0 (root)
Kernelized RAIDframe activated
/disk1: hostpath ./disk1.img (97 GB)
/disk2: hostpath ./disk2.img (97 GB)
/disk3: hostpath ./disk3.img (97 GB)
/disk4: hostpath ./disk4.img (97 GB)
We can then configure the raid against the server:
> ./raidctl -c theraid.conf raid0
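A minimal theraid.conf for four components would look roughly like this
(illustrative values only; see raidctl(8) for the exact format):

START array
# numRow numCol numSpare
1 4 0

START disks
/disk1
/disk2
/disk3
/disk4

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100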
And lo, we have evidence of a level1 raid in the server dmesg:
raid0: RAID Level 1
raid0: Components: /disk1 /disk2 /disk3 /disk4
raid0: Total Sectors: 409599744 (199999 MB)
yea, i initialized it already in a previous run:
> ./raidctl -S raid0
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
current testing purpose is to create a file system with
block size > MAXPHYS.
(the check doesn't make that much sense anyway in these days of
mobile file systems, since we're interested in MAXPHYS where we
attempt to mount the file system, not where we happen to create it)
This is useful for automated environments where everything (rpcbind,
mountd, nfsd and the client) is started in parallel in a split
second and there is a small chance we will race in there before
everything has been communicated to rpcbind.
we're ELF now, and there are many missing checks against OBJECT_FMT.
if we ever consider switching, then we can figure out what new ones
we need, but for now it's just clutter.
this doesn't remove any of the support for exec_aout or any actually
required-for-boot a.out support, only the ability to build a netbsd
release in a.out format. ie, most of this code has been dead for
over a decade.
i've tested builds on vax, amd64, i386, mac68k, macppc, sparc, atari,
amiga, shark, cats, dreamcast, landisk, mmeye and x68k. this covers
the 5 MACHINE_ARCH's affected, and all the other arch code touched.
it also includes some actual run-time testing of sparc, i386 and
shark, and i performed binary comparison upon amiga and x68k as well.
some minor relevant details:
- move shlib.[ch] from ld.aout_so into ldconfig proper, and cut them
down to only the parts ldconfig needs
- remove various unused source files
- switch amiga bootblocks to using elf2bb.h instead of aout2bb.h
tells whether it should detect and convert to binary a hexadecimal octet
string of the form 0x0123ABab, or leave those strings undecoded.
If the argument for a 'media', 'mediamode', 'mediaopt', '-mediaopt',
'nwkey', or 'bssid' keyword is a hexadecimal octet string, do not detect
and decode it. (Note that setifnwkey decodes hexadecimal strings on its
own.)
This fixes a bug noticed by Jim Miller where the trailing zero-octets
were discarded from hexadecimal octet-string arguments for 'nwkey'.
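A sketch of per-octet decoding (hypothetical helper, not the actual
ifconfig code), which keeps the full octet count, trailing zero octets
included:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/*
 * Hypothetical illustration: decode "0x...." pairwise into bytes and
 * return the octet count, so "0x01230000" yields 4 octets, not 2.
 */
static ssize_t
decode_hexstr(const char *s, uint8_t *buf, size_t buflen)
{
        size_t i, len;

        if (s[0] != '0' || (s[1] != 'x' && s[1] != 'X'))
                return -1;
        s += 2;
        len = strlen(s);
        if (len == 0 || len % 2 != 0 || len / 2 > buflen)
                return -1;
        for (i = 0; i < len / 2; i++) {
                char hex[3] = { s[2 * i], s[2 * i + 1], '\0' };
                char *ep;
                unsigned long v = strtoul(hex, &ep, 16);

                if (*ep != '\0')
                        return -1;
                buf[i] = (uint8_t)v;
        }
        return (ssize_t)(len / 2);
}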
a live file system.
While here, modify snap_open() to accept a character device as its
first arg and remove the now-unneeded get_snap_device().
Reviewed by: Manuel Bouyer <bouyer@netbsd.org>
destroy / create to fail. When reading the GPT label from the end of the disk,
ignore errors if the GPT label at the beginning of the disk was not found.
ramdisks and prefer disklabel elsewhere.
Based on discussion on affected port lists (port-sparc port-sparc64
port-sun3 port-sun2 port-atari port-mvme68k).
All listed ports plus amd64 were test-built after the change.
- drop the notion of frags (LFS fragments) vs fsb (FFS fragments)
The code uses a complicated unity function that just makes the
code difficult to understand.
- support larger sector sizes. Fix disk address computations
to use DEV_BSIZE in the kernel as required by device drivers
and to use sector sizes in userland.
- Fix several locking bugs in lfs_bio.c and lfs_subr.c.
In particular:
- newfs will not try to erase the label
- fsck_ffs will not try to validate the label
This lets newfs and fsck work on 2048-byte-per-sector media.
Does Apple UFS support such media and how?
uint64_t usermem. This only becomes relevant if you have several TB of RAM.
Promoting cachebufs to uint64_t is not necessary as it gets limited to
(currently) 512 anyway.
fixes the last issue of PR: 19852
information to atactl identify output.
Also:
- remove caddr_t cast
- warn about invalid IDENTIFY data checksum (when possible)
- humanize capacity in power-of-10 format
- remove semi-pointless ATAPI check
- slightly rework command queue depth output to be less conversational
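For the power-of-10 capacity output, something along these lines would do
it (a sketch, not the actual atactl code; assumes NetBSD's
humanize_number(3) from <util.h>):

#include <sys/types.h>
#include <inttypes.h>
#include <stdio.h>
#include <util.h>

/*
 * Illustrative only: format a disk capacity with power-of-10 prefixes
 * (kB/MB/GB/TB) via humanize_number(3) and HN_DIVISOR_1000.
 */
static void
print_capacity(uint64_t sectors, uint32_t secsize)
{
        char hbuf[12];

        if (humanize_number(hbuf, sizeof(hbuf),
            (int64_t)(sectors * secsize), "B",
            HN_AUTOSCALE, HN_DIVISOR_1000) == -1)
                snprintf(hbuf, sizeof(hbuf), "?");
        printf("device capacity: %" PRIu64 " sectors, %s\n", sectors, hbuf);
}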
(treating a target disk as a regular file and suppressing ioctl(2)s)
on reading/writing disklabel in a target file.
This allows a cross-build environment to create bootable disk images
for targets with a different endianness.
No functional changes to native (non-tools) disklabel(8) command.
Closes PR toolchain/42357.
Info can be specified with the -A parameter.
Default is based on how the first partition is defined.
For empty disks larger than 128GB (arbitrary figure) use 1MB alignment.
Rename (with #defines) the variables used for aligning partitions to
separate them from the bios geometry.
All in advance of allowing other partition alignments (eg 2048 sectors).
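The alignment itself is just a round-up; a minimal sketch (hypothetical
helper, not the fdisk code):

#include <stdint.h>

/*
 * Hypothetical illustration: round a starting sector up to the next
 * alignment boundary, e.g. 2048 sectors = 1 MiB with 512-byte sectors.
 */
static uint64_t
align_up(uint64_t start, uint64_t align)
{
        return ((start + align - 1) / align) * align;
}
/* align_up(63, 2048) == 2048; align_up(2048, 2048) == 2048 */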
"marked clean" after however much inactivity; it is *actually* clean
as soon as the component disks all do their thing (on the order of ms,
usually), just the same as before.
The bikeshed is now less of a taupe and more of an ecru.
>> Allow MB, GB and CYL (not just M, G and C) and lower case.
>> Don't output a spurious 'd' before "cyl".
>> Fixes PR/37414.
XXX "NNcy" is also allowed?
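A sketch of the kind of suffix handling this implies (hypothetical helper,
not the actual fdisk parser):

#include <stdint.h>
#include <strings.h>

/*
 * Hypothetical illustration: map a size suffix to a multiplier in
 * sectors, accepting the long forms and any case.
 */
static int
suffix_to_sectors(const char *sfx, uint64_t secsize, uint64_t secpercyl,
    uint64_t *mult)
{
        if (strcasecmp(sfx, "m") == 0 || strcasecmp(sfx, "mb") == 0)
                *mult = (1024 * 1024) / secsize;
        else if (strcasecmp(sfx, "g") == 0 || strcasecmp(sfx, "gb") == 0)
                *mult = (1024 * 1024 * 1024) / secsize;
        else if (strcasecmp(sfx, "c") == 0 || strcasecmp(sfx, "cyl") == 0)
                *mult = secpercyl;
        else
                return -1;
        return 0;
}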
Instead, use a proper macro defined in the Makefile per ${MACHINE_ARCH}.
__${MACHINE_ARCH}__ doesn't represent the architecture of the tool's target
but the architecture of the binaries being compiled, so required features
are not properly enabled, or are unintentionally enabled, on certain host
and target combinations during the src/tools build.
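As a hypothetical illustration of the pattern (macro name invented): the
tool's Makefile would pass something like CPPFLAGS+=-DTOOL_TARGET_FEATURE
for the relevant ${MACHINE_ARCH} values, and the source tests that instead
of compiler-provided macros such as __i386__:

/*
 * TOOL_TARGET_FEATURE is a hypothetical macro supplied by the Makefile
 * per ${MACHINE_ARCH}.  Testing __i386__ here would describe the host
 * the tool runs on, not the target it is building for.
 */
#ifdef TOOL_TARGET_FEATURE
/* target-specific code path */
#endif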
- Use %.40g rather than %g when printing sectors and MB for existing
partition size/offset.
Changes [1.93802e+06c, 1953525105s, 953870M]:
to: [1938021c, 1953525105s, 953869.6875M]:
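The difference comes from %g defaulting to 6 significant digits; a quick
standalone check:

#include <stdio.h>

int
main(void)
{
        /* %g rounds to 6 significant digits and may switch to e-notation;
         * %.40g prints this value exactly (it is exactly representable). */
        printf("[%gc]\n", 1938021.0);           /* [1.93802e+06c] */
        printf("[%.40gc]\n", 1938021.0);        /* [1938021c] */
        return 0;
}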