Running

  git grep '\$here' tests/qemu-iotests

produces no hits, which means we are setting a variable that is
never used. It appears that commit e8f8624d removed the last use.
So run the following command to remove all of the 'here=...' lines
as dead code:

  sed -i '/^here=/d' $(git grep -l '^here=' tests/qemu-iotests)
Cc: kwolf@redhat.com
Cc: mreitz@redhat.com
Cc: eblake@redhat.com
Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Message-Id: <20181024094051.4470-3-maozhongyi@cmss.chinamobile.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: touch up commit message, reorder series, rebase to master]
Signed-off-by: Eric Blake <eblake@redhat.com>
The previous commit removed the last usage of ${tmp} inside the tests
themselves; the only remaining users are the files sourced by check. So
we can now drop this variable from the tests.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Reviewed-by: Bo Tu <tubo@linux.vnet.ibm.com>
Message-id: 1460472980-26319-4-git-send-email-silbe@linux.vnet.ibm.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
In preparation for possible automatic regression and performance
testing for the block layer, I found that the iotests no longer work
for all protocols.
In commit 1f7bf7d0 I started to change the supported protocols from
generic to file for various tests. Unfortunately, some tests added in
the meantime again declare the generic protocol although they can only
work with file because they require local file access.
Conversely, for some tests that previously supported only file, I added
the NFS protocol after confirming that they work with it.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Like qcow2 since commit 6d33e8e7, error out on invalid lengths instead
of silently truncating them to 1023.
Also, don't rely on bdrv_pread() to catch integer overflows that make
len negative; use unsigned variables in the first place.
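
As a rough sketch of the check described above (function and field
names are invented for illustration, not the actual QEMU code):

  #include <errno.h>
  #include <stdint.h>

  /* Sketch only: validate the backing file name length taken from the
   * qcow header before reading the name itself. */
  static int qcow_check_backing_len(uint32_t backing_file_size)
  {
      /* Unsigned arithmetic: a huge on-disk value cannot wrap to a
       * negative read length, so we no longer depend on bdrv_pread()
       * rejecting it. */
      if (backing_file_size > 1023) {
          return -EINVAL; /* error out instead of truncating to 1023 */
      }
      return 0;
  }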
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
A huge image size could cause s->l1_size to overflow. This can not only
cause unbounded allocations, but also the allocation of a too small L1
table, resulting in out-of-bounds array accesses (both reads and
writes). Make sure that images never require an L1 table larger than
what fits in s->l1_size.
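
A minimal sketch of such a bound, assuming the L1 entry count is
derived from the image size and the cluster/L2 shifts as in qcow; all
names are illustrative:

  #include <errno.h>
  #include <limits.h>
  #include <stdint.h>

  /* Sketch only: reject images whose L1 table would overflow or be
   * unboundedly large. Assumes cluster_bits and l2_bits have already
   * been validated by the related checks in this series. */
  static int qcow_check_l1_size(uint64_t size, int cluster_bits,
                                int l2_bits)
  {
      int shift = cluster_bits + l2_bits; /* bytes covered per L1 entry */
      uint64_t l1_size;

      /* Guard the rounding-up addition against 64-bit wraparound. */
      if (size > UINT64_MAX - (UINT64_C(1) << shift)) {
          return -EINVAL;
      }
      l1_size = (size + (UINT64_C(1) << shift) - 1) >> shift;

      /* The table of 8-byte entries must fit a sane allocation. */
      if (l1_size > INT_MAX / sizeof(uint64_t)) {
          return -EINVAL;
      }
      return 0;
  }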
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Overly large L2 table sizes cause unbounded allocations. Images
actually created by qemu-img only have 512-byte or 4k L2 tables.
To keep things consistent with cluster sizes, allow a range between
512 bytes and 64k (in fact, down to 1 entry = 8 bytes would technically
work, but L2 table sizes smaller than a cluster don't make a lot of
sense).
This also means that the number of bytes on the virtual disk that are
described by the same L2 table is limited to at most 8k * 64k = 2^29,
preemptively avoiding any integer overflows.
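
The arithmetic above reduces to a simple bounds check on the header
field; a sketch with illustrative names, assuming each L2 entry is
8 bytes (2^3), so a table of 2^l2_bits entries occupies
2^(l2_bits + 3) bytes:

  #include <errno.h>
  #include <stdint.h>

  /* Sketch only: allow L2 tables between 512 bytes (2^9) and
   * 64k (2^16). */
  static int qcow_check_l2_bits(uint32_t l2_bits)
  {
      if (l2_bits < 9 - 3 || l2_bits > 16 - 3) {
          return -EINVAL;
      }
      return 0;
  }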
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Huge values for header.cluster_bits cause unbounded allocations (e.g.
for s->cluster_cache) and crash qemu that way. Smaller, but still
large, values may survive those allocations, but can cause integer
overflows later on.
The only cluster sizes that qemu can create are 4k (for standalone
images) and 512 (for images with backing files), so we can limit the
cluster size to 64k.
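
A sketch of the resulting range check (again with illustrative names):

  #include <errno.h>
  #include <stdint.h>

  /* Sketch only: clamp the cluster size to 512 bytes..64k (2^9..2^16)
   * before it feeds any allocation such as s->cluster_cache. */
  static int qcow_check_cluster_bits(uint32_t cluster_bits)
  {
      if (cluster_bits < 9 || cluster_bits > 16) {
          return -EINVAL;
      }
      return 0;
  }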
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>