If a test has issued a quit command already (which may be useful to do
explicitly because the test wants to show its effects),
QEMUMachine.shutdown() should not do so again. Otherwise, the QMP
connection may well return ECONNRESET, which leads QEMUMachine.shutdown()
to kill the VM, which then turns into a "qemu received signal 9" line.
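One conceivable shape of the fix, as a minimal sketch (assuming the
machine tracks its child process in self._popen and its QMP connection
in self._qmp; the actual attribute names may differ):

    def shutdown(self):
        # Only send 'quit' if the VM is still running; if the test
        # already quit the guest, the QMP connection may be gone.
        if self._popen is not None and self._popen.poll() is None:
            try:
                self._qmp.cmd('quit')
            except Exception:
                # Connection already reset; avoid falling back to SIGKILL.
                pass
        self._post_shutdown()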
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Valgrind reports the use of an uninitialised buffer 'buf' instantiated
on the stack of the function guess_disk_lchs().
Pass 'read-zeroes=on' to the null block driver to make it deterministic.
The output of tests 051, 186 and 227 now includes the parameter
'read-zeroes', so their reference output files are updated accordingly.
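For illustration, this is roughly how the deterministic null driver
looks when attached via QMP from a Python iotest (a sketch; the
affected tests pass the option on the command line instead):

    # Attach a null-co node that deterministically reads back zeroes.
    result = vm.qmp('blockdev-add', **{
        'driver': 'null-co',
        'node-name': 'null0',
        'size': 64 * 1024 * 1024,
        'read-zeroes': True,
    })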
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests that the stream job exits cleanly (without abort) when the
top node is read-only and cannot be reopened read/write.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190703172813.6868-12-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
We recently removed the dependency of the stream job on its base node.
That makes it OK to use a commit filter node there. Test that.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190703172813.6868-11-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
unittest-style tests generally do not use the log file, but VM.run_job()
can still be useful to them. Add a parameter to it that hides its
output from the log file.
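Usage then looks roughly like this in a unittest-style test (a sketch,
assuming the new parameter is called use_log):

    # Drive the job to completion without polluting the log file:
    self.vm.run_job('job0', use_log=False)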
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190703172813.6868-10-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, 030 just compares the error class, which does not say
anything.
Before HEAD^ added throttling to test_overlapping_4, that test usually
failed because node2 was already gone, not because the commit and
stream jobs were not allowed to overlap.
Prevent such problems in the future by comparing the error description
instead.
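A sketch of what the stricter check looks like (the exact error text
here is made up for illustration):

    result = self.vm.qmp('block-stream', device='node4', base='node2')
    # Compare the human-readable description, not just the class:
    self.assert_qmp(result, 'error/desc',
                    "Node 'node2' is busy: block device is in use "
                    "by block job: commit")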
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190703172813.6868-9-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, TestParallelOps in 030 creates images that are too small for
job throttling to be effective. This is reflected by the fact that it
never undoes the throttling.
Increase the image size and undo the throttling when the job should be
completed. Also, add throttling in test_overlapping_4, or the jobs may
not be so overlapping after all. In fact, the error usually emitted
here is that node2 simply does not exist, not that overlapping jobs are
not allowed -- the fact that this test ignores the exact error messages
and just checks the error class is something that should be fixed in a
follow-up patch.
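In outline, the throttling and its removal look like this (node and
job names are illustrative):

    # Start the stream job throttled so it is still running while the
    # overlapping job is attempted ...
    result = self.vm.qmp('block-stream', device='node4',
                         job_id='stream-node4', speed=1024)
    self.assert_qmp(result, 'return', {})

    # ... and lift the throttling once the job should be completed.
    result = self.vm.qmp('block-job-set-speed',
                         device='stream-node4', speed=0)
    self.assert_qmp(result, 'return', {})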
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190703172813.6868-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
A recent tweak to the '-o help' output for qemu-img needs to be
reflected in the iotests' expected outputs.
Fixes: f7077c98
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The bottom node is the intermediate block device that has the base as
its backing image. It is used instead of the base node while a block
stream job is running, so as to avoid a dependency on the base, which
may change due to parallel jobs. Such a change can happen when a filter
node is inserted between the base and the intermediate bottom node,
which occurs when the base node is the top node of another commit or
stream job.
After the introduction of the bottom node, its backing child (the base)
is no longer frozen.
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1559152576-281803-4-git-send-email-andrey.shinkevich@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
It's not obvious that something named __init__.py actually houses
important code rather than just Python packaging glue. Move the
QEMUMachine and related error classes out into their own module.
Adjust users to the new import location.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20190627212816.27298-2-jsnow@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Tests should place their files into the test directory. This includes
Unix sockets. 205 currently fails to do so, which prevents it from
being run concurrently.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190618210238.9524-1-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Rewrite the implementation of the ssh block driver to use libssh instead
of libssh2. The libssh library has various advantages over libssh2:
- easier API for authentication (for example for using ssh-agent)
- easier API for known_hosts handling
- supports newer types of keys in known_hosts
Use APIs/features available in libssh 0.8 conditionally, to support
older versions (though they are not recommended).
Adjust iotest 207 for the different error message, and make it find the
default key type for localhost (so the fingerprint can be compared
properly).
Contributed-by: Max Reitz <mreitz@redhat.com>
Adjust the various Docker/Travis scripts to use libssh when available
instead of libssh2. The mingw/mxe testing is dropped for now, as there
are no packages for it.
Signed-off-by: Pino Toscano <ptoscano@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190620200840.17655-1-ptoscano@redhat.com
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 5873173.t2JhDm7DL7@lindworm.usersys.redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
512M of L1 entries is a very loose bound; only 32M are required to
store the maximal supported VMDK file size of 2TB.
Fixed qemu-iotest 059: the failure now occurs earlier, on the
impossible L1 table size.
Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com>
Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com>
Message-id: 20190620091057.47441-3-shmuel.eiderman@oracle.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
COW (even empty/zero) areas require encryption too
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190516143028.81155-1-anton.nefedov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, the "thistime" variable is not reinitialized on every loop
iteration. This leads to tests that do not yield a run time (because
they failed or were skipped) printing the run time of the previous test
that did. Fix that by reinitializing "thistime" for every test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We do not support this combination (yet), so this should yield an error
message.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190507203508.18026-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This test converts a simple image to another, but blkdebug injects
block_status and read faults at some offsets. The resulting image
should be the same as the input image, except that sectors that could
not be read have to be 0.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190507203508.18026-7-mreitz@redhat.com
Tested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[mreitz: Dropped superfluous printf from _filter_offsets, as suggested
by Vladimir; disable test for VDI and IMGOPTSSYNTAX]
Signed-off-by: Max Reitz <mreitz@redhat.com>
There are error messages which refer to an overlay node as the snapshot.
That is wrong; those are two different things.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20190603202236.1342-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The test fails at least for qcow because of different cluster sizes in
the base and the top image (and therefore different granularities of
the bitmaps we are trying to merge).
The aim of the test is to check the block-dirty-bitmap-merge
functionality between different nodes; there is no need to check all
formats. So, let's just drop support for anything except qcow2.
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190605155405.104384-1-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
In 219, we wait for the job to make progress before we emit its status.
This makes the output reliable. We do not wait for any more progress if
the job's current-progress already matches its total-progress.
Unfortunately, there is a bug: Right after the job has been started,
it's possible that total-progress is still 0. In that case, we may skip
the first progress-making step and keep ending up 64 kB short.
To fix that bug, we can simply wait for total-progress to reach 4 MB
(the image size) after starting the job.
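A sketch of the additional wait (219 drives the job via QMP; the field
names follow the query-jobs output):

    import time

    # Wait until total-progress reflects the image size (4 MiB) so the
    # first progress-making step cannot be skipped.
    while vm.qmp('query-jobs')['return'][0]['total-progress'] \
            < 4 * 1024 * 1024:
        time.sleep(0.1)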
Reported-by: Karen Mezick <kmezick@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1686651
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190516161114.27596-1-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
[mreitz: Adjusted commit message as per John's proposal]
Signed-off-by: Max Reitz <mreitz@redhat.com>
It is possible for an empty file to take up blocks on a filesystem, for
example:
$ qemu-img create -f raw test.img 1G
Formatting 'test.img', fmt=raw size=1073741824
$ mkfs.ext4 -I 128 -q test.img
$ mkdir test-mount
$ sudo mount -o loop test.img test-mount
$ sudo touch test-mount/test-file
$ stat -c 'blocks=%b' test-mount/test-file
blocks=8
These extra blocks (one cluster) are apparently used for metadata,
because they are always there, on top of blocks used for data:
$ sudo dd if=/dev/zero of=test-mount/test-file bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00135339 s, 775 MB/s
$ stat -c 'blocks=%b' test-mount/test-file
blocks=2056
Make iotest 175 take this into account.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190516144319.12570-1-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190523170643.20794-6-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
[mreitz: Moved from 250 to 256]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Don't pull events out of the queue that don't belong to us;
be choosier so that we can use this method to drive jobs that
were launched by transactions that may have more jobs.
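Conceptually, the filtering looks like this (a sketch assuming the
events_wait() helper from machine.py; the real matching logic may
differ):

    # Only consume JOB_STATUS_CHANGE events for our own job; events
    # belonging to other jobs in the transaction stay in the queue.
    event = vm.events_wait([
        ('JOB_STATUS_CHANGE', {'data': {'id': 'job0'}}),
    ])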
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190523170643.20794-5-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Cap waits to 60 seconds so that iotests can fail gracefully if something
goes wrong.
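iotests.py has a Timeout helper for this kind of guard; roughly (a
sketch):

    import signal

    class Timeout:
        def __init__(self, seconds, errmsg='Timeout'):
            self.seconds = seconds
            self.errmsg = errmsg
        def __enter__(self):
            signal.signal(signal.SIGALRM, self.timeout)
            signal.alarm(self.seconds)
            return self
        def __exit__(self, exc_type, value, traceback):
            signal.alarm(0)
            return False
        def timeout(self, signum, frame):
            raise Exception(self.errmsg)

    # Fail the test gracefully instead of hanging forever:
    with Timeout(60, 'Timeout waiting for job to complete'):
        vm.run_job('job0')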
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190523170643.20794-3-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
common.nbd's nbd_server_set_tcp_port() tries to find a free port, and
then uses it for the whole test run. However, this is racy because even
if the port was free at the beginning, there is no guarantee it will
continue to be available. Therefore, 233 currently cannot reliably be
run concurrently with other NBD TCP tests.
This patch addresses the problem by dropping nbd_server_set_tcp_port(),
and instead finding a new port every time nbd_server_start_tcp_socket()
is invoked. For this, we run qemu-nbd with --fork and on error evaluate
the output to see whether it contains "Address already in use". If so,
we try the next port.
On success, we still want to continually redirect the output from
qemu-nbd to stderr. To achieve both, we redirect qemu-nbd's stderr to a
FIFO that we then open in bash. If the parent process exits with status
0 (which means that the server has started successfully), we launch a
background cat process that copies the FIFO to stderr. On failure, we
read the whole content into a variable and then evaluate it.
While at it, use --fork in nbd_server_start_unix_socket(), too. Doing
so allows us to drop nbd_server_wait_for_*_socket().
Note that the reason common.nbd did not use --fork before is that
qemu-nbd did not have --pid-file.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190508211820.17851-6-mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
qemu_nbd_pipe() currently unconditionally reads qemu-nbd's output. That
is not ideal because qemu-nbd may keep stderr open after the parent
process has exited.
Currently, the only user of qemu_nbd_pipe() is 147, which discards the
whole output if the parent process returned success and only evaluates
it on error. Therefore, we can replace qemu_nbd_pipe() by
qemu_nbd_early_pipe() that does the same: Discard the output on success,
and return it on error.
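The replacement can be as simple as this (a sketch, not the exact
iotests.py code):

    import subprocess

    def qemu_nbd_early_pipe(*args):
        # Run qemu-nbd; discard its output on success and return it
        # only when the parent process reports an error.
        proc = subprocess.run(['qemu-nbd'] + list(args),
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT,
                              universal_newlines=True)
        if proc.returncode == 0:
            return 0, ''
        return proc.returncode, proc.stdout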
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190508211820.17851-3-mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit 70ff5b07 wanted to move the diff between actual and reference
output to the end after printing the test result line. It really only
copied it, though, so the diff is now displayed twice. Remove the old
one.
Fixes: 70ff5b07fc
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This test checks the bug in qcow2_process_discards() that was fixed by
the previous commit.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests that devices refuse to be attached to a node that has already
been moved to a different iothread if they can't be or aren't configured
to work in the same iothread.
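A rough sketch of the scenario being tested (node and object names are
illustrative, and the device would normally hang off an HBA):

    vm.qmp('object-add', qom_type='iothread', id='iothread1')
    vm.qmp('blockdev-add', driver='null-co', node_name='disk0')
    # Move the node to iothread1 first ...
    vm.qmp('x-blockdev-set-iothread', node_name='disk0',
           iothread='iothread1')
    # ... then a device that is not configured for that iothread must
    # refuse to attach instead of silently misbehaving:
    result = vm.qmp('device_add', driver='scsi-hd', drive='disk0')
    # result is expected to contain an 'error' member.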
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This makes use of qdev_prop_drive_iothread for scsi-disk so that the
disk can be attached to a node that is already in the target AioContext.
We need to check that the HBA actually supports iothreads, otherwise
scsi-disk must make sure that the node is already in the main
AioContext.
This changes the error message for conflicting iothread settings.
Previously, virtio-scsi produced the error message, now it comes from
blk_set_aio_context(). Update a test case accordingly.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The NBD server uses an AioContext notifier, so it can tolerate that its
BlockBackend is switched to a different AioContext. Before we start
actually calling bdrv_try_set_aio_context(), which checks for
consistency, outside of test cases, we need to make sure that the NBD
server actually allows this.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This patch adds a test where we cancel a throttled mirror job and
immediately close the VM before it can be cancelled. Doing so will
invoke bdrv_drain_all() while the mirror job tries to drain the
throttled node. When bdrv_drain_all_end() tries to lift its drain on
the throttle node, the job will exit and replace the current root node
of the BB drive0 (which is the job's filter node) by the throttle node.
Before the previous patch, this replacement did not increase drive0's
quiesce_counter by a sufficient amount, so when
bdrv_parent_drained_end() (invoked by bdrv_do_drained_end(), invoked by
bdrv_drain_all_end()) tried to end the drain on all of the throttle
node's parents, it decreased drive0's quiesce_counter below 0 -- which
fails an assertion.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Since commit 5daa74a6eb, bdrv_co_block_status() digs into bs->file for
an additional, more accurate search for holes inside regions that bs
reports as DATA.
This accuracy is not free: assume we have a qcow2 disk. qcow2 itself
knows where the holes are and where the data is, but every block_status
request additionally calls lseek. For a big disk full of data, any
iterative copying block job (or img convert) will call lseek(HOLE) on
every iteration, and each of these lseeks has to iterate through all
the metadata up to the end of the file. That is obviously inefficient
behavior, and for many scenarios we do not need this lseek at all.
However, lseek is needed when we have a metadata-preallocated image.
So, let's detect the metadata-preallocation case and not dig into
qcow2's protocol file otherwise.
The idea is to compare the allocation size from the filesystem's point
of view with the allocation size from qcow2's point of view (based on
refcounts). If the filesystem allocation is significantly lower,
consider it a case of metadata preallocation.
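Illustrated in Python (a hypothetical sketch of the heuristic; the
actual implementation lives in the qcow2 driver and its threshold may
differ):

    import os

    def looks_metadata_preallocated(path, qcow2_allocated_bytes):
        # Allocation from the filesystem's point of view ...
        fs_allocated = os.stat(path).st_blocks * 512
        # ... versus allocation from qcow2's point of view (refcounts).
        # A much smaller filesystem allocation suggests that only
        # metadata was preallocated.
        return fs_allocated < qcow2_allocated_bytes // 2  # made-up threshold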
Iotest 102 changed, as our detector cannot recognize a shrunken file as
metadata preallocation; this does not seem wrong, since with metadata
preallocation we always have a valid file length.
Two other iotests have a slight change in their QMP output sequence:
Active 'block-commit' returns earlier because the job coroutine yields
earlier on a blocking operation. This operation is loading the refcount
blocks in qcow2_detect_metadata_preallocation().
Suggested-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests that concurrent requests are correctly drained before making
graph modifications instead of running into assertions in
bdrv_replace_node().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This test shows that external snapshots and incremental backups are
friends.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190517152111.206494-3-vsementsov@virtuozzo.com
Signed-off-by: John Snow <jsnow@redhat.com>
We mandate that the source node must be a root node; but there's no reason
I am aware of that it needs to be restricted to such. In some cases, we need
to make sure that there's a medium present, but in the general case we can
allow the backup job itself to do the graph checking.
This patch helps improve the error message when you try to backup from
the same node more than once, which is reflected in the change to test
056.
For backups with bitmaps, it will also show a better error message that
the bitmap is in use instead of giving you something cryptic like "need
a root node."
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1707303
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190521210053.8864-1-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
If COW areas of the newly allocated clusters are zeroes on the backing
image, efficient bdrv_write_zeroes(flags=BDRV_REQ_NO_FALLBACK) can be
used on the whole cluster instead of writing explicit zero buffers later
in perform_cow().
iotest 060: a write to the discarded cluster does not trigger COW
anymore; use a backing image instead.
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Message-id: 20190516142749.81019-2-anton.nefedov@virtuozzo.com
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This attempts to clean up the output to better match the output of the
rest of the QEMU check system when called with -makecheck. This includes:
- formatting as " TEST iotest-FMT: nnn"
- only dumping config on failure (when -makecheck enabled)
The non-make check output has been cleaned up as well:
- line re-displayed (\r) at the end
- fancy colours for pass/fail/skip
- timestamps always printed (option removed)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190503143904.31211-1-alex.bennee@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Currently, all tests are in the "auto" group. This is a little bit pointless.
OTOH, we need a group for the tests that we can automatically run during
"make check" each time, too. Tests in this new group are supposed to run
with every possible QEMU configuration, for example they must run with every
QEMU binary (also non-x86), without failing when an optional feature is
missing (but reporting "skip" is ok), and be able to run on all kinds of host
filesystems and users (i.e. also as "nobody" or "root").
So let's use the "auto" group for this class of tests now. The initial
list has been determined by running the iotests with non-x86 QEMU targets
and with our CI pipelines on Gitlab, Cirrus-CI and Travis (i.e. including
macOS and FreeBSD).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190502084506.8009-7-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
A lot of tests run fine on FreeBSD and macOS, too - the limitation
to Linux here was likely just copied and pasted from other tests.
Thus remove the "_supported_os Linux" line from tests that run
successfully in our CI pipelines on FreeBSD and macOS.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20190502084506.8009-6-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
bash is installed in a different directory on non-Linux systems like
FreeBSD. Do not hard-code /bin/bash here so that the tests can run
there, too.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20190502084506.8009-4-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
qemu-system-arm, qemu-system-aarch64 and qemu-system-tricore do not have
a default machine, so when running the qemu-iotests with such a binary,
lots of tests are failing. Fix it by picking a default machine in the
"check" script instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20190502084506.8009-3-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
"check -raw 005" fails when running on certain filesystems - these do not
support such large sparse files. Use the same check as in test 220 to
skip the test in this case.
Suggested-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190502084506.8009-2-thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Sometimes, 245 fails for me because some stream job has already finished
while the test expects it to still be active. (With -c none, it fails
basically every time.) The most reliable way to fix this is to simply
set auto_finalize=false so the job will remain in the block graph as
long as we need it. This allows us to drop the rate limiting, too,
which makes the test faster.
The only problem with this is that there is a single place that yields a
different error message depending on whether the stream job is still
copying data (so COR is enabled) or not (COR has been disabled, but the
job still has the WRITE_UNCHANGED permission on the target node). We
can easily address that by expecting either error message.
Note that we do not need auto_finalize=false (or rate limiting) for the
active commit job, because it never completes without an explicit
block-job-complete anyway.
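In outline (job and node names are illustrative):

    # Keep the job in the block graph until we are done with it:
    result = vm.qmp('block-stream', device='node0', job_id='stream0',
                    **{'auto-finalize': False})

    # ... perform the checks that need the job to still exist ...

    # Then let it finish:
    vm.qmp('job-finalize', id='stream0')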
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
log() is in the current module; there is no need to prefix it. In fact,
doing so may make VM.run_job() unusable in tests that never use
iotests.log() themselves.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Sometimes we cannot tell which error message qemu will emit, and we do
not care. With this change, we can then just pass an array of all
possible messages to assert_qmp() and it will choose the right one.
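Usage then looks like this (the error texts here are made up for
illustration):

    # Either message is acceptable; assert_qmp() matches the actual
    # description against each element of the list.
    self.assert_qmp(result, 'error/desc',
                    ["Conflicts with another block job",
                     "Node 'node2' is busy"])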
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>