It's not enough to order the kwargs for consistent QMP log output;
we must also sort any sub-dictionaries in lists that appear as values.
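Roughly, the idea is the following (a minimal sketch with a made-up
helper name, not the actual iotests filter):

    # Hypothetical helper: recursively order dict keys, including dicts
    # nested inside lists, so that the logged QMP arguments are stable.
    def ordered(obj):
        if isinstance(obj, dict):
            return {k: ordered(obj[k]) for k in sorted(obj)}
        if isinstance(obj, list):
            return [ordered(elem) for elem in obj]
        return obj

    # The nested dict inside the list gets its keys sorted, too:
    print(ordered({'b': 1, 'a': [{'z': 0, 'y': 1}]}))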
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Without this filter, this test sometimes fails.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch forbids attaching a disk to a SCSI device if it's using a
different AioContext. Test case included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This fixes a crash when attaching two disks with the same blockdev to
a SCSI device that is using iothreads. Test case included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This fixes a crash when attaching a disk to a SCSI device using
iothreads, then detaching it and reattaching it again. Test case
included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently qemu_uuid_bswap() takes a pointer to the QemuUUID to
be byte-swapped. This means it can't be used when the UUID
to be swapped is in a packed member of a struct. It's also
out of line with the general bswap*() functions we provide
in bswap.h, which take the value to be swapped and return it.
Make qemu_uuid_bswap() take a QemuUUID and return the swapped version.
This fixes some clang warnings about taking the address of
a packed struct member in block/vdi.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Clarify that the number of extents provided in BlockdevCreateOptionsVmdk
must match the number of extents that will actually be used. Providing
more extents will result in an error now.
This requires adapting the test case to provide the right number of
extents.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
This test waits for a MIGRATION event with status=completed on the
source VM before querying the migration status on both source and
destination. However, just because the source says migration has
completed does not mean the destination thinks the same. Therefore, in
some cases, the destination VM may still report "active" instead of
"completed" when asked for its migration status.
Fix this by enabling migration events on both VMs and waiting until both
source and destination emit a status=completed MIGRATION event.
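Roughly (the event dicts and the helper below are illustrative, not the
actual iotests API):

    # Consider migration done only when *both* sides have reported a
    # MIGRATION event with status 'completed'.
    def both_completed(source_events, dest_events):
        def completed(events):
            return any(e.get('data', {}).get('status') == 'completed'
                       for e in events)
        return completed(source_events) and completed(dest_events)

    src = [{'event': 'MIGRATION', 'data': {'status': 'completed'}}]
    dst = [{'event': 'MIGRATION', 'data': {'status': 'active'}}]
    print(both_completed(src, dst))   # False: keep waiting for the destination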
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In the block layer, synchronous APIs are often implemented by creating a
coroutine that calls the asynchronous coroutine-based implementation and
then waiting for completion with BDRV_POLL_WHILE().
For this to work with iothreads (more specifically, when the synchronous
API is called in a thread that is not the home thread of the block
device, so that the coroutine will run in a different thread), we must
make sure to call aio_wait_kick() at the end of the operation. Many
places are missing this, so that BDRV_POLL_WHILE() keeps hanging even if
the condition has long become false.
Note that bdrv_dec_in_flight() involves an aio_wait_kick() call. This
corresponds to the BDRV_POLL_WHILE() in the drain functions, but it is
generally not enough for most other operations because they haven't set
the return value in the coroutine entry stub yet. To avoid race
conditions there, we need to kick after setting the return value.
The race window is small enough that the problem doesn't usually surface
in the common path. However, it does surface and causes easily
reproducible hangs if the operation can return early before even calling
bdrv_inc/dec_in_flight, which many of them do (trivial error or no-op
success paths).
The bug in bdrv_truncate(), bdrv_check() and bdrv_invalidate_cache() is
slightly different: These functions even neglected to schedule the
coroutine in the home thread of the node. This avoids the hang, but is
obviously wrong, too. Fix those to schedule the coroutine in the right
AioContext in addition to adding aio_wait_kick() calls.
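As a loose analogy in Python (plain threading instead of QEMU's
AioContext/coroutine machinery; all names below are invented), the
completion side has to publish the return value first and only then
wake up the polling side:

    import threading

    class Op:
        def __init__(self):
            self.cond = threading.Condition()
            self.done = False
            self.ret = None

        def complete(self, ret):
            with self.cond:
                self.ret = ret           # set the return value first ...
                self.done = True
                self.cond.notify_all()   # ... then kick the waiter
                                         # (the aio_wait_kick() analogue)

        def wait(self):
            with self.cond:
                while not self.done:     # the BDRV_POLL_WHILE() analogue
                    self.cond.wait()
                return self.ret

    op = Op()
    threading.Thread(target=op.complete, args=(0,)).start()
    print(op.wait())                     # prints 0 instead of hanging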
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Recently, some bugs in the dmg driver have been fixed. To prevent dmg
reading from breaking again someday in the future, add a simple test
which ensures that the conversion from dmg to raw does not hang or hit
any I/O error.
Signed-off-by: yuchenlin <npes87184@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The mirror_start_job() function used for the commit-active job blocks
the source, target and all intermediate nodes for the duration of the
job.
target <- intermediate <- source
Since 4ef85a9c23 this function creates a dummy mirror_top_bs that
goes on top of the source node, and it is this dummy node that gets
blocked instead. The source node is never blocked or added to the
job's list of nodes.
target <- intermediate <- source <- mirror_top
At the moment I don't think it is possible to exploit this problem
because any additional job on 'source' would either be forbidden for
other reasons or it would need to involve an additional node that is
blocked, causing an error.
This can be seen in the error messages, however, because they never
refer to the source node being blocked:
$ qemu-img create -f qcow2 hd0.qcow2 1M
$ qemu-img create -f qcow2 -b hd0.qcow2 hd1.qcow2
$ qemu-io -c 'write 0 1M' hd0.qcow2
$ $QEMU -drive if=none,file=hd1.qcow2,node-name=hd1
{ "execute": "qmp_capabilities" }
{ "execute": "block-commit", "arguments": {"device": "hd1", "speed": 256}}
{ "execute": "block-stream", "arguments": {"device": "hd1"}}
{ "error": {"class": "GenericError",
"desc": "Node 'hd0' is busy: block device is in use by block job: commit"}}
After this patch the error message refers to 'hd1', as it should.
The expected output of iotest 141 also needs to be updated for the
same reason.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
To do this, we need to allow creating the NBD server on various ports
instead of a single one (which may not even work if you run just one
instance, because something else entirely might be using that port).
So we just pick a random port in [32768, 32768 + 1024) and try to create
a server there. If that fails, we just retry until something sticks.
For the IPv6 test, we need a different range, though (just above that
one). This is because "localhost" resolves to both 127.0.0.1 and ::1.
This means that if you bind to it, it will bind to both, if possible, or
just one if the other is already in use. Therefore, if the IPv6 test
has already taken [::1]:some_port and we then try to take
localhost:some_port, that will work -- but the second server will be
bound to 127.0.0.1:some_port only, not to [::1]:some_port in addition.
So we have two different servers on the same port, one for IPv4 and one
for IPv6.
But when we then try to connect to the server through
localhost:some_port, we will always end up at the IPv6 one (as long as
it is up), and this may not be the one we want.
Thus, we must make sure not to create an IPv6-only NBD server on the
same port as a normal "dual-stack" NBD server -- which is done by using
distinct port ranges, as explained above.
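The retry idea, sketched with plain sockets (the real test retries
qemu-nbd itself; names below are illustrative):

    import random
    import socket

    def find_free_port(base=32768, span=1024, attempts=100):
        """Pick random ports in [base, base + span) until one can be bound."""
        for _ in range(attempts):
            port = random.randrange(base, base + span)
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.bind(('127.0.0.1', port))   # stand-in for starting the server
            except OSError:
                s.close()
                continue                      # port taken, try another one
            s.close()
            return port
        raise RuntimeError('no free port found')

    print(find_free_port())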
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-4-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
By default, qemu-nbd binds to 0.0.0.0. However, we then proceed to
connect to "localhost". Usually, this works out fine; but if this test
is run concurrently, some other test function may have bound a different
server to ::1 (on the same port -- you can bind different servers to the
same port, as long as one is on IPv4 and the other on IPv6).
So running qemu-nbd works; it can bind to 0.0.0.0:NBD_PORT. But
potentially a concurrent test has successfully taken [::1]:NBD_PORT. In
this case, trying to connect to "localhost" will lead us to the IPv6
instance, where we do not want to end up.
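The same-port behaviour can be demonstrated with plain sockets
(illustration only, not test code):

    import socket

    PORT = 10850   # arbitrary example port

    # A server already sitting on [::1]:PORT ...
    v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    v6.bind(('::1', PORT))
    v6.listen()

    # ... does not stop another server from binding 0.0.0.0:PORT.
    v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    v4.bind(('0.0.0.0', PORT))
    v4.listen()

    # A client resolving "localhost" may now reach either of the two,
    # depending on whether it tries ::1 or 127.0.0.1 first.
    print('both servers are listening on port', PORT)
    v4.close()
    v6.close()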
Fix this by just binding to "localhost". This will make qemu-nbd error
out immediately and not give us cryptic errors later.
(Also, it will allow us to just try a different port in a future
patch.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
In some cases, we may want to deal with qemu-nbd errors (e.g. by
launching it in a different configuration until it no longer throws
any). In that case, we do not want its output ending up in the test
output.
It may still be useful for handling the error, though, so add a new
function that works basically like qemu_nbd(), except that it returns the
qemu-nbd output instead of making it end up in the log. In contrast to
qemu_img_pipe(), it does still return the exit code as well, though,
because that is even more important for error handling.
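A minimal sketch of such a helper (the real iotests function and the
exact qemu-nbd invocation may differ):

    import subprocess

    def qemu_nbd_output(*args):
        """Run a command, returning (exit code, combined stdout/stderr)
        instead of letting the output end up in the test log."""
        proc = subprocess.run(args,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT,
                              universal_newlines=True)
        return proc.returncode, proc.stdout

    # Hypothetical usage: retry with another port while qemu-nbd keeps failing.
    # exitcode, output = qemu_nbd_output('qemu-nbd', '-p', str(port), image)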
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-2-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Using global_qtest is not required here. Let's replace functions like
readl() with the corresponding qtest_* counterparts.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190123120759.7162-3-jusual@mail.ru
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Run qtest with a socket that connects QEMU chardev and test code.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190123120759.7162-2-jusual@mail.ru
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This test verifies that we read back the expected I2C WHO_AM_I register
values for the accelerometer/magnetometer.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190110094020.18354-3-stefanha@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-january-25-2019' into staging
MIPS queue for January 25, 2019
# gpg: Signature made Fri 25 Jan 2019 13:25:57 GMT
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-january-25-2019:
docs/qemu-cpu-models: Add MIPS/nanoMIPS QEMU supported CPU models
qemu-doc: Add nanoMIPS ISA information
tests: tcg: mips: Remove old directories
tests: tcg: mips: Add two new Makefiles
tests: tcg: mips: Move source files to new locations
MAINTAINERS: Update MIPS sections
target/mips: Add I6500 core configuration
target/mips: nanoMIPS: Fix branch handling
disas: nanoMIPS: Amend DSP instructions related comments
target/mips: Extend gen_scwp() functionality to support EVA
target/mips: Correct the second argument type of cpu_supports_isa()
target/mips: nanoMIPS: Rename macros for extracting 3-bit-coded GPR numbers
target/mips: nanoMIPS: Remove an unused macro
target/mips: nanoMIPS: Remove duplicate macro definitions
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add Makefiles for two new directories.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
MIPS TCG tests will be organized by ISAs and ASEs in the future.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Commit 8bca4613 added support for %% in json strings when interpolating,
but in doing so broke handling of % when not interpolating.
When parse_string() is fed a string token containing '%', it skips the
'%' regardless of ctxt->ap, i.e. even if it's not interpolating. If the
'%' is the string's last character, it fails an assertion. Else, it
"merely" swallows the '%'.
Fix parse_string() to handle '%' specially only when interpolating.
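As a toy illustration of the intended behaviour (Python, not the actual
C parser): '%' only introduces an escape while interpolating, otherwise
it is an ordinary character:

    # Toy model only; the real parser works on C varargs, not a list.
    def parse_escapes(token, interpolating, args=()):
        out, it, i = [], iter(args), 0
        while i < len(token):
            ch = token[i]
            if interpolating and ch == '%':
                if token[i + 1] == '%':
                    out.append('%')            # "%%" -> literal '%'
                else:
                    out.append(str(next(it)))  # e.g. "%d" -> next argument
                i += 2
            else:
                out.append(ch)                 # not interpolating: '%' is literal
                i += 1
        return ''.join(out)

    print(parse_escapes('100%', interpolating=False))            # '100%'
    print(parse_escapes('%d%%', interpolating=True, args=(7,)))  # '7%'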
To gauge the bug's impact, let's review non-interpolating users of this
parser, i.e. code passing NULL context to json_message_parser_init():
* tests/check-qjson.c, tests/test-qobject-input-visitor.c,
tests/test-visitor-serialization.c
Plenty of tests, but we still failed to cover the buggy case.
* monitor.c: QMP input
* qga/main.c: QGA input
* qobject_from_json():
- qobject-input-visitor.c: JSON command line option arguments of
-display and -blockdev
Reproducer: -blockdev '{"%"}'
- block.c: JSON pseudo-filenames starting with "json:"
Reproducer: https://bugzilla.redhat.com/show_bug.cgi?id=1668244#c3
- block/rbd.c: JSON key pairs
Pseudo-filenames starting with "rbd:".
Command line, QMP and QGA input are trusted.
Filenames are trusted when they come from command line, QMP or HMP.
They are untrusted when they come from image file headers.
Example: QCOW2 backing file name. Note that this is *not* the security
boundary between host and guest. It's the boundary between host and an
image file from an untrusted source.
Neither failing an assertion nor skipping a character in a filename of
your choice looks exploitable. Note that we don't support compiling
with NDEBUG.
Fixes: 8bca4613e6
Cc: qemu-stable@nongnu.org
Signed-off-by: Christophe Fergeau <cfergeau@redhat.com>
Message-Id: <20190102140535.11512-1-cfergeau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
[Commit message extended to discuss impact]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20190123a' into staging
Migration pull 2019-01-23
New pages-per-second stat, a new test, and a bunch
of fixes and tidy ups.
# gpg: Signature made Wed 23 Jan 2019 15:54:48 GMT
# gpg: using RSA key 0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20190123a:
migration: introduce pages-per-second
vmstate: constify SaveVMHandlers
tests: add /vmstate/simple/array
migration/rdma: unregister fd handler
migration: unify error handling for process_incoming_migration_co
migration: add more error handling for postcopy_ram_enable_notify
migration: multifd_save_cleanup() can't fail, simplify
migration: fix the multifd code when receiving less channels
Fix segmentation fault when qemu_signal_init fails
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The qio_channel_socket_close method was mistakenly unlinking the
UNIX server socket, even if the channel was a client connection. This
was not noticed with chardevs, since they never call close, but with the
VNC server, this caused the VNC server socket to be deleted after the
first client quit.
The qio_channel_socket_close method also needlessly reimplemented the
logic that already exists in socket_listen_cleanup(). Just call that
method directly, for listen sockets only.
This fixes a regression introduced in QEMU 3.0.0 with
commit d66f78e1ea
Author: Pavel Balaev <mail@void.so>
Date: Mon May 21 19:17:35 2018 +0300
Delete AF_UNIX socket after close
Fixes launchpad #1795100
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Hot-unplug a scsi-hd using an iothread. The previous patch fixes a
segfault in this scenario.
This patch adds a regression test.
Suggested-by: Alberto Garcia <berto@igalia.com>
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190114133257.30299-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The qapi_event_send_FOO() functions emit events like this:
    QMPEventFuncEmit emit;
    emit = qmp_event_get_func_emit();
    if (!emit) {
        return;
    }

    qmp = qmp_event_build_dict("FOO");
    [put event arguments into @qmp...]
    emit(QAPI_EVENT_FOO, qmp);
The value of qmp_event_get_func_emit() depends only on the program:
* In qemu-system-FOO, it's always monitor_qapi_event_queue.
* In tests/test-qmp-event, it's always event_test_emit.
* In all other programs, it's always null.
This is exactly the kind of dependence the linker is supposed to
resolve; we don't actually need an indirection.
Note that things would fall apart if we linked more than one QAPI
schema into a single program: each set of qapi_event_send_FOO() uses
its own event enumeration, yet they share a single emit function.
Which takes the event enumeration as an argument. Which one if
there's more than one?
More seriously: how does this work even now? qemu-system-FOO wants
QAPIEvent, and passes a function taking that to
qmp_event_set_func_emit(). test-qmp-event wants test_QAPIEvent, and
passes a function taking that to qmp_event_set_func_emit().
It works by type trickery, of course:
    typedef void (*QMPEventFuncEmit)(unsigned event, QDict *dict);
    void qmp_event_set_func_emit(QMPEventFuncEmit emit);
    QMPEventFuncEmit qmp_event_get_func_emit(void);
We use unsigned instead of the enumeration type. This relies on both
enumerations boiling down to unsigned, which happens to be true for
the compilers we use.
Clean this up as follows:
* Generate qapi_event_send_FOO() that call PREFIX_qapi_event_emit()
instead of the function registered with qmp_event_set_func_emit().
* Generate a prototype for PREFIX_qapi_event_emit() into
qapi-events.h.
* PREFIX_ is empty for qapi/qapi-schema.json, and test_ for
tests/qapi-schema/qapi-schema-test.json. It's qga_ for
qga/qapi-schema.json, and doc-good- for
tests/qapi-schema/doc-good.json, but those don't define any events.
* Rename monitor_qapi_event_queue() to qapi_event_emit() instead of
passing it to qmp_event_set_func_emit(). This takes care of
qemu-system-FOO.
* Rename event_test_emit() to test_qapi_event_emit() instead of
passing it to qmp_event_set_func_emit(). This takes care of
tests/test-qmp-event.
* Add a qapi_event_emit() that does nothing to stubs/monitor.c. This
takes care of all other programs that link code emitting QMP events.
* Drop qmp_event_set_func_emit(), qmp_event_get_func_emit().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20181218182234.28876-3-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
[Commit message typos fixed]
A very simple test to show VMSTATE_*_ARRAY usage and result. It could
be systematically extended to other primitives, but I leave that as an
exercise for others :).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20181114132130.27141-1-marcandre.lureau@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This adds a rule to run all of our softfloat tests. It is included as
a pre-requisite to check-tcg and check-unit as well.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Wire up test/fp-test into the main testing Makefile. Currently we skip
some of the extF80 and f128 related tests. Once we re-factor and fix
these tests the plumbing should get simpler.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
We get HOST_WORDS_BIGENDIAN from config-host.h, but the include
is missing. Fix it.
This fixes `make check-softfloat' on big endian hosts.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
At this point random_ops[] only contains normals, so there's
no need to do anything to them. In fact, raising the exponent
here can make the output !normal, which is precisely
what the comment says we want to avoid.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The second test in the branches is wrong; fix while converting
to a switch statement, which is easier to get right.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
This test was merged into drive_del-test in 2014.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Fixes: e2f3f22188 ("Merge of qdev-monitor-test, blockdev-test")
Signed-off-by: Thomas Huth <thuth@redhat.com>
The hexloader test invokes QEMU with the -nographic argument. This
is unnecessary, because the qtest_initf() function will pass it
-display none, which suffices to disable the graphical window.
It also means that the QEMU process will make the stdin/stdout
O_NONBLOCK. Since O_NONBLOCK is not per-file descriptor but per
"file description", this non-blocking behaviour is then shared
with any other process that's using the stdin/stdout of the
'make check' run, including make itself. This can result in make
falling over with "make: write error: stdout" because it got
an unexpected EINTR trying to write output messages to the terminal.
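The shared-file-description behaviour can be seen with a few lines of
Python (POSIX only; illustration, not part of the fix):

    import fcntl
    import os

    r, w = os.pipe()
    if os.fork() == 0:
        # Child: set O_NONBLOCK on its copy of the write descriptor.
        flags = fcntl.fcntl(w, fcntl.F_GETFL)
        fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)
        os._exit(0)

    os.wait()
    # Parent: the flag shows up here as well, because both processes share
    # the same file description -- just like QEMU sharing make's stdout.
    print(bool(fcntl.fcntl(w, fcntl.F_GETFL) & os.O_NONBLOCK))   # True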
This is particularly noticeable if running 'make check' in a loop with
while make check; do true; done
(It does not affect single make check runs so much because the
shell will remove the O_NONBLOCK status before it reads the
terminal for interactive input.)
Remove the unwanted -nographic argument.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Pass around the QTestState, so that we can finally get rid of the
out-of-favor global_qtest variable in this file, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Pass around the QTestState from function to function, so that we can finally
get rid of the out-of-favor global_qtest variable in this file, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Pass around the test state explicitly, to be able to use the qtest_in*()
and qtest_out*() functions in this test.
Signed-off-by: Thomas Huth <thuth@redhat.com>
To be able to build and test QEMU binaries where certain devices or machines
are disabled, we have to use the right CONFIG_* switches to run certain tests
only if the corresponding device or machine really has been compiled into
the binary.
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
To be able to build and test QEMU binaries where certain devices are
disabled, we have to use the right CONFIG_* switches to run certain
tests only if the corresponding device really has been compiled into
the binary.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Any good new feature deserves some regression testing :)
Coverage includes:
- 223: what happens when there are 0 or more than 1 export,
proof that we can see multiple contexts including qemu:dirty-bitmap
- 233: proof that we can list over TLS, and that mix-and-match of
plain/TLS listings will behave sanely
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20190117193658.16413-22-eblake@redhat.com>