Allow callers to choose whether to allow OOB support during a test;
for now, all existing callers pass false, but the next patch will
add a new caller. Also, rewrite the monitor setup to be generic
(using the -qmp shorthand is insufficient for honoring the parameter).
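For reference, the generic setup boils down to replacing the -qmp shorthand with an explicit chardev/monitor pair, so that additional monitor options can be appended when OOB is requested; roughly (option spelling is an assumption, not taken verbatim from the patch):
-chardev socket,id=char0,path=SOCKPATH
-mon chardev=char0,mode=control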
Based on an idea by Peter Xu, in <20180326063901.27425-8-peterx@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180327013620.1644387-4-eblake@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
It simply tests the new OOB capability and makes sure the QAPISchema code can parse it correctly.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180326063901.27425-7-peterx@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The allow_oob parameter was passed in but not used in the tests. Now reflect it in the tests, which means touching up the other command testers for this new change.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180326063901.27425-6-peterx@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Running 'make check' on rawhide with gcc 8.0.1 fails:
tests/test-visitor-serialization.c: In function 'main':
tests/test-visitor-serialization.c:1127:34: error: '/primitives/' directive writing 12 bytes into a region of size between 1 and 128 [-Werror=format-overflow=]
The warning is a false positive (we have two buffers of size 128, so yes, if we FULLY used the first buffer, then sprintf'ing it into the second would overflow the second). But in practice, our first buffer will never be longer than "/visitor/serialization/String", so sizing it smaller is enough to let gcc see that we don't overflow the second.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180323204341.1501664-1-eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2018-03-26' into staging
A fix for dirty bitmap migration through shared storage, and a VMDK
patch keeping us from creating too large extents.
# gpg: Signature made Mon 26 Mar 2018 21:17:05 BST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/maxreitz/tags/pull-block-2018-03-26:
vmdk: return ERROR when cluster sector is larger than vmdk limitation
iotests: enable shared migration cases in 169
qcow2: fix bitmaps loading when bitmaps already exist
qcow2-bitmap: add qcow2_reopen_bitmaps_rw_hint()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Check that two coroutines can queue each other repeatedly without
hitting stack exhaustion.
Switch to qemu_init_main_loop() in main() because coroutines use
qemu_get_aio_context() - they don't know about test-aio's ctx variable.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180322152834.12656-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The value of the CCOUNT special register is calculated as the time elapsed since CCOUNT == 0 multiplied by the core frequency. In icount mode the time increment between consecutive instructions that don't involve time warps is constant, but unless the result of multiplying this constant by the core frequency is a whole number, the CCOUNT increment between these instructions may not be constant. E.g. with icount=7 each instruction takes 128 ns; with a core clock of 10 MHz the CCOUNT values for consecutive instructions are:
502: (128 * 502 * 10000000) / 1000000000 = 642.56
503: (128 * 503 * 10000000) / 1000000000 = 643.84
504: (128 * 504 * 10000000) / 1000000000 = 645.12
I.e. the CCOUNT increments depend on the absolute time. This results in varying CCOUNT differences for consecutive instructions in tests that involve time warps and don't set CCOUNT explicitly.
Change the frequency of the core used in the tests so that a clock cycle takes exactly 64 ns. Change the icount power used in the tests to 6, so that each instruction takes exactly one clock cycle. With these changes the CCOUNT increments depend only on the number of executed instructions, which is what the timer tests expect, so they work correctly.
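For example, with icount=6 (64 ns per instruction) and the core clock chosen so that one cycle is 64 ns (15.625 MHz), the same calculation yields whole numbers:
502: (64 * 502 * 15625000) / 1000000000 = 502
503: (64 * 503 * 15625000) / 1000000000 = 503
504: (64 * 504 * 15625000) / 1000000000 = 504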
Longer story:
http://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg04326.html
Cc: Pavel Dovgaluk <Pavel.Dovgaluk@ispras.ru>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Shared migration for dirty bitmaps is fixed by previous patches,
so we can enable the test.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180320170521.32152-5-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This reverts commit fb68096da3, and modifies test_read_guest_mem() to use different chardev names when using memfd (_test_server_free(), where the chardev is removed, runs in idle).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180215212552.26997-4-marcandre.lureau@redhat.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Before the chardev name fix, the following error may happen: "attempt
to add duplicate property 'chr-test' to object (type 'container')",
due to races.
Sadly, error_vprintf() uses g_test_message(), so you have to read the cryptic --debug-log output to see it. Later, it would make sense to
use g_critical() instead, and catch errors with
g_test_expect_message() (in glib 2.34).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180215212552.26997-5-marcandre.lureau@redhat.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This tests that the .bdrv_truncate implementation for luks doesn't crash
for invalid image sizes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
We want to test resizing even for luks. The only change that is needed
is to explicitly zero out new space for luks because it's undefined.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
When we try to allocate new clusters we first look for available ones
starting from s->free_cluster_index and once we find them we increase
their reference counts. Before we get to call update_refcount() to do
this last step s->free_cluster_index is already pointing to the next
cluster after the ones we are trying to allocate.
During update_refcount() it may happen however that we also need to
allocate a new refcount block in order to store the refcounts of these
new clusters (and to complicate things further that may also require
us to grow the refcount table). After all this we don't know if the
clusters that we originally tried to allocate are still available, so
we return -EAGAIN to ask the caller to restart the search for free
clusters.
This is what can happen in a common scenario:
1) We want to allocate a new cluster and we see that cluster N is
free.
2) We try to increase N's refcount but all refcount blocks are full, so we allocate a new one at N+1 (which is where s->free_cluster_index was pointing).
3) Once we're done we return -EAGAIN to look again for a free
cluster, but now s->free_cluster_index points at N+2, so that's
the one we allocate. Cluster N remains unallocated and we have a
hole in the qcow2 file.
This can be reproduced easily:
qemu-img create -f qcow2 -o cluster_size=512 hd.qcow2 1M
qemu-io -c 'write 0 124k' hd.qcow2
After this the image has 132608 bytes (256 clusters), and the refcount
block is full. If we write 512 more bytes it should allocate two new
clusters: the data cluster itself and a new refcount block.
qemu-io -c 'write 124k 512' hd.qcow2
However, the image now has three new clusters (259 in total), and the first of them is empty (and unallocated):
dd if=hd.qcow2 bs=512c skip=256 count=1 | hexdump -C
If we write larger amounts of data in the last step instead of the 512
bytes used in this example we can create larger holes in the qcow2
file.
What this patch does is reset s->free_cluster_index to its previous value when alloc_refcount_block() returns -EAGAIN. This way the caller will try again to allocate the original clusters if they are still free.
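With the fix applied, re-running the reproducer above and the same dd/hexdump probe should show the cluster right after the old end of file being used (non-zero content) instead of remaining an unallocated hole; an additional sanity check (illustrative, not part of the patch) is:
qemu-img check hd.qcow2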
The output of iotest 026 also needs to be updated because now that
images have no holes some tests fail at a different point and the
number of leaked clusters is different.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Testing on ext4, most 'quick' qcow2 tests took less than 5 seconds,
but 163 took more than 20. Let's remove it from the quick set.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit d4e5ec877 already fixed things to work around Python 3's
lame bug of having LC_ALL=C not be 8-bit clean, when parsing the
main QMP qapi files; but failed to do likewise in the tests
directory. As a result, running 'LC_ALL=C make check' fails on
escape-too-big and unicode-str when using python 3 with a nasty
stack trace instead of the intended graceful error message that
QAPI doesn't yet support 8-bit data (the two tests contain
Unicode é, when parsed in UTF-8; they represent something
different when parsed in a proper single-byte C locale, but that
doesn't matter to the error message printed out, provided that
brain-dead Python hasn't first choked on the input instead of
being 8-bit clean).
Ideally, we'd teach the qapi generator scripts to automatically
slurp things in using UTF-8 regardless of locale, and to honor
content that is not limited to 7 bit data rather than gracefully
erroring out; but until then, since our graceful error depends
on python parsing 8-bit data (even if nothing we generate uses
8-bit data), our quick fix is to use the right locale when
running these tests.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180319205040.1113423-1-eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This reverts commit 3fd2457d18.
Enabling OOB caused several iotests failures; due to the imminent
2.12 release, the safest action is to disable OOB for now. If
other patches fix the issues that iotests exposed, it may be turned
back on in time for the release, otherwise it will be 2.13 material;
either way, the framework changes not reverted now do not hurt if
they remain as part of the 2.12 release.
Additionally, revert the tests in the patch 02130314d8 ("qmp: introduce
QMPCapability", 2018-03-19), as both parts must be reverted at once
to keep 'make check' passing.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180323140821.28957-2-peterx@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
[eblake: reorder/squash commits, enhance commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
This reverts commit 91ad45061a.
Enabling OOB caused several iotests failures; due to the imminent
2.12 release, the safest action is to disable OOB, but first we
have to revert tests that rely on OOB.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180323140821.28957-4-peterx@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
[eblake: reorder commits, enhance commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
This reverts commit d003f7a8f9.
Enabling OOB caused several iotests failures; due to the imminent
2.12 release, the safest action is to disable OOB, but first we
have to revert tests that rely on OOB.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180323140821.28957-3-peterx@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
[eblake: reorder commits, enhance commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
Testing the exit code only once after a whole group of tests has completed is not enough; it catches errors only in the very last qemu invocation. We need to check the exit code after each qemu run.
The logging and diff against the reference output are still done once per group to keep things more manageable. This is not a problem because the log file accumulates the output of all runs.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jack Schwartz <jack.schwartz@oracle.com>
Reviewers can use ACPI tables in this patch to run
test_acpi_{piix4,q35}_tcg_dimm_pxm cases.
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
QEMU now builds one SRAT memory affinity structure for each PC-DIMM
and NVDIMM device presented at boot time with the proximity domain
specified in the device option 'node', rather than only one SRAT
memory affinity structure covering the entire hotpluggable address
space with the proximity domain of the last node.
Add test cases on PC and Q35 machines with 4 proximity domains, and
one PC-DIMM and one NVDIMM attached to the 2nd and 3rd proximity
domains respectively. Check whether the QEMU-built SRAT tables match the expected ones.
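For reference, the guest configuration being exercised is roughly of the following shape (an illustrative sketch only; the exact options used by the test may differ):
qemu-system-x86_64 -machine pc,nvdimm=on -m 128M,slots=2,maxmem=1G \
  -numa node -numa node -numa node -numa node \
  -object memory-backend-ram,id=ram0,size=128M \
  -device pc-dimm,id=dimm0,memdev=ram0,node=1 \
  -object memory-backend-ram,id=ram1,size=128M \
  -device nvdimm,id=nvdimm0,memdev=ram1,node=2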
The following ACPI tables need to be added for this test:
tests/acpi-test-data/pc/APIC.dimmpxm
tests/acpi-test-data/pc/DSDT.dimmpxm
tests/acpi-test-data/pc/NFIT.dimmpxm
tests/acpi-test-data/pc/SRAT.dimmpxm
tests/acpi-test-data/pc/SSDT.dimmpxm
tests/acpi-test-data/q35/APIC.dimmpxm
tests/acpi-test-data/q35/DSDT.dimmpxm
tests/acpi-test-data/q35/NFIT.dimmpxm
tests/acpi-test-data/q35/SRAT.dimmpxm
tests/acpi-test-data/q35/SSDT.dimmpxm
New APIC and DSDT are needed because of the multi-processor configuration. New NFIT and SSDT are needed because of NVDIMM.
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Ed-script diffs are awful compared to context diffs. Fix another
'diff -q' while in the area (if the files are different, being
noisy makes it easier to diagnose why).
While at it, diff .err before .out, because if a test fails, .err
is more likely to contain the most important information for
fixing the failure.
Fixes: 46ec4fce
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180315125116.804342-1-eblake@redhat.com>
Test the new OOB capability. Here we use the new "x-oob-test" command. First, we send a lock=true and oob=false command to hang the main thread. Then we send another lock=false and oob=true command (which will be run inside the parser this time) to free that hung command.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-24-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: grammar tweaks]
Signed-off-by: Eric Blake <eblake@redhat.com>
OOB introduced a DROP event for flow control. This should not affect old QMP clients, so add a command batching check to make sure of it.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-23-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Here "oob" stands for "Out-Of-Band". When "allow-oob" is set, it means
the command allows out-of-band execution.
The "oob" idea is proposed by Markus Armbruster in following thread:
https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg02057.html
This new "allow-oob" boolean will be exposed by "query-qmp-schema" as
well for command entries, so that QMP clients can know which commands
can be used in out-of-band calls. For example the command "migrate"
originally looks like:
{"name": "migrate", "ret-type": "17", "meta-type": "command",
"arg-type": "86"}
And it'll be changed into:
{"name": "migrate", "ret-type": "17", "allow-oob": false,
"meta-type": "command", "arg-type": "86"}
This patch only provides the QMP interface level changes. It does not
contain the real out-of-band execution implementation yet.
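On the schema side, a command opts in to out-of-band execution by adding the new key to its definition, along these lines (the command name here is purely illustrative):
{ 'command': 'x-oob-example', 'data': { 'arg': 'str' },
  'allow-oob': true }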
Suggested-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-18-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: rebase on introspection done by qlit]
Signed-off-by: Eric Blake <eblake@redhat.com>
There were no QMP capabilities defined. Define the first capability,
"oob", to allow out-of-band messages.
After this patch, we will allow QMP clients to enable QMP capabilities when sending the first "qmp_capabilities" command. Originally we start a QMP session with no arguments, like:
{ "execute": "qmp_capabilities" }
Now we can enable some QMP capabilities using (taking OOB as an example, since it is the only capability we support):
{ "execute": "qmp_capabilities",
"arguments": { "enable": [ "oob" ] } }
When the "arguments" key is not provided, no capability is enabled.
For capability "oob", the monitor needs to be run on a dedicated IO
thread, otherwise the command will fail. For example, trying to enable
OOB on a MUXed typed QMP monitor will fail.
One thing to mention is that QMP capabilities are per-monitor, and when the connection is closed for any reason, the capabilities will be reset.
Also, touch up qmp-test.c to test the new bits.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-11-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: touch up commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
Instead of converting all "backing": null instances into "backing": "",
handle a null value directly in bdrv_open_inherit().
This enables explicitly null backing links for json:{} filenames.
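For example, an invocation along the following lines should now open the image with no backing file at all instead of requiring the empty-string workaround (file name illustrative):
qemu-io -c 'read 0 64k' "json:{'driver': 'qcow2', 'file': {'driver': 'file', 'filename': 'test.qcow2'}, 'backing': null}"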
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-7-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: rebase to qobject_to() parameter order and qapi headers split]
Signed-off-by: Eric Blake <eblake@redhat.com>
This patch was generated using the following Coccinelle script:
@@
expression Obj;
@@
(
- qobject_to_qnum(Obj)
+ qobject_to(QNum, Obj)
|
- qobject_to_qstring(Obj)
+ qobject_to(QString, Obj)
|
- qobject_to_qdict(Obj)
+ qobject_to(QDict, Obj)
|
- qobject_to_qlist(Obj)
+ qobject_to(QList, Obj)
|
- qobject_to_qbool(Obj)
+ qobject_to(QBool, Obj)
)
and a bit of manual fix-up for overly long lines and three places in
tests/check-qjson.c that Coccinelle did not find.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-4-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: swap order from qobject_to(o, X), rebase to master, also a fix
to latent false-positive compiler complaint about hw/i386/acpi-build.c]
Signed-off-by: Eric Blake <eblake@redhat.com>
Replace the generated JSON string with a literal qobject. The latter is easier to deal with, at run time as well as compile time: adding #if conditionals will be easier than in a JSON string.
The output of query-qmp-schema is not changed.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180305172951.2150-5-marcandre.lureau@redhat.com>
[eblake: fix python 3 failure]
Signed-off-by: Eric Blake <eblake@redhat.com>
Instantiate a QObject* from a literal QLitObject.
QLitObject only supports int64_t for now; uint64_t and double aren't implemented.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180305172951.2150-4-marcandre.lureau@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
CentOS 6 lacks a realpath binary on the base install, which makes
all iotests runs fail since the 2.11 release:
001 - output mismatch (see 001.out.bad)
./check: line 815: realpath: command not found
diff: missing operand after `/home/dummy/qemu/tests/qemu-iotests/001.out'
diff: Try `diff --help' for more information.
Many of the uses of 'realpath' in the check script were being
used on the output of 'type -p' - but that is already an
absolute file name. While a canonical name can often be
shorter (realpath gets rid of /../), it can also be longer (due
to symlink expansion); and we really don't care if the name is
canonical, merely that it was an executable file with an
absolute path. These were broken in commit cceaf1db.
The remaining use of realpath was to convert a possibly relative
filename into an absolute one before calling diff to make it
easier to copy-and-paste the filename for moving the .bad file
into place as the new reference file even when running iotests
out-of-tree (see commit 93e53fb6), but $PWD can achieve the same
purpose.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit bff5554843 added "force_size" into the common.filter for
_filter_img_create(), but test 146 still expects it in the output.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When mirroring to low-speed shared storage, if there is a heavy block I/O write workload in the VM after the 'ready' event, the drive-mirror block job can't be canceled immediately; it keeps running until the heavy block I/O workload in the VM stops.
Libvirt depends on the current block-job-cancel semantics, which is that
when used without a flag after the 'ready' event, the command blocks
until data is in sync. However, these semantics are awkward in other
situations, for example, people may use drive mirror for realtime
backups while still wanting to use block live migration. Libvirt cannot
start a block live migration while another drive mirror is in progress,
but the user would rather abandon the backup attempt as broken and
proceed with the live migration than be stuck waiting for the current
drive mirror backup to finish.
The block-job-cancel command already includes a 'force' flag, which libvirt does not use, although it documented the flag as only being useful to quit a job which is paused.
the same effect as abandoning a backup in a non-paused job (namely, the
destination file is not in sync, and the command completes immediately),
we can just improve the documentation to make the force flag obviously
useful.
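In QMP terms, a management application that prefers to abandon the mirror rather than wait for it to sync would then issue something like (device name illustrative):
{ "execute": "block-job-cancel",
  "arguments": { "device": "drive0", "force": true } }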
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: John Snow <jsnow@redhat.com>
Reported-by: Huaitong Han <huanhuaitong@didichuxing.com>
Signed-off-by: Huaitong Han <huanhuaitong@didichuxing.com>
Signed-off-by: Liang Li <liliangleo@didichuxing.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Originally we added parallels as a read-only format to qemu-iotests
where we did just some tests with a binary image. Since then, write and
image creation support has been added to the driver, so we can now
enable it in _supported_fmt generic.
The driver doesn't support migration yet, though, so we need to add it
to the list of exceptions in 181.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Whatever the state a blockjob is in, it should be able to be canceled
by the block layer.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Expose the "manual" property via QAPI for the backup-related jobs.
As of this commit, this allows the management API to request the
"concluded" and "dismiss" semantics for backup jobs.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Which commands ("verbs") are appropriate for a job in a given state is also somewhat burdensome to keep track of.
As of this commit, it looks rather useless, but begins to look more
interesting the more states we add to the STM table.
A recurring theme is that no verb will apply to an 'undefined' job.
Further, it's not presently possible to restrict the "pause" or "resume"
verbs any more than they are in this commit because of the asynchronous
nature of how jobs enter the PAUSED state; justifications for some
seemingly erroneous applications are given below.
=====
Verbs
=====
Cancel: Any state except undefined.
Pause: Any state except undefined;
'created': Requests that the job pauses as it starts.
'running': Normal usage. (PAUSED)
'paused': The job may be paused for internal reasons,
but the user may wish to force an indefinite
user-pause, so this is allowed.
'ready': Normal usage. (STANDBY)
'standby': Same logic as above.
Resume: Any state except undefined;
'created': Will lift a user's pause-on-start request.
'running': Will lift a pause request before it takes effect.
'paused': Normal usage.
'ready': Will lift a pause request before it takes effect.
'standby': Normal usage.
Set-speed: Any state except undefined, though ready may not be meaningful.
Complete: Only a 'ready' job may accept a complete request.
=======
Changes
=======
(1)
To facilitate "nice" error checking, all five major block-job verb
interfaces in blockjob.c now support an errp parameter:
- block_job_user_cancel is added as a new interface.
- block_job_user_pause gains an errp parameter
- block_job_user_resume gains an errp parameter
- block_job_set_speed already had an errp parameter.
- block_job_complete already had an errp parameter.
(2)
block-job-pause and block-job-resume will no longer no-op when trying
to pause an already paused job, or trying to resume a job that isn't
paused. These functions will now report that they did not perform the
action requested because it was not possible.
iotests have been adjusted to address this new behavior.
(3)
block-job-complete doesn't worry about checking !block_job_started,
because the permission table guards against this.
(4)
test-bdrv-drain's job implementation needs to announce that it is
'ready' now, in order to be completed.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Split out the pause command into the actual pause and the wait.
Not every usage presently needs to resubmit a pause request.
The intent with the next commit will be to explicitly disallow
redundant or meaningless pause/resume requests, so the tests
need to become more judicious to reflect that.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We're about to add several new states, and booleans are becoming
unwieldy and difficult to reason about. It would help to have a
more explicit bookkeeping of the state of blockjobs. To this end,
add a new "status" field and add our existing states in a redundant
manner alongside the bools they are replacing:
UNDEFINED: Placeholder, default state. Not currently visible to QMP
unless changes occur in the future to allow creating jobs
without starting them via QMP.
CREATED: replaces !!job->co && paused && !busy
RUNNING: replaces effectively (!paused && busy)
PAUSED: Nearly redundant with info->paused, which shows pause_count.
This reports the actual status of the job, which almost always
matches the paused request status. It differs in that it is
strictly only true when the job has actually gone dormant.
READY: replaces job->ready.
STANDBY: Paused, but job->ready is true.
New state additions in coming commits will not be quite so redundant:
WAITING: Waiting on transaction. This job has finished all the work
it can until the transaction converges, fails, or is canceled.
PENDING: Pending authorization from user. This job has finished all the
work it can until the job or transaction is finalized via
block_job_finalize. This implies the transaction has converged
and left the WAITING phase.
ABORTING: Job has encountered an error condition and is in the process
of aborting.
CONCLUDED: Job has ceased all operations and has a return code available
for query and may be dismissed via block_job_dismiss.
NULL: Job has been dismissed and (should) be destroyed. Should never
be visible to QMP.
Some of these states appear somewhat superfluous, but it helps define the
expected flow of a job; so some of the states wind up being synchronous
empty transitions. Importantly, jobs can be in only one of these states
at any given time, which helps code and external users alike reason about
the current condition of a job unambiguously.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Model all independent jobs as single-job transactions.
It's one less case we have to worry about when we add more states to the
transition machine. This way, we can just treat all job lifetimes exactly
the same. This helps tighten assertions of the STM graph and removes some
conditionals that would have been needed in the coming commits adding a
more explicit job lifetime management API.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The "40p" machine is using the Open Hack'Ware BIOS, just like the "prep"
machine, so we can test it accordingly with the boot-serial tester, too.
While we're at it, also change the strings that we are using for the
"prep" machine, so that this test now also checks some CLI parameters.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2018-03-13-v2' into staging
nbd patches for 2018-03-13
- Eric Blake: iotests: Fix stuck NBD process on 33
- Vladimir Sementsov-Ogievskiy: 0/5 nbd server fixing and refactoring before BLOCK_STATUS
- Eric Blake: nbd/server: Honor FUA request on NBD_CMD_TRIM
- Stefan Hajnoczi: 0/2 block: fix nbd-server-stop crash after blockdev-snapshot-sync
- Vladimir Sementsov-Ogievskiy: nbd block status base:allocation
# gpg: Signature made Tue 13 Mar 2018 20:48:37 GMT
# gpg: using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg: aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2018-03-13-v2:
iotests: new test 209 for NBD BLOCK_STATUS
iotests: add file_path helper
iotests.py: tiny refactor: move system imports up
nbd: BLOCK_STATUS for standard get_block_status function: client part
block/nbd-client: save first fatal error in nbd_iter_error
nbd: BLOCK_STATUS for standard get_block_status function: server part
nbd/server: add nbd_read_opt_name helper
nbd/server: add nbd_opt_invalid helper
iotests: add 208 nbd-server + blockdev-snapshot-sync test case
block: let blk_add/remove_aio_context_notifier() tolerate BDS changes
nbd/server: Honor FUA request on NBD_CMD_TRIM
nbd/server: refactor nbd_trip: split out nbd_handle_request
nbd/server: refactor nbd_trip: cmd_read and generic reply
nbd/server: fix: check client->closing before sending reply
nbd/server: fix sparse read
nbd/server: move nbd_co_send_structured_error up
iotests: Fix stuck NBD process on 33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Record-replay lockstep execution, log dumper and fixes (Alex, Pavel)
* SCSI fix to pass maximum transfer size (Daniel Barboza)
* chardev fixes and improved iothread support (Daniel Berrangé, Peter)
* checkpatch tweak (Eric)
* make help tweak (Marc-André)
* make more PCI NICs available with -net or -nic (myself)
* change default q35 NIC to e1000e (myself)
* SCSI support for NDOB bit (myself)
* membarrier system call support (myself)
* SuperIO refactoring (Philippe)
* miscellaneous cleanups and fixes (Thomas)
# gpg: Signature made Mon 12 Mar 2018 16:10:52 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (69 commits)
tcg: fix cpu_io_recompile
replay: update documentation
replay: save vmstate of the asynchronous events
replay: don't process async events when warping the clock
scripts/replay-dump.py: replay log dumper
replay: avoid recursive call of checkpoints
replay: check return values of fwrite
replay: push replay_mutex_lock up the call tree
replay: don't destroy mutex at exit
replay: make locking visible outside replay code
replay/replay-internal.c: track holding of replay_lock
replay/replay.c: bump REPLAY_VERSION again
replay: save prior value of the host clock
replay: added replay log format description
replay: fix save/load vm for non-empty queue
replay: fixed replay_enable_events
replay: fix processing async events
cpu-exec: fix exception_index handling
hw/i386/pc: Factor out the superio code
hw/alpha/dp264: Use the TYPE_SMC37C669_SUPERIO
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# default-configs/i386-softmmu.mak
# default-configs/x86_64-softmmu.mak
Merge remote-tracking branch 'remotes/berrange/tags/socket-next-pull-request' into staging
# gpg: Signature made Tue 13 Mar 2018 18:12:14 GMT
# gpg: using RSA key BE86EBB415104FDF
# gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>"
# gpg: aka "Daniel P. Berrange <berrange@redhat.com>"
# Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF
* remotes/berrange/tags/socket-next-pull-request:
char: allow passing pre-opened socket file descriptor at startup
char: refactor parsing of socket address information
sockets: allow SocketAddress 'fd' to reference numeric file descriptors
sockets: check that the named file descriptor is a socket
sockets: move fd_is_socket() into common sockets code
sockets: strengthen test suite IP protocol availability checks
sockets: pull code for testing IP availability out of specific test
cutils: add qemu_strtoi & qemu_strtoui parsers for int/unsigned int types
char: don't silently skip tn3270 protocol init when TLS is enabled
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream-sev' into staging
* Migrate MSR_SMI_COUNT (Liran)
* Update kernel headers (Gerd, myself)
* SEV support (Brijesh)
I have not tested non-x86 compilation, but I reordered the SEV patches
so that all non-x86-specific changes go first to catch any possible
issues (which weren't there anyway :)).
# gpg: Signature made Tue 13 Mar 2018 16:37:06 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream-sev: (22 commits)
sev/i386: add sev_get_capabilities()
sev/i386: qmp: add query-sev-capabilities command
sev/i386: qmp: add query-sev-launch-measure command
sev/i386: hmp: add 'info sev' command
cpu/i386: populate CPUID 0x8000_001F when SEV is active
sev/i386: add migration blocker
sev/i386: finalize the SEV guest launch flow
sev/i386: add support to LAUNCH_MEASURE command
target/i386: encrypt bios rom
sev/i386: add command to encrypt guest memory region
sev/i386: add command to create launch memory encryption context
sev/i386: register the guest memory range which may contain encrypted data
sev/i386: add command to initialize the memory encryption context
include: add psp-sev.h header file
sev/i386: qmp: add query-sev command
target/i386: add Secure Encrypted Virtualization (SEV) object
kvm: introduce memory encryption APIs
kvm: add memory encryption context
docs: add AMD Secure Encrypted Virtualization (SEV)
machine: add memory-encryption option
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no point in reading fields here without actually checking them, so drop that and read only the header + DSDT/FACS addresses, since those are needed later to fetch the corresponding tables.
With this cleanup we can get rid of AcpiFadtDescriptorRev3/ACPI_FADT_COMMON_DEF, which have no users left.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Test
- start two vms (vm_a, vm_b)
- in a
- do writes from set A
- do writes from set B
- fix bitmap sha256
- clear bitmap
- do writes from set A
- start migration
- then, in b
- wait for vm start (postcopy should start)
- do writes from set B
- check bitmap sha256
The test should verify postcopy migration and the subsequent merging with the delta (changes made in the target during the postcopy process).
Reduce the supported cache modes to only 'none', because with caching enabled the time from source.STOP to target.RESUME is unpredictable and we can fail with a timeout while waiting for target.RESUME.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180313180320.339796-14-vsementsov@virtuozzo.com
The test starts two VMs (vm_a, vm_b), creates a dirty bitmap in the first one, does several writes to the corresponding device and then migrates vm_a to vm_b.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180313180320.339796-13-vsementsov@virtuozzo.com
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180312152126.286890-9-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
A simple way to have auto-generated filenames with automatic cleanup. Like FilePath, but without using a 'with' statement and without additional indentation of the whole test.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180312152126.286890-8-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: grammar tweak]
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180312152126.286890-7-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
This test case adds an NBD server export and then invokes
blockdev-snapshot-sync, which changes the BlockDriverState node that the
NBD server's BlockBackend points to. This is an interesting scenario to
test and exercises the code path fixed by the previous commit.
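The sequence boils down to a QMP exchange along these lines (socket path, device name and overlay file are illustrative):
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "unix",
                           "data": { "path": "/tmp/nbd.sock" } } } }
{ "execute": "nbd-server-add", "arguments": { "device": "drive0" } }
{ "execute": "blockdev-snapshot-sync",
  "arguments": { "device": "drive0", "snapshot-file": "overlay.qcow2",
                 "format": "qcow2" } }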
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20180306204819.11266-3-stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit afe35cde6 added additional actions to test 33, but forgot
to reset the image between tests. As a result, './check -nbd 33'
fails because the qemu-nbd process from the first half is still
occupying the port, preventing the second half from starting a
new qemu-nbd process. Worse, the failure leaves a rogue qemu-nbd
process behind even after the test fails, which causes knock-on
failures to later tests that also want to start qemu-nbd.
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180312211156.452139-1-eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
When starting QEMU, management apps will usually set up a monitor socket and then open it immediately after startup. If not using QEMU's own -daemonize arg, this process can be troublesome to handle correctly. The mgmt app will need to repeatedly call connect() until it succeeds, because it does not know when QEMU has created the listener socket. It can't retry connect() forever, though, because an error might have caused QEMU to exit before it even creates the monitor.
The obvious way to fix this kind of problem is to just pass in a pre-opened
socket file descriptor for the QEMU monitor to listen on. The management
app can now immediately call connect() just once. If connect() fails it
knows that QEMU has exited with an error.
The SocketAddress(Legacy) structs allow for FD passing via the monitor, and
now via inherited file descriptors from the process that spawned QEMU. The
final missing piece is adding a 'fd' parameter in the socket chardev
options.
This allows both HMP usage (pass any FD number with SCM_RIGHTS, then run the HMP commands):
getfd myfd
chardev-add socket,fd=myfd
Note that numeric FDs cannot be referenced directly in HMP, only named FDs.
And also CLI usage, by leaking FD 3 from the parent by clearing O_CLOEXEC, then spawning QEMU with
-chardev socket,fd=3,id=mon
-mon chardev=mon,mode=control
Note that named FDs cannot be referenced in CLI args, only numeric FDs.
We do not wire this up in the legacy chardev syntax, so you cannot use FD
passing with '-qmp', you must use the modern '-mon' + '-chardev' pair.
When passing pre-opened FDs there is a restriction on use of TLS encryption.
It can be used on a server socket chardev, but cannot be used for a client
socket chardev. This is because when validating a server's certificate, the
client needs to have a hostname available to match against the certificate
identity.
An illustrative example of usage is:
#!/usr/bin/perl
use IO::Socket::UNIX;
use Fcntl;
unlink "/tmp/qmp";
my $srv = IO::Socket::UNIX->new(
Type => SOCK_STREAM(),
Local => "/tmp/qmp",
Listen => 1,
);
my $flags = fcntl $srv, F_GETFD, 0;
fcntl $srv, F_SETFD, $flags & ~FD_CLOEXEC;
my $fd = $srv->fileno();
exec "qemu-system-x86_64", \
"-chardev", "socket,fd=$fd,server,nowait,id=mon", \
"-mon", "chardev=mon,mode=control";
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The SocketAddress 'fd' kind accepts the name of a file descriptor passed
to the monitor with the 'getfd' command. This makes it impossible to use
the 'fd' kind in cases where a monitor is not available. This can apply in
handling command line argv at startup, or simply if internal code wants to
use SocketAddress and pass a numeric FD it has acquired from elsewhere.
Fortunately the 'getfd' command mandated that FD names must not start with a leading digit. We can thus safely extend the semantics of the SocketAddress 'fd' kind to allow a purely numeric name to reference a file descriptor that QEMU already has open. There will be restrictions on when each kind can be used.
In codepaths where we are handling a monitor command (ie cur_mon != NULL),
we will only support use of named file descriptors as before. Use of FD
numbers is still not permitted for monitor commands.
In codepaths where we are not handling a monitor command (ie cur_mon ==
NULL), we will not support named file descriptors. Instead we can reference
FD numbers explicitly. This allows the app spawning QEMU to intentionally
"leak" a pre-opened socket to QEMU and reference that in a SocketAddress
definition, or for code inside QEMU to pass pre-opened FDs around.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The SocketAddress struct has an "fd" type, which references the name of a
file descriptor passed over the monitor using the "getfd" command. We
currently blindly assume the FD is a socket, which can lead to hard to
diagnose errors later. This adds an explicit check that the FD is actually
a socket to improve the error diagnosis.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The fd_is_socket() helper method is useful in a few places, so put it in
the common sockets code. Make the code more compact while moving it.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of just checking whether it is possible to bind() on a socket, also
check that we can successfully connect() to the socket we bound to. This
more closely replicates the level of functionality that tests will actually
use.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The test-io-channel-socket.c file has some useful helper functions for
checking if a specific IP protocol is available. Other tests need to
perform similar kinds of checks to avoid running tests that will fail
due to missing IP protocols.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are qemu_strtoNN functions for various sized integers. This adds two
more for plain int & unsigned int types, with suitable range checking.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The command can be used by libvirt to query the SEV capabilities.
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The command can be used by libvirt to retrieve the measurement of the SEV guest. This measurement is a signature of the memory contents that was encrypted through the LAUNCH_UPDATE_DATA command.
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The QMP query command can be used to retrieve the SEV information when memory encryption is enabled on an AMD platform.
Cc: Eric Blake <eblake@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since --enable-debug no longer enables sanitizers, we need an explicit --enable-sanitizers.
The llvm package is required for llvm-symbolizer, to get symbols in backtraces.
Add make V=1 to get details about failing tests.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180312120849.20073-1-marcandre.lureau@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
An NDOB bit set to one specifies that the disk shall not transfer data
from the data-out buffer and shall process the command as if the data-out
buffer contained user data set to all zeroes.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Introduce ChardevClass.chr_machine_done() hook so that chardevs can run
customized procedures after machine init.
There was an existing mux user already that did a similar thing but used a raw machine-done notifier. Generalize it into a framework, and let the mux chardevs provide such a class-specific hook to achieve the same thing. Then we can move the mux related code to the char-mux.c file. While at it, replace the mux_realized variable with the global machine_init_done variable.
This notifier framework will be further leveraged by other types of chardevs soon.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180306053320.15401-6-peterx@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In 2c9bb29703 I added a migration test that purposely fails;
unfortunately it prints a copy of the failure message to stderr
which makes the output a bit messy.
Hide stderr for that test.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180306173042.24572-1-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
There is a race between the test's 'query-migrate' QMP command after the
QMP 'STOP' event and completing the migration:
The test case invokes 'query-migrate' upon receiving 'STOP'. At this
point the migration thread may still be in the process of completing.
Therefore 'query-migrate' can return 'status': 'active' for a brief
window of time instead of 'status': 'completed'. This results in
qemu-iotests 203 hanging.
Solve the race by enabling the 'events' migration capability, which
causes QEMU to emit migration-specific QMP events that do not suffer
from this race condition. Wait for the QMP 'MIGRATION' event with
'status': 'completed'.
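Concretely, the test now issues something like the following and then waits for the terminal event (timestamp elided):
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [ { "capability": "events",
                                     "state": true } ] } }
...
{ "event": "MIGRATION", "data": { "status": "completed" }, ... }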
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180305155926.25858-1-stefanha@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch tweaks TestParallelOps in iotest 030 so it allocates data
in smaller regions (256KB/512KB instead of 512KB/1MB) and the
block-stream job in test_stream_commit() only needs to copy data that
is at the very end of the image.
This way when the block-stream job is awakened it will finish right
away without any chance of being stopped by block_job_sleep_ns(). This
triggers the bug that was fixed by 3d5d319e12 and
1a63a90750 and is therefore a more useful test
case for parallel block jobs.
After this patch the aforementioned bug can also be reproduced with the test_stream_parallel() test case.
Since with this change the stream job in test_stream_commit() finishes
early, this patch introduces a similar test case where both jobs are
slowed down so they can actually run in parallel.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Cc: John Snow <jsnow@redhat.com>
Message-id: 20180306130121.30243-1-berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The AFL image is there to exercise the code validating image size, which doesn't work on 32-bit hosts or when out of memory (there is a large allocation before the interesting point). So check for that and skip the test, instead of faking the result.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 20180301011413.11531-1-famz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The majority of our iotests have the executable bit set; fix the
few outliers for consistency.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 20180305161824.7188-1-eblake@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Instead of manually creating the BlockdevCreateOptions object, use a
visitor to parse the given options into the QAPI object.
This involves translation from the old command line syntax to the syntax
mandated by the QAPI schema. Option names are still checked against
qcow2_create_opts, so only the old option names are allowed on the
command line, even if they are translated in qcow2_create().
In contrast, new option values are optionally recognised besides the old
values: 'compat' accepts 'v2'/'v3' as an alias for '0.10'/'1.1', and
'encrypt.format' accepts 'qcow' as an alias for 'aes' now.
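For instance, a command line like the following is now accepted, with 'v3' treated as an alias for '1.1' (illustrative invocation):
qemu-img create -f qcow2 -o compat=v3 test.qcow2 64M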
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
A few block drivers will need to rename .bdrv_create options for their
QAPIfication, so let's have a helper function for that.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Basic test for merging two QemuOptsLists.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
'qemu-img check' cannot detect if a snapshot's L1 table is corrupted.
This patch checks the table's offset and size and reports corruption
if the values are not valid.
This patch doesn't add code to fix that corruption yet, only to detect
and report it.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This function deletes a snapshot from disk, removing its entry from
the snapshot table, freeing its L1 table and decreasing the refcounts
of all clusters.
The L1 table offset and size are however not validated. If we use
invalid values in this function we'll probably corrupt the image even
more, so we should return an error instead.
We now have a function to take care of this, so let's use it.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This function copies a snapshot's L1 table into the active one without
validating it first.
We now have a function to take care of this, so let's use it.
Cc: Eric Blake <eblake@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The inactive-l2 overlap check iterates over the L1 tables of all snapshots, but it does not validate them first.
We now have a function to take care of this, so let's use it.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>