backup-top "supports" write-unchanged, by skipping CBW operation in
backup_top_co_pwritev. But it forgets to do the same in
backup_top_co_pwrite_zeroes, as well as declare support for
BDRV_REQ_WRITE_UNCHANGED.
Fix this, and, while being here, declare also support for flags
supported by source child.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200207161231.32707-1-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When initializing the LUKS header, its size with the default encryption
parameters is currently 2068480 bytes. This is rounded up to a multiple
of the cluster size, 2081792, with 64k sectors. If the end of the header
is not the same as the end of the cluster, we fill the extra space with
zeros. This overlooked the fact that even the space allocated for the
header itself is not fully initialized, as we only write key material
for the first key slot. The space reserved for the other 7 slots is
never written to.
An optimization to the refcount checking code:
commit a5fff8d4b4
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: Wed Feb 27 16:14:30 2019 +0300
qcow2-refcount: avoid eating RAM
made the assumption that every cluster which was allocated would have at
least some data written to it. This was violated by the way the LUKS
header is only partially written, with much of its space simply reserved
for future use.
Depending on the cluster size this problem was masked by the
logic which wrote zeros between the end of the LUKS header and
the end of the cluster.
$ qemu-img create --object secret,id=cluster_encrypt0,data=123456 \
-f qcow2 -o cluster_size=2k,encrypt.iter-time=1,\
encrypt.format=luks,encrypt.key-secret=cluster_encrypt0 \
cluster_size_check.qcow2 100M
Formatting 'cluster_size_check.qcow2', fmt=qcow2 size=104857600
encrypt.format=luks encrypt.key-secret=cluster_encrypt0
encrypt.iter-time=1 cluster_size=2048 lazy_refcounts=off refcount_bits=16
$ qemu-img check --object secret,id=cluster_encrypt0,data=redhat \
'json:{"driver": "qcow2", "encrypt.format": "luks", \
"encrypt.key-secret": "cluster_encrypt0", \
"file.driver": "file", "file.filename": "cluster_size_check.qcow2"}'
ERROR: counting reference for region exceeding the end of the file by one cluster or more: offset 0x2000 size 0x1f9000
Leaked cluster 4 refcount=1 reference=0
...snip...
Leaked cluster 130 refcount=1 reference=0
1 errors were found on the image.
Data may be corrupted, or further writes to the image may corrupt it.
127 leaked clusters were found on the image.
This means waste of disk space, but no harm to data.
Image end offset: 268288
The problem only exists when the disk image is entirely empty. Writing
data to the disk image payload will solve the problem by causing the
end of the file to be extended further.
The change fixes this by ensuring that the entire allocated LUKS header
region is fully initialized with zeros. qemu-img check will still fail
for any pre-existing disk images created prior to this change, unless at
least 1 byte of the payload has been written.
Fully writing zeros to the entire LUKS header is a good idea regardless,
as it ensures that the space has been allocated on the host filesystem
(or whatever block storage backend is used).
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200207135520.2669430-1-berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
In many cases the target of a convert operation is a newly provisioned
device that the user knows is blank (reads as zero). In this situation
there is no requirement for qemu-img to wastefully zero out the entire
device.
Add a new option, --target-is-zero, allowing the user to indicate that
an existing target device will return zeros for all reads.
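A minimal usage sketch (not taken from the patch itself): it assumes qemu-img is on PATH, that the option is combined with -n so the pre-existing target is not re-created, and it uses placeholder file names.

import subprocess

# Placeholder names: the target is a pre-existing device/image known to read as zeros.
src = "guest.qcow2"
dst = "/dev/vg0/new-blank-lv"

# Convert without creating the target (-n) and tell qemu-img that the target
# already reads as zeros, so it does not need to zero it out first.
subprocess.run(
    ["qemu-img", "convert", "-n", "--target-is-zero",
     "-f", "qcow2", "-O", "raw", src, dst],
    check=True,
)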
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20200205110248.2009589-2-david.edmondson@oracle.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When a management application manages node names there's no reason to
recurse into backing images in the output of query-named-block-nodes.
Add a parameter to the command which will return just the top level
structs.
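As a rough illustration, the command could be driven over a QMP socket as in the sketch below. This is an assumption-laden sketch: it presumes QEMU was started with something like -qmp unix:/tmp/qmp.sock,server,nowait, that the new boolean parameter is named "flat" (the name is not spelled out above), and it does no error or event handling.

import json
import socket

QMP_SOCK = "/tmp/qmp.sock"  # assumed -qmp unix socket path

def qmp(sock, reader, execute, arguments=None):
    # Send one QMP command and return the parsed reply (no event handling).
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    sock.sendall(json.dumps(cmd).encode() + b"\r\n")
    return json.loads(reader.readline())

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(QMP_SOCK)
    reader = s.makefile()
    json.loads(reader.readline())           # consume the QMP greeting
    qmp(s, reader, "qmp_capabilities")      # enter command mode
    # Return only the top-level structs, without recursing into backing images:
    nodes = qmp(s, reader, "query-named-block-nodes", {"flat": True})
    print(json.dumps(nodes["return"], indent=2))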
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Message-Id: <4470f8c779abc404dcf65e375db195cd91a80651.1579509782.git.pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[mreitz: Fixed coding style]
Signed-off-by: Max Reitz <mreitz@redhat.com>
8dff69b94 added an aio parameter to the drive parameter but forgot to
add a comma before it, thus breaking the test. Fix it again.
Fixes: 8dff69b941
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200206130812.612960-1-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit d9df28e7b0 ("iotests: check whitelisted formats") added the
modern @iotests.skip_if_unsupported() to the functions in this test,
so we don't need the old explicit test here anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200129141751.32652-1-thuth@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch adds an additional field to the qcow2 header: compression_type,
which specifies the compression type. If the field is absent or zero, the
default compression type is used: ZLIB, which corresponds to the current
behavior.
A new compression type (ZSTD) will be added in a further commit.
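To make the on-disk layout concrete, here is a small sketch that reads the field from an image; it assumes the layout from the updated qcow2 specification (compression_type as a single byte at offset 104, directly after the big-endian u32 header_length at offset 100) and treats an absent field as zero, i.e. ZLIB.

import struct
import sys

HEADER_LENGTH_OFFSET = 100     # u32 header_length in the v3 header (big-endian)
COMPRESSION_TYPE_OFFSET = 104  # assumed: u8 compression_type right after header_length

def qcow2_compression_type(path):
    with open(path, "rb") as f:
        magic, version = struct.unpack(">4sI", f.read(8))
        if magic != b"QFI\xfb":
            raise ValueError("not a qcow2 image")
        if version < 3:
            return 0  # v2 headers have no extra fields; default compression (ZLIB)
        f.seek(HEADER_LENGTH_OFFSET)
        (header_length,) = struct.unpack(">I", f.read(4))
        if header_length <= COMPRESSION_TYPE_OFFSET:
            return 0  # field absent -> default compression type (ZLIB)
        f.seek(COMPRESSION_TYPE_OFFSET)
        return f.read(1)[0]

if __name__ == "__main__":
    print(qcow2_compression_type(sys.argv[1]))  # 0 means ZLIB (the default)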
Suggested-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200131142219.3264-3-vsementsov@virtuozzo.com>
[mreitz: s/Bits 3-63: Reserved/Bits 4-63: Reserved/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Make it more obvious how to add new fields to the version 3 header and
how to interpret them.
The specification is adjusted so that, for newly defined optional fields:
1. Software may support some of these optional fields and ignore the
others, which means that features may be backported to downstream QEMU
independently.
2. If we want to add an incompatible field (or a field for which some of
its values would be incompatible), it must be accompanied by an
incompatible feature bit.
Also, the concept of "default is zero" is clarified, as it is strange to
say that the value of the field is assumed to be zero by software
versions which don't know about the field at all and don't know how to
treat it, be it zero or not.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200131142219.3264-2-vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
[mreitz: s/some its/some of its/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-20-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add two tests to see that you cannot replace a Quorum child with the
mirror job while the child is in use by a different parent.
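For context, replacing a node through the mirror job is requested with the mirror command's "replaces" argument; the sketch below shows roughly what such a QMP command looks like, using hypothetical node names (it only prints the payload, which would be sent over a QMP connection as in the socket sketch further up).

import json

# Hypothetical names: "quorum0" is the Quorum node, "broken" one of its
# children, and "repair0" a replacement node added beforehand via blockdev-add.
mirror_cmd = {
    "execute": "blockdev-mirror",
    "arguments": {
        "job-id": "repair-quorum",
        "device": "quorum0",    # the mirror source is the quorum node
        "target": "repair0",    # data is mirrored into the replacement node
        "sync": "full",
        "replaces": "broken",   # on completion, "broken" is replaced by "repair0"
    },
}
print(json.dumps(mirror_cmd, indent=2))
# The new tests check that this is rejected while "broken" is also in use
# by another parent.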
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-19-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
041's TestRepairQuorum has its own image_len; there is no need to refer
to TestSingleDrive. (This patch allows commenting out TestSingleDrive to
speed up 041 when testing the test itself.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-18-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-17-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
All tearDowns in 041 shut down the VM. Thus, test cases do not need to
do it themselves (unless they need the VM to be down for some
post-operation check).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-16-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This way, we get to see errors during the completion phase.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-14-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Quorum is not a filter, for example because it cannot guarantee which of
its children will serve the next request. Thus, the data in any of its
children may differ from the data visible to quorum's parents.
We have other filters with multiple children, but they differ in this
aspect:
- blkverify quits the whole qemu process if its children differ. As
such, we can always skip it (as a filter node) by going to either of its
children; both have the same data.
- replication generally serves requests from bs->file, so this is its
only actually filtered child.
- Block job filters currently only have one child, but they will
probably get more children in the future. Still, they will always
have only one actually filtered child.
Having "filters" as a dedicated node category only makes sense if you
can skip them by going to a one fixed child that always shows the same
data as the filter node. Quorum cannot fulfill this, so it is not a
filter.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-13-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There is no guarantee that we can still replace the node we want to
replace at the end of the mirror job. Double-check by calling
bdrv_recurse_can_replace().
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-12-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It no longer has any users.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-11-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Let check_to_replace_node() use the more specialized
bdrv_recurse_can_replace() instead of
bdrv_recurse_is_first_non_filter(), which is too restrictive (or, in the
case of quorum, sometimes not restrictive enough).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-10-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-8-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
After a couple of follow-up patches, this function will replace
bdrv_recurse_is_first_non_filter() in check_to_replace_node().
bdrv_recurse_is_first_non_filter() is both insufficiently specific for
check_to_replace_node() (it allows cases that should not be allowed,
like replacing quorum child nodes with dissenting data that have more
parents than just quorum) and too restrictive (it is perfectly fine to
replace filters).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-7-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Quorum cannot share the WRITE or RESIZE permission on its children.
Presumably, it currently does so only because, as a supposed filter, it
seemed intuitively correct to point its .bdrv_child_perm to
bdrv_filter_default_perms().
However, it is not really a filter, and bdrv_filter_default_perms() does
not work for it, so we have to provide a custom .bdrv_child_perm
implementation.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-6-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Using -drive with default options means that a virtio-blk drive will be
created with write access to the nodes that are about to become quorum
children. Quorum should have exclusive write access to them, so we
should use -blockdev instead.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-5-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It is unused now. (And it was ugly because it needed to explore all BDS
chains from the top.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-4-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Block nodes that do not allow resizing should not share BLK_PERM_RESIZE.
It does not matter whether they are the first non-filter in their chain
or not.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-3-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There is no good reason why we would allow external snapshots only on
the first non-filter node in a chain. Parent BDSs should not care
whether their child is replaced by a snapshot. (If they do care, they
should announce that via freezing the chain, which is checked in
bdrv_append() through bdrv_set_backing_hd().)
Before we had bdrv_is_first_non_filter() here (since 212a5a8f09), there
was a special function bdrv_check_ext_snapshot() that allowed snapshots
by default, but block drivers could override this. Only blkverify did
so, however.
It is not clear to me why blkverify would do so; maybe just so that the
testee block driver would not be replaced. The introducing commit
f6186f49e2 does not explain why. Maybe because 08b24cfe37 would have
been the correct solution? (Which adds a .supports_backing check.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200218103454.296704-2-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fixes: 6663a0a337
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200218094402.26625-5-philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fixes: 132ada80c4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200218094402.26625-4-philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests both read failure (from the top node) and write failure (to
the base node) for on-error=report/stop/ignore.
As block-commit actually starts two different types of block jobs
(mirror.c for committing the active layer, commit.c for intermediate
layers), all tests are run for both cases.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200214200812.28180-8-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Now that the error handling in the common block job is fixed, we can
expose the on-error option in QMP instead of hard-coding it as 'report'
in qmp_block_commit().
This fulfills the promise that the old comment in that function made,
even if a bit later than expected: "This will be part of the QMP
command, if/when the BlockdevOnError change for blkmirror makes it in".
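For illustration, a block-commit invocation using the newly exposed option might look like the following sketch; the device name is hypothetical and the payload would be sent over QMP as in the socket sketch further up.

import json

# Hypothetical device name for an active-layer commit; base/top are omitted
# here and default as described in the QMP schema.
commit_cmd = {
    "execute": "block-commit",
    "arguments": {
        "job-id": "commit0",
        "device": "drive0",
        "on-error": "stop",   # newly exposed; previously hard-coded to 'report'
    },
}
print(json.dumps(commit_cmd, indent=2))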
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200214200812.28180-7-kwolf@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
block_job_error_action() needs to know if reading from the top node or
writing to the base node failed so that it can set the right 'operation'
in the BLOCK_JOB_ERROR QMP event.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200214200812.28180-6-kwolf@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
commit_populate() is a very short function and only called in a single
place. Its return value doesn't tell us whether an error happened while
reading or writing, which would be necessary for sending the right data
in the BLOCK_JOB_ERROR QMP event.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200214200812.28180-5-kwolf@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The block_job_error_action() error call in the commit job gives the
on_err and is_read arguments in the wrong order. Fix this.
(Of course, hard-coded is_read = false is wrong, too, but that's a
separate problem for a separate patch.)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200214200812.28180-4-kwolf@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The bytes_written variable is only ever written to; it serves no
purpose. This has actually been the case since the commit job was first
introduced in commit 747ff60263.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200214200812.28180-3-kwolf@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It is not obvious what 'ignore' actually means for block jobs: It could
be continuing the job and returning success in the end despite the error
(no block job does this). It could also mean continuing and returning
failure in the end (this is what stream does). And it can mean retrying
the failed request later (this is what backup, commit and mirror do).
This (somewhat inconsistent) behaviour was introduced and described for
stream and mirror in commit 32c81a4a6e. backup and commit were
introduced later and use the same model as mirror.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200214200812.28180-2-kwolf@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fix warning reported by Clang static code analyzer:
CC block/qcow2-bitmap.o
block/qcow2-bitmap.c:650:5: warning: Value stored to 'ret' is never read
ret = -EINVAL;
^ ~~~~~~~
Fixes: 88ddffae8
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200215161557.4077-2-philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds a test for 'qemu-img convert' with copy offloading where the
target image has an external data file. If the test host supports it,
the test covers both the case where copy offloading is supported and the
case where it isn't (otherwise we just test the unsupported case twice).
More specifically, the case with unsupported copy offloading tests
qcow2_alloc_cluster_abort() with external data files.
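As a rough idea of what such a test exercises, the sketch below creates a qcow2 target backed by an external data file and converts into it. The -o data_file= creation option and the -C convert flag (requesting copy offloading) are assumptions on my part, not taken from the message above, and the file names are placeholders.

import subprocess

def run(*args):
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# Placeholder image names.
run("qemu-img", "create", "-f", "qcow2", "source.qcow2", "64M")
# Target qcow2 whose guest data lives in an external data file.
run("qemu-img", "create", "-f", "qcow2", "-o", "data_file=target.data",
    "target.qcow2", "64M")
# Convert into the existing target (-n); -C asks for copy offloading, and
# qemu-img is expected to fall back to a regular copy if the host cannot
# offload the copy.
run("qemu-img", "convert", "-n", "-C", "-f", "qcow2", "-O", "qcow2",
    "source.qcow2", "target.qcow2")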
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200211094900.17315-4-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
With an external data file, cluster allocations return an offset in the
data file and are not refcounted. In this case, there is nothing to do
for qcow2_alloc_cluster_abort(). Freeing the same offset in the qcow2
file is wrong and causes crashes in the best case or image corruption in
the worst case.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200211094900.17315-3-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In the case that update_refcount() frees a refcount block, it evicts it
from the metadata cache. Before doing so, however, it returns the
currently used refcount block to the cache because it might be the same.
Returning the refcount block early means that we need to reset
old_table_index so that we reload the refcount block in the next
iteration if it is actually still in use.
Fixes: f71c08ea8e
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200211094900.17315-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Before this commit, BDRVVVFATState.qcow is unrefed in write_target_close
when the backing bdrv of vvfat is closed. However, the qcow bdrv is
opened as a child of vvfat in enable_write_target(), so it will also be
unrefed when vvfat itself is closed. This causes a use-after-free of
qcow when freeing a vvfat node that has the backing bdrv and the qcow
bdrv as children in this order: bdrv_close(vvfat) tries to free the qcow
bdrv after freeing the backing bdrv, because the QLIST_FOREACH_SAFE()
loop keeps the next pointer, but the BdrvChild of qcow is already freed
in bdrv_close(backing bdrv).
Signed-off-by: Hikaru Nishida <hikarupsp@gmail.com>
Message-Id: <20200209175156.85748-1-hikarupsp@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
I/O requests to encrypted media should be aligned to the sector size
used by the underlying encryption method, not to BDRV_SECTOR_SIZE.
Fortunately this doesn't break anything at the moment because both of
the existing QCRYPTO_BLOCK_*_SECTOR_SIZE constants have the same value
as BDRV_SECTOR_SIZE.
The checks in qcow2_co_preadv_encrypted() are also unnecessary because
they are repeated immediately afterwards in qcow2_co_encdec().
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200213171646.15876-1-berto@igalia.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
mirror_wait_for_free_in_flight_slot() just picks a random operation to
wait for. However, when mirror_co_read() waits for free slots, its
MirrorOp is already in s->ops_in_flight, so if not enough slots are
immediately available, an operation can end up waiting for itself to
complete, which results in a hang.
Fix this by passing the current MirrorOp and skipping this operation
when picking an operation to wait for.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1794692
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If a coroutine is launched, but the coroutine pointer isn't stored
anywhere, debugging any problems inside the coroutine is quite hard.
Let's store the coroutine pointer of a mirror operation in MirrorOp to
have it available in the debugger.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Supporting a build platform beyond its end of life makes no sense.
Spell that out just to be clear.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200213084335.15100-1-armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
monitor/misc.c contains code that works only in the system emulator, so
it can't be linked to tools like a storage daemon. In order to make
schema introspection available for tools, move the function to
monitor/qmp-cmds-control.c, which can be linked into the storage daemon.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200129102239.31435-5-kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Move all of the QMP command handlers that implement the 'control' module
(qapi/control.json), which can be shared between the system emulator and
tools such as a storage daemon, to a new file, monitor/qmp-cmds-control.c.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200129102239.31435-4-kwolf@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
misc.json contains definitions that are related to the system emulator,
so it can't be used for other tools like the storage daemon. This patch
moves basic functionality that is shared between all tools (and mostly
related to the monitor itself) into a new control.json, which could be
used in tools as well.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200129102239.31435-3-kwolf@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>