Commit 70ff5b07 wanted to move the diff between actual and reference
output to the end after printing the test result line. It really only
copied it, though, so the diff is now displayed twice. Remove the old
one.
Fixes: 70ff5b07fc
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This test checks the bug in qcow2_process_discards that was fixed by the
previous commit.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This fixes at least one overflow in qcow2_process_discards, which passes
a 64-bit region length to bdrv_pdiscard, whose bytes parameter (sectors
in the past) has been an int since its introduction in 0b919fae.
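For illustration, here is a self-contained sketch of the truncation (the
helper is a stand-in, not the real bdrv_pdiscard()):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for a discard function whose length parameter is an int. */
    static void fake_pdiscard(int64_t offset, int bytes)
    {
        printf("discarding %d bytes at offset %" PRId64 "\n", bytes, offset);
    }

    int main(void)
    {
        uint64_t region_length = 3ULL * 1024 * 1024 * 1024;   /* 3 GiB */

        /* On common platforms the 64-bit length silently truncates to a
         * negative int here; this is the kind of overflow being fixed. */
        fake_pdiscard(0, region_length);
        return 0;
    }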
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Let's at least trace the ignored failure.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
All callers of bdrv_set_aio_context() have been eliminated now; they have
moved to bdrv_try_set_aio_context() and related safe functions. Remove
bdrv_set_aio_context().
With this, we now know that the .set_aio_ctx callback must be present in
bdrv_set_aio_context_ignore(), because bdrv_can_set_aio_context() would
already have returned false otherwise. So instead of checking the
condition, we can assert it.
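Purely as an illustration of the shape of that change (field and argument
names here are assumptions, not the exact QEMU code):

    /* Before: silently skip parents that lack the callback. */
    if (child->role->set_aio_ctx) {
        child->role->set_aio_ctx(child, new_context, &ignore);
    }

    /* After: bdrv_can_set_aio_context() has already rejected any parent
     * lacking the callback, so it must be present here. */
    assert(child->role->set_aio_ctx);
    child->role->set_aio_ctx(child, new_context, &ignore);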
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
No reason to use the unchecked version in tests, even more so when these
are the last callers of bdrv_set_aio_context() outside of block.c.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests that devices refuse to be attached to a node that has already
been moved to a different iothread if they can't be or aren't configured
to work in the same iothread.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests that blockdev-add can correctly add a qcow2 overlay to an
image used by a virtio-scsi disk in an iothread. The interesting point
here is whether the newly added node gets correctly moved into the
iothread AioContext.
If it isn't, we get an assertion failure in virtio-scsi while processing
the next request:
virtio_scsi_ctx_check: Assertion `blk_get_aio_context(d->conf.blk) == s->ctx' failed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The mirror and commit block jobs use bdrv_set_aio_context() to move
their filter node into the right AioContext before hooking it up in the
graph. Similarly, bdrv_open_backing_file() explicitly moves the backing
file node into the right AioContext first.
This isn't necessary any more; they are now moved into the right context
automatically when they are attached.
However, in the case of bdrv_open_backing_file() with a node reference,
it's actually not only unnecessary, but even wrong: The unchecked
bdrv_set_aio_context() changes the AioContext of the child node even if
other parents require it to retain the old context. So this is not only
a simplification, but a bug fix, too.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1684342
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Monitor commands can handle errors, so they can easily be converted to
using the safer bdrv_try_set_aio_context() function.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A node should only be in a non-default AioContext if a user is attached
to it that requires this. When the last parent of a node is gone, it can
move back to the main AioContext.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Test that BlockBackends preserve their assigned AioContext even when the
root node goes away. Inserting a new root node will move that node into
the BlockBackend's AioContext.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Opening a new parent node for a node that has already been moved into a
different AioContext must cause the new parent to move into the same
context.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
So far, we only made sure that updating the AioContext of a node
affected the whole subtree. However, if a node is newly attached to a
new parent, we also need to make sure that both the subtree of the node
and the parent are in the same AioContext. This patch tries to move the
new child node into the parent's AioContext and returns an error if this
isn't possible.
BlockBackends now actually apply their AioContext to their root node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This makes use of qdev_prop_drive_iothread for scsi-disk so that the
disk can be attached to a node that is already in the target AioContext.
We need to check that the HBA actually supports iothreads; otherwise,
scsi-disk must make sure that the node is already in the main
AioContext.
This changes the error message for conflicting iothread settings.
Previously, virtio-scsi produced the error message, now it comes from
blk_set_aio_context(). Update a test case accordingly.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Some qdev block devices have support for iothreads and take care of the
AioContext they are running in, but most devices don't know about any of
this. For the latter category, the qdev drive property must make sure
that their BlockBackend is in the main AioContext.
Unfortunately, while the current code just does the same thing for
devices that do support iothreads, this is not correct and it would show
as soon as we actually try to keep a consistent AioContext assignment
across all nodes and users of a block graph subtree: If a node is
already in a non-default AioContext because of one of its users,
attaching a new device should still be possible if that device can work
in the same AioContext. Switching the node back to the main context
first and only then into the device AioContext causes failure (because
the existing user wouldn't allow the switch to the main context).
So devices that support iothreads need a different kind of drive
property that leaves the node in its current AioContext, but by using
this type, the device promises to check later that it can work with this
context.
This patch adds the qdev infrastructure that allows devices to signal
that they handle iothreads and qdev should leave the AioContext alone.
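As a rough sketch of how a device opts in (the device and field names are
hypothetical, and the macro name is inferred from the
qdev_prop_drive_iothread property mentioned above):

    /* A device that can work in an iothread uses the iothread-aware drive
     * property, which leaves the node's AioContext alone; the device then
     * validates the context itself at realize time. */
    static Property mydev_properties[] = {
        DEFINE_PROP_DRIVE_IOTHREAD("drive", MyDevState, conf.blk),
        DEFINE_PROP_END_OF_LIST(),
    };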
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds a new parameter to blk_new() which requires its callers to
declare from which AioContext this BlockBackend is going to be used (or
the locks of which AioContext need to be taken anyway).
The given context is only stored and kept up to date when changing
AioContexts. Actually applying the stored AioContext to the root node
is saved for another commit.
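For illustration, a call under the new signature might look like this (a
minimal sketch assuming the context is the first argument and the caller
runs in the main loop; the permission flags are only an example):

    /* The BlockBackend declares up front which AioContext it will be
     * used from. */
    BlockBackend *blk = blk_new(qemu_get_aio_context(),
                                BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE,
                                BLK_PERM_ALL);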
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add an Error parameter to blk_set_aio_context() and use
bdrv_child_try_set_aio_context() internally to check whether all
involved nodes can actually support the AioContext switch.
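A converted caller might then look roughly like this (a sketch assuming
the function reports failure through errp and its return value):

    /* Propagate the error instead of silently switching contexts. */
    if (blk_set_aio_context(blk, new_context, errp) < 0) {
        return;
    }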
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The NBD server uses an AioContext notifier, so it can tolerate that its
BlockBackend is switched to a different AioContext. Before we start
actually calling bdrv_try_set_aio_context(), which checks for
consistency, outside of test cases, we need to make sure that the NBD
server actually allows this.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Just make the test cover the AioContext of the filter node as well.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kenneth Heitke <kenneth.heitke@intel.com>
Reviewed-by: Klaus Birkelund Jensen <klaus.jensen@cnexlabs.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The callback-based laio_submit() and laio_cancel() were left in place
after the Linux AIO backend was rewritten to coroutines, in the hope that
they would be used by other code that could bypass coroutines. They can
be safely removed because they have not been used since then.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch adds a test where we cancel a throttled mirror job and
immediately close the VM before it can be cancelled. Doing so will
invoke bdrv_drain_all() while the mirror job tries to drain the
throttled node. When bdrv_drain_all_end() tries to lift its drain on
the throttle node, the job will exit and replace the current root node
of the BB drive0 (which is the job's filter node) by the throttle node.
Before the previous patch, this replacement did not increase drive0's
quiesce_counter by a sufficient amount, so when
bdrv_parent_drained_end() (invoked by bdrv_do_drained_end(), invoked by
bdrv_drain_all_end()) tried to end the drain on all of the throttle
node's parents, it decreased drive0's quiesce_counter below 0 -- which
fails an assertion.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When ending a drained section, bdrv_do_drained_end() currently first
decrements the quiesce_counter, and only then actually ends the drain.
The bdrv_drain_invoke(bs, false) call may cause graph changes. Say the
graph change involves replacing an existing BB's ("blk") BDS
(blk_bs(blk)) by @bs. Let us introduce the following values:
- bs_oqc = old_quiesce_counter
(so bs->quiesce_counter == bs_oqc - 1)
- obs_qc = blk_bs(blk)->quiesce_counter (before bdrv_drain_invoke())
Let us assume there is no blk_pread_unthrottled() involved, so
blk->quiesce_counter == obs_qc (before bdrv_drain_invoke()).
Now replacing blk_bs(blk) by @bs will reduce blk->quiesce_counter by
obs_qc (making it 0) and increase it by bs_oqc-1 (making it bs_oqc-1).
bdrv_drain_invoke() returns and we invoke bdrv_parent_drained_end().
This will decrement blk->quiesce_counter by one, so it would be -1 --
were there not an assertion against that in blk_root_drained_end().
We therefore have to keep the quiesce_counter up at least until
bdrv_drain_invoke() returns, so that bdrv_parent_drained_end() does the
right thing for the parents @bs got during bdrv_drain_invoke().
But let us delay it even further, namely until bdrv_parent_drained_end()
returns, because then it mirrors bdrv_do_drained_begin(): There, we
first increment the quiesce_counter, then begin draining the parents,
and then call bdrv_drain_invoke(). It makes sense to let
bdrv_do_drained_end() unravel this exactly in reverse.
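To make the accounting concrete, here is a toy model of the old behaviour
for the simplest case where both counters start at 1 (purely illustrative,
not QEMU code):

    #include <stdio.h>

    int main(void)
    {
        int bs_qc  = 1;   /* @bs: bs_oqc == 1                         */
        int blk_qc = 1;   /* blk: mirrors its old root, obs_qc == 1   */

        /* Old order: decrement @bs first ...                         */
        bs_qc--;                        /* == bs_oqc - 1 == 0         */

        /* ... then bdrv_drain_invoke() triggers the graph change: blk
         * drops its old root (-obs_qc) and adopts @bs (+bs_qc).      */
        blk_qc = blk_qc - 1 + bs_qc;    /* == 0                       */

        /* bdrv_parent_drained_end() now ends the drain on @bs's
         * parents, which include blk after the graph change.         */
        blk_qc--;                       /* == -1                      */

        printf("blk->quiesce_counter would end up at %d, tripping the "
               "assertion in blk_root_drained_end()\n", blk_qc);
        return 0;
    }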
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_co_block_status() digs into bs->file for an additional, more
accurate search for holes inside regions reported as DATA by bs, since
5daa74a6eb.
This accuracy is not free: assume we have a qcow2 disk. qcow2 already
knows where the holes are and where the data is, but every block_status
request additionally calls lseek. For a big disk full of data, any
iterative copying block job (or img convert) calls lseek(SEEK_HOLE) on
every iteration, and each of these lseeks has to walk through all the
metadata up to the end of the file. This is obviously inefficient
behavior, and for many scenarios we don't need this lseek at all.
However, lseek is needed when we have a metadata-preallocated image.
So, let's detect the metadata-preallocation case and avoid digging into
qcow2's protocol file in other cases.
The idea is to compare the allocated size from the file system's point of
view with the allocated size from qcow2's point of view (derived from the
refcounts). If the file system's allocation is significantly lower,
consider the image metadata-preallocated.
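A rough sketch of that comparison (the refcount-walking helper and the
threshold are assumptions for illustration; the real
qcow2_detect_metadata_preallocation() may differ in detail):

    static bool detect_metadata_preallocation(BlockDriverState *bs)
    {
        BDRVQcow2State *s = bs->opaque;

        /* Bytes the file system actually backs with storage. */
        int64_t real_allocation = bdrv_get_allocated_file_size(bs->file->bs);

        /* Clusters qcow2 considers allocated according to the refcounts
         * (hypothetical helper). */
        int64_t qcow2_clusters = count_allocated_clusters_from_refcounts(bs);

        if (real_allocation < 0 || qcow2_clusters < 0) {
            return false;
        }

        /* If the file system backs much less than the refcounts claim,
         * the data clusters were only preallocated as metadata. */
        return real_allocation < qcow2_clusters * s->cluster_size / 2;
    }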
iotest 102 changed, as our detector can't classify a shrunken file as
metadata-preallocated; this doesn't seem wrong, because with metadata
preallocation we always have a valid file length.
Two other iotests have a slight change in their QMP output sequence:
Active 'block-commit' returns earlier because the job coroutine yields
earlier on a blocking operation. This operation is loading the refcount
blocks in qcow2_detect_metadata_preallocation().
Suggested-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The block layer may recursively check block_status in qcow2's file child
if the qcow2 driver returned DATA. Add several test cases to check the
influence of lseek on block_status performance. To see a real difference,
run them on tmpfs.
The tests were originally created by Kevin; I just refactored them and
put them together into one executable file with simple output.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If the bitmap can't be used for whatever reason, we skip putting down
the reference. Fix that.
In practice, this means that if you attempt to gracefully exit QEMU
after a backup command has been rejected, bdrv_close_all() will fail and
tell you some unpleasant things via assert().
Reported-by: aihua liang <aliang@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1703916
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests that concurrent requests are correctly drained before making
graph modifications instead of running into assertions in
bdrv_replace_node().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Instead of just asserting that no requests are in flight in
bdrv_replace_node(), which is a requirement that most callers ignore, we
can just drain the source node right there. This fixes at least starting
a commit job while I/O is active on the backing chain, and probably
other callers, too.
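The shape of that change is roughly this (a sketch, not the verbatim
patch):

    /* In bdrv_replace_node(): instead of asserting that no requests are
     * in flight on the source, quiesce it around the graph change. */
    bdrv_drained_begin(from);
    /* ... move parents from @from to @to ... */
    bdrv_drained_end(from);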
Having requests in flight on the target node isn't a problem because the
target just gets new parents, but the call path of running requests
isn't modified. So we can just drop this assertion without a replacement.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1711643
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Add a simple arm/virt test case that starts a guest with the
bios-tables-test.aarch64.iso.qcow2 boot image, which initializes the
UefiTestSupport* structure in RAM once the guest is booted.
* see commit: tests: acpi: add acpi_find_rsdp_address_uefi() helper
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1559560929-260254-3-git-send-email-imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1559560929-260254-2-git-send-email-imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Fail after comparing all tables: this way the user gets the full list of
tables that need to be updated or whitelisted.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Commit b2fc91db84 ("q35: set split kernel irqchip as default") changed
the default for the pc-q35-4.0 machine type to use split irqchip, which
turned out to have disastrous effects on vfio-pci INTx support. KVM
resampling irqfds are registered for handling these interrupts, but
these are non-functional in split irqchip mode. We can't simply test
for split irqchip in QEMU as userspace handling of this interrupt is a
significant performance regression versus KVM handling (GeForce GPUs
assigned to Windows VMs are non-functional without forcing MSI mode or
re-enabling kernel irqchip).
The resolution is to revert the change in default irqchip mode in the
pc-q35-4.1 machine and create a pc-q35-4.0.1 machine for the 4.0-stable
branch. The pc-q35-4.0 machine type should not be used in vfio-pci
configurations for devices requiring legacy INTx support without
explicitly modifying the VM configuration to use kernel irqchip.
Link: https://bugs.launchpad.net/qemu/+bug/1826422
Fixes: b2fc91db84 ("q35: set split kernel irqchip as default")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <155786484688.13873.6037015630912983760.stgit@gimli.home>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a left-over from commit 7aaa6a1637 "tpm: express dependencies
with Kconfig".
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190524181411.8599-1-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Patchew cannot yet retrieve the configuration from the QEMU Git tree, but
this is planned. In the meantime, let's start storing it as YAML
so that the Patchew configuration (currently accessible only to administrators)
is public and documented.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This removes the hardcoded I2C address from the tests. The address
is passed via QOSGraphEdgeOptions to i2c_device_create and stored
in the QI2CDevice.
The i2c_send and i2c_recv functions, along with their wrappers, can
therefore be changed to take a QI2CDevice rather than an adapter/address
pair.
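A test using the reworked helpers might then look roughly like this (a
sketch; the exact signatures are an assumption):

    /* The QI2CDevice already knows its adapter and address, so the
     * helpers only need the device handle. */
    QI2CDevice *i2cdev = obj;   /* provided by the qgraph test framework */
    uint8_t reg = 0x00;
    uint8_t val;

    i2c_send(i2cdev, &reg, 1);
    i2c_recv(i2cdev, &val, 1);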
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create an i2c-bus interface, corresponding to the I2CAdapter struct.
Wrap IMXI2C and OMAPI2C with a QOSGraphObject, and add the get_driver
function to retrieve the I2CAdapter.
The conversion is still not complete; for simplicity, i2c_recv and
i2c_send (along with their wrappers) still take an adapter/address
pair. Fixing that would be complicated until the tests are converted
to qgraph, so it is left for after the conversion.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Provide *_init functions that populate an I2CAdapter struct without
allocating one, and make the existing *_create functions wrap them.
Because in the new setup *_create might return a pointer inside the
IMXI2C or OMAPI2C struct, create companion *_free functions to go
back to the outer pointer.
All this is temporary until allocation is handled entirely by qgraph.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no need to have a test device created by the board.
Instead, create it in the qtest so that we will be able to run
it on other boards too.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
receive_autoinc is relying on the LED state that is set by
send_and_receive. Stop doing that, because qgraph resets the
machine between tests.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If more than 4 bytes are received, the FIFO cannot hold the entire
contents of the transfer, and STP will be nonzero before entering the
transfer loop. Also, CNT will contain the number of bytes left to be
transferred instead of the total number of bytes in the transfer.
(Reverse-engineered from the omap_i2c.c source code; no datasheet
available.)
This will fix ds1338-test for omap-i2c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The functions to read/write 8-bit or 16-bit registers are the same
in tmp105 and pca9552 tests, and in fact they are a special case of
"read block"/"write block" functionality; read block in turn is used
in ds1338-test.
Move everything inside libqos-test, removing the duplication. Account
for the small differences by adding to tmp105-test.c the "read register
after writing" behavior that is specific to it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, if qos_node_contains() was passed options, it would still
create an edge without any options. Instead, honor the options, with
NULL acting as the terminator of the list.
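A hedged usage sketch (node names and option contents are just examples):

    QOSGraphEdgeOptions opts = {
        .extra_device_opts = "address=0x48",
    };

    /* With options, the argument list is now explicitly NULL-terminated. */
    qos_node_contains("i2c-bus", "tmp105", &opts, NULL);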
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Allow choosing the bus that the device will be placed on, in case
the machine has more than one. Otherwise, the bus may not match
the base address of the controller we attach it to.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>