Due to different error propagation, this breaks tests 051 and 087; fix
their output.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1423162705-32065-6-git-send-email-mreitz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1422524221-8566-4-git-send-email-armbru@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The QERR_ macros are leftovers from the days of "rich" error objects.
They're used with error_set() and qerror_report(), and expand into the
first *two* arguments. This trickiness has become pointless. Clean
this one up.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1422524221-8566-3-git-send-email-armbru@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When find_block_job() fails, all its callers build the same Error
object. Build it in find_block_job() instead.
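A minimal sketch of the idea (the error class and message wording here are assumptions, not copied from the patch):

    static BlockJob *find_block_job(const char *device, Error **errp)
    {
        BlockDriverState *bs = bdrv_find(device);

        if (!bs || !bs->job) {
            /* Callers no longer have to build this Error themselves */
            error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
                      "No active block job on device '%s'", device);
            return NULL;
        }
        return bs->job;
    }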
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1422524221-8566-2-git-send-email-armbru@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Like BLOCK_OP_TYPE_BACKUP_SOURCE and BLOCK_OP_TYPE_BACKUP_TARGET,
block-commit involves two asymmetric devices.
This change is not user-visible (yet), because commit only works with
device names.
But once we allow backing references in blockdev-add, or allow specifying
a node-name in the block-commit command, we don't want the user to start
two commit jobs on the same backing chain, which would corrupt things
because of the final bdrv_swap.
Until we have per-category blockers, splitting this type is still an
improvement.
[Resolved virtio-blk dataplane conflict by replacing
BLOCK_OP_TYPE_COMMIT with both BLOCK_OP_TYPE_COMMIT_{SOURCE, TARGET}.
They are safe since the block job runs in the same AioContext as the
dataplane IOThread.
--Stefan]
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Similar to drive-backup, but this command uses a device id as target
instead of creating/opening an image file.
Also add a blocker on the target bs, since the target is also a named
device now.
Add a check that reports an error for bs == target, which becomes
possible with the introduction of blockdev-backup but is an illegal case.
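A sketch of the new check (placement and exact error text are assumptions):

    if (bs == target) {
        error_setg(errp, "Source and target cannot be the same device");
        return;
    }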
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1418899027-8445-3-git-send-email-famz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The BLOCK_OP_TYPE_INTERNAL_SNAPSHOT op blocker exists but was never
used! Let's fix that so internal snapshots can be blocked.
[Fixed s/external/internal/ typo as pointed out by Paolo Bonzini and Max
Reitz.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-5-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The transaction QMP command performs operations atomically on a group of
drives. This command needs to acquire AioContext in order to work
safely when virtio-blk dataplane IOThreads are accessing drives.
The transactional nature of the command means that actions are split
into prepare, commit, abort, and clean functions. Acquire the
AioContext in prepare and don't release it until one of the other
functions is called. This prevents the IOThread from running the
AioContext before the transaction has completed.
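The resulting shape, roughly (FooState is a hypothetical per-action state struct embedding the generic transaction state as 'common'; names are illustrative):

    static void foo_prepare(BlkTransactionState *common, Error **errp)
    {
        FooState *state = DO_UPCAST(FooState, common, common);

        /* ... locate state->bs ... */
        state->aio_context = bdrv_get_aio_context(state->bs);
        aio_context_acquire(state->aio_context);
        /* set up the action; the context stays held on return */
    }

    static void foo_clean(BlkTransactionState *common)
    {
        FooState *state = DO_UPCAST(FooState, common, common);

        if (state->aio_context) {
            /* released only after commit or abort has run */
            aio_context_release(state->aio_context);
        }
    }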
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-4-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
drive_backup_prepare() assigns DriveBackupState fields to NULL in the
error path. This is unnecessary because the DriveBackupState is
allocated using g_malloc0() and other functions like
external_snapshot_prepare() already rely on this.
Do not explicitly assign fields to NULL so that the error path is
concise and does not require modification when fields are added to
DriveBackupState.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-3-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Originally the transaction QMP command was just for taking snapshots.
The command became more general when drive-backup and abort were added.
It is more accurate to say the command is about performing operations on
an atomic group than to say it is about snapshots.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-2-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add dataplane support to the change-backing-file QMP commands. By
acquiring the AioContext we avoid race conditions with the dataplane
thread which may also be accessing the BlockDriverState.
Note that this command operates on both bs and a node in its chain
(image_bs). The bdrv_chain_contains(bs, image_bs) check guarantees that
bs and image_bs are in the same AioContext.
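In outline, the command body becomes something like this (error wording and label are assumptions):

    AioContext *aio_context = bdrv_get_aio_context(bs);

    aio_context_acquire(aio_context);

    if (!bdrv_chain_contains(bs, image_bs)) {
        /* a node outside bs's chain could live in another AioContext */
        error_setg(errp, "image is not in the chain of this device");
        goto out;
    }

    /* ... change the backing file string on image_bs ... */

out:
    aio_context_release(aio_context);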
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
By acquiring the AioContext we avoid race conditions with the dataplane
thread which may also be accessing the BlockDriverState.
Fix up eject, change, and block_passwd in a single patch because
qmp_eject() and qmp_change_blockdev() both call eject_device(). Also
fix block_passwd while we're tackling a command that takes a block
encryption password.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The BLOCK_OP_TYPE_INTERNAL_SNAPSHOT_DELETE op blocker exists but was
never used! Let's fix that so snapshot delete can be blocked.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add dataplane support to the blockdev-snapshot-delete-internal-sync QMP
command. By acquiring the AioContext we avoid race conditions with the
dataplane thread which may also be accessing the BlockDriverState.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The commit block job must run in the BlockDriverState AioContext so that
it works with dataplane.
Acquire the AioContext in blockdev.c so starting the block job is safe.
One detail here is that the bdrv_drain_all() must be moved inside the
aio_context_acquire() region so requests cannot sneak in between the
drain and acquire.
The completion code in block/commit.c must perform backing chain
manipulation and bdrv_reopen() from the main loop. Use
block_job_defer_to_main_loop() to achieve that.
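A simplified sketch of the deferral pattern (type and field names loosely follow block/commit.c and are partly assumptions; the real completion code does more):

    static void commit_complete(BlockJob *job, void *opaque)
    {
        CommitCompleteData *data = opaque;

        /* main loop: safe for bdrv_reopen() and backing chain changes */
        block_job_completed(job, data->ret);
        g_free(data);
    }

    static void coroutine_fn commit_run(void *opaque)
    {
        CommitBlockJob *s = opaque;
        CommitCompleteData *data = g_new(CommitCompleteData, 1);

        /* still in the job's AioContext here */
        data->ret = 0;
        block_job_defer_to_main_loop(&s->common, commit_complete, data);
    }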
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-11-git-send-email-stefanha@redhat.com
The mirror block job must run in the BlockDriverState AioContext so that
it works with dataplane.
Acquire the AioContext in blockdev.c so starting the block job is safe.
Note that to_replace is treated separately from other BlockDriverStates
in that it does not need to be in the same AioContext. Explicitly
acquire/release to_replace's AioContext when accessing it.
The completion code in block/mirror.c must perform BDS graph
manipulation and bdrv_reopen() from the main loop. Use
block_job_defer_to_main_loop() to achieve that.
The bdrv_drain_all() call is not allowed outside the main loop since it
could lead to lock ordering problems. Use bdrv_drain(bs) instead
because we have acquired the AioContext so nothing else can sneak in
I/O.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-10-git-send-email-stefanha@redhat.com
The stream block job must run in the BlockDriverState AioContext so that
it works with dataplane.
The basics of acquiring the AioContext are easy in blockdev.c.
The tricky part is the completion code which drops part of the backing
file chain. This must be done in the main loop where bdrv_unref() and
bdrv_close() are safe to call. Use block_job_defer_to_main_loop() to
achieve that.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-9-git-send-email-stefanha@redhat.com
The backup block job must run in the BlockDriverState AioContext so that
it works with dataplane.
The basics of acquiring the AioContext are easy in blockdev.c.
The completion code in block/backup.c must call bdrv_unref() from the
main loop. Use block_job_defer_to_main_loop() to achieve that.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-8-git-send-email-stefanha@redhat.com
This function is correct but we should document the constraint that
everything must be thread-safe.
Emitting QMP events and scheduling BHs are both thread-safe so nothing
needs to be done here.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-5-git-send-email-stefanha@redhat.com
When an emulated storage controller is unrealized it will call
blockdev_mark_auto_del(). This will cancel any running block job (and
that eventually releases its reference to the BDS so it can be freed).
Since the block job may be executing in another AioContext we must
acquire/release to ensure thread safety.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-4-git-send-email-stefanha@redhat.com
Make sure that query-block-jobs acquires the BlockDriverState
AioContext so that the blockjob isn't running in another thread while we
access its state.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-3-git-send-email-stefanha@redhat.com
block-job-set-speed, block-job-cancel, block-job-pause,
block-job-resume, and block-job-complete must acquire the
BlockDriverState AioContext so that it is safe to access bs.
At the moment bs->job is always NULL when dataplane is active because op
blockers prevent blockjobs from starting. Once the rest of the blockjob
API has been made aware of AioContext we can drop the op blocker.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-2-git-send-email-stefanha@redhat.com
Move device model attachment / detachment and the BlockDevOps device
model callbacks and their wrappers from BlockDriverState to
BlockBackend.
Wrapper calls in block.c change from

    bdrv_dev_FOO_cb(bs, ...)

to

    if (bs->blk) {
        blk_dev_FOO_cb(bs->blk, ...);
    }

No change, because both bdrv_dev_change_media_cb() and
bdrv_dev_resize_cb() do nothing when no device model is attached, and
a device model can be attached only when bs->blk is set.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Much more command code needs conversion. I'm converting these now
because they're using bdrv_dev_* functions, which I'm about to lift
into BlockBackend.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockdev_init() always creates a DriveInfo, but only drive_new() fills
it in. qmp_blockdev_add() leaves it blank. This results in a drive
with type = IF_IDE, bus = 0, unit = 0. Screwed up in commit ee13ed1c.
Board initialization code looking for IDE drive (0,0) can pick up one
of these bogus drives. The QMP command has to execute really early to
be visible. Not sure how likely that is in practice.
Fix by creating DriveInfo in drive_new(). Block backends created by
blockdev-add don't get one.
Breaks the test for "has been created by qmp_blockdev_add()" in
blockdev_mark_auto_del() and do_drive_del(), because it changes the
value of dinfo && !dinfo->enable_auto_del from true to false. Simply
test !dinfo instead.
Leaves DriveInfo member enable_auto_del unused. Drop it.
A few places assume a block backend always has a DriveInfo. Fix them
up.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Device models should access their block backends only through the
block-backend.h API. Convert them, and drop direct includes of
inappropriate headers.
Just four uses of BlockDriverState are left:
* The Xen paravirtual block device backend (xen_disk.c) opens images
itself when set up via xenbus, bypassing blockdev.c. I figure it
should go through qmp_blockdev_add() instead.
* Device model "usb-storage" prompts for keys. No other device model
does, and this one probably shouldn't do it, either.
* ide_issue_trim_cb() uses bdrv_aio_discard() instead of
blk_aio_discard() because it fishes its backend out of a BlockAIOCB,
which has only the BlockDriverState.
* PC87312State has an unused BlockDriverState[] member.
The next two commits take care of the latter two.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The patch is big, but all it really does is replace

    dinfo->bdrv

by

    blk_bs(blk_by_legacy_dinfo(dinfo))

The replacement is repetitive, but the conversion of device models to
BlockBackend is imminent, and will shorten it to just
blk_by_legacy_dinfo(dinfo).
Line wrapping muddies the waters a bit. I also omit tests whether
dinfo->bdrv is null, because it never is.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
drive_del() has become a trivial wrapper around blk_unref(). Get rid
of it.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On BlockBackend destruction, unref its BlockDriverState. Replaces the
callers' unrefs.
This turns the pointer from BlockBackend to BlockDriverState into a
strong reference, managed with bdrv_ref() / bdrv_unref(). The
back-pointer remains weak.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Make the BlockBackend own the DriveInfo. Change blockdev_init() to
return the BlockBackend instead of the DriveInfo.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Convenience function blk_new_with_bs() creates a BlockBackend with its
BlockDriverState. Callers have to unref both. The commit after next
will relieve them of the need to unref the BlockDriverState.
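Rough usage sketch (context and error handling abbreviated; the backend name is just an example):

    Error *local_err = NULL;
    BlockBackend *blk;
    BlockDriverState *bs;

    blk = blk_new_with_bs("drive0", &local_err);
    if (!blk) {
        /* handle local_err */
        return;
    }
    bs = blk_bs(blk);

    /* ... use blk and bs ... */

    bdrv_unref(bs);     /* still needed until the commit after next */
    blk_unref(blk);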
Complication: due to the silly way drive_del works, we need a way to
hide a BlockBackend, just like bdrv_make_anon(). To emphasize its
"special" status, give the function a suitably off-putting name:
blk_hide_on_behalf_of_do_drive_del(). Unfortunately, hiding turns the
BlockBackend's name into the empty string. Can't avoid that without
breaking the blk->bs->device_name equals blk->name invariant.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init() and blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Creating an anonymous BDS can't fail. Make that obvious.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of duplicating the logic for the if_ide
(bus,unit) mappings, rely on the blockdev layer
for managing those mappings for us, and use the
drive_get_by_index call instead.
This allows ide_drive_get to work for AHCI HBAs
as well, and can be used in the Q35 initialization.
Lastly, change the nature of the argument to
ide_drive_get so that it represents the number of
total drives we can support, and not the total
number of buses. This will prevent array overflows
if the units-per-default-bus property ever needs
to be adjusted for compatibility reasons.
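The resulting shape is roughly (a sketch, not necessarily the exact final code):

    void ide_drive_get(DriveInfo **hd, int n)
    {
        int i;

        /* n is the number of drives the caller can accept,
         * not the number of buses */
        for (i = 0; i < n; i++) {
            hd[i] = drive_get_by_index(IF_IDE, i);
        }
    }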
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1412187569-23452-5-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The if_max_devs table has in the past been an immutable
default that controls the mapping of index => (bus,unit)
for all boards and all HBAs for each interface type.
Since adding this mapping information to the HBA device
itself is currently unwieldy from the perspective of
retrieving this information at option parsing time
(e.g., within drive_new), we consider the alternative
of marking the if_max_devs table mutable so that
later configuration and initialization can adjust the
mapping at will, but only up until a drive is added,
at which point the mapping is finalized.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1412187569-23452-3-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When users use command line options like -hda, -cdrom,
or even -drive if=ide, it is up to the board initialization
routines to pick up these drives and create backing
devices for them.
Some boards, like Q35, have not been doing this.
However, there is no warning explaining why certain
drive specifications are just silently ignored,
so this patch adds a check to print some warnings
to assist users in debugging these sorts of issues
in the future.
This patch will not warn about drives added with if_none,
for which it is not possible to tell in advance if
the omission of a backing device is an issue.
For the other interface types, a warning is considered appropriate.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1412187569-23452-2-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Make qemu_opt_rename() rename all occurrences of an option in opts.
For example, when bps is given twice on the command line, every bps
option needs to be renamed to throttling.bps-total.
This patch fixes the following bug:
Bug 1145586 - qemu-kvm will give strange hint when add bps twice for a drive
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1145586
[Resolved conflict with commit 5abbf0ee4d
("block: Catch simultaneous usage of options and their aliases"). Check
for simultaneous use first, and then loop over all options.
--Stefan]
Signed-off-by: Jun Li <junmuzi@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1411537527-16715-1-git-send-email-junmuzi@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1411999675-14533-1-git-send-email-armbru@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
While thinking about precedence of conflicting block device options from
different sources, I noticed that you can specify both an option and its
legacy alias at the same time (e.g. readonly=on,read-only=off). Rather
than specifying the order of precedence, we should simply forbid such
combinations.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Instead of a series of qemu_opt_rename() calls, use an array that
contains all of the renames and call qemu_opt_rename() in a loop. This
will keep the code readable even when we add an error return to
qemu_opt_rename().
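For illustration, the table-driven form looks roughly like this (only a subset of the renames is shown; entries are examples):

    static const struct {
        const char *from;
        const char *to;
    } opt_renames[] = {
        { "iops",     "throttling.iops-total" },
        { "bps",      "throttling.bps-total" },
        { "readonly", "read-only" },
    };

    int i;

    for (i = 0; i < ARRAY_SIZE(opt_renames); i++) {
        qemu_opt_rename(all_opts, opt_renames[i].from, opt_renames[i].to);
    }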
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
If the BDS's refcnt > 0, drive_del() destroys the DriveInfo, but not
the BDS. This can happen in three places:
* Device model destruction during unplug: blockdev_auto_del()
* Xen IDE unplug: pci_piix3_xen_ide_unplug()
* drive_del command when no device model is attached: do_drive_del()
The other callers of drive_del are on error paths where refcnt == 1.
If the user somehow manages to plug in a device model using a BDS that
has gone through drive_del(), the legacy configuration passed in
DriveInfo doesn't reach the device model, and automatic deletion on
unplug doesn't work. Worse, some device models such as scsi-disk
crash when DriveInfo doesn't exist.
This is theoretical; I didn't research an actual reproducer. The problem
was introduced when we replaced DriveInfo reference counting by BDS
reference counting in commit a94a3fa..fa510eb.
Fix by keeping DriveInfo alive until its BDS dies.
This affects qemu_drive_opts: now you can't reuse the same ID for new
drive options until the BDS dies. Before, you could, but since the
code always attempts to create a BDS with the same ID next, the
enclosing operation "create a new drive" failed anyway. Different
error path, same result.
Unfortunately, the fix involves use of blockdev.c stuff from block.c,
which is a layering violation. Fortunately, my forthcoming
BlockBackend work will get rid of it again.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockdev_init() mixes up BlockDriverState and DriveInfo initialization.
Finish the BlockDriverState job before starting to mess with
DriveInfo. Easier on the eyes.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
For some device models, the guest can prevent unplug. Some users need a
way to forcibly revoke device model access to the block backend then, so
the underlying images can be safely used for something else.
drive_del lets you do that. Unfortunately, it conflates revoking access
with destroying the backend.
Commit 9063f81 made drive_del immediately destroy the root BDS. Nice:
the device name becomes available for reuse immediately. Not so nice:
the device model's pointer to the root BDS dangles, and we're prone to
crash when the memory gets reused.
Commit d22b2f4 fixed that by hiding the root BDS instead of destroying
it. Destruction only happens on unplug. "Hiding" means removing it
from bdrv_states and graph_bdrv_states; see bdrv_make_anon().
This "destroy on revoke" is a misfeature we don't want to carry
forward to blockdev-add, just like "destroy on unplug" (commit
2d246f0). So make drive_del fail on anything added with blockdev-add.
We'll add separate QMP commands to revoke device model access and to
destroy backends.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Relaxing the license to LGPLv2+ is intentional.
Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoit Canet <benoit.canet@nodalink.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Make drive_del safe for dataplane where another thread may be running
the BlockDriverState's AioContext.
Note the assumption that AioContext's lifetime exceeds DriveInfo and
BlockDriverState. We release AioContext after DriveInfo and
BlockDriverState are potentially freed.
This is clearly safe with the global AioContext but also with -object
iothread and implicit iothreads created by -device
virtio-blk-pci,x-data-plane=on (their lifetime is tied to DeviceState,
not BlockDriverState).
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Name the 'granularity' parameter and give its expected value range.
Previously the device name was mistakenly reported as the parameter
name.
Note that the error class is unchanged from ERROR_CLASS_GENERIC_ERROR.
Reported-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Make block_resize safe for dataplane where another thread may be running
the BlockDriverState's AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
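An illustrative before/after (not an actual hunk from the patch):

    BlockDriverState **bs_list;

    /* before: unchecked multiplication, result is void * */
    bs_list = g_malloc(sizeof(BlockDriverState *) * n);

    /* after: overflow-checked, result is BlockDriverState ** */
    bs_list = g_new(BlockDriverState *, n);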
Patch created with Coccinelle, with two manual changes on top:
* Add const to bdrv_iterate_format() to keep the types straight
* Convert the allocation in bdrv_drop_intermediate(), which Coccinelle
inexplicably misses
Coccinelle semantic patch:
@@
type T;
@@
-g_malloc(sizeof(T))
+g_new(T, 1)
@@
type T;
@@
-g_try_malloc(sizeof(T))
+g_try_new(T, 1)
@@
type T;
@@
-g_malloc0(sizeof(T))
+g_new0(T, 1)
@@
type T;
@@
-g_try_malloc0(sizeof(T))
+g_try_new0(T, 1)
@@
type T;
expression n;
@@
-g_malloc(sizeof(T) * (n))
+g_new(T, n)
@@
type T;
expression n;
@@
-g_try_malloc(sizeof(T) * (n))
+g_try_new(T, n)
@@
type T;
expression n;
@@
-g_malloc0(sizeof(T) * (n))
+g_new0(T, n)
@@
type T;
expression n;
@@
-g_try_malloc0(sizeof(T) * (n))
+g_try_new0(T, n)
@@
type T;
expression p, n;
@@
-g_realloc(p, sizeof(T) * (n))
+g_renew(T, p, n)
@@
type T;
expression p, n;
@@
-g_try_realloc(p, sizeof(T) * (n))
+g_try_renew(T, p, n)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On some image chains, QEMU may not always be able to resolve the
filenames properly, when updating the backing file of an image
after a block job.
For instance, certain relative pathnames may fail, or drives may
have been specified originally by file descriptor (e.g. /dev/fd/???),
or a relative protocol pathname may have been used.
In these instances, QEMU may lack the information to be able to make
the correct choice, but the user or management layer most likely does
have that knowledge.
With this extension to the block-stream api, the user is able to change
the backing file of the active layer as part of the block-stream
operation.
This allows the change to be 'safe', in the sense that if the attempt
to write the active image metadata fails, then the block-stream
operation returns failure, without disrupting the guest.
If a backing file string is not specified in the command, the backing
file string to use is determined in the same manner as it was
previously.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
On some image chains, QEMU may not always be able to resolve the
filenames properly, when updating the backing file of an image
after a block commit.
For instance, certain relative pathnames may fail, or drives may
have been specified originally by file descriptor (e.g. /dev/fd/???),
or a relative protocol pathname may have been used.
In these instances, QEMU may lack the information to be able to make
the correct choice, but the user or management layer most likely does
have that knowledge.
With this extension to the block-commit api, the user is able to change
the backing file of the overlay image as part of the block-commit
operation.
This allows the change to be 'safe', in the sense that if the attempt
to write the overlay image metadata fails, then the block-commit
operation returns failure, without disrupting the guest.
If the commit top is the active layer, then specifying the backing
file string will be treated as an error (there is no overlay image
to modify in that case).
If a backing file string is not specified in the command, the backing
file string to use is determined in the same manner as it was
previously.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This allows a user to make a live change to the backing file recorded in
an open image.
The image file to modify can be specified 2 ways:
1) image filename
2) image node-name
Note: this does not cause the backing file itself to be reopened; it
merely changes the backing filename in the image file structure, and
in internal BDS structures.
It is the responsibility of the user to pass a filename string that
can be resolved when the image chain is reopened, and the filename
string is not validated.
A good analogy for this command is that it is a live version of
'qemu-img rebase -u', with respect to changing the backing file string.
[Jeff is offline so I respun this patch in his absence. Dropped image
filename since using node-name is preferred and this is a new command.
No need to introduce the limitations of finding images by filename.
--Stefan]
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that active layer block-commit is supported, the 'top' argument
no longer needs to be mandatory.
Change it to optional, with the default being the active layer in the
device chain.
[kwolf: Rebased and resolved conflict in tests/qemu-iotests/040]
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
drive-mirror will bdrv_swap the new BDS named by node-name with the one
pointed to by 'replaces' when the mirroring is finished.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This new argument can be used to specify the node-name of the new mirrored BDS.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If we check for the RESIZE blocker in bdrv_truncate(), that means a
commit will fail if the overlay layer is larger than the base, due to
the backing blocker.
This is a regression in behavior from 2.0; currently, commit will try to
grow the size of the base image to match the overlay size, if the
overlay size is larger.
Moving this check into the QMP command qmp_block_resize() allows
bdrv_truncate() to be used within block jobs.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Since BLOCK_JOB_COMPLETED, BLOCK_JOB_CANCELLED and BLOCK_JOB_READY are
related, convert them in one patch. The block_job_event_* functions
are used to keep the BlockJob structure encapsulated.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
It's always one since commit fa510eb dropped the last drive_get_ref().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
"Init" and "uninit" suggest the functions don't allocate / free
storage. But they do.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The block_set_io_throttle QMP and HMP commands modify I/O throttling
limits for block devices.
Acquire the BlockDriverState's AioContext to protect against race
conditions with an IOThread that is running I/O for this device.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
bs_opts is leaked on all paths from its qdict_new() that don't go
through blockdev_init(). Add the missing QDECREF(), and zap bs_opts
after blockdev_init(), so the new QDECREF() does nothing when we go
through blockdev_init().
Leak introduced in commit f298d07. Spotted by Coverity.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockdev_init() leaks bs_opts when qemu_opts_create() fails, i.e. when
the ID is bad. Missed in commit ec9c10d.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qerror_report() is a transitional interface to help with converting
existing HMP commands to QMP. It should not be used elsewhere.
do_drive_del() is an HMP command that won't be converted to QMP (we'll
create a new QMP command instead). It uses both qerror_report() and
error_report(). Convert the former to the latter.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
qerror_report_err() is a transitional interface to help with
converting existing HMP commands to QMP. It should not be used
elsewhere.
drive_init() is not meant to be used by QMP commands. It uses both
qerror_report_err() and error_report(). Convert the former to the
latter.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
It makes no sense to check for "any" blocker on bs; the check is here
only because of the mechanical conversion from in_use to op_blockers.
Remove it now, and let the callers check specific operation types.
Backup and mirror already do; add checks to stream and commit.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This replaces BlockDriverState.in_use with op_blockers:
- Call bdrv_op_block_all in place of bdrv_set_in_use(bs, 1).
- Call bdrv_op_unblock_all in place of bdrv_set_in_use(bs, 0).
- Check bdrv_op_is_blocked() in place of bdrv_in_use(bs).
The specific types are used; e.g. in place of the check before starting a
block backup, bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP, ...).
There is one exception in block_job_create, where
bdrv_op_blocker_is_empty() is used, because we don't know the operation
type there. This doesn't matter because a few commits from now we will
drop the check and move it to callers that _do_ know the type.
- Check bdrv_op_blocker_is_empty() in place of assert(!bs->in_use).
Note: there are only bdrv_op_block_all and bdrv_op_unblock_all callers at
this moment. So although the checks are specific to op types, this
change can still be seen as logically identical to the previous in_use
behaviour. The difference is that error messages are improved thanks to
the blocker error info.
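An illustrative caller-side before/after (the operation type is just an example):

    /* before */
    if (bdrv_in_use(bs)) {
        error_setg(errp, "Device is in use");
        return;
    }

    /* after: type-specific, and errp carries the blocker's own reason */
    if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
        return;
    }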
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This patch tries to optimize zero write requests
by automatically using bdrv_write_zeroes if it is
supported by the format.
This significantly speeds up file system initialization and
should speed up zero write tests used to benchmark backend storage
performance.
I ran the following 2 tests on my internal SSD with a
50G QCOW2 container and on an attached iSCSI storage.
a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX
QCOW2        [off]      [on]      [unmap]
-----
runtime:     14secs     1.1secs   1.1secs
filesize:    937M       18M       18M

iSCSI        [off]      [on]      [unmap]
----
runtime:     9.3s       0.9s      0.9s
b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct
QCOW2        [off]      [on]      [unmap]
-----
runtime:     246secs    18secs    18secs
filesize:    51G        192K      192K
throughput:  203M/s     2.3G/s    2.3G/s

iSCSI*       [off]      [on]      [unmap]
----
runtime:     8mins      45secs    33secs
throughput:  106M/s     1.2G/s    1.6G/s
allocated:   100%       100%      0%
* The storage was connected via a 1Gbit interface.
It seems to internally handle writing zeroes
via WRITESAME16 very fast.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds a generic function to recover the enum id of a parameter
given as a string.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* remotes/qmp-unstable/queue/qmp:
monitor: fix qmp_getfd() fd leak in error case
HMP: support specifying dump format for dump-guest-memory
HMP: fix doc of dump-guest-memory
qmp: object-add: Validate class before creating object
monitor: Add device_add and device_del completion.
monitor: Add command_completion callback to mon_cmd_t.
monitor: Fix drive_del id argument type completion.
error: Remove some unused headers
qerror.h: Replace QERR_NOT_SUPPORTED with QERR_UNSUPPORTED
qerror.h: Remove QERR defines that are only used once
qerror.h: Remove unused error classes
error: Print error_report() to stderr if using qmp
monitor: Remove unused monitor_print_filename
error: Privatize error_print_loc
vnc: Remove default_mon usage
slirp: Remove default_mon usage
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Using error_is_set(ERRP) to find out whether a function failed is
either wrong, fragile, or unnecessarily opaque. It's wrong when ERRP
may be null, because errors go undetected when it is. It's fragile
when proving ERRP non-null involves a non-local argument. Else, it's
unnecessarily opaque (see commit 84d18f0).
The error_is_set(errp) in internal_snapshot_prepare() is merely
fragile, because the caller never passes a null errp argument.
Make the code more robust and more obviously correct: receive the
error in a local variable, then propagate it through the parameter.
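The resulting pattern, in generic form (foo() stands in for the callee; not the actual hunk):

    Error *local_err = NULL;

    foo(arg, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }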
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Just hardcode them in the callers.
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Since commit f298d071, block devices added with blockdev-add don't have
a QemuOpts around in dinfo->opts. Consequently, we can't rely any more
on QemuOpts catching duplicate IDs for block devices.
This patch adds a new check for duplicate IDs to bdrv_new(), and moves
the existing check that the ID isn't already taken for a node-name there
as well.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This patch adds an errp parameter to bdrv_new() and updates all its
callers. The next patches will make use of this in order to check for
duplicate IDs. Most of the callers know that their ID is fine, so they
can simply assert that there is no error.
Behaviour doesn't change with this patch yet as bdrv_new() doesn't
actually assign errors to errp.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
As speed is an optional parameter for the QMP block-commit command, it
should be set to 0 if not given (as it is undefined if has_speed is
false), that is, the speed should not be limited.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If aio=native, we check that cache.direct is set as well. If however
cache wasn't specified at all, qemu just segfaulted.
The old condition didn't make any sense anyway because it effectively
only checked for the default cache mode case, but not for an explicitly
set cache.direct=off mode.
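A sketch of the intended check, based on the flags rather than the option string (exact placement, message and label are assumptions):

    if ((bdrv_flags & BDRV_O_NATIVE_AIO) && !(bdrv_flags & BDRV_O_NOCACHE)) {
        error_setg(errp, "aio=native requires cache.direct=on");
        goto early_err;   /* label name is illustrative */
    }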
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
Encrypted images need a password before they can be used, and we don't
want blockdev-add to create BDSes that aren't fully initialised. So for
now simply forbid encrypted images; we can come back to it later if we
need the functionality.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Allow bdrv_open() to handle references to existing block devices just as
bdrv_file_open() is already capable of.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Make bdrv_open() take a pointer to a BDS pointer, similarly to
bdrv_file_open(). If a pointer to a NULL pointer is given, bdrv_open()
will create a new BDS with an empty name; if the BDS pointer is not
NULL, that existing BDS will be reused (in the same way as bdrv_open()
already did).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/afaerber/tags/qom-devices-for-peter' into staging
QOM infrastructure fixes and device conversions
* QTest cleanups and test cases for PCI NICs
* NAND fix for "info qtree"
* Cleanup and extension of QOM machine tests
* IndustryPack test cases and conversion to QOM realize
* I2C cleanups
* Cleanups of legacy qdev properties
# gpg: Signature made Mon 17 Feb 2014 22:15:37 GMT using RSA key ID 3E7E013F
# gpg: Good signature from "Andreas Färber <afaerber@suse.de>"
# gpg: aka "Andreas Färber <afaerber@suse.com>"
* remotes/afaerber/tags/qom-devices-for-peter: (49 commits)
qtest: Include system headers before user headers
qapi: Refine human printing of sizes
qdev: Use QAPI type names for properties
qdev: Add enum property types to QAPI schema
block: Handle "rechs" and "large" translation options
qdev: Remove hex8/32/64 property types
qdev: Remove most legacy printers
qdev: Use human mode in "info qtree"
qapi: Add human mode to StringOutputVisitor
qdev: Inline qdev_prop_parse()
qdev: Legacy properties are just strings
qdev: Legacy properties are now read-only
qdev: Remove legacy parsers for hex8/32/64
qdev: Sizes are now parsed by StringInputVisitor
qapi: Add size parser to StringInputVisitor
qtest: Don't segfault with invalid -qtest option
ipack: Move IndustryPack out of hw/char/
ipoctal232: QOM parent field cleanup
ipack: QOM parent field cleanup for IPackDevice
ipack: QOM parent field cleanup for IPackBus
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
error_is_set(&var) is the same as var != NULL, but it takes
whole-program analysis to figure that out. Unnecessarily hard for
optimizers, static checkers, and human readers. Dumb it down to
obvious.
Gets rid of several dozen Coverity false positives.
Note that the obvious form is already used in many places.
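The mechanical change looks like this:

    /* before */
    if (error_is_set(&err)) {
        goto fail;
    }

    /* after */
    if (err) {
        goto fail;
    }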
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Sure, CHS translation is an obscure topic, and legacy options for
hard-disk geometries are obscure as well. But since QEMU does nothing
with it except telling the BIOS, and since "large" and "rechs"
are listed in the enums, parsing them seems to be the bare minimum.
Acked-by: Stefan Hajnoczi <stefanha@gmail.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Since we introduced node_name for naming BSes in the graph, modify opening
by reference to use it as a fallback.
This patch also enforces the separation of the device id and graph node
namespaces.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
As the bdrv_open() documentation states:
"The reference to the QDict belongs to the block layer
after the call (even on failure), so if the caller intends to reuse the
dictionary, it needs to use QINCREF() before calling bdrv_open."
the optional options dict will not be reused after bdrv_open() and should
belong to the block layer, so remove the extra QDECREF(options).
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
blockdev-add doesn't know about the device that the backend will be
attached to; this is a legacy -drive concept. Move the remaining checks
that use it to drive_init().
[Fam Zheng <famz@redhat.com> suggested line-wrapping to 80 chars as
required by the coding standard. I have fixed this.
--Stefan]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Fam Zheng <famz@redhat.com>
There were two candidate ways to implement named node manipulation:
1)

    { 'command': 'block_passwd',
      'data': { '*device': 'str', '*node-name': 'str', 'password': 'str' } }

2)

    { 'command': 'block_passwd',
      'data': { 'device': 'str', '*device-is-node': 'bool', 'password': 'str' } }
Luiz proposed 1, said 2 was an abuse of the QMP interface, and proposed to
rewrite the QMP block interface for 2.0.
What Luiz does not like about 1 is that two fields are optional but one of
them must be specified, which abuses QMP semantics.
Kevin argued that 2 was a clear abuse of the device field and would not be
practical when quickly reading a log file, because the user would read
"device" and think that a device is being manipulated when it is in fact a
node name.
The documentation of 1 makes it pretty clear to the user what to do.
Kevin argued that all BSes are nodes, including device ones, so 2 does not
make sense.
Kevin also argued that rewriting the QMP block interface would not make
the current one disappear.
Kevin pushed the argument that making the QAPI generator compatible with
the semantics of the operation would require a rewrite that no one has
done yet.
A vote was held on the list to choose which version to use, and 1 won.
For reference the complete thread is:
"[Qemu-devel] [PATCH V4 4/7] qmp: Allow to change password on names block driver
states."
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Specifying the image filename through the "file" option is a legacy
option and should not be supported by blockdev-add (in that case, giving
a string for "file" references an existing block device).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This is a boilerplate _nofail variant of qemu_opts_create. Remove it and
use error_abort in call sites.
NULL/0 arguments need to be added for the id and fail_if_exists fields
in affected call sites due to argument inconsistency between the normal
and no_fail variants.
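An illustrative call-site conversion (the opts list name is just an example):

    /* before */
    opts = qemu_opts_create_nofail(&qemu_drive_opts);

    /* after: explicit NULL id, fail_if_exists = 0, abort on error */
    opts = qemu_opts_create(&qemu_drive_opts, NULL, 0, &error_abort);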
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
If active is the top image, it will be mirrored to base (with the
block/mirror.c code), and then the images are switched when the user
completes the block job.
The QMP documentation is updated.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
For "none" sync mode in "absolute-paths" mode, the current image should
be used as the backing file for the newly created image.
The current behavior is:
a) If the image to be mirrored has a backing file, use that (which is
wrong, since the operations recorded by "none" are applied to the
image itself, not to its backing file).
b) If the image to be mirrored lacks a backing file, the target doesn't
have one either (which is not really wrong, but not really right,
either; "none" records a set of operations executed on the image
file, therefore having no backing file to apply these operations on
seems rather pointless).
For a, this is clearly a bugfix. For b, it is still a bugfix, although
it might break existing API - but since that case crashed qemu just
three weeks ago (before commit 1452686495), we
can safely assume there is no such API relying on that case yet.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1385407736-13941-2-git-send-email-mreitz@redhat.com
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
Currently we have three QemuOptsLists (qemu_common_drive_opts,
qemu_legacy_drive_opts, and qemu_drive_opts), but only qemu_drive_opts
is added to vm_config_groups[].
This patch changes query-command-line-options to access the three local
QemuOptsLists for the drive option and merge their description items
together.
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It should be possible to execute the QMP "drive-mirror" command in
"none" sync mode and "absolute-paths" mode even for block devices
lacking a backing file.
"absolute-paths" does in fact not require a backing file to be present,
as can be seen from the "top" sync mode code path. "top" basically
states that the device should indeed have a backing file - however, the
current code catches the case if it doesn't and then simply treats it as
"full" sync mode, creating a target image without a backing file (in
"absolute-paths" mode). Thus, "absolute-paths" does not imply the target
file must indeed have a backing file.
Therefore, the target file may be left unbacked in case of "none" sync
mode as well, if the specified device is not backed either. Currently,
qemu will crash trying to dereference the backing file pointer since it
assumes that it will always be non-NULL in that case ("none" with
"absolute-paths").
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
These memory leaks also make drive_add if=none,id=drive0 without a file=
option leak the options list. This keeps ID "drive0" around forever.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Since 0ebd24e0, cdrom no longer has read-only on by default, which makes
it error out when using a read-only image. Fix it by setting the default
value when parsing opts.
Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This gives us meaningful error messages for the blockdev-add QMP
command.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If a read-only device is configured with copy-on-read=on, the old code
only prints a warning and automatically disables copy on read. Make it
a real error for blockdev-add.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The remaining users shouldn't be there with blockdev-add and are easy to
move to drive_init().
Bonus bug fix: As a side effect, CD-ROM drives can now use block drivers
on the read-only whitelist without explicitly specifying read-only=on,
even if a format is explicitly specified.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
IF_NONE allows read-only, which makes forbidding it in this place
for other types pretty much pointless.
Instead, make sure that all devices for which the check would have
errored out check in their init function that they don't get a read-only
BlockDriverState. This catches even cases where IF_NONE and -device are
used.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This requires moving the automatic ID generation at the same time, so
let's do that as well.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
It's already ignored and only prints a deprecation message. No use in
making it available in new interfaces.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This moves all of the geometry options (cyls/heads/secs/trans) to
drive_init so that they can only be accessed using legacy functions, but
never with anything blockdev-add related.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
It's always IF_NONE for blockdev-add.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
This moves as much as possible of the processing of the 'media' option
to drive_init so that it can only be accessed using legacy functions,
but never with anything blockdev-add related.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
Working on a QDict instead of a QemuOpts that accepts anything is more
in line with bdrv_open(). A QDict is what qmp_blockdev_add() already has
anyway, so this saves additional conversions. And last, but not least,
it allows later patches to easily extract legacy options into a
separate, typed QemuOpts for drive_init() (the untyped QemuOpts that
drive_init already has doesn't allow access to numbers, only strings,
and is therefore useless without conversion).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
blockdev-add shouldn't automatically generate IDs, but will keep most of
the DriveInfo creation code.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
BlockDriverStates shouldn't be affected by an unplugged guest device,
except if created with the legacy -drive command line option or the
drive_add HMP command.
Make the automatic deletion as well as cancelling of jobs conditional on
an enable_auto_del boolean that is only set in drive_init().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
The main intent of this patch is to consolidate the whitelist checks to
a single point in the code instead of spreading it everywhere. This adds
a nicer error message for read-only whitelisting, too, in places where
it was still missing.
The patch also contains a bonus bug fix: By finding the format first in
bdrv_open() and then independently checking against the whitelist only
later, we avoid the case that use of a non-whitelisted format results in
probing rather than an error message. Previously, this could happen when
using the driver=... option.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
This field is used by blkverify to disable external snapshot creation.
It will also be used by block filters like quorum to disable external
snapshot creation.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockdev.c:1929:13: warning: Value stored to 'ret' is never read
ret = 0;
^ ~
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
That's why all my VMs were so fast lately. :)
This changed in 1.6.0 by mistake in patch 29c4e2b (blockdev: Split up
'cache' option, 2013-07-18).
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In qmp_transaction, assert that the BdrvActionOps to be used is actually
valid.
This assertion failing is very improbable; however, it might happen if
a new TransactionActionKind is introduced "out of order" and the
actions[] array is not updated.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add an Error ** parameter to bdrv_open, bdrv_file_open and associated
functions to allow more specific error messages.
Signed-off-by: Max Reitz <mreitz@redhat.com>
This interface uses id and name as optional parameters, to handle the
case where one image contains multiple snapshots with the same name
(which may be ''), but with different ids.
Adding the id parameter is for historical compatibility reasons; that
case is not possible with qemu's new interface for internal snapshots
at the block device level, but is still possible with qemu-img.
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Unlike savevm, the qmp_transaction interface will not generate a
snapshot name automatically, which saves the trouble of returning
information about the newly created snapshot.
Although qcow2 supports storing multiple snapshots with the same name
but different IDs, the operation will fail here when a snapshot with
that name already exists. Formats such as rbd do not support IDs at
all, and in most cases multiple snapshots with the same name only mean
trouble for the user, so ban that case. Requests with an empty name
will be rejected.
Snapshot ID can't be specified in this interface.
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Block jobs used drive_get_ref(drive_get_by_blockdev(bs)) to keep the BDS
from being deleted. Now that we have a BDS reference count and block
jobs don't care about dinfo, replace those calls to get cleaner code.
It is also the safe way to handle a BDS that has no drive info.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Manage the BlockDriverState lifecycle with a refcount, so bdrv_delete()
is no longer public and is called from bdrv_unref() when the refcount
drops to 0.
Behaviour is unchanged because there is effectively no multiple
referencing of a BDS yet: there is no caller of bdrv_ref(), and only
bdrv_new() sets bs->refcnt to 1, so every bdrv_unref() currently
deletes the BDS.
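For illustration, a standalone C sketch of the refcount lifecycle being
introduced (made-up names, not the actual block layer code):

    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical, simplified model of the new BDS lifecycle: creation
     * starts with refcnt == 1, ref/unref adjust it, and the object is
     * deleted only when the count drops to zero. */
    struct bds {
        int refcnt;
    };

    static struct bds *bds_new(void)
    {
        struct bds *bs = calloc(1, sizeof(*bs));
        bs->refcnt = 1;                /* like bdrv_new() */
        return bs;
    }

    static void bds_ref(struct bds *bs)
    {
        bs->refcnt++;
    }

    static void bds_unref(struct bds *bs)
    {
        assert(bs->refcnt > 0);
        if (--bs->refcnt == 0) {
            free(bs);                  /* the former bdrv_delete() step */
        }
    }

    int main(void)
    {
        struct bds *bs = bds_new();
        bds_ref(bs);                   /* e.g. a block job takes a reference */
        bds_unref(bs);                 /* the job finishes */
        bds_unref(bs);                 /* original owner drops it: freed here */
        return 0;
    }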
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This feature can be used in cases where users avoid the iops limit by
issuing jumbo I/Os that hammer the storage backend.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The max parameter of the leaky bucket throttling algorithm can be used to
allow the guest to do bursts.
The max value is a pool of I/O that the guest can use without being throttled
at all. Throttling is triggered once this pool is empty.
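As a rough standalone sketch of a leaky bucket with a burst pool
(made-up names and numbers, not the actual throttling code): the bucket
leaks at the configured rate, and requests are only throttled once the
accumulated amount exceeds max:

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified leaky bucket: "level" is the amount of I/O accounted so
     * far, it leaks away at "rate" units per second, and "max" is the
     * burst pool the guest may consume before throttling kicks in. */
    struct bucket {
        double level;   /* current bucket content          */
        double rate;    /* configured limit (units/second) */
        double max;     /* burst pool size                 */
    };

    /* Leak the bucket for "elapsed" seconds, then account "amount" of
     * new I/O.  Returns true if the request must be delayed. */
    static bool bucket_account(struct bucket *b, double elapsed, double amount)
    {
        b->level -= b->rate * elapsed;
        if (b->level < 0) {
            b->level = 0;
        }
        b->level += amount;
        return b->level > b->max;
    }

    int main(void)
    {
        struct bucket b = { 0, 100.0, 300.0 };  /* 100 IOPS, burst pool of 300 */
        int i;

        /* Three back-to-back bursts of 250 requests, one second apart:
         * the first stays within the pool, the later ones are throttled. */
        for (i = 0; i < 3; i++) {
            bool throttled = bucket_account(&b, 1.0, 250.0);
            printf("burst %d: level=%.0f throttled=%s\n",
                   i, b.level, throttled ? "yes" : "no");
        }
        return 0;
    }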
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This is an autogenerated patch using scripts/switch-timer-api.
Switch the entire code base to using the new timer API.
Note this patch may introduce some line length issues.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When the user tries to use a read-only whitelisted format in a command
line option, the failure message was "'foo' invalid format". The format
might be invalid only for writable use but valid for read-only use, so
the message is confusing. Give the user easier-to-understand
information.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
bdrv_flags is set by bdrv_parse_discard_flags(), but later it is reset
to zero.
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Message-id: 1376483201-13466-1-git-send-email-mohan@in.ibm.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
When using -drive file='xxx',format=qcow2,snapshot=on, the error
message "Can't use snapshot=on with driver-specific options"
is shown and qemu fails to start.
This should not happen, since there is no file.driver option
on the qemu command line.
The cause is commit 74fe54f2a1, which adds a 'driver' option when the
command line uses the 'format' option.
This patch solves that bug.
Signed-off-by: Mike Qiu <qiudayu@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We don't want to commit to the API yet before everything is worked out.
Like already for 1.5, disable it again for the 1.6 release. This commit
is meant to be reverted after the 1.6 release.
The disabling of the driver-specific options is achieved by applying the
old checks while parsing the command line.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The macro g_assert_not_reached is a better, self-documenting replacement
for assert(0) or assert(false).
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This patch adds sync-modes to the drive-backup interface and
implements the FULL, NONE and TOP modes of synchronization.
FULL performs as before copying the entire contents of the drive
while preserving the point-in-time using CoW.
NONE only copies new writes to the target drive.
TOP copies changes to the topmost drive image and preserves the
point-in-time using CoW.
For sync mode TOP we create a new target image using the same backing
file as the original disk image. Then any new data that has been laid
on top of it since creation is copied in the main backup_run() loop.
There is an extra check in the 'TOP' case so that we don't bother to copy
all the data of the backing file as it already exists in the target.
This is where the bdrv_co_is_allocated() is used to determine if the
data exists in the topmost layer or below.
Also any new data being written is intercepted via the write_notifier
hook which ends up calling backup_do_cow() to copy old data out before
it gets overwritten.
For mode 'NONE' we create the new target image and only copy in the
original data from the disk image starting from the time the call was
made. This preserves the point in time data by only copying the parts
that are *going to change* to the target image. This way we can
reconstruct the final image by checking to see if the given block exists
in the new target image first, and if it does not, you can get it from
the original image. This is basically an optimization allowing you to
do point-in-time snapshots with low overhead vs the 'FULL' version.
Since there is no old data to copy out the loop in backup_run() for the
NONE case just calls qemu_coroutine_yield() which only wakes up after
an event (usually cancel in this case). The rest is handled by the
before_write notifier which again calls backup_do_cow() to write out
the old data so it can be preserved.
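As a rough illustration of the copy-before-write idea described above
(a standalone sketch with made-up names, not the actual backup code):
old data is copied to the target exactly once per cluster before the
guest write is allowed through:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical, simplified copy-before-write step: before a guest
     * write lands on the source, any cluster in the written range that
     * has not been saved yet is copied to the target, so the target
     * keeps the point-in-time content. */
    #define CLUSTERS 8

    static unsigned char source[CLUSTERS];
    static unsigned char target[CLUSTERS];
    static bool copied[CLUSTERS];     /* clusters already backed up */

    static void backup_do_cow(int first, int last)
    {
        int c;

        for (c = first; c <= last; c++) {
            if (!copied[c]) {
                target[c] = source[c];   /* save the old data */
                copied[c] = true;
            }
        }
    }

    /* before_write notifier: copy old data out, then let the write through */
    static void guest_write(int cluster, unsigned char data)
    {
        backup_do_cow(cluster, cluster);
        source[cluster] = data;
    }

    int main(void)
    {
        memset(source, 0xAA, sizeof(source));   /* point-in-time content */
        guest_write(3, 0xBB);                   /* guest overwrites cluster 3 */
        printf("source[3]=0x%02X target[3]=0x%02X\n", source[3], target[3]);
        return 0;
    }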
Signed-off-by: Ian Main <imain@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The old 'cache' option really encodes three different boolean flags into
a cache mode name, without providing all combinations. Make them three
separate options instead and translate the old option to the new ones
for drive_init().
The specific boolean options take precedence if the old cache option is
specified as well, so the following options are equivalent:
-drive file=x,cache=none,cache.no-flush=true
-drive file=x,cache.writeback=true,cache.direct=true,cache.no-flush=true
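For illustration, a standalone C sketch (hypothetical names, not the
actual drive_init() code, which sets BDRV_O_* flags instead) of how a
legacy cache mode name can be translated into the three separate
booleans:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct cache_flags {
        bool writeback;
        bool direct;
        bool no_flush;
    };

    /* Each legacy cache mode name is just shorthand for three booleans. */
    static bool cache_mode_parse(const char *mode, struct cache_flags *f)
    {
        static const struct { const char *name; struct cache_flags f; } map[] = {
            { "writeback",    { true,  false, false } },
            { "none",         { true,  true,  false } },
            { "writethrough", { false, false, false } },
            { "directsync",   { false, true,  false } },
            { "unsafe",       { true,  false, true  } },
        };
        size_t i;

        for (i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
            if (!strcmp(mode, map[i].name)) {
                *f = map[i].f;
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        struct cache_flags f;

        if (cache_mode_parse("none", &f)) {
            printf("cache=none -> writeback=%d direct=%d no-flush=%d\n",
                   f.writeback, f.direct, f.no_flush);
        }
        return 0;
    }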
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
In QMP, we want to use dashes instead of underscores in QMP argument
names, and use nested options for throttling.
The new option names affect the command line as well, but for
compatibility drive_init() will convert the old option names before
calling into the code that will be shared between -drive and
blockdev-add.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This is traditionally -drive format=..., which is now translated into
the new driver option. This gives us a more consistent way to select the
driver of BlockDriverStates that can be used in QMP context, too.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The drive-backup command is similar to the drive-mirror command, except
no guest data written after the command executes gets copied. Add a
sync mode argument which determines whether the entire disk is copied,
just allocated clusters, or only clusters being written to by the guest.
Currently only sync mode 'full' is supported - it copies the entire disk.
For read-only point-in-time snapshots we may only need sync mode 'none'
since the target can be a qcow2 file using the guest's disk as its
backing file (no need to copy the entire disk). Finally, sync mode
'top' is useful if we wish to preserve the backing chain.
Note that this patch just adds the sync mode argument to drive-backup.
It does not implement sync modes 'top' or 'none'. This patch is
necessary so we can add a drive-backup HMP command that behaves like the
existing drive-mirror HMP command and takes a sync mode.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The Abort action can be used to test QMP 'transaction' failure. Add it
as the last action to exercise the .abort() and .cleanup() code paths
for all previous actions.
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch adds a transactional version of the drive-backup QMP command.
It allows atomic snapshots of multiple drives along with automatic
cleanup if there is a failure to start one of the backup jobs.
Note that QMP events are emitted for block job completion/cancellation
and the block job will be listed by query-block-jobs.
@device: the name of the device whose writes should be mirrored.
@target: the target of the new image. If the file exists, or if it
is a device, the existing file/device will be used as the new
destination. If it does not exist, a new file will be created.
@format: #optional the format of the new destination, default is to
probe if @mode is 'existing', else the format of the source
@mode: #optional whether and how QEMU should create a new image, default is
'absolute-paths'.
@speed: #optional the maximum speed, in bytes per second
@on-source-error: #optional the action to take on an error on the source,
default 'report'. 'stop' and 'enospc' can only be used
if the block device supports io-status (see BlockInfo).
@on-target-error: #optional the action to take on an error on the target,
default 'report' (no limitations, since this applies to
a different block device than @device).
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Some QMP 'transaction' types don't need to do anything on .commit().
Make .commit() optional just like .abort().
The "drive-backup" action will take advantage of this, it only needs to
cancel the block job on .abort(). Other block job actions will probably
follow the same pattern, so allow .commit() to be NULL.
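A minimal standalone sketch of the pattern (made-up types, not the
actual qmp_transaction() code): the transaction loop simply skips a
NULL .commit():

    #include <stdio.h>

    /* Hypothetical, simplified action table: .commit() and .abort() may
     * be NULL, in which case the transaction loop skips them. */
    struct action_ops {
        void (*prepare)(void);
        void (*commit)(void);   /* optional */
        void (*abort)(void);    /* optional */
    };

    static void backup_prepare(void) { printf("start backup job\n"); }
    static void backup_abort(void)   { printf("cancel backup job\n"); }

    static const struct action_ops drive_backup_ops = {
        .prepare = backup_prepare,
        .commit  = NULL,          /* nothing to do on success */
        .abort   = backup_abort,
    };

    static void transaction_commit(const struct action_ops *ops)
    {
        if (ops->commit) {        /* only call it when provided */
            ops->commit();
        }
    }

    int main(void)
    {
        drive_backup_ops.prepare();
        transaction_commit(&drive_backup_ops);
        return 0;
    }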
Suggested-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The QMP 'transaction' command keeps a list of in-flight transactions.
The transaction state structure is called BlkTransactionStates even
though it only deals with a single transaction. The only plural thing
is the linked list of transaction states.
I find it confusing to call the single structure "States". This patch
renames it to "State", just like BlockDriverState is singular.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
@drive-backup
Start a point-in-time copy of a block device to a new destination. The
status of ongoing drive-backup operations can be checked with
query-block-jobs where the BlockJobInfo.type field has the value 'backup'.
The operation can be stopped before it has completed using the
block-job-cancel command.
@device: the name of the device which should be copied.
@target: the target of the new image. If the file exists, or if it
is a device, the existing file/device will be used as the new
destination. If it does not exist, a new file will be created.
@format: #optional the format of the new destination, default is to
probe if @mode is 'existing', else the format of the source
@mode: #optional whether and how QEMU should create a new image, default is
'absolute-paths'.
@speed: #optional the maximum speed, in bytes per second
@on-source-error: #optional the action to take on an error on the source,
default 'report'. 'stop' and 'enospc' can only be used
if the block device supports io-status (see BlockInfo).
@on-target-error: #optional the action to take on an error on the target,
default 'report' (no limitations, since this applies to
a different block device than @device).
Note that @on-source-error and @on-target-error only affect background I/O.
If an error occurs during a guest write request, the device's rerror/werror
actions will be used.
Returns: nothing on success
If @device is not a valid block device, DeviceNotFound
Since 1.6
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Use bdrv_getlength() for its byte units and error return instead of
bdrv_get_geometry().
Reported-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It is not necessary to check that we can find a protocol block driver,
since we create or open the image file anyway, and that produces the
error we need.
Besides, the QERR_INVALID_BLOCK_FORMAT is inappropriate since the
protocol is incorrect rather than the format.
Also drop an empty line between bdrv_open() and checking its return
value. This may be due to copy-pasting from earlier code that performed
other operations before handling errors.
Reported-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Paolo Bonzini <pbonzini@redhat.com> suggested the following test case:
1. Launch a guest and wait at the GRUB boot menu:
qemu-system-x86_64 -enable-kvm -m 1024 \
-drive if=none,cache=none,file=test.img,id=foo,werror=stop,rerror=stop
-device virtio-blk-pci,drive=foo,id=virtio0,addr=4
2. Hot unplug the device:
(qemu) drive_del foo
3. Select the first boot menu entry
Without this patch the guest pauses due to ENOMEDIUM. The guest is
stuck in a continuous pause loop since the I/O request is retried and
fails immediately again when the guest is resumed.
With this patch the error is reported to the guest.
Note that this scenario actually happens sometimes during libvirt disk
hot unplug, where device_del is followed by drive_del. I/O may still be
submitted to the drive after drive_del if the guest does not process the
PCI hot unplug notification.
Reported-by: Dafna Ron <dron@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
We may want to include a driver in the whitelist for read only tasks
such as diagnosing or exporting guest data (with libguestfs as a good
example). This patch introduces a readonly whitelist option, and for
backward compatibility, the old configure option --block-drv-whitelist
is now an alias to rw whitelist.
Drivers in the read-only list are only permitted to open files
read-only, and return -ENOTSUP for read-write opening.
E.g. to include vmdk read-only, and the others read+write:
./configure --target-list=x86_64-softmmu \
--block-drv-rw-whitelist=qcow2,raw,file,qed \
--block-drv-ro-whitelist=vmdk
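As a rough sketch of the intended semantics (standalone C with made-up
names, not the actual whitelist code): a driver found only on the
read-only list is accepted for read-only opens and rejected with
-ENOTSUP otherwise:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static const char *const rw_whitelist[] = { "qcow2", "raw", "file", "qed", NULL };
    static const char *const ro_whitelist[] = { "vmdk", NULL };

    static bool in_list(const char *const *list, const char *name)
    {
        for (; *list; list++) {
            if (!strcmp(*list, name)) {
                return true;
            }
        }
        return false;
    }

    /* 0 if allowed, -ENOTSUP for read-write use of a read-only driver,
     * -1 if the driver is not whitelisted at all. */
    static int check_whitelist(const char *driver, bool read_only)
    {
        if (in_list(rw_whitelist, driver)) {
            return 0;
        }
        if (in_list(ro_whitelist, driver)) {
            return read_only ? 0 : -ENOTSUP;
        }
        return -1;
    }

    int main(void)
    {
        printf("vmdk ro: %d\n", check_whitelist("vmdk", true));   /* 0        */
        printf("vmdk rw: %d\n", check_whitelist("vmdk", false));  /* -ENOTSUP */
        return 0;
    }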
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There's no reason to restrict transactions to operations related to
block devices, so rename the type now before schema introspection stops
us from doing so.
Also change the schema documentation of 'transaction' to not refer to
block devices or snapshots any more.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Make it easier to add other operations to qmp_transaction() by using
callbacks, with external snapshots serving as an example implementation
of the callbacks.
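A minimal standalone sketch of the callback structure (hypothetical
names and types, not the actual BdrvActionOps): prepare every action
first, and either commit all of them or abort the ones already
prepared:

    #include <stdbool.h>
    #include <stdio.h>

    struct action {
        const char *name;
        bool (*prepare)(void);
        void (*commit)(void);
        void (*abort)(void);
    };

    static bool snap_prepare(void) { printf("prepare snapshot\n"); return true; }
    static void snap_commit(void)  { printf("commit snapshot\n"); }
    static void snap_abort(void)   { printf("abort snapshot\n"); }

    static void run_transaction(struct action *actions, int n)
    {
        int prepared = 0;
        int i;

        for (; prepared < n; prepared++) {
            if (!actions[prepared].prepare()) {
                break;
            }
        }
        if (prepared == n) {
            for (i = 0; i < n; i++) {
                actions[i].commit();
            }
        } else {
            for (i = prepared - 1; i >= 0; i--) {
                actions[i].abort();
            }
        }
    }

    int main(void)
    {
        struct action acts[] = {
            { "snapshot-sda", snap_prepare, snap_commit, snap_abort },
            { "snapshot-sdb", snap_prepare, snap_commit, snap_abort },
        };

        run_transaction(acts, 2);
        return 0;
    }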
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The code is simply moved into a separate function.
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The code is moved into a preparation function and changed a bit to show
more clearly what it is doing.
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The code that runs before really committing is moved into a function.
Most of the code is simply moved from qmp_transaction(), except that on
failure it now just returns. Other code, such as input parsing, is not
touched, to make review easier.
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We have an errno value that can be displayed, so we should just do that.
An easy way to reproduce this case is to resize a raw image to a size
that is too large for the host file system.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We don't want to commit to the API yet before everything is worked out.
Disable it for the 1.5 release. This commit is meant to be reverted
after the 1.5 release.
The disabling of the driver-specific options is achieved by applying the
old checks while parsing the command line.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is not necessary to adjust the slice time at runtime. We already
extend the current slice in order to carry over accounting into the next
slice. Changing the actual slice time value introduces oscillations.
The guest may experience large changes in throughput or IOPS from one
moment to the next when slice times are adjusted.
Reported-by: Benoît Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-By: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
After this patch, using -drive with an empty file name continues to open
the file if driver-specific options are used. If no driver-specific
options are specified, the semantics stay as they were: it defines a drive
without an inserted medium.
In order to achieve this, bdrv_open() must be made safe to work with a
NULL filename parameter. The assumption that is made is that only block
drivers which implement bdrv_parse_filename() support using driver
specific options and could therefore work without a filename. These
drivers must make sure to cope with NULL in their implementation of
.bdrv_open() (this is only NBD for now). For all other drivers, the
block layer code will make sure to error out before calling into their
code - they can't possibly work without a filename.
Now an NBD connection can be opened like this:
qemu-system-x86_64 -drive file.driver=nbd,file.port=1234,file.host=::1
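For illustration, a standalone sketch of the safety rule described
above (made-up types, not the actual block layer code): only drivers
that implement a parse_filename hook may be opened without a filename:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct driver {
        const char *name;
        void (*parse_filename)(const char *filename);   /* may be NULL */
    };

    /* Error out early for drivers that cannot work without a filename. */
    static bool can_open(const struct driver *drv, const char *filename)
    {
        if (!filename && !drv->parse_filename) {
            fprintf(stderr, "%s: requires a filename\n", drv->name);
            return false;
        }
        return true;
    }

    static void nbd_parse_filename(const char *filename) { (void)filename; }

    int main(void)
    {
        struct driver nbd = { "nbd", nbd_parse_filename };
        struct driver qcow2 = { "qcow2", NULL };

        printf("nbd without filename:   %s\n",
               can_open(&nbd, NULL) ? "ok" : "rejected");
        printf("qcow2 without filename: %s\n",
               can_open(&qcow2, NULL) ? "ok" : "rejected");
        return 0;
    }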
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This patch ensures that all pending I/Os are completed
before a device is resized. This is especially important
if a device is shrunk, because the bdrv_check_request()
result is then invalidated.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Screwed up in commit 666daa68. Thanks to Kevin Wolf for reminding me
to fix this.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Any non-default -drive options are now passed down to the block drivers.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Pointing to a QemuOpts element is surprising and can lead to subtle
use-after-free errors when the QemuOpts is freed after all options are
parsed.
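As a generic illustration of the bug class being avoided (standalone C,
not the actual blockdev code; strdup() is POSIX): take an owned copy of
the string instead of borrowing a pointer into the soon-to-be-freed
options:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in for a parsed options structure. */
    struct opts {
        char *serial;
    };

    int main(void)
    {
        struct opts *o = malloc(sizeof(*o));
        char *serial;

        o->serial = strdup("ABC123");

        /* Take an owned copy instead of borrowing o->serial. */
        serial = strdup(o->serial);

        /* The options are freed once parsing is done. */
        free(o->serial);
        free(o);

        printf("serial: %s\n", serial);   /* still valid */
        free(serial);
        return 0;
    }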
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
It doesn't do anything yet except storing the options QDict in the
BlockDriverState.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add support for BDRV_O_UNMAP from the QEMU command-line.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There can be a need to turn off output to stdout. This patch adds a -q
option that enables "Quiet mode". In Quiet mode, only errors are printed out.
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Negative I/O throttling iops and bps values do not make sense so reject
them with an error message.
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The do_check_io_limits() function returns false when I/O limits are
invalid but it doesn't set an Error to indicate why. The two
do_check_io_limits() callers duplicate error reporting. Solve this by
passing an Error pointer into do_check_io_limits().
Note that the two callers report slightly different errors: drive_init()
prints a custom error message while qmp_block_set_io_throttle() does
error_set(errp, QERR_INVALID_PARAMETER_COMBINATION).
QERR_INVALID_PARAMETER_COMBINATION is a generic error, see
include/qapi/qmp/qerror.h:
#define QERR_INVALID_PARAMETER_COMBINATION \
ERROR_CLASS_GENERIC_ERROR, "Invalid parameter combination"
Since it is generic we are not obliged to keep this error. Switch to
the custom error message which contains more information.
This patch prepares for adding additional checks with their own error
messages to do_check_io_limits(). The next patch adds a new check.
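A minimal standalone sketch of the error-out-parameter pattern (using a
plain string instead of QEMU's Error object, and made-up names): the
check itself produces the message, the caller only reports it:

    #include <stdbool.h>
    #include <stdio.h>

    struct limits {
        double bps;
        double iops;
    };

    /* Validate the limits; on failure, describe why through *errp. */
    static bool check_io_limits(const struct limits *l, const char **errp)
    {
        if (l->bps < 0 || l->iops < 0) {
            *errp = "bps and iops values must be 0 or greater";
            return false;
        }
        return true;
    }

    int main(void)
    {
        struct limits l = { -1.0, 100.0 };
        const char *err = NULL;

        if (!check_io_limits(&l, &err)) {
            fprintf(stderr, "invalid throttle settings: %s\n", err);
            return 1;
        }
        return 0;
    }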
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
# By Paolo Bonzini (14) and others
# Via Kevin Wolf
* kwolf/for-anthony: (24 commits)
ide: Add fall through annotations
block: Create proper size file for disk mirror
ahci: Add migration support
ahci: Change data types in preparation for migration
ahci: Remove unused AHCIDevice fields
hbitmap: add assertion on hbitmap_iter_init
mirror: do nothing on zero-sized disk
block/vdi: Check for bad signature
block/vdi: Improved return values from vdi_open
block/vdi: Improve debug output for signature
block: Use error code EMEDIUMTYPE for wrong format in some block drivers
block: Add special error code for wrong format
mirror: support arbitrarily-sized iterations
mirror: support more than one in-flight AIO operation
mirror: add buf-size argument to drive-mirror
mirror: switch mirror_iteration to AIO
mirror: allow customizing the granularity
block: allow customizing the granularity of the dirty bitmap
block: return count of dirty sectors, not chunks
mirror: perform COW if the cluster size is bigger than the granularity
...
The qmp monitor command to mirror a disk was passing -1 for size
along with the disk's backing file. The resulting disk then gets the
size of the backing file, which is incorrect if the disk
has been resized. Therefore we should always pass in the size of
the current disk.
Signed-off-by: Vishvananda Ishaya <vishvananda@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The block drivers need a special error code for "wrong format".
From the available error codes EMEDIUMTYPE fits best.
It is not available on all platforms, so a definition in
qemu-common.h and a specific error report are needed.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This makes sense when the next commit starts using the extra buffer space
to perform many I/O operations asynchronously.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The desired granularity may be very different depending on the kind of
operation (e.g. continuous replication vs. collapse-to-raw) and whether
the VM is expected to perform lots of I/O while mirroring is in progress.
Allow the user to customize it, while providing a sane default so that
in general there will be no extra allocated space in the target compared
to the source.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When mirroring runs, the backing files for the target may not yet be
ready. However, this means that a copy-on-write operation on the target
would fill the missing sectors with zeros. Copy-on-write only happens
if the granularity of the dirty bitmap is smaller than the cluster size
(and only for clusters that are allocated in the source after the job
has started copying). So far, the granularity was fixed to 1MB; to avoid
the problem we detected the situation and required the backing files to
be available in that case only.
However, we want to lower the granularity for efficiency, so we need
a better solution. The solution is to always copy a whole cluster the
first time it is touched. The code keeps a bitmap of clusters that
have already been allocated by the mirroring job, and only does "manual"
copy-on-write if the chunk being copied is zero in the bitmap.
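As a rough standalone illustration of the rule (made-up names and
sizes, not the actual mirror code; it also assumes each request stays
within one cluster): the first time a cluster is touched, the whole
cluster is copied and marked in the job's bitmap; later chunks in that
cluster are copied as-is:

    #include <stdbool.h>
    #include <stdio.h>

    #define GRANULARITY   4096      /* dirty-bitmap granularity (bytes) */
    #define CLUSTER_SIZE  65536     /* target cluster size (bytes)      */
    #define NB_CLUSTERS   16

    static bool cluster_copied[NB_CLUSTERS];

    static void mirror_copy_range(long start, long bytes)
    {
        long cluster = start / CLUSTER_SIZE;

        if (!cluster_copied[cluster]) {
            /* First touch: copy the whole cluster, not just the chunk. */
            start = cluster * CLUSTER_SIZE;
            bytes = CLUSTER_SIZE;
            cluster_copied[cluster] = true;
        }
        printf("copy %ld bytes at offset %ld\n", bytes, start);
    }

    int main(void)
    {
        /* First request widens to the full cluster, the second does not. */
        mirror_copy_range(3 * CLUSTER_SIZE + GRANULARITY, GRANULARITY);
        mirror_copy_range(3 * CLUSTER_SIZE + 2 * GRANULARITY, GRANULARITY);
        return 0;
    }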
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The non-live bdrv_commit() function may return one of the following
errors: -ENOTSUP, -EBUSY, -EACCES, -EIO. The only error that is
checked in the HMP handler is -EBUSY, so the monitor command 'commit'
silently fails for all error cases other than 'Device is in use'.
Report error using monitor_printf() and strerror(), and convert existing
qerror_report() calls in do_commit() to monitor_printf().
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
We will use the qemu_opts_create_nofail() function; it makes the code
more readable.
Signed-off-by: Dong Xu Wang <wdongxu@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This commit adds an Error ** argument to bdrv_img_create() and sets it
appropriately on error.
Callers of bdrv_img_create() pass NULL for the new argument and still
rely on bdrv_img_create()'s return value. The next commits will change
the callers to use the Error object instead.
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There are QEMUMachines that have neither IF_IDE nor IF_SCSI as a
default/standard interface to their block devices / drives. Therefore,
this patch introduces a new field default_block_type per QEMUMachine
struct. The prior use_scsi field thereby becomes obsolete and is
replaced by .default_block_type = IF_SCSI.
This patch also changes the default for s390x to IF_VIRTIO and
removes an early hack that converts IF_IDE drives.
Other parties have already claimed interest (e.g. IF_SD for exynos).
To create a sane default for machines that don't specify a
default_block_type, this patch makes IF_IDE = 0 and IF_NONE = 1.
I checked all users of IF_NONE (blockdev.c and hw/device-hotplug.c)
as well as IF_IDE, and it seems that it is ok to change the defines -
in other words, I found no obvious (to me) assumption in the code
regarding IF_NONE == 0. IF_NONE is only set if there is an
explicit if=none. Without if=* the interface becomes IF_DEFAULT.
I would suggest some additional care, e.g. by letting
this patch sit in the block tree for some days.
Based on an initial patch from Einar Lueck <elelueck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: Igor Mitsyanko <i.mitsyanko@samsung.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Acked-by: Igor Mitsyanko <i.mitsyanko@samsung.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Releases of qemu-kvm will be discontinued at qemu 1.3.0.
Users should switch to plain qemu releases.
To avoid breaking scenarios which are setup with command line
options specific to qemu-kvm, port these switches from qemu-kvm
to qemu.git.
Port drive boot option. From the qemu-kvm original commit message:
We do not want to maintain this option forever. It will be removed after
a grace period of a few releases. So warn the user that this option has
no effect and will become invalid soon.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Error management is important for mirroring; otherwise, an error on the
target (even something as "innocent" as ENOSPC) requires starting again
with a full copy. Similar to on_read_error/on_write_error, two separate
knobs are provided for on_source_error (reads) and on_target_error (writes).
The default is 'report' for both.
The 'ignore' policy will leave the sector dirty, so that it will be
retried later. Thus, it will not cause corruption.
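A minimal standalone sketch of the two-knob policy (hypothetical names,
not the actual mirror code): read errors consult on-source-error, write
errors consult on-target-error:

    #include <stdbool.h>
    #include <stdio.h>

    enum error_action { ERR_REPORT, ERR_IGNORE, ERR_STOP };

    struct mirror_job {
        enum error_action on_source_error;
        enum error_action on_target_error;
    };

    /* Reads hit the source, writes hit the target, so each side gets
     * its own policy. */
    static enum error_action mirror_error_action(const struct mirror_job *job,
                                                 bool is_read)
    {
        return is_read ? job->on_source_error : job->on_target_error;
    }

    int main(void)
    {
        struct mirror_job job = { ERR_REPORT, ERR_STOP };

        printf("read error  -> action %d\n", mirror_error_action(&job, true));
        printf("write error -> action %d\n", mirror_error_action(&job, false));
        return 0;
    }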
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds the monitor commands that start the mirroring job.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Even for jobs that need to be manually completed, management may want
to take care of the completion itself, without requiring the user to
issue a command to terminate the job. In this case we want to avoid
having it poll us continuously, waiting for completion to become available.
Thus, add a new event that signals the phase switch and the availability
of the block-job-complete command.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
While streaming can be dropped as soon as it has progressed through the whole
image, mirroring needs to be completed manually for two reasons: 1) so that
management knows exactly when the VM switches to the target; 2) because
for other use cases such as replication, we may leave the operation running
for the whole life of the virtual machine.
Add a new block job command that manually completes background operations.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>