Now that this parameter is effectively unused, we can drop it and just
pass NULL on to bdrv_open_inherit().
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Change all callers of bdrv_open() to pass the driver name in the options
QDict instead of passing its BlockDriver pointer.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We use mirror+replace to fix a quorum's broken child. bs/s->common.bs
is the quorum, to_replace is the broken child, and the new child is
target_bs.
Without this patch, the node to replace can be any node: a top BDS with
a BB attached, or another quorum's child. This patch adds a check that
the broken child really is a child of the quorum BDS.
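A purely illustrative sketch of the kind of membership check this adds;
the array layout and names below are assumptions for the example, not
the actual quorum driver state:
    #include <stdbool.h>
    #include <stddef.h>

    /* Before allowing the replacement, confirm that the node to be
     * replaced really is one of the quorum's children. */
    static bool node_is_quorum_child(void *const *children,
                                     size_t num_children,
                                     const void *to_replace)
    {
        for (size_t i = 0; i < num_children; i++) {
            if (children[i] == to_replace) {
                return true;
            }
        }
        return false;
    }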
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Message-id: 55A86486.1000404@cn.fujitsu.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The bps_max/iops_max values are meaningless without corresponding
bps/iops values. Report an error if bps_max/iops_max is given without
bps/iops.
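An illustrative form of the check; the variable names are invented for
the sketch, while the real validation runs on the throttling arguments
of block_set_io_throttle:
    #include <stdbool.h>
    #include <stdio.h>

    static bool throttle_max_args_valid(double bps, double bps_max,
                                        double iops, double iops_max)
    {
        if (bps_max && !bps) {
            fprintf(stderr, "bps_max is meaningless without bps\n");
            return false;
        }
        if (iops_max && !iops) {
            fprintf(stderr, "iops_max is meaningless without iops\n");
            return false;
        }
        return true;
    }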
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1438683733-21111-2-git-send-email-stefanha@redhat.com
If buf_size is less than 0, the command fails.
If buf_size is 0, use DEFAULT_MIRROR_BUF_SIZE.
If buf_size % granularity is not 0, mirror_free_init() will
do dangerous things.
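The constraints can be read as a small validation step; a sketch with
invented names (the default size used here is only an assumption):
    #include <stddef.h>
    #include <stdint.h>

    #define SKETCH_DEFAULT_MIRROR_BUF_SIZE (10 << 20)

    static int64_t checked_buf_size(int64_t buf_size, int64_t granularity,
                                    const char **errmsg)
    {
        *errmsg = NULL;
        if (buf_size < 0) {
            *errmsg = "invalid buf-size";
            return -1;
        }
        if (buf_size == 0) {
            buf_size = SKETCH_DEFAULT_MIRROR_BUF_SIZE;
        }
        if (buf_size % granularity) {
            *errmsg = "buf-size must be a multiple of granularity";
            return -1;
        }
        return buf_size;
    }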
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 5555A588.3080907@cn.fujitsu.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Draining is not necessary, I/O can happen as soon as the
commit coroutine yields. Draining can be necessary before
reopening the file for read/write, or while modifying the
backing file chain, but that is done separately in
bdrv_reopen_multiple or bdrv_close; this particular
bdrv_drain_all does nothing for that.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1432822903-25821-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
If specified as "true", it allows discarding on target sectors where the
source is not allocated.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
bdrv_close already does that, and in fact hmp_drive_del would need
another drain after the flush (which bdrv_close does). So remove
the duplication.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1432822629-25401-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In particular, don't include it in headers.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
These macros expand into error class enumeration constant, comma,
string. Unclean. Has been that way since commit 13f59ae.
The error class is always ERROR_CLASS_GENERIC_ERROR since the previous
commit.
Clean up as follows:
* Prepend every use of a QERR_ macro by ERROR_CLASS_GENERIC_ERROR, and
delete it from the QERR_ macro. No change after preprocessing.
* Rewrite error_set(ERROR_CLASS_GENERIC_ERROR, ...) into
error_setg(...). Again, no change after preprocessing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Error classes other than ERROR_CLASS_GENERIC_ERROR should not be used
in new code. Hiding them in QERR_ macros makes new uses hard to spot.
Fortunately, there's just one such macro left. Eliminate it with this
coccinelle semantic patch:
@@
expression EP, E;
@@
-error_set(EP, QERR_DEVICE_NOT_FOUND, E)
+error_set(EP, ERROR_CLASS_DEVICE_NOT_FOUND, "Device '%s' not found", E)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
qerror_report_err() is a transitional interface to help with
converting existing monitor commands to QMP. It should not be used
elsewhere.
The only remaining user in qemu-option.c is qemu_opts_parse(). Is it
used in QMP context? If not, we can simply replace
qerror_report_err() by error_report_err().
The uses in qemu-img.c, qemu-io.c, qemu-nbd.c and under tests/ are
clearly not in QMP context.
The uses in vl.c aren't either, because the only QMP command handlers
there are qmp_query_status() and qmp_query_machines(), and they don't
call it.
Remaining uses:
* drive_def(): Command line -drive and such, HMP drive_add and pci_add
* hmp_chardev_add(): HMP chardev-add
* monitor_parse_command(): HMP core
* tpm_config_parse(): Command line -tpmdev
* net_host_device_add(): HMP host_net_add
* net_client_parse(): Command line -net and -netdev
* qemu_global_option(): Command line -global
* vnc_parse_func(): Command line -display, -vnc, default display, HMP
change, QMP change. Bummer.
* qemu_pci_hot_add_nic(): HMP pci_add
* usb_net_init(): Command line -usbdevice, HMP usb_add
Propagate errors through qemu_opts_parse(). Create a convenience
function qemu_opts_parse_noisily() that passes errors to
error_report_err(). Switch all non-QMP users outside tests to it.
That leaves vnc_parse_func(). Propagate errors through it. Since I'm
touching it anyway, rename it to vnc_parse().
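The convenience wrapper described above has roughly this shape (a sketch,
not the exact patch; it relies on QEMU's QemuOpts and Error APIs, and the
precise signatures are assumed):
    /* Parse like qemu_opts_parse(), but report any error immediately;
     * intended only for non-QMP callers such as command line parsing. */
    QemuOpts *qemu_opts_parse_noisily(QemuOptsList *list, const char *params,
                                      bool permit_abbrev)
    {
        Error *err = NULL;
        QemuOpts *opts = qemu_opts_parse(list, params, permit_abbrev, &err);

        if (err) {
            error_report_err(err);
        }
        return opts;
    }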
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
We now finally have TCG support for the basic set of instructions necessary
to run the s390-ccw machine. That means that in every possible aspect this
machine type is now superior to the legacy s390-virtio machine.
Switch over to the ccw machine as the default. That way people don't get a
halfway broken machine with the s390x target.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer core and image format patches
# gpg: Signature made Fri Jun 12 16:08:53 2015 BST using RSA key ID C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
* remotes/kevin/tags/for-upstream: (25 commits)
block: Fix reopen flag inheritance
block: Add BlockDriverState.inherits_from
block: Add list of children to BlockDriverState
queue.h: Add QLIST_FIX_HEAD_PTR()
block: Drain requests before swapping nodes in bdrv_swap()
block: Move flag inheritance to bdrv_open_inherit()
block: Use QemuOpts in bdrv_open_common()
block: Use macro for cache option names
vmdk: Use bdrv_open_image()
quorum: Use bdrv_open_image()
check-qdict: Test cases for new functions
qdict: Add qdict_{set,copy}_default()
qdict: Add qdict_array_entries()
iotests: Add tests for overriding BDRV_O_PROTOCOL
block: driver should override flags in bdrv_open()
block: Change bitmap truncate conditional to assertion
block: record new size in bdrv_dirty_bitmap_truncate
raw-posix: Fix .bdrv_co_get_block_status() for unaligned image size
vmdk: Use vmdk_find_index_in_cluster everywhere
vmdk: Fix index_in_cluster calculation in vmdk_co_get_block_status
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
The throttle group support uses a cooperative round-robin scheduling
algorithm.
The principles of the algorithm are simple (a minimal sketch follows
the list below):
- Each BDS of the group is used as a token in a circular way.
- The active BDS computes if a wait must be done and arms the right
timer.
- If a wait must be done the token timer will be armed so the token
will become the next active BDS.
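A minimal, self-contained model of the token idea (the real code lives
in block/throttle-groups.c and works on BlockDriverStates and QEMU
timers; everything below is a stand-in):
    #include <stdbool.h>
    #include <stddef.h>

    /* If the currently active member must wait, the token is passed on
     * circularly, so the next member of the group becomes active. */
    static size_t throttle_next_active(size_t active, size_t group_size,
                                       bool must_wait)
    {
        return must_wait ? (active + 1) % group_size : active;
    }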
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: f0082a86f3ac01c46170f7eafe2101a92e8fde39.1433779731.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
All QMP commands use the "new" handler interface (mhandler.cmd_new).
Most HMP commands still use the traditional interface (mhandler.cmd),
but a few use the "new" one. Complicates handle_user_command() for no
gain, so I'm converting these to the traditional interface.
For drive_del, that's easy: hmp_drive_del() sheds its unused last
parameter, and its return value, which the caller ignored anyway.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
We often don't need the BlockDriverState for functions
that operate on bitmaps. Remove it.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1429314609-29776-15-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add bdrv_clear_dirty_bitmap and a matching QMP command,
qmp_block_dirty_bitmap_clear that enables a user to reset
the bitmap attached to a drive.
This allows us to reset a bitmap in the event of a full
drive backup.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1429314609-29776-12-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
For "dirty-bitmap" sync mode, the block job will iterate through the
given dirty bitmap to decide if a sector needs backup (backup all the
dirty clusters and skip clean ones), just like the allocation conditions
used by "top" sync mode.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1429314609-29776-11-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A bitmap successor is an anonymous BdrvDirtyBitmap that is intended to
be created just prior to a sensitive operation (e.g. Incremental Backup)
that can either succeed or fail, but during the course of which we still
want a bitmap tracking writes.
On creating a successor, we "freeze" the parent bitmap, which prevents
deleting it, enabling it, anonymizing it, or creating a bitmap with the
same name.
On success, the parent bitmap can "abdicate" responsibility to the
successor, which will inherit its name. The successor will have been
tracking writes during the course of the backup operation. The parent
will be safely deleted.
On failure, we can "reclaim" the successor from the parent, unifying
them such that the resulting bitmap describes all writes occurring since
the last successful backup, for instance. Reclamation will thaw the
parent, but not explicitly re-enable it.
BdrvDirtyBitmap operations that target a single bitmap are protected
by assertions that the bitmap is not frozen and/or disabled.
BdrvDirtyBitmap operations that target a group of bitmaps, such as
bdrv_{set,reset}_dirty, will ignore frozen/disabled bitmaps with a
conditional instead.
Internal functions that enable/disable dirty bitmaps have assertions
added to them to prevent modifying frozen bitmaps.
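A toy model of the freeze/abdicate/reclaim protocol described above; all
type, field and function names are invented for illustration and the
merging of dirty bits is elided:
    #include <assert.h>
    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct Bitmap {
        char *name;                /* NULL means anonymous */
        bool enabled;
        struct Bitmap *successor;  /* non-NULL while frozen */
    } Bitmap;

    static bool bitmap_frozen(const Bitmap *b)
    {
        return b->successor != NULL;
    }

    /* Freeze: attach an anonymous, enabled successor that keeps
     * tracking writes during the sensitive operation. */
    static void bitmap_freeze(Bitmap *parent)
    {
        assert(!bitmap_frozen(parent));
        parent->successor = calloc(1, sizeof(*parent->successor));
        parent->successor->enabled = true;
    }

    /* Success: the successor inherits the name, the parent goes away. */
    static Bitmap *bitmap_abdicate(Bitmap *parent)
    {
        Bitmap *s = parent->successor;

        assert(bitmap_frozen(parent));
        s->name = parent->name;
        free(parent);
        return s;
    }

    /* Failure: merge the successor back (merge elided here); the parent
     * is thawed but, as above, not explicitly re-enabled. */
    static Bitmap *bitmap_reclaim(Bitmap *parent)
    {
        assert(bitmap_frozen(parent));
        free(parent->successor);
        parent->successor = NULL;
        return parent;
    }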
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1429314609-29776-10-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The new command pair is added to manage a user created dirty bitmap. The
dirty bitmap's name is mandatory and must be unique for the same device,
but different devices can have bitmaps with the same names.
The granularity is an optional field. If it is not specified, we will
choose a default granularity based on the cluster size if available,
clamped to between 4K and 64K to mirror how the 'mirror' code was
already choosing granularity. If we do not have cluster size info
available, we choose 64K. This code has been factored out into a helper
shared with block/mirror.
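A sketch of the default granularity choice just described; the helper
name and exact constants here are assumptions, the real helper being the
one shared with block/mirror:
    #include <stdint.h>

    static uint32_t default_bitmap_granularity(int64_t cluster_size)
    {
        if (cluster_size <= 0) {
            return 64 * 1024;           /* no cluster size info: use 64K */
        }
        if (cluster_size < 4 * 1024) {
            return 4 * 1024;            /* clamp to at least 4K */
        }
        if (cluster_size > 64 * 1024) {
            return 64 * 1024;           /* clamp to at most 64K */
        }
        return (uint32_t)cluster_size;  /* otherwise follow cluster size */
    }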
This patch also introduces the 'block_dirty_bitmap_lookup' helper,
which takes a device name and a dirty bitmap name and validates the
lookup, returning NULL and setting errp if there is a problem with
either field. This helper will be re-used in future patches in this
series.
The types added to block-core.json will be re-used in future patches
in this series, see:
'qapi: Add transaction support to block-dirty-bitmap-{add, enable, disable}'
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1429314609-29776-5-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The image field in BlockDeviceInfo is supposed to contain an ImageInfo
object. However that is being filled in by bdrv_query_info(), not by
bdrv_block_device_info(), which is where BlockDeviceInfo is actually
created.
Anyone calling bdrv_block_device_info() directly will get a null image
field. As a consequence of this, the HMP command 'info block -n -v'
crashes QEMU.
This patch moves the code that fills in that field from
bdrv_query_info() to bdrv_block_device_info().
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 1429271563-3765-1-git-send-email-berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There are several error messages that identify a BlockDriverState by
its device name. However those errors can be produced in nodes that
don't have a device name associated.
In those cases we should use bdrv_get_device_or_node_name() to fall
back to the node name and produce a more meaningful message. The
messages are also updated to use the more generic term 'node' instead
of 'device'.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 9823a1f0514fdb0692e92868661c38a9e00a12d6.1428485266.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch changes block_job_pause to increase the pause counter and
block_job_resume to decrease it.
The counter will allow calling block_job_pause/block_job_resume
unconditionally on a job when we need to suspend the IO temporarily.
From now on, each block_job_resume must be paired with a block_job_pause
to keep the counter balanced.
The user pause from QMP or HMP will only trigger block_job_pause once
until it's resumed; this is achieved by adding a user_paused flag in
BlockJob.
One occurrence of block_job_resume in mirror_complete is replaced with
block_job_enter which does what is necessary.
In block_job_cancel, the cancel flag is good enough to instruct
coroutines to quit their loop, so use block_job_enter to replace the
unpaired block_job_resume.
Upon a block job I/O error, the user is notified that the job has entered
the pause state, so this pause counts as a user pause; set the flag
accordingly and expect a matching QMP resume.
[Extended doc comments as suggested by Paolo Bonzini
<pbonzini@redhat.com>.
--Stefan]
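A toy model of the pause counter and user_paused flag described above;
the names are invented and coroutine re-entry is reduced to a comment:
    #include <assert.h>
    #include <stdbool.h>

    typedef struct Job {
        int pause_count;
        bool user_paused;
    } Job;

    static void job_pause(Job *job)
    {
        job->pause_count++;
    }

    static void job_resume(Job *job)
    {
        assert(job->pause_count > 0);
        if (--job->pause_count == 0) {
            /* re-enter the job's coroutine here */
        }
    }

    /* A QMP/HMP pause only bumps the counter once until it is resumed. */
    static void job_user_pause(Job *job)
    {
        if (!job->user_paused) {
            job->user_paused = true;
            job_pause(job);
        }
    }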
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-2-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Error classes are a leftover from the days of "rich" error objects.
New code should always use ERROR_CLASS_GENERIC_ERROR. Commit
b7b9d39..7c6a4ab added uses of ERROR_CLASS_DEVICE_NOT_FOUND. Replace
them.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Don't convert numbers to strings for use with qemu_opt_set(), simply
use qemu_opt_set_number() instead.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemu_opt_set() is a wrapper that reports the error with
qerror_report_err().
Most of its users assume the function can't fail. Make them use
qemu_opt_set_err() with &error_abort, so that should the assumption
ever break, it'll break noisily.
Just two users remain, in util/qemu-config.c. Switch them to
qemu_opt_set_err() as well, then rename qemu_opt_set_err() to
qemu_opt_set().
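After the rename, a caller that assumes setting the option cannot fail
would look roughly like this (a usage sketch; the option name and value
are arbitrary):
    /* A violated "can't fail" assumption now aborts noisily instead of
     * being silently dropped. */
    qemu_opt_set(opts, "format", "qcow2", &error_abort);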
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Return the Error object instead of reporting it with
qerror_report_err().
Change callers that assume the function can't fail to pass
&error_abort, so that should the assumption ever break, it'll break
noisily.
Turns out all callers outside its unit test assume that. We could
drop the Error ** argument, but that would make the interface less
regular, so don't.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Some are called do_COMMAND() (old ones, usually), some hmp_COMMAND(),
and sometimes COMMAND pointlessly differs in spelling.
Normalize to hmp_COMMAND(), where COMMAND is exactly the command name
with '-' replaced by '_'.
Exceptions:
* do_device_add() and client_migrate_info() *not* renamed to
hmp_device_add(), hmp_client_migrate_info(), because they're also
QMP handlers. They still need to be converted to QAPI.
* do_memory_dump(), do_physical_memory_dump(), do_ioport_read(),
do_ioport_write() renamed to hmp_* instead of hmp_x(), hmp_xp(),
hmp_i(), hmp_o(), because those names are too cryptic for my taste.
* do_info_help() renamed to hmp_info_help() instead of hmp_info(),
because it only covers help.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Due to different error propagation, this breaks tests 051 and 087; fix
their output.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1423162705-32065-6-git-send-email-mreitz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1422524221-8566-4-git-send-email-armbru@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The QERR_ macros are leftovers from the days of "rich" error objects.
They're used with error_set() and qerror_report(), and expand into the
first *two* arguments. This trickiness has become pointless. Clean
this one up.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1422524221-8566-3-git-send-email-armbru@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When find_block_job() fails, all its callers build the same Error
object. Build it in find_block_job() instead.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1422524221-8566-2-git-send-email-armbru@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Like BLOCK_OP_TYPE_BACKUP_SOURCE and BLOCK_OP_TYPE_BACKUP_TARGET,
block-commit involves two asymmetric devices.
This change is not user-visible (yet), because commit only works with
device names.
But once we enable backing references in blockdev-add, or allow
specifying a node-name in the block-commit command, we don't want the
user to start two commit jobs on the same backing chain, which would
corrupt things because of the final bdrv_swap.
Until we have per-category blockers, splitting this type is still
better.
[Resolved virtio-blk dataplane conflict by replacing
BLOCK_OP_TYPE_COMMIT with both BLOCK_OP_TYPE_COMMIT_{SOURCE, TARGET}.
They are safe since the block job runs in the same AioContext as the
dataplane IOThread.
--Stefan]
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Similar to drive-backup, but this command uses a device id as target
instead of creating/opening an image file.
Also add blocker on target bs, since the target is also a named device
now.
Add a check and report an error for bs == target, which becomes possible
with the introduction of blockdev-backup but is an illegal case.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1418899027-8445-3-git-send-email-famz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The BLOCK_OP_TYPE_INTERNAL_SNAPSHOT op blocker exists but was never
used! Let's fix that so internal snapshots can be blocked.
[Fixed s/external/internal/ typo as pointed out by Paolo Bonzini and Max
Reitz.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-5-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The transaction QMP command performs operations atomically on a group of
drives. This command needs to acquire AioContext in order to work
safely when virtio-blk dataplane IOThreads are accessing drives.
The transactional nature of the command means that actions are split
into prepare, commit, abort, and clean functions. Acquire the
AioContext in prepare and don't release it until one of the other
functions is called. This prevents the IOThread from running the
AioContext before the transaction has completed.
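The prepare/clean pairing has roughly this shape (illustrative; the state
struct and callback names are invented, while bdrv_get_aio_context() and
aio_context_acquire()/aio_context_release() are the QEMU primitives
referred to above):
    /* prepare: take the context and hold it for the whole transaction */
    static void example_action_prepare(ExampleState *state, Error **errp)
    {
        state->aio_context = bdrv_get_aio_context(state->bs);
        aio_context_acquire(state->aio_context);
        /* ... do the prepare work on state->bs ... */
    }

    /* clean: runs after commit or abort; only now is the context
     * released, so the IOThread cannot run it mid-transaction */
    static void example_action_clean(ExampleState *state)
    {
        aio_context_release(state->aio_context);
    }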
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-4-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
drive_backup_prepare() assigns DriveBackupState fields to NULL in the
error path. This is unnecessary because the DriveBackupState is
allocated using g_malloc0() and other functions like
external_snapshot_prepare() already rely on this.
Do not explicitly assign fields to NULL so that the error path is
concise and does not require modification when fields are added to
DriveBackupState.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-3-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Originally the transaction QMP command was just for taking snapshots.
The command became more general when drive-backup and abort were added.
It is more accurate to say the command is about performing operations on
an atomic group than to say it is about snapshots.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-2-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add dataplane support to the change-backing-file QMP commands. By
acquiring the AioContext we avoid race conditions with the dataplane
thread which may also be accessing the BlockDriverState.
Note that this command operates on both bs and a node in its chain
(image_bs). The bdrv_chain_contains(bs, image_bs) check guarantees that
bs and image_bs are in the same AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
By acquiring the AioContext we avoid race conditions with the dataplane
thread which may also be accessing the BlockDriverState.
Fix up eject and change in a single patch because qmp_eject() and
qmp_change_blockdev() both call eject_device(). Also fix block_passwd
while we're tackling a command that takes a block
encryption password.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The BLOCK_OP_TYPE_INTERNAL_SNAPSHOT_DELETE op blocker exists but was
never used! Let's fix that so snapshot delete can be blocked.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add dataplane support to the blockdev-snapshot-delete-internal-sync QMP
command. By acquiring the AioContext we avoid race conditions with the
dataplane thread which may also be accessing the BlockDriverState.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The commit block job must run in the BlockDriverState AioContext so that
it works with dataplane.
Acquire the AioContext in blockdev.c so starting the block job is safe.
One detail here is that the bdrv_drain_all() must be moved inside the
aio_context_acquire() region so requests cannot sneak in between the
drain and acquire.
The completion code in block/commit.c must perform backing chain
manipulation and bdrv_reopen() from the main loop. Use
block_job_defer_to_main_loop() to achieve that.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-11-git-send-email-stefanha@redhat.com
The mirror block job must run in the BlockDriverState AioContext so that
it works with dataplane.
Acquire the AioContext in blockdev.c so starting the block job is safe.
Note that to_replace is treated separately from other BlockDriverStates
in that it does not need to be in the same AioContext. Explicitly
acquire/release to_replace's AioContext when accessing it.
The completion code in block/mirror.c must perform BDS graph
manipulation and bdrv_reopen() from the main loop. Use
block_job_defer_to_main_loop() to achieve that.
The bdrv_drain_all() call is not allowed outside the main loop since it
could lead to lock ordering problems. Use bdrv_drain(bs) instead
because we have acquired the AioContext so nothing else can sneak in
I/O.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-10-git-send-email-stefanha@redhat.com
The stream block job must run in the BlockDriverState AioContext so that
it works with dataplane.
The basics of acquiring the AioContext are easy in blockdev.c.
The tricky part is the completion code which drops part of the backing
file chain. This must be done in the main loop where bdrv_unref() and
bdrv_close() are safe to call. Use block_job_defer_to_main_loop() to
achieve that.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-9-git-send-email-stefanha@redhat.com
The backup block job must run in the BlockDriverState AioContext so that
it works with dataplane.
The basics of acquiring the AioContext are easy in blockdev.c.
The completion code in block/backup.c must call bdrv_unref() from the
main loop. Use block_job_defer_to_main_loop() to achieve that.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-8-git-send-email-stefanha@redhat.com
This function is correct but we should document the constraint that
everything must be thread-safe.
Emitting QMP events and scheduling BHs are both thread-safe so nothing
needs to be done here.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-5-git-send-email-stefanha@redhat.com
When an emulated storage controller is unrealized it will call
blockdev_mark_auto_del(). This will cancel any running block job (and
that eventually releases its reference to the BDS so it can be freed).
Since the block job may be executing in another AioContext we must
acquire/release to ensure thread safety.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-4-git-send-email-stefanha@redhat.com
Make sure that query-block-jobs acquires the BlockDriverState
AioContext so that the blockjob isn't running in another thread while we
access its state.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-3-git-send-email-stefanha@redhat.com
block-job-set-speed, block-job-cancel, block-job-pause,
block-job-resume, and block-job-complete must acquire the
BlockDriverState AioContext so that it is safe to access bs.
At the moment bs->job is always NULL when dataplane is active because op
blockers prevent blockjobs from starting. Once the rest of the blockjob
API has been made aware of AioContext we can drop the op blocker.
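Illustrative shape of such a handler; the lookup helper and its signature
are assumptions for this sketch, the point being that the job's AioContext
is held while the job is touched and released afterwards:
    void qmp_block_job_example(const char *device, Error **errp)
    {
        AioContext *aio_context;
        BlockJob *job = find_block_job(device, &aio_context, errp);

        if (!job) {
            return;
        }
        block_job_pause(job);           /* safe: AioContext is held */
        aio_context_release(aio_context);
    }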
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1413889440-32577-2-git-send-email-stefanha@redhat.com
Move device model attachment / detachment and the BlockDevOps device
model callbacks and their wrappers from BlockDriverState to
BlockBackend.
Wrapper calls in block.c change from
    bdrv_dev_FOO_cb(bs, ...)
to
    if (bs->blk) {
        bdrv_dev_FOO_cb(bs->blk, ...);
    }
No change, because both bdrv_dev_change_media_cb() and
bdrv_dev_resize_cb() do nothing when no device model is attached, and
a device model can be attached only when bs->blk.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Much more command code needs conversion. I'm converting these now
because they're using bdrv_dev_* functions, which I'm about to lift
into BlockBackend.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockdev_init() always creates a DriveInfo, but only drive_new() fills
it in. qmp_blockdev_add() leaves it blank. This results in a drive
with type = IF_IDE, bus = 0, unit = 0. Screwed up in commit ee13ed1c.
Board initialization code looking for IDE drive (0,0) can pick up one
of these bogus drives. The QMP command has to execute really early to
be visible. Not sure how likely that is in practice.
Fix by creating DriveInfo in drive_new(). Block backends created by
blockdev-add don't get one.
Breaks the test for "has been created by qmp_blockdev_add()" in
blockdev_mark_auto_del() and do_drive_del(), because it changes the
value of dinfo && !dinfo->enable_auto_del from true to false. Simply
test !dinfo instead.
Leaves DriveInfo member enable_auto_del unused. Drop it.
A few places assume a block backend always has a DriveInfo. Fix them
up.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Device models should access their block backends only through the
block-backend.h API. Convert them, and drop direct includes of
inappropriate headers.
Just four uses of BlockDriverState are left:
* The Xen paravirtual block device backend (xen_disk.c) opens images
itself when set up via xenbus, bypassing blockdev.c. I figure it
should go through qmp_blockdev_add() instead.
* Device model "usb-storage" prompts for keys. No other device model
does, and this one probably shouldn't do it, either.
* ide_issue_trim_cb() uses bdrv_aio_discard() instead of
blk_aio_discard() because it fishes its backend out of a BlockAIOCB,
which has only the BlockDriverState.
* PC87312State has an unused BlockDriverState[] member.
The next two commits take care of the latter two.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The patch is big, but all it really does is replace
    dinfo->bdrv
by
    blk_bs(blk_by_legacy_dinfo(dinfo))
The replacement is repetitive, but the conversion of device models to
BlockBackend is imminent, and will shorten it to just
    blk_legacy_dinfo(dinfo).
Line wrapping muddies the waters a bit. I also omit tests whether
dinfo->bdrv is null, because it never is.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
drive_del() has become a trivial wrapper around blk_unref(). Get rid
of it.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On BlockBackend destruction, unref its BlockDriverState. Replaces the
callers' unrefs.
This turns the pointer from BlockBackend to BlockDriverState into a
strong reference, managed with bdrv_ref() / bdrv_unref(). The
back-pointer remains weak.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Make the BlockBackend own the DriveInfo. Change blockdev_init() to
return the BlockBackend instead of the DriveInfo.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Convenience function blk_new_with_bs() creates a BlockBackend with its
BlockDriverState. Callers have to unref both. The commit after next
will relieve them of the need to unref the BlockDriverState.
Complication: due to the silly way drive_del works, we need a way to
hide a BlockBackend, just like bdrv_make_anon(). To emphasize its
"special" status, give the function a suitably off-putting name:
blk_hide_on_behalf_of_do_drive_del(). Unfortunately, hiding turns the
BlockBackend's name into the empty string. Can't avoid that without
breaking the blk->bs->device_name equals blk->name invariant.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Creating an anonymous BDS can't fail. Make that obvious.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of duplicating the logic for the if_ide
(bus,unit) mappings, rely on the blockdev layer
for managing those mappings for us, and use the
drive_get_by_index call instead.
This allows ide_drive_get to work for AHCI HBAs
as well, and can be used in the Q35 initialization.
Lastly, change the nature of the argument to
ide_drive_get so that it represents the number of
total drives we can support, and not the total
number of buses. This will prevent array overflows
if the units-per-default-bus property ever needs
to be adjusted for compatibility reasons.
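For reference, the index-to-(bus,unit) mapping provided by the blockdev
layer works roughly like this (a sketch: with max_devs = 2 for if=ide,
index 3 maps to bus 1, unit 1):
    static void index_to_bus_unit(int index, int max_devs,
                                  int *bus, int *unit)
    {
        if (max_devs <= 0) {
            /* interface types without a per-bus device limit */
            *bus = 0;
            *unit = index;
        } else {
            *bus = index / max_devs;
            *unit = index % max_devs;
        }
    }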
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1412187569-23452-5-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The if_max_devs table has in the past been an immutable
default that controls the mapping of index => (bus,unit)
for all boards and all HBAs for each interface type.
Since adding this mapping information to the HBA device
itself is currently unwieldy from the perspective of
retrieving this information at option parsing time
(e.g, within drive_new), we consider the alternative
of marking the if_max_devs table mutable so that
later configuration and initialization can adjust the
mapping at will, but only up until a drive is added,
at which point the mapping is finalized.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1412187569-23452-3-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When users use command line options like -hda, -cdrom,
or even -drive if=ide, it is up to the board initialization
routines to pick up these drives and create backing
devices for them.
Some boards, like Q35, have not been doing this.
However, there is no warning explaining why certain
drive specifications are just silently ignored,
so this function adds a check to print some warnings
to assist users in debugging these sorts of issues
in the future.
This patch will not warn about drives added with if=none,
for which it is not possible to tell in advance if
the omission of a backing device is an issue.
A warning in these cases is therefore not considered appropriate.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1412187569-23452-2-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Make qemu_opt_rename() rename all matching items in opts, not just the
first one.
For example, when bps is given twice on the command line, every bps
option needs to be renamed to throttling.bps-total.
This patch fixes the following bug:
Bug 1145586 - qemu-kvm will give strange hint when add bps twice for a drive
ref: https://bugzilla.redhat.com/show_bug.cgi?id=1145586
[Resolved conflict with commit 5abbf0ee4d
("block: Catch simultaneous usage of options and their aliases"). Check
for simultaneous use first, and then loop over all options.
--Stefan]
Signed-off-by: Jun Li <junmuzi@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1411537527-16715-1-git-send-email-junmuzi@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1411999675-14533-1-git-send-email-armbru@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
While thinking about precedence of conflicting block device options from
different sources, I noticed that you can specify both an option and its
legacy alias at the same time (e.g. readonly=on,read-only=off). Rather
than specifying the order of precedence, we should simply forbid such
combinations.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Instead of a series of qemu_opt_rename() calls, use an array that
contains all of the renames and call qemu_opt_rename() in a loop. This
will keep the code readable even when we add an error return to
qemu_opt_rename().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
If the BDS's refcnt > 0, drive_del() destroys the DriveInfo, but not
the BDS. This can happen in three places:
* Device model destruction during unplug: blockdev_auto_del()
* Xen IDE unplug: pci_piix3_xen_ide_unplug()
* drive_del command when no device model is attached: do_drive_del()
The other callers of drive_del are on error paths where refcnt == 1.
If the user somehow manages to plug in a device model using a BDS that
has gone through drive_del(), the legacy configuration passed in
DriveInfo doesn't reach the device model, and automatic deletion on
unplug doesn't work. Worse, some device models such as scsi-disk
crash when DriveInfo doesn't exist.
This is theoretical; I didn't research an actual reproducer. The problem
was introduced when we replaced DriveInfo reference counting by BDS
reference counting in commit a94a3fa..fa510eb.
Fix by keeping DriveInfo alive until its BDS dies.
This affects qemu_drive_opts: now you can't reuse the same ID for new
drive options until the BDS dies. Before, you could, but since the
code always attempts to create a BDS with the same ID next, the
enclosing operation "create a new drive" failed anyway. Different
error path, same result.
Unfortunately, the fix involves use of blockdev.c stuff from block.c,
which is a layering violation. Fortunately, my forthcoming
BlockBackend work will get rid of it again.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockdev_init() mixes up BlockDriverState and DriveInfo initialization.
Finish the BlockDriverState job before starting to mess with
DriveInfo. Easier on the eyes.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
For some device models, the guest can prevent unplug. Some users need a
way to forcibly revoke device model access to the block backend then, so
the underlying images can be safely used for something else.
drive_del lets you do that. Unfortunately, it conflates revoking access
with destroying the backend.
Commit 9063f81 made drive_del immediately destroy the root BDS. Nice:
the device name becomes available for reuse immediately. Not so nice:
the device model's pointer to the root BDS dangles, and we're prone to
crash when the memory gets reused.
Commit d22b2f4 fixed that by hiding the root BDS instead of destroying
it. Destruction only happens on unplug. "Hiding" means removing it
from bdrv_states and graph_bdrv_states; see bdrv_make_anon().
This "destroy on revoke" is a misfeature we don't want to carry
forward to blockdev-add, just like "destroy on unplug" (commit
2d246f0). So make drive_del fail on anything added with blockdev-add.
We'll add separate QMP commands to revoke device model access and to
destroy backends.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Relaxing the license to LGPLv2+ is intentional.
Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoit Canet <benoit.canet@nodalink.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Make drive_del safe for dataplane where another thread may be running
the BlockDriverState's AioContext.
Note the assumption that AioContext's lifetime exceeds DriveInfo and
BlockDriverState. We release AioContext after DriveInfo and
BlockDriverState are potentially freed.
This is clearly safe with the global AioContext but also with -object
iothread and implicit iothreads created by -device
virtio-blk-pci,x-data-plane=on (their lifetime is tied to DeviceState,
not BlockDriverState).
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Name the 'granularity' parameter and give its expected value range.
Previously the device name was mistakenly reported as the parameter
name.
Note that the error class is unchanged from ERROR_CLASS_GENERIC_ERROR.
Reported-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Make block_resize safe for dataplane where another thread may be running
the BlockDriverState's AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
Patch created with Coccinelle, with two manual changes on top:
* Add const to bdrv_iterate_format() to keep the types straight
* Convert the allocation in bdrv_drop_intermediate(), which Coccinelle
inexplicably misses
Coccinelle semantic patch:
@@
type T;
@@
-g_malloc(sizeof(T))
+g_new(T, 1)
@@
type T;
@@
-g_try_malloc(sizeof(T))
+g_try_new(T, 1)
@@
type T;
@@
-g_malloc0(sizeof(T))
+g_new0(T, 1)
@@
type T;
@@
-g_try_malloc0(sizeof(T))
+g_try_new0(T, 1)
@@
type T;
expression n;
@@
-g_malloc(sizeof(T) * (n))
+g_new(T, n)
@@
type T;
expression n;
@@
-g_try_malloc(sizeof(T) * (n))
+g_try_new(T, n)
@@
type T;
expression n;
@@
-g_malloc0(sizeof(T) * (n))
+g_new0(T, n)
@@
type T;
expression n;
@@
-g_try_malloc0(sizeof(T) * (n))
+g_try_new0(T, n)
@@
type T;
expression p, n;
@@
-g_realloc(p, sizeof(T) * (n))
+g_renew(T, p, n)
@@
type T;
expression p, n;
@@
-g_try_realloc(p, sizeof(T) * (n))
+g_try_renew(T, p, n)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On some image chains, QEMU may not always be able to resolve the
filenames properly, when updating the backing file of an image
after a block job.
For instance, certain relative pathnames may fail, or drives may
have been specified originally by file descriptor (e.g. /dev/fd/???),
or a relative protocol pathname may have been used.
In these instances, QEMU may lack the information to be able to make
the correct choice, but the user or management layer most likely does
have that knowledge.
With this extension to the block-stream api, the user is able to change
the backing file of the active layer as part of the block-stream
operation.
This allows the change to be 'safe', in the sense that if the attempt
to write the active image metadata fails, then the block-stream
operation returns failure, without disrupting the guest.
If a backing file string is not specified in the command, the backing
file string to use is determined in the same manner as it was
previously.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
On some image chains, QEMU may not always be able to resolve the
filenames properly, when updating the backing file of an image
after a block commit.
For instance, certain relative pathnames may fail, or drives may
have been specified originally by file descriptor (e.g. /dev/fd/???),
or a relative protocol pathname may have been used.
In these instances, QEMU may lack the information to be able to make
the correct choice, but the user or management layer most likely does
have that knowledge.
With this extension to the block-commit api, the user is able to change
the backing file of the overlay image as part of the block-commit
operation.
This allows the change to be 'safe', in the sense that if the attempt
to write the overlay image metadata fails, then the block-commit
operation returns failure, without disrupting the guest.
If the commit top is the active layer, then specifying the backing
file string will be treated as an error (there is no overlay image
to modify in that case).
If a backing file string is not specified in the command, the backing
file string to use is determined in the same manner as it was
previously.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This allows a user to make a live change to the backing file recorded in
an open image.
The image file to modify can be specified 2 ways:
1) image filename
2) image node-name
Note: this does not cause the backing file itself to be reopened; it
merely changes the backing filename in the image file structure, and
in internal BDS structures.
It is the responsibility of the user to pass a filename string that
can be resolved when the image chain is reopened, and the filename
string is not validated.
A good analogy for this command is that it is a live version of
'qemu-img rebase -u', with respect to changing the backing file string.
[Jeff is offline so I respun this patch in his absence. Dropped image
filename since using node-name is preferred and this is a new command.
No need to introduce the limitations of finding images by filename.
--Stefan]
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that active layer block-commit is supported, the 'top' argument
no longer needs to be mandatory.
Change it to optional, with the default being the active layer in the
device chain.
[kwolf: Rebased and resolved conflict in tests/qemu-iotests/040]
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
drive-mirror will bdrv_swap the new BDS named node-name with the one
pointed to by 'replaces' when the mirroring is finished.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This new argument can be used to specify the node-name of the new mirrored BDS.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If we check for the RESIZE blocker in bdrv_truncate(), that means a
commit will fail if the overlay layer is larger than the base, due to
the backing blocker.
This is a regression in behavior from 2.0; currently, commit will try to
grow the size of the base image to match the overlay size, if the
overlay size is larger.
By moving this into the QMP command qmp_block_resize(), it allows
usage of bdrv_truncate() within block jobs.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Since BLOCK_JOB_COMPLETED, BLOCK_JOB_CANCELLED, and BLOCK_JOB_READY are
related, convert them in one patch. The block_job_event_* functions
are used to keep encapsulation of BlockJob structure.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
It's always one since commit fa510eb dropped the last drive_get_ref().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
"Init" and "uninit" suggest the functions don't allocate / free
storage. But they do.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The block_set_io_throttle QMP and HMP commands modify I/O throttling
limits for block devices.
Acquire the BlockDriverState's AioContext to protect against race
conditions with an IOThread that is running I/O for this device.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
bs_opts is leaked on all paths from its qdict_new() that don't go
through blockdev_init(). Add the missing QDECREF(), and zap bs_opts
after blockdev_init(), so the new QDECREF() does nothing when we go
through blockdev_init().
Leak introduced in commit f298d07. Spotted by Coverity.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>