/*
 * QEMU disk image utility
 *
 * Copyright (c) 2003-2008 Fabrice Bellard
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include "qemu/osdep.h"
#include <getopt.h>

#include "qemu-common.h"
#include "qemu-version.h"
#include "qapi/error.h"
#include "qapi/qapi-commands-block-core.h"
#include "qapi/qapi-visit-block-core.h"
#include "qapi/qobject-output-visitor.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qdict.h"
#include "qemu/cutils.h"
#include "qemu/config-file.h"
#include "qemu/option.h"
#include "qemu/error-report.h"
#include "qemu/log.h"
#include "qemu/main-loop.h"
#include "qemu/module.h"
#include "qemu/sockets.h"
#include "qemu/units.h"
#include "qom/object_interfaces.h"
#include "sysemu/block-backend.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "block/qapi.h"
#include "crypto/init.h"
#include "trace/control.h"
#include "qemu/throttle.h"
#include "block/throttle-groups.h"

#define QEMU_IMG_VERSION "qemu-img version " QEMU_FULL_VERSION \
                          "\n" QEMU_COPYRIGHT "\n"

typedef struct img_cmd_t {
    const char *name;
    int (*handler)(int argc, char **argv);
} img_cmd_t;

enum {
    OPTION_OUTPUT = 256,
    OPTION_BACKING_CHAIN = 257,
    OPTION_OBJECT = 258,
    OPTION_IMAGE_OPTS = 259,
    OPTION_PATTERN = 260,
    OPTION_FLUSH_INTERVAL = 261,
    OPTION_NO_DRAIN = 262,
    OPTION_TARGET_IMAGE_OPTS = 263,
    OPTION_SIZE = 264,
    OPTION_PREALLOCATION = 265,
    OPTION_SHRINK = 266,
    OPTION_SALVAGE = 267,
    OPTION_TARGET_IS_ZERO = 268,
    OPTION_ADD = 269,
    OPTION_REMOVE = 270,
    OPTION_CLEAR = 271,
    OPTION_ENABLE = 272,
    OPTION_DISABLE = 273,
    OPTION_MERGE = 274,
    OPTION_BITMAPS = 275,
    OPTION_FORCE = 276,
    OPTION_SKIP_BROKEN = 277,
};

typedef enum OutputFormat {
    OFORMAT_JSON,
    OFORMAT_HUMAN,
} OutputFormat;

/* Default to cache=writeback as data integrity is not important for qemu-img */
#define BDRV_DEFAULT_CACHE "writeback"

static void format_print(void *opaque, const char *name)
{
    printf(" %s", name);
}

static void QEMU_NORETURN GCC_FMT_ATTR(1, 2) error_exit(const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    error_vreport(fmt, ap);
    va_end(ap);

    error_printf("Try 'qemu-img --help' for more information\n");
    exit(EXIT_FAILURE);
}

static void QEMU_NORETURN missing_argument(const char *option)
{
    error_exit("missing argument for option '%s'", option);
}

static void QEMU_NORETURN unrecognized_option(const char *option)
{
    error_exit("unrecognized option '%s'", option);
}

/* Please keep in synch with docs/tools/qemu-img.rst */
static void QEMU_NORETURN help(void)
{
    const char *help_msg =
           QEMU_IMG_VERSION
           "usage: qemu-img [standard options] command [command options]\n"
           "QEMU disk image utility\n"
           "\n"
           "    '-h', '--help'       display this help and exit\n"
           "    '-V', '--version'    output version information and exit\n"
           "    '-T', '--trace'      [[enable=]<pattern>][,events=<file>][,file=<file>]\n"
           "                         specify tracing options\n"
           "\n"
           "Command syntax:\n"
#define DEF(option, callback, arg_string)        \
           "  " arg_string "\n"
#include "qemu-img-cmds.h"
#undef DEF
           "\n"
           "Command parameters:\n"
           "  'filename' is a disk image filename\n"
           "  'objectdef' is a QEMU user creatable object definition. See the qemu(1)\n"
           "    manual page for a description of the object properties. The most common\n"
           "    object type is a 'secret', which is used to supply passwords and/or\n"
           "    encryption keys.\n"
           "  'fmt' is the disk image format. It is guessed automatically in most cases\n"
           "  'cache' is the cache mode used to write the output disk image, the valid\n"
           "    options are: 'none', 'writeback' (default, except for convert), 'writethrough',\n"
           "    'directsync' and 'unsafe' (default for convert)\n"
           "  'src_cache' is the cache mode used to read input disk images, the valid\n"
           "    options are the same as for the 'cache' option\n"
           "  'size' is the disk image size in bytes. Optional suffixes\n"
           "    'k' or 'K' (kilobyte, 1024), 'M' (megabyte, 1024k), 'G' (gigabyte, 1024M),\n"
           "    'T' (terabyte, 1024G), 'P' (petabyte, 1024T) and 'E' (exabyte, 1024P) are\n"
           "    supported. 'b' is ignored.\n"
           "  'output_filename' is the destination disk image filename\n"
           "  'output_fmt' is the destination format\n"
           "  'options' is a comma separated list of format specific options in a\n"
           "    name=value format. Use -o ? for an overview of the options supported by the\n"
           "    used format\n"
           "  'snapshot_param' is param used for internal snapshot, format\n"
           "    is 'snapshot.id=[ID],snapshot.name=[NAME]', or\n"
           "    '[ID_OR_NAME]'\n"
           "  '-c' indicates that target image must be compressed (qcow format only)\n"
           "  '-u' allows unsafe backing chains. For rebasing, it is assumed that old and\n"
           "       new backing file match exactly. The image doesn't need a working\n"
           "       backing file before rebasing in this case (useful for renaming the\n"
           "       backing file). For image creation, allow creating without attempting\n"
           "       to open the backing file.\n"
           "  '-h' with or without a command shows this help and lists the supported formats\n"
           "  '-p' show progress of command (only certain commands)\n"
           "  '-q' use Quiet mode - do not print any output (except errors)\n"
           "  '-S' indicates the consecutive number of bytes (defaults to 4k) that must\n"
           "       contain only zeros for qemu-img to create a sparse image during\n"
           "       conversion. If the number of bytes is 0, the source will not be scanned for\n"
           "       unallocated or zero sectors, and the destination image will always be\n"
           "       fully allocated\n"
           "  '--output' takes the format in which the output must be done (human or json)\n"
           "  '-n' skips the target volume creation (useful if the volume is created\n"
           "       prior to running qemu-img)\n"
           "\n"
           "Parameters to bitmap subcommand:\n"
           "  'bitmap' is the name of the bitmap to manipulate, through one or more\n"
           "       actions from '--add', '--remove', '--clear', '--enable', '--disable',\n"
           "       or '--merge source'\n"
           "  '-g granularity' sets the granularity for '--add' actions\n"
           "  '-b source' and '-F src_fmt' tell '--merge' actions to find the source\n"
           "       bitmaps from an alternative file\n"
           "\n"
           "Parameters to check subcommand:\n"
           "  '-r' tries to repair any inconsistencies that are found during the check.\n"
           "       '-r leaks' repairs only cluster leaks, whereas '-r all' fixes all\n"
           "       kinds of errors, with a higher risk of choosing the wrong fix or\n"
           "       hiding corruption that has already occurred.\n"
           "\n"
           "Parameters to convert subcommand:\n"
           "  '--bitmaps' copies all top-level persistent bitmaps to destination\n"
           "  '-m' specifies how many coroutines work in parallel during the convert\n"
           "       process (defaults to 8)\n"
           "  '-W' allow to write to the target out of order rather than sequential\n"
           "\n"
           "Parameters to snapshot subcommand:\n"
           "  'snapshot' is the name of the snapshot to create, apply or delete\n"
           "  '-a' applies a snapshot (revert disk to saved state)\n"
           "  '-c' creates a snapshot\n"
           "  '-d' deletes a snapshot\n"
           "  '-l' lists all snapshots in the given image\n"
           "\n"
           "Parameters to compare subcommand:\n"
           "  '-f' first image format\n"
           "  '-F' second image format\n"
           "  '-s' run in Strict mode - fail on different image size or sector allocation\n"
           "\n"
           "Parameters to dd subcommand:\n"
           "  'bs=BYTES' read and write up to BYTES bytes at a time "
           "(default: 512)\n"
           "  'count=N' copy only N input blocks\n"
           "  'if=FILE' read from FILE\n"
           "  'of=FILE' write to FILE\n"
           "  'skip=N' skip N bs-sized blocks at the start of input\n";

    printf("%s\nSupported formats:", help_msg);
    bdrv_iterate_format(format_print, NULL, false);
    printf("\n\n" QEMU_HELP_BOTTOM "\n");
    exit(EXIT_SUCCESS);
}

/*
 * Is @optarg safe for accumulate_options()?
 * It is when multiple of them can be joined together separated by ','.
 * To make that work, @optarg must not start with ',' (or else a
 * separating ',' preceding it gets escaped), and it must not end with
 * an odd number of ',' (or else a separating ',' following it gets
 * escaped), or be empty (or else a separating ',' preceding it can
 * escape a separating ',' following it).
 */
static bool is_valid_option_list(const char *optarg)
{
    size_t len = strlen(optarg);
    size_t i;

    if (!optarg[0] || optarg[0] == ',') {
        return false;
    }

    for (i = len; i > 0 && optarg[i - 1] == ','; i--) {
    }
    if ((len - i) % 2) {
        return false;
    }

    return true;
}
2020-04-15 10:49:25 +03:00
|
|
|
static int accumulate_options(char **options, char *optarg)
|
|
|
|
{
|
|
|
|
char *new_options;
|
|
|
|
|
|
|
|
if (!is_valid_option_list(optarg)) {
|
|
|
|
error_report("Invalid option list: %s", optarg);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!*options) {
|
|
|
|
*options = g_strdup(optarg);
|
|
|
|
} else {
|
|
|
|
new_options = g_strdup_printf("%s,%s", *options, optarg);
|
|
|
|
g_free(*options);
|
|
|
|
*options = new_options;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
static QemuOptsList qemu_source_opts = {
    .name = "source",
    .implied_opt_name = "file",
    .head = QTAILQ_HEAD_INITIALIZER(qemu_source_opts.head),
    .desc = {
        { }
    },
};
static int GCC_FMT_ATTR(2, 3) qprintf(bool quiet, const char *fmt, ...)
{
    int ret = 0;
    if (!quiet) {
        va_list args;
        va_start(args, fmt);
        ret = vprintf(fmt, args);
        va_end(args);
    }
    return ret;
}
static int print_block_option_help(const char *filename, const char *fmt)
{
    BlockDriver *drv, *proto_drv;
    QemuOptsList *create_opts = NULL;
    Error *local_err = NULL;

    /* Find driver and parse its options */
    drv = bdrv_find_format(fmt);
    if (!drv) {
        error_report("Unknown file format '%s'", fmt);
        return 1;
    }

    if (!drv->create_opts) {
        error_report("Format driver '%s' does not support image creation", fmt);
        return 1;
    }

    create_opts = qemu_opts_append(create_opts, drv->create_opts);
    if (filename) {
        proto_drv = bdrv_find_protocol(filename, true, &local_err);
        if (!proto_drv) {
            error_report_err(local_err);
            qemu_opts_free(create_opts);
            return 1;
        }
        if (!proto_drv->create_opts) {
            error_report("Protocol driver '%s' does not support image creation",
                         proto_drv->format_name);
            qemu_opts_free(create_opts);
            return 1;
        }
        create_opts = qemu_opts_append(create_opts, proto_drv->create_opts);
    }

    if (filename) {
        printf("Supported options:\n");
    } else {
        printf("Supported %s options:\n", fmt);
    }
    qemu_opts_print_help(create_opts, false);
    qemu_opts_free(create_opts);

    if (!filename) {
        printf("\n"
               "The protocol level may support further options.\n"
               "Specify the target filename to include those options.\n");
    }

    return 0;
}
static BlockBackend *img_open_opts(const char *optstr,
                                   QemuOpts *opts, int flags, bool writethrough,
                                   bool quiet, bool force_share)
{
    QDict *options;
    Error *local_err = NULL;
    BlockBackend *blk;
    options = qemu_opts_to_qdict(opts, NULL);
    if (force_share) {
        if (qdict_haskey(options, BDRV_OPT_FORCE_SHARE)
            && strcmp(qdict_get_str(options, BDRV_OPT_FORCE_SHARE), "on")) {
            error_report("--force-share/-U conflicts with image options");
            qobject_unref(options);
            return NULL;
        }
        qdict_put_str(options, BDRV_OPT_FORCE_SHARE, "on");
    }
    blk = blk_new_open(NULL, NULL, options, flags, &local_err);
    if (!blk) {
        error_reportf_err(local_err, "Could not open '%s': ", optstr);
        return NULL;
    }
    blk_set_enable_write_cache(blk, !writethrough);

    return blk;
}
static BlockBackend *img_open_file(const char *filename,
                                   QDict *options,
                                   const char *fmt, int flags,
                                   bool writethrough, bool quiet,
                                   bool force_share)
{
    BlockBackend *blk;
    Error *local_err = NULL;

    if (!options) {
        options = qdict_new();
    }
    if (fmt) {
        qdict_put_str(options, "driver", fmt);
    }

    if (force_share) {
        qdict_put_bool(options, BDRV_OPT_FORCE_SHARE, true);
    }
    blk = blk_new_open(filename, NULL, options, flags, &local_err);
    if (!blk) {
        error_reportf_err(local_err, "Could not open '%s': ", filename);
        return NULL;
    }
    blk_set_enable_write_cache(blk, !writethrough);

    return blk;
}
static int img_add_key_secrets(void *opaque,
                               const char *name, const char *value,
                               Error **errp)
{
    QDict *options = opaque;

    if (g_str_has_suffix(name, "key-secret")) {
        qdict_put_str(options, name, value);
    }

    return 0;
}
static BlockBackend *img_open(bool image_opts,
                              const char *filename,
                              const char *fmt, int flags, bool writethrough,
                              bool quiet, bool force_share)
{
    BlockBackend *blk;
    if (image_opts) {
        QemuOpts *opts;
        if (fmt) {
            error_report("--image-opts and --format are mutually exclusive");
            return NULL;
        }
        opts = qemu_opts_parse_noisily(qemu_find_opts("source"),
                                       filename, true);
        if (!opts) {
            return NULL;
        }
        blk = img_open_opts(filename, opts, flags, writethrough, quiet,
                            force_share);
    } else {
        blk = img_open_file(filename, NULL, fmt, flags, writethrough, quiet,
                            force_share);
    }
    return blk;
}
static int add_old_style_options(const char *fmt, QemuOpts *opts,
                                 const char *base_filename,
                                 const char *base_fmt)
{
    if (base_filename) {
        if (!qemu_opt_set(opts, BLOCK_OPT_BACKING_FILE, base_filename,
                          NULL)) {
            error_report("Backing file not supported for file format '%s'",
                         fmt);
            return -1;
        }
    }
    if (base_fmt) {
        if (!qemu_opt_set(opts, BLOCK_OPT_BACKING_FMT, base_fmt, NULL)) {
            error_report("Backing file format not supported for file "
                         "format '%s'", fmt);
            return -1;
        }
    }
    return 0;
}
static int64_t cvtnum_full(const char *name, const char *value, int64_t min,
                           int64_t max)
{
    int err;
    uint64_t res;

    err = qemu_strtosz(value, NULL, &res);
    if (err < 0 && err != -ERANGE) {
        error_report("Invalid %s specified. You may use "
                     "k, M, G, T, P or E suffixes for", name);
        error_report("kilobytes, megabytes, gigabytes, terabytes, "
                     "petabytes and exabytes.");
        return err;
    }
    if (err == -ERANGE || res > max || res < min) {
        error_report("Invalid %s specified. Must be between %" PRId64
                     " and %" PRId64 ".", name, min, max);
        return -ERANGE;
    }
    return res;
}

static int64_t cvtnum(const char *name, const char *value)
{
    return cvtnum_full(name, value, 0, INT64_MAX);
}
static int img_create(int argc, char **argv)
{
    int c;
    uint64_t img_size = -1;
    const char *fmt = "raw";
    const char *base_fmt = NULL;
    const char *filename;
    const char *base_filename = NULL;
    char *options = NULL;
    Error *local_err = NULL;
    bool quiet = false;
    int flags = 0;

    for(;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, ":F:b:f:ho:qu",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch(c) {
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            break;
        case 'F':
            base_fmt = optarg;
            break;
        case 'b':
            base_filename = optarg;
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'o':
            if (accumulate_options(&options, optarg) < 0) {
                goto fail;
            }
            break;
        case 'q':
            quiet = true;
            break;
        case 'u':
            flags |= BDRV_O_NO_BACKING;
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        }
    }

    /* Get the filename */
    filename = (optind < argc) ? argv[optind] : NULL;
    if (options && has_help_option(options)) {
        g_free(options);
        return print_block_option_help(filename, fmt);
    }

    if (optind >= argc) {
        error_exit("Expecting image file name");
    }
    optind++;

    /* Get image size, if specified */
    if (optind < argc) {
        int64_t sval;

        sval = cvtnum("image size", argv[optind++]);
        if (sval < 0) {
            goto fail;
        }
        img_size = (uint64_t)sval;
    }
    if (optind != argc) {
        error_exit("Unexpected argument: %s", argv[optind]);
    }

    bdrv_img_create(filename, fmt, base_filename, base_fmt,
                    options, img_size, flags, quiet, &local_err);
    if (local_err) {
        error_reportf_err(local_err, "%s: ", filename);
        goto fail;
    }

    g_free(options);
    return 0;

fail:
    g_free(options);
    return 1;
}
static void dump_json_image_check(ImageCheck *check, bool quiet)
{
    GString *str;
    QObject *obj;
    Visitor *v = qobject_output_visitor_new(&obj);

    visit_type_ImageCheck(v, NULL, &check, &error_abort);
    visit_complete(v, &obj);
    str = qobject_to_json_pretty(obj, true);
    assert(str != NULL);
    qprintf(quiet, "%s\n", str->str);
    qobject_unref(obj);
    visit_free(v);
    g_string_free(str, true);
}
static void dump_human_image_check(ImageCheck *check, bool quiet)
{
    if (!(check->corruptions || check->leaks || check->check_errors)) {
        qprintf(quiet, "No errors were found on the image.\n");
    } else {
        if (check->corruptions) {
            qprintf(quiet, "\n%" PRId64 " errors were found on the image.\n"
                    "Data may be corrupted, or further writes to the image "
                    "may corrupt it.\n",
                    check->corruptions);
        }

        if (check->leaks) {
            qprintf(quiet,
                    "\n%" PRId64 " leaked clusters were found on the image.\n"
                    "This means waste of disk space, but no harm to data.\n",
                    check->leaks);
        }

        if (check->check_errors) {
            qprintf(quiet,
                    "\n%" PRId64
                    " internal errors have occurred during the check.\n",
                    check->check_errors);
        }
    }

    if (check->total_clusters != 0 && check->allocated_clusters != 0) {
        qprintf(quiet, "%" PRId64 "/%" PRId64 " = %0.2f%% allocated, "
                "%0.2f%% fragmented, %0.2f%% compressed clusters\n",
                check->allocated_clusters, check->total_clusters,
                check->allocated_clusters * 100.0 / check->total_clusters,
                check->fragmented_clusters * 100.0 / check->allocated_clusters,
                check->compressed_clusters * 100.0 /
                check->allocated_clusters);
    }

    if (check->image_end_offset) {
        qprintf(quiet,
                "Image end offset: %" PRId64 "\n", check->image_end_offset);
    }
}
static int collect_image_check(BlockDriverState *bs,
                               ImageCheck *check,
                               const char *filename,
                               const char *fmt,
                               int fix)
{
    int ret;
    BdrvCheckResult result;

    ret = bdrv_check(bs, &result, fix);
    if (ret < 0) {
        return ret;
    }

    check->filename = g_strdup(filename);
    check->format = g_strdup(bdrv_get_format_name(bs));
    check->check_errors = result.check_errors;
    check->corruptions = result.corruptions;
    check->has_corruptions = result.corruptions != 0;
    check->leaks = result.leaks;
    check->has_leaks = result.leaks != 0;
    check->corruptions_fixed = result.corruptions_fixed;
    check->has_corruptions_fixed = result.corruptions_fixed != 0;
    check->leaks_fixed = result.leaks_fixed;
    check->has_leaks_fixed = result.leaks_fixed != 0;
    check->image_end_offset = result.image_end_offset;
    check->has_image_end_offset = result.image_end_offset != 0;
    check->total_clusters = result.bfi.total_clusters;
    check->has_total_clusters = result.bfi.total_clusters != 0;
    check->allocated_clusters = result.bfi.allocated_clusters;
    check->has_allocated_clusters = result.bfi.allocated_clusters != 0;
    check->fragmented_clusters = result.bfi.fragmented_clusters;
    check->has_fragmented_clusters = result.bfi.fragmented_clusters != 0;
    check->compressed_clusters = result.bfi.compressed_clusters;
    check->has_compressed_clusters = result.bfi.compressed_clusters != 0;

    return 0;
}
/*
 * Checks an image for consistency. Exit codes:
 *
 *  0 - Check completed, image is good
 *  1 - Check not completed because of internal errors
 *  2 - Check completed, image is corrupted
 *  3 - Check completed, image has leaked clusters, but is good otherwise
 * 63 - Checks are not supported by the image format
 */
2009-04-22 03:11:53 +04:00
|
|
|
static int img_check(int argc, char **argv)
|
|
|
|
{
|
|
|
|
int c, ret;
|
2013-01-28 15:59:47 +04:00
|
|
|
OutputFormat output_format = OFORMAT_HUMAN;
|
2014-07-23 00:58:42 +04:00
|
|
|
const char *filename, *fmt, *output, *cache;
|
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk;
|
2009-04-22 03:11:53 +04:00
|
|
|
BlockDriverState *bs;
|
2012-05-11 18:07:02 +04:00
|
|
|
int fix = 0;
|
2016-03-15 15:01:04 +03:00
|
|
|
int flags = BDRV_O_CHECK;
|
|
|
|
bool writethrough;
|
2014-10-07 15:59:05 +04:00
|
|
|
ImageCheck *check;
|
2013-02-13 12:09:40 +04:00
|
|
|
bool quiet = false;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2017-05-02 19:35:39 +03:00
|
|
|
bool force_share = false;
|
2009-04-22 03:11:53 +04:00
|
|
|
|
|
|
|
fmt = NULL;
|
2013-01-28 15:59:47 +04:00
|
|
|
output = NULL;
|
2014-07-23 00:58:42 +04:00
|
|
|
cache = BDRV_DEFAULT_CACHE;
|
2016-03-15 15:01:04 +03:00
|
|
|
|
2009-04-22 03:11:53 +04:00
|
|
|
for(;;) {
|
2013-01-28 15:59:47 +04:00
|
|
|
int option_index = 0;
|
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"format", required_argument, 0, 'f'},
|
2014-03-24 22:38:54 +04:00
|
|
|
{"repair", required_argument, 0, 'r'},
|
2013-01-28 15:59:47 +04:00
|
|
|
{"output", required_argument, 0, OPTION_OUTPUT},
|
2016-02-17 13:10:17 +03:00
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2017-05-02 19:35:39 +03:00
|
|
|
{"force-share", no_argument, 0, 'U'},
|
2013-01-28 15:59:47 +04:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
2017-05-02 19:35:39 +03:00
|
|
|
c = getopt_long(argc, argv, ":hf:r:T:qU",
|
2013-01-28 15:59:47 +04:00
|
|
|
long_options, &option_index);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (c == -1) {
|
2009-04-22 03:11:53 +04:00
|
|
|
break;
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-04-22 03:11:53 +04:00
|
|
|
switch(c) {
|
2017-03-17 13:45:41 +03:00
|
|
|
case ':':
|
|
|
|
missing_argument(argv[optind - 1]);
|
|
|
|
break;
|
2010-12-06 17:25:40 +03:00
|
|
|
case '?':
|
2017-03-17 13:45:41 +03:00
|
|
|
unrecognized_option(argv[optind - 1]);
|
|
|
|
break;
|
2009-04-22 03:11:53 +04:00
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
2012-05-11 18:07:02 +04:00
|
|
|
case 'r':
|
|
|
|
flags |= BDRV_O_RDWR;
|
|
|
|
|
|
|
|
if (!strcmp(optarg, "leaks")) {
|
|
|
|
fix = BDRV_FIX_LEAKS;
|
|
|
|
} else if (!strcmp(optarg, "all")) {
|
|
|
|
fix = BDRV_FIX_LEAKS | BDRV_FIX_ERRORS;
|
|
|
|
} else {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Unknown option value for -r "
|
|
|
|
"(expecting 'leaks' or 'all'): %s", optarg);
|
2012-05-11 18:07:02 +04:00
|
|
|
}
|
|
|
|
break;
|
2013-01-28 15:59:47 +04:00
|
|
|
case OPTION_OUTPUT:
|
|
|
|
output = optarg;
|
|
|
|
break;
|
2014-07-23 00:58:42 +04:00
|
|
|
case 'T':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:40 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2017-05-02 19:35:39 +03:00
|
|
|
case 'U':
|
|
|
|
force_share = true;
|
|
|
|
break;
|
2021-02-17 14:56:45 +03:00
|
|
|
case OPTION_OBJECT:
|
|
|
|
user_creatable_process_cmdline(optarg);
|
|
|
|
break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2009-04-22 03:11:53 +04:00
|
|
|
}
|
|
|
|
}
|
    if (optind != argc - 1) {
        error_exit("Expecting one image file name");
    }
    filename = argv[optind++];

    if (output && !strcmp(output, "json")) {
        output_format = OFORMAT_JSON;
    } else if (output && !strcmp(output, "human")) {
        output_format = OFORMAT_HUMAN;
    } else if (output) {
        error_report("--output must be used with human or json as argument.");
        return 1;
    }

    ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
    if (ret < 0) {
        error_report("Invalid source cache option: %s", cache);
        return 1;
    }

    blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet,
                   force_share);
    if (!blk) {
        return 1;
    }
    bs = blk_bs(blk);

    check = g_new0(ImageCheck, 1);
    ret = collect_image_check(bs, check, filename, fmt, fix);

    if (ret == -ENOTSUP) {
        error_report("This image format does not support checks");
        ret = 63;
        goto fail;
    }

    if (check->corruptions_fixed || check->leaks_fixed) {
        int corruptions_fixed, leaks_fixed;
        bool has_leaks_fixed, has_corruptions_fixed;

        leaks_fixed = check->leaks_fixed;
        has_leaks_fixed = check->has_leaks_fixed;
        corruptions_fixed = check->corruptions_fixed;
        has_corruptions_fixed = check->has_corruptions_fixed;

        if (output_format == OFORMAT_HUMAN) {
            qprintf(quiet,
                    "The following inconsistencies were found and repaired:\n\n"
                    "    %" PRId64 " leaked clusters\n"
                    "    %" PRId64 " corruptions\n\n"
                    "Double checking the fixed image now...\n",
                    check->leaks_fixed,
                    check->corruptions_fixed);
        }

        qapi_free_ImageCheck(check);
        check = g_new0(ImageCheck, 1);
        ret = collect_image_check(bs, check, filename, fmt, 0);

        check->leaks_fixed = leaks_fixed;
        check->has_leaks_fixed = has_leaks_fixed;
        check->corruptions_fixed = corruptions_fixed;
        check->has_corruptions_fixed = has_corruptions_fixed;
    }

    if (!ret) {
        switch (output_format) {
        case OFORMAT_HUMAN:
            dump_human_image_check(check, quiet);
            break;
        case OFORMAT_JSON:
            dump_json_image_check(check, quiet);
            break;
        }
    }

    if (ret || check->check_errors) {
        if (ret) {
            error_report("Check failed: %s", strerror(-ret));
        } else {
            error_report("Check failed");
        }
        ret = 1;
        goto fail;
    }

    if (check->corruptions) {
        ret = 2;
    } else if (check->leaks) {
        ret = 3;
    } else {
        ret = 0;
    }

fail:
    qapi_free_ImageCheck(check);
/*
 * block: New BlockBackend
 *
 * A block device consists of a frontend device model and a backend.
 * A block backend has a tree of block drivers doing the actual work.
 * The tree is managed by the block layer.
 *
 * We currently use a single abstraction BlockDriverState both for tree
 * nodes and the backend as a whole.  Drawbacks:
 *
 * - Its API includes both stuff that makes sense only at the block
 *   backend level (root of the tree) and stuff that's only for use
 *   within the block layer.  This makes the API bigger and more complex
 *   than necessary.  Moreover, it's not obvious which interfaces are
 *   meant for device models, and which really aren't.
 *
 * - Since device models keep a reference to their backend, the backend
 *   object can't just be destroyed.  But for media change, we need to
 *   replace the tree.  Our solution is to make the BlockDriverState
 *   generic, with actual driver state in a separate object, pointed to
 *   by member opaque.  That lets us replace the tree by deinitializing
 *   and reinitializing its root.  This special need of the root makes
 *   the data structure awkward everywhere in the tree.
 *
 * The general plan is to separate the APIs into "block backend", for use
 * by device models, monitor and whatever other code dealing with block
 * backends, and "block driver", for use by the block layer and whatever
 * other code (if any) dealing with trees and tree nodes.
 *
 * Code dealing with block backends, device models in particular, should
 * become completely oblivious of BlockDriverState.  This should let us
 * clean up both APIs, and the tree data structures.
 *
 * This commit is a first step.  It creates a minimal "block backend"
 * API: type BlockBackend and functions to create, destroy and find them.
 * BlockBackend objects are created and destroyed exactly when root
 * BlockDriverState objects are created and destroyed.  "Root" in the
 * sense of "in bdrv_states".  They're not yet used for anything; that'll
 * come shortly.
 *
 * A root BlockDriverState is created with bdrv_new_root(), so where to
 * create a BlockBackend is obvious.  Where these roots get destroyed
 * isn't always as obvious.  It is obvious in qemu-img.c, qemu-io.c and
 * qemu-nbd.c, and in error paths of blockdev_init(), blk_connect().
 * That leaves destruction of objects successfully created by
 * blockdev_init() and blk_connect().
 *
 * blockdev_init() is used only by drive_new() and qmp_blockdev_add().
 * Objects created by the latter are currently indestructible (see commit
 * 48f364d "blockdev: Refuse to drive_del something added with
 * blockdev-add" and commit 2d246f0 "blockdev: Introduce
 * DriveInfo.enable_auto_del").  Objects created by the former get
 * destroyed by drive_del().  Objects created by blk_connect() get
 * destroyed by blk_disconnect().
 *
 * BlockBackend is reference-counted.  Its reference count never exceeds
 * one so far, but that's going to change.  In drive_del(), the BB's
 * reference count is surely one now.  The BDS's reference count is
 * greater than one when something else is holding a reference, such as
 * a block job.  In this case, the BB is destroyed right away, but the
 * BDS lives on until all extra references get dropped.
 *
 * Signed-off-by: Markus Armbruster <armbru@redhat.com>
 * Reviewed-by: Max Reitz <mreitz@redhat.com>
 * Signed-off-by: Kevin Wolf <kwolf@redhat.com>
 */
    blk_unref(blk);
    return ret;
}

typedef struct CommonBlockJobCBInfo {
    BlockDriverState *bs;
    Error **errp;
} CommonBlockJobCBInfo;

static void common_block_job_cb(void *opaque, int ret)
{
    CommonBlockJobCBInfo *cbi = opaque;

    if (ret < 0) {
        error_setg_errno(cbi->errp, -ret, "Block job failed");
    }
}

static void run_block_job(BlockJob *job, Error **errp)
{
    uint64_t progress_current, progress_total;
    AioContext *aio_context = block_job_get_aio_context(job);
    int ret = 0;

    aio_context_acquire(aio_context);
    job_ref(&job->job);
    do {
        float progress = 0.0f;
        aio_poll(aio_context, true);

        progress_get_snapshot(&job->job.progress, &progress_current,
                              &progress_total);
        if (progress_total) {
            progress = (float)progress_current / progress_total * 100.f;
        }
        qemu_progress_print(progress, 0);
    } while (!job_is_ready(&job->job) && !job_is_completed(&job->job));

    if (!job_is_completed(&job->job)) {
        ret = job_complete_sync(&job->job, errp);
    } else {
        ret = job->job.ret;
    }
    job_unref(&job->job);
    aio_context_release(aio_context);

    /* publish completion progress only when success */
    if (!ret) {
        qemu_progress_print(100.f, 0);
    }
}

static int img_commit(int argc, char **argv)
{
    int c, ret, flags;
    const char *filename, *fmt, *cache, *base;
    BlockBackend *blk;
    BlockDriverState *bs, *base_bs;
    BlockJob *job;
    bool progress = false, quiet = false, drop = false;
    bool writethrough;
    Error *local_err = NULL;
    CommonBlockJobCBInfo cbi;
    bool image_opts = false;
    AioContext *aio_context;
    int64_t rate_limit = 0;

    fmt = NULL;
    cache = BDRV_DEFAULT_CACHE;
    base = NULL;
    for (;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, ":f:ht:b:dpqr:",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch (c) {
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 't':
            cache = optarg;
            break;
        case 'b':
            base = optarg;
            /* -b implies -d */
            drop = true;
            break;
        case 'd':
            drop = true;
            break;
        case 'p':
            progress = true;
            break;
        case 'q':
            quiet = true;
            break;
        case 'r':
            rate_limit = cvtnum("rate limit", optarg);
            if (rate_limit < 0) {
                return 1;
            }
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }

    /* Progress is not shown in Quiet mode */
    if (quiet) {
        progress = false;
    }

    if (optind != argc - 1) {
        error_exit("Expecting one image file name");
    }
    filename = argv[optind++];

    flags = BDRV_O_RDWR | BDRV_O_UNMAP;
    ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
    if (ret < 0) {
        error_report("Invalid cache option: %s", cache);
        return 1;
    }

    blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet,
                   false);
    if (!blk) {
        return 1;
    }
    bs = blk_bs(blk);

    qemu_progress_init(progress, 1.f);
    qemu_progress_print(0.f, 100);

|
|
|
if (base) {
|
|
|
|
base_bs = bdrv_find_backing_image(bs, base);
|
|
|
|
if (!base_bs) {
|
2016-12-01 05:05:08 +03:00
|
|
|
error_setg(&local_err,
|
|
|
|
"Did not find '%s' in the backing chain of '%s'",
|
|
|
|
base, filename);
|
2014-10-24 17:57:40 +04:00
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/* This is different from QMP, which by default uses the deepest file in
|
|
|
|
* the backing chain (i.e., the very base); however, the traditional
|
|
|
|
* behavior of qemu-img commit is using the immediate backing file. */
|
2019-06-12 20:00:30 +03:00
|
|
|
base_bs = bdrv_backing_chain_next(bs);
|
2014-10-24 17:57:40 +04:00
|
|
|
if (!base_bs) {
|
|
|
|
error_setg(&local_err, "Image does not have a backing file");
|
|
|
|
goto done;
|
|
|
|
}
|
2014-10-24 17:57:37 +04:00
|
|
|
}
|
|
|
|
|
    cbi = (CommonBlockJobCBInfo){
        .errp = &local_err,
        .bs   = bs,
    };

    aio_context = bdrv_get_aio_context(bs);
    aio_context_acquire(aio_context);
    commit_active_start("commit", bs, base_bs, JOB_DEFAULT, rate_limit,
                        BLOCKDEV_ON_ERROR_REPORT, NULL, common_block_job_cb,
                        &cbi, false, &local_err);
    aio_context_release(aio_context);
    if (local_err) {
        goto done;
    }

    /* When the block job completes, the BlockBackend reference will point to
     * the old backing file. In order to avoid that the top image is already
     * deleted, so we can still empty it afterwards, increment the reference
     * counter here preemptively. */
    if (!drop) {
        bdrv_ref(bs);
    }

    job = block_job_get("commit");
    assert(job);
    run_block_job(job, &local_err);
    if (local_err) {
        goto unref_backing;
    }

    if (!drop) {
        BlockBackend *old_backing_blk;

        old_backing_blk = blk_new_with_bs(bs, BLK_PERM_WRITE, BLK_PERM_ALL,
                                          &local_err);
        if (!old_backing_blk) {
            goto unref_backing;
        }
        ret = blk_make_empty(old_backing_blk, &local_err);
        blk_unref(old_backing_blk);
        if (ret == -ENOTSUP) {
            error_free(local_err);
            local_err = NULL;
        } else if (ret < 0) {
            goto unref_backing;
        }
    }

unref_backing:
    if (!drop) {
        bdrv_unref(bs);
    }

done:
    qemu_progress_end();

    blk_unref(blk);

    if (local_err) {
        error_report_err(local_err);
        return 1;
    }

    qprintf(quiet, "Image committed.\n");
    return 0;
}

/*
 * Returns -1 if 'buf' contains only zeroes, otherwise the byte index
 * of the first sector boundary within buf where the sector contains a
 * non-zero byte.  This function is robust to a buffer that is not
 * sector-aligned.
 */
static int64_t find_nonzero(const uint8_t *buf, int64_t n)
{
    int64_t i;
    int64_t end = QEMU_ALIGN_DOWN(n, BDRV_SECTOR_SIZE);

    for (i = 0; i < end; i += BDRV_SECTOR_SIZE) {
        if (!buffer_is_zero(buf + i, BDRV_SECTOR_SIZE)) {
            return i;
        }
    }
    if (i < n && !buffer_is_zero(buf + i, n - end)) {
        return i;
    }
    return -1;
}

/*
 * Returns true iff the first sector pointed to by 'buf' contains at least
 * a non-NUL byte.
 *
 * 'pnum' is set to the number of sectors (including and immediately following
 * the first one) that are known to be in the same allocated/unallocated state.
 * The function will try to align the end offset to alignment boundaries so
 * that the request will at least end aligned and consecutive requests will
 * also start at an aligned offset.
 */
static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum,
                                int64_t sector_num, int alignment)
{
    bool is_zero;
    int i, tail;

    if (n <= 0) {
        *pnum = 0;
        return 0;
    }
    is_zero = buffer_is_zero(buf, BDRV_SECTOR_SIZE);
    for (i = 1; i < n; i++) {
        buf += BDRV_SECTOR_SIZE;
        if (is_zero != buffer_is_zero(buf, BDRV_SECTOR_SIZE)) {
            break;
        }
    }

    tail = (sector_num + i) & (alignment - 1);
    if (tail) {
        if (is_zero && i <= tail) {
            /* treat unallocated areas which only consist
             * of a small tail as allocated. */
            is_zero = false;
        }
        if (!is_zero) {
            /* align up end offset of allocated areas. */
            i += alignment - tail;
            i = MIN(i, n);
        } else {
            /* align down end offset of zero areas. */
            i -= tail;
        }
    }
    *pnum = i;
    return !is_zero;
}

/*
 * Like is_allocated_sectors, but if the buffer starts with a used sector,
 * up to 'min' consecutive sectors containing zeros are ignored. This avoids
 * breaking up write requests for only small sparse areas.
 */
static int is_allocated_sectors_min(const uint8_t *buf, int n, int *pnum,
                                    int min, int64_t sector_num, int alignment)
{
    int ret;
    int num_checked, num_used;

    if (n < min) {
        min = n;
    }

    ret = is_allocated_sectors(buf, n, pnum, sector_num, alignment);
    if (!ret) {
        return ret;
    }

    num_used = *pnum;
    buf += BDRV_SECTOR_SIZE * *pnum;
    n -= *pnum;
    sector_num += *pnum;
    num_checked = num_used;

    while (n > 0) {
        ret = is_allocated_sectors(buf, n, pnum, sector_num, alignment);

        buf += BDRV_SECTOR_SIZE * *pnum;
        n -= *pnum;
        sector_num += *pnum;
        num_checked += *pnum;
        if (ret) {
            num_used = num_checked;
        } else if (*pnum >= min) {
            break;
        }
    }

    *pnum = num_used;
    return 1;
}

/*
 * Compares two buffers sector by sector. Returns 0 if the first
 * sector of each buffer matches, non-zero otherwise.
 *
 * pnum is set to the sector-aligned size of the buffer prefix that
 * has the same matching status as the first sector.
 */
static int compare_buffers(const uint8_t *buf1, const uint8_t *buf2,
                           int64_t bytes, int64_t *pnum)
{
    bool res;
    int64_t i = MIN(bytes, BDRV_SECTOR_SIZE);

    assert(bytes > 0);

    res = !!memcmp(buf1, buf2, i);
    while (i < bytes) {
        int64_t len = MIN(bytes - i, BDRV_SECTOR_SIZE);

        if (!!memcmp(buf1 + i, buf2 + i, len) != res) {
            break;
        }
        i += len;
    }

    *pnum = i;
    return res;
}


#define IO_BUF_SIZE (2 * MiB)

/*
 * Check if passed sectors are empty (not allocated or contain only 0 bytes)
 *
 * Intended for use by 'qemu-img compare': Returns 0 in case sectors are
 * filled with 0, 1 if sectors contain non-zero data (this is a comparison
 * failure), and 4 on error (the exit status for read errors), after emitting
 * an error message.
 *
 * @param blk:  BlockBackend for the image
 * @param offset: Starting offset to check
 * @param bytes: Number of bytes to check
 * @param filename: Name of disk file we are checking (logging purpose)
 * @param buffer: Allocated buffer for storing read data
 * @param quiet: Flag for quiet mode
 */
static int check_empty_sectors(BlockBackend *blk, int64_t offset,
                               int64_t bytes, const char *filename,
                               uint8_t *buffer, bool quiet)
{
    int ret = 0;
    int64_t idx;

    ret = blk_pread(blk, offset, buffer, bytes);
    if (ret < 0) {
        error_report("Error while reading offset %" PRId64 " of %s: %s",
                     offset, filename, strerror(-ret));
        return 4;
    }
    idx = find_nonzero(buffer, bytes);
    if (idx >= 0) {
        qprintf(quiet, "Content mismatch at offset %" PRId64 "!\n",
                offset + idx);
        return 1;
    }

    return 0;
}

/*
 * Compares two images. Exit codes:
 *
 * 0 - Images are identical or the requested help was printed
 * 1 - Images differ
 * >1 - Error occurred
 */
static int img_compare(int argc, char **argv)
{
    const char *fmt1 = NULL, *fmt2 = NULL, *cache, *filename1, *filename2;
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk1, *blk2;
|
2013-02-13 12:09:41 +04:00
|
|
|
BlockDriverState *bs1, *bs2;
|
2017-10-12 06:47:16 +03:00
|
|
|
int64_t total_size1, total_size2;
|
2013-02-13 12:09:41 +04:00
|
|
|
uint8_t *buf1 = NULL, *buf2 = NULL;
|
block: Convert bdrv_get_block_status_above() to bytes
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the name of the function from bdrv_get_block_status_above()
to bdrv_block_status_above() ensures that the compiler enforces that
all callers are updated. Likewise, since it a byte interface allows
an offset mapping that might not be sector aligned, split the mapping
out of the return value and into a pass-by-reference parameter. For
now, the io.c layer still assert()s that all uses are sector-aligned,
but that can be relaxed when a later patch implements byte-based
block status in the drivers.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), plus
updates for the new split return interface. But some code,
particularly bdrv_block_status(), gets a lot simpler because it no
longer has to mess with sectors. Likewise, mirror code no longer
computes s->granularity >> BDRV_SECTOR_BITS, and can therefore drop
an assertion about alignment because the loop no longer depends on
alignment (never mind that we don't really have a driver that
reports sub-sector alignments, so it's not really possible to test
the effect of sub-sector mirroring). Fix a neighboring assertion to
use is_power_of_2 while there.
For ease of review, bdrv_get_block_status() was tackled separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-12 06:47:08 +03:00
|
|
|
int64_t pnum1, pnum2;
|
2013-02-13 12:09:41 +04:00
|
|
|
int allocated1, allocated2;
|
|
|
|
int ret = 0; /* return value - 0 Ident, 1 Different, >1 Error */
|
|
|
|
bool progress = false, quiet = false, strict = false;
|
2014-07-23 00:58:42 +04:00
|
|
|
int flags;
|
2016-03-15 15:01:04 +03:00
|
|
|
bool writethrough;
|
2017-10-12 06:47:16 +03:00
|
|
|
int64_t total_size;
|
|
|
|
int64_t offset = 0;
|
|
|
|
int64_t chunk;
|
2017-10-12 06:47:14 +03:00
|
|
|
int c;
|
2013-02-13 12:09:41 +04:00
|
|
|
uint64_t progress_base;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2017-05-02 19:35:39 +03:00
|
|
|
bool force_share = false;
|
2013-02-13 12:09:41 +04:00
|
|
|
|
2014-07-23 00:58:42 +04:00
|
|
|
cache = BDRV_DEFAULT_CACHE;
|
2013-02-13 12:09:41 +04:00
|
|
|
for (;;) {
|
2016-02-17 13:10:17 +03:00
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2017-05-02 19:35:39 +03:00
|
|
|
{"force-share", no_argument, 0, 'U'},
|
2016-02-17 13:10:17 +03:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
2017-05-02 19:35:39 +03:00
|
|
|
c = getopt_long(argc, argv, ":hf:F:T:pqsU",
|
2016-02-17 13:10:17 +03:00
|
|
|
long_options, NULL);
|
2013-02-13 12:09:41 +04:00
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
switch (c) {
|
2017-03-17 13:45:41 +03:00
|
|
|
case ':':
|
|
|
|
missing_argument(argv[optind - 1]);
|
|
|
|
break;
|
2013-02-13 12:09:41 +04:00
|
|
|
case '?':
|
2017-03-17 13:45:41 +03:00
|
|
|
unrecognized_option(argv[optind - 1]);
|
|
|
|
break;
|
2013-02-13 12:09:41 +04:00
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt1 = optarg;
|
|
|
|
break;
|
|
|
|
case 'F':
|
|
|
|
fmt2 = optarg;
|
|
|
|
break;
|
2014-07-23 00:58:42 +04:00
|
|
|
case 'T':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:41 +04:00
|
|
|
case 'p':
|
|
|
|
progress = true;
|
|
|
|
break;
|
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
|
|
|
case 's':
|
|
|
|
strict = true;
|
|
|
|
break;
|
2017-05-02 19:35:39 +03:00
|
|
|
case 'U':
|
|
|
|
force_share = true;
|
|
|
|
break;
|
2021-02-17 14:56:45 +03:00
|
|
|
case OPTION_OBJECT:
|
|
|
|
{
|
|
|
|
Error *local_err = NULL;
|
|
|
|
|
|
|
|
if (!user_creatable_add_from_str(optarg, &local_err)) {
|
|
|
|
if (local_err) {
|
|
|
|
error_report_err(local_err);
|
|
|
|
exit(2);
|
|
|
|
} else {
|
|
|
|
/* Help was printed */
|
|
|
|
exit(EXIT_SUCCESS);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
}
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2013-02-13 12:09:41 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Progress is not shown in Quiet mode */
|
|
|
|
if (quiet) {
|
|
|
|
progress = false;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2013-08-05 12:53:04 +04:00
|
|
|
if (optind != argc - 2) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Expecting two image file names");
|
2013-02-13 12:09:41 +04:00
|
|
|
}
|
|
|
|
filename1 = argv[optind++];
|
|
|
|
filename2 = argv[optind++];
|
|
|
|
|
2014-08-26 22:17:55 +04:00
|
|
|
/* Initialize before goto out */
|
|
|
|
qemu_progress_init(progress, 2.0);
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
flags = 0;
|
|
|
|
ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
|
2014-07-23 00:58:42 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid source cache option: %s", cache);
|
|
|
|
ret = 2;
|
|
|
|
goto out3;
|
|
|
|
}
|
|
|
|
|
2017-05-02 19:35:39 +03:00
|
|
|
blk1 = img_open(image_opts, filename1, fmt1, flags, writethrough, quiet,
|
|
|
|
force_share);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk1) {
|
2013-02-13 12:09:41 +04:00
|
|
|
ret = 2;
|
2014-10-07 15:59:05 +04:00
|
|
|
goto out3;
|
2013-02-13 12:09:41 +04:00
|
|
|
}
|
|
|
|
|
2017-05-02 19:35:39 +03:00
|
|
|
blk2 = img_open(image_opts, filename2, fmt2, flags, writethrough, quiet,
|
|
|
|
force_share);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk2) {
|
2013-02-13 12:09:41 +04:00
|
|
|
ret = 2;
|
2014-10-07 15:59:05 +04:00
|
|
|
goto out2;
|
2013-02-13 12:09:41 +04:00
|
|
|
}
|
2016-02-17 13:10:20 +03:00
|
|
|
bs1 = blk_bs(blk1);
|
2014-10-07 15:59:05 +04:00
|
|
|
bs2 = blk_bs(blk2);
|
2013-02-13 12:09:41 +04:00
|
|
|
|
2015-02-05 21:58:18 +03:00
|
|
|
buf1 = blk_blockalign(blk1, IO_BUF_SIZE);
|
|
|
|
buf2 = blk_blockalign(blk2, IO_BUF_SIZE);
|
2017-10-12 06:47:16 +03:00
|
|
|
total_size1 = blk_getlength(blk1);
|
|
|
|
if (total_size1 < 0) {
|
2014-06-26 15:23:25 +04:00
|
|
|
error_report("Can't get size of %s: %s",
|
2017-10-12 06:47:16 +03:00
|
|
|
filename1, strerror(-total_size1));
|
2014-06-26 15:23:25 +04:00
|
|
|
ret = 4;
|
|
|
|
goto out;
|
|
|
|
}
|
2017-10-12 06:47:16 +03:00
|
|
|
total_size2 = blk_getlength(blk2);
|
|
|
|
if (total_size2 < 0) {
|
2014-06-26 15:23:25 +04:00
|
|
|
error_report("Can't get size of %s: %s",
|
2017-10-12 06:47:16 +03:00
|
|
|
filename2, strerror(-total_size2));
|
2014-06-26 15:23:25 +04:00
|
|
|
ret = 4;
|
|
|
|
goto out;
|
|
|
|
}
|
2017-10-12 06:47:16 +03:00
|
|
|
total_size = MIN(total_size1, total_size2);
|
|
|
|
progress_base = MAX(total_size1, total_size2);
|
2013-02-13 12:09:41 +04:00
|
|
|
|
|
|
|
qemu_progress_print(0, 100);
|
|
|
|
|
2017-10-12 06:47:16 +03:00
|
|
|
if (strict && total_size1 != total_size2) {
|
2013-02-13 12:09:41 +04:00
|
|
|
ret = 1;
|
|
|
|
qprintf(quiet, "Strict mode: Image size mismatch!\n");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2017-10-12 06:47:16 +03:00
|
|
|
while (offset < total_size) {
|
block: Convert bdrv_get_block_status_above() to bytes
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the name of the function from bdrv_get_block_status_above()
to bdrv_block_status_above() ensures that the compiler enforces that
all callers are updated. Likewise, since it a byte interface allows
an offset mapping that might not be sector aligned, split the mapping
out of the return value and into a pass-by-reference parameter. For
now, the io.c layer still assert()s that all uses are sector-aligned,
but that can be relaxed when a later patch implements byte-based
block status in the drivers.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), plus
updates for the new split return interface. But some code,
particularly bdrv_block_status(), gets a lot simpler because it no
longer has to mess with sectors. Likewise, mirror code no longer
computes s->granularity >> BDRV_SECTOR_BITS, and can therefore drop
an assertion about alignment because the loop no longer depends on
alignment (never mind that we don't really have a driver that
reports sub-sector alignments, so it's not really possible to test
the effect of sub-sector mirroring). Fix a neighboring assertion to
use is_power_of_2 while there.
For ease of review, bdrv_get_block_status() was tackled separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-12 06:47:08 +03:00
|
|
|
int status1, status2;
|
2016-01-26 06:58:48 +03:00
|
|
|
|
2017-10-12 06:47:16 +03:00
|
|
|
status1 = bdrv_block_status_above(bs1, NULL, offset,
|
|
|
|
total_size1 - offset, &pnum1, NULL,
|
|
|
|
NULL);
|
2016-01-13 11:37:41 +03:00
|
|
|
if (status1 < 0) {
|
2013-02-13 12:09:41 +04:00
|
|
|
ret = 3;
|
|
|
|
error_report("Sector allocation test failed for %s", filename1);
|
|
|
|
goto out;
|
|
|
|
}
|
2016-01-13 11:37:41 +03:00
|
|
|
allocated1 = status1 & BDRV_BLOCK_ALLOCATED;
|
2013-02-13 12:09:41 +04:00
|
|
|
|
2017-10-12 06:47:16 +03:00
|
|
|
status2 = bdrv_block_status_above(bs2, NULL, offset,
|
|
|
|
total_size2 - offset, &pnum2, NULL,
|
|
|
|
NULL);
|
2016-01-13 11:37:41 +03:00
|
|
|
if (status2 < 0) {
|
2013-02-13 12:09:41 +04:00
|
|
|
ret = 3;
|
|
|
|
error_report("Sector allocation test failed for %s", filename2);
|
|
|
|
goto out;
|
|
|
|
}
|
2016-01-13 11:37:41 +03:00
|
|
|
allocated2 = status2 & BDRV_BLOCK_ALLOCATED;
|
2017-10-12 06:47:09 +03:00
|
|
|
|
|
|
|
assert(pnum1 && pnum2);
|
2017-10-12 06:47:16 +03:00
|
|
|
chunk = MIN(pnum1, pnum2);
|
2013-02-13 12:09:41 +04:00
|
|
|
|
2016-01-13 11:37:41 +03:00
|
|
|
if (strict) {
|
block: Convert bdrv_get_block_status_above() to bytes
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the name of the function from bdrv_get_block_status_above()
to bdrv_block_status_above() ensures that the compiler enforces that
all callers are updated. Likewise, since it a byte interface allows
an offset mapping that might not be sector aligned, split the mapping
out of the return value and into a pass-by-reference parameter. For
now, the io.c layer still assert()s that all uses are sector-aligned,
but that can be relaxed when a later patch implements byte-based
block status in the drivers.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), plus
updates for the new split return interface. But some code,
particularly bdrv_block_status(), gets a lot simpler because it no
longer has to mess with sectors. Likewise, mirror code no longer
computes s->granularity >> BDRV_SECTOR_BITS, and can therefore drop
an assertion about alignment because the loop no longer depends on
alignment (never mind that we don't really have a driver that
reports sub-sector alignments, so it's not really possible to test
the effect of sub-sector mirroring). Fix a neighboring assertion to
use is_power_of_2 while there.
For ease of review, bdrv_get_block_status() was tackled separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-12 06:47:08 +03:00
|
|
|
if (status1 != status2) {
|
2016-01-13 11:37:41 +03:00
|
|
|
ret = 1;
|
|
|
|
qprintf(quiet, "Strict mode: Offset %" PRId64
|
2017-10-12 06:47:16 +03:00
|
|
|
" block status mismatch!\n", offset);
|
2016-01-13 11:37:41 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if ((status1 & BDRV_BLOCK_ZERO) && (status2 & BDRV_BLOCK_ZERO)) {
|
2017-10-12 06:47:09 +03:00
|
|
|
/* nothing to do */
|
2016-01-13 11:37:41 +03:00
|
|
|
} else if (allocated1 == allocated2) {
|
2013-02-13 12:09:41 +04:00
|
|
|
if (allocated1) {
|
2017-10-12 06:47:14 +03:00
|
|
|
int64_t pnum;
|
|
|
|
|
2017-10-12 06:47:16 +03:00
|
|
|
chunk = MIN(chunk, IO_BUF_SIZE);
|
|
|
|
ret = blk_pread(blk1, offset, buf1, chunk);
|
2013-02-13 12:09:41 +04:00
|
|
|
if (ret < 0) {
|
2017-10-12 06:47:16 +03:00
|
|
|
error_report("Error while reading offset %" PRId64
|
|
|
|
" of %s: %s",
|
|
|
|
offset, filename1, strerror(-ret));
|
2013-02-13 12:09:41 +04:00
|
|
|
ret = 4;
|
|
|
|
goto out;
|
|
|
|
}
|
2017-10-12 06:47:16 +03:00
|
|
|
ret = blk_pread(blk2, offset, buf2, chunk);
|
2013-02-13 12:09:41 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Error while reading offset %" PRId64
|
2017-10-12 06:47:16 +03:00
|
|
|
" of %s: %s",
|
|
|
|
offset, filename2, strerror(-ret));
|
2013-02-13 12:09:41 +04:00
|
|
|
ret = 4;
|
|
|
|
goto out;
|
|
|
|
}
|
2017-10-12 06:47:16 +03:00
|
|
|
ret = compare_buffers(buf1, buf2, chunk, &pnum);
|
|
|
|
if (ret || pnum != chunk) {
|
2013-02-13 12:09:41 +04:00
|
|
|
qprintf(quiet, "Content mismatch at offset %" PRId64 "!\n",
|
2017-10-12 06:47:16 +03:00
|
|
|
offset + (ret ? 0 : pnum));
|
2013-11-13 16:26:49 +04:00
|
|
|
ret = 1;
|
2013-02-13 12:09:41 +04:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
2017-10-12 06:47:16 +03:00
|
|
|
chunk = MIN(chunk, IO_BUF_SIZE);
|
2013-02-13 12:09:41 +04:00
|
|
|
if (allocated1) {
|
2017-10-12 06:47:16 +03:00
|
|
|
ret = check_empty_sectors(blk1, offset, chunk,
|
2013-02-13 12:09:41 +04:00
|
|
|
filename1, buf1, quiet);
|
|
|
|
} else {
|
2017-10-12 06:47:16 +03:00
|
|
|
ret = check_empty_sectors(blk2, offset, chunk,
|
2013-02-13 12:09:41 +04:00
|
|
|
filename2, buf1, quiet);
|
|
|
|
}
|
|
|
|
if (ret) {
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
2017-10-12 06:47:16 +03:00
|
|
|
offset += chunk;
|
|
|
|
qemu_progress_print(((float) chunk / progress_base) * 100, 100);
|
2013-02-13 12:09:41 +04:00
|
|
|
}
|
|
|
|
|
2017-10-12 06:47:16 +03:00
|
|
|
if (total_size1 != total_size2) {
|
2015-02-05 21:58:18 +03:00
|
|
|
BlockBackend *blk_over;
|
2013-02-13 12:09:41 +04:00
|
|
|
const char *filename_over;
|
|
|
|
|
|
|
|
qprintf(quiet, "Warning: Image size mismatch!\n");
|
2017-10-12 06:47:16 +03:00
|
|
|
if (total_size1 > total_size2) {
|
2015-02-05 21:58:18 +03:00
|
|
|
blk_over = blk1;
|
2013-02-13 12:09:41 +04:00
|
|
|
filename_over = filename1;
|
|
|
|
} else {
|
2015-02-05 21:58:18 +03:00
|
|
|
blk_over = blk2;
|
2013-02-13 12:09:41 +04:00
|
|
|
filename_over = filename2;
|
|
|
|
}
|
|
|
|
|
2017-10-12 06:47:16 +03:00
|
|
|
while (offset < progress_base) {
|
|
|
|
ret = bdrv_block_status_above(blk_bs(blk_over), NULL, offset,
|
|
|
|
progress_base - offset, &chunk,
|
|
|
|
NULL, NULL);
|
2013-02-13 12:09:41 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
ret = 3;
|
|
|
|
error_report("Sector allocation test failed for %s",
|
|
|
|
filename_over);
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
}
|
2017-10-12 06:47:10 +03:00
|
|
|
if (ret & BDRV_BLOCK_ALLOCATED && !(ret & BDRV_BLOCK_ZERO)) {
|
2017-10-12 06:47:16 +03:00
|
|
|
chunk = MIN(chunk, IO_BUF_SIZE);
|
|
|
|
ret = check_empty_sectors(blk_over, offset, chunk,
|
2013-02-13 12:09:41 +04:00
|
|
|
filename_over, buf1, quiet);
|
|
|
|
if (ret) {
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
2017-10-12 06:47:16 +03:00
|
|
|
offset += chunk;
|
|
|
|
qemu_progress_print(((float) chunk / progress_base) * 100, 100);
|
2013-02-13 12:09:41 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
qprintf(quiet, "Images are identical.\n");
|
|
|
|
ret = 0;
|
|
|
|
|
|
|
|
out:
|
|
|
|
qemu_vfree(buf1);
|
|
|
|
qemu_vfree(buf2);
|
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk2);
|
2013-02-13 12:09:41 +04:00
|
|
|
out2:
|
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk1);
|
2013-02-13 12:09:41 +04:00
|
|
|
out3:
|
|
|
|
qemu_progress_end();
|
|
|
|
return ret;
|
|
|
|
}

/* Convenience wrapper around qmp_block_dirty_bitmap_merge */
static void do_dirty_bitmap_merge(const char *dst_node, const char *dst_name,
                                  const char *src_node, const char *src_name,
                                  Error **errp)
{
    BlockDirtyBitmapMergeSource *merge_src;
    BlockDirtyBitmapMergeSourceList *list = NULL;

    merge_src = g_new0(BlockDirtyBitmapMergeSource, 1);
    merge_src->type = QTYPE_QDICT;
    merge_src->u.external.node = g_strdup(src_node);
    merge_src->u.external.name = g_strdup(src_name);
    QAPI_LIST_PREPEND(list, merge_src);
    qmp_block_dirty_bitmap_merge(dst_node, dst_name, list, errp);
    qapi_free_BlockDirtyBitmapMergeSourceList(list);
}
|
|
|
|
|
qemu-img convert: Rewrite copying logic
The implementation of qemu-img convert is (a) messy, (b) buggy, and
(c) less efficient than possible. The changes required to beat some
sense into it are massive enough that incremental changes would only
make my and the reviewers' life harder. So throw it away and reimplement
it from scratch.
Let me give some examples what I mean by messy, buggy and inefficient:
(a) The copying logic of qemu-img convert has two separate branches for
compressed and normal target images, which roughly do the same -
except for a little code that handles actual differences between
compressed and uncompressed images, and much more code that
implements just a different set of optimisations and bugs. This is
unnecessary code duplication, and makes the code for compressed
output (unsurprisingly) suffer from bitrot.
The code for uncompressed ouput is run twice to count the the total
length for the progress bar. In the first run it just takes a
shortcut and runs only half the loop, and when it's done, it toggles
a boolean, jumps out of the loop with a backwards goto and starts
over. Works, but pretty is something different.
(b) Converting while keeping a backing file (-B option) is broken in
several ways. This includes not writing to the image file if the
input has zero clusters or data filled with zeros (ignoring that the
backing file will be visible instead).
It also doesn't correctly limit every iteration of the copy loop to
sectors of the same status so that too many sectors may be copied to
in the target image. For -B this gives an unexpected result, for
other images it just does more work than necessary.
Conversion with a compressed target completely ignores any target
backing file.
(c) qemu-img convert skips reading and writing an area if it knows from
metadata that copying isn't needed (except for the bug mentioned
above that ignores a status change in some cases). It does, however,
read from the source even if it knows that it will read zeros, and
then search for non-zero bytes in the read buffer, if it's possible
that a write might be needed.
This reimplementation of the copying core reorganises the code to remove
the duplication and have a much more obvious code flow, by essentially
splitting the copy iteration loop into three parts:
1. Find the number of contiguous sectors of the same status at the
current offset (This can also be called in a separate loop before the
copying loop in order to determine the total sectors for the progress
bar.)
2. Read sectors. If the status implies that there is no data there to
read (zero or unallocated cluster), don't do anything.
3. Write sectors depending on the status. If it's data, write it. If
we want the backing file to be visible (with -B), don't write it. If
it's zeroed, skip it if you can, otherwise use bdrv_write_zeroes() to
optimise the write at least where possible.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2015-03-19 15:33:32 +03:00
|
|
|
enum ImgConvertBlockStatus {
|
|
|
|
BLK_DATA,
|
|
|
|
BLK_ZERO,
|
|
|
|
BLK_BACKING_FILE,
|
|
|
|
};
|
|
|
|
|
2017-02-28 15:40:07 +03:00
|
|
|
#define MAX_COROUTINES 16
|
2020-10-20 17:47:44 +03:00
|
|
|
#define CONVERT_THROTTLE_GROUP "img_convert"
|
2017-02-28 15:40:07 +03:00
|
|
|
|
qemu-img convert: Rewrite copying logic
The implementation of qemu-img convert is (a) messy, (b) buggy, and
(c) less efficient than possible. The changes required to beat some
sense into it are massive enough that incremental changes would only
qemu-img convert: Rewrite copying logic

The implementation of qemu-img convert is (a) messy, (b) buggy, and
(c) less efficient than possible. The changes required to beat some
sense into it are massive enough that incremental changes would only
make my life and the reviewers' lives harder. So throw it away and
reimplement it from scratch.

Some examples of what I mean by messy, buggy and inefficient:

(a) The copying logic of qemu-img convert has two separate branches for
    compressed and normal target images, which roughly do the same -
    except for a little code that handles actual differences between
    compressed and uncompressed images, and much more code that
    implements just a different set of optimisations and bugs. This is
    unnecessary code duplication, and it makes the code for compressed
    output (unsurprisingly) suffer from bitrot.

    The code for uncompressed output is run twice to count the total
    length for the progress bar. In the first run it just takes a
    shortcut and runs only half the loop, and when it's done, it toggles
    a boolean, jumps out of the loop with a backwards goto and starts
    over. It works, but pretty is something else.

(b) Converting while keeping a backing file (-B option) is broken in
    several ways. This includes not writing to the image file if the
    input has zero clusters or data filled with zeros (ignoring that the
    backing file would be visible instead).

    It also doesn't correctly limit every iteration of the copy loop to
    sectors of the same status, so that too many sectors may be copied
    to the target image. For -B this gives an unexpected result; for
    other images it just does more work than necessary.

    Conversion with a compressed target completely ignores any target
    backing file.

(c) qemu-img convert skips reading and writing an area if it knows from
    metadata that copying isn't needed (except for the bug mentioned
    above that ignores a status change in some cases). It does, however,
    read from the source even if it knows that it will read zeros, and
    then searches for non-zero bytes in the read buffer if it's possible
    that a write might be needed.

This reimplementation of the copying core reorganises the code to remove
the duplication and have a much more obvious code flow, by essentially
splitting the copy iteration loop into three parts:

1. Find the number of contiguous sectors of the same status at the
   current offset. (This can also be called in a separate loop before
   the copying loop in order to determine the total sectors for the
   progress bar.)
2. Read sectors. If the status implies that there is no data there to
   read (zero or unallocated cluster), don't do anything.
3. Write sectors depending on the status. If it's data, write it. If
   we want the backing file to be visible (with -B), don't write it. If
   it's zeroed, skip it if you can, otherwise use bdrv_write_zeroes() to
   optimise the write at least where possible.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>

typedef struct ImgConvertState {
    BlockBackend **src;
    int64_t *src_sectors;
    int *src_alignment;
    int src_num;
    int64_t total_sectors;
    int64_t allocated_sectors;
    int64_t allocated_done;
    int64_t sector_num;
    int64_t wr_offs;
    enum ImgConvertBlockStatus status;
    int64_t sector_next_status;
    BlockBackend *target;
    bool has_zero_init;
    bool compressed;
    bool target_is_new;
    bool target_has_backing;
    int64_t target_backing_sectors; /* negative if unknown */
    bool wr_in_order;
    bool copy_range;
    bool salvage;
    bool quiet;
    int min_sparse;
    int alignment;
    size_t cluster_sectors;
    size_t buf_sectors;
    long num_coroutines;
    int running_coroutines;
    Coroutine *co[MAX_COROUTINES];
    int64_t wait_sector_num[MAX_COROUTINES];
    CoMutex lock;
    int ret;
} ImgConvertState;

static void convert_select_part(ImgConvertState *s, int64_t sector_num,
                                int *src_cur, int64_t *src_cur_offset)
{
    *src_cur = 0;
    *src_cur_offset = 0;
    while (sector_num - *src_cur_offset >= s->src_sectors[*src_cur]) {
        *src_cur_offset += s->src_sectors[*src_cur];
        (*src_cur)++;
        assert(*src_cur < s->src_num);
    }
}
|
|
|
|
|
|
|
|
static int convert_iteration_sectors(ImgConvertState *s, int64_t sector_num)
|
|
|
|
{
|
block: Convert bdrv_get_block_status_above() to bytes
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the name of the function from bdrv_get_block_status_above()
to bdrv_block_status_above() ensures that the compiler enforces that
all callers are updated. Likewise, since it a byte interface allows
an offset mapping that might not be sector aligned, split the mapping
out of the return value and into a pass-by-reference parameter. For
now, the io.c layer still assert()s that all uses are sector-aligned,
but that can be relaxed when a later patch implements byte-based
block status in the drivers.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), plus
updates for the new split return interface. But some code,
particularly bdrv_block_status(), gets a lot simpler because it no
longer has to mess with sectors. Likewise, mirror code no longer
computes s->granularity >> BDRV_SECTOR_BITS, and can therefore drop
an assertion about alignment because the loop no longer depends on
alignment (never mind that we don't really have a driver that
reports sub-sector alignments, so it's not really possible to test
the effect of sub-sector mirroring). Fix a neighboring assertion to
use is_power_of_2 while there.
For ease of review, bdrv_get_block_status() was tackled separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-12 06:47:08 +03:00
|
|
|
int64_t src_cur_offset;
|
|
|
|
int ret, n, src_cur;
|
2018-05-01 19:57:49 +03:00
|
|
|
bool post_backing_zero = false;
|
qemu-img convert: Rewrite copying logic
The implementation of qemu-img convert is (a) messy, (b) buggy, and
(c) less efficient than possible. The changes required to beat some
sense into it are massive enough that incremental changes would only
make my and the reviewers' life harder. So throw it away and reimplement
it from scratch.
Let me give some examples what I mean by messy, buggy and inefficient:
(a) The copying logic of qemu-img convert has two separate branches for
compressed and normal target images, which roughly do the same -
except for a little code that handles actual differences between
compressed and uncompressed images, and much more code that
implements just a different set of optimisations and bugs. This is
unnecessary code duplication, and makes the code for compressed
output (unsurprisingly) suffer from bitrot.
The code for uncompressed ouput is run twice to count the the total
length for the progress bar. In the first run it just takes a
shortcut and runs only half the loop, and when it's done, it toggles
a boolean, jumps out of the loop with a backwards goto and starts
over. Works, but pretty is something different.
(b) Converting while keeping a backing file (-B option) is broken in
several ways. This includes not writing to the image file if the
input has zero clusters or data filled with zeros (ignoring that the
backing file will be visible instead).
It also doesn't correctly limit every iteration of the copy loop to
sectors of the same status so that too many sectors may be copied to
in the target image. For -B this gives an unexpected result, for
other images it just does more work than necessary.
Conversion with a compressed target completely ignores any target
backing file.
(c) qemu-img convert skips reading and writing an area if it knows from
metadata that copying isn't needed (except for the bug mentioned
above that ignores a status change in some cases). It does, however,
read from the source even if it knows that it will read zeros, and
then search for non-zero bytes in the read buffer, if it's possible
that a write might be needed.
This reimplementation of the copying core reorganises the code to remove
the duplication and have a much more obvious code flow, by essentially
splitting the copy iteration loop into three parts:
1. Find the number of contiguous sectors of the same status at the
current offset (This can also be called in a separate loop before the
copying loop in order to determine the total sectors for the progress
bar.)
2. Read sectors. If the status implies that there is no data there to
read (zero or unallocated cluster), don't do anything.
3. Write sectors depending on the status. If it's data, write it. If
we want the backing file to be visible (with -B), don't write it. If
it's zeroed, skip it if you can, otherwise use bdrv_write_zeroes() to
optimise the write at least where possible.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
    convert_select_part(s, sector_num, &src_cur, &src_cur_offset);

    assert(s->total_sectors > sector_num);
    n = MIN(s->total_sectors - sector_num, BDRV_REQUEST_MAX_SECTORS);

    if (s->target_backing_sectors >= 0) {
        if (sector_num >= s->target_backing_sectors) {
            post_backing_zero = true;
        } else if (sector_num + n > s->target_backing_sectors) {
            /* Split requests around target_backing_sectors (because
             * starting from there, zeros are handled differently) */
            n = s->target_backing_sectors - sector_num;
        }
    }
    if (s->sector_next_status <= sector_num) {
        uint64_t offset = (sector_num - src_cur_offset) * BDRV_SECTOR_SIZE;
        int64_t count;
        int tail;
        BlockDriverState *src_bs = blk_bs(s->src[src_cur]);
        BlockDriverState *base;

        if (s->target_has_backing) {
            base = bdrv_cow_bs(bdrv_skip_filters(src_bs));
        } else {
            base = NULL;
        }
block: Convert bdrv_get_block_status_above() to bytes
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the name of the function from bdrv_get_block_status_above()
to bdrv_block_status_above() ensures that the compiler enforces that
all callers are updated. Likewise, since a byte interface allows
an offset mapping that might not be sector aligned, split the mapping
out of the return value and into a pass-by-reference parameter. For
now, the io.c layer still assert()s that all uses are sector-aligned,
but that can be relaxed when a later patch implements byte-based
block status in the drivers.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), plus
updates for the new split return interface. But some code,
particularly bdrv_block_status(), gets a lot simpler because it no
longer has to mess with sectors. Likewise, mirror code no longer
computes s->granularity >> BDRV_SECTOR_BITS, and can therefore drop
an assertion about alignment because the loop no longer depends on
alignment (never mind that we don't really have a driver that
reports sub-sector alignments, so it's not really possible to test
the effect of sub-sector mirroring). Fix a neighboring assertion to
use is_power_of_2 while there.
For ease of review, bdrv_get_block_status() was tackled separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-12 06:47:08 +03:00
        do {
            count = n * BDRV_SECTOR_SIZE;

            ret = bdrv_block_status_above(src_bs, base, offset, count, &count,
                                          NULL, NULL);

            if (ret < 0) {
                if (s->salvage) {
                    if (n == 1) {
                        if (!s->quiet) {
                            warn_report("error while reading block status at "
                                        "offset %" PRIu64 ": %s", offset,
                                        strerror(-ret));
                        }
                        /* Just try to read the data, then */
                        ret = BDRV_BLOCK_DATA;
                        count = BDRV_SECTOR_SIZE;
                    } else {
                        /* Retry on a shorter range */
                        n = DIV_ROUND_UP(n, 4);
                    }
                } else {
                    error_report("error while reading block status at offset "
                                 "%" PRIu64 ": %s", offset, strerror(-ret));
                    return ret;
                }
            }
        } while (ret < 0);
block: Convert bdrv_get_block_status() to bytes
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the name of the function from bdrv_get_block_status() to
bdrv_block_status() ensures that the compiler enforces that all
callers are updated. For now, the io.c layer still assert()s that
all callers are sector-aligned, but that can be relaxed when a later
patch implements byte-based block status in the drivers.
There was an inherent limitation in returning the offset via the
return value: we only have room for BDRV_BLOCK_OFFSET_MASK bits, which
means an offset can only be mapped for sector-aligned queries (or,
if we declare that non-aligned input is at the same relative position
modulo 512 of the answer), so the new interface also changes things to
return the offset via output through a parameter by reference rather
than mashed into the return value. We'll have some glue code that
munges between the two styles until we finish converting all uses.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), coupled
with the tweak in calling convention. But some code, particularly
bdrv_is_allocated(), gets a lot simpler because it no longer has to
mess with sectors.
For ease of review, bdrv_get_block_status_above() will be tackled
separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-12 06:47:03 +03:00
        n = DIV_ROUND_UP(count, BDRV_SECTOR_SIZE);
        /*
         * Avoid that s->sector_next_status becomes unaligned to the source
         * request alignment and/or cluster size to avoid unnecessary read
         * cycles.
         */
        tail = (sector_num - src_cur_offset + n) % s->src_alignment[src_cur];
        if (n > tail) {
            n -= tail;
        }
        if (ret & BDRV_BLOCK_ZERO) {
            s->status = post_backing_zero ? BLK_BACKING_FILE : BLK_ZERO;
        } else if (ret & BDRV_BLOCK_DATA) {
            s->status = BLK_DATA;
        } else {
            s->status = s->target_has_backing ? BLK_BACKING_FILE : BLK_DATA;
        }

        s->sector_next_status = sector_num + n;
    }

    n = MIN(n, s->sector_next_status - sector_num);
    if (s->status == BLK_DATA) {
        n = MIN(n, s->buf_sectors);
    }

    /* We need to write complete clusters for compressed images, so if an
     * unallocated area is shorter than that, we must consider the whole
     * cluster allocated. */
    if (s->compressed) {
        if (n < s->cluster_sectors) {
            n = MIN(s->cluster_sectors, s->total_sectors - sector_num);
            s->status = BLK_DATA;
        } else {
            n = QEMU_ALIGN_DOWN(n, s->cluster_sectors);
        }
    }

    return n;
}
static int coroutine_fn convert_co_read(ImgConvertState *s, int64_t sector_num,
                                        int nb_sectors, uint8_t *buf)
{
    uint64_t single_read_until = 0;
    int n, ret;
qemu-img convert: Rewrite copying logic
The implementation of qemu-img convert is (a) messy, (b) buggy, and
(c) less efficient than possible. The changes required to beat some
sense into it are massive enough that incremental changes would only
make my and the reviewers' life harder. So throw it away and reimplement
it from scratch.
Let me give some examples what I mean by messy, buggy and inefficient:
(a) The copying logic of qemu-img convert has two separate branches for
compressed and normal target images, which roughly do the same -
except for a little code that handles actual differences between
compressed and uncompressed images, and much more code that
implements just a different set of optimisations and bugs. This is
unnecessary code duplication, and makes the code for compressed
output (unsurprisingly) suffer from bitrot.
The code for uncompressed ouput is run twice to count the the total
length for the progress bar. In the first run it just takes a
shortcut and runs only half the loop, and when it's done, it toggles
a boolean, jumps out of the loop with a backwards goto and starts
over. Works, but pretty is something different.
(b) Converting while keeping a backing file (-B option) is broken in
several ways. This includes not writing to the image file if the
input has zero clusters or data filled with zeros (ignoring that the
backing file will be visible instead).
It also doesn't correctly limit every iteration of the copy loop to
sectors of the same status so that too many sectors may be copied to
in the target image. For -B this gives an unexpected result, for
other images it just does more work than necessary.
Conversion with a compressed target completely ignores any target
backing file.
(c) qemu-img convert skips reading and writing an area if it knows from
metadata that copying isn't needed (except for the bug mentioned
above that ignores a status change in some cases). It does, however,
read from the source even if it knows that it will read zeros, and
then search for non-zero bytes in the read buffer, if it's possible
that a write might be needed.
This reimplementation of the copying core reorganises the code to remove
the duplication and have a much more obvious code flow, by essentially
splitting the copy iteration loop into three parts:
1. Find the number of contiguous sectors of the same status at the
current offset (This can also be called in a separate loop before the
copying loop in order to determine the total sectors for the progress
bar.)
2. Read sectors. If the status implies that there is no data there to
read (zero or unallocated cluster), don't do anything.
3. Write sectors depending on the status. If it's data, write it. If
we want the backing file to be visible (with -B), don't write it. If
it's zeroed, skip it if you can, otherwise use bdrv_write_zeroes() to
optimise the write at least where possible.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
    assert(nb_sectors <= s->buf_sectors);

    while (nb_sectors > 0) {
        BlockBackend *blk;
        int src_cur;
        int64_t bs_sectors, src_cur_offset;
        uint64_t offset;
        /* In the case of compression with multiple source files, we can get a
         * nb_sectors that spreads into the next part. So we must be able to
         * read across multiple BDSes for one convert_read() call. */
        convert_select_part(s, sector_num, &src_cur, &src_cur_offset);
        blk = s->src[src_cur];
        bs_sectors = s->src_sectors[src_cur];

        offset = (sector_num - src_cur_offset) << BDRV_SECTOR_BITS;

        n = MIN(nb_sectors, bs_sectors - (sector_num - src_cur_offset));
        if (single_read_until > offset) {
            n = 1;
        }

        ret = blk_co_pread(blk, offset, n << BDRV_SECTOR_BITS, buf, 0);
        if (ret < 0) {
            if (s->salvage) {
                if (n > 1) {
                    single_read_until = offset + (n << BDRV_SECTOR_BITS);
                    continue;
                } else {
                    if (!s->quiet) {
                        warn_report("error while reading offset %" PRIu64
                                    ": %s", offset, strerror(-ret));
                    }
                    memset(buf, 0, BDRV_SECTOR_SIZE);
                }
            } else {
                return ret;
            }
        }

        sector_num += n;
        nb_sectors -= n;
        buf += n * BDRV_SECTOR_SIZE;
    }

    return 0;
}
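As an aside, the three-phase structure the commit message above describes (1. find a run of contiguous sectors with the same status, 2. read only where there is data, 3. write according to the status) can be sketched in isolation. The following is a hypothetical, self-contained illustration over in-memory buffers, not the real qemu-img code: `run_length`, `copy_image`, the `sector_status` enum and the tiny sector size are all invented for the demo.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define SECTOR_SIZE 4   /* tiny sectors keep the demo arrays small */
#define N_SECTORS   6

enum sector_status { ST_DATA, ST_ZERO, ST_BACKING };

/* Phase 1: how many contiguous sectors share the status of st[start]? */
static int run_length(const enum sector_status *st, int start, int total)
{
    int n = 1;
    while (start + n < total && st[start + n] == st[start]) {
        n++;
    }
    return n;
}

/* Phases 2 and 3: copy src into dst, handling each run by its status. */
static void copy_image(const unsigned char *src,
                       const enum sector_status *st,
                       unsigned char *dst, int total)
{
    int sector = 0;
    while (sector < total) {
        int n = run_length(st, sector, total);
        size_t off = (size_t)sector * SECTOR_SIZE;
        size_t len = (size_t)n * SECTOR_SIZE;

        switch (st[sector]) {
        case ST_DATA:       /* read the data and write it out */
            memcpy(dst + off, src + off, len);
            break;
        case ST_ZERO:       /* no read needed; just write zeroes */
            memset(dst + off, 0, len);
            break;
        case ST_BACKING:    /* leave untouched so a backing file shows through */
            break;
        }
        sector += n;
    }
}
```

Because phase 1 is a standalone function, the same run-length query can also drive a separate pre-pass that totals the sectors for the progress bar, which is exactly the simplification the rewrite aims for.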

static int coroutine_fn convert_co_write(ImgConvertState *s, int64_t sector_num,
                                         int nb_sectors, uint8_t *buf,
                                         enum ImgConvertBlockStatus status)
{
    int ret;

    while (nb_sectors > 0) {
        int n = nb_sectors;
        BdrvRequestFlags flags = s->compressed ? BDRV_REQ_WRITE_COMPRESSED : 0;

        switch (status) {
        case BLK_BACKING_FILE:
            /* If we have a backing file, leave clusters unallocated that are
             * unallocated in the source image, so that the backing file is
             * visible at the respective offset. */
            assert(s->target_has_backing);
            break;

        case BLK_DATA:
            /* If we're told to keep the target fully allocated (-S 0) or there
             * is real non-zero data, we must write it. Otherwise we can treat
             * it as zero sectors.
             * Compressed clusters need to be written as a whole, so in that
             * case we can only save the write if the buffer is completely
             * zeroed. */
            if (!s->min_sparse ||
                (!s->compressed &&
                 is_allocated_sectors_min(buf, n, &n, s->min_sparse,
                                          sector_num, s->alignment)) ||
                (s->compressed &&
                 !buffer_is_zero(buf, n * BDRV_SECTOR_SIZE)))
            {
                ret = blk_co_pwrite(s->target, sector_num << BDRV_SECTOR_BITS,
                                    n << BDRV_SECTOR_BITS, buf, flags);
                if (ret < 0) {
                    return ret;
                }
                break;
            }
            /* fall-through */

        case BLK_ZERO:
            if (s->has_zero_init) {
                assert(!s->target_has_backing);
                break;
            }
            ret = blk_co_pwrite_zeroes(s->target,
                                       sector_num << BDRV_SECTOR_BITS,
                                       n << BDRV_SECTOR_BITS,
                                       BDRV_REQ_MAY_UNMAP);
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
if (ret < 0) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
sector_num += n;
|
|
|
|
nb_sectors -= n;
|
|
|
|
buf += n * BDRV_SECTOR_SIZE;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2018-06-01 12:26:48 +03:00
|
|
|
static int coroutine_fn convert_co_copy_range(ImgConvertState *s, int64_t sector_num,
|
|
|
|
int nb_sectors)
|
|
|
|
{
|
|
|
|
int n, ret;
|
|
|
|
|
|
|
|
while (nb_sectors > 0) {
|
|
|
|
BlockBackend *blk;
|
|
|
|
int src_cur;
|
|
|
|
int64_t bs_sectors, src_cur_offset;
|
|
|
|
int64_t offset;
|
|
|
|
|
|
|
|
convert_select_part(s, sector_num, &src_cur, &src_cur_offset);
|
|
|
|
offset = (sector_num - src_cur_offset) << BDRV_SECTOR_BITS;
|
|
|
|
blk = s->src[src_cur];
|
|
|
|
bs_sectors = s->src_sectors[src_cur];
|
|
|
|
|
|
|
|
n = MIN(nb_sectors, bs_sectors - (sector_num - src_cur_offset));
|
|
|
|
|
|
|
|
ret = blk_co_copy_range(blk, offset, s->target,
|
|
|
|
sector_num << BDRV_SECTOR_BITS,
|
2018-07-09 19:37:17 +03:00
|
|
|
n << BDRV_SECTOR_BITS, 0, 0);
|
2018-06-01 12:26:48 +03:00
|
|
|
if (ret < 0) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
sector_num += n;
|
|
|
|
nb_sectors -= n;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-02-28 15:40:07 +03:00
|
|
|
static void coroutine_fn convert_co_do_copy(void *opaque)
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
{
|
2017-02-28 15:40:07 +03:00
|
|
|
ImgConvertState *s = opaque;
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
uint8_t *buf = NULL;
|
2017-02-28 15:40:07 +03:00
|
|
|
int ret, i;
|
|
|
|
int index = -1;
|
|
|
|
|
|
|
|
for (i = 0; i < s->num_coroutines; i++) {
|
|
|
|
if (s->co[i] == qemu_coroutine_self()) {
|
|
|
|
index = i;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
assert(index >= 0);
|
|
|
|
|
|
|
|
s->running_coroutines++;
|
|
|
|
buf = blk_blockalign(s->target, s->buf_sectors * BDRV_SECTOR_SIZE);
|
|
|
|
|
|
|
|
while (1) {
|
|
|
|
int n;
|
|
|
|
int64_t sector_num;
|
|
|
|
enum ImgConvertBlockStatus status;
|
2018-06-01 12:26:48 +03:00
|
|
|
bool copy_range;
|
2017-02-28 15:40:07 +03:00
|
|
|
|
|
|
|
qemu_co_mutex_lock(&s->lock);
|
|
|
|
if (s->ret != -EINPROGRESS || s->sector_num >= s->total_sectors) {
|
|
|
|
qemu_co_mutex_unlock(&s->lock);
|
2017-04-26 11:33:15 +03:00
|
|
|
break;
|
2017-02-28 15:40:07 +03:00
|
|
|
}
|
|
|
|
n = convert_iteration_sectors(s, s->sector_num);
|
|
|
|
if (n < 0) {
|
|
|
|
qemu_co_mutex_unlock(&s->lock);
|
|
|
|
s->ret = n;
|
2017-04-26 11:33:15 +03:00
|
|
|
break;
|
2017-02-28 15:40:07 +03:00
|
|
|
}
|
|
|
|
/* save current sector and allocation status to local variables */
|
|
|
|
sector_num = s->sector_num;
|
|
|
|
status = s->status;
|
|
|
|
if (!s->min_sparse && s->status == BLK_ZERO) {
|
|
|
|
n = MIN(n, s->buf_sectors);
|
|
|
|
}
|
|
|
|
/* increment global sector counter so that other coroutines can
|
|
|
|
* already continue reading beyond this request */
|
|
|
|
s->sector_num += n;
|
|
|
|
qemu_co_mutex_unlock(&s->lock);
|
|
|
|
|
|
|
|
if (status == BLK_DATA || (!s->min_sparse && status == BLK_ZERO)) {
|
|
|
|
s->allocated_done += n;
|
|
|
|
qemu_progress_print(100.0 * s->allocated_done /
|
|
|
|
s->allocated_sectors, 0);
|
|
|
|
}
|
|
|
|
|
2018-06-01 12:26:48 +03:00
|
|
|
retry:
|
|
|
|
copy_range = s->copy_range && s->status == BLK_DATA;
|
|
|
|
if (status == BLK_DATA && !copy_range) {
|
2017-02-28 15:40:07 +03:00
|
|
|
ret = convert_co_read(s, sector_num, n, buf);
|
|
|
|
if (ret < 0) {
|
2020-04-02 16:57:17 +03:00
|
|
|
error_report("error while reading at byte %lld: %s",
|
|
|
|
sector_num * BDRV_SECTOR_SIZE, strerror(-ret));
|
2017-02-28 15:40:07 +03:00
|
|
|
s->ret = ret;
|
|
|
|
}
|
|
|
|
} else if (!s->min_sparse && status == BLK_ZERO) {
|
|
|
|
status = BLK_DATA;
|
|
|
|
memset(buf, 0x00, n * BDRV_SECTOR_SIZE);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (s->wr_in_order) {
|
|
|
|
/* keep writes in order */
|
2017-04-26 11:33:15 +03:00
|
|
|
while (s->wr_offs != sector_num && s->ret == -EINPROGRESS) {
|
2017-02-28 15:40:07 +03:00
|
|
|
s->wait_sector_num[index] = sector_num;
|
|
|
|
qemu_coroutine_yield();
|
|
|
|
}
|
|
|
|
s->wait_sector_num[index] = -1;
|
|
|
|
}
|
|
|
|
|
2017-04-26 11:33:15 +03:00
|
|
|
if (s->ret == -EINPROGRESS) {
|
2018-06-01 12:26:48 +03:00
|
|
|
if (copy_range) {
|
|
|
|
ret = convert_co_copy_range(s, sector_num, n);
|
|
|
|
if (ret) {
|
|
|
|
s->copy_range = false;
|
|
|
|
goto retry;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
ret = convert_co_write(s, sector_num, n, buf, status);
|
|
|
|
}
|
2017-04-26 11:33:15 +03:00
|
|
|
if (ret < 0) {
|
2020-04-02 16:57:17 +03:00
|
|
|
error_report("error while writing at byte %lld: %s",
|
|
|
|
sector_num * BDRV_SECTOR_SIZE, strerror(-ret));
|
2017-04-26 11:33:15 +03:00
|
|
|
s->ret = ret;
|
|
|
|
}
|
2017-02-28 15:40:07 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (s->wr_in_order) {
|
|
|
|
/* reenter the coroutine that might have waited
|
|
|
|
* for this write to complete */
|
|
|
|
s->wr_offs = sector_num + n;
|
|
|
|
for (i = 0; i < s->num_coroutines; i++) {
|
|
|
|
if (s->co[i] && s->wait_sector_num[i] == s->wr_offs) {
|
|
|
|
/*
|
|
|
|
* A -> B -> A cannot occur because A has
|
|
|
|
* s->wait_sector_num[i] == -1 during A -> B. Therefore
|
|
|
|
* B will never enter A during this time window.
|
|
|
|
*/
|
|
|
|
qemu_coroutine_enter(s->co[i]);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
qemu_vfree(buf);
|
|
|
|
s->co[index] = NULL;
|
|
|
|
s->running_coroutines--;
|
|
|
|
if (!s->running_coroutines && s->ret == -EINPROGRESS) {
|
|
|
|
/* the convert job finished successfully */
|
|
|
|
s->ret = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int convert_do_copy(ImgConvertState *s)
|
|
|
|
{
|
|
|
|
int ret, i, n;
|
|
|
|
int64_t sector_num = 0;
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
|
|
|
|
/* Check whether we have zero initialisation or can get it efficiently */
|
2020-02-05 14:02:48 +03:00
|
|
|
if (!s->has_zero_init && s->target_is_new && s->min_sparse &&
|
|
|
|
!s->target_has_backing) {
|
2019-07-24 20:12:29 +03:00
|
|
|
s->has_zero_init = bdrv_has_zero_init(blk_bs(s->target));
|
|
|
|
}
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
|
|
|
|
/* Allocate buffer for copied data. For compressed images, only one cluster
|
|
|
|
* can be copied at a time. */
|
|
|
|
if (s->compressed) {
|
|
|
|
if (s->cluster_sectors <= 0 || s->cluster_sectors > s->buf_sectors) {
|
|
|
|
error_report("invalid cluster size");
|
2017-02-28 15:40:07 +03:00
|
|
|
return -EINVAL;
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
}
|
|
|
|
s->buf_sectors = s->cluster_sectors;
|
|
|
|
}
|
|
|
|
|
|
|
|
while (sector_num < s->total_sectors) {
|
|
|
|
n = convert_iteration_sectors(s, sector_num);
|
|
|
|
if (n < 0) {
|
2017-02-28 15:40:07 +03:00
|
|
|
return n;
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
}
|
2016-03-25 01:33:57 +03:00
|
|
|
if (s->status == BLK_DATA || (!s->min_sparse && s->status == BLK_ZERO))
|
|
|
|
{
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00
|
|
|
s->allocated_sectors += n;
|
|
|
|
}
|
|
|
|
sector_num += n;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Do the copy */
|
|
|
|
s->sector_next_status = 0;
|
2017-02-28 15:40:07 +03:00
|
|
|
s->ret = -EINPROGRESS;
|
qemu-img convert: Rewrite copying logic
2015-03-19 15:33:32 +03:00

    qemu_co_mutex_init(&s->lock);
    for (i = 0; i < s->num_coroutines; i++) {
        s->co[i] = qemu_coroutine_create(convert_co_do_copy, s);
        s->wait_sector_num[i] = -1;
        qemu_coroutine_enter(s->co[i]);
    }

    while (s->running_coroutines) {
        main_loop_wait(false);
    }

    if (s->compressed && !s->ret) {
        /* signal EOF to align */
        ret = blk_pwrite_compressed(s->target, 0, NULL, 0);
        if (ret < 0) {
            return ret;
        }
    }

    return s->ret;
}

/* Check that bitmaps can be copied, or output an error */
qemu-img: Add --skip-broken-bitmaps for 'convert --bitmaps'

The point of 'qemu-img convert --bitmaps' is to be a convenience for
actions that are already possible through a string of smaller
'qemu-img bitmap' sub-commands. One situation not accounted for
already is that if a source image contains an inconsistent bitmap (for
example, because a qemu process died abruptly before flushing bitmap
state), the user MUST delete those inconsistent bitmaps before
anything else useful can be done with the image.

We don't want to delete inconsistent bitmaps by default: although a
corrupt bitmap is only a loss of optimization rather than a corruption
of user-visible data, it is still nice to require the user to opt in
to the fact that they are aware of the loss of the bitmap. Still,
requiring the user to check 'qemu-img info' to see whether bitmaps are
consistent, then use 'qemu-img bitmap --remove' to remove offenders,
all before using 'qemu-img convert', is a lot more work than just
adding a knob 'qemu-img convert --bitmaps --skip-broken-bitmaps' which
opts in to skipping the broken bitmaps.

After testing the new option, also demonstrate the way to manually fix
things (either deleting bad bitmaps, or re-creating them as empty) so
that it is possible to convert without the option.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1946084
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210709153951.2801666-4-eblake@redhat.com>
[eblake: warning message tweak, test enhancements]
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
static int convert_check_bitmaps(BlockDriverState *src, bool skip_broken)
{
    BdrvDirtyBitmap *bm;

    if (!bdrv_supports_persistent_dirty_bitmap(src)) {
        error_report("Source lacks bitmap support");
        return -1;
    }
    FOR_EACH_DIRTY_BITMAP(src, bm) {
        if (!bdrv_dirty_bitmap_get_persistence(bm)) {
            continue;
        }
        if (!skip_broken && bdrv_dirty_bitmap_inconsistent(bm)) {
            error_report("Cannot copy inconsistent bitmap '%s'",
                         bdrv_dirty_bitmap_name(bm));
            error_printf("Try --skip-broken-bitmaps, or "
                         "use 'qemu-img bitmap --remove' to delete it\n");
            return -1;
        }
    }
    return 0;
}

static int convert_copy_bitmaps(BlockDriverState *src, BlockDriverState *dst,
                                bool skip_broken)
{
    BdrvDirtyBitmap *bm;
    Error *err = NULL;

    FOR_EACH_DIRTY_BITMAP(src, bm) {
        const char *name;

        if (!bdrv_dirty_bitmap_get_persistence(bm)) {
            continue;
        }
        name = bdrv_dirty_bitmap_name(bm);
        if (skip_broken && bdrv_dirty_bitmap_inconsistent(bm)) {
            warn_report("Skipping inconsistent bitmap '%s'", name);
            continue;
        }
        qmp_block_dirty_bitmap_add(dst->node_name, name,
                                   true, bdrv_dirty_bitmap_granularity(bm),
                                   true, true,
                                   true, !bdrv_dirty_bitmap_enabled(bm),
                                   &err);
        if (err) {
            error_reportf_err(err, "Failed to create bitmap %s: ", name);
            return -1;
        }

        do_dirty_bitmap_merge(dst->node_name, name, src->node_name, name,
                              &err);
        if (err) {
            error_reportf_err(err, "Failed to populate bitmap %s: ", name);
            qmp_block_dirty_bitmap_remove(dst->node_name, name, NULL);
            return -1;
        }
    }

    return 0;
}

#define MAX_BUF_SECTORS 32768

static void set_rate_limit(BlockBackend *blk, int64_t rate_limit)
{
    ThrottleConfig cfg;

    throttle_config_init(&cfg);
    cfg.buckets[THROTTLE_BPS_WRITE].avg = rate_limit;

    blk_io_limits_enable(blk, CONVERT_THROTTLE_GROUP);
    blk_set_io_limits(blk, &cfg);
}

static int img_convert(int argc, char **argv)
{
    int c, bs_i, flags, src_flags = BDRV_O_NO_SHARE;
    const char *fmt = NULL, *out_fmt = NULL, *cache = "unsafe",
               *src_cache = BDRV_DEFAULT_CACHE, *out_baseimg = NULL,
               *out_filename, *out_baseimg_param, *snapshot_name = NULL,
               *backing_fmt = NULL;
    BlockDriver *drv = NULL, *proto_drv = NULL;
    BlockDriverInfo bdi;
    BlockDriverState *out_bs;
    QemuOpts *opts = NULL, *sn_opts = NULL;
    QemuOptsList *create_opts = NULL;
    QDict *open_opts = NULL;
    char *options = NULL;
    Error *local_err = NULL;
    bool writethrough, src_writethrough, image_opts = false,
         skip_create = false, progress = false, tgt_image_opts = false;
    int64_t ret = -EINVAL;
    bool force_share = false;
    bool explict_min_sparse = false;
    bool bitmaps = false;
qemu-img: Add --skip-broken-bitmaps for 'convert --bitmaps'
The point of 'qemu-img convert --bitmaps' is to be a convenience for
actions that are already possible through a string of smaller
'qemu-img bitmap' sub-commands. One situation not accounted for
already is that if a source image contains an inconsistent bitmap (for
example, because a qemu process died abruptly before flushing bitmap
state), the user MUST delete those inconsistent bitmaps before
anything else useful can be done with the image.
We don't want to delete inconsistent bitmaps by default: although a
corrupt bitmap is only a loss of optimization rather than a corruption
of user-visible data, it is still nice to require the user to opt in
to the fact that they are aware of the loss of the bitmap. Still,
requiring the user to check 'qemu-img info' to see whether bitmaps are
consistent, then use 'qemu-img bitmap --remove' to remove offenders,
all before using 'qemu-img convert', is a lot more work than just
adding a knob 'qemu-img convert --bitmaps --skip-broken-bitmaps' which
opts in to skipping the broken bitmaps.
After testing the new option, also demonstrate the way to manually fix
things (either deleting bad bitmaps, or re-creating them as empty) so
that it is possible to convert without the option.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1946084
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210709153951.2801666-4-eblake@redhat.com>
[eblake: warning message tweak, test enhancements]
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2021-07-21 18:53:48 +03:00
|
|
|
bool skip_broken = false;
|
2020-10-20 17:47:44 +03:00
|
|
|
int64_t rate_limit = 0;
|
2017-04-21 12:11:55 +03:00
|
|
|
|
|
|
|
ImgConvertState s = (ImgConvertState) {
|
|
|
|
/* Need at least 4k of zeros for sparse detection */
|
|
|
|
.min_sparse = 8,
|
2018-07-27 06:34:01 +03:00
|
|
|
.copy_range = false,
|
2017-04-21 12:11:55 +03:00
|
|
|
.buf_sectors = IO_BUF_SIZE / BDRV_SECTOR_SIZE,
|
|
|
|
.wr_in_order = true,
|
|
|
|
.num_coroutines = 8,
|
|
|
|
};
|
2004-08-02 01:59:26 +04:00
|
|
|
|
|
|
|
    for(;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {"force-share", no_argument, 0, 'U'},
            {"target-image-opts", no_argument, 0, OPTION_TARGET_IMAGE_OPTS},
            {"salvage", no_argument, 0, OPTION_SALVAGE},
            {"target-is-zero", no_argument, 0, OPTION_TARGET_IS_ZERO},
            {"bitmaps", no_argument, 0, OPTION_BITMAPS},
            {"skip-broken-bitmaps", no_argument, 0, OPTION_SKIP_BROKEN},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, ":hf:O:B:CcF:o:l:S:pt:T:qnm:WUr:",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch(c) {
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'O':
            out_fmt = optarg;
            break;
        case 'B':
            out_baseimg = optarg;
            break;
        case 'C':
            s.copy_range = true;
            break;
        case 'c':
            s.compressed = true;
            break;
        case 'F':
            backing_fmt = optarg;
            break;
        case 'o':
            if (accumulate_options(&options, optarg) < 0) {
                goto fail_getopt;
            }
            break;
        case 'l':
            if (strstart(optarg, SNAPSHOT_OPT_BASE, NULL)) {
                sn_opts = qemu_opts_parse_noisily(&internal_snapshot_opts,
                                                  optarg, false);
                if (!sn_opts) {
                    error_report("Failed in parsing snapshot param '%s'",
                                 optarg);
                    goto fail_getopt;
                }
            } else {
                snapshot_name = optarg;
            }
            break;
        case 'S':
        {
            int64_t sval;

            sval = cvtnum("buffer size for sparse output", optarg);
            if (sval < 0) {
                goto fail_getopt;
            } else if (!QEMU_IS_ALIGNED(sval, BDRV_SECTOR_SIZE) ||
                       sval / BDRV_SECTOR_SIZE > MAX_BUF_SECTORS) {
                error_report("Invalid buffer size for sparse output specified. "
                    "Valid sizes are multiples of %llu up to %llu. Select "
                    "0 to disable sparse detection (fully allocates output).",
                    BDRV_SECTOR_SIZE, MAX_BUF_SECTORS * BDRV_SECTOR_SIZE);
                goto fail_getopt;
            }

            s.min_sparse = sval / BDRV_SECTOR_SIZE;
            explict_min_sparse = true;
            break;
        }
        case 'p':
            progress = true;
            break;
        case 't':
            cache = optarg;
            break;
        case 'T':
            src_cache = optarg;
            break;
        case 'q':
            s.quiet = true;
            break;
        case 'n':
            skip_create = true;
            break;
        case 'm':
            if (qemu_strtol(optarg, NULL, 0, &s.num_coroutines) ||
                s.num_coroutines < 1 || s.num_coroutines > MAX_COROUTINES) {
                error_report("Invalid number of coroutines. Allowed number of"
                             " coroutines is between 1 and %d", MAX_COROUTINES);
                goto fail_getopt;
            }
            break;
        case 'W':
            s.wr_in_order = false;
            break;
        case 'U':
            force_share = true;
            break;
        case 'r':
            rate_limit = cvtnum("rate limit", optarg);
            if (rate_limit < 0) {
                goto fail_getopt;
            }
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        case OPTION_SALVAGE:
            s.salvage = true;
            break;
        case OPTION_TARGET_IMAGE_OPTS:
            tgt_image_opts = true;
            break;
        case OPTION_TARGET_IS_ZERO:
            /*
             * The user asserting that the target is blank has the
             * same effect as the target driver supporting zero
             * initialisation.
             */
            s.has_zero_init = true;
            break;
        case OPTION_BITMAPS:
            bitmaps = true;
            break;
        case OPTION_SKIP_BROKEN:
            skip_broken = true;
            break;
        }
    }

    if (!out_fmt && !tgt_image_opts) {
        out_fmt = "raw";
    }

    if (skip_broken && !bitmaps) {
        error_report("Use of --skip-broken-bitmaps requires --bitmaps");
        goto fail_getopt;
    }

    if (s.compressed && s.copy_range) {
        error_report("Cannot enable copy offloading when -c is used");
        goto fail_getopt;
    }

    if (explict_min_sparse && s.copy_range) {
        error_report("Cannot enable copy offloading when -S is used");
        goto fail_getopt;
    }

    if (s.copy_range && s.salvage) {
        error_report("Cannot use copy offloading in salvaging mode");
        goto fail_getopt;
    }

    if (tgt_image_opts && !skip_create) {
        error_report("--target-image-opts requires use of -n flag");
        goto fail_getopt;
    }

    if (skip_create && options) {
        error_report("-o has no effect when skipping image creation");
        goto fail_getopt;
    }

    if (s.has_zero_init && !skip_create) {
        error_report("--target-is-zero requires use of -n flag");
        goto fail_getopt;
    }

    s.src_num = argc - optind - 1;
    out_filename = s.src_num >= 1 ? argv[argc - 1] : NULL;

    if (options && has_help_option(options)) {
        if (out_fmt) {
            ret = print_block_option_help(out_filename, out_fmt);
            goto fail_getopt;
        } else {
            error_report("Option help requires a format be specified");
            goto fail_getopt;
        }
    }

    if (s.src_num < 1) {
        error_report("Must specify image file name");
        goto fail_getopt;
    }

    /* ret is still -EINVAL until here */
    ret = bdrv_parse_cache_mode(src_cache, &src_flags, &src_writethrough);
    if (ret < 0) {
        error_report("Invalid source cache option: %s", src_cache);
        goto fail_getopt;
    }

    /* Initialize before goto out */
    if (s.quiet) {
        progress = false;
    }
    qemu_progress_init(progress, 1.0);
    qemu_progress_print(0, 100);

    s.src = g_new0(BlockBackend *, s.src_num);
    s.src_sectors = g_new(int64_t, s.src_num);
    s.src_alignment = g_new(int, s.src_num);

    for (bs_i = 0; bs_i < s.src_num; bs_i++) {
        BlockDriverState *src_bs;
        s.src[bs_i] = img_open(image_opts, argv[optind + bs_i],
                               fmt, src_flags, src_writethrough, s.quiet,
                               force_share);
        if (!s.src[bs_i]) {
            ret = -1;
            goto out;
        }
        s.src_sectors[bs_i] = blk_nb_sectors(s.src[bs_i]);
        if (s.src_sectors[bs_i] < 0) {
            error_report("Could not get size of %s: %s",
                         argv[optind + bs_i], strerror(-s.src_sectors[bs_i]));
            ret = -1;
            goto out;
        }
        src_bs = blk_bs(s.src[bs_i]);
        s.src_alignment[bs_i] = DIV_ROUND_UP(src_bs->bl.request_alignment,
                                             BDRV_SECTOR_SIZE);
        if (!bdrv_get_info(src_bs, &bdi)) {
            s.src_alignment[bs_i] = MAX(s.src_alignment[bs_i],
                                        bdi.cluster_size / BDRV_SECTOR_SIZE);
        }
        s.total_sectors += s.src_sectors[bs_i];
    }

    if (sn_opts) {
        bdrv_snapshot_load_tmp(blk_bs(s.src[0]),
                               qemu_opt_get(sn_opts, SNAPSHOT_OPT_ID),
                               qemu_opt_get(sn_opts, SNAPSHOT_OPT_NAME),
                               &local_err);
    } else if (snapshot_name != NULL) {
        if (s.src_num > 1) {
            error_report("No support for concatenating multiple snapshot");
            ret = -1;
            goto out;
        }

        bdrv_snapshot_load_tmp_by_id_or_name(blk_bs(s.src[0]), snapshot_name,
                                             &local_err);
    }
    if (local_err) {
        error_reportf_err(local_err, "Failed to load snapshot: ");
        ret = -1;
        goto out;
    }

    if (!skip_create) {
        /* Find driver and parse its options */
        drv = bdrv_find_format(out_fmt);
        if (!drv) {
            error_report("Unknown file format '%s'", out_fmt);
            ret = -1;
            goto out;
        }

        proto_drv = bdrv_find_protocol(out_filename, true, &local_err);
        if (!proto_drv) {
            error_report_err(local_err);
            ret = -1;
            goto out;
        }

        if (!drv->create_opts) {
            error_report("Format driver '%s' does not support image creation",
                         drv->format_name);
            ret = -1;
            goto out;
        }

        if (!proto_drv->create_opts) {
            error_report("Protocol driver '%s' does not support image creation",
                         proto_drv->format_name);
            ret = -1;
            goto out;
        }

        create_opts = qemu_opts_append(create_opts, drv->create_opts);
        create_opts = qemu_opts_append(create_opts, proto_drv->create_opts);

        opts = qemu_opts_create(create_opts, NULL, 0, &error_abort);
        if (options) {
            if (!qemu_opts_do_parse(opts, options, NULL, &local_err)) {
                error_report_err(local_err);
                ret = -1;
                goto out;
            }
        }

        qemu_opt_set_number(opts, BLOCK_OPT_SIZE,
                            s.total_sectors * BDRV_SECTOR_SIZE, &error_abort);
        ret = add_old_style_options(out_fmt, opts, out_baseimg, backing_fmt);
        if (ret < 0) {
            goto out;
        }
    }

    /* Get backing file name if -o backing_file was used */
    out_baseimg_param = qemu_opt_get(opts, BLOCK_OPT_BACKING_FILE);
    if (out_baseimg_param) {
        out_baseimg = out_baseimg_param;
    }
    s.target_has_backing = (bool) out_baseimg;

    if (s.has_zero_init && s.target_has_backing) {
        error_report("Cannot use --target-is-zero when the destination "
                     "image has a backing file");
        goto out;
    }

    if (s.src_num > 1 && out_baseimg) {
        error_report("Having a backing file for the target makes no sense when "
                     "concatenating multiple input images");
        ret = -1;
        goto out;
    }

    if (out_baseimg_param) {
        if (!qemu_opt_get(opts, BLOCK_OPT_BACKING_FMT)) {
            error_report("Use of backing file requires explicit "
                         "backing format");
            ret = -1;
            goto out;
        }
    }

    /* Check if compression is supported */
    if (s.compressed) {
        bool encryption =
            qemu_opt_get_bool(opts, BLOCK_OPT_ENCRYPT, false);
        const char *encryptfmt =
            qemu_opt_get(opts, BLOCK_OPT_ENCRYPT_FORMAT);
        const char *preallocation =
            qemu_opt_get(opts, BLOCK_OPT_PREALLOC);

        if (drv && !block_driver_can_compress(drv)) {
            error_report("Compression not supported for this file format");
            ret = -1;
            goto out;
        }

        if (encryption || encryptfmt) {
            error_report("Compression and encryption not supported at "
                         "the same time");
            ret = -1;
            goto out;
        }

        if (preallocation
            && strcmp(preallocation, "off"))
        {
            error_report("Compression and preallocation not supported at "
                         "the same time");
            ret = -1;
            goto out;
        }
    }

/* Determine if bitmaps need copying */
|
|
|
|
if (bitmaps) {
|
|
|
|
if (s.src_num > 1) {
|
|
|
|
error_report("Copying bitmaps only possible with single source");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
qemu-img: Add --skip-broken-bitmaps for 'convert --bitmaps'
The point of 'qemu-img convert --bitmaps' is to be a convenience for
actions that are already possible through a string of smaller
'qemu-img bitmap' sub-commands. One situation not accounted for
already is that if a source image contains an inconsistent bitmap (for
example, because a qemu process died abruptly before flushing bitmap
state), the user MUST delete those inconsistent bitmaps before
anything else useful can be done with the image.
We don't want to delete inconsistent bitmaps by default: although a
corrupt bitmap is only a loss of optimization rather than a corruption
of user-visible data, it is still nice to require the user to opt in
to the fact that they are aware of the loss of the bitmap. Still,
requiring the user to check 'qemu-img info' to see whether bitmaps are
consistent, then use 'qemu-img bitmap --remove' to remove offenders,
all before using 'qemu-img convert', is a lot more work than just
adding a knob 'qemu-img convert --bitmaps --skip-broken-bitmaps' which
opts in to skipping the broken bitmaps.
After testing the new option, also demonstrate the way to manually fix
things (either deleting bad bitmaps, or re-creating them as empty) so
that it is possible to convert without the option.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1946084
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210709153951.2801666-4-eblake@redhat.com>
[eblake: warning message tweak, test enhancements]
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2021-07-21 18:53:48 +03:00
|
|
|
ret = convert_check_bitmaps(blk_bs(s.src[0]), skip_broken);
|
2021-07-09 18:39:50 +03:00
|
|
|
if (ret < 0) {
|
2020-05-21 22:21:36 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
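    /*
     * Example invocation exercising the bitmap path above (image names
     * are hypothetical); --skip-broken-bitmaps downgrades inconsistent
     * bitmaps from a hard error to a skipped-with-warning case:
     *
     *     qemu-img convert --bitmaps --skip-broken-bitmaps \
     *         -O qcow2 src.qcow2 dst.qcow2
     */
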
    /*
     * The later open call will need any decryption secrets, and
     * bdrv_create() will purge "opts", so extract them now before
     * they are lost.
     */
    if (!skip_create) {
        open_opts = qdict_new();
        qemu_opt_foreach(opts, img_add_key_secrets, open_opts, &error_abort);

        /* Create the new image */
        ret = bdrv_create(drv, out_filename, opts, &local_err);
        if (ret < 0) {
            error_reportf_err(local_err, "%s: error while converting %s: ",
                              out_filename, out_fmt);
            goto out;
        }
    }

    s.target_is_new = !skip_create;

    flags = s.min_sparse ? (BDRV_O_RDWR | BDRV_O_UNMAP) : BDRV_O_RDWR;
    ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
    if (ret < 0) {
        error_report("Invalid cache option: %s", cache);
        goto out;
    }

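    /*
     * "cache" holds the value of the -t option; bdrv_parse_cache_mode()
     * accepts the usual QEMU cache modes ("none", "writeback",
     * "writethrough", "directsync", "unsafe"), e.g. (hypothetical
     * image names):
     *
     *     qemu-img convert -t none -O qcow2 src.img dst.qcow2
     */
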
    if (flags & BDRV_O_NOCACHE) {
        /*
         * If we open the target with O_DIRECT, it may be necessary to
         * extend its size to align to the physical sector size.
         */
        flags |= BDRV_O_RESIZE;
    }

    if (skip_create) {
        s.target = img_open(tgt_image_opts, out_filename, out_fmt,
                            flags, writethrough, s.quiet, false);
    } else {
        /* TODO ultimately we should allow --target-image-opts
         * to be used even when -n is not given.
         * That has to wait for bdrv_create to be improved
         * to allow filenames in option syntax
         */
        s.target = img_open_file(out_filename, open_opts, out_fmt,
                                 flags, writethrough, s.quiet, false);
        open_opts = NULL; /* blk_new_open will have freed it */
    }
    if (!s.target) {
        ret = -1;
        goto out;
    }
    out_bs = blk_bs(s.target);

    if (bitmaps && !bdrv_supports_persistent_dirty_bitmap(out_bs)) {
        error_report("Format driver '%s' does not support bitmaps",
                     out_bs->drv->format_name);
        ret = -1;
        goto out;
    }

    if (s.compressed && !block_driver_can_compress(out_bs->drv)) {
        error_report("Compression not supported for this file format");
        ret = -1;
        goto out;
    }

    /* increase bufsectors from the default 4096 (2M) if opt_transfer
     * or discard_alignment of the out_bs is greater. Limit to
     * MAX_BUF_SECTORS as maximum which is currently 32768 (16MB). */
    s.buf_sectors = MIN(MAX_BUF_SECTORS,
                        MAX(s.buf_sectors,
                            MAX(out_bs->bl.opt_transfer >> BDRV_SECTOR_BITS,
                                out_bs->bl.pdiscard_alignment >>
                                BDRV_SECTOR_BITS)));

    /* try to align the write requests to the destination to avoid unnecessary
     * RMW cycles. */
    s.alignment = MAX(pow2floor(s.min_sparse),
                      DIV_ROUND_UP(out_bs->bl.request_alignment,
                                   BDRV_SECTOR_SIZE));
    assert(is_power_of_2(s.alignment));

    if (skip_create) {
        int64_t output_sectors = blk_nb_sectors(s.target);
        if (output_sectors < 0) {
            error_report("unable to get output image length: %s",
                         strerror(-output_sectors));
            ret = -1;
            goto out;
        } else if (output_sectors < s.total_sectors) {
            error_report("output file is smaller than input file");
            ret = -1;
            goto out;
        }
    }

    if (s.target_has_backing && s.target_is_new) {
        /* Errors are treated as "backing length unknown" (which means
         * s.target_backing_sectors has to be negative, which it will
         * be automatically). The backing file length is used only
         * for optimizations, so such a case is not fatal. */
        s.target_backing_sectors =
            bdrv_nb_sectors(bdrv_backing_chain_next(out_bs));
    } else {
        s.target_backing_sectors = -1;
    }

    ret = bdrv_get_info(out_bs, &bdi);
    if (ret < 0) {
        if (s.compressed) {
            error_report("could not get block driver info");
            goto out;
        }
    } else {
        s.compressed = s.compressed || bdi.needs_compressed_writes;
        s.cluster_sectors = bdi.cluster_size / BDRV_SECTOR_SIZE;
    }

    if (rate_limit) {
        set_rate_limit(s.target, rate_limit);
    }

    ret = convert_do_copy(&s);

    /* Now copy the bitmaps */
    if (bitmaps && ret == 0) {
        ret = convert_copy_bitmaps(blk_bs(s.src[0]), out_bs, skip_broken);
    }

out:
    if (!ret) {
        qemu_progress_print(100, 0);
    }
    qemu_progress_end();
    qemu_opts_del(opts);
    qemu_opts_free(create_opts);
    qobject_unref(open_opts);
    blk_unref(s.target);
    if (s.src) {
        for (bs_i = 0; bs_i < s.src_num; bs_i++) {
            blk_unref(s.src[bs_i]);
        }
        g_free(s.src);
    }
    g_free(s.src_sectors);
    g_free(s.src_alignment);
fail_getopt:
    qemu_opts_del(sn_opts);
    g_free(options);

    return !!ret;
}

static void dump_snapshots(BlockDriverState *bs)
{
    QEMUSnapshotInfo *sn_tab, *sn;
    int nb_sns, i;

    nb_sns = bdrv_snapshot_list(bs, &sn_tab);
    if (nb_sns <= 0) {
        return;
    }
    printf("Snapshot list:\n");
    bdrv_snapshot_dump(NULL);
    printf("\n");
    for (i = 0; i < nb_sns; i++) {
        sn = &sn_tab[i];
        bdrv_snapshot_dump(sn);
        printf("\n");
    }
    g_free(sn_tab);
}

static void dump_json_image_info_list(ImageInfoList *list)
{
    GString *str;
    QObject *obj;
    Visitor *v = qobject_output_visitor_new(&obj);

    visit_type_ImageInfoList(v, NULL, &list, &error_abort);
    visit_complete(v, &obj);
    str = qobject_to_json_pretty(obj, true);
    assert(str != NULL);
    printf("%s\n", str->str);
    qobject_unref(obj);
    visit_free(v);
    g_string_free(str, true);
}

qemu-img: Add json output option to the info command.
This option --output=[human|json] make qemu-img info output on
human or JSON representation at the choice of the user.
example:
{
"snapshots": [
{
"vm-clock-nsec": 637102488,
"name": "vm-20120821145509",
"date-sec": 1345553709,
"date-nsec": 220289000,
"vm-clock-sec": 20,
"id": "1",
"vm-state-size": 96522745
},
{
"vm-clock-nsec": 28210866,
"name": "vm-20120821154059",
"date-sec": 1345556459,
"date-nsec": 171392000,
"vm-clock-sec": 46,
"id": "2",
"vm-state-size": 101208714
}
],
"virtual-size": 1073741824,
"filename": "snap.qcow2",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 985587712,
"dirty-flag": false
}
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2012-09-05 15:09:02 +04:00
|
|
|
static void dump_json_image_info(ImageInfo *info)
|
|
|
|
{
|
2020-12-11 20:11:37 +03:00
|
|
|
GString *str;
|
qemu-img: Add json output option to the info command.
This option --output=[human|json] make qemu-img info output on
human or JSON representation at the choice of the user.
example:
{
"snapshots": [
{
"vm-clock-nsec": 637102488,
"name": "vm-20120821145509",
"date-sec": 1345553709,
"date-nsec": 220289000,
"vm-clock-sec": 20,
"id": "1",
"vm-state-size": 96522745
},
{
"vm-clock-nsec": 28210866,
"name": "vm-20120821154059",
"date-sec": 1345556459,
"date-nsec": 171392000,
"vm-clock-sec": 46,
"id": "2",
"vm-state-size": 101208714
}
],
"virtual-size": 1073741824,
"filename": "snap.qcow2",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 985587712,
"dirty-flag": false
}
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2012-09-05 15:09:02 +04:00
|
|
|
QObject *obj;
|
2016-09-30 17:45:28 +03:00
|
|
|
Visitor *v = qobject_output_visitor_new(&obj);
|
qapi: Add new visit_complete() function
Making each output visitor provide its own output collection
function was the only remaining reason for exposing visitor
sub-types to the rest of the code base. Add a polymorphic
visit_complete() function which is a no-op for input visitors,
and which populates an opaque pointer for output visitors. For
maximum type-safety, also add a parameter to the output visitor
constructors with a type-correct version of the output pointer,
and assert that the two uses match.
This approach was considered superior to either passing the
output parameter only during construction (action at a distance
during visit_free() feels awkward) or only during visit_complete()
(defeating type safety makes it easier to use incorrectly).
Most callers were function-local, and therefore a mechanical
conversion; the testsuite was a bit trickier, but the previous
cleanup patch minimized the churn here.
The visit_complete() function may be called at most once; doing
so lets us use transfer semantics rather than duplication or
ref-count semantics to get the just-built output back to the
caller, even though it means our behavior is not idempotent.
Generated code is simplified as follows for events:
|@@ -26,7 +26,7 @@ void qapi_event_send_acpi_device_ost(ACP
| QDict *qmp;
| Error *err = NULL;
| QMPEventFuncEmit emit;
|- QmpOutputVisitor *qov;
|+ QObject *obj;
| Visitor *v;
| q_obj_ACPI_DEVICE_OST_arg param = {
| info
|@@ -39,8 +39,7 @@ void qapi_event_send_acpi_device_ost(ACP
|
| qmp = qmp_event_build_dict("ACPI_DEVICE_OST");
|
|- qov = qmp_output_visitor_new();
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(&obj);
|
| visit_start_struct(v, "ACPI_DEVICE_OST", NULL, 0, &err);
| if (err) {
|@@ -55,7 +54,8 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
|
|- qdict_put_obj(qmp, "data", qmp_output_get_qobject(qov));
|+ visit_complete(v, &obj);
|+ qdict_put_obj(qmp, "data", obj);
| emit(QAPI_EVENT_ACPI_DEVICE_OST, qmp, &err);
and for commands:
| {
| Error *err = NULL;
|- QmpOutputVisitor *qov = qmp_output_visitor_new();
| Visitor *v;
|
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(ret_out);
| visit_type_AddfdInfo(v, "unused", &ret_in, &err);
|- if (err) {
|- goto out;
|+ if (!err) {
|+ visit_complete(v, ret_out);
| }
|- *ret_out = qmp_output_get_qobject(qov);
|-
|-out:
| error_propagate(errp, err);
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1465490926-28625-13-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2016-06-09 19:48:43 +03:00
|
|
|
|
|
|
|
visit_type_ImageInfo(v, NULL, &info, &error_abort);
|
|
|
|
visit_complete(v, &obj);
|
2020-12-11 20:11:35 +03:00
|
|
|
str = qobject_to_json_pretty(obj, true);
|
qemu-img: Add json output option to the info command.
This option --output=[human|json] make qemu-img info output on
human or JSON representation at the choice of the user.
example:
{
"snapshots": [
{
"vm-clock-nsec": 637102488,
"name": "vm-20120821145509",
"date-sec": 1345553709,
"date-nsec": 220289000,
"vm-clock-sec": 20,
"id": "1",
"vm-state-size": 96522745
},
{
"vm-clock-nsec": 28210866,
"name": "vm-20120821154059",
"date-sec": 1345556459,
"date-nsec": 171392000,
"vm-clock-sec": 46,
"id": "2",
"vm-state-size": 101208714
}
],
"virtual-size": 1073741824,
"filename": "snap.qcow2",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 985587712,
"dirty-flag": false
}
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2012-09-05 15:09:02 +04:00
|
|
|
assert(str != NULL);
|
2020-12-11 20:11:37 +03:00
|
|
|
printf("%s\n", str->str);
|
2018-04-19 18:01:43 +03:00
|
|
|
qobject_unref(obj);
|
qapi: Add new visit_complete() function
Making each output visitor provide its own output collection
function was the only remaining reason for exposing visitor
sub-types to the rest of the code base. Add a polymorphic
visit_complete() function which is a no-op for input visitors,
and which populates an opaque pointer for output visitors. For
maximum type-safety, also add a parameter to the output visitor
constructors with a type-correct version of the output pointer,
and assert that the two uses match.
This approach was considered superior to either passing the
output parameter only during construction (action at a distance
during visit_free() feels awkward) or only during visit_complete()
(defeating type safety makes it easier to use incorrectly).
Most callers were function-local, and therefore a mechanical
conversion; the testsuite was a bit trickier, but the previous
cleanup patch minimized the churn here.
The visit_complete() function may be called at most once; doing
so lets us use transfer semantics rather than duplication or
ref-count semantics to get the just-built output back to the
caller, even though it means our behavior is not idempotent.
Generated code is simplified as follows for events:
|@@ -26,7 +26,7 @@ void qapi_event_send_acpi_device_ost(ACP
| QDict *qmp;
| Error *err = NULL;
| QMPEventFuncEmit emit;
|- QmpOutputVisitor *qov;
|+ QObject *obj;
| Visitor *v;
| q_obj_ACPI_DEVICE_OST_arg param = {
| info
|@@ -39,8 +39,7 @@ void qapi_event_send_acpi_device_ost(ACP
|
| qmp = qmp_event_build_dict("ACPI_DEVICE_OST");
|
|- qov = qmp_output_visitor_new();
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(&obj);
|
| visit_start_struct(v, "ACPI_DEVICE_OST", NULL, 0, &err);
| if (err) {
|@@ -55,7 +54,8 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
|
|- qdict_put_obj(qmp, "data", qmp_output_get_qobject(qov));
|+ visit_complete(v, &obj);
|+ qdict_put_obj(qmp, "data", obj);
| emit(QAPI_EVENT_ACPI_DEVICE_OST, qmp, &err);
and for commands:
| {
| Error *err = NULL;
|- QmpOutputVisitor *qov = qmp_output_visitor_new();
| Visitor *v;
|
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(ret_out);
| visit_type_AddfdInfo(v, "unused", &ret_in, &err);
|- if (err) {
|- goto out;
|+ if (!err) {
|+ visit_complete(v, ret_out);
| }
|- *ret_out = qmp_output_get_qobject(qov);
|-
|-out:
| error_propagate(errp, err);
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1465490926-28625-13-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2016-06-09 19:48:43 +03:00
|
|
|
visit_free(v);
|
2020-12-11 20:11:37 +03:00
|
|
|
g_string_free(str, true);
|
qemu-img: Add json output option to the info command.
This option --output=[human|json] make qemu-img info output on
human or JSON representation at the choice of the user.
example:
{
"snapshots": [
{
"vm-clock-nsec": 637102488,
"name": "vm-20120821145509",
"date-sec": 1345553709,
"date-nsec": 220289000,
"vm-clock-sec": 20,
"id": "1",
"vm-state-size": 96522745
},
{
"vm-clock-nsec": 28210866,
"name": "vm-20120821154059",
"date-sec": 1345556459,
"date-nsec": 171392000,
"vm-clock-sec": 46,
"id": "2",
"vm-state-size": 101208714
}
],
"virtual-size": 1073741824,
"filename": "snap.qcow2",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 985587712,
"dirty-flag": false
}
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2012-09-05 15:09:02 +04:00
}

static void dump_human_image_info_list(ImageInfoList *list)
{
    ImageInfoList *elem;
    bool delim = false;

    for (elem = list; elem; elem = elem->next) {
        if (delim) {
            printf("\n");
        }
        delim = true;

        bdrv_image_info_dump(elem->value);
    }
}

static gboolean str_equal_func(gconstpointer a, gconstpointer b)
{
    return strcmp(a, b) == 0;
}

/**
 * Open an image file chain and return an ImageInfoList
 *
 * @filename: topmost image filename
 * @fmt: topmost image format (may be NULL to autodetect)
 * @chain: true  - enumerate entire backing file chain
 *         false - only topmost image file
 *
 * Returns a list of ImageInfo objects or NULL if there was an error opening an
 * image file.  If there was an error a message will have been printed to
 * stderr.
 */
static ImageInfoList *collect_image_info_list(bool image_opts,
                                              const char *filename,
                                              const char *fmt,
                                              bool chain, bool force_share)
{
    ImageInfoList *head = NULL;
    ImageInfoList **tail = &head;
    GHashTable *filenames;
    Error *err = NULL;

    filenames = g_hash_table_new_full(g_str_hash, str_equal_func, NULL, NULL);

    while (filename) {
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
        BlockBackend *blk;
        BlockDriverState *bs;
        ImageInfo *info;

        if (g_hash_table_lookup_extended(filenames, filename, NULL, NULL)) {
            error_report("Backing file '%s' creates an infinite loop.",
                         filename);
            goto err;
        }
        g_hash_table_insert(filenames, (gpointer)filename, NULL);

        blk = img_open(image_opts, filename, fmt,
                       BDRV_O_NO_BACKING | BDRV_O_NO_IO, false, false,
                       force_share);
        if (!blk) {
            goto err;
        }
        bs = blk_bs(blk);

        bdrv_query_image_info(bs, &info, &err);
        if (err) {
            error_report_err(err);
            blk_unref(blk);
            goto err;
        }

        QAPI_LIST_APPEND(tail, info);

        blk_unref(blk);

        /* Clear parameters that only apply to the topmost image */
        filename = fmt = NULL;
        image_opts = false;

        if (chain) {
            if (info->has_full_backing_filename) {
                filename = info->full_backing_filename;
            } else if (info->has_backing_filename) {
                error_report("Could not determine absolute backing filename,"
                             " but backing filename '%s' present",
                             info->backing_filename);
                goto err;
            }
            if (info->has_backing_filename_format) {
                fmt = info->backing_filename_format;
            }
        }
    }
    g_hash_table_destroy(filenames);
    return head;

err:
    qapi_free_ImageInfoList(head);
    g_hash_table_destroy(filenames);
    return NULL;
}

static int img_info(int argc, char **argv)
{
    int c;
    OutputFormat output_format = OFORMAT_HUMAN;
    bool chain = false;
    const char *filename, *fmt, *output;
    ImageInfoList *list;
    bool image_opts = false;
    bool force_share = false;

    fmt = NULL;
    output = NULL;
    for(;;) {
        int option_index = 0;
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"format", required_argument, 0, 'f'},
            {"output", required_argument, 0, OPTION_OUTPUT},
            {"backing-chain", no_argument, 0, OPTION_BACKING_CHAIN},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {"force-share", no_argument, 0, 'U'},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, ":f:hU",
                        long_options, &option_index);
        if (c == -1) {
            break;
        }
        switch(c) {
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'U':
            force_share = true;
            break;
        case OPTION_OUTPUT:
            output = optarg;
            break;
        case OPTION_BACKING_CHAIN:
            chain = true;
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }
    if (optind != argc - 1) {
        error_exit("Expecting one image file name");
    }
    filename = argv[optind++];

    if (output && !strcmp(output, "json")) {
        output_format = OFORMAT_JSON;
    } else if (output && !strcmp(output, "human")) {
        output_format = OFORMAT_HUMAN;
    } else if (output) {
        error_report("--output must be used with human or json as argument.");
        return 1;
    }

    list = collect_image_info_list(image_opts, filename, fmt, chain,
                                   force_share);
    if (!list) {
        return 1;
    }

    switch (output_format) {
    case OFORMAT_HUMAN:
        dump_human_image_info_list(list);
        break;
    case OFORMAT_JSON:
        if (chain) {
            dump_json_image_info_list(list);
        } else {
            dump_json_image_info(list->value);
        }
        break;
    }

    qapi_free_ImageInfoList(list);
    return 0;
}

static int dump_map_entry(OutputFormat output_format, MapEntry *e,
                          MapEntry *next)
{
    switch (output_format) {
    case OFORMAT_HUMAN:
        if (e->data && !e->has_offset) {
            error_report("File contains external, encrypted or compressed clusters.");
            return -1;
        }
        if (e->data && !e->zero) {
            printf("%#-16"PRIx64"%#-16"PRIx64"%#-16"PRIx64"%s\n",
                   e->start, e->length,
                   e->has_offset ? e->offset : 0,
                   e->has_filename ? e->filename : "");
        }
        /* This format ignores the distinction between 0, ZERO and ZERO|DATA.
         * Modify the flags here to allow more coalescing.
         */
        if (next && (!next->data || next->zero)) {
            next->data = false;
            next->zero = true;
        }
        break;
    case OFORMAT_JSON:
        printf("{ \"start\": %"PRId64", \"length\": %"PRId64","
qemu-img: Make unallocated part of backing chain obvious in map
The recently-added NBD context qemu:allocation-depth is able to
distinguish between locally-present data (even when that data is
sparse) [shown as depth 1 over NBD], and data that could not be found
anywhere in the backing chain [shown as depth 0]; and the libnbd
project was recently patched to give the human-readable name "absent"
to an allocation-depth of 0. But qemu-img map --output=json predates
that addition, and has the unfortunate behavior that all portions of
the backing chain that resolve without finding a hit in any backing
layer report the same depth as the final backing layer. This makes it
harder to reconstruct a qcow2 backing chain using just 'qemu-img map'
output, especially when using "backing":null to artificially limit a
backing chain, because it is impossible to distinguish between a
QCOW2_CLUSTER_UNALLOCATED (which defers to a [missing] backing file)
and a QCOW2_CLUSTER_ZERO_PLAIN cluster (which would override any
backing file), since both types of clusters otherwise show as
"data":false,"zero":true" (but note that we can distinguish a
QCOW2_CLUSTER_ZERO_ALLOCATED, which would also have an "offset":
listing).
The task of reconstructing a qcow2 chain was made harder in commit
0da9856851 (nbd: server: Report holes for raw images), because prior
to that point, it was possible to abuse NBD's block status command to
see which portions of a qcow2 file resulted in BDRV_BLOCK_ALLOCATED
(showing up as NBD_STATE_ZERO in isolation) vs. missing from the chain
(showing up as NBD_STATE_ZERO|NBD_STATE_HOLE); but now qemu reports
more accurate sparseness information over NBD.
An obvious solution is to make 'qemu-img map --output=json' add an
additional "present":false designation to any cluster lacking an
allocation anywhere in the chain, without any change to the "depth"
parameter to avoid breaking existing clients. The iotests have
several examples where this distinction demonstrates the additional
accuracy.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210701190655.2131223-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: fix more iotest fallout]
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-07-01 22:06:55 +03:00
|
|
|
" \"depth\": %"PRId64", \"present\": %s, \"zero\": %s,"
|
|
|
|
" \"data\": %s", e->start, e->length, e->depth,
|
|
|
|
e->present ? "true" : "false",
|
2016-01-26 06:59:02 +03:00
|
|
|
e->zero ? "true" : "false",
|
|
|
|
e->data ? "true" : "false");
|
|
|
|
if (e->has_offset) {
|
2013-09-11 20:47:52 +04:00
|
|
|
printf(", \"offset\": %"PRId64"", e->offset);
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
|
|
|
putchar('}');
|
|
|
|
|
2020-05-13 16:36:28 +03:00
|
|
|
if (next) {
|
|
|
|
puts(",");
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
2019-03-26 21:40:43 +03:00
|
|
|
return 0;
|
2013-09-04 21:00:33 +04:00
|
|
|
}

static int get_block_status(BlockDriverState *bs, int64_t offset,
                            int64_t bytes, MapEntry *e)
{
    int ret;
    int depth;
    BlockDriverState *file;
    bool has_offset;
    int64_t map;
    char *filename = NULL;

    /* As an optimization, we could cache the current range of unallocated
     * clusters in each file of the chain, and avoid querying the same
     * range repeatedly.
     */

    depth = 0;
    for (;;) {
        bs = bdrv_skip_filters(bs);
        ret = bdrv_block_status(bs, offset, bytes, &bytes, &map, &file);
        if (ret < 0) {
            return ret;
        }
        assert(bytes);
        if (ret & (BDRV_BLOCK_ZERO|BDRV_BLOCK_DATA)) {
            break;
        }
        bs = bdrv_cow_bs(bs);
        if (bs == NULL) {
            ret = 0;
            break;
        }

        depth++;
    }

    has_offset = !!(ret & BDRV_BLOCK_OFFSET_VALID);

    if (file && has_offset) {
        bdrv_refresh_filename(file);
        filename = file->filename;
    }

    *e = (MapEntry) {
        .start = offset,
        .length = bytes,
        .data = !!(ret & BDRV_BLOCK_DATA),
        .zero = !!(ret & BDRV_BLOCK_ZERO),
        .offset = map,
        .has_offset = has_offset,
        .depth = depth,
        .present = !!(ret & BDRV_BLOCK_ALLOCATED),
        .has_filename = filename,
        .filename = filename,
    };

    return 0;
}

static inline bool entry_mergeable(const MapEntry *curr, const MapEntry *next)
{
    if (curr->length == 0) {
        return false;
    }
    if (curr->zero != next->zero ||
        curr->data != next->data ||
        curr->depth != next->depth ||
        curr->present != next->present ||
        curr->has_filename != next->has_filename ||
        curr->has_offset != next->has_offset) {
        return false;
    }
    if (curr->has_filename && strcmp(curr->filename, next->filename)) {
        return false;
    }
    if (curr->has_offset && curr->offset + curr->length != next->offset) {
        return false;
    }
    return true;
}

static int img_map(int argc, char **argv)
{
    int c;
    OutputFormat output_format = OFORMAT_HUMAN;
    BlockBackend *blk;
    BlockDriverState *bs;
    const char *filename, *fmt, *output;
    int64_t length;
    MapEntry curr = { .length = 0 }, next;
    int ret = 0;
    bool image_opts = false;
    bool force_share = false;
    int64_t start_offset = 0;
    int64_t max_length = -1;

    fmt = NULL;
    output = NULL;
    for (;;) {
        int option_index = 0;
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"format", required_argument, 0, 'f'},
            {"output", required_argument, 0, OPTION_OUTPUT},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {"force-share", no_argument, 0, 'U'},
            {"start-offset", required_argument, 0, 's'},
            {"max-length", required_argument, 0, 'l'},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, ":f:s:l:hU",
                        long_options, &option_index);
        if (c == -1) {
            break;
        }
        switch (c) {
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'U':
            force_share = true;
            break;
        case OPTION_OUTPUT:
            output = optarg;
            break;
        case 's':
            start_offset = cvtnum("start offset", optarg);
            if (start_offset < 0) {
                return 1;
            }
            break;
        case 'l':
            max_length = cvtnum("max length", optarg);
            if (max_length < 0) {
                return 1;
            }
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }
    if (optind != argc - 1) {
        error_exit("Expecting one image file name");
    }
    filename = argv[optind];

    if (output && !strcmp(output, "json")) {
        output_format = OFORMAT_JSON;
    } else if (output && !strcmp(output, "human")) {
        output_format = OFORMAT_HUMAN;
    } else if (output) {
        error_report("--output must be used with human or json as argument.");
        return 1;
    }

    blk = img_open(image_opts, filename, fmt, 0, false, false, force_share);
    if (!blk) {
        return 1;
    }
    bs = blk_bs(blk);

    if (output_format == OFORMAT_HUMAN) {
        printf("%-16s%-16s%-16s%s\n", "Offset", "Length", "Mapped to", "File");
    } else if (output_format == OFORMAT_JSON) {
        putchar('[');
    }

    length = blk_getlength(blk);
    if (length < 0) {
        error_report("Failed to get size for '%s'", filename);
        return 1;
    }
    if (max_length != -1) {
        length = MIN(start_offset + max_length, length);
    }

    curr.start = start_offset;
    while (curr.start + curr.length < length) {
        int64_t offset = curr.start + curr.length;
        int64_t n = length - offset;

        ret = get_block_status(bs, offset, n, &next);
        if (ret < 0) {
            error_report("Could not read file metadata: %s", strerror(-ret));
            goto out;
        }

        if (entry_mergeable(&curr, &next)) {
            curr.length += next.length;
            continue;
        }

        if (curr.length > 0) {
            ret = dump_map_entry(output_format, &curr, &next);
            if (ret < 0) {
                goto out;
            }
        }
        curr = next;
    }

    ret = dump_map_entry(output_format, &curr, NULL);
    if (output_format == OFORMAT_JSON) {
        puts("]");
    }

out:
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2013-09-04 21:00:33 +04:00
|
|
|
return ret < 0;
|
|
|
|
}

#define SNAPSHOT_LIST   1
#define SNAPSHOT_CREATE 2
#define SNAPSHOT_APPLY  3
#define SNAPSHOT_DELETE 4

static int img_snapshot(int argc, char **argv)
{
    BlockBackend *blk;
    BlockDriverState *bs;
    QEMUSnapshotInfo sn;
    char *filename, *snapshot_name = NULL;
    int c, ret = 0, bdrv_oflags;
    int action = 0;
    qemu_timeval tv;
    bool quiet = false;
    Error *err = NULL;
    bool image_opts = false;
    bool force_share = false;

    bdrv_oflags = BDRV_O_RDWR;
    /* Parse commandline parameters */
    for(;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {"force-share", no_argument, 0, 'U'},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, ":la:c:d:hqU",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch(c) {
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            return 0;
        case 'l':
            if (action) {
                error_exit("Cannot mix '-l', '-a', '-c', '-d'");
                return 0;
            }
            action = SNAPSHOT_LIST;
            bdrv_oflags &= ~BDRV_O_RDWR; /* no need for RW */
            break;
        case 'a':
            if (action) {
                error_exit("Cannot mix '-l', '-a', '-c', '-d'");
                return 0;
            }
            action = SNAPSHOT_APPLY;
            snapshot_name = optarg;
            break;
        case 'c':
            if (action) {
                error_exit("Cannot mix '-l', '-a', '-c', '-d'");
                return 0;
            }
            action = SNAPSHOT_CREATE;
            snapshot_name = optarg;
            break;
        case 'd':
            if (action) {
                error_exit("Cannot mix '-l', '-a', '-c', '-d'");
                return 0;
            }
            action = SNAPSHOT_DELETE;
            snapshot_name = optarg;
            break;
        case 'q':
            quiet = true;
            break;
        case 'U':
            force_share = true;
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }

    if (optind != argc - 1) {
        error_exit("Expecting one image file name");
    }
    filename = argv[optind++];

    /* Open the image */
    blk = img_open(image_opts, filename, NULL, bdrv_oflags, false, quiet,
                   force_share);
    if (!blk) {
        return 1;
    }
    bs = blk_bs(blk);

    /* Perform the requested action */
    switch(action) {
    case SNAPSHOT_LIST:
        dump_snapshots(bs);
        break;

    case SNAPSHOT_CREATE:
        memset(&sn, 0, sizeof(sn));
        pstrcpy(sn.name, sizeof(sn.name), snapshot_name);

        qemu_gettimeofday(&tv);
        sn.date_sec = tv.tv_sec;
        sn.date_nsec = tv.tv_usec * 1000;

        ret = bdrv_snapshot_create(bs, &sn);
        if (ret) {
            error_report("Could not create snapshot '%s': %d (%s)",
                snapshot_name, ret, strerror(-ret));
        }
        break;

    case SNAPSHOT_APPLY:
        ret = bdrv_snapshot_goto(bs, snapshot_name, &err);
        if (ret) {
            error_reportf_err(err, "Could not apply snapshot '%s': ",
                              snapshot_name);
        }
        break;

    case SNAPSHOT_DELETE:
        ret = bdrv_snapshot_find(bs, &sn, snapshot_name);
        if (ret < 0) {
            error_report("Could not delete snapshot '%s': snapshot not "
                         "found", snapshot_name);
            ret = 1;
        } else {
            ret = bdrv_snapshot_delete(bs, sn.id_str, sn.name, &err);
            if (ret < 0) {
                error_reportf_err(err, "Could not delete snapshot '%s': ",
                                  snapshot_name);
                ret = 1;
            }
        }
        break;
    }

    /* Cleanup */
    blk_unref(blk);
    if (ret) {
        return 1;
    }
    return 0;
}

2010-01-12 14:55:18 +03:00
|
|
|
static int img_rebase(int argc, char **argv)
|
|
|
|
{
|
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk = NULL, *blk_old_backing = NULL, *blk_new_backing = NULL;
|
2016-02-26 01:53:54 +03:00
|
|
|
uint8_t *buf_old = NULL;
|
|
|
|
uint8_t *buf_new = NULL;
|
2019-05-23 19:33:36 +03:00
|
|
|
BlockDriverState *bs = NULL, *prefix_chain_bs = NULL;
|
2019-06-12 20:00:30 +03:00
|
|
|
BlockDriverState *unfiltered_bs;
|
2010-01-12 14:55:18 +03:00
|
|
|
char *filename;
|
2014-07-23 00:58:42 +04:00
|
|
|
const char *fmt, *cache, *src_cache, *out_basefmt, *out_baseimg;
|
|
|
|
int c, flags, src_flags, ret;
|
2016-03-15 15:01:04 +03:00
|
|
|
bool writethrough, src_writethrough;
|
2010-01-12 14:55:18 +03:00
|
|
|
int unsafe = 0;
|
2017-05-02 19:35:39 +03:00
|
|
|
bool force_share = false;
|
2011-03-30 16:16:25 +04:00
|
|
|
int progress = 0;
|
2013-02-13 12:09:40 +04:00
|
|
|
bool quiet = false;
|
2013-09-05 16:45:29 +04:00
|
|
|
Error *local_err = NULL;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2010-01-12 14:55:18 +03:00
|
|
|
|
|
|
|
/* Parse commandline parameters */
|
2010-03-02 14:14:31 +03:00
|
|
|
fmt = NULL;
|
2011-06-20 20:48:19 +04:00
|
|
|
cache = BDRV_DEFAULT_CACHE;
|
2014-07-23 00:58:42 +04:00
|
|
|
src_cache = BDRV_DEFAULT_CACHE;
|
2010-01-12 14:55:18 +03:00
|
|
|
out_baseimg = NULL;
|
|
|
|
out_basefmt = NULL;
|
|
|
|
for(;;) {
|
2016-02-17 13:10:17 +03:00
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2017-05-02 19:35:39 +03:00
|
|
|
{"force-share", no_argument, 0, 'U'},
|
2016-02-17 13:10:17 +03:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
2017-05-02 19:35:39 +03:00
|
|
|
c = getopt_long(argc, argv, ":hf:F:b:upt:T:qU",
|
2016-02-17 13:10:17 +03:00
|
|
|
long_options, NULL);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (c == -1) {
|
2010-01-12 14:55:18 +03:00
|
|
|
break;
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
switch(c) {
|
2017-03-17 13:45:41 +03:00
|
|
|
case ':':
|
|
|
|
missing_argument(argv[optind - 1]);
|
|
|
|
break;
|
2010-12-06 17:25:40 +03:00
|
|
|
case '?':
|
2017-03-17 13:45:41 +03:00
|
|
|
unrecognized_option(argv[optind - 1]);
|
|
|
|
break;
|
2010-01-12 14:55:18 +03:00
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
return 0;
|
2010-03-02 14:14:31 +03:00
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
2010-01-12 14:55:18 +03:00
|
|
|
case 'F':
|
|
|
|
out_basefmt = optarg;
|
|
|
|
break;
|
|
|
|
case 'b':
|
|
|
|
out_baseimg = optarg;
|
|
|
|
break;
|
|
|
|
case 'u':
|
|
|
|
unsafe = 1;
|
|
|
|
break;
|
2011-03-30 16:16:25 +04:00
|
|
|
case 'p':
|
|
|
|
progress = 1;
|
|
|
|
break;
|
2011-06-20 20:48:19 +04:00
|
|
|
case 't':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
2014-07-23 00:58:42 +04:00
|
|
|
case 'T':
|
|
|
|
src_cache = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:40 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2021-02-17 14:56:45 +03:00
|
|
|
case OPTION_OBJECT:
|
|
|
|
user_creatable_process_cmdline(optarg);
|
|
|
|
break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2017-05-02 19:35:39 +03:00
|
|
|
case 'U':
|
|
|
|
force_share = true;
|
|
|
|
break;
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-02-13 12:09:40 +04:00
|
|
|
if (quiet) {
|
|
|
|
progress = 0;
|
|
|
|
}
|
|
|
|
|
2014-04-22 09:36:11 +04:00
|
|
|
if (optind != argc - 1) {
|
|
|
|
error_exit("Expecting one image file name");
|
|
|
|
}
|
|
|
|
if (!unsafe && !out_baseimg) {
|
|
|
|
error_exit("Must specify backing file (-b) or use unsafe mode (-u)");
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
filename = argv[optind++];
|
|
|
|
|
2011-03-30 16:16:25 +04:00
|
|
|
qemu_progress_init(progress, 2.0);
|
|
|
|
qemu_progress_print(0, 100);
|
|
|
|
|
2011-06-20 20:48:19 +04:00
|
|
|
flags = BDRV_O_RDWR | (unsafe ? BDRV_O_NO_BACKING : 0);
|
2016-03-15 15:01:04 +03:00
|
|
|
ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
|
2011-06-20 20:48:19 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid cache option: %s", cache);
|
2014-08-26 22:17:56 +04:00
|
|
|
goto out;
|
2011-06-20 20:48:19 +04:00
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
src_flags = 0;
|
|
|
|
ret = bdrv_parse_cache_mode(src_cache, &src_flags, &src_writethrough);
|
2014-07-23 00:58:42 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid source cache option: %s", src_cache);
|
2014-08-26 22:17:56 +04:00
|
|
|
goto out;
|
2014-07-23 00:58:42 +04:00
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
/* The source files are opened read-only, don't care about WCE */
|
|
|
|
assert((src_flags & BDRV_O_RDWR) == 0);
|
|
|
|
(void) src_writethrough;
|
|
|
|
|
2010-01-12 14:55:18 +03:00
|
|
|
/*
|
|
|
|
* Open the images.
|
|
|
|
*
|
|
|
|
* Ignore the old backing file for unsafe rebase in case we want to correct
|
|
|
|
* the reference to a renamed or moved backing file.
|
|
|
|
*/
|
2017-05-02 19:35:39 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet,
|
|
|
|
false);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
2014-08-26 22:17:56 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2010-01-12 14:55:18 +03:00
|
|
|
|
2019-06-12 20:00:30 +03:00
|
|
|
unfiltered_bs = bdrv_skip_filters(bs);
|
|
|
|
|
2010-01-12 14:55:18 +03:00
|
|
|
if (out_basefmt != NULL) {
|
2015-02-05 21:58:17 +03:00
|
|
|
if (bdrv_find_format(out_basefmt) == NULL) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Invalid format name: '%s'", out_basefmt);
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* For safe rebasing we need to compare old and new backing file */
|
2014-08-26 22:17:56 +04:00
|
|
|
if (!unsafe) {
|
2015-02-05 21:58:17 +03:00
|
|
|
QDict *options = NULL;
|
2019-06-12 20:00:30 +03:00
|
|
|
BlockDriverState *base_bs = bdrv_cow_bs(unfiltered_bs);
|
2015-02-05 21:58:17 +03:00
|
|
|
|
2019-05-23 19:33:35 +03:00
|
|
|
if (base_bs) {
|
2019-04-25 15:25:10 +03:00
|
|
|
blk_old_backing = blk_new(qemu_get_aio_context(),
|
|
|
|
BLK_PERM_CONSISTENT_READ,
|
2019-05-23 19:33:35 +03:00
|
|
|
BLK_PERM_ALL);
|
|
|
|
ret = blk_insert_bs(blk_old_backing, base_bs,
|
|
|
|
&local_err);
|
|
|
|
if (ret < 0) {
|
2019-05-09 20:52:35 +03:00
|
|
|
error_reportf_err(local_err,
|
2019-05-23 19:33:35 +03:00
|
|
|
"Could not reuse old backing file '%s': ",
|
|
|
|
base_bs->filename);
|
2019-05-09 20:52:35 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
blk_old_backing = NULL;
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
2015-02-05 21:58:17 +03:00
|
|
|
|
2012-10-16 16:46:18 +04:00
|
|
|
if (out_baseimg[0]) {
|
2018-05-09 21:20:01 +03:00
|
|
|
const char *overlay_filename;
|
|
|
|
char *out_real_path;
|
|
|
|
|
2017-05-02 19:35:39 +03:00
|
|
|
options = qdict_new();
|
2015-02-05 21:58:17 +03:00
|
|
|
if (out_basefmt) {
|
2017-04-28 00:58:17 +03:00
|
|
|
qdict_put_str(options, "driver", out_basefmt);
|
2017-05-02 19:35:39 +03:00
|
|
|
}
|
|
|
|
if (force_share) {
|
|
|
|
                qdict_put_bool(options, BDRV_OPT_FORCE_SHARE, true);
            }

            bdrv_refresh_filename(bs);
            overlay_filename = bs->exact_filename[0] ? bs->exact_filename
                                                     : bs->filename;
            out_real_path =
                bdrv_get_full_backing_filename_from_filename(overlay_filename,
                                                             out_baseimg,
                                                             &local_err);
            if (local_err) {
                qobject_unref(options);
                error_reportf_err(local_err,
                                  "Could not resolve backing filename: ");
                ret = -1;
                goto out;
            }

            /*
             * Find out whether we rebase an image on top of a previous image
             * in its chain.
             */
            prefix_chain_bs = bdrv_find_backing_image(bs, out_real_path);
            if (prefix_chain_bs) {
                qobject_unref(options);
                g_free(out_real_path);

                blk_new_backing = blk_new(qemu_get_aio_context(),
                                          BLK_PERM_CONSISTENT_READ,
                                          BLK_PERM_ALL);
                ret = blk_insert_bs(blk_new_backing, prefix_chain_bs,
                                    &local_err);
                if (ret < 0) {
                    error_reportf_err(local_err,
                                      "Could not reuse backing file '%s': ",
                                      out_baseimg);
                    goto out;
                }
            } else {
                blk_new_backing = blk_new_open(out_real_path, NULL,
                                               options, src_flags, &local_err);
                g_free(out_real_path);
                if (!blk_new_backing) {
                    error_reportf_err(local_err,
                                      "Could not open new backing file '%s': ",
                                      out_baseimg);
                    ret = -1;
                    goto out;
                }
            }
        }
    }

    /*
     * Check each unallocated cluster in the COW file. If it is unallocated,
     * accesses go to the backing file. We must therefore compare this cluster
     * in the old and new backing file, and if they differ we need to copy it
     * from the old backing file into the COW file.
     *
     * If qemu-img crashes during this step, no harm is done. The content of
     * the image is the same as the original one at any time.
     */
    if (!unsafe) {
        int64_t size;
        int64_t old_backing_size = 0;
        int64_t new_backing_size = 0;
        uint64_t offset;
        int64_t n;
        float local_progress = 0;

        buf_old = blk_blockalign(blk, IO_BUF_SIZE);
        buf_new = blk_blockalign(blk, IO_BUF_SIZE);

        size = blk_getlength(blk);
        if (size < 0) {
            error_report("Could not get size of '%s': %s",
                         filename, strerror(-size));
            ret = -1;
            goto out;
        }
        if (blk_old_backing) {
            old_backing_size = blk_getlength(blk_old_backing);
            if (old_backing_size < 0) {
                char backing_name[PATH_MAX];

                bdrv_get_backing_filename(bs, backing_name,
                                          sizeof(backing_name));
                error_report("Could not get size of '%s': %s",
                             backing_name, strerror(-old_backing_size));
                ret = -1;
                goto out;
            }
        }
        if (blk_new_backing) {
            new_backing_size = blk_getlength(blk_new_backing);
            if (new_backing_size < 0) {
                error_report("Could not get size of '%s': %s",
                             out_baseimg, strerror(-new_backing_size));
                ret = -1;
                goto out;
            }
        }

        if (size != 0) {
            local_progress = (float)100 / (size / MIN(size, IO_BUF_SIZE));
        }

        for (offset = 0; offset < size; offset += n) {
            bool buf_old_is_zero = false;

            /* How many bytes can we handle with the next read? */
            n = MIN(IO_BUF_SIZE, size - offset);

            /* If the cluster is allocated, we don't need to take action */
            ret = bdrv_is_allocated(unfiltered_bs, offset, n, &n);
            if (ret < 0) {
                error_report("error while reading image metadata: %s",
                             strerror(-ret));
                goto out;
            }
            if (ret) {
                continue;
            }

            if (prefix_chain_bs) {
                /*
                 * If cluster wasn't changed since prefix_chain, we don't need
                 * to take action
                 */
                ret = bdrv_is_allocated_above(bdrv_cow_bs(unfiltered_bs),
                                              prefix_chain_bs, false,
                                              offset, n, &n);
                if (ret < 0) {
                    error_report("error while reading image metadata: %s",
                                 strerror(-ret));
                    goto out;
                }
                if (!ret) {
                    continue;
                }
            }

            /*
             * Read old and new backing file and take into consideration that
             * backing files may be smaller than the COW image.
             */
            if (offset >= old_backing_size) {
                memset(buf_old, 0, n);
                buf_old_is_zero = true;
            } else {
                if (offset + n > old_backing_size) {
                    n = old_backing_size - offset;
                }

                ret = blk_pread(blk_old_backing, offset, buf_old, n);
                if (ret < 0) {
                    error_report("error while reading from old backing file");
                    goto out;
                }
            }

            if (offset >= new_backing_size || !blk_new_backing) {
                memset(buf_new, 0, n);
            } else {
                if (offset + n > new_backing_size) {
                    n = new_backing_size - offset;
                }

                ret = blk_pread(blk_new_backing, offset, buf_new, n);
                if (ret < 0) {
                    error_report("error while reading from new backing file");
                    goto out;
                }
            }

            /* If they differ, we need to write to the COW file */
            uint64_t written = 0;

            while (written < n) {
                int64_t pnum;

                if (compare_buffers(buf_old + written, buf_new + written,
                                    n - written, &pnum))
                {
                    if (buf_old_is_zero) {
                        ret = blk_pwrite_zeroes(blk, offset + written, pnum, 0);
                    } else {
                        ret = blk_pwrite(blk, offset + written,
                                         buf_old + written, pnum, 0);
                    }
                    if (ret < 0) {
                        error_report("Error while writing to COW image: %s",
                                     strerror(-ret));
                        goto out;
                    }
                }

                written += pnum;
            }
            qemu_progress_print(local_progress, 100);
        }
    }

    /*
     * Change the backing file. All clusters that are different from the old
     * backing file are overwritten in the COW file now, so the visible content
     * doesn't change when we switch the backing file.
     */
    if (out_baseimg && *out_baseimg) {
        ret = bdrv_change_backing_file(unfiltered_bs, out_baseimg, out_basefmt,
                                       true);
    } else {
        ret = bdrv_change_backing_file(unfiltered_bs, NULL, NULL, false);
    }

    if (ret == -ENOSPC) {
        error_report("Could not change the backing file to '%s': No "
                     "space left in the file header", out_baseimg);
    } else if (ret == -EINVAL && out_baseimg && !out_basefmt) {
        error_report("Could not change the backing file to '%s': backing "
                     "format must be specified", out_baseimg);
    } else if (ret < 0) {
        error_report("Could not change the backing file to '%s': %s",
                     out_baseimg, strerror(-ret));
    }

    qemu_progress_print(100, 0);
    /*
     * TODO At this point it is possible to check if any clusters that are
     * allocated in the COW file are the same in the backing file. If so, they
     * could be dropped from the COW file. Don't do this before switching the
     * backing file, in case of a crash this would lead to corruption.
     */
out:
    qemu_progress_end();
    /* Cleanup */
    if (!unsafe) {
        blk_unref(blk_old_backing);
        blk_unref(blk_new_backing);
    }
    qemu_vfree(buf_old);
    qemu_vfree(buf_new);

    blk_unref(blk);
    if (ret) {
        return 1;
    }
    return 0;
}
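As a standalone illustration of the rebase loop's inner comparison (the `while (written < n)` walk above): the helper names below are hypothetical, not qemu-img functions, but `find_diff_run()` mimics what `compare_buffers()` does — report whether the run at the start of the buffer differs and how long that run is — and `count_diff_bytes()` mimics the walk that would `blk_pwrite()` each differing run.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Report whether the run starting at byte 0 differs between the two
 * buffers; *pnum receives the length of that homogeneous run.
 * (Sketch of compare_buffers(); names are illustrative only.)
 */
static bool find_diff_run(const uint8_t *buf_old, const uint8_t *buf_new,
                          int64_t n, int64_t *pnum)
{
    bool differ = (n > 0 && buf_old[0] != buf_new[0]);
    int64_t i;

    for (i = 1; i < n; i++) {
        /* Stop as soon as the differ/match state flips */
        if ((buf_old[i] != buf_new[i]) != differ) {
            break;
        }
    }
    *pnum = i;
    return differ;
}

/*
 * Walk the buffers exactly like the rebase loop: advance run by run,
 * "writing" (here: counting) only the runs that differ.
 */
static int64_t count_diff_bytes(const uint8_t *buf_old,
                                const uint8_t *buf_new, int64_t n)
{
    int64_t written = 0, copied = 0, pnum;

    while (written < n) {
        if (find_diff_run(buf_old + written, buf_new + written,
                          n - written, &pnum)) {
            copied += pnum;   /* qemu-img would blk_pwrite() this run */
        }
        written += pnum;
    }
    return copied;
}
```

Only the differing runs get rewritten into the COW file, which is why a rebase between mostly identical backing files touches very little data.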

static int img_resize(int argc, char **argv)
{
    Error *err = NULL;
    int c, ret, relative;
    const char *filename, *fmt, *size;
    int64_t n, total_size, current_size;
    bool quiet = false;
    BlockBackend *blk = NULL;
    PreallocMode prealloc = PREALLOC_MODE_OFF;
    QemuOpts *param;

    static QemuOptsList resize_options = {
        .name = "resize_options",
        .head = QTAILQ_HEAD_INITIALIZER(resize_options.head),
        .desc = {
            {
                .name = BLOCK_OPT_SIZE,
                .type = QEMU_OPT_SIZE,
                .help = "Virtual disk size"
            }, {
                /* end of list */
            }
        },
    };
    bool image_opts = false;
    bool shrink = false;

    /* Remove size from argv manually so that negative numbers are not treated
     * as options by getopt. */
    if (argc < 3) {
        error_exit("Not enough arguments");
        return 1;
    }

    size = argv[--argc];

    /* Parse getopt arguments */
    fmt = NULL;
    for(;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {"preallocation", required_argument, 0, OPTION_PREALLOCATION},
            {"shrink", no_argument, 0, OPTION_SHRINK},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, ":f:hq",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch(c) {
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'q':
            quiet = true;
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        case OPTION_PREALLOCATION:
            prealloc = qapi_enum_parse(&PreallocMode_lookup, optarg,
                                       PREALLOC_MODE__MAX, NULL);
            if (prealloc == PREALLOC_MODE__MAX) {
                error_report("Invalid preallocation mode '%s'", optarg);
                return 1;
            }
            break;
        case OPTION_SHRINK:
            shrink = true;
            break;
        }
    }
    if (optind != argc - 1) {
        error_exit("Expecting image file name and size");
    }
    filename = argv[optind++];

    /* Choose grow, shrink, or absolute resize mode */
    switch (size[0]) {
    case '+':
        relative = 1;
        size++;
        break;
    case '-':
        relative = -1;
        size++;
        break;
    default:
        relative = 0;
        break;
    }

    /* Parse size */
    param = qemu_opts_create(&resize_options, NULL, 0, &error_abort);
    if (!qemu_opt_set(param, BLOCK_OPT_SIZE, size, &err)) {
        error_report_err(err);
        ret = -1;
        qemu_opts_del(param);
        goto out;
    }
    n = qemu_opt_get_size(param, BLOCK_OPT_SIZE, 0);
    qemu_opts_del(param);

    blk = img_open(image_opts, filename, fmt,
                   BDRV_O_RDWR | BDRV_O_RESIZE, false, quiet,
                   false);
    if (!blk) {
        ret = -1;
        goto out;
    }

    current_size = blk_getlength(blk);
    if (current_size < 0) {
        error_report("Failed to inquire current image length: %s",
                     strerror(-current_size));
        ret = -1;
        goto out;
    }

    if (relative) {
        total_size = current_size + n * relative;
    } else {
        total_size = n;
    }
    if (total_size <= 0) {
        error_report("New image size must be positive");
        ret = -1;
        goto out;
    }

    if (total_size <= current_size && prealloc != PREALLOC_MODE_OFF) {
        error_report("Preallocation can only be used for growing images");
        ret = -1;
        goto out;
    }

    if (total_size < current_size && !shrink) {
        error_report("Use the --shrink option to perform a shrink operation.");
        warn_report("Shrinking an image will delete all data beyond the "
                    "shrunken image's end. Before performing such an "
                    "operation, make sure there is no important data there.");
        ret = -1;
        goto out;
    }

    /*
     * The user expects the image to have the desired size after
     * resizing, so pass @exact=true.  It is of no use to report
     * success when the image has not actually been resized.
     */
    ret = blk_truncate(blk, total_size, true, prealloc, 0, &err);
    if (!ret) {
        qprintf(quiet, "Image resized.\n");
    } else {
        error_report_err(err);
    }
out:
    blk_unref(blk);
    if (ret) {
        return 1;
    }
    return 0;
}
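The grow/shrink/absolute selection above can be sketched on its own: a leading `+` or `-` on the size argument makes the resize relative to the current image length, otherwise the argument is the new absolute size. The helper names below are hypothetical, purely for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Classify the size argument's sign prefix; returns the digits past it */
static const char *parse_resize_sign(const char *size, int *relative)
{
    switch (size[0]) {
    case '+':
        *relative = 1;
        return size + 1;
    case '-':
        *relative = -1;
        return size + 1;
    default:
        *relative = 0;
        return size;
    }
}

/* Mirror of img_resize(): relative resizes offset the current size */
static int64_t resize_total_size(int64_t current_size, int relative,
                                 int64_t n)
{
    return relative ? current_size + n * relative : n;
}
```

So `qemu-img resize img.qcow2 +16G` grows by 16 GiB, `-16G` shrinks by 16 GiB (requiring `--shrink`), and a bare `16G` sets the size outright.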

static void amend_status_cb(BlockDriverState *bs,
                            int64_t offset, int64_t total_work_size,
                            void *opaque)
{
    qemu_progress_print(100.f * offset / total_work_size, 0);
}

static int print_amend_option_help(const char *format)
{
    BlockDriver *drv;

    /* Find driver and parse its options */
    drv = bdrv_find_format(format);
    if (!drv) {
        error_report("Unknown file format '%s'", format);
        return 1;
    }

    if (!drv->bdrv_amend_options) {
        error_report("Format driver '%s' does not support option amendment",
                     format);
        return 1;
    }

    /* Every driver supporting amendment must have amend_opts */
    assert(drv->amend_opts);

    printf("Amend options for '%s':\n", format);
    qemu_opts_print_help(drv->amend_opts, false);
    return 0;
}

static int img_amend(int argc, char **argv)
{
    Error *err = NULL;
    int c, ret = 0;
    char *options = NULL;
    QemuOptsList *amend_opts = NULL;
    QemuOpts *opts = NULL;
    const char *fmt = NULL, *filename, *cache;
    int flags;
    bool writethrough;
    bool quiet = false, progress = false;
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk = NULL;
|
2013-09-03 12:09:50 +04:00
|
|
|
BlockDriverState *bs = NULL;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2020-06-25 15:55:38 +03:00
|
|
|
bool force = false;
|
2013-09-03 12:09:50 +04:00
|
|
|
|
2014-07-23 00:58:43 +04:00
|
|
|
cache = BDRV_DEFAULT_CACHE;
|
2013-09-03 12:09:50 +04:00
|
|
|
for (;;) {
|
2016-02-17 13:10:17 +03:00
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2020-06-25 15:55:38 +03:00
|
|
|
{"force", no_argument, 0, OPTION_FORCE},
|
2016-02-17 13:10:17 +03:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
2017-03-17 13:45:41 +03:00
|
|
|
c = getopt_long(argc, argv, ":ho:f:t:pq",
|
2016-02-17 13:10:17 +03:00
|
|
|
long_options, NULL);
|
2013-09-03 12:09:50 +04:00
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (c) {
|
2017-03-17 13:45:41 +03:00
|
|
|
case ':':
|
|
|
|
missing_argument(argv[optind - 1]);
|
|
|
|
break;
|
2017-03-17 13:45:40 +03:00
|
|
|
case '?':
|
2017-03-17 13:45:41 +03:00
|
|
|
unrecognized_option(argv[optind - 1]);
|
|
|
|
break;
|
|
|
|
case 'h':
|
2017-03-17 13:45:40 +03:00
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'o':
|
2020-04-15 10:49:25 +03:00
|
|
|
if (accumulate_options(&options, optarg) < 0) {
|
2017-03-17 13:45:40 +03:00
|
|
|
ret = -1;
|
|
|
|
goto out_no_progress;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
|
|
|
case 't':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
|
|
|
case 'p':
|
|
|
|
progress = true;
|
|
|
|
break;
|
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
|
|
|
case OPTION_OBJECT:
|
2021-02-17 14:56:45 +03:00
|
|
|
user_creatable_process_cmdline(optarg);
|
2017-03-17 13:45:40 +03:00
|
|
|
break;
|
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2020-06-25 15:55:38 +03:00
|
|
|
case OPTION_FORCE:
|
|
|
|
force = true;
|
|
|
|
break;
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-02-21 19:24:07 +04:00
|
|
|
if (!options) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Must specify options (-o)");
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
|
2014-10-27 13:12:51 +03:00
|
|
|
if (quiet) {
|
|
|
|
progress = false;
|
|
|
|
}
|
|
|
|
qemu_progress_init(progress, 1.0);
|
|
|
|
|
2014-02-21 19:24:07 +04:00
|
|
|
filename = (optind == argc - 1) ? argv[argc - 1] : NULL;
|
|
|
|
if (fmt && has_help_option(options)) {
|
|
|
|
/* If a format is explicitly specified (and possibly no filename is
|
|
|
|
* given), print option help here */
|
2018-05-10 00:00:20 +03:00
|
|
|
ret = print_amend_option_help(fmt);
|
2014-02-21 19:24:07 +04:00
|
|
|
goto out;
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
|
2014-02-21 19:24:07 +04:00
|
|
|
if (optind != argc - 1) {
|
2014-10-27 13:12:52 +03:00
|
|
|
error_report("Expecting one image file name");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2014-02-21 19:24:07 +04:00
|
|
|
}
|
2013-09-03 12:09:50 +04:00
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
flags = BDRV_O_RDWR;
|
|
|
|
ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
|
2014-07-23 00:58:43 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid cache option: %s", cache);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2017-05-02 19:35:39 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet,
|
|
|
|
false);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
2013-09-03 12:09:50 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2013-09-03 12:09:50 +04:00
|
|
|
|
|
|
|
fmt = bs->drv->format_name;
|
|
|
|
|
2014-02-21 19:24:06 +04:00
|
|
|
if (has_help_option(options)) {
|
2014-02-21 19:24:07 +04:00
|
|
|
/* If the format was auto-detected, print option help here */
|
2018-05-10 00:00:20 +03:00
|
|
|
ret = print_amend_option_help(fmt);
|
2013-09-03 12:09:50 +04:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2018-05-10 00:00:17 +03:00
|
|
|
if (!bs->drv->bdrv_amend_options) {
|
|
|
|
error_report("Format driver '%s' does not support option amendment",
|
2014-12-02 20:32:47 +03:00
|
|
|
fmt);
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2020-06-25 15:55:39 +03:00
|
|
|
/* Every driver supporting amendment must have amend_opts */
|
|
|
|
assert(bs->drv->amend_opts);
|
2018-05-10 00:00:17 +03:00
|
|
|
|
2020-06-25 15:55:39 +03:00
|
|
|
amend_opts = qemu_opts_append(amend_opts, bs->drv->amend_opts);
|
|
|
|
opts = qemu_opts_create(amend_opts, NULL, 0, &error_abort);
|
qemu-option: Use returned bool to check for failure
The previous commit enables conversion of
foo(..., &err);
if (err) {
...
}
to
if (!foo(..., &err)) {
...
}
for QemuOpts functions that now return true / false on success /
error. Coccinelle script:
@@
identifier fun = {
opts_do_parse, parse_option_bool, parse_option_number,
parse_option_size, qemu_opt_parse, qemu_opt_rename, qemu_opt_set,
qemu_opt_set_bool, qemu_opt_set_number, qemu_opts_absorb_qdict,
qemu_opts_do_parse, qemu_opts_from_qdict_entry, qemu_opts_set,
qemu_opts_validate
};
expression list args, args2;
typedef Error;
Error *err;
@@
- fun(args, &err, args2);
- if (err)
+ if (!fun(args, &err, args2))
{
...
}
A few line breaks tidied up manually.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200707160613.848843-15-armbru@redhat.com>
[Conflict with commit 0b6786a9c1 "block/amend: refactor qcow2 amend
options" resolved by rerunning Coccinelle on master's version]
2020-07-07 19:05:42 +03:00
|
|
|
if (!qemu_opts_do_parse(opts, options, NULL, &err)) {
|
2020-06-25 15:55:40 +03:00
|
|
|
/* Try to parse options using the create options */
|
|
|
|
amend_opts = qemu_opts_append(amend_opts, bs->drv->create_opts);
|
|
|
|
qemu_opts_del(opts);
|
|
|
|
opts = qemu_opts_create(amend_opts, NULL, 0, &error_abort);
|
2020-07-07 19:06:11 +03:00
|
|
|
if (qemu_opts_do_parse(opts, options, NULL, NULL)) {
|
2020-06-25 15:55:40 +03:00
|
|
|
error_append_hint(&err,
|
|
|
|
"This option is only supported for image creation\n");
|
|
|
|
}
|
|
|
|
|
2017-01-04 17:56:24 +03:00
|
|
|
error_report_err(err);
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
|
2014-10-27 13:12:51 +03:00
|
|
|
/* In case the driver does not call amend_status_cb() */
|
|
|
|
qemu_progress_print(0.f, 0);
|
2020-06-25 15:55:38 +03:00
|
|
|
ret = bdrv_amend_options(bs, opts, &amend_status_cb, NULL, force, &err);
|
2014-10-27 13:12:51 +03:00
|
|
|
qemu_progress_print(100.f, 0);
|
2013-09-03 12:09:50 +04:00
|
|
|
if (ret < 0) {
|
2018-05-10 00:00:18 +03:00
|
|
|
error_report_err(err);
|
2013-09-03 12:09:50 +04:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
2014-10-27 13:12:51 +03:00
|
|
|
qemu_progress_end();
|
|
|
|
|
2015-08-21 02:00:38 +03:00
|
|
|
out_no_progress:
|
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2014-06-05 13:20:51 +04:00
|
|
|
qemu_opts_del(opts);
|
2020-06-25 15:55:39 +03:00
|
|
|
qemu_opts_free(amend_opts);
|
2014-02-21 19:24:06 +04:00
|
|
|
g_free(options);
|
|
|
|
|
2013-09-03 12:09:50 +04:00
|
|
|
if (ret) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
typedef struct BenchData {
|
|
|
|
BlockBackend *blk;
|
|
|
|
uint64_t image_size;
|
2015-07-10 19:09:18 +03:00
|
|
|
bool write;
|
2014-08-05 16:17:13 +04:00
|
|
|
int bufsize;
|
2015-07-13 14:13:17 +03:00
|
|
|
int step;
|
2014-08-05 16:17:13 +04:00
|
|
|
int nrreq;
|
|
|
|
int n;
|
2016-06-03 14:59:41 +03:00
|
|
|
int flush_interval;
|
|
|
|
bool drain_on_flush;
|
2014-08-05 16:17:13 +04:00
|
|
|
uint8_t *buf;
|
|
|
|
QEMUIOVector *qiov;
|
|
|
|
|
|
|
|
int in_flight;
|
2016-06-03 14:59:41 +03:00
|
|
|
bool in_flush;
|
2014-08-05 16:17:13 +04:00
|
|
|
uint64_t offset;
|
|
|
|
} BenchData;
|
|
|
|
|
2016-06-03 14:59:41 +03:00
|
|
|
static void bench_undrained_flush_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
if (ret < 0) {
|
2016-08-03 14:37:51 +03:00
|
|
|
error_report("Failed flush request: %s", strerror(-ret));
|
2016-06-03 14:59:41 +03:00
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
static void bench_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
BenchData *b = opaque;
|
|
|
|
BlockAIOCB *acb;
|
|
|
|
|
|
|
|
if (ret < 0) {
|
2016-08-03 14:37:51 +03:00
|
|
|
error_report("Failed request: %s", strerror(-ret));
|
2014-08-05 16:17:13 +04:00
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
2016-06-03 14:59:41 +03:00
|
|
|
|
|
|
|
if (b->in_flush) {
|
|
|
|
/* Just finished a flush with drained queue: Start next requests */
|
|
|
|
assert(b->in_flight == 0);
|
|
|
|
b->in_flush = false;
|
|
|
|
} else if (b->in_flight > 0) {
|
|
|
|
int remaining = b->n - b->in_flight;
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
b->n--;
|
|
|
|
b->in_flight--;
|
2016-06-03 14:59:41 +03:00
|
|
|
|
|
|
|
/* Time for flush? Drain queue if requested, then flush */
|
|
|
|
if (b->flush_interval && remaining % b->flush_interval == 0) {
|
|
|
|
if (!b->in_flight || !b->drain_on_flush) {
|
|
|
|
BlockCompletionFunc *cb;
|
|
|
|
|
|
|
|
if (b->drain_on_flush) {
|
|
|
|
b->in_flush = true;
|
|
|
|
cb = bench_cb;
|
|
|
|
} else {
|
|
|
|
cb = bench_undrained_flush_cb;
|
|
|
|
}
|
|
|
|
|
|
|
|
acb = blk_aio_flush(b->blk, cb, b);
|
|
|
|
if (!acb) {
|
|
|
|
error_report("Failed to issue flush request");
|
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (b->drain_on_flush) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
while (b->n > b->in_flight && b->in_flight < b->nrreq) {
|
2016-12-07 18:08:27 +03:00
|
|
|
int64_t offset = b->offset;
|
|
|
|
/* blk_aio_* might look for completed I/Os and kick bench_cb
|
|
|
|
* again, so make sure this operation is counted by in_flight
|
|
|
|
* and b->offset is ready for the next submission.
|
|
|
|
*/
|
|
|
|
b->in_flight++;
|
|
|
|
b->offset += b->step;
|
|
|
|
b->offset %= b->image_size;
|
2015-07-10 19:09:18 +03:00
|
|
|
if (b->write) {
|
2016-12-07 18:08:27 +03:00
|
|
|
acb = blk_aio_pwritev(b->blk, offset, b->qiov, 0, bench_cb, b);
|
2015-07-10 19:09:18 +03:00
|
|
|
} else {
|
2016-12-07 18:08:27 +03:00
|
|
|
acb = blk_aio_preadv(b->blk, offset, b->qiov, 0, bench_cb, b);
|
2015-07-10 19:09:18 +03:00
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
if (!acb) {
|
|
|
|
error_report("Failed to issue request");
|
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int img_bench(int argc, char **argv)
|
|
|
|
{
|
|
|
|
int c, ret = 0;
|
|
|
|
const char *fmt = NULL, *filename;
|
|
|
|
bool quiet = false;
|
|
|
|
bool image_opts = false;
|
2015-07-10 19:09:18 +03:00
|
|
|
bool is_write = false;
|
2014-08-05 16:17:13 +04:00
|
|
|
int count = 75000;
|
|
|
|
int depth = 64;
|
2015-07-10 19:09:18 +03:00
|
|
|
int64_t offset = 0;
|
2014-08-05 16:17:13 +04:00
|
|
|
size_t bufsize = 4096;
|
2015-07-10 19:09:18 +03:00
|
|
|
int pattern = 0;
|
2015-07-13 14:13:17 +03:00
|
|
|
size_t step = 0;
|
2016-06-03 14:59:41 +03:00
|
|
|
int flush_interval = 0;
|
|
|
|
bool drain_on_flush = true;
|
2014-08-05 16:17:13 +04:00
|
|
|
int64_t image_size;
|
|
|
|
BlockBackend *blk = NULL;
|
|
|
|
BenchData data = {};
|
|
|
|
int flags = 0;
|
2016-06-14 12:29:32 +03:00
|
|
|
bool writethrough = false;
|
2014-08-05 16:17:13 +04:00
|
|
|
struct timeval t1, t2;
|
|
|
|
int i;
|
2017-05-02 19:35:39 +03:00
|
|
|
bool force_share = false;
|
2018-01-16 09:08:58 +03:00
|
|
|
size_t buf_size;
|
2014-08-05 16:17:13 +04:00
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
2016-06-03 14:59:41 +03:00
|
|
|
{"flush-interval", required_argument, 0, OPTION_FLUSH_INTERVAL},
|
2014-08-05 16:17:13 +04:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2015-07-10 19:09:18 +03:00
|
|
|
{"pattern", required_argument, 0, OPTION_PATTERN},
|
2016-06-03 14:59:41 +03:00
|
|
|
{"no-drain", no_argument, 0, OPTION_NO_DRAIN},
|
2017-05-02 19:35:39 +03:00
|
|
|
{"force-share", no_argument, 0, 'U'},
|
2014-08-05 16:17:13 +04:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
2020-01-20 17:18:55 +03:00
|
|
|
c = getopt_long(argc, argv, ":hc:d:f:ni:o:qs:S:t:wU", long_options,
|
|
|
|
NULL);
|
2014-08-05 16:17:13 +04:00
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (c) {
|
2017-03-17 13:45:41 +03:00
|
|
|
case ':':
|
|
|
|
missing_argument(argv[optind - 1]);
|
|
|
|
break;
|
2014-08-05 16:17:13 +04:00
|
|
|
case '?':
|
2017-03-17 13:45:41 +03:00
|
|
|
unrecognized_option(argv[optind - 1]);
|
|
|
|
break;
|
|
|
|
case 'h':
|
2014-08-05 16:17:13 +04:00
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'c':
|
|
|
|
{
|
2017-02-10 19:28:23 +03:00
|
|
|
unsigned long res;
|
|
|
|
|
|
|
|
if (qemu_strtoul(optarg, NULL, 0, &res) < 0 || res > INT_MAX) {
|
2014-08-05 16:17:13 +04:00
|
|
|
error_report("Invalid request count specified");
|
|
|
|
return 1;
|
|
|
|
}
|
2017-02-10 19:28:23 +03:00
|
|
|
count = res;
|
2014-08-05 16:17:13 +04:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case 'd':
|
|
|
|
{
|
2017-02-10 19:28:23 +03:00
|
|
|
unsigned long res;
|
|
|
|
|
|
|
|
if (qemu_strtoul(optarg, NULL, 0, &res) < 0 || res > INT_MAX) {
|
2014-08-05 16:17:13 +04:00
|
|
|
error_report("Invalid queue depth specified");
|
|
|
|
return 1;
|
|
|
|
}
|
2017-02-10 19:28:23 +03:00
|
|
|
depth = res;
|
2014-08-05 16:17:13 +04:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
|
|
|
case 'n':
|
|
|
|
flags |= BDRV_O_NATIVE_AIO;
|
|
|
|
break;
|
2020-01-20 17:18:55 +03:00
|
|
|
case 'i':
|
|
|
|
ret = bdrv_parse_aio(optarg, &flags);
|
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid aio option: %s", optarg);
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
break;
|
2015-07-10 19:09:18 +03:00
|
|
|
case 'o':
|
|
|
|
{
|
2020-05-13 16:36:26 +03:00
|
|
|
offset = cvtnum("offset", optarg);
|
2017-02-21 23:14:04 +03:00
|
|
|
if (offset < 0) {
|
2015-07-10 19:09:18 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
break;
|
2014-08-05 16:17:13 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
|
|
|
case 's':
|
|
|
|
{
|
|
|
|
int64_t sval;
|
|
|
|
|
2020-05-13 16:36:26 +03:00
|
|
|
sval = cvtnum_full("buffer size", optarg, 0, INT_MAX);
|
|
|
|
if (sval < 0) {
|
2014-08-05 16:17:13 +04:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
bufsize = sval;
|
|
|
|
break;
|
|
|
|
}
|
2015-07-13 14:13:17 +03:00
|
|
|
case 'S':
|
|
|
|
{
|
|
|
|
int64_t sval;
|
|
|
|
|
2020-05-13 16:36:26 +03:00
|
|
|
sval = cvtnum_full("step_size", optarg, 0, INT_MAX);
|
|
|
|
if (sval < 0) {
|
2015-07-13 14:13:17 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
step = sval;
|
|
|
|
break;
|
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
case 't':
|
|
|
|
ret = bdrv_parse_cache_mode(optarg, &flags, &writethrough);
|
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid cache mode");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
break;
|
2015-07-10 19:09:18 +03:00
|
|
|
case 'w':
|
|
|
|
flags |= BDRV_O_RDWR;
|
|
|
|
is_write = true;
|
|
|
|
break;
|
2017-05-02 19:35:39 +03:00
|
|
|
case 'U':
|
|
|
|
force_share = true;
|
|
|
|
break;
|
2015-07-10 19:09:18 +03:00
|
|
|
case OPTION_PATTERN:
|
|
|
|
{
|
2017-02-10 19:28:23 +03:00
|
|
|
unsigned long res;
|
|
|
|
|
|
|
|
if (qemu_strtoul(optarg, NULL, 0, &res) < 0 || res > 0xff) {
|
2015-07-10 19:09:18 +03:00
|
|
|
error_report("Invalid pattern byte specified");
|
|
|
|
return 1;
|
|
|
|
}
|
2017-02-10 19:28:23 +03:00
|
|
|
pattern = res;
|
2015-07-10 19:09:18 +03:00
|
|
|
break;
|
|
|
|
}
|
2016-06-03 14:59:41 +03:00
|
|
|
case OPTION_FLUSH_INTERVAL:
|
|
|
|
{
|
2017-02-10 19:28:23 +03:00
|
|
|
unsigned long res;
|
|
|
|
|
|
|
|
if (qemu_strtoul(optarg, NULL, 0, &res) < 0 || res > INT_MAX) {
|
2016-06-03 14:59:41 +03:00
|
|
|
error_report("Invalid flush interval specified");
|
|
|
|
return 1;
|
|
|
|
}
|
2017-02-10 19:28:23 +03:00
|
|
|
flush_interval = res;
|
2016-06-03 14:59:41 +03:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case OPTION_NO_DRAIN:
|
|
|
|
drain_on_flush = false;
|
|
|
|
break;
|
2014-08-05 16:17:13 +04:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (optind != argc - 1) {
|
|
|
|
error_exit("Expecting one image file name");
|
|
|
|
}
|
|
|
|
filename = argv[argc - 1];
|
|
|
|
|
2016-06-03 14:59:41 +03:00
|
|
|
if (!is_write && flush_interval) {
|
|
|
|
error_report("--flush-interval is only available in write tests");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (flush_interval && flush_interval < depth) {
|
|
|
|
error_report("Flush interval can't be smaller than depth");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2017-05-02 19:35:39 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet,
|
|
|
|
force_share);
|
2014-08-05 16:17:13 +04:00
|
|
|
if (!blk) {
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
image_size = blk_getlength(blk);
|
|
|
|
if (image_size < 0) {
|
|
|
|
ret = image_size;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
data = (BenchData) {
|
2016-06-03 14:59:41 +03:00
|
|
|
.blk = blk,
|
|
|
|
.image_size = image_size,
|
|
|
|
.bufsize = bufsize,
|
|
|
|
.step = step ?: bufsize,
|
|
|
|
.nrreq = depth,
|
|
|
|
.n = count,
|
|
|
|
.offset = offset,
|
|
|
|
.write = is_write,
|
|
|
|
.flush_interval = flush_interval,
|
|
|
|
.drain_on_flush = drain_on_flush,
|
2014-08-05 16:17:13 +04:00
|
|
|
};
|
2015-07-10 19:09:18 +03:00
|
|
|
printf("Sending %d %s requests, %d bytes each, %d in parallel "
|
2015-07-13 14:13:17 +03:00
|
|
|
"(starting at offset %" PRId64 ", step size %d)\n",
|
2015-07-10 19:09:18 +03:00
|
|
|
data.n, data.write ? "write" : "read", data.bufsize, data.nrreq,
|
2015-07-13 14:13:17 +03:00
|
|
|
data.offset, data.step);
|
2016-06-03 14:59:41 +03:00
|
|
|
if (flush_interval) {
|
|
|
|
printf("Sending flush every %d requests\n", flush_interval);
|
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
|
2018-01-16 09:08:58 +03:00
|
|
|
buf_size = data.nrreq * data.bufsize;
|
|
|
|
data.buf = blk_blockalign(blk, buf_size);
|
2015-07-10 19:09:18 +03:00
|
|
|
memset(data.buf, pattern, data.nrreq * data.bufsize);
|
|
|
|
|
2018-01-16 09:08:58 +03:00
|
|
|
blk_register_buf(blk, data.buf, buf_size);
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
data.qiov = g_new(QEMUIOVector, data.nrreq);
|
|
|
|
for (i = 0; i < data.nrreq; i++) {
|
|
|
|
qemu_iovec_init(&data.qiov[i], 1);
|
|
|
|
qemu_iovec_add(&data.qiov[i],
|
|
|
|
data.buf + i * data.bufsize, data.bufsize);
|
|
|
|
}
|
|
|
|
|
|
|
|
gettimeofday(&t1, NULL);
|
|
|
|
bench_cb(&data, 0);
|
|
|
|
|
|
|
|
while (data.n > 0) {
|
|
|
|
main_loop_wait(false);
|
|
|
|
}
|
|
|
|
gettimeofday(&t2, NULL);
|
|
|
|
|
|
|
|
printf("Run completed in %3.3f seconds.\n",
|
|
|
|
(t2.tv_sec - t1.tv_sec)
|
|
|
|
+ ((double)(t2.tv_usec - t1.tv_usec) / 1000000));
|
|
|
|
|
|
|
|
out:
|
2018-01-16 09:08:58 +03:00
|
|
|
if (data.buf) {
|
|
|
|
blk_unregister_buf(blk, data.buf);
|
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
qemu_vfree(data.buf);
|
|
|
|
blk_unref(blk);
|
|
|
|
|
|
|
|
if (ret) {
|
2016-08-10 05:43:12 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
qemu-img: Add bitmap sub-command
Include actions for --add, --remove, --clear, --enable, --disable, and
--merge (note that --clear is a bit of fluff, because the same can be
accomplished by removing a bitmap and then adding a new one in its
place, but it matches the existing QMP commands). Listing is omitted,
because it does not require a bitmap name and because it was already
possible with 'qemu-img info'. A single command line can play one or
more bitmap commands in sequence on the same bitmap name (although all
added bitmaps share the same granularity, and all merged bitmaps
come from the same source file). Merge defaults to other bitmaps in
the primary image, but can also be told to merge bitmaps from a
distinct image.
While this supports --image-opts for the file being modified, I did
not think it worth the extra complexity to support that for the source
file in cross-file merges. Likewise, I chose to have --merge only
take a single source rather than following the QMP support for
multiple merges in one go (although you can still use more than one
--merge in the command line); in part because qemu-img is offline and
therefore atomicity is not an issue.
Upcoming patches will add iotest coverage of these commands while
also testing other features.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200513011648.166876-7-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2020-05-13 04:16:45 +03:00
|
|
|
enum ImgBitmapAct {
|
|
|
|
BITMAP_ADD,
|
|
|
|
BITMAP_REMOVE,
|
|
|
|
BITMAP_CLEAR,
|
|
|
|
BITMAP_ENABLE,
|
|
|
|
BITMAP_DISABLE,
|
|
|
|
BITMAP_MERGE,
|
|
|
|
};
|
|
|
|
typedef struct ImgBitmapAction {
|
|
|
|
enum ImgBitmapAct act;
|
|
|
|
const char *src; /* only used for merge */
|
|
|
|
QSIMPLEQ_ENTRY(ImgBitmapAction) next;
|
|
|
|
} ImgBitmapAction;
|
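The bitmap sub-command described in the commit message above chains one or more actions on a single bitmap name per invocation. The session below is a sketch, not taken from QEMU's test suite: the image and bitmap names are made up, and it assumes a `qemu-img` with the bitmap sub-command is on PATH (it skips cleanly otherwise).

```shell
#!/bin/sh
# Hypothetical walk-through of the qemu-img bitmap actions; skips if
# qemu-img is not installed.
set -e
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not found, skipping"; exit 0; }

img=$(mktemp -u /tmp/bitmap-demo-XXXXXX.qcow2)
qemu-img create -f qcow2 "$img" 10M

qemu-img bitmap --add --granularity 65536 "$img" b0  # all --add bitmaps share one granularity
qemu-img bitmap --add "$img" b1
qemu-img bitmap --merge b1 "$img" b0                 # merge b1 into b0, same image
qemu-img bitmap --remove "$img" b1
qemu-img bitmap --disable "$img" b0

qemu-img info "$img" | grep -q b0                    # listing is done via 'qemu-img info'
rm -f "$img"
echo ok
```

Cross-file merges use `-b SOURCE_FILE` (optionally `-F SOURCE_FMT`) together with `--merge`, as enforced by the option checks in `img_bitmap()`.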
|
|
|
|
|
|
|
static int img_bitmap(int argc, char **argv)
|
|
|
|
{
|
|
|
|
Error *err = NULL;
|
|
|
|
int c, ret = 1;
|
|
|
|
QemuOpts *opts = NULL;
|
|
|
|
const char *fmt = NULL, *src_fmt = NULL, *src_filename = NULL;
|
|
|
|
const char *filename, *bitmap;
|
|
|
|
BlockBackend *blk = NULL, *src = NULL;
|
|
|
|
BlockDriverState *bs = NULL, *src_bs = NULL;
|
|
|
|
bool image_opts = false;
|
|
|
|
int64_t granularity = 0;
|
|
|
|
bool add = false, merge = false;
|
|
|
|
QSIMPLEQ_HEAD(, ImgBitmapAction) actions;
|
|
|
|
ImgBitmapAction *act, *act_next;
|
|
|
|
const char *op;
|
|
|
|
|
|
|
|
QSIMPLEQ_INIT(&actions);
|
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
|
|
|
{"add", no_argument, 0, OPTION_ADD},
|
|
|
|
{"remove", no_argument, 0, OPTION_REMOVE},
|
|
|
|
{"clear", no_argument, 0, OPTION_CLEAR},
|
|
|
|
{"enable", no_argument, 0, OPTION_ENABLE},
|
|
|
|
{"disable", no_argument, 0, OPTION_DISABLE},
|
|
|
|
{"merge", required_argument, 0, OPTION_MERGE},
|
|
|
|
{"granularity", required_argument, 0, 'g'},
|
|
|
|
{"source-file", required_argument, 0, 'b'},
|
|
|
|
{"source-format", required_argument, 0, 'F'},
|
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
|
|
|
c = getopt_long(argc, argv, ":b:f:F:g:h", long_options, NULL);
|
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (c) {
|
|
|
|
case ':':
|
|
|
|
missing_argument(argv[optind - 1]);
|
|
|
|
break;
|
|
|
|
case '?':
|
|
|
|
unrecognized_option(argv[optind - 1]);
|
|
|
|
break;
|
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'b':
|
|
|
|
src_filename = optarg;
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
|
|
|
case 'F':
|
|
|
|
src_fmt = optarg;
|
|
|
|
break;
|
|
|
|
case 'g':
|
|
|
|
granularity = cvtnum("granularity", optarg);
|
|
|
|
if (granularity < 0) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case OPTION_ADD:
|
|
|
|
act = g_new0(ImgBitmapAction, 1);
|
|
|
|
act->act = BITMAP_ADD;
|
|
|
|
QSIMPLEQ_INSERT_TAIL(&actions, act, next);
|
|
|
|
add = true;
|
|
|
|
break;
|
|
|
|
case OPTION_REMOVE:
|
|
|
|
act = g_new0(ImgBitmapAction, 1);
|
|
|
|
act->act = BITMAP_REMOVE;
|
|
|
|
QSIMPLEQ_INSERT_TAIL(&actions, act, next);
|
|
|
|
break;
|
|
|
|
case OPTION_CLEAR:
|
|
|
|
act = g_new0(ImgBitmapAction, 1);
|
|
|
|
act->act = BITMAP_CLEAR;
|
|
|
|
QSIMPLEQ_INSERT_TAIL(&actions, act, next);
|
|
|
|
break;
|
|
|
|
case OPTION_ENABLE:
|
|
|
|
act = g_new0(ImgBitmapAction, 1);
|
|
|
|
act->act = BITMAP_ENABLE;
|
|
|
|
QSIMPLEQ_INSERT_TAIL(&actions, act, next);
|
|
|
|
break;
|
|
|
|
case OPTION_DISABLE:
|
|
|
|
act = g_new0(ImgBitmapAction, 1);
|
|
|
|
act->act = BITMAP_DISABLE;
|
|
|
|
QSIMPLEQ_INSERT_TAIL(&actions, act, next);
|
|
|
|
break;
|
|
|
|
case OPTION_MERGE:
|
|
|
|
act = g_new0(ImgBitmapAction, 1);
|
|
|
|
act->act = BITMAP_MERGE;
|
|
|
|
act->src = optarg;
|
|
|
|
QSIMPLEQ_INSERT_TAIL(&actions, act, next);
|
|
|
|
merge = true;
|
|
|
|
break;
|
|
|
|
case OPTION_OBJECT:
|
2021-02-17 14:56:45 +03:00
|
|
|
user_creatable_process_cmdline(optarg);
|
2020-05-13 04:16:45 +03:00
|
|
|
break;
|
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}

    if (QSIMPLEQ_EMPTY(&actions)) {
        error_report("Need at least one of --add, --remove, --clear, "
                     "--enable, --disable, or --merge");
        goto out;
    }

    if (granularity && !add) {
        error_report("granularity only supported with --add");
        goto out;
    }
    if (src_fmt && !src_filename) {
        error_report("-F only supported with -b");
        goto out;
    }
    if (src_filename && !merge) {
        error_report("Merge bitmap source file only supported with "
                     "--merge");
        goto out;
    }

    if (optind != argc - 2) {
        error_report("Expecting filename and bitmap name");
        goto out;
    }

    filename = argv[optind];
    bitmap = argv[optind + 1];

    /*
     * No need to open backing chains; we will be manipulating bitmaps
     * directly in this image without reference to image contents.
     */
    blk = img_open(image_opts, filename, fmt, BDRV_O_RDWR | BDRV_O_NO_BACKING,
                   false, false, false);
    if (!blk) {
        goto out;
    }
    bs = blk_bs(blk);
    if (src_filename) {
        src = img_open(false, src_filename, src_fmt, BDRV_O_NO_BACKING,
                       false, false, false);
        if (!src) {
            goto out;
        }
        src_bs = blk_bs(src);
    } else {
        src_bs = bs;
    }

    QSIMPLEQ_FOREACH_SAFE(act, &actions, next, act_next) {
        switch (act->act) {
        case BITMAP_ADD:
            qmp_block_dirty_bitmap_add(bs->node_name, bitmap,
                                       !!granularity, granularity, true, true,
                                       false, false, &err);
            op = "add";
            break;
        case BITMAP_REMOVE:
            qmp_block_dirty_bitmap_remove(bs->node_name, bitmap, &err);
            op = "remove";
            break;
        case BITMAP_CLEAR:
            qmp_block_dirty_bitmap_clear(bs->node_name, bitmap, &err);
            op = "clear";
            break;
        case BITMAP_ENABLE:
            qmp_block_dirty_bitmap_enable(bs->node_name, bitmap, &err);
            op = "enable";
            break;
        case BITMAP_DISABLE:
            qmp_block_dirty_bitmap_disable(bs->node_name, bitmap, &err);
            op = "disable";
            break;
        case BITMAP_MERGE:
            do_dirty_bitmap_merge(bs->node_name, bitmap, src_bs->node_name,
                                  act->src, &err);
            op = "merge";
            break;
        default:
            g_assert_not_reached();
        }

        if (err) {
            error_reportf_err(err, "Operation %s on bitmap %s failed: ",
                              op, bitmap);
            goto out;
        }
        g_free(act);
    }

    ret = 0;

out:
    blk_unref(src);
    blk_unref(blk);
    qemu_opts_del(opts);
    return ret;
}

#define C_BS      01
#define C_COUNT   02
#define C_IF      04
#define C_OF      010
#define C_SKIP    020

struct DdInfo {
    unsigned int flags;
    int64_t count;
};

struct DdIo {
    int bsz;    /* Block size */
    char *filename;
    uint8_t *buf;
    int64_t offset;
};

struct DdOpts {
    const char *name;
    int (*f)(const char *, struct DdIo *, struct DdIo *, struct DdInfo *);
    unsigned int flag;
};

static int img_dd_bs(const char *arg,
                     struct DdIo *in, struct DdIo *out,
                     struct DdInfo *dd)
{
    int64_t res;

    res = cvtnum_full("bs", arg, 1, INT_MAX);

    if (res < 0) {
        return 1;
    }
    in->bsz = out->bsz = res;

    return 0;
}

static int img_dd_count(const char *arg,
                        struct DdIo *in, struct DdIo *out,
                        struct DdInfo *dd)
{
    dd->count = cvtnum("count", arg);

    if (dd->count < 0) {
        return 1;
    }

    return 0;
}

static int img_dd_if(const char *arg,
                     struct DdIo *in, struct DdIo *out,
                     struct DdInfo *dd)
{
    in->filename = g_strdup(arg);

    return 0;
}

static int img_dd_of(const char *arg,
                     struct DdIo *in, struct DdIo *out,
                     struct DdInfo *dd)
{
    out->filename = g_strdup(arg);

    return 0;
}

static int img_dd_skip(const char *arg,
                       struct DdIo *in, struct DdIo *out,
                       struct DdInfo *dd)
{
    in->offset = cvtnum("skip", arg);

    if (in->offset < 0) {
        return 1;
    }

    return 0;
}

static int img_dd(int argc, char **argv)
{
    int ret = 0;
    char *arg = NULL;
    char *tmp;
    BlockDriver *drv = NULL, *proto_drv = NULL;
    BlockBackend *blk1 = NULL, *blk2 = NULL;
    QemuOpts *opts = NULL;
    QemuOptsList *create_opts = NULL;
    Error *local_err = NULL;
    bool image_opts = false;
    int c, i;
    const char *out_fmt = "raw";
    const char *fmt = NULL;
    int64_t size = 0;
    int64_t block_count = 0, out_pos, in_pos;
    bool force_share = false;
    struct DdInfo dd = {
        .flags = 0,
        .count = 0,
    };
    struct DdIo in = {
        .bsz = 512, /* Block size is by default 512 bytes */
        .filename = NULL,
        .buf = NULL,
        .offset = 0
    };
    struct DdIo out = {
        .bsz = 512,
        .filename = NULL,
        .buf = NULL,
        .offset = 0
    };

    const struct DdOpts options[] = {
        { "bs", img_dd_bs, C_BS },
        { "count", img_dd_count, C_COUNT },
        { "if", img_dd_if, C_IF },
        { "of", img_dd_of, C_OF },
        { "skip", img_dd_skip, C_SKIP },
        { NULL, NULL, 0 }
    };
    const struct option long_options[] = {
        { "help", no_argument, 0, 'h'},
        { "object", required_argument, 0, OPTION_OBJECT},
        { "image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
        { "force-share", no_argument, 0, 'U'},
        { 0, 0, 0, 0 }
    };

    while ((c = getopt_long(argc, argv, ":hf:O:U", long_options, NULL))) {
        if (c == EOF) {
            break;
        }
        switch (c) {
        case 'O':
            out_fmt = optarg;
            break;
        case 'f':
            fmt = optarg;
            break;
        case ':':
            missing_argument(argv[optind - 1]);
            break;
        case '?':
            unrecognized_option(argv[optind - 1]);
            break;
        case 'h':
            help();
            break;
        case 'U':
            force_share = true;
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }

    for (i = optind; i < argc; i++) {
        int j;
        arg = g_strdup(argv[i]);

        tmp = strchr(arg, '=');
        if (tmp == NULL) {
            error_report("unrecognized operand %s", arg);
            ret = -1;
            goto out;
        }

        *tmp++ = '\0';

        for (j = 0; options[j].name != NULL; j++) {
            if (!strcmp(arg, options[j].name)) {
                break;
            }
        }
        if (options[j].name == NULL) {
            error_report("unrecognized operand %s", arg);
            ret = -1;
            goto out;
        }

        if (options[j].f(tmp, &in, &out, &dd) != 0) {
            ret = -1;
            goto out;
        }
        dd.flags |= options[j].flag;
        g_free(arg);
        arg = NULL;
    }

    if (!(dd.flags & C_IF && dd.flags & C_OF)) {
        error_report("Must specify both input and output files");
        ret = -1;
        goto out;
    }

    blk1 = img_open(image_opts, in.filename, fmt, 0, false, false,
                    force_share);

    if (!blk1) {
        ret = -1;
        goto out;
    }

    drv = bdrv_find_format(out_fmt);
    if (!drv) {
        error_report("Unknown file format");
        ret = -1;
        goto out;
    }
    proto_drv = bdrv_find_protocol(out.filename, true, &local_err);

    if (!proto_drv) {
        error_report_err(local_err);
        ret = -1;
        goto out;
    }
    if (!drv->create_opts) {
        error_report("Format driver '%s' does not support image creation",
                     drv->format_name);
        ret = -1;
        goto out;
    }
    if (!proto_drv->create_opts) {
        error_report("Protocol driver '%s' does not support image creation",
                     proto_drv->format_name);
        ret = -1;
        goto out;
    }
    create_opts = qemu_opts_append(create_opts, drv->create_opts);
    create_opts = qemu_opts_append(create_opts, proto_drv->create_opts);

    opts = qemu_opts_create(create_opts, NULL, 0, &error_abort);

    size = blk_getlength(blk1);
    if (size < 0) {
        error_report("Failed to get size for '%s'", in.filename);
        ret = -1;
        goto out;
    }

    if (dd.flags & C_COUNT && dd.count <= INT64_MAX / in.bsz &&
        dd.count * in.bsz < size) {
        size = dd.count * in.bsz;
    }

    /* Overflow means the specified offset is beyond input image's size */
    if (dd.flags & C_SKIP && (in.offset > INT64_MAX / in.bsz ||
                              size < in.bsz * in.offset)) {
        qemu_opt_set_number(opts, BLOCK_OPT_SIZE, 0, &error_abort);
    } else {
        qemu_opt_set_number(opts, BLOCK_OPT_SIZE,
                            size - in.bsz * in.offset, &error_abort);
    }

    ret = bdrv_create(drv, out.filename, opts, &local_err);
    if (ret < 0) {
        error_reportf_err(local_err,
                          "%s: error while creating output image: ",
                          out.filename);
        ret = -1;
        goto out;
    }

    /* TODO, we can't honour --image-opts for the target,
     * since it needs to be given in a format compatible
     * with the bdrv_create() call above which does not
     * support image-opts style.
     */
    blk2 = img_open_file(out.filename, NULL, out_fmt, BDRV_O_RDWR,
                         false, false, false);

    if (!blk2) {
        ret = -1;
        goto out;
    }

    if (dd.flags & C_SKIP && (in.offset > INT64_MAX / in.bsz ||
                              size < in.offset * in.bsz)) {
        /* We give a warning if the skip option is bigger than the input
         * size and create an empty output disk image (i.e. like dd(1)).
         */
        error_report("%s: cannot skip to specified offset", in.filename);
        in_pos = size;
    } else {
        in_pos = in.offset * in.bsz;
    }

    in.buf = g_new(uint8_t, in.bsz);

    for (out_pos = 0; in_pos < size; block_count++) {
        int in_ret, out_ret;

        if (in_pos + in.bsz > size) {
            in_ret = blk_pread(blk1, in_pos, in.buf, size - in_pos);
        } else {
            in_ret = blk_pread(blk1, in_pos, in.buf, in.bsz);
        }
        if (in_ret < 0) {
            error_report("error while reading from input image file: %s",
                         strerror(-in_ret));
            ret = -1;
            goto out;
        }
        in_pos += in_ret;

        out_ret = blk_pwrite(blk2, out_pos, in.buf, in_ret, 0);

        if (out_ret < 0) {
            error_report("error while writing to output image file: %s",
                         strerror(-out_ret));
            ret = -1;
            goto out;
        }
        out_pos += out_ret;
    }

out:
    g_free(arg);
    qemu_opts_del(opts);
    qemu_opts_free(create_opts);
    blk_unref(blk1);
    blk_unref(blk2);
    g_free(in.filename);
    g_free(out.filename);
    g_free(in.buf);
    g_free(out.buf);

    if (ret) {
        return 1;
    }
    return 0;
}

static void dump_json_block_measure_info(BlockMeasureInfo *info)
{
    GString *str;
    QObject *obj;
    Visitor *v = qobject_output_visitor_new(&obj);

    visit_type_BlockMeasureInfo(v, NULL, &info, &error_abort);
    visit_complete(v, &obj);
    str = qobject_to_json_pretty(obj, true);
    assert(str != NULL);
    printf("%s\n", str->str);
    qobject_unref(obj);
    visit_free(v);
    g_string_free(str, true);
}

static int img_measure(int argc, char **argv)
{
    static const struct option long_options[] = {
        {"help", no_argument, 0, 'h'},
        {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
        {"object", required_argument, 0, OPTION_OBJECT},
        {"output", required_argument, 0, OPTION_OUTPUT},
        {"size", required_argument, 0, OPTION_SIZE},
        {"force-share", no_argument, 0, 'U'},
        {0, 0, 0, 0}
    };
    OutputFormat output_format = OFORMAT_HUMAN;
    BlockBackend *in_blk = NULL;
    BlockDriver *drv;
    const char *filename = NULL;
    const char *fmt = NULL;
    const char *out_fmt = "raw";
    char *options = NULL;
    char *snapshot_name = NULL;
    bool force_share = false;
    QemuOpts *opts = NULL;
    QemuOpts *object_opts = NULL;
    QemuOpts *sn_opts = NULL;
    QemuOptsList *create_opts = NULL;
    bool image_opts = false;
    uint64_t img_size = UINT64_MAX;
    BlockMeasureInfo *info = NULL;
    Error *local_err = NULL;
    int ret = 1;
    int c;

    while ((c = getopt_long(argc, argv, "hf:O:o:l:U",
                            long_options, NULL)) != -1) {
        switch (c) {
        case '?':
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'O':
            out_fmt = optarg;
            break;
        case 'o':
            if (accumulate_options(&options, optarg) < 0) {
                goto out;
            }
            break;
        case 'l':
            if (strstart(optarg, SNAPSHOT_OPT_BASE, NULL)) {
                sn_opts = qemu_opts_parse_noisily(&internal_snapshot_opts,
                                                  optarg, false);
                if (!sn_opts) {
                    error_report("Failed in parsing snapshot param '%s'",
                                 optarg);
                    goto out;
                }
            } else {
                snapshot_name = optarg;
            }
            break;
        case 'U':
            force_share = true;
            break;
        case OPTION_OBJECT:
            user_creatable_process_cmdline(optarg);
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        case OPTION_OUTPUT:
            if (!strcmp(optarg, "json")) {
                output_format = OFORMAT_JSON;
            } else if (!strcmp(optarg, "human")) {
                output_format = OFORMAT_HUMAN;
            } else {
                error_report("--output must be used with human or json "
                             "as argument.");
                goto out;
            }
            break;
        case OPTION_SIZE:
        {
            int64_t sval;

            sval = cvtnum("image size", optarg);
            if (sval < 0) {
                goto out;
            }
            img_size = (uint64_t)sval;
        }
        break;
        }
    }

    if (argc - optind > 1) {
        error_report("At most one filename argument is allowed.");
        goto out;
    } else if (argc - optind == 1) {
        filename = argv[optind];
    }

    if (!filename && (image_opts || fmt || snapshot_name || sn_opts)) {
        error_report("--image-opts, -f, and -l require a filename argument.");
        goto out;
    }
    if (filename && img_size != UINT64_MAX) {
        error_report("--size N cannot be used together with a filename.");
        goto out;
    }
    if (!filename && img_size == UINT64_MAX) {
        error_report("Either --size N or one filename must be specified.");
        goto out;
    }

    if (filename) {
        in_blk = img_open(image_opts, filename, fmt, 0,
                          false, false, force_share);
        if (!in_blk) {
            goto out;
        }

        if (sn_opts) {
            bdrv_snapshot_load_tmp(blk_bs(in_blk),
                                   qemu_opt_get(sn_opts, SNAPSHOT_OPT_ID),
                                   qemu_opt_get(sn_opts, SNAPSHOT_OPT_NAME),
                                   &local_err);
        } else if (snapshot_name != NULL) {
            bdrv_snapshot_load_tmp_by_id_or_name(blk_bs(in_blk),
                                                 snapshot_name, &local_err);
        }
        if (local_err) {
            error_reportf_err(local_err, "Failed to load snapshot: ");
            goto out;
        }
    }
|
|
|
|
|
|
|
|
drv = bdrv_find_format(out_fmt);
|
|
|
|
if (!drv) {
|
|
|
|
error_report("Unknown file format '%s'", out_fmt);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (!drv->create_opts) {
|
|
|
|
error_report("Format driver '%s' does not support image creation",
|
|
|
|
drv->format_name);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
create_opts = qemu_opts_append(create_opts, drv->create_opts);
|
|
|
|
create_opts = qemu_opts_append(create_opts, bdrv_file.create_opts);
|
|
|
|
opts = qemu_opts_create(create_opts, NULL, 0, &error_abort);
|
|
|
|
if (options) {
|
qemu-option: Use returned bool to check for failure
The previous commit enables conversion of
foo(..., &err);
if (err) {
...
}
to
if (!foo(..., &err)) {
...
}
for QemuOpts functions that now return true / false on success /
error. Coccinelle script:
@@
identifier fun = {
opts_do_parse, parse_option_bool, parse_option_number,
parse_option_size, qemu_opt_parse, qemu_opt_rename, qemu_opt_set,
qemu_opt_set_bool, qemu_opt_set_number, qemu_opts_absorb_qdict,
qemu_opts_do_parse, qemu_opts_from_qdict_entry, qemu_opts_set,
qemu_opts_validate
};
expression list args, args2;
typedef Error;
Error *err;
@@
- fun(args, &err, args2);
- if (err)
+ if (!fun(args, &err, args2))
{
...
}
A few line breaks tidied up manually.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200707160613.848843-15-armbru@redhat.com>
[Conflict with commit 0b6786a9c1 "block/amend: refactor qcow2 amend
options" resolved by rerunning Coccinelle on master's version]
2020-07-07 19:05:42 +03:00
|
|
|
if (!qemu_opts_do_parse(opts, options, NULL, &local_err)) {
|
2017-07-05 15:57:36 +03:00
|
|
|
error_report_err(local_err);
|
|
|
|
error_report("Invalid options for file format '%s'", out_fmt);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (img_size != UINT64_MAX) {
|
|
|
|
qemu_opt_set_number(opts, BLOCK_OPT_SIZE, img_size, &error_abort);
|
|
|
|
}
|
|
|
|
|
|
|
|
info = bdrv_measure(drv, opts, in_blk ? blk_bs(in_blk) : NULL, &local_err);
|
|
|
|
if (local_err) {
|
|
|
|
error_report_err(local_err);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (output_format == OFORMAT_HUMAN) {
|
|
|
|
printf("required size: %" PRIu64 "\n", info->required);
|
|
|
|
printf("fully allocated size: %" PRIu64 "\n", info->fully_allocated);
|
qcow2: Expose bitmaps' size during measure
It's useful to know how much space can be occupied by qcow2 persistent
bitmaps, even though such metadata is unrelated to the guest-visible
data. Report this value as an additional QMP field, present when
measuring an existing image and output format that both support
bitmaps. Update iotest 178 and 190 to updated output, as well as new
coverage in 190 demonstrating non-zero values made possible with the
recently-added qemu-img bitmap command (see 3b51ab4b).
The new 'bitmaps size:' field is displayed automatically as part of
'qemu-img measure' any time it is present in QMP (that is, any time
both the source image being measured and destination format support
bitmaps, even if the measurement is 0 because there are no bitmaps
present). If the field is absent, it means that no bitmaps can be
copied (source, destination, or both lack bitmaps, including when
measuring based on size rather than on a source image). This behavior
is compatible with an upcoming patch adding 'qemu-img convert
--bitmaps': that command will fail in the same situations where this
patch omits the field.
The addition of a new field demonstrates why we should always
zero-initialize qapi C structs; while the qcow2 driver still fully
populates all fields, the raw and crypto drivers had to be tweaked to
avoid uninitialized data.
Consideration was also given towards having a 'qemu-img measure
--bitmaps' which errors out when bitmaps are not possible, and
otherwise sums the bitmaps into the existing allocation totals rather
than displaying as a separate field, as a potential convenience
factor. But this was ultimately judged more complexity than
necessary, since the QMP interface is sufficient with bitmaps
remaining a separate field.
See also: https://bugzilla.redhat.com/1779904
Reported-by: Nir Soffer <nsoffer@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200521192137.1120211-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2020-05-21 22:21:34 +03:00
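The zero-initialization point above can be illustrated with a simplified stand-in for the QAPI-generated BlockMeasureInfo struct. The field names follow the real schema, but the struct layout, the `measure_stub()` helper, and the byte values are hypothetical, for illustration only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for the QAPI-generated BlockMeasureInfo (the real
 * struct is generated from the QAPI schema; only the field names match). */
typedef struct BlockMeasureInfo {
    uint64_t required;
    uint64_t fully_allocated;
    bool has_bitmaps;      /* optional field: bitmaps is only valid when true */
    uint64_t bitmaps;
} BlockMeasureInfo;

/* Zero-allocation (as g_new0 does) leaves has_bitmaps false by default.
 * That is exactly why the commit message argues for zero-initializing
 * QAPI C structs: a driver that never sets the field reports "absent"
 * rather than uninitialized garbage. */
static BlockMeasureInfo *measure_stub(bool with_bitmaps, uint64_t bitmap_bytes)
{
    BlockMeasureInfo *info = calloc(1, sizeof(*info));
    info->required = 327680;              /* made-up example values */
    info->fully_allocated = 1074135040;
    if (with_bitmaps) {
        info->has_bitmaps = true;
        info->bitmaps = bitmap_bytes;
    }
    return info;
}
```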
        if (info->has_bitmaps) {
            printf("bitmaps size: %" PRIu64 "\n", info->bitmaps);
        }
    } else {
        dump_json_block_measure_info(info);
    }

    ret = 0;

out:
    qapi_free_BlockMeasureInfo(info);
    qemu_opts_del(object_opts);
    qemu_opts_del(opts);
    qemu_opts_del(sn_opts);
    qemu_opts_free(create_opts);
    g_free(options);
    blk_unref(in_blk);
    return ret;
}
static const img_cmd_t img_cmds[] = {
#define DEF(option, callback, arg_string)        \
    { option, callback },
#include "qemu-img-cmds.h"
#undef DEF
    { NULL, NULL, },
};
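The DEF()/#include trick above is a classic X-macro: qemu-img-cmds.h lists each subcommand exactly once, and redefining DEF() turns each line into a table entry. A self-contained sketch of the same technique, where the two commands and the CMD_LIST stand-in for the generated header are hypothetical:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef int (*img_handler_fn)(int argc, char **argv);

typedef struct img_cmd_t {
    const char *name;
    img_handler_fn handler;
} img_cmd_t;

static int cmd_info(int argc, char **argv)   { (void)argc; (void)argv; return 0; }
static int cmd_create(int argc, char **argv) { (void)argc; (void)argv; return 0; }

/* Stand-in for qemu-img-cmds.h: each DEF() names one subcommand.  The
 * real file is generated at build time; these two entries are made up. */
#define CMD_LIST \
    DEF("info", cmd_info, "info filename") \
    DEF("create", cmd_create, "create filename size")

static const img_cmd_t img_cmds[] = {
#define DEF(option, callback, arg_string) { option, callback },
    CMD_LIST
#undef DEF
    { NULL, NULL },
};

/* Same dispatch as main(): linear scan over the NULL-terminated table. */
static img_handler_fn lookup(const char *name)
{
    for (const img_cmd_t *cmd = img_cmds; cmd->name != NULL; cmd++) {
        if (!strcmp(name, cmd->name)) {
            return cmd->handler;
        }
    }
    return NULL;
}
```

The arg_string parameter is unused by this table; in QEMU it is consumed by other expansions of the same header (e.g. for help text), which is the payoff of the X-macro style.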
int main(int argc, char **argv)
{
    const img_cmd_t *cmd;
    const char *cmdname;
    int c;
    static const struct option long_options[] = {
        {"help", no_argument, 0, 'h'},
        {"version", no_argument, 0, 'V'},
        {"trace", required_argument, NULL, 'T'},
        {0, 0, 0, 0}
    };

#ifdef CONFIG_POSIX
    signal(SIGPIPE, SIG_IGN);
#endif

    socket_init();
log: Make glib logging go through QEMU
This commit adds an error_init() helper which calls
g_log_set_default_handler() so that glib logs (g_log, g_warning, ...)
are handled similarly to other QEMU logs. This means they will get a
timestamp if timestamps are enabled, and they will go through the
HMP monitor if one is configured.
This commit also adds a call to error_init() to the binaries
installed by QEMU. Since error_init() also calls error_set_progname(),
this means that *-linux-user, *-bsd-user and qemu-pr-helper messages
output with error_report, info_report, ... will slightly change: they
will be prefixed by the binary name.
glib debug messages are enabled through G_MESSAGES_DEBUG similarly to
the glib default log handler.
At the moment, this change will mostly impact SPICE logging if your
spice version is >= 0.14.1. With older spice versions, this is not going
to work as expected, but will not have any ill effect, so this call is
not conditional on the SPICE version.
Signed-off-by: Christophe Fergeau <cfergeau@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20190131164614.19209-3-cfergeau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2019-01-31 19:46:14 +03:00
    error_init(argv[0]);
    module_call_init(MODULE_INIT_TRACE);
    qemu_init_exec_dir(argv[0]);

    qemu_init_main_loop(&error_fatal);

    qcrypto_init(&error_fatal);

    module_call_init(MODULE_INIT_QOM);
    bdrv_init();
    if (argc < 2) {
        error_exit("Not enough arguments");
    }

    qemu_add_opts(&qemu_source_opts);
    qemu_add_opts(&qemu_trace_opts);

    while ((c = getopt_long(argc, argv, "+:hVT:", long_options, NULL)) != -1) {
        switch (c) {
        case ':':
            missing_argument(argv[optind - 1]);
            return 0;
        case '?':
            unrecognized_option(argv[optind - 1]);
            return 0;
        case 'h':
            help();
            return 0;
        case 'V':
            printf(QEMU_IMG_VERSION);
            return 0;
        case 'T':
            trace_opt_parse(optarg);
            break;
        }
    }

    cmdname = argv[optind];

    /* reset getopt_long scanning */
    argc -= optind;
    if (argc < 1) {
        return 0;
    }
    argv += optind;
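The optstring "+:hVT:" above packs two getopt extensions: the leading '+' stops parsing at the first non-option (so the subcommand's own options are left untouched for the handler), and the ':' makes getopt return ':' for a missing option argument instead of printing its own error. A minimal sketch of both behaviors, assuming glibc getopt (the `optind = 0` reset is glibc-specific):

```c
#include <assert.h>
#include <getopt.h>
#include <unistd.h>

static int last_flag;   /* last option character getopt returned */

/* Scan only the global options, stopping at the subcommand name;
 * returns the index of the first non-option argument. */
static int parse_global(int argc, char **argv)
{
    int c;
    optind = 0;         /* glibc: 0 forces full reinitialization */
    while ((c = getopt(argc, argv, "+:hVT:")) != -1) {
        last_flag = c;  /* ':' here means an option lacked its argument */
    }
    return optind;
}
```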
qemu-io: Add generic function for reinitializing optind.
On FreeBSD 11.2:
$ nbdkit memory size=1M --run './qemu-io -f raw -c "aio_write 0 512" $nbd'
Parsing error: non-numeric argument, or extraneous/unrecognized suffix -- aio_write
After main option parsing, we reinitialize optind so we can parse each
command. However reinitializing optind to 0 does not work on FreeBSD.
What happens when you do this is optind remains 0 after the option
parsing loop, and the result is we try to parse argv[optind] ==
argv[0] == "aio_write" as if it was the first parameter.
The FreeBSD manual page says:
In order to use getopt() to evaluate multiple sets of arguments, or to
evaluate a single set of arguments multiple times, the variable optreset
must be set to 1 before the second and each additional set of calls to
getopt(), and the variable optind must be reinitialized.
(From the rest of the man page it is clear that optind must be
reinitialized to 1).
The glibc man page says:
A program that scans multiple argument vectors, or rescans the same
vector more than once, and wants to make use of GNU extensions such as
'+' and '-' at the start of optstring, or changes the value of
POSIXLY_CORRECT between scans, must reinitialize getopt() by resetting
optind to 0, rather than the traditional value of 1. (Resetting to 0
forces the invocation of an internal initialization routine that
rechecks POSIXLY_CORRECT and checks for GNU extensions in optstring.)
This commit introduces an OS-portability function called
qemu_reset_optind which provides a way of resetting optind that works
on FreeBSD and platforms that use optreset, while keeping it the same
as now on other platforms.
Note that the qemu codebase sets optind in many other places, but in
those other places it's setting a local variable and not using getopt.
This change is only needed in places where we are using getopt and the
associated global variable optind.
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Message-id: 20190118101114.11759-2-rjones@redhat.com
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-01-18 13:11:14 +03:00
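The portable reset described above can be sketched as follows. The HAVE_OPTRESET guard is an assumed stand-in for whatever configure-time check the real qemu_reset_optind() uses; the helper and demo below are illustrative, not QEMU's actual code:

```c
#include <assert.h>
#include <getopt.h>
#include <unistd.h>

/* Reset getopt state so a second argument vector can be scanned.
 * BSD libcs need optreset = 1 and optind = 1; glibc needs optind = 0
 * to force its internal reinitialization (per the man pages quoted
 * in the commit message above). */
static void reset_optind(void)
{
#ifdef HAVE_OPTRESET
    optreset = 1;
    optind = 1;
#else
    optind = 0;
#endif
}

/* Count '-a' flags; safe to call repeatedly thanks to the reset. */
static int count_a(int argc, char **argv)
{
    int c, n = 0;
    reset_optind();
    while ((c = getopt(argc, argv, "a")) != -1) {
        if (c == 'a') {
            n++;
        }
    }
    return n;
}
```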
    qemu_reset_optind();

    if (!trace_init_backends()) {
        exit(1);
    }
    trace_init_file();
    qemu_set_log(LOG_TRACE);

    /* find the command */
    for (cmd = img_cmds; cmd->name != NULL; cmd++) {
        if (!strcmp(cmdname, cmd->name)) {
            return cmd->handler(argc, argv);
        }
    }

    /* not found */
    error_exit("Command not found: %s", cmdname);
}