/*
 * QEMU disk image utility
 *
 * Copyright (c) 2003-2008 Fabrice Bellard
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
#include "qemu/osdep.h"
#include "qemu-version.h"
#include "qapi/error.h"
#include "qapi-visit.h"
#include "qapi/qobject-output-visitor.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qjson.h"
#include "qemu/cutils.h"
#include "qemu/config-file.h"
#include "qemu/option.h"
#include "qemu/error-report.h"
#include "qemu/log.h"
#include "qom/object_interfaces.h"
#include "sysemu/sysemu.h"
#include "sysemu/block-backend.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "block/qapi.h"
#include "crypto/init.h"
#include "trace/control.h"
#include <getopt.h>

#define QEMU_IMG_VERSION "qemu-img version " QEMU_VERSION QEMU_PKGVERSION \
                          "\n" QEMU_COPYRIGHT "\n"

typedef struct img_cmd_t {
    const char *name;
    int (*handler)(int argc, char **argv);
} img_cmd_t;

enum {
    OPTION_OUTPUT = 256,
    OPTION_BACKING_CHAIN = 257,
    OPTION_OBJECT = 258,
    OPTION_IMAGE_OPTS = 259,
    OPTION_PATTERN = 260,
    OPTION_FLUSH_INTERVAL = 261,
    OPTION_NO_DRAIN = 262,
};

typedef enum OutputFormat {
    OFORMAT_JSON,
    OFORMAT_HUMAN,
} OutputFormat;

/* Default to cache=writeback as data integrity is not important for qemu-img */
#define BDRV_DEFAULT_CACHE "writeback"

static void format_print(void *opaque, const char *name)
{
    printf(" %s", name);
}

static void QEMU_NORETURN GCC_FMT_ATTR(1, 2) error_exit(const char *fmt, ...)
{
    va_list ap;

    error_printf("qemu-img: ");

    va_start(ap, fmt);
    error_vprintf(fmt, ap);
    va_end(ap);

    error_printf("\nTry 'qemu-img --help' for more information\n");
    exit(EXIT_FAILURE);
}

/* Please keep in sync with qemu-img.texi */
static void QEMU_NORETURN help(void)
{
    const char *help_msg =
           QEMU_IMG_VERSION
           "usage: qemu-img [standard options] command [command options]\n"
           "QEMU disk image utility\n"
           "\n"
           "    '-h', '--help'       display this help and exit\n"
           "    '-V', '--version'    output version information and exit\n"
           "    '-T', '--trace'      [[enable=]<pattern>][,events=<file>][,file=<file>]\n"
           "                         specify tracing options\n"
           "\n"
           "Command syntax:\n"
#define DEF(option, callback, arg_string)        \
           "  " arg_string "\n"
#include "qemu-img-cmds.h"
#undef DEF
#undef GEN_DOCS
           "\n"
           "Command parameters:\n"
           "  'filename' is a disk image filename\n"
           "  'objectdef' is a QEMU user creatable object definition. See the qemu(1)\n"
           "    manual page for a description of the object properties. The most common\n"
           "    object type is a 'secret', which is used to supply passwords and/or\n"
           "    encryption keys.\n"
           "  'fmt' is the disk image format. It is guessed automatically in most cases\n"
           "  'cache' is the cache mode used to write the output disk image, the valid\n"
           "    options are: 'none', 'writeback' (default, except for convert), 'writethrough',\n"
           "    'directsync' and 'unsafe' (default for convert)\n"
           "  'src_cache' is the cache mode used to read input disk images, the valid\n"
           "    options are the same as for the 'cache' option\n"
           "  'size' is the disk image size in bytes. Optional suffixes\n"
           "    'k' or 'K' (kilobyte, 1024), 'M' (megabyte, 1024k), 'G' (gigabyte, 1024M),\n"
           "    'T' (terabyte, 1024G), 'P' (petabyte, 1024T) and 'E' (exabyte, 1024P) are\n"
           "    supported. 'b' is ignored.\n"
           "  'output_filename' is the destination disk image filename\n"
           "  'output_fmt' is the destination format\n"
           "  'options' is a comma separated list of format specific options in a\n"
           "    name=value format. Use -o ? for an overview of the options supported by the\n"
           "    used format\n"
           "  'snapshot_param' is the parameter used for internal snapshots; the format\n"
           "    is 'snapshot.id=[ID],snapshot.name=[NAME]', or\n"
           "    '[ID_OR_NAME]'\n"
           "  'snapshot_id_or_name' is deprecated, use 'snapshot_param'\n"
           "    instead\n"
           "  '-c' indicates that the target image must be compressed (qcow format only)\n"
           "  '-u' enables unsafe rebasing. It is assumed that old and new backing file\n"
           "       match exactly. The image doesn't need a working backing file before\n"
           "       rebasing in this case (useful for renaming the backing file)\n"
           "  '-h' with or without a command shows this help and lists the supported formats\n"
           "  '-p' show progress of command (only certain commands)\n"
           "  '-q' use Quiet mode - do not print any output (except errors)\n"
           "  '-S' indicates the consecutive number of bytes (defaults to 4k) that must\n"
           "       contain only zeros for qemu-img to create a sparse image during\n"
           "       conversion. If the number of bytes is 0, the source will not be scanned for\n"
           "       unallocated or zero sectors, and the destination image will always be\n"
           "       fully allocated\n"
           "  '--output' takes the format in which the output must be done (human or json)\n"
           "  '-n' skips the target volume creation (useful if the volume is created\n"
           "       prior to running qemu-img)\n"
           "\n"
           "Parameters to check subcommand:\n"
           "  '-r' tries to repair any inconsistencies that are found during the check.\n"
           "       '-r leaks' repairs only cluster leaks, whereas '-r all' fixes all\n"
           "       kinds of errors, with a higher risk of choosing the wrong fix or\n"
           "       hiding corruption that has already occurred.\n"
           "\n"
           "Parameters to snapshot subcommand:\n"
           "  'snapshot' is the name of the snapshot to create, apply or delete\n"
           "  '-a' applies a snapshot (revert disk to saved state)\n"
           "  '-c' creates a snapshot\n"
           "  '-d' deletes a snapshot\n"
           "  '-l' lists all snapshots in the given image\n"
           "\n"
           "Parameters to compare subcommand:\n"
           "  '-f' first image format\n"
           "  '-F' second image format\n"
           "  '-s' run in Strict mode - fail on different image size or sector allocation\n"
           "\n"
           "Parameters to dd subcommand:\n"
           "  'bs=BYTES' read and write up to BYTES bytes at a time "
           "(default: 512)\n"
           "  'count=N' copy only N input blocks\n"
           "  'if=FILE' read from FILE\n"
           "  'of=FILE' write to FILE\n"
           "  'skip=N' skip N bs-sized blocks at the start of input\n";

    printf("%s\nSupported formats:", help_msg);
    bdrv_iterate_format(format_print, NULL);
    printf("\n");
    exit(EXIT_SUCCESS);
}

static QemuOptsList qemu_object_opts = {
    .name = "object",
    .implied_opt_name = "qom-type",
    .head = QTAILQ_HEAD_INITIALIZER(qemu_object_opts.head),
    .desc = {
        { }
    },
};

static QemuOptsList qemu_source_opts = {
    .name = "source",
    .implied_opt_name = "file",
    .head = QTAILQ_HEAD_INITIALIZER(qemu_source_opts.head),
    .desc = {
        { }
    },
};
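
/* printf() wrapper that stays silent when the user passed -q (quiet == true) */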
static int GCC_FMT_ATTR(2, 3) qprintf(bool quiet, const char *fmt, ...)
{
    int ret = 0;
    if (!quiet) {
        va_list args;
        va_start(args, fmt);
        ret = vprintf(fmt, args);
        va_end(args);
    }
    return ret;
}

static int print_block_option_help(const char *filename, const char *fmt)
{
    BlockDriver *drv, *proto_drv;
    QemuOptsList *create_opts = NULL;
    Error *local_err = NULL;

    /* Find driver and parse its options */
    drv = bdrv_find_format(fmt);
    if (!drv) {
        error_report("Unknown file format '%s'", fmt);
        return 1;
    }

    create_opts = qemu_opts_append(create_opts, drv->create_opts);
    if (filename) {
        proto_drv = bdrv_find_protocol(filename, true, &local_err);
        if (!proto_drv) {
            error_report_err(local_err);
            qemu_opts_free(create_opts);
            return 1;
        }
        create_opts = qemu_opts_append(create_opts, proto_drv->create_opts);
    }

    qemu_opts_print_help(create_opts);
    qemu_opts_free(create_opts);
    return 0;
}
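
/* If the image is encrypted and a key is required, prompt for the password and
 * set it; skipped when the image was opened with BDRV_O_NO_IO */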
static int img_open_password(BlockBackend *blk, const char *filename,
                             int flags, bool quiet)
{
    BlockDriverState *bs;
    char password[256];

    bs = blk_bs(blk);
    if (bdrv_is_encrypted(bs) && bdrv_key_required(bs) &&
        !(flags & BDRV_O_NO_IO)) {
        qprintf(quiet, "Disk image '%s' is encrypted.\n", filename);
        if (qemu_read_password(password, sizeof(password)) < 0) {
            error_report("No password given");
            return -1;
        }
        if (bdrv_set_key(bs, password) < 0) {
            error_report("invalid password");
            return -1;
        }
    }
    return 0;
}
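
/* Open an image from an already parsed --image-opts option set ('opts');
 * 'optstr' is only used for error messages and the password prompt */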
static BlockBackend *img_open_opts(const char *optstr,
                                   QemuOpts *opts, int flags, bool writethrough,
                                   bool quiet)
{
    QDict *options;
    Error *local_err = NULL;
    BlockBackend *blk;
    options = qemu_opts_to_qdict(opts, NULL);
    blk = blk_new_open(NULL, NULL, options, flags, &local_err);
    if (!blk) {
        error_reportf_err(local_err, "Could not open '%s': ", optstr);
        return NULL;
    }
    blk_set_enable_write_cache(blk, !writethrough);

    if (img_open_password(blk, optstr, flags, quiet) < 0) {
        blk_unref(blk);
        return NULL;
    }
    return blk;
}

static BlockBackend *img_open_file(const char *filename,
                                   const char *fmt, int flags,
                                   bool writethrough, bool quiet)
{
    BlockBackend *blk;
    Error *local_err = NULL;
    QDict *options = NULL;

    if (fmt) {
        options = qdict_new();
        qdict_put(options, "driver", qstring_from_str(fmt));
    }

    blk = blk_new_open(filename, NULL, options, flags, &local_err);
    if (!blk) {
        error_reportf_err(local_err, "Could not open '%s': ", filename);
        return NULL;
    }
    blk_set_enable_write_cache(blk, !writethrough);

    if (img_open_password(blk, filename, flags, quiet) < 0) {
        blk_unref(blk);
        return NULL;
    }
    return blk;
}
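
/* Common open helper: dispatch to img_open_opts() for --image-opts syntax, or
 * to img_open_file() for a plain filename (optionally with an explicit format) */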
static BlockBackend *img_open(bool image_opts,
                              const char *filename,
                              const char *fmt, int flags, bool writethrough,
                              bool quiet)
{
    BlockBackend *blk;
    if (image_opts) {
        QemuOpts *opts;
        if (fmt) {
            error_report("--image-opts and --format are mutually exclusive");
            return NULL;
        }
        opts = qemu_opts_parse_noisily(qemu_find_opts("source"),
                                       filename, true);
        if (!opts) {
            return NULL;
        }
        blk = img_open_opts(filename, opts, flags, writethrough, quiet);
    } else {
        blk = img_open_file(filename, fmt, flags, writethrough, quiet);
    }
    return blk;
}
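
/* Translate the legacy -b/-F create arguments into the corresponding
 * backing file / backing format creation options */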
static int add_old_style_options(const char *fmt, QemuOpts *opts,
                                 const char *base_filename,
                                 const char *base_fmt)
{
    Error *err = NULL;

    if (base_filename) {
        qemu_opt_set(opts, BLOCK_OPT_BACKING_FILE, base_filename, &err);
        if (err) {
            error_report("Backing file not supported for file format '%s'",
                         fmt);
            error_free(err);
            return -1;
        }
    }
    if (base_fmt) {
        qemu_opt_set(opts, BLOCK_OPT_BACKING_FMT, base_fmt, &err);
        if (err) {
            error_report("Backing file format not supported for file "
                         "format '%s'", fmt);
            error_free(err);
            return -1;
        }
    }
    return 0;
}

static int img_create(int argc, char **argv)
{
    int c;
    uint64_t img_size = -1;
    const char *fmt = "raw";
    const char *base_fmt = NULL;
    const char *filename;
    const char *base_filename = NULL;
    char *options = NULL;
    Error *local_err = NULL;
    bool quiet = false;

    for(;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, "F:b:f:he6o:q",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch(c) {
        case '?':
        case 'h':
            help();
            break;
        case 'F':
            base_fmt = optarg;
            break;
        case 'b':
            base_filename = optarg;
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'e':
            error_report("option -e is deprecated, please use \'-o "
                         "encryption\' instead!");
            goto fail;
        case '6':
            error_report("option -6 is deprecated, please use \'-o "
                         "compat6\' instead!");
            goto fail;
        case 'o':
            if (!is_valid_option_list(optarg)) {
                error_report("Invalid option list: %s", optarg);
                goto fail;
            }
            if (!options) {
                options = g_strdup(optarg);
            } else {
                char *old_options = options;
                options = g_strdup_printf("%s,%s", options, optarg);
                g_free(old_options);
            }
            break;
        case 'q':
            quiet = true;
            break;
        case OPTION_OBJECT: {
            QemuOpts *opts;
            opts = qemu_opts_parse_noisily(&qemu_object_opts,
                                           optarg, true);
            if (!opts) {
                goto fail;
            }
        }   break;
        }
    }

    /* Get the filename */
    filename = (optind < argc) ? argv[optind] : NULL;
    if (options && has_help_option(options)) {
        g_free(options);
        return print_block_option_help(filename, fmt);
    }

    if (optind >= argc) {
        error_exit("Expecting image file name");
    }
    optind++;

    if (qemu_opts_foreach(&qemu_object_opts,
                          user_creatable_add_opts_foreach,
                          NULL, NULL)) {
        goto fail;
    }

    /* Get image size, if specified */
    if (optind < argc) {
        int64_t sval;
        char *end;
        sval = qemu_strtosz_suffix(argv[optind++], &end,
                                   QEMU_STRTOSZ_DEFSUFFIX_B);
        if (sval < 0 || *end) {
            if (sval == -ERANGE) {
                error_report("Image size must be less than 8 EiB!");
            } else {
                error_report("Invalid image size specified! You may use k, M, "
                             "G, T, P or E suffixes for ");
                error_report("kilobytes, megabytes, gigabytes, terabytes, "
                             "petabytes and exabytes.");
            }
            goto fail;
        }
        img_size = (uint64_t)sval;
    }
    if (optind != argc) {
        error_exit("Unexpected argument: %s", argv[optind]);
    }

    bdrv_img_create(filename, fmt, base_filename, base_fmt,
                    options, img_size, 0, &local_err, quiet);
    if (local_err) {
        error_reportf_err(local_err, "%s: ", filename);
        goto fail;
    }

    g_free(options);
    return 0;

fail:
    g_free(options);
    return 1;
}
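
/* Emit the ImageCheck result as pretty-printed JSON via the QObject output visitor */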
static void dump_json_image_check(ImageCheck *check, bool quiet)
{
    QString *str;
    QObject *obj;
    Visitor *v = qobject_output_visitor_new(&obj);

    visit_type_ImageCheck(v, NULL, &check, &error_abort);
    visit_complete(v, &obj);
    str = qobject_to_json_pretty(obj);
    assert(str != NULL);
    qprintf(quiet, "%s\n", qstring_get_str(str));
    qobject_decref(obj);
    visit_free(v);
    QDECREF(str);
}

static void dump_human_image_check(ImageCheck *check, bool quiet)
{
    if (!(check->corruptions || check->leaks || check->check_errors)) {
        qprintf(quiet, "No errors were found on the image.\n");
    } else {
        if (check->corruptions) {
            qprintf(quiet, "\n%" PRId64 " errors were found on the image.\n"
                    "Data may be corrupted, or further writes to the image "
                    "may corrupt it.\n",
                    check->corruptions);
        }

        if (check->leaks) {
            qprintf(quiet,
                    "\n%" PRId64 " leaked clusters were found on the image.\n"
                    "This means waste of disk space, but no harm to data.\n",
                    check->leaks);
        }

        if (check->check_errors) {
            qprintf(quiet,
                    "\n%" PRId64
                    " internal errors have occurred during the check.\n",
                    check->check_errors);
        }
    }

    if (check->total_clusters != 0 && check->allocated_clusters != 0) {
        qprintf(quiet, "%" PRId64 "/%" PRId64 " = %0.2f%% allocated, "
                "%0.2f%% fragmented, %0.2f%% compressed clusters\n",
                check->allocated_clusters, check->total_clusters,
                check->allocated_clusters * 100.0 / check->total_clusters,
                check->fragmented_clusters * 100.0 / check->allocated_clusters,
                check->compressed_clusters * 100.0 /
                check->allocated_clusters);
    }

    if (check->image_end_offset) {
        qprintf(quiet,
                "Image end offset: %" PRId64 "\n", check->image_end_offset);
    }
}
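
/* Run bdrv_check() on the image and copy the results into the QAPI ImageCheck
 * structure, setting the has_* flags for the optional fields */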
static int collect_image_check(BlockDriverState *bs,
                               ImageCheck *check,
                               const char *filename,
                               const char *fmt,
                               int fix)
{
    int ret;
    BdrvCheckResult result;

    ret = bdrv_check(bs, &result, fix);
    if (ret < 0) {
        return ret;
    }

    check->filename                 = g_strdup(filename);
    check->format                   = g_strdup(bdrv_get_format_name(bs));
    check->check_errors             = result.check_errors;
    check->corruptions              = result.corruptions;
    check->has_corruptions          = result.corruptions != 0;
    check->leaks                    = result.leaks;
    check->has_leaks                = result.leaks != 0;
    check->corruptions_fixed        = result.corruptions_fixed;
    check->has_corruptions_fixed    = result.corruptions != 0;
    check->leaks_fixed              = result.leaks_fixed;
    check->has_leaks_fixed          = result.leaks != 0;
    check->image_end_offset         = result.image_end_offset;
    check->has_image_end_offset     = result.image_end_offset != 0;
    check->total_clusters           = result.bfi.total_clusters;
    check->has_total_clusters       = result.bfi.total_clusters != 0;
    check->allocated_clusters       = result.bfi.allocated_clusters;
    check->has_allocated_clusters   = result.bfi.allocated_clusters != 0;
    check->fragmented_clusters      = result.bfi.fragmented_clusters;
    check->has_fragmented_clusters  = result.bfi.fragmented_clusters != 0;
    check->compressed_clusters      = result.bfi.compressed_clusters;
    check->has_compressed_clusters  = result.bfi.compressed_clusters != 0;

    return 0;
}

|
|
|
|
|
2010-06-29 13:43:13 +04:00
|
|
|
/*
|
|
|
|
* Checks an image for consistency. Exit codes:
|
|
|
|
*
|
2014-06-03 00:15:21 +04:00
|
|
|
* 0 - Check completed, image is good
|
|
|
|
* 1 - Check not completed because of internal errors
|
|
|
|
* 2 - Check completed, image is corrupted
|
|
|
|
* 3 - Check completed, image has leaked clusters, but is good otherwise
|
|
|
|
* 63 - Checks are not supported by the image format
|
2010-06-29 13:43:13 +04:00
|
|
|
*/
|
2009-04-22 03:11:53 +04:00
|
|
|
static int img_check(int argc, char **argv)
|
|
|
|
{
|
|
|
|
int c, ret;
|
2013-01-28 15:59:47 +04:00
|
|
|
OutputFormat output_format = OFORMAT_HUMAN;
|
2014-07-23 00:58:42 +04:00
|
|
|
const char *filename, *fmt, *output, *cache;
|
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk;
|
2009-04-22 03:11:53 +04:00
|
|
|
BlockDriverState *bs;
|
2012-05-11 18:07:02 +04:00
|
|
|
int fix = 0;
|
2016-03-15 15:01:04 +03:00
|
|
|
int flags = BDRV_O_CHECK;
|
|
|
|
bool writethrough;
|
2014-10-07 15:59:05 +04:00
|
|
|
ImageCheck *check;
|
2013-02-13 12:09:40 +04:00
|
|
|
bool quiet = false;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2009-04-22 03:11:53 +04:00
|
|
|
|
|
|
|
fmt = NULL;
|
2013-01-28 15:59:47 +04:00
|
|
|
output = NULL;
|
2014-07-23 00:58:42 +04:00
|
|
|
cache = BDRV_DEFAULT_CACHE;
|
2016-03-15 15:01:04 +03:00
|
|
|
|
2009-04-22 03:11:53 +04:00
|
|
|
for(;;) {
|
2013-01-28 15:59:47 +04:00
|
|
|
int option_index = 0;
|
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"format", required_argument, 0, 'f'},
|
2014-03-24 22:38:54 +04:00
|
|
|
{"repair", required_argument, 0, 'r'},
|
2013-01-28 15:59:47 +04:00
|
|
|
{"output", required_argument, 0, OPTION_OUTPUT},
|
2016-02-17 13:10:17 +03:00
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2013-01-28 15:59:47 +04:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
2014-07-23 00:58:42 +04:00
|
|
|
c = getopt_long(argc, argv, "hf:r:T:q",
|
2013-01-28 15:59:47 +04:00
|
|
|
long_options, &option_index);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (c == -1) {
|
2009-04-22 03:11:53 +04:00
|
|
|
break;
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-04-22 03:11:53 +04:00
|
|
|
switch(c) {
|
2010-12-06 17:25:40 +03:00
|
|
|
case '?':
|
2009-04-22 03:11:53 +04:00
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
2012-05-11 18:07:02 +04:00
|
|
|
case 'r':
|
|
|
|
flags |= BDRV_O_RDWR;
|
|
|
|
|
|
|
|
if (!strcmp(optarg, "leaks")) {
|
|
|
|
fix = BDRV_FIX_LEAKS;
|
|
|
|
} else if (!strcmp(optarg, "all")) {
|
|
|
|
fix = BDRV_FIX_LEAKS | BDRV_FIX_ERRORS;
|
|
|
|
} else {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Unknown option value for -r "
|
|
|
|
"(expecting 'leaks' or 'all'): %s", optarg);
|
2012-05-11 18:07:02 +04:00
|
|
|
}
|
|
|
|
break;
|
2013-01-28 15:59:47 +04:00
|
|
|
case OPTION_OUTPUT:
|
|
|
|
output = optarg;
|
|
|
|
break;
|
2014-07-23 00:58:42 +04:00
|
|
|
case 'T':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:40 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT: {
|
|
|
|
QemuOpts *opts;
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
} break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2009-04-22 03:11:53 +04:00
|
|
|
}
|
|
|
|
}
|
2013-08-05 12:53:04 +04:00
|
|
|
if (optind != argc - 1) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Expecting one image file name");
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-04-22 03:11:53 +04:00
|
|
|
filename = argv[optind++];
|
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
if (output && !strcmp(output, "json")) {
|
|
|
|
output_format = OFORMAT_JSON;
|
|
|
|
} else if (output && !strcmp(output, "human")) {
|
|
|
|
output_format = OFORMAT_HUMAN;
|
|
|
|
} else if (output) {
|
|
|
|
error_report("--output must be used with human or json as argument.");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
qom: -object error messages lost location, restore it
qemu_opts_foreach() runs its callback with the error location set to
the option's location. Any errors the callback reports use the
option's location automatically.
Commit 90998d5 moved the actual error reporting from "inside"
qemu_opts_foreach() to after it. Here's a typical hunk:
if (qemu_opts_foreach(qemu_find_opts("object"),
- object_create,
- object_create_initial, NULL)) {
+ user_creatable_add_opts_foreach,
+ object_create_initial, &err)) {
+ error_report_err(err);
exit(1);
}
Before, object_create() reports from within qemu_opts_foreach(), using
the option's location. Afterwards, we do it after
qemu_opts_foreach(), using whatever location happens to be current
there. Commonly a "none" location.
This is because Error objects don't have location information.
Problematic.
Reproducer:
$ qemu-system-x86_64 -nodefaults -display none -object secret,id=foo,foo=bar
qemu-system-x86_64: Property '.foo' not found
Note no location. This commit restores it:
qemu-system-x86_64: -object secret,id=foo,foo=bar: Property '.foo' not found
Note that the qemu_opts_foreach() bug just fixed could mask the bug
here: if the location it leaves dangling hasn't been clobbered, yet,
it's the correct one.
Reported-by: Eric Blake <eblake@redhat.com>
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1461767349-15329-4-git-send-email-armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[Paragraph on Error added to commit message]
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
|
2014-07-23 00:58:42 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid source cache option: %s", cache);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
|
|
|
return 1;
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2013-01-28 15:59:47 +04:00
|
|
|
|
|
|
|
check = g_new0(ImageCheck, 1);
|
|
|
|
ret = collect_image_check(bs, check, filename, fmt, fix);
|
2010-06-29 13:43:13 +04:00
|
|
|
|
|
|
|
if (ret == -ENOTSUP) {
|
2014-05-31 23:33:30 +04:00
|
|
|
error_report("This image format does not support checks");
|
2013-10-24 10:53:34 +04:00
|
|
|
ret = 63;
|
2013-01-28 15:59:47 +04:00
|
|
|
goto fail;
|
2010-06-29 13:43:13 +04:00
|
|
|
}
|
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
if (check->corruptions_fixed || check->leaks_fixed) {
|
|
|
|
int corruptions_fixed, leaks_fixed;
|
2012-05-11 20:16:54 +04:00
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
leaks_fixed = check->leaks_fixed;
|
|
|
|
corruptions_fixed = check->corruptions_fixed;
|
2010-06-29 13:43:13 +04:00
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
if (output_format == OFORMAT_HUMAN) {
|
2013-02-13 12:09:40 +04:00
|
|
|
qprintf(quiet,
|
|
|
|
"The following inconsistencies were found and repaired:\n\n"
|
|
|
|
" %" PRId64 " leaked clusters\n"
|
|
|
|
" %" PRId64 " corruptions\n\n"
|
|
|
|
"Double checking the fixed image now...\n",
|
|
|
|
check->leaks_fixed,
|
|
|
|
check->corruptions_fixed);
|
2010-06-29 13:43:13 +04:00
|
|
|
}
|
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
ret = collect_image_check(bs, check, filename, fmt, 0);
|
2009-04-22 03:11:53 +04:00
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
check->leaks_fixed = leaks_fixed;
|
|
|
|
check->corruptions_fixed = corruptions_fixed;
|
2012-03-15 16:13:31 +04:00
|
|
|
}
|
|
|
|
|
2014-10-23 17:29:12 +04:00
|
|
|
if (!ret) {
|
|
|
|
switch (output_format) {
|
|
|
|
case OFORMAT_HUMAN:
|
|
|
|
dump_human_image_check(check, quiet);
|
|
|
|
break;
|
|
|
|
case OFORMAT_JSON:
|
|
|
|
dump_json_image_check(check, quiet);
|
|
|
|
break;
|
|
|
|
}
|
2013-01-28 15:59:46 +04:00
|
|
|
}
|
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
if (ret || check->check_errors) {
|
2014-10-23 17:29:12 +04:00
|
|
|
if (ret) {
|
|
|
|
error_report("Check failed: %s", strerror(-ret));
|
|
|
|
} else {
|
|
|
|
error_report("Check failed");
|
|
|
|
}
|
2013-01-28 15:59:47 +04:00
|
|
|
ret = 1;
|
|
|
|
goto fail;
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2010-06-29 13:43:13 +04:00
|
|
|
|
2013-01-28 15:59:47 +04:00
|
|
|
if (check->corruptions) {
|
|
|
|
ret = 2;
|
|
|
|
} else if (check->leaks) {
|
|
|
|
ret = 3;
|
2010-06-29 13:43:13 +04:00
|
|
|
} else {
|
2013-01-28 15:59:47 +04:00
|
|
|
ret = 0;
|
2010-06-29 13:43:13 +04:00
|
|
|
}
|
2013-01-28 15:59:47 +04:00
|
|
|
|
|
|
|
fail:
|
|
|
|
qapi_free_ImageCheck(check);
|
    blk_unref(blk);

    return ret;
}

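/* Completion context for block jobs started by qemu-img: common_block_job_cb()
 * records the BlockDriverState the job operates on and the Error ** through
 * which a failing job (negative return value) is reported. */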
typedef struct CommonBlockJobCBInfo {
    BlockDriverState *bs;
    Error **errp;
} CommonBlockJobCBInfo;

static void common_block_job_cb(void *opaque, int ret)
{
    CommonBlockJobCBInfo *cbi = opaque;

    if (ret < 0) {
        error_setg_errno(cbi->errp, -ret, "Block job failed");
    }
}

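/* Drive a block job to completion: poll its AioContext until the job reports
 * ready, updating the progress display from the job's offset/len, then
 * complete the job synchronously. */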
static void run_block_job(BlockJob *job, Error **errp)
{
    AioContext *aio_context = blk_get_aio_context(job->blk);

    aio_context_acquire(aio_context);
    do {
        aio_poll(aio_context, true);
        qemu_progress_print(job->len ?
                            ((float)job->offset / job->len * 100.f) : 0.0f, 0);
    } while (!job->ready);

    block_job_complete_sync(job, errp);
    aio_context_release(aio_context);

    /* A block job may finish instantaneously without publishing any progress,
     * so just signal completion here */
    qemu_progress_print(100.f, 0);
}

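/* 'qemu-img commit': merge the changes recorded in an image into its backing
 * file (or into the backing image selected with -b) using an active commit
 * block job, and, unless -d was given, empty the top image afterwards. */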
static int img_commit(int argc, char **argv)
{
    int c, ret, flags;
    const char *filename, *fmt, *cache, *base;
    BlockBackend *blk;
    BlockDriverState *bs, *base_bs;
    bool progress = false, quiet = false, drop = false;
    bool writethrough;
    Error *local_err = NULL;
    CommonBlockJobCBInfo cbi;
    bool image_opts = false;
    AioContext *aio_context;

    fmt = NULL;
    cache = BDRV_DEFAULT_CACHE;
    base = NULL;
    for(;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, "f:ht:b:dpq",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch(c) {
        case '?':
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 't':
            cache = optarg;
            break;
        case 'b':
            base = optarg;
            /* -b implies -d */
            drop = true;
            break;
        case 'd':
            drop = true;
            break;
        case 'p':
            progress = true;
            break;
        case 'q':
            quiet = true;
            break;
        case OPTION_OBJECT: {
            QemuOpts *opts;
            opts = qemu_opts_parse_noisily(&qemu_object_opts,
                                           optarg, true);
            if (!opts) {
                return 1;
            }
        }   break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }

    /* Progress is not shown in Quiet mode */
    if (quiet) {
        progress = false;
    }

    if (optind != argc - 1) {
        error_exit("Expecting one image file name");
    }
    filename = argv[optind++];

    if (qemu_opts_foreach(&qemu_object_opts,
                          user_creatable_add_opts_foreach,
                          NULL, NULL)) {
        return 1;
    }

    flags = BDRV_O_RDWR | BDRV_O_UNMAP;
    ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
    if (ret < 0) {
        error_report("Invalid cache option: %s", cache);
        return 1;
    }

    blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet);
    if (!blk) {
        return 1;
    }
    bs = blk_bs(blk);

    qemu_progress_init(progress, 1.f);
    qemu_progress_print(0.f, 100);

    if (base) {
        base_bs = bdrv_find_backing_image(bs, base);
        if (!base_bs) {
            error_setg(&local_err, QERR_BASE_NOT_FOUND, base);
            goto done;
        }
    } else {
        /* This is different from QMP, which by default uses the deepest file in
         * the backing chain (i.e., the very base); however, the traditional
         * behavior of qemu-img commit is using the immediate backing file. */
        base_bs = backing_bs(bs);
        if (!base_bs) {
            error_setg(&local_err, "Image does not have a backing file");
            goto done;
        }
    }

    cbi = (CommonBlockJobCBInfo){
        .errp = &local_err,
        .bs   = bs,
    };

    aio_context = bdrv_get_aio_context(bs);
    aio_context_acquire(aio_context);
    commit_active_start("commit", bs, base_bs, BLOCK_JOB_DEFAULT, 0,
                        BLOCKDEV_ON_ERROR_REPORT, common_block_job_cb, &cbi,
                        &local_err, false);
    aio_context_release(aio_context);
    if (local_err) {
        goto done;
    }

    /* When the block job completes, the BlockBackend reference will point to
     * the old backing file. To avoid the top image being deleted before we
     * can still empty it afterwards, take an extra reference here
     * preemptively. */
    if (!drop) {
        bdrv_ref(bs);
    }

    run_block_job(bs->job, &local_err);
    if (local_err) {
        goto unref_backing;
    }

    if (!drop && bs->drv->bdrv_make_empty) {
        ret = bs->drv->bdrv_make_empty(bs);
        if (ret) {
            error_setg_errno(&local_err, -ret, "Could not empty %s",
                             filename);
            goto unref_backing;
        }
    }

unref_backing:
    if (!drop) {
        bdrv_unref(bs);
    }
done:
    qemu_progress_end();
    blk_unref(blk);

    if (local_err) {
        error_report_err(local_err);
        return 1;
    }

    qprintf(quiet, "Image committed.\n");
    return 0;
}

/*
 * Returns true iff the first sector pointed to by 'buf' contains at least
 * a non-NUL byte.
 *
 * 'pnum' is set to the number of sectors (including and immediately following
 * the first one) that are known to be in the same allocated/unallocated state.
 */
static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum)
{
    bool is_zero;
    int i;

    if (n <= 0) {
        *pnum = 0;
        return 0;
    }
    is_zero = buffer_is_zero(buf, 512);
    for(i = 1; i < n; i++) {
        buf += 512;
        if (is_zero != buffer_is_zero(buf, 512)) {
            break;
        }
    }
    *pnum = i;
    return !is_zero;
}

/*
 * Like is_allocated_sectors, but if the buffer starts with a used sector,
 * up to 'min' consecutive sectors containing zeros are ignored. This avoids
 * breaking up write requests for only small sparse areas.
 */
static int is_allocated_sectors_min(const uint8_t *buf, int n, int *pnum,
    int min)
{
    int ret;
    int num_checked, num_used;

    if (n < min) {
        min = n;
    }

    ret = is_allocated_sectors(buf, n, pnum);
    if (!ret) {
        return ret;
    }

    num_used = *pnum;
    buf += BDRV_SECTOR_SIZE * *pnum;
    n -= *pnum;
    num_checked = num_used;

    while (n > 0) {
        ret = is_allocated_sectors(buf, n, pnum);

        buf += BDRV_SECTOR_SIZE * *pnum;
        n -= *pnum;
        num_checked += *pnum;
        if (ret) {
            num_used = num_checked;
        } else if (*pnum >= min) {
            break;
        }
    }

    *pnum = num_used;
    return 1;
}

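/* For instance, with min == 4 a buffer laid out as 8 data sectors, 2 zero
 * sectors and 6 data sectors is reported as one allocated run of 16 sectors,
 * because the 2-sector hole is below the threshold; a zero run of 4 or more
 * sectors would instead end the allocated run. */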
/*
 * Compares two buffers sector by sector. Returns 0 if the first sector of both
 * buffers matches, non-zero otherwise.
 *
 * pnum is set to the number of sectors (including and immediately following
 * the first one) that are known to have the same comparison result
 */
static int compare_sectors(const uint8_t *buf1, const uint8_t *buf2, int n,
    int *pnum)
{
    bool res;
    int i;

    if (n <= 0) {
        *pnum = 0;
        return 0;
    }

    res = !!memcmp(buf1, buf2, 512);
    for(i = 1; i < n; i++) {
        buf1 += 512;
        buf2 += 512;

        if (!!memcmp(buf1, buf2, 512) != res) {
            break;
        }
    }

    *pnum = i;
    return res;
}

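/* Size (2 MiB) of the I/O buffers used by the sector-based loops below. */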
#define IO_BUF_SIZE (2 * 1024 * 1024)

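/* Convert a sector count or index into bytes (512-byte sectors). */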
static int64_t sectors_to_bytes(int64_t sectors)
{
    return sectors << BDRV_SECTOR_BITS;
}

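/* Clamp the remaining [from, total) sector range to what fits into one
 * IO_BUF_SIZE chunk. */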
static int64_t sectors_to_process(int64_t total, int64_t from)
{
    return MIN(total - from, IO_BUF_SIZE >> BDRV_SECTOR_BITS);
}

/*
 * Check if passed sectors are empty (not allocated or contain only 0 bytes)
 *
 * Returns 0 in case sectors are filled with 0, 1 if sectors contain non-zero
 * data and negative value on error.
 *
 * @param blk:  BlockBackend for the image
 * @param sect_num: Number of first sector to check
 * @param sect_count: Number of sectors to check
 * @param filename: Name of disk file we are checking (logging purpose)
 * @param buffer: Allocated buffer for storing read data
 * @param quiet: Flag for quiet mode
 */
static int check_empty_sectors(BlockBackend *blk, int64_t sect_num,
                               int sect_count, const char *filename,
                               uint8_t *buffer, bool quiet)
{
    int pnum, ret = 0;
    ret = blk_pread(blk, sect_num << BDRV_SECTOR_BITS, buffer,
                    sect_count << BDRV_SECTOR_BITS);
    if (ret < 0) {
        error_report("Error while reading offset %" PRId64 " of %s: %s",
                     sectors_to_bytes(sect_num), filename, strerror(-ret));
        return ret;
    }
    ret = is_allocated_sectors(buffer, sect_count, &pnum);
    if (ret || pnum != sect_count) {
        qprintf(quiet, "Content mismatch at offset %" PRId64 "!\n",
                sectors_to_bytes(ret ? sect_num : sect_num + pnum));
        return 1;
    }

    return 0;
}

/*
 * Compares two images. Exit codes:
 *
 * 0 - Images are identical
 * 1 - Images differ
 * >1 - Error occurred
 */
static int img_compare(int argc, char **argv)
{
    const char *fmt1 = NULL, *fmt2 = NULL, *cache, *filename1, *filename2;
    BlockBackend *blk1, *blk2;
    BlockDriverState *bs1, *bs2;
    int64_t total_sectors1, total_sectors2;
    uint8_t *buf1 = NULL, *buf2 = NULL;
    int pnum1, pnum2;
    int allocated1, allocated2;
    int ret = 0; /* return value - 0 Ident, 1 Different, >1 Error */
    bool progress = false, quiet = false, strict = false;
    int flags;
    bool writethrough;
    int64_t total_sectors;
    int64_t sector_num = 0;
    int64_t nb_sectors;
    int c, pnum;
    uint64_t progress_base;
    bool image_opts = false;

    cache = BDRV_DEFAULT_CACHE;
    for (;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, "hf:F:T:pqs",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch (c) {
        case '?':
        case 'h':
            help();
            break;
        case 'f':
            fmt1 = optarg;
            break;
        case 'F':
            fmt2 = optarg;
            break;
        case 'T':
            cache = optarg;
            break;
        case 'p':
            progress = true;
            break;
        case 'q':
            quiet = true;
            break;
        case 's':
            strict = true;
            break;
        case OPTION_OBJECT: {
            QemuOpts *opts;
            opts = qemu_opts_parse_noisily(&qemu_object_opts,
                                           optarg, true);
            if (!opts) {
                ret = 2;
                goto out4;
            }
        }   break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }

    /* Progress is not shown in Quiet mode */
    if (quiet) {
        progress = false;
    }

    if (optind != argc - 2) {
        error_exit("Expecting two image file names");
    }
    filename1 = argv[optind++];
    filename2 = argv[optind++];

    if (qemu_opts_foreach(&qemu_object_opts,
                          user_creatable_add_opts_foreach,
                          NULL, NULL)) {
        ret = 2;
        goto out4;
    }

    /* Initialize before goto out */
    qemu_progress_init(progress, 2.0);

    flags = 0;
    ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
    if (ret < 0) {
        error_report("Invalid source cache option: %s", cache);
        ret = 2;
        goto out3;
    }

    blk1 = img_open(image_opts, filename1, fmt1, flags, writethrough, quiet);
    if (!blk1) {
        ret = 2;
        goto out3;
    }

    blk2 = img_open(image_opts, filename2, fmt2, flags, writethrough, quiet);
    if (!blk2) {
        ret = 2;
        goto out2;
    }
    bs1 = blk_bs(blk1);
    bs2 = blk_bs(blk2);

    buf1 = blk_blockalign(blk1, IO_BUF_SIZE);
    buf2 = blk_blockalign(blk2, IO_BUF_SIZE);
    total_sectors1 = blk_nb_sectors(blk1);
    if (total_sectors1 < 0) {
        error_report("Can't get size of %s: %s",
                     filename1, strerror(-total_sectors1));
        ret = 4;
        goto out;
    }
    total_sectors2 = blk_nb_sectors(blk2);
    if (total_sectors2 < 0) {
        error_report("Can't get size of %s: %s",
                     filename2, strerror(-total_sectors2));
        ret = 4;
        goto out;
    }
    total_sectors = MIN(total_sectors1, total_sectors2);
    progress_base = MAX(total_sectors1, total_sectors2);

    qemu_progress_print(0, 100);

    if (strict && total_sectors1 != total_sectors2) {
        ret = 1;
        qprintf(quiet, "Strict mode: Image size mismatch!\n");
        goto out;
    }

    for (;;) {
        int64_t status1, status2;
        BlockDriverState *file;

        nb_sectors = sectors_to_process(total_sectors, sector_num);
        if (nb_sectors <= 0) {
            break;
        }
        status1 = bdrv_get_block_status_above(bs1, NULL, sector_num,
                                              total_sectors1 - sector_num,
                                              &pnum1, &file);
        if (status1 < 0) {
            ret = 3;
            error_report("Sector allocation test failed for %s", filename1);
            goto out;
        }
        allocated1 = status1 & BDRV_BLOCK_ALLOCATED;

        status2 = bdrv_get_block_status_above(bs2, NULL, sector_num,
                                              total_sectors2 - sector_num,
                                              &pnum2, &file);
        if (status2 < 0) {
            ret = 3;
            error_report("Sector allocation test failed for %s", filename2);
            goto out;
        }
        allocated2 = status2 & BDRV_BLOCK_ALLOCATED;
        if (pnum1) {
            nb_sectors = MIN(nb_sectors, pnum1);
        }
        if (pnum2) {
            nb_sectors = MIN(nb_sectors, pnum2);
        }

        if (strict) {
            if ((status1 & ~BDRV_BLOCK_OFFSET_MASK) !=
                (status2 & ~BDRV_BLOCK_OFFSET_MASK)) {
                ret = 1;
                qprintf(quiet, "Strict mode: Offset %" PRId64
                        " block status mismatch!\n",
                        sectors_to_bytes(sector_num));
                goto out;
            }
        }
        if ((status1 & BDRV_BLOCK_ZERO) && (status2 & BDRV_BLOCK_ZERO)) {
            nb_sectors = MIN(pnum1, pnum2);
        } else if (allocated1 == allocated2) {
            if (allocated1) {
                ret = blk_pread(blk1, sector_num << BDRV_SECTOR_BITS, buf1,
                                nb_sectors << BDRV_SECTOR_BITS);
                if (ret < 0) {
                    error_report("Error while reading offset %" PRId64 " of %s:"
                                 " %s", sectors_to_bytes(sector_num), filename1,
                                 strerror(-ret));
                    ret = 4;
                    goto out;
                }
                ret = blk_pread(blk2, sector_num << BDRV_SECTOR_BITS, buf2,
                                nb_sectors << BDRV_SECTOR_BITS);
                if (ret < 0) {
                    error_report("Error while reading offset %" PRId64
                                 " of %s: %s", sectors_to_bytes(sector_num),
                                 filename2, strerror(-ret));
                    ret = 4;
                    goto out;
                }
                ret = compare_sectors(buf1, buf2, nb_sectors, &pnum);
                if (ret || pnum != nb_sectors) {
                    qprintf(quiet, "Content mismatch at offset %" PRId64 "!\n",
                            sectors_to_bytes(
                                ret ? sector_num : sector_num + pnum));
                    ret = 1;
                    goto out;
                }
            }
        } else {

            if (allocated1) {
                ret = check_empty_sectors(blk1, sector_num, nb_sectors,
                                          filename1, buf1, quiet);
            } else {
                ret = check_empty_sectors(blk2, sector_num, nb_sectors,
                                          filename2, buf1, quiet);
            }
            if (ret) {
                if (ret < 0) {
                    error_report("Error while reading offset %" PRId64 ": %s",
                                 sectors_to_bytes(sector_num), strerror(-ret));
                    ret = 4;
                }
                goto out;
            }
        }
        sector_num += nb_sectors;
        qemu_progress_print(((float) nb_sectors / progress_base)*100, 100);
    }

    if (total_sectors1 != total_sectors2) {
        BlockBackend *blk_over;
        int64_t total_sectors_over;
        const char *filename_over;

        qprintf(quiet, "Warning: Image size mismatch!\n");
        if (total_sectors1 > total_sectors2) {
            total_sectors_over = total_sectors1;
            blk_over = blk1;
            filename_over = filename1;
        } else {
            total_sectors_over = total_sectors2;
            blk_over = blk2;
            filename_over = filename2;
        }

        for (;;) {
            nb_sectors = sectors_to_process(total_sectors_over, sector_num);
            if (nb_sectors <= 0) {
                break;
            }
            ret = bdrv_is_allocated_above(blk_bs(blk_over), NULL, sector_num,
                                          nb_sectors, &pnum);
            if (ret < 0) {
                ret = 3;
                error_report("Sector allocation test failed for %s",
                             filename_over);
                goto out;

            }
            nb_sectors = pnum;
            if (ret) {
                ret = check_empty_sectors(blk_over, sector_num, nb_sectors,
                                          filename_over, buf1, quiet);
                if (ret) {
                    if (ret < 0) {
                        error_report("Error while reading offset %" PRId64
                                     " of %s: %s", sectors_to_bytes(sector_num),
                                     filename_over, strerror(-ret));
                        ret = 4;
                    }
                    goto out;
                }
            }
            sector_num += nb_sectors;
            qemu_progress_print(((float) nb_sectors / progress_base)*100, 100);
        }
    }

    qprintf(quiet, "Images are identical.\n");
    ret = 0;

out:
    qemu_vfree(buf1);
    qemu_vfree(buf2);
    blk_unref(blk2);
out2:
    blk_unref(blk1);
out3:
    qemu_progress_end();
out4:
    return ret;
}

enum ImgConvertBlockStatus {
    BLK_DATA,
    BLK_ZERO,
    BLK_BACKING_FILE,
};

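/* Shared state of the 'qemu-img convert' copy loop: the source backends and
 * their sizes, the position of the current copy window and its cached block
 * status, and the target write parameters (compression, sparse threshold,
 * cluster and buffer sizes). */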
typedef struct ImgConvertState {
    BlockBackend **src;
    int64_t *src_sectors;
    int src_cur, src_num;
    int64_t src_cur_offset;
    int64_t total_sectors;
    int64_t allocated_sectors;
    enum ImgConvertBlockStatus status;
    int64_t sector_next_status;
    BlockBackend *target;
    bool has_zero_init;
    bool compressed;
    bool target_has_backing;
    int min_sparse;
    size_t cluster_sectors;
    size_t buf_sectors;
} ImgConvertState;

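/* Select the source image that contains sector_num: advance src_cur and
 * src_cur_offset until the sector falls inside the current source (convert
 * can concatenate several input images). */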
static void convert_select_part(ImgConvertState *s, int64_t sector_num)
{
    assert(sector_num >= s->src_cur_offset);
    while (sector_num - s->src_cur_offset >= s->src_sectors[s->src_cur]) {
        s->src_cur_offset += s->src_sectors[s->src_cur];
        s->src_cur++;
        assert(s->src_cur < s->src_num);
    }
}

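/* Work out how many contiguous sectors starting at sector_num share the same
 * block status (data, zero, or covered by the backing file) and record that
 * status in s->status for the copy loop. */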
static int convert_iteration_sectors(ImgConvertState *s, int64_t sector_num)
{
    int64_t ret;
    int n;

    convert_select_part(s, sector_num);

    assert(s->total_sectors > sector_num);
    n = MIN(s->total_sectors - sector_num, BDRV_REQUEST_MAX_SECTORS);

    if (s->sector_next_status <= sector_num) {
        BlockDriverState *file;
        ret = bdrv_get_block_status(blk_bs(s->src[s->src_cur]),
                                    sector_num - s->src_cur_offset,
                                    n, &n, &file);
        if (ret < 0) {
            return ret;
        }

        if (ret & BDRV_BLOCK_ZERO) {
            s->status = BLK_ZERO;
        } else if (ret & BDRV_BLOCK_DATA) {
            s->status = BLK_DATA;
        } else if (!s->target_has_backing) {
            /* Without a target backing file we must copy over the contents of
             * the backing file as well. */
            /* Check block status of the backing file chain to avoid
             * needlessly reading zeroes and limiting the iteration to the
             * buffer size */
            ret = bdrv_get_block_status_above(blk_bs(s->src[s->src_cur]), NULL,
                                              sector_num - s->src_cur_offset,
                                              n, &n, &file);
            if (ret < 0) {
                return ret;
            }

            if (ret & BDRV_BLOCK_ZERO) {
                s->status = BLK_ZERO;
            } else {
                s->status = BLK_DATA;
            }
        } else {
            s->status = BLK_BACKING_FILE;
        }

        s->sector_next_status = sector_num + n;
    }

    n = MIN(n, s->sector_next_status - sector_num);
    if (s->status == BLK_DATA) {
        n = MIN(n, s->buf_sectors);
    }

    /* We need to write complete clusters for compressed images, so if an
     * unallocated area is shorter than that, we must consider the whole
     * cluster allocated. */
    if (s->compressed) {
        if (n < s->cluster_sectors) {
            n = MIN(s->cluster_sectors, s->total_sectors - sector_num);
            s->status = BLK_DATA;
        } else {
            n = QEMU_ALIGN_DOWN(n, s->cluster_sectors);
        }
    }

    return n;
}

static int convert_read(ImgConvertState *s, int64_t sector_num, int nb_sectors,
                        uint8_t *buf)
{
    int n;
    int ret;

    assert(nb_sectors <= s->buf_sectors);
    while (nb_sectors > 0) {
        BlockBackend *blk;
        int64_t bs_sectors;

        /* In the case of compression with multiple source files, we can get a
         * nb_sectors that spreads into the next part. So we must be able to
         * read across multiple BDSes for one convert_read() call. */
        convert_select_part(s, sector_num);
        blk = s->src[s->src_cur];
        bs_sectors = s->src_sectors[s->src_cur];

        n = MIN(nb_sectors, bs_sectors - (sector_num - s->src_cur_offset));
        ret = blk_pread(blk,
                        (sector_num - s->src_cur_offset) << BDRV_SECTOR_BITS,
                        buf, n << BDRV_SECTOR_BITS);
        if (ret < 0) {
            return ret;
        }

        sector_num += n;
        nb_sectors -= n;
        buf += n * BDRV_SECTOR_SIZE;
    }

    return 0;
}

static int convert_write(ImgConvertState *s, int64_t sector_num, int nb_sectors,
                         const uint8_t *buf)
{
    int ret;

    while (nb_sectors > 0) {
        int n = nb_sectors;

        switch (s->status) {
        case BLK_BACKING_FILE:
            /* If we have a backing file, leave clusters unallocated that are
             * unallocated in the source image, so that the backing file is
             * visible at the respective offset. */
            assert(s->target_has_backing);
            break;

        case BLK_DATA:
            /* We must always write compressed clusters as a whole, so don't
             * try to find zeroed parts in the buffer. We can only save the
             * write if the buffer is completely zeroed and we're allowed to
             * keep the target sparse. */
            if (s->compressed) {
                if (s->has_zero_init && s->min_sparse &&
                    buffer_is_zero(buf, n * BDRV_SECTOR_SIZE))
                {
                    assert(!s->target_has_backing);
                    break;
                }

                ret = blk_pwrite_compressed(s->target,
                                            sector_num << BDRV_SECTOR_BITS,
                                            buf, n << BDRV_SECTOR_BITS);
                if (ret < 0) {
                    return ret;
                }
                break;
            }

            /* If there is real non-zero data or we're told to keep the target
             * fully allocated (-S 0), we must write it. Otherwise we can treat
             * it as zero sectors. */
            if (!s->min_sparse ||
                is_allocated_sectors_min(buf, n, &n, s->min_sparse))
            {
                ret = blk_pwrite(s->target, sector_num << BDRV_SECTOR_BITS,
                                 buf, n << BDRV_SECTOR_BITS, 0);
                if (ret < 0) {
                    return ret;
                }
                break;
            }
            /* fall-through */

        case BLK_ZERO:
            if (s->has_zero_init) {
                break;
            }
            ret = blk_pwrite_zeroes(s->target, sector_num << BDRV_SECTOR_BITS,
                                    n << BDRV_SECTOR_BITS, 0);
            if (ret < 0) {
                return ret;
            }
            break;
        }

        sector_num += n;
        nb_sectors -= n;
        buf += n * BDRV_SECTOR_SIZE;
    }

    return 0;
}

static int convert_do_copy(ImgConvertState *s)
{
    uint8_t *buf = NULL;
    int64_t sector_num, allocated_done;
    int ret;
    int n;

    /* Check whether we have zero initialisation or can get it efficiently */
    s->has_zero_init = s->min_sparse && !s->target_has_backing
                     ? bdrv_has_zero_init(blk_bs(s->target))
                     : false;

    if (!s->has_zero_init && !s->target_has_backing &&
        bdrv_can_write_zeroes_with_unmap(blk_bs(s->target)))
    {
        ret = blk_make_zero(s->target, BDRV_REQ_MAY_UNMAP);
        if (ret == 0) {
            s->has_zero_init = true;
        }
    }

    /* Allocate buffer for copied data. For compressed images, only one cluster
     * can be copied at a time. */
    if (s->compressed) {
        if (s->cluster_sectors <= 0 || s->cluster_sectors > s->buf_sectors) {
            error_report("invalid cluster size");
            ret = -EINVAL;
            goto fail;
        }
        s->buf_sectors = s->cluster_sectors;
    }
    buf = blk_blockalign(s->target, s->buf_sectors * BDRV_SECTOR_SIZE);

    /* Calculate allocated sectors for progress */
    s->allocated_sectors = 0;
    sector_num = 0;
    while (sector_num < s->total_sectors) {
        n = convert_iteration_sectors(s, sector_num);
        if (n < 0) {
            ret = n;
            goto fail;
        }
        if (s->status == BLK_DATA || (!s->min_sparse && s->status == BLK_ZERO))
        {
            s->allocated_sectors += n;
        }
        sector_num += n;
    }

    /* Do the copy */
    s->src_cur = 0;
    s->src_cur_offset = 0;
    s->sector_next_status = 0;

    sector_num = 0;
    allocated_done = 0;

    while (sector_num < s->total_sectors) {
        n = convert_iteration_sectors(s, sector_num);
        if (n < 0) {
            ret = n;
            goto fail;
        }
        if (s->status == BLK_DATA || (!s->min_sparse && s->status == BLK_ZERO))
        {
            allocated_done += n;
            qemu_progress_print(100.0 * allocated_done / s->allocated_sectors,
                                0);
        }

        if (s->status == BLK_DATA) {
            ret = convert_read(s, sector_num, n, buf);
            if (ret < 0) {
                error_report("error while reading sector %" PRId64
                             ": %s", sector_num, strerror(-ret));
                goto fail;
            }
        } else if (!s->min_sparse && s->status == BLK_ZERO) {
            n = MIN(n, s->buf_sectors);
            memset(buf, 0, n * BDRV_SECTOR_SIZE);
            s->status = BLK_DATA;
        }

        ret = convert_write(s, sector_num, n, buf);
        if (ret < 0) {
            error_report("error while writing sector %" PRId64
                         ": %s", sector_num, strerror(-ret));
            goto fail;
        }

        sector_num += n;
    }

    if (s->compressed) {
        /* signal EOF to align */
        ret = blk_pwrite_compressed(s->target, 0, NULL, 0);
        if (ret < 0) {
            goto fail;
        }
    }

    ret = 0;
fail:
    qemu_vfree(buf);
    return ret;
}

static int img_convert(int argc, char **argv)
{
    int c, bs_n, bs_i, compress, cluster_sectors, skip_create;
    int64_t ret = 0;
    int progress = 0, flags, src_flags;
    bool writethrough, src_writethrough;
    const char *fmt, *out_fmt, *cache, *src_cache, *out_baseimg, *out_filename;
    BlockDriver *drv, *proto_drv;
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
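As an illustration of the split described above, code that only needs to move
data can be written entirely against the BlockBackend API. The following sketch
is illustrative only: the function name is hypothetical and the header path is
assumed, while blk_pread()/blk_pwrite() are used with the same signatures seen
elsewhere in this file, without ever dereferencing the BlockDriverState behind
either backend.

/* Illustrative sketch, not part of qemu-img: a copy helper written purely
 * against the BlockBackend API. */
#include "qemu/osdep.h"
#include "sysemu/block-backend.h"

static int copy_range_between_backends(BlockBackend *src, BlockBackend *dst,
                                       int64_t offset, int bytes,
                                       uint8_t *buf)
{
    int ret;

    ret = blk_pread(src, offset, buf, bytes);
    if (ret < 0) {
        return ret;
    }
    return blk_pwrite(dst, offset, buf, bytes, 0);
}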
    BlockBackend **blk = NULL, *out_blk = NULL;
    BlockDriverState **bs = NULL, *out_bs = NULL;
    int64_t total_sectors;
    int64_t *bs_sectors = NULL;
    size_t bufsectors = IO_BUF_SIZE / BDRV_SECTOR_SIZE;
    BlockDriverInfo bdi;
    QemuOpts *opts = NULL;
    QemuOptsList *create_opts = NULL;
    const char *out_baseimg_param;
    char *options = NULL;
    const char *snapshot_name = NULL;
    int min_sparse = 8; /* Need at least 4k of zeros for sparse detection */
    bool quiet = false;
    Error *local_err = NULL;
    QemuOpts *sn_opts = NULL;
    ImgConvertState state;
    bool image_opts = false;

    fmt = NULL;
    out_fmt = "raw";
    cache = "unsafe";
    src_cache = BDRV_DEFAULT_CACHE;
    out_baseimg = NULL;
    compress = 0;
    skip_create = 0;
    for(;;) {
        static const struct option long_options[] = {
            {"help", no_argument, 0, 'h'},
            {"object", required_argument, 0, OPTION_OBJECT},
            {"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
            {0, 0, 0, 0}
        };
        c = getopt_long(argc, argv, "hf:O:B:ce6o:s:l:S:pt:T:qn",
                        long_options, NULL);
        if (c == -1) {
            break;
        }
        switch(c) {
        case '?':
        case 'h':
            help();
            break;
        case 'f':
            fmt = optarg;
            break;
        case 'O':
            out_fmt = optarg;
            break;
        case 'B':
            out_baseimg = optarg;
            break;
        case 'c':
            compress = 1;
            break;
        case 'e':
            error_report("option -e is deprecated, please use \'-o "
                         "encryption\' instead!");
            ret = -1;
            goto fail_getopt;
        case '6':
            error_report("option -6 is deprecated, please use \'-o "
                         "compat6\' instead!");
            ret = -1;
            goto fail_getopt;
        case 'o':
            if (!is_valid_option_list(optarg)) {
                error_report("Invalid option list: %s", optarg);
                ret = -1;
                goto fail_getopt;
            }
            if (!options) {
                options = g_strdup(optarg);
            } else {
                char *old_options = options;
                options = g_strdup_printf("%s,%s", options, optarg);
                g_free(old_options);
            }
            break;
        case 's':
            snapshot_name = optarg;
            break;
        case 'l':
            if (strstart(optarg, SNAPSHOT_OPT_BASE, NULL)) {
QemuOpts: Wean off qerror_report_err()
qerror_report_err() is a transitional interface to help with
converting existing monitor commands to QMP. It should not be used
elsewhere.
The only remaining user in qemu-option.c is qemu_opts_parse(). Is it
used in QMP context? If not, we can simply replace
qerror_report_err() by error_report_err().
The uses in qemu-img.c, qemu-io.c, qemu-nbd.c and under tests/ are
clearly not in QMP context.
The uses in vl.c aren't either, because the only QMP command handlers
there are qmp_query_status() and qmp_query_machines(), and they don't
call it.
Remaining uses:
* drive_def(): Command line -drive and such, HMP drive_add and pci_add
* hmp_chardev_add(): HMP chardev-add
* monitor_parse_command(): HMP core
* tmp_config_parse(): Command line -tpmdev
* net_host_device_add(): HMP host_net_add
* net_client_parse(): Command line -net and -netdev
* qemu_global_option(): Command line -global
* vnc_parse_func(): Command line -display, -vnc, default display, HMP
change, QMP change. Bummer.
* qemu_pci_hot_add_nic(): HMP pci_add
* usb_net_init(): Command line -usbdevice, HMP usb_add
Propagate errors through qemu_opts_parse(). Create a convenience
function qemu_opts_parse_noisily() that passes errors to
error_report_err(). Switch all non-QMP users outside tests to it.
That leaves vnc_parse_func(). Propagate errors through it. Since I'm
touching it anyway, rename it to vnc_parse().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
2015-02-13 14:50:26 +03:00
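The commit message above is why the -l handling below can simply call qemu_opts_parse_noisily() and rely on any parse error being reported for it. A minimal sketch of that parse-and-query pattern, reusing the option list and SNAPSHOT_OPT_* keys from this file (the "snapshot.<key>=<value>" spelling of the argument is an assumption here):

    /* Parse a "-l snapshot.id=1"-style argument into QemuOpts, then pull out
     * the fields that bdrv_snapshot_load_tmp() wants further down. */
    QemuOpts *opts = qemu_opts_parse_noisily(&internal_snapshot_opts,
                                             optarg, false);
    if (opts) {
        const char *id   = qemu_opt_get(opts, SNAPSHOT_OPT_ID);
        const char *name = qemu_opt_get(opts, SNAPSHOT_OPT_NAME);
        /* ... hand id/name to bdrv_snapshot_load_tmp(bs, id, name, &err) ... */
        qemu_opts_del(opts);
    }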
|
|
|
sn_opts = qemu_opts_parse_noisily(&internal_snapshot_opts,
|
|
|
|
optarg, false);
|
2013-12-04 13:10:57 +04:00
|
|
|
if (!sn_opts) {
|
|
|
|
error_report("Failed in parsing snapshot param '%s'",
|
|
|
|
optarg);
|
2014-02-21 19:24:05 +04:00
|
|
|
ret = -1;
|
2014-03-03 17:54:07 +04:00
|
|
|
goto fail_getopt;
|
2013-12-04 13:10:57 +04:00
|
|
|
}
|
|
|
|
} else {
|
|
|
|
snapshot_name = optarg;
|
|
|
|
}
|
|
|
|
break;
|
2011-08-26 17:27:13 +04:00
|
|
|
case 'S':
|
|
|
|
{
|
|
|
|
int64_t sval;
|
2011-11-22 12:46:05 +04:00
|
|
|
char *end;
|
2015-09-16 19:02:56 +03:00
|
|
|
sval = qemu_strtosz_suffix(optarg, &end, QEMU_STRTOSZ_DEFSUFFIX_B);
|
2011-11-22 12:46:05 +04:00
|
|
|
if (sval < 0 || *end) {
|
2011-08-26 17:27:13 +04:00
|
|
|
error_report("Invalid minimum zero buffer size for sparse output specified");
|
2014-02-21 19:24:05 +04:00
|
|
|
ret = -1;
|
2014-03-03 17:54:07 +04:00
|
|
|
goto fail_getopt;
|
2011-08-26 17:27:13 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
min_sparse = sval / BDRV_SECTOR_SIZE;
|
|
|
|
break;
|
|
|
|
}
|
2011-03-30 16:16:25 +04:00
|
|
|
case 'p':
|
|
|
|
progress = 1;
|
|
|
|
break;
|
2011-06-20 20:48:19 +04:00
|
|
|
case 't':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
2014-07-23 00:58:42 +04:00
|
|
|
case 'T':
|
|
|
|
src_cache = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:40 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2013-09-02 22:07:24 +04:00
|
|
|
case 'n':
|
|
|
|
skip_create = 1;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT:
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
goto fail_getopt;
|
|
|
|
}
|
|
|
|
break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2004-08-02 01:59:26 +04:00
|
|
|
}
|
|
|
|
}
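The 'o' case above folds repeated -o arguments into one comma-separated string before it is ever parsed as QemuOpts. A self-contained sketch of just that accumulation step (plain GLib; the helper name is illustrative):

    #include <glib.h>

    /* Append a new "-o" argument to the accumulated option string; the caller
     * owns the returned buffer and eventually g_free()s it. */
    static char *append_option(char *options, const char *new_opts)
    {
        if (!options) {
            return g_strdup(new_opts);
        } else {
            char *old_options = options;
            options = g_strdup_printf("%s,%s", options, new_opts);
            g_free(old_options);
            return options;
        }
    }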
|
2007-09-17 12:09:54 +04:00
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
qom: -object error messages lost location, restore it
qemu_opts_foreach() runs its callback with the error location set to
the option's location. Any errors the callback reports use the
option's location automatically.
Commit 90998d5 moved the actual error reporting from "inside"
qemu_opts_foreach() to after it. Here's a typical hunk:
if (qemu_opts_foreach(qemu_find_opts("object"),
- object_create,
- object_create_initial, NULL)) {
+ user_creatable_add_opts_foreach,
+ object_create_initial, &err)) {
+ error_report_err(err);
exit(1);
}
Before, object_create() reports from within qemu_opts_foreach(), using
the option's location. Afterwards, we do it after
qemu_opts_foreach(), using whatever location happens to be current
there. Commonly a "none" location.
This is because Error objects don't have location information.
Problematic.
Reproducer:
$ qemu-system-x86_64 -nodefaults -display none -object secret,id=foo,foo=bar
qemu-system-x86_64: Property '.foo' not found
Note no location. This commit restores it:
qemu-system-x86_64: -object secret,id=foo,foo=bar: Property '.foo' not found
Note that the qemu_opts_foreach() bug just fixed could mask the bug
here: if the location it leaves dangling hasn't been clobbered, yet,
it's the correct one.
Reported-by: Eric Blake <eblake@redhat.com>
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1461767349-15329-4-git-send-email-armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[Paragraph on Error added to commit message]
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
goto fail_getopt;
|
|
|
|
}
|
|
|
|
|
2014-03-03 17:54:07 +04:00
|
|
|
/* Initialize before goto out */
|
2013-02-13 12:09:40 +04:00
|
|
|
if (quiet) {
|
|
|
|
progress = 0;
|
|
|
|
}
|
2014-03-03 17:54:07 +04:00
|
|
|
qemu_progress_init(progress, 1.0);
|
|
|
|
|
2007-10-31 04:11:44 +03:00
|
|
|
bs_n = argc - optind - 1;
|
2014-02-21 19:24:07 +04:00
|
|
|
out_filename = bs_n >= 1 ? argv[argc - 1] : NULL;
|
2008-06-06 01:53:49 +04:00
|
|
|
|
2014-02-21 19:24:05 +04:00
|
|
|
if (options && has_help_option(options)) {
|
2010-12-06 17:25:38 +03:00
|
|
|
ret = print_block_option_help(out_filename, out_fmt);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2014-02-21 19:24:07 +04:00
|
|
|
if (bs_n < 1) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Must specify image file name");
|
2014-02-21 19:24:07 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
|
2010-06-20 23:26:35 +04:00
|
|
|
if (bs_n > 1 && out_baseimg) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("-B makes no sense when concatenating multiple input "
|
|
|
|
"images");
|
2010-12-06 17:25:36 +03:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2012-03-15 16:13:31 +04:00
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
src_flags = 0;
|
|
|
|
ret = bdrv_parse_cache_mode(src_cache, &src_flags, &src_writethrough);
|
2014-07-23 00:58:42 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid source cache option: %s", src_cache);
|
|
|
|
goto out;
|
|
|
|
}
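    /* bdrv_parse_cache_mode() accepts the usual cache mode names -- "none",
     * "writeback", "writethrough", "directsync" and "unsafe" -- and turns them
     * into flag bits plus the writethrough boolean. -T falls back to
     * BDRV_DEFAULT_CACHE above, while -t defaults to "unsafe" for the target
     * image, which is being rewritten from scratch anyway. */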
|
|
|
|
|
2011-03-30 16:16:25 +04:00
|
|
|
qemu_progress_print(0, 100);
|
|
|
|
|
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
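The commit message above describes the BlockBackend/BlockDriverState split that the open loop below depends on: one backend handle per source image, with the driver tree reachable through blk_bs(). A condensed sketch of that per-image pattern (img_open() is a helper local to this file, and "disk.qcow2" is only an example filename):

    BlockBackend *src = img_open(image_opts, "disk.qcow2", fmt,
                                 src_flags, src_writethrough, quiet);
    if (src) {
        BlockDriverState *node = blk_bs(src);      /* passed on to the bdrv_* calls below */
        int64_t nb_sectors = blk_nb_sectors(src);  /* virtual size in 512-byte sectors */
        if (nb_sectors < 0) {
            error_report("Could not get size: %s", strerror(-nb_sectors));
        }
        blk_unref(src);                            /* drop the backend reference */
    }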
|
|
|
blk = g_new0(BlockBackend *, bs_n);
|
2014-06-26 15:23:24 +04:00
|
|
|
bs = g_new0(BlockDriverState *, bs_n);
|
2014-06-26 15:23:25 +04:00
|
|
|
bs_sectors = g_new(int64_t, bs_n);
|
2007-10-31 04:11:44 +03:00
|
|
|
|
|
|
|
total_sectors = 0;
|
|
|
|
for (bs_i = 0; bs_i < bs_n; bs_i++) {
|
2016-03-16 21:54:38 +03:00
|
|
|
blk[bs_i] = img_open(image_opts, argv[optind + bs_i],
|
2016-03-15 15:01:04 +03:00
|
|
|
fmt, src_flags, src_writethrough, quiet);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk[bs_i]) {
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs[bs_i] = blk_bs(blk[bs_i]);
|
2015-02-05 21:58:18 +03:00
|
|
|
bs_sectors[bs_i] = blk_nb_sectors(blk[bs_i]);
|
2014-06-26 15:23:25 +04:00
|
|
|
if (bs_sectors[bs_i] < 0) {
|
|
|
|
error_report("Could not get size of %s: %s",
|
|
|
|
argv[optind + bs_i], strerror(-bs_sectors[bs_i]));
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2014-06-26 15:23:24 +04:00
|
|
|
total_sectors += bs_sectors[bs_i];
|
2007-10-31 04:11:44 +03:00
|
|
|
}
|
2004-08-02 01:59:26 +04:00
|
|
|
|
2013-12-04 13:10:57 +04:00
|
|
|
if (sn_opts) {
|
|
|
|
ret = bdrv_snapshot_load_tmp(bs[0],
|
|
|
|
qemu_opt_get(sn_opts, SNAPSHOT_OPT_ID),
|
|
|
|
qemu_opt_get(sn_opts, SNAPSHOT_OPT_NAME),
|
|
|
|
&local_err);
|
|
|
|
} else if (snapshot_name != NULL) {
|
2010-09-22 06:58:41 +04:00
|
|
|
if (bs_n > 1) {
|
2011-06-22 16:03:54 +04:00
|
|
|
error_report("No support for concatenating multiple snapshot");
|
2010-09-22 06:58:41 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2013-12-04 13:10:54 +04:00
|
|
|
|
|
|
|
bdrv_snapshot_load_tmp_by_id_or_name(bs[0], snapshot_name, &local_err);
|
2013-12-04 13:10:57 +04:00
|
|
|
}
|
2014-01-30 18:07:28 +04:00
|
|
|
if (local_err) {
|
2015-12-18 18:35:14 +03:00
|
|
|
error_reportf_err(local_err, "Failed to load snapshot: ");
|
2013-12-04 13:10:57 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-09-22 06:58:41 +04:00
|
|
|
}
|
|
|
|
|
2009-05-18 18:42:12 +04:00
|
|
|
/* Find driver and parse its options */
|
2004-08-02 01:59:26 +04:00
|
|
|
drv = bdrv_find_format(out_fmt);
|
2010-06-20 23:26:35 +04:00
|
|
|
if (!drv) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Unknown file format '%s'", out_fmt);
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2009-05-18 18:42:12 +04:00
|
|
|
|
2015-02-05 21:58:12 +03:00
|
|
|
proto_drv = bdrv_find_protocol(out_filename, true, &local_err);
|
2010-06-20 23:26:35 +04:00
|
|
|
if (!proto_drv) {
|
2015-03-12 18:08:02 +03:00
|
|
|
error_report_err(local_err);
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2010-05-26 06:35:36 +04:00
|
|
|
|
2015-02-11 17:58:46 +03:00
|
|
|
if (!skip_create) {
|
|
|
|
if (!drv->create_opts) {
|
|
|
|
error_report("Format driver '%s' does not support image creation",
|
|
|
|
drv->format_name);
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2014-12-02 20:32:46 +03:00
|
|
|
|
2015-02-11 17:58:46 +03:00
|
|
|
if (!proto_drv->create_opts) {
|
|
|
|
error_report("Protocol driver '%s' does not support image creation",
|
|
|
|
proto_drv->format_name);
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2014-12-02 20:32:46 +03:00
|
|
|
|
2015-02-11 17:58:46 +03:00
|
|
|
create_opts = qemu_opts_append(create_opts, drv->create_opts);
|
|
|
|
create_opts = qemu_opts_append(create_opts, proto_drv->create_opts);
|
2009-06-04 17:39:38 +04:00
|
|
|
|
2015-02-11 17:58:46 +03:00
|
|
|
opts = qemu_opts_create(create_opts, NULL, 0, &error_abort);
|
2015-02-12 20:37:11 +03:00
|
|
|
if (options) {
|
|
|
|
qemu_opts_do_parse(opts, options, NULL, &local_err);
|
|
|
|
if (local_err) {
|
2015-03-14 12:23:15 +03:00
|
|
|
error_report_err(local_err);
|
2015-02-12 20:37:11 +03:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2015-02-11 17:58:46 +03:00
|
|
|
}
|
2009-05-18 18:42:12 +04:00
|
|
|
|
2015-02-12 18:46:36 +03:00
|
|
|
qemu_opt_set_number(opts, BLOCK_OPT_SIZE, total_sectors * 512,
|
|
|
|
&error_abort);
|
2015-02-11 17:58:46 +03:00
|
|
|
ret = add_old_style_options(out_fmt, opts, out_baseimg, NULL);
|
|
|
|
if (ret < 0) {
|
|
|
|
goto out;
|
|
|
|
}
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2009-05-18 18:42:12 +04:00
|
|
|
|
2010-10-14 17:46:04 +04:00
|
|
|
/* Get backing file name if -o backing_file was used */
|
2014-06-05 13:20:51 +04:00
|
|
|
out_baseimg_param = qemu_opt_get(opts, BLOCK_OPT_BACKING_FILE);
|
2010-10-14 17:46:04 +04:00
|
|
|
if (out_baseimg_param) {
|
2014-06-05 13:20:51 +04:00
|
|
|
out_baseimg = out_baseimg_param;
|
2010-10-14 17:46:04 +04:00
|
|
|
}
|
|
|
|
|
2009-05-18 18:42:12 +04:00
|
|
|
/* Check if compression is supported */
|
2010-12-07 19:44:34 +03:00
|
|
|
if (compress) {
|
2014-06-05 13:20:51 +04:00
|
|
|
bool encryption =
|
|
|
|
qemu_opt_get_bool(opts, BLOCK_OPT_ENCRYPT, false);
|
|
|
|
const char *preallocation =
|
|
|
|
qemu_opt_get(opts, BLOCK_OPT_PREALLOC);
|
2009-05-18 18:42:12 +04:00
|
|
|
|
2016-07-22 11:17:48 +03:00
|
|
|
if (!drv->bdrv_co_pwritev_compressed) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Compression not supported for this file format");
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2009-05-18 18:42:12 +04:00
|
|
|
}
|
|
|
|
|
2014-06-05 13:20:51 +04:00
|
|
|
if (encryption) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Compression and encryption not supported at "
|
|
|
|
"the same time");
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2009-05-18 18:42:12 +04:00
|
|
|
}
|
2011-10-18 18:19:42 +04:00
|
|
|
|
2014-06-05 13:20:51 +04:00
|
|
|
if (preallocation
|
|
|
|
&& strcmp(preallocation, "off"))
|
2011-10-18 18:19:42 +04:00
|
|
|
{
|
|
|
|
error_report("Compression and preallocation not supported at "
|
|
|
|
"the same time");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2009-05-18 18:42:12 +04:00
|
|
|
}
|
|
|
|
|
2013-09-02 22:07:24 +04:00
|
|
|
if (!skip_create) {
|
|
|
|
/* Create the new image */
|
2014-06-05 13:21:11 +04:00
|
|
|
ret = bdrv_create(drv, out_filename, opts, &local_err);
|
2013-09-02 22:07:24 +04:00
|
|
|
if (ret < 0) {
|
2015-12-18 18:35:14 +03:00
|
|
|
error_reportf_err(local_err, "%s: error while converting %s: ",
|
|
|
|
out_filename, out_fmt);
|
2013-09-02 22:07:24 +04:00
|
|
|
goto out;
|
2004-08-02 01:59:26 +04:00
|
|
|
}
|
|
|
|
}
|
2007-09-17 12:09:54 +04:00
|
|
|
|
2013-10-24 14:07:06 +04:00
|
|
|
flags = min_sparse ? (BDRV_O_RDWR | BDRV_O_UNMAP) : BDRV_O_RDWR;
|
2016-03-15 15:01:04 +03:00
|
|
|
ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
|
2011-06-20 20:48:19 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid cache option: %s", cache);
|
2014-05-28 13:17:07 +04:00
|
|
|
goto out;
|
2011-06-20 20:48:19 +04:00
|
|
|
}
|
|
|
|
|
2016-02-17 13:10:20 +03:00
|
|
|
/* XXX we should allow --image-opts to trigger use of
|
|
|
|
* img_open() here, but then we have trouble with
|
|
|
|
* the bdrv_create() call which takes different params.
|
|
|
|
* Not critical right now, so fix can wait...
|
|
|
|
*/
|
2016-03-15 15:01:04 +03:00
|
|
|
out_blk = img_open_file(out_filename, out_fmt, flags, writethrough, quiet);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!out_blk) {
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
out_bs = blk_bs(out_blk);
|
2004-08-02 01:59:26 +04:00
|
|
|
|
2016-06-24 01:37:19 +03:00
|
|
|
/* increase bufsectors from the default 4096 (2M) if opt_transfer
|
2013-11-27 14:07:06 +04:00
|
|
|
* or discard_alignment of the out_bs is greater. Limit to 32768 (16MB)
|
|
|
|
* as maximum. */
|
|
|
|
bufsectors = MIN(32768,
|
2016-06-24 01:37:19 +03:00
|
|
|
MAX(bufsectors,
|
|
|
|
MAX(out_bs->bl.opt_transfer >> BDRV_SECTOR_BITS,
|
2016-06-24 01:37:21 +03:00
|
|
|
out_bs->bl.pdiscard_alignment >>
|
|
|
|
BDRV_SECTOR_BITS)));
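    /* Worked example of the clamp above (illustrative numbers):
     *   default bufsectors   4096 sectors           = 4096 * 512 = 2 MiB
     *   opt_transfer         8 MiB                  = 16384 sectors
     *   pdiscard_alignment   1 MiB                  =  2048 sectors
     *   MIN(32768, MAX(4096, MAX(16384, 2048)))     = 16384 sectors = 8 MiB buffer */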
|
2013-11-27 14:07:06 +04:00
|
|
|
|
2013-09-02 22:07:24 +04:00
|
|
|
if (skip_create) {
|
2015-02-05 21:58:18 +03:00
|
|
|
int64_t output_sectors = blk_nb_sectors(out_blk);
|
2014-06-26 15:23:21 +04:00
|
|
|
if (output_sectors < 0) {
|
2015-02-25 07:22:27 +03:00
|
|
|
error_report("unable to get output image length: %s",
|
2014-06-26 15:23:21 +04:00
|
|
|
strerror(-output_sectors));
|
2013-09-02 22:07:24 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2014-06-26 15:23:21 +04:00
|
|
|
} else if (output_sectors < total_sectors) {
|
2013-09-02 22:07:24 +04:00
|
|
|
error_report("output file is smaller than input file");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-11-27 14:07:07 +04:00
|
|
|
cluster_sectors = 0;
|
|
|
|
ret = bdrv_get_info(out_bs, &bdi);
|
|
|
|
if (ret < 0) {
|
|
|
|
if (compress) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("could not get block driver info");
|
2010-06-20 23:26:35 +04:00
|
|
|
goto out;
|
|
|
|
}
|
2013-11-27 14:07:07 +04:00
|
|
|
} else {
|
2014-05-06 17:08:43 +04:00
|
|
|
compress = compress || bdi.needs_compressed_writes;
|
2013-11-27 14:07:07 +04:00
|
|
|
cluster_sectors = bdi.cluster_size / BDRV_SECTOR_SIZE;
|
|
|
|
}
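    /* Example: a qcow2 target with the default 64 KiB clusters gives
     * cluster_sectors = 65536 / 512 = 128. */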
|
|
|
|
|
qemu-img convert: Rewrite copying logic
The implementation of qemu-img convert is (a) messy, (b) buggy, and
(c) less efficient than possible. The changes required to beat some
sense into it are massive enough that incremental changes would only
make my and the reviewers' life harder. So throw it away and reimplement
it from scratch.
Let me give some examples what I mean by messy, buggy and inefficient:
(a) The copying logic of qemu-img convert has two separate branches for
compressed and normal target images, which roughly do the same -
except for a little code that handles actual differences between
compressed and uncompressed images, and much more code that
implements just a different set of optimisations and bugs. This is
unnecessary code duplication, and makes the code for compressed
output (unsurprisingly) suffer from bitrot.
The code for uncompressed output is run twice to count the total
length for the progress bar. In the first run it just takes a
shortcut and runs only half the loop, and when it's done, it toggles
a boolean, jumps out of the loop with a backwards goto and starts
over. Works, but pretty is something different.
(b) Converting while keeping a backing file (-B option) is broken in
several ways. This includes not writing to the image file if the
input has zero clusters or data filled with zeros (ignoring that the
backing file will be visible instead).
It also doesn't correctly limit every iteration of the copy loop to
sectors of the same status, so that too many sectors may be copied
to the target image. For -B this gives an unexpected result, for
other images it just does more work than necessary.
Conversion with a compressed target completely ignores any target
backing file.
(c) qemu-img convert skips reading and writing an area if it knows from
metadata that copying isn't needed (except for the bug mentioned
above that ignores a status change in some cases). It does, however,
read from the source even if it knows that it will read zeros, and
then search for non-zero bytes in the read buffer, if it's possible
that a write might be needed.
This reimplementation of the copying core reorganises the code to remove
the duplication and have a much more obvious code flow, by essentially
splitting the copy iteration loop into three parts:
1. Find the number of contiguous sectors of the same status at the
current offset (This can also be called in a separate loop before the
copying loop in order to determine the total sectors for the progress
bar.)
2. Read sectors. If the status implies that there is no data there to
read (zero or unallocated cluster), don't do anything.
3. Write sectors depending on the status. If it's data, write it. If
we want the backing file to be visible (with -B), don't write it. If
it's zeroed, skip it if you can, otherwise use bdrv_write_zeroes() to
optimise the write at least where possible.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2015-03-19 15:33:32 +03:00
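The three phases described above, condensed into a self-contained sketch; get_status_run(), read_sectors(), write_sectors() and write_zeroes() are hypothetical stand-ins for the real block-status, read and write paths, not qemu-img internals:

    #include <stdint.h>
    #include <string.h>

    enum { STATUS_DATA, STATUS_ZERO, STATUS_UNALLOCATED };

    /* Hypothetical helpers standing in for the real status/read/write code. */
    static int get_status_run(int64_t sector, int64_t remaining, int *status)
    {
        (void)sector;
        *status = STATUS_DATA;              /* pretend everything is allocated data */
        return remaining < 64 ? (int)remaining : 64;
    }
    static void read_sectors(int64_t sector, int n, uint8_t *buf)
    {
        (void)sector;
        memset(buf, 0xaa, (size_t)n * 512);
    }
    static void write_sectors(int64_t sector, int n, const uint8_t *buf)
    {
        (void)sector; (void)n; (void)buf;
    }
    static void write_zeroes(int64_t sector, int n)
    {
        (void)sector; (void)n;
    }

    static void copy_all(int64_t total_sectors, int target_has_backing)
    {
        uint8_t buf[64 * 512];
        int64_t sector = 0;

        while (sector < total_sectors) {
            int status;
            int n = get_status_run(sector, total_sectors - sector, &status); /* 1. run length */

            if (status == STATUS_DATA) {
                read_sectors(sector, n, buf);                    /* 2. read only real data */
            }
            if (status == STATUS_DATA) {
                write_sectors(sector, n, buf);                   /* 3. write data */
            } else if (status == STATUS_ZERO && !target_has_backing) {
                write_zeroes(sector, n);                         /* 3. zero efficiently */
            }                               /* else: skip, the backing file shows through */
            sector += n;
        }
    }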
|
|
|
state = (ImgConvertState) {
|
|
|
|
.src = blk,
|
|
|
|
.src_sectors = bs_sectors,
|
|
|
|
.src_num = bs_n,
|
|
|
|
.total_sectors = total_sectors,
|
|
|
|
.target = out_blk,
|
|
|
|
.compressed = compress,
|
|
|
|
.target_has_backing = (bool) out_baseimg,
|
|
|
|
.min_sparse = min_sparse,
|
|
|
|
.cluster_sectors = cluster_sectors,
|
|
|
|
.buf_sectors = bufsectors,
|
|
|
|
};
|
|
|
|
ret = convert_do_copy(&state);
|
2013-12-05 18:54:53 +04:00
|
|
|
|
2010-06-20 23:26:35 +04:00
|
|
|
out:
|
2013-11-27 14:07:01 +04:00
|
|
|
if (!ret) {
|
|
|
|
qemu_progress_print(100, 0);
|
|
|
|
}
|
2011-03-30 16:16:25 +04:00
|
|
|
qemu_progress_end();
|
2014-06-05 13:20:51 +04:00
|
|
|
qemu_opts_del(opts);
|
|
|
|
qemu_opts_free(create_opts);
|
2014-09-29 18:07:55 +04:00
|
|
|
qemu_opts_del(sn_opts);
|
block: New BlockBackend
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(out_blk);
|
2014-10-07 15:59:08 +04:00
|
|
|
g_free(bs);
|
block: New BlockBackend
2014-10-07 15:59:04 +04:00
|
|
|
if (blk) {
|
|
|
|
for (bs_i = 0; bs_i < bs_n; bs_i++) {
|
|
|
|
blk_unref(blk[bs_i]);
|
|
|
|
}
|
|
|
|
g_free(blk);
|
|
|
|
}
|
2014-06-26 15:23:24 +04:00
|
|
|
g_free(bs_sectors);
|
2014-03-03 17:54:07 +04:00
|
|
|
fail_getopt:
|
|
|
|
g_free(options);
|
|
|
|
|
2010-06-20 23:26:35 +04:00
|
|
|
if (ret) {
|
|
|
|
return 1;
|
|
|
|
}
|
2004-08-02 01:59:26 +04:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2004-08-04 01:15:11 +04:00
|
|
|
|
2006-08-06 01:31:00 +04:00
|
|
|
static void dump_snapshots(BlockDriverState *bs)
|
|
|
|
{
|
|
|
|
QEMUSnapshotInfo *sn_tab, *sn;
|
|
|
|
int nb_sns, i;
|
|
|
|
|
|
|
|
nb_sns = bdrv_snapshot_list(bs, &sn_tab);
|
|
|
|
if (nb_sns <= 0)
|
|
|
|
return;
|
|
|
|
printf("Snapshot list:\n");
|
2013-05-25 07:09:45 +04:00
|
|
|
bdrv_snapshot_dump(fprintf, stdout, NULL);
|
|
|
|
printf("\n");
|
2006-08-06 01:31:00 +04:00
|
|
|
for(i = 0; i < nb_sns; i++) {
|
|
|
|
sn = &sn_tab[i];
|
2013-05-25 07:09:45 +04:00
|
|
|
bdrv_snapshot_dump(fprintf, stdout, sn);
|
|
|
|
printf("\n");
|
2006-08-06 01:31:00 +04:00
|
|
|
}
|
2011-08-21 07:09:37 +04:00
|
|
|
g_free(sn_tab);
|
2006-08-06 01:31:00 +04:00
|
|
|
}
|
|
|
|
|
2012-10-17 16:02:31 +04:00
|
|
|
static void dump_json_image_info_list(ImageInfoList *list)
|
|
|
|
{
|
|
|
|
QString *str;
|
|
|
|
QObject *obj;
|
2016-09-30 17:45:28 +03:00
|
|
|
Visitor *v = qobject_output_visitor_new(&obj);
|
qapi: Add new visit_complete() function
Making each output visitor provide its own output collection
function was the only remaining reason for exposing visitor
sub-types to the rest of the code base. Add a polymorphic
visit_complete() function which is a no-op for input visitors,
and which populates an opaque pointer for output visitors. For
maximum type-safety, also add a parameter to the output visitor
constructors with a type-correct version of the output pointer,
and assert that the two uses match.
This approach was considered superior to either passing the
output parameter only during construction (action at a distance
during visit_free() feels awkward) or only during visit_complete()
(defeating type safety makes it easier to use incorrectly).
Most callers were function-local, and therefore a mechanical
conversion; the testsuite was a bit trickier, but the previous
cleanup patch minimized the churn here.
The visit_complete() function may be called at most once; doing
so lets us use transfer semantics rather than duplication or
ref-count semantics to get the just-built output back to the
caller, even though it means our behavior is not idempotent.
Generated code is simplified as follows for events:
|@@ -26,7 +26,7 @@ void qapi_event_send_acpi_device_ost(ACP
| QDict *qmp;
| Error *err = NULL;
| QMPEventFuncEmit emit;
|- QmpOutputVisitor *qov;
|+ QObject *obj;
| Visitor *v;
| q_obj_ACPI_DEVICE_OST_arg param = {
| info
|@@ -39,8 +39,7 @@ void qapi_event_send_acpi_device_ost(ACP
|
| qmp = qmp_event_build_dict("ACPI_DEVICE_OST");
|
|- qov = qmp_output_visitor_new();
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(&obj);
|
| visit_start_struct(v, "ACPI_DEVICE_OST", NULL, 0, &err);
| if (err) {
|@@ -55,7 +54,8 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
|
|- qdict_put_obj(qmp, "data", qmp_output_get_qobject(qov));
|+ visit_complete(v, &obj);
|+ qdict_put_obj(qmp, "data", obj);
| emit(QAPI_EVENT_ACPI_DEVICE_OST, qmp, &err);
and for commands:
| {
| Error *err = NULL;
|- QmpOutputVisitor *qov = qmp_output_visitor_new();
| Visitor *v;
|
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(ret_out);
| visit_type_AddfdInfo(v, "unused", &ret_in, &err);
|- if (err) {
|- goto out;
|+ if (!err) {
|+ visit_complete(v, ret_out);
| }
|- *ret_out = qmp_output_get_qobject(qov);
|-
|-out:
| error_propagate(errp, err);
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1465490926-28625-13-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2016-06-09 19:48:43 +03:00
|
|
|
|
|
|
|
visit_type_ImageInfoList(v, NULL, &list, &error_abort);
|
|
|
|
visit_complete(v, &obj);
|
2012-10-17 16:02:31 +04:00
|
|
|
str = qobject_to_json_pretty(obj);
|
|
|
|
assert(str != NULL);
|
|
|
|
printf("%s\n", qstring_get_str(str));
|
|
|
|
qobject_decref(obj);
|
qapi: Add new visit_complete() function
2016-06-09 19:48:43 +03:00
|
|
|
visit_free(v);
|
2012-10-17 16:02:31 +04:00
|
|
|
QDECREF(str);
|
|
|
|
}
|
|
|
|
|
qemu-img: Add json output option to the info command.
This option --output=[human|json] make qemu-img info output on
human or JSON representation at the choice of the user.
example:
{
"snapshots": [
{
"vm-clock-nsec": 637102488,
"name": "vm-20120821145509",
"date-sec": 1345553709,
"date-nsec": 220289000,
"vm-clock-sec": 20,
"id": "1",
"vm-state-size": 96522745
},
{
"vm-clock-nsec": 28210866,
"name": "vm-20120821154059",
"date-sec": 1345556459,
"date-nsec": 171392000,
"vm-clock-sec": 46,
"id": "2",
"vm-state-size": 101208714
}
],
"virtual-size": 1073741824,
"filename": "snap.qcow2",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 985587712,
"dirty-flag": false
}
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2012-09-05 15:09:02 +04:00
|
|
|
static void dump_json_image_info(ImageInfo *info)
|
|
|
|
{
|
|
|
|
QString *str;
|
|
|
|
QObject *obj;
|
2016-09-30 17:45:28 +03:00
|
|
|
Visitor *v = qobject_output_visitor_new(&obj);
|
qapi: Add new visit_complete() function
2016-06-09 19:48:43 +03:00
|
|
|
|
|
|
|
visit_type_ImageInfo(v, NULL, &info, &error_abort);
|
|
|
|
visit_complete(v, &obj);
|
qemu-img: Add json output option to the info command.
2012-09-05 15:09:02 +04:00
|
|
|
str = qobject_to_json_pretty(obj);
|
|
|
|
assert(str != NULL);
|
|
|
|
printf("%s\n", qstring_get_str(str));
|
|
|
|
qobject_decref(obj);
|
qapi: Add new visit_complete() function
2016-06-09 19:48:43 +03:00
|
|
|
visit_free(v);
|
qemu-img: Add json output option to the info command.
2012-09-05 15:09:02 +04:00
|
|
|
QDECREF(str);
|
|
|
|
}
|
|
|
|
|
2012-10-17 16:02:31 +04:00
|
|
|
static void dump_human_image_info_list(ImageInfoList *list)
|
|
|
|
{
|
|
|
|
ImageInfoList *elem;
|
|
|
|
bool delim = false;
|
|
|
|
|
|
|
|
for (elem = list; elem; elem = elem->next) {
|
|
|
|
if (delim) {
|
|
|
|
printf("\n");
|
|
|
|
}
|
|
|
|
delim = true;
|
|
|
|
|
2013-05-25 07:09:45 +04:00
|
|
|
bdrv_image_info_dump(fprintf, stdout, elem->value);
|
2012-10-17 16:02:31 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static gboolean str_equal_func(gconstpointer a, gconstpointer b)
|
|
|
|
{
|
|
|
|
return strcmp(a, b) == 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Open an image file chain and return an ImageInfoList
|
|
|
|
*
|
|
|
|
* @filename: topmost image filename
|
|
|
|
* @fmt: topmost image format (may be NULL to autodetect)
|
|
|
|
* @chain: true - enumerate entire backing file chain
|
|
|
|
* false - only topmost image file
|
|
|
|
*
|
|
|
|
* Returns a list of ImageInfo objects or NULL if there was an error opening an
|
|
|
|
* image file. If there was an error a message will have been printed to
|
|
|
|
* stderr.
|
|
|
|
*/
|
2016-02-17 13:10:20 +03:00
|
|
|
static ImageInfoList *collect_image_info_list(bool image_opts,
|
|
|
|
const char *filename,
|
2012-10-17 16:02:31 +04:00
|
|
|
const char *fmt,
|
|
|
|
bool chain)
|
|
|
|
{
|
|
|
|
ImageInfoList *head = NULL;
|
|
|
|
ImageInfoList **last = &head;
|
|
|
|
GHashTable *filenames;
|
2013-06-06 08:27:58 +04:00
|
|
|
Error *err = NULL;
|
2012-10-17 16:02:31 +04:00
|
|
|
|
|
|
|
filenames = g_hash_table_new_full(g_str_hash, str_equal_func, NULL, NULL);
|
|
|
|
|
|
|
|
while (filename) {
|
block: New BlockBackend
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk;
|
2012-10-17 16:02:31 +04:00
|
|
|
BlockDriverState *bs;
|
|
|
|
ImageInfo *info;
|
|
|
|
ImageInfoList *elem;
|
|
|
|
|
|
|
|
if (g_hash_table_lookup_extended(filenames, filename, NULL, NULL)) {
|
|
|
|
error_report("Backing file '%s' creates an infinite loop.",
|
|
|
|
filename);
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
g_hash_table_insert(filenames, (gpointer)filename, NULL);
|
|
|
|
|
2016-03-16 21:54:38 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt,
|
2016-03-15 15:01:04 +03:00
|
|
|
BDRV_O_NO_BACKING | BDRV_O_NO_IO, false, false);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
2012-10-17 16:02:31 +04:00
|
|
|
goto err;
|
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2012-10-17 16:02:31 +04:00
|
|
|
|
2013-06-06 08:27:58 +04:00
|
|
|
bdrv_query_image_info(bs, &info, &err);
|
2014-01-30 18:07:28 +04:00
|
|
|
if (err) {
|
2015-02-12 15:55:05 +03:00
|
|
|
error_report_err(err);
|
block: New BlockBackend
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2013-06-06 08:27:58 +04:00
|
|
|
goto err;
|
2013-06-06 08:27:57 +04:00
|
|
|
}
|
2012-10-17 16:02:31 +04:00
|
|
|
|
|
|
|
elem = g_new0(ImageInfoList, 1);
|
|
|
|
elem->value = info;
|
|
|
|
*last = elem;
|
|
|
|
last = &elem->next;
|
|
|
|
|
block: New BlockBackend
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2012-10-17 16:02:31 +04:00
|
|
|
|
|
|
|
filename = fmt = NULL;
|
|
|
|
if (chain) {
|
|
|
|
if (info->has_full_backing_filename) {
|
|
|
|
filename = info->full_backing_filename;
|
|
|
|
} else if (info->has_backing_filename) {
|
2015-12-14 22:55:15 +03:00
|
|
|
error_report("Could not determine absolute backing filename,"
|
|
|
|
" but backing filename '%s' present",
|
|
|
|
info->backing_filename);
|
|
|
|
goto err;
|
2012-10-17 16:02:31 +04:00
|
|
|
}
|
|
|
|
if (info->has_backing_filename_format) {
|
|
|
|
fmt = info->backing_filename_format;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
g_hash_table_destroy(filenames);
|
|
|
|
return head;
|
|
|
|
|
|
|
|
err:
|
|
|
|
qapi_free_ImageInfoList(head);
|
|
|
|
g_hash_table_destroy(filenames);
|
|
|
|
return NULL;
|
|
|
|
}
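The loop detection above boils down to remembering every filename already visited in a GLib hash table keyed by string. A standalone sketch of just that check (using g_str_equal instead of the local str_equal_func wrapper; chain/len are illustrative):

    #include <glib.h>
    #include <stdio.h>

    /* Return FALSE if the backing chain revisits a filename it has seen before. */
    static gboolean chain_is_acyclic(const char **chain, int len)
    {
        GHashTable *filenames =
            g_hash_table_new_full(g_str_hash, g_str_equal, NULL, NULL);
        gboolean ok = TRUE;
        int i;

        for (i = 0; i < len; i++) {
            if (g_hash_table_lookup_extended(filenames, chain[i], NULL, NULL)) {
                fprintf(stderr, "Backing file '%s' creates an infinite loop.\n",
                        chain[i]);
                ok = FALSE;
                break;
            }
            g_hash_table_insert(filenames, (gpointer)chain[i], NULL);
        }
        g_hash_table_destroy(filenames);
        return ok;
    }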
|
|
|
|
|
qemu-img: Add json output option to the info command.
2012-09-05 15:09:02 +04:00
|
|
|
static int img_info(int argc, char **argv)
|
|
|
|
{
|
|
|
|
int c;
|
|
|
|
OutputFormat output_format = OFORMAT_HUMAN;
|
2012-10-17 16:02:31 +04:00
|
|
|
bool chain = false;
|
2012-09-05 15:09:02 +04:00
|
|
|
const char *filename, *fmt, *output;
|
2012-10-17 16:02:31 +04:00
|
|
|
ImageInfoList *list;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2012-09-05 15:09:02 +04:00
|
|
|
|
2004-08-02 01:59:26 +04:00
|
|
|
fmt = NULL;
|
2012-09-05 15:09:02 +04:00
|
|
|
output = NULL;
|
2004-08-02 01:59:26 +04:00
|
|
|
for(;;) {
|
2012-09-05 15:09:02 +04:00
|
|
|
int option_index = 0;
|
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"format", required_argument, 0, 'f'},
|
|
|
|
{"output", required_argument, 0, OPTION_OUTPUT},
|
2012-10-17 16:02:31 +04:00
|
|
|
{"backing-chain", no_argument, 0, OPTION_BACKING_CHAIN},
|
2016-02-17 13:10:17 +03:00
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2012-09-05 15:09:02 +04:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
|
|
|
c = getopt_long(argc, argv, "f:h",
|
|
|
|
long_options, &option_index);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (c == -1) {
|
2004-08-02 01:59:26 +04:00
|
|
|
break;
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2004-08-02 01:59:26 +04:00
|
|
|
switch(c) {
|
2010-12-06 17:25:40 +03:00
|
|
|
case '?':
|
2004-08-02 01:59:26 +04:00
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
2012-09-05 15:09:02 +04:00
|
|
|
case OPTION_OUTPUT:
|
|
|
|
output = optarg;
|
|
|
|
break;
|
2012-10-17 16:02:31 +04:00
|
|
|
case OPTION_BACKING_CHAIN:
|
|
|
|
chain = true;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT: {
|
|
|
|
QemuOpts *opts;
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
} break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2004-08-02 01:59:26 +04:00
|
|
|
}
|
|
|
|
}
|
2013-08-05 12:53:04 +04:00
|
|
|
if (optind != argc - 1) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Expecting one image file name");
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2004-08-02 01:59:26 +04:00
|
|
|
filename = argv[optind++];
|
|
|
|
|
2012-09-05 15:09:02 +04:00
|
|
|
if (output && !strcmp(output, "json")) {
|
|
|
|
output_format = OFORMAT_JSON;
|
|
|
|
} else if (output && !strcmp(output, "human")) {
|
|
|
|
output_format = OFORMAT_HUMAN;
|
|
|
|
} else if (output) {
|
|
|
|
error_report("--output must be used with human or json as argument.");
|
2010-06-20 23:26:35 +04:00
|
|
|
return 1;
|
|
|
|
}
|
2012-09-05 15:09:02 +04:00
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
qom: -object error messages lost location, restore it
qemu_opts_foreach() runs its callback with the error location set to
the option's location. Any errors the callback reports use the
option's location automatically.
Commit 90998d5 moved the actual error reporting from "inside"
qemu_opts_foreach() to after it. Here's a typical hunk:
if (qemu_opts_foreach(qemu_find_opts("object"),
- object_create,
- object_create_initial, NULL)) {
+ user_creatable_add_opts_foreach,
+ object_create_initial, &err)) {
+ error_report_err(err);
exit(1);
}
Before, object_create() reports from within qemu_opts_foreach(), using
the option's location. Afterwards, we do it after
qemu_opts_foreach(), using whatever location happens to be current
there. Commonly a "none" location.
This is because Error objects don't have location information.
Problematic.
Reproducer:
$ qemu-system-x86_64 -nodefaults -display none -object secret,id=foo,foo=bar
qemu-system-x86_64: Property '.foo' not found
Note no location. This commit restores it:
qemu-system-x86_64: -object secret,id=foo,foo=bar: Property '.foo' not found
Note that the qemu_opts_foreach() bug just fixed could mask the bug
here: if the location it leaves dangling hasn't been clobbered yet,
it's the correct one.
Reported-by: Eric Blake <eblake@redhat.com>
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1461767349-15329-4-git-send-email-armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[Paragraph on Error added to commit message]
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2016-02-17 13:10:20 +03:00
|
|
|
list = collect_image_info_list(image_opts, filename, fmt, chain);
|
2012-10-17 16:02:31 +04:00
|
|
|
if (!list) {
|
2010-06-20 23:26:35 +04:00
|
|
|
return 1;
|
2006-08-06 01:31:00 +04:00
|
|
|
}
|
2012-09-05 15:09:02 +04:00
|
|
|
|
|
|
|
switch (output_format) {
|
|
|
|
case OFORMAT_HUMAN:
|
2012-10-17 16:02:31 +04:00
|
|
|
dump_human_image_info_list(list);
|
2012-09-05 15:09:02 +04:00
|
|
|
break;
|
|
|
|
case OFORMAT_JSON:
|
2012-10-17 16:02:31 +04:00
|
|
|
if (chain) {
|
|
|
|
dump_json_image_info_list(list);
|
|
|
|
} else {
|
|
|
|
dump_json_image_info(list->value);
|
|
|
|
}
|
2012-09-05 15:09:02 +04:00
|
|
|
break;
|
2006-08-06 01:31:00 +04:00
|
|
|
}
|
2012-09-05 15:09:02 +04:00
|
|
|
|
2012-10-17 16:02:31 +04:00
|
|
|
qapi_free_ImageInfoList(list);
|
2004-08-02 01:59:26 +04:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-09-04 21:00:33 +04:00
|
|
|
static void dump_map_entry(OutputFormat output_format, MapEntry *e,
|
|
|
|
MapEntry *next)
|
|
|
|
{
|
|
|
|
switch (output_format) {
|
|
|
|
case OFORMAT_HUMAN:
|
2016-01-26 06:59:02 +03:00
|
|
|
if (e->data && !e->has_offset) {
|
2013-09-04 21:00:33 +04:00
|
|
|
error_report("File contains external, encrypted or compressed clusters.");
|
|
|
|
exit(1);
|
|
|
|
}
|
2016-01-26 06:59:02 +03:00
|
|
|
if (e->data && !e->zero) {
|
2013-09-04 21:00:33 +04:00
|
|
|
printf("%#-16"PRIx64"%#-16"PRIx64"%#-16"PRIx64"%s\n",
|
2016-01-26 06:59:02 +03:00
|
|
|
e->start, e->length,
|
|
|
|
e->has_offset ? e->offset : 0,
|
|
|
|
e->has_filename ? e->filename : "");
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
|
|
|
/* This format ignores the distinction between 0, ZERO and ZERO|DATA.
|
|
|
|
* Modify the flags here to allow more coalescing.
|
|
|
|
*/
|
2016-01-26 06:59:02 +03:00
|
|
|
if (next && (!next->data || next->zero)) {
|
|
|
|
next->data = false;
|
|
|
|
next->zero = true;
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case OFORMAT_JSON:
|
2016-01-26 06:59:02 +03:00
|
|
|
printf("%s{ \"start\": %"PRId64", \"length\": %"PRId64","
|
|
|
|
" \"depth\": %"PRId64", \"zero\": %s, \"data\": %s",
|
2013-09-04 21:00:33 +04:00
|
|
|
(e->start == 0 ? "[" : ",\n"),
|
|
|
|
e->start, e->length, e->depth,
|
2016-01-26 06:59:02 +03:00
|
|
|
e->zero ? "true" : "false",
|
|
|
|
e->data ? "true" : "false");
|
|
|
|
if (e->has_offset) {
|
2013-09-11 20:47:52 +04:00
|
|
|
printf(", \"offset\": %"PRId64"", e->offset);
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
|
|
|
putchar('}');
|
|
|
|
|
|
|
|
if (!next) {
|
|
|
|
printf("]\n");
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int get_block_status(BlockDriverState *bs, int64_t sector_num,
|
|
|
|
int nb_sectors, MapEntry *e)
|
|
|
|
{
|
|
|
|
int64_t ret;
|
|
|
|
int depth;
|
2016-01-26 06:58:48 +03:00
|
|
|
BlockDriverState *file;
|
2016-02-05 21:12:33 +03:00
|
|
|
bool has_offset;
|
2013-09-04 21:00:33 +04:00
|
|
|
|
|
|
|
/* As an optimization, we could cache the current range of unallocated
|
|
|
|
* clusters in each file of the chain, and avoid querying the same
|
|
|
|
* range repeatedly.
|
|
|
|
*/
|
|
|
|
|
|
|
|
depth = 0;
|
|
|
|
for (;;) {
|
2016-01-26 06:58:48 +03:00
|
|
|
ret = bdrv_get_block_status(bs, sector_num, nb_sectors, &nb_sectors,
|
|
|
|
&file);
|
2013-09-04 21:00:33 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
assert(nb_sectors);
|
|
|
|
if (ret & (BDRV_BLOCK_ZERO|BDRV_BLOCK_DATA)) {
|
|
|
|
break;
|
|
|
|
}
|
2015-06-17 15:55:21 +03:00
|
|
|
bs = backing_bs(bs);
|
2013-09-04 21:00:33 +04:00
|
|
|
if (bs == NULL) {
|
|
|
|
ret = 0;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
depth++;
|
|
|
|
}
|
|
|
|
|
2016-02-05 21:12:33 +03:00
|
|
|
has_offset = !!(ret & BDRV_BLOCK_OFFSET_VALID);
|
|
|
|
|
|
|
|
*e = (MapEntry) {
|
|
|
|
.start = sector_num * BDRV_SECTOR_SIZE,
|
|
|
|
.length = nb_sectors * BDRV_SECTOR_SIZE,
|
|
|
|
.data = !!(ret & BDRV_BLOCK_DATA),
|
|
|
|
.zero = !!(ret & BDRV_BLOCK_ZERO),
|
|
|
|
.offset = ret & BDRV_BLOCK_OFFSET_MASK,
|
|
|
|
.has_offset = has_offset,
|
|
|
|
.depth = depth,
|
|
|
|
.has_filename = file && has_offset,
|
|
|
|
.filename = file && has_offset ? file->filename : NULL,
|
|
|
|
};
|
|
|
|
|
2013-09-04 21:00:33 +04:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-01-26 06:59:02 +03:00
|
|
|
static inline bool entry_mergeable(const MapEntry *curr, const MapEntry *next)
|
|
|
|
{
|
|
|
|
if (curr->length == 0) {
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
if (curr->zero != next->zero ||
|
|
|
|
curr->data != next->data ||
|
|
|
|
curr->depth != next->depth ||
|
|
|
|
curr->has_filename != next->has_filename ||
|
|
|
|
curr->has_offset != next->has_offset) {
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
if (curr->has_filename && strcmp(curr->filename, next->filename)) {
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
if (curr->has_offset && curr->offset + curr->length != next->offset) {
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2013-09-04 21:00:33 +04:00
|
|
|
static int img_map(int argc, char **argv)
|
|
|
|
{
|
|
|
|
int c;
|
|
|
|
OutputFormat output_format = OFORMAT_HUMAN;
|
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk;
|
2013-09-04 21:00:33 +04:00
|
|
|
BlockDriverState *bs;
|
|
|
|
const char *filename, *fmt, *output;
|
|
|
|
int64_t length;
|
|
|
|
MapEntry curr = { .length = 0 }, next;
|
|
|
|
int ret = 0;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2013-09-04 21:00:33 +04:00
|
|
|
|
|
|
|
fmt = NULL;
|
|
|
|
output = NULL;
|
|
|
|
for (;;) {
|
|
|
|
int option_index = 0;
|
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"format", required_argument, 0, 'f'},
|
|
|
|
{"output", required_argument, 0, OPTION_OUTPUT},
|
2016-02-17 13:10:17 +03:00
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2013-09-04 21:00:33 +04:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
|
|
|
c = getopt_long(argc, argv, "f:h",
|
|
|
|
long_options, &option_index);
|
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
switch (c) {
|
|
|
|
case '?':
|
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
|
|
|
case OPTION_OUTPUT:
|
|
|
|
output = optarg;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT: {
|
|
|
|
QemuOpts *opts;
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
} break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
|
|
|
}
|
2014-04-22 09:36:11 +04:00
|
|
|
if (optind != argc - 1) {
|
|
|
|
error_exit("Expecting one image file name");
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
2014-04-22 09:36:11 +04:00
|
|
|
filename = argv[optind];
|
2013-09-04 21:00:33 +04:00
|
|
|
|
|
|
|
if (output && !strcmp(output, "json")) {
|
|
|
|
output_format = OFORMAT_JSON;
|
|
|
|
} else if (output && !strcmp(output, "human")) {
|
|
|
|
output_format = OFORMAT_HUMAN;
|
|
|
|
} else if (output) {
|
|
|
|
error_report("--output must be used with human or json as argument.");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt, 0, false, false);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
|
|
|
return 1;
|
2013-09-04 21:00:33 +04:00
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2013-09-04 21:00:33 +04:00
|
|
|
|
|
|
|
if (output_format == OFORMAT_HUMAN) {
|
|
|
|
printf("%-16s%-16s%-16s%s\n", "Offset", "Length", "Mapped to", "File");
|
|
|
|
}
|
|
|
|
|
2015-02-05 21:58:18 +03:00
|
|
|
length = blk_getlength(blk);
|
2013-09-04 21:00:33 +04:00
|
|
|
while (curr.start + curr.length < length) {
|
|
|
|
int64_t nsectors_left;
|
|
|
|
int64_t sector_num;
|
|
|
|
int n;
|
|
|
|
|
|
|
|
sector_num = (curr.start + curr.length) >> BDRV_SECTOR_BITS;
|
|
|
|
|
|
|
|
/* Probe up to 1 GiB at a time. */
|
|
|
|
nsectors_left = DIV_ROUND_UP(length, BDRV_SECTOR_SIZE) - sector_num;
|
|
|
|
n = MIN(1 << (30 - BDRV_SECTOR_BITS), nsectors_left);
|
|
|
|
ret = get_block_status(bs, sector_num, n, &next);
|
|
|
|
|
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Could not read file metadata: %s", strerror(-ret));
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2016-01-26 06:59:02 +03:00
|
|
|
if (entry_mergeable(&curr, &next)) {
|
2013-09-04 21:00:33 +04:00
|
|
|
curr.length += next.length;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (curr.length > 0) {
|
|
|
|
dump_map_entry(output_format, &curr, &next);
|
|
|
|
}
|
|
|
|
curr = next;
|
|
|
|
}
|
|
|
|
|
|
|
|
dump_map_entry(output_format, &curr, NULL);
|
|
|
|
|
|
|
|
out:
|
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2013-09-04 21:00:33 +04:00
|
|
|
return ret < 0;
|
|
|
|
}
|
|
|
|
|
2009-01-07 20:40:15 +03:00
|
|
|
#define SNAPSHOT_LIST 1
|
|
|
|
#define SNAPSHOT_CREATE 2
|
|
|
|
#define SNAPSHOT_APPLY 3
|
|
|
|
#define SNAPSHOT_DELETE 4
|
|
|
|
|
2009-06-07 03:42:17 +04:00
|
|
|
static int img_snapshot(int argc, char **argv)
|
2009-01-07 20:40:15 +03:00
|
|
|
{
|
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk;
|
2009-01-07 20:40:15 +03:00
|
|
|
BlockDriverState *bs;
|
|
|
|
QEMUSnapshotInfo sn;
|
|
|
|
char *filename, *snapshot_name = NULL;
|
2010-06-20 23:26:35 +04:00
|
|
|
int c, ret = 0, bdrv_oflags;
|
2009-01-07 20:40:15 +03:00
|
|
|
int action = 0;
|
|
|
|
qemu_timeval tv;
|
2013-02-13 12:09:40 +04:00
|
|
|
bool quiet = false;
|
snapshot: distinguish id and name in snapshot delete
Snapshot creation already distinguishes id and name, since it takes
a structured parameter *sn, but deletion can't. Later, an accurate delete
is needed in qmp_transaction abort and blockdev-snapshot-delete-sync,
so change its prototype. Also, *errp is added to report the error, but the
return value is kept so callers can check what kind of error happened.
The existing callers, savevm, delvm and qemu-img, are not impacted, because
the new function bdrv_snapshot_delete_by_id_or_name() checks the return
value and performs the operation again.
Before this patch:
For qcow2, it searches by id first, then by name, to find the one to delete.
For rbd, it searches by name.
For sheepdog, it does nothing.
After this patch:
For qcow2, the logic is the same, achieved by calling it twice in the caller.
For rbd, deletion by id always fails, but the second try still searches by
name, so there is no change for the user.
Some code for *errp is based on Pavel's patch.
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-11 10:04:33 +04:00
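As an illustration of the caller-side pattern this message describes, here is a small hypothetical sketch (not part of the original file); it mirrors how img_snapshot() below uses the new helper, checking only the Error it may set.
static int example_delete_snapshot(BlockDriverState *bs, const char *tag)
{
    Error *err = NULL;
    /* The helper tries deletion by id first and falls back to the name. */
    bdrv_snapshot_delete_by_id_or_name(bs, tag, &err);
    if (err) {
        error_reportf_err(err, "Could not delete snapshot '%s': ", tag);
        return 1;
    }
    return 0;
}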
|
|
|
Error *err = NULL;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2009-01-07 20:40:15 +03:00
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
bdrv_oflags = BDRV_O_RDWR;
|
2009-01-07 20:40:15 +03:00
|
|
|
/* Parse commandline parameters */
|
|
|
|
for(;;) {
|
2016-02-17 13:10:17 +03:00
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2016-02-17 13:10:17 +03:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
|
|
|
c = getopt_long(argc, argv, "la:c:d:hq",
|
|
|
|
long_options, NULL);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (c == -1) {
|
2009-01-07 20:40:15 +03:00
|
|
|
break;
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-01-07 20:40:15 +03:00
|
|
|
switch(c) {
|
2010-12-06 17:25:40 +03:00
|
|
|
case '?':
|
2009-01-07 20:40:15 +03:00
|
|
|
case 'h':
|
|
|
|
help();
|
2009-06-07 03:42:17 +04:00
|
|
|
return 0;
|
2009-01-07 20:40:15 +03:00
|
|
|
case 'l':
|
|
|
|
if (action) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Cannot mix '-l', '-a', '-c', '-d'");
|
2009-06-07 03:42:17 +04:00
|
|
|
return 0;
|
2009-01-07 20:40:15 +03:00
|
|
|
}
|
|
|
|
action = SNAPSHOT_LIST;
|
2010-01-17 17:48:13 +03:00
|
|
|
bdrv_oflags &= ~BDRV_O_RDWR; /* no need for RW */
|
2009-01-07 20:40:15 +03:00
|
|
|
break;
|
|
|
|
case 'a':
|
|
|
|
if (action) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Cannot mix '-l', '-a', '-c', '-d'");
|
2009-06-07 03:42:17 +04:00
|
|
|
return 0;
|
2009-01-07 20:40:15 +03:00
|
|
|
}
|
|
|
|
action = SNAPSHOT_APPLY;
|
|
|
|
snapshot_name = optarg;
|
|
|
|
break;
|
|
|
|
case 'c':
|
|
|
|
if (action) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Cannot mix '-l', '-a', '-c', '-d'");
|
2009-06-07 03:42:17 +04:00
|
|
|
return 0;
|
2009-01-07 20:40:15 +03:00
|
|
|
}
|
|
|
|
action = SNAPSHOT_CREATE;
|
|
|
|
snapshot_name = optarg;
|
|
|
|
break;
|
|
|
|
case 'd':
|
|
|
|
if (action) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Cannot mix '-l', '-a', '-c', '-d'");
|
2009-06-07 03:42:17 +04:00
|
|
|
return 0;
|
2009-01-07 20:40:15 +03:00
|
|
|
}
|
|
|
|
action = SNAPSHOT_DELETE;
|
|
|
|
snapshot_name = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:40 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT: {
|
|
|
|
QemuOpts *opts;
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
} break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2009-01-07 20:40:15 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-08-05 12:53:04 +04:00
|
|
|
if (optind != argc - 1) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Expecting one image file name");
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-01-07 20:40:15 +03:00
|
|
|
filename = argv[optind++];
|
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2009-01-07 20:40:15 +03:00
|
|
|
/* Open the image */
|
2016-03-15 15:01:04 +03:00
|
|
|
blk = img_open(image_opts, filename, NULL, bdrv_oflags, false, quiet);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
|
|
|
return 1;
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2009-01-07 20:40:15 +03:00
|
|
|
|
|
|
|
/* Perform the requested action */
|
|
|
|
switch(action) {
|
|
|
|
case SNAPSHOT_LIST:
|
|
|
|
dump_snapshots(bs);
|
|
|
|
break;
|
|
|
|
|
|
|
|
case SNAPSHOT_CREATE:
|
|
|
|
memset(&sn, 0, sizeof(sn));
|
|
|
|
pstrcpy(sn.name, sizeof(sn.name), snapshot_name);
|
|
|
|
|
|
|
|
qemu_gettimeofday(&tv);
|
|
|
|
sn.date_sec = tv.tv_sec;
|
|
|
|
sn.date_nsec = tv.tv_usec * 1000;
|
|
|
|
|
|
|
|
ret = bdrv_snapshot_create(bs, &sn);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (ret) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Could not create snapshot '%s': %d (%s)",
|
2009-01-07 20:40:15 +03:00
|
|
|
snapshot_name, ret, strerror(-ret));
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-01-07 20:40:15 +03:00
|
|
|
break;
|
|
|
|
|
|
|
|
case SNAPSHOT_APPLY:
|
|
|
|
ret = bdrv_snapshot_goto(bs, snapshot_name);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (ret) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Could not apply snapshot '%s': %d (%s)",
|
2009-01-07 20:40:15 +03:00
|
|
|
snapshot_name, ret, strerror(-ret));
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-01-07 20:40:15 +03:00
|
|
|
break;
|
|
|
|
|
|
|
|
case SNAPSHOT_DELETE:
|
2013-09-11 10:04:33 +04:00
|
|
|
bdrv_snapshot_delete_by_id_or_name(bs, snapshot_name, &err);
|
2014-01-30 18:07:28 +04:00
|
|
|
if (err) {
|
2015-12-18 18:35:14 +03:00
|
|
|
error_reportf_err(err, "Could not delete snapshot '%s': ",
|
|
|
|
snapshot_name);
|
2013-09-11 10:04:33 +04:00
|
|
|
ret = 1;
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2009-01-07 20:40:15 +03:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Cleanup */
|
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2010-06-20 23:26:35 +04:00
|
|
|
if (ret) {
|
|
|
|
return 1;
|
|
|
|
}
|
2009-06-07 03:42:17 +04:00
|
|
|
return 0;
|
2009-01-07 20:40:15 +03:00
|
|
|
}
|
|
|
|
|
2010-01-12 14:55:18 +03:00
|
|
|
static int img_rebase(int argc, char **argv)
|
|
|
|
{
|
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk = NULL, *blk_old_backing = NULL, *blk_new_backing = NULL;
|
2016-02-26 01:53:54 +03:00
|
|
|
uint8_t *buf_old = NULL;
|
|
|
|
uint8_t *buf_new = NULL;
|
2015-02-05 21:58:18 +03:00
|
|
|
BlockDriverState *bs = NULL;
|
2010-01-12 14:55:18 +03:00
|
|
|
char *filename;
|
2014-07-23 00:58:42 +04:00
|
|
|
const char *fmt, *cache, *src_cache, *out_basefmt, *out_baseimg;
|
|
|
|
int c, flags, src_flags, ret;
|
2016-03-15 15:01:04 +03:00
|
|
|
bool writethrough, src_writethrough;
|
2010-01-12 14:55:18 +03:00
|
|
|
int unsafe = 0;
|
2011-03-30 16:16:25 +04:00
|
|
|
int progress = 0;
|
2013-02-13 12:09:40 +04:00
|
|
|
bool quiet = false;
|
2013-09-05 16:45:29 +04:00
|
|
|
Error *local_err = NULL;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2010-01-12 14:55:18 +03:00
|
|
|
|
|
|
|
/* Parse commandline parameters */
|
2010-03-02 14:14:31 +03:00
|
|
|
fmt = NULL;
|
2011-06-20 20:48:19 +04:00
|
|
|
cache = BDRV_DEFAULT_CACHE;
|
2014-07-23 00:58:42 +04:00
|
|
|
src_cache = BDRV_DEFAULT_CACHE;
|
2010-01-12 14:55:18 +03:00
|
|
|
out_baseimg = NULL;
|
|
|
|
out_basefmt = NULL;
|
|
|
|
for(;;) {
|
2016-02-17 13:10:17 +03:00
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2016-02-17 13:10:17 +03:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
|
|
|
c = getopt_long(argc, argv, "hf:F:b:upt:T:q",
|
|
|
|
long_options, NULL);
|
2010-12-06 17:25:39 +03:00
|
|
|
if (c == -1) {
|
2010-01-12 14:55:18 +03:00
|
|
|
break;
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
switch(c) {
|
2010-12-06 17:25:40 +03:00
|
|
|
case '?':
|
2010-01-12 14:55:18 +03:00
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
return 0;
|
2010-03-02 14:14:31 +03:00
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
2010-01-12 14:55:18 +03:00
|
|
|
case 'F':
|
|
|
|
out_basefmt = optarg;
|
|
|
|
break;
|
|
|
|
case 'b':
|
|
|
|
out_baseimg = optarg;
|
|
|
|
break;
|
|
|
|
case 'u':
|
|
|
|
unsafe = 1;
|
|
|
|
break;
|
2011-03-30 16:16:25 +04:00
|
|
|
case 'p':
|
|
|
|
progress = 1;
|
|
|
|
break;
|
2011-06-20 20:48:19 +04:00
|
|
|
case 't':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
2014-07-23 00:58:42 +04:00
|
|
|
case 'T':
|
|
|
|
src_cache = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:40 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT: {
|
|
|
|
QemuOpts *opts;
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
} break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-02-13 12:09:40 +04:00
|
|
|
if (quiet) {
|
|
|
|
progress = 0;
|
|
|
|
}
|
|
|
|
|
2014-04-22 09:36:11 +04:00
|
|
|
if (optind != argc - 1) {
|
|
|
|
error_exit("Expecting one image file name");
|
|
|
|
}
|
|
|
|
if (!unsafe && !out_baseimg) {
|
|
|
|
error_exit("Must specify backing file (-b) or use unsafe mode (-u)");
|
2010-12-06 17:25:39 +03:00
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
filename = argv[optind++];
|
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2011-03-30 16:16:25 +04:00
|
|
|
qemu_progress_init(progress, 2.0);
|
|
|
|
qemu_progress_print(0, 100);
|
|
|
|
|
2011-06-20 20:48:19 +04:00
|
|
|
flags = BDRV_O_RDWR | (unsafe ? BDRV_O_NO_BACKING : 0);
|
2016-03-15 15:01:04 +03:00
|
|
|
ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
|
2011-06-20 20:48:19 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid cache option: %s", cache);
|
2014-08-26 22:17:56 +04:00
|
|
|
goto out;
|
2011-06-20 20:48:19 +04:00
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
src_flags = 0;
|
|
|
|
ret = bdrv_parse_cache_mode(src_cache, &src_flags, &src_writethrough);
|
2014-07-23 00:58:42 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid source cache option: %s", src_cache);
|
2014-08-26 22:17:56 +04:00
|
|
|
goto out;
|
2014-07-23 00:58:42 +04:00
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
/* The source files are opened read-only, don't care about WCE */
|
|
|
|
assert((src_flags & BDRV_O_RDWR) == 0);
|
|
|
|
(void) src_writethrough;
|
|
|
|
|
2010-01-12 14:55:18 +03:00
|
|
|
/*
|
|
|
|
* Open the images.
|
|
|
|
*
|
|
|
|
* Ignore the old backing file for unsafe rebase in case we want to correct
|
|
|
|
* the reference to a renamed or moved backing file.
|
|
|
|
*/
|
2016-03-15 15:01:04 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
2014-08-26 22:17:56 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2010-01-12 14:55:18 +03:00
|
|
|
|
|
|
|
if (out_basefmt != NULL) {
|
2015-02-05 21:58:17 +03:00
|
|
|
if (bdrv_find_format(out_basefmt) == NULL) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Invalid format name: '%s'", out_basefmt);
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* For safe rebasing we need to compare old and new backing file */
|
2014-08-26 22:17:56 +04:00
|
|
|
if (!unsafe) {
|
2015-01-22 16:03:30 +03:00
|
|
|
char backing_name[PATH_MAX];
|
2015-02-05 21:58:17 +03:00
|
|
|
QDict *options = NULL;
|
|
|
|
|
|
|
|
if (bs->backing_format[0] != '\0') {
|
|
|
|
options = qdict_new();
|
|
|
|
qdict_put(options, "driver", qstring_from_str(bs->backing_format));
|
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
|
|
|
|
bdrv_get_backing_filename(bs, backing_name, sizeof(backing_name));
|
2016-03-16 21:54:38 +03:00
|
|
|
blk_old_backing = blk_new_open(backing_name, NULL,
|
2015-02-05 21:58:17 +03:00
|
|
|
options, src_flags, &local_err);
|
|
|
|
if (!blk_old_backing) {
|
2015-12-18 18:35:14 +03:00
|
|
|
error_reportf_err(local_err,
|
|
|
|
"Could not open old backing file '%s': ",
|
|
|
|
backing_name);
|
2016-10-09 12:17:27 +03:00
|
|
|
ret = -1;
|
2010-06-20 23:26:35 +04:00
|
|
|
goto out;
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
2015-02-05 21:58:17 +03:00
|
|
|
|
2012-10-16 16:46:18 +04:00
|
|
|
if (out_baseimg[0]) {
|
2015-02-05 21:58:17 +03:00
|
|
|
if (out_basefmt) {
|
|
|
|
options = qdict_new();
|
|
|
|
qdict_put(options, "driver", qstring_from_str(out_basefmt));
|
|
|
|
} else {
|
|
|
|
options = NULL;
|
|
|
|
}
|
|
|
|
|
2016-03-16 21:54:38 +03:00
|
|
|
blk_new_backing = blk_new_open(out_baseimg, NULL,
|
2015-02-05 21:58:17 +03:00
|
|
|
options, src_flags, &local_err);
|
|
|
|
if (!blk_new_backing) {
|
2015-12-18 18:35:14 +03:00
|
|
|
error_reportf_err(local_err,
|
|
|
|
"Could not open new backing file '%s': ",
|
|
|
|
out_baseimg);
|
2016-10-09 12:17:27 +03:00
|
|
|
ret = -1;
|
2012-10-16 16:46:18 +04:00
|
|
|
goto out;
|
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check each unallocated cluster in the COW file. If it is unallocated,
|
|
|
|
* accesses go to the backing file. We must therefore compare this cluster
|
|
|
|
* in the old and new backing file, and if they differ we need to copy it
|
|
|
|
* from the old backing file into the COW file.
|
|
|
|
*
|
|
|
|
* If qemu-img crashes during this step, no harm is done. The content of
|
|
|
|
* the image is the same as the original one at any time.
|
|
|
|
*/
|
|
|
|
if (!unsafe) {
|
2014-06-26 15:23:25 +04:00
|
|
|
int64_t num_sectors;
|
|
|
|
int64_t old_backing_num_sectors;
|
|
|
|
int64_t new_backing_num_sectors = 0;
|
2010-01-12 14:55:18 +03:00
|
|
|
uint64_t sector;
|
2010-04-29 16:47:48 +04:00
|
|
|
int n;
|
2012-10-12 16:29:18 +04:00
|
|
|
float local_progress = 0;
|
2010-02-08 11:20:00 +03:00
|
|
|
|
2015-02-05 21:58:18 +03:00
|
|
|
buf_old = blk_blockalign(blk, IO_BUF_SIZE);
|
|
|
|
buf_new = blk_blockalign(blk, IO_BUF_SIZE);
|
2010-01-12 14:55:18 +03:00
|
|
|
|
2015-02-05 21:58:18 +03:00
|
|
|
num_sectors = blk_nb_sectors(blk);
|
2014-06-26 15:23:25 +04:00
|
|
|
if (num_sectors < 0) {
|
|
|
|
error_report("Could not get size of '%s': %s",
|
|
|
|
filename, strerror(-num_sectors));
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2015-02-05 21:58:18 +03:00
|
|
|
old_backing_num_sectors = blk_nb_sectors(blk_old_backing);
|
2014-06-26 15:23:25 +04:00
|
|
|
if (old_backing_num_sectors < 0) {
|
2015-01-22 16:03:30 +03:00
|
|
|
char backing_name[PATH_MAX];
|
2014-06-26 15:23:25 +04:00
|
|
|
|
|
|
|
bdrv_get_backing_filename(bs, backing_name, sizeof(backing_name));
|
|
|
|
error_report("Could not get size of '%s': %s",
|
|
|
|
backing_name, strerror(-old_backing_num_sectors));
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2015-02-05 21:58:18 +03:00
|
|
|
if (blk_new_backing) {
|
|
|
|
new_backing_num_sectors = blk_nb_sectors(blk_new_backing);
|
2014-06-26 15:23:25 +04:00
|
|
|
if (new_backing_num_sectors < 0) {
|
|
|
|
error_report("Could not get size of '%s': %s",
|
|
|
|
out_baseimg, strerror(-new_backing_num_sectors));
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2012-10-16 16:46:18 +04:00
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
|
2012-10-12 16:29:18 +04:00
|
|
|
if (num_sectors != 0) {
|
|
|
|
local_progress = (float)100 /
|
|
|
|
(num_sectors / MIN(num_sectors, IO_BUF_SIZE / 512));
|
|
|
|
}
|
|
|
|
|
2010-01-12 14:55:18 +03:00
|
|
|
for (sector = 0; sector < num_sectors; sector += n) {
|
|
|
|
|
|
|
|
/* How many sectors can we handle with the next read? */
|
|
|
|
if (sector + (IO_BUF_SIZE / 512) <= num_sectors) {
|
|
|
|
n = (IO_BUF_SIZE / 512);
|
|
|
|
} else {
|
|
|
|
n = num_sectors - sector;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If the cluster is allocated, we don't need to take action */
|
2010-04-29 16:47:48 +04:00
|
|
|
ret = bdrv_is_allocated(bs, sector, n, &n);
|
2013-09-04 21:00:25 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("error while reading image metadata: %s",
|
|
|
|
strerror(-ret));
|
|
|
|
goto out;
|
|
|
|
}
|
2010-04-29 16:47:48 +04:00
|
|
|
if (ret) {
|
2010-01-12 14:55:18 +03:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2011-12-07 15:42:10 +04:00
|
|
|
/*
|
|
|
|
* Read old and new backing file and take into consideration that
|
|
|
|
* backing files may be smaller than the COW image.
|
|
|
|
*/
|
|
|
|
if (sector >= old_backing_num_sectors) {
|
|
|
|
memset(buf_old, 0, n * BDRV_SECTOR_SIZE);
|
|
|
|
} else {
|
|
|
|
if (sector + n > old_backing_num_sectors) {
|
|
|
|
n = old_backing_num_sectors - sector;
|
|
|
|
}
|
|
|
|
|
2016-05-06 19:26:43 +03:00
|
|
|
ret = blk_pread(blk_old_backing, sector << BDRV_SECTOR_BITS,
|
|
|
|
buf_old, n << BDRV_SECTOR_BITS);
|
2011-12-07 15:42:10 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("error while reading from old backing file");
|
|
|
|
goto out;
|
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
2011-12-07 15:42:10 +04:00
|
|
|
|
2015-02-05 21:58:18 +03:00
|
|
|
if (sector >= new_backing_num_sectors || !blk_new_backing) {
|
2011-12-07 15:42:10 +04:00
|
|
|
memset(buf_new, 0, n * BDRV_SECTOR_SIZE);
|
|
|
|
} else {
|
|
|
|
if (sector + n > new_backing_num_sectors) {
|
|
|
|
n = new_backing_num_sectors - sector;
|
|
|
|
}
|
|
|
|
|
2016-05-06 19:26:43 +03:00
|
|
|
ret = blk_pread(blk_new_backing, sector << BDRV_SECTOR_BITS,
|
|
|
|
buf_new, n << BDRV_SECTOR_BITS);
|
2011-12-07 15:42:10 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("error while reading from new backing file");
|
|
|
|
goto out;
|
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/* If they differ, we need to write to the COW file */
|
|
|
|
uint64_t written = 0;
|
|
|
|
|
|
|
|
while (written < n) {
|
|
|
|
int pnum;
|
|
|
|
|
|
|
|
if (compare_sectors(buf_old + written * 512,
|
2010-02-17 14:32:59 +03:00
|
|
|
buf_new + written * 512, n - written, &pnum))
|
2010-01-12 14:55:18 +03:00
|
|
|
{
|
2016-05-06 19:26:43 +03:00
|
|
|
ret = blk_pwrite(blk,
|
|
|
|
(sector + written) << BDRV_SECTOR_BITS,
|
|
|
|
buf_old + written * 512,
|
|
|
|
pnum << BDRV_SECTOR_BITS, 0);
|
2010-01-12 14:55:18 +03:00
|
|
|
if (ret < 0) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Error while writing to COW image: %s",
|
2010-01-12 14:55:18 +03:00
|
|
|
strerror(-ret));
|
2010-06-20 23:26:35 +04:00
|
|
|
goto out;
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
written += pnum;
|
|
|
|
}
|
2011-03-30 16:16:25 +04:00
|
|
|
qemu_progress_print(local_progress, 100);
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Change the backing file. All clusters that are different from the old
|
|
|
|
* backing file are overwritten in the COW file now, so the visible content
|
|
|
|
* doesn't change when we switch the backing file.
|
|
|
|
*/
|
2012-10-16 16:46:18 +04:00
|
|
|
if (out_baseimg && *out_baseimg) {
|
|
|
|
ret = bdrv_change_backing_file(bs, out_baseimg, out_basefmt);
|
|
|
|
} else {
|
|
|
|
ret = bdrv_change_backing_file(bs, NULL, NULL);
|
|
|
|
}
|
|
|
|
|
2010-01-12 14:55:18 +03:00
|
|
|
if (ret == -ENOSPC) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Could not change the backing file to '%s': No "
|
|
|
|
"space left in the file header", out_baseimg);
|
2010-01-12 14:55:18 +03:00
|
|
|
} else if (ret < 0) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Could not change the backing file to '%s': %s",
|
2010-01-12 14:55:18 +03:00
|
|
|
out_baseimg, strerror(-ret));
|
|
|
|
}
|
|
|
|
|
2011-03-30 16:16:25 +04:00
|
|
|
qemu_progress_print(100, 0);
|
2010-01-12 14:55:18 +03:00
|
|
|
/*
|
|
|
|
* TODO At this point it is possible to check if any clusters that are
|
|
|
|
* allocated in the COW file are the same in the backing file. If so, they
|
|
|
|
* could be dropped from the COW file. Don't do this before switching the
|
|
|
|
* backing file, in case of a crash this would lead to corruption.
|
|
|
|
*/
|
2010-06-20 23:26:35 +04:00
|
|
|
out:
|
2011-03-30 16:16:25 +04:00
|
|
|
qemu_progress_end();
|
2010-01-12 14:55:18 +03:00
|
|
|
/* Cleanup */
|
|
|
|
if (!unsafe) {
|
block: New BlockBackend
A block device consists of a frontend device model and a backend.
A block backend has a tree of block drivers doing the actual work.
The tree is managed by the block layer.
We currently use a single abstraction BlockDriverState both for tree
nodes and the backend as a whole. Drawbacks:
* Its API includes both stuff that makes sense only at the block
backend level (root of the tree) and stuff that's only for use
within the block layer. This makes the API bigger and more complex
than necessary. Moreover, it's not obvious which interfaces are
meant for device models, and which really aren't.
* Since device models keep a reference to their backend, the backend
object can't just be destroyed. But for media change, we need to
replace the tree. Our solution is to make the BlockDriverState
generic, with actual driver state in a separate object, pointed to
by member opaque. That lets us replace the tree by deinitializing
and reinitializing its root. This special need of the root makes
the data structure awkward everywhere in the tree.
The general plan is to separate the APIs into "block backend", for use
by device models, monitor and whatever other code dealing with block
backends, and "block driver", for use by the block layer and whatever
other code (if any) dealing with trees and tree nodes.
Code dealing with block backends, device models in particular, should
become completely oblivious of BlockDriverState. This should let us
clean up both APIs, and the tree data structures.
This commit is a first step. It creates a minimal "block backend"
API: type BlockBackend and functions to create, destroy and find them.
BlockBackend objects are created and destroyed exactly when root
BlockDriverState objects are created and destroyed. "Root" in the
sense of "in bdrv_states". They're not yet used for anything; that'll
come shortly.
A root BlockDriverState is created with bdrv_new_root(), so where to
create a BlockBackend is obvious. Where these roots get destroyed
isn't always as obvious.
It is obvious in qemu-img.c, qemu-io.c and qemu-nbd.c, and in error
paths of blockdev_init(), blk_connect(). That leaves destruction of
objects successfully created by blockdev_init() and blk_connect().
blockdev_init() is used only by drive_new() and qmp_blockdev_add().
Objects created by the latter are currently indestructible (see commit
48f364d "blockdev: Refuse to drive_del something added with
blockdev-add" and commit 2d246f0 "blockdev: Introduce
DriveInfo.enable_auto_del"). Objects created by the former get
destroyed by drive_del().
Objects created by blk_connect() get destroyed by blk_disconnect().
BlockBackend is reference-counted. Its reference count never exceeds
one so far, but that's going to change.
In drive_del(), the BB's reference count is surely one now. The BDS's
reference count is greater than one when something else is holding a
reference, such as a block job. In this case, the BB is destroyed
right away, but the BDS lives on until all extra references get
dropped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk_old_backing);
|
|
|
|
blk_unref(blk_new_backing);
|
2010-01-12 14:55:18 +03:00
|
|
|
}
|
2016-02-26 01:53:54 +03:00
|
|
|
qemu_vfree(buf_old);
|
|
|
|
qemu_vfree(buf_new);
|
2010-01-12 14:55:18 +03:00
|
|
|
|
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2010-06-20 23:26:35 +04:00
|
|
|
if (ret) {
|
|
|
|
return 1;
|
|
|
|
}
|
2010-01-12 14:55:18 +03:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
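/*
 * An illustrative sketch, not part of qemu-img.c: the BlockBackend
 * open/query/release cycle described in the "block: New BlockBackend"
 * commit message above, using only calls that already appear in this
 * file (blk_new_open(), blk_getlength(), blk_unref()). The helper name
 * is hypothetical.
 */
static int64_t probe_image_size(const char *filename, Error **errp)
{
    BlockBackend *blk;
    int64_t len;

    /* no node reference, no options dict, default read-only flags */
    blk = blk_new_open(filename, NULL, NULL, 0, errp);
    if (!blk) {
        return -1;
    }
    len = blk_getlength(blk);   /* virtual size in bytes, or -errno */
    blk_unref(blk);             /* drop the reference taken by blk_new_open() */
    return len;
}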
2010-04-24 12:12:12 +04:00
|
|
|
static int img_resize(int argc, char **argv)
|
|
|
|
{
|
2015-02-12 19:43:08 +03:00
|
|
|
Error *err = NULL;
|
2010-04-24 12:12:12 +04:00
|
|
|
int c, ret, relative;
|
|
|
|
const char *filename, *fmt, *size;
|
|
|
|
int64_t n, total_size;
|
2013-02-13 12:09:40 +04:00
|
|
|
bool quiet = false;
|
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk = NULL;
|
2012-08-06 06:18:42 +04:00
|
|
|
QemuOpts *param;
|
2016-02-17 13:10:17 +03:00
|
|
|
|
2012-08-06 06:18:42 +04:00
|
|
|
static QemuOptsList resize_options = {
|
|
|
|
.name = "resize_options",
|
|
|
|
.head = QTAILQ_HEAD_INITIALIZER(resize_options.head),
|
|
|
|
.desc = {
|
|
|
|
{
|
|
|
|
.name = BLOCK_OPT_SIZE,
|
|
|
|
.type = QEMU_OPT_SIZE,
|
|
|
|
.help = "Virtual disk size"
|
|
|
|
}, {
|
|
|
|
/* end of list */
|
|
|
|
}
|
2010-04-24 12:12:12 +04:00
|
|
|
},
|
|
|
|
};
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2010-04-24 12:12:12 +04:00
|
|
|
|
2011-04-29 12:58:12 +04:00
|
|
|
/* Remove size from argv manually so that negative numbers are not treated
|
|
|
|
* as options by getopt. */
|
|
|
|
if (argc < 3) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Not enough arguments");
|
2011-04-29 12:58:12 +04:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
size = argv[--argc];
|
|
|
|
|
|
|
|
/* Parse getopt arguments */
|
2010-04-24 12:12:12 +04:00
|
|
|
fmt = NULL;
|
|
|
|
for(;;) {
|
2016-02-17 13:10:17 +03:00
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2016-02-17 13:10:17 +03:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
|
|
|
c = getopt_long(argc, argv, "f:hq",
|
|
|
|
long_options, NULL);
|
2010-04-24 12:12:12 +04:00
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
switch(c) {
|
2010-12-06 17:25:40 +03:00
|
|
|
case '?':
|
2010-04-24 12:12:12 +04:00
|
|
|
case 'h':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
2013-02-13 12:09:40 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT: {
|
|
|
|
QemuOpts *opts;
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
} break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2010-04-24 12:12:12 +04:00
|
|
|
}
|
|
|
|
}
|
2013-08-05 12:53:04 +04:00
|
|
|
if (optind != argc - 1) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Expecting one image file name");
|
2010-04-24 12:12:12 +04:00
|
|
|
}
|
|
|
|
filename = argv[optind++];
|
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2010-04-24 12:12:12 +04:00
|
|
|
/* Choose grow, shrink, or absolute resize mode */
|
|
|
|
switch (size[0]) {
|
|
|
|
case '+':
|
|
|
|
relative = 1;
|
|
|
|
size++;
|
|
|
|
break;
|
|
|
|
case '-':
|
|
|
|
relative = -1;
|
|
|
|
size++;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
relative = 0;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Parse size */
|
2014-01-02 06:49:17 +04:00
|
|
|
param = qemu_opts_create(&resize_options, NULL, 0, &error_abort);
|
2015-02-12 19:52:20 +03:00
|
|
|
qemu_opt_set(param, BLOCK_OPT_SIZE, size, &err);
|
2015-02-12 19:43:08 +03:00
|
|
|
if (err) {
|
|
|
|
error_report_err(err);
|
2010-12-06 19:08:31 +03:00
|
|
|
ret = -1;
|
2012-08-06 06:18:42 +04:00
|
|
|
qemu_opts_del(param);
|
2010-12-06 19:08:31 +03:00
|
|
|
goto out;
|
2010-04-24 12:12:12 +04:00
|
|
|
}
|
2012-08-06 06:18:42 +04:00
|
|
|
n = qemu_opt_get_size(param, BLOCK_OPT_SIZE, 0);
|
|
|
|
qemu_opts_del(param);
|
2010-04-24 12:12:12 +04:00
|
|
|
|
2016-03-16 21:54:38 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt,
|
2016-03-15 15:01:04 +03:00
|
|
|
BDRV_O_RDWR, false, quiet);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
2010-12-06 19:08:31 +03:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-06-20 23:26:35 +04:00
|
|
|
}
|
2010-04-24 12:12:12 +04:00
|
|
|
|
|
|
|
if (relative) {
|
2015-02-05 21:58:18 +03:00
|
|
|
total_size = blk_getlength(blk) + n * relative;
|
2010-04-24 12:12:12 +04:00
|
|
|
} else {
|
|
|
|
total_size = n;
|
|
|
|
}
|
|
|
|
if (total_size <= 0) {
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("New image size must be positive");
|
2010-06-20 23:26:35 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2010-04-24 12:12:12 +04:00
|
|
|
}
|
|
|
|
|
2015-02-05 21:58:18 +03:00
|
|
|
ret = blk_truncate(blk, total_size);
|
2010-04-24 12:12:12 +04:00
|
|
|
switch (ret) {
|
|
|
|
case 0:
|
2013-02-13 12:09:40 +04:00
|
|
|
qprintf(quiet, "Image resized.\n");
|
2010-04-24 12:12:12 +04:00
|
|
|
break;
|
|
|
|
case -ENOTSUP:
|
2012-03-06 15:44:45 +04:00
|
|
|
error_report("This image does not support resize");
|
2010-04-24 12:12:12 +04:00
|
|
|
break;
|
|
|
|
case -EACCES:
|
2010-12-16 16:31:53 +03:00
|
|
|
error_report("Image is read-only");
|
2010-04-24 12:12:12 +04:00
|
|
|
break;
|
|
|
|
default:
|
2016-06-15 18:36:29 +03:00
|
|
|
error_report("Error resizing image: %s", strerror(-ret));
|
2010-04-24 12:12:12 +04:00
|
|
|
break;
|
|
|
|
}
|
2010-06-20 23:26:35 +04:00
|
|
|
out:
|
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2010-06-20 23:26:35 +04:00
|
|
|
if (ret) {
|
|
|
|
return 1;
|
|
|
|
}
|
2010-04-24 12:12:12 +04:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-10-27 13:12:51 +03:00
|
|
|
static void amend_status_cb(BlockDriverState *bs,
|
2015-07-27 18:51:32 +03:00
|
|
|
int64_t offset, int64_t total_work_size,
|
|
|
|
void *opaque)
|
2014-10-27 13:12:51 +03:00
|
|
|
{
|
|
|
|
qemu_progress_print(100.f * offset / total_work_size, 0);
|
|
|
|
}
|
|
|
|
|
2013-09-03 12:09:50 +04:00
|
|
|
static int img_amend(int argc, char **argv)
|
|
|
|
{
|
2015-02-12 20:37:11 +03:00
|
|
|
Error *err = NULL;
|
2013-09-03 12:09:50 +04:00
|
|
|
int c, ret = 0;
|
|
|
|
char *options = NULL;
|
2014-06-05 13:20:51 +04:00
|
|
|
QemuOptsList *create_opts = NULL;
|
|
|
|
QemuOpts *opts = NULL;
|
2014-07-23 00:58:43 +04:00
|
|
|
const char *fmt = NULL, *filename, *cache;
|
|
|
|
int flags;
|
2016-03-15 15:01:04 +03:00
|
|
|
bool writethrough;
|
2014-10-27 13:12:51 +03:00
|
|
|
bool quiet = false, progress = false;
|
2014-10-07 15:59:04 +04:00
|
|
|
BlockBackend *blk = NULL;
|
2013-09-03 12:09:50 +04:00
|
|
|
BlockDriverState *bs = NULL;
|
2016-02-17 13:10:20 +03:00
|
|
|
bool image_opts = false;
|
2013-09-03 12:09:50 +04:00
|
|
|
|
2014-07-23 00:58:43 +04:00
|
|
|
cache = BDRV_DEFAULT_CACHE;
|
2013-09-03 12:09:50 +04:00
|
|
|
for (;;) {
|
2016-02-17 13:10:17 +03:00
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
|
|
|
{"object", required_argument, 0, OPTION_OBJECT},
|
2016-02-17 13:10:20 +03:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2016-02-17 13:10:17 +03:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
|
|
|
c = getopt_long(argc, argv, "ho:f:t:pq",
|
|
|
|
long_options, NULL);
|
2013-09-03 12:09:50 +04:00
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (c) {
|
|
|
|
case 'h':
|
|
|
|
case '?':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'o':
|
2014-02-21 19:24:06 +04:00
|
|
|
if (!is_valid_option_list(optarg)) {
|
|
|
|
error_report("Invalid option list: %s", optarg);
|
|
|
|
ret = -1;
|
2015-08-21 02:00:38 +03:00
|
|
|
goto out_no_progress;
|
2014-02-21 19:24:06 +04:00
|
|
|
}
|
|
|
|
if (!options) {
|
|
|
|
options = g_strdup(optarg);
|
|
|
|
} else {
|
|
|
|
char *old_options = options;
|
|
|
|
options = g_strdup_printf("%s,%s", options, optarg);
|
|
|
|
g_free(old_options);
|
|
|
|
}
|
2013-09-03 12:09:50 +04:00
|
|
|
break;
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
2014-07-23 00:58:43 +04:00
|
|
|
case 't':
|
|
|
|
cache = optarg;
|
|
|
|
break;
|
2014-10-27 13:12:51 +03:00
|
|
|
case 'p':
|
|
|
|
progress = true;
|
|
|
|
break;
|
2013-09-03 12:09:50 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
2016-02-17 13:10:17 +03:00
|
|
|
case OPTION_OBJECT:
|
|
|
|
opts = qemu_opts_parse_noisily(&qemu_object_opts,
|
|
|
|
optarg, true);
|
|
|
|
if (!opts) {
|
|
|
|
ret = -1;
|
|
|
|
goto out_no_progress;
|
|
|
|
}
|
|
|
|
break;
|
2016-02-17 13:10:20 +03:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-02-21 19:24:07 +04:00
|
|
|
if (!options) {
|
2014-04-22 09:36:11 +04:00
|
|
|
error_exit("Must specify options (-o)");
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
|
2016-02-17 13:10:17 +03:00
|
|
|
if (qemu_opts_foreach(&qemu_object_opts,
|
|
|
|
user_creatable_add_opts_foreach,
|
2016-04-27 17:29:09 +03:00
|
|
|
NULL, NULL)) {
|
2016-02-17 13:10:17 +03:00
|
|
|
ret = -1;
|
|
|
|
goto out_no_progress;
|
|
|
|
}
|
|
|
|
|
2014-10-27 13:12:51 +03:00
|
|
|
if (quiet) {
|
|
|
|
progress = false;
|
|
|
|
}
|
|
|
|
qemu_progress_init(progress, 1.0);
|
|
|
|
|
2014-02-21 19:24:07 +04:00
|
|
|
filename = (optind == argc - 1) ? argv[argc - 1] : NULL;
|
|
|
|
if (fmt && has_help_option(options)) {
|
|
|
|
/* If a format is explicitly specified (and possibly no filename is
|
|
|
|
* given), print option help here */
|
|
|
|
ret = print_block_option_help(filename, fmt);
|
|
|
|
goto out;
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
|
2014-02-21 19:24:07 +04:00
|
|
|
if (optind != argc - 1) {
|
2014-10-27 13:12:52 +03:00
|
|
|
error_report("Expecting one image file name");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
2014-02-21 19:24:07 +04:00
|
|
|
}
|
2013-09-03 12:09:50 +04:00
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
flags = BDRV_O_RDWR;
|
|
|
|
ret = bdrv_parse_cache_mode(cache, &flags, &writethrough);
|
2014-07-23 00:58:43 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid cache option: %s", cache);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2016-03-15 15:01:04 +03:00
|
|
|
blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet);
|
2014-10-07 15:59:05 +04:00
|
|
|
if (!blk) {
|
2013-09-03 12:09:50 +04:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2014-10-07 15:59:05 +04:00
|
|
|
bs = blk_bs(blk);
|
2013-09-03 12:09:50 +04:00
|
|
|
|
|
|
|
fmt = bs->drv->format_name;
|
|
|
|
|
2014-02-21 19:24:06 +04:00
|
|
|
if (has_help_option(options)) {
|
2014-02-21 19:24:07 +04:00
|
|
|
/* If the format was auto-detected, print option help here */
|
2013-09-03 12:09:50 +04:00
|
|
|
ret = print_block_option_help(filename, fmt);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2014-12-02 20:32:47 +03:00
|
|
|
if (!bs->drv->create_opts) {
|
|
|
|
error_report("Format driver '%s' does not support any options to amend",
|
|
|
|
fmt);
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2014-06-05 13:21:11 +04:00
|
|
|
create_opts = qemu_opts_append(create_opts, bs->drv->create_opts);
|
2014-06-05 13:20:51 +04:00
|
|
|
opts = qemu_opts_create(create_opts, NULL, 0, &error_abort);
|
2015-02-12 20:37:11 +03:00
|
|
|
if (options) {
|
|
|
|
qemu_opts_do_parse(opts, options, NULL, &err);
|
|
|
|
if (err) {
|
2015-03-14 12:23:15 +03:00
|
|
|
error_report_err(err);
|
2015-02-12 20:37:11 +03:00
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
2013-09-03 12:09:50 +04:00
|
|
|
}
|
|
|
|
|
2014-10-27 13:12:51 +03:00
|
|
|
/* In case the driver does not call amend_status_cb() */
|
|
|
|
qemu_progress_print(0.f, 0);
|
2015-07-27 18:51:32 +03:00
|
|
|
ret = bdrv_amend_options(bs, opts, &amend_status_cb, NULL);
|
2014-10-27 13:12:51 +03:00
|
|
|
qemu_progress_print(100.f, 0);
|
2013-09-03 12:09:50 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Error while amending options: %s", strerror(-ret));
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
2014-10-27 13:12:51 +03:00
|
|
|
qemu_progress_end();
|
|
|
|
|
2015-08-21 02:00:38 +03:00
|
|
|
out_no_progress:
|
2014-10-07 15:59:04 +04:00
|
|
|
blk_unref(blk);
|
2014-06-05 13:20:51 +04:00
|
|
|
qemu_opts_del(opts);
|
|
|
|
qemu_opts_free(create_opts);
|
2014-02-21 19:24:06 +04:00
|
|
|
g_free(options);
|
|
|
|
|
2013-09-03 12:09:50 +04:00
|
|
|
if (ret) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
typedef struct BenchData {
|
|
|
|
BlockBackend *blk;
|
|
|
|
uint64_t image_size;
|
2015-07-10 19:09:18 +03:00
|
|
|
bool write;
|
2014-08-05 16:17:13 +04:00
|
|
|
int bufsize;
|
2015-07-13 14:13:17 +03:00
|
|
|
int step;
|
2014-08-05 16:17:13 +04:00
|
|
|
int nrreq;
|
|
|
|
int n;
|
2016-06-03 14:59:41 +03:00
|
|
|
int flush_interval;
|
|
|
|
bool drain_on_flush;
|
2014-08-05 16:17:13 +04:00
|
|
|
uint8_t *buf;
|
|
|
|
QEMUIOVector *qiov;
|
|
|
|
|
|
|
|
int in_flight;
|
2016-06-03 14:59:41 +03:00
|
|
|
bool in_flush;
|
2014-08-05 16:17:13 +04:00
|
|
|
uint64_t offset;
|
|
|
|
} BenchData;
|
|
|
|
|
2016-06-03 14:59:41 +03:00
|
|
|
static void bench_undrained_flush_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
if (ret < 0) {
|
2016-08-03 14:37:51 +03:00
|
|
|
error_report("Failed flush request: %s", strerror(-ret));
|
2016-06-03 14:59:41 +03:00
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
static void bench_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
BenchData *b = opaque;
|
|
|
|
BlockAIOCB *acb;
|
|
|
|
|
|
|
|
if (ret < 0) {
|
2016-08-03 14:37:51 +03:00
|
|
|
error_report("Failed request: %s", strerror(-ret));
|
2014-08-05 16:17:13 +04:00
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
2016-06-03 14:59:41 +03:00
|
|
|
|
|
|
|
if (b->in_flush) {
|
|
|
|
/* Just finished a flush with drained queue: Start next requests */
|
|
|
|
assert(b->in_flight == 0);
|
|
|
|
b->in_flush = false;
|
|
|
|
} else if (b->in_flight > 0) {
|
|
|
|
int remaining = b->n - b->in_flight;
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
b->n--;
|
|
|
|
b->in_flight--;
|
2016-06-03 14:59:41 +03:00
|
|
|
|
|
|
|
/* Time for flush? Drain queue if requested, then flush */
|
|
|
|
if (b->flush_interval && remaining % b->flush_interval == 0) {
|
|
|
|
if (!b->in_flight || !b->drain_on_flush) {
|
|
|
|
BlockCompletionFunc *cb;
|
|
|
|
|
|
|
|
if (b->drain_on_flush) {
|
|
|
|
b->in_flush = true;
|
|
|
|
cb = bench_cb;
|
|
|
|
} else {
|
|
|
|
cb = bench_undrained_flush_cb;
|
|
|
|
}
|
|
|
|
|
|
|
|
acb = blk_aio_flush(b->blk, cb, b);
|
|
|
|
if (!acb) {
|
|
|
|
error_report("Failed to issue flush request");
|
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (b->drain_on_flush) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
while (b->n > b->in_flight && b->in_flight < b->nrreq) {
|
2016-12-07 18:08:27 +03:00
|
|
|
int64_t offset = b->offset;
|
|
|
|
/* blk_aio_* might look for completed I/Os and kick bench_cb
|
|
|
|
* again, so make sure this operation is counted by in_flight
|
|
|
|
* and b->offset is ready for the next submission.
|
|
|
|
*/
|
|
|
|
b->in_flight++;
|
|
|
|
b->offset += b->step;
|
|
|
|
b->offset %= b->image_size;
|
2015-07-10 19:09:18 +03:00
|
|
|
if (b->write) {
|
2016-12-07 18:08:27 +03:00
|
|
|
acb = blk_aio_pwritev(b->blk, offset, b->qiov, 0, bench_cb, b);
|
2015-07-10 19:09:18 +03:00
|
|
|
} else {
|
2016-12-07 18:08:27 +03:00
|
|
|
acb = blk_aio_preadv(b->blk, offset, b->qiov, 0, bench_cb, b);
|
2015-07-10 19:09:18 +03:00
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
if (!acb) {
|
|
|
|
error_report("Failed to issue request");
|
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int img_bench(int argc, char **argv)
|
|
|
|
{
|
|
|
|
int c, ret = 0;
|
|
|
|
const char *fmt = NULL, *filename;
|
|
|
|
bool quiet = false;
|
|
|
|
bool image_opts = false;
|
2015-07-10 19:09:18 +03:00
|
|
|
bool is_write = false;
|
2014-08-05 16:17:13 +04:00
|
|
|
int count = 75000;
|
|
|
|
int depth = 64;
|
2015-07-10 19:09:18 +03:00
|
|
|
int64_t offset = 0;
|
2014-08-05 16:17:13 +04:00
|
|
|
size_t bufsize = 4096;
|
2015-07-10 19:09:18 +03:00
|
|
|
int pattern = 0;
|
2015-07-13 14:13:17 +03:00
|
|
|
size_t step = 0;
|
2016-06-03 14:59:41 +03:00
|
|
|
int flush_interval = 0;
|
|
|
|
bool drain_on_flush = true;
|
2014-08-05 16:17:13 +04:00
|
|
|
int64_t image_size;
|
|
|
|
BlockBackend *blk = NULL;
|
|
|
|
BenchData data = {};
|
|
|
|
int flags = 0;
|
2016-06-14 12:29:32 +03:00
|
|
|
bool writethrough = false;
|
2014-08-05 16:17:13 +04:00
|
|
|
struct timeval t1, t2;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
static const struct option long_options[] = {
|
|
|
|
{"help", no_argument, 0, 'h'},
|
2016-06-03 14:59:41 +03:00
|
|
|
{"flush-interval", required_argument, 0, OPTION_FLUSH_INTERVAL},
|
2014-08-05 16:17:13 +04:00
|
|
|
{"image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
|
2015-07-10 19:09:18 +03:00
|
|
|
{"pattern", required_argument, 0, OPTION_PATTERN},
|
2016-06-03 14:59:41 +03:00
|
|
|
{"no-drain", no_argument, 0, OPTION_NO_DRAIN},
|
2014-08-05 16:17:13 +04:00
|
|
|
{0, 0, 0, 0}
|
|
|
|
};
|
2015-07-13 14:13:17 +03:00
|
|
|
c = getopt_long(argc, argv, "hc:d:f:no:qs:S:t:w", long_options, NULL);
|
2014-08-05 16:17:13 +04:00
|
|
|
if (c == -1) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (c) {
|
|
|
|
case 'h':
|
|
|
|
case '?':
|
|
|
|
help();
|
|
|
|
break;
|
|
|
|
case 'c':
|
|
|
|
{
|
|
|
|
char *end;
|
|
|
|
errno = 0;
|
|
|
|
count = strtoul(optarg, &end, 0);
|
|
|
|
if (errno || *end || count > INT_MAX) {
|
|
|
|
error_report("Invalid request count specified");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
case 'd':
|
|
|
|
{
|
|
|
|
char *end;
|
|
|
|
errno = 0;
|
|
|
|
depth = strtoul(optarg, &end, 0);
|
|
|
|
if (errno || *end || depth > INT_MAX) {
|
|
|
|
error_report("Invalid queue depth specified");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
case 'f':
|
|
|
|
fmt = optarg;
|
|
|
|
break;
|
|
|
|
case 'n':
|
|
|
|
flags |= BDRV_O_NATIVE_AIO;
|
|
|
|
break;
|
2015-07-10 19:09:18 +03:00
|
|
|
case 'o':
|
|
|
|
{
|
|
|
|
char *end;
|
|
|
|
errno = 0;
|
|
|
|
offset = qemu_strtosz_suffix(optarg, &end,
|
|
|
|
QEMU_STRTOSZ_DEFSUFFIX_B);
|
|
|
|
if (offset < 0 || *end) {
|
|
|
|
error_report("Invalid offset specified");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
break;
|
2014-08-05 16:17:13 +04:00
|
|
|
case 'q':
|
|
|
|
quiet = true;
|
|
|
|
break;
|
|
|
|
case 's':
|
|
|
|
{
|
|
|
|
int64_t sval;
|
|
|
|
char *end;
|
|
|
|
|
|
|
|
sval = qemu_strtosz_suffix(optarg, &end, QEMU_STRTOSZ_DEFSUFFIX_B);
|
|
|
|
if (sval < 0 || sval > INT_MAX || *end) {
|
|
|
|
error_report("Invalid buffer size specified");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
bufsize = sval;
|
|
|
|
break;
|
|
|
|
}
|
2015-07-13 14:13:17 +03:00
|
|
|
case 'S':
|
|
|
|
{
|
|
|
|
int64_t sval;
|
|
|
|
char *end;
|
|
|
|
|
|
|
|
sval = qemu_strtosz_suffix(optarg, &end, QEMU_STRTOSZ_DEFSUFFIX_B);
|
|
|
|
if (sval < 0 || sval > INT_MAX || *end) {
|
|
|
|
error_report("Invalid step size specified");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
step = sval;
|
|
|
|
break;
|
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
case 't':
|
|
|
|
ret = bdrv_parse_cache_mode(optarg, &flags, &writethrough);
|
|
|
|
if (ret < 0) {
|
|
|
|
error_report("Invalid cache mode");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
break;
|
2015-07-10 19:09:18 +03:00
|
|
|
case 'w':
|
|
|
|
flags |= BDRV_O_RDWR;
|
|
|
|
is_write = true;
|
|
|
|
break;
|
|
|
|
case OPTION_PATTERN:
|
|
|
|
{
|
|
|
|
char *end;
|
|
|
|
errno = 0;
|
|
|
|
pattern = strtoul(optarg, &end, 0);
|
|
|
|
if (errno || *end || pattern > 0xff) {
|
|
|
|
error_report("Invalid pattern byte specified");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
2016-06-03 14:59:41 +03:00
|
|
|
case OPTION_FLUSH_INTERVAL:
|
|
|
|
{
|
|
|
|
char *end;
|
|
|
|
errno = 0;
|
|
|
|
flush_interval = strtoul(optarg, &end, 0);
|
|
|
|
if (errno || *end || flush_interval > INT_MAX) {
|
|
|
|
error_report("Invalid flush interval specified");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
case OPTION_NO_DRAIN:
|
|
|
|
drain_on_flush = false;
|
|
|
|
break;
|
2014-08-05 16:17:13 +04:00
|
|
|
case OPTION_IMAGE_OPTS:
|
|
|
|
image_opts = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (optind != argc - 1) {
|
|
|
|
error_exit("Expecting one image file name");
|
|
|
|
}
|
|
|
|
filename = argv[argc - 1];
|
|
|
|
|
2016-06-03 14:59:41 +03:00
|
|
|
if (!is_write && flush_interval) {
|
|
|
|
error_report("--flush-interval is only available in write tests");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (flush_interval && flush_interval < depth) {
|
|
|
|
error_report("Flush interval can't be smaller than depth");
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
blk = img_open(image_opts, filename, fmt, flags, writethrough, quiet);
|
|
|
|
if (!blk) {
|
|
|
|
ret = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
image_size = blk_getlength(blk);
|
|
|
|
if (image_size < 0) {
|
|
|
|
ret = image_size;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
data = (BenchData) {
|
2016-06-03 14:59:41 +03:00
|
|
|
.blk = blk,
|
|
|
|
.image_size = image_size,
|
|
|
|
.bufsize = bufsize,
|
|
|
|
.step = step ?: bufsize,
|
|
|
|
.nrreq = depth,
|
|
|
|
.n = count,
|
|
|
|
.offset = offset,
|
|
|
|
.write = is_write,
|
|
|
|
.flush_interval = flush_interval,
|
|
|
|
.drain_on_flush = drain_on_flush,
|
2014-08-05 16:17:13 +04:00
|
|
|
};
|
2015-07-10 19:09:18 +03:00
|
|
|
printf("Sending %d %s requests, %d bytes each, %d in parallel "
|
2015-07-13 14:13:17 +03:00
|
|
|
"(starting at offset %" PRId64 ", step size %d)\n",
|
2015-07-10 19:09:18 +03:00
|
|
|
data.n, data.write ? "write" : "read", data.bufsize, data.nrreq,
|
2015-07-13 14:13:17 +03:00
|
|
|
data.offset, data.step);
|
2016-06-03 14:59:41 +03:00
|
|
|
if (flush_interval) {
|
|
|
|
printf("Sending flush every %d requests\n", flush_interval);
|
|
|
|
}
|
2014-08-05 16:17:13 +04:00
|
|
|
|
|
|
|
data.buf = blk_blockalign(blk, data.nrreq * data.bufsize);
|
2015-07-10 19:09:18 +03:00
|
|
|
memset(data.buf, pattern, data.nrreq * data.bufsize);
|
|
|
|
|
2014-08-05 16:17:13 +04:00
|
|
|
data.qiov = g_new(QEMUIOVector, data.nrreq);
|
|
|
|
for (i = 0; i < data.nrreq; i++) {
|
|
|
|
qemu_iovec_init(&data.qiov[i], 1);
|
|
|
|
qemu_iovec_add(&data.qiov[i],
|
|
|
|
data.buf + i * data.bufsize, data.bufsize);
|
|
|
|
}
|
|
|
|
|
|
|
|
gettimeofday(&t1, NULL);
|
|
|
|
bench_cb(&data, 0);
|
|
|
|
|
|
|
|
while (data.n > 0) {
|
|
|
|
main_loop_wait(false);
|
|
|
|
}
|
|
|
|
gettimeofday(&t2, NULL);
|
|
|
|
|
|
|
|
printf("Run completed in %3.3f seconds.\n",
|
|
|
|
(t2.tv_sec - t1.tv_sec)
|
|
|
|
+ ((double)(t2.tv_usec - t1.tv_usec) / 1000000));
|
|
|
|
|
|
|
|
out:
|
|
|
|
qemu_vfree(data.buf);
|
|
|
|
blk_unref(blk);
|
|
|
|
|
|
|
|
if (ret) {
|
2016-08-10 05:43:12 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
#define C_BS      01
#define C_COUNT   02
#define C_IF      04
#define C_OF      010
#define C_SKIP    020

struct DdInfo {
    unsigned int flags;
    int64_t count;
};

struct DdIo {
    int bsz;    /* Block size */
    char *filename;
    uint8_t *buf;
    int64_t offset;
};

struct DdOpts {
    const char *name;
    int (*f)(const char *, struct DdIo *, struct DdIo *, struct DdInfo *);
    unsigned int flag;
};

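/* Each img_dd_* helper parses the value of one "name=value" operand into the
 * DdIo/DdInfo state and returns non-zero if the value is malformed. */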
static int img_dd_bs(const char *arg,
                     struct DdIo *in, struct DdIo *out,
                     struct DdInfo *dd)
{
    char *end;
    int64_t res;

    res = qemu_strtosz_suffix(arg, &end, QEMU_STRTOSZ_DEFSUFFIX_B);

    if (res <= 0 || res > INT_MAX || *end) {
        error_report("invalid number: '%s'", arg);
        return 1;
    }
    in->bsz = out->bsz = res;

    return 0;
}

static int img_dd_count(const char *arg,
                        struct DdIo *in, struct DdIo *out,
                        struct DdInfo *dd)
{
    char *end;

    dd->count = qemu_strtosz_suffix(arg, &end, QEMU_STRTOSZ_DEFSUFFIX_B);

    if (dd->count < 0 || *end) {
        error_report("invalid number: '%s'", arg);
        return 1;
    }

    return 0;
}

static int img_dd_if(const char *arg,
                     struct DdIo *in, struct DdIo *out,
                     struct DdInfo *dd)
{
    in->filename = g_strdup(arg);

    return 0;
}

static int img_dd_of(const char *arg,
                     struct DdIo *in, struct DdIo *out,
                     struct DdInfo *dd)
{
    out->filename = g_strdup(arg);

    return 0;
}

static int img_dd_skip(const char *arg,
                       struct DdIo *in, struct DdIo *out,
                       struct DdInfo *dd)
{
    char *end;

    in->offset = qemu_strtosz_suffix(arg, &end, QEMU_STRTOSZ_DEFSUFFIX_B);

    if (in->offset < 0 || *end) {
        error_report("invalid number: '%s'", arg);
        return 1;
    }

    return 0;
}

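/* qemu-img dd: a minimal dd(1)-style copy from one image to another,
 * supporting only the operands handled by the helpers above. */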
static int img_dd(int argc, char **argv)
{
    int ret = 0;
    char *arg = NULL;
    char *tmp;
    BlockDriver *drv = NULL, *proto_drv = NULL;
    BlockBackend *blk1 = NULL, *blk2 = NULL;
    QemuOpts *opts = NULL;
    QemuOptsList *create_opts = NULL;
    Error *local_err = NULL;
    bool image_opts = false;
    int c, i;
    const char *out_fmt = "raw";
    const char *fmt = NULL;
    int64_t size = 0;
    int64_t block_count = 0, out_pos, in_pos;
    struct DdInfo dd = {
        .flags = 0,
        .count = 0,
    };
    struct DdIo in = {
        .bsz = 512, /* Block size is by default 512 bytes */
        .filename = NULL,
        .buf = NULL,
        .offset = 0
    };
    struct DdIo out = {
        .bsz = 512,
        .filename = NULL,
        .buf = NULL,
        .offset = 0
    };

    const struct DdOpts options[] = {
        { "bs", img_dd_bs, C_BS },
        { "count", img_dd_count, C_COUNT },
        { "if", img_dd_if, C_IF },
        { "of", img_dd_of, C_OF },
        { "skip", img_dd_skip, C_SKIP },
        { NULL, NULL, 0 }
    };
    const struct option long_options[] = {
        { "help", no_argument, 0, 'h'},
        { "image-opts", no_argument, 0, OPTION_IMAGE_OPTS},
        { 0, 0, 0, 0 }
    };

    while ((c = getopt_long(argc, argv, "hf:O:", long_options, NULL))) {
        if (c == EOF) {
            break;
        }
        switch (c) {
        case 'O':
            out_fmt = optarg;
            break;
        case 'f':
            fmt = optarg;
            break;
        case '?':
            error_report("Try 'qemu-img --help' for more information.");
            ret = -1;
            goto out;
        case 'h':
            help();
            break;
        case OPTION_IMAGE_OPTS:
            image_opts = true;
            break;
        }
    }

    for (i = optind; i < argc; i++) {
        int j;
        arg = g_strdup(argv[i]);

        tmp = strchr(arg, '=');
        if (tmp == NULL) {
            error_report("unrecognized operand %s", arg);
            ret = -1;
            goto out;
        }

        *tmp++ = '\0';

        for (j = 0; options[j].name != NULL; j++) {
            if (!strcmp(arg, options[j].name)) {
                break;
            }
        }
        if (options[j].name == NULL) {
            error_report("unrecognized operand %s", arg);
            ret = -1;
            goto out;
        }

        if (options[j].f(tmp, &in, &out, &dd) != 0) {
            ret = -1;
            goto out;
        }
        dd.flags |= options[j].flag;
        g_free(arg);
        arg = NULL;
    }

    if (!(dd.flags & C_IF && dd.flags & C_OF)) {
        error_report("Must specify both input and output files");
        ret = -1;
        goto out;
    }
    blk1 = img_open(image_opts, in.filename, fmt, 0, false, false);

    if (!blk1) {
        ret = -1;
        goto out;
    }

    drv = bdrv_find_format(out_fmt);
    if (!drv) {
        error_report("Unknown file format");
        ret = -1;
        goto out;
    }
    proto_drv = bdrv_find_protocol(out.filename, true, &local_err);

    if (!proto_drv) {
        error_report_err(local_err);
        ret = -1;
        goto out;
    }
    if (!drv->create_opts) {
        error_report("Format driver '%s' does not support image creation",
                     drv->format_name);
        ret = -1;
        goto out;
    }
    if (!proto_drv->create_opts) {
        error_report("Protocol driver '%s' does not support image creation",
                     proto_drv->format_name);
        ret = -1;
        goto out;
    }

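    /* Create the output image with the merged create options of the format
     * and protocol drivers; its size is the input size (possibly clamped by
     * count=) minus the skipped prefix, or 0 if skip= points past the end. */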
    create_opts = qemu_opts_append(create_opts, drv->create_opts);
    create_opts = qemu_opts_append(create_opts, proto_drv->create_opts);

    opts = qemu_opts_create(create_opts, NULL, 0, &error_abort);

    size = blk_getlength(blk1);
    if (size < 0) {
        error_report("Failed to get size for '%s'", in.filename);
        ret = -1;
        goto out;
    }

    if (dd.flags & C_COUNT && dd.count <= INT64_MAX / in.bsz &&
        dd.count * in.bsz < size) {
        size = dd.count * in.bsz;
    }

    /* Overflow means the specified offset is beyond input image's size */
    if (dd.flags & C_SKIP && (in.offset > INT64_MAX / in.bsz ||
                              size < in.bsz * in.offset)) {
        qemu_opt_set_number(opts, BLOCK_OPT_SIZE, 0, &error_abort);
    } else {
        qemu_opt_set_number(opts, BLOCK_OPT_SIZE,
                            size - in.bsz * in.offset, &error_abort);
    }

    ret = bdrv_create(drv, out.filename, opts, &local_err);
    if (ret < 0) {
        error_reportf_err(local_err,
                          "%s: error while creating output image: ",
                          out.filename);
        ret = -1;
        goto out;
    }

    blk2 = img_open(image_opts, out.filename, out_fmt, BDRV_O_RDWR,
                    false, false);

    if (!blk2) {
        ret = -1;
        goto out;
    }

    if (dd.flags & C_SKIP && (in.offset > INT64_MAX / in.bsz ||
                              size < in.offset * in.bsz)) {
        /* We give a warning if the skip option is bigger than the input
         * size and create an empty output disk image (i.e. like dd(1)).
         */
        error_report("%s: cannot skip to specified offset", in.filename);
        in_pos = size;
    } else {
        in_pos = in.offset * in.bsz;
    }

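    /* Copy in.bsz-sized blocks from in_pos to out_pos until the clamped
     * input size is reached; the final block is shortened to the bytes that
     * remain. */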
    in.buf = g_new(uint8_t, in.bsz);

    for (out_pos = 0; in_pos < size; block_count++) {
        int in_ret, out_ret;

        if (in_pos + in.bsz > size) {
            in_ret = blk_pread(blk1, in_pos, in.buf, size - in_pos);
        } else {
            in_ret = blk_pread(blk1, in_pos, in.buf, in.bsz);
        }
        if (in_ret < 0) {
            error_report("error while reading from input image file: %s",
                         strerror(-in_ret));
            ret = -1;
            goto out;
        }
        in_pos += in_ret;

        out_ret = blk_pwrite(blk2, out_pos, in.buf, in_ret, 0);

        if (out_ret < 0) {
            error_report("error while writing to output image file: %s",
                         strerror(-out_ret));
            ret = -1;
            goto out;
        }
        out_pos += out_ret;
    }

out:
    g_free(arg);
    qemu_opts_del(opts);
    qemu_opts_free(create_opts);
    blk_unref(blk1);
    blk_unref(blk2);
    g_free(in.filename);
    g_free(out.filename);
    g_free(in.buf);
    g_free(out.buf);

    if (ret) {
        return 1;
    }
    return 0;
}

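/* Dispatch table of qemu-img sub-commands: every DEF() line in
 * qemu-img-cmds.h expands to one entry pairing a command name with its
 * handler function. */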
static const img_cmd_t img_cmds[] = {
#define DEF(option, callback, arg_string)        \
    { option, callback },
#include "qemu-img-cmds.h"
#undef DEF
#undef GEN_DOCS
    { NULL, NULL, },
};

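/* Initialize the minimal QEMU infrastructure qemu-img needs, parse the
 * global options (-h, -V, -T), then hand the remaining arguments to the
 * selected sub-command's handler. */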
int main(int argc, char **argv)
{
    const img_cmd_t *cmd;
    const char *cmdname;
    Error *local_error = NULL;
    char *trace_file = NULL;
    int c;
    static const struct option long_options[] = {
        {"help", no_argument, 0, 'h'},
        {"version", no_argument, 0, 'V'},
        {"trace", required_argument, NULL, 'T'},
        {0, 0, 0, 0}
    };

#ifdef CONFIG_POSIX
    signal(SIGPIPE, SIG_IGN);
#endif

    module_call_init(MODULE_INIT_TRACE);
    error_set_progname(argv[0]);
    qemu_init_exec_dir(argv[0]);

    if (qemu_init_main_loop(&local_error)) {
        error_report_err(local_error);
        exit(EXIT_FAILURE);
    }

    qcrypto_init(&error_fatal);

    module_call_init(MODULE_INIT_QOM);
    bdrv_init();
    if (argc < 2) {
        error_exit("Not enough arguments");
    }

    qemu_add_opts(&qemu_object_opts);
    qemu_add_opts(&qemu_source_opts);
    qemu_add_opts(&qemu_trace_opts);

    while ((c = getopt_long(argc, argv, "+hVT:", long_options, NULL)) != -1) {
        switch (c) {
        case 'h':
            help();
            return 0;
        case 'V':
            printf(QEMU_IMG_VERSION);
            return 0;
        case 'T':
            g_free(trace_file);
            trace_file = trace_opt_parse(optarg);
            break;
        }
    }

    cmdname = argv[optind];

    /* reset getopt_long scanning */
    argc -= optind;
    if (argc < 1) {
        return 0;
    }
    argv += optind;
    optind = 0;

    if (!trace_init_backends()) {
        exit(1);
    }
    trace_init_file(trace_file);
    qemu_set_log(LOG_TRACE);

    /* find the command */
    for (cmd = img_cmds; cmd->name != NULL; cmd++) {
        if (!strcmp(cmdname, cmd->name)) {
            return cmd->handler(argc, argv);
        }
    }

    /* not found */
    error_exit("Command not found: %s", cmdname);
}