/*
 * xen paravirt block device backend
 *
 * (c) Gerd Hoffmann <kraxel@redhat.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; under version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, see <http://www.gnu.org/licenses/>.
 *
 * Contributions after 2012-01-13 are licensed under the terms of the
 * GNU GPL, version 2 or (at your option) any later version.
 */

#include "qemu/osdep.h"
#include <sys/ioctl.h>
#include <sys/uio.h>

#include "hw/hw.h"
#include "hw/xen/xen_backend.h"
#include "xen_blkif.h"
#include "sysemu/blockdev.h"
#include "sysemu/block-backend.h"
#include "qapi/error.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qstring.h"

/* ------------------------------------------------------------- */

static int batch_maps   = 0;

static int max_requests = 32;

/* ------------------------------------------------------------- */

#define BLOCK_SIZE  512
#define IOCB_COUNT  (BLKIF_MAX_SEGMENTS_PER_REQUEST + 2)

struct PersistentGrant {
    void *page;
    struct XenBlkDev *blkdev;
};

typedef struct PersistentGrant PersistentGrant;

struct PersistentRegion {
    void *addr;
    int num;
};

typedef struct PersistentRegion PersistentRegion;

struct ioreq {
    blkif_request_t     req;
    int16_t             status;

    /* parsed request */
    off_t               start;
    QEMUIOVector        v;
    int                 presync;
    uint8_t             mapped;

    /* grant mapping */
    uint32_t            domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    uint32_t            refs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    int                 prot;
    void                *page[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    void                *pages;
    int                 num_unmap;

    /* aio status */
    int                 aio_inflight;
    int                 aio_errors;

    struct XenBlkDev    *blkdev;
    QLIST_ENTRY(ioreq)   list;
    BlockAcctCookie     acct;
};

struct XenBlkDev {
    struct XenDevice    xendev;  /* must be first */
    char                *params;
    char                *mode;
    char                *type;
    char                *dev;
    char                *devtype;
    bool                directiosafe;
    const char          *fileproto;
    const char          *filename;
    int                 ring_ref;
    void                *sring;
    int64_t             file_blk;
    int64_t             file_size;
    int                 protocol;
    blkif_back_rings_t  rings;
    int                 more_work;
    int                 cnt_map;

    /* request lists */
    QLIST_HEAD(inflight_head, ioreq) inflight;
    QLIST_HEAD(finished_head, ioreq) finished;
    QLIST_HEAD(freelist_head, ioreq) freelist;
    int                 requests_total;
    int                 requests_inflight;
    int                 requests_finished;

    /* Persistent grants extension */
    gboolean            feature_discard;
    gboolean            feature_persistent;
    GTree               *persistent_gnts;
    GSList              *persistent_regions;
    unsigned int        persistent_gnt_count;
    unsigned int        max_grants;

    /* Grant copy */
    gboolean            feature_grant_copy;

    /* qemu block driver */
    DriveInfo           *dinfo;
    BlockBackend        *blk;
    QEMUBH              *bh;
};

/* ------------------------------------------------------------- */

static void ioreq_reset(struct ioreq *ioreq)
{
    memset(&ioreq->req, 0, sizeof(ioreq->req));
    ioreq->status = 0;
    ioreq->start = 0;
    ioreq->presync = 0;
    ioreq->mapped = 0;

    memset(ioreq->domids, 0, sizeof(ioreq->domids));
    memset(ioreq->refs, 0, sizeof(ioreq->refs));
    ioreq->prot = 0;
    memset(ioreq->page, 0, sizeof(ioreq->page));
    ioreq->pages = NULL;

    ioreq->aio_inflight = 0;
    ioreq->aio_errors = 0;

    ioreq->blkdev = NULL;
    memset(&ioreq->list, 0, sizeof(ioreq->list));
    memset(&ioreq->acct, 0, sizeof(ioreq->acct));

    qemu_iovec_reset(&ioreq->v);
}

static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
{
    uint ua = GPOINTER_TO_UINT(a);
    uint ub = GPOINTER_TO_UINT(b);
    return (ua > ub) - (ua < ub);
}

static void destroy_grant(gpointer pgnt)
{
    PersistentGrant *grant = pgnt;
    xengnttab_handle *gnt = grant->blkdev->xendev.gnttabdev;

    if (xengnttab_unmap(gnt, grant->page, 1) != 0) {
        xen_pv_printf(&grant->blkdev->xendev, 0,
                      "xengnttab_unmap failed: %s\n",
                      strerror(errno));
    }
    grant->blkdev->persistent_gnt_count--;
    xen_pv_printf(&grant->blkdev->xendev, 3,
                  "unmapped grant %p\n", grant->page);
    g_free(grant);
}

static void remove_persistent_region(gpointer data, gpointer dev)
{
    PersistentRegion *region = data;
    struct XenBlkDev *blkdev = dev;
    xengnttab_handle *gnt = blkdev->xendev.gnttabdev;

    if (xengnttab_unmap(gnt, region->addr, region->num) != 0) {
        xen_pv_printf(&blkdev->xendev, 0,
                      "xengnttab_unmap region %p failed: %s\n",
                      region->addr, strerror(errno));
    }
    xen_pv_printf(&blkdev->xendev, 3,
                  "unmapped grant region %p with %d pages\n",
                  region->addr, region->num);
    g_free(region);
}

static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
{
    struct ioreq *ioreq = NULL;

    if (QLIST_EMPTY(&blkdev->freelist)) {
        if (blkdev->requests_total >= max_requests) {
            goto out;
        }
        /* allocate new struct */
        ioreq = g_malloc0(sizeof(*ioreq));
        ioreq->blkdev = blkdev;
        blkdev->requests_total++;
        qemu_iovec_init(&ioreq->v, BLKIF_MAX_SEGMENTS_PER_REQUEST);
    } else {
        /* get one from freelist */
        ioreq = QLIST_FIRST(&blkdev->freelist);
        QLIST_REMOVE(ioreq, list);
    }
    QLIST_INSERT_HEAD(&blkdev->inflight, ioreq, list);
    blkdev->requests_inflight++;

out:
    return ioreq;
}

static void ioreq_finish(struct ioreq *ioreq)
{
    struct XenBlkDev *blkdev = ioreq->blkdev;

    QLIST_REMOVE(ioreq, list);
    QLIST_INSERT_HEAD(&blkdev->finished, ioreq, list);
    blkdev->requests_inflight--;
    blkdev->requests_finished++;
}

static void ioreq_release(struct ioreq *ioreq, bool finish)
{
    struct XenBlkDev *blkdev = ioreq->blkdev;

    QLIST_REMOVE(ioreq, list);
    ioreq_reset(ioreq);
    ioreq->blkdev = blkdev;
    QLIST_INSERT_HEAD(&blkdev->freelist, ioreq, list);
    if (finish) {
        blkdev->requests_finished--;
    } else {
        blkdev->requests_inflight--;
    }
}

/*
 * translate request into iovec + start offset
 * do sanity checks along the way
 */
static int ioreq_parse(struct ioreq *ioreq)
{
    struct XenBlkDev *blkdev = ioreq->blkdev;
    uintptr_t mem;
    size_t len;
    int i;

    xen_pv_printf(&blkdev->xendev, 3,
                  "op %d, nr %d, handle %d, id %" PRId64 ", sector %" PRId64 "\n",
                  ioreq->req.operation, ioreq->req.nr_segments,
                  ioreq->req.handle, ioreq->req.id, ioreq->req.sector_number);
    switch (ioreq->req.operation) {
    case BLKIF_OP_READ:
        ioreq->prot = PROT_WRITE; /* to memory */
        break;
    case BLKIF_OP_FLUSH_DISKCACHE:
        ioreq->presync = 1;
        if (!ioreq->req.nr_segments) {
            return 0;
        }
        /* fall through */
    case BLKIF_OP_WRITE:
        ioreq->prot = PROT_READ; /* from memory */
        break;
    case BLKIF_OP_DISCARD:
        return 0;
    default:
        xen_pv_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n",
                      ioreq->req.operation);
        goto err;
    };

    if (ioreq->req.operation != BLKIF_OP_READ && blkdev->mode[0] != 'w') {
        xen_pv_printf(&blkdev->xendev, 0, "error: write req for ro device\n");
        goto err;
    }

    ioreq->start = ioreq->req.sector_number * blkdev->file_blk;
    for (i = 0; i < ioreq->req.nr_segments; i++) {
        if (i == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
            xen_pv_printf(&blkdev->xendev, 0, "error: nr_segments too big\n");
            goto err;
        }
        if (ioreq->req.seg[i].first_sect > ioreq->req.seg[i].last_sect) {
            xen_pv_printf(&blkdev->xendev, 0, "error: first > last sector\n");
            goto err;
        }
        if (ioreq->req.seg[i].last_sect * BLOCK_SIZE >= XC_PAGE_SIZE) {
            xen_pv_printf(&blkdev->xendev, 0, "error: page crossing\n");
            goto err;
        }

        ioreq->domids[i] = blkdev->xendev.dom;
        ioreq->refs[i]   = ioreq->req.seg[i].gref;

        mem = ioreq->req.seg[i].first_sect * blkdev->file_blk;
        len = (ioreq->req.seg[i].last_sect - ioreq->req.seg[i].first_sect + 1) * blkdev->file_blk;
        qemu_iovec_add(&ioreq->v, (void*)mem, len);
    }
    if (ioreq->start + ioreq->v.size > blkdev->file_size) {
        xen_pv_printf(&blkdev->xendev, 0, "error: access beyond end of file\n");
        goto err;
    }
    return 0;

err:
    ioreq->status = BLKIF_RSP_ERROR;
    return -1;
}

static void ioreq_unmap(struct ioreq *ioreq)
{
    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
    int i;

    if (ioreq->num_unmap == 0 || ioreq->mapped == 0) {
        return;
    }
    if (batch_maps) {
        if (!ioreq->pages) {
            return;
        }
        if (xengnttab_unmap(gnt, ioreq->pages, ioreq->num_unmap) != 0) {
            xen_pv_printf(&ioreq->blkdev->xendev, 0,
                          "xengnttab_unmap failed: %s\n",
                          strerror(errno));
        }
        ioreq->blkdev->cnt_map -= ioreq->num_unmap;
        ioreq->pages = NULL;
    } else {
        for (i = 0; i < ioreq->num_unmap; i++) {
            if (!ioreq->page[i]) {
                continue;
            }
            if (xengnttab_unmap(gnt, ioreq->page[i], 1) != 0) {
                xen_pv_printf(&ioreq->blkdev->xendev, 0,
                              "xengnttab_unmap failed: %s\n",
                              strerror(errno));
            }
            ioreq->blkdev->cnt_map--;
            ioreq->page[i] = NULL;
        }
    }
    ioreq->mapped = 0;
}

static int ioreq_map(struct ioreq *ioreq)
{
    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
    uint32_t domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    uint32_t refs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    void *page[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    int i, j, new_maps = 0;
    PersistentGrant *grant;
    PersistentRegion *region;
    /* The domids and refs variables will contain the information necessary
     * to map the grants that are needed to fulfill this request.
     *
     * After mapping the needed grants, the page array will contain the
     * memory address of each granted page in the order specified in ioreq
     * (disregarding whether it's a persistent grant or not).
     */

    if (ioreq->v.niov == 0 || ioreq->mapped == 1) {
        return 0;
    }
    if (ioreq->blkdev->feature_persistent) {
        for (i = 0; i < ioreq->v.niov; i++) {
            grant = g_tree_lookup(ioreq->blkdev->persistent_gnts,
                                  GUINT_TO_POINTER(ioreq->refs[i]));

            if (grant != NULL) {
                page[i] = grant->page;
                xen_pv_printf(&ioreq->blkdev->xendev, 3,
                              "using persistent-grant %" PRIu32 "\n",
                              ioreq->refs[i]);
            } else {
                /* Add the grant to the list of grants that
                 * should be mapped
                 */
                domids[new_maps] = ioreq->domids[i];
                refs[new_maps] = ioreq->refs[i];
                page[i] = NULL;
                new_maps++;
            }
        }
        /* Set the protection to RW, since grants may be reused later
         * with a different protection than the one needed for this request
         */
        ioreq->prot = PROT_WRITE | PROT_READ;
    } else {
        /* All grants in the request should be mapped */
        memcpy(refs, ioreq->refs, sizeof(refs));
        memcpy(domids, ioreq->domids, sizeof(domids));
        memset(page, 0, sizeof(page));
        new_maps = ioreq->v.niov;
    }

    if (batch_maps && new_maps) {
        ioreq->pages = xengnttab_map_grant_refs
            (gnt, new_maps, domids, refs, ioreq->prot);
        if (ioreq->pages == NULL) {
            xen_pv_printf(&ioreq->blkdev->xendev, 0,
                          "can't map %d grant refs (%s, %d maps)\n",
                          new_maps, strerror(errno), ioreq->blkdev->cnt_map);
            return -1;
        }
        for (i = 0, j = 0; i < ioreq->v.niov; i++) {
            if (page[i] == NULL) {
                page[i] = ioreq->pages + (j++) * XC_PAGE_SIZE;
            }
        }
        ioreq->blkdev->cnt_map += new_maps;
    } else if (new_maps) {
        for (i = 0; i < new_maps; i++) {
            ioreq->page[i] = xengnttab_map_grant_ref
                (gnt, domids[i], refs[i], ioreq->prot);
            if (ioreq->page[i] == NULL) {
                xen_pv_printf(&ioreq->blkdev->xendev, 0,
                              "can't map grant ref %d (%s, %d maps)\n",
                              refs[i], strerror(errno), ioreq->blkdev->cnt_map);
                ioreq->mapped = 1;
                ioreq_unmap(ioreq);
                return -1;
            }
            ioreq->blkdev->cnt_map++;
        }
        for (i = 0, j = 0; i < ioreq->v.niov; i++) {
            if (page[i] == NULL) {
                page[i] = ioreq->page[j++];
            }
        }
    }
    if (ioreq->blkdev->feature_persistent && new_maps != 0 &&
        (!batch_maps || (ioreq->blkdev->persistent_gnt_count + new_maps <=
        ioreq->blkdev->max_grants))) {
        /*
         * If we are using persistent grants and batch mappings only
         * add the new maps to the list of persistent grants if the whole
         * area can be persistently mapped.
         */
        if (batch_maps) {
            region = g_malloc0(sizeof(*region));
            region->addr = ioreq->pages;
            region->num = new_maps;
            ioreq->blkdev->persistent_regions = g_slist_append(
                                            ioreq->blkdev->persistent_regions,
                                            region);
        }
        while ((ioreq->blkdev->persistent_gnt_count < ioreq->blkdev->max_grants)
              && new_maps) {
            /* Go through the list of newly mapped grants and add as many
             * as possible to the list of persistently mapped grants.
             *
             * Since we start at the end of ioreq->page(s), we only need
             * to decrease new_maps to prevent these granted pages from
             * being unmapped in ioreq_unmap.
             */
            grant = g_malloc0(sizeof(*grant));
            new_maps--;
            if (batch_maps) {
                grant->page = ioreq->pages + (new_maps) * XC_PAGE_SIZE;
            } else {
                grant->page = ioreq->page[new_maps];
            }
            grant->blkdev = ioreq->blkdev;
            xen_pv_printf(&ioreq->blkdev->xendev, 3,
                          "adding grant %" PRIu32 " page: %p\n",
                          refs[new_maps], grant->page);
            g_tree_insert(ioreq->blkdev->persistent_gnts,
                          GUINT_TO_POINTER(refs[new_maps]),
                          grant);
            ioreq->blkdev->persistent_gnt_count++;
        }
        assert(!batch_maps || new_maps == 0);
    }
    for (i = 0; i < ioreq->v.niov; i++) {
        ioreq->v.iov[i].iov_base += (uintptr_t)page[i];
    }
    ioreq->mapped = 1;
    ioreq->num_unmap = new_maps;
    return 0;
}

#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 40800

static void ioreq_free_copy_buffers(struct ioreq *ioreq)
{
    int i;

    for (i = 0; i < ioreq->v.niov; i++) {
        ioreq->page[i] = NULL;
    }

    qemu_vfree(ioreq->pages);
}

static int ioreq_init_copy_buffers(struct ioreq *ioreq)
{
    int i;

    if (ioreq->v.niov == 0) {
        return 0;
    }

    ioreq->pages = qemu_memalign(XC_PAGE_SIZE, ioreq->v.niov * XC_PAGE_SIZE);

    for (i = 0; i < ioreq->v.niov; i++) {
        ioreq->page[i] = ioreq->pages + i * XC_PAGE_SIZE;
        ioreq->v.iov[i].iov_base = ioreq->page[i];
    }

    return 0;
}

static int ioreq_grant_copy(struct ioreq *ioreq)
{
    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
    xengnttab_grant_copy_segment_t segs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    int i, count, rc;
    int64_t file_blk = ioreq->blkdev->file_blk;

    if (ioreq->v.niov == 0) {
        return 0;
    }

    count = ioreq->v.niov;

    for (i = 0; i < count; i++) {
        if (ioreq->req.operation == BLKIF_OP_READ) {
            segs[i].flags = GNTCOPY_dest_gref;
            segs[i].dest.foreign.ref = ioreq->refs[i];
            segs[i].dest.foreign.domid = ioreq->domids[i];
            segs[i].dest.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
            segs[i].source.virt = ioreq->v.iov[i].iov_base;
        } else {
            segs[i].flags = GNTCOPY_source_gref;
            segs[i].source.foreign.ref = ioreq->refs[i];
            segs[i].source.foreign.domid = ioreq->domids[i];
            segs[i].source.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
            segs[i].dest.virt = ioreq->v.iov[i].iov_base;
        }
        segs[i].len = (ioreq->req.seg[i].last_sect
                       - ioreq->req.seg[i].first_sect + 1) * file_blk;
    }

    rc = xengnttab_grant_copy(gnt, count, segs);

    if (rc) {
        xen_pv_printf(&ioreq->blkdev->xendev, 0,
                      "failed to copy data %d\n", rc);
        ioreq->aio_errors++;
        return -1;
    }

    for (i = 0; i < count; i++) {
        if (segs[i].status != GNTST_okay) {
            xen_pv_printf(&ioreq->blkdev->xendev, 3,
                          "failed to copy data %d for gref %d, domid %d\n",
                          segs[i].status, ioreq->refs[i], ioreq->domids[i]);
            ioreq->aio_errors++;
            rc = -1;
        }
    }

    return rc;
}
#else
static void ioreq_free_copy_buffers(struct ioreq *ioreq)
{
    abort();
}

static int ioreq_init_copy_buffers(struct ioreq *ioreq)
{
    abort();
}

static int ioreq_grant_copy(struct ioreq *ioreq)
{
    abort();
}
#endif

static int ioreq_runio_qemu_aio(struct ioreq *ioreq);

static void qemu_aio_complete(void *opaque, int ret)
{
    struct ioreq *ioreq = opaque;

    if (ret != 0) {
        xen_pv_printf(&ioreq->blkdev->xendev, 0, "%s I/O error\n",
                      ioreq->req.operation == BLKIF_OP_READ ? "read" : "write");
        ioreq->aio_errors++;
    }

    ioreq->aio_inflight--;
    if (ioreq->presync) {
        ioreq->presync = 0;
        ioreq_runio_qemu_aio(ioreq);
        return;
    }
    if (ioreq->aio_inflight > 0) {
        return;
    }

    if (ioreq->blkdev->feature_grant_copy) {
        switch (ioreq->req.operation) {
        case BLKIF_OP_READ:
            /* in case of failure ioreq->aio_errors is increased */
            if (ret == 0) {
                ioreq_grant_copy(ioreq);
            }
            ioreq_free_copy_buffers(ioreq);
            break;
        case BLKIF_OP_WRITE:
        case BLKIF_OP_FLUSH_DISKCACHE:
            if (!ioreq->req.nr_segments) {
                break;
            }
            ioreq_free_copy_buffers(ioreq);
            break;
        default:
            break;
        }
    }

    ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
    if (!ioreq->blkdev->feature_grant_copy) {
        ioreq_unmap(ioreq);
    }
    ioreq_finish(ioreq);
    switch (ioreq->req.operation) {
    case BLKIF_OP_WRITE:
    case BLKIF_OP_FLUSH_DISKCACHE:
        if (!ioreq->req.nr_segments) {
            break;
        }
        /* fall through */
    case BLKIF_OP_READ:
        if (ioreq->status == BLKIF_RSP_OKAY) {
            block_acct_done(blk_get_stats(ioreq->blkdev->blk), &ioreq->acct);
        } else {
            block_acct_failed(blk_get_stats(ioreq->blkdev->blk), &ioreq->acct);
        }
        break;
    case BLKIF_OP_DISCARD:
    default:
        break;
    }
    qemu_bh_schedule(ioreq->blkdev->bh);
}

static bool blk_split_discard(struct ioreq *ioreq, blkif_sector_t sector_number,
                              uint64_t nr_sectors)
{
    struct XenBlkDev *blkdev = ioreq->blkdev;
    int64_t byte_offset;
    int byte_chunk;
    uint64_t byte_remaining, limit;
    uint64_t sec_start = sector_number;
    uint64_t sec_count = nr_sectors;

    /* Wrap around, or overflowing byte limit? */
    if (sec_start + sec_count < sec_count ||
        sec_start + sec_count > INT64_MAX >> BDRV_SECTOR_BITS) {
        return false;
    }

    limit = BDRV_REQUEST_MAX_SECTORS << BDRV_SECTOR_BITS;
    byte_offset = sec_start << BDRV_SECTOR_BITS;
    byte_remaining = sec_count << BDRV_SECTOR_BITS;

    do {
        byte_chunk = byte_remaining > limit ? limit : byte_remaining;
        ioreq->aio_inflight++;
        blk_aio_pdiscard(blkdev->blk, byte_offset, byte_chunk,
                         qemu_aio_complete, ioreq);
        byte_remaining -= byte_chunk;
        byte_offset += byte_chunk;
    } while (byte_remaining > 0);

    return true;
}
|
|
|
|
|
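blk_split_discard() caps each discard at BDRV_REQUEST_MAX_SECTORS worth of bytes and rejects sector ranges that wrap or exceed the signed byte limit. A minimal, self-contained sketch of the same splitting arithmetic (constants inlined, the AIO callback replaced by a chunk counter; the helper name is illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SECTOR_BITS 9            /* 512-byte sectors, as BDRV_SECTOR_BITS */

/* Count how many chunks a [start, start+count) sector range splits into
 * when each chunk is at most 'limit' bytes; return false on overflow. */
static bool split_range(uint64_t sec_start, uint64_t sec_count,
                        uint64_t limit, unsigned *chunks)
{
    uint64_t byte_remaining;
    *chunks = 0;

    /* Wrap around, or overflowing the signed byte limit? */
    if (sec_start + sec_count < sec_count ||
        sec_start + sec_count > (uint64_t)INT64_MAX >> SECTOR_BITS) {
        return false;
    }

    byte_remaining = sec_count << SECTOR_BITS;
    do {
        uint64_t byte_chunk = byte_remaining > limit ? limit : byte_remaining;
        (*chunks)++;             /* one blk_aio_pdiscard() per chunk */
        byte_remaining -= byte_chunk;
    } while (byte_remaining > 0);

    return true;
}
```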
static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
{
    struct XenBlkDev *blkdev = ioreq->blkdev;

    if (ioreq->blkdev->feature_grant_copy) {
        ioreq_init_copy_buffers(ioreq);
        if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
            ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
            ioreq_grant_copy(ioreq)) {
            ioreq_free_copy_buffers(ioreq);
            goto err;
        }
    } else {
        if (ioreq->req.nr_segments && ioreq_map(ioreq)) {
            goto err;
        }
    }

    ioreq->aio_inflight++;
    if (ioreq->presync) {
        blk_aio_flush(ioreq->blkdev->blk, qemu_aio_complete, ioreq);
        return 0;
    }

    switch (ioreq->req.operation) {
    case BLKIF_OP_READ:
        block_acct_start(blk_get_stats(blkdev->blk), &ioreq->acct,
                         ioreq->v.size, BLOCK_ACCT_READ);
        ioreq->aio_inflight++;
        blk_aio_preadv(blkdev->blk, ioreq->start, &ioreq->v, 0,
                       qemu_aio_complete, ioreq);
        break;
    case BLKIF_OP_WRITE:
    case BLKIF_OP_FLUSH_DISKCACHE:
        if (!ioreq->req.nr_segments) {
            break;
        }

        block_acct_start(blk_get_stats(blkdev->blk), &ioreq->acct,
                         ioreq->v.size,
                         ioreq->req.operation == BLKIF_OP_WRITE ?
                         BLOCK_ACCT_WRITE : BLOCK_ACCT_FLUSH);
        ioreq->aio_inflight++;
        blk_aio_pwritev(blkdev->blk, ioreq->start, &ioreq->v, 0,
                        qemu_aio_complete, ioreq);
        break;
    case BLKIF_OP_DISCARD:
    {
        struct blkif_request_discard *req = (void *)&ioreq->req;
        if (!blk_split_discard(ioreq, req->sector_number, req->nr_sectors)) {
            goto err;
        }
        break;
    }
    default:
        /* unknown operation (shouldn't happen -- parse catches this) */
        if (!ioreq->blkdev->feature_grant_copy) {
            ioreq_unmap(ioreq);
        }
        goto err;
    }

    qemu_aio_complete(ioreq, 0);

    return 0;

err:
    ioreq_finish(ioreq);
    ioreq->status = BLKIF_RSP_ERROR;
    return -1;
}

static int blk_send_response_one(struct ioreq *ioreq)
{
    struct XenBlkDev *blkdev = ioreq->blkdev;
    int send_notify = 0;
    int have_requests = 0;
    blkif_response_t *resp;

    /* Place on the response ring for the relevant domain. */
    switch (blkdev->protocol) {
    case BLKIF_PROTOCOL_NATIVE:
        resp = (blkif_response_t *) RING_GET_RESPONSE(
            &blkdev->rings.native, blkdev->rings.native.rsp_prod_pvt);
        break;
    case BLKIF_PROTOCOL_X86_32:
        resp = (blkif_response_t *) RING_GET_RESPONSE(
            &blkdev->rings.x86_32_part, blkdev->rings.x86_32_part.rsp_prod_pvt);
        break;
    case BLKIF_PROTOCOL_X86_64:
        resp = (blkif_response_t *) RING_GET_RESPONSE(
            &blkdev->rings.x86_64_part, blkdev->rings.x86_64_part.rsp_prod_pvt);
        break;
    default:
        return 0;
    }

    resp->id        = ioreq->req.id;
    resp->operation = ioreq->req.operation;
    resp->status    = ioreq->status;

    blkdev->rings.common.rsp_prod_pvt++;

    RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blkdev->rings.common, send_notify);
    if (blkdev->rings.common.rsp_prod_pvt == blkdev->rings.common.req_cons) {
        /*
         * Tail check for pending requests. Allows frontend to avoid
         * notifications if requests are already in flight (lower
         * overheads and promotes batching).
         */
        RING_FINAL_CHECK_FOR_REQUESTS(&blkdev->rings.common, have_requests);
    } else if (RING_HAS_UNCONSUMED_REQUESTS(&blkdev->rings.common)) {
        have_requests = 1;
    }

    if (have_requests) {
        blkdev->more_work++;
    }
    return send_notify;
}

/* walk finished list, send outstanding responses, free requests */
static void blk_send_response_all(struct XenBlkDev *blkdev)
{
    struct ioreq *ioreq;
    int send_notify = 0;

    while (!QLIST_EMPTY(&blkdev->finished)) {
        ioreq = QLIST_FIRST(&blkdev->finished);
        send_notify += blk_send_response_one(ioreq);
        ioreq_release(ioreq, true);
    }
    if (send_notify) {
        xen_pv_send_notify(&blkdev->xendev);
    }
}

static int blk_get_request(struct XenBlkDev *blkdev, struct ioreq *ioreq,
                           RING_IDX rc)
{
    switch (blkdev->protocol) {
    case BLKIF_PROTOCOL_NATIVE:
        memcpy(&ioreq->req, RING_GET_REQUEST(&blkdev->rings.native, rc),
               sizeof(ioreq->req));
        break;
    case BLKIF_PROTOCOL_X86_32:
        blkif_get_x86_32_req(&ioreq->req,
                             RING_GET_REQUEST(&blkdev->rings.x86_32_part, rc));
        break;
    case BLKIF_PROTOCOL_X86_64:
        blkif_get_x86_64_req(&ioreq->req,
                             RING_GET_REQUEST(&blkdev->rings.x86_64_part, rc));
        break;
    }
    /* Prevent the compiler from accessing the on-ring fields instead. */
    barrier();
    return 0;
}

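The barrier() after the copy matters because the ring page is shared with (and writable by) the frontend: the request must be snapshotted into private memory, and the compiler must not be allowed to re-read the ring slot afterwards. A minimal sketch of that pattern, assuming a GCC/Clang-style compiler barrier and a hypothetical simplified request struct:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A compiler barrier like QEMU's barrier(): stops the compiler from
 * reordering or refetching memory accesses across this point (GCC/Clang). */
#define barrier() __asm__ __volatile__("" ::: "memory")

struct req { uint64_t id; uint8_t operation; };

/* Snapshot a request out of (possibly guest-writable) ring memory, then
 * fence so later validation provably uses the private copy. */
static struct req snapshot_request(const volatile struct req *ring_slot)
{
    struct req local;
    memcpy(&local, (const void *)ring_slot, sizeof(local));
    barrier();   /* prevent the compiler from re-reading *ring_slot */
    return local;
}
```

Validating the private copy instead of the shared slot closes a time-of-check/time-of-use window against a malicious frontend.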
static void blk_handle_requests(struct XenBlkDev *blkdev)
{
    RING_IDX rc, rp;
    struct ioreq *ioreq;

    blkdev->more_work = 0;

    rc = blkdev->rings.common.req_cons;
    rp = blkdev->rings.common.sring->req_prod;
    xen_rmb(); /* Ensure we see queued requests up to 'rp'. */

    blk_send_response_all(blkdev);
    while (rc != rp) {
        /* pull request from ring */
        if (RING_REQUEST_CONS_OVERFLOW(&blkdev->rings.common, rc)) {
            break;
        }
        ioreq = ioreq_start(blkdev);
        if (ioreq == NULL) {
            blkdev->more_work++;
            break;
        }
        blk_get_request(blkdev, ioreq, rc);
        blkdev->rings.common.req_cons = ++rc;

        /* parse them */
        if (ioreq_parse(ioreq) != 0) {

            switch (ioreq->req.operation) {
            case BLKIF_OP_READ:
                block_acct_invalid(blk_get_stats(blkdev->blk),
                                   BLOCK_ACCT_READ);
                break;
            case BLKIF_OP_WRITE:
                block_acct_invalid(blk_get_stats(blkdev->blk),
                                   BLOCK_ACCT_WRITE);
                break;
            case BLKIF_OP_FLUSH_DISKCACHE:
                block_acct_invalid(blk_get_stats(blkdev->blk),
                                   BLOCK_ACCT_FLUSH);
                /* fall through */
            default:
                break;
            }

            if (blk_send_response_one(ioreq)) {
                xen_pv_send_notify(&blkdev->xendev);
            }
            ioreq_release(ioreq, false);
            continue;
        }

        ioreq_runio_qemu_aio(ioreq);
    }

    if (blkdev->more_work && blkdev->requests_inflight < max_requests) {
        qemu_bh_schedule(blkdev->bh);
    }
}

/* ------------------------------------------------------------- */

static void blk_bh(void *opaque)
{
    struct XenBlkDev *blkdev = opaque;
    blk_handle_requests(blkdev);
}

/*
 * We need to account for the grant allocations requiring contiguous
 * chunks; the worst case number would be
 *     max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1,
 * but in order to keep things simple just use
 *     2 * max_req * max_seg.
 */
#define MAX_GRANTS(max_req, max_seg) (2 * (max_req) * (max_seg))

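The simplified bound 2 * max_req * max_seg always dominates the exact worst case quoted in the comment (for max_req, max_seg >= 1 the difference reduces to max_req + max_seg >= 2). A quick assert-based check of that inequality for a few parameter values, purely illustrative:

```c
#include <assert.h>

#define MAX_GRANTS(max_req, max_seg) (2 * (max_req) * (max_seg))

/* Exact worst case from the comment above:
 * max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1 */
static int worst_case_grants(int max_req, int max_seg)
{
    return max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1;
}
```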
static void blk_alloc(struct XenDevice *xendev)
{
    struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);

    QLIST_INIT(&blkdev->inflight);
    QLIST_INIT(&blkdev->finished);
    QLIST_INIT(&blkdev->freelist);
    blkdev->bh = qemu_bh_new(blk_bh, blkdev);
    if (xen_mode != XEN_EMULATE) {
        batch_maps = 1;
    }
    if (xengnttab_set_max_grants(xendev->gnttabdev,
            MAX_GRANTS(max_requests, BLKIF_MAX_SEGMENTS_PER_REQUEST)) < 0) {
        xen_pv_printf(xendev, 0, "xengnttab_set_max_grants failed: %s\n",
                      strerror(errno));
    }
}

static void blk_parse_discard(struct XenBlkDev *blkdev)
{
    int enable;

    blkdev->feature_discard = true;

    if (xenstore_read_be_int(&blkdev->xendev, "discard-enable", &enable) == 0) {
        blkdev->feature_discard = !!enable;
    }

    if (blkdev->feature_discard) {
        xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
    }
}

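blk_parse_discard() defaults discard to enabled and only turns it off when the xenstore key "discard-enable" is present and parses as zero. The same defaulting logic, sketched with a plain string standing in for the xenstore read (the helper is hypothetical, not the backend API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Mirror of the blk_parse_discard() decision: key absent (NULL) means
 * "enabled"; otherwise any non-zero integer value enables discard. */
static bool discard_enabled(const char *discard_enable_value)
{
    bool feature_discard = true;   /* default, as in blk_parse_discard() */

    if (discard_enable_value != NULL) {
        feature_discard = !!atoi(discard_enable_value);
    }
    return feature_discard;
}
```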
static int blk_init(struct XenDevice *xendev)
{
    struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
    int info = 0;
    char *directiosafe = NULL;

    /* read xenstore entries */
    if (blkdev->params == NULL) {
        char *h = NULL;
        blkdev->params = xenstore_read_be_str(&blkdev->xendev, "params");
        if (blkdev->params != NULL) {
            h = strchr(blkdev->params, ':');
        }
        if (h != NULL) {
            blkdev->fileproto = blkdev->params;
            blkdev->filename  = h + 1;
            *h = 0;
        } else {
            blkdev->fileproto = "<unset>";
            blkdev->filename  = blkdev->params;
        }
    }
    if (!strcmp("aio", blkdev->fileproto)) {
        blkdev->fileproto = "raw";
    }
    if (!strcmp("vhd", blkdev->fileproto)) {
        blkdev->fileproto = "vpc";
    }
    if (blkdev->mode == NULL) {
        blkdev->mode = xenstore_read_be_str(&blkdev->xendev, "mode");
    }
    if (blkdev->type == NULL) {
        blkdev->type = xenstore_read_be_str(&blkdev->xendev, "type");
    }
    if (blkdev->dev == NULL) {
        blkdev->dev = xenstore_read_be_str(&blkdev->xendev, "dev");
    }
    if (blkdev->devtype == NULL) {
        blkdev->devtype = xenstore_read_be_str(&blkdev->xendev, "device-type");
    }
    directiosafe = xenstore_read_be_str(&blkdev->xendev, "direct-io-safe");
    blkdev->directiosafe = (directiosafe && atoi(directiosafe));

    /* do we have all we need? */
    if (blkdev->params == NULL ||
        blkdev->mode == NULL   ||
        blkdev->type == NULL   ||
        blkdev->dev == NULL) {
        goto out_error;
    }

    /* read-only ? */
    if (strcmp(blkdev->mode, "w")) {
        info |= VDISK_READONLY;
    }

    /* cdrom ? */
    if (blkdev->devtype && !strcmp(blkdev->devtype, "cdrom")) {
        info |= VDISK_CDROM;
    }

    blkdev->file_blk = BLOCK_SIZE;

    /* fill info
     * blk_connect supplies sector-size and sectors
     */
    xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
    xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
    xenstore_write_be_int(&blkdev->xendev, "info", info);

    blk_parse_discard(blkdev);

    g_free(directiosafe);
    return 0;

out_error:
    g_free(blkdev->params);
    blkdev->params = NULL;
    g_free(blkdev->mode);
    blkdev->mode = NULL;
    g_free(blkdev->type);
    blkdev->type = NULL;
    g_free(blkdev->dev);
    blkdev->dev = NULL;
    g_free(blkdev->devtype);
    blkdev->devtype = NULL;
    g_free(directiosafe);
    blkdev->directiosafe = false;
    return -1;
}

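The xenstore "params" value read in blk_init() is either a bare filename or "protocol:filename" split in place at the first colon, with the legacy aio-to-raw and vhd-to-vpc aliases applied afterwards. A self-contained sketch of that split (the helper name is illustrative):

```c
#include <assert.h>
#include <string.h>

/* Split "proto:filename" in place at the first ':', as blk_init() does
 * with the xenstore "params" value; a missing colon leaves the protocol
 * as "<unset>" and treats the whole string as the filename. */
static void split_params(char *params, const char **fileproto,
                         const char **filename)
{
    char *h = strchr(params, ':');

    if (h != NULL) {
        *fileproto = params;
        *filename = h + 1;
        *h = 0;
    } else {
        *fileproto = "<unset>";
        *filename = params;
    }

    /* legacy protocol aliases */
    if (!strcmp("aio", *fileproto)) {
        *fileproto = "raw";
    }
    if (!strcmp("vhd", *fileproto)) {
        *fileproto = "vpc";
    }
}
```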
static int blk_connect(struct XenDevice *xendev)
{
    struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
    int pers, index, qflags;
    bool readonly = true;
    bool writethrough = true;

    /* read-only ? */
    if (blkdev->directiosafe) {
        qflags = BDRV_O_NOCACHE | BDRV_O_NATIVE_AIO;
    } else {
        qflags = 0;
        writethrough = false;
    }
    if (strcmp(blkdev->mode, "w") == 0) {
        qflags |= BDRV_O_RDWR;
        readonly = false;
    }
    if (blkdev->feature_discard) {
        qflags |= BDRV_O_UNMAP;
    }

    /* init qemu block driver */
    index = (blkdev->xendev.dev - 202 * 256) / 16;
    blkdev->dinfo = drive_get(IF_XEN, 0, index);
    if (!blkdev->dinfo) {
        Error *local_err = NULL;
        QDict *options = NULL;

        if (strcmp(blkdev->fileproto, "<unset>")) {
            options = qdict_new();
            qdict_put_str(options, "driver", blkdev->fileproto);
        }

        /* setup via xenbus -> create new block driver instance */
        xen_pv_printf(&blkdev->xendev, 2, "create new bdrv (xenbus setup)\n");
        blkdev->blk = blk_new_open(blkdev->filename, NULL, options,
                                   qflags, &local_err);
        if (!blkdev->blk) {
            xen_pv_printf(&blkdev->xendev, 0, "error: %s\n",
                          error_get_pretty(local_err));
            error_free(local_err);
            return -1;
        }
        blk_set_enable_write_cache(blkdev->blk, !writethrough);
    } else {
        /* setup via qemu cmdline -> already setup for us */
        xen_pv_printf(&blkdev->xendev, 2,
                      "get configured bdrv (cmdline setup)\n");
        blkdev->blk = blk_by_legacy_dinfo(blkdev->dinfo);
        if (blk_is_read_only(blkdev->blk) && !readonly) {
            xen_pv_printf(&blkdev->xendev, 0, "Unexpected read-only drive");
            blkdev->blk = NULL;
            return -1;
        }
        /* blkdev->blk is not created by us, we take a reference
         * so we can blk_unref() unconditionally */
        blk_ref(blkdev->blk);
    }
    blk_attach_dev_legacy(blkdev->blk, blkdev);
    blkdev->file_size = blk_getlength(blkdev->blk);
    if (blkdev->file_size < 0) {
        BlockDriverState *bs = blk_bs(blkdev->blk);
        const char *drv_name = bs ? bdrv_get_format_name(bs) : NULL;
        xen_pv_printf(&blkdev->xendev, 1, "blk_getlength: %d (%s) | drv %s\n",
                      (int)blkdev->file_size, strerror(-blkdev->file_size),
                      drv_name ?: "-");
        blkdev->file_size = 0;
    }

    xen_pv_printf(xendev, 1, "type \"%s\", fileproto \"%s\", filename \"%s\","
                  " size %" PRId64 " (%" PRId64 " MB)\n",
                  blkdev->type, blkdev->fileproto, blkdev->filename,
                  blkdev->file_size, blkdev->file_size >> 20);

    /* Fill in the sector size and number of sectors */
    xenstore_write_be_int(&blkdev->xendev, "sector-size", blkdev->file_blk);
    xenstore_write_be_int64(&blkdev->xendev, "sectors",
                            blkdev->file_size / blkdev->file_blk);

    if (xenstore_read_fe_int(&blkdev->xendev, "ring-ref",
                             &blkdev->ring_ref) == -1) {
        return -1;
    }
    if (xenstore_read_fe_int(&blkdev->xendev, "event-channel",
                             &blkdev->xendev.remote_port) == -1) {
        return -1;
    }
    if (xenstore_read_fe_int(&blkdev->xendev, "feature-persistent", &pers)) {
        blkdev->feature_persistent = FALSE;
    } else {
        blkdev->feature_persistent = !!pers;
    }

    if (!blkdev->xendev.protocol) {
        blkdev->protocol = BLKIF_PROTOCOL_NATIVE;
    } else if (strcmp(blkdev->xendev.protocol, XEN_IO_PROTO_ABI_NATIVE) == 0) {
        blkdev->protocol = BLKIF_PROTOCOL_NATIVE;
    } else if (strcmp(blkdev->xendev.protocol, XEN_IO_PROTO_ABI_X86_32) == 0) {
        blkdev->protocol = BLKIF_PROTOCOL_X86_32;
    } else if (strcmp(blkdev->xendev.protocol, XEN_IO_PROTO_ABI_X86_64) == 0) {
        blkdev->protocol = BLKIF_PROTOCOL_X86_64;
    } else {
        blkdev->protocol = BLKIF_PROTOCOL_NATIVE;
    }

    blkdev->sring = xengnttab_map_grant_ref(blkdev->xendev.gnttabdev,
                                            blkdev->xendev.dom,
                                            blkdev->ring_ref,
                                            PROT_READ | PROT_WRITE);
    if (!blkdev->sring) {
        return -1;
    }
    blkdev->cnt_map++;

    switch (blkdev->protocol) {
    case BLKIF_PROTOCOL_NATIVE:
    {
        blkif_sring_t *sring_native = blkdev->sring;

        BACK_RING_INIT(&blkdev->rings.native, sring_native, XC_PAGE_SIZE);
        break;
    }
    case BLKIF_PROTOCOL_X86_32:
    {
        blkif_x86_32_sring_t *sring_x86_32 = blkdev->sring;

        BACK_RING_INIT(&blkdev->rings.x86_32_part, sring_x86_32, XC_PAGE_SIZE);
        break;
    }
    case BLKIF_PROTOCOL_X86_64:
    {
        blkif_x86_64_sring_t *sring_x86_64 = blkdev->sring;

        BACK_RING_INIT(&blkdev->rings.x86_64_part, sring_x86_64, XC_PAGE_SIZE);
        break;
    }
    }

    if (blkdev->feature_persistent) {
        /* Init persistent grants */
        blkdev->max_grants = max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST;
        blkdev->persistent_gnts = g_tree_new_full((GCompareDataFunc)int_cmp,
                                                  NULL, NULL,
                                                  batch_maps ?
                                                  (GDestroyNotify)g_free :
                                                  (GDestroyNotify)destroy_grant);
        blkdev->persistent_regions = NULL;
        blkdev->persistent_gnt_count = 0;
    }

    xen_be_bind_evtchn(&blkdev->xendev);

    blkdev->feature_grant_copy =
                (xengnttab_grant_copy(blkdev->xendev.gnttabdev, 0, NULL) == 0);

    xen_pv_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
                  blkdev->feature_grant_copy ? "enabled" : "disabled");

    xen_pv_printf(&blkdev->xendev, 1, "ok: proto %s, ring-ref %d, "
                  "remote port %d, local port %d\n",
                  blkdev->xendev.protocol, blkdev->ring_ref,
                  blkdev->xendev.remote_port, blkdev->xendev.local_port);
    return 0;
}

static void blk_disconnect(struct XenDevice *xendev)
{
    struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);

    if (blkdev->blk) {
        blk_detach_dev(blkdev->blk, blkdev);
        blk_unref(blkdev->blk);
        blkdev->blk = NULL;
    }
    xen_pv_unbind_evtchn(&blkdev->xendev);

    if (blkdev->sring) {
        xengnttab_unmap(blkdev->xendev.gnttabdev, blkdev->sring, 1);
        blkdev->cnt_map--;
        blkdev->sring = NULL;
    }

    /*
     * Unmap persistent grants before switching to the closed state
     * so the frontend can free them.
     *
     * In the !batch_maps case g_tree_destroy will take care of unmapping
     * the grant, but in the batch_maps case we need to iterate over every
     * region in persistent_regions and unmap it.
     */
    if (blkdev->feature_persistent) {
        g_tree_destroy(blkdev->persistent_gnts);
        assert(batch_maps || blkdev->persistent_gnt_count == 0);
        if (batch_maps) {
            blkdev->persistent_gnt_count = 0;
            g_slist_foreach(blkdev->persistent_regions,
                            (GFunc)remove_persistent_region, blkdev);
            g_slist_free(blkdev->persistent_regions);
        }
        blkdev->feature_persistent = false;
    }
}

static int blk_free(struct XenDevice *xendev)
{
    struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
    struct ioreq *ioreq;

    if (blkdev->blk || blkdev->sring) {
        blk_disconnect(xendev);
    }

    while (!QLIST_EMPTY(&blkdev->freelist)) {
        ioreq = QLIST_FIRST(&blkdev->freelist);
        QLIST_REMOVE(ioreq, list);
        qemu_iovec_destroy(&ioreq->v);
        g_free(ioreq);
    }

    g_free(blkdev->params);
    g_free(blkdev->mode);
    g_free(blkdev->type);
    g_free(blkdev->dev);
    g_free(blkdev->devtype);
    qemu_bh_delete(blkdev->bh);
    return 0;
}

static void blk_event(struct XenDevice *xendev)
{
    struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);

    qemu_bh_schedule(blkdev->bh);
}

struct XenDevOps xen_blkdev_ops = {
    .size       = sizeof(struct XenBlkDev),
    .flags      = DEVOPS_FLAG_NEED_GNTDEV,
    .alloc      = blk_alloc,
    .init       = blk_init,
    .initialise = blk_connect,
    .disconnect = blk_disconnect,
    .event      = blk_event,
    .free       = blk_free,
};