.. _vhost_user_proto:

===================
Vhost-user Protocol
===================

..
  Copyright 2014 Virtual Open Systems Sarl.
  Copyright 2019 Intel Corporation
  Licence: This work is licensed under the terms of the GNU GPL,
  version 2 or later.  See the COPYING file in the top-level
  directory.

.. contents:: Table of Contents

Introduction
============

This protocol aims to complement the ``ioctl`` interface used to
control the vhost implementation in the Linux kernel. It implements
the control plane needed to establish virtqueue sharing with a user
space process on the same host. It uses communication over a Unix
domain socket to share file descriptors in the ancillary data of the
message.

The protocol defines 2 sides of the communication, *front-end* and
*back-end*. The *front-end* is the application that shares its virtqueues, in
our case QEMU. The *back-end* is the consumer of the virtqueues.

In the current implementation QEMU is the *front-end*, and the *back-end*
is the external process consuming the virtio queues, for example a
software Ethernet switch running in user space, such as Snabbswitch,
or a block device back-end processing read & write to a virtual
disk. In order to facilitate interoperability between various back-end
implementations, it is recommended to follow the :ref:`Backend program
conventions <backend_conventions>`.

The *front-end* and *back-end* can be either a client (i.e. connecting) or
server (listening) in the socket communication.

Support for platforms other than Linux
--------------------------------------

While vhost-user was initially developed targeting Linux, nowadays it
is supported on any platform that provides the following features:

- A way for requesting shared memory represented by a file descriptor
  so it can be passed over a UNIX domain socket and then mapped by the
  other process.

- AF_UNIX sockets with SCM_RIGHTS, so QEMU and the other process can
  exchange messages through it, including ancillary data when needed.

- Either eventfd or pipe/pipe2. On platforms where eventfd is not
  available, QEMU will automatically fall back to pipe2 or, as a last
  resort, pipe. Each file descriptor will be used for receiving or
  sending events by reading or writing (respectively) an 8-byte value
  to the corresponding file descriptor. The 8-byte value itself has no
  meaning and should not be interpreted.

Message Specification
=====================

.. Note:: All numbers are in the machine native byte order.

A vhost-user message consists of 3 header fields and a payload.

+---------+-------+------+---------+
| request | flags | size | payload |
+---------+-------+------+---------+

Header
------

:request: 32-bit type of the request

:flags: 32-bit bit field

  - Lower 2 bits are the version (currently 0x01)
  - Bit 2 is the reply flag - needs to be sent on each reply from the back-end
  - Bit 3 is the need_reply flag - see :ref:`REPLY_ACK <reply_ack>` for
    details.

:size: 32-bit size of the payload
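
For illustration, a back-end might test and build the ``flags`` field as in
the following minimal sketch. The macro and function names are hypothetical;
only the bit positions come from the description above.

.. code:: c

  #include <stdbool.h>
  #include <stdint.h>

  #define VHOST_USER_HDR_VERSION      0x1u       /* lower 2 bits: version   */
  #define VHOST_USER_HDR_REPLY        (1u << 2)  /* set on every reply      */
  #define VHOST_USER_HDR_NEED_REPLY   (1u << 3)  /* request wants REPLY_ACK */

  /* True if the front-end asked for an explicit acknowledgement. */
  static bool vhost_user_needs_reply(uint32_t request_flags)
  {
      return (request_flags & VHOST_USER_HDR_NEED_REPLY) != 0;
  }

  /* Flags word to put into the header of a reply message. */
  static uint32_t vhost_user_reply_flags(void)
  {
      return VHOST_USER_HDR_VERSION | VHOST_USER_HDR_REPLY;
  }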

Payload
-------

Depending on the request type, **payload** can be:

A single 64-bit integer
^^^^^^^^^^^^^^^^^^^^^^^

+-----+
| u64 |
+-----+

:u64: a 64-bit unsigned integer

A vring state description
^^^^^^^^^^^^^^^^^^^^^^^^^

+-------+-----+
| index | num |
+-------+-----+

:index: a 32-bit index

:num: a 32-bit number

A vring descriptor index for split virtqueues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+-------------+---------------------+
| vring index | index in avail ring |
+-------------+---------------------+

:vring index: 32-bit index of the respective virtqueue

:index in avail ring: 32-bit value, of which currently only the lower 16
  bits are used:

  - Bits 0–15: Index of the next *Available Ring* descriptor that the
    back-end will process. This is a free-running index that is not
    wrapped by the ring size.
  - Bits 16–31: Reserved (set to zero)

Vring descriptor indices for packed virtqueues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+-------------+--------------------+
| vring index | descriptor indices |
+-------------+--------------------+

:vring index: 32-bit index of the respective virtqueue

:descriptor indices: 32-bit value:

  - Bits 0–14: Index of the next *Available Ring* descriptor that the
    back-end will process. This is a free-running index that is not
    wrapped by the ring size.
  - Bit 15: Driver (Available) Ring Wrap Counter
  - Bits 16–30: Index of the entry in the *Used Ring* where the back-end
    will place the next descriptor. This is a free-running index that
    is not wrapped by the ring size.
  - Bit 31: Device (Used) Ring Wrap Counter
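
As a sketch of how these bit fields could be unpacked by a back-end (the
struct and function names are illustrative, not part of the protocol):

.. code:: c

  #include <stdbool.h>
  #include <stdint.h>

  struct packed_vring_base {
      uint16_t avail_index;   /* bits 0-14  */
      bool     avail_wrap;    /* bit  15    */
      uint16_t used_index;    /* bits 16-30 */
      bool     used_wrap;     /* bit  31    */
  };

  static struct packed_vring_base unpack_packed_base(uint32_t v)
  {
      struct packed_vring_base b = {
          .avail_index = v & 0x7fff,
          .avail_wrap  = (v >> 15) & 1,
          .used_index  = (v >> 16) & 0x7fff,
          .used_wrap   = (v >> 31) & 1,
      };
      return b;
  }

  /* For split virtqueues only the lower 16 bits carry the avail index. */
  static uint16_t unpack_split_base(uint32_t v)
  {
      return (uint16_t)(v & 0xffff);
  }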

A vring address description
^^^^^^^^^^^^^^^^^^^^^^^^^^^

+-------+-------+------------+------+-----------+-----+
| index | flags | descriptor | used | available | log |
+-------+-------+------------+------+-----------+-----+

:index: a 32-bit vring index

:flags: a 32-bit vring flags

:descriptor: a 64-bit ring address of the vring descriptor table

:used: a 64-bit ring address of the vring used ring

:available: a 64-bit ring address of the vring available ring

:log: a 64-bit guest address for logging

Note that a ring address is an IOVA if ``VIRTIO_F_IOMMU_PLATFORM`` has
been negotiated. Otherwise it is a user address.

Memory region description
^^^^^^^^^^^^^^^^^^^^^^^^^

+---------------+------+--------------+-------------+
| guest address | size | user address | mmap offset |
+---------------+------+--------------+-------------+

:guest address: a 64-bit guest address of the region

:size: a 64-bit size

:user address: a 64-bit user address

:mmap offset: 64-bit offset where region starts in the mapped memory

When the ``VHOST_USER_PROTOCOL_F_XEN_MMAP`` protocol feature has been
successfully negotiated, the memory region description contains two extra
fields at the end.

+---------------+------+--------------+-------------+----------------+-------+
| guest address | size | user address | mmap offset | xen mmap flags | domid |
+---------------+------+--------------+-------------+----------------+-------+

:xen mmap flags: 32-bit bit field

  - Bit 0 is set for Xen foreign memory mapping.
  - Bit 1 is set for Xen grant memory mapping.
  - Bit 8 is set if the memory region can not be mapped in advance, and
    memory areas within this region must be mapped / unmapped only when
    required by the back-end. The back-end shouldn't try to map the entire
    region at once, as the front-end may not allow it. The back-end should
    rather map only the required amount of memory at once and unmap it
    after it is used.

:domid: a 32-bit Xen hypervisor specific domain id.

Single memory region description
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+---------+--------+
| padding | region |
+---------+--------+

:padding: 64-bit

A region is represented by a Memory region description.

Multiple Memory regions description
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+-------------+---------+---------+-----+---------+
| num regions | padding | region0 | ... | region7 |
+-------------+---------+---------+-----+---------+

:num regions: a 32-bit number of regions

:padding: 32-bit

A region is represented by a Memory region description.

Log description
^^^^^^^^^^^^^^^

+----------+------------+
| log size | log offset |
+----------+------------+

:log size: size of area used for logging

:log offset: offset from start of supplied file descriptor where
             logging starts (i.e. where guest address 0 would be
             logged)

An IOTLB message
^^^^^^^^^^^^^^^^

+------+------+--------------+-------------------+------+
| iova | size | user address | permissions flags | type |
+------+------+--------------+-------------------+------+

:iova: a 64-bit I/O virtual address programmed by the guest

:size: a 64-bit size

:user address: a 64-bit user address

:permissions flags: an 8-bit value:
  - 0: No access
  - 1: Read access
  - 2: Write access
  - 3: Read/Write access

:type: an 8-bit IOTLB message type:
  - 1: IOTLB miss
  - 2: IOTLB update
  - 3: IOTLB invalidate
  - 4: IOTLB access fail
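
A minimal sketch of a payload matching the field list above, using
illustrative names (implementations typically reuse ``struct
vhost_iotlb_msg`` from the Linux vhost headers):

.. code:: c

  #include <stdint.h>

  struct iotlb_msg {
      uint64_t iova;   /* I/O virtual address programmed by the guest */
      uint64_t size;
      uint64_t uaddr;  /* user address backing the mapping            */
      uint8_t  perm;   /* 0 none, 1 read, 2 write, 3 read/write       */
      uint8_t  type;   /* 1 miss, 2 update, 3 invalidate, 4 fail      */
  };

  /* Example: an update entry making [iova, iova + size) readable/writable. */
  static struct iotlb_msg iotlb_update(uint64_t iova, uint64_t size,
                                       uint64_t uaddr)
  {
      struct iotlb_msg m = {
          .iova = iova, .size = size, .uaddr = uaddr,
          .perm = 3 /* RW */, .type = 2 /* update */,
      };
      return m;
  }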

Virtio device config space
^^^^^^^^^^^^^^^^^^^^^^^^^^

+--------+------+-------+---------+
| offset | size | flags | payload |
+--------+------+-------+---------+

:offset: a 32-bit offset of virtio device's configuration space

:size: a 32-bit configuration space access size in bytes

:flags: a 32-bit value:
  - 0: Vhost front-end messages used for writable fields
  - 1: Vhost front-end messages used for live migration

:payload: Size bytes array holding the contents of the virtio
          device's configuration space

Vring area description
^^^^^^^^^^^^^^^^^^^^^^

+-----+------+--------+
| u64 | size | offset |
+-----+------+--------+

:u64: a 64-bit integer containing the vring index and flags

:size: a 64-bit size of this area

:offset: a 64-bit offset of this area from the start of the
         supplied file descriptor

Inflight description
^^^^^^^^^^^^^^^^^^^^

+-----------+-------------+------------+------------+
| mmap size | mmap offset | num queues | queue size |
+-----------+-------------+------------+------------+

:mmap size: a 64-bit size of area to track inflight I/O

:mmap offset: a 64-bit offset of this area from the start
              of the supplied file descriptor

:num queues: a 16-bit number of virtqueues

:queue size: a 16-bit size of virtqueues

VhostUserShared
^^^^^^^^^^^^^^^

+------+
| UUID |
+------+

:UUID: 16 bytes UUID, whose first three components (a 32-bit value, then
       two 16-bit values) are stored in big endian.

Device state transfer parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+--------------------+-----------------+
| transfer direction | migration phase |
+--------------------+-----------------+

:transfer direction: a 32-bit enum, describing the direction in which
  the state is transferred:

  - 0: Save: Transfer the state from the back-end to the front-end,
    which happens on the source side of migration
  - 1: Load: Transfer the state from the front-end to the back-end,
    which happens on the destination side of migration

:migration phase: a 32-bit enum, describing the state in which the VM
  guest and devices are:

  - 0: Stopped (in the period after the transfer of memory-mapped
    regions before switch-over to the destination): The VM guest is
    stopped, and the vhost-user device is suspended (see
    :ref:`Suspended device state <suspended_device_state>`).

  In the future, additional phases might be added e.g. to allow
  iterative migration while the device is running.

C structure
-----------

In QEMU the vhost-user message is implemented with the following struct:

.. code:: c

  typedef struct VhostUserMsg {
      VhostUserRequest request;
      uint32_t flags;
      uint32_t size;
      union {
          uint64_t u64;
          struct vhost_vring_state state;
          struct vhost_vring_addr addr;
          VhostUserMemory memory;
          VhostUserLog log;
          struct vhost_iotlb_msg iotlb;
          VhostUserConfig config;
          VhostUserVringArea area;
          VhostUserInflight inflight;
      };
  } QEMU_PACKED VhostUserMsg;
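
A back-end typically reads such a message in two steps: first the fixed-size
header, then ``size`` bytes of payload. The following is only a hedged
sketch (it ignores ancillary data, partial reads, and the real maximum
payload size; the names are illustrative):

.. code:: c

  #include <stdint.h>
  #include <unistd.h>

  #define VHOST_USER_HDR_SIZE    12u    /* request + flags + size        */
  #define VHOST_USER_MAX_PAYLOAD 4096u  /* illustrative bound, not spec  */

  struct vhost_user_hdr {
      uint32_t request;
      uint32_t flags;
      uint32_t size;
  };

  static int read_message(int sock, struct vhost_user_hdr *hdr, void *payload)
  {
      if (read(sock, hdr, VHOST_USER_HDR_SIZE) != VHOST_USER_HDR_SIZE) {
          return -1;
      }
      if (hdr->size > VHOST_USER_MAX_PAYLOAD) {
          return -1;                    /* malformed or unsupported */
      }
      if (hdr->size &&
          read(sock, payload, hdr->size) != (ssize_t)hdr->size) {
          return -1;
      }
      return 0;
  }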

Communication
=============

The protocol for vhost-user is based on the existing implementation of
vhost for the Linux Kernel. Most messages that can be sent via the
Unix domain socket implementing vhost-user have an equivalent ioctl to
the kernel implementation.

The communication consists of the *front-end* sending message requests and
the *back-end* sending message replies. Most of the requests don't require
replies. Here is a list of the ones that do:

* ``VHOST_USER_GET_FEATURES``
* ``VHOST_USER_GET_PROTOCOL_FEATURES``
* ``VHOST_USER_GET_VRING_BASE``
* ``VHOST_USER_SET_LOG_BASE`` (if ``VHOST_USER_PROTOCOL_F_LOG_SHMFD``)
* ``VHOST_USER_GET_INFLIGHT_FD`` (if ``VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD``)

.. seealso::

   :ref:`REPLY_ACK <reply_ack>`
       The section on ``REPLY_ACK`` protocol extension.

There are several messages that the front-end sends with file descriptors passed
in the ancillary data:

* ``VHOST_USER_ADD_MEM_REG``
* ``VHOST_USER_SET_MEM_TABLE``
* ``VHOST_USER_SET_LOG_BASE`` (if ``VHOST_USER_PROTOCOL_F_LOG_SHMFD``)
* ``VHOST_USER_SET_LOG_FD``
* ``VHOST_USER_SET_VRING_KICK``
* ``VHOST_USER_SET_VRING_CALL``
* ``VHOST_USER_SET_VRING_ERR``
* ``VHOST_USER_SET_BACKEND_REQ_FD`` (previous name ``VHOST_USER_SET_SLAVE_REQ_FD``)
* ``VHOST_USER_SET_INFLIGHT_FD`` (if ``VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD``)
* ``VHOST_USER_SET_DEVICE_STATE_FD``
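
As a rough illustration of how such a message can carry a file descriptor
in the ancillary data, the following is a generic POSIX ``SCM_RIGHTS``
sketch, not QEMU's actual implementation:

.. code:: c

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Send `len` bytes of `msg` plus one file descriptor over `sock`. */
  static ssize_t send_with_fd(int sock, const void *msg, size_t len, int fd)
  {
      struct iovec iov = { .iov_base = (void *)msg, .iov_len = len };
      char control[CMSG_SPACE(sizeof(fd))];
      struct msghdr mh = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = control, .msg_controllen = sizeof(control),
      };
      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&mh);

      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS;
      cmsg->cmsg_len = CMSG_LEN(sizeof(fd));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));

      return sendmsg(sock, &mh, 0);
  }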

If the *front-end* is unable to send the full message or receives a wrong
reply, it will close the connection. An optional reconnection mechanism
can be implemented.

If the *back-end* detects some error such as incompatible features, it may also
close the connection. This should only happen in exceptional circumstances.

Any protocol extensions are gated by protocol feature bits, which
allows full backwards compatibility on both front-end and back-end. As
older back-ends don't support negotiating protocol features, a feature
bit was dedicated for this purpose::

  #define VHOST_USER_F_PROTOCOL_FEATURES 30

Note that VHOST_USER_F_PROTOCOL_FEATURES is the UNUSED (30) feature
bit defined in `VIRTIO 1.1 6.3 Legacy Interface: Reserved Feature Bits
<https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-4130003>`_.
VIRTIO devices do not advertise this feature bit and therefore VIRTIO
drivers cannot negotiate it.

This reserved feature bit was reused by the vhost-user protocol to add
vhost-user protocol feature negotiation in a backwards compatible
fashion. Old vhost-user front-end and back-end implementations continue to
work even though they are not aware of vhost-user protocol feature
negotiation.
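
In a back-end, checking for this bit might look like the following sketch
(the handler and variable names are illustrative only):

.. code:: c

  #include <stdbool.h>
  #include <stdint.h>

  #define VHOST_USER_F_PROTOCOL_FEATURES 30

  /* Called when the front-end sends VHOST_USER_SET_FEATURES. */
  static void handle_set_features(uint64_t features, bool *protocol_negotiation)
  {
      *protocol_negotiation =
          (features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES)) != 0;

      /* Without protocol feature negotiation the legacy behaviour applies;
       * see the "Ring states" section below. */
  }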

Ring states
-----------

Rings have two independent states: started/stopped, and enabled/disabled.

* While a ring is stopped, the back-end must not process the ring at
  all, regardless of whether it is enabled or disabled. The
  enabled/disabled state should still be tracked, though, so it can come
  into effect once the ring is started.

* started and disabled: The back-end must process the ring without
  causing any side effects. For example, for a networking device,
  in the disabled state the back-end must not supply any new RX packets,
  but must process and discard any TX packets.

* started and enabled: The back-end must process the ring normally, i.e.
  process all requests and execute them.
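
The two states could be tracked per ring roughly as in the following sketch
(the names are illustrative only; the paragraphs below describe when each
transition happens):

.. code:: c

  #include <stdbool.h>

  struct ring_state {
      bool started;   /* kick seen, GET_VRING_BASE not yet received   */
      bool enabled;   /* toggled by VHOST_USER_SET_VRING_ENABLE       */
  };

  /* Whether descriptors may be fetched from this ring at all. */
  static bool ring_may_process(const struct ring_state *r)
  {
      return r->started;
  }

  /* Whether processing may have guest-visible side effects. */
  static bool ring_fully_active(const struct ring_state *r)
  {
      return r->started && r->enabled;
  }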

Each ring is initialized in a stopped and disabled state. The back-end
must start a ring upon receiving a kick (that is, detecting that file
descriptor is readable) on the descriptor specified by
``VHOST_USER_SET_VRING_KICK`` or receiving the in-band message
``VHOST_USER_VRING_KICK`` if negotiated, and stop a ring upon receiving
``VHOST_USER_GET_VRING_BASE``.

Rings can be enabled or disabled by ``VHOST_USER_SET_VRING_ENABLE``.

In addition, upon receiving a ``VHOST_USER_SET_FEATURES`` message from
the front-end without ``VHOST_USER_F_PROTOCOL_FEATURES`` set, the
back-end must enable all rings immediately.

While processing the rings (whether they are enabled or not), the back-end
must support changing some configuration aspects on the fly.

.. _suspended_device_state:

Suspended device state
^^^^^^^^^^^^^^^^^^^^^^

While all vrings are stopped, the device is *suspended*. In addition to
not processing any vring (because they are stopped), the device must:

* not write to any guest memory regions,
* not send any notifications to the guest,
* not send any messages to the front-end,
* still process and reply to messages from the front-end.

Multiple queue support
----------------------

Many devices have a fixed number of virtqueues. In this case the front-end
already knows the number of available virtqueues without communicating with the
back-end.

Some devices do not have a fixed number of virtqueues. Instead the maximum
number of virtqueues is chosen by the back-end. The number can depend on host
resource availability or back-end implementation details. Such devices are called
multiple queue devices.

Multiple queue support allows the back-end to advertise the maximum number of
queues. This is treated as a protocol extension, hence the back-end has to
implement protocol features first. The multiple queues feature is supported
only when the protocol feature ``VHOST_USER_PROTOCOL_F_MQ`` (bit 0) is set.

The maximum number of queues the back-end supports can be queried with the
message ``VHOST_USER_GET_QUEUE_NUM``. The front-end should not request more
queues than that.

As all queues share one connection, the front-end uses a unique index for each
queue in the sent message to identify a specified queue.

The front-end enables queues by sending message ``VHOST_USER_SET_VRING_ENABLE``.
vhost-user-net has historically automatically enabled the first queue pair.

Back-ends should always implement the ``VHOST_USER_PROTOCOL_F_MQ`` protocol
feature, even for devices with a fixed number of virtqueues, since it is simple
to implement and offers a degree of introspection.

Front-ends must not rely on the ``VHOST_USER_PROTOCOL_F_MQ`` protocol feature for
devices with a fixed number of virtqueues. Only true multiqueue devices
require this protocol feature.

Migration
---------

During live migration, the front-end may need to track the modifications
the back-end makes to the memory mapped regions. The front-end should mark
the dirty pages in a log. Once it complies with this logging, it may
declare the ``VHOST_F_LOG_ALL`` vhost feature.

To start/stop logging of data/used ring writes, the front-end may send
messages ``VHOST_USER_SET_FEATURES`` with ``VHOST_F_LOG_ALL`` and
``VHOST_USER_SET_VRING_ADDR`` with ``VHOST_VRING_F_LOG`` in ring's
flags set to 1/0, respectively.

All the modifications to memory pointed by vring "descriptor" should
be marked. Modifications to "used" vring should be marked if
``VHOST_VRING_F_LOG`` is part of ring's flags.

Dirty pages are of size::

  #define VHOST_LOG_PAGE 0x1000

The log memory fd is provided in the ancillary data of
``VHOST_USER_SET_LOG_BASE`` message when the back-end has
``VHOST_USER_PROTOCOL_F_LOG_SHMFD`` protocol feature.

The size of the log is supplied as part of ``VhostUserMsg`` which
should be large enough to cover all known guest addresses. Log starts
at the supplied offset in the supplied file descriptor. The log
covers from address 0 to the maximum of guest regions. In pseudo-code,
to mark page at ``addr`` as dirty::

  page = addr / VHOST_LOG_PAGE
  log[page / 8] |= 1 << page % 8

Where ``addr`` is the guest physical address.

Use atomic operations, as the log may be concurrently manipulated.
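
A C sketch of that pseudo-code, using GCC/Clang atomic builtins (the
function name and the way the log mapping is obtained are illustrative):

.. code:: c

  #include <stdint.h>

  #define VHOST_LOG_PAGE 0x1000

  /* Mark the dirty-log bit for the page containing guest address `addr`.
   * `log` is the byte array mapped from the log fd at the supplied offset. */
  static void log_mark_dirty(uint8_t *log, uint64_t addr)
  {
      uint64_t page = addr / VHOST_LOG_PAGE;

      /* Atomic OR, since the front-end may scan and clear the log
       * concurrently. */
      __atomic_fetch_or(&log[page / 8], (uint8_t)(1u << (page % 8)),
                        __ATOMIC_RELAXED);
  }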

Note that when logging modifications to the used ring (when
``VHOST_VRING_F_LOG`` is set for this ring), ``log_guest_addr`` should
be used to calculate the log offset: the write to first byte of the
used ring is logged at this offset from log start. Also note that this
value might be outside the legal guest physical address range
(i.e. does not have to be covered by the ``VhostUserMemory`` table), but
the bit offset of the last byte of the ring must fall within the size
supplied by ``VhostUserLog``.

``VHOST_USER_SET_LOG_FD`` is an optional message with an eventfd in
ancillary data, it may be used to inform the front-end that the log has
been modified.

Once the source has finished migration, rings will be stopped by the
source (:ref:`Suspended device state <suspended_device_state>`). No
further update must be done before rings are restarted.

In postcopy migration the back-end is started before all the memory has
been received from the source host, and care must be taken to avoid
accessing pages that have yet to be received. The back-end opens a
'userfault'-fd and registers the memory with it; this fd is then
passed back over to the front-end. The front-end services requests on the
userfaultfd for pages that are accessed and when the page is available
it performs WAKE ioctl's on the userfaultfd to wake the stalled
back-end. The front-end indicates support for this via the
``VHOST_USER_PROTOCOL_F_PAGEFAULT`` feature.

.. _migrating_backend_state:

Migrating back-end state
^^^^^^^^^^^^^^^^^^^^^^^^

Migrating device state involves transferring the state from one
back-end, called the source, to another back-end, called the
destination. After migration, the destination transparently resumes
operation without requiring the driver to re-initialize the device at
the VIRTIO level. If the migration fails, then the source can
transparently resume operation until another migration attempt is made.

Generally, the front-end is connected to a virtual machine guest (which
contains the driver), which has its own state to transfer between source
and destination, and therefore will have an implementation-specific
mechanism to do so. The ``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature
provides functionality to have the front-end include the back-end's
state in this transfer operation so the back-end does not need to
implement its own mechanism, and so the virtual machine may have its
complete state, including vhost-user devices' states, contained within a
single stream of data.

To do this, the back-end state is transferred from back-end to front-end
on the source side, and vice versa on the destination side. This
transfer happens over a channel that is negotiated using the
``VHOST_USER_SET_DEVICE_STATE_FD`` message. This message has two
parameters:

* Direction of transfer: On the source, the data is saved, transferring
  it from the back-end to the front-end. On the destination, the data
  is loaded, transferring it from the front-end to the back-end.

* Migration phase: Currently, the only supported phase is the period
  after the transfer of memory-mapped regions before switch-over to the
  destination, when both the source and destination devices are
  suspended (:ref:`Suspended device state <suspended_device_state>`).
  In the future, additional phases might be supported to allow iterative
  migration while the device is running.

The nature of the channel is implementation-defined, but it must
generally behave like a pipe: The writing end will write all the data it
has into it, signalling the end of data by closing its end. The reading
end must read all of this data (until encountering the end of file) and
process it; a minimal sketch of both ends is shown after the following
list.

* When saving, the writing end is the source back-end, and the reading
  end is the source front-end. After reading the state data from the
  channel, the source front-end must transfer it to the destination
  front-end through an implementation-defined mechanism.

* When loading, the writing end is the destination front-end, and the
  reading end is the destination back-end. After reading the state data
  from the channel, the destination back-end must deserialize its
  internal state from that data and set itself up to allow the driver to
  seamlessly resume operation on the VIRTIO level.
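
For instance, the two ends might be implemented with plain POSIX reads and
writes; this is only a sketch under the assumption that the channel is an
ordinary pipe-like file descriptor:

.. code:: c

  #include <unistd.h>

  /* Writing end: push the whole serialized state, then close to signal EOF. */
  static int send_state(int fd, const void *buf, size_t len)
  {
      const char *p = buf;
      while (len > 0) {
          ssize_t n = write(fd, p, len);
          if (n < 0) {
              return -1;
          }
          p += n;
          len -= n;
      }
      return close(fd);
  }

  /* Reading end: consume everything until EOF, handing chunks to a callback. */
  static int recv_state(int fd, void (*consume)(const void *, size_t))
  {
      char chunk[4096];
      ssize_t n;
      while ((n = read(fd, chunk, sizeof(chunk))) > 0) {
          consume(chunk, (size_t)n);
      }
      close(fd);
      return n == 0 ? 0 : -1;
  }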

Seamlessly resuming operation means that the migration must be
transparent to the guest driver, which operates on the VIRTIO level.
This driver will not perform any re-initialization steps, but continue
to use the device as if no migration had occurred. The vhost-user
front-end, however, will re-initialize the vhost state on the
destination, following the usual protocol for establishing a connection
to a vhost-user back-end: This includes, for example, setting up memory
mappings and kick and call FDs as necessary, negotiating protocol
features, or setting the initial vring base indices (to the same value
as on the source side, so that operation can resume).

Both on the source and on the destination side, after the respective
front-end has seen all data transferred (when the transfer FD has been
closed), it sends the ``VHOST_USER_CHECK_DEVICE_STATE`` message to
verify that data transfer was successful in the back-end, too. The
back-end responds once it knows whether the transfer and processing was
successful or not.

Memory access
-------------

The front-end sends a list of vhost memory regions to the back-end using the
``VHOST_USER_SET_MEM_TABLE`` message. Each region has two base
addresses: a guest address and a user address.

Messages contain guest addresses and/or user addresses to reference locations
within the shared memory. The mapping of these addresses works as follows.

User addresses map to the vhost memory region containing that user address.

When the ``VIRTIO_F_IOMMU_PLATFORM`` feature has not been negotiated:

* Guest addresses map to the vhost memory region containing that guest
  address.

When the ``VIRTIO_F_IOMMU_PLATFORM`` feature has been negotiated:

* Guest addresses are also called I/O virtual addresses (IOVAs). They are
  translated to user addresses via the IOTLB.

* The vhost memory region guest address is not used.
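
When no IOMMU is in use, a back-end typically resolves a guest address to a
pointer in its own mapping by scanning the region table; the following is an
illustrative sketch (the names are not part of the specification):

.. code:: c

  #include <stddef.h>
  #include <stdint.h>

  struct mem_region {
      uint64_t guest_addr;   /* guest physical base          */
      uint64_t size;
      void    *mmap_base;    /* local mapping of this region */
  };

  /* Translate a guest physical address into a local pointer, or NULL. */
  static void *gpa_to_va(const struct mem_region *regions, size_t n,
                         uint64_t gpa)
  {
      for (size_t i = 0; i < n; i++) {
          const struct mem_region *r = &regions[i];
          if (gpa >= r->guest_addr && gpa < r->guest_addr + r->size) {
              return (char *)r->mmap_base + (gpa - r->guest_addr);
          }
      }
      return NULL;
  }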

IOMMU support
-------------

When the ``VIRTIO_F_IOMMU_PLATFORM`` feature has been negotiated, the
front-end sends IOTLB entries update & invalidation by sending
``VHOST_USER_IOTLB_MSG`` requests to the back-end with a ``struct
vhost_iotlb_msg`` as payload. For update events, the ``iotlb`` payload
has to be filled with the update message type (2), the I/O virtual
address, the size, the user virtual address, and the permissions
flags. Addresses and size must be within vhost memory regions set via
the ``VHOST_USER_SET_MEM_TABLE`` request. For invalidation events, the
``iotlb`` payload has to be filled with the invalidation message type
(3), the I/O virtual address and the size. On success, the back-end is
expected to reply with a zero payload, non-zero otherwise.

The back-end relies on the back-end communication channel (see :ref:`Back-end
communication <backend_communication>` section below) to send IOTLB miss
and access failure events, by sending ``VHOST_USER_BACKEND_IOTLB_MSG``
requests to the front-end with a ``struct vhost_iotlb_msg`` as
payload. For miss events, the iotlb payload has to be filled with the
miss message type (1), the I/O virtual address and the permissions
flags. For access failure events, the iotlb payload has to be filled
with the access failure message type (4), the I/O virtual address and
the permissions flags. For synchronization purposes, the back-end may
rely on the reply-ack feature, so the front-end may send a reply when
the operation is completed if the reply-ack feature is negotiated and
the back-end requests a reply. For miss events, a completed operation
means either that the front-end sent an update message containing the
IOTLB entry with the requested address and permission, or that the
front-end sent nothing if the IOTLB miss message is invalid (invalid
IOVA or permission).

The front-end isn't expected to take the initiative to send IOTLB update
messages, as the back-end sends IOTLB miss messages for the guest virtual
memory areas it needs to access.
|
|
|
|
|
|
2022-03-21 18:30:30 +03:00
|
|
|
|
.. _backend_communication:

Back-end communication
----------------------

An optional communication channel is provided if the back-end declares
the ``VHOST_USER_PROTOCOL_F_BACKEND_REQ`` protocol feature, to allow the
back-end to make requests to the front-end.

The fd is provided via ``VHOST_USER_SET_BACKEND_REQ_FD`` ancillary data.

A back-end may then send ``VHOST_USER_BACKEND_*`` messages to the front-end
using this fd communication channel.

If the ``VHOST_USER_PROTOCOL_F_BACKEND_SEND_FD`` protocol feature is
negotiated, the back-end can send file descriptors (at most 8 descriptors in
each message) to the front-end via ancillary data using this fd communication
channel.
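
Both the main channel and this back-end channel carry file descriptors as
``SCM_RIGHTS`` ancillary data on the Unix domain socket. A minimal sketch of
how a message with attached descriptors can be sent is shown below; the
helper name is ours, and the caller is assumed to have already built the
vhost-user message in ``buf``.

.. code:: c

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Send 'len' bytes from 'buf' with up to 8 file descriptors attached. */
  static int send_with_fds(int sock, const void *buf, size_t len,
                           const int *fds, size_t nfds)
  {
      char control[CMSG_SPACE(8 * sizeof(int))] = { 0 };
      struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
      struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

      if (nfds > 0) {
          struct cmsghdr *cmsg;

          msg.msg_control = control;
          msg.msg_controllen = CMSG_SPACE(nfds * sizeof(int));
          cmsg = CMSG_FIRSTHDR(&msg);
          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type = SCM_RIGHTS;
          cmsg->cmsg_len = CMSG_LEN(nfds * sizeof(int));
          memcpy(CMSG_DATA(cmsg), fds, nfds * sizeof(int));
      }

      return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }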

Inflight I/O tracking
---------------------

To support reconnecting after restart or crash, the back-end may need to
resubmit inflight I/Os. If the virtqueue is processed in order, we can
easily achieve that by getting the inflight descriptors from the
descriptor table (split virtqueue) or descriptor ring (packed
virtqueue). However, this does not work when descriptors are processed
out of order, because entries holding the information of inflight
descriptors in the available ring (split virtqueue) or descriptor
ring (packed virtqueue) might be overwritten by new entries. To solve
this problem, the back-end needs to allocate an extra buffer to store the
information of inflight descriptors and share it with the front-end for
persistence. ``VHOST_USER_GET_INFLIGHT_FD`` and
``VHOST_USER_SET_INFLIGHT_FD`` are used to transfer this buffer
between front-end and back-end. The format of this buffer is described
below:

+---------------+---------------+-----+---------------+
| queue0 region | queue1 region | ... | queueN region |
+---------------+---------------+-----+---------------+

N is the number of available virtqueues. The back-end could get it from the
num queues field of ``VhostUserInflight``.

For a split virtqueue, the queue region can be implemented as:

.. code:: c

  typedef struct DescStateSplit {
      /* Indicate whether this descriptor is inflight or not.
       * Only available for head-descriptor. */
      uint8_t inflight;

      /* Padding */
      uint8_t padding[5];

      /* Maintain a list for the last batch of used descriptors.
       * Only available when batching is used for submitting */
      uint16_t next;

      /* Used to preserve the order of fetching available descriptors.
       * Only available for head-descriptor. */
      uint64_t counter;
  } DescStateSplit;

  typedef struct QueueRegionSplit {
      /* The feature flags of this region. Now it's initialized to 0. */
      uint64_t features;

      /* The version of this region. It's 1 currently.
       * Zero value indicates an uninitialized buffer */
      uint16_t version;

      /* The size of DescStateSplit array. It's equal to the virtqueue size.
       * The back-end could get it from queue size field of VhostUserInflight. */
      uint16_t desc_num;

      /* The head of the list that tracks the last batch of used descriptors. */
      uint16_t last_batch_head;

      /* Store the idx value of used ring */
      uint16_t used_idx;

      /* Used to track the state of each descriptor in descriptor table */
      DescStateSplit desc[];
  } QueueRegionSplit;

To track inflight I/O, the queue region should be processed as follows:

When receiving available buffers from the driver (a C sketch of these
steps follows the list):

#. Get the next available head-descriptor index from available ring, ``i``

#. Set ``desc[i].counter`` to the value of global counter

#. Increase global counter by 1

#. Set ``desc[i].inflight`` to 1
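
A minimal sketch of the bookkeeping above (steps 2-4), using the structures
defined earlier in this section; obtaining the head index ``i`` from the
available ring (step 1) is left to the caller, and the helper name and the
global counter variable are ours:

.. code:: c

  static uint64_t global_counter;

  static void inflight_split_record_avail(QueueRegionSplit *region, uint16_t i)
  {
      region->desc[i].counter = global_counter++;  /* steps 2 and 3 */
      region->desc[i].inflight = 1;                /* step 4 */
  }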

When supplying used buffers to the driver (a C sketch follows the list):

1. Get corresponding used head-descriptor index, ``i``

2. Set ``desc[i].next`` to ``last_batch_head``

3. Set ``last_batch_head`` to ``i``

#. Steps 1,2,3 may be performed repeatedly if batching is possible

#. Increase the ``idx`` value of used ring by the size of the batch

#. Set the ``inflight`` field of each ``DescStateSplit`` entry in the batch to 0

#. Set ``used_idx`` to the ``idx`` value of used ring
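
A sketch of these used-buffer steps for one batch of head indices; the
helper name and the ``heads``/``n`` parameters are ours:

.. code:: c

  static void inflight_split_record_used(QueueRegionSplit *region,
                                         const uint16_t *heads, uint16_t n,
                                         uint16_t used_ring_idx)
  {
      for (uint16_t k = 0; k < n; k++) {       /* steps 1-3, batched */
          region->desc[heads[k]].next = region->last_batch_head;
          region->last_batch_head = heads[k];
      }
      /* The used ring's idx is increased by n here (step 5). */
      for (uint16_t k = 0; k < n; k++) {       /* step 6 */
          region->desc[heads[k]].inflight = 0;
      }
      region->used_idx = used_ring_idx;        /* step 7 */
  }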

When reconnecting (a C sketch follows the list):

#. If the value of ``used_idx`` does not match the ``idx`` value of
   used ring (which means the ``inflight`` field of ``DescStateSplit``
   entries in the last batch may be incorrect),

   a. Subtract the value of ``used_idx`` from the ``idx`` value of
      used ring to get last batch size of ``DescStateSplit`` entries

   #. Set the ``inflight`` field of each ``DescStateSplit`` entry to 0 in last batch
      list which starts from ``last_batch_head``

   #. Set ``used_idx`` to the ``idx`` value of used ring

#. Resubmit inflight ``DescStateSplit`` entries in order of their
   counter value
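
A sketch of this recovery, assuming ``used_ring_idx`` is the ``idx`` value
read back from the used ring; the helper name is ours:

.. code:: c

  static void inflight_split_reconnect(QueueRegionSplit *region,
                                       uint16_t used_ring_idx)
  {
      if (region->used_idx != used_ring_idx) {
          uint16_t batch = used_ring_idx - region->used_idx;
          uint16_t i = region->last_batch_head;

          while (batch--) {                    /* clear the stale batch */
              region->desc[i].inflight = 0;
              i = region->desc[i].next;
          }
          region->used_idx = used_ring_idx;
      }
      /* Then resubmit all entries still marked inflight, in increasing
       * order of their counter value. */
  }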

For a packed virtqueue, the queue region can be implemented as:

.. code:: c

  typedef struct DescStatePacked {
      /* Indicate whether this descriptor is inflight or not.
       * Only available for head-descriptor. */
      uint8_t inflight;

      /* Padding */
      uint8_t padding;

      /* Link to the next free entry */
      uint16_t next;

      /* Link to the last entry of descriptor list.
       * Only available for head-descriptor. */
      uint16_t last;

      /* The length of descriptor list.
       * Only available for head-descriptor. */
      uint16_t num;

      /* Used to preserve the order of fetching available descriptors.
       * Only available for head-descriptor. */
      uint64_t counter;

      /* The buffer id */
      uint16_t id;

      /* The descriptor flags */
      uint16_t flags;

      /* The buffer length */
      uint32_t len;

      /* The buffer address */
      uint64_t addr;
  } DescStatePacked;

  typedef struct QueueRegionPacked {
      /* The feature flags of this region. Now it's initialized to 0. */
      uint64_t features;

      /* The version of this region. It's 1 currently.
       * Zero value indicates an uninitialized buffer */
      uint16_t version;

      /* The size of DescStatePacked array. It's equal to the virtqueue size.
       * The back-end could get it from queue size field of VhostUserInflight. */
      uint16_t desc_num;

      /* The head of free DescStatePacked entry list */
      uint16_t free_head;

      /* The old head of free DescStatePacked entry list */
      uint16_t old_free_head;

      /* The used index of descriptor ring */
      uint16_t used_idx;

      /* The old used index of descriptor ring */
      uint16_t old_used_idx;

      /* Device ring wrap counter */
      uint8_t used_wrap_counter;

      /* The old device ring wrap counter */
      uint8_t old_used_wrap_counter;

      /* Padding */
      uint8_t padding[7];

      /* Used to track the state of each descriptor fetched from descriptor ring */
      DescStatePacked desc[];
  } QueueRegionPacked;

To track inflight I/O, the queue region should be processed as follows:

When receiving available buffers from the driver (a C sketch follows the
list):

#. Get the next available descriptor entry from descriptor ring, ``d``

#. If ``d`` is head descriptor,

   a. Set ``desc[old_free_head].num`` to 0

   #. Set ``desc[old_free_head].counter`` to the value of global counter

   #. Increase global counter by 1

   #. Set ``desc[old_free_head].inflight`` to 1

#. If ``d`` is last descriptor, set ``desc[old_free_head].last`` to
   ``free_head``

#. Increase ``desc[old_free_head].num`` by 1

#. Set ``desc[free_head].addr``, ``desc[free_head].len``,
   ``desc[free_head].flags``, ``desc[free_head].id`` to ``d.addr``,
   ``d.len``, ``d.flags``, ``d.id``

#. Set ``free_head`` to ``desc[free_head].next``

#. If ``d`` is last descriptor, set ``old_free_head`` to ``free_head``
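
A sketch of the bookkeeping above for one descriptor fetched from the
descriptor ring; the helper name, the global counter and the way ``d``'s
fields are passed in are ours:

.. code:: c

  #include <stdbool.h>

  static uint64_t packed_counter;

  static void inflight_packed_record_avail(QueueRegionPacked *r,
                                           uint64_t addr, uint32_t len,
                                           uint16_t id, uint16_t flags,
                                           bool is_head, bool is_last)
  {
      if (is_head) {                                 /* step 2 */
          r->desc[r->old_free_head].num = 0;
          r->desc[r->old_free_head].counter = packed_counter++;
          r->desc[r->old_free_head].inflight = 1;
      }
      if (is_last) {                                 /* step 3 */
          r->desc[r->old_free_head].last = r->free_head;
      }
      r->desc[r->old_free_head].num++;               /* step 4 */

      r->desc[r->free_head].addr = addr;             /* step 5 */
      r->desc[r->free_head].len = len;
      r->desc[r->free_head].flags = flags;
      r->desc[r->free_head].id = id;
      r->free_head = r->desc[r->free_head].next;     /* step 6 */

      if (is_last) {                                 /* step 7 */
          r->old_free_head = r->free_head;
      }
  }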

When supplying used buffers to the driver (a C sketch follows the list):

1. Get corresponding used head-descriptor entry from descriptor ring,
   ``d``

2. Get corresponding ``DescStatePacked`` entry, ``e``

3. Set ``desc[e.last].next`` to ``free_head``

4. Set ``free_head`` to the index of ``e``

#. Steps 1,2,3,4 may be performed repeatedly if batching is possible

#. Increase ``used_idx`` by the size of the batch and update
   ``used_wrap_counter`` if needed

#. Update ``d.flags``

#. Set the ``inflight`` field of each head ``DescStatePacked`` entry
   in the batch to 0

#. Set ``old_free_head``, ``old_used_idx``, ``old_used_wrap_counter``
   to ``free_head``, ``used_idx``, ``used_wrap_counter``
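
A sketch of steps 1-4 and 8-9 above for one batch; advancing the used
index/wrap counter and updating ``d.flags`` (steps 6 and 7) are
device-specific and only hinted at. The helper name and the ``entries``
array of ``DescStatePacked`` indices are ours:

.. code:: c

  static void inflight_packed_record_used(QueueRegionPacked *r,
                                          const uint16_t *entries, uint16_t n)
  {
      for (uint16_t k = 0; k < n; k++) {   /* steps 1-4, per entry */
          uint16_t e = entries[k];

          r->desc[r->desc[e].last].next = r->free_head;
          r->free_head = e;
      }
      /* ... used_idx/used_wrap_counter are advanced and the descriptor
       * flags are updated here (steps 6 and 7) ... */
      for (uint16_t k = 0; k < n; k++) {   /* step 8 */
          r->desc[entries[k]].inflight = 0;
      }
      r->old_free_head = r->free_head;     /* step 9 */
      r->old_used_idx = r->used_idx;
      r->old_used_wrap_counter = r->used_wrap_counter;
  }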

When reconnecting:

#. If ``used_idx`` does not match ``old_used_idx`` (which means the
   ``inflight`` field of ``DescStatePacked`` entries in the last batch may
   be incorrect),

   a. Get the next descriptor ring entry through ``old_used_idx``, ``d``

   #. Use ``old_used_wrap_counter`` to calculate the available flags

   #. If ``d.flags`` is not equal to the calculated flags value (which means
      the back-end has submitted the buffer to the guest driver before the
      crash, so it has to commit the in-progress update), set
      ``old_free_head``, ``old_used_idx``, ``old_used_wrap_counter`` to
      ``free_head``, ``used_idx``, ``used_wrap_counter``

#. Set ``free_head``, ``used_idx``, ``used_wrap_counter`` to
   ``old_free_head``, ``old_used_idx``, ``old_used_wrap_counter``
   (roll back any in-progress update)

#. Set the ``inflight`` field of each ``DescStatePacked`` entry in
   the free list to 0

#. Resubmit inflight ``DescStatePacked`` entries in order of their
   counter value

In-band notifications
---------------------

In some limited situations (e.g. for simulation) it is desirable to
have the kick, call and error (if used) signals done via in-band
messages instead of asynchronous eventfd notifications. This can be
done by negotiating the ``VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS``
protocol feature.

Note that since too many messages on the sockets can cause the sending
application(s) to block, it is not advised to use this feature unless
absolutely necessary. It is also considered an error to negotiate this
feature without also negotiating ``VHOST_USER_PROTOCOL_F_BACKEND_REQ``
and ``VHOST_USER_PROTOCOL_F_REPLY_ACK``: the former is necessary for
getting a message channel from the back-end to the front-end, while the
latter needs to be used with the in-band notification messages to block
until they are processed, both to avoid blocking later and for proper
processing (at least in the simulation use case). As it has no other
way of signalling this error, the back-end should close the connection
as a response to a ``VHOST_USER_SET_PROTOCOL_FEATURES`` message that
sets the in-band notifications feature flag without the other two.

Protocol features
-----------------

.. code:: c

  #define VHOST_USER_PROTOCOL_F_MQ 0
  #define VHOST_USER_PROTOCOL_F_LOG_SHMFD 1
  #define VHOST_USER_PROTOCOL_F_RARP 2
  #define VHOST_USER_PROTOCOL_F_REPLY_ACK 3
  #define VHOST_USER_PROTOCOL_F_MTU 4
  #define VHOST_USER_PROTOCOL_F_BACKEND_REQ 5
  #define VHOST_USER_PROTOCOL_F_CROSS_ENDIAN 6
  #define VHOST_USER_PROTOCOL_F_CRYPTO_SESSION 7
  #define VHOST_USER_PROTOCOL_F_PAGEFAULT 8
  #define VHOST_USER_PROTOCOL_F_CONFIG 9
  #define VHOST_USER_PROTOCOL_F_BACKEND_SEND_FD 10
  #define VHOST_USER_PROTOCOL_F_HOST_NOTIFIER 11
  #define VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD 12
  #define VHOST_USER_PROTOCOL_F_RESET_DEVICE 13
  #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
  #define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS 15
  #define VHOST_USER_PROTOCOL_F_STATUS 16
  #define VHOST_USER_PROTOCOL_F_XEN_MMAP 17
  #define VHOST_USER_PROTOCOL_F_SHARED_OBJECT 18
  #define VHOST_USER_PROTOCOL_F_DEVICE_STATE 19

Front-end message types
-----------------------

``VHOST_USER_GET_FEATURES``
  :id: 1
  :equivalent ioctl: ``VHOST_GET_FEATURES``
  :request payload: N/A
  :reply payload: ``u64``

  Get from the underlying vhost implementation the features bitmask.
  Feature bit ``VHOST_USER_F_PROTOCOL_FEATURES`` signals back-end support
  for ``VHOST_USER_GET_PROTOCOL_FEATURES`` and
  ``VHOST_USER_SET_PROTOCOL_FEATURES``.

``VHOST_USER_SET_FEATURES``
  :id: 2
  :equivalent ioctl: ``VHOST_SET_FEATURES``
  :request payload: ``u64``
  :reply payload: N/A

  Enable features in the underlying vhost implementation using a
  bitmask. Feature bit ``VHOST_USER_F_PROTOCOL_FEATURES`` signals
  back-end support for ``VHOST_USER_GET_PROTOCOL_FEATURES`` and
  ``VHOST_USER_SET_PROTOCOL_FEATURES``.

``VHOST_USER_GET_PROTOCOL_FEATURES``
  :id: 15
  :equivalent ioctl: ``VHOST_GET_FEATURES``
  :request payload: N/A
  :reply payload: ``u64``

  Get the protocol feature bitmask from the underlying vhost
  implementation. Only legal if feature bit
  ``VHOST_USER_F_PROTOCOL_FEATURES`` is present in
  ``VHOST_USER_GET_FEATURES``. It does not need to be acknowledged by
  ``VHOST_USER_SET_FEATURES``.

  .. Note::
     Back-ends that report ``VHOST_USER_F_PROTOCOL_FEATURES`` must
     support this message even before ``VHOST_USER_SET_FEATURES`` was
     called.

``VHOST_USER_SET_PROTOCOL_FEATURES``
  :id: 16
  :equivalent ioctl: ``VHOST_SET_FEATURES``
  :request payload: ``u64``
  :reply payload: N/A

  Enable protocol features in the underlying vhost implementation.

  Only legal if feature bit ``VHOST_USER_F_PROTOCOL_FEATURES`` is present in
  ``VHOST_USER_GET_FEATURES``. It does not need to be acknowledged by
  ``VHOST_USER_SET_FEATURES``.

  .. Note::
     Back-ends that report ``VHOST_USER_F_PROTOCOL_FEATURES`` must support
     this message even before ``VHOST_USER_SET_FEATURES`` was called.

``VHOST_USER_SET_OWNER``
  :id: 3
  :equivalent ioctl: ``VHOST_SET_OWNER``
  :request payload: N/A
  :reply payload: N/A

  Issued when a new connection is established. It marks the sender
  as the front-end that owns the session. This can be used on the *back-end*
  as a "session start" flag.

``VHOST_USER_RESET_OWNER``
  :id: 4
  :request payload: N/A
  :reply payload: N/A

  .. admonition:: Deprecated

     This is no longer used. Used to be sent to request disabling all
     rings, but some back-ends interpreted it to also discard connection
     state (this interpretation would lead to bugs). It is recommended
     that back-ends either ignore this message, or use it to disable all
     rings.

``VHOST_USER_SET_MEM_TABLE``
  :id: 5
  :equivalent ioctl: ``VHOST_SET_MEM_TABLE``
  :request payload: multiple memory regions description
  :reply payload: (postcopy only) multiple memory regions description

  Sets the memory map regions on the back-end so it can translate the
  vring addresses. In the ancillary data there is an array of file
  descriptors for each memory mapped region. The size and ordering of
  the fds matches the number and ordering of memory regions.

  When ``VHOST_USER_POSTCOPY_LISTEN`` has been received,
  ``SET_MEM_TABLE`` replies with the bases of the memory mapped
  regions to the front-end. The back-end must have mmap'd the regions but
  not yet accessed them and should not yet generate a userfault
  event.

  .. Note::
     ``NEED_REPLY_MASK`` is not set in this case. QEMU will then
     reply back to the list of mappings with an empty
     ``VHOST_USER_SET_MEM_TABLE`` as an acknowledgement; only upon
     reception of this message may the guest start accessing the memory
     and generating faults.

``VHOST_USER_SET_LOG_BASE``
  :id: 6
  :equivalent ioctl: ``VHOST_SET_LOG_BASE``
  :request payload: u64
  :reply payload: N/A

  Sets logging shared memory space.

  When the back-end has the ``VHOST_USER_PROTOCOL_F_LOG_SHMFD`` protocol
  feature, the log memory fd is provided in the ancillary data of the
  ``VHOST_USER_SET_LOG_BASE`` message; the size and offset of the shared
  memory area are provided in the message.

``VHOST_USER_SET_LOG_FD``
  :id: 7
  :equivalent ioctl: ``VHOST_SET_LOG_FD``
  :request payload: N/A
  :reply payload: N/A

  Sets the logging file descriptor, which is passed as ancillary data.

``VHOST_USER_SET_VRING_NUM``
  :id: 8
  :equivalent ioctl: ``VHOST_SET_VRING_NUM``
  :request payload: vring state description
  :reply payload: N/A

  Set the size of the queue.

``VHOST_USER_SET_VRING_ADDR``
  :id: 9
  :equivalent ioctl: ``VHOST_SET_VRING_ADDR``
  :request payload: vring address description
  :reply payload: N/A

  Sets the addresses of the different aspects of the vring.

``VHOST_USER_SET_VRING_BASE``
  :id: 10
  :equivalent ioctl: ``VHOST_SET_VRING_BASE``
  :request payload: vring descriptor index/indices
  :reply payload: N/A

  Sets the next index to use for descriptors in this vring:

  * For a split virtqueue, sets only the next descriptor index to
    process in the *Available Ring*. The device is supposed to read the
    next index in the *Used Ring* from the respective vring structure in
    guest memory.

  * For a packed virtqueue, both indices are supplied, as they are not
    explicitly available in memory.

  Consequently, the payload type is specific to the type of virt queue
  (*a vring descriptor index for split virtqueues* vs. *vring descriptor
  indices for packed virtqueues*).

``VHOST_USER_GET_VRING_BASE``
  :id: 11
  :equivalent ioctl: ``VHOST_USER_GET_VRING_BASE``
  :request payload: vring state description
  :reply payload: vring descriptor index/indices

  Stops the vring and returns the current descriptor index or indices:

  * For a split virtqueue, returns only the 16-bit next descriptor
    index to process in the *Available Ring*. Note that this may
    differ from the available ring index in the vring structure in
    memory, which points to where the driver will put new available
    descriptors. For the *Used Ring*, the device only needs the next
    descriptor index at which to put new descriptors, which is the
    value in the vring structure in memory, so this value is not
    covered by this message.

  * For a packed virtqueue, neither index is explicitly available to
    read from memory, so both indices (as maintained by the device) are
    returned.

  Consequently, the payload type is specific to the type of virt queue
  (*a vring descriptor index for split virtqueues* vs. *vring descriptor
  indices for packed virtqueues*).

  When and as long as all of a device’s vrings are stopped, it is
  *suspended*, see :ref:`Suspended device state
  <suspended_device_state>`.

  The request payload’s *num* field is currently reserved and must be
  set to 0.

``VHOST_USER_SET_VRING_KICK``
  :id: 12
  :equivalent ioctl: ``VHOST_SET_VRING_KICK``
  :request payload: ``u64``
  :reply payload: N/A

  Set the event file descriptor for adding buffers to the vring. It is
  passed in the ancillary data.

  Bits (0-7) of the payload contain the vring index. Bit 8 is the
  invalid FD flag. This flag is set when there is no file descriptor
  in the ancillary data. This signals that polling should be used
  instead of waiting for the kick. Note that if the protocol feature
  ``VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS`` has been negotiated
  this message isn't necessary as the ring is also started on the
  ``VHOST_USER_VRING_KICK`` message; it may however still be used to
  set an event file descriptor (which will be preferred over the
  message) or to enable polling.
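
  A sketch of how a back-end might decode this ``u64`` payload (the same
  layout is used by ``VHOST_USER_SET_VRING_CALL`` and
  ``VHOST_USER_SET_VRING_ERR`` below); the macro and helper names are
  ours, not part of the protocol:

  .. code:: c

    #include <stdbool.h>
    #include <stdint.h>

    #define VRING_IDX_MASK   0xffULL         /* bits 0-7: vring index */
    #define VRING_NOFD_MASK  (0x1ULL << 8)   /* bit 8: invalid FD flag */

    static unsigned vring_index(uint64_t payload)
    {
        return payload & VRING_IDX_MASK;
    }

    static bool vring_fd_missing(uint64_t payload)
    {
        return (payload & VRING_NOFD_MASK) != 0;
    }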

``VHOST_USER_SET_VRING_CALL``
  :id: 13
  :equivalent ioctl: ``VHOST_SET_VRING_CALL``
  :request payload: ``u64``
  :reply payload: N/A

  Set the event file descriptor to signal when buffers are used. It is
  passed in the ancillary data.

  Bits (0-7) of the payload contain the vring index. Bit 8 is the
  invalid FD flag. This flag is set when there is no file descriptor
  in the ancillary data. This signals that polling will be used
  instead of waiting for the call. Note that if the protocol features
  ``VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS`` and
  ``VHOST_USER_PROTOCOL_F_BACKEND_REQ`` have been negotiated this message
  isn't necessary as the ``VHOST_USER_BACKEND_VRING_CALL`` message can be
  used; it may however still be used to set an event file descriptor
  or to enable polling.

``VHOST_USER_SET_VRING_ERR``
  :id: 14
  :equivalent ioctl: ``VHOST_SET_VRING_ERR``
  :request payload: ``u64``
  :reply payload: N/A

  Set the event file descriptor to signal when an error occurs. It is
  passed in the ancillary data.

  Bits (0-7) of the payload contain the vring index. Bit 8 is the
  invalid FD flag. This flag is set when there is no file descriptor
  in the ancillary data. Note that if the protocol features
  ``VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS`` and
  ``VHOST_USER_PROTOCOL_F_BACKEND_REQ`` have been negotiated this message
  isn't necessary as the ``VHOST_USER_BACKEND_VRING_ERR`` message can be
  used; it may however still be used to set an event file descriptor
  (which will be preferred over the message).
``VHOST_USER_GET_QUEUE_NUM``
|
|
|
|
|
:id: 17
|
|
|
|
|
:equivalent ioctl: N/A
|
2022-03-21 18:30:28 +03:00
|
|
|
|
:request payload: N/A
|
|
|
|
|
:reply payload: u64
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
2022-03-21 18:30:30 +03:00
|
|
|
|
Query how many queues the back-end supports.
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
|
|
|
|
This request should be sent only when ``VHOST_USER_PROTOCOL_F_MQ``
|
|
|
|
|
is set in queried protocol features by
|
|
|
|
|
``VHOST_USER_GET_PROTOCOL_FEATURES``.
|
|
|
|
|
|
|
|
|
|
``VHOST_USER_SET_VRING_ENABLE``
|
|
|
|
|
:id: 18
|
|
|
|
|
:equivalent ioctl: N/A
|
2022-03-21 18:30:28 +03:00
|
|
|
|
:request payload: vring state description
|
|
|
|
|
:reply payload: N/A
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
2022-03-21 18:30:30 +03:00
|
|
|
|
Signal the back-end to enable or disable corresponding vring.
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
|
|
|
|
This request should be sent only when
|
|
|
|
|
``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated.
|
|
|
|
|
|
|
|
|
|
``VHOST_USER_SEND_RARP``
|
|
|
|
|
:id: 19
|
|
|
|
|
:equivalent ioctl: N/A
|
2022-03-21 18:30:28 +03:00
|
|
|
|
:request payload: ``u64``
|
|
|
|
|
:reply payload: N/A
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
2022-03-21 18:30:30 +03:00
|
|
|
|
Ask vhost user back-end to broadcast a fake RARP to notify the migration
|
2019-03-15 21:07:35 +03:00
|
|
|
|
is terminated for guest that does not support GUEST_ANNOUNCE.
|
|
|
|
|
|
|
|
|
|
Only legal if feature bit ``VHOST_USER_F_PROTOCOL_FEATURES`` is
|
|
|
|
|
present in ``VHOST_USER_GET_FEATURES`` and protocol feature bit
|
|
|
|
|
``VHOST_USER_PROTOCOL_F_RARP`` is present in
|
|
|
|
|
``VHOST_USER_GET_PROTOCOL_FEATURES``. The first 6 bytes of the
|
|
|
|
|
payload contain the mac address of the guest to allow the vhost user
|
2022-03-21 18:30:30 +03:00
|
|
|
|
back-end to construct and broadcast the fake RARP.
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
|
|
|
|
``VHOST_USER_NET_SET_MTU``
|
|
|
|
|
:id: 20
|
|
|
|
|
:equivalent ioctl: N/A
|
2022-03-21 18:30:28 +03:00
|
|
|
|
:request payload: ``u64``
|
|
|
|
|
:reply payload: N/A
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
|
|
|
|
Set host MTU value exposed to the guest.
|
|
|
|
|
|
|
|
|
|
This request should be sent only when ``VIRTIO_NET_F_MTU`` feature
|
|
|
|
|
has been successfully negotiated, ``VHOST_USER_F_PROTOCOL_FEATURES``
|
|
|
|
|
is present in ``VHOST_USER_GET_FEATURES`` and protocol feature bit
|
|
|
|
|
``VHOST_USER_PROTOCOL_F_NET_MTU`` is present in
|
|
|
|
|
``VHOST_USER_GET_PROTOCOL_FEATURES``.
|
|
|
|
|
|
2022-03-21 18:30:30 +03:00
|
|
|
|
If ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, the back-end must
|
2019-03-15 21:07:35 +03:00
|
|
|
|
respond with zero in case the specified MTU is valid, or non-zero
|
|
|
|
|
otherwise.
|
|
|
|
|
|
2023-02-08 23:32:57 +03:00
|
|
|
|
``VHOST_USER_SET_BACKEND_REQ_FD`` (previous name ``VHOST_USER_SET_SLAVE_REQ_FD``)
|
2019-03-15 21:07:35 +03:00
|
|
|
|
:id: 21
|
|
|
|
|
:equivalent ioctl: N/A
|
2022-03-21 18:30:28 +03:00
|
|
|
|
:request payload: N/A
|
|
|
|
|
:reply payload: N/A
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
2022-03-21 18:30:30 +03:00
|
|
|
|
Set the socket file descriptor for back-end initiated requests. It is passed
|
2019-03-15 21:07:35 +03:00
|
|
|
|
in the ancillary data.
|
|
|
|
|
|
|
|
|
|
This request should be sent only when
|
|
|
|
|
``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated, and protocol
|
2023-02-08 23:32:57 +03:00
|
|
|
|
feature bit ``VHOST_USER_PROTOCOL_F_BACKEND_REQ`` bit is present in
|
2019-03-15 21:07:35 +03:00
|
|
|
|
``VHOST_USER_GET_PROTOCOL_FEATURES``. If
|
2022-03-21 18:30:30 +03:00
|
|
|
|
``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, the back-end must
|
2019-03-15 21:07:35 +03:00
|
|
|
|
respond with zero for success, non-zero otherwise.
|
|
|
|
|
|
|
|
|
|
``VHOST_USER_IOTLB_MSG``
|
|
|
|
|
:id: 22
|
|
|
|
|
:equivalent ioctl: N/A (equivalent to ``VHOST_IOTLB_MSG`` message type)
|
2022-03-21 18:30:28 +03:00
|
|
|
|
:request payload: ``struct vhost_iotlb_msg``
|
|
|
|
|
:reply payload: ``u64``
|
2019-03-15 21:07:35 +03:00
|
|
|
|
|
|
|
|
|
Send IOTLB messages with ``struct vhost_iotlb_msg`` as payload.
|
|
|
|
|
|
2022-03-21 18:30:30 +03:00
|
|
|
|
The front-end sends such requests to update and invalidate entries in the
|
|
|
|
|
device IOTLB. The back-end has to acknowledge the request with sending
|
2019-03-15 21:07:35 +03:00
|
|
|
|
zero as ``u64`` payload for success, non-zero otherwise.
|
|
|
|
|
|
|
|
|
|
This request should be send only when ``VIRTIO_F_IOMMU_PLATFORM``
|
|
|
|
|
feature has been successfully negotiated.

``VHOST_USER_SET_VRING_ENDIAN``
  :id: 23
  :equivalent ioctl: ``VHOST_SET_VRING_ENDIAN``
  :request payload: vring state description
  :reply payload: N/A

  Set the endianness of a VQ for legacy devices. Little-endian is
  indicated with state.num set to 0 and big-endian is indicated with
  state.num set to 1. Other values are invalid.

  This request should be sent only when
  ``VHOST_USER_PROTOCOL_F_CROSS_ENDIAN`` has been negotiated.
  Backends that negotiated this feature should handle both
  endiannesses and expect this message once (per VQ) during device
  configuration (i.e. before the front-end starts the VQ).

``VHOST_USER_GET_CONFIG``
  :id: 24
  :equivalent ioctl: N/A
  :request payload: virtio device config space
  :reply payload: virtio device config space

  When ``VHOST_USER_PROTOCOL_F_CONFIG`` is negotiated, this message is
  submitted by the vhost-user front-end to fetch the contents of the
  virtio device configuration space. The vhost-user back-end's payload size
  MUST match the front-end's request, and the vhost-user back-end uses a
  zero-length payload to indicate an error to the vhost-user front-end. The
  vhost-user front-end may cache the contents to avoid repeated
  ``VHOST_USER_GET_CONFIG`` calls.
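
  A rough back-end sketch of serving this request. The struct below only
  mirrors the offset/size/flags/payload layout of the virtio device config
  space payload described earlier; the names and the ``CFG_MAX`` limit are
  illustrative::

    #include <stdint.h>
    #include <string.h>

    #define CFG_MAX 256                     /* illustrative payload capacity */

    struct config_space_req {               /* offset/size/flags + payload */
        uint32_t offset;
        uint32_t size;
        uint32_t flags;
        uint8_t  payload[CFG_MAX];
    };

    /* dev_cfg/dev_cfg_len describe the back-end's config space contents.
     * Returns the number of payload bytes to reply with; 0 signals an error. */
    static uint32_t handle_get_config(struct config_space_req *req,
                                      const uint8_t *dev_cfg,
                                      uint32_t dev_cfg_len)
    {
        if (req->size > CFG_MAX ||
            req->offset > dev_cfg_len ||
            req->size > dev_cfg_len - req->offset) {
            return 0;                       /* zero-length payload = error */
        }
        memcpy(req->payload, dev_cfg + req->offset, req->size);
        return req->size;                   /* must match the requested size */
    }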

``VHOST_USER_SET_CONFIG``
  :id: 25
  :equivalent ioctl: N/A
  :request payload: virtio device config space
  :reply payload: N/A

  When ``VHOST_USER_PROTOCOL_F_CONFIG`` is negotiated, this message is
  submitted by the vhost-user front-end when the Guest changes the virtio
  device configuration space and also can be used for live migration
  on the destination host. The vhost-user back-end must check the flags
  field, and back-ends MUST NOT accept SET_CONFIG for read-only
  configuration space fields unless the live migration bit is set.

``VHOST_USER_CREATE_CRYPTO_SESSION``
  :id: 26
  :equivalent ioctl: N/A
  :request payload: crypto session description
  :reply payload: crypto session description

  Create a session for crypto operation. The back-end must return
  the session id, 0 or positive for success, negative for failure.
  This request should be sent only when
  ``VHOST_USER_PROTOCOL_F_CRYPTO_SESSION`` feature has been
  successfully negotiated. It's a required feature for crypto
  devices.

``VHOST_USER_CLOSE_CRYPTO_SESSION``
  :id: 27
  :equivalent ioctl: N/A
  :request payload: ``u64``
  :reply payload: N/A

  Close a session for crypto operation which was previously
  created by ``VHOST_USER_CREATE_CRYPTO_SESSION``.

  This request should be sent only when
  ``VHOST_USER_PROTOCOL_F_CRYPTO_SESSION`` feature has been
  successfully negotiated. It's a required feature for crypto
  devices.

``VHOST_USER_POSTCOPY_ADVISE``
  :id: 28
  :request payload: N/A
  :reply payload: userfault fd

  When ``VHOST_USER_PROTOCOL_F_PAGEFAULT`` is supported, the front-end
  advises the back-end that a migration with postcopy enabled is underway,
  and the back-end must open a userfaultfd for later use. Note that at this
  stage the migration is still in precopy mode.
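
  A Linux-specific sketch of how a back-end might obtain the userfaultfd
  that is returned in the reply's ancillary data (error handling trimmed)::

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    static int open_userfault_fd(void)
    {
        int uffd = (int)syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        if (uffd < 0) {
            return -1;
        }
        /* Handshake on the API version before the fd is used. */
        struct uffdio_api api = { .api = UFFD_API, .features = 0 };
        if (ioctl(uffd, UFFDIO_API, &api) < 0) {
            close(uffd);
            return -1;
        }
        return uffd;    /* sent back to the front-end as ancillary data */
    }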

``VHOST_USER_POSTCOPY_LISTEN``
  :id: 29
  :request payload: N/A
  :reply payload: N/A

  The front-end advises the back-end that a transition to postcopy mode has
  happened. The back-end must ensure that shared memory is registered
  with userfaultfd to cause faulting of non-present pages.

  This is always sent sometime after a ``VHOST_USER_POSTCOPY_ADVISE``,
  and thus only when ``VHOST_USER_PROTOCOL_F_PAGEFAULT`` is supported.
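
  For illustration, registering one mmap'ed guest memory region with the
  userfaultfd opened at ``VHOST_USER_POSTCOPY_ADVISE`` time might look like
  this (Linux-specific sketch)::

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    /* addr/len describe one mmap'ed guest memory region. */
    static int register_region_for_postcopy(int uffd, void *addr, uint64_t len)
    {
        struct uffdio_register reg = {
            .range = { .start = (uint64_t)(uintptr_t)addr, .len = len },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,  /* fault on missing pages */
        };
        return ioctl(uffd, UFFDIO_REGISTER, &reg);
    }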

``VHOST_USER_POSTCOPY_END``
  :id: 30
  :request payload: N/A
  :reply payload: ``u64``

  The front-end advises that postcopy migration has now completed. The back-end
  must disable the userfaultfd. The reply is an acknowledgement
  only.

  When ``VHOST_USER_PROTOCOL_F_PAGEFAULT`` is supported, this message
  is sent at the end of the migration, after
  ``VHOST_USER_POSTCOPY_LISTEN`` was previously sent.

  The value returned is an error indication; 0 is success.

``VHOST_USER_GET_INFLIGHT_FD``
  :id: 31
  :equivalent ioctl: N/A
  :request payload: inflight description
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD`` protocol feature has
  been successfully negotiated, this message is submitted by the front-end to
  get a shared buffer from the back-end. The shared buffer will be used to
  track inflight I/O by the back-end. QEMU should retrieve a new one when the
  VM is reset.
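
  A back-end sketch of allocating such a shared buffer. ``memfd_create()``
  is one possible way to obtain fd-backed shareable memory; the buffer's
  internal layout is device-specific and not shown here::

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Returns the fd to pass in the reply's ancillary data and stores the
     * local mapping in *addr; returns -1 on failure. */
    static int alloc_inflight_buffer(size_t size, void **addr)
    {
        int fd = memfd_create("inflight", MFD_CLOEXEC | MFD_ALLOW_SEALING);
        if (fd < 0) {
            return -1;
        }
        if (ftruncate(fd, (off_t)size) < 0) {
            close(fd);
            return -1;
        }
        *addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (*addr == MAP_FAILED) {
            close(fd);
            return -1;
        }
        return fd;
    }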

``VHOST_USER_SET_INFLIGHT_FD``
  :id: 32
  :equivalent ioctl: N/A
  :request payload: inflight description
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD`` protocol feature has
  been successfully negotiated, this message is submitted by the front-end to
  send the shared inflight buffer back to the back-end so that the back-end
  could get inflight I/O after a crash or restart.

``VHOST_USER_GPU_SET_SOCKET``
  :id: 33
  :equivalent ioctl: N/A
  :request payload: N/A
  :reply payload: N/A

  Sets the GPU protocol socket file descriptor, which is passed as
  ancillary data. The GPU protocol is used to inform the front-end of
  rendering state and updates. See vhost-user-gpu.rst for details.

``VHOST_USER_RESET_DEVICE``
  :id: 34
  :equivalent ioctl: N/A
  :request payload: N/A
  :reply payload: N/A

  Ask the vhost user back-end to disable all rings and reset all
  internal device state to the initial state, ready to be
  reinitialized. The back-end retains ownership of the device
  throughout the reset operation.

  Only valid if the ``VHOST_USER_PROTOCOL_F_RESET_DEVICE`` protocol
  feature is set by the back-end.

``VHOST_USER_VRING_KICK``
  :id: 35
  :equivalent ioctl: N/A
  :request payload: vring state description
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS`` protocol
  feature has been successfully negotiated, this message may be
  submitted by the front-end to indicate that a buffer was added to
  the vring instead of signalling it using the vring's kick file
  descriptor or having the back-end rely on polling.

  The state.num field is currently reserved and must be set to 0.
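
  As a sketch only: with in-band notifications negotiated, a kick becomes an
  ordinary message on the vhost-user socket rather than an eventfd write.
  The ``vhost_user_send()`` helper and the payload struct below are
  hypothetical; the real framing is the message header and vring state
  description defined earlier in this document::

    #include <stdint.h>

    #define VHOST_USER_VRING_KICK 35u   /* message id from this specification */

    struct vring_state {                /* vring state description payload */
        uint32_t index;                 /* vring index to kick */
        uint32_t num;                   /* reserved, must be 0 here */
    };

    /* Hypothetical: sends one framed vhost-user message on the socket. */
    extern int vhost_user_send(int sock, uint32_t request,
                               const void *payload, uint32_t size);

    static int kick_vring_inband(int sock, uint32_t vring_index)
    {
        struct vring_state s = { .index = vring_index, .num = 0 };
        return vhost_user_send(sock, VHOST_USER_VRING_KICK, &s, sizeof(s));
    }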

``VHOST_USER_GET_MAX_MEM_SLOTS``
  :id: 36
  :equivalent ioctl: N/A
  :request payload: N/A
  :reply payload: ``u64``

  When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol
  feature has been successfully negotiated, this message is submitted
  by the front-end to the back-end. The back-end should return the message with
  a u64 payload containing the maximum number of memory slots for
  QEMU to expose to the guest. The value returned by the back-end
  will be capped at the maximum number of RAM slots which can be
  supported by the target platform.

``VHOST_USER_ADD_MEM_REG``
  :id: 37
  :equivalent ioctl: N/A
  :request payload: single memory region description
  :reply payload: N/A (in postcopy mode: single memory region description)

  When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol
  feature has been successfully negotiated, this message is submitted
  by the front-end to the back-end. The message payload contains a memory
  region descriptor struct, describing a region of guest memory which
  the back-end device must map in. When the
  ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol feature has
  been successfully negotiated, along with the
  ``VHOST_USER_REM_MEM_REG`` message, this message is used to set and
  update the memory tables of the back-end device.

  Exactly one file descriptor from which the memory is mapped is
  passed in the ancillary data.

  In postcopy mode (see ``VHOST_USER_POSTCOPY_LISTEN``), the back-end
  replies with the bases of the memory mapped region to the front-end.
  For further details on postcopy, see ``VHOST_USER_SET_MEM_TABLE``.
  They apply to ``VHOST_USER_ADD_MEM_REG`` accordingly.
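
  A back-end sketch of mapping the region from the descriptor and the file
  descriptor received in ancillary data; the struct below merely mirrors the
  memory region description fields (guest address, size, user address, mmap
  offset) under illustrative names::

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/mman.h>

    struct mem_region_desc {            /* illustrative mirror of the payload */
        uint64_t guest_phys_addr;
        uint64_t memory_size;
        uint64_t userspace_addr;
        uint64_t mmap_offset;
    };

    /* Returns the base of the new mapping, or MAP_FAILED on error. */
    static void *map_mem_region(const struct mem_region_desc *d, int fd)
    {
        /* Map the part of the file that backs this region, honouring the
         * offset requested by the front-end. */
        return mmap(NULL, d->memory_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, (off_t)d->mmap_offset);
    }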

``VHOST_USER_REM_MEM_REG``
  :id: 38
  :equivalent ioctl: N/A
  :request payload: single memory region description
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol
  feature has been successfully negotiated, this message is submitted
  by the front-end to the back-end. The message payload contains a memory
  region descriptor struct, describing a region of guest memory which
  the back-end device must unmap. When the
  ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol feature has
  been successfully negotiated, along with the
  ``VHOST_USER_ADD_MEM_REG`` message, this message is used to set and
  update the memory tables of the back-end device.

  The memory region to be removed is identified by its guest address,
  user address and size. The mmap offset is ignored.

  No file descriptors SHOULD be passed in the ancillary data. For
  compatibility with existing incorrect implementations, the back-end MAY
  accept messages with one file descriptor. If a file descriptor is
  passed, the back-end MUST close it without using it otherwise.
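
  For illustration, removal might match an existing mapping by guest
  address, user address and size exactly as described above; the tracking
  table and struct are hypothetical::

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    struct tracked_region {
        uint64_t guest_phys_addr;
        uint64_t memory_size;
        uint64_t userspace_addr;
        void    *mmap_base;             /* base returned by mmap() earlier */
        bool     in_use;
    };

    static int remove_mem_region(struct tracked_region *table, size_t n,
                                 uint64_t gpa, uint64_t size, uint64_t uaddr)
    {
        for (size_t i = 0; i < n; i++) {
            struct tracked_region *r = &table[i];
            /* The mmap offset is deliberately not compared. */
            if (r->in_use && r->guest_phys_addr == gpa &&
                r->memory_size == size && r->userspace_addr == uaddr) {
                munmap(r->mmap_base, r->memory_size);
                r->in_use = false;
                return 0;
            }
        }
        return -1;                      /* no matching region found */
    }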

``VHOST_USER_SET_STATUS``
  :id: 39
  :equivalent ioctl: VHOST_VDPA_SET_STATUS
  :request payload: ``u64``
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_STATUS`` protocol feature has been
  successfully negotiated, this message is submitted by the front-end to
  notify the back-end with updated device status as defined in the Virtio
  specification.

``VHOST_USER_GET_STATUS``
  :id: 40
  :equivalent ioctl: VHOST_VDPA_GET_STATUS
  :request payload: N/A
  :reply payload: ``u64``

  When the ``VHOST_USER_PROTOCOL_F_STATUS`` protocol feature has been
  successfully negotiated, this message is submitted by the front-end to
  query the back-end for its device status as defined in the Virtio
  specification.
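
  For reference only, the ``u64`` carries the device status bits defined by
  the Virtio specification. A back-end might gate its datapath on
  ``DRIVER_OK`` roughly as below; the macro names are illustrative, the
  values are those from Virtio::

    #include <stdbool.h>
    #include <stdint.h>

    #define VIRTIO_STATUS_ACKNOWLEDGE  1u
    #define VIRTIO_STATUS_DRIVER       2u
    #define VIRTIO_STATUS_DRIVER_OK    4u
    #define VIRTIO_STATUS_FEATURES_OK  8u
    #define VIRTIO_STATUS_NEEDS_RESET  0x40u
    #define VIRTIO_STATUS_FAILED       0x80u

    static bool device_may_start(uint64_t status)
    {
        /* The driver has accepted the features and finished setup. */
        return (status & VIRTIO_STATUS_FEATURES_OK) &&
               (status & VIRTIO_STATUS_DRIVER_OK);
    }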

``VHOST_USER_GET_SHARED_OBJECT``
  :id: 41
  :equivalent ioctl: N/A
  :request payload: ``struct VhostUserShared``
  :reply payload: dmabuf fd

  When the ``VHOST_USER_PROTOCOL_F_SHARED_OBJECT`` protocol
  feature has been successfully negotiated, and the UUID is found
  in the exporters' cache, this message is submitted by the front-end
  to retrieve a given dma-buf fd from a given back-end, determined by
  the requested UUID. The back-end will reply passing the fd when the
  operation is successful, or no fd otherwise.

``VHOST_USER_SET_DEVICE_STATE_FD``
  :id: 42
  :equivalent ioctl: N/A
  :request payload: device state transfer parameters
  :reply payload: ``u64``

  Front-end and back-end negotiate a channel over which to transfer the
  back-end’s internal state during migration. Either side (front-end or
  back-end) may create the channel. The nature of this channel is not
  restricted or defined in this document, but whichever side creates it
  must create a file descriptor that is provided to the other side,
  allowing access to the channel. This FD must behave as follows:

  * For the writing end, it must allow writing the whole back-end state
    sequentially. Closing the file descriptor signals the end of
    transfer.

  * For the reading end, it must allow reading the whole back-end state
    sequentially. The end of file signals the end of the transfer.

  For example, the channel may be a pipe, in which case the two ends of
  the pipe fulfill these requirements respectively.

  Initially, the front-end creates a channel along with such an FD. It
  passes the FD to the back-end as ancillary data of a
  ``VHOST_USER_SET_DEVICE_STATE_FD`` message. The back-end may create a
  different transfer channel, passing the respective FD back to the
  front-end as ancillary data of the reply. If so, the front-end must
  then discard its channel and use the one provided by the back-end.

  Whether the back-end uses its own channel should be decided based on
  efficiency: if the channel is a pipe, both ends will most likely need
  to copy data into and out of it. Any channel that allows for more
  efficient processing on at least one end, e.g. through zero-copy, is
  considered more efficient and thus preferred. If the back-end can
  provide such a channel, it should decide to use it.

  The request payload contains parameters for the subsequent data
  transfer, as described in the :ref:`Migrating back-end state
  <migrating_backend_state>` section.

  The value returned is both an indication for success, and whether a
  file descriptor for a back-end-provided channel is returned: Bits 0–7
  are 0 on success, and non-zero on error. Bit 8 is the invalid FD
  flag; this flag is set when there is no file descriptor returned.
  When this flag is not set, the front-end must use the returned file
  descriptor as its end of the transfer channel. The back-end must not
  both indicate an error and return a file descriptor.

  Using this function requires prior negotiation of the
  ``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature.
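
  As a minimal illustration of the pipe case (the simplest conforming
  channel), the reading side just consumes the stream until end of file;
  the ``apply_state_chunk()`` helper is hypothetical::

    #include <stdint.h>
    #include <unistd.h>

    /* Hypothetical: feed one chunk of received state into the device model. */
    extern int apply_state_chunk(const uint8_t *buf, size_t len);

    /* Read the whole back-end state from the transfer fd; EOF ends it. */
    static int read_device_state(int fd)
    {
        uint8_t buf[4096];
        for (;;) {
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n == 0) {
                return 0;       /* end of file: transfer complete */
            }
            if (n < 0) {
                return -1;      /* I/O error */
            }
            if (apply_state_chunk(buf, (size_t)n) < 0) {
                return -1;      /* deserialization error */
            }
        }
    }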

``VHOST_USER_CHECK_DEVICE_STATE``
  :id: 43
  :equivalent ioctl: N/A
  :request payload: N/A
  :reply payload: ``u64``

  After transferring the back-end’s internal state during migration (see
  the :ref:`Migrating back-end state <migrating_backend_state>`
  section), check whether the back-end was able to successfully and fully
  process the state.

  The value returned indicates success or error; 0 is success, any
  non-zero value is an error.

  Using this function requires prior negotiation of the
  ``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature.

Back-end message types
----------------------

For this type of message, the request is sent by the back-end and the reply
is sent by the front-end.

``VHOST_USER_BACKEND_IOTLB_MSG`` (previous name ``VHOST_USER_SLAVE_IOTLB_MSG``)
  :id: 1
  :equivalent ioctl: N/A (equivalent to ``VHOST_IOTLB_MSG`` message type)
  :request payload: ``struct vhost_iotlb_msg``
  :reply payload: N/A

  Send IOTLB messages with ``struct vhost_iotlb_msg`` as payload.
  The back-end sends such requests to notify of an IOTLB miss, or an IOTLB
  access failure. If ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is
  negotiated, and the back-end sets the ``VHOST_USER_NEED_REPLY`` flag, the
  front-end must respond with zero when the operation has been successfully
  completed, or non-zero otherwise. This request should be sent only when
  the ``VIRTIO_F_IOMMU_PLATFORM`` feature has been successfully
  negotiated.

``VHOST_USER_BACKEND_CONFIG_CHANGE_MSG`` (previous name ``VHOST_USER_SLAVE_CONFIG_CHANGE_MSG``)
  :id: 2
  :equivalent ioctl: N/A
  :request payload: N/A
  :reply payload: N/A

  When ``VHOST_USER_PROTOCOL_F_CONFIG`` is negotiated, the vhost-user
  back-end sends such messages to notify that the virtio device's
  configuration space has changed. For those host devices which support
  such a feature, the host driver can send a ``VHOST_USER_GET_CONFIG``
  message to the back-end to get the latest content. If
  ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, and the back-end sets the
  ``VHOST_USER_NEED_REPLY`` flag, the front-end must respond with zero when
  the operation has been successfully completed, or non-zero otherwise.

``VHOST_USER_BACKEND_VRING_HOST_NOTIFIER_MSG`` (previous name ``VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG``)
  :id: 3
  :equivalent ioctl: N/A
  :request payload: vring area description
  :reply payload: N/A

  Sets the host notifier for a specified queue. The queue index is
  contained in the ``u64`` field of the vring area description. The
  host notifier is described by the file descriptor (typically it's a
  VFIO device fd) which is passed as ancillary data, together with the size
  (the mmap size, which should be the same as the host page size) and the
  offset (the mmap offset) carried in the vring area
  description. QEMU can mmap the file descriptor based on the size and
  offset to get a memory range. Registering a host notifier means
  mapping this memory range to the VM as the specified queue's notify
  MMIO region. The back-end sends this request to tell QEMU to de-register
  the existing notifier, if any, and register the new notifier if the
  request is sent with a file descriptor.

  This request should be sent only when the
  ``VHOST_USER_PROTOCOL_F_HOST_NOTIFIER`` protocol feature has been
  successfully negotiated.

``VHOST_USER_BACKEND_VRING_CALL`` (previous name ``VHOST_USER_SLAVE_VRING_CALL``)
  :id: 4
  :equivalent ioctl: N/A
  :request payload: vring state description
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS`` protocol
  feature has been successfully negotiated, this message may be
  submitted by the back-end to indicate that a buffer was used from
  the vring instead of signalling this using the vring's call file
  descriptor or having the front-end rely on polling.

  The state.num field is currently reserved and must be set to 0.

``VHOST_USER_BACKEND_VRING_ERR`` (previous name ``VHOST_USER_SLAVE_VRING_ERR``)
  :id: 5
  :equivalent ioctl: N/A
  :request payload: vring state description
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS`` protocol
  feature has been successfully negotiated, this message may be
  submitted by the back-end to indicate that an error occurred on the
  specific vring, instead of signalling it using the vring's error file
  descriptor set by the front-end via ``VHOST_USER_SET_VRING_ERR``.

  The state.num field is currently reserved and must be set to 0.

``VHOST_USER_BACKEND_SHARED_OBJECT_ADD``
  :id: 6
  :equivalent ioctl: N/A
  :request payload: ``struct VhostUserShared``
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_SHARED_OBJECT`` protocol
  feature has been successfully negotiated, this message can be submitted
  by a back-end to add itself as an exporter to the virtio shared lookup
  table. The back-end device gets associated with a UUID in the shared table.
  The back-end is responsible for keeping its own table with exported dma-buf fds.
  When another back-end tries to import the resource associated with the UUID,
  it will send a message to the front-end, which will act as a proxy to the
  exporter back-end. If ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, and
  the back-end sets the ``VHOST_USER_NEED_REPLY`` flag, the front-end must
  respond with zero when the operation is successfully completed, or non-zero
  otherwise.
|
|
|
|
|
|
|
|
|
|
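  To make the exchange concrete, the following is a minimal, hypothetical
  sketch (not QEMU source) of a back-end announcing itself as an exporter.
  It assumes the ``VhostUserShared`` payload carries the 16-byte UUID and
  that the message starts with the u32 request/flags/size header described
  earlier in this document; ``chan_fd`` and ``shared_object_add`` are
  illustrative names.

  .. code:: c

    /* Hypothetical sketch of sending VHOST_USER_BACKEND_SHARED_OBJECT_ADD. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    struct vhost_user_hdr {
        uint32_t request;   /* message type, 6 for ..._SHARED_OBJECT_ADD */
        uint32_t flags;     /* low bits carry the protocol version */
        uint32_t size;      /* payload size in bytes */
    } __attribute__((packed));

    struct vhost_user_shared {
        uint8_t uuid[16];   /* UUID identifying the exported object */
    };

    static int shared_object_add(int chan_fd, const uint8_t uuid[16])
    {
        struct {
            struct vhost_user_hdr hdr;
            struct vhost_user_shared payload;
        } __attribute__((packed)) msg = {
            .hdr = {
                .request = 6,    /* VHOST_USER_BACKEND_SHARED_OBJECT_ADD */
                .flags   = 0x1,  /* protocol version */
                .size    = sizeof(msg.payload),
            },
        };

        memcpy(msg.payload.uuid, uuid, sizeof(msg.payload.uuid));

        /* A real back-end would also handle short writes and errors. */
        return write(chan_fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
    }
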
``VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE``
  :id: 7
  :equivalent ioctl: N/A
  :request payload: ``struct VhostUserShared``
  :reply payload: N/A

  When the ``VHOST_USER_PROTOCOL_F_SHARED_OBJECT`` protocol
  feature has been successfully negotiated, this message can be submitted
  by a back-end to remove itself from the virtio-dmabuf shared
  table. Only the back-end owning the entry (i.e., the one that first added
  it) will have permission to remove it. Otherwise, the message is ignored.
  The shared table will remove the back-end device associated with
  the UUID. If ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, and the
  back-end sets the ``VHOST_USER_NEED_REPLY`` flag, the front-end must respond
  with zero when the operation is successfully completed, or non-zero otherwise.

``VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP``
  :id: 8
  :equivalent ioctl: N/A
  :request payload: ``struct VhostUserShared``
  :reply payload: dmabuf fd and ``u64``

  When the ``VHOST_USER_PROTOCOL_F_SHARED_OBJECT`` protocol
  feature has been successfully negotiated, this message can be submitted
  by a back-end to retrieve a given dma-buf fd from the virtio-dmabuf
  shared table given a UUID. The front-end will reply passing the fd and a
  zero u64 payload when the operation is successful, or a non-zero payload
  otherwise. Note that if the operation fails, no fd is sent to the back-end.

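  As an illustration, the sketch below (hypothetical, not QEMU source) shows
  how a back-end could pick up the reply: the ``u64`` status arrives in the
  payload and the dma-buf fd, when present, arrives as ``SCM_RIGHTS``
  ancillary data. The reply header and message framing are omitted for
  brevity, and ``recv_lookup_reply`` is an illustrative name.

  .. code:: c

    /* Hypothetical sketch of receiving the SHARED_OBJECT_LOOKUP reply. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int recv_lookup_reply(int chan_fd, int *dmabuf_fd)
    {
        uint64_t status = ~0ULL;
        struct iovec iov = { .iov_base = &status, .iov_len = sizeof(status) };
        union {
            struct cmsghdr align;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = u.buf,
            .msg_controllen = sizeof(u.buf),
        };

        *dmabuf_fd = -1;
        if (recvmsg(chan_fd, &msg, 0) < 0) {
            return -1;
        }

        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS) {
                memcpy(dmabuf_fd, CMSG_DATA(c), sizeof(int));
            }
        }

        /* A zero status means success and *dmabuf_fd holds the shared fd. */
        return (status == 0 && *dmabuf_fd >= 0) ? 0 : -1;
    }
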
.. _reply_ack:

VHOST_USER_PROTOCOL_F_REPLY_ACK
-------------------------------

The original vhost-user specification only demands replies for certain
commands. This differs from the vhost protocol implementation where
commands are sent over an ``ioctl()`` call and block until the back-end
has completed.

With this protocol extension negotiated, the sender (QEMU) can set the
``need_reply`` [Bit 3] flag to any command. This indicates that the
back-end MUST respond with a Payload ``VhostUserMsg`` indicating success
or failure. The payload should be set to zero on success or non-zero
on failure, unless the message already has an explicit reply body.

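A minimal sketch of how a sender could use this extension is given below
(hypothetical, not QEMU source). It assumes the u32 request/flags/size
header layout defined earlier in this specification; ``send_with_reply_ack``
and ``sock_fd`` are illustrative names, and error handling is reduced to the
bare minimum.

.. code:: c

   /* Hypothetical sketch: send a command with need_reply and wait for the ack. */
   #include <stdint.h>
   #include <unistd.h>

   #define VHOST_USER_VERSION    0x1
   #define VHOST_USER_NEED_REPLY (1u << 3)   /* the need_reply flag, Bit 3 */

   struct vhost_user_hdr {
       uint32_t request;
       uint32_t flags;
       uint32_t size;
   } __attribute__((packed));

   static int send_with_reply_ack(int sock_fd, uint32_t request,
                                  const void *payload, uint32_t payload_len)
   {
       struct vhost_user_hdr hdr = {
           .request = request,
           .flags   = VHOST_USER_VERSION | VHOST_USER_NEED_REPLY,
           .size    = payload_len,
       };
       struct vhost_user_hdr reply_hdr;
       uint64_t status;

       if (write(sock_fd, &hdr, sizeof(hdr)) != sizeof(hdr) ||
           write(sock_fd, payload, payload_len) != (ssize_t)payload_len) {
           return -1;
       }

       /* The back-end answers with a header and a u64 payload: 0 on success. */
       if (read(sock_fd, &reply_hdr, sizeof(reply_hdr)) != sizeof(reply_hdr) ||
           read(sock_fd, &status, sizeof(status)) != sizeof(status)) {
           return -1;
       }
       return status == 0 ? 0 : -1;
   }
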
The reply payload gives QEMU a deterministic indication of the result
of the command. Today, QEMU is expected to terminate the main vhost-user
loop upon receiving such errors. In the future, QEMU could be taught to be
more resilient for selected requests.

For the message types that already solicit a reply from the back-end,
the presence of ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` or the need_reply bit
being set brings no behavioural change. (See the Communication_
section for details.)

.. _backend_conventions:

Backend program conventions
===========================

vhost-user back-ends can provide various devices & services and may
need to be configured manually depending on the use case. However, it
is a good idea to follow the conventions listed here when
possible. Users, QEMU or libvirt, can then rely on some common
behaviour to avoid heterogeneous configuration and management of the
back-end programs and facilitate interoperability.

Each back-end installed on a host system should come with at least one
JSON file that conforms to the vhost-user.json schema. Each file
informs the management applications about the back-end type and binary
location. In addition, it defines rules for management apps for
picking the highest priority back-end when multiple match the search
criteria (see ``@VhostUserBackend`` documentation in the schema file).

If the back-end is not capable of enabling a requested feature on the
host (such as 3D acceleration with virgl), or the initialization
failed, the back-end should fail to start early and exit with a status
!= 0. It may also print a message to stderr for further details.

The back-end program must not daemonize itself, but it may be
daemonized by the management layer. It may also have restricted
access to the system.

File descriptors 0, 1 and 2 will exist, and have regular
stdin/stdout/stderr usage (they may have been redirected to /dev/null
by the management layer, or to a log handler).

The back-end program must end (as quickly and cleanly as possible) when
the SIGTERM signal is received. Eventually, it may receive SIGKILL from
the management layer after a few seconds.

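For illustration only, a back-end might wire this up as in the following
hypothetical sketch; the event loop and the cleanup step are placeholders
for whatever the back-end actually does.

.. code:: c

   /* Hypothetical sketch: leave the main loop cleanly on SIGTERM. */
   #include <signal.h>

   static volatile sig_atomic_t quit;

   static void handle_sigterm(int sig)
   {
       (void)sig;
       quit = 1;            /* picked up by the main loop below */
   }

   int main(void)
   {
       struct sigaction sa = { 0 };

       sa.sa_handler = handle_sigterm;
       sigaction(SIGTERM, &sa, NULL);

       while (!quit) {
           /* poll the vhost-user socket, process virtqueues, ... */
       }

       /* flush state, close the socket and exit promptly */
       return 0;
   }
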
The following command line options have an expected behaviour. They
are mandatory, unless explicitly stated otherwise:

--socket-path=PATH

  This option specifies the location of the vhost-user Unix domain socket.
  It is incompatible with --fd.

--fd=FDNUM

  When this argument is given, the back-end program is started with the
  vhost-user socket as file descriptor FDNUM. It is incompatible with
  --socket-path.

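  As a usage illustration, a management layer could hand the back-end a
  pre-opened socket roughly as in the hypothetical sketch below; the
  ``vhost-user-blk`` invocation, the disk path and the fd numbering are
  only examples, not a required interface.

  .. code:: c

    /* Hypothetical sketch: spawn a back-end with a pre-opened socket. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        char arg[32];

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
            return 1;
        }

        if (fork() == 0) {
            /* child: sv[1] stays open across exec; tell the back-end about it */
            close(sv[0]);
            snprintf(arg, sizeof(arg), "--fd=%d", sv[1]);
            execlp("vhost-user-blk", "vhost-user-blk", arg,
                   "--blk-file=/path/to/disk.img", (char *)NULL);
            _exit(127);
        }

        close(sv[1]);
        /* sv[0] would then be passed to QEMU as the vhost-user chardev fd */
        return 0;
    }
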
--print-capabilities

  Output to stdout the back-end capabilities in JSON format, and then
  exit successfully. Other options and arguments should be ignored, and
  the back-end program should not perform its normal function. The
  capabilities can be reported dynamically depending on the host
  capabilities.

  The JSON output is described in the ``vhost-user.json`` schema, by
  ``@VHostUserBackendCapabilities``. Example:

  .. code:: json

    {
      "type": "foo",
      "features": [
        "feature-a",
        "feature-b"
      ]
    }

vhost-user-input
----------------

Command line options:

--evdev-path=PATH

  Specify the Linux input device.

  (optional)

--no-grab

  Do not request exclusive access to the input device.

  (optional)

vhost-user-gpu
--------------

Command line options:

--render-node=PATH

  Specify the GPU DRM render node.

  (optional)

--virgl

  Enable virgl rendering support.

  (optional)

vhost-user-blk
--------------

Command line options:

--blk-file=PATH

  Specify block device or file path.

  (optional)

--read-only

  Enable read-only.

  (optional)