tpm_crb is a device implementing the TPM 2.0 Command Response Buffer (CRB)
Interface as defined in the TCG PC Client Platform TPM Profile (PTP)
Specification Family “2.0”, Level 00, Revision 01.03 v22.
The PTP allows a device implementation to switch between the TIS and CRB
models at run time, but given that CRB is the simpler device to implement,
I chose to implement it as a separate device.
The device doesn't implement any locality other than 0 for now (my laptop's
TPM doesn't either, so I assume this isn't too bad).
Tested with some success with upstream Linux and Windows 10, SeaBIOS and a
modified OVMF. The device is recognized and correctly transmits
commands/responses with both the passthrough and emulator backends. However,
the PPI ACPI part is still missing at the moment.
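For illustration, the device would be wired up to an existing TPM backend
roughly like this (the backend options are just an example):
qemu-system-x86_64 \
[...] \
-tpmdev passthrough,id=tpm0,path=/dev/tpm0 \
-device tpm-crb,tpmdev=tpm0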
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging
# gpg: Signature made Mon 29 Jan 2018 08:14:19 GMT
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
MAINTAINERS: update Dmitry Fleytman email
qemu-doc: Get rid of "vlan=X" example in the documentation
net: Allow netdevs to be used with 'hostfwd_add' and 'hostfwd_remove'
net: Allow hubports to connect to other netdevs
colo: compare the packet based on the tcp sequence number
colo: modified the payload compare function
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU can emulate hubs to connect NICs and netdevs. This is currently
primarily used for the mis-named 'vlan' feature of the networking
subsystem. The 'vlan' feature has now been marked as deprecated, since
its name is rather confusing and users often misconfigure their network
when trying to use it. But while the 'vlan' parameter should be removed
at some point, the basic idea of emulating a hub in QEMU is still good:
it's useful for bundling up the output of multiple NICs into one single
l2tp netdev, for example.
To be able to use the hubport feature without 'vlan's, there is one
missing piece: the possibility to connect a hubport to a netdev, too.
This patch adds that possibility by introducing a new "netdev=..."
parameter for hubports.
To bundle up the output of multiple NICs into one socket netdev, you can
now run QEMU with these parameters for example:
qemu-system-ppc64 ... -netdev socket,id=s1,connect=:11122 \
-netdev hubport,hubid=1,id=h1,netdev=s1 \
-netdev hubport,hubid=1,id=h2 -device e1000,netdev=h2 \
-netdev hubport,hubid=1,id=h3 -device virtio-net-pci,netdev=h3
To use the socket netdev, you have to start another QEMU as the
receiving side first, for example with network dumping enabled:
qemu-system-x86_64 -M isapc -netdev socket,id=s0,listen=:11122 \
-device ne2k_isa,netdev=s0 \
-object filter-dump,id=f1,netdev=s0,file=/tmp/dump.dat
After the ppc64 guest has tried to boot from both NICs, you can see in the
dump file (using Wireshark, for example) that the output of both NICs
(the e1000 and the virtio-net-pci) has been successfully transferred
via the socket netdev in this case.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Add a command for removing an export. It is needed for cases where we
don't want to keep the export after the operation on it has completed.
Another example is a temporary node created with blockdev-add:
if we want to delete it, we should first remove any corresponding
NBD export.
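For illustration, removing an export over QMP might look like this (the
export name is illustrative):
{ "execute": "nbd-server-remove",
  "arguments": { "name": "export0" } }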
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180119135719.24745-3-vsementsov@virtuozzo.com>
[eblake: drop dead nb_clients code]
Signed-off-by: Eric Blake <eblake@redhat.com>
Allow the user to specify a name for a new export, so that the internal
node name is neither reused nor shown to clients.
This also allows creating several exports per device.
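For example, an export with an explicit name might now be created like
this (values are illustrative):
{ "execute": "nbd-server-add",
  "arguments": { "device": "drive0", "name": "export0", "writable": false } }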
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180119135719.24745-2-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches
# gpg: Signature made Tue 23 Jan 2018 12:38:36 GMT
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (29 commits)
iotests: Disable some tests for compat=0.10
iotests: Split 177 into two parts for compat=0.10
iotests: Make 059 pass on machines with little RAM
iotests: Filter compat-dependent info in 198
iotests: Make 191 work with qcow2 options
iotests: Make 184 image-less
iotests: Make 089 compatible with compat=0.10
iotests: Fix 067 for compat=0.10
iotests: Fix 059's reference output
iotests: Fix 051 for compat=0.10
iotests: Fix 020 for vmdk
iotests: Skip 103 for refcount_bits=1
iotests: Forbid 020 for non-file protocols
iotests: Drop format-specific in _filter_img_info
iotests: Fix _img_info for backslashes
block/vmdk: Add blkdebug events
block/qcow: Add blkdebug events
qcow2: No persistent dirty bitmaps for compat=0.10
block/vmdk: Fix , instead of ; at end of line
qemu-iotests: Fix locking issue in 102
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging
Pull request
v2:
* Drop merge failure from a previous pull request that broke virtio-blk on ARM
guests
* Add Parallels XML patch series
# gpg: Signature made Mon 22 Jan 2018 16:00:40 GMT
# gpg: using RSA key 0x9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request:
block/parallels: add backing support to readv/writev
block/parallels: replace some magic numbers
block/parallels: move some structures into header
configure: add dependency
docs/interop/prl-xml: description of Parallels Disk format
block: add block_set_io_throttle virtio-blk-pci QMP example
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that iotest 093 proves that the throttling configuration
survives a blockdev-remove-medium/blockdev-insert-medium pair, the
original reason for declaring these commands experimental is gone
(see commit 6e0abc251d).
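As a sketch (assuming the stable commands identify the block device by
'id'; all values are illustrative):
{ "execute": "blockdev-remove-medium",
  "arguments": { "id": "ide0-cd0" } }
{ "execute": "blockdev-insert-medium",
  "arguments": { "id": "ide0-cd0", "node-name": "new-medium" } }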
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171110224302.14424-5-mreitz@redhat.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This is an incompatible change, which is fine as the commands are
experimental.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171110224302.14424-4-mreitz@redhat.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This reverts commits
ca6011c migration: add postcopy total blocktime into query-migrate
5f32dc8 migration: add blocktime calculation into migration-test
2f7dae9 migration: postcopy_blocktime documentation
3be98be migration: calculate vCPU blocktime on dst side
01a87f0 migration: add postcopy blocktime ctx into MigrationIncomingState
31bf06a migration: introduce postcopy-blocktime capability
as they don't build on ppc32 due to trying to do atomic accesses
on types that are larger than the host pointer type.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The block_set_io_throttle command can look up BlockBackends by the
attached qdev device ID. virtio-blk-pci is a special case because the
actual VirtIOBlock device is the "/virtio-backend" child of the PCI
adapter device.
Add a QMP schema example so clients will know how to use
block_set_io_throttle on the virtio-blk-pci device.
The alternative is to implement some sort of aliasing for qmp_get_blk()
but that is likely to cause confusion and could break future use cases.
Let's not go there.
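For instance, with a PCI adapter created as
-device virtio-blk-pci,drive=drive0,id=virtio-blk-pci0, such an example
might look roughly like this (IDs and limits are illustrative):
{ "execute": "block_set_io_throttle",
  "arguments": { "id": "virtio-blk-pci0/virtio-backend",
                 "bps": 0, "bps_rd": 0, "bps_wr": 0,
                 "iops": 1000, "iops_rd": 0, "iops_wr": 0 } }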
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180117090700.25811-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Postcopy total blocktime is available on the destination side only,
but query-migrate was previously possible only on the source. This patch
adds the ability to call query-migrate on the destination.
To be able to see the postcopy blocktime, the postcopy-blocktime
capability needs to be requested.
The query-migrate command will show a result like the following sample:
{"return":
"postcopy-vcpu-blocktime": [115, 100],
"status": "completed",
"postcopy-blocktime": 100
}}
postcopy-vcpu-blocktime contains a list where the first item corresponds
to the first vCPU in QEMU.
This patch has a drawback: it combines the states of incoming and
outgoing migration, and the ongoing migration state will overwrite the
incoming state. It would probably be better to separate query-migrate for
incoming and outgoing migration, or to add a parameter indicating the type
of migration.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Right now it can be used on the destination side to
enable vCPU blocktime calculation for postcopy live migration.
vCPU blocktime is the time from when a vCPU thread is put into
interruptible sleep until the memory page has been copied and the thread
is woken up.
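Enabling it on the destination might look like this (illustrative):
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "postcopy-blocktime", "state": true } ] } }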
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We use int (int64_t) for everything, and then we check that the value is
between 0 and 255. Change it to the valid types.
This change only happens for HMP; QMP always uses bytes and similar units.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Removing a quorum child node with x-blockdev-change results in a quorum
driver state that cannot be recreated with create options because it
would require a list with gaps. This causes trouble in at least
.bdrv_refresh_filename().
Document this problem so that we won't accidentally mark the command
stable without having addressed it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
When a node is already associated with a BlockBackend, the
x-blockdev-set-iothread command refuses to set the IOThread. This is to
prevent accidentally changing the IOThread when the nodes are in use.
When the nodes are created with -drive, they automatically get a
BlockBackend. In that case we know nothing is using them yet, so it's
safe to set the IOThread. Add a force boolean to override the check.
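A forced assignment might then look like this (node and IOThread names
are illustrative):
{ "execute": "x-blockdev-set-iothread",
  "arguments": { "node-name": "drive0", "iothread": "iothread1", "force": true } }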
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20171207201320.19284-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Currently there is no easy way for iotests to ensure that a BDS is bound
to a particular IOThread. Normally the virtio-blk device calls
blk_set_aio_context() when dataplane is enabled during guest driver
initialization. This never happens in iotests since -machine
accel=qtest means there is no guest activity (including device driver
initialization).
This patch adds a QMP command to explicitly assign IOThreads in test
cases. See qapi/block-core.json for a description of the command.
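In a test case, assigning a node to an IOThread might look roughly like
this (names are illustrative; the schema description is authoritative):
{ "execute": "x-blockdev-set-iothread",
  "arguments": { "node-name": "drive0", "iothread": "iothread0" } }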
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20171206144550.22295-9-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When you cancel an in-progress 'mirror' job (or "active `block-commit`")
with QMP `block-job-cancel`, it emits the event: BLOCK_JOB_CANCELLED.
However, when `block-job-cancel` is issued *after* `drive-mirror` has
indicated (via the event BLOCK_JOB_READY) that the source and
destination have reached synchronization:
[...] # Snip `drive-mirror` invocation & outputs
{
"execute":"block-job-cancel",
"arguments":{
"device":"virtio0"
}
}
{"return": {}}
It (`block-job-cancel`) will counterintuitively emit the event
'BLOCK_JOB_COMPLETED':
{
"timestamp":{
"seconds":1510678024,
"microseconds":526240
},
"event":"BLOCK_JOB_COMPLETED",
"data":{
"device":"virtio0",
"len":41126400,
"offset":41126400,
"speed":0,
"type":"mirror"
}
}
But this is expected behaviour: the _COMPLETED event indicates
that synchronization has successfully ended (and the destination now has
a point-in-time copy, taken at the time of the cancel).
So add a small note to this effect in 'block-core.json'. While at it,
also update the "Live disk synchronization -- drive-mirror and
blockdev-mirror" section in 'live-block-operations.rst'.
(Thanks: Max Reitz for reminding me of this caveat on IRC.)
Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When doing a live migration of a Xen guest with libxl, the images for
block devices are locked by the original QEMU process, and this prevents
the QEMU at the destination from taking the lock, so the migration fails.
From QEMU's point of view, once the RAM of a domain is migrated, there are
two QMP commands, "stop" then "xen-save-devices-state", at which point a
new QEMU is spawned at the destination.
Release the locks in "xen-save-devices-state" so the destination can take
them, if it's a live migration.
This patch adds a "live" parameter to "xen-save-devices-state", which
defaults to true so that older versions of libxenlight can work with newer
versions of QEMU.
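A non-live invocation, as issued by an updated libxenlight, might look
like this (the filename is illustrative):
{ "execute": "xen-save-devices-state",
  "arguments": { "filename": "/tmp/save", "live": false } }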
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20171114180128.17076-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
bdrv_set_read_only() is used by some block drivers to override the
read-only option given by the user. This is not how read-only images
generally work in QEMU: instead of second-guessing what the user really
meant (which currently includes making an image read-only even if the
user didn't just rely on the default, but explicitly said read-only=off),
we should error out if we can't provide what the user requested.
This adds deprecation warnings to all callers of bdrv_set_read_only() so
that the behaviour can be corrected after the usual deprecation period.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Right now it is a variable in MigrationState instead of a
MigrationParameter. The change allows setting it like the rest of the
migration parameters: from the command line, with
query-migrate-parameters, migrate-set-parameters, etc.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
A new QMP command allows the caller to continue from a given
paused state.
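For example, once the migration has paused, management might resume it
like this (assuming the 'pre-switchover' state added by this series):
{ "execute": "migrate-continue",
  "arguments": { "state": "pre-switchover" } }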
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Add two statuses for use when the 'pause-before-switchover'
capability is enabled.
'pre-switchover' is the state that we wait in for management
to allow us to continue.
'device' is the state we enter while serialising the devices
after management gives us the OK.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
When 'pause-before-switchover' is enabled, the outgoing migration
will pause before invalidating the block devices and serializing
the device state.
At this point the management layer gets the chance to clean up any
device jobs or other device users before the migration completes.
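Enabling the capability before starting the migration might look like
this (illustrative):
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "pause-before-switchover", "state": true } ] } }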
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The 'sysrq' key was mistakenly added to QEMU to deal with incorrect handling
of the 'print' key in the ps2 device:
commit f2289cb692
Author: balrog <balrog@c046a42c-6fe2-441c-8c8c-71466251a162>
Date: Wed Jun 4 10:14:16 2008 +0000
Add sysrq to key names known by "sendkey".
Adding sysrq keycode to the table enabling running sysrq debugging in
the guest via the monitor sendkey command, like:
(qemu) sendkey alt-sysrq-t
Tested on x86-64 target and Linux guest.
Signed-off-by: Ryan Harper <ryanh@us.ibm.com>
The ps2 device is now fixed with respect to modifiers and the 'print' key.
Furthermore, the handling of the 'sysrq' key has some problems of its own,
documented in the previous commit. To clean up this mess, we convert any
use of 'sysrq' into 'print' prior to dispatching the event to device models.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20171019142848.572-9-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
This change introduces a new TPM backend driver that can communicate with
swtpm (a software TPM emulator) using a Unix domain socket interface. QEMU
talks to the TPM emulator using QEMU's socket-based chardev backend device.
swtpm uses two Unix sockets for communication: one for plain TPM commands
and responses, and one for out-of-band control messages. QEMU passes the
data socket to be used over the control channel.
The swtpm and associated tools can be found here:
https://github.com/stefanberger/swtpm
The swtpm's control channel protocol specification can be found here:
https://github.com/stefanberger/swtpm/wiki/Control-Channel-Specification
Usage:
# setup TPM state directory
mkdir /tmp/mytpm
chown -R tss:root /tmp/mytpm
/usr/bin/swtpm_setup --tpm-state /tmp/mytpm --createek
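# start swtpm listening on the control socket used by the chardev below
# (an illustrative invocation; see the swtpm documentation for details)
swtpm socket --tpmstate dir=/tmp/mytpm \
--ctrl type=unixio,path=/tmp/swtpm-sock &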
# Ask qemu to use TPM emulator with given tpm state directory
qemu-system-x86_64 \
[...] \
-chardev socket,id=chrtpm,path=/tmp/swtpm-sock \
-tpmdev emulator,id=tpm0,chardev=chrtpm \
-device tpm-tis,tpmdev=tpm0 \
[...]
Signed-off-by: Amarnath Valluri <amarnath.valluri@intel.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Tested-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Make it possible to inject errors on writes performed during a
read operation due to copy-on-read semantics.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The new name is WatchdogAction, which is shorter.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-Id: <dbd61a0928821348486d0d6260be2bd3b02b6402.1504771369.git.mprivozn@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This patch adds shrinking of the image file for qcow2. As a result, it
allows us to reduce the virtual image size and free up space on the disk
without copying the image. The image can be fragmented, and shrinking is
done by punching holes in the image file.
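With this in place, shrinking an image might look like this (the size is
illustrative; qemu-img expects an explicit --shrink flag when reducing
the size):
qemu-img resize --shrink image.qcow2 4G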
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20170918124230.8152-4-pbutsykin@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Indicates how many pages we are going to send in each batch to a multifd
thread.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
--
Be consistent with defaults and documentation
Use new DEFINE_PROP_*
Rename x-multifd-group to x-multifd-page-count
Indicates the number of channels that we will create. By default we
create 2 channels.
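For instance, the channel count might be raised from the default like
this (parameter name as used while the feature is experimental;
illustrative):
{ "execute": "migrate-set-parameters",
  "arguments": { "x-multifd-channels": 4 } }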
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
--
Catch inconsistent defaults (Eric).
Improve the comment stating that the number of threads is the same as the
number of sockets
Use new DEFINE_PROP_*
Rename x-multifd-threads to x-multifd-channels
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
--
Use new DEFINE_PROP
It is a common requirement for virtual machines to send persistent
reservations, but this currently requires either running QEMU with
CAP_SYS_RAWIO, or using out-of-tree patches that let an unprivileged
QEMU bypass Linux's filter on SG_IO commands.
As an alternative mechanism, the next patches will introduce a
privileged helper to run persistent reservation commands without
expanding QEMU's attack surface unnecessarily.
The helper is invoked through a "pr-manager" QOM object, to which
file-posix.c passes SG_IO requests for PERSISTENT RESERVE OUT and
PERSISTENT RESERVE IN commands. For example:
$ qemu-system-x86_64 \
  -device virtio-scsi \
  -object pr-manager-helper,id=helper0,path=/var/run/qemu-pr-helper.sock \
  -drive if=none,id=hd,driver=raw,file.filename=/dev/sdb,file.pr-manager=helper0 \
  -device scsi-block,drive=hd
or:
$ qemu-system-x86_64 \
  -device virtio-scsi \
  -object pr-manager-helper,id=helper0,path=/var/run/qemu-pr-helper.sock \
  -blockdev node-name=hd,driver=raw,file.driver=host_device,file.filename=/dev/sdb,file.pr-manager=helper0 \
  -device scsi-block,drive=hd
Multiple pr-manager implementations are conceivable and possible, though
only one is implemented right now. For example, a pr-manager could:
- talk directly to the multipath daemon from a privileged QEMU
(i.e. QEMU links to libmpathpersist); this makes reservation work
properly with multipath, but still requires CAP_SYS_RAWIO
- use the Linux IOC_PR_* ioctls (they require CAP_SYS_ADMIN though)
- more interestingly, implement reservations directly in QEMU
through file system locks or a shared database (e.g. sqlite)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
block/throttle.c uses existing I/O throttle infrastructure inside a
block filter driver. I/O operations are intercepted in the filter's
read/write coroutines, and referred to block/throttle-groups.c
The driver can be used with the syntax
-drive driver=throttle,file.filename=foo.qcow2,throttle-group=bar
which registers the throttle filter node with the ThrottleGroup 'bar'. The
given group must be created beforehand with object-add or -object.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
ThrottleGroup is converted to an object. This will allow the future
throttle block filter driver easy creation and configuration of throttle
groups in QMP and the CLI.
A new QAPI struct, ThrottleLimits, is introduced to provide a shared
struct for all throttle configuration needs in QMP.
ThrottleGroups can be created via CLI as
-object throttle-group,id=foo,x-iops-total=100,x-..
where x-* are individual limit properties. Since we can't add non-scalar
properties in -object this interface must be used instead. However,
setting these properties must be disabled after initialization because
certain combinations of limits are forbidden and thus configuration
changes should be done in one transaction. The individual properties
will go away when support for non-scalar values in CLI is implemented
and thus are marked as experimental.
ThrottleGroup also has a `limits` property that uses the ThrottleLimits
struct. It can be used to create ThrottleGroups or set the
configuration in existing groups as follows:
{ "execute": "object-add",
"arguments": {
"qom-type": "throttle-group",
"id": "foo",
"props" : {
"limits": {
"iops-total": 100
}
}
}
}
{ "execute" : "qom-set",
"arguments" : {
"path" : "foo",
"property" : "limits",
"value" : {
"iops-total" : 99
}
}
}
This also means a group's configuration can be fetched with qom-get.
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently, a FOO_lookup is an array of strings terminated by a NULL
sentinel.
A future patch will generate enums with "holes". NULL-termination
will cease to work then.
To prepare for that, store the length in the FOO_lookup by wrapping it
in a struct and adding a member for the length.
The sentinel will be dropped next.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170822132255.23945-13-marcandre.lureau@redhat.com>
[Basically redone]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-16-git-send-email-armbru@redhat.com>
[Rebased]
The next commit will put it to use. It may look pointless now, but we're
going to change the FOO_lookup's type, and then it'll help.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-13-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The lookup tables have a sentinel, so there is no need to make callers
pass their size.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-3-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[Rebased, commit message corrected]
The generated QEMU QMP reference is now structured as follows:
1.1 Introduction
1.2 Stability Considerations
1.3 Common data types
1.4 Socket data types
1.5 VM run state
1.6 Cryptography
1.7 Block devices
1.7.1 Block core (VM unrelated)
1.7.2 QAPI block definitions (vm unrelated)
1.8 Character devices
1.9 Net devices
1.10 Rocker switch device
1.11 TPM (trusted platform module) devices
1.12 Remote desktop
1.12.1 Spice
1.12.2 VNC
1.13 Input
1.14 Migration
1.15 Transactions
1.16 Tracing
1.17 QMP introspection
1.18 Miscellanea
Section "1.18 Miscellanea" is still too big: it documents 134 symbols.
Section "1.7.1 Block core (VM unrelated)" is also rather big: 128
symbols. All the others are of reasonable size.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503602048-12268-17-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
query-version and query-commands are in common.json for no good
reason. Several similar queries are in qapi-schema.json. Move them
there.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503602048-12268-16-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Except for block-core.json, the sub-schemas are self-contained: if
they use a symbol defined in another sub-schema, they include that
sub-schema. To check, feed the sub-schema to qapi2texi (or any other
QAPI generator) along with the pragma from qapi-schema.json.
Fix up things to make block-core.json self-contained, too.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503602048-12268-15-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>