Commit Graph

11266 Commits

Author SHA1 Message Date
Peter Maydell
6f0e9c26db Generalize memory encryption models

Merge remote-tracking branch 'remotes/dg-gitlab/tags/cgs-pull-request' into staging

Generalize memory encryption models

A number of hardware platforms are implementing mechanisms whereby the
hypervisor does not have unfettered access to guest memory, in order
to mitigate the security impact of a compromised hypervisor.

AMD's SEV implements this with in-cpu memory encryption, and Intel has
its own memory encryption mechanism.  POWER has an upcoming mechanism
to accomplish this in a different way, using a new memory protection
level plus a small trusted ultravisor.  s390 also has a protected
execution environment.

The current code (committed or draft) for these features has each
platform's version configured entirely differently.  That doesn't seem
ideal for users, or particularly for management layers.

AMD SEV introduces a notionally generic machine option
"memory-encryption", but it doesn't actually cover any cases other
than SEV.

This series is a proposal to at least partially unify configuration
for these mechanisms, by renaming and generalizing AMD's
"memory-encryption" property.  It is replaced by a
"confidential-guest-support" property pointing to a platform specific
object which configures and manages the specific details.

Note to Ram Pai: the documentation I've included for PEF is very
minimal.  If you could send a patch expanding on that, it would be
very helpful.

Changes since v8:
 * Rebase
 * Fixed some cosmetic typos
Changes since v7:
 * Tweaked and clarified meaning of the 'ready' flag
 * Polished the interface to the PEF internals
 * Shifted initialization for s390 PV later (I hope I've finally got
   this after apply_cpu_model() where it needs to be)
Changes since v6:
 * Moved to using OBJECT_DECLARE_TYPE and OBJECT_DEFINE_TYPE macros
 * Assorted minor fixes
Changes since v5:
 * Renamed from "securable guest memory" to "confidential guest
   support"
 * Simpler reworking of x86 boot time flash encryption
 * Added a bunch of documentation
 * Fixed some compile errors on POWER
Changes since v4:
 * Renamed from "host trust limitation" to "securable guest memory",
   which I think is marginally more descriptive
 * Re-organized initialization, because the previous model called at
   kvm_init didn't work for s390
 * Assorted fixes to the s390 implementation; rudimentary testing
   (gitlab CI) only
Changes since v3:
 * Rebased
 * Added first cut at handling of s390 protected virtualization
Changes since RFCv2:
 * Rebased
 * Removed preliminary SEV cleanups (they've been merged)
 * Changed name to "host trust limitation"
 * Added migration blocker to the PEF code (based on SEV's version)
Changes since RFCv1:
 * Rebased
 * Fixed some errors pointed out by Dave Gilbert

# gpg: Signature made Mon 08 Feb 2021 06:07:27 GMT
# gpg:                using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dg-gitlab/tags/cgs-pull-request:
  s390: Recognize confidential-guest-support option
  confidential guest support: Alter virtio default properties for protected guests
  spapr: PEF: prevent migration
  spapr: Add PEF based confidential guest support
  confidential guest support: Update documentation
  confidential guest support: Move SEV initialization into arch specific code
  confidential guest support: Introduce cgs "ready" flag
  sev: Add Error ** to sev_kvm_init()
  confidential guest support: Rework the "memory-encryption" property
  confidential guest support: Move side effect out of machine_set_memory_encryption()
  sev: Remove false abstraction of flash encryption
  confidential guest support: Introduce new confidential guest support class
  qom: Allow optional sugar props

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-08 11:11:26 +00:00
David Gibson
651615d92d s390: Recognize confidential-guest-support option
At least some s390 cpu models support "Protected Virtualization" (PV),
a mechanism to protect guests from eavesdropping by a compromised
hypervisor.

This is similar in function to other mechanisms like AMD's SEV and
POWER's PEF, which are controlled by the "confidential-guest-support"
machine option.  s390 is a slightly special case, because we already
supported PV, simply by using a CPU model with the required feature
(S390_FEAT_UNPACK).

To integrate this with the option used by other platforms, we
implement the following compromise:

 - When the confidential-guest-support option is set, s390 will
   recognize it, verify that the CPU can support PV (failing if not)
   and set the virtio default options necessary for encrypted or
   protected guests, as on other platforms.  That is, if
   confidential-guest-support is set, we will either create a guest
   capable of entering PV mode, or fail outright.

 - If confidential-guest-support is not set, guests might still be
   able to enter PV mode, if the CPU has the right model.  This may be
   a little surprising, but shouldn't actually be harmful.

To start a guest supporting Protected Virtualization using the new
option, use the command line arguments:
    -object s390-pv-guest,id=pv0 -machine confidential-guest-support=pv0

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
2021-02-08 16:57:38 +11:00
David Gibson
6c8ebe30ea spapr: Add PEF based confidential guest support
Some upcoming POWER machines have a system called PEF (Protected
Execution Facility) which uses a small ultravisor to allow guests to
run in a way that they can't be eavesdropped on by the hypervisor.
The effect is roughly similar to AMD SEV, although the mechanisms are
quite different.

Most of the work for this is done between the guest, KVM and the
ultravisor, with little need for involvement by qemu.  However, qemu
does need to tell KVM to allow secure VMs.

Because the availability of secure mode is a guest visible difference
which depends on having the right hardware and firmware, we don't
enable this by default.  In order to run a secure guest you need to
create a "pef-guest" object and set the confidential-guest-support
property to point to it.

Note that this just *allows* secure guests; the architecture of PEF is
such that the guest still needs to talk to the ultravisor to enter
secure mode.  Qemu has no direct way of knowing whether the guest is
in secure mode, and certainly can't know until well after machine
creation time.

To start a PEF-capable guest, use the command line options:
    -object pef-guest,id=pef0 -machine confidential-guest-support=pef0

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
2021-02-08 16:57:38 +11:00
David Gibson
abc27d4241 confidential guest support: Introduce cgs "ready" flag
The platform specific details of mechanisms for implementing
confidential guest support may require setup at various points during
initialization.  Thus, it's not really feasible to have a single cgs
initialization hook, but instead each mechanism needs its own
initialization calls in arch or machine specific code.

However, to make it harder to have a bug where a mechanism isn't
properly initialized under some circumstances, we want to have a
common place, late in boot, where we verify that cgs has been
initialized if it was requested.

This patch introduces a ready flag to the ConfidentialGuestSupport
base type to accomplish this, which we verify in
qemu_machine_creation_done().
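
A minimal sketch of that late check, assuming the names this series
uses (a 'cgs' pointer on MachineState and a 'ready' flag on
ConfidentialGuestSupport; treat the details as illustrative):

    /* sketch: late in startup, e.g. qemu_machine_creation_done() */
    if (machine->cgs) {
        /* arch or machine specific code must have marked cgs ready */
        assert(machine->cgs->ready);
    }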

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
2021-02-08 16:57:38 +11:00
David Gibson
c9f5aaa6bc sev: Add Error ** to sev_kvm_init()
This allows failures to be reported richly and idiomatically.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
2021-02-08 16:57:38 +11:00
David Gibson
e0292d7c62 confidential guest support: Rework the "memory-encryption" property
Currently the "memory-encryption" property is only looked at once we
get to kvm_init().  Although protection of guest memory from the
hypervisor isn't something that could really ever work with TCG, it's
not conceptually tied to the KVM accelerator.

In addition, the way the string property is resolved to an object is
almost identical to how a QOM link property is handled.

So, create a new "confidential-guest-support" link property which sets
this QOM interface link directly in the machine.  For compatibility we
keep the "memory-encryption" property, but it is now implemented in
terms of the new property.
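
A sketch of how such a link property can be registered with the
standard QOM helper (the check callback name and the 'obj'/'ms'
locals are illustrative):

    /* sketch: register the link property on the machine object;
     * ms is the MachineState */
    object_property_add_link(obj, "confidential-guest-support",
                             TYPE_CONFIDENTIAL_GUEST_SUPPORT,
                             (Object **)&ms->cgs,
                             machine_check_confidential_guest_support,
                             OBJ_PROP_LINK_STRONG);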

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
2021-02-08 16:57:38 +11:00
David Gibson
aacdb84413 sev: Remove false abstraction of flash encryption
When AMD's SEV memory encryption is in use, flash memory banks (which
are initialized by pc_system_flash_map()) need to be encrypted with
the guest's key, so that the guest can read them.

That's abstracted via the kvm_memcrypt_encrypt_data() callback in the
KVM state... except that it doesn't really abstract much at all.

For starters, the only call site is in code specific to the 'pc'
family of machine types, so it's obviously specific to those and to
x86 to begin with.  But it makes a bunch of further assumptions that
need not be true about an arbitrary confidential guest system based on
memory encryption, let alone one based on other mechanisms:

 * it assumes that the flash memory is defined to be encrypted with the
   guest key, rather than being shared with the hypervisor
 * it assumes that the hypervisor has some mechanism to encrypt data into
   the guest, even though it can't decrypt it out, since that's the whole
   point
 * the interface assumes that this encryption can be done in place, which
   implies that the hypervisor can write into a confidential guest's
   memory, even if what it writes isn't meaningful

So really, this "abstraction" is actually pretty specific to the way SEV
works.  This patch therefore removes it and instead has the PC flash
initialization code call into an SEV-specific callback.
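
A sketch of the resulting call site (the callback name and the local
variables are assumed from this description, not authoritative):

    /* sketch: x86 flash setup path, only encrypt when SEV is active */
    if (sev_enabled()) {
        /* encrypt the flash contents in place with the guest's key */
        sev_encrypt_flash(flash_ptr, flash_size, &error_fatal);
    }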

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
2021-02-08 16:57:38 +11:00
David Gibson
f91f9f254b confidential guest support: Introduce new confidential guest support class
Several architectures have mechanisms which are designed to protect
guest memory from interference or eavesdropping by a compromised
hypervisor.  AMD SEV does this with in-chip memory encryption and
Intel's TDX can do similar things.  POWER's Protected Execution
Facility (PEF) accomplishes a similar goal using an ultravisor and
new memory protection features, instead of encryption.

To (partially) unify handling for these, this introduces a new
ConfidentialGuestSupport QOM base class.  "Confidential" is kind of vague,
but "confidential computing" seems to be the buzzword about these schemes,
and "secure" or "protected" are often used in connection to unrelated
things (such as hypervisor-from-guest or guest-from-guest security).

The "support" in the name is significant because in at least some of the
cases it requires the guest to take specific actions in order to protect
itself from hypervisor eavesdropping.
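
A minimal sketch of the new base type, using the OBJECT_DECLARE_TYPE
convention mentioned in the cover letter (the field set shown is
illustrative):

    /* sketch: include/exec/confidential-guest-support.h */
    #define TYPE_CONFIDENTIAL_GUEST_SUPPORT "confidential-guest-support"
    OBJECT_DECLARE_TYPE(ConfidentialGuestSupport,
                        ConfidentialGuestSupportClass,
                        CONFIDENTIAL_GUEST_SUPPORT)

    struct ConfidentialGuestSupport {
        Object parent;
        bool ready;   /* set once platform specific setup has completed */
    };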

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-02-08 16:57:37 +11:00
Greg Kurz
a8dc82ce82 qom: Allow optional sugar props
Global properties have an @optional field, which allows a given
property to be applied to a given type even if one of its subclasses
doesn't support it. This is especially used in the compat code when
dealing with the "disable-modern" and "disable-legacy" properties and
the "virtio-pci" type.

Allow object_register_sugar_prop() to set this field as well.
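
The resulting helper signature plausibly becomes (a sketch; the added
'optional' parameter is the point of the patch):

    /* sketch: let callers mark a sugar property as optional */
    void object_register_sugar_prop(const char *driver, const char *prop,
                                    const char *value, bool optional);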

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <159738953558.377274.16617742952571083440.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2021-02-08 16:57:37 +11:00
Mark Cave-Ayland
cdf01ca481 utils/fifo8: add VMSTATE_FIFO8_TEST macro
Rewrite the existing VMSTATE_FIFO8 macro to use VMSTATE_FIFO8_TEST as per the
standard pattern in include/migration/vmstate.h.
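
Following that pattern, the result plausibly looks like this sketch:

    /* sketch: include/migration/vmstate.h style _TEST wrapper */
    #define VMSTATE_FIFO8_TEST(_field, _state, _test)                     \
        VMSTATE_STRUCT_TEST(_field, _state, _test, 1, vmstate_fifo8, Fifo8)

    #define VMSTATE_FIFO8(_field, _state)                                 \
        VMSTATE_FIFO8_TEST(_field, _state, NULL)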

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210128221728.14887-3-mark.cave-ayland@ilande.co.uk>
2021-02-07 20:38:34 +00:00
Claudio Fontana
fb6916dd6c accel: introduce AccelCPUClass extending CPUClass
Add a new optional interface to CPUClass, which allows accelerators
to extend the CPUClass with additional accelerator-specific
initializations.

This will allow us to separate the target cpu code that is specific
to each accelerator, and to register it automatically with object
hierarchy lookup depending on accelerator code availability,
as part of the accel_init_interfaces() initialization step.
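
A sketch of what such an interface class can look like (the hook
names are assumptions for illustration):

    /* sketch: an accelerator-specific extension of CPUClass */
    struct AccelCPUClass {
        ObjectClass parent_class;

        void (*cpu_class_init)(CPUClass *cc);      /* per-class setup */
        void (*cpu_instance_init)(CPUState *cpu);  /* per-instance setup */
    };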

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-19-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:15 -10:00
Claudio Fontana
b86f59c715 accel: replace struct CpusAccel with AccelOpsClass
This will allow us to centralize the registration of
the cpus.c module accelerator operations (in accel/accel-softmmu.c),
and trigger it automatically using object hierarchy lookup from the
new accel_init_interfaces() initialization step, depending just on
which accelerators are available in the code.

Rename all tcg-cpus.c, kvm-cpus.c, etc to tcg-accel-ops.c,
kvm-accel-ops.c, etc, matching the object type names.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-18-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:15 -10:00
Claudio Fontana
940e43aa30 accel: extend AccelState and AccelClass to user-mode
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

[claudio: rebased on Richard's splitwx work]

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-17-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:15 -10:00
Claudio Fontana
7827168471 cpu: tcg_ops: move to tcg-cpu-ops.h, keep a pointer in CPUClass
We cannot in principle make the TCG Operations field definitions
conditional on CONFIG_TCG in code that is included by both common_ss
and specific_ss modules.

Therefore, what we can safely do to restrict the TCG fields to
TCG-only builds is to move all TCG cpu operations into a separate
header file, which is only included by TCG target-specific code.

This leaves just a NULL pointer in the cpu.h for the non-TCG builds.

This also tidies up the code in all targets a bit, having all TCG cpu
operations neatly contained by a dedicated data struct.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-16-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:15 -10:00
Claudio Fontana
c73bdb35a9 cpu: move debug_check_watchpoint to tcg_ops
commit 568496c0c0 ("cpu: Add callback to check architectural watchpoint
match") and commit 3826121d92 ("target-arm: Implement checking of fired
watchpoint") introduced an ARM-specific hack for cpu_check_watchpoint.

Make debug_check_watchpoint optional, and move it to tcg_ops.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-15-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Claudio Fontana
9ea9087bb4 cpu: move adjust_watchpoint_address to tcg_ops
commit 4061200059 ("arm: Correctly handle watchpoints for BE32 CPUs")
introduced this ARM-specific, TCG-specific hack to adjust the address
before checking it with cpu_check_watchpoint.

Make adjust_watchpoint_address optional and move it to tcg_ops.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-14-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Claudio Fontana
8535dd702d cpu: move do_unaligned_access to tcg_ops
make it consistently SOFTMMU-only.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

[claudio: make the field presence in cpu.h unconditional, removing the ifdefs]
Message-Id: <20210204163931.7358-12-cfontana@suse.de>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Claudio Fontana
cbc183d2d9 cpu: move cc->transaction_failed to tcg_ops
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

[claudio: wrap target code around CONFIG_TCG and !CONFIG_USER_ONLY]

Note: need to be careful with the use of CONFIG_USER_ONLY,
avoiding its use in headers used by common_ss code (should be
poisoned).

Message-Id: <20210204163931.7358-11-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Claudio Fontana
0545608056 cpu: move cc->do_interrupt to tcg_ops
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-10-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
e9ce43e97a cpu: Move debug_excp_handler to tcg_ops
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-8-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
e124536f37 cpu: Move tlb_fill to tcg_ops
[claudio: wrapped target code in CONFIG_TCG]

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-7-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
48c1a3e303 cpu: Move cpu_exec_* to tcg_ops
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: wrapped target code in CONFIG_TCG]
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-6-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
ec62595bab cpu: Move synchronize_from_tb() to tcg_ops
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: wrapped target code in CONFIG_TCG, reworded comments]
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-5-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Claudio Fontana
7df5e3d6ad accel/tcg: split TCG-only code from cpu_exec_realizefn
Move the TCG-only code away, and make it compile only when TCG is
enabled.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[claudio: moved the prototypes from hw/core/cpu.h to exec/cpu-all.h]
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-4-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
e9e51b7154 cpu: Introduce TCGCpuOperations struct
The TCG-specific CPU methods will be moved to a separate struct,
to make it easier to move accel-specific code outside generic CPU
code in the future.  Start by moving tcg_initialize().

The new CPUClass.tcg_ops field may eventually become a pointer,
but keep it an embedded struct for now, to make code conversion
easier.
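
A sketch of the embedded struct after this first step (only the
initialize hook has moved so far; names are illustrative):

    /* sketch: include/hw/core/cpu.h */
    typedef struct TCGCpuOperations {
        void (*initialize)(void);   /* was CPUClass::tcg_initialize */
    } TCGCpuOperations;

    struct CPUClass {
        /* ...other fields unchanged... */
        TCGCpuOperations tcg_ops;   /* embedded for now, not a pointer */
    };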

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: move TCGCpuOperations inside include/hw/core/cpu.h]
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-2-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Richard Henderson
13e71f08bf tcg/tci: Make tci_tb_ptr thread-local
Each thread must have its own pc, even under TCI.

Remove the GETPC ifdef, because GETPC is always available for
helpers, and thus is always required.  Move the assignment
under INDEX_op_call, because the value is only visible when
we make a call to a helper function.
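
The storage-class change itself is tiny; a sketch:

    /* sketch: tcg/tci.c, one program counter per thread */
    __thread uintptr_t tci_tb_ptr;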

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204014509.882821-6-richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Philippe Mathieu-Daudé
37c8c531d6 exec/cpu-defs: Remove TCG backends dependency
"exec/cpu-defs.h" contains generic CPU definitions for the
TCG frontends (mostly related to TLB). TCG backends definitions
aren't relevant here.

See tcg/README description:

  4) Backend

  tcg-target.h contains the target specific definitions. tcg-target.c.inc
  contains the target specific code; it is #included by tcg/tcg.c, rather
  than being a standalone C file.

So far only "tcg/tcg.h" requires these headers.

Remove the "target-tcg.h" header dependency on TCG frontends, so we
don't have to rebuild all frontends when hacking a single backend.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210204191423.1754158-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Marian Postevca
602b458201 acpi: Permit OEM ID and OEM table ID fields to be changed
QEMU's ACPI table generation sets the OEM ID and OEM table ID fields
to "BOCHS " and "BXPCxxxx", where "xxxx" is replaced by the ACPI
table name.

Some games like Red Dead Redemption 2 seem to check the ACPI OEM ID
and OEM table ID for the strings "BOCHS" and "BXPC", and if they are
found, the game crashes (this may be an intentional detection
mechanism to prevent playing the game in a virtualized environment).

This patch allows you to override these default values.

The feature can be used in this manner:
qemu -machine oem-id=ABCDEF,oem-table-id=GHIJKLMN

The oem-id string can be up to 6 bytes in size, and the
oem-table-id string can be up to 8 bytes in size. If the strings are
shorter than their respective sizes, they will be padded with spaces.
If either of these parameters is not set, the current default value
will be used for the missing one.

Note that the OEM Table ID field will not be extended with the
name of the table, but will use either the default name or the user
provided one.

This does not affect the -acpitable option (for user-defined ACPI
tables), which has precedence over the -machine option.

Signed-off-by: Marian Postevca <posteuca@mutex.one>
Message-Id: <20210119003216.17637-3-posteuca@mutex.one>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2021-02-05 08:52:59 -05:00
Paolo Bonzini
08b1df8ff4 pci: add romsize property
This property can be useful for distros to set up known-good ROM sizes for
migration purposes.  The VM will fail to start if the ROM is too large,
and migration compatibility will not be broken if the ROM is too small.

Note that even though romsize is a uint32_t, it has to be between 1
(because empty ROM files are not accepted, and romsize must be larger
than the ROM file) and 2^31 (because larger values are not powers of
two and are rejected).
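
For example, to pin a NIC option ROM to a 256 KiB power-of-two size
(the device and ROM file here are purely illustrative):

    qemu -device e1000,romfile=efi-e1000.rom,romsize=262144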

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20201218182736.1634344-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210203131828.156467-3-pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
2021-02-05 08:52:58 -05:00
Marc-André Lureau
3cddb8b9e0 display/ui: add a callback to indicate GL state is flushed
Displaying rendered resources requires blocking the qemu GPU to avoid
extra framebuffer copies. For an external display, via Spice currently,
there is a callback to block/unblock the rendering in the same thread.

But with the vhost-user-gpu backend, the qemu process doesn't handle
the rendering itself, and the blocking callback isn't effective.
Instead, the backend must be notified when the display code is done.

Fix this by adding a new GraphicHwOps callback to indicate the GL state
is flushed and we are done manipulating the shared GL resources. Call
it from the gtk and spice displays.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-19-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
f8f3c2719e virtio-gpu: avoid re-entering cmdq processing
The next patch will notify when the GL state got flushed, which will
resume the queue processing. However, if this happens within the
caller's context, it will end up in a flush/update loop that overflows
the stack.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-18-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
2606519b74 ui: add egl dmabuf import to gtkglarea
GtkGLArea is used on wayland, where EGL is usually available.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-17-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
52a37e20db ui: check gtk-egl dmabuf support
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-16-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
0df5c72b3b ui: add qemu_egl_has_dmabuf helper
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-15-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
d0e137bc9a ui: add a DCLOps callback to check dmabuf support
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-13-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
a7dfbe289e ui: add an optional get_flags callback to GraphicHwOps
Those flags can be used to express different requirements for the
display or other needs.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-12-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
ff64d44fb8 vhost-user-gpu: add a configuration flag for dmabuf usage
Let's inform VirtioGPUBase that vhost-user-gpu requires DMABUF messages.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-11-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
2cc002483e ui: remove console_has_gl_dmabuf()
This check is currently limited. It is only used by vhost-user-gpu (not
by vfio-display), and will print an error repeatedly during run-time.

We are going to dissociate the GL context from the
DisplayChangeListener, and listeners may come and go. The following
patches will address this differently.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-10-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
227d8444bb ui: annotate DCLOps callback requirements
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-9-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
568b12fccf ui: add gd_gl_area_scanout_disable
Require the callback, drop the fallback path.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-8-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
c110d949b8 ui: remove gl_ctx_get_current
There are no users left.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210204105232.834642-7-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 15:58:54 +01:00
Marc-André Lureau
a652b12013 spice: delay starting until displays are initialized
QEMU used to run qemu_spice.display_init() before vm_start(), and
QXL/display interfaces were started then. Now, vm_start() happens
before QXL/display interfaces are added, and the Spice server doesn't
automatically start them in this case (fixed in spice git).

Fixes a Spice regression introduced after 5.2 by the refactoring
commits b4e1a34211 ("vl: remove separate preconfig main_loop") and
facf7c60ee ("vl: initialize displays _after_ exiting preconfiguration"),
probably among others.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210129152351.161971-1-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2021-02-04 14:32:40 +01:00
Peter Maydell
1ed9228f63 nbd patches for 2021-02-02

Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2021-02-02-v2' into staging

nbd patches for 2021-02-02

- more cleanup from iotest python conversion
- progress towards consistent use of signed 64-bit types through the block layer
- fix some crashes related to NBD reconnect

# gpg: Signature made Wed 03 Feb 2021 14:20:01 GMT
# gpg:                using RSA key 71C2CC22B1C4602927D2F3AAA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
# gpg:                 aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full]
# gpg:                 aka "[jpeg image of size 6874]" [full]
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2  F3AA A7A1 6B4A 2527 436A

* remotes/ericb/tags/pull-nbd-2021-02-02-v2:
  nbd: make nbd_read* return -EIO on error
  block/nbd: only enter connection coroutine if it's present
  block/nbd: only detach existing iochannel from aio_context
  block/io: use int64_t bytes in copy_range
  block/io: support int64_t bytes in read/write wrappers
  block/io: support int64_t bytes in bdrv_co_p{read,write}v_part()
  block/io: support int64_t bytes in bdrv_aligned_preadv()
  block/io: support int64_t bytes in bdrv_co_do_copy_on_readv()
  block/io: support int64_t bytes in bdrv_aligned_pwritev()
  block/io: support int64_t bytes in bdrv_co_do_pwrite_zeroes()
  block/io: use int64_t bytes in driver wrappers
  block: use int64_t as bytes type in tracked requests
  block/io: improve bdrv_check_request: check qiov too
  block/throttle-groups: throttle_group_co_io_limits_intercept(): 64bit bytes
  block/io: bdrv_pad_request(): support qemu_iovec_init_extended failure
  block/io: refactor bdrv_pad_request(): move bdrv_pad_request() up
  block: fix theoretical overflow in bdrv_init_padding()
  util/iov: make qemu_iovec_init_extended() honest
  block: refactor bdrv_check_request: add errp
  iotests: Fix expected whitespace for 185

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-03 14:52:12 +00:00
Roman Kagan
5082fc82a6 nbd: make nbd_read* return -EIO on error
NBD reconnect logic considers the error code from the functions that
read NBD messages to tell if reconnect should be attempted or not: it is
attempted on -EIO, otherwise the client transitions to NBD_CLIENT_QUIT
state (see nbd_channel_error).  This error code is propagated from the
primitives like nbd_read.

The problem, however, is that nbd_read itself turns every error into -1
rather than -EIO.  As a result, if the NBD server happens to die while
sending the message, the client in QEMU receives less data than it
expects, considers this a fatal error, and won't attempt to
reestablish the connection.

Fix it by turning every negative return from qio_channel_read_all into
-EIO returned from nbd_read.  Apparently that was the original behavior,
but got broken later.  Also adjust nbd_readXX to follow.
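
A simplified sketch of the fixed helper (parameters trimmed for
illustration; the key point is returning -EIO rather than -1):

    /* sketch: include/block/nbd.h, propagate -EIO to reconnect logic */
    static inline int nbd_read(QIOChannel *ioc, void *buffer, size_t size,
                               Error **errp)
    {
        /* any channel error or short read becomes -EIO */
        return qio_channel_read_all(ioc, buffer, size, errp) < 0 ? -EIO : 0;
    }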

Fixes: e6798f06a6 ("nbd: generalize usage of nbd_read")
Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210129073859.683063-4-rvkagan@yandex-team.ru>
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-02-03 08:17:12 -06:00
Vladimir Sementsov-Ogievskiy
a5215b8fdf block/io: use int64_t bytes in copy_range
We are generally moving to int64_t for both offset and bytes parameters
on all io paths.

The main motivation is the realization of a 64-bit write_zeroes
operation for fast zeroing of large disk chunks, up to the whole disk.

We chose a signed type to be consistent with off_t (which is signed)
and to allow for a signed return type (where a negative value means
an error).

So, convert now the copy_range parameters, which are already 64 bit,
to a signed type.

This is safe, as we don't work with requests overflowing
BDRV_MAX_LENGTH (which is less than INT64_MAX), and we do check the
requests in bdrv_co_copy_range_internal() (via bdrv_check_request32(),
which calls bdrv_check_request()).
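
The converted prototype plausibly becomes (a sketch):

    /* sketch: offsets and bytes are now signed 64-bit */
    int coroutine_fn bdrv_co_copy_range(BdrvChild *src, int64_t src_offset,
                                        BdrvChild *dst, int64_t dst_offset,
                                        int64_t bytes,
                                        BdrvRequestFlags read_flags,
                                        BdrvRequestFlags write_flags);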

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201211183934.169161-17-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-02-03 08:17:12 -06:00
Vladimir Sementsov-Ogievskiy
e9e52efdc5 block/io: support int64_t bytes in read/write wrappers
We are generally moving to int64_t for both offset and bytes parameters
on all io paths.

The main motivation is the realization of a 64-bit write_zeroes
operation for fast zeroing of large disk chunks, up to the whole disk.

We chose a signed type to be consistent with off_t (which is signed)
and to allow for a signed return type (where a negative value means
an error).

Now, since bdrv_co_preadv_part() and bdrv_co_pwritev_part() have been
updated, update all their wrappers.

For all of them the type of 'bytes' is widening, so callers are safe.
We update request_fn in blkverify.c simultaneously. Still, it's just a
pointer to one of bdrv_co_pwritev() or bdrv_co_preadv(), and the type
is widening for callers of the request_fn anyway.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201211183934.169161-16-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: grammar tweak]
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-02-03 08:17:12 -06:00
Vladimir Sementsov-Ogievskiy
37e9403ea8 block/io: support int64_t bytes in bdrv_co_p{read,write}v_part()
We are generally moving to int64_t for both offset and bytes parameters
on all io paths.

The main motivation is the realization of a 64-bit write_zeroes
operation for fast zeroing of large disk chunks, up to the whole disk.

We chose a signed type to be consistent with off_t (which is signed)
and to allow for a signed return type (where a negative value means
an error).

So, prepare bdrv_co_preadv_part() and bdrv_co_pwritev_part() and their
remaining dependencies now.

bdrv_pad_request() is updated simultaneously, as a pointer to bytes is
passed to it from both bdrv_co_pwritev_part() and bdrv_co_preadv_part().

So, all callers of bdrv_pad_request() are updated to pass 64bit bytes.
bdrv_pad_request() is already good for 64bit requests; add a
corresponding assertion.

Look at bdrv_co_preadv_part() and bdrv_co_pwritev_part(). The type is
widening, so callers are safe. Let's look inside the functions.

In bdrv_co_preadv_part() and bdrv_aligned_pwritev() we only pass bytes
to other, already int64_t interfaces (and do some obviously safe
calculations), so it's OK.

In bdrv_co_do_zero_pwritev() aligned_bytes may become large now, but
it's still passed to bdrv_aligned_pwritev(), which supports int64_t
bytes.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201211183934.169161-15-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-02-03 08:17:11 -06:00
Eric Blake
8024726459 block: use int64_t as bytes type in tracked requests
We are generally moving to int64_t for both offset and bytes parameters
on all io paths.

The main motivation is the realization of a 64-bit write_zeroes
operation for fast zeroing of large disk chunks, up to the whole disk.

We chose a signed type to be consistent with off_t (which is signed)
and to allow for a signed return type (where a negative value means
an error).

All requests in block/io must not overflow BDRV_MAX_LENGTH, and all
external users of BdrvTrackedRequest already have corresponding
assertions, so we are safe. Still, add some assertions.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201211183934.169161-9-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-02-03 08:14:15 -06:00
Vladimir Sementsov-Ogievskiy
801625e69d block/throttle-groups: throttle_group_co_io_limits_intercept(): 64bit bytes
The function is called from 64bit io handlers, and bytes is just passed
to throttle_account(), which is 64bit too (unsigned though). So, let's
convert the intermediate argument to 64bit too.

This patch is the first in the 64-bit-blocklayer series, in which we
are generally moving to int64_t for both offset and bytes parameters
on all io paths. The main motivation is the realization of a 64-bit
write_zeroes operation for fast zeroing of large disk chunks, up to
the whole disk.

We chose a signed type to be consistent with off_t (which is signed)
and to allow for a signed return type (where a negative value means
an error).

Patch-correctness audit by Eric Blake:

  Caller has 32-bit, this patch now causes widening which is safe:
  block/block-backend.c: blk_do_preadv() passes 'unsigned int'
  block/block-backend.c: blk_do_pwritev_part() passes 'unsigned int'
  block/throttle.c: throttle_co_pwrite_zeroes() passes 'int'
  block/throttle.c: throttle_co_pdiscard() passes 'int'

  Caller has 64-bit, this patch fixes potential bug where pre-patch
  could narrow, except it's easy enough to trace that callers are still
  capped at 2G actions:
  block/throttle.c: throttle_co_preadv() passes 'uint64_t'
  block/throttle.c: throttle_co_pwritev() passes 'uint64_t'

  Implementation in question: block/throttle-groups.c
  throttle_group_co_io_limits_intercept() takes 'unsigned int bytes'
  and uses it: argument to util/throttle.c throttle_account(uint64_t)

  All safe: it patches a latent bug, and does not introduce any 64-bit
  gotchas once throttle_co_p{read,write}v are relaxed, and assuming
  throttle_account() is not buggy.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20201211183934.169161-7-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-02-03 08:14:00 -06:00
Vladimir Sementsov-Ogievskiy
4c002cef0e util/iov: make qemu_iovec_init_extended() honest
Actually, we can't extend the io vector in all cases. Handle possible
MAX_IOV and size_t overflows.

For now, add assertions to the callers (they actually rely on success
anyway) and fix them in the following patch.

Also add some additional assertions to qemu_iovec_init_slice() while
we're here.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201211183934.169161-3-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-02-03 08:00:33 -06:00