Commit Graph

2180 Commits

Author SHA1 Message Date
Michael Roth
70943ad8e4 i386/sev: Add support for SNP CPUID validation
SEV-SNP firmware allows a special guest page to be populated with a
table of guest CPUID values so that they can be validated through
firmware before being loaded into encrypted guest memory where they can
be used in place of hypervisor-provided values[1].

As part of SEV-SNP guest initialization, use this interface to validate
the CPUID entries reported by KVM_GET_CPUID2 prior to initial guest
start and populate the CPUID page reserved by OVMF with the resulting
encrypted data.

[1] SEV SNP Firmware ABI Specification, Rev. 0.8, 8.13.2.6

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-21-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
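
As a point of reference for the commit above, here is a minimal sketch of the guest CPUID table that gets placed in the reserved CPUID page. The structure and field names follow the Linux guest-side definitions rather than QEMU's and are illustrative only; see [1] above for the authoritative layout.

    /*
     * Illustrative only: CPUID page contents validated by SNP firmware.
     * Each entry holds the CPUID inputs and the firmware-validated outputs.
     */
    #include <stdint.h>

    struct snp_cpuid_fn {
        uint32_t eax_in;              /* CPUID leaf (EAX input) */
        uint32_t ecx_in;              /* CPUID subleaf (ECX input) */
        uint64_t xcr0_in;             /* XCR0 for XSAVE-size-dependent leaves */
        uint64_t xss_in;              /* IA32_XSS for XSAVE-size-dependent leaves */
        uint32_t eax, ebx, ecx, edx;  /* validated output registers */
        uint64_t reserved;
    };

    struct snp_cpuid_table {
        uint32_t count;               /* number of valid entries (up to 64) */
        uint32_t reserved1;
        uint64_t reserved2;
        struct snp_cpuid_fn fn[64];
    };
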
Brijesh Singh
3d8c2a7f48 i386/sev: Add support for populating OVMF metadata pages
OVMF reserves various pages so they can be pre-initialized/validated
prior to launching the guest. Add support for populating these pages
with the expected content.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Co-developed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-20-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Brijesh Singh
f3c30c575d hw/i386/sev: Add function to get SEV metadata from OVMF header
A recent version of OVMF expanded the reset vector GUID list to add an
SEV-specific metadata GUID. The SEV metadata describes the reserved memory
regions, such as the secrets and CPUID pages used during the SEV-SNP
guest launch.

The pc_system_get_ovmf_sev_metadata_ptr() function is used to retrieve the
SEV metadata pointer from the OVMF GUID list.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-19-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Michael Roth
3d44fdff60 i386/sev: Set CPU state to protected once SNP guest payload is finalized
Once KVM_SNP_LAUNCH_FINISH is called, the vCPU state is copied into the
vCPU's VMSA page and measured/encrypted. Any attempt to read or write CPU
state afterward will only act on the initial data and so is effectively
a no-op.

Set the vCPU state to protected at this point so that QEMU doesn't
continue trying to re-sync vCPU data during guest runtime.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-18-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Brijesh Singh
9f3a6999f9 i386/sev: Add handling to encrypt/finalize guest launch data
Process any queued up launch data and encrypt/measure it into the SNP
guest instance prior to initial guest launch.

This also updates the KVM_SEV_SNP_LAUNCH_UPDATE call to handle partial
update responses.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Co-developed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-17-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Brijesh Singh
d3107f882e i386/sev: Add the SNP launch start context
SNP_LAUNCH_START is called first to create a cryptographic launch
context within the firmware.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Co-developed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-16-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Michael Roth
59d3740cb4 i386/sev: Update query-sev QAPI format to handle SEV-SNP
Most of the current 'query-sev' command is relevant to both legacy
SEV/SEV-ES guests and SEV-SNP guests, with 2 exceptions:

  - 'policy' is a 64-bit field for SEV-SNP, not 32-bit, and
    the meaning of the bit positions has changed
  - 'handle' is not relevant to SEV-SNP

To address this, this patch adds a new 'sev-type' field that can be
used as a discriminator to select between SEV and SEV-SNP-specific
fields/formats without breaking compatibility for existing management
tools (so long as management tools that add support for launching
SEV-SNP guests update their handling of query-sev appropriately).

The corresponding HMP command has also been fixed up similarly.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Co-developed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-15-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Paolo Bonzini
a808132f6d i386/sev: Add a class method to determine KVM VM type for SNP guests
SEV guests can use either KVM_X86_DEFAULT_VM, KVM_X86_SEV_VM,
or KVM_X86_SEV_ES_VM depending on the configuration and what
the host kernel supports. SNP guests on the other hand can only
ever use KVM_X86_SNP_VM, so split determination of VM type out
into a separate class method that can be set accordingly for
sev-guest vs. sev-snp-guest objects and add handling for SNP.

Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-14-pankaj.gupta@amd.com>
[Remove unnecessary function pointer declaration. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
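
A rough sketch of the class-method split described above; the helper names and enum values below are placeholders, so treat this as an illustration of the dispatch rather than QEMU's actual code.

    #include <stdbool.h>

    /* Placeholder stand-ins; the real constants come from the kernel's KVM
     * UAPI and the real helpers live in target/i386/sev.c. */
    enum { KVM_X86_DEFAULT_VM, KVM_X86_SEV_VM, KVM_X86_SEV_ES_VM, KVM_X86_SNP_VM };

    static bool sev_es_enabled_stub(void) { return false; }   /* placeholder */

    /* sev-guest: VM type depends on configuration and host support. */
    static int sev_kvm_type(void)
    {
        return sev_es_enabled_stub() ? KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM;
    }

    /* sev-snp-guest: only one possible VM type. */
    static int sev_snp_kvm_type(void)
    {
        return KVM_X86_SNP_VM;
    }
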
Michael Roth
73ae63b162 i386/sev: Don't return launch measurements for SEV-SNP guests
For SEV-SNP guests, launch measurement is queried from within the guest
during attestation, so don't attempt to return it as part of
query-sev-launch-measure.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-13-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Michael Roth
7831221941 i386/cpu: Set SEV-SNP CPUID bit when SNP enabled
SNP guests will rely on this bit to determine certain feature support.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-12-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Pankaj Gupta
125b95a6d4 i386/sev: Add snp_kvm_init() override for SNP class
SNP does not support SMM and requires guest_memfd for private guest
memory, so add SNP-specific kvm_init() functionality in the
snp_kvm_init() class method.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Co-developed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-11-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Pankaj Gupta
990da8d243 i386/sev: Add sev_kvm_init() override for SEV class
Some aspects of the SEV init routine are specific to SEV and not
applicable to SNP guests, so move the SEV-specific bits into a
separate class method and retain only the common functionality.

Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-10-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Michael Roth
99190f805d i386/sev: Add a sev_snp_enabled() helper
Add a simple helper to check if the current guest type is SNP. Also have
SNP-enabled imply that SEV-ES is enabled as well, and fix up any places
where the sev_es_enabled() check is expecting a pure/non-SNP guest.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-9-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Brijesh Singh
7b34df4426 i386/sev: Introduce 'sev-snp-guest' object
SEV-SNP support relies on a different set of properties/state than the
existing 'sev-guest' object. This patch introduces the 'sev-snp-guest'
object, which can be used to configure an SEV-SNP guest. For example,
a default-configured SEV-SNP guest with no additional information
passed in for use with attestation:

  -object sev-snp-guest,id=sev0

or a fully-specified SEV-SNP guest where all spec-defined binary
blobs are passed in as base64-encoded strings:

  -object sev-snp-guest,id=sev0, \
    policy=0x30000, \
    init-flags=0, \
    id-block=YWFhYWFhYWFhYWFhYWFhCg==, \
    id-auth=CxHK/OKLkXGn/KpAC7Wl1FSiisWDbGTEKz..., \
    author-key-enabled=on, \
    host-data=LNkCWBRC5CcdGXirbNUV1OrsR28s..., \
    guest-visible-workarounds=AA==

See the QAPI schema updates included in this patch for more usage
details.

In some cases these blobs may be up to 4096 characters, but this is
generally well below the default limit for Linux hosts, where
command-line sizes are defined by the sysconf-configurable ARG_MAX
value, which defaults to 2097152 characters on Ubuntu hosts, for
example.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Co-developed-by: Michael Roth <michael.roth@amd.com>
Acked-by: Markus Armbruster <armbru@redhat.com> (for QAPI schema)
Signed-off-by: Michael Roth <michael.roth@amd.com>
Co-developed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-8-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Pankaj Gupta
bce615a14a i386/sev: Move sev_launch_finish to separate class method
When sev-snp-guest objects are introduced there will be a number of
differences in how the launch finish is handled compared to the existing
sev-guest object. Move sev_launch_finish() to a class method to make it
easier to implement SNP-specific launch finish functionality later.

Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-7-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Pankaj Gupta
6600f1ac0c i386/sev: Move sev_launch_update to separate class method
When sev-snp-guest objects are introduced there will be a number of
differences in how the launch data is handled compared to the existing
sev-guest object. Move sev_launch_update_data() to a class method to make
it easier to implement SNP-specific launch update functionality later.

Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-6-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Michael Roth
16dcf200dc i386/sev: Introduce "sev-common" type to encapsulate common SEV state
Currently all SEV/SEV-ES functionality is managed through a single
'sev-guest' QOM type. With upcoming support for SEV-SNP, taking this
same approach won't work well since some of the properties/state
managed by 'sev-guest' is not applicable to SEV-SNP, which will instead
rely on a new QOM type with its own set of properties/state.

To prepare for this, this patch moves common state into an abstract
'sev-common' parent type to encapsulate properties/state that are
common to both SEV/SEV-ES and SEV-SNP, leaving only SEV/SEV-ES-specific
properties/state in the current 'sev-guest' type. This should not
affect current behavior or command-line options.

As part of this patch, some related changes are also made:

  - a static 'sev_guest' variable is currently used to keep track of
    the 'sev-guest' instance. SEV-SNP would similarly introduce an
    'sev_snp_guest' static variable. But these instances are now
    available via qdev_get_machine()->cgs, so switch to using that
    instead and drop the static variable.

  - 'sev_guest' is currently used as the name for the static variable
holding a pointer to the 'sev-guest' instance. Re-purpose the name
as a local variable referring to the 'sev-guest' instance, and use
    that consistently throughout the code so it can be easily
    distinguished from sev-common/sev-snp-guest instances.

  - 'sev' is generally used as the name for local variables holding a
    pointer to the 'sev-guest' instance. In cases where that now points
    to common state, use the name 'sev_common'; in cases where that now
    points to state specific to 'sev-guest' instance, use the name
    'sev_guest'

In order to enable kernel-hashes for SNP, pull it from
SevGuestProperties to its parent SevCommonProperties so
it will be available for both SEV and SNP.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Co-developed-by: Dov Murik <dovmurik@linux.ibm.com>
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Acked-by: Markus Armbruster <armbru@redhat.com> (QAPI schema)
Co-developed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-5-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
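
To visualize the refactoring above, a sketch of the resulting QOM hierarchy. This uses QEMU-internal API (TypeInfo/type_register_static) with the usual osdep/module boilerplate and instance/class details omitted, so it is an outline rather than the actual sev.c code; the parent type shown for 'sev-common' is an assumption.

    /* Abstract parent holding state common to SEV/SEV-ES and SEV-SNP. */
    static const TypeInfo sev_common_info = {
        .name     = "sev-common",
        .parent   = TYPE_CONFIDENTIAL_GUEST_SUPPORT,  /* assumed parent */
        .abstract = true,                             /* never instantiated directly */
        /* .instance_size / .class_init omitted in this sketch */
    };

    /* Legacy SEV/SEV-ES-specific properties stay here. */
    static const TypeInfo sev_guest_info = {
        .name   = "sev-guest",
        .parent = "sev-common",
    };

    /* SNP-specific properties will live here. */
    static const TypeInfo sev_snp_guest_info = {
        .name   = "sev-snp-guest",
        .parent = "sev-common",
    };

    static void sev_register_types(void)
    {
        type_register_static(&sev_common_info);
        type_register_static(&sev_guest_info);
        type_register_static(&sev_snp_guest_info);
    }
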
Pankaj Gupta
18c453409a i386/sev: Replace error_report with error_setg
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240530111643.1091816-2-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:06 +02:00
Paolo Bonzini
7604bbc2d8 target/i386: fix xsave.flat from kvm-unit-tests
xsave.flat checks that "executing the XSETBV instruction causes a general-
protection fault (#GP) if ECX = 0 and EAX[2:1] has the value 10b".  QEMU
currently allows that combination, so the test fails.  Add the missing check.

Cc: qemu-stable@nongnu.org
Fixes: 892544317f ("target/i386: implement XSAVE and XRSTOR of AVX registers", 2022-10-18)
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:05 +02:00
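
Concretely, the missing condition is that XCR0 cannot have the AVX/YMM bit set without the SSE bit. A self-contained sketch of the check, assuming QEMU's XSTATE_*_MASK bit layout (the exact placement inside helper_xsetbv() may differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* XCR0 bit layout: bit 1 = SSE, bit 2 = AVX/YMM (QEMU calls these
     * XSTATE_SSE_MASK / XSTATE_YMM_MASK). */
    #define XSTATE_SSE_MASK (1ULL << 1)
    #define XSTATE_YMM_MASK (1ULL << 2)

    /* True when EAX[2:1] == 10b, i.e. YMM requested without SSE: this must
     * raise #GP(0) rather than being accepted. */
    static bool xsetbv_ymm_without_sse(uint64_t mask)
    {
        return (mask & (XSTATE_SSE_MASK | XSTATE_YMM_MASK)) == XSTATE_YMM_MASK;
    }
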
Zhao Liu
f2c04bede3 target/i386/tcg: Fix RDPID feature check
DisasContext.cpuid_ext_features indicates CPUID.01H.ECX.

Use DisasContext.cpuid_7_0_ecx_features field to check RDPID feature bit
(CPUID_7_0_ECX_RDPID).

Fixes: 6750485bf4 ("target/i386: implement RDPID in TCG")
Inspired-by: Xinyu Li <lixinyu20s@ict.ac.cn>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240603080723.1256662-1-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:05 +02:00
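
In code terms, the fix amounts to testing the leaf-7 ECX feature word instead of CPUID.01H:ECX. A small standalone sketch (the bit value is the architectural one; QEMU's macro is CPUID_7_0_ECX_RDPID):

    #include <stdbool.h>
    #include <stdint.h>

    /* RDPID is CPUID.(EAX=07H,ECX=0):ECX[22]. */
    #define CPUID_7_0_ECX_RDPID (1U << 22)

    /* Gate RDPID on the leaf-7 ECX features (DisasContext.cpuid_7_0_ecx_features),
     * not on cpuid_ext_features, which holds CPUID.01H:ECX. */
    static bool rdpid_available(uint32_t cpuid_7_0_ecx_features)
    {
        return (cpuid_7_0_ecx_features & CPUID_7_0_ECX_RDPID) != 0;
    }
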
Xinyu Li
0683fff1cf target/i386: fix memory opsize for Mov to/from Seg
This commit fixes an issue with MOV instructions (0x8C and 0x8E)
involving segment registers; MOV to segment register's source is
16-bit, while MOV from segment register has to explicitly set the
memory operand size to 16 bits.  Introduce a new flag
X86_SPECIAL_Op0_Mw to handle this specification correctly.

Signed-off-by: Xinyu Li <lixinyu20s@ict.ac.cn>
Message-ID: <20240602100528.2135717-1-lixinyu20s@ict.ac.cn>
Fixes: 5e9e21bcc4 ("target/i386: move 60-BF opcodes to new decoder", 2024-05-07)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:05 +02:00
Xinyu Li
da7c95920d target/i386: fix SSE and SSE2 feature check
Features check of CPUID_SSE and CPUID_SSE2 should use cpuid_features,
rather than cpuid_ext_features.

Signed-off-by: Xinyu Li <lixinyu20s@ict.ac.cn>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240602100904.2137939-1-lixinyu20s@ict.ac.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-05 11:01:05 +02:00
Alex Bennée
a4c2735f35 cpu: move Qemu[Thread|Cond] setup into common code
Aside from the round robin threads this is all common code. By
moving the halt_cond setup we also no longer need hacks to work around
the race between QOM object creation and thread creation.

It is a little ugly to free stuff up for the round robin thread, but
better that it deal with its own specialisations than making the other
accelerators jump through hoops.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-ID: <20240530194250.1801701-3-alex.bennee@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-06-04 10:02:39 +02:00
Richard Henderson
a93b4061b0 target/i386/kvm: Improve KVM_EXIT_NOTIFY warnings
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240412073346.458116-28-richard.henderson@linaro.org>
[PMD: Fixed typo reported by Peter Maydell]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-06-04 10:02:39 +02:00
Peter Maydell
408b2b3d9d accel/tcg: Make TCGCPUOps::cpu_exec_halt return bool for whether to halt
The TCGCPUOps::cpu_exec_halt method is called from cpu_handle_halt()
when the CPU is halted, so that a target CPU emulation can do
anything target-specific it needs to do.  (At the moment we only use
this on i386.)

The current specification of the method doesn't allow the target
specific code to do something different if the CPU is about to come
out of the halt state, because cpu_handle_halt() only determines this
after the method has returned.  (If the method called cpu_has_work()
itself this would introduce a potential race if an interrupt arrived
between the target's method implementation checking and
cpu_handle_halt() repeating the check.)

Change the definition of the method so that it returns a bool to
tell cpu_handle_halt() whether to stay in halt or not.

We will want this for the Arm target, where FEAT_WFxT wants to do
some work only for the case where the CPU is in halt but about to
leave it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240430140035.3889879-2-peter.maydell@linaro.org
2024-05-30 16:13:48 +01:00
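
For reference, the shape of the changed hook; this is an illustrative stand-in for the real TCGCPUOps definition in include/hw/core/tcg-cpu-ops.h, and the return-value meaning is paraphrased from the commit message.

    #include <stdbool.h>

    typedef struct CPUState CPUState;   /* opaque here */

    typedef struct TCGCPUOpsSketch {
        /*
         * Called from cpu_handle_halt() while the CPU is halted. The return
         * value tells cpu_handle_halt() whether to stay in halt, so the
         * target can act exactly when the CPU is about to leave halt without
         * a racy second cpu_has_work() check in the caller.
         */
        bool (*cpu_exec_halt)(CPUState *cpu);
    } TCGCPUOpsSketch;
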
Richard Henderson
60b54b67c6 target/i386: Introduce X86Access and use for xsave and friends
linux-user/i386: Fix allocation and alignment of fp state in signal frame

Merge tag 'pull-lu-20240526' of https://gitlab.com/rth7680/qemu into staging

* tag 'pull-lu-20240526' of https://gitlab.com/rth7680/qemu: (28 commits)
  target/i386: Pass host pointer and size to cpu_x86_{xsave,xrstor}
  target/i386: Pass host pointer and size to cpu_x86_{fxsave,fxrstor}
  target/i386: Pass host pointer and size to cpu_x86_{fsave,frstor}
  target/i386: Convert do_xrstor to X86Access
  target/i386: Convert do_xsave to X86Access
  linux-user/i386: Honor xfeatures in xrstor_sigcontext
  linux-user/i386: Fix allocation and alignment of fp state
  linux-user/i386: Return boolean success from xrstor_sigcontext
  linux-user/i386: Return boolean success from restore_sigcontext
  linux-user/i386: Fix -mregparm=3 for signal delivery
  linux-user/i386: Split out struct target_fregs_state
  linux-user/i386: Replace target_fpstate_fxsave with X86LegacyXSaveArea
  linux-user/i386: Remove xfeatures from target_fpstate_fxsave
  linux-user/i386: Drop xfeatures_size from sigcontext arithmetic
  target/i386: Add {hw,sw}_reserved to X86LegacyXSaveArea
  target/i386: Add rbfm argument to cpu_x86_{xsave,xrstor}
  target/i386: Split out do_xsave_chk
  target/i386: Convert do_xrstor_* to X86Access
  target/i386: Convert do_xsave_* to X86Access
  target/i386: Convert do_fxsave, do_fxrstor to X86Access
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 17:51:00 -07:00
Richard Henderson
701890bdd0 target/i386: Pass host pointer and size to cpu_x86_{xsave,xrstor}
We have already validated the memory region in the course of
validating the signal frame.  No need to do it again within
the helper function.

In addition, return failure when the header contains an invalid
xstate_bv.  The kernel handles this via exception handling within
XSTATE_OP, called from xrstor_from_user_sigframe.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 15:49:58 -07:00
Richard Henderson
9c2fb9e1d5 target/i386: Pass host pointer and size to cpu_x86_{fxsave,fxrstor}
We have already validated the memory region in the course of
validating the signal frame.  No need to do it again within
the helper function.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 15:45:27 -07:00
Richard Henderson
76d8d0f85c target/i386: Pass host pointer and size to cpu_x86_{fsave,frstor}
We have already validated the memory region in the course of
validating the signal frame.  No need to do it again within
the helper function.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 15:45:27 -07:00
Richard Henderson
d5dc3a927a target/i386: Convert do_xrstor to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 15:45:27 -07:00
Richard Henderson
c6e6d1508a target/i386: Convert do_xsave to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 15:45:27 -07:00
Richard Henderson
6dba8b471c target/i386: Add {hw,sw}_reserved to X86LegacyXSaveArea
This completes the 512-byte structure, allowing the union to
be removed.  Assert that the structure layout is as expected.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
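
A sketch of the sizing idea: with the reserved tail made explicit, the legacy area can be checked at build time. The field split below is abbreviated and illustrative; QEMU uses QEMU_BUILD_BUG_ON() where plain _Static_assert is shown here.

    #include <stdint.h>

    typedef struct X86LegacyXSaveAreaSketch {
        uint8_t fpu_sse_state[416];   /* FPU env, ST/MM and XMM registers, etc. */
        uint8_t hw_reserved[48];      /* hardware-reserved bytes 416..463 */
        uint8_t sw_reserved[48];      /* software-available bytes 464..511 */
    } X86LegacyXSaveAreaSketch;

    _Static_assert(sizeof(X86LegacyXSaveAreaSketch) == 512,
                   "legacy XSAVE area must be exactly 512 bytes");
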
Richard Henderson
a2d64d61c1 target/i386: Add rbfm argument to cpu_x86_{xsave,xrstor}
For now, continue to pass all 1's from signal.c.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
a8f68831c6 target/i386: Split out do_xsave_chk
This path is not required by user-only, and can in fact
be shared between xsave and xrstor.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
58955a96d9 target/i386: Convert do_xrstor_* to X86Access
The body of do_xrstor is now fully converted.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
6b1b736bae target/i386: Convert do_xsave_* to X86Access
The body of do_xsave is now fully converted.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
6d030aab29 target/i386: Convert do_fxsave, do_fxrstor to X86Access
Move the alignment fault from do_* to helper_*, as it need
not apply to usage from within user-only signal handling.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
e41d2eaf17 target/i386: Convert do_xrstor_{fpu,mxcr,sse} to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
b7e6d3ad30 target/i386: Convert do_xsave_{fpu,mxcr,sse} to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
94f60f8f1c target/i386: Convert do_fsave, do_frstor to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
505e2ef744 target/i386: Convert do_fstenv to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
bc13c2dd01 target/i386: Convert do_fldenv to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
4526f58a27 target/i386: Convert helper_{fbld,fbst}_ST0 to X86Access
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
d3e8b648ab target/i386: Convert do_fldt, do_fstt to X86Access
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Richard Henderson
24f6813924 target/i386: Add tcg/access.[ch]
Provide a method to amortize page lookup across large blocks.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-26 12:51:50 -07:00
Paolo Bonzini
56da568f88 target/i386: remove aflag argument of gen_lea_v_seg
It is always s->aflag.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:02 +02:00
Paolo Bonzini
6605817b1a target/i386: clean up repeated string operations
Do not bother generating inline wrappers for gen_repz and gen_repz2;
use s->prefix to separate REPZ from REPNZ in the case of SCAS and
CMPS.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:02 +02:00
Paolo Bonzini
f0e754d3ce target/i386: introduce gen_lea_ss_ofs
Generalize gen_stack_A0() to include an initial add and to use an arbitrary
destination.  This is a common pattern and it is not a huge burden to
add the extra arguments to the only caller of gen_stack_A0().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
20237d4070 target/i386: use mo_stacksize more
Use mo_stacksize for all stack accesses, including when
a 64-bit code segment is impossible and the code is
therefore checking only for SS32(s).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
d0e31d6d37 target/i386: inline gen_add_A0_ds_seg
It is only used in MONITOR, where a direct call of gen_lea_v_seg
is simpler, and in XLAT.  Inline it in the latter.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
420d60caad target/i386: split gen_ldst_modrm for load and store
The is_store argument of gen_ldst_modrm has only ever been passed
a constant.  Just split the function in two.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
b3c49e654a target/i386: reg in gen_ldst_modrm is always OR_TMP0
Values other than OR_TMP0 were only ever used by MOV and MOVNTI
opcodes.  Now that these have been converted to the new decoder,
remove the argument.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
f5bd6a48ee target/i386: raze the gen_eob* jungle
Make gen_eob take the DISAS_* constant as an argument, so that
it is not necessary to have wrappers around it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
ad8f2ad77e target/i386: assert that gen_update_eip_cur and gen_update_eip_next are the same in tb_stop
This is an invariant now that there are no calls to gen_eob_inhibit_irq()
outside tb_stop.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
2512f786bf target/i386: avoid calling gen_eob_inhibit_irq before tb_stop
sti only has one exit, so it does not need to generate the
end-of-translation code inline.  It can be deferred to tb_stop.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
c8494cb8b1 target/i386: avoid calling gen_eob_syscall before tb_stop
syscall and sysret only have one exit, so they do not need to
generate the end-of-translation code inline.  It can be
deferred to tb_stop.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
9594b59331 target/i386: document and group DISAS_* constants
Place DISAS_* constants that update cpu_eip first, and
the "jump" ones last.  Add comments explaining the differences
and usage.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
abdcc5c8ef target/i386: set CC_OP in helpers if they want CC_OP_EFLAGS
Mark cc_op as clean and do not spill it at the end of the translation block.
Technically this is a tiny bit less efficient, but:

* it results in translations that are a tiny bit smaller

* for most of these instructions, it is not unlikely that they are close to
the end of the basic block, in which case cc_op would not be overwritten

* anyway the cost is probably dwarfed by that of computing flags.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
a0625efd4d target/i386: cpu_load_eflags already sets cc_op
No need to set it again at the end of the translation block, cc_op_dirty
can be set to false.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
f6ac77eab6 target/i386: remove unnecessary gen_update_cc_op before gen_eob*
This is already handled in gen_eob().  Before adding another DISAS_*
case, remove the double calls.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
69d7281262 target/i386: cleanup eob handling of RSM
gen_helper_rsm cannot generate an exception, and reloads the flags.
So there's no need to spill cc_op and update cpu_eip, but on the
other hand cc_op must be reset to CC_OP_EFLAGS before returning.

It all works by chance, because by spilling cc_op before the call
to the helper, it becomes non-dirty and gen_eob will not overwrite
the CC_OP_EFLAGS value that is placed there by the helper.  But
let's clean it up.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:28:01 +02:00
Paolo Bonzini
f0f0136abb target/i386: no single-step exception after MOV or POP SS
Intel SDM 18.3.1.4 "If an occurrence of the MOV or POP instruction
loads the SS register executes with EFLAGS.TF = 1, no single-step debug
exception occurs following the MOV or POP instruction."

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 13:27:54 +02:00
Paolo Bonzini
8225bff7c5 target/i386: disable jmp_opt if EFLAGS.RF is 1
If EFLAGS.RF is 1, special processing in gen_eob_worker() is needed and
therefore goto_tb cannot be used.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-25 10:00:12 +02:00
donsheng
84d4b72854 target-i386: hyper-v: Correct kvm_hv_handle_exit return value
This bug fix addresses the incorrect return value of kvm_hv_handle_exit for
KVM_EXIT_HYPERV_SYNIC, which should be EXCP_INTERRUPT.

Handling of KVM_EXIT_HYPERV_SYNIC in QEMU needs to be synchronous.
This means that async_synic_update should run in the current QEMU vCPU
thread before returning to KVM, returning EXCP_INTERRUPT to guarantee this.
Returning 0 can cause async_synic_update to run asynchronously.

One problem (kvm-unit-tests's hyperv_synic test fails with timeout error)
caused by this bug:

When a guest VM writes to the HV_X64_MSR_SCONTROL MSR to enable Hyper-V SynIC,
a VM exit is triggered and processed by the kvm_hv_handle_exit function of the
QEMU vCPU. This function then calls the async_synic_update function to set
synic->sctl_enabled to true. A true value of synic->sctl_enabled is required
before creating SINT routes using the hyperv_sint_route_new() function.

If kvm_hv_handle_exit returns 0 for KVM_EXIT_HYPERV_SYNIC, the current QEMU
vCPU thread may return to KVM and enter the guest VM before running
async_synic_update. In such case, the hyperv_synic test’s subsequent call to
synic_ctl(HV_TEST_DEV_SINT_ROUTE_CREATE, ...) immediately after writing to
HV_X64_MSR_SCONTROL can cause QEMU’s hyperv_sint_route_new() function to return
prematurely (because synic->sctl_enabled is false).

If the SINT route is not created successfully, the SINT interrupt will not be
fired, resulting in a timeout error in the hyperv_synic test.

Fixes: 267e071bd6 (“hyperv: make overlay pages for SynIC”)
Suggested-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com>
Message-ID: <20240521200114.11588-1-dongsheng.x.zhang@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:56:28 +02:00
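
The shape of the fix, reduced to a self-contained sketch: the constants and struct are stand-ins for QEMU's EXCP_INTERRUPT and the kernel's struct kvm_hyperv_exit, with placeholder values.

    /* Placeholder stand-ins for QEMU's EXCP_INTERRUPT and the kernel UAPI. */
    #define EXCP_INTERRUPT_SKETCH        0x10001
    #define KVM_EXIT_HYPERV_SYNIC_SKETCH 1

    struct hyperv_exit_sketch { int type; };

    /*
     * Returning EXCP_INTERRUPT (instead of 0) for SYNIC exits forces the vCPU
     * thread back into the QEMU main loop, so the queued async_synic_update()
     * runs before the guest is re-entered.
     */
    static int handle_hyperv_exit(const struct hyperv_exit_sketch *exit)
    {
        switch (exit->type) {
        case KVM_EXIT_HYPERV_SYNIC_SKETCH:
            /* ... queue async_synic_update() here ... */
            return EXCP_INTERRUPT_SKETCH;   /* was: return 0 */
        default:
            return -1;
        }
    }
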
Zhao Liu
5eb608a13b i386/cpu: Use CPUCacheInfo.share_level to encode CPUID[0x8000001D].EAX[bits 25:14]
CPUID[0x8000001D].EAX[bits 25:14] NumSharingCache: number of logical
processors sharing cache.

The number of logical processors sharing this cache is
NumSharingCache + 1.

After cache models have topology information, we can use
CPUCacheInfo.share_level to decide which topology level to be encoded
into CPUID[0x8000001D].EAX[bits 25:14].

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-22-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:56:27 +02:00
Zhao Liu
f602eb925a i386/cpu: Use CPUCacheInfo.share_level to encode CPUID[4]
CPUID[4].EAX[bits 25:14] is used to represent the cache topology for
Intel CPUs.

After cache models have topology information, we can use
CPUCacheInfo.share_level to decide which topology level to be encoded
into CPUID[4].EAX[bits 25:14].

And since, with the helper max_processor_ids_for_cache(), the field
CPUID[4].EAX[bits 25:14] (the original variable "num_apic_ids") is
derived from CPU topology levels, which are verified when parsing -smp,
there is no need to check this value with "assert(num_apic_ids > 0)"
again, so remove this assert().

Additionally, wrap the encoding of CPUID[4].EAX[bits 31:26] into a
helper to make the code cleaner.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-21-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:56:27 +02:00
Zhao Liu
9fcba76ab9 i386: Add cache topology info in CPUCacheInfo
Currently, by default, the cache topology is encoded as:
1. i/d cache is shared in one core.
2. L2 cache is shared in one core.
3. L3 cache is shared in one die.

This general default has caused a misunderstanding, namely that the
cache topology is completely equated with a specific CPU topology, such
as the connection between the L2 cache and the core level, and between
the L3 cache and the die level.

In fact, the settings of these topologies depend on the specific
platform and are not static. For example, on Alder Lake-P, every
four Atom cores share the same L2 cache.

Thus, we should explicitly define the corresponding cache topology for
different cache models to increase scalability.

Except legacy_l2_cache_cpuid2 (its default topo level is
CPU_TOPO_LEVEL_UNKNOW), explicitly set the corresponding topology level
for all other cache models. In order to be compatible with the existing
cache topology, set the CPU_TOPO_LEVEL_CORE level for the i/d cache, set
the CPU_TOPO_LEVEL_CORE level for L2 cache, and set the
CPU_TOPO_LEVEL_DIE level for L3 cache.

The field for CPUID[4].EAX[bits 25:14] or CPUID[0x8000001D].EAX[bits
25:14] will be set based on CPUCacheInfo.share_level.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-ID: <20240424154929.1487382-20-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
588208346f i386/cpu: Introduce module-id to X86CPU
Introduce module-id to be consistent with the module-id field in
CpuInstanceProperties.

Following the legacy smp check rules, also add the module_id validity
into x86_cpu_pre_plug().

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Co-developed-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Signed-off-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-17-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
5304873acd i386: Expose module level in CPUID[0x1F]
The Linux kernel (from v6.4, with commit edc0a2b595765 ("x86/topology: Fix
erroneous smp_num_siblings on Intel Hybrid platforms")) is able to
handle platforms with the Module level enumerated via CPUID.1F.

Expose the module level in CPUID[0x1F] if the machine has more than one
module.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-15-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
3568adc995 i386: Support modules_per_die in X86CPUTopoInfo
Support module level in i386 cpu topology structure "X86CPUTopoInfo".

Since x86 does not yet support the "modules" parameter in "-smp",
X86CPUTopoInfo.modules_per_die is currently always 1.

Therefore, the module level width in APIC ID, which can be calculated by
"apicid_bitwidth_for_count(topo_info->modules_per_die)", is always 0 for
now, so we can directly add APIC ID related helpers to support module
level parsing.

In addition, update topology structure in test-x86-topo.c.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Co-developed-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Signed-off-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-14-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
81c392ab5c i386: Introduce module level cpu topology to CPUX86State
Intel CPUs implement module level on hybrid client products (e.g.,
ADL-N, MTL, etc) and E-core server products.

A module contains a set of cores that share certain resources (in
current products, the resource usually includes L2 cache, as well as
module scoped features and MSRs).

Module level support is the prerequisite for L2 cache topology on
module level. With module level, we can implement the Guest's CPU
topology and future cache topology to be consistent with the Host's on
Intel hybrid client/E-core server platforms.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Co-developed-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Signed-off-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-13-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
822bce9f58 i386/cpu: Decouple CPUID[0x1F] subleaf with specific topology level
At present, the subleaf 0x02 of CPUID[0x1F] is bound to the "die" level.

In fact, the specific topology level exposed in 0x1F depends on the
platform's support for extension levels (module, tile and die).

To help expose the "module" level in 0x1F, decouple the CPUID[0x1F]
subleaf from a specific topology level.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240424154929.1487382-12-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
0f6ed7ba13 i386: Split topology types of CPUID[0x1F] from the definitions of CPUID[0xB]
CPUID[0xB] defines SMT, Core and Invalid types, and this leaf is shared
by Intel and AMD CPUs.

But for extended topology levels, Intel CPUs (in CPUID[0x1F]) and AMD CPUs
(in CPUID[0x80000026]) have different definitions with different
enumeration values.

Though CPUID[0x80000026] hasn't been implemented in QEMU, to avoid
possible misunderstanding, split topology types of CPUID[0x1F] from the
definitions of CPUID[0xB] and introduce CPUID[0x1F]-specific topology
types.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-11-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
6ddeb0ec8c i386/cpu: Introduce bitmap to cache available CPU topology levels
Currently, QEMU checks the specific number of topology domains to detect
whether there are extended topology levels (e.g., by checking nr_dies).

With this bitmap, the extended CPU topology (the levels other than SMT,
core and package) could be easier to detect without touching the
topology details.

This is also in preparation for the follow-up to decouple the CPUID[0x1F]
subleaf from a specific topology level.

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240424154929.1487382-10-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
2613747a79 i386/cpu: Consolidate the use of topo_info in cpu_x86_cpuid()
In cpu_x86_cpuid(), there are many variables in representing the cpu
topology, e.g., topo_info, cs->nr_cores and cs->nr_threads.

Since the names of cs->nr_cores and cs->nr_threads do not accurately
represent their meaning, the use of cs->nr_cores or cs->nr_threads is
prone to confusion and mistakes.

The structure X86CPUTopoInfo names its members clearly, so the
variable "topo_info" should be preferred.

In addition, in cpu_x86_cpuid(), to uniformly use the topology variable,
replace env->dies with topo_info.dies_per_pkg as well.

Suggested-by: Robert Hoo <robert.hu@linux.intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-9-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
Zhao Liu
9a085c4b4a i386/cpu: Use APIC ID info get NumSharingCache for CPUID[0x8000001D].EAX[bits 25:14]
The commit 8f4202fb10 ("i386: Populate AMD Processor Cache Information
for cpuid 0x8000001D") adds the cache topology for AMD CPU by encoding
the number of sharing threads directly.

From AMD's APM, NumSharingCache (CPUID[0x8000001D].EAX[bits 25:14])
means [1]:

The number of logical processors sharing this cache is the value of
this field incremented by 1. To determine which logical processors are
sharing a cache, determine a Share Id for each processor as follows:

ShareId = LocalApicId >> log2(NumSharingCache+1)

Logical processors with the same ShareId then share a cache. If
NumSharingCache+1 is not a power of two, round it up to the next power
of two.

From the description above, the calculation of this field should be the
same as CPUID[4].EAX[bits 25:14] for Intel CPUs. So also use the offsets
of the APIC ID to calculate this field.

[1]: APM, vol.3, appendix.E.4.15 Function 8000_001Dh--Cache Topology
     Information

Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240424154929.1487382-8-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:29 +02:00
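
A small self-contained example of the arithmetic above, assuming an L2 cache shared by one core with two SMT threads (the helper names here are generic stand-ins, not QEMU's; QEMU derives the equivalent shift from APIC ID offsets):

    #include <stdint.h>
    #include <stdio.h>

    /* Round up to the next power of two, as required when
     * NumSharingCache + 1 is not already a power of two. */
    static uint32_t pow2_roundup(uint32_t x)
    {
        uint32_t p = 1;
        while (p < x) {
            p <<= 1;
        }
        return p;
    }

    int main(void)
    {
        uint32_t sharing_lps = 2;  /* logical processors sharing the cache */
        uint32_t num_sharing = pow2_roundup(sharing_lps) - 1;  /* EAX[25:14] */
        uint32_t shift = 0;

        while ((1u << shift) < num_sharing + 1) {
            shift++;               /* log2(NumSharingCache + 1) */
        }

        for (uint32_t apic_id = 0; apic_id < 8; apic_id++) {
            printf("LocalApicId %u -> ShareId %u\n", apic_id, apic_id >> shift);
        }
        printf("NumSharingCache encoded as %u\n", num_sharing);
        return 0;
    }

With these inputs, APIC IDs 0/1, 2/3, 4/5, 6/7 end up with the same ShareId, i.e. the two threads of each core share one L2 instance.
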
Zhao Liu
88dd4ca06c i386/cpu: Use APIC ID info to encode cache topo in CPUID[4]
Referring to the fixes for cache_info_passthrough ([1], [2]) and to the SDM,
CPUID.04H:EAX[bits 25:14] and CPUID.04H:EAX[bits 31:26] should use the
nearest power-of-2 integer.

The nearest power-of-2 integer can be calculated by pow2ceil() or by
using APIC ID offset/width (like L3 topology using 1 << die_offset [3]).

But in fact, CPUID.04H:EAX[bits 25:14] and CPUID.04H:EAX[bits 31:26]
are associated with the APIC ID. For example, in the Linux kernel, the
field "num_threads_sharing" (bits 25 - 14) is parsed with the APIC ID.
As another example, on Alder Lake P, CPUID.04H:EAX[bits 31:26] does not
match the actual number of cores and is calculated as:
"(1 << (pkg_offset - core_offset)) - 1".

Therefore the topology information of APIC ID should be preferred to
calculate nearest power-of-2 integer for CPUID.04H:EAX[bits 25:14] and
CPUID.04H:EAX[bits 31:26]:
1. d/i cache is shared in a core, 1 << core_offset should be used
   instead of "cs->nr_threads" in encode_cache_cpuid4() for
   CPUID.04H.00H:EAX[bits 25:14] and CPUID.04H.01H:EAX[bits 25:14].
2. L2 cache is supposed to be shared in a core as for now, thereby
   1 << core_offset should also be used instead of "cs->nr_threads" in
   encode_cache_cpuid4() for CPUID.04H.02H:EAX[bits 25:14].
3. Similarly, the value for CPUID.04H:EAX[bits 31:26] should also be
   calculated with the bit width between the package and SMT levels in
   the APIC ID ((1 << (pkg_offset - core_offset)) - 1).

In addition, use APIC ID bits calculations to replace "pow2ceil()" for
cache_info_passthrough case.

[1]: efb3934adf ("x86: cpu: make sure number of addressable IDs for processor cores meets the spec")
[2]: d7caf13b5f ("x86: cpu: fixup number of addressable IDs for logical processors sharing cache")
[3]: d65af288a8 ("i386: Update new x86_apicid parsing rules with die_offset support")

Fixes: 7e3482f824 ("i386: Helpers to encode cache information consistently")
Suggested-by: Robert Hoo <robert.hu@linux.intel.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Message-ID: <20240424154929.1487382-7-zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:43:26 +02:00
Zhao Liu
12f6b8280f i386/cpu: Fix i/d-cache topology to core level for Intel CPU
For i-cache and d-cache, current QEMU hardcodes the maximum IDs for CPUs
sharing cache (CPUID.04H.00H:EAX[bits 25:14] and CPUID.04H.01H:EAX[bits
25:14]) to 0, and this means i-cache and d-cache are shared in the SMT
level.

This is correct if there's a single thread per core, but is wrong for the
hyper-threading case (one core contains multiple threads), since the
i-cache and d-cache are shared at the core level rather than the SMT level.

For AMD CPU, commit 8f4202fb10 ("i386: Populate AMD Processor Cache
Information for cpuid 0x8000001D") has already introduced i/d cache
topology as core level by default.

Therefore, in order to be compatible with both multi-threaded and
single-threaded situations, we should set the i-cache and d-cache to be
shared at the core level by default.

This fix changes the default i/d cache topology from per-thread to
per-core. Potentially, this change in L1 cache topology may affect the
performance of the VM if the user does not specifically specify the
topology or bind the vCPU. However, the way to achieve optimal
performance should be to create a reasonable topology and set the
appropriate vCPU affinity without relying on QEMU's default topology
structure.

Fixes: 7e3482f824 ("i386: Helpers to encode cache information consistently")
Suggested-by: Robert Hoo <robert.hu@linux.intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-ID: <20240424154929.1487382-6-zhao1.liu@intel.com>
[Add compat property. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 19:39:33 +02:00
Binbin Wu
0117067131 target/i386: add control bits support for LAM
LAM uses CR3[61] and CR3[62] to configure/enable LAM on user pointers.
LAM uses CR4[28] to configure/enable LAM on supervisor pointers.

For CR3 LAM bits, no additional handling needed:
- TCG
  LAM is not supported for TCG of target-i386.  helper_write_crN() and
  helper_vmrun() check max physical address bits before calling
  cpu_x86_update_cr3(), no change needed, i.e. CR3 LAM bits are not allowed
  to be set in TCG.
- gdbstub
  x86_cpu_gdb_write_register() will call cpu_x86_update_cr3() to update cr3.
  Allow gdb to set the LAM bit(s) to CR3, if vcpu doesn't support LAM,
  KVM_SET_SREGS will fail as other reserved bits.

For CR4 LAM bit, its reservation depends on vcpu supporting LAM feature or
not.
- TCG
  LAM is not supported for TCG of target-i386.  helper_write_crN() and
  helper_vmrun() check CR4 reserved bit before calling cpu_x86_update_cr4(),
  i.e. CR4 LAM bit is not allowed to be set in TCG.
- gdbstub
  x86_cpu_gdb_write_register() will call cpu_x86_update_cr4() to update cr4.
  Mask out LAM bit on CR4 if vcpu doesn't support LAM.
- x86_cpu_reset_hold() doesn't need special handling.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240112060042.19925-3-binbin.wu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 15:53:30 +02:00
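
For quick reference, the architectural bit positions involved, plus the gdbstub-style masking mentioned above, as a self-contained sketch (macro names are illustrative; QEMU's own names may differ):

    #include <stdint.h>

    /* Architectural LAM control bits. */
    #define CR3_LAM_U57 (1ULL << 61)   /* LAM for user pointers, 57-bit masking */
    #define CR3_LAM_U48 (1ULL << 62)   /* LAM for user pointers, 48-bit masking */
    #define CR4_LAM_SUP (1ULL << 28)   /* LAM for supervisor pointers */

    /* gdbstub case: if the vCPU doesn't support LAM, mask the CR4 bit out
     * before the cr4 update (sketch only). */
    static uint64_t sanitize_cr4(uint64_t cr4, int vcpu_has_lam)
    {
        return vcpu_has_lam ? cr4 : (cr4 & ~CR4_LAM_SUP);
    }
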
Robert Hoo
ba67809059 target/i386: add support for LAM in CPUID enumeration
Linear Address Masking (LAM) is a new Intel CPU feature, which allows
software to use the untranslated address bits for metadata.

The bit definition:
CPUID.(EAX=7,ECX=1):EAX[26]

Add CPUID definition for LAM.

Note the LAM feature is not supported by TCG for target-i386, so the LAM
CPUID bit will not be added to TCG_7_1_EAX_FEATURES.

More info can be found in Intel ISE Chapter "LINEAR ADDRESS MASKING(LAM)"
https://cdrdv2.intel.com/v1/dl/getContent/671368

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240112060042.19925-2-binbin.wu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 15:53:30 +02:00
Paolo Bonzini
ec56891984 target/i386: clean up AAM/AAD
The 32-bit AAM/AAD opcodes are using helpers that read and write flags and
env->regs[R_EAX].  Clean them up so that the table correctly includes AX
as a 16-bit input and output.

No real reason to do it to be honest, but they are nice one-output helpers
and it removes the masking of env->regs[R_EAX] that generic load/writeback
code already does.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240522123912.608497-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 15:53:30 +02:00
Paolo Bonzini
d0414d71f6 target/i386: generate simpler code for ROL/ROR with immediate count
gen_rot_carry and gen_rot_overflow are meant to be called with count == NULL
if the count cannot be zero.  However this is not done in gen_ROL and gen_ROR,
and writing everywhere "can_be_zero ? count : NULL" is burdensome and less
readable.  Just pass can_be_zero as a separate argument.

gen_RCL and gen_RCR use a conditional branch to skip the computation
if count is zero, so they can pass false unconditionally to gen_rot_overflow.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240522123914.608516-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-22 15:53:30 +02:00
Richard Henderson
dfc7228be3 target/i386: Use translator_ldub for everything
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-15 08:55:19 +02:00
Richard Henderson
962a145cdc accel/tcg: Provide default implementation of disas_log
Almost all of the disas_log implementations are identical.
Unify them within translator_loop.

Drop extra Priv/Virt logging from target/riscv.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-15 08:55:18 +02:00
Paolo Bonzini
1b1badf3c5 i386: select correct components for no-board build
The local APIC is a part of the CPU and has callbacks that are invoked
from multiple accelerators.

The IOAPIC on the other hand is optional, but ioapic_eoi_broadcast is
used by common x86 code to implement the IOAPIC's implicit EOI mode.
Add a stub in case the IOAPIC device is not included but the APIC is.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240509170044.190795-13-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-10 15:45:15 +02:00
Paolo Bonzini
fe01af5d47 target/i386: fix feature dependency for WAITPKG
The VMX feature bit depends on general availability of WAITPKG,
not the other way round.

Fixes: 33cc88261c ("target/i386: add support for VMX_SECONDARY_EXEC_ENABLE_USER_WAIT_PAUSE", 2023-08-28)
Cc: qemu-stable@nongnu.org
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-10 15:45:14 +02:00
Paolo Bonzini
3fabbe0b7d target/i386: move prefetch and multi-byte UD/NOP to new decoder
These are trivial to add, and moving them to the new decoder fixes some
corner cases: raising #UD instead of an instruction fetch page fault for
the undefined opcodes, and incorrectly rejecting 0F 18 prefetches with
register operands (which are treated as reserved NOPs).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-10 15:45:14 +02:00
Paolo Bonzini
40a3ec7b5f target/i386: rdpkru/wrpkru are no-prefix instructions
Reject 0x66/0xf3/0xf2 in front of them.

Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-10 15:45:14 +02:00
Paolo Bonzini
41c685dc59 target/i386: fix operand size for DATA16 REX.W POPCNT
According to the manual, 32-bit vs 64-bit is governed by REX.W
and REX ignores the 0x66 prefix.  This can be confirmed with this
program:

    #include <stdio.h>
    int main()
    {
       int x = 0x12340000;
       int y;
       asm("popcntl %1, %0" : "=r" (y) : "r" (x)); printf("%x\n", y);
       asm("mov $-1, %0; .byte 0x66; popcntl %1, %0" : "+r" (y) : "r" (x)); printf("%x\n", y);
       asm("mov $-1, %0; .byte 0x66; popcntq %q1, %q0" : "+r" (y) : "r" (x)); printf("%x\n", y);
    }

which prints 5/ffff0000/5 on real hardware and 5/ffff0000/ffff0000
on QEMU.

Cc: qemu-stable@nongnu.org
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-10 15:45:14 +02:00
Paolo Bonzini
9f07e47a5e target/i386: remove PCOMMIT from TCG, deprecate property
The PCOMMIT instruction was never included in any physical processor.
TCG implements it as a no-op instruction, but its utility is debatable
to say the least.  Drop it from the decoder since it is only available
with "-cpu max", which does not guarantee migration compatibility
across versions, and deprecate the property just in case someone is
using it as "pcommit=off".

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-10 15:45:14 +02:00
Philippe Mathieu-Daudé
8b4d80bb53 misc: Use QEMU header path relative to include/ directory
QEMU headers are relative to the include/ directory,
not to the project root directory. Remove "include/".

See also:
https://www.qemu.org/docs/master/devel/style.html#include-directives

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240507142737.95735-1-philmd@linaro.org>
2024-05-09 00:07:21 +02:00
Paolo Bonzini
d4e6d40c36 target/i386: remove duplicate prefix decoding
Now that the bulk of the opcodes goes through the new decoder, it is sensible
to do some cleanup.  Go immediately through disas_insn_new and only jump
back after parsing the prefixes.

disas_insn() now only contains the three sigsetjmp cases, and they
are more easily managed if they are inlined into i386_tr_translate_insn.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
ef309ec2a6 target/i386: split legacy decoder into a separate function
Split the bits that have some duplication with disas_insn_new, from
those that should be the main topic of the conversion.  This is the
first step towards removing duplicate decoding of prefixes between
disas_insn and disas_insn_new.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
7795b455dd target/i386: decode x87 instructions in a separate function
These are unlikely to be converted to the table-based decoding
soon (perhaps there could be generic ESC decoding in decode-new.c.inc
for the Mod/RM byte, but not operand decoding), so keep them separate
from the remaining legacy-decoded instructions.

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
aef4f4affd target/i386: remove now-converted opcodes from old decoder
Send all converted opcodes to disas_insn_new() directly from the big
decoding switch statement; once more, the debugging/bisecting logic
disappears.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
3519b813e1 target/i386: port extensions of one-byte opcodes to new decoder
A few two-byte opcodes are simple extensions of existing one-byte opcodes;
they are easy to decode and need no change to emit.c.inc.  Port them to
the new decoder.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
37861fa519 target/i386: move BSWAP to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
2b8046f361 target/i386: move remaining conditional operations to new decoder
Move long-displacement Jcc, SETcc and CMOVcc to the new decoder.
While filling in the tables makes the code seem longer, the new
emitters are all just one line of code.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
40c4b92f1c target/i386: merge and enlarge a few ranges for call to disas_insn_new
Since new opcodes are not going to be added in translate.c, round out the
case labels that call disas_insn_new(), including whole sets of
eight opcodes when possible.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
d7c41a60d0 target/i386: move C0-FF opcodes to new decoder (except for x87)
The shift instructions are rewritten instead of reusing code from the old
decoder.  Rotates use CC_OP_ADCOX more extensively and generally rely
more on the optimizer, so that the code generators are shared between
the immediate-count and variable-count cases.

In particular, this makes gen_RCL and gen_RCR pretty efficient for the
count == 1 case, which becomes (apart from a few extra movs) something like:

  (compute_cc_all if needed)
  // save old value for OF calculation
  mov     cc_src2, T0
  // the bulk of RCL is just this!
  deposit T0, cc_src, T0, 1, TARGET_LONG_BITS - 1
  // compute carry
  shr     cc_dst, cc_src2, length - 1
  and     cc_dst, cc_dst, 1
  // compute overflow
  xor     cc_src2, cc_src2, T0
  extract cc_src2, cc_src2, length - 1, 1

32-bit MUL and IMUL are also slightly more efficient on 64-bit hosts.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
b603136402 target/i386: generalize gen_movl_seg_T0
In the new decoder it is sometimes easier to put the segment
in T1 instead of T0, usually because another operand was loaded
into T0 by common code.  Generalize gen_movl_seg_T0 to allow
using any source.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:26 +02:00
Paolo Bonzini
5e9e21bcc4 target/i386: move 60-BF opcodes to new decoder
Compared to the old decoder, the main differences in translation
are for the little-used ARPL instruction.  IMUL is adjusted a bit
to share more code to produce flags, but is otherwise very similar.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:53:14 +02:00
Paolo Bonzini
2666fbd271 target/i386: allow instructions with more than one immediate
While keeping decode->immediate for convenience and for 4-operand instructions,
store the immediate in X86DecodedOp as well.  This enables instructions
with more than one immediate such as ENTER.  It can also be used for far
calls and jumps.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:52:19 +02:00
Paolo Bonzini
442e38c4fb target/i386: extract gen_far_call/jmp, reordering temporaries
Extract the code into new functions, and swap T0/T1 so that T0 corresponds
to the first immediate in the instruction stream.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:52:19 +02:00
Paolo Bonzini
cc1d28bdbe target/i386: move 00-5F opcodes to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:52:19 +02:00
Paolo Bonzini
445457693c target/i386: reintroduce debugging mechanism
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:52:12 +02:00
Paolo Bonzini
8b5de7ea56 target/i386: cleanup *gen_eob*
Create a new wrapper for syscall/sysret, and do not go through multiple
layers of wrappers.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:52:09 +02:00
Paolo Bonzini
ccfabc00e0 target/i386: clarify the "reg" argument of functions returning CCPrepare
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:52:02 +02:00
Paolo Bonzini
89e4e65ac0 target/i386: do not use s->T0 and s->T1 as scratch registers for CCPrepare
Instead of using s->T0 or s->T1, create a scratch register
when computing the C, NC, L or LE conditions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:51:59 +02:00
Paolo Bonzini
bccb0c138e target/i386: extend cc_* when using them to compute flags
Instead of using s->tmp0 or s->tmp4 as the result, just extend the cc_*
registers in place.  It is harmless and, if multiple setcc instructions
are used, the optimizer will be able to remove the redundant ones.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:51:53 +02:00
Paolo Bonzini
dd17322be7 target/i386: pull cc_op update to callers of gen_jmp_rel{,_csize}
gen_update_cc_op must be called before control flow splits.  Doing it
in gen_jmp_rel{,_csize} may hide bugs; instead, assert that cc_op is
clean---even if that means a few more calls to gen_update_cc_op().

With this new invariant, setting cc_op to CC_OP_DYNAMIC is unnecessary
since the caller should have done it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:51:43 +02:00
Paolo Bonzini
bbba9594e8 target/i386: cleanup cc_op changes for REP/REPZ/REPNZ
gen_update_cc_op must be called before control flow splits.  Do it
where the jump on ECX!=0 is translated.

On the other hand, remove the call before gen_jcc1, which takes care of
it already, and explain why REPZ/REPNZ need not use CC_OP_DYNAMIC---the
translation block ends before any control-flow-dependent cc_op could
be observed.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:51:31 +02:00
Paolo Bonzini
64ddadc6bb target/i386: cc_op is not dynamic in gen_jcc1
Resetting cc_op to CC_OP_DYNAMIC should be done at control flow junctions,
which is not the case here.  This translation block is ending and the
only effect of calling set_cc_op() would be a discard of s->cc_srcT.
This discard is useless (it's a temporary, not a global) and in fact
prevents gen_prepare_cc from returning s->cc_srcT.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:51:17 +02:00
Paolo Bonzini
e995f3f944 target/i386: remove mask from CCPrepare
With the introduction of TSTEQ and TSTNE the .mask field is always -1,
so remove all the now-unnecessary code.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:50:39 +02:00
Paolo Bonzini
9309b53e83 target/i386: use TSTEQ/TSTNE to check flags
The new conditions obviously come in handy when testing individual bits
of EFLAGS, and they make it possible to remove the .mask field of
CCPrepare.

Lowering to shift+and is done by the optimizer if necessary.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:50:39 +02:00
Paolo Bonzini
15957eb9ef target/i386: use TSTEQ/TSTNE to test low bits
When testing the sign bit or equality to zero of a partial register, it
is useful to use a single TSTEQ or TSTNE operation.  It can also be used
to test the parity flag, using bit 0 of the population count.

Do not do this for target_ulong-sized values however; the optimizer would
produce a comparison against zero anyway, and it avoids shifts by 64
which are undefined behavior.
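
As an illustration (variable and label names made up, not code from the
patch), testing the sign bit of an 8-bit value held in a TCG temporary now
becomes a single conditional operation:

    /* branch if bit 7 of "val" is set, i.e. the 8-bit value is negative */
    tcg_gen_brcondi_tl(TCG_COND_TSTNE, val, 0x80, label_negative);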

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:50:39 +02:00
Babu Moger
b776569a53 target/i386: Fix CPUID encoding of Fn8000001E_ECX
The following failure was observed while booting an SEV-SNP guest with the
SMP parameters "-smp 192,sockets=1,dies=12,cores=8,threads=2"; the guest
fails to boot:

qemu-system-x86_64: sev_snp_launch_update: SNP_LAUNCH_UPDATE ret=-5 fw_error=22 'Invalid parameter'
qemu-system-x86_64: SEV-SNP: CPUID validation failed for function 0x8000001e, index: 0x0.
provided: eax:0x00000000, ebx: 0x00000100, ecx: 0x00000b00, edx: 0x00000000
expected: eax:0x00000000, ebx: 0x00000100, ecx: 0x00000300, edx: 0x00000000
qemu-system-x86_64: SEV-SNP: failed update CPUID page

The failure is caused by an overflow of the bits used for "Node per
processor" in CPUID Fn8000001E_ECX. This field is 3 bits wide and
can hold a maximum value of 0x7. With dies=12 (0xB), it overflows and spills
over into the reserved bits. In the case of SEV-SNP, this causes a CPUID
enforcement failure and the guest fails to boot.

The PPR documentation for CPUID_Fn8000001E_ECX [Node Identifiers]
=================================================================
Bits    Description
31:11   Reserved.

10:8    NodesPerProcessor: Node per processor. Read-only.
        ValidValues:
        Value   Description
        0h      1 node per processor.
        7h-1h   Reserved.

7:0     NodeId: Node ID. Read-only. Reset: Fixed,XXh.
=================================================================

As per the spec, the only valid value for "node per processor" is 0; the
rest are reserved.

Looking back at the history of CPUID_Fn8000001E_ECX decoding, there were
cases where "node per processor" could be more than 1, but that is valid
only for pre-F17h (pre-EPYC) architectures. For EPYC or later CPUs, the
Linux kernel does not use this information to build the L3 topology.

Also note that the CPUID function 0x8000001E is available only when the
TOPOEXT feature is enabled, which is the case only for EPYC (F17h) or later
processors. Previous generations of processors therefore do not enumerate
the 0x8000001E leaf.

There could be some corner cases where older guests enable the
TOPOEXT feature by running with -cpu host, in which case legacy guests
might notice the topology change. To address those cases, introduce a
new CPU property, "legacy-multi-node". It will be true for older machine
types to maintain compatibility; by default it will be false, so the new
decoding will be used going forward.

The documentation is taken from Preliminary Processor Programming
Reference (PPR) for AMD Family 19h Model 11h, Revision B1 Processors 55901
Rev 0.25 - Oct 6, 2022.
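
Purely as an illustration of the layout described above (variable names made
up for the example), the fixed encoding corresponds to:

    /* CPUID_Fn8000001E_ECX: bits 7:0 = NodeId, bits 10:8 = NodesPerProcessor
     * (number of nodes minus one, i.e. 0 on EPYC and later processors). */
    uint32_t ecx = (node_id & 0xff)
                 | (((nodes_per_processor - 1) & 0x7) << 8);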

Cc: qemu-stable@nongnu.org
Fixes: 31ada106d8 ("Simplify CPUID_8000_001E for AMD")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-ID: <0ee4b0a8293188a53970a2b0e4f4ef713425055e.1714757834.git.babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-07 08:50:38 +02:00
Richard Henderson
873f9ca385 Accelerator patches
Merge tag 'accel-20240506' of https://github.com/philmd/qemu into staging

Accelerator patches

- Extract page-protection definitions to page-protection.h
- Rework in accel/tcg in preparation of extracting TCG fields from CPUState
- More uses of get_task_state() in user emulation
- Xen refactors in preparation for adding multiple map caches (Juergen & Edgar)
- MAINTAINERS updates (Aleksandar and Bin)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmY40CAACgkQ4+MsLN6t
# wN5drxAA1oIsuUzpAJmlMIxZwlzbICiuexgn/HH9DwWNlrarKo7V1l4YB8jd9WOg
# IKuj7c39kJKsDEB8BXApYwcly+l7DYdnAAI8Z7a+eN+ffKNl/0XBaLjsGf58RNwY
# fb39/cXWI9ZxKxsHMSyjpiu68gOGvZ5JJqa30Fr+eOGuug9Fn/fOe1zC6l/dMagy
# Dnym72stpD+hcsN5sVwohTBIk+7g9og1O/ctRx6Q3ZCOPz4p0+JNf8VUu43/reaR
# 294yRK++JrSMhOVFRzP+FH1G25NxiOrVCFXZsUTYU+qPDtdiKtjH1keI/sk7rwZ7
# U573lesl7ewQFf1PvMdaVf0TrQyOe6kUGr9Mn2k8+KgjYRAjTAQk8V4Ric/+xXSU
# 0rd7Cz7lyQ8jm0DoOElROv+lTDQs4dvm3BopF3Bojo4xHLHd3SFhROVPG4tvGQ3H
# 72Q5UPR2Jr2QZKiImvPceUOg0z5XxoN6KRUkSEpMFOiTRkbwnrH59z/qPijUpe6v
# 8l5IlI9GjwkL7pcRensp1VC6e9KC7F5Od1J/2RLDw3UQllMQXqVw2bxD3CEtDRJL
# QSZoS4d1jUCW4iAYdqh/8+2cOIPiCJ4ai5u7lSdjrIJkRErm32FV/pQLZauoHlT5
# eTPUgzDoRXVgI1X1slTpVXlEEvRNbhZqSkYLkXr80MLn5hTafo0=
# =3Qkg
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 06 May 2024 05:42:08 AM PDT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]

* tag 'accel-20240506' of https://github.com/philmd/qemu: (28 commits)
  MAINTAINERS: Update my email address
  MAINTAINERS: Update Aleksandar Rikalo email
  system: Pass RAM MemoryRegion and is_write in xen_map_cache()
  xen: mapcache: Break out xen_map_cache_init_single()
  xen: mapcache: Break out xen_invalidate_map_cache_single()
  xen: mapcache: Refactor xen_invalidate_map_cache_entry_unlocked
  xen: mapcache: Refactor xen_replace_cache_entry_unlocked
  xen: mapcache: Break out xen_ram_addr_from_mapcache_single
  xen: mapcache: Refactor xen_remap_bucket for multi-instance
  xen: mapcache: Refactor xen_map_cache for multi-instance
  xen: mapcache: Refactor lock functions for multi-instance
  xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry
  system: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
  user: Use get_task_state() helper
  user: Declare get_task_state() once in 'accel/tcg/vcpu-state.h'
  user: Forward declare TaskState type definition
  accel/tcg: Move @plugin_mem_cbs from CPUState to CPUNegativeOffsetState
  accel/tcg: Restrict cpu_plugin_mem_cbs_enabled() to TCG
  accel/tcg: Restrict qemu_plugin_vcpu_exit_hook() to TCG plugins
  accel/tcg: Update CPUNegativeOffsetState::can_do_io field documentation
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-06 10:19:10 -07:00
Richard Henderson
d4c453f648 * target/i386: Introduce SapphireRapids-v3 to add missing features
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* target/i386: Introduce SapphireRapids-v3 to add missing features
* switch boards to "default y"
* allow building emulators without any board
* configs: list "implied" device groups in the default configs
* remove unnecessary declarations from typedefs.h
* target/i386: Give IRQs a chance when resetting HF_INHIBIT_IRQ_MASK

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmY1ILsUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroNtIwf+MEehq2HudZvsK1M8FrvNmkB/AssO
# x4tqL8DlTus23mQDBu9+rANTB93ManJdK9ybtf6NfjEwK+R8RJslLVnuy/qT+aQX
# PD208L88fjZg17G8uyawwvD1VmqWzHFSN14ShmKzqB2yPXXo/1cJ30w78DbD50yC
# 6rw/xbC5j195CwE2u8eBcIyY4Hh2PUYEE4uyHbYVr57cMjfmmA5Pg4I4FJrpLrF3
# eM2Avl/4pIbsW3zxXVB8QbAkgypxZErk3teDK1AkPJnlnBYM1jGKbt/GdKe7vcHR
# V/o+7NlcbS3oHVItQ2gP3m91stjFq+NhixaZpa0VlmuqayBa3xNGl0G6OQ==
# =ZbNW
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 03 May 2024 10:36:59 AM PDT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (46 commits)
  qga/commands-posix: fix typo in qmp_guest_set_user_password
  migration: do not include coroutine_int.h
  kvm: move target-dependent interrupt routing out of kvm-all.c
  pci: remove some types from typedefs.h
  tcg: remove CPU* types from typedefs.h
  display: remove GraphicHwOps from typedefs.h
  qapi/machine: remove types from typedefs.h
  monitor: remove MonitorDef from typedefs.h
  migration: remove PostcopyDiscardState from typedefs.h
  lockable: remove QemuLockable from typedefs.h
  intc: remove PICCommonState from typedefs.h
  qemu-option: remove QemuOpt from typedefs.h
  net: remove AnnounceTimer from typedefs.h
  numa: remove types from typedefs.h
  qdev-core: remove DeviceListener from typedefs.h
  fw_cfg: remove useless declarations from typedefs.h
  build: do not build virtio-vga-gl if virgl/opengl not available
  bitmap: Use g_try_new0/g_new0/g_renew
  target/i386: Introduce SapphireRapids-v3 to add missing features
  docs: document new convention for Kconfig board symbols
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-06 08:34:34 -07:00
Philippe Mathieu-Daudé
b254c342cf accel/tcg: Access tcg_cflags with getter / setter
Access the CPUState::tcg_cflags via tcg_cflags_has() and
tcg_cflags_set() helpers.

Mechanical change using the following Coccinelle spatch script:

  @@
  expression cpu;
  expression flags;
  @@
  -     cpu->tcg_cflags & flags
  +     tcg_cflags_has(cpu, flags)

  @@
  expression cpu;
  expression flags;
  @@
  -     (tcg_cflags_has(cpu, flags))
  +     tcg_cflags_has(cpu, flags)

  @@
  expression cpu;
  expression flags;
  @@
  -     cpu->tcg_cflags |= flags;
  +     tcg_cflags_set(cpu, flags);

Then manually moving the declarations, and adding both
tcg_cflags_has() and tcg_cflags_set() definitions.
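
A minimal sketch of the two helpers, matching the semantics of the spatch
rules above (the actual definitions may differ slightly):

    bool tcg_cflags_has(CPUState *cpu, uint32_t flags)
    {
        return cpu->tcg_cflags & flags;
    }

    void tcg_cflags_set(CPUState *cpu, uint32_t flags)
    {
        cpu->tcg_cflags |= flags;
    }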

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240427155714.53669-15-philmd@linaro.org>
2024-05-06 11:21:05 +02:00
Philippe Mathieu-Daudé
74781c0888 exec/cpu: Extract page-protection definitions to page-protection.h
Extract page-protection definitions from "exec/cpu-all.h"
to "exec/page-protection.h".

The list of files requiring the new header was generated
using:

$ git grep -wE \
  'PAGE_(READ|WRITE|EXEC|RWX|VALID|ANON|RESERVED|TARGET_.|PASSTHROUGH)'

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240427155714.53669-3-philmd@linaro.org>
2024-05-06 11:17:15 +02:00
Richard Henderson
97c872276d - Fix NULL dereference in NVMM & WHPX init_vcpu()

- Fix NULL dereference in NVMM & WHPX init_vcpu()
- Move user emulation headers "exec/user" to "user"
- Fix SH-4 ADDV / SUBV opcodes
- Drop Cocoa compatibility on macOS <= 10.12
- Update Anthony PERARD email

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmY1BE4ACgkQ4+MsLN6t
# wN73jg//dbdHQU+4oM7BgTduDZn1ulKz5DtNEJawRP6vxIcSQ+Co2Yd+gdLOSXNI
# 2BaoiOQ8cyDEHj1Uud3WVB0GsQYvHrqgXcjeHpX6yFFCZNyRvzEXizJPHKCtq+4e
# XQFtfTFftlJdaKCLqyDqVnrVNRacFPm7kinrEQbTSmglAhwnyu3GwsokDAYiJTqp
# g0n6mX/pWVEMDNY1HrDsk2Q/pyIZFmzhtuRyXRvi/bh8/BnmMCpySG+2463dnu1O
# xIGr2w8ldc+mKn2w2k3wYKDnUHz/NhOkE86tL/ZxIgjCzeenZXwXNdzM+cuAmOZX
# L9KIu5io6bTevYVwwHhd5/N6MCqVEhoRmsUQfF0CIkIzxXbyF14M89YHXZo3YJAd
# n2uoJ7i6hF/4Pt6Uqlg09+vEk7onwrobnTPnbKHEKNWHNOMKXpq1CBxxcVz2qe24
# +CTAAOOhHqaTjODPSexzHZDZYxugCy1XSqps9AFF1HqUcmsPCL/PQ75YGTJJO0oF
# 0V1Yvzjhin26AQS9SglIeXnHxYC26Cg2mXnUpVbryWnG888r0XAGpRl+FEuXK7Ln
# /dGuCIWTozypSkG9304IlxlYsOoXhL11NZqINW+W/Tor3dMRQhWUQcHqv98Jl4Ad
# rnpzZ0Dhd9ityZdbI0CCMZZZLY5dw1Rq5q407GTJr1CDU4PJBh0=
# =N8q0
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 03 May 2024 08:35:42 AM PDT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]

* tag 'accel-sh4-ui-20240503' of https://github.com/philmd/qemu:
  ui/cocoa.m: Drop old macOS-10.12-and-earlier compat ifdefs
  target/sh4: Rename TCGv variables as manual for SUBV opcode
  target/sh4: Rename TCGv variables as manual for ADDV opcode
  target/sh4: Fix SUBV opcode
  target/sh4: Fix ADDV opcode
  MAINTAINERS: Update my email address
  plugins: Update stale comment
  plugins/api: Only include 'exec/ram_addr.h' with system emulation
  coverity: Update user emulation regexp
  user: Move 'thunk.h' from 'exec/user' to 'user'
  user: Move 'abitypes.h' from 'exec/user' to 'user'
  exec: Include missing license in 'exec/cpu-common.h'
  accel/whpx: Fix NULL dereference in whpx_init_vcpu()
  accel/nvmm: Fix NULL dereference in nvmm_init_vcpu()

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-05-03 14:42:50 -07:00
Lei Wang
b10b248173 target/i386: Introduce SapphireRapids-v3 to add missing features
Add the missing features (ss, tsc-adjust, cldemote, movdiri, movdir64b) to
the SapphireRapids-v3 CPU model.

Signed-off-by: Lei Wang <lei4.wang@intel.com>
Message-ID: <20240424072912.43188-1-lei4.wang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-03 15:47:48 +02:00
Paolo Bonzini
4921d0a753 i386: switch boards to "default y"
Some targets use "default y" for boards to filter out those that require
TCG.  For consistency we are switching all other targets to do the same.
Continue with i386.

No changes to generated config-devices.mak files, other than
adding CONFIG_I386 to the x86_64-softmmu target.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-03 15:47:47 +02:00
Philippe Mathieu-Daudé
083367dbbf accel/whpx: Fix NULL dereference in whpx_init_vcpu()
When mechanically moving the @dirty field to AccelCPUState
in commit 9ad49538c7, we neglected that cpu->accel is still NULL
when we want to dereference it.

Fixes: 9ad49538c7 ("accel/whpx: Use accel-specific per-vcpu @dirty field")
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Suggested-by: Volker Rümelin <vr_qemu@t-online.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240429091918.27429-2-philmd@linaro.org>
2024-05-03 14:37:51 +02:00
Philippe Mathieu-Daudé
61653b4a97 accel/nvmm: Fix NULL dereference in nvmm_init_vcpu()
When mechanically moving the @dirty field to AccelCPUState
in commit 79f1926b2d, we neglected that cpu->accel is still NULL
when we want to dereference it.

Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Suggested-by: Volker Rümelin <vr_qemu@t-online.de>
Fixes: 79f1926b2d ("accel/nvmm: Use accel-specific per-vcpu @dirty field")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240429091918.27429-3-philmd@linaro.org>
2024-05-03 14:37:51 +02:00
Paolo Bonzini
5ce77fcb1b Kconfig: kvm: allow building without any board
KVM code might have to call functions on the PCIDevice that is
passed to kvm_arch_fixup_msi_route().  The build fails in the case
where --without-default-devices is used and no board is
configured.  While this is not really a useful configuration,
and therefore setting up stubs for CONFIG_PCI would be overkill,
failing the build is impolite.  Just include the PCI
subsystem if kvm_arch_fixup_msi_route() requires it, as
is the case for ARM and x86.

Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-03 14:16:26 +02:00
Ruihan Li
6a5a63f74b target/i386: Give IRQs a chance when resetting HF_INHIBIT_IRQ_MASK
When emulated with QEMU, interrupts will never come in the following
loop. However, if the NOP instruction is uncommented, interrupts will
fire as normal.

    loop:
            cli
            call do_sti
            jmp loop

    do_sti:
            sti
            # nop
            ret

This behavior is different from that of a real processor. For example,
if KVM is enabled, interrupts will always fire regardless of whether the
NOP instruction is commented or not. Also, the Intel Software Developer
Manual states that after the STI instruction is executed, the interrupt
inhibit should end as soon as the next instruction (e.g., the RET
instruction if the NOP instruction is commented) is executed.

The problem is that the previous code may choose not to end
the TB even if HF_INHIBIT_IRQ_MASK has just been reset (e.g., in the
case where the STI instruction is immediately followed by the RET
instruction), so that IRQs may not have a chance to trigger. This commit
fixes the problem by always terminating the current TB to give IRQs a
chance to trigger when HF_INHIBIT_IRQ_MASK is reset.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
Message-ID: <20240415064518.4951-4-lrh2000@pku.edu.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-03 14:15:48 +02:00
Thomas Huth
4984321436 target/i386/cpu: Remove "x86" prefix from the CPU list
Printing an "x86" in front of each CPU name is not helpful at all:
It is confusing for the users since they don't know whether they
have to specify these letters for the "-cpu" parameter, too, and
it also takes some precious space in the dense output of the CPU
entries. Let's simply remove this now and use two spaces at the
beginning of the lines for the indentation of the entries instead,
as most other target architectures already do for their CPU help
output.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-29 09:37:26 +03:00
Philippe Mathieu-Daudé
e620363687 accel/hvf: Use accel-specific per-vcpu @dirty field
HVF has a specific use of the CPUState::vcpu_dirty field
(CPUState::vcpu_dirty is not used by common code).
To make this field accel-specific, add and use a new
@dirty variable in the AccelCPUState structure.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240424174506.326-4-philmd@linaro.org>
2024-04-26 17:03:00 +02:00
Philippe Mathieu-Daudé
79f1926b2d accel/nvmm: Use accel-specific per-vcpu @dirty field
NVMM has a specific use of the CPUState::vcpu_dirty field
(CPUState::vcpu_dirty is not used by common code).
To make this field accel-specific, add and use a new
@dirty variable in the AccelCPUState structure.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240424174506.326-3-philmd@linaro.org>
2024-04-26 17:02:56 +02:00
Philippe Mathieu-Daudé
9ad49538c7 accel/whpx: Use accel-specific per-vcpu @dirty field
WHPX has a specific use of the CPUState::vcpu_dirty field
(CPUState::vcpu_dirty is not used by common code).
To make this field accel-specific, add and use a new
@dirty variable in the AccelCPUState structure.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240424174506.326-2-philmd@linaro.org>
2024-04-26 17:02:24 +02:00
Philippe Mathieu-Daudé
7e17a52469 target/i386: Include missing 'exec/exec-all.h' header
The XRSTOR instruction ends up calling tlb_flush(), declared
in "exec/exec-all.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231211212003.21686-13-philmd@linaro.org>
2024-04-26 15:31:37 +02:00
Philippe Mathieu-Daudé
e92dd33224 target: Define TCG_GUEST_DEFAULT_MO in 'cpu-param.h'
accel/tcg/ files requires the following definitions:

  - TARGET_LONG_BITS
  - TARGET_PAGE_BITS
  - TARGET_PHYS_ADDR_SPACE_BITS
  - TCG_GUEST_DEFAULT_MO

The first 3 are defined in "cpu-param.h". The last one
in "cpu.h", with a bunch of definitions irrelevant for
TCG. By moving the TCG_GUEST_DEFAULT_MO definition to
"cpu-param.h", we can simplify various accel/tcg includes.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20231211212003.21686-4-philmd@linaro.org>
2024-04-26 15:31:37 +02:00
Richard Henderson
a118c4aff4 Misc HW patch queue
Merge tag 'hw-misc-20240425' of https://github.com/philmd/qemu into staging

Misc HW patch queue

- Script to compare machines compat_props[] (Maksim)
- Introduce 'module' CPU topology level (Zhao)
- Various cleanups (Thomas, Zhao, Inès, Bernhard)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmYqN3wACgkQ4+MsLN6t
# wN4hTw/9FHsItnEkme/864DRPSP7A9mCGa+JfzJmsL8oUb9fBjXXKm+lNchMLu3B
# uvzfXB2Ea24yf5vyrldo0XlU3i/4GDvqXTI6YFYqBvitGICauYBu+6n2NZh2Y/Pn
# zZCcVo167o0q7dHu2WSrZ6cSUchsF2C80HjuS07QaN2YZ7QMuN1+uqTjCQ/JHQWA
# MH4xHh7cXdfCbbv8iNhMWn6sa+Bw/UyfRcc2W6w9cF5Q5cuuTshgDyd0JBOzkM1i
# Mcul7TuKrSiLUeeeqfTjwtw3rtbNfkelV3ycgvgECFAlzPSjF5a6d/EGdO2zo3T/
# aFZnQBYrb4U0SzsmfXFHW7cSylIc1Jn2CCuZZBIvdVcu8TGDD5XsgZbGoCfKdWxp
# l67qbQJy1Mp3LrRzygJIaxDOfE8fhhRrcIxfK/GoTHaCkqeFRkGjTeiDTVBqAES2
# zs6kUYZyG/xGaa2tsMu+HbtSO5EEqPC2QCdHayY3deW42Kwjj/HFV50Ya8YgYSVp
# gEAjTDOle2dDjlkYud+ymTJz7LnGb3G7q0EZRI9DWolx/bu+uZGQqTSRRre4qFQY
# SgN576hsFGN4NdM7tyJWiiqD/OC9ZeqUx3gGBtmI52Q6obBCE9hcow0fPs55Tk95
# 1YzPrt/3IoPI5ZptCoA8DFiysQ46OLtpIsQO9YcrpJmxWyLDSr0=
# =tm+U
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 25 Apr 2024 03:59:08 AM PDT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]

* tag 'hw-misc-20240425' of https://github.com/philmd/qemu: (22 commits)
  hw/core: Support module-id in numa configuration
  hw/core: Introduce module-id as the topology subindex
  hw/core/machine: Support modules in -smp
  hw/core/machine: Introduce the module as a CPU topology level
  hw/i386/pc_sysfw: Remove unused parameter from pc_isa_bios_init()
  hw/misc : Correct 5 spaces indents in stm32l4x5_exti
  hw/xtensa: Include missing 'exec/cpu-common.h' in 'bootparam.h'
  hw/elf_ops: Rename elf_ops.h -> elf_ops.h.inc
  hw/cxl/cxl-cdat: Make cxl_doe_cdat_init() return boolean
  hw/cxl/cxl-cdat: Make ct3_build_cdat() return boolean
  hw/cxl/cxl-cdat: Make ct3_load_cdat() return boolean
  hw: Add a Kconfig switch for the TYPE_CPU_CLUSTER device
  hw: Fix problem with the A*MPCORE switches in the Kconfig files
  hw/riscv/virt: Replace sprintf by g_strdup_printf
  hw/misc/imx: Replace sprintf() by snprintf()
  hw/misc/applesmc: Simplify DeviceReset handler
  target/i386: Move APIC related code to cpu-apic.c
  hw/core: Remove check on NEED_CPU_H in tcg-cpu-ops.h
  scripts: add script to compare compatibility properties
  python/qemu/machine: add method to retrieve QEMUMachine::binary field
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-04-25 09:43:29 -07:00
Peter Maydell
ad80e36744 hw, target: Add ResetType argument to hold and exit phase methods
We pass a ResetType argument to the Resettable class enter
phase method, but we don't pass it to hold and exit, even though
the callsites have it readily available. This means that if
a device cared about the ResetType it would need to record it
in the enter phase method to use later on. Pass the type to
all three of the phase methods to avoid having to do that.

Commit created with

  for dir in hw target include; do \
      spatch --macro-file scripts/cocci-macro-file.h \
             --sp-file scripts/coccinelle/reset-type.cocci \
             --keep-comments --smpl-spacing --in-place \
             --include-headers --dir $dir; done

and no manual edits.
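
For illustration, a converted hold phase method now has this shape (device
name made up; only the signature change matters):

    static void mydev_reset_hold(Object *obj, ResetType type)
    {
        /* "type" is passed directly instead of being cached by enter() */
    }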

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20240412160809.1260625-5-peter.maydell@linaro.org
2024-04-25 10:21:06 +01:00
Philippe Mathieu-Daudé
63073574e8 target/i386: Move APIC related code to cpu-apic.c
Move APIC related code split in cpu-sysemu.c and
monitor.c to cpu-apic.c.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20240321154838.95771-4-philmd@linaro.org>
2024-04-25 10:12:54 +02:00
Mark Cave-Ayland
7653b44534 target/i386/translate.c: always write 32-bits for SGDT and SIDT
The various Intel CPU manuals claim that SGDT and SIDT can write either 24-bits
or 32-bits depending upon the operand size, but this is incorrect. Not only do
the Intel CPU manuals give contradictory information between processor
revisions, but this information doesn't even match real-life behaviour.

In fact, tests on real hardware show that the CPU always writes 32-bits for SGDT
and SIDT, and this behaviour is required for at least OS/2 Warp and WFW 3.11 with
Win32s to function correctly. Remove the masking applied due to the operand size
for SGDT and SIDT so that the TCG behaviour matches the behaviour on real
hardware.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2198

--
MCA: Whilst I don't have a copy of OS/2 Warp handy, I've confirmed that this
patch fixes the issue in WFW 3.11 with Win32s. For more technical information I
highly recommend the excellent write-up at
https://www.os2museum.com/wp/sgdtsidt-fiction-and-reality/.
Message-ID: <20240419195147.434894-1-mark.cave-ayland@ilande.co.uk>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:26 +02:00
Zhao Liu
aec202cb0e target/i386/cpu: Merge the warning and error messages for AMD HT check
Currently, the difference between warn_report_once() and
error_report_once() is that the former has the "warning:" prefix, while the
latter does not have a similar level prefix.

At the same time, since there is no error-handling logic here and the
purpose of error_report_once() is only to alert the user to an abnormal
situation, there is no need for an error-level message; a warning is
enough.

Therefore, downgrade the message in error_report_once() to a warning and
merge it into the previous warn_report_once().

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240327103951.3853425-4-zhao1.liu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:26 +02:00
Zhao Liu
8e3991ebc8 target/i386/cpu: Consolidate the use of warn_report_once()
The difference between error_printf() and error_report() is that the latter
may contain more information, such as the name of the program
("qemu-system-x86_64").

Thus its variant error_report_once() and warn_report()'s variant
warn_report_once() can be used here to print the information only once,
without a static local variable "ht_warned".
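
Illustrative before/after of the pattern being removed (the message text is
made up for the example):

    /* before */
    static bool warned;
    if (!warned) {
        error_report("some one-time warning");
        warned = true;
    }

    /* after */
    warn_report_once("some one-time warning");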

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240327103951.3853425-3-zhao1.liu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:26 +02:00
Zhao Liu
7502ffb2f3 target/i386/host-cpu: Consolidate the use of warn_report_once()
Use warn_report_once() to get rid of the static local variable "warned".

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240327103951.3853425-2-zhao1.liu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:26 +02:00
Michael Roth
ea7fbd3753 hw/i386/sev: Use legacy SEV VM types for older machine types
Newer 9.1 machine types will default to using the KVM_SEV_INIT2 API for
creating SEV/SEV-ES guests going forward. However, this API results in guest
measurement changes which are generally not expected for users of these
older guest types and can cause disruption if they switch to a newer
QEMU/kernel version. Avoid this by continuing to use the older
KVM_SEV_INIT/KVM_SEV_ES_INIT APIs for older machine types.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-ID: <20240409230743.962513-4-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Michael Roth
023267334d i386/sev: Add 'legacy-vm-type' parameter for SEV guest objects
QEMU will currently automatically make use of the KVM_SEV_INIT2 API for
initializing SEV and SEV-ES guests versus the older
KVM_SEV_INIT/KVM_SEV_ES_INIT interfaces.

However, the older interfaces will silently avoid sync'ing FPU/XSAVE
state to the VMSA prior to encryption, thus relying on behavior and
measurements that assume the related fields to be all zero.

With KVM_SEV_INIT2, this state is now synced into the VMSA, resulting in
measurement changes and, theoretically, behavioral changes, though the
latter are unlikely to be seen in practice.

To allow a smooth transition to the newer interface, while still
providing a mechanism to maintain backward compatibility with VMs
created using the older interfaces, provide a new command-line
parameter:

  -object sev-guest,legacy-vm-type=true,...

and have it default to false.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-ID: <20240409230743.962513-2-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Paolo Bonzini
663e2f443e target/i386: SEV: use KVM_SEV_INIT2 if possible
Implement support for the KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM virtual
machine types, and the KVM_SEV_INIT2 function of KVM_MEMORY_ENCRYPT_OP.

These replace the KVM_SEV_INIT and KVM_SEV_ES_INIT functions, and have
several advantages:

- sharing the initialization sequence with SEV-SNP and TDX

- allowing arguments including the set of desired VMSA features

- protection against invalid use of KVM_GET/SET_* ioctls for guests
  with encrypted state

If the KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM types are not supported,
fall back to KVM_SEV_INIT and KVM_SEV_ES_INIT (which use the
default x86 VM type).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Paolo Bonzini
ee88612df1 target/i386: Implement mc->kvm_type() to get VM type
KVM is introducing a new API to create confidential guests, which
will be used by TDX and SEV-SNP but is also available for SEV and
SEV-ES.  The API uses the VM type argument to KVM_CREATE_VM to
identify which confidential computing technology to use.

Since there are no other expected uses of VM types, delegate
mc->kvm_type() for x86 boards to the confidential-guest-support
object pointed to by ms->cgs.

For example, if a sev-guest object is specified to confidential-guest-support,
like,

  qemu -machine ...,confidential-guest-support=sev0 \
       -object sev-guest,id=sev0,...

it will check if a VM type KVM_X86_SEV_VM or KVM_X86_SEV_ES_VM
is supported, and if so use them together with the KVM_SEV_INIT2
function of the KVM_MEMORY_ENCRYPT_OP ioctl. If not, it will fall back to
KVM_SEV_INIT and KVM_SEV_ES_INIT.
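
A hypothetical sketch of the delegation (function and helper names are
assumptions, not the exact code):

    static int x86_kvm_type(MachineState *ms, const char *vm_type)
    {
        /* delegate to the confidential-guest-support object, if any */
        if (ms->cgs) {
            return x86_confidential_guest_kvm_type(
                       X86_CONFIDENTIAL_GUEST(ms->cgs));
        }
        return 0;    /* default VM type */
    }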

This is a preparatory work towards TDX and SEV-SNP support, but it
will also enable support for VMSA features such as DebugSwap, which
are only available via KVM_SEV_INIT2.

Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Paolo Bonzini
d82e9c843d target/i386: introduce x86-confidential-guest
Introduce a common superclass for x86 confidential guest implementations.
It will extend ConfidentialGuestSupportClass with a method that provides
the VM type to be passed to KVM_CREATE_VM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Paolo Bonzini
a99c0c66eb KVM: remove kvm_arch_cpu_check_are_resettable
Board reset requires writing a fresh CPU state.  As far as KVM is
concerned, the only thing that blocks reset is that CPU state is
encrypted; therefore, kvm_cpus_are_resettable() can simply check
if that is the case.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Paolo Bonzini
5c3131c392 KVM: track whether guest state is encrypted
So far, KVM has allowed KVM_GET/SET_* ioctls to execute even if the
guest state is encrypted, in which case they do nothing.  For the new
API using VM types, instead, the ioctls will fail which is a safer and
more robust approach.

The new API will be the only one available for SEV-SNP and TDX, but it
is also usable for SEV and SEV-ES.  In preparation for that, require
architecture-specific KVM code to communicate the point at which guest
state is protected (which must be after kvm_cpu_synchronize_post_init(),
though that might change in the future in order to support migration).
From that point, skip reading registers so that cpu->vcpu_dirty is
never true: if it ever becomes true, kvm_arch_put_registers() will
fail miserably.
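
A sketch of the resulting guard around register synchronization (the field
name is an assumption):

    /* only re-read registers while the guest state is still accessible */
    if (!cpu->vcpu_dirty && !kvm_state->guest_state_protected) {
        kvm_arch_get_registers(cpu);
        cpu->vcpu_dirty = true;
    }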

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Xiaoyao Li
637c95b37b i386/sev: Switch to use confidential_guest_kvm_init()
Use confidential_guest_kvm_init() instead of calling the SEV-specific
sev_kvm_init(). This allows the introduction of multiple
confidential-guest-support subclasses for different x86 vendors.

As a bonus, stubs are not needed anymore since there is no
direct call from target/i386/kvm/kvm.c to SEV code.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20240229060038.606591-1-xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Pawan Gupta
41bdd98128 target/i386: Export RFDS bit to guests
Register File Data Sampling (RFDS) is a CPU side-channel vulnerability
that may expose stale register values. CPUs that set the RFDS_NO bit in MSR
IA32_ARCH_CAPABILITIES indicate that they are not vulnerable to RFDS.
Similarly, RFDS_CLEAR indicates that the CPU is affected by RFDS and has
the microcode to help mitigate it.

Make RFDS_CLEAR and RFDS_NO bits available to guests.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <9a38877857392b5c2deae7e7db1b170d15510314.1710341348.git.pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Tao Su
6e82d3b622 target/i386: Add new CPU model SierraForest
According to table 1-2 in Intel Architecture Instruction Set Extensions and
Future Features (rev 051) [1], SierraForest has the following new features
which have already been virtualized:

- CMPCCXADD CPUID.(EAX=7,ECX=1):EAX[bit 7]
- AVX-IFMA CPUID.(EAX=7,ECX=1):EAX[bit 23]
- AVX-VNNI-INT8 CPUID.(EAX=7,ECX=1):EDX[bit 4]
- AVX-NE-CONVERT CPUID.(EAX=7,ECX=1):EDX[bit 5]

Add above features to new CPU model SierraForest. Comparing with GraniteRapids
CPU model, SierraForest bare-metal removes the following features:

- HLE CPUID.(EAX=7,ECX=0):EBX[bit 4]
- RTM CPUID.(EAX=7,ECX=0):EBX[bit 11]
- AVX512F CPUID.(EAX=7,ECX=0):EBX[bit 16]
- AVX512DQ CPUID.(EAX=7,ECX=0):EBX[bit 17]
- AVX512_IFMA CPUID.(EAX=7,ECX=0):EBX[bit 21]
- AVX512CD CPUID.(EAX=7,ECX=0):EBX[bit 28]
- AVX512BW CPUID.(EAX=7,ECX=0):EBX[bit 30]
- AVX512VL CPUID.(EAX=7,ECX=0):EBX[bit 31]
- AVX512_VBMI CPUID.(EAX=7,ECX=0):ECX[bit 1]
- AVX512_VBMI2 CPUID.(EAX=7,ECX=0):ECX[bit 6]
- AVX512_VNNI CPUID.(EAX=7,ECX=0):ECX[bit 11]
- AVX512_BITALG CPUID.(EAX=7,ECX=0):ECX[bit 12]
- AVX512_VPOPCNTDQ CPUID.(EAX=7,ECX=0):ECX[bit 14]
- LA57 CPUID.(EAX=7,ECX=0):ECX[bit 16]
- TSXLDTRK CPUID.(EAX=7,ECX=0):EDX[bit 16]
- AMX-BF16 CPUID.(EAX=7,ECX=0):EDX[bit 22]
- AVX512_FP16 CPUID.(EAX=7,ECX=0):EDX[bit 23]
- AMX-TILE CPUID.(EAX=7,ECX=0):EDX[bit 24]
- AMX-INT8 CPUID.(EAX=7,ECX=0):EDX[bit 25]
- AVX512_BF16 CPUID.(EAX=7,ECX=1):EAX[bit 5]
- fast zero-length MOVSB CPUID.(EAX=7,ECX=1):EAX[bit 10]
- fast short CMPSB, SCASB CPUID.(EAX=7,ECX=1):EAX[bit 12]
- AMX-FP16 CPUID.(EAX=7,ECX=1):EAX[bit 21]
- PREFETCHI CPUID.(EAX=7,ECX=1):EDX[bit 14]
- XFD CPUID.(EAX=0xD,ECX=1):EAX[bit 4]
- EPT_PAGE_WALK_LENGTH_5 VMX_EPT_VPID_CAP(0x48c)[bit 7]

Add all features of GraniteRapids CPU model except above features to
SierraForest CPU model.

SierraForest doesn't support TSX and RTM but supports TAA_NO. When RTM is
not enabled on the host, KVM will not report TAA_NO, so just don't include
TAA_NO in the SierraForest CPU model.

[1] https://cdrdv2.intel.com/v1/dl/getContent/671368

Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Message-ID: <20240320021044.508263-1-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Zhenzhong Duan
c895fa54e3 target/i386: Introduce Icelake-Server-v7 to enable TSX
When starting an L2 guest with both L1 and L2 using Icelake-Server-v3 or
above, QEMU reports the warning below:

"warning: host doesn't support requested feature: MSR(10AH).taa-no [bit 8]"

The reason is that QEMU's Icelake-Server-v3 has the TSX feature disabled but
enables the taa-no bit. It is meaningless to claim that TSX is secure while
TSX is not supported at all, so L1 KVM doesn't expose taa-no to L2 if TSX is
unsupported, and starting L2 then triggers the warning.

Fix it by introducing a new version, Icelake-Server-v7, which has both the
TSX and taa-no features. The guest can then use TSX securely when it sees
taa-no.

This matches the production Icelake, which supports TSX and isn't susceptible
to TSX Async Abort (TAA) vulnerabilities, a.k.a. taa-no.

Ideally, TSX should have been enabled together with taa-no since v3, but for
compatibility we'd better add v7 to enable it.

Fixes: d965dc3559 ("target/i386: Add ARCH_CAPABILITIES related bits into Icelake-Server CPU model")
Tested-by: Xiangfei Ma <xiangfeix.ma@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Message-ID: <20240320093138.80267-2-zhenzhong.duan@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:25 +02:00
Sean Christopherson
a5acf4f26c i386/kvm: Move architectural CPUID leaf generation to separate helper
Move the architectural (for lack of a better term) CPUID leaf generation
to a separate helper so that the generation code can be reused by TDX,
which needs to generate a canonical VM-scoped configuration.

For now this is just a cleanup, so keep the function static.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240229063726.610065-23-xiaoyao.li@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-23 17:35:13 +02:00
Gerd Hoffmann
0d08c42368 kvm: add support for guest physical bits
Query kvm for supported guest physical address bits, in cpuid
function 80000008, eax[23:16].  Usually this is identical to host
physical address bits.  With NPT or EPT being used this might be
restricted to 48 (max 4-level paging address space size) even if
the host cpu supports more physical address bits.

When set, pass this to the guest, using CPUID too.  Guest firmware
can use this to figure out how big the usable guest physical address
space is, so that PCI BAR mappings are actually reachable.
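
A sketch of how the field described above is extracted (variable names made
up; "eax" holds CPUID leaf 0x80000008 EAX however it was obtained, e.g. from
KVM's supported-CPUID list):

    unsigned host_phys_bits  = eax & 0xff;          /* eax[7:0]   */
    unsigned guest_phys_bits = (eax >> 16) & 0xff;  /* eax[23:16] */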

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240318155336.156197-2-kraxel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-18 11:17:28 +02:00
Gerd Hoffmann
513ba32dcc target/i386: add guest-phys-bits cpu property
Allow setting guest-phys-bits (cpuid leaf 80000008, eax[23:16])
via -cpu $model,guest-phys-bits=$nr.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-ID: <20240318155336.156197-3-kraxel@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-04-18 11:17:28 +02:00
Richard Henderson
8df1ba49d7 target/i386: Preserve DisasContextBase.insn_start across rewind
When aborting translation of the current insn, restore the
previous value of insn_start.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Jørgen Hansen <Jorgen.Hansen@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-04-09 07:45:09 -10:00
Philippe Mathieu-Daudé
d6fd5d8346 accel/hvf: Un-inline hvf_arch_supports_guest_debug()
See previous commit and commit 9de9fa5cf2 ("Avoid using inlined
functions with external linkage") for rationale.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240313184954.42513-3-philmd@linaro.org>
2024-04-02 14:55:32 +02:00
Gregory Price
9dab7bbb01 target/i386/tcg: Enable page walking from MMIO memory
CXL emulation of interleave requires read and write hooks due to the
requirement for subpage granularity. The Linux kernel stack now enables
using this memory as conventional memory in a separate NUMA node. If a
process is deliberately forced to run from that node,
$ numactl --membind=1 ls
the page table walk on i386 fails.

Useful part of backtrace:

    (cpu=cpu@entry=0x555556fd9000, fmt=fmt@entry=0x555555fe3378 "cpu_io_recompile: could not find TB for pc=%p")
    at ../../cpu-target.c:359
    (retaddr=0, addr=19595792376, attrs=..., xlat=<optimized out>, cpu=0x555556fd9000, out_offset=<synthetic pointer>)
    at ../../accel/tcg/cputlb.c:1339
    (cpu=0x555556fd9000, full=0x7fffee0d96e0, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2030
    (cpu=cpu@entry=0x555556fd9000, p=p@entry=0x7ffff56fddc0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
    (cpu=cpu@entry=0x555556fd9000, addr=addr@entry=19595792376, oi=oi@entry=52, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
    at ../../accel/tcg/ldst_common.c.inc:301
    at ../../target/i386/tcg/sysemu/excp_helper.c:173
    (err=0x7ffff56fdf80, out=0x7ffff56fdf70, mmu_idx=0, access_type=MMU_INST_FETCH, addr=18446744072116178925, env=0x555556fdb7c0)
    at ../../target/i386/tcg/sysemu/excp_helper.c:578
    (cs=0x555556fd9000, addr=18446744072116178925, size=<optimized out>, access_type=MMU_INST_FETCH, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:604

Avoid this by plumbing the address all the way down from
x86_cpu_tlb_fill(), where it is available as retaddr, to the actual accessors,
which provide it to probe_access_full(), which already handles MMIO accesses.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2180
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2220
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Gregory Price <gregory.price@memverge.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-ID: <20240307155304.31241-2-Jonathan.Cameron@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-03-26 14:23:50 +01:00
Tao Su
7fd226b047 target/i386: Revert monitor_puts() in do_inject_x86_mce()
monitor_puts() doesn't check the monitor pointer, but do_inject_x86_mce()
may be called with a NULL monitor pointer. Revert the use of monitor_puts() in
do_inject_x86_mce() to fix this; it also makes it obvious again that we send
the same message to both the monitor and the log.

Fixes: bf0c50d4aa (monitor: expose monitor_puts to rest of code)
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Message-ID: <20240320083640.523287-1-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-03-20 09:47:00 +01:00
Paolo Bonzini
2cc68629a6 target/i386: fix direction of "32-bit MMU" test
The low bit of MMU indices for x86 TCG indicates whether the processor is
in 32-bit mode and therefore linear addresses have to be masked to 32 bits.
However, the index was computed incorrectly, leading to possible conflicts
in the TLB for any address above 4G.
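
As a rough sketch of the convention described above (illustrative only, not
QEMU's actual helper; which parity of the index means "32-bit" is an
assumption here):

    /* Sketch: mask linear addresses for MMU indexes that flag 32-bit mode. */
    static inline uint64_t mask_linear_addr(uint64_t addr, int mmu_idx)
    {
        bool is_32bit = mmu_idx & 1;   /* assumed encoding of the low bit */

        return is_32bit ? (uint32_t)addr : addr;
    }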

Analyzed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Fixes: b1661801c1 ("target/i386: Fix physical address truncation", 2024-02-28)
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2206
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-03-18 22:53:23 +01:00
Peter Maydell
441e0eefab Error reporting patches for 2024-03-12

Merge tag 'pull-error-2024-03-12' of https://repo.or.cz/qemu/armbru into staging

Error reporting patches for 2024-03-12

# -----BEGIN PGP SIGNATURE-----
#
# iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmXwWOYSHGFybWJydUBy
# ZWRoYXQuY29tAAoJEDhwtADrkYZT+voP/jAEyPfbtwggKLHjSCkHchn/uUziLJ2o
# //i7+ZV9soCizAEkW+AkIR17PsMCaRsa8W4AULLn+ZaDJNy1Vlj2WYIkgeFm/rba
# AWfNXywIg7dLnj0Hd98nz13hPuP52hO9vpakPhcua9L6mmk1htdqbbGIFIIfbQhp
# e6FM+sBEW44uGcZx+N0wMEpKF0F7RId/jzH4mfP35WE7CLaAr2EfTXFaadAM636e
# QsrM8wuiNAPQeyXz14gxYTWAnnMGglM5WQ4hoxSGN0y8c007gvff5vMKc7vapn4/
# DdiYJqpq/DIWaiGL0Fl8Cpry3WrQ8UY0st745kCLF/f9nlL0GvnBGdLdUaap7lQZ
# A/C1sDKNubAGwzcw643AhV73QHc9f5kDBdWIj5wj3k5DQmBmgKACzGs1edDVVB+2
# OaStqZZ/V9Q5gljjh6PiHEptTjPhsaftX7GGjbhXTJUDFB9GONSCEVwAdZZxJ0Pm
# 6cQLtcIMtcjL4xXNz6niVZkxGT/zu4kqbZ01LudIqEQAnULwRiVpyjkCmReSAOPP
# eBtkCQtn7WPlz4N3ZiV2+a1p4/e88KH9wvxF+XvPEJjgsdeUmxX44f82ouLPJzvE
# fOXE11tRr41u9m+UmoinVo581CKYGlkRJlNQWQwFOmnXoKP2nPZzADxraihkCR5p
# wT0Hz9uwJs94
# =6FSf
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 12 Mar 2024 13:30:14 GMT
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* tag 'pull-error-2024-03-12' of https://repo.or.cz/qemu/armbru:
  target/loongarch: Fix query-cpu-model-expansion to reject props
  target: Improve error reporting for CpuModelInfo member @props
  target/i386: Fix query-cpu-model-expansion to reject props
  target: Simplify type checks for CpuModelInfo member @props

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-03-12 16:55:56 +00:00
Markus Armbruster
8934643a0e target: Improve error reporting for CpuModelInfo member @props
query-cpu-model-comparison, query-cpu-model-baseline, and
query-cpu-model-expansion take CpuModelInfo arguments.  Errors in
@props members of these arguments are reported for 'props', without
further context.  For instance, s390x rejects

    {"execute": "query-cpu-model-comparison", "arguments": {"modela": {"name": "z13", "props": {}}, "modelb": {"name": "z14", "props": []}}}

with

    {"error": {"class": "GenericError", "desc": "Invalid parameter type for 'props', expected: object"}}

This is unusual; the common QAPI unmarshaling machinery would complain
about 'modelb.props'.  Our hand-written code to visit the @props
member neglects to provide the context.

Tweak it so it provides it.  The command above now fails with

    {"error": {"class": "GenericError", "desc": "Invalid parameter type for 'modelb.props', expected: dict"}}

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20240305145919.2186971-4-armbru@redhat.com>
Acked-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
2024-03-12 14:03:00 +01:00
Markus Armbruster
68192a5ffd target/i386: Fix query-cpu-model-expansion to reject props
CpuModelInfo member @props is semantically a mapping from name to
value, and syntactically a JSON object on the wire.  This translates
to QDict in C.  Since the QAPI schema language lacks the means to
express 'object', we use 'any' instead.  This is QObject in C.
Commands taking a CpuModelInfo argument need to check that the QObject is
a QDict.

The i386 version of qmp_query_cpu_model_expansion() fails to check.
Instead, @props is silently ignored when it's not an object.  For
instance,

    {"execute": "query-cpu-model-expansion", "arguments": {"type": "full", "model": {"name": "qemu64", "props": null}}}

succeeds.

Fix by refactoring the code to match the other targets.  Now the
command fails as it should:

    {"error": {"class": "GenericError", "desc": "Invalid parameter type for 'props', expected: object"}}
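
A minimal sketch of the kind of check involved (not the exact code added by
this patch), using the existing qobject_to() and error_setg() helpers:

    /* Sketch: @props is 'any' in the schema, i.e. a QObject in C; reject it
     * unless it is actually a QDict (a JSON object on the wire). */
    QDict *props = qobject_to(QDict, model->props);
    if (model->props && !props) {
        error_setg(errp, "Invalid parameter type for 'props', expected: object");
        return NULL;
    }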

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20240305145919.2186971-3-armbru@redhat.com>
2024-03-12 14:02:46 +01:00
Philippe Mathieu-Daudé
f8436a1605 target/i386/hvf: Use CPUState typedef
QEMU coding style recommends using structure typedefs:
https://www.qemu.org/docs/master/devel/style.html#typedefs

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240129164514.73104-14-philmd@linaro.org>
[thuth: Break long lines to avoid checkpatch.pl errors]
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-03-12 12:03:58 +01:00
Philippe Mathieu-Daudé
348802b526 target: Replace CPU_GET_CLASS(cpu -> obj) in cpu_reset_hold() handler
Since the CPU() macro is a simple cast, the following are equivalent:

  Object *obj;
  CPUState *cs = CPU(obj);

In order to ease static analysis when running
scripts/coccinelle/cpu_env.cocci from the previous commit,
replace:

 - CPU_GET_CLASS(cpu);
 + CPU_GET_CLASS(obj);

Most code uses the 'cs' variable name for the CPUState handle.
Replace a few 's' -> 'cs' to unify the cpu_reset_hold() style.

No logical change in this patch.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240129164514.73104-7-philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-03-12 11:46:16 +01:00
Philippe Mathieu-Daudé
94956d7b51 bulk: Call in place single use cpu_env()
Avoid CPUArchState local variable when cpu_env() is used once.

Mechanical patch using the following Coccinelle spatch script:

 @@
 type CPUArchState;
 identifier env;
 expression cs;
 @@
  {
 -    CPUArchState *env = cpu_env(cs);
      ... when != env
 -     env
 +     cpu_env(cs)
      ... when != env
  }

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240129164514.73104-5-philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-03-12 11:46:16 +01:00
Philippe Mathieu-Daudé
ee1004bba6 bulk: Access existing variables initialized to &S->F when available
When a variable is initialized to &struct->field, use it
in place. Rationale: while this makes the code more concise,
it also helps static analyzers.

Mechanical change using the following Coccinelle spatch script:

 @@
 type S, F;
 identifier s, m, v;
 @@
      S *s;
      ...
      F *v = &s->m;
      <+...
 -    &s->m
 +    v
      ...+>

Inspired-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240129164514.73104-2-philmd@linaro.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
[thuth: Dropped hunks that need a rebase, and fixed sizeof() in pmu_realize()]
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-03-12 11:46:16 +01:00
Peter Maydell
7489f7f3f8 Misc HW patch queue
- hmp: Shorter 'info qtree' output (Zoltan)
 - qdev: Add a granule_mode property (Eric)
 - Some ERRP_GUARD() fixes (Zhao)
 - Doc & style fixes in docs/interop/firmware.json (Thomas)
 - hw/xen: Housekeeping (Phil)
 - hw/ppc/mac99: Change timebase frequency 100 -> 25 MHz (Mark)
 - hw/intc/apic: Memory leak fix (Paolo)
 - hw/intc/grlib_irqmp: Ensure ncpus value is in range (Clément)
 - hw/m68k/mcf5208: Add support for reset (Angelo)
 - hw/i386/pc: Housekeeping (Phil)
 - hw/core/smp: Remove/deprecate parameter=0,1 adapting test-smp-parse (Zhao)

Merge tag 'hw-misc-20240309' of https://github.com/philmd/qemu into staging

Misc HW patch queue

- hmp: Shorter 'info qtree' output (Zoltan)
- qdev: Add a granule_mode property (Eric)
- Some ERRP_GUARD() fixes (Zhao)
- Doc & style fixes in docs/interop/firmware.json (Thomas)
- hw/xen: Housekeeping (Phil)
- hw/ppc/mac99: Change timebase frequency 100 -> 25 MHz (Mark)
- hw/intc/apic: Memory leak fix (Paolo)
- hw/intc/grlib_irqmp: Ensure ncpus value is in range (Clément)
- hw/m68k/mcf5208: Add support for reset (Angelo)
- hw/i386/pc: Housekeeping (Phil)
- hw/core/smp: Remove/deprecate parameter=0,1 adapting test-smp-parse (Zhao)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmXstpMACgkQ4+MsLN6t
# wN6XBw//dNItFhf1YX+Au4cNoQVDgHE9RtzEIGOnwcL1CgrA9rAQQfLRE5KWM6sN
# 1qiPh+T6SPxtiQ2rw4AIpsI7TXjO72b/RDWpUUSwnfH39eC77pijkxIK+i9mYI9r
# p0sPjuP6OfolUFYeSbYX+DmNZh1ONPf27JATJQEf0st8dyswn7lTQvJEaQ97kwxv
# UKA0JD5l9LZV8Zr92cgCzlrfLcbVblJGux9GYIL09yN78yqBuvTm77GBC/rvC+5Q
# fQC5PQswJZ0+v32AXIfSysMp2R6veo4By7VH9Lp51E/u9jpc4ZbcDzxzaJWE6zOR
# fZ01nFzou1qtUfZi+MxNiDR96LP6YoT9xFdGYfNS6AowZn8kymCs3eo7M9uvb+rN
# A2Sgis9rXcjsR4e+w1YPBXwpalJnLwB0QYhEOStR8wo1ceg7GBG6zHUJV89OGzsA
# KS8X0aV1Ulkdm/2H6goEhzrcC6FWLg8pBJpfKK8JFWxXNrj661xM0AAFVL9we356
# +ymthS2x/RTABSI+1Lfsoo6/SyXoimFXJJWA82q9Yzoaoq2gGMWnfwqxfix6JrrA
# PuMnNP5WNvh04iWcNz380P0psLVteHWcVfTRN3JvcJ9iJ2bpjcU1mQMJtvSF9wBn
# Y8kiJTUmZCu3br2e5EfxmypM/h8y29VD/1mxPk8Dtcq3gjx9AU4=
# =juZH
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 09 Mar 2024 19:20:51 GMT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD  6BB2 E3E3 2C2C DEAD C0DE

* tag 'hw-misc-20240309' of https://github.com/philmd/qemu: (43 commits)
  hw/m68k/mcf5208: add support for reset
  tests/unit/test-smp-parse: Test "parameter=0" SMP configurations
  tests/unit/test-smp-parse: Test smp_props.has_clusters
  tests/unit/test-smp-parse: Test the full 7-levels topology hierarchy
  tests/unit/test-smp-parse: Test "drawers" and "books" combination case
  tests/unit/test-smp-parse: Test "drawers" parameter in -smp
  tests/unit/test-smp-parse: Test "books" parameter in -smp
  tests/unit/test-smp-parse: Make test cases aware of the book/drawer
  tests/unit/test-smp-parse: Bump max_cpus to 4096
  tests/unit/test-smp-parse: Use CPU number macros in invalid topology case
  tests/unit/test-smp-parse: Drop the unsupported "dies=1" case
  hw/core/machine-smp: Calculate total CPUs once in machine_parse_smp_config()
  hw/core/machine-smp: Deprecate unsupported "parameter=1" SMP configurations
  hw/core/machine-smp: Remove deprecated "parameter=0" SMP configurations
  docs/interop/firmware.json: Fix doc for FirmwareFlashMode
  docs/interop/firmware.json: Align examples
  hw/intc/grlib_irqmp: abort realize when ncpus value is out of range
  mac_newworld: change timebase frequency from 100MHz to 25MHz for mac99 machine
  hmp: Add option to info qtree to omit details
  qdev: Add a granule_mode property
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-03-09 20:12:21 +00:00
Zhao Liu
f55cceac8c target/i386/sev: Fix missing ERRP_GUARD() for error_prepend()
As the comment in qapi/error, passing @errp to error_prepend() requires
ERRP_GUARD():

* = Why, when and how to use ERRP_GUARD() =
*
* Without ERRP_GUARD(), use of the @errp parameter is restricted:
...
* - It should not be passed to error_prepend(), error_vprepend() or
*   error_append_hint(), because that doesn't work with &error_fatal.
* ERRP_GUARD() lifts these restrictions.
*
* To use ERRP_GUARD(), add it right at the beginning of the function.
* @errp can then be used without worrying about the argument being
* NULL or &error_fatal.

ERRP_GUARD() avoids the case where @errp points to error_fatal and the
user can't see this additional information, because exit() happens in
error_setg() before the information is added [1].

sev_inject_launch_secret() passes @errp to error_prepend(), and as an
API defined in target/i386/sev.h, it needs to protect its @errp with
ERRP_GUARD().

To avoid the issue like [1] said, add missing ERRP_GUARD() at the
beginning of this function.

[1]: Issue description in the commit message of commit ae7c80a7bd
     ("error: New macro ERRP_GUARD()").

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240229143914.1977550-17-zhao1.liu@linux.intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-03-09 18:51:45 +01:00
Maciej S. Szmigiero
6093637b4d vmbus: Print a warning when enabled without the recommended set of features
Some Windows versions crash at boot or fail to enable the VMBus device if
they don't see the expected set of Hyper-V features (enlightenments).

Since this provides a poor user experience, let's warn the user if the VMBus
device is enabled without the recommended set of Hyper-V features.

The recommended set is the minimum set of Hyper-V features required to make
the VMBus device work properly in Windows Server versions 2016, 2019 and
2022.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
2024-03-08 14:18:56 +01:00
Peter Maydell
bfe8020c81 * target/i386: Fix physical address truncation on 32-bit PAE
* Remove globals for options -no-fd-bootchk and -win2k-hack

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* target/i386: Fix physical address truncation on 32-bit PAE
* Remove globals for options -no-fd-bootchk and -win2k-hack

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmXebwQUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroPozAf/Vgc9u6C+8PcPDrol6qxjI+EOHLNy
# 7M3/OFpUkwLXuOSawb6syYxHpLS38fKRcsb2ninngUmbRWA6p+KNUizlAFMj7op5
# wJmtdamCwCwXXaw20SfWxx2Ih0JS7FQsRsU94HTOdaDB17C9+hBcYwcggsOAXCmq
# gyVenEF1mov2A4jLMhdVIRX784AAoEP+QAuhBKQBrQwRLCTTyNdHl7jXdB9w+2sh
# KafokoFLcozJHz/tN3AhRKy6zjPugJyQmJwBRuj9tstCILtXpvf/ZE/3pUq5l3ZY
# A6dCI0zWAlGNTkpKRXsMFozNIVP2htnyidy29XHptlY5acfjtQ++rMu3BQ==
# =WY4H
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 27 Feb 2024 23:23:48 GMT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  ide, vl: turn -win2k-hack into a property on IDE devices
  ide: collapse parameters to ide_init_drive
  target/i386: leave the A20 bit set in the final NPT walk
  target/i386: remove unnecessary/wrong application of the A20 mask
  target/i386: Fix physical address truncation
  target/i386: use separate MMU indexes for 32-bit accesses
  target/i386: introduce function to query MMU indices
  target/i386: check validity of VMCB addresses
  target/i386: mask high bits of CR3 in 32-bit mode
  vl, pc: turn -no-fd-bootchk into a machine property

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-28 14:23:21 +00:00
Akihiko Odaki
ecd6f6a882 gdbstub: Infer number of core registers from XML
GDBFeature has the num_regs member so use it where applicable to
remove magic numbers.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-Id: <20231213-gdb-v17-8-777047380591@daynix.com>
[AJB: remove core reg check from microblaze read reg]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240227144335.1196131-13-alex.bennee@linaro.org>
2024-02-28 09:09:58 +00:00
Paolo Bonzini
b5a9de3259 target/i386: leave the A20 bit set in the final NPT walk
The A20 mask is only applied to the final memory access.  Nested
page tables are always walked with the raw guest-physical address.

Unlike the previous patch, in this one the masking must be kept, but
it was done too early.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11 ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-28 00:23:39 +01:00
Paolo Bonzini
a28fe7dc19 target/i386: remove unnecessary/wrong application of the A20 mask
If ptw_translate() does a MMU_PHYS_IDX access, the A20 mask is already
applied in get_physical_address(), which is called via probe_access_full()
and x86_cpu_tlb_fill().

If ptw_translate() on the other hand does a MMU_NESTED_IDX access,
the A20 mask must not be applied to the address that is looked up in
the nested page tables; it must be applied only to the addresses that
hold the NPT entries (which is achieved via MMU_PHYS_IDX, per the
previous paragraph).

Therefore, we can remove A20 masking from the computation of the page
table entry's address, and let get_physical_address() or mmu_translate()
apply it when they know they are returning a host-physical address.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11 ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-28 00:23:39 +01:00
Paolo Bonzini
b1661801c1 target/i386: Fix physical address truncation
The address translation logic in get_physical_address() will currently
truncate physical addresses to 32 bits unless long mode is enabled.
This is incorrect when using physical address extensions (PAE) outside
of long mode, with the result that a 32-bit operating system using PAE
to access memory above 4G will experience undefined behaviour.

The truncation code was originally introduced in commit 33dfdb5 ("x86:
only allow real mode to access 32bit without LMA"), where it applied
only to translations performed while paging is disabled (and so cannot
affect guests using PAE).

Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
rearranged the code such that the truncation also applied to the use
of MMU_PHYS_IDX and MMU_NESTED_IDX.  Commit 4a1e9d4 ("target/i386: Use
atomic operations for pte updates") brought this truncation into scope
for page table entry accesses, and is the first commit for which a
Windows 10 32-bit guest will reliably fail to boot if memory above 4G
is present.

The truncation code however is not completely redundant.  Even though the
maximum address size for any executed instruction is 32 bits, helpers for
operations such as BOUND, FSAVE or XSAVE may ask get_physical_address()
to translate an address outside of the 32-bit range, if invoked with an
argument that is close to the 4G boundary.  Likewise for processor
accesses, for example TSS or IDT accesses, when EFER.LMA==0.

So, move the address truncation in get_physical_address() so that it
applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
Fixes: 4a1e9d4d11 ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Cc: qemu-stable@nongnu.org
Co-developed-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-28 00:23:39 +01:00
Paolo Bonzini
90f641531c target/i386: use separate MMU indexes for 32-bit accesses
Accesses from a 32-bit environment (32-bit code segment for instruction
accesses, EFER.LMA==0 for processor accesses) have to mask away the
upper 32 bits of the address.  While a bit wasteful, the easiest way
to do so is to use separate MMU indexes.  These days, QEMU anyway is
compiled with a fixed value for NB_MMU_MODES.  Split MMU_USER_IDX,
MMU_KSMAP_IDX and MMU_KNOSMAP_IDX in two.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-28 00:23:39 +01:00
Paolo Bonzini
5f97afe254 target/i386: introduce function to query MMU indices
Remove knowledge of specific MMU indexes (other than MMU_NESTED_IDX and
MMU_PHYS_IDX) from mmu_translate().  This will make it possible to split
32-bit and 64-bit MMU indexes.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-28 00:23:39 +01:00
Paolo Bonzini
d09c79010f target/i386: check validity of VMCB addresses
MSR_VM_HSAVE_PA bits 0-11 are reserved, as are the bits above the
maximum physical address width of the processor.  Setting them to
1 causes a #GP (see "15.30.4 VM_HSAVE_PA MSR" in the AMD manual).

The same is true of VMCB addresses passed to VMRUN/VMLOAD/VMSAVE,
even though the manual is not clear on that.
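
A compact sketch of the check described above (illustrative, not the exact
code): bits 11:0 must be clear and no bit above the physical address width
may be set.

    /* Sketch: phys_bits is at most 52 on x86, so the shift is well defined. */
    static bool svm_phys_addr_is_valid(uint64_t addr, unsigned int phys_bits)
    {
        uint64_t reserved = ~((1ULL << phys_bits) - 1) | 0xfffULL;

        return (addr & reserved) == 0;
    }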

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11 ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-28 00:23:39 +01:00
Paolo Bonzini
68fb78d7d5 target/i386: mask high bits of CR3 in 32-bit mode
CR3 bits 63:32 are ignored in 32-bit mode (either legacy 2-level
paging or PAE paging).  Do this in mmu_translate() to remove
the last place where get_physical_address() meaningfully drops the high
bits of the address.
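
The masking amounts to something like the following sketch (not the actual
mmu_translate() code):

    /* Sketch: bits 63:32 of CR3 are ignored outside long mode. */
    static uint64_t effective_cr3(uint64_t cr3, bool long_mode)
    {
        return long_mode ? cr3 : (uint32_t)cr3;
    }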

Cc: qemu-stable@nongnu.org
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 4a1e9d4d11 ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-28 00:23:38 +01:00
Paolo Bonzini
028ade14da target/i386: do not filter processor tracing features except on KVM
The processor tracing features in cpu_x86_cpuid() are hardcoded to a set
that should be safe on all processors that support PT virtualization.
But as an additional check, x86_cpu_filter_features() also checks
that the accelerator supports that safe subset, and if not it marks
CPUID_7_0_EBX_INTEL_PT as unavailable.

This check fails on accelerators other than KVM, but it is actually
unnecessary to do it because KVM is the only accelerator that uses the
safe subset.  Everything else just provides nonzero values for CPUID
leaf 0x14 (TCG/HVF because processor tracing is not supported; qtest
because nothing is able to read CPUID anyway).  Restricting the check
to KVM fixes a warning with the qtest accelerator:

    $ qemu-system-x86_64 -display none -cpu max,mmx=off -accel qtest
    qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25]

The warning also happens in the test-x86-cpuid-compat qtest.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2096
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20240221162910.101327-1-pbonzini@redhat.com>
Fixes: d047402436 ("target/i386: Call accel-agnostic x86_cpu_get_supported_cpuid()")
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-02-23 08:13:52 +01:00
Philippe Mathieu-Daudé
22942c1060 target/i386/monitor: Remove unused 'hw/i386/pc.h' header
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-02-20 20:34:21 +03:00
Ziqiao Kong
99d0dcd7f1 target/i386: Generate an illegal opcode exception on cmp instructions with lock prefix
target/i386: As specified by Intel Manual Vol2 3-180, cmp instructions
are not allowed to have a lock prefix, and a #UD should be raised. Without
this patch, s1->T0 will be uninitialized and used in the OP_CMPL case.

Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-ID: <20240215095015.570748-2-ziqiaokong@gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-16 13:56:09 +01:00
Xiaoyao Li
0729857c70 i386/cpuid: Move leaf 7 to correct group
CPUID leaf 7 was grouped together with SGX leaf 0x12 by commit
b9edbadefb ("i386: Propagate SGX CPUID sub-leafs to KVM") by mistake.

SGX leaf 0x12 has its own logic to check whether a subleaf (starting from 2)
is valid or not, by checking whether bits 3:0 of the corresponding EAX are
1 or not.

Leaf 7 follows the logic that EAX of subleaf 0 enumerates the maximum
valid subleaf.
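
The two validity rules contrasted above look roughly like this (a sketch;
the "== 1" test simply mirrors the description above):

    /* Sketch: leaf 7 bounds its subleaves with EAX of subleaf 0 ... */
    static bool leaf7_subleaf_valid(uint32_t subleaf, uint32_t leaf7_sub0_eax)
    {
        return subleaf <= leaf7_sub0_eax;
    }

    /* ... while SGX leaf 0x12 checks bits 3:0 of each subleaf's own EAX
     * (for subleaves starting from 2). */
    static bool sgx_subleaf_valid(uint32_t subleaf_eax)
    {
        return (subleaf_eax & 0xf) == 1;
    }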

Fixes: b9edbadefb ("i386: Propagate SGX CPUID sub-leafs to KVM")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240125024016.2521244-4-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-16 13:56:08 +01:00
Xiaoyao Li
a3b5376521 i386/cpuid: Remove subleaf constraint on CPUID leaf 1F
There is no such constraint that the subleaf index needs to be less than 64.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240125024016.2521244-3-xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-16 13:56:08 +01:00
Xiaoyao Li
10f92799af i386/cpuid: Decrease cpuid_i when skipping CPUID leaf 1F
The existing code misses a decrement of cpuid_i when skipping leaf 0x1F.
This leaves a blank CPUID entry (with leaf and subleaf as 0, and all fields
stuffed with 0s) in the CPUID array.

It conflicts with the correct CPUID leaf 0.
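
The shape of the fix is roughly the following (a sketch of the loop, not the
actual kvm.c code; the skip condition is a placeholder):

    /* Sketch: when a leaf is skipped, undo the slot reservation so no
     * zero-filled entry is left behind in the array. */
    static int build_cpuid_entries(struct kvm_cpuid_entry2 *entries,
                                   uint32_t limit, bool has_leaf_1f)
    {
        uint32_t i;
        int cpuid_i = 0;

        for (i = 0; i <= limit; i++) {
            struct kvm_cpuid_entry2 *c = &entries[cpuid_i++];

            if (i == 0x1f && !has_leaf_1f) {   /* placeholder condition */
                cpuid_i--;                     /* give the reserved slot back */
                continue;
            }
            c->function = i;                   /* ... fill in the rest of *c ... */
        }
        return cpuid_i;
    }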

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240125024016.2521244-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-16 13:56:08 +01:00
Xiaoyao Li
eba978061e target/i386: Add support of KVM_FEATURE_ASYNC_PF_VMEXIT for guest
KVM_FEATURE_ASYNC_PF_VMEXIT was introduced years ago, however QEMU
doesn't support exposing it to the guest. Add support for it.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20231024083354.1171308-1-xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-16 13:56:08 +01:00
Xiaoyao Li
a11a365159 i386/cpu: Mask with XCR0/XSS mask for FEAT_XSAVE_XCR0_HI and FEAT_XSAVE_XSS_HI leafs
The values of the FEAT_XSAVE_XCR0_HI and FEAT_XSAVE_XSS_HI leafs also
need to be masked by the XCR0 and XSS masks respectively, to make them
logically correct.

Fixes: 301e90675c ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-3-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-16 13:56:08 +01:00
Xiaoyao Li
81f5cad385 i386/cpu: Clear FEAT_XSAVE_XSS_LO/HI leafs when CPUID_EXT_XSAVE is not available
Leafs FEAT_XSAVE_XSS_LO and FEAT_XSAVE_XSS_HI also need to be cleared
when CPUID_EXT_XSAVE is not set.

Fixes: 301e90675c ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-02-16 13:56:08 +01:00
Bernhard Beschow
f70c1c068d target/i386/cpu: Fix typo in comment
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240106132546.21248-4-shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-02-14 06:09:32 -05:00
Bui Quang Minh
774204cf98 apic, i386/tcg: add x2apic transitions
This commit adds support for x2APIC transitions when writing to the
MSR_IA32_APICBASE register and finally adds CPUID_EXT_X2APIC to
TCG_EXT_FEATURES.

The set_base callback in APICCommonClass now returns an integer to indicate
an error in execution. apic_set_base() returns -1 on an invalid APIC state
transition; the accelerator can use this to raise the appropriate exception.
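
On the accelerator side, the return value can be consumed roughly like this
(a sketch; the exact call site and exception plumbing differ):

    /* Sketch: an invalid x2APIC state transition now surfaces as -1, which
     * the accelerator can turn into a #GP on the offending WRMSR. */
    if (apic_set_base(apic_state, new_apicbase) < 0) {
        raise_gp_fault(env);   /* placeholder for the accelerator's #GP path */
    }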

Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Message-Id: <20240111154404.5333-4-minhquangbui99@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-02-14 06:09:32 -05:00
Bui Quang Minh
b5ee0468e9 apic: add support for x2APIC mode
This commit extends the APIC ID to 32 bits and removes the 255 max APIC
ID limit in the userspace APIC. The array that manages local APICs is now
dynamically allocated based on the max APIC ID of the created x86 machine.
Also, the new x2APIC IPI destination determination scheme, self IPI and
x2APIC mode register access are supported.

Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Message-Id: <20240111154404.5333-3-minhquangbui99@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-02-14 06:09:32 -05:00
Bui Quang Minh
b2101358e5 i386/tcg: implement x2APIC registers MSR access
This commit creates apic_register_read/write, which are used by both
apic_mem_read/write for MMIO access and apic_msr_read/write for MSR access.

apic_msr_read/write return -1 on error; the accelerator can use this to
raise the appropriate exception.

Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Message-Id: <20240111154404.5333-2-minhquangbui99@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-02-14 06:09:32 -05:00
Richard Henderson
3b91614004 include/exec: Change cpu_mmu_index argument to CPUState
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 16:46:10 +10:00
Richard Henderson
a120d32097 include/exec: Implement cpu_mmu_index generically
For user-only mode, use MMU_USER_IDX.
For system mode, use CPUClass.mmu_index.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 16:46:10 +10:00
Richard Henderson
ace0c5fe59 target/i386: Populate CPUClass.mmu_index
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 08:52:25 +10:00
Philippe Mathieu-Daudé
ec1d32af12 target/i386: Extract x86_cpu_exec_halt() from accel/tcg/
Move this x86-specific code out of the generic accel/tcg/.

Reported-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240124101639.30056-10-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29 21:04:10 +10:00
Philippe Mathieu-Daudé
6ae754815f target/i386: Extract x86_need_replay_interrupt() from accel/tcg/
Move this x86-specific code out of the generic accel/tcg/.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240124101639.30056-8-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29 21:04:10 +10:00
Richard Henderson
1764ad70ce include/qemu: Add TCGCPUOps typedef to typedefs.h
QEMU coding style recommends using structure typedefs.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29 21:04:10 +10:00
Anton Johansson
32f0c394bb target: Use vaddr in gen_intermediate_code
Makes gen_intermediate_code() signature target agnostic so the function
can be called from accel/tcg/translate-all.c without target specifics.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20240119144024.14289-9-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29 07:06:03 +10:00
Peter Maydell
3f2a357b95 HW core patch queue
. Deprecate unmaintained SH-4 models (Samuel)
 . HPET: Convert DPRINTF calls to trace events (Daniel)
 . Implement buffered block writes in Intel PFlash (Gerd)
 . Ignore ELF loadable segments with zero size (Bin)
 . ESP/NCR53C9x: PCI DMA fixes (Mark)
 . PIIX: Simplify Xen PCI IRQ routing (Bernhard)
 . Restrict CPU 'start-powered-off' property to sysemu (Phil)
 
 . target/alpha: Only build sys_helper.c on system emulation (Phil)
 . target/xtensa: Use generic instruction breakpoint API & add test (Max)
 . Restrict icount to system emulation (Phil)
 . Do not set CPUState TCG-specific flags in non-TCG accels (Phil)
 . Cleanup TCG tb_invalidate API (Phil)
 . Correct LoongArch/KVM include path (Bibo)
 . Do not ignore throttle errors in crypto backends (Phil)
 
 . MAINTAINERS updates (Raphael, Zhao)

Merge tag 'hw-cpus-20240119' of https://github.com/philmd/qemu into staging

HW core patch queue

. Deprecate unmaintained SH-4 models (Samuel)
. HPET: Convert DPRINTF calls to trace events (Daniel)
. Implement buffered block writes in Intel PFlash (Gerd)
. Ignore ELF loadable segments with zero size (Bin)
. ESP/NCR53C9x: PCI DMA fixes (Mark)
. PIIX: Simplify Xen PCI IRQ routing (Bernhard)
. Restrict CPU 'start-powered-off' property to sysemu (Phil)

. target/alpha: Only build sys_helper.c on system emulation (Phil)
. target/xtensa: Use generic instruction breakpoint API & add test (Max)
. Restrict icount to system emulation (Phil)
. Do not set CPUState TCG-specific flags in non-TCG accels (Phil)
. Cleanup TCG tb_invalidate API (Phil)
. Correct LoongArch/KVM include path (Bibo)
. Do not ignore throttle errors in crypto backends (Phil)

. MAINTAINERS updates (Raphael, Zhao)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmWqXbkACgkQ4+MsLN6t
# wN6VVBAAkP/Bs2JfQYobPZVV868wceM97KeUJMXP2YWf6dSLpHRCQN5KtuJcACM9
# y3k3R7nMeVJSGmzl/1gF1G9JhjoCLoVLX/ejeBppv4Wq//9sEdggaQfdCwkhWw2o
# IK/gPjTZpimE7Er4hPlxmuhSRuM1MX4duKFRRfuZpE7XY14Y7/Hk12VIG7LooO0x
# 2Sl8CaU0DN7CWmRVDoUkwVx7JBy28UVarRDsgpBim7oKmjjBFnCJkH6B6NJXEiYr
# z1BmIcHa87S09kG1ek+y8aZpG9iPC7nUWjPIQyJGhnfrnBuO7hQHwCLIjHHp5QBR
# BoMr8YQNTI34/M/D8pBfg96LrGDjkQOfwRyRddkMP/jJcNPMAPMNGbfVaIrfij1e
# T+jFF4gQenOvy1XKCY3Uk/a11P3tIRFBEeOlzzQg4Aje9W2MhUNwK2HTlRfBbrRr
# V30R764FDmHlsyOu6/E3jqp4GVCgryF1bglPOBjVEU5uytbQTP8jshIpGVnxBbF+
# OpFwtsoDbsousNKVcO5+B0mlHcB9Ru9h11M5/YD/jfLMk95Ga90JGdgYpqQ5tO5Y
# aqQhKfCKbfgKuKhysxpsdWAwHZzVrlSf+UrObF0rl2lMXXfcppjCqNaw4QJ0oedc
# DNBxTPcCE2vWhUzP3A60VH7jLh4nLaqSTrxxQKkbx+Je1ERGrxs=
# =KmQh
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 19 Jan 2024 11:32:09 GMT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD  6BB2 E3E3 2C2C DEAD C0DE

* tag 'hw-cpus-20240119' of https://github.com/philmd/qemu: (36 commits)
  configure: Add linux header compile support for LoongArch
  MAINTAINERS: Update hw/core/cpu.c entry
  MAINTAINERS: Update Raphael Norwitz email
  hw/elf_ops: Ignore loadable segments with zero size
  hw/scsi/esp-pci: set DMA_STAT_BCMBLT when BLAST command issued
  hw/scsi/esp-pci: synchronise setting of DMA_STAT_DONE with ESP completion interrupt
  hw/scsi/esp-pci: generate PCI interrupt from separate ESP and PCI sources
  hw/scsi/esp-pci: use correct address register for PCI DMA transfers
  target/riscv: Rename tcg_cpu_FOO() to include 'riscv'
  target/i386: Rename tcg_cpu_FOO() to include 'x86'
  hw/s390x: Rename cpu_class_init() to include 'sclp'
  hw/core/cpu: Rename cpu_class_init() to include 'common'
  accel: Rename accel_init_ops_interfaces() to include 'system'
  cpus: Restrict 'start-powered-off' property to system emulation
  system/watchpoint: Move TCG specific code to accel/tcg/
  system/replay: Restrict icount to system emulation
  hw/pflash: implement update buffer for block writes
  hw/pflash: use ldn_{be,le}_p and stn_{be,le}_p
  hw/pflash: refactor pflash_data_write()
  hw/i386/pc_piix: Make piix_intx_routing_notifier_xen() more device independent
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-19 11:39:38 +00:00
Philippe Mathieu-Daudé
e129593f6f target/i386: Rename tcg_cpu_FOO() to include 'x86'
The tcg_cpu_FOO() names are x86 specific, so rename
them as x86_tcg_cpu_FOO() (as other names in this file)
to ease navigating the code.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Message-ID: <20240111120221.35072-5-philmd@linaro.org>
2024-01-19 12:28:59 +01:00
Philippe Mathieu-Daudé
b94b8c604b accel: Do not set CPUState::tcg_cflags in non-TCG accels
'tcg_cflags' is specific to TCG.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231130075958.21285-1-philmd@linaro.org>
2024-01-19 12:28:59 +01:00
Paolo Bonzini
592d0bc030 remove unnecessary casts from uintptr_t
uintptr_t, or unsigned long (which is equivalent on Linux I32LP64 systems),
is an unsigned type and there is no need to further cast to __u64, which is
another unsigned integer type; widening casts from unsigned integers
zero-extend the value.
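
For example (illustrative; 'region' is a struct kvm_userspace_memory_region,
whose userspace_addr field is __u64, and 'host_ptr' is a host void pointer):

    region.userspace_addr = (__u64)(uintptr_t)host_ptr;   /* redundant outer cast */
    region.userspace_addr = (uintptr_t)host_ptr;          /* zero-extends the same way */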

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-01-18 10:43:51 +01:00
Paolo Bonzini
729ba8e933 target/i386: pcrel: store low bits of physical address in data[0]
For PC-relative translation blocks, env->eip changes during the
execution of a translation block.  Therefore, QEMU must be able to
recover an instruction's PC just from the TranslationBlock struct and
the instruction data.  Because a TB will not span two pages, QEMU
stores all the low bits of EIP in the instruction data and replaces them
in x86_restore_state_to_opc.  Bits 12 and higher (which may vary between
executions of a PCREL TB, since these only use the physical address in
the hash key) are kept unmodified from env->eip.  The assumption is that
these bits of EIP, unlike bits 0-11, will not change as the translation
block executes.

Unfortunately, this is incorrect when the CS base is not aligned to a page.
Then the linear address of the instructions (i.e. the one with the
CS base addred) indeed will never span two pages, but bits 12+ of EIP
can actually change.  For example, if CS base is 0x80262200 and EIP =
0x6FF4, the first instruction in the translation block will be at linear
address 0x802691F4.  Even a very small TB will cross to EIP = 0x7xxx,
while the linear addresses will remain comfortably within a single page.

The fix is simply to use the low bits of the linear address for data[0],
since those don't change.  Then x86_restore_state_to_opc uses tb->cs_base
to compute a temporary linear address (referring to some unknown
instruction in the TB, but with the correct values of bits 12 and higher);
the low bits are replaced with data[0], and EIP is obtained by subtracting
again the CS base.
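
Put as a sketch (not the exact x86_restore_state_to_opc code; env, tb and
data stand for the values available at restore time), the recovery is:

    /* data[0] holds the low bits of the *linear* address, which do not
     * change across the TB even when the CS base is not page aligned. */
    uint64_t approx_linear = env->eip + tb->cs_base;   /* bits 12+ are correct */
    uint64_t linear = (approx_linear & TARGET_PAGE_MASK) | data[0];
    env->eip = linear - tb->cs_base;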

Huge thanks to Mark Cave-Ayland for the image and initial debugging,
and to Gitlab user @kjliew for help with bisecting another occurrence
of (hopefully!) the same bug.

It should be relatively easy to write a testcase that performs MMIO on
an EIP with different bits 12+ than the first instruction of the translation
block; any help is welcome.

Fixes: e3a79e0e87 ("target/i386: Enable TARGET_TB_PCREL", 2022-10-11)
Cc: qemu-stable@nongnu.org
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1759
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1964
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2012
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-01-18 10:43:42 +01:00
guoguangyao
2926eab896 target/i386: fix incorrect EIP in PC-relative translation blocks
The PCREL patches introduced a bug when updating EIP in the !CF_PCREL case.
Using s->pc in gen_update_eip_next() solves the problem.

Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22f ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Signed-off-by: guoguangyao <guoguangyao18@mails.ucas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240115020804.30272-1-guoguangyao18@mails.ucas.ac.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-01-18 10:43:14 +01:00
Richard Henderson
a58506b748 target/i386: Do not re-compute new pc with CF_PCREL
With PCREL, we have a page-relative view of EIP, and an
approximation of PC = EIP+CSBASE that is good enough to
detect page crossings.  If we try to recompute PC after
masking EIP, we will mess up that approximation and write
a corrupt value to EIP.

We already handled masking properly for PCREL, so the
fix in b5e0d5d2 was only needed for the !PCREL path.

Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22f ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240101230617.129349-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-01-18 10:43:14 +01:00
Stefan Hajnoczi
7c754c787e qemu/main-loop: rename qemu_cond_wait_iothread() to qemu_cond_wait_bql()
The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL)
instead, it is already widely used and unambiguous.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Stefan Hajnoczi
32ead8e62f qemu/main-loop: rename QEMU_IOTHREAD_LOCK_GUARD to BQL_LOCK_GUARD
The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL)
instead, it is already widely used and unambiguous.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Stefan Hajnoczi
195801d700 system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()
The Big QEMU Lock (BQL) has many names and they are confusing. The
actual QemuMutex variable is called qemu_global_mutex but it's commonly
referred to as the BQL in discussions and some code comments. The
locking APIs, however, are called qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread was
split into KVM vcpu threads and the "iothread" (now called the main
loop thread). I have contributed to the confusion myself by introducing
a separate --object iothread, a separate concept unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the
locking APIs to:
- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches
will rename them. There are also comments and documentation that will be
updated in later patches.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Gavin Shan
4b26aa9f3a target: Use generic cpu_model_from_type()
Use generic cpu_model_from_type() when the CPU model name needs to
be extracted from the CPU type name.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-23-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Peter Maydell
05470c3979 * configure: use a native non-cross compiler for linux-user
* meson: cleanups
 * target/i386: miscellaneous cleanups and optimizations
 * target/i386: implement CMPccXADD
 * target/i386: the sgx_epc_get_section stub is reachable
 * esp: check for NULL result from scsi_device_find()

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* configure: use a native non-cross compiler for linux-user
* meson: cleanups
* target/i386: miscellaneous cleanups and optimizations
* target/i386: implement CMPccXADD
* target/i386: the sgx_epc_get_section stub is reachable
* esp: check for NULL result from scsi_device_find()

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmWRImYUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroNd7AgAgcyJGiMfUkXqhefplpm06RDXQIa8
# FuoJqPb21lO75DQKfaFRAc4xGLagjJROMJGHMm9HvMu2VlwvOydkQlfFRspENxQ/
# 5XzGdb/X0A7HA/mwUfnMB1AZx0Vs32VI5IBSc6acc9fmgeZ84XQEoM3KBQHUik7X
# mSkE4eltR9gJ+4IaGo4voZtK+YoVD8nEcuqmnKihSPWizev0FsZ49aNMtaYa9qC/
# Xs3kiQd/zPibHDHJu0ulFsNZgxtUcvlLHTCf8gO4dHWxCFLXGubMush83McpRtNB
# Qoh6cTLH+PBXfrxMR3zmTZMNvo8Euls3s07Y8TkNP4vdIIE/kMeMDW1wJw==
# =mq30
# -----END PGP SIGNATURE-----
# gpg: Signature made Sun 31 Dec 2023 08:12:22 GMT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (46 commits)
  meson.build: report graphics backends separately
  configure, meson: rename targetos to host_os
  meson: rename config_all
  meson: remove CONFIG_ALL
  meson: remove config_targetos
  meson: remove CONFIG_POSIX and CONFIG_WIN32 from config_targetos
  meson: remove OS definitions from config_targetos
  meson: always probe u2f and canokey if the option is enabled
  meson: move subdirs to "Collect sources" section
  meson: move config-host.h definitions together
  meson: move CFI detection code with other compiler flags
  meson: keep subprojects together
  meson: move accelerator dependency checks together
  meson: move option validation together
  meson: move program checks together
  meson: add more sections to main meson.build
  configure: unify again the case arms in probe_target_compiler
  configure: remove unnecessary subshell
  Makefile: clean qemu-iotests output
  meson: use version_compare() to compare version
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-04 19:55:20 +00:00
Paolo Bonzini
405c7c0708 target/i386: implement CMPccXADD
The main difficulty here is that a page fault when writing to the destination
must not overwrite the flags.  Therefore, the flags computation must be
inlined instead of using gen_jcc1*.

For simplicity, I am using an unconditional cmpxchg operation, which becomes
a NOP if the comparison fails.
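
A heavily simplified sketch of that trick, in plain C semantics rather than
TCG ops (and not the full CMPccXADD definition; mem, addend and
condition_holds are assumed inputs):

    uint64_t old = *mem;
    uint64_t newv = condition_holds ? old + addend : old;   /* failed condition: store old back */
    __atomic_compare_exchange_n(mem, &old, newv, false,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);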

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:40 +01:00
Paolo Bonzini
e7bbb7cb71 target/i386: introduce flags writeback mechanism
ALU instructions can write to both memory and flags.  If the CC_SRC*
and CC_DST locations have been written already when a memory access
causes a fault, the value in CC_SRC* and CC_DST might be interpreted
with the wrong CC_OP (the one that is in effect before the instruction).

Besides just using the wrong result for the flags, something like
subtracting -1 can have disastrous effects if the current CC_OP is
CC_OP_EFLAGS: this is because QEMU does not expect bits outside the ALU
flags to be set in CC_SRC, and env->eflags can end up set to all-ones.
In the case of the attached testcase, this sets IOPL to 3 and would
cause an assertion failure if SUB is moved to the new decoder.

This mechanism is not really needed for BMI instructions, which can
only write to a register, but put it to use anyway for cleanliness.
In the case of BZHI, the code has to be modified slightly to ensure
that decode->cc_src is written, otherwise the new assertions trigger.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:30 +01:00
Paolo Bonzini
4b2baf4a55 target/i386: adjust decoding of J operand
gen_jcc() has been changed to accept a relative offset since the
new decoder was written.  Adjust the J operand, which is meant
to be used with jump instructions such as gen_jcc(), to not
include the program counter and to not truncate the result, as
both operations are now performed by common code.

The result is that J is now the same as the I operand.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:30 +01:00
Paolo Bonzini
d4f611711a target/i386: move operand load and writeback out of gen_cmovcc1
Similar to gen_setcc1, make gen_cmovcc1 receive TCGv.  This is more friendly
to simultaneous implementation in the old and the new decoder.

A small wart is that s->T0 of CMOV is currently the *second* argument (which
would ordinarily be in T1).  Therefore, the condition has to be inverted in
order to overwrite s->T0 with cpu_regs[reg] if the MOV is not performed.

This only applies to the old decoder, and this code will go away soon.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:15 +01:00
Paolo Bonzini
3497f1646f target/i386: prepare for implementation of STOS/SCAS in new decoder
Do not use gen_op, and pull the load from the accumulator into
disas_insn.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:59 +01:00
Paolo Bonzini
9a5922d6bd target/i386: do not use s->tmp0 for jumps on ECX ==/!= 0
Create a new temporary, to ease the register allocator's work.

Creation of the temporary is pushed into gen_ext_tl, which
also allows NULL as the first parameter now.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:55 +01:00
Paolo Bonzini
1ec46bf237 target/i386: do not use s->tmp4 for push
Just create a temporary for the occasion.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:53 +01:00
Paolo Bonzini
80e55f54ac target/i386: split eflags computation out of gen_compute_eflags
The new x86 decoder wants the gen_* functions to compute EFLAGS before
writeback, which can be an issue for instructions with a memory
destination such as ARPL or shifts.

Extract code to compute the EFLAGS without clobbering CC_SRC, in case
the memory write causes a fault.  The flags writeback mechanism will
take care of copying the result to CC_SRC.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:50 +01:00
Paolo Bonzini
c0099cd40e target/i386: do not clobber T0 on string operations
The new decoder wants the operand in T0 when expanding SCAS, rather than
using R_EAX directly as gen_scas currently does.  This makes SCAS more similar
to CMP and SUB, in that CC_DST = T0 - T1.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:24 +01:00
Paolo Bonzini
24c0573bb0 target/i386: do not clobber A0 in POP translation
The new decoder likes to compute the address in A0 very early, so the
gen_lea_v_seg in gen_pop_T0 would clobber the address of the memory
operand.  Instead use T0 since it is already available and will be
overwritten immediately after.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:21 +01:00
Paolo Bonzini
a71e0b246a target/i386: do not decode string source/destination into decode->mem
decode->mem is only used if one operand has has_ea == true.  String
operations will not use decode->mem and will load A0 on their own, because
they are the only case of two memory operands in a single instruction.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:18 +01:00
Paolo Bonzini
8a36bbcf6c target/i386: add X86_SPECIALs for MOVSX and MOVZX
Usually the registers are just moved into s->T0 without much care for
their operand size.  However, in some cases we can get more efficient
code if the operand fetching logic and the emission function agree on
which extension is preferable.

All the current uses are mostly demonstrative and only reduce the code
in the emission functions, because the instructions do not support
memory operands.  However the logic is generic and applies to several
more instructions such as MOVSXD (aka movslq), one-byte shift
instructions, multiplications, XLAT, and indirect calls/jumps.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:15 +01:00
Paolo Bonzini
5baf5641cc target/i386: rename zext0/zext2 and make them closer to the manual
X86_SPECIAL_ZExtOp0 and X86_SPECIAL_ZExtOp2 are poorly named; they are a hack
that is needed by scalar insertion and extraction instructions, and not really
related to zero extension: for PEXTR the zero extension is done by the generation
functions, for PINSR the high bits are not used at all and in fact are *not*
filled with zeroes when loaded into s->T1.

Rename the values to match the effect described in the manual, and explain
better in the comments.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:11 +01:00
Paolo Bonzini
6dd2afed55 target/i386: avoid trunc and ext for MULX and RORX
Use _tl operations for 32-bit operands on 32-bit targets, and only go
through trunc and extu ops for 64-bit targets.  While the trunc/ext
ops should be pretty much free after optimization, the optimizer also
does not like having the same temporary used in multiple EBBs.
Therefore it is nicer to not use tmpN* unless necessary.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:08 +01:00
Paolo Bonzini
b609db9477 target/i386: reimplement check for validity of LOCK prefix
The previous check erroneously allowed CMP to be modified with LOCK.
Instead, tag explicitly the instructions that do support LOCK.
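
A minimal sketch of such a check (PREFIX_LOCK mirrors the existing prefix
flag; the ATTR_LOCK_OK attribute and the numeric values are made up here):

    #include <stdbool.h>
    #include <stdint.h>

    #define PREFIX_LOCK   0x01u          /* illustrative value */
    #define ATTR_LOCK_OK  0x02u          /* set only on lockable insns */

    /* LOCK is valid only on instructions explicitly tagged as lockable;
     * anything else must raise #UD. */
    static bool lock_prefix_valid(uint32_t prefixes, uint32_t insn_attrs)
    {
        if (!(prefixes & PREFIX_LOCK)) {
            return true;
        }
        return (insn_attrs & ATTR_LOCK_OK) != 0;
    }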

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:08 +01:00
Paolo Bonzini
8147df44da target/i386: document more deviations from the manual
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:05 +01:00
Paolo Bonzini
2455e9cf5a target/i386: clean up cpu_cc_compute_all
cpu_cc_compute_all() has an argument that is always equal to CC_OP for historical
reasons (dating back to commit a7812ae412, "TCG variable type checking.", 2008-11-17,
which added the argument to helper_cc_compute_all).  It does not make sense for the
argument to have any other value, so remove it and rejoin some lines that are no
longer too long.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:02 +01:00
Paolo Bonzini
8cc746525c target/i386: remove unnecessary truncations
gen_lea_v_seg (called by gen_add_A0_ds_seg) already zeroes any
bits of s->A0 beyond s->aflag.  It does so before summing the
segment base and, if not in 64-bit mode, also after summing it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:58 +01:00
Paolo Bonzini
83280f6a62 target/i386: remove unnecessary arguments from raise_interrupt
is_int is always 1, and error_code is always zero.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:55 +01:00
Paolo Bonzini
1e7dde8008 target/i386: speedup JO/SETO after MUL or IMUL
OF is equal to the carry flag, so use the same CCPrepare.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:52 +01:00
Paolo Bonzini
6032627f07 target/i386: optimize computation of JL and JLE from flags
Take advantage of the fact that there can be no 1 bits between SF and OF.
If they were adjacent, you could sum SF and get a carry only if SF was
already set.  Then the value of OF in the sum is the XOR of OF itself,
the carry (which is SF) and 0 (the value of the OF bit in the addend):
this is OF^SF exactly.

Because OF and SF are not adjacent, just place more 1 bits to the
left so that the carry propagates, which means summing CC_O - CC_S.
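
With the x86 flag positions SF = bit 7 (CC_S = 0x80) and OF = bit 11
(CC_O = 0x800), the addend is CC_O - CC_S = 0x780 and bit 11 of the sum
is exactly SF ^ OF.  A small self-contained check of just the arithmetic
(independent of QEMU):

    #include <assert.h>

    #define CC_S 0x0080                  /* SF */
    #define CC_O 0x0800                  /* OF */

    int main(void)
    {
        for (int sf = 0; sf <= 1; sf++) {
            for (int of = 0; of <= 1; of++) {
                /* no 1 bits between SF and OF; low ALU flags don't matter */
                int flags = (sf ? CC_S : 0) | (of ? CC_O : 0) | 0x0041;
                int jl = ((flags + (CC_O - CC_S)) & CC_O) != 0;
                assert(jl == (sf ^ of));
            }
        }
        return 0;
    }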

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:48 +01:00
Richard Henderson
c4f54bd6cc target/i386: Constify VMState in machine.c
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20231221031652.119827-9-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Philippe Mathieu-Daudé
c0f6cd9f67 target: Restrict 'sysemu/reset.h' to system emulation
vCPU "reset" is only possible with system emulation.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20231212113640.30287-5-philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2023-12-20 10:29:23 +01:00
Richard Henderson
b5e0d5d22f target/i386: Fix 32-bit wrapping of pc/eip computation
In 32-bit mode, pc = eip + cs_base is also 32-bit, and must wrap.
Failure to do so results in incorrect memory exceptions to the guest.
Before 732d548732, this was implicitly done via truncation to
target_ulong but only in qemu-system-i386, not qemu-system-x86_64.

To fix this, we must add conditional zero-extensions.
Since we have to test for 32 vs 64-bit anyway, note that cs_base
is always zero in 64-bit mode.
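
A minimal illustration of the intended computation (simplified types,
not the generated TCG code):

    #include <stdbool.h>
    #include <stdint.h>

    /* cs_base is always zero in 64-bit mode; otherwise the sum must be
     * zero-extended from 32 bits so that it wraps like the guest's PC. */
    static uint64_t linear_pc(uint64_t eip, uint64_t cs_base, bool code64)
    {
        return code64 ? eip : (uint32_t)(eip + cs_base);
    }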

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2022
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20231212172510.103305-1-richard.henderson@linaro.org>
2023-12-12 13:35:08 -08:00
Michael Roth
5746f70d68 i386/sev: Avoid SEV-ES crash due to missing MSR_EFER_LMA bit
Commit 7191f24c7f ("accel/kvm/kvm-all: Handle register access errors")
added error checking for KVM_SET_SREGS/KVM_SET_SREGS2. In doing so, it
exposed a long-running bug in current KVM support for SEV-ES where the
kernel assumes that MSR_EFER_LMA will be set explicitly by the guest
kernel, in which case EFER write traps would result in KVM eventually
seeing MSR_EFER_LMA get set and recording it in such a way that it would
be subsequently visible when accessing it via KVM_GET_SREGS/etc.

However, guest kernels currently rely on MSR_EFER_LMA getting set
automatically when MSR_EFER_LME is set and paging is enabled via
CR0_PG_MASK. As a result, the EFER write traps don't actually expose the
MSR_EFER_LMA bit, even though it is set internally, and when QEMU
subsequently tries to pass this EFER value back to KVM via
KVM_SET_SREGS* it will fail various sanity checks and return -EINVAL,
which is now considered fatal due to the aforementioned QEMU commit.

This can be addressed by inferring the MSR_EFER_LMA bit being set when
paging is enabled and MSR_EFER_LME is set, and synthesizing it to ensure
the expected bits are all present in subsequent handling on the host
side.

Ultimately, this handling will be implemented in the host kernel, but to
avoid breaking QEMU's SEV-ES support when using older host kernels, the
same handling can be done in QEMU just after fetching the register
values via KVM_GET_SREGS*. Implement that here.
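
A sketch of that fix-up (the macro names mirror the ones in
target/i386/cpu.h and the architectural bit positions; the helper itself
is illustrative, not the final patch):

    #include <stdint.h>

    #define CR0_PG_MASK  (1u << 31)     /* paging enabled */
    #define MSR_EFER_LME (1u << 8)      /* long mode enable */
    #define MSR_EFER_LMA (1u << 10)     /* long mode active */

    /* Synthesize EFER.LMA when paging is on and EFER.LME is set, so the
     * value passed back via KVM_SET_SREGS* is self-consistent. */
    static uint64_t sev_es_fixup_efer(uint64_t efer, uint64_t cr0)
    {
        if ((cr0 & CR0_PG_MASK) && (efer & MSR_EFER_LME)) {
            efer |= MSR_EFER_LMA;
        }
        return efer;
    }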

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Lara Lazier <laramglazier@gmail.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Cc:  <kvm@vger.kernel.org>
Fixes: 7191f24c7f ("accel/kvm/kvm-all: Handle register access errors")
Signed-off-by: Michael Roth <michael.roth@amd.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231206155821.1194551-1-michael.roth@amd.com>
2023-12-06 14:34:11 -05:00
Markus Armbruster
298d8b1220 target/i386/cpu: Improve error message for property "vendor"
Improve

    $ qemu-system-x86_64 -device max-x86_64-cpu,vendor=me
    qemu-system-x86_64: -device max-x86_64-cpu,vendor=me: Property '.vendor' doesn't take value 'me'

to

    qemu-system-x86_64: -device max-x86_64-cpu,vendor=0123456789abc: value of property 'vendor' must consist of exactly 12 characters
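
For context, the 12 characters map onto the three CPUID vendor registers,
four bytes each, which is why the length must be exact.  A standalone
sketch of the packing (not the property setter itself):

    #include <stdint.h>

    static uint32_t pack4(const char *s)
    {
        return (uint8_t)s[0] | (uint8_t)s[1] << 8 |
               (uint8_t)s[2] << 16 | (uint32_t)(uint8_t)s[3] << 24;
    }

    /* CPUID.0: EBX, EDX, ECX hold bytes 0..3, 4..7 and 8..11 of the
     * vendor string (e.g. "GenuineIntel"). */
    static void x86_vendor_words(const char v[12],
                                 uint32_t *ebx, uint32_t *edx, uint32_t *ecx)
    {
        *ebx = pack4(v + 0);
        *edx = pack4(v + 4);
        *ecx = pack4(v + 8);
    }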

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20231031111059.3407803-8-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[Typo corrected]
2023-11-17 10:07:52 +01:00
Stefan Hajnoczi
ed1d873caa Misc hardware patch queue
HW emulation:
 - PMBus fixes and tests (Titus)
 - IDE fixes and tests (Fiona)
 - New ADM1266 sensor (Titus)
 - Better error propagation in PCI-ISA i82378 (Philippe)
 - Declare SD model QOM types using DEFINE_TYPES macro (Philippe)
 
 Topology:
 - Fix CPUState::nr_cores calculation (Zhuocheng Ding and Zhao Liu)
 
 Monitor:
 - Synchronize CPU state in 'info lapic' (Dongli Zhang)
 
 QOM:
 - Have 'cpu-qom.h' target-agnostic (Philippe)
 - Move ArchCPUClass definition to each target's cpu.h (Philippe)
 - Call object_class_is_abstract once in cpu_class_by_name (Philippe)
 
 UI:
 - Use correct key names in titles on MacOS / SDL2 (Adrian)
 
 MIPS:
 - Fix MSA BZ/BNZ and TX79 LQ/SQ opcodes (Philippe)
 
 Nios2:
 - Create IRQs *after* vCPU is realized (Philippe)
 
 PPC:
 - Restrict KVM objects to system emulation (Philippe)
 - Move target-specific definitions out of 'cpu-qom.h' (Philippe)
 
 S390X:
 - Make hw/s390x/css.h and hw/s390x/sclp.h headers target agnostic (Philippe)
 
 X86:
 - HVF & KVM cleanups (Philippe)
 
 Various targets:
 - Use env_archcpu() to optimize (Philippe)
 
 Misc:
 - Few global variable shadowing removed (Philippe)
 - Introduce cpu_exec_reset_hold and factor tcg_cpu_reset_hold out (Philippe)
 - Remove few more 'softmmu' mentions (Philippe)
 - Fix and cleanup in vl.c (Akihiko & Marc-André)
 - Resource leak fix in dump (Zongmin Zhou)
 - MAINTAINERS updates (Thomas, Daniel)

Merge tag 'misc-cpus-20231107' of https://github.com/philmd/qemu into staging

* tag 'misc-cpus-20231107' of https://github.com/philmd/qemu: (75 commits)
  dump: Add close fd on error return to avoid resource leak
  ui/sdl2: use correct key names in win title on mac
  MAINTAINERS: Add more guest-agent related files to the corresponding section
  MAINTAINERS: Add include/hw/xtensa/mx_pic.h to the XTFPGA machine section
  MAINTAINERS: update libvirt devel mailing list address
  MAINTAINERS: Add the CAN documentation file to the CAN section
  MAINTAINERS: Add include/hw/timer/tmu012.h to the SH4 R2D section
  hw/sd: Declare QOM types using DEFINE_TYPES() macro
  hw/i2c: pmbus: reset page register for out of range reads
  hw/i2c: pmbus: immediately clear faults on request
  tests/qtest: add tests for ADM1266
  hw/sensor: add ADM1266 device model
  hw/i2c: pmbus: add VCAP register
  hw/i2c: pmbus: add fan support
  hw/i2c: pmbus: add vout mode bitfields
  hw/i2c: pmbus add support for block receive
  tests/qtest: ahci-test: add test exposing reset issue with pending callback
  hw/ide: reset: cancel async DMA operation before resetting state
  hw/cpu: Update the comments of nr_cores and nr_dies
  system/cpus: Fix CPUState.nr_cores' calculation
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-11-08 08:32:26 +08:00
Zhao Liu
aa1878fbc9 hw/cpu: Update the comments of nr_cores and nr_dies
In the comment for nr_threads, specify that it represents the number of
threads in the "core" to avoid confusion.

Also add comment for nr_dies in CPUX86State.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Babu Moger <babu.moger@amd.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-ID: <20231024090323.1859210-5-zhao1.liu@linux.intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-11-07 13:08:48 +01:00
Zhuocheng Ding
958ac3c42b system/cpus: Fix CPUState.nr_cores' calculation
From CPUState.nr_cores' comment, it represents "number of cores within
this CPU package".

After 003f230e37 ("machine: Tweak the order of topology members in
struct CpuTopology"), the meaning of smp.cores changed to "the number of
cores in one die", but that commit missed updating CPUState.nr_cores'
calculation, so CPUState.nr_cores became wrong: it fails to account for
the number of clusters and dies.

At present, only i386 is using CPUState.nr_cores.

But for i386, which supports the die level, the uses of CPUState.nr_cores
are very confusing:

Early uses are based on the meaning of "cores per package" (before die
is introduced into i386), and later uses are based on "cores per die"
(after die's introduction).

This difference exists because commit a94e142899 ("target/i386: Add
CPUID.1F generation support for multi-dies PCMachine") misunderstood
CPUState.nr_cores to mean "cores per die" when calculating
CPUID.1FH.01H:EBX.  After that, the i386 changes all followed this
wrong understanding.

Under the combined influence of 003f230e37 and a94e142899, the result of
CPUState.nr_cores for i386 is currently "cores per die", so the original
uses of CPUState.nr_cores that assume the meaning "cores per package" are
wrong when multiple dies exist:
1. In cpu_x86_cpuid() of target/i386/cpu.c, CPUID.01H:EBX[bits 23:16] is
   incorrect because it expects "cpus per package" but now the
   result is "cpus per die".
2. In cpu_x86_cpuid() of target/i386/cpu.c, for all leaves of CPUID.04H:
   EAX[bits 31:26] is incorrect because they expect "cpus per package"
   but now the result is "cpus per die". The error not only impacts the
   EAX calculation in cache_info_passthrough case, but also impacts other
   cases of setting cache topology for Intel CPU according to cpu
   topology (specifically, the incoming parameter "num_cores" expects
   "cores per package" in encode_cache_cpuid4()).
3. In cpu_x86_cpuid() of target/i386/cpu.c, CPUID.0BH.01H:EBX[bits
   15:00] is incorrect because the EBX of 0BH.01H (core level) expects
   "cpus per package", which may differ from 1FH.01H (the reason is
   that 1FH can support more levels; for QEMU, 1FH also supports die,
   and 1FH.01H:EBX[bits 15:00] expects "cpus per die").
4. In cpu_x86_cpuid() of target/i386/cpu.c, when CPUID.80000001H is
   calculated, here "cpus per package" is expected to be checked, but in
   fact, now it checks "cpus per die". Though "cpus per die" also works
   for this code logic, this isn't consistent with AMD's APM.
5. In cpu_x86_cpuid() of target/i386/cpu.c, CPUID.80000008H:ECX expects
   "cpus per package" but it obtains "cpus per die".
6. In simulate_rdmsr() of target/i386/hvf/x86_emu.c, in
   kvm_rdmsr_core_thread_count() of target/i386/kvm/kvm.c, and in
   helper_rdmsr() of target/i386/tcg/sysemu/misc_helper.c,
   MSR_CORE_THREAD_COUNT expects "cpus per package" and "cores per
   package", but in these functions, it obtains "cpus per die" and
   "cores per die".

On the other hand, these uses are correct now (they are added in/after
a94e142899):
1. In cpu_x86_cpuid() of target/i386/cpu.c, topo_info.cores_per_die
   meets the actual meaning of CPUState.nr_cores ("cores per die").
2. In cpu_x86_cpuid() of target/i386/cpu.c, vcpus_per_socket (in CPUID.
   04H's calculation) considers number of dies, so it's correct.
3. In cpu_x86_cpuid() of target/i386/cpu.c, CPUID.1FH.01H:EBX[bits
   15:00] needs "cpus per die" and it gets the correct result, and
   CPUID.1FH.02H:EBX[bits 15:00] gets correct "cpus per package".

When CPUState.nr_cores is changed back to mean "cores per package", the
above errors will be fixed without extra work, but the currently correct
cases will break and need special handling to obtain the "cpus/cores per
die" values they actually want.

Fix CPUState.nr_cores' calculation to match the original meaning, "cores
per package", and adjust the calculation of topo_info.cores_per_die,
vcpus_per_socket and CPUID.1FH accordingly.
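
In other words, the calculation has to multiply in every level between
core and package; a sketch of the intended result, written against the
fields of the -smp CpuTopology structure (the helper name is
illustrative):

    /* "cores per package" = dies/package * clusters/die * cores/cluster */
    static unsigned cores_per_package(const CpuTopology *smp)
    {
        return smp->dies * smp->clusters * smp->cores;
    }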

Fixes: a94e142899 ("target/i386: Add CPUID.1F generation support for multi-dies PCMachine")
Fixes: 003f230e37 ("machine: Tweak the order of topology members in struct CpuTopology")
Signed-off-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Co-developed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-ID: <20231024090323.1859210-4-zhao1.liu@linux.intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-11-07 13:08:48 +01:00
Philippe Mathieu-Daudé
9348028e7e target: Move ArchCPUClass definition to 'cpu.h'
The OBJECT_DECLARE_CPU_TYPE() macro forward-declares each
ArchCPUClass type. These forward declarations are sufficient
for code in hw/ to use the QOM definitions. No need to expose
these structure definitions. Keep each local to their target/
by moving them to the corresponding "cpu.h" header.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231013140116.255-13-philmd@linaro.org>
2023-11-07 13:08:48 +01:00
Dongli Zhang
2b34d7382b target/i386/monitor: synchronize cpu state for lapic info
While the default "info lapic" always synchronizes cpu state ...

mon_get_cpu()
-> mon_get_cpu_sync(mon, true)
   -> cpu_synchronize_state(cpu)
      -> ioctl KVM_GET_LAPIC (taking KVM as example)

... the cpu state is not synchronized when the apic-id is passed as an
argument.

The cpu state should be synchronized in that case as well.  Otherwise
"info lapic <apic-id>" always returns stale data.

Reference:
https://lore.kernel.org/all/20211028155457.967291-19-berrange@redhat.com/

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Message-ID: <20231030085336.2681386-1-armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231026211938.162815-1-dongli.zhang@oracle.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
de910c496a target/i386/kvm: Correct comment in kvm_cpu_realize()
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230918160257.30127-4-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
3152e95413 target/i386/hvf: Rename 'X86CPU *x86_cpu' variable as 'cpu'
Follow the naming used by other files in target/i386/.

No functional changes.

Suggested-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20231020111136.44401-4-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
5366a0644f target/i386/hvf: Rename 'CPUState *cpu' variable as 'cs'
Follow the naming used by other files in target/i386/.

No functional changes.

Suggested-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20231020111136.44401-3-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
89c02195c9 target/i386/hvf: Use CPUState typedef
Follow C style guidelines and use CPUState forward
declaration from "qemu/typedefs.h".

No functional changes.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20231020111136.44401-2-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
a9e445df54 target/i386/hvf: Use env_archcpu() in simulate_[rdmsr/wrmsr]()
When CPUArchState* is available (here CPUX86State*), we can
use the fast env_archcpu() macro to get ArchCPU* (here X86CPU*).
The QOM cast X86_CPU() macro will be slower when building with
--enable-qom-cast-debug.

Pass CPUX86State* as argument to simulate_rdmsr / simulate_wrmsr
instead of a CPUState* to avoid an extra cast.
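
Roughly, the pattern being replaced (illustrative, not a verbatim diff):

    /* Before: QOM cast, type-checked (and slower) when QEMU is built
     * with --enable-qom-cast-debug. */
    X86CPU *x86_cpu = X86_CPU(cpu);

    /* After: CPUX86State is embedded in X86CPU, so the containing object
     * can be recovered with plain pointer arithmetic. */
    X86CPU *x86_cpu = env_archcpu(env);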

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Roman Bolshakov <roman@roolebo.dev>
Tested-by: Roman Bolshakov <roman@roolebo.dev>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20231009110239.66778-7-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
82b641d626 target/i386/hvf: Use x86_cpu in simulate_[rdmsr|wrmsr]()
We already have 'x86_cpu = X86_CPU(cpu)'. Use the variable
instead of doing another QOM cast with X86_CPU().

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Roman Bolshakov <roman@roolebo.dev>
Tested-by: Roman Bolshakov <roman@roolebo.dev>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20231009110239.66778-6-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
2d56be5a29 target: Declare FOO_CPU_TYPE_NAME/SUFFIX in 'cpu-qom.h'
Heterogeneous code needs access to the FOO_CPU_TYPE_NAME()
macro to resolve target CPU types. Move the declaration
(along with the required FOO_CPU_TYPE_SUFFIX) to "cpu-qom.h".

"target/foo/cpu-qom.h" is supposed to be target agnostic
(include-able by any target). Add such mention in the
header.
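
For i386 the moved definitions follow the usual pattern (quoted from
memory; the authoritative copy lives in target/i386/cpu-qom.h):

    #define X86_CPU_TYPE_SUFFIX "-" TYPE_X86_CPU
    #define X86_CPU_TYPE_NAME(name) (name X86_CPU_TYPE_SUFFIX)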

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231013140116.255-7-philmd@linaro.org>
2023-11-07 12:13:27 +01:00
Philippe Mathieu-Daudé
6ee45fac56 target: Unify QOM style
Enforce the style described by commit 067109a11c ("docs/devel:
mention the spacing requirement for QOM"):

  The first declaration of a storage or class structure should
  always be the parent and leave a visual space between that
  declaration and the new code. It is also useful to separate
  backing for properties (options driven by the user) and internal
  state to make navigation easier.
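
A minimal example of the enforced layout (type and field names are
illustrative only):

    struct FooCPUClass {
        CPUClass parent_class;

        /* properties set by the user */
        bool some_option;

        /* internal state */
        int cached_value;
    };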

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20231013140116.255-2-philmd@linaro.org>
2023-11-07 12:13:27 +01:00