Commit Graph

1792 Commits

Author SHA1 Message Date
Ricky Zhou
8bf171c2d1 target/i386: Fix exception classes for MOVNTPS/MOVNTPD.
Before this change, MOVNTPS and MOVNTPD were labeled as Exception Class
4 (only requiring alignment for legacy SSE instructions). This changes
them to Exception Class 1 (always requiring memory alignment), as
documented in the Intel manual.
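
A minimal illustration of what the reclassification means in practice
(a hedged sketch with hypothetical helper names, not the actual decoder
interface): Exception Class 1 enforces 16-byte alignment for both the
legacy and VEX encodings, while Exception Class 4 only enforces it for
the legacy SSE encoding.

    #include <stdbool.h>
    #include <stdint.h>

    /* Does this encoding require an alignment check at all? */
    static bool movnt_needs_align_check(int exception_class, bool is_vex)
    {
        if (exception_class == 1) {
            return true;            /* MOVNTPS/MOVNTPD: always checked */
        }
        if (exception_class == 4) {
            return !is_vex;         /* only the legacy SSE form is checked */
        }
        return false;
    }

    /* A misaligned 16-byte operand faults when the check applies. */
    static bool movnt_operand_faults(uint64_t vaddr, int exception_class,
                                     bool is_vex)
    {
        return movnt_needs_align_check(exception_class, is_vex) &&
               (vaddr & 15) != 0;
    }
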
Message-Id: <20230501111428.95998-3-ricky@rzhou.org>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18 08:53:50 +02:00
Ricky Zhou
cab529b0dc target/i386: Fix exception classes for SSE/AVX instructions.
Fix the exception classes for some SSE/AVX instructions to match what is
documented in the Intel manual.

These changes are expected to have no functional effect on the behavior
that qemu implements (primarily >= 16-byte memory alignment checks). For
instance, since qemu does not implement the AC flag, there is no
difference in behavior between Exception Classes 4 and 5 for
instructions where the SSE version only takes <16 byte memory operands.
Message-Id: <20230501111428.95998-2-ricky@rzhou.org>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18 08:53:50 +02:00
Ricky Zhou
afa94dabc5 target/i386: Fix and add some comments next to SSE/AVX instructions.
Adds some comments describing what instructions correspond to decoding
table entries and fixes some existing comments which named the wrong
instruction.
Message-Id: <20230501111428.95998-1-ricky@rzhou.org>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18 08:53:50 +02:00
Xinyu Li
056d649007 target/i386: fix avx2 instructions vzeroall and vpermdq
vzeroall: xmm_regs should be used instead of xmm_t0
vpermdq: bits 3 and 7 of imm should be considered

Signed-off-by: Xinyu Li <lixinyu20s@ict.ac.cn>
Message-Id: <20230510145222.586487-1-lixinyu20s@ict.ac.cn>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18 08:53:50 +02:00
Paolo Bonzini
2b55e479e6 target/i386: fix operand size for VCOMI/VUCOMI instructions
Compared to other SSE instructions, VUCOMISx and VCOMISx are different:
the single- and double-precision versions are distinguished through a
prefix; however, they use no prefix and 0x66 for SS and SD respectively,
while scalar values are usually associated with 0xF2 and 0xF3.

Because of this, they incorrectly perform a 128-bit memory load instead
of a 32- or 64-bit load.  Fix this by writing a custom decoding function.

I tested that the reproducer is fixed and the test-avx output does not
change.
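
A rough sketch of the intended operand sizing (an illustrative helper, not
the actual decode-table entry):

    #include <stdbool.h>

    /* (V)COMISS/(V)UCOMISS read a 32-bit scalar from memory and
     * (V)COMISD/(V)UCOMISD a 64-bit scalar -- not a full 128-bit
     * XMM-sized load. */
    static int vcomi_mem_operand_bytes(bool is_double_precision)
    {
        return is_double_precision ? 8 : 4;
    }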

Reported-by: Gabriele Svelto <gsvelto@mozilla.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1637
Fixes: f8d19eec0d ("target/i386: reimplement 0x0f 0x28-0x2f, add AVX", 2022-10-18)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18 08:53:50 +02:00
Emanuele Giuseppe Esposito
22e1094ca8 target/i386: add support for FB_CLEAR feature
As reported by Intel's documentation:
"FB_CLEAR: The processor will overwrite fill buffer values as part of
MD_CLEAR operations with the VERW instruction.
On these processors, L1D_FLUSH does not overwrite fill buffer values."

If this CPU feature is present on the host, allow QEMU to choose whether to
show it to the guest too.
One disadvantage of not exposing it is that the guest will report
a non-existing vulnerability in
/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
because the mitigation is present only when the CPU has the
        (FLUSH_L1D and MD_CLEAR) or FB_CLEAR
features enabled.
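
Stated as a condition (purely illustrative, not QEMU code), the mitigation
reported by the guest boils down to:

    #include <stdbool.h>

    /* The guest treats MMIO Stale Data as mitigated when either feature
     * combination is available. */
    static bool mmio_stale_data_mitigated(bool flush_l1d, bool md_clear,
                                          bool fb_clear)
    {
        return (flush_l1d && md_clear) || fb_clear;
    }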

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230201135759.555607-3-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18 08:53:50 +02:00
Emanuele Giuseppe Esposito
0e7e3bf1a5 target/i386: add support for FLUSH_L1D feature
As reported by Intel's documentation:
"L1D_FLUSH: Writeback and invalidate the L1 data cache"

If this CPU feature is present on the host, allow QEMU to choose whether to
show it to the guest too.
One disadvantage of not exposing it is that the guest will report
a non-existing vulnerability in
/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
because the mitigation is present only when the CPU has the
	(FLUSH_L1D and MD_CLEAR) or FB_CLEAR
features enabled.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230201135759.555607-2-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18 08:53:50 +02:00
Babu Moger
166b174188 target/i386: Add EPYC-Genoa model to support Zen 4 processor series
Adds the support for AMD EPYC Genoa generation processors. The model
display for the new processor will be EPYC-Genoa.

Adds the following new feature bits on top of the feature bits from
the previous generation EPYC models.

avx512f         : AVX-512 Foundation instruction
avx512dq        : AVX-512 Doubleword & Quadword Instruction
avx512ifma      : AVX-512 Integer Fused Multiply Add instruction
avx512cd        : AVX-512 Conflict Detection instruction
avx512bw        : AVX-512 Byte and Word Instructions
avx512vl        : AVX-512 Vector Length Extension Instructions
avx512vbmi      : AVX-512 Vector Byte Manipulation Instruction
avx512_vbmi2    : AVX-512 Additional Vector Byte Manipulation Instruction
gfni            : AVX-512 Galois Field New Instructions
avx512_vnni     : AVX-512 Vector Neural Network Instructions
avx512_bitalg   : AVX-512 Bit Algorithms Instructions
avx512_vpopcntdq: AVX-512 Vector Population Count Doubleword and
                  Quadword Instructions
avx512_bf16	: AVX-512 BFLOAT16 instructions
la57            : 57-bit virtual address support (5-level Page Tables)
vnmi            : Virtual NMI (VNMI) allows the hypervisor to inject an NMI
                  into the guest without using the Event Injection mechanism,
                  so it is not required to track the guest NMI and intercept
                  the IRET.
auto-ibrs       : The AMD Zen4 core supports a new feature called Automatic IBRS.
                  It is a "set-and-forget" feature: unlike, e.g., the s/w-toggled
                  SPEC_CTRL.IBRS, h/w manages its IBRS mitigation resources
                  automatically across CPL transitions.

Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <20230504205313.225073-8-babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-08 16:35:30 +02:00
Babu Moger
62a798d4bc target/i386: Add VNMI and automatic IBRS feature bits
Add the following feature bits.

vnmi: Virtual NMI (VNMI) allows the hypervisor to inject an NMI into the
      guest without using the Event Injection mechanism, so it is not
      required to track the guest NMI and intercept the IRET.
      The presence of this feature is indicated via the CPUID function
      0x8000000A_EDX[25].

automatic-ibrs :
      The AMD Zen4 core supports a new feature called Automatic IBRS.
      It is a "set-and-forget" feature: unlike, e.g., the s/w-toggled
      SPEC_CTRL.IBRS, h/w manages its IBRS mitigation resources
      automatically across CPL transitions.
      The presence of this feature is indicated via the CPUID function
      0x80000021_EAX[8].
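
A minimal user-space probe for the two CPUID bits named above, assuming a
GCC/Clang toolchain that provides <cpuid.h> (the helper names are made up
for illustration):

    #include <cpuid.h>
    #include <stdbool.h>

    static bool host_has_vnmi(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x8000000A, &eax, &ebx, &ecx, &edx)) {
            return false;
        }
        return (edx >> 25) & 1;        /* CPUID 0x8000000A EDX[25] */
    }

    static bool host_has_automatic_ibrs(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000021, &eax, &ebx, &ecx, &edx)) {
            return false;
        }
        return (eax >> 8) & 1;         /* CPUID 0x80000021 EAX[8] */
    }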

The documentation for the features is available in the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. AMD64 Architecture Programmer’s Manual Volumes 1–5 Publication No. Revision
   40332 4.05 Date October 2022

Signed-off-by: Santosh Shukla <santosh.shukla@amd.com>
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
Link: https://www.amd.com/system/files/TechDocs/40332_4.05.pdf
Message-Id: <20230504205313.225073-7-babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-08 16:35:30 +02:00
Babu Moger
27f03be6f5 target/i386: Add missing feature bits in EPYC-Milan model
Add the following feature bits for the EPYC-Milan model and bump the version.
vaes            : Vector VAES(ENC|DEC), VAES(ENC|DEC)LAST instruction support
vpclmulqdq	: Vector VPCLMULQDQ instruction support
stibp-always-on : Single Thread Indirect Branch Prediction Mode has enhanced
                  performance and may be left always on
amd-psfd        : Predictive Store Forwarding Disable
no-nested-data-bp         : Processor ignores nested data breakpoints
lfence-always-serializing : LFENCE instruction is always serializing
null-sel-clr-base         : Null Selector Clears Base. When this bit is
                            set, a null segment load clears the segment base

These new features will be added in EPYC-Milan-v2. The "-cpu help" output
after the change will be:

    x86 EPYC-Milan             (alias configured by machine type)
    x86 EPYC-Milan-v1          AMD EPYC-Milan Processor
    x86 EPYC-Milan-v2          AMD EPYC-Milan Processor

The documentation for the features is available in the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. SECURITY ANALYSIS OF AMD PREDICTIVE STORE FORWARDING
c. AMD64 Architecture Programmer’s Manual Volumes 1–5 Publication No. Revision
    40332 4.05 Date October 2022

Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
Link: https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf
Link: https://www.amd.com/system/files/TechDocs/40332_4.05.pdf
Message-Id: <20230504205313.225073-6-babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-08 16:35:30 +02:00
Babu Moger
b70eec312b target/i386: Add feature bits for CPUID_Fn80000021_EAX
Add the following feature bits.
no-nested-data-bp	  : Processor ignores nested data breakpoints.
lfence-always-serializing : LFENCE instruction is always serializing.
null-sel-clr-base	  : Null Selector Clears Base. When this bit is
			    set, a null segment load clears the segment base.

The documentation for the features is available in the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. AMD64 Architecture Programmer’s Manual Volumes 1–5 Publication No. Revision
    40332 4.05 Date October 2022

Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
Link: https://www.amd.com/system/files/TechDocs/40332_4.05.pdf
Message-Id: <20230504205313.225073-5-babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-08 16:35:30 +02:00
Babu Moger
bb039a230e target/i386: Add a couple of feature bits in 8000_0008_EBX
Add the following feature bits.

amd-psfd : Predictive Store Forwarding Disable:
           PSF is a hardware-based micro-architectural optimization
           designed to improve the performance of code execution by
           predicting address dependencies between loads and stores.
           While SSBD (Speculative Store Bypass Disable) disables both
           PSF and speculative store bypass, PSFD only disables PSF.
           PSFD may be desirable for software which is concerned
           with the speculative behavior of PSF but desires a smaller
           performance impact than setting SSBD.
	   Depends on the following kernel commit:
           b73a54321ad8 ("KVM: x86: Expose Predictive Store Forwarding Disable")

stibp-always-on :
           Single Thread Indirect Branch Prediction mode has enhanced
           performance and may be left always on.

The documentation for the features is available in the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. SECURITY ANALYSIS OF AMD PREDICTIVE STORE FORWARDING

Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
Message-Id: <20230504205313.225073-4-babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-08 16:35:30 +02:00
Michael Roth
d7c72735f6 target/i386: Add new EPYC CPU versions with updated cache_info
Introduce new EPYC cpu versions: EPYC-v4 and EPYC-Rome-v3.
The only difference vs. older models is an updated cache_info with
the 'complex_indexing' bit unset, since this bit is not currently
defined for AMD and may cause problems should it be used for
something else in the future. Setting this bit will also cause
CPUID validation failures when running SEV-SNP guests.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20230504205313.225073-3-babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-08 16:35:30 +02:00
Michael Roth
cca0a000d0 target/i386: allow versioned CPUs to specify new cache_info
New EPYC CPU versions require small changes to their cache_info.
Because the current QEMU x86 CPU definition does not support versioned
cache_info, we would have to declare a new CPU type for each such case.
To avoid the duplicated work, add "cache_info" to "X86CPUVersionDefinition"
to allow new cache_info pointers to be specified for a new CPU version.
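
A rough sketch of the idea; the existing fields are paraphrased with
simplified placeholder types, and only the cache_info pointer is the
addition described here:

    /* Simplified stand-ins for the real QEMU types, for illustration. */
    typedef int X86CPUVersion;
    typedef struct PropValue PropValue;
    typedef struct CPUCaches CPUCaches;

    typedef struct X86CPUVersionDefinition {
        X86CPUVersion version;
        const char *alias;
        const char *note;
        PropValue *props;
        const CPUCaches *cache_info;  /* new: NULL means "inherit the CPU
                                       * model's default cache description" */
    } X86CPUVersionDefinition;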

Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20230504205313.225073-2-babu.moger@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-08 16:35:30 +02:00
Jiaxi Chen
d1a1111514 target/i386: Add support for PREFETCHIT0/1 in CPUID enumeration
The latest Intel platform Granite Rapids has introduced a new instruction,
PREFETCHIT0/1, which prefetches code into caches closer to the processor
depending on specific hints.

The bit definition:
CPUID.(EAX=7,ECX=1):EDX[bit 14]

Add CPUID definition for PREFETCHIT0/1.

Signed-off-by: Jiaxi Chen <jiaxi.chen@linux.intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20230303065913.1246327-7-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Jiaxi Chen
ecd2e6ca03 target/i386: Add support for AVX-NE-CONVERT in CPUID enumeration
AVX-NE-CONVERT is a new set of instructions which can convert low-precision
floating point such as BF16/FP16 to high-precision floating point FP32, as
well as convert FP32 elements to BF16. These instructions allow the platform
to have improved AI capabilities and better compatibility.

The bit definition:
CPUID.(EAX=7,ECX=1):EDX[bit 5]

Add CPUID definition for AVX-NE-CONVERT.

Signed-off-by: Jiaxi Chen <jiaxi.chen@linux.intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20230303065913.1246327-6-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Jiaxi Chen
eaaa197d5b target/i386: Add support for AVX-VNNI-INT8 in CPUID enumeration
AVX-VNNI-INT8 is a new set of instructions in the latest Intel platform
Sierra Forest, which aims to give the platform superior AI capabilities.
These instructions multiply the individual bytes of two signed or
unsigned source operands, then add and accumulate the results into the
destination dword element size operand.

The bit definition:
CPUID.(EAX=7,ECX=1):EDX[bit 4]

AVX-VNNI-INT8 is on a new feature bits leaf. Add a CPUID feature word
FEAT_7_1_EDX for this leaf.

Add CPUID definition for AVX-VNNI-INT8.
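
A minimal user-space probe for this bit, assuming a GCC/Clang toolchain
that provides <cpuid.h> (illustrative only):

    #include <cpuid.h>
    #include <stdbool.h>

    static bool host_has_avx_vnni_int8(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx)) {
            return false;              /* CPUID leaf 7 not supported */
        }
        return (edx >> 4) & 1;         /* CPUID.(EAX=7,ECX=1):EDX[bit 4] */
    }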

Signed-off-by: Jiaxi Chen <jiaxi.chen@linux.intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20230303065913.1246327-5-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Jiaxi Chen
a957a88416 target/i386: Add support for AVX-IFMA in CPUID enumeration
AVX-IFMA is a new instruction set in the latest Intel platform Sierra
Forest. These instructions multiply packed unsigned 52-bit integers and
add the low/high 52-bit products to quadword accumulators.

The bit definition:
CPUID.(EAX=7,ECX=1):EAX[bit 23]

Add CPUID definition for AVX-IFMA.

Signed-off-by: Jiaxi Chen <jiaxi.chen@linux.intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20230303065913.1246327-4-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Jiaxi Chen
99ed8445ea target/i386: Add support for AMX-FP16 in CPUID enumeration
The latest Intel platform Granite Rapids has introduced a new instruction,
AMX-FP16, which performs dot-products of two FP16 tiles and accumulates
the results into a packed single-precision tile. AMX-FP16 adds FP16
capability and allows an FP16 GPU-trained model to run faster without
loss of accuracy or added SW overhead.

The bit definition:
CPUID.(EAX=7,ECX=1):EAX[bit 21]

Add CPUID definition for AMX-FP16.

Signed-off-by: Jiaxi Chen <jiaxi.chen@linux.intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20230303065913.1246327-3-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Jiaxi Chen
a9ce107fd0 target/i386: Add support for CMPCCXADD in CPUID enumeration
CMPccXADD is a new set of instructions in the latest Intel platform
Sierra Forest. This new instruction set includes a semaphore operation
that can compare and add the operands if the condition is met, which can
improve database performance.

The bit definition:
CPUID.(EAX=7,ECX=1):EAX[bit 7]

Add CPUID definition for CMPCCXADD.

Signed-off-by: Jiaxi Chen <jiaxi.chen@linux.intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20230303065913.1246327-2-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Tom Lendacky
fb6bbafc0f i386/cpu: Update how the EBX register of CPUID 0x8000001F is set
Update the setting of CPUID 0x8000001F EBX to clearly document the ranges
associated with fields being set.

Fixes: 6cb8f2a663 ("cpu/i386: populate CPUID 0x8000_001F when SEV is active")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <5822fd7d02b575121380e1f493a8f6d9eba2b11a.1664550870.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Tom Lendacky
8168fed9f8 i386/sev: Update checks and information related to reduced-phys-bits
The value of the reduced-phys-bits parameter is propagated to the CPUID
information exposed to the guest. Update the current validation check to
account for the size of the CPUID field (6 bits), ensuring the value is
in the range of 1 to 63.

Maintain backward compatibility, to an extent, by allowing a value greater
than 1 (so that the previously documented value of 5 still works), but not
allowing anything over 63.
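
The check described above reduces to a simple range test; this is a
hypothetical helper for illustration, not the actual QEMU code:

    #include <stdbool.h>

    /* The CPUID field is 6 bits wide, so only values 1..63 fit. */
    static bool reduced_phys_bits_valid(int value)
    {
        return value >= 1 && value <= 63;
    }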

Fixes: d8575c6c02 ("sev/i386: add command to initialize the memory encryption context")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <cca5341a95ac73f904e6300f10b04f9c62e4e8ff.1664550870.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-28 12:50:34 +02:00
Richard Henderson
732e89f4c4 tcg: Replace tcg_abort with g_assert_not_reached
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:17:46 +01:00
Richard Henderson
1cc6e1a201 * Optional use of Meson wrap for slirp
* Coverity fixes
 * Avoid -Werror=maybe-uninitialized
 * Mark coroutine QMP command functions as coroutine_fn
 * Mark functions that suspend as coroutine_mixed_fn
 * target/i386: Fix SGX CPUID leaf
 * First batch of qatomic_mb_read() removal
 * Small atomic.rst improvement
 * NBD cleanup
 * Update libvirt-ci submodule

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (25 commits)
  tests: lcitool: Switch to OpenSUSE Leap 15.4
  tests: libvirt-ci: Update to commit '2fa24dce8bc'
  configure: Honour cross-prefix when finding ObjC compiler
  coverity: unify Fedora dockerfiles
  nbd: a BlockExport always has a BlockBackend
  docs: explain effect of smp_read_barrier_depends() on modern architectures
  qemu-coroutine: remove qatomic_mb_read()
  postcopy-ram: do not use qatomic_mb_read
  block-backend: remove qatomic_mb_read()
  target/i386: Change wrong XFRM value in SGX CPUID leaf
  monitor: mark mixed functions that can suspend
  migration: mark mixed functions that can suspend
  io: mark mixed functions that can suspend
  qapi-gen: mark coroutine QMP command functions as coroutine_fn
  target/mips: tcg: detect out-of-bounds accesses to cpu_gpr and cpu_gpr_hi
  coverity: update COMPONENTS.md
  lasi: fix RTC migration
  target/i386: Avoid unreachable variable declaration in mmu_translate()
  configure: Avoid -Werror=maybe-uninitialized
  tests: bios-tables-test: replace memset with initializer
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-22 06:10:51 +01:00
Thomas Huth
123fa10279 target/i386: Set family/model/stepping of the "max" CPU according to LM bit
We want to get rid of the "#ifdef TARGET_X86_64" compile-time switch
in the long run, so we can drop the separate compilation of the
"qemu-system-i386" binary one day - but we then still need a way to
run a guest with max. CPU settings in 32-bit mode. So the "max" CPU
should determine its family/model/stepping settings according to the
"large mode" (LM) CPU feature bit during runtime, so that it is
possible to run "qemu-system-x86_64 -cpu max,lm=off" and still get
a sane family/model/stepping setting for the guest CPU.

To be able to check the LM bit, we have to move the code that sets
up these properties to a "realize" function, since the LM setting is
not available yet when the "instance_init" function is being called.

Message-Id: <20230306154311.476458-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2023-04-20 11:25:32 +02:00
Yang Zhong
72497cff89 target/i386: Change wrong XFRM value in SGX CPUID leaf
The previous patch wrongly replaced FEAT_XSAVE_XCR0_{LO|HI} with
FEAT_XSAVE_XSS_{LO|HI} in CPUID(EAX=12,ECX=1):{ECX,EDX}.  As a result,
SGX enclaves only supported the SSE and x87 features (xfrm=0x3).

Fixes: 301e90675c ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Yang Zhong <yang.zhong@linux.intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Message-Id: <20230406064041.420039-1-yang.zhong@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-20 11:17:35 +02:00
Peter Maydell
987b63f24a target/i386: Avoid unreachable variable declaration in mmu_translate()
Coverity complains (CID 1507880) that the declaration "int error_code;"
in mmu_translate() is unreachable code. Since this is only a declaration,
this isn't actually a bug, but:
 * it's a bear-trap for future changes, because if it was changed to
   include an initialization 'int error_code = foo;' then the
   initialization wouldn't actually happen (being dead code)
 * it's against our coding style, which wants declarations to be
   at the start of blocks
 * it means that anybody reading the code has to go and look up
   exactly what the C rules are for skipping over variable declarations
   using a goto

Move the declaration to the top of the function.
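
A condensed illustration of the trap (hypothetical code, not the actual
mmu_translate() function): a declaration jumped over by a goto compiles,
but an initializer added to it later would silently never run on the goto
path.

    int lookup(int cond)
    {
        if (cond) {
            goto do_check;
        }
        int error_code = -1;   /* skipped when control arrives via goto */
    do_check:
        return error_code;     /* may be read uninitialized on that path */
    }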

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230406155946.3362077-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-04-20 11:17:35 +02:00
Richard Henderson
cc37d98bfb *: Add missing includes of qemu/error-report.h
This had been pulled in via qemu/plugin.h from hw/core/cpu.h,
but that will be removed.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230310195252.210956-5-richard.henderson@linaro.org>
[AJB: add various additional cases shown by CI]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230315174331.2959-15-alex.bennee@linaro.org>
Reviewed-by: Emilio Cota <cota@braap.org>
2023-03-22 15:06:57 +00:00
Miroslav Rezanina
ddc7cb30f8 Fix build without CONFIG_XEN_EMU
Upstream commit ddf0fd9ae1 "hw/xen: Support HVM_PARAM_CALLBACK_TYPE_GSI callback"
added a kvm_xen_maybe_deassert_callback usage to the target/i386/kvm/kvm.c file
without a conditional preprocessing check. This breaks any build not using
CONFIG_XEN_EMU.

Protect the call with conditional preprocessing to allow building without
CONFIG_XEN_EMU.
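
The shape of the fix, sketched (a guard around the call named above, not
the exact hunk; the surrounding function in target/i386/kvm/kvm.c is
omitted):

    #ifdef CONFIG_XEN_EMU
        kvm_xen_maybe_deassert_callback(cpu);
    #endif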

Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20230308130557.2420-1-mrezanin@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-03-15 11:52:24 +01:00
Richard Henderson
3df11bb14a target/i386: Avoid use of tcg_const_* throughout
All uses are strictly read-only.  Most of them obviously so,
as direct arguments to gen_helper_*.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-13 06:44:37 -07:00
Anton Johansson
6787318a5d target/i386: Remove NB_MMU_MODES define
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230306175230.7110-9-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-13 06:44:37 -07:00
Peter Maydell
b1224d8395 gdbstub refactor:
- split user and softmmu code
   - use cleaner headers for tb_flush, target_ulong
   - probe for gdb multiarch support at configure
   - make syscall handling target independent
   - add update guest debug of accel ops

Merge tag 'pull-gdbstub-070323-3' of https://gitlab.com/stsquad/qemu into staging

* tag 'pull-gdbstub-070323-3' of https://gitlab.com/stsquad/qemu: (30 commits)
  gdbstub: move update guest debug to accel ops
  gdbstub: Build syscall.c once
  stubs: split semihosting_get_target from system only stubs
  gdbstub: Adjust gdb_do_syscall to only use uint32_t and uint64_t
  gdbstub: Remove gdb_do_syscallv
  gdbstub: split out softmmu/user specifics for syscall handling
  include: split target_long definition from cpu-defs
  testing: probe gdb for supported architectures ahead of time
  gdbstub: only compile gdbstub twice for whole build
  gdbstub: move syscall handling to new file
  gdbstub: move register helpers into standalone include
  gdbstub: don't use target_ulong while handling registers
  gdbstub: fix address type of gdb_set_cpu_pc
  gdbstub: specialise stub_can_reverse
  gdbstub: introduce gdb_get_max_cpus
  gdbstub: specialise target_memory_rw_debug
  gdbstub: specialise handle_query_attached
  gdbstub: abstract target specific details from gdb_put_packet_binary
  gdbstub: rationalise signal mapping in softmmu
  gdbstub: move chunks of user code into own files
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-03-09 16:54:51 +00:00
Alex Bennée
4ea5fe997d gdbstub: move register helpers into standalone include
These inline helpers are all used by target specific code so move them
out of the general header so we don't needlessly pollute the rest of
the API with target specific stuff.

Note we have to include cpu.h in semihosting as it was relying on a
side effect before.

Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

Message-Id: <20230302190846.2593720-21-alex.bennee@linaro.org>
Message-Id: <20230303025805.625589-21-richard.henderson@linaro.org>
2023-03-07 20:44:08 +00:00
David Woodhouse
de26b26197 hw/xen: Implement soft reset for emulated gnttab
This is only part of it; we will also need to get the PV back end drivers
to tear down their own mappings (or do it for them, but they kind of need
to stop using the pointers too).

Some more work on the actual PV back ends and xen-bus code is going to be
needed to really make soft reset and migration fully functional, and this
part is the basis for that.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-07 17:04:30 +00:00
Richard Henderson
3e7da311d7 target/i386: Simplify POPF
Compute the eflags write mask separately, leaving one call
to the helper.  Use tcg_constant_i32.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-05 13:45:31 -08:00
Richard Henderson
5128d58480 target/i386: Drop tcg_temp_free
Translators are no longer required to free tcg temporaries.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-05 13:44:08 -08:00
Peter Maydell
c61d1a066c * bugfixes
* show machine ACPI support in QAPI
 * Core Xen emulation support for KVM/x86

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (62 commits)
  Makefile: qemu-bundle is a directory
  qapi: Add 'acpi' field to 'query-machines' output
  hw/xen: Subsume xen_be_register_common() into xen_be_init()
  i386/xen: Document Xen HVM emulation
  kvm/i386: Add xen-evtchn-max-pirq property
  hw/xen: Support MSI mapping to PIRQ
  hw/xen: Support GSI mapping to PIRQ
  hw/xen: Implement emulated PIRQ hypercall support
  i386/xen: Implement HYPERVISOR_physdev_op
  hw/xen: Automatically add xen-platform PCI device for emulated Xen guests
  hw/xen: Add basic ring handling to xenstore
  hw/xen: Add xen_xenstore device for xenstore emulation
  hw/xen: Add backend implementation of interdomain event channel support
  i386/xen: handle HVMOP_get_param
  i386/xen: Reserve Xen special pages for console, xenstore rings
  i386/xen: handle PV timer hypercalls
  hw/xen: Implement GNTTABOP_query_size
  i386/xen: Implement HYPERVISOR_grant_table_op and GNTTABOP_[gs]et_verson
  hw/xen: Support mapping grant frames
  hw/xen: Add xen_gnttab device for grant table emulation
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-03-02 16:13:45 +00:00
Peter Maydell
0ccf919d74 Monitor patches for 2023-03-02

Merge tag 'pull-monitor-2023-03-02' of https://repo.or.cz/qemu/armbru into staging

* tag 'pull-monitor-2023-03-02' of https://repo.or.cz/qemu/armbru:
  target/ppc: Restrict 'qapi-commands-machine.h' to system emulation
  target/loongarch: Restrict 'qapi-commands-machine.h' to system emulation
  target/i386: Restrict 'qapi-commands-machine.h' to system emulation
  target/arm: Restrict 'qapi-commands-machine.h' to system emulation
  readline: fix hmp completion issue

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-03-02 10:54:17 +00:00
Philippe Mathieu-Daudé
390dbc6e2e target/i386: Restrict 'qapi-commands-machine.h' to system emulation
Since commit a0e61807a3 ("qapi: Remove QMP events and commands from
user-mode builds") we don't generate the "qapi-commands-machine.h"
header in a user-emulation-only build.

Guard qmp_query_cpu_definitions() within CONFIG_USER_ONLY; move
x86_cpu_class_check_missing_features() closer since it is only used
by this QMP command handler.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230223155540.30370-3-philmd@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2023-03-02 07:51:33 +01:00
Richard Henderson
3a5d177322 target/i386: Don't use tcg_temp_local_new
Since tcg_temp_new is now identical, use that.
In some cases we can avoid a copy from A0 or T0.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-01 07:33:28 -10:00
Richard Henderson
597f9b2d30 accel/tcg: Pass max_insn to gen_intermediate_code by pointer
In preparation for returning the number of insns generated
via the same pointer.  Adjust only the prototypes so far.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-01 07:33:27 -10:00
Anton Johansson
34a39c2443 target/i386: Replace tb_pc() with tb->pc
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-23-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-01 07:33:20 -10:00
Anton Johansson
f6680c5ea4 target/i386: Remove TARGET_TB_PCREL define
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-11-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-01 07:33:02 -10:00
Anton Johansson
2e3afe8e19 target/i386: Replace TARGET_TB_PCREL with CF_PCREL
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-8-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-01 07:31:56 -10:00
Anton Johansson
492f8b88ae target/i386: set CF_PCREL in x86_cpu_realizefn
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-3-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-01 07:31:27 -10:00
David Woodhouse
e16aff4cc2 kvm/i386: Add xen-evtchn-max-pirq property
The default number of PIRQs is set to 256 to avoid issues with 32-bit MSI
devices. Allow it to be increased if the user desires.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:09:22 +00:00
David Woodhouse
6096cf7877 hw/xen: Support MSI mapping to PIRQ
The way that Xen handles MSI PIRQs is kind of awful.

There is a special MSI message which targets a PIRQ. The vector in the
low bits of data must be zero. The low 8 bits of the PIRQ# are in the
destination ID field, the extended destination ID field is unused, and
instead the high bits of the PIRQ# are in the high 32 bits of the address.
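
Decoded as a sketch (illustrative only; the exact detection logic and how
the two halves recombine are assumptions here, not the code QEMU uses):

    #include <stdint.h>

    /* Per the description above: the destination ID field (address bits
     * 19:12) carries the low 8 bits of the PIRQ#, the high 32 bits of the
     * address carry the rest, and the vector in the data must be zero. */
    static uint32_t xen_msi_pirq(uint64_t addr)
    {
        uint32_t low  = (uint32_t)(addr >> 12) & 0xff;
        uint32_t high = (uint32_t)(addr >> 32);

        return (high << 8) | low;
    }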

Using the high bits of the address means that we can't intercept and
translate these messages in kvm_send_msi(), because they won't be caught
by the APIC — addresses like 0x1000fee46000 aren't in the APIC's range.

So we catch them in pci_msi_trigger() instead, and deliver the event
channel directly.

That isn't even the worst part. The worst part is that Xen snoops on
writes to devices' MSI vectors while they are *masked*. When an MSI
message is written which looks like it targets a PIRQ, it remembers
the device and vector for later.

When the guest makes a hypercall to bind that PIRQ# (snooped from a
masked MSI vector) to an event channel port, Xen *unmasks* that MSI
vector on the device. Xen guests using PIRQ delivery of MSI don't
ever actually unmask the MSI for themselves.

Now that this is working we can finally enable XENFEAT_hvm_pirqs and
let the guest use it all.

Tested with passthrough igb and emulated e1000e + AHCI.

           CPU0       CPU1
  0:         65          0   IO-APIC   2-edge      timer
  1:          0         14  xen-pirq   1-ioapic-edge  i8042
  4:          0        846  xen-pirq   4-ioapic-edge  ttyS0
  8:          1          0  xen-pirq   8-ioapic-edge  rtc0
  9:          0          0  xen-pirq   9-ioapic-level  acpi
 12:        257          0  xen-pirq  12-ioapic-edge  i8042
 24:       9600          0  xen-percpu    -virq      timer0
 25:       2758          0  xen-percpu    -ipi       resched0
 26:          0          0  xen-percpu    -ipi       callfunc0
 27:          0          0  xen-percpu    -virq      debug0
 28:       1526          0  xen-percpu    -ipi       callfuncsingle0
 29:          0          0  xen-percpu    -ipi       spinlock0
 30:          0       8608  xen-percpu    -virq      timer1
 31:          0        874  xen-percpu    -ipi       resched1
 32:          0          0  xen-percpu    -ipi       callfunc1
 33:          0          0  xen-percpu    -virq      debug1
 34:          0       1617  xen-percpu    -ipi       callfuncsingle1
 35:          0          0  xen-percpu    -ipi       spinlock1
 36:          8          0   xen-dyn    -event     xenbus
 37:          0       6046  xen-pirq    -msi       ahci[0000:00:03.0]
 38:          1          0  xen-pirq    -msi-x     ens4
 39:          0         73  xen-pirq    -msi-x     ens4-rx-0
 40:         14          0  xen-pirq    -msi-x     ens4-rx-1
 41:          0         32  xen-pirq    -msi-x     ens4-tx-0
 42:         47          0  xen-pirq    -msi-x     ens4-tx-1

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:09:22 +00:00
David Woodhouse
aa98ee38a5 hw/xen: Implement emulated PIRQ hypercall support
This wires up the basic infrastructure but the actual interrupts aren't
there yet, so don't advertise it to the guest.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:09:01 +00:00
David Woodhouse
799c23548f i386/xen: Implement HYPERVISOR_physdev_op
Just hook up the basic hypercalls to stubs in xen_evtchn.c for now.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:08:26 +00:00
David Woodhouse
c08f5d0e53 hw/xen: Add xen_xenstore device for xenstore emulation
Just the basic shell, with the event channel hookup. It only dumps the
buffer for now; a real ring implementation will come in a subsequent patch.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:08:26 +00:00
Joao Martins
c6623cc3e7 i386/xen: handle HVMOP_get_param
This is used to fetch the xenstore PFN and port to be used by the guest.
These are preallocated by the toolstack; the guest just reads them and
uses them straight away.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:52 +00:00
David Woodhouse
8b57d5c523 i386/xen: Reserve Xen special pages for console, xenstore rings
Xen has eight frames at 0xfeff8000 for this; we only really need two for
now and KVM puts the identity map at 0xfeffc000, so limit ourselves to
four.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:52 +00:00
Joao Martins
b746a77926 i386/xen: handle PV timer hypercalls
Introduce support for the one-shot and periodic modes of Xen PV timers,
whereby timer interrupts come through a special virq event channel
with deadlines being set through:

1) set_timer_op hypercall (only oneshot)
2) vcpu_op hypercall for {set,stop}_{singleshot,periodic}_timer
hypercalls

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:52 +00:00
David Woodhouse
b46f9745b1 hw/xen: Implement GNTTABOP_query_size
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:52 +00:00
David Woodhouse
28b7ae94a2 i386/xen: Implement HYPERVISOR_grant_table_op and GNTTABOP_[gs]et_version
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:52 +00:00
David Woodhouse
a28b0fc034 hw/xen: Add xen_gnttab device for grant table emulation
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:52 +00:00
David Woodhouse
6f43f2ee49 kvm/i386: Add xen-gnttab-max-frames property
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:52 +00:00
David Woodhouse
2aff696b10 hw/xen: Support HVM_PARAM_CALLBACK_TYPE_PCI_INTX callback
The guest is permitted to specify an arbitrary domain/bus/device/function
and INTX pin from which the callback IRQ shall appear to have come.

In QEMU we can only easily do this for devices that actually exist, and
even that requires us "knowing" that it's a PCMachine in order to find
the PCI root bus — although that's OK really because it's always true.

We also don't get to get notified of INTX routing changes, because we
can't do that as a passive observer; if we try to register a notifier
it will overwrite any existing notifier callback on the device.

But in practice, guests using PCI_INTX will only ever use pin A on the
Xen platform device, and won't swizzle the INTX routing after they set
it up. So this is just fine.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:07:50 +00:00
David Woodhouse
ddf0fd9ae1 hw/xen: Support HVM_PARAM_CALLBACK_TYPE_GSI callback
The GSI callback (and later PCI_INTX) is a level triggered interrupt. It
is asserted when an event channel is delivered to vCPU0, and is supposed
to be cleared when the vcpu_info->evtchn_upcall_pending field for vCPU0
is cleared again.

Thankfully, Xen does *not* assert the GSI if the guest sets its own
evtchn_upcall_pending field; we only need to assert the GSI when we
have delivered an event for ourselves. So that's the easy part, kind of.

There's a slight complexity in that we need to hold the BQL before we
can call qemu_set_irq(), and we definitely can't do that while holding
our own port_lock (because we'll need to take that from the qemu-side
functions that the PV backend drivers will call). So if we end up
wanting to set the IRQ in a context where we *don't* already hold the
BQL, defer to a BH.

However, we *do* need to poll for the evtchn_upcall_pending flag being
cleared. In an ideal world we would poll that when the EOI happens on
the PIC/IOAPIC. That's how it works in the kernel with the VFIO eventfd
pairs — one is used to trigger the interrupt, and the other works in the
other direction to 'resample' on EOI, and trigger the first eventfd
again if the line is still active.

However, QEMU doesn't seem to do that. Even VFIO level interrupts seem
to be supported by temporarily unmapping the device's BARs from the
guest when an interrupt happens, then trapping *all* MMIO to the device
and sending the 'resample' event on *every* MMIO access until the IRQ
is cleared! Maybe in future we'll plumb the 'resample' concept through
QEMU's irq framework but for now we'll do what Xen itself does: just
check the flag on every vmexit if the upcall GSI is known to be
asserted.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 09:06:44 +00:00
David Woodhouse
a15b10978f hw/xen: Implement EVTCHNOP_reset
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
306670461b hw/xen: Implement EVTCHNOP_bind_vcpu
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
8432788104 hw/xen: Implement EVTCHNOP_bind_interdomain
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
e1db61b87b hw/xen: Implement EVTCHNOP_alloc_unbound
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
cf7679abdd hw/xen: Implement EVTCHNOP_send
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
f5417856d2 hw/xen: Implement EVTCHNOP_bind_ipi
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
c723d4c15e hw/xen: Implement EVTCHNOP_bind_virq
Add the array of virq ports to each vCPU so that we can deliver timers,
debug ports, etc. Global virqs are allocated against vCPU 0 initially,
but can be migrated to other vCPUs (when we implement that).

The kernel needs to know about VIRQ_TIMER in order to accelerate timers,
so tell it via KVM_XEN_VCPU_ATTR_TYPE_TIMER. Also save/restore the value
of the singleshot timer across migration, as the kernel will handle the
hypercalls automatically now.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
190cc3c0ed hw/xen: Implement EVTCHNOP_unmask
This finally comes with a mechanism for actually injecting events into
the guest vCPU, with all the atomic-test-and-set that's involved in
setting the bit in the shinfo, then the index in the vcpu_info, and
injecting either the lapic vector as MSI, or letting KVM inject the
bare vector.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
83eb581134 hw/xen: Implement EVTCHNOP_close
It calls an internal close_port() helper which will also be used from
EVTCHNOP_reset and will actually do the work to disconnect/unbind a port
once any of that is actually implemented in the first place.

That in turn calls a free_port() internal function which will be used in
error paths after allocation.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
4858ba2065 hw/xen: Implement EVTCHNOP_status
This adds the basic structure for maintaining the port table and reporting
the status of ports therein.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
27d4075dd8 i386/xen: Add support for Xen event channel delivery to vCPU
The kvm_xen_inject_vcpu_callback_vector() function will either deliver
the per-vCPU local APIC vector (as an MSI), or just kick the vCPU out
of the kernel to trigger KVM's automatic delivery of the global vector.
Support for asserting the GSI/PCI_INTX callbacks will come later.

Also add kvm_xen_get_vcpu_info_hva() which returns the vcpu_info of
a given vCPU.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
David Woodhouse
91cce75617 hw/xen: Add xen_evtchn device for event channel emulation
Include basic support for setting HVM_PARAM_CALLBACK_IRQ to the global
vector method HVM_PARAM_CALLBACK_TYPE_VECTOR, which is handled in-kernel
by raising the vector whenever the vCPU's vcpu_info->evtchn_upcall_pending
flag is set.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
Ankur Arora
5dbcd01a8d i386/xen: implement HVMOP_set_param
This is the hook for adding the HVM_PARAM_CALLBACK_IRQ parameter in a
subsequent commit.

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Split out from another commit]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
Ankur Arora
105b47fdf2 i386/xen: implement HVMOP_set_evtchn_upcall_vector
The HVMOP_set_evtchn_upcall_vector hypercall sets the per-vCPU upcall
vector, to be delivered to the local APIC just like an MSI (with an EOI).

This takes precedence over the system-wide delivery method set by the
HVMOP_set_param hypercall with HVM_PARAM_CALLBACK_IRQ. It's used by
Windows and Xen (PV shim) guests but normally not by Linux.

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Rework for upstream kernel changes and split from HVMOP_set_param]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
Joao Martins
3b06f29b24 i386/xen: implement HYPERVISOR_event_channel_op
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Ditch event_channel_op_compat which was never available to HVM guests]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
Joao Martins
5092db87e4 i386/xen: handle VCPUOP_register_runstate_memory_area
Allow the guest to set up the vCPU runstates, which are used as the
steal clock.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:50 +00:00
Joao Martins
f068930277 i386/xen: handle VCPUOP_register_vcpu_time_info
In order to support Linux vdso in Xen.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
c345104cd1 i386/xen: handle VCPUOP_register_vcpu_info
Handle the hypercall to set a per-vCPU vcpu_info, and also wire up the default
vcpu_info in the shared_info page for the first 32 vCPUs.

To avoid deadlock within KVM a vCPU thread must set its *own* vcpu_info
rather than it being set from the context in which the hypercall is
invoked.

Add the vcpu_info (and default) GPA to the vmstate_x86_cpu for migration,
and restore it in kvm_arch_put_registers() appropriately.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
d70bd6a485 i386/xen: implement HYPERVISOR_vcpu_op
This handles the case where the guest tries to register a vcpu_info.
Since vcpu_info placement is optional in the minimum ABI, we can just
fail with -ENOSYS.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
671bfdcd47 i386/xen: implement HYPERVISOR_hvm_op
This handles the guest querying for support for HVMOP_pagetable_dying.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
David Woodhouse
782a79601e i386/xen: implement XENMEM_add_to_physmap_batch
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
fb0fd2ce38 i386/xen: implement HYPERVISOR_memory_op
Specifically XENMEM_add_to_physmap with space XENMAPSPACE_shared_info to
allow the guest to set its shared_info page.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Use the xen_overlay device, add compat support]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
David Woodhouse
110a0ea59f i386/xen: manage and save/restore Xen guest long_mode setting
Xen will "latch" the guest's 32-bit or 64-bit ("long mode") setting when
the guest writes the MSR to fill in the hypercall page, or when the guest
sets the event channel callback in HVM_PARAM_CALLBACK_IRQ.

KVM handles the former and sets the kernel's long_mode flag accordingly.
The latter will be handled in userspace. Keep them in sync by noticing
when a hypercall is made in a mode that doesn't match qemu's idea of
the guest mode, and resyncing from the kernel. Do that same sync right
before serialization too, in case the guest has set the hypercall page
but hasn't yet made a system call.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
David Woodhouse
c789b9ef5f i386/xen: Implement SCHEDOP_poll and SCHEDOP_yield
They both do the same thing and just call sched_yield. This is enough to
stop the Linux guest panicking when running on a host kernel which doesn't
intercept SCHEDOP_poll and lets it reach userspace.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
79b7067dc6 i386/xen: implement HYPERVISOR_sched_op, SCHEDOP_shutdown
It allows the guest to shut itself down via hypercall with any of the 3 reasons:
  1) self-reboot
  2) shutdown
  3) crash

Implementing the SCHEDOP_shutdown sub-op lets us handle crashes gracefully
rather than leading to triple faults if it remains unimplemented.

In addition, the SHUTDOWN_soft_reset reason is used for kexec, to reset
Xen shared pages and other enlightenments and leave a clean slate for the
new kernel without the hypervisor helpfully writing information at
unexpected addresses.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Ditch sched_op_compat which was never available for HVM guests,
        Add SCHEDOP_soft_reset]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
bedcc13924 i386/xen: implement HYPERVISOR_xen_version
This is just meant to serve as an example of how we can implement
hypercalls. xen_version specifically, since QEMU does all kinds of
feature controllability, so handling it here seems appropriate.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Implement kvm_gva_rw() safely]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
55a3f666b4 i386/xen: handle guest hypercalls
This means handling the new exit reason for Xen, but still
crashing on purpose. As we implement each of the hypercalls
we will return the right return code.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Add CPL to hypercall tracing, disallow hypercalls from CPL > 0]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
David Woodhouse
5e691a955a i386/kvm: Set Xen vCPU ID in KVM
There are (at least) three different vCPU ID number spaces. One is the
internal KVM vCPU index, based purely on which vCPU was chronologically
created in the kernel first. If userspace threads are all spawned and
create their KVM vCPUs in essentially random order, then the KVM indices
are basically random too.

The second number space is the APIC ID space, which is consistent and
useful for referencing vCPUs. MSIs will specify the target vCPU using
the APIC ID, for example, and the KVM Xen APIs also take an APIC ID
from userspace whenever a vCPU needs to be specified (as opposed to
just using the appropriate vCPU fd).

The third number space is not normally relevant to the kernel, and is
the ACPI/MADT/Xen CPU number which corresponds to cs->cpu_index. But
Xen timer hypercalls use it, and Xen timer hypercalls *really* want
to be accelerated in the kernel rather than handled in userspace, so
the kernel needs to be told.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Joao Martins
f66b8a83c5 i386/kvm: handle Xen HVM cpuid leaves
Introduce support for emulating CPUID for Xen HVM guests. It doesn't make
sense to advertise the KVM leaves to a Xen guest, so advertise the Xen
leaves unconditionally when the xen-version machine property is set.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Obtain xen_version from KVM property, make it automatic]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
David Woodhouse
61491cf441 i386/kvm: Add xen-version KVM accelerator property and init KVM Xen support
This just initializes the basic Xen support in KVM for now. Only permitted
on TYPE_PC_MACHINE because that's where the sysbus devices for Xen heap
overlay, event channel, grant tables and other stuff will exist. There's
no point having the basic hypercall support if nothing else works.

Provide sysemu/kvm_xen.h and a kvm_xen_get_caps() which will be used
later by support devices.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
2023-03-01 08:22:49 +00:00
Richard Henderson
d507e6c565 accel/tcg: Add 'size' param to probe_access_full
Change to match the recent change to probe_access_flags.
All existing callers updated to supply 0, so no change in behaviour.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-28 10:32:31 -10:00
Peter Maydell
6276340310 - buildsys
- Various headers cleaned up (removing pointless headers)
   - Mark various files/code user/system specific
   - Make various objects target-independent
   - Remove tswapN() calls from dump.o
   - Suggest g_assert_not_reached() instead of assert(0)
 
 - qdev / qom
   - Replace various container_of() by QOM cast macros
   - Declare some QOM macros using OBJECT_DECLARE_TYPE()
   - Embed OHCI QOM child in SM501 chipset
 
 - hw (ISA & IDE)
   - add some documentation, improve function names
   - un-inline, open-code a few functions
   - have ISA API accessing IRQ/DMA prefer ISABus over ISADevice
   - Demote IDE subsystem maintenance to "Odd Fixes"
 
 - ui: Improve Ctrl+Alt hint on Darwin Cocoa
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmP9IeAACgkQ4+MsLN6t
 wN7bdQ//SxJYJuQvqTT6s+O0LmP6NbqvhxCXX7YAwK2jCTM+zTgcqqRZCcisLQol
 3ENu2UhnZmiLKHSOxatOVozbws08/u8Vl+WkW4UTMUb1yo5KPaPtq808Y95RdAJB
 7D7B5juDGnFRAHXZz38zVk9uIuEkm+Po/pD0JQa+upBtAAgOJTqGavDNSR5+T0Yl
 VjGdwK0b10skPqiF6OABYoy/4IFHVJJFIbARZh+a7hrF0llsbzUts5JiYsOxEEHQ
 t3woUItdMnS1m0+Ty4AQ8m0Yv9y4HZOIzixvsZ+vChj5ariwUhL9/7wC/s/UCYEg
 gKVA5X8R6n/ME6DScK99a+CyR/MXkz70b/rOUZxoutXhV3xdh4X1stL4WN9W/m3z
 D4i4ZrUsDUcKCGWlj49of/dKbOPwk1+e/mT0oDZD6JzG0ODjfdVxvJ/JEV2iHgS3
 WqHuSKzX/20H9j7/MgfbQ0HjBFOQ8tl781vQzhD+y+cF/IiTsHhrE6esIWho4bob
 kfSdVydUWWRnBsnyGoRZXoEMX9tn+pu0nKxEDm2Bo2+jajsa0aZZPokgjxaz4MnD
 Hx+/p1E+8IuOn05JgzQSgTJmKFdSbya203tXIsTo1kL2aJTJ6QfMvgEPP/fkn+lS
 oQyVBFZmb1JDdTM1MxOncnlWLg74rp/CWEc+u5pSdbxMO/M/uac=
 =AV/+
 -----END PGP SIGNATURE-----

Merge tag 'buildsys-qom-qdev-ui-20230227' of https://github.com/philmd/qemu into staging

- buildsys
  - Various headers cleaned up (removing pointless headers)
  - Mark various files/code user/system specific
  - Make various objects target-independent
  - Remove tswapN() calls from dump.o
  - Suggest g_assert_not_reached() instead of assert(0)

- qdev / qom
  - Replace various container_of() by QOM cast macros
  - Declare some QOM macros using OBJECT_DECLARE_TYPE()
  - Embed OHCI QOM child in SM501 chipset

- hw (ISA & IDE)
  - add some documentation, improve function names
  - un-inline, open-code a few functions
  - have ISA API accessing IRQ/DMA prefer ISABus over ISADevice
  - Demote IDE subsystem maintenance to "Odd Fixes"

- ui: Improve Ctrl+Alt hint on Darwin Cocoa

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmP9IeAACgkQ4+MsLN6t
# wN7bdQ//SxJYJuQvqTT6s+O0LmP6NbqvhxCXX7YAwK2jCTM+zTgcqqRZCcisLQol
# 3ENu2UhnZmiLKHSOxatOVozbws08/u8Vl+WkW4UTMUb1yo5KPaPtq808Y95RdAJB
# 7D7B5juDGnFRAHXZz38zVk9uIuEkm+Po/pD0JQa+upBtAAgOJTqGavDNSR5+T0Yl
# VjGdwK0b10skPqiF6OABYoy/4IFHVJJFIbARZh+a7hrF0llsbzUts5JiYsOxEEHQ
# t3woUItdMnS1m0+Ty4AQ8m0Yv9y4HZOIzixvsZ+vChj5ariwUhL9/7wC/s/UCYEg
# gKVA5X8R6n/ME6DScK99a+CyR/MXkz70b/rOUZxoutXhV3xdh4X1stL4WN9W/m3z
# D4i4ZrUsDUcKCGWlj49of/dKbOPwk1+e/mT0oDZD6JzG0ODjfdVxvJ/JEV2iHgS3
# WqHuSKzX/20H9j7/MgfbQ0HjBFOQ8tl781vQzhD+y+cF/IiTsHhrE6esIWho4bob
# kfSdVydUWWRnBsnyGoRZXoEMX9tn+pu0nKxEDm2Bo2+jajsa0aZZPokgjxaz4MnD
# Hx+/p1E+8IuOn05JgzQSgTJmKFdSbya203tXIsTo1kL2aJTJ6QfMvgEPP/fkn+lS
# oQyVBFZmb1JDdTM1MxOncnlWLg74rp/CWEc+u5pSdbxMO/M/uac=
# =AV/+
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 27 Feb 2023 21:34:24 GMT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD  6BB2 E3E3 2C2C DEAD C0DE

* tag 'buildsys-qom-qdev-ui-20230227' of https://github.com/philmd/qemu: (125 commits)
  ui/cocoa: user friendly characters for release mouse
  dump: Add create_win_dump() stub for non-x86 targets
  dump: Simplify compiling win_dump.o by introducing win_dump_available()
  dump: Clean included headers
  dump: Replace TARGET_PAGE_SIZE -> qemu_target_page_size()
  dump: Replace tswapN() -> cpu_to_dumpN()
  hw/ide/pci: Add PCIIDEState::isa_irq[]
  hw/ide/via: Replace magic 2 value by ARRAY_SIZE / MAX_IDE_DEVS
  hw/ide/piix: Refactor pci_piix_init_ports as pci_piix_init_bus per bus
  hw/ide/piix: Pass Error* to pci_piix_init_ports() for better error msg
  hw/ide/piix: Remove unused includes
  hw/ide/pci: Unexport bmdma_active_if()
  hw/ide/ioport: Remove unnecessary includes
  hw/ide: Declare ide_get_[geometry/bios_chs_trans] in 'hw/ide/internal.h'
  hw/ide: Rename idebus_active_if() -> ide_bus_active_if()
  hw/ide: Rename ide_init2() -> ide_bus_init_output_irq()
  hw/ide: Rename ide_exec_cmd() -> ide_bus_exec_cmd()
  hw/ide: Rename ide_register_restart_cb -> ide_bus_register_restart_cb
  hw/ide: Rename ide_create_drive() -> ide_bus_create_drive()
  hw/ide: Rename ide_set_irq() -> ide_bus_set_irq()
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-28 15:09:18 +00:00
Bernhard Beschow
7f54640b4b hw: Move ioapic*.h to intc/
The ioapic sources reside in hw/intc already. Move the headers there
as well.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230213173033.98762-11-shentey@gmail.com>
[PMD: Keep ioapic_internal.h in hw/intc/, not under include/]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-02-27 22:29:01 +01:00
Philippe Mathieu-Daudé
b5c6a3c1df target/i386: Remove x86_cpu_dump_local_apic_state() dead stub
x86_cpu_dump_local_apic_state() is called from monitor.c which
is only compiled for system emulation since commit bf95728400
("monitor: remove target-specific code from monitor.c").

Interestingly, this stub was added a few weeks later in commit
1f871d49e3 ("hmp: added local apic dump state") and was not
necessary even at that time.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221216220158.6317-5-philmd@linaro.org>
2023-02-27 22:29:01 +01:00
Philippe Mathieu-Daudé
dd20414f9a target/i386/cpu: Remove dead helper_lock() declaration
Missed in commit 37b995f6e7 ("target-i386: remove helper_lock()").

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221216220158.6317-3-philmd@linaro.org>
2023-02-27 22:29:01 +01:00
Philippe Mathieu-Daudé
aa8a7d2018 target/i386: Remove NEED_CPU_H guard from target-specific headers
NEED_CPU_H is always defined for these target-specific headers.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221216220158.6317-2-philmd@linaro.org>
2023-02-27 22:29:01 +01:00
Philippe Mathieu-Daudé
6d2d454a88 target/cpu: Restrict cpu_get_phys_page_debug() handlers to sysemu
The 'hwaddr' type is only available / meaningful on system emulation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221216215519.5522-5-philmd@linaro.org>
2023-02-27 22:29:01 +01:00
Wang, Lei
7eb061b06e i386: Add new CPU model SapphireRapids
The new CPU model mostly inherits features from Icelake-Server, while
adding new features:
 - AMX (Advanced Matrix Extensions)
 - Bus Lock Debug Exception
and new instructions:
 - AVX VNNI (Vector Neural Network Instructions):
    - VPDPBUSD: Multiply and Add Unsigned and Signed Bytes
    - VPDPBUSDS: Multiply and Add Unsigned and Signed Bytes with Saturation
    - VPDPWSSD: Multiply and Add Signed Word Integers
    - VPDPWSSDS: Multiply and Add Signed Word Integers with Saturation
 - FP16: Replicates existing AVX512 computational SP (FP32) instructions
   using FP16 instead of FP32 for ~2X performance gain
 - SERIALIZE: Provide software with a simple way to force the processor to
   complete all modifications, faster, allowed in all privilege levels and
   not causing an unconditional VM exit
 - TSX Suspend Load Address Tracking: Allows programmers to choose which
   memory accesses do not need to be tracked in the TSX read set
 - AVX512_BF16: Vector Neural Network Instructions supporting BFLOAT16
   inputs and conversion instructions from IEEE single precision
 - fast zero-length MOVSB (KVM doesn't support yet)
 - fast short STOSB (KVM doesn't support yet)
 - fast short CMPSB, SCASB (KVM doesn't support yet)

Features that may be added in future versions:
 - CET (virtualization support hasn't been merged)

Signed-off-by: Wang, Lei <lei4.wang@intel.com>
Reviewed-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <20220812055751.14553-1-lei4.wang@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-27 18:53:00 +01:00
Paolo Bonzini
3023c9b4d1 target/i386: KVM: allow fast string operations if host supports them
These are just flags that document the performance characteristics of
an instruction; they need no hypervisor support.  So include them even
if KVM does not show them.  In particular, FZRM/FSRS/FSRC have only
been added very recently, but they are available on Sapphire Rapids
processors.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-27 18:44:57 +01:00
Paolo Bonzini
58794f644e target/i386: add FZRM, FSRS, FSRC
These are three more markers for string operation optimizations.
They can all be added to TCG, whose string operations are more or
less as fast as they can be for short lengths.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-27 18:44:03 +01:00
Paolo Bonzini
c0728d4e3d target/i386: add FSRM to TCG
Fast short REP MOVS can be added to TCG, since a trivial translation
of string operation is a good option for short lengths.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-27 18:43:53 +01:00
Richard Henderson
9ad2ba6e8e target/i386: Fix BZHI instruction
We did not correctly handle N >= operand size.
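
As a reference, the intended BZHI semantics can be sketched in plain C
(an illustrative helper, not the actual TCG translation):

    #include <stdint.h>
    #include <stdbool.h>

    /* BZHI: zero the bits of src1 from bit n upward, n taken from src2[7:0]. */
    static uint64_t bzhi64(uint64_t src1, uint64_t src2, bool *cf)
    {
        unsigned n = src2 & 0xff;

        if (n >= 64) {              /* N >= operand size: keep src1, set CF */
            *cf = true;
            return src1;
        }
        *cf = false;
        return src1 & ((UINT64_C(1) << n) - 1);
    }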

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1374
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230114233206.3118472-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-27 09:18:55 +01:00
Peter Maydell
c259930c2e Error reporting patches patches for 2023-02-23
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmP3ZogSHGFybWJydUBy
 ZWRoYXQuY29tAAoJEDhwtADrkYZT+PsP/ibioHJkJiR8yMt2/2iSwpkMrphZDmRQ
 5sAgxCARdcp0m7maH4McCFkgtERcROip+j98FV29qI4y2P/mLkt1jyMYC+TH9r4O
 X3G997526gzZBLIJJsnYlVlJ1Gbgn+uCy4AzRLuhaKAHsYoxkP0jygoSs/eIZ9tK
 Wg2tkQ/wY4bXihrlzdOpWqU3Y0ADo2PQ29p7HWheRMDQz6JQxq82hFFs1jgGQ1aq
 4HmcpIMX0+/LshFbDU91dL1pxW17vWT9J3xtzAsWlfBBgAh257LKvJqVD0XojL04
 FxJZ05IqTXZ04gvwgji0dcvNjdmP/dXVoGLfxAYwCFtKxiig700bdNb0+6MjCT6u
 P2tSPyQQzNQ5LYI7AgER4kMyXK22RkBXx+Q7y7QK1YXszWWSmGFZWGLA2FSg4lO6
 5jsCgtEGixsMym/ox3XeoywSh4BgWkNXC+gKMSg/hQXgfriQmndHUOlK0ZU95I43
 7gnPol+pU1HIEy/GDU8oMyieG513Ti1KVPZyv/FbuW75AYUDlHAXH/5OFlsuaLIR
 1QF449xCLR5vIOOLXHbKJ9jbkcAaidhq5pOhLr7oV3yKh4H53iNB7gy8+vJ6XtBf
 tXXcYPVD8LpZxDegKNpIaeT0Nr4pyy6bYfrF+YeisVotD6PDtPALfJ9eSCWjaQsl
 DG2opOfv5xuV
 =VRxu
 -----END PGP SIGNATURE-----

Merge tag 'pull-error-2023-02-23' of https://repo.or.cz/qemu/armbru into staging

Error reporting patches patches for 2023-02-23

# -----BEGIN PGP SIGNATURE-----
#
# iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmP3ZogSHGFybWJydUBy
# ZWRoYXQuY29tAAoJEDhwtADrkYZT+PsP/ibioHJkJiR8yMt2/2iSwpkMrphZDmRQ
# 5sAgxCARdcp0m7maH4McCFkgtERcROip+j98FV29qI4y2P/mLkt1jyMYC+TH9r4O
# X3G997526gzZBLIJJsnYlVlJ1Gbgn+uCy4AzRLuhaKAHsYoxkP0jygoSs/eIZ9tK
# Wg2tkQ/wY4bXihrlzdOpWqU3Y0ADo2PQ29p7HWheRMDQz6JQxq82hFFs1jgGQ1aq
# 4HmcpIMX0+/LshFbDU91dL1pxW17vWT9J3xtzAsWlfBBgAh257LKvJqVD0XojL04
# FxJZ05IqTXZ04gvwgji0dcvNjdmP/dXVoGLfxAYwCFtKxiig700bdNb0+6MjCT6u
# P2tSPyQQzNQ5LYI7AgER4kMyXK22RkBXx+Q7y7QK1YXszWWSmGFZWGLA2FSg4lO6
# 5jsCgtEGixsMym/ox3XeoywSh4BgWkNXC+gKMSg/hQXgfriQmndHUOlK0ZU95I43
# 7gnPol+pU1HIEy/GDU8oMyieG513Ti1KVPZyv/FbuW75AYUDlHAXH/5OFlsuaLIR
# 1QF449xCLR5vIOOLXHbKJ9jbkcAaidhq5pOhLr7oV3yKh4H53iNB7gy8+vJ6XtBf
# tXXcYPVD8LpZxDegKNpIaeT0Nr4pyy6bYfrF+YeisVotD6PDtPALfJ9eSCWjaQsl
# DG2opOfv5xuV
# =VRxu
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 23 Feb 2023 13:13:44 GMT
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* tag 'pull-error-2023-02-23' of https://repo.or.cz/qemu/armbru:
  rocker: Tweak stubbed out monitor commands' error messages
  migration/colo: Improve an x-colo-lost-heartbeat error message
  hw/core: Improve the query-hotpluggable-cpus error message
  replay: Simplify setting replay blockers
  qga: Drop dangling reference to QERR_QGA_LOGGING_DISABLED
  hw/acpi: Move QMP command to hw/core/
  hw/acpi: Dumb down acpi_table_add() stub
  hw/smbios: Dumb down smbios_entry_add() stub
  hw/core: Improve error message when machine doesn't provide NMIs
  dump: Assert cpu_get_note_size() can't fail
  dump: Improve error message when target doesn't support memory dump
  error: Drop superfluous #include "qapi/qmp/qerror.h"

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-24 15:08:51 +00:00
Markus Armbruster
6f1e91f716 error: Drop superfluous #include "qapi/qmp/qerror.h"
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20230207075115.1525-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Konstantin Kostiuk <kkostiuk@redhat.com>
2023-02-23 13:56:14 +01:00
TaiseiIto
49be78ca02 target/i386/gdbstub: Fix a bug about order of FPU stack in 'g' packets.
Before this commit, when GDB attached to an OS running on QEMU, the order of
FPU stack registers printed by the GDB command 'info float' was wrong. There was a
bug causing the problem in 'g' packets sent by QEMU to GDB. The packets have
values of registers of machine emulated by QEMU containing FPU stack
registers. There are 2 ways to specify a x87 FPU stack register. The first
is specifying by absolute indexed register names (R0, ..., R7). The second
is specifying by stack top relative indexed register names (ST0, ..., ST7).
Values of the FPU stack registers should be located in 'g' packet and be
ordered by the relative index. But QEMU had located these registers ordered
by the absolute index. After this commit, when QEMU reads registers to make
a 'g' packet, QEMU specifies FPU stack registers by the relative index.
Then, the registers are ordered correctly in the packet. As a result, GDB,
the packet receiver, can print FPU stack registers in the correct order.
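
For reference, the relative-to-absolute mapping is just modular arithmetic on
the 3-bit top-of-stack pointer; a minimal sketch (illustrative, not the actual
gdbstub code):

    /* ST(i) names the physical register R[(TOP + i) mod 8]. */
    static unsigned st_to_phys(unsigned top, unsigned i)
    {
        return (top + i) & 7;
    }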

Signed-off-by: TaiseiIto <taisei1212@outlook.jp>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <TY0PR0101MB4285923FBE9AD97CE832D95BA4E59@TY0PR0101MB4285.apcprd01.prod.exchangelabs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-21 13:46:49 +01:00
Richard Henderson
6fbef9426b target/i386: Fix 32-bit AD[CO]X insns in 64-bit mode
Failure to truncate the inputs results in garbage for the carry-out.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1373
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230115012103.3131796-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-16 16:57:34 +01:00
Paolo Bonzini
60c7dd22e1 target/i386: fix ADOX followed by ADCX
When ADCX is followed by ADOX or vice versa, the second instruction's
carry comes from EFLAGS and the condition codes use the CC_OP_ADCOX
operation.  Retrieving the carry from EFLAGS is handled by this bit
of gen_ADCOX:

        tcg_gen_extract_tl(carry_in, cpu_cc_src,
            ctz32(cc_op == CC_OP_ADCX ? CC_C : CC_O), 1);

Unfortunately, in this case cc_op has been overwritten by the previous
"if" statement to CC_OP_ADCOX.  This works by chance when the first
instruction is ADCX; however, if the first instruction is ADOX,
ADCX will incorrectly take its carry from OF instead of CF.

Fix by moving the computation of the new cc_op at the end of the function.
The included exhaustive test case fails without this patch and passes
afterwards.
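
For reference, the intended semantics of the pair can be modelled in plain C
(a sketch, not the TCG code):

    #include <stdint.h>
    #include <stdbool.h>

    /* ADCX adds with carry-in and carry-out through CF and leaves every
     * other flag alone; ADOX is identical but uses OF instead of CF. */
    static uint64_t adcx64(uint64_t dst, uint64_t src, bool *cf)
    {
        uint64_t sum = dst + src + (*cf ? 1 : 0);

        *cf = *cf ? (sum <= dst) : (sum < dst);     /* unsigned carry out */
        return sum;
    }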

Because ADCX/ADOX need not be invoked through the VEX prefix, this
regression bisects to commit 16fc5726a6 ("target/i386: reimplement
0x0f 0x38, add AVX", 2022-10-18).  However, the mistake happened a
little earlier, when BMI instructions were rewritten using the new
decoder framework.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1471
Reported-by: Paul Jolly <https://gitlab.com/myitcv>
Fixes: 1d0b926150 ("target/i386: move scalar 0F 38 and 0F 3A instruction to new decoder", 2022-10-18)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-11 09:07:25 +01:00
Richard Henderson
99282098dc target/i386: Fix C flag for BLSI, BLSMSK, BLSR
We forgot to set cc_src, which is used for computing C.
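
For reference, the carry-flag semantics of this group can be sketched as
follows (illustrative only, not QEMU's implementation):

    #include <stdint.h>
    #include <stdbool.h>

    static uint64_t blsi64(uint64_t src, bool *cf)   { *cf = src != 0; return src & -src; }
    static uint64_t blsmsk64(uint64_t src, bool *cf) { *cf = src == 0; return src ^ (src - 1); }
    static uint64_t blsr64(uint64_t src, bool *cf)   { *cf = src == 0; return src & (src - 1); }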

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1370
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230114180601.2993644-1-richard.henderson@linaro.org>
Cc: qemu-stable@nongnu.org
Fixes: 1d0b926150 ("target/i386: move scalar 0F 38 and 0F 3A instruction to new decoder", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-11 09:07:25 +01:00
Richard Henderson
b14c009897 target/i386: Fix BEXTR instruction
There were two problems here: not limiting the input to operand bits,
and not correctly handling large extraction length.
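
The expected behaviour can be sketched as a reference model (not the TCG
code):

    #include <stdint.h>

    /* BEXTR: extract 'len' bits starting at bit 'start'; both fields come
     * from the low 16 bits of the control operand and saturate at the
     * operand size. */
    static uint64_t bextr64(uint64_t src, uint64_t ctl)
    {
        unsigned start = ctl & 0xff;
        unsigned len = (ctl >> 8) & 0xff;

        if (start >= 64) {
            return 0;
        }
        src >>= start;
        if (len < 64) {
            src &= (UINT64_C(1) << len) - 1;
        }
        return src;
    }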

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1372
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230114230542.3116013-3-richard.henderson@linaro.org>
Cc: qemu-stable@nongnu.org
Fixes: 1d0b926150 ("target/i386: move scalar 0F 38 and 0F 3A instruction to new decoder", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-11 09:07:25 +01:00
Richard Henderson
5f0dd8cd33 target/i386: Inline cmpxchg16b
Use tcg_gen_atomic_cmpxchg_i128 for the atomic case,
and tcg_gen_qemu_ld/st_i128 otherwise.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 06:19:43 -10:00
Richard Henderson
326ad06cf5 target/i386: Inline cmpxchg8b
Use tcg_gen_atomic_cmpxchg_i64 for the atomic case,
and tcg_gen_nonatomic_cmpxchg_i64 otherwise.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 06:19:43 -10:00
Richard Henderson
6218c177af target/i386: Split out gen_cmpxchg8b, gen_cmpxchg16b
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 06:19:43 -10:00
Thomas Huth
90c167a1da docs/about/deprecated: Mark HAXM in QEMU as deprecated
The HAXM project has been retired (see https://github.com/intel/haxm#status),
so we should mark the code in QEMU as deprecated (and finally remove it
unless somebody else picks the project up again - which is quite unlikely
since there are now whpx and hvf on these operating systems, too).

Message-Id: <20230126121034.1035138-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2023-01-26 13:25:07 +01:00
Peter Maydell
fcb7e040f5 Header cleanup patches for 2023-01-20
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmPKN6YSHGFybWJydUBy
 ZWRoYXQuY29tAAoJEDhwtADrkYZTPeoQAIKl/BF6PFRNq0/k3vPqMe6nltjgkpa/
 p7E5qRlo31RCeUB+f0iW26mySnNTgYkE28yy57HxUML/9Lp1bbxyDgRNiJ406a4L
 kFVF04kOIFez1+mfvWN92DZqcl/EAAqNL6XqSFyO38kYwcsFsi+BZ7DLZbL9Ea8v
 wVywB96mN6KyrLWCJ2D0OqIVuPHSHol+5zt9e6+ShBgN0FfElLbv0F4KH3VJ1olA
 psKl6w6V9+c2zV1kT/H+S763m6mQdwtVo/UuOJoElI+Qib/UBxDOrhdYf4Zg7hKf
 ByUuhJUASm8y9yD/42mFs90B6eUNzLSBC8v1PgRqSqDHtllveP4RysklBlyIMlOs
 DKtqEuRuIJ/qDXliIFHY6tBnUkeITSd7BCxkQYfaGyaSOcviDSlE3AyaaBC0sY4F
 P/lTTiRg5ksvhDYtJnW3mSfmT2PY7aBtyE3D1Z84v9hek6D0reMQTE97yL/j4m7P
 wJP8aM3Z8GILCVxFIh02wmqWZhZUCGsIDS/vxVm+u060n66qtDIQFBoazsFJrCME
 eWI+qDNDr6xhLegeYajGDM9pdpQc3x0siiuHso4wMSI9NZxwP+tkCVhTpqmrRcs4
 GSH/4IlUXqEZdUQDL38DfA22C1TV8BzyMhGLTUERWWYki1sr99yv0pdFyk5r3nLB
 SURwr58rB2zo
 =dOfq
 -----END PGP SIGNATURE-----

Merge tag 'pull-include-2023-01-20' of https://repo.or.cz/qemu/armbru into staging

Header cleanup patches for 2023-01-20

# -----BEGIN PGP SIGNATURE-----
#
# iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmPKN6YSHGFybWJydUBy
# ZWRoYXQuY29tAAoJEDhwtADrkYZTPeoQAIKl/BF6PFRNq0/k3vPqMe6nltjgkpa/
# p7E5qRlo31RCeUB+f0iW26mySnNTgYkE28yy57HxUML/9Lp1bbxyDgRNiJ406a4L
# kFVF04kOIFez1+mfvWN92DZqcl/EAAqNL6XqSFyO38kYwcsFsi+BZ7DLZbL9Ea8v
# wVywB96mN6KyrLWCJ2D0OqIVuPHSHol+5zt9e6+ShBgN0FfElLbv0F4KH3VJ1olA
# psKl6w6V9+c2zV1kT/H+S763m6mQdwtVo/UuOJoElI+Qib/UBxDOrhdYf4Zg7hKf
# ByUuhJUASm8y9yD/42mFs90B6eUNzLSBC8v1PgRqSqDHtllveP4RysklBlyIMlOs
# DKtqEuRuIJ/qDXliIFHY6tBnUkeITSd7BCxkQYfaGyaSOcviDSlE3AyaaBC0sY4F
# P/lTTiRg5ksvhDYtJnW3mSfmT2PY7aBtyE3D1Z84v9hek6D0reMQTE97yL/j4m7P
# wJP8aM3Z8GILCVxFIh02wmqWZhZUCGsIDS/vxVm+u060n66qtDIQFBoazsFJrCME
# eWI+qDNDr6xhLegeYajGDM9pdpQc3x0siiuHso4wMSI9NZxwP+tkCVhTpqmrRcs4
# GSH/4IlUXqEZdUQDL38DfA22C1TV8BzyMhGLTUERWWYki1sr99yv0pdFyk5r3nLB
# SURwr58rB2zo
# =dOfq
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 20 Jan 2023 06:41:42 GMT
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* tag 'pull-include-2023-01-20' of https://repo.or.cz/qemu/armbru:
  include/hw/ppc include/hw/pci-host: Drop extra typedefs
  include/hw/ppc: Don't include hw/pci-host/pnv_phb.h from pnv.h
  include/hw/ppc: Supply a few missing includes
  include/hw/ppc: Split pnv_chip.h off pnv.h
  include/hw/block: Include hw/block/block.h where needed
  hw/sparc64/niagara: Use blk_name() instead of open-coding it
  include/block: Untangle inclusion loops
  coroutine: Use Coroutine typedef name instead of structure tag
  coroutine: Split qemu/coroutine-core.h off qemu/coroutine.h
  coroutine: Clean up superfluous inclusion of qemu/lockable.h
  coroutine: Move coroutine_fn to qemu/osdep.h, trim includes
  coroutine: Clean up superfluous inclusion of qemu/coroutine.h

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-20 13:17:55 +00:00
Markus Armbruster
e2c1c34f13 include/block: Untangle inclusion loops
We have two inclusion loops:

       block/block.h
    -> block/block-global-state.h
    -> block/block-common.h
    -> block/blockjob.h
    -> block/block.h

       block/block.h
    -> block/block-io.h
    -> block/block-common.h
    -> block/blockjob.h
    -> block/block.h

I believe these go back to Emanuele's reorganization of the block API,
merged a few months ago in commit d7e2fe4aac.

Fortunately, breaking them is merely a matter of deleting unnecessary
includes from headers, and adding them back in places where they are
now missing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221221133551.3967339-2-armbru@redhat.com>
2023-01-20 07:24:28 +01:00
Philippe Mathieu-Daudé
883f2c591f bulk: Rename TARGET_FMT_plx -> HWADDR_FMT_plx
The 'hwaddr' type is defined in "exec/hwaddr.h" as:

    hwaddr is the type of a physical address
   (its size can be different from 'target_ulong').

All definitions use the 'HWADDR_' prefix, except TARGET_FMT_plx:

 $ fgrep define include/exec/hwaddr.h
 #define HWADDR_H
 #define HWADDR_BITS 64
 #define HWADDR_MAX UINT64_MAX
 #define TARGET_FMT_plx "%016" PRIx64
         ^^^^^^
 #define HWADDR_PRId PRId64
 #define HWADDR_PRIi PRIi64
 #define HWADDR_PRIo PRIo64
 #define HWADDR_PRIu PRIu64
 #define HWADDR_PRIx PRIx64
 #define HWADDR_PRIX PRIX64

Since hwaddr's size can be *different* from target_ulong, it is
very confusing to read one of its formats using the 'TARGET_FMT_'
prefix, normally used for the target_long / target_ulong types:

$ fgrep TARGET_FMT_ include/exec/cpu-defs.h
 #define TARGET_FMT_lx "%08x"
 #define TARGET_FMT_ld "%d"
 #define TARGET_FMT_lu "%u"
 #define TARGET_FMT_lx "%016" PRIx64
 #define TARGET_FMT_ld "%" PRId64
 #define TARGET_FMT_lu "%" PRIu64

Apparently this format was missed during commit a8170e5e97
("Rename target_phys_addr_t to hwaddr"), so complete it by
doing a bulk-rename with:

 $ sed -i -e s/TARGET_FMT_plx/HWADDR_FMT_plx/g $(git grep -l TARGET_FMT_plx)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230110212947.34557-1-philmd@linaro.org>
[thuth: Fix some warnings from checkpatch.pl along the way]
Signed-off-by: Thomas Huth <thuth@redhat.com>
2023-01-18 11:14:34 +01:00
Paolo Bonzini
3d304620ec target/i386: fix operand size of unary SSE operations
VRCPSS, VRSQRTSS and VCVTSx2Sx have a 32-bit or 64-bit memory operand,
which is represented in the decoding tables by X86_VEX_REPScalar.  Add it
to the tables, and make validate_vex() handle the case of an instruction
that is in exception type 4 without the REP prefix and exception type 5
with it; this is the case of VRCP and VRSQRT.

Reported-by: yongwoo <https://gitlab.com/yongwoo36>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1377
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-01-11 10:44:35 +01:00
Eric Auger
c0a6665c3c target/i386: Remove compilation errors when -Werror=maybe-uninitialized
To avoid compilation errors when -Werror=maybe-uninitialized is used,
add a default case with g_assert_not_reached().
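
A minimal illustration of the pattern (made-up values, not the actual hunk):

    #include <glib.h>

    static int pick(int sel)
    {
        int val;

        switch (sel & 3) {
        case 0: val = 10; break;
        case 1: val = 11; break;
        case 2: val = 12; break;
        case 3: val = 13; break;
        default:
            /* unreachable, but lets GCC prove 'val' is always assigned */
            g_assert_not_reached();
        }
        return val;
    }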

Otherwise with GCC 11.3.1 "cc (GCC) 11.3.1 20220421 (Red Hat 11.3.1-2)"
we get:

../target/i386/ops_sse.h: In function ‘helper_vpermdq_ymm’:
../target/i386/ops_sse.h:2495:13: error: ‘r3’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
     2495 |     d->Q(3) = r3;
          |     ~~~~~~~~^~~~
../target/i386/ops_sse.h:2494:13: error: ‘r2’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
     2494 |     d->Q(2) = r2;
          |     ~~~~~~~~^~~~
../target/i386/ops_sse.h:2493:13: error: ‘r1’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
     2493 |     d->Q(1) = r1;
          |     ~~~~~~~~^~~~
../target/i386/ops_sse.h:2492:13: error: ‘r0’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
     2492 |     d->Q(0) = r0;
          |     ~~~~~~~~^~~~

Signed-off-by: Eric Auger <eric.auger@redhat.com>

Message-Id: <20221222140158.1260748-1-eric.auger@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-01-11 09:59:39 +01:00
Joe Richey
b585edca34 i386: Emit correct error code for 64-bit IDT entry
When in 64-bit mode, IDT entries are 16 bytes, so `intno * 16` is used
for base/limit/offset calculations. However, even in 64-bit mode, the
exception error code still uses bits [3,16) for the invalid interrupt
index.

This means the error code should still be `intno * 8 + 2` even in 64-bit
mode.
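
A small sketch of the two computations (illustrative, not the exact QEMU
code):

    #include <stdint.h>
    #include <stdbool.h>

    /* The descriptor offset scales with the IDT entry size... */
    static uint64_t idt_entry_offset(int intno, bool long_mode)
    {
        return (uint64_t)intno * (long_mode ? 16 : 8);
    }

    /* ...but the error code format is fixed: index in bits 15..3, bit 1 set
     * to indicate an IDT-sourced fault. */
    static uint32_t idt_error_code(int intno)
    {
        return (uint32_t)intno * 8 + 2;
    }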

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1382
Signed-off-by: Joe Richey <joerichey@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-01-11 09:59:38 +01:00
Kai Huang
d45f24fe75 target/i386: Add SGX aex-notify and EDECCSSA support
The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
allows one enclave to receive a notification in the ERESUME after the
enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
(ENCLU[EDECCSSA]) to facilitate the AEX notification handling.

Whether the hardware supports to create enclave with AEX-notify support
is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].

Add support to allow to expose the new SGX AEX-notify feature and the
new EDECCSSA user leaf function to KVM guest.

Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-01-06 00:51:02 +01:00
Paolo Bonzini
eaaaf8abdc KVM: remove support for kernel-irqchip=off
-machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
is the replacement that works, so remove the deprecated support for the former.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-01-06 00:51:02 +01:00
Peter Maydell
e86787d33b target/i386: Convert to 3-phase reset
Convert the i386 CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-7-peter.maydell@linaro.org
2022-12-16 15:58:15 +00:00
Peter Maydell
48804eebd4 Miscellaneous patches for 2022-12-14
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmOZ6lYSHGFybWJydUBy
 ZWRoYXQuY29tAAoJEDhwtADrkYZT6VEQAKynjWh3AIZ4/qOgrVqsP0oRspevLmfH
 BbuGoldjYpEE7RbwuCaZalZ7iy7TcSySxnPfUDVsFHd7NWffJVjwKHifGC0D/Ez0
 +Ggyb1CBebN+mS7t+BNFUHdMM+wxFIlHwg4f4aTFbn2o0HKgj2a8tcNzNRonZbfa
 xURnvbD4G4u0VZEc3Jak+x193xbOJFsuuWq0BZnDuNk+XqjyW2RwfpXLPJVk+82a
 4uy/YgYuqXUqBeULwcJj+shBL4SXR9GyajTFMS64przSUle0ADUmXkPtaS2agV7e
 Pym/UQuAcxvNyw34fJsiMZxx6rZI9YU30jQUMRLoYcPRR/Q/aiPeiiHtiD6Kaid7
 IfOeH/EArXaQRFpD89xj4YcaTnRLQOEj0NXgXvAbQf6eD8JYyao/S/0lCsPZEoA2
 nibLqEQ25ncDNXoSomuwtfjVff3w68lODFbhwqfA0gf3cPtCgVZ6xQ8P/McNY6K6
 wqFHXMWTDHk1LOCTucjYz1z2TGzTnSG4iWi5Yt6FSxAc958AO+v5ALn/1pcYun+E
 azM/MF0AInKj2aJCT530zT0tpCs/Jo07YKC8k6ubi77S0ZdmGS1XLeXkRXfk1+yI
 OhuUgiVlSTHxD69DagT2vbnx1mDMM9X+OBIMvEi5nwvD9A/ghaCgkDeGFvbA1ud0
 t0mxPBZJ+tiZ
 =JJjG
 -----END PGP SIGNATURE-----

Merge tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru into staging

Miscellaneous patches for 2022-12-14

# gpg: Signature made Wed 14 Dec 2022 15:23:02 GMT
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru:
  ppc4xx_sdram: Simplify sdram_ddr_size() to return
  block/vmdk: Simplify vmdk_co_create() to return directly
  cleanup: Tweak and re-run return_directly.cocci
  io: Tidy up fat-fingered parameter name
  qapi: Use returned bool to check for failure (again)
  sockets: Use ERRP_GUARD() where obviously appropriate
  qemu-config: Use ERRP_GUARD() where obviously appropriate
  qemu-config: Make config_parse_qdict() return bool
  monitor: Use ERRP_GUARD() in monitor_init()
  monitor: Simplify monitor_fd_param()'s error handling
  error: Move ERRP_GUARD() to the beginning of the function
  error: Drop a few superfluous ERRP_GUARD()
  error: Drop some obviously superfluous error_propagate()
  Drop more useless casts from void * to pointer

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-12-15 10:13:46 +00:00
Markus Armbruster
fe8ac1fa49 qapi machine: Elide redundant has_FOO in generated C
The has_FOO for pointer-valued FOO are redundant, except for arrays.
They are also a nuisance to work with.  Recent commit "qapi: Start to
elide redundant has_FOO in generated C" provided the means to elide
them step by step.  This is the step for qapi/machine*.json.

Said commit explains the transformation in more detail.  The invariant
violations mentioned there do not occur here.

Cc: Eduardo Habkost <eduardo@habkost.net>
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221104160712.3005652-16-armbru@redhat.com>
2022-12-14 20:04:47 +01:00
Markus Armbruster
d1c81c3496 qapi: Use returned bool to check for failure (again)
Commit 012d4c96e2 changed the visitor functions taking Error ** to
return bool instead of void, and the commits following it used the new
return value to simplify error checking.  Since then a few more uses
in need of the same treatment crept in.  Do that.  All pretty
mechanical except for

* balloon_stats_get_all()

  This is basically the same transformation commit 012d4c96e2 applied
  to the virtual walk example in include/qapi/visitor.h.

* set_max_queue_size()

  Additionally replace "goto end of function" by return.
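
For reference, the pattern being applied looks roughly like this (a sketch
built around a visit_type_int() call; the surrounding function is
illustrative):

    #include "qapi/visitor.h"

    static void example_get(Visitor *v, int64_t *value, Error **errp)
    {
        if (!visit_type_int(v, "value", value, errp)) {
            return;     /* errp has already been set by the visitor */
        }
        /* ... use *value ... */
    }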

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221121085054.683122-10-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2022-12-14 16:19:35 +01:00
Markus Armbruster
3d558330ad Drop more useless casts from void * to pointer
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20221123133811.1398562-1-armbru@redhat.com>
2022-12-14 16:19:35 +01:00
Richard Henderson
8218c048be target/i386: Always completely initialize TranslateFault
In get_physical_address, the canonical address check failed to
set TranslateFault.stage2, which resulted in an uninitialized
read from the struct when reporting the fault in x86_cpu_tlb_fill.

Adjust all error paths to use structure assignment so that the
entire struct is always initialized.
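
The idiom, shown here with made-up field names rather than the real
TranslateFault layout, is:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct Fault {
        int exception_index;
        int error_code;
        uint64_t cr2;
        bool stage2;
    } Fault;

    static void report_fault(Fault *f, uint64_t addr, int err)
    {
        /* Structure assignment: every member not named here, e.g. stage2,
         * is implicitly zero-initialized, unlike per-field assignment. */
        *f = (Fault) {
            .exception_index = 14,
            .error_code = err,
            .cr2 = addr,
        };
    }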

Reported-by: Daniel Hoffman <dhoff749@gmail.com>
Fixes: 9bbcf37219 ("target/i386: Reorg GET_HPHYS")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221201074522.178498-1-richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1324
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-12-01 09:53:24 +01:00
Paolo Bonzini
38e65936a8 target/i386: allow MMX instructions with CR4.OSFXSR=0
MMX state is saved/restored by FSAVE/FRSTOR so the instructions are
not illegal opcodes even if CR4.OSFXSR=0.  Make sure that validate_vex
takes into account the prefix and only checks HF_OSFXSR_MASK in the
presence of an SSE instruction.

Fixes: 20581aadec ("target/i386: validate VEX prefixes via the instructions' exception classes", 2022-10-18)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1350
Reported-by: Helge Konetzka (@hejko on gitlab.com)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-12-01 09:05:05 +01:00
Paolo Bonzini
35d95e4126 target/i386: hardcode R_EAX as destination register for LAHF/SAHF
When translating code that is using LAHF and SAHF in combination with the
REX prefix, the instructions should not use any other register than AH;
however, QEMU selects SPL (SP being register 4, just like AH) if the
REX prefix is present.  To fix this, use deposit directly without
going through gen_op_mov_v_reg and gen_op_mov_reg_v.
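
For reference, the architectural behaviour being targeted (a plain-C sketch,
not the TCG code):

    #include <stdint.h>

    /* LAHF/SAHF always operate on AH, i.e. bits 15..8 of RAX, regardless
     * of any REX prefix. */
    static uint64_t do_lahf(uint64_t rax, uint8_t flags_low)
    {
        return (rax & ~(uint64_t)0xff00) | ((uint64_t)flags_low << 8);
    }

    static uint8_t do_sahf(uint64_t rax)
    {
        return (rax >> 8) & 0xff;
    }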

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/130
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-11-15 09:34:42 +10:00
Paolo Bonzini
d1bb978ba1 target/i386: fix cmpxchg with 32-bit register destination
Unlike the memory case, where "the destination operand receives a write
cycle without regard to the result of the comparison", rm must not be
touched at all if the comparison fails, including not zero-extending
it on 64-bit processors.  This is not how the movcond currently works,
because it is always followed by a gen_op_mov_reg_v to rm.
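
A reference model of the register-destination case (a sketch of the intended
semantics, not the TCG code):

    #include <stdint.h>
    #include <stdbool.h>

    static bool cmpxchg32_reg(uint64_t *rax, uint64_t *reg, uint32_t src)
    {
        if ((uint32_t)*rax == (uint32_t)*reg) {
            *reg = src;                 /* 32-bit write zero-extends */
            return true;                /* ZF = 1 */
        }
        *rax = (uint32_t)*reg;          /* EAX receives the old value */
        return false;                   /* ZF = 0; *reg left untouched */
    }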

To fix it, introduce a new function that is similar to gen_op_mov_reg_v
but writes to a TCG temporary.

Considering that gen_extu(ot, oldv) is not needed in the memory case
either, the two cases for register and memory destinations are different
enough that one might as well fuse the two "if (mod == 3)" into one.
So do that too.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/508
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[rth: Add a test case ]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-11-15 09:34:42 +10:00
Stefan Hajnoczi
7f5acfcb66 * bug fixes
* reduced memory footprint for IPI virtualization on Intel processors
 * asynchronous teardown support (Linux only)
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmNiVykUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroN0Swf/YxjphCtFgYYSO14WP+7jAnfRZLhm
 0xWChWP8rco5I352OBFeFU64Av5XoLGNn6SZLl8lcg86lQ/G0D27jxu6wOcDDHgw
 0yTDO1gevj51UKsbxoC66OWSZwKTEo398/BHPDcI2W41yOFycSdtrPgspOrFRVvf
 7M3nNjuNPsQorZeuu8NGr3jakqbt99ZDXcyDEWbrEAcmy2JBRMbGgT0Kdnc6aZfW
 CvL+1ljxzldNwGeNBbQW2QgODbfHx5cFZcy4Daze35l5Ra7K/FrgAzr6o/HXptya
 9fEs5LJQ1JWI6JtpaWwFy7fcIIOsJ0YW/hWWQZSDt9JdAJFE5/+vF+Kz5Q==
 =CgrO
 -----END PGP SIGNATURE-----

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* bug fixes
* reduced memory footprint for IPI virtualization on Intel processors
* asynchronous teardown support (Linux only)

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmNiVykUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroN0Swf/YxjphCtFgYYSO14WP+7jAnfRZLhm
# 0xWChWP8rco5I352OBFeFU64Av5XoLGNn6SZLl8lcg86lQ/G0D27jxu6wOcDDHgw
# 0yTDO1gevj51UKsbxoC66OWSZwKTEo398/BHPDcI2W41yOFycSdtrPgspOrFRVvf
# 7M3nNjuNPsQorZeuu8NGr3jakqbt99ZDXcyDEWbrEAcmy2JBRMbGgT0Kdnc6aZfW
# CvL+1ljxzldNwGeNBbQW2QgODbfHx5cFZcy4Daze35l5Ra7K/FrgAzr6o/HXptya
# 9fEs5LJQ1JWI6JtpaWwFy7fcIIOsJ0YW/hWWQZSDt9JdAJFE5/+vF+Kz5Q==
# =CgrO
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 02 Nov 2022 07:40:25 EDT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386: Fix test for paging enabled
  util/log: Close per-thread log file on thread termination
  target/i386: Set maximum APIC ID to KVM prior to vCPU creation
  os-posix: asynchronous teardown for shutdown on Linux
  target/i386: Fix calculation of LOCK NEG eflags

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-11-03 10:54:37 -04:00
Richard Henderson
03a60ae9ca target/i386: Fix test for paging enabled
If CR0.PG is unset, pg_mode will be zero, but it will also be zero
for non-PAE/non-PSE page tables with CR0.WP=0.  Restore the
correct test for paging enabled.

Fixes: 98281984a3 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1269
Reported-by: Andreas Gustafsson <gson@gson.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221102091232.1092552-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-11-02 12:35:16 +01:00
Richard Henderson
6317933086 target/i386: Expand eflags updates inline
The helpers for reset_rf, cli, sti, clac, stac are
completely trivial; implement them inline.

Drop some nearby #if 0 code.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-11-01 08:31:41 +11:00
Richard Henderson
3d419a4dd2 accel/tcg: Remove will_exit argument from cpu_restore_state
The value passed is always true, and if the target's
synchronize_from_tb hook is non-trivial, not exiting
may be erroneous.

Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-11-01 08:31:41 +11:00
Richard Henderson
f484f213c9 target/i386: Use cpu_unwind_state_data for tpr access
Avoid cpu_restore_state, and modifying env->eip out from
underneath the translator with TARGET_TB_PCREL.  There is
some slight duplication from x86_restore_state_to_opc,
but it's just a few lines.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1269
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-11-01 08:31:37 +11:00
Zeng Guang
19e2a9fb9d target/i386: Set maximum APIC ID to KVM prior to vCPU creation
Specify maximum possible APIC ID assigned for current VM session to KVM
prior to the creation of vCPUs. By this setting, KVM can set up VM-scoped
data structure indexed by the APIC ID, e.g. Posted-Interrupt Descriptor
pointer table to support Intel IPI virtualization, with an optimal
memory footprint.

It can be achieved by calling KVM_ENABLE_CAP for KVM_CAP_MAX_VCPU_ID
capability once KVM has enabled it. Ignore the returned error if KVM
doesn't support this capability yet.

Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20220825025246.26618-1-guang.zeng@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-31 09:46:34 +01:00
Qi Hu
1215317510 target/i386: Fix calculation of LOCK NEG eflags
After:

        lock negl -0x14(%rbp)
        pushf
        pop    %rax

%rax will contain the wrong value because the "lock neg" calculates the
wrong eflags.  Simple test:

        #include <assert.h>

        int main()
        {
          __volatile__ unsigned test = 0x2363a;
          __volatile__ char cond = 0;
          asm(
              "lock negl %0 \n\t"
              "sets %1"
              : "=m"(test), "=r"(cond));
          assert(cond & 1);
          return 0;
        }

Reported-by: Jinyang Shen <shenjinyang@loongson.cn>
Co-Developed-by: Xuehai Chen <chenxuehai@loongson.cn>
Signed-off-by: Xuehai Chen <chenxuehai@loongson.cn>
Signed-off-by: Qi Hu <huqi@loongson.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-31 09:46:34 +01:00
Stefan Hajnoczi
08a5d04606 Revert incorrect cflags initialization.
Add direct jumps for tcg/loongarch64.
 Speed up breakpoint check.
 Improve assertions for atomic.h.
 Move restore_state_to_opc to TCGCPUOps.
 Cleanups to TranslationBlock maintenance.
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmNYlo4dHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV9y2wf9EKsCA6VtYI2Qtftf
 q/ujYFmUf8AKTb9eVcA0XX71CT1dEnFR7GQyT8B8X13x0pSbOX7tbEWHPreegTFV
 tESiejvymi6Q9devAB58GVwNoU/zPIQQGhCPxkVUKDmRztJz22MbGUzd7UKPPgU8
 2nVMkIpLTMBsKeFLxE/D3ZntmdKsgyI/1Dtkl9TxvlDGsCbMjbNcr8lM+TLaG2oX
 GZhFyJHKEVy0cobukvhhb/9rU7AWdG/BnFmZM16JxvHV/YCwJBx3Udhcy9xPePUU
 yIjkGsUAq4aB6H9RFuTWh7GmaY5u6gMbTTi2J7hDos0mzauYJtpgEB/H42LpycGE
 sOhkLQ==
 =DUb8
 -----END PGP SIGNATURE-----

Merge tag 'pull-tcg-20221026' of https://gitlab.com/rth7680/qemu into staging

Revert incorrect cflags initialization.
Add direct jumps for tcg/loongarch64.
Speed up breakpoint check.
Improve assertions for atomic.h.
Move restore_state_to_opc to TCGCPUOps.
Cleanups to TranslationBlock maintenance.

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmNYlo4dHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV9y2wf9EKsCA6VtYI2Qtftf
# q/ujYFmUf8AKTb9eVcA0XX71CT1dEnFR7GQyT8B8X13x0pSbOX7tbEWHPreegTFV
# tESiejvymi6Q9devAB58GVwNoU/zPIQQGhCPxkVUKDmRztJz22MbGUzd7UKPPgU8
# 2nVMkIpLTMBsKeFLxE/D3ZntmdKsgyI/1Dtkl9TxvlDGsCbMjbNcr8lM+TLaG2oX
# GZhFyJHKEVy0cobukvhhb/9rU7AWdG/BnFmZM16JxvHV/YCwJBx3Udhcy9xPePUU
# yIjkGsUAq4aB6H9RFuTWh7GmaY5u6gMbTTi2J7hDos0mzauYJtpgEB/H42LpycGE
# sOhkLQ==
# =DUb8
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 25 Oct 2022 22:08:14 EDT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-tcg-20221026' of https://gitlab.com/rth7680/qemu: (47 commits)
  accel/tcg: Remove restore_state_to_opc function
  target/xtensa: Convert to tcg_ops restore_state_to_opc
  target/tricore: Convert to tcg_ops restore_state_to_opc
  target/sparc: Convert to tcg_ops restore_state_to_opc
  target/sh4: Convert to tcg_ops restore_state_to_opc
  target/s390x: Convert to tcg_ops restore_state_to_opc
  target/rx: Convert to tcg_ops restore_state_to_opc
  target/riscv: Convert to tcg_ops restore_state_to_opc
  target/ppc: Convert to tcg_ops restore_state_to_opc
  target/openrisc: Convert to tcg_ops restore_state_to_opc
  target/nios2: Convert to tcg_ops restore_state_to_opc
  target/mips: Convert to tcg_ops restore_state_to_opc
  target/microblaze: Convert to tcg_ops restore_state_to_opc
  target/m68k: Convert to tcg_ops restore_state_to_opc
  target/loongarch: Convert to tcg_ops restore_state_to_opc
  target/i386: Convert to tcg_ops restore_state_to_opc
  target/hppa: Convert to tcg_ops restore_state_to_opc
  target/hexagon: Convert to tcg_ops restore_state_to_opc
  target/cris: Convert to tcg_ops restore_state_to_opc
  target/avr: Convert to tcg_ops restore_state_to_opc
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-26 10:53:41 -04:00
Richard Henderson
434382e640 target/i386: Convert to tcg_ops restore_state_to_opc
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26 11:11:28 +10:00
Stefan Hajnoczi
79fc2fb685 Pull request
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEzS913cjjpNwuT1Fz8ww4vT8vvjwFAmNXleQSHGxhdXJlbnRA
 dml2aWVyLmV1AAoJEPMMOL0/L748TIsP/1gulTFpYAs3Kao6IZonsuCzrjQrJWqv
 5SD7cVb7isOWdOSNK3glE4dG54Q38PaS9GHaCvzIndjHxlWddCCUuwiw6p1Wdo70
 fjNfcCOEPoalQbkZvLejhs5n2rlfTvS5JUnLKVD9+ton7hjnTyKGDDYao5mYhtzv
 Kn9NpCD3m+K3orzG2Jj7jR1UAumg4cW4YQEpT8ItDT4Y5UAxjL6TZQ6CE220DQDq
 YwDrHEgDYr/UKlTbIC/JwlKOLr0sh+UB1VV8GZS6e6pU9u5WpDDHlQZpU8W2tLLg
 cG5m8tLG2avFxRMUFrPNZ8Lx2xKO8wL1PtgAO9w7qFK+r0soZvv+Zh4ev/t5zGLf
 ciliItqf97yPYNIc3su75jqdQHed7lmZc3m9LBHg8VXN6rAatt8vWUbG90sAZuTU
 tWBZHvQmG0s2MK4UYqeQ59tc21v9T2+VCiiv/1vjgEUr8tBhXS562jrDt/bNEqKa
 eRzT4h4ffbP6BJRnyakxkFkQ7nd2OdlLNKUAr9Tk6T2fYuarfEdbYx//0950agqD
 AAtdQ/AJm6Pq1Px0/RuMKK5WsL818BoAkfr6n7qXleunytJ1W5hjW9EmFIPZWPTR
 ce/lSFHA0+MCpg6C8zAa4iNBg/Pk0p3GRrTeWyHK1FjV+Gep1QtE/a1vk/qiPzTM
 qZVfPxa8cXXe
 =caiq
 -----END PGP SIGNATURE-----

Merge tag 'trivial-branch-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu into staging

Pull request

# -----BEGIN PGP SIGNATURE-----
#
# iQJGBAABCAAwFiEEzS913cjjpNwuT1Fz8ww4vT8vvjwFAmNXleQSHGxhdXJlbnRA
# dml2aWVyLmV1AAoJEPMMOL0/L748TIsP/1gulTFpYAs3Kao6IZonsuCzrjQrJWqv
# 5SD7cVb7isOWdOSNK3glE4dG54Q38PaS9GHaCvzIndjHxlWddCCUuwiw6p1Wdo70
# fjNfcCOEPoalQbkZvLejhs5n2rlfTvS5JUnLKVD9+ton7hjnTyKGDDYao5mYhtzv
# Kn9NpCD3m+K3orzG2Jj7jR1UAumg4cW4YQEpT8ItDT4Y5UAxjL6TZQ6CE220DQDq
# YwDrHEgDYr/UKlTbIC/JwlKOLr0sh+UB1VV8GZS6e6pU9u5WpDDHlQZpU8W2tLLg
# cG5m8tLG2avFxRMUFrPNZ8Lx2xKO8wL1PtgAO9w7qFK+r0soZvv+Zh4ev/t5zGLf
# ciliItqf97yPYNIc3su75jqdQHed7lmZc3m9LBHg8VXN6rAatt8vWUbG90sAZuTU
# tWBZHvQmG0s2MK4UYqeQ59tc21v9T2+VCiiv/1vjgEUr8tBhXS562jrDt/bNEqKa
# eRzT4h4ffbP6BJRnyakxkFkQ7nd2OdlLNKUAr9Tk6T2fYuarfEdbYx//0950agqD
# AAtdQ/AJm6Pq1Px0/RuMKK5WsL818BoAkfr6n7qXleunytJ1W5hjW9EmFIPZWPTR
# ce/lSFHA0+MCpg6C8zAa4iNBg/Pk0p3GRrTeWyHK1FjV+Gep1QtE/a1vk/qiPzTM
# qZVfPxa8cXXe
# =caiq
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 25 Oct 2022 03:53:08 EDT
# gpg:                using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg:                issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg:                 aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C

* tag 'trivial-branch-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu:
  accel/tcg/tcg-accel-ops-rr: fix trivial typo
  ui: remove useless typecasts
  treewide: Remove the unnecessary space before semicolon
  include/hw/scsi/scsi.h: Remove unused scsi_legacy_handle_cmdline() prototype
  vmstate-static-checker:remove this redundant return
  tests/qtest: vhost-user-test: Fix [-Werror=format-overflow=] build warning
  tests/qtest: migration-test: Fix [-Werror=format-overflow=] build warning
  Drop useless casts from g_malloc() & friends to pointer
  elf2dmp: free memory in failure
  hw/core: Tidy up unnecessary casting away of const
  .gitignore: add multiple items to .gitignore

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-25 11:37:17 -04:00
Markus Armbruster
0a553c12c7 Drop useless casts from g_malloc() & friends to pointer
These memory allocation functions return void *, and casting to
another pointer type is useless clutter.  Drop these casts.

If you really want another pointer type, consider g_new().

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220923120025.448759-3-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-10-22 23:15:40 +02:00
Paolo Bonzini
2872b0f390 target/i386: implement FMA instructions
The only issue with FMA instructions is that there are _a lot_ of them (30
opcodes, each of which comes in up to 4 versions depending on VEX.W and
VEX.L; a total of 96 possibilities).  However, they can be implemented with
only 6 helpers, two for scalar operations and four for packed operations.
(Scalar versions do not do any merging; they only affect the bottom 32
or 64 bits of the output operand.  Therefore, there are no separate XMM
and YMM versions of the scalar helpers.)

First, we can reduce the number of helpers to one third by passing four
operands (one output and three inputs); the reordering of which operands
go to the multiply and which go to the add is done in emit.c.

Second, the different instructions also dispatch to the same softfloat
function, so the flags for float32_muladd and float64_muladd are passed
in the helper as int arguments, with a little extra complication to
handle FMADDSUB and FMSUBADD.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-22 09:05:54 +02:00
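As an illustration of the helper structure described above, here is a minimal
sketch of one packed single-precision helper.  It assumes QEMU's softfloat API
(float32_muladd and the float_muladd_* flags are real); the helper name and
the element count argument are illustrative only:

    /* Sketch: one packed-single FMA helper.  "flags" selects the
     * FMSUB/FNMADD/FNMSUB variants via float_muladd_negate_product,
     * float_muladd_negate_c and float_muladd_negate_result; FMADDSUB and
     * FMSUBADD need extra per-lane handling on top of this. */
    static void fma_packed_single_sketch(CPUX86State *env, ZMMReg *d,
                                         const ZMMReg *a, const ZMMReg *b,
                                         const ZMMReg *c, int flags, int nelem)
    {
        for (int i = 0; i < nelem; i++) {
            d->ZMM_S(i) = float32_muladd(a->ZMM_S(i), b->ZMM_S(i),
                                         c->ZMM_S(i), flags,
                                         &env->sse_status);
        }
    }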
Paolo Bonzini
cf5ec6641e target/i386: implement F16C instructions
F16C only consists of two instructions, which are a bit peculiar
nevertheless.

First, they access only the low half of a YMM or XMM register for the
packed-half operand; the exact size still depends on the VEX.L flag.
This is similar to the existing avx_movx flag, but not exactly because
avx_movx is hardcoded to affect operand 2.  To this end I added a "ph"
format name; it's possible to reuse this approach for the VPMOVSX and
VPMOVZX instructions, though that would also require adding two more
formats for the low-quarter and low-eighth of an operand.

Second, VCVTPS2PH is somewhat weird because it *stores* the result of
the instruction into memory rather than loading it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-20 15:16:18 +02:00
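A rough sketch of the widening direction (VCVTPH2PS), assuming QEMU's
softfloat conversion routines (float16_to_float32 is real); the helper name
and lane accessors are illustrative, and nelem would be 4 or 8 depending on
VEX.L:

    static void cvtph2ps_sketch(CPUX86State *env, ZMMReg *d,
                                const ZMMReg *s, int nelem)
    {
        /* Walk downwards so the narrow source may alias the destination;
         * the 16-bit lanes hold IEEE half-precision values. */
        for (int i = nelem - 1; i >= 0; i--) {
            d->ZMM_S(i) = float16_to_float32(s->ZMM_W(i), true,
                                             &env->sse_status);
        }
    }

VCVTPS2PH goes the other way and, as noted above, is the unusual one because
its packed-half result is written to memory (or the low half of the
destination register).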
Paolo Bonzini
314d3eff66 target/i386: introduce function to set rounding mode from FPCW or MXCSR bits
VROUND, FSTCW and STMXCSR all have to perform the same conversion from
x86 rounding modes to softfloat constants.  Since the ISA is consistent
on the meaning of the two-bit rounding modes, extract the common code
into a wrapper for set_float_rounding_mode.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-20 15:16:13 +02:00
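A sketch of what such a wrapper looks like, assuming QEMU's softfloat
rounding-mode constants (which are real) and an illustrative function name;
the two-bit field is FPCW[11:10] for x87 and MXCSR[14:13] for SSE:

    static void set_x86_rounding_mode_sketch(unsigned rc, float_status *s)
    {
        switch (rc & 3) {
        case 0: set_float_rounding_mode(float_round_nearest_even, s); break;
        case 1: set_float_rounding_mode(float_round_down, s);         break;
        case 2: set_float_rounding_mode(float_round_up, s);           break;
        case 3: set_float_rounding_mode(float_round_to_zero, s);      break;
        }
    }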
Paolo Bonzini
0d4bcac3ca target/i386: decode-new: avoid out-of-bounds access to xmm_regs[-1]
If the destination is a memory register, op->n is -1.  Going through
tcg_gen_gvec_dup_imm path is both useless (the value has been stored
by the gen_* function already) and wrong because of the out-of-bounds
access.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-20 15:15:50 +02:00
Paolo Bonzini
653fad2497 target/i386: remove old SSE decoder
With all SSE (and AVX!) instructions now implemented in disas_insn_new,
it's possible to remove gen_sse, as well as the helpers for instructions
that now use gvec.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
71a0891d61 target/i386: move 3DNow to the new decoder
This adds another kind of weirdness when you thought you had seen it all:
an opcode byte that comes _after_ the address, not before.  It's not
worth adding a new X86_SPECIAL_* constant for it, but it's actually
not unlike VCMP; so, forgive me for exploiting the similarity and just
deciding to dispatch to the right gen_helper_* call in a single code
generation function.

In fact, the old decoder had a bug where s->rip_offset should have
been set to 1 for 3DNow! instructions, and it's fixed now.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paul Brook
2f8a21d8ff target/i386: Enable AVX cpuid bits when using TCG
Include AVX, AVX2 and VAES in the guest cpuid features supported by TCG.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-40-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
57f6bba023 target/i386: implement VLDMXCSR/VSTMXCSR
These are exactly the same as the non-VEX version, but one has to be careful
that only VEX.L=0 is allowed.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
892544317f target/i386: implement XSAVE and XRSTOR of AVX registers
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
f8d19eec0d target/i386: reimplement 0x0f 0x28-0x2f, add AVX
Here the code is a bit uglier due to the truncation and extension
of registers to and from 32-bit.  There is also a mistake in the
manual with respect to the size of the memory operand of CVTPS2PI
and CVTTPS2PI, reported by Ricky Zhou.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
7170a17ec3 target/i386: reimplement 0x0f 0x10-0x17, add AVX
These are mostly moves, and yet are a total pain.  The main issue
is that:

1) some instructions are selected by mod==11 (register operand)
vs. mod=00/01/10 (memory operand)

2) stores to memory are two-operand operations, while the 3-register
and load-from-memory versions operate on the entire contents of the
destination; this makes it easier to separate the gen_* function for
the store case

3) it's inefficient to load into xmm_T0 only to move the value out
again, so the gen_* function for the load case is separated too

The manual also has various mistakes in the operands here, for example
the store case of MOVHPS operates on a 128-bit source (albeit discarding
the bottom 64 bits) and therefore should be Mq,Vdq rather than Mq,Vq.
Likewise for the destination and source of MOVHLPS.

VUNPCK?PS and VUNPCK?PD are the same as VUNPCK?DQ and VUNPCK?QDQ,
but encoded as prefixes rather than separate operands.  The helpers
can be reused however.

For MOVSLDUP, MOVSHDUP and MOVDDUP I chose to reimplement them as
helpers.  I named the helper for MOVDDUP "movdldup" in preparation
for possible future introduction of MOVDHDUP and to clarify the
similarity with MOVSLDUP.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
aba2b8ecb9 target/i386: reimplement 0x0f 0xc2, 0xc4-0xc6, add AVX
Nothing special going on here, for once.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
16fc5726a6 target/i386: reimplement 0x0f 0x38, add AVX
There are several special cases here:

1) extending moves have different widths for the helpers vs. for the
memory loads, and the width for memory loads depends on VEX.L too.
This is represented by X86_SPECIAL_AVXExtMov.

2) some instructions, such as variable-width shifts, select the vector element
size via REX.W.

3) VSIB instructions (VGATHERxPy, VPGATHERxy) are also part of this group,
and they have (among other things) two output operands.

4) the macros for 4-operand blends (which are under 0x0f 0x3a) have to be
extended to support 2-operand blends.  The 2-operand variant actually
came a few years earlier, but it is clearer to implement them in the
opposite order.

X86_TYPE_WM, introduced earlier for unaligned loads, is reused for helpers
that accept a Reg* but have a M argument.

These three-byte opcodes also include AVX new instructions, for which
the helpers were originally implemented by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Richard Henderson
d4af67a27a target/i386: Use tcg gvec ops for pmovmskb
As pmovmskb is used by strlen et al, this is the third
highest-overhead SSE operation, at 0.8%.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[Reorganize to generate code for any vector size. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
7906847768 target/i386: reimplement 0x0f 0x3a, add AVX
The more complicated operations here are insertions and extractions.
Otherwise, there are just more entries than usual because the PS/PD/SS/SD
variations are encoded in the opcode rather than in the prefixes.

These three-byte opcodes also include AVX new instructions, whose
implementation in the helpers was originally done by Paul Brook
<paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
a64eee3ab4 target/i386: clarify (un)signedness of immediates from 0F3Ah opcodes
Three-byte opcodes from the 0F3Ah area all have an immediate byte which
is usually unsigned.  Clarify in the helper code that it is unsigned;
the new decoder treats immediates as signed by default, and seeing
an intN_t in the prototype might give the wrong impression that one
can use decode->immediate directly.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
6bbeb98d10 target/i386: reimplement 0x0f 0xd0-0xd7, 0xe0-0xe7, 0xf0-0xf7, add AVX
The more complicated ones here are d6-d7, e6-e7, f7.  The others
are trivial.

For LDDQU, using gen_load_sse directly might corrupt the register if
the second part of the load fails.  Therefore, add a custom X86_TYPE_WM
value; like X86_TYPE_W it does call gen_load(), but it also rejects a
value of 11 in the ModRM field like X86_TYPE_M.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
ce4fcb9478 target/i386: reimplement 0x0f 0x70-0x77, add AVX
This includes shifts by immediate, which use bits 3-5 of the ModRM byte
as an opcode extension.  With the exception of 128-bit shifts, they are
implemented using gvec.

This also covers VZEROALL and VZEROUPPER, which use the same opcode
as EMMS.  If we were wanting to optimize out gen_clear_ymmh then this
would be one of the starting points.  The implementation of the VZEROALL
and VZEROUPPER helpers is by Paul Brook.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
d1c1a4222c target/i386: reimplement 0x0f 0x78-0x7f, add AVX
These are a mixed batch, including the first two horizontal
(66 and F2 only) operations, more moves, and SSE4a extract/insert.

Because SSE4a is pretty rare, I chose to leave the helpers as they are,
but it is possible to unify them by loading index and length from the
source XMM register and generating deposit or extract TCG ops.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
03b4588070 target/i386: reimplement 0x0f 0x50-0x5f, add AVX
These are mostly floating-point SSE operations.  The odd ones out
are MOVMSK and CVTxx2yy, the others are straightforward.

Unary operations are a bit special in AVX because they have 2 operands
for the PD/PS forms (VEX.vvvv must be 1111b), and 3 operands for SD/SS.
They are handled using X86_OP_GROUP3 for compactness.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:05 +02:00
Paolo Bonzini
1d0efbdb35 target/i386: reimplement 0x0f 0xd8-0xdf, 0xe8-0xef, 0xf8-0xff, add AVX
These are more simple integer instructions present in both MMX and SSE/AVX,
with no holes that were later occupied by newer instructions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
92ec056a6b target/i386: reimplement 0x0f 0x60-0x6f, add AVX
These are both MMX and SSE/AVX instructions, except for vmovdqu.  In both
cases the inputs and output are in s->ptr{0,1,2}, so the only difference
between MMX, SSE, and AVX is which helper to call.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
b98f886c8f target/i386: Introduce 256-bit vector helpers
The new implementation of SSE will cover AVX from the get go, because
all the work for the helper functions is already done.  We just need to
build them.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
6e0cac782a target/i386: implement additional AVX comparison operators
The new implementation of SSE will cover AVX from the get go, so include
the 24 extra comparison operators that are only available with the VEX
prefix.

Based on a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
620f75566a target/i386: provide 3-operand versions of unary scalar helpers
Compared to Paul's implementation, the new decoder will use a different approach
to implement AVX's merging of dst with src1 on scalar operations.  Adjust the
old SSE decoder to be compatible with new-style helpers.

The affected instructions are CVTSx2Sx, ROUNDSx, RSQRTSx, SQRTSx, RCPSx.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
1de9e7e612 target/i386: support operand merging in binary scalar helpers
Compared to Paul's implementation, the new decoder will use a different approach
to implement AVX's merging of dst with src1 on scalar operations.  Adjust the
helpers to provide this functionality.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
f05f9789f5 target/i386: extend helpers to support VEX.V 3- and 4- operand encodings
Add to the helpers all the operands that are needed to implement AVX.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Message-Id: <20220424220204.2493824-26-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paul Brook
30f419219a target/i386: Prepare ops_sse_header.h for 256 bit AVX
Adjust all #ifdefs to match the ones in ops_sse.h.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-23-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
1d0b926150 target/i386: move scalar 0F 38 and 0F 3A instruction to new decoder
Because these are the only VEX instructions that QEMU supports, the
new decoder is entered on the first byte of a valid VEX prefix, and VEX
decoding only needs to be done in decode-new.c.inc.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
55a3328669 target/i386: validate SSE prefixes directly in the decoding table
Many SSE and AVX instructions are only valid with specific prefixes
(none, 66, F3, F2).  Introduce a direct way to encode this in the
decoding table to avoid using decode groups too much.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
20581aadec target/i386: validate VEX prefixes via the instructions' exception classes
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paul Brook
608db8dbfb target/i386: add AVX_EN hflag
Add a new hflag bit to determine whether AVX instructions are allowed

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-4-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
caa01fadbe target/i386: add CPUID feature checks to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
268dc4648f target/i386: add CPUID[EAX=7,ECX=0].ECX to DisasContext
TCG will shortly implement VAES instructions, so add the relevant feature
word to the DisasContext.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
6ba13999be target/i386: add ALU load/writeback core
Add generic code generation that takes care of preparing operands
around calls to decode.e.gen in a table-driven manner, so that ALU
operations need not take care of that.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
b3e22b2318 target/i386: add core of new i386 decoder
The new decoder is based on three principles:

- use mostly table-driven decoding, using tables derived as much as possible
  from the Intel manual.  Centralizing the decoding of the operands makes it
  more homogeneous, for example all immediates are signed.  All modrm
  handling is in one function, and can be shared between SSE and ALU
  instructions (including XMM<->GPR instructions).  The SSE/AVX decoder
  will also not have duplicated code between the 0F, 0F38 and 0F3A tables.

- keep the code as "non-branchy" as possible.  Generally, the code for
  the new decoder is more verbose, but the control flow is simpler.
  Conditionals are not nested and have small bodies.  All instruction
  groups are resolved even before operands are decoded, and code
  generation is separated as much as possible within small functions
  that only handle one instruction each.

- keep address generation and (for ALU operands) memory loads and writeback
  as much in common code as possible.  All ALU operations for example
  are implemented as T0=f(T0,T1).  For non-ALU instructions,
  read-modify-write memory operations are rare, but registers do not
  have TCGv equivalents: therefore, the common logic sets up pointer
  temporaries with the operands, while load and writeback are handled
  by gvec or by helpers.

These principles make future code review and extensibility simpler, at
the cost of having a relatively large amount of code in the form of this
patch.  Even EVEX should not be _too_ hard to implement (it's just a crazy
large amount of possibilities).

This patch introduces the main decoder flow, and integrates the old
decoder with the new one.  The old decoder takes care of parsing
prefixes and then optionally drops to the new one.  The changes to the
old decoder are minimal and allow it to be replaced incrementally with
the new one.

There is a debugging mechanism through a "LIMIT" environment variable.
In user-mode emulation, the variable is the number of instructions
decoded by the new decoder before permanently switching to the old one.
In system emulation, the variable is the highest opcode that is decoded
by the new decoder (this is less friendly, but it's the best that can
be done without requiring deterministic execution).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
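For the debugging mechanism described above, usage in user-mode emulation
would look roughly like this (the binary name is only an example):

    # decode the first 10000 instructions with the new decoder,
    # then switch permanently to the old one
    LIMIT=10000 qemu-x86_64 ./a.out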
Paolo Bonzini
a61ef762f9 target/i386: make rex_w available even in 32-bit mode
REX.W can be used even in 32-bit mode by AVX instructions, where it is retroactively
renamed to VEX.W.  Make the field available even in 32-bit mode but keep the REX_W()
macro as it was; this way, the handling of dflag does not use it by mistake and
the AVX code more clearly points at the special VEX behavior of the bit.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Paolo Bonzini
12a2c9c72c target/i386: make ldo/sto operations consistent with ldq
ldq takes a pointer to the first byte to load the 64-bit word in;
ldo takes a pointer to the first byte of the ZMMReg.  Make them
consistent, which will be useful in the new SSE decoder's
load/writeback routines.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
75f107a856 target/i386: Define XMMReg and access macros, align ZMM registers
This will be used for emission and endian adjustments of gvec operations.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822223722.1697758-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
8629e77be5 target/i386: Use probe_access_full for final stage2 translation
Rather than recurse directly on mmu_translate, go through the
same softmmu lookup that we did for the page table walk.
This centralizes all knowledge of MMU_NESTED_IDX, with respect
to setup of TranslationParams, to get_physical_address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-10-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
4a1e9d4d11 target/i386: Use atomic operations for pte updates
Use probe_access_full in order to resolve to a host address,
which then lets us use a host cmpxchg to update the pte.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/279
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-9-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
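A minimal sketch of the update step, assuming the PTE has already been
resolved to a host pointer via probe_access_full; qatomic_cmpxchg and the
PG_*_MASK constants are QEMU's, while the function name is illustrative:

    static bool set_ad_bits_sketch(uint64_t *host_ptep, uint64_t old_pte,
                                   uint64_t set_bits /* e.g. PG_ACCESSED_MASK */)
    {
        uint64_t new_pte = old_pte | set_bits;
        /* Fails (and the walk is restarted) if another vCPU or the guest
         * modified the entry since it was read. */
        return qatomic_cmpxchg(host_ptep, old_pte, new_pte) == old_pte;
    }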
Richard Henderson
11b4e971dc target/i386: Combine 5 sets of variables in mmu_translate
We don't need one variable set per translation level,
which requires copying into pte/pte_addr for huge pages.
Standardize on pte/pte_addr for all levels.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-8-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
726ea33531 target/i386: Use MMU_NESTED_IDX for vmload/vmsave
Use MMU_NESTED_IDX for each memory access, rather than
just a single translation to physical.  Adjust svm_save_seg
and svm_load_seg to pass in mmu_idx.

This removes the last use of get_hphys so remove it.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-7-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
98281984a3 target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX
These new mmu indexes will be helpful for improving
paging and code throughout the target.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-6-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
9bbcf37219 target/i386: Reorg GET_HPHYS
Replace with PTE_HPHYS for the page table walk, and a direct call
to mmu_translate for the final stage2 translation.  Hoist the check
for HF2_NPT_MASK out to get_physical_address, which avoids the
recursive call when stage2 is disabled.

We can now return all the way out to x86_cpu_tlb_fill before raising
an exception, which means probe works.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-5-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
3563362ddf target/i386: Introduce structures for mmu_translate
Create TranslateParams for inputs, TranslateResults for successful
outputs, and TranslateFault for error outputs; return true on success.

Move stage1 error paths from handle_mmu_fault to x86_cpu_tlb_fill;
reorg the rest of handle_mmu_fault into get_physical_address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-4-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
e4ddff5262 target/i386: Direct call get_hphys from mmu_translate
Use a boolean to control the call to get_hphys instead
of passing a null function pointer.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-3-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Richard Henderson
487d11333a target/i386: Use MMUAccessType across excp_helper.c
Replace int is_write1 and magic numbers with the proper
MMUAccessType access_type and enumerators.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
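For reference, these are the enumerators that replace the magic numbers
(reproduced from QEMU's core CPU headers for context):

    typedef enum MMUAccessType {
        MMU_DATA_LOAD  = 0,
        MMU_DATA_STORE = 1,
        MMU_INST_FETCH = 2,
    } MMUAccessType;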
Richard Henderson
913f0836f6 target/i386: Save and restore pc_save before tcg_remove_ops_after
Restore pc_save while undoing any state change that may have
happened while decoding the instruction.  Leave a TODO about
removing all of that when the table-based decoder is complete.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221016222303.288551-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Peter Maydell
08c4f4db60 target/i386: Use device_cold_reset() to reset the APIC
The semantic difference between the deprecated device_legacy_reset()
function and the newer device_cold_reset() function is that the new
function resets both the device itself and any qbuses it owns,
whereas the legacy function resets just the device itself and nothing
else.

The x86_cpu_after_reset() function uses device_legacy_reset() to reset
the APIC; this is an APICCommonState and does not have any qbuses, so
for this purpose the two functions behave identically and we can stop
using the deprecated one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20221013171926.1447899-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
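The change itself is a one-line substitution; both functions are existing
qdev APIs, and the variable name below is illustrative:

    /* before */
    device_legacy_reset(DEVICE(apic));
    /* after: also resets any qbuses the device owns (the APIC has none) */
    device_cold_reset(DEVICE(apic));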
Maciej S. Szmigiero
ec19444a53 hyperv: fix SynIC SINT assertion failure on guest reset
Resetting a guest that has Hyper-V VMBus support enabled triggers a QEMU
assertion failure:
hw/hyperv/hyperv.c:131: synic_reset: Assertion `QLIST_EMPTY(&synic->sint_routes)' failed.

This happens both on normal guest reboot or when using "system_reset" HMP
command.

The failing assertion was introduced by commit 64ddecc88b ("hyperv: SControl is optional to enable SynIc")
to catch dangling SINT routes on SynIC reset.

The root cause of this problem is that the SynIC itself is reset before
devices using SINT routes have a chance to clean up these routes.

Since there seems to be no existing mechanism to force reset callbacks (or
methods) to be executed in specific order let's use a similar method that
is already used to reset another interrupt controller (APIC) after devices
have been reset - by invoking the SynIC reset from the machine reset
handler via a new x86_cpu_after_reset() function co-located with
the existing x86_cpu_reset() in target/i386/cpu.c.
Opportunistically move the APIC reset handler there, too.

Fixes: 64ddecc88b ("hyperv: SControl is optional to enable SynIc") # exposed the bug
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <cb57cee2e29b20d06f81dce054cbcea8b5d497e8.1664552976.git.maciej.szmigiero@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18 13:58:04 +02:00
Stefan Hajnoczi
bb76f8e275 * scsi-disk: support setting CD-ROM block size via device options
* target/i386: Implement MSR_CORE_THREAD_COUNT MSR
 * target/i386: notify VM exit support
 * target/i386: PC-relative translation block support
 * target/i386: support for XSAVE state in signal frames (linux-user)

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* scsi-disk: support setting CD-ROM block size via device options
* target/i386: Implement MSR_CORE_THREAD_COUNT MSR
* target/i386: notify VM exit support
* target/i386: PC-relative translation block support
* target/i386: support for XSAVE state in signal frames (linux-user)

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmNFKP4UHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroNJnwgAgCcOOxmY4Qem0Gd1L+SJKpEtGMOd
# 4LY7443vT36pMpvqFNSfp5GBjDT1MgTD8BIY28miLMq959LT89LyM9g/H7IKOT82
# uyCsW3jW+6F19EZVkNvzTt+3USn/kaHn50zA4Ss9kvdNZr31b2LYqtglVCznfZwH
# oI1rDhvsXubq8oWvwkqH7IwduK8mw+EB5Yz7AjYQ6eiYjenTrQBObpwQNbb4rlUf
# oRm8dk/YJ2gfI2HQkoznGEbgpngy2tIU1vHNEpIk5NpwXxrulOyui3+sWaG4pH8f
# oAOrSDC23M5A6jBJJAzDJ1q6M677U/kwJypyGQ7IyvyhECXE3tR+lHX1eA==
# =tqeJ
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 11 Oct 2022 04:27:42 EDT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (37 commits)
  linux-user: i386/signal: support XSAVE/XRSTOR for signal frame fpstate
  linux-user: i386/signal: support FXSAVE fpstate on 32-bit emulation
  linux-user: i386/signal: move fpstate at the end of the 32-bit frames
  KVM: x86: Implement MSR_CORE_THREAD_COUNT MSR
  i386: kvm: Add support for MSR filtering
  x86: Implement MSR_CORE_THREAD_COUNT MSR
  target/i386: Enable TARGET_TB_PCREL
  target/i386: Inline gen_jmp_im
  target/i386: Add cpu_eip
  target/i386: Create eip_cur_tl
  target/i386: Merge gen_jmp_tb and gen_goto_tb into gen_jmp_rel
  target/i386: Remove MemOp argument to gen_op_j*_ecx
  target/i386: Use gen_jmp_rel for DISAS_TOO_MANY
  target/i386: Use gen_jmp_rel for gen_jcc
  target/i386: Use gen_jmp_rel for loop, repz, jecxz insns
  target/i386: Create gen_jmp_rel
  target/i386: Use DISAS_TOO_MANY to exit after gen_io_start
  target/i386: Create eip_next_*
  target/i386: Truncate values for lcall_real to i32
  target/i386: Introduce DISAS_JUMP
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-13 13:55:03 -04:00
Paolo Bonzini
5d2456789a linux-user: i386/signal: support XSAVE/XRSTOR for signal frame fpstate
Add support for saving/restoring extended save states when signals
are delivered.  This allows using AVX, MPX or PKRU registers in
signal handlers.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 10:27:35 +02:00
Alexander Graf
37656470f6 KVM: x86: Implement MSR_CORE_THREAD_COUNT MSR
The MSR_CORE_THREAD_COUNT MSR describes CPU package topology, such as number
of threads and cores for a given package. This is information that QEMU has
readily available and can provide through the new user space MSR deflection
interface.

This patch propagates the existing hvf logic from patch 027ac0cb51
("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to KVM.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-4-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Alexander Graf
860054d8ce i386: kvm: Add support for MSR filtering
KVM has grown support to deflect arbitrary MSRs to user space since
Linux 5.10. For now we don't expect to make a lot of use of this
feature, so let's expose it the easiest way possible: with up to 16
individually maskable MSRs.

This patch adds a kvm_filter_msr() function that other code can call
to install a hook on KVM MSR reads or writes.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-3-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
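A hypothetical registration sketch; the exact signature of kvm_filter_msr()
is assumed from this series, and the handler name is made up:

    /* Deflect reads of one MSR to a QEMU callback; writes stay in KVM. */
    if (!kvm_filter_msr(kvm_state, MSR_CORE_THREAD_COUNT,
                        my_rdmsr_handler, NULL /* no write handler */)) {
        /* handle failure, e.g. no free filter slot (only 16 exist) */
    }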
Alexander Graf
62a44fddb2 x86: Implement MSR_CORE_THREAD_COUNT MSR
Intel CPUs starting with Haswell-E implement a new MSR called
MSR_CORE_THREAD_COUNT which exposes the number of threads and cores
inside of a package.

This MSR is used by XNU to populate internal data structures and not
implementing it prevents virtual machines with more than 1 vCPU from
booting if the emulated CPU generation is at least Haswell-E.

This patch propagates the existing hvf logic from patch 027ac0cb51
("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to TCG.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-2-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
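The MSR layout itself is simple; a self-contained sketch of how the value is
assembled (bit positions per the Intel SDM for MSR 0x35):

    #include <stdint.h>

    /* MSR_CORE_THREAD_COUNT: bits [15:0] = thread count,
     * bits [31:16] = core count of the package. */
    static uint64_t core_thread_count(unsigned cores, unsigned threads)
    {
        return ((uint64_t)cores << 16) | (threads & 0xffff);
    }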
Richard Henderson
e3a79e0e87 target/i386: Enable TARGET_TB_PCREL
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-27-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
7db973bece target/i386: Inline gen_jmp_im
Expand this function at each of its callers.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-26-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
f771ca6a61 target/i386: Add cpu_eip
Create a tcg global temp for this, and use it instead of explicit stores.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-25-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
75ec746a07 target/i386: Create eip_cur_tl
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-24-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
900cc7e536 target/i386: Merge gen_jmp_tb and gen_goto_tb into gen_jmp_rel
These functions have only one caller, and the logic is more
obvious this way.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-23-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
0ebacb5d1e target/i386: Remove MemOp argument to gen_op_j*_ecx
These functions are always passed aflag, so we might as well
read it from DisasContext directly.  While we're at it, use
a common subroutine for these two functions.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-22-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
5f7ec6efcc target/i386: Use gen_jmp_rel for DISAS_TOO_MANY
With gen_jmp_rel, we may chain between two translation blocks
which may only be separated because of TB size limits.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-21-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
54b191de67 target/i386: Use gen_jmp_rel for gen_jcc
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-20-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
2255da493a target/i386: Use gen_jmp_rel for loop, repz, jecxz insns
With gen_jmp_rel, we may chain to the next tb instead of merely
writing to eip and exiting.  For repz, subtract cur_insn_len to
restart the current insn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-19-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
8760ded661 target/i386: Create gen_jmp_rel
Create a common helper for pc-relative branches.  The jmp jb insn
was missing a mask for CODE32.  In all cases the CODE64 check was
incorrectly placed, allowing PREFIX_DATA to truncate %rip to 16 bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-18-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
202005f1f8 target/i386: Use DISAS_TOO_MANY to exit after gen_io_start
We can set is_jmp early, using only one if, and let that
be overwritten by gen_rep*'s calls to gen_jmp_tb.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-17-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
9e599bf707 target/i386: Create eip_next_*
Create helpers for loading the address of the next insn.
Use tcg_constant_* in adjacent code where convenient.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-16-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
8c03ab9f74 target/i386: Truncate values for lcall_real to i32
Use i32 not int or tl for eip and cs arguments.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-15-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
faf9ea5fa5 target/i386: Introduce DISAS_JUMP
Drop the unused dest argument to gen_jr().
Remove most of the calls to gen_jr, and use DISAS_JUMP.
Remove some unused loads of eip for lcall and ljmp.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-14-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
122e6d7b4a target/i386: Remove cur_eip, next_eip arguments to gen_repz*
All callers pass s->base.pc_next and s->pc, which we can just
as well compute within the functions.  Pull out common helpers
and reduce the amount of code under macros.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-13-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
ad1d6f072d target/i386: Create cur_insn_len, cur_insn_len_i32
Create common routines for computing the length of the insn.
Use tcg_constant_i32 in the new function, while we're at it.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-12-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
6424ac8eec target/i386: Use DISAS_EOB_ONLY
Replace lone calls to gen_eob() with the new enumerator.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-11-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
634a405193 target/i386: Use DISAS_EOB_NEXT
Replace sequences of gen_update_cc_op, gen_update_eip_next,
and gen_eob with the new is_jmp enumerator.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-10-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
4da4523c6c target/i386: Use DISAS_EOB* in gen_movl_seg_T0
Set is_jmp properly in gen_movl_seg_T0, so that the callers
need do nothing special.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-9-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
200ef60399 target/i386: Introduce DISAS_EOB*
Add a few DISAS_TARGET_* aliases to reduce the number of
calls to gen_eob() and gen_eob_inhibit_irq().  So far,
only update i386_tr_translate_insn for exiting the block
because of single-step or previous inhibit irq.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-8-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
09e99df4d5 target/i386: Create gen_update_eip_next
Sync EIP before exiting a translation block.
Replace all gen_jmp_im that use s->pc.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-7-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
65e4af230d target/i386: Create gen_update_eip_cur
Like gen_update_cc_op, sync EIP before doing something
that could raise an exception.  Replace all gen_jmp_im
that use s->base.pc_next.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-6-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
8ed6c98501 target/i386: Remove cur_eip, next_eip arguments to gen_interrupt
All callers pass s->base.pc_next and s->pc, which we can just as
well compute within the function.  Adjust to use tcg_constant_i32
while we're at it.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-5-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
522365508e target/i386: Remove cur_eip argument to gen_exception
All callers pass s->base.pc_next - s->cs_base, which we can just
as well compute within the function.  Note the special case of
EXCP_VSYSCALL in which s->cs_base wasn't subtracted, but cs_base
is always zero in 64-bit mode, when vsyscall is used.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-4-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
f66c8e8cd9 target/i386: Return bool from disas_insn
Instead of returning the new pc, which is present in
DisasContext, return true if an insn was translated.
This is false when we detect a page crossing and must
undo the insn under translation.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-3-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Richard Henderson
ddf83b35bd target/i386: Remove pc_start
The DisasContext member and the disas_insn local variable of
the same name are identical to DisasContextBase.pc_next.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:01 +02:00
Chenyi Qiang
e2e69f6bb9 i386: add notify VM exit support
There are cases where a malicious virtual machine can leave a CPU stuck
(because event windows do not open up), e.g., an infinite loop in microcode
when handling a nested #AC (CVE-2015-5307). No event window means no event
(NMI, SMI or IRQ) can be delivered, which leaves the CPU unavailable to the
host or other VMs. Notify VM exit is introduced to mitigate such attacks:
it generates a VM exit if no event window occurs in VM non-root mode for a
specified amount of time (the notify window).

A new KVM capability KVM_CAP_X86_NOTIFY_VMEXIT is exposed to user space
so that the user can query the capability and set the expected notify
window when creating VMs. The format of the argument when enabling this
capability is as follows:
  Bit 63:32 - notify window specified in qemu command
  Bit 31:0  - some flags (e.g. KVM_X86_NOTIFY_VMEXIT_ENABLED is set to
              enable the feature.)

Users can configure the feature by a new (x86 only) accel property:
    qemu -accel kvm,notify-vmexit=run|internal-error|disable,notify-window=n

The default option of notify-vmexit is run, which will enable the
capability and do nothing if the exit happens. The internal-error option
raises a KVM internal error if it happens. The disable option does not
enable the capability. The default value of notify-window is 0. It is valid
only when notify-vmexit is not disabled. The valid range of notify-window
is non-negative. It is even safe to set it to zero, since an internal
hardware threshold is added to avoid false positives.

A notify VM exit may happen with VM_CONTEXT_INVALID set in the exit
qualification (no cases are anticipated that would set this bit), which
means the VM context is corrupted. This is reflected in the flags of the
KVM_EXIT_NOTIFY exit. If the KVM_NOTIFY_CONTEXT_INVALID bit is set, raise a
KVM internal error unconditionally.

Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20220929072014.20705-5-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11 09:36:00 +02:00
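From the argument format described above, enabling the capability amounts to
packing the window and flags into one 64-bit value (KVM_X86_NOTIFY_VMEXIT_ENABLED
is the flag named in the message; the helper name is illustrative):

    static uint64_t notify_vmexit_cap_arg(uint32_t notify_window)
    {
        return ((uint64_t)notify_window << 32) | KVM_X86_NOTIFY_VMEXIT_ENABLED;
    }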
Paolo Bonzini
3dba0a335c kvm: allow target-specific accelerator properties
Several hypervisor capabilities in KVM are target-specific.  When exposed
to QEMU users as accelerator properties (i.e. -accel kvm,prop=value), they
should not be available for all targets.

Add a hook for targets to add their own properties to -accel kvm, for
now no such property is defined.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220929072014.20705-3-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-10 09:23:16 +02:00
Chenyi Qiang
12f89a39cf i386: kvm: extend kvm_{get, put}_vcpu_events to support pending triple fault
KVM never loses direct triple faults, i.e. those detected in hardware and
morphed into a VM-Exit by KVM. But for triple faults synthesized by KVM,
e.g. on the RSM path, if KVM exits to userspace before the request is
serviced, userspace could migrate the VM and lose the triple fault.

A new flag KVM_VCPUEVENT_VALID_TRIPLE_FAULT is defined to signal that
the event.triple_fault_pending field contains a valid state if the
KVM_CAP_X86_TRIPLE_FAULT_EVENT capability is enabled.

Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20220929072014.20705-2-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-10 09:23:16 +02:00
Janosch Frank
1af0006ab9 dump: Replace opaque DumpState pointer with a typed one
It's always better to convey the type of a pointer if at all
possible. So let's add the DumpState typedef to typedefs.h and move
the dump note functions from the opaque pointers to DumpState
pointers.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Cédric Le Goater <clg@kaod.org>
CC: Daniel Henrique Barboza <danielhb413@gmail.com>
CC: David Gibson <david@gibson.dropbear.id.au>
CC: Greg Kurz <groug@kaod.org>
CC: Palmer Dabbelt <palmer@dabbelt.com>
CC: Alistair Francis <alistair.francis@wdc.com>
CC: Bin Meng <bin.meng@windriver.com>
CC: Cornelia Huck <cohuck@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>
CC: David Hildenbrand <david@redhat.com>
Acked-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220811121111.9878-2-frankja@linux.ibm.com>
2022-10-06 19:30:43 +04:00
Alex Bennée
bf0c50d4aa monitor: expose monitor_puts to rest of code
This helps us construct strings elsewhere before echoing to the
monitor. It avoids having to jump through hoops like:

  monitor_printf(mon, "%s", s->str);

It will be useful in following patches but for now convert all
existing plain "%s" printfs to use the _puts api.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220929114231.583801-33-alex.bennee@linaro.org>
2022-10-06 11:53:40 +01:00
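The conversion mentioned above is mechanical; for example:

    /* before */
    monitor_printf(mon, "%s", s->str);
    /* after */
    monitor_puts(mon, s->str);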
Stefan Hajnoczi
4a9c04672a Cache CPUClass for use in hot code paths.
Add CPUTLBEntryFull, probe_access_full, tlb_set_page_full.
 Add generic support for TARGET_TB_PCREL.
 tcg/ppc: Optimize 26-bit jumps using STQ for POWER 2.07
 target/sh4: Fix TB_FLAG_UNALIGN

Merge tag 'pull-tcg-20221004' of https://gitlab.com/rth7680/qemu into staging

Cache CPUClass for use in hot code paths.
Add CPUTLBEntryFull, probe_access_full, tlb_set_page_full.
Add generic support for TARGET_TB_PCREL.
tcg/ppc: Optimize 26-bit jumps using STQ for POWER 2.07
target/sh4: Fix TB_FLAG_UNALIGN

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmM8jXEdHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/oEggArAHK8FtydfQ4ZwnF
# SjXfpdP50OC0SZn3uBN93FZOrxz9UYG9t1oDHs39J/+b/u2nwJYch//EH2k+NtOW
# hc3iIgS9bWgs/UWZESkViKQccw7gpYlc21Br38WWwFNEFyecX0p+e9pJgld5rSv1
# mRGvCs5J2svH2tcXl/Sb/JWgcumOJoG7qy2aLyJGolR6UOfwcfFMzQXzq8qjpRKH
# Jh84qusE/rLbzBsdN6snJY4+dyvUo03lT5IJ4d+FQg2tUip+Qqt7pnMbsqq6qF6H
# R6fWU1JTbsh7GxXJwQJ83jLBnUsi8cy6FKrZ3jyiBq76+DIpR0PqoEe+PN/weInU
# TN0z4g==
# =RfXJ
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 04 Oct 2022 15:45:53 EDT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-tcg-20221004' of https://gitlab.com/rth7680/qemu:
  target/sh4: Fix TB_FLAG_UNALIGN
  tcg/ppc: Optimize 26-bit jumps
  accel/tcg: Introduce TARGET_TB_PCREL
  accel/tcg: Introduce tb_pc and log_pc
  hw/core: Add CPUClass.get_pc
  include/hw/core: Create struct CPUJumpCache
  accel/tcg: Inline tb_flush_jmp_cache
  accel/tcg: Do not align tb->page_addr[0]
  accel/tcg: Use DisasContextBase in plugin_gen_tb_start
  accel/tcg: Use bool for page_find_alloc
  accel/tcg: Remove PageDesc code_bitmap
  include/exec: Introduce TARGET_PAGE_ENTRY_EXTRA
  accel/tcg: Introduce tlb_set_page_full
  accel/tcg: Introduce probe_access_full
  accel/tcg: Suppress auto-invalidate in probe_access_internal
  accel/tcg: Drop addr member from SavedIOTLB
  accel/tcg: Rename CPUIOTLBEntry to CPUTLBEntryFull
  cputlb: used cached CPUClass in our hot-paths
  hw/core/cpu-sysemu: used cached class in cpu_asidx_from_attrs
  cpu: cache CPUClass in CPUState for hot code paths

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-05 10:17:02 -04:00
Richard Henderson
fbf59aad17 accel/tcg: Introduce tb_pc and log_pc
The availability of tb->pc will shortly be conditional.
Introduce accessor functions to minimize ifdefs.

Pass around a known pc to places like tcg_gen_code,
where the caller must already have the value.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-04 12:13:12 -07:00
Richard Henderson
e4fdf9df5b hw/core: Add CPUClass.get_pc
Populate this new method for all targets.  Always match
the result that would be given by cpu_get_tb_cpu_state,
as we will want these values to correspond in the logs.

Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (target/sparc)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
Cc: Eduardo Habkost <eduardo@habkost.net> (supporter:Machine core)
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> (supporter:Machine core)
Cc: "Philippe Mathieu-Daudé" <f4bug@amsat.org> (reviewer:Machine core)
Cc: Yanan Wang <wangyanan55@huawei.com> (reviewer:Machine core)
Cc: Michael Rolnik <mrolnik@gmail.com> (maintainer:AVR TCG CPUs)
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com> (maintainer:CRIS TCG CPUs)
Cc: Taylor Simpson <tsimpson@quicinc.com> (supporter:Hexagon TCG CPUs)
Cc: Song Gao <gaosong@loongson.cn> (maintainer:LoongArch TCG CPUs)
Cc: Xiaojuan Yang <yangxiaojuan@loongson.cn> (maintainer:LoongArch TCG CPUs)
Cc: Laurent Vivier <laurent@vivier.eu> (maintainer:M68K TCG CPUs)
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> (reviewer:MIPS TCG CPUs)
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> (reviewer:MIPS TCG CPUs)
Cc: Chris Wulff <crwulff@gmail.com> (maintainer:NiosII TCG CPUs)
Cc: Marek Vasut <marex@denx.de> (maintainer:NiosII TCG CPUs)
Cc: Stafford Horne <shorne@gmail.com> (odd fixer:OpenRISC TCG CPUs)
Cc: Yoshinori Sato <ysato@users.sourceforge.jp> (reviewer:RENESAS RX CPUs)
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (maintainer:SPARC TCG CPUs)
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> (maintainer:TriCore TCG CPUs)
Cc: Max Filippov <jcmvbkbc@gmail.com> (maintainer:Xtensa TCG CPUs)
Cc: qemu-arm@nongnu.org (open list:ARM TCG CPUs)
Cc: qemu-ppc@nongnu.org (open list:PowerPC TCG CPUs)
Cc: qemu-riscv@nongnu.org (open list:RISC-V TCG CPUs)
Cc: qemu-s390x@nongnu.org (open list:S390 TCG CPUs)
2022-10-04 12:13:12 -07:00
Stefan Hajnoczi
fafd35a6da Pull request trivial patches branch 20220930-v2
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEzS913cjjpNwuT1Fz8ww4vT8vvjwFAmM7XoISHGxhdXJlbnRA
 dml2aWVyLmV1AAoJEPMMOL0/L748D/0QAKbYtTWjhFPeapjZVoTv13YrTvczWrcF
 omL6IZivVq0t7hun4iem0DwmvXJELMGexEOTvEJOzM19IIlvvwvOsI8xnxpcMnEY
 6GKVbs53Ba0bg2yh7Dll2W9jkou9eX27DwUHMVF8KX7qqsbU+WyD/vdGZitgGt+T
 8yna7kzVvNVsdB3+DbIatI5RzzHeu4OqeuH/WCtAyzCaLB64UYTcHprskxIp4+wp
 dR+EUSoDEr9Qx4PC+uVEsTFK1zZjyAYNoNIkh6fhlkRvDJ1uA75m3EJ57P8xPPqe
 VbVkPMKi0d4c52m6XvLsQhyYryLx/qLLUAkJWVpY66aHcapYbZAEAfZmNGTQLrOJ
 qIOJzIkOdU6l3pRgXVdVCgkHRc2HETwET2LyVbNkUz/vBlW2wOZQbZFbezComael
 bQ/gNBYqP+eOGnZzeWbKBGHr/9QDBClNufidIMC+sOiUw0iSifzjkFwvH7IElx6K
 EQCOSV6pOhKVlinTpmBbk1XD3xDkQ7ZidiLT9g+P1c8dExrXBhWOnfUHueISb8+s
 KKMozuxQ/6/3c/DP5hwI9cKPEWEbqJfq1kMuxIvEivKGwUIqX2yq4VJ+hSlYJ+CW
 nGjXZldtf4KwH+cTsxyPmdZRR5Q7+ODr5Xo7GNvEKBuDsHs7uUl1c3vvOykQgje9
 +dyJR6TfbQWn
 =aK29
 -----END PGP SIGNATURE-----

Merge tag 'trivial-branch-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu into staging

Pull request trivial patches branch 20220930-v2

# gpg: Signature made Mon 03 Oct 2022 18:13:22 EDT
# gpg:                using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg:                issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg:                 aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C

* tag 'trivial-branch-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu:
  docs: Update TPM documentation for usage of a TPM 2
  Use g_new() & friends where that makes obvious sense
  Drop superfluous conditionals around g_free()
  block/qcow2-bitmap: Add missing cast to silent GCC error
  checkpatch: ignore target/hexagon/imported/* files
  mem/cxl_type3: fix GPF DVSEC
  .gitignore: add .cache/ to .gitignore
  hw/virtio/vhost-shadow-virtqueue: Silence GCC error "maybe-uninitialized"

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-04 14:04:18 -04:00
Markus Armbruster
76eb88b12b Drop superfluous conditionals around g_free()
There is no need to guard g_free(P) with if (P): g_free(NULL) is safe.
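
For illustration, a minimal self-contained example of the pattern being removed (GLib only; the function names are made up for the example):

```
#include <glib.h>

/* Before: superfluous guard around g_free(). */
static void release_guarded(char *p)
{
    if (p) {
        g_free(p);
    }
}

/* After: g_free(NULL) is documented to be a no-op, so call it directly. */
static void release(char *p)
{
    g_free(p);
}

int main(void)
{
    release_guarded(g_strdup("x"));
    release(NULL);              /* safe: does nothing */
    return 0;
}
```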

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220923090428.93529-1-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-10-04 00:10:11 +02:00
Ray Zhang
c4ef867f29 target/i386/kvm: fix kvmclock_current_nsec: Assertion `time.tsc_timestamp <= migration_tsc' failed
New KVM_CLOCK flags were added in the kernel (commit c68dc1b577eabd5605c6c7c08f3e07ae18d30d5d):
```
+ #define KVM_CLOCK_VALID_FLAGS						\
+	(KVM_CLOCK_TSC_STABLE | KVM_CLOCK_REALTIME | KVM_CLOCK_HOST_TSC)

	case KVM_CAP_ADJUST_CLOCK:
-		r = KVM_CLOCK_TSC_STABLE;
+		r = KVM_CLOCK_VALID_FLAGS;
```

kvm_has_adjust_clock_stable needs to handle additional flags,
so that s->clock_is_reliable can be true and kvmclock_current_nsec doesn't need to be called.
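
As a rough sketch of the check being described (not the verbatim QEMU patch; the function names are illustrative, and the KVM_CLOCK_* values below follow my reading of the kernel UAPI header, so treat them as assumptions):

```
#include <stdbool.h>

#define KVM_CLOCK_TSC_STABLE  (1 << 1)
#define KVM_CLOCK_REALTIME    (1 << 2)
#define KVM_CLOCK_HOST_TSC    (1 << 3)

/* Before: an equality test breaks as soon as the kernel reports extra flags. */
static bool clock_stable_old(int caps)
{
    return caps == KVM_CLOCK_TSC_STABLE;
}

/* After: only the TSC_STABLE bit matters; other KVM_CLOCK_* flags are ignored. */
static bool clock_stable_new(int caps)
{
    return (caps & KVM_CLOCK_TSC_STABLE) != 0;
}
```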

Signed-off-by: Ray Zhang <zhanglei002@gmail.com>
Message-Id: <20220922100523.2362205-1-zhanglei002@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-01 21:16:36 +02:00
Paolo Bonzini
efcca7ef17 target/i386: introduce insn_get_addr
The "O" operand type in the Intel SDM needs to load an 8- to 64-bit
unsigned value, while insn_get is limited to 32 bits.  Extract the code
out of disas_insn and into a separate function.
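
A sketch of what such a helper could look like inside translate.c, using the existing x86_ld*_code fetch helpers and QEMU-internal types; treat the exact shape as approximate rather than the committed code:

```
static target_ulong insn_get_addr(CPUX86State *env, DisasContext *s, MemOp ot)
{
    switch (ot) {
    case MO_8:
        return x86_ldub_code(env, s);
    case MO_16:
        return x86_lduw_code(env, s);
    case MO_32:
        return x86_ldl_code(env, s);
#ifdef TARGET_X86_64
    case MO_64:
        return x86_ldq_code(env, s);
#endif
    default:
        g_assert_not_reached();
    }
}
```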

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
Paolo Bonzini
5c2f60bd1b target/i386: REPZ and REPNZ are mutually exclusive
The later prefix wins if both are present; make this show in s->prefix too.
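
A sketch of the corresponding prefix-decoding fragment for the 0xf2/0xf3 bytes (approximate, not the verbatim patch):

```
case 0xf3:
    prefixes |= PREFIX_REPZ;
    prefixes &= ~PREFIX_REPNZ;   /* the later prefix wins */
    break;
case 0xf2:
    prefixes |= PREFIX_REPNZ;
    prefixes &= ~PREFIX_REPZ;
    break;
```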

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
Paolo Bonzini
ca4b1b43bc target/i386: fix INSERTQ implementation
INSERTQ is defined to not modify any bits in the lower 64 bits of the
destination, other than the ones being replaced with bits from the
source operand.  QEMU instead is using unshifted bits from the source
for those bits.
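
A self-contained model of the documented behaviour (illustrative only, not QEMU's helper): only length bits starting at the given index may change in the low 64 bits of the destination, and they must come from the shifted source bits.

```
#include <stdint.h>

/* Model of INSERTQ's effect on the low 64 bits of the destination:
 * take `length` low bits of src, place them at bit `index` of dst,
 * and leave every other dst bit untouched (length == 0 means 64 bits). */
static uint64_t insertq_low64(uint64_t dst, uint64_t src,
                              unsigned index, unsigned length)
{
    uint64_t mask = length ? (1ULL << length) - 1 : ~0ULL;

    return (dst & ~(mask << index)) | ((src & mask) << index);
}
```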

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
Paolo Bonzini
034668c329 target/i386: correctly mask SSE4a bit indices in register operands
SSE4a instructions EXTRQ and INSERTQ have two bit-index operands that can be
immediates or taken from an XMM register.  In both cases, the fields are
6 bits wide and the top two bits in the byte are ignored.  translate.c is
doing that correctly for the immediate case, but not for the XMM case, so
fix it.
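
A minimal sketch of the masking that has to happen in both cases (illustrative helper, not QEMU code):

```
#include <stdint.h>

/* The index/length fields are 6 bits wide; the top two bits of the byte
 * must be ignored whether it comes from an immediate or from XMM bits. */
static inline unsigned sse4a_field(uint8_t raw)
{
    return raw & 0x3f;
}
```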

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
Paolo Bonzini
958e1dd130 target/i386: Raise #GP on unaligned m128 accesses when required.
Many instructions which load/store 128-bit values are supposed to
raise #GP when the memory operand isn't 16-byte aligned. This includes:
 - Instructions explicitly requiring memory alignment (Exceptions Type 1
   in the "AVX and SSE Instruction Exception Specification" section of
   the SDM)
 - Legacy SSE instructions that load/store 128-bit values (Exceptions
   Types 2 and 4).

This change sets MO_ALIGN_16 on 128-bit memory accesses that require
16-byte alignment. It adds cpu_record_sigbus and cpu_do_unaligned_access
hooks that simulate a #GP exception in qemu-user and qemu-system,
respectively.
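
A rough sketch of the idea (the locals require_alignment and lo are made up; the real patch threads an alignment MemOp through the 128-bit load/store helpers in translate.c):

```
/* Sketch: request 16-byte alignment on the first 8-byte access of an
 * aligned 128-bit operand, so an unaligned address takes the slow path
 * and reaches the unaligned-access/SIGBUS hooks, which raise #GP. */
MemOp mop = MO_LEUQ | (require_alignment ? MO_ALIGN_16 : 0);

tcg_gen_qemu_ld_i64(lo, s->A0, s->mem_index, mop);
```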

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-18 09:17:40 +02:00
Ilya Leoshkevich
950936681f target/i386: Make translator stop before the end of a page
Right now the translator stops right *after* the end of a page, which
breaks reporting of fault locations when the last instruction of a
multi-insn translation block crosses a page boundary.

An implementation, like the one arm and s390x have, would require an
i386 length disassembler, which is burdensome to maintain. Another
alternative would be to single-step at the end of a guest page, but
this may come with a performance impact.

Fix by snapshotting disassembly state and restoring it after we figure
out we crossed a page boundary. This includes rolling back cc_op
updates and emitted ops.
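
As a hedged sketch of the snapshot-and-rollback pattern (prev_insn_end and crossed_page are illustrative names, not necessarily the committed ones):

```
/* Before decoding an insn, remember where the previous insn's ops ended. */
s->prev_insn_end = tcg_last_op();

/* ... decode ...; if the insn turned out to cross into the next page: */
if (crossed_page) {
    tcg_remove_ops_after(s->prev_insn_end);   /* drop the ops emitted so far */
    s->base.is_jmp = DISAS_TOO_MANY;          /* end the TB before this insn */
}
```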

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1143
Message-Id: <20220817150506.592862-4-iii@linux.ibm.com>
[rth: Simplify end-of-insn cross-page checks.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-06 08:04:26 +01:00
Richard Henderson
306c872103 accel/tcg: Add pc and host_pc params to gen_intermediate_code
Pass these along to translator_loop -- pc may be used instead
of tb->pc, and host_pc is currently unused.  Adjust all targets
at one time.
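
The adjusted entry point looks roughly like the following (a prototype sketch; exact parameter order is approximate):

```
/* Sketch of the per-target entry point after the change: the start pc and
 * the corresponding host pc are passed explicitly instead of read from tb. */
void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
                           target_ulong pc, void *host_pc);
```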

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-06 08:04:26 +01:00
Richard Henderson
dac8d19bdb accel/tcg: Remove translator_ldsw
The only user can easily use translator_lduw and
adjust the type to signed during the return.
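
In other words, something along these lines at the call site (sketch; imm is an illustrative local):

```
/* Fetch unsigned 16 bits and recover the sign by casting on return. */
int16_t imm = (int16_t)translator_lduw(env, &s->base, pc);
```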

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-06 08:04:26 +01:00
Paul Brook
a64fc26919 target/i386: AVX+AES helpers prep
Make the AES vector helpers AVX ready

No functional changes to existing helpers

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-22-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
5a09df21f7 target/i386: AVX pclmulqdq prep
Make the pclmulqdq helper AVX ready

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-21-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
0e29cea589 target/i386: Rewrite blendv helpers
Rewrite the blendv helpers so that they can easily be extended to support
the AVX encodings, which make all 4 arguments explicit.

No functional changes to the existing helpers
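
For reference, a standalone model of the byte-wise select that the AVX form makes four-operand (illustrative only, not QEMU's helper):

```
#include <stdint.h>

/* Byte-wise BLENDV semantics: the sign bit of each mask byte selects the
 * source byte, otherwise the destination byte is kept.  With AVX, the
 * output, both inputs and the mask are all explicit operands. */
static void pblendvb_model(uint8_t *out, const uint8_t *dst,
                           const uint8_t *src, const uint8_t *mask, int n)
{
    for (int i = 0; i < n; i++) {
        out[i] = (mask[i] & 0x80) ? src[i] : dst[i];
    }
}
```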

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-20-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
fd17264ad1 target/i386: Misc AVX helper prep
Fix up various vector helpers that either trivially extend to 256 bits
or don't have 256-bit variants.

No functional changes to existing helpers

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-19-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
6567ffb4f2 target/i386: Destructive FP helpers for AVX
Prepare the horizontal arithmetic vector helpers for AVX.
These currently use a dummy Reg-typed variable to store the result and then
assign the whole register.  This will cause 128-bit operations to corrupt
the upper half of the register, so replace it with explicit temporaries
and element assignments.
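
A tiny standalone model of the fix (illustrative): read every source element into scalar temporaries before writing any destination element, so the helper stays correct when the destination aliases a source and only the low 128 bits are written.

```
#include <stdint.h>

/* Model of a two-element horizontal add done safely in place. */
static void hadd2_model(uint32_t *d, const uint32_t *v, const uint32_t *s)
{
    uint32_t t0 = v[0] + v[1];   /* read all inputs first */
    uint32_t t1 = s[0] + s[1];

    d[0] = t0;                   /* then assign element by element */
    d[1] = t1;
}
```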

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-18-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
6f218d6e99 target/i386: Dot product AVX helper prep
Make the dpps and dppd helpers AVX-ready

I can't see any obvious reason why dppd shouldn't work on 256 bit ymm
registers, but both AMD and Intel agree that it's xmm only.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-17-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
cbf4ad5498 target/i386: reimplement AVX comparison helpers
AVX includes an additional set of comparison predicates, some of which
our softfloat implementation does not expose as separate functions.
Rewrite the helpers in terms of floatN_compare for future extensibility.
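
A sketch of the idea for one predicate (not QEMU's actual predicate table; env->sse_status is the i386 softfloat status):

```
/* Express "less-or-equal (ordered, non-signalling)" via the generic
 * comparison result instead of a dedicated softfloat function. */
FloatRelation r = float32_compare_quiet(a, b, &env->sse_status);
bool le = (r == float_relation_less) || (r == float_relation_equal);
```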

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-24-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
3403cafeee target/i386: Floating point arithmetic helper AVX prep
Prepare the "easy" floating point vector helpers for AVX

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-16-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook
d45b0de63d target/i386: Destructive vector helpers for AVX
These helpers need to take special care to avoid overwriting source values
before the whole result has been calculated.  Currently they use a dummy
Reg-typed variable to store the result and then assign the whole register.
This will cause 128-bit operations to corrupt the upper half of the register,
so replace it with explicit temporaries and element assignments.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-14-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00