Commit Graph

290 Commits

Author SHA1 Message Date
Philippe Mathieu-Daudé
30ca39244b target/i386: Simplify TARGET_X86_64 #ifdef'ry
Merge two consecutive TARGET_X86_64 blocks.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-4-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Ilya Leoshkevich
4e116893c6 accel/tcg: Add DisasContextBase argument to translator_ld*
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
[rth: Split out of a larger patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Lara Lazier
52fb8ad37a target/i386: Added vVMLOAD and vVMSAVE feature
The feature allows the VMSAVE and VMLOAD instructions to execute in guest mode without
causing a VMEXIT. (APM2 15.33.1)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
7760bb069f target/i386: Added changed priority check for VIRQ
Writes to cr8 affect v_tpr. This could set or unset an interrupt
request as the priority might have changed.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
66a0201ba7 target/i386: Added ignore TPR check in ctl_has_irq
The APM2 states that if V_IGN_TPR is nonzero, the current
virtual interrupt ignores the (virtual) TPR.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
b67e2796a1 target/i386: Added VGIF V_IRQ masking capability
VGIF provides masking capability for when virtual interrupts
are taken. (APM2)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
e3126a5c92 target/i386: Moved int_ctl into CPUX86State structure
Moved int_ctl into the CPUX86State structure.  It removes some
unnecessary stores and loads, and prepares for tracking the vIRQ
state even when it is masked due to vGIF.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
900eeca579 target/i386: Added VGIF feature
VGIF allows STGI and CLGI to execute in guest mode and control virtual
interrupts in guest mode.
When the VGIF feature is enabled then:
 * executing STGI in the guest sets bit 9 of the VMCB offset 60h.
 * executing CLGI in the guest clears bit 9 of the VMCB offset 60h.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210730070742.9674-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
97afb47e15 target/i386: VMRUN and VMLOAD canonicalizations
APM2 requires that VMRUN and VMLOAD canonicalize (sign-extend to bit 63
from bit 48/57) all base addresses in the segment registers that they
respectively load.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210804113058.45186-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
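
The canonicalization described above amounts to replicating the top implemented
virtual-address bit into all the bits above it; a minimal C sketch (the helper
name and parameter are illustrative, not the QEMU code):

#include <stdint.h>

/* Sign-extend from the top implemented virtual-address bit (47 for 48-bit,
 * 56 for 57-bit addressing) up to bit 63.  Assumes the usual arithmetic
 * right shift on signed integers, as GCC and Clang provide. */
static uint64_t canonicalize_base(uint64_t base, unsigned int virt_addr_bits)
{
    unsigned int shift = 64 - virt_addr_bits;   /* 16 for 48-bit, 7 for 57-bit */
    return (uint64_t)(((int64_t)(base << shift)) >> shift);
}
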
Lara Lazier
24d84c7e48 target/i386: Fixed size of constant for Windows
~0UL has 64 bits on Linux and 32 bits on Windows.

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/512
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210812111056.26926-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-13 14:31:49 +02:00
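
A small sketch of the portability pitfall described above (variable names are
illustrative): unsigned long is 64 bits on LP64 Linux but 32 bits on LLP64
Windows, so ~0UL is not an all-ones 64-bit value everywhere, while a
fixed-width type is.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t mask_ul  = ~0UL;          /* 0x00000000ffffffff on Windows (LLP64) */
    uint64_t mask_u64 = ~(uint64_t)0;  /* 0xffffffffffffffff everywhere */

    printf("%016" PRIx64 " %016" PRIx64 "\n", mask_ul, mask_u64);
    return 0;
}
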
Paolo Bonzini
f594bfb79f target/i386: fix typo in ctl_has_irq
The shift constant was incorrect, causing int_prio to always be zero.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
[Rewritten commit message since v1 had already been included. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-29 10:15:52 +02:00
Lara Lazier
eceb4f0112 target/i386: Added consistency checks for event injection
VMRUN exits with SVM_EXIT_ERR if either:
 * the injected event has a reserved type, or
 * the injected event is of type 3 (exception) and the specified vector
 does not correspond to an exception.

This does not fix the entire exc_inj test in kvm-unit-tests.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210725090855.19713-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-29 10:15:52 +02:00
Claudio Fontana
5b8978d804 i386: do not call cpudef-only models functions for max, host, base
Some cpu properties have to be set only for cpu models in builtin_x86_defs,
registered with x86_register_cpu_model_type, and not for
cpu models "base", "max", and the subclass "host".

These properties are the ones set by the function x86_cpu_apply_props
(which also covers kvm_default_props and tcg_default_props),
and the "vendor" property for the KVM and HVF accelerators.

After recent refactoring of cpu, which also affected these properties,
they were instead set unconditionally for all x86 cpus.

This was detected as a bug with nested virtualization on AMD with cpu "host",
as svm was not turned on by default, due to the erroneous setting of
kvm_default_props via x86_cpu_apply_props, which set svm to "off".

Rectify the bug introduced in commit "i386: split cpu accelerators"
and document the functions that are builtin_x86_defs-only.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: f5cc5a5c ("i386: split cpu accelerators from cpu.c,"...)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/477
Message-Id: <20210723112921.12637-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-23 15:47:13 +02:00
Lara Lazier
3407259b20 target/i386: Added consistency checks for CR3
All MBZ bits in CR3 must be zero (APM2 15.5).
Added checks in both helper_vmrun and helper_write_crN.
When EFER.LMA is zero, the upper 32 bits need to be zeroed.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210723112740.45962-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-23 15:46:20 +02:00
Peter Maydell
7b7ca8ebde Bugfixes.
-----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmD5bn8UHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroP3AQgAjQ/YziEk0SwA6MeaWfNIdrhj4+I/
 7JXmNlTLRC622IyN9NmJu57Y9Z9PXp/yCLe8V1cTz8K3lnMSBD1ZR1vWB2FtjUnX
 0McaLzcRpmJCeezcKSDJYYVkMQVz2OvNvNyPVK0qRPkt6+knt+9kWNxYAKfsSkln
 L7knUYi4gtM0w0+kQLReohVSJOACQMzl35jXPSArsrWwbZyKZ1pQwgvM3pGMmPv4
 xYNebGjYZRgTul0c5PZsLh9F3TueeTfRvhtwtuyyXPNcvIlgAeV40NuUAXYI6wKF
 FEKtoaBTZUBEOSKK5Z/fYlN+C+e8ItlGurrqvucmjlCqIxotggEf+DYUNQ==
 =/WeY
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging

Bugfixes.

# gpg: Signature made Thu 22 Jul 2021 14:11:27 BST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream:
  configure: Let --without-default-features disable vhost-kernel and vhost-vdpa
  configure: Fix the default setting of the "xen" feature
  configure: Allow vnc to get disabled with --without-default-features
  configure: Fix --without-default-features propagation to meson
  meson: fix dependencies for modinfo
  configure: Drop obsolete check for the alloc_size attribute
  target/i386: Added consistency checks for EFER
  target/i386: Added consistency checks for CR4
  target/i386: Added V_INTR_PRIO check to virtual interrupts
  qemu-config: restore "machine" in qmp_query_command_line_options()
  usb: fix usb-host dependency check
  chardev-spice: add missing module_obj directive
  vl: Parse legacy default_machine_opts
  qemu-config: fix memory leak on ferror()
  qemu-config: never call the callback after an error, fix leak

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-22 18:32:02 +01:00
Lara Lazier
d499f196fe target/i386: Added consistency checks for EFER
EFER.SVME has to be set, and EFER reserved bits must
be zero.
In addition, the combinations
 * EFER.LMA or EFER.LME is non-zero and the processor does not support LM
 * non-zero EFER.LME and CR0.PG and zero CR4.PAE
 * non-zero EFER.LME and CR0.PG and zero CR0.PE
 * non-zero EFER.LME, CR0.PG, CR4.PAE, CS.L and CS.D
are all invalid.
(AMD64 Architecture Programmer's Manual, V2, 15.5)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-3-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-22 14:44:47 +02:00
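
The checks listed above can be illustrated with a minimal C sketch; the
constants follow the architectural bit positions, but the function, its
parameters, and the reserved-bit handling are assumptions, not the QEMU
implementation:

#include <stdbool.h>
#include <stdint.h>

#define EFER_SVME (1ULL << 12)
#define EFER_LME  (1ULL << 8)
#define EFER_LMA  (1ULL << 10)
#define CR0_PE    (1ULL << 0)
#define CR0_PG    (1ULL << 31)
#define CR4_PAE   (1ULL << 5)

/* reserved_mask marks the EFER bits that must be zero (architecture-defined). */
static bool efer_checks_ok(uint64_t efer, uint64_t reserved_mask,
                           uint64_t cr0, uint64_t cr4,
                           bool cpu_has_lm, bool cs_l, bool cs_d)
{
    if (!(efer & EFER_SVME)) {
        return false;                           /* EFER.SVME must be set */
    }
    if (efer & reserved_mask) {
        return false;                           /* reserved bits must be zero */
    }
    if ((efer & (EFER_LMA | EFER_LME)) && !cpu_has_lm) {
        return false;                           /* long mode not supported */
    }
    if ((efer & EFER_LME) && (cr0 & CR0_PG)) {
        if (!(cr4 & CR4_PAE) || !(cr0 & CR0_PE)) {
            return false;                       /* LME+PG requires PAE and PE */
        }
        if (cs_l && cs_d) {
            return false;                       /* CS.L and CS.D may not both be set */
        }
    }
    return true;
}
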
Lara Lazier
213ff024a2 target/i386: Added consistency checks for CR4
All MBZ bits in CR4 must be zero (APM2 15.5).
Added a reserved-bit mask and checks in both
helper_vmrun and helper_write_crN.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-22 14:44:47 +02:00
Lara Lazier
b128b25a5a target/i386: Added V_INTR_PRIO check to virtual interrupts
The APM2 states that the processor takes a virtual INTR interrupt
if V_IRQ and V_INTR_PRIO indicate that there is a virtual interrupt pending
whose priority is greater than the value in V_TPR.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-22 14:44:47 +02:00
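
A minimal sketch of the condition described above, using the int_ctl field
layout from the VMCB (offset 60h); the helper and macro names are
illustrative, not the QEMU definitions:

#include <stdbool.h>
#include <stdint.h>

#define V_TPR_MASK        0x0f   /* bits 3:0   */
#define V_IRQ_SHIFT       8      /* bit 8      */
#define V_INTR_PRIO_SHIFT 16     /* bits 19:16 */
#define V_INTR_PRIO_MASK  0x0f

static bool virtual_irq_pending(uint32_t int_ctl)
{
    bool v_irq     = (int_ctl >> V_IRQ_SHIFT) & 1;
    uint32_t prio  = (int_ctl >> V_INTR_PRIO_SHIFT) & V_INTR_PRIO_MASK;
    uint32_t v_tpr = int_ctl & V_TPR_MASK;

    /* take a virtual INTR only if one is pending with priority above V_TPR */
    return v_irq && prio > v_tpr;
}
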
Richard Henderson
b5cf742841 accel/tcg: Remove TranslatorOps.breakpoint_check
The hook is now unused, with breakpoints checked outside translation.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:47:05 -10:00
Richard Henderson
7b9810ea42 target/i386: Implement debug_check_breakpoint
Return false for RF set, as we do in i386_tr_breakpoint_check.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:47:05 -10:00
Richard Henderson
be9568b4e0 tcg: Rename helper_atomic_*_mmu and provide for user-only
Always provide the atomic interface using TCGMemOpIdx oi
and uintptr_t retaddr.  Rename from helper_* to cpu_* so
as to (mostly) match the exec/cpu_ldst.h functions, and
to emphasize that they are not callable from TCG directly.

Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:45:38 -10:00
Ziqiao Kong
84abdd7d27 target/i386: Correct implementation for FCS, FIP, FDS and FDP
Update FCS:FIP and FDS:FDP according to the Intel Manual Vol.1 8.1.8.
Note that CPUID.(EAX=07H,ECX=0H):EBX[bit 13] is not implemented by
design in this patch and will be added along with TCG features flag
in a separate patch later.

Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-Id: <20210530150112.74411-2-ziqiaokong@gmail.com>
[rth: Push FDS/FDP handling down into mod != 3 case; free last_addr.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-13 08:13:19 -07:00
Richard Henderson
bbdda9b74f target/i386: Split out do_fninit
Do not call helper_fninit directly from helper_xrstor.
Do call the new helper from do_fsave.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-13 08:13:19 -07:00
Ziqiao Kong
505910a6e2 target/i386: Trivial code motion and code style fix
A new pair of braces has to be added to declare variables in the case block.
The code style is also fixed to match the rest of translate.c during the
code motion.

Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-Id: <20210530150112.74411-1-ziqiaokong@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-13 08:13:19 -07:00
Dmitry Voronetskiy
080ac33542 target/i386: Tidy hw_breakpoint_remove
Since cpu_breakpoint and cpu_watchpoint are in a union,
the code should access only one of them.

Signed-off-by: Dmitry Voronetskiy <davoronetskiy@gmail.com>
Message-Id: <20210613180838.21349-1-davoronetskiy@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-13 08:13:19 -07:00
Peter Maydell
bd38ae26ce Add translator_use_goto_tb.
Cleanups in prep of breakpoint fixes.
 Misc fixes.
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmDpvModHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/1jgf+J1JMsPfxlSCwbbdc
 WEuWEcuKdcDFqhsePa6LaPYHTKuEEwavTG0kPbLIVZW2f6BTBeSYxAC6EWhq7pWo
 MGMhIOZM3fF0Yj+azuoybu9qxQ/K/aLM3GYt/OU00mvzturBezz+ka8MvWCrUwta
 XlhxhwnKsSP7lDWPBBjcdIIGiFJyxIRoU43giWaXrsvsc8ORJbmy7rgZfTKAit+w
 AvtQlc7TBi5nImz6f/KmEoy8mHEOhMf7czzo+v0u97lTiNK717/AHEwMfX9J585O
 GjlA9XmUUsNAciuLy48F1rHkgJxYAwo0G2shklpqPaOP5FctKm1reCSb8VEfAGaX
 Xq3UVA==
 =E9i/
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210710' into staging

Add translator_use_goto_tb.
Cleanups in prep of breakpoint fixes.
Misc fixes.

# gpg: Signature made Sat 10 Jul 2021 16:29:14 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20210710: (41 commits)
  cpu: Add breakpoint tracepoints
  tcg: Remove TCG_TARGET_HAS_goto_ptr
  accel/tcg: Log tb->cflags with -d exec
  accel/tcg: Split out log_cpu_exec
  accel/tcg: Move tb_lookup to cpu-exec.c
  accel/tcg: Move helper_lookup_tb_ptr to cpu-exec.c
  target/i386: Use cpu_breakpoint_test in breakpoint_handler
  tcg: Fix prologue disassembly
  target/xtensa: Use translator_use_goto_tb
  target/tricore: Use tcg_gen_lookup_and_goto_ptr
  target/tricore: Use translator_use_goto_tb
  target/sparc: Use translator_use_goto_tb
  target/sh4: Use translator_use_goto_tb
  target/s390x: Remove use_exit_tb
  target/s390x: Use translator_use_goto_tb
  target/rx: Use translator_use_goto_tb
  target/riscv: Use translator_use_goto_tb
  target/ppc: Use translator_use_goto_tb
  target/openrisc: Use translator_use_goto_tb
  target/nios2: Use translator_use_goto_tb
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-12 11:02:39 +01:00
Richard Henderson
50b208b848 target/i386: Use cpu_breakpoint_test in breakpoint_handler
The loop is performing a simple boolean test for the existence
of a BP_CPU breakpoint at EIP.  It also gets the iteration wrong
if we happen to have a BP_GDB breakpoint at the same address.

We have a function for this: cpu_breakpoint_test.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210620062317.1399034-1-richard.henderson@linaro.org>
2021-07-09 20:05:27 -07:00
Richard Henderson
b473534d5d target/i386: Use translator_use_goto_tb
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09 09:42:28 -07:00
Philippe Mathieu-Daudé
1797b08d24 tcg: Avoid including 'trace-tcg.h' in target translate.c
The root trace-events only declares a single TCG event:

  $ git grep -w tcg trace-events
  trace-events:115:# tcg/tcg-op.c
  trace-events:137:vcpu tcg guest_mem_before(TCGv vaddr, uint16_t info) "info=%d", "vaddr=0x%016"PRIx64" info=%d"

and only tcg/tcg-op.c uses it:

  $ git grep -l trace_guest_mem_before_tcg
  tcg/tcg-op.c

therefore it is pointless to include "trace-tcg.h" in each target
(because it is not used). Remove it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210629050935.2570721-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09 09:38:33 -07:00
Paolo Bonzini
533883fd7e target/i386: fix exceptions for MOV to DR
Use raise_exception_ra (without error code) when raising the illegal
opcode exception; raise #GP when setting bits 63:32 of DR6 or DR7.

Move helper_get_dr to sysemu/ since it is a privileged instruction
that is not needed on user-mode emulators.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-09 18:21:34 +02:00
Lara Lazier
acf23ffb58 target/i386: Added DR6 and DR7 consistency checks
DR6[63:32] and DR7[63:32] are reserved and need to be zero.
(AMD64 Architecture Programmer's Manual, V2, 15.5)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210705081802.18960-3-laramglazier@gmail.com>
[Ignore for 32-bit builds. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-09 18:21:33 +02:00
Lara Lazier
481077b28b target/i386: Added MSRPM and IOPM size check
The address of the last entry in the MSRPM and
in the IOPM must be smaller than the largest physical address.
(APM2 15.10-15.11)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210705081802.18960-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-09 18:21:33 +02:00
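
A minimal sketch of the size check described above; the map sizes follow the
APM2 (2 pages for the MSRPM, 3 pages for the IOPM), while the function and
constant names are illustrative, not the QEMU code:

#include <stdbool.h>
#include <stdint.h>

#define SVM_PAGE_SIZE   4096ULL
#define SVM_MSRPM_SIZE  (2 * SVM_PAGE_SIZE)   /* MSR permission map */
#define SVM_IOPM_SIZE   (3 * SVM_PAGE_SIZE)   /* I/O permission map */

static bool permission_map_ok(uint64_t base, uint64_t map_size,
                              uint64_t max_phys_addr)
{
    /* the address of the last byte of the map must be below max_phys_addr */
    return base + map_size - 1 < max_phys_addr;
}
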
David Edmondson
48e5c98a38 target/i386: Move X86XSaveArea into TCG
Given that TCG is now the only consumer of X86XSaveArea, move the
structure definition and associated offset declarations and checks to a
TCG specific header.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-9-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 08:33:51 +02:00
David Edmondson
fea4500841 target/i386: Populate x86_ext_save_areas offsets using cpuid where possible
Rather than relying on the X86XSaveArea structure definition,
determine the offset of XSAVE state areas using CPUID leaf 0xd where
possible (KVM and HVF).

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-8-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 08:33:48 +02:00
Richard Henderson
94fdf98721 target/i386: Improve bswap translation
Use a break instead of an ifdefed else.
There's no need to move the values through s->T0.
Remove TCG_BSWAP_IZ and the preceding zero-extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
2b836c2ac1 tcg: Add flags argument to tcg_gen_bswap16_*, tcg_gen_bswap32_i64
Implement the new semantics in the fallback expansion.
Change all callers to supply the flags that keep the
semantics unchanged locally.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Lara Lazier
e0375ec760 target/i386: Added Intercept CR0 writes check
When the selective CR0 write intercept is set, all writes to bits in
CR0 other than CR0.TS or CR0.MP cause a VMEXIT.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-5-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-16 15:02:41 +02:00
Lara Lazier
498df2a747 target/i386: Added consistency checks for CR0
The combination of unset CD and set NW bit in CR0 is illegal.
CR0[63:32] are also reserved and need to be zero.
(AMD64 Architecture Programmer's Manual, V2, 15.5)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-4-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-16 15:02:40 +02:00
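
A minimal sketch of the CR0 checks described above (constant names are
illustrative, not the QEMU definitions):

#include <stdbool.h>
#include <stdint.h>

#define CR0_NW (1ULL << 29)
#define CR0_CD (1ULL << 30)

static bool cr0_checks_ok(uint64_t cr0)
{
    if (!(cr0 & CR0_CD) && (cr0 & CR0_NW)) {
        return false;               /* NW set with CD clear is illegal */
    }
    if (cr0 >> 32) {
        return false;               /* bits 63:32 are reserved, must be zero */
    }
    return true;
}
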
Lara Lazier
7eb54ca95d target/i386: Added consistency checks for VMRUN intercept and ASID
Zero VMRUN intercept and ASID should cause an immediate VMEXIT
during the consistency checks performed by VMRUN.
(AMD64 Architecture Programmer's Manual, V2, 15.5)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-3-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-16 15:02:40 +02:00
Lara Lazier
813c6459ee target/i386: Refactored intercept checks into cpu_svm_has_intercept
Added cpu_svm_has_intercept to reduce duplication when checking the
corresponding intercept bit outside of cpu_svm_check_intercept_param

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-16 15:02:39 +02:00
Richard Henderson
e18a6ec8c4 target/i386: Fix decode of cr8
A recent cleanup did not recognize that there are two ways
to encode cr8: one via the LOCK prefix and the other via the REX prefix.

Fixes: 7eff2e7c
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/380
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210602035511.96834-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-04 13:47:08 +02:00
Paolo Bonzini
1b627f389f target/i386: tcg: fix switching from 16-bit to 32-bit tasks or vice versa
The format of the task state segment is governed by bit 3 in the
descriptor type field.  On a task switch, the format for saving
is given by the current value of TR's type field, while the
format for loading is given by the new descriptor.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-04 13:47:08 +02:00
Paolo Bonzini
a5505f6b5b target/i386: tcg: fix loading of registers from 16-bit TSS
According to the manual, the high 16 bits of the registers are preserved
when switching to a 16-bit task.  Implement this in switch_tss_ra.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-04 13:47:08 +02:00
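
A minimal sketch of the register-loading rule described above (names are
illustrative, not the switch_tss_ra code): only the low 16 bits come from
the 16-bit TSS, the high 16 bits keep their previous value.

#include <stdint.h>

static uint32_t load_reg_from_tss16(uint32_t old_value, uint16_t tss_value)
{
    return (old_value & 0xffff0000u) | tss_value;
}
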
Paolo Bonzini
28f6aa1178 target/i386: tcg: fix segment register offsets for 16-bit TSS
The TSS offsets in the manuals have only 2-byte slots for the
segment registers.  QEMU incorrectly uses 4-byte slots, so
that SS overlaps the LDT selector.

Resolves: #382
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-04 13:47:08 +02:00
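
For reference, the tail of the 16-bit TSS uses 2-byte slots for the segment
selectors, which is why 4-byte slots make SS overlap the LDT selector; an
illustrative struct (not the QEMU code), with offsets from the 16-bit TSS
format:

#include <stdint.h>

struct tss16_tail {
    uint16_t es;    /* offset 0x22 */
    uint16_t cs;    /* offset 0x24 */
    uint16_t ss;    /* offset 0x26 */
    uint16_t ds;    /* offset 0x28 */
    uint16_t ldt;   /* offset 0x2a */
};
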
Richard Henderson
8da5f1dbb0 softfloat: Introduce Floatx80RoundPrec
Use an enumeration instead of raw 32/64/80 values.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-03 14:04:02 -07:00
Richard Henderson
119065574d hw/core: Constify TCGCPUOps
We no longer have any runtime modifications to this struct,
so declare them all const.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20210227232519.222663-3-richard.henderson@linaro.org>
2021-05-26 15:33:59 -07:00
Peter Maydell
972e848b53 s390x fixes and cleanups; also related fixes in xtensa,
arm, and x86 code
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEw9DWbcNiT/aowBjO3s9rk8bwL68FAmCmVLMSHGNvaHVja0By
 ZWRoYXQuY29tAAoJEN7Pa5PG8C+vihcP/2yiwThQBll+ZDKYimRu91hMkmty+24c
 F3YNv+6HnKTmnFPoo35O1iH4phd5LVZJTVicOl+XAw75DzFMpwMh8ukfq4hIYvPY
 9QSYdDBj/JX0CHTo0u2Wl92dr87vsVGwMwgqojnNZXUOMYyQGpDT/RgHqTfoCzNH
 Dl6/MqgmTNBSCZGS6GOfkmUC6bT9ZTaiSHpXPJCfvgpANDG6l2Mblz8ihcOjygoP
 e8KVXKERoUGViT+MXTAJLUlMu6valDFY6pZUh6u3EOzqqLSRXrAJACLz+zv77X7P
 Ryn03md1KWj0PRh8eEC/VfadeRbIXHrhw5T8oK8HwHW4VErL5fcAwt1EybRNWe6U
 UEj446qT37hwA9TthqZtZiR+aZHO70JRmf0svnxXaM6WepRVxzwHexDnKNi6gJvd
 cdH+yIcIzu5fEnoHNC0famYdJT4f+hmPj2r+FtbMWZXLRxMT26p4mlE0joY7EjOg
 saGBlGSdHTcSGk2X7RV/iX38s/BYpOuYM6dsi6EKn3Z1/vQbvrJ9ZZWaDDhmykJE
 1n4nOgwj7kOolNw3VlJOEBhJvozh1mf9Sr0SsXEAQQYWLwPFgX4nNnOwkk5jBTY5
 fH5Oy/aUk5tf8mmST8Sw/oSM377YC+ez3o8mtKkXtu3H0W4HTm1mnSIHbWG7xhw2
 WjmfHyRrEWT1
 =secp
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/cohuck-gitlab/tags/s390x-20210520-v2' into staging

s390x fixes and cleanups; also related fixes in xtensa,
arm, and x86 code

# gpg: Signature made Thu 20 May 2021 13:23:15 BST
# gpg:                using RSA key C3D0D66DC3624FF6A8C018CEDECF6B93C6F02FAF
# gpg:                issuer "cohuck@redhat.com"
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>" [unknown]
# gpg:                 aka "Cornelia Huck <huckc@linux.vnet.ibm.com>" [full]
# gpg:                 aka "Cornelia Huck <cornelia.huck@de.ibm.com>" [full]
# gpg:                 aka "Cornelia Huck <cohuck@kernel.org>" [unknown]
# gpg:                 aka "Cornelia Huck <cohuck@redhat.com>" [unknown]
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0  18CE DECF 6B93 C6F0 2FAF

* remotes/cohuck-gitlab/tags/s390x-20210520-v2:
  tests/tcg/x86_64: add vsyscall smoke test
  target/i386: Make sure that vsyscall's tb->size != 0
  vfio-ccw: Attempt to clean up all IRQs on error
  hw/s390x/ccw: Register qbus type in abstract TYPE_CCW_DEVICE parent
  vfio-ccw: Permit missing IRQs
  accel/tcg: Assert that tb->size != 0 after translation
  target/xtensa: Make sure that tb->size != 0
  target/arm: Make sure that commpage's tb->size != 0
  target/s390x: Fix translation exception on illegal instruction

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-20 18:42:00 +01:00
Ilya Leoshkevich
9b21049edd target/i386: Make sure that vsyscall's tb->size != 0
tb_gen_code() assumes that tb->size must never be zero, otherwise it
may produce spurious exceptions. For x86_64 this may happen when
creating a translation block for the vsyscall page.

Fix by pretending that vsyscall translation blocks have at least one
instruction.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210519045738.1335210-2-iii@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-05-20 14:19:30 +02:00
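
A minimal sketch of the invariant described above (illustrative only, not the
actual QEMU change): a translation block must never report a size of zero, so
an otherwise empty block, such as one for the vsyscall page, pretends to
contain one byte.

#include <stdint.h>

static uint32_t tb_size_from_range(uint64_t pc_first, uint64_t pc_next)
{
    uint64_t size = pc_next - pc_first;
    return size ? (uint32_t)size : 1;   /* pretend at least one byte was translated */
}
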
Richard Henderson
7fb7c42394 target/i386: Remove user-only i/o stubs
With the previous patch for check_io, we now have enough for
the compiler to dead-code eliminate all of the i/o helpers.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-51-richard.henderson@linaro.org>
2021-05-19 12:17:23 -05:00
Richard Henderson
d76b9c6f07 target/i386: Move helper_check_io to sysemu
We never allow i/o from user-only, and the tss check
that helper_check_io does will always fail.  Use an ifdef
within gen_check_io and return false, indicating that an
exception is known to be raised.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-50-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
e497803556 target/i386: Create helper_check_io
Drop helper_check_io[bwl] and expose their common
subroutine to tcg directly.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210514151342.384376-49-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
1bca40fe42 target/i386: Pass in port to gen_check_io
Pass in a pre-truncated TCGv_i32 value.  We were doing the
truncation of EDX in multiple places, now only once per insn.
While all callers use s->tmp2_i32, for cleanliness of the
subroutine, use a parameter anyway.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-48-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
bc2e436d7c target/i386: Tidy gen_check_io
Get cur_eip from DisasContext.  Do not require the caller
to use svm_is_rep; get prefix from DisasContext.  Use the
proper symbolic constants for SVM_IOIO_*.

While we're touching all call sites, return bool in
preparation for gen_check_io raising #GP.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-47-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
244843b757 target/i386: Exit tb after wrmsr
At minimum, wrmsr can change efer, which affects HF_LMA.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-46-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
f7803b7759 target/i386: Eliminate user stubs for read/write_crN, rd/wrmsr
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-45-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
27bd3216a7 target/i386: Inline user cpu_svm_check_intercept_param
The user-version is a no-op.  This lets us completely
remove tcg/user/svm_stubs.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-44-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
35e5a5d5cb target/i386: Unify invlpg, invlpga
Use a single helper, flush_page, to do the work.
Use gen_svm_check_intercept.
Perform the zero-extension for invlpga inline.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-43-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
4ea2449b58 target/i386: Move invlpg, hlt, monitor, mwait to sysemu
These instructions are all privileged.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-42-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
eb26784fe1 target/i386: Pass env to do_pause and do_hlt
Having the callers upcast to X86CPU is a waste, since we
don't need it.  We even have to recover env in do_hlt.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-41-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
7eff2e7c65 target/i386: Cleanup read_crN, write_crN, lmsw
Pull the svm intercept check into the translator.
Pull the entire implementation of lmsw into the translator.
Push the check for CR8LEG into the regno validation switch.
Unify the gen_io_start check between read/write.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-40-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
3d4fce8b8e target/i386: Remove user stub for cpu_vmexit
This function is only called from tcg/sysemu/.
There is no need for a stub in tcg/user/.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-39-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
b53605dbd2 target/i386: Remove pc_start argument to gen_svm_check_intercept
When exiting helper_svm_check_intercept via exception, cpu_vmexit
calls cpu_restore_state, which will recover eip and cc_op via unwind.
Therefore we do not need to store eip or cc_op before the call.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-38-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
d051ea04d0 target/i386: Tidy svm_check_intercept from tcg
The param argument to helper_svm_check_intercept_param is always 0;
eliminate it and rename to helper_svm_check_intercept.  Fold
gen_svm_check_intercept_param into gen_svm_check_intercept.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-37-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
ed3c4739e9 target/i386: Simplify gen_debug usage
Both invocations pass the start of the current instruction,
which is available as s->base.pc_next.  The function sets
is_jmp, so we can eliminate a second setting.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-36-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
b82055aece target/i386: Mark some helpers as noreturn
Any helper that always raises an exception or interrupt,
or simply exits to the main loop, can be so marked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-35-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
8d6806c7dd target/i386: Eliminate SVM helpers for user-only
Use STUB_HELPER to ensure that such calls are always eliminated.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-34-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
e6aeb948bb target/i386: Implement skinit in translate.c
Our sysemu implementation is a stub.  We can already intercept
instructions for vmexit, and raising #UD is trivial.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-33-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
b322b3afc1 target/i386: Assert !GUEST for user-only
For user-only, we do not need to check for VMM intercept.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-32-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
5d2238896a target/i386: Assert !SVME for user-only
Most of the VMM instructions are already disabled for user-only,
by being usable only from ring 0.

The spec is intentionally loose for VMMCALL, allowing the VMM to
define syscalls for user-only.  However, we're not emulating any
VMM, so VMMCALL can just raise #UD unconditionally.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-31-richard.henderson@linaro.org>
2021-05-19 12:16:48 -05:00
Richard Henderson
9f55e5a947 target/i386: Add stub generator for helper_set_dr
This removes an ifdef from the middle of disas_insn,
and ensures that the branch is not reachable.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-30-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
a6f62100a8 target/i386: Reorder DisasContext members
Sort all of the single-byte members to the same area
of the structure, eliminating 8 bytes of padding.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-29-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
3236c2ade2 target/i386: Fix the comment for repz_opt
After fixing a typo in the comment, fixup for CODING_STYLE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-28-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
305d08e512 target/i386: Reduce DisasContext jmp_opt, repz_opt to bool
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-27-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
c1de1a1ace target/i386: Leave TF in DisasContext.flags
It's just as easy to clear the flag with AND as with assignment.
In two cases the test for the bit can be folded together with
the test for HF_INHIBIT_IRQ_MASK.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-26-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
5862579473 target/i386: Reduce DisasContext popl_esp_hack and rip_offset to uint8_t
Both of these fields store the size of a single memory access,
so the range of values is 0-8.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-25-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
a77ca425d7 target/i386: Reduce DisasContext.vex_[lv] to uint8_t
Currently, vex_l is either {0,1}; if in the future we implement
AVX-512, the max value will be 2.  In vex_v we store a register
number.  This is 0-15 for SSE, and 0-31 for AVX-512.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-24-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
a8b9b657a0 target/i386: Reduce DisasContext.prefix to uint8_t
The highest bit in this set is 0x40 (PREFIX_REX).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-23-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
c651f3a3cb target/i386: Reduce DisasContext.override to int8_t
The range of values is -1 (none) to 5 (R_GS).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-22-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
c6ad6f44ed target/i386: Reduce DisasContext.flags to uint32_t
The value comes from tb->flags, which is uint32_t.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-21-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
0046060e5d target/i386: Remove DisasContext.f_st as unused
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-20-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
8ab1e4860b target/i386: Move rex_w into DisasContext
Treat this flag exactly like we treat the other rex bits.
The -1 initialization is unused; the two tests are > 0 and == 1,
so the value can be reduced to a bool.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-19-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
bbdb4237c5 target/i386: Move rex_r into DisasContext
Treat this flag exactly like we treat rex_b and rex_x.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-18-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
915ffe89a5 target/i386: Tidy REX_B, REX_X definition
Change the storage from int to uint8_t since the value is in {0,8}.
For x86_64 add 0 in the macros to (1) promote the type back to int,
and (2) make the macro an rvalue.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-17-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
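
A sketch of the macro technique described above; the struct is illustrative,
while adding 0 is what promotes the uint8_t field back to int and keeps the
macro an rvalue:

#include <stdint.h>

typedef struct {
    uint8_t rex_b;   /* 0 or 8 */
    uint8_t rex_x;   /* 0 or 8 */
} DisasContextSketch;

#ifdef TARGET_X86_64
#define REX_B(S) ((S)->rex_b + 0)
#define REX_X(S) ((S)->rex_x + 0)
#else
#define REX_B(S) 0
#define REX_X(S) 0
#endif
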
Richard Henderson
1e92b7275c target/i386: Introduce REX_PREFIX
The existing flag, x86_64_hregs, does not accurately describe
its setting.  It is true if and only if a REX prefix has been
seen.  Yes, that affects the "h" regs, but that's secondary.

Add PREFIX_REX and include this bit in s->prefix.  Add REX_PREFIX
so that the check folds away when x86_64 is compiled out.

Fold away the reg >= 8 check, because bit 3 of the register
number comes from the REX prefix in the first place.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-16-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
beedb93c04 target/i386: Assert !ADDSEG for x86_64 user-only
LMA disables traditional segmentation, exposing a flat address space.
This means that ADDSEG is off.

Since we're adding an accessor macro, pull the value directly out
of flags otherwise.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-15-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
73e90dc458 target/i386: Assert LMA for x86_64 user-only
LMA is a prerequisite for CODE64, so there is no way to disable it
for x86_64-linux-user, and there is no way to enable it for i386.

Since we're adding an accessor macro, pull the value directly out
of flags when we're not assuming a constant.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-14-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
eec7d0f838 target/i386: Assert CODE64 for x86_64 user-only
For x86_64 user-only, there is no way to leave 64-bit mode.

Without x86_64, there is no way to enter 64-bit mode.  There is
an existing macro to aid with that; simply place it in the right
place in the ifdef chain.

Since we're adding an accessor macro, pull the value directly out
of flags when we're not assuming a constant.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-13-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
b40a47a17f target/i386: Assert SS32 for x86_64 user-only
For user-only, SS32 == !VM86, because we are never in
real-mode.  Since we cannot enter vm86 mode for x86_64
user-only, SS32 is always set.

Since we're adding an accessor macro, pull the value
directly out of flags otherwise.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-12-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
9996dcfd67 target/i386: Assert CODE32 for x86_64 user-only
For user-only, CODE32 == !VM86, because we are never in real-mode.
Since we cannot enter vm86 mode for x86_64 user-only, CODE32 is
always set.

Since we're adding an accessor macro, pull the value directly out
of flags otherwise.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-11-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
f8a35846d5 target/i386: Assert !VM86 for x86_64 user-only
For i386-linux-user, we can enter vm86 mode via the vm86(2) syscall.
That syscall explicitly returns to 32-bit mode, and the syscall does
not exist for a 64-bit x86_64 executable.

Since we're adding an accessor macro, pull the value directly out of
flags otherwise.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-10-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
0ab011cca0 target/i386: Assert IOPL is 0 for user-only
On real hardware, the linux kernel has the iopl(2) syscall which
can set IOPL to 3, to allow e.g. the xserver to briefly disable
interrupts while programming the graphics card.

However, QEMU cannot and does not implement this syscall, so the
IOPL is never changed from 0.  This means that all of the checks
vs CPL <= IOPL are false for user-only.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-9-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
01b9d8c1b2 target/i386: Assert CPL is 3 for user-only
A user-mode executable always runs in ring 3.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-8-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
d75f912927 target/i386: Assert PE is set for user-only
A user-mode executable is never in real-mode.  Since we're adding
an accessor macro, pull the value directly out of flags for sysemu.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-7-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
ca7874c2fa target/i386: Split out check_iopl
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-6-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
aa9f21b1f0 target/i386: Split out check_vm86_iopl
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-5-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
e048f3d6b9 target/i386: Unify code paths for IRET
In vm86 mode, we use the same helper as real-mode, but with
an extra check for IOPL.  All non-exceptional paths set EFLAGS.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-4-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
bc19f5052d target/i386: Split out check_cpl0
Split out the check for CPL != 0 and the raising of #GP.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-3-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Richard Henderson
6bd9958645 target/i386: Split out gen_exception_gpf
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-2-richard.henderson@linaro.org>
2021-05-19 12:15:46 -05:00
Paolo Bonzini
68746930ae target/i386: use mmu_translate for NPT walk
Unify the duplicate code between get_hphys and mmu_translate, by simply
making get_hphys call mmu_translate.  This also fixes the support for
5-level nested page tables.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-11 04:11:14 -04:00
Paolo Bonzini
33ce155c67 target/i386: allow customizing the next phase of the translation
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-11 04:11:14 -04:00
Paolo Bonzini
31dd35eb2d target/i386: extend pg_mode to more CR0 and CR4 bits
In order to unify the two stages of page table lookup, we need
mmu_translate to use either the host CR0/EFER/CR4 or the guest's.
To do so, make mmu_translate use the same pg_mode constants that
were used for the NPT lookup.

This also prepares for adding 5-level NPT support, which however does
not work yet.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-11 04:11:14 -04:00
Paolo Bonzini
cd906d315d target/i386: pass cr3 to mmu_translate
First step in unifying the nested and regular page table walk.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-11 04:11:13 -04:00
Paolo Bonzini
661ff4879e target/i386: extract mmu_translate
Extract the page table lookup out of handle_mmu_fault, which only has
to invoke mmu_translate and either fill the TLB or deliver the page
fault.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-11 04:11:13 -04:00
Paolo Bonzini
616a89eaad target/i386: move paging mode constants from SVM to cpu.h
We will reuse the page walker for both SVM and regular accesses.  To do
so we will build a function that receives the currently active paging
mode; start by including in cpu.h the constants and the function to go
from cr4/hflags/efer to the paging mode.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-11 04:11:13 -04:00
Paolo Bonzini
6ed6b0d380 target/i386: merge SVM_NPTEXIT_* with PF_ERROR_* constants
They are the same value, and are so by design.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-11 04:11:13 -04:00
Claudio Fontana
cc3f2be6b7 accel: add init_accel_cpu for adapting accel behavior to CPU type
While on x86 all CPU classes can use the same set of TCGCPUOps,
on ARM the right accel behavior depends on the type of the CPU.

So we need a way to specialize the accel behavior according to
the CPU.  Therefore, add a second initialization, after the
accel_cpu->cpu_class_init, that allows doing this.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210322132800.7470-24-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:52 -04:00
Claudio Fontana
30493a030f i386: split seg_helper into user-only and sysemu parts
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

[claudio]:
Rebased on commit 68775856 ("target/i386: svm: do not discard high 32 bits")

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210322132800.7470-18-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:52 -04:00
Claudio Fontana
b39030942d i386: split svm_helper into sysemu and stub-only user
For now we just copy over the previous user stubs, but really,

everything that requires s->cpl == 0 should be impossible
to trigger from user-mode emulation.

Later on we should add a check that asserts this, e.g.:

static bool check_cpl0(DisasContext *s)
{
    int cpl = s->cpl;
#ifdef CONFIG_USER_ONLY
    assert(cpl == 3);
#endif
    if (cpl != 0) {
        gen_exception(s, EXCP0D_GPF, s->pc_start - s->cs_base);
        return false;
    }
    return true;
}

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-17-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:51 -04:00
Claudio Fontana
83a3d9c740 i386: separate fpu_helper sysemu-only parts
create a separate tcg/sysemu/fpu_helper.c for the sysemu-only parts.

For user mode, some small #ifdefs remain in tcg/fpu_helper.c
which do not seem worth splitting into their own user-mode module.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-16-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:51 -04:00
Claudio Fontana
a4b1f4e611 i386: split misc helper user stubs and sysemu part
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

[claudio]:
Rebased on commit da3f3b02 ("target/i386: fail if toggling LA57 in 64-bit mode")

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210322132800.7470-15-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:51 -04:00
Claudio Fontana
6d8d1a031a i386: move TCG bpt_helper into sysemu/
for user-mode, assert that the hidden IOBPT flags are not set
while attempting to generate io_bpt helpers.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-14-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:51 -04:00
Claudio Fontana
e7f2670f2a i386: split tcg excp_helper into sysemu and user parts
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

[claudio]:
Rebased on commit b8184135 ("target/i386: allow modifying TCG phys-addr-bits")

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210322132800.7470-13-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:51 -04:00
Claudio Fontana
a93b55ec22 i386: split smm helper (sysemu)
SMM is only really useful for sysemu; split it into two modules
around CONFIG_USER_ONLY, in order to remove the ifdef
and use the build system instead.

add cpu_abort() when detecting attempts to enter SMM mode via
SMI interrupt in user-mode, and assert that the cpu is not
in SMM mode while translating RSM instructions.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-12-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:51 -04:00
Paolo Bonzini
222f3e6f19 i386: split off sysemu-only functionality in tcg-cpu
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-11-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:50 -04:00
Claudio Fontana
9ea057dc64 accel-cpu: make cpu_realizefn return a bool
overall, all devices' realize functions take an Error **errp, but return void.

hw/core/qdev.c code, which realizes devices, therefore does:

local_err = NULL;
dc->realize(dev, &local_err);
if (local_err != NULL) {
    goto fail;
}

However, we can improve at least accel_cpu to return a meaningful bool value.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-9-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:50 -04:00
Claudio Fontana
f5cc5a5c16 i386: split cpu accelerators from cpu.c, using AccelCPUClass
i386 is the first user of AccelCPUClass, allowing to split
cpu.c into:

cpu.c            cpuid and common x86 cpu functionality
host-cpu.c       host x86 cpu functions and "host" cpu type
kvm/kvm-cpu.c    KVM x86 AccelCPUClass
hvf/hvf-cpu.c    HVF x86 AccelCPUClass
tcg/tcg-cpu.c    TCG x86 AccelCPUClass

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

[claudio]:
Rebased on commit b8184135 ("target/i386: allow modifying TCG phys-addr-bits")

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210322132800.7470-5-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:49 -04:00
Richard Henderson
0ac2b19743 target/i386: Split out do_fsave, do_frstor, do_fxsave, do_fxrstor
The helper_* functions must use GETPC() to unwind from TCG.
The cpu_x86_* functions cannot, and directly calling the
helper_* functions is a bug.  Split out new functions that
perform the work and can be used by both.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210322132800.7470-4-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:49 -04:00
Richard Henderson
e3a6923454 target/i386: Rename helper_fldt, helper_fstt
Change the prefix from "helper" to "do".  The former should be
reserved for those functions that are called from TCG; the latter
is in use within the file already for those functions that are
called from the helper functions, adding a "retaddr" argument.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210322132800.7470-3-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-10 15:41:49 -04:00
Richard Henderson
10b8eb94c0 target/i386: Verify memory operand for lcall and ljmp
These two opcodes only allow a memory operand.

Lacking the check for a register operand, we used the A0 temp
without initialization, which led to a tcg abort.
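
Conceptually the fix adds a check of this shape before the address is
formed (illustrative, using translate.c-style locals):

if (mod == 3) {
    goto illegal_op;            /* lcall/ljmp only accept a memory operand */
}
gen_lea_modrm(env, s, modrm);   /* initializes A0 before it is used */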

Buglink: https://bugs.launchpad.net/qemu/+bug/1921138
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210324164650.128608-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-01 09:40:45 +02:00
Paolo Bonzini
687758565a target/i386: svm: do not discard high 32 bits of EXITINFO1
env->error_code is only 32 bits wide, so the high 32 bits of EXITINFO1
are being lost.  However, even though saving guest state and restoring
host state must be delayed to do_vmexit, because they might take tb_lock,
it is always possible to write to the VMCB.  So do this for the exit
code and EXITINFO1, just like it is already being done for EXITINFO2.
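
A sketch of the immediate VMCB write (simplified; exit_info_1 stands for
the computed 64-bit value):

/* store the full 64 bits straight into the VMCB instead of squeezing
   the value through the 32-bit env->error_code */
x86_stq_phys(env_cpu(env),
             env->vm_vmcb + offsetof(struct vmcb, control.exit_info_1),
             exit_info_1);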

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-19 08:48:18 -04:00
Paolo Bonzini
da3f3b020f target/i386: fail if toggling LA57 in 64-bit mode
This fixes kvm-unit-tests access.flat with -cpu qemu64,la57.
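
The check is conceptually something like this in the CR4 write path
(illustrative; new_cr4 is the value being loaded, not the exact patch):

if ((env->hflags & HF_LMA_MASK) &&
    ((env->cr[4] ^ new_cr4) & CR4_LA57_MASK)) {
    /* LA57 must not be toggled while long (64-bit) mode is active */
    raise_exception_ra(env, EXCP0D_GPF, GETPC());
}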

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-19 08:48:18 -04:00
Paolo Bonzini
b818413583 target/i386: allow modifying TCG phys-addr-bits
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-19 08:48:18 -04:00
Zheng Zhan Liang
c45b426acd tcg/i386: rdpmc: fix the conditions
Signed-off-by: Zheng Zhan Liang <linuxmaker@163.com>
Message-Id: <20210225054756.35962-1-linuxmaker@163.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-25 15:41:53 +01:00
Richard Henderson
3e8f1628e8 exec: Use cpu_untagged_addr in g2h; split out g2h_untagged
Use g2h_untagged in contexts that have no cpu, e.g. the binary
loaders that operate before the primary cpu is created.  As a
corollary, target_mmap and friends must use untagged addresses,
since they are used by the loaders.

Use g2h_untagged on values returned from target_mmap, as the
kernel never applies a tag itself.

Use g2h_untagged on all pc values.  The only current user of
tags, aarch64, removes tags from code addresses upon branch,
so "pc" is always untagged.

Use g2h with the cpu context on hand wherever possible.

Use g2h_untagged in lock_user, which will be updated soon.
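
At call sites the two flavours end up looking roughly like this
(illustrative):

void *p = g2h(env_cpu(env), guest_addr);   /* strips the CPU's address tag first */
void *q = g2h_untagged(guest_addr);        /* raw guest->host translation, no tag */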

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-16 11:04:53 +00:00
Paolo Bonzini
e7e7bdabab target/i386: implement PKS
Protection Keys for Supervisor-mode pages is a simple extension of
the PKU feature that QEMU already implements.  For supervisor-mode
pages, protection key restrictions come from a new MSR.  The MSR
has no XSAVE state associated with it.

PKS is only respected in long mode.  However, in principle it is
possible to set the MSR even outside long mode, and in fact
even the XSAVE state for PKRU could be set outside long mode
using XRSTOR.  So do not limit the migration subsections for
PKRU and PKRS to long mode.
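
For illustration, the page-walk permission lookup picks the key register
from the page's U/S bit, roughly (pte and pkey stand for the walked PTE
and its protection key):

uint32_t pkr  = (pte & PG_USER_MASK) ? env->pkru : env->pkrs;
uint32_t perm = (pkr >> (pkey * 2)) & 3;   /* bit 0: access-disable, bit 1: write-disable */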

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-08 14:43:55 +01:00
David Greenaway
51909241d2 target/i386: Fix decoding of certain BMI instructions
This patch fixes a translation bug for a subset of x86 BMI instructions
such as the following:

   c4 e2 f9 f7 c0                shlxq   %rax, %rax, %rax

Currently, these incorrectly generate an undefined instruction exception
when SSE is disabled via CR4, while instructions like "shrxq" work fine.

The problem appears to be related to BMI instructions encoded using VEX
and with a mandatory prefix of "0x66" (data). Instructions with this
data prefix (such as shlxq) are currently rejected. Instructions with
other mandatory prefixes (such as shrxq) translate as expected.

This patch removes the incorrect check in "gen_sse" that causes the
exception to be generated. For the non-BMI cases, the check is
redundant: prefixes are already checked at line 3696.

Buglink: https://bugs.launchpad.net/qemu/+bug/1748296

Signed-off-by: David Greenaway <dgreenaway@google.com>
Message-Id: <20210114063958.1508050-1-dgreenaway@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-08 14:43:55 +01:00
Claudio Fontana
7827168471 cpu: tcg_ops: move to tcg-cpu-ops.h, keep a pointer in CPUClass
We cannot, in principle, make the TCG operations field definitions
conditional on CONFIG_TCG in code that is included by both common_ss
and specific_ss modules.

Therefore, what we can safely do to restrict the TCG fields to TCG-only
builds is to move all TCG cpu operations into a separate header file,
which is only included by TCG, target-specific code.

This leaves just a NULL pointer in the cpu.h for the non-TCG builds.

This also tidies up the code in all targets a bit, having all TCG cpu
operations neatly contained by a dedicated data struct.
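
The end result is roughly this layout (struct and member names
illustrative):

/* include/hw/core/tcg-cpu-ops.h -- only included by TCG target code */
struct TCGCPUOps {
    void (*initialize)(void);
    void (*do_interrupt)(CPUState *cpu);
    /* ... */
};

/* include/hw/core/cpu.h -- common code only sees a pointer */
struct CPUClass {
    /* ... */
    struct TCGCPUOps *tcg_ops;   /* NULL in non-TCG builds */
};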

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-16-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:15 -10:00
Claudio Fontana
0545608056 cpu: move cc->do_interrupt to tcg_ops
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-10-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
e9ce43e97a cpu: Move debug_excp_handler to tcg_ops
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-8-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
e124536f37 cpu: Move tlb_fill to tcg_ops
[claudio: wrapped target code in CONFIG_TCG]

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-7-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
48c1a3e303 cpu: Move cpu_exec_* to tcg_ops
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: wrapped target code in CONFIG_TCG]
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-6-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
ec62595bab cpu: Move synchronize_from_tb() to tcg_ops
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: wrapped target code in CONFIG_TCG, reworded comments]
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-5-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Eduardo Habkost
e9e51b7154 cpu: Introduce TCGCpuOperations struct
The TCG-specific CPU methods will be moved to a separate struct,
to make it easier to move accel-specific code outside generic CPU
code in the future.  Start by moving tcg_initialize().

The new CPUClass.tcg_ops field may eventually become a pointer,
but keep it an embedded struct for now, to make code conversion
easier.
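
A sketch of the starting point (member set illustrative):

typedef struct TCGCpuOperations {
    void (*initialize)(void);   /* was CPUClass::tcg_initialize */
} TCGCpuOperations;

struct CPUClass {
    /* ... */
    TCGCpuOperations tcg_ops;   /* embedded for now, may become a pointer */
};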

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: move TCGCpuOperations inside include/hw/core/cpu.h]
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-2-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Philippe Mathieu-Daudé
c117e5b11a target/i386: Use X86Seg enum for segment registers
Use the dedicated X86Seg enum type for segment registers.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210109233427.749748-1-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-01-12 17:05:10 +01:00
Richard Henderson
04a37d4ca4 tcg: Make tb arg to synchronize_from_tb const
There is nothing within the translators that ought to be
changing the TranslationBlock data, so make it const.

This does not actually use the read-only copy of the
data structure that exists within the rx region.

Reviewed-by: Joelle van Dyne <j@getutm.app>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-01-07 05:09:41 -10:00
Peter Maydell
3df1a3d070 target/i386: Check privilege level for protected mode 'int N' task gate
When the 'int N' instruction is executed in protected mode, the
pseudocode in the architecture manual specifies that we need to check:

 * vector number within IDT limits
 * selected IDT descriptor is a valid type (interrupt, trap or task gate)
 * if this was a software interrupt, that the gate DPL >= CPL (otherwise #GP)

The way we had structured the code meant that the privilege check for
software interrupts ended up not in the code path taken for task gate
handling, because all of the task gate handling code was in the 'case 5'
of the switch which was checking "is this descriptor a valid type".

Move the task gate handling code out of that switch (so that it is now
purely doing the "valid type?" check) and below the software interrupt
privilege check.
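
Schematically, the reordered flow looks like this (heavily simplified
pseudo-code, not the exact patch):

/* 1. vector within IDT limits, descriptor is a valid gate type */
switch (type) {
case 5:                 /* task gate: only the type is validated here */
case 6: case 7:         /* 16-bit interrupt/trap gate */
case 14: case 15:       /* 32-bit interrupt/trap gate */
    break;
default:
    raise_exception_err(env, EXCP0D_GPF, intno * 8 + 2);
}
/* 2. software interrupts: the privilege check now covers task gates too */
if (is_int && dpl < cpl) {
    raise_exception_err(env, EXCP0D_GPF, intno * 8 + 2);
}
/* 3. only then is the task gate actually followed */
if (type == 5) {
    /* switch_tss(...) */
}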

The effect of this missing check was that, in a guest, a userspace binary
executing 'int 8' would cause a guest kernel panic rather than the
userspace binary being handed a SEGV.

This is essentially the same bug fixed in VirtualBox in 2012:
https://www.halfdog.net/Security/2012/VirtualBoxSoftwareInterrupt0x8GuestCrash/

Note that for QEMU this is not a security issue because it is only
present when using TCG.

Fixes: https://bugs.launchpad.net/qemu/+bug/1813201
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20201121224445.16236-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-01-02 21:03:09 +01:00
Chen Qun
bdddc1c425 target/i386: silence the compiler warnings in gen_shiftd_rm_T1
The current "#ifdef TARGET_X86_64" statement affects
the compiler's determination of fall through.

When using -Wimplicit-fallthrough in our CFLAGS, the compiler showed a warning:
target/i386/translate.c: In function ‘gen_shiftd_rm_T1’:
target/i386/translate.c:1773:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
         if (is_right) {
            ^
target/i386/translate.c:1782:5: note: here
     case MO_32:
     ^~~~

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201211152426.350966-6-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-12-18 09:14:23 +01:00
Claudio Fontana
69483f3115 i386: tcg: remove inline from cpu_load_eflags
make it a regular function.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201212155530.23098-9-cfontana@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2020-12-16 15:50:33 -05:00
Claudio Fontana
ed69e8314d i386: move TCG cpu class initialization to tcg/
To do this, we need to take code out of cpu.c and helper.c,
and also move some prototypes out of cpu.h for code that is
needed in tcg/xxx_helper.c and is in turn part of the
callbacks registered by the class initialization.

Therefore, do some shuffling of the parts of cpu.h that
are only relevant for tcg/, and put them in tcg/helper-tcg.h

For FT0 and similar macros, put them in tcg/fpu-helper.c
since they are used only there.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201212155530.23098-8-cfontana@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2020-12-16 15:50:33 -05:00
Claudio Fontana
1b248f147e i386: move TCG accel files into tcg/
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

[claudio: moved cc_helper_template.h to tcg/ too]

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20201212155530.23098-6-cfontana@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2020-12-16 14:06:53 -05:00