Do not use exec/cpu_ldst_{,useronly_}template.h directly,
but instead use the functional interface.
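As a minimal sketch of what the functional interface looks like from a
helper (the helper name is invented for illustration; cpu_ldl_data_ra()
and GETPC() are the existing accessors):

    /* Hypothetical helper: load a 32-bit guest value through the
     * functional interface instead of the template headers. */
    uint32_t helper_demo_ldl(CPUArchState *env, target_ulong addr)
    {
        /* GETPC() passes the host return address so a fault unwinds
         * to the right guest instruction. */
        return cpu_ldl_data_ra(env, addr, GETPC());
    }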
Cc: Eduardo Habkost <ehabkost@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cleanup in the boilerplate that each target must define.
Replace x86_env_get_cpu with env_archcpu. The combination
CPU(x86_env_get_cpu) should have used ENV_GET_CPU to begin with;
use env_cpu now.
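As a before/after sketch (i386 shown; other targets follow the same
pattern):

    /* Before: per-target boilerplate */
    X86CPU *cpu = x86_env_get_cpu(env);
    CPUState *cs = CPU(x86_env_get_cpu(env));

    /* After: generic inline helpers shared by all targets */
    X86CPU *cpu = env_archcpu(env);
    CPUState *cs = env_cpu(env);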
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fix the SYSCALL instruction in 64-bit (long mode). The RF flag
should be cleared in R11 as well as in RFLAGS. Intel
and AMD CPUs behave the same. AMD has this documented in the
APM vol 3.
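A rough sketch of the long-mode path in helper_syscall() after the
change (surrounding code abridged; RF_MASK and cpu_compute_eflags()
are the existing definitions):

    /* Save the flags image to R11 with RF cleared, and make sure RF
     * is cleared in RFLAGS as well. */
    env->regs[11] = cpu_compute_eflags(env) & ~RF_MASK;
    env->eflags &= ~(env->fmask | RF_MASK);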
Signed-off-by: Roman Kapl <rka@sysgo.com>
Signed-off-by: Rudolf Marek <rudolf.marek@sysgo.com>
Message-Id: <20181019122449.26387-1-rka@sysgo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This flag will be used for KVM's nested VMX migration; the HF_GUEST_MASK name
is already used in KVM, adopt it in QEMU as well.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Interrupt handling depends on various flags in env->hflags or env->hflags2,
and the exact details were not replicated consistently between x86_cpu_has_work
and x86_cpu_exec_interrupt. Create a new function that extracts the
highest-priority non-masked interrupt, and use it in both functions.
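A simplified sketch of such a helper (the name is illustrative; the
real one also covers SMI, SIPI, virtual interrupts and the SVM
intercept flags):

    static int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
    {
        CPUX86State *env = &X86_CPU(cs)->env;

        if (interrupt_request & CPU_INTERRUPT_NMI) {
            if (!(env->hflags2 & HF2_NMI_MASK)) {
                return CPU_INTERRUPT_NMI;
            }
        } else if (interrupt_request & CPU_INTERRUPT_HARD) {
            if ((env->eflags & IF_MASK) &&
                !(env->hflags & HF_INHIBIT_IRQ_MASK)) {
                return CPU_INTERRUPT_HARD;
            }
        }
        return 0;   /* nothing deliverable right now */
    }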
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The current implementation has three bugs:
* segment limits are not enforced in protected mode if the L bit is set
in the target segment descriptor
* segment limits are not enforced in compatibility mode (ljmp to 32-bit
code segment in long mode)
* #GP(new_cs) is generated rather than #GP(0)
Now the segment limits are enforced if we're not in long mode OR the
target code segment doesn't have the L bit set.
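Roughly, the resulting check in helper_ljmp_protected() looks like this
(a sketch, not the exact hunk; all macros are the existing ones from
target/i386):

    if (new_eip > limit &&
        (!(env->hflags & HF_LMA_MASK) || !(e2 & DESC_L_MASK))) {
        /* Out of limit and not a 64-bit code segment: #GP(0). */
        raise_exception_err_ra(env, EXCP0D_GPF, 0, GETPC());
    }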
Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180816011903.39816-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently call gates are always treated as 32-bit gates. In IA-32e mode
(either compatibility or 64-bit submode), system segment descriptors are
always 64-bit. Treating them as 32-bit has the expected unfortunate
effect: only the lower 32 bits of the offset are loaded, the stack
pointer is truncated, a bad new stack pointer is loaded from the TSS (if
switching privilege levels), etc.
This change adds support for 64-bit call gates to the lcall and ljmp
instructions. Additionally, there should be a check for non-canonical
stack pointers, but I've omitted that since there don't seem to be
checks for non-canonical addresses elsewhere in this code.
I've left the raise_exception_err_ra lines unwrapped at 80 columns to
match the style in the rest of the file.
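For the gate itself, the extra dword fetched in IA-32e mode looks
roughly like this (a sketch; e3 follows the existing e1/e2 naming and
ptr is the descriptor address already used by the helper):

    if (env->hflags & HF_LMA_MASK) {
        /* 64-bit call gates are 16 bytes; the high half of the target
         * offset lives in the third dword. */
        e3 = cpu_ldl_kernel_ra(env, ptr + 8, GETPC());
        new_eip |= (uint64_t)e3 << 32;
    }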
Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180819181725.34098-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Check for SVM interception prior to injecting an NMI. Tested via the
Jailhouse hypervisor.
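The resulting ordering in x86_cpu_exec_interrupt() is roughly (NMI
branch only, abridged):

    if (interrupt_request & CPU_INTERRUPT_NMI) {
        cpu_svm_check_intercept_param(env, SVM_EXIT_NMI, 0, 0); /* may #VMEXIT */
        cs->interrupt_request &= ~CPU_INTERRUPT_NMI;
        env->hflags2 |= HF2_NMI_MASK;
        do_interrupt_x86_hardirq(env, EXCP02_NMI, 1);
    }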
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <c65877e9a011ee4962931287e59f502c482b8d0b.1522769774.git.jan.kiszka@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the handling of conforming code segments before the handling
of stack switch.
Because dpl == cpl after the new "if", it's now unnecessary to check
the C bit when testing dpl < cpl. Furthermore, dpl > cpl is checked
slightly above the modified code, so the final "else" is unreachable
and we can remove it.
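Schematically the new control flow is (a sketch, bodies elided):

    if (e2 & DESC_C_MASK) {
        dpl = cpl;                /* conforming: keep the current CPL */
    }
    if (dpl < cpl) {
        /* to inner privilege: load the new stack pointer from the TSS */
    } else {
        /* dpl == cpl: same privilege, no stack switch; the old
         * "dpl > cpl" else-branch was unreachable and is gone */
    }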
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In do_interrupt64(), when the interrupt stack table (IST) is enabled
and the target code segment is conforming (e2 & DESC_C_MASK), the
old implementation always set the new CPL to 0 and SS.RPL to 0.
This is incorrect: when CPL3 code accesses a CPL0 conforming code
segment, the CPL should remain unchanged. Otherwise higher-privileged
code can be compromised.
The patch fixes this by always setting dpl = cpl when the target code
segment is conforming, and by passing the correct new CPL in the last
parameter `flags` of cpu_x86_load_seg_cache().
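A sketch of the relevant hunks in do_interrupt64() (stack selection
elided; DESC_DPL_SHIFT and cpu_x86_load_seg_cache() are the existing
definitions):

    if (e2 & DESC_C_MASK) {
        dpl = cpl;            /* conforming: CPL3 callers stay at CPL3 */
    }
    /* ... stack selection via the IST/TSS as before ... */
    if (new_stack) {
        ss = 0 | dpl;                               /* SS.RPL = new CPL */
        cpu_x86_load_seg_cache(env, R_SS, ss, 0, 0,
                               dpl << DESC_DPL_SHIFT);
    }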
Signed-off-by: Wu Xiang <willx8@gmail.com>
Message-Id: <20170621142152.GA18094@wxdeubuntu.ipads-lab.se.sjtu.edu.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paths through the softmmu code during code generation now need to be audited
to check for double locking of tb_lock. In particular, VMEXIT can take tb_lock
through cpu_vmexit -> cpu_x86_update_cr4 -> tlb_flush.
To avoid this, split VMEXIT delivery in two parts, similar to what is done with
exceptions. cpu_vmexit only records the VMEXIT exit code and information, and
cc->do_interrupt can then deliver it when it is safe to take the lock.
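Very roughly (not the exact code), the split looks like this:

    /* Step 1, from the helper path: only record the exit and leave
     * the translation block. */
    void cpu_vmexit(CPUX86State *env, uint32_t exit_code,
                    uint64_t exit_info_1, uintptr_t retaddr)
    {
        CPUState *cs = CPU(x86_env_get_cpu(env));

        if (retaddr) {
            cpu_restore_state(cs, retaddr);
        }
        /* ... stash exit_code/exit_info_1 for later delivery ... */
        cs->exception_index = EXCP_VMEXIT;
        cpu_loop_exit(cs);
    }

    /* Step 2: x86_cpu_do_interrupt() sees EXCP_VMEXIT and performs
     * the actual world switch, where taking tb_lock (e.g. via
     * tlb_flush) is safe. */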
Reported-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Tested-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Commit 2afbdf8 ("target-i386: exception handling for memory helpers",
2015-09-15) changed tlb_fill's cpu_restore_state+raise_exception_err
to raise_exception_err_ra. After this change, the cpu_restore_state
and raise_exception_err's cpu_loop_exit are merged into
raise_exception_err_ra's cpu_loop_exit_restore.
This actually fixed some bugs, but when SVM is enabled there is a
second path from raise_exception_err_ra to cpu_loop_exit. This is
the VMEXIT path, and now cpu_vmexit is called without a preceding
cpu_restore_state.
The fix is to pass the retaddr to cpu_vmexit (via
cpu_svm_check_intercept_param). All helpers can now use GETPC() to pass
the correct retaddr, too.
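For example, the VMMCALL helper now looks roughly like this (quoted
from memory of svm_helper.c, so treat it as a sketch):

    void helper_vmmcall(CPUX86State *env)
    {
        /* May raise #VMEXIT; GETPC() lets cpu_vmexit restore the CPU
         * state from the host return address first. */
        cpu_svm_check_intercept_param(env, SVM_EXIT_VMMCALL, 0, GETPC());
        raise_exception(env, EXCP06_ILLOP);
    }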
Cc: qemu-stable@nongnu.org
Fixes: 2afbdf8480
Reported-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Tested-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch improves interrupt handling in record/replay mode.
Now "interrupt" event is saved only when cc->cpu_exec_interrupt returns true.
This patch also adds missing return to cpu_exec_interrupt function.
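The relevant pattern in cpu-exec.c becomes roughly:

    /* Record the "interrupt" replay event only when the target hook
     * actually accepted and delivered an interrupt. */
    if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
        replay_interrupt();
    }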
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20170124071708.4572.64023.stgit@PASHA-ISP>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.
Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris & microblaze part]
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
Signed-off-by: Thomas Huth <thuth@redhat.com>