Currently, vex_l is either 0 or 1; if in the future we implement
AVX-512, the maximum value will be 2. In vex_v we store a register
number: 0-15 for SSE, and 0-31 for AVX-512.
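As a sketch, the reduced fields would be:

    uint8_t vex_l;   /* 0 or 1 today; would become 2 with AVX-512 */
    uint8_t vex_v;   /* register number: 0-15 for SSE, 0-31 for AVX-512 */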
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-24-richard.henderson@linaro.org>
The highest bit in this set is 0x40 (PREFIX_REX).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-23-richard.henderson@linaro.org>
The range of values is -1 (none) to 5 (R_GS).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-22-richard.henderson@linaro.org>
The value comes from tb->flags, which is uint32_t.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-21-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-20-richard.henderson@linaro.org>
Treat this flag exactly like we treat the other rex bits.
The -1 initialization is unused; the two tests are > 0 and == 1,
so the value can be reduced to a bool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-19-richard.henderson@linaro.org>
Treat this flag exactly like we treat rex_b and rex_x.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-18-richard.henderson@linaro.org>
Change the storage from int to uint8_t since the value is in {0,8}.
For x86_64 add 0 in the macros to (1) promote the type back to int,
and (2) make the macro an rvalue.
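A sketch of the result, assuming the usual REX_B/REX_X macro names:

    uint8_t rex_b, rex_x;               /* each is 0 or 8 */

    #ifdef TARGET_X86_64
    #define REX_B(S)  ((S)->rex_b + 0)  /* + 0: promote to int, make an rvalue */
    #define REX_X(S)  ((S)->rex_x + 0)
    #else
    #define REX_B(S)  0
    #define REX_X(S)  0
    #endif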
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-17-richard.henderson@linaro.org>
The existing flag, x86_64_hregs, does not accurately describe
its setting. It is true if and only if a REX prefix has been
seen. Yes, that affects the "h" regs, but that's secondary.
Add PREFIX_REX and include this bit in s->prefix. Add REX_PREFIX
so that the check folds away when x86_64 is compiled out.
Fold away the reg >= 8 check, because bit 3 of the register
number comes from the REX prefix in the first place.
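A sketch of an accessor that folds away when x86_64 is compiled out:

    #ifdef TARGET_X86_64
    #define REX_PREFIX(S)  (((S)->prefix & PREFIX_REX) != 0)
    #else
    #define REX_PREFIX(S)  false
    #endif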
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-16-richard.henderson@linaro.org>
LMA disables traditional segmentation, exposing a flat address space.
This means that ADDSEG is off.
Since we're adding an accessor macro, pull the value directly out
of flags otherwise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-15-richard.henderson@linaro.org>
LMA is a prerequisite for CODE64, so there is no way to disable it
for x86_64-linux-user, and there is no way to enable it for i386.
Since we're adding an accessor macro, pull the value directly out
of flags when we're not assuming a constant.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-14-richard.henderson@linaro.org>
For x86_64 user-only, there is no way to leave 64-bit mode.
Without x86_64, there is no way to enter 64-bit mode. There is
an existing macro to aid with that; simply place it in the right
place in the ifdef chain.
Since we're adding an accessor macro, pull the value directly out
of flags when we're not assuming a constant.
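As a sketch, the chain could read (HF_CS64_MASK assumed to be the
existing hflags bit):

    #ifdef TARGET_X86_64
    # ifdef CONFIG_USER_ONLY
    #  define CODE64(S)  true
    # else
    #  define CODE64(S)  (((S)->flags & HF_CS64_MASK) != 0)
    # endif
    #else
    # define CODE64(S)  false
    #endif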
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-13-richard.henderson@linaro.org>
For user-only, SS32 == !VM86, because we are never in
real-mode. Since we cannot enter vm86 mode for x86_64
user-only, SS32 is always set.
Since we're adding an accessor macro, pull the value
directly out of flags otherwise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-12-richard.henderson@linaro.org>
For user-only, CODE32 == !VM86, because we are never in real-mode.
Since we cannot enter vm86 mode for x86_64 user-only, CODE32 is
always set.
Since we're adding an accessor macro, pull the value directly out
of flags otherwise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-11-richard.henderson@linaro.org>
For i386-linux-user, we can enter vm86 mode via the vm86(2) syscall.
That syscall explicitly returns to 32-bit mode, and the syscall does
not exist for a 64-bit x86_64 executable.
Since we're adding an accessor macro, pull the value directly out of
flags otherwise.
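A sketch of an accessor that constant-folds only in the 64-bit
user-only case (VM_MASK here is the existing eflags bit, assumed to
be present in s->flags):

    #if defined(CONFIG_USER_ONLY) && defined(TARGET_X86_64)
    #define VM86(S)  false
    #else
    #define VM86(S)  (((S)->flags & VM_MASK) != 0)
    #endif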
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-10-richard.henderson@linaro.org>
On real hardware, the Linux kernel has the iopl(2) syscall which
can set IOPL to 3, to allow e.g. the X server to briefly disable
interrupts while programming the graphics card.
However, QEMU cannot and does not implement this syscall, so the
IOPL is never changed from 0. This means that all of the checks
against CPL <= IOPL are false for user-only.
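One way to let the compiler fold these checks away is an accessor
that is constant for user-only, e.g. (sketch; IOPL_SHIFT assumed to
be the existing eflags shift):

    #ifdef CONFIG_USER_ONLY
    #define IOPL(S)  0
    #else
    #define IOPL(S)  (((S)->flags >> IOPL_SHIFT) & 3)
    #endif

With IOPL fixed at 0, any CPL <= IOPL comparison becomes constant
false for user-only.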
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-9-richard.henderson@linaro.org>
A user-mode executable always runs in ring 3.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-8-richard.henderson@linaro.org>
A user-mode executable is never in real-mode. Since we're adding
an accessor macro, pull the value directly out of flags for sysemu.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-7-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-6-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-5-richard.henderson@linaro.org>
In vm86 mode, we use the same helper as real-mode, but with
an extra check for IOPL. All non-exceptional paths set EFLAGS.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-4-richard.henderson@linaro.org>
Split out the check for CPL != 0 and the raising of #GP.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-3-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-2-richard.henderson@linaro.org>
Unify the duplicate code between get_hphys and mmu_translate, by simply
making get_hphys call mmu_translate. This also fixes the support for
5-level nested page tables.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In order to unify the two stages of page table lookup, we need
mmu_translate to use either the host CR0/EFER/CR4 or the guest's.
To do so, make mmu_translate use the same pg_mode constants that
were used for the NPT lookup.
This also prepares for adding 5-level NPT support, which however does
not work yet.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Extract the page table lookup out of handle_mmu_fault, which only has
to invoke mmu_translate and either fill the TLB or deliver the page
fault.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We will reuse the page walker for both SVM and regular accesses. To do
so we will build a function that receives the currently active paging
mode; start by including in cpu.h the constants and the function to go
from cr4/hflags/efer to the paging mode.
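A sketch of the shape such a function might take (the constant names
and the subset of bits shown are illustrative):

    int get_pg_mode(CPUX86State *env)
    {
        int pg_mode = 0;

        if (env->cr[4] & CR4_PAE_MASK) {
            pg_mode |= PG_MODE_PAE;
        }
        if (env->cr[4] & CR4_PSE_MASK) {
            pg_mode |= PG_MODE_PSE;
        }
        if (env->hflags & HF_LMA_MASK) {
            pg_mode |= PG_MODE_LMA;
        }
        if (env->efer & MSR_EFER_NXE) {
            pg_mode |= PG_MODE_NXE;
        }
        return pg_mode;
    }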
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While on x86 all CPU classes can use the same set of TCGCPUOps,
on ARM the right accel behavior depends on the type of the CPU.
So we need a way to specialize the accel behavior according to
the CPU. Therefore, add a second initialization, after
accel_cpu->cpu_class_init, that allows us to do this.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210322132800.7470-24-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For now we just copy over the previous user stubs, but really,
everything that requires s->cpl == 0 should be impossible
to trigger from user-mode emulation.
Later on we should add a check that asserts this, e.g.:
    static bool check_cpl0(DisasContext *s)
    {
        int cpl = s->cpl;
    #ifdef CONFIG_USER_ONLY
        assert(cpl == 3);
    #endif
        if (cpl != 0) {
            gen_exception(s, EXCP0D_GPF, s->pc_start - s->cs_base);
            return false;
        }
        return true;
    }
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-17-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create a separate tcg/sysemu/fpu_helper.c for the sysemu-only parts.
For user mode, some small #ifdefs remain in tcg/fpu_helper.c
which do not seem worth splitting into their own user-mode module.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-16-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For user mode, assert that the hidden IOBPT flags are not set
while attempting to generate io_bpt helpers.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-14-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SMM is only really useful for sysemu, so split the code into two
modules around CONFIG_USER_ONLY, in order to remove the ifdef
and use the build system instead.
Add cpu_abort() when detecting attempts to enter SMM mode via an
SMI interrupt in user mode, and assert that the CPU is not
in SMM mode while translating RSM instructions.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-12-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Overall, all devices' realize functions take an Error **errp, but return void.
The hw/core/qdev.c code, which realizes devices, therefore does:
    local_err = NULL;
    dc->realize(dev, &local_err);
    if (local_err != NULL) {
        goto fail;
    }
However, we can improve at least accel_cpu to return a meaningful bool value.
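For example, a bool return lets a caller write (sketch;
accel_cpu_realizefn here is a hypothetical wrapper around the hook):

    /* hypothetical wrapper around the accel_cpu realize hook */
    if (!accel_cpu_realizefn(cpu, errp)) {
        return;
    }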
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-9-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
i386 is the first user of AccelCPUClass, allowing us to split
cpu.c into:
    cpu.c          cpuid and common x86 cpu functionality
    host-cpu.c     host x86 cpu functions and "host" cpu type
    kvm/kvm-cpu.c  KVM x86 AccelCPUClass
    hvf/hvf-cpu.c  HVF x86 AccelCPUClass
    tcg/tcg-cpu.c  TCG x86 AccelCPUClass
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[claudio]:
Rebased on commit b8184135 ("target/i386: allow modifying TCG phys-addr-bits")
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210322132800.7470-5-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The helper_* functions must use GETPC() to unwind from TCG.
The cpu_x86_* functions cannot, and directly calling the
helper_* functions is a bug. Split out new functions that
perform the work and can be used by both.
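A sketch of the resulting pattern, using fsave as the example
(exact signatures may differ):

    static void do_fsave(CPUX86State *env, target_ulong ptr, int data32,
                         uintptr_t retaddr);

    void helper_fsave(CPUX86State *env, target_ulong ptr, int data32)
    {
        do_fsave(env, ptr, data32, GETPC());    /* unwind from TCG */
    }

    void cpu_x86_fsave(CPUX86State *env, target_ulong ptr, int data32)
    {
        do_fsave(env, ptr, data32, 0);          /* no TCG frame to unwind */
    }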
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210322132800.7470-4-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Change the prefix from "helper" to "do". The former should be
reserved for those functions that are called from TCG; the latter
is in use within the file already for those functions that are
called from the helper functions, adding a "retaddr" argument.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210322132800.7470-3-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These two opcodes only allow a memory operand.
Lacking the check for a register operand, we used the A0 temp
without initialization, which led to a tcg abort.
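The fix amounts to rejecting the register form before forming the
address, along these lines (sketch, helper names as in translate.c):

    mod = (modrm >> 6) & 3;
    if (mod == 3) {
        goto illegal_op;            /* register operand: reject */
    }
    gen_lea_modrm(env, s, modrm);   /* memory operand: A0 is now initialized */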
Buglink: https://bugs.launchpad.net/qemu/+bug/1921138
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210324164650.128608-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
env->error_code is only 32 bits wide, so the high 32 bits of EXITINFO1
are being lost. However, even though saving guest state and restoring
host state must be delayed to do_vmexit, because they might take tb_lock,
it is always possible to write to the VMCB. So do this for the exit
code and EXITINFO1, just like it is already being done for EXITINFO2.
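A sketch of the immediate write, assuming the existing x86_stq_phys
accessor and vmcb layout:

    /* record EXITINFO1 in full, as is already done for EXITINFO2 */
    x86_stq_phys(cs, env->vm_vmcb + offsetof(struct vmcb, control.exit_info_1),
                 exit_info_1);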
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use g2h_untagged in contexts that have no cpu, e.g. the binary
loaders that operate before the primary cpu is created. As a
corollary, target_mmap and friends must use untagged addresses,
since they are used by the loaders.
Use g2h_untagged on values returned from target_mmap, as the
kernel never applies a tag itself.
Use g2h_untagged on all pc values. The only current user of
tags, aarch64, removes tags from code addresses upon branch,
so "pc" is always untagged.
Use g2h with the cpu context on hand wherever possible.
Use g2h_untagged in lock_user, which will be updated soon.
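For instance, the distinction at a call site (sketch; env_cpu assumed
available to recover the CPUState):

    /* no cpu available yet, e.g. in a binary loader */
    void *host = g2h_untagged(guest_addr);

    /* cpu context at hand: let g2h apply the cpu's tag handling */
    void *host_tagged = g2h(env_cpu(env), guest_addr);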
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Protection Keys for Supervisor-mode pages is a simple extension of
the PKU feature that QEMU already implements. For supervisor-mode
pages, protection key restrictions come from a new MSR. The MSR
has no XSAVE state associated with it.
PKS is only respected in long mode. However, in principle it is
possible to set the MSR even outside long mode, and in fact
even the XSAVE state for PKRU could be set outside long mode
using XRSTOR. So do not limit the migration subsections for
PKRU and PKRS to long mode.
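A sketch of a subsection whose .needed test depends only on the
register value, not on long mode (subsection and field names assumed):

    static bool pkrs_needed(void *opaque)
    {
        X86CPU *cpu = opaque;

        /* migrate whenever the MSR is nonzero, regardless of LMA */
        return cpu->env.pkrs != 0;
    }

    static const VMStateDescription vmstate_pkrs = {
        .name = "cpu/pkrs",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = pkrs_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(env.pkrs, X86CPU),
            VMSTATE_END_OF_LIST()
        }
    };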
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>