Processors up to the Pentium (according to Bochs; I do not have old enough
manuals) require 32 KiB alignment for the SMBASE, but newer processors do
not need that, and TianoCore will use non-aligned SMBASE values.
Reported-by: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Remove cpu_smm_register and cpu_smm_update. Instead, each CPU
address space gets an extra region which is an alias of
/machine/smram. This extra region is enabled or disabled
as the CPU enters/exits SMM.
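A minimal sketch of the idea (the helper names and the per-CPU fields below
are assumptions for illustration, not the exact code from the patch):

    #include "exec/memory.h"     /* memory_region_* API */
    #include "qom/object.h"

    /* Register a disabled alias of /machine/smram in the CPU's private
     * address space; "cpu->smram" and "cpu->cpu_as_root" are assumed
     * per-CPU fields. */
    static void x86_cpu_register_smram(X86CPU *cpu, MemoryRegion *smram)
    {
        memory_region_init_alias(&cpu->smram, OBJECT(cpu), "smram",
                                 smram, 0, memory_region_size(smram));
        memory_region_set_enabled(&cpu->smram, false);
        memory_region_add_subregion_overlap(cpu->cpu_as_root, 0,
                                            &cpu->smram, 1);
    }

    /* Flip the alias when the CPU enters or leaves SMM. */
    static void x86_cpu_set_smm(X86CPU *cpu, bool smm_enabled)
    {
        memory_region_set_enabled(&cpu->smram, smm_enabled);
    }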
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because the limit field's bits 31:20 are all 1, G should be 1. VMX actually
enforces this; let's do it for completeness in QEMU as well.
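For illustration, a reload with a 0xffffffff limit could then look like the
sketch below (the flag set is illustrative, not necessarily the exact one
used by the patch):

    /* limit[31:20] are all ones, so the granularity bit must be set too */
    cpu_x86_load_seg_cache(env, R_CS, 0, 0, 0xffffffff,
                           DESC_P_MASK | DESC_S_MASK | DESC_CS_MASK |
                           DESC_R_MASK | DESC_A_MASK | DESC_G_MASK);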
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QEMU is not blocking NMIs on entry to SMM. Implementing this has to
cover a few corner cases, because:
- NMIs can then be enabled by an IRET instruction and there
is no mechanism to _set_ the "NMIs masked" flag on exit from SMM:
"A special case can occur if an SMI handler nests inside an NMI handler
and then another NMI occurs. [...] When the processor enters SMM while
executing an NMI handler, the processor saves the SMRAM state save map
but does not save the attribute to keep NMI interrupts disabled.
- However, there is some hidden state, because "If NMIs were blocked
before the SMI occurred [and no IRET is executed while in SMM], they
are blocked after execution of RSM." This is represented by the new
HF2_SMM_INSIDE_NMI_MASK bit. If it is zero, NMIs are _unblocked_
on exit from RSM.
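A minimal sketch of logic matching the behaviour described above, using the
existing HF2_NMI_MASK bit together with the new HF2_SMM_INSIDE_NMI_MASK (not
necessarily the exact hunks of the patch):

    /* on SMM entry, e.g. in do_smm_enter(): block NMIs and remember
     * whether they were already blocked */
    if (env->hflags2 & HF2_NMI_MASK) {
        env->hflags2 |= HF2_SMM_INSIDE_NMI_MASK;
    } else {
        env->hflags2 |= HF2_NMI_MASK;
    }

    /* on RSM, e.g. in helper_rsm(): unblock NMIs unless they were
     * already blocked before the SMI occurred */
    if (!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK)) {
        env->hflags2 &= ~HF2_NMI_MASK;
    }
    env->hflags2 &= ~HF2_SMM_INSIDE_NMI_MASK;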
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These include page table walks, SVM accesses and SMM state save accesses.
The bulk of the patch is obtained with
sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'
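As an illustration (not an actual hunk from the diff), the script turns a
call such as

    ldl_phys(cs->as, addr);

into

    x86_ldl_phys(cs, addr);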
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* remotes/kvm/uq/master:
kvm: Fix eax for cpuid leaf 0x40000000
kvmclock: Ensure proper env->tsc value for kvmclock_current_nsec calculation
kvm: Enable -cpu option to hide KVM
kvm: Ensure negative return value on kvm_init() error handling path
target-i386: set CC_OP to CC_OP_EFLAGS in cpu_load_eflags
target-i386: get CPL from SS.DPL
target-i386: rework CPL checks during task switch, preparing for next patch
target-i386: fix segment flags for SMM and VM86 mode
target-i386: Fix vm86 mode regression introduced in fd460606fd.
kvm_stat: allow choosing between tracepoints and old stats
kvmclock: Ensure time in migration never goes backward
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
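A sketch of the shape of the inversion, with simplified hypothetical file
and macro names (the real DEF_HELPER_* machinery is more involved):

    /* helper-gen-wrapper.h (hypothetical): owns the macro setup and the
     * inclusion of helper.h */
    #define DEF_HELPER_1(name, ret, t1) \
        void gen_helper_##name(void);   /* real expansion differs */
    #include "helper.h"
    #undef DEF_HELPER_1

    /* translate.c now only includes the wrapper instead of redefining
     * the macros itself: */
    #include "helper-gen-wrapper.h"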
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
There is no reason to keep that out of the function. The comment refers
to the disassembler's cc_op state rather than the CPUState field.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With the next patch, these need to be correct or VM86 tasks will have the
wrong CPL. The flags are basically what the Intel VMX documentation says
is mandatory for entry into a VM86 guest.
For consistency, SMM ought to have the same flags except with
CPL=0.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The cpu_x86_load_seg_cache() function inspects cr0 and eflags, so make
sure all changes to eflags and cr0 are done prior to loading the
segment caches.
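A sketch of the required ordering (new_cr0, new_eflags and the segment
parameters are placeholders):

    /* update CR0 and EFLAGS first; cpu_x86_load_seg_cache() looks at
     * both when recomputing the hidden flags */
    cpu_x86_update_cr0(env, new_cr0);
    cpu_load_eflags(env, new_eflags,
                    ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK));

    /* only now reload the segment caches */
    cpu_x86_load_seg_cache(env, R_CS, selector, base, limit, flags);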
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commits fdfba1a298, f606604f1c and 2c17449b30 added usages of the
ENV_GET_CPU() macro in target-specific code.
Use x86_env_get_cpu() or reuse existing X86CPU variable instead.
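An illustrative before/after (not a hunk from the patch):

    /* before */
    CPUState *cs = ENV_GET_CPU(env);

    /* after: go through the x86-specific accessor (or reuse an already
     * existing X86CPU variable) */
    X86CPU *cpu = x86_env_get_cpu(env);
    CPUState *cs = CPU(cpu);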
Cc: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Since commit 878096eeb2 (cpu: Turn cpu_dump_{state,statistics}() into
CPUState hooks), CPUArchState is no longer needed.
Add documentation and make the functions available through qemu/log.h
outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h
was not yet possible due to convoluted include paths, so that some
devices grow an implicit and unneeded dependency on qom/cpu.h for now.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
[AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Move the DUMP_FPU and DUMP_CCOP flags for cpu_dump_state() from being
x86-specific flags to being generic ones. This allows us to drop some
TARGET_I386 ifdefs in various places, and means that we can (potentially)
be more consistent across architectures about which monitor commands or
debug abort printouts include FPU register contents and info about
QEMU's condition-code optimisations.
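Assuming the generic names end up as CPU_DUMP_FPU and CPU_DUMP_CCOP, a
caller could then request both pieces of information with something like
(illustrative call, not taken from the patch):

    cpu_dump_state(env, stderr, fprintf, CPU_DUMP_FPU | CPU_DUMP_CCOP);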
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>