A transition from CPUFooState to FooCPU can be considered safe,
just like FooCPU::env access in the opposite direction.
The only benefit of the FOO_CPU() casts would be protection against
bogus CPUFooState pointers, but then surrounding code would likely
break, too.
This should slightly improve interrupt etc. performance when going from
CPUFooState to FooCPU.
For any additional CPU() casts see 3556c233d9
(qom: allow turning cast debugging off).
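For illustration, a minimal sketch of the direct transition this relies on, using
the x86 naming as an example (the other targets follow the same container_of pattern):

static inline X86CPU *x86_env_get_cpu(CPUX86State *env)
{
    /* env is embedded in the CPU object, so no QOM cast is needed */
    return container_of(env, X86CPU, env);
}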
Reported-by: Anthony Liguori <aliguori@us.ibm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The functions cpu_clone_regs() and cpu_set_tls() are not purely CPU
related -- they are specific to the TLS ABI for a particular OS.
Move them into the linux-user/ tree where they belong.
target-lm32 had entirely unused implementations, since it has no
linux-user target; just drop them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Multiple -machine options with the same ID are merged. All but the
one without an ID are to be silently ignored.
In most places, we query these options with a null ID. This is
correct.
In some places, we instead query whatever options come first in the
list. This is wrong. When the -machine processed first happens to
have an ID, options are taken from that ID, and the ones specified
without ID are silently ignored.
Example:
$ qemu-system-x86_64 -nodefaults -S -display none -monitor stdio -machine accel=kvm,usb=on
QEMU 1.5.50 monitor - type 'help' for more information
(qemu) info kvm
kvm support: enabled
(qemu) info usb
(qemu) q
$ qemu-system-x86_64 -nodefaults -S -display none -monitor stdio -machine id=foo -machine accel=kvm,usb=on
QEMU 1.5.50 monitor - type 'help' for more information
(qemu) info kvm
kvm support: disabled
(qemu) info usb
(qemu) q
$ qemu-system-x86_64 -nodefaults -S -display none -monitor stdio -machine id=foo,accel=kvm,usb=on -machine accel=xen
QEMU 1.5.50 monitor - type 'help' for more information
(qemu) info kvm
kvm support: enabled
(qemu) info usb
USB support not enabled
(qemu) q
$ qemu-system-x86_64 -nodefaults -S -display none -monitor stdio -machine accel=xen -machine id=foo,accel=kvm,usb=on
xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
xen be core: can't open xen interface
failed to initialize Xen: Operation not permitted
Option usb is queried correctly, and the one without an ID wins,
regardless of option order.
Option accel is queried incorrectly, and which one wins depends on
option order and ID.
Affected options are accel (and its sugared forms -enable-kvm and
-no-kvm), kernel_irqchip, kvm_shadow_mem.
Additionally, option kernel_irqchip is normally on by default, except
it's off when no -machine options are given. The bug can't bite, because
kernel_irqchip is used only when KVM is enabled, KVM is off by
default, and enabling it always creates -machine options. Downstreams
that enable KVM by default do get bitten, though.
Use qemu_get_machine_opts() to fix these bugs.
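A minimal sketch of the pattern the fix switches callers to (the wrapper function
name is illustrative; the option name is one of those listed above):

static bool machine_kernel_irqchip_allowed(void)
{
    /* qemu_get_machine_opts() returns the -machine group without an ID,
     * so a stray "-machine id=foo,..." can no longer shadow it */
    return qemu_opt_get_bool(qemu_get_machine_opts(), "kernel_irqchip", true);
}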
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1372943363-24081-5-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This allows the call to be moved into CPUState's realizefn.
Therefore move the stub into libqemustub.a.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Make cpustats monitor command available unconditionally.
Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec()
arguments to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Change Monitor::mon_cpu to CPUState as well.
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
It no longer relies on CPUArchState since 20d695a.
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Relocate assignment of x86 get_arch_id to have all hooks in one place.
Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Spotted by Coverity:
x86_reg_info_32[] is CPU_NB_REGS32 elements long, so accessing
x86_reg_info_32[CPU_NB_REGS32] reads one element past the end of the array.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed-by: Jesse Larrew <jlarrew@linux.vnet.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The CPUID level values on Conroe, Penryn, and Nehalem are too low. This
causes at least one known problem: the -smp "threads" option doesn't
work as expected if level is < 4, because thread count information is
provided to the guest in CPUID[EAX=4,ECX=2].EAX.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The CPUID model values on Conroe, Penryn, and Nehalem are too
conservative and don't reflect the values found on real Conroe, Penryn,
and Nehalem CPUs.
This causes at least one known problem: Windows XP disables sysenter
when (family == 6 && model <= 2), but Skype tries to use the sysenter
instruction anyway because it is reported as available in CPUID, making
it crash.
This patch sets appropriate model values that correspond to real Conroe,
Penryn, and Nehalem CPUs.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The code reorganization in commit 4a6fd938 broke handling of PREFIX_ADR.
While fixing this, tidy and comment the code so that it's more obvious
what's going on in setting both aflag and dflag.
The TARGET_X86_64 ifdef can be eliminated because CODE64 expands to the
constant zero when TARGET_X86_64 is undefined.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1369855851-21400-1-git-send-email-rth@twiddle.net
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The function walk_pte() needs the PTE index to calculate the virtual
address. However, the PTE index in PAE paging and IA-32e paging is 9 bits
wide, so the mask should be 0x1ff.
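As a sketch of the intended masking (names are illustrative, not the exact QEMU code):

#define PAE_PTE_INDEX_MASK 0x1ffULL   /* 512 entries per table -> 9-bit index */

static uint64_t pte_start_vaddr(uint64_t table_start_vaddr, int i)
{
    /* masking with 0xfff would allow a bogus 12-bit index */
    return table_start_vaddr | ((uint64_t)(i & PAE_PTE_INDEX_MASK) << 12);
}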
Signed-off-by: Qiao Nuohan <qiaonuohan@cn.fujitsu.com>
Reviewed-by: Jesse Larrew <jlarrew@linux.vnet.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
The code used to walk IA-32e page-tables, and possibly PAE page-tables,
uses the bit mask ~0xfff to get the next PML4E/PDPTE/PDE/PTE address.
However, as we use a uint64_t to store the resulting address, that mask
gets expanded to 0xfffffffffffff000 which not only ends up selecting
reserved bits but also selects the XD bit (execute-disable) which
happens to be enabled by Windows 8, causing qemu_get_ram_ptr() to abort.
This commit fixes that problem by replacing ~0xfff with a correct mask
that only selects the address bit range (i.e. bits 51:12).
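A sketch of the corrected mask (the macro and function names are illustrative):

/* selects only the physical-address bits 51:12 of a PML4E/PDPTE/PDE/PTE */
#define PT_ADDR_MASK 0xffffffffff000ULL

static inline uint64_t pt_next_table_addr(uint64_t entry)
{
    /* ~0xfff expands to 0xfffffffffffff000 in 64-bit arithmetic and would
     * keep reserved bits and the XD bit (bit 63) set */
    return entry & PT_ADDR_MASK;
}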
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Fix EFLAGS corruption by ROR r8/r16 imm instruction located at the end
of the TB, similarly to commit 089305ac for the non-immediate case.
Reported-by: Hervé Poussineau <hpoussin@reactos.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The Atom core (CPU name "n270" in QEMU speak) supports MOVBE. This is
needed when booting 3.8 and later Linux kernels built with the MATOM
target, because we now require MOVBE in order to boot properly.
Signed-off-by: Borislav Petkov <bp@suse.de>
[ehabkost: added compat code to disable MOVBE on pc-*-1.4 and older]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Introduce x86_cpu_compat_set_features(), which can be used to set/unset
feature bits on specific CPU models for machine-type compatibility.
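A hedged sketch of the intended usage, tying in with the MOVBE compat case above;
the exact signature and the pc_compat_1_4() hook name are assumptions:

/* assumed declaration; the real prototype may differ in detail */
void x86_cpu_compat_set_features(const char *cpu_model, FeatureWord w,
                                 uint32_t feat_add, uint32_t feat_remove);

static void pc_compat_1_4(void)
{
    /* pc-*-1.4 and older: hide MOVBE from the n270 model again */
    x86_cpu_compat_set_features("n270", FEAT_1_ECX, 0, CPUID_EXT_MOVBE);
}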
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This changes the model number of the 486 to 8 (DX4), which matches the
feature set presented and actually has the CPUID instruction.
This adds a compatibility property, to keep model=0 on pc-*-1.4 and older.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
[AF: Add compat_props entry]
Tested-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
After initializing the object from its x86_def_t and before setting any
additional -cpu arguments, set any global properties for the designated
subclass <name>-{i386,x86_64}-cpu.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This field will contain the feature bits that were filtered out because
of missing host support.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This property will be useful for libvirt, as libvirt already has logic
based on low-level feature bits (not feature names), so it will be
really easy to convert the current libvirt logic to something using the
"feature-words" property.
The property will have two main use cases:
- Checking host capabilities, by checking the features of the "host"
CPU model
- Checking which features are enabled on each CPU model
Example output:
$ ./QMP/qmp --path=/tmp/m \
qom-get --path=/machine/icc-bridge/icc/child[0] \
--property=feature-words
item[0].cpuid-register: EDX
item[0].cpuid-input-eax: 2147483658
item[0].features: 0
item[1].cpuid-register: EAX
item[1].cpuid-input-eax: 1073741825
item[1].features: 0
item[2].cpuid-register: EDX
item[2].cpuid-input-eax: 3221225473
item[2].features: 0
item[3].cpuid-register: ECX
item[3].cpuid-input-eax: 2147483649
item[3].features: 101
item[4].cpuid-register: EDX
item[4].cpuid-input-eax: 2147483649
item[4].features: 563346425
item[5].cpuid-register: EBX
item[5].cpuid-input-eax: 7
item[5].features: 0
item[5].cpuid-input-ecx: 0
item[6].cpuid-register: ECX
item[6].cpuid-input-eax: 1
item[6].features: 2155880449
item[7].cpuid-register: EDX
item[7].cpuid-input-eax: 1
item[7].features: 126614521
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Instead of open-coding the filtering code for each feature word, change
the existing code to use the feature_word_info array, that has exactly
the same CPUID eax/ecx/register values for each feature word.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
FEAT_7_0_EBX uses ECX as input, so we have to take that into account
when reporting feature word values.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This replaces the feature-bit fields on both X86CPU and x86_def_t
structs with an array.
With this, we will be able to simplify code that simply does the same
operation on all feature words (e.g. kvm_check_features_against_host(),
filter_features_for_kvm(), add_flagname_to_bitmaps(), CPU feature-bit
property lookup/registration, and the proposed "feature-words" property).
The following field replacements were made on X86CPU and x86_def_t:
(cpuid_)features -> features[FEAT_1_EDX]
(cpuid_)ext_features -> features[FEAT_1_ECX]
(cpuid_)ext2_features -> features[FEAT_8000_0001_EDX]
(cpuid_)ext3_features -> features[FEAT_8000_0001_ECX]
(cpuid_)ext4_features -> features[FEAT_C000_0001_EDX]
(cpuid_)kvm_features -> features[FEAT_KVM]
(cpuid_)svm_features -> features[FEAT_SVM]
(cpuid_)7_0_ebx_features -> features[FEAT_7_0_EBX]
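A sketch of the resulting layout (enumerator order and comments are illustrative):

typedef enum FeatureWord {
    FEAT_1_EDX,          /* CPUID[1].EDX */
    FEAT_1_ECX,          /* CPUID[1].ECX */
    FEAT_7_0_EBX,        /* CPUID[EAX=7,ECX=0].EBX */
    FEAT_8000_0001_EDX,  /* CPUID[8000_0001].EDX */
    FEAT_8000_0001_ECX,  /* CPUID[8000_0001].ECX */
    FEAT_C000_0001_EDX,  /* CPUID[C000_0001].EDX */
    FEAT_KVM,            /* CPUID[4000_0001].EAX (KVM_CPUID_FEATURES) */
    FEAT_SVM,            /* CPUID[8000_000A].EDX */
    FEATURE_WORDS,
} FeatureWord;

typedef uint32_t FeatureWordArray[FEATURE_WORDS];

/* X86CPU and x86_def_t then simply carry: FeatureWordArray features; */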
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Break lines on kvm_check_features_against_host(), kvm_cpu_fill_host(),
and builtin_x86_defs, so they don't get too long once the *_features
fields are replaced by an array.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add appropriate spaces around operators, and break line where it needs
to be broken to allow feature-words array to be introduced without
having too-long lines.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
It allows the APIC to be hotplugged.
* map the APIC's MMIO region at board level if it is present
* do not register an MMIO region for each APIC, since only one is
used/mapped
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
X86CPU should have a parent bus so it can provide a bus for the child APIC.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Put APIC_SPACE_SIZE in a public header so that it can be
reused elsewhere later.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The property is used at board level to set the APIC ID for the CPUs it
creates. Do so in a new pc_new_cpu() helper, to be reused for hot-plug.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This helper replaces '_' with '-' in a uniform way.
As a side effect, even custom mappings must use '-' now.
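The helper boils down to something like this sketch:

#include <string.h>

/* convert underscores in a feature/property name to dashes, in place */
static void feat2prop(char *s)
{
    while ((s = strchr(s, '_'))) {
        *s = '-';
    }
}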
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
[AF: Split off; operate on NUL-terminated string rather than '=' delimiter]
Signed-off-by: Andreas Färber <afaerber@suse.de>
get_arch_id() adds the possibility for generic code to get a guest-visible
CPU ID without accessing CPUArchState.
If derived classes don't override it, it returns cpu_index.
Override it on target-i386 in X86CPU to return the APIC ID.
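A sketch of the default hook and the x86 override (the env field name is an assumption):

/* generic default: fall back to the internal index */
static int64_t cpu_common_get_arch_id(CPUState *cpu)
{
    return cpu->cpu_index;
}

/* target-i386 override: report the guest-visible APIC ID instead */
static int64_t x86_cpu_get_arch_id(CPUState *cs)
{
    X86CPU *cpu = X86_CPU(cs);

    return cpu->env.cpuid_apic_id;
}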
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Fixed EFLAGS corruption by ROR r8/r16 instruction located at the end of the TB.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Read and write steal time MSR, so that reporting is functional across
migration.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Move CPU creation and feature parsing into a separate cpu_x86_create()
function, so that boards are able to set board-specific CPU
properties before the CPU is realized.
Keep cpu_x86_init() for compatibility with the code that uses cpu_init()
and doesn't need to modify CPU properties.
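A sketch of the resulting split, with cpu_x86_init() kept as a thin wrapper;
the extra cpu_x86_create() parameter and the error plumbing are assumptions:

X86CPU *cpu_x86_init(const char *cpu_model)
{
    Error *err = NULL;
    X86CPU *cpu;

    /* create the object and parse "-cpu model,feature,..." properties */
    cpu = cpu_x86_create(cpu_model, NULL, &err);
    if (cpu && !err) {
        /* no board-specific properties to set here, realize right away */
        object_property_set_bool(OBJECT(cpu), true, "realized", &err);
    }
    if (err) {
        error_free(err);
        return NULL;
    }
    return cpu;
}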
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
* Add braces to 'if' statements;
* Remove last TAB character from the source.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Changed whitespace]
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
When an APIC is hotplugged during CPU hotplug, device_set_realized()
calls device_reset() on it. And if QEMU runs in KVM mode, the following
call chain will fail:
apic_reset_common()
-> kvm_apic_vapic_base_update()
-> kvm_vcpu_ioctl(cpu->kvm_fd,...)
due to cpu->kvm_fd not being initialized yet.
cpu->kvm_fd is initialized during qemu_init_vcpu(), but x86_cpu_apic_init()
can't be moved after it because kvm_init_vcpu() -> kvm_arch_reset_vcpu()
relies on the APIC to determine whether the CPU is the BSP when setting the
initial env->mp_state.
So split APIC device creation from its initialization and realize the APIC
after the CPU is created, when it's safe to call the APIC's reset method.
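Conceptually, the realize path then looks like this sketch; the helper names follow
the description above, and the surrounding details are assumptions:

static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
{
    X86CPU *cpu = X86_CPU(dev);
    CPUState *cs = CPU(dev);

    x86_cpu_apic_create(cpu, errp);   /* instantiate the APIC, do not reset it yet */
    qemu_init_vcpu(cs);               /* cpu->kvm_fd becomes valid here */
    x86_cpu_apic_realize(cpu, errp);  /* APIC reset may now issue KVM vcpu ioctls */
    cpu_reset(cs);
}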
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
We were missing a bunch of feature lists. Fix this by simply dumping
the meta list feature_word_info.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
kvm_enabled() cannot be true at this point because accelerators are
initialized much later during init. Also, hiding this makes it very hard
for users to discover. Simply dump unconditionally if CONFIG_KVM is set.
Add explanation for "host" CPU type.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The PCLMULQDQ instruction has been introduced on the Westmere CPU.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A common dependency of the constant's current users:
- hw/apic_common.c
- hw/i386/kvmvapic.c
- target-i386/cpu.c
is "target-i386/cpu.h".
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Message-id: 1363821803-3380-9-git-send-email-lersek@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
commit 5ec01c2e96 broke "-cpu ..,enforce",
as it has moved kvm_check_features_against_host() after the
filter_features_for_kvm() call. filter_features_for_kvm() removes all
features not supported by the host, so this effectively made
kvm_check_features_against_host() impossible to fail.
This patch changes the call so we check for host feature support before
filtering the feature bits.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 1364935692-24004-1-git-send-email-ehabkost@redhat.com
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
valids can equal -1 if the reg/mem string is empty. Change the
expression to have an empty xor mask in that case.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The inner loop should only change the current bit of the result, instead
of the whole result.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
pcmpXstrX instructions in "Equal each" mode force element pairs in which
both elements are invalid to true. It means (upper - MAX(valids, validd))
bits should be set to 1, not (upper - MAX(valids, validd) + 1).
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Fix the order of the comparisons to match the "Intel 64 and
IA-32 Architectures Software Developer's Manual".
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
pcmpXstrm instructions return their result in the XMM0 register and
not in the first operand.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
ffs1 returns the first bit set to one, counting from the most
significant bit.
pcmpXstri returns the most significant bit set to one, counting
from the least significant bit.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The "Intel 64 and IA-32 Architectures Software Developer's Manual" (at
least recent versions) clearly says that the comparison is signed.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
gen_op_mov_TN_reg() loads the value into cpu_T[0], so this temporary should
be used instead of cpu_tmp0.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We can compute the value in cpu_dump_state anyway, and gratuitous
modifications to eflags create heisenbugs.
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
When starting from CC_OP_DYNAMIC, and issuing adox before adcx,
a typo used the wrong value for the resulting CC_OP.
Cc: Blue Swirl <blauwirbel@gmail.com>
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Fix various typos and misspellings. The bulk of these were found with
codespell.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.
It will also allow overriding the interrupt handling for certain CPU
families.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move it to qom/cpu.h to avoid issues with include order.
Change pc_acpi_smi_interrupt() opaque to X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Both fields are used in VMState and thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Expose vmstate_cpu as vmstate_x86_cpu and hook it up to CPUClass::vmsd.
Adapt opaques and VMState fields to X86CPU. Drop cpu_{save,load}().
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Merge remote-tracking branch 'mst/tags/for_anthony' into staging
virtio,vhost,pci,e1000
Mostly bugfixes, but also some ICH work by Laszlo.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# By Michael S. Tsirkin (2) and others
# Via Michael S. Tsirkin
* mst/tags/for_anthony:
Set virtio-serial device to have a default of 2 MSI vectors.
ICH9 LPC: Reset Control Register, basic implementation
Fix guest OS hang when 64bit PCI bar present
e1000: unbreak the guest network migration to 1.3
vhost: memory sync fixes
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Introduce ENV_OFFSET macros which can be used in non-target-specific
code that needs to generate TCG instructions which reference CPUState
fields given the cpu_env register that TCG targets set up with a
pointer to the CPUArchState struct.
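A sketch of what this enables (target-i386 shown; the CPUState field and the helper
name are purely illustrative):

/* per-target header defines where the CPUArchState sits inside the CPU object */
#define ENV_OFFSET offsetof(X86CPU, env)

/* generic code can then reach CPUState fields relative to cpu_env */
static void gen_load_cpu_index(TCGv_i32 dst)
{
    tcg_gen_ld_i32(dst, cpu_env,
                   offsetof(CPUState, cpu_index) - ENV_OFFSET);
}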
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
These correspond very closely to the insns that we're emulating.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This patch addresses the issue fully described here:
http://lists.nongnu.org/archive/html/qemu-devel/2013-02/msg01804.html
Linux kernels prior to 2.6.36 do not disable the PCI device during the
enumeration process. Since the lower and higher parts of a 64-bit BAR
are programmed separately, this leads to QEMU receiving a request to occupy
a completely wrong address region for a short period of time.
We have found that the boot process screws up completely if the kvm-apic
range is overlapped even for a short period of time (it is fine for other
regions though).
This patch raises the priority of the kvm-apic memory region, so it is
never pushed out by PCI devices. The patch is quite safe as it does not
touch the memory manager.
Signed-off-by: Alexey Korolev <akorolex@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The shift and rotate insns use movcond to set CC_OP, and thus
achieve a conditional EFLAGS setting. By discarding CC_OP in
a later flags setting insn, we can discard that movcond.
Signed-off-by: Richard Henderson <rth@twiddle.net>
We weren't computing flags for lzcnt at all. At the same time,
adjust the implementation of bsf/bsr to avoid the local branch,
using movcond instead.
Signed-off-by: Richard Henderson <rth@twiddle.net>
As this is the first of the BMI insns to be implemented,
this carries quite a bit more baggage than normal.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add another slot in ENV and store two of the three inputs. This lets us
do less work when carry-out is not needed, and avoids the unpredictable
CC_OP after translating these insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.
Signed-off-by: Richard Henderson <rth@twiddle.net>
In preparation for making this a const helper.
By using the proper types in the parameters to the helper functions,
we get to avoid quite a lot of subsequent casting.
Signed-off-by: Richard Henderson <rth@twiddle.net>
After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition. However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).
The temp-local can be declared dead as soon as the cc_op changes again,
or also before the translation block ends because gen_prepare_cc will
always make a copy before returning it. All this magic, plus copy
propagation and dead-code elimination, ensures that the temp local will
(almost) never be spilled.
Example (cmp $0x21,%rax + jbe):
Before                                After
----------------------------------------------------------------------------
movi_i64 tmp1,$0x21                   movi_i64 tmp1,$0x21
movi_i64 cc_src,$0x21                 movi_i64 cc_src,$0x21
sub_i64 cc_dst,rax,tmp1               sub_i64 cc_dst,rax,tmp1
add_i64 tmp7,cc_dst,cc_src
movi_i32 cc_op,$0x11                  movi_i32 cc_op,$0x11
brcond_i64 tmp7,cc_src,leu,$0x0       discard loc11
                                      brcond_i64 rax,cc_src,leu,$0x0

Before                                After
----------------------------------------------------------------------------
mov    (%r14),%rbp                    mov    (%r14),%rbp
mov    %rbp,%rbx                      mov    %rbp,%rbx
sub    $0x21,%rbx                     sub    $0x21,%rbx
lea    0x21(%rbx),%r12
movl   $0x11,0xa0(%r14)               movl   $0x11,0xa0(%r14)
movq   $0x21,0x90(%r14)               movq   $0x21,0x90(%r14)
mov    %rbx,0x98(%r14)                mov    %rbx,0x98(%r14)
cmp    $0x21,%r12                 |   cmp    $0x21,%rbp
jbe    ...                            jbe    ...
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Placing the CC_OP_DYNAMIC at the join is less effective than
before the branch, as the branch will have forced global registers
to their home locations. This way we have a chance to discard
CC_SRC2 before it gets stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op. However, not all jumps end
a basic block, so introduce a variant that does not do this.
This was partially undone earlier (i386: drop cc_op argument of gen_jcc1),
redo it now also to prepare for the introduction of src2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3. This value is then written to t2 without
going through a temporary register.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This simplifies all the jump generation code. CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes the i386 front-end able to create CCPrepare structs for all
conditions, not just those that come from a single flag. In particular,
JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced
to return a result in bit 0 (unlike gen_setcc_slow).
However, for now the slow jcc operations will still go through CC
computation in a single-bit temporary, followed by a brcond if the
temporary is nonzero.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a struct that describes how to build a *cond operation
that checks for a given x86 condition code. For now, just change
gen_compute_eflags_* to return the new struct, generate code for
the CCPrepare struct, and go on as before.
[rth: Use ctz with the proper width rather than ffs.]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reconstruct the arguments for complex conditions involving CC_OP_SUBx (BE,
L, LE). In the others do it via setcond and gen_setcc_slow (which is
not that slow in many cases).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And allow gen_setcc_slow to operate on cpu_cc_src.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is looking at EFLAGS, but it can do so more efficiently with
setcond.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do not hard code the destination register.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do the switch at translation time, converting the helper templates to
TCG opcodes. In some cases CF can be computed with a single setcond,
though in others it may require a little more work.
In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the
inverted condition, and use this in gen_setcc_slow_T0. We cannot do it
yet in gen_compute_eflags_c, but prepare the code for it anyway. It is
not worthwhile for PF, as usual.
shr+and+xor could be replaced by and+setcond. I'm not doing it yet.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
ZF, SF and PF can always be computed from CC_DST except in the
CC_OP_EFLAGS case (and CC_OP_DYNAMIC, which just resolves to CC_OP_EFLAGS
in gen_compute_eflags). Use setcond to compute ZF and SF.
We could also use a table lookup to compute PF.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This gets us universal coverage, rather than scattering discards
around at various places. As a bonus, we do not emit redundant
discards e.g. between sequential logic insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes code more similar to the other callers of gen_eob, especially
loopz/loopnz/jcxz.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
After calling gen_compute_eflags, leave the computed value in cc_reg_src
and set cc_op to CC_OP_EFLAGS. The next few patches will remove most
calls to gen_compute_eflags anyway.
As a result of this change it is more natural to remove the register
argument from gen_compute_eflags and change all the callers.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce new functions to extract PF, SF, OF, ZF in addition to CF.
These provide single entry points for optimizing accesses to a single
flag.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
All of the conditional calls to gen_op_set_cc_op go away, and
gen_op_set_cc_op itself gets inlined into its only remaining caller.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a dirty flag to know whether env->cc_op is up to date,
rather than forcing s->cc_op to DYNAMIC and losing info.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Before computing flags we need to store the cc_op to memory. Move this
to gen_compute_eflags_c and gen_compute_eflags rather than doing it all
over the place.
Also, after computing the flags in cpu_cc_src we are in EFLAGS mode.
Set s->cc_op and discard cpu_cc_dst in gen_compute_eflags, rather than
doing it all over the place.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Discard CC_DST and set s->cc_op immediately after computing EFLAGS.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Always compute EFLAGS first since it is needed whenever
the shift is non-zero, i.e. most of the time. This makes it possible
to remove some writes of CC_OP_EFLAGS to cpu_cc_op and more importantly
removes cases where s->cc_op becomes CC_OP_DYNAMIC. Also, we can
remove cc_tmp and just modify cc_src from within the helper.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Always compute EFLAGS first since it is needed whenever the shift is
non-zero, i.e. most of the time. This makes it possible to remove some
writes of CC_OP_EFLAGS to cpu_cc_op and more importantly removes cases
where s->cc_op becomes CC_OP_DYNAMIC. These are slow and we want to
avoid them: CC_OP_EFLAGS is quite efficient once we paid the initial
cost of computing the flags.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As in the gen_repz_scas/gen_repz_cmps case, delay setting
CC_OP_DYNAMIC in gen_jcc until after code generation. All of
gen_jcc1/is_fast_jcc/gen_setcc_slow_T0 now work on s->cc_op, which makes
things a bit easier to follow and to patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Set it to the appropriate CC_OP_SUBx constant in gen_scas/gen_cmps.
In the repz case it can be overridden to CC_OP_DYNAMIC after generating
the code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a function that abstracts extracting an 8, 16, 32 or 64-bit value
with or without sign, generalizing gen_extu and gen_exts.
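A sketch of such a helper, using the i386 front end's operand-size constants of the time:

static TCGv gen_ext_tl(TCGv dst, TCGv src, int size, bool sign)
{
    switch (size) {
    case OT_BYTE:
        if (sign) {
            tcg_gen_ext8s_tl(dst, src);
        } else {
            tcg_gen_ext8u_tl(dst, src);
        }
        return dst;
    case OT_WORD:
        if (sign) {
            tcg_gen_ext16s_tl(dst, src);
        } else {
            tcg_gen_ext16u_tl(dst, src);
        }
        return dst;
#ifdef TARGET_X86_64
    case OT_LONG:
        if (sign) {
            tcg_gen_ext32s_tl(dst, src);
        } else {
            tcg_gen_ext32u_tl(dst, src);
        }
        return dst;
#endif
    default:
        /* value already occupies the full target-long width */
        return src;
    }
}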
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In order to instantiate a CPU subtype we will need to know which type,
so move the cpu_model splitting into cpu_x86_init().
Parameters need to be set on the X86CPU instance, so move
cpu_x86_parse_featurestr() into cpu_x86_init() as well.
This leaves cpu_x86_register() operating on the model name only.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Consolidate CPU functions in cpu.c.
This allows making cpu_x86_register() static.
No functional changes.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The target-specific ENV_GET_CPU() macros have allowed us to navigate
from CPUArchState to CPUState. The reverse direction was not supported.
Avoid introducing CPU_GET_ENV() macros by adding an untyped
pointer that is initialized in derived instance_init functions.
The field may not be called "env" due to it being poisoned.
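A sketch of the wiring in a derived instance_init (x86 shown, abridged):

static void x86_cpu_initfn(Object *obj)
{
    X86CPU *cpu = X86_CPU(obj);
    CPUState *cs = CPU(obj);

    /* generic code can now go from CPUState back to the CPUArchState */
    cs->env_ptr = &cpu->env;
}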
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>