Commit Graph

1534 Commits

Eduardo Habkost
7b458bfd12 target-i386: Add "tsc_adjust" CPU feature name
tsc_adjust migration support is already implemented (commit
f28558d3d3), so we can add it to the list
of known feature names.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-26 14:52:43 +02:00
Eduardo Habkost
5bd8ff07e6 target-i386: Add "mpx" CPU feature name
Migration support for MPX is already implemented (commit
79e9ebebbf), so we can add it to the list
of known feature names.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-26 14:52:38 +02:00
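Both of the commits above plug a bit name into the same mechanism; a minimal sketch of the idea, with an illustrative table that only shows the two bits in question rather than QEMU's full FEAT_7_0_EBX list:

    /* Feature names are looked up per feature word, indexed by bit position.
     * Bit positions follow CPUID.(EAX=07H,ECX=0):EBX; this table is a sketch,
     * not the real cpu.c array. */
    static const char *cpuid_7_0_ebx_feature_names_sketch[32] = {
        [1]  = "tsc_adjust",   /* IA32_TSC_ADJUST MSR supported */
        [14] = "mpx",          /* Memory Protection Extensions  */
    };

Once a bit has a name here, it can be toggled from the command line like any other feature flag.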
Alex Williamson
9db2efd95e x86: Clear MTRRs on vCPU reset
The SDM specifies (June 2014 Vol3 11.11.5):

    On a hardware reset, the P6 and more recent processors clear the
    valid flags in variable-range MTRRs and clear the E flag in the
    IA32_MTRR_DEF_TYPE MSR to disable all MTRRs. All other bits in the
    MTRRs are undefined.

We currently do none of that, so whatever MTRR settings you had prior
to reset are what you have after reset.  Usually this doesn't matter
because KVM often ignores the guest mappings and uses write-back
anyway.  However, if you have an assigned device and an IOMMU that
allows NoSnoop for that device, KVM defers to the guest memory
mappings which are now stale after reset.  The result is that OVMF
rebooting on such a configuration takes a full minute to LZMA
decompress the firmware volume, a process that is nearly instant on
the initial boot.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-25 18:53:42 +02:00
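A minimal sketch of the reset behaviour the quoted SDM passage requires, assuming an illustrative MTRR state layout (not the exact CPUX86State fields):

    #include <stdint.h>

    #define MTRR_VAR_COUNT     8            /* MSR_MTRRcap_VCNT */
    #define MSR_MTRRdefType_E  (1ULL << 11) /* E flag in IA32_MTRR_DEF_TYPE */

    struct mtrr_state_sketch {
        struct { uint64_t base, mask; } var[MTRR_VAR_COUNT];
        uint64_t deftype;
    };

    static void mtrr_reset(struct mtrr_state_sketch *s)
    {
        for (int i = 0; i < MTRR_VAR_COUNT; i++) {
            s->var[i].mask &= ~(1ULL << 11);   /* clear V (valid) bit */
        }
        s->deftype &= ~MSR_MTRRdefType_E;      /* disable all MTRRs   */
    }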
Alex Williamson
d1ae67f626 x86: kvm: Add MTRR support for kvm_get|put_msrs()
The MTRR state in KVM currently runs completely independent of the
QEMU state in CPUX86State.mtrr_*.  This means that on migration, the
target loses MTRR state from the source.  Generally that's ok though
because KVM ignores it and maps everything as write-back anyway.  The
exception to this rule is when we have an assigned device and an IOMMU
that doesn't promote NoSnoop transactions from that device to be cache
coherent.  In that case KVM trusts the guest mapping of memory as
configured in the MTRR.

This patch updates kvm_get|put_msrs() so that we retrieve the actual
vCPU MTRR settings and therefore keep CPUX86State synchronized for
migration.  kvm_put_msrs() is also used on vCPU reset and therefore
allows future modifications of MTRR state at reset to be realized.

Note that the entries array used by both functions was already
slightly undersized for holding every possible MSR, so this patch
increases it beyond the 28 new entries necessary for MTRR state.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-25 18:53:42 +02:00
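For reference, the 28 new entries break down as one default-type register, eleven fixed-range registers, and eight variable ranges with a base/mask pair each. A self-contained sketch enumerating them (MSR indices follow the Intel SDM, the struct name is illustrative):

    #include <stdint.h>

    struct msr_entry_sketch { uint32_t index; uint64_t data; };

    static int mtrr_msr_list(struct msr_entry_sketch *e)
    {
        int n = 0;
        e[n++].index = 0x2ff;                 /* IA32_MTRR_DEF_TYPE     */
        e[n++].index = 0x250;                 /* MTRRfix64K_00000       */
        e[n++].index = 0x258;                 /* MTRRfix16K_80000       */
        e[n++].index = 0x259;                 /* MTRRfix16K_A0000       */
        for (uint32_t i = 0; i < 8; i++) {
            e[n++].index = 0x268 + i;         /* MTRRfix4K_C0000..F8000 */
        }
        for (uint32_t i = 0; i < 8; i++) {
            e[n++].index = 0x200 + 2 * i;     /* MTRRphysBase0..7       */
            e[n++].index = 0x201 + 2 * i;     /* MTRRphysMask0..7       */
        }
        return n;                             /* 28 entries in total    */
    }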
Alex Williamson
d8b5c67b05 x86: Use common variable range MTRR counts
We currently define the number of variable range MTRR registers as 8
in the CPUX86State structure and vmstate, but use MSR_MTRRcap_VCNT
(also 8) to report to guests the number available.  Change this to
use MSR_MTRRcap_VCNT consistently.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-25 18:53:42 +02:00
William Grant
1844e68eca target-i386: Don't forbid NX bit on PAE PDEs and PTEs
Commit e8f6d00c30 ("target-i386: raise
page fault for reserved physical address bits") added a check that the
NX bit is not set on PAE PDPEs, but it also added it to rsvd_mask for
the rest of the function. This caused any PDEs or PTEs with NX set to be
erroneously rejected, making PAE guests with NX support unusable.

Signed-off-by: William Grant <wgrant@ubuntu.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-25 18:53:42 +02:00
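A hedged sketch of the distinction the fix restores: NX (bit 63) is reserved in PAE PDPTEs, but legitimate in PDEs and PTEs whenever EFER.NXE is set (the mask handling is illustrative, not the actual helper code):

    #include <stdbool.h>
    #include <stdint.h>

    #define PG_NX_MASK (1ULL << 63)

    static bool pae_pdpte_reserved(uint64_t pdpte, uint64_t rsvd_mask)
    {
        /* PDPTEs: NX is always reserved, so include it in the check. */
        return (pdpte & (rsvd_mask | PG_NX_MASK)) != 0;
    }

    static bool pae_pde_pte_reserved(uint64_t entry, uint64_t rsvd_mask, bool nxe)
    {
        /* PDEs/PTEs: NX is only reserved when EFER.NXE is off. */
        if (!nxe) {
            rsvd_mask |= PG_NX_MASK;
        }
        return (entry & rsvd_mask) != 0;
    }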
Jincheng Miao
47575997be linux-user: Fix syscall instruction usermode emulation on X86_64
Currently the syscall instruction is buggy in X86_64 user-mode emulation:
EIP is only updated after do_syscall(), which is too late for
clone(). clone() creates a thread that starts at env->EIP (still the
address of the syscall instruction), so the child thread enters
do_syscall() again, which is not expected and sometimes fatal.

User-mode syscall instruction emulation does not use MSRs, so the
behaviour should match INT 0x80. INT 0x80 updates EIP in
do_interrupt(); do the same for syscall for consistency.

Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
2014-08-22 15:06:33 +03:00
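A toy model of the ordering bug fixed above; every name is illustrative, not QEMU's linux-user code:

    #include <stdint.h>

    struct toy_env { uint64_t eip; };

    /* clone() snapshots the parent's current EIP for the child. */
    static struct toy_env toy_clone(const struct toy_env *parent)
    {
        return *parent;
    }

    static struct toy_env dispatch_syscall(struct toy_env *env,
                                           uint64_t next_eip, int is_clone)
    {
        env->eip = next_eip;          /* fixed ordering: commit EIP first,  */
        if (is_clone) {               /* so the child resumes after the     */
            return toy_clone(env);    /* syscall instead of re-entering it. */
        }
        return *env;
    }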
Peter Maydell
5c6b3c50cc Tracing pull request
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJT6hhYAAoJEJykq7OBq3PIH44IAIC42HoYJFgE1RkLl/77PhpV
 WNNDJ/SIh/084PS6XKvHja0aUGjmQM/QmlCuV17MLp7ub1XeMDoncP9AnVhiWTyL
 a3c5TJw8OasBadffSFLXh5ZmW/fgkie+TjXIWud4dB+hZmd28uV46tLLRrJFJA6O
 uCpAKUUCVyN78LDhsGVUzZAYjXzeFQQ9Eq5z4dysfCO5x4y5rvcTs6MJ6X5vxUBP
 rF3RTKb5DmcFZvuOYJxVx9WiDOe6RiMS72sitQCszvGspmBtVP0CvJQnHu7nMOVf
 Ljti0XVui3t3Jto+DJSH4ki0i025MSetgAMhk1bYcVnK4XQ2t03DrQExOM+VjjM=
 =+ba+
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging

Tracing pull request

* remotes/stefanha/tags/tracing-pull-request:
  virtio-rng: add some trace events
  trace: add some tcg tracing support
  trace: teach lttng backend to use format strings
  trace: [tcg] Include TCG-tracing header on all targets
  trace: [tcg] Include event definitions in "trace.h"
  trace: [tcg] Generate TCG tracing routines
  trace: [tcg] Include TCG-tracing helpers
  trace: [tcg] Define TCG tracing helper routine wrappers
  trace: [tcg] Define TCG tracing helper routines
  trace: [tcg] Declare TCG tracing helper routines
  trace: [tcg] Add 'tcg' event property
  trace: [tcg] Argument type transformation machinery
  trace: [tcg] Argument type transformation rules
  trace: [tcg] Add documentation
  trace: install simpletrace SystemTap tapset
  simpletrace: add simpletrace.py --no-header option
  trace: add tracetool simpletrace_stap format
  trace: extract stap_escape() function for reuse

Conflicts:
	Makefile.objs
2014-08-15 16:37:17 +01:00
Lluís Vilanova
a7e30d84ce trace: [tcg] Include TCG-tracing header on all targets
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-08-12 14:26:12 +01:00
chenfan
5bb4c35dca target-i386/cpu.c: Fix indentation of two error outputs
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-08-09 00:06:32 +04:00
Ricky Zhou
b4bda1ae57 target-i386: Allow execute from user mode when SMEP is enabled.
Previously, execute would be disabled for all pages with SMEP enabled,
regardless of what mode the access took place in.

Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-15 18:43:14 +02:00
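A minimal sketch of the rule the fix implements: SMEP only forbids supervisor-mode instruction fetches from user-accessible pages, so user-mode execution must stay allowed (names are illustrative, not the TLB code):

    #include <stdbool.h>

    static bool smep_exec_allowed(bool smep, bool page_user, bool user_mode)
    {
        if (smep && page_user && !user_mode) {
            return false;   /* supervisor fetch from a user page: #PF  */
        }
        return true;        /* user-mode fetches are unaffected by SMEP */
    }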
Eduardo Habkost
8248c36a5d target-i386: Add "kvmclock-stable-bit" feature bit name
KVM_FEATURE_CLOCKSOURCE_STABLE_BIT is enabled by default and supported
by KVM. But not having a name defined makes QEMU treat it as an unknown
and unmigratable feature flag (as any unknown feature may possibly
require state to be migrated), and disable it by default on "-cpu host".

As a side-effect, the new name also makes the flag configurable,
allowing the user to disable it (which may be useful for testing or for
compatibility with old kernels).

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-10 17:06:33 +02:00
Eduardo Habkost
ece0135407 target-i386: Broadwell CPU model
This adds a new CPU model named "Broadwell". It has all the features
from Haswell, plus PREFETCHW, RDSEED, ADX, SMAP.

PREFETCHW was already supported as "3dnowprefetch".

RDSEED and ADX were added in Linux v3.15-rc1.

SMAP was added in Linux v3.15-rc2.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Cc: Wang, Yong Y <yong.y.wang@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Dugger, Donald D <donald.d.dugger@intel.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Eduardo Habkost
b3fb3a200b target-i386: Fix indentation of CPU model definitions
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Marcelo Tosatti
303752a906 target-i386: Support "invariant tsc" flag
Expose "Invariant TSC" flag, if KVM is enabled. From Intel documentation:

17.13.1 Invariant TSC The time stamp counter in newer processors may
support an enhancement, referred to as invariant TSC. Processor’s
support for invariant TSC is indicated by CPUID.80000007H:EDX[8].
The invariant TSC will run at a constant rate in all ACPI P-, C-,
and T-states. This is the architectural behavior moving forward. On
processors with invariant TSC support, the OS may use the TSC for wall
clock timer services (instead of ACPI or HPET timers). TSC reads are
much more efficient and do not incur the overhead associated with a ring
transition or access to a platform resource.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
[ehabkost: redo feature filtering to use .tcg_features]
[ehabkost: add CPUID_APM_INVTSC macro, add it to .unmigratable_flags]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
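The flag in question is CPUID.80000007H:EDX[8] (CPUID_APM_INVTSC in the patch); a tiny sketch of the check, assuming edx already holds the result of that leaf:

    #include <stdbool.h>
    #include <stdint.h>

    #define CPUID_APM_INVTSC (1U << 8)   /* Invariant TSC */

    static bool has_invariant_tsc(uint32_t cpuid_80000007_edx)
    {
        return (cpuid_80000007_edx & CPUID_APM_INVTSC) != 0;
    }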
Marcelo Tosatti
68bfd0ad4a target-i386: block migration and savevm if invariant tsc is exposed
Invariant TSC documentation mentions that "invariant TSC will run at a
constant rate in all ACPI P-, C-, and T-states".

This is not the case if migration to a host with different TSC frequency
is allowed, or if savevm is performed. So block migration/savevm.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
[AF+mtosatti: Updated error message]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Eduardo Habkost
120eee7d1f target-i386: Set migratable=yes by default on "host" CPU model
Having only migratable flags reported by default on the "host" CPU model
is safer for the following reasons:

 * Existing users may expect "-cpu host" to be migration-safe, if they
   take care of always using compatible host CPUs, host kernels, and
   QEMU versions.
 * Users who don't care about migration and want to enable all features
   supported by the host kernel can simply change their setup to use
   migratable=no.

Without this change, people using "-cpu host" will stop being able to
migrate, because now "invtsc" is getting enabled by default.

We are not setting migratable=yes by default on all X86CPU subclasses,
because users should be able to get non-migratable features enabled if
they ask for them explicitly.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Eduardo Habkost
84f1b92f97 target-i386: Add "migratable" property to "host" CPU model
This flag will allow the user to choose between two modes:
 * All flags that can be enabled on the host, even if unmigratable
   (migratable=no);
 * All flags that can be enabled on the host, are known to QEMU,
   and are migratable (migratable=yes).

The default is still migratable=false, to keep current behavior, but
this will be changed to migratable=true by another patch.

My plan was to support the "migratable" flag on all CPU classes, but
have the default be "false" on all CPU models except "host". However,
DeviceClass has no mechanism to allow a child class to have a different
property default from the parent class yet, so for now only the "host"
CPU model will support the "migratable" flag.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Eduardo Habkost
fefb41bf34 target-i386: Support check/enforce flags in TCG mode, too
If enforce/check is specified in TCG mode, QEMU will ensure all CPU
features are supported by TCG, so no CPU feature is silently disabled.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Be explicit about TCG vs. !KVM]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Eduardo Habkost
37ce3522cb target-i386: Loop-based feature word filtering in TCG mode
Instead of manually filtering each feature word, add a tcg_features
field to FeatureWordInfo, and use that field to filter all feature words
in TCG mode.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Eduardo Habkost
e1c224b4eb target-i386: Loop-based copying and setting/unsetting of feature words
Now that we have the feature word arrays, we don't need to manually copy
each array item, we can simply iterate through each feature word.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:57 +02:00
Eduardo Habkost
621626ce7d target-i386: Define TCG_*_FEATURES earlier in cpu.c
Those macros will be used in the feature_word_info array data, so need
to be defined earlier.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:56 +02:00
Eduardo Habkost
84a6c6cd40 target-i386: Filter KVM and 0xC0000001 features on TCG
TCG doesn't support any of the feature flags on FEAT_KVM and
FEAT_C000_0001_EDX feature words, so clear all bits on those feature
words.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:56 +02:00
Eduardo Habkost
d0a70f46fa target-i386: Filter FEAT_7_0_EBX TCG features too
The TCG_7_0_EBX_FEATURES macro was defined but never used (it even had a
typo that was never noticed). Make the existing TCG feature filtering
code use it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 23:54:29 +02:00
Eduardo Habkost
a42d9938a1 target-i386: Make TCG feature filtering more readable
Instead of an #ifdef in the middle of the code, just set
TCG_EXT2_FEATURES to a different value depending on TARGET_X86_64.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 18:04:15 +02:00
Eduardo Habkost
27418adf32 target-i386: Isolate KVM-specific code on CPU feature filtering logic
This will allow us to re-use the feature filtering logic (and the
check/enforce flag logic) for TCG.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 18:04:15 +02:00
Eduardo Habkost
8459e3961e target-i386: Pass FeatureWord argument to report_unavailable_features()
This will help us simplify the code that calls
report_unavailable_features() later.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 18:04:15 +02:00
Eduardo Habkost
51f63aed32 target-i386: Merge feature filtering/checking functions
Merge filter_features_for_kvm() and kvm_check_features_against_host().

Both functions made exactly the same calculations, the only difference
was that filter_features_for_kvm() changed the bits on cpu->features[],
and kvm_check_features_against_host() did error reporting.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 18:04:15 +02:00
Eduardo Habkost
857aee337c target-i386: Simplify reporting of unavailable features
Instead of checking and calling unavailable_host_feature() once for each
bit, simply call the function (now renamed to
report_unavailable_features()) once for each feature word.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Drop unused return value]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 18:04:15 +02:00
Eduardo Habkost
136a7e9a85 target-i386: kvm: Don't enable MONITOR by default on any CPU model
KVM never supported the MONITOR flag so it doesn't make sense to have it
enabled by default when KVM is enabled.

The rationale here is similar to the cases where it makes sense to have
a feature enabled by default on all CPU models when on KVM mode (e.g.
x2apic). In this case we have a feature disabled by default for
the same reasons.

In this case we don't need machine-type compat code because it is
currently impossible to run a KVM VM with the MONITOR flag set.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-06-25 18:04:15 +02:00
Tom Musta
04af534d55 target-i386: Use Common ShiftRows and InvShiftRows Tables
This patch eliminates the (now) redundant copy of the Advanced Encryption Standard (AES)
ShiftRows and InvShiftRows tables; the code is updated to use the common tables declared in
include/qemu/aes.h.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-06-16 13:24:33 +02:00
Juan Quintela
d49805aeea savevm: Remove all the unneeded version_minimum_id_old (x86)
After Peter's previous patch, they are redundant.  This way we don't
assign them except when needed.  While at it, there were lots of cases
where the ".fields" indentation was wrong:

     .fields = (VMStateField []) {
and
     .fields =      (VMStateField []) {

Change all the combinations to:

     .fields = (VMStateField[]){

The biggest problem (apart from aesthetics) was that checkpatch complained
when we copy&pasted the code from one place to another.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
2014-06-16 04:55:26 +02:00
Paolo Bonzini
6b1dd54b6a cpu/x86: correctly set errors in x86_cpu_parse_featurestr
Because of the "goto out", the contents of local_err are leaked
and lost.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-06-10 19:39:34 +04:00
Peter Maydell
e3a17ef6cc target-i386/translate.c: Remove unused tcg_gen_lshift()
The function tcg_gen_lshift() is unused; remove it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-06-10 19:39:34 +04:00
Peter Maydell
31e25e3e57 Merge remote-tracking branch 'remotes/bonzini/softmmu-smap' into staging
* remotes/bonzini/softmmu-smap: (33 commits)
  target-i386: cleanup x86_cpu_get_phys_page_debug
  target-i386: fix protection bits in the TLB for SMEP
  target-i386: support long addresses for 4MB pages (PSE-36)
  target-i386: raise page fault for reserved bits in large pages
  target-i386: unify reserved bits and NX bit check
  target-i386: simplify pte/vaddr calculation
  target-i386: raise page fault for reserved physical address bits
  target-i386: test reserved PS bit on PML4Es
  target-i386: set correct error code for reserved bit access
  target-i386: introduce support for 1 GB pages
  target-i386: introduce do_check_protect label
  target-i386: tweak handling of PG_NX_MASK
  target-i386: commonize checks for PAE and non-PAE
  target-i386: commonize checks for 4MB and 4KB pages
  target-i386: commonize checks for 2MB and 4KB pages
  target-i386: fix coding standards in x86_cpu_handle_mmu_fault
  target-i386: simplify SMAP handling in MMU_KSMAP_IDX
  target-i386: fix kernel accesses with SMAP and CPL = 3
  target-i386: move check_io helpers to seg_helper.c
  target-i386: rename KSMAP to KNOSMAP
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-06-05 21:06:14 +01:00
Peter Maydell
9f0355b590 Merge remote-tracking branch 'remotes/kvm/uq/master' into staging
* remotes/kvm/uq/master:
  kvm: Fix eax for cpuid leaf 0x40000000
  kvmclock: Ensure proper env->tsc value for kvmclock_current_nsec calculation
  kvm: Enable -cpu option to hide KVM
  kvm: Ensure negative return value on kvm_init() error handling path
  target-i386: set CC_OP to CC_OP_EFLAGS in cpu_load_eflags
  target-i386: get CPL from SS.DPL
  target-i386: rework CPL checks during task switch, preparing for next patch
  target-i386: fix segment flags for SMM and VM86 mode
  target-i386: Fix vm86 mode regression introduced in fd460606fd.
  kvm_stat: allow choosing between tracepoints and old stats
  kvmclock: Ensure time in migration never goes backward

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-06-05 19:16:28 +01:00
Paolo Bonzini
16b96f82cd target-i386: cleanup x86_cpu_get_phys_page_debug
Make the code a bit more similar to x86_cpu_handle_mmu_fault.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:35 +02:00
Paolo Bonzini
b09481de91 target-i386: fix protection bits in the TLB for SMEP
User pages must be marked as non-executable when running under SMEP;
otherwise, fetching the page first and then calling it will fail.

With this patch, all SMEP testcases in kvm-unit-tests now pass.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:35 +02:00
Paolo Bonzini
de431a655a target-i386: support long addresses for 4MB pages (PSE-36)
4MB pages can use 40-bit addresses by putting the higher 8 bits in bits
20-13 of the PDE.  Bit 21 is reserved.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:35 +02:00
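A worked sketch of the PSE-36 composition: physical address bits 39:32 come from PDE bits 20:13, the 4MB frame base from PDE bits 31:22, and the low 22 bits from the virtual address (the function name is illustrative):

    #include <stdint.h>

    static uint64_t pse36_phys_addr(uint32_t pde, uint32_t vaddr)
    {
        uint64_t high = (uint64_t)((pde >> 13) & 0xff) << 32;  /* bits 39:32 */
        uint64_t base = (uint64_t)(pde & 0xffc00000u);         /* bits 31:22 */
        return high | base | (vaddr & 0x3fffff);               /* 4MB offset */
    }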
Paolo Bonzini
eaad03e472 target-i386: raise page fault for reserved bits in large pages
In large pages, bit 12 is for PAT, but bits starting at 13 are reserved.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:35 +02:00
Paolo Bonzini
e2a32ebbfe target-i386: unify reserved bits and NX bit check
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
e7e898a76a target-i386: simplify pte/vaddr calculation
They can be moved to after the dirty bit processing, and unified between
CR0.PG=1 and CR0.PG=0.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
e8f6d00c30 target-i386: raise page fault for reserved physical address bits
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
b728464ae8 target-i386: test reserved PS bit on PML4Es
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
c1eb2fa3fd target-i386: set correct error code for reserved bit access
The correct error code is 9 (present, reserved), not 8.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
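The arithmetic behind the 9: the page-fault error code must carry both the P bit (the fault was not caused by a non-present page) and the RSVD bit. A tiny sketch of the bit layout (masks follow the Intel SDM):

    /* Page-fault error code bits: P = bit 0, W/R = bit 1, U/S = bit 2,
     * RSVD = bit 3, I/D = bit 4. */
    #define PG_ERROR_P_MASK     0x01
    #define PG_ERROR_RSVD_MASK  0x08

    /* Reserved-bit violation on a present translation: 0x01 | 0x08 == 9. */
    enum { RESERVED_BIT_FAULT_CODE = PG_ERROR_P_MASK | PG_ERROR_RSVD_MASK };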
Paolo Bonzini
77549a7809 target-i386: introduce support for 1 GB pages
Given the simplifications to the code in the previous patches, this
is now very simple to do.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
b052e4509b target-i386: introduce do_check_protect label
This will help adding 1GB page support in the next patch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
870a706735 target-i386: tweak handling of PG_NX_MASK
Remove the tail of the PAE case, so that we can use "goto" in the
next patch to jump to the protection checks.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
7c82256006 target-i386: commonize checks for PAE and non-PAE
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
487cad8853 target-i386: commonize checks for 4MB and 4KB pages
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
00cc3e1d70 target-i386: commonize checks for 2MB and 4KB pages
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
843408b3cf target-i386: fix coding standards in x86_cpu_handle_mmu_fault
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
f57584dc87 target-i386: simplify SMAP handling in MMU_KSMAP_IDX
Do not use this MMU index at all if CR4.SMAP is false, and drop
the SMAP check from x86_cpu_handle_mmu_fault.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
8a201bd47e target-i386: fix kernel accesses with SMAP and CPL = 3
With SMAP, implicit kernel accesses from user mode always behave as
if AC=0.  To do this, kernel mode is no longer a separate MMU mode.
Instead, KERNEL_IDX is renamed to KSMAP_IDX and the kernel mode accessors
wrap KSMAP_IDX and KNOSMAP_IDX.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
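A hedged sketch of the MMU-index choice this commit and the two before it describe (the enum values and selection logic are illustrative only):

    #include <stdbool.h>

    enum mmu_idx_sketch { USER_IDX, KNOSMAP_IDX, KSMAP_IDX };

    static enum mmu_idx_sketch kernel_mmu_index(bool smap, bool eflags_ac,
                                                bool implicit_access, int cpl)
    {
        if (!smap) {
            return KNOSMAP_IDX;        /* SMAP disabled: nothing to enforce  */
        }
        if (implicit_access && cpl == 3) {
            return KSMAP_IDX;          /* implicit kernel access: as if AC=0 */
        }
        return eflags_ac ? KNOSMAP_IDX : KSMAP_IDX;
    }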
Paolo Bonzini
81cf8d8adc target-i386: move check_io helpers to seg_helper.c
Prepare for adding _kernel accessors there in the next patch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
43773ed369 target-i386: rename KSMAP to KNOSMAP
This is the mode where SMAP is overridden, so put "NO" in its name.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:34 +02:00
Paolo Bonzini
f08b617018 softmmu: introduce cpu_ldst.h
This will collect all load and store helpers soon.  For now
it is just a replacement for softmmu_exec.h, which this patch
stops including directly, but we also include it where this will
be necessary in order to simplify the next patch.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:33 +02:00
Paolo Bonzini
0f590e749f softmmu: commonize helper definitions
They do not need to be in op_helper.c.  Because cputlb.c now includes
softmmu_template.h twice for each size, io_readX must be elided the
second time through.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:33 +02:00
Jidong Xiao
79b6f2f651 kvm: Fix eax for cpuid leaf 0x40000000
Since Linux kernel 3.5, KVM has documented eax for leaf 0x40000000
to be KVM_CPUID_FEATURES:

57c22e5f35

But qemu still tries to set it to 0. It would be better to make qemu
and kvm consistent. This patch just fixes this issue.

Signed-off-by: Jidong Xiao <jidong.xiao@gmail.com>
[Include kvm_base in the value. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-04 09:12:04 +02:00
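A one-line sketch of the value being fixed: EAX of the signature leaf should advertise the highest KVM leaf, i.e. KVM_CPUID_FEATURES, expressed relative to kvm_base (normally 0x40000000):

    #include <stdint.h>

    static uint32_t kvm_signature_leaf_eax(uint32_t kvm_base)
    {
        return kvm_base + 1;   /* KVM_CPUID_FEATURES relative to the base */
    }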
Alex Williamson
f522d2acc5 kvm: Enable -cpu option to hide KVM
The latest Nvidia driver (337.88) specifically checks for KVM as the
hypervisor and reports Code 43 for the driver in a Windows guest when
found.  Removing or changing the KVM signature is sufficient for the
driver to load and work.  This patch adds an option to easily allow
the KVM hypervisor signature to be hidden using '-cpu kvm=off'.  We
continue to expose KVM via the cpuid value by default.  The state of
this option does not supersede or replace -enable-kvm or the accel=kvm
machine option.  This only changes the visibility of KVM to the guest
and paravirtual features specifically tied to the KVM cpuid.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-03 18:40:48 +02:00
Richard Henderson
2ef6175aa7 tcg: Invert the inclusion of helper.h
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h.  This
minimizes the files that must be rebuilt when changing the macros
for file N.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Paolo Bonzini
28fb26f19f target-i386: set CC_OP to CC_OP_EFLAGS in cpu_load_eflags
There is no reason to keep that out of the function.  The comment refers
to the disassembler's cc_op state rather than the CPUState field.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-21 18:02:08 +02:00
Paolo Bonzini
7125c937c9 target-i386: get CPL from SS.DPL
CS.RPL is not equal to the CPL in the few instructions between
setting CR0.PE and reloading CS.  We get this right in the common
case, because writes to CR0 do not modify the CPL, but it would
not be enough if an SMI comes exactly during that brief period.
Were this to happen, the RSM instruction would erroneously set
CPL to the low two bits of the real-mode selector; and if they are
not 00, the next instruction fetch cannot access the code segment
and causes a triple fault.

However, SS.DPL *is* always equal to the CPL.  In real processors
(AMD only) there is a weird case of SYSRET setting SS.DPL=SS.RPL
from the STAR register while forcing CPL=3, but we do not emulate
that.

Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-21 18:02:08 +02:00
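A minimal sketch of where the CPL now comes from: the DPL field sits in bits 45-46 of a raw 8-byte segment descriptor, and SS.DPL always equals the CPL (the helper name is illustrative):

    #include <stdint.h>

    static unsigned cpl_from_ss_descriptor(uint64_t ss_descriptor)
    {
        return (unsigned)((ss_descriptor >> 45) & 3);   /* SS.DPL == CPL */
    }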
Paolo Bonzini
d3b5491897 target-i386: rework CPL checks during task switch, preparing for next patch
During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
to all the other requirements) and will be the new CPL.  So far this worked
by carefully setting the CS selector and flags before doing the task
switch; but this will not work once we get the CPL from SS.DPL.

Temporarily assume that the CPL comes from CS.RPL during task switch
to a protected-mode task, until the descriptor of SS is loaded.

Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-21 18:02:08 +02:00
Paolo Bonzini
b98dbc9095 target-i386: fix segment flags for SMM and VM86 mode
With the next patch, these need to be correct or VM86 tasks
have the wrong CPL.  The flags are basically what the Intel VMX
documentation say is mandatory for entry into a VM86 guest.

For consistency, SMM ought to have the same flags except with
CPL=0.

Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-21 18:02:08 +02:00
Kevin O'Connor
87446327cc target-i386: Fix vm86 mode regression introduced in fd460606fd.
Commit fd460606fd moved setting of eflags above calls to
cpu_x86_load_seg_cache() in seg_helper.c.  Unfortunately, in
do_interrupt_protected() this moved the clearing of VM_MASK above a
test for it.

Fix this regression by storing the value of VM_MASK at the start of
do_interrupt_protected().

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-21 18:02:08 +02:00
Peter Maydell
ef3cb5ca82 Merge remote-tracking branch 'remotes/kvm/uq/master' into staging
* remotes/kvm/uq/master:
  pc: port 92 reset requires a low->high transition
  cpu: make CPU_INTERRUPT_RESET available on all targets
  apic: do not accept SIPI on the bootstrap processor
  target-i386: preserve FPU and MSR state on INIT
  target-i386: fix set of registers zeroed on reset
  kvm: forward INIT signals coming from the chipset
  kvm: reset state from the CPU's reset method
  target-i386: the x86 CPL is stored in CS.selector - auto update hflags accordingly.
  target-i386: set eflags prior to calling cpu_x86_load_seg_cache() in seg_helper.c
  target-i386: set eflags and cr0 prior to calling cpu_x86_load_seg_cache() in smm_helper.c
  target-i386: set eflags prior to calling svm_load_seg_cache() in svm_helper.c
  pci-assign: limit # of msix vectors
  pci-assign: Fix a bug when map MSI-X table memory failed
  kvm: make one_reg helpers available for everyone
  target-i386: Remove unused data from local array

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-15 15:38:40 +01:00
Paolo Bonzini
4a92a558f4 cpu: make CPU_INTERRUPT_RESET available on all targets
On the x86, some devices need access to the CPU reset pin (INIT#).
Provide a generic service to do this, using one of the internal
cpu_interrupt targets.  Generalize the PPC-specific code for
CPU_INTERRUPT_RESET to other targets.

Since PPC does not support migration across QEMU versions (its
machine types are not versioned yet), I picked the value that
is used on x86, CPU_INTERRUPT_TGT_INT_1.  Consequently, TGT_INT_2
and TGT_INT_3 are shifted down by one while keeping their value.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:21:51 +02:00
Paolo Bonzini
43175fa96a target-i386: preserve FPU and MSR state on INIT
Most MSRs, plus the FPU, MMX, MXCSR, XMM and YMM registers should not
be zeroed on INIT (Table 9-1 in the Intel SDM).  Copy them out of
CPUX86State and back in, instead of special casing env->pat.

The relevant fields are already consecutive except PAT and SMBASE.
However:

- KVM and Hyper-V MSRs should be reset because they include memory
locations written by the hypervisor.  These MSRs are moved together
at the end of the preserved area.

- SVM state can be moved out of the way since it is written by VMRUN.

Cc: Andreas Faerber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
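A sketch of the save/restore-around-reset idea, with an illustrative (not real) state layout: the INIT-preserved fields are kept contiguous so they can be copied out before the memset and copied back afterwards:

    #include <stddef.h>
    #include <string.h>

    struct toy_cpu_state {
        unsigned long regs[16];      /* zeroed on INIT                   */
        double fpregs[8];            /* start of the INIT-preserved area */
        unsigned long msr_pat;
        unsigned long end_init_save; /* first field after preserved area */
    };

    static void toy_init_reset(struct toy_cpu_state *env)
    {
        char saved[offsetof(struct toy_cpu_state, end_init_save) -
                   offsetof(struct toy_cpu_state, fpregs)];

        memcpy(saved, &env->fpregs, sizeof(saved));
        memset(env, 0, sizeof(*env));                /* full reset           */
        memcpy(&env->fpregs, saved, sizeof(saved));  /* restore FPU/MSR area */
    }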
Paolo Bonzini
05e7e819d7 target-i386: fix set of registers zeroed on reset
BND0-3, BNDCFGU, BNDCFGS, BNDSTATUS were not zeroed on reset, but they
should be (Intel Instruction Set Extensions Programming Reference
319433-015, pages 9-4 and 9-6).  Same for YMM.

XCR0 should be reset to 1.

TSC and TSC_RESET were zeroed already by the memset, remove the explicit
assignments.

Cc: Andreas Faerber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
Paolo Bonzini
e0723c4510 kvm: forward INIT signals coming from the chipset
Reviewed-by: Gleb Natapov <gnatapov@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
Paolo Bonzini
50a2c6e55f kvm: reset state from the CPU's reset method
Now that we have a CPU object with a reset method, it is better to
keep the KVM reset close to the CPU reset.  Using qemu_register_reset
as we do now keeps them far apart.

With this patch, PPC no longer calls the kvm_arch_ function, so
it can get removed there.  Other arches call it from their CPU
reset handler, and the function gets an ARMCPU/X86CPU/S390CPU.

Note that ARM- and s390-specific functions are called kvm_arm_*
and kvm_s390_*, while x86-specific functions are called kvm_arch_*.
That follows the convention used by the different architectures.
Changing that is the topic of a separate patch.

Reviewed-by: Gleb Natapov <gnatapov@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
Kevin O'Connor
7848c8d19f target-i386: the x86 CPL is stored in CS.selector - auto update hflags accordingly.
Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS).  Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.

This fixes the SMM handler code as it was not setting/restoring the
CPL level manually.

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
Kevin O'Connor
fd460606fd target-i386: set eflags prior to calling cpu_x86_load_seg_cache() in seg_helper.c
The cpu_x86_load_seg_cache() function inspects eflags, so make sure
all changes to eflags are done prior to loading the segment caches.

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
Kevin O'Connor
010e639a8d target-i386: set eflags and cr0 prior to calling cpu_x86_load_seg_cache() in smm_helper.c
The cpu_x86_load_seg_cache() function inspects cr0 and eflags, so make
sure all changes to eflags and cr0 are done prior to loading the
segment caches.

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
Kevin O'Connor
304520291a target-i386: set eflags prior to calling svm_load_seg_cache() in svm_helper.c
The svm_load_seg_cache() function calls cpu_x86_load_seg_cache() which
inspects env->eflags.  So, make sure all changes to eflags are done
prior to loading the segment cache.

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:12:40 +02:00
Stefan Weil
8e03c100a7 target-i386: Remove unused data from local array
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-05-13 13:10:36 +02:00
Richard Henderson
dc1823ce26 target-i386: Preserve the Z bit for bt/bts/btr/btc
Older Intel manuals (pre-2010) and current AMD manuals describe Z as
undefined, but newer Intel manuals describe Z as unchanged.

Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 14:20:04 -07:00
Markus Armbruster
65cd9064e1 qom: Clean up fragile use of error_is_set() in set() methods
Using error_is_set(ERRP) to find out whether a function failed is
either wrong, fragile, or unnecessarily opaque.  It's wrong when ERRP
may be null, because errors go undetected when it is.  It's fragile
when proving ERRP non-null involves a non-local argument.  Else, it's
unnecessarily opaque (see commit 84d18f0).

I guess the error_is_set(errp) in the ObjectProperty set() methods are
merely fragile right now, because I can't find a call chain that
passes a null errp argument.

Make the code more robust and more obviously correct: receive the
error in a local variable, then propagate it through the parameter.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-05 19:08:49 +02:00
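A sketch of the pattern the commit describes, using QEMU's in-tree Error API (qapi/error.h); do_step_one/do_step_two are hypothetical helpers standing in for the visitor calls of a real set() method:

    #include <stdint.h>
    #include "qapi/error.h"

    void do_step_one(int64_t value, Error **errp);   /* hypothetical */
    void do_step_two(int64_t value, Error **errp);   /* hypothetical */

    static void example_set(int64_t value, Error **errp)
    {
        Error *local_err = NULL;

        do_step_one(value, &local_err);
        if (local_err) {                      /* robust even if errp == NULL */
            error_propagate(errp, local_err);
            return;
        }
        do_step_two(value, &local_err);
        error_propagate(errp, local_err);     /* no-op when local_err is NULL */
    }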
Paolo Bonzini
466e6e9d13 target-i386: reorder fields in cpu/msr_hyperv_hypercall subsection
The subsection already exists in one well-known enterprise Linux
distribution, but for some strange reason the fields were swapped
when forward-porting the patch to upstream.

Limit headaches for said enterprise Linux distributor when the
time comes to rebase their version of QEMU.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1396452782-21473-1-git-send-email-pbonzini@redhat.com
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-05 10:49:05 +01:00
Luiz Capitulino
c8c14bcb72 target-i386: x86_cpu_get_phys_page_debug(): support 1GB page translation
Linux guests, when using more than 4GB of RAM, may end up using 1GB pages
to store (kernel) data. When this happens, we're unable to debug a running
Linux kernel with GDB:

(gdb) p node_data[0]->node_id
Cannot access memory at address 0xffff88013fffd3a0
(gdb)

GDB returns this error because x86_cpu_get_phys_page_debug() doesn't support
translating 1GB pages in IA-32e paging mode and returns an error to GDB.

This commit adds support for 1GB page translation for IA32e paging.

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-31 19:06:48 +02:00
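A sketch of the 1GB case in IA-32e paging: a PDPTE with PS (bit 7) set maps a 1GB frame, so bits 51:30 of the physical address come from the PDPTE and the low 30 bits straight from the virtual address (the function name is illustrative):

    #include <stdint.h>

    #define PG_PSE_MASK (1ULL << 7)

    static int translate_1gb(uint64_t pdpe, uint64_t vaddr, uint64_t *paddr)
    {
        if (!(pdpe & PG_PSE_MASK)) {
            return 0;                                   /* not a 1GB mapping */
        }
        *paddr = (pdpe & 0x000fffffc0000000ULL) | (vaddr & 0x3fffffffULL);
        return 1;
    }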
Peter Maydell
2cd49cbfab target-i386: Avoid shifting left into sign bit
Add 'U' suffixes where necessary to avoid (1 << 31) which
shifts left into the sign bit, which is undefined behaviour.
Add the suffix also for other constants in the same groupings
even if they don't shift into bit 31, for consistency.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-03-27 19:22:49 +04:00
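A small example of why the suffix matters; CPUID_FOO_EXAMPLE is a made-up name for illustration:

    #include <stdint.h>

    #define CPUID_FOO_EXAMPLE (1U << 31)   /* OK: unsigned, value 0x80000000 */
    /* #define CPUID_FOO_BAD  (1 << 31) */ /* shifts into the sign bit of a
                                              32-bit int: undefined behaviour */

    static uint32_t set_example_bit(uint32_t features)
    {
        return features | CPUID_FOO_EXAMPLE;
    }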
Stefan Weil
a443bc3496 target-i386: Add missing 'static' and 'const' attributes
This fixes warnings from the static code analysis (smatch).

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-03-27 19:22:48 +04:00
Andreas Färber
0c591eb0a9 cputlb: Change tlb_set_page() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:52:47 +01:00
Andreas Färber
00c8cb0a36 cputlb: Change tlb_flush() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:52:47 +01:00
Andreas Färber
31b030d4ab cputlb: Change tlb_flush_page() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:52:47 +01:00
Andreas Färber
a47dddd734 exec: Change cpu_abort() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:52:28 +01:00
Andreas Färber
0ea8cb8895 cpu-exec: Change cpu_resume_from_signal() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
b3310ab338 exec: Change cpu_breakpoint_{insert,remove{,_by_ref,_all}} argument
Use CPUState. This allows cleaning up CPUArchState in gdbstub.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
75a34036d4 exec: Change cpu_watchpoint_{insert,remove{,_by_ref,_all}} argument
Use CPUState. This lets us drop a few local env usages.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
3f38f309b2 translate-all: Change cpu_restore_state() argument to CPUState
This lets us drop some local variables in tlb_fill() functions.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:47 +01:00
Andreas Färber
5638d180d6 cpu-exec: Change cpu_loop_exit() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:47 +01:00
Andreas Färber
d5a11fefef exec: Change tlb_fill() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:47 +01:00
Andreas Färber
f0c3c505a8 cpu: Move breakpoints field from CPU_COMMON to CPUState
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:47 +01:00
Andreas Färber
ff4700b05c cpu: Move watchpoint fields from CPU_COMMON to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:47 +01:00
Andreas Färber
27103424c4 cpu: Move exception_index field from CPU_COMMON to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:46 +01:00
Andreas Färber
93afeade09 cpu: Move mem_io_{pc,vaddr} fields from CPU_COMMON to CPUState
Reset them.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:46 +01:00
Andreas Färber
7510454e3e cpu: Turn cpu_handle_mmu_fault() into a CPUClass hook
Note that while such functions may exist both for *-user and softmmu,
only *-user uses the CPUState hook, while softmmu reuses the prototype
for calling it directly.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:46 +01:00
Andreas Färber
94a444b295 cpu: Introduce CPUClass::parse_features() hook
Adapt the X86CPU implementation to suit the generic hook.
This involves a cleanup of error handling to cope with NULL errp.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:45 +01:00
Eduardo Habkost
d940ee9b78 target-i386: X86CPU model subclasses
Register separate QOM types for each x86 CPU model.

This will allow management code to more easily probe what each CPU model
provides, by simply creating objects using the appropriate class name,
without having to restart QEMU.

This also allows us to eliminate the qdev_prop_set_globals_for_type()
hack to set CPU-model-specific global properties.

Instead of creating separate class_init functions for each class, I just
used class_data to store a pointer to the X86CPUDefinition struct for
each CPU model. This should make the patch shorter and easier to review.
Later we can gradually convert each X86CPUDefinition field to lists of
per-class property defaults.

The "host" CPU model is special, as the feature flags depend on KVM
being initialized. So it has its own class_init and instance_init
function, and feature flags are set on instance_init instead of
class_init.

Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Tested-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Limit the host CPU type to CONFIG_KVM as build fix]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:07 +01:00