Cc: Jia Liu <proljc@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Tested-by: Jia Liu <proljc@gmail.com>
Message-id: 1410626734-3804-17-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-16-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-15-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1410626734-3804-11-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since do_interrupt_m68k_hardirq is no longer used outside
op_helper.c, make it static.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-10-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-9-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Message-id: 1410626734-3804-8-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Continuing the removal of ifdefs from cpu_exec.
Cc: Andreas Färber <afaerber@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-7-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-5-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Note that the code that was within the "exit" ifdef block
was identical to the cpu_compute_eflags inline, so make that
simplification at the same time.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-4-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The cpu_exec_enter/exit hooks are surrounded by many empty
ifdef blocks. Delete all of these to highlight those
targets for which we actually need to do work.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-3-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for removing a bunch of ifdefs from cpu_exec.
Cc: Andreas Färber <afaerber@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-2-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Correct an error in the logic for deciding whether we can
take an IRQ interrupt which meant that on M profile cores
it was never possible to disable them.
The design here is still bogus in that M profile doesn't
have separate "IRQ" and "FIQ", which are an A/R profile
concept; we should ideally implement the proper priority
based scheme.
Signed-off-by: David Hoover <spm@boiteauxlettres.sent.at>
[PMM: Wrote a proper commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds a couple of tcg specific trace-events which are useful for
tracing execution through tcg-generated blocks. It's been tested with
lttng user space tracing but is generic enough for all systems. The tcg
events are:
* translate_block - when a subject block is translated
* exec_tb - when a translated block is entered
* exec_tb_exit - when we exit the translated code
* exec_tb_nocache - special case translations
Of course we can only trace the entrance to the first block of a chain
as each block will jump directly to the next when it can. See the -d
nochain patch to allow more complete tracing at the expense of
performance.
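For illustration, the hook points in the execution loop look roughly like
this (a sketch; the exact trace point arguments here are assumptions):

    /* cpu-exec.c, simplified: the trace_* calls are generated from the
     * trace-events declarations. */
    trace_exec_tb(tb, tb->pc);                       /* entering a translated block */
    next_tb = tcg_qemu_tb_exec(env, tb->tc_ptr);
    trace_exec_tb_exit((void *)(next_tb & ~TB_EXIT_MASK),
                       next_tb & TB_EXIT_MASK);      /* back from translated code */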
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Show in 'info jit' the current delay between the host clock
and the guest clock. In addition, print the maximum advance
and delay of the guest compared to the host.
Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the align option is enabled, we print a message whenever
the guest clock is behind the host clock, so that the user
has a hint about the actual performance. The maximum
print interval is 2s and we limit the number of messages to 100.
If desired, this can be changed in cpu-exec.c
Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The goal is to sleep qemu whenever the guest clock
is in advance compared to the host clock (we use
the monotonic clocks). The amount of time to sleep
is calculated in the execution loop in cpu_exec.
At first, we tried to approximate at each for loop the real time elapsed
while searching for a TB (generating or retrieving from cache) and
executing it. We would then approximate the virtual time corresponding
to the number of virtual instructions executed. The difference between
these 2 values would allow us to know if the guest is in advance or delayed.
However, the function used for measuring the real time
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME)) proved to be very expensive.
We had an added overhead of 13% of the total run time.
Therefore, we modified the algorithm and only take into account the
difference between the 2 clocks at the beginning of the cpu_exec function.
During the for loop we try to reduce the advance of the guest only by
computing the virtual time elapsed and sleeping if necessary. The overhead
is thus reduced to 3%. Even though this method still has a noticeable
overhead, it no longer is a bottleneck in trying to achieve a better
guest frequency for which the guest clock is faster than the host one.
As for the alignment of the 2 clocks, with the first algorithm
the guest clock was oscillating between -1 and 1ms compared to the host clock.
Using the second algorithm we notice that the guest is 5ms behind the host, which
is still acceptable for our use case.
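A minimal sketch of the second algorithm (guest_clock_ns, host_clock_ns and
executed_insns are placeholders, not the actual implementation):

    /* Measured once, at the top of cpu_exec(): */
    int64_t diff_ns = guest_clock_ns() - host_clock_ns();

    /* Inside the loop only the virtual time of the executed instructions
     * is added; no further (expensive) host clock reads are needed. */
    diff_ns += cpu_icount_to_ns(executed_insns);
    if (diff_ns > 0) {
        g_usleep(diff_ns / 1000);   /* guest is ahead: let the host catch up */
    }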
The tests were conducted using fio and stress. The host machine is an i5 CPU at
3.10GHz running Debian Jessie (kernel 3.12). The guest machine is an arm versatile-pb
built with buildroot.
Currently, on our test machine, the lowest icount we can achieve that is suitable for
aligning the 2 clocks is 6. However, we observe that the IO tests (using fio) are
slower than the cpu tests (using stress).
Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On the x86, some devices need access to the CPU reset pin (INIT#).
Provide a generic service to do this, using one of the internal
cpu_interrupt targets. Generalize the PPC-specific code for
CPU_INTERRUPT_RESET to other targets.
Since PPC does not support migration across QEMU versions (its
machine types are not versioned yet), I picked the value that
is used on x86, CPU_INTERRUPT_TGT_INT_1. Consequently, TGT_INT_2
and TGT_INT_3 are shifted down by one while keeping their value.
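Roughly, the generic service boils down to (illustrative sketch):

    /* A device asks for a CPU reset through the internal interrupt API... */
    cpu_interrupt(cs, CPU_INTERRUPT_RESET);

    /* ...and cpu_exec() handles it generically for all targets: */
    if (interrupt_request & CPU_INTERRUPT_RESET) {
        cpu_reset(cpu);
    }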
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the guest attempts to execute from unreadable memory, this will
cause us to longjmp back to the main loop from inside the
target frontend decoder. For linux-user mode, this means we will
still hold the tb_ctx.tb_lock, and will deadlock when we try to
start executing code again. Unlock the lock in the return-from-longjmp
code path to avoid this.
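A minimal sketch of the idea, assuming a helper along the lines of
tb_lock_reset() that drops the lock only if the current thread holds it:

    if (sigsetjmp(cpu->jmp_env, 0) != 0) {
        /* We longjmp'd out from inside the decoder; make sure we are not
         * still holding tb_ctx.tb_lock before looping around again. */
        tb_lock_reset();
    }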
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Andrei Warkentin <andrey.warkentin@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Default to false.
Tidy variable naming and inline cast uses while at it.
Tested-by: Jia Liu <proljc@gmail.com> (or32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
To avoid complication in code that otherwise would not need to
care about whether EL1 is AArch32 or AArch64, we should store
the interrupt mask bits (CPSR.AIF in AArch32 and PSTATE.DAIF
in AArch64) in one place consistently regardless of EL1's mode.
Since AArch64 has an extra enable bit (D for debug exceptions)
which isn't visible in AArch32, this means we need to keep
the enables in env->pstate. (This is also consistent with the
general approach we're taking that we handle 32 bit CPUs as
being like AArch64/ARMv8 CPUs but which only run in 32 bit mode.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The previous placement could result in duplicate logging while
still processing interrupts.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace growing numbers of inline x86_env_get_cpu() with x86_cpu variable.
Reviewed-by: Chen Fan <chen.fan@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This motion is preparing for refactoring vCPU APIC subsequently.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Local variable CPUClass *cc needs to be reloaded after return from longjmp,
too. (This fixes a mips-softmmu crash observed on FreeBSD when QEMU is
built with clang.)
Reported-by: Dimitry Andric <dim@FreeBSD.org>
Signed-off-by: Juergen Lock <nox@jelal.kn-bremen.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Prepares for changing cpu_single_step() argument to CPUState.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Where no extra implementation is needed, fall back to CPUClass::set_pc().
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Since commit 878096eeb2 (cpu: Turn
cpu_dump_{state,statistics}() into CPUState hooks) CPUArchState is no
longer needed.
Add documentation and make the functions available through qemu/log.h
outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h
was not yet possible due to convoluted include paths, so that some
devices grow an implicit and unneeded dependency on qom/cpu.h for now.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
[AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* 'mingw' of git://qemu.weilnetz.de/qemu:
qemu-timer: move timeBeginPeriod/timeEndPeriod to os-win32
Release SMP restriction on Windows
Ensure good ordering of memory instruction in cpu_exec
Check effective suspension of TCG thread
The IO thread, when it senses cpu_single_env == 0, expects exit_request
to be checked later on. A compiler scheduling constraint is not strong
enough to ensure this on modern architectures. A memory fence is needed
as well.
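In other words, something along these lines (a sketch, not the literal diff):

    cpu_single_env = env;
    smp_mb();                       /* order the store before the test below */
    if (unlikely(exit_request)) {
        env->exit_request = 1;      /* field placement is illustrative */
    }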
Signed-off-by: Olivier Hainque <hainque@adacore.com>
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
The CONFIG_DEBUG_EXEC define compiles out a single qemu_log_mask()
call, which is a pretty trivial cost even for something in the main
cpu_exec() loop. Having this be conditionally defined means that
'-d exec' on a non-debug build will silently do nothing. Drop the
define and the configure machinery that sets it, in favour of just
always allowing this log option to be enabled at runtime. As a
concession to the mainloopiness, we use qemu_loglevel_mask()+qemu_log()
rather than qemu_log_mask() to avoid the function call overhead.
Note that DEBUG_DISAS is always defined, so removing the
'|| defined(CONFIG_DEBUG_EXEC)' from those conditionals makes
no behavioural change for that logging.
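The resulting pattern in the main loop is roughly (sketch):

    /* Checked at runtime instead of being compiled out by CONFIG_DEBUG_EXEC. */
    if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
        qemu_log("Trace %p [" TARGET_FMT_lx "] %s\n",
                 tb->tc_ptr, tb->pc, lookup_symbol(tb->pc));
    }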
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We can compute the value in cpu_dump_state anyway, and gratuitous
modifications to eflags create heisenbugs.
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.
It will also allow overriding the interrupt handling for certain CPU
families.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Fix some of the nasty TCG race conditions and crashes by implementing
cpu_exit() as setting a flag which is checked at the start of each TB.
This avoids crashes if a thread or signal handler calls cpu_exit()
while the execution thread is itself modifying the TB graph (which
may happen in system emulation mode as well as in linux-user mode
with a multithreaded guest binary).
This fixes the crashes seen in LP:668799; however there are another
class of crashes described in LP:1098729 which stem from the fact
that in linux-user with a multithreaded guest all threads will
use and modify the same global TCG data structures (including the
generated code buffer) without any kind of locking. This means that
multithreaded guest binaries are still in the "unsupported"
category.
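Conceptually the change amounts to (a sketch; the actual flag name and the
place where generated code tests it differ in detail):

    void cpu_exit(CPUState *cpu)
    {
        /* Only raise a flag; never touch the TB graph from another thread
         * or from a signal handler. */
        cpu->exit_request = 1;
    }

    /* The prologue emitted at the start of every TB tests the flag and, if
     * it is set, returns to cpu_exec() instead of executing the block. */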
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
If tcg_qemu_tb_exec() returns a value whose low bits don't indicate a
link to an indexed next TB, this means that the TB execution never
started (eg because the instruction counter hit zero). In this case the
guest PC has to be reset to the address of the start of the TB.
Refactor the cpu-exec code to make all tcg_qemu_tb_exec() calls pass
through a wrapper function which does this restoration if necessary.
Note that the apparent change in cpu_exec_nocache() from calling
cpu_pc_from_tb() with the old TB to calling it with the TB returned by
do_tcg_qemu_tb_exec() is safe, because in the nocache case we can
guarantee that the TB we try to execute is not linked to any others,
so the only possible returned TB is the one we started at. That is,
we should arguably previously have included in cpu_exec_nocache() an
assert((next_tb & ~TB_EXIT_MASK) == tb), since the API requires restore
from next_tb but we were using tb.
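The wrapper is essentially (a sketch of the idea, not the verbatim patch):

    static tcg_target_ulong do_tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
    {
        tcg_target_ulong next_tb = tcg_qemu_tb_exec(env, tb_ptr);
        if ((next_tb & TB_EXIT_MASK) > TB_EXIT_IDX1) {
            /* Execution never started: put the guest PC back to the start
             * of the TB we intended to run. */
            TranslationBlock *tb = (TranslationBlock *)(next_tb & ~TB_EXIT_MASK);
            cpu_pc_from_tb(env, tb);
        }
        return next_tb;
    }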
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Document tcg_qemu_tb_exec(). In particular, its return value is a
combination of a pointer to the next translation block and some
extra information in the low two bits. Provide some #defines for
the values passed in these bits to improve code clarity.
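For instance, definitions along these lines (the values shown are
illustrative):

    /* Low two bits of the value returned by tcg_qemu_tb_exec(). */
    #define TB_EXIT_MASK            3  /* strip these to recover the TB pointer */
    #define TB_EXIT_IDX0            0  /* left via the TB's jump slot 0 */
    #define TB_EXIT_IDX1            1  /* left via the TB's jump slot 1 */
    #define TB_EXIT_ICOUNT_EXPIRED  2  /* block not executed, icount ran out */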
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The setjmp() function doesn't specify whether signal masks are saved and
restored; on Linux they are not, but on BSD (including MacOSX) they are.
We want to have consistent behaviour across platforms, so we should
always use "don't save/restore signal mask" (this is also generally
going to be faster). This also works around a bug in MacOSX where the
signal-restoration on longjmp() affects the signal mask for a completely
different thread, not just the mask for the thread which did the longjmp.
The most visible effect of this was that ctrl-C was ignored on MacOSX
because the CPU thread did a longjmp which resulted in its signal mask
being applied to every thread, so that all threads had SIGINT and SIGTERM
blocked.
The POSIX-sanctioned portable way to do a jump without affecting signal
masks is to siglongjmp() to a sigjmp_buf which was created by calling
sigsetjmp() with a zero savemask parameter, so change all uses of
setjmp()/longjmp() accordingly. [Technically POSIX allows sigsetjmp(buf, 0)
to save the signal mask; however the following siglongjmp() must not
restore the signal mask, so the pair can be effectively considered as
"sigjmp/longjmp which don't touch the mask".]
For Windows we provide a trivial sigsetjmp/siglongjmp in terms of
setjmp/longjmp -- this is OK because no user will ever pass a non-zero
savemask.
The setjmp() uses in tests/tcg/test-i386.c and tests/tcg/linux-test.c
are left untouched because these are self-contained singlethreaded
test programs intended to be run under QEMU's Linux emulation, so they
have neither the portability nor the multithreading issues to deal with.
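The Windows fallback can be as simple as (sketch; valid only because no
caller ever passes a non-zero savemask):

    /* Map the sigjmp interface onto plain setjmp/longjmp. */
    #define sigjmp_buf                jmp_buf
    #define sigsetjmp(env, savesigs)  setjmp(env)
    #define siglongjmp(env, val)      longjmp(env, val)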
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Explicitly NULL it on CPU reset since it was located before breakpoints.
Change vapic_report_tpr_access() argument to CPUState. This also
resolves the use of void* for cpu.h independence.
Change vAPIC patch_instruction() argument to X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
It's worth cleaning up the translation block variables and moving them
into one context, as was suggested by Swirl.
Also if we use this context directly inside tcg_ctx, then it
speeds up code generation a bit.
Signed-off-by: Evgeny Voevodin <evgenyvoevodin@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
For target-mips also change the return type to bool.
Make include paths for cpu-qom.h consistent for alpha and unicore32.
Signed-off-by: Andreas Färber <afaerber@suse.de>
[AF: Updated new target-openrisc function accordingly]
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
Move the DUMP_FPU and DUMP_CCOP flags for cpu_dump_state() from being
x86-specific flags to being generic ones. This allows us to drop some
TARGET_I386 ifdefs in various places, and means that we can (potentially)
be more consistent across architectures about which monitor commands or
debug abort printouts include FPU register contents and info about
QEMU's condition-code optimisations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch provides a way to optionally suppress spurious interrupts,
as a workaround for systems described below:
Some old operating systems do not handle spurious interrupts well,
and qemu tends to generate them significantly more often than
real hardware.
Examples:
- Microport UNIX System V/386 v 2.1 (ca 1987)
(The main problem I'm fixing: Without this patch, it panics
sporadically when accessing the hard disk.)
- AT&T UNIX System V/386 Release 4.0 Version 2.1a (ca 1991)
See screenshot in "QEMU Official OS Support List":
http://www.claunia.com/qemu/objectManager.php?sClass=application&iId=9
(I don't have this system to test.)
- A report about OS/2 boot lockup from 2004 by Hampa Hug:
http://lists.nongnu.org/archive/html/qemu-devel/2004-09/msg00367.html
(My patch was partially inspired by his.)
Also: http://lists.nongnu.org/archive/html/qemu-devel/2005-06/msg00243.html
(I don't have this system to test.)
Signed-off-by: Matthew Ogilvie <mmogilvi_qemu@miniinfo.net>
Signed-off-by: malc <av1474@comtv.ru>
This patch initializes the cpuid to exactly the correct value because
the Linux kernel will check it.
In addition, the proper exception types are now raised in the
appropriate situations, so exceptions are generated correctly and in a
timely manner.
Signed-off-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* 'x86cpu_qom_tcg_v2' of git://github.com/imammedo/qemu:
target-i386: move tcg initialization into x86_cpu_initfn()
cleanup cpu_set_debug_excp_handler
target-xtensa: drop usage of prev_debug_excp_handler
target-i386: drop usage of prev_debug_excp_handler
KVM performs TPR raising asynchronously to QEMU, specifically outside
QEMU's global lock. When an interrupt is injected into the APIC and TPR
is checked to decide if this can be delivered, a stale TPR value may be
used, causing spurious interrupts in the end.
Fix this by deferring apic_update_irq to the context of the target VCPU.
We introduce a new interrupt flag for this, CPU_INTERRUPT_POLL. When it
is set, the VCPU calls apic_poll_irq before checking for further pending
interrupts. To avoid special-casing KVM, we also implement this logic
for TCG mode.
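Sketch of the new handling in the CPU loop (illustrative):

    if (interrupt_request & CPU_INTERRUPT_POLL) {
        env->interrupt_request &= ~CPU_INTERRUPT_POLL;
        apic_poll_irq(env->apic_state);   /* re-evaluate TPR in VCPU context */
    }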
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Add an explicit CPUX86State parameter instead of relying on AREG0.
Merge raise_exception_env() to raise_exception(), likewise with
raise_exception_err_env() and raise_exception_err().
Introduce cpu_svm_check_intercept_param() and cpu_vmexit()
as wrappers.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
There are no users left for the previous exception handler returned from
cpu_set_debug_excp_handler. It should simplify code a little.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
- The M-flag is encoded in different bits on cris v10 and cris v32.
Signed-off-by: Lars Persson <larper@axis.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
CPUState will be needed for all targets in the future, so place it into
the main variable declaration block.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Alexander Graf <agraf@suse.de>
Allows cpu_reset() to be used in place of cpu_state_reset().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
The idea behind qtest is pretty simple. Instead of executing a CPU via TCG or
KVM, rely on an external process to send the device model the events that the CPU
would normally generate.
qtest presents itself as an accelerator. In addition, a new option is added to
establish a qtest server (-qtest) that takes a character device. This is what
allows the external process to send CPU events to the device model.
qtest uses a simple line based protocol to send the events. Documentation of
that protocol is in qtest.c.
I considered reusing the monitor for this job. Adding interrupts would be a bit
difficult. In addition, logging would also be difficult.
qtest has extensive logging support. All protocol commands are logged with
time stamps using a new command line option (-qtest-log). Logging is important
since ultimately, this is a feature for debugging.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
next_tb is the numeric value of a tcg target (= QEMU host) address.
Using tcg_target_ulong instead of unsigned long shows this and makes
the code portable for hosts with an unusual size of long (w64).
The type cast '(long)(next_tb & ~3)' was not needed (casting
unsigned long to long does not change the bits, nor does
casting long to pointer for most (= all non-w64) hosts).
It is removed here.
Macro or function tcg_qemu_tb_exec is used to set next_tb.
The function also returns next_tb. Therefore tcg_qemu_tb_exec
must return a tcg_target_ulong.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Scripted conversion:
for file in *.[hc] hw/*.[hc] hw/kvm/*.[hc] linux-user/*.[hc] linux-user/m68k/*.[hc] bsd-user/*.[hc] darwin-user/*.[hc] tcg/*/*.[hc] target-*/cpu.h; do
sed -i "s/CPUState/CPUArchState/g" $file
done
All occurrences of CPUArchState are expected to be replaced by QOM CPUState,
once all targets are QOM'ified and common fields have been extracted.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Frees the identifier cpu_reset for QOM CPUs (manual rename).
Don't hide the parameter type behind explicit casts, use static
functions with strongly typed argument to indirect.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
On ppc405ep there is a register that allows for software to reset the
core, but not the whole system. Implement this reset using a reset
interrupt.
This gets rid of a bunch of #if 0'ed code.
Reported-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Each target uses the #define macro (in target-xxx/cpu.h) to rename
cpu_exec (cpu-exec.c) to cpu_xxx_exec, then defines its own cpu_loop
which calls cpu_xxx_exec. So basically, cpu-exec.c is not only the i386
emulator main execution loop. This patch corrects the comment of this
file and does indentation cleanup.
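For example (illustrative), the per-target renaming looks like:

    /* target-arm/cpu.h */
    #define cpu_exec cpu_arm_exec

so the ARM cpu_loop() in linux-user ends up calling cpu_arm_exec(env), while
the code itself lives in the common cpu-exec.c.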
Signed-off-by: Chen Wei-Ren (陳韋任) <chenwj@iis.sinica.edu.tw>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
In the current emulation of the load-and-reserve (lwarx) and
store-conditional (stwcx.) instructions, the internal reservation
mechanism is taken into account, however each CPU has its own
reservation information and this information is not synchronized between
CPUs to perform proper synchronization.
The following test case with 2 CPUs shows that the semantics of the
"lwarx" and "stwcx." instructions are not preserved by the emulation.
The test case does the following :
- CPU0: reserve a memory location
- CPU1: reserve the same memory location
- CPU0: perform stwcx. on the location
The last store-conditional operation succeeds while it is supposed to
fail since the reservation was supposed to be lost at the second reserve
operation.
This (one line) patch fixes this problem in a very simple manner by
removing the reservation of a CPU every time it is scheduled (in
cpu_exec()). While this is a harsh workaround, it does not affect the
guest code much because reservations are usually held for a very short
time; that is, an lwarx is almost always followed by an stwcx. a few
instructions below. Therefore, in most cases, the reservation will be
taken and consumed before a CPU switch occurs. However in the rare case
where a CPU switch does occur between the lwarx and its corresponding
stwcx. this patch solves a potential erroneous behavior of the
synchronization instructions.
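The workaround amounts to clearing the reservation whenever a CPU is
scheduled in cpu_exec(), roughly (field name shown for illustration):

    /* Losing the reservation on a CPU switch is architecturally allowed:
     * stwcx. may fail spuriously. */
    env->reserve_addr = (target_ulong)-1;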
Signed-off-by: Elie Richa <richa@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
- mark privileged opcodes with ring check;
- make debug exception on exception handler entry.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Avoid this warning from clang analyzer:
/src/qemu/cpu-exec.c:97:5: warning: Value stored to 'phys_page2' is never read
phys_page2 = -1;
Adjust the scope of the variable while at it.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Recent compilers look deep into cpu_exec, find longjmp as a noreturn
function and decide to smash some stack variables as they won't be used
again. This may lead to env becoming invalid after return from setjmp,
causing crashes. Fix it by reloading env from cpu_single_env in that
case.
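The fix boils down to (sketch):

    if (setjmp(env->jmp_env) == 0) {
        /* normal TB lookup and execution path */
    } else {
        /* Came back via longjmp: the compiler may have clobbered the
         * local, so reload it from the global. */
        env = cpu_single_env;
    }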
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Make functions take a parameter for CPUState instead of relying
on global env. Pass CPUState pointer to TCG prologue, which moves
it to AREG0.
Thanks to Peter Maydell and Laurent Desnogues for the ARM prologue
change.
Revert the hacks to avoid AREG0 use on Sparc hosts.
Move cpu_has_work() and cpu_pc_from_tb() from exec.h to cpu.h.
Compile the file without HELPER_CFLAGS.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Before the next patch, fix coding style of the areas affected.
Change the type of the return value from cpu_has_work() and
qemu_cpu_has_work() to bool.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Now that all targets use a common function signature for do_interrupt(), there is no
need for the #ifdeffery anymore.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Pass CPUState to do_interrupt(). This is needed by later patches.
It would be cleaner to move the function to helper.c, but there are
a few dependencies between do_interrupt() and other functions.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Several x86 specific functions are called from cpu-exec.c with the
assumption that global env register is valid. This will be changed
later, so make the functions use caller supplied CPUState parameter.
It would be cleaner to move the functions to helper.c, but there are
quite a lot of dependencies between do_interrupt() and other functions.
Add helpers for svm_check_intercept() and cpu_cc_compute_all() instead
of calling the helper (which uses global env, AREG0) directly.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
There is little in common between the user and softmmu versions of
cpu_resume_signal(), so split them.
Fix coding style for the user emulator part.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
tb_invalidate_page_range() was intended to be used to invalidate an
area of a TB which the guest explicitly flushes from i-cache. However,
QEMU detects writes to code areas where TBs have been generated, so
this has never been useful.
Delete the function, adjust callers.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This bit is never set, therefore we should not read it either.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This mask contains all of the bits that should be ignored while single
stepping in the debugger. The mask contains 2 bits that are not currently
cleared, but are also never set. The bits are included in the mask for
consistency in handling of the CPU_INTERRUPT_TGT_EXT_N bits.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The code changed here is an unused data type name (evt_flush_occurred).
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
The previous patch removed the need for parameter puc.
It is now unused, so remove it.
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
This patch adds some code paths for running s390x guest OSs without the
need for KVM.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Commit 83f338f73e broke x86 hardware breakpoint emulation by moving the
debug exception handling out of cpu_exec. Fix this by moving all TCG
related bits back, only leaving the generic guest debugging parts in
cpus.c.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
CC: TeLeMan <geleman@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
All implementations are now the same, and there is only one caller,
so inline the function there.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This patch adds support for the LatticeMico32 softcore processor by Lattice
Semiconductor.
Signed-off-by: Michael Walle <michael@walle.cc>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Mixing up TCG bits with KVM already led to problems around eflags
emulation on x86. Moreover, quite some code that TCG requires on cpu
entry/exit is useless for KVM. So dispatch between tcg_cpu_exec and
kvm_cpu_exec as early as possible.
The core logic of cpu_halted from cpu_exec is added to
kvm_arch_process_irqchip_events. Moving away from cpu_exec makes
exception_index meaningless for KVM, so we can simply pass the exit reason
directly (only "EXCP_DEBUG vs. rest" is relevant).
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
To prepare splitting up KVM and TCG CPU entry/exit, move the debug
exception into cpus.c and invoke cpu_handle_debug_exception on return
from qemu_cpu_exec.
This also allows us to clean up the debug request signaling: We can assign
the job of informing main-loop to qemu_system_debug_request and stop the
calling cpu directly in cpu_handle_debug_exception. That means a debug
stop will now only be signaled via debug_requested and not additionally
via vmstop_requested.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
When the CPU is in wait state, do not wake up if an interrupt can't be
taken. This avoids the host CPU running at 100% if a device (e.g. a timer) has
an interrupt line left enabled.
Also factorize code to check if interrupts are enabled in
cpu_mips_hw_interrupts_pending().
Based on a patch from Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Move the last found TB to the head of the list so it will be found more quickly the next time it is looked for.
Signed-off-by: Kirill Batuzov <batuzovk@ispras.ru>
Signed-off-by: Pavel Yushchenko <pau@ispras.ru>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
If a cpu_exit request is pending, ensure that we leave the CPU loop
quickly. For this purpose, keep the global exit_request pending until
we are about to leave tcg_cpu_exec. Also, immediately break out of the
SMP loop if the request is set, do not run till the end of the chain.
This preserves the VCPU scheduling order in SMP mode.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
If a signal hit after the env->exit_request check but before cpu_exec
updated env->current_tb, cpu_unlink_tb called from the signal handler
will not unlink the current TB. This may leave us stuck in a guest loop
if no further unlink is invoked.
Fix this by reordering current_tb update and exit_request check,
additionally enforcing the correct order via a compiler barrier.
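That is, roughly (sketch):

    env->current_tb = tb;
    barrier();                          /* keep the store before the load below */
    if (likely(!env->exit_request)) {
        next_tb = tcg_qemu_tb_exec(tb->tc_ptr);
    }
    env->current_tb = NULL;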
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Store tcg loop exit request on a global variable, and transfer it to
per-CPUState exit_request after assignment of cpu_single_env.
This makes the exit request signaling robust. Drop the timedlock hack.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
When -d cpu logging was handled by target-foo/translate.c,
it was controlled by DEBUG_DISAS, which is enabled by default.
Use the same condition in cpu_exec.
At the same time, reduce the if-deffery by assuming no flags
update is required for the target.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
ia64 has some strangenesses that need to be worked around:
- it has a __clone2() syscall instead of the using clone() one, with
different arguments, and which is not declared in the usual headers.
- ucontext.uc_sigmask is declared with type long int, while it is
actually of type sigset_t.
- uc_mcontext, uc_sigmask, uc_stack, uc_link are declared using #define,
which clashes with the target_ucontext fields. Change their names to
tuc_*, as already done for some target architectures.
The page tracking code in exec.c is used by both userspace and system
emulation. Userspace emulation uses it to track virtual pages, and
system emulation to track ram pages. Introduce a new type to hold this
kind of address.
Signed-off-by: Paul Brook <paul@codesourcery.com>
This ensures that the compiler does not move it away from
the "env = env1;" assignment. Fixes a miscompilation
on gcc 4.4, reported by Jay Foad.
Cc: <jay.foad@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This should explain a lot of the weird breakages of upstream KVM we've
seen recently (actually we should have seen it much earlier):
Stop translating eflags into TCG format when in kvm mode as we never
translate it back and rather sync this broken state into the kernel.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Since b567b38 (target-arm: remove T0 and T1, 2009-10-16) the only global
register that is used is AREG0, so the complexity of hostregs_helper.h
is unused. Use regular assignments and a compiler optimization barrier.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The while loop will be executed exactly 0 or 1 times, depending on
env->exit_request.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
By virtue of the previous patch env->current_tb will always be NULL at
the top of cpu_exec's outermost for loop, and at the end of the innermost
while loop.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
There are three paths from the innermost while loop of cpu_exec
to the top of the outermost for loop. Two do not reset
env->current_tb. Fix this.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
cpu_check_irqs
- handle SOFTINT register TICK and STICK timer bits
- only check interrupt levels greater than PIL value
- handle preemption by higher level traps
cpu_exec
- handle CPU_INTERRUPT_HARD only if interrupts are enabled
- PIL 15 is not special level on sparcv9
Signed-off-by: Igor V. Kovalenko <igor.v.kovalenko@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Because Qemu currently requires a TCG target to exist and there are quite some
useful helpers here to lay the groundwork for our KVM target, let's create a
stub TCG emulation target for S390X CPUs.
This is required to make tcg happy. The emulation target itself won't work
though.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc923584,
f40d753718,
96555a96d7 and
3990d09adf but the fixes were fragile.
Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
handle_cpu_signal is very nearly copy-paste code for each target, with a
few minor variations. This patch sets up appropriate defaults for a
generic handle_cpu_signal and provides overrides for particular targets
that did things differently. Fixing things like the persistent "XXX:
use sigsetjmp" comments should now become somewhat easier.
Previous comments on this patch suggest that the "activate soft MMU for
this block" comments refer to defunct functionality. I have removed
such blocks for the appropriate targets in this patch.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
kqemu introduces a number of restrictions on the i386 target. The worst is that
it prevents large memory from working in the default build.
Furthermore, kqemu is fundamentally flawed in a number of ways. It relies on
the TSC as a time source which will not be reliable on a multiple processor
system in userspace. Since most modern processors are multicore, this severely
limits the utility of kqemu.
kvm is a viable alternative for people looking to accelerate qemu and has the
benefit of being supported by the upstream Linux kernel. If someone can
implement workarounds to remove the restrictions introduced by kqemu, I'm
happy to avoid and/or revert this patch.
N.B. kqemu will still function in the 0.11 series but this patch removes it from
the 0.12 series.
Paul, please Ack or Nack this patch.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>