The x86 and ppc targets call cpu_synchronize_state() from their
*_cpu_dump_state() callbacks to ensure that up-to-date state is dumped
when KVM is enabled (for example when a KVM internal error occurs).
Move this call up into the generic cpu_dump_state() function so that
other KVM targets (namely MIPS) can take advantage of it.
This requires kvm_cpu_synchronize_state() and cpu_synchronize_state() to
be moved out of the #ifdef NEED_CPU_H in <sysemu/kvm.h> so that they're
accessible to qom/cpu.c.
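The resulting generic hook would look roughly like the sketch below; the CPUClass hook name and the exact signatures are assumptions about the qom/cpu.c code of that time, not a literal quote of the patch.
/* qom/cpu.c -- sketch only */
void cpu_dump_state(CPUState *cpu, FILE *f, fprintf_function cpu_fprintf,
                    int flags)
{
    CPUClass *cc = CPU_GET_CLASS(cpu);

    if (cc->dump_state) {
        cpu_synchronize_state(cpu);   /* moved here from the x86/ppc hooks */
        cc->dump_state(cpu, f, cpu_fprintf, flags);
    }
}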
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Andreas Färber <afaerber@suse.de>
Cc: Alexander Graf <agraf@suse.de>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: qemu-ppc@nongnu.org
Cc: kvm@vger.kernel.org
Signed-off-by: Gleb Natapov <gleb@redhat.com>
* 'tcg-next' of git://github.com/rth7680/qemu: (29 commits)
tcg-i386: Make use of zero-extended memory helper routines
tcg: Introduce zero and sign-extended versions of load helpers
exec: Split softmmu_defs.h
target: Include softmmu_exec.h where forgotten
exec: Rename USUFFIX to LSUFFIX
tcg-i386: Don't perform GETPC adjustment in TCG code
exec: Reorganize the GETRA/GETPC macros
configure: Allow x32 as a host
tcg-i386: Adjust tcg_out_tlb_load for x32
tcg-i386: Use intptr_t appropriately
tcg: Fix jit debug for x32
tcg: Use appropriate types in tcg_reg_alloc_call
tcg: Change tcg_out_ld/st offset to intptr_t
tcg: Change tcg_gen_exit_tb argument to uintptr_t
tcg: Use uintptr_t in TCGHelperInfo
tcg: Change relocation offsets to intptr_t
tcg: Change memory offsets to intptr_t
tcg: Change frame pointer offsets to intptr_t
tcg: Define TCG_ptr properly
tcg: Define TCG_TYPE_PTR properly
...
Bit extraction for the FP BF and L field of the MTFSFI and MTFSF
instructions is wrong and doesn't match the reference manual (which
explains the bit numbers in big-endian format). It has been broken in
commit 7d08d85645.
This patch fixes this, which in turn fixes the problem reported by
Khem Raj about the floor() function of libm.
Reported-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
CC: qemu-stable@nongnu.org (1.6)
Signed-off-by: Alexander Graf <agraf@suse.de>
Prepares for changing cpu_single_step() argument to CPUState.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Also use the bool type while at it.
Prepares for moving singlestep_enabled field to CPUState.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Make cpustats monitor command available unconditionally.
Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec()
arguments to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Change Monitor::mon_cpu to CPUState as well.
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
When running an L=1 cmp instruction on a 64-bit PPC CPU with SF off, it
still behaves identically to what it does when SF is on. Remove the implicit
difference in the code.
Also, on most 32-bit CPUs we should always treat the compare as a 32-bit
compare, as the CPU will ignore the L bit. This is not true for e500mc,
but that's up for a different patch.
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The implementation of rldcl always tried to fetch its
parameters from the opcode, even though they were
already passed in, decoded, in different forms.
Use the passed-in parameters instead, fixing rldcl.
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Invalid and privileged SPR warnings currently print the wrong
address. While fixing that, also make it clear that we are
printing both the decimal and hexadecimal SPR number.
Before:
Trying to read invalid spr 896 380 at 0000000000000714
After:
Trying to read invalid spr 896 (0x380) at 0000000000000710
Signed-off-by: Anton Blanchard <anton@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Power ISA 2.05 adds support for extended mtfsf/mtfsfi form, with a new
W field to select the upper part of the FPCSR register.
For that the helper is changed to handle 64-bit input values and mask with
up to 16 bits. The mtfsf/mtfsfi instructions do not have the W bit
marked as invalid anymore. Instead this is checked in the helper, which
therefore needs access to insns/insns_flags2. They are added to
the DisasContext struct. Finally, change all accesses to the opcode fields
to go through extract helpers, prefixed with FP for consistency.
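For illustration, such an extract helper is just a shift-and-mask accessor per opcode field; the bit positions in this standalone sketch are placeholders, not the real mtfsf/mtfsfi encoding.
#include <stdint.h>
#include <stdio.h>

/* FP-prefixed extract helpers, sketched: bit positions are made up. */
static uint32_t fp_w_example(uint32_t opcode)  { return (opcode >> 16) & 0x1; }
static uint32_t fp_fm_example(uint32_t opcode) { return (opcode >> 17) & 0xff; }

int main(void)
{
    uint32_t opcode = 0x00fe0000;   /* made-up opcode word */
    printf("W=%u FM=0x%02x\n", fp_w_example(opcode), fp_fm_example(opcode));
    return 0;
}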
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance. The check for odd register
pairs is done using the invalid bits.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance. The check for odd register
pairs is done using the invalid bits.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix tcg debug error]
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix 32-bit host compile, simplify code]
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
fabs, fnabs and fneg just flip the sign bit of an FP register; this
can be implemented in TCG instead of using softfloat.
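The operations amount to masking or flipping bit 63 of the raw IEEE-754 double image; a minimal standalone sketch of the semantics (not the actual TCG code):
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sign-bit manipulation on the raw 64-bit FP register image. */
static uint64_t do_fabs(uint64_t x)  { return x & ~(1ULL << 63); }
static uint64_t do_fnabs(uint64_t x) { return x |  (1ULL << 63); }
static uint64_t do_fneg(uint64_t x)  { return x ^  (1ULL << 63); }

int main(void)
{
    double d = -2.5, out;
    uint64_t bits, ops[3];

    memcpy(&bits, &d, sizeof(bits));
    ops[0] = do_fabs(bits);
    ops[1] = do_fnabs(bits);
    ops[2] = do_fneg(bits);
    for (int i = 0; i < 3; i++) {
        memcpy(&out, &ops[i], sizeof(out));
        printf("%g\n", out);          /* 2.5, -2.5, 2.5 */
    }
    return 0;
}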
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Broken in b5a73f8d8a, the carry itself was
fixed in 79482e5ab3. But we still need to
produce the full 64-bit addition.
Simplify the conditions at the top of the functions for when we need a
new temporary. Only plain addition is important enough to warrant avoiding
the temporary and the extra tcg move op that would come with it.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The overflow computation of nego and subf*o instructions has been broken
in commit ffe30937. Contrary to other targets, the instruction is a
subtract-from and not a subtract on PowerPC.
This patch fixes the issue by using the correct argument in the xor
computation. Thanks to Peter Maydell for the hint.
With this change the PPC emulation passes the Gwenole Beauchesne
testsuite again.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The set of computations used in b5a73f8d8a
is only valid if the current word size == target_long size. This failed
to take ppc64 in 32-bit (narrow) mode into account.
Add a NARROW_MODE macro to avoid conditional compilation.
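The macro boils down to something like the sketch below; the exact spelling in translate.c is an assumption, and a stub stands in for DisasContext.
#include <stdbool.h>
#include <stdio.h>

typedef struct DisasContextStub {
    bool sf_mode;                       /* 64-bit mode (SF=1) or not */
} DisasContextStub;

#if defined(TARGET_PPC64)
#define NARROW_MODE(C) (!(C)->sf_mode)  /* ppc64 can run in narrow mode */
#else
#define NARROW_MODE(C) 1                /* 32-bit target: always narrow */
#endif

int main(void)
{
    DisasContextStub ctx = { .sf_mode = false };

    (void)ctx;  /* silence unused warning when NARROW_MODE is constant */
    printf("narrow: %d\n", NARROW_MODE(&ctx));
    return 0;
}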
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The PowerPC 620 was the very first 64-bit PowerPC implementation, but
hardly anyone ever actually used the chips. qemu notionally supports the
620, but since we don't actually have code to implement the segment table,
the support is broken (quite likely in other ways too).
This patch, therefore, removes all remaining pieces of 620 support, to
stop it cluttering up the platforms we actually care about. This includes
removing support for the ASR register, used only on segment table based
machines.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Both fields are used in VMState and thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
While ~T0+T1+CF = T1-T0+CF-1 is true for the low 32-bits,
it does not produce the correct carry-out to bit 33. Do
exactly what the manual says.
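A standalone check of that statement, with T0, T1 and CF chosen so that the two forms agree in the low 32 bits but disagree in the carry out of them:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t t0 = 0, t1 = 0, cf = 0;

    uint64_t manual   = (uint64_t)(uint32_t)~t0 + t1 + cf;  /* ~T0+T1+CF  */
    uint64_t shortcut = (uint64_t)t1 - t0 + cf - 1;         /* T1-T0+CF-1 */

    printf("low 32 bits: %08x vs %08x\n",
           (uint32_t)manual, (uint32_t)shortcut);           /* identical  */
    printf("carry out  : %u vs %u\n",
           (unsigned)((manual >> 32) & 1),
           (unsigned)((shortcut >> 32) & 1));               /* 0 vs 1     */
    return 0;
}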
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This means that callers need not copy data into local temps.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In preparation for more efficient setting of these fields.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In r5949 / 76db3ba44e (target-ppc: memory
load/store rework) variable little_endian was replaced with ctx.le_mode.
Update the debug code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The bit that makes a dcbz instruction a dcbzl instruction was declared as
reserved in ppc32 ISAs. However, hardware simply ignores the bit, making
code valid if it simply invokes dcbzl instead of dcbz even on 750 and G4.
Thus, mark the bit as unreserved so that we properly emulate a simple dcbz
in case we're running on non-G5s.
While at it, also refactor the code to check the 970 special case during
runtime. This way we don't need to differentiate between a 970 dcbz and
any other dcbz anymore. We also allow for future improvements to add e500mc
dcbz handling.
Reported-by: Amadeusz Sławiński <amade@asmblr.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch fixes bug 1031698:
https://bugs.launchpad.net/qemu/+bug/1031698
If we look at the (truncated) translation of the conditional branch
instruction in the test submitted in the bug post, the call to the
exception helper is missing in the "bne-false" chunk of translated
code:
IN:
bne- 0x1800278
OUT:
0xb544236d: jne 0xb5442396
0xb5442373: mov %ebp,(%esp)
0xb5442376: mov $0x44,%ebx
0xb544237b: mov %ebx,0x4(%esp)
0xb544237f: mov $0x1800278,%ebx
0xb5442384: mov %ebx,0x25c(%ebp)
0xb544238a: call 0x827475a
^^^^^^^^^^^^^^^^^^
0xb5442396: mov %ebp,(%esp)
0xb5442399: mov $0x44,%ebx
0xb544239e: mov %ebx,0x4(%esp)
0xb54423a2: mov $0x1800270,%ebx
0xb54423a7: mov %ebx,0x25c(%ebp)
Indeed, gen_exception(ctx, excp) called by gen_goto_tb (called by
gen_bcond) changes ctx->exception's value to excp's:
gen_bcond()
{
gen_goto_tb(ctx, 0, ctx->nip + li - 4);
/* ctx->exception value is POWERPC_EXCP_BRANCH */
gen_goto_tb(ctx, 1, ctx->nip);
/* ctx->exception value is now POWERPC_EXCP_TRACE */
}
This makes the following test in gen_goto_tb() false during the second call:
if ((ctx->singlestep_enabled &
(CPU_BRANCH_STEP | CPU_SINGLE_STEP)) &&
ctx->exception == POWERPC_EXCP_BRANCH /* false...*/) {
target_ulong tmp = ctx->nip;
ctx->nip = dest;
/* ... and this is the missing call */
gen_exception(ctx, POWERPC_EXCP_TRACE);
ctx->nip = tmp;
}
So the patch simply adds the missing matching case, fixing our problem.
Signed-off-by: Julio Guerra <guerr@julio.in>
Signed-off-by: Alexander Graf <agraf@suse.de>
Pass around CPUArchState instead of using global cpu_single_env.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
This patch adds some extra FPU state to CPUPPCState. Specifically,
fpscr is extended to target_ulong width, since some recent (64-bit)
CPUs now have more status bits than fit inside 32 bits. Also, we add
the 32 VSR registers present on CPUs with VSX (these extend the
standard FP regs, which together with the Altivec/VMX registers form a
64 x 128-bit register file for VSX).
We don't actually support the instructions using these extra registers
in TCG yet, but we still need a place to store the state so we can
sync it with KVM and savevm/loadvm it. This patch updates the savevm
code to not fail on the extended state, but also does not actually
save it - that's a project for another patch.
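A rough sketch of the shape of the added state; the names and exact types here are assumptions for illustration, not the literal CPUPPCState layout.
#include <stdint.h>
#include <stdio.h>

typedef uint64_t target_ulong_stub;   /* 64-bit target assumed */

typedef struct PPCFpuStateStub {
    target_ulong_stub fpscr;          /* widened from 32 bits */
    uint64_t fpr[32];                 /* classic FP registers */
    uint64_t vsr[32];                 /* extra 64 bits per register for VSX;
                                         with the VMX regs this completes the
                                         64 x 128-bit VSX register file */
} PPCFpuStateStub;

int main(void)
{
    printf("stub state size: %zu bytes\n", sizeof(PPCFpuStateStub));
    return 0;
}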
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
For all targets that currently call tcg_gen_debug_insn_start,
add CPU_LOG_TB_OP_OPT to the condition that gates it.
This is useful for comparing optimization dumps, when the
pre-optimization dump is merely noise.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Altivec instructions are not working anymore in PowerPC emulation,
following commit d15f74fb, which inverted two registers in the call
to the helper. Fix that.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Acked-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The BookE variant of MSR_SF is MSR_CM. Implement everything it takes in TCG to
support running 64-bit code with MSR_CM set.
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0
and rename op_helper.c (which only contains load and store helpers)
to mem_helper.c. Remove AREG0 swapping in
tlb_fill().
Switch to AREG0-free mode. Use cpu_ld{l,uw}_code in translation
and interrupt handling, cpu_{ld,st}{l,uw}_data in loads and stores.
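Reduced to a standalone illustration with a made-up helper and a stub state struct, the pattern is:
#include <stdint.h>
#include <stdio.h>

typedef struct CPUPPCStateStub {      /* stands in for CPUPPCState */
    uint64_t gpr[32];
} CPUPPCStateStub;

/* New style: the CPU state is an explicit first argument, so the helper
 * no longer needs a global register (AREG0) pointing at env. */
static uint64_t helper_example_read_gpr(CPUPPCStateStub *env, int n)
{
    return env->gpr[n];
}

int main(void)
{
    CPUPPCStateStub env = { .gpr = { [3] = 42 } };

    printf("r3 = %llu\n",
           (unsigned long long)helper_example_read_gpr(&env, 3));
    return 0;
}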
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
[fix unwanted whitespace line in Makefile.target]
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move code from cpu_state_reset() into ppc_cpu_reset().
Reorder #include of helper_regs.h to use it in translate_init.c.
Adjust whitespace and add braces.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
When we dump the CPU registers, there's a certain chance they haven't been
synchronized with KVM yet, so we have to manually trigger that.
This aligns the code with x86 and fixes a bug where the register state was
bogus on invalid/unknown kvm exit reasons.
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
These instructions for loading and storing byte-swapped 64-bit values were
introduced in Power ISA 2.06.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Scripted conversion:
sed -i "s/CPUState/CPUPPCState/g" target-ppc/*.[hc]
sed -i "s/#define CPUPPCState/#define CPUState/" target-ppc/cpu.h
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Anthony Liguori <aliguori@us.ibm.com>
This patch implements the msgsnd instruction. It is part of the
Embedded.Processor Control specification and allows one CPU to
IPI another CPU without going through an interrupt controller.
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch implements the msgclr instruction. It is part of the
Embedded.Processor Control specification and clears pending doorbell
interrupts on the current CPU.
Signed-off-by: Alexander Graf <agraf@suse.de>
Our internal helpers to fetch TLB entries were not able to tell us
that an entry doesn't even exist. Pass an error out if we hit such
a case so that we don't accidentally index beyond the TLB array.
Signed-off-by: Alexander Graf <agraf@suse.de>
The PowerPC 2.06 BookE ISA defines an opcode called "tlbilx" which is used
to flush TLB entries. It's the recommended way of flushing in virtualized
environments.
So far we got away without implementing it, but Linux for e500mc uses this
instruction, so we'd better add it :).
Signed-off-by: Alexander Graf <agraf@suse.de>
The msync instruction as defined today is only valid on 4xx cores, not
on e500 which also supports msync, but treats it the same way as sync.
Rename it to reflect that it's 4xx only.
Signed-off-by: Alexander Graf <agraf@suse.de>
The e500 CPUs don't use 440's msync which falls on the same opcode IDs,
but instead use the real powerpc sync instruction. This is important,
since the invalid mask differs between the two.
Signed-off-by: Alexander Graf <agraf@suse.de>
When using gdb to single step a ppc interrupt routine, the execution
flow passes the rfi instruction without actually returning from the
interrupt.
The patch fixes this by not updating the nip when the debug
exception is raised and a previous POWERPC_EXCP_SYNC was set.
The latter is only the case if code for rfi or a related instruction
was generated.
Signed-off-by: Sebastian Bauer <mail@sebastianbauer.info>
Signed-off-by: Alexander Graf <agraf@suse.de>
SPE instructions are defined in pairs. Currently, the invalid-bits mask is set
for the first instruction, but the second one can have a different mask.
Example:
GEN_SPE(efdcmpeq, efdcfs, 0x17, 0x0B, 0x00600000, 0x00180000, PPC_SPE_DOUBLE),
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch implements support for the CFAR SPR on POWER7 (Come From
Address Register), which snapshots the PC value at the time of a branch or
an rfid. The latest powerpc-next kernel also catches it and can show it in
xmon or in the signal frames.
This works well enough to let recent kernels boot (which otherwise oops
on the CFAR access). It hasn't been tested enough to be confident that the
CFAR values are actually accurate, but one thing at a time.
Signed-off-by: Ben Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
When an SPE instruction is executed even though SPE is not available,
throw an SPE exception instead of an APU exception. That way the
guest knows what's going on and actually uses SPE.
Reported-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
* 'ppc-next' of git://repo.or.cz/qemu/agraf:
PPC: move TLBs to their own arrays
PPC: 440: Use 440 style MMU as default, so Qemu knows the MMU type
PPC: E500: Use MAS registers instead of internal TLB representation
PPC: Only set lower 32bits with mtmsr
PPC: update openbios firmware
PPC: mpc8544ds: Add hypervisor node
PPC: calculate kernel,initrd,cmdline locations dynamically
target-ppc: Handle memory-forced I/O controller access
PPC: E500: Implement reboot controller
As Nathan pointed out correctly, the mtmsr instruction does not modify
the high 32 bits of MSR. It also doesn't matter whether SF is set or not;
the instruction always behaves the same.
This patch moves it a bit closer to the spec.
Reported-by: Nathan Whitehorn <nwhitehorn@freebsd.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
target-ppc has been switched to softfloat only long ago, but a
few #ifdef CONFIG_SOFTFLOAT have been forgotten. Remove them.
Cc: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Most of the code to support e500 style MMUs is already in place, but
we're missing some of the special TLB0/TLB1 handling code and the slightly
different TLB modification.
This patch adds support for the FSL style MMU.
Signed-off-by: Alexander Graf <agraf@suse.de>
To enable quick runtime detection of which instruction groups the currently
selected CPU emulation supports, we have a feature mask describing exactly what
the respective CPU supports.
This feature mask is 64 bits long and we just successfully exceeded those 64
bits. To add more features, we need to think of something.
The easiest solution that came to my mind was to simply add another 64 bits
that we can also match on. Since the comparison is only done at startup of the
qemu process to generate an internal opcode calling table, we should not see
any performance penalty here.
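A standalone sketch of the two-mask match; the field names echo the insns_flags/insns_flags2 naming used elsewhere in this log, but the code is illustrative, not the QEMU implementation.
#include <stdint.h>
#include <stdio.h>

typedef struct CpuFeaturesStub {
    uint64_t insns_flags;    /* first 64 feature bits */
    uint64_t insns_flags2;   /* overflow feature bits */
} CpuFeaturesStub;

static int opcode_supported(const CpuFeaturesStub *cpu,
                            uint64_t need_flags, uint64_t need_flags2)
{
    return (cpu->insns_flags  & need_flags)  == need_flags &&
           (cpu->insns_flags2 & need_flags2) == need_flags2;
}

int main(void)
{
    CpuFeaturesStub cpu = { .insns_flags = 0x3, .insns_flags2 = 0x1 };

    printf("%d\n", opcode_supported(&cpu, 0x1, 0x1));  /* 1: both masks ok */
    printf("%d\n", opcode_supported(&cpu, 0x1, 0x2));  /* 0: missing a bit */
    return 0;
}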
Signed-off-by: Alexander Graf <agraf@suse.de>
Read them via KVM_GET_SREGS in kvm_arch_get_registers(),
and display them in "info registers".
Also get CR and PID from the existing KVM_GET_REGS.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Function gen_pc_load was introduced in commit
d2856f1ad4.
The only reason for parameter searched_pc was
a debug statement in target-i386/translate.c.
Parameter puc was needed by target-sparc until
commit d7da2a1040.
Remove searched_pc from the debug statement and remove both
parameters from the parameter list of gen_pc_load.
As the function name gen_pc_load was also misleading,
it is now called restore_state_to_opc. This new name
was suggested by Peter Maydell, thanks.
v2: Remove last parameter, too, and rename the function.
v3: Fix [] typo in target-arm/translate.c.
Fix wrong SHA1 object name in commit message (copy+paste error).
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
tcg_gen_exit_tb takes a parameter of type tcg_target_long,
so the type casts of pointer to long should be replaced by
type casts of pointer to tcg_target_long (suggested by Blue Swirl).
These changes are needed for build environments where
sizeof(long) != sizeof(void *), especially for w64.
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
On ppc machines with hash table MMUs, the special purpose register SDR1
contains both the base address and the encoded size of the (hashed) page tables.
At present, we interpret the SDR1 value within the address translation
path. But because the encodings of the size for 32-bit and 64-bit are
different, this makes for a confusing branch on the MMU type with a bunch
of curly shifts and masks in the middle of the translate path.
This patch cleans things up by moving the interpretation of SDR1 into the
helper function handling the write to the register. This leaves a simple
pre-sanitized base address and mask for the hash table in the CPUState
structure which is easier to work with in the translation path.
This makes the translation path more readable. It addresses the FIXME
comment currently in the mtsdr1 helper, by validating the SDR1 value during
interpretation. Finally it opens the way for emulating a pSeries-style
partition where the hash table used for translation is not mapped into
the guest's RAM.
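For the 32-bit hashed MMU the decode that moves into the helper looks roughly like the standalone sketch below; the field masks are quoted from memory and should be treated as illustrative, not authoritative.
#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of a 32-bit SDR1 value into a hash table base and
 * offset mask (HTABORG / HTABMASK); the constants are assumptions. */
static void decode_sdr1_32(uint32_t sdr1, uint32_t *htab_base,
                           uint32_t *htab_mask)
{
    *htab_base = sdr1 & 0xffff0000u;
    *htab_mask = ((sdr1 & 0x1ffu) << 16) | 0xffffu;
}

int main(void)
{
    uint32_t base, mask;

    decode_sdr1_32(0x00fe0003u, &base, &mask);   /* made-up SDR1 value */
    printf("base=%08x mask=%08x\n", base, mask);
    return 0;
}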
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
qemu already includes support for the popcntb instruction introduced
in POWER5 (although it doesn't actually allow you to choose POWER5).
However, the logic is slightly incorrect: it will generate results
truncated to 32-bits when the CPU is in 32-bit mode. This is not
normal for powerpc - generally arithmetic instructions on a 64-bit
powerpc cpu will generate full 64 bit results, it's just that only the
low 32 bits will be significant for condition codes.
This patch corrects this nit, which actually simplifies the code slightly.
In addition, this patch implements the popcntw and popcntd
instructions added in POWER7, in preparation for allowing POWER7 as an
emulated CPU.
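For reference, the semantics of the three instructions written as plain C over a full 64-bit register (an illustration, not the QEMU helpers):
#include <stdint.h>
#include <stdio.h>

/* Per-byte, per-word and full popcount, always producing 64-bit results. */
static uint64_t popcntb64(uint64_t v)
{
    uint64_t r = 0;
    for (int i = 0; i < 8; i++) {
        /* __builtin_popcountll is a GCC/Clang builtin */
        r |= (uint64_t)__builtin_popcountll((v >> (8 * i)) & 0xff) << (8 * i);
    }
    return r;
}

static uint64_t popcntw64(uint64_t v)
{
    uint64_t lo = __builtin_popcountll(v & 0xffffffffULL);
    uint64_t hi = __builtin_popcountll(v >> 32);
    return (hi << 32) | lo;
}

static uint64_t popcntd64(uint64_t v)
{
    return __builtin_popcountll(v);
}

int main(void)
{
    uint64_t v = 0xff00ff00f0f0f0f0ULL;

    printf("popcntb: %016llx\n", (unsigned long long)popcntb64(v));
    printf("popcntw: %016llx\n", (unsigned long long)popcntw64(v));
    printf("popcntd: %llu\n",    (unsigned long long)popcntd64(v));
    return 0;
}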
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
For a 64-bit PowerPC target, qemu correctly implements translation
through the segment lookaside buffer. Likewise it supports the
slbmte instruction which is used to load entries into the SLB.
However, it does not emulate the slbmfee and slbmfev instructions
which read SLB entries back into registers. Because these are
only occasionally used in guests (mostly for debugging) we get
away with it.
However, given the recent SLB cleanups, it becomes quite easy to
implement these, and thereby allow, amongst other things, a guest
Linux to use xmon's command to dump the SLB.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
fprintf_function uses format checking with GCC_FMT_ATTR.
Format errors were fixed in
* target-i386/helper.c
* target-mips/translate.c
* target-ppc/translate.c
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The lwarx and ldarx instructions have a bit that gives a hint to the
CPU and is safe to ignore. We currently refuse to accept any instruction
with that bit set, as it used to be declared MBZ.
Let's remove the reserved bit and make the instruction work as expected.
This fixes Linux boot for ppc64.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Andreas Färber <andreas.faerber@web.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Invalid opcode messages can be perfectly normal, for example if this
code is never executed. Don't print an error message on the console,
but keep the message in the log for debugging purposes.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The shifts in the gen_evsplat* functions were expecting rA to be masked,
not extracted, and so used the wrong shift amounts to sign-extend or pad
with zeroes.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
At the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b72.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc923584,
f40d753718,
96555a96d7 and
3990d09adf but the fixes were fragile.
Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
We do this so we can check on the corresponding stc{w,d}x. whether the
value has changed. It's a poor man's form of implementing atomic
operations and is valid only for NPTL usermode Linux emulation.
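A minimal standalone sketch of that scheme (names are illustrative, not the real linux-user fields):
#include <stdint.h>
#include <stdio.h>

static uint32_t *reserve_addr;   /* where the last l{w,d}arx loaded from */
static uint32_t  reserve_val;    /* the value it saw there               */

static uint32_t emulate_lwarx(uint32_t *addr)
{
    reserve_addr = addr;
    reserve_val  = *addr;
    return reserve_val;
}

static int emulate_stwcx(uint32_t *addr, uint32_t new_val)
{
    if (addr == reserve_addr && *addr == reserve_val) {
        *addr = new_val;
        return 1;                /* value unchanged: store succeeds */
    }
    return 0;                    /* value changed underneath: fail  */
}

int main(void)
{
    uint32_t word = 5;
    uint32_t old = emulate_lwarx(&word);

    printf("stwcx. -> %d (word=%u)\n", emulate_stwcx(&word, old + 1), word);
    return 0;
}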
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: malc <av1474@comtv.ru>
For 32-bit PPC targets, we translated:
evmergelo rX, rX, rY
as:
rX-lo = rY-lo
rX-hi = rX-lo
which is wrong, because we should be transferring rX-lo first. This
problem is fixed by swapping the order in which we write the parts of
rX.
Similarly, we translated:
evmergelohi rX, rX, rY
as:
rX-lo = rY-hi
rX-hi = rX-lo
In this case, we can't swap the assignment statements, because that
would just cause problems for:
evmergelohi rX, rY, rX
Instead, we detect the first case and save rX-lo in a temporary
variable:
tmp = rX-lo
rX-lo = rY-hi
rX-hi = tmp
These problems don't occur on PPC64 targets because we don't split the
SPE registers into hi/lo parts for such targets.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Do this so other pieces of code can make decisions based on the
capabilities of the CPU we're emulating.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: malc <av1474@comtv.ru>
This replaces a compile-time option for some targets and adds
this feature to targets which did not have a compile-time option.
Add monitor command to enable or disable single step mode.
Modify monitor command "info status" to display single step mode.
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7004 c046a42c-6fe2-441c-8c8c-71466251a162
While searching PC, always store the pc of a new instruction.
Instructions that didn't generate tcg code (such as nop) prevented the
next one from being referenced.
Based on patch for target-alpha, r6930.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6931 c046a42c-6fe2-441c-8c8c-71466251a162
Rename bswap_i32 into bswap32_i32 and bswap_i64 into bswap64_i64
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6829 c046a42c-6fe2-441c-8c8c-71466251a162
Mtfsf can have the L bit set, so all the register contents get stored
in FPSCR. Linux uses it, so let's implement it.
Signed-off-by: Alexander Graf <alex@csgraf.de>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6753 c046a42c-6fe2-441c-8c8c-71466251a162
Linux uses tlbiel to flush TLB entries in PPC64 mode. This special TLB
flush opcode only flushes an entry for the CPU it runs on, not across
all CPUs in the system.
Signed-off-by: Alexander Graf <alex@csgraf.de>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6749 c046a42c-6fe2-441c-8c8c-71466251a162
In order to modify SLB entries on recent PPC64 machines, the slbmte
instruction is used.
This patch implements the slbmte instruction and makes the "bridge"
mode code use the slb set functions, so we can move the SLB into
the CPU struct later.
This is required for Linux to run on PPC64.
Signed-off-by: Alexander Graf <alex@csgraf.de>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6747 c046a42c-6fe2-441c-8c8c-71466251a162
- use ctz32 instead of ffs - 1
- small optimisation of mtcrf
- add the name of both opcodes
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6669 c046a42c-6fe2-441c-8c8c-71466251a162
When the CPU is in little-endian mode, it should load values from RAM
in a byte-swapped manner. This check is in all the ld and st functions,
but misspelled in gen_qemu_ld32s.
This patch fixes the misspelling and makes ppc64 Linux happier.
Signed-off-by: Alexander Graf <alex@csgraf.de>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6654 c046a42c-6fe2-441c-8c8c-71466251a162
Single-precision and double-precision floating-point instructions should
be separated into their own categories, since some chips only support
single-precision instructions.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6575 c046a42c-6fe2-441c-8c8c-71466251a162
Do this so we can set float statuses once per mtvscr, rather than once
per Altivec instruction.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6508 c046a42c-6fe2-441c-8c8c-71466251a162
These are references to 'loglevel' that aren't on a simple 'if (loglevel &
X) qemu_log()' statement.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6340 c046a42c-6fe2-441c-8c8c-71466251a162
This is a large patch that changes all occurrences of logfile/loglevel
global variables to use the new qemu_log*() macros.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6338 c046a42c-6fe2-441c-8c8c-71466251a162
Use macros to avoid #ifdefs on debugging code.
This patch doesn't try to merge logging macros from different files,
but just unify the debugging code #ifdefs onto a macro on each file. A
further cleanup can unify the debugging macros in a common header later.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6332 c046a42c-6fe2-441c-8c8c-71466251a162
Based on a patch by Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6190 c046a42c-6fe2-441c-8c8c-71466251a162
The attached patch updates the FSF address in the GPL/LGPL boilerplate
in most GPL/LGPLed files, and also in COPYING.LIB.
Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6162 c046a42c-6fe2-441c-8c8c-71466251a162