Some hosts (amd64, ia64) have an ABI that ignores the high bits
of the 64-bit register when passing 32-bit arguments. Others
require the value to be properly sign-extended for the type.
I.e. "int32_t" must be sign-extended and "uint32_t" must be
zero-extended to 64 bits.
To effect this, extend the "sizemask" parameter to tcg_gen_callN
to include the signedness of the type of each parameter. If the
tcg target requires it, extend each 32-bit argument into a 64-bit
temp and pass that to the function call.
This ABI feature is required by sparc64, ppc64 and s390x.
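A minimal sketch of the widening step, assuming an illustrative per-argument
encoding (one bit for width, one for signedness); the actual sizemask layout
used by tcg_gen_callN is not reproduced here:

    #include <stdint.h>

    /* Assumed encoding for this sketch: bit 2*i     -> argument i is 64-bit,
       bit 2*i + 1 -> argument i is signed. */
    static uint64_t widen_arg32(uint32_t arg, unsigned sizemask, unsigned i)
    {
        int is_signed = (sizemask >> (2 * i + 1)) & 1;
        return is_signed ? (uint64_t)(int64_t)(int32_t)arg   /* sign-extend */
                         : (uint64_t)arg;                    /* zero-extend */
    }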
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This line was not very clear.
The next lines set or clear this bit (LE) depending on another bit (ILE),
so the first line is useless.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Since commit 2ada0ed, "Return From Interrupt" is broken for PPC processors
because some interrupt-specific bits of SRR1 are copied back to MSR.
SRR1 holds a copy of MSR saved when the interrupt was taken.
During RFI, MSR must be restored from SRR1.
But some bits of SRR1 are interrupt-specific and do not hold saved MSR bits.
This is the specification (ISA 2.06) at chapter 6.4.3 (Interrupt Processing):
"2. Bits 33:36 and 42:47 of SRR1 or HSRR1 are loaded with information specific
to the interrupt type.
3. Bits 0:32, 37:41, and 48:63 of SRR1 or HSRR1 are loaded with a copy of the
corresponding bits of the MSR."
Below is a representation of the MSR bits which are not saved (marked X);
these X bits correspond to the mask 0x783F0000 over bits 32:63:

bits   0:15   16:31   32   33:36   37:41   42:47   48:63
       -----  -----    -   XXXX    -----   XXXXXX  -----
History:
In the initial QEMU implementation (e1833e1), the mask 0x783F0000 was used
when saving MSR into SRR1, but all of bits 32:47 were then cleared when
restoring MSR on RFI. This was wrong; commit 2ada0ed explains that it breaks
Altivec, since bit 38 (Altivec support) must be saved and restored.
Commit 2ada0ed instead restores all the bits of SRR1 to MSR.
But that is also wrong.
Explanation:
As an example, let's see what's happening after a TLB miss.
According to the e300 manual (E300CORERM table 5-6), the TLB miss interrupts
set bits 44-47 for KEY, I/D, WAY and S/L. These bits are specific to the
interrupt and must not be copied into MSR at the end of the interrupt.
With the current implementation, a TLB miss overwrites the POW, TGPR and ILE bits.
Fix:
There is no need to filter out bits when saving MSR at interrupt time;
the interrupt-specific bits simply overwrite the corresponding MSR bits in SRR1.
But at the end of the interrupt (RFI), the interrupt-specific bits must be
cleared before restoring MSR from SRR1. The mask 0x783F0000 applies here.
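A minimal sketch of the restore step (macro and helper names are illustrative,
not the actual QEMU code):

    #include <stdint.h>

    /* Interrupt-specific SRR1 bits (ISA bits 33:36 and 42:47). */
    #define SRR1_INTERRUPT_SPECIFIC 0x783F0000u

    static uint32_t msr_from_srr1(uint32_t srr1)
    {
        /* Clear the interrupt-specific bits before restoring MSR. */
        return srr1 & ~SRR1_INTERRUPT_SPECIFIC;
    }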
Discussion:
The bits of the mask 0x783F0000 are cleared after an interrupt.
I cannot find a specification that describes this behaviour,
but I assume it is correct since Linux runs this way.
It may not be perfect, but it is an improvement (it works for e300).
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When running with --enable-io-thread the timer we have doesn't help,
because it doesn't wake up the CPU thread. So instead we need to
actually kick it.
While at it, I refined the logic a bit: instead of dumbly triggering a timer
every 500ms, fire it more often after an interrupt has been injected.
If there's no level-based interrupt to be expected, we don't need the
timer anyway.
This makes qemu-system-ppc with --enable-io-thread work when using KVM.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Continue vcpu execution in case an emulation failure happened while the vcpu
was in userspace. In this case a #UD will be injected into the guest,
allowing the guest OS to kill the offending process and continue.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Time base SPRs TBL/TBU should be readable in both user and privileged modes,
as specified in the POWER ISA documentation. Therefore the SPR permissions
were changed in the gen_tbl function.
Signed-off-by: Dmitry Ilyevsky <ilyevsky@gmail.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
QEMU uses a fixed page size for the CPU TLB. If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.
When the guest invalidates the TLB by virtual address we must invalidate
all entries covered by the large page. However the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.
Implementing a full variable-size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages. If the guest invalidates this region then flush the
whole TLB.
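A minimal sketch of the idea (names are illustrative, not the actual QEMU
fields): remember one address/mask pair covering every large page seen so
far, and flush the whole TLB when a later invalidation hits that range.

    #include <stdint.h>

    static uint64_t large_page_addr = (uint64_t)-1;  /* -1: nothing recorded */
    static uint64_t large_page_mask;

    /* Called when a large page of the given size is entered into the TLB. */
    static void record_large_page(uint64_t vaddr, uint64_t page_size)
    {
        uint64_t mask = ~(page_size - 1);

        if (large_page_addr == (uint64_t)-1) {
            large_page_addr = vaddr & mask;
            large_page_mask = mask;
            return;
        }
        /* Widen the recorded range until it covers the new page as well. */
        mask &= large_page_mask;
        while (((large_page_addr ^ vaddr) & mask) != 0) {
            mask <<= 1;
        }
        large_page_addr &= mask;
        large_page_mask = mask;
    }

    /* Invalidation by virtual address: fall back to a full flush if the
       address may have been covered by a large page. */
    static int needs_full_flush(uint64_t vaddr)
    {
        return (vaddr & large_page_mask) == large_page_addr;
    }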
Signed-off-by: Paul Brook <paul@codesourcery.com>
This removes a set of ifdefs from exec.c.
Introduce TARGET_VIRT_ADDR_SPACE_BITS for all targets other
than Alpha. This will be used for page_find_alloc, which is
supposed to be using virtual addresses in the first place.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This grand cleanup drops all reset and vmsave/load related
synchronization points in favor of four(!) generic hooks:
- cpu_synchronize_all_states in qemu_savevm_state_complete
(initial sync from kernel before vmsave)
- cpu_synchronize_all_post_init in qemu_loadvm_state
(writeback after vmload)
- cpu_synchronize_all_post_init in main after machine init
- cpu_synchronize_all_post_reset in qemu_system_reset
(writeback after system reset)
These writeback points, plus the existing one on VCPU exec after
cpu_synchronize_state, map onto three levels of writeback:
- KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
- KVM_PUT_RESET_STATE (on synchronous system reset, all VCPUs stopped)
- KVM_PUT_FULL_STATE (on init or vmload, all VCPUs stopped as well)
This level is passed to the arch-specific VCPU state writing function
that will decide which concrete substates need to be written. That way,
no writer of load, save or reset functions that interact with in-kernel
KVM states will ever have to worry about synchronization again. That
also means that many sources of races, segfaults and deadlocks are
eliminated.
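A rough sketch of the shape of these hooks (simplified, illustrative names;
the real QEMU code differs in detail):

    /* Writeback levels handed to the arch-specific state writer. */
    typedef enum {
        PUT_RUNTIME_STATE,  /* during runtime, other VCPUs keep running    */
        PUT_RESET_STATE,    /* synchronous system reset, all VCPUs stopped */
        PUT_FULL_STATE      /* init or vmload, all VCPUs stopped as well   */
    } put_level;

    struct vcpu { struct vcpu *next; /* arch state lives here */ };
    extern struct vcpu *first_vcpu;

    /* Hypothetical arch hook: decides which substates to push into KVM. */
    void arch_put_registers(struct vcpu *cpu, put_level level);

    void synchronize_all_post_reset(void)
    {
        for (struct vcpu *cpu = first_vcpu; cpu; cpu = cpu->next) {
            arch_put_registers(cpu, PUT_RESET_STATE);
        }
    }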
cpu_synchronize_state remains untouched, just as Anthony suggested. We
continue to need it before reading or writing of VCPU states that are
also tracked by in-kernel KVM subsystems.
Consequently, this patch removes many cpu_synchronize_state calls that
are now redundant, along with the remaining explicit register syncs.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Invalid opcode messages can be perfectly normal, for example if this
code is never executed. Don't print an error message on the console,
but keep the message in the log for debugging purposes.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The shifts in the gen_evsplat* functions were expecting rA to be masked,
not extracted, and so used the wrong shift amounts to sign-extend or pad
with zeroes.
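For reference, the usual idioms for the two cases (a generic illustration,
not the gen_evsplat* code itself):

    #include <stdint.h>

    /* Field already *extracted* into the low n bits: shift it up to the
       top of the word, then arithmetic-shift back down to sign-extend. */
    static int32_t sext_extracted(uint32_t field, unsigned n)
    {
        return (int32_t)(field << (32 - n)) >> (32 - n);
    }

    /* Field still *masked* in place at bit offset pos: the shift amounts
       must account for the offset; mixing up the two conventions gives
       exactly the kind of wrong shift amounts fixed here. */
    static int32_t sext_masked(uint32_t insn, unsigned pos, unsigned n)
    {
        return (int32_t)(insn << (32 - pos - n)) >> (32 - n);
    }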
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The CRF_{CH,CL,CH_OR_CL,CH_AND_CL} constants were all off by one bit
position. Because of this, the SPE evcmp* family of instructions would
store values in the result condition register that were also off by one
bit position.
Fixed by using the CRF_{LT,GT,EQ,SO} constants for the shift amounts.
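For illustration, the shift-amount convention looks like this (the concrete
values are an assumption for this sketch, not taken from the patch):

    #include <stdint.h>

    /* Assumed values: LT is the most significant bit of a 4-bit CR field. */
    enum { CRF_SO = 0, CRF_EQ = 1, CRF_GT = 2, CRF_LT = 3 };

    static uint32_t crf_from_signed_compare(int32_t a, int32_t b)
    {
        if (a < b) return 1u << CRF_LT;   /* 0b1000 */
        if (a > b) return 1u << CRF_GT;   /* 0b0100 */
        return 1u << CRF_EQ;              /* 0b0010 */
    }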
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
For some odd reason we sometimes hang inside KVM forever. I'd guess it's
a race condition where we actually have a level-triggered interrupt, but
the infrastructure can't expose that yet, so the guest ACKs it, goes to
sleep and never gets notified that there's still an interrupt pending.
As a quick workaround, let's just wake up every 500 ms. That way we can
ensure that we're always reinjecting interrupts in time.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
We were masking 1TB SLB entries on the feature bit of 16 MB pages. Obviously
that breaks, so let's just ignore 1TB SLB entries for now and instead do
16 MB pages correctly.
This fixes PPC64 Linux boot with -m above 256.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Our guest systems need to know by how much the timebase increases every second,
so there usually is a "timebase-frequency" property in the cpu leaf of the
device tree.
This property is missing in OpenBIOS.
With qemu, Linux's fallback timebase speed and qemu's internal timebase speed
match up. With KVM, that is no longer true. The guest is running at the same
timebase speed as the host.
This leads to massive timing problems. On my test machine, a "sleep 2" takes
about 14 seconds with KVM enabled.
This patch exports the timebase frequency to OpenBIOS, so it can then put it
into the device tree.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The recent transition to always have the DCR helper functions take 32-bit
values broke the PPC64 target, as target_long became 64 bits there.
This patch changes the DCR helpers to take target_long arguments and casts
the values to 32 bits when needed.
Fixes PPC64 build with --enable-debug-tcg
Based on a patch from Alexander Graf <agraf@suse.de>
Reported-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
As far as I know, DCR is always 32 bits wide, so we should also use uint32_t
to pass it along the call stack.
This fixes a warning when compiling qemu-system-ppc64 with KVM enabled, making
it compile without --disable-werror.
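A minimal illustration of the point (these prototypes are made up, not the
actual QEMU helpers):

    #include <stdint.h>

    /* DCR numbers and values are 32 bits wide, so both cross the call
       boundary as uint32_t regardless of how wide target_long is. */
    int dcr_read(uint32_t dcrn, uint32_t *valp);
    int dcr_write(uint32_t dcrn, uint32_t val);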
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Fix the alternate time base the same way as the default timebase. SPR_ATBL
should return a 64-bit value on 64-bit implementations.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
On PPC we have a 64-bit time base. Usually (PPC32) this is accessed using
two separate 32-bit SPR accesses to SPR_TBU and SPR_TBL.
On PPC64, however, SPR_TBL acts as a 64-bit register, so we get the full
64 bits as the return value. Returning only the lower 32 bits might seem
fine, but Linux wants to see all 64 bits or it breaks.
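A sketch of the difference (the accessor name is assumed for this
illustration):

    #include <stdint.h>

    uint64_t cpu_get_tb(void);  /* assumed accessor for the 64-bit time base */

    /* PPC32 guests read the two halves through separate SPRs... */
    static uint32_t read_tbl_32(void) { return (uint32_t)cpu_get_tb(); }
    static uint32_t read_tbu_32(void) { return (uint32_t)(cpu_get_tb() >> 32); }

    /* ...while a PPC64 guest expects a TBL read to return all 64 bits. */
    static uint64_t read_tbl_64(void) { return cpu_get_tb(); }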
This patch makes PPC64 Linux work even after the TB has crossed the 32-bit
boundary, which usually happens a few seconds after bootup.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
My segment sync patch broke compilation on PPC32, because it was trying to
sync the SLB even though ppc32 CPUs don't have an SLB.
So let's only sync it when we're on a PPC64 one!
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
While x86 only needs to sync cr0-4 to know all about its MMU state and enable
qemu to resolve virtual to physical addresses, we need to sync all of the
segment registers on PPC to know which mapping we're in.
So let's grab the segment register contents to be able to use the "x" monitor
command and also enable the gdbstub to resolve virtual addresses.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
No need to alias the e300 core for each CPU package.
Differences between microcontrollers have to be implemented in a higher layer
than translate_init.c.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Add CPU declarations of MPC8343, MPC8343E, MPC8347 and MPC8347E.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>