We're considering these two to be in different CPU families (6xx and
7xx), so keep their SPR registration separate.
The code was copied into register_G2_sprs and the common function was
renamed to apply only to the 755.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make sure that every register_*_sprs function only has calls to
spr_register* to register individual SPRs. Do not allow nesting. This
makes the code easier to follow and a look at init_proc_* should
suffice to know what SPRs a CPU has.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that the 601 has been removed, all of our CPUs have a timebase, so
it can be moved into the common function.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The top level init_proc calls register_generic_sprs but also registers
some other SPRs outside of that function. Let's group everything into
a single place.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The G2LE CPU initialization code is the same as the G2. Use the latter
for both.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The /* XXX : not implemented */ comments all over cpu_init are
confusing and ambiguous.
Do they mean not implemented by QEMU? Not implemented in a specific
access mode? Not implemented by the CPU? Do they apply just to the
register right after, or to a whole block? Do they mean we have an
action to take in the future to implement these? Are they only
informative?
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce virtual hypervisor methods that can support a "Nested KVM HV"
implementation using the bare metal 2-level radix MMU, and using HV
exceptions to return from H_ENTER_NESTED (rather than cause interrupts).
HV exceptions can now be raised in the TCG spapr machine when running a
nested KVM HV guest. The main ones are the lev==1 syscall, hdecr, hdsi,
hisi, hv fu, hv emu, and h_virt external interrupts.
HV exceptions are intercepted in the exception handler code and, instead
of causing interrupts in the guest and switching the machine to HV mode,
they go to the vhyp, which may exit the H_ENTER_NESTED hcall with the
interrupt vector number as the return value, as required by the hcall API.
Address translation is provided by the 2-level page table walker that is
implemented for the bare metal radix MMU. The partition-scope page table
is pointed at the L1's partition scope by the get_pate vhc method.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220216102545.1808018-9-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This moves the logic to reset the QEMU exception state into its own
function.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-8-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The virtual hypervisor currently always intercepts and handles
hypercalls but with a future change this will not always be the case.
Add a helper for the test so the logic is abstracted from the mechanism.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20220216102545.1808018-7-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
In preparation for implementing a full partition table option for
vhyp, update the get_pate method to take an lpid and return a
success/fail indicator.
The spapr implementation currently just asserts that lpid is always 0
and always returns success.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-6-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The radix on vhyp MMU uses a single-level radix table walk, with the
partition scope mapping provided by the flat QEMU machine memory.
A subsequent change will use the two-level radix walk on vhyp in some
situations, so provide a helper which can abstract that logic.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20220216102545.1808018-5-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Invalid or missing partition table entry exceptions should cause HV
interrupts. HDSISR is set to bad MMU config, which is consistent with
the ISA and experimentally matches what POWER9 generates.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-2-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ISA v3.1 changed the behavior of some VSX instructions by specifying
what the other words/doubleword of the result should contain when the
result is only one word/doubleword wide. e.g. xsmaxdp operates on
doubleword 0 and saves the result also in doubleword 0.
Before, the second doubleword result was undefined according to the
ISA, but now it's stated that it should be zeroed.
Even though the result was undefined before, hardware implementing these
instructions already filled these fields with 0s. Changing every ISA
version in QEMU to this behavior makes the results match what happens
in hardware.
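For illustration, a minimal standalone sketch of the new result layout
for a doubleword-sized scalar such as xsmaxdp (the struct and function
names are made up for this example; they are not QEMU's):

    #include <stdint.h>

    /* 128-bit VSX result as two 64-bit doublewords (illustrative type). */
    typedef struct {
        uint64_t dword[2];
    } Vsr128Sketch;

    /* ISA v3.1 behavior: the scalar result lands in doubleword 0 and the
     * previously undefined doubleword 1 is now zeroed. */
    static void set_scalar_dp_result(Vsr128Sketch *xt, uint64_t result)
    {
        xt->dword[0] = result;
        xt->dword[1] = 0;
    }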
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220204181944.65063-1-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We don't really need to check the exception model while applying
AIL. We can check the lpcr_mask for the presence of
LPCR_AIL/LPCR_HAIL instead.
This removes one more instance of passing the exception model ID
around.
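A minimal sketch of the new test, with the LPCR field masks passed in as
parameters so no bit positions are assumed (the function name is
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* AIL/HAIL is applicable iff the CPU's lpcr_mask advertises the field. */
    static bool ail_supported(uint64_t lpcr_mask, uint64_t lpcr_ail,
                              uint64_t lpcr_hail)
    {
        return (lpcr_mask & (lpcr_ail | lpcr_hail)) != 0;
    }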
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220207183036.1507882-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We currently abort QEMU during the dispatch of an interrupt if we try
to set MSR_HV without having MSR_HVB in the msr_mask. I think we
should verify this for all MSR bits. There is no reason to ever have an
MSR bit set if the corresponding bit is not set in that CPU's
msr_mask.
Note that this is not about the emulated code setting reserved
bits. We clear the new_msr when starting to dispatch an exception, so
if we end up with bits not present in the msr_mask that is a QEMU
programming error.
I kept the HSRR verification for BookS because it is the only CPU
family that has HSRRs.
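As a rough standalone sketch of the check being added (the mask is
passed in as a parameter; this is not the actual QEMU code):

    #include <assert.h>
    #include <stdint.h>

    /* Any bit we are about to set in the MSR on interrupt dispatch must
     * also be present in the CPU's msr_mask; anything else is a QEMU
     * programming error rather than guest-controlled input. */
    static void check_new_msr(uint64_t new_msr, uint64_t msr_mask)
    {
        assert((new_msr & ~msr_mask) == 0);
    }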
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220207183036.1507882-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make the cpu-specific powerpc_excp_* functions a bit simpler by moving
the bounds check and logging to powerpc_excp.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220207183036.1507882-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that all CPU families have their own separate exception
dispatching code we can remove powerpc_excp_legacy.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220207183036.1507882-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 7xx CPUs don't have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This code applies only to the 7xx CPUs, so we can remove the switch
statement.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no HV support in the 7xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the BookE code and add a comment explaining why we need to keep
hypercall support even though this CPU does not have a hypervisor
mode.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no ESR in the 7xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no MSR_HV in the 7xx so remove the LPES0 handling.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in the 7xx.
Also remove 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- No MSR_HV;
- No power saving states;
- No Hypervisor Emulation Assistance;
- Not 64 bits;
- No System call vectored;
- No Alternate Interrupt Location.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_DECR
POWERPC_EXCP_DLTLB
POWERPC_EXCP_DSI
POWERPC_EXCP_DSTLB
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FPU
POWERPC_EXCP_IABR
POWERPC_EXCP_IFTLB
POWERPC_EXCP_ISI
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PERFM
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_RESET
POWERPC_EXCP_SMI
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_THERM
POWERPC_EXCP_TRACE
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 7xx CPUs
(740, 745, 750, 750cl, 750cx, 750fx, 750gx, 755). This commit copies
powerpc_excp_legacy verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Since we've split the exception code by exception model, the exception
model IDs are becoming less useful. These two can be merged.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 6xx CPUs don't have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This code applies only to the 6xx CPUs, so we can remove the switch
statement.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no HV support in the 6xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no Hypervisor mode in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no ESR in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no Hypervisor mode in the 6xx, so remove all LPES0 logic.
Also remove BookE IRQ code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in the 6xx CPUs.
Also remove the 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This only applies to the G2s, the other 6xx CPUs will not have this
vector registered.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 6xx CPUs
(603, 604, G2, MPC5xx, MPC8xx). This commit copies powerpc_excp_legacy
verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We don't need three separate exception model IDs for the 603, 604 and
G2.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerPC 601 processor is the first generation of processors to
implement the PowerPC architecture. It was designed as a bridge
processor and also could execute most of the instructions of the
previous POWER architecture. It was found on the first Macs and IBM
RS/6000 workstations.
There is not much interest in keeping the CPU model of this
POWER-PowerPC bridge processor. We have the 603 and 604 CPU models of
the 60x family which implement the complete PowerPC instruction set.
Cc: "Hervé Poussineau" <hpoussin@reactos.org>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203142756.1302515-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ppc_radix64_partition_scoped_xlate() logs the host page protection
bits variable, but it is uninitialized at that point. The value is set
later on in ppc_radix64_check_prot(). Remove the output.
Fixes: Coverity CID 1468942
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20220203142145.1301749-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no MSR_HV in BookE, so remove all of the HV logic.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the switch as this function applies to BookE only.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
QEMU does not support BookE as a hypervisor.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
BookE has no DSISR or DAR. The proper registers ESR and DEAR were
already set at this point.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no LPES0 in BookE and no MSR_HV.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The SRR1 should be set to the MSR value. There are no diagnostic bits
in the SRR1 for BookE.
Note that this fixes a bug where MSR_GS would be set and Linux would
go into KVM code when there's no KVM guest.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no DSISR or DAR in BookE. Change to ESR and DEAR.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in BookE.
Also remove 40x code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for BookE CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This CPU was partially removed due to lack of support in 2017 by commit
aef7796057 ("ppc: remove non implemented cpu models").
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220128221611.1221715-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 602 was derived from the PowerPC 603, for the gaming market it
seems. It was hardly used and no firmware supporting the CPU could be
found. Drop support.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The migration code will not look at a VMStateDescription's
minimum_version_id_old field unless that VMSD has set the
load_state_old field to something non-NULL. (The purpose of
minimum_version_id_old is to specify what migration version is needed
for the code in the function pointed to by load_state_old to be able
to handle it on incoming migration.)
We have exactly one VMSD which still has a load_state_old,
in the PPC CPU; every other VMSD which sets minimum_version_id_old
is doing so unnecessarily. Delete all the unnecessary ones.
Commit created with:
sed -i '/\.minimum_version_id_old/d' $(git grep -l '\.minimum_version_id_old')
with the one legitimate use then hand-edited back in.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
It missed vmstate_ppc_cpu.
The 74xx does not have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The whole power saving states logic seems to be dependent on HV mode,
which doesn't exist for the 74xx, so I'm removing it all and leaving the
abort message.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the BookE code and add a comment explaining why we need to keep
hypercall support even though this CPU does not have a hypervisor
mode.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 74xx don't have MSR_HV so all the LPES0 logic can be removed.
Also remove the BookE IRQ code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 74xx don't have an MSR_HV.
Also remove 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- No MSR_HV;
- No power saving states;
- No Hypervisor Emulation Assistance;
- Not 64 bits;
- No System call vectored;
- No Alternate Interrupt Location.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_DECR
POWERPC_EXCP_DSI
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FPU
POWERPC_EXCP_IABR
POWERPC_EXCP_ISI
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PERFM
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_RESET
POWERPC_EXCP_SMI
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_THERM
POWERPC_EXCP_TRACE
POWERPC_EXCP_VPU
POWERPC_EXCP_VPUA
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 74xx
CPUs. This commit copies powerpc_excp_legacy verbatim so the next one
has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Since this is now BookS only, we can simplify the code a bit and check
has_hv_mode instead of enumerating the exception models. LPES0 does
not make sense if there is no MSR_HV.
Note that QEMU does not support HV mode on 970 and POWER5+ so we don't
set MSR_HV in msr_mask.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220124184605.999353-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- Always uses HV_EMU if the CPU has MSR_HV;
- Exceptions always delivered in 64 bit.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_DECR
POWERPC_EXCP_DSEG
POWERPC_EXCP_DSI
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FPU
POWERPC_EXCP_FU
POWERPC_EXCP_HDECR
POWERPC_EXCP_HDSI
POWERPC_EXCP_HISI
POWERPC_EXCP_HVIRT
POWERPC_EXCP_HV_EMU
POWERPC_EXCP_HV_FU
POWERPC_EXCP_ISEG
POWERPC_EXCP_ISI
POWERPC_EXCP_MAINT
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PERFM
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_RESET
POWERPC_EXCP_SDOOR_HV
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_SYSCALL_VECTORED
POWERPC_EXCP_THERM
POWERPC_EXCP_TRACE
POWERPC_EXCP_VPU
POWERPC_EXCP_VPUA
POWERPC_EXCP_VSXU
POWERPC_EXCP_HV_MAINT
POWERPC_EXCP_SDOOR
(I added the two above that were not being considered. They used to be
"Invalid exception". Now they become "Unimplemented exception" which
is more accurate.)
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220124184605.999353-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for BookS CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220124184605.999353-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 Program Interrupt does not set SRR1 with any diagnostic bits,
just a clean copy of the MSR.
We're using the BookE Exception Syndrome Register, which is different
from the 405's.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg: restored SPR_40x_ESR settings ]
Message-Id: <20220118184448.852996-14-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 ISI does not set SRR1 with any exception syndrome bits, only a
clean copy of the MSR.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg : Fixed removal which was done in the wrong routine ]
Message-Id: <20220118184448.852996-13-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 has no DSISR or DAR, so convert the trace entry to
use ESR and DEAR instead.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg : - changed registers to ESR and DEAR.
- updated commit log ]
Message-Id: <20220118184448.852996-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The current Debug exception dispatch is the BookE one, so it is
different from the 405. We effectively don't support the 405 Debug
exception.
This patch removes the BookE code and moves the DEBUG into the "not
implemented" block.
Note that there is in theory a functional change here, since we now
abort when a Debug exception happens. However, given that it was never
implemented, I don't believe this exception has ever been dispatched for
the 405.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no DSISR in the 405. It uses DEAR which we already set
earlier at ppc_cpu_do_unaligned_access.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 has no MSR_HV, and EPR is BookE-only, so we can remove it all.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
powerpc_excp_40x applies only to the 405, so remove HV code and
references to BookE.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
In powerpc_excp_40x the Critical exception is now for 405 only, so we
can remove the BookE and G2 blocks.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220118184448.852996-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- No MSR_HV or MSR_LE;
- No power saving states;
- No Hypervisor Emulation Assistance;
- Not 64 bits;
- No System call vectored;
- No Interrupts Little Endian;
- No Alternate Interrupt Location.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_CRITICAL
POWERPC_EXCP_DEBUG
POWERPC_EXCP_DSI
POWERPC_EXCP_DTLB
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FIT
POWERPC_EXCP_ISI
POWERPC_EXCP_ITLB
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PIT
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_WDT
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for 40x CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220118184448.852996-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 MSR has the Machine Check Enable bit. We're making use of it
when dispatching Machine Check, so add the bit to the msr_mask.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220118184448.852996-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Bit 13 is the Wait State Enable bit. Give it its proper name.
As far as I can see we don't do anything with MSR_POW for the 405, so
this change has no effect.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220118184448.852996-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Commit cd0c6f4735 did not take 405 CPUs into account when adding
support for batching of TCG TLB flushes. Set the TLB_NEED_LOCAL_FLUSH
flag when SPR_40x_PID is set or a TLB entry is updated.
Cc: Thomas Huth <thuth@redhat.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Fixes: cd0c6f4735 ("ppc: Do some batching of TCG tlb flushes")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220113180352.1234512-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
POWERPC_MMU_BOOKE is not a mask and should not be tested with a
bitwise AND operator.
It went unnoticed because it only impacts the 601 CPU implementation
for which we don't have a known firmware image.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220124081609.3672341-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
cpu_interrupt_exittb() was introduced by commit 044897ef4a
("target/ppc: Fix system lockups caused by interrupt_request state
corruption") as a way to wrap cpu_interrupt() helper in BQL.
After that, commit 6d38666a89 ("ppc: Ignore the CPU_INTERRUPT_EXITTB
interrupt with KVM") added a condition to skip this interrupt if we're
running with KVM.
The problem is that the change made by the above commit, testing for
!kvm_enabled() at the start of cpu_interrupt_exittb():
    static inline void cpu_interrupt_exittb(CPUState *cs)
    {
        if (!kvm_enabled()) {
            return;
        }

        (... do cpu_interrupt(cs, CPU_INTERRUPT_EXITTB) ...)
is doing the opposite of what it intended to do. It will return
immediately if not kvm_enabled(), i.e. if it's an emulated CPU, and if
kvm_enabled() it will proceed to fire CPU_INTERRUPT_EXITTB.
Fix the 'skip KVM' condition so the function is a no-op when
kvm_enabled().
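For reference, a sketch of the corrected behaviour with the QEMU
specifics abstracted into parameters (the names here are illustrative,
not the actual code):

    #include <stdbool.h>

    /* Skip entirely when running with KVM; only request the exit-TB
     * interrupt for emulated (TCG) CPUs. */
    static void cpu_interrupt_exittb_fixed(bool kvm_in_use,
                                           void (*request_exittb)(void))
    {
        if (kvm_in_use) {
            return;
        }
        request_exittb();
    }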
CC: Greg Kurz <groug@kaod.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/809
Fixes: 6d38666a89 ("ppc: Ignore the CPU_INTERRUPT_EXITTB interrupt with KVM")
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20220121160841.9102-1-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The Book-E architecture does not set the error code in bits 31:27
of SRR1, but instead uses these bits for custom fields such
as GS (Guest Supervisor).
Wrongly setting these fields will result in QEMU crashes
when attempting to execute non-executable code, due to the attempt
to use Guest Supervisor mode.
Cc: "Cédric Le Goater" <clg@kaod.org>
Cc: Daniel Henrique Barboza <danielhb413@gmail.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Greg Kurz <groug@kaod.org>
Cc: qemu-ppc@nongnu.org
Cc: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Vitaly Cheptsov <cheptsov@ispras.ru>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220121093107.15478-1-cheptsov@ispras.ru>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
After a TLB miss exception, GPRs 0-3 must be restored on rfi.
This is managed by hreg_store_msr(), which is called by do_rfi().
However, hreg_store_msr() only does it if MSR[TGPR] is unset in the
passed MSR value.
The problem is that do_rfi() is given the content of SRR1 as
the value to be set in the MSR, but the TGPR bit is not part of SRR1;
that bit is used for something else and is sometimes set
to 1, leading to hreg_store_msr() not restoring the GPRs.
So, as is already done for the POW bit, force-clear it.
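A minimal sketch of the masking described above, with the single-bit
masks passed in so no bit positions are assumed (the helper name is
illustrative):

    #include <stdint.h>

    /* Force-clear POW and TGPR from the SRR1 value before it is written
     * into the MSR on rfi, so hreg_store_msr() restores the GPRs. */
    static uint64_t rfi_msr_value(uint64_t srr1, uint64_t msr_pow_mask,
                                  uint64_t msr_tgpr_mask)
    {
        return srr1 & ~(msr_pow_mask | msr_tgpr_mask);
    }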
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Cedric Le Goater <clg@kaod.org>
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220120103824.239573-1-christophe.leroy@csgroup.eu>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 7448 CPU is an evolution of the PowerPC 7447A and the last of the
G4 family. Change its family to correctly reflect its features. This
fixes Linux boot.
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220117092555.1616512-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Commit c8f49e6b93 ("target/ppc: remove 401/403 CPUs") left a few
things behind.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220117091541.1615807-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220118104150.1899661-3-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This breaks migration compatibility from (very) old versions of
QEMU. This should not be a problem for the pseries machine for which
migration is only supported on recent QEMUs ( > 2.x). There is no
clear status on what is supported or not for the other machines. Let's
move forward and remove the .load_state_old handler.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118104150.1899661-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We use the endianness of interrupts to determine which endianness to
use for the guest kernel memory dump. For machines that support HILE
(powernv8 and up) we have always been generating big-endian dump
files.
This patch uses the HILE support recently added to
ppc_interrupts_little_endian to fix the endianness of the dumps for
powernv machines.
Here are two dumps created at different moments:
$ file skiboot.dump
skiboot.dump: ELF 64-bit MSB core file, 64-bit PowerPC ...
$ file kernel.dump
kernel.dump: ELF 64-bit LSB core file, 64-bit PowerPC ...
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The next patches will split powerpc_excp into multiple family-specific
handlers. This patch adds a wrapper to make the transition clearer.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The ppc_interrupts_little_endian function is now suitable for
determining the endianness of interrupts for all CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Some CPUs set ILE via an MSR bit. We can make
ppc_interrupts_little_endian handle that case as well. Now we have a
centralized way of determining the endianness of interrupts.
This change has no functional impact.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The ppc_interrupts_little_endian function could be used for interrupts
delivered in Hypervisor mode, so add support for powernv8 and powernv9
to it.
Also drop the comment because it is inaccurate: all CPUs that can run
little endian can have interrupts in little endian. The point is
whether they can take interrupts in an endianness different from
MSR_LE.
This change has no functional impact.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the compile time definition and make the logging be controlled
by the `-d mmu` option in the cmdline.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ISA v2.03 introduced the Floating Round to Integer instructions: frin,
friz, frip, and frim. Add them to POWER5+.
The PPC_FLOAT_EXT flag also includes the fre (Floating Reciprocal
Estimate) instruction, which was introduced in ISA v2.0x. The
architecture document says it's optional, and that might be the reason
why it has been kept under the PPC_FLOAT_EXT flag. This means 970 CPUs
cannot use it under QEMU, which doesn't seem to be a problem.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The popcntb instruction was added in ISA v2.02. Add support for POWER5+
processors since they implement ISA v2.03.
PPC970 CPUs implement v2.01 and do not support popcntb.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220105095142.3990430-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Rename the defines for quad in their various forms so that their
signedness is now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing
to keep assignments aligned.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
MMCR0 writes will change only MMCR0 bits which are used to calculate
HFLAGS_PMCC0, HFLAGS_PMCC1 and HFLAGS_INSN_CNT hflags. No other machine
register will be changed during this operation. This means that
hreg_compute_hflags() is overkill for what we need to do.
pmu_update_summaries() is already updating HFLAGS_INSN_CNT without
calling hreg_compute_hflags(). Let's do the same for the other 2 MMCR0
hflags.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220103224746.167831-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use the cached pmc_cyc_cnt value in pmu_update_cycles
and pmc_update_overflow_timer. This leaves pmc_get_event
and pmc_is_inactive unused, so remove them.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103224746.167831-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use the cached pmc_ins_cnt value. Unroll the loop over the
different PMC counters. Treat the PMC4 run-latch specially.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103224746.167831-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is the combination of frozen bit and counter type, on a per
counter basis. So far this is only used by HFLAGS_INSN_CNT, but
will be used more later.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[danielhb: fixed PMC4 cyc_cnt shift, insn run latch code,
MMCR0_FC handling, "PMC[1-6]" comment]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220103224746.167831-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We can just access it directly in powerpc_excp.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[ clg: Took into account removal of inline ]
Message-Id: <20211229165751.3774248-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that 'vector' is known before calling the interrupt-specific setup
code, we can move all of the scv setup into one place.
No functional change intended.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211229165751.3774248-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
None of the interrupt setup code touches 'vector', so we can move it
earlier in the function. This will allow us to later move the System
Call Vectored setup that is on the top level into the
POWERPC_EXCP_SYSCALL_VECTORED code block.
This patch also moves the verification for when 'excp' does not have
an address associated with it. We now bail a little earlier when that
is the case. This should not cause any visible effects.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The next patch will start accessing the excp_vectors array earlier in
the function, so add a bounds check as first thing here.
This converts the empty return on POWERPC_EXCP_NONE to an error. This
exception number never reaches this function and if it does it
probably means something else went wrong up the line.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There are currently only two interrupts that use alternate SRRs, so
let them write to them directly during the setup code.
No functional change intended.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The non-signalling versions of the VSX scalar convert to shorter/longer
precision insns don't silence SNaNs in the hardware. To better match
this behavior, use the non-arithmetic conversion of helper_todouble
instead of float32_to_float64. A test is added to prevent future
regressions.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211228120310.1957990-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Rework slightly ppc_cpu_dump_state() to replace the various 'if'
statements with a 'switch'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-10-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PID SPR of the 405 CPU contains the translation ID of the TLB,
which is an 8-bit field. Enforce the mask with a store helper.
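A minimal sketch of the masking done by such a store helper (the
function name is illustrative, not QEMU's):

    #include <stdint.h>

    /* The 405 PID SPR only holds an 8-bit translation ID, so mask the
     * value on write. */
    static uint32_t store_40x_pid(uint32_t value)
    {
        return value & 0xFF;
    }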
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-8-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 timers were broken when BookE support was added. The assumption
was made that the register numbers were the same, but they are not:
SPR_BOOKE_TSR (0x150)
SPR_BOOKE_TCR (0x154)
SPR_40x_TSR (0x3D8)
SPR_40x_TCR (0x3DA)
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Fixes: ddd1055b07 ("PPC: booke timers")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-6-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no need to deactivate MMU logging at compile time. Remove all
use of defines. Only keep DUMP_PAGE_TABLES for another series since
page tables could be dumped from the monitor.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-4-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This facilitates reading the logs when the CPU_LOG_INT mask is
activated. We should do the same for error codes.
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211222064025.1541490-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-3-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The compiler should know better how to inline code if necessary.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
For Radix translation, the EA range is 64 bits. When EA(2:11) is
nonzero, a segment interrupt should occur.
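A standalone sketch of the check, with the mask derived from the bit
range above using IBM bit numbering (bit 0 is the MSB of the 64-bit EA);
the names are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    /* EA bits 2..11 (IBM numbering) correspond to 2^61 .. 2^52. */
    #define EA_BITS_2_TO_11 0x3FF0000000000000ULL

    /* A nonzero value in EA(2:11) should raise a segment interrupt. */
    static bool radix_ea_is_invalid(uint64_t eaddr)
    {
        return (eaddr & EA_BITS_2_TO_11) != 0;
    }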
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.ibm.com>
Message-Id: <20211231073122.3183583-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
An Event-Based Branch (EBB) allows applications to change the NIA when an
event-based exception occurs. Event-based exceptions are enabled by
setting the Branch Event Status and Control Register (BESCR). If the
event-based exception is enabled when the exception occurs, an EBB
happens.
The following operations happen during an EBB:
- Global Enable (GE) bit of BESCR is set to 0;
- bits 0-61 of the Event-Based Branch Return Register (EBBRR) are set
to the effective address of the NIA that would have executed if the EBB
didn't happen;
- Instruction fetch and execution will continue in the effective address
contained in the Event-Based Branch Handler Register (EBBHR).
The EBB Handler will process the event and then execute the Return From
Event-Based Branch (rfebb) instruction. rfebb sets BESCR_GE and then
redirects execution to the address pointed to by EBBRR. This process is
described in the PowerISA v3.1, Book II, Chapter 6 [1].
This patch implements the rfebb instruction. Descriptions of all
relevant BESCR bits are also added - this patch is only using BESCR_GE,
but the next patches will use the remaining bits.
[1] https://wiki.raptorcs.com/w/images/f/f5/PowerISA_public.v3.1.pdf
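A rough standalone sketch of the rfebb flow described above (the struct,
field names and GE mask parameter are placeholders, not QEMU's):

    #include <stdint.h>

    typedef struct {
        uint64_t bescr; /* Branch Event Status and Control Register */
        uint64_t ebbrr; /* Event-Based Branch Return Register       */
        uint64_t nip;   /* next instruction address                 */
    } EbbStateSketch;

    /* rfebb: re-enable EBBs and resume at the address saved by the EBB. */
    static void rfebb_sketch(EbbStateSketch *env, uint64_t bescr_ge_mask)
    {
        env->bescr |= bescr_ge_mask;
        env->nip = env->ebbrr;
    }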
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-9-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
PM_RUN_INST_CMPL, instructions completed with the run latch set, is
the architected PowerISA v3.1 event defined with PMC4SEL = 0xFA.
Implement it by checking for the CTRL RUN bit before incrementing the
counter. To make this work properly we also need to force a new
translation block each time SPR_CTRL is written. A small tweak in
pmu_increment_insns() is then needed to only increment this event
if the thread has the run latch.
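A minimal sketch of the run-latch gating (illustrative names only):

    #include <stdbool.h>
    #include <stdint.h>

    /* PM_RUN_INST_CMPL only advances PMC4 while the thread's run latch
     * (the RUN bit of the CTRL register) is set. */
    static void count_run_latch_insns(uint64_t *pmc4, uint64_t num_insns,
                                      bool run_latch)
    {
        if (run_latch) {
            *pmc4 += num_insns;
        }
    }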
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-8-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PMU is already counting cycles by calculating time elapsed in
nanoseconds. Counting instructions is a different matter and requires
another approach.
This patch adds the capability of counting completed instructions (Perf
event PM_INST_CMPL) by counting the number of instructions translated in
each translation block right before exiting it.
A new pmu_count_insns() helper in translation.c was added to do that.
After verifying that the PMU is counting instructions, call
helper_insns_inc(). This new helper from power8-pmu.c will add the
instructions to the relevant counters. It'll also be responsible for
triggering counter negative overflows as it is already being done with
cycles.
To verify whether the PMU is counting instructions or not, a new hflag
named 'HFLAGS_INSN_CNT' is introduced. This flag will match the internal
state of the PMU. We'll be using this flag to avoid calling
helper_insns_inc() when we do not have a valid instruction event being
sampled.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-7-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerISA v3.1 defines that if the proper bits are set (MMCR0_PMC1CE
for PMC1 and MMCR0_PMCjCE for the remaining PMCs), counter negative
conditions are enabled. This means that if the counter value overflows
(i.e. exceeds 0x80000000) a performance monitor alert will occur. This alert
can trigger an event-based exception (to be implemented in the next patches)
if the MMCR0_EBE bit is set.
For now, overflowing the counter when the PMC is counting cycles will
just trigger a performance monitor alert. This is done by starting the
overflow timer to expire at the moment the overflow would be occurring. The
timer will call fire_PMC_interrupt() (via cpu_ppc_pmu_timer_cb) which will
trigger the PMU alert and, if the conditions are met, an EBB exception.
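A standalone sketch of the deadline computation, assuming the 1 GHz /
1 cycle-per-nanosecond base used elsewhere in this series (names are
illustrative):

    #include <stdint.h>

    #define PMC_COUNTER_NEGATIVE 0x80000000ULL

    /* Nanoseconds until the counter goes "negative" at 1 cycle per ns;
     * 0 means the alert should fire immediately. */
    static uint64_t ns_until_overflow(uint64_t pmc_value)
    {
        if (pmc_value >= PMC_COUNTER_NEGATIVE) {
            return 0;
        }
        return PMC_COUNTER_NEGATIVE - pmc_value;
    }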
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-6-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
MMCR1 determines the events to be sampled by the PMU. Updating the
counters at every MMCR1 write ensures that we're not sampling more
or less events by looking only at MMCR0 and the PMCs.
It is worth noticing that both the Book3S PowerPC PMU and this IBM
Power8+ PMU that we're modeling also use MMCRA, MMCR2 and MMCR3 to
control the PMU. These three registers aren't being handled in this
initial implementation, so for now we're controlling all the PMU
aspects using MMCR0, MMCR1 and the PMCs.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Calling pmu_update_cycles() on every PMC read/write operation ensures
that the values being fetched are up to date with the current PMU state.
In theory we can get away by just trapping PMCs reads, but we're going
to trap PMC writes to deal with counter overflow logic later on. Let's
put the required wiring for that and make our lives a bit easier in the
next patches.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch adds the barebones of the PMU logic by enabling cycle
counting. The overall logic goes as follows:
- MMCR0 reg initial value is set to 0x80000000 (MMCR0_FC set) to avoid
having to spin the PMU right at system init;
- to retrieve the events that are being profiled, pmc_get_event() will
check the current MMCR0 and MMCR1 value and return the appropriate
PMUEventType. For PMCs 1-4, event 0x2 is the implementation dependent
value of PMU_EVENT_INSTRUCTIONS and event 0x1E is the implementation
dependent value of PMU_EVENT_CYCLES. These events are supported by IBM
Power chips since Power8, at least, and the Linux Perf driver makes use
of these events until kernel v5.15. For PMC1, event 0xF0 is the
architected PowerISA event for cycles. Event 0xFE is the architected
PowerISA event for instructions;
- if the counter is frozen, either via the global MMCR0_FC bit or its
individual frozen counter bits, PMU_EVENT_INACTIVE is returned;
- pmu_update_cycles() will go through each counter and update the
values of all PMCs that are counting cycles. This function will be
called every time a MMCR0 update is done to keep counters values
up to date. Upcoming patches will use this function to allow the
counters to be properly updated during read/write of the PMCs
and MMCR1 writes.
Given that the base CPU frequency is fixed at 1 GHz for both the powernv
and pseries clocks, the cycle calculation assumes that 1 nanosecond
equals 1 CPU cycle. The cycle value is then calculated by adding the
elapsed time, in nanoseconds, since the last cycle update done via
pmu_update_cycles().
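A minimal standalone sketch of that bookkeeping (types and names are
illustrative, not QEMU's):

    #include <stdint.h>

    typedef struct {
        uint64_t value;          /* current PMC value           */
        uint64_t last_update_ns; /* time of the previous update */
    } CycleCounterSketch;

    /* With 1 cycle per nanosecond, elapsed ns equals elapsed cycles. */
    static void update_cycles(CycleCounterSketch *pmc, uint64_t now_ns)
    {
        pmc->value += now_ns - pmc->last_update_ns;
        pmc->last_update_ns = now_ns;
    }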
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch starts an IBM Power8+ compatible PMU implementation by adding
the representation of PMU events that we are going to sample,
PMUEventType. This enum represents a Perf event that is being sampled by
a specific counter 'sprn'. Events that aren't available (i.e. no event
was set in MMCR1) will be of type 'PMU_EVENT_INVALID'. Events that are
inactive due to frozen counter bits state are of type
'PMU_EVENT_INACTIVE'. Other types added in this patch are
PMU_EVENT_CYCLES and PMU_EVENT_INSTRUCTIONS. More types will be added
later on.
Let's also add the required PMU cycle overflow timers. They will be used
to trigger cycle overflows when cycle events are being sampled. This
timer will call cpu_ppc_pmu_timer_cb(), which in turn calls
fire_PMC_interrupt(). Both functions are stubs that will be implemented
later on when EBB support is added.
Two new helper files are created to host this new logic.
cpu_ppc_pmu_init() will init all overflow timers during CPU init time.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This reverts commit 336e91f853.
It breaks the --disable-tcg build:
../target/ppc/excp_helper.c:463:29: error: implicit declaration of
function ‘cpu_ldl_code’ [-Werror=implicit-function-declaration]
We should not have TCG code in powerpc_excp because some kvm-only
routines use it indirectly to dispatch interrupts. See
kvm_handle_debug, spapr_mce_req_event and
spapr_do_system_reset_on_cpu.
We can re-introduce the change once we have split the interrupt
injection code between KVM and TCG.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20211209173323.2166642-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When Altivec support was added to the e6500 kernel in 2012[1], the
QEMU code was not changed, so we don't register the VPU/VPUA
exceptions for the e6500:
qemu: fatal: Raised an exception without defined vector 73
Note that the error message says 73, instead of 32, which is the IVOR
for VPU. This is because QEMU only knows about the VPU interrupt
for the 7400s. In theory, we should not be raising _that_ VPU
interrupt, but instead another one specific to the e6500.
We unfortunately cannot register e6500-specific VPU/VPUA interrupts
because the SPEU/EFPDI interrupts also use IVOR32/33. These are
present only in the e500v1/2 versions. From the user manual:
e500v1, e500v2: only SPEU/EFPDI/EFPRI
e500mc, e5500: no SPEU/EFPDI/EFPRI/VPU/VPUA
e6500: only VPU/VPUA
So I'm leaving IVOR32/33 as SPEU/EFPDI, but altering the dispatch code
to convert the VPU #73 to a #32 when we're in the e6500. Since the
handling for SPEU and VPU is the same, this is the only change that's
needed. The EFPDI is not implemented and will cause an abort. I don't
think it's worth changing the error message to take VPUA into
consideration, so I'm not changing anything there.
This bug was discussed in the thread:
https://lists.gnu.org/archive/html/qemu-ppc/2021-06/msg00222.html
1- https://git.kernel.org/torvalds/c/cd66cc2ee52
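A rough sketch of the remapping described above, using the raw exception
numbers from this message (the function name and the is_e6500 flag are
illustrative):

    /* On an e6500, redirect the generic VPU exception (#73) to the
     * IVOR32 slot (#32), whose handling it shares with SPEU. */
    static int remap_vpu_for_e6500(int excp, int is_e6500)
    {
        if (is_e6500 && excp == 73) {
            return 32;
        }
        return excp;
    }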
Reported-by: <mario@locati.it>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211213133542.2608540-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This instruction has VRT and VRB fields instead of T/TX and B/BX.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211213120958.24443-4-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Victor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20211213120958.24443-3-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
PPC instruction xsmaxcdp, xsmincdp, xsmaxjdp, and xsminjdp are using
vector registers when they should be using VSX ones. This happens
because the instructions are using GEN_VSX_HELPER_R3, which adds 32
to the register numbers, effectively making them vector registers.
This patch fixes it by changing these instructions to use
GEN_VSX_HELPER_X3.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Victor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20211213120958.24443-2-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
They have been there since 2007 without any board using them, and most
were protected by a TODO define. Drop support.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211202191108.1291515-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The exception model id for 601v has been removed without any mention
of why. I assume the removal was inadvertent and restore it here.
Fixes: b632a148b6 ("target-ppc: Use QOM method dispatch for MMU fault handling")
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 603e uses the same exception code as 603 so we don't need a
dedicated entry for it.
This is only a removal of redundant code, no functional change.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The Floating-point Unavailable and Decrementer interrupts are being
registered at the same 0x900 address. The FPU should be at 0x800
instead.
Verified on MPC555, MPC860 and MPC885 user manuals.
Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
(Applies to 7441, 7445, 7450, 7451, 7455, 7457, 7447, 7447a and 7448)
The QEMU-side software TLB implementation for the 7450 family of CPUs
is being removed due to lack of known users in the real world. The
last users in the code were removed by the two previous commits.
A brief history:
The feature was added in QEMU by commit 7dbe11acd8 ("Handle all MMU
models in switches...") with the mention that Linux was not able to
handle the TLB miss interrupts and the MMU model would be kept
disabled.
At some point later, commit 8ca3f6c382 ("Allow selection of all
defined PowerPC 74xx (aka G4) CPUs.") enabled the model for the 7450
family without further justification.
Since 2011 [1] we have been unable to run OpenBIOS on the 7450s and
have not heard of any other software being used with those CPUs in
QEMU. Attempts were made to find a guest OS that implemented
the TLB miss handlers and none were found among Linux 5.15, FreeBSD 13,
MacOS9, MacOSX and MorphOS 3.15.
All CPUs that registered this feature were moved to an MMU model that
replaces the software TLB with a QEMU hardware TLB
implementation. They can now run the same software as the 7400 CPUs,
including the OSes mentioned above.
References:
- https://bugs.launchpad.net/qemu/+bug/812398
  https://gitlab.com/qemu-project/qemu/-/issues/86
- https://lists.nongnu.org/archive/html/qemu-ppc/2021-11/msg00289.html
message id: 20211119134431.406753-1-farosas@linux.ibm.com
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211130230123.781844-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The e600 CPU is a successor of the 7448 and, like all the CPUs of the
7450 family, it has an optional software TLB feature.
We have determined that there is no OS software support for the 7450
software TLB available these days. See the previous commit for more
information.
This patch disables the SPRs and instructions related to the software
TLB in the e600 CPU.
No functional change intended. These facilities would only be used by
the OS in interrupt handlers for interrupts that QEMU never generates.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211130230123.781844-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
(Applies to 7441, 7445, 7450, 7451, 7455, 7457, 7447 and 7447a)*
Since 2011 [1] we have been unable to run OpenBIOS on the 7450s and
have not heard of any other software that is used with those CPUs in
QEMU. A current discussion [2] shows that the 7450 software TLB is
unsupported in Linux 5.15, FreeBSD 13, MacOS9, MacOSX and MorphOS
3.15. With no known support in firmware or OS, this means that no code
for any of the 7450 CPUs is ever run in QEMU.
Since the implementation in QEMU of the 7400 MMU is the same as the
7450, except for the software TLB vs. hardware TLB search, this patch
changes all 7450 cpus to the 7400 MMU model. This has the practical
effect of disabling the software TLB feature while keeping other
aspects of address translation working as expected.
This allows us to run software on the 7450 family again.
*- note that the 7448 is currently aliased in QEMU to a 7400, so it
is unaffected by this change.
1- https://bugs.launchpad.net/qemu/+bug/812398
   https://gitlab.com/qemu-project/qemu/-/issues/86
2- https://lists.nongnu.org/archive/html/qemu-ppc/2021-11/msg00289.html
message id: 20211119134431.406753-1-farosas@linux.ibm.com
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211130230123.781844-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When computing the predicate "is this value currently formatted
for single precision", we do not want to round the value according
to the current rounding mode, nor perform a floating-point equality.
We want to see if the N bits that make up single-precision are the
only ones set within the register, and then use a bitwise equality.
Fixes a bug in which a single-precision NaN is considered !SP,
because float64_eq(nan, nan) is always false.
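A standalone illustration of why a bitwise comparison is used instead
of a floating-point one (the helper name is made up, this is not the
QEMU code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Compare the raw bit patterns of two doubles.  Unlike an FP
     * compare, this returns true for two identical NaN encodings. */
    static bool bits_equal64(double a, double b)
    {
        uint64_t ua, ub;

        memcpy(&ua, &a, sizeof(ua));
        memcpy(&ub, &b, sizeof(ub));
        return ua == ub;
    }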
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-35-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no double-rounding bug here, because the result is
merely an estimate to within 1 part in 256, but perform the
operation with float64r32_div for consistency.
Use float_flag_invalid_snan instead of recomputing the
snan-ness of the operand.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-34-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no double-rounding bug here, because the result is
merely an estimate to within 1 part in 32, but perform the
operation with float64r32_div for consistency.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-33-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_mul. Fixes a double-rounding issue with performing
the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-32-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_{add,sub,div}. Fixes a double-rounding issue with
performing the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-31-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_sqrt. Fixes a double-rounding issue with performing
the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-30-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_muladd. Fixes a double-rounding issue with performing
the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-29-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float_flag_invalid_snan instead of recomputing
the snan-ness of the operand.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-27-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float_flag_invalid_snan instead of recomputing
the snan-ness of the operand.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-26-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxsqrt and vxsnan are computed directly by softfloat,
we don't need to recompute them. Split out float_invalid_op_sqrt
to be used in several places. This fixes VSX_SQRT, which did
not order its tests correctly to eliminate NaN with sign set.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-25-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We only needed one ieee arithmetic operation to raise
exceptions. To convert back to register form, we can
use our simpler non-arithmetic function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-24-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxsnan is computed directly by softfloat,
we don't need to recompute it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-23-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Calling helper_frsp directly from other helpers generates
the incorrect retaddr. Split out a helper that takes the
retaddr as a parameter.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-22-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We will process flags other than invalid in helper_float_check_status,
which is invoked after the writeback to FRT.
Fixes a bug in which FRT is not written when OE/UE/XE are enabled.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-21-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Create a common function for all of the madd helpers.
Let the compiler tail call or inline as it chooses.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-20-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vximz, vxisi, and vxsnan are computed directly by
softfloat, we don't need to recompute them. This replaces the
separate float{32,64}_maddsub_update_excp functions with a
single float_invalid_op_madd function.
Fix VSX_MADD by passing sfprf to float_invalid_op_madd,
whereas the previous *_maddsub_update_excp assumed it true.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-19-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Let float64_round_to_int detect and silence snans.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-18-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
In GEN_FLOAT_B, we called helper_reset_fpstatus immediately
before calling helper_fri*. Therefore get_float_exception_flags
is known to be zero, and this code can be simplified.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-17-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is the proper type for the enumeration.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211119160502.17432-16-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no reason the callers can't tail call to one function.
Leave it up to the compiler either way.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211119160502.17432-15-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We were returning nanval for any instance of invalid being set,
but that is incorrect for VXCVI. This failure can be seen
in the float_convs tests.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-14-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxsnan is computed directly by softfloat,
we don't need to recompute it via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-13-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Fixes a bug in which e.g. XE enabled causes inexact to be raised
before the writeback to the architectural register.
All of the users of GEN_FLOAT_B either set set_fprf, or are one
of the convert-to-integer instructions that require this behaviour.
Split out the two gen_helper_* calls in gen_compute_fprf_float64
and protect only the first with set_fprf.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-12-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxidi, vxzdz, and vxsnan are computed directly by
softfloat, we don't need to recompute them via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-11-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vximz and vxsnan are computed directly by
softfloat, we don't need to recompute them via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-10-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxisi and vxsnan are computed directly by
softfloat, we don't need to recompute them via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-9-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This commit fixes the difference reported in the bug for reserved
bit 52. It does this by adding that bit to the mask of bits that are
not directly altered in the ppc_store_fpscr function (the hardware
used to compare against QEMU was a Power9).
Bits 0 to 27 were also added to the mask, as they are marked as
reserved in the PowerISA. Bit 28 is a reserved extension of the DRN
field (bits 29:31): unlike the other DRN bits it can't be set using
the mtfsfi instruction, so it was also added to the mask.
Although this is a difference reported in the bug, since it's a
reserved bit it may be a "don't care" case, as put in the bug report.
The ISA doesn't explicitly say this bit can't be set, like it does for
FEX and VX, so I'm unsure if this is necessary.
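To make the bit arithmetic concrete, a hedged sketch of such a mask;
PowerISA numbers FPSCR bits from the most significant end of the 64-bit
register, and the macro and variable names below are illustrative, not
necessarily the ones used in QEMU:

    #include <stdint.h>

    /* Bit 'n' in PowerISA (big-endian) numbering of a 64-bit register. */
    #define PPC_BIT(n)  (1ull << (63 - (n)))

    /* Bits 0:28 (reserved plus the reserved DRN extension) and the
     * reserved bit 52 are left untouched by the store. */
    static const uint64_t fpscr_untouched_bits =
        (~0ull << (63 - 28)) | PPC_BIT(52);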
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/266
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20211201163808.440385-4-lucas.araujo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The mtfsf, mtfsfi and mtfsb1 instructions call helper_float_check_status
after updating the value of FPSCR, but helper_float_check_status
looks at fp_status, which isn't updated based on FPSCR and, since
it is reset earlier in the instruction, is always 0.
Because of this, helper_float_check_status would change the FI bit to 0,
as this bit reflects whether the last operation was inexact and
float_flag_inexact is always 0.
These instructions also don't throw exceptions correctly, since
helper_float_check_status throws exceptions based on fp_status.
This commit creates a new helper, helper_fpscr_check_status, that checks
the FPSCR value instead of fp_status and checks for a larger variety of
exceptions than do_float_check_status.
Since fp_status isn't used anymore, gen_reset_fpstatus() was removed.
The hardware used to compare QEMU's behavior to was a Power9.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20211201163808.440385-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When updating the R bit of a PTE, the Hash64 MMU was using a wrong byte
offset, causing the first byte of the adjacent PTE to be corrupted.
This caused a panic when booting FreeBSD, using the Hash MMU.
Fixes: a2dd4e83e7 ("ppc/hash64: Rework R and C bit updates")
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
'tlbivax' is implemented by gen_tlbivax_booke206() via
gen_helper_booke206_tlbivax(). In case the TLB needs to be flushed,
booke206_invalidate_ea_tlb() is called. All of these functions except
booke206_invalidate_ea_tlb() use a 64-bit effective address 'ea'.
booke206_invalidate_ea_tlb() takes a uint32_t 'ea' argument that
truncates the original 'ea' value for apparently no particular reason.
This function retrieves the tlb pointer by calling booke206_get_tlbm(),
which also takes a target_ulong address as a parameter - in this case,
the truncated 'ea' address. All the surrounding logic treats the
effective TLB address as a 64-bit value, aside from the signature of
booke206_invalidate_ea_tlb().
Last but not least, PowerISA 2.07B section 6.11.4.9 [2] makes it
clear that the effective address "EA" is a 64-bit value.
Commit 01662f3e51 introduced this code and no changes have been made
ever since. A user reported a problem with tlbivax [1], stating that
this address truncation was the cause. This same behavior might be the
source of several subtle bugs that were never caught.
For all these reasons, this patch assumes that this address truncation
is the result of a mistake/oversight in the original commit, and changes
the booke206_invalidate_ea_tlb() 'ea' argument to 'vaddr'.
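A small standalone program (not QEMU code) showing the kind of silent
truncation the uint32_t parameter caused:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static void invalidate_u32(uint32_t ea)   { printf("0x%08" PRIx32 "\n", ea); }
    static void invalidate_vaddr(uint64_t ea) { printf("0x%016" PRIx64 "\n", ea); }

    int main(void)
    {
        uint64_t ea = 0x0000000100002000ull;  /* EA above 4 GiB */

        invalidate_u32(ea);    /* prints 0x00002000: the wrong address */
        invalidate_vaddr(ea);  /* prints 0x0000000100002000 */
        return 0;
    }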
[1] https://gitlab.com/qemu-project/qemu/-/issues/52
[2] https://wiki.raptorcs.com/wiki/File:PowerISA_V2.07B.pdf
Fixes: 01662f3e51 ("PPC: Implement e500 (FSL) MMU")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/52
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
These instructions should update the GPR indicated by the field RA
instead of RT. This error caused a regression on Mac OS 9 boot and some
graphical glitches in OS X.
Fixes: a39a106634a9 ("target/ppc: Move load and store floating point instructions to decodetree")
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Implemented the instruction XXSPLTIDP using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-23-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the XXSPLTIW instruction, using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-22-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed the function that handles XXSPLTIB emulation to use
decodetree, while keeping the same logic as before.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-20-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed the function that handles XXSPLTW emulation to use decodetree,
while keeping the same logic.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-19-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions plxvp and pstxvp using decodetree
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-18-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions plxv and pstxv using decodetree
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-17-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions lxvpx and stxvpx using decodetree
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-16-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions lxvp and stxvp using decodetree
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-15-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Moved stxvx and lxvx implementation from the legacy system to
decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-14-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Moved stxv and lxv implementation from the legacy system to
decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-13-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changes get_cpu_vsr to receive a new argument indicating whether the
high or low part of the register is being accessed. This change improves
consistency between the interfaces used to access Vector and VSX
registers and helps to handle endianness in some cases.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-12-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Introduce the macro to centralize checking if the VSX facility is
enabled and handle it correctly.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-11-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instructions:
vextdubvlx: Vector Extract Double Unsigned Byte to VSR using
GPR-specified Left-Index
vextduhvlx: Vector Extract Double Unsigned Halfword to VSR using
GPR-specified Left-Index
vextduwvlx: Vector Extract Double Unsigned Word to VSR using
GPR-specified Left-Index
vextddvlx: Vector Extract Double Doubleword to VSR using
GPR-specified Left-Index
vextdubvrx: Vector Extract Double Unsigned Byte to VSR using
GPR-specified Right-Index
vextduhvrx: Vector Extract Double Unsigned Halfword to VSR using
GPR-specified Right-Index
vextduwvrx: Vector Extract Double Unsigned Word to VSR using
GPR-specified Right-Index
vextddvrx: Vector Extract Double Doubleword to VSR using
GPR-specified Right-Index
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-10-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implements the following PowerISA v3.1 instructions:
vinsbvlx: Vector Insert Byte from VSR using GPR-specified Left-Index
vinshvlx: Vector Insert Halfword from VSR using GPR-specified
Left-Index
vinswvlx: Vector Insert Word from VSR using GPR-specified Left-Index
vinsbvrx: Vector Insert Byte from VSR using GPR-specified Right-Index
vinshvrx: Vector Insert Halfword from VSR using GPR-specified
Right-Index
vinswvrx: Vector Insert Word from VSR using GPR-specified Right-Index
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-8-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implements the following PowerISA v3.1 instructions:
vinsw: Vector Insert Word from GPR using immediate-specified index
vinsd: Vector Insert Doubleword from GPR using immediate-specified
index
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-7-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implements the following PowerISA v3.1 instructions:
vinsblx: Vector Insert Byte from GPR using GPR-specified Left-Index
vinshlx: Vector Insert Halfword from GPR using GPR-specified Left-Index
vinswlx: Vector Insert Word from GPR using GPR-specified Left-Index
vinsdlx: Vector Insert Doubleword from GPR using GPR-specified
Left-Index
vinsbrx: Vector Insert Byte from GPR using GPR-specified Right-Index
vinshrx: Vector Insert Halfword from GPR using GPR-specified
Right-Index
vinswrx: Vector Insert Word from GPR using GPR-specified Right-Index
vinsdrx: Vector Insert Doubleword from GPR using GPR-specified
Right-Index
The helpers and do_vinsx receive i64 to allow code sharing with the
future implementation of Vector Insert from VSR using GPR Index.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-6-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
pdepd and pextd helpers are moved out of #ifdef (TARGET_PPC64) to allow
them to be reused as GVecGen3.fni8.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-4-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The signature of do_cntzdm is changed to allow reuse as GVecGen3i.fni8.
The method is also moved out of #ifdef TARGET_PPC64, as PowerISA doesn't
say vclzdm and vctzdm are 64-bit only.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
There's no reason to keep vector-impl.c.inc separate from
vmx-impl.c.inc. Additionally, let GVec handle the multiple calls to
helper_cfuged for us.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-2-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
ddedpd: DFP Decode DPD To BCD
ddedpdq: DFP Decode DPD To BCD Quad
denbcd: DFP Encode BCD To DPD
denbcdq: DFP Encode BCD To DPD Quad
dscli: DFP Shift Significand Left Immediate
dscliq: DFP Shift Significand Left Immediate Quad
dscri: DFP Shift Significand Right Immediate
dscriq: DFP Shift Significand Right Immediate Quad
Also deleted dfp-ops.c.inc, now that all PPC DFP instructions were
moved to decodetree.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-16-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
dctdp: DFP Convert To DFP Long
dctqpq: DFP Convert To DFP Extended
drsp: DFP Round To DFP Short
drdpq: DFP Round To DFP Long
dcffix: DFP Convert From Fixed
dcffixq: DFP Convert From Fixed Quad
dctfix: DFP Convert To Fixed
dctfixq: DFP Convert To Fixed Quad
dxex: DFP Extract Biased Exponent
dxexq: DFP Extract Biased Exponent Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-15-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
dquai: DFP Quantize Immediate
dquaiq: DFP Quantize Immediate Quad
drintx: DFP Round to FP Integer With Inexact
drintxq: DFP Round to FP Integer With Inexact Quad
drintn: DFP Round to FP Integer Without Inexact
drintnq: DFP Round to FP Integer Without Inexact Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-13-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
dtstdc: DFP Test Data Class
dtstdcq: DFP Test Data Class Quad
dtstdg: DFP Test Data Group
dtstdgq: DFP Test Data Group Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-10-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Before moving the existing DFP instructions to decodetree, drop the
nip update that shouldn't be done for these instructions.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-9-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
dctfixqq: DFP Convert To Fixed Quadword Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-8-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
dcffixqq: DFP Convert From Fixed Quadword Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-5-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Fernando Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-4-luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move REQUIRE_ALTIVEC to translate.c and rename it to REQUIRE_VECTOR.
Signed-off-by: Bruno Larsen <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Fernando Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211029192417.400707-3-luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
cnttzdm: Count Trailing Zeros Doubleword Under Bit Mask
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-9-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
cntlzdm: Count Leading Zeros Doubleword Under Bit Mask
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-8-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-5-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move load floating point instructions (lfs, lfsu, lfsx, lfsux, lfd, lfdu, lfdx, lfdux)
and store floating point instructions (stfs, stfsu, stfsx, stfsux, stfd, stfdu, stfdx,
stfdux) from legacy system to decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-4-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move resolve_PLS_D from fixedpoint-impl.c.inc to translate.c
because this way the function can be used not only by fixed
point instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle <phervalle@gmail.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The do_ea_calc function calculates the effective address (EA)
according to PowerISA v3.1. With that, the part of do_ldst() that
calculates the EA was replaced by a call to this new function.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle (pherde) <phervalle@gmail.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-2-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is not used by, nor required by, user-only.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We ought to have been recording the virtual address for reporting
to the guest trap handler.
Cc: qemu-ppc@nongnu.org
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
By doing this while sending the exception, we will have already
done the unwinding, which makes the ppc_cpu_do_unaligned_access
code a bit cleaner.
Update the comment about the expected instruction format.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Record DAR, DSISR, and exception_index. That last means
that we must exit to cpu_loop ourselves, instead of letting
exception_index be overwritten.
This is exactly what the user-mode ppc_cpu_tlb_fill does,
so simply rename it as ppc_cpu_record_sigsegv.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
I noticed -cpu help printing enough trailing spaces to make the output
at least 84 characters wide. Looks ugly unless the terminal is wider.
Ugly or not, trailing spaces are stupid.
The culprit is this line in x86_cpu_list_entry():
qemu_printf("x86 %-20s %-58s\n", name, desc);
This prints a string left-justified in a minimum field width right
before a newline. Change it to
qemu_printf("x86 %-20s %s\n", name, desc);
which avoids the trailing spaces and is simpler to boot.
A search for the pattern with "git-grep -E '%-[0-9]+s\\n'" found a few
more instances. Change them similarly.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20211009152401.2982862-1-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
These will be used to implement new decimal floating point
instructions from Power ISA 3.1.
The remainder is now returned directly by divu128/divs128,
freeing up phigh to receive the high 64 bits of the quotient.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-4-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In preparation for changing the divu128/divs128 implementations
to allow for quotients larger than 64 bits, move the div-by-zero
and overflow checks to the callers.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-2-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Problem state needs to be able to read and write the PMU counters,
otherwise it won't be aware of any sampling result that the PMU produces
after a Perf run.
This patch does that in a similar fashion as already done in the
previous patches. PMCs 5 and 6 have a special condition, aside from the
constraints that are common with PMCs 1-4, where they are not part of the
PMU if MMCR0_PMCC is 0b11.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-5-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Similar to the previous patch, let's add problem state read/write access to
the MMCR2 SPR, which is also a group A PMU SPR that needs to be filtered
to be read/written by userspace.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-4-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Userspace needs access to the PMU SPRs to be able to operate the PMU.
One such SPR is MMCR0.
MMCR0, as defined by PowerISA v3.1, is classified as a 'group A' PMU
register. This class of registers has common read/write rules that are
governed by MMCR0 PMCC bits. MMCR0 is also not fully exposed to problem
state: only MMCR0_FC, MMCR0_PMAO and MMCR0_PMAE bits are
readable/writable in this case.
This patch exposes MMCR0 to userspace by doing the following:
- two new callbacks, spr_read_MMCR0_ureg() and spr_write_MMCR0_ureg(),
are added to be used as problem state read/write callbacks of UMMCR0.
Both callbacks filter the bits userspace is able to read/write by
using MMCR0_UREG_MASK (see the sketch after this description);
- problem state access control is done by the spr_groupA_read_allowed()
and spr_groupA_write_allowed() helpers. These helpers read the
current PMCC bits from DisasContext and check whether the read/write
MMCR0 operation is valid or not;
- to avoid putting exclusive PMU logic into the already loaded
translate.c file, let's create a new 'power8-pmu-regs.c.inc' file that
will hold all the spr_read/spr_write functions of PMU registers.
The 'power8' in the name of this new file hints at the oldest chip for
which the added PMU logic has proven support. The code has been tested
with the IBM POWER chip family, POWER8 being the oldest version tested.
This
doesn't mean that the PMU logic will break with any other PPC64 chip
that implements Book3s, but rather that we can't assert that it works
properly with any Book3s compliant chip.
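A minimal sketch of what the problem state read path could look like,
using QEMU's usual translate-time helpers; the body of the real
spr_read_MMCR0_ureg() may differ, and the PMCC-based access check is
left out here:

    /* Sketch only: UMMCR0 reads see MMCR0 filtered down to the
     * user-visible bits (FC, PMAO, PMAE); the real callback also applies
     * the MMCR0[PMCC] access rules first. */
    static void spr_read_MMCR0_ureg(DisasContext *ctx, int gprn, int sprn)
    {
        TCGv t0 = tcg_temp_new();

        gen_load_spr(t0, SPR_POWER_MMCR0);
        tcg_gen_andi_tl(t0, t0, MMCR0_UREG_MASK);
        tcg_gen_mov_tl(cpu_gpr[gprn], t0);
        tcg_temp_free(t0);
    }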
CC: Gustavo Romero <gustavo.romero@linaro.org>
Signed-off-by: Gustavo Romero <gromero@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-3-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're going to add PMU support for TCG PPC64 chips, based on IBM POWER8+
emulation and following PowerISA v3.1. This requires several PMU related
registers to be exposed to userspace (problem state). PowerISA v3.1
dictates that the PMCC bits of the MMCR0 register control the level of
access of the PMU registers to problem state.
This patch starts things off by exposing both PMCC bits to hflags,
allowing us to access them via DisasContext in the read/write callbacks
that we're going to add next.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-2-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PowerISA says that mtmsr[d] "does not alter MSR[HV], MSR[S], MSR[ME], or
MSR[LE]", but the current code only filters the GPR-provided value if
L=1. This behavior caused some problems in FreeBSD, and a build option
was added to work around the issue [1], but it seems that the bug was
not reported in launchpad/gitlab. This patch addresses the issue in
QEMU, so the option on FreeBSD should no longer be required.
[1] https://cgit.freebsd.org/src/commit/?id=4efb1ca7d2a44cfb33d7f9e18bd92f8d68dcfee0
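A standalone sketch of the idea behind the fix (the function name and
the mask parameter are illustrative; QEMU has its own MSR bit
definitions):

    #include <stdint.h>

    /* Bits the instruction must never alter (HV, S, ME, LE) are taken
     * from the old MSR regardless of the L field; everything else comes
     * from the GPR operand. */
    static uint64_t mtmsr_new_value(uint64_t old_msr, uint64_t gpr,
                                    uint64_t keep_mask)
    {
        return (old_msr & keep_mask) | (gpr & ~keep_mask);
    }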
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211015181940.197982-1-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can't read env->xer directly, as it does not contain some bits of
XER. Instead, we should have a callback that uses cpu_read_xer to read
the complete register.
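For illustration, a standalone sketch of why env->xer alone is not
enough; the bit positions follow the usual XER layout but the function
is simplified (the 32-bit carry/overflow flags are ignored), so treat
the details as assumptions rather than the real cpu_read_xer():

    #include <stdint.h>

    /* Reassemble the architectural XER from the separately stored flags. */
    static uint64_t read_xer_sketch(uint64_t xer, int so, int ov, int ca)
    {
        return xer | ((uint64_t)so << 31)   /* XER[SO] */
                   | ((uint64_t)ov << 30)   /* XER[OV] */
                   | ((uint64_t)ca << 29);  /* XER[CA] */
    }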
Fixes: da91a00f19 ("target-ppc: Split out SO, OV, CA fields from XER")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211014223234.127012-5-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
env->xer doesn't hold some bits of XER, like OV and CA. To write the
complete register in the core dump, we should read the XER value with
cpu_read_xer.
Reported-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Fixes: da91a00f19 ("target-ppc: Split out SO, OV, CA fields from XER")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211014223234.127012-4-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The value of XER is split in multiple fields of CPUPPCState, like
env->xer and env->so. To get/set the whole register from gdb, we should
use cpu_read_xer/cpu_write_xer.
Fixes: da91a00f19 ("target-ppc: Split out SO, OV, CA fields from XER")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211014223234.127012-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The mask of the Byte-Reverse Halfword opcode is a read-only
constant. We can avoid using a TCG temporary by moving the
mask to the constant pool.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211003141711.3673181-3-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Avoid using TCG temporaries for the -1 and 8 constant values.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211003141711.3673181-2-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
GDB single-stepping is now handled generically.
Reuse gen_debug_exception to handle architectural debug exceptions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The previous placement in tcg/tcg.h was not logical.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
PowerISA v3.0B made tlbie[l] hypervisor privileged when PSR=0 and HR=1.
To allow the check at translation time, we'll use the HR bit of LPCR to
check the MMU mode instead of the PATE.HR.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210917114751.206845-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a Host Radix field (hr) in DisasContext with LPCR[HR] value to allow
us to decide between Radix and HPT while validating instructions
arguments. Note that PowerISA v3.1 does not require LPCR[HR] and PATE.HR
to match if the thread is in ultravisor/hypervisor real addressing mode,
so ctx->hr may be invalid if ctx->hv and ctx->dr are set.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20210917114751.206845-2-matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
According to the ISA, CR should be set based on the source value, and
not on the packed decimal result.
The way this was implemented would cause GT, LT and EQ to be set
incorrectly when the source value was too large and the 31 least
significant digits of the packed decimal result ended up being all zero.
This would happen for source values of +/-10^31, +/-10^32, etc.
The new implementation fixes this and also skips the result calculation
altogether in case of src overflow.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Message-Id: <20210823150235.35759-1-luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
While we may have had some thought of allowing system-mode
to return from this hook, we have no guests that require this.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is nothing target specific about this. The implementation
is host specific, but the declaration is 100% common.
Reviewed-By: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Restrict cpu_exec_interrupt() and its callees to sysemu.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20210911165434.531552-18-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
[rth: Split out of a larger patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
As vector registers are stored in host endianness, we shouldn't swap
their 64-bit elements in user mode. Add a 16-byte case in
ppc_maybe_bswap_register to handle the reordering of elements in softmmu
and remove avr_need_swap which is now unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826145656.2507213-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These helpers shouldn't depend on the host endianness, as they only use
shifts, ands, and int128_* methods.
Fixes: 60caf2216b ("target-ppc: add vextu[bhw][lr]x instructions")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826141446.2488609-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Hypervisor Decrementer exception should not be generated while the
CPU is in power-saving mode (see cpu_ppc_hdecr_excp()). However,
discarding the exception before entering the power-saving mode is
wrong since we would lose a previously generated HDEC.
Fixes: 4b236b621b ("ppc: Initial HDEC support")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The POWER10 DD2 CPU adds an extra LPCR[HAIL] bit. DD1 doesn't have
HAIL, but since it does not break the modeling and we don't plan
to support DD1, modify the LPCR mask of all the POWER10 family.
Setting the HAIL bit is a requirement to support the scv instruction
on PowerNV POWER10 platforms since glibc-2.33.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-2-clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Moved store_40x_sler from mmu_common.c to helper_regs.c, as it is
a function that stores a value in a special purpose register, so
moving it to a file focused on special register manipulation
is more appropriate.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-4-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ppc_store_sdr1 was originally in mmu_helper.c and was moved as part of
the patches to enable the disable-tcg option. Now it's being moved
back to a file that will be compiled with that option.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-3-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Divided mmu_helper.c into two files: functions inside #ifdef CONFIG_SOFTMMU
stayed in mmu_helper.c, other functions moved to mmu_common.c. Updated
meson.build to compile mmu_common.c and only compile mmu_helper.c when
CONFIG_TCG is set.
Moved function declarations, #define and structs used by both files to
internal.h except for functions that use structures defined in cpu.h,
those were moved to cpu.h.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-2-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
arch_init.h only defines the QEMU_ARCH_* enumeration and the
arch_type global. Don't include it in files that don't use those.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210730105947.28215-8-peter.maydell@linaro.org
In commit 8f0a4b6a9b, we started to require L=0 for ppc32 to match what
The Programming Environments Manual says:
"For 32-bit implementations, the L field must be cleared, otherwise
the instruction form is invalid."
The stricter behavior, however, broke AROS boot on sam460ex, which is a
regression from 6.0. This patch partially reverts the change, raising
the exception only for CPUs known to require L=0 (e500 and e500mc) and
logging a guest error for other cases.
Both behaviors are acceptable by the PowerISA, which allows "the system
illegal instruction error handler to be invoked or yield boundedly
undefined results."
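A hedged sketch of the relaxed check; the function and flag names are
assumptions made for the example, only gen_invalid(), qemu_log_mask()
and LOG_GUEST_ERROR are existing QEMU facilities:

    /* Sketch only: on 32-bit CPUs, treat L=1 as an invalid form only
     * where the CPU is known to require L=0 (e500/e500mc); otherwise
     * just log a guest error and carry on. */
    static bool check_cmp_l_field(DisasContext *ctx, bool l_field,
                                  bool is_64bit_cpu, bool requires_l_zero)
    {
        if (l_field && !is_64bit_cpu) {
            if (requires_l_zero) {
                gen_invalid(ctx);   /* invalid instruction form */
                return false;
            }
            qemu_log_mask(LOG_GUEST_ERROR,
                          "Invalid form of CMP/CMPL/CMPI/CMPLI with L=1\n");
        }
        return true;
    }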
Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Fixes: 8f0a4b6a9b ("target/ppc: Move cmp/cmpi/cmpl/cmpli to decodetree")
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210720135507.2444635-1-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The hook is now unused, with breakpoints checked outside translation.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Always provide the atomic interface using TCGMemOpIdx oi
and uintptr_t retaddr. Rename from helper_* to cpu_* so
as to (mostly) match the exec/cpu_ldst.h functions, and
to emphasize that they are not callable from TCG directly.
Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The root trace-events only declares a single TCG event:
$ git grep -w tcg trace-events
trace-events:115:# tcg/tcg-op.c
trace-events:137:vcpu tcg guest_mem_before(TCGv vaddr, uint16_t info) "info=%d", "vaddr=0x%016"PRIx64" info=%d"
and only a tcg/tcg-op.c uses it:
$ git grep -l trace_guest_mem_before_tcg
tcg/tcg-op.c
therefore it is pointless to include "trace-tcg.h" in each target
(because it is not used). Remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210629050935.2570721-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add a target-specific Kconfig. We need the definitions in Kconfig so
the minikconf tool can verify they exist. However CONFIG_FOO is only
enabled for target foo via the meson.build rules.
Two architectures have a particularity: ARM and MIPS. As their
translators have been split, you can potentially build a plain 32-bit
build along with a 64-bit version including the 32-bit subset.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210131111316.232778-6-f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707131744.26027-2-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If KVM_CAP_RPT_INVALIDATE KVM capability is enabled, then
- indicate the availability of H_RPT_INVALIDATE hcall to the guest via
ibm,hypertas-functions property.
- Enable the hcall
Both the above are done only if the new sPAPR machine capability
cap-rpt-invalidate is set.
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Message-Id: <20210706112440.1449562-3-bharata@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The function ppc_tlb_invalid_all is no longer compiled in a TCG-less
environment, and the call to that function has been disabled in that
situation.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210708164957.28096-2-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Change the assert in ppc_store_sdr1() to allow vhyp to be set on CPUs
without HV bit. This allows using the vhyp interface for firmware
emulation on pegasos2.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <21c7745aabbb68fcc50bb2ffaf16b939ba21261c.1624811233.git.balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
MSR is a 32-bit register in BookE and there is no mtmsrd instruction.
Cc: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20210706051321.609046-1-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed hash32 address translation to use the supplied mmu_idx instead
of what was stored in the MSR, both for parity with radix64 (which
already does this) and for conceptual correctness: all the relevant
functions should always use the supplied mmu_idx, as there is no
guarantee that the mmu_idx stored in the CPU variable will not desync.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20210706150316.21005-3-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
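A minimal sketch of the difference, assuming an "mmu_idx 0 == user mode" convention that is purely illustrative; the real hash32 code works on CPUPPCState with the real MSR bit definitions.

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_PR_SKETCH (1u << 14)   /* placeholder for the problem-state bit */

    typedef struct {
        uint32_t msr;                  /* may lag behind the access being made */
    } CpuSketch;

    /* Before: privilege derived from the MSR snapshot in the CPU state. */
    bool hash32_is_user_before(const CpuSketch *cpu)
    {
        return (cpu->msr & MSR_PR_SKETCH) != 0;
    }

    /* After: privilege derived from the mmu_idx the caller passes in, so it
     * cannot desync from the access actually being translated. */
    bool hash32_is_user_after(int mmu_idx)
    {
        return mmu_idx == 0;           /* illustrative convention only */
    }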
Introduce a header common to all BookS MMUs that can hold code shared
by the hash32 and book3s-v3 MMUs.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210706150316.21005-2-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed hash64 address translation to use the supplied mmu_idx instead
of using the one stored in the msr, for parity purposes (other book3s
MMUs already use it).
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210628133610.1143-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This commit attempts to fix a technical hiccup first mentioned by Richard
Henderson in
https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06247.html
To summarize the hiccup: when radix-style MMUs are translating an
address, they might need to call a second level of translation, with
hypervisor privileges. However, the way it was being done up until
this point meant that the second level translation had the same
privileges as the first level. It could lead to a bug in address
translation when running KVM inside a TCG guest, but this bug was never
experienced by users, so this isn't as much a bug fix as it is a
correctness cleanup.
This patch attempts that cleanup by making radix64_*_xlate functions
receive the mmu_idx, and passing one with the correct permission for the
second level translation.
The mmuidx macros added by this patch are only correct for non-BookE
MMUs, because BookE-style MMUs set the IS and DS bits inverted and there
might be other subtle differences. However, there do not seem to be any
BookE CPUs with radix-style MMUs, so we left a comment there to
document the issue, in case such a machine does exist and was missed.
As part of this cleanup, we now need to send the correct mmu_idx
when calling get_phys_page_debug; otherwise we might not be able to see
the memory that the CPU could access.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210628133610.1143-2-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
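A sketch of the mmu_idx plumbing described above, with the page-table walks stubbed out; the index names and the identity-mapped walks are assumptions made only for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    enum { IDX_GUEST_USER = 0, IDX_GUEST_KERNEL = 1, IDX_HYPERVISOR = 2 };

    static bool walk_process_scope(uint64_t eaddr, int mmu_idx, uint64_t *gpa)
    {
        (void)mmu_idx;
        *gpa = eaddr;                  /* stub: pretend identity-mapped */
        return true;
    }

    static bool walk_partition_scope(uint64_t gpa, int mmu_idx, uint64_t *raddr)
    {
        (void)mmu_idx;
        *raddr = gpa;                  /* stub */
        return true;
    }

    /* The point of the cleanup: the second (partition-scope) level runs with
     * hypervisor privilege, not with whatever mmu_idx the guest access used. */
    bool radix_xlate_sketch(uint64_t eaddr, int guest_mmu_idx, uint64_t *raddr)
    {
        uint64_t gpa;

        if (!walk_process_scope(eaddr, guest_mmu_idx, &gpa)) {
            return false;
        }
        return walk_partition_scope(gpa, IDX_HYPERVISOR, raddr);
    }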
This function is used by TCGCPUOps, and is thus TCG specific.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-10-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Create one common dispatch for all of the ppc_*_xlate functions.
Use ppc64_v3_radix to directly dispatch between ppc_radix64_xlate
and ppc_hash64_xlate.
Remove the separate *_handle_mmu_fault and *_get_phys_page_debug
functions, using common code for ppc_cpu_tlb_fill and
ppc_cpu_get_phys_page_debug.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-9-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
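A sketch of what a single dispatch point looks like; the enum values and stub functions are placeholders, while the real code dispatches between ppc_radix64_xlate, ppc_hash64_xlate and the other per-MMU entry points.

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { MMU_RADIX64_S, MMU_HASH64_S, MMU_HASH32_S } MmuModelSketch;

    static bool radix64_xlate_stub(uint64_t ea, int idx, uint64_t *ra)
    { (void)idx; *ra = ea; return true; }
    static bool hash64_xlate_stub(uint64_t ea, int idx, uint64_t *ra)
    { (void)idx; *ra = ea; return true; }
    static bool hash32_xlate_stub(uint64_t ea, int idx, uint64_t *ra)
    { (void)idx; *ra = ea; return true; }

    /* One entry point replaces the separate *_handle_mmu_fault and
     * *_get_phys_page_debug paths; both tlb_fill and the debug lookup can
     * funnel through this dispatch. */
    bool ppc_xlate_sketch(MmuModelSketch model, uint64_t eaddr, int mmu_idx,
                          uint64_t *raddr)
    {
        switch (model) {
        case MMU_RADIX64_S:
            return radix64_xlate_stub(eaddr, mmu_idx, raddr);
        case MMU_HASH64_S:
            return hash64_xlate_stub(eaddr, mmu_idx, raddr);
        case MMU_HASH32_S:
        default:
            return hash32_xlate_stub(eaddr, mmu_idx, raddr);
        }
    }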
Mirror the interface of ppc_radix64_xlate (mostly), putting all
of the logic for older mmu translation into a single entry point.
For BookE, we need to add mmu_idx to the xlate-style interface.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-8-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mirror the interface of ppc_radix64_xlate, putting all of
the logic for hash32 translation into a single entry point.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-7-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mirror the interface of ppc_radix64_xlate, putting all of
the logic for hash64 translation into a single function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-6-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of returning non-zero for failure, return true for success.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-5-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This removes some incomplete duplication between
ppc_radix64_handle_mmu_fault and ppc_radix64_get_phys_page_debug.
The former was correct wrt SPR_HRMOR and the latter was not.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These changes were waiting until we didn't need to match
the function type of PowerPCCPUClass.handle_mmu_fault.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-3-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead, use a switch on env->mmu_model. This avoids some
replicated information in cpu setup.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-2-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This isn't used anymore.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-3-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PowerPC CPUs use big endian by default but starting with POWER7,
server grade CPUs use the ILE bit of the LPCR special purpose
register to decide on the endianness to use when handling
interrupts. This gives a clue to QEMU on the endianness the
guest kernel is running, which is needed when generating an
ELF dump of the guest or when delivering an FWNMI machine
check interrupt.
Commit 382d2db62b ("target-ppc: Introduce callback for interrupt
endianness") added a class method to PowerPCCPUClass to modelize
this : default implementation returns a fixed "big endian" value,
while POWER7 and newer do the LPCR_ILE check. This is suboptimal
as it forces to implement the method for every new CPU family, and
it is very unlikely that this will ever be different than what we
have today.
We basically only have three cases to consider:
a) CPU doesn't have an LPCR => big endian
b) CPU has an LPCR but doesn't support the ILE bit => big endian
c) CPU has an LPCR and supports the ILE bit => little or big endian
Instead of a class method, introduce an inline helper that checks the
ILE bit in the LPCR_MASK to decide on the outcome. The new helper is
phrased in terms of little endian instead of big endian, which allows
us to drop a ! operator in ppc_cpu_do_fwnmi_machine_check().
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-2-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
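A sketch of how such a helper can cover all three cases in one expression; the bit position and field names are illustrative stand-ins, not the architected LPCR layout.

    #include <stdbool.h>
    #include <stdint.h>

    #define LPCR_ILE_SKETCH  (1ull << 25)   /* placeholder bit position */

    typedef struct {
        uint64_t lpcr;        /* current LPCR value */
        uint64_t lpcr_mask;   /* bits this CPU family actually implements */
    } CpuSketch;

    /* a) no LPCR        -> lpcr_mask is 0       -> big endian
     * b) LPCR, no ILE   -> ILE not in the mask  -> big endian
     * c) LPCR with ILE  -> follow the ILE bit   -> little or big endian */
    bool interrupts_little_endian_sketch(const CpuSketch *cpu)
    {
        return (cpu->lpcr_mask & LPCR_ILE_SKETCH) &&
               (cpu->lpcr & LPCR_ILE_SKETCH);
    }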
We will shortly be interested in distinguishing pointers
from integers in the helper's declaration, as well as a
true void return. We currently have two parallel 1-bit
fields; merge them and expand to a 3-bit field.
Our current maximum is 7 helper arguments, which plus the return
value makes 8 * 3 = 24 bits used within the uint32_t typemask.
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
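A worked example of the 3-bit-per-slot layout described above (slot 0 for the return value, slots 1..7 for the arguments, 8 * 3 = 24 bits of the 32-bit typemask); the kind values are illustrative, not the real TCG ones.

    #include <assert.h>
    #include <stdint.h>

    enum { KIND_VOID = 0, KIND_I32 = 1, KIND_I64 = 2, KIND_PTR = 3 };

    static uint32_t typemask_set(uint32_t mask, unsigned slot, unsigned kind)
    {
        assert(slot < 8 && kind < 8);
        mask &= ~(0x7u << (slot * 3));
        return mask | (kind << (slot * 3));
    }

    static unsigned typemask_get(uint32_t mask, unsigned slot)
    {
        assert(slot < 8);
        return (mask >> (slot * 3)) & 0x7;
    }

    int main(void)
    {
        /* A helper returning void and taking (pointer, i64). */
        uint32_t m = 0;
        m = typemask_set(m, 0, KIND_VOID);
        m = typemask_set(m, 1, KIND_PTR);
        m = typemask_set(m, 2, KIND_I64);
        assert(typemask_get(m, 1) == KIND_PTR);
        assert(typemask_get(m, 0) == KIND_VOID);
        return 0;
    }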
Commit 6086c75 (target/ppc: Replace POWERPC_EXCP_BRANCH with
DISAS_NORETURN) broke the generation of exceptions when
CPU_SINGLE_STEP or CPU_BRANCH_STEP were set, due to nip always being
reset to the address of the current instruction.
This fix leaves nip untouched when generating the exception.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reported-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210602125103.332793-1-luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Additionally, REQUIRE_64BIT when L=1 to match what is specified in The
Programming Environments Manual:
"For 32-bit implementations, the L field must be cleared, otherwise the
instruction form is invalid."
Some CPUs are known to deviate from this specification by ignoring the
L bit [1]. The stricter behavior, however, can help users that test
software with qemu, making it more likely to detect bugs that would
otherwise be silent.
If deemed necessary, a future patch can adapt this behavior based on
the specific CPU model.
[1] The 601 manual is the only one I've found that explicitly states
that the L bit is ignored, but we also observe this behavior in a 7447A
v1.2.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210601193528.2533031-15-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[dwg: Corrected whitespace error]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These are all connected by macros in the legacy decoding.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210601193528.2533031-9-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These are all connected by macros in the legacy decoding.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210601193528.2533031-7-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The illegal suffix behavior matches what was observed in a
POWER10 DD2.0 machine.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210601193528.2533031-6-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With prefixed instructions, the number of instructions
remaining until the page crossing is no longer constant.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210601193528.2533031-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
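A sketch of why the check can no longer be a precomputed constant: with 8-byte prefixed instructions the page-crossing test has to look at the actual length of each instruction. The page size constant and names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE_SKETCH 4096u

    /* With only 4-byte instructions, "instructions until the page crossing"
     * is a constant computed once per translation block; with prefixed
     * instructions it must be re-evaluated per instruction. */
    bool insn_crosses_page(uint64_t pc, unsigned insn_len /* 4 or 8 */)
    {
        uint64_t left_in_page = PAGE_SIZE_SKETCH - (pc & (PAGE_SIZE_SKETCH - 1));
        return insn_len > left_in_page;
    }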
These will be used by the decodetree trans_* functions
to early-exit when the instruction set is not enabled.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210601193528.2533031-2-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
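A sketch of the early-exit pattern the macros enable in the decodetree trans_* functions; the macro, flag and function names here are placeholders for the real QEMU ones.

    #include <stdbool.h>

    typedef struct {
        unsigned insns_flags;          /* bitmask of enabled instruction sets */
    } DisasSketch;

    static bool gen_program_exception(DisasSketch *ctx) { (void)ctx; return true; }

    #define REQUIRE_INSNS_FLAGS_SKETCH(CTX, FLAG)         \
        do {                                              \
            if (!((CTX)->insns_flags & (FLAG))) {         \
                return gen_program_exception(CTX);        \
            }                                             \
        } while (0)

    #define FLAG_EXAMPLE_SKETCH (1u << 0)

    bool trans_example(DisasSketch *ctx)
    {
        REQUIRE_INSNS_FLAGS_SKETCH(ctx, FLAG_EXAMPLE_SKETCH);
        /* ... emit the instruction only if the flag check passed ... */
        return true;
    }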
The only difference in the code for Instruction fetch, Data load and
Data store TLB miss errors is that when called from an unsupported
processor (i.e. not one of 602, 603, 603e, G2, 7x5 or 74xx), they
abort with a message specific to the operation type (insn fetch, data
load/store).
If a processor does not support those interrupts we should not be
registering them in init_excp_<proc> to begin with, so that error
message would never be used.
I'm leaving the message in for completeness, but making it generic and
consolidating the three interrupts into the same case statement body.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20210601214649.785647-4-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
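A sketch of the consolidation: the three interrupt types fall through into one case body with a single generic message; the constants and helper are placeholders.

    #include <stdio.h>
    #include <stdlib.h>

    enum { EXCP_IFTLB_S, EXCP_DLTLB_S, EXCP_DSTLB_S, EXCP_OTHER_S };

    static int cpu_has_soft_tlb_miss(int family) { return family != 0; }

    void handle_excp_sketch(int excp, int family)
    {
        switch (excp) {
        case EXCP_IFTLB_S:   /* instruction fetch TLB miss */
        case EXCP_DLTLB_S:   /* data load TLB miss */
        case EXCP_DSTLB_S:   /* data store TLB miss */
            if (!cpu_has_soft_tlb_miss(family)) {
                /* One generic message instead of three per-operation ones. */
                fprintf(stderr, "software TLB miss on an unsupported CPU\n");
                abort();
            }
            /* ... common software TLB miss handling ... */
            break;
        default:
            break;
        }
    }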
This function is identical to dump_syscall, so use the latter for
system call vectored as well.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20210601214649.785647-3-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Followed the suggested overhaul to store_fpscr logic, and moved it to
cpu.c where it can be accessed in !TCG builds.
The overhaul was suggested because storing a value to fpscr should
never raise an exception, so we could remove all the mess that happened
with POWERPC_EXCP_FP.
We also moved fpscr_set_rounding_mode into cpu.c, since it is needed
when a value for the fpscr is stored directly.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210527163522.23019-1-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
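A sketch of the idea: storing into fpscr is just a mask-and-assign followed by a rounding-mode refresh, so it never raises an exception and can live in common (non-TCG) code. Field and mask names are illustrative.

    #include <stdint.h>

    #define FPSCR_RN_MASK_SKETCH 0x3ull   /* placeholder for the RN field */

    typedef struct {
        uint64_t fpscr;
        int rounding_mode;                /* cached softfloat rounding mode */
    } CpuFpSketch;

    static void fpscr_set_rounding_mode_sketch(CpuFpSketch *env)
    {
        env->rounding_mode = (int)(env->fpscr & FPSCR_RN_MASK_SKETCH);
    }

    void store_fpscr_sketch(CpuFpSketch *env, uint64_t val, uint64_t mask)
    {
        env->fpscr = (env->fpscr & ~mask) | (val & mask);
        fpscr_set_rounding_mode_sketch(env);   /* no exception path here */
    }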
This feature will no longer be useful as ppc moves to using decodetree
for TCG. And building with it enabled is no longer possible, due to
changes in opc_handler_t. Since the last commit that mentions it
happened in 2014, I think it is safe to remove it.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210531145629.21300-5-bruno.larsen@eldorado.org.br>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since both PPC_DO_STATISTICS and PPC_DUMP_CPU are obsoleted as
target/ppc moves to decodetree, we can remove this ifdef-based decision
tree and keep only what is now the standard option for the macro.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210531145629.21300-4-bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Removed the commented-out definition and all ifdefs relating to
PPC_DUMP_STATISTICS, as it is hardly ever used.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210526202104.127910-4-bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This function requires source code modification to be useful, which means
it probably is not used often, and the move to using decodetree means
the statistics won't even be collected anymore.
Also removed the setting of dump_statistics in ppc_cpu_realize, since it
was only useful in conjunction with ppc_cpu_dump_statistics.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210526202104.127910-3-bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Updated the build file to not compile some sources that are unnecessary
if TCG is disabled.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210525115355.8254-5-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Created a file with the stubs needed to compile with TCG disabled. The
*_ppc_opcodes stubs were created so that cpu_init.c needs a few fewer
ifdefs, since they are not needed there. The softmmu_resize_hpt_* stubs
have to be created because the compiler can't automatically know they
aren't used, but they should never be reached.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210525115355.8254-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
excp_helper.c, mmu-hash64.c and mmu_helper.c have some function
declarations that are TCG-only and couldn't be easily moved to a
TCG-only file, so ifdefs were added around them.
We also needed ifdefs around some header files because helper-proto.h
includes trace/generated-helpers.h, which is never created when building
without TCG, and cpu_ldst.h includes tcg/tcg.h, whose containing folder
is not included as a -iquote. As a future cleanup, we could change that
part of the configuration script to add those.
cpu_init.c also had a callback definition that is TCG only and could be
removed as part of a future cleanup (all the dump_statistics part is
almost never used and will become obsolete as we transition to using
decodetree).
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210525115355.8254-3-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The write callback decision when registering the MAS SPR has been turned
into a ternary operation, rather than an if-then-else block.
This was done because when building without TCG, even though the
compiler will optimize away the pointers to spr_write_generic*, it
doesn't optimize away the decision and assignment to the local pointer,
creating compiler errors. This cleanup looked better than using ifdefs,
so we decided to go with it.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210525115355.8254-2-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
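A sketch of the shape of the change; the callback names and the 64-bit condition are placeholders, the point is only that the choice collapses into a single ternary expression the compiler can fold.

    typedef void (*spr_write_cb_sketch)(int spr_num);

    static void write_generic_sketch(int spr_num)   { (void)spr_num; }
    static void write_generic32_sketch(int spr_num) { (void)spr_num; }

    spr_write_cb_sketch pick_mas_write_cb(int is_64bit)
    {
        /* before:
         *     if (is_64bit) { cb = write_generic;   }
         *     else          { cb = write_generic32; }
         */
        return is_64bit ? write_generic_sketch : write_generic32_sketch;
    }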
ppc_store_ptcr, defined in mmu_helper.c, was only used by
helper_store_ptcr, in misc_helper.c. To avoid possible confusion,
the function was folded into the helper.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210526143516.125582-1-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These files included helper-proto.h, but didn't use or declare any
helpers, so the #include has been removed.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210521201759.85475-6-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It is preferable to store the current rounding mode and restore from
that rather than recalculating it from fpscr, so we changed the behavior
of do_fri and VSX_ROUND to do just that.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210521201759.85475-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
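A sketch of the save/restore pattern, with a simplified stand-in for the softfloat status and its rounding-mode accessors.

    typedef struct { int rounding_mode; } FpStatusSketch;

    static int  get_rounding(const FpStatusSketch *s)     { return s->rounding_mode; }
    static void set_rounding(FpStatusSketch *s, int mode) { s->rounding_mode = mode; }

    double round_with_mode_sketch(FpStatusSketch *status, double x, int mode)
    {
        int saved = get_rounding(status);   /* save the current mode */
        set_rounding(status, mode);
        /* ... perform the rounding operation under `mode` ... */
        set_rounding(status, saved);        /* restore; no fpscr lookup needed */
        return x;
    }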
These functions are used in hw/ppc logic during machine startup, which
means they must be compiled when --disable-tcg is selected, so they
have been moved into a common code file.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20210521201759.85475-3-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed how the function ppc_store_sdr1 reports errors, from
error_report(...) to qemu_log_mask(LOG_GUEST_ERROR, ...).
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210521201759.85475-2-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
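A sketch of the difference in reporting style: a guest-triggerable condition goes through a maskable guest-error log rather than being reported as if it were a QEMU-internal problem. The logging function here is a simplified stand-in for qemu_log_mask(LOG_GUEST_ERROR, ...).

    #include <stdarg.h>
    #include <stdio.h>

    #define LOG_GUEST_ERROR_SKETCH (1u << 0)
    static unsigned log_mask_enabled = LOG_GUEST_ERROR_SKETCH;

    static void log_mask_sketch(unsigned mask, const char *fmt, ...)
    {
        if (log_mask_enabled & mask) {
            va_list ap;
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
        }
    }

    void store_sdr1_sketch(unsigned long value, int valid)
    {
        if (!valid) {
            /* before: error_report(...), which reads like a QEMU bug */
            log_mask_sketch(LOG_GUEST_ERROR_SKETCH,
                            "Invalid SDR1 value 0x%lx written by guest\n", value);
            return;
        }
        /* ... store the value ... */
    }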
Commit e50caf4a5c ("tracing: convert documentation to rST")
converted docs/devel/tracing.txt to docs/devel/tracing.rst.
We still have several references to the old file, so let's fix them
with the following command:
sed -i s/tracing.txt/tracing.rst/ $(git grep -l docs/devel/tracing.txt)
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210517151702.109066-2-sgarzare@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We no longer have any runtime modifications to this struct,
so declare them all const.
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20210227232519.222663-3-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-21-f4bug@amsat.org>
[rth: Drop declaration movement from target/*/cpu.h]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The write_elf*() handlers are used to dump vmcore images.
This feature is only meaningful for system emulation.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-19-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
VirtIO devices are only meaningful with system emulation.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-17-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Migration is specific to system emulation.
- Move the CPUClass::vmsd field to SysemuCPUOps,
- restrict VMSTATE_CPU() macro to sysemu,
- vmstate_dummy is now unused, remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-16-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Introduce a structure to hold handler specific to sysemu.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-15-f4bug@amsat.org>
[rth: Squash "restrict hw/core/sysemu-cpu-ops.h" patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Quoting Peter Maydell [*]:
There are two ways to handle migration for
a CPU object:
(1) like any other device, so it has a dc->vmsd that covers
migration for the whole object. As usual for objects that are a
subclass of a parent that has state, the first entry in the
VMStateDescription field list is VMSTATE_CPU(), which migrates
the cpu_common fields, followed by whatever the CPU's own migration
fields are.
(2) a backwards-compatible mechanism for CPUs that were
originally migrated using manual "write fields to the migration
stream structures". The on-the-wire migration format
for those is based on the 'env' pointer (which isn't a QOM object),
and the cpu_common part of the migration data is elsewhere.
cpu_exec_realizefn() handles both possibilities:
* for type 1, dc->vmsd is set and cc->vmsd is not,
so cpu_exec_realizefn() does nothing, and the standard
"register dc->vmsd for a device" code does everything needed
* for type 2, dc->vmsd is NULL and so we register the
vmstate_cpu_common directly to handle the cpu-common fields,
and the cc->vmsd to handle the per-CPU stuff
You can't change a CPU from one type to the other without breaking
migration compatibility, which is why some guest architectures
are stuck on the cc->vmsd form. New targets should use dc->vmsd.
To prevent new targets from starting to use type (2), rename cc->vmsd to
cc->legacy_vmsd. The correct field to implement is dc->vmsd (the
DeviceClass one).
See also commit b170fce3dd ("cpu: Register VMStateDescription
through CPUState") for historic background.
[*] https://www.mail-archive.com/qemu-devel@nongnu.org/msg800849.html
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210517105140.1062037-13-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
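A sketch of the realize-time decision quoted above; the types and registration call are simplified stand-ins for the DeviceClass/CPUClass and vmstate machinery.

    #include <stddef.h>

    typedef struct VMSDSketch VMSDSketch;    /* opaque description type */

    typedef struct {
        const VMSDSketch *dc_vmsd;       /* type (1): device-style migration */
        const VMSDSketch *legacy_vmsd;   /* type (2): old cc->vmsd mechanism */
    } CpuClassSketch;

    static void register_vmsd(const VMSDSketch *vmsd, void *opaque)
    {
        (void)vmsd; (void)opaque;        /* stub for vmstate registration */
    }

    void cpu_exec_realize_sketch(CpuClassSketch *cc, void *cpu, void *env,
                                 const VMSDSketch *vmstate_cpu_common)
    {
        if (cc->dc_vmsd == NULL) {
            /* type (2): register the cpu-common fields ourselves, plus the
             * legacy per-CPU description keyed on the env pointer. */
            register_vmsd(vmstate_cpu_common, cpu);
            if (cc->legacy_vmsd != NULL) {
                register_vmsd(cc->legacy_vmsd, env);
            }
        }
        /* type (1): nothing to do here; the generic device code registers
         * dc->vmsd for the whole object. */
    }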
It is no longer used.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-16-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can now use MMU_INST_FETCH from access_type for this.
Unify the I/D code paths, making use of prot_for_access_type.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-15-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It is no longer used.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-14-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can now use MMU_INST_FETCH from access_type for this.
Unify the I/D code paths, making use of prot_for_access_type.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-13-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It is no longer used.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-12-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can now use MMU_INST_FETCH from access_type for this.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-11-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can now use MMU_INST_FETCH from access_type for this.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-10-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It is no longer used.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-9-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can now use MMU_INST_FETCH from access_type for this.
Use prot_for_access_type to simplify everything.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-8-richard.henderson@linaro.org>
[dwg: Remove a stray trailing whitespace]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This replaces 'int rw' with 'MMUAccessType access_type'.
Comparisons vs zero become either MMU_DATA_LOAD or MMU_DATA_STORE,
since we had previously squashed rw to 0 for code access.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-7-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The variable that holds ACCESS_INT, ACCESS_FLOAT, etc. is
variously called 'int type' or 'int access_type' within
this file. Standardize on 'int type' throughout.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-6-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We must leave the 'int rwx' parameter to ppc_hash32_handle_mmu_fault
for now, but will clean that up later.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-5-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We must leave the 'int rwx' parameter to ppc_hash64_handle_mmu_fault
for now, but will clean that up later.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-4-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We must leave the 'int rwx' parameter to ppc_radix64_handle_mmu_fault
for now, but will clean that up later.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-3-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use this in the three places we currently have a local array
indexed by rwx (which happens to have the same values).
The types will match up correctly with additional changes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210518201146.794854-2-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
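A sketch of what such a helper looks like: each MMUAccessType maps to the matching page-protection bit, replacing the small local arrays indexed by rwx. The enum and flag values are stand-ins for the QEMU definitions.

    #include <stdlib.h>

    typedef enum { MMU_DATA_LOAD_S, MMU_DATA_STORE_S, MMU_INST_FETCH_S } MMUAccessTypeSketch;

    enum { PAGE_READ_S = 1, PAGE_WRITE_S = 2, PAGE_EXEC_S = 4 };

    int prot_for_access_type_sketch(MMUAccessTypeSketch access_type)
    {
        switch (access_type) {
        case MMU_DATA_LOAD_S:  return PAGE_READ_S;
        case MMU_DATA_STORE_S: return PAGE_WRITE_S;
        case MMU_INST_FETCH_S: return PAGE_EXEC_S;
        }
        abort();   /* unreachable */
    }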
TARGET_WORDS_BIGENDIAN may not match the machine endianness if that's a
runtime-configurable parameter.
Fixes: bcb0b7b1a1
Fixes: afae37d98a
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/212
Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
Message-Id: <20210518133020.58927-1-thatlemon@gmail.com>
Tested-by: Paul A. Clarke <pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The special logging is unnecessary. It will have been done
immediately before in the log file.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210517205025.3777947-9-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>