Refactor VSX_SCALAR_CMP_DP, renaming it to VSX_SCALAR_CMP and
preparing the helper to be used for quadword comparisons.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225210936.1749575-41-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
xscmpnedp was added in ISA v3.0 but removed in v3.0B. This patch
removes this instruction as it was not in the final version of v3.0.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-40-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Implement the following PowerISA v3.1 instructions:
vcmpgtsq: Vector Compare Greater Than Signed Quadword
vcmpgtuq: Vector Compare Greater Than Unsigned Quadword
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-13-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Move the following instructions to decodetree:
vextsb2w: Vector Extend Sign Byte To Word
vextsh2w: Vector Extend Sign Halfword To Word
vextsb2d: Vector Extend Sign Byte To Doubleword
vextsh2d: Vector Extend Sign Halfword To Doubleword
vextsw2d: Vector Extend Sign Word To Doubleword
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-8-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Based on [1] by Lijun Pan <ljp@linux.ibm.com>, which was never merged
into master.
[1]: https://lists.gnu.org/archive/html/qemu-ppc/2020-07/msg00419.html
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Based on [1] by Lijun Pan <ljp@linux.ibm.com>, which was never merged
into master.
[1]: https://lists.gnu.org/archive/html/qemu-ppc/2020-07/msg00419.html
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Changed vmulhuw, vmulhud, vmulhsw, vmulhsd to not
use helpers.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225210936.1749575-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
New macros that add FLAGS and FLAGS2 checking were added for
both TRANS and TRANS64.
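As a rough sketch of the shape such macros can take (the REQUIRE_* helper
names below are assumptions for illustration, not a copy of the actual
definitions):
    #define TRANS_FLAGS(FLAGS, NAME, FUNC, ...)                        \
        static bool trans_##NAME(DisasContext *ctx, arg_##NAME *a)     \
        {                                                              \
            /* reject the insn if the CPU lacks the FLAGS feature */   \
            REQUIRE_INSNS_FLAGS(ctx, FLAGS);                           \
            return FUNC(ctx, a, __VA_ARGS__);                          \
        }

    #define TRANS_FLAGS2(FLAGS2, NAME, FUNC, ...)                      \
        static bool trans_##NAME(DisasContext *ctx, arg_##NAME *a)     \
        {                                                              \
            /* FLAGS2 covers the insns_flags2 feature word */          \
            REQUIRE_INSNS_FLAGS2(ctx, FLAGS2);                         \
            return FUNC(ctx, a, __VA_ARGS__);                          \
        }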
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
[ferst: - TRANS_FLAGS2 instead of TRANS_FLAGS_E
- Use the new macros in load/store vector insns ]
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch adds support for the EBB exceptions that are triggered by
Performance Monitor alerts. This happens when a Performance Monitor
alert occurs and MMCR0_EBE, BESCR_PME and BESCR_GE are set.
fire_PMC_interrupt() will execute the raise_ebb_perfm_exception() helper
which will check for MMCR0_EBE, BESCR_PME and BESCR_GE bits. If all bits
are set, do_ebb() will attempt to trigger a PERFM EBB event.
If the EBB facility is enabled in both FSCR and HFSCR we consider that
the EBB is valid and set BESCR_PMEO. After that, if we're running in
problem state, fire a POWERPC_EXCP_PERFM_EBB immediately. Otherwise we'll
queue a PPC_INTERRUPT_EBB.
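As a sketch of that flow (bit and SPR names follow the text above; the
exact spellings in the code may differ):
    static void raise_ebb_perfm_exception(CPUPPCState *env)
    {
        bool perfm_ebb_enabled = (env->spr[SPR_POWER_MMCR0] & MMCR0_EBE) &&
                                 (env->spr[SPR_BESCR] & BESCR_PME) &&
                                 (env->spr[SPR_BESCR] & BESCR_GE);

        if (perfm_ebb_enabled) {
            /* do_ebb() checks FSCR/HFSCR, sets BESCR_PMEO and either fires
             * the exception (problem state) or queues PPC_INTERRUPT_EBB */
            do_ebb(env, POWERPC_EXCP_PERFM_EBB);
        }
    }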
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225101140.1054160-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
PPC_INTERRUPT_EBB is a new interrupt that will be used to deliver EBB
exceptions that had to be postponed because the thread wasn't in problem
state at the time the event-based branch was supposed to occur.
ISA 3.1 also defines two EBB exceptions: Performance Monitor EBB
exception and External EBB exception. They are being added as
POWERPC_EXCP_PERFM_EBB and POWERPC_EXCP_EXTERNAL_EBB.
PPC_INTERRUPT_EBB will check BESCR bits to see the EBB type that
occurred and trigger the appropriate exception. Both exceptions are
doing the same thing in this first implementation: clear BESCR_GE and
enter the branch with env->nip retrieved from SPR_EBBHR.
The checks being done by the interrupt code are msr_pr and BESCR_GE
states. All other checks (EBB facility check, BESCR_PME bit, specific
bits related to the event type) must be done beforehand.
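Sketched, the delivery path looks roughly like this (the BESCR occurrence
bit names are assumptions where not quoted above):
    /* in the PPC_INTERRUPT_EBB path */
    if (msr_pr && (env->spr[SPR_BESCR] & BESCR_GE)) {
        if (env->spr[SPR_BESCR] & BESCR_PMEO) {
            powerpc_excp(cpu, POWERPC_EXCP_PERFM_EBB);
        } else {
            powerpc_excp(cpu, POWERPC_EXCP_EXTERNAL_EBB);
        }
    }

    /* common to both EBB exceptions */
    env->spr[SPR_BESCR] &= ~BESCR_GE;   /* disable further EBBs      */
    env->nip = env->spr[SPR_EBBHR];     /* branch to the EBB handler */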
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220225101140.1054160-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There are still PMU exclusive bits to handle in fire_PMC_interrupt()
before implementing the EBB support. Let's finalize it now to avoid
dealing with PMU and EBB logic at the same time in the next patches.
fire_PMC_interrupt() will fire a Performance Monitor alert depending on
MMCR0_PMAE. If we are required to freeze the timers (MMCR0_FCECE) we'll
also need to update summaries and delete the existing overflow timers.
In all cases we're going to update the cycle counters.
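A sketch of the resulting logic (helper names and the exact freeze
handling are illustrative, not the literal implementation):
    static void fire_PMC_interrupt(PowerPCCPU *cpu)
    {
        CPUPPCState *env = &cpu->env;

        pmu_update_cycles(env);                 /* always update counters */

        if (!(env->spr[SPR_POWER_MMCR0] & MMCR0_PMAE)) {
            return;                             /* alerts are disabled */
        }

        if (env->spr[SPR_POWER_MMCR0] & MMCR0_FCECE) {
            /* freeze on enabled condition/event: refresh the summaries
             * and drop any pending overflow timers */
            env->spr[SPR_POWER_MMCR0] &= ~MMCR0_PMAE;
            env->spr[SPR_POWER_MMCR0] |= MMCR0_FC;
            pmu_update_summaries(env);
            pmu_delete_timers(cpu);
        }
    }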
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220225101140.1054160-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is an exclusive TCG helper. Gating it with CONFIG_TCG and changing
meson.build accordingly will prevent problems with --disable-tcg and
--disable-linux-user later on.
We're also changing the uses of !kvm_enabled() to tcg_enabled() to avoid
adding "defined(CONFIG_TCG)" ifdefs, since tcg_enabled() will be
defaulted to false with --disable-tcg and the block will always be
skipped.
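For example, the pattern looks like this (do_tcg_only_setup() is a
hypothetical placeholder for the guarded code):
    if (tcg_enabled()) {
        /* With --disable-tcg, tcg_enabled() is constant false and this
         * block is dead code; no CONFIG_TCG ifdef is required. */
        do_tcg_only_setup(env);   /* hypothetical TCG-only helper */
    }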
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220225101140.1054160-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The dh_alias redirect is intended to handle TCG types as distinguished
from C types. TCG does not distinguish signed int from unsigned int,
because they are the same size. However, we need to retain this
distinction for dh_typecode, lest we fail to extend abi types properly
for the host call parameters.
This bug was detected when running the 'arm' emulator on an s390
system. The s390 uses TCG_TARGET_EXTEND_ARGS which triggers code
in tcg_gen_callN to extend 32 bit values to 64 bits; the incorrect
sign data in the typemask for each argument caused the values to be
extended as unsigned values.
This simple program exhibits the problem:
    #include <stdio.h>
    #include <stdlib.h>

    static volatile int num = -9;
    static volatile int den = -5;

    int main(void)
    {
        int quo = num / den;
        printf("num %d den %d quo %d\n", num, den, quo);
        exit(0);
    }
When run on the broken qemu, this results in:
num -9 den -5 quo 0
The correct result is:
num -9 den -5 quo 1
Fixes: 7319d83a73 ("tcg: Combine dh_is_64bit and dh_is_signed to dh_typecode")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/876
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Many files use "qemu/log.h" declarations but neglect to include
it (they inherit it via "exec/exec-all.h"). "exec/exec-all.h" is
a core component and shouldn't be used that way. Move the
"qemu/log.h" inclusion locally to each unit requiring it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220207082756.82600-10-f4bug@amsat.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Let's leave cpu_init with just generic CPU initialization and
QOM-related functions.
The rest of the SPR registration functions will be moved in the
following patches along with the code that uses them. These are only
the commonly used ones.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-28-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
These will need to be accessed from other files once we move the CPUs
code to separate files.
The check_pow_hid0 and check_pow_hid0_74xx are too specific to be
moved to a header so I'll deal with them later when splitting this
code between the multiple CPU families.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-27-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Put the SPR registration macros in a header that is accessible outside
of cpu_init.c. The following patches will move CPU-specific code to
separate files and will need to access it.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-26-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The following patches will move CPU-specific code into separate files,
so expose the most used SPR registration functions:
register_sdr1_sprs | 22 callers
register_low_BATs | 20 callers
register_non_embedded_sprs | 19 callers
register_high_BATs | 10 callers
register_thrm_sprs | 8 callers
register_usprgh_sprs | 6 callers
register_6xx_7xx_soft_tlb | only 3 callers, but it helps to
keep the soft TLB code consistent.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-25-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Initial intent for the spr_tcg header was to expose the spr_read|write
callbacks that are only used by TCG code. However, although these
routines are TCG-specific, the KVM code needs access to env->sprs,
whose creation is currently coupled to the callback registration.
We are probably not going to decouple SPR creation and TCG callback
registration any time soon, so let's rename the header to spr_common
to accommodate the register_*_sprs functions that will be moved out of
cpu_init.c in the following patches.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-24-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This function registers just one SPR and has only two callers, so open
code it.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-23-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The important part of this function is that it applies to non-embedded
CPUs, not that it also applies to the 601. We removed support for the
601 anyway, so rename this function.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-22-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The init_proc_755 function is identical to the 745 one except for the
755-specific registers. I think it is worth it to make them share
code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-21-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
init_proc_603 is defined after init_proc_e300, so I had to move some
code around to make it work.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-19-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is done to improve init_proc readability and to make subsequent
patches that touch this code a bit cleaner.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-18-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is done to improve init_proc readability and to make subsequent
patches that touch this code a bit cleaner.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-17-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is just to have 755-specific registers contained into a function,
instead of leaving them open-coded in init_proc_755. It makes init_proc
easier to read and keeps later patches that touch this code a bit
cleaner.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-16-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 745 and 755 can share the HID registration, so move it all into
register_755_sprs, which applies for both CPUs.
Also rename that function to register_745_sprs, since the 745 is the
earliest of the two. This will help with separating 755-specific
registers in a subsequent patch.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-14-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Move some of the 440 registers that are being repeated in the 440*
CPUs to register_440_sprs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We're considering these two to be from different CPU families, so
duplicate some code to keep them separate.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We're considering these two to be in different CPU families (6xx and
7xx), so keep their SPR registration separate.
The code was copied into register_G2_sprs and the common function was
renamed to apply only to the 755.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make sure that every register_*_sprs function only has calls to
spr_register* to register individual SPRs. Do not allow nesting. This
makes the code easier to follow and a look at init_proc_* should
suffice to know what SPRs a CPU has.
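A sketch of the intended shape (the family name and SPR choice are just
placeholders):
    static void register_foo_sprs(CPUPPCState *env)   /* hypothetical */
    {
        /* only direct spr_register* calls, no nested register_*_sprs */
        spr_register(env, SPR_HID0, "HID0",
                     SPR_NOACCESS, SPR_NOACCESS,
                     &spr_read_generic, &spr_write_generic,
                     0x00000000);
    }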
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that the 601 was removed, all of our CPUs have a timebase, so that
can be moved into the common function.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The top level init_proc calls register_generic_sprs but also registers
some other SPRs outside of that function. Let's group everything into
a single place.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The G2LE CPU initialization code is the same as the G2. Use the latter
for both.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The /* XXX : not implemented */ comments all over cpu_init are
confusing and ambiguous.
Do they mean not implemented by QEMU, not implemented in a specific
access mode? Not implemented by the CPU? Do they apply to just the
register right after or to a whole block? Do they mean we have an
action to take in the future to implement these? Are they only
informative?
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce virtual hypervisor methods that can support a "Nested KVM HV"
implementation using the bare metal 2-level radix MMU, and using HV
exceptions to return from H_ENTER_NESTED (rather than cause interrupts).
HV exceptions can now be raised in the TCG spapr machine when running a
nested KVM HV guest. The main ones are the lev==1 syscall, the hdecr,
hdsi and hisi, hv fu, and hv emu, and h_virt external interrupts.
HV exceptions are intercepted in the exception handler code and, instead
of causing interrupts in the guest and switching the machine to HV mode,
they go to the vhyp, which may exit the H_ENTER_NESTED hcall with the
interrupt vector number as the return value, as required by the hcall API.
Address translation is provided by the 2-level page table walker that is
implemented for the bare metal radix MMU. The partition scope page table
is pointed to the L1's partition scope by the get_pate vhc method.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220216102545.1808018-9-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This moves the logic to reset the QEMU exception state into its own
function.
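A minimal sketch of such a helper (the name is assumed here):
    static void powerpc_reset_excp_state(PowerPCCPU *cpu)
    {
        CPUState *cs = CPU(cpu);
        CPUPPCState *env = &cpu->env;

        /* Reset the QEMU-internal exception state */
        cs->exception_index = POWERPC_EXCP_NONE;
        env->error_code = 0;
    }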
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-8-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The virtual hypervisor currently always intercepts and handles
hypercalls but with a future change this will not always be the case.
Add a helper for the test so the logic is abstracted from the mechanism.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20220216102545.1808018-7-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
In preparation for implementing a full partition table option for
vhyp, update the get_pate method to take an lpid and return a
success/fail indicator.
The spapr implementation currently just asserts that lpid is always 0
and always returns success.
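The reworked method then has roughly this shape (a sketch based on the
description above, not the literal declaration):
    /* vhyp class method: look up the partition table entry for lpid,
     * returning false if there is none */
    bool (*get_pate)(PPCVirtualHypervisor *vhyp, PowerPCCPU *cpu,
                     target_ulong lpid, ppc_v3_pate_t *entry);
With that, the spapr backend can keep asserting lpid == 0 and
unconditionally return true.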
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-6-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The radix on vhyp MMU uses a single-level radix table walk, with the
partition scope mapping provided by the flat QEMU machine memory.
A subsequent change will use the two-level radix walk on vhyp in some
situations, so provide a helper which can abstract that logic.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20220216102545.1808018-5-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Invalid or missing partition table entry exceptions should cause HV
interrupts. HDSISR is set to bad MMU config, which is consistent with
the ISA and experimentally matches what POWER9 generates.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-2-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ISA v3.1 changed the behavior of some VSX instructions by changing what
the other words/doublewords in the result should contain when the result
is only one word/doubleword, e.g. xsmaxdp operates on doubleword 0 and
saves the result also in doubleword 0.
Before, the second doubleword result was undefined according to the
ISA, but now it's stated that it should be zeroed.
Even though the result was undefined before, hardware implementing these
instructions already filled these fields with 0s. Changing every ISA
version in QEMU to this behavior makes the results match what happens
in hardware.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220204181944.65063-1-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We don't really need to check the exception model while applying
AIL. We can check the lpcr_mask for the presence of
LPCR_AIL/LPCR_HAIL.
This removes one more instance of passing the exception model ID
around.
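Sketched, the check becomes something like:
    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);

    if (!(pcc->lpcr_mask & (LPCR_AIL | LPCR_HAIL))) {
        /* This CPU has no AIL/HAIL in LPCR, nothing to apply */
        return;
    }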
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220207183036.1507882-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We currently abort QEMU during the dispatch of an interrupt if we try
to set MSR_HV without having MSR_HVB in the msr_mask. I think we
should verify this for all MSR bits. There is no reason to ever have a
MSR bit set if the corresponding bit is not set in that CPU's
msr_mask.
Note that this is not about the emulated code setting reserved
bits. We clear the new_msr when starting to dispatch an exception, so
if we end up with bits not present in the msr_mask that is a QEMU
programming error.
I kept the HSRR verification for BookS because it is the only CPU
family that has HSRRs.
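A sketch of the stricter verification (the abort message text is
illustrative):
    if (new_msr & ~env->msr_mask) {
        cpu_abort(cs, "Bits set in new_msr that are absent from msr_mask: "
                  "0x%" PRIx64 "\n", (uint64_t)(new_msr & ~env->msr_mask));
    }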
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220207183036.1507882-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make the cpu-specific powerpc_excp_* functions a bit simpler by moving
the bounds check and logging to powerpc_excp.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220207183036.1507882-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that all CPU families have their own separate exception
dispatching code we can remove powerpc_excp_legacy.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220207183036.1507882-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 7xx CPUs don't have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This code applies only to the 7xx CPUs, so we can remove the switch
statement.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no HV support in the 7xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the BookE code and add a comment explaining why we need to keep
hypercall support even though this CPU does not have a hypervisor
mode.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no ESR in the 7xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no MSR_HV in the 7xx so remove the LPES0 handling.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in the 7xx.
Also remove 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- No MSR_HV;
- No power saving states;
- No Hypervisor Emulation Assistance;
- Not 64 bits;
- No System call vectored;
- No Alternate Interrupt Location.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_DECR
POWERPC_EXCP_DLTLB
POWERPC_EXCP_DSI
POWERPC_EXCP_DSTLB
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FPU
POWERPC_EXCP_IABR
POWERPC_EXCP_IFTLB
POWERPC_EXCP_ISI
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PERFM
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_RESET
POWERPC_EXCP_SMI
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_THERM
POWERPC_EXCP_TRACE
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 7xx CPUs
(740, 745, 750, 750cl, 750cx, 750fx, 750gx, 755). This commit copies
powerpc_excp_legacy verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Since we've split the exception code by exception model, the exception
model IDs are becoming less useful. These two can be merged.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 6xx CPUs don't have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This code applies only to the 6xx CPUs, so we can remove the switch
statement.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no HV support in the 6xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no Hypervisor mode in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no ESR in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no Hypervisor mode in the 6xx, so remove all LPES0 logic.
Also remove BookE IRQ code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in the 6xx CPUs.
Also remove the 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This only applies to the G2s; the other 6xx CPUs will not have this
vector registered.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 6xx CPUs
(603, 604, G2, MPC5xx, MCP8xx). This commit copies powerpc_excp_legacy
verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We don't need three separate exception model IDs for the 603, 604 and
G2.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerPC 601 processor is the first generation of processors to
implement the PowerPC architecture. It was designed as a bridge
processor and also could execute most of the instructions of the
previous POWER architecture. It was found on the first Macs and IBM
RS/6000 workstations.
There is not much interest in keeping the CPU model of this
POWER-PowerPC bridge processor. We have the 603 and 604 CPU models of
the 60x family which implement the complete PowerPC instruction set.
Cc: "Hervé Poussineau" <hpoussin@reactos.org>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203142756.1302515-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ppc_radix64_partition_scoped_xlate() logs the host page protection
bits variable but it is uninitialized. The value is set later on in
ppc_radix64_check_prot(). Remove the output.
Fixes: Coverity CID 1468942
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20220203142145.1301749-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no MSR_HV in BookE, so remove all of the HV logic.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the switch as this function applies to BookE only.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
QEMU does not support BookE as a hypervisor.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
BookE has no DSISR or DAR. The proper registers ESR and DEAR were
already set at this point.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no LPES0 in BookE and no MSR_HV.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The SRR1 should be set to the MSR value. There are no diagnostic bits
in the SRR1 for BookE.
Note that this fixes a bug where MSR_GS would be set and Linux would
go into KVM code when there's no KVM guest.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no DSISR or DAR in BookE. Change to ESR and DEAR.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in BookE.
Also remove 40x code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for BookE CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This CPU was partially removed due to lack of support in 2017 by commit
aef7796057 ("ppc: remove non implemented cpu models").
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220128221611.1221715-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 602 was derived from the PowerPC 603, for the gaming market it
seems. It was hardly used and no firmware supporting the CPU could be
found. Drop support.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The migration code will not look at a VMStateDescription's
minimum_version_id_old field unless that VMSD has set the
load_state_old field to something non-NULL. (The purpose of
minimum_version_id_old is to specify what migration version is needed
for the code in the function pointed to by load_state_old to be able
to handle it on incoming migration.)
We have exactly one VMSD which still has a load_state_old,
in the PPC CPU; every other VMSD which sets minimum_version_id_old
is doing so unnecessarily. Delete all the unnecessary ones.
Commit created with:
sed -i '/\.minimum_version_id_old/d' $(git grep -l '\.minimum_version_id_old')
with the one legitimate use then hand-edited back in.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
It missed vmstate_ppc_cpu.
The 74xx does not have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The whole power saving states logic seems to be dependent on HV mode,
which doesn't exist for the 74xx, so I'm removing it all and leaving the
abort message.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the BookE code and add a comment explaining why we need to keep
hypercall support even though this CPU does not have a hypervisor
mode.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 74xx don't have MSR_HV so all the LPES0 logic can be removed.
Also remove the BookE IRQ code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 74xx don't have an MSR_HV.
Also remove 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- No MSR_HV;
- No power saving states;
- No Hypervisor Emulation Assistance;
- Not 64 bits;
- No System call vectored;
- No Alternate Interrupt Location.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_DECR
POWERPC_EXCP_DSI
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FPU
POWERPC_EXCP_IABR
POWERPC_EXCP_ISI
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PERFM
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_RESET
POWERPC_EXCP_SMI
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_THERM
POWERPC_EXCP_TRACE
POWERPC_EXCP_VPU
POWERPC_EXCP_VPUA
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 74xx
CPUs. This commit copies powerpc_excp_legacy verbatim so the next one
has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220127201116.1154733-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Since this is now BookS only, we can simplify the code a bit and check
has_hv_mode instead of enumerating the exception models. LPES0 does
not make sense if there is no MSR_HV.
Note that QEMU does not support HV mode on 970 and POWER5+ so we don't
set MSR_HV in msr_mask.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220124184605.999353-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- Always uses HV_EMU if the CPU has MSR_HV;
- Exceptions always delivered in 64 bit.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_DECR
POWERPC_EXCP_DSEG
POWERPC_EXCP_DSI
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FPU
POWERPC_EXCP_FU
POWERPC_EXCP_HDECR
POWERPC_EXCP_HDSI
POWERPC_EXCP_HISI
POWERPC_EXCP_HVIRT
POWERPC_EXCP_HV_EMU
POWERPC_EXCP_HV_FU
POWERPC_EXCP_ISEG
POWERPC_EXCP_ISI
POWERPC_EXCP_MAINT
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PERFM
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_RESET
POWERPC_EXCP_SDOOR_HV
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_SYSCALL_VECTORED
POWERPC_EXCP_THERM
POWERPC_EXCP_TRACE
POWERPC_EXCP_VPU
POWERPC_EXCP_VPUA
POWERPC_EXCP_VSXU
POWERPC_EXCP_HV_MAINT
POWERPC_EXCP_SDOOR
(I added the two above that were not being considered. They used to be
"Invalid exception". Now they become "Unimplemented exception" which
is more accurate.)
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220124184605.999353-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for BookS CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220124184605.999353-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 Program Interrupt does not set SRR1 with any diagnostic bits,
just a clean copy of the MSR.
We're using the BookE Exception Syndrome Register which is different
from the 405.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg: restored SPR_40x_ESR settings ]
Message-Id: <20220118184448.852996-14-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 ISI does not set SRR1 with any exception syndrome bits, only a
clean copy of the MSR.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg : Fixed removal which was done in the wrong routine ]
Message-Id: <20220118184448.852996-13-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 has no DSISR or DAR, so convert the trace entry to
use ESR and DEAR instead.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg : - changed registers to ESR and DEAR.
- updated commit log ]
Message-Id: <20220118184448.852996-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The current Debug exception dispatch is the BookE one, so it is
different from the 405. We effectively don't support the 405 Debug
exception.
This patch removes the BookE code and moves the DEBUG into the "not
implemented" block.
Note that there is in theory a functional change here since we now
abort when a Debug exception happens. However, given how it was never
implemented, I don't believe this to have ever been dispatched for the
405.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no DSISR in the 405. It uses DEAR which we already set
earlier at ppc_cpu_do_unaligned_access.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
405 has no MSR_HV and EPR is BookE only so we can remove it all.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
powerpc_excp_40x applies only to the 405, so remove HV code and
references to BookE.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
In powerpc_excp_40x the Critical exception is now for 405 only, so we
can remove the BookE and G2 blocks.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220118184448.852996-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- No MSR_HV or MSR_LE;
- No power saving states;
- No Hypervisor Emulation Assistance;
- Not 64 bits;
- No System call vectored;
- No Interrupts Little Endian;
- No Alternate Interrupt Location.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_CRITICAL
POWERPC_EXCP_DEBUG
POWERPC_EXCP_DSI
POWERPC_EXCP_DTLB
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FIT
POWERPC_EXCP_ISI
POWERPC_EXCP_ITLB
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PIT
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_WDT
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118184448.852996-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for 40x CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220118184448.852996-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 MSR has the Machine Check Enable bit. We're making use of it
when dispatching Machine Check, so add the bit to the msr_mask.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220118184448.852996-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Bit 13 is the Wait State Enable bit. Give it its proper name.
As far as I can see we don't do anything with MSR_POW for the 405, so
this change has no effect.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220118184448.852996-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Commit cd0c6f4735 did not take 405 CPUs into account when adding
support for batching of TCG TLB flushes. Set the TLB_NEED_LOCAL_FLUSH
flag when SPR_40x_PID is set or a TLB entry is updated.
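The fix amounts to flagging a deferred local flush in those two paths,
roughly:
    /* in the SPR_40x_PID store path and the 40x TLB-write helper */
    env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;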
Cc: Thomas Huth <thuth@redhat.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Fixes: cd0c6f4735 ("ppc: Do some batching of TCG tlb flushes")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220113180352.1234512-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
POWERPC_MMU_BOOKE is not a mask and should not be tested with a
bitwise AND operator.
It went unnoticed because it only impacts the 601 CPU implementation
for which we don't have a known firmware image.
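The intent, sketched:
    /* POWERPC_MMU_BOOKE is an enumerator, not a bit mask: compare with
     * equality instead of testing individual bits */
    bool is_booke = (env->mmu_model == POWERPC_MMU_BOOKE);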
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220124081609.3672341-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
cpu_interrupt_exittb() was introduced by commit 044897ef4a
("target/ppc: Fix system lockups caused by interrupt_request state
corruption") as a way to wrap cpu_interrupt() helper in BQL.
After that, commit 6d38666a89 ("ppc: Ignore the CPU_INTERRUPT_EXITTB
interrupt with KVM") added a condition to skip this interrupt if we're
running with KVM.
The problem is that the change made by the above commit, testing for
!kvm_enabled() at the start of cpu_interrupt_exittb():
    static inline void cpu_interrupt_exittb(CPUState *cs)
    {
        if (!kvm_enabled()) {
            return;
        }
        (... do cpu_interrupt(cs, CPU_INTERRUPT_EXITTB) ...)
    }
is doing the opposite of what it intended to do. This will return
immediately if not kvm_enabled(), i.e. it's an emulated CPU, and if
kvm_enabled() it will proceed to fire CPU_INTERRUPT_EXITTB.
Fix the 'skip KVM' condition so the function is a no-op when
kvm_enabled().
CC: Greg Kurz <groug@kaod.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/809
Fixes: 6d38666a89 ("ppc: Ignore the CPU_INTERRUPT_EXITTB interrupt with KVM")
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20220121160841.9102-1-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The Book-E architecture does not set the error code in bits 31:27
of SRR1, but instead uses these bits for custom fields such
as GS (Guest Supervisor).
Wrongly setting these fields will result in QEMU crashes
when attempting to execute non-executable code, due to the attempts
to use Guest Supervisor mode.
Cc: "Cédric Le Goater" <clg@kaod.org>
Cc: Daniel Henrique Barboza <danielhb413@gmail.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Greg Kurz <groug@kaod.org>
Cc: qemu-ppc@nongnu.org
Cc: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Vitaly Cheptsov <cheptsov@ispras.ru>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220121093107.15478-1-cheptsov@ispras.ru>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
After a TLB miss exception, GPRs 0-3 must be restored on rfi.
This is managed by hreg_store_msr(), which is called by do_rfi().
However, hreg_store_msr() only does it if MSR[TGPR] is unset in the
passed MSR value.
The problem is that do_rfi() is given the content of SRR1 as
the value to be set in MSR, but the TGPR bit is not part of SRR1:
that bit is used for something else and is sometimes set
to 1, leading to hreg_store_msr() not restoring the GPRs.
So, as is already done for the POW bit, force-clear it.
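Sketched, the fix masks TGPR out of the SRR1 value before it reaches
hreg_store_msr() via do_rfi() (shapes as described above, not the exact
diff):
    nip = env->spr[SPR_SRR0];
    msr = env->spr[SPR_SRR1] & ~((target_ulong)1 << MSR_TGPR);
    do_rfi(env, nip, msr);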
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Cedric Le Goater <clg@kaod.org>
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220120103824.239573-1-christophe.leroy@csgroup.eu>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 7448 CPU is an evolution of the PowerPC 7447A and the last of the
G4 family. Change its family to reflect correctly its features. This
fixes Linux boot.
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220117092555.1616512-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Commit c8f49e6b93 ("target/ppc: remove 401/403 CPUs") left a few
things behind.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220117091541.1615807-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220118104150.1899661-3-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This breaks migration compatibility from (very) old versions of
QEMU. This should not be a problem for the pseries machine for which
migration is only supported on recent QEMUs ( > 2.x). There is no
clear status on what is supported or not for the other machines. Let's
move forward and remove the .load_state_old handler.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118104150.1899661-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We use the endianness of interrupts to determine which endianness to
use for the guest kernel memory dump. For machines that support HILE
(powernv8 and up) we have been always generating big endian dump
files.
This patch uses the HILE support recently added to
ppc_interrupts_little_endian to fix the endianness of the dumps for
powernv machines.
Here are two dumps created at different moments:
$ file skiboot.dump
skiboot.dump: ELF 64-bit MSB core file, 64-bit PowerPC ...
$ file kernel.dump
kernel.dump: ELF 64-bit LSB core file, 64-bit PowerPC ...
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Next patches will split powerpc_excp in multiple family specific
handlers. This patch adds a wrapper to make the transition clearer.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The ppc_interrupts_little_endian function is now suitable for
determining the endianness of interrupts for all CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Some CPUs set ILE via an MSR bit. We can make
ppc_interrupts_little_endian handle that case as well. Now we have a
centralized way of determining the endianness of interrupts.
This change has no functional impact.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The ppc_interrupts_little_endian function could be used for interrupts
delivered in Hypervisor mode, so add support for powernv8 and powernv9
to it.
Also drop the comment because it is inaccurate: all CPUs that can run
little endian can have interrupts in little endian. The point is
whether they can take interrupts in an endianness different from
MSR_LE.
This change has no functional impact.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the compile time definition and make the logging be controlled
by the `-d mmu` option in the cmdline.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ISA v2.03 introduced Floating Round to Integer instructions : frin,
friz, frip, and frim. Add them to POWER5+.
The PPC_FLOAT_EXT flag also includes the fre (Floating Reciprocal
Estimate) instruction which was introduced in ISA v2.0x. The
architecture document says it's optional, which might be the reason
why it has been kept under the PPC_FLOAT_EXT flag. This means 970 CPUs
cannot use it under QEMU, which doesn't seem to be a problem.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
popcntb instruction was added in ISA v2.02. Add support for POWER5+
processors since they implement ISA v2.03.
PPC970 CPUs implement v2.01 and do not support popcntb.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220105095142.3990430-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Renaming defines for quad in their various forms so that their signedness is
now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing
to keep assignments aligned.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
MMCR0 writes will change only MMCR0 bits which are used to calculate
HFLAGS_PMCC0, HFLAGS_PMCC1 and HFLAGS_INSN_CNT hflags. No other machine
register will be changed during this operation. This means that
hreg_compute_hflags() is overkill for what we need to do.
pmu_update_summaries() is already updating HFLAGS_INSN_CNT without
calling hreg_compute_hflags(). Let's do the same for the other 2 MMCR0
hflags.
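A sketch of the targeted update (the helper name and the MMCR0/hflags
bit handling shown here are assumptions for illustration):
    static void hreg_update_mmcr0_hflags(CPUPPCState *env)
    {
        target_ulong mmcr0 = env->spr[SPR_POWER_MMCR0];
        uint32_t hflags = env->hflags &
                          ~((1 << HFLAGS_PMCC0) | (1 << HFLAGS_PMCC1));

        if (mmcr0 & MMCR0_PMCC0) {
            hflags |= 1 << HFLAGS_PMCC0;
        }
        if (mmcr0 & MMCR0_PMCC1) {
            hflags |= 1 << HFLAGS_PMCC1;
        }
        env->hflags = hflags;
    }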
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220103224746.167831-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use the cached pmc_cyc_cnt value in pmu_update_cycles
and pmc_update_overflow_timer. This leaves pmc_get_event
and pmc_is_inactive unused, so remove them.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103224746.167831-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use the cached pmc_ins_cnt value. Unroll the loop over the
different PMC counters. Treat the PMC4 run-latch specially.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103224746.167831-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is the combination of frozen bit and counter type, on a per
counter basis. So far this is only used by HFLAGS_INSN_CNT, but
will be used more later.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[danielhb: fixed PMC4 cyc_cnt shift, insn run latch code,
MMCR0_FC handling, "PMC[1-6]" comment]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220103224746.167831-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We can just access it directly in powerpc_excp.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[ clg: Took into account removal of inline ]
Message-Id: <20211229165751.3774248-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that 'vector' is known before calling the interrupt-specific setup
code, we can move all of the scv setup into one place.
No functional change intended.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211229165751.3774248-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
None of the interrupt setup code touches 'vector', so we can move it
earlier in the function. This will allow us to later move the System
Call Vectored setup that is on the top level into the
POWERPC_EXCP_SYSCALL_VECTORED code block.
This patch also moves the verification for when 'excp' does not have
an address associated with it. We now bail a little earlier when that
is the case. This should not cause any visible effects.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The next patch will start accessing the excp_vectors array earlier in
the function, so add a bounds check as first thing here.
This converts the empty return on POWERPC_EXCP_NONE to an error. This
exception number never reaches this function and if it does it
probably means something else went wrong up the line.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There are currently only two interrupts that use alternate SRRs, so
let them write to them directly during the setup code.
No functional change intended.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The non-signalling versions of the VSX scalar convert to shorter/longer
precision insns don't silence SNaNs in the hardware. To better match
this behavior, use the non-arithmetic conversion of helper_todouble
instead of float32_to_float64. A test is added to prevent future
regressions.
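For illustration, a standalone sketch of a non-arithmetic
single-to-double widening that keeps the NaN fraction bits (and hence
the signalling bit) untouched; it is not the actual helper_todouble
code:

    #include <stdint.h>

    /* Widen an IEEE-754 binary32 bit pattern to binary64 without an
     * arithmetic conversion, so an SNaN is not silenced. */
    static uint64_t widen_f32_bits(uint32_t f)
    {
        uint64_t sign = (uint64_t)(f >> 31) << 63;
        uint32_t exp = (f >> 23) & 0xff;
        uint64_t frac = f & 0x7fffff;

        if (exp == 0xff) {
            /* Inf/NaN: keep the fraction verbatim, including the quiet bit. */
            return sign | (0x7ffULL << 52) | (frac << 29);
        }
        if (exp == 0) {
            /* Zero; subnormals would need renormalizing, which this
             * sketch skips (they are flushed to signed zero here). */
            return sign;
        }
        /* Normal number: rebias the exponent from 127 to 1023. */
        return sign | ((uint64_t)(exp - 127 + 1023) << 52) | (frac << 29);
    }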
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211228120310.1957990-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Rework slightly ppc_cpu_dump_state() to replace the various 'if'
statements with a 'switch'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-10-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PID SPR of the 405 CPU contains the translation ID of the TLB,
which is an 8-bit field. Enforce the mask with a store helper.
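A minimal sketch of the masking, assuming the 8-bit TID sits in the
low bits of the SPR (names are illustrative, not the actual QEMU
helper):

    #include <stdint.h>

    #define PID40x_TID_MASK 0xffu   /* illustrative: the 8-bit translation ID */

    /* Store helper: discard everything but the translation ID. */
    static void store_40x_pid(uint32_t *pid_spr, uint32_t val)
    {
        *pid_spr = val & PID40x_TID_MASK;
    }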
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-8-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 timers were broken when booke support was added. The assumption
was made that the register numbers were the same, but they are not:
SPR_BOOKE_TSR (0x150)
SPR_BOOKE_TCR (0x154)
SPR_40x_TSR (0x3D8)
SPR_40x_TCR (0x3DA)
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Fixes: ddd1055b07 ("PPC: booke timers")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-6-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no need to deactivate MMU logging at compile time. Remove all
use of the defines. Only keep DUMP_PAGE_TABLES for a later series, since
page tables could be dumped from the monitor.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-4-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
It facilitates reading the logs when the CPU_LOG_INT mask is enabled.
We should do the same for error codes.
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211222064025.1541490-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-3-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The compiler knows better than we do how to inline code when necessary.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
For Radix translation, the EA range is 64 bits. When EA(2:11) are
nonzero, a segment interrupt should occur.
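In PowerISA MSB0 numbering, EA(2:11) are bits 61..52 of the 64-bit
address, so the condition can be sketched as below (illustrative only;
the real translation path does more than this single check):

    #include <stdbool.h>
    #include <stdint.h>

    /* EA(2:11) in MSB0 numbering are bits 61..52 in LSB0 terms. */
    static bool radix_ea_2_11_nonzero(uint64_t ea)
    {
        return (ea & 0x3ff0000000000000ull) != 0;
    }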
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.ibm.com>
Message-Id: <20211231073122.3183583-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
An Event-Based Branch (EBB) allows applications to change the NIA when an
event-based exception occurs. Event-based exceptions are enabled by
setting the Branch Event Status and Control Register (BESCR). If the
event-based exception is enabled when the exception occurs, an EBB
happens.
The following operations happen during an EBB:
- Global Enable (GE) bit of BESCR is set to 0;
- bits 0-61 of the Event-Based Branch Return Register (EBBRR) are set
to the effective address of the NIA that would have executed if the EBB
hadn't happened;
- Instruction fetch and execution will continue at the effective address
contained in the Event-Based Branch Handler Register (EBBHR).
The EBB Handler will process the event and then execute the Return From
Event-Based Branch (rfebb) instruction. rfebb sets BESCR_GE and then
redirects execution to the address contained in EBBRR. This process is
described in the PowerISA v3.1, Book II, Chapter 6 [1].
This patch implements the rfebb instruction. Descriptions of all
relevant BESCR bits are also added - this patch is only using BESCR_GE,
but the next patches will use the remaining bits.
[1] https://wiki.raptorcs.com/w/images/f/f5/PowerISA_public.v3.1.pdf
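The sequence can be summarised by the sketch below; the register layout
and bit positions follow the description above, not the actual QEMU
definitions:

    #include <stdint.h>

    #define BESCR_GE (1ull << 63)   /* Global Enable: BESCR bit 0 in MSB0 terms */

    struct ebb_regs { uint64_t bescr, ebbrr, ebbhr, nia; };

    /* Taking an enabled event-based exception. */
    static void take_ebb(struct ebb_regs *r)
    {
        r->bescr &= ~BESCR_GE;       /* disable further EBBs */
        r->ebbrr = r->nia & ~3ull;   /* bits 0-61: the interrupted NIA */
        r->nia = r->ebbhr;           /* continue at the handler */
    }

    /* rfebb: re-enable EBBs and return to the saved address. */
    static void do_rfebb(struct ebb_regs *r)
    {
        r->bescr |= BESCR_GE;
        r->nia = r->ebbrr;
    }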
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-9-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
PM_RUN_INST_CMPL, instructions completed with the run latch set, is
the architected PowerISA v3.1 event defined with PMC4SEL = 0xFA.
Implement it by checking for the CTRL RUN bit before incrementing the
counter. To make this work properly we also need to force a new
translation block each time SPR_CTRL is written. A small tweak in
pmu_increment_insns() is then needed to only increment this event
if the thread has the run latch set.
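A minimal sketch of the gating (the CTRL run-latch bit position is
assumed, not taken from the QEMU headers):

    #include <stdint.h>

    #define CTRL_RUN 0x1u   /* assumed position of the run-latch bit */

    /* Count PM_RUN_INST_CMPL only while the run latch is set. */
    static void count_run_insns(uint64_t *pmc4, uint32_t ctrl, uint32_t num_insns)
    {
        if (ctrl & CTRL_RUN) {
            *pmc4 += num_insns;
        }
    }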
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-8-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PMU is already counting cycles by calculating time elapsed in
nanoseconds. Counting instructions is a different matter and requires
another approach.
This patch adds the capability of counting completed instructions (Perf
event PM_INST_CMPL) by counting the number of instructions translated in
each translation block right before exiting it.
A new pmu_count_insns() helper in translation.c was added to do that.
After verifying that the PMU is counting instructions, call
helper_insns_inc(). This new helper from power8-pmu.c will add the
instructions to the relevant counters. It'll also be responsible for
triggering counter negative overflows as it is already being done with
cycles.
To verify whether the PMU is counting instructions or not, a new hflag
named 'HFLAGS_INSN_CNT' is introduced. This flag will match the internal
state of the PMU. We'll be using this flag to avoid calling
helper_insns_inc() when we do not have a valid instruction event being
sampled.
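A sketch of the idea on the translation side; the hflag bit position
and the callback stand in for the real helper call:

    #include <stdint.h>

    #define HFLAGS_INSN_CNT (1u << 5)   /* illustrative bit position */

    /* Run at TB exit: only call into the PMU when an instruction event
     * is actually being sampled, as cached in the hflags. */
    static void pmu_count_insns(uint32_t hflags, uint32_t num_insns,
                                void (*insns_inc)(uint32_t))
    {
        if (!(hflags & HFLAGS_INSN_CNT)) {
            return;
        }
        insns_inc(num_insns);   /* stands in for helper_insns_inc() */
    }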
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-7-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerISA v3.1 defines that if the proper bits are set (MMCR0_PMC1CE
for PMC1 and MMCR0_PMCjCE for the remaining PMCs), counter negative
conditions are enabled. This means that if the counter value overflows
(i.e. exceeds 0x80000000) a performance monitor alert will occur. This alert
can trigger an event-based exception (to be implemented in the next patches)
if the MMCR0_EBE bit is set.
For now, overflowing the counter when the PMC is counting cycles will
just trigger a performance monitor alert. This is done by starting the
overflow timer to expire at the moment the overflow would occur. The
timer will call fire_PMC_interrupt() (via cpu_ppc_pmu_timer_cb) which will
trigger the PMU alert and, if the conditions are met, an EBB exception.
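With the 1 cycle = 1 ns assumption used by this series, the expiry
delay of the overflow timer can be sketched as:

    #include <stdint.h>

    #define PMC_COUNTER_NEGATIVE 0x80000000ull

    /* Nanoseconds until a cycle-counting PMC reaches the counter-negative
     * threshold; 0 means the condition already holds. */
    static uint64_t ns_until_counter_negative(uint64_t pmc_value)
    {
        if (pmc_value >= PMC_COUNTER_NEGATIVE) {
            return 0;
        }
        return PMC_COUNTER_NEGATIVE - pmc_value;
    }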
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-6-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
MMCR1 determines the events to be sampled by the PMU. Updating the
counters at every MMCR1 write ensures that we're not sampling more
or less events by looking only at MMCR0 and the PMCs.
It is worth noting that both the Book3S PowerPC PMU and the IBM
Power8+ PMU that we're modeling also use MMCRA, MMCR2 and MMCR3 to
control the PMU. These three registers aren't being handled in this
initial implementation, so for now we're controlling all the PMU
aspects using MMCR0, MMCR1 and the PMCs.
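The ordering matters: cycles accumulated under the old event selection
are folded in before the new MMCR1 value takes effect. A sketch with
illustrative names:

    #include <stdint.h>

    static void write_mmcr1(uint64_t *mmcr1, uint64_t val,
                            void (*pmu_update_cycles)(void))
    {
        pmu_update_cycles();   /* close the period ruled by the old events */
        *mmcr1 = val;
    }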
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Calling pmu_update_cycles() on every PMC read/write operation ensures
that the values being fetched are up to date with the current PMU state.
In theory we could get away with just trapping PMC reads, but we're going
to trap PMC writes to deal with counter overflow logic later on. Let's
put in the required wiring for that and make our lives a bit easier in the
next patches.
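The same pattern applies to the PMC SPRs themselves; a sketch of both
access paths (illustrative names only):

    #include <stdint.h>

    static uint64_t read_pmc(uint64_t *pmc, void (*pmu_update_cycles)(void))
    {
        pmu_update_cycles();   /* bring the value up to date before reading */
        return *pmc;
    }

    static void write_pmc(uint64_t *pmc, uint64_t val,
                          void (*pmu_update_cycles)(void))
    {
        pmu_update_cycles();   /* close the old counting period first */
        *pmc = val;
    }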
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch adds the barebones of the PMU logic by enabling cycle
counting. The overall logic goes as follows:
- MMCR0 reg initial value is set to 0x80000000 (MMCR0_FC set) to avoid
having to spin the PMU right at system init;
- to retrieve the events that are being profiled, pmc_get_event() will
check the current MMCR0 and MMCR1 value and return the appropriate
PMUEventType. For PMCs 1-4, event 0x2 is the implementation-dependent
value of PMU_EVENT_INSTRUCTIONS and event 0x1E is the
implementation-dependent value of PMU_EVENT_CYCLES. These events are
supported by IBM
Power chips since Power8, at least, and the Linux Perf driver makes use
of these events until kernel v5.15. For PMC1, event 0xF0 is the
architected PowerISA event for cycles. Event 0xFE is the architected
PowerISA event for instructions;
- if the counter is frozen, either via the global MMCR0_FC bit or its
individual frozen counter bits, PMU_EVENT_INACTIVE is returned;
- pmu_update_cycles() will go through each counter and update the
values of all PMCs that are counting cycles. This function will be
called every time an MMCR0 update is done to keep counter values
up to date. Upcoming patches will use this function to allow the
counters to be properly updated during read/write of the PMCs
and MMCR1 writes.
Given that the base CPU frequency is fixed at 1GHz for both the powernv
and pseries clocks, the cycle calculation assumes that 1 nanosecond
equals 1 CPU cycle. The cycle value is then calculated by adding the
time elapsed, in nanoseconds, since the last cycle update done via
pmu_update_cycles().
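A standalone sketch of the accounting described above, assuming 1 cycle
per nanosecond (structure and names are illustrative, not the actual
power8-pmu.c code):

    #include <stdbool.h>
    #include <stdint.h>

    #define PMU_NUM_PMCS 6

    struct pmu_state {
        uint64_t pmc[PMU_NUM_PMCS];
        int64_t base_time_ns;   /* when the counters were last folded in */
    };

    /* Add the elapsed time to every PMC currently counting cycles. */
    static void pmu_update_cycles(struct pmu_state *s, int64_t now_ns,
                                  const bool counting_cycles[PMU_NUM_PMCS])
    {
        int64_t elapsed = now_ns - s->base_time_ns;

        for (int i = 0; i < PMU_NUM_PMCS; i++) {
            if (counting_cycles[i]) {
                s->pmc[i] += (uint64_t)elapsed;
            }
        }
        s->base_time_ns = now_ns;
    }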
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch starts an IBM Power8+ compatible PMU implementation by adding
the representation of PMU events that we are going to sample,
PMUEventType. This enum represents a Perf event that is being sampled by
a specific counter 'sprn'. Events that aren't available (i.e. no event
was set in MMCR1) will be of type 'PMU_EVENT_INVALID'. Events that are
inactive due to frozen counter bits state are of type
'PMU_EVENT_INACTIVE'. Other types added in this patch are
PMU_EVENT_CYCLES and PMU_EVENT_INSTRUCTIONS. More types will be added
later on.
Let's also add the required PMU cycle overflow timers. They will be used
to trigger cycle overflows when cycle events are being sampled. This
timer will call cpu_ppc_pmu_timer_cb(), which in turn calls
fire_PMC_interrupt(). Both functions are stubs that will be implemented
later on when EBB support is added.
Two new helper files are created to host this new logic.
cpu_ppc_pmu_init() will init all overflow timers during CPU init time.
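The event types named above could be sketched as the enum below; the
real header may order or extend them differently:

    /* Event sampled by a given PMC; mirrors the types listed above. */
    typedef enum {
        PMU_EVENT_INVALID = 0,    /* no event programmed in MMCR1 */
        PMU_EVENT_INACTIVE,       /* counter frozen via MMCR0 */
        PMU_EVENT_CYCLES,
        PMU_EVENT_INSTRUCTIONS,
    } PMUEventType;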
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This reverts commit 336e91f853.
It breaks the --disable-tcg build:
../target/ppc/excp_helper.c:463:29: error: implicit declaration of
function ‘cpu_ldl_code’ [-Werror=implicit-function-declaration]
We should not have TCG code in powerpc_excp because some kvm-only
routines use it indirectly to dispatch interrupts. See
kvm_handle_debug, spapr_mce_req_event and
spapr_do_system_reset_on_cpu.
We can re-introduce the change once we have split the interrupt
injection code between KVM and TCG.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20211209173323.2166642-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When Altivec support was added to the e6500 kernel in 2012[1], the
QEMU code was not changed, so we don't register the VPU/VPUA
exceptions for the e6500:
qemu: fatal: Raised an exception without defined vector 73
Note that the error message says 73, instead of 32, which is the IVOR
for VPU. This is because QEMU only knows about the VPU interrupt
for the 7400s. In theory, we should not be raising _that_ VPU
interrupt, but instead another one specific for the e6500.
We unfortunately cannot register e6500-specific VPU/VPUA interrupts
because the SPEU/EFPDI interrupts also use IVOR32/33. These are
present only in the e500v1/2 versions. From the user manual:
e500v1, e500v2: only SPEU/EFPDI/EFPRI
e500mc, e5500: no SPEU/EFPDI/EFPRI/VPU/VPUA
e6500: only VPU/VPUA
So I'm leaving IVOR32/33 as SPEU/EFPDI, but altering the dispatch code
to convert the VPU #73 to a #32 when we're in the e6500. Since the
handling for SPEU and VPU is the same, this is the only change that's
needed. The EFPDI is not implemented and will cause an abort. I don't
think it is worth changing the error message to take VPUA into
consideration, so I'm not changing anything there.
This bug was discussed in the thread:
https://lists.gnu.org/archive/html/qemu-ppc/2021-06/msg00222.html
1- https://git.kernel.org/torvalds/c/cd66cc2ee52
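A hypothetical sketch of the remap described above (the e6500 check and
the enum values are stand-ins, not the actual dispatch code):

    #include <stdbool.h>

    /* Stand-ins for the exception numbers mentioned above. */
    enum { POWERPC_EXCP_SPEU = 32, POWERPC_EXCP_VPU = 73 };

    /* On the e6500 there is no SPEU, so route the AltiVec-unavailable
     * interrupt (#73) through the SPEU/IVOR32 path (#32). */
    static int e6500_remap_excp(int excp, bool is_e6500)
    {
        return (is_e6500 && excp == POWERPC_EXCP_VPU) ? POWERPC_EXCP_SPEU : excp;
    }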
Reported-by: <mario@locati.it>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211213133542.2608540-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This instruction has VRT and VRB fields instead of T/TX and B/BX.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211213120958.24443-4-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Victor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20211213120958.24443-3-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>