Slightly rework ppc_cpu_dump_state() to replace the various 'if'
statements with a 'switch'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-10-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PID SPR of the 405 CPU contains the translation ID of the TLB,
which is an 8-bit field. Enforce the mask with a store helper.
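A minimal sketch of such a store helper, assuming the usual env->spr[]
layout (the helper name here is illustrative, not necessarily the exact
one added by this patch):

    /* Sketch: the 405 PID only holds an 8-bit TID, so mask it on store. */
    void helper_store_40x_pid(CPUPPCState *env, target_ulong val)
    {
        env->spr[SPR_40x_PID] = val & 0xFF;
    }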
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-8-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 timers were broken when booke support was added. The assumption
was made that the register numbers were the same, but they are not:
SPR_BOOKE_TSR (0x150)
SPR_BOOKE_TCR (0x154)
SPR_40x_TSR (0x3D8)
SPR_40x_TCR (0x3DA)
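For illustration, any code handling the 40x timers has to dispatch on
the SPR_40x_* numbers rather than the SPR_BOOKE_* ones; a minimal
sketch using the values listed above (the helper name is hypothetical):

    /* Sketch: the 40x TSR/TCR live at 0x3D8/0x3DA, not at the
     * BookE 0x150/0x154 numbers. */
    static bool is_40x_timer_spr(int sprn)
    {
        return sprn == SPR_40x_TSR || sprn == SPR_40x_TCR;
    }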
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Fixes: ddd1055b07 ("PPC: booke timers")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-6-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no need to deactivate MMU logging at compile time. Remove all
use of the defines. Only keep DUMP_PAGE_TABLES for a later series, since
page tables could be dumped from the monitor.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-4-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This facilitates reading the logs when the CPU_LOG_INT mask is
activated. We should do the same for error codes.
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211222064025.1541490-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-3-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The compiler should know better how to inline code if necessary.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
For Radix translation, the EA range is 64 bits. When EA(2:11) are
nonzero, a segment interrupt should occur.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.ibm.com>
Message-Id: <20211231073122.3183583-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
An Event-Based Branch (EBB) allows applications to change the NIA when an
event-based exception occurs. Event-based exceptions are enabled by
setting the Branch Event Status and Control Register (BESCR). If the
event-based exception is enabled when the exception occurs, an EBB
happens.
The following operations happen during an EBB:
- the Global Enable (GE) bit of BESCR is set to 0;
- bits 0-61 of the Event-Based Branch Return Register (EBBRR) are set
to the effective address of the NIA that would have executed if the EBB
hadn't happened;
- instruction fetch and execution continue at the effective address
contained in the Event-Based Branch Handler Register (EBBHR).
The EBB handler will process the event and then execute the Return From
Event-Based Branch (rfebb) instruction. rfebb sets BESCR_GE and then
redirects execution to the address pointed to by EBBRR. This process is
described in the PowerISA v3.1, Book II, Chapter 6 [1].
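The flow above can be summarized with the following pseudo-C sketch;
the structure and the GE bit placement are illustrative, not QEMU's
actual definitions:

    #include <stdint.h>

    #define BESCR_GE (1ULL << 0)        /* assumed bit placement */

    typedef struct {
        uint64_t bescr, ebbrr, ebbhr, nia;
    } EbbRegs;

    /* Taking an EBB: disable further EBBs, save the return NIA, branch. */
    static void ebb_take(EbbRegs *r)
    {
        r->bescr &= ~BESCR_GE;          /* GE <- 0 */
        r->ebbrr = r->nia;              /* save the NIA that would run */
        r->nia = r->ebbhr;              /* continue at the handler */
    }

    /* rfebb: re-enable EBBs and return to the saved address. */
    static void ebb_rfebb(EbbRegs *r)
    {
        r->bescr |= BESCR_GE;
        r->nia = r->ebbrr;
    }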
This patch implements the rfebb instruction. Descriptions of all
relevant BESCR bits are also added - this patch is only using BESCR_GE,
but the next patches will use the remaining bits.
[1] https://wiki.raptorcs.com/w/images/f/f5/PowerISA_public.v3.1.pdf
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-9-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
PM_RUN_INST_CMPL, instructions completed with the run latch set, is
the architected PowerISA v3.1 event defined with PMC4SEL = 0xFA.
Implement it by checking for the CTRL RUN bit before incrementing the
counter. To make this work properly we also need to force a new
translation block each time SPR_CTRL is written. A small tweak in
pmu_increment_insns() is then needed to only increment this event
if the thread has the run latch.
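A hedged sketch of that run-latch check (the CTRL_RUN mask and counter
layout are assumptions for illustration):

    #define CTRL_RUN 0x1                /* assumed RUN bit of CTRL */

    /* Sketch: only count PM_RUN_INST_CMPL when the run latch is set. */
    static void pmc4_count_run_insns(uint64_t *pmc4, uint64_t ctrl,
                                     uint32_t num_insns)
    {
        if (ctrl & CTRL_RUN) {
            *pmc4 += num_insns;         /* PMC4SEL = 0xFA */
        }
    }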
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-8-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PMU is already counting cycles by calculating time elapsed in
nanoseconds. Counting instructions is a different matter and requires
another approach.
This patch adds the capability of counting completed instructions (Perf
event PM_INST_CMPL) by counting the number of instructions translated in
each translation block right before exiting it.
A new pmu_count_insns() helper in translation.c was added to do that.
After verifying that the PMU is counting instructions, call
helper_insns_inc(). This new helper from power8-pmu.c will add the
instructions to the relevant counters. It'll also be responsible for
triggering counter negative overflows as it is already being done with
cycles.
To verify whether the PMU is counting instructions or not, a new hflag
named 'HFLAGS_INSN_CNT' is introduced. This flag matches the internal
state of the PMU. We'll be using this flag to avoid calling
helper_insns_inc() when we do not have a valid instruction event being
sampled.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-7-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerISA v3.1 defines that if the proper bits are set (MMCR0_PMC1CE
for PMC1 and MMCR0_PMCjCE for the remaining PMCs), counter negative
conditions are enabled. This means that if the counter value overflows
(i.e. exceeds 0x80000000) a performance monitor alert will occur. This alert
can trigger an event-based exception (to be implemented in the next patches)
if the MMCR0_EBE bit is set.
For now, overflowing the counter when the PMC is counting cycles will
just trigger a performance monitor alert. This is done by starting the
overflow timer to expire at the moment the overflow would occur. The
timer will call fire_PMC_interrupt() (via cpu_ppc_pmu_timer_cb), which will
trigger the PMU alert and, if the conditions are met, an EBB exception.
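Arming such a timer boils down to something like the sketch below,
using QEMU's virtual clock and the 1 cycle = 1 ns assumption of this
model (the function shape is illustrative):

    /* Sketch: fire when the PMC would cross 0x80000000. */
    static void pmc_arm_overflow_timer(QEMUTimer *overflow_timer,
                                       uint64_t pmc_value)
    {
        if (pmc_value >= 0x80000000ULL) {
            return;                     /* already counter negative */
        }
        timer_mod(overflow_timer,
                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
                  (0x80000000ULL - pmc_value));
    }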
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-6-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
MMCR1 determines the events to be sampled by the PMU. Updating the
counters at every MMCR1 write ensures that we're not sampling more
or less events by looking only at MMCR0 and the PMCs.
It is worth noting that both the Book3S PowerPC PMU and this IBM
Power8+ PMU that we're modeling also use MMCRA, MMCR2 and MMCR3 to
control the PMU. These three registers aren't being handled in this
initial implementation, so for now we're controlling all the PMU
aspects using MMCR0, MMCR1 and the PMCs.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Calling pmu_update_cycles() on every PMC read/write operation ensures
that the values being fetched are up to date with the current PMU state.
In theory we can get away with just trapping PMC reads, but we're going
to trap PMC writes to deal with counter overflow logic later on. Let's
put in the required wiring for that and make our lives a bit easier in
the next patches.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch adds the barebones of the PMU logic by enabling cycle
counting. The overall logic goes as follows:
- MMCR0 reg initial value is set to 0x80000000 (MMCR0_FC set) to avoid
having to spin the PMU right at system init;
- to retrieve the events that are being profiled, pmc_get_event() will
check the current MMCR0 and MMCR1 value and return the appropriate
PMUEventType. For PMCs 1-4, event 0x2 is the implementation dependent
value of PMU_EVENT_INSTRUCTIONS and event 0x1E is the implementation
dependent value of PMU_EVENT_CYCLES. These events are supported by IBM
Power chips since Power8, at least, and the Linux Perf driver makes use
of these events until kernel v5.15. For PMC1, event 0xF0 is the
architected PowerISA event for cycles. Event 0xFE is the architected
PowerISA event for instructions;
- if the counter is frozen, either via the global MMCR0_FC bit or its
individual frozen counter bits, PMU_EVENT_INACTIVE is returned;
- pmu_update_cycles() will go through each counter and update the
values of all PMCs that are counting cycles. This function will be
called every time an MMCR0 update is done, to keep counter values
up to date. Upcoming patches will use this function to allow the
counters to be properly updated during reads/writes of the PMCs
and MMCR1 writes.
Given that the base CPU frequency is fixed at 1 GHz for both the powernv
and pseries clocks, cycle calculation assumes that 1 nanosecond equals
1 CPU cycle. The cycle value is then calculated by adding the time, in
nanoseconds, elapsed since the last cycle update done via pmu_update_cycles().
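In other words, the cycle accounting is just elapsed virtual time,
roughly as in this sketch (field names are illustrative):

    /* Sketch of the 1 ns == 1 cycle accounting. */
    static void pmc_update_cycles_sketch(uint64_t *pmc_value,
                                         int64_t *last_update_ns)
    {
        int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);

        *pmc_value += now - *last_update_ns;   /* elapsed ns == cycles */
        *last_update_ns = now;
    }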
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch starts an IBM Power8+ compatible PMU implementation by adding
the representation of PMU events that we are going to sample,
PMUEventType. This enum represents a Perf event that is being sampled by
a specific counter 'sprn'. Events that aren't available (i.e. no event
was set in MMCR1) will be of type 'PMU_EVENT_INVALID'. Events that are
inactive due to frozen counter bits state are of type
'PMU_EVENT_INACTIVE'. Other types added in this patch are
PMU_EVENT_CYCLES and PMU_EVENT_INSTRUCTIONS. More types will be added
later on.
Let's also add the required PMU cycle overflow timers. They will be used
to trigger cycle overflows when cycle events are being sampled. This
timer will call cpu_ppc_pmu_timer_cb(), which in turn calls
fire_PMC_interrupt(). Both functions are stubs that will be implemented
later on when EBB support is added.
Two new helper files are created to host this new logic.
cpu_ppc_pmu_init() will init all overflow timers during CPU init time.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This reverts commit 336e91f853.
It breaks the --disable-tcg build:
../target/ppc/excp_helper.c:463:29: error: implicit declaration of
function ‘cpu_ldl_code’ [-Werror=implicit-function-declaration]
We should not have TCG code in powerpc_excp because some kvm-only
routines use it indirectly to dispatch interrupts. See
kvm_handle_debug, spapr_mce_req_event and
spapr_do_system_reset_on_cpu.
We can re-introduce the change once we have split the interrupt
injection code between KVM and TCG.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20211209173323.2166642-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When Altivec support was added to the e6500 kernel in 2012 [1], the
QEMU code was not changed, so we don't register the VPU/VPUA
exceptions for the e6500:
qemu: fatal: Raised an exception without defined vector 73
Note that the error message says 73, instead of 32, which is the IVOR
for VPU. This is because QEMU only knows about the VPU interrupt for
the 7400s. In theory, we should not be raising _that_ VPU interrupt,
but instead another one specific to the e6500.
We unfortunately cannot register e6500-specific VPU/VPUA interrupts
because the SPEU/EFPDI interrupts also use IVOR32/33. These are
present only in the e500v1/2 versions. From the user manual:
e500v1, e500v2: only SPEU/EFPDI/EFPRI
e500mc, e5500: no SPEU/EFPDI/EFPRI/VPU/VPUA
e6500: only VPU/VPUA
So I'm leaving IVOR32/33 as SPEU/EFPDI, but altering the dispatch code
to convert the VPU #73 into a #32 when we're on the e6500. Since the
handling for SPEU and VPU is the same, this is the only change that's
needed. The EFPDI is not implemented and will cause an abort. I don't
think it is worth changing the error message to take VPUA into
consideration, so I'm not changing anything there.
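The dispatch tweak amounts to something like the sketch below (the
exception constants exist in QEMU, but the helper shape here is only an
illustration):

    /* Sketch: reuse the IVOR32 (SPEU) vector for the e6500's VPU. */
    static int booke_vpu_to_speu(int excp, bool is_e6500)
    {
        if (is_e6500 && excp == POWERPC_EXCP_VPU) {
            return POWERPC_EXCP_SPEU;   /* #73 -> #32, same handling */
        }
        return excp;
    }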
This bug was discussed in the thread:
https://lists.gnu.org/archive/html/qemu-ppc/2021-06/msg00222.html
1- https://git.kernel.org/torvalds/c/cd66cc2ee52
Reported-by: <mario@locati.it>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211213133542.2608540-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This instruction has VRT and VRB fields instead of T/TX and B/BX.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211213120958.24443-4-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Victor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20211213120958.24443-3-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PPC instructions xsmaxcdp, xsmincdp, xsmaxjdp, and xsminjdp are using
vector registers when they should be using VSX ones. This happens
because the instructions are using GEN_VSX_HELPER_R3, which adds 32
to the register numbers, effectively making them vector registers.
This patch fixes it by changing these instructions to use
GEN_VSX_HELPER_X3.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Victor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20211213120958.24443-2-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
They have been there since 2007 without any board using them; most
were protected by a TODO define. Drop support.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211202191108.1291515-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The exception model id for the 601v has been removed without any
mention of why. I assume it was inadvertent and restore it here.
Fixes: b632a148b6 ("target-ppc: Use QOM method dispatch for MMU fault handling")
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 603e uses the same exception code as 603 so we don't need a
dedicated entry for it.
This is only a removal of redundant code, no functional change.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The Floating-point Unavailable and Decrementer interrupts are being
registered at the same 0x900 address. The FPU should be at 0x800
instead.
Verified on MPC555, MPC860 and MPC885 user manuals.
Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
(Applies to 7441, 7445, 7450, 7451, 7455, 7457, 7447, 7447a and 7448)
The QEMU-side software TLB implementation for the 7450 family of CPUs
is being removed due to lack of known users in the real world. The
last users in the code were removed by the two previous commits.
A brief history:
The feature was added in QEMU by commit 7dbe11acd8 ("Handle all MMU
models in switches...") with the mention that Linux was not able to
handle the TLB miss interrupts and the MMU model would be kept
disabled.
At some point later, commit 8ca3f6c382 ("Allow selection of all
defined PowerPC 74xx (aka G4) CPUs.") enabled the model for the 7450
family without further justification.
Since 2011 [1] we have been unable to run OpenBIOS on the 7450s and
have not heard of any other software that is used with those CPUs in
QEMU. Attempts were made to find a guest OS that implemented the TLB
miss handlers and none was found among Linux 5.15, FreeBSD 13, MacOS9,
MacOSX and MorphOS 3.15.
All CPUs that registered this feature were moved to an MMU model that
replaces the software TLB with a QEMU hardware TLB
implementation. They can now run the same software as the 7400 CPUs,
including the OSes mentioned above.
References:
- https://bugs.launchpad.net/qemu/+bug/812398
- https://gitlab.com/qemu-project/qemu/-/issues/86
- https://lists.nongnu.org/archive/html/qemu-ppc/2021-11/msg00289.html
message id: 20211119134431.406753-1-farosas@linux.ibm.com
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211130230123.781844-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The e600 CPU is a successor of the 7448 and like all the 7450s CPUs,
it has an optional software TLB feature.
We have determined that there is no OS software support for the 7450
software TLB available these days. See the previous commit for more
information.
This patch disables the SPRs and instructions related to software TLB
from the e600 CPU.
No functional change intended. These facilities should be used by the
OS in interrupt handlers for interrupts that QEMU never generates.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211130230123.781844-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
(Applies to 7441, 7445, 7450, 7451, 7455, 7457, 7447 and 7447a)*
Since 2011 [1] we have been unable to run OpenBIOS on the 7450s and
have not heard of any other software that is used with those CPUs in
QEMU. A current discussion [2] shows that the 7450 software TLB is
unsupported in Linux 5.15, FreeBSD 13, MacOS9, MacOSX and MorphOS
3.15. With no known support in firmware or OS, this means that no code
for any of the 7450 CPUs is ever run in QEMU.
Since the implementation in QEMU of the 7400 MMU is the same as the
7450, except for the software TLB vs. hardware TLB search, this patch
changes all 7450 cpus to the 7400 MMU model. This has the practical
effect of disabling the software TLB feature while keeping other
aspects of address translation working as expected.
This allows us to run software on the 7450 family again.
*- note that the 7448 is currently aliased in QEMU to the 7400, so it
is unaffected by this change.
1- https://bugs.launchpad.net/qemu/+bug/812398
   https://gitlab.com/qemu-project/qemu/-/issues/86
2- https://lists.nongnu.org/archive/html/qemu-ppc/2021-11/msg00289.html
message id: 20211119134431.406753-1-farosas@linux.ibm.com
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211130230123.781844-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When computing the predicate "is this value currently formatted
for single precision", we do not want to round the value according
to the current rounding mode, nor perform a floating-point equality.
We want to see if the N bits that make up single-precision are the
only ones set within the register, and then a bitwise equality.
Fixes a bug in which a single-precision NaN is considered !SP,
because float64_eq(nan, nan) is always false.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-35-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no double-rounding bug here, because the result is
merely an estimate to within 1 part in 256, but perform the
operation with float64r32_div for consistency.
Use float_flag_invalid_snan instead of recomputing the
snan-ness of the operand.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-34-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no double-rounding bug here, because the result is
merely an estimate to within 1 part in 32, but perform the
operation with float64r32_div for consistency.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-33-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_mul. Fixes a double-rounding issue with performing
the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-32-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_{add,sub,div}. Fixes a double-rounding issue with
performing the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-31-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_sqrt. Fixes a double-rounding issue with performing
the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-30-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float64r32_muladd. Fixes a double-rounding issue with performing
the computation in float64 and then rounding afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-29-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float_flag_invalid_snan instead of recomputing
the snan-ness of the operand.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-27-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use float_flag_invalid_snan instead of recomputing
the snan-ness of the operand.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-26-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxsqrt and vxsnan are computed directly by softfloat,
we don't need to recompute them. Split out float_invalid_op_sqrt
to be used in several places. This fixes VSX_SQRT, which did
not order its tests correctly to eliminate NaN with sign set.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-25-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We only needed one IEEE arithmetic operation to raise
exceptions. To convert back to register form, we can
use our simpler non-arithmetic function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-24-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxsnan is computed directly by softfloat,
we don't need to recompute it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-23-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Calling helper_frsp directly from other helpers generates
the incorrect retaddr. Split out a helper that takes the
retaddr as a parameter.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-22-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We will process flags other than invalid in helper_float_check_status,
which is invoked after the writeback to FRT.
Fixes a bug in which FRT is not written when OE/UE/XE are enabled.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-21-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Create a common function for all of the madd helpers.
Let the compiler tail call or inline as it chooses.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-20-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vximz, vxisi, and vxsnan are computed directly by
softfloat, we don't need to recompute them. This replaces the
separate float{32,64}_maddsub_update_excp functions with a
single float_invalid_op_madd function.
Fix VSX_MADD by passing sfprf to float_invalid_op_madd,
whereas the previous *_maddsub_update_excp assumed it true.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-19-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Let float64_round_to_int detect and silence snans.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-18-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
In GEN_FLOAT_B, we called helper_reset_fpstatus immediately
before calling helper_fri*. Therefore get_float_exception_flags
is known to be zero, and this code can be simplified.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-17-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is the proper type for the enumeration.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211119160502.17432-16-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no reason the callers can't tail call to one function.
Leave it up to the compiler either way.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211119160502.17432-15-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We were returning nanval for any instance of invalid being set,
but that is incorrect for VXCVI. This failure can be seen
in the float_convs tests.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-14-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxsnan is computed directly by softfloat,
we don't need to recompute it via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-13-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Fixes a bug in which, e.g., XE enabled causes inexact to be raised
before the writeback to the architectural register.
All of the users of GEN_FLOAT_B either set set_fprf, or are one
of the convert-to-integer instructions that require this behaviour.
Split out the two gen_helper_* calls in gen_compute_fprf_float64
and protect only the first with set_fprf.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-12-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxidi, vxzdz, and vxsnan are computed directly by
softfloat, we don't need to recompute them via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-11-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vximz and vxsnan are computed directly by
softfloat, we don't need to recompute them via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-10-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that vxisi and vxsnan are computed directly by
softfloat, we don't need to recompute them via classes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-9-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This commit fixes the difference reported in the bug in the reserved
bit 52. It does this by adding this bit to the mask of bits not to be
directly altered in the ppc_store_fpscr function (the hardware used to
compare against QEMU was a Power9).
Bits 0 to 27 were also added to the mask, as they are marked as
reserved in the PowerISA. Bit 28 is a reserved extension of the DRN
field (bits 29:31) that can't be set using mtfsfi, while the other DRN
bits may be set using the mtfsfi instruction, so bit 28 was also added
to the mask.
Although this is a difference reported in the bug, since it's a reserved
bit it may be a "don't care" case, as put in the bug report. Looking at
the ISA, it doesn't explicitly mention that this bit can't be set, as it
does for FEX and VX, so I'm unsure whether this is necessary.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/266
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20211201163808.440385-4-lucas.araujo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The mtfsf, mtfsfi and mtfsb1 instructions call helper_float_check_status
after updating the value of FPSCR, but helper_float_check_status
checks fp_status, and fp_status isn't updated based on FPSCR; since
the value of fp_status is reset earlier in the instruction, it's
always 0.
Because of this, helper_float_check_status would change the FI bit to 0,
as this bit checks whether the last operation was inexact and
float_flag_inexact is always 0.
These instructions also don't raise exceptions correctly, since
helper_float_check_status raises exceptions based on fp_status.
This commit creates a new helper, helper_fpscr_check_status, that checks
the FPSCR value instead of fp_status and checks for a larger variety of
exceptions than do_float_check_status.
Since fp_status isn't used, gen_reset_fpstatus() was removed.
The hardware used to compare QEMU's behavior against was a Power9.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20211201163808.440385-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When updating the R bit of a PTE, the Hash64 MMU was using a wrong byte
offset, causing the first byte of the adjacent PTE to be corrupted.
This caused a panic when booting FreeBSD, using the Hash MMU.
Fixes: a2dd4e83e7 ("ppc/hash64: Rework R and C bit updates")
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
'tlbivax' is implemented by gen_tlbivax_booke206() via
gen_helper_booke206_tlbivax(). In case the TLB needs to be flushed,
booke206_invalidate_ea_tlb() is called. All these functions but
booke206_invalidate_ea_tlb() use a 64-bit effective address 'ea'.
booke206_invalidate_ea_tlb() uses a uint32_t 'ea' argument that
truncates the original 'ea' value for apparently no particular reason.
This function retrieves the tlb pointer by calling booke206_get_tlbm(),
which also uses a target_ulong address as parameter - in this case, a
truncated 'ea' address. All the surrounding logic considers the
effective TLB address as a 64 bit value, aside from the signature of
booke206_invalidate_ea_tlb().
Last but not least, PowerISA 2.07B section 6.11.4.9 [2] makes it
clear that the effective address "EA" is a 64-bit value.
Commit 01662f3e51 introduced this code and no changes have been made
since. A user reported a problem with tlbivax [1] stating that this
address truncation was the cause. This same behavior might be the source
of several subtle bugs that were never caught.
For all these reasons, this patch assumes that this address truncation
is the result of a mistake/oversight in the original commit, and changes
the type of booke206_invalidate_ea_tlb()'s 'ea' argument to 'vaddr'.
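The prototype change is roughly the following; the surrounding
parameters are an assumption for illustration:

    /* Before: the effective address was silently truncated to 32 bits. */
    static void booke206_invalidate_ea_tlb(CPUPPCState *env, int tlbn,
                                           uint32_t ea);

    /* After: keep the full effective address. */
    static void booke206_invalidate_ea_tlb(CPUPPCState *env, int tlbn,
                                           vaddr ea);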
[1] https://gitlab.com/qemu-project/qemu/-/issues/52
[2] https://wiki.raptorcs.com/wiki/File:PowerISA_V2.07B.pdf
Fixes: 01662f3e51 ("PPC: Implement e500 (FSL) MMU")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/52
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
These instructions should update the GPR indicated by the field RA
instead of RT. This error caused a regression on Mac OS 9 boot and some
graphical glitches in OS X.
Fixes: a39a106634a9 ("target/ppc: Move load and store floating point instructions to decodetree")
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Implemented the instruction XXSPLTIDP using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-23-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the XXSPLTIW instruction, using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-22-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed the function that handles XXSPLTIB emulation to use
decodetree, while keeping the same logic as before.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-20-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed the function that handles XXSPLTW emulation to use decodetree,
while keeping the same logic.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-19-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions plxvp and pstxvp using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-18-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions plxv and pstxv using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-17-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions lxvpx and stxvpx using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-16-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implemented the instructions lxvp and stxvp using decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-15-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Moved stxvx and lxvx implementation from the legacy system to
decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-14-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Moved stxv and lxv implementation from the legacy system to
decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.castro@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-13-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changes get_cpu_vsr to receive a new argument indicating whether the
high or low part of the register is being accessed. This change improves
consistency between the interfaces used to access Vector and VSX
registers and helps to handle endianness in some cases.
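The resulting accessor shape is roughly the sketch below; the offset
helper name and exact signature are assumptions based on the
description above:

    /* Sketch: select the high or low doubleword of a VSX register. */
    static inline void get_cpu_vsr(TCGv_i64 dst, int n, bool high)
    {
        tcg_gen_ld_i64(dst, cpu_env, vsr64_offset(n, high));
    }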
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-12-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Introduce the macro to centralize checking if the VSX facility is
enabled and handle it correctly.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-11-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instructions:
vextdubvlx: Vector Extract Double Unsigned Byte to VSR using
GPR-specified Left-Index
vextduhvlx: Vector Extract Double Unsigned Halfword to VSR using
GPR-specified Left-Index
vextduwvlx: Vector Extract Double Unsigned Word to VSR using
GPR-specified Left-Index
vextddvlx: Vector Extract Double Doubleword to VSR using
GPR-specified Left-Index
vextdubvrx: Vector Extract Double Unsigned Byte to VSR using
GPR-specified Right-Index
vextduhvrx: Vector Extract Double Unsigned Halfword to VSR using
GPR-specified Right-Index
vextduwvrx: Vector Extract Double Unsigned Word to VSR using
GPR-specified Right-Index
vextddvrx: Vector Extract Double Doubleword to VSR using
GPR-specified Right-Index
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-10-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implements the following PowerISA v3.1 instructions:
vinsbvlx: Vector Insert Byte from VSR using GPR-specified Left-Index
vinshvlx: Vector Insert Halfword from VSR using GPR-specified
Left-Index
vinswvlx: Vector Insert Word from VSR using GPR-specified Left-Index
vinsbvrx: Vector Insert Byte from VSR using GPR-specified Right-Index
vinshvrx: Vector Insert Halfword from VSR using GPR-specified
Right-Index
vinswvrx: Vector Insert Word from VSR using GPR-specified Right-Index
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-8-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implements the following PowerISA v3.1 instructions:
vinsw: Vector Insert Word from GPR using immediate-specified index
vinsd: Vector Insert Doubleword from GPR using immediate-specified
index
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-7-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implements the following PowerISA v3.1 instructions:
vinsblx: Vector Insert Byte from GPR using GPR-specified Left-Index
vinshlx: Vector Insert Halfword from GPR using GPR-specified Left-Index
vinswlx: Vector Insert Word from GPR using GPR-specified Left-Index
vinsdlx: Vector Insert Doubleword from GPR using GPR-specified
Left-Index
vinsbrx: Vector Insert Byte from GPR using GPR-specified Right-Index
vinshrx: Vector Insert Halfword from GPR using GPR-specified
Right-Index
vinswrx: Vector Insert Word from GPR using GPR-specified Right-Index
vinsdrx: Vector Insert Doubleword from GPR using GPR-specified
Right-Index
The helpers and do_vinsx receive i64 to allow code sharing with the
future implementation of Vector Insert from VSR using GPR Index.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-6-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
pdepd and pextd helpers are moved out of #ifdef (TARGET_PPC64) to allow
them to be reused as GVecGen3.fni8.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-4-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The signature of do_cntzdm is changed to allow reuse as GVecGen3i.fni8.
The method is also moved out of #ifdef TARGET_PPC64, as PowerISA doesn't
say vclzdm and vctzdm are 64-bit only.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
There's no reason to keep vector-impl.c.inc separate from
vmx-impl.c.inc. Additionally, let GVec handle the multiple calls to
helper_cfuged for us.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211104123719.323713-2-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
ddedpd: DFP Decode DPD To BCD
ddedpdq: DFP Decode DPD To BCD Quad
denbcd: DFP Encode BCD To DPD
denbcdq: DFP Encode BCD To DPD Quad
dscli: DFP Shift Significand Left Immediate
dscliq: DFP Shift Significand Left Immediate Quad
dscri: DFP Shift Significand Right Immediate
dscriq: DFP Shift Significand Right Immediate Quad
Also deleted dfp-ops.c.inc, now that all PPC DFP instructions were
moved to decodetree.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-16-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
dctdp: DFP Convert To DFP Long
dctqpq: DFP Convert To DFP Extended
drsp: DFP Round To DFP Short
drdpq: DFP Round To DFP Long
dcffix: DFP Convert From Fixed
dcffixq: DFP Convert From Fixed Quad
dctfix: DFP Convert To Fixed
dctfixq: DFP Convert To Fixed Quad
dxex: DFP Extract Biased Exponent
dxexq: DFP Extract Biased Exponent Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-15-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
dquai: DFP Quantize Immediate
dquaiq: DFP Quantize Immediate Quad
drintx: DFP Round to FP Integer With Inexact
drintxq: DFP Round to FP Integer With Inexact Quad
drintn: DFP Round to FP Integer Without Inexact
drintnq: DFP Round to FP Integer Without Inexact Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-13-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the following instructions to decodetree:
dtstdc: DFP Test Data Class
dtstdcq: DFP Test Data Class Quad
dtstdg: DFP Test Data Group
dtstdgq: DFP Test Data Group Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-10-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Before moving the existing DFP instructions to decodetree, drop the
nip update that shouldn't be done for these instructions.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-9-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
dctfixqq: DFP Convert To Fixed Quadword Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-8-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
dcffixqq: DFP Convert From Fixed Quadword Quad
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-5-luis.pires@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Fernando Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029192417.400707-4-luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move REQUIRE_ALTIVEC to translate.c and rename it to REQUIRE_VECTOR.
Signed-off-by: Bruno Larsen <bruno.larsen@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Fernando Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211029192417.400707-3-luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
cnttzdm: Count Trailing Zeros Doubleword Under Bit Mask
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-9-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement the following PowerISA v3.1 instruction:
cntlzdm: Count Leading Zeros Doubleword Under Bit Mask
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-8-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-5-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the load floating point instructions (lfs, lfsu, lfsx, lfsux, lfd, lfdu, lfdx, lfdux)
and the store floating point instructions (stfs, stfsu, stfsx, stfsux, stfd, stfdu, stfdx,
stfdux) from the legacy system to decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle <fernando.valle@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-4-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move resolve_PLS_D from fixedpoint-impl.c.inc to translate.c so that
the function can be used not only by fixed-point instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle <phervalle@gmail.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The do_ea_calc function calculates the effective address (EA)
according to PowerISA 3.1. With that, the part of do_ldst() that
calculates the EA was replaced by this new function.
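A hedged sketch of what such an EA helper looks like in the translator;
the NARROW_MODE handling and exact signature are assumptions:

    /* Sketch: EA = (RA ? GPR[RA] : 0) + displ, truncated in 32-bit mode. */
    static TCGv do_ea_calc(DisasContext *ctx, int ra, TCGv displ)
    {
        TCGv ea = tcg_temp_new();

        if (ra) {
            tcg_gen_add_tl(ea, cpu_gpr[ra], displ);
        } else {
            tcg_gen_mov_tl(ea, displ);
        }
        if (NARROW_MODE(ctx)) {
            tcg_gen_ext32u_tl(ea, ea);
        }
        return ea;
    }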
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fernando Eckhardt Valle (pherde) <phervalle@gmail.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211029202424.175401-2-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is not used by, nor required by, user-only.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We ought to have been recording the virtual address for reporting
to the guest trap handler.
Cc: qemu-ppc@nongnu.org
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
By doing this while sending the exception, we will have already
done the unwinding, which makes the ppc_cpu_do_unaligned_access
code a bit cleaner.
Update the comment about the expected instruction format.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Record DAR, DSISR, and exception_index. The last means that
we must exit to cpu_loop ourselves, instead of letting
exception_index be overwritten.
This is exactly what the user-mode ppc_cpu_tlb_fill does,
so simply rename it as ppc_cpu_record_sigsegv.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
I noticed -cpu help printing enough trailing spaces to make the output
at least 84 characters wide. Looks ugly unless the terminal is wider.
Ugly or not, trailing spaces are stupid.
The culprit is this line in x86_cpu_list_entry():
qemu_printf("x86 %-20s %-58s\n", name, desc);
This prints a string with a minimum field width, left-justified, right
before a newline. Change it to
qemu_printf("x86 %-20s %s\n", name, desc);
which avoids the trailing spaces and is simpler to boot.
A search for the pattern with "git-grep -E '%-[0-9]+s\\n'" found a few
more instances. Change them similarly.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20211009152401.2982862-1-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
These will be used to implement new decimal floating point
instructions from Power ISA 3.1.
The remainder is now returned directly by divu128/divs128,
freeing up phigh to receive the high 64 bits of the quotient.
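The new interface shape is roughly as follows; treat the exact
prototypes as an assumption inferred from the description above:

    /* Sketch: the 128-bit quotient comes back in *plow/*phigh and the
     * remainder is the return value. */
    uint64_t divu128(uint64_t *plow, uint64_t *phigh, uint64_t divisor);
    int64_t divs128(uint64_t *plow, int64_t *phigh, int64_t divisor);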
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-4-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In preparation for changing the divu128/divs128 implementations
to allow for quotients larger than 64 bits, move the div-by-zero
and overflow checks to the callers.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-2-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Problem state needs to be able to read and write the PMU counters,
otherwise it won't be aware of any sampling result that the PMU produces
after a Perf run.
This patch does that in a similar fashion as already done in the
previous patches. PMCs 5 and 6 have a special condition, aside from the
constraints that are common with PMCs 1-4, where they are not part of the
PMU if MMCR0_PMCC is 0b11.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-5-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Similar to the previous patch, let's add problem state read/write access to
the MMCR2 SPR, which is also a group A PMU SPR that needs to be filtered
to be read/written by userspace.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-4-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Userspace needs access to PMU SPRs to be able to operate the PMU. One
such SPR is MMCR0.
MMCR0, as defined by PowerISA v3.1, is classified as a 'group A' PMU
register. This class of registers has common read/write rules that are
governed by MMCR0 PMCC bits. MMCR0 is also not fully exposed to problem
state: only MMCR0_FC, MMCR0_PMAO and MMCR0_PMAE bits are
readable/writable in this case.
This patch exposes MMCR0 to userspace by doing the following:
- two new callbacks, spr_read_MMCR0_ureg() and spr_write_MMCR0_ureg(),
are added to be used as problem state read/write callbacks of UMMCR0.
Both callbacks filter the bits userspace is able to
read/write by using MMCR0_UREG_MASK;
- problem state access control is done by the spr_groupA_read_allowed()
and spr_groupA_write_allowed() helpers. These helpers will read the
current PMCC bits from DisasContext and check whether the read/write
MMCR0 operation is valid or not;
- to avoid putting exclusive PMU logic into the already loaded
translate.c file, let's create a new 'power8-pmu-regs.c.inc' file that
will hold all the spr_read/spr_write functions of PMU registers.
The 'power8' in the name of this new file hints at the hardware on
which this PMU logic has been verified. The code has been tested with
the IBM POWER chip family, POWER8 being the oldest version tested. This
doesn't mean that the PMU logic will break with any other PPC64 chip
that implements Book3s, but rather that we can't assert that it works
properly with any Book3s-compliant chip.
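The filtering idea boils down to the sketch below; the helper shape is
an illustration, not the actual translate-time callbacks:

    /* Sketch: only MMCR0_FC, MMCR0_PMAO and MMCR0_PMAE are visible to
     * problem state; everything else is filtered out. */
    #define MMCR0_UREG_MASK (MMCR0_FC | MMCR0_PMAO | MMCR0_PMAE)

    static uint64_t mmcr0_ureg_read(uint64_t mmcr0)
    {
        return mmcr0 & MMCR0_UREG_MASK;
    }

    static uint64_t mmcr0_ureg_write(uint64_t mmcr0, uint64_t new_val)
    {
        return (mmcr0 & ~MMCR0_UREG_MASK) | (new_val & MMCR0_UREG_MASK);
    }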
CC: Gustavo Romero <gustavo.romero@linaro.org>
Signed-off-by: Gustavo Romero <gromero@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-3-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're going to add PMU support for TCG PPC64 chips, based on IBM POWER8+
emulation and following PowerISA v3.1. This requires several PMU related
registers to be exposed to userspace (problem state). PowerISA v3.1
dictates that the PMCC bits of the MMCR0 register control the level of
access that problem state has to the PMU registers.
This patch starts things off by exposing both PMCC bits in hflags,
allowing us to access them via DisasContext in the read/write callbacks
that we're going to add next.
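Conceptually this amounts to mirroring the two PMCC bits into hflags
whenever MMCR0 changes, so the translator can read them back from
DisasContext; a sketch with placeholder bit numbers, not the values
actually chosen in the code:

  #include <stdint.h>

  enum {
      HFLAGS_PMCC0 = 18,   /* placeholder hflags bit numbers */
      HFLAGS_PMCC1 = 19,
  };

  /* Assume PMCC is a 2-bit field of MMCR0 at a placeholder position. */
  static uint32_t pmcc_to_hflags(uint64_t mmcr0)
  {
      uint32_t pmcc = (mmcr0 >> 19) & 0x3;

      return ((pmcc & 1) << HFLAGS_PMCC0)
           | (((pmcc >> 1) & 1) << HFLAGS_PMCC1);
  }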
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-2-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PowerISA says that mtmsr[d] "does not alter MSR[HV], MSR[S], MSR[ME], or
MSR[LE]", but the current code only filters the GPR-provided value if
L=1. This behavior caused some problems in FreeBSD, and a build option
was added to work around the issue [1], but it seems that the bug was
not reported in launchpad/gitlab. This patch addresses the issue in QEMU,
so the option on FreeBSD should no longer be required.
[1] https://cgit.freebsd.org/src/commit/?id=4efb1ca7d2a44cfb33d7f9e18bd92f8d68dcfee0
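The gist of the fix for the L=0 path, as a stand-alone illustration; the
MSR bit positions below are placeholders, the real MSR_* definitions
coming from cpu.h:

  #include <stdint.h>

  #define MSR_HV (1ull << 60)   /* placeholder positions */
  #define MSR_S  (1ull << 41)
  #define MSR_ME (1ull << 12)
  #define MSR_LE (1ull << 0)

  /* Bits that mtmsr[d] must never alter are always kept from the
   * current MSR, even when L=0 and the rest of the register is
   * replaced by the GPR-provided value. */
  static uint64_t mtmsrd_l0_value(uint64_t cur_msr, uint64_t gpr_val)
  {
      const uint64_t keep = MSR_HV | MSR_S | MSR_ME | MSR_LE;

      return (cur_msr & keep) | (gpr_val & ~keep);
  }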
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211015181940.197982-1-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can't read env->xer directly, as it does not contain some bits of
XER. Instead, we should have a callback that uses cpu_read_xer to read
the complete register.
Fixes: da91a00f19 ("target-ppc: Split out SO, OV, CA fields from XER")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211014223234.127012-5-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
env->xer doesn't hold some bits of XER, like OV and CA. To write the
complete register in the core dump we should read XER value with
cpu_read_xer.
Reported-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Fixes: da91a00f19 ("target-ppc: Split out SO, OV, CA fields from XER")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211014223234.127012-4-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The value of XER is split in multiple fields of CPUPPCState, like
env->xer and env->so. To get/set the whole register from gdb, we should
use cpu_read_xer/cpu_write_xer.
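To make the point concrete, here is a stand-alone sketch of why env->xer
alone is not enough; the field layout and bit positions are simplified
placeholders for what cpu_read_xer reassembles:

  #include <stdint.h>

  typedef struct {
      uint64_t xer;          /* everything except the split-out bits */
      uint32_t so, ov, ca;   /* fields split out by da91a00f19 */
  } XerSketch;

  enum { XER_SO = 31, XER_OV = 30, XER_CA = 29 };   /* placeholder bits */

  /* What cpu_read_xer conceptually does: rebuild the full register. */
  static uint64_t read_full_xer(const XerSketch *s)
  {
      return s->xer | ((uint64_t)s->so << XER_SO)
                    | ((uint64_t)s->ov << XER_OV)
                    | ((uint64_t)s->ca << XER_CA);
  }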
Fixes: da91a00f19 ("target-ppc: Split out SO, OV, CA fields from XER")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211014223234.127012-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The mask of the Byte-Reverse Halfword opcode is a read-only
constant. We can avoid using a TCG temporary by moving the
mask to the constant pool.
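The pattern, roughly, assuming the usual tcg-op.h helpers; the operand
names and the mask value are illustrative, not the exact brh code:

  /* Before: a temporary that has to be allocated and freed. */
  static void gen_and_mask_old(TCGv dst, TCGv src)
  {
      TCGv mask = tcg_const_tl(0x00FF00FF00FF00FFULL);

      tcg_gen_and_tl(dst, src, mask);
      tcg_temp_free(mask);
  }

  /* After: the mask comes from the constant pool, nothing to free. */
  static void gen_and_mask_new(TCGv dst, TCGv src)
  {
      tcg_gen_and_tl(dst, src, tcg_constant_tl(0x00FF00FF00FF00FFULL));
  }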
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211003141711.3673181-3-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Avoid using TCG temporaries for the -1 and 8 constant values.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211003141711.3673181-2-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
GDB single-stepping is now handled generically.
Reuse gen_debug_exception to handle architectural debug exceptions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The previous placement in tcg/tcg.h was not logical.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
PowerISA v3.0B made tlbie[l] hypervisor privileged when PRS=0 and HR=1.
To allow the check at translation time, we'll use the HR bit of LPCR to
check the MMU mode instead of the PATE.HR.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210917114751.206845-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a Host Radix field (hr) in DisasContext with LPCR[HR] value to allow
us to decide between Radix and HPT while validating instruction
arguments. Note that PowerISA v3.1 does not require LPCR[HR] and PATE.HR
to match while the thread is in ultravisor/hypervisor real addressing
mode, so ctx->hr may not be valid when ctx->hv is set and data
relocation is off.
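A sketch of the plumbing, with stand-in names and a placeholder hflags
bit rather than the actual DisasContext layout:

  #include <stdbool.h>
  #include <stdint.h>

  enum { HFLAGS_HR = 2 };   /* placeholder hflags bit number */

  typedef struct {
      bool hv;
      bool dr;
      bool hr;   /* LPCR[HR] as captured when the TB was translated */
  } DisasCtxSketch;

  static void init_ctx_sketch(DisasCtxSketch *ctx, uint32_t hflags)
  {
      ctx->hr = (hflags >> HFLAGS_HR) & 1;
      /* In hypervisor real addressing mode (hv set, relocation off),
       * LPCR[HR] is not required to match PATE.HR, so don't rely on
       * hr there. */
  }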
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20210917114751.206845-2-matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
According to the ISA, CR should be set based on the source value, and
not on the packed decimal result.
The way this was implemented would cause GT, LT and EQ to be set
incorrectly when the source value was too large and the 31 least
significant digits of the packed decimal result ended up being all zero.
This would happen for source values of +/-10^31, +/-10^32, etc.
The new implementation fixes this and also skips the result calculation
altogether in case of src overflow.
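The fixed logic amounts to something like the stand-alone sketch below;
the real helper works on ppc_avr_t values and QEMU's CRF_* constants,
and the bit layout here is a placeholder (requires a compiler with
__int128):

  /* CR field layout placeholders: LT, GT, EQ, overflow. */
  enum { CRF_LT = 8, CRF_GT = 4, CRF_EQ = 2, CRF_SO = 1 };

  static int cr_from_source(__int128 src, int src_overflowed)
  {
      int cr;

      /* LT/GT/EQ are decided by the source value, not by the (possibly
       * truncated) packed decimal result. */
      if (src < 0) {
          cr = CRF_LT;
      } else if (src > 0) {
          cr = CRF_GT;
      } else {
          cr = CRF_EQ;
      }
      if (src_overflowed) {
          cr |= CRF_SO;   /* flag the overflow; result is skipped */
      }
      return cr;
  }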
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Message-Id: <20210823150235.35759-1-luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
While we may have had some thought of allowing system-mode
to return from this hook, we have no guests that require this.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is nothing target specific about this. The implementation
is host specific, but the declaration is 100% common.
Reviewed-By: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Restrict cpu_exec_interrupt() and its callees to sysemu.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20210911165434.531552-18-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
[rth: Split out of a larger patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
As vector registers are stored in host endianness, we shouldn't swap
their 64-bit elements in user mode. Add a 16-byte case in
ppc_maybe_bswap_register to handle the reordering of elements in softmmu
and remove avr_need_swap which is now unused.
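One plausible shape for the new 16-byte case, assuming the register
image simply needs a full byte reversal when guest and gdb byte orders
differ; the actual code may instead express this as two 64-bit swaps
plus an element exchange, which is equivalent:

  #include <stdint.h>

  static void bswap_16byte_reg(uint8_t *mem_buf)
  {
      /* Full 16-byte reversal: bswap each doubleword and swap the two. */
      for (int i = 0; i < 8; i++) {
          uint8_t tmp = mem_buf[i];

          mem_buf[i] = mem_buf[15 - i];
          mem_buf[15 - i] = tmp;
      }
  }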
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826145656.2507213-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These helpers shouldn't depend on the host endianness, as they only use
shifts, ands, and int128_* methods.
Fixes: 60caf2216b ("target-ppc: add vextu[bhw][lr]x instructions")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826141446.2488609-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Hypervisor Decrementer exception should not be generated while the
CPU is in power-saving mode (see cpu_ppc_hdecr_excp()). However,
discarding the exception before entering the power-saving mode is
wrong since we would lose a previously generated HDEC.
Fixes: 4b236b621b ("ppc: Initial HDEC support")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The POWER10 DD2 CPU adds an extra LPCR[HAIL] bit. DD1 doesn't have
HAIL, but since accepting it does not break the modeling and we don't
plan to support DD1, modify the LPCR mask of the whole POWER10 family.
Setting the HAIL bit is a requirement to support the scv instruction
on PowerNV POWER10 platforms since glibc-2.33.
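In effect the change is one more bit accepted by the POWER10 LPCR mask;
a sketch with a placeholder bit position, the real LPCR_HAIL definition
living in cpu.h:

  #include <stdint.h>

  #define LPCR_HAIL (1ull << 26)   /* placeholder position */

  /* Conceptually what is done for every CPU class of the P10 family. */
  static uint64_t power10_lpcr_mask(uint64_t base_mask)
  {
      return base_mask | LPCR_HAIL;
  }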
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-2-clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move store_40x_sler from mmu_common.c to helper_regs.c: it stores a
value into a special purpose register, so a file focused on special
register manipulation is a more appropriate home.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-4-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ppc_store_sdr1 was originally in mmu_helper.c and was moved out as part
of the patches enabling the disable-tcg option; it is now being moved
back to a file that will be compiled with that option.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-3-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Divide mmu_helper.c into two files: functions inside #ifdef
CONFIG_SOFTMMU stay in mmu_helper.c, the other functions move to
mmu_common.c. Update meson.build to always compile mmu_common.c and to
compile mmu_helper.c only when CONFIG_TCG is set.
Move function declarations, #defines and structs used by both files to
internal.h, except for the declarations that rely on structures defined
in cpu.h, which move to cpu.h.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-2-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
arch_init.h only defines the QEMU_ARCH_* enumeration and the
arch_type global. Don't include it in files that don't use those.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210730105947.28215-8-peter.maydell@linaro.org
In commit 8f0a4b6a9b, we started to require L=0 for ppc32 to match what
The Programming Environments Manual says:
"For 32-bit implementations, the L field must be cleared, otherwise
the instruction form is invalid."
The stricter behavior, however, broke AROS boot on sam460ex, which is a
regression from 6.0. This patch partially reverts the change, raising
the exception only for CPUs known to require L=0 (e500 and e500mc) and
logging a guest error for other cases.
Both behaviors are acceptable under the PowerISA, which allows "the system
illegal instruction error handler to be invoked or yield boundedly
undefined results."
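A sketch of the relaxed validation; the helper name and the guest-error
reporting are illustrative only, QEMU's actual code using its own
logging and exception helpers:

  #include <stdbool.h>
  #include <stdio.h>

  /* Return true if the compare should execute, false if the
   * invalid-form exception should be raised. */
  static bool cmp_l_field_ok(bool is_ppc64, bool is_e500_family, int l_field)
  {
      if (is_ppc64 || l_field == 0) {
          return true;
      }
      if (is_e500_family) {
          return false;   /* these CPUs are known to require L=0 */
      }
      /* Everyone else: log it and carry on, boundedly undefined per ISA. */
      fprintf(stderr, "guest error: invalid cmp form, L=1 on a 32-bit CPU\n");
      return true;
  }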
Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Fixes: 8f0a4b6a9b ("target/ppc: Move cmp/cmpi/cmpl/cmpli to decodetree")
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210720135507.2444635-1-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The hook is now unused, with breakpoints checked outside translation.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Always provide the atomic interface using TCGMemOpIdx oi
and uintptr_t retaddr. Rename from helper_* to cpu_* so
as to (mostly) match the exec/cpu_ldst.h functions, and
to emphasize that they are not callable from TCG directly.
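For flavour, the shape of the renamed entry points is roughly as below;
this prototype is paraphrased for illustration, not copied from the
generated headers, and the types are stand-ins:

  #include <stdint.h>

  typedef uint32_t TCGMemOpIdx;             /* stand-in for the real typedef */
  typedef struct CPUArchState CPUArchState; /* opaque here */

  /* One representative entry point: cpu_* naming, oi + retaddr args. */
  uint32_t cpu_atomic_cmpxchgl_le_mmu(CPUArchState *env, uint64_t addr,
                                      uint32_t cmpv, uint32_t newv,
                                      TCGMemOpIdx oi, uintptr_t retaddr);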
Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>