Introduce a new powerpc_excp function specific for PowerPC 7xx CPUs
(740, 745, 750, 750cl, 750cx, 750fx, 750gx, 755). This commit copies
powerpc_excp_legacy verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Since we've split the exception code by exception model, the exception
model IDs are becoming less useful. These two can be merged.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 6xx CPUs don't have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
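As a rough model of what this enables (illustration only, not the QEMU diff; the struct and function names here are invented), exception entry on a CPU without alternate save/restore registers can write SRR0/SRR1 directly:

    #include <stdint.h>

    /* Minimal model of 6xx exception entry: no HSRR0/HSRR1 exist, so the
     * return address and saved MSR always go to SRR0/SRR1. */
    struct cpu6xx {
        uint32_t nip;    /* next instruction pointer */
        uint32_t msr;    /* machine state register */
        uint32_t srr0;   /* save/restore register 0: return address */
        uint32_t srr1;   /* save/restore register 1: saved MSR */
    };

    static void take_exception(struct cpu6xx *cpu, uint32_t vector, uint32_t new_msr)
    {
        cpu->srr0 = cpu->nip;   /* where rfi will resume */
        cpu->srr1 = cpu->msr;   /* machine state at the time of the exception */
        cpu->msr  = new_msr;
        cpu->nip  = vector;
    }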
This code applies only to the 6xx CPUs, so we can remove the switch
statement.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no HV support in the 6xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no Hypervisor mode in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no ESR in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no Hypervisor mode in the 6xx, so remove all LPES0 logic.
Also remove BookE IRQ code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in the 6xx CPUs.
Also remove the 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This only applies to the G2s; the other 6xx CPUs will not have this
vector registered.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 6xx CPUs
(603, 604, G2, MPC5xx, MPC8xx). This commit copies powerpc_excp_legacy
verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We don't need three separate exception model IDs for the 603, 604 and
G2.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerPC 601 processor is the first generation of processors to
implement the PowerPC architecture. It was designed as a bridge
processor and also could execute most of the instructions of the
previous POWER architecture. It was found in the first PowerPC Macs and
IBM RS/6000 workstations.
There is not much interest in keeping the CPU model of this
POWER-PowerPC bridge processor. We have the 603 and 604 CPU models of
the 60x family which implement the complete PowerPC instruction set.
Cc: "Hervé Poussineau" <hpoussin@reactos.org>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203142756.1302515-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ppc_radix64_partition_scoped_xlate() logs the host page protection
bits variable before it has been initialized; the value is only set
later, in ppc_radix64_check_prot(). Remove it from the output.
Fixes: Coverity CID 1468942
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20220203142145.1301749-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
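The underlying bug pattern is worth spelling out: a local variable is passed to logging before the call that defines it. A reduced, self-contained example with invented names (not the QEMU code):

    #include <stdio.h>

    static void check_prot(int want, int *prot)
    {
        *prot = want & 0x7;     /* 'prot' only becomes defined here */
    }

    static void xlate(int want)
    {
        int prot;               /* uninitialized */

        /* BAD: reading 'prot' here would log an indeterminate value:
         *   printf("xlate: prot=%d\n", prot);
         */
        check_prot(want, &prot);
        printf("xlate: prot=%d\n", prot);   /* OK: log after it is set */
    }

    int main(void)
    {
        xlate(5);
        return 0;
    }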
There is no MSR_HV in BookE, so remove all of the HV logic.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the switch as this function applies to BookE only.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
QEMU does not support BookE as a hypervisor.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
BookE has no DSISR or DAR. The proper registers ESR and DEAR were
already set at this point.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no LPES0 in BookE and no MSR_HV.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
SRR1 should be set to the MSR value; there are no diagnostic bits
in SRR1 on BookE.
Note that this fixes a bug where MSR_GS would be set and Linux would
enter its KVM code even though there is no KVM guest.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
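A tiny model of the difference (invented names and an illustrative bit position, not the QEMU code): the server-style path decorates SRR1 with diagnostic bits, and on BookE those bit positions collide with real MSR bits such as MSR_GS.

    #include <stdint.h>

    #define MSR_GS_ILLUSTRATIVE  (1u << 28)   /* stand-in for BookE's guest-state bit */

    /* Server-style: SRR1 = MSR plus exception-specific diagnostic bits. */
    static uint32_t srr1_server(uint32_t msr, uint32_t diag_bits)
    {
        return msr | diag_bits;   /* diag_bits may alias MSR_GS on BookE */
    }

    /* BookE: SRR1 is simply the MSR at the time of the exception. */
    static uint32_t srr1_booke(uint32_t msr)
    {
        return msr;
    }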
There is no DSISR or DAR in BookE. Change to ESR and DEAR.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in BookE.
Also remove 40x code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for BookE CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This CPU was already partially removed in 2017, due to lack of support,
by commit aef7796057 ("ppc: remove non implemented cpu models").
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220128221611.1221715-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
A mostly generic test for unaligned access raising SIGBUS.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
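The essential shape of such a test (a sketch, not the file the commit adds): install a SIGBUS handler, perform a deliberately misaligned access, and report which path was taken.

    #include <setjmp.h>
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static sigjmp_buf jb;

    static void handle_sigbus(int sig)
    {
        (void)sig;
        siglongjmp(jb, 1);
    }

    int main(void)
    {
        char buf[16] __attribute__((aligned(8))) = { 0 };
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handle_sigbus;
        sigaction(SIGBUS, &sa, NULL);

        if (sigsetjmp(jb, 1) == 0) {
            /* Misaligned 4-byte load: traps only where alignment is
             * enforced, in which case we land in the handler. */
            volatile uint32_t v = *(volatile uint32_t *)(buf + 1);
            (void)v;
            puts("unaligned access completed");
        } else {
            puts("caught SIGBUS");
        }
        return 0;
    }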
This is kinda sorta the opposite of the other tcg hosts, where
we get (normal) alignment checks for free with host SIGBUS and
need to add code to support unaligned accesses.
This inline code expansion is somewhat large, but it takes quite
a few instructions to make a function call to a helper anyway.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Due to mapping changes, we now rarely place the code_gen_buffer
near the main executable, which means that direct calls will now
rarely be in range.
So, always use indirect calls for tail calls, which allows us to
avoid clobbering %o7, and therefore we need not save and restore it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
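The constraint driving this: SPARC's direct CALL encodes a signed 30-bit word displacement (about +/-2 GiB), and a code_gen_buffer mapped far from the executable easily falls outside it. A generic fit check (hypothetical helper, not the backend's code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Does 'target' fit the signed 30-bit word displacement of a direct
     * CALL emitted at 'pc'? */
    static bool direct_call_in_range(uintptr_t target, uintptr_t pc)
    {
        intptr_t disp = (intptr_t)(target - pc) >> 2;   /* word displacement */

        return disp >= -((intptr_t)1 << 29) && disp < ((intptr_t)1 << 29);
    }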
Since 7ecd02a06f, if patch_reloc fails we restart translation
with a smaller TB. SPARC had its function signature changed,
but not the logic. Replace assert with return false.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
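In outline, the change turns an encoding failure into a recoverable condition (schematic code with an invented 22-bit branch field, not the actual backend):

    #include <stdbool.h>
    #include <stdint.h>

    /* Patch a PC-relative branch.  If the displacement no longer fits the
     * field, report failure so the caller can retry with a smaller TB,
     * instead of assert()-ing. */
    static bool patch_branch_reloc(uint32_t *insn, intptr_t target)
    {
        intptr_t disp = (target - (intptr_t)insn) >> 2;    /* word displacement */

        if (disp < -(1 << 21) || disp >= (1 << 21)) {      /* signed 22-bit field */
            return false;                                  /* was: an assert */
        }
        *insn = (*insn & ~UINT32_C(0x3fffff)) | ((uint32_t)disp & 0x3fffff);
        return true;
    }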
We had code for checking for 13-bit and 21-bit shifted constants,
but we can do better and allow 32-bit shifted constants.
This is still 2 insns shorter than the full 64-bit sequence.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
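The recognition step is simple arithmetic: strip the trailing zeros and check whether what remains fits in 32 bits; if so, emit the 32-bit constant and one shift. A sketch (hypothetical helper, not the backend code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Can 'val' be materialized as a 32-bit constant shifted left?
     * If so, return the 32-bit base and the shift count. */
    static bool match_shifted_u32(uint64_t val, uint32_t *base, int *shift)
    {
        int s;

        if (val == 0) {
            return false;               /* zero is handled by simpler paths */
        }
        s = __builtin_ctzll(val);
        if ((val >> s) <= UINT32_MAX) {
            *base = (uint32_t)(val >> s);
            *shift = s;
            return true;
        }
        return false;
    }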
This will allow us to control exactly what scratch register is
used for loading the constant.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Handle 32-bit constants with a separate function, so that
tcg_out_movi_int does not need to recurse. This slightly
rearranges the order of tests for small constants, but
produces the same output.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When BH is constant, it is constrained to 11 bits for use in MOVCC.
For the cases in which we must load the constant BH into a register,
we do not need the full logic of tcg_out_movi; we can use the simpler
function for emitting a 13 bit constant.
This eliminates the only case in which TCG_REG_T2 was passed to
tcg_out_movi, which will shortly become invalid.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can use the routines just added for user-only to emit
unaligned accesses in softmmu mode too.
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is kinda sorta the opposite of the other tcg hosts, where
we get (normal) alignment checks for free with host SIGBUS and
need to add code to support unaligned accesses.
Fortunately, the ISA contains pairs of instructions that are
used to implement unaligned memory accesses. Use them.
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
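For reference, the classic pairs are LWL/LWR (with LDL/LDR, SWL/SWR and SDL/SDR for the other width and for stores): each instruction of a pair handles the bytes on its side of the alignment boundary, and together they assemble the full value without an alignment trap. The net effect, expressed portably (illustration only):

    #include <stdint.h>

    /* What an LWL/LWR pair achieves for a 32-bit big-endian load from an
     * arbitrarily aligned address, written as plain byte accesses. */
    static uint32_t load_be32_any_alignment(const uint8_t *p)
    {
        return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16
             | (uint32_t)p[2] << 8  | (uint32_t)p[3];
    }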
Reserve a register for the guest_base, using the aarch64 backend as a
reference.
By doing so, we do not have to recompute it for every memory load.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
From armv6, the architecture supports unaligned accesses.
All we need to do is perform the correct alignment check
in tcg_out_tlb_read.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
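With the host now tolerating unaligned data, the only check the TLB fast path must keep is the one the guest access itself requires: fault when any of the low alignment bits of the address are set. In generic form (not the arm backend's actual code):

    #include <stdbool.h>
    #include <stdint.h>

    /* An access that must be (1 << a_bits)-byte aligned is misaligned
     * exactly when any of the low a_bits of the address are set; backends
     * typically fold this mask into the TLB comparison they already do. */
    static bool access_is_misaligned(uint64_t addr, unsigned a_bits)
    {
        return (addr & ((UINT64_C(1) << a_bits) - 1)) != 0;
    }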
We will shortly allow the use of unaligned memory accesses,
and these require proper alignment. Use get_alignment_bits
to verify and remove USING_SOFTMMU.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is now always true, since we require armv6.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is now always true, since we require armv6.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Support for unaligned accesses is difficult for pre-v6 hosts.
While Debian still builds for armv4, we cannot use a compile-time
test, so test the architecture at runtime and error out.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
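One way to perform such a runtime check on Linux (a sketch under the assumption that AT_PLATFORM is available and carries a string like "v5l"/"v6l"/"v7l"; not necessarily the code the commit adds):

    #include <elf.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/auxv.h>

    /* Refuse to run on pre-v6 ARM hosts. */
    static void check_host_arch(void)
    {
        const char *pl = (const char *)getauxval(AT_PLATFORM);
        int arch = 0;

        if (pl && pl[0] == 'v' && pl[1] >= '4' && pl[1] <= '9') {
            arch = pl[1] - '0';
        }
        if (arch < 6) {
            fprintf(stderr, "ARMv%d host is not supported\n", arch);
            exit(EXIT_FAILURE);
        }
    }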
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220106134238.3936163-1-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>