Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For 128-bit load/store, use 16-byte alignment. This
requires that we perform the two operations in the
correct order so that we generate the alignment fault
before modifying memory.
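As an illustration of the ordering (a sketch only; lo, hi, addr, tmp and memidx are placeholder names, and this shows the store case), the 16-byte alignment requirement goes on the first half so the fault is raised before anything is written:
    TCGv_i64 tmp = tcg_temp_new_i64();
    /* A misaligned address faults here, before any byte is stored. */
    tcg_gen_qemu_st_i64(lo, addr, memidx, MO_64 | MO_ALIGN_16);
    tcg_gen_addi_i64(tmp, addr, 8);
    tcg_gen_qemu_st_i64(hi, tmp, memidx, MO_64);
    tcg_temp_free_i64(tmp);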
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the case of gpr load, merge the size and is_signed arguments;
otherwise, simply convert size to memop.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adjust the interface to match what has been done to the
TCGv_i32 load/store functions.
This is less obvious, because at present the only user of
these functions, trans_VLDST_multiple, also wants to manipulate
the endianness to speed up loading multiple bytes. Thus we
retain an "internal" interface which is identical to the
current gen_aa32_{ld,st}_i64 interface.
The "new" interface will gain users as we remove the legacy
interfaces, gen_aa32_ld64 and gen_aa32_st64.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Just because we're operating on a TCGv_i64 temporary does not
mean that we're performing a 64-bit operation. Restrict
the frobbing to actual 64-bit operations.
This bug is not currently visible because all current
users of these two functions always pass MO_64.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the only caller. Adjust some commentary to talk
about SCTLR_B instead of the vanishing function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a finalize_memop function that computes alignment and
endianness and returns the final MemOp for the operation.
Split out gen_aa32_{ld,st}_internal_i32 which bypasses any special
handling of endianness or alignment. Adjust gen_aa32_{ld,st}_i32
so that s->be_data is not added by the callers.
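A minimal sketch of the shape of such a function (assuming a DisasContext with align_mem and be_data members; not the verbatim implementation):
    static MemOp finalize_memop(DisasContext *s, MemOp opc)
    {
        /* Only add an alignment requirement if the caller did not
         * specify one and the CPU mandates aligned accesses. */
        if (s->align_mem && !(opc & MO_AMASK)) {
            opc |= MO_ALIGN;
        }
        /* Fold in the endianness so callers no longer add s->be_data. */
        return opc | s->be_data;
    }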
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use this to signal when memory access alignment is required.
This value comes from the CCR register for M-profile, and
from the SCTLR register for A-profile.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that other bits have been moved out of tb->flags,
there's no point in filling from the top.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that these bits have been moved out of tb->flags,
where TBFLAG_ANY was filling from the top, move AM32
to fill from the top, and A32 and M32 to fill from the
bottom. This means fewer changes when adding new bits.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that we have all of the proper macros defined, expanding
the CPUARMTBFlags structure and populating the two TB fields
is relatively simple.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for splitting tb->flags across multiple
fields, introduce a structure to hold the value(s).
So far this only migrates the one uint32_t and fixes
all of the places that require adjustment to match.
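As an illustration, the structure can start out as a plain wrapper around the existing word (sketch; the final layout may differ):
    typedef struct CPUARMTBFlags {
        uint32_t flags;   /* the former tb->flags value, unchanged for now */
    } CPUARMTBFlags;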
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We're about to split tbflags into two parts. These macros
will ensure that the correct part is used with the correct
set of bits.
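A rough sketch of what such accessors can look like, built on the existing FIELD_DP32/FIELD_EX32 helpers (names and details are illustrative):
    /* Each accessor names the flags word its bits belong to, so a
     * caller cannot pair a bit with the wrong word. */
    #define DP_TBFLAG_ANY(DST, WHICH, VAL) \
        (DST = FIELD_DP32(DST, TBFLAG_ANY, WHICH, VAL))
    #define EX_TBFLAG_ANY(IN, WHICH) \
        FIELD_EX32(IN, TBFLAG_ANY, WHICH)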
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We're about to rearrange the macro expansion surrounding tbflags,
and this field name will be expanded using the bit definition of
the same name, resulting in a token pasting error.
So PSTATE_SS -> PSTATE__SS in the uses, and document it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We're about to rearrange the macro expansion surrounding tbflags,
and this field name will be expanded using the bit definition of
the same name, resulting in a token pasting error.
So SCTLR_B -> SCTLR__B in the 3 uses, and document it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The encoding of size = 2 and size = 3 had the incorrect decode
for align, overlapping the stride field. This error was hidden
by what should have been unnecessary masking in translate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The log2_esize parameter is not used except trivially.
Drop the parameter and the deferral to gen_mte_check1.
This fixes a bug in that the parameters as documented
in the header file were the reverse of those in the
implementation, which meant that translate-sve.c was
passing the parameters in the wrong order.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that mte_check1 and mte_checkN have been merged, we can
merge sve_cont_ldst_mte_check1 and sve_cont_ldst_mte_checkN,
which means that we can eliminate the function pointer into
sve_ldN_r and sve_stN_r and call sve_cont_ldst_mte_check directly.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For consistency with the mte_check1 + mte_checkN merge
to mte_check, rename the probe function as well.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The mte_check1 and mte_checkN functions are now identical.
Drop mte_check1 and rename mte_checkN to mte_check.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
After recent changes, mte_checkN does not use ESIZE,
and mte_check1 never used TSIZE. We can combine the
two into a single field: SIZEM1.
Choose to pass size - 1 because size == 0 is never used,
our immediate need in mte_probe_int is for the address
of the last byte (ptr + size - 1), and since almost all
operations are powers of 2, this makes the immediate
constant one bit smaller.
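For example, a 32-byte access is encoded as 0x1f rather than 0x20, which fits in five bits instead of six.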
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We were incorrectly assuming that only the first byte of an MTE access
is checked against the tags. But per the ARM, unaligned accesses are
pre-decomposed into single-byte accesses. So by the time we reach the
actual MTE check in the ARM pseudocode, all accesses are aligned.
We cannot tell a priori whether or not a given scalar access is aligned,
therefore we must at least check. Use mte_probe_int, which is already
set up for checking multiple granules.
Buglink: https://bugs.launchpad.net/bugs/1921948
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split out a helper function from mte_checkN to perform
all of the checking and address manipulation. So far,
just use this in mte_checkN itself.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We were incorrectly assuming that only the first byte of an MTE access
is checked against the tags. But per the ARM, unaligned accesses are
pre-decomposed into single-byte accesses. So by the time we reach the
actual MTE check in the ARM pseudocode, all accesses are aligned.
Therefore, the first failure is always either the first byte of the
access, or the first byte of the granule.
In addition, some of the arithmetic is off for last-first -> count.
This does not become directly visible until a later patch that passes
single bytes into this function, at which point ptr == ptr_last.
Buglink: https://bugs.launchpad.net/bugs/1921948
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked a comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Arm ARM specifies that for Thumb encodings of the various plain
store insns, if the Rn field is 1111 then we must UNDEF. This is
different from the Arm encodings, where this case is either
UNPREDICTABLE or has well-defined behaviour. The exclusive stores,
store-release and STRD do not have this UNDEF case for any encoding.
Enforce the UNDEF for this case in the Thumb plain store insns.
Fixes: https://bugs.launchpad.net/qemu/+bug/1922887
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210408162402.5822-1-peter.maydell@linaro.org
We can remove PAGE_WRITE when (internally) marking a page read-only
because it contains translated code. This can get confused when we are
executing signal return code on signal stacks.
Fixes: e56552cf07 ("target/s390x: Implement the MVPG condition-code-option bit")
Found-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-id: 20210422154427.13038-1-alex.bennee@linaro.org
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When adding this file and its new content in commit 3f7a927847
("target/mips: LSA/DLSA R6 decodetree helpers") I made two mistakes:
1: Listed authors who haven't been involved in its development,
2: Used an incorrect GNU GPLv2 license text (using 'and' instead
of 'or').
Instead of correcting the GNU GPLv2 license text, replace the license
by the 'GNU LGPL v2.1 or later' one, to be coherent with the other
translation files in the target/mips/ folder.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210420100633.1752440-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is a leftover erroneous check from the days when front ends handled
io start/end themselves. Regardless, just because IO could be performed
on the last instruction doesn't obligate the front end to do so.
This fixes an abort faced by the aspeed execute-in-place support which
will necessarily trigger this state (even before the one-shot
CF_LAST_IO fix). The test still seems to hang once it attempts to boot
the Linux kernel but I suspect this is an unrelated issue with icount
and the timer handling code.
The original intention of the cpu_abort (added in commit 2e70f6efa8
when the icount stuff was first added) seems to have been to act as
an assert() to catch an unhandled corner case where the generated code
would be something like:
    conditional branch to condlabel if its cc failed
    implementation of the insn (a conditional branch or trap)
    code emitted by gen_io_end()
  condlabel:
    gen_goto_tb or equivalent thing to go to next insn
At runtime the cc-failed case would skip over the code emitted by
gen_io_end(), leaving the can_do_io flag incorrectly set.
In commit ba3e792669 we switched to an implementation which
always clears can_do_io at the start of the following TB instead
of trying to clear it at the end of a TB that did IO. So the corner
case that this cpu_abort() was trying to flag is no longer possible,
because the gen_io_end() call has been deleted. We can therefore
safely remove the no-longer-valid assertion.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210416170207.12504-1-alex.bennee@linaro.org
Cc: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix a TCG temporary leak when translating CACHE opcode.
Fixes: 0d74a222c2 ("make ITC Configuration Tags accessible to the CPU")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210406202857.1440744-1-f4bug@amsat.org>
We can remove PAGE_WRITE when (internally) marking a page
read-only because it contains translated code.
This can be triggered by tests/tcg/aarch64/bti-2, after
having serviced SIGILL trampolines on the stack.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Found the following CPU feature bits missing from the EPYC-Rome model.
ibrs : Indirect Branch Restricted Speculation
ssbd : Speculative Store Bypass Disable
These new features will be added in EPYC-Rome-v2. The -cpu help output
after the change:
  x86 EPYC-Rome       (alias configured by machine type)
  x86 EPYC-Rome-v1    AMD EPYC-Rome Processor
  x86 EPYC-Rome-v2    AMD EPYC-Rome Processor
Reported-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <161478622280.16275.6399866734509127420.stgit@bmoger-ubuntu>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This reverts commit f7fb73b8cd.
This change turned out to be a bit half-baked, and doesn't
work with KVM, which fails with the error:
"qemu-system-aarch64: Failed to retrieve host CPU features"
because KVM does not allow accessing of the PMCR_EL0 value in
the scratch "query CPU ID registers" VM unless we have first
set the KVM_ARM_VCPU_PMU_V3 feature on the VM.
Revert the change for 6.0.
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20210331154822.23332-1-peter.maydell@linaro.org
This patch handles icount mode for timer read/write instructions,
because it is required to call gen_io_start in such cases.
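The usual shape of such a change is roughly the following (a sketch; the guard and the exact call sites depend on the translator, and ctx is a placeholder):
    /* Mark the start of an I/O instruction before the timer access so
     * icount accounting stays exact in record/replay mode. */
    if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
        gen_io_start();
    }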
Signed-off-by: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <161700373035.1135822.16451510827008616793.stgit@pasha-ThinkPad-X280>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The xtensa_modules variable defined in each xtensa-modules.c.inc is only
used locally by the including file. Make it static.
Reported-by: Yury Gribov <tetra2005@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
import_core.sh tries to change Makefile.objs when importing a new xtensa
core, but that file no longer exists. Rewrite the meson.build rule to pick
up all source files that match the core-*.c pattern and drop the commands
that change Makefile.objs.
Cc: qemu-stable@nongnu.org # v5.2.0
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Python scripts are not inputs, yet they are being put in @INPUT@. This
puts requirements on the command line format, keeping all inputs
close to the name of the script. Avoid that by not listing the script
in the inputs and not referencing it through @INPUT@ in the command.
Also wrap "PYTHONPATH" usage with "env", since setting the environment
this way is not valid under Windows.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
gen_semantics is an executable, not an input. Meson 0.57 special cases
the first argument and @INPUT@ is not expanded there. Fix that by
not including it in the input, only in the command.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch adds icount handling to mfspr/mtspr instructions
that may deal with hardware timers.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Message-Id: <161700376169.1135890.8707223959310729949.stgit@pasha-ThinkPad-X280>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Stafford Horne <shorne@gmail.com>
These two opcodes only allow a memory operand.
Lacking the check for a register operand, we used the A0 temp
without initialization, which led to a tcg abort.
Buglink: https://bugs.launchpad.net/qemu/+bug/1921138
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210324164650.128608-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Each vCPU core exposes its timebase frequency in the DT. When running
under KVM, this means parsing /proc/cpuinfo in order to get the timebase
frequency of the host CPU.
The parsing appears to slow down the boot quite a bit with higher number
of cores:
    # of cores   seconds spent in spapr_dt_cpus()
             8   0.550122
            16   1.342375
            32   2.850316
            64   5.922505
            96   9.109224
           128   12.245504
           256   24.957236
           384   37.389113
The timebase frequency of the host CPU is identical for all
cores and it is an invariant for the VM lifetime. Cache it
instead of doing the same expensive parsing again and again.
Rename kvmppc_get_tbfreq() to kvmppc_get_tbfreq_procfs() and
rename the 'retval' variable to make it clear it is used as
fallback only. Come up with a new version of kvmppc_get_tbfreq()
that calls kvmppc_get_tbfreq_procfs() only once and keep the
value in a static.
Zero is certainly not a valid value for the timebase frequency.
Treat atoi() returning zero as another parsing error and return
the fallback value instead. This allows kvmppc_get_tbfreq() to
use zero as an indicator that kvmppc_get_tbfreq_procfs() hasn't
been called yet.
With this patch applied:
           384   0.518382
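A sketch of the caching shape described above (illustrative, not the exact code):
    uint32_t kvmppc_get_tbfreq(void)
    {
        static uint32_t cached_tbfreq;
        /* Zero means "not parsed yet"; a real timebase frequency is
         * never zero, so it can double as the sentinel. */
        if (cached_tbfreq == 0) {
            cached_tbfreq = kvmppc_get_tbfreq_procfs();
        }
        return cached_tbfreq;
    }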
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <161600382766.1780699.6787739229984093959.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently we give all the v7-and-up CPUs a PMU with 4 counters. This
means that we don't provide the 6 counters that are required by the
Arm BSA (Base System Architecture) specification if the CPU supports
the Virtualization extensions.
Instead of having a single PMCR_NUM_COUNTERS, make each CPU type
specify the PMCR reset value (obtained from the appropriate TRM), and
use the 'N' field of that value to define the number of counters
provided.
This means that we now supply 6 counters for Cortex-A53, A57, A72,
A15 and A9 as well as '-cpu max'; Cortex-A7 and A8 stay at 4; and
Cortex-R5 goes down to 3.
Note that because we now use the PMCR reset value of the specific
implementation, we no longer set the LC bit out of reset. This has
an UNKNOWN value out of reset for all cores with any AArch32 support,
so guest software should be setting it anyway if it wants it.
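For reference, the counter count is simply the architectural PMCR.N field, bits [15:11] of the reset value (sketch; reset_pmcr is a placeholder name):
    unsigned num_counters = extract32(reset_pmcr, 11, 5);   /* PMCR.N */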
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20210311165947.27470-1-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The definition S390_ADAPTER_SUPPRESSIBLE was moved to "cpu.h", per the
suggestion of Thomas Huth. From an interface design perspective this is,
IMHO, not a good thing, as it belongs to the public interface of
css_register_io_adapters(). We did this because CONFIG_KVM requires
NEED_CPU_H, and Thomas and other commenters did not like the
consequences of that.
Moving the interrupt related declarations to s390_flic.h was suggested
by Cornelia Huck.
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Tested-by: Halil Pasic <pasic@linux.ibm.com>
Message-Id: <20210317095622.2839895-2-kraxel@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Pretend the fault always happens at page table level 3.
Failure to set this leaves level = 0, which is impossible for
ARMFault_Permission, and produces an invalid syndrome, which
reaches g_assert_not_reached in cpu_loop.
Fixes: 8db94ab4e5 ("linux-user/aarch64: Pass syndrome to EXC_*_ABORT")
Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210320000606.1788699-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For Arm M-profile CPUs, on reset the CPU must load its initial PC and
SP from a vector table in guest memory. Because we can't guarantee
reset ordering, we have to handle the possibility that the ROM blob
loader's reset function has not yet run when the CPU resets, in which
case the data in an ELF file specified by the user won't be in guest
memory to be read yet.
We work around the reset ordering problem by checking whether the ROM
blob loader has any data for the address where the vector table is,
using rom_ptr(). Unfortunately this does not handle the possibility
of memory aliasing. For many M-profile boards, memory can be
accessed via multiple possible physical addresses; if the board has
the vector table at address X but the user's ELF file loads data via
a different address Y which is an alias to the same underlying guest
RAM then rom_ptr() will not find it.
Use the new rom_ptr_for_as() function, which deals with memory
aliasing when locating a relevant ROM blob.
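A sketch of how the reset path can use the new helper (variable names are illustrative):
    /* Look up the vector table through the CPU's own address space so
     * that a ROM blob loaded via an aliased address is found as well. */
    uint32_t *rom = rom_ptr_for_as(as, vecbase, 8);
    if (rom) {
        initial_msp = ldl_p(rom);        /* word 0: initial SP */
        initial_pc  = ldl_p(rom + 1);    /* word 1: initial PC */
    }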
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210318174823.18066-6-peter.maydell@linaro.org
When decode_insn16() fails, we fall back to decode_RV32_64C() for
further compressed instruction decoding. However, prior to this change,
we did not raise an illegal instruction exception if decode_RV32_64C()
failed to decode the instruction. This means that we skipped illegal
compressed instructions instead of raising an illegal instruction
exception.
Instead of patching decode_RV32_64C(), we can just remove it,
as it is dead code since f330433b36 anyway.
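With the fallback gone, the 16-bit decode step reduces to something like this (sketch):
    /* Anything decode_insn16() rejects is now reported as an illegal
     * instruction instead of being silently skipped. */
    if (!decode_insn16(ctx, opcode)) {
        gen_exception_illegal(ctx);
    }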
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210322121609.3097928-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The current two-stage lookup detection in riscv_cpu_do_interrupt falls
short of its purpose, as all it checks is whether two-stage address
translation either via the hypervisor load/store instructions or the
MPRV feature would be allowed.
What we really need instead is whether two-stage address translation was
active when the exception was raised. However, in riscv_cpu_do_interrupt
we do not have the information to reliably detect this. Therefore, when
we raise a memory fault exception we have to record whether two-stage
address translation is active.
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210319141459.1196741-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The previous implementation was broken in many ways:
- Used mideleg instead of hideleg to mask accesses
- Used MIP_VSSIP instead of VS_MODE_INTERRUPTS to mask writes to vsie
- Did not shift between S bits and VS bits (VSEIP <-> SEIP, ...)
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210311094738.1376795-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The current condition for the use of background registers only
considers the hypervisor load and store instructions,
but not accesses from M mode via MSTATUS_MPRV+MPV.
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210311103036.1401073-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210311094902.1377593-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the specification the "field SPVP of hstatus controls the
privilege level of the access" for the hypervisor virtual-machine load
and store instructions HLV, HLVX and HSV.
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210311103005.1400718-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
If the PMP permission of any address has been changed by updating a PMP
entry, flush all TLB pages to prevent stale permissions from being used.
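The heart of the change is a full TLB flush once a PMP entry has been rewritten (sketch):
    /* Any cached translation may have been checked against the old PMP
     * rules, so discard them all. */
    tlb_flush(env_cpu(env));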
Signed-off-by: Jim Shu <cwshu@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1613916082-19528-4-git-send-email-cwshu@andestech.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As with MMU translation, add qemu logging of PMP permission checks for
debugging.
Signed-off-by: Jim Shu <cwshu@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1613916082-19528-3-git-send-email-cwshu@andestech.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently, PMP permission checking of a TLB page is bypassed if the TLB
hits. Fix it by propagating the PMP permission into the TLB page permission.
PMP permission checking also uses the MMU-style API to change TLB permission
and size.
Signed-off-by: Jim Shu <cwshu@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1613916082-19528-2-git-send-email-cwshu@andestech.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
vs() should return -RISCV_EXCP_ILLEGAL_INST instead of -1 if the rvv feature
is not enabled.
If -1 is returned, an exception will be raised and cs->exception_index will
be set to the negative return value. The exception will then be treated
as an instruction access fault instead of an illegal instruction fault.
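A minimal sketch of the corrected predicate (the exact feature check shown here is an assumption):
    static int vs(CPURISCVState *env, int csrno)
    {
        if (env->misa & RVV) {
            return 0;
        }
        /* Previously "return -1", which ended up being reported as an
         * instruction access fault rather than an illegal instruction. */
        return -RISCV_EXCP_ILLEGAL_INST;
    }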
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210223065935.20208-1-frank.chang@sifive.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Coverity reported (CID 1450831) an array overrun in
gen_mxu_D16MAX_D16MIN():
1103 } else if (unlikely((XRb == 0) || (XRa == 0))) {
....
1112 if (opc == OPC_MXU_D16MAX) {
1113 tcg_gen_smax_i32(mxu_gpr[XRa - 1], t0, t1);
1114 } else {
1115 tcg_gen_smin_i32(mxu_gpr[XRa - 1], t0, t1);
1116 }
>>> Overrunning array "mxu_gpr" of 15 8-byte elements at element
index 4294967295 (byte offset 34359738367) using index "XRa - 1U"
(which evaluates to 4294967295).
This happens because the code is confused about which of XRa, XRb and
XRc is the output, and which are the inputs. XRa is the output, but
most of the conditions separating out different special cases are
written as if XRc is the output, with the result that we can end up
in the code path that assumes XRa is non-0 even when it is zero.
Fix the erroneous code, bringing it in to line with the structure
used in functions like gen_mxu_S32MAX_S32MIN() and
gen_mxu_Q8MAX_Q8MIN().
Fixes: CID 1450831
Fixes: bb84cbf385
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210316131353.4533-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
KVM doesn't fully support Hyper-V reenlightenment notifications on
migration. In particular, it doesn't support emulating TSC frequency
of the source host by trapping all TSC accesses so unless TSC scaling
is supported on the destination host and KVM_SET_TSC_KHZ succeeds, it
is unsafe to proceed with migration.
KVM_SET_TSC_KHZ is called from two sites: kvm_arch_init_vcpu() and
kvm_arch_put_registers(). The latter (intentionally) doesn't propagate
errors allowing migrations to succeed even when TSC scaling is not
supported on the destination. This doesn't suit 're-enlightenment'
use-case as we have to guarantee that TSC frequency stays constant.
Require 'tsc-frequency=' command line option to be specified for successful
migration when re-enlightenment was enabled by the guest.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210319123801.1111090-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Given that the name of this section is 'cpu/msr_hyperv_hypercall',
'hypercall_hypercall' is clearly a typo.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210318160249.1084178-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
env->error_code is only 32 bits wide, so the high 32 bits of EXITINFO1
are being lost. However, even though saving guest state and restoring
host state must be delayed to do_vmexit, because they might take tb_lock,
it is always possible to write to the VMCB. So do this for the exit
code and EXITINFO1, just like it is already being done for EXITINFO2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/cohuck-gitlab/tags/s390x-20210316' into staging
s390x updates:
- get rid of legacy_s390_alloc() and phys_mem_set_alloc()
- tcg: implement the MVPG condition-code-option bit
- fix g_autofree variable handing in the pci vfio code
- use official z15 names in the cpu model definitions
# gpg: Signature made Tue 16 Mar 2021 10:04:21 GMT
# gpg: using RSA key C3D0D66DC3624FF6A8C018CEDECF6B93C6F02FAF
# gpg: issuer "cohuck@redhat.com"
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>" [unknown]
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cohuck@kernel.org>" [unknown]
# gpg: aka "Cornelia Huck <cohuck@redhat.com>" [unknown]
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck-gitlab/tags/s390x-20210316:
s390x/pci: Add missing initialization for g_autofree variables
target/s390x: Store r1/r2 for page-translation exceptions during MVPG
target/s390x: Implement the MVPG condition-code-option bit
s390x/cpu_model: use official name for 8562
exec: Get rid of phys_mem_set_alloc()
s390x/kvm: Get rid of legacy_s390_alloc()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The PoP states:
  When EDAT-1 does not apply, and a program interruption due to a
  page-translation exception is recognized by the MOVE PAGE
  instruction, the contents of the R1 field of the instruction are
  stored in bit positions 0-3 of location 162, and the contents of
  the R2 field are stored in bit positions 4-7.
  If [...] an ASCE-type, region-first-translation,
  region-second-translation, region-third-translation, or
  segment-translation exception was recognized, the contents of
  location 162 are unpredictable.
So we have to write r1/r2 into the lowcore on page-translation
exceptions. Simply handle all exceptions inside our mvpg helper now.
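Concretely, the two register fields pack into one byte at real location 162; in the PoP's big-endian bit numbering, bits 0-3 are the high nibble, so the packing amounts to (sketch, with the actual store into the lowcore elided):
    uint8_t id = (r1 << 4) | r2;   /* R1 in bits 0-3, R2 in bits 4-7 */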
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210315085449.34676-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If the CCO bit is set, MVPG should not generate an exception but
report page translation faults via a CC code.
Create a new helper, access_prepare_nf, which can use probe_access_flags
in non-faulting mode, and then handle watchpoints.
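The non-faulting probe looks roughly like this (a simplified sketch; mmu_idx and ra are assumed to be in scope, and the store direction is just an example):
    void *haddr;
    /* With nonfault=true a translation failure is reported through the
     * returned flags instead of raising an exception. */
    int flags = probe_access_flags(env, addr, MMU_DATA_STORE, mmu_idx,
                                   true, &haddr, ra);
    if (flags & TLB_INVALID_MASK) {
        /* report the page-translation fault via the CC instead */
    }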
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[thuth: Added logic to still inject protection exceptions]
Signed-off-by: Thomas Huth <thuth@redhat.com>
[david: Look at env->tlb_fill_exc to determine if there was an exception]
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210315085449.34676-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The single-frame z15 is called "z15 T02" (and the multi-frame z15
"z15 T01").
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20210311132746.1777754-1-cohuck@redhat.com>
legacy_s390_alloc() was required for dealing with the absence of the ESOP
feature -- on old HW (< gen 10) and old z/VM versions (< 6.3).
As z/VM v6.2 (and even v6.3) has been unsupported since 2017 [1]
and we don't expect to have real users on such old hardware, let's drop
legacy_s390_alloc().
Still check+report an error just in case someone still runs on
such old z/VM environments, or someone runs under weird nested KVM
setups (where we can manually disable ESOP via the CPU model).
No need to check for KVM_CAP_GMAP - that should always be around on
kernels that also have KVM_CAP_DEVICE_CTRL (>= v3.15).
[1] https://www.ibm.com/support/lifecycle/search?q=z%2FVM
Suggested-by: Cornelia Huck <cohuck@redhat.com>
Suggested-by: Thomas Huth <thuth@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20210303130916.22553-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Only one interrupt is in progress at a time.
It is only necessary to reset interrupt_request
after all interrupts have been executed.
Signed-off-by: Ivanov Arkasha <ivanovrkasha@gmail.com>
Message-Id: <20210312164754.18437-1-arkaisp2021@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
There are many spelling errors in the comments of qemu/target/avr;
I found them by running spellcheck over the folder.
Signed-off-by: Lichang Zhao <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daude<f4bug@amsat.org>
Message-Id: <20201009064449.2336-12-zhaolichang@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
If width was 0 we would run into the assertion:
  qemu-system-tricore: tcg/tcg-op.c:217: tcg_gen_sari_i32: Assertion `arg2 >= 0 && arg2 < 32' failed.
The instruction manual specifies undefined behaviour for this case. So
we bring this in line with the golden Infineon simulator 'tsim', which
simply writes 0 to the result in case of width=0.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
If r3+1 and r2 are the same, then we would overwrite r2 with our first
move and use the wrong result for the shift. Thus we store the result
from the mov in a temp.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
According to the TC 1.3.1 Architecture Manual [1; page 174], results are
undefined if pos + width > 32 (not 31), or if width = 0.
We found this error because of a different behavior between qemu-tricore
and the real tricore processor. For pos + width = 32, qemu-tricore did not
generate any intermediate code and ran into a different state compared to
the real hardware.
[1] https://www.infineon.com/dgdl/tc_v131_instructionset_v138.pdf?fileId=db3a304412b407950112b409b6dd0352
[BK: Add the why to the commit message]
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Andreas Konopik <andreas.konopik@efs-auto.de>
Signed-off-by: Georg Hofstetter <georg.hofstetter@efs-auto.de>
Signed-off-by: David Brenken <david.brenken@efs-auto.de>
Message-Id: <20210211115329.8984-2-david.brenken@efs-auto.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
'int access_type' and ACCESS_INT are unused, drop them.
Provide the mmu_idx argument to match other targets.
'int rw' is actually the MMUAccessType, rename it.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210127224255.3505711-3-f4bug@amsat.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
This comment describing the tx79 opcodes is helpful. As we
will implement these instructions in tx79_translate.c, move
the comment there.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-15-f4bug@amsat.org>
We have almost 400 lines of code full of /* TODO */ comments
which end up calling gen_reserved_instruction().
As we are not going to implement them, and all the callers'
switch() default cases already call gen_reserved_instruction(),
we can remove this altogether.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-14-f4bug@amsat.org>
Simplify the PCPYH (Parallel Copy Halfword) instruction by using
multiple calls to deposit_i64() which can be optimized by some
TCG backends.
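For one 64-bit half of the register, the replication can be done with an AND plus two widening deposits (sketch; t0 and rt are placeholder temporaries):
    tcg_gen_andi_i64(t0, rt, 0xffff);          /* isolate rt[15:0]            */
    tcg_gen_deposit_i64(t0, t0, t0, 16, 16);   /* two identical halfwords     */
    tcg_gen_deposit_i64(t0, t0, t0, 32, 32);   /* replicate into the top half */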
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-11-f4bug@amsat.org>
We will use gen_rdhwr() outside of translate.c, make it public.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-28-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-10-f4bug@amsat.org>
Introduce decodetree structure to decode the tx79 opcodes.
Start it by moving the existing MFHI1 and MFLO1 opcodes.
Remove unnecessary comments.
As the TX79 shares opcodes with the TX19/TX39/TX49 CPUs,
we introduce the decode_ext_txx9() dispatcher where we
will add the other decoders later.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-9-f4bug@amsat.org>
Extract 1600+ lines from the big translate.c into a new file.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210226093111.3865906-14-f4bug@amsat.org>