This adds handling for the SCR_EL3.EEL2 bit.
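As a rough illustration of the direction only (not the actual QEMU code),
honouring the bit mostly means allowing it to be written only when the
SEL2 feature is present, e.g.:

    /* Hedged sketch: gate the SCR_EL3.EEL2 bit on the presence of FEAT_SEL2.
     * The helper name and the baseline mask are illustrative, not QEMU's
     * actual identifiers; EEL2 is bit 18 of SCR_EL3 in the architecture. */
    #include <stdbool.h>
    #include <stdint.h>

    #define SCR_EEL2 (1ULL << 18)

    static uint64_t scr_write_sketch(uint64_t value, bool has_sel2)
    {
        uint64_t valid_mask = 0x3fffULL;   /* placeholder for the other writable bits */

        if (has_sel2) {
            valid_mask |= SCR_EEL2;        /* EEL2 is writable only with FEAT_SEL2 */
        }
        return value & valid_mask;         /* bits outside the mask stay RES0 */
    }
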
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
[PMM: Applied fixes for review issues noted by RTH:
- check for FEATURE_AARCH64 before checking sel2 isar feature
- correct the commit message subject line]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On ARMv8-A, accesses by 32-bit secure EL1 to monitor registers trap to
the upper (64-bit) EL. With Secure EL2 support, we can no longer assume
that that is always EL3, so make room for the value to be computed at
run-time.
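One plausible shape for that run-time computation, shown here only as a
hedged sketch (the helper name and the simplified condition are
illustrative, not the actual QEMU logic):

    #include <stdbool.h>

    /* If secure EL2 is enabled, the lowest 64-bit EL above secure EL1 is EL2;
     * otherwise the trap keeps going to EL3. */
    static int monitor_trap_target_el_sketch(bool secure_el2_enabled)
    {
        return secure_el2_enabled ? 2 : 3;
    }
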
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-16-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The stage_1_mmu_idx() function already effectively keeps track of which
translation regimes have two stages. Don't hard-code another test.
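A hedged sketch of the idea (the predicate name is illustrative): a regime
has two translation stages exactly when it has a distinct stage-1 MMU
index, so the existing mapping can be reused instead of another
hard-coded test:

    static bool regime_has_two_stages_sketch(int mmu_idx)
    {
        /* stage_1_mmu_idx() returns a different index only for two-stage regimes */
        return stage_1_mmu_idx(mmu_idx) != mmu_idx;
    }
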
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW
bits can invert the secure flag for pagetable walks. This patch allows
S1_ptw_translate() to change the non-secure bit.
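A hedged sketch of the effect (names simplified; this is not the exact
QEMU code): for a stage-1 descriptor fetch under the secure stage-2
regime, VSTCR_EL2.SW governs walks from the secure IPA space and
VTCR_EL2.NSW governs walks from the non-secure IPA space, either of
which can push the walk into the non-secure PA space:

    #include <stdbool.h>

    static bool s1_ptw_walk_is_secure_sketch(bool ipa_secure,
                                             bool vstcr_sw, bool vtcr_nsw)
    {
        /* A set SW/NSW bit sends the descriptor fetch to non-secure PA space. */
        return ipa_secure ? !vstcr_sw : !vtcr_nsw;
    }
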
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The VTTBR write callback so far assumes that the underlying VM lies in
non-secure state. This patch handles the secure state scenario.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds the MMU indices for EL2 stage 1 in secure state.
To keep the code contained (it is largely identical between secure and
non-secure modes), the MMU indices are reassigned. The new assignments
follow a systematic pattern with a non-secure bit.
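An illustrative layout of such a pattern, assuming a dedicated NS bit
(the values and names below are a sketch, not QEMU's actual ARMMMUIdx
encoding):

    enum {
        MMUIDX_NS_BIT = 1 << 3,            /* illustrative position of the NS bit */

        MMUIDX_SE10_0 = 0,                 /* Secure EL1&0, EL0 regime */
        MMUIDX_SE10_1 = 1,                 /* Secure EL1&0, EL1 regime */
        MMUIDX_SE2    = 2,                 /* Secure EL2 regime */

        /* non-secure variants differ only by the NS bit */
        MMUIDX_E10_0  = MMUIDX_SE10_0 | MMUIDX_NS_BIT,
        MMUIDX_E10_1  = MMUIDX_SE10_1 | MMUIDX_NS_BIT,
        MMUIDX_E2     = MMUIDX_SE2    | MMUIDX_NS_BIT,
    };
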
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in
secure mode, though it can only be AArch64.
This patch adds the target EL for exceptions from 64-bit S-EL2.
It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure
mode. Those values were never used in practice as the effective value of
HCR was always 0 in secure mode.
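A deliberately simplified routing sketch for a physical IRQ (the real
table also depends on the current EL and further SCR/HCR bits; names
are illustrative):

    #include <stdbool.h>

    static int irq_target_el_sketch(bool el2_enabled, bool hcr_imo, bool scr_irq)
    {
        if (scr_irq) {
            return 3;              /* SCR_EL3.IRQ routes the IRQ to EL3 */
        }
        if (el2_enabled && hcr_imo) {
            return 2;              /* effective HCR_EL2.IMO now routes to EL2,
                                    * even in secure state once SEL2 is enabled */
        }
        return 1;
    }
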
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-6-remi.denis.courmont@huawei.com
[PMM: tweaked commit message to match reduced scope of patch
following rebase]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds a common helper to compute the effective value of MDCR_EL2.
That is the actual value if EL2 is enabled in the current security
context, or zero otherwise.
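The shape of the helper, as a minimal sketch (types and the enable check
stand in for the real QEMU definitions):

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t mdcr_el2_eff_sketch(uint64_t mdcr_el2, bool el2_enabled)
    {
        /* EL2 disabled in this security state => MDCR_EL2 has no effect. */
        return el2_enabled ? mdcr_el2 : 0;
    }
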
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will simplify accessing HCR conditionally in secure state.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-4-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not assume that EL2 is available in and only in non-secure context.
That equivalence is broken by ARMv8.4-SEL2.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This checks whether EL2 is enabled (meaning EL2 registers take effect) in
the current security context.
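A hedged sketch of the check (simplified from the real helper; EEL2 is
bit 18 of SCR_EL3): EL2 registers take effect when the EL2 feature is
present and either the CPU is non-secure or secure EL2 has been enabled
via SCR_EL3.EEL2:

    #include <stdbool.h>
    #include <stdint.h>

    #define SCR_EEL2_BIT (1ULL << 18)

    static bool is_el2_enabled_sketch(bool has_el2, bool secure, uint64_t scr_el3)
    {
        if (!has_el2) {
            return false;
        }
        return !secure || (scr_el3 & SCR_EEL2_BIT);
    }
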
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-2-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In this context, the HCR value is the effective value, and thus is
zero in secure mode. The tests for HCR.{F,I}MO are sufficient.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-1-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The interface for object_property_add_bool is simpler,
making the code easier to understand.
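For reference, the pattern this moves to looks roughly like the sketch
below (the property name and the cpu->prop_flag field are made up for
illustration; the object_property_add_bool() signature is the QEMU
5.2-era QOM API):

    /* Illustrative getter/setter pair for a boolean QOM property. */
    static bool prop_get_flag(Object *obj, Error **errp)
    {
        MyCPU *cpu = MY_CPU(obj);      /* illustrative cast macro */
        return cpu->prop_flag;
    }

    static void prop_set_flag(Object *obj, bool value, Error **errp)
    {
        MyCPU *cpu = MY_CPU(obj);
        cpu->prop_flag = value;
    }

    /* ... in the property-registration path ... */
    object_property_add_bool(OBJECT(cpu), "my-flag", prop_get_flag, prop_set_flag);
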
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The crypto overhead of emulating pauth can be significant for
some workloads. Add two boolean properties that allow the
feature to be turned off, turned on with the architected algorithm,
or turned on with an implementation-defined algorithm.
We need two intermediate booleans to control the state while
parsing properties, lest we leave ID_AA64ISAR1 in an invalid
intermediate state.
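A hedged sketch of that two-step approach (field handling simplified,
though the APA [7:4] / API [11:8] positions match ID_AA64ISAR1; none of
the names below are the actual QEMU identifiers):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool prop_pauth;          /* property: enable pauth at all */
        bool prop_pauth_impdef;   /* property: use the QEMU impdef algorithm */
        uint64_t isar1;           /* stand-in for ID_AA64ISAR1 */
    } PauthPropsSketch;

    static int pauth_finalize_sketch(PauthPropsSketch *c)
    {
        if (!c->prop_pauth && c->prop_pauth_impdef) {
            return -1;            /* "pauth-impdef" without "pauth" is an error */
        }
        c->isar1 &= ~0xff0ULL;    /* clear APA [7:4] and API [11:8] in one go */
        if (c->prop_pauth) {
            c->isar1 |= c->prop_pauth_impdef ? (1ULL << 8)   /* API = 1 */
                                             : (1ULL << 4);  /* APA = 1 */
        }
        return 0;                 /* the ID register is never half-updated */
    }
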
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-3-richard.henderson@linaro.org
[PMM: fixed docs typo, tweaked text to clarify that the impdef
algorithm is specific to QEMU]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Without hardware acceleration, a cryptographically strong
algorithm is too expensive for pauth_computepac.
Even with hardware accel, we are not currently expecting
to link the linux-user binaries to any crypto libraries,
and doing so would generally make the --static build fail.
So choose XXH64 as a reasonably quick and decent hash.
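To illustrate the idea only (this uses a splitmix64-style finalizer for
brevity; QEMU's impdef PAC actually uses XXH64, a different but
similarly non-cryptographic hash):

    #include <stdint.h>

    static uint64_t mix64(uint64_t x)
    {
        x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
        x ^= x >> 27; x *= 0x94d049bb133111ebULL;
        x ^= x >> 31;
        return x;
    }

    /* Combine pointer, modifier and key through a cheap 64-bit avalanche mixer. */
    static uint64_t computepac_nocrypto_sketch(uint64_t data, uint64_t modifier,
                                               uint64_t key_lo, uint64_t key_hi)
    {
        return mix64(data ^ mix64(modifier ^ mix64(key_lo ^ key_hi)));
    }
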
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-misc-180121-2' into staging
Testing, gdbstub and semihosting patches:
- clean-ups to docker images
- drop duplicate jobs from shippable
- prettier tag generation (+gtags)
- generate browsable source tree
- more Travis->GitLab migrations
- fix checkpatch to deal with commits
- gate gdbstub tests on 8.3.1, expand tests
- support Xfer:auxv:read gdb packet
- better gdbstub cleanup
- use GDB's SVE register layout
- make arm-compat-semihosting common
- add riscv semihosting support
- add HEAPINFO, ELAPSED, TICKFREQ, TMPNAM and ISERROR to semihosting
# gpg: Signature made Mon 18 Jan 2021 10:09:11 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-and-misc-180121-2: (30 commits)
semihosting: Implement SYS_ISERROR
semihosting: Implement SYS_TMPNAM
semihosting: Implement SYS_ELAPSED and SYS_TICKFREQ
riscv: Add semihosting support for user mode
riscv: Add semihosting support
semihosting: Support SYS_HEAPINFO when env->boot_info is not set
semihosting: Change internal common-semi interfaces to use CPUState *
semihosting: Change common-semi API to be architecture-independent
semihosting: Move ARM semihosting code to shared directories
target/arm: use official org.gnu.gdb.aarch64.sve layout for registers
gdbstub: ensure we clean-up when terminated
gdbstub: drop gdbserver_cleanup in favour of gdb_exit
gdbstub: drop CPUEnv from gdb_exit()
gdbstub: add support to Xfer:auxv:read: packet
gdbstub: implement a softmmu based test
Revert "tests/tcg/multiarch/Makefile.target: Disable run-gdbstub-sha1 test"
configure: gate our use of GDB to 8.3.1 or above
test/guest-debug: echo QEMU command as well
scripts/checkpatch.pl: fix git-show invocation to include diffstat
gitlab: migrate the minimal tools and unit tests from Travis
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# default-configs/targets/riscv32-linux-user.mak
# default-configs/targets/riscv64-linux-user.mak
Adapt the Arm semihosting support code for RISC-V. This implementation
is based on the standard for RISC-V semihosting version 0.2 as
documented in
https://github.com/riscv/riscv-semihosting-spec/releases/tag/0.2
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20210107170717.2098982-6-keithp@keithp.com>
Message-Id: <20210108224256.2321-17-alex.bennee@linaro.org>
The public API is now defined in
hw/semihosting/common-semi.h. do_common_semihosting takes CPUState *
instead of CPUARMState *. All internal functions have been renamed with a
common_semi_ prefix instead of arm_semi_ or arm_. Aside from the API change,
there are no functional changes in this patch.
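The interface change, sketched from the description above (the exact
return types may differ from the real headers):

    /* old, Arm-specific entry point (target/arm/arm-semi.c): */
    target_ulong do_arm_semihosting(CPUARMState *env);

    /* new, architecture-independent entry point (hw/semihosting/common-semi.h): */
    target_ulong do_common_semihosting(CPUState *cs);
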
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20210107170717.2098982-3-keithp@keithp.com>
Message-Id: <20210108224256.2321-14-alex.bennee@linaro.org>
This commit renames two files which provide ARM semihosting support so
that they can be shared by other architectures:
1. target/arm/arm-semi.c -> hw/semihosting/common-semi.c
2. linux-user/arm/semihost.c -> linux-user/semihost.c
The build system was modified to use a new config variable,
CONFIG_ARM_COMPATIBLE_SEMIHOSTING, which has been added to the ARM
softmmu and linux-user default configs. The contents of the source
files have not been changed in this patch.
Signed-off-by: Keith Packard <keithp@keithp.com>
[AJB: rename arm-compat-semi, select SEMIHOSTING]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210107170717.2098982-2-keithp@keithp.com>
Message-Id: <20210108224256.2321-13-alex.bennee@linaro.org>
While GDB can work with any XML description given to it, there is
special handling for SVE registers on the GDB side which makes the
user's life a little better. The changes aren't that major, and all the
registers save $vg are reported the same. All that changes is:
- report org.gnu.gdb.aarch64.sve
- use gdb nomenclature for names and types
- minor re-ordering of the types to match reference
- re-enable ieee_half (as we know gdb supports it now)
- $vg is now a 64 bit int
- check $vN and $zN aliasing in test
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Machado <luis.machado@linaro.org>
Message-Id: <20210108224256.2321-11-alex.bennee@linaro.org>
gdb_exit() has never needed anything from env and I doubt we are going
to start now.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210108224256.2321-8-alex.bennee@linaro.org>
At present QEMU RISC-V uses a hardcoded XML to report the feature
"org.gnu.gdb.riscv.csr" [1]. There are two major issues with the
approach being used currently:
- The XML does not specify the "regnum" field of a CSR entry, hence
consecutive numbers are used by the remote GDB client to access
CSRs. In QEMU we have to maintain a map table to convert the GDB
number to the hardware number, which is error-prone.
- The XML contains some CSRs that QEMU does not implement at all,
which causes an "E14" response to be sent to the remote GDB client.
Change the code to generate the CSR register list dynamically, based on
the availability presented in the CSR function table. This new approach
reflects the correct list of CSRs that QEMU actually implements.
[1] https://sourceware.org/gdb/current/onlinedocs/gdb/RISC_002dV-Features.html#RISC_002dV-Features
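The general shape of the dynamic generation, as a hedged sketch (the
GString use and the field/predicate handling are simplified, not copied
from the real code):

    static void build_csr_xml_sketch(GString *xml, int base_reg)
    {
        for (int i = 0; i < CSR_TABLE_SIZE; i++) {
            if (!csr_ops[i].predicate || !csr_ops[i].name) {
                continue;                 /* CSR not implemented by QEMU */
            }
            /* an explicit regnum keeps GDB and QEMU numbering in sync */
            g_string_append_printf(xml,
                "<reg name=\"%s\" bitsize=\"64\" regnum=\"%d\"/>",
                csr_ops[i].name, base_reg + i);
        }
    }
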
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210116054123.5457-2-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In preparation for generating the CSR register list for the GDB stub
dynamically, let's add the CSR name to the CSR function table.
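Roughly, each table entry gains a name field, as in this simplified
sketch (the struct layout is illustrative and relies on QEMU's
CPURISCVState/target_ulong types):

    typedef struct {
        const char *name;                                       /* new field */
        int (*predicate)(CPURISCVState *env, int csrno);
        int (*read)(CPURISCVState *env, int csrno, target_ulong *val);
        int (*write)(CPURISCVState *env, int csrno, target_ulong val);
    } riscv_csr_operations_sketch;

    /* entries then carry their name, e.g.
     *   [CSR_MSTATUS] = { "mstatus", any, read_mstatus, write_mstatus }, */
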
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1610427124-49887-3-git-send-email-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In preparation for generating the CSR register list for the GDB stub
dynamically, change csr_ops[] to non-static so that it can be
referenced externally.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1610427124-49887-2-git-send-email-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As per the privilege specification, any access from S/U mode should fail
if no PMP region is configured.
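A hedged sketch of the rule (simplified from the real PMP check):

    #include <stdbool.h>

    static bool pmp_access_ok_sketch(int matching_regions, bool is_m_mode,
                                     bool region_grants_access)
    {
        if (matching_regions == 0) {
            return is_m_mode;      /* default allow for M mode, deny for S/U mode */
        }
        return region_grants_access;
    }
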
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201223192553.332508-1-atish.patra@wdc.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The target description is not currently implemented for the RISC-V
architecture, so GDB won't set it properly when attaching.
This patch implements the target description response.
Signed-off-by: Sylvain Pelissier <sylvain.pelissier@gmail.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210106204141.14027-1-sylvain.pelissier@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Vendor-specific CPU definitions are not very useful. Use the
ISA definitions instead, which are more helpful when looking
at the various CPU definitions.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210112210152.2072996-4-f4bug@amsat.org>
nanoMIPS is not a CPU, but an ISA. The nanoMIPS ISA is already
defined as ISA_NANOMIPS32.
Remove this incorrect definition and update the single CPU
implementing it, the I7200.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210112210152.2072996-3-f4bug@amsat.org>
Commit 823f2897bd ("target/mips: Disable R5900 support")
removed the single CPU using the CPU_R5900 definition.
As it is unused, remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210112210152.2072996-2-f4bug@amsat.org>
LL/SC opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
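The pattern shared by these removal patches, as a hedged illustration
(the names below are placeholders, not the actual QEMU identifiers):
the .decode file matches the removed opcode space and the translation
callback simply raises a Reserved Instruction exception.

    static bool trans_REMOVED(DisasContext *ctx, arg_REMOVED *a)
    {
        gen_reserved_instruction_sketch(ctx);   /* stand-in for raising EXCP_RI */
        return true;
    }
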
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-14-f4bug@amsat.org>
LLD/SCD opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-13-f4bug@amsat.org>
LDL/LDR/SDL/SDR opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-12-f4bug@amsat.org>
LWLE/LWRE/SWLE/SWRE (EVA) opcodes have been removed in
Release 6. Add a single decodetree entry for the opcodes,
triggering Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-11-f4bug@amsat.org>
LWL/LWR/SWL/SWR opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-10-f4bug@amsat.org>
CACHE/PREF opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-9-f4bug@amsat.org>
The COP1x opcode has been removed in Release 6.
Add a single decodetree entry for it, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() call.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-8-f4bug@amsat.org>
Special2 opcodes have been removed in Release 6.
Add a single decodetree entry for the whole opcode class,
triggering Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() call.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-7-f4bug@amsat.org>
Since we switched to decodetree-generated processing,
we can remove this now unreachable code.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-6-f4bug@amsat.org>
The LSA and DLSA opcodes are also available with MIPS Release 6.
Introduce the decodetree config files and call the decode()
helpers in the main decode_opc() loop.
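A hedged sketch of the resulting dispatch order (function names are
illustrative, apart from decode_ase_msa() which later patches in this
series refer to):

    static void decode_opc_sketch(CPUMIPSState *env, DisasContext *ctx, uint32_t insn)
    {
        if (decode_isa_rel6_sketch(ctx, insn)) {   /* e.g. LSA/DLSA on Release 6 */
            return;
        }
        if (decode_ase_msa(ctx, insn)) {
            return;
        }
        decode_opc_legacy_sketch(env, ctx);        /* existing hand-written decoder */
    }
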
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-24-f4bug@amsat.org>
Add the LSA opcode to the MSA32 decodetree config, add DLSA
to a new config for the MSA64 ASE, and call decode_msa64()
in the main decode_opc() loop.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-23-f4bug@amsat.org>
Extract gen_lsa() from translate.c and split it into
gen_LSA() and gen_DLSA().
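The 32-bit half of that split could look roughly like this sketch
(argument plumbing omitted; the TCG ops used are standard ones):

    /* LSA: rd = sign_extend32((rs << (sa + 1)) + rt), with sa in 0..3 */
    static void gen_LSA_sketch(TCGv rd, TCGv rs, TCGv rt, int sa)
    {
        TCGv t0 = tcg_temp_new();
        tcg_gen_shli_tl(t0, rs, sa + 1);
        tcg_gen_add_tl(t0, t0, rt);
        tcg_gen_ext32s_tl(rd, t0);
        tcg_temp_free(t0);
    }
    /* gen_DLSA would differ only in skipping the 32-bit sign extension. */
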
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-22-f4bug@amsat.org>
Now that we can decode the MSA ASE with decode_ase_msa(),
use it and remove the previous code, now unreachable.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-21-f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Introduce the 'msa32' decodetree config for the 32-bit MSA ASE.
We start by decoding:
- the branch instructions,
- all instructions based on the MSA opcode.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-20-f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Simplify gen_check_zero_element() by passing the TCGCond
argument along.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-25-f4bug@amsat.org>