Added TMP421 type sensor support to the tiogapass platform.
Tested: Tested and verified on the tiogapass platform.
Signed-off-by: Karthikeyan Pasupathi <pkarthikeyan1509@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230307103334.3586755-1-pkarthikeyan1509@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Added TMP421 type support to the yosemite v2 platform.
Tested: Tested and verified on the yosemite v2 platform.
Signed-off-by: Karthikeyan Pasupathi <pkarthikeyan1509@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230307095239.3583613-1-pkarthikeyan1509@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Commit a4b15a8b introduced a new function blk_pread_nonzeroes(). Instead
of reading directly from the root node of the BlockBackend, it reads
from its 'file' child node. This happens to mostly work for raw
images (as long as the 'raw' format driver is in use, but not actually
doing anything), but it breaks everything else.
Fix it to read from the root node instead.
Fixes: a4b15a8b9ef25b44fa92a4825312622600c1f37c
Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230307140230.59158-1-kwolf@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Currently, when a block backend is attached to a m25p80 device and the
associated file size does not match the flash model, QEMU complains
with the error message "failed to read the initial flash content".
This is confusing for the user.
Instead, use helper blk_check_size_and_read_all() introduced by commit
06f1521795 ("pflash: Require backend size to match device, improve
errors").
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221115151000.2080833-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
According to the device DMA logging uAPI, IOVA ranges to be logged by
the device must be provided all at once upon DMA logging start.
As preparation for the following patches, which will add device dirty
page tracking, keep a record of all DMA-mapped IOVA ranges so they can
later be used for DMA logging start.
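As a rough illustration of the bookkeeping (a hedged sketch in plain C, not the QEMU data structures; all names below are made up), a list of mapped ranges could look like this:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct IovaRange {
    uint64_t iova;
    uint64_t size;
    struct IovaRange *next;
} IovaRange;

static IovaRange *mapped_ranges;

/* Called when a DMA mapping is created. */
static void record_mapping(uint64_t iova, uint64_t size)
{
    IovaRange *r = malloc(sizeof(*r));
    if (!r) {
        abort();
    }
    r->iova = iova;
    r->size = size;
    r->next = mapped_ranges;
    mapped_ranges = r;
}

/* Called when a DMA mapping is removed. */
static void forget_mapping(uint64_t iova)
{
    for (IovaRange **p = &mapped_ranges; *p; p = &(*p)->next) {
        if ((*p)->iova == iova) {
            IovaRange *dead = *p;
            *p = dead->next;
            free(dead);
            return;
        }
    }
}

int main(void)
{
    record_mapping(0x10000, 0x4000);
    record_mapping(0x80000, 0x1000);
    forget_mapping(0x10000);
    /* At "DMA logging start" time, every range still recorded here would
     * be handed to the device in one go. */
    for (IovaRange *r = mapped_ranges; r; r = r->next) {
        printf("log range: iova=0x%llx size=0x%llx\n",
               (unsigned long long)r->iova, (unsigned long long)r->size);
    }
    return 0;
}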
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-10-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
In preparation for use in device dirty tracking, move the code that
calculates an iova/end range from the container/section into a helper.
This avoids duplicating the common checks across listener callbacks.
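For intuition, here is a minimal standalone sketch of the kind of iova/end arithmetic such a helper performs (illustrative only; PAGE_SIZE and the function name are assumptions, not the QEMU code):

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

/* Round the start up and the end down to page boundaries; returns false
 * when nothing page-sized is left to map or track. */
static bool section_iova_range(uint64_t offset, uint64_t size,
                               uint64_t *iova, uint64_t *end)
{
    *iova = (offset + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    *end = (offset + size) & ~(PAGE_SIZE - 1);
    return *end > *iova;
}

int main(void)
{
    uint64_t iova, end;

    if (section_iova_range(0x1234, 0x20000, &iova, &end)) {
        printf("iova=0x%" PRIx64 " end=0x%" PRIx64 "\n", iova, end);
    }
    return 0;
}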
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-9-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The checks are replicated in both region_add and region_del,
and will soon be added in another memory listener dedicated
to dirty tracking.
Move these into a new helper to avoid duplication.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Avihai Horon <avihaih@nvidia.com>
Link: https://lore.kernel.org/r/20230307125450.62409-8-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
In preparation for turning more of the memory listener checks into
common functions, one of the affected places is how we trace when
sections are skipped. Right now there is one tracepoint for each
callback. Change this into a single tracepoint,
`vfio_listener_region_skip`, which receives a name referring to the
callback, i.e. region_add or region_del.
Suggested-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-7-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Move the code that finds the container host DMA window matching an iova
range into a helper. This avoids duplicating the common checks across
listener callbacks.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Avihai Horon <avihaih@nvidia.com>
Link: https://lore.kernel.org/r/20230307125450.62409-6-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
There are already two places where dirty page bitmap allocation and
calculations are done in open code.
To avoid code duplication, introduce a VFIOBitmap struct and a
corresponding alloc function, and use them where applicable.
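A hedged sketch of the idea (the struct and function names below are illustrative, not the actual VFIOBitmap API): one place computes the page count and bitmap size and performs the allocation.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint64_t pages;   /* number of pages covered by the range  */
    size_t size;      /* bitmap size in bytes                  */
    uint64_t *bitmap; /* one bit per page, zero-initialized    */
} DirtyBitmap;

static int dirty_bitmap_alloc(DirtyBitmap *b, uint64_t range_size,
                              uint64_t page_size)
{
    b->pages = (range_size + page_size - 1) / page_size;
    /* round up to whole 64-bit words */
    b->size = ((b->pages + 63) / 64) * sizeof(uint64_t);
    b->bitmap = calloc(1, b->size);
    return b->bitmap ? 0 : -1;
}

int main(void)
{
    DirtyBitmap b;

    if (dirty_bitmap_alloc(&b, 1 << 20, 4096) == 0) {
        printf("pages=%" PRIu64 " bytes=%zu\n", b.pages, b.size);
        free(b.bitmap);
    }
    return 0;
}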
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-5-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
If VFIO dirty pages log start/stop/sync fails during migration,
migration should be aborted as pages dirtied by VFIO devices might not
be reported properly.
This is not the case today: in such a scenario only an error is
printed.
Fix it by aborting migration in the above scenario.
Fixes: 758b96b61d5c ("vfio/migrate: Move switch of dirty tracking into vfio_memory_listener")
Fixes: b6dd6504e303 ("vfio: Add vfio_listener_log_sync to mark dirty pages")
Fixes: 9e7b0442f23a ("vfio: Add ioctl to get dirty pages bitmap during dma unmap")
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-4-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
There are several places where the %m conversion is used if one of
vfio_dma_map(), vfio_dma_unmap() or vfio_get_dirty_bitmap() fails.
The %m usage in these places is wrong since %m relies on the errno value,
while the above functions don't report errors via errno.
Fix it by using strerror() with the returned value instead.
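A minimal standalone example of the difference (the helper name is made up; the pattern matches the description above):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a vfio_dma_map()-style helper: it reports failure through
 * its return value (a negative errno code) and does not touch errno. */
static int fake_dma_map(void)
{
    return -EINVAL;
}

int main(void)
{
    int ret;

    errno = 0;
    ret = fake_dma_map();
    if (ret) {
        /* Wrong: %m (a glibc extension) expands strerror(errno), and errno
         * is still 0 here, so this prints "Success". */
        fprintf(stderr, "map failed: %m\n");
        /* Right: decode the returned error code explicitly. */
        fprintf(stderr, "map failed: %s\n", strerror(-ret));
    }
    return 0;
}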
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-3-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Hardly anybody still uses 32-bit x86 environments for running QEMU with
full system emulation, so let's stop wasting our scarce CI minutes with
this job.
(There are still the 32-bit MinGW and TCI jobs around to provide
some compile-test coverage on 32-bit.)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Message-Id: <20230306084658.29709-4-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Hardly anybody still uses 32-bit x86 hosts today, so we should start
deprecating them to stop wasting our time and CI minutes here.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Message-Id: <20230306084658.29709-2-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
nmi.h and notify.h are not needed here; drop them to speed up
compilation a little bit.
Message-Id: <20230210111438.1114600-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Hexagon's idef-parser machinery uses some bison features that are not
available in older versions. The most prominent example (which can
be used as a sentinel) is "%define parse.error verbose" [1]. This was
introduced in version 3.0 of the tool, which is able to build
qemu-hexagon just fine. However, the build fails with the previous
minor bison release, v2.7. So let's assert the minimum version in
meson.build to give a clearer error message to those trying
to compile QEMU.
[1]: https://www.gnu.org/software/bison/manual/html_node/_0025define-Summary.html#index-_0025define-parse_002eerror
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alessandro Di Federico <ale@rev.ng>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <a6763f9f7b89ea310ab86f9a2b311a05254a1acd.1675779233.git.quic_mathbern@quicinc.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
For long-term distributions that release a new version only rarely,
we limit the support to five years after the initial release.
Otherwise, we might need to support distros like openSUSE 15 for
up to 7 or even more years in total due to our "two more years
after the next major release" rule, which is just way too much to
handle in a project like QEMU that only has limited human resources.
Message-Id: <20230223193257.1068205-1-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Merge tag 'pull-riscv-to-apply-20230306' of https://gitlab.com/palmer-dabbelt/qemu into staging
Sixth RISC-V PR for 8.0
* Support for the Zicbom, Zicboz, and Zicbop extensions.
* OpenSBI has been updated to version 1.2, see
<https://github.com/riscv-software-src/opensbi/releases/tag/v1.2> for
the release notes.
* Support for setting the virtual address width (i.e., sv39/sv48/sv57) on
the command line.
* Support for ACPI on RISC-V.
# -----BEGIN PGP SIGNATURE-----
#
# iQJHBAABCAAxFiEEKzw3R0RoQ7JKlDp6LhMZ81+7GIkFAmQGYGgTHHBhbG1lckBk
# YWJiZWx0LmNvbQAKCRAuExnzX7sYidmyEAC6FEMbbFM5D++qR6w6xM6hXgzcrev6
# s1kyRRNVa45uSA78ti/Zi0hsDLNf7ZsNPndF0OIkkO5iAE0OVm3LU7tV1TqKcT82
# Dd9VXxe93zEmfnuJazHrMa54SXPhhnNdWHtKlZ6vBfZpbxgx0FFs50xkCsrM5LQZ
# hYHxQUqPWQTvF2MdDHrxCuLcdKl+Wg3ysCcgRh2d049KUBrIu6vNaHC2+AGRjCbj
# BkrGCkB82fTmVJjzAcVWQxLoAV12pCbJS4og1GtP8hA7WevtB39tbPin9siBKRZp
# QBeiIsg0nebkpmZGrb+xWVwlIBNe9yYwJa0KmveQk8v7L5RIzjM1mtDL91VrVljC
# KC2tfT570m0Iq2NoFMb3wd/kESHFzVDM/g+XYqRd4KSoiCNP/RbqYNQBwbMc31Tr
# E27xfA1D8w2vem0Rk20x3KgPf1Z5OmGXjq6YObTpnAzG8cZlA37qKBP+ortt5aHX
# GZSg3CAwknHHVajd4aaegkPsHxm1tRvoTfh38MwkPSNxaA9GD0nz0k9xaYDmeZ2L
# olfanNsaQEwcVUId31+7sAENg1TZU0fnj879/nxkMUCazVTdL8/mz+IoTTx0QCST
# 3+9ATWcyJUlmjbDKIs7kr1L+wJdvvHEJggPAbbPI8ekpXaLZvUYOT6ObzYKNAmwY
# wELQBn8QKXcLVA==
# =5gAt
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 06 Mar 2023 21:51:36 GMT
# gpg: using RSA key 2B3C3747446843B24A943A7A2E1319F35FBB1889
# gpg: issuer "palmer@dabbelt.com"
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>" [unknown]
# gpg: aka "Palmer Dabbelt <palmerdabbelt@google.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
# Subkey fingerprint: 2B3C 3747 4468 43B2 4A94 3A7A 2E13 19F3 5FBB 1889
* tag 'pull-riscv-to-apply-20230306' of https://gitlab.com/palmer-dabbelt/qemu: (22 commits)
MAINTAINERS: Add entry for RISC-V ACPI
hw/riscv/virt.c: Initialize the ACPI tables
hw/riscv/virt: virt-acpi-build.c: Add RHCT Table
hw/riscv/virt: virt-acpi-build.c: Add RINTC in MADT
hw/riscv/virt: Enable basic ACPI infrastructure
hw/riscv/virt: Add memmap pointer to RiscVVirtState
hw/riscv/virt: Add a switch to disable ACPI
hw/riscv/virt: Add OEM_ID and OEM_TABLE_ID fields
riscv: Correctly set the device-tree entry 'mmu-type'
riscv: Introduce satp mode hw capabilities
riscv: Allow user to set the satp mode
riscv: Change type of valid_vm_1_10_[32|64] to bool
riscv: Pass Object to register_cpu_props instead of DeviceState
roms/opensbi: Upgrade from v1.1 to v1.2
gitlab/opensbi: Move to docker:stable
hw: intc: Use cpu_by_arch_id to fetch CPU state
target/riscv: cpu: Implement get_arch_id callback
disas/riscv Fix ctzw disassemble
hw/riscv/virt.c: add cbo[mz]-block-size fdt properties
target/riscv: add Zicbop cbo.prefetch{i, r, m} placeholder
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Explain that aio_context_notifier_poll() relies on
aio_notify_accept() to catch all the memory writes that were
done before ctx->notified was set to true.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Ever since commit 8c6b0356b539 ("util/async: make bh_aio_poll() O(1)",
2020-02-22), synchronization between qemu_bh_schedule() and aio_bh_poll()
happens when the bottom half is enqueued in the bh_list, not
when the flags are set. Update the documentation to match.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
mutex->from_push and mutex->handoff in qemu-coroutine-lock implement
the familiar pattern:
    thread 1          thread 2
    --------          --------
    write a           write b
    smp_mb()          smp_mb()
    read b            read a
The memory barrier is required by the C memory model even after a
SEQ_CST read-modify-write operation such as QSLIST_INSERT_HEAD_ATOMIC.
Add it and avoid the unclear qatomic_mb_read() operation.
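For reference, the same pattern expressed with standalone C11 atomics (QEMU's smp_mb() corresponds to the seq-cst fences; this is an illustration, not the qemu-coroutine-lock code, and it assumes C11 <threads.h> is available):

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static atomic_int a, b;
static atomic_int saw_zero;

static int left(void *arg)
{
    (void)arg;
    atomic_store_explicit(&a, 1, memory_order_relaxed);        /* write a  */
    atomic_thread_fence(memory_order_seq_cst);                 /* smp_mb() */
    if (atomic_load_explicit(&b, memory_order_relaxed) == 0) { /* read b   */
        atomic_fetch_add(&saw_zero, 1);
    }
    return 0;
}

static int right(void *arg)
{
    (void)arg;
    atomic_store_explicit(&b, 1, memory_order_relaxed);        /* write b  */
    atomic_thread_fence(memory_order_seq_cst);                 /* smp_mb() */
    if (atomic_load_explicit(&a, memory_order_relaxed) == 0) { /* read a   */
        atomic_fetch_add(&saw_zero, 1);
    }
    return 0;
}

int main(void)
{
    thrd_t t1, t2;

    thrd_create(&t1, left, NULL);
    thrd_create(&t2, right, NULL);
    thrd_join(t1, NULL);
    thrd_join(t2, NULL);
    /* With both fences in place, at most one side can observe the other's
     * flag as still 0; without them, both could. */
    printf("sides that read 0: %d (never 2)\n", atomic_load(&saw_zero));
    return 0;
}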
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The barrier comes after an atomic increment, so it is enough to use
smp_mb__after_rmw(); this avoids a double barrier on x86 systems.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Ensure ordering between clearing the COMPUTING flag and checking
IRQFACT, and between setting the IRQFACT flag and checking
COMPUTING. This ensures that no wakeups are lost.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QemuEvent is currently broken on ARM due to missing memory barriers
after qatomic_*(). Apart from adding the memory barrier, a closer look
reveals some unpaired memory barriers that are not really needed and
complicate the functions unnecessarily. Also, the code relies on
a memory barrier in ResetEvent(); the barrier _ought_ to be there
but there is really no documentation about it, so make it explicit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QemuEvent is currently broken on ARM due to missing memory barriers
after qatomic_*(). Apart from adding the memory barrier, a closer look
reveals some unpaired memory barriers too. Document more clearly what
is going on.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On ARM, seqcst loads and stores (which QEMU does not use) are compiled
respectively as LDAR and STLR instructions. Even though LDAR is
also used for load-acquire operations, it also waits for all STLRs to
leave the store buffer. Thus, LDAR and STLR alone are load-acquire
and store-release operations, but LDAR also provides store-against-load
ordering as long as the previous store is a STLR.
Compare this to ARMv7, where store-release is DMB+STR and load-acquire
is LDR+DMB, but an additional DMB is needed between store-seqcst and
load-seqcst (e.g. DMB+STR+DMB+LDR+DMB); or with x86, where MOV provides
load-acquire and store-release semantics and the two can be reordered.
Likewise, on ARM sequentially consistent read-modify-write operations only
need to use LDAXR and STLXR respectively for the load and the store, while
on x86 they need to use the stronger LOCK prefix.
In a strange twist of events, however, the _stronger_ semantics
of the ARM instructions can end up causing bugs on ARM, not on x86.
The problems occur when seqcst atomics are mixed with relaxed atomics.
QEMU's atomics try to bridge the Linux API (that most of the developers
are familiar with) and the C11 API, and the two have a substantial
difference:
- in Linux, strongly-ordered atomics such as atomic_add_return() affect
the global ordering of _all_ memory operations, including for example
READ_ONCE()/WRITE_ONCE()
- in C11, sequentially consistent atomics (except for seq-cst fences)
only affect the ordering of sequentially consistent operations.
In particular, since relaxed loads are done with LDR on ARM, they are
not ordered against seqcst stores (which are done with STLR).
QEMU implements high-level synchronization primitives with the idea that
the primitives contain the necessary memory barriers, and the callers can
use relaxed atomics (qatomic_read/qatomic_set) or even regular accesses.
This is very much incompatible with the C11 view that seqcst accesses
are only ordered against other seqcst accesses, and requires using seqcst
fences as in the following example:
    thread 1                     thread 2
    --------                     --------
    qatomic_set(&y, 1);          qatomic_set(&x, 1);
    smp_mb();                    smp_mb();
    ... qatomic_read(&x) ...     ... qatomic_read(&y) ...
When a qatomic_*() read-modify-write operation is used instead of one
or both stores, developers that are more familiar with the Linux API may
be tempted to omit the smp_mb(), which will work on x86 but not on ARM.
This nasty difference between Linux and C11 read-modify-write operations
has already caused issues in util/async.c and more are being found.
Provide something similar to Linux smp_mb__before/after_atomic(); this
has the double function of documenting clearly why there is a memory
barrier, and avoiding a double barrier on x86 and s390x systems.
The new macro can already be put to use in qatomic_mb_set().
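A hedged sketch of what such a macro can look like (illustrative only; the exact macro name and the set of architectures where it degrades to a compiler barrier are assumptions based on the description above):

#include <stdatomic.h>
#include <stdio.h>

/* On strongly ordered hosts (x86, s390x) a read-modify-write already implies
 * a full barrier, so a compiler barrier suffices; elsewhere emit a real
 * fence.  Assumed naming, mirroring the description above. */
#if defined(__x86_64__) || defined(__i386__) || defined(__s390x__)
#define smp_mb__after_rmw() atomic_signal_fence(memory_order_seq_cst)
#else
#define smp_mb__after_rmw() atomic_thread_fence(memory_order_seq_cst)
#endif

static atomic_int counter;
static atomic_int flag;

static void publish(void)
{
    atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    smp_mb__after_rmw();   /* documents why ordering is needed right here */
    /* Subsequent relaxed accesses are now ordered after the increment. */
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
}

int main(void)
{
    publish();
    printf("counter=%d flag=%d\n", atomic_load(&counter), atomic_load(&flag));
    return 0;
}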
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The following improvements are made for predicated HVX instructions:
- During gen_commit_hvx, unconditionally move the "new" value into the dest
- Don't set slot_cancelled
- Remove runtime bookkeeping of which registers were updated
- Reduce the cases where gen_log_vreg_write[_pair] is called; it's only
  needed for special operands VxxV and VyV
- Remove gen_log_qreg_write
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-15-tsimpson@quicinc.com>
We only need to track the slot for predicated stores and predicated HVX
instructions.
Add arguments to the probe helper functions to indicate if the slot
is predicated.
Here is a simple example of the differences in the TCG code generated:
IN:
0x00400094: 0xf900c102 { if (P0) R2 = and(R0,R1) }
BEFORE
---- 00400094
mov_i32 slot_cancelled,$0x0
mov_i32 new_r2,r2
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 new_r2,tmp0
br $L2
set_label $L1
or_i32 slot_cancelled,slot_cancelled,$0x8
set_label $L2
mov_i32 r2,new_r2
AFTER
---- 00400094
mov_i32 new_r2,r2
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 new_r2,tmp0
br $L2
set_label $L1
set_label $L2
mov_i32 r2,new_r2
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-14-tsimpson@quicinc.com>
We assign the instruction destination register to hex_new_value[num]
instead of a TCG temp that gets copied back to hex_new_value[num].
We introduce new functions get_result_gpr[_pair] to facilitate getting
the proper destination register.
Since we preload hex_new_value for predicated instructions, we don't
need the check for slot_cancelled. So, we call gen_log_reg_write instead.
We update the helper function generation and gen_tcg.h to maintain the
disable-hexagon-idef-parser configuration.
Here is a simple example of the differences in the TCG code generated:
IN:
0x00400094: 0xf900c102 { if (P0) R2 = and(R0,R1) }
BEFORE
---- 00400094
mov_i32 slot_cancelled,$0x0
mov_i32 new_r2,r2
mov_i32 loc2,$0x0
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 loc2,tmp0
br $L2
set_label $L1
or_i32 slot_cancelled,slot_cancelled,$0x8
set_label $L2
and_i32 tmp0,slot_cancelled,$0x8
movcond_i32 new_r2,tmp0,$0x0,loc2,new_r2,eq
mov_i32 r2,new_r2
AFTER
---- 00400094
mov_i32 slot_cancelled,$0x0
mov_i32 new_r2,r2
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 new_r2,tmp0
br $L2
set_label $L1
or_i32 slot_cancelled,slot_cancelled,$0x8
set_label $L2
mov_i32 r2,new_r2
We'll remove the unnecessary manipulation of slot_cancelled in a
subsequent patch.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-13-tsimpson@quicinc.com>
The F2_sffms instruction [r0 -= sfmpy(r1, r2)] doesn't properly
handle -0. Previously we would negate the input operand by subtracting
from zero. Instead, we negate by changing the sign bit.
Test case added to tests/tcg/hexagon/fpstuff.c
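A small standalone demonstration of why the sign-bit flip matters for signed zero (plain C, independent of the Hexagon implementation):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Negating by subtracting from zero: +0.0f maps to +0.0f instead of -0.0f. */
static float neg_by_sub(float x)
{
    return 0.0f - x;
}

/* Negating by flipping the IEEE-754 sign bit handles the zeros correctly. */
static float neg_by_signbit(float x)
{
    uint32_t bits;

    memcpy(&bits, &x, sizeof(bits));
    bits ^= UINT32_C(0x80000000);
    memcpy(&x, &bits, sizeof(x));
    return x;
}

int main(void)
{
    printf("0 - (+0.0f)        = %+f\n", neg_by_sub(0.0f));      /* +0.000000 */
    printf("signbit-negate(+0) = %+f\n", neg_by_signbit(0.0f));  /* -0.000000 */
    return 0;
}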
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-12-tsimpson@quicinc.com>
Made possible by new toolchain container
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-11-tsimpson@quicinc.com>
- Replace __builtin_* with inline assembly; the __builtin's are subject to
  change with different compiler releases, so they might break
- Mark arrays as aligned when accessed as HVX vectors
- Clean up comments
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-10-tsimpson@quicinc.com>
Add control registers (c4, c5) to clobbers list
Made possible by new toolchain container
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-9-tsimpson@quicinc.com>
- Extend the analyze_<tag> functions for HVX vector and predicate writes
- Remove calls to ctx_log_vreg_write[_pair] from gen_tcg_funcs.py
- During gen_start_packet, reload the predicated HVX registers into
  future_VRegs and tmp_VRegs
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-8-tsimpson@quicinc.com>
The pkt_has_store_s1 field in CPUHexagonState is only needed in generated
helpers for scalar load instructions. See check_noshuf and mem_load[1248]
in op_helper.c.
We add logic in gen_analyze_funcs.py to set need_pkt_has_store_s1 in
DisasContext when it is needed at runtime.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-7-tsimpson@quicinc.com>
We create a new generator that creates an analyze_<tag> function for
each instruction. Currently, these functions record the writes to
R, P, and C registers by calling ctx_log_reg_write[_pair] or
ctx_log_pred_write.
During gen_start_packet, we invoke the analyze_<tag> function for
each instruction in the packet, and we mark the implicit register
and predicate writes.
Doing the analysis up front has several advantages:
- We remove calls to ctx_log_* from gen_tcg_funcs.py and genptr.c
- After the analysis is performed, we can initialize hex_new_value
for each of the predicated assignments rather than during TCG
generation for the instructions
- This is a stepping stone for future work where the analysis will
include the set of registers that are read. In cases where
the packet doesn't have an overlap between the registers that are
written and registers that are read, we can avoid the intermediate
step of writing to hex_new_value. Note that other checks will also
be needed (e.g., no instructions can raise an exception).
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-6-tsimpson@quicinc.com>
These instructions perform a deallocframe+return (jumpr r31)
Add overrides for
L4_return
SL2_return
L4_return_t
L4_return_f
L4_return_tnew_pt
L4_return_fnew_pt
L4_return_tnew_pnt
L4_return_fnew_pnt
SL2_return_t
SL2_return_f
SL2_return_tnew
SL2_return_fnew
This patch eliminates the last helper that uses write_new_pc, so we
remove it from op_helper.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-5-tsimpson@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-4-tsimpson@quicinc.com>
Add overrides for
J2_callr
J2_callrt
J2_callrf
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-3-tsimpson@quicinc.com>
Removes code paths used by COF instructions, which are no longer
processed by idef-parser.
Tested-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230131223133.8592-1-anjo@rev.ng>
Merge mov with andi.
Suggested-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20230306225724.2105263-1-richard.henderson@linaro.org>
The --disable-hexagon-idef-parser configuration was broken by commit
2feacf60c23ba6 (target/hexagon: Drop tcg_temp_free from C code).
That config is not tested by CI.
The fix is simple: mark a few TCGv variables as unused.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230306172515.346813-1-tsimpson@quicinc.com>
RISC-V ACPI-related functionality for the virt machine is added in
virt-acpi-build.c. Add the maintainer entry after moving the
ARM ACPI entry under the main ACPI entry.
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230302091212.999767-9-sunilvl@ventanamicro.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>