No change to the ultimate load/store routines yet, so some atomicity
conditions are not yet honored, but this plumbs the change to alignment
through the relevant functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
No change to the ultimate load/store routines yet, so some atomicity
conditions are not yet honored, but this plumbs the change to alignment
through the relevant functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Examine MemOp for atomicity and alignment, adjusting alignment
as required to implement atomicity on the host.
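As a rough illustration of the idea (hypothetical helper and parameter
names, not this series' code), the adjustment amounts to bumping the
enforced alignment up to the access size when atomicity is requested and
the host cannot otherwise provide it:

    #include <stdbool.h>

    /*
     * Hypothetical sketch: choose the alignment (in bytes) to enforce so
     * that a single aligned host access provides the requested atomicity.
     */
    static int required_alignment(int access_bytes, int guest_align_bytes,
                                  bool atomic_required,
                                  bool host_atomic_when_unaligned)
    {
        if (atomic_required && !host_atomic_when_unaligned) {
            /* Force natural alignment; an aligned host access is atomic. */
            return access_bytes > guest_align_bytes
                   ? access_bytes : guest_align_bytes;
        }
        return guest_align_bytes;
    }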
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Now that tcg_out_helper_load_regs is not recursive, we can
merge it into its only caller, tcg_out_helper_load_slots.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With x86_64 as host, we do not have any temporaries with which to
resolve cycles, but we do have xchg. As a side bonus, the set of
graphs that can be made with 3 nodes and all nodes conflicting is
small: two. We can solve the cycle with a single temp.
This is required for x86_64 to handle stores of i128: 1 address
register and 2 data registers.
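A standalone sketch of the cycle-breaking idea, with plain C swaps
standing in for the emitted xchg instructions (not the backend code
itself): a three-register cycle resolves with two exchanges.

    #include <assert.h>

    static void swap(int *x, int *y)    /* stands in for the host xchg */
    {
        int t = *x;
        *x = *y;
        *y = t;
    }

    int main(void)
    {
        int a = 0, b = 1, c = 2;
        /* Want the cyclic assignment a <- b, b <- c, c <- a. */
        swap(&a, &b);                   /* a = old b, b = old a */
        swap(&b, &c);                   /* b = old c, c = old a */
        assert(a == 1 && b == 2 && c == 0);
        return 0;
    }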
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add opcodes for backend support for 128-bit memory operations.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro
with a function taking a memop argument.
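Roughly the shape of the change; the function body below is a made-up
example answer, not any particular target's, and assumes QEMU's MemOp
definitions (MO_SIZE, MO_64):

    /* Before: one yes/no answer for the whole backend. */
    #define TCG_TARGET_HAS_MEMORY_BSWAP  0

    /*
     * After: the answer can depend on the operation, e.g. swap small
     * accesses inline but punt on larger ones.  Illustrative body only.
     */
    static bool tcg_target_has_memory_bswap(MemOp memop)
    {
        return (memop & MO_SIZE) <= MO_64;
    }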
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The system is required to emulate unaligned accesses, even if the
hardware does not support them. The resulting trap may or may not
be more efficient than the qemu slow path. There are Linux kernel
patches in flight to allow userspace to query hardware support;
we can re-evaluate whether to enable this by default after that.
In the meantime, softmmu now matches user-only, where we already
assumed that unaligned accesses are supported.
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Test the final byte of an unaligned access.
Use BSTRINS.D to clear the range of bits, rather than AND.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This should be true of all loongarch64 hosts running Linux.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Drop the target-specific trampolines for the standard slow path.
This lets us use tcg_out_helper_{ld,st}_args, and handles the new
atomicity bits within MemOp.
At the same time, use the full load/store helpers for user-only mode.
Drop inline unaligned access support for user-only mode, as it does
not handle atomicity.
Use TCG_REG_T[1-3] in the tlb lookup, instead of TCG_REG_O[0-2].
This allows the constraints to be simplified.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Emphasize that the constant is unsigned.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Shuffle the order in tcg_out_movi_int to check s13 first, and
drop this check from tcg_out_movi_imm32. This might make the
sequence for in_prologue larger, but that is not worth worrying about.
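For reference, the signed 13-bit ("simm13") fit test that this ordering
hinges on is simply (a sketch, not the backend's exact helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* Does val fit in a SPARC signed 13-bit immediate?  Range [-4096, 4095]. */
    static bool fits_simm13(int64_t val)
    {
        return val >= -4096 && val <= 4095;
    }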
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Emphasize that the constant is signed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Always reserve r3 for tlb softmmu lookup. Fix a bug in user-only
ALL_QLDST_REGS, in that r14 is clobbered by the BLNE that leads
to the misaligned trap. Remove r0+r1 from user-only ALL_QLDST_REGS;
I believe these had been reserved for bswap, which we no longer
perform during qemu_st.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These features are present for Apple M1.
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notice when the host has additional atomic instructions.
The new variables will also be used in generated code.
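On Linux/AArch64 this kind of probe can read the hwcaps; a minimal
sketch, with illustrative variable names and assuming HWCAP_USCAT is the
kernel's LSE2 bit:

    #include <stdbool.h>
    #include <sys/auxv.h>    /* getauxval, AT_HWCAP */
    #include <asm/hwcap.h>   /* HWCAP_ATOMICS, HWCAP_USCAT (Linux/AArch64) */

    static bool have_lse;    /* ARMv8.1 LSE: LDADD, CAS, SWP, ...           */
    static bool have_lse2;   /* ARMv8.4 LSE2: relaxed single-copy atomicity */

    static void detect_host_atomics(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);

        have_lse  = (hwcap & HWCAP_ATOMICS) != 0;
        have_lse2 = (hwcap & HWCAP_USCAT) != 0;
    }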
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notice when Intel or AMD have guaranteed that vmovdqa is atomic.
The new variable will also be used in generated code.
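A sketch of what such a probe could look like with GCC/Clang on x86,
assuming the guarantee is summarized as "an aligned 16-byte VMOVDQA is
atomic on Intel and AMD parts that advertise AVX"; the variable name is
illustrative:

    #include <stdbool.h>
    #include <string.h>
    #include <cpuid.h>       /* __get_cpuid, bit_AVX (GCC/Clang, x86) */

    static bool have_atomic_vmovdqa;

    static void detect_vmovdqa_atomicity(void)
    {
        unsigned a, b, c, d;
        char vendor[13] = { 0 };

        if (!__get_cpuid(0, &a, &b, &c, &d)) {
            return;
        }
        memcpy(vendor + 0, &b, 4);      /* vendor string is EBX, EDX, ECX */
        memcpy(vendor + 4, &d, 4);
        memcpy(vendor + 8, &c, 4);

        if (strcmp(vendor, "GenuineIntel") && strcmp(vendor, "AuthenticAMD")) {
            return;
        }
        if (__get_cpuid(1, &a, &b, &c, &d)) {
            have_atomic_vmovdqa = (c & bit_AVX) != 0;
        }
    }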
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is an edge condition prior to gcc 13 for which optimization
is required to generate 16-byte atomic sequences. Detect this.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can now fold these two pieces of code.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
TCG backends may need to defer to a helper to implement
the atomicity required by a given operation. Mirror the
interface used in system mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With the current structure of cputlb.c, there is no difference
between the little-endian and big-endian entry points, aside
from the assert. Unify the pairs of functions.
Hoist the qemu_{ld,st}_helpers arrays to tcg.c.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Create ldst_atomicity.c.inc.
Not required for user-only code loads, because we've ensured that
the page is read-only before beginning to translate code.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This field may be used to describe the precise atomicity requirements
of the guest access, which in turn constrain the methods by which the
host may emulate it.
For instance, the AArch64 LDP (32-bit) instruction changes semantics
with ARMv8.4 LSE2, from

    MO_64 | MO_ATOM_IFALIGN_PAIR
    (64 bits, single-copy atomic only on 4-byte units,
     non-atomic if not aligned to 4 bytes)

to

    MO_64 | MO_ATOM_WITHIN16
    (64 bits, single-copy atomic within a 16-byte block).

The former may be implemented with two 4-byte loads, or a single 8-byte
load if that happens to be efficient on the host. The latter may not
be implemented with two 4-byte loads and may also require a helper when
misaligned.
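A sketch of how a front end might express this choice, using the
MO_ATOM_* names introduced here (the function itself is illustrative,
not this series' code, and assumes QEMU's MemOp definitions):

    static MemOp ldp_memop(bool have_lse2)
    {
        if (have_lse2) {
            /* Single-copy atomic within a 16-byte block. */
            return MO_64 | MO_ATOM_WITHIN16;
        }
        /* Atomic only per 4-byte half, and only if 4-aligned. */
        return MO_64 | MO_ATOM_IFALIGN_PAIR;
    }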
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add '9P' to the summary output section of 'VirtFS' to avoid it being
confused with virtiofs.
Based-on: <20230503130757.863824-1-pefoley@google.com>
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <E1px7Id-0000NE-OQ@lizzy.crudebyte.com>
xen_9pfs_free can't use gnttabdev since it is already closed and NULL-ed
out when free is called. Do the teardown in _disconnect(). This
matches the setup done in _connect().
trace-events are also added for the XenDevOps functions.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Message-Id: <20230502143722.15613-1-jandryuk@gmail.com>
[C.S.: - Remove redundant return in xen_9pfs_free().
- Add comment to trace-events. ]
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Free allocated directory entries in v9fs_rreaddir() if argument
`entries` was passed as NULL, to avoid a memory leak. It is
explicitly allowed by design for `entries` to be NULL. [1]
[1] https://lore.kernel.org/all/1690923.g4PEXVpXuU@silver
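The ownership rule being fixed, sketched with purely hypothetical types
and names (the real code is the 9p test client's v9fs_rreaddir()):

    #include <stdlib.h>

    /* Hypothetical types; only the ownership rule matters here. */
    struct dirent_node {
        struct dirent_node *next;
        char *name;
    };

    static void hand_over_or_free(struct dirent_node *decoded,
                                  struct dirent_node **entries)
    {
        if (entries) {
            *entries = decoded;              /* caller takes ownership */
            return;
        }
        while (decoded) {                    /* caller passed NULL: free now */
            struct dirent_node *next = decoded->next;
            free(decoded->name);
            free(decoded);
            decoded = next;
        }
    }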
Reported-by: Coverity (CID 1487558)
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <E1psh5T-0002XN-1C@lizzy.crudebyte.com>
It's only required for the proxy helper.
Add a new option for the proxy helper rather than enabling it
implicitly.
Change-Id: I95b73fca625529e99d16b0a64e01c65c0c1d43f2
Signed-off-by: Peter Foley <pefoley@google.com>
Message-Id: <20230503130757.863824-1-pefoley@google.com>
[C.S.: - Resolve merge conflict. ]
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Merge tag 'pull-request-2023-05-15v2' of https://gitlab.com/thuth/qemu into staging
* Various small test updates
* Some small doc updates
* Introduce replacement for -async-teardown that shows up in the QAPI
* Make machine-qmp-cmds.c and xilinx_ethlite.c target-independent
* Fix s390x LDER instruction
* Fix s390x EXECUTE instruction with relative branches
* tag 'pull-request-2023-05-15v2' of https://gitlab.com/thuth/qemu: (21 commits)
tests/tcg/s390x: Test EXECUTE of relative branches
target/s390x: Fix EXECUTE of relative branches
tests/tcg/s390x: Enable the multiarch system tests
tests/tcg/multiarch: Make the system memory test work on big-endian
s390x/tcg: Fix LDER instruction format
hw/net: Move xilinx_ethlite.c to the target-independent source set
hw/core: Move machine-qmp-cmds.c into the target independent source set
cpu: Introduce a wrapper for being able to use TARGET_NAME in common code
hw/core: Use a callback for target specific query-cpus-fast information
docs/about/emulation: fix typo
docs/devel: remind developers to run CI container pipeline when updating images
s390x/pv: Fix spurious warning with asynchronous teardown
util/async-teardown: wire up query-command-line-options
tests/lcitool: Add mtools and xorriso and remove genisoimage as dependencies
tests: libvirt-ci: Update to commit 'c8971e90ac' to pull in mformat and xorriso
Add information how to fix common build error on Windows in symlink-install-tree
hw/pci-bridge: Fix release ordering by embedding PCIBridgeWindows within PCIBridge
tests/qtest: replace qmp_discard_response with qtest_qmp_assert_success
net: stream: test reconnect option with an unix socket
sysemu/kvm: Remove unused headers
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>