Commit Graph

1068 Commits

Mikhail Tyutin
c30d0b861c accel/tcg: Call save_iotlb_data from io_readx as well
Apply save_iotlb_data() to io_readx() as well as to io_writex().
This fixes a SEGFAULT when plugins call qemu_plugin_hwaddr_phys_addr()
for addresses inside an MMIO region.

Signed-off-by: Dmitriy Solovev <d.solovev@yadro.com>
Signed-off-by: Mikhail Tyutin <m.tyutin@yadro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230804110903.19968-1-m.tyutin@yadro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-06 10:10:07 -07:00
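
A minimal, self-contained sketch of the idea behind the fix above: cache the MMIO translation on the read path as well as the write path, so a later physical-address query has something valid to report. All names below (SavedIoTlb, save_iotlb, io_read, io_write) are hypothetical stand-ins for illustration, not the QEMU internals.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the per-CPU data that save_iotlb_data() caches. */
    typedef struct {
        bool     valid;
        uint64_t mr_base;     /* stand-in for the MemoryRegionSection */
        uint64_t mr_offset;
    } SavedIoTlb;

    static SavedIoTlb saved;

    static void save_iotlb(uint64_t mr_base, uint64_t mr_offset)
    {
        saved = (SavedIoTlb){ .valid = true, .mr_base = mr_base, .mr_offset = mr_offset };
    }

    /* Before the fix only the write path recorded this, so a plugin asking for
     * the physical address of an MMIO *read* found nothing valid to use. */
    static uint64_t io_read(uint64_t mr_base, uint64_t mr_offset)
    {
        save_iotlb(mr_base, mr_offset);          /* the added call */
        return 0;                                /* device access elided */
    }

    static void io_write(uint64_t mr_base, uint64_t mr_offset, uint64_t val)
    {
        save_iotlb(mr_base, mr_offset);          /* already present */
        (void)val;
    }

    int main(void)
    {
        io_read(0x9000000, 0x18);
        io_write(0x9000000, 0x18, 0);
        if (saved.valid) {
            printf("plugin can resolve phys addr 0x%" PRIx64 "\n",
                   saved.mr_base + saved.mr_offset);
        }
        return 0;
    }
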
Richard Henderson
f7eaf9d702 accel/tcg: Do not issue misaligned i/o
In the single-page case we were issuing misaligned i/o to
the memory subsystem, which does not handle it properly.
Split such accesses via do_{ld,st}_mmio_*.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1800
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-05 18:17:17 +00:00
Richard Henderson
190aba803f accel/tcg: Issue wider aligned i/o in do_{ld,st}_mmio_*
If the address and size are aligned, send larger chunks
to the memory subsystem.  This will be required to make
more use of these helpers.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-05 18:17:17 +00:00
Richard Henderson
1966855e56 accel/tcg: Adjust parameters and locking with do_{ld,st}_mmio_*
Replace MMULookupPageData* with CPUTLBEntryFull, addr, size.
Move QEMU_IOTHREAD_LOCK_GUARD to the caller.

This simplifies the usage from do_ld16_beN and do_st16_leN, which were
not locking the entire operation and had to jump through hoops to pass
addr and size.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-05 18:17:17 +00:00
Richard Henderson
ad17868eb1 accel/tcg: Clear tcg_ctx->gen_tb on buffer overflow
On overflow of code_gen_buffer, we unlock the guest pages we had been
translating, but failed to clear gen_tb.  On restart, if we cannot
allocate a TB, we exit to the main loop to perform the flush of all
TBs as soon as possible.  With garbage in gen_tb, we hit an assert:

../src/accel/tcg/tb-maint.c:348:page_unlock__debug: \
    assertion failed: (page_is_locked(pd))

Fixes: deba78709a ("accel/tcg: Always lock pages before translation")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-31 12:19:13 -07:00
Gavin Shan
fe6bda58e0 kvm: Fix crash due to access uninitialized kvm_state
QEMU dumps core on arm64; the backtrace extracted from the core dump is
shown below. The crash is caused by accessing the uninitialized @kvm_state
in kvm_flush_coalesced_mmio_buffer(), a consequence of commit 176d073029
("hw/arm/virt: Use machine_memory_devices_init()"), where the machine's
memory region is added earlier than before.

    main
    qemu_init
    configure_accelerators
    qemu_opts_foreach
    do_configure_accelerator
    accel_init_machine
    kvm_init
    virt_kvm_type
    virt_set_memmap
    machine_memory_devices_init
    memory_region_add_subregion
    memory_region_add_subregion_common
    memory_region_update_container_subregions
    memory_region_transaction_begin
    qemu_flush_coalesced_mmio_buffer
    kvm_flush_coalesced_mmio_buffer

Fix it by bailing early in kvm_flush_coalesced_mmio_buffer() on the
uninitialized @kvm_state. With this applied, no crash is observed on
arm64.

Fixes: 176d073029 ("hw/arm/virt: Use machine_memory_devices_init()")
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230731125946.2038742-1-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-31 14:19:43 +01:00
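
The shape of the fix, as a sketch: bail out of kvm_flush_coalesced_mmio_buffer() while KVM is still initializing. The field name coalesced_mmio_ring and the exact guard below are assumptions for illustration, not a copy of the QEMU source.

    void kvm_flush_coalesced_mmio_buffer(void)
    {
        KVMState *s = kvm_state;

        /* Memory transactions can now happen before kvm_state is set up,
         * because machine_memory_devices_init() adds the machine's memory
         * region earlier; simply return until KVM init has finished. */
        if (!s || !s->coalesced_mmio_ring) {
            return;
        }

        /* ... drain the coalesced MMIO ring as before ... */
    }
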
Luca Bonissi
32b120394c accel/tcg: Fix type of 'last' for pageflags_{find,next}
These should match 'start' as target_ulong, not target_long.

On 32-bit targets, the parameter was sign-extended to uint64_t,
so only the first mmap within the upper 2GB of memory could succeed.

Signed-off-by: Luca Bonissi <qemu@bonslack.org>
Message-Id: <327460e2-0ebd-9edb-426b-1df80d16c32a@bonslack.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-24 09:48:49 +01:00
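
A tiny stand-alone illustration of the signed-vs-unsigned widening at play here (plain C, not the QEMU code): a 32-bit "last" address above 2GB sign-extends to a huge 64-bit value, while the unsigned type widens correctly.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  last_signed   = (int32_t)0xfffff000u; /* "last" above 2GB on a 32-bit target */
        uint32_t last_unsigned = 0xfffff000u;

        /* Widening to 64 bits, as happens when the value feeds a uint64_t API: */
        printf("signed:   0x%016" PRIx64 "\n", (uint64_t)(int64_t)last_signed);
        printf("unsigned: 0x%016" PRIx64 "\n", (uint64_t)last_unsigned);
        /* prints 0xfffffffffffff000 vs 0x00000000fffff000 */
        return 0;
    }
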
Anton Johansson
8c605cf1d4 accel/tcg: Zero-pad vaddr in tlb_debug output
In replacing target_ulong with vaddr and TARGET_FMT_lx with VADDR_PRIx,
the zero-padding of TARGET_FMT_lx got lost.  Readd 16-wide zero-padding
for logging consistency.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230713120746.26897-1-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-24 09:48:41 +01:00
Richard Henderson
2c8412d469 accel/tcg: Take mmap_lock in load_atomic*_or_exit
For user-only, the probe for page writability may race with another
thread's mprotect.  Take the mmap_lock around the operation.  This
is still faster than the start/end_exclusive fallback.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-23 17:57:10 +01:00
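
A rough sketch of the locking pattern described above (user-only path; page_check_range() is used as in QEMU, but the surrounding code is abridged and partly assumed):

    mmap_lock();
    if (page_check_range(addr, size, PAGE_WRITE)) {
        /* another thread's mprotect() cannot change the answer while we
         * hold mmap_lock, so the host atomic fast path is safe to use */
    } else {
        /* fall back to the start_exclusive()/end_exclusive() slow path */
    }
    mmap_unlock();
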
Richard Henderson
f1ce0b8028 accel/tcg: Fix sense of read-only probes in ldst_atomicity
In the initial commit, cdfac37be0, the sense of the test is incorrect,
as the -1/0 return was confusing.  In bef6f008b9, we mechanically
invert all callers while changing to false/true return, preserving the
incorrectness of the test.

Now that the return sense is sane, it's easy to see that if !write,
then the page is not modifiable (i.e. most likely read-only, with
PROT_NONE handled via SIGSEGV).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-23 17:57:10 +01:00
Peter Maydell
e60a7d0d4d accel/tcg: Zero-pad PC in TCG CPU exec trace lines
In commit f0a08b0913 we changed the type of the PC from
target_ulong to vaddr.  In doing so we inadvertently dropped the
zero-padding on the PC in trace lines (the second item inside the []
in these lines).  They used to look like this on AArch64, for
instance:

Trace 0: 0x7f2260000100 [00000000/0000000040000000/00000061/ff200000]

and now they look like this:
Trace 0: 0x7f4f50000100 [00000000/40000000/00000061/ff200000]

and if the PC happens to be somewhere low like 0x5000
then the field is shown as /5000/.

This is because TARGET_FMT_lx is a "%08x" or "%016x" specifier,
depending on TARGET_LONG_SIZE, whereas VADDR_PRIx is just PRIx64
with no width specifier.

Restore the zero-padding by adding an 016 width specifier to
this tracing and a couple of others that were similarly recently
changed to use VADDR_PRIx without a width specifier.

Unfortunately we can't restore the "32-bit guests are padded to
8 hex digits and 64-bit guests to 16 hex digits" behaviour so
easily.

Fixes: f0a08b0913 ("accel/tcg/cpu-exec.c: Widen pc to vaddr")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-id: 20230711165434.4123674-1-peter.maydell@linaro.org
2023-07-17 11:05:08 +01:00
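
A stand-alone illustration of the formatting difference (plain C): PRIx64 with no width drops the leading zeroes, while an explicit 016 width restores the fixed-width field.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pc = 0x5000;

        printf("[%" PRIx64 "]\n", pc);     /* [5000]             -- padding lost     */
        printf("[%016" PRIx64 "]\n", pc);  /* [0000000000005000] -- padding restored */
        return 0;
    }
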
Richard Henderson
76f9d6ad19 tcg: Use HAVE_CMPXCHG128 instead of CONFIG_CMPXCHG128
We adjust CONFIG_ATOMIC128 and CONFIG_CMPXCHG128 with
CONFIG_ATOMIC128_OPT in atomic128.h.  It is difficult
to tell when those changes have been applied with the
ifdef we must use with CONFIG_CMPXCHG128.  So instead
use HAVE_CMPXCHG128, which triggers -Werror-undef when
the proper header has not been included.

Improves tcg_gen_atomic_cmpxchg_i128 for s390x host, which
requires CONFIG_ATOMIC128_OPT.  Without this we fall back
to EXCP_ATOMIC to single-step 128-bit atomics, which is
slow enough to cause some tests to time out.

Reported-by: Thomas Huth <thuth@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-15 08:02:49 +01:00
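
An illustrative fragment of why the switch matters (not the actual QEMU code): #ifdef is silently false when the defining header was forgotten, while #if plus -Werror=undef turns that mistake into a build failure.

    #ifdef CONFIG_CMPXCHG128
    /* silently compiled out if the header that defines the macro was
       never included -- the bug goes unnoticed */
    #endif

    #if HAVE_CMPXCHG128
    /* with -Werror=undef this fails to build whenever the defining
       header has not been included, so the mistake is caught */
    #endif
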
Richard Henderson
deba78709a accel/tcg: Always lock pages before translation
We had done this for user-mode by invoking page_protect
within the translator loop.  Extend this to handle system
mode as well.  Move page locking out of tb_link_page.

Reported-by: Liren Wei <lrwei@bupt.edu.cn>
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
2023-07-15 08:02:33 +01:00
Richard Henderson
bef6f008b9 accel/tcg: Return bool from page_check_range
Replace the 0/-1 result with true/false.
Invert the sense of the test of all callers.
Document the function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230707204054.8792-25-richard.henderson@linaro.org>
2023-07-15 08:02:32 +01:00
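
A sketch of what the caller inversion looks like (flags and labels are illustrative):

    /* Before: page_check_range() returned 0 on success, -1 on failure. */
    if (page_check_range(addr, len, PAGE_READ) < 0) {
        goto fail;
    }

    /* After: it returns true when the whole range has the requested flags. */
    if (!page_check_range(addr, len, PAGE_READ)) {
        goto fail;
    }
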
Richard Henderson
91e9e116fe accel/tcg: Accept more page flags in page_check_range
Only PAGE_WRITE needs special attention, all others can be
handled as we do for PAGE_READ.  Adjust the mask.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230707204054.8792-24-richard.henderson@linaro.org>
2023-07-15 08:02:32 +01:00
Richard Henderson
f2bb7cf299 accel/tcg: Introduce page_find_range_empty
Use the interval tree to locate an unused range in the VM.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230707204054.8792-17-richard.henderson@linaro.org>
2023-07-15 08:02:32 +01:00
Richard Henderson
c2281ddcf3 accel/tcg: Introduce page_check_range_empty
Examine the interval tree to validate that a region
has no existing mappings.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230707204054.8792-10-richard.henderson@linaro.org>
2023-07-15 08:02:32 +01:00
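
A sketch of the shape this check can take with the qemu/interval-tree.h API; the root variable name and exact types are assumptions for illustration.

    /* The range [start, last] is unmapped iff no interval-tree node overlaps it. */
    bool page_check_range_empty(target_ulong start, target_ulong last)
    {
        return interval_tree_iter_first(&pageflags_root, start, last) == NULL;
    }
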
Richard Henderson
cb62bd15e1 accel/tcg: Split out cpu_exec_longjmp_cleanup
Share the setjmp cleanup between cpu_exec_step_atomic
and cpu_exec_setjmp.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-15 08:02:32 +01:00
Alex Bennée
6d03226b42 plugins: force slow path when plugins instrument memory ops
The lack of SVE memory instrumentation has been an omission in plugin
handling since it was introduced. Fortunately we can utilise the
probe_* functions to force all memory accesses to follow the slow
path. We do this by checking the access type and the presence of plugin
memory callbacks and, if they are set, returning the TLB_MMIO flag.

We have to jump through a few hoops in user mode to re-use the flag,
but it has the desired effect:

 ./qemu-system-aarch64 -display none -serial mon:stdio \
   -M virt -cpu max -semihosting-config enable=on \
   -kernel ./tests/tcg/aarch64-softmmu/memory-sve \
   -plugin ./contrib/plugins/libexeclog.so,ifilter=st1w,afilter=0x40001808 -d plugin

gives (disas doesn't currently understand st1w):

  0, 0x40001808, 0xe54342a0, ".byte 0xa0, 0x42, 0x43, 0xe5", store, 0x40213010, RAM, store, 0x40213014, RAM, store, 0x40213018, RAM

And for user-mode:

  ./qemu-aarch64 \
    -plugin contrib/plugins/libexeclog.so,afilter=0x4007c0 \
    -d plugin \
    ./tests/tcg/aarch64-linux-user/sha512-sve

gives:

  1..10
  ok 1 - do_test(&tests[i])
  0, 0x4007c0, 0xa4004b80, ".byte 0x80, 0x4b, 0x00, 0xa4", load, 0x5500800370, load, 0x5500800371, load, 0x5500800372, load, 0x5500800373, load, 0x5500800374, load, 0x5500800375, load, 0x5500800376, load, 0x5500800377, load, 0x5500800378, load, 0x5500800379, load, 0x550080037a, load, 0x550080037b, load, 0x550080037c, load, 0x550080037d, load, 0x550080037e, load, 0x550080037f, load, 0x5500800380, load, 0x5500800381, load, 0x5500800382, load, 0x5500800383, load, 0x5500800384, load, 0x5500800385, load, 0x5500800386, lo
  ad, 0x5500800387, load, 0x5500800388, load, 0x5500800389, load, 0x550080038a, load, 0x550080038b, load, 0x550080038c, load, 0x550080038d, load, 0x550080038e, load, 0x550080038f, load, 0x5500800390, load, 0x5500800391, load, 0x5500800392, load, 0x5500800393, load, 0x5500800394, load, 0x5500800395, load, 0x5500800396, load, 0x5500800397, load, 0x5500800398, load, 0x5500800399, load, 0x550080039a, load, 0x550080039b, load, 0x550080039c, load, 0x550080039d, load, 0x550080039e, load, 0x550080039f, load, 0x55008003a0, load, 0x55008003a1, load, 0x55008003a2, load, 0x55008003a3, load, 0x55008003a4, load, 0x55008003a5, load, 0x55008003a6, load, 0x55008003a7, load, 0x55008003a8, load, 0x55008003a9, load, 0x55008003aa, load, 0x55008003ab, load, 0x55008003ac, load, 0x55008003ad, load, 0x55008003ae, load, 0x55008003af

(4007c0 is the ld1b in the sha512-sve)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Robert Henry <robhenry@microsoft.com>
Cc: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-20-alex.bennee@linaro.org>
2023-07-03 12:51:58 +01:00
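
The core idea, as a hedged fragment (the helper for querying plugin callbacks and the surrounding probe code are assumed, not quoted):

    /* If a plugin registered memory callbacks, report TLB_MMIO from the
     * probe so the access leaves the fast path and the plugin gets to see
     * every load/store, including those issued by SVE instructions. */
    if (access_type != MMU_INST_FETCH && cpu_plugin_mem_cbs_enabled(cpu)) {
        flags |= TLB_MMIO;
    }
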
Mark Cave-Ayland
e665cf72fe accel/tcg: Assert one page in tb_invalidate_phys_page_range__locked
Ensure that both the start and last addresses are within
the same guest page.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230629082522.606219-3-mark.cave-ayland@ilande.co.uk>
[rth: Use tcg_debug_assert, simplify the expression]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-01 08:26:54 +02:00
Mark Cave-Ayland
3307e08c6f accel/tcg: Fix start page passed to tb_invalidate_phys_page_range__locked
Due to a copy-paste error in tb_invalidate_phys_range, the wrong
start address was passed to tb_invalidate_phys_page_range__locked.
The correct behaviour is to use the start of each page in turn.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Fixes: e506ad6a05 ("accel/tcg: Pass last not end to tb_invalidate_phys_range")
Message-Id: <20230629082522.606219-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-01 08:26:54 +02:00
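
A rough sketch of the corrected loop shape (parameter lists abridged and partly assumed): the per-page call receives the start of the page being processed, not the start of the whole range.

    for (addr = start & TARGET_PAGE_MASK; addr <= last; addr += TARGET_PAGE_SIZE) {
        tb_page_addr_t page_last = MIN(addr | ~TARGET_PAGE_MASK, last);

        /* the buggy version passed 'start' here on every iteration */
        tb_invalidate_phys_page_range__locked(pages, pd, addr, page_last, 0);
    }
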
Isaku Yamahata
14a868c626 exec/memory: Add symbol for the min value of memory listener priority
Add MEMORY_LISTENER_PRIORITY_MIN as a symbolic value for the minimum
memory listener priority, replacing the hard-coded magic value 0.  Add
explicit initialization.

No functional change intended.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <29f88477fe82eb774bcfcae7f65ea21995f865f2.1687279702.git.isaku.yamahata@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-28 14:27:59 +02:00
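
The kind of change involved, sketched with a hypothetical listener (the macro value matches the commit's description; the listener itself is illustrative):

    #define MEMORY_LISTENER_PRIORITY_MIN 0

    static MemoryListener example_listener = {
        .name     = "example",
        .priority = MEMORY_LISTENER_PRIORITY_MIN,  /* was a bare, magic 0 */
    };
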
Isaku Yamahata
8be0461d37 exec/memory: Add symbol for memory listener priority for device backend
Add MEMORY_LISTENER_PRIORITY_DEV_BACKEND as a symbolic memory listener
priority, replacing the hard-coded value 10 used for the device
backend.

No functional change intended.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <8314d91688030d7004e96958f12e2c83fb889245.1687279702.git.isaku.yamahata@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-28 14:27:59 +02:00
Isaku Yamahata
5369a36c4f exec/memory: Add symbolic value for memory listener priority for accel
Add MEMORY_LISTENER_PRIORITY_ACCEL as a symbolic memory listener
priority, replacing the hard-coded value 10 used for accelerators.

No functional change intended.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <feebe423becc6e2aa375f59f6abce9a85bc15abb.1687279702.git.isaku.yamahata@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-28 14:27:59 +02:00
Philippe Mathieu-Daudé
dec68f7042 accel/kvm: Declare kvm_direct_msi_allowed in stubs
Avoid the following link failure when the next commit calls
kvm_direct_msi_enabled() from arm_gicv3_its_common.c:

  Undefined symbols for architecture arm64:
    "_kvm_direct_msi_allowed", referenced from:
        _its_class_name in hw_intc_arm_gicv3_its_common.c.o
  ld: symbol(s) not found for architecture arm64

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230405160454.97436-3-philmd@linaro.org>
2023-06-28 14:14:22 +02:00
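
The shape of the fix is simply to provide the symbol from the KVM stub so non-KVM builds still link; a sketch (file placement as described in the commit, exact type assumed):

    /* accel/stubs/kvm-stub.c */
    bool kvm_direct_msi_allowed;
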
Philippe Mathieu-Daudé
3b295bcb32 accel: Rename HVF 'struct hvf_vcpu_state' -> AccelCPUState
We want all accelerators to share the same opaque pointer in
CPUState.

Rename the 'hvf_vcpu_state' structure as 'AccelCPUState'.

Use the generic 'accel' field of CPUState instead of 'hvf'.

Replace g_malloc0() by g_new0() for readability.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230624174121.11508-17-philmd@linaro.org>
2023-06-28 14:14:22 +02:00
Philippe Mathieu-Daudé
af03d22a0a accel: Remove unused hThread variable on TCG/WHPX
On Windows hosts, cpu->hThread is assigned but never accessed:
remove it.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230624174121.11508-4-philmd@linaro.org>
2023-06-28 13:55:35 +02:00
Richard Henderson
187ba69453 accel/tcg: Move TLB_WATCHPOINT to TLB_SLOW_FLAGS_MASK
This frees up one bit of the primary tlb flags without
impacting the TLB_NOTDIRTY logic.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:33:00 +02:00
Richard Henderson
58e8f1f616 accel/tcg: Store some tlb flags in CPUTLBEntryFull
We have run out of bits we can use within the CPUTLBEntry comparators,
as TLB_FLAGS_MASK cannot overlap alignment.

Store slow_flags[] in CPUTLBEntryFull, and merge with the flags from
the comparator.  A new TLB_FORCE_SLOW bit is set within the comparator
as an indication that the slow path must be used.

Move TLB_BSWAP to TLB_SLOW_FLAGS_MASK.  Since we are out of bits,
we cannot create a new bit without moving an old one.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:33:00 +02:00
Richard Henderson
97e1576957 accel/tcg: Remove check_tcg_memory_orders_compatible
We now issue host memory barriers to match the guest memory order.
Continue to disable MTTCG only if the guest has not been ported.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:33:00 +02:00
Richard Henderson
f86e8f3d13 tcg: Add host memory barriers to cpu_ldst.h interfaces
Bring the helpers into line with the rest of tcg in respecting
guest memory ordering.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:33:00 +02:00
Fei Wu
1b65b4f54c accel/tcg: remove CONFIG_PROFILER
TBStats will be introduced to replace CONFIG_PROFILER entirely; remove
all CONFIG_PROFILER-related code first.

Signed-off-by: Vanderson M. do Rosario <vandersonmr2@gmail.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Fei Wu <fei2.wu@intel.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230607122411.3394702-2-fei2.wu@intel.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:33:00 +02:00
Anton Johansson
b1c09220b4 accel/tcg: Replace target_ulong with vaddr in translator_*()
Use vaddr for guest virtual address in translator_use_goto_tb() and
translator_loop().

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-11-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:33:00 +02:00
Anton Johansson
b0326eb999 accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup()
Update atomic_mmu_lookup() and cpu_mmu_lookup() to take the guest
virtual address as a vaddr instead of a target_ulong.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-10-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:33:00 +02:00
Anton Johansson
4f8f41272e accel: Replace target_ulong with vaddr in probe_*()
Functions for probing memory accesses (and functions that call these)
are updated to take a vaddr for guest virtual addresses over
target_ulong.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-9-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Anton Johansson
06f3831c08 accel/tcg: Widen pc to vaddr in CPUJumpCache
Related functions dealing with the jump cache are also updated.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-8-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Anton Johansson
f0a08b0913 accel/tcg/cpu-exec.c: Widen pc to vaddr
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-7-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Anton Johansson
fb2c53cb71 accel/tcg/cputlb.c: Widen addr in MMULookupPageData
Functions accessing MMULookupPageData are also updated.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-6-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Anton Johansson
9e39de980f accel/tcg/cputlb.c: Widen CPUTLBEntry access functions
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-5-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Anton Johansson
bb5de52524 target: Widen pc/cs_base in cpu_get_tb_cpu_state
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-4-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Anton Johansson
256d11f9ba accel/tcg/translate-all.c: Widen pc and cs_base
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-3-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Anton Johansson
732d548732 accel: Replace target_ulong in tlb_*()
Replaces target_ulong with vaddr for guest virtual addresses in tlb_*()
functions and auxiliary structs.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-2-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Marcelo Tosatti
3b6f485275 kvm: reuse per-vcpu stats fd to avoid vcpu interruption
A regression has been detected in latency testing of KVM guests.
More specifically, it was observed that the cyclictest
numbers inside an isolated vcpu (running on an isolated pcpu) are:

Where a maximum of 50us is acceptable.

The implementation of KVM_GET_STATS_FD uses run_on_cpu to query
per vcpu statistics, which interrupts the vcpu (and is unnecessary).

To fix this, open the per vcpu stats fd on vcpu initialization,
and read from that fd from QEMU's main thread.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-06-26 10:23:01 +02:00
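
A sketch of the described approach; the CPUState field name and the helper are assumptions for illustration, though KVM_GET_STATS_FD and kvm_vcpu_ioctl() are the real interfaces involved.

    /* Open the per-vcpu stats fd once, at vcpu creation time... */
    static void kvm_vcpu_stats_init(CPUState *cpu)
    {
        cpu->kvm_vcpu_stats_fd = kvm_vcpu_ioctl(cpu, KVM_GET_STATS_FD, NULL);
    }

    /* ...so later statistics queries can read that fd from QEMU's main
     * thread instead of run_on_cpu()-interrupting the (isolated) vcpu. */
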
Philippe Mathieu-Daudé
a3e7f70229 accel/tcg/cpu-exec: Use generic 'helper-proto-common.h' header
We only need the lookup_tb_ptr() prototype.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230611085846.21415-3-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-20 10:01:30 +02:00
Philippe Mathieu-Daudé
de6cd7599b meson: Replace softmmu_ss -> system_ss
We use the user_ss[] array to hold the user emulation sources,
and the softmmu_ss[] array to hold the system emulation ones.
Hold the latter in the 'system_ss[]' array for parity with user
emulation.

Mechanical change doing:

  $ sed -i -e s/softmmu_ss/system_ss/g $(git grep -l softmmu_ss)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-10-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-20 10:01:30 +02:00
Philippe Mathieu-Daudé
c7b64948f8 meson: Replace CONFIG_SOFTMMU -> CONFIG_SYSTEM_ONLY
Since we *might* have user emulation with softmmu,
use the clearer 'CONFIG_SYSTEM_ONLY' key to check
for system emulation.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-9-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-20 10:01:30 +02:00
Philippe Mathieu-Daudé
905db98a73 accel/tcg: Check for USER_ONLY definition instead of SOFTMMU one
Since we *might* have user emulation with softmmu,
replace the system-emulation check with a !user-emulation one.

Invert some if() ladders for clarity.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-7-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-20 10:01:30 +02:00
Richard Henderson
2be6a48673 accel/tcg: Handle MO_ATOM_WITHIN16 in do_st16_leN
Otherwise we hit the default assert not reached.
Handle it as MO_ATOM_NONE, because of size and misalignment.
We already handle this correctly in do_ld16_beN.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-20 10:01:30 +02:00
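
An abridged sketch of the switch shape described above (surrounding store logic elided):

    switch (mop & MO_ATOM_MASK) {
    case MO_ATOM_WITHIN16:
        /* newly handled: at this size and misalignment it degrades to NONE,
         * matching what do_ld16_beN already does */
        /* fall through */
    case MO_ATOM_NONE:
        /* store byte by byte */
        break;
    default:
        g_assert_not_reached();
    }
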
Richard Henderson
7efd65423a Second RISC-V PR for 8.1

Merge tag 'pull-riscv-to-apply-20230614' of https://github.com/alistair23/qemu into staging

Second RISC-V PR for 8.1

* Skip Vector set tail when vta is zero
* Move zc* out of the experimental properties
* Mask the implicitly enabled extensions in isa_string based on priv version
* Rework CPU extension validation and validate MISA changes
* Fixup PMP TLB caching errors
* Writing to pmpaddr and MML/MMWP correctly triggers TLB flushes
* Fixup PMP bypass checks
* Deny access if access is partially inside a PMP entry
* Correct OpenTitanState parent type/size
* Fix QEMU crash when NUMA nodes exceed available CPUs
* Fix pointer mask transformation for vector address
* Updates and improvements for Smstateen
* Support disas for Zcm* extensions
* Support disas for Z*inx extensions
* Remove unused decomp_rv32/64 value for vector instructions
* Enable PC-relative translation
* Assume M-mode FW in pflash0 only when "-bios none"
* Support using pflash via -blockdev option
* Add vector registers to log
* Clean up reference of Vector MTYPE
* Remove the check for extra Vector tail elements
* Smepmp: Return error when access permission not allowed in PMP
* Fixes for smsiaddrcfg and smsiaddrcfgh in AIA

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmSJFRoACgkQr3yVEwxT
# gBMUkg/8Cuhqpx+zy7MeouVkyhEjUuhtCWyr0WVZBJzDkVEOrlY6TyR0hb5/o1Js
# LZf6ZMF6JQDN78bmUct8yFBZBGafey5tyonDCsnD7CNQuLPf2NSjTHhu9n5hKFqF
# F8Mpn9iFu6k1pr0iF7FbCccVWuDb3P4h2PaM0iFhmf4uz42BCMYdgJThhvv38xlt
# jr6A3dcjTpp8yB+iRCuhL2IU2XVee0XBiDUECqRXd0gmtOtqJNST8L+l8YkLy1VO
# WUMe8RCO6NMP7BLJ383WwCDeiFTo0mJebZQ0eR/G1xEhy7c8BBMh/CgQmq2F3wDZ
# Q0biaeozADgAaCC7aOAHI+1sAoMhOm1v2WhIVmh+XXUqT9856cKwc7DUPBmzb9Sj
# N5Zh+t9WCnZG7qpfxvkDF0Y/aRODMHZ1BW5L/ky9yBtyuRwXOJ6VycZTFyRkSwnN
# Gd/s9IClDOP1IP5s4TSMGGdelk4lH97x7fZE/2hxn59lp761JtMxbaEceBtqaBh8
# zNMTNN/KHs8LeiIBI2ZZ+nQav452Y6XYBivQ7OdsI8xkjnjG9gfgXXjvX1TIh0ow
# Hy5ZxtAtjXty49Gmjkx5VcBx4auJcnRDlLTzoZjTxq1te+gEWpw6O1EsEKasVLZe
# uN6PxTOxS3nHvRvPgQc1xNUdhDRqBaYsju6b9YmMxz1uefAjGM0=
# =fOTc
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 14 Jun 2023 03:17:14 AM CEST
# gpg:                using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65  9296 AF7C 9513 0C53 8013

* tag 'pull-riscv-to-apply-20230614' of https://github.com/alistair23/qemu: (60 commits)
  hw/intc: If mmsiaddrcfgh.L == 1, smsiaddrcfg and smsiaddrcfgh are read-only.
  target/riscv: Smepmp: Return error when access permission not allowed in PMP
  target/riscv/vector_helper.c: Remove the check for extra tail elements
  target/riscv/vector_helper.c: clean up reference of MTYPE
  target/riscv: Fix initialized value for cur_pmmask
  util/log: Add vector registers to log
  docs/system: riscv: Add pflash usage details
  riscv/virt: Support using pflash via -blockdev option
  hw/riscv: virt: Assume M-mode FW in pflash0 only when "-bios none"
  target/riscv: Remove pc_succ_insn from DisasContext
  target/riscv: Enable PC-relative translation
  target/riscv: Use true diff for gen_pc_plus_diff
  target/riscv: Change gen_set_pc_imm to gen_update_pc
  target/riscv: Change gen_goto_tb to work on displacements
  target/riscv: Introduce cur_insn_len into DisasContext
  target/riscv: Fix target address to update badaddr
  disas/riscv.c: Remove redundant parentheses
  disas/riscv.c: Fix lines with over 80 characters
  disas/riscv.c: Remove unused decomp_rv32/64 value for vector instructions
  disas/riscv.c: Support disas for Z*inx extensions
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-14 05:28:51 +02:00
Antonio Caggiano
ed3958910a accel/hvf: Report HV_DENIED error
On macOS 11 and later, if the resulting binary is not signed with the
proper entitlement, handle and report the HV_DENIED error.

Signed-off-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Message-Id: <20230608123014.28715-1-quic_acaggian@quicinc.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-13 11:28:58 +02:00
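
A sketch of what handling this can look like; the call site, wording and exit path are assumptions, while HV_DENIED itself is the Hypervisor.framework error the commit refers to.

    hv_return_t ret = hv_vm_create(HV_VM_DEFAULT);

    if (ret == HV_DENIED) {
        error_report("Could not access HVF: the binary is probably missing "
                     "the com.apple.security.hypervisor entitlement");
        exit(1);
    }
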