Commit Graph

14094 Commits

Author SHA1 Message Date
BALATON Zoltan
14a43ab333 target/ppc: Unexport some functions from mmu-book3s-v3.h
The ppc_hash64_hpt_base() and ppc_hash64_hpt_mask() functions are
mostly used by mmu-hash64.c only but there is one call to
ppc_hash64_hpt_mask() in hw/ppc/spapr_vhyp_mmu.c, in a helper function
that can be moved to mmu-hash64.c, which allows these functions to be
removed from the header.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
9eb0530033 target/ppc/mmu-hash32.c: Move get_pteg_offset32() to the header
This function is a simple shared function; move it next to the other
similar static inline functions in the header.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
51993bef12 target/ppc/mmu-hash32.c: Inline and remove ppc_hash32_pte_raddr()
This function is used only once and does not add more clarity than
doing it inline.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
bfb5a5eee5 target/ppc/mmu_common.c: Remove mmu_ctx_t
Completely get rid of mmu_ctx_t after converting the remaining
functions to pass raddr and prot without the context struct.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
7e590cf616 target/ppc/mmu_common.c: Stop using ctx in get_bat_6xx_tlb()
Pass raddr and prot in function parameters instead.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
d323338629 target/ppc: Remove bat_size_prot()
There is already a hash32_bat_prot() function that does most of this
and the rest can be inlined. Export hash32_bat_prot() and rename it to
ppc_hash32_bat_prot() to match other functions and use it in
get_bat_6xx_tlb().

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
6ca35e8763 target/ppc/mmu_common.c: Use defines instead of numeric constants
Replace some BAT related constants with defines from mmu-hash32.h

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
68bf3a7bbc target/ppc/mmu_common.c: Rename function parameter
Rename parameter of get_bat_6xx_tlb() from virtual to eaddr to match
other functions.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
aa781c102a target/ppc/mmu_common.c: Stop using ctx in ppc6xx_tlb_check()
Pass raddr and prot in function parameters instead.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
da5c1d20e9 target/ppc/mmu_common.c: Remove key field from mmu_ctx_t
Pass it as a function parameter and remove it from mmu_ctx_t.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
fa7f2cb91b target/ppc/mmu_common.c: Init variable in function that relies on it
ppc6xx_tlb_check() relies on the caller to initialise the raddr field
in ctx. Move this init from the only caller into the function.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
620ba617df target/ppc/mmu-hash32.c: Inline and remove ppc_hash32_pte_prot()
This is used only once and can be inlined.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
719a1da19e target/ppc: Add function to get protection key for hash32 MMU
Add a function to get the key bit from the SR and use it instead of the
open coded version.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
cab21e2ecb target/ppc/mmu_common.c: Remove ptem field from mmu_ctx_t
Instead of passing around ptem in the context, use it once in the same
function so it can be removed from mmu_ctx_t.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
0ce61ffaf1 target/ppc/mmu_common.c: Inline and remove ppc6xx_tlb_pte_check()
This function is only called once and we can make the caller simpler
by inlining it.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
40df08d223 target/ppc/mmu_common.c: Simplify a switch statement
In mmu6xx_get_physical_address() the switch handles all cases so the
default is never reached and can be dropped. Also group together cases
which just return -4.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
8abd6d4288 target/ppc/mmu_common.c: Remove single use local variable
In mmu6xx_get_physical_address() the target_page_bits local is declared
only to use TARGET_PAGE_BITS once. Drop the unneeded variable.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
aaf5845b87 target/ppc/mmu_common.c: Convert local variable to bool
In mmu6xx_get_physical_address() ds is used as a bool, so declare it as
such. Also use a named constant instead of a hex value.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
691cf34f21 target/ppc/mmu_common.c: Remove nx field from mmu_ctx_t
Pass it as a parameter instead. Also use named constants instead of
hex values when extracting bits from SR.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
f8e0cc9419 target/ppc/mmu_common.c: Remove pte_update_flags()
This function is used only once, its return value is ignored and one
of its parameters is a return value from a previous call. It is better
to inline it in the caller and remove it.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
f6f8838b05 target/ppc/mmu_common.c: Remove hash field from mmu_ctx_t
Return the hash value via a parameter and remove it from mmu_ctx_t.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
0e2d7fc817 target/ppc/mmu_common.c: Remove unused field from mmu_ctx_t
The eaddr field of mmu_ctx_t is set once but never used, so it can be
removed.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
9e2d6802b5 target/ppc/mmu_common.c: Simplify ppc6xx_tlb_pte_check()
Invert conditions to avoid deep nested ifs and return early instead.
Remove some obvious comments that don't add more clarity.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:34 +10:00
BALATON Zoltan
0e65cea1bd target/ppc/mmu_common.c: Return directly in ppc6xx_tlb_pte_check()
Instead of using a local ret variable, return directly and remove the
local.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
BALATON Zoltan
7ee01cf863 target/ppc/mmu_common.c: Remove yet another single use local variable
In ppc6xx_tlb_pte_check() the pp variable is used only once to pass it
to a function parameter with the same name. Remove the local and
inline the value. Also use a named constant for the hex value to make it
clearer.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
BALATON Zoltan
3208c36ad3 target/ppc/mmu_common.c: Remove another single use local variable
In ppc6xx_tlb_pte_check() the pteh variable is used only once to
compare to the h parameter of the function. Inline its value and use
pteh name for the function parameter which is more descriptive.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
BALATON Zoltan
f6b50257c7 target/ppc/mmu_common.c: Remove single use local variable
The ptev variable in ppc6xx_tlb_pte_check() is used only once and just
obfuscates an otherwise clear value. Get rid of it.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
BALATON Zoltan
15465dd8b9 target/ppc/mmu_common.c: Remove single use local variable
The ptem variable in ppc6xx_tlb_pte_check() is used only once; remove
it, as the value is already clear by itself without adding a local name
for it.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
BALATON Zoltan
5a902297ee target/ppc/mmu_common.c: Remove local name for a constant
The mmask local variable is a less descriptive local name for a
constant. Drop it and use the constant directly in the two places it
is needed.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
BALATON Zoltan
698faf3304 target/ppc: Reorganise and rename ppc_hash32_pp_prot()
Reorganise ppc_hash32_pp_prot(), swapping the if legs so it does not
test for the negative case first, and clean it up to make it shorter.
Also rename it to ppc_hash32_prot().
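
For reference, the protection table this function encodes boils down to the
sketch below (the function name comes from this commit; the exact signature
and the QEMU PAGE_* flag usage are assumptions):

    static int ppc_hash32_prot(bool key, int pp, bool nx)
    {
        int prot;

        if (key) {
            /* key set: PP=0 gives no access, PP=2 read/write, else read only */
            prot = (pp == 0) ? 0
                 : (pp == 2) ? PAGE_READ | PAGE_WRITE
                 : PAGE_READ;
        } else {
            /* key clear: everything is read/write except PP=3 */
            prot = (pp == 3) ? PAGE_READ : PAGE_READ | PAGE_WRITE;
        }
        return nx ? prot : prot | PAGE_EXEC;
    }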

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
625b58fde8 target/ppc: Update VSX storage access insns to use tcg_gen_qemu_ld/st_i128.
Updated many VSX instructions to use tcg_gen_qemu_ld/st_i128, instead of using
tcg_gen_qemu_ld/st_i64 consecutively.
Introduced functions {get,set}_vsr_full to facilitate the above & for future use.
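
As a rough sketch of the new load path (the {get,set}_vsr_full name comes
from this commit; everything else, including the signatures, is an
assumption, not the exact code):

    static void load_vsr_i128_sketch(DisasContext *ctx, int xt, TCGv ea)
    {
        TCGv_i128 data = tcg_temp_new_i128();

        /* one 128-bit guest access instead of two consecutive i64 accesses */
        tcg_gen_qemu_ld_i128(data, ea, ctx->mem_idx, DEF_MEMOP(MO_128));
        set_vsr_full(xt, data);   /* write the full 128-bit VSR */
    }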

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
acbdee4588 target/ppc: Update VMX storage access insns to use tcg_gen_qemu_ld/st_i128.
Updated instructions {l, st}vx to use tcg_gen_qemu_ld/st_i128,
instead of using 64 bits loads/stores in succession.
Introduced functions {get, set}_avr_full in vmx-impl.c.inc to
facilitate the above, and potential future usage.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
bf15bf0a1d target/ppc: Move get/set_avr64 functions to vmx-impl.c.inc.
Those functions are used to ld/st data to and from Altivec registers
in 64-bit chunks, and are only used in the vmx-impl.c.inc file,
hence this clean-up move.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
e77d736d2a target/ppc: Move VSX fp compare insns to decodetree.
Moving the following instructions to decodetree specification:

	xvcmp{eq, gt, ge, ne}{s, d}p	: XX3-form

The changes were verified by validating that the tcg-ops generated for those
instructions remain the same, which were captured using the '-d in_asm,op' flag.

Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
7419dc5b2b target/ppc: Move VSX vector storage access insns to decodetree.
Moving the following instructions to decodetree specification:

  lxv{b16, d2, h8, w4, ds, ws}x   : X-form
  stxv{b16, d2, h8, w4}x          : X-form

The changes were verified by validating that the tcg-ops generated for those
instructions remain the same, which were captured using the '-d in_asm,op' flag.

Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
29df8d950e target/ppc: Move VSX vector with length storage access insns to decodetree.
Moving the following instructions to decodetree specification :

        {l, st}xvl(l)           : X-form

The changes were verified by validating that the tcg-ops generated by those
instructions remain the same, which were captured using the '-d in_asm,op' flag.

Also added a new function do_ea_calc_ra to calculate the effective address :
EA <- (RA == 0) ? 0 : GPR[RA], which is now used by the above-said insns,
and shall be used later by (p){lx, stx}vp insns.
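
A minimal sketch of such a helper, assuming the usual TCG idioms (the
32-bit-mode address truncation is omitted here):

    static TCGv do_ea_calc_ra(DisasContext *ctx, int ra)
    {
        TCGv ea = tcg_temp_new();

        /* EA <- (RA == 0) ? 0 : GPR[RA] */
        if (ra) {
            tcg_gen_mov_tl(ea, cpu_gpr[ra]);
        } else {
            tcg_gen_movi_tl(ea, 0);
        }
        return ea;
    }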

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
[np: Fix 32-bit build]
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
cff278c9fa target/ppc: Moving VSX scalar storage access insns to decodetree.
Moving the following instructions to decodetree specification :

	lxs{d, iwa, ibz, ihz, iwz, sp}x		: X-form
	stxs{d, ib, ih, iw, sp}x		: X-form

The changes were verified by validating that the tcg-ops generated by those
instructions remain the same, which were captured using the '-d in_asm,op' flag.

Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
c1167a9257 target/ppc: Move VSX logical instructions to decodetree.
Moving the following instructions to decodetree specification :

	xxl{and, andc, or, orc, nor, xor, nand, eqv}	: XX3-form

The changes were verified by validating that the tcg ops generated by those
instructions remain the same, which were captured with the '-d in_asm,op' flag.

Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
638f6d553a target/ppc: Move VSX arithmetic and max/min insns to decodetree.
Moving the following instructions to decodetree specification:

	x{s, v}{add, sub, mul, div}{s, d}p	: XX3-form
	xs{max, min}dp, xv{max, min}{s, d}p	: XX3-form

The changes were verified by validating that the tcg ops generated by those
instructions remain the same, which were captured with the '-d in_asm,op' flag.

Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
48eda6abfd target/ppc: Move ISA300 flag check out of do_helper_XX3.
Moving the PPC2_ISA300 flag check out of the do_helper_XX3 method in vmx-impl.c.inc
so that the helper can be used with other instructions as well.

Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
8fc7b63ada target/ppc: Improve VMX integer add/sub saturate instructions.
No need for a full comparison; xor produces non-zero bits for QC just fine.
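
The idea in scalar form, as an illustration only (the real change operates
on vectors via TCG):

    /* Saturating unsigned byte add; saturation is detected by xor-ing the
     * wrapped and the saturated results instead of a full comparison. */
    static uint8_t addu8_sat(uint8_t a, uint8_t b, uint32_t *qc)
    {
        uint8_t wrap = a + b;                     /* modulo-256 result */
        uint8_t sat  = (wrap < a) ? 0xff : wrap;  /* clamped result    */

        *qc |= sat ^ wrap;   /* non-zero bits exactly when clamping happened */
        return sat;
    }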

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rath.chinmay@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Chinmay Rath
a7e10fab78 target/ppc: Move VMX integer add/sub saturate insns to decodetree.
Moving the following instructions to decodetree specification :

	v{add,sub}{u,s}{b,h,w}s		: VX-form

The changes were verified by validating that the tcg ops generated by those
instructions remain the same, which were captured with the '-d in_asm,op' flag.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:51:33 +10:00
Nicholas Piggin
3b5ea01e98 ppc/pnv: Add an LPAR per core machine option
Recent POWER CPUs can operate in "LPAR per core" or "LPAR per thread"
modes. In per-core mode, some SPRs and IPI doorbells are shared between
threads in a core. In per-thread mode, supervisor and user state is
not shared between threads.

OpenPOWER systems after POWER8 use LPAR per thread mode, and it is
required for KVM. Enterprise systems use LPAR per core mode, as they
partition the machine by core.

Implement an lpar-per-core machine option for powernv machines. This
is fixed to true for POWER8 machines and defaults to off for P9 and P10.

With this change, powernv8 SMT now works sufficiently to run Linux,
with a single socket. Multi-threaded KVM guests still have problems,
as does multi-socket Linux boot.
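
As a usage sketch (the option name comes from this commit; the machine type
and boolean option syntax shown are the usual QEMU conventions):

    qemu-system-ppc64 -machine powernv9,lpar-per-core=on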

Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
78be321894 ppc/pnv: Add POWER10 ChipTOD quirk for big-core
POWER10 has a quirk in its ChipTOD addressing that requires the even
small-core to be selected even when programming the odd small-core.
This allows skiboot chiptod init to run in big-core mode.

Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
16ffcb3401 ppc/pnv: Implement Power9 CPU core thread state indirect register
Power9 CPUs have a core thread state register accessible via SPRC/SPRD
indirect registers. This register includes a bit for big-core mode,
which skiboot requires.

Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
59c921f229 ppc: Add has_smt_siblings property to CPUPPCState
The decision to branch out to a slower SMT path in instruction
emulation will become a bit more complicated with the way that
"big-core" topology will be implemented in subsequent changes.
Hide these details from the wider CPU emulation code with a bool
has_smt_siblings flag that can be set by machine initialisation.

Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
50d8cfb949 target/ppc: Add helpers to check for SMT sibling threads
Add helpers for TCG code to determine if there are SMT siblings
sharing per-core and per-lpar registers. This simplifies the
callers and makes SMT register topology simpler to modify with
later changes.

Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
feb37fdc82 ppc: Add a core_index to CPUPPCState for SMT vCPUs
The way SMT thread siblings are matched is clunky, using hard-coded
logic that checks the PIR SPR.

Change that to use a new core_index variable in the CPUPPCState,
where all siblings have the same core_index. CPU realize routines have
flexibility in setting core/sibling topology.

Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
60d30cff84 target/ppc: Move SPR indirect registers into PnvCore
SPRC/SPRD were recently added to all BookS CPUs supported, but
they are only tested on POWER9 and POWER10, so restrict them to
those CPUs.

SPR indirect scratch registers are presently replicated per-CPU like
SMT SPRs, but the PnvCore is a better place for them since they
are restricted to P9/P10.

Also add SPR indirect read access to core thread state for POWER9
since skiboot accesses that when booting to check for big-core
mode.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
0ca94b2f11 ppc/pnv: Move timebase state into PnvCore
The timebase state machine is per-core state and can be driven
by any thread in the core. It is currently implemented as a hack
where the state is in a CPU structure and only thread 0's state is
accessed by the chiptod, which limits programming the timebase
side of the state machine to thread 0 of a core.

Move the state out into PnvCore and share it among all threads.

Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Nicholas Piggin
7f516cdeef target/ppc: Fix msgsnd for POWER8
POWER8 (ISA v2.07S) introduced the doorbell facility, where the msgsnd
instruction behaved mostly like msgsndp: it was addressed by TIR
and could only send interrupts between threads on the same core.

ISA v3.0 changed msgsnd to be addressed by PIR and can interrupt
any thread in the system.

QEMU's msgsnd only implements the v3.0 semantics, which can make
multi-threaded POWER8 hang when booting Linux (due to IPIs
failing). This change adds the v2.07 semantics.

Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Shivaprasad G Bhat
c0840b46d4 target/ppc/cpu_init: Synchronize HASHPKEYR with KVM for migration
The patch enables HASHPKEYR migration by hooking with the
"KVM one reg" ID KVM_REG_PPC_HASHPKEYR.

Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Shivaprasad G Bhat
843b243f86 target/ppc/cpu_init: Synchronize HASHKEYR with KVM for migration
The patch enables HASHKEYR migration by hooking with the
"KVM one reg" ID KVM_REG_PPC_HASHKEYR.

Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Shivaprasad G Bhat
ca85beb4b7 target/ppc/cpu_init: Synchronize DEXCR with KVM for migration
The patch enables DEXCR migration by hooking with the
"KVM one reg" ID KVM_REG_PPC_DEXCR.

Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Omar Sandoval
2587a57dbb target/ppc/arch_dump: set prstatus pid to cpuid
Every other architecture does this, and debuggers need it to be able to
identify which prstatus note corresponds to which CPU.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Harsh Prateek Bora
cfb52d07f5 target/ppc: handle vcpu hotplug failure gracefully
On ppc64, the PowerVM hypervisor runs with limited memory and a VCPU
creation during hotplug may fail during kvm_ioctl for KVM_CREATE_VCPU,
leading to termination of the guest since errp is set to &error_fatal when
calling kvm_init_vcpu. This unexpected behaviour can be avoided by
pre-creating and parking the vcpu on success, or returning an error otherwise.
This enables graceful error delivery for any vcpu hotplug failures while
the guest can keep running.

Also introduce a KVM AccelCPUClass to init cpu_target_realize for kvm.

Tested OK by repeatedly doing a hotplug/unplug of vcpus as below:

 #virsh setvcpus hotplug 40
 #virsh setvcpus hotplug 70
error: internal error: unable to execute QEMU command 'device_add':
kvmppc_cpu_realize: vcpu hotplug failed with -12

Signed-off-by: Harsh Prateek Bora <harshpb@linux.ibm.com>

Reported-by: Anushree Mathur <anushree.mathur@linux.vnet.ibm.com>
Suggested-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Suggested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Tested-by: Anushree Mathur <anushree.mathur@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-26 09:21:06 +10:00
Richard Henderson
029e13a8a5 bsd-user: Misc changes for 9.1 (I hope)

Merge tag 'bsd-user-for-9.1-pull-request' of gitlab.com:bsdimp/qemu into staging

bsd-user: Misc changes for 9.1 (I hope)

V2: Add missing bsd-user/aarch64/target.h

This patch series includes two main sets of patches. To make it simple to
review, I've included the changes from my student which the later changes depend
on. I've included a change from Jessica and Doug as well. I've reviewed them,
but more eyes never hurt.

I've also included a number of 'touch up' patches needed either to get the
aarch64 building, or to implement suggestions from prior review cycles. The
main one is what's charitably described as a kludge: force aarch64 to use 4k
pages. The qemu-project (and blitz branch) hasn't had the necessary changes to
bsd-user needed to support variable page size.

Sorry this is so late... Life has conspired to delay me.

# -----BEGIN PGP SIGNATURE-----
# Comment: GPGTools - https://gpgtools.org
#
# iQIzBAABCgAdFiEEIDX4lLAKo898zeG3bBzRKH2wEQAFAmahejwACgkQbBzRKH2w
# EQCXuQ/+Pj1Izmox/y9X1trn1T8KC7JdMtimdLiGMaS4C6+gcThXJkIB4l9ZStbV
# 7rI540mpqVf0KSRLYwc2/ATyhYU7Ffsz02WPn7Xn/NvmmITp4kjw9Z0gd7C7mPVq
# fS8DJbTyFQDy5dO8FUKLaTfnlYQe+NCnL421t9wFkIrlEepFygRaBaJN5yWVoC+0
# 1Ob6dG+JEV5BmNguMufvvI3S7nEFEnSBGpNqW3ljrRHAZjdNhv8d9GBYbj1laR1r
# HQ6r5+u4ZmKCuUbchS0jxGkug0DjuQC7iq+rQ/7fhLYLChkPZ4P2RxNv8ibzKjEV
# wlTy5LaM+WZNzKWdcHfDFMomeSnnUkOOfAMipMney2jedEjTIwCFDnP4zCAuG83V
# RbdXWfleP1rDto3AQ765pFneqm3+su2Dh4TKaTSnq6gd1eORJ2IL8dubCfcVwZCy
# TofemXPWh0HX3kwlD9IB9rqplQZFL78TkQ47btftxinHCLCQOOHRDPVG0IahQPjo
# pgK4yVH7WA7pWV2Xbo4ngG3sX5U1TyBCbfkkAwhq+P3gjnU8zxonx8Tk/qLeEDdH
# KEypi/pkGFQKZY0wc/y4XM+XQh6E1l8gMaQ4gJWK1qlyVtUKM1BiNQ2lweohYzC8
# p6WAfBQLPpzY4mDWfJMF6DsgObLwWmYbgKzuOtHgST1D/Ebk3Zo=
# =RPuN
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 25 Jul 2024 08:03:40 AM AEST
# gpg:                using RSA key 2035F894B00AA3CF7CCDE1B76C1CD1287DB01100
# gpg: Good signature from "Warner Losh <wlosh@netflix.com>" [unknown]
# gpg:                 aka "Warner Losh <imp@bsdimp.com>" [unknown]
# gpg:                 aka "Warner Losh <imp@freebsd.org>" [unknown]
# gpg:                 aka "Warner Losh <imp@village.org>" [unknown]
# gpg:                 aka "Warner Losh <wlosh@bsdimp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2035 F894 B00A A3CF 7CCD  E1B7 6C1C D128 7DB0 1100

* tag 'bsd-user-for-9.1-pull-request' of gitlab.com:bsdimp/qemu:
  bsd-user: Add target.h for aarch64.
  bsd-user: Add aarch64 build to tree
  bsd-user: Make compile for non-linux user-mode stuff
  bsd-user: Define TARGET_SIGSTACK_ALIGN and use it to round stack
  bsd-user: Sync fork_start/fork_end with linux-user
  bsd-user: Hard wire aarch64 to be 4k pages only
  bsd-user: Simplify the implementation of execve
  bsd-user:Add AArch64 improvements and signal handling functions
  bsd-user:Add set_mcontext function for ARM AArch64
  bsd-user:Add setup_sigframe_arch function for ARM AArch64
  bsd-user:Add get_mcontext function for ARM AArch64
  bsd-user:Add ARM AArch64 signal handling support
  bsd-user:Add ARM AArch64 support and capabilities
  bsd-user:Add AArch64 register handling and related functions
  bsd-user:Add CPU initialization and management functions

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-25 09:53:57 +10:00
Song Gao
a18ffbcf8b target/loongarch: Fix a CID INTEGER_OVERFLOW issue in helper_lddir()
When the lddir level is 4 and the base is a HugePage, we may try to put value 4
into a field in the TLBENTRY that is only 2 bits wide.

Fixes: Coverity CID 1547717
Fixes: 9c70db9a43 ("target/loongarch: Fix tlb huge page loading issue")
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240724015853.1317396-1-gaosong@loongson.cn>
2024-07-24 16:52:18 +08:00
Richard Henderson
6410f877f5 Misc HW patch queue

Merge tag 'hw-misc-20240723' of https://github.com/philmd/qemu into staging

Misc HW patch queue

- Restrict probe_access*() functions to TCG (Phil)
- Extract do_invalidate_device_tlb from vtd_process_device_iotlb_desc (Clément)
- Fixes in Loongson IPI model (Bibo & Phil)
- Make docs/interop/firmware.json compatible with qapi-gen.py script (Thomas)
- Correct MPC I2C MMIO region size (Zoltan)
- Remove useless cast in Loongson3 Virt machine (Yao)
- Various uses of range overlap API (Yao)
- Use ERRP_GUARD macro in nubus_virtio_mmio_realize (Zhao)
- Use DMA memory API in Goldfish UART model (Phil)
- Expose fifo8_pop_buf and introduce fifo8_drop (Phil)
- MAINTAINERS updates (Zhao, Phil)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmagFF8ACgkQ4+MsLN6t
# wN5bKg//f5TwUhsy2ff0FJpHheDOj/9Gc2nZ1U/Fp0E5N3sz3A7MGp91wye6Xwi3
# XG34YN9LK1AVzuCdrEEs5Uaxs1ZS1R2mV+fZaGHwYYxPDdnXxGyp/2Q0eyRxzbcN
# zxE2hWscYSZbPVEru4HvZJKfp4XnE1cqA78fJKMAdtq0IPq38tmQNRlJ+gWD9dC6
# ZUHXPFf3DnucvVuwqb0JYO/E+uJpcTtgR6pc09Xtv/HFgMiS0vKZ1I/6LChqAUw9
# eLMpD/5V2naemVadJe98/dL7gIUnhB8GTjsb4ioblG59AO/uojutwjBSQvFxBUUw
# U5lX9OSn20ouwcGiqimsz+5ziwhCG0R6r1zeQJFqUxrpZSscq7NQp9ygbvirm+wS
# edLc8yTPf4MtYOihzPP9jLPcXPZjEV64gSnJISDDFYWANCrysX3suaFEOuVYPl+s
# ZgQYRVSSYOYHgNqBSRkPKKVUxskSQiqLY3SfGJG4EA9Ktt5lD1cLCXQxhdsqphFm
# Ws3zkrVVL0EKl4v/4MtCgITIIctN1ZJE9u3oPJjASqSvK6EebFqAJkc2SidzKHz0
# F3iYX2AheWNHCQ3HFu023EvFryjlxYk95fs2f6Uj2a9yVbi813qsvd3gcZ8t0kTT
# +dmQwpu1MxjzZnA6838R6OCMnC+UpMPqQh3dPkU/5AF2fc3NnN8=
# =J/I2
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 24 Jul 2024 06:36:47 AM AEST
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]

* tag 'hw-misc-20240723' of https://github.com/philmd/qemu: (28 commits)
  MAINTAINERS: Add myself as a reviewer of machine core
  MAINTAINERS: Cover guest-agent in QAPI schema
  util/fifo8: Introduce fifo8_drop()
  util/fifo8: Expose fifo8_pop_buf()
  util/fifo8: Rename fifo8_pop_buf() -> fifo8_pop_bufptr()
  util/fifo8: Rename fifo8_peek_buf() -> fifo8_peek_bufptr()
  util/fifo8: Use fifo8_reset() in fifo8_create()
  util/fifo8: Fix style
  chardev/char-fe: Document returned value on error
  hw/char/goldfish: Use DMA memory API
  hw/nubus/virtio-mmio: Fix missing ERRP_GUARD() in realize handler
  dump: make range overlap check more readable
  crypto/block-luks: make range overlap check more readable
  system/memory_mapping: make range overlap check more readable
  sparc/ldst_helper: make range overlap check more readable
  cxl/mailbox: make range overlap check more readable
  util/range: Make ranges_overlap() return bool
  hw/mips/loongson3_virt: remove useless type cast
  hw/i2c/mpc_i2c: Fix mmio region size
  docs/interop/firmware.json: convert "Example" section
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-24 15:39:43 +10:00
Richard Henderson
43f59bf765 * target/i386/kvm: support for reading RAPL MSRs using a helper program

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* target/i386/kvm: support for reading RAPL MSRs using a helper program
* hpet: emulation improvements

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaelL4UHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroMXoQf+K77lNlHLETSgeeP3dr7yZPOmXjjN
# qFY/18jiyLw7MK1rZC09fF+n9SoaTH8JDKupt0z9M1R10HKHLIO04f8zDE+dOxaE
# Rou3yKnlTgFPGSoPPFr1n1JJfxtYlLZRoUzaAcHUaa4W7JR/OHJX90n1Rb9MXeDk
# jV6P0v1FWtIDdM6ERm9qBGoQdYhj6Ra2T4/NZKJFXwIhKEkxgu4yO7WXv8l0dxQz
# jE4fKotqAvrkYW1EsiVZm30lw/19duhvGiYeQXoYhk8KKXXjAbJMblLITSNWsCio
# 3l6Uud/lOxekkJDAq5nH3H9hCBm0WwvwL+0vRf3Mkr+/xRGvrhtmUdp8NQ==
# =00mB
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 23 Jul 2024 03:19:58 AM AEST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  hpet: avoid timer storms on periodic timers
  hpet: store full 64-bit target value of the counter
  hpet: accept 64-bit reads and writes
  hpet: place read-only bits directly in "new_val"
  hpet: remove unnecessary variable "index"
  hpet: ignore high bits of comparator in 32-bit mode
  hpet: fix and cleanup persistence of interrupt status
  Add support for RAPL MSRs in KVM/Qemu
  tools: build qemu-vmsr-helper
  qio: add support for SO_PEERCRED for socket channel
  target/i386: do not crash if microvm guest uses SGX CPUID leaves

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-24 11:25:40 +10:00
Yao Xingtao
2a48b590f7 sparc/ldst_helper: make range overlap check more readable
use ranges_overlap() instead of open-coding the overlap check to improve
the readability of the code.
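
The shape of the change, as a generic illustration rather than the exact
sparc hunk:

    /* ranges_overlap() from include/qemu/range.h takes two (start, length)
     * pairs and returns bool. */
    static bool touches_region(uint64_t addr, uint64_t size,
                               uint64_t start, uint64_t len)
    {
        /* open-coded form this series replaces:
         *   return addr < start + len && start < addr + size;
         */
        return ranges_overlap(addr, size, start, len);
    }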

Signed-off-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240722040742.11513-9-yaoxt.fnst@fujitsu.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-23 20:30:36 +02:00
Warner Losh
1c687f65b4 bsd-user: Make compile for non-linux user-mode stuff
We include the files that define PR_MTE_TCF_SHIFT only on Linux, but use
them unconditionally. Restrict their use to Linux only.

"It's ugly, but it's not actually wrong."

Signed-off-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:56:30 -06:00
Warner Losh
b314fd06cf bsd-user: Hard wire aarch64 to be 4k pages only
Only support 4k pages for aarch64 binaries. The variable page size stuff
isn't working just yet, so put in this lesser-of-evils kludge until that
is complete.

Signed-off-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:50:55 -06:00
Richard Henderson
71bce0e1fb accel/tcg: Export set/clear_helper_retaddr

Merge tag 'pull-tcg-20240723' of https://gitlab.com/rth7680/qemu into staging

accel/tcg: Export set/clear_helper_retaddr
target/arm: Use set_helper_retaddr for dc_zva, sve and sme
target/ppc: Tidy dcbz helpers
target/ppc: Use set_helper_retaddr for dcbz
target/s390x: Use set_helper_retaddr in mem_helper.c

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmafJKIdHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV+FBAf7Bup+karxeGHZx2rN
# cPeF248bcCWTxBWHK7dsYze4KqzsrlNIJlPeOKErU2bbbRDZGhOp1/N95WVz+P8V
# 6Ny63WTsAYkaFWKxE6Jf0FWJlGw92btk75pTV2x/TNZixg7jg0vzVaYkk0lTYc5T
# m5e4WycYEbzYm0uodxI09i+wFvpd+7WCnl6xWtlJPWZENukvJ36Ss43egFMDtuMk
# vTJuBkS9wpwZ9MSi6EY6M+Raieg8bfaotInZeDvE/yRPNi7CwrA7Dgyc1y626uBA
# joGkYRLzhRgvT19kB3bvFZi1AXa0Pxr+j0xJqwspP239Gq5qezlS5Bv/DrHdmGHA
# jaqSwg==
# =XgUE
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 23 Jul 2024 01:33:54 PM AEST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]

* tag 'pull-tcg-20240723' of https://gitlab.com/rth7680/qemu:
  target/riscv: Simplify probing in vext_ldff
  target/s390x: Use set/clear_helper_retaddr in mem_helper.c
  target/s390x: Use user_or_likely in access_memmove
  target/s390x: Use user_or_likely in do_access_memset
  target/ppc: Improve helper_dcbz for user-only
  target/ppc: Merge helper_{dcbz,dcbzep}
  target/ppc: Split out helper_dbczl for 970
  target/ppc: Hoist dcbz_size out of dcbz_common
  target/ppc/mem_helper.c: Remove a conditional from dcbz_common()
  target/arm: Use set/clear_helper_retaddr in SVE and SME helpers
  target/arm: Use set/clear_helper_retaddr in helper-a64.c
  accel/tcg: Move {set,clear}_helper_retaddr to cpu_ldst.h

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 15:19:39 +10:00
Richard Henderson
26b09663a9 Maintainer updates for testing, gdbstub, semihosting, plugins

Merge tag 'pull-maintainer-9.1-rc0-230724-1' of https://gitlab.com/stsquad/qemu into staging

Maintainer updates for testing, gdbstub, semihosting, plugins

  - bump python in *BSD images via libvirt-ci
  - remove old unused Leon3 Avocado test
  - re-factor gdb command extension
  - add stoptrigger plugin to contrib
  - ensure plugin mem callbacks properly sized
  - reduce check-tcg noise of inline plugin test
  - fix register dumping in execlog plugin
  - restrict semihosting to TCG builds
  - fix regex in MTE test

# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAmae5OcACgkQ+9DbCVqe
# KkR8cgf/eM2Sm7EG7zIQ8SbY53DS07ls6uT7Mfn4374GEmj4Cy1I+WNoLGM5vq1r
# qWAC9q2LgJVMQoWJA6Fi3SCKiylBp3/jIdJ7CWN5qj/NmePHSV3EisQXf2qOWWL9
# qOX2hJI7IIYNI2v3IvCzN/fB8F8U60iXERFHRypBH2p6Mz+EGMC3CEhesOEUta6o
# 2IMkRW8MoDv9x4B+FnNYav6CfqZjhRenu1CGgVGvWYRds2QDVNB/14kOunmBuwSs
# gPb7AhhnpobDYVxMarlJNPMbOdFjtDkYCajCNW7ffLcl+OjhoVR6cJcFpbOMv4kZ
# 8Nok8aDjUDWwUbmU0rBynca+1k8OTg==
# =TjRc
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 23 Jul 2024 09:01:59 AM AEST
# gpg:                using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]

* tag 'pull-maintainer-9.1-rc0-230724-1' of https://gitlab.com/stsquad/qemu:
  tests/tcg/aarch64: Fix test-mte.py
  semihosting: Restrict to TCG
  target/xtensa: Restrict semihosting to TCG
  target/riscv: Restrict semihosting to TCG
  target/mips: Restrict semihosting to TCG
  target/m68k: Restrict semihosting to TCG
  target/mips: Add semihosting stub
  target/m68k: Add semihosting stub
  semihosting: Include missing 'gdbstub/syscalls.h' header
  plugins/execlog.c: correct dump of registers values
  tests/plugins: use qemu_plugin_outs for inline stats
  plugins: fix mem callback array size
  plugins/stoptrigger: TCG plugin to stop execution under conditions
  gdbstub: Re-factor gdb command extensions
  tests/avocado: Remove non-working sparc leon3 test
  testing: bump to latest libvirt-ci

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 12:15:11 +10:00
Richard Henderson
3f57638a7e target/riscv: Simplify probing in vext_ldff
The current pairing of tlb_vaddr_to_host with extra checks is either
inefficient (user-only, with page_check_range) or incorrect
(system, with probe_pages).

For proper non-fault behaviour, use probe_access_flags with
its nonfault parameter set to true.
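
The non-faulting probe referred to here follows the usual pattern (a sketch,
not the final vext_ldff() code; addr, size, mmu_idx and ra are as in the
surrounding helper):

    void *host;
    int flags = probe_access_flags(env, addr, size, MMU_DATA_LOAD,
                                   mmu_idx, true /* nonfault */, &host, ra);

    if (flags & TLB_INVALID_MASK) {
        /* the first element must fault; later elements just end the load early */
    }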

Reviewed-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:57:42 +10:00
Richard Henderson
2730df9190 target/s390x: Use set/clear_helper_retaddr in mem_helper.c
Avoid a race condition with munmap in another thread.
For access_memset and access_memmove, manage the value
within the helper.  For uses of access_{get,set}_byte,
manage the value across the for loops.
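
The pattern being applied, roughly (the memset stands in for any direct
host-memory access inside a helper):

    uintptr_t ra = GETPC();

    set_helper_retaddr(ra);     /* make faults unwind to the TB correctly */
    memset(haddr, 0, len);      /* direct host access that may SEGV on munmap */
    clear_helper_retaddr();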

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:57:31 +10:00
Richard Henderson
573b778301 target/s390x: Use user_or_likely in access_memmove
Invert the conditional, indent the block, and use the macro
that expands to true for user-only.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:57:19 +10:00
Richard Henderson
814e46594d target/s390x: Use user_or_likely in do_access_memset
Eliminate the ifdef by using a predicate that is
always true with CONFIG_USER_ONLY.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:57:19 +10:00
Richard Henderson
f6bcc5b8f9 target/ppc: Improve helper_dcbz for user-only
Mark the reserve_addr check unlikely.  Use tlb_vaddr_to_host
instead of probe_write, relying on the memset itself to test
for page writability.  Use set/clear_helper_retaddr so that
we can properly unwind on segfault.

With this, a trivial loop around guest memset will no longer
spend nearly 25% of runtime within page_get_flags.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:56:16 +10:00
Richard Henderson
c6d84fd7cf target/ppc: Merge helper_{dcbz,dcbzep}
Merge the two and pass the mmu_idx directly from translation.
Swap the argument order in dcbz_common to avoid extra swaps.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:56:16 +10:00
Richard Henderson
62fe57c6d2 target/ppc: Split out helper_dcbzl for 970
We can determine at translation time whether the insn is or
is not dcbzl.  We must retain a runtime check against the
HID5 register, but we can move that to a separate function
that never affects other ppc models.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:56:16 +10:00
Richard Henderson
521a80d895 target/ppc: Hoist dcbz_size out of dcbz_common
The 970 logic does not apply to dcbzep, which is an e500 insn.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:56:16 +10:00
BALATON Zoltan
73a93ae5f4 target/ppc/mem_helper.c: Remove a conditional from dcbz_common()
Instead of passing a bool and select a value within dcbz_common() let
the callers pass in the right value to avoid this conditional
statement. On PPC dcbz is often used to zero memory and some code uses
it a lot. This change improves the run time of a test case that copies
memory with a dcbz call in every iteration from 6.23 to 5.83 seconds.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <20240622204833.5F7C74E6000@zero.eik.bme.hu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
2024-07-23 10:56:16 +10:00
Richard Henderson
3b9991e35c target/arm: Use set/clear_helper_retaddr in SVE and SME helpers
Avoid a race condition with munmap in another thread.
Use around blocks that exclusively use "host_fn".
Keep the blocks as small as possible, but without setting
and clearing for every operation on one page.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:56:04 +10:00
Richard Henderson
8009519b30 target/arm: Use set/clear_helper_retaddr in helper-a64.c
Use these in helper_dc_dva and the FEAT_MOPS routines to
avoid a race condition with munmap in another thread.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23 10:56:04 +10:00
Anthony Harivel
0418f90809 Add support for RAPL MSRs in KVM/Qemu
Starting with the "Sandy Bridge" generation, Intel CPUs provide a RAPL
interface (Running Average Power Limit) for advertising the accumulated
energy consumption of various power domains (e.g. CPU packages, DRAM,
etc.).

The consumption is reported via MSRs (model specific registers) like
MSR_PKG_ENERGY_STATUS for the CPU package power domain. These MSRs are
64 bits registers that represent the accumulated energy consumption in
micro Joules. They are updated by microcode every ~1ms.

For now, KVM always returns 0 when the guest requests the value of
these MSRs. Use the KVM MSR filtering mechanism to allow QEMU to handle
these MSRs dynamically in userspace.

To limit the number of system calls for every MSR access, create a new
thread in QEMU that updates the "virtual" MSR values asynchronously.

Each vCPU has its own vMSR to reflect the independence of vCPUs. The
thread updates the vMSR values with the ratio of energy consumed of
the whole physical CPU package the vCPU thread runs on and the
thread's utime and stime values.

All other non-vCPU threads are also taken into account. Their energy
consumption is evenly distributed among all vCPUs threads running on
the same physical CPU package.
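
Conceptually, the per-vCPU increment works out to something like the
illustrative formula below (not the exact code):

    vmsr_delta(vcpu) = pkg_energy_delta
                       * (utime_delta(vcpu) + stime_delta(vcpu))
                       / total_runtime_delta(threads on that package)

with the non-vCPU threads' share then split evenly across the vCPUs of the
same package, as described above.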

To overcome the problem that reading the RAPL MSRs requires privileged
access, socket communication between QEMU and the qemu-vmsr-helper is
mandatory. You can specify the socket path in the parameter.

This feature is activated with -accel kvm,rapl=true,path=/path/sock.sock

Current limitations:
- Works only on Intel host CPUs, because AMD CPUs use different MSR
  addresses.

- Only the Package Power-Plane (MSR_PKG_ENERGY_STATUS) is reported at
  the moment.

Signed-off-by: Anthony Harivel <aharivel@redhat.com>
Link: https://lore.kernel.org/r/20240522153453.1230389-4-aharivel@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-22 19:19:37 +02:00
Collin Walling
eed0e8ffa3 target/s390x: filter deprecated properties based on model expansion type
Currently, there is no way to execute the query-cpu-model-expansion
command to retrieve a comprehensive list of deprecated properties, as
the result depends on the model. To enable this, the expansion output
is modified as follows:

When reporting a "full" CPU model, show the *entire* list of deprecated
properties regardless of whether they are supported on the model. A full
expansion outputs all known CPU model properties anyway, so it makes
sense to report all deprecated properties here too.

This allows management apps to query a single model (e.g. host) to
acquire the full list of deprecated properties.

Additionally, when reporting a "static" CPU model, the command will
only show deprecated properties that are a subset of the model's
*enabled* properties. This is more accurate than how the query was
handled before, which blindly reported deprecated properties that
were never otherwise introduced for certain models.

Acked-by: David Hildenbrand <david@redhat.com>
Suggested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Message-ID: <20240719181741.35146-1-walling@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-07-22 13:56:11 +02:00
Philippe Mathieu-Daudé
41b37a178b target/xtensa: Restrict semihosting to TCG
The semihosting feature depends on TCG (due to the probe_access
API access). Although TCG is the single accelerator currently
available for the xtensa target, use the Kconfig "imply" directive
which is more correct (if we were to support a different accel).

Reported-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240717105723.58965-8-philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-15-alex.bennee@linaro.org>
2024-07-22 09:38:14 +01:00
Philippe Mathieu-Daudé
10425887ba target/riscv: Restrict semihosting to TCG
Semihosting currently uses the TCG probe_access API. To prepare for
encoding the TCG dependency in Kconfig, do not enable it unless TCG
is available.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20240717105723.58965-7-philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-14-alex.bennee@linaro.org>
2024-07-22 09:38:11 +01:00
Philippe Mathieu-Daudé
75cdcc7a2c target/mips: Restrict semihosting to TCG
Semihosting currently uses the TCG probe_access API. To prepare for
encoding the TCG dependency in Kconfig, do not enable it unless TCG
is available.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20240717105723.58965-6-philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-13-alex.bennee@linaro.org>
2024-07-22 09:38:10 +01:00
Philippe Mathieu-Daudé
099505b375 target/m68k: Restrict semihosting to TCG
The semihosting feature depends on TCG (due to the probe_access
API access). Although TCG is the single accelerator currently
available for the m68k target, use the Kconfig "imply" directive
which is more correct (if we were to support a different accel).

Reported-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240717105723.58965-5-philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-12-alex.bennee@linaro.org>
2024-07-22 09:38:08 +01:00
Philippe Mathieu-Daudé
fca2ffcb0b target/mips: Add semihosting stub
Since the SEMIHOSTING feature is optional, we need
a stub to link when it is disabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240717105723.58965-4-philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-11-alex.bennee@linaro.org>
2024-07-22 09:38:06 +01:00
Philippe Mathieu-Daudé
bf9ab9d131 target/m68k: Add semihosting stub
Since the SEMIHOSTING feature is optional, we need
a stub to link when it is disabled.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240717105723.58965-3-philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-10-alex.bennee@linaro.org>
2024-07-22 09:38:03 +01:00
Alex Bennée
e8122a7118 gdbstub: Re-factor gdb command extensions
Coverity reported a memory leak (CID 1549757) in this code and its
admittedly rather clumsy handling of extending the command table.
Instead of handing over a full array of the commands, let's use the
lighter weight GPtrArray and simply test for the presence of each
entry as we go. This avoids complications of transferring ownership of
arrays and keeps the final command entries as static entries in the
target code.

Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Gustavo Bueno Romero <gustavo.romero@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Gustavo Romero <gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-4-alex.bennee@linaro.org>
2024-07-22 09:37:44 +01:00
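
As background for the approach described in the commit above, here is a
minimal, hedged sketch (not the actual QEMU gdbstub code; the entry layout
and command names are illustrative) of how a GPtrArray can collect pointers
to static command entries without transferring ownership of any array:

    #include <glib.h>
    #include <stdio.h>

    typedef struct {
        const char *cmd;                 /* illustrative entry layout */
    } CmdEntrySketch;

    /* entries remain static in the "target" code; only pointers are stored */
    static const CmdEntrySketch arch_cmds[] = {
        { "example-cmd-a" },
        { "example-cmd-b" },
    };

    int main(void)
    {
        GPtrArray *table = g_ptr_array_new();

        for (gsize i = 0; i < G_N_ELEMENTS(arch_cmds); i++) {
            g_ptr_array_add(table, (gpointer)&arch_cmds[i]);
        }
        for (guint i = 0; i < table->len; i++) {
            const CmdEntrySketch *e = g_ptr_array_index(table, i);
            printf("%s\n", e->cmd);
        }
        g_ptr_array_free(table, TRUE);   /* frees only the array of pointers */
        return 0;
    }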
Song Gao
1c15dd632b target/loongarch/gdbstub: Add vector registers support
GDB already supports the LoongArch vector extension [1]. Add LoongArch
vector register support to QEMU's gdbstub so that users can use
'info all-registers' to get the values of all vector registers.

[1]: https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=1e9569f383a3d5a88ee07d0c2401bd95613c222e

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240711024454.3075183-1-gaosong@loongson.cn>
2024-07-19 10:40:04 +08:00
Richard Henderson
23fa74974d target-arm queue:
* Fix handling of LDAPR/STLR with negative offset
  * LDAPR should honour SCTLR_ELx.nAA
  * Use float_status copy in sme_fmopa_s
  * hw/display/bcm2835_fb: fix fb_use_offsets condition
  * hw/arm/smmuv3: Support and advertise nesting
  * Use FPST_F16 for SME FMOPA (widening)
  * tests/arm-cpu-features: Do not assume PMU availability
  * hvf: arm: Do not advance PC when raising an exception
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmaZFlUZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3iJuEACtVh1Wp93XMsL3llAZkQlx
 DUCnDCvAM2qiiTIMOqPQzeKTIkRV9aFh1YWzOtMFKai6UkBU6p1b4bPqb5SIr99G
 Ayps4+WzAHsjTqBGEpIIDWL6GqMwv9azBnRAYNb+Cg9O3SzEnCdGOKCfGYTXXPRz
 zQ1NIgqZSUC5jg3XgkU22J3VMsOUWijbzxnGXhOyemSIEhREl+t6Ns3ca3n47/jk
 JIw1g6o0mpefPPkaLq6ftVwpn1L63iYQugn4VCrIhtIoOM8vmnShbI9/GwzL4AYk
 n28nwPl948Xby13kCYmu6Slt8Rmm7M33pBDJzsVtbaeBSd44XHrov8Y1+e1FhAco
 lxrWY/2rG9HiWKGLdAeCKwVxB186DKiTmuK7lcN+eBu3VbOLjDiVE0d1bK4HqGyc
 nzA/Aq81Y9p5Z7wzX40sVFlq0j1pQDQWk6GgPfMA4ueHKEEobxC3C+k1q9m02gjQ
 qesOFzViiGe0j7JER84qqcatIaTk09xfbXL/uMZx8oP/iKa1pyMUx2blChXOXVTx
 oGkO2h3/QCpRIos8d8WM/bso16EkpraInM4748iumSLuxDxTwiIikK/hpsCLDwUN
 dLsH/hAMz+yQOFubFoRt4IlsGVnk5asmTDMb4S8RojdF2KzHuzbJMgdEOe62631g
 IOAc7Tn3TIm5MpAxXOXgJA==
 =/aEm
 -----END PGP SIGNATURE-----

Merge tag 'pull-target-arm-20240718' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * Fix handling of LDAPR/STLR with negative offset
 * LDAPR should honour SCTLR_ELx.nAA
 * Use float_status copy in sme_fmopa_s
 * hw/display/bcm2835_fb: fix fb_use_offsets condition
 * hw/arm/smmuv3: Support and advertise nesting
 * Use FPST_F16 for SME FMOPA (widening)
 * tests/arm-cpu-features: Do not assume PMU availability
 * hvf: arm: Do not advance PC when raising an exception

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmaZFlUZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3iJuEACtVh1Wp93XMsL3llAZkQlx
# DUCnDCvAM2qiiTIMOqPQzeKTIkRV9aFh1YWzOtMFKai6UkBU6p1b4bPqb5SIr99G
# Ayps4+WzAHsjTqBGEpIIDWL6GqMwv9azBnRAYNb+Cg9O3SzEnCdGOKCfGYTXXPRz
# zQ1NIgqZSUC5jg3XgkU22J3VMsOUWijbzxnGXhOyemSIEhREl+t6Ns3ca3n47/jk
# JIw1g6o0mpefPPkaLq6ftVwpn1L63iYQugn4VCrIhtIoOM8vmnShbI9/GwzL4AYk
# n28nwPl948Xby13kCYmu6Slt8Rmm7M33pBDJzsVtbaeBSd44XHrov8Y1+e1FhAco
# lxrWY/2rG9HiWKGLdAeCKwVxB186DKiTmuK7lcN+eBu3VbOLjDiVE0d1bK4HqGyc
# nzA/Aq81Y9p5Z7wzX40sVFlq0j1pQDQWk6GgPfMA4ueHKEEobxC3C+k1q9m02gjQ
# qesOFzViiGe0j7JER84qqcatIaTk09xfbXL/uMZx8oP/iKa1pyMUx2blChXOXVTx
# oGkO2h3/QCpRIos8d8WM/bso16EkpraInM4748iumSLuxDxTwiIikK/hpsCLDwUN
# dLsH/hAMz+yQOFubFoRt4IlsGVnk5asmTDMb4S8RojdF2KzHuzbJMgdEOe62631g
# IOAc7Tn3TIm5MpAxXOXgJA==
# =/aEm
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 18 Jul 2024 11:19:17 PM AEST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]

* tag 'pull-target-arm-20240718' of https://git.linaro.org/people/pmaydell/qemu-arm: (26 commits)
  hvf: arm: Do not advance PC when raising an exception
  tests/arm-cpu-features: Do not assume PMU availability
  tests/tcg/aarch64: Add test cases for SME FMOPA (widening)
  target/arm: Use FPST_F16 for SME FMOPA (widening)
  target/arm: Use float_status copy in sme_fmopa_s
  hw/arm/smmu: Refactor SMMU OAS
  hw/arm/smmuv3: Support and advertise nesting
  hw/arm/smmuv3: Handle translation faults according to SMMUPTWEventInfo
  hw/arm/smmuv3: Support nested SMMUs in smmuv3_notify_iova()
  hw/arm/smmu: Support nesting in the rest of commands
  hw/arm/smmu: Introduce smmu_iotlb_inv_asid_vmid
  hw/arm/smmu: Support nesting in smmuv3_range_inval()
  hw/arm/smmu-common: Support nested translation
  hw/arm/smmu-common: Add support for nested TLB
  hw/arm/smmu-common: Rework TLB lookup for nesting
  hw/arm/smmuv3: Translate CD and TT using stage-2 table
  hw/arm/smmu: Introduce CACHED_ENTRY_TO_ADDR
  hw/arm/smmu: Consolidate ASID and VMID types
  hw/arm/smmu: Split smmuv3_translate()
  hw/arm/smmu: Use enum for SMMU stage
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-19 07:02:17 +10:00
Akihiko Odaki
30a1690f24 hvf: arm: Do not advance PC when raising an exception
hvf did not advance PC when raising an exception for most unhandled
system registers, but it mistakenly advanced PC when raising an
exception for GICv3 registers.

Cc: qemu-stable@nongnu.org
Fixes: a2260983c6 ("hvf: arm: Add support for GICv3")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240716-pmu-v3-4-8c7c1858a227@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18 13:49:30 +01:00
Richard Henderson
207d30b5fd target/arm: Use FPST_F16 for SME FMOPA (widening)
This operation has float16 inputs and thus must use
the FZ16 control not the FZ control.

Cc: qemu-stable@nongnu.org
Fixes: 3916841ac7 ("target/arm: Implement FMOPA, FMOPS (widening)")
Reported-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-3-richard.henderson@linaro.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2374
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18 13:49:30 +01:00
Daniyal Khan
31d93fedf4 target/arm: Use float_status copy in sme_fmopa_s
We made a copy above because the fp exception flags
are not propagated back to the FPST register, but
then failed to use the copy.

Cc: qemu-stable@nongnu.org
Fixes: 558e956c71 ("target/arm: Implement FMOPA, FMOPS (non-widening)")
Signed-off-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-2-richard.henderson@linaro.org
[rth: Split from a larger patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18 13:49:30 +01:00
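
A minimal sketch of the bug pattern being fixed above, using stand-in types
rather than QEMU's softfloat API: a local copy of the status is made so the
caller's flags stay untouched, and the fix is to actually pass the copy to
the operation:

    typedef struct {
        int exception_flags;             /* stand-in for softfloat status flags */
    } FloatStatusSketch;

    /* pretend fused multiply-add that may set exception flags */
    static float fma_sketch(float a, float b, float c, FloatStatusSketch *st)
    {
        st->exception_flags |= 1;
        return a * b + c;
    }

    static float fmopa_elem_sketch(float a, float b, float c, FloatStatusSketch *s)
    {
        FloatStatusSketch fpst = *s;        /* copy: don't propagate flags back */
        return fma_sketch(a, b, c, &fpst);  /* the fix: use &fpst, not s */
    }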
Peter Maydell
25489b521b target/arm: LDAPR should honour SCTLR_ELx.nAA
In commit c1a1f80518 when we added the FEAT_LSE2 relaxations to
the alignment requirements for atomic and ordered loads and stores,
we didn't quite get it right for LDAPR/LDAPRH/LDAPRB with no
immediate offset.  These instructions were handled in the old decoder
as part of disas_ldst_atomic(), but unlike all the other insns that
function decoded (LDADD, LDCLR, etc) these insns are "ordered", not
"atomic", so they should be using check_ordered_align() rather than
check_atomic_align().  Commit c1a1f80518 used
check_atomic_align() regardless for everything in
disas_ldst_atomic().  We then carried that incorrect check over in
the decodetree conversion, where LDAPR/LDAPRH/LDAPRB are now handled
by trans_LDAPR().

The effect is that when FEAT_LSE2 is implemented, these instructions
don't honour the SCTLR_ELx.nAA bit and will generate alignment
faults when they should not.

(The LDAPR insns with an immediate offset were in disas_ldst_ldapr_stlr()
and then in trans_LDAPR_i() and trans_STLR_i(), and have always used
the correct check_ordered_align().)

Use check_ordered_align() in trans_LDAPR().

Cc: qemu-stable@nongnu.org
Fixes: c1a1f80518 ("target/arm: Relax ordered/atomic alignment checks for LSE2")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-3-peter.maydell@linaro.org
2024-07-18 13:49:28 +01:00
Peter Maydell
5669d26ec6 target/arm: Fix handling of LDAPR/STLR with negative offset
When we converted the LDAPR/STLR instructions to decodetree we
accidentally introduced a regression where the offset is negative.
The 9-bit immediate field is signed, and the old hand decoder
correctly used sextract32() to get it out of the insn word,
but the ldapr_stlr_i pattern in the decode file used "imm:9"
instead of "imm:s9", so it treated the field as unsigned.

Fix the pattern to treat the field as a signed immediate.

Cc: qemu-stable@nongnu.org
Fixes: 2521b6073b ("target/arm: Convert LDAPR/STLR (imm) to decodetree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2419
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-2-peter.maydell@linaro.org
2024-07-18 13:49:28 +01:00
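
To illustrate why the field must be treated as signed, here is a small
self-contained sketch of sign-extracting a 9-bit immediate (modelled on the
kind of helper the hand decoder used; the function name is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* sign-extend a 'length'-bit field starting at bit 'start' */
    static int32_t sextract32_sketch(uint32_t value, int start, int length)
    {
        return (int32_t)(value << (32 - length - start)) >> (32 - length);
    }

    int main(void)
    {
        uint32_t imm9 = 0x1F8;                                    /* encodes -8 */
        printf("unsigned: %u\n", imm9);                           /* 504: the bug */
        printf("signed:   %d\n", sextract32_sketch(imm9, 0, 9));  /* -8 */
        return 0;
    }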
Richard Henderson
0d9f1016d4 RISC-V PR for 9.1
* Support the zimop, zcmop, zama16b and zabha extensions
 * Validate the mode when setting vstvec CSR
 * Add decode support for Zawrs extension
 * Update the KVM regs to Linux 6.10-rc5
 * Add smcntrpmf extension support
 * Raise an exception when CSRRS/CSRRC writes a read-only CSR
 * Re-insert and deprecate 'riscv,delegate' in virt machine device tree
 * roms/opensbi: Update to v1.5
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmaYeUcACgkQr3yVEwxT
 gBMtdw//U2NbmnmECa0uXuE7fdFul0tUkl2oHb9Cr8g5Se5g/HVFqexAKOFZ8Lcm
 DvTl94zJ2dms4RntcmJHwTIusa+oU6qqOekediotjgpeH4BHZNCOHe0E9hIAHn9F
 uoJ1P186L7VeVr7OFAAgSCE7F6egCk7iC0h8L8/vuL4xcuyfbZ2r7ybiTl1+45N2
 YBBv5/00wsYnyMeqRYYtyqgX9QR017JRqNSfTJSbKxhQM/L1GA1xxisUvIGeyDqc
 Pn8E3dMN6sscR6bPs4RP+SBi0JIlRCgth/jteSUkbYf42osw3/5sl4oK/e6Xiogo
 SjELOF7QJNxE8H6EUIScDaCVB5ZhvELZcuOL2NRdUuVDkjhWXM633HwfEcXkZdFK
 W/H9wOvNxPAJIOGXOpv10+MLmhdyIOZwE0uk6evHvdcTn3FP9DurdUCc1se0zKOA
 Qg/H6usTbLGNQ7KKTNQ6GpQ6u89iE1CIyZqYVvB1YuF5t7vtAmxvNk3SVZ6aq3VL
 lPJW2Zd1eO09Q+kRnBVDV7MV4OJrRNsU+ryd91NrSVo9aLADtyiNC28dCSkjU3Gn
 6YQZt65zHuhH5IBB/PGIPo7dLRT8KNWOiYVoy3c6p6DC6oXsKIibh0ue1nrVnnVQ
 NRqyxPYaj6P8zzqwTk+iJj36UXZZVtqPIhtRu9MrO6Opl2AbsXI=
 =pM6B
 -----END PGP SIGNATURE-----

Merge tag 'pull-riscv-to-apply-20240718-1' of https://github.com/alistair23/qemu into staging

RISC-V PR for 9.1

* Support the zimop, zcmop, zama16b and zabha extensions
* Validate the mode when setting vstvec CSR
* Add decode support for Zawrs extension
* Update the KVM regs to Linux 6.10-rc5
* Add smcntrpmf extension support
* Raise an exception when CSRRS/CSRRC writes a read-only CSR
* Re-insert and deprecate 'riscv,delegate' in virt machine device tree
* roms/opensbi: Update to v1.5

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmaYeUcACgkQr3yVEwxT
# gBMtdw//U2NbmnmECa0uXuE7fdFul0tUkl2oHb9Cr8g5Se5g/HVFqexAKOFZ8Lcm
# DvTl94zJ2dms4RntcmJHwTIusa+oU6qqOekediotjgpeH4BHZNCOHe0E9hIAHn9F
# uoJ1P186L7VeVr7OFAAgSCE7F6egCk7iC0h8L8/vuL4xcuyfbZ2r7ybiTl1+45N2
# YBBv5/00wsYnyMeqRYYtyqgX9QR017JRqNSfTJSbKxhQM/L1GA1xxisUvIGeyDqc
# Pn8E3dMN6sscR6bPs4RP+SBi0JIlRCgth/jteSUkbYf42osw3/5sl4oK/e6Xiogo
# SjELOF7QJNxE8H6EUIScDaCVB5ZhvELZcuOL2NRdUuVDkjhWXM633HwfEcXkZdFK
# W/H9wOvNxPAJIOGXOpv10+MLmhdyIOZwE0uk6evHvdcTn3FP9DurdUCc1se0zKOA
# Qg/H6usTbLGNQ7KKTNQ6GpQ6u89iE1CIyZqYVvB1YuF5t7vtAmxvNk3SVZ6aq3VL
# lPJW2Zd1eO09Q+kRnBVDV7MV4OJrRNsU+ryd91NrSVo9aLADtyiNC28dCSkjU3Gn
# 6YQZt65zHuhH5IBB/PGIPo7dLRT8KNWOiYVoy3c6p6DC6oXsKIibh0ue1nrVnnVQ
# NRqyxPYaj6P8zzqwTk+iJj36UXZZVtqPIhtRu9MrO6Opl2AbsXI=
# =pM6B
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 18 Jul 2024 12:09:11 PM AEST
# gpg:                using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65  9296 AF7C 9513 0C53 8013

* tag 'pull-riscv-to-apply-20240718-1' of https://github.com/alistair23/qemu: (30 commits)
  roms/opensbi: Update to v1.5
  hw/riscv/virt.c: re-insert and deprecate 'riscv,delegate'
  target/riscv: raise an exception when CSRRS/CSRRC writes a read-only CSR
  target/riscv: Expose the Smcntrpmf config
  target/riscv: Do not setup pmu timer if OF is disabled
  target/riscv: More accurately model priv mode filtering.
  target/riscv: Start counters from both mhpmcounter and mcountinhibit
  target/riscv: Enforce WARL behavior for scounteren/hcounteren
  target/riscv: Save counter values during countinhibit update
  target/riscv: Implement privilege mode filtering for cycle/instret
  target/riscv: Only set INH fields if priv mode is available
  target/riscv: Add cycle & instret privilege mode filtering support
  target/riscv: Add cycle & instret privilege mode filtering definitions
  target/riscv: Add cycle & instret privilege mode filtering properties
  target/riscv: Fix the predicate functions for mhpmeventhX CSRs
  target/riscv: Combine set_mode and set_virt functions.
  target/riscv/kvm: update KVM regs to Linux 6.10-rc5
  disas/riscv: Add decode for Zawrs extension
  target/riscv: Validate the mode in write_vstvec
  disas/riscv: Support zabha disassemble
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-18 21:23:24 +10:00
Yu-Ming Chang
38c83e8d3a target/riscv: raise an exception when CSRRS/CSRRC writes a read-only CSR
Both CSRRS and CSRRC always read the addressed CSR and cause any read side
effects regardless of rs1 and rd fields. Note that if rs1 specifies a register
holding a zero value other than x0, the instruction will still attempt to write
the unmodified value back to the CSR and will cause any attendant side effects.

So if CSRRS or CSRRC tries to write a read-only CSR with an rs1 other than
x0, an illegal instruction exception should be raised, even if the register
named by rs1 holds a zero value.

Signed-off-by: Yu-Ming Chang <yumin686@andestech.com>
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <172100444279.18077.6893072378718059541-0@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
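
A hedged sketch of the condition described above (helper names are made up;
the read-only test relies on the standard CSR address convention that bits
[11:10] == 0b11 mark a read-only CSR):

    #include <stdbool.h>
    #include <stdint.h>

    static bool csr_is_read_only(uint16_t csrno)
    {
        return ((csrno >> 10) & 0x3) == 0x3;   /* address bits [11:10] == 11 */
    }

    /* CSRRS/CSRRC with any rs1 other than x0 counts as a write attempt,
     * even if the register named by rs1 happens to hold zero */
    static bool csrrs_csrrc_must_trap(uint16_t csrno, unsigned rs1)
    {
        return rs1 != 0 && csr_is_read_only(csrno);
    }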
Atish Patra
6f6592d62e target/riscv: Expose the Smcntrpmf config
Create a new config for the Smcntrpmf extension so that it can be
enabled/disabled from the QEMU command line.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-13-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Atish Patra
dd4c123636 target/riscv: Do not setup pmu timer if OF is disabled
The timer setup function is invoked in both the hpmcounter write and
the mcountinhibit write paths. If the OF bit is set, the LCOFI
interrupt is disabled. There is no benefit in setting up the QEMU timer
until the OF bit is cleared to indicate that interrupts can be fired
again.

Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-12-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Rajnesh Kanwal
74112400df target/riscv: More accurately model priv mode filtering.
For programmable counters configured to count instructions/cycles, we
often end up with the counter not incrementing at all from the kernel's
perspective.

For example:
- The kernel configures hpm3 to count instructions, sets the hpmcounter
  to -10000, and inhibits all modes except U mode.
- In QEMU we configure a timer to expire after ~10000 instructions.
- The problem is that the kernel often doesn't even schedule a U-mode
  task before we hit the timer callback in QEMU.
- In the timer callback we inject the interrupt into the kernel; the
  kernel runs the handler and reads the hpmcounter3 value.
- Since QEMU maintains individual counters for each privilege mode, and
  U mode never ran, the U-mode counter didn't increment and QEMU
  returns the same value that was programmed by the kernel when
  starting the counter.
- The kernel checks for overflow using the previous and current values
  of the counter and reprograms the counter, since going by the counter
  value there was no overflow. (This is itself a problem: QEMU tells
  the kernel that counter3 overflowed, but the counter value returned
  by QEMU doesn't reflect that.)

This change makes sure that the timer is reprogrammed from the handler
if, based on the counter value, the counter didn't overflow.

Second, this change makes sure that whenever the counter is read, its
value is updated to reflect the latest count.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-11-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Rajnesh Kanwal
22c721c34c target/riscv: Start counters from both mhpmcounter and mcountinhibit
Currently we start the timer counter from the write_mhpmcounter path
only, without checking the mcountinhibit bit. This change adds an
mcountinhibit check and also programs the counter from
write_mcountinhibit as well.

When a counter is stopped using mcountinhibit we simply update
the value of the counter based on current host ticks and save
it for future reads.

We don't need to disable running timer as pmu_timer_trigger_irq
will discard the interrupt if the counter has been inhibited.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-10-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Atish Patra
8cff74c26d target/riscv: Enforce WARL behavior for scounteren/hcounteren
scounteren/hcounteren are also WARL registers, similar to mcounteren.
Only set the bits for the available counters during the write to
preserve the WARL behavior.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-9-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
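
A minimal sketch of the WARL write described above, with illustrative names:
only the bits corresponding to implemented counters are kept, everything
else is discarded on write:

    #include <stdint.h>

    /* bit i set => counter i (cycle, time, instret, hpm3..hpm31) exists */
    static uint32_t counter_avail_mask = 0x0000000F;   /* illustrative value */

    static uint32_t warl_counteren_write(uint32_t new_val)
    {
        return new_val & counter_avail_mask;   /* unimplemented bits read as 0 */
    }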
Atish Patra
46023470e0 target/riscv: Save counter values during countinhibit update
Currently, if a counter monitoring cycle/instret is stopped via
mcountinhibit we just update the state while the value is saved
during the next read. This is not accurate as the read may happen
many cycles after the counter is stopped. Ideally, the read should
return the value saved when the counter is stopped.

Thus, save the value of the counter during the inhibit update
operation and return that value during the read if the corresponding
bit in mcountinhibit is set.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-8-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Atish Patra
b2d7a7c7e4 target/riscv: Implement privilege mode filtering for cycle/instret
Privilege mode filtering can also be emulated for cycle/instret by
tracking host_ticks/icount during each privilege mode switch. This
patch implements that for both cycle/instret and mhpmcounters. The
first one requires Smcntrpmf while the other one requires Sscofpmf
to be enabled.

The cycle/instret are still computed using host ticks when icount
is not enabled. Otherwise, they are computed using raw icount which
is more accurate in icount mode.

Co-Developed-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-7-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
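
A rough, self-contained sketch of the bookkeeping idea (not the QEMU
implementation; all names are illustrative): on every privilege mode
switch, the elapsed host ticks are charged to the mode being left, giving
per-mode counts that the filtering can then use:

    #include <stdint.h>

    enum { PRV_U_SK, PRV_S_SK, PRV_M_SK, PRV_COUNT_SK };

    typedef struct {
        uint64_t ticks[PRV_COUNT_SK];   /* cycles attributed to each mode */
        uint64_t last_switch;           /* host tick value at the last switch */
        int cur_priv;
    } PrivTicksSketch;

    static void priv_switch_sketch(PrivTicksSketch *p, int new_priv, uint64_t now)
    {
        p->ticks[p->cur_priv] += now - p->last_switch;  /* charge the old mode */
        p->last_switch = now;
        p->cur_priv = new_priv;
    }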
Atish Patra
3b31b7baff target/riscv: Only set INH fields if priv mode is available
Currently, the INH fields are set in mhpmevent unconditionally,
without checking whether a particular priv mode is supported.

Suggested-by: Alistair Francis <alistair23@gmail.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-6-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Kaiwen Xue
b54a84c15e target/riscv: Add cycle & instret privilege mode filtering support
QEMU only calculates dummy cycles and instructions, so there is no
actual means to stop the icount in QEMU. Hence this patch merely adds
the functionality of accessing the cfg registers, and has no actual
effect on the counting of the cycle and instret counters.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-5-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Kaiwen Xue
6d1e3893cf target/riscv: Add cycle & instret privilege mode filtering definitions
This adds the definitions for ISA extension smcntrpmf.

Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-4-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Kaiwen Xue
251dccc09a target/riscv: Add cycle & instret privilege mode filtering properties
This adds the properties for ISA extension smcntrpmf. Patches
implementing it will follow.

Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-3-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Atish Patra
be470e5977 target/riscv: Fix the predicate functions for mhpmeventhX CSRs
mhpmeventhX CSRs are available for RV32. The predicate function
should check that first before checking the sscofpmf extension.

Fixes: 1466448345 ("target/riscv: Add sscofpmf extension support")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-2-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Rajnesh Kanwal
68c05fb530 target/riscv: Combine set_mode and set_virt functions.
Combine the riscv_cpu_set_virt_enabled() and riscv_cpu_set_mode()
functions, so that complete mode change information is available
through a single function.

This makes it easy to differentiate between HS->VS, VS->HS and VS->VS
transitions when executing state update code. For example, one use
case which inspired this change is updating mode-specific instruction
and cycle counters, which requires information about both the previous
and the current mode.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-1-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Daniel Henrique Barboza
3cb9f20499 target/riscv/kvm: update KVM regs to Linux 6.10-rc5
Two new regs added: ztso and zacas.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709085431.455541-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Jiayi Li
910c18a917 target/riscv: Validate the mode in write_vstvec
Based on the RISC-V privileged spec, vstvec substitutes for the usual stvec.
Therefore, the encoding of the MODE field should also be restricted to 0 and 1.

Signed-off-by: Jiayi Li <lijiayi@eswincomputing.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240701022553.1982-1-lijiayi@eswincomputing.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
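
A small sketch of the WARL check described above, with an illustrative
helper name: only MODE values 0 (direct) and 1 (vectored) are accepted,
and other writes leave the register unchanged:

    #include <stdint.h>

    static void write_vstvec_sketch(uint64_t *vstvec, uint64_t new_val)
    {
        if ((new_val & 0x3) < 2) {      /* MODE field is the low two bits */
            *vstvec = new_val;
        }
        /* otherwise keep the old value (WARL behaviour) */
    }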
LIU Zhiwei
8aebaa2591 target/riscv: Expose zabha extension as a cpu property
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-11-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
d34e406602 target/riscv: Add amocas.[b|h] for Zabha
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-10-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
8d07887bcb target/riscv: Move gen_cmpxchg before adding amocas.[b|h]
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-9-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
be4a8db7f3 target/riscv: Add AMO instructions for Zabha
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-8-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
24da9cbaca target/riscv: Move gen_amo before implement Zabha
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-7-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
a60ce58fd9 target/riscv: Support Zama16b extension
Zama16b is the property that misaligned loads, stores and atomics within
a naturally aligned 16-byte region are atomic.

According to the specification, Zama16b applies only to AMOs, loads
and stores defined in the base ISAs, and loads and stores of no more
than XLEN bits defined in the F, D, and Q extensions. Thus it should
not apply to zacas or RVC instructions.

For an instruction in that set, if all accessed bytes lie within a
naturally aligned 16-byte granule, the instruction will not raise an
exception for reasons of address alignment, and it will give rise to
only one memory operation for the purposes of RVWMO, i.e. it will
execute atomically.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-6-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
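
A self-contained sketch of the 16-byte granule test implied above (the
helper name is illustrative): an access is covered by Zama16b only if
every byte it touches falls inside one naturally aligned 16-byte region:

    #include <stdbool.h>
    #include <stdint.h>

    static bool within_16b_granule(uint64_t addr, unsigned size)
    {
        return (addr % 16) + size <= 16;   /* all bytes in one aligned 16B block */
    }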
LIU Zhiwei
197e4d2988 target/riscv: Add zcmop extension
Zcmop defines eight 16-bit MOP instructions named C.MOP.n, where n is
an odd integer between 1 and 15, inclusive. C.MOP.n is encoded in
the reserved encoding space corresponding to C.LUI xn, 0.

Unlike the MOPs defined in the Zimop extension, the C.MOP.n instructions
are defined to not write any register.

In the current implementation, C.MOP.n only has a check function,
without any further behavior.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Message-ID: <20240709113652.1239-4-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
6eab278d38 target/riscv: Add zimop extension
The Zimop extension defines an encoding space for 40 MOPs: 32 MOP
instructions named MOP.R.n, where n is an integer between 0 and 31,
inclusive, and 8 additional MOP instructions named MOP.RR.n, where n
is an integer between 0 and 7.

These 40 MOPs are initially defined to simply write zero to x[rd],
but are designed to be redefined by later extensions to perform some
other action.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Message-ID: <20240709113652.1239-2-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
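
A tiny sketch of the currently defined MOP behaviour described above, using
stand-in state (not QEMU's TCG front end): the instruction simply writes
zero to x[rd], with x0 left hardwired to zero:

    #include <stdint.h>

    static void mop_r_n_sketch(uint64_t xreg[32], int rd)
    {
        if (rd != 0) {      /* x0 is hardwired to zero and never written */
            xreg[rd] = 0;
        }
    }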
Richard Henderson
d74ec4d7dd trivial patches for 2024-07-17
-----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCAAdFiEEe3O61ovnosKJMUsicBtPaxppPlkFAmaXpakACgkQcBtPaxpp
 Plnvvwf8DdybFjyhAVmiG6+6WhB5s0hJhZRiWzUY6ieMbgPzCUgWzfr/pJh6q44x
 rw+aVfe2kf1ysycx3DjcJpucrC1rQD/qV6dB3IA1rxidBOZfCb8iZwoaB6yS9Epp
 4uXIdfje4zO6oCMN17MTXvuQIEUK3ZHN0EQOs7vsA2d8/pHqBqRoixjz9KnKHlpk
 P6kyIXceZ4wLAtwFJqa/mBBRnpcSdaWuQpzpBsg1E3BXRXXfeuXJ8WmGp0kEOpzQ
 k7+2sPpuah2z7D+jNFBW0+3ZYDvO9Z4pomQ4al4w+DHDyWBF49WnnSdDSDbWwxI5
 K0vUlsDVU8yTnIEgN8BL82F8eub5Ug==
 =ZYHJ
 -----END PGP SIGNATURE-----

Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging

trivial patches for 2024-07-17

# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCAAdFiEEe3O61ovnosKJMUsicBtPaxppPlkFAmaXpakACgkQcBtPaxpp
# Plnvvwf8DdybFjyhAVmiG6+6WhB5s0hJhZRiWzUY6ieMbgPzCUgWzfr/pJh6q44x
# rw+aVfe2kf1ysycx3DjcJpucrC1rQD/qV6dB3IA1rxidBOZfCb8iZwoaB6yS9Epp
# 4uXIdfje4zO6oCMN17MTXvuQIEUK3ZHN0EQOs7vsA2d8/pHqBqRoixjz9KnKHlpk
# P6kyIXceZ4wLAtwFJqa/mBBRnpcSdaWuQpzpBsg1E3BXRXXfeuXJ8WmGp0kEOpzQ
# k7+2sPpuah2z7D+jNFBW0+3ZYDvO9Z4pomQ4al4w+DHDyWBF49WnnSdDSDbWwxI5
# K0vUlsDVU8yTnIEgN8BL82F8eub5Ug==
# =ZYHJ
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 17 Jul 2024 09:06:17 PM AEST
# gpg:                using RSA key 7B73BAD68BE7A2C289314B22701B4F6B1A693E59
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>" [full]
# gpg:                 aka "Michael Tokarev <mjt@debian.org>" [full]
# gpg:                 aka "Michael Tokarev <mjt@corpit.ru>" [full]

* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu:
  meson: Update meson-buildoptions.sh
  backends/rng-random: Get rid of qemu_open_old()
  backends/iommufd: Get rid of qemu_open_old()
  backends/hostmem-epc: Get rid of qemu_open_old()
  hw/vfio/container: Get rid of qemu_open_old()
  hw/usb/u2f-passthru: Get rid of qemu_open_old()
  hw/usb/host-libusb: Get rid of qemu_open_old()
  hw/i386/sgx: Get rid of qemu_open_old()
  tests/avocado: Remove the non-working virtio_check_params test
  doc/net/l2tpv3: Update boolean fields' description to avoid short-form use
  target/hexagon/imported/mmvec: Fix superfluous trailing semicolon
  util/oslib-posix: Fix superfluous trailing semicolon
  hw/i386/x86: Fix superfluous trailing semicolon
  accel/kvm/kvm-all: Fix superfluous trailing semicolon
  README.rst: add the missing punctuations
  block/curl: rewrite http header parsing function

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-18 10:07:23 +10:00
Zhao Liu
29ea19465b target/hexagon/imported/mmvec: Fix superfluous trailing semicolon
Fix the superfluous trailing semicolon in target/hexagon/imported/mmvec/
ext.idef.

Cc: Brian Cain <bcain@quicinc.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-07-17 14:04:15 +03:00
Richard Henderson
58ee924b97 * target/i386/tcg: fixes for seg_helper.c
* SEV: Don't allow automatic fallback to legacy KVM_SEV_INIT,
   but also don't use it by default
 * scsi: honor bootindex again for legacy drives
 * hpet, utils, scsi, build, cpu: miscellaneous bugfixes
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaWoP0UHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroOqfggAg3jxUp6B8dFTEid5aV6qvT4M6nwD
 TAYcAl5kRqTOklEmXiPCoA5PeS0rbr+5xzWLAKgkumjCVXbxMoYSr0xJHVuDwQWv
 XunUm4kpxJBLKK3uTGAIW9A21thOaA5eAoLIcqu2smBMU953TBevMqA7T67h22rp
 y8NnZWWdyQRH0RAaWsCBaHVkkf+DuHSG5LHMYhkdyxzno+UWkTADFppVhaDO78Ba
 Egk49oMO+G6of4+dY//p1OtAkAf4bEHePKgxnbZePInJrkgHzr0TJWf9gERWFzdK
 JiM0q6DeqopZm+vENxS+WOx7AyDzdN0qOrf6t9bziXMg0Rr2Z8bu01yBCQ==
 =cZhV
 -----END PGP SIGNATURE-----

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* target/i386/tcg: fixes for seg_helper.c
* SEV: Don't allow automatic fallback to legacy KVM_SEV_INIT,
  but also don't use it by default
* scsi: honor bootindex again for legacy drives
* hpet, utils, scsi, build, cpu: miscellaneous bugfixes

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaWoP0UHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroOqfggAg3jxUp6B8dFTEid5aV6qvT4M6nwD
# TAYcAl5kRqTOklEmXiPCoA5PeS0rbr+5xzWLAKgkumjCVXbxMoYSr0xJHVuDwQWv
# XunUm4kpxJBLKK3uTGAIW9A21thOaA5eAoLIcqu2smBMU953TBevMqA7T67h22rp
# y8NnZWWdyQRH0RAaWsCBaHVkkf+DuHSG5LHMYhkdyxzno+UWkTADFppVhaDO78Ba
# Egk49oMO+G6of4+dY//p1OtAkAf4bEHePKgxnbZePInJrkgHzr0TJWf9gERWFzdK
# JiM0q6DeqopZm+vENxS+WOx7AyDzdN0qOrf6t9bziXMg0Rr2Z8bu01yBCQ==
# =cZhV
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 17 Jul 2024 02:34:05 AM AEST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386/tcg: save current task state before loading new one
  target/i386/tcg: use X86Access for TSS access
  target/i386/tcg: check for correct busy state before switching to a new task
  target/i386/tcg: Compute MMU index once
  target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
  target/i386/tcg: Reorg push/pop within seg_helper.c
  target/i386/tcg: use PUSHL/PUSHW for error code
  target/i386/tcg: Allow IRET from user mode to user mode with SMAP
  target/i386/tcg: Remove SEG_ADDL
  target/i386/tcg: fix POP to memory in long mode
  hpet: fix HPET_TN_SETVAL for high 32-bits of the comparator
  hpet: fix clamping of period
  docs: Update description of 'user=username' for '-run-with'
  qemu/timer: Add host ticks function for LoongArch
  scsi: fix regression and honor bootindex again for legacy drives
  hw/scsi/lsi53c895a: bump instruction limit in scripts processing to fix regression
  disas: Fix build against Capstone v6
  cpu: Free queued CPU work
  Revert "qemu-char: do not operate on sources from finalize callbacks"
  i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-17 15:40:28 +10:00
Peter Maydell
de680286b5 accel/tcg: Make cpu_exec_interrupt hook mandatory
The TCGCPUOps::cpu_exec_interrupt hook is currently not mandatory; if
it is left NULL then we treat it as if it had returned false. However
since pretty much every architecture needs to handle interrupts,
almost every target we have provides the hook. The one exception is
Tricore, which doesn't currently implement the architectural
interrupt handling.

Add a "do nothing" implementation of cpu_exec_hook for Tricore,
assert on startup that the CPU does provide the hook, and remove
the runtime NULL check before calling it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240712113949.4146855-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-16 20:04:08 +02:00
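
A hedged sketch of the general pattern described above, with stand-in types
rather than the QEMU structs: every target installs some hook, even a
"do nothing" one, an assertion checks it is set, and the per-call NULL
check disappears:

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        bool (*cpu_exec_interrupt)(void *cpu, int interrupt_request);
    } TcgOpsSketch;

    /* "do nothing" hook for a target without architectural interrupts */
    static bool noop_exec_interrupt(void *cpu, int interrupt_request)
    {
        return false;                    /* interrupt not handled */
    }

    static const TcgOpsSketch tricore_ops_sketch = {
        .cpu_exec_interrupt = noop_exec_interrupt,
    };

    static bool handle_interrupt(const TcgOpsSketch *ops, void *cpu, int req)
    {
        assert(ops->cpu_exec_interrupt != NULL);  /* mandatory; real code checks once at startup */
        return ops->cpu_exec_interrupt(cpu, req); /* no runtime NULL check needed */
    }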
Paolo Bonzini
6a079f2e68 target/i386/tcg: save current task state before loading new one
This is how the steps are ordered in the manual.  EFLAGS.NT is
overwritten after the fact in the saved image.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:25 +02:00
Paolo Bonzini
8b13106508 target/i386/tcg: use X86Access for TSS access
This takes care of probing the vaddr range in advance, and is also faster
because it avoids repeated TLB lookups.  It also matches the Intel manual
better, as it says "Checks that the current (old) TSS, new TSS, and all
segment descriptors used in the task switch are paged into system memory";
note however that it's not clear how the processor checks for segment
descriptors, and this check is not included in the AMD manual.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:25 +02:00
Paolo Bonzini
05d41bbcb3 target/i386/tcg: check for correct busy state before switching to a new task
This step is listed in the Intel manual: "Checks that the new task is available
(call, jump, exception, or interrupt) or busy (IRET return)".

The AMD manual lists the same operation under the "Preventing recursion"
paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor
checks the busy bit in the IRET case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
8053862af9 target/i386/tcg: Compute MMU index once
Add the MMU index to the StackAccess struct, so that it can be cached
or (in the next patch) computed from information that is not in
CPUX86State.

Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
fffe424b38 target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
Disconnect mmu index computation from the current pl
as stored in env->hflags.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
059368bcf5 target/i386/tcg: Reorg push/pop within seg_helper.c
Interrupts and call gates should use accesses with the DPL as
the privilege level.  While computing the applicable MMU index
is easy, the harder thing is how to plumb it in the code.

One possibility could be to add a single argument to the PUSH* macros
for the privilege level, but this is repetitive and risks confusion
between the involved privilege levels.

Another possibility is to pass both CPL and DPL, and adjusting both
PUSH* and POP* to use specific privilege levels (instead of using
cpu_{ld,st}*_data). This makes the code more symmetric.

However, a more complicated but much nicer approach is to use a structure
to contain the stack parameters, env, unwind return address, and rewrite
the macros into functions.  The struct provides an easy home for the MMU
index as well.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
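
A minimal, self-contained sketch of the "stack parameters in a struct"
approach described above, using a toy memory array in place of QEMU's
MMU-index-aware load/store helpers (all names here are illustrative):

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint8_t *mem;        /* stand-in for guest memory */
        uint32_t sp;
        int mmu_index;       /* privilege level the accesses should use */
        uintptr_t ra;        /* unwind return address in the real code */
    } StackAccessSketch;

    static void pushl_sketch(StackAccessSketch *sa, uint32_t val)
    {
        sa->sp -= 4;
        /* the real helper would store through sa->mmu_index here */
        memcpy(&sa->mem[sa->sp], &val, sizeof(val));
    }

    static uint32_t popl_sketch(StackAccessSketch *sa)
    {
        uint32_t val;
        memcpy(&val, &sa->mem[sa->sp], sizeof(val));
        sa->sp += 4;
        return val;
    }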
Paolo Bonzini
312ef3243e target/i386/tcg: use PUSHL/PUSHW for error code
Do not pre-decrement esp, let the macros subtract the appropriate
operand size.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
0bd385e7e3 target/i386/tcg: Allow IRET from user mode to user mode with SMAP
This fixes a bug wherein i386/tcg assumed an interrupt return using
the IRET instruction was always returning from kernel mode to either
kernel mode or user mode. This assumption is violated when IRET is used
as a clever way to restore thread state, as for example in the dotnet
runtime. There, IRET returns from user mode to user mode.

The bug is that stack accesses from IRET and RETF, as well as accesses
to the parameters in a call gate, are normal data accesses using the
current CPL.  This manifested itself as a page fault in the guest Linux
kernel due to SMAP preventing the access.

This bug appears to have been in QEMU since the beginning.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Co-developed-by: Robert R. Henry <rrh.henry@gmail.com>
Signed-off-by: Robert R. Henry <rrh.henry@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
a7cf494993 target/i386/tcg: Remove SEG_ADDL
This truncation is now handled by MMU_*32_IDX.  The introduction of
MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses
with a high segment base (e.g.  big real mode or vm86 mode), which did
not use SEG_ADDL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
3afc6539a8 target/i386/tcg: fix POP to memory in long mode
In long mode, POP to memory will write a full 64-bit value.  However,
the call to gen_writeback() in gen_POP will use MO_32 because the
decoding table is incorrect.

The bug was latent until commit aea49fbb01 ("target/i386: use gen_writeback()
within gen_POP()", 2024-06-08), and then became visible because gen_op_st_v
now receives op->ot instead of the "ot" returned by gen_pop_T0.

Analyzed-by: Clément Chigot <chigot@adacore.com>
Fixes: 5e9e21bcc4 ("target/i386: move 60-BF opcodes to new decoder", 2024-05-07)
Tested-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Michael Roth
9d38d9dca2 i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT
Currently if the 'legacy-vm-type' property of the sev-guest object is
'on', QEMU will attempt to use the newer KVM_SEV_INIT2 kernel
interface in conjunction with the newer KVM_X86_SEV_VM and
KVM_X86_SEV_ES_VM KVM VM types.

This can lead to measurement changes if, for instance, an SEV guest was
created on a host that originally had an older kernel that didn't
support KVM_SEV_INIT2, but is booted on the same host later on after the
host kernel was upgraded.

Instead, if legacy-vm-type is 'off', QEMU should fail if the
KVM_SEV_INIT2 interface is not provided by the current host kernel.
Modify the fallback handling accordingly.

In the future, VMSA features and other flags might be added to QEMU
which will require legacy-vm-type to be 'off' because they will rely
on the newer KVM_SEV_INIT2 interface. It may be difficult to convey to
users what values of legacy-vm-type are compatible with which
features/options, so as part of this rework, switch legacy-vm-type to a
tri-state OnOffAuto option. 'auto' in this case will automatically
switch to using the newer KVM_SEV_INIT2, but only if it is required to
make use of new VMSA features or other options only available via
KVM_SEV_INIT2.

Defining 'auto' in this way would avoid inadvertently breaking
compatibility with older kernels since it would only be used in cases
where users opt into newer features that are only available via
KVM_SEV_INIT2 and newer kernels, and provide better default behavior
than the legacy-vm-type=off behavior that was previously in place, so
make it the default for 9.1+ machine types.

Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
cc: kvm@vger.kernel.org
Signed-off-by: Michael Roth <michael.roth@amd.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Link: https://lore.kernel.org/r/20240710041005.83720-1-michael.roth@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 10:45:06 +02:00
Song Gao
3ef4b21a5c target/loongarch: Fix cpu_reset set wrong CSR_CRMD
After cpu_reset, DATF in CSR_CRMD is 0 and DATM is 0.
See section 6.4 of the manual [1].

  [1]: https://github.com/loongson/LoongArch-Documentation/releases/download/2023.04.20/LoongArch-Vol1-v1.10-EN.pdf

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240705021839.1004374-2-gaosong@loongson.cn>
2024-07-12 09:41:18 +08:00
Song Gao
bba1c36da0 target/loongarch: Set CSR_PRCFG1 and CSR_PRCFG2 values
We set the value of register CSR_PRCFG3, but left out CSR_PRCFG1
and CSR_PRCFG2. Set CSR_PRCFG1 and CSR_PRCFG2 according to the
default values of the physical machine.

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240705021839.1004374-1-gaosong@loongson.cn>
2024-07-12 09:41:18 +08:00
Feiyang Chen
785875874d target/loongarch: Remove avail_64 in trans_srai_w() and simplify it
Since srai.w is a valid instruction on la32, remove the avail_64 check
and simplify trans_srai_w().

Fixes: c0c0461e3a ("target/loongarch: Add avail_64 to check la64-only instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Feiyang Chen <chris.chenfeiyang@gmail.com>
Message-Id: <20240628033357.50027-1-chris.chenfeiyang@gmail.com>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12 09:41:18 +08:00
Bibo Mao
d38e31ef74 target/loongarch/kvm: Add software breakpoint support
With KVM virtualization, the debug exception for a normal break
instruction is injected into the guest kernel rather than the host.
Here a hypercall instruction with a special code is used for software
breakpoints, and the exact instruction to use comes from the KVM
kernel via the user API KVM_REG_LOONGARCH_DEBUG_INST.

For now only software breakpoints are supported, and they can be
inserted and removed. We can debug the guest kernel with gdb after the
kernel is loaded; hardware breakpoint support will be added later.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Tested-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240607035016.2975799-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12 09:41:18 +08:00
Richard Henderson
7f49089158 target/arm: Convert PMULL to decodetree
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11 11:41:34 +01:00
Richard Henderson
f7a8456586 target/arm: Convert ADDHN, SUBHN, RADDHN, RSUBHN to decodetree
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11 11:41:34 +01:00
Richard Henderson
26cb9dbed8 target/arm: Convert SADDW, SSUBW, UADDW, USUBW to decodetree
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11 11:41:34 +01:00
Richard Henderson
7575c5710c target/arm: Convert SQDMULL, SQDMLAL, SQDMLSL to decodetree
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11 11:41:34 +01:00
Richard Henderson
eb191187f6 target/arm: Convert SADDL, SSUBL, SABDL, SABAL, and unsigned to decodetree
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11 11:41:34 +01:00
Richard Henderson
97b06ab705 target/arm: Convert SMULL, UMULL, SMLAL, UMLAL, SMLSL, UMLSL to decodetree
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11 11:41:34 +01:00
Peter Maydell
4f7b1ecba8 target: Set TCGCPUOps::cpu_exec_halt to target's has_work implementation
Currently the TCGCPUOps::cpu_exec_halt method is optional, and if it
is not set then the default is to call the CPUClass::has_work
method (which has an identical function signature).

We would like to make the cpu_exec_halt method mandatory so we can
remove the runtime check and fallback handling.  In preparation for
that, make all the targets which don't need special handling in their
cpu_exec_halt set it to their cpu_has_work implementation instead of
leaving it unset.  (This is every target except for arm and i386.)

In the riscv case this requires us to make the function not
be local to the source file it's defined in.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-11 11:41:34 +01:00
Peter Maydell
fcee3707eb target/arm: Set arm_v7m_tcg_ops cpu_exec_halt to arm_cpu_exec_halt()
In commit a96edb687e we set the cpu_exec_halt field of the
TCGCPUOps arm_tcg_ops to arm_cpu_exec_halt(), but we left the
arm_v7m_tcg_ops struct unchanged.  That isn't wrong, because for
M-profile FEAT_WFxT doesn't exist and the default handling for "no
cpu_exec_halt method" is correct, but it's perhaps a little
confusing.  We would also like to make setting the cpu_exec_halt
method mandatory.

Initialize arm_v7m_tcg_ops cpu_exec_halt to the same function we use
for A-profile.  (On M-profile we never set up the wfxt timer so there
is no change in behaviour here.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-11 11:41:34 +01:00
Richard Henderson
efceb7d2bd target/arm: Use cpu_env in cpu_untagged_addr
In a completely artificial memset benchmark, object_dynamic_cast_assert
dominates the profile, even above guest address resolution and
the underlying host memset.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240702154911.1667418-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11 11:41:33 +01:00
Peter Maydell
a8ab8706d4 target/arm: Allow FPCR bits that aren't in FPSCR
In order to allow FPCR bits that aren't in the FPSCR (like the new
bits that are defined for FEAT_AFP), we need to make sure that writes
to the FPSCR only write to the bits of FPCR that are architecturally
mapped, and not the others.

Implement this with a new function vfp_set_fpcr_masked() which
takes a mask of which bits to update.

(We could do the same for FPSR, but we leave that until we actually
are likely to need it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-10-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
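
A one-line sketch of the masked update described above (the helper name is
illustrative, not the new QEMU function): only the bits selected by the mask
are replaced, and all other FPCR bits are preserved:

    #include <stdint.h>

    static uint64_t set_fpcr_masked_sketch(uint64_t fpcr, uint64_t val, uint64_t mask)
    {
        return (fpcr & ~mask) | (val & mask);   /* update only the masked bits */
    }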
Peter Maydell
db397a81ee target/arm: Rename FPSR_MASK and FPCR_MASK and define them symbolically
Now that we store FPSR and FPCR separately, the FPSR_MASK and
FPCR_MASK macros are slightly confusingly named and the comment
describing them is out of date.  Rename them to FPSCR_FPSR_MASK and
FPSCR_FPCR_MASK, document that they are the mask of which FPSCR bits
are architecturally mapped to which AArch64 register, and define them
symbolically rather than as hex values.  (This latter requires
defining some extra macros for bits which we haven't previously
defined.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-9-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Peter Maydell
a26db547b7 target/arm: Rename FPCR_ QC, NZCV macros to FPSR_
The QC, N, Z, C, V bits live in the FPSR, not the FPCR. Rename the
macros that define these bits accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-8-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Peter Maydell
ce07ea61ed target/arm: Store FPSR and FPCR in separate CPU state fields
Now that we have refactored the set/get functions so that the FPSCR
format is no longer the authoritative one, we can keep FPSR and FPCR
in separate CPU state fields.

As well as the get and set functions, we also have a scattering of
places in the code which directly access vfp.xregs[ARM_VFP_FPSCR] to
extract single fields which are stored there.  These all change to
directly access either vfp.fpsr or vfp.fpcr, depending on the
location of the field.  (Most commonly, this is the NZCV flags.)

We make the field in the CPU state struct 64 bits, because
architecturally FPSR and FPCR are 64 bits.  However we leave the
types of the arguments and return values of the get/set functions as
32 bits, since we don't need to make that change with the current
architecture and various callsites would be unable to handle
set bits in the high half (for instance the gdbstub protocol
assumes they're only 32 bit registers).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-7-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Peter Maydell
81ae37dbb4 target/arm: Implement store_cpu_field_low32() macro
We already have a load_cpu_field_low32() to load the low half of a
64-bit CPU struct field to a TCGv_i32; however we haven't yet needed
the store equivalent.  We'll want that in the next patch, so
implement it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-6-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Peter Maydell
abf1046a15 target/arm: Support migration when FPSR/FPCR won't fit in the FPSCR
To support FPSR and FPCR bits that don't exist in the AArch32 FPSCR
view of floating point control and status (such as the FEAT_AFP ones),
we need to make sure those bits can be migrated. This commit allows
that, whilst maintaining backwards and forwards migration compatibility
for CPUs where there are no such bits:

On sending:
 * If either the FPCR or the FPSR include set bits that are not
   visible in the AArch32 FPSCR view of floating point control/status
   then we send the FPCR and FPSR as two separate fields in a new
   cpu/vfp/fpcr_fpsr subsection, and we send a 0 for the old
   FPSCR field in cpu/vfp
 * Otherwise, we don't send the fpcr_fpsr subsection, and we send
   an FPSCR-format value in cpu/vfp as we did previously

On receiving:
 * if we see a non-zero FPSCR field, that is the right information
 * if we see a fpcr_fpsr subsection then that has the information
 * if we see neither, then FPSCR/FPCR/FPSR are all zero on the source;
   cpu_pre_load() ensures the CPU state defaults to that
 * if we see both, then the migration source is buggy or malicious;
   either the fpcr_fpsr or the FPSCR will "win" depending which
   is first in the migration stream; we don't care which that is

We make the new FPCR and FPSR on-the-wire data be 64 bits, because
architecturally these registers are that wide, and this avoids the
need to engage in further migration-compatibility contortions in
future if some new architecture revision defines bits in the high
half of either register.

(We won't ever send the new migration subsection until we add support
for a CPU feature which enables setting overlapping FPCR bits, like
FEAT_AFP.)
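
A hedged sketch of what such a subsection could look like (field and
function names here are illustrative assumptions, not the exact QEMU
code):

    static bool fpcr_fpsr_needed(void *opaque)
    {
        ARMCPU *cpu = opaque;
        /* Only send the subsection when bits would be lost in the FPSCR view. */
        return (cpu->env.vfp.fpcr & ~FPSCR_FPCR_MASK) ||
               (cpu->env.vfp.fpsr & ~FPSCR_FPSR_MASK);
    }

    static const VMStateDescription vmstate_fpcr_fpsr = {
        .name = "cpu/vfp/fpcr_fpsr",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = fpcr_fpsr_needed,
        .fields = (const VMStateField[]) {
            VMSTATE_UINT64(env.vfp.fpcr, ARMCPU),
            VMSTATE_UINT64(env.vfp.fpsr, ARMCPU),
            VMSTATE_END_OF_LIST()
        }
    };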

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-5-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Peter Maydell
b167325e93 target/arm: Make vfp_set_fpscr() call vfp_set_{fpcr, fpsr}
Make vfp_set_fpscr() call vfp_set_fpsr() and vfp_set_fpcr()
instead of the other way around.

The masking we do when getting and setting vfp.xregs[ARM_VFP_FPSCR]
is a little awkward, but we are going to change where we store the
underlying FPSR and FPCR information in a later commit, so it will
go away then.
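
A minimal sketch of the new call direction (simplified; the real setters
mask out the bits they do not own):

    void vfp_set_fpscr(CPUARMState *env, uint32_t val)
    {
        vfp_set_fpcr(env, val);
        vfp_set_fpsr(env, val);
    }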

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-4-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Peter Maydell
2de7cf9e05 target/arm: Make vfp_get_fpscr() call vfp_get_{fpcr, fpsr}
In AArch32, the floating point control and status bits are all in a
single register, FPSCR.  In AArch64, these were split into separate
FPCR and FPSR registers, but the bit layouts remained the same, with
no overlaps, so that you could construct an FPSCR value by ORing FPCR
and FPSR, or equivalently could produce FPSR and FPCR by masking an
FPSCR value.  For QEMU's implementation, we opted to use masking to
produce FPSR and FPCR, because we started with an AArch32
implementation of FPSCR.

The addition of the (AArch64-only) FEAT_AFP adds new bits to the FPCR
which overlap with some bits in the FPSR.  This means we'll no longer
be able to consider the FPSCR-encoded value as the primary one, but
instead need to treat FPSR/FPCR as the primary encoding and construct
the FPSCR from those.  (This remains possible because the FEAT_AFP
bits in FPCR don't appear in the FPSCR.)

As the first step in this refactoring, make vfp_get_fpscr() call
vfp_get_fpcr() and vfp_get_fpsr(), instead of the other way around.

Note that vfp_get_fpcsr_from_host() returns only bits in the FPSR
(for the cumulative fp exception bits), so we can simply rename
it without needing to add a new function for getting FPCR bits.
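
A minimal sketch of the composition described above (the two register
views do not overlap, so a plain OR is enough):

    uint32_t vfp_get_fpscr(CPUARMState *env)
    {
        return vfp_get_fpcr(env) | vfp_get_fpsr(env);
    }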

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-3-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Peter Maydell
ea8618382a target/arm: Correct comments about M-profile FPSCR
The M-profile FPSCR LTPSIZE is bits [18:16]; this is the same
field as A-profile FPSCR Len, not Stride. Correct the comment
in vfp_get_fpscr().

We also implemented M-profile FPSCR.QC, but forgot to delete
a TODO comment from vfp_set_fpscr(); remove it now.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-2-peter.maydell@linaro.org
2024-07-11 11:41:33 +01:00
Gustavo Romero
f81198cefa gdbstub: Add support for MTE in user mode
This commit implements the stubs to handle the qIsAddressTagged,
qMemTag, and QMemTag GDB packets, allowing all GDB 'memory-tag'
subcommands to work with QEMU gdbstub on aarch64 user mode. It also
implements the get/set functions for the special GDB MTE register
'tag_ctl', used to control the MTE fault type at runtime.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-Id: <20240628050850.536447-11-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240705084047.857176-40-alex.bennee@linaro.org>
2024-07-05 12:35:33 +01:00
Gustavo Romero
0c9b437c90 target/arm: Make some MTE helpers widely available
Make the MTE helpers allocation_tag_mem_probe, load_tag1, and store_tag1
available to other subsystems.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240628050850.536447-6-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240705084047.857176-35-alex.bennee@linaro.org>
2024-07-05 12:35:11 +01:00
Gustavo Romero
41bfb6704e target/arm: Fix exception case in allocation_tag_mem_probe
If the page in 'ptr_access' is inaccessible and 'probe' is true,
allocation_tag_mem_probe should not throw an exception, but currently it
does, so fix it.
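
A hedged sketch of the intended control flow (heavily simplified;
variable and helper names are illustrative, not the exact QEMU code):

    static uint8_t *check_tag_page_sketch(CPUState *cs, bool page_ok,
                                          bool probe, uintptr_t ra)
    {
        if (!page_ok) {
            if (probe) {
                return NULL;                 /* probing: report failure, no exception */
            }
            cpu_loop_exit_restore(cs, ra);   /* real access: raise as before */
        }
        /* ... placeholder for the successful tag-memory lookup ... */
        return NULL;
    }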

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240628050850.536447-5-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240705084047.857176-34-alex.bennee@linaro.org>
2024-07-05 12:35:07 +01:00
Richard Henderson
5915139aba * meson: Pass objects and dependencies to declare_dependency(), not static_library()
* meson: Drop the .fa library suffix
 * target/i386: drop AMD machine check bits from Intel CPUID
 * target/i386: add avx-vnni-int16 feature
 * target/i386: SEV bugfixes
 * target/i386: SEV-SNP -cpu host support
 * char: fix exit issues
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaGceoUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroNcpgf/XziKojGOTvYsE7xMijOUswYjCG5m
 ZVLqxTug8Q0zO/9mGvluKBTWmh8KhRWOovX5iZL8+F0gPoYPG4ONpNhh3wpA9+S7
 H7ph4V6sDJBX4l3OrOK6htD8dO5D9kns1iKGnE0lY60PkcHl+pU8BNWfK1zYp5US
 geiyzuRFRRtDmoNx5+o+w+D+W5msPZsnlj5BnPWM+O/ykeFfSrk2ztfdwHKXUhCB
 5FJcu2sWVx+wsdVzdjgT8USi5+VTK4vabq3SfccmNRxBRnJOCU5MrR63stMDceo4
 TswSB88I0WRV1848AudcGZRkjvKaXLyHJ+QTjg2dp7itEARJ3MGsvOpS5A==
 =3kv7
 -----END PGP SIGNATURE-----

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* meson: Pass objects and dependencies to declare_dependency(), not static_library()
* meson: Drop the .fa library suffix
* target/i386: drop AMD machine check bits from Intel CPUID
* target/i386: add avx-vnni-int16 feature
* target/i386: SEV bugfixes
* target/i386: SEV-SNP -cpu host support
* char: fix exit issues

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaGceoUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroNcpgf/XziKojGOTvYsE7xMijOUswYjCG5m
# ZVLqxTug8Q0zO/9mGvluKBTWmh8KhRWOovX5iZL8+F0gPoYPG4ONpNhh3wpA9+S7
# H7ph4V6sDJBX4l3OrOK6htD8dO5D9kns1iKGnE0lY60PkcHl+pU8BNWfK1zYp5US
# geiyzuRFRRtDmoNx5+o+w+D+W5msPZsnlj5BnPWM+O/ykeFfSrk2ztfdwHKXUhCB
# 5FJcu2sWVx+wsdVzdjgT8USi5+VTK4vabq3SfccmNRxBRnJOCU5MrR63stMDceo4
# TswSB88I0WRV1848AudcGZRkjvKaXLyHJ+QTjg2dp7itEARJ3MGsvOpS5A==
# =3kv7
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 04 Jul 2024 02:56:58 AM PDT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386/SEV: implement mask_cpuid_features
  target/i386: add support for masking CPUID features in confidential guests
  char-stdio: Restore blocking mode of stdout on exit
  target/i386: add avx-vnni-int16 feature
  i386/sev: Fallback to the default SEV device if none provided in sev_get_capabilities()
  i386/sev: Fix error message in sev_get_capabilities()
  target/i386: do not include undefined bits in the AMD topoext leaf
  target/i386: SEV: fix formatting of CPUID mismatch message
  target/i386: drop AMD machine check bits from Intel CPUID
  target/i386: pass X86CPU to x86_cpu_get_supported_feature_word
  meson: Drop the .fa library suffix
  Revert "meson: Propagate gnutls dependency"
  meson: Pass objects and dependencies to declare_dependency()
  meson: merge plugin_ldflags into emulator_link_args
  meson: move block.syms dependency out of libblock
  meson: move shared_module() calls where modules are already walked

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-04 09:16:07 -07:00
Paolo Bonzini
188569c10d target/i386/SEV: implement mask_cpuid_features
Drop features that are listed as "BitMask" in the PPR and currently
not supported by AMD processors.  The only ones that may become useful
in the future are TSC deadline timer and x2APIC, everything else is
not needed for SEV-SNP guests (e.g. VIRT_SSBD) or would require
processor support (e.g. TSC_ADJUST).

This allows running SEV-SNP guests with "-cpu host".

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-04 11:56:20 +02:00
Paolo Bonzini
c28d8b097f target/i386: add support for masking CPUID features in confidential guests
Some CPUID features may be provided by KVM for some guests, independent of
processor support, for example TSC deadline or TSC adjust.  If these are
not supported by the confidential computing firmware, however, the guest
will fail to start.  Add support for removing unsupported features from
"-cpu host".

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-04 07:47:11 +02:00
Richard Henderson
1406b7fc4b virtio: features,fixes
A bunch of improvements:
 - vhost dirty log is now only scanned once, not once per device
 - virtio and vhost now support VIRTIO_F_NOTIFICATION_DATA
 - cxl gained DCD emulation support
 - pvpanic gained shutdown support
 - beginning of patchset for Generic Port Affinity Structure
 - s3 support
 - friendlier error messages when boot fails on some illegal configs
 - for vhost-user, VHOST_USER_SET_LOG_BASE is now only sent once
 - part of vhost-user support for any POSIX system -
   not yet enabled due to qtest failures
 - sr-iov VF setup code has been reworked significantly
 - new tests, particularly for risc-v ACPI
 - bugfixes
 
 Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmaF068PHG1zdEByZWRo
 YXQuY29tAAoJECgfDbjSjVRp+DMIAMC//mBXIZlPprfhb5cuZklxYi31Acgu5TUr
 njqjCkN+mFhXXZuc3B67xmrQ066IEPtsbzCjSnzuU41YK4tjvO1g+LgYJBv41G16
 va2k8vFM5pdvRA+UC9li1CCIPxiEcszxOdzZemj3szWLVLLUmwsc5OZLWWeFA5m8
 vXrrT9miODUz3z8/Xn/TVpxnmD6glKYIRK/IJRzzC4Qqqwb5H3ji/BJV27cDUtdC
 w6ns5RYIj5j4uAiG8wQNDggA1bMsTxFxThRDUwxlxaIwAcexrf1oRnxGRePA7PVG
 BXrt5yodrZYR2sR6svmOOIF3wPMUDKdlAItTcEgYyxaVo5rAdpc=
 =p9h4
 -----END PGP SIGNATURE-----

Merge tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging

virtio: features,fixes

A bunch of improvements:
- vhost dirty log is now only scanned once, not once per device
- virtio and vhost now support VIRTIO_F_NOTIFICATION_DATA
- cxl gained DCD emulation support
- pvpanic gained shutdown support
- beginning of patchset for Generic Port Affinity Structure
- s3 support
- friendlier error messages when boot fails on some illegal configs
- for vhost-user, VHOST_USER_SET_LOG_BASE is now only sent once
- part of vhost-user support for any POSIX system -
  not yet enabled due to qtest failures
- sr-iov VF setup code has been reworked significantly
- new tests, particularly for risc-v ACPI
- bugfixes

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmaF068PHG1zdEByZWRo
# YXQuY29tAAoJECgfDbjSjVRp+DMIAMC//mBXIZlPprfhb5cuZklxYi31Acgu5TUr
# njqjCkN+mFhXXZuc3B67xmrQ066IEPtsbzCjSnzuU41YK4tjvO1g+LgYJBv41G16
# va2k8vFM5pdvRA+UC9li1CCIPxiEcszxOdzZemj3szWLVLLUmwsc5OZLWWeFA5m8
# vXrrT9miODUz3z8/Xn/TVpxnmD6glKYIRK/IJRzzC4Qqqwb5H3ji/BJV27cDUtdC
# w6ns5RYIj5j4uAiG8wQNDggA1bMsTxFxThRDUwxlxaIwAcexrf1oRnxGRePA7PVG
# BXrt5yodrZYR2sR6svmOOIF3wPMUDKdlAItTcEgYyxaVo5rAdpc=
# =p9h4
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 03 Jul 2024 03:41:51 PM PDT
# gpg:                using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469
# gpg:                issuer "mst@redhat.com"
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [undefined]
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17  0970 C350 3912 AFBE 8E67
#      Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA  8A0D 281F 0DB8 D28D 5469

* tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu: (85 commits)
  hw/pci: Replace -1 with UINT32_MAX for romsize
  pcie_sriov: Register VFs after migration
  pcie_sriov: Remove num_vfs from PCIESriovPF
  pcie_sriov: Release VFs failed to realize
  pcie_sriov: Reuse SR-IOV VF device instances
  pcie_sriov: Ensure VF function number does not overflow
  pcie_sriov: Do not manually unrealize
  hw/ppc/spapr_pci: Do not reject VFs created after a PF
  hw/ppc/spapr_pci: Do not create DT for disabled PCI device
  hw/pci: Rename has_power to enabled
  virtio-iommu: Clear IOMMUDevice when VFIO device is unplugged
  virtio: remove virtio_tswap16s() call in vring_packed_event_read()
  hw/cxl/events: Mark cxl-add-dynamic-capacity and cxl-release-dynamic-capcity unstable
  hw/cxl/events: Improve QMP interfaces and documentation for add/release dynamic capacity.
  tests/data/acpi/rebuild-expected-aml.sh: Add RISC-V
  pc-bios/meson.build: Add support for RISC-V in unpack_edk2_blobs
  meson.build: Add RISC-V to the edk2-target list
  tests/data/acpi/virt: Move ARM64 ACPI tables under aarch64/${machine} path
  tests/data/acpi: Move x86 ACPI tables under x86/${machine} path
  tests/qtest/bios-tables-test.c: Set "arch" for x86 tests
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-03 20:54:17 -07:00
David Woodhouse
93c76555d8 hw/i386/fw_cfg: Add etc/e820 to fw_cfg late
In e820_add_entry() the e820_table is reallocated with g_renew() to make
space for a new entry. However, fw_cfg_arch_create() just uses the
existing e820_table pointer. This leads to a use-after-free if anything
adds a new entry after fw_cfg is set up.

Shift the addition of the etc/e820 file to the machine done notifier, via
a new fw_cfg_add_e820() function.

Also make e820_table private and use an e820_get_table() accessor function
for it, which sets a flag that will trigger an assert() for any *later*
attempts to add to the table.

Make e820_add_entry() return void, as most callers don't check for error
anyway.
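
A hedged sketch of the accessor-plus-flag idea described above
(simplified; not the exact QEMU code):

    static struct e820_entry *e820_table;
    static unsigned e820_entries;
    static bool e820_table_published;

    void e820_add_entry(uint64_t address, uint64_t length, uint32_t type)
    {
        /* Adding after the table has been handed out would be a use-after-free. */
        assert(!e820_table_published);
        e820_table = g_renew(struct e820_entry, e820_table, e820_entries + 1);
        e820_table[e820_entries++] = (struct e820_entry) {
            .address = address, .length = length, .type = type,
        };
    }

    struct e820_entry *e820_get_table(unsigned *nr_entries)
    {
        e820_table_published = true;   /* later e820_add_entry() calls will assert */
        *nr_entries = e820_entries;
        return e820_table;
    }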

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <a2708734f004b224f33d3b4824e9a5a262431568.camel@infradead.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-07-03 18:14:06 -04:00
Paolo Bonzini
138c3377a9 target/i386: add avx-vnni-int16 feature
AVX-VNNI-INT16 (CPUID[EAX=7,ECX=1].EDX[10]) is supported by the Clearwater
Forest processor; add it to QEMU as it does not need any specific
enablement.
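
For reference, a hedged sketch of how such a feature bit is typically
declared (the macro name is an assumption, mirroring the usual QEMU
naming convention):

    /* CPUID[EAX=7,ECX=1].EDX bit 10 */
    #define CPUID_7_1_EDX_AVX_VNNI_INT16  (1U << 10)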

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-03 18:41:26 +02:00
Michal Privoznik
f4e5f302b3 i386/sev: Fallback to the default SEV device if none provided in sev_get_capabilities()
When management tools (e.g. libvirt) query QEMU capabilities,
they start QEMU with a minimalistic configuration and issue
various commands on monitor. One of the command issued is/might
be "query-sev-capabilities" to learn values like cbitpos or
reduced-phys-bits. But as of v9.0.0-1145-g16dcf200dc the monitor
command returns an error instead.

This creates a chicken-and-egg problem: in order to query those
aforementioned values, QEMU needs to be started with a 'sev-guest'
object, but to start QEMU with such an object the values must already
be known.

I think it's safe to assume that the default path ("/dev/sev")
provides the same data as a user-provided one. So fall back to it.

Fixes: 16dcf200dc
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Link: https://lore.kernel.org/r/157f93712c23818be193ce785f648f0060b33dee.1719218926.git.mprivozn@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-03 18:41:26 +02:00
Michal Privoznik
ab5f4edf72 i386/sev: Fix error message in sev_get_capabilities()
When a custom path is provided to sev-guest object and opening
the path fails an error message is reported. But the error
message still mentions DEFAULT_SEV_DEVICE ("/dev/sev") instead of
the custom path.

Fixes: 16dcf200dc
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/b4648905d399780063dc70851d3d6a3cd28719a5.1719218926.git.mprivozn@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-03 18:41:26 +02:00
Paolo Bonzini
29a51b2bb5 target/i386: do not include undefined bits in the AMD topoext leaf
Commit d7c72735f6 ("target/i386: Add new EPYC CPU versions with updated
cache_info", 2023-05-08) ensured that AMD-defined CPU models did not
have the 'complex_indexing' bit set, but left it set in "-cpu host"
which uses the default ("legacy") cache information.

Reimplement that commit using a CPU feature, so that it can be applied
to all guests using a new machine type, independent of the CPU model.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-03 18:41:26 +02:00
Paolo Bonzini
9b40d376f6 target/i386: SEV: fix formatting of CPUID mismatch message
Fixes: 70943ad8e4 ("i386/sev: Add support for SNP CPUID validation", 2024-06-05)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-03 18:41:26 +02:00
Paolo Bonzini
0b2757412c target/i386: drop AMD machine check bits from Intel CPUID
The recent addition of the SUCCOR bit to kvm_arch_get_supported_cpuid()
causes the bit to be visible when "-cpu host" VMs are started on Intel
processors.

While this should in principle be harmless, it's not tidy and we don't
even know for sure that it doesn't cause any guest OS to take unexpected
paths.  Since x86_cpu_get_supported_feature_word() can return different
values depending on the guest, adjust it to hide the SUCCOR bit if the
guest has a non-AMD vendor.
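
A hedged sketch of the vendor-dependent filtering (names such as
IS_AMD_CPU and CPUID_8000_0007_EBX_SUCCOR are assumptions based on the
description above; this is not the exact QEMU code):

    static uint32_t supported_8000_0007_ebx_sketch(X86CPU *cpu)
    {
        uint32_t r = kvm_arch_get_supported_cpuid(kvm_state, 0x80000007, 0, R_EBX);
        if (!IS_AMD_CPU(&cpu->env)) {
            r &= ~CPUID_8000_0007_EBX_SUCCOR;   /* hide AMD MCA bits from non-AMD guests */
        }
        return r;
    }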

Suggested-by: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: John Allen <john.allen@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-03 18:41:26 +02:00
Paolo Bonzini
8dee384832 target/i386: pass X86CPU to x86_cpu_get_supported_feature_word
This allows modifying the bits in "-cpu max"/"-cpu host" depending on
the guest CPU vendor (which, at least by default, is the host vendor in
the case of KVM).

For example, machine check architecture differs between Intel and AMD,
and bits from AMD should be dropped when configuring the guest for
an Intel model.

Cc: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: John Allen <john.allen@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-03 18:41:26 +02:00
Richard Henderson
ff6d8490e3 Misc HW patches queue
- Prevent NULL deref in sPAPR network model (Oleg)
 - Automatic deprecation of versioned machine types (Daniel)
 - Correct 'dump-guest-core' property name in hint (Akihiko)
 - Prevent IRQ leak in MacIO IDE model (Mark)
 - Remove dead #ifdef'ry related to unsupported macOS 12.0 (Akihiko)
 - Remove "hw/hw.h" where unnecessary (Thomas)
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmaDiSQACgkQ4+MsLN6t
 wN4jmBAA2kxwFAGbKvokANDAZBwWmJdnuIPcqS+jdo/wCuQXOo1ROADd3NFlgQWx
 z1xOv/LiAmQiUeeiP+nlA8gWCdW93PErU07og1p1+N2D1sBO6oG5QDlT/tTFuEGd
 IL21jG2xWkEemd3PSN2pHKrytpS0e4S0cNZIKgTUTKdv+Mb2ZEiQi7K4zUTjcmjz
 nlsSjTXdyKBmoiqNGhITWfbR2IUWjtCpzUO44ceqXd5HDpvfGhpKI7Uwun1W2xNU
 yw1XrAFd64Qhd/lvc28G1DLfDdtRIoaRGxgLzQbU6621s0o50Ecs6TNHseuUAKvd
 tQhOtM8IEuZ6jVw8nswCPIcJyjbeY29kjI4WmD2weF1fZbDey6Emlrf+dkJUIuCb
 TximyTXw3rb1nREUVsEQLF69BKjTjE5+ETaplcTWGHCoH2+uA/5MqygalTH1Ub9W
 TwVWSUwpNvIJ3RTsT20YVowkill8piF+ECldTKzJuWjqDviiJDoMm5EFdkkcUB20
 nMyhGoiXtiQ4NYU0/B6HbHOXZkqLbhWcx9G281xJ+RRwjUyVxXD3zHGR9AoOp9ls
 EAo/2URJtGN95LJmzCtaD+oo0wRZ5+7lmnqHPPXkYUdwFm4bhe3dP4NggIrS0cXn
 19wvBqQuPwywxIbFEu6327YtfPRcImWIlFthWnm9lUyDmbOqDKw=
 =fLCx
 -----END PGP SIGNATURE-----

Merge tag 'hw-misc-20240702' of https://github.com/philmd/qemu into staging

Misc HW patches queue

- Prevent NULL deref in sPAPR network model (Oleg)
- Automatic deprecation of versioned machine types (Daniel)
- Correct 'dump-guest-core' property name in hint (Akihiko)
- Prevent IRQ leak in MacIO IDE model (Mark)
- Remove dead #ifdef'ry related to unsupported macOS 12.0 (Akihiko)
- Remove "hw/hw.h" where unnecessary (Thomas)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmaDiSQACgkQ4+MsLN6t
# wN4jmBAA2kxwFAGbKvokANDAZBwWmJdnuIPcqS+jdo/wCuQXOo1ROADd3NFlgQWx
# z1xOv/LiAmQiUeeiP+nlA8gWCdW93PErU07og1p1+N2D1sBO6oG5QDlT/tTFuEGd
# IL21jG2xWkEemd3PSN2pHKrytpS0e4S0cNZIKgTUTKdv+Mb2ZEiQi7K4zUTjcmjz
# nlsSjTXdyKBmoiqNGhITWfbR2IUWjtCpzUO44ceqXd5HDpvfGhpKI7Uwun1W2xNU
# yw1XrAFd64Qhd/lvc28G1DLfDdtRIoaRGxgLzQbU6621s0o50Ecs6TNHseuUAKvd
# tQhOtM8IEuZ6jVw8nswCPIcJyjbeY29kjI4WmD2weF1fZbDey6Emlrf+dkJUIuCb
# TximyTXw3rb1nREUVsEQLF69BKjTjE5+ETaplcTWGHCoH2+uA/5MqygalTH1Ub9W
# TwVWSUwpNvIJ3RTsT20YVowkill8piF+ECldTKzJuWjqDviiJDoMm5EFdkkcUB20
# nMyhGoiXtiQ4NYU0/B6HbHOXZkqLbhWcx9G281xJ+RRwjUyVxXD3zHGR9AoOp9ls
# EAo/2URJtGN95LJmzCtaD+oo0wRZ5+7lmnqHPPXkYUdwFm4bhe3dP4NggIrS0cXn
# 19wvBqQuPwywxIbFEu6327YtfPRcImWIlFthWnm9lUyDmbOqDKw=
# =fLCx
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 01 Jul 2024 09:59:16 PM PDT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]

* tag 'hw-misc-20240702' of https://github.com/philmd/qemu: (22 commits)
  Remove inclusion of hw/hw.h from files that don't need it
  net/vmnet: Drop ifdef for macOS versions older than 12.0
  block/file-posix: Drop ifdef for macOS versions older than 12.0
  audio: Drop ifdef for macOS versions older than 12.0
  hvf: Drop ifdef for macOS versions older than 12.0
  hw/ide/macio: switch from using qemu_allocate_irq() to qdev input GPIOs
  system/physmem: Fix reference to dump-guest-core
  docs: document special exception for machine type deprecation & removal
  hw/i386: remove obsolete manual deprecation reason string of i440fx machines
  hw/ppc: remove obsolete manual deprecation reason string of spapr machines
  hw: skip registration of outdated versioned machine types
  hw: set deprecation info for all versioned machine types
  include/hw: temporarily disable deletion of versioned machine types
  include/hw: add macros for deprecation & removal of versioned machines
  hw/i386: convert 'q35' machine definitions to use new macros
  hw/i386: convert 'i440fx' machine definitions to use new macros
  hw/m68k: convert 'virt' machine definitions to use new macros
  hw/ppc: convert 'spapr' machine definitions to use new macros
  hw/s390x: convert 'ccw' machine definitions to use new macros
  hw/arm: convert 'virt' machine definitions to use new macros
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-02 07:11:47 -07:00
Akihiko Odaki
f64933c8ae hvf: Drop ifdef for macOS versions older than 12.0
macOS versions older than 12.0 are no longer supported.

docs/about/build-platforms.rst says:
> Support for the previous major version will be dropped 2 years after
> the new major version is released or when the vendor itself drops
> support, whichever comes first.

macOS 12.0 was released 2021:
https://www.apple.com/newsroom/2021/10/macos-monterey-is-now-available/

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240629-macos-v1-1-6e70a6b700a0@daynix.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-02 06:58:48 +02:00
Gustavo Romero
02ff2add77 target/arm: Enable FEAT_Debugv8p8 for -cpu max
Enable FEAT_Debugv8p8 for max CPU. This feature is out of scope for QEMU
since it concerns the external debug interface for JTAG, but is
mandatory in Armv8.8 implementations, hence it is reported as supported
in the ID registers.
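
A hedged fragment sketch of how the ID register field is typically
bumped for this (0xa is the DebugVer value corresponding to Debugv8.8;
field and variable names are taken as assumptions here):

    uint64_t t = cpu->isar.id_aa64dfr0;
    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 0xa);   /* FEAT_Debugv8p8 */
    cpu->isar.id_aa64dfr0 = t;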

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240624180915.4528-4-gustavo.romero@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Gustavo Romero
c5f9e8bb93 target/arm: Move initialization of debug ID registers
Move the initialization of the debug ID registers to aa32_max_features,
which is used to set the 32-bit ID registers. This ensures that the
debug ID registers are consistently set for the max CPU in a single
place.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240624180915.4528-3-gustavo.romero@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Gustavo Romero
4df378ab51 target/arm: Fix indentation
Fix comment indentation by adding a missing space.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240624180915.4528-2-gustavo.romero@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
2cd5078d57 target/arm: Delete dead code from disas_simd_indexed
MLA, MLS, SQDMULH, SQRDMULH, were converted with 8db93dcd3d
and f80701cb44, and this code should have been removed then.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
80b02a565e target/arm: Convert FCMLA to decodetree
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
0f46ebee63 target/arm: Convert FCADD to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
6515b13e87 target/arm: Add data argument to do_fp3_vector
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
9676c9d9b5 target/arm: Convert BFMMLA, SMMLA, UMMLA, USMMLA to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
1c6ecab431 target/arm: Convert BFMLALB, BFMLALT to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
9130827c4c target/arm: Convert BFDOT to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
849b7c1661 target/arm: Convert SUDOT, USDOT to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:53 +01:00
Richard Henderson
65dd60a65b target/arm: Convert SDOT, UDOT to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:52 +01:00
Richard Henderson
f698e45270 target/arm: Convert SQRDMLAH, SQRDMLSH to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 15:40:52 +01:00
Richard Henderson
7619129f0d target/arm: Fix FJCVTZS vs flush-to-zero
Input denormals cause the JavaScript inexact bit
(output to Z) to be set.

Cc: qemu-stable@nongnu.org
Fixes: 6c1f6f2733 ("target/arm: Implement ARMv8.3-JSConv")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2375
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-4-richard.henderson@linaro.org
[PMM: fixed hardcoded tab in test case]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 12:48:55 +01:00
Richard Henderson
a5b72ccc0f target/arm: Fix SQDMULH (by element) with Q=0
The inner loop, bounded by eltspersegment, must not be
larger than the outer loop, bounded by elements.
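
A hedged sketch of the loop-bound relationship (a simplified pseudo-shape
of the fix, not the exact decode code):

    int elements = (is_q ? 16 : 8) / esize;            /* elements in the whole vector */
    int eltspersegment = MIN(16 / esize, elements);    /* clamp: never exceeds elements */

    for (int seg = 0; seg < elements; seg += eltspersegment) {
        for (int i = 0; i < eltspersegment; i++) {
            /* operate on element seg + i, using the per-segment index operand */
        }
    }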

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 12:48:55 +01:00
Richard Henderson
76bccf3cb9 target/arm: Fix VCMLA Dd, Dn, Dm[idx]
The inner loop, bounded by eltspersegment, must not be
larger than the outer loop, bounded by elements.

Cc: qemu-stable@nongnu.org
Fixes: 18fc240578 ("target/arm: Implement SVE fp complex multiply add (indexed)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2376
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 12:48:55 +01:00
Zide Chen
05fc711c3a target/i386: Advertise MWAIT iff host supports
host_cpu_realizefn() sets CPUID_EXT_MONITOR without consulting host/KVM
capabilities. This may cause problems:

- If MWAIT/MONITOR is not available on the host, advertising this
  feature to the guest and executing MWAIT/MONITOR from the guest
  triggers #UD and the guest doesn't boot.  This is because typically
  #UD takes priority over VM-Exit interception checks and KVM doesn't
  emulate MONITOR/MWAIT on #UD.

- If KVM doesn't support KVM_X86_DISABLE_EXITS_MWAIT, MWAIT/MONITOR
  from the guest are intercepted by KVM, which is not what cpu-pm=on
  intends to do.

In these cases, MWAIT/MONITOR should not be exposed to the guest.

The logic in kvm_arch_get_supported_cpuid() to handle CPUID_EXT_MONITOR
is correct and sufficient, and we can't set CPUID_EXT_MONITOR after
x86_cpu_filter_features().

This was not an issue before commit 662175b91f ("i386: reorder call to
cpu_exec_realizefn") because the feature added in the accel-specific
realizefn could be checked against host availability and filtered out.

Additionally, it does not seem like a good idea to handle guest CPUID
leaves in host_cpu_realizefn(), so this patch merges
host_cpu_enable_cpu_pm() into kvm_cpu_realizefn().

Fixes: f5cc5a5c16 ("i386: split cpu accelerators from cpu.c, using AccelCPUClass")
Fixes: 662175b91f ("i386: reorder call to cpu_exec_realizefn")
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-30 19:51:44 +03:00
Richard Henderson
b31d386781 target/i386/sev: Fix printf formats
hwaddr uses HWADDR_PRIx, sizeof yields size_t so it uses %zu,
and gsize uses G_GSIZE_FORMAT.
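
A small hedged fragment illustrating the formats in question
(hwaddr/HWADDR_PRIx come from QEMU's "exec/hwaddr.h", G_GSIZE_FORMAT
from glib; variable names are illustrative):

    hwaddr gpa = 0x100000;
    gsize blob_len = 4096;
    uint8_t data[16];

    printf("gpa 0x%" HWADDR_PRIx ", sizeof(data) %zu, blob %" G_GSIZE_FORMAT " bytes\n",
           gpa, sizeof(data), blob_len);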

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240626194950.1725800-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Richard Henderson
cb61b17462 target/i386/sev: Use size_t for object sizes
This code was using both uint32_t and uint64_t for len.
Consistently use size_t instead.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240626194950.1725800-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Paolo Bonzini
1ab620bf36 target/i386: SEV: store pointer to decoded id_auth in SevSnpGuest
Do not rely on finish->id_auth_uaddr, so that there are no casts from
pointer to uint64_t.  They break on 32-bit hosts.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Paolo Bonzini
803b7718e6 target/i386: SEV: rename sev_snp_guest->id_auth
Free the "id_auth" name for the binary version of the data.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Paolo Bonzini
dd1b2fb554 target/i386: SEV: store pointer to decoded id_block in SevSnpGuest
Do not rely on finish->id_block_uaddr, so that there are no casts from
pointer to uint64_t.  They break on 32-bit hosts.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Paolo Bonzini
68c3aa3e97 target/i386: SEV: rename sev_snp_guest->id_block
Free the "id_block" name for the binary version of the data.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Paolo Bonzini
74f73c2918 target/i386: remove unused enum
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 19:26:54 +02:00
Paolo Bonzini
460231ad36 target/i386: give CC_OP_POPCNT low bits corresponding to MO_TL
Handle it like the other arithmetic cc_ops.  This simplifies the
implementation of bit test instructions a bit.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 14:44:52 +02:00
Paolo Bonzini
944f400134 target/i386: use cpu_cc_dst for CC_OP_POPCNT
It is the only CCOp, among those that compute ZF from one of the cc_op_*
registers, that uses cpu_cc_src.  Do not leave it as the odd one out;
instead use cpu_cc_dst like the others.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 14:44:52 +02:00
Paolo Bonzini
e36b976da4 target/i386: fix CC_OP dump
POPCNT was missing, and the entries were all out of order after
ADCX/ADOX/ADCOX were moved close to EFLAGS.  Just use designated
initializers.
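
A hedged sketch of the designated-initializer style referred to above
(the CC_OP_* names are the target/i386 enumerators; the table contents
here are illustrative only):

    static const char * const cc_op_str[] = {
        [CC_OP_EFLAGS] = "EFLAGS",
        [CC_OP_ADCX]   = "ADCX",
        [CC_OP_ADOX]   = "ADOX",
        [CC_OP_ADCOX]  = "ADCOX",
        [CC_OP_POPCNT] = "POPCNT",   /* previously missing from the dump table */
    };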

Fixes: 4885c3c495 ("target-i386: Use ctpop helper", 2017-01-10)
Fixes: cc155f1971 ("target/i386: rewrite flags writeback for ADCX/ADOX", 2024-06-11)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-28 14:44:52 +02:00
Alvin Chang
2f5a2315b8 target/riscv: Apply modularized matching conditions for icount trigger
We have implemented trigger_common_match(), which checks if the enabled
privilege levels of the trigger match the CPU's current privilege level. We
can invoke trigger_common_match() to check the privilege levels of the
type 3 triggers.

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240626132247.2761286-4-alvinga@andestech.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-27 13:09:16 +10:00
Alvin Chang
72dec1666f target/riscv: Apply modularized matching conditions for watchpoint
We have implemented trigger_common_match(), which checks if the enabled
privilege levels of the trigger match the CPU's current privilege level.
Remove the related code in riscv_cpu_debug_check_watchpoint() and invoke
trigger_common_match() to check the privilege levels of the type 2 and
type 6 triggers for the watchpoints.

This commit also changes the behavior of looping the triggers. In
previous implementation, if we have a type 2 trigger and
env->virt_enabled is true, we directly return false to stop the loop.
Now we keep looping all the triggers until we find a matched trigger.

Only load/store bits and loaded/stored address should be further checked
in riscv_cpu_debug_check_watchpoint().

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240626132247.2761286-3-alvinga@andestech.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-27 12:25:09 +10:00
Alvin Chang
5e20b88953 target/riscv: Add functions for common matching conditions of trigger
According to the RISC-V Debug specification version 0.13 [1] (this also
applies to version 1.0 [2], which has not been ratified yet), there are several
common matching conditions before firing a trigger, including the
enabled privilege levels of the trigger.

This commit adds trigger_common_match() to prepare the common matching
conditions for the type 2/3/6 triggers. For now, we just implement
trigger_priv_match() to check if the enabled privilege levels of the
trigger match the CPU's current privilege level.
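
A hedged sketch of the privilege check (the tdata1 mode-bit macros are
illustrative assumptions; PRV_* are the usual RISC-V privilege
constants):

    static bool trigger_priv_match_sketch(CPURISCVState *env, target_ulong tdata1)
    {
        switch (env->priv) {
        case PRV_M:
            return tdata1 & TDATA1_M_ENABLED;
        case PRV_S:
            return tdata1 & TDATA1_S_ENABLED;
        case PRV_U:
            return tdata1 & TDATA1_U_ENABLED;
        default:
            return false;
        }
    }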

Remove the related code in riscv_cpu_debug_check_breakpoint() and invoke
trigger_common_match() to check the privilege levels of the type 2 and
type 6 triggers for the breakpoints.

This commit also changes the behavior of looping the triggers. In
previous implementation, if we have a type 2 trigger and
env->virt_enabled is true, we directly return false to stop the loop.
Now we keep looping all the triggers until we find a matched trigger.

Only the execution bit and the executed PC should be further checked in
riscv_cpu_debug_check_breakpoint().

[1]: https://github.com/riscv/riscv-debug-spec/releases/tag/task_group_vote
[2]: https://github.com/riscv/riscv-debug-spec/releases/tag/1.0.0-rc1-asciidoc

Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240626132247.2761286-2-alvinga@andestech.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-27 12:23:33 +10:00
Frank Chang
cb5b50d91d target/riscv: Remove extension auto-update check statements
Remove the old-fashioned extension auto-update check statements as
they are replaced by the extension implied rules.

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jerry Zhang Jian <jerry.zhangjian@sifive.com>
Tested-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240625114629.27793-7-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:15:33 +10:00
Frank Chang
3dd2168c33 target/riscv: Add Zc extension implied rule
Zc extension has special implied rules that need to be handled separately.

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jerry Zhang Jian <jerry.zhangjian@sifive.com>
Tested-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240625114629.27793-6-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:12:21 +10:00
Frank Chang
340c3ca5f2 target/riscv: Add multi extension implied rules
Add multi extension implied rules to enable the implied extensions of
the multi extension recursively.

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jerry Zhang Jian <jerry.zhangjian@sifive.com>
Tested-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240625114629.27793-5-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:10:47 +10:00
Frank Chang
171773391a target/riscv: Add MISA extension implied rules
Add MISA extension implied rules to enable the implied extensions
of MISA recursively.

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jerry Zhang Jian <jerry.zhangjian@sifive.com>
Tested-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240625114629.27793-4-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:09:12 +10:00
Frank Chang
047da861f9 target/riscv: Introduce extension implied rule helpers
Introduce helpers to enable the extensions based on the implied rules.
The implied extensions are enabled recursively, so we don't have to
expand all of them manually. This also eliminates the old-fashioned
ordering requirement. For example, Zvksg implies Zvks, Zvks implies
Zvksed, etc., removing the need to check the implied rules of Zvksg
before Zvks.

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jerry Zhang Jian <jerry.zhangjian@sifive.com>
Tested-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240625114629.27793-3-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:07:35 +10:00
Frank Chang
f04f770920 target/riscv: Introduce extension implied rules definition
RISCVCPUImpliedExtsRule is created to store the implied rules.
'is_misa' flag is used to distinguish whether the rule is derived
from the MISA or other extensions.
'ext' stores the MISA bit if 'is_misa' is true. Otherwise, it stores
the offset of the extension defined in RISCVCPUConfig. 'ext' will also
serve as the key of the hash tables to look up the rule in the following
commit.
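
A hedged sketch of the rule structure as described (field names follow
the commit message; exact types and the implied-extension list layout
are assumptions):

    typedef struct RISCVCPUImpliedExtsRule {
        bool is_misa;        /* true: 'ext' is a MISA bit; false: a RISCVCPUConfig offset */
        uint32_t ext;        /* also used as the hash-table key for rule lookup */
        const uint32_t *implied_exts;   /* offsets of the extensions to enable */
    } RISCVCPUImpliedExtsRule;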

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jerry Zhang Jian <jerry.zhangjian@sifive.com>
Tested-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240625114629.27793-2-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:06:00 +10:00
Clément Léger
c165408779 target/riscv: fix instructions count handling in icount mode
When icount is enabled, rather than returning the virtual CPU time, we
should return the instruction count itself. Add an instructions bool
parameter to get_ticks() to correctly return icount_get_raw() when
icount_enabled() == 1 and the instruction count is queried. This will modify
the existing behavior, which was returning an instruction count close to
the number of cycles (CPI ~= 1).
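
A hedged sketch of the new parameter's effect (simplified;
icount_enabled()/icount_get_raw() are existing QEMU helpers, the cycle
path is abbreviated here):

    static uint64_t get_ticks_sketch(bool instructions)
    {
        if (icount_enabled() && instructions) {
            return icount_get_raw();        /* true retired-instruction count */
        }
        return cpu_get_host_ticks();        /* cycle-ish fallback, as before */
    }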

Signed-off-by: Clément Léger <cleger@rivosinc.com>

Reviewed-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240618112649.76683-1-cleger@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:04:11 +10:00
Branislav Brzak
209b7c2935 target/riscv: Fix froundnx.h nanbox check
The helper_froundnx_h function mistakenly uses the single-precision nanbox
check instead of the half-precision one. This patch fixes the issue.

Signed-off-by: Branislav Brzak <brzakbranislav@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240608214546.226963-1-brzakbranislav@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 23:02:35 +10:00
Fea.Wang
3adf4def19 target/riscv: Support the version for ss1p13
Add RISC-V privilege 1.13 support.

Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liwei1518@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240606135454.119186-7-fea.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:59:25 +10:00
Fea.Wang
8392a7c148 target/riscv: Reserve exception codes for sw-check and hw-err
Based on the priv-1.13.0, add the exception codes for Software-check and
Hardware-error.

Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240606135454.119186-6-fea.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:57:49 +10:00
Fea.Wang
27796989ac target/riscv: Add MEDELEGH, HEDELEGH csrs for RV32
Based on privileged spec 1.13, RV32 needs to implement MEDELEGH and
HEDELEGH, with exception codes 32-47 reserved and exception codes 48-63
available for custom use. Add the CSR numbers, though the implementation
just reads zero and ignores writes. Besides, access to HEDELEGH should be
controlled by the mstateen0 'P1P13' bit.

Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240606135454.119186-5-fea.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:56:01 +10:00
Fea.Wang
7750e10656 target/riscv: Add 'P1P13' bit in SMSTATEEN0
Based on the privileged spec 1.13, there should be a bit 56 for 'P1P13' in
mstateen0 that controls access to the hedeleg.

Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liwei1518@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240606135454.119186-4-fea.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:54:13 +10:00
Fea.Wang
0c2d5f7396 target/riscv: Define macros and variables for ss1p13
Add macros and variables for RISC-V privilege 1.13 support.

Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liwei1518@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240606135454.119186-3-fea.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:52:24 +10:00
Jim Shu
a1a8e7768f target/riscv: Reuse the conversion function of priv_spec
Make the priv_spec conversion function public in cpu.h, so that tcg-cpu.c
can also use it.

Signed-off-by: Jim Shu <jim.shu@sifive.com>
Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240606135454.119186-2-fea.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:50:37 +10:00
Chao Du
b60520392d target/riscv/kvm: handle the exit with debug reason
If the breakpoint belongs to the userspace then set the ret value.

Signed-off-by: Chao Du <duchao@eswincomputing.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240606014501.20763-3-duchao@eswincomputing.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:46:48 +10:00
Chao Du
80b605056d target/riscv/kvm: add software breakpoints support
This patch implements insert/remove software breakpoint process.

For RISC-V, GDB treats single-step similarly to breakpoint: add a
breakpoint at the next step address, then continue. So this also
works for single-step debugging.

Implement kvm_arch_update_guest_debug(): Set the control flag
when there are active breakpoints. This will help KVM to know
the status in the userspace.

Add some stubs which are necessary for building, and will be
implemented later.

Signed-off-by: Chao Du <duchao@eswincomputing.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240606014501.20763-2-duchao@eswincomputing.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:45:14 +10:00
Jerry Zhang Jian
15b8ddb18a target/riscv: zvbb implies zvkb
According to the RISC-V crypto spec, the Zvkb extension is a
subset of the Zvbb extension [1].

1: 1769c2609b/doc/vector/riscv-crypto-vector-zvkb.adoc (L10)

Signed-off-by: Jerry Zhang Jian <jerry.zhangjian@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240528130349.20193-1-jerry.zhangjian@sifive.com>
[ Changes by AF:
 - Tidy up commit message
 - Rebase
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:30:52 +10:00
Rajnesh Kanwal
92c82a126e target/riscv: Move Guest irqs out of the core local irqs range.
QEMU maps IRQs 0:15 for core interrupts and 16 onward for
guest interrupts, which are later translated to hgiep in
the `riscv_cpu_set_irq()` function.

With virtual IRQ support added, software now can fully
use the whole local interrupt range without any actual
hardware attached.

This change moves the guest interrupt range after the
core local interrupt range to avoid a clash.

Fixes: 1697837ed9 ("target/riscv: Add M-mode virtual interrupt and IRQ filtering support.")
Fixes: 40336d5b1d ("target/riscv: Add HS-mode virtual interrupt and IRQ filtering support.")

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240520125157.311503-3-rkanwal@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:29:17 +10:00
Rajnesh Kanwal
87088fadb3 target/riscv: Extend virtual irq csrs masks to be 64 bit wide.
AIA extends the width of all IRQ CSRs to 64 bits, even
on 32-bit systems, by adding the missing half CSRs.

This seems to have been missed while adding support for
virtual IRQs. The whole logic seems to be correct
except for the width of the masks.

Fixes: 1697837ed9 ("target/riscv: Add M-mode virtual interrupt and IRQ filtering support.")
Fixes: 40336d5b1d ("target/riscv: Add HS-mode virtual interrupt and IRQ filtering support.")

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240520125157.311503-2-rkanwal@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-06-26 22:27:37 +10:00
Richard Henderson
e2bc7787c8 maintainer updates (plugins, gdbstub):
- add missing include guard comment to gdbstub.h
   - move gdbstub enums into separate header
   - move qtest_[get|set]_virtual_clock functions
   - allow plugins to manipulate the virtual clock
   - introduce an Instructions Per Second plugin
   - fix inject_mem_cb rw mask tests
   - allow qemu_plugin_vcpu_mem_cb to shortcut when no memory cbs
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAmZ5OjoACgkQ+9DbCVqe
 KkQPlwf/VK673BAjYktuCLnf3DgWvIkkiHWwzBREP5MmseUloLjK2CQPLY/xWZED
 pbA/1OSzHViD/mvG5wTxwef36b9PIleWj5/YwBxGlrb/rh6hCd9004pZK4EMI3qU
 53SK8Qron8TIXjey6XfmAY8rcl030GsHr0Zqf5i2pZKE5g0iaGlM3Cwkpo0SxQsu
 kMNqiSs9NzX7LxB+YeuAauIvC1YA2F/MGTXeFCTtO9Beyp5oV7oOI+2zIvLjlG5M
 Z5hKjG/STkNOteoIBGZpe1+QNpoGHSBoGE3nQnGpXb82iLx1KVBcKuQ6GoWGv1Wo
 hqiSh9kJX479l0mLML+IzaDsgSglbg==
 =pvWx
 -----END PGP SIGNATURE-----

Merge tag 'pull-maintainer-june24-240624-1' of https://gitlab.com/stsquad/qemu into staging

maintainer updates (plugins, gdbstub):

  - add missing include guard comment to gdbstub.h
  - move gdbstub enums into separate header
  - move qtest_[get|set]_virtual_clock functions
  - allow plugins to manipulate the virtual clock
  - introduce an Instructions Per Second plugin
  - fix inject_mem_cb rw mask tests
  - allow qemu_plugin_vcpu_mem_cb to shortcut when no memory cbs

# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAmZ5OjoACgkQ+9DbCVqe
# KkQPlwf/VK673BAjYktuCLnf3DgWvIkkiHWwzBREP5MmseUloLjK2CQPLY/xWZED
# pbA/1OSzHViD/mvG5wTxwef36b9PIleWj5/YwBxGlrb/rh6hCd9004pZK4EMI3qU
# 53SK8Qron8TIXjey6XfmAY8rcl030GsHr0Zqf5i2pZKE5g0iaGlM3Cwkpo0SxQsu
# kMNqiSs9NzX7LxB+YeuAauIvC1YA2F/MGTXeFCTtO9Beyp5oV7oOI+2zIvLjlG5M
# Z5hKjG/STkNOteoIBGZpe1+QNpoGHSBoGE3nQnGpXb82iLx1KVBcKuQ6GoWGv1Wo
# hqiSh9kJX479l0mLML+IzaDsgSglbg==
# =pvWx
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 24 Jun 2024 02:19:54 AM PDT
# gpg:                using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]

* tag 'pull-maintainer-june24-240624-1' of https://gitlab.com/stsquad/qemu:
  accel/tcg: Avoid unnecessary call overhead from qemu_plugin_vcpu_mem_cb
  plugins: fix inject_mem_cb rw masking
  contrib/plugins: add Instructions Per Second (IPS) example for cost modeling
  plugins: add migration blocker
  plugins: add time control API
  qtest: move qtest_{get, set}_virtual_clock to accel/qtest/qtest.c
  sysemu: generalise qtest_warp_clock as qemu_clock_advance_virtual_time
  qtest: use cpu interface in qtest_clock_warp
  sysemu: add set_virtual_time to accel ops
  plugins: Ensure register handles are not NULL
  gdbstub: move enums into separate header
  include/exec: add missing include guard comment

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-06-24 13:51:11 -07:00
Alex Bennée
5b7d54d4ed gdbstub: move enums into separate header
This is an experiment to further reduce the amount we throw into the
exec headers. It might not be as useful as I initially thought because
just under half of the users also need gdbserver_start().

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240620152220.2192768-3-alex.bennee@linaro.org>
2024-06-24 10:14:17 +01:00
Thomas Huth
d6a7c3f44c target/s390x: Add a CONFIG switch to disable legacy CPUs
The oldest model that IBM still supports is the z13. Considering
that each generation can "emulate" the previous two generations
in hardware (via the "IBC" feature of the CPUs), this means that
everything that is older than z114/196 is not an officially supported
CPU model anymore. The Linux kernel still supports the z10, so if
we also take this into account, everything older than that can
definitely be considered a legacy CPU model.

For downstream builds of QEMU, we would like to be able to disable
these legacy CPUs in the build. Thus add a CONFIG switch that can be
used to disable them (and old machine types that use them by default).

Message-Id: <20240614125019.588928-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-06-24 08:22:30 +02:00
Omar Sandoval
87c9d801a6 target/s390x/arch_dump: use correct byte order for pid
The pid field of prstatus needs to be big endian like all of the other
fields.
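
A minimal hedged sketch of the fix (structure and field names are
illustrative; cpu_to_be32() is the usual QEMU byte-swap helper):

    #include "qemu/bswap.h"

    static void note_init_prstatus_pid_sketch(uint32_t *prstatus_pid, uint32_t pid)
    {
        *prstatus_pid = cpu_to_be32(pid);   /* match the other big-endian fields */
    }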

Fixes: f738f296ea ("s390x/arch_dump: pass cpuid into notes sections")
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <5929f76d536d355afd04af51bf293695a1065118.1718771802.git.osandov@osandov.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-06-24 08:13:58 +02:00
Richard Henderson
02d9c38236 tcg/loongarch64: Support 64- and 256-bit vectors

Merge tag 'pull-tcg-20240619' of https://gitlab.com/rth7680/qemu into staging

tcg/loongarch64: Support 64- and 256-bit vectors
tcg/loongarch64: Fix tcg_out_movi vs some pcrel pointers
util/bufferiszero: Split out host include files
util/bufferiszero: Add loongarch64 vector acceleration
accel/tcg: Fix typo causing tb->page_addr[1] to not be recorded
target/sparc: use signed denominator in sdiv helper
linux-user: Make TARGET_NR_setgroups affect only the current thread

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmZzRoMdHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV9Y7gf/ZUTGjCUdAO7W7J5e
# Z3JLUNOfUHO6PxoE05963XJc+APwKiuL6Yo2bnJo6km7WM50CoaX9/7L9CXD7STg
# s3eUJ2p7FfvOADZgO373nqRrB/2mhvoywhDbVJBl+NcRvRUDW8rMqrlSKIAwDIsC
# kwwTWlCfpBSlUgm/c6yCVmt815+sGUPD2k/p+pIzAVUG6fGYAosC2fwPzPajiDGX
# Q+obV1fryKq2SRR2dMnhmPRtr3pQBBkISLuTX6xNM2+CYhYqhBrAlQaOEGhp7Dx3
# ucKjvQFpHgPOSdQxb/HaDv81A20ZUQaydiNNmuKQcTtMx3MsQFR8NyVjH7L+fbS8
# JokjaQ==
# =yVKz
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 19 Jun 2024 01:58:43 PM PDT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]

* tag 'pull-tcg-20240619' of https://gitlab.com/rth7680/qemu: (24 commits)
  tcg/loongarch64: Fix tcg_out_movi vs some pcrel pointers
  target/sparc: use signed denominator in sdiv helper
  linux-user: Make TARGET_NR_setgroups affect only the current thread
  accel/tcg: Fix typo causing tb->page_addr[1] to not be recorded
  util/bufferiszero: Add loongarch64 vector acceleration
  util/bufferiszero: Split out host include files
  tcg/loongarch64: Enable v256 with LASX
  tcg/loongarch64: Support LASX in tcg_out_vec_op
  tcg/loongarch64: Split out vdvjukN in tcg_out_vec_op
  tcg/loongarch64: Remove temp_vec from tcg_out_vec_op
  tcg/loongarch64: Support LASX in tcg_out_{mov,ld,st}
  tcg/loongarch64: Split out vdvjvk in tcg_out_vec_op
  tcg/loongarch64: Support LASX in tcg_out_addsub_vec
  tcg/loongarch64: Simplify tcg_out_addsub_vec
  tcg/loongarch64: Support LASX in tcg_out_dupi_vec
  tcg/loongarch64: Use tcg_out_dup_vec in tcg_out_dupi_vec
  tcg/loongarch64: Support LASX in tcg_out_dupm_vec
  tcg/loongarch64: Support LASX in tcg_out_dup_vec
  tcg/loongarch64: Simplify tcg_out_dup_vec
  util/loongarch64: Detect LASX vector support
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-06-19 14:00:39 -07:00
Clément Chigot
6b4965373e target/sparc: use signed denominator in sdiv helper
The result has to be computed with the signed denominator (b32) instead
of the unsigned value passed as the argument (b).
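
A standalone illustration of why the distinction matters (not the QEMU helper
itself): interpreting the same 32-bit denominator as unsigned or as signed
gives completely different quotients.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t b = (uint32_t)-2;     /* raw operand bits */
        int32_t b32 = (int32_t)b;      /* sign-extended view of the same bits */
        int64_t a = 10;

        printf("%lld\n", (long long)(a / b));    /* 10 / 4294967294 -> 0  */
        printf("%lld\n", (long long)(a / b32));  /* 10 / -2         -> -5 */
        return 0;
    }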

Cc: qemu-stable@nongnu.org
Fixes: 1326010322 ("target/sparc: Remove CC_OP_DIV")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2319
Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240606144331.698361-1-chigot@adacore.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-06-19 13:50:22 -07:00
Philippe Mathieu-Daudé
4860af2c4f target/s390x: Use s390_skeys_get|set() helper
Commit c9274b6bf0 ("target/s390x: start moving TCG-only code
to tcg/") moved mem_helper.c, but the trace-events file is
still in the parent directory, so is the generated trace.h.

Call the s390_skeys_get|set() helper, removing the need
for the trace event shared with the tcg/ sub-directory,
fixing the following build failure:

 In file included from ../target/s390x/tcg/mem_helper.c:33:
 ../target/s390x/tcg/trace.h:1:10: fatal error: 'trace/trace-target_s390x_tcg.h' file not found
 #include "trace/trace-target_s390x_tcg.h"

Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20240613104415.9643-3-philmd@linaro.org>
2024-06-19 12:42:03 +02:00
Philippe Mathieu-Daudé
8291239113 target/i386: Remove X86CPU::kvm_no_smi_migration field
X86CPU::kvm_no_smi_migration was only used by the
pc-i440fx-2.3 machine, which got removed. Remove it
and simplify kvm_put_vcpu_events().

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20240617071118.60464-23-philmd@linaro.org>
2024-06-19 12:40:49 +02:00
Philippe Mathieu-Daudé
63f16d97c6 target/i386/kvm: Remove x86_cpu_change_kvm_default() and 'kvm-cpu.h'
x86_cpu_change_kvm_default() was only used outside of kvm-cpu.c by
the pc-i440fx-2.1 machine, which got removed. Make it static,
and remove its declaration. "kvm-cpu.h" is now empty, so remove it.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20240617071118.60464-10-philmd@linaro.org>
2024-06-19 12:40:49 +02:00
Paolo Bonzini
109238a8d9 target/i386: SEV: do not assume machine->cgs is SEV
There can be other confidential computing classes that are not derived
from sev-common.  Avoid aborting when encountering them.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
0c4da54883 target/i386: convert CMPXCHG to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
7b1f25ac3a target/i386: convert XADD to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
11ffaf8c73 target/i386: convert LZCNT/TZCNT/BSF/BSR/POPCNT to new decoder
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
6476902740 target/i386: convert SHLD/SHRD to new decoder
Use the same flag generation code as SHL and SHR, but use
the existing gen_shiftd_rm_T1 function to compute the result
as well as CC_SRC.

Decoding-wise, SHLD/SHRD by immediate count as 4-operand
instructions because s->T0 and s->T1 actually occupy three op
slots.  The infrastructure used by opcodes in the 0F 3A table
works fine.
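
For reference, the operation itself (independent of how it is decoded) is a
double-precision shift; a minimal 32-bit sketch of SHLD semantics, not the
generator code:

    #include <stdint.h>
    #include <stdio.h>

    /* shift dst left by count, filling the vacated low bits from the high
     * bits of src (SHRD is the mirror image) */
    static uint32_t shld32(uint32_t dst, uint32_t src, unsigned count)
    {
        count &= 31;
        if (count == 0) {
            return dst;
        }
        return (dst << count) | (src >> (32 - count));
    }

    int main(void)
    {
        printf("%#x\n", shld32(0x12345678, 0x9abcdef0, 8));  /* 0x3456789a */
        return 0;
    }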

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
e4e5981daf target/i386: adapt gen_shift_count for SHLD/SHRD
SHLD/SHRD can have 3 register operands - s->T0, s->T1 and either
1 or CL - and therefore decode->op[2] is taken by the low part
of the register being shifted.  Pass X86_OP_* to gen_shift_count
from its current callers and hardcode cpu_regs[R_ECX] as the
shift count.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
87b2037b65 target/i386: pull load/writeback out of gen_shiftd_rm_T1
Use gen_ld_modrm/gen_st_modrm, moving them and gen_shift_flags to the
caller.  This way, gen_shiftd_rm_T1 becomes something that the new
decoder can call.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
ae541c0eb4 target/i386: convert non-grouped, helper-based 2-byte opcodes
These have very simple generators and no need for complex group
decoding.  Apart from LAR/LSL which are simplified to use
gen_op_deposit_reg_v and movcond, the code is generally lifted
from translate.c into the generators.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
556c4c5cc4 target/i386: split X86_CHECK_prot into PE and VM86 checks
SYSENTER is allowed in VM86 mode, but not in real mode.  Split the check
so that PE and !VM86 are covered by separate bits.
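
A minimal sketch of the two independent conditions (architectural bit
positions only; not the decoder's actual check macros):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool protected_mode(uint32_t cr0)   { return cr0 & 1u; }          /* CR0.PE    */
    static bool vm86_mode(uint32_t eflags)     { return eflags & (1u << 17); } /* EFLAGS.VM */

    int main(void)
    {
        /* SYSENTER: needs PE set, but VM86 does not forbid it */
        uint32_t cr0 = 0x1, eflags = 1u << 17;
        bool sysenter_ok = protected_mode(cr0);   /* VM86 state not consulted */
        printf("PE=%d VM=%d sysenter_ok=%d\n",
               protected_mode(cr0), vm86_mode(eflags), sysenter_ok);
        return 0;
    }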

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
ea89aa895e target/i386: finish converting 0F AE to the new decoder
This is already partly implemented due to VLDMXCSR and VSTMXCSR; finish
the job.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
10340080cd target/i386: fix bad sorting of entries in the 0F table
Aesthetic change only.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
e0448caebf target/i386: replace read_crN helper with read_cr8
All other control registers are stored plainly in CPUX86State.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:39 +02:00
Paolo Bonzini
a1af7fba5a target/i386: convert MOV from/to CR and DR to new decoder
Complete the implementation of the C and D operand types; the operations
are then just MOVs.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-17 09:47:35 +02:00
Paolo Bonzini
024538287e target/i386: fix processing of intercept 0 (read CR0)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:22 +02:00
Paolo Bonzini
c0df9563a3 target/i386: replace NoSeg special with NoLoadEA
This is a bit more generic, as it can be applied to MPX as well.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:22 +02:00
Paolo Bonzini
4e2dc59cf9 target/i386: change X86_ENTRYwr to use T0, use it for moves
Just like X86_ENTRYr, X86_ENTRYwr is easily changed to use only T0.
In this case, the motivation is to use it for the MOV instruction
family.  The case when you need to preserve the input value is the
odd one, as it is used basically only for BLS* instructions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:22 +02:00
Paolo Bonzini
c2b6b6a65a target/i386: change X86_ENTRYr to use T0
I am not sure why I made it use T1.  It is a bit more symmetric with
respect to X86_ENTRYwr (which uses T0 for the "w"ritten operand
and T1 for the "r"ead operand), but it is also less flexible because it
does not let you apply zextT0/sextT0.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:22 +02:00
Paolo Bonzini
e628387cf9 target/i386: put BLS* input in T1, use generic flag writeback
This makes for easier cpu_cc_* setup, and not using set_cc_op()
should come in handy if QEMU ever implements APX.
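
For reference, the BLS* operations themselves are single bit tricks on the
source operand; a standalone sketch (not the generator code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t x = 0x00b4;                 /* 0000 0000 1011 0100 */

        uint32_t blsr   = x & (x - 1);       /* reset lowest set bit   -> 0xb0 */
        uint32_t blsmsk = x ^ (x - 1);       /* mask up to lowest bit  -> 0x07 */
        uint32_t blsi   = x & (uint32_t)-x;  /* isolate lowest set bit -> 0x04 */

        printf("%#x %#x %#x\n", blsr, blsmsk, blsi);
        return 0;
    }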

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:22 +02:00
Paolo Bonzini
cc155f1971 target/i386: rewrite flags writeback for ADCX/ADOX
Avoid using set_cc_op() in preparation for implementing APX; treat
CC_OP_EFLAGS similar to the case where we have the "opposite" cc_op
(CC_OP_ADOX for ADCX and CC_OP_ADCX for ADOX), except the resulting
cc_op is not CC_OP_ADCOX. This is written easily as two "if"s, whose
conditions are both false for CC_OP_EFLAGS, both true for CC_OP_ADCOX,
and one each true for CC_OP_ADCX/ADOX.

The new logic also makes it easy to drop usage of tmp0.
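
As a reminder of the semantics being modelled (a sketch, not the TCG code):
ADCX is an unsigned add that reads and writes only CF, and ADOX is the same
addition using only OF.

    #include <stdint.h>
    #include <stdio.h>

    /* add with carry-in, producing a carry-out; only the carry flag is
     * involved (for ADOX, substitute the overflow flag) */
    static uint64_t adcx(uint64_t dst, uint64_t src, unsigned *cf)
    {
        uint64_t sum = dst + src + *cf;
        /* carry out iff the true sum does not fit in 64 bits */
        *cf = (sum < dst) || (*cf && sum == dst);
        return sum;
    }

    int main(void)
    {
        unsigned cf = 1;
        uint64_t r = adcx(UINT64_MAX, 0, &cf);
        printf("%llu cf=%u\n", (unsigned long long)r, cf);  /* 0 cf=1 */
        return 0;
    }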

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:22 +02:00
Paolo Bonzini
4228eb8cc6 target/i386: remove CPUX86State argument from generator functions
The CPUX86State argument would only be used to fetch bytes, but that has
to be done before the generator function is called.  So remove it, and
with it all temptation to use it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:22 +02:00
Pankaj Gupta
cd7093a7a1 i386/sev: Return when sev_common is null
Fixes Coverity CID 1546885.

Fixes: 16dcf200dc ("i386/sev: Introduce "sev-common" type to encapsulate common SEV state")
Signed-off-by: Pankaj Gupta <pankaj.gupta@amd.com>
Message-ID: <20240607183611.1111100-4-pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-11 14:29:14 +02:00