This fixes the clobbering of the entry->next pointer when
unmapping the first entry in a bucket of a mapcache.
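The fix follows the usual pattern for resetting a chained entry; a minimal sketch, assuming a singly linked MapCacheEntry bucket list (the reset logic shown is illustrative, not the exact patch):
```
/* Reset the first entry of a bucket without clobbering its link
 * to the rest of the chain. */
MapCacheEntry *next = entry->next;  /* save the chain pointer */
memset(entry, 0, sizeof(*entry));   /* wipe the unmapped entry */
entry->next = next;                 /* restore the chain */
```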
Fixes: 123acd816d ("xen: mapcache: Unmap first entries in buckets")
Reported-by: Anthony PERARD <anthony.perard@vates.tech>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Reviewed-by: Anthony PERARD <anthony.perard@vates.tech>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Bail out in qemu_ram_block_from_host() when
xen_ram_addr_from_mapcache() does not find an existing
mapping.
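A minimal sketch of the bail-out, assuming xen_ram_addr_from_mapcache() signals a missing mapping with RAM_ADDR_INVALID (the exact sentinel used is an assumption here):
```
ram_addr_t ram_addr = xen_ram_addr_from_mapcache(ptr);
if (ram_addr == RAM_ADDR_INVALID) {
    return NULL;  /* no existing mapping: bail out instead of asserting */
}
```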
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Add Edgar as Xen subsystem maintainer in QEMU. Edgar has been a QEMU
maintainer for years, and has already made key changes to one of the
most difficult areas of the Xen subsystem (the mapcache).
Edgar volunteered to help us maintain the Xen subsystem in QEMU and we
are very happy to welcome him to the team. His knowledge and expertise
with QEMU internals will be of great help.
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Anthony PERARD <anthony@xenproject.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The USART devices previously connected their outbound IRQs
directly to the CPU because the EXTI wasn't handling direct line
interrupts.
Now the USART connects to the EXTI inbound GPIOs, and the EXTI connects
its IRQs to the CPU.
The existing QTest for the USART (tests/qtest/stm32l4x5_usart-test.c)
checks that USART1_IRQ in the CPU is pending when expected, so it
confirms that the connection through the EXTI still works.
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240707085927.122867-4-ines.varhol@telecom-paris.fr
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The previous implementation for EXTI interrupts only handled
"configurable" interrupts, like those originating from STM32L4x5 SYSCFG
(until now the only device connected to the EXTI).
In order to connect STM32L4x5 USART to the EXTI, this commit adds
handling for direct interrupts (interrupts without configurable edge).
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Message-id: 20240707085927.122867-3-ines.varhol@telecom-paris.fr
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Up until now, the EXTI implementation had 16 inbound GPIOs connected to
the 16 outbound GPIOs of STM32L4x5 SYSCFG.
The EXTI actually handles 40 lines (among them 5 lines from the
STM32L4x5 USART devices which are already implemented in QEMU).
In order to connect USART devices to EXTI, this commit consolidates
constants `EXTI_NUM_INTERRUPT_OUT_LINES` (40) and
`EXTI_NUM_GPIO_EVENT_IN_LINES` (16) into `EXTI_NUM_LINES` (40).
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240707085927.122867-2-ines.varhol@telecom-paris.fr
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that all targets set TCGCPUOps::cpu_exec_halt, we can make it
mandatory and remove the fallback handling that calls cpu_has_work.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Currently the TCGCPUOps::cpu_exec_halt method is optional, and if it
is not set then the default is to call the CPUClass::has_work
method (which has an identical function signature).
We would like to make the cpu_exec_halt method mandatory so we can
remove the runtime check and fallback handling. In preparation for
that, make all the targets which don't need special handling in their
cpu_exec_halt set it to their cpu_has_work implementation instead of
leaving it unset. (This is every target except for arm and i386.)
In the riscv case this requires us to make the function no longer
local to the source file it's defined in.
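As a hedged sketch of the pattern for one target (hook names per TCGCPUOps; the riscv function name is an assumption):
```
static const TCGCPUOps riscv_tcg_ops = {
    /* ...other hooks unchanged... */
    .cpu_exec_halt = riscv_cpu_has_work,  /* reuse the has-work impl */
};
```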
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
In commit a96edb687e we set the cpu_exec_halt field of the
TCGCPUOps arm_tcg_ops to arm_cpu_exec_halt(), but we left the
arm_v7m_tcg_ops struct unchanged. That isn't wrong, because for
M-profile FEAT_WFxT doesn't exist and the default handling for "no
cpu_exec_halt method" is correct, but it's perhaps a little
confusing. We would also like to make setting the cpu_exec_halt
method mandatory.
Initialize arm_v7m_tcg_ops cpu_exec_halt to the same function we use
for A-profile. (On M-profile we never set up the wfxt timer so there
is no change in behaviour here.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
In a completely artificial memset benchmark, object_dynamic_cast_assert
dominates the profile, even above guest address resolution and
the underlying host memset.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240702154911.1667418-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The current implementation of bcm2835_thermal_ops sets
impl.max_access_size and valid.min_access_size to 4, but leaves
impl.min_access_size and valid.max_access_size unset, defaulting to 1.
This causes issues when the memory system is presented with an access
of size 2 at an offset of 3, leading to an attempt to synthesize it as
a pair of byte accesses at offsets 3 and 4, which trips an assert.
Additionally, the lack of valid.max_access_size setting causes another
issue: the memory system tries to synthesize a read using a 4-byte
access at offset 3 even though the device doesn't allow unaligned
accesses.
This patch addresses these issues by explicitly setting both
impl.min_access_size and valid.max_access_size to 4, ensuring proper
handling of access sizes.
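A minimal sketch of the resulting MemoryRegionOps, assuming the device implements only aligned 32-bit accesses (the endianness setting is illustrative):
```
static const MemoryRegionOps bcm2835_thermal_ops = {
    .read = bcm2835_thermal_read,
    .write = bcm2835_thermal_write,
    .impl.min_access_size = 4,   /* now set explicitly */
    .impl.max_access_size = 4,
    .valid.min_access_size = 4,
    .valid.max_access_size = 4,  /* now set explicitly */
    .endianness = DEVICE_NATIVE_ENDIAN,
};
```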
Error log:
ERROR:hw/misc/bcm2835_thermal.c:55:bcm2835_thermal_read: code should not be reached
Bail out! ERROR:hw/misc/bcm2835_thermal.c:55:bcm2835_thermal_read: code should not be reached
Aborted
Reproducer:
cat << EOF | qemu-system-aarch64 -display \
none -machine accel=qtest, -m 512M -machine raspi3b -m 1G -qtest stdio
readw 0x3f212003
EOF
Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Message-id: 20240702154042.3018932-1-zheyuma97@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In pl011_get_baudrate(), when we calculate the baudrate we can
accidentally divide by zero. This happens because although (as the
specification requires) we treat UARTIBRD = 0 as invalid, we aren't
correctly limiting UARTIBRD and UARTFBRD values to the 16-bit and 6-bit
ranges the hardware allows, and so some non-zero values of UARTIBRD can
result in a zero divisor.
Enforce the correct register field widths on guest writes and on inbound
migration to avoid the division by zero.
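A hedged sketch of the write-side clamping (register indices and field names are illustrative):
```
case 9:  /* UARTIBRD */
    s->ibrd = value & 0xffff;  /* integer divisor is only 16 bits */
    break;
case 10: /* UARTFBRD */
    s->fbrd = value & 0x3f;    /* fractional divisor is only 6 bits */
    break;
```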
ASAN log:
==2973125==ERROR: AddressSanitizer: FPE on unknown address 0x55f72629b348
(pc 0x55f72629b348 bp 0x7fffa24d0e00 sp 0x7fffa24d0d60 T0)
#0 0x55f72629b348 in pl011_get_baudrate hw/char/pl011.c:255:17
#1 0x55f726298d94 in pl011_trace_baudrate_change hw/char/pl011.c:260:33
#2 0x55f726296fc8 in pl011_write hw/char/pl011.c:378:9
Reproducer:
cat << EOF | qemu-system-aarch64 -display \
none -machine accel=qtest, -m 512M -machine realview-pb-a8 -qtest stdio
writeq 0x1000b024 0xf8000000
EOF
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240702155752.3022007-1-zheyuma97@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In order to allow FPCR bits that aren't in the FPSCR (like the new
bits that are defined for FEAT_AFP), we need to make sure that writes
to the FPSCR only write to the bits of FPCR that are architecturally
mapped, and not the others.
Implement this with a new function vfp_set_fpcr_masked() which
takes a mask of which bits to update.
(We could do the same for FPSR, but we leave that until we actually
are likely to need it.)
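A minimal sketch of the masked update (the real helper may apply side effects directly rather than going through vfp_set_fpcr()):
```
static void vfp_set_fpcr_masked(CPUARMState *env, uint32_t val,
                                uint32_t mask)
{
    /* Update only the requested FPCR bits, leaving the rest intact. */
    uint32_t fpcr = (vfp_get_fpcr(env) & ~mask) | (val & mask);
    vfp_set_fpcr(env, fpcr);
}
```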
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-10-peter.maydell@linaro.org
Now that we store FPSR and FPCR separately, the FPSR_MASK and
FPCR_MASK macros are slightly confusingly named and the comment
describing them is out of date. Rename them to FPSCR_FPSR_MASK and
FPSCR_FPCR_MASK, document that they are the mask of which FPSCR bits
are architecturally mapped to which AArch64 register, and define them
symbolically rather than as hex values. (This latter requires
defining some extra macros for bits which we haven't previously
defined.)
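A hedged sketch of the symbolic form (the individual bit macros shown are illustrative):
```
/* FPSCR bits that are architecturally mapped to the AArch64 FPSR */
#define FPSCR_FPSR_MASK (FPSR_IOC | FPSR_DZC | FPSR_OFC | FPSR_UFC | \
                         FPSR_IXC | FPSR_IDC | FPSR_QC | FPSR_NZCV_MASK)
```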
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-9-peter.maydell@linaro.org
The QC, N, Z, C, V bits live in the FPSR, not the FPCR. Rename the
macros that define these bits accordingly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-8-peter.maydell@linaro.org
Now that we have refactored the set/get functions so that the FPSCR
format is no longer the authoritative one, we can keep FPSR and FPCR
in separate CPU state fields.
As well as the get and set functions, we also have a scattering of
places in the code which directly access vfp.xregs[ARM_VFP_FPSCR] to
extract single fields which are stored there. These all change to
directly access either vfp.fpsr or vfp.fpcr, depending on the
location of the field. (Most commonly, this is the NZCV flags.)
We make the field in the CPU state struct 64 bits, because
architecturally FPSR and FPCR are 64 bits. However we leave the
types of the arguments and return values of the get/set functions as
32 bits, since we don't need to make that change with the current
architecture and various callsites would be unable to handle
set bits in the high half (for instance the gdbstub protocol
assumes they're only 32 bit registers).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-7-peter.maydell@linaro.org
We already have a load_cpu_field_low32() to load the low half of a
64-bit CPU struct field to a TCGv_i32; however we haven't yet needed
the store equivalent. We'll want that in the next patch, so
implement it.
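A minimal sketch of the store counterpart, assuming the same endianness-adjusted offset trick as the load helper (not the exact macro):
```
/* Store a TCGv_i32 into the low 32 bits of a 64-bit CPU state field. */
static void store_cpu_field_low32(TCGv_i32 var, int offset)
{
#if HOST_BIG_ENDIAN
    offset += 4;  /* the low half lives at the higher address */
#endif
    tcg_gen_st_i32(var, tcg_env, offset);
}
```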
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-6-peter.maydell@linaro.org
To support FPSR and FPCR bits that don't exist in the AArch32 FPSCR
view of floating point control and status (such as the FEAT_AFP ones),
we need to make sure those bits can be migrated. This commit allows
that, whilst maintaining backwards and forwards migration compatibility
for CPUs where there are no such bits:
On sending:
* If either the FPCR or the FPSR includes set bits that are not
visible in the AArch32 FPSCR view of floating point control/status
then we send the FPCR and FPSR as two separate fields in a new
cpu/vfp/fpcr_fpsr subsection, and we send a 0 for the old
FPSCR field in cpu/vfp
* Otherwise, we don't send the fpcr_fpsr subsection, and we send
an FPSCR-format value in cpu/vfp as we did previously
On receiving:
* if we see a non-zero FPSCR field, that is the right information
* if we see a fpcr_fpsr subsection then that has the information
* if we see neither, then FPSCR/FPCR/FPSR are all zero on the source;
cpu_pre_load() ensures the CPU state defaults to that
* if we see both, then the migration source is buggy or malicious;
either the fpcr_fpsr or the FPSCR will "win" depending which
is first in the migration stream; we don't care which that is
We make the new FPCR and FPSR on-the-wire data be 64 bits, because
architecturally these registers are that wide, and this avoids the
need to engage in further migration-compatibility contortions in
future if some new architecture revision defines bits in the high
half of either register.
(We won't ever send the new migration subsection until we add support
for a CPU feature which enables setting overlapping FPCR bits, like
FEAT_AFP.)
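A hedged sketch of the subsection mechanism described above (mask and field names are illustrative); the subsection is only put on the wire when its .needed callback returns true:
```
static bool fpcr_fpsr_needed(void *opaque)
{
    ARMCPU *cpu = opaque;
    /* Send the split form only if bits outside the FPSCR view are set. */
    return (vfp_get_fpcr(&cpu->env) & ~FPSCR_FPCR_MASK) ||
           (vfp_get_fpsr(&cpu->env) & ~FPSCR_FPSR_MASK);
}

static const VMStateDescription vmstate_fpcr_fpsr = {
    .name = "cpu/vfp/fpcr_fpsr",
    .version_id = 1,
    .minimum_version_id = 1,
    .needed = fpcr_fpsr_needed,
    .fields = (const VMStateField[]) {
        VMSTATE_UINT64(env.vfp.fpcr, ARMCPU),
        VMSTATE_UINT64(env.vfp.fpsr, ARMCPU),
        VMSTATE_END_OF_LIST()
    }
};
```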
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-5-peter.maydell@linaro.org
Make vfp_set_fpscr() call vfp_set_fpsr() and vfp_set_fpcr()
instead of the other way around.
The masking we do when getting and setting vfp.xregs[ARM_VFP_FPSCR]
is a little awkward, but we are going to change where we store the
underlying FPSR and FPCR information in a later commit, so it will
go away then.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-4-peter.maydell@linaro.org
In AArch32, the floating point control and status bits are all in a
single register, FPSCR. In AArch64, these were split into separate
FPCR and FPSR registers, but the bit layouts remained the same, with
no overlaps, so that you could construct an FPSCR value by ORing FPCR
and FPSR, or equivalently could produce FPSR and FPCR by masking an
FPSCR value. For QEMU's implementation, we opted to use masking to
produce FPSR and FPCR, because we started with an AArch32
implementation of FPSCR.
The addition of the (AArch64-only) FEAT_AFP adds new bits to the FPCR
which overlap with some bits in the FPSR. This means we'll no longer
be able to consider the FPSCR-encoded value as the primary one, but
instead need to treat FPSR/FPCR as the primary encoding and construct
the FPSCR from those. (This remains possible because the FEAT_AFP
bits in FPCR don't appear in the FPSCR.)
As the first step in this refactoring, make vfp_get_fpscr() call
vfp_get_fpcr() and vfp_get_fpsr(), instead of the other way around.
Note that vfp_get_fpscr_from_host() returns only bits in the FPSR
(for the cumulative fp exception bits), so we can simply rename
it without needing to add a new function for getting FPCR bits.
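After the refactor the composition runs in the new direction; a minimal sketch (using the FPSCR_*_MASK names introduced elsewhere in this series):
```
uint32_t vfp_get_fpscr(CPUARMState *env)
{
    /* FPSR and FPCR are primary; FPSCR is derived from them. */
    return (vfp_get_fpcr(env) & FPSCR_FPCR_MASK) |
           (vfp_get_fpsr(env) & FPSCR_FPSR_MASK);
}
```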
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-3-peter.maydell@linaro.org
The M-profile FPSCR LTPSIZE is bits [18:16]; this is the same
field as A-profile FPSCR Len, not Stride. Correct the comment
in vfp_get_fpscr().
We also implemented M-profile FPSCR.QC, but forgot to delete
a TODO comment from vfp_set_fpscr(); remove it now.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-2-peter.maydell@linaro.org
When opening an image with discard=off, we punch holes in the image when
writing zeroes, making the image sparse. This breaks users that want to
ensure that writes cannot fail with ENOSPC by using fully allocated
images[1].
bdrv_co_pwrite_zeroes() correctly disables BDRV_REQ_MAY_UNMAP if we
opened the child without discard=unmap or discard=on. But we don't go
through this function when accessing the top node. Move the check down
to bdrv_co_do_pwrite_zeroes() which seems to be used in all code paths.
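A hedged sketch of the check at its new location in bdrv_co_do_pwrite_zeroes() (the exact condition is an assumption):
```
/* Only punch holes if the node was opened with discard=unmap/on. */
if (!(bs->open_flags & BDRV_O_UNMAP)) {
    flags &= ~BDRV_REQ_MAY_UNMAP;
}
```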
This change implements the documented behavior, punching holes only when
opening the image with discard=on or discard=unmap. This may not be the
best default, but we can improve it later.
The test depends on a file system that supports discard and deallocates
the entire file when punching a hole spanning the whole file.
Tested with xfs, ext4, and tmpfs.
[1] https://lists.nongnu.org/archive/html/qemu-discuss/2024-06/msg00003.html
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20240628202058.1964986-3-nsoffer@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The test currently works only because we punch holes by default even
when opening the image without discard=on or discard=unmap. Fix the
test to enable discard explicitly.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The error message is expressive enough when considering QEMU alone. But
when called from Libvirt, talking about "size" can be confusing, because
in Libvirt "size" translates to the memory backend size in QEMU (the
maximum size) and "current" translates to the QEMU "size" property.
Let's simply avoid talking about the "size" property and spell out that
some device memory is still plugged.
Message-ID: <20240416141426.588544-1-david@redhat.com>
Tested-by: Mario Casquero <mcasquer@redhat.com>
Cc: Liang Cong <lcong@redhat.com>
Cc: Mario Casquero <mcasquer@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Merge tag 'pull-aspeed-20240709' of https://github.com/legoater/qemu into staging
aspeed queue:
* support AST2700 network
* tag 'pull-aspeed-20240709' of https://github.com/legoater/qemu:
machine_aspeed.py: update to test network for AST2700
machine_aspeed.py: update to test ASPEED OpenBMC SDK v09.02 for AST2700
hw/block: m25p80: support quad mode for w25q01jvq
aspeed/soc: set dma64 property for AST2700 ftgmac100
hw/net:ftgmac100: update TX and RX packet buffers address to 64 bits
hw/net:ftgmac100: introduce TX and RX ring base address high registers to support 64 bits
hw/net:ftgmac100: update ring base address to 64 bits
hw/net:ftgmac100: update memory region size to 64KB
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
vfio_display_edid_init() can fail for many reasons and return silently.
It would be good to report the error.
An old mdev driver may not support the vfio edid region, so we let that
case go through without reporting an error.
vfio_display_edid_update() isn't changed because it can be called at
runtime when the UI changes (e.g. a window resize).
Fixes: 08479114b0 ("vfio/display: add edid support.")
Suggested-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The EDID-related device region info is leaked in the
vfio_display_edid_init() error path and in the VFIODisplay destroy path.
Fixes: 08479114b0 ("vfio/display: add edid support.")
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
In 94df5b2180 ("virtio-iommu: Fix 64kB host page size VFIO device
assignment"), in case of bypass mode, we transiently enabled the
IOMMU MR to allow the set_page_size_mask() to be called and pass
information about the page size mask constraint of cold plugged
VFIO devices. Now that we do not use the IOMMU MR callback anymore, we
can just get rid of this hack.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Everything is now in place to use the Host IOMMU Device callbacks
to retrieve the page size mask usable with a given assigned device.
This new method brings the advantage of passing the info to the virtual
IOMMU much earlier, before the IOMMU MR gets enabled. So let's remove
the call to memory_region_iommu_set_page_size_mask() in vfio/common.c
and remove the single implementation of the IOMMU MR callback in
virtio-iommu.c.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Retrieve the Host IOMMU Device page size mask when the latter is set.
This allows getting the information much sooner than relying on the
IOMMU MR set_page_size_mask() call, which happens when the IOMMU MR
gets enabled. We introduce a check_page_size_mask() helper whose code
is inherited from the current virtio_iommu_set_page_size_mask()
implementation. This callback will be removed in a subsequent patch.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
This callback will be used to retrieve the page size mask supported
by a given Host IOMMU device.
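A minimal sketch of the new class hook (its placement in HostIOMMUDeviceClass is assumed from context):
```
/* Return the page size mask supported by the host IOMMU device. */
uint64_t (*get_page_size_mask)(HostIOMMUDevice *hiod);
```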
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
The error handle argument is not used anywhere. Let's remove it.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Introduce vfio_container_get_iova_ranges() to retrieve the usable
IOVA regions of the base container and use it in the Host IOMMU
device implementations of the get_iova_ranges() callback.
We also fix a use-after-free bug: the list was shallow-copied while
g_list_free_full() was used both at the single call site, in
virtio_iommu_set_iommu_device(), and in
vfio_container_instance_finalize(). Use g_list_copy_deep() instead.
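A hedged sketch of the deep copy, assuming the list holds Range elements (the copy function is illustrative):
```
/* Give each consumer its own deep copy, so g_list_free_full() on one
 * copy cannot free elements still referenced by another. */
static gpointer copy_iova_range(gconstpointer src, gpointer user_data)
{
    return g_memdup2(src, sizeof(Range));
}

GList *ranges = g_list_copy_deep(iova_ranges, copy_iova_range, NULL);
```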
Fixes: cf2647a76e ("virtio-iommu: Compute host reserved regions")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Suggested-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
In case no IOMMUPciBus/IOMMUDevice is found, we need to properly
set the error handle and return.
Fixes: Coverity CID 1549006
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Fixes: cf2647a76e ("virtio-iommu: Compute host reserved regions")
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Update test case to test network connection via SSH.
Test command:
```
cd build
pyvenv/bin/avocado run ../qemu/tests/avocado/machine_aspeed.py:AST2x00MachineSDK.test_aarch64_ast2700_evb_sdk_v09_02
```
Signed-off-by: Jamin Lin <jamin_lin@aspeedtech.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Update the test case to test the ASPEED OpenBMC SDK v09.02 for AST2700.
ASPEED fixed a TX mask issue in linux/drivers/ftgmac100.c, so the
ASPEED OpenBMC SDK v09.02 or later is required for AST2700 QEMU
network testing.
A test image is downloaded from the ASPEED forked OpenBMC GitHub
release repository:
https://github.com/AspeedTech-BMC/openbmc/releases/
Signed-off-by: Jamin Lin <jamin_lin@aspeedtech.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
According to the w25q01jv datasheet (page 16), the QE bit in
"Status Register 2" must be set to enable quad mode. Users can use the
"Write Status Register 1 (0x01)" command to set the QE bit in
"Status Register 2" and the "Read Status Register 2 (0x35)" command to
read back the QE bit status.
To support quad mode for w25q01jvq, update decode_new_cmd() to collect
two bytes of data for the WRSR command, and verify the QE bit (bit 2 of
the second collected byte) in complete_collecting_data().
Also update the RDCR_EQIO command in decode_new_cmd() to set bit 2 of
the returned data if quad mode is enabled.
Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
Signed-off-by: Jamin Lin <jamin_lin@aspeedtech.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
The ASPEED AST2700 SoC is a 64-bit quad-core SoC (Cortex-A35), and the
base address of its DRAM is "0x4 00000000", a 64-bit address.
Set the dma64 property on the ftgmac100 model to support DMA to 64-bit
DRAM addresses.
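A minimal sketch of wiring the property in the SoC realize code (the object path is illustrative):
```
/* Enable 64-bit DMA addressing on the ftgmac100 NIC model. */
object_property_set_bool(OBJECT(&s->ftgmac100[i]), "dma64", true,
                         &error_abort);
```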
Signed-off-by: Jamin Lin <jamin_lin@aspeedtech.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
The ASPEED AST2700 SoC is a 64-bit quad-core SoC (Cortex-A35), and the
base address of its DRAM is "0x4 00000000", a 64-bit address.
The controller has "TXDES 2" and "RXDES 2" descriptor words to hold the
high part of the physical address of the packet buffer.
For example, with a TX packet buffer address [34:0], "TXDES 2" bits
[18:16] correspond to bits [34:32] of the 64-bit TX packet buffer
address, and "TXDES 3" bits [31:0] correspond to bits [31:0].
Update the TX and RX packet buffer address types to 64 bits to support
DMA to 64-bit DRAM addresses.
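A hedged sketch of assembling the 64-bit buffer address from the two descriptor words, using QEMU's bitops helpers (variable names are illustrative):
```
/* bits [31:0] from TXDES 3, bits [34:32] from TXDES 2 bits [18:16] */
uint64_t buf_addr = deposit64(txdes3, 32, 3, extract32(txdes2, 16, 3));
```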
Signed-off-by: Jamin Lin <jamin_lin@aspeedtech.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
The ASPEED AST2700 SoC is a 64-bit quad-core SoC (Cortex-A35), and the
base address of its DRAM is "0x4 00000000", a 64-bit address.
The controller has a "Normal Priority Transmit Ring Base Address
Register High (0x17C)", a "High Priority Transmit Ring Base Address
Register High (0x184)" and a "Receive Ring Base Address Register High
(0x18C)" to hold the high part of the physical address of the
descriptor rings.
For example, with a TX descriptor ring address [34:0], the "Normal
Priority Transmit Ring Base Address Register High (0x17C)" bits [2:0]
correspond to bits [34:32] of the 64-bit TX ring buffer address, and
the "Normal Priority Transmit Ring Base Address Register (0x20)" bits
[31:0] correspond to bits [31:0].
Introduce a new sub-region of size 0x100 for the set of new registers
and map it at offset 0x100 in the container region, so it covers the
range 0x100 to 0x1ff. Introduce a new property and object attribute to
activate the region for the new registers, and new memop handlers for
reading and writing them.
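A minimal sketch of the container layout (region and handler names are illustrative):
```
/* The new registers live in a 0x100-sized sub-region at offset 0x100. */
memory_region_init_io(&s->iomem_high, OBJECT(s), &ftgmac100_high_ops,
                      s, "ftgmac100-high", 0x100);
memory_region_add_subregion(&s->iomem_container, 0x100, &s->iomem_high);
```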
Signed-off-by: Jamin Lin <jamin_lin@aspeedtech.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>