Commit Graph

74 Commits

Mostafa Saleh
48f9e9eb29 hw/arm/smmu: Fix IPA for stage-2 events
For the following events (ARM IHI 0070 F.b - 7.3 Event records):
- F_TRANSLATION
- F_ACCESS
- F_PERMISSION
- F_ADDR_SIZE

If fault occurs at stage 2, S2 == 1 and:
  - If translating an IPA for a transaction (whether by input to
    stage 2-only configuration, or after successful stage 1 translation),
    CLASS == IN, and IPA is provided.

At the moment only CLASS == IN is used which indicates input
translation.

However, this was not implemented correctly: for stage 2, the code
only sets the S2 bit but not the IPA.

This field has the same bits as FetchAddr in F_WALK_EABT, which is
populated correctly, so we don't change that.
The setting of this field should be done from the walker, as the IPA
wouldn't be known in the case of nesting.

For stage 1, the spec says:
  If fault occurs at stage 1, S2 == 0 and:
  CLASS == IN, IPA is UNKNOWN.

So there is no need to set it for stage 1, as ptw_info is zero-initialised in
smmuv3_translate().
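
As a rough sketch of where this lands (field and constant names below are
illustrative, not necessarily the exact QEMU identifiers), the stage-2 walker
records the failing IPA in the same event-info field used for FetchAddr:

    /* Illustrative sketch: on a stage-2 fault, record the failing IPA so
     * that the event record can report it with S2 == 1 and CLASS == IN. */
    info->stage = SMMU_STAGE_2;
    info->addr = ipa;          /* shares the bits used for FetchAddr */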

Fixes: e703f7076a ("hw/arm/smmuv3: Add page table walk for stage-2")
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Message-id: 20240715084519.1189624-3-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18 13:49:29 +01:00
Nicolin Chen
69970205ed hw/arm/smmu-common: Replace smmu_iommu_mr with smmu_find_sdev
The caller of smmu_iommu_mr wants to get sdev for smmuv3_flush_config().

Do it directly instead of bridging with an iommu mr pointer.

Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Message-id: 20240619002218.926674-1-nicolinc@nvidia.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01 12:48:55 +01:00
Peter Maydell
ad80e36744 hw, target: Add ResetType argument to hold and exit phase methods
We pass a ResetType argument to the Resettable class enter
phase method, but we don't pass it to hold and exit, even though
the callsites have it readily available. This means that if
a device cared about the ResetType it would need to record it
in the enter phase method to use later on. Pass the type to
all three of the phase methods to avoid having to do that.

Commit created with

  for dir in hw target include; do \
      spatch --macro-file scripts/cocci-macro-file.h \
             --sp-file scripts/coccinelle/reset-type.cocci \
             --keep-comments --smpl-spacing --in-place \
             --include-headers --dir $dir; done

and no manual edits.
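
For illustration, a hold phase method now has the shape sketched below (the
device, state type and helper names are made up, not taken from any
particular QEMU device):

    static void mydev_reset_hold(Object *obj, ResetType type)
    {
        MyDevState *s = MYDEV(obj);

        /* The ResetType is now available directly instead of having to be
         * recorded by the enter phase method for later use. */
        mydev_clear_state(s, type);
    }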

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20240412160809.1260625-5-peter.maydell@linaro.org
2024-04-25 10:21:06 +01:00
Luc Michel
15f6c16e6e hw/arm/smmuv3: add support for stage 1 access fault
An access fault is raised when the Access Flag is not set in the
looked-up PTE and the AFFD field is not set in the corresponding context
descriptor. This was already implemented for stage 2. Implement it for
stage 1 as well.
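
A minimal sketch of the check this implies for the stage-1 walker (macro,
field and error names are illustrative, not necessarily the exact QEMU
identifiers):

    /* Raise an access fault if the PTE's Access Flag is clear and the
     * context descriptor did not set AFFD (Access Flag Fault Disable). */
    if (!PTE_AF(pte) && !cfg->affd) {
        info->type = SMMU_PTW_ERR_ACCESS;
        info->addr = iova;
        return -EACCES;
    }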

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Mostafa Saleh <smostafa@google.com>
Message-id: 20240213082211.3330400-1-luc.michel@amd.com
[PMM: tweaked comment text]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-15 13:38:11 +00:00
Zhao Liu
9953bf34ee hw/arm/smmuv3: Consolidate the use of device_class_set_parent_realize()
Use device_class_set_parent_realize() to set parent realize() directly.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-02-13 10:59:25 +03:00
Richard Henderson
607ef5706c hw/arm: Constify VMState
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-19-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Peter Maydell
4cdd146d8b hw/arm/smmuv3: Advertise SMMUv3.1-XNX feature
The SMMUv3.1-XNX feature is mandatory for an SMMUv3.1 if S2P is
supported, so we should theoretically have implemented it as part of
the recent S2P work.  Fortunately, for us the implementation is a
no-op.

This feature is about interpretation of the stage 2 page table
descriptor XN bits, which control execute permissions.

For QEMU, the permission bits passed to an IOMMU (via MemTxAttrs and
IOMMUAccessFlags) only indicate read and write; we do not distinguish
data reads from instruction reads outside the CPU proper.  In the
SMMU architecture's terms, our interconnect between the client device
and the SMMU doesn't have the ability to convey the INST attribute,
and we therefore use the default value of "data" for this attribute.

We also do not support the bits in the Stream Table Entry that can
override the on-the-bus transaction attribute permissions (we do not
set SMMU_IDR1.ATTR_PERMS_OVR=1).

These two things together mean that for our implementation, it never
has to deal with transactions with the INST attribute, and so it can
correctly ignore the XN bits entirely.  So we already implement
FEAT_XNX's "XN field is now 2 bits, not 1" behaviour to the extent
that we need to.

Advertise the presence of the feature in SMMU_IDR3.XNX.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230914145705.1648377-4-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Peter Maydell
27fd85d35b hw/arm/smmuv3: Sort ID register setting into field order
In smmuv3_init_regs() when we set the various bits in the ID
registers, we do this almost in order of the fields in the
registers, but not quite. Move the initialization of
SMMU_IDR3.RIL and SMMU_IDR5.OAS into their correct places.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230914145705.1648377-3-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Peter Maydell
9e2135ee93 hw/arm/smmuv3.c: Avoid shadowing variable
Avoid shadowing a variable in smmuv3_notify_iova():

../../hw/arm/smmuv3.c: In function ‘smmuv3_notify_iova’:
../../hw/arm/smmuv3.c:1043:23: warning: declaration of ‘event’ shadows a previous local [-Wshadow=local]
 1043 |         SMMUEventInfo event = {.inval_ste_allowed = true};
      |                       ^~~~~
../../hw/arm/smmuv3.c:1038:19: note: shadowed declaration is here
 1038 |     IOMMUTLBEvent event;
      |                   ^~~~~

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20230922152944.3583438-4-peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2023-09-29 10:07:19 +02:00
Peter Maydell
c6445544d4 hw/arm/smmu: Handle big-endian hosts correctly
The implementation of the SMMUv3 has multiple places where it reads a
data structure from the guest and directly operates on it without
doing a guest-to-host endianness conversion.  Since all SMMU data
structures are little-endian, this means that the SMMU doesn't work
on a big-endian host.  In particular, this causes the Avocado test
  machine_aarch64_virt.py:Aarch64VirtMachine.test_alpine_virt_tcg_gic_max
to fail on an s390x host.

Add appropriate byte-swapping on reads and writes of guest in-memory
data structures so that the device works correctly on big-endian
hosts.

As part of this we constrain queue_read() to operate only on Cmd
structs and queue_write() on Evt structs, because in practice these
are the only data structures the two functions are used with, and we
need to know what the data structure is to be able to byte-swap its
parts correctly.
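
A minimal sketch of the kind of conversion this implies, assuming the
structure is laid out as an array of 32-bit little-endian words (the helper
name is hypothetical):

    /* Convert a structure read from guest memory (always little-endian per
     * the SMMU architecture) to host byte order, one 32-bit word at a time. */
    static void smmu_words_le_to_host(uint32_t *word, size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++) {
            word[i] = le32_to_cpu(word[i]);
        }
    }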

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230717132641.764660-1-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
2023-07-25 10:56:51 +01:00
Mostafa Saleh
8cefcc3b71 hw/arm/smmuv3: Add knob to choose translation stage and enable stage-2
As everything is in place, we can use a new system property to
advertise which stage is supported and remove bad_ste from the STE
stage-2 config.

The added property arm-smmuv3.stage can have two values:
- "1": Stage-1 only is advertised.
- "2": Stage-2 only is advertised.

If not passed or an unsupported value is passed, it will default to
stage-1.

Advertise VMID16.

Don't try to decode the CD if stage-2 is configured.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-11-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30 15:50:16 +01:00
Mostafa Saleh
32bd7baec2 hw/arm/smmuv3: Add stage-2 support in iova notifier
In smmuv3_notify_iova, read the granule based on the translation stage
and use the VMID if a valid value is set.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-10-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30 15:50:16 +01:00
Mostafa Saleh
ccc3ee3871 hw/arm/smmuv3: Add CMDs related to stage-2
CMD_TLBI_S2_IPA: As S1+S2 is not enabled, for now this can be the
same as CMD_TLBI_NH_VAA.

CMD_TLBI_S12_VMALL: Added new function to invalidate TLB by VMID.

For stage-1 only commands, add a check to throw CERROR_ILL if used
when stage-1 is not supported.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-9-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30 15:50:16 +01:00
Mostafa Saleh
2eaeb7d593 hw/arm/smmuv3: Add VMID to TLB tagging
Allow TLB to be tagged with VMID.

If only stage-1 is supported, VMID is set to -1 and is ignored in the STE
and the CMD_TLBI_NH* commands.

Update smmu_iotlb_insert trace event to have vmid.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-8-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30 15:50:16 +01:00
Mostafa Saleh
cd617556ad hw/arm/smmuv3: Make TLB lookup work for stage-2
Right now either stage-1 or stage-2 is supported, which simplifies
how we deal with the TLB.
This patch makes TLB lookup work when stage-2 is enabled instead of
stage-1.
TLB lookup is done before a PTW; if a valid entry is found we won't
do the PTW.
To be able to do a TLB lookup, we need the correct tagging info, such
as granularity and input size, so we get this based on the supported
translation stage. The TLB entries are added correctly from each
stage's PTW.

When nested translation is supported this will need to change; for
example, if we go with a combined TLB implementation, we would need to
use the minimum of the two granularities in the TLB.

As stage-2 shouldn't be tagged by ASID, it will be set to -1 if S1P
is not enabled.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-7-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30 15:50:16 +01:00
Mostafa Saleh
21eb5b5cde hw/arm/smmuv3: Parse STE config for stage-2
Parse stage-2 configuration from STE and populate it in SMMUS2Cfg.
The validity of field values is checked when possible.

Only AA64 tables are supported and Small Translation Tables (STT) are
not supported.

According to the SMMUv3 spec (ARM IHI 0070E), "5.2 Stream Table Entry": all
fields with an S2 prefix (with the exception of S2VMID) are IGNORED when
stage-2 bypasses translation (Config[1] == 0).

This means that the VMID can be used (for TLB tagging) even if stage-2 is
bypassed, so we parse it unconditionally when S2P exists; otherwise
(S1P only) it is set to -1.

As stall is not supported, if S2S is set the translation aborts.
For S2R, we reuse the same code used for stage-1 with the
record_faults flag. However, when nested translation is supported we
will need to separate stage-1 and stage-2 faults.

Fix wrong shift in STE_S2HD, STE_S2HA, STE_S2S.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230516203327.2051088-6-smostafa@google.com
[PMM: fixed format string]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30 13:02:53 +01:00
Mostafa Saleh
bcc919e756 hw/arm/smmuv3: Refactor stage-1 PTW
In preparation for adding stage-2 support, rename smmu_ptw_64 to
smmu_ptw_64_s1 and refactor some of the code so it can be reused in
stage-2 page table walk.

Remove AA64 check from PTW as decode_cd already ensures that AA64 is
used, otherwise it faults with C_BAD_CD.

A stage member is added to SMMUPTWEventInfo to differentiate
between stage-1 and stage-2 ptw faults.

Add a stage argument to trace_smmu_ptw_level to be consistent with other
trace events.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-4-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30 13:02:53 +01:00
Mostafa Saleh
c2ecb424fb hw/arm/smmuv3: Add GBPA register
The GBPA register can be used to globally abort all
transactions.

It is described in the SMMU manual in "6.3.14 SMMU_GBPA".
The ABORT reset value is IMPLEMENTATION DEFINED; it is chosen to
be zero (do not abort incoming transactions).

Other fields have default values of Use Incoming.

If UPDATE is not set, the write is ignored. This is the only permitted
behavior in SMMUv3.2 and later (6.3.14.1 Update procedure).

As this patch adds new state to the SMMU (GBPA), it is migrated
in a new subsection for forward migration compatibility.
GBPA is only migrated if its value differs from the reset value,
which keeps backward migration compatibility if software never wrote
the register.
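
A sketch of what such a forward-compatible subsection looks like with QEMU's
VMState machinery (field and constant names are approximate, not necessarily
the literal ones used):

    static bool smmuv3_gbpa_needed(void *opaque)
    {
        SMMUv3State *s = opaque;

        /* Only migrate GBPA if it differs from its reset value. */
        return s->gbpa != SMMU_GBPA_RESET_VAL;
    }

    static const VMStateDescription vmstate_gbpa = {
        .name = "smmuv3/gbpa",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = smmuv3_gbpa_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(gbpa, SMMUv3State),
            VMSTATE_END_OF_LIST()
        }
    };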

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230214094009.2445653-1-smostafa@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
Peter Maydell
503819a347 hw/arm: Convert TYPE_ARM_SMMUV3 to 3-phase reset
Convert the TYPE_ARM_SMMUV3 device to 3-phase reset.  The legacy
reset method doesn't do anything that's invalid in the hold phase, so
the conversion only requires changing it to a hold phase method, and
using the 3-phase versions of the "save the parent reset method and
chain to it" code.
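
In outline, the conversion follows the usual 3-phase pattern sketched here
(class and helper names approximate):

    static void smmu_reset_hold(Object *obj)
    {
        SMMUv3State *s = ARM_SMMUV3(obj);
        SMMUv3Class *c = ARM_SMMUV3_GET_CLASS(s);

        /* chain to the parent class's hold phase, then reset our own state */
        if (c->parent_phases.hold) {
            c->parent_phases.hold(obj);
        }
        smmuv3_init_regs(s);
    }

    /* in class_init: save the parent phases and install ours */
    resettable_class_set_parent_phases(rc, NULL, smmu_reset_hold, NULL,
                                       &c->parent_phases);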

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221109161444.3397405-3-peter.maydell@linaro.org
2022-12-15 11:18:20 +00:00
Peter Maydell
f8e7163d9e hw/arm/smmuv3: Advertise support for SMMUv3.2-BBML2
The Arm SMMUv3 includes an optional feature equivalent to the CPU
FEAT_BBM, which permits an OS to switch a range of memory between
"covered by a huge page" and "covered by a sequence of normal pages"
without having to engage in the traditional 'break-before-make'
dance. (This is particularly important for the SMMU, because devices
performing I/O through an SMMU are less likely to be able to cope with
the window in the sequence where an access results in a translation
fault.)  The SMMU spec explicitly notes that one of the valid ways to
be a BBM level 2 compliant implementation is:
 * if there are multiple entries in the TLB for an address,
   choose one of them and use it, ignoring the others

Our SMMU TLB implementation (unlike our CPU TLB) does allow multiple
TLB entries for an address, because the translation table level is
part of the SMMUIOTLBKey, and so our IOTLB hashtable can include
entries for the same address where the leaf was at different levels
(i.e. both hugepage and normal page). Our TLB lookup implementation in
smmu_iotlb_lookup() will always find the entry with the lowest level
(i.e. it prefers the hugepage over the normal page) and ignore any
others. TLB invalidation correctly removes all TLB entries matching
the specified address or address range (unless the guest specifies the
leaf level explicitly, in which case it gets what it asked for). So we
can validly advertise support for BBML level 2.

Note that we still can't yet advertise ourselves as an SMMU v3.2,
because v3.2 requires support for the S2FWB feature, which we don't
yet implement.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20220426160422.2353158-4-peter.maydell@linaro.org
2022-04-28 13:59:23 +01:00
Jean-Philippe Brucker
264a3b2eba hw/arm/smmuv3: Add space in guest error message
Make the translation error message prettier by adding a missing space
before the parenthesis.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20220427111543.124620-2-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-28 13:57:33 +01:00
Jean-Philippe Brucker
ced716942a hw/arm/smmuv3: Cache event fault record
The Record bit in the Context Descriptor tells the SMMU to report fault
events to the event queue. Since we don't cache the Record bit at the
moment, access faults from a cached Context Descriptor are never
reported. Store the Record bit in the cached SMMUTransCfg.
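
In essence this amounts to one extra assignment when the CD is decoded (the
accessor macro name is illustrative):

    /* remember whether the guest asked for fault recording (the CD.R bit) */
    cfg->record_faults = CD_R(cd);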

Fixes: 9bde7f0674 ("hw/arm/smmuv3: Implement translate callback")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20220427111543.124620-1-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-28 13:57:33 +01:00
Xiang Chen
c3ca7d56c4 hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate()
memory_region_iommu_replay() always calls the IOMMU MR translate() callback
with flag=IOMMU_NONE. Currently, smmuv3_translate() returns an IOMMUTLBEntry
with perm set to IOMMU_NONE even if the translation succeeds, whereas it is
expected to return the actual permissions set in the table entry.
So pass the actual permissions from the table entry to the returned
IOMMUTLBEntry.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1650094695-121918-1-git-send-email-chenxiang66@hisilicon.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-22 14:44:55 +01:00
Eric Auger
43530095e1 hw/arm/smmuv3: Fix device reset
We currently miss a bunch of register resets in the device reset
function. This sometimes prevents the guest from rebooting after
a system_reset (with virtio-blk-pci). For instance, we may get
the following errors:

invalid STE
smmuv3-iommu-memory-region-0-0 translation failed for iova=0x13a9d2000(SMMU_EVT_C_BAD_STE)
Invalid read at addr 0x13A9D2000, size 2, region '(null)', reason: rejected
invalid STE
smmuv3-iommu-memory-region-0-0 translation failed for iova=0x13a9d2000(SMMU_EVT_C_BAD_STE)
Invalid write at addr 0x13A9D2000, size 2, region '(null)', reason: rejected
invalid STE

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20220202111602.627429-1-eric.auger@redhat.com
Fixes: 10a83cb988 ("hw/arm/smmuv3: Skeleton")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-02-08 10:56:28 +00:00
Philippe Mathieu-Daudé
ba06fe8add dma: Let dma_memory_read/write() take MemTxAttrs argument
Let devices specify transaction attributes when calling
dma_memory_read() or dma_memory_write().

Patch created mechanically using spatch with this script:

  @@
  expression E1, E2, E3, E4;
  @@
  (
  - dma_memory_read(E1, E2, E3, E4)
  + dma_memory_read(E1, E2, E3, E4, MEMTXATTRS_UNSPECIFIED)
  |
  - dma_memory_write(E1, E2, E3, E4)
  + dma_memory_write(E1, E2, E3, E4, MEMTXATTRS_UNSPECIFIED)
  )

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20211223115554.3155328-6-philmd@redhat.com>
2021-12-30 17:16:32 +01:00
Eric Auger
219729cfbf hw/arm/smmuv3: Another range invalidation fix
6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range")
failed to completely fix misalignment issues with range
invalidation. For instance, invalidation patterns like "invalidate 32
4kB pages starting from 0xff395000" are not correctly handled, because
the previous fix only made sure the number of invalidated pages was a
power of 2 but did not properly handle a start address that is not
aligned with the range. This can be noticed when booting a Fedora 33
guest with a protected virtio-blk-pci device.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Fixes: 6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 15:44:45 +01:00
Thomas Huth
ee86213aa3 Do not include exec/address-spaces.h if it's not really necessary
Stop including exec/address-spaces.h in files that don't need it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-5-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-05-02 17:24:51 +02:00
Kunkun Jiang
bf559ee402 hw/arm/smmuv3: Support 16K translation granule
The driver can query some bits in SMMUv3 IDR5 to learn which
translation granules are supported. Arm recommends that SMMUv3
implementations support at least 4K and 64K granules. But in
the vSMMUv3, there seems to be no reason not to support the 16K
translation granule. In addition, if 16K is not supported,
vSVA will fail to be enabled in the future for a 16K guest
kernel. So it is better to support it.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-04-30 11:16:49 +01:00
Zenghui Yu
017a913af4 hw/arm/smmuv3: Emulate CFGI_STE_RANGE for an aligned range of StreamIDs
In the emulation of the CFGI_STE_RANGE command, we currently take StreamID
as the start of the invalidation range regardless of what the Range is,
whilst the spec clearly states that

 - "Invalidation is performed for an *aligned* range of 2^(Range+1)
    StreamIDs."

 - "The bottom Range+1 bits of the StreamID parameter are IGNORED,
    aligning the range to its size."

Take CFGI_ALL (where Range == 31) as an example, if there are some random
bits in the StreamID field, we'll fail to perform the full invalidation but
get a strange range (e.g., SMMUSIDRange={.start=1, .end=0}) instead. Rework
the emulation a bit to get rid of the discrepancy with the spec.
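
A small worked sketch of the aligned-range computation the spec calls for
(the command field accessors are illustrative):

    uint32_t range = CMD_STE_RANGE(&cmd);        /* 0..31 */
    uint64_t mask  = (1ULL << (range + 1)) - 1;  /* 2^(Range+1) StreamIDs */
    uint32_t start = CMD_SID(&cmd) & ~mask;      /* bottom Range+1 bits ignored */
    uint64_t end   = start + mask;               /* inclusive end of the range */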

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Acked-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20210402100449.528-1-yuzenghui@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-04-12 11:06:24 +01:00
Eric Auger
1194140b7f hw/arm/smmuv3: Fix SMMU_CMD_CFGI_STE_RANGE handling
If the whole SID range (32b) is invalidated (SMMU_CMD_CFGI_ALL),
@end overflows and we fail to handle the command properly.

Once this gets fixed, the current code really is awkward in the
sense it loops over the whole range instead of removing the
currently cached configs through a hash table lookup.

Fix both the overflow and the lookup.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210309102742.30442-7-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-03-12 12:40:10 +00:00
Eric Auger
6d9cd115b9 hw/arm/smmuv3: Enforce invalidation on a power of two range
As of today, the driver can invalidate a number of pages that is
not a power of 2. However IOTLB unmap notifications and internal
IOTLB invalidations work with masks leading to erroneous
invalidations.

In case the range is not a power of 2, split invalidations into
power of 2 invalidations.
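
A sketch of the splitting idea (the helper and variable names are made up;
this is not the literal QEMU code):

    /* Split a num_pages invalidation into power-of-2 sub-ranges so that
     * mask-based IOTLB lookups and UNMAP notifications stay correct. */
    while (num_pages) {
        /* largest power of 2 not exceeding the remaining page count */
        uint64_t pages = 1ULL << (63 - clz64(num_pages));

        smmuv3_invalidate_pages(addr, pages);    /* hypothetical helper */
        addr += pages << granule_shift;
        num_pages -= pages;
    }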

When looking for a single page entry in the vSMMU internal IOTLB,
let's make sure that if the entry is not found using a
g_hash_table_remove() we iterate over all the entries to find a
potential range that overlaps it.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20210309102742.30442-6-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-03-12 12:40:10 +00:00
Peter Xu
958ec334bc vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support
Previous work on dev-iotlb messages broke vhost on both SMMU and virtio-iommu,
since dev-iotlb (or PCIe ATS) is not yet supported by those IOMMUs.

An initial idea was to let the IOMMU export this information to vhost so
that vhost would know whether the vIOMMU supports dev-iotlb; then vhost
could conditionally register for dev-iotlb or the old iotlb way.  We could
build on a previous patch introducing PCIIOMMUOps, as Yi Liu proposed [1].

However it's not as easy as I thought, since vhost_iommu_region_add() does not
have a PCIDevice context at all, it being entirely a backend.  It seems
non-trivial to pass a PCI device over to the backend during init; e.g. when
the IOMMU notifier is registered, hdev->vdev is still NULL.

To make the fix smaller and easier, this patch goes the other way to leverage
the flag_changed() hook of vIOMMUs so that SMMU and virtio-iommu can trap the
dev-iotlb registration and fail it.  Then vhost can fall back to using
UNMAP invalidation for its translations.

[1] https://lore.kernel.org/qemu-devel/1599735398-6829-4-git-send-email-yi.l.liu@intel.com/
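
The failing path in the flag_changed hook amounts to something like the
sketch below (the message text is illustrative, and the real callback also
keeps its usual notifier bookkeeping):

    static int smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
                                          IOMMUNotifierFlag old_flags,
                                          IOMMUNotifierFlag new_flags,
                                          Error **errp)
    {
        if (new_flags & IOMMU_NOTIFIER_DEVIOTLB_UNMAP) {
            /* Refuse dev-iotlb (PCIe ATS) registration so that vhost falls
             * back to plain UNMAP invalidations. */
            error_setg(errp, "SMMUv3 does not support dev-iotlb notifications");
            return -EINVAL;
        }
        return 0;
    }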

Reported-by: Eric Auger <eric.auger@redhat.com>
Fixes: b68ba1ca57
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210204191228.187550-1-peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2021-02-05 08:52:58 -05:00
Zenghui Yu
dcda883cd2 hw/arm/smmuv3: Fix addr_mask for range-based invalidation
When handling guest range-based IOTLB invalidation, we should decode the TG
field into the corresponding translation granule size so that we can pass
the correct invalidation range to the backend. Set @granule to (tg * 2 + 10) to
properly emulate the architecture.
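
As a worked check of that formula, using the TG encoding of the
range-invalidation commands:

    /* tg = 1 -> granule 12 (4KB), tg = 2 -> 14 (16KB), tg = 3 -> 16 (64KB) */
    granule = tg * 2 + 10;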

Fixes: d52915616c ("hw/arm/smmuv3: Get prepared for range invalidation")
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Acked-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20210130043220.1345-1-yuzenghui@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-02 17:00:54 +00:00
Eugenio Pérez
5039caf3c4 memory: Add IOMMUTLBEvent
This way we can distinguish between a regular IOMMUTLBEntry (an entry of the
IOMMU hardware) and notifications.

In the notifications, we set explicitly whether it is a MAP or an UNMAP,
instead of relying on the entry permissions to differentiate them.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20201116165506.31315-3-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
2020-12-08 13:48:57 -05:00
Eugenio Pérez
3b5ebf8532 memory: Rename memory_region_notify_one to memory_region_notify_iommu_one
The previous name didn't reflect the IOMMU operation.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20201116165506.31315-2-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-12-08 13:48:57 -05:00
Philippe Mathieu-Daudé
744a790ec0 hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)
Use the BIT_ULL() macro to ensure we use 64-bit arithmetic.
This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN):

  CID 1432363 (#1 of 1): Unintentional integer overflow:

  overflow_before_widen:
    Potentially overflowing expression 1 << scale with type int
    (32 bits, signed) is evaluated using 32-bit arithmetic, and
    then used in a context that expects an expression of type
    hwaddr (64 bits, unsigned).
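
A sketch of the pattern involved (the exact statement in the source may
differ):

    /* '1 << scale' is evaluated in 32-bit signed arithmetic and can overflow;
     * BIT_ULL(scale) keeps the computation in 64 bits before it is widened. */
    uint64_t num_pages = (num + 1) * BIT_ULL(scale);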

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20201030144617.1535064-1-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 16:52:16 +00:00
Zenghui Yu
a55aab6181 hw/arm/smmuv3: Set the restoration priority of the vSMMUv3 explicitly
Ensure the vSMMUv3 will be restored before all PCIe devices so that DMA
translation can work properly during migration.

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20201019091508.197-1-yuzenghui@huawei.com
Acked-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-27 11:10:44 +00:00
Eric Auger
de206dfd80 hw/arm/smmuv3: Advertise SMMUv3.2 range invalidation
Expose the RIL bit so that the guest driver uses range
invalidation. Although RIL is a 3.2 feature, we let
the AIDR advertise SMMUv3.1 support, as a v3.x implementation
is allowed to implement features from v3.(x+1).

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-12-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Eric Auger
e7c3b9d9a0 hw/arm/smmuv3: Support HAD and advertise SMMUv3.1 support
HAD is a mandatory feature in SMMUv3.1 if S1P is set, which is
our case. The other 3.1 mandatory features come with S2P, which we
don't have.

So let's support HAD and advertise SMMUv3.1 support in AIDR.

HAD support allows the CD to disable hierarchical attributes, i.e.
if the HAD0/1 bit is set, the APTable field of table descriptors
walked through TTB0/1 is ignored.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-11-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Eric Auger
5888f0ad12 hw/arm/smmuv3: Let AIDR advertise SMMUv3.0 support
Add support for the AIDR register. It currently advertises
the SMMU v3.0 spec.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-10-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Eric Auger
d52915616c hw/arm/smmuv3: Get prepared for range invalidation
Enhance the smmu_iotlb_inv_iova() helper with range invalidation.
This uses the new fields passed in the NH_VA and NH_VAA commands:
the size of the range, the level and the granule.

As NH_VA and NH_VAA both use those fields, their decoding and
handling are factorized in a new smmuv3_s1_range_inval() helper.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-8-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Eric Auger
c0f9ef7037 hw/arm/smmuv3: Introduce smmuv3_s1_range_inval() helper
Let's introduce a helper for S1 IOVA range invalidation.
It will be used for the NH_VA and NH_VAA commands. It decodes
the same fields, traces, calls the UNMAP notifiers and
invalidates the corresponding IOTLB entries.

At the moment we do not support 3.2 range invalidation yet,
so this reduces to a single IOVA invalidation.

Note the leaf bit now is also decoded for the CMD_TLBI_NH_VAA
command. At the moment it is only used for tracing.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-7-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Eric Auger
9e54dee71f hw/arm/smmu-common: Manage IOTLB block entries
At the moment each entry in the IOTLB corresponds to a page-sized
mapping (4K, 16K or 64K), even if the page belongs to a mapped
block. In the case of block mappings this inefficiently consumes IOTLB
entries.

Change the value of the entry so that it reflects the actual
mapping it belongs to (block or page start address and size).

Also, the level/tg of the entry is encoded in the key. In subsequent
patches we will enable range invalidation, which is able
to provide the level/tg of the entry.

Encoding the level/tg directly in the key will allow invalidating
with g_hash_table_remove() when num_pages equals 1.
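
Roughly, the key now carries the following information (a sketch; the actual
field names and types may differ):

    typedef struct SMMUIOTLBKey {
        uint64_t iova;    /* block- or page-aligned input address */
        uint16_t asid;
        uint8_t  tg;      /* translation granule of the mapping */
        uint8_t  level;   /* table level at which the leaf was found */
    } SMMUIOTLBKey;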

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-6-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Eric Auger
a755015855 hw/arm/smmu: Introduce SMMUTLBEntry for PTW and IOTLB value
Introduce a specialized SMMUTLBEntry to store the result of
the PTW and cache it in the IOTLB. This structure extends the
generic IOMMUTLBEntry struct with the level of the entry and
the granule size.

These will be useful when implementing range invalidation.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-5-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Eric Auger
6808bca939 hw/arm/smmu-common: Add IOTLB helpers
Add two helpers: one to look up a given IOTLB entry and
one to insert a new entry. We also move the tracing there.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200728150815.11446-3-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 10:02:06 +01:00
Philippe Mathieu-Daudé
18610bfd3e hw: Remove unnecessary cast when calling dma_memory_read()
Since its introduction in commit d86a77f8ab, dma_memory_read()
has always accepted a void pointer argument. Remove the unnecessary
casts.

This commit was produced with the included Coccinelle script
scripts/coccinelle/exec_rw_const.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
v4: Drop parenthesis when removing cast (Eric Blake)
2020-02-20 14:47:08 +01:00
Simon Veith
b255cafb59 hw/arm/smmuv3: Report F_STE_FETCH fault address in correct word position
The smmuv3_record_event() function that generates the F_STE_FETCH error
uses the EVT_SET_ADDR macro to record the fetch address, placing it in
32-bit words 4 and 5.

The correct position for this address is in words 6 and 7, per the
SMMUv3 Architecture Specification.

Update the function to use the EVT_SET_ADDR2 macro instead, which is the
macro intended for writing to these words.
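
In other words, the fix is essentially the following one-line change at the
call site (arguments illustrative):

    /* FetchAddr belongs in 32-bit words 6 and 7 of the event record, not 4 and 5 */
    EVT_SET_ADDR2(&evt, ste_addr);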

ref. ARM IHI 0070C, section 7.3.4.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-7-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Acked-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-12-20 14:03:00 +00:00
Simon Veith
41678c33aa hw/arm/smmuv3: Align stream table base address to table size
Per the specification, and as observed in hardware, the SMMUv3 aligns
the SMMU_STRTAB_BASE address to the size of the table by masking out the
respective least significant bits in the ADDR field.

Apply this masking logic to our smmu_find_ste() lookup function per the
specification.

ref. ARM IHI 0070C, section 6.3.23.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-5-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-12-20 14:03:00 +00:00
Simon Veith
05ff2fb80c hw/arm/smmuv3: Check stream IDs against actual table LOG2SIZE
When checking whether a stream ID is in range of the stream table, we
have so far been only checking it against our implementation limit
(SMMU_IDR1_SIDSIZE). However, the guest can program the
STRTAB_BASE_CFG.LOG2SIZE field to a size that is smaller than this
limit.

Check the stream ID against this limit as well to match the hardware
behavior of raising C_BAD_STREAMID events in case the limit is exceeded.
Also, ensure that we do not go one entry beyond the end of the table by
checking that its index is strictly smaller than the table size.
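
Roughly, the check becomes (identifiers illustrative, with SMMU_IDR1_SIDSIZE
as named above):

    /* C_BAD_STREAMID if the SID exceeds the implementation limit or the
     * table size configured in STRTAB_BASE_CFG.LOG2SIZE; using '>=' keeps
     * the index strictly inside the table. */
    if (sid >= (1 << SMMU_IDR1_SIDSIZE) || sid >= (1 << log2size)) {
        event->type = SMMU_EVT_C_BAD_STREAMID;
        return -EINVAL;
    }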

ref. ARM IHI 0070C, section 6.3.24.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-4-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-12-20 14:03:00 +00:00
Simon Veith
3d44c60500 hw/arm/smmuv3: Apply address mask to linear strtab base address
In the SMMU_STRTAB_BASE register, the stream table base address only
occupies bits [51:6]. Other bits, such as RA (bit [62]), must be masked
out to obtain the base address.

The branch for 2-level stream tables correctly applies this mask by way
of SMMU_BASE_ADDR_MASK, but the one for linear stream tables does not.

Apply the missing mask in that case as well so that the correct stream
base address is used by guests which configure a linear stream table.
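
A sketch of the corrected address computation for the linear-table case
(variable names approximate):

    /* keep only the base address bits [51:6]; RA and other attribute bits
     * are masked out by SMMU_BASE_ADDR_MASK */
    addr = (s->strtab_base & SMMU_BASE_ADDR_MASK) + sid * sizeof(STE);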

Linux guests are unaffected by this change because they choose a 2-level
stream table layout for the QEMU SMMUv3, based on the size of its stream
ID space.

ref. ARM IHI 0070C, section 6.3.23.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-2-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Acked-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-12-20 14:03:00 +00:00