Commit Graph

15 Commits

Author SHA1 Message Date
Mayuresh Chitale
4bf501dc01 target/riscv: pmp: Clear pmp/smepmp bits on reset
As per the Priv and Smepmp specifications, certain bits, such as the 'L'
bit of PMP entries and mseccfg.MML, can only be cleared upon reset, and
clearing them is necessary to allow 'M' mode firmware to correctly
reinitialize the pmp/smepmp state across reboots. As required by the spec,
also clear the 'A' field of PMP entries.
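
A minimal sketch of the idea, using illustrative struct and bit names
rather than the actual QEMU target/riscv definitions:

    /* Sketch only: pmp_state_t, PMP_LOCK, PMP_AMATCH and MSECCFG_MML are
     * illustrative stand-ins, not the real target/riscv definitions. */
    #include <stdint.h>

    #define PMP_LOCK    0x80u   /* 'L' bit in a pmpcfg field */
    #define PMP_AMATCH  0x18u   /* 'A' (address-matching) field */
    #define MSECCFG_MML 0x1u    /* Machine Mode Lockdown bit in mseccfg */

    typedef struct {
        uint8_t  cfg[16];
        uint64_t mseccfg;
    } pmp_state_t;

    static void pmp_reset_sketch(pmp_state_t *pmp)
    {
        for (unsigned i = 0; i < 16; i++) {
            /* 'L' and 'A' can only be cleared by reset, so do it here. */
            pmp->cfg[i] &= (uint8_t)~(PMP_LOCK | PMP_AMATCH);
        }
        /* mseccfg.MML is likewise sticky until reset. */
        pmp->mseccfg &= ~(uint64_t)MSECCFG_MML;
    }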

Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231019065644.1431798-1-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-11-07 11:06:02 +10:00
Weiwei Li
e9c39713ea target/riscv: Change the return type of pmp_hart_has_privs() to bool
We no longer need the pmp_index of the matched PMP entry.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13 17:09:13 +10:00
Weiwei Li
dc7b599332 target/riscv: Update pmp_get_tlb_size()
PMP entries before (and including) the matched PMP entry may cover only part
of the TLB page, which can split the page into regions with different
permissions. For example, with PMP0 (0x80000008~0x8000000F, R) and PMP1
(0x80000000~0x80000FFF, RWX), a write access to 0x80000000 matches PMP1.
However, we cannot cache the translation result in the TLB, since that would
let a write access to 0x80000008 bypass the check of PMP0. So
pmp_get_tlb_size() should check all of these entries instead of only the
matched one, and set tlb_size to 1 in this case.
Set tlb_size to TARGET_PAGE_SIZE if PMP is not supported or there are no PMP rules.
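
A sketch of the resulting logic, using illustrative types and a simplified
scan rather than the real pmp_get_tlb_size() code:

    /* Sketch only: pmp_rule_t and the scan below are illustrative. */
    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096u

    typedef struct { uint64_t sa, ea; } pmp_rule_t;   /* inclusive range */

    static uint64_t pmp_get_tlb_size_sketch(const pmp_rule_t *rules, int n,
                                            uint64_t addr)
    {
        uint64_t page_sa = addr & ~(uint64_t)(TARGET_PAGE_SIZE - 1);
        uint64_t page_ea = page_sa + TARGET_PAGE_SIZE - 1;

        if (n == 0) {
            return TARGET_PAGE_SIZE;   /* no rules: permissions are uniform */
        }
        for (int i = 0; i < n; i++) {
            bool overlaps = rules[i].sa <= page_ea && rules[i].ea >= page_sa;
            bool covers   = rules[i].sa <= page_sa && rules[i].ea >= page_ea;
            /* A rule that overlaps the page without covering all of it
             * splits the page into regions with different permissions,
             * so the translation must not be cached for the whole page. */
            if (overlaps && !covers) {
                return 1;
            }
        }
        return TARGET_PAGE_SIZE;
    }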

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13 17:06:39 +10:00
Weiwei Li
c45eff30cb target/riscv: Fix format for indentation
Fix indentation problems, and try to use a consistent indentation strategy
within each file.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230405085813.40643-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-05-05 10:49:50 +10:00
LIU Zhiwei
824cac681c target/riscv: Fix PMP propagation for tlb
Only the pmp index that was checked by pmp_hart_has_privs should be used
by pmp_get_tlb_size, to avoid using a wrong pmp index.

Before this change, we may use a wrong pmp index. For example,
we check address 0x4fc with size 0x4 in pmp_hart_has_privs. If there
is a pmp rule whose valid range is [0x4fc, 0x500), then pmp_hart_has_privs
will return true.

However, the checked pmp index is discarded because pmp_hart_has_privs
returns a bool value. pmp_is_range_in_tlb then traverses all pmp
rules. The tlb_sa will be 0x0 and tlb_ea will be 0xfff. If there is
a pmp rule [0x10, 0x14), it will be misused, as it is legal in
pmp_get_tlb_size.

As we already know the correct pmp index, just remove
pmp_is_range_in_tlb and get the tlb size directly from
pmp_get_tlb_size.
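
The resulting call flow, sketched with hypothetical signatures (the real
QEMU interfaces differ):

    /* Sketch only: these prototypes are hypothetical stand-ins for the
     * real pmp_hart_has_privs()/pmp_get_tlb_size() interfaces. */
    #include <stdbool.h>
    #include <stdint.h>

    bool pmp_check_sketch(uint64_t addr, uint64_t size, int privs,
                          int *matched_index);
    uint64_t pmp_tlb_size_sketch(int matched_index, uint64_t addr);

    static uint64_t translate_sketch(uint64_t addr, uint64_t size, int privs)
    {
        int idx = -1;

        if (!pmp_check_sketch(addr, size, privs, &idx)) {
            return 0;   /* access fault: nothing is inserted into the TLB */
        }
        /* Reuse the index of the rule that actually matched instead of
         * re-scanning every rule, which is what let an unrelated rule
         * (e.g. [0x10, 0x14)) be picked up before this change. */
        return pmp_tlb_size_sketch(idx, addr);
    }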

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221012060016.30856-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-01-06 10:42:55 +10:00
Weiwei Li
77442380ec target/riscv: rvk: add CSR support for Zkr
- add SEED CSR which must be accessed with a read-write instruction:
  a read-only instruction such as CSRRS/CSRRC with rs1=x0 or CSRRSI/CSRRCI
  with uimm=0 will raise an illegal instruction exception.
- add USEED, SSEED fields for MSECCFG CSR
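
A sketch of the SEED access rule, with illustrative helper and field names
(not the actual QEMU CSR code):

    /* Sketch only: the decode fields below are illustrative. */
    #include <stdbool.h>

    /* An access is read-only when the instruction cannot modify the CSR:
     * CSRRS/CSRRC with rs1 = x0, or CSRRSI/CSRRCI with uimm = 0. */
    static bool seed_access_legal_sketch(bool is_csrrw_or_csrrwi,
                                         unsigned rs1_or_uimm)
    {
        bool read_only = !is_csrrw_or_csrrwi && rs1_or_uimm == 0;

        /* SEED must be accessed with a read-write instruction; a read-only
         * access raises an illegal instruction exception instead. */
        return !read_only;
    }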

Co-authored-by: Ruibo Lu <luruibo2000@163.com>
Co-authored-by: Zewen Ye <lustrew@foxmail.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-13-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2022-04-29 10:47:45 +10:00
Philippe Mathieu-Daudé
3cb1a410ef target: Include missing 'cpu.h'
These target-specific files use the target-specific CPU state
but fail to include "cpu.h"; i.e.:

    ../target/riscv/pmp.h:61:23: error: unknown type name 'CPURISCVState'
    void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
                          ^
    ../target/nios2/mmu.h:43:18: error: unknown type name 'CPUNios2State'
    void mmu_flip_um(CPUNios2State *env, unsigned int um);
                     ^
    ../target/microblaze/mmu.h:88:19: error: unknown type name 'CPUMBState'; did you mean 'CPUState'?
    uint32_t mmu_read(CPUMBState *env, bool ea, uint32_t rn);
                      ^~~~~~~~~~
                      CPUState

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-10-f4bug@amsat.org>
2022-03-06 13:15:42 +01:00
Hou Weiying
2582a95c3c target/riscv: Add ePMP CSR access functions
Signed-off-by: Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
Signed-off-by: Hou Weiying <weiying_hou@outlook.com>
Signed-off-by: Myriad-Dreamin <camiyoru@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 270762cb2507fba6a9eeb99a774cf49f7da9cc32.1618812899.git.alistair.francis@wdc.com
[ Changes by AF:
 - Rebase on master
 - Fix build errors
 - Fix some style issues
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
2021-05-11 20:02:06 +10:00
Jim Shu
b297129ae1 target/riscv: propagate PMP permission to TLB page
Currently, PMP permission checking of a TLB page is bypassed if the TLB hits.
Fix it by propagating the PMP permission into the TLB page permission.

PMP permission checking also uses the MMU-style API to change the TLB
permission and size.
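
The idea, sketched as a plain helper (the flag values mirror QEMU's PAGE_*
flags, but this is not the actual code that installs the TLB entry):

    /* Sketch only: shows the permission-intersection idea. */
    #define PAGE_READ  0x1
    #define PAGE_WRITE 0x2
    #define PAGE_EXEC  0x4

    static int tlb_prot_sketch(int mmu_prot, int pmp_prot)
    {
        /* A TLB hit must not bypass PMP, so the permission cached in the
         * TLB is the intersection of the page-table and PMP permissions. */
        return mmu_prot & pmp_prot;
    }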

Signed-off-by: Jim Shu <cwshu@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1613916082-19528-2-git-send-email-cwshu@andestech.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-03-22 21:54:40 -04:00
Atish Patra
d102f19a20 target/riscv/pmp: Raise exception if no PMP entry is configured
As per the privilege specification, any access from S/U mode should fail
if no pmp region is configured.
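
A sketch of the default rule being described, with illustrative names (the
privilege encodings follow the RISC-V spec):

    /* Sketch only: not the actual QEMU helper. */
    #include <stdbool.h>

    enum { PRV_U = 0, PRV_S = 1, PRV_M = 3 };

    static bool pmp_default_allowed_sketch(int num_rules, int priv)
    {
        if (num_rules == 0) {
            /* With no PMP entry configured, only M-mode may access memory;
             * any S/U-mode access fails. */
            return priv == PRV_M;
        }
        return true;   /* otherwise the configured rules decide */
    }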

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201223192553.332508-1-atish.patra@wdc.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-01-16 10:57:21 -08:00
Yifei Jiang
24beb03e46 target/riscv: Add PMP state description
When the PMP feature is supported, add the PMP state description
to vmstate_riscv_cpu.

'vmstate_pmp_addr' and 'num_rules' could be regenerated by
pmp_update_rule(), but that would update num_rules repeatedly.
So extract pmp_update_rule_addr() and pmp_update_rule_nums() to update
'vmstate_pmp_addr' and 'num_rules' respectively.

Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-4-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2020-11-03 07:17:23 -08:00
Zong Li
af3fc195e3 target/riscv: Change the TLB page size depends on PMP entries.
The minimum granularity of PMP is 4 bytes, which is smaller than the 4KB page
size; therefore, PMP checking would be ignored if a PMP range doesn't start
at a page-aligned address. This patch inspects the PMP entries and sets a
small page size for the TLB if there is a PMP entry which covers the page.

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <6b0bf48662ef26ab4c15381a08e78a74ebd7ca79.1595924470.git.zong.li@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2020-08-21 22:37:55 -07:00
Hesham Almatary
cc0fdb2985 RISC-V: Check for the effective memory privilege mode during PMP checks
The current PMP check function checks env->priv, which is not the effective
memory privilege mode.

For example, mstatus.MPRV could be set while executing in M-Mode, and in that
case the privilege mode for the PMP check should be S-Mode rather than M-Mode
(in env->priv) if mstatus.MPP == PRV_S.

This patch passes the effective memory privilege mode to the PMP check.
Functions that call the PMP check should pass the correct memory privilege mode
after reading mstatus' MPRV/MPP or hstatus.SPRV (if Hypervisor mode exists).
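
A sketch of the effective-mode computation being described (the field names
are illustrative, not the actual QEMU helper):

    /* Sketch only: mprv/mpp mirror the mstatus fields. */
    #include <stdbool.h>

    static int effective_pmp_priv_sketch(int cur_priv, bool mprv, int mpp,
                                         bool is_ifetch)
    {
        /* MPRV only affects data accesses: loads and stores issued while
         * mstatus.MPRV=1 are checked with the privilege in mstatus.MPP
         * (e.g. S-mode when MPP == PRV_S), even if the hart runs in M-mode.
         * Instruction fetches keep using the current privilege mode. */
        if (mprv && !is_ifetch) {
            return mpp;
        }
        return cur_priv;
    }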

Suggested-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Hesham Almatary <Hesham.Almatary@cl.cam.ac.uk>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-06-23 23:44:41 -07:00
Markus Armbruster
a8b991b52d Clean up ill-advised or unusual header guards
Leading underscores are ill-advised because such identifiers are
reserved.  Trailing underscores are merely ugly.  Strip both.

Our header guards commonly end in _H.  Normalize the exceptions.

Done with scripts/clean-header-guards.pl.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190315145123.28030-7-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Changes to slirp/ dropped, as we're about to spin it off]
2019-05-13 08:58:55 +02:00
Michael Clark
65c5b75c38 RISC-V Physical Memory Protection
Implements the physical memory protection extension as specified in
Privileged ISA Version 1.10.

PMP (Physical Memory Protection) is as yet unused and needs testing.
The SiFive verification team have PMP test cases that will be run.

Nothing currently depends on PMP support. It would be preferable to keep
the code in-tree for folks who are interested in RISC-V PMP support.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daire McNamara <daire.mcnamara@emdalo.com>
Signed-off-by: Ivan Griffin <ivan.griffin@emdalo.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
2018-03-07 08:30:28 +13:00