
[v5,00/13] target/riscv: Fix PMP related problem

Message ID 20230428143621.142390-1-liweiwei@iscas.ac.cn


Weiwei Li April 28, 2023, 2:36 p.m. UTC
This patchset fixes the PMP bypass problem reported in https://gitlab.com/qemu-project/qemu/-/issues/1542:

A TLB entry will be cached if the matched PMP entry covers the whole page. However, a higher-priority PMP entry may cover only part of the page (without matching the access address), which means different regions in the page may have different permissions. In that case the translation cannot be cached either (patch 1).

Writing to pmpaddr didn't trigger a TLB flush (patch 3).

We set tlb_size to 1 so that TLB_INVALID_MASK is set and the next access goes through tlb_fill again. However, this does not work in tb_gen_code() => get_page_addr_code_hostp(): the TLB host address is cached, and the following instructions can use that host address directly, bypassing the PMP-related checks (patch 5).

The port is available here:
https://github.com/plctlab/plct-qemu/tree/plct-pmp-fix-v5

v5:

Move the original Patch 6 to Patch 3

Add Patch 4 to change the return type of pmp_hart_has_privs() to bool

Add Patch 5 to make RLB/MML/MMWP bits writable only when Smepmp is enabled

Add Patch 6 to remove unused parameters in pmp_hart_has_privs_default()

Add Patch 7 to flush the TLB when the MMWP or MML bits are changed

Add Patch 8 to update the next rule addr in pmpaddr_csr_write()

Add Patch 13 to deny access if the access is partially inside the PMP entry

v4:

Update comments for Patch 1, and move partial check code from Patch 2 to Patch 1

Restore log message change in Patch 2

Update the commit message and the fix approach in Patch 6

v3:

Ignore disabled PMP entry in pmp_get_tlb_size() in Patch 1

Drop Patch 5, since the TB jump cache has already been flushed in tlb_flush, so flushing TBs seems unnecessary

Fix commit message problems in Patch 8 (Patch 7 in new patchset)

v2:

Update commit message for patch 1

Add a default tlb_size when PMP is disabled or there are no rules, and only get the TLB size when translation succeeds in patch 2

Update get_page_addr_code_hostp instead of probe_access_internal to fix the cached host address for instruction fetch in patch 6

Add patch 7 to make the shortcut really work in pmp_hart_has_privs

Add patch 8 to use pmp_update_rule_addr() and pmp_update_rule_nums() separately

Weiwei Li (13):
  target/riscv: Update pmp_get_tlb_size()
  target/riscv: Move pmp_get_tlb_size apart from
    get_physical_address_pmp
  target/riscv: Make the short cut really work in pmp_hart_has_privs
  target/riscv: Change the return type of pmp_hart_has_privs() to bool
  target/riscv: Make RLB/MML/MMWP bits writable only when Smepmp is
    enabled
  target/riscv: Remove unused parameters in pmp_hart_has_privs_default()
  target/riscv: Flush TLB when MMWP or MML bits are changed
  target/riscv: Update the next rule addr in pmpaddr_csr_write()
  target/riscv: Flush TLB when pmpaddr is updated
  target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes
  accel/tcg: Uncache the host address for instruction fetch when tlb
    size < 1
  target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write
  target/riscv: Deny access if access is partially inside the PMP entry

 accel/tcg/cputlb.c        |   5 +
 target/riscv/cpu_helper.c |  27 ++----
 target/riscv/pmp.c        | 198 ++++++++++++++++++++++----------------
 target/riscv/pmp.h        |  11 +--
 4 files changed, 135 insertions(+), 106 deletions(-)