
[v6,00/12] target/riscv: Fix PMP related problem

Message ID 20230517091519.34439-1-liweiwei@iscas.ac.cn (mailing list archive)

Message

Weiwei Li May 17, 2023, 9:15 a.m. UTC
This patchset originally set out to fix the PMP bypass problem reported in https://gitlab.com/qemu-project/qemu/-/issues/1542:

* A TLB entry will be cached if the matching PMP entry covers the whole page. However, a PMP entry with higher priority may cover part of the page without matching the access address, which means different regions of the page may have different permissions. In that case the translation cannot be cached either (patch 1; see the sketch after this list).
* Writes to pmpaddr and to the MML/MMWP bits did not trigger a TLB flush (patches 7 and 9).
* We set tlb_size to 1 so that TLB_INVALID_MASK is set and the next access goes through tlb_fill() again. However, this does not work for tb_gen_code() => get_page_addr_code_hostp(): the TLB host address will be cached there, and the following instructions can use it directly, which may bypass the PMP checks (original patch 11).
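
For reference, a minimal standalone sketch of the patch-1 idea (the types
and names below are illustrative, not the actual target/riscv/pmp.c code,
and it conservatively checks every enabled rule rather than only those
with higher priority than the matching one):

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define NUM_PMP   16

/* Illustrative stand-in for a decoded PMP rule: an enabled flag plus
 * the inclusive start/end of the address range it matches. */
typedef struct {
    bool enabled;               /* A field != OFF */
    uint64_t sa, ea;            /* inclusive start/end address */
} pmp_rule_t;

/* Return PAGE_SIZE if the page containing addr has uniform PMP
 * permissions and may therefore be cached in the TLB; return 1
 * otherwise, so the TLB entry is created with TLB_INVALID_MASK set
 * and every later access to the page re-enters tlb_fill(). */
uint64_t pmp_get_tlb_size(const pmp_rule_t *rules, uint64_t addr)
{
    uint64_t pg_start = addr & ~(uint64_t)(PAGE_SIZE - 1);
    uint64_t pg_end = pg_start + PAGE_SIZE - 1;

    for (int i = 0; i < NUM_PMP; i++) {
        if (!rules[i].enabled) {
            continue;           /* v3 change: ignore disabled entries */
        }
        bool overlaps = rules[i].sa <= pg_end && rules[i].ea >= pg_start;
        bool covers = rules[i].sa <= pg_start && rules[i].ea >= pg_end;
        if (overlaps && !covers) {
            return 1;           /* mixed permissions inside this page */
        }
    }
    return PAGE_SIZE;           /* safe to cache the whole page */
}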

The port is available here:
https://github.com/plctlab/plct-qemu/tree/plct-pmp-fix-v6

v6:

* Update comments in Patch 1
* Remove the merged Patch 11

v5:

* Move the original Patch 6 to Patch 3
* Add Patch 4 to change the return type of pmp_hart_has_privs() to bool
* Add Patch 5 to make the RLB/MML/MMWP bits writable only when Smepmp is enabled
* Add Patch 6 to remove unused parameters in pmp_hart_has_privs_default()
* Add Patch 7 to flush the TLB when the MMWP or MML bits are changed
* Add Patch 8 to update the next rule addr in pmpaddr_csr_write()
* Add Patch 13 to deny access if the access is partially inside the PMP entry (see the sketch below)
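
A standalone sketch of the partial-overlap rule that Patch 13 enforces
(the type and function names here are illustrative, not the upstream code):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative PMP rule range: inclusive start/end address. */
typedef struct {
    uint64_t sa, ea;
} pmp_range_t;

/* Return true if the access [addr, addr + size) straddles the rule's
 * boundary, i.e. some bytes fall inside the range and some outside.
 * Patch 13 denies such an access as a whole instead of letting only
 * the matching bytes decide. */
bool pmp_overlap_is_partial(const pmp_range_t *r, uint64_t addr,
                            uint64_t size)
{
    uint64_t last = addr + size - 1;
    bool first_in = addr >= r->sa && addr <= r->ea;
    bool last_in = last >= r->sa && last <= r->ea;
    bool spans = addr < r->sa && last > r->ea; /* access contains range */

    return (first_in != last_in) || spans;
}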

v4:

* Update comments for Patch 1, and move partial check code from Patch 2 to Patch 1

* Restore log message change in Patch 2

* Update the commit message and the approach to fixing the problem in Patch 6

v3:

* Ignore disabled PMP entries in pmp_get_tlb_size() in Patch 1

* Drop Patch 5, since the TB jump cache is already flushed in tlb_flush(), so flushing the TBs seems unnecessary.

* Fix commit message problems in Patch 8 (Patch 7 in new patchset)

v2:

* Update commit message for patch 1
* Add a default tlb_size when PMP is disabled or there are no rules, and only get the TLB size when translation succeeds, in patch 2
* Update get_page_addr_code_hostp instead of probe_access_internal to fix the cached host address for instruction fetches in patch 6
* Add patch 7 to make the short cut really work in pmp_hart_has_privs
* Add patch 8 to use pmp_update_rule_addr() and pmp_update_rule_nums() separately

Weiwei Li (12):
  target/riscv: Update pmp_get_tlb_size()
  target/riscv: Move pmp_get_tlb_size apart from
    get_physical_address_pmp
  target/riscv: Make the short cut really work in pmp_hart_has_privs
  target/riscv: Change the return type of pmp_hart_has_privs() to bool
  target/riscv: Make RLB/MML/MMWP bits writable only when Smepmp is
    enabled
  target/riscv: Remove unused parameters in pmp_hart_has_privs_default()
  target/riscv: Flush TLB when MMWP or MML bits are changed
  target/riscv: Update the next rule addr in pmpaddr_csr_write()
  target/riscv: Flush TLB when pmpaddr is updated
  target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes
  target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write
  target/riscv: Deny access if access is partially inside the PMP entry

 target/riscv/cpu_helper.c |  27 ++---
 target/riscv/pmp.c        | 203 ++++++++++++++++++++++----------------
 target/riscv/pmp.h        |  11 +--
 3 files changed, 135 insertions(+), 106 deletions(-)

Comments

Alistair Francis May 18, 2023, 9:46 a.m. UTC | #1
On Wed, May 17, 2023 at 7:16 PM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
> [...]

Thanks!

Applied to riscv-to-apply.next

Alistair
