Message ID | 20190424165955.23718-10-julien.grall@arm.com (mailing list archive)
---|---
State | Superseded |
Series | xen/arm: Provide a generic function to update Xen PT
On 24.04.19 19:59, Julien Grall wrote:
> Currently, the virtual address of the 3rd level page-tables is obtained
> using mfn_to_virt().
>
> On Arm32, mfn_to_virt() can only work on xenheap pages. While in practice
> all the page-tables updated will reside in the xenheap, the page-tables
> covering Xen memory (e.g. xen_mapping) are part of the Xen binary.
>
> Furthermore, a follow-up change will update xen_pt_update_entry() to
> walk all the levels and therefore be more generic. Some of the
> page-tables will also be part of Xen memory and therefore will not be
> reachable using mfn_to_virt().
>
> The easiest way to reach those pages is to use {, un}map_domain_page().
> While on arm32 this means an extra mapping in the common case, this is
> not very important as the Xen page-tables are not updated often.
>
> In order to allow future changes in the way the Xen page-tables are
> mapped, two new helpers are introduced to map/unmap the page-tables.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---

Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 97e876d866..115e8340f1 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -969,6 +969,16 @@ static int create_xen_table(lpae_t *entry)
     return 0;
 }
 
+static lpae_t *xen_map_table(mfn_t mfn)
+{
+    return map_domain_page(mfn);
+}
+
+static void xen_unmap_table(const lpae_t *table)
+{
+    unmap_domain_page(table);
+}
+
 /* Sanity check of the entry */
 static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 {
@@ -1031,6 +1041,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
                                unsigned int flags)
 {
+    int rc;
     lpae_t pte, *entry;
     lpae_t *third = NULL;
 
@@ -1049,15 +1060,17 @@ static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
 
     BUG_ON(!lpae_is_valid(*entry));
 
-    third = mfn_to_virt(lpae_get_mfn(*entry));
+    third = xen_map_table(lpae_get_mfn(*entry));
     entry = &third[third_table_offset(addr)];
 
+    rc = -EINVAL;
     if ( !xen_pt_check_entry(*entry, mfn, flags) )
-        return -EINVAL;
+        goto out;
 
     /* If we are only populating page-table, then we are done. */
+    rc = 0;
     if ( flags & _PAGE_POPULATE )
-        return 0;
+        goto out;
 
     /* We are removing the page */
     if ( !(flags & _PAGE_PRESENT) )
@@ -1082,7 +1095,12 @@ static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
 
     write_pte(entry, pte);
 
-    return 0;
+    rc = 0;
+
+out:
+    xen_unmap_table(third);
+
+    return rc;
 }
 
 static DEFINE_SPINLOCK(xen_pt_lock);
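Distilled from the hunks above, the structure the patch converts xen_pt_update_entry() to is: map the table once, funnel every exit through a single out label, and always unmap. The sketch below shows that pattern in isolation; only xen_map_table(), xen_unmap_table(), write_pte() and lpae_is_valid() come from the code above, while the function name xen_pt_write_one() and its arguments are hypothetical.

/*
 * Minimal, hypothetical sketch of the resulting pattern: the table is
 * mapped once, every exit path goes through the "out" label, and the
 * mapping is always released.
 */
static int xen_pt_write_one(mfn_t table_mfn, unsigned int offset, lpae_t pte)
{
    int rc = -EINVAL;
    lpae_t *table = xen_map_table(table_mfn);

    if ( !lpae_is_valid(table[offset]) )
        goto out;

    write_pte(&table[offset], pte);
    rc = 0;

out:
    xen_unmap_table(table);

    return rc;
}

Compared with the previous early returns, the single exit point makes it much harder to leak the mapping created by xen_map_table() when a new error path is added.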
Currently, the virtual address of the 3rd level page-tables is obtained
using mfn_to_virt().

On Arm32, mfn_to_virt() can only work on xenheap pages. While in practice
all the page-tables updated will reside in the xenheap, the page-tables
covering Xen memory (e.g. xen_mapping) are part of the Xen binary.

Furthermore, a follow-up change will update xen_pt_update_entry() to
walk all the levels and therefore be more generic. Some of the
page-tables will also be part of Xen memory and therefore will not be
reachable using mfn_to_virt().

The easiest way to reach those pages is to use {, un}map_domain_page().
While on arm32 this means an extra mapping in the common case, this is
not very important as the Xen page-tables are not updated often.

In order to allow future changes in the way the Xen page-tables are
mapped, two new helpers are introduced to map/unmap the page-tables.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/mm.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)
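The follow-up mentioned above will make xen_pt_update_entry() walk all the levels. A rough, hypothetical sketch of how one level of descent could look on top of the new helpers is below; the function name and the simplified validity check are assumptions for illustration, not the actual follow-up code.

/*
 * Hypothetical sketch only: move from the currently mapped table to the
 * next level. A real walk would also need to handle superpage mappings
 * and allocate missing tables; this just shows the map/unmap discipline
 * the new helpers make possible.
 */
static lpae_t *xen_pt_next_level(lpae_t *table, unsigned int offset)
{
    lpae_t entry = table[offset];
    lpae_t *next;

    if ( !lpae_is_valid(entry) )
    {
        xen_unmap_table(table);
        return NULL;
    }

    /* Map the next level before dropping the mapping of the current one. */
    next = xen_map_table(lpae_get_mfn(entry));
    xen_unmap_table(table);

    return next;
}

Whatever shape the final walk takes, every table mapped with xen_map_table() has to be paired with an xen_unmap_table(), which is exactly the discipline the reworked error handling in this patch establishes.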