Message ID | 20170914125852.22129-18-wei.liu2@citrix.com (mailing list archive)
---|---
State | New, archived
>>> On 14.09.17 at 14:58, <wei.liu2@citrix.com> wrote:
> The l1 mask needs to stay in x86/mm.c while l{2,3,4} masks are only
> needed by PV code. Both x86 common mm code and PV mm code use
> base_disallow_mask and l1 mask.
>
> Export base_disallow_mask and l1 mask in asm-x86/mm.h.

So that's because in patch 20 you need to keep get_page_from_l1e() in
x86/mm.c, due to being used by shadow code. But is shadow using the
same disallow mask for HVM guests actually correct? Perhaps it would
be better for callers of get_page_from_l1e() to pass in their disallow
masks, even if it would just so happen that PV and shadow use the same
ones?

Tim, do you have any thoughts or insights here?

Jan
On Fri, Sep 22, 2017 at 07:52:13AM -0600, Jan Beulich wrote:
> >>> On 14.09.17 at 14:58, <wei.liu2@citrix.com> wrote:
> > The l1 mask needs to stay in x86/mm.c while l{2,3,4} masks are only
> > needed by PV code. Both x86 common mm code and PV mm code use
> > base_disallow_mask and l1 mask.
> >
> > Export base_disallow_mask and l1 mask in asm-x86/mm.h.
>
> So that's because in patch 20 you need to keep
> get_page_from_l1e() in x86/mm.c, due to being used by shadow
> code. But is shadow using the same disallow mask for HVM guests
> actually correct? Perhaps it would be better for callers of
> get_page_from_l1e() to pass in their disallow masks, even if it
> would just so happen that PV and shadow use the same ones?

I think this is a fine idea, fwiw.
At 07:52 -0600 on 22 Sep (1506066733), Jan Beulich wrote:
> >>> On 14.09.17 at 14:58, <wei.liu2@citrix.com> wrote:
> > The l1 mask needs to stay in x86/mm.c while l{2,3,4} masks are only
> > needed by PV code. Both x86 common mm code and PV mm code use
> > base_disallow_mask and l1 mask.
> >
> > Export base_disallow_mask and l1 mask in asm-x86/mm.h.
>
> So that's because in patch 20 you need to keep
> get_page_from_l1e() in x86/mm.c, due to being used by shadow
> code. But is shadow using the same disallow mask for HVM guests
> actually correct? Perhaps it would be better for callers of
> get_page_from_l1e() to pass in their disallow masks, even if it
> would just so happen that PV and shadow use the same ones?
> Tim, do you have any thoughts or insights here?

IIRC the shadow code isn't relying on the disallow mask for anything
in HVM guests; it builds the shadows according to its own rules. I
don't think that anything's improved by making the disallow mask
explicit, but I have no objection to it if it makes other refactoring
simpler.

Tim.
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 86c7466fa0..e11aac3b90 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -152,9 +152,7 @@ bool __read_mostly machine_to_phys_mapping_valid;
 
 struct rangeset *__read_mostly mmio_ro_ranges;
 
-static uint32_t base_disallow_mask;
-/* Global bit is allowed to be set on L1 PTEs. Intended for user mappings. */
-#define L1_DISALLOW_MASK ((base_disallow_mask | _PAGE_GNTTAB) & ~_PAGE_GLOBAL)
+uint32_t __read_mostly base_disallow_mask;
 
 #define L2_DISALLOW_MASK base_disallow_mask
 
@@ -163,14 +161,6 @@ static uint32_t base_disallow_mask;
 
 #define L4_DISALLOW_MASK (base_disallow_mask)
 
-#define l1_disallow_mask(d)                                     \
-    ((d != dom_io) &&                                           \
-     (rangeset_is_empty((d)->iomem_caps) &&                     \
-      rangeset_is_empty((d)->arch.ioport_caps) &&               \
-      !has_arch_pdevs(d) &&                                     \
-      is_pv_domain(d)) ?                                        \
-     L1_DISALLOW_MASK : (L1_DISALLOW_MASK & ~PAGE_CACHE_ATTRS))
-
 static s8 __read_mostly opt_mmio_relax;
 
 static int __init parse_mmio_relax(const char *s)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 56b2b94195..fbb98e80c6 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -612,4 +612,17 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
+extern uint32_t base_disallow_mask;
+
+/* Global bit is allowed to be set on L1 PTEs. Intended for user mappings. */
+#define L1_DISALLOW_MASK ((base_disallow_mask | _PAGE_GNTTAB) & ~_PAGE_GLOBAL)
+
+#define l1_disallow_mask(d)                                     \
+    ((d != dom_io) &&                                           \
+     (rangeset_is_empty((d)->iomem_caps) &&                     \
+      rangeset_is_empty((d)->arch.ioport_caps) &&               \
+      !has_arch_pdevs(d) &&                                     \
+      is_pv_domain(d)) ?                                        \
+     L1_DISALLOW_MASK : (L1_DISALLOW_MASK & ~PAGE_CACHE_ATTRS))
+
 #endif /* __ASM_X86_MM_H__ */
The l1 mask needs to stay in x86/mm.c while l{2,3,4} masks are only
needed by PV code. Both x86 common mm code and PV mm code use
base_disallow_mask and l1 mask.

Export base_disallow_mask and l1 mask in asm-x86/mm.h.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c        | 12 +-----------
 xen/include/asm-x86/mm.h | 13 +++++++++++++
 2 files changed, 14 insertions(+), 11 deletions(-)