Message ID | caf43a60c79fd8380efe0bc178c6b31e040c179c.1576061451.git.hongyxia@amazon.com (mailing list archive)
---|---
State | Superseded
Series | Refactor super page shattering
On 11.12.2019 11:58, Hongyan Xia wrote:
> map_pages_to_xen and modify_xen_mappings are performing almost exactly
> the same operations when shattering an l3 PTE, the only difference
> being whether we want to flush.
>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>
> ---
> Changes in v3:
> - style and indentation changes.
> - return -ENOMEM instead of -1.
>
> Changes in v2:
> - improve asm.
> - re-read pl3e from memory when taking the lock.
> - move the allocation of l2t inside the shatter function.
> ---
>  xen/arch/x86/mm.c | 98 +++++++++++++++++++++++------------------------
>  1 file changed, 49 insertions(+), 49 deletions(-)
>
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 7d4dd80a85..97f11b6016 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5151,6 +5151,52 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
>          flush_area_local((const void *)v, f) : \
>          flush_area_all((const void *)v, f))
>
> +/* Shatter an l3 entry and populate l2. If virt is passed in, also do flush. */
> +static int shatter_l3e(l3_pgentry_t *pl3e, unsigned long virt, bool locking)
> +{
> +    unsigned int i;
> +    l3_pgentry_t ol3e = *pl3e;
> +    l2_pgentry_t l2e = l2e_from_intpte(l3e_get_intpte(ol3e));
> +    l2_pgentry_t *l2t = alloc_xen_pagetable();
> +
> +    if ( !l2t )
> +        return -ENOMEM;
> +
> +    for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
> +    {
> +        l2e_write(l2t + i, l2e);
> +        l2e = l2e_from_intpte(
> +            l2e_get_intpte(l2e) + (PAGE_SIZE << PAGETABLE_ORDER));

Andrew - iirc you had suggested this (in some different form, but to the
same effect) to improve code generation. If you're convinced that the
downside of the loss of type safety is worth the win in generated code,
I'm not going to stand in the way here, but it'll then need to be you to
ack these two patches in their eventually final shape.

> +    }
> +
> +    if ( locking )
> +        spin_lock(&map_pgdir_lock);
> +    if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
> +         (l3e_get_flags(*pl3e) & _PAGE_PSE) )
> +    {
> +        l3e_write_atomic(pl3e,
> +            l3e_from_paddr((paddr_t)virt_to_maddr(l2t), __PAGE_HYPERVISOR));

Why the cast? (I'm sorry if this was there on v3 already and I didn't
spot it. And if this remains the only thing to adjust, then I guess this
could be taken care of while committing.)

Jan
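(For readers following the code-generation point above: the two loop
forms in question both appear in hunks quoted in this thread, and are
reproduced here side by side only for comparison. The surrounding
declarations of i, ol3e, pl2e and l2t are assumed from the respective
hunks.)

    /* Pre-patch form: recompute each entry from a typed pfn + flags. */
    for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
        l2e_write(pl2e + i,
                  l2e_from_pfn(l3e_get_pfn(ol3e) + (i << PAGETABLE_ORDER),
                               l3e_get_flags(ol3e)));

    /* Patch's form: treat the PTE as a raw integer and add 2MiB per
     * iteration, which typically compiles down to a plain add, at the
     * cost of bypassing the typed accessors. */
    l2_pgentry_t l2e = l2e_from_intpte(l3e_get_intpte(ol3e));

    for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
    {
        l2e_write(l2t + i, l2e);
        l2e = l2e_from_intpte(l2e_get_intpte(l2e) +
                              (PAGE_SIZE << PAGETABLE_ORDER));
    }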
On 11.12.2019 11:58, Hongyan Xia wrote:
> @@ -5578,27 +5597,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
>          }
>
>          /* PAGE1GB: shatter the superpage and fall through. */
> -        pl2e = alloc_xen_pagetable();
> -        if ( !pl2e )
> +        if ( shatter_l3e(pl3e, 0, locking) )
>              return -ENOMEM;

Hmm, I didn't expect I'd need to comment on this again: As per my v2
reply, you should hand on the return value from the function, not make
up your own. This is so that in case the function gains another error
path with a different error code, it wouldn't become indistinguishable
to callers further up.

Jan
On Wed, 2019-12-11 at 16:32 +0100, Jan Beulich wrote:
> On 11.12.2019 11:58, Hongyan Xia wrote:
> > @@ -5578,27 +5597,8 @@ int modify_xen_mappings(unsigned long s,
> > unsigned long e, unsigned int nf)
> >          }
> >
> >          /* PAGE1GB: shatter the superpage and fall through. */
> > -        pl2e = alloc_xen_pagetable();
> > -        if ( !pl2e )
> > +        if ( shatter_l3e(pl3e, 0, locking) )
> >              return -ENOMEM;
>
> Hmm, I didn't expect I'd need to comment on this again: As per
> my v2 reply, you should hand on the return value from the
> function, not make up your own. This is so that in case the
> function gains another error path with a different error code,
> it wouldn't become indistinguishable to callers further up.

I was basically thinking about the conversation we had that ENOMEM is
probably the only error value map_pages_to_xen would ever return, and
it is unlikely to gain another return value in the future, so initially
I just let shatter return -1 and the caller return -ENOMEM. There is no
problem for me if we want to change it to handle different error values.

Hongyan
On Wed, 2019-12-11 at 16:29 +0100, Jan Beulich wrote:
> > +    }
> > +
> > +    if ( locking )
> > +        spin_lock(&map_pgdir_lock);
> > +    if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
> > +         (l3e_get_flags(*pl3e) & _PAGE_PSE) )
> > +    {
> > +        l3e_write_atomic(pl3e,
> > +            l3e_from_paddr((paddr_t)virt_to_maddr(l2t),
> > __PAGE_HYPERVISOR));
>
> Why the cast? (I'm sorry if this was there on v3 already and I
> didn't spot it. And if this remains the only thing to adjust,
> then I guess this could be taken care of while committing.)
>
> Jan

Sadly there is no l3e_from_maddr or virt_to_paddr to call directly. Of
course, paddr_t and maddr have the same underlying type (unsigned long),
so it works without a cast. I just added the cast to make it explicit
that these two are not exactly the same.

Hongyan
On 11.12.2019 17:27, Xia, Hongyan wrote:
> On Wed, 2019-12-11 at 16:32 +0100, Jan Beulich wrote:
>> On 11.12.2019 11:58, Hongyan Xia wrote:
>>> @@ -5578,27 +5597,8 @@ int modify_xen_mappings(unsigned long s,
>>> unsigned long e, unsigned int nf)
>>>          }
>>>
>>>          /* PAGE1GB: shatter the superpage and fall through. */
>>> -        pl2e = alloc_xen_pagetable();
>>> -        if ( !pl2e )
>>> +        if ( shatter_l3e(pl3e, 0, locking) )
>>>              return -ENOMEM;
>>
>> Hmm, I didn't expect I'd need to comment on this again: As per
>> my v2 reply, you should hand on the return value from the
>> function, not make up your own. This is so that in case the
>> function gains another error path with a different error code,
>> it wouldn't become indistinguishable to callers further up.
>
> I was basically thinking about the conversation we had that ENOMEM is
> probably the only error value map_pages_to_xen would ever return, and
> it is unlikely to gain another return value in the future, so initially
> I just let shatter return -1 and the caller return -ENOMEM. There is no
> problem for me if we want to change it to handle different error
> values.

The alternative to your prior 0 / -1 returning would have been to have
the function return bool. In this case "inventing" an error code here
would be fine. The 0 / -1 approach would introduce another instance of
what we're trying to get rid of elsewhere.

Jan
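(A minimal sketch of the error propagation being asked for, written
against the modify_xen_mappings() call site quoted above. The local
variable rc is illustrative only and not part of the posted patch.)

    /* PAGE1GB: shatter the superpage and fall through. */
    rc = shatter_l3e(pl3e, 0, locking);  /* rc assumed declared earlier */
    if ( rc )
        return rc;  /* hand the callee's code up unchanged */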
On 11.12.2019 17:34, Xia, Hongyan wrote:
> On Wed, 2019-12-11 at 16:29 +0100, Jan Beulich wrote:
>>> +    }
>>> +
>>> +    if ( locking )
>>> +        spin_lock(&map_pgdir_lock);
>>> +    if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
>>> +         (l3e_get_flags(*pl3e) & _PAGE_PSE) )
>>> +    {
>>> +        l3e_write_atomic(pl3e,
>>> +            l3e_from_paddr((paddr_t)virt_to_maddr(l2t),
>>> __PAGE_HYPERVISOR));
>>
>> Why the cast? (I'm sorry if this was there on v3 already and I
>> didn't spot it. And if this remains the only thing to adjust,
>> then I guess this could be taken care of while committing.)
>
> Sadly there is no l3e_from_maddr or virt_to_paddr to call directly. Of
> course, paddr_t and maddr have the same underlying type (unsigned
> long), so it works without a cast. I just added the cast to make it
> explicit that these two are not exactly the same.

Yes, there continues to be a naming disconnect. But no, this is not a
reason to add a cast. Casts should be used as sparingly as possible,
since they tend to hide problems.

Jan
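(The cast-free spelling being suggested would simply be the following,
assuming, as stated in the exchange above, that the value returned by
virt_to_maddr() converts to paddr_t without loss.)

    /* Rely on the implicit conversion to l3e_from_paddr()'s paddr_t
     * parameter instead of casting at the call site. */
    l3e_write_atomic(pl3e,
        l3e_from_paddr(virt_to_maddr(l2t), __PAGE_HYPERVISOR));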
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7d4dd80a85..97f11b6016 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5151,6 +5151,52 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
          flush_area_local((const void *)v, f) : \
          flush_area_all((const void *)v, f))
 
+/* Shatter an l3 entry and populate l2. If virt is passed in, also do flush. */
+static int shatter_l3e(l3_pgentry_t *pl3e, unsigned long virt, bool locking)
+{
+    unsigned int i;
+    l3_pgentry_t ol3e = *pl3e;
+    l2_pgentry_t l2e = l2e_from_intpte(l3e_get_intpte(ol3e));
+    l2_pgentry_t *l2t = alloc_xen_pagetable();
+
+    if ( !l2t )
+        return -ENOMEM;
+
+    for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+    {
+        l2e_write(l2t + i, l2e);
+        l2e = l2e_from_intpte(
+            l2e_get_intpte(l2e) + (PAGE_SIZE << PAGETABLE_ORDER));
+    }
+
+    if ( locking )
+        spin_lock(&map_pgdir_lock);
+    if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
+         (l3e_get_flags(*pl3e) & _PAGE_PSE) )
+    {
+        l3e_write_atomic(pl3e,
+            l3e_from_paddr((paddr_t)virt_to_maddr(l2t), __PAGE_HYPERVISOR));
+        l2t = NULL;
+    }
+    if ( locking )
+        spin_unlock(&map_pgdir_lock);
+
+    if ( virt )
+    {
+        unsigned int flush_flags =
+            FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
+
+        if ( l3e_get_flags(ol3e) & _PAGE_GLOBAL )
+            flush_flags |= FLUSH_TLB_GLOBAL;
+        flush_area(virt, flush_flags);
+    }
+
+    if ( l2t )
+        free_xen_pagetable(l2t);
+
+    return 0;
+}
+
 int map_pages_to_xen(
     unsigned long virt,
     mfn_t mfn,
@@ -5244,9 +5290,6 @@ int map_pages_to_xen(
         if ( (l3e_get_flags(ol3e) & _PAGE_PRESENT) &&
              (l3e_get_flags(ol3e) & _PAGE_PSE) )
         {
-            unsigned int flush_flags =
-                FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
-
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
                                          L1_PAGETABLE_ENTRIES - 1)) +
@@ -5267,33 +5310,9 @@
                 continue;
             }
 
-            pl2e = alloc_xen_pagetable();
-            if ( pl2e == NULL )
+            /* Pass virt to indicate we need to flush. */
+            if ( shatter_l3e(pl3e, virt, locking) )
                 return -ENOMEM;
-
-            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
-                          l2e_from_pfn(l3e_get_pfn(ol3e) +
-                                       (i << PAGETABLE_ORDER),
-                                       l3e_get_flags(ol3e)));
-
-            if ( l3e_get_flags(ol3e) & _PAGE_GLOBAL )
-                flush_flags |= FLUSH_TLB_GLOBAL;
-
-            if ( locking )
-                spin_lock(&map_pgdir_lock);
-            if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
-                 (l3e_get_flags(*pl3e) & _PAGE_PSE) )
-            {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
-                                                    __PAGE_HYPERVISOR));
-                pl2e = NULL;
-            }
-            if ( locking )
-                spin_unlock(&map_pgdir_lock);
-            flush_area(virt, flush_flags);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5578,27 +5597,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         }
 
         /* PAGE1GB: shatter the superpage and fall through. */
-        pl2e = alloc_xen_pagetable();
-        if ( !pl2e )
+        if ( shatter_l3e(pl3e, 0, locking) )
             return -ENOMEM;
-        for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-            l2e_write(pl2e + i,
-                      l2e_from_pfn(l3e_get_pfn(*pl3e) +
-                                   (i << PAGETABLE_ORDER),
-                                   l3e_get_flags(*pl3e)));
-        if ( locking )
-            spin_lock(&map_pgdir_lock);
-        if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
-             (l3e_get_flags(*pl3e) & _PAGE_PSE) )
-        {
-            l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
-                                                __PAGE_HYPERVISOR));
-            pl2e = NULL;
-        }
-        if ( locking )
-            spin_unlock(&map_pgdir_lock);
-        if ( pl2e )
-            free_xen_pagetable(pl2e);
     }
 
     /*
map_pages_to_xen and modify_xen_mappings are performing almost exactly
the same operations when shattering an l3 PTE, the only difference
being whether we want to flush.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changes in v3:
- style and indentation changes.
- return -ENOMEM instead of -1.

Changes in v2:
- improve asm.
- re-read pl3e from memory when taking the lock.
- move the allocation of l2t inside the shatter function.
---
 xen/arch/x86/mm.c | 98 +++++++++++++++++++++++------------------------
 1 file changed, 49 insertions(+), 49 deletions(-)
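(As the diff above shows, both former open-coded shattering blocks
collapse into a single call to the new helper, differing only in
whether a virtual address is passed to request a flush. The two call
sites, reproduced from the patch:)

    /* In map_pages_to_xen(): pass virt so shatter_l3e() also flushes. */
    if ( shatter_l3e(pl3e, virt, locking) )
        return -ENOMEM;

    /* In modify_xen_mappings(): pass 0, no flush wanted at this point. */
    if ( shatter_l3e(pl3e, 0, locking) )
        return -ENOMEM;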