Message ID: b3126189-2fec-ec14-7129-7897cde980a8@suse.com
State: Superseded
Series: IOMMU: superpage support when not sharing pagetables
On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
> When a page table ends up with all contiguous entries (including all
> identical attributes), it can be replaced by a superpage entry at the
> next higher level. The page table itself can then be scheduled for
> freeing.
>
> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
> for whenever we (and obviously hardware) start supporting 512G mappings.
>
> Note that cache sync-ing is likely more strict than necessary. This is
> both to be on the safe side as well as to maintain the pattern of all
> updates of (potentially) live tables being accompanied by a flush (if so
> needed).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Unlike the freeing of all-empty page tables, this causes quite a bit of
> back and forth for PV domains, due to their mapping/unmapping of pages
> when they get converted to/from being page tables. It may therefore be
> worth considering to delay re-coalescing a little, to avoid doing so
> when the superpage would otherwise get split again pretty soon. But I
> think this would better be the subject of a separate change anyway.
>
> Of course this could also be helped by more "aware" kernel side
> behavior: They could avoid immediately mapping freed page tables
> writable again, in anticipation of re-using that same page for another
> page table elsewhere.

Could we provide an option to select whether to use super-pages for
IOMMU, so that PV domains could keep the previous behavior?

Thanks, Roger.
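To make the coalescing criterion from the quoted description concrete, here is a minimal standalone sketch. It is not Xen's actual pt_update_contig_markers() (which tracks contiguity incrementally via per-entry markers rather than rescanning); the pte_t layout, ADDR_MASK and ATTR_MASK are simplified assumptions for illustration only.

#include <stdbool.h>
#include <stdint.h>

#define PTE_NUM   512                    /* entries per table: 9-bit stride */
#define ADDR_MASK 0x000ffffffffff000ULL  /* bits holding the mapped address */
#define ATTR_MASK (~ADDR_MASK)           /* everything else: attributes,
                                            including the present bit */

typedef struct { uint64_t val; } pte_t;

/*
 * A table can be replaced by a superpage entry at the next higher level
 * iff all of its entries map physically contiguous memory with identical
 * attributes.  "step" is the size mapped by one entry at this level
 * (4k at the leaf level).
 */
static bool table_is_coalescible(const pte_t table[PTE_NUM], uint64_t step)
{
    uint64_t base = table[0].val & ADDR_MASK;
    uint64_t attr = table[0].val & ATTR_MASK;
    unsigned int i;

    for ( i = 1; i < PTE_NUM; ++i )
        if ( (table[i].val & ADDR_MASK) != base + i * step ||
             (table[i].val & ATTR_MASK) != attr )
            return false;

    return true;
}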
On 02.06.2022 11:35, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
>> When a page table ends up with all contiguous entries (including all
>> identical attributes), it can be replaced by a superpage entry at the
>> next higher level. The page table itself can then be scheduled for
>> freeing.
>>
>> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
>> for whenever we (and obviously hardware) start supporting 512G mappings.
>>
>> Note that cache sync-ing is likely more strict than necessary. This is
>> both to be on the safe side as well as to maintain the pattern of all
>> updates of (potentially) live tables being accompanied by a flush (if so
>> needed).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> Unlike the freeing of all-empty page tables, this causes quite a bit of
>> back and forth for PV domains, due to their mapping/unmapping of pages
>> when they get converted to/from being page tables. It may therefore be
>> worth considering to delay re-coalescing a little, to avoid doing so
>> when the superpage would otherwise get split again pretty soon. But I
>> think this would better be the subject of a separate change anyway.
>>
>> Of course this could also be helped by more "aware" kernel side
>> behavior: They could avoid immediately mapping freed page tables
>> writable again, in anticipation of re-using that same page for another
>> page table elsewhere.
>
> Could we provide an option to select whether to use super-pages for
> IOMMU, so that PV domains could keep the previous behavior?

Hmm, I did (a while ago) consider adding a command line option, largely
to have something in case of problems, but here you're asking about a
per-domain setting. Possible, sure, but I have to admit I'm always
somewhat hesitant when it comes to changes that require touching the
tool stack in non-trivial ways (needed in addition to a separate Dom0
control).

It's also not clear what granularity we'd want to allow control at:
just yes/no, or also an upper bound on the page sizes permitted, or
even a map of (dis)allowed page sizes?

Finally, what would the behavior be for HVM guests using shared page
tables? Should the IOMMU option there also suppress CPU-side large
pages? Or should the IOMMU option, when not fulfillable with shared
page tables, lead to use of separate page tables (and an error if
shared page tables were explicitly requested)?

Jan
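The three granularities Jan lists could all be modelled on a single bitmap of allowed page orders, which subsumes both a plain boolean and an upper bound. Every name in this sketch is invented for illustration; no such structure or fields exist in Xen.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-domain IOMMU configuration (not a Xen structure). */
struct iommu_domain_cfg {
    /* Bit n set => pages of 2^(n + 12) bytes are permitted (bit 0 = 4k). */
    uint32_t allowed_page_orders;
};

/* Plain yes/no: anything beyond the mandatory 4k bit enables superpages. */
static bool superpages_enabled(const struct iommu_domain_cfg *cfg)
{
    return cfg->allowed_page_orders & ~1u;
}

/* Upper bound: clear all bits above a maximum order (9 = 2M, 18 = 1G). */
static void cap_page_order(struct iommu_domain_cfg *cfg, unsigned int max)
{
    cfg->allowed_page_orders &= (2u << max) - 1;
}

/* Map of (dis)allowed sizes: query one specific order. */
static bool order_allowed(const struct iommu_domain_cfg *cfg,
                          unsigned int order)
{
    return cfg->allowed_page_orders & (1u << order);
}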
On Thu, Jun 02, 2022 at 11:58:48AM +0200, Jan Beulich wrote:
> On 02.06.2022 11:35, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
> >> When a page table ends up with all contiguous entries (including all
> >> identical attributes), it can be replaced by a superpage entry at the
> >> next higher level. The page table itself can then be scheduled for
> >> freeing.
> >>
> >> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
> >> for whenever we (and obviously hardware) start supporting 512G mappings.
> >>
> >> Note that cache sync-ing is likely more strict than necessary. This is
> >> both to be on the safe side as well as to maintain the pattern of all
> >> updates of (potentially) live tables being accompanied by a flush (if so
> >> needed).
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> >
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> Thanks.
>
> >> ---
> >> Unlike the freeing of all-empty page tables, this causes quite a bit of
> >> back and forth for PV domains, due to their mapping/unmapping of pages
> >> when they get converted to/from being page tables. It may therefore be
> >> worth considering to delay re-coalescing a little, to avoid doing so
> >> when the superpage would otherwise get split again pretty soon. But I
> >> think this would better be the subject of a separate change anyway.
> >>
> >> Of course this could also be helped by more "aware" kernel side
> >> behavior: They could avoid immediately mapping freed page tables
> >> writable again, in anticipation of re-using that same page for another
> >> page table elsewhere.
> >
> > Could we provide an option to select whether to use super-pages for
> > IOMMU, so that PV domains could keep the previous behavior?
>
> Hmm, I did (a while ago) consider adding a command line option, largely
> to have something in case of problems, but here you're asking about a
> per-domain setting. Possible, sure, but I have to admit I'm always
> somewhat hesitant when it comes to changes that require touching the
> tool stack in non-trivial ways (needed in addition to a separate Dom0
> control).

Well, per-domain is always better IMO, but I don't want to block you on
this, so I guess a command line option would be OK.

Per-domain would be helpful in this case because an admin might wish to
disable IOMMU super-pages just for PV guests, in order to prevent the
back-and-forth described above. We could also do so with a command line
option, but that's not the most user-friendly approach.

> It's also not clear what granularity we'd want to allow control at:
> just yes/no, or also an upper bound on the page sizes permitted, or
> even a map of (dis)allowed page sizes?

I would be fine with just yes/no. I don't think we need to complicate
the logic; this should be a fallback in case things don't work as
expected.

> Finally, what would the behavior be for HVM guests using shared page
> tables? Should the IOMMU option there also suppress CPU-side large
> pages? Or should the IOMMU option, when not fulfillable with shared
> page tables, lead to use of separate page tables (and an error if
> shared page tables were explicitly requested)?

I think the option should error out (or be ignored?) when used with
shared page tables; there are already options to control the page sizes
for the CPU-side page tables, and those should be used when page tables
are shared.

Thanks, Roger.
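The error-out semantics Roger suggests for the shared-page-table case could look like the following sketch. The structure and field names are hypothetical, and in real Xen such a check would live in the domain-creation plumbing rather than stand alone.

#include <errno.h>
#include <stdbool.h>

/* Hypothetical domain-creation settings (not Xen's actual interface). */
struct domain_iommu_cfg {
    bool shared_pt;    /* IOMMU shares the CPU page tables (HVM only) */
    int  superpages;   /* -1 = default, 0 = forced off, 1 = forced on */
};

/*
 * Reject an explicit IOMMU superpage setting when page tables are shared
 * with the CPU: the CPU-side page size controls govern that case.
 */
static int check_iommu_superpage_cfg(const struct domain_iommu_cfg *cfg)
{
    if ( cfg->shared_pt && cfg->superpages >= 0 )
        return -EINVAL;

    return 0;
}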
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2219,14 +2219,35 @@ static int __must_check cf_check intel_i
      * While the (ab)use of PTE_kind_table here allows to save some work in
      * the function, the main motivation for it is that it avoids a so far
      * unexplained hang during boot (while preparing Dom0) on a Westmere
-     * based laptop.
+     * based laptop. This also has the intended effect of terminating the
+     * loop when super pages aren't supported anymore at the next level.
      */
-    pt_update_contig_markers(&page->val,
-                             address_level_offset(dfn_to_daddr(dfn), level),
-                             level,
-                             (hd->platform_ops->page_sizes &
-                              (1UL << level_to_offset_bits(level + 1))
-                              ? PTE_kind_leaf : PTE_kind_table));
+    while ( pt_update_contig_markers(&page->val,
+                                     address_level_offset(dfn_to_daddr(dfn), level),
+                                     level,
+                                     (hd->platform_ops->page_sizes &
+                                      (1UL << level_to_offset_bits(level + 1))
+                                      ? PTE_kind_leaf : PTE_kind_table)) )
+    {
+        struct page_info *pg = maddr_to_page(pg_maddr);
+
+        unmap_vtd_domain_page(page);
+
+        new.val &= ~(LEVEL_MASK << level_to_offset_bits(level));
+        dma_set_pte_superpage(new);
+
+        pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), ++level,
+                                          flush_flags, false);
+        BUG_ON(pg_maddr < PAGE_SIZE);
+
+        page = map_vtd_domain_page(pg_maddr);
+        pte = &page[address_level_offset(dfn_to_daddr(dfn), level)];
+        *pte = new;
+        iommu_sync_cache(pte, sizeof(*pte));
+
+        *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+    }
 
     spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);

--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -232,7 +232,7 @@ struct context_entry {
 
 /* page table handling */
 #define LEVEL_STRIDE       (9)
-#define LEVEL_MASK         ((1 << LEVEL_STRIDE) - 1)
+#define LEVEL_MASK         (PTE_NUM - 1UL)
 #define PTE_NUM            (1 << LEVEL_STRIDE)
 #define level_to_agaw(val) ((val) - 2)
 #define agaw_to_level(val) ((val) + 2)
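As a standalone illustration of the LEVEL_MASK hunk: the old ((1 << LEVEL_STRIDE) - 1) definition has type int, so shifting it by the offset bits of a 512G mapping (39) would overflow 32-bit arithmetic, which is the latent trap the commit message mentions. Typing the mask via 1UL keeps the whole expression 64-bit (assuming an LP64 build, as for Xen's x86 hypervisor). The demo below is a hedged sketch, not Xen code.

#include <stdio.h>

#define LEVEL_STRIDE 9
#define PTE_NUM      (1 << LEVEL_STRIDE)
#define LEVEL_MASK   (PTE_NUM - 1UL)   /* unsigned long, as in the patch */

int main(void)
{
    /*
     * Clearing the index bits of a 512G-level mapping, as the patch's
     * "new.val &= ~(LEVEL_MASK << level_to_offset_bits(level))" does.
     * With the old int-typed mask, the shift by 39 would be undefined
     * behaviour; with 1UL it is ordinary 64-bit arithmetic.
     */
    unsigned long addr = 0x123456789abcUL;
    unsigned long cleared = addr & ~(LEVEL_MASK << 39);

    printf("%#lx -> %#lx\n", addr, cleared);

    return 0;
}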