Message ID | b595ffb3231dfef3c6b6c896a8e1cba0e838978c.1550088114.git.khalid.aziz@oracle.com (mailing list archive)
---|---
State | RFC
Series | Add support for eXclusive Page Frame Ownership
On Wed, Feb 13, 2019 at 05:01:27PM -0700, Khalid Aziz wrote:
> +++ b/kernel/dma/swiotlb.c
> @@ -396,8 +396,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
>  {
>  	unsigned long pfn = PFN_DOWN(orig_addr);
>  	unsigned char *vaddr = phys_to_virt(tlb_addr);
> +	struct page *page = pfn_to_page(pfn);
>
> -	if (PageHighMem(pfn_to_page(pfn))) {
> +	if (PageHighMem(page) || xpfo_page_is_unmapped(page)) {

I think this just wants a page_unmapped or similar helper instead of
needing the xpfo_page_is_unmapped check. We actually have quite a few
similar constructs in the arch dma mapping code for architectures that
require cache flushing.

> +bool xpfo_page_is_unmapped(struct page *page)
> +{
> +	struct xpfo *xpfo;
> +
> +	if (!static_branch_unlikely(&xpfo_inited))
> +		return false;
> +
> +	xpfo = lookup_xpfo(page);
> +	if (unlikely(!xpfo) && !xpfo->inited)
> +		return false;
> +
> +	return test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
> +}
> +EXPORT_SYMBOL(xpfo_page_is_unmapped);

And at least for swiotlb there is no need to export this helper, as it
is always built in.
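For context, the recurring construct Christoph points to looks roughly
like the following in arch DMA cache-maintenance code. This is an
illustrative sketch, not a verbatim excerpt; the function name and the
arch_flush_dcache_range() helper are invented here for illustration:

/*
 * Sketch of the pattern common in arch dma mapping code: a page with
 * no permanent kernel mapping must be mapped with kmap_atomic() before
 * its cache lines can be maintained.
 */
static void cache_maint_page_sketch(struct page *page,
				    unsigned long offset, size_t len)
{
	if (PageHighMem(page)) {
		/* No permanent mapping: map temporarily, flush, unmap. */
		void *vaddr = kmap_atomic(page);

		arch_flush_dcache_range(vaddr + offset, len); /* hypothetical */
		kunmap_atomic(vaddr);
	} else {
		/* Lowmem page: already mapped in the kernel physmap. */
		arch_flush_dcache_range(page_address(page) + offset, len);
	}
}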
On 2/14/19 12:47 AM, Christoph Hellwig wrote:
> On Wed, Feb 13, 2019 at 05:01:27PM -0700, Khalid Aziz wrote:
>> +++ b/kernel/dma/swiotlb.c
>> @@ -396,8 +396,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
>>  {
>>  	unsigned long pfn = PFN_DOWN(orig_addr);
>>  	unsigned char *vaddr = phys_to_virt(tlb_addr);
>> +	struct page *page = pfn_to_page(pfn);
>>
>> -	if (PageHighMem(pfn_to_page(pfn))) {
>> +	if (PageHighMem(page) || xpfo_page_is_unmapped(page)) {
>
> I think this just wants a page_unmapped or similar helper instead of
> needing the xpfo_page_is_unmapped check. We actually have quite a few
> similar constructs in the arch dma mapping code for architectures that
> require cache flushing.

As I am not the original author of this patch, I am interpreting the
original intent. I think xpfo_page_is_unmapped() was added to account
for kernel builds without CONFIG_XPFO. xpfo_page_is_unmapped() has an
alternate definition that returns false when CONFIG_XPFO is not
defined. xpfo_page_is_unmapped() is cleaned up further in patch 11
("xpfo, mm: remove dependency on CONFIG_PAGE_EXTENSION") to a one-liner
"return PageXpfoUnmapped(page);". xpfo_page_is_unmapped() can be
eliminated entirely by adding an else clause to the following code
added by that patch:

--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -398,6 +402,15 @@ TESTCLEARFLAG(Young, young, PF_ANY)
 PAGEFLAG(Idle, idle, PF_ANY)
 #endif

+#ifdef CONFIG_XPFO
+PAGEFLAG(XpfoUser, xpfo_user, PF_ANY)
+TESTCLEARFLAG(XpfoUser, xpfo_user, PF_ANY)
+TESTSETFLAG(XpfoUser, xpfo_user, PF_ANY)
+PAGEFLAG(XpfoUnmapped, xpfo_unmapped, PF_ANY)
+TESTCLEARFLAG(XpfoUnmapped, xpfo_unmapped, PF_ANY)
+TESTSETFLAG(XpfoUnmapped, xpfo_unmapped, PF_ANY)
+#endif
+
 /*
  * On an anonymous page mapped into a user virtual memory area,
  * page->mapping points to its anon_vma, not to a struct address_space;

Adding the following #else to the above conditional:

#else
TESTPAGEFLAG_FALSE(XpfoUser)
TESTPAGEFLAG_FALSE(XpfoUnmapped)

should allow us to eliminate xpfo_page_is_unmapped(). Right?

Thanks,
Khalid

>
>> +bool xpfo_page_is_unmapped(struct page *page)
>> +{
>> +	struct xpfo *xpfo;
>> +
>> +	if (!static_branch_unlikely(&xpfo_inited))
>> +		return false;
>> +
>> +	xpfo = lookup_xpfo(page);
>> +	if (unlikely(!xpfo) && !xpfo->inited)
>> +		return false;
>> +
>> +	return test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
>> +}
>> +EXPORT_SYMBOL(xpfo_page_is_unmapped);
>
> And at least for swiotlb there is no need to export this helper, as it
> is always built in.
>
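Putting the two pieces together, the conditional Khalid is proposing
for include/linux/page-flags.h would read roughly as follows (a sketch
of the combined result, not the exact patch). TESTPAGEFLAG_FALSE(X)
generates a PageX() stub that is compile-time false, so calls like
PageXpfoUnmapped(page) vanish when CONFIG_XPFO is off:

#ifdef CONFIG_XPFO
PAGEFLAG(XpfoUser, xpfo_user, PF_ANY)
TESTCLEARFLAG(XpfoUser, xpfo_user, PF_ANY)
TESTSETFLAG(XpfoUser, xpfo_user, PF_ANY)
PAGEFLAG(XpfoUnmapped, xpfo_unmapped, PF_ANY)
TESTCLEARFLAG(XpfoUnmapped, xpfo_unmapped, PF_ANY)
TESTSETFLAG(XpfoUnmapped, xpfo_unmapped, PF_ANY)
#else
TESTPAGEFLAG_FALSE(XpfoUser)
TESTPAGEFLAG_FALSE(XpfoUnmapped)
#endif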
On Thu, Feb 14, 2019 at 09:56:24AM -0700, Khalid Aziz wrote:
> On 2/14/19 12:47 AM, Christoph Hellwig wrote:
> > On Wed, Feb 13, 2019 at 05:01:27PM -0700, Khalid Aziz wrote:
> >> +++ b/kernel/dma/swiotlb.c
> >> @@ -396,8 +396,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
> >>  {
> >>  	unsigned long pfn = PFN_DOWN(orig_addr);
> >>  	unsigned char *vaddr = phys_to_virt(tlb_addr);
> >> +	struct page *page = pfn_to_page(pfn);
> >>
> >> -	if (PageHighMem(pfn_to_page(pfn))) {
> >> +	if (PageHighMem(page) || xpfo_page_is_unmapped(page)) {
> >
> > I think this just wants a page_unmapped or similar helper instead of
> > needing the xpfo_page_is_unmapped check. We actually have quite a few
> > similar constructs in the arch dma mapping code for architectures that
> > require cache flushing.
>
> As I am not the original author of this patch, I am interpreting the
> original intent. I think xpfo_page_is_unmapped() was added to account
> for kernel builds without CONFIG_XPFO. xpfo_page_is_unmapped() has an
> alternate definition that returns false when CONFIG_XPFO is not
> defined. xpfo_page_is_unmapped() is cleaned up further in patch 11
> ("xpfo, mm: remove dependency on CONFIG_PAGE_EXTENSION") to a one-liner
> "return PageXpfoUnmapped(page);". xpfo_page_is_unmapped() can be
> eliminated entirely by adding an else clause to the following code
> added by that patch:

The point I'm making is that just about every PageHighMem() check
before code that does a kmap* later needs to account for xpfo as well.

So instead of opencoding the above, be that using xpfo_page_is_unmapped
or PageXpfoUnmapped, we really need one self-describing helper that
checks if a page is unmapped for any reason and needs a kmap to access
it.
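One possible shape for the self-describing helper Christoph is asking
for (a hypothetical sketch; the name page_needs_kmap() is invented here
and does not appear in the series):

/*
 * Hypothetical helper: true if the page has no kernel mapping for any
 * reason (highmem, or unmapped from the physmap by XPFO) and therefore
 * needs a kmap to be accessed.
 */
static inline bool page_needs_kmap(struct page *page)
{
	return PageHighMem(page) || xpfo_page_is_unmapped(page);
}

The swiotlb_bounce() check above would then collapse to
"if (page_needs_kmap(page))", and the same helper could replace the
other open-coded PageHighMem() checks that precede a kmap.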
On 2/14/19 10:44 AM, Christoph Hellwig wrote:
> On Thu, Feb 14, 2019 at 09:56:24AM -0700, Khalid Aziz wrote:
>> On 2/14/19 12:47 AM, Christoph Hellwig wrote:
>>> On Wed, Feb 13, 2019 at 05:01:27PM -0700, Khalid Aziz wrote:
>>>> +++ b/kernel/dma/swiotlb.c
>>>> @@ -396,8 +396,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
>>>>  {
>>>>  	unsigned long pfn = PFN_DOWN(orig_addr);
>>>>  	unsigned char *vaddr = phys_to_virt(tlb_addr);
>>>> +	struct page *page = pfn_to_page(pfn);
>>>>
>>>> -	if (PageHighMem(pfn_to_page(pfn))) {
>>>> +	if (PageHighMem(page) || xpfo_page_is_unmapped(page)) {
>>>
>>> I think this just wants a page_unmapped or similar helper instead of
>>> needing the xpfo_page_is_unmapped check. We actually have quite a few
>>> similar constructs in the arch dma mapping code for architectures that
>>> require cache flushing.
>>
>> As I am not the original author of this patch, I am interpreting the
>> original intent. I think xpfo_page_is_unmapped() was added to account
>> for kernel builds without CONFIG_XPFO. xpfo_page_is_unmapped() has an
>> alternate definition that returns false when CONFIG_XPFO is not
>> defined. xpfo_page_is_unmapped() is cleaned up further in patch 11
>> ("xpfo, mm: remove dependency on CONFIG_PAGE_EXTENSION") to a
>> one-liner "return PageXpfoUnmapped(page);". xpfo_page_is_unmapped()
>> can be eliminated entirely by adding an else clause to the following
>> code added by that patch:
>
> The point I'm making is that just about every PageHighMem() check
> before code that does a kmap* later needs to account for xpfo as well.
>
> So instead of opencoding the above, be that using xpfo_page_is_unmapped
> or PageXpfoUnmapped, we really need one self-describing helper that
> checks if a page is unmapped for any reason and needs a kmap to access
> it.
>

Understood. XpfoUnmapped is the state for a page while it is a free
page. When this page is allocated to userspace and userspace passes the
page back to the kernel in a syscall, the kernel will always go through
kmap to map it temporarily anyway. When the page is freed back to the
kernel, its mapping in the physmap is restored. If the free page is
allocated to the kernel, its physmap entry is preserved. So I am
inclined to say that a page being XpfoUnmapped should not affect
whether kmap is needed elsewhere. Does that make sense?

Thanks,
Khalid
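The lifecycle Khalid describes can be condensed into the following
summary (a paraphrase of the explanation above; the hook names are the
xpfo_alloc_pages()/xpfo_free_pages() API from this series):

/*
 * XPFO page lifecycle as described above:
 *
 *   allocated to the kernel    -> physmap entry preserved
 *   allocated to userspace     -> xpfo_alloc_pages() unmaps the page
 *                                 from the kernel physmap
 *   kernel touches a user page -> always via kmap()/kmap_atomic(),
 *                                 where xpfo_kmap() maps it temporarily
 *   freed back to the kernel   -> xpfo_free_pages() restores the
 *                                 physmap mapping
 */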
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index b15234745fb4..cba37ffb09b1 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -36,6 +36,8 @@ void xpfo_kunmap(void *kaddr, struct page *page);
 void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp);
 void xpfo_free_pages(struct page *page, int order);

+bool xpfo_page_is_unmapped(struct page *page);
+
 #else /* !CONFIG_XPFO */

 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -43,6 +45,8 @@ static inline void xpfo_kunmap(void *kaddr, struct page *page) { }
 static inline void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp) { }
 static inline void xpfo_free_pages(struct page *page, int order) { }

+static inline bool xpfo_page_is_unmapped(struct page *page) { return false; }
+
 #endif /* CONFIG_XPFO */

 #endif /* _LINUX_XPFO_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 045930e32c0e..820a54b57491 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -396,8 +396,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 {
 	unsigned long pfn = PFN_DOWN(orig_addr);
 	unsigned char *vaddr = phys_to_virt(tlb_addr);
+	struct page *page = pfn_to_page(pfn);

-	if (PageHighMem(pfn_to_page(pfn))) {
+	if (PageHighMem(page) || xpfo_page_is_unmapped(page)) {
 		/* The buffer does not have a mapping.  Map it in and copy */
 		unsigned int offset = orig_addr & ~PAGE_MASK;
 		char *buffer;
diff --git a/mm/xpfo.c b/mm/xpfo.c
index 24b33d3c20cb..67884736bebe 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -221,3 +221,18 @@ void xpfo_kunmap(void *kaddr, struct page *page)
 	spin_unlock(&xpfo->maplock);
 }
 EXPORT_SYMBOL(xpfo_kunmap);
+
+bool xpfo_page_is_unmapped(struct page *page)
+{
+	struct xpfo *xpfo;
+
+	if (!static_branch_unlikely(&xpfo_inited))
+		return false;
+
+	xpfo = lookup_xpfo(page);
+	if (unlikely(!xpfo) && !xpfo->inited)
+		return false;
+
+	return test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
+}
+EXPORT_SYMBOL(xpfo_page_is_unmapped);
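For reference, the branch guarded by the modified check copies the data
through a temporary per-page mapping, roughly like this (paraphrased
from the swiotlb_bounce() implementation of that era; details may
differ from the exact source):

	/* Inside swiotlb_bounce(), when the page has no kernel mapping: */
	while (size) {
		size_t sz = min_t(size_t, PAGE_SIZE - offset, size);
		unsigned long flags;
		char *buffer;

		local_irq_save(flags);
		buffer = kmap_atomic(pfn_to_page(pfn)); /* temporary mapping */
		if (dir == DMA_TO_DEVICE)
			memcpy(vaddr, buffer + offset, sz);
		else
			memcpy(buffer + offset, vaddr, sz);
		kunmap_atomic(buffer);
		local_irq_restore(flags);

		size -= sz;
		pfn++;
		vaddr += sz;
		offset = 0;
	}

This is why the XPFO check belongs next to PageHighMem(): in both cases
the page is not reachable through the kernel direct mapping and must be
bounced through kmap_atomic(), which xpfo_kmap() intercepts to map the
page temporarily.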