
[v10,1/8] mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages

Message ID 20220922163926.7077-2-logang@deltatee.com (mailing list archive)
State Superseded
Series Userspace P2PDMA with O_DIRECT NVMe devices

Commit Message

Logan Gunthorpe Sept. 22, 2022, 4:39 p.m. UTC
GUP callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
allow obtaining P2PDMA pages. If GUP is called without the flag and a
P2PDMA page is found, it will return an error.

FOLL_PCI_P2PDMA cannot be set if FOLL_LONGTERM is set.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 22 +++++++++++++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)
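
For orientation, a rough sketch of how a GUP caller might opt in once this
flag exists (illustrative only; foo_pin_user_buffer() and its surroundings
are made up and are not part of this series):

#include <linux/mm.h>

/*
 * Illustrative sketch: a caller that knows its DMA target can reach PCI
 * P2PDMA memory asks GUP for such pages explicitly.  Without
 * FOLL_PCI_P2PDMA the same call would fail with -EREMOTEIO on a P2PDMA VMA.
 */
static int foo_pin_user_buffer(unsigned long uaddr, int nr_pages,
                               struct page **pages)
{
        unsigned int gup_flags = FOLL_WRITE | FOLL_PCI_P2PDMA;

        /* Per this patch, FOLL_LONGTERM may not be combined with the flag. */
        return pin_user_pages_fast(uaddr, nr_pages, gup_flags, pages);
}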

Comments

Jason Gunthorpe Sept. 23, 2022, 6:13 p.m. UTC | #1
On Thu, Sep 22, 2022 at 10:39:19AM -0600, Logan Gunthorpe wrote:
> GUP Callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
> allow obtaining P2PDMA pages. If GUP is called without the flag and a
> P2PDMA page is found, it will return an error.
> 
> FOLL_PCI_P2PDMA cannot be set if FOLL_LONGTERM is set.

What is causing this? It is really troublesome and I would like to fix
it. eg I would like to have P2PDMA pages in VFIO iommu page tables and
in RDMA MRs - both require longterm.

Is it just because ZONE_DEVICE was created for DAX and carried that
revocable assumption over? Does anything in your series require
revocable?

> @@ -2383,6 +2392,10 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>  		page = pte_page(pte);
>  
> +		if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
> +			     is_pci_p2pdma_page(page)))
> +			goto pte_unmap;
> +
>  		folio = try_grab_folio(page, 1, flags);
>  		if (!folio)
>  			goto pte_unmap;

On closer look this is not in the right place: we cannot touch the
contents of *page without holding a ref, and that doesn't happen until
try_grab_folio() completes.

It would be simpler to put this check in try_grab_folio/try_grab_page
after the ref has been obtained. That will naturally cover all the
places that need it.

Jason
Logan Gunthorpe Sept. 23, 2022, 7:08 p.m. UTC | #2
On 2022-09-23 12:13, Jason Gunthorpe wrote:
> On Thu, Sep 22, 2022 at 10:39:19AM -0600, Logan Gunthorpe wrote:
>> GUP Callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
>> allow obtaining P2PDMA pages. If GUP is called without the flag and a
>> P2PDMA page is found, it will return an error.
>>
>> FOLL_PCI_P2PDMA cannot be set if FOLL_LONGTERM is set.
> 
> What is causing this? It is really troublesome, I would like to fix
> it. eg I would like to have P2PDMA pages in VFIO iommu page tables and
> in RDMA MR's - both require longterm.

You had said it was required if we were relying on unmap_mapping_range()...

https://lore.kernel.org/all/20210928200506.GX3544071@ziepe.ca/T/#u

> Is it just because ZONE_DEVICE was created for DAX and carried that
> revocable assumption over? Does anything in your series require
> revocable?

We still rely on unmap_mapping_range() indirectly in the unbind path.
So I expect that if something takes a LONGTERM mapping, the unbind would
block until whatever process holds the pin releases it. That's less than
ideal and I'm not sure what can be done about it.

>> @@ -2383,6 +2392,10 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>>  		page = pte_page(pte);
>>  
>> +		if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
>> +			     is_pci_p2pdma_page(page)))
>> +			goto pte_unmap;
>> +
>>  		folio = try_grab_folio(page, 1, flags);
>>  		if (!folio)
>>  			goto pte_unmap;
> 
> On closer look this is not in the right place, we cannot touch the
> content of *page without holding a ref, and that doesn't happen until
> until try_grab_folio() completes.
> 
> It would be simpler to put this check in try_grab_folio/try_grab_page
> after the ref has been obtained. That will naturally cover all the
> places that need it.

Ok, I can make that change.

Logan
Jason Gunthorpe Sept. 23, 2022, 7:53 p.m. UTC | #3
On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
> 
> 
> On 2022-09-23 12:13, Jason Gunthorpe wrote:
> > On Thu, Sep 22, 2022 at 10:39:19AM -0600, Logan Gunthorpe wrote:
> >> GUP Callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
> >> allow obtaining P2PDMA pages. If GUP is called without the flag and a
> >> P2PDMA page is found, it will return an error.
> >>
> >> FOLL_PCI_P2PDMA cannot be set if FOLL_LONGTERM is set.
> > 
> > What is causing this? It is really troublesome, I would like to fix
> > it. eg I would like to have P2PDMA pages in VFIO iommu page tables and
> > in RDMA MR's - both require longterm.
> 
> You had said it was required if we were relying on unmap_mapping_range()...

Ah.. Ok.  Dan and I have been talking about this a lot, and it turns
out the DAX approach of unmap_mapping_range() still has problems,
really the same problem as FOLL_LONGTERM:

https://lore.kernel.org/all/Yy2pC%2FupZNEkVmc5@nvidia.com/

ie nothing actually waits for the page refs to go to zero during
memunmap_pages(). (indeed they are not actually zero because currently
they are instantly reset to 1 if they become zero)

The current design requires that the pgmap user hold the pgmap_ref in
a way that it remains elevated until page_free() is called for every
page that was ever used.
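
As an illustration of that contract (not the actual pci_p2pdma code; foo_dev
and its callbacks are invented), a pgmap provider would hold one reference per
outstanding page and only finish teardown once page_free() has run for all of
them:

#include <linux/memremap.h>
#include <linux/percpu-refcount.h>

/* Hypothetical pgmap provider, sketching the contract described above. */
struct foo_dev {
        struct dev_pagemap pgmap;
        struct percpu_ref ref;          /* one hold per page handed out */
};

static void foo_page_free(struct page *page)
{
        struct foo_dev *foo = container_of(page->pgmap, struct foo_dev, pgmap);

        /* Last reference to this page is gone; drop the per-page hold. */
        percpu_ref_put(&foo->ref);
}

static const struct dev_pagemap_ops foo_pgmap_ops = {
        .page_free = foo_page_free,
};

/* Teardown then kills foo->ref and waits for it before memunmap_pages(). */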

I'm encouraging Dan to work on better infrastructure in pgmap core
because every pgmap implementation has this issue currently.

For that reason it is probably not so relevant to this series.

Perhaps just clarify in the commit message that the FOLL_LONGTERM
restriction is to copy DAX until the pgmap page refcounts are fixed.

> > Is it just because ZONE_DEVICE was created for DAX and carried that
> > revocable assumption over? Does anything in your series require
> > revocable?
> 
> We still rely on unmap_mapping_range() indirectly in the unbind
> path. So I expect if something takes a LONGERM mapping that would
> block until whatever process holds the pin releases it. That's less
> than ideal and I'm not sure what can be done about it.

We could improve the blocking with some kind of FOLL_LONGTERM notifier
thingy: eg after the unmap_mapping_range() broadcast that a range of
PFNs is going away, FOLL_LONGTERM users can do a revoke if they
support it. It is rare enough that we don't necessarily need to optimize
this a lot, and blocking unbind until some FDs close is annoying but not
critical.. (eg you already can't unmount a filesystem to unbind the
device on the nvme while FS FDs are open)
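
A purely hypothetical shape for such a notifier (nothing like this exists
today; the names here are invented):

/* Hypothetical: a revoke hook a FOLL_LONGTERM pinner could register. */
struct longterm_pin_notifier {
        /*
         * Called after the unmap_mapping_range() broadcast when the PFN
         * range [pfn, pfn + nr_pages) is going away; pinners that support
         * revoke drop their pins, otherwise unbind keeps blocking.
         */
        void (*revoke)(struct longterm_pin_notifier *ln,
                       unsigned long pfn, unsigned long nr_pages);
};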

Jason
Logan Gunthorpe Sept. 23, 2022, 8:11 p.m. UTC | #4
On 2022-09-23 13:53, Jason Gunthorpe wrote:
> On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
> I'm encouraging Dan to work on better infrastructure in pgmap core
> because every pgmap implementation has this issue currently.
> 
> For that reason it is probably not so relavent to this series.
> 
> Perhaps just clarify in the commit message that the FOLL_LONGTERM
> restriction is to copy DAX until the pgmap page refcounts are fixed.

Ok, I'll add that note.

Regarding the fix for try_grab_page(): to me the check doesn't fit well
in try_grab_page() without doing a bunch of cleanup to change the
error handling, and the same would have to be added to try_grab_folio().
So I think it's better to leave it where it was, but move it below the
respective grab calls. Does the incremental patch below look correct?

I am confused about what happens if neither FOLL_PIN nor FOLL_GET
is set (the documentation for try_grab_x() says that is possible, but
other documentation suggests that FOLL_GET is set automatically).
In that case it would be impossible to do the check, since we can't
access the page.

I'm assuming that, since there are other accesses to the page around these
two try_grab_x() call sites, those spots will always have FOLL_GET
or FOLL_PIN set and thus this isn't an issue. Another reason not
to push the check into try_grab_x().

Logan

--

diff --git a/mm/gup.c b/mm/gup.c
index 108848b67f6f..f05ba3e8e29a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -601,12 +601,6 @@ static struct page *follow_page_pte(struct vm_area_struct >
                goto out;
        }
 
-       if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
-                    is_pci_p2pdma_page(page))) {
-               page = ERR_PTR(-EREMOTEIO);
-               goto out;
-       }
-
        VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
                       !PageAnonExclusive(page), page);
 
@@ -615,6 +609,13 @@ static struct page *follow_page_pte(struct vm_area_struct >
                page = ERR_PTR(-ENOMEM);
                goto out;
        }
+
+       if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page))) {
+               gup_put_folio(page_folio(page), 1, flags);
+               page = ERR_PTR(-EREMOTEIO);
+               goto out;
+       }
+
        /*
         * We need to make the page accessible if and only if we are going
         * to access its content (the FOLL_PIN case).  Please see
@@ -2392,14 +2393,16 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr,>
                VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
                page = pte_page(pte);
 
-               if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
-                            is_pci_p2pdma_page(page)))
-                       goto pte_unmap;
-
                folio = try_grab_folio(page, 1, flags);
                if (!folio)
                        goto pte_unmap;
 
+               if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
+                            is_pci_p2pdma_page(page))) {
+                       gup_put_folio(folio, 1, flags);
+                       goto pte_unmap;
+               }
+
                if (unlikely(page_is_secretmem(page))) {
                        gup_put_folio(folio, 1, flags);
                        goto pte_unmap;
Jason Gunthorpe Sept. 23, 2022, 10:58 p.m. UTC | #5
On Fri, Sep 23, 2022 at 02:11:03PM -0600, Logan Gunthorpe wrote:
> 
> 
> On 2022-09-23 13:53, Jason Gunthorpe wrote:
> > On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
> > I'm encouraging Dan to work on better infrastructure in pgmap core
> > because every pgmap implementation has this issue currently.
> > 
> > For that reason it is probably not so relavent to this series.
> > 
> > Perhaps just clarify in the commit message that the FOLL_LONGTERM
> > restriction is to copy DAX until the pgmap page refcounts are fixed.
> 
> Ok, I'll add that note.
> 
> Per the fix for the try_grab_page(), to me it doesn't fit well in 
> try_grab_page() without doing a bunch of cleanup to change the
> error handling, and the same would have to be added to try_grab_folio().
> So I think it's better to leave it where it was, but move it below the 
> respective grab calls. Does the incremental patch below look correct?

Oh? I was thinking of just a very simple thing:

--- a/mm/gup.c
+++ b/mm/gup.c
@@ -225,6 +225,11 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags)
                node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, 1);
        }
 
+       if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page))) {
+               gup_put_folio(page_folio(page), 1, flags);
+               return false;
+       }
+
        return true;
 }


> I am confused about what happens if neither FOLL_PIN or FOLL_GET 
> are set (which the documentation for try_grab_x() says is possible, but
> other documentation suggests that FOLL_GET is automatically set). 
> In which case it'd be impossible to do the check if we can't 
> access the page.

try_grab_page is operating under the PTL so it can probably touch the
page OK (though perhaps we don't need to even check anything)

try_grab_folio cannot be called without PIN/GET, so like this perhaps:

@@ -123,11 +123,14 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
  */
 struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
+       struct folio *folio;
+
+       if (WARN_ON((flags & (FOLL_GET | FOLL_PIN)) == 0))
+               return NULL;
+
        if (flags & FOLL_GET)
-               return try_get_folio(page, refs);
+               folio = try_get_folio(page, refs);
        else if (flags & FOLL_PIN) {
-               struct folio *folio;
-
                /*
                 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
                 * right zone, so fail and let the caller fall back to the slow
@@ -160,11 +163,14 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
                                        refs * (GUP_PIN_COUNTING_BIAS - 1));
                node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
 
-               return folio;
        }
 
-       WARN_ON_ONCE(1);
-       return NULL;
+       if (folio && unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page))) {
+               gup_put_folio(folio, refs, flags);
+               return NULL;
+       }
+
+       return folio;
 }

Jason
Logan Gunthorpe Sept. 23, 2022, 11:01 p.m. UTC | #6
On 2022-09-23 16:58, Jason Gunthorpe wrote:
> On Fri, Sep 23, 2022 at 02:11:03PM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 2022-09-23 13:53, Jason Gunthorpe wrote:
>>> On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
>>> I'm encouraging Dan to work on better infrastructure in pgmap core
>>> because every pgmap implementation has this issue currently.
>>>
>>> For that reason it is probably not so relavent to this series.
>>>
>>> Perhaps just clarify in the commit message that the FOLL_LONGTERM
>>> restriction is to copy DAX until the pgmap page refcounts are fixed.
>>
>> Ok, I'll add that note.
>>
>> Per the fix for the try_grab_page(), to me it doesn't fit well in 
>> try_grab_page() without doing a bunch of cleanup to change the
>> error handling, and the same would have to be added to try_grab_folio().
>> So I think it's better to leave it where it was, but move it below the 
>> respective grab calls. Does the incremental patch below look correct?
> 
> Oh? I was thinking of just a very simple thing:

I'd really like it to return -EREMOTEIO instead of -ENOMEM, as that's the
error used for a bad P2PDMA page everywhere.

Plus there's the concern that some of the call sites of try_grab_page() might not
have a get or a pin, and thus it's not safe, which was the whole point of the
change anyway.

Plus we have to do the same for try_grab_folio().

Logan
Jason Gunthorpe Sept. 23, 2022, 11:07 p.m. UTC | #7
On Fri, Sep 23, 2022 at 05:01:26PM -0600, Logan Gunthorpe wrote:
> 
> 
> 
> On 2022-09-23 16:58, Jason Gunthorpe wrote:
> > On Fri, Sep 23, 2022 at 02:11:03PM -0600, Logan Gunthorpe wrote:
> >>
> >>
> >> On 2022-09-23 13:53, Jason Gunthorpe wrote:
> >>> On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
> >>> I'm encouraging Dan to work on better infrastructure in pgmap core
> >>> because every pgmap implementation has this issue currently.
> >>>
> >>> For that reason it is probably not so relavent to this series.
> >>>
> >>> Perhaps just clarify in the commit message that the FOLL_LONGTERM
> >>> restriction is to copy DAX until the pgmap page refcounts are fixed.
> >>
> >> Ok, I'll add that note.
> >>
> >> Per the fix for the try_grab_page(), to me it doesn't fit well in 
> >> try_grab_page() without doing a bunch of cleanup to change the
> >> error handling, and the same would have to be added to try_grab_folio().
> >> So I think it's better to leave it where it was, but move it below the 
> >> respective grab calls. Does the incremental patch below look correct?
> > 
> > Oh? I was thinking of just a very simple thing:
> 
> Really would like it to return -EREMOTEIO instead of -ENOMEM as that's the
> error used for bad P2PDMA page everywhere.

I'd rather not see GUP made more fragile just for that..

> Plus the concern that some of the callsites of try_grab_page() might not have
> a get or a pin and thus it's not safe which was the whole point of the change
> anyway.

try_grab_page() calls folio_ref_inc(), that is only legal if it knows
the page is already a valid pointer under the PTLs, so it is safe to
check the pgmap as well.

Jason
Logan Gunthorpe Sept. 23, 2022, 11:14 p.m. UTC | #8
On 2022-09-23 17:07, Jason Gunthorpe wrote:
> On Fri, Sep 23, 2022 at 05:01:26PM -0600, Logan Gunthorpe wrote:
>>
>>
>>
>> On 2022-09-23 16:58, Jason Gunthorpe wrote:
>>> On Fri, Sep 23, 2022 at 02:11:03PM -0600, Logan Gunthorpe wrote:
>>>>
>>>>
>>>> On 2022-09-23 13:53, Jason Gunthorpe wrote:
>>>>> On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
>>>>> I'm encouraging Dan to work on better infrastructure in pgmap core
>>>>> because every pgmap implementation has this issue currently.
>>>>>
>>>>> For that reason it is probably not so relavent to this series.
>>>>>
>>>>> Perhaps just clarify in the commit message that the FOLL_LONGTERM
>>>>> restriction is to copy DAX until the pgmap page refcounts are fixed.
>>>>
>>>> Ok, I'll add that note.
>>>>
>>>> Per the fix for the try_grab_page(), to me it doesn't fit well in 
>>>> try_grab_page() without doing a bunch of cleanup to change the
>>>> error handling, and the same would have to be added to try_grab_folio().
>>>> So I think it's better to leave it where it was, but move it below the 
>>>> respective grab calls. Does the incremental patch below look correct?
>>>
>>> Oh? I was thinking of just a very simple thing:
>>
>> Really would like it to return -EREMOTEIO instead of -ENOMEM as that's the
>> error used for bad P2PDMA page everywhere.
> 
> I'd rather not see GUP made more fragile just for that..

Not sure how that's more fragile... Your way seems more dangerous given
the large number of call sites we are adding it to when it might not apply.

> 
>> Plus the concern that some of the callsites of try_grab_page() might not have
>> a get or a pin and thus it's not safe which was the whole point of the change
>> anyway.
> 
> try_grab_page() calls folio_ref_inc(), that is only legal if it knows
> the page is already a valid pointer under the PTLs, so it is safe to
> check the pgmap as well.

My point is it doesn't get a reference or a pin unless FOLL_PIN or FOLL_GET is
set and the documentation states that neither might be set, in which case 
folio_ref_inc() will not be called...


Logan
Jason Gunthorpe Sept. 23, 2022, 11:21 p.m. UTC | #9
On Fri, Sep 23, 2022 at 05:14:11PM -0600, Logan Gunthorpe wrote:
> 
> 
> On 2022-09-23 17:07, Jason Gunthorpe wrote:
> > On Fri, Sep 23, 2022 at 05:01:26PM -0600, Logan Gunthorpe wrote:
> >>
> >>
> >>
> >> On 2022-09-23 16:58, Jason Gunthorpe wrote:
> >>> On Fri, Sep 23, 2022 at 02:11:03PM -0600, Logan Gunthorpe wrote:
> >>>>
> >>>>
> >>>> On 2022-09-23 13:53, Jason Gunthorpe wrote:
> >>>>> On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
> >>>>> I'm encouraging Dan to work on better infrastructure in pgmap core
> >>>>> because every pgmap implementation has this issue currently.
> >>>>>
> >>>>> For that reason it is probably not so relavent to this series.
> >>>>>
> >>>>> Perhaps just clarify in the commit message that the FOLL_LONGTERM
> >>>>> restriction is to copy DAX until the pgmap page refcounts are fixed.
> >>>>
> >>>> Ok, I'll add that note.
> >>>>
> >>>> Per the fix for the try_grab_page(), to me it doesn't fit well in 
> >>>> try_grab_page() without doing a bunch of cleanup to change the
> >>>> error handling, and the same would have to be added to try_grab_folio().
> >>>> So I think it's better to leave it where it was, but move it below the 
> >>>> respective grab calls. Does the incremental patch below look correct?
> >>>
> >>> Oh? I was thinking of just a very simple thing:
> >>
> >> Really would like it to return -EREMOTEIO instead of -ENOMEM as that's the
> >> error used for bad P2PDMA page everywhere.
> > 
> > I'd rather not see GUP made more fragile just for that..
> 
> Not sure how that's more fragile... You're way seems more dangerous given
> the large number of call sites we are adding it to when it might not
> apply.

No, that is the point, it *always* applies. A devmap struct page of
the wrong type should never exit gup, from any path, no matter what.

We have two central functions that validate a page is OK to return,
that *everyone* must call.

If you don't put it there then we will probably miss copying it into a
call site eventually.

> > try_grab_page() calls folio_ref_inc(), that is only legal if it knows
> > the page is already a valid pointer under the PTLs, so it is safe to
> > check the pgmap as well.
> 
> My point is it doesn't get a reference or a pin unless FOLL_PIN or FOLL_GET is
> set and the documentation states that neither might be set, in which case 
> folio_ref_inc() will not be called...

That isn't how GUP is structured; all the calls to try_grab_page() are
in places where PIN/GET might be set and are safe for that usage.

If we know PIN/GET is not set then we don't even need to call the
function because it is a NOP.

Jason
Logan Gunthorpe Sept. 23, 2022, 11:35 p.m. UTC | #10
On 2022-09-23 17:21, Jason Gunthorpe wrote:
> On Fri, Sep 23, 2022 at 05:14:11PM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 2022-09-23 17:07, Jason Gunthorpe wrote:
>>> On Fri, Sep 23, 2022 at 05:01:26PM -0600, Logan Gunthorpe wrote:
>>>>
>>>>
>>>>
>>>> On 2022-09-23 16:58, Jason Gunthorpe wrote:
>>>>> On Fri, Sep 23, 2022 at 02:11:03PM -0600, Logan Gunthorpe wrote:
>>>>>>
>>>>>>
>>>>>> On 2022-09-23 13:53, Jason Gunthorpe wrote:
>>>>>>> On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
>>>>>>> I'm encouraging Dan to work on better infrastructure in pgmap core
>>>>>>> because every pgmap implementation has this issue currently.
>>>>>>>
>>>>>>> For that reason it is probably not so relavent to this series.
>>>>>>>
>>>>>>> Perhaps just clarify in the commit message that the FOLL_LONGTERM
>>>>>>> restriction is to copy DAX until the pgmap page refcounts are fixed.
>>>>>>
>>>>>> Ok, I'll add that note.
>>>>>>
>>>>>> Per the fix for the try_grab_page(), to me it doesn't fit well in 
>>>>>> try_grab_page() without doing a bunch of cleanup to change the
>>>>>> error handling, and the same would have to be added to try_grab_folio().
>>>>>> So I think it's better to leave it where it was, but move it below the 
>>>>>> respective grab calls. Does the incremental patch below look correct?
>>>>>
>>>>> Oh? I was thinking of just a very simple thing:
>>>>
>>>> Really would like it to return -EREMOTEIO instead of -ENOMEM as that's the
>>>> error used for bad P2PDMA page everywhere.
>>>
>>> I'd rather not see GUP made more fragile just for that..
>>
>> Not sure how that's more fragile... You're way seems more dangerous given
>> the large number of call sites we are adding it to when it might not
>> apply.
> 
> No, that is the point, it *always* applies. A devmap struct page of
> the wrong type should never exit gup, from any path, no matter what.
> 
> We have two central functions that validate a page is OK to return,
> that *everyone* must call.
> 
> If you don't put it there then we will probably miss copying it into a
> call site eventually.

Most of the call sites don't apply though, with huge pages and gate pages...

>>> try_grab_page() calls folio_ref_inc(), that is only legal if it knows
>>> the page is already a valid pointer under the PTLs, so it is safe to
>>> check the pgmap as well.
>>
>> My point is it doesn't get a reference or a pin unless FOLL_PIN or FOLL_GET is
>> set and the documentation states that neither might be set, in which case 
>> folio_ref_inc() will not be called...
> 
> That isn't how GUP is structured, all the calls to try_grab_page() are
> in places where PIN/GET might be set and are safe for that usage.
> 
> If we know PIN/GET is not set then we don't even need to call the
> function because it is a NOP.

That's not what the documentation for the function says:

"Either FOLL_PIN or FOLL_GET (or neither) may be set... Return: true for success, 
 or if no action was required (if neither FOLL_PIN nor FOLL_GET was set, nothing 
 is done)."

https://elixir.bootlin.com/linux/v6.0-rc6/source/mm/gup.c#L194

Logan
Logan Gunthorpe Sept. 23, 2022, 11:51 p.m. UTC | #11
On 2022-09-23 17:21, Jason Gunthorpe wrote:
> On Fri, Sep 23, 2022 at 05:14:11PM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 2022-09-23 17:07, Jason Gunthorpe wrote:
>>> On Fri, Sep 23, 2022 at 05:01:26PM -0600, Logan Gunthorpe wrote:
>>>>
>>>>
>>>>
>>>> On 2022-09-23 16:58, Jason Gunthorpe wrote:
>>>>> On Fri, Sep 23, 2022 at 02:11:03PM -0600, Logan Gunthorpe wrote:
>>>>>>
>>>>>>
>>>>>> On 2022-09-23 13:53, Jason Gunthorpe wrote:
>>>>>>> On Fri, Sep 23, 2022 at 01:08:31PM -0600, Logan Gunthorpe wrote:
>>>>>>> I'm encouraging Dan to work on better infrastructure in pgmap core
>>>>>>> because every pgmap implementation has this issue currently.
>>>>>>>
>>>>>>> For that reason it is probably not so relavent to this series.
>>>>>>>
>>>>>>> Perhaps just clarify in the commit message that the FOLL_LONGTERM
>>>>>>> restriction is to copy DAX until the pgmap page refcounts are fixed.
>>>>>>
>>>>>> Ok, I'll add that note.
>>>>>>
>>>>>> Per the fix for the try_grab_page(), to me it doesn't fit well in 
>>>>>> try_grab_page() without doing a bunch of cleanup to change the
>>>>>> error handling, and the same would have to be added to try_grab_folio().
>>>>>> So I think it's better to leave it where it was, but move it below the 
>>>>>> respective grab calls. Does the incremental patch below look correct?
>>>>>
>>>>> Oh? I was thinking of just a very simple thing:
>>>>
>>>> Really would like it to return -EREMOTEIO instead of -ENOMEM as that's the
>>>> error used for bad P2PDMA page everywhere.
>>>
>>> I'd rather not see GUP made more fragile just for that..

And on further consideration I really think the correct error return is
important here. This will be a user-facing error that'll be easy enough
to hit: think of code that might be run on any file; if the file is
hosted on a block device that doesn't support P2PDMA, the user
will see the very uninformative "Cannot allocate memory" error.

Userspace code that's written for this purpose can look at the EREMOTEIO error
and tell the user something useful, if we return the correct error.
If we return ENOMEM in this case, that is not possible because
lots of things might have caused that error.
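
As a sketch of the userspace-visible difference (illustrative only; assumes
an O_DIRECT fd and a buffer mmap()ed from another device's P2PDMA memory):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative only: purpose-built userspace can react to EREMOTEIO. */
static ssize_t p2p_write(int fd, const void *p2p_buf, size_t len)
{
        ssize_t ret = write(fd, p2p_buf, len);

        if (ret < 0 && errno == EREMOTEIO)
                fprintf(stderr,
                        "block device cannot DMA to/from this P2PDMA buffer\n");
        return ret;
}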

Logan
Jason Gunthorpe Sept. 26, 2022, 10:57 p.m. UTC | #12
On Fri, Sep 23, 2022 at 05:51:49PM -0600, Logan Gunthorpe wrote:

> And on further consideration I really think the correct error return is 
> important here. This will be a user facing error that'll be easy enough
> to hit: think code that might be run on any file and if the file is 
> hosted on a block device that doesn't support P2PDMA then the user
> will see the very uninformative "Cannot allocate memory" error.
> 
> Userspace code that's written for purpose can look at the EREMOTEIO error
> and tell the user something useful, if we return the correct error.
> If we return ENOMEM in this case, that is not possible because
> lots of things might have caused that error.

That is reasonable, but I'd still prefer to see it done more
centrally.

>> If we know PIN/GET is not set then we don't even need to call the
>> function because it is a NOP.

> That's not what the documentation for the function says:

> "Either FOLL_PIN or FOLL_GET (or neither) may be set... Return: true for success,
>  or if no action was required (if neither FOLL_PIN nor FOLL_GET was set, nothing
>  is done)."

I mean, the way the code is structured, the PIN/GET/0 decision is made at the
top of the call chain and then the call chain is run. All the
call sites of try_grab_page() must be safe to call under FOLL_PIN
because their caller is making the decision what flag to use.

Jason
Logan Gunthorpe Sept. 28, 2022, 9:38 p.m. UTC | #13
On 2022-09-26 16:57, Jason Gunthorpe wrote:
> On Fri, Sep 23, 2022 at 05:51:49PM -0600, Logan Gunthorpe wrote:
>> Userspace code that's written for purpose can look at the EREMOTEIO error
>> and tell the user something useful, if we return the correct error.
>> If we return ENOMEM in this case, that is not possible because
>> lots of things might have caused that error.
> 
> That is reasonable, but I'd still prefer to see it done more
> centrally.
> 
> I mean the way the code is structured is at the top of the call chain
> the PIN/GET/0 is decided and then the callchain is run. All the
> callsites of try_grab_page() must be safe to call under FOLL_PIN
> because their caller is making the decision what flag to use.

Ok, so I've done some auditing here.

I've convinced myself it's safe to access the page before incrementing
the reference:

 * In the try_grab_page() case it must be safe, as all call sites do seem
to be called under the appropriate ptl or mmap_lock (though this is hard
to audit). It also already touches the page struct in any case, to take
the reference.
 * In the try_grab_folio() case there is already a similar
FOLL_LONGTERM check in that function *before* getting the reference, and
the page should be stable due to the existing gup fast guarantees.

So we don't need to do the check after we have the reference and release
it when it fails. This simplifies things.

Moving the check into try_grab_x() should be possible with some cleanup.

For try_grab_page(), there are a few call sites that WARN_ON if it
fails, assuming it cannot fail since the page is stable.
try_grab_page() already has a WARN_ON on failure, so it appears fine to
remove the second WARN_ON and add a new failure path that doesn't WARN.

For try_grab_folio() there's one call site in follow_hugetlb_page() that
assumes success and warns on failure; but this call site only applies to
hugetlb pages, which should never be P2PDMA pages (nor non-longterm pages,
which is another existing failure path). So I've added a note in the
comment with a couple other conditions that should not be possible.

I expect this work is way too late for the merge window now so I'll send
v11 after the window. In the meantime, if you want to do a quick review
on the first two patches, it would speed things up if there are obvious
changes. You can see these patches on this git branch:

  https://github.com/sbates130272/linux-p2pmem/  p2pdma_user_cmb_v11pre

Thanks,

Logan

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 21f8b27bd9fd..3cea77c8a9ea 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2897,6 +2897,7 @@  struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
 #define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
 #define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
+#define FOLL_PCI_P2PDMA	0x100000 /* allow returning PCI P2PDMA pages */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 5abdaf487460..108848b67f6f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -601,6 +601,12 @@  static struct page *follow_page_pte(struct vm_area_struct *vma,
 		goto out;
 	}
 
+	if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
+		     is_pci_p2pdma_page(page))) {
+		page = ERR_PTR(-EREMOTEIO);
+		goto out;
+	}
+
 	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
 		       !PageAnonExclusive(page), page);
 
@@ -1039,6 +1045,9 @@  static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
+	if ((gup_flags & FOLL_LONGTERM) && (gup_flags & FOLL_PCI_P2PDMA))
+		return -EOPNOTSUPP;
+
 	if (vma_is_secretmem(vma))
 		return -EFAULT;
 
@@ -2383,6 +2392,10 @@  static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
+		if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
+			     is_pci_p2pdma_page(page)))
+			goto pte_unmap;
+
 		folio = try_grab_folio(page, 1, flags);
 		if (!folio)
 			goto pte_unmap;
@@ -2462,6 +2475,12 @@  static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 			undo_dev_pagemap(nr, nr_start, flags, pages);
 			break;
 		}
+
+		if (!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
+			undo_dev_pagemap(nr, nr_start, flags, pages);
+			break;
+		}
+
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		if (unlikely(!try_grab_page(page, flags))) {
@@ -2950,7 +2969,8 @@  static int internal_get_user_pages_fast(unsigned long start,
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
 				       FOLL_FORCE | FOLL_PIN | FOLL_GET |
-				       FOLL_FAST_ONLY | FOLL_NOFAULT)))
+				       FOLL_FAST_ONLY | FOLL_NOFAULT |
+				       FOLL_PCI_P2PDMA)))
 		return -EINVAL;
 
 	if (gup_flags & FOLL_PIN)