
drm/vma-manager: Don't unmap COW'd pages when zapping bo ptes

Message ID 1384941349-3550-1-git-send-email-thellstrom@vmware.com (mailing list archive)
State New, archived

Commit Message

Thomas Hellstrom Nov. 20, 2013, 9:55 a.m. UTC
Not sure if there are any user-space users of private bo mappings, but
if there are, or will be, zapping the COW'd pages when, for example,
moving a bo would confuse the user immensely: the net effect for the
user would be that pages they had written to silently lose their contents.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 include/drm/drm_vma_manager.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Comments

David Herrmann Nov. 20, 2013, 10:14 a.m. UTC | #1
Hi

On Wed, Nov 20, 2013 at 10:55 AM, Thomas Hellstrom
<thellstrom@vmware.com> wrote:
> Not sure if there are any user-space users of private bo mappings, but
> if there are, or will be, zapping the COW'd pages when, for example,
> moving a bo would confuse the user immensely since the net effect for the
> user would be that pages written to would lose their contents.

You're only talking about moving bos, but what happens when you evict
a bo? You *must* clear COW mappings, too. Otherwise, they might still
get read-access to the evicted bo.

Note that we currently cause SIGBUS on access to mmap'd bos if they
are evicted and cannot be restored (think: user-space mapped the
VESA-FB but a real driver took over). So maybe we want two independent
helpers here?

Thanks
David

> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> ---
>  include/drm/drm_vma_manager.h |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/include/drm/drm_vma_manager.h b/include/drm/drm_vma_manager.h
> index c18a593..347077d 100644
> --- a/include/drm/drm_vma_manager.h
> +++ b/include/drm/drm_vma_manager.h
> @@ -231,9 +231,9 @@ static inline void drm_vma_node_unmap(struct drm_vma_offset_node *node,
>                                       struct address_space *file_mapping)
>  {
>         if (file_mapping && drm_vma_node_has_offset(node))
> -               unmap_mapping_range(file_mapping,
> -                                   drm_vma_node_offset_addr(node),
> -                                   drm_vma_node_size(node) << PAGE_SHIFT, 1);
> +               unmap_shared_mapping_range
> +                       (file_mapping, drm_vma_node_offset_addr(node),
> +                        drm_vma_node_size(node) << PAGE_SHIFT);
>  }
>
>  /**
> --
> 1.7.10.4
Thomas Hellstrom Nov. 20, 2013, 10:38 a.m. UTC | #2
On 11/20/2013 11:14 AM, David Herrmann wrote:
> Hi
>
> On Wed, Nov 20, 2013 at 10:55 AM, Thomas Hellstrom
> <thellstrom@vmware.com> wrote:
>> Not sure if there are any user-space users of private bo mappings, but
>> if there are, or will be, zapping the COW'd pages when, for example,
>> moving a bo would confuse the user immensely since the net effect for the
>> user would be that pages written to would lose their contents.
> You're only talking about moving bos, but what happens when you evict
> a bo? You *must* clear COW mappings, too. Otherwise, they might still
> get read-access to the evicted bo.

Hmm. When I talk about COW'd pages, I mean individual anonymous pages.
If there is a COW mapping left in a VMA, it points to an anonymous page 
and will never access the evicted bo.
All (read-only-enabled) ptes pointing to the evicted bo would still be 
zapped, so I'm afraid I don't understand your comment?

>
> Note that we currently cause SIGBUS on access to mmap'd bos if they
> are evicted and cannot be restored (think: user-space mapped the
> VESA-FB but a real driver took over). So maybe we want two independent
> helpers here?
It shouldn't be needed.

Thanks,
Thomas


>
> Thanks
> David
>
>> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
>> ---
>>   include/drm/drm_vma_manager.h |    6 +++---
>>   1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/drm/drm_vma_manager.h b/include/drm/drm_vma_manager.h
>> index c18a593..347077d 100644
>> --- a/include/drm/drm_vma_manager.h
>> +++ b/include/drm/drm_vma_manager.h
>> @@ -231,9 +231,9 @@ static inline void drm_vma_node_unmap(struct drm_vma_offset_node *node,
>>                                        struct address_space *file_mapping)
>>   {
>>          if (file_mapping && drm_vma_node_has_offset(node))
>> -               unmap_mapping_range(file_mapping,
>> -                                   drm_vma_node_offset_addr(node),
>> -                                   drm_vma_node_size(node) << PAGE_SHIFT, 1);
>> +               unmap_shared_mapping_range
>> +                       (file_mapping, drm_vma_node_offset_addr(node),
>> +                        drm_vma_node_size(node) << PAGE_SHIFT);
>>   }
>>
>>   /**
>> --
>> 1.7.10.4
Daniel Vetter Nov. 20, 2013, 2:24 p.m. UTC | #3
On Wed, Nov 20, 2013 at 01:55:49AM -0800, Thomas Hellstrom wrote:
> Not sure if there are any user-space users of private bo mappings, but
> if there are, or will be, zapping the COW'd pages when, for example,
> moving a bo would confuse the user immensely since the net effect for the
> user would be that pages written to would lose their contents.
> 
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>

Presuming I'm not horribly confused about what all the vm slang in the
kerneldoc means, this change is

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Now I still hold that userspace creating anonymous bo mappings is rather
crazy, but meh ;-) I guess the real question is whether we have anyone
relying on this out there (or planning to), in which case we either need to
funnel this through stable kernels or whack a drm feature flag onto
drivers/kernels with this fixed. But I seriously hope the answer is no.

Cheers, Daniel
> ---
>  include/drm/drm_vma_manager.h |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/include/drm/drm_vma_manager.h b/include/drm/drm_vma_manager.h
> index c18a593..347077d 100644
> --- a/include/drm/drm_vma_manager.h
> +++ b/include/drm/drm_vma_manager.h
> @@ -231,9 +231,9 @@ static inline void drm_vma_node_unmap(struct drm_vma_offset_node *node,
>  				      struct address_space *file_mapping)
>  {
>  	if (file_mapping && drm_vma_node_has_offset(node))
> -		unmap_mapping_range(file_mapping,
> -				    drm_vma_node_offset_addr(node),
> -				    drm_vma_node_size(node) << PAGE_SHIFT, 1);
> +		unmap_shared_mapping_range
> +			(file_mapping, drm_vma_node_offset_addr(node),
> +			 drm_vma_node_size(node) << PAGE_SHIFT);
>  }
>  
>  /**
> -- 
> 1.7.10.4
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
Thomas Hellstrom Nov. 20, 2013, 10:35 p.m. UTC | #4
On 11/20/2013 03:24 PM, Daniel Vetter wrote:
> On Wed, Nov 20, 2013 at 01:55:49AM -0800, Thomas Hellstrom wrote:
>> Not sure if there are any user-space users of private bo mappings, but
>> if there are, or will be, zapping the COW'd pages when, for example,
>> moving a bo would confuse the user immensely since the net effect for the
>> user would be that pages written to would lose their contents.
>>
>> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> Presuming I'm not horribly confused about that all the vm slang in the
> kerneldoc means this changes is
>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Now I still hold that userspace creating anynomous bo mappings is rather
> crazy, but meh ;-) I guess the real question is whether we have anyone
> relying on this out there (or planing to), in which case we either need to
> funnel this through stable kernels or whack a drm feature flag onto
> drivers/kernels with this fixed. But I seriously hope the answer is no.
>
> Cheers, Daniel
>
Thanks for reviewing. I don't think there's a need to take this through 
stable, since I don't know of anyone using private VMAs.

Actually, I'll need to hold off on this for a while, since on some archs 
this may cause ptes of shared pages to not be zapped. If the arch doesn't 
have PTE_SPECIAL, shared pages in MIXEDMAP vmas will be treated as normal 
pages by vm_normal_page(), and since page->mapping is usually left at NULL 
by our drivers, this will result in false positives for COW'd pages.

So either that test is buggy, or our not setting page->mapping to the 
mapping we use is buggy. It's too late in the day to decide which.

/Thomas
Daniel Vetter Nov. 21, 2013, 8:18 a.m. UTC | #5
On Wed, Nov 20, 2013 at 11:35:31PM +0100, Thomas Hellstrom wrote:
> On 11/20/2013 03:24 PM, Daniel Vetter wrote:
> >On Wed, Nov 20, 2013 at 01:55:49AM -0800, Thomas Hellstrom wrote:
> >>Not sure if there are any user-space users of private bo mappings, but
> >>if there are, or will be, zapping the COW'd pages when, for example,
> >>moving a bo would confuse the user immensely since the net effect for the
> >>user would be that pages written to would lose their contents.
> >>
> >>Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> >Presuming I'm not horribly confused about that all the vm slang in the
> >kerneldoc means this changes is
> >
> >Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> >
> >Now I still hold that userspace creating anynomous bo mappings is rather
> >crazy, but meh ;-) I guess the real question is whether we have anyone
> >relying on this out there (or planing to), in which case we either need to
> >funnel this through stable kernels or whack a drm feature flag onto
> >drivers/kernels with this fixed. But I seriously hope the answer is no.
> >
> >Cheers, Daniel
> >
> Thanks for reviewing. I don't think there's a need to take this
> through stable, since I don't know of anyone using private VMAs,
> 
> Actually I'll need to hold off on this for a while since on some
> archs this may cause
> ptes of shared pages to not be zapped. If the arch doesn't have
> PTE_SPECIAL, shared pages on MIXEDMAP vmas will come through as
> vm_normal_page, and since page->mapping is usually (un)set to NULL
> by our drivers, this will result in false positives for COW'ed
> pages.
> 
> So that test is buggy, or us not setting page->mapping to the
> mapping we use is buggy. It's too late in the day to decide which.

On second thought, we also use this helper when zapping a bo when
deleting it or purging its backing storage. And there I'd say it would be
consistent with normal semantics to also zap COW pages. So I'm no longer
sure this is the right approach in general. Maybe we should instead wire
up the switch, but I'm not sure whether that's indeed worth the effort.
-Daniel
Thomas Hellstrom Nov. 21, 2013, 8:29 a.m. UTC | #6
On 11/21/2013 09:18 AM, Daniel Vetter wrote:
> On Wed, Nov 20, 2013 at 11:35:31PM +0100, Thomas Hellstrom wrote:
>> On 11/20/2013 03:24 PM, Daniel Vetter wrote:
>>> On Wed, Nov 20, 2013 at 01:55:49AM -0800, Thomas Hellstrom wrote:
>>>> Not sure if there are any user-space users of private bo mappings, but
>>>> if there are, or will be, zapping the COW'd pages when, for example,
>>>> moving a bo would confuse the user immensely since the net effect for the
>>>> user would be that pages written to would lose their contents.
>>>>
>>>> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
>>> Presuming I'm not horribly confused about that all the vm slang in the
>>> kerneldoc means this changes is
>>>
>>> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>
>>> Now I still hold that userspace creating anynomous bo mappings is rather
>>> crazy, but meh ;-) I guess the real question is whether we have anyone
>>> relying on this out there (or planing to), in which case we either need to
>>> funnel this through stable kernels or whack a drm feature flag onto
>>> drivers/kernels with this fixed. But I seriously hope the answer is no.
>>>
>>> Cheers, Daniel
>>>
>> Thanks for reviewing. I don't think there's a need to take this
>> through stable, since I don't know of anyone using private VMAs,
>>
>> Actually I'll need to hold off on this for a while since on some
>> archs this may cause
>> ptes of shared pages to not be zapped. If the arch doesn't have
>> PTE_SPECIAL, shared pages on MIXEDMAP vmas will come through as
>> vm_normal_page, and since page->mapping is usually (un)set to NULL
>> by our drivers, this will result in false positives for COW'ed
>> pages.
>>
>> So that test is buggy, or us not setting page->mapping to the
>> mapping we use is buggy. It's too late in the day to decide which.
> On second thought we also use this helper when zapping a bo when
> deleting it or purging it's backing storage. And there I'd say it would be
> consistent with normal semantics to also zap cow pages.

Agreed. That's consistent with truncation. Perhaps this was what David 
meant.
Out of interest, why do you delete a bo or purge backing store before 
all vmas are gone?

>   So I'm no longer
> sure this is the right approach in general. Maybe we should instead wire
> up the switch, but not sure whether that's indeed worth the effort.

I'll leave it as is. If I happen to sort out the non-cow zapping, I'll 
post a new patch.

/Thomas



> -Daniel
Daniel Vetter Nov. 21, 2013, 9:15 a.m. UTC | #7
On Thu, Nov 21, 2013 at 09:29:26AM +0100, Thomas Hellstrom wrote:
> On 11/21/2013 09:18 AM, Daniel Vetter wrote:
> >On Wed, Nov 20, 2013 at 11:35:31PM +0100, Thomas Hellstrom wrote:
> >>On 11/20/2013 03:24 PM, Daniel Vetter wrote:
> >>>On Wed, Nov 20, 2013 at 01:55:49AM -0800, Thomas Hellstrom wrote:
> >>>>Not sure if there are any user-space users of private bo mappings, but
> >>>>if there are, or will be, zapping the COW'd pages when, for example,
> >>>>moving a bo would confuse the user immensely since the net effect for the
> >>>>user would be that pages written to would lose their contents.
> >>>>
> >>>>Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> >>>Presuming I'm not horribly confused about that all the vm slang in the
> >>>kerneldoc means this changes is
> >>>
> >>>Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> >>>
> >>>Now I still hold that userspace creating anynomous bo mappings is rather
> >>>crazy, but meh ;-) I guess the real question is whether we have anyone
> >>>relying on this out there (or planing to), in which case we either need to
> >>>funnel this through stable kernels or whack a drm feature flag onto
> >>>drivers/kernels with this fixed. But I seriously hope the answer is no.
> >>>
> >>>Cheers, Daniel
> >>>
> >>Thanks for reviewing. I don't think there's a need to take this
> >>through stable, since I don't know of anyone using private VMAs,
> >>
> >>Actually I'll need to hold off on this for a while since on some
> >>archs this may cause
> >>ptes of shared pages to not be zapped. If the arch doesn't have
> >>PTE_SPECIAL, shared pages on MIXEDMAP vmas will come through as
> >>vm_normal_page, and since page->mapping is usually (un)set to NULL
> >>by our drivers, this will result in false positives for COW'ed
> >>pages.
> >>
> >>So that test is buggy, or us not setting page->mapping to the
> >>mapping we use is buggy. It's too late in the day to decide which.
> >On second thought we also use this helper when zapping a bo when
> >deleting it or purging it's backing storage. And there I'd say it would be
> >consistent with normal semantics to also zap cow pages.
> 
> Agreed. That's consistent with truncation. Perhaps this was what
> David meant.
> Out of interest, why do you delete a bo or purge backing store
> before all vmas are gone?

I don't think we actually delete it before all the vmas are gone, but
purging can happen. Userspace (libdrm) aggressively caches both bos and
their memory mappings, but sets them to purgeable. Hence when the kernel
is low on memory and we preferentially remove those objects, we can end up
purging backing storage for which userspace still has a cached mmapping.

Of course userspace first needs to adjust the madvise flag to willneed
before it can use that mmap region again. And if the kernel tells it that
the object is unfortunately gone userspace can only do an munmap and close
its gem handle.
-Daniel

Patch

diff --git a/include/drm/drm_vma_manager.h b/include/drm/drm_vma_manager.h
index c18a593..347077d 100644
--- a/include/drm/drm_vma_manager.h
+++ b/include/drm/drm_vma_manager.h
@@ -231,9 +231,9 @@  static inline void drm_vma_node_unmap(struct drm_vma_offset_node *node,
 				      struct address_space *file_mapping)
 {
 	if (file_mapping && drm_vma_node_has_offset(node))
-		unmap_mapping_range(file_mapping,
-				    drm_vma_node_offset_addr(node),
-				    drm_vma_node_size(node) << PAGE_SHIFT, 1);
+		unmap_shared_mapping_range
+			(file_mapping, drm_vma_node_offset_addr(node),
+			 drm_vma_node_size(node) << PAGE_SHIFT);
 }
 
 /**