Message ID: 20220318095531.15479-3-xiaoguang.wang@linux.alibaba.com (mailing list archive)
State:      New, archived
Series:     Add zero copy feature for tcmu
On 18.03.22 10:55, Xiaoguang Wang wrote:
> Module target_core_user will use it to implement zero copy feature.
>
> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
> ---
>  mm/memory.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 1f745e4d11c2..9974d0406dad 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1664,6 +1664,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
>  	mmu_notifier_invalidate_range_end(&range);
>  	tlb_finish_mmu(&tlb);
>  }
> +EXPORT_SYMBOL_GPL(zap_page_range);
>
>  /**
>   * zap_page_range_single - remove user pages in a given range

To which VMAs will you be applying zap_page_range? I assume only to some
special ones where you previously vm_insert_page(s)_mkspecial'ed pages,
not to some otherwise random VMAs, correct?
hi,

> On 18.03.22 10:55, Xiaoguang Wang wrote:
>> Module target_core_user will use it to implement zero copy feature.
>> [...]
>> +EXPORT_SYMBOL_GPL(zap_page_range);
>> [...]
> To which VMAs will you be applying zap_page_range? I assume only to some
> special ones where you previously vm_insert_page(s)_mkspecial'ed pages,
> not to some otherwise random VMAs, correct?
Yes, you're right :)

Regards,
Xiaoguang Wang
On 22.03.22 14:02, Xiaoguang Wang wrote:
> hi,
>
>> On 18.03.22 10:55, Xiaoguang Wang wrote:
>>> Module target_core_user will use it to implement zero copy feature.
>>> [...]
>>> +EXPORT_SYMBOL_GPL(zap_page_range);
>>> [...]
>> To which VMAs will you be applying zap_page_range? I assume only to some
>> special ones where you previously vm_insert_page(s)_mkspecial'ed pages,
>> not to some otherwise random VMAs, correct?
> Yes, you're right :)

I'd suggest exposing a dedicated function that performs sanity checks on
the vma (VM_PFNMAP ?) and only zaps within a single VMA.

Essentially zap_page_range_single(), excluding "struct zap_details
*details" and including sanity checks.

Reason is that we don't want anybody to blindly zap_page_range() within
random VMAs from a kernel module.
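[Editor's note: a helper of the shape suggested here might look like the
following sketch. The function name and exact checks are illustrative, not
taken from the series.]

```c
/*
 * Hypothetical sketch (living in mm/memory.c, where the static
 * zap_page_range_single() is visible) of the dedicated helper suggested
 * above. It refuses to touch anything but a special (VM_PFNMAP) VMA,
 * stays within that one VMA, and omits "struct zap_details *details".
 */
int zap_special_vma_range(struct vm_area_struct *vma, unsigned long address,
			  unsigned long size)
{
	/* Only special mappings the caller set up itself. */
	if (!(vma->vm_flags & VM_PFNMAP))
		return -EINVAL;

	/* Never zap beyond the boundaries of this one VMA. */
	if (address < vma->vm_start || address + size > vma->vm_end)
		return -EINVAL;

	zap_page_range_single(vma, address, size, NULL);
	return 0;
}
EXPORT_SYMBOL_GPL(zap_special_vma_range);
```

Notably, the existing exported helper zap_vma_ptes() in mm/memory.c already
has essentially this shape: a bounds check plus a VM_PFNMAP test around
zap_page_range_single() with NULL details.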
hi,

> On 22.03.22 14:02, Xiaoguang Wang wrote:
> [...]
> I'd suggest exposing a dedicated function that performs sanity checks on
> the vma (VM_PFNMAP ?) and only zaps within a single VMA.
>
> Essentially zap_page_range_single(), excluding "struct zap_details
> *details" and including sanity checks.
>
> Reason is that we don't want anybody to blindly zap_page_range() within
> random VMAs from a kernel module.
OK, I see, thanks. Xu Yu and I will try to implement a new helper in the
next version of the patches.

Regards,
Xiaoguang Wang
On Fri, Mar 18, 2022 at 05:55:30PM +0800, Xiaoguang Wang wrote:
> Module target_core_user will use it to implement zero copy feature.
NAK, this has no business being exported to modules.
On Tue, Mar 22, 2022 at 02:08:13PM +0100, David Hildenbrand wrote:
> Reason is that we don't want anybody to blindly zap_page_range() within
> random VMAs from a kernel module.

Not just that, but there is no business for modules doing this at all.
These kinds of VM hooks for random drivers are not a good idea.
On Fri, Mar 18, 2022 at 05:55:30PM +0800, Xiaoguang Wang wrote:
> Module target_core_user will use it to implement zero copy feature.
> [...]
> +EXPORT_SYMBOL_GPL(zap_page_range);

BTW, what is the counterpart API of remap_pfn_range() for serving the
unmap? Or does it really need to unmap the vm space for this zero-copy
case?

If it isn't necessary to unmap, maybe remap_pfn_range() is faster than
vm_insert_page(s)_mkspecial + zap_page_range(), since zap_page_range()
looks a bit heavy.

Thanks,
Ming
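[Editor's note: the trade-off raised here can be sketched roughly as
follows; all names are hypothetical and buffer management is elided.]

```c
/*
 * Strategy A: map a fixed buffer once at mmap() time with
 * remap_pfn_range(). There is no explicit "unmap" counterpart to call:
 * the mapping simply lives until munmap()/process exit tears down the
 * VMA, so no per-I/O zap is needed.
 */
static int example_mmap_once(struct vm_area_struct *vma, unsigned long buf_pfn)
{
	return remap_pfn_range(vma, vma->vm_start, buf_pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}

/*
 * Strategy B (what the series proposes): insert pages per I/O and zap
 * them when the command completes so they can be recycled. This is what
 * makes the zap_page_range() export necessary, and it is the heavier
 * path: each zap walks the page tables, flushes TLBs and fires MMU
 * notifiers.
 */
static void example_recycle_pages(struct vm_area_struct *vma,
				  unsigned long addr, unsigned long len)
{
	zap_page_range(vma, addr, len);
}
```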
diff --git a/mm/memory.c b/mm/memory.c
index 1f745e4d11c2..9974d0406dad 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1664,6 +1664,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 }
+EXPORT_SYMBOL_GPL(zap_page_range);
 
 /**
  * zap_page_range_single - remove user pages in a given range
Module target_core_user will use it to implement zero copy feature.

Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)