Message ID | 20220210141733.1908-3-linmiaohe@huawei.com (mailing list archive)
---|---
State | New
Series | mm/memory-failure.c: A few cleanup patches for memory failure
On Thu, Feb 10, 2022 at 10:17:27PM +0800, Miaohe Lin wrote:
> It's unnecessary to walk the page table when vma_address() returns -EFAULT.
> Return early if so to save some cpu cycles.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

Does this patch fix a real problem rather than just saving cpu cycles?
Without this patch, "address == -EFAULT" seems to make pgd_offset() return
an invalid pointer and lead to something serious like a general protection fault.
If that's the case, this patch might be worth sending to stable.

Thanks,
Naoya Horiguchi

> ---
>  mm/memory-failure.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index b3ff7e99a421..f86819145ea8 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -315,6 +315,8 @@ static unsigned long dev_pagemap_mapping_shift(struct page *page,
>  	pmd_t *pmd;
>  	pte_t *pte;
>
> +	if (address == -EFAULT)
> +		return 0;
>  	pgd = pgd_offset(vma->vm_mm, address);
>  	if (!pgd_present(*pgd))
>  		return 0;
> --
> 2.23.0
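For context on where the -EFAULT comes from: below is a simplified sketch of the
linear-address calculation that vma_address() performs (paraphrased from the
helper in mm/internal.h, not verbatim kernel source; the real helper also
handles compound pages, and the sketch's function name is made up for
illustration):

	/*
	 * Simplified sketch, not verbatim kernel source: vma_address()
	 * converts a page's file offset into a user virtual address within
	 * @vma, and reports "this page is not mapped by this vma" as
	 * -EFAULT stored in an unsigned long.
	 */
	static unsigned long vma_address_sketch(pgoff_t pgoff,
						struct vm_area_struct *vma)
	{
		unsigned long address;

		if (pgoff < vma->vm_pgoff)
			return -EFAULT;	/* page lies before this vma's range */

		address = vma->vm_start +
			  ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
		if (address < vma->vm_start || address >= vma->vm_end)
			return -EFAULT;	/* page lies past the end of the vma */

		return address;
	}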
On 2022/2/14 22:48, Naoya Horiguchi wrote:
> On Thu, Feb 10, 2022 at 10:17:27PM +0800, Miaohe Lin wrote:
>> It's unnecessary to walk the page table when vma_address() returns -EFAULT.
>> Return early if so to save some cpu cycles.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>
> Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

Many thanks for your review and Acked-by tag!

>
> Does this patch fix a real problem rather than just saving cpu cycles?
> Without this patch, "address == -EFAULT" seems to make pgd_offset() return
> an invalid pointer and lead to something serious like a general protection fault.

I think you're right. We might dereference the invalid pointer in the following
page-table walk and end up with a general protection fault.

> If that's the case, this patch might be worth sending to stable.

But I'm not sure whether vma_address() can return -EFAULT for dax pages in a
real workload. If so, I will send a v2 with a Fixes tag. Thanks again.

>
> Thanks,
> Naoya Horiguchi
>
>> ---
>>  mm/memory-failure.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index b3ff7e99a421..f86819145ea8 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -315,6 +315,8 @@ static unsigned long dev_pagemap_mapping_shift(struct page *page,
>>  	pmd_t *pmd;
>>  	pte_t *pte;
>>
>> +	if (address == -EFAULT)
>> +		return 0;
>>  	pgd = pgd_offset(vma->vm_mm, address);
>>  	if (!pgd_present(*pgd))
>>  		return 0;
>> --
>> 2.23.0
>>
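A small illustration of what that bogus address looks like when it reaches the
page-table walk. This is a stand-alone user-space demo, not kernel code; the
PGDIR_SHIFT/PTRS_PER_PGD values are stand-ins for x86_64 4-level paging rather
than values taken from the kernel headers:

	#include <errno.h>
	#include <stdio.h>

	#define PGDIR_SHIFT	39	/* assumed x86_64 4-level paging */
	#define PTRS_PER_PGD	512

	int main(void)
	{
		unsigned long address = (unsigned long)-EFAULT;

		/*
		 * Prints 0xfffffffffffffff2: a kernel-half "address" that the
		 * walk would dutifully index page tables for (pgd slot 511
		 * here), rather than failing cleanly.
		 */
		printf("address            = %#lx\n", address);
		printf("pgd_index(address) = %lu\n",
		       (address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1));
		return 0;
	}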
On Tue, Feb 15, 2022 at 10:40:02AM +0800, Miaohe Lin wrote:
> On 2022/2/14 22:48, Naoya Horiguchi wrote:
> > On Thu, Feb 10, 2022 at 10:17:27PM +0800, Miaohe Lin wrote:
> >> It's unnecessary to walk the page table when vma_address() returns -EFAULT.
> >> Return early if so to save some cpu cycles.
> >>
> >> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> >
> > Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
>
> Many thanks for your review and Acked-by tag!

You're welcome :)

> >
> > Does this patch fix a real problem rather than just saving cpu cycles?
> > Without this patch, "address == -EFAULT" seems to make pgd_offset() return
> > an invalid pointer and lead to something serious like a general protection fault.
>
> I think you're right. We might dereference the invalid pointer in the following
> page-table walk and end up with a general protection fault.
>
> > If that's the case, this patch might be worth sending to stable.
>
> But I'm not sure whether vma_address() can return -EFAULT for dax pages in a
> real workload. If so, I will send a v2 with a Fixes tag.

Hm, actually I'm not sure either. But dev_pagemap_mapping_shift() is called only
when a vma associated with the error page has already been found in
collect_procs_{file,anon}, so vma_address() should not return -EFAULT except
with some bug. So VM_BUG_ON() might be more suitable?

Thanks,
Naoya Horiguchi
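A condensed sketch of the calling context described here, paraphrased from the
collect_procs_anon() path (not verbatim kernel source, and variable names are
taken from that function; collect_procs_file() does the equivalent with
vma_interval_tree_foreach()):

	/* inside collect_procs_anon(), for each candidate task 't': */
	pgoff = page_to_pgoff(page);
	anon_vma_interval_tree_foreach(vmac, &av->rb_root, pgoff, pgoff) {
		struct vm_area_struct *vma = vmac->vma;

		/*
		 * Only vmas whose offset range covers the poisoned page are
		 * visited here, so the later vma_address(page, vma) lookup
		 * is not expected to come back as -EFAULT.
		 */
		if (!page_mapped_in_vma(page, vma))
			continue;
		if (vma->vm_mm == t->mm)
			add_to_kill(t, page, vma, to_kill);
	}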
On 2022/2/15 16:37, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Feb 15, 2022 at 10:40:02AM +0800, Miaohe Lin wrote:
>> On 2022/2/14 22:48, Naoya Horiguchi wrote:
>>> On Thu, Feb 10, 2022 at 10:17:27PM +0800, Miaohe Lin wrote:
>>>> It's unnecessary to walk the page table when vma_address() returns -EFAULT.
>>>> Return early if so to save some cpu cycles.
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>
>>> Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
>>
>> Many thanks for your review and Acked-by tag!
>
> You're welcome :)
>
>>
>>>
>>> Does this patch fix a real problem rather than just saving cpu cycles?
>>> Without this patch, "address == -EFAULT" seems to make pgd_offset() return
>>> an invalid pointer and lead to something serious like a general protection fault.
>>
>> I think you're right. We might dereference the invalid pointer in the following
>> page-table walk and end up with a general protection fault.
>>
>>> If that's the case, this patch might be worth sending to stable.
>>
>> But I'm not sure whether vma_address() can return -EFAULT for dax pages in a
>> real workload. If so, I will send a v2 with a Fixes tag.
>
> Hm, actually I'm not sure either. But dev_pagemap_mapping_shift() is called only
> when a vma associated with the error page has already been found in
> collect_procs_{file,anon}, so vma_address() should not return -EFAULT except
> with some bug. So VM_BUG_ON() might be more suitable?

Agree. anon_vma_interval_tree_foreach/vma_interval_tree_foreach in
collect_procs_{file,anon} should have guaranteed the validity of vma_address(),
and rmap_walk_anon and rmap_walk_file already do
VM_BUG_ON_VMA(address == -EFAULT, vma). So VM_BUG_ON() might really be more
suitable. Will do this in v2. Many thanks.

>
> Thanks,
> Naoya Horiguchi
>
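For illustration, one possible shape of the v2 change being agreed on here,
with the early return replaced by an assertion (a sketch in diff form, not the
actual submitted patch):

	@@ static unsigned long dev_pagemap_mapping_shift(struct page *page,
	 	pmd_t *pmd;
	 	pte_t *pte;
	 
	+	/*
	+	 * Callers only reach this point for a vma found via the
	+	 * interval-tree walks in collect_procs_{file,anon}, so -EFAULT
	+	 * here would indicate a bug rather than a condition to handle
	+	 * gracefully.
	+	 */
	+	VM_BUG_ON_VMA(address == -EFAULT, vma);
	 	pgd = pgd_offset(vma->vm_mm, address);
	 	if (!pgd_present(*pgd))
	 		return 0;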
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index b3ff7e99a421..f86819145ea8 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -315,6 +315,8 @@ static unsigned long dev_pagemap_mapping_shift(struct page *page,
 	pmd_t *pmd;
 	pte_t *pte;
 
+	if (address == -EFAULT)
+		return 0;
 	pgd = pgd_offset(vma->vm_mm, address);
 	if (!pgd_present(*pgd))
 		return 0;
It's unnecessary to walk the page table when vma_address() returns -EFAULT.
Return early if so to save some cpu cycles.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/memory-failure.c | 2 ++
 1 file changed, 2 insertions(+)