Message ID | 20200615221607.7764-19-peterx@redhat.com (mailing list archive) |
---|---|
State | New, archived |
On Mon, 15 Jun 2020 15:16:00 PDT (-0700), peterx@redhat.com wrote:
> Use the new mm_fault_accounting() helper for page fault accounting.
>
> Avoid doing page fault accounting multiple times if the page fault is
> retried.
>
> CC: Paul Walmsley <paul.walmsley@sifive.com>
> CC: Palmer Dabbelt <palmer@dabbelt.com>
> CC: Albert Ou <aou@eecs.berkeley.edu>
> CC: linux-riscv@lists.infradead.org
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  arch/riscv/mm/fault.c | 21 +++------------------
>  1 file changed, 3 insertions(+), 18 deletions(-)
>
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index be84e32adc4c..9262338614d1 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -30,7 +30,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
>  	struct vm_area_struct *vma;
>  	struct mm_struct *mm;
>  	unsigned long addr, cause;
> -	unsigned int flags = FAULT_FLAG_DEFAULT;
> +	unsigned int flags = FAULT_FLAG_DEFAULT, major = 0;
>  	int code = SEGV_MAPERR;
>  	vm_fault_t fault;
>
> @@ -65,9 +65,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
>
>  	if (user_mode(regs))
>  		flags |= FAULT_FLAG_USER;
> -
> -	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
> -
>  retry:
>  	down_read(&mm->mmap_sem);
>  	vma = find_vma(mm, addr);
> @@ -111,6 +108,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
>  	 * the fault.
>  	 */
>  	fault = handle_mm_fault(vma, addr, flags);
> +	major |= fault & VM_FAULT_MAJOR;
>
>  	/*
>  	 * If we need to retry but a fatal signal is pending, handle the
> @@ -128,21 +126,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
>  		BUG();
>  	}
>
> -	/*
> -	 * Major/minor page fault accounting is only done on the
> -	 * initial attempt. If we go through a retry, it is extremely
> -	 * likely that the page will be found in page cache at that point.
> -	 */
>  	if (flags & FAULT_FLAG_ALLOW_RETRY) {
> -		if (fault & VM_FAULT_MAJOR) {
> -			tsk->maj_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
> -				      1, regs, addr);
> -		} else {
> -			tsk->min_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
> -				      1, regs, addr);
> -		}
>  		if (fault & VM_FAULT_RETRY) {
>  			flags |= FAULT_FLAG_TRIED;
>
> @@ -156,6 +140,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
>  	}
>
>  	up_read(&mm->mmap_sem);
> +	mm_fault_accounting(tsk, regs, addr, major);
>  	return;
>
>  	/*

AFAICT this changes the behavior of the perf event: it used to count any
fault, whereas now it only counts those that complete successfully. If
everyone else is doing it that way then I'm happy to change us over, but
this definitely isn't just avoiding retries.
On Thu, Jun 18, 2020 at 04:49:23PM -0700, Palmer Dabbelt wrote:
> AFAICT this changes the behavior of the perf event: it used to count any
> fault, whereas now it only counts those that complete successfully. If
> everyone else is doing it that way then I'm happy to change us over, but
> this definitely isn't just avoiding retries.

Right, I'm preparing v2 to keep the old behavior of
PERF_COUNT_SW_PAGE_FAULTS, but with a nicer approach.

Thanks for looking!
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index be84e32adc4c..9262338614d1 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -30,7 +30,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long addr, cause;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
+	unsigned int flags = FAULT_FLAG_DEFAULT, major = 0;
 	int code = SEGV_MAPERR;
 	vm_fault_t fault;

@@ -65,9 +65,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs)

 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
-
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
-
 retry:
 	down_read(&mm->mmap_sem);
 	vma = find_vma(mm, addr);
@@ -111,6 +108,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * the fault.
 	 */
 	fault = handle_mm_fault(vma, addr, flags);
+	major |= fault & VM_FAULT_MAJOR;

 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -128,21 +126,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 		BUG();
 	}

-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

@@ -156,6 +140,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	}

 	up_read(&mm->mmap_sem);
+	mm_fault_accounting(tsk, regs, addr, major);
 	return;

 	/*
Use the new mm_fault_accounting() helper for page fault accounting.

Avoid doing page fault accounting multiple times if the page fault is
retried.

CC: Paul Walmsley <paul.walmsley@sifive.com>
CC: Palmer Dabbelt <palmer@dabbelt.com>
CC: Albert Ou <aou@eecs.berkeley.edu>
CC: linux-riscv@lists.infradead.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/riscv/mm/fault.c | 21 +++------------------
 1 file changed, 3 insertions(+), 18 deletions(-)