| Message ID | 20250226114815.758217-1-bgeffon@google.com |
| --- | --- |
| State | New |
| Series | mm: fix finish_fault() handling for large folios |
On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
> When handling faults for anon shmem finish_fault() will attempt to install
> ptes for the entire folio. Unfortunately if it encounters a single
> non-pte_none entry in that range it will bail, even if the pte that
> triggered the fault is still pte_none. When this situation happens the
> fault will be retried endlessly never making forward progress.
>
> This patch fixes this behavior and if it detects that a pte in the range
> is not pte_none it will fall back to setting just the pte for the
> address that triggered the fault.

Surely there's a similar problem in do_anonymous_page()?

At any rate, what a horrid function finish_fault() has become.
Special cases all over the place. What we should be doing is
deciding the range of PTEs to insert, bounded by the folio, the VMA
and any non-none entries. Maybe I'll get a chance to fix this up.
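A rough, self-contained sketch of the range selection Willy describes: start from the faulting page and grow the range of PTEs to insert only while it stays inside the folio, inside the VMA, and over still-empty entries. Everything below (names, layout) is hypothetical userspace code for illustration, not the actual mm/memory.c logic.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct range { size_t start, nr; };

/*
 * Pick the contiguous run of empty PTEs that covers the faulting page,
 * bounded by the folio size, the VMA, and any already-populated entries.
 */
static struct range clamp_insert_range(const bool *pte_is_none,
                                       size_t folio_pages,
                                       size_t vma_first, size_t vma_last,
                                       size_t fault_idx)
{
        size_t lo = fault_idx, hi = fault_idx;

        /* Grow downwards while we stay inside the folio/VMA and PTEs are none. */
        while (lo > vma_first && lo > 0 && pte_is_none[lo - 1])
                lo--;
        /* Grow upwards under the same constraints. */
        while (hi + 1 < folio_pages && hi + 1 <= vma_last && pte_is_none[hi + 1])
                hi++;

        return (struct range){ .start = lo, .nr = hi - lo + 1 };
}

int main(void)
{
        bool none[16];
        for (size_t i = 0; i < 16; i++)
                none[i] = true;
        none[4] = none[11] = false;     /* two PTEs already populated by a race */

        /* Fault on page 7 of a 16-page folio fully covered by the VMA. */
        struct range r = clamp_insert_range(none, 16, 0, 15, 7);
        printf("install %zu PTE(s) starting at folio page %zu\n", r.nr, r.start);
        return 0;
}
```

With the populated entries at pages 4 and 11, a fault on page 7 would install pages 5 through 10 and still make forward progress, instead of bailing out entirely.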
On Wed, Feb 26, 2025 at 9:04 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
> > When handling faults for anon shmem finish_fault() will attempt to install
> > ptes for the entire folio. Unfortunately if it encounters a single
> > non-pte_none entry in that range it will bail, even if the pte that
> > triggered the fault is still pte_none. When this situation happens the
> > fault will be retried endlessly never making forward progress.
> >
> > This patch fixes this behavior and if it detects that a pte in the range
> > is not pte_none it will fall back to setting just the pte for the
> > address that triggered the fault.
>
> Surely there's a similar problem in do_anonymous_page()?
>
> At any rate, what a horrid function finish_fault() has become.
> Special cases all over the place. What we should be doing is
> deciding the range of PTEs to insert, bounded by the folio, the VMA
> and any non-none entries. Maybe I'll get a chance to fix this up.

I agree, I wasn't thrilled that the fix looked like this but I was trying
to keep the change minimal to aid in backporting to stable kernels where
this behavior is broken. With that being said, do you have a preference on
a minimal way we can fix this before finish_fault() gets a proper cleanup?
On 2025/2/26 19:48, Brian Geffon wrote:
> When handling faults for anon shmem finish_fault() will attempt to install
> ptes for the entire folio. Unfortunately if it encounters a single
> non-pte_none entry in that range it will bail, even if the pte that
> triggered the fault is still pte_none. When this situation happens the
> fault will be retried endlessly never making forward progress.
>
> This patch fixes this behavior and if it detects that a pte in the range
> is not pte_none it will fall back to setting just the pte for the
> address that triggered the fault.

Could you describe in detail how this situation occurs? How is the none
pte inserted within the range of the large folio? Because we have checks
in shmem to determine if a large folio is suitable.

Anyway, if we find the pte_range_none() is false, we can fallback to
per-page fault as the following code shows (untested), which seems more
simple?

diff --git a/mm/memory.c b/mm/memory.c
index a8196ae72e9a..8a2a9fda5410 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5219,7 +5219,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
         bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
                       !(vma->vm_flags & VM_SHARED);
         int type, nr_pages;
-        unsigned long addr = vmf->address;
+        unsigned long addr;
+        bool fallback_per_page = false;
+
+
+fallback:
+        addr = vmf->address;

         /* Did we COW the page? */
         if (is_cow)
@@ -5258,7 +5263,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
          * approach also applies to non-anonymous-shmem faults to avoid
          * inflating the RSS of the process.
          */
-        if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+        if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))
+            || unlikely(fallback_per_page)) {
                 nr_pages = 1;
         } else if (nr_pages > 1) {
                 pgoff_t idx = folio_page_idx(folio, page);
@@ -5294,9 +5300,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
                 ret = VM_FAULT_NOPAGE;
                 goto unlock;
         } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
-                update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
-                ret = VM_FAULT_NOPAGE;
-                goto unlock;
+                fallback_per_page = true;
+                pte_unmap_unlock(vmf->pte, vmf->ptl);
+                goto fallback;
         }

         folio_ref_add(folio, nr_pages - 1);

>
> Cc: stable@vger.kernel.org
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Hugh Dickins <hughd@google.com>
> Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
> Reported-by: Marek Maslanka <mmaslanka@google.com>
> Signed-off-by: Brian Geffon <bgeffon@google.com>
> ---
>  mm/memory.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index b4d3d4893267..32de626ec1da 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5258,9 +5258,22 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>                  ret = VM_FAULT_NOPAGE;
>                  goto unlock;
>          } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> -                update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> -                ret = VM_FAULT_NOPAGE;
> -                goto unlock;
> +                /*
> +                 * We encountered a set pte, let's just try to install the
> +                 * pte for the original fault if that pte is still pte none.
> +                 */
> +                pgoff_t idx = (vmf->address - addr) / PAGE_SIZE;
> +
> +                if (!pte_none(ptep_get_lockless(vmf->pte + idx))) {
> +                        update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> +                        ret = VM_FAULT_NOPAGE;
> +                        goto unlock;
> +                }
> +
> +                vmf->pte = vmf->pte + idx;
> +                page = folio_page(folio, idx);
> +                addr = vmf->address;
> +                nr_pages = 1;
>          }
>
>          folio_ref_add(folio, nr_pages - 1);
On 26.02.25 15:03, Matthew Wilcox wrote:
> On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
>> When handling faults for anon shmem finish_fault() will attempt to install
>> ptes for the entire folio. Unfortunately if it encounters a single
>> non-pte_none entry in that range it will bail, even if the pte that
>> triggered the fault is still pte_none. When this situation happens the
>> fault will be retried endlessly never making forward progress.
>>
>> This patch fixes this behavior and if it detects that a pte in the range
>> is not pte_none it will fall back to setting just the pte for the
>> address that triggered the fault.
>
> Surely there's a similar problem in do_anonymous_page()?

I recall we handle it in there correctly the last time I stared at it.

We check pte_none to decide which folio size we can allocate (including
basing the decision on other factors like VMA etc), and after retaking the
PTL, we recheck vmf_pte_changed / pte_range_none() to make sure there were
no races.
On Wed, Feb 26, 2025 at 10:17 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/2/26 19:48, Brian Geffon wrote:
> > When handling faults for anon shmem finish_fault() will attempt to install
> > ptes for the entire folio. Unfortunately if it encounters a single
> > non-pte_none entry in that range it will bail, even if the pte that
> > triggered the fault is still pte_none. When this situation happens the
> > fault will be retried endlessly never making forward progress.
> >
> > This patch fixes this behavior and if it detects that a pte in the range
> > is not pte_none it will fall back to setting just the pte for the
> > address that triggered the fault.
>
> Could you describe in detail how this situation occurs? How is the none
> pte inserted within the range of the large folio? Because we have checks
> in shmem to determine if a large folio is suitable.

We're seeing it because of racing shmem_undo_range() calls, basically we
have a portion of the folio is zapped and that prevents finish_fault()
from ever completing because it has an assumption that the entire range
must be pte_none.

>
> Anyway, if we find the pte_range_none() is false, we can fallback to
> per-page fault as the following code shows (untested), which seems more
> simple?

Yah, I like this approach. I'll send a v2 with you as a Suggested-by.

>
> diff --git a/mm/memory.c b/mm/memory.c
> index a8196ae72e9a..8a2a9fda5410 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5219,7 +5219,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>          bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
>                        !(vma->vm_flags & VM_SHARED);
>          int type, nr_pages;
> -        unsigned long addr = vmf->address;
> +        unsigned long addr;
> +        bool fallback_per_page = false;
> +
> +
> +fallback:
> +        addr = vmf->address;
>
>          /* Did we COW the page? */
>          if (is_cow)
> @@ -5258,7 +5263,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>           * approach also applies to non-anonymous-shmem faults to avoid
>           * inflating the RSS of the process.
>           */
> -        if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
> +        if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))
> +            || unlikely(fallback_per_page)) {
>                  nr_pages = 1;
>          } else if (nr_pages > 1) {
>                  pgoff_t idx = folio_page_idx(folio, page);
> @@ -5294,9 +5300,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>                  ret = VM_FAULT_NOPAGE;
>                  goto unlock;
>          } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> -                update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> -                ret = VM_FAULT_NOPAGE;
> -                goto unlock;
> +                fallback_per_page = true;
> +                pte_unmap_unlock(vmf->pte, vmf->ptl);
> +                goto fallback;
>          }
>
>          folio_ref_add(folio, nr_pages - 1);
>
> >
> > Cc: stable@vger.kernel.org
> > Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Cc: Hugh Dickins <hughd@google.com>
> > Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
> > Reported-by: Marek Maslanka <mmaslanka@google.com>
> > Signed-off-by: Brian Geffon <bgeffon@google.com>
> > ---
> >   mm/memory.c | 19 ++++++++++++++++---
> >   1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index b4d3d4893267..32de626ec1da 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -5258,9 +5258,22 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> >                  ret = VM_FAULT_NOPAGE;
> >                  goto unlock;
> >          } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> > -                update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> > -                ret = VM_FAULT_NOPAGE;
> > -                goto unlock;
> > +                /*
> > +                 * We encountered a set pte, let's just try to install the
> > +                 * pte for the original fault if that pte is still pte none.
> > +                 */
> > +                pgoff_t idx = (vmf->address - addr) / PAGE_SIZE;
> > +
> > +                if (!pte_none(ptep_get_lockless(vmf->pte + idx))) {
> > +                        update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> > +                        ret = VM_FAULT_NOPAGE;
> > +                        goto unlock;
> > +                }
> > +
> > +                vmf->pte = vmf->pte + idx;
> > +                page = folio_page(folio, idx);
> > +                addr = vmf->address;
> > +                nr_pages = 1;
> >          }
> >
> >          folio_ref_add(folio, nr_pages - 1);
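A very rough sketch of the kind of access pattern Brian describes at the top of the previous message: one thread keeps faulting an anon shared (shmem-backed) mapping while another punches holes in part of it, which goes through shmem_undo_range(). This assumes the running kernel has anon shmem large folios enabled and is entirely timing-dependent, so it is only an illustration of the racing pattern, not a known reproducer of the reported hang.

```c
/* build: cc -O2 -pthread race-sketch.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (4UL << 20)

static char *map;

static void *toucher(void *arg)
{
        (void)arg;
        for (;;)
                memset(map, 1, LEN);            /* write faults across the range */
        return NULL;
}

static void *puncher(void *arg)
{
        (void)arg;
        for (;;)                                /* punch holes in part of the range */
                madvise(map + LEN / 4, LEN / 2, MADV_REMOVE);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        map = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (map == MAP_FAILED)
                return 1;
        madvise(map, LEN, MADV_HUGEPAGE);       /* ask for large folios */

        pthread_create(&t1, NULL, toucher, NULL);
        pthread_create(&t2, NULL, puncher, NULL);
        pthread_join(t1, NULL);
        return 0;
}
```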
On Wed, Feb 26, 2025 at 04:42:46PM +0100, David Hildenbrand wrote:
> On 26.02.25 15:03, Matthew Wilcox wrote:
> > On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
> > > When handling faults for anon shmem finish_fault() will attempt to install
> > > ptes for the entire folio. Unfortunately if it encounters a single
> > > non-pte_none entry in that range it will bail, even if the pte that
> > > triggered the fault is still pte_none. When this situation happens the
> > > fault will be retried endlessly never making forward progress.
> > >
> > > This patch fixes this behavior and if it detects that a pte in the range
> > > is not pte_none it will fall back to setting just the pte for the
> > > address that triggered the fault.
> >
> > Surely there's a similar problem in do_anonymous_page()?
>
> I recall we handle it in there correctly the last time I stared at it.
>
> We check pte_none to decide which folio size we can allocate (including
> basing the decision on other factors like VMA etc), and after retaking the
> PTL, we recheck vmf_pte_changed / pte_range_none() to make sure there were
> no races.

Ah, so then we'll retry and allocate a folio of the right size the next
time? Rather than the shmem case where the folio is already allocated and
we can't change that?
On 26.02.25 17:28, Matthew Wilcox wrote:
> On Wed, Feb 26, 2025 at 04:42:46PM +0100, David Hildenbrand wrote:
>> On 26.02.25 15:03, Matthew Wilcox wrote:
>>> On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
>>>> When handling faults for anon shmem finish_fault() will attempt to install
>>>> ptes for the entire folio. Unfortunately if it encounters a single
>>>> non-pte_none entry in that range it will bail, even if the pte that
>>>> triggered the fault is still pte_none. When this situation happens the
>>>> fault will be retried endlessly never making forward progress.
>>>>
>>>> This patch fixes this behavior and if it detects that a pte in the range
>>>> is not pte_none it will fall back to setting just the pte for the
>>>> address that triggered the fault.
>>>
>>> Surely there's a similar problem in do_anonymous_page()?
>>
>> I recall we handle it in there correctly the last time I stared at it.
>>
>> We check pte_none to decide which folio size we can allocate (including
>> basing the decision on other factors like VMA etc), and after retaking the
>> PTL, we recheck vmf_pte_changed / pte_range_none() to make sure there were
>> no races.
>
> Ah, so then we'll retry and allocate a folio of the right size the next
> time?

IIRC we'll retry the fault in case we had a race.

Likely, if we had a race, somebody else installed a (large) folio and we
essentially have to second fault.

If, for some reason, the race only touched parts of the PTEs we tried to
modify, we'll get another fault and allocate something (smaller) that would
fit into the new empty range.

So yes, we're more flexible because we're allocating the folios and don't
have to take whatever folio size is in the pagecache in consideration.
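A toy userspace model of the "decide optimistically, recheck under the lock" pattern David describes for do_anonymous_page(): pick a folio order from an unlocked scan of the PTEs, then retake the lock and only proceed if the whole range is still empty, otherwise drop the work and retry the fault. All names here are hypothetical; this is not kernel code, just the shape of the pattern.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NPTES 16

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the PTL */
static bool pte_present[NPTES];

static bool range_none(int start, int nr)
{
        for (int i = start; i < start + nr; i++)
                if (pte_present[i])
                        return false;
        return true;
}

/* Returns the number of pages installed, or 0 to signal "retry the fault". */
static int fault(int addr_idx)
{
        /* Unlocked scan: can we populate a 4-page aligned block around the fault? */
        int start = addr_idx & ~3;
        int nr = range_none(start, 4) ? 4 : 1;

        if (nr == 1)
                start = addr_idx;

        /* ... a folio of 'nr' pages would be allocated here ... */

        pthread_mutex_lock(&ptl);
        if (!range_none(start, nr)) {
                /* Raced with someone else: drop the folio and retry the fault. */
                pthread_mutex_unlock(&ptl);
                return 0;
        }
        for (int i = start; i < start + nr; i++)
                pte_present[i] = true;          /* the "set_ptes()" step */
        pthread_mutex_unlock(&ptl);
        return nr;
}

int main(void)
{
        pte_present[2] = true;                  /* pretend another thread mapped page 2 */
        printf("installed %d pte(s)\n", fault(5));      /* fault on page 5 */
        return 0;
}
```

The key difference from the shmem path under discussion is that the allocation happens after the scan, so a retry is free to pick a smaller folio that fits the remaining empty range.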
diff --git a/mm/memory.c b/mm/memory.c
index b4d3d4893267..32de626ec1da 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5258,9 +5258,22 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
                 ret = VM_FAULT_NOPAGE;
                 goto unlock;
         } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
-                update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
-                ret = VM_FAULT_NOPAGE;
-                goto unlock;
+                /*
+                 * We encountered a set pte, let's just try to install the
+                 * pte for the original fault if that pte is still pte none.
+                 */
+                pgoff_t idx = (vmf->address - addr) / PAGE_SIZE;
+
+                if (!pte_none(ptep_get_lockless(vmf->pte + idx))) {
+                        update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
+                        ret = VM_FAULT_NOPAGE;
+                        goto unlock;
+                }
+
+                vmf->pte = vmf->pte + idx;
+                page = folio_page(folio, idx);
+                addr = vmf->address;
+                nr_pages = 1;
         }

         folio_ref_add(folio, nr_pages - 1);
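A worked example of the idx arithmetic in the hunk above, with made-up addresses and 4 KiB pages: addr is the address covered by the folio's first PTE and vmf->address is the address that faulted.

```c
#include <stdio.h>

int main(void)
{
        /* Hypothetical values chosen only to illustrate the arithmetic. */
        unsigned long addr = 0x7f0000010000UL;          /* start of the folio's PTE range */
        unsigned long fault_address = 0x7f0000015000UL; /* vmf->address */
        unsigned long idx = (fault_address - addr) / 4096;

        printf("idx = %lu\n", idx);     /* prints 5: the sixth page of the folio */
        return 0;
}
```

The fallback then installs only that one page of the folio at the faulting address, so the fault completes even though other PTEs in the folio's range are already populated.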
When handling faults for anon shmem finish_fault() will attempt to install
ptes for the entire folio. Unfortunately if it encounters a single
non-pte_none entry in that range it will bail, even if the pte that
triggered the fault is still pte_none. When this situation happens the
fault will be retried endlessly never making forward progress.

This patch fixes this behavior and if it detects that a pte in the range
is not pte_none it will fall back to setting just the pte for the
address that triggered the fault.

Cc: stable@vger.kernel.org
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
Reported-by: Marek Maslanka <mmaslanka@google.com>
Signed-off-by: Brian Geffon <bgeffon@google.com>
---
 mm/memory.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)