Message ID: 20220504214437.2850685-5-zokeefe@google.com
State: New
Series: mm: userspace hugepage collapse
On Wed, 4 May 2022, Zach O'Keefe wrote:

> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index c94bc43dff3e..6095fcb3f07c 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -92,6 +92,10 @@ struct collapse_control {
>
>  	/* Last target selected in khugepaged_find_target_node() */
>  	int last_target_node;
> +
> +	struct page *hpage;
> +	int (*alloc_charge_hpage)(struct mm_struct *mm,
> +				  struct collapse_control *cc);
>  };
>
>  /**

Embedding this function pointer into collapse_control seems like it would
need some pretty strong rationale. Not to say that it should be a
non-starter, but I think the changelog needs to clearly indicate why this
is better/cleaner than embedding the needed info for a single allocation
and charge function to use. If the callbacks would truly be so different
that unifying them would be more complex, I think this makes sense.
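For context, the alternative David alludes to - embedding the data a
single, shared allocation-and-charge function would need, rather than a
per-context callback - might look something like this hypothetical
sketch (illustrative only, not code from the series):

struct collapse_control {
	/* Last target selected in khugepaged_find_target_node() */
	int last_target_node;

	/* [pre]allocated hugepage for the next collapse attempt */
	struct page *hpage;

	/*
	 * Instead of an ->alloc_charge_hpage() hook, one gfp_t covers
	 * both allocation and memcg charging; the khugepaged codepath
	 * would need to refresh it periodically (see the discussion of
	 * a .gfp member in the reply below).
	 */
	gfp_t gfp;
};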
Thanks for the review, David!

On Thu, May 12, 2022 at 1:02 PM David Rientjes <rientjes@google.com> wrote:
>
> Embedding this function pointer into collapse_control seems like it would
> need some pretty strong rationale. [...] I think the changelog needs to
> clearly indicate why this is better/cleaner than embedding the needed
> info for a single allocation and charge function to use.

Mostly, this boils down to khugepaged having a different allocation
pattern for NUMA vs UMA: the former scans the pages first to determine
the right node, the latter preallocates before scanning. khugepaged has
the luxury on UMA systems of just holding onto a hugepage indefinitely
for the next collapse target.

For MADV_COLLAPSE, we never preallocate, and so its pattern doesn't
depend on NUMA or UMA configs. Trying to avoid "if (khugepaged) ...
else" casing, defining this as a context-defined operation seemed
appropriate.

Collapsing both alloc and charging together was mostly a code
cleanliness decision resulting from not wanting to embed a ->gfp()
hook (gfp flags are used both by allocation and memcg charging).
Alternatively, a .gfp member could exist - it would just need to be
refreshed periodically in the khugepaged codepath.

That all said - let me take another crack at seeing if I can make this
work without the need for a function pointer here.
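To make the "context-defined operation" concrete: khugepaged installs
the NUMA/UMA-aware helper added by this patch, and a later MADV_COLLAPSE
context would install its own. In the sketch below, the khugepaged
hookup matches the diff in this thread, while the MADV_COLLAPSE callback
(its name, gfp choice, and body) is hypothetical:

/* khugepaged: NUMA/UMA-aware allocation, defrag-derived gfp flags. */
struct collapse_control khugepaged_cc = {
	.last_target_node = NUMA_NO_NODE,
	.alloc_charge_hpage = &alloc_charge_hpage,
};

/*
 * Hypothetical MADV_COLLAPSE callback: no preallocation, allocate at
 * collapse time on the chosen node and charge to the mm's memcg.
 */
static int madvise_alloc_charge_hpage(struct mm_struct *mm,
				      struct collapse_control *cc)
{
	gfp_t gfp = GFP_TRANSHUGE;	/* assumed MADV_COLLAPSE gfp policy */

	if (!khugepaged_alloc_page(gfp, cc->last_target_node, cc))
		return SCAN_ALLOC_HUGE_PAGE_FAIL;
	if (unlikely(mem_cgroup_charge(page_folio(cc->hpage), mm, gfp)))
		return SCAN_CGROUP_CHARGE_FAIL;
	count_memcg_page_event(cc->hpage, THP_COLLAPSE_ALLOC);
	return SCAN_SUCCEED;
}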
On Fri, May 13, 2022 at 4:05 PM Zach O'Keefe <zokeefe@google.com> wrote:
>
> Mostly, this boils down to khugepaged having a different allocation
> pattern for NUMA vs UMA: the former scans the pages first to determine
> the right node, the latter preallocates before scanning. [...]
>
> That all said - let me take another crack at seeing if I can make this
> work without the need for a function pointer here.

I had a patch that removed the UMA allocation; please refer to
https://lore.kernel.org/linux-mm/20210817202146.3218-1-shy828301@gmail.com/#t

It didn't make it upstream due to some requests for further cleanup, and
unfortunately I haven't had time to look into it yet.

If this patch were merged, would that make your life easier?
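The direction of that patch, as described - dropping the UMA-only
preallocation so that every build allocates at collapse time, the way
the CONFIG_NUMA build already does - would roughly look like the sketch
below (illustrative only; alloc_hpage is a made-up name, not the
function from Yang's patch):

static struct page *alloc_hpage(gfp_t gfp, int node)
{
	/* One allocation path regardless of CONFIG_NUMA. */
	struct page *hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);

	if (unlikely(!hpage)) {
		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
		return NULL;
	}
	prep_transhuge_page(hpage);
	count_vm_event(THP_COLLAPSE_ALLOC);
	return hpage;
}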
On Fri, May 13, 2022 at 4:17 PM Yang Shi <shy828301@gmail.com> wrote:
>
> I had a patch that removed the UMA allocation; please refer to
> https://lore.kernel.org/linux-mm/20210817202146.3218-1-shy828301@gmail.com/#t
> [...]
> If this patch were merged, would that make your life easier?

Hey Yang,

First, sorry for missing that patch in the first place. I actually have
some patches queued up that do a similar cleanup of
khugepaged_prealloc_page() to the one mentioned there, but decided not
to include them here.

Second, removing the NUMA/UMA story does make this patch easier, I
think (especially since the sched change was dropped for now). This is
something I wanted while writing this series, but without the larger
context referenced in your patch (most users don't build NUMA=n even on
single-node systems, plus the pcp hugepage lists optimization) I
couldn't justify it on my own.

Best,
Zach
On Fri, May 13, 2022 at 4:56 PM Zach O'Keefe <zokeefe@google.com> wrote:
>
> Second, removing the NUMA/UMA story does make this patch easier, I
> think (especially since the sched change was dropped for now). [...]

Thanks. It would be better to add that patch into your series as a
prerequisite so that you can make MADV_COLLAPSE easier.

I don't think I will be able to find time to rework the patch and solve
all the review comments any time soon. If you'd like to take it, please
do.
On Tue, May 17, 2022 at 10:51 AM Yang Shi <shy828301@gmail.com> wrote:
>
> Thanks. It would be better to add that patch into your series as a
> prerequisite so that you can make MADV_COLLAPSE easier.
>
> I don't think I will be able to find time to rework the patch and
> solve all the review comments any time soon. If you'd like to take
> it, please do.

Sounds good, Yang. I'll take a crack at it for v6.

Thanks for taking the time,
Zach
On Tue, May 17, 2022 at 3:35 PM Zach O'Keefe <zokeefe@google.com> wrote:
>
> Sounds good, Yang. I'll take a crack at it for v6.

Hi Zach,

Fortunately I got some time; if you haven't spent too much effort on
this yet, I could rework the patch and resubmit. Just let me know.
On Wed, May 25, 2022 at 10:59 AM Yang Shi <shy828301@gmail.com> wrote:
>
> Fortunately I got some time; if you haven't spent too much effort on
> this yet, I could rework the patch and resubmit. Just let me know.

Hey Yang,

Thanks for letting me know. In my v6 series, I actually ended up
incorporating this work *afterwards*. The reason is that some patches in
my series, like "mm/khugepaged: pipe enum scan_result codes back to
callers", make the cleanup of khugepaged_scan_mm_slot() easier, IMO [1].

I'll privately send what I have planned for v6 and we can discuss.

Thanks,
Zach

[1] https://lore.kernel.org/linux-mm/20220504214437.2850685-6-zokeefe@google.com/
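For reference, the "pipe enum scan_result codes back to callers" change
implies turning the void collapse entry points into functions that
return a SCAN_* status - a hypothetical sketch based on the patch title,
not its actual diff:

static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
			      int referenced, int unmapped,
			      struct collapse_control *cc)
{
	int result = cc->alloc_charge_hpage(mm, cc);

	if (result != SCAN_SUCCEED)
		goto out_nolock;
	/* ... collapse work sets result, as in the diff below ... */
out_nolock:
	if (!IS_ERR_OR_NULL(cc->hpage))
		mem_cgroup_uncharge(page_folio(cc->hpage));
	/* Was "return;": callers can now branch on the SCAN_* outcome. */
	return result;
}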
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c94bc43dff3e..6095fcb3f07c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -92,6 +92,10 @@ struct collapse_control {
 
 	/* Last target selected in khugepaged_find_target_node() */
 	int last_target_node;
+
+	struct page *hpage;
+	int (*alloc_charge_hpage)(struct mm_struct *mm,
+				  struct collapse_control *cc);
 };
 
 /**
@@ -866,18 +870,19 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 	return true;
 }
 
-static bool khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
+static bool khugepaged_alloc_page(gfp_t gfp, int node,
+				  struct collapse_control *cc)
 {
-	VM_BUG_ON_PAGE(*hpage, *hpage);
+	VM_BUG_ON_PAGE(cc->hpage, cc->hpage);
 
-	*hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
-	if (unlikely(!*hpage)) {
+	cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
+	if (unlikely(!cc->hpage)) {
 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
-		*hpage = ERR_PTR(-ENOMEM);
+		cc->hpage = ERR_PTR(-ENOMEM);
 		return false;
 	}
 
-	prep_transhuge_page(*hpage);
+	prep_transhuge_page(cc->hpage);
 	count_vm_event(THP_COLLAPSE_ALLOC);
 	return true;
 }
@@ -941,9 +946,10 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 	return true;
 }
 
-static bool khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
+static bool khugepaged_alloc_page(gfp_t gfp, int node,
+				  struct collapse_control *cc)
 {
-	VM_BUG_ON(!*hpage);
+	VM_BUG_ON(!cc->hpage);
 
 	return true;
 }
@@ -1067,8 +1073,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 	return true;
 }
 
-static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
-			      struct collapse_control *cc)
+static int alloc_charge_hpage(struct mm_struct *mm, struct collapse_control *cc)
 {
 #ifdef CONFIG_NUMA
 	const struct cpumask *cpumask;
@@ -1084,17 +1089,17 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
 		set_cpus_allowed_ptr(current, cpumask);
 	}
 #endif
-	if (!khugepaged_alloc_page(hpage, gfp, node))
+	if (!khugepaged_alloc_page(gfp, node, cc))
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
-	if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
+	if (unlikely(mem_cgroup_charge(page_folio(cc->hpage), mm, gfp)))
 		return SCAN_CGROUP_CHARGE_FAIL;
-	count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
+	count_memcg_page_event(cc->hpage, THP_COLLAPSE_ALLOC);
 	return SCAN_SUCCEED;
 }
 
 static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
-			       struct page **hpage, int referenced,
-			       int unmapped, struct collapse_control *cc)
+			       int referenced, int unmapped,
+			       struct collapse_control *cc)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1116,11 +1121,11 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 */
 	mmap_read_unlock(mm);
 
-	result = alloc_charge_hpage(hpage, mm, cc);
+	result = cc->alloc_charge_hpage(mm, cc);
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;
 
-	new_page = *hpage;
+	new_page = cc->hpage;
 
 	mmap_read_lock(mm);
 	result = hugepage_vma_revalidate(mm, address, &vma);
@@ -1232,21 +1237,21 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	update_mmu_cache_pmd(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 
-	*hpage = NULL;
+	cc->hpage = NULL;
 
 	khugepaged_pages_collapsed++;
 	result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
-	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(page_folio(*hpage));
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		mem_cgroup_uncharge(page_folio(cc->hpage));
 	trace_mm_collapse_huge_page(mm, isolated, result);
 	return;
 }
 
 static int
 khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
-		    unsigned long address, struct page **hpage,
+		    unsigned long address,
 		    struct collapse_control *cc)
 {
 	pmd_t *pmd;
@@ -1392,8 +1397,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
 		/* collapse_huge_page will return with the mmap_lock released */
-		collapse_huge_page(mm, address, hpage, referenced, unmapped,
-				   cc);
+		collapse_huge_page(mm, address, referenced, unmapped, cc);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
@@ -1658,7 +1662,6 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  * @mm: process address space where collapse happens
  * @file: file that collapse on
  * @start: collapse start address
- * @hpage: new allocated huge page for collapse
  * @cc: collapse context and scratchpad
  *
  * Basic scheme is simple, details are more complex:
@@ -1677,8 +1680,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  *    + unlock and free huge page;
  */
 static void collapse_file(struct mm_struct *mm, struct file *file,
-			  pgoff_t start, struct page **hpage,
-			  struct collapse_control *cc)
+			  pgoff_t start, struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
 	struct page *new_page;
@@ -1692,11 +1694,11 @@ static void collapse_file(struct mm_struct *mm, struct file *file,
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
-	result = alloc_charge_hpage(hpage, mm, cc);
+	result = cc->alloc_charge_hpage(mm, cc);
 	if (result != SCAN_SUCCEED)
 		goto out;
 
-	new_page = *hpage;
+	new_page = cc->hpage;
 
 	/*
 	 * Ensure we have slots for all the pages in the range.  This is
@@ -1979,7 +1981,7 @@ static void collapse_file(struct mm_struct *mm, struct file *file,
 		 * Remove pte page tables, so we can re-fault the page as huge.
 		 */
 		retract_page_tables(mapping, start);
-		*hpage = NULL;
+		cc->hpage = NULL;
 
 		khugepaged_pages_collapsed++;
 	} else {
@@ -2026,14 +2028,13 @@ static void collapse_file(struct mm_struct *mm, struct file *file,
 	unlock_page(new_page);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
-	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(page_folio(*hpage));
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		mem_cgroup_uncharge(page_folio(cc->hpage));
 	/* TODO: tracepoints */
 }
 
 static void khugepaged_scan_file(struct mm_struct *mm, struct file *file,
-				 pgoff_t start, struct page **hpage,
-				 struct collapse_control *cc)
+				 pgoff_t start, struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2106,7 +2107,7 @@ static void khugepaged_scan_file(struct mm_struct *mm, struct file *file,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			collapse_file(mm, file, start, hpage, cc);
+			collapse_file(mm, file, start, cc);
 		}
 	}
 
@@ -2114,8 +2115,7 @@ static void khugepaged_scan_file(struct mm_struct *mm, struct file *file,
 }
 #else
 static void khugepaged_scan_file(struct mm_struct *mm, struct file *file,
-				 pgoff_t start, struct page **hpage,
-				 struct collapse_control *cc)
+				 pgoff_t start, struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2126,7 +2126,6 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 #endif
 
 static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
-					    struct page **hpage,
 					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
@@ -2203,13 +2202,11 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 				mmap_read_unlock(mm);
 				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, hpage,
-						     cc);
+				khugepaged_scan_file(mm, file, pgoff, cc);
 				fput(file);
 			} else {
 				ret = khugepaged_scan_pmd(mm, vma,
-							  khugepaged_scan.address,
-							  hpage, cc);
+							  khugepaged_scan.address, cc);
 			}
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2267,15 +2264,15 @@ static int khugepaged_wait_event(void)
 
 static void khugepaged_do_scan(struct collapse_control *cc)
 {
-	struct page *hpage = NULL;
 	unsigned int progress = 0, pass_through_head = 0;
 	unsigned int pages = READ_ONCE(khugepaged_pages_to_scan);
 	bool wait = true;
 
+	cc->hpage = NULL;
 	lru_add_drain_all();
 
 	while (progress < pages) {
-		if (!khugepaged_prealloc_page(&hpage, &wait))
+		if (!khugepaged_prealloc_page(&cc->hpage, &wait))
 			break;
 
 		cond_resched();
@@ -2289,14 +2286,14 @@ static void khugepaged_do_scan(struct collapse_control *cc)
 
 		if (khugepaged_has_work() &&
 		    pass_through_head < 2)
 			progress += khugepaged_scan_mm_slot(pages - progress,
-							    &hpage, cc);
+							    cc);
 		else
 			progress = pages;
 		spin_unlock(&khugepaged_mm_lock);
 	}
 
-	if (!IS_ERR_OR_NULL(hpage))
-		put_page(hpage);
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		put_page(cc->hpage);
 }
 
 static bool khugepaged_should_wakeup(void)
@@ -2330,6 +2327,7 @@ static int khugepaged(void *none)
 	struct mm_slot *mm_slot;
 	struct collapse_control cc = {
 		.last_target_node = NUMA_NO_NODE,
+		.alloc_charge_hpage = &alloc_charge_hpage,
 	};
 
 	set_freezable();
Add a hook to struct collapse_control that allows contexts to define
their own allocation semantics and charging logic. For example,
khugepaged has specific NUMA and UMA implementations, as well as gfp
flags tied to /sys/kernel/mm/transparent_hugepage/khugepaged/defrag.

Additionally, move the [pre]allocated hugepage pointer into struct
collapse_control.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 90 ++++++++++++++++++++++++-------------------------
 1 file changed, 44 insertions(+), 46 deletions(-)
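For reference, the khugepaged gfp flags mentioned above are derived from
the defrag sysfs knob; the existing helper in mm/huge_memory.c looks
roughly like this (paraphrased for context; it is not part of this
patch):

static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
{
	/* defrag on: permit direct reclaim/compaction for the hugepage */
	return khugepaged_defrag() ? GFP_TRANSHUGE : GFP_TRANSHUGE_LIGHT;
}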