
[4/6] mm/page_alloc: Remove unnecessary page == NULL check in rmqueue

Message ID 20220512085043.5234-5-mgorman@techsingularity.net (mailing list archive)
State New
Series Drain remote per-cpu directly v3

Commit Message

Mel Gorman May 12, 2022, 8:50 a.m. UTC
The VM_BUG_ON check for a valid page can be avoided with a simple
change in the flow. The ZONE_BOOSTED_WATERMARK bit is unlikely to be
set in general, and even more unlikely if the page allocation failed,
so mark the branch unlikely.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Tested-by: Minchan Kim <minchan@kernel.org>
Acked-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
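
For orientation, the change to the tail of rmqueue() looks roughly like
this (reconstructed from the diff below; the function body around it is
paraphrased, not quoted):

	/* Before: the NULL check only covers the rmqueue_buddy() slow path */
	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
							migratetype);
	if (unlikely(!page))
		return NULL;
out:
	if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) { /* ... */ }
	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
	return page;

	/* After: the NULL check sits below the out: label, so by the time
	 * VM_BUG_ON_PAGE() runs, page is known to be non-NULL and the
	 * "page &&" guard can be dropped */
	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
							migratetype);
out:
	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) { /* ... */ }
	if (unlikely(!page))
		return NULL;
	VM_BUG_ON_PAGE(bad_range(zone, page), page);
	return page;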

Comments

Nicolas Saenz Julienne May 13, 2022, 12:03 p.m. UTC | #1
On Thu, 2022-05-12 at 09:50 +0100, Mel Gorman wrote:
> The VM_BUG_ON check for a valid page can be avoided with a simple
> change in the flow. The ZONE_BOOSTED_WATERMARK bit is unlikely to be
> set in general, and even more unlikely if the page allocation failed,
> so mark the branch unlikely.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> Tested-by: Minchan Kim <minchan@kernel.org>
> Acked-by: Minchan Kim <minchan@kernel.org>
> ---

Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>

Thanks,
Vlastimil Babka May 19, 2022, 10:57 a.m. UTC | #2
On 5/12/22 10:50, Mel Gorman wrote:
> The VM_BUG_ON check for a valid page can be avoided with a simple
> change in the flow. The ZONE_BOOSTED_WATERMARK bit is unlikely to be
> set in general, and even more unlikely if the page allocation failed,
> so mark the branch unlikely.

Hm, so that lets a DEBUG_VM config avoid the check. On the other hand,
it puts the !page check on the path returning from rmqueue_pcplist()
for all configs, and that should be the fast path. So unless things
change further in the following patches, it doesn't seem that useful?
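
To see why the moved check lands on the fast path, here is a rough
sketch of how rmqueue() reaches the out: label (paraphrased from the
v5.18-era code; details such as the CMA handling are elided):

	struct page *page;

	if (likely(pcp_allowed_order(order))) {
		/* Fast path: allocate from the per-cpu lists */
		page = rmqueue_pcplist(preferred_zone, zone, order,
				       gfp_flags, migratetype, alloc_flags);
		goto out;	/* with this patch, also runs the !page check */
	}

	/* Slow path: allocate from the zone's buddy lists */
	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
							migratetype);
out:
	/* ... */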

> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> Tested-by: Minchan Kim <minchan@kernel.org>
> Acked-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1c4c54503a5d..b543333dce8f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3765,17 +3765,18 @@ struct page *rmqueue(struct zone *preferred_zone,
>  
>  	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
>  							migratetype);
> -	if (unlikely(!page))
> -		return NULL;
>  
>  out:
>  	/* Separate test+clear to avoid unnecessary atomics */
> -	if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
> +	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
>  		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
>  		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
>  	}
>  
> -	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
> +	if (unlikely(!page))
> +		return NULL;
> +
> +	VM_BUG_ON_PAGE(bad_range(zone, page), page);
>  	return page;
>  }
>
Mel Gorman May 19, 2022, 12:13 p.m. UTC | #3
On Thu, May 19, 2022 at 12:57:01PM +0200, Vlastimil Babka wrote:
> On 5/12/22 10:50, Mel Gorman wrote:
> > The VM_BUG_ON check for a valid page can be avoided with a simple
> > change in the flow. The ZONE_BOOSTED_WATERMARK is unlikely in general
> > and even more unlikely if the page allocation failed so mark the
> > branch unlikely.
> 
> Hm, so that lets a DEBUG_VM config avoid the check. On the other hand,
> it puts the !page check on the path returning from rmqueue_pcplist()
> for all configs, and that should be the fast path. So unless things
> change further in the following patches, it doesn't seem that useful?
> 

You're right -- the fast path ends up with both an if (page) and an
if (!page) check. Andrew, can you drop the patch
mm-page_alloc-remove-unnecessary-page-==-null-check-in-rmqueue.patch from
your tree please?

Originally the flow was important while I was writing the patch but it
later became unnecessary. However, it reminded me of another problem I
thought of when writing this and then forgot to note in the changelog.
If the page allocation fails, ZONE_BOOSTED_WATERMARK should still be
tested and cleared before waking kswapd. Such a failure can happen if
an allocation attempt tried to fall back to another migratetype and
still failed to find a suitable page. This is true whether the
allocation goes through the PCP lists or not.
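
As an aside on the block being made unconditional: the "Separate
test+clear to avoid unnecessary atomics" comment refers to the pattern
below, where a plain test_bit() read guards the atomic clear_bit(). In
the common case the bit is clear and the check costs only a read,
whereas a single test_and_clear_bit() would be an atomic
read-modify-write every time (a sketch of the idea, not new code):

	/* Common case: the boost bit is clear, so this is just a read */
	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
		/* Rare case: pay for the atomic RMW and wake kswapd */
		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
	}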

So what do you think of me adding this patch to a follow-up series?

--8<--
mm/page_alloc: Remove mistaken page == NULL check in rmqueue

If a page allocation fails, the ZONE_BOOSTED_WATERMARK bit should be
tested and cleared, and kswapd woken, whether the allocation attempt
was via the PCP or directly via the buddy list.

Remove the page == NULL check so the ZONE_BOOSTED_WATERMARK bit is
checked unconditionally. As it is unlikely that ZONE_BOOSTED_WATERMARK
is set, mark the branch accordingly.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1c4c54503a5d..61d5bc2efffe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3765,12 +3765,10 @@ struct page *rmqueue(struct zone *preferred_zone,
 
 	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
 							migratetype);
-	if (unlikely(!page))
-		return NULL;
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
-	if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
+	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
 		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
 		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
 	}
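
For reference, with this follow-up applied the tail of rmqueue() would
look roughly as below (reconstructed from the two diffs; surrounding
code paraphrased). Note that VM_BUG_ON_PAGE() keeps its "page &&" guard
because page may legitimately be NULL here:

	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
							migratetype);

out:
	/* Runs on both the PCP and buddy paths, even when allocation failed */
	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
	}

	/* No early return: a failed allocation simply returns page == NULL */
	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
	return page;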
Vlastimil Babka May 19, 2022, 12:26 p.m. UTC | #4
On 5/19/22 14:13, Mel Gorman wrote:
> On Thu, May 19, 2022 at 12:57:01PM +0200, Vlastimil Babka wrote:
>> On 5/12/22 10:50, Mel Gorman wrote:
>>> The VM_BUG_ON check for a valid page can be avoided with a simple
>>> change in the flow. The ZONE_BOOSTED_WATERMARK bit is unlikely to be
>>> set in general, and even more unlikely if the page allocation failed,
>>> so mark the branch unlikely.
>>
>> Hm, so that lets a DEBUG_VM config avoid the check. On the other hand,
>> it puts the !page check on the path returning from rmqueue_pcplist()
>> for all configs, and that should be the fast path. So unless things
>> change further in the following patches, it doesn't seem that useful?
>>
> 
> You're right -- the fast path ends up with both an if (page) and an
> if (!page) check. Andrew, can you drop the patch
> mm-page_alloc-remove-unnecessary-page-==-null-check-in-rmqueue.patch from
> your tree please?
> 
> Originally the flow was important while I was writing the patch but it
> later became unnecessary. However, it reminded me of another problem I
> thought of when writing this and then forgot to note in the changelog.
> If the page allocation fails, ZONE_BOOSTED_WATERMARK should still be
> tested and cleared before waking kswapd. Such a failure can happen if
> an allocation attempt tried to fall back to another migratetype and
> still failed to find a suitable page. This is true whether the
> allocation goes through the PCP lists or not.
> 
> So what do you think of me adding this patch to a follow-up series?

LGTM.

> 
> --8<--
> mm/page_alloc: Remove mistaken page == NULL check in rmqueue
> 
> If a page allocation fails, the ZONE_BOOSTED_WATERMARK bit should be
> tested and cleared, and kswapd woken, whether the allocation attempt
> was via the PCP or directly via the buddy list.
> 
> Remove the page == NULL check so the ZONE_BOOSTED_WATERMARK bit is
> checked unconditionally. As it is unlikely that ZONE_BOOSTED_WATERMARK
> is set, mark the branch accordingly.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1c4c54503a5d..61d5bc2efffe 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3765,12 +3765,10 @@ struct page *rmqueue(struct zone *preferred_zone,
>  
>  	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
>  							migratetype);
> -	if (unlikely(!page))
> -		return NULL;
>  
>  out:
>  	/* Separate test+clear to avoid unnecessary atomics */
> -	if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
> +	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
>  		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
>  		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
>  	}
>

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1c4c54503a5d..b543333dce8f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3765,17 +3765,18 @@ struct page *rmqueue(struct zone *preferred_zone,
 
 	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
 							migratetype);
-	if (unlikely(!page))
-		return NULL;
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
-	if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
+	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
 		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
 		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
 	}
 
-	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
+	if (unlikely(!page))
+		return NULL;
+
+	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 	return page;
 }