
mm/page_alloc: cache the result of node_dirty_ok()

Message ID 20220430011032.64071-1-vvghjk1234@gmail.com (mailing list archive)
State New
Series mm/page_alloc: cache the result of node_dirty_ok()

Commit Message

Wonhyuk Yang April 30, 2022, 1:10 a.m. UTC
To spread dirty pages, nodes are checked against their dirty limit
using the expensive node_dirty_ok(). To reduce the number of
node_dirty_ok() calls, the last node that hit its dirty limit is
cached.

Instead of caching only the node, cache both the node and its
node_dirty_ok() result. This cuts the number of node_dirty_ok() calls
further: consecutive zones from a node that passes the check no longer
each trigger a fresh node_dirty_ok() call.

Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
---
 mm/page_alloc.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)
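
For illustration, here is a stand-alone sketch of the caching pattern
(not kernel code; the zone/node layout and the dummy_node_dirty_ok()
helper are invented for this example). When consecutive zones in a
zonelist belong to the same node, the expensive per-node check runs
once per node rather than once per zone:

/*
 * Stand-alone sketch of the caching pattern introduced by the patch.
 * The zone/node layout and dummy_node_dirty_ok() are made up; only the
 * last_pgdat/last_pgdat_dirty_ok logic mirrors the patch.
 */
#include <stdbool.h>
#include <stdio.h>

static int ndo_calls;

/* Stand-in for the expensive node_dirty_ok() check. */
static bool dummy_node_dirty_ok(int node)
{
	ndo_calls++;
	return node != 1;	/* pretend node 1 is over its dirty limit */
}

int main(void)
{
	/* Zonelist expressed as the node id of each zone, in walk order. */
	int zone_node[] = { 0, 0, 1, 1, 2, 2 };
	int nr_zones = sizeof(zone_node) / sizeof(zone_node[0]);
	int last_pgdat = -1;
	bool last_pgdat_dirty_ok = false;
	int i;

	for (i = 0; i < nr_zones; i++) {
		int node = zone_node[i];

		/* Only re-run the check when the node changes. */
		if (node != last_pgdat) {
			last_pgdat = node;
			last_pgdat_dirty_ok = dummy_node_dirty_ok(node);
		}
		if (!last_pgdat_dirty_ok)
			continue;	/* skip zones on dirty-limited nodes */

		printf("zone %d on node %d is usable\n", i, node);
	}
	printf("%d zones walked, dummy_node_dirty_ok() called %d times\n",
	       nr_zones, ndo_calls);
	return 0;
}

With the pre-patch scheme, which only remembers the last node that hit
its dirty limit, the same walk would call the check five times instead
of three, because every zone on a passing node is re-checked.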

Comments

Andrew Morton April 30, 2022, 6:38 p.m. UTC | #1
On Sat, 30 Apr 2022 10:10:32 +0900 Wonhyuk Yang <vvghjk1234@gmail.com> wrote:

> To spread dirty pages, nodes are checked against their dirty limit
> using the expensive node_dirty_ok(). To reduce the number of
> node_dirty_ok() calls, the last node that hit its dirty limit is
> cached.
> 
> Instead of caching only the node, cache both the node and its
> node_dirty_ok() result. This cuts the number of node_dirty_ok() calls
> further: consecutive zones from a node that passes the check no longer
> each trigger a fresh node_dirty_ok() call.
> 
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4068,7 +4068,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  {
>  	struct zoneref *z;
>  	struct zone *zone;
> -	struct pglist_data *last_pgdat_dirty_limit = NULL;
> +	struct pglist_data *last_pgdat = NULL;
> +	bool last_pgdat_dirty_limit = false;
>  	bool no_fallback;
>  
>  retry:
> @@ -4107,13 +4108,13 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  		 * dirty-throttling and the flusher threads.
>  		 */
>  		if (ac->spread_dirty_pages) {
> -			if (last_pgdat_dirty_limit == zone->zone_pgdat)
> -				continue;
> +			if (last_pgdat != zone->zone_pgdat) {
> +				last_pgdat = zone->zone_pgdat;
> +				last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
> +			}
>  
> -			if (!node_dirty_ok(zone->zone_pgdat)) {
> -				last_pgdat_dirty_limit = zone->zone_pgdat;
> +			if (!last_pgdat_dirty_limit)
>  				continue;
> -			}
>  		}
>  
>  		if (no_fallback && nr_online_nodes > 1 &&

Looks reasonable to me.  Hopefully Mel and Johannes can review.

I think last_pgdat_dirty_limit isn't a great name.  It records the
dirty_ok state of last_pgdat.  So why not call it last_pgdat_dirty_ok?

--- a/mm/page_alloc.c~mm-page_alloc-cache-the-result-of-node_dirty_ok-fix
+++ a/mm/page_alloc.c
@@ -4022,7 +4022,7 @@ get_page_from_freelist(gfp_t gfp_mask, u
 	struct zoneref *z;
 	struct zone *zone;
 	struct pglist_data *last_pgdat = NULL;
-	bool last_pgdat_dirty_limit = false;
+	bool last_pgdat_dirty_ok = false;
 	bool no_fallback;
 
 retry:
@@ -4063,10 +4063,10 @@ retry:
 		if (ac->spread_dirty_pages) {
 			if (last_pgdat != zone->zone_pgdat) {
 				last_pgdat = zone->zone_pgdat;
-				last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
+				last_pgdat_dirty_ok = node_dirty_ok(zone->zone_pgdat);
 			}
 
-			if (!last_pgdat_dirty_limit)
+			if (!last_pgdat_dirty_ok)
 				continue;
 		}
Mel Gorman May 2, 2022, 9:53 a.m. UTC | #2
On Sat, Apr 30, 2022 at 10:10:32AM +0900, Wonhyuk Yang wrote:
> To spread dirty pages, nodes are checked against their dirty limit
> using the expensive node_dirty_ok(). To reduce the number of
> node_dirty_ok() calls, the last node that hit its dirty limit is
> cached.
> 
> Instead of caching only the node, cache both the node and its
> node_dirty_ok() result. This cuts the number of node_dirty_ok() calls
> further: consecutive zones from a node that passes the check no longer
> each trigger a fresh node_dirty_ok() call.
> 
> Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>

Acked-by: Mel Gorman <mgorman@suse.de>

I agree with Andrew that last_pgdat_dirty_ok is a better name. The old
name was also bad but seeing as the area is being changed, fixing the
name is harmless.
Johannes Weiner May 3, 2022, 4:19 p.m. UTC | #3
On Sat, Apr 30, 2022 at 10:10:32AM +0900, Wonhyuk Yang wrote:
> To spread dirty pages, nodes are checked against their dirty limit
> using the expensive node_dirty_ok(). To reduce the number of
> node_dirty_ok() calls, the last node that hit its dirty limit is
> cached.
> 
> Instead of caching only the node, cache both the node and its
> node_dirty_ok() result. This cuts the number of node_dirty_ok() calls
> further: consecutive zones from a node that passes the check no longer
> each trigger a fresh node_dirty_ok() call.
> 
> Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Looks good to me. I like Andrew's naming fixlet as well.

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e42038382c1..aba62cf31a0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4068,7 +4068,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 {
 	struct zoneref *z;
 	struct zone *zone;
-	struct pglist_data *last_pgdat_dirty_limit = NULL;
+	struct pglist_data *last_pgdat = NULL;
+	bool last_pgdat_dirty_limit = false;
 	bool no_fallback;
 
 retry:
@@ -4107,13 +4108,13 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		 * dirty-throttling and the flusher threads.
 		 */
 		if (ac->spread_dirty_pages) {
-			if (last_pgdat_dirty_limit == zone->zone_pgdat)
-				continue;
+			if (last_pgdat != zone->zone_pgdat) {
+				last_pgdat = zone->zone_pgdat;
+				last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
+			}
 
-			if (!node_dirty_ok(zone->zone_pgdat)) {
-				last_pgdat_dirty_limit = zone->zone_pgdat;
+			if (!last_pgdat_dirty_limit)
 				continue;
-			}
 		}
 
 		if (no_fallback && nr_online_nodes > 1 &&