From patchwork Fri Jan 13 11:12:16 2023
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13100528
From: Mel Gorman
To: Andrew Morton
Cc: Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox, Vlastimil Babka, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 5/6] mm/page_alloc: Explicitly define how __GFP_HIGH non-blocking allocations access reserves
Date: Fri, 13 Jan 2023 11:12:16 +0000
Message-Id: <20230113111217.14134-6-mgorman@techsingularity.net>
In-Reply-To: <20230113111217.14134-1-mgorman@techsingularity.net>
References: <20230113111217.14134-1-mgorman@techsingularity.net>
MIME-Version: 1.0
GFP_ATOMIC allocations get flagged ALLOC_HARDER, which is a vague description. In preparation for the removal of GFP_ATOMIC, redefine __GFP_ATOMIC to simply mean non-blocking and rename ALLOC_HARDER to ALLOC_NON_BLOCK accordingly. __GFP_HIGH is still required for access to reserves, but a non-blocking caller is granted more access. For example, GFP_NOWAIT is non-blocking but has no special access to reserves. A blocking __GFP_NOFAIL allocation is granted access similar to __GFP_HIGH if the only alternative is an OOM kill.
Signed-off-by: Mel Gorman
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
---
 mm/internal.h   |  7 +++++--
 mm/page_alloc.c | 44 ++++++++++++++++++++++++--------------------
 2 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8706d46863df..23a37588073a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -735,7 +735,10 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_OOM		ALLOC_NO_WATERMARKS
 #endif
 
-#define ALLOC_HARDER		0x10 /* try to alloc harder */
+#define ALLOC_NON_BLOCK		0x10 /* Caller cannot block. Allow access
+				      * to 25% of the min watermark or
+				      * 62.5% if __GFP_HIGH is set.
+				      */
 #define ALLOC_MIN_RESERVE	0x20 /* __GFP_HIGH set. Allow access to 50%
 				      * of the min watermark.
 				      */
@@ -750,7 +753,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
 /* Flags that allow allocations below the min watermark. */
-#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
+#define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
 
 enum ttu_flags;
 struct tlbflush_unmap_batch;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6f41b84a97ac..b9ae0ba0a2ab 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3989,18 +3989,19 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 	 * __GFP_HIGH allows access to 50% of the min reserve as well
 	 * as OOM.
 	 */
-	if (alloc_flags & ALLOC_MIN_RESERVE)
+	if (alloc_flags & ALLOC_MIN_RESERVE) {
 		min -= min / 2;
 
-	/*
-	 * Non-blocking allocations can access some of the reserve
-	 * with more access if also __GFP_HIGH. The reasoning is that
-	 * a non-blocking caller may incur a more severe penalty
-	 * if it cannot get memory quickly, particularly if it's
-	 * also __GFP_HIGH.
-	 */
-	if (alloc_flags & ALLOC_HARDER)
-		min -= min / 4;
+		/*
+		 * Non-blocking allocations (e.g. GFP_ATOMIC) can
+		 * access more reserves than just __GFP_HIGH. Other
+		 * non-blocking allocations requests such as GFP_NOWAIT
+		 * or (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) do not get
+		 * access to the min reserve.
+		 */
+		if (alloc_flags & ALLOC_NON_BLOCK)
+			min -= min / 4;
+	}
 
 	/*
 	 * OOM victims can try even harder than the normal reserve
@@ -4851,28 +4852,30 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 * The caller may dip into page reserves a bit more if the caller
 	 * cannot run direct reclaim, or if the caller has realtime scheduling
	 * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will
-	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE(__GFP_HIGH).
+	 * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH).
 	 */
 	alloc_flags |= (__force int)
 		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
 
-	if (gfp_mask & __GFP_ATOMIC) {
+	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
 		/*
 		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
 		 * if it can't schedule.
 		 */
 		if (!(gfp_mask & __GFP_NOMEMALLOC)) {
-			alloc_flags |= ALLOC_HARDER;
+			alloc_flags |= ALLOC_NON_BLOCK;
 
 			if (order > 0)
 				alloc_flags |= ALLOC_HIGHATOMIC;
 		}
 
 		/*
-		 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
-		 * comment for __cpuset_node_allowed().
+		 * Ignore cpuset mems for non-blocking __GFP_HIGH (probably
+		 * GFP_ATOMIC) rather than fail, see the comment for
+		 * __cpuset_node_allowed().
 		 */
-		alloc_flags &= ~ALLOC_CPUSET;
+		if (alloc_flags & ALLOC_MIN_RESERVE)
+			alloc_flags &= ~ALLOC_CPUSET;
 	} else if (unlikely(rt_task(current)) && in_task())
 		alloc_flags |= ALLOC_MIN_RESERVE;
 
@@ -5303,12 +5306,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	WARN_ON_ONCE_GFP(costly_order, gfp_mask);
 
 	/*
-	 * Help non-failing allocations by giving them access to memory
-	 * reserves but do not use ALLOC_NO_WATERMARKS because this
+	 * Help non-failing allocations by giving some access to memory
+	 * reserves normally used for high priority non-blocking
+	 * allocations but do not use ALLOC_NO_WATERMARKS because this
 	 * could deplete whole memory reserves which would just make
-	 * the situation worse
+	 * the situation worse.
 	 */
-	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
+	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_MIN_RESERVE, ac);
 	if (page)
 		goto got_pg;