From patchwork Fri Jun 24 12:54:19 2022
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 12894469
From: Mel Gorman
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko,
    Hugh Dickins, Yu Zhao, Marek Szyprowski, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 3/7] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper
Date: Fri, 24 Jun 2022 13:54:19 +0100
Message-Id: <20220624125423.6126-4-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220624125423.6126-1-mgorman@techsingularity.net>
References: <20220624125423.6126-1-mgorman@techsingularity.net>
This is a preparation patch to allow the buddy removal code to be reused in a
later patch. No functional change.

Signed-off-by: Mel Gorman
Tested-by: Minchan Kim
Acked-by: Minchan Kim
Reviewed-by: Nicolas Saenz Julienne
Acked-by: Vlastimil Babka
---
 mm/page_alloc.c | 81 ++++++++++++++++++++++++++++---------------------
 1 file changed, 47 insertions(+), 34 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index febd97f4a2fc..44d198af4b35 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3637,6 +3637,43 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
 #endif
 }
 
+static __always_inline
+struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
+			   unsigned int order, unsigned int alloc_flags,
+			   int migratetype)
+{
+	struct page *page;
+	unsigned long flags;
+
+	do {
+		page = NULL;
+		spin_lock_irqsave(&zone->lock, flags);
+		/*
+		 * order-0 request can reach here when the pcplist is skipped
+		 * due to non-CMA allocation context. HIGHATOMIC area is
+		 * reserved for high-order atomic allocation, so order-0
+		 * request should skip it.
+		 */
+		if (order > 0 && alloc_flags & ALLOC_HARDER)
+			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+		if (!page) {
+			page = __rmqueue(zone, order, migratetype, alloc_flags);
+			if (!page) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				return NULL;
+			}
+		}
+		__mod_zone_freepage_state(zone, -(1 << order),
+					  get_pcppage_migratetype(page));
+		spin_unlock_irqrestore(&zone->lock, flags);
+	} while (check_new_pages(page, order));
+
+	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
+	zone_statistics(preferred_zone, zone, 1);
+
+	return page;
+}
+
 /* Remove page from the per-cpu list, caller must protect the list */
 static inline
 struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
@@ -3717,9 +3754,14 @@ struct page *rmqueue(struct zone *preferred_zone,
 			gfp_t gfp_flags, unsigned int alloc_flags,
 			int migratetype)
 {
-	unsigned long flags;
 	struct page *page;
 
+	/*
+	 * We most definitely don't want callers attempting to
+	 * allocate greater than order-1 page units with __GFP_NOFAIL.
+	 */
+	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
+
 	if (likely(pcp_allowed_order(order))) {
 		/*
 		 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
@@ -3733,35 +3775,10 @@ struct page *rmqueue(struct zone *preferred_zone,
 		}
 	}
 
-	/*
-	 * We most definitely don't want callers attempting to
-	 * allocate greater than order-1 page units with __GFP_NOFAIL.
-	 */
-	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
-
-	do {
-		page = NULL;
-		spin_lock_irqsave(&zone->lock, flags);
-		/*
-		 * order-0 request can reach here when the pcplist is skipped
-		 * due to non-CMA allocation context. HIGHATOMIC area is
-		 * reserved for high-order atomic allocation, so order-0
-		 * request should skip it.
-		 */
-		if (order > 0 && alloc_flags & ALLOC_HARDER)
-			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
-		if (!page) {
-			page = __rmqueue(zone, order, migratetype, alloc_flags);
-			if (!page)
-				goto failed;
-		}
-		__mod_zone_freepage_state(zone, -(1 << order),
-					  get_pcppage_migratetype(page));
-		spin_unlock_irqrestore(&zone->lock, flags);
-	} while (check_new_pages(page, order));
-
-	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
-	zone_statistics(preferred_zone, zone, 1);
+	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
+							migratetype);
+	if (unlikely(!page))
+		return NULL;
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
@@ -3772,10 +3789,6 @@ struct page *rmqueue(struct zone *preferred_zone,
 
 	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
 	return page;
-
-failed:
-	spin_unlock_irqrestore(&zone->lock, flags);
-	return NULL;
 }
 
 #ifdef CONFIG_FAIL_PAGE_ALLOC
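
For reference, the control flow that this patch moves into rmqueue_buddy() can
be modelled as a small standalone C program. The helpers below
(__rmqueue_smallest(), __rmqueue(), check_new_pages()) are simplified
stand-ins for the kernel functions of the same names, and the locking and
statistics updates are reduced to comments; this is an illustrative sketch of
the retry/fallback structure only, not kernel code.

/*
 * Standalone model of rmqueue_buddy(): optionally try the high-order atomic
 * reserve, fall back to the normal freelists, and retry while the returned
 * pages fail validation. All helpers are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct page { int order; };
struct zone { int lock; };

#define ALLOC_HARDER 0x1

/* Stand-in: pretend the high-order atomic reserve is empty. */
static struct page *__rmqueue_smallest(struct zone *z, unsigned int order)
{
	(void)z; (void)order;
	return NULL;
}

/* Stand-in: normal buddy removal, always succeeds in this model. */
static struct page *__rmqueue(struct zone *z, unsigned int order)
{
	struct page *p = malloc(sizeof(*p));

	(void)z;
	if (p)
		p->order = (int)order;
	return p;
}

/* Stand-in: page validation; false means the pages are good, do not retry. */
static bool check_new_pages(struct page *p, unsigned int order)
{
	(void)p; (void)order;
	return false;
}

static struct page *rmqueue_buddy_model(struct zone *zone, unsigned int order,
					unsigned int alloc_flags)
{
	struct page *page;

	do {
		page = NULL;
		/* zone->lock would be taken here (spin_lock_irqsave) */
		if (order > 0 && (alloc_flags & ALLOC_HARDER))
			page = __rmqueue_smallest(zone, order);
		if (!page) {
			page = __rmqueue(zone, order);
			if (!page)
				return NULL;	/* lock dropped before returning */
		}
		/* freepage accounting and unlock would happen here */
	} while (check_new_pages(page, order));

	/* VM event and NUMA statistics would be updated here */
	return page;
}

int main(void)
{
	struct zone z = { 0 };
	struct page *p = rmqueue_buddy_model(&z, 0, 0);

	printf("allocation %s\n", p ? "succeeded" : "failed");
	free(p);
	return 0;
}

Note how the failure path in the helper drops the lock and returns NULL
directly, which is what allows the patch to delete the failed: label and its
unlock from rmqueue().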