From patchwork Thu May 11 05:22:30 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
X-Patchwork-Id: 13237522
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Roman Gushchin, Minchan Kim, Joonsoo Kim, Zhaoyang Huang
Subject: [PATCHv5] mm: optimization on page allocation when CMA enabled
Date: Thu, 11 May 2023 13:22:30 +0800
Message-ID: <1683782550-25799-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Consider the timeline of the scenarios below, with WMARK_LOW=25MB and
WMARK_MIN=5MB (managed pages 1.9GB). Under the current policy of "use CMA
only when free CMA exceeds half of the free pages", CMA is not used until
scenario 'C'. That leaves scenarios 'A' and 'B' in a faulty state: free
UNMOVABLE & RECLAIMABLE pages drop below their corresponding watermark
without reclaim being triggered, which should be deemed a violation of the
current memory policy. This commit tries to solve that by checking
zone_watermark_ok again with CMA pages excluded, which leads to a more
appropriate point at which to start allocating from CMA.

     -- Free_pages
    |
    |      -- WMARK_LOW
    |     |
    |     |      -- Free_CMA
    |     |     |
    |     |     |

Free_CMA/Free_pages(MB)   A(12/30)  -->  B(12/25)  -->  C(12/20)
fixed 1/2 ratio               N              N              Y
this commit                   Y              Y              Y

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: do proportion check when zone_watermark_ok, update commit message
v3: update coding style and simplify the logic when zone_watermark_ok
v4: code update according to Roman's suggestion
v5: update commit message
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aed..4719800 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3071,6 +3071,43 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 }
 
+#ifdef CONFIG_CMA
+/*
+ * GFP_MOVABLE allocations could drain UNMOVABLE & RECLAIMABLE page blocks
+ * with the help of CMA, which can make GFP_KERNEL allocations fail. Check
+ * zone_watermark_ok again without ALLOC_CMA to decide whether to use CMA first.
+ */
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the previous zone_watermark_ok with CMA's help */
+	if (zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	} else {
+		/*
+		 * A failed watermark check means UNMOVABLE & RECLAIMABLE pages
+		 * are running low; use CMA first to keep them around their
+		 * corresponding watermark.
+		 */
+		cma_first = true;
+	}
+	return cma_first;
+}
+#else
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -3084,12 +3121,11 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on a second zone_watermark_ok check
+		 * that tells whether the first one only passed with CMA's help.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;
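
For reference, the decision table in the commit message can be reproduced with a
small standalone sketch (ordinary userspace C, not kernel code; the helper names
and the simplified watermark check below are assumptions made only for this
illustration, not the patch's actual implementation):

#include <stdbool.h>
#include <stdio.h>

/* old policy: use CMA only when free CMA exceeds half of all free pages */
static bool fixed_half_ratio(unsigned long free_mb, unsigned long free_cma_mb)
{
	return free_cma_mb > free_mb / 2;
}

/*
 * this commit's idea: if the watermark would still pass with CMA excluded,
 * keep the old 1/2-ratio balancing; otherwise start using CMA right away
 */
static bool would_use_cma_first(unsigned long free_mb, unsigned long free_cma_mb,
				unsigned long wmark_low_mb)
{
	if (free_mb - free_cma_mb > wmark_low_mb)
		return fixed_half_ratio(free_mb, free_cma_mb);
	return true;
}

int main(void)
{
	const unsigned long wmark_low = 25;	/* MB, as in the commit message */
	const struct { char name; unsigned long free, cma; } s[] = {
		{ 'A', 30, 12 }, { 'B', 25, 12 }, { 'C', 20, 12 },
	};

	for (unsigned int i = 0; i < sizeof(s) / sizeof(s[0]); i++)
		printf("%c(%lu/%lu): fixed 1/2 ratio=%c, this commit=%c\n",
		       s[i].name, s[i].cma, s[i].free,
		       fixed_half_ratio(s[i].free, s[i].cma) ? 'Y' : 'N',
		       would_use_cma_first(s[i].free, s[i].cma, wmark_low) ? 'Y' : 'N');
	return 0;
}

Built with a stock C compiler, this should print N/N/Y for the fixed 1/2-ratio
policy and Y/Y/Y for the watermark-recheck policy across A, B and C, matching
the table in the commit message.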