From patchwork Wed May 10 02:20:51 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "zhaoyang.huang"
X-Patchwork-Id: 13236231
From: "zhaoyang.huang"
To: Andrew Morton, Roman Gushchin, Minchan Kim, Roman Gushchin, Joonsoo Kim, Zhaoyang Huang
Subject: [PATCHv4] mm: optimization on page allocation when CMA enabled
Date: Wed, 10 May 2023 10:20:51 +0800
Message-ID: <1683685251-2059-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1
MIME-Version: 1.0
Sender: owner-linux-mm@kvack.org

From: Zhaoyang Huang

Let us look at the series of scenarios below, with WMARK_LOW=25MB and
WMARK_MIN=5MB (managed pages 1.9GB).
We can see that the current 'fixed 1/2 ratio' policy only starts to use CMA
at scenario C, by which point the free UNMOVABLE & RECLAIMABLE (U&R) pages
have already fallen below WMARK_LOW. This should be deemed as going against
the current memory policy: U&R pages should either stay around WMARK_LOW
when there is no allocation pressure, or be replenished by reclaim via
entering the slowpath.

   -- Free_pages
     |
     |
   -- WMARK_LOW
     |
   -- Free_CMA
     |
     |
   ---

   Free_CMA/Free_pages(MB)   A(12/30)   B(12/25)   C(12/20)
   fixed 1/2 ratio              N          N          Y
   this commit                  Y          Y          Y

Signed-off-by: Zhaoyang Huang
---
v2: do the proportion check when zone_watermark_ok passes; update commit message
v3: update coding style and simplify the logic when zone_watermark_ok passes
v4: code update according to Roman's suggestion
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aed..4719800 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3071,6 +3071,43 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 }
 
+#ifdef CONFIG_CMA
+/*
+ * GFP_MOVABLE allocations can drain UNMOVABLE & RECLAIMABLE page blocks with
+ * the help of CMA, which can make GFP_KERNEL allocations fail. Check
+ * zone_watermark_ok() again without ALLOC_CMA to decide whether to use CMA
+ * first.
+ */
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the previous zone_watermark_ok() with the help of CMA */
+	if (zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	} else {
+		/*
+		 * The watermark check failing means UNMOVABLE & RECLAIMABLE
+		 * pages are not enough now; use CMA first to keep them
+		 * around the corresponding watermark.
+		 */
+		cma_first = true;
+	}
+	return cma_first;
+}
+#else
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -3084,12 +3121,11 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on re-checking zone_watermark_ok()
+		 * to see if the latest check only passed with the help of CMA.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;