From patchwork Fri Apr 28 11:00:41 2023
X-Patchwork-Submitter: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
X-Patchwork-Id: 13226286
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Roman Gushchin, Zhaoyang Huang, linux-mm@kvack.org
Subject: [PATCH] mm: optimization on page allocation when CMA enabled
Date: Fri, 28 Apr 2023 19:00:41 +0800
Message-ID: <1682679641-13652-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Please note the typical scenario below that commit 168676649 introduces:
12MB of free CMA pages 'help' GFP_MOVABLE allocations keep draining and
fragmenting U&R (unmovable and reclaimable) page blocks until those shrink
to 12MB, without ever entering the slowpath, which works against the
current reclaim policy. This commit changes the criteria from the
hard-coded '1/2' to a watermark check, so that U&R free pages stay around
WMARK_LOW when the allocation falls back to CMA.

As an example, consider a DMA32 zone whose free pages sit right at
WMARK_LOW:

DMA32 free:25900kB boost:0kB min:4176kB low:25856kB high:29516kB
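To make the imbalance concrete, compare the watermark check with and
without the CMA pages counted. Below is a minimal userspace sketch, not
part of the patch, assuming hypothetically that 12MB of the free pages
above are CMA; in the kernel the equivalent comparison is done by
zone_watermark_ok() with and without ALLOC_CMA:

#include <stdio.h>

int main(void)
{
	long free_kb     = 25900; /* NR_FREE_PAGES, from the DMA32 line above */
	long low_kb      = 25856; /* WMARK_LOW */
	long free_cma_kb = 12288; /* assumption: 12MB of the free pages are CMA */

	/* the check that counts CMA pages (what GFP_MOVABLE sees): passes */
	printf("with CMA:    %s\n", free_kb > low_kb ? "ok" : "below low");

	/* the same check with CMA excluded: U&R free is far below the mark */
	printf("without CMA: %s\n",
	       free_kb - free_cma_kb > low_kb ? "ok" : "below low");
	return 0;
}

This prints "with CMA: ok" but "without CMA: below low", i.e. movable
allocations keep passing the fast-path watermark check while the non-CMA
page blocks are drained well below WMARK_LOW.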
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 mm/page_alloc.c | 40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aed..97768fe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3071,6 +3071,39 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 
 }
 
+#ifdef CONFIG_CMA
+static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long cma_proportion = 0;
+	unsigned long cma_free_proportion = 0;
+	unsigned long watermark = 0;
+	unsigned long wm_fact[ALLOC_WMARK_MASK] = {1, 1, 2};
+	long count = 0;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/*check if GFP_MOVABLE pass previous watermark check via the help of CMA*/
+	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
+	{
+		alloc_flags &= ALLOC_WMARK_MASK;
+		/* WMARK_LOW failed lead to using cma first, this helps U&R stay
+		 * around low when being drained by GFP_MOVABLE
+		 */
+		if (alloc_flags <= ALLOC_WMARK_LOW)
+			cma_first = true;
+		/*check proportion for WMARK_HIGH*/
+		else {
+			count = atomic_long_read(&zone->managed_pages);
+			cma_proportion = zone->cma_pages * 100 / count;
+			cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
+				/ zone_page_state(zone, NR_FREE_PAGES);
+			cma_first = (cma_free_proportion >= wm_fact[alloc_flags] * cma_proportion
+					|| cma_free_proportion >= 50);
+		}
+	}
+	return cma_first;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -3087,10 +3120,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 		 * allocating from CMA when over half of the zone's free memory
 		 * is in the CMA area.
 		 */
-		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
-			page = __rmqueue_cma_fallback(zone, order);
+		if (migratetype == MIGRATE_MOVABLE) {
+			bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
+			page = cma_first ? __rmqueue_cma_fallback(zone, order) : NULL;
 			if (page)
 				return page;
 		}
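For reference, the WMARK_HIGH branch of the heuristic can be exercised in
isolation. Below is a standalone sketch of the proportion test with
made-up zone numbers; in the kernel the real values come from
zone->managed_pages, zone->cma_pages, NR_FREE_PAGES and NR_FREE_CMA_PAGES:

#include <stdbool.h>
#include <stdio.h>

/* mirrors the proportion test in __if_use_cma_first(); values in pages */
static bool cma_first_by_proportion(long managed, long cma,
				    long free_pages, long free_cma, long fact)
{
	long cma_prop      = cma * 100 / managed;         /* CMA share of the zone */
	long cma_free_prop = free_cma * 100 / free_pages; /* CMA share of free pages */

	/* prefer CMA when free memory is disproportionately CMA-heavy */
	return cma_free_prop >= fact * cma_prop || cma_free_prop >= 50;
}

int main(void)
{
	/* hypothetical zone: 256MB managed, 64MB CMA, 40MB free, 24MB of it CMA */
	printf("cma_first = %d\n",
	       cma_first_by_proportion(65536, 16384, 10240, 6144, 2));
	return 0;
}

Here free memory is 60% CMA against a 25% CMA share of the zone, so the
heuristic returns true and the allocator tries the CMA pageblocks first.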