From patchwork Tue Jan 16 07:13:02 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13520502
From: Kefeng Wang
To: Andrew Morton
CC: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH] mm: memory: move mem_cgroup_charge() into alloc_anon_folio()
Date: Tue, 16 Jan 2024 15:13:02 +0800
Message-ID: <20240116071302.2282230-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In order to allocate as large a folio as possible, move the memcg charge
into alloc_anon_folio() and fall back to the next lower order if
mem_cgroup_charge() fails. Also change GFP_KERNEL to gfp to be consistent
with PMD-sized THP.
Signed-off-by: Kefeng Wang
---
 mm/memory.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5e88d5379127..2e31a407e6f9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4206,15 +4206,21 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
 		folio = vma_alloc_folio(gfp, order, vma, addr, true);
 		if (folio) {
+			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
+				folio_put(folio);
+				goto next;
+			}
+			folio_throttle_swaprate(folio, gfp);
 			clear_huge_page(&folio->page, vmf->address, 1 << order);
 			return folio;
 		}
+next:
 		order = next_order(&orders, order);
 	}
 
 fallback:
 #endif
-	return vma_alloc_zeroed_movable_folio(vmf->vma, vmf->address);
+	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
 }
 
 /*
@@ -4281,10 +4287,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	nr_pages = folio_nr_pages(folio);
 	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
 
-	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
-		goto oom_free_page;
-	folio_throttle_swaprate(folio, GFP_KERNEL);
-
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * preceding stores to the page contents become visible before
@@ -4338,8 +4340,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 release:
 	folio_put(folio);
 	goto unlock;
-oom_free_page:
-	folio_put(folio);
 oom:
 	return VM_FAULT_OOM;
 }
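For readers outside mm/, the order-fallback flow this patch introduces in alloc_anon_folio() can be illustrated with a small user-space sketch. Note that charge_ok(), alloc_with_fallback(), and budget_order below are hypothetical stand-ins for vma_alloc_folio()/mem_cgroup_charge() and the memcg limit, not kernel APIs:

```c
#include <stdbool.h>

/* Hypothetical stand-in for mem_cgroup_charge(): pretend the memcg can
 * only accommodate allocations up to budget_order. */
static bool charge_ok(int order, int budget_order)
{
	return order <= budget_order;
}

/* Walk orders from high to low. On charge failure, drop the folio and
 * retry at the next lower order instead of failing the whole fault,
 * mirroring the new "goto next" path in alloc_anon_folio(). */
static int alloc_with_fallback(int max_order, int budget_order)
{
	for (int order = max_order; order > 0; order--) {
		/* the allocation itself is assumed to succeed here */
		if (!charge_ok(order, budget_order))
			continue;	/* folio_put() + goto next */
		return order;		/* allocated and charged */
	}
	return 0;			/* order-0 fallback: folio_prealloc() */
}
```

The point of the restructuring is that a charge failure at a high order no longer returns VM_FAULT_OOM directly; the function degrades gracefully down to the order-0 path.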