From patchwork Wed Jan 17 10:39:54 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13521602
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, linux-mm@kvack.org
Cc: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH v2] mm: memory: move mem_cgroup_charge() into
 alloc_anon_folio()
Date: Wed, 17 Jan 2024 18:39:54 +0800
Message-ID: <20240117103954.2756050-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0

mem_cgroup_charge() uses the GFP flags in a fairly sophisticated way.
In addition to checking gfpflags_allow_blocking(), it pays attention to
__GFP_NORETRY and __GFP_RETRY_MAYFAIL to ensure that processes within
this memcg do not exceed their quotas.  Using the same GFP flags ensures
that we handle large anonymous folios correctly, including falling back
to smaller orders when there is plenty of memory available in the system
but this memcg is close to its limits.
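
To illustrate, the resulting allocation loop in alloc_anon_folio() looks
roughly like this (simplified from the diff below; the gfp/order setup
and the surrounding #ifdef are omitted):

	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			/*
			 * Charge with the gfp actually used for the
			 * allocation; if the memcg refuses (e.g. the
			 * flags forbid blocking or retrying hard), drop
			 * this folio and fall back to a smaller order
			 * rather than OOM the task.
			 */
			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
				folio_put(folio);
				goto next;
			}
			folio_throttle_swaprate(folio, gfp);
			clear_huge_page(&folio->page, vmf->address, 1 << order);
			return folio;
		}
next:
		order = next_order(&orders, order);
	}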
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Ryan Roberts
---
v2:
- fix build when !CONFIG_TRANSPARENT_HUGEPAGE
- update changelog as suggested by Matthew Wilcox

 mm/memory.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5e88d5379127..551f0b21bc42 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4153,8 +4153,8 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
 
 static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 {
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	struct vm_area_struct *vma = vmf->vma;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	unsigned long orders;
 	struct folio *folio;
 	unsigned long addr;
@@ -4206,15 +4206,21 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
 		folio = vma_alloc_folio(gfp, order, vma, addr, true);
 		if (folio) {
+			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
+				folio_put(folio);
+				goto next;
+			}
+			folio_throttle_swaprate(folio, gfp);
 			clear_huge_page(&folio->page, vmf->address, 1 << order);
 			return folio;
 		}
+next:
 		order = next_order(&orders, order);
 	}
 
 fallback:
 #endif
-	return vma_alloc_zeroed_movable_folio(vmf->vma, vmf->address);
+	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
 }
 
 /*
@@ -4281,10 +4287,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	nr_pages = folio_nr_pages(folio);
 	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
 
-	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
-		goto oom_free_page;
-	folio_throttle_swaprate(folio, GFP_KERNEL);
-
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * preceding stores to the page contents become visible before
@@ -4338,8 +4340,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 release:
 	folio_put(folio);
 	goto unlock;
-oom_free_page:
-	folio_put(folio);
 oom:
 	return VM_FAULT_OOM;
 }
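
For completeness, the caller side after this change reduces to roughly
the following (a condensed view, not the literal code; the pte mapping
and unlock paths are elided):

	/* in do_anonymous_page() */
	folio = alloc_anon_folio(vmf);
	if (!folio)
		goto oom;
	/*
	 * No mem_cgroup_charge()/folio_throttle_swaprate() here any more:
	 * the folio arrives already charged, using the same GFP flags the
	 * allocation itself used, so the oom_free_page unwind label is gone.
	 */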