From patchwork Mon Jan 22 01:16:12 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13524738
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, linux-mm@kvack.org
Cc: Matthew Wilcox, David Hildenbrand, Michal Hocko, Roman Gushchin,
    Johannes Weiner, Shakeel Butt, Muchun Song,
    Kefeng Wang
Subject: [PATCH v3] mm: memory: move mem_cgroup_charge() into alloc_anon_folio()
Date: Mon, 22 Jan 2024 09:16:12 +0800
Message-ID: <20240122011612.501029-1-wangkefeng.wang@huawei.com>

The GFP flags from vma_thp_gfp_mask(), which reflect the user
configuration, are currently used only for the large folio allocation,
not for the memory cgroup charge: at present, GFP_KERNEL is used to
charge both order-0 and large order folios. However, mem_cgroup_charge()
uses the GFP flags in a fairly sophisticated way. In addition to
checking gfpflags_allow_blocking(), it pays attention to __GFP_NORETRY
and __GFP_RETRY_MAYFAIL to ensure that processes within this memcg do
not exceed their quotas.

So move mem_cgroup_charge() into alloc_anon_folio():

1) It lets us allocate the largest possible large order folio, because
   we can fall back to the next lower order if mem_cgroup_charge()
   fails, e.g. when the memcg's memory usage is close to its limits.

2) Using the same GFP flags for allocation and charge is consistent
   with PMD-sized THP. In addition, depending on the GFP flags returned
   by vma_thp_gfp_mask(), GFP_TRANSHUGE_LIGHT lets us skip direct
   reclaim, and __GFP_NORETRY makes us skip mem_cgroup_oom(), so
   charging a large order (order <= COSTLY_ORDER) folio will not
   trigger a memory cgroup OOM.
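To make that reasoning concrete, the charge path's reaction to these
flags can be condensed roughly as below. This is an illustrative sketch
only, not the real try_charge() in mm/memcontrol.c;
charge_would_overshoot(), memcg_reclaim() and memcg_oom_kill() are
made-up stand-ins for the real page-counter, reclaim and OOM logic:

#include <linux/gfp.h>		/* gfpflags_allow_blocking(), GFP flags */
#include <linux/memcontrol.h>	/* struct mem_cgroup */

/* Hypothetical stand-ins for the real page-counter/reclaim/OOM code: */
bool charge_would_overshoot(struct mem_cgroup *memcg, unsigned int nr_pages);
void memcg_reclaim(struct mem_cgroup *memcg, unsigned int nr_pages, gfp_t gfp);
void memcg_oom_kill(struct mem_cgroup *memcg);

/*
 * Sketch of how the memcg charge path reacts to its GFP flags
 * (condensed; the real logic is try_charge() in mm/memcontrol.c).
 */
static int charge_sketch(struct mem_cgroup *memcg, gfp_t gfp_mask,
			 unsigned int nr_pages)
{
	int retries = 16;	/* cf. MAX_RECLAIM_RETRIES */

	while (charge_would_overshoot(memcg, nr_pages)) {
		/* Non-blocking callers cannot reclaim: fail immediately. */
		if (!gfpflags_allow_blocking(gfp_mask))
			return -ENOMEM;

		memcg_reclaim(memcg, nr_pages, gfp_mask);

		/* __GFP_NORETRY: a single reclaim attempt, then give up. */
		if (gfp_mask & __GFP_NORETRY)
			return -ENOMEM;

		if (retries--)
			continue;

		/* __GFP_RETRY_MAYFAIL: keep retrying, but never OOM-kill. */
		if (gfp_mask & __GFP_RETRY_MAYFAIL)
			return -ENOMEM;

		/* Only charges that may block and retry reach memcg OOM. */
		memcg_oom_kill(memcg);
		return -ENOMEM;
	}
	return 0;
}

So a charge attempted with the THP GFP flags fails fast when the memcg
is at its limit, which is what lets the loop in alloc_anon_folio()
below fall back to the next lower order instead of OOMing the cgroup;
the order-0 fallback goes through folio_prealloc(), which allocates and
charges in one step.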
Reviewed-by: Ryan Roberts
Signed-off-by: Kefeng Wang
---
v3:
- update changelog as suggested by Michal Hocko
- add RB from Ryan
v2:
- fix build when !CONFIG_TRANSPARENT_HUGEPAGE
- update changelog as suggested by Matthew Wilcox

 mm/memory.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5e88d5379127..551f0b21bc42 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4153,8 +4153,8 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
 
 static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 {
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	struct vm_area_struct *vma = vmf->vma;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	unsigned long orders;
 	struct folio *folio;
 	unsigned long addr;
@@ -4206,15 +4206,21 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
 		folio = vma_alloc_folio(gfp, order, vma, addr, true);
 		if (folio) {
+			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
+				folio_put(folio);
+				goto next;
+			}
+			folio_throttle_swaprate(folio, gfp);
 			clear_huge_page(&folio->page, vmf->address, 1 << order);
 			return folio;
 		}
+next:
 		order = next_order(&orders, order);
 	}
 
 fallback:
 #endif
-	return vma_alloc_zeroed_movable_folio(vmf->vma, vmf->address);
+	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
 }
 
 /*
@@ -4281,10 +4287,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	nr_pages = folio_nr_pages(folio);
 	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
 
-	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
-		goto oom_free_page;
-	folio_throttle_swaprate(folio, GFP_KERNEL);
-
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * preceding stores to the page contents become visible before
@@ -4338,8 +4340,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 release:
 	folio_put(folio);
 	goto unlock;
-oom_free_page:
-	folio_put(folio);
 oom:
 	return VM_FAULT_OOM;
 }
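For reference, the reason GFP_TRANSHUGE_LIGHT "skips direct reclaim" in
point 2) above is visible in the flag definitions, abridged here from
include/linux/gfp_types.h (quoted from memory, so double-check against
your tree):

/* GFP_TRANSHUGE_LIGHT masks out all reclaim flags, so
 * gfpflags_allow_blocking() is false for it and a charge attempted
 * with it can never enter direct reclaim; GFP_TRANSHUGE adds direct
 * reclaim back in.
 */
#define GFP_TRANSHUGE_LIGHT	((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
				  __GFP_NOMEMALLOC | __GFP_NOWARN) & \
				 ~__GFP_RECLAIM)
#define GFP_TRANSHUGE		(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)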