From patchwork Tue May 21 04:02:42 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13668893
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, sj@kernel.org, baolin.wang@linux.alibaba.com, maskray@google.com, ziy@nvidia.com, ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v6 1/3] mm/rmap: remove duplicated exit code in pagewalk loop
Date: Tue, 21 May 2024 12:02:42 +0800
Message-Id: <20240521040244.48760-2-ioworker0@gmail.com>
In-Reply-To: <20240521040244.48760-1-ioworker0@gmail.com>
References: <20240521040244.48760-1-ioworker0@gmail.com>

Introduce the labels walk_done and walk_done_err as exit points to eliminate
duplicated exit code in the pagewalk loop.

Reviewed-by: Zi Yan
Reviewed-by: Baolin Wang
Signed-off-by: Lance Yang
Reviewed-by: David Hildenbrand
---
 mm/rmap.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index e8fc5ecb59b2..ddffa30c79fb 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1679,9 +1679,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			/* Restore the mlock which got missed */
 			if (!folio_test_large(folio))
 				mlock_vma_folio(folio, vma);
-			page_vma_mapped_walk_done(&pvmw);
-			ret = false;
-			break;
+			goto walk_done_err;
 		}
 
 		pfn = pte_pfn(ptep_get(pvmw.pte));
@@ -1719,11 +1717,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			if (!anon) {
 				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
-				if (!hugetlb_vma_trylock_write(vma)) {
-					page_vma_mapped_walk_done(&pvmw);
-					ret = false;
-					break;
-				}
+				if (!hugetlb_vma_trylock_write(vma))
+					goto walk_done_err;
 				if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
 					hugetlb_vma_unlock_write(vma);
 					flush_tlb_range(vma,
@@ -1738,8 +1733,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					 * actual page and drop map count
 					 * to zero.
 					 */
-					page_vma_mapped_walk_done(&pvmw);
-					break;
+					goto walk_done;
 				}
 				hugetlb_vma_unlock_write(vma);
 			}
@@ -1811,9 +1805,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			if (unlikely(folio_test_swapbacked(folio) !=
 					folio_test_swapcache(folio))) {
 				WARN_ON_ONCE(1);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 
 			/* MADV_FREE page check */
@@ -1852,23 +1844,17 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				folio_set_swapbacked(folio);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 
 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 
 			/* See folio_try_share_anon_rmap(): clear PTE first. */
@@ -1876,9 +1862,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			    folio_try_share_anon_rmap_pte(folio, subpage)) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_done_err;
 			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
@@ -1918,6 +1902,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
+		continue;
+walk_done_err:
+		ret = false;
+walk_done:
+		page_vma_mapped_walk_done(&pvmw);
+		break;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);

From patchwork Tue May 21 04:02:43 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13668894
Received: from mail-pl1-f175.google.com
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, sj@kernel.org, baolin.wang@linux.alibaba.com, maskray@google.com, ziy@nvidia.com, ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v6 2/3] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
Date: Tue, 21 May 2024 12:02:43 +0800
Message-Id: <20240521040244.48760-3-ioworker0@gmail.com>
In-Reply-To: <20240521040244.48760-1-ioworker0@gmail.com>
References: <20240521040244.48760-1-ioworker0@gmail.com>

In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
folios, start the pagewalk first, then call split_huge_pmd_address() to
split the folio. Since TTU_SPLIT_HUGE_PMD no longer performs the split
immediately, we might encounter a PMD-mapped THP that missed the mlock
in the VM_LOCKED range during the page walk. It's probably necessary to
mlock this THP to prevent it from being picked up during page reclaim.
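As background, both this patch and patch 1/3 route every failure site in the pagewalk loop through the shared walk_done_err / walk_done labels instead of repeating the cleanup inline. A standalone miniature of that single-exit goto idiom (hypothetical function and variable names, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Illustrative sketch of the walk_done / walk_done_err pattern: every
 * failure site jumps to a shared exit instead of duplicating
 * "cleanup(); ret = false; break;" at each call site.
 */
static bool process_items(int *items, size_t n)
{
	bool ret = true;
	int *scratch = malloc(sizeof(int));	/* stands in for a held resource */

	if (!scratch)
		return false;

	for (size_t i = 0; i < n; i++) {
		if (items[i] < 0)
			goto walk_done_err;	/* failure: shared error exit */
		if (items[i] == 0)
			goto walk_done;		/* early success exit */
		items[i] *= 2;
	}
	goto walk_done;

walk_done_err:
	ret = false;		/* only the error path flips the result... */
walk_done:
	free(scratch);		/* ...then both paths share the cleanup */
	return ret;
}
```

The error label sets the failure result and falls through into the common label, so the cleanup exists in exactly one place; this is the same shape the series gives page_vma_mapped_walk_done() in the loop.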
Suggested-by: David Hildenbrand
Suggested-by: Baolin Wang
Signed-off-by: Lance Yang
---
 include/linux/huge_mm.h |  6 ++++++
 mm/huge_memory.c        | 42 +++++++++++++++++++++--------------------
 mm/rmap.c               | 26 ++++++++++++++++++-------
 3 files changed, 47 insertions(+), 27 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index c8d3ec116e29..9fcb0b0b6ed1 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -409,6 +409,9 @@ static inline bool thp_migration_supported(void)
 	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio);
+
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline bool folio_test_pmd_mappable(struct folio *folio)
@@ -471,6 +474,9 @@ static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio) {}
 static inline void split_huge_pmd_address(struct vm_area_struct *vma,
 		unsigned long address, bool freeze, struct folio *folio) {}
+static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
+					 unsigned long address, pmd_t *pmd,
+					 bool freeze, struct folio *folio) {}
 
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 317de2afd371..425272c6c50b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2581,6 +2581,27 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pmd_populate(mm, pmd, pgtable);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio)
+{
+	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+	VM_BUG_ON(freeze && !folio);
+
+	/*
+	 * When the caller requests to set up a migration entry, we
+	 * require a folio to check the PMD against. Otherwise, there
+	 * is a risk of replacing the wrong folio.
+	 */
+	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+	    is_pmd_migration_entry(*pmd)) {
+		if (folio && folio != pmd_folio(*pmd))
+			return;
+		__split_huge_pmd_locked(vma, pmd, address, freeze);
+	}
+}
+
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio)
 {
@@ -2592,26 +2613,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pmd_lock(vma->vm_mm, pmd);
-
-	/*
-	 * If caller asks to setup a migration entry, we need a folio to check
-	 * pmd against. Otherwise we can end up replacing wrong folio.
-	 */
-	VM_BUG_ON(freeze && !folio);
-	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
-
-	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd)) {
-		/*
-		 * It's safe to call pmd_page when folio is set because it's
-		 * guaranteed that pmd is present.
-		 */
-		if (folio && folio != pmd_folio(*pmd))
-			goto out;
-		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
-	}
-
-out:
+	split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
 }

diff --git a/mm/rmap.c b/mm/rmap.c
index ddffa30c79fb..08a93347f283 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	if (flags & TTU_SYNC)
 		pvmw.flags = PVMW_SYNC;
 
-	if (flags & TTU_SPLIT_HUGE_PMD)
-		split_huge_pmd_address(vma, address, false, folio);
-
 	/*
 	 * For THP, we have to assume the worse case ie pmd for invalidation.
 	 * For hugetlb, it could be much worse if we need to do pud
@@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
-		/* Unexpected PMD-mapped THP? */
-		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
-
 		/*
 		 * If the folio is in an mlock()d vma, we must not swap it out.
 		 */
 		if (!(flags & TTU_IGNORE_MLOCK) &&
 		    (vma->vm_flags & VM_LOCKED)) {
 			/* Restore the mlock which got missed */
-			if (!folio_test_large(folio))
+			if (!folio_test_large(folio) ||
+			    (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
 				mlock_vma_folio(folio, vma);
 			goto walk_done_err;
 		}
 
+		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
+			/*
+			 * We temporarily have to drop the PTL and start once
+			 * again from that now-PTE-mapped page table.
+			 */
+			split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
+					      folio);
+			pvmw.pmd = NULL;
+			spin_unlock(pvmw.ptl);
+			pvmw.ptl = NULL;
+			flags &= ~TTU_SPLIT_HUGE_PMD;
+			continue;
+		}
+
+		/* Unexpected PMD-mapped THP? */
+		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
+
 		pfn = pte_pfn(ptep_get(pvmw.pte));
 		subpage = folio_page(folio, pfn - folio_pfn(folio));
 		address = pvmw.address;

From patchwork Tue May 21 04:02:44 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13668895
From: Lance Yang
To: akpm@linux-foundation.org
Cc: willy@infradead.org, sj@kernel.org, baolin.wang@linux.alibaba.com, maskray@google.com, ziy@nvidia.com, ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v6 3/3] mm/vmscan: avoid split lazyfree THP during shrink_folio_list()
Date: Tue, 21 May 2024 12:02:44 +0800
Message-Id: <20240521040244.48760-4-ioworker0@gmail.com>
In-Reply-To: <20240521040244.48760-1-ioworker0@gmail.com>
References: <20240521040244.48760-1-ioworker0@gmail.com>

When the user no longer requires the pages, they would use
madvise(MADV_FREE) to mark the pages as lazy free. Subsequently, they
typically would not re-write to that memory again.
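For context, the lazy-free pattern the commit message describes looks roughly like this from userspace (a minimal sketch; lazy_free_demo is an illustrative name, and MADV_FREE needs Linux 4.5+ and anonymous private memory):

```c
#define _DEFAULT_SOURCE		/* for madvise() and MAP_ANONYMOUS */
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 8		/* Linux value; guard for older libc headers */
#endif

/* Map, dirty, then lazily free a buffer; returns 0 on success. */
static int lazy_free_demo(size_t len)
{
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return -1;

	memset(buf, 0xaa, len);	/* dirty the pages */

	/*
	 * MADV_FREE: the pages stay mapped, but reclaim may discard them
	 * instead of swapping them out; writing to a page again cancels
	 * the lazy free for that page.
	 */
	if (madvise(buf, len, MADV_FREE)) {
		munmap(buf, len);
		return -1;
	}

	return munmap(buf, len);
}
```

With a 2 MiB (PMD-sized on x86-64) region, this is exactly the kind of mapping the patch below lets reclaim discard without first splitting the THP.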
During memory reclaim, if we detect that the large folio and its PMD are
both still marked as clean and there are no unexpected references (such
as GUP), we can discard the memory lazily, improving the efficiency of
memory reclamation in this case.

On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
mem_cgroup_force_empty() results in the following runtimes in seconds
(shorter is better):

--------------------------------------------
|     Old       |      New       | Change  |
--------------------------------------------
|   0.683426    |    0.049197    | -92.80% |
--------------------------------------------

Suggested-by: Zi Yan
Suggested-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 include/linux/huge_mm.h |  9 +++++
 mm/huge_memory.c        | 80 +++++++++++++++++++++++++++++++++++++++++
 mm/rmap.c               | 41 ++++++++++++++-------
 3 files changed, 117 insertions(+), 13 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9fcb0b0b6ed1..cfd7ec2b6d0a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -411,6 +411,8 @@ static inline bool thp_migration_supported(void)
 
 void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze, struct folio *folio);
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio);
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -478,6 +480,13 @@ static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
 					 unsigned long address, pmd_t *pmd,
 					 bool freeze, struct folio *folio) {}
 
+static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
+					 unsigned long addr, pmd_t *pmdp,
+					 struct folio *folio)
+{
+	return false;
+}
+
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 425272c6c50b..4793ffa912ca 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2687,6 +2687,86 @@ static void unmap_folio(struct folio *folio)
 	try_to_unmap_flush();
 }
 
+static bool __discard_trans_pmd_locked(struct vm_area_struct *vma,
+				       unsigned long addr, pmd_t *pmdp,
+				       struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(folio_test_swapbacked(folio), folio);
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
+	struct mm_struct *mm = vma->vm_mm;
+	int ref_count, map_count;
+	pmd_t orig_pmd = *pmdp;
+	struct page *page;
+
+	if (unlikely(!pmd_present(orig_pmd) || !pmd_trans_huge(orig_pmd)))
+		return false;
+
+	page = pmd_page(orig_pmd);
+	if (unlikely(page_folio(page) != folio))
+		return false;
+
+	if (folio_test_dirty(folio) || pmd_dirty(orig_pmd)) {
+		folio_set_swapbacked(folio);
+		return false;
+	}
+
+	orig_pmd = pmdp_huge_clear_flush(vma, addr, pmdp);
+
+	/*
+	 * Syncing against concurrent GUP-fast:
+	 * - clear PMD; barrier; read refcount
+	 * - inc refcount; barrier; read PMD
+	 */
+	smp_mb();
+
+	ref_count = folio_ref_count(folio);
+	map_count = folio_mapcount(folio);
+
+	/*
+	 * Order reads for folio refcount and dirty flag
+	 * (see comments in __remove_mapping()).
+	 */
+	smp_rmb();
+
+	/*
+	 * If the folio or its PMD is redirtied at this point, or if there
+	 * are unexpected references, we will give up on discarding this
+	 * folio and remap it.
+	 *
+	 * The only folio refs must be one from isolation plus the rmap(s).
+	 */
+	if (folio_test_dirty(folio) || pmd_dirty(orig_pmd))
+		folio_set_swapbacked(folio);
+
+	if (folio_test_swapbacked(folio) || ref_count != map_count + 1) {
+		set_pmd_at(mm, addr, pmdp, orig_pmd);
+		return false;
+	}
+
+	folio_remove_rmap_pmd(folio, page, vma);
+	zap_deposited_table(mm, pmdp);
+	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	if (vma->vm_flags & VM_LOCKED)
+		mlock_drain_local();
+	folio_put(folio);
+
+	return true;
+}
+
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
+	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_WARN_ON_ONCE(!IS_ALIGNED(addr, HPAGE_PMD_SIZE));
+
+	if (folio_test_anon(folio) && !folio_test_swapbacked(folio))
+		return __discard_trans_pmd_locked(vma, addr, pmdp, folio);
+
+	return false;
+}
+
 static void remap_page(struct folio *folio, unsigned long nr)
 {
 	int i = 0;
diff --git a/mm/rmap.c b/mm/rmap.c
index 08a93347f283..249d6e305bec 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1630,6 +1630,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 	unsigned long pfn;
 	unsigned long hsz = 0;
+	bool pmd_mapped = false;
 
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
@@ -1677,18 +1678,26 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto walk_done_err;
 		}
 
-		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
-			/*
-			 * We temporarily have to drop the PTL and start once
-			 * again from that now-PTE-mapped page table.
-			 */
-			split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
-					      folio);
-			pvmw.pmd = NULL;
-			spin_unlock(pvmw.ptl);
-			pvmw.ptl = NULL;
-			flags &= ~TTU_SPLIT_HUGE_PMD;
-			continue;
+		if (!pvmw.pte) {
+			pmd_mapped = true;
+			if (unmap_huge_pmd_locked(vma, range.start, pvmw.pmd,
+						  folio))
+				goto walk_done;
+
+			if (flags & TTU_SPLIT_HUGE_PMD) {
+				/*
+				 * We temporarily have to drop the PTL and
+				 * start once again from that now-PTE-mapped
+				 * page table.
+				 */
+				split_huge_pmd_locked(vma, range.start,
+						      pvmw.pmd, false, folio);
+				pvmw.pmd = NULL;
+				spin_unlock(pvmw.ptl);
+				pvmw.ptl = NULL;
+				flags &= ~TTU_SPLIT_HUGE_PMD;
+				continue;
+			}
 		}
 
 		/* Unexpected PMD-mapped THP? */
@@ -1816,7 +1825,13 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			if (unlikely(folio_test_swapbacked(folio) !=
 				     folio_test_swapcache(folio))) {
-				WARN_ON_ONCE(1);
+				/*
+				 * unmap_huge_pmd_locked() will unmark a
+				 * PMD-mapped folio as lazyfree if the folio or
+				 * its PMD was redirtied.
+				 */
+				if (!pmd_mapped)
+					WARN_ON_ONCE(1);
 				goto walk_done_err;
 			}