From patchwork Mon Sep 18 07:33:16 2023
X-Patchwork-Submitter: Yin Fengwei
X-Patchwork-Id: 13388984
From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, yuzhao@google.com, willy@infradead.org,
	hughd@google.com, yosryahmed@google.com, ryan.roberts@arm.com,
	david@redhat.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v3 1/3] mm: add functions folio_in_range() and folio_within_vma()
Date: Mon, 18 Sep 2023 15:33:16 +0800
Message-Id: <20230918073318.1181104-2-fengwei.yin@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230918073318.1181104-1-fengwei.yin@intel.com>
References: <20230918073318.1181104-1-fengwei.yin@intel.com>

Add folio_within_range() to check whether a folio is mapped to a
specific VMA and whether the mapping address of the folio is in the
range. Also add a helper function, folio_within_vma(), to check
whether a folio is within the range of a VMA, based on
folio_within_range().

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/internal.h | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/mm/internal.h b/mm/internal.h
index 346d82260964..9e2a5b32c659 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -587,6 +587,56 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
 				   bool write, int *locked);
 extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
 			       unsigned long bytes);
+
+/*
+ * NOTE: This function can't tell whether the folio is "fully mapped" in the
+ * range.
+ * "fully mapped" means all the pages of folio is associated with the page + * table of range while this function just check whether the folio range is + * within the range [start, end). Funcation caller nees to do page table + * check if it cares about the page table association. + * + * Typical usage (like mlock or madvise) is: + * Caller knows at least 1 page of folio is associated with page table of VMA + * and the range [start, end) is intersect with the VMA range. Caller wants + * to know whether the folio is fully associated with the range. It calls + * this function to check whether the folio is in the range first. Then checks + * the page table to know whether the folio is fully mapped to the range. + */ +static inline bool +folio_within_range(struct folio *folio, struct vm_area_struct *vma, + unsigned long start, unsigned long end) +{ + pgoff_t pgoff, addr; + unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; + + VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio); + if (start > end) + return false; + + if (start < vma->vm_start) + start = vma->vm_start; + + if (end > vma->vm_end) + end = vma->vm_end; + + pgoff = folio_pgoff(folio); + + /* if folio start address is not in vma range */ + if (!in_range(pgoff, vma->vm_pgoff, vma_pglen)) + return false; + + addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT); + + return !(addr < start || end - addr < folio_size(folio)); +} + +static inline bool +folio_within_vma(struct folio *folio, struct vm_area_struct *vma) +{ + return folio_within_range(folio, vma, vma->vm_start, vma->vm_end); +} + /* * mlock_vma_folio() and munlock_vma_folio(): * should be called with vma's mmap_lock held for read or write, From patchwork Mon Sep 18 07:33:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yin Fengwei X-Patchwork-Id: 13388985 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36E15CD37B0 for ; Mon, 18 Sep 2023 07:33:56 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B37956B029B; Mon, 18 Sep 2023 03:33:55 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AE7FC6B029D; Mon, 18 Sep 2023 03:33:55 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9AF8E6B029E; Mon, 18 Sep 2023 03:33:55 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 8C6486B029B for ; Mon, 18 Sep 2023 03:33:55 -0400 (EDT) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 6A6D0A0BEA for ; Mon, 18 Sep 2023 07:33:55 +0000 (UTC) X-FDA: 81248904030.08.1C63B8B Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.126]) by imf10.hostedemail.com (Postfix) with ESMTP id 67107C0026 for ; Mon, 18 Sep 2023 07:33:53 +0000 (UTC) Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=FZmopuke; dmarc=pass (policy=none) header.from=intel.com; spf=pass (imf10.hostedemail.com: domain of fengwei.yin@intel.com designates 134.134.136.126 as permitted sender) smtp.mailfrom=fengwei.yin@intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; 
From patchwork Mon Sep 18 07:33:17 2023
X-Patchwork-Submitter: Yin Fengwei
X-Patchwork-Id: 13388985

From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, yuzhao@google.com, willy@infradead.org,
	hughd@google.com, yosryahmed@google.com, ryan.roberts@arm.com,
	david@redhat.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v3 2/3] mm: handle large folio when large folio in VM_LOCKED VMA range
Date: Mon, 18 Sep 2023 15:33:17 +0800
Message-Id: <20230918073318.1181104-3-fengwei.yin@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230918073318.1181104-1-fengwei.yin@intel.com>
References: <20230918073318.1181104-1-fengwei.yin@intel.com>
If a large folio is in the range of a VM_LOCKED VMA, it should be
mlocked to avoid being picked by page reclaim, which may otherwise
split the large folio and then mlock each page again. Mlock this kind
of large folio to prevent it from being picked by page reclaim.

For a large folio which crosses the boundary of a VM_LOCKED VMA, or is
not fully mapped to a VM_LOCKED VMA, we'd better not mlock it. Then,
if the system is under memory pressure, this kind of large folio will
be split and the pages out of the VM_LOCKED VMA can be reclaimed.

Ideally, we should mlock a large folio when it is fully mapped to the
VMA and munlock it if any page is unmapped from the VMA. But it's not
easy to detect whether a large folio is fully mapped to the VMA in
some cases (like add/remove rmap). So update mlock_vma_folio() and
munlock_vma_folio() to mlock/munlock the folio according to
vma->vm_flags, and let the caller decide whether to call these two
functions.

For add rmap, only mlock normal 4K folios and postpone large folio
handling to the page reclaim phase. It is possible to reuse the page
table iterator to detect whether a folio is fully mapped during page
reclaim. For remove rmap, invoke munlock_vma_folio() unconditionally,
because removing an rmap leaves the folio not fully mapped to the VMA.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/internal.h | 23 ++++++++++--------
 mm/rmap.c     | 66 ++++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 68 insertions(+), 21 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 9e2a5b32c659..c1441fd9898e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -645,14 +645,10 @@ folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
  * mlock is usually called at the end of page_add_*_rmap(), munlock at
  * the end of page_remove_rmap(); but new anon folios are managed by
  * folio_add_lru_vma() calling mlock_new_folio().
- *
- * @compound is used to include pmd mappings of THPs, but filter out
- * pte mappings of THPs, which cannot be consistently counted: a pte
- * mapping of the THP head cannot be distinguished by the page alone.
  */
 void mlock_folio(struct folio *folio);
 static inline void mlock_vma_folio(struct folio *folio,
-			struct vm_area_struct *vma, bool compound)
+			struct vm_area_struct *vma)
 {
 	/*
 	 * The VM_SPECIAL check here serves two purposes.
@@ -662,17 +658,24 @@ static inline void mlock_vma_folio(struct folio *folio,
 	 * file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
 	 * still be set while VM_SPECIAL bits are added: so ignore it then.
 	 */
-	if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED) &&
-	    (compound || !folio_test_large(folio)))
+	if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED))
 		mlock_folio(folio);
 }
 
 void munlock_folio(struct folio *folio);
 static inline void munlock_vma_folio(struct folio *folio,
-			struct vm_area_struct *vma, bool compound)
+			struct vm_area_struct *vma)
 {
-	if (unlikely(vma->vm_flags & VM_LOCKED) &&
-	    (compound || !folio_test_large(folio)))
+	/*
+	 * Always munlock when this function is called. Ideally, we
+	 * should only munlock if some page of the folio is unmapped
+	 * from the VMA, leaving the folio not fully mapped to the VMA.
+	 *
+	 * But it's not easy to confirm that's the situation. So we
+	 * always munlock the folio and let page reclaim correct it
+	 * if it's wrong.
+	 */
+	if (unlikely(vma->vm_flags & VM_LOCKED))
 		munlock_folio(folio);
 }
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 789a2beb8b3a..e4b92e585df9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -798,6 +798,7 @@ struct folio_referenced_arg {
 	unsigned long vm_flags;
 	struct mem_cgroup *memcg;
 };
+
 /*
  * arg: folio_referenced_arg will be passed
  */
@@ -807,17 +808,33 @@ static bool folio_referenced_one(struct folio *folio,
 	struct folio_referenced_arg *pra = arg;
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
 	int referenced = 0;
+	unsigned long start = address, ptes = 0;
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		address = pvmw.address;
 
-		if ((vma->vm_flags & VM_LOCKED) &&
-		    (!folio_test_large(folio) || !pvmw.pte)) {
-			/* Restore the mlock which got missed */
-			mlock_vma_folio(folio, vma, !pvmw.pte);
-			page_vma_mapped_walk_done(&pvmw);
-			pra->vm_flags |= VM_LOCKED;
-			return false; /* To break the loop */
+		if (vma->vm_flags & VM_LOCKED) {
+			if (!folio_test_large(folio) || !pvmw.pte) {
+				/* Restore the mlock which got missed */
+				mlock_vma_folio(folio, vma);
+				page_vma_mapped_walk_done(&pvmw);
+				pra->vm_flags |= VM_LOCKED;
+				return false; /* To break the loop */
+			}
+			/*
+			 * A large folio fully mapped to the VMA is
+			 * handled after the pvmw loop.
+			 *
+			 * A large folio crossing VMA boundaries is
+			 * expected to be picked by page reclaim. But
+			 * we should skip the reference of pages which
+			 * are in the range of the VM_LOCKED vma, as
+			 * page reclaim should only count the reference
+			 * of pages outside the VM_LOCKED vma range.
+			 */
+			ptes++;
+			pra->mapcount--;
+			continue;
 		}
 
 		if (pvmw.pte) {
@@ -842,6 +859,23 @@ static bool folio_referenced_one(struct folio *folio,
 		pra->mapcount--;
 	}
 
+	if ((vma->vm_flags & VM_LOCKED) &&
+			folio_test_large(folio) &&
+			folio_within_vma(folio, vma)) {
+		unsigned long s_align, e_align;
+
+		s_align = ALIGN_DOWN(start, PMD_SIZE);
+		e_align = ALIGN_DOWN(start + folio_size(folio) - 1, PMD_SIZE);
+
+		/* folio doesn't cross page table boundary and is fully mapped */
+		if ((s_align == e_align) && (ptes == folio_nr_pages(folio))) {
+			/* Restore the mlock which got missed */
+			mlock_vma_folio(folio, vma);
+			pra->vm_flags |= VM_LOCKED;
+			return false; /* To break the loop */
+		}
+	}
+
 	if (referenced)
 		folio_clear_idle(folio);
 	if (folio_test_clear_young(folio))
@@ -1252,7 +1286,14 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		VM_WARN_ON_FOLIO(page_mapcount(page) > 1 && PageAnonExclusive(page),
 				 folio);
 
-	mlock_vma_folio(folio, vma, compound);
+	/*
+	 * For a large folio, only mlock it if it's fully mapped to the VMA.
+	 * It's not easy to check whether a large folio is fully mapped to
+	 * the VMA here. Only mlock normal 4K folios and leave page reclaim
+	 * to handle large folios.
+	 */
+	if (!folio_test_large(folio))
+		mlock_vma_folio(folio, vma);
 }
 
 /**
@@ -1352,7 +1393,9 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
 	if (nr)
 		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
 
-	mlock_vma_folio(folio, vma, compound);
+	/* See comments in page_add_anon_rmap() */
+	if (!folio_test_large(folio))
+		mlock_vma_folio(folio, vma);
 }
 
 /**
@@ -1463,7 +1506,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 	 * it's only reliable while mapped.
 	 */
 
-	munlock_vma_folio(folio, vma, compound);
+	munlock_vma_folio(folio, vma);
 }
 
 /*
@@ -1524,7 +1567,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		if (!(flags & TTU_IGNORE_MLOCK) &&
 		    (vma->vm_flags & VM_LOCKED)) {
 			/* Restore the mlock which got missed */
-			mlock_vma_folio(folio, vma, false);
+			if (!folio_test_large(folio))
+				mlock_vma_folio(folio, vma);
 			page_vma_mapped_walk_done(&pvmw);
 			ret = false;
 			break;
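For reference, the ALIGN_DOWN() pair above detects whether the folio's
mapping crosses a page-table (PMD) boundary. A standalone sketch with
illustrative numbers, assuming a 2MB PMD_SIZE as on x86-64 with 4K
pages (the macro and values here are stand-ins, not the kernel's):

#include <stdbool.h>
#include <stdint.h>

#define PMD_SIZE		(2UL << 20)	/* assumption: 2MB */
#define ALIGN_DOWN(x, a)	((x) & ~((uint64_t)(a) - 1))

/*
 * A mapping crosses a page-table boundary iff its first and last bytes
 * round down to different PMD bases; in that case one pvmw pass over a
 * single page table cannot have counted every PTE of the folio, so the
 * (ptes == folio_nr_pages(folio)) test would be meaningless.
 */
static bool within_one_page_table(uint64_t start, uint64_t size)
{
	return ALIGN_DOWN(start, PMD_SIZE) ==
	       ALIGN_DOWN(start + size - 1, PMD_SIZE);
}

/*
 * e.g. a 64KB folio mapped at 0x1f0000 ends at 0x1fffff: both round
 * down to 0x0, so the check passes. Mapped at 0x1f8000 it ends at
 * 0x207fff, which rounds down to 0x200000 != 0x0, so the folio spans
 * two page tables and is left to page reclaim.
 */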
From patchwork Mon Sep 18 07:33:18 2023
X-Patchwork-Submitter: Yin Fengwei
X-Patchwork-Id: 13388986

From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, yuzhao@google.com, willy@infradead.org,
	hughd@google.com, yosryahmed@google.com, ryan.roberts@arm.com,
	david@redhat.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v3 3/3] mm: mlock: update mlock_pte_range to handle large folio
Date: Mon, 18 Sep 2023 15:33:18 +0800
Message-Id: <20230918073318.1181104-4-fengwei.yin@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230918073318.1181104-1-fengwei.yin@intel.com>
References: <20230918073318.1181104-1-fengwei.yin@intel.com>
The current kernel only mlocks base-size (4K) folios during the mlock
syscall. Add large folio support with the following rules:

- Only mlock a large folio when it's in a VM_LOCKED VMA range and is
  fully mapped to the page table. A fully mapped folio is required
  because, if the folio is not fully mapped to a VM_LOCKED VMA and the
  system is under memory pressure, page reclaim is allowed to pick up
  this folio, split it, and reclaim the pages which are not in the
  VM_LOCKED VMA.

- munlock applies to a large folio which is in the VMA range or
  crosses the VMA boundary. This is required to handle the case where
  a large folio is mlocked and the VMA is later split in the middle of
  the large folio.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/mlock.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 64 insertions(+), 2 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 06bdfab83b58..42b6865f8f82 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -305,6 +305,58 @@ void munlock_folio(struct folio *folio)
 	local_unlock(&mlock_fbatch.lock);
 }
 
+static inline unsigned int folio_mlock_step(struct folio *folio,
+		pte_t *pte, unsigned long addr, unsigned long end)
+{
+	unsigned int count, i, nr = folio_nr_pages(folio);
+	unsigned long pfn = folio_pfn(folio);
+	pte_t ptent = ptep_get(pte);
+
+	if (!folio_test_large(folio))
+		return 1;
+
+	count = pfn + nr - pte_pfn(ptent);
+	count = min_t(unsigned int, count, (end - addr) >> PAGE_SHIFT);
+
+	for (i = 0; i < count; i++, pte++) {
+		pte_t entry = ptep_get(pte);
+
+		if (!pte_present(entry))
+			break;
+		if (pte_pfn(entry) - pfn >= nr)
+			break;
+	}
+
+	return i;
+}
+
+static inline bool allow_mlock_munlock(struct folio *folio,
+		struct vm_area_struct *vma, unsigned long start,
+		unsigned long end, unsigned int step)
+{
+	/*
+	 * For munlock, allow munlocking a large folio which is
+	 * partially mapped to the VMA, as it's possible that the
+	 * large folio was mlocked and the VMA was split later.
+	 *
+	 * Under memory pressure, such a large folio can be split,
+	 * and the pages which are not in the VM_LOCKED VMA can
+	 * be reclaimed.
+	 */
+	if (!(vma->vm_flags & VM_LOCKED))
+		return true;
+
+	/* folio not in range [start, end), skip mlock */
+	if (!folio_within_range(folio, vma, start, end))
+		return false;
+
+	/* folio is not fully mapped, skip mlock */
+	if (step != folio_nr_pages(folio))
+		return false;
+
+	return true;
+}
+
 static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			   unsigned long end, struct mm_walk *walk)
 
@@ -314,6 +366,8 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	pte_t *start_pte, *pte;
 	pte_t ptent;
 	struct folio *folio;
+	unsigned int step = 1;
+	unsigned long start = addr;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -334,6 +388,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 		walk->action = ACTION_AGAIN;
 		return 0;
 	}
+
 	for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
 		ptent = ptep_get(pte);
 		if (!pte_present(ptent))
@@ -341,12 +396,19 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 		folio = vm_normal_folio(vma, addr, ptent);
 		if (!folio || folio_is_zone_device(folio))
 			continue;
-		if (folio_test_large(folio))
-			continue;
+
+		step = folio_mlock_step(folio, pte, addr, end);
+		if (!allow_mlock_munlock(folio, vma, start, end, step))
+			goto next_entry;
 
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_folio(folio);
 		else
 			munlock_folio(folio);
+
+next_entry:
+		pte += step - 1;
+		addr += (step - 1) << PAGE_SHIFT;
 	}
 	pte_unmap(start_pte);
 out:
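To make the stepping arithmetic in folio_mlock_step() concrete, a small
userspace sketch with made-up numbers; the real function additionally
stops early at the first non-present or out-of-folio PTE, which this
sketch omits:

#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	/* Illustrative values: a 16-page (64KB) folio */
	unsigned long folio_pfn = 1000, nr = 16;
	unsigned long pte_pfn = 1004;			/* cursor 4 pages into the folio */
	unsigned long addr = 0x404000, end = 0x40a000;	/* 6 pages left in the range */

	/* pages remaining in the folio from the current PTE onwards */
	unsigned long count = folio_pfn + nr - pte_pfn;		/* 12 */

	/* ... clamped so the walk never steps past 'end' */
	unsigned long range = (end - addr) >> PAGE_SHIFT;	/* 6 */
	if (count > range)
		count = range;

	/* mlock_pte_range() then advances pte and addr by (step - 1) */
	printf("step = %lu, next addr = %#lx\n",
	       count, addr + (count << PAGE_SHIFT));	/* 6, 0x40a000 */
	return 0;
}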