From patchwork Mon Mar 6 09:22:59 2023
X-Patchwork-Submitter: Yin Fengwei <fengwei.yin@intel.com>
X-Patchwork-Id: 13160761
From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org, willy@infradead.org,
 mike.kravetz@oracle.com, sidhartha.kumar@oracle.com,
 naoya.horiguchi@nec.com, jane.chu@oracle.com, david@redhat.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v3 5/5] try_to_unmap_one: batched remove rmap, update folio refcount
Date: Mon, 6 Mar 2023 17:22:59 +0800
Message-Id: <20230306092259.3507807-6-fengwei.yin@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230306092259.3507807-1-fengwei.yin@intel.com>
References: <20230306092259.3507807-1-fengwei.yin@intel.com>

If unmapping one page fails, or the vma walk will skip the next pte, or
the vma walk will end at the next pte, batch-remove the rmap for the
pages accumulated so far and update the folio refcount accordingly.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 include/linux/rmap.h |  1 +
 mm/page_vma_mapped.c | 30 ++++++++++++++++++++++++++++
 mm/rmap.c            | 48 +++++++++++++++++++++++++++++++++-----------
 3 files changed, 68 insertions(+), 11 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index d2569b42e21a..18193d1d5a8e 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -424,6 +424,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 }
 
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
+bool pvmw_walk_skip_or_end_on_next(struct page_vma_mapped_walk *pvmw);
 
 /*
  * Used by swapoff to help locate where page is expected in vma.
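A quick note on the boundary test used by the pvmw_walk_skip_or_end_on_next()
helper added below: for a page-aligned address, (address & (PMD_SIZE -
PAGE_SIZE)) == 0 holds exactly at the first page of a PMD-sized region, i.e.
when the walk's pte pointer would cross into a new page table. A minimal
userspace sketch of that arithmetic (the constants assume the common x86-64
values of 4KiB pages and 2MiB PMDs, and are illustrative rather than taken
from the kernel headers):

#include <assert.h>
#include <stdio.h>

/* Illustrative values (typical x86-64), not pulled from kernel headers. */
#define PAGE_SIZE	0x1000UL	/* 4 KiB */
#define PMD_SIZE	0x200000UL	/* 2 MiB: 512 ptes per page table */

int main(void)
{
	/* First page of a PMD region: the test fires, the walk must stop. */
	assert((0x200000UL & (PMD_SIZE - PAGE_SIZE)) == 0);
	/* Last page of the preceding PMD region: the test does not fire. */
	assert((0x1ff000UL & (PMD_SIZE - PAGE_SIZE)) != 0);
	/* A page in the middle of a PMD region: the test does not fire. */
	assert((0x234000UL & (PMD_SIZE - PAGE_SIZE)) != 0);

	printf("PMD boundary checks hold\n");
	return 0;
}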
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 4e448cfbc6ef..19e997dfb5c6 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -291,6 +291,36 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	return false;
 }
 
+/**
+ * pvmw_walk_skip_or_end_on_next - check whether the next pte will be
+ * skipped or will end the walk
+ * @pvmw: pointer to struct page_vma_mapped_walk.
+ *
+ * This function can only be called with the correct pte lock held.
+ */
+bool pvmw_walk_skip_or_end_on_next(struct page_vma_mapped_walk *pvmw)
+{
+	unsigned long address = pvmw->address + PAGE_SIZE;
+
+	if (address >= vma_address_end(pvmw))
+		return true;
+
+	if ((address & (PMD_SIZE - PAGE_SIZE)) == 0)
+		return true;
+
+	if (pte_none(*pvmw->pte))
+		return true;
+
+	pvmw->pte++;
+	if (!check_pte(pvmw)) {
+		pvmw->pte--;
+		return true;
+	}
+	pvmw->pte--;
+
+	return false;
+}
+
 /**
  * page_mapped_in_vma - check whether a page is really mapped in a VMA
  * @page: the page to test
diff --git a/mm/rmap.c b/mm/rmap.c
index bb3fcb8df579..a64e9cbb52dd 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1741,6 +1741,26 @@ static bool try_to_unmap_one_page(struct folio *folio,
 	return false;
 }
 
+static void folio_remove_rmap_and_update_count(struct folio *folio,
+		struct page *start, struct vm_area_struct *vma, int count)
+{
+	if (count == 0)
+		return;
+
+	/*
+	 * No need to call mmu_notifier_invalidate_range(); it has been
+	 * done above for all cases requiring it to happen under the page
+	 * table lock before mmu_notifier_invalidate_range_end().
+	 *
+	 * See Documentation/mm/mmu_notifier.rst
+	 */
+	folio_remove_rmap_range(folio, start, count, vma,
+					folio_test_hugetlb(folio));
+	if (vma->vm_flags & VM_LOCKED)
+		mlock_drain_local();
+	folio_ref_sub(folio, count);
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
@@ -1748,10 +1768,11 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		     unsigned long address, void *arg)
 {
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
-	struct page *subpage;
+	struct page *start = NULL;
 	bool ret = true;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
+	int count = 0;
 
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
@@ -1812,26 +1833,31 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			break;
 		}
 
-		subpage = folio_page(folio,
+		if (!start)
+			start = folio_page(folio,
 					pte_pfn(*pvmw.pte) - folio_pfn(folio));
 		ret = try_to_unmap_one_page(folio, vma,
 						range, pvmw, address, flags);
 		if (!ret) {
+			folio_remove_rmap_and_update_count(folio,
+							start, vma, count);
 			page_vma_mapped_walk_done(&pvmw);
 			break;
 		}
+		count++;
 
 		/*
-		 * No need to call mmu_notifier_invalidate_range() it has be
-		 * done above for all cases requiring it to happen under page
-		 * table lock before mmu_notifier_invalidate_range_end()
-		 *
-		 * See Documentation/mm/mmu_notifier.rst
+		 * If the next pte will be skipped in page_vma_mapped_walk() or
+		 * the walk will end at it, batch-remove the rmap and update the
+		 * page refcount. We can't do it after page_vma_mapped_walk()
+		 * returns false because the pte lock will not be held.
 		 */
-		page_remove_rmap(subpage, vma, false);
-		if (vma->vm_flags & VM_LOCKED)
-			mlock_drain_local();
-		folio_put(folio);
+		if (pvmw_walk_skip_or_end_on_next(&pvmw)) {
+			folio_remove_rmap_and_update_count(folio,
+							start, vma, count);
+			count = 0;
+			start = NULL;
+		}
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
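To make the loop's control flow easier to follow, here is a self-contained
userspace sketch of the batching pattern try_to_unmap_one() now follows.
Everything in it (the ptes[] array, flush_batch(), skip_or_end_on_next()) is
invented for illustration and only mirrors the kernel logic in simplified
form: accumulate a run of successfully unmapped pages and do the
rmap/refcount bookkeeping once per run, flushing before the walk moves past
the run's end.

#include <stdbool.h>
#include <stdio.h>

#define NPTES 16

/* A "pte" is reduced to a flag saying whether the slot is mapped. */
static bool ptes[NPTES] = {
	true, true, true, false, true, true, false, false,
	true, true, true, true, false, true, true, true,
};

/* Stand-in for folio_remove_rmap_range() + folio_ref_sub(): once per run. */
static void flush_batch(int start, int count)
{
	if (count == 0)
		return;
	printf("batched remove: pages [%d..%d] (count=%d)\n",
	       start, start + count - 1, count);
}

/* Simplified pvmw_walk_skip_or_end_on_next(): will the walk skip or stop? */
static bool skip_or_end_on_next(int i)
{
	return i + 1 >= NPTES || !ptes[i + 1];
}

int main(void)
{
	int start = -1, count = 0;

	for (int i = 0; i < NPTES; i++) {
		if (!ptes[i])
			continue;	/* page_vma_mapped_walk() skips it */

		if (start < 0)
			start = i;	/* remember the first page of the run */
		ptes[i] = false;	/* "unmap" succeeded */
		count++;

		/* Flush before the walk moves past the end of this run. */
		if (skip_or_end_on_next(i)) {
			flush_batch(start, count);
			start = -1;
			count = 0;
		}
	}
	return 0;
}

Flushing inside the loop rather than after it matters for the same reason the
patch's comment gives: once page_vma_mapped_walk() returns false the pte lock
is no longer held, so the final run has to be flushed while the walk is still
on its last pte.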