From patchwork Sun Feb 25 07:56:57 2024
X-Patchwork-Submitter: "Kasireddy, Vivek"
X-Patchwork-Id: 13570754
From: Vivek Kasireddy
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox, Christoph Hellwig,
    Jason Gunthorpe, Peter Xu
Subject: [PATCH v12 1/8] mm/gup: Introduce unpin_folio/unpin_folios helpers
Date: Sat, 24 Feb 2024 23:56:57 -0800
Message-ID: <20240225080008.1019653-2-vivek.kasireddy@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240225080008.1019653-1-vivek.kasireddy@intel.com>
References: <20240225080008.1019653-1-vivek.kasireddy@intel.com>
MIME-Version: 1.0

These helpers are the folio versions of unpin_user_page/unpin_user_pages.
They are currently only useful for unpinning folios pinned by
memfd_pin_folios() or other associated routines. However, they could find
new uses in the future, when more and more folio-only helpers are added
to GUP.
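As an illustration (hypothetical usage only, not part of this patch), a
caller holding an array of pinned folios could release them with the new
helpers like this; the array is assumed to have been filled by a pinning
helper such as memfd_pin_folios(), added later in this series:

	/*
	 * Hypothetical caller sketch: folios[] was filled by a pinning
	 * helper (e.g. memfd_pin_folios()); release every pin taken.
	 */
	static void example_release_folios(struct folio **folios,
					   unsigned long nr)
	{
		if (nr == 1)
			unpin_folio(folios[0]);   /* drop a single FOLL_PIN ref */
		else
			unpin_folios(folios, nr); /* drop one pin per array entry */
	}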
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Peter Xu
Suggested-by: David Hildenbrand
Signed-off-by: Vivek Kasireddy
---
 include/linux/mm.h |  2 ++
 mm/gup.c           | 81 ++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 74 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f4825d82965..36e4c2b22600 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1601,11 +1601,13 @@ static inline void put_page(struct page *page)
 #define GUP_PIN_COUNTING_BIAS (1U << 10)
 
 void unpin_user_page(struct page *page);
+void unpin_folio(struct folio *folio);
 void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty);
 void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 				      bool make_dirty);
 void unpin_user_pages(struct page **pages, unsigned long npages);
+void unpin_folios(struct folio **folios, unsigned long nfolios);
 
 static inline bool is_cow_mapping(vm_flags_t flags)
 {
diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..0a45eda6aaeb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,6 +30,23 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+static inline void sanity_check_pinned_folios(struct folio **folios,
+					       unsigned long nfolios)
+{
+	if (!IS_ENABLED(CONFIG_DEBUG_VM))
+		return;
+
+	for (; nfolios; nfolios--, folios++) {
+		struct folio *folio = *folios;
+
+		if (is_zero_folio(folio) ||
+		    !folio_test_anon(folio))
+			continue;
+
+		VM_BUG_ON_FOLIO(!PageAnonExclusive(&folio->page), folio);
+	}
+}
+
 static inline void sanity_check_pinned_pages(struct page **pages,
 					     unsigned long npages)
 {
@@ -52,15 +69,11 @@ static inline void sanity_check_pinned_pages(struct page **pages,
 		struct page *page = *pages;
 		struct folio *folio = page_folio(page);
 
-		if (is_zero_page(page) ||
-		    !folio_test_anon(folio))
-			continue;
-		if (!folio_test_large(folio) || folio_test_hugetlb(folio))
-			VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page), page);
-		else
-			/* Either a PTE-mapped or a PMD-mapped THP. */
-			VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page) &&
-				       !PageAnonExclusive(page), page);
+		sanity_check_pinned_folios(&folio, 1);
+
+		/* Either a PTE-mapped or a PMD-mapped THP. */
+		if (folio_test_large(folio) && !folio_test_hugetlb(folio))
+			VM_BUG_ON_PAGE(!PageAnonExclusive(page), page);
 	}
 }
 
@@ -276,6 +289,21 @@ void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
+/**
+ * unpin_folio() - release a dma-pinned folio
+ * @folio: pointer to folio to be released
+ *
+ * Folios that were pinned via memfd_pin_folios() or other similar routines
+ * must be released either using unpin_folio() or unpin_folios(). This is so
+ * that such folios can be separately tracked and uniquely handled.
+ */
+void unpin_folio(struct folio *folio)
+{
+	sanity_check_pinned_folios(&folio, 1);
+	gup_put_folio(folio, 1, FOLL_PIN);
+}
+EXPORT_SYMBOL(unpin_folio);
+
 /**
  * folio_add_pin - Try to get an additional pin on a pinned folio
  * @folio: The folio to be pinned
@@ -488,6 +516,41 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 }
 EXPORT_SYMBOL(unpin_user_pages);
 
+/**
+ * unpin_folios() - release an array of gup-pinned folios.
+ * @folios:  array of folios to be marked dirty and released.
+ * @nfolios: number of folios in the @folios array.
+ *
+ * For each folio in the @folios array, release the folio using unpin_folio().
+ *
+ * Please see the unpin_folio() documentation for details.
+ */
+void unpin_folios(struct folio **folios, unsigned long nfolios)
+{
+	unsigned long i = 0, j;
+
+	/*
+	 * If this WARN_ON() fires, then the system *might* be leaking folios
+	 * (by leaving them pinned), but probably not. More likely, gup/pup
+	 * returned a hard -ERRNO error to the caller, who erroneously passed
+	 * it here.
+	 */
+	if (WARN_ON(IS_ERR_VALUE(nfolios)))
+		return;
+
+	sanity_check_pinned_folios(folios, nfolios);
+	while (i < nfolios) {
+		for (j = i + 1; j < nfolios; j++)
+			if (folios[i] != folios[j])
+				break;
+
+		if (folios[i])
+			gup_put_folio(folios[i], j - i, FOLL_PIN);
+		i = j;
+	}
+}
+EXPORT_SYMBOL(unpin_folios);
+
 /*
  * Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
  * lifecycle. Avoid setting the bit unless necessary, or it might cause write