From patchwork Fri May 26 21:41:40 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13257338
From: David Howells
To: Christoph Hellwig, David Hildenbrand, Lorenzo Stoakes
Cc: David Howells, Jens Axboe, Al Viro, Matthew Wilcox, Jan Kara,
    Jeff Layton, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    Christian Brauner, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Andrew Morton
Subject: [PATCH v4 1/3] mm: Don't pin ZERO_PAGE in pin_user_pages()
Date: Fri, 26 May 2023 22:41:40 +0100
Message-Id: <20230526214142.958751-2-dhowells@redhat.com>
In-Reply-To: <20230526214142.958751-1-dhowells@redhat.com>
References: <20230526214142.958751-1-dhowells@redhat.com>

Make pin_user_pages*() leave a ZERO_PAGE unpinned if it extracts a pointer
to it from the page tables
and make unpin_user_page*() correspondingly
ignore a ZERO_PAGE when unpinning.  We don't want to risk overrunning a
zero page's refcount as we're only allowed ~2 million pins on it -
something that userspace can conceivably trigger.

Add a pair of functions to test whether a page or a folio is a ZERO_PAGE.

Signed-off-by: David Howells
cc: Christoph Hellwig
cc: David Hildenbrand
cc: Lorenzo Stoakes
cc: Andrew Morton
cc: Jens Axboe
cc: Al Viro
cc: Matthew Wilcox
cc: Jan Kara
cc: Jeff Layton
cc: Jason Gunthorpe
cc: Logan Gunthorpe
cc: Hillf Danton
cc: Christian Brauner
cc: Linus Torvalds
cc: linux-fsdevel@vger.kernel.org
cc: linux-block@vger.kernel.org
cc: linux-kernel@vger.kernel.org
cc: linux-mm@kvack.org
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Christoph Hellwig
Acked-by: David Hildenbrand
---

Notes:
    ver #3)
     - Move is_zero_page() and is_zero_folio() to mm.h for dependency reasons.
     - Add more comments and adjust the docs.

    ver #2)
     - Fix use of ZERO_PAGE().
     - Add is_zero_page() and is_zero_folio() wrappers.
     - Return the zero page obtained, not ZERO_PAGE(0) unconditionally.

 Documentation/core-api/pin_user_pages.rst |  6 +++++
 include/linux/mm.h                        | 26 +++++++++++++++++--
 mm/gup.c                                  | 31 ++++++++++++++++++++++-
 3 files changed, 60 insertions(+), 3 deletions(-)
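As a sketch of what this means for callers (illustrative only, not part of
the patch; demo_pin_read_only() and its locals are invented for the
example): a read-only pin across a range containing unfaulted anonymous
pages may hand back ZERO_PAGE entries, and with this change those entries
carry no real pin while unpin_user_pages() quietly skips them, so the
usual pin/unpin pattern needs no special casing:

/*
 * Illustrative sketch: pin some user pages read-only, note any zero
 * pages (which are now only notionally pinned), then release.  The
 * unpin side ignores the zero pages for us.
 */
static int demo_pin_read_only(unsigned long uaddr, int nr)
{
	struct page **pages;
	int i, got;

	pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	got = pin_user_pages_fast(uaddr, nr, 0, pages);
	for (i = 0; i < got; i++)
		if (is_zero_page(pages[i]))
			pr_info("entry %d is a zero page\n", i);

	if (got > 0)
		unpin_user_pages(pages, got);	/* zero pages are skipped */
	kfree(pages);
	return got < 0 ? got : 0;
}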
diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
index 9fb0b1080d3b..d3c1f6d8c0e0 100644
--- a/Documentation/core-api/pin_user_pages.rst
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -112,6 +112,12 @@ pages:
 This also leads to limitations: there are only 31-10==21 bits available for a
 counter that increments 10 bits at a time.
 
+* Because of that limitation, special handling is applied to the zero pages
+  when using FOLL_PIN.  We only pretend to pin a zero page - we don't alter its
+  refcount or pincount at all (it is permanent, so there's no need).  The
+  unpinning functions also don't do anything to a zero page.  This is
+  transparent to the caller.
+
 * Callers must specifically request "dma-pinned tracking of pages". In other
   words, just calling get_user_pages() will not suffice; a new set of functions,
   pin_user_page() and related, must be used.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27ce77080c79..3c2f6b452586 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1910,6 +1910,28 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 	return page_maybe_dma_pinned(page);
 }
 
+/**
+ * is_zero_page - Query if a page is a zero page
+ * @page: The page to query
+ *
+ * This returns true if @page is one of the permanent zero pages.
+ */
+static inline bool is_zero_page(const struct page *page)
+{
+	return is_zero_pfn(page_to_pfn(page));
+}
+
+/**
+ * is_zero_folio - Query if a folio is a zero page
+ * @folio: The folio to query
+ *
+ * This returns true if @folio is one of the permanent zero pages.
+ */
+static inline bool is_zero_folio(const struct folio *folio)
+{
+	return is_zero_page(&folio->page);
+}
+
 /* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */
 #ifdef CONFIG_MIGRATION
 static inline bool is_longterm_pinnable_page(struct page *page)
@@ -1920,8 +1942,8 @@ static inline bool is_longterm_pinnable_page(struct page *page)
 	if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
 		return false;
 #endif
-	/* The zero page may always be pinned */
-	if (is_zero_pfn(page_to_pfn(page)))
+	/* The zero page can be "pinned" but gets special handling. */
+	if (is_zero_page(page))
 		return true;
 
 	/* Coherent device memory must always allow eviction. */
diff --git a/mm/gup.c b/mm/gup.c
index bbe416236593..ad28261dcafd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -51,7 +51,8 @@ static inline void sanity_check_pinned_pages(struct page **pages,
 		struct page *page = *pages;
 		struct folio *folio = page_folio(page);
 
-		if (!folio_test_anon(folio))
+		if (is_zero_page(page) ||
+		    !folio_test_anon(folio))
 			continue;
 		if (!folio_test_large(folio) || folio_test_hugetlb(folio))
 			VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page), page);
@@ -131,6 +132,13 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	else if (flags & FOLL_PIN) {
 		struct folio *folio;
 
+		/*
+		 * Don't take a pin on the zero page - it's not going anywhere
+		 * and it is used in a *lot* of places.
+		 */
+		if (is_zero_page(page))
+			return page_folio(page);
+
 		/*
 		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
 		 * right zone, so fail and let the caller fall back to the slow
@@ -180,6 +188,8 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 {
 	if (flags & FOLL_PIN) {
+		if (is_zero_folio(folio))
+			return;
 		node_stat_mod_folio(folio, NR_FOLL_PIN_RELEASED, refs);
 		if (folio_test_large(folio))
 			atomic_sub(refs, &folio->_pincount);
@@ -224,6 +234,13 @@ int __must_check try_grab_page(struct page *page, unsigned int flags)
 	if (flags & FOLL_GET)
 		folio_ref_inc(folio);
 	else if (flags & FOLL_PIN) {
+		/*
+		 * Don't take a pin on the zero page - it's not going anywhere
+		 * and it is used in a *lot* of places.
+		 */
+		if (is_zero_page(page))
+			return 0;
+
 		/*
 		 * Similar to try_grab_folio(): be sure to *also*
 		 * increment the normal page refcount field at least once,
@@ -3079,6 +3096,9 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast);
  *
  * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
  * see Documentation/core-api/pin_user_pages.rst for further details.
+ *
+ * Note that if a zero_page is amongst the returned pages, it will not have
+ * pins in it and unpin_user_page() will not remove pins from it.
  */
 int pin_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages)
@@ -3110,6 +3130,9 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
  *
  * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
  * see Documentation/core-api/pin_user_pages.rst for details.
+ *
+ * Note that if a zero_page is amongst the returned pages, it will not have
+ * pins in it and unpin_user_page*() will not remove pins from it.
  */
 long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
@@ -3143,6 +3166,9 @@ EXPORT_SYMBOL(pin_user_pages_remote);
  *
  * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
  * see Documentation/core-api/pin_user_pages.rst for details.
+ *
+ * Note that if a zero_page is amongst the returned pages, it will not have
+ * pins in it and unpin_user_page*() will not remove pins from it.
  */
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
@@ -3161,6 +3187,9 @@ EXPORT_SYMBOL(pin_user_pages);
  * pin_user_pages_unlocked() is the FOLL_PIN variant of
 * get_user_pages_unlocked(). Behavior is the same, except that this one sets
 * FOLL_PIN and rejects FOLL_GET.
+ *
+ * Note that if a zero_page is amongst the returned pages, it will not have
+ * pins in it and unpin_user_page*() will not remove pins from it.
  */
 long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 			     struct page **pages, unsigned int gup_flags)
From patchwork Fri May 26 21:41:41 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13257339
From: David Howells
To: Christoph Hellwig, David Hildenbrand, Lorenzo Stoakes
Cc: David Howells, Jens Axboe, Al Viro, Matthew Wilcox, Jan Kara,
    Jeff Layton, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    Christian Brauner, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Andrew Morton
Subject: [PATCH v4 2/3] mm: Provide a function to get an additional pin on a page
Date: Fri, 26 May 2023 22:41:41 +0100
Message-Id: <20230526214142.958751-3-dhowells@redhat.com>
In-Reply-To: <20230526214142.958751-1-dhowells@redhat.com>
References: <20230526214142.958751-1-dhowells@redhat.com>
Provide a function to get an additional pin on a page that we already have
a pin on.  This will be used in fs/direct-io.c when dispatching multiple
bios to a page we've extracted from a user-backed iter rather than redoing
the extraction.

Signed-off-by: David Howells
cc: Christoph Hellwig
cc: David Hildenbrand
cc: Lorenzo Stoakes
cc: Andrew Morton
cc: Jens Axboe
cc: Al Viro
cc: Matthew Wilcox
cc: Jan Kara
cc: Jeff Layton
cc: Jason Gunthorpe
cc: Logan Gunthorpe
cc: Hillf Danton
cc: Christian Brauner
cc: Linus Torvalds
cc: linux-fsdevel@vger.kernel.org
cc: linux-block@vger.kernel.org
cc: linux-kernel@vger.kernel.org
cc: linux-mm@kvack.org
Reviewed-by: Christoph Hellwig
Acked-by: David Hildenbrand
---

Notes:
    ver #4)
     - Use _inc rather than _add ops when we're just adding 1.

    ver #3)
     - Rename to folio_add_pin().
     - Change to using is_zero_folio()

 include/linux/mm.h |  1 +
 mm/gup.c           | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)
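A minimal usage sketch (illustrative only; demo_share_pinned_page() is
invented for the example and condenses the pattern fs/direct-io.c will
adopt in the next patch): a page that was extracted with an initial pin
can be handed to a second consumer by taking one more pin on its folio,
after which each consumer drops its own pin independently:

/*
 * Illustrative sketch: take a second pin on an already-pinned page so
 * that two consumers can release it independently.  Per this patch,
 * folio_add_pin() is a no-op on the zero page, matching the behaviour
 * of unpin_user_page().
 */
static void demo_share_pinned_page(struct page *page)
{
	/* The extraction step (e.g. pin_user_pages()) took the first pin. */
	folio_add_pin(page_folio(page));	/* pin for the second consumer */

	/* ... dispatch the page in two bios ... */

	unpin_user_page(page);			/* second consumer's pin */
	unpin_user_page(page);			/* the original pin */
}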
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3c2f6b452586..200068d98686 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2405,6 +2405,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages);
 int pin_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages);
+void folio_add_pin(struct folio *folio);
 
 int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
 int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
diff --git a/mm/gup.c b/mm/gup.c
index ad28261dcafd..0814576b7366 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -275,6 +275,33 @@ void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
+/**
+ * folio_add_pin - Try to get an additional pin on a pinned folio
+ * @folio: The folio to be pinned
+ *
+ * Get an additional pin on a folio we already have a pin on.  Makes no change
+ * if the folio is a zero_page.
+ */
+void folio_add_pin(struct folio *folio)
+{
+	if (is_zero_folio(folio))
+		return;
+
+	/*
+	 * Similar to try_grab_folio(): be sure to *also* increment the normal
+	 * page refcount field at least once, so that the page really is
+	 * pinned.
+	 */
+	if (folio_test_large(folio)) {
+		WARN_ON_ONCE(atomic_read(&folio->_pincount) < 1);
+		folio_ref_inc(folio);
+		atomic_inc(&folio->_pincount);
+	} else {
+		WARN_ON_ONCE(folio_ref_count(folio) < GUP_PIN_COUNTING_BIAS);
+		folio_ref_add(folio, GUP_PIN_COUNTING_BIAS);
+	}
+}
+
 static inline struct folio *gup_folio_range_next(struct page *start,
 		unsigned long npages, unsigned long i, unsigned int *ntails)
 {
From patchwork Fri May 26 21:41:42 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13257340
From: David Howells
To: Christoph Hellwig, David Hildenbrand, Lorenzo Stoakes
Cc: David Howells, Jens Axboe, Al Viro, Matthew Wilcox, Jan Kara,
    Jeff Layton, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    Christian Brauner, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Andrew Morton
Subject: [PATCH v4 3/3] block: Use iov_iter_extract_pages() and page pinning in direct-io.c
Date: Fri, 26 May 2023 22:41:42 +0100
Message-Id: <20230526214142.958751-4-dhowells@redhat.com>
In-Reply-To: <20230526214142.958751-1-dhowells@redhat.com>
References: <20230526214142.958751-1-dhowells@redhat.com>
Change the old block-based direct-I/O code to use iov_iter_extract_pages()
to pin user pages or leave kernel pages unpinned rather than taking refs
when submitting bios.

This makes use of the preceding patches to not take pins on the zero page
(thereby allowing insertion of zero pages in with pinned pages) and to get
additional pins on pages, allowing an extracted page to be used in
multiple bios without having to re-extract it.

Signed-off-by: David Howells
cc: Christoph Hellwig
cc: David Hildenbrand
cc: Lorenzo Stoakes
cc: Andrew Morton
cc: Jens Axboe
cc: Al Viro
cc: Matthew Wilcox
cc: Jan Kara
cc: Jeff Layton
cc: Jason Gunthorpe
cc: Logan Gunthorpe
cc: Hillf Danton
cc: Christian Brauner
cc: Linus Torvalds
cc: linux-fsdevel@vger.kernel.org
cc: linux-block@vger.kernel.org
cc: linux-kernel@vger.kernel.org
cc: linux-mm@kvack.org
Reviewed-by: Christoph Hellwig
---

Notes:
    ver #3)
     - Rename need_unpin to is_pinned in struct dio.
     - page_get_additional_pin() was renamed to folio_add_pin().

    ver #2)
     - Need to set BIO_PAGE_PINNED conditionally, not BIO_PAGE_REFFED.

 fs/direct-io.c | 72 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 43 insertions(+), 29 deletions(-)
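The shape of the conversion, sketched before the diff (illustrative only;
demo_extract_and_release() is invented and compresses several of the hunks
below into one function, reusing DIO_PAGES from this file): extraction
pins user-backed pages but merely borrows kernel-backed ones, so every
release becomes conditional on whether pins were taken:

/*
 * Illustrative sketch: iov_iter_extract_pages() pins pages from a
 * user-backed iterator but leaves kernel-backed pages unpinned, so the
 * release step - like every put_page() this patch replaces - is made
 * conditional on iov_iter_extract_will_pin().
 */
static ssize_t demo_extract_and_release(struct iov_iter *iter)
{
	struct page *pages[DIO_PAGES], **pp = pages;
	bool is_pinned = iov_iter_extract_will_pin(iter);
	size_t from;
	ssize_t ret;

	ret = iov_iter_extract_pages(iter, &pp, LONG_MAX, DIO_PAGES, 0, &from);
	if (ret <= 0)
		return ret;

	/* ... build and submit a bio, setting BIO_PAGE_PINNED if is_pinned ... */

	if (is_pinned)	/* ret is bytes; convert to the page count spanned */
		unpin_user_pages(pages, DIV_ROUND_UP(from + ret, PAGE_SIZE));
	return ret;
}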
diff --git a/fs/direct-io.c b/fs/direct-io.c
index ad20f3428bab..0643f1bb4b59 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -42,8 +42,8 @@
 #include "internal.h"
 
 /*
- * How many user pages to map in one call to get_user_pages().  This determines
- * the size of a structure in the slab cache
+ * How many user pages to map in one call to iov_iter_extract_pages().  This
+ * determines the size of a structure in the slab cache
 */
 #define DIO_PAGES	64
 
@@ -121,12 +121,13 @@ struct dio {
 	struct inode *inode;
 	loff_t i_size;			/* i_size when submitted */
 	dio_iodone_t *end_io;		/* IO completion function */
+	bool is_pinned;			/* T if we have pins on the pages */
 
 	void *private;			/* copy from map_bh.b_private */
 
 	/* BIO completion state */
 	spinlock_t bio_lock;		/* protects BIO fields below */
-	int page_errors;		/* errno from get_user_pages() */
+	int page_errors;		/* err from iov_iter_extract_pages() */
 	int is_async;			/* is IO async ? */
 	bool defer_completion;		/* defer AIO completion to workqueue? */
 	bool should_dirty;		/* if pages should be dirtied */
@@ -165,14 +166,14 @@ static inline unsigned dio_pages_present(struct dio_submit *sdio)
  */
 static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 {
+	struct page **pages = dio->pages;
 	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
 	ssize_t ret;
 
-	ret = iov_iter_get_pages2(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
-				&sdio->from);
+	ret = iov_iter_extract_pages(sdio->iter, &pages, LONG_MAX,
+				     DIO_PAGES, 0, &sdio->from);
 
 	if (ret < 0 && sdio->blocks_available && dio_op == REQ_OP_WRITE) {
-		struct page *page = ZERO_PAGE(0);
 		/*
 		 * A memory fault, but the filesystem has some outstanding
 		 * mapped blocks.  We need to use those blocks up to avoid
@@ -180,8 +181,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 		 */
 		if (dio->page_errors == 0)
 			dio->page_errors = ret;
-		get_page(page);
-		dio->pages[0] = page;
+		dio->pages[0] = ZERO_PAGE(0);
 		sdio->head = 0;
 		sdio->tail = 1;
 		sdio->from = 0;
@@ -201,9 +201,9 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 
 /*
  * Get another userspace page.  Returns an ERR_PTR on error.  Pages are
- * buffered inside the dio so that we can call get_user_pages() against a
- * decent number of pages, less frequently.  To provide nicer use of the
- * L1 cache.
+ * buffered inside the dio so that we can call iov_iter_extract_pages()
+ * against a decent number of pages, less frequently.  To provide nicer use of
+ * the L1 cache.
 */
 static inline struct page *dio_get_page(struct dio *dio,
 					struct dio_submit *sdio)
@@ -219,6 +219,18 @@ static inline struct page *dio_get_page(struct dio *dio,
 	return dio->pages[sdio->head];
 }
 
+static void dio_pin_page(struct dio *dio, struct page *page)
+{
+	if (dio->is_pinned)
+		folio_add_pin(page_folio(page));
+}
+
+static void dio_unpin_page(struct dio *dio, struct page *page)
+{
+	if (dio->is_pinned)
+		unpin_user_page(page);
+}
+
 /*
  * dio_complete() - called when all DIO BIO I/O has been completed
  *
@@ -402,8 +414,8 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 		bio->bi_end_io = dio_bio_end_aio;
 	else
 		bio->bi_end_io = dio_bio_end_io;
-	/* for now require references for all pages */
-	bio_set_flag(bio, BIO_PAGE_REFFED);
+	if (dio->is_pinned)
+		bio_set_flag(bio, BIO_PAGE_PINNED);
 	sdio->bio = bio;
 	sdio->logical_offset_in_bio = sdio->cur_page_fs_offset;
 }
@@ -444,8 +456,9 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
  */
 static inline void dio_cleanup(struct dio *dio, struct dio_submit *sdio)
 {
-	while (sdio->head < sdio->tail)
-		put_page(dio->pages[sdio->head++]);
+	if (dio->is_pinned)
+		unpin_user_pages(dio->pages + sdio->head,
+				 sdio->tail - sdio->head);
 }
 
 /*
@@ -676,7 +689,7 @@ static inline int dio_new_bio(struct dio *dio, struct dio_submit *sdio,
 *
 * Return zero on success.  Non-zero means the caller needs to start a new BIO.
 */
-static inline int dio_bio_add_page(struct dio_submit *sdio)
+static inline int dio_bio_add_page(struct dio *dio, struct dio_submit *sdio)
 {
 	int ret;
 
@@ -688,7 +701,7 @@ static inline int dio_bio_add_page(struct dio_submit *sdio)
 	 */
 	if ((sdio->cur_page_len + sdio->cur_page_offset) == PAGE_SIZE)
 		sdio->pages_in_io--;
-	get_page(sdio->cur_page);
+	dio_pin_page(dio, sdio->cur_page);
 	sdio->final_block_in_bio = sdio->cur_page_block +
 		(sdio->cur_page_len >> sdio->blkbits);
 	ret = 0;
@@ -743,11 +756,11 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio,
 		goto out;
 	}
 
-	if (dio_bio_add_page(sdio) != 0) {
+	if (dio_bio_add_page(dio, sdio) != 0) {
 		dio_bio_submit(dio, sdio);
 		ret = dio_new_bio(dio, sdio, sdio->cur_page_block, map_bh);
 		if (ret == 0) {
-			ret = dio_bio_add_page(sdio);
+			ret = dio_bio_add_page(dio, sdio);
 			BUG_ON(ret != 0);
 		}
 	}
@@ -804,13 +817,13 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 	 */
 	if (sdio->cur_page) {
 		ret = dio_send_cur_page(dio, sdio, map_bh);
-		put_page(sdio->cur_page);
+		dio_unpin_page(dio, sdio->cur_page);
 		sdio->cur_page = NULL;
 		if (ret)
 			return ret;
 	}
 
-	get_page(page);		/* It is in dio */
+	dio_pin_page(dio, page);	/* It is in dio */
 	sdio->cur_page = page;
 	sdio->cur_page_offset = offset;
 	sdio->cur_page_len = len;
@@ -825,7 +838,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 		ret = dio_send_cur_page(dio, sdio, map_bh);
 		if (sdio->bio)
 			dio_bio_submit(dio, sdio);
-		put_page(sdio->cur_page);
+		dio_unpin_page(dio, sdio->cur_page);
 		sdio->cur_page = NULL;
 	}
 	return ret;
@@ -926,7 +939,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 
 				ret = get_more_blocks(dio, sdio, map_bh);
 				if (ret) {
-					put_page(page);
+					dio_unpin_page(dio, page);
 					goto out;
 				}
 				if (!buffer_mapped(map_bh))
@@ -971,7 +984,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 
 				/* AKPM: eargh, -ENOTBLK is a hack */
 				if (dio_op == REQ_OP_WRITE) {
-					put_page(page);
+					dio_unpin_page(dio, page);
 					return -ENOTBLK;
 				}
 
@@ -984,7 +997,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 				if (sdio->block_in_file >= i_size_aligned >> blkbits) {
 					/* We hit eof */
-					put_page(page);
+					dio_unpin_page(dio, page);
 					goto out;
 				}
 				zero_user(page, from, 1 << blkbits);
@@ -1024,7 +1037,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 						  sdio->next_block_for_io,
 						  map_bh);
 			if (ret) {
-				put_page(page);
+				dio_unpin_page(dio, page);
 				goto out;
 			}
 			sdio->next_block_for_io += this_chunk_blocks;
@@ -1039,8 +1052,8 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 			break;
 		}
 
-		/* Drop the ref which was taken in get_user_pages() */
-		put_page(page);
+		/* Drop the pin which was taken in get_user_pages() */
+		dio_unpin_page(dio, page);
 	}
 out:
 	return ret;
@@ -1135,6 +1148,7 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
 			/* will be released by direct_io_worker */
 			inode_lock(inode);
 		}
+	dio->is_pinned = iov_iter_extract_will_pin(iter);
 
 	/* Once we sampled i_size check for reads beyond EOF */
 	dio->i_size = i_size_read(inode);
@@ -1259,7 +1273,7 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
 		ret2 = dio_send_cur_page(dio, &sdio, &map_bh);
 		if (retval == 0)
 			retval = ret2;
-		put_page(sdio.cur_page);
+		dio_unpin_page(dio, sdio.cur_page);
 		sdio.cur_page = NULL;
 	}
 	if (sdio.bio)