From patchwork Fri Jan 20 17:55:50 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13110332
From: David Howells
To: Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jens Axboe, Jan Kara, Jeff Layton, Logan Gunthorpe, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig, John Hubbard, linux-mm@kvack.org
Subject: [PATCH v7 2/8] iov_iter: Add a function to extract a page list from an iterator
Date: Fri, 20 Jan 2023 17:55:50 +0000
Message-Id: <20230120175556.3556978-3-dhowells@redhat.com>
In-Reply-To: <20230120175556.3556978-1-dhowells@redhat.com>
References: <20230120175556.3556978-1-dhowells@redhat.com>
Add a function, iov_iter_extract_pages(), to extract a list of pages from an
iterator.
The pages may be returned with a reference added or a pin added or neither,
depending on the type of iterator and the direction of transfer.  How the
iterator contents are to be used is determined from the iterator's
data_source member rather than from caller-supplied direction flags.

Add a second function, iov_iter_extract_mode(), to determine how the cleanup
should be done.

There are three cases:

 (1) Transfer *into* an ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have pins obtained on them (but not references) so
     that fork() doesn't CoW the pages incorrectly whilst the I/O is in
     progress.

     iov_iter_extract_mode() will return FOLL_PIN for this case.  The caller
     should use something like unpin_user_page() to dispose of the page.

 (2) Transfer is *out of* an ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have references obtained on them, but not pins.

     iov_iter_extract_mode() will return FOLL_GET.  The caller should use
     something like put_page() for page disposal.

 (3) Any other sort of iterator.

     No refs or pins are obtained on the page, the assumption is made that
     the caller will manage page retention.  ITER_ALLOW_P2PDMA is not
     permitted.

     iov_iter_extract_mode() will return 0.  The pages don't need additional
     disposal.
Signed-off-by: David Howells
cc: Al Viro
cc: Christoph Hellwig
cc: John Hubbard
cc: Matthew Wilcox
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/166920903885.1461876.692029808682876184.stgit@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/166997421646.9475.14837976344157464997.stgit@warthog.procyon.org.uk/ # v3
Link: https://lore.kernel.org/r/167305163883.1521586.10777155475378874823.stgit@warthog.procyon.org.uk/ # v4
Link: https://lore.kernel.org/r/167344728530.2425628.9613910866466387722.stgit@warthog.procyon.org.uk/ # v5
Link: https://lore.kernel.org/r/167391053207.2311931.16398133457201442907.stgit@warthog.procyon.org.uk/ # v6
---

Notes:
    ver #7)
     - Switch to passing in iter-specific flags rather than FOLL_* flags.
     - Drop the direction flags for now.
     - Use ITER_ALLOW_P2PDMA to request FOLL_PCI_P2PDMA.
     - Disallow use of ITER_ALLOW_P2PDMA with non-user-backed iter.
     - Add support for extraction from KVEC-type iters.
     - Use iov_iter_advance() rather than open-coding it.
     - Make BVEC- and KVEC-type skip over initial empty vectors.

    ver #6)
     - Add back the function to indicate the cleanup mode.
     - Drop the cleanup_mode return arg to iov_iter_extract_pages().
     - Pass FOLL_SOURCE/DEST_BUF in gup_flags.  Check this against the iter
       data_source.

    ver #4)
     - Use ITER_SOURCE/DEST instead of WRITE/READ.
     - Allow additional FOLL_* flags, such as FOLL_PCI_P2PDMA to be passed in.

    ver #3)
     - Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party access
       to get/pin_user_pages_fast()[1].
 include/linux/uio.h |  28 +++
 lib/iov_iter.c      | 424 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 452 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 46d5080314c6..a4233049ab7a 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -363,4 +363,32 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 /* Flags for iov_iter_get/extract_pages*() */
 #define ITER_ALLOW_P2PDMA	0x01 /* Allow P2PDMA on the extracted pages */
 
+ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
+			       size_t maxsize, unsigned int maxpages,
+			       unsigned int extract_flags, size_t *offset0);
+
+/**
+ * iov_iter_extract_mode - Indicate how pages from the iterator will be retained
+ * @iter: The iterator
+ * @extract_flags: How the iterator is to be used
+ *
+ * Examine the iterator and @extract_flags and indicate by returning FOLL_PIN,
+ * FOLL_GET or 0 as to how, if at all, pages extracted from the iterator will
+ * be retained by the extraction function.
+ *
+ * FOLL_GET indicates that the pages will have a reference taken on them that
+ * the caller must put.  This can be done for DMA/async DIO write from a page.
+ *
+ * FOLL_PIN indicates that the pages will have a pin placed on them that the
+ * caller must unpin.  This must be done for DMA/async DIO read to a page to
+ * avoid CoW problems in fork.
+ *
+ * 0 indicates that no measures are taken and that it's up to the caller to
+ * retain the pages.
+ */
+#define iov_iter_extract_mode(iter, extract_flags)	\
+	(user_backed_iter(iter) ?			\
+	 (iter->data_source == ITER_SOURCE) ?		\
+	 FOLL_GET : FOLL_PIN : 0)
+
 #endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fb04abe7d746..843abe566efb 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1916,3 +1916,427 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
 	i->iov -= state->nr_segs - i->nr_segs;
 	i->nr_segs = state->nr_segs;
 }
+
+/*
+ * Extract a list of contiguous pages from an ITER_PIPE iterator.  This does
+ * not get references of its own on the pages, nor does it get a pin on them.
+ * If there's a partial page, it adds that first and will then allocate and
+ * add pages into the pipe to make up the buffer space to the amount required.
+ *
+ * The caller must hold the pipe locked and only transferring into a pipe is
+ * supported.
+ */
+static ssize_t iov_iter_extract_pipe_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	unsigned int nr, offset, chunk, j;
+	struct page **p;
+	size_t left;
+
+	if (!sanity(i))
+		return -EFAULT;
+
+	offset = pipe_npages(i, &nr);
+	if (!nr)
+		return -EFAULT;
+	*offset0 = offset;
+
+	maxpages = min_t(size_t, nr, maxpages);
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	left = maxsize;
+	for (j = 0; j < maxpages; j++) {
+		struct page *page = append_pipe(i, left, &offset);
+		if (!page)
+			break;
+		chunk = min_t(size_t, left, PAGE_SIZE - offset);
+		left -= chunk;
+		*p++ = page;
+	}
+	if (!j)
+		return -EFAULT;
+	return maxsize - left;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_XARRAY iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_xarray_pages(struct iov_iter *i,
+					     struct page ***pages, size_t maxsize,
+					     unsigned int maxpages,
+					     unsigned int extract_flags,
+					     size_t *offset0)
+{
+	struct page *page, **p;
+	unsigned int nr = 0, offset;
+	loff_t pos = i->xarray_start + i->iov_offset;
+	pgoff_t index = pos >> PAGE_SHIFT;
+	XA_STATE(xas, i->xarray, index);
+
+	offset = pos & ~PAGE_MASK;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	rcu_read_lock();
+	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+		if (xas_retry(&xas, page))
+			continue;
+
+		/* Has the page moved or been split? */
+		if (unlikely(page != xas_reload(&xas))) {
+			xas_reset(&xas);
+			continue;
+		}
+
+		p[nr++] = find_subpage(page, xas.xa_index);
+		if (nr == maxpages)
+			break;
+	}
+	rcu_read_unlock();
+
+	maxsize = min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_BVEC iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	struct page **p, *page;
+	size_t skip = i->iov_offset, offset;
+	int k;
+
+	for (;;) {
+		if (i->nr_segs == 0)
+			return 0;
+		maxsize = min(maxsize, i->bvec->bv_len - skip);
+		if (maxsize)
+			break;
+		i->iov_offset = 0;
+		i->nr_segs--;
+		i->bvec++;
+		skip = 0;
+	}
+
+	skip += i->bvec->bv_offset;
+	page = i->bvec->bv_page + skip / PAGE_SIZE;
+	offset = skip % PAGE_SIZE;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+	for (k = 0; k < maxpages; k++)
+		p[k] = page + k;
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of virtually contiguous pages from an ITER_KVEC iterator.
+ * This does not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_kvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	struct page **p, *page;
+	const void *kaddr;
+	size_t skip = i->iov_offset, offset, len;
+	int k;
+
+	for (;;) {
+		if (i->nr_segs == 0)
+			return 0;
+		maxsize = min(maxsize, i->kvec->iov_len - skip);
+		if (maxsize)
+			break;
+		i->iov_offset = 0;
+		i->nr_segs--;
+		i->kvec++;
+		skip = 0;
+	}
+
+	kaddr = i->kvec->iov_base + skip;
+	offset = (unsigned long)kaddr % PAGE_SIZE;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	kaddr -= offset;
+	len = offset + maxsize;
+	for (k = 0; k < maxpages; k++) {
+		size_t seg = min_t(size_t, len, PAGE_SIZE);
+
+		if (is_vmalloc_or_module_addr(kaddr))
+			page = vmalloc_to_page(kaddr);
+		else
+			page = virt_to_page(kaddr);
+
+		p[k] = page;
+		len -= seg;
+		kaddr += PAGE_SIZE;
+	}
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Get the first segment from an ITER_UBUF or ITER_IOVEC iterator.  The
+ * iterator must not be empty.
+ */
+static unsigned long iov_iter_extract_first_user_segment(const struct iov_iter *i,
+							 size_t *size)
+{
+	size_t skip;
+	long k;
+
+	if (iter_is_ubuf(i))
+		return (unsigned long)i->ubuf + i->iov_offset;
+
+	for (k = 0, skip = i->iov_offset; k < i->nr_segs; k++, skip = 0) {
+		size_t len = i->iov[k].iov_len - skip;
+
+		if (unlikely(!len))
+			continue;
+		if (*size > len)
+			*size = len;
+		return (unsigned long)i->iov[k].iov_base + skip;
+	}
+	BUG(); // if it had been empty, we wouldn't get called
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get references
+ * on them.  This should only be used iff the iterator is user-backed
+ * (IOBUF/UBUF) and data is being transferred out of the buffer described by
+ * the iterator (ie. this is the source).
+ *
+ * The pages are returned with incremented refcounts that the caller must undo
+ * once the transfer is complete, but no additional pins are obtained.
+ *
+ * This is only safe to be used where background IO/DMA is not going to be
+ * modifying the buffer, and so won't cause a problem with CoW on fork.
+ */
+static ssize_t iov_iter_extract_user_pages_and_get(struct iov_iter *i,
+						   struct page ***pages,
+						   size_t maxsize,
+						   unsigned int maxpages,
+						   unsigned int extract_flags,
+						   size_t *offset0)
+{
+	unsigned long addr;
+	unsigned int gup_flags = FOLL_GET;
+	size_t offset;
+	int res;
+
+	if (WARN_ON_ONCE(i->data_source != ITER_SOURCE))
+		return -EFAULT;
+
+	if (extract_flags & ITER_ALLOW_P2PDMA)
+		gup_flags |= FOLL_PCI_P2PDMA;
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = iov_iter_extract_first_user_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = get_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get a pin on
+ * each of them.  This should only be used iff the iterator is user-backed
+ * (IOBUF/UBUF) and data is being transferred into the buffer described by the
+ * iterator (ie. this is the destination).
+ *
+ * It does not get refs on the pages, but the pages must be unpinned by the
+ * caller once the transfer is complete.
+ *
+ * This is safe to be used where background IO/DMA *is* going to be modifying
+ * the buffer; using a pin rather than a ref makes sure that CoW happens
+ * correctly in the parent during fork.
+ */
+static ssize_t iov_iter_extract_user_pages_and_pin(struct iov_iter *i,
+						   struct page ***pages,
+						   size_t maxsize,
+						   unsigned int maxpages,
+						   unsigned int extract_flags,
+						   size_t *offset0)
+{
+	unsigned long addr;
+	unsigned int gup_flags = FOLL_PIN | FOLL_WRITE;
+	size_t offset;
+	int res;
+
+	if (WARN_ON_ONCE(i->data_source != ITER_DEST))
+		return -EFAULT;
+
+	if (extract_flags & ITER_ALLOW_P2PDMA)
+		gup_flags |= FOLL_PCI_P2PDMA;
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = first_iovec_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	if (iov_iter_extract_mode(i, extract_flags) == FOLL_GET)
+		return iov_iter_extract_user_pages_and_get(i, pages, maxsize,
+							   maxpages, extract_flags,
+							   offset0);
+	else
+		return iov_iter_extract_user_pages_and_pin(i, pages, maxsize,
+							   maxpages, extract_flags,
+							   offset0);
+}
+
+/**
+ * iov_iter_extract_pages - Extract a list of contiguous pages from an iterator
+ * @i: The iterator to extract from
+ * @pages: Where to return the list of pages
+ * @maxsize: The maximum amount of iterator to extract
+ * @maxpages: The maximum size of the list of pages
+ * @extract_flags: Flags to qualify request
+ * @offset0: Where to return the starting offset into (*@pages)[0]
+ *
+ * Extract a list of contiguous pages from the current point of the iterator,
+ * advancing the iterator.  The maximum number of pages and the maximum amount
+ * of page contents can be set.
+ *
+ * If *@pages is NULL, a page list will be allocated to the required size and
+ * *@pages will be set to its base.  If *@pages is not NULL, it will be assumed
+ * that the caller allocated a page list at least @maxpages in size and this
+ * will be filled in.
+ *
+ * @extract_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA be
+ * allowed on the pages extracted.
+ *
+ * The iov_iter_extract_mode() function can be used to query how cleanup should
+ * be performed.
+ *
+ * Extra refs or pins on the pages may be obtained as follows:
+ *
+ * (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF) and data is to be
+ *     transferred /OUT OF/ the buffer (@i->data_source == ITER_SOURCE), refs
+ *     will be taken on the pages, but pins will not be added.  This can be
+ *     used for DMA from a page; it cannot be used for DMA to a page, as it
+ *     may cause page-CoW problems in fork.  iov_iter_extract_mode() will
+ *     return FOLL_GET.
+ *
+ * (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF) and data is to be
+ *     transferred /INTO/ the described buffer (@i->data_source == ITER_DEST),
+ *     pins will be added to the pages, but refs will not be taken.  This must
+ *     be used for DMA to a page.  iov_iter_extract_mode() will return
+ *     FOLL_PIN.
+ *
+ * (*) If the iterator is ITER_PIPE, this must describe a destination for the
+ *     data.  Additional pages may be allocated and added to the pipe (which
+ *     will hold the refs), but neither refs nor pins will be obtained for the
+ *     caller.  The caller must hold the pipe lock.  iov_iter_extract_mode()
+ *     will return 0.
+ *
+ * (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are
+ *     merely listed; no extra refs or pins are obtained.
+ *     iov_iter_extract_mode() will return 0.
+ *
+ * Note also:
+ *
+ * (*) Peer-to-peer DMA (ITER_ALLOW_P2PDMA) is only permitted with user-backed
+ *     iterators.
+ *
+ * (*) Use with ITER_DISCARD is not supported as that has no content.
+ *
+ * On success, the function sets *@pages to the new pagelist, if allocated, and
+ * sets *offset0 to the offset into the first page.
+ *
+ * It may also return -ENOMEM and -EFAULT.
+ */
+ssize_t iov_iter_extract_pages(struct iov_iter *i,
+			       struct page ***pages,
+			       size_t maxsize,
+			       unsigned int maxpages,
+			       unsigned int extract_flags,
+			       size_t *offset0)
+{
+	maxsize = min_t(size_t, min_t(size_t, maxsize, i->count), MAX_RW_COUNT);
+	if (!maxsize)
+		return 0;
+
+	if (likely(user_backed_iter(i)))
+		return iov_iter_extract_user_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (WARN_ON_ONCE(extract_flags & ITER_ALLOW_P2PDMA))
+		return -EIO;
+	if (iov_iter_is_kvec(i))
+		return iov_iter_extract_kvec_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (iov_iter_is_bvec(i))
+		return iov_iter_extract_bvec_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (iov_iter_is_pipe(i))
+		return iov_iter_extract_pipe_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (iov_iter_is_xarray(i))
+		return iov_iter_extract_xarray_pages(i, pages, maxsize,
+						     maxpages, extract_flags,
+						     offset0);
+	return -EFAULT;
+}
+EXPORT_SYMBOL_GPL(iov_iter_extract_pages);

From patchwork Fri Jan 20 17:55:51 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13110333
From: David Howells
To: Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jens Axboe, Jan Kara, Jeff Layton, Logan Gunthorpe, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig, linux-mm@kvack.org
Subject: [PATCH v7 3/8] mm: Provide a helper to drop a pin/ref on a page
Date: Fri, 20 Jan 2023 17:55:51 +0000
Message-Id: <20230120175556.3556978-4-dhowells@redhat.com>
In-Reply-To: <20230120175556.3556978-1-dhowells@redhat.com>
References: <20230120175556.3556978-1-dhowells@redhat.com>

Provide a
helper in the get_user_pages code to drop a pin or a ref on a page based on being given FOLL_GET or FOLL_PIN in its flags argument or do nothing if neither is set. Signed-off-by: David Howells cc: Al Viro cc: Christoph Hellwig cc: Matthew Wilcox cc: linux-fsdevel@vger.kernel.org cc: linux-block@vger.kernel.org cc: linux-mm@kvack.org --- include/linux/mm.h | 3 +++ mm/gup.c | 22 ++++++++++++++++++++++ 2 files changed, 25 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index f3f196e4d66d..f1cf8f4eb946 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1367,6 +1367,9 @@ static inline bool is_cow_mapping(vm_flags_t flags) #define SECTION_IN_PAGE_FLAGS #endif +void folio_put_unpin(struct folio *folio, unsigned int flags); +void page_put_unpin(struct page *page, unsigned int flags); + /* * The identification function is mainly used by the buddy allocator for * determining if two pages could be buddies. We are not really identifying diff --git a/mm/gup.c b/mm/gup.c index f45a3a5be53a..3ee4b4c7e0cb 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -191,6 +191,28 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) folio_put_refs(folio, refs); } +/** + * folio_put_unpin - Unpin/put a folio as appropriate + * @folio: The folio to release + * @flags: gup flags indicating the mode of release (FOLL_*) + * + * Release a folio according to the flags. If FOLL_GET is set, the folio has a + * ref dropped; if FOLL_PIN is set, it is unpinned; otherwise it is left + * unaltered. 
+ */
+void folio_put_unpin(struct folio *folio, unsigned int flags)
+{
+	if (flags & (FOLL_GET | FOLL_PIN))
+		gup_put_folio(folio, 1, flags);
+}
+EXPORT_SYMBOL_GPL(folio_put_unpin);
+
+void page_put_unpin(struct page *page, unsigned int flags)
+{
+	folio_put_unpin(page_folio(page), flags);
+}
+EXPORT_SYMBOL_GPL(page_put_unpin);
+
 /**
  * try_grab_page() - elevate a page's refcount by a flag-dependent amount
  * @page: pointer to page to be grabbed

From patchwork Fri Jan 20 17:55:56 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13110334
From: David Howells
To: Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jens Axboe, Jan Kara, Jeff Layton,
    Logan Gunthorpe, linux-fsdevel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    Christoph Hellwig, linux-mm@kvack.org
Subject: [PATCH v7 8/8] mm: Renumber FOLL_GET and FOLL_PIN down
Date: Fri, 20 Jan 2023 17:55:56 +0000
Message-Id: <20230120175556.3556978-9-dhowells@redhat.com>
In-Reply-To: <20230120175556.3556978-1-dhowells@redhat.com>
References: <20230120175556.3556978-1-dhowells@redhat.com>
MIME-Version: 1.0
Sender: owner-linux-mm@kvack.org
Precedence: bulk
List-ID:

Renumber FOLL_GET and FOLL_PIN down to bits 0 and 1 respectively so that
they are coincidentally the same as BIO_PAGE_REFFED and BIO_PAGE_PINNED,
and so that they can be stored in the bottom two bits of a page pointer
(something I'm looking at for zerocopy socket fragments).
Signed-off-by: David Howells
cc: Al Viro
cc: Christoph Hellwig
cc: Matthew Wilcox
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Reviewed-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f1cf8f4eb946..33c9eacd9548 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3074,12 +3074,13 @@ static inline vm_fault_t vmf_error(int err)
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 			 unsigned int foll_flags);
 
-#define FOLL_WRITE	0x01	/* check pte is writable */
-#define FOLL_TOUCH	0x02	/* mark page accessed */
-#define FOLL_GET	0x04	/* do get_page on page */
-#define FOLL_DUMP	0x08	/* give error on hole if it would be zero */
-#define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */
-#define FOLL_NOWAIT	0x20	/* if a disk transfer is needed, start the IO
+#define FOLL_GET	0x01	/* do get_page on page (equivalent to BIO_FOLL_GET) */
+#define FOLL_PIN	0x02	/* pages must be released via unpin_user_page */
+#define FOLL_WRITE	0x04	/* check pte is writable */
+#define FOLL_TOUCH	0x08	/* mark page accessed */
+#define FOLL_DUMP	0x10	/* give error on hole if it would be zero */
+#define FOLL_FORCE	0x20	/* get_user_pages read/write w/o permission */
+#define FOLL_NOWAIT	0x40	/* if a disk transfer is needed, start the IO
 				 * and return without waiting upon it */
 #define FOLL_NOFAULT	0x80	/* do not fault in pages */
 #define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
@@ -3088,7 +3089,6 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_ANON	0x8000	/* don't do file mappings */
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
-#define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
 #define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
 #define FOLL_PCI_P2PDMA	0x100000 /* allow returning PCI P2PDMA pages */
 #define FOLL_INTERRUPTIBLE  0x200000 /* allow interrupts from generic signals */