From patchwork Sat Jan 7 00:33:58 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13091911
Subject: [PATCH v4 4/7] iov_iter: Add a function to extract a page list from an iterator
From: David Howells
To: Al Viro
Cc: Christoph Hellwig, John Hubbard, Matthew Wilcox, Jens Axboe,
    Jeff Layton, Logan Gunthorpe, dhowells@redhat.com,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Sat, 07 Jan 2023 00:33:58 +0000
Message-ID: <167305163883.1521586.10777155475378874823.stgit@warthog.procyon.org.uk>
In-Reply-To: <167305160937.1521586.133299343565358971.stgit@warthog.procyon.org.uk>
References: <167305160937.1521586.133299343565358971.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0
Organization: Red Hat UK Ltd.
Add a function, iov_iter_extract_pages(), to extract a list of pages from
an iterator.  The pages may be returned with a reference added or a pin
added or neither, depending on the type of iterator and the direction of
transfer.  The function also indicates the mode of retention that was
employed for an iterator - and therefore how the caller should dispose of
the pages later.

There are three cases:

 (1) Transfer *into* an ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have pins obtained on them (but not references)
     so that fork() doesn't CoW the pages incorrectly whilst the I/O is in
     progress.

     The indicated mode of retention will be FOLL_PIN for this case.  The
     caller should use something like unpin_user_page() to dispose of the
     page.

 (2) Transfer is *out of* an ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have references obtained on them, but not pins.
     The indicated mode of retention will be FOLL_GET.  The caller should
     use something like put_page() for page disposal.

 (3) Any other sort of iterator.

     No refs or pins are obtained on the page; the assumption is made that
     the caller will manage page retention.  The indicated mode of
     retention will be 0.  The pages don't need additional disposal.

Changes:
========
ver #4)
 - Use ITER_SOURCE/DEST instead of WRITE/READ.
 - Allow additional FOLL_* flags, such as FOLL_PCI_P2PDMA, to be passed in.

ver #3)
 - Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party access
   to get/pin_user_pages_fast()[1].

Signed-off-by: David Howells
cc: Al Viro
cc: Christoph Hellwig
cc: John Hubbard
cc: Matthew Wilcox
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/Y3zFzdWnWlEJ8X8/@infradead.org/ [1]
Link: https://lore.kernel.org/r/166722777971.2555743.12953624861046741424.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166732025748.3186319.8314014902727092626.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166869689451.3723671.18242195992447653092.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166920903885.1461876.692029808682876184.stgit@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/166997421646.9475.14837976344157464997.stgit@warthog.procyon.org.uk/ # v3
---

 include/linux/uio.h |    5 +
 lib/iov_iter.c      |  361 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 366 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index acb1ae3324ed..9a36b4cddb28 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -382,4 +382,9 @@ static inline void iov_iter_ubuf(struct iov_iter *i, enum iter_dir direction,
 	};
 }

+ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
+			       size_t maxsize, unsigned int maxpages,
+			       unsigned int gup_flags,
+			       size_t *offset0, unsigned int *cleanup_mode);
+
 #endif

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fec1c5513197..dc6db5ad108b 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1914,3 +1914,364 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
 	i->iov -= state->nr_segs - i->nr_segs;
 	i->nr_segs = state->nr_segs;
 }
+
+/*
+ * Extract a list of contiguous pages from an ITER_PIPE iterator.  This does
+ * not get references of its own on the pages, nor does it get a pin on them.
+ * If there's a partial page, it adds that first and will then allocate and add
+ * pages into the pipe to make up the buffer space to the amount required.
+ *
+ * The caller must hold the pipe locked and only transferring into a pipe is
+ * supported.
+ */
+static ssize_t iov_iter_extract_pipe_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int gup_flags,
+					   size_t *offset0,
+					   unsigned int *cleanup_mode)
+{
+	unsigned int nr, offset, chunk, j;
+	struct page **p;
+	size_t left;
+
+	if (!sanity(i))
+		return -EFAULT;
+
+	offset = pipe_npages(i, &nr);
+	if (!nr)
+		return -EFAULT;
+	*offset0 = offset;
+
+	maxpages = min_t(size_t, nr, maxpages);
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	left = maxsize;
+	for (j = 0; j < maxpages; j++) {
+		struct page *page = append_pipe(i, left, &offset);
+		if (!page)
+			break;
+		chunk = min_t(size_t, left, PAGE_SIZE - offset);
+		left -= chunk;
+		*p++ = page;
+	}
+	if (!j)
+		return -EFAULT;
+	*cleanup_mode = 0;
+	return maxsize - left;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_XARRAY iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_xarray_pages(struct iov_iter *i,
+					     struct page ***pages, size_t maxsize,
+					     unsigned int maxpages,
+					     unsigned int gup_flags,
+					     size_t *offset0,
+					     unsigned int *cleanup_mode)
+{
+	struct page *page, **p;
+	unsigned int nr = 0, offset;
+	loff_t pos = i->xarray_start + i->iov_offset;
+	pgoff_t index = pos >> PAGE_SHIFT;
+	XA_STATE(xas, i->xarray, index);
+
+	offset = pos & ~PAGE_MASK;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	rcu_read_lock();
+	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+		if (xas_retry(&xas, page))
+			continue;
+
+		/* Has the page moved or been split? */
+		if (unlikely(page != xas_reload(&xas))) {
+			xas_reset(&xas);
+			continue;
+		}
+
+		p[nr++] = find_subpage(page, xas.xa_index);
+		if (nr == maxpages)
+			break;
+	}
+	rcu_read_unlock();
+
+	maxsize = min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+	i->iov_offset += maxsize;
+	i->count -= maxsize;
+	*cleanup_mode = 0;
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_BVEC iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int gup_flags,
+					   size_t *offset0,
+					   unsigned int *cleanup_mode)
+{
+	struct page **p, *page;
+	size_t skip = i->iov_offset, offset;
+	int k;
+
+	maxsize = min(maxsize, i->bvec->bv_len - skip);
+	skip += i->bvec->bv_offset;
+	page = i->bvec->bv_page + skip / PAGE_SIZE;
+	offset = skip % PAGE_SIZE;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+	for (k = 0; k < maxpages; k++)
+		p[k] = page + k;
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	i->count -= maxsize;
+	i->iov_offset += maxsize;
+	if (i->iov_offset == i->bvec->bv_len) {
+		i->iov_offset = 0;
+		i->bvec++;
+		i->nr_segs--;
+	}
+	*cleanup_mode = 0;
+	return maxsize;
+}
+
+/*
+ * Get the first segment from an ITER_UBUF or ITER_IOVEC iterator.  The
+ * iterator must not be empty.
+ */
+static unsigned long iov_iter_extract_first_user_segment(const struct iov_iter *i,
+							 size_t *size)
+{
+	size_t skip;
+	long k;
+
+	if (iter_is_ubuf(i))
+		return (unsigned long)i->ubuf + i->iov_offset;
+
+	for (k = 0, skip = i->iov_offset; k < i->nr_segs; k++, skip = 0) {
+		size_t len = i->iov[k].iov_len - skip;
+
+		if (unlikely(!len))
+			continue;
+		if (*size > len)
+			*size = len;
+		return (unsigned long)i->iov[k].iov_base + skip;
+	}
+	BUG(); // if it had been empty, we wouldn't get called
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get references
+ * on them.  This should only be used iff the iterator is user-backed
+ * (IOBUF/UBUF) and data is being transferred out of the buffer described by
+ * the iterator (ie. this is the source).
+ *
+ * The pages are returned with incremented refcounts that the caller must undo
+ * once the transfer is complete, but no additional pins are obtained.
+ *
+ * This is only safe to be used where background IO/DMA is not going to be
+ * modifying the buffer, and so won't cause a problem with CoW on fork.
+ */
+static ssize_t iov_iter_extract_user_pages_and_get(struct iov_iter *i,
+						   struct page ***pages,
+						   size_t maxsize,
+						   unsigned int maxpages,
+						   unsigned int gup_flags,
+						   size_t *offset0,
+						   unsigned int *cleanup_mode)
+{
+	unsigned long addr;
+	size_t offset;
+	int res;
+
+	if (WARN_ON_ONCE(!iov_iter_is_source(i)))
+		return -EFAULT;
+
+	gup_flags |= FOLL_GET;
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = iov_iter_extract_first_user_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = get_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	*cleanup_mode = FOLL_GET;
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get a pin on
+ * each of them.  This should only be used iff the iterator is user-backed
+ * (IOBUF/UBUF) and data is being transferred into the buffer described by the
+ * iterator (ie. this is the destination).
+ *
+ * It does not get refs on the pages, but the pages must be unpinned by the
+ * caller once the transfer is complete.
+ *
+ * This is safe to be used where background IO/DMA *is* going to be modifying
+ * the buffer; using a pin rather than a ref makes sure that CoW happens
+ * correctly in the parent during fork.
+ */
+static ssize_t iov_iter_extract_user_pages_and_pin(struct iov_iter *i,
+						   struct page ***pages,
+						   size_t maxsize,
+						   unsigned int maxpages,
+						   unsigned int gup_flags,
+						   size_t *offset0,
+						   unsigned int *cleanup_mode)
+{
+	unsigned long addr;
+	size_t offset;
+	int res;
+
+	if (WARN_ON_ONCE(!iov_iter_is_dest(i)))
+		return -EFAULT;
+
+	gup_flags |= FOLL_PIN | FOLL_WRITE;
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = first_iovec_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	*cleanup_mode = FOLL_PIN;
+	return maxsize;
+}
+
+static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int gup_flags,
+					   size_t *offset0,
+					   unsigned int *cleanup_mode)
+{
+	if (i->data_source)
+		return iov_iter_extract_user_pages_and_get(i, pages, maxsize,
+							   maxpages, gup_flags,
+							   offset0, cleanup_mode);
+	else
+		return iov_iter_extract_user_pages_and_pin(i, pages, maxsize,
+							   maxpages, gup_flags,
+							   offset0, cleanup_mode);
+}
+
+/**
+ * iov_iter_extract_pages - Extract a list of contiguous pages from an iterator
+ * @i: The iterator to extract from
+ * @pages: Where to return the list of pages
+ * @maxsize: The maximum amount of iterator to extract
+ * @maxpages: The maximum size of the list of pages
+ * @gup_flags: Additional flags when getting pages from a user-backed iterator
+ * @offset0: Where to return the starting offset into (*@pages)[0]
+ * @cleanup_mode: Where to return the cleanup mode
+ *
+ * Extract a list of contiguous pages from the current point of the iterator,
+ * advancing the iterator.
+ * The maximum number of pages and the maximum amount
+ * of page contents can be set.
+ *
+ * If *@pages is NULL, a page list will be allocated to the required size and
+ * *@pages will be set to its base.  If *@pages is not NULL, it will be assumed
+ * that the caller allocated a page list at least @maxpages in size and this
+ * will be filled in.
+ *
+ * Extra refs or pins on the pages may be obtained as follows:
+ *
+ * (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF) and data is to be
+ *     transferred /OUT OF/ the described buffer, refs will be taken on the
+ *     pages, but pins will not be added.  This can be used for DMA from a
+ *     page; it cannot be used for DMA to a page, as it may cause page-COW
+ *     problems in fork.  *@cleanup_mode will be set to FOLL_GET.
+ *
+ * (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF) and data is to be
+ *     transferred /INTO/ the described buffer, pins will be added to the
+ *     pages, but refs will not be taken.  This must be used for DMA to a
+ *     page.  *@cleanup_mode will be set to FOLL_PIN.
+ *
+ * (*) If the iterator is ITER_PIPE, this must describe a destination for the
+ *     data.  Additional pages may be allocated and added to the pipe (which
+ *     will hold the refs), but neither refs nor pins will be obtained for the
+ *     caller.  The caller must hold the pipe lock.  *@cleanup_mode will be
+ *     set to 0.
+ *
+ * (*) If the iterator is ITER_BVEC or ITER_XARRAY, the pages are merely
+ *     listed; no extra refs or pins are obtained.  *@cleanup_mode will be set
+ *     to 0.
+ *
+ * Note also:
+ *
+ * (*) Use with ITER_KVEC is not supported as that may refer to memory that
+ *     doesn't have associated page structs.
+ *
+ * (*) Use with ITER_DISCARD is not supported as that has no content.
+ *
+ * On success, the function sets *@pages to the new pagelist, if allocated,
+ * sets *@offset0 to the offset into the first page, sets *@cleanup_mode to
+ * the cleanup required and returns the amount of buffer space represented by
+ * the page list.
+ *
+ * It may also return -ENOMEM and -EFAULT.
+ */
+ssize_t iov_iter_extract_pages(struct iov_iter *i,
+			       struct page ***pages,
+			       size_t maxsize,
+			       unsigned int maxpages,
+			       unsigned int gup_flags,
+			       size_t *offset0,
+			       unsigned int *cleanup_mode)
+{
+	maxsize = min_t(size_t, min_t(size_t, maxsize, i->count), MAX_RW_COUNT);
+	if (!maxsize)
+		return 0;
+
+	if (likely(user_backed_iter(i)))
+		return iov_iter_extract_user_pages(i, pages, maxsize,
+						   maxpages, gup_flags,
+						   offset0, cleanup_mode);
+	if (iov_iter_is_bvec(i))
+		return iov_iter_extract_bvec_pages(i, pages, maxsize,
+						   maxpages, gup_flags,
+						   offset0, cleanup_mode);
+	if (iov_iter_is_pipe(i))
+		return iov_iter_extract_pipe_pages(i, pages, maxsize,
+						   maxpages, gup_flags,
+						   offset0, cleanup_mode);
+	if (iov_iter_is_xarray(i))
+		return iov_iter_extract_xarray_pages(i, pages, maxsize,
+						     maxpages, gup_flags,
+						     offset0, cleanup_mode);
+	return -EFAULT;
+}
+EXPORT_SYMBOL_GPL(iov_iter_extract_pages);