From patchwork Fri May 26 11:28:59 2023
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 13256801
From: David Howells <dhowells@redhat.com>
To: Christoph Hellwig, David Hildenbrand, Lorenzo Stoakes
Cc: David Howells, Jens Axboe, Al Viro, Matthew Wilcox, Jan Kara,
    Jeff Layton, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    Christian Brauner, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Andrew Morton
Subject: [PATCH v3 3/3] block: Use iov_iter_extract_pages() and page pinning
 in direct-io.c
Date: Fri, 26 May 2023 12:28:59 +0100
Message-Id: <20230526112859.654506-4-dhowells@redhat.com>
In-Reply-To: <20230526112859.654506-1-dhowells@redhat.com>
References: <20230526112859.654506-1-dhowells@redhat.com>

Change the old block-based direct-I/O code to use iov_iter_extract_pages()
to pin user pages or leave kernel pages unpinned rather than taking refs
when submitting bios.

This makes use of the preceding patches to not take pins on the zero page
(thereby allowing insertion of zero pages in with pinned pages) and to get
additional pins on pages, allowing an extracted page to be used in multiple
bios without having to re-extract it.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Christoph Hellwig
cc: David Hildenbrand
cc: Lorenzo Stoakes
cc: Andrew Morton
cc: Jens Axboe
cc: Al Viro
cc: Matthew Wilcox
cc: Jan Kara
cc: Jeff Layton
cc: Jason Gunthorpe
cc: Logan Gunthorpe
cc: Hillf Danton
cc: Christian Brauner
cc: Linus Torvalds
cc: linux-fsdevel@vger.kernel.org
cc: linux-block@vger.kernel.org
cc: linux-kernel@vger.kernel.org
cc: linux-mm@kvack.org
---
Notes:
    ver #3)
     - Rename need_unpin to is_pinned in struct dio.
     - page_get_additional_pin() was renamed to folio_add_pin().

    ver #2)
     - Need to set BIO_PAGE_PINNED conditionally, not BIO_PAGE_REFFED.
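For reference, the extract/conditionally-unpin pattern that
dio_refill_pages() and dio_cleanup() move to looks roughly like the
sketch below.  This is an illustration only, not part of the patch:
demo_extract() and DEMO_PAGES are made-up names, while
iov_iter_extract_pages(), iov_iter_extract_will_pin() and
unpin_user_pages() are the real APIs the patch uses.

	#include <linux/kernel.h>
	#include <linux/mm.h>
	#include <linux/uio.h>

	#define DEMO_PAGES 64	/* analogous to DIO_PAGES */

	static ssize_t demo_extract(struct iov_iter *iter)
	{
		struct page *pages[DEMO_PAGES];
		struct page **pagep = pages;
		size_t offset0;
		ssize_t bytes;

		/*
		 * User-backed iterators come back with FOLL_PIN pins
		 * taken; kernel-backed (bvec/kvec/xarray) iterators come
		 * back unpinned.
		 */
		bytes = iov_iter_extract_pages(iter, &pagep, LONG_MAX,
					       DEMO_PAGES, 0, &offset0);
		if (bytes <= 0)
			return bytes;

		/* ... add pages[] to bios, starting at offset0 ... */

		/* Cleanup must mirror extraction: unpin only if pinned. */
		if (iov_iter_extract_will_pin(iter)) {
			unsigned int npages =
				DIV_ROUND_UP(offset0 + bytes, PAGE_SIZE);

			unpin_user_pages(pages, npages);
		}
		return bytes;
	}

Because the preceding patches in the series teach the pin/unpin paths to
skip the zero page, ZERO_PAGE(0) can be slotted into the same page array
and passed through the same unpin path without special-casing.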
 fs/direct-io.c | 72 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 43 insertions(+), 29 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index ad20f3428bab..0643f1bb4b59 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -42,8 +42,8 @@
 #include "internal.h"
 
 /*
- * How many user pages to map in one call to get_user_pages(). This determines
- * the size of a structure in the slab cache
+ * How many user pages to map in one call to iov_iter_extract_pages(). This
+ * determines the size of a structure in the slab cache
  */
 #define DIO_PAGES	64
 
@@ -121,12 +121,13 @@ struct dio {
 	struct inode *inode;
 	loff_t i_size;			/* i_size when submitted */
 	dio_iodone_t *end_io;		/* IO completion function */
+	bool is_pinned;			/* T if we have pins on the pages */
 
 	void *private;			/* copy from map_bh.b_private */
 
 	/* BIO completion state */
 	spinlock_t bio_lock;		/* protects BIO fields below */
-	int page_errors;		/* errno from get_user_pages() */
+	int page_errors;		/* err from iov_iter_extract_pages() */
 	int is_async;			/* is IO async ? */
 	bool defer_completion;		/* defer AIO completion to workqueue? */
 	bool should_dirty;		/* if pages should be dirtied */
@@ -165,14 +166,14 @@ static inline unsigned dio_pages_present(struct dio_submit *sdio)
  */
 static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 {
+	struct page **pages = dio->pages;
 	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
 	ssize_t ret;
 
-	ret = iov_iter_get_pages2(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
-				&sdio->from);
+	ret = iov_iter_extract_pages(sdio->iter, &pages, LONG_MAX,
+				     DIO_PAGES, 0, &sdio->from);
 
 	if (ret < 0 && sdio->blocks_available && dio_op == REQ_OP_WRITE) {
-		struct page *page = ZERO_PAGE(0);
 		/*
 		 * A memory fault, but the filesystem has some outstanding
 		 * mapped blocks.  We need to use those blocks up to avoid
@@ -180,8 +181,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 		 */
 		if (dio->page_errors == 0)
 			dio->page_errors = ret;
-		get_page(page);
-		dio->pages[0] = page;
+		dio->pages[0] = ZERO_PAGE(0);
 		sdio->head = 0;
 		sdio->tail = 1;
 		sdio->from = 0;
@@ -201,9 +201,9 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 
 /*
  * Get another userspace page.  Returns an ERR_PTR on error.  Pages are
- * buffered inside the dio so that we can call get_user_pages() against a
- * decent number of pages, less frequently.  To provide nicer use of the
- * L1 cache.
+ * buffered inside the dio so that we can call iov_iter_extract_pages()
+ * against a decent number of pages, less frequently.  To provide nicer use of
+ * the L1 cache.
  */
 static inline struct page *dio_get_page(struct dio *dio,
					struct dio_submit *sdio)
@@ -219,6 +219,18 @@ static inline struct page *dio_get_page(struct dio *dio,
 	return dio->pages[sdio->head];
 }
 
+static void dio_pin_page(struct dio *dio, struct page *page)
+{
+	if (dio->is_pinned)
+		folio_add_pin(page_folio(page));
+}
+
+static void dio_unpin_page(struct dio *dio, struct page *page)
+{
+	if (dio->is_pinned)
+		unpin_user_page(page);
+}
+
 /*
  * dio_complete() - called when all DIO BIO I/O has been completed
  *
@@ -402,8 +414,8 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 		bio->bi_end_io = dio_bio_end_aio;
 	else
 		bio->bi_end_io = dio_bio_end_io;
-	/* for now require references for all pages */
-	bio_set_flag(bio, BIO_PAGE_REFFED);
+	if (dio->is_pinned)
+		bio_set_flag(bio, BIO_PAGE_PINNED);
 	sdio->bio = bio;
 	sdio->logical_offset_in_bio = sdio->cur_page_fs_offset;
 }
@@ -444,8 +456,9 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
  */
 static inline void dio_cleanup(struct dio *dio, struct dio_submit *sdio)
 {
-	while (sdio->head < sdio->tail)
-		put_page(dio->pages[sdio->head++]);
+	if (dio->is_pinned)
+		unpin_user_pages(dio->pages + sdio->head,
+				 sdio->tail - sdio->head);
 }
 
 /*
@@ -676,7 +689,7 @@ static inline int dio_new_bio(struct dio *dio, struct dio_submit *sdio,
  *
  * Return zero on success.  Non-zero means the caller needs to start a new BIO.
  */
-static inline int dio_bio_add_page(struct dio_submit *sdio)
+static inline int dio_bio_add_page(struct dio *dio, struct dio_submit *sdio)
 {
 	int ret;
 
@@ -688,7 +701,7 @@ static inline int dio_bio_add_page(struct dio_submit *sdio)
 		 */
 		if ((sdio->cur_page_len + sdio->cur_page_offset) == PAGE_SIZE)
 			sdio->pages_in_io--;
-		get_page(sdio->cur_page);
+		dio_pin_page(dio, sdio->cur_page);
 		sdio->final_block_in_bio = sdio->cur_page_block +
 			(sdio->cur_page_len >> sdio->blkbits);
 		ret = 0;
@@ -743,11 +756,11 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio,
 			goto out;
 	}
 
-	if (dio_bio_add_page(sdio) != 0) {
+	if (dio_bio_add_page(dio, sdio) != 0) {
 		dio_bio_submit(dio, sdio);
 		ret = dio_new_bio(dio, sdio, sdio->cur_page_block, map_bh);
 		if (ret == 0) {
-			ret = dio_bio_add_page(sdio);
+			ret = dio_bio_add_page(dio, sdio);
 			BUG_ON(ret != 0);
 		}
 	}
@@ -804,13 +817,13 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 	 */
 	if (sdio->cur_page) {
 		ret = dio_send_cur_page(dio, sdio, map_bh);
-		put_page(sdio->cur_page);
+		dio_unpin_page(dio, sdio->cur_page);
 		sdio->cur_page = NULL;
 		if (ret)
 			return ret;
 	}
 
-	get_page(page);		/* It is in dio */
+	dio_pin_page(dio, page);	/* It is in dio */
 	sdio->cur_page = page;
 	sdio->cur_page_offset = offset;
 	sdio->cur_page_len = len;
@@ -825,7 +838,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 		ret = dio_send_cur_page(dio, sdio, map_bh);
 		if (sdio->bio)
 			dio_bio_submit(dio, sdio);
-		put_page(sdio->cur_page);
+		dio_unpin_page(dio, sdio->cur_page);
 		sdio->cur_page = NULL;
 	}
 	return ret;
@@ -926,7 +939,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 
 			ret = get_more_blocks(dio, sdio, map_bh);
 			if (ret) {
-				put_page(page);
+				dio_unpin_page(dio, page);
 				goto out;
 			}
 			if (!buffer_mapped(map_bh))
@@ -971,7 +984,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 
 				/* AKPM: eargh, -ENOTBLK is a hack */
 				if (dio_op == REQ_OP_WRITE) {
-					put_page(page);
+					dio_unpin_page(dio, page);
 					return -ENOTBLK;
 				}
 
@@ -984,7 +997,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 				if (sdio->block_in_file >=
 						i_size_aligned >> blkbits) {
 					/* We hit eof */
-					put_page(page);
+					dio_unpin_page(dio, page);
 					goto out;
 				}
 				zero_user(page, from, 1 << blkbits);
@@ -1024,7 +1037,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 						  sdio->next_block_for_io,
 						  map_bh);
 			if (ret) {
-				put_page(page);
+				dio_unpin_page(dio, page);
 				goto out;
 			}
 			sdio->next_block_for_io += this_chunk_blocks;
@@ -1039,8 +1052,8 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 			break;
 		}
 
-		/* Drop the ref which was taken in get_user_pages() */
-		put_page(page);
+		/* Drop the pin which was taken in get_user_pages() */
+		dio_unpin_page(dio, page);
 	}
 out:
 	return ret;
@@ -1135,6 +1148,7 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
 			/* will be released by direct_io_worker */
 			inode_lock(inode);
 		}
+		dio->is_pinned = iov_iter_extract_will_pin(iter);
 
 		/* Once we sampled i_size check for reads beyond EOF */
 		dio->i_size = i_size_read(inode);
@@ -1259,7 +1273,7 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
 		ret2 = dio_send_cur_page(dio, &sdio, &map_bh);
 		if (retval == 0)
 			retval = ret2;
-		put_page(sdio.cur_page);
+		dio_unpin_page(dio, sdio.cur_page);
 		sdio.cur_page = NULL;
 	}
 	if (sdio.bio)