From patchwork Mon Apr 24 05:49:19 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13221697
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe
Cc: Miklos Szeredi, "Darrick J. Wong", Andrew Morton, David Howells,
    Matthew Wilcox, linux-block@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, ceph-devel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
    cluster-devel@redhat.com, linux-xfs@vger.kernel.org,
    linux-nfs@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 10/17] iomap: use kiocb_write_and_wait and kiocb_invalidate_pages
Date: Mon, 24 Apr 2023 07:49:19 +0200
Message-Id: <20230424054926.26927-11-hch@lst.de>
In-Reply-To: <20230424054926.26927-1-hch@lst.de>
References: <20230424054926.26927-1-hch@lst.de>

Use the common helpers for direct I/O page invalidation instead of
open coding the logic.  This leads to a slight reordering of checks
in __iomap_dio_rw to keep the logic straight.
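For reference, the two helpers encapsulate exactly the page-cache
handling that this patch removes from __iomap_dio_rw below.  The
following is only an illustrative sketch reconstructed from that
removed open-coded logic (the real implementations are the filemap
helpers added earlier in this series, declared in linux/pagemap.h);
it is not part of the patch itself:

	/*
	 * Sketch of kiocb_write_and_wait(): before a direct I/O read,
	 * write back any dirty page-cache pages covering the range, or
	 * for IOCB_NOWAIT just fail with -EAGAIN if that would block.
	 */
	int kiocb_write_and_wait(struct kiocb *iocb, size_t count)
	{
		struct address_space *mapping = iocb->ki_filp->f_mapping;
		loff_t pos = iocb->ki_pos, end = pos + count - 1;

		if (iocb->ki_flags & IOCB_NOWAIT) {
			if (filemap_range_needs_writeback(mapping, pos, end))
				return -EAGAIN;
			return 0;
		}
		return filemap_write_and_wait_range(mapping, pos, end);
	}

	/*
	 * Sketch of kiocb_invalidate_pages(): before a direct I/O write,
	 * flush and then invalidate cached pages over the range so later
	 * buffered reads go to disk; under IOCB_NOWAIT any cached page
	 * means we might block, so return -EAGAIN instead.
	 */
	int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
	{
		struct address_space *mapping = iocb->ki_filp->f_mapping;
		loff_t pos = iocb->ki_pos, end = pos + count - 1;
		int ret;

		if (iocb->ki_flags & IOCB_NOWAIT) {
			if (filemap_range_has_page(mapping, pos, end))
				return -EAGAIN;
		} else {
			ret = filemap_write_and_wait_range(mapping, pos, end);
			if (ret)
				return ret;
		}
		return invalidate_inode_pages2_range(mapping,
				pos >> PAGE_SHIFT, end >> PAGE_SHIFT);
	}

Because both helpers handle IOCB_NOWAIT internally, __iomap_dio_rw can
set IOMAP_NOWAIT once up front and drop the separate per-direction
filemap checks, which is the reordering mentioned above.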
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/iomap/direct-io.c | 55 ++++++++++++++++----------------------------
 1 file changed, 20 insertions(+), 35 deletions(-)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 2bf8d684675615..10a790f568afca 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -470,7 +470,6 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 		const struct iomap_ops *ops, const struct iomap_dio_ops *dops,
 		unsigned int dio_flags, void *private, size_t done_before)
 {
-	struct address_space *mapping = iocb->ki_filp->f_mapping;
 	struct inode *inode = file_inode(iocb->ki_filp);
 	struct iomap_iter iomi = {
 		.inode = inode,
@@ -479,11 +478,11 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 		.flags = IOMAP_DIRECT,
 		.private = private,
 	};
-	loff_t end = iomi.pos + iomi.len - 1, ret = 0;
 	bool wait_for_completion =
 		is_sync_kiocb(iocb) || (dio_flags & IOMAP_DIO_FORCE_WAIT);
 	struct blk_plug plug;
 	struct iomap_dio *dio;
+	loff_t ret = 0;
 
 	if (!iomi.len)
 		return NULL;
@@ -505,31 +504,29 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 	dio->submit.waiter = current;
 	dio->submit.poll_bio = NULL;
 
+	if (iocb->ki_flags & IOCB_NOWAIT)
+		iomi.flags |= IOMAP_NOWAIT;
+
 	if (iov_iter_rw(iter) == READ) {
 		if (iomi.pos >= dio->i_size)
 			goto out_free_dio;
 
-		if (iocb->ki_flags & IOCB_NOWAIT) {
-			if (filemap_range_needs_writeback(mapping, iomi.pos,
-					end)) {
-				ret = -EAGAIN;
-				goto out_free_dio;
-			}
-			iomi.flags |= IOMAP_NOWAIT;
-		}
-
 		if (user_backed_iter(iter))
 			dio->flags |= IOMAP_DIO_DIRTY;
+
+		ret = kiocb_write_and_wait(iocb, iomi.len);
+		if (ret)
+			goto out_free_dio;
 	} else {
 		iomi.flags |= IOMAP_WRITE;
 		dio->flags |= IOMAP_DIO_WRITE;
 
-		if (iocb->ki_flags & IOCB_NOWAIT) {
-			if (filemap_range_has_page(mapping, iomi.pos, end)) {
-				ret = -EAGAIN;
+		if (dio_flags & IOMAP_DIO_OVERWRITE_ONLY) {
+			ret = -EAGAIN;
+			if (iomi.pos >= dio->i_size ||
+			    iomi.pos + iomi.len > dio->i_size)
 				goto out_free_dio;
-			}
-			iomi.flags |= IOMAP_NOWAIT;
+			iomi.flags |= IOMAP_OVERWRITE_ONLY;
 		}
 
 		/* for data sync or sync, we need sync completion processing */
@@ -545,31 +542,19 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 			if (!(iocb->ki_flags & IOCB_SYNC))
 				dio->flags |= IOMAP_DIO_WRITE_FUA;
 		}
-	}
-
-	if (dio_flags & IOMAP_DIO_OVERWRITE_ONLY) {
-		ret = -EAGAIN;
-		if (iomi.pos >= dio->i_size ||
-		    iomi.pos + iomi.len > dio->i_size)
-			goto out_free_dio;
-		iomi.flags |= IOMAP_OVERWRITE_ONLY;
-	}
 
-	ret = filemap_write_and_wait_range(mapping, iomi.pos, end);
-	if (ret)
-		goto out_free_dio;
-
-	if (iov_iter_rw(iter) == WRITE) {
 		/*
 		 * Try to invalidate cache pages for the range we are writing.
 		 * If this invalidation fails, let the caller fall back to
 		 * buffered I/O.
 		 */
-		if (invalidate_inode_pages2_range(mapping,
-				iomi.pos >> PAGE_SHIFT, end >> PAGE_SHIFT)) {
-			trace_iomap_dio_invalidate_fail(inode, iomi.pos,
-					iomi.len);
-			ret = -ENOTBLK;
+		ret = kiocb_invalidate_pages(iocb, iomi.len);
+		if (ret) {
+			if (ret != -EAGAIN) {
+				trace_iomap_dio_invalidate_fail(inode, iomi.pos,
+						iomi.len);
+				ret = -ENOTBLK;
+			}
 			goto out_free_dio;
 		}