From patchwork Thu Apr 25 16:09:12 2019
X-Patchwork-Submitter: Andreas Gruenbacher
X-Patchwork-Id: 10917449
From: Andreas Gruenbacher <agruenba@redhat.com>
To: cluster-devel@redhat.com, Christoph Hellwig
Cc: Bob Peterson, Jan Kara, Dave Chinner, Ross Lagerwall, Mark Syms,
    Edwin Török, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    Andreas Gruenbacher
Subject: [PATCH v3 1/2] iomap: Add a page_prepare callback
Date: Thu, 25 Apr 2019 18:09:12 +0200
Message-Id: <20190425160913.1878-1-agruenba@redhat.com>

Move the page_done callback into a separate iomap_page_ops structure and
add a page_prepare callback to be called before a page is written to.
In gfs2, we'll want to start a transaction in page_prepare and end it in
page_done, and other filesystems that implement data journaling will
require the same kind of mechanism.
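
As an illustration (not part of this patch), here is a rough sketch of how
a data-journaling filesystem such as gfs2 might implement the two hooks.
The helper usage and block-count calculation below are assumptions about
the follow-up gfs2 patch in this series, not something this patch adds:

static int gfs2_iomap_page_prepare(struct inode *inode, loff_t pos,
		unsigned len, struct iomap *iomap)
{
	struct gfs2_sbd *sdp = GFS2_SB(inode);
	unsigned int blocks;

	/* Reserve journal space for the blocks this write may touch. */
	blocks = ((pos & ~PAGE_MASK) + len + (1 << inode->i_blkbits) - 1) >>
			inode->i_blkbits;
	return gfs2_trans_begin(sdp, RES_DINODE + blocks, 0);
}

static void gfs2_iomap_page_done(struct inode *inode, loff_t pos,
		unsigned copied, struct page *page, struct iomap *iomap)
{
	/*
	 * End the transaction started in page_prepare.  The page may be
	 * NULL when iomap_write_begin fails after page_prepare succeeded.
	 */
	gfs2_trans_end(GFS2_SB(inode));
}

static const struct iomap_page_ops gfs2_iomap_page_ops = {
	.page_prepare = gfs2_iomap_page_prepare,
	.page_done    = gfs2_iomap_page_done,
};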
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
 fs/iomap.c            | 22 ++++++++++++++++++----
 include/linux/iomap.h | 18 +++++++++++++-----
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/fs/iomap.c b/fs/iomap.c
index 97cb9d486a7d..667a822ecb7d 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -665,6 +665,7 @@ static int
 iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 		struct page **pagep, struct iomap *iomap)
 {
+	const struct iomap_page_ops *page_ops = iomap->page_ops;
 	pgoff_t index = pos >> PAGE_SHIFT;
 	struct page *page;
 	int status = 0;
@@ -674,9 +675,17 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 	if (fatal_signal_pending(current))
 		return -EINTR;
 
+	if (page_ops) {
+		status = page_ops->page_prepare(inode, pos, len, iomap);
+		if (status)
+			return status;
+	}
+
 	page = grab_cache_page_write_begin(inode->i_mapping, index, flags);
-	if (!page)
-		return -ENOMEM;
+	if (!page) {
+		status = -ENOMEM;
+		goto no_page;
+	}
 
 	if (iomap->type == IOMAP_INLINE)
 		iomap_read_inline_data(inode, page, iomap);
@@ -684,12 +693,16 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 		status = __block_write_begin_int(page, pos, len, NULL, iomap);
 	else
 		status = __iomap_write_begin(inode, pos, len, page, iomap);
+
 	if (unlikely(status)) {
 		unlock_page(page);
 		put_page(page);
 		page = NULL;
 
 		iomap_write_failed(inode, pos, len);
+no_page:
+		if (page_ops)
+			page_ops->page_done(inode, pos, 0, NULL, iomap);
 	}
 
 	*pagep = page;
@@ -769,6 +782,7 @@ static int
 iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
 		unsigned copied, struct page *page, struct iomap *iomap)
 {
+	const struct iomap_page_ops *page_ops = iomap->page_ops;
 	int ret;
 
 	if (iomap->type == IOMAP_INLINE) {
@@ -780,8 +794,8 @@ iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
 		ret = __iomap_write_end(inode, pos, len, copied, page, iomap);
 	}
 
-	if (iomap->page_done)
-		iomap->page_done(inode, pos, copied, page, iomap);
+	if (page_ops)
+		page_ops->page_done(inode, pos, copied, page, iomap);
 
 	if (ret < len)
 		iomap_write_failed(inode, pos, len);
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 0fefb5455bda..fd65f27d300e 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -53,6 +53,8 @@ struct vm_fault;
  */
 #define IOMAP_NULL_ADDR -1ULL	/* addr is not valid */
 
+struct iomap_page_ops;
+
 struct iomap {
 	u64			addr; /* disk offset of mapping, bytes */
 	loff_t			offset;	/* file offset of mapping, bytes */
@@ -63,12 +65,18 @@ struct iomap {
 	struct dax_device	*dax_dev; /* dax_dev for dax operations */
 	void			*inline_data;
 	void			*private; /* filesystem private */
+	const struct iomap_page_ops *page_ops;
+};
 
-	/*
-	 * Called when finished processing a page in the mapping returned in
-	 * this iomap. At least for now this is only supported in the buffered
-	 * write path.
-	 */
+/*
+ * Called before / after processing a page in the mapping returned in this
+ * iomap. At least for now, this is only supported in the buffered write path.
+ * When page_prepare returns 0, page_done is called as well
+ * (possibly with page == NULL).
+ */
+struct iomap_page_ops {
+	int (*page_prepare)(struct inode *inode, loff_t pos, unsigned len,
+			struct iomap *iomap);
 	void (*page_done)(struct inode *inode, loff_t pos, unsigned copied,
 			struct page *page, struct iomap *iomap);
 };
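
For context (also not part of this patch): a filesystem opts in by pointing
iomap->page_ops at its ops table from its ->iomap_begin implementation.
A minimal sketch with a hypothetical "myfs" prefix; since this version calls
both hooks unconditionally whenever page_ops is set, the table must provide
both page_prepare and page_done:

static int myfs_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
		unsigned flags, struct iomap *iomap)
{
	/* ... fill in iomap->addr, iomap->offset, iomap->length, iomap->type ... */

	/* The page_prepare / page_done hooks only apply to buffered writes. */
	if (flags & IOMAP_WRITE)
		iomap->page_ops = &myfs_iomap_page_ops;
	return 0;
}

static const struct iomap_ops myfs_iomap_ops = {
	.iomap_begin = myfs_iomap_begin,
};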