From patchwork Mon Apr 29 17:26:35 2019
X-Patchwork-Submitter: Goldwyn Rodrigues
X-Patchwork-Id: 10922209
From: Goldwyn Rodrigues
To: linux-btrfs@vger.kernel.org
Cc: kilobyte@angband.pl, jack@suse.cz, darrick.wong@oracle.com,
	nborisov@suse.com, linux-nvdimm@lists.01.org, david@fromorbit.com,
	dsterba@suse.cz, willy@infradead.org, linux-fsdevel@vger.kernel.org,
	hch@lst.de, Goldwyn Rodrigues
Subject: [PATCH 04/18] dax: Introduce IOMAP_DAX_COW to CoW edges during writes
Date: Mon, 29 Apr 2019 12:26:35 -0500
Message-Id: <20190429172649.8288-5-rgoldwyn@suse.de>
In-Reply-To: <20190429172649.8288-1-rgoldwyn@suse.de>
References: <20190429172649.8288-1-rgoldwyn@suse.de>
X-Mailer: git-send-email 2.16.4

From: Goldwyn Rodrigues

IOMAP_DAX_COW is an iomap type which copies the edges of the data while
performing a write, if the start/end of the write are not page aligned.
The source address is expected in iomap->inline_data.

dax_copy_edges() is a helper function that performs this copy from one
part of the device to another for the data that is not page aligned.
If iomap->inline_data is NULL, the area is zeroed (memset) instead.

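To make the intended usage concrete, here is a minimal, hypothetical
sketch of how a filesystem's ->iomap_begin could hand the old data to
the DAX write path.  my_fs_iomap_begin() and my_fs_map_cow_source()
are made-up names used only for illustration; only iomap->type and
iomap->inline_data come from this patch:

#include <linux/iomap.h>

/*
 * Hypothetical: allocate the new (destination) extent, fill in
 * iomap->addr/offset/length, and return a directly mapped address of
 * the old data, or NULL if there is none.
 */
static void *my_fs_map_cow_source(struct inode *inode, loff_t pos,
				  loff_t length, struct iomap *iomap);

static int my_fs_iomap_begin(struct inode *inode, loff_t pos,
			     loff_t length, unsigned flags,
			     struct iomap *iomap)
{
	void *old_kaddr = my_fs_map_cow_source(inode, pos, length, iomap);

	if (flags & IOMAP_WRITE) {
		iomap->type = IOMAP_DAX_COW;
		/* NULL makes dax_copy_edges() zero the unwritten edges */
		iomap->inline_data = old_kaddr;
	}
	return 0;
}

For example, with PAGE_SIZE = 4096 a 200-byte write at pos = 100 gives
offset = 100, end = 300 and pg_end = 4096, so dax_copy_edges() copies
(or zeroes) bytes [0, 100) and [300, 4096) of the page while the write
itself fills bytes [100, 300).
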
Signed-off-by: Goldwyn Rodrigues
---
 fs/dax.c              | 46 +++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/iomap.h |  1 +
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index e5e54da1715f..610bfa861a28 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1084,6 +1084,42 @@ int __dax_zero_page_range(struct block_device *bdev,
 }
 EXPORT_SYMBOL_GPL(__dax_zero_page_range);
 
+/*
+ * dax_copy_edges - Copies the part of the pages not included in
+ *		    the write, but required for CoW because
+ *		    offset/offset+length are not page aligned.
+ */
+static int dax_copy_edges(struct inode *inode, loff_t pos, loff_t length,
+			  struct iomap *iomap, void *daddr)
+{
+	unsigned offset = pos & (PAGE_SIZE - 1);
+	loff_t end = pos + length;
+	loff_t pg_end = round_up(end, PAGE_SIZE);
+	void *saddr = iomap->inline_data;
+	int ret = 0;
+	/*
+	 * Copy the first part of the page
+	 * Note: we pass offset as length
+	 */
+	if (offset) {
+		if (saddr)
+			ret = memcpy_mcsafe(daddr, saddr, offset);
+		else
+			memset(daddr, 0, offset);
+	}
+
+	/* Copy the last part of the range */
+	if (end < pg_end) {
+		if (saddr)
+			ret = memcpy_mcsafe(daddr + offset + length,
+					saddr + offset + length, pg_end - end);
+		else
+			memset(daddr + offset + length, 0,
+					pg_end - end);
+	}
+	return ret;
+}
+
 static loff_t
 dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		struct iomap *iomap)
@@ -1105,9 +1141,11 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		return iov_iter_zero(min(length, end - pos), iter);
 	}
 
-	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED))
+	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED
+			&& iomap->type != IOMAP_DAX_COW))
 		return -EIO;
 
+
 	/*
 	 * Write can allocate block for an area which has a hole page mapped
 	 * into page tables. We have to tear down these mappings so that data
@@ -1144,6 +1182,12 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 			break;
 		}
 
+		if (iomap->type == IOMAP_DAX_COW) {
+			ret = dax_copy_edges(inode, pos, length, iomap, kaddr);
+			if (ret)
+				break;
+		}
+
 		map_len = PFN_PHYS(map_len);
 		kaddr += offset;
 		map_len -= offset;
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 0fefb5455bda..6e885c5a38a3 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -25,6 +25,7 @@ struct vm_fault;
 #define IOMAP_MAPPED	0x03	/* blocks allocated at @addr */
 #define IOMAP_UNWRITTEN	0x04	/* blocks allocated at @addr in unwritten state */
 #define IOMAP_INLINE	0x05	/* data inline in the inode */
+#define IOMAP_DAX_COW	0x06	/* Copy data pointed by inline_data before write*/
 
 /*
  * Flags for all iomap mappings: