From patchwork Wed Apr 17 01:27:14 2019
X-Patchwork-Submitter: Ruan Shiyang
X-Patchwork-Id: 10904303
From: Shiyang Ruan <ruansy.fnst@cn.fujitsu.com>
Subject: [RFC PATCH 3/4] fs/dax: copy source blocks before writing when COW
Date: Wed, 17 Apr 2019 09:27:14 +0800
Message-ID: <20190417012715.8287-4-ruansy.fnst@cn.fujitsu.com>
In-Reply-To: <20190417012715.8287-1-ruansy.fnst@cn.fujitsu.com>
References: <20190417012715.8287-1-ruansy.fnst@cn.fujitsu.com>
List-Id: "Linux-nvdimm developer list."

The actor functions get the source blocks' start address from iomap->src_addr, and copy those blocks into the newly allocated blocks before writing the user data.
Signed-off-by: Shiyang Ruan <ruansy.fnst@cn.fujitsu.com>
cc: linux-nvdimm@lists.01.org
---
 fs/dax.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/fs/dax.c b/fs/dax.c
index ca0671d55aa6..28519bdecf7c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -982,6 +982,11 @@ static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
 	return (iomap->addr + (pos & PAGE_MASK) - iomap->offset) >> 9;
 }
 
+static sector_t dax_iomap_src_sector(struct iomap *iomap, loff_t pos)
+{
+	return (iomap->src_addr + (pos & PAGE_MASK) - iomap->offset) >> 9;
+}
+
 static int dax_iomap_pfn(struct iomap *iomap, loff_t pos, size_t size,
 			 pfn_t *pfnp)
 {
@@ -1014,6 +1019,51 @@ static int dax_iomap_pfn(struct iomap *iomap, loff_t pos, size_t size,
 	return rc;
 }
 
+static int dax_iomap_addr(struct iomap *iomap, sector_t sector, size_t size,
+			  void **kaddr)
+{
+	pgoff_t pgoff;
+	int id, rc;
+	long length;
+
+	rc = bdev_dax_pgoff(iomap->bdev, sector, size, &pgoff);
+	if (rc)
+		return rc;
+
+	id = dax_read_lock();
+	length = dax_direct_access(iomap->dax_dev, pgoff, PHYS_PFN(size),
+				   kaddr, NULL);
+	if (length < 0)
+		rc = length;
+	if (!*kaddr)
+		rc = -EFAULT;
+
+	dax_read_unlock(id);
+	return rc;
+}
+
+static int dax_iomap_cow_copy(struct iomap *iomap, loff_t pos, size_t size)
+{
+	void *kaddr = 0, *src_kaddr = 0;
+	int error = 0;
+	const sector_t src_sector = dax_iomap_src_sector(iomap, pos);
+	const sector_t sector = dax_iomap_sector(iomap, pos);
+
+	error = dax_iomap_addr(iomap, src_sector, size, &src_kaddr);
+	if (error < 0)
+		return error;
+	error = dax_iomap_addr(iomap, sector, size, &kaddr);
+	if (error < 0)
+		return error;
+
+	/*
+	 * Copy data from source blocks to the new allocated blocks before
+	 * writing user data.
+	 */
+	memcpy(kaddr, src_kaddr, size);
+	return 0;
+}
+
 /*
  * The user has performed a load from a hole in the file.  Allocating a new
  * page in the file would cause excessive storage usage for workloads with
@@ -1149,6 +1199,12 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		if (map_len > end - pos)
 			map_len = end - pos;
 
+		if (iomap->src_addr) {
+			ret = dax_iomap_cow_copy(iomap, pos, size);
+			if (ret < 0)
+				break;
+		}
+
 		/*
 		 * The userspace address for the memory copy has already been
 		 * validated via access_ok() in either vfs_read() or
@@ -1336,6 +1392,7 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 		count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
 		major = VM_FAULT_MAJOR;
 	}
+
 	error = dax_iomap_pfn(&iomap, pos, PAGE_SIZE, &pfn);
 	if (error < 0)
 		goto error_finish_iomap;
@@ -1358,6 +1415,13 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 		ret = VM_FAULT_NEEDDSYNC | major;
 		goto finish_iomap;
 	}
+
+	if (iomap.src_addr) {
+		error = dax_iomap_cow_copy(&iomap, pos, PAGE_SIZE);
+		if (error < 0)
+			goto error_finish_iomap;
+	}
+
 	trace_dax_insert_mapping(inode, vmf, entry);
 	if (write)
 		ret = vmf_insert_mixed_mkwrite(vma, vaddr, pfn);
@@ -1559,6 +1623,12 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 		goto finish_iomap;
 	}
 
+	if (iomap.src_addr) {
+		error = dax_iomap_cow_copy(&iomap, pos, PMD_SIZE);
+		if (error < 0)
+			goto finish_iomap;
+	}
+
 	trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, entry);
 	result = vmf_insert_pfn_pmd(vma, vmf->address, vmf->pmd, pfn, write);