From patchwork Mon Apr 17 19:11:26 2017
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 9684285
X-Patchwork-Delegate: snitzer@redhat.com
From: Dan Williams <dan.j.williams@intel.com>
To: linux-nvdimm@ml01.01.org
Cc: Jan Kara, Toshi Kani, Matthew Wilcox, linux-kernel@vger.kernel.org,
 Jeff Moyer, linux-block@vger.kernel.org, dm-devel@redhat.com, Al Viro,
 linux-fsdevel@vger.kernel.org, Ross Zwisler, Linus Torvalds, hch@lst.de
Date: Mon, 17 Apr 2017 12:11:26 -0700
Message-ID: <149245628664.10206.1096202287996099788.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <149245612770.10206.15496018295337908594.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <149245612770.10206.15496018295337908594.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f
Subject: [dm-devel] [resend PATCH v2 29/33] uio, libnvdimm, pmem: implement cache bypass for all copy_from_iter() operations
List-Id: device-mapper development

Introduce copy_from_iter_ops() to enable passing custom sub-routines to
iterate_and_advance(). Define pmem operations that guarantee cache
bypass to supplement the existing usage of __copy_from_iter_nocache()
backed by arch_wb_cache_pmem().
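
For example, an implementation that must bypass the cache for every
iterator type can route each segment to its own helper (illustrative
sketch only; the foo_* helpers are hypothetical stand-ins for the pmem
routines added below):

	static size_t foo_copy_from_iter(void *addr, size_t bytes,
			struct iov_iter *i)
	{
		/*
		 * Sketch: foo_* are placeholder callbacks, one per
		 * segment type. iterate_and_advance() dispatches so
		 * that the first callback sees user iovecs, the
		 * second bvec pages, and the third kernel kvecs.
		 */
		return copy_from_iter_ops(addr, bytes, i,
				foo_from_user,	/* iovec */
				foo_from_page,	/* bvec */
				foo_memcpy);	/* kvec */
	}
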
Cc: Jan Kara
Cc: Jeff Moyer
Cc: Christoph Hellwig <hch@lst.de>
Cc: Toshi Kani
Cc: Al Viro
Cc: Matthew Wilcox
Cc: Ross Zwisler
Cc: Linus Torvalds
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/Kconfig |  1 +
 drivers/nvdimm/pmem.c  | 38 +-------------------------------------
 drivers/nvdimm/pmem.h  |  7 +++++++
 drivers/nvdimm/x86.c   | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/uio.h    |  4 ++++
 lib/Kconfig            |  3 +++
 lib/iov_iter.c         | 25 +++++++++++++++++++++++++
 7 files changed, 89 insertions(+), 37 deletions(-)

diff --git a/drivers/nvdimm/Kconfig b/drivers/nvdimm/Kconfig
index 4d45196d6f94..28002298cdc8 100644
--- a/drivers/nvdimm/Kconfig
+++ b/drivers/nvdimm/Kconfig
@@ -38,6 +38,7 @@ config BLK_DEV_PMEM
 
 config ARCH_HAS_PMEM_API
 	depends on X86_64
+	select COPY_FROM_ITER_OPS
 	def_bool y
 
 config ND_BLK
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 329895ca88e1..b000c6db5731 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -223,43 +223,7 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
 static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
 		void *addr, size_t bytes, struct iov_iter *i)
 {
-	size_t len;
-
-	/* TODO: skip the write-back by always using non-temporal stores */
-	len = copy_from_iter_nocache(addr, bytes, i);
-
-	/*
-	 * In the iovec case on x86_64 copy_from_iter_nocache() uses
-	 * non-temporal stores for the bulk of the transfer, but we need
-	 * to manually flush if the transfer is unaligned. A cached
-	 * memory copy is used when destination or size is not naturally
-	 * aligned. That is:
-	 *   - Require 8-byte alignment when size is 8 bytes or larger.
-	 *   - Require 4-byte alignment when size is 4 bytes.
-	 *
-	 * In the non-iovec case the entire destination needs to be
-	 * flushed.
-	 */
-	if (iter_is_iovec(i)) {
-		unsigned long flushed, dest = (unsigned long) addr;
-
-		if (bytes < 8) {
-			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-				arch_wb_cache_pmem(addr, 1);
-		} else {
-			if (!IS_ALIGNED(dest, 8)) {
-				dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
-				arch_wb_cache_pmem(addr, 1);
-			}
-
-			flushed = dest - (unsigned long) addr;
-			if (bytes > flushed && !IS_ALIGNED(bytes - flushed, 8))
-				arch_wb_cache_pmem(addr + bytes - 1, 1);
-		}
-	} else
-		arch_wb_cache_pmem(addr, bytes);
-
-	return len;
+	return arch_copy_from_iter_pmem(addr, bytes, i);
 }
 
 static const struct block_device_operations pmem_fops = {
diff --git a/drivers/nvdimm/pmem.h b/drivers/nvdimm/pmem.h
index 00005900c1b7..574b63fb5376 100644
--- a/drivers/nvdimm/pmem.h
+++ b/drivers/nvdimm/pmem.h
@@ -3,11 +3,13 @@
 #include
 #include
 #include
+#include <linux/uio.h>
 #include
 
 #ifdef CONFIG_ARCH_HAS_PMEM_API
 void arch_wb_cache_pmem(void *addr, size_t size);
 void arch_invalidate_pmem(void *addr, size_t size);
+size_t arch_copy_from_iter_pmem(void *addr, size_t bytes, struct iov_iter *i);
 #else
 static inline void arch_wb_cache_pmem(void *addr, size_t size)
 {
@@ -15,6 +17,11 @@ static inline void arch_wb_cache_pmem(void *addr, size_t size)
 static inline void arch_invalidate_pmem(void *addr, size_t size)
 {
 }
+static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
+		struct iov_iter *i)
+{
+	return copy_from_iter_nocache(addr, bytes, i);
+}
 #endif
 
 /* this definition is in it's own header for tools/testing/nvdimm to consume */
diff --git a/drivers/nvdimm/x86.c b/drivers/nvdimm/x86.c
index d99b452332a9..bc145d760d43 100644
--- a/drivers/nvdimm/x86.c
+++ b/drivers/nvdimm/x86.c
@@ -10,6 +10,9 @@
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  */
+#include <linux/highmem.h>
+#include <linux/uaccess.h>
+#include <linux/uio.h>
 #include
 #include
 #include
@@ -105,3 +108,48 @@ void arch_memcpy_to_pmem(void *_dst, void *_src, unsigned size)
 	}
 }
 EXPORT_SYMBOL_GPL(arch_memcpy_to_pmem);
+
+static int pmem_from_user(void *dst, const void __user *src, unsigned size)
+{
+	unsigned long flushed, dest = (unsigned long) dst;
+	int rc = __copy_from_user_nocache(dst, src, size);
+
+	/*
+	 * On x86_64 __copy_from_user_nocache() uses non-temporal stores
+	 * for the bulk of the transfer, but we need to manually flush
+	 * if the transfer is unaligned. A cached memory copy is used
+	 * when destination or size is not naturally aligned. That is:
+	 *   - Require 8-byte alignment when size is 8 bytes or larger.
+	 *   - Require 4-byte alignment when size is 4 bytes.
+	 */
+	if (size < 8) {
+		if (!IS_ALIGNED(dest, 4) || size != 4)
+			arch_wb_cache_pmem(dst, 1);
+	} else {
+		if (!IS_ALIGNED(dest, 8)) {
+			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
+			arch_wb_cache_pmem(dst, 1);
+		}
+
+		flushed = dest - (unsigned long) dst;
+		if (size > flushed && !IS_ALIGNED(size - flushed, 8))
+			arch_wb_cache_pmem(dst + size - 1, 1);
+	}
+
+	return rc;
+}
+
+static void pmem_from_page(char *to, struct page *page, size_t offset, size_t len)
+{
+	char *from = kmap_atomic(page);
+
+	arch_memcpy_to_pmem(to, from + offset, len);
+	kunmap_atomic(from);
+}
+
+size_t arch_copy_from_iter_pmem(void *addr, size_t bytes, struct iov_iter *i)
+{
+	return copy_from_iter_ops(addr, bytes, i, pmem_from_user, pmem_from_page,
+			arch_memcpy_to_pmem);
+}
+EXPORT_SYMBOL_GPL(arch_copy_from_iter_pmem);
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 804e34c6f981..edb78f3fe2c8 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -91,6 +91,10 @@ size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
 size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i);
 bool copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i);
 size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i);
+size_t copy_from_iter_ops(void *addr, size_t bytes, struct iov_iter *i,
+		int (*user)(void *, const void __user *, unsigned),
+		void (*page)(char *, struct page *, size_t, size_t),
+		void (*copy)(void *, void *, unsigned));
 bool copy_from_iter_full_nocache(void *addr, size_t bytes, struct iov_iter *i);
 size_t iov_iter_zero(size_t bytes, struct iov_iter *);
 unsigned long iov_iter_alignment(const struct iov_iter *i);
diff --git a/lib/Kconfig b/lib/Kconfig
index 0c4aac6ef394..4d8f575e65b3 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -404,6 +404,9 @@ config DMA_VIRT_OPS
 	depends on HAS_DMA && (!64BIT || ARCH_DMA_ADDR_T_64BIT)
 	default n
 
+config COPY_FROM_ITER_OPS
+	bool
+
 config CHECK_SIGNATURE
 	bool
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index e68604ae3ced..85f8021504e3 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -571,6 +571,31 @@ size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 }
 EXPORT_SYMBOL(copy_from_iter);
 
+#ifdef CONFIG_COPY_FROM_ITER_OPS
+size_t copy_from_iter_ops(void *addr, size_t bytes, struct iov_iter *i,
+		int (*user)(void *, const void __user *, unsigned),
+		void (*page)(char *, struct page *, size_t, size_t),
+		void (*copy)(void *, void *, unsigned))
+{
+	char *to = addr;
+
+	if (unlikely(i->type & ITER_PIPE)) {
+		WARN_ON(1);
+		return 0;
+	}
+	iterate_and_advance(i, bytes, v,
+		user((to += v.iov_len) - v.iov_len, v.iov_base,
+		     v.iov_len),
+		page((to += v.bv_len) - v.bv_len, v.bv_page, v.bv_offset,
+		     v.bv_len),
+		copy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
+	)
+
+	return bytes;
+}
+EXPORT_SYMBOL_GPL(copy_from_iter_ops);
+#endif
+
 bool copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i)
 {
 	char *to = addr;
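
To make the manual flush rules in pmem_from_user() concrete, a worked
example (illustrative numbers, assuming a 64-byte clflush size):

	dst = 0x1004, size = 64
	- dest is 4-byte but not 8-byte aligned, so the head of the copy
	  went through cached stores: flush the first cache line and round
	  dest up to 0x1040, giving flushed = 0x1040 - 0x1004 = 60
	- size - flushed = 4, which is not 8-byte aligned, so the tail also
	  went through cached stores: flush the line containing the last
	  byte, dst + 63 = 0x1043
	The 8-byte-aligned middle of the transfer was written with
	non-temporal stores and needs no write-back.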