From patchwork Thu Oct 12 15:50:25 2017
X-Patchwork-Submitter: Pankaj Gupta
X-Patchwork-Id: 10002227
From: Pankaj Gupta <pagupta@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, qemu-devel@nongnu.org,
	linux-nvdimm@ml01.01.org, linux-mm@kvack.org
Date: Thu, 12 Oct 2017 21:20:25 +0530
Message-Id: <20171012155027.3277-2-pagupta@redhat.com>
In-Reply-To: <20171012155027.3277-1-pagupta@redhat.com>
References: <20171012155027.3277-1-pagupta@redhat.com>
Subject: [Qemu-devel] [RFC 1/2] pmem: Move reusable code to base header files
Cc: kwolf@redhat.com, haozhong.zhang@intel.com, jack@suse.cz,
	xiaoguangrong.eric@gmail.com, david@redhat.com, pagupta@redhat.com,
	ross.zwisler@intel.com, stefanha@redhat.com, pbonzini@redhat.com,
	dan.j.williams@intel.com, nilal@redhat.com

This patch moves common code into base header files so that it can be
shared by both the ACPI pmem and VIRTIO pmem drivers. More common code
will need to be moved out in the future, depending on the functionality
required by the virtio_pmem driver and on how tightly the code is
coupled to the existing ACPI pmem driver.

Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
 drivers/nvdimm/pfn.h        | 14 ------------
 drivers/nvdimm/pfn_devs.c   | 20 -----------------
 drivers/nvdimm/pmem.c       | 40 ----------------------------------
 drivers/nvdimm/pmem.h       |  5 +----
 include/linux/memremap.h    | 23 ++++++++++++++++++++
 include/linux/pfn.h         | 15 +++++++++++++
 include/linux/pmem_common.h | 52 +++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 91 insertions(+), 78 deletions(-)
 create mode 100644 include/linux/pmem_common.h

diff --git a/drivers/nvdimm/pfn.h b/drivers/nvdimm/pfn.h
index dde9853453d3..1a853f651faf 100644
--- a/drivers/nvdimm/pfn.h
+++ b/drivers/nvdimm/pfn.h
@@ -40,18 +40,4 @@ struct nd_pfn_sb {
 	__le64 checksum;
 };
 
-#ifdef CONFIG_SPARSEMEM
-#define PFN_SECTION_ALIGN_DOWN(x) SECTION_ALIGN_DOWN(x)
-#define PFN_SECTION_ALIGN_UP(x) SECTION_ALIGN_UP(x)
-#else
-/*
- * In this case ZONE_DEVICE=n and we will disable 'pfn' device support,
- * but we still want pmem to compile.
- */
-#define PFN_SECTION_ALIGN_DOWN(x) (x)
-#define PFN_SECTION_ALIGN_UP(x) (x)
-#endif
-
-#define PHYS_SECTION_ALIGN_DOWN(x) PFN_PHYS(PFN_SECTION_ALIGN_DOWN(PHYS_PFN(x)))
-#define PHYS_SECTION_ALIGN_UP(x) PFN_PHYS(PFN_SECTION_ALIGN_UP(PHYS_PFN(x)))
 #endif /* __NVDIMM_PFN_H */
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index 9576c444f0ab..52d6923e92fc 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -513,26 +513,6 @@ int nd_pfn_probe(struct device *dev, struct nd_namespace_common *ndns)
 }
 EXPORT_SYMBOL(nd_pfn_probe);
 
-/*
- * We hotplug memory at section granularity, pad the reserved area from
- * the previous section base to the namespace base address.
- */
-static unsigned long init_altmap_base(resource_size_t base)
-{
-	unsigned long base_pfn = PHYS_PFN(base);
-
-	return PFN_SECTION_ALIGN_DOWN(base_pfn);
-}
-
-static unsigned long init_altmap_reserve(resource_size_t base)
-{
-	unsigned long reserve = PHYS_PFN(SZ_8K);
-	unsigned long base_pfn = PHYS_PFN(base);
-
-	reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
-	return reserve;
-}
-
 static struct vmem_altmap *__nvdimm_setup_pfn(struct nd_pfn *nd_pfn,
 		struct resource *res, struct vmem_altmap *altmap)
 {
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 39dfd7affa31..5075131b715b 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -77,46 +77,6 @@ static blk_status_t pmem_clear_poison(struct pmem_device *pmem,
 	return rc;
 }
 
-static void write_pmem(void *pmem_addr, struct page *page,
-		unsigned int off, unsigned int len)
-{
-	unsigned int chunk;
-	void *mem;
-
-	while (len) {
-		mem = kmap_atomic(page);
-		chunk = min_t(unsigned int, len, PAGE_SIZE);
-		memcpy_flushcache(pmem_addr, mem + off, chunk);
-		kunmap_atomic(mem);
-		len -= chunk;
-		off = 0;
-		page++;
-		pmem_addr += PAGE_SIZE;
-	}
-}
-
-static blk_status_t read_pmem(struct page *page, unsigned int off,
-		void *pmem_addr, unsigned int len)
-{
-	unsigned int chunk;
-	int rc;
-	void *mem;
-
-	while (len) {
-		mem = kmap_atomic(page);
-		chunk = min_t(unsigned int, len, PAGE_SIZE);
-		rc = memcpy_mcsafe(mem + off, pmem_addr, chunk);
-		kunmap_atomic(mem);
-		if (rc)
-			return BLK_STS_IOERR;
-		len -= chunk;
-		off = 0;
-		page++;
-		pmem_addr += PAGE_SIZE;
-	}
-	return BLK_STS_OK;
-}
-
 static blk_status_t pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 			unsigned int len, unsigned int off, bool is_write,
 			sector_t sector)
diff --git a/drivers/nvdimm/pmem.h b/drivers/nvdimm/pmem.h
index c5917f040fa7..8c5620614ec0 100644
--- a/drivers/nvdimm/pmem.h
+++ b/drivers/nvdimm/pmem.h
@@ -1,9 +1,6 @@
 #ifndef __NVDIMM_PMEM_H__
 #define __NVDIMM_PMEM_H__
-#include <linux/badblocks.h>
-#include <linux/types.h>
-#include <linux/pfn_t.h>
-#include <linux/fs.h>
+#include <linux/pmem_common.h>
 
 /* this definition is in it's own header for tools/testing/nvdimm to consume */
 struct pmem_device {
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 79f8ba7c3894..e4eb81020306 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -3,12 +3,35 @@
 #include <linux/mm.h>
 #include <linux/ioport.h>
 #include <linux/percpu-refcount.h>
+#include <linux/pfn.h>
+#include <linux/sizes.h>
 
 #include <asm/pgtable.h>
 
 struct resource;
 struct device;
 
+/*
+ * We hotplug memory at section granularity, pad the reserved area from
+ * the previous section base to the namespace base address.
+ */
+static inline unsigned long init_altmap_base(resource_size_t base)
+{
+	unsigned long base_pfn = PHYS_PFN(base);
+
+	return PFN_SECTION_ALIGN_DOWN(base_pfn);
+}
+
+static inline unsigned long init_altmap_reserve(resource_size_t base)
+{
+	unsigned long reserve = PHYS_PFN(SZ_8K);
+	unsigned long base_pfn = PHYS_PFN(base);
+
+	reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
+	return reserve;
+}
+
+
 /**
  * struct vmem_altmap - pre-allocated storage for vmemmap_populate
  * @base_pfn: base of the entire dev_pagemap mapping
diff --git a/include/linux/pfn.h b/include/linux/pfn.h
index 1132953235c0..2d8f69cc1470 100644
--- a/include/linux/pfn.h
+++ b/include/linux/pfn.h
@@ -20,4 +20,19 @@ typedef struct {
 #define PFN_PHYS(x)	((phys_addr_t)(x) << PAGE_SHIFT)
 #define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))
 
+#ifdef CONFIG_SPARSEMEM
+#define PFN_SECTION_ALIGN_DOWN(x) SECTION_ALIGN_DOWN(x)
+#define PFN_SECTION_ALIGN_UP(x) SECTION_ALIGN_UP(x)
+#else
+/*
+ * In this case ZONE_DEVICE=n and we will disable 'pfn' device support,
+ * but we still want pmem to compile.
+ */
+#define PFN_SECTION_ALIGN_DOWN(x) (x)
+#define PFN_SECTION_ALIGN_UP(x) (x)
+#endif
+
+#define PHYS_SECTION_ALIGN_DOWN(x) PFN_PHYS(PFN_SECTION_ALIGN_DOWN(PHYS_PFN(x)))
+#define PHYS_SECTION_ALIGN_UP(x) PFN_PHYS(PFN_SECTION_ALIGN_UP(PHYS_PFN(x)))
+
 #endif
diff --git a/include/linux/pmem_common.h b/include/linux/pmem_common.h
new file mode 100644
index 000000000000..e2e718c74b3f
--- /dev/null
+++ b/include/linux/pmem_common.h
@@ -0,0 +1,52 @@
+#ifndef __PMEM_COMMON_H__
+#define __PMEM_COMMON_H__
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static void write_pmem(void *pmem_addr, struct page *page,
+		unsigned int off, unsigned int len)
+{
+	void *mem = kmap_atomic(page);
+
+	memcpy_flushcache(pmem_addr, mem + off, len);
+	kunmap_atomic(mem);
+}
+
+static blk_status_t read_pmem(struct page *page, unsigned int off,
+		void *pmem_addr, unsigned int len)
+{
+	int rc;
+	void *mem = kmap_atomic(page);
+
+	rc = memcpy_mcsafe(mem + off, pmem_addr, len);
+	kunmap_atomic(mem);
+	if (rc)
+		return BLK_STS_IOERR;
+	return BLK_STS_OK;
+}
+
+#endif /* __PMEM_COMMON_H__ */
+
+#ifdef CONFIG_ARCH_HAS_PMEM_API
+#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
+void arch_wb_cache_pmem(void *addr, size_t size);
+void arch_invalidate_pmem(void *addr, size_t size);
+#else
+#define ARCH_MEMREMAP_PMEM MEMREMAP_WT
+static inline void arch_wb_cache_pmem(void *addr, size_t size)
+{
+}
+static inline void arch_invalidate_pmem(void *addr, size_t size)
+{
+}
+#endif
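
For illustration only (not part of this patch): once read_pmem() and write_pmem()
live in <linux/pmem_common.h>, a future virtio_pmem request path could reuse them
the same way pmem_do_bvec() in the ACPI pmem driver does. The sketch below is an
assumption about how such a consumer might look; virtio_pmem_do_bvec() and
virt_addr are hypothetical names invented for the example.

/*
 * Hypothetical consumer of the shared helpers (sketch only, not part of
 * this patch): a virtio_pmem bvec handler mirroring pmem_do_bvec().
 */
#include <linux/blk_types.h>
#include <linux/mm.h>
#include <linux/pmem_common.h>

static blk_status_t virtio_pmem_do_bvec(void *virt_addr, struct page *page,
		unsigned int len, unsigned int off, bool is_write,
		sector_t sector)
{
	/* virt_addr: memremap()'d base of the pmem region (assumed) */
	void *pmem_addr = virt_addr + sector * 512;

	if (!is_write)
		/* read_pmem() returns BLK_STS_IOERR on media errors */
		return read_pmem(page, off, pmem_addr, len);

	write_pmem(pmem_addr, page, off, len);
	return BLK_STS_OK;
}

Only the mapping of the region would differ between the ACPI and virtio drivers;
the page-to-media copy and the machine-check-safe read path stay shared.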