From patchwork Wed Aug 26 21:34:20 2015
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 7080131
From: "Williams, Dan J"
To: "hch@lst.de"
Cc: "linux-nvdimm@lists.01.org", "david@fromorbit.com",
 "linux-kernel@vger.kernel.org", "linux-mm@kvack.org", "mingo@redhat.com",
 "hpa@zytor.com", "tglx@linutronix.de", "mingo@kernel.org"
Subject: Re: [PATCH v2 5/9] x86, pmem: push fallback handling to arch code
Date: Wed, 26 Aug 2015 21:34:20 +0000
Message-ID: <1440624859.31365.17.camel@intel.com>
References: <20150826010220.8851.18077.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20150826012751.8851.78564.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20150826124124.GA7613@lst.de>
In-Reply-To: <20150826124124.GA7613@lst.de>
On Wed, 2015-08-26 at 14:41 +0200, Christoph Hellwig wrote:
> I like the intent behind this, but not the implementation.
>
> I think the right approach is to keep the defaults in linux/pmem.h
> and simply not set CONFIG_ARCH_HAS_PMEM_API for x86-32.

Yes, that makes things much cleaner.  Revised patch and changelog below:

8<----
Subject: x86, pmem: clarify that ARCH_HAS_PMEM_API implies PMEM mapped WB

From: Dan Williams

Given that a write-back (WB) mapping plus non-temporal stores is
expected to be the most efficient way to access PMEM, update the
definition of ARCH_HAS_PMEM_API to imply arch support for
WB-mapped-PMEM.  This is needed as a pre-requisite for adding PMEM to
the direct map and mapping it with struct page.

The above clarification for X86_64 means that memcpy_to_pmem() is
permitted to use the non-temporal arch_memcpy_to_pmem() rather than
needlessly fall back to default_memcpy_to_pmem() when the pcommit
instruction is not available.  When arch_memcpy_to_pmem() is not
guaranteed to flush writes out of cache, i.e. on older X86_32
implementations where non-temporal stores may just dirty cache,
ARCH_HAS_PMEM_API is simply disabled.

The default fall back for persistent memory handling remains.  Namely,
map it with the WT (write-through) cache-type and hope for the best.

arch_has_pmem_api() is updated to only indicate whether the arch
provides the proper helpers to meet the minimum "writes are visible
outside the cache hierarchy after memcpy_to_pmem() + wmb_pmem()".
Code that cares whether wmb_pmem() actually flushes writes to pmem must
now call arch_has_wmb_pmem() directly.
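For illustration only, not part of the patch: a minimal, hypothetical sketch
of the driver-facing pattern that falls out of this split.  The helpers
example_pmem_setup() and example_pmem_write() are invented names.

#include <linux/device.h>
#include <linux/pmem.h>

static void example_pmem_setup(struct device *dev)
{
        /*
         * arch_has_pmem_api() now only says the arch helpers exist;
         * whether wmb_pmem() actually guarantees durability is a
         * separate question, so the warning keys off arch_has_wmb_pmem().
         */
        if (!arch_has_wmb_pmem())
                dev_warn(dev, "unable to guarantee persistence of writes\n");
}

static void example_pmem_write(void __pmem *dst, const void *src, size_t n)
{
        /*
         * Non-temporal arch copy when ARCH_HAS_PMEM_API is set (x86-64),
         * otherwise a plain memcpy into a WT-mapped region.
         */
        memcpy_to_pmem(dst, src, n);

        /* arch barrier (sfence + pcommit) when available, else wmb() */
        wmb_pmem();
}

This mirrors what the nfit and pmem drivers below now do: warn once at setup
time when wmb_pmem() cannot guarantee durability, and use the common copy +
barrier pair on the data path.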
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Toshi Kani
Cc: Ross Zwisler
Cc: Christoph Hellwig
[hch: set ARCH_HAS_PMEM_API=n on X86_32]
Signed-off-by: Dan Williams
Reviewed-by: Christoph Hellwig
Reviewed-by: Ross Zwisler
Reviewed-by: Toshi Kani
---
 arch/x86/Kconfig            |    2 +-
 arch/x86/include/asm/io.h   |    2 --
 arch/x86/include/asm/pmem.h |    8 ++------
 drivers/acpi/nfit.c         |    2 +-
 drivers/nvdimm/pmem.c       |    2 +-
 include/linux/pmem.h        |   28 +++++++++++++++++-----------
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 76c61154ed50..5912859df533 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -27,7 +27,7 @@ config X86
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_HAS_GCOV_PROFILE_ALL
-	select ARCH_HAS_PMEM_API
+	select ARCH_HAS_PMEM_API if X86_64
 	select ARCH_HAS_SG_CHAIN
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index d241fbd5c87b..83ec9b1d77cc 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -248,8 +248,6 @@ static inline void flush_write_buffers(void)
 #endif
 }
 
-#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
-
 #endif /* __KERNEL__ */
 
 extern void native_io_delay(void);
diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
index a3a0df6545ee..5111f1f053a4 100644
--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -19,6 +19,7 @@
 #include <asm/special_insns.h>
 
 #ifdef CONFIG_ARCH_HAS_PMEM_API
+#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
 /**
  * arch_memcpy_to_pmem - copy data to persistent memory
  * @dst: destination buffer for the copy
@@ -141,18 +142,13 @@ static inline void arch_clear_pmem(void __pmem *addr, size_t size)
 	__arch_wb_cache_pmem(vaddr, size);
 }
 
-static inline bool arch_has_wmb_pmem(void)
+static inline bool __arch_has_wmb_pmem(void)
 {
-#ifdef CONFIG_X86_64
 	/*
 	 * We require that wmb() be an 'sfence', that is only guaranteed on
 	 * 64-bit builds
 	 */
 	return static_cpu_has(X86_FEATURE_PCOMMIT);
-#else
-	return false;
-#endif
 }
 #endif /* CONFIG_ARCH_HAS_PMEM_API */
-
 #endif /* __ASM_X86_PMEM_H__ */
diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index 7c2638f914a9..c3fe20635562 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -1364,7 +1364,7 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 		return -ENOMEM;
 	}
 
-	if (!arch_has_pmem_api() && !nfit_blk->nvdimm_flush)
+	if (!arch_has_wmb_pmem() && !nfit_blk->nvdimm_flush)
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 
 	if (mmio->line_size == 0)
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 3b5b9cb758b6..20bf122328da 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -125,7 +125,7 @@ static struct pmem_device *pmem_alloc(struct device *dev,
 	pmem->phys_addr = res->start;
 	pmem->size = resource_size(res);
 
-	if (!arch_has_pmem_api())
+	if (!arch_has_wmb_pmem())
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 
 	if (!devm_request_mem_region(dev, pmem->phys_addr, pmem->size,
diff --git a/include/linux/pmem.h b/include/linux/pmem.h
index a9d84bf335ee..9ec42710315e 100644
--- a/include/linux/pmem.h
+++ b/include/linux/pmem.h
@@ -19,12 +19,12 @@
 #ifdef CONFIG_ARCH_HAS_PMEM_API
 #include <asm/pmem.h>
 #else
-static inline void arch_wmb_pmem(void)
-{
-	BUG();
-}
-
-static inline bool arch_has_wmb_pmem(void)
+/*
+ * These are simply here to enable compilation, all call sites gate
+ * calling these symbols with arch_has_pmem_api() and redirect to the
+ * implementation in asm/pmem.h.
+ */
+static inline bool __arch_has_wmb_pmem(void)
 {
 	return false;
 }
@@ -53,7 +53,6 @@ static inline void arch_clear_pmem(void __pmem *addr, size_t size)
  * implementations for arch_memcpy_to_pmem(), arch_wmb_pmem(),
  * arch_copy_from_iter_pmem(), arch_clear_pmem() and arch_has_wmb_pmem().
  */
-
 static inline void memcpy_from_pmem(void *dst, void __pmem const *src, size_t size)
 {
 	memcpy(dst, (void __force const *) src, size);
@@ -64,8 +63,13 @@ static inline void memunmap_pmem(struct device *dev, void __pmem *addr)
 	devm_memunmap(dev, (void __force *) addr);
 }
 
+static inline bool arch_has_pmem_api(void)
+{
+	return IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API);
+}
+
 /**
- * arch_has_pmem_api - true if wmb_pmem() ensures durability
+ * arch_has_wmb_pmem - true if wmb_pmem() ensures durability
  *
  * For a given cpu implementation within an architecture it is possible
  * that wmb_pmem() resolves to a nop. In the case this returns
@@ -73,9 +77,9 @@ static inline void memunmap_pmem(struct device *dev, void __pmem *addr)
  * fall back to a different data consistency model, or otherwise notify
  * the user.
  */
-static inline bool arch_has_pmem_api(void)
+static inline bool arch_has_wmb_pmem(void)
 {
-	return IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API) && arch_has_wmb_pmem();
+	return arch_has_pmem_api() && __arch_has_wmb_pmem();
 }
 
 /*
@@ -158,8 +162,10 @@ static inline void memcpy_to_pmem(void __pmem *dst, const void *src, size_t n)
  */
 static inline void wmb_pmem(void)
 {
-	if (arch_has_pmem_api())
+	if (arch_has_wmb_pmem())
 		arch_wmb_pmem();
+	else
+		wmb();
 }
 
 /**