| Message ID | 1440624859.31365.17.camel@intel.com (mailing list archive) |
|---|---|
| State | Superseded |
This looks fine to me,
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Wed, Aug 26, 2015 at 09:34:20PM +0000, Williams, Dan J wrote:
> On Wed, 2015-08-26 at 14:41 +0200, Christoph Hellwig wrote:
> > I like the intent behind this, but not the implementation.
> >
> > I think the right approach is to keep the defaults in linux/pmem.h
> > and simply not set CONFIG_ARCH_HAS_PMEM_API for x86-32.
>
> Yes, that makes things much cleaner. Revised patch and changelog below:
>
> 8<----
> Subject: x86, pmem: clarify that ARCH_HAS_PMEM_API implies PMEM mapped WB
>
> From: Dan Williams <dan.j.williams@intel.com>
>
> Given that a write-back (WB) mapping plus non-temporal stores is
> expected to be the most efficient way to access PMEM, update the
> definition of ARCH_HAS_PMEM_API to imply arch support for
> WB-mapped-PMEM. This is needed as a pre-requisite for adding PMEM to
> the direct map and mapping it with struct page.
>
> The above clarification for X86_64 means that memcpy_to_pmem() is
> permitted to use the non-temporal arch_memcpy_to_pmem() rather than
> needlessly fall back to default_memcpy_to_pmem() when the pcommit
> instruction is not available. When arch_memcpy_to_pmem() is not
> guaranteed to flush writes out of cache, i.e. on older X86_32
> implementations where non-temporal stores may just dirty cache,
> ARCH_HAS_PMEM_API is simply disabled.
>
> The default fall back for persistent memory handling remains. Namely,
> map it with the WT (write-through) cache-type and hope for the best.
>
> arch_has_pmem_api() is updated to only indicate whether the arch
> provides the proper helpers to meet the minimum "writes are visible
> outside the cache hierarchy after memcpy_to_pmem() + wmb_pmem()". Code
> that cares whether wmb_pmem() actually flushes writes to pmem must now
> call arch_has_wmb_pmem() directly.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Toshi Kani <toshi.kani@hp.com>
> Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
> Cc: Christoph Hellwig <hch@lst.de>
> [hch: set ARCH_HAS_PMEM_API=n on X86_32]
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Yep, this seems like a good change.

Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
On Wed, 2015-08-26 at 21:34 +0000, Williams, Dan J wrote:
> On Wed, 2015-08-26 at 14:41 +0200, Christoph Hellwig wrote:
> > I like the intent behind this, but not the implementation.
> >
> > I think the right approach is to keep the defaults in linux/pmem.h
> > and simply not set CONFIG_ARCH_HAS_PMEM_API for x86-32.
>
> Yes, that makes things much cleaner. Revised patch and changelog below:
>
> 8<----
> Subject: x86, pmem: clarify that ARCH_HAS_PMEM_API implies PMEM mapped WB
[..]

Thanks for making this change! It looks good.

Reviewed-by: Toshi Kani <toshi.kani@hp.com>

I have one minor comment below:

> --- a/arch/x86/include/asm/io.h
> +++ b/arch/x86/include/asm/io.h
> @@ -248,8 +248,6 @@ static inline void flush_write_buffers(void)
>  #endif
>  }
>
> -#define ARCH_MEMREMAP_PMEM MEMREMAP_WB

Should it be better to do:

#else	/* !CONFIG_ARCH_HAS_PMEM_API */
#define ARCH_MEMREMAP_PMEM MEMREMAP_WT

so that you can remove all '#ifdef ARCH_MEMREMAP_PMEM' stuff?

Thanks,
-Toshi
On Fri, Aug 28, 2015 at 2:41 PM, Toshi Kani <toshi.kani@hp.com> wrote:
> On Wed, 2015-08-26 at 21:34 +0000, Williams, Dan J wrote:
[..]
>> -#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
>
> Should it be better to do:
>
> #else	/* !CONFIG_ARCH_HAS_PMEM_API */
> #define ARCH_MEMREMAP_PMEM MEMREMAP_WT
>
> so that you can remove all '#ifdef ARCH_MEMREMAP_PMEM' stuff?

Yeah, that seems like a nice incremental cleanup for memremap_pmem()
to just unconditionally use ARCH_MEMREMAP_PMEM, feel free to send it
along.
On Fri, 2015-08-28 at 14:47 -0700, Dan Williams wrote:
> On Fri, Aug 28, 2015 at 2:41 PM, Toshi Kani <toshi.kani@hp.com> wrote:
> > On Wed, 2015-08-26 at 21:34 +0000, Williams, Dan J wrote:
> [..]
> > > -#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
> >
> > Should it be better to do:
> >
> > #else	/* !CONFIG_ARCH_HAS_PMEM_API */
> > #define ARCH_MEMREMAP_PMEM MEMREMAP_WT
> >
> > so that you can remove all '#ifdef ARCH_MEMREMAP_PMEM' stuff?
>
> Yeah, that seems like a nice incremental cleanup for memremap_pmem()
> to just unconditionally use ARCH_MEMREMAP_PMEM, feel free to send it
> along.

OK. Will do.

Thanks,
-Toshi
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 76c61154ed50..5912859df533 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -27,7 +27,7 @@ config X86
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_HAS_GCOV_PROFILE_ALL
-	select ARCH_HAS_PMEM_API
+	select ARCH_HAS_PMEM_API if X86_64
 	select ARCH_HAS_SG_CHAIN
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index d241fbd5c87b..83ec9b1d77cc 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -248,8 +248,6 @@ static inline void flush_write_buffers(void)
 #endif
 }
 
-#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
-
 #endif /* __KERNEL__ */
 
 extern void native_io_delay(void);
diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
index a3a0df6545ee..5111f1f053a4 100644
--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -19,6 +19,7 @@
 #include <asm/special_insns.h>
 
 #ifdef CONFIG_ARCH_HAS_PMEM_API
+#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
 /**
  * arch_memcpy_to_pmem - copy data to persistent memory
  * @dst: destination buffer for the copy
@@ -141,18 +142,13 @@ static inline void arch_clear_pmem(void __pmem *addr, size_t size)
 		__arch_wb_cache_pmem(vaddr, size);
 }
 
-static inline bool arch_has_wmb_pmem(void)
+static inline bool __arch_has_wmb_pmem(void)
 {
-#ifdef CONFIG_X86_64
 	/*
 	 * We require that wmb() be an 'sfence', that is only guaranteed on
 	 * 64-bit builds
 	 */
 	return static_cpu_has(X86_FEATURE_PCOMMIT);
-#else
-	return false;
-#endif
 }
 #endif /* CONFIG_ARCH_HAS_PMEM_API */
-
 #endif /* __ASM_X86_PMEM_H__ */
diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index 7c2638f914a9..c3fe20635562 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -1364,7 +1364,7 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 		return -ENOMEM;
 	}
 
-	if (!arch_has_pmem_api() && !nfit_blk->nvdimm_flush)
+	if (!arch_has_wmb_pmem() && !nfit_blk->nvdimm_flush)
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 
 	if (mmio->line_size == 0)
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 3b5b9cb758b6..20bf122328da 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -125,7 +125,7 @@ static struct pmem_device *pmem_alloc(struct device *dev,
 	pmem->phys_addr = res->start;
 	pmem->size = resource_size(res);
 
-	if (!arch_has_pmem_api())
+	if (!arch_has_wmb_pmem())
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 
 	if (!devm_request_mem_region(dev, pmem->phys_addr, pmem->size,
diff --git a/include/linux/pmem.h b/include/linux/pmem.h
index a9d84bf335ee..9ec42710315e 100644
--- a/include/linux/pmem.h
+++ b/include/linux/pmem.h
@@ -19,12 +19,12 @@
 #ifdef CONFIG_ARCH_HAS_PMEM_API
 #include <asm/pmem.h>
 #else
-static inline void arch_wmb_pmem(void)
-{
-	BUG();
-}
-
-static inline bool arch_has_wmb_pmem(void)
+/*
+ * These are simply here to enable compilation, all call sites gate
+ * calling these symbols with arch_has_pmem_api() and redirect to the
+ * implementation in asm/pmem.h.
+ */
+static inline bool __arch_has_wmb_pmem(void)
 {
 	return false;
 }
@@ -53,7 +53,6 @@ static inline void arch_clear_pmem(void __pmem *addr, size_t size)
  * implementations for arch_memcpy_to_pmem(), arch_wmb_pmem(),
  * arch_copy_from_iter_pmem(), arch_clear_pmem() and arch_has_wmb_pmem().
  */
-
 static inline void memcpy_from_pmem(void *dst, void __pmem const *src, size_t size)
 {
 	memcpy(dst, (void __force const *) src, size);
@@ -64,8 +63,13 @@ static inline void memunmap_pmem(struct device *dev, void __pmem *addr)
 	devm_memunmap(dev, (void __force *) addr);
 }
 
+static inline bool arch_has_pmem_api(void)
+{
+	return IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API);
+}
+
 /**
- * arch_has_pmem_api - true if wmb_pmem() ensures durability
+ * arch_has_wmb_pmem - true if wmb_pmem() ensures durability
  *
  * For a given cpu implementation within an architecture it is possible
  * that wmb_pmem() resolves to a nop.  In the case this returns
@@ -73,9 +77,9 @@ static inline void memunmap_pmem(struct device *dev, void __pmem *addr)
  * fall back to a different data consistency model, or otherwise notify
  * the user.
  */
-static inline bool arch_has_pmem_api(void)
+static inline bool arch_has_wmb_pmem(void)
 {
-	return IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API) && arch_has_wmb_pmem();
+	return arch_has_pmem_api() && __arch_has_wmb_pmem();
 }
 
@@ -158,8 +162,10 @@ static inline void memcpy_to_pmem(void __pmem *dst, const void *src, size_t n)
  */
 static inline void wmb_pmem(void)
 {
-	if (arch_has_pmem_api())
+	if (arch_has_wmb_pmem())
 		arch_wmb_pmem();
+	else
+		wmb();
 }