From patchwork Fri Aug 9 07:45:20 2019
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 11085591
From: "Aneesh Kumar K.V"
To: dan.j.williams@intel.com
Cc: linux-nvdimm@lists.01.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
	"Aneesh Kumar K.V"
Subject: [PATCH v5 4/4] mm/nvdimm: Pick the right alignment default when
	creating dax devices
Date: Fri, 9 Aug 2019 13:15:20 +0530
Message-Id: <20190809074520.27115-5-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190809074520.27115-1-aneesh.kumar@linux.ibm.com>
References: <20190809074520.27115-1-aneesh.kumar@linux.ibm.com>

Allow the architecture to provide the supported alignments, and use hugepage
alignment only if hugepages are actually supported. Right now we depend on
compile-time configs, whereas this patch switches to runtime discovery.

Architectures like ppc64 can have THP enabled in the kernel but still have the
hugepage size disabled by the hypervisor. This change allows us to create dax
devices with PAGE_SIZE alignment in that case. An existing dax namespace with
an alignment larger than PAGE_SIZE will fail to initialize in this specific
case; we still allow fsdax namespace initialization.

With respect to identifying whether to enable hugepage fault for a dax device:
if THP is enabled at compile time, we default to taking the hugepage fault,
and if the dax fault handler finds the fault size > alignment, it retries with
a PAGE_SIZE fault size.

This also addresses the below failure scenario on ppc64:

ndctl create-namespace --mode=devdax | grep align
 "align":16777216,
 "align":16777216

cat /sys/devices/ndbus0/region0/dax0.0/supported_alignments
 65536 16777216

daxio.static-debug -z -o /dev/dax0.0
  Bus error (core dumped)

$ dmesg | tail
 lpar: Failed hash pte insert with error -4
 hash-mmu: mm: Hashing failure ! EA=0x7fff17000000 access=0x8000000000000006 current=daxio
 hash-mmu:     trap=0x300 vsid=0x22cb7a3 ssize=1 base psize=2 psize 10 pte=0xc000000501002b86
 daxio[3860]: bus error (7) at 7fff17000000 nip 7fff973c007c lr 7fff973bff34 code 2 in libpmem.so.1.0.0[7fff973b0000+20000]
 daxio[3860]: code: 792945e4 7d494b78 e95f0098 7d494b78 f93f00a0 4800012c e93f0088 f93f0120
 daxio[3860]: code: e93f00a0 f93f0128 e93f0120 e95f0128 e93f0088 39290008 f93f0110

The failure was due to the guest kernel using the wrong page size. A namespace
created with 16M alignment will appear as below on a config with the 16M page
size disabled:

$ ndctl list -Ni
[
  {
    "dev":"namespace0.1",
    "mode":"fsdax",
    "map":"dev",
    "size":5351931904,
    "uuid":"fc6e9667-461a-4718-82b4-69b24570bddb",
    "align":16777216,
    "blockdev":"pmem0.1",
    "supported_alignments":[
      65536
    ]
  },
  {
    "dev":"namespace0.0",
    "mode":"fsdax",    <==== devdax 16M alignment marked disabled.
    "map":"mem",
    "size":5368709120,
    "uuid":"a4bdf81a-f2ee-4bc6-91db-7b87eddd0484",
    "state":"disabled"
  }
]
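To make the core idea easy to follow without reading the whole diff, here is a
minimal user-space sketch of the runtime check this patch adds. It mirrors the
nd_pfn_supported_alignments()/nd_supported_alignment() pair in the diff, but
the EXAMPLE_* constants and the has_transparent_hugepage() stub are made-up
placeholders for a ppc64 guest whose hypervisor disabled the 16M page size;
this is not kernel code.

#include <stdbool.h>
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE      (64UL << 10)	/* 64K base pages */
#define EXAMPLE_HPAGE_PMD_SIZE (16UL << 20)	/* 16M PMD hugepage */

/* Stub: pretend the hypervisor disabled the 16M hugepage size. */
static bool has_transparent_hugepage(void)
{
	return false;
}

/* Zero-terminated list of alignments the running kernel can map. */
static const unsigned long *nd_pfn_supported_alignments(void)
{
	static unsigned long supported[3];

	supported[0] = EXAMPLE_PAGE_SIZE;
	supported[1] = has_transparent_hugepage() ? EXAMPLE_HPAGE_PMD_SIZE : 0;
	supported[2] = 0;
	return supported;
}

/* A devdax namespace alignment is accepted only if it is in the list. */
static bool nd_supported_alignment(unsigned long align)
{
	const unsigned long *supported = nd_pfn_supported_alignments();
	int i;

	if (align == 0)
		return false;
	for (i = 0; supported[i]; i++)
		if (align == supported[i])
			return true;
	return false;
}

int main(void)
{
	printf("16M alignment supported: %d\n", nd_supported_alignment(16UL << 20));
	printf("64K alignment supported: %d\n", nd_supported_alignment(64UL << 10));
	return 0;
}

With the stub returning false, a 16M alignment is rejected while 64K is
accepted, which is why the 16M devdax namespace above ends up disabled.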
"map":"mem", "size":5368709120, "uuid":"a4bdf81a-f2ee-4bc6-91db-7b87eddd0484", "state":"disabled" } ] Signed-off-by: Aneesh Kumar K.V --- arch/powerpc/include/asm/libnvdimm.h | 9 ++++++++ arch/powerpc/mm/Makefile | 1 + arch/powerpc/mm/nvdimm.c | 34 ++++++++++++++++++++++++++++ arch/x86/include/asm/libnvdimm.h | 19 ++++++++++++++++ drivers/nvdimm/nd.h | 6 ----- drivers/nvdimm/pfn_devs.c | 32 +++++++++++++++++++++++++- include/linux/huge_mm.h | 7 +++++- 7 files changed, 100 insertions(+), 8 deletions(-) create mode 100644 arch/powerpc/include/asm/libnvdimm.h create mode 100644 arch/powerpc/mm/nvdimm.c create mode 100644 arch/x86/include/asm/libnvdimm.h diff --git a/arch/powerpc/include/asm/libnvdimm.h b/arch/powerpc/include/asm/libnvdimm.h new file mode 100644 index 000000000000..d35fd7f48603 --- /dev/null +++ b/arch/powerpc/include/asm/libnvdimm.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_LIBNVDIMM_H +#define _ASM_POWERPC_LIBNVDIMM_H + +#define nd_pfn_supported_alignments nd_pfn_supported_alignments +extern unsigned long *nd_pfn_supported_alignments(void); +extern unsigned long nd_pfn_default_alignment(void); + +#endif diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile index 0f499db315d6..42e4a399ba5d 100644 --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM) += highmem.o obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o obj-$(CONFIG_PPC_PTDUMP) += ptdump/ obj-$(CONFIG_KASAN) += kasan/ +obj-$(CONFIG_NVDIMM_PFN) += nvdimm.o diff --git a/arch/powerpc/mm/nvdimm.c b/arch/powerpc/mm/nvdimm.c new file mode 100644 index 000000000000..a29a4510715e --- /dev/null +++ b/arch/powerpc/mm/nvdimm.c @@ -0,0 +1,34 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include + +#include +/* + * We support only pte and pmd mappings for now. 
diff --git a/arch/powerpc/include/asm/libnvdimm.h b/arch/powerpc/include/asm/libnvdimm.h
new file mode 100644
index 000000000000..d35fd7f48603
--- /dev/null
+++ b/arch/powerpc/include/asm/libnvdimm.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_LIBNVDIMM_H
+#define _ASM_POWERPC_LIBNVDIMM_H
+
+#define nd_pfn_supported_alignments nd_pfn_supported_alignments
+extern const unsigned long *nd_pfn_supported_alignments(void);
+extern unsigned long nd_pfn_default_alignment(void);
+
+#endif
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 0f499db315d6..42e4a399ba5d 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM)		+= highmem.o
 obj-$(CONFIG_PPC_COPRO_BASE)	+= copro_fault.o
 obj-$(CONFIG_PPC_PTDUMP)	+= ptdump/
 obj-$(CONFIG_KASAN)		+= kasan/
+obj-$(CONFIG_NVDIMM_PFN)	+= nvdimm.o
diff --git a/arch/powerpc/mm/nvdimm.c b/arch/powerpc/mm/nvdimm.c
new file mode 100644
index 000000000000..a29a4510715e
--- /dev/null
+++ b/arch/powerpc/mm/nvdimm.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+
+#include
+/*
+ * We support only pte and pmd mappings for now.
+ */
+const unsigned long *nd_pfn_supported_alignments(void)
+{
+	static unsigned long supported_alignments[3];
+
+	supported_alignments[0] = PAGE_SIZE;
+
+	if (has_transparent_hugepage())
+		supported_alignments[1] = HPAGE_PMD_SIZE;
+	else
+		supported_alignments[1] = 0;
+
+	supported_alignments[2] = 0;
+	return supported_alignments;
+}
+
+/*
+ * Use pmd mapping if supported as default alignment
+ */
+unsigned long nd_pfn_default_alignment(void)
+{
+
+	if (has_transparent_hugepage())
+		return HPAGE_PMD_SIZE;
+	return PAGE_SIZE;
+}
diff --git a/arch/x86/include/asm/libnvdimm.h b/arch/x86/include/asm/libnvdimm.h
new file mode 100644
index 000000000000..3d5361db9164
--- /dev/null
+++ b/arch/x86/include/asm/libnvdimm.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_LIBNVDIMM_H
+#define _ASM_X86_LIBNVDIMM_H
+
+static inline unsigned long nd_pfn_default_alignment(void)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	return HPAGE_PMD_SIZE;
+#else
+	return PAGE_SIZE;
+#endif
+}
+
+static inline unsigned long nd_altmap_align_size(unsigned long nd_align)
+{
+	return PMD_SIZE;
+}
+
+#endif
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 1b9955651379..5d095a19ff2c 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -289,12 +289,6 @@ static inline struct device *nd_btt_create(struct nd_region *nd_region)
 
 struct nd_pfn *to_nd_pfn(struct device *dev);
 #if IS_ENABLED(CONFIG_NVDIMM_PFN)
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define PFN_DEFAULT_ALIGNMENT HPAGE_PMD_SIZE
-#else
-#define PFN_DEFAULT_ALIGNMENT PAGE_SIZE
-#endif
-
 int nd_pfn_probe(struct device *dev, struct nd_namespace_common *ndns);
 bool is_nd_pfn(struct device *dev);
 struct device *nd_pfn_create(struct nd_region *nd_region);
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index c1d9be609322..f43d1baa6f33 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include "nd-core.h"
 #include "pfn.h"
 #include "nd.h"
@@ -103,6 +104,8 @@ static ssize_t align_show(struct device *dev,
 	return sprintf(buf, "%ld\n", nd_pfn->align);
 }
 
+#ifndef nd_pfn_supported_alignments
+#define nd_pfn_supported_alignments nd_pfn_supported_alignments
 static const unsigned long *nd_pfn_supported_alignments(void)
 {
 	/*
@@ -125,6 +128,7 @@ static const unsigned long *nd_pfn_supported_alignments(void)
 
 	return data;
 }
+#endif
 
 static ssize_t align_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
@@ -302,7 +306,7 @@ struct device *nd_pfn_devinit(struct nd_pfn *nd_pfn,
 		return NULL;
 
 	nd_pfn->mode = PFN_MODE_NONE;
-	nd_pfn->align = PFN_DEFAULT_ALIGNMENT;
+	nd_pfn->align = nd_pfn_default_alignment();
 	dev = &nd_pfn->dev;
 	device_initialize(&nd_pfn->dev);
 	if (ndns && !__nd_attach_ndns(&nd_pfn->dev, ndns, &nd_pfn->ndns)) {
@@ -412,6 +416,20 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
 	return 0;
 }
 
+static bool nd_supported_alignment(unsigned long align)
+{
+	int i;
+	const unsigned long *supported = nd_pfn_supported_alignments();
+
+	if (align == 0)
+		return false;
+
+	for (i = 0; supported[i]; i++)
+		if (align == supported[i])
+			return true;
+	return false;
+}
+
 /**
  * nd_pfn_validate - read and validate info-block
  * @nd_pfn: fsdax namespace runtime state / properties
@@ -498,6 +516,18 @@ int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig)
 		return -EOPNOTSUPP;
 	}
 
+	/*
+	 * Check whether we support the alignment. For DAX, if the
+	 * superblock alignment is not matching, we won't initialize
+	 * the device.
+	 */
+	if (!nd_supported_alignment(align) &&
+			!memcmp(pfn_sb->signature, DAX_SIG, PFN_SIG_LEN)) {
+		dev_err(&nd_pfn->dev, "init failed, alignment mismatch: "
+				"%ld:%ld\n", nd_pfn->align, align);
+		return -EOPNOTSUPP;
+	}
+
 	if (!nd_pfn->uuid) {
 		/*
 		 * When probing a namepace via nd_pfn_probe() the uuid
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 45ede62aa85b..4fa91f9bd0da 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -108,7 +108,12 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG))
 		return true;
-
+	/*
+	 * For dax, always try the hugepage fault first. If we don't support
+	 * hugepages we will not have enabled namespaces with hugepage
+	 * alignment. This also means we try hugepage faults on devices with a
+	 * smaller alignment; in that case we return VM_FAULT_FALLBACK.
+	 */
 	if (vma_is_dax(vma))
 		return true;