From patchwork Sat Oct 10 00:55:55 2015
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 7364971
Subject: [PATCH v2 06/20] libnvdimm, pfn, pmem: allocate memmap array in
 persistent memory
From: Dan Williams
To: linux-nvdimm@lists.01.org
Cc: Dave Chinner, linux-kernel@vger.kernel.org, hch@lst.de,
 linux-mm@kvack.org, Andrew Morton
Date: Fri, 09 Oct 2015 20:55:55 -0400
Message-ID: <20151010005555.17221.32221.stgit@dwillia2-desk3.jf.intel.com>
In-Reply-To: <20151010005522.17221.87557.stgit@dwillia2-desk3.jf.intel.com>
References: <20151010005522.17221.87557.stgit@dwillia2-desk3.jf.intel.com>
User-Agent: StGit/0.17.1-9-g687f

Use the new vmem_altmap capability to enable the pmem driver to arrange
for a struct page memmap to be established in persistent memory.
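For illustration only (not code from this patch): a minimal sketch of the
pattern this enables, where a driver reserves the head of its resource and
lets devm_memremap_pages() carve the struct page array out of the device
range itself. The helper name and its memmap_bytes parameter are
hypothetical; only the vmem_altmap fields the series relies on (base_pfn,
reserve, free, alloc) are used.

/*
 * Illustrative only -- a hypothetical helper, not taken from this patch.
 * Assumes the usual pmem driver headers (linux/io.h, linux/sizes.h, ...).
 */
static void *map_with_pmem_backed_memmap(struct device *dev,
		struct resource *res, resource_size_t memmap_bytes)
{
	struct vmem_altmap __altmap = {
		/* first pfn of the device range backing the mapping */
		.base_pfn = __phys_to_pfn(res->start),
		/* pfns that must never be handed out (info block, etc.) */
		.reserve = __phys_to_pfn(SZ_8K),
	};

	/* pfns following the reserve that the memmap may be allocated from */
	__altmap.free = __phys_to_pfn(memmap_bytes);
	__altmap.alloc = 0;

	/* arch code draws the struct page array from __altmap.free */
	return devm_memremap_pages(dev, res, &__altmap);
}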
Cc: Christoph Hellwig
Cc: Dave Chinner
Cc: Andrew Morton
Cc: Ross Zwisler
Signed-off-by: Dan Williams
---
 drivers/nvdimm/pfn_devs.c |    3 +--
 drivers/nvdimm/pmem.c     |   19 +++++++++++++++++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index 71805a1aa0f3..a642cfacee07 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -83,8 +83,7 @@ static ssize_t mode_store(struct device *dev,
 
 		if (strncmp(buf, "pmem\n", n) == 0
 				|| strncmp(buf, "pmem", n) == 0) {
-			/* TODO: allocate from PMEM support */
-			rc = -ENOTTY;
+			nd_pfn->mode = PFN_MODE_PMEM;
 		} else if (strncmp(buf, "ram\n", n) == 0
 				|| strncmp(buf, "ram", n) == 0)
 			nd_pfn->mode = PFN_MODE_RAM;
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 3c5b8f585441..bb66158c0505 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -322,12 +322,16 @@ static int nvdimm_namespace_attach_pfn(struct nd_namespace_common *ndns)
 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
 	struct nd_pfn *nd_pfn = to_nd_pfn(ndns->claim);
 	struct device *dev = &nd_pfn->dev;
-	struct vmem_altmap *altmap;
 	struct nd_region *nd_region;
+	struct vmem_altmap *altmap;
 	struct nd_pfn_sb *pfn_sb;
 	struct pmem_device *pmem;
 	phys_addr_t offset;
 	int rc;
+	struct vmem_altmap __altmap = {
+		.base_pfn = __phys_to_pfn(nsio->res.start),
+		.reserve = __phys_to_pfn(SZ_8K),
+	};
 
 	if (!nd_pfn->uuid || !nd_pfn->ndns)
 		return -ENODEV;
@@ -355,6 +359,17 @@ static int nvdimm_namespace_attach_pfn(struct nd_namespace_common *ndns)
 			return -EINVAL;
 		nd_pfn->npfns = le64_to_cpu(pfn_sb->npfns);
 		altmap = NULL;
+	} else if (nd_pfn->mode == PFN_MODE_PMEM) {
+		nd_pfn->npfns = (resource_size(&nsio->res) - offset)
+			/ PAGE_SIZE;
+		if (le64_to_cpu(nd_pfn->pfn_sb->npfns) > nd_pfn->npfns)
+			dev_info(&nd_pfn->dev,
+					"number of pfns truncated from %lld to %ld\n",
+					le64_to_cpu(nd_pfn->pfn_sb->npfns),
+					nd_pfn->npfns);
+		altmap = & __altmap;
+		altmap->free = __phys_to_pfn(offset - SZ_8K);
+		altmap->alloc = 0;
 	} else {
 		rc = -ENXIO;
 		goto err;
@@ -364,7 +379,7 @@ static int nvdimm_namespace_attach_pfn(struct nd_namespace_common *ndns)
 	pmem = dev_get_drvdata(dev);
 	devm_memunmap(dev, (void __force *) pmem->virt_addr);
 	pmem->virt_addr = (void __pmem *) devm_memremap_pages(dev, &nsio->res,
-			NULL);
+			altmap);
 	if (IS_ERR(pmem->virt_addr)) {
 		rc = PTR_ERR(pmem->virt_addr);
 		goto err;
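
For a rough sense of scale, assuming 4K pages and a 64-byte struct page
(neither value comes from this patch), keeping the memmap in the namespace
itself costs a bit over 1.5% of its capacity. A standalone sketch of that
accounting, with all sizes as stated assumptions:

/* Standalone illustration of the PFN_MODE_PMEM space accounting above.
 * All sizes are assumptions, not values taken from the patch.
 * Build with: cc -o altmap-math altmap-math.c */
#include <stdio.h>

int main(void)
{
	unsigned long long res_size = 16ULL << 30;	/* example: 16G namespace */
	unsigned long long page_size = 4096;		/* PAGE_SIZE assumption */
	unsigned long long page_struct = 64;		/* sizeof(struct page) assumption */
	unsigned long long info_block = 8192;		/* the SZ_8K reserve in the patch */

	/* each mappable pfn costs page_size of data plus page_struct of memmap */
	unsigned long long npfns = (res_size - info_block) / (page_size + page_struct);
	unsigned long long offset = info_block + npfns * page_struct;

	printf("npfns = %llu\n", npfns);
	printf("reserved for info block + memmap = %llu bytes (%.2f%% of namespace)\n",
			offset, 100.0 * offset / res_size);
	printf("data capacity left = %llu bytes\n", npfns * page_size);
	return 0;
}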