From patchwork Sat Feb 10 00:38:27 2018
X-Patchwork-Submitter: Dave Jiang
X-Patchwork-Id: 10210111
From: Dave Jiang <dave.jiang@intel.com>
To: akpm@linux-foundation.org
Cc: Jane Chu, linux-nvdimm@lists.01.org, Benjamin Herrenschmidt,
    linux-kernel@vger.kernel.org, mhocko@kernel.org, linux-mm@kvack.org,
    Paul Mackerras, Michael Ellerman
Subject: [PATCH v2 1/2] mm, hugetlbfs: introduce ->pagesize() to vm_operations_struct
Date: Fri, 09 Feb 2018 17:38:27 -0700
Message-ID: <151822310695.52376.2707461327280470402.stgit@djiang5-desk3.ch.intel.com>
In-Reply-To: <151822289999.52376.4998780583577188804.stgit@djiang5-desk3.ch.intel.com>
References: <151822289999.52376.4998780583577188804.stgit@djiang5-desk3.ch.intel.com>
User-Agent: StGit/0.17.1-dirty
List-Id: "Linux-nvdimm developer list."

From: Dan Williams

When device-dax is operating in huge-page mode we want it to behave
like hugetlbfs and report the MMU page mapping size that is being
enforced by the vma.

Similar to commit 31383c6865a5 ("mm, hugetlbfs: introduce ->split() to
vm_operations_struct"), it would be messy to teach vma_mmu_pagesize()
about device-dax page mapping sizes in the same (hstate) way that
hugetlbfs communicates this attribute. Instead, these patches introduce
a new ->pagesize() vm operation.
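To make the resulting dispatch concrete, here is a minimal sketch of a
hypothetical consumer (report_mapping_size() is illustrative and not
part of this patch): a hugetlbfs vma now answers through its own
->pagesize() operation, while any vma whose vm_ops lack the hook falls
back to PAGE_SIZE.

	/* Hypothetical consumer, illustration only; needs linux/mm.h
	 * and linux/printk.h. vma_kernel_pagesize() returns
	 * huge_page_size(hstate) for hugetlbfs vmas via ->pagesize(),
	 * and PAGE_SIZE for vmas that do not implement the hook.
	 */
	static void report_mapping_size(struct vm_area_struct *vma)
	{
		unsigned long sz = vma_kernel_pagesize(vma);

		pr_info("vma [%#lx, %#lx) maps at %lu-byte granularity\n",
			vma->vm_start, vma->vm_end, sz);
	}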
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Reported-by: Jane Chu
Signed-off-by: Dan Williams
Signed-off-by: Dave Jiang
---
 arch/powerpc/mm/hugetlbpage.c |    5 +----
 include/linux/mm.h            |    1 +
 mm/hugetlb.c                  |   23 ++++++++++++-----------
 3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index a9b9083c5e49..c6a2e577e842 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -568,10 +568,7 @@ unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
 	if (!radix_enabled())
 		return 1UL << mmu_psize_to_shift(psize);
 #endif
-	if (!is_vm_hugetlb_page(vma))
-		return PAGE_SIZE;
-
-	return huge_page_size(hstate_vma(vma));
+	return vma_kernel_pagesize(vma);
 }
 
 static inline bool is_power_of_4(unsigned long x)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff739cd..37b9aef91ec7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -383,6 +383,7 @@ struct vm_operations_struct {
 	int (*huge_fault)(struct vm_fault *vmf, enum page_entry_size pe_size);
 	void (*map_pages)(struct vm_fault *vmf,
 			pgoff_t start_pgoff, pgoff_t end_pgoff);
+	unsigned long (*pagesize)(struct vm_area_struct * area);
 
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9a334f5fb730..8fa069b5cb4d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -637,14 +637,9 @@ EXPORT_SYMBOL_GPL(linear_hugepage_index);
  */
 unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 {
-	struct hstate *hstate;
-
-	if (!is_vm_hugetlb_page(vma))
-		return PAGE_SIZE;
-
-	hstate = hstate_vma(vma);
-
-	return 1UL << huge_page_shift(hstate);
+	if (vma->vm_ops && vma->vm_ops->pagesize)
+		return vma->vm_ops->pagesize(vma);
+	return PAGE_SIZE;
 }
 EXPORT_SYMBOL_GPL(vma_kernel_pagesize);
 
@@ -654,12 +649,10 @@ EXPORT_SYMBOL_GPL(vma_kernel_pagesize);
  * architectures where it differs, an architecture-specific version of this
  * function is required.
  */
-#ifndef vma_mmu_pagesize
-unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
+__weak unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
 {
 	return vma_kernel_pagesize(vma);
 }
-#endif
 
 /*
  * Flags for MAP_PRIVATE reservations. These are stored in the bottom
@@ -3132,6 +3125,13 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
 	return 0;
 }
 
+static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
+{
+	struct hstate *hstate = hstate_vma(vma);
+
+	return 1UL << huge_page_shift(hstate);
+}
+
 /*
  * We cannot handle pagefaults against hugetlb pages at all. They cause
  * handle_mm_fault() to try to instantiate regular-sized pages in the
@@ -3149,6 +3149,7 @@ const struct vm_operations_struct hugetlb_vm_ops = {
 	.open = hugetlb_vm_op_open,
 	.close = hugetlb_vm_op_close,
 	.split = hugetlb_vm_op_split,
+	.pagesize = hugetlb_vm_op_pagesize,
 };
 
 static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
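For context on where the series is headed, patch 2/2 presumably wires
device-dax up to the new operation. A rough sketch of what such an
implementation could look like, assuming the dev_dax/dax_region layout
in drivers/dax of this era (dev_dax_pagesize() and the ops wiring below
are illustrative, not quoted from that patch):

	/* Sketch only -- not the actual patch 2/2. A device-dax vma
	 * enforces the region alignment as its fault granularity, so
	 * ->pagesize() can simply report that value.
	 */
	static unsigned long dev_dax_pagesize(struct vm_area_struct *vma)
	{
		struct dev_dax *dev_dax = vma->vm_file->private_data;

		return dev_dax->region->align;
	}

	static const struct vm_operations_struct dax_vm_ops = {
		.fault = dev_dax_fault,
		.huge_fault = dev_dax_huge_fault,
		.split = dev_dax_split,
		.pagesize = dev_dax_pagesize,	/* hook added by this patch */
	};

With something like this in place, vma_kernel_pagesize() would report
the configured device-dax alignment (e.g. 2M or 1G) the same way it now
reports huge_page_size() for hugetlbfs.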