From patchwork Sun Nov 8 19:27:39 2015
Subject: [PATCH v4 03/14] dax: use HPAGE_SIZE instead of PMD_SIZE
From: Dan Williams
To: axboe@fb.com
Cc: jack@suse.cz, linux-nvdimm@lists.01.org, Dave Hansen,
 david@fromorbit.com, linux-block@vger.kernel.org, hch@lst.de
Date: Sun, 08 Nov 2015 14:27:39 -0500
Message-ID: <20151108192739.9104.32105.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <20151108192722.9104.86664.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20151108192722.9104.86664.stgit@dwillia2-desk3.amr.corp.intel.com>

As Dave points out, when dealing with the contents of a page we use
PAGE_SIZE and PAGE_SHIFT; similarly, for huge pages use HPAGE_SIZE and
HPAGE_SHIFT.

Reported-by: Dave Hansen
Signed-off-by: Dan Williams
---
 fs/dax.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index f8e543839e5c..149d6000d72a 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -511,7 +511,7 @@ EXPORT_SYMBOL_GPL(dax_fault);
  * The 'colour' (ie low bits) within a PMD of a page offset. This comes up
  * more often than one might expect in the below function.
  */
-#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
+#define PG_PMD_COLOUR	((HPAGE_SIZE >> PAGE_SHIFT) - 1)
 
 int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		pmd_t *pmd, unsigned int flags, get_block_t get_block,
@@ -537,7 +537,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	/* If the PMD would extend outside the VMA */
 	if (pmd_addr < vma->vm_start)
 		return VM_FAULT_FALLBACK;
-	if ((pmd_addr + PMD_SIZE) > vma->vm_end)
+	if ((pmd_addr + HPAGE_SIZE) > vma->vm_end)
 		return VM_FAULT_FALLBACK;
 
 	pgoff = linear_page_index(vma, pmd_addr);
@@ -551,7 +551,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	memset(&bh, 0, sizeof(bh));
 	block = (sector_t)pgoff << (PAGE_SHIFT - blkbits);
 
-	bh.b_size = PMD_SIZE;
+	bh.b_size = HPAGE_SIZE;
 	length = get_block(inode, block, &bh, write);
 	if (length)
 		return VM_FAULT_SIGBUS;
@@ -562,7 +562,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	 * just fall back to PTEs.  Calling get_block 512 times in a loop
 	 * would be silly.
 	 */
-	if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE)
+	if (!buffer_size_valid(&bh) || bh.b_size < HPAGE_SIZE)
 		goto fallback;
 
 	/*
@@ -571,7 +571,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	 */
 	if (buffer_new(&bh)) {
 		i_mmap_unlock_read(mapping);
-		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
+		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, HPAGE_SIZE, 0);
 		i_mmap_lock_read(mapping);
 	}
 
@@ -616,7 +616,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 			result = VM_FAULT_SIGBUS;
 			goto out;
 		}
-		if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR))
+		if ((length < HPAGE_SIZE) || (pfn & PG_PMD_COLOUR))
 			goto fallback;
 
 		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
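
For anyone unfamiliar with the 'colour' idiom used above, here is a minimal
stand-alone sketch of what PG_PMD_COLOUR evaluates to and how the alignment
check uses it. It assumes x86-64 defaults (4 KiB base pages, 2 MiB PMD-sized
huge pages); the pgoff and pfn values are made up for illustration, and this
is plain user-space C, not kernel code.

/*
 * Sketch only: mirrors the PG_PMD_COLOUR definition from the patch,
 * assuming PAGE_SHIFT == 12 and a 2 MiB huge page. Not kernel code.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_SIZE	(1UL << 21)	/* 2 MiB PMD-sized huge page */
/* Low bits of a page offset within one huge page: 511 (0x1ff) here */
#define PG_PMD_COLOUR	((HPAGE_SIZE >> PAGE_SHIFT) - 1)

int main(void)
{
	unsigned long pgoff = 1234;	/* hypothetical file page offset */
	unsigned long pfn = 0x80200;	/* hypothetical page frame number */

	/* Position of this page within its 2 MiB-aligned group of 512 pages */
	printf("colour of pgoff %lu = %lu\n", pgoff, pgoff & PG_PMD_COLOUR);

	/* A PMD mapping is only possible when the pfn is huge-page aligned */
	printf("pfn 0x%lx %s be mapped with a PMD\n", pfn,
			(pfn & PG_PMD_COLOUR) ? "cannot" : "can");
	return 0;
}

On x86-64, HPAGE_SIZE and PMD_SIZE are both 2 MiB, so the substitution in the
patch is a readability cleanup rather than a functional change: the sizing of
the mapping is now expressed with the huge-page constants, matching how
PAGE_SIZE/PAGE_SHIFT are used for base pages.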