From patchwork Thu Dec 3 21:58:42 2015
From: Matthew Wilcox <matthew.r.wilcox@intel.com>
To: Dan Williams
Cc: linux-nvdimm@lists.01.org
Subject: [PATCH 2/2] dax: Fix error returns in PMD fault handler
Date: Thu, 3 Dec 2015 16:58:42 -0500
Message-Id: <1449179922-502-3-git-send-email-matthew.r.wilcox@intel.com>
In-Reply-To: <1449179922-502-1-git-send-email-matthew.r.wilcox@intel.com>
References: <1449179922-502-1-git-send-email-matthew.r.wilcox@intel.com>

When we need to bail out at these points, we do not hold the mutex, so
jumping to the error path, which will drop the mutex, is the wrong
approach.  Just return VM_FAULT_FALLBACK instead.  This mostly reverts
commit fd6b9f6393.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
---
 fs/dax.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 727af65..71ba5f7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -581,28 +581,22 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	/* Fall back to PTEs if we're going to COW */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
 		split_huge_page_pmd(vma, address, pmd);
-		reason = "cow write";
 		return VM_FAULT_FALLBACK;
 	}
+
 	/* If the PMD would extend outside the VMA */
-	if (pmd_addr < vma->vm_start) {
-		reason = "vma start unaligned";
-		goto fallback;
-	}
-	if ((pmd_addr + PMD_SIZE) > vma->vm_end) {
-		reason = "vma end unaligned";
-		goto fallback;
-	}
+	if (pmd_addr < vma->vm_start)
+		return VM_FAULT_FALLBACK;
+	if ((pmd_addr + PMD_SIZE) > vma->vm_end)
+		return VM_FAULT_FALLBACK;
 
 	pgoff = linear_page_index(vma, pmd_addr);
 	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	if (pgoff >= size)
 		return VM_FAULT_SIGBUS;
 	/* If the PMD would cover blocks out of the file */
-	if ((pgoff | PG_PMD_COLOUR) >= size) {
-		reason = "offset + huge page size > file size";
+	if ((pgoff | PG_PMD_COLOUR) >= size)
 		return VM_FAULT_FALLBACK;
-	}
 
 	memset(&bh, 0, sizeof(bh));
 	block = (sector_t)pgoff << (PAGE_SHIFT - blkbits);
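
As an aside for readers unfamiliar with this error path: the rule the patch
restores is that checks made before the mutex is taken must bail out with a
plain return, while only failures that occur after the lock is held may share
the unlocking error label.  The following user-space sketch illustrates that
pattern; it is not the kernel code, and every name in it (do_fault,
FAULT_FALLBACK, the pthread mutex) is invented for the example.

	/*
	 * Minimal sketch of the locking rule: early exits taken before the
	 * lock is acquired return directly, because the shared error label
	 * also drops the lock.
	 */
	#include <pthread.h>
	#include <stdio.h>

	#define FAULT_OK	0
	#define FAULT_FALLBACK	1

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	static int do_fault(unsigned long addr, unsigned long start,
			    unsigned long end)
	{
		int ret = FAULT_OK;

		/*
		 * Early sanity check: the lock is NOT held yet, so bail out
		 * with a plain return.  Jumping to "fallback:" here would
		 * unlock a mutex we never locked (the bug being fixed).
		 */
		if (addr < start || addr >= end)
			return FAULT_FALLBACK;

		pthread_mutex_lock(&lock);

		/*
		 * Failures after this point do hold the lock, so they may
		 * share the unlocking error path.
		 */
		if (addr == start) {	/* stand-in for a later failure */
			ret = FAULT_FALLBACK;
			goto fallback;
		}

		/* ... the successful path would do the real work here ... */

	fallback:
		pthread_mutex_unlock(&lock);
		return ret;
	}

	int main(void)
	{
		printf("early bail-out: %d\n", do_fault(5, 10, 20));
		printf("locked failure: %d\n", do_fault(10, 10, 20));
		printf("success path:   %d\n", do_fault(15, 10, 20));
		return 0;
	}

Structured this way, adding a new early check can never unbalance the lock:
anything before the lock is taken returns directly, and only failures that
already hold it funnel through the unlocking label, which is what the direct
VM_FAULT_FALLBACK returns in the hunk above restore.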