From patchwork Fri Jan 22 21:36:13 2016
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 8093271
From: Ross Zwisler
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, linux-nvdimm@lists.01.org, Dave Chinner, Alexander Viro,
    Jan Kara, linux-fsdevel@vger.kernel.org
Subject: [PATCH v3 5/5] dax: fix clearing of holes in __dax_pmd_fault()
Date: Fri, 22 Jan 2016 14:36:13 -0700
Message-Id: <1453498573-6328-6-git-send-email-ross.zwisler@linux.intel.com>
In-Reply-To: <1453498573-6328-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1453498573-6328-1-git-send-email-ross.zwisler@linux.intel.com>

When the user reads from a DAX hole via mmap, we service page faults using
zero-filled page cache pages. These zero pages are also placed into the
address_space radix tree. When we get our first write to that space, we can
allocate a PMD page worth of DAX storage to replace the hole. When this
happens we need to unmap the zero pages and remove them from the radix tree.

Prior to this patch we unmapped *all* storage in the PMD's range, which is
incorrect because on non-allocating page faults it also removed DAX entries.
Instead, keep track of when get_block() actually gives us storage so that we
can be sure to only remove zero pages that were covering holes.
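In other words, the fix is to probe with a non-allocating get_block() first,
allocate (and remember that we allocated) only on a write over a hole, and
perform the zero-page purge only on that allocating path. A minimal userspace
model of that control flow (toy code only; "struct blocks", model_get_block()
and the boolean return value are stand-ins for buffer_head, get_block_t and
the purge step, the real logic is in the diff below):

/*
 * Toy userspace model of the patched __dax_pmd_fault() flow -- not
 * kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct blocks { bool mapped; };	/* models buffer_mapped(&bh) */

/* Stand-in for get_block(): create=false only probes, create=true allocates. */
static void model_get_block(struct blocks *bh, bool hole, bool create)
{
	bh->mapped = !hole || create;
}

/* Returns true when the zero pages covering a former hole must be purged. */
static bool model_pmd_fault(bool hole, bool write)
{
	struct blocks bh;
	bool alloc = false;

	model_get_block(&bh, hole, false);	/* probe without allocating */
	if (!bh.mapped && write) {
		model_get_block(&bh, hole, true);	/* allocate for writes only */
		alloc = true;
	}
	/* Only a fresh allocation invalidates existing zero pages. */
	return alloc;
}

int main(void)
{
	printf("write over hole   -> purge=%d\n", model_pmd_fault(true, true));
	printf("read over hole    -> purge=%d\n", model_pmd_fault(true, false));
	printf("write over blocks -> purge=%d\n", model_pmd_fault(false, true));
	return 0;
}

Running it shows purge=1 only for the write-over-hole case; read faults and
faults over already-mapped blocks leave the radix tree (including any DAX
entries) untouched.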
Signed-off-by: Ross Zwisler
Reported-by: Jan Kara
---
 fs/dax.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a2ed009..206650f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -791,9 +791,9 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	bool write = flags & FAULT_FLAG_WRITE;
 	struct block_device *bdev;
 	pgoff_t size, pgoff;
-	loff_t lstart, lend;
 	sector_t block;
 	int error, result = 0;
+	bool alloc = false;
 
 	/* dax pmd mappings require pfn_t_devmap() */
 	if (!IS_ENABLED(CONFIG_FS_DAX_PMD))
@@ -831,10 +831,17 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 
 	block = (sector_t)pgoff << (PAGE_SHIFT - blkbits);
 
 	bh.b_size = PMD_SIZE;
-	if (get_block(inode, block, &bh, write) != 0)
+
+	if (get_block(inode, block, &bh, 0) != 0)
 		return VM_FAULT_SIGBUS;
+
+	if (!buffer_mapped(&bh) && write) {
+		if (get_block(inode, block, &bh, 1) != 0)
+			return VM_FAULT_SIGBUS;
+		alloc = true;
+	}
+
 	bdev = bh.b_bdev;
-	i_mmap_lock_read(mapping);
 
 	/*
 	 * If the filesystem isn't willing to tell us the length of a hole,
@@ -843,15 +850,20 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	 */
 	if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE) {
 		dax_pmd_dbg(&bh, address, "allocated block too small");
-		goto fallback;
+		return VM_FAULT_FALLBACK;
+	}
+
+	/*
+	 * If we allocated new storage, make sure no process has any
+	 * zero pages covering this hole
+	 */
+	if (alloc) {
+		loff_t lstart = pgoff << PAGE_SHIFT;
+		loff_t lend = lstart + PMD_SIZE - 1; /* inclusive */
+
+		truncate_pagecache_range(inode, lstart, lend);
 	}
 
-	/* make sure no process has any zero pages covering this hole */
-	lstart = pgoff << PAGE_SHIFT;
-	lend = lstart + PMD_SIZE - 1; /* inclusive */
-	i_mmap_unlock_read(mapping);
-	unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
-	truncate_inode_pages_range(mapping, lstart, lend);
 	i_mmap_lock_read(mapping);
 
 	/*
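A note on the helper used above: truncate_pagecache_range() bundles, for our
purposes, the unmap_mapping_range() + truncate_inode_pages_range() pair that
the old code open-coded. A simplified sketch of that behaviour (illustrative
only; the real helper in mm/truncate.c also rounds the unmap window to whole
pages):

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Illustrative sketch only: roughly what truncate_pagecache_range()
 * does here.  The real mm/truncate.c helper additionally handles
 * partial-page alignment of the unmap window.
 */
static void truncate_pagecache_range_sketch(struct inode *inode,
					    loff_t lstart, loff_t lend)
{
	struct address_space *mapping = inode->i_mapping;

	/* Tear down any user mappings of the range (lend is inclusive)... */
	unmap_mapping_range(mapping, lstart, lend - lstart + 1, 0);
	/* ...then evict the cached (zero) pages themselves. */
	truncate_inode_pages_range(mapping, lstart, lend);
}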