From patchwork Fri Jul 21 22:39:52 2017
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 9857805
From: Ross Zwisler
To: Andrew Morton, linux-kernel@vger.kernel.org
Subject: [PATCH v4 2/5] dax: relocate some dax functions
Date: Fri, 21 Jul 2017 16:39:52 -0600
Message-Id: <20170721223956.29485-3-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.9.4
In-Reply-To: <20170721223956.29485-1-ross.zwisler@linux.intel.com>
References: <20170721223956.29485-1-ross.zwisler@linux.intel.com>
Cc: Jan Kara, linux-doc@vger.kernel.org, David Airlie, Dave Chinner,
 dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Andreas Dilger,
 Patrik Jakobsson, Christoph Hellwig, linux-samsung-soc@vger.kernel.org,
 Joonyoung Shim, "Darrick J. Wong", Tomi Valkeinen, Kyungmin Park,
 Krzysztof Kozlowski, Ingo Molnar, Ross Zwisler, linux-ext4@vger.kernel.org,
 Matthew Wilcox, linux-arm-msm@vger.kernel.org, Steven Rostedt, Inki Dae,
 linux-nvdimm@lists.01.org, Alexander Viro, Dan Williams,
 linux-arm-kernel@lists.infradead.org, Theodore Ts'o, Jonathan Corbet,
 Seung-Woo Kim, linux-xfs@vger.kernel.org, Rob Clark, Kukjin Kim,
 linux-fsdevel@vger.kernel.org, freedreno@lists.freedesktop.org

dax_load_hole() will soon need to call dax_insert_mapping_entry(), so it
needs to be moved lower in dax.c so the definition exists.

dax_wake_mapping_entry_waiter() will soon be removed from dax.h and be made
static to dax.c, so we need to move its definition above all its callers.

Signed-off-by: Ross Zwisler
Reviewed-by: Jan Kara
---
 fs/dax.c | 138 +++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 69 insertions(+), 69 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index c844a51..779dc5e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -121,6 +121,31 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
 }

 /*
+ * We do not necessarily hold the mapping->tree_lock when we call this
+ * function so it is possible that 'entry' is no longer a valid item in the
+ * radix tree. This is okay because all we really need to do is to find the
+ * correct waitqueue where tasks might be waiting for that old 'entry' and
+ * wake them.
+ */
+void dax_wake_mapping_entry_waiter(struct address_space *mapping,
+		pgoff_t index, void *entry, bool wake_all)
+{
+	struct exceptional_entry_key key;
+	wait_queue_head_t *wq;
+
+	wq = dax_entry_waitqueue(mapping, index, entry, &key);
+
+	/*
+	 * Checking for locked entry and prepare_to_wait_exclusive() happens
+	 * under mapping->tree_lock, ditto for entry handling in our callers.
+	 * So at this point all tasks that could have seen our entry locked
+	 * must be in the waitqueue and the following check will see them.
+	 */
+	if (waitqueue_active(wq))
+		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
+}
+
+/*
  * Check whether the given slot is locked. The function must be called with
  * mapping->tree_lock held
  */
@@ -392,31 +417,6 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 	return entry;
 }

-/*
- * We do not necessarily hold the mapping->tree_lock when we call this
- * function so it is possible that 'entry' is no longer a valid item in the
- * radix tree. This is okay because all we really need to do is to find the
- * correct waitqueue where tasks might be waiting for that old 'entry' and
- * wake them.
- */
-void dax_wake_mapping_entry_waiter(struct address_space *mapping,
-		pgoff_t index, void *entry, bool wake_all)
-{
-	struct exceptional_entry_key key;
-	wait_queue_head_t *wq;
-
-	wq = dax_entry_waitqueue(mapping, index, entry, &key);
-
-	/*
-	 * Checking for locked entry and prepare_to_wait_exclusive() happens
-	 * under mapping->tree_lock, ditto for entry handling in our callers.
-	 * So at this point all tasks that could have seen our entry locked
-	 * must be in the waitqueue and the following check will see them.
-	 */
-	if (waitqueue_active(wq))
-		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
-}
-
 static int __dax_invalidate_mapping_entry(struct address_space *mapping,
 					  pgoff_t index, bool trunc)
 {
@@ -468,50 +468,6 @@ int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 	return __dax_invalidate_mapping_entry(mapping, index, false);
 }

-/*
- * The user has performed a load from a hole in the file. Allocating
- * a new page in the file would cause excessive storage usage for
- * workloads with sparse files. We allocate a page cache page instead.
- * We'll kick it out of the page cache if it's ever written to,
- * otherwise it will simply fall out of the page cache under memory
- * pressure without ever having been dirtied.
- */
-static int dax_load_hole(struct address_space *mapping, void **entry,
-			 struct vm_fault *vmf)
-{
-	struct inode *inode = mapping->host;
-	struct page *page;
-	int ret;
-
-	/* Hole page already exists? Return it... */
-	if (!radix_tree_exceptional_entry(*entry)) {
-		page = *entry;
-		goto finish_fault;
-	}
-
-	/* This will replace locked radix tree entry with a hole page */
-	page = find_or_create_page(mapping, vmf->pgoff,
-				   vmf->gfp_mask | __GFP_ZERO);
-	if (!page) {
-		ret = VM_FAULT_OOM;
-		goto out;
-	}
-
-finish_fault:
-	vmf->page = page;
-	ret = finish_fault(vmf);
-	vmf->page = NULL;
-	*entry = page;
-	if (!ret) {
-		/* Grab reference for PTE that is now referencing the page */
-		get_page(page);
-		ret = VM_FAULT_NOPAGE;
-	}
-out:
-	trace_dax_load_hole(inode, vmf, ret);
-	return ret;
-}
-
 static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
 			 sector_t sector, size_t size, struct page *to,
 			 unsigned long vaddr)
@@ -938,6 +894,50 @@ int dax_pfn_mkwrite(struct vm_fault *vmf)
 }
 EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);

+/*
+ * The user has performed a load from a hole in the file. Allocating
+ * a new page in the file would cause excessive storage usage for
+ * workloads with sparse files. We allocate a page cache page instead.
+ * We'll kick it out of the page cache if it's ever written to,
+ * otherwise it will simply fall out of the page cache under memory
+ * pressure without ever having been dirtied.
+ */
+static int dax_load_hole(struct address_space *mapping, void **entry,
+			 struct vm_fault *vmf)
+{
+	struct inode *inode = mapping->host;
+	struct page *page;
+	int ret;
+
+	/* Hole page already exists? Return it... */
+	if (!radix_tree_exceptional_entry(*entry)) {
+		page = *entry;
+		goto finish_fault;
+	}
+
+	/* This will replace locked radix tree entry with a hole page */
+	page = find_or_create_page(mapping, vmf->pgoff,
+				   vmf->gfp_mask | __GFP_ZERO);
+	if (!page) {
+		ret = VM_FAULT_OOM;
+		goto out;
+	}
+
+finish_fault:
+	vmf->page = page;
+	ret = finish_fault(vmf);
+	vmf->page = NULL;
+	*entry = page;
+	if (!ret) {
+		/* Grab reference for PTE that is now referencing the page */
+		get_page(page);
+		ret = VM_FAULT_NOPAGE;
+	}
+out:
+	trace_dax_load_hole(inode, vmf, ret);
+	return ret;
+}
+
 static bool dax_range_is_aligned(struct block_device *bdev,
 				 unsigned int offset, unsigned int length)
 {
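
As background on why the definitions are physically moved rather than
forward-declared (illustration only, not part of the patch): within a single C
file a call needs a declaration of the callee in scope, so once dax_load_hole()
starts calling dax_insert_mapping_entry(), the callee's definition has to appear
first unless a forward declaration is added, and reordering the definitions
keeps dax.c free of such declarations. A minimal stand-alone sketch of the
ordering rule, using hypothetical names that are not taken from dax.c:

#include <stdio.h>

/*
 * Definition placed above its caller, mirroring what the patch does for
 * dax_insert_mapping_entry()/dax_wake_mapping_entry_waiter() in dax.c.
 */
static int helper(int x)
{
	return x * 2;
}

static int caller(int x)
{
	/*
	 * If helper() were only defined further down the file, this call
	 * would be an implicit declaration, which modern compilers reject,
	 * unless a forward declaration like "static int helper(int);" were
	 * added above.
	 */
	return helper(x);
}

int main(void)
{
	printf("%d\n", caller(21));
	return 0;
}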