From patchwork Thu Apr 14 21:03:24 2016
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 8843031
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, linux-nvdimm@lists.01.org, Dave Chinner,
	Alexander Viro, Jan Kara, linux-fsdevel@vger.kernel.org
Subject: [PATCH] dax: integrate *sync code & multi-order radix tree
Date: Thu, 14 Apr 2016 15:03:24 -0600
Message-Id: <1460667804-21725-1-git-send-email-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.5.5

With the addition of multi-order radix tree support, we can simplify the
DAX *sync PMD support a bit.  Instead of manually checking to see whether
our index is covered by a PMD entry, we can rely on the new radix tree to
return the PMD entry if one is present.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 fs/dax.c | 31 +++++++++++--------------------
 1 file changed, 11 insertions(+), 20 deletions(-)
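The simplification above relies on one property of the multi-order radix
tree: an entry inserted with a non-zero order is returned by lookups at any
index it covers.  What follows is a minimal, illustrative sketch of that
behaviour and is not part of the patch; the tree name, index values, and the
stand-in entry pointer are made up for the example, and it assumes the
__radix_tree_insert()/radix_tree_lookup() interfaces from the multi-order
radix tree series.

/*
 * Illustrative sketch only, not part of the patch.  A single entry of
 * order PMD_SHIFT - PAGE_SHIFT answers lookups at every page index inside
 * that PMD, so no second lookup at a DAX_PMD_INDEX()-style aligned index
 * is needed.  Real callers (like dax_radix_entry()) hold mapping->tree_lock.
 */
#include <linux/radix-tree.h>
#include <linux/mm.h>

static RADIX_TREE(example_tree, GFP_ATOMIC);

static void example_pmd_entry_lookup(void)
{
	unsigned int order = PMD_SHIFT - PAGE_SHIFT;	/* 9 with 4K pages / 2M PMDs */
	pgoff_t pmd_index = 1UL << order;		/* any PMD-aligned index, e.g. 512 */
	void *pmd_entry = (void *)0x2UL;		/* stand-in for RADIX_DAX_ENTRY(sector, true) */
	void *found;

	/* One multi-order insertion covers all 512 page indices of the PMD. */
	if (__radix_tree_insert(&example_tree, pmd_index, order, pmd_entry))
		return;

	/*
	 * A lookup anywhere inside the PMD range returns that same entry,
	 * which is why dax_radix_entry() can drop its extra
	 * radix_tree_lookup(page_tree, pmd_index) fast path.
	 */
	found = radix_tree_lookup(&example_tree, pmd_index + 5);
	WARN_ON(found != pmd_entry);
}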
diff --git a/fs/dax.c b/fs/dax.c
index 735a608..3f87fcc 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -40,6 +40,7 @@
 #define RADIX_DAX_SECTOR(entry) (((unsigned long)entry >> RADIX_DAX_SHIFT))
 #define RADIX_DAX_ENTRY(sector, pmd) ((void *)((unsigned long)sector << \
 		RADIX_DAX_SHIFT | (pmd ? RADIX_DAX_PMD : RADIX_DAX_PTE)))
+#define RADIX_DAX_ORDER(pmd) (pmd ? PMD_SHIFT - PAGE_SHIFT : 0)
 
 static long dax_map_atomic(struct block_device *bdev, struct blk_dax_ctl *dax)
 {
@@ -360,13 +361,11 @@ static int copy_user_bh(struct page *to, struct inode *inode,
 }
 
 #define NO_SECTOR -1
-#define DAX_PMD_INDEX(page_index) (page_index & (PMD_MASK >> PAGE_CACHE_SHIFT))
 
 static int dax_radix_entry(struct address_space *mapping, pgoff_t index,
 		sector_t sector, bool pmd_entry, bool dirty)
 {
 	struct radix_tree_root *page_tree = &mapping->page_tree;
-	pgoff_t pmd_index = DAX_PMD_INDEX(index);
 	int type, error = 0;
 	void *entry;
 
@@ -376,12 +375,6 @@ static int dax_radix_entry(struct address_space *mapping, pgoff_t index,
 
 	spin_lock_irq(&mapping->tree_lock);
 
-	entry = radix_tree_lookup(page_tree, pmd_index);
-	if (entry && RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD) {
-		index = pmd_index;
-		goto dirty;
-	}
-
 	entry = radix_tree_lookup(page_tree, index);
 	if (entry) {
 		type = RADIX_DAX_TYPE(entry);
@@ -418,7 +411,8 @@ static int dax_radix_entry(struct address_space *mapping, pgoff_t index,
 		goto unlock;
 	}
 
-	error = radix_tree_insert(page_tree, index,
+	error = __radix_tree_insert(page_tree, index,
+			RADIX_DAX_ORDER(pmd_entry),
 			RADIX_DAX_ENTRY(sector, pmd_entry));
 	if (error)
 		goto unlock;
@@ -462,6 +456,13 @@ static int dax_writeback_one(struct block_device *bdev,
 		goto unlock;
 	}
 
+	/*
+	 * Even if dax_writeback_mapping_range() was given a wbc->range_start
+	 * in the middle of a PMD, the 'index' we are given will be aligned to
+	 * the start index of the PMD, as will the sector we pull from
+	 * 'entry'.  This allows us to flush for PMD_SIZE and not have to
+	 * worry about partial PMD writebacks.
+	 */
 	dax.sector = RADIX_DAX_SECTOR(entry);
 	dax.size = (type == RADIX_DAX_PMD ? PMD_SIZE : PAGE_SIZE);
 	spin_unlock_irq(&mapping->tree_lock);
@@ -502,12 +503,11 @@ int dax_writeback_mapping_range(struct address_space *mapping,
 		struct block_device *bdev, struct writeback_control *wbc)
 {
 	struct inode *inode = mapping->host;
-	pgoff_t start_index, end_index, pmd_index;
+	pgoff_t start_index, end_index;
 	pgoff_t indices[PAGEVEC_SIZE];
 	struct pagevec pvec;
 	bool done = false;
 	int i, ret = 0;
-	void *entry;
 
 	if (WARN_ON_ONCE(inode->i_blkbits != PAGE_SHIFT))
 		return -EIO;
@@ -517,15 +517,6 @@ int dax_writeback_mapping_range(struct address_space *mapping,
 
 	start_index = wbc->range_start >> PAGE_CACHE_SHIFT;
 	end_index = wbc->range_end >> PAGE_CACHE_SHIFT;
-	pmd_index = DAX_PMD_INDEX(start_index);
-
-	rcu_read_lock();
-	entry = radix_tree_lookup(&mapping->page_tree, pmd_index);
-	rcu_read_unlock();
-
-	/* see if the start of our range is covered by a PMD entry */
-	if (entry && RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
-		start_index = pmd_index;
 
 	tag_pages_for_writeback(mapping, start_index, end_index);
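To make the reasoning in the new dax_writeback_one() comment concrete: the
multi-order tree only holds an entry at the PMD-aligned index, so even when
wbc->range_start falls mid-PMD the tagged lookup hands back that aligned
index, RADIX_DAX_SECTOR(entry) is equally aligned, and the dax.size ==
PMD_SIZE flush always covers a whole PMD.  A small sketch of the arithmetic,
not part of the patch, with made-up index values and assuming 4K pages with
2M PMDs:

/*
 * Illustrative sketch only, not part of the patch: the index arithmetic
 * behind the "no partial PMD writebacks" comment, using made-up numbers
 * (4K pages, 2M PMDs => 512 pages per PMD).
 */
#include <linux/kernel.h>
#include <linux/mm.h>

static void example_pmd_writeback_alignment(void)
{
	unsigned long pages_per_pmd = 1UL << (PMD_SHIFT - PAGE_SHIFT);	/* 512 */
	pgoff_t start_index = 700;	/* wbc->range_start lands mid-way through the 2nd PMD */
	pgoff_t pmd_index = round_down(start_index, pages_per_pmd);

	/*
	 * The only radix tree entry for that PMD lives at index 512, so the
	 * tagged lookup in dax_writeback_mapping_range() yields index 512,
	 * the sector stored in the entry is PMD-aligned too, and flushing
	 * dax.size == PMD_SIZE from there never splits a PMD.
	 */
	WARN_ON(pmd_index != 512);
}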