From: Dave Chinner
To: xfs@oss.sgi.com
Cc: jack@suse.cz, linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, kirill.shutemov@linux.intel.com
Subject: [PATCH 5/7] xfs: Don't use unwritten extents for DAX
Date: Thu, 1 Oct 2015 17:46:37 +1000
Message-Id: <1443685599-4843-6-git-send-email-david@fromorbit.com>
In-Reply-To: <1443685599-4843-1-git-send-email-david@fromorbit.com>
References: <1443685599-4843-1-git-send-email-david@fromorbit.com>

From: Dave Chinner

DAX has a page fault serialisation problem with block allocation.
Because it allows concurrent page faults and does not have a page lock
to serialise faults to the same page, it can get two concurrent faults
to the page that race.

When two read faults race, this isn't a huge problem as the data
underlying the page is not changing and so "detect and drop" works just
fine. The issues are to do with write faults.

When two write faults occur, we serialise block allocation in
get_blocks() so only one fault will allocate the extent.
It will, however, be marked as an unwritten extent, and that is where
the problem lies - the DAX fault code cannot differentiate between a
block that was just allocated and a block that was preallocated and
needs zeroing. The result is that both write faults end up zeroing the
block and attempting to convert it back to written.

The problem is that the first fault can zero and convert before the
second fault starts zeroing, resulting in the zeroing for the second
fault overwriting the data that the first fault wrote with zeros. The
second fault then attempts to convert the unwritten extent, which is
then a no-op because it's already written. Data loss occurs as a result
of this race.

Because there is no sane locking construct in the page fault code that
we can use for serialisation across the page faults, we need to ensure
block allocation and zeroing occurs atomically in the filesystem. This
means we can still take concurrent page faults and the only time they
will serialise is in the filesystem mapping/allocation callback. The
page fault code will always see written, initialised extents, so we
will be able to remove the unwritten extent handling from the DAX code
when all filesystems are converted.

Signed-off-by: Dave Chinner
---
 fs/dax.c           |  5 +++++
 fs/xfs/xfs_aops.c  | 13 +++++++++----
 fs/xfs/xfs_iomap.c | 17 ++++++++++++++++-
 3 files changed, 30 insertions(+), 5 deletions(-)
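
As an illustration of the race, here is a rough userspace model - not
kernel code. block[], block_unwritten and write_fault() are invented
stand-ins for the pmem block, the extent state and the two racing
faults. Both "faults" can observe the unwritten state, so both zero the
block, and the second zeroing can land on top of data the first fault
has already written:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

static char block[BLOCK_SIZE];          /* stands in for the pmem block */
static int block_unwritten = 1;         /* stands in for the extent state */

/* Deliberately unsynchronised, like the concurrent DAX write faults. */
static void *write_fault(void *arg)
{
        size_t off = (size_t)arg;

        if (block_unwritten) {
                memset(block, 0, BLOCK_SIZE);   /* zero the "new" block */
                block_unwritten = 0;            /* "convert" to written */
        }
        block[off] = 'X';                       /* the store that faulted */
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, write_fault, (void *)(size_t)0);
        pthread_create(&b, NULL, write_fault, (void *)(size_t)64);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /*
         * If fault B's memset() ran after fault A's store, A's data has
         * been zeroed away - the data loss described above.
         */
        printf("fault A data %s, fault B data %s\n",
               block[0] == 'X' ? "intact" : "lost",
               block[64] == 'X' ? "intact" : "lost");
        return 0;
}

Whether the data survives depends entirely on the interleaving, which
is why the patch below moves the zeroing under the filesystem's
allocation path instead of relying on the fault code.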

diff --git a/fs/dax.c b/fs/dax.c
index 3994a2b..b36d6d2 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -29,6 +29,11 @@
 #include
 #include
 
+/*
+ * dax_clear_blocks() is called from within transaction context from XFS,
+ * and hence this means the stack from this point must follow GFP_NOFS
+ * semantics for all operations.
+ */
 int dax_clear_blocks(struct inode *inode, sector_t block, long size)
 {
        struct block_device *bdev = inode->i_sb->s_bdev;
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index e747d6a..df3dabd 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -1284,15 +1284,12 @@ xfs_map_direct(
 
        trace_xfs_gbmap_direct(XFS_I(inode), offset, size, type, imap);
 
-       /* XXX: preparation for removing unwritten extents in DAX */
-#if 0
        if (dax_fault) {
                ASSERT(type == XFS_IO_OVERWRITE);
                trace_xfs_gbmap_direct_none(XFS_I(inode), offset, size, type,
                                            imap);
                return;
        }
-#endif
 
        if (bh_result->b_private) {
                ioend = bh_result->b_private;
@@ -1420,10 +1417,12 @@ __xfs_get_blocks(
        if (error)
                goto out_unlock;
 
+       /* for DAX, we convert unwritten extents directly */
        if (create &&
            (!nimaps ||
             (imap.br_startblock == HOLESTARTBLOCK ||
-             imap.br_startblock == DELAYSTARTBLOCK))) {
+             imap.br_startblock == DELAYSTARTBLOCK) ||
+            (IS_DAX(inode) && ISUNWRITTEN(&imap)))) {
                if (direct || xfs_get_extsz_hint(ip)) {
                        /*
                         * Drop the ilock in preparation for starting the block
@@ -1468,6 +1467,12 @@ __xfs_get_blocks(
                goto out_unlock;
        }
 
+       if (IS_DAX(inode) && create) {
+               ASSERT(!ISUNWRITTEN(&imap));
+               /* zeroing is not needed at a higher layer */
+               new = 0;
+       }
+
        /* trim mapping down to size requested */
        if (direct || size > (1 << inode->i_blkbits))
                xfs_map_trim_size(inode, iblock, bh_result,
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 1f86033..8b2b4a9 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -131,6 +131,7 @@ xfs_iomap_write_direct(
        uint            qblocks, resblks, resrtextents;
        int             committed;
        int             error;
+       int             bmapi_flags = XFS_BMAPI_PREALLOC;
 
        error = xfs_qm_dqattach(ip, 0);
        if (error)
@@ -196,13 +197,26 @@ xfs_iomap_write_direct(
        xfs_trans_ijoin(tp, ip, 0);
 
        /*
+        * For DAX, we do not allocate unwritten extents, but instead we zero
+        * the block before we commit the transaction. Ideally we'd like to do
+        * this outside the transaction context, but if we commit and then crash
+        * we may not have zeroed the blocks and this will be exposed on
+        * recovery of the allocation. Hence we must zero before commit.
+        * Further, if we are mapping unwritten extents here, we need to zero
+        * and convert them to written so that we don't need an unwritten extent
+        * callback for DAX.
+        */
+       if (IS_DAX(VFS_I(ip)))
+               bmapi_flags = XFS_BMAPI_CONVERT | XFS_BMAPI_ZERO;
+
+       /*
         * From this point onwards we overwrite the imap pointer that the
         * caller gave to us.
         */
        xfs_bmap_init(&free_list, &firstfsb);
        nimaps = 1;
        error = xfs_bmapi_write(tp, ip, offset_fsb, count_fsb,
-                               XFS_BMAPI_PREALLOC, &firstfsb, 0,
+                               bmapi_flags, &firstfsb, 0,
                                imap, &nimaps, &free_list);
        if (error)
                goto out_bmap_cancel;
@@ -213,6 +227,7 @@ xfs_iomap_write_direct(
        error = xfs_bmap_finish(&tp, &free_list, &committed);
        if (error)
                goto out_bmap_cancel;
+
        error = xfs_trans_commit(tp);
        if (error)
                goto out_unlock;
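
For comparison, the same toy model with allocation and zeroing made
atomic in the "filesystem" - a mutex here stands in for the
serialisation the XFS allocation path provides when it zeroes the
blocks (XFS_BMAPI_ZERO) before the extent is exposed to page faults.
The names below are again invented for the sketch and are not kernel
interfaces:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

static char block[BLOCK_SIZE];
static int block_allocated;             /* extent exists and is zeroed */
static pthread_mutex_t fs_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the mapping callback: allocate and zero atomically. */
static void get_block_written(void)
{
        pthread_mutex_lock(&fs_lock);
        if (!block_allocated) {
                memset(block, 0, BLOCK_SIZE);   /* zero before the "commit" */
                block_allocated = 1;
        }
        pthread_mutex_unlock(&fs_lock);
}

static void *write_fault(void *arg)
{
        size_t off = (size_t)arg;

        get_block_written();    /* always sees a written, zeroed block */
        block[off] = 'X';
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, write_fault, (void *)(size_t)0);
        pthread_create(&b, NULL, write_fault, (void *)(size_t)64);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* Neither fault can zero behind the other's store. */
        printf("fault A data %s, fault B data %s\n",
               block[0] == 'X' ? "intact" : "lost",
               block[64] == 'X' ? "intact" : "lost");
        return 0;
}

The faults still run concurrently; they only serialise inside the
mapping/allocation callback, which is the behaviour the commit message
above describes for the real code.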