From patchwork Tue Dec 22 20:14:22 2015
From: Tariq Saeed
Date: Tue, 22 Dec 2015 12:14:22 -0800
To: mfasheh@suse.com, akpm@linux-foundation.org
Cc: "ocfs2-devel@oss.oracle.com"
Message-ID: <5679AF1E.3020304@oracle.com>
In-Reply-To: <1440536130-29622-1-git-send-email-tariq.x.saeed@oracle.com>
References: <1440536130-29622-1-git-send-email-tariq.x.saeed@oracle.com>
Subject: [Ocfs2-devel] [PATCH] NFS hangs in __ocfs2_cluster_lock due to race with ocfs2_unblock_lock

Hi,
Looks like this fell through the cracks. This is a very real bug encountered
by Luminex Software, and they tested the fix.

Regards,
-Tariq

-------- Forwarded Message --------
Subject: [Ocfs2-devel] [PATCH] NFS hangs in __ocfs2_cluster_lock due to race with ocfs2_unblock_lock
Date: Tue, 25 Aug 2015 13:55:30 -0700
From: Tariq Saeed
To: ocfs2-devel@oss.oracle.com
CC: mfasheh@suse.de

Orabug: 20933419

NFS on a 2-node ocfs2 cluster, with each node exporting a directory. The
lock causing the hang is the global bitmap inode lock. Node 1 is the master
and has the lock granted in PR mode; node 2 is on the converting list
(PR -> EX). There are no holders of the lock on the master node, so it
should downconvert to NL and grant EX to node 2, but that does not happen.
BLOCKED and QUEUED are set in the lock resource, and it is on the osb
blocked list. Threads are waiting in __ocfs2_cluster_lock on BLOCKED -- one
thread wants EX, the rest want PR -- so it is as though the downconvert
thread needs to be kicked to complete the conversion.

The hang is caused by an EX request coming into __ocfs2_cluster_lock on the
heels of a PR request, just after the PR request sets BUSY (and drops
l_lock, letting the EX thread in), forcing the incoming EX to wait on BUSY
without doing anything. The PR thread has called ocfs2_dlm_lock, which
converts the node 1 lock from NL -> PR and queues an ast. At this point the
upconvert request (PR -> EX) arrives from node 2 and conflicts with the
node 1 lock now in PR, so the lock resource is put on the dlm thread's
dirty list. After returning from ocfs2_dlm_lock, the PR thread now waits
behind the EX thread on BUSY until awoken by the ast.

The dlm thread then serially runs dlm_shuffle_lists, the ast, and the bast,
in that order. dlm_shuffle_lists queues a bast on behalf of node 2 (which
the dlm thread will run right after the ast). The ast does its part: it
sets UPCONVERT_FINISHING, clears BUSY, and wakes its waiters. Next, the dlm
thread runs the bast, which sets BLOCKED and kicks the downconvert thread.
The downconvert thread runs ocfs2_unblock_lock, but since
UPCONVERT_FINISHING is set, it skips doing anything and requeues.

Inside __ocfs2_cluster_lock, since the EX thread has been waiting on BUSY
ahead of the PR thread, it wakes up first, finds BLOCKED set, and skips
doing anything except clearing UPCONVERT_FINISHING (which was actually
"meant" for the PR thread); this time it waits on BLOCKED. Next, the PR
thread comes out of its wait, but since UPCONVERT_FINISHING is no longer
set, it skips updating l_ro_holders and goes straight to waiting on
BLOCKED. So there we have a hang: threads in __ocfs2_cluster_lock wait on
BLOCKED, with the lock resource on the osb blocked list. Only when the
downconvert thread is awoken will it run ocfs2_unblock_lock and things will
unhang.
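To make the interleaving easier to follow, below is a minimal user-space
model of the flag race. This is an illustrative sketch only, not the kernel
code: the flag values, the waiter function, and the single mutex/condvar
(named l_lock/l_event after their kernel counterparts) are all my own
simplifications, and the ast/bast steps are compressed into one critical
section. It shows how whichever waiter wakes first (the EX waiter, having
queued on BUSY first) consumes UPCONVERT_FINISHING even though the flag was
meant for the PR waiter, after which both park on BLOCKED with nothing left
to kick the downconvert thread.

#include <pthread.h>
#include <stdio.h>

#define BUSY                0x1
#define BLOCKED             0x2
#define UPCONVERT_FINISHING 0x4

static unsigned int flags = BUSY;  /* the PR request has already set BUSY */
static pthread_mutex_t l_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  l_event = PTHREAD_COND_INITIALIZER;

static void *waiter(void *arg)
{
	const char *name = arg;

	pthread_mutex_lock(&l_lock);
	while (flags & BUSY)  /* both the EX and PR waiters sleep here */
		pthread_cond_wait(&l_event, &l_lock);

	if (flags & BLOCKED) {
		/*
		 * Mirrors the buggy exit path of __ocfs2_cluster_lock:
		 * whoever wakes first clears UPCONVERT_FINISHING and goes
		 * back to waiting on BLOCKED, without ever kicking the
		 * downconvert thread -- hence the hang.
		 */
		printf("%s: BLOCKED set, UPCONVERT_FINISHING %s; clearing it\n",
		       name,
		       (flags & UPCONVERT_FINISHING) ? "was set" : "already gone");
		flags &= ~UPCONVERT_FINISHING;
	}
	pthread_mutex_unlock(&l_lock);
	return NULL;
}

int main(void)
{
	pthread_t ex, pr;

	pthread_create(&ex, NULL, waiter, "EX waiter");
	pthread_create(&pr, NULL, waiter, "PR waiter");

	/*
	 * Compressed dlm_thread step: the ast sets UPCONVERT_FINISHING,
	 * clears BUSY, and wakes the waiters; the bast then sets BLOCKED.
	 */
	pthread_mutex_lock(&l_lock);
	flags |= UPCONVERT_FINISHING | BLOCKED;
	flags &= ~BUSY;
	pthread_cond_broadcast(&l_event);
	pthread_mutex_unlock(&l_lock);

	pthread_join(ex, NULL);
	pthread_join(pr, NULL);
	return 0;
}

Compiled with "cc -pthread", one waiter reports the flag "was set" and the
other "already gone", which is the consumption the commit message describes.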
One way to fix this is to wake the downconvert thread after clearing
UPCONVERT_FINISHING, if the lock resource still has BLOCKED set.

Signed-off-by: Tariq Saeed
Reviewed-by: Wengang Wang
Signed-off-by: Santosh Shilimkar
Reviewed-by: Mark Fasheh
---
 fs/ocfs2/dlmglue.c | 6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
index 8b23aa2..313c816 100644
--- a/fs/ocfs2/dlmglue.c
+++ b/fs/ocfs2/dlmglue.c
@@ -1390,6 +1390,7 @@ static int __ocfs2_cluster_lock(struct ocfs2_super *osb,
 	unsigned int gen;
 	int noqueue_attempted = 0;
 	int dlm_locked = 0;
+	int kick_dc = 0;
 
 	if (!(lockres->l_flags & OCFS2_LOCK_INITIALIZED)) {
 		mlog_errno(-EINVAL);
@@ -1524,7 +1525,12 @@ update_holders:
 unlock:
 	lockres_clear_flags(lockres, OCFS2_LOCK_UPCONVERT_FINISHING);
 
+	/* ocfs2_unblock_lock requeues on seeing OCFS2_LOCK_UPCONVERT_FINISHING */
+	kick_dc = (lockres->l_flags & OCFS2_LOCK_BLOCKED);
+
 	spin_unlock_irqrestore(&lockres->l_lock, flags);
+	if (kick_dc)
+		ocfs2_wake_downconvert_thread(osb);
 out:
 	/*
 	 * This is helping work around a lock inversion between the page lock
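A note on the shape of the fix, as I read the diff: OCFS2_LOCK_BLOCKED is
sampled into kick_dc while l_lock is still held, so the test is consistent
with the clearing of OCFS2_LOCK_UPCONVERT_FINISHING, but the actual
ocfs2_wake_downconvert_thread call is deferred until after
spin_unlock_irqrestore, keeping the wake-up work out from under l_lock.
The effect is to give the downconvert thread a second pass at
ocfs2_unblock_lock now that UPCONVERT_FINISHING has been cleared, which
breaks the hang described above.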