From patchwork Fri Aug 5 02:14:02 2016
X-Patchwork-Submitter: Somnath Roy
X-Patchwork-Id: 9264723
From: Somnath Roy <Somnath.Roy@sandisk.com>
To: "Ma, Jianpeng" <jianpeng.ma@intel.com>, Sage Weil
CC: ceph-devel <ceph-devel@vger.kernel.org>, "Mark Nelson (mnelson@redhat.com)"
Subject: RE: BlueStore Deadlock
Date: Fri, 5 Aug 2016 02:14:02 +0000
X-Mailing-List: ceph-devel@vger.kernel.org

Sorry for the delay, Jianpeng. Could you please try the following pull request in your setup and see if the deadlock is still happening?
https://github.com/ceph/ceph/pull/10578

Mark, I think it also has the probable fix for your crash in onode->flush(); could you please try it out?

Thanks & Regards
Somnath

-----Original Message-----
From: Ma, Jianpeng [mailto:jianpeng.ma@intel.com]
Sent: Tuesday, August 02, 2016 6:35 PM
To: Sage Weil
Cc: ceph-devel; Somnath Roy
Subject: RE: BlueStore Deadlock

Hi Sage:
Why is there STATE_WRITTING? In my opinion, a normal read does onode->flush(), so it has no need to wait for the I/O to complete. The special case is a single transaction with two writes, where the later write needs to read. For a WAL write, when finish_write runs the data has not yet reached the disk; this differs from a non-WAL write.

Thanks!

-----Original Message-----
From: Ma, Jianpeng
Sent: Friday, July 29, 2016 9:36 AM
To: Ma, Jianpeng; Somnath Roy
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Roy:
My question is why STATE_WRITTING exists at all. Reads do not bypass the cache, so why do we need STATE_WRITTING?

Thanks!

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Ma, Jianpeng
Sent: Friday, July 29, 2016 9:24 AM
To: Somnath Roy
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Hi Roy:
With your patch there is still a deadlock. By the way, what if we change BufferSpace::_add_buffer so that a buffer is pushed to the front of the cache when the cache flag is set, and only appended at the back when it is not? I think that way we could remove finish_write. How about it?

Thanks!

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Friday, July 29, 2016 6:22 AM
To: Ma, Jianpeng
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Jianpeng,
I thought this through and there seems to be one possible deadlock scenario:

tp_osd_tp --> waiting on onode->flush() for the previous txc to finish, while holding WLock(coll)
aio_complete_thread --> waiting for RLock(coll)

No other thread will be blocked here.
We add the previous txc to the flush_txns list during _txc_write_nodes(), before aio_complete_thread calls _txc_state_proc(). So if an IO arrives on the same collection within that window, it will wait on unfinished txcs. The solution could be the following:

root@emsnode5:~/ceph-master/src# git diff
diff --git a/src/os/bluestore/BlueStore.cc b/src/os/bluestore/BlueStore.cc
index e8548b1..575a234 100644

I am not able to reproduce this in my setup, so it would be helpful if you could apply the above changes in your environment and see whether you still hit the issue.

Thanks & Regards
Somnath

-----Original Message-----
From: Somnath Roy
Sent: Thursday, July 28, 2016 8:45 AM
To: 'Ma, Jianpeng'
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Hi Jianpeng,
Are you trying with the latest master and still hitting the issue (it seems so, but just confirming)?
The scenario below should not create a deadlock, for the following reason: onode->flush() waits on flush_lock, and _txc_finish() releases that lock before taking osr->qlock. Am I missing anything?
I hit a deadlock in this path in one of my earlier changes, in the following pull request (described in detail there), and it is fixed and merged:

https://github.com/ceph/ceph/pull/10220

If my theory is right, we may be hitting the deadlock for some other reason. It seems you are doing a WAL write; could you please describe the steps to reproduce?

Thanks & Regards
Somnath

From: Ma, Jianpeng [mailto:jianpeng.ma@intel.com]
Sent: Thursday, July 28, 2016 1:46 AM
To: Somnath Roy
Cc: ceph-devel; Ma, Jianpeng
Subject: BlueStore Deadlock

Hi Roy:
When doing sequential writes with rbd+librbd, I hit a deadlock in BlueStore. It reproduces 100% (based on 98602ae6c67637dbadddd549bd9a0035e5a2717).
By adding debug messages I found this bug was introduced by bf70bcb6c54e4d6404533bc91781a5ef77d62033.
Consider this case:

tp_osd_tp                    aio_complete_thread              kv_sync_thread
Rwlock(coll)                 txc_finish_io                    _txc_finish
do_write                     lock(osr->qlock)                 lock(osr->qlock)
do_read                      RLock(coll)                      need osr->qlock to continue
onode->flush()               need coll read lock to continue
need previous txc complete

But currently I don't know how to fix this.
Thanks!

---
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

--- a/src/os/bluestore/BlueStore.cc
+++ b/src/os/bluestore/BlueStore.cc
@@ -4606,6 +4606,11 @@ void BlueStore::_txc_state_proc(TransContext *txc)
       (txc->first_collection)->lock.get_read();
     }
     for (auto& o : txc->onodes) {
+      {
+        std::lock_guard<std::mutex> l(o->flush_lock);
+        o->flush_txns.insert(txc);
+      }
+
       for (auto& p : o->blob_map.blob_map) {
         p.bc.finish_write(txc->seq);
       }
@@ -4733,8 +4738,8 @@ void BlueStore::_txc_write_nodes(TransContext *txc, KeyValueDB::Transaction t)
     dout(20) << " onode " << (*p)->oid << " is " << bl.length() << dendl;
     t->set(PREFIX_OBJ, (*p)->key, bl);
-    std::lock_guard<std::mutex> l((*p)->flush_lock);
-    (*p)->flush_txns.insert(txc);
+    /*std::lock_guard<std::mutex> l((*p)->flush_lock);
+    (*p)->flush_txns.insert(txc);*/
   }