From patchwork Wed May 29 20:50:49 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 2632681
Message-ID: <51A66A29.6060805@inktank.com>
Date: Wed, 29 May 2013 15:50:49 -0500
From: Alex Elder
To: ceph-devel
Subject: [PATCH] rbd: protect against duplicate client creation

If more than one rbd image has the same ceph cluster configuration (same options, same set of monitors, same keys), the images normally share a single rbd client. When an image is getting mapped, rbd looks to see whether an existing client can be used, and creates a new one if not. The lookup and the creation are not done under a common lock, though, so mapping two images concurrently could lead to duplicate clients getting set up needlessly. This isn't a major problem, but it's wasteful and differs from what's intended.

This patch fixes that by using the control mutex to protect both the lookup and (if needed) the creation of the client; previously the mutex was held only during creation.
This resolves:
    http://tracker.ceph.com/issues/3094

Signed-off-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index aec2438..d255541 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -522,7 +522,7 @@ static const struct block_device_operations rbd_bd_ops = {
 
 /*
  * Initialize an rbd client instance.  Success or not, this function
- * consumes ceph_opts.
+ * consumes ceph_opts.  Caller holds ctl_mutex
  */
 static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts)
 {
@@ -537,8 +537,6 @@ static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts)
 	kref_init(&rbdc->kref);
 	INIT_LIST_HEAD(&rbdc->node);
 
-	mutex_lock_nested(&ctl_mutex, SINGLE_DEPTH_NESTING);
-
 	rbdc->client = ceph_create_client(ceph_opts, rbdc, 0, 0);
 	if (IS_ERR(rbdc->client))
 		goto out_mutex;
@@ -552,7 +550,6 @@ static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts)
 	list_add_tail(&rbdc->node, &rbd_client_list);
 	spin_unlock(&rbd_client_list_lock);
 
-	mutex_unlock(&ctl_mutex);
 
 	dout("%s: rbdc %p\n", __func__, rbdc);
 	return rbdc;
@@ -684,11 +681,13 @@ static struct rbd_client *rbd_get_client(struct ceph_options *ceph_opts)
 {
 	struct rbd_client *rbdc;
 
+	mutex_lock_nested(&ctl_mutex, SINGLE_DEPTH_NESTING);
 	rbdc = rbd_client_find(ceph_opts);
 	if (rbdc)	/* using an existing client */
 		ceph_destroy_options(ceph_opts);
 	else
 		rbdc = rbd_client_create(ceph_opts);
+	mutex_unlock(&ctl_mutex);
 
 	return rbdc;
 }