From patchwork Sat Jun 1 01:18:55 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 2646361
Message-ID: <51A94BFF.5050406@inktank.com>
Date: Fri, 31 May 2013 20:18:55 -0500
From: Alex Elder
To: ceph-devel
X-Mailing-List: ceph-devel@vger.kernel.org
Subject: [PATCH 1/5] rbd: set removing flag while holding list lock
References: <51A94BC0.4080703@inktank.com>
In-Reply-To: <51A94BC0.4080703@inktank.com>

When unmapping a device, its id is supplied, and that is used to
look up which rbd device should be unmapped.  Looking up the device
involves searching the rbd device list while holding a spinlock
that protects access to that list.

Currently all of this is done under protection of the control lock,
but that protection is going away soon.  To ensure the rbd_dev is
still valid (still on the list) while setting its REMOVING flag, do
so while still holding the list lock.

To do so, get rid of __rbd_get_dev(), and open code what it did in
the one place it was used.
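For readability, here is a condensed sketch of the new lookup-and-flag
sequence in rbd_remove() (taken from the diff below, with comments
added; error handling and the remainder of the function are omitted):

	/* Look up the device by id while holding the list lock. */
	ret = -ENOENT;
	spin_lock(&rbd_dev_list_lock);
	list_for_each(tmp, &rbd_dev_list) {
		rbd_dev = list_entry(tmp, struct rbd_device, node);
		if (rbd_dev->dev_id == dev_id) {
			ret = 0;
			break;
		}
	}
	if (!ret) {
		/*
		 * Still under rbd_dev_list_lock, so the rbd_dev cannot
		 * leave the list before its REMOVING flag is set.
		 */
		spin_lock_irq(&rbd_dev->lock);
		if (rbd_dev->open_count)
			ret = -EBUSY;
		else
			set_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags);
		spin_unlock_irq(&rbd_dev->lock);
	}
	spin_unlock(&rbd_dev_list_lock);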
Signed-off-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 53 +++++++++++++++++++++------------------------------
 1 file changed, 22 insertions(+), 31 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index aace658..716ef1f 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -5106,23 +5106,6 @@ err_out_module:
 	return (ssize_t)rc;
 }
 
-static struct rbd_device *__rbd_get_dev(unsigned long dev_id)
-{
-	struct list_head *tmp;
-	struct rbd_device *rbd_dev;
-
-	spin_lock(&rbd_dev_list_lock);
-	list_for_each(tmp, &rbd_dev_list) {
-		rbd_dev = list_entry(tmp, struct rbd_device, node);
-		if (rbd_dev->dev_id == dev_id) {
-			spin_unlock(&rbd_dev_list_lock);
-			return rbd_dev;
-		}
-	}
-	spin_unlock(&rbd_dev_list_lock);
-	return NULL;
-}
-
 static void rbd_dev_device_release(struct device *dev)
 {
 	struct rbd_device *rbd_dev = dev_to_rbd_dev(dev);
@@ -5167,7 +5150,8 @@ static ssize_t rbd_remove(struct bus_type *bus,
 			  size_t count)
 {
 	struct rbd_device *rbd_dev = NULL;
-	int target_id;
+	struct list_head *tmp;
+	int dev_id;
 	unsigned long ul;
 	int ret;
 
@@ -5176,26 +5160,33 @@ static ssize_t rbd_remove(struct bus_type *bus,
 		return ret;
 
 	/* convert to int; abort if we lost anything in the conversion */
-	target_id = (int) ul;
-	if (target_id != ul)
+	dev_id = (int)ul;
+	if (dev_id != ul)
 		return -EINVAL;
 
 	mutex_lock_nested(&ctl_mutex, SINGLE_DEPTH_NESTING);
 
-	rbd_dev = __rbd_get_dev(target_id);
-	if (!rbd_dev) {
-		ret = -ENOENT;
-		goto done;
+	ret = -ENOENT;
+	spin_lock(&rbd_dev_list_lock);
+	list_for_each(tmp, &rbd_dev_list) {
+		rbd_dev = list_entry(tmp, struct rbd_device, node);
+		if (rbd_dev->dev_id == dev_id) {
+			ret = 0;
+			break;
+		}
 	}
-
-	spin_lock_irq(&rbd_dev->lock);
-	if (rbd_dev->open_count)
-		ret = -EBUSY;
-	else
-		set_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags);
-	spin_unlock_irq(&rbd_dev->lock);
+	if (!ret) {
+		spin_lock_irq(&rbd_dev->lock);
+		if (rbd_dev->open_count)
+			ret = -EBUSY;
+		else
+			set_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags);
+		spin_unlock_irq(&rbd_dev->lock);
+	}
+	spin_unlock(&rbd_dev_list_lock);
 	if (ret < 0)
 		goto done;
+
 	rbd_bus_del_dev(rbd_dev);
 	ret = rbd_dev_header_watch_sync(rbd_dev, false);
 	if (ret)