From patchwork Thu Feb 13 19:26:02 2020
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 11381067
From: Ilya Dryomov <idryomov@gmail.com>
To: ceph-devel@vger.kernel.org
Cc: Hannes Reinecke
Subject: [PATCH 1/5] rbd: kill img_request kref
Date: Thu, 13 Feb 2020 20:26:02 +0100
Message-Id: <20200213192606.31194-2-idryomov@gmail.com>
In-Reply-To: <20200213192606.31194-1-idryomov@gmail.com>
References: <20200213192606.31194-1-idryomov@gmail.com>
List-ID: ceph-devel@vger.kernel.org

From: Hannes Reinecke

The reference counter is never increased, so we can as well call
rbd_img_request_destroy() directly and drop the kref.
Signed-off-by: Hannes Reinecke
---
 drivers/block/rbd.c | 24 +++++-------------------
 1 file changed, 5 insertions(+), 19 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index f206edbbc5d3..22d524a0e98b 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -349,7 +349,6 @@ struct rbd_img_request {
 	struct pending_result	pending;
 	struct work_struct	work;
 	int			work_result;
-	struct kref		kref;
 };
 
 #define for_each_obj_request(ireq, oreq) \
@@ -1320,15 +1319,6 @@ static void rbd_obj_request_put(struct rbd_obj_request *obj_request)
 	kref_put(&obj_request->kref, rbd_obj_request_destroy);
 }
 
-static void rbd_img_request_destroy(struct kref *kref);
-static void rbd_img_request_put(struct rbd_img_request *img_request)
-{
-	rbd_assert(img_request != NULL);
-	dout("%s: img %p (was %d)\n", __func__, img_request,
-	     kref_read(&img_request->kref));
-	kref_put(&img_request->kref, rbd_img_request_destroy);
-}
-
 static inline void rbd_img_obj_request_add(struct rbd_img_request *img_request,
 					   struct rbd_obj_request *obj_request)
 {
@@ -1656,19 +1646,15 @@ static struct rbd_img_request *rbd_img_request_create(
 	INIT_LIST_HEAD(&img_request->lock_item);
 	INIT_LIST_HEAD(&img_request->object_extents);
 	mutex_init(&img_request->state_mutex);
-	kref_init(&img_request->kref);
 
 	return img_request;
 }
 
-static void rbd_img_request_destroy(struct kref *kref)
+static void rbd_img_request_destroy(struct rbd_img_request *img_request)
 {
-	struct rbd_img_request *img_request;
 	struct rbd_obj_request *obj_request;
 	struct rbd_obj_request *next_obj_request;
 
-	img_request = container_of(kref, struct rbd_img_request, kref);
-
 	dout("%s: img %p\n", __func__, img_request);
 
 	WARN_ON(!list_empty(&img_request->lock_item));
@@ -2885,7 +2871,7 @@ static int rbd_obj_read_from_parent(struct rbd_obj_request *obj_req)
 						      obj_req->copyup_bvecs);
 	}
 	if (ret) {
-		rbd_img_request_put(child_img_req);
+		rbd_img_request_destroy(child_img_req);
 		return ret;
 	}
 
@@ -3644,7 +3630,7 @@ static void rbd_img_handle_request(struct rbd_img_request *img_req, int result)
 	if (test_bit(IMG_REQ_CHILD, &img_req->flags)) {
 		struct rbd_obj_request *obj_req = img_req->obj_request;
 
-		rbd_img_request_put(img_req);
+		rbd_img_request_destroy(img_req);
 		if (__rbd_obj_handle_request(obj_req, &result)) {
 			img_req = obj_req->img_request;
 			goto again;
@@ -3652,7 +3638,7 @@ static void rbd_img_handle_request(struct rbd_img_request *img_req, int result)
 	} else {
 		struct request *rq = img_req->rq;
 
-		rbd_img_request_put(img_req);
+		rbd_img_request_destroy(img_req);
 		blk_mq_end_request(rq, errno_to_blk_status(result));
 	}
 }
@@ -4798,7 +4784,7 @@ static void rbd_queue_workfn(struct work_struct *work)
 	return;
 
 err_img_request:
-	rbd_img_request_put(img_request);
+	rbd_img_request_destroy(img_request);
 err_rq:
 	if (result)
 		rbd_warn(rbd_dev, "%s %llx at %llx result %d",

From patchwork Thu Feb 13 19:26:03 2020
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 11381065
From: Ilya Dryomov <idryomov@gmail.com>
To: ceph-devel@vger.kernel.org
Cc: Hannes Reinecke
Subject: [PATCH 2/5] rbd: get rid of img_request_layered_clear()
Date: Thu, 13 Feb 2020 20:26:03 +0100
Message-Id: <20200213192606.31194-3-idryomov@gmail.com>
In-Reply-To: <20200213192606.31194-1-idryomov@gmail.com>
References: <20200213192606.31194-1-idryomov@gmail.com>

No need to clear IMG_REQ_LAYERED before destroying the request.

Signed-off-by: Ilya Dryomov
---
 drivers/block/rbd.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 22d524a0e98b..96aa0133fb40 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1358,11 +1358,6 @@ static void img_request_layered_set(struct rbd_img_request *img_request)
 	set_bit(IMG_REQ_LAYERED, &img_request->flags);
 }
 
-static void img_request_layered_clear(struct rbd_img_request *img_request)
-{
-	clear_bit(IMG_REQ_LAYERED, &img_request->flags);
-}
-
 static bool img_request_layered_test(struct rbd_img_request *img_request)
 {
 	return test_bit(IMG_REQ_LAYERED, &img_request->flags) != 0;
@@ -1661,10 +1656,8 @@ static void rbd_img_request_destroy(struct rbd_img_request *img_request)
 	for_each_obj_request_safe(img_request, obj_request, next_obj_request)
 		rbd_img_obj_request_del(img_request, obj_request);
 
-	if (img_request_layered_test(img_request)) {
-		img_request_layered_clear(img_request);
+	if (img_request_layered_test(img_request))
 		rbd_dev_parent_put(img_request->rbd_dev);
-	}
 
 	if (rbd_img_is_write(img_request))
 		ceph_put_snap_context(img_request->snapc);

From patchwork Thu Feb 13 19:26:04 2020
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 11381069
From: Ilya Dryomov <idryomov@gmail.com>
To: ceph-devel@vger.kernel.org
Cc: Hannes Reinecke
Subject: [PATCH 3/5] rbd: acquire header_rwsem just once in rbd_queue_workfn()
Date: Thu, 13 Feb 2020 20:26:04 +0100
Message-Id: <20200213192606.31194-4-idryomov@gmail.com>
In-Reply-To: <20200213192606.31194-1-idryomov@gmail.com>
References: <20200213192606.31194-1-idryomov@gmail.com>

Currently header_rwsem is acquired twice: once in rbd_dev_parent_get()
when the image request is being created, and then in rbd_queue_workfn()
to capture mapping_size and snapc.  Introduce rbd_img_capture_header()
and move image request allocation so that header_rwsem can be acquired
just once.
Signed-off-by: Ilya Dryomov
---
 drivers/block/rbd.c | 59 ++++++++++++++++++++++++---------------------
 1 file changed, 31 insertions(+), 28 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 96aa0133fb40..d9eaf470728b 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1601,10 +1601,8 @@ static bool rbd_dev_parent_get(struct rbd_device *rbd_dev)
 	if (!rbd_dev->parent_spec)
 		return false;
 
-	down_read(&rbd_dev->header_rwsem);
 	if (rbd_dev->parent_overlap)
 		counter = atomic_inc_return_safe(&rbd_dev->parent_ref);
-	up_read(&rbd_dev->header_rwsem);
 
 	if (counter < 0)
 		rbd_warn(rbd_dev, "parent reference overflow");
@@ -1619,8 +1617,7 @@ static bool rbd_dev_parent_get(struct rbd_device *rbd_dev)
  */
 static struct rbd_img_request *rbd_img_request_create(
 					struct rbd_device *rbd_dev,
-					enum obj_operation_type op_type,
-					struct ceph_snap_context *snapc)
+					enum obj_operation_type op_type)
 {
 	struct rbd_img_request *img_request;
 
@@ -1630,13 +1627,6 @@ static struct rbd_img_request *rbd_img_request_create(
 	img_request->rbd_dev = rbd_dev;
 	img_request->op_type = op_type;
 
-	if (!rbd_img_is_write(img_request))
-		img_request->snap_id = rbd_dev->spec->snap_id;
-	else
-		img_request->snapc = snapc;
-
-	if (rbd_dev_parent_get(rbd_dev))
-		img_request_layered_set(img_request);
-
 	INIT_LIST_HEAD(&img_request->lock_item);
 	INIT_LIST_HEAD(&img_request->object_extents);
 	mutex_init(&img_request->state_mutex);
@@ -1645,6 +1635,21 @@ static struct rbd_img_request *rbd_img_request_create(
 	return img_request;
 }
 
+static void rbd_img_capture_header(struct rbd_img_request *img_req)
+{
+	struct rbd_device *rbd_dev = img_req->rbd_dev;
+
+	lockdep_assert_held(&rbd_dev->header_rwsem);
+
+	if (rbd_img_is_write(img_req))
+		img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
+	else
+		img_req->snap_id = rbd_dev->spec->snap_id;
+
+	if (rbd_dev_parent_get(rbd_dev))
+		img_request_layered_set(img_req);
+}
+
 static void rbd_img_request_destroy(struct rbd_img_request *img_request)
 {
 	struct rbd_obj_request *obj_request;
@@ -2825,17 +2830,21 @@ static int rbd_obj_read_object(struct rbd_obj_request *obj_req)
 static int rbd_obj_read_from_parent(struct rbd_obj_request *obj_req)
 {
 	struct rbd_img_request *img_req = obj_req->img_request;
+	struct rbd_device *parent = img_req->rbd_dev->parent;
 	struct rbd_img_request *child_img_req;
 	int ret;
 
-	child_img_req = rbd_img_request_create(img_req->rbd_dev->parent,
-					       OBJ_OP_READ, NULL);
+	child_img_req = rbd_img_request_create(parent, OBJ_OP_READ);
 	if (!child_img_req)
 		return -ENOMEM;
 
 	__set_bit(IMG_REQ_CHILD, &child_img_req->flags);
 	child_img_req->obj_request = obj_req;
 
+	down_read(&parent->header_rwsem);
+	rbd_img_capture_header(child_img_req);
+	up_read(&parent->header_rwsem);
+
 	dout("%s child_img_req %p for obj_req %p\n", __func__, child_img_req,
 	     obj_req);
 
@@ -4686,7 +4695,6 @@ static void rbd_queue_workfn(struct work_struct *work)
 	struct request *rq = blk_mq_rq_from_pdu(work);
 	struct rbd_device *rbd_dev = rq->q->queuedata;
 	struct rbd_img_request *img_request;
-	struct ceph_snap_context *snapc = NULL;
 	u64 offset = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
 	u64 length = blk_rq_bytes(rq);
 	enum obj_operation_type op_type;
@@ -4739,28 +4747,24 @@ static void rbd_queue_workfn(struct work_struct *work)
 
 	blk_mq_start_request(rq);
 
+	img_request = rbd_img_request_create(rbd_dev, op_type);
+	if (!img_request) {
+		result = -ENOMEM;
+		goto err_rq;
+	}
+	img_request->rq = rq;
+
 	down_read(&rbd_dev->header_rwsem);
 	mapping_size = rbd_dev->mapping.size;
-	if (op_type != OBJ_OP_READ) {
-		snapc = rbd_dev->header.snapc;
-		ceph_get_snap_context(snapc);
-	}
+	rbd_img_capture_header(img_request);
 	up_read(&rbd_dev->header_rwsem);
 
 	if (offset + length > mapping_size) {
 		rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)", offset,
 			 length, mapping_size);
 		result = -EIO;
-		goto err_rq;
-	}
-
-	img_request = rbd_img_request_create(rbd_dev, op_type, snapc);
-	if (!img_request) {
-		result = -ENOMEM;
-		goto err_rq;
+		goto err_img_request;
 	}
-	img_request->rq = rq;
-	snapc = NULL; /* img_request consumes a ref */
 
 	dout("%s rbd_dev %p img_req %p %s %llu~%llu\n", __func__, rbd_dev,
 	     img_request, obj_op_name(op_type), offset, length);
@@ -4782,7 +4786,6 @@ static void rbd_queue_workfn(struct work_struct *work)
 	if (result)
 		rbd_warn(rbd_dev, "%s %llx at %llx result %d",
 			 obj_op_name(op_type), length, offset, result);
-	ceph_put_snap_context(snapc);
 err:
 	blk_mq_end_request(rq, errno_to_blk_status(result));
 }

From patchwork Thu Feb 13 19:26:05 2020
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 11381073
From: Ilya Dryomov <idryomov@gmail.com>
To: ceph-devel@vger.kernel.org
Cc: Hannes Reinecke
Subject: [PATCH 4/5] rbd: embed image request in blk-mq pdu
Date: Thu, 13 Feb 2020 20:26:05 +0100
Message-Id: <20200213192606.31194-5-idryomov@gmail.com>
In-Reply-To: <20200213192606.31194-1-idryomov@gmail.com>
References: <20200213192606.31194-1-idryomov@gmail.com>

Avoid making allocations for !IMG_REQ_CHILD image requests.
Only IMG_REQ_CHILD image requests need to be freed now.

Move the initial request checks to rbd_queue_rq().  Unfortunately we
can't fill the image request and kick the state machine directly from
rbd_queue_rq() because ->queue_rq() isn't allowed to block.

This is loosely based on a patch from Hannes Reinecke.

Signed-off-by: Ilya Dryomov
---
 drivers/block/rbd.c | 138 ++++++++++++++++----------------------------
 1 file changed, 51 insertions(+), 87 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index d9eaf470728b..9ff4355fe48a 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -337,10 +337,7 @@ struct rbd_img_request {
 		u64			snap_id;	/* for reads */
 		struct ceph_snap_context *snapc;	/* for writes */
 	};
-	union {
-		struct request		*rq;		/* block request */
-		struct rbd_obj_request	*obj_request;	/* obj req initiator */
-	};
+	struct rbd_obj_request	*obj_request;	/* obj req initiator */
 
 	struct list_head	lock_item;
 	struct list_head	object_extents;	/* obj_req.ex structs */
@@ -1610,20 +1607,11 @@ static bool rbd_dev_parent_get(struct rbd_device *rbd_dev)
 	return counter > 0;
 }
 
-/*
- * Caller is responsible for filling in the list of object requests
- * that comprises the image request, and the Linux request pointer
- * (if there is one).
- */
-static struct rbd_img_request *rbd_img_request_create(
-					struct rbd_device *rbd_dev,
-					enum obj_operation_type op_type)
+static void rbd_img_request_init(struct rbd_img_request *img_request,
+				 struct rbd_device *rbd_dev,
+				 enum obj_operation_type op_type)
 {
-	struct rbd_img_request *img_request;
-
-	img_request = kmem_cache_zalloc(rbd_img_request_cache, GFP_NOIO);
-	if (!img_request)
-		return NULL;
+	memset(img_request, 0, sizeof(*img_request));
 
 	img_request->rbd_dev = rbd_dev;
 	img_request->op_type = op_type;
@@ -1631,8 +1619,6 @@ static struct rbd_img_request *rbd_img_request_create(
 	INIT_LIST_HEAD(&img_request->lock_item);
 	INIT_LIST_HEAD(&img_request->object_extents);
 	mutex_init(&img_request->state_mutex);
-
-	return img_request;
 }
 
 static void rbd_img_capture_header(struct rbd_img_request *img_req)
@@ -1667,7 +1653,8 @@ static void rbd_img_request_destroy(struct rbd_img_request *img_request)
 	if (rbd_img_is_write(img_request))
 		ceph_put_snap_context(img_request->snapc);
 
-	kmem_cache_free(rbd_img_request_cache, img_request);
+	if (test_bit(IMG_REQ_CHILD, &img_request->flags))
+		kmem_cache_free(rbd_img_request_cache, img_request);
 }
 
 #define BITS_PER_OBJ	2
@@ -2834,10 +2821,11 @@ static int rbd_obj_read_from_parent(struct rbd_obj_request *obj_req)
 	struct rbd_img_request *child_img_req;
 	int ret;
 
-	child_img_req = rbd_img_request_create(parent, OBJ_OP_READ);
+	child_img_req = kmem_cache_alloc(rbd_img_request_cache, GFP_NOIO);
 	if (!child_img_req)
 		return -ENOMEM;
 
+	rbd_img_request_init(child_img_req, parent, OBJ_OP_READ);
 	__set_bit(IMG_REQ_CHILD, &child_img_req->flags);
 	child_img_req->obj_request = obj_req;
 
@@ -3638,7 +3626,7 @@ static void rbd_img_handle_request(struct rbd_img_request *img_req, int result)
 			goto again;
 		}
 	} else {
-		struct request *rq = img_req->rq;
+		struct request *rq = blk_mq_rq_from_pdu(img_req);
 
 		rbd_img_request_destroy(img_req);
 		blk_mq_end_request(rq, errno_to_blk_status(result));
@@ -4692,68 +4680,25 @@ static int rbd_obj_method_sync(struct rbd_device *rbd_dev,
 
 static void rbd_queue_workfn(struct work_struct *work)
 {
-	struct request *rq = blk_mq_rq_from_pdu(work);
-	struct rbd_device *rbd_dev = rq->q->queuedata;
-	struct rbd_img_request *img_request;
+	struct rbd_img_request *img_request =
+	    container_of(work, struct rbd_img_request, work);
+	struct rbd_device *rbd_dev = img_request->rbd_dev;
+	enum obj_operation_type op_type = img_request->op_type;
+	struct request *rq = blk_mq_rq_from_pdu(img_request);
 	u64 offset = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
 	u64 length = blk_rq_bytes(rq);
-	enum obj_operation_type op_type;
 	u64 mapping_size;
 	int result;
 
-	switch (req_op(rq)) {
-	case REQ_OP_DISCARD:
-		op_type = OBJ_OP_DISCARD;
-		break;
-	case REQ_OP_WRITE_ZEROES:
-		op_type = OBJ_OP_ZEROOUT;
-		break;
-	case REQ_OP_WRITE:
-		op_type = OBJ_OP_WRITE;
-		break;
-	case REQ_OP_READ:
-		op_type = OBJ_OP_READ;
-		break;
-	default:
-		dout("%s: non-fs request type %d\n", __func__, req_op(rq));
-		result = -EIO;
-		goto err;
-	}
-
 	/* Ignore/skip any zero-length requests */
 	if (!length) {
 		dout("%s: zero-length request\n", __func__);
 		result = 0;
-		goto err_rq;
-	}
-
-	if (op_type != OBJ_OP_READ) {
-		if (rbd_is_ro(rbd_dev)) {
-			rbd_warn(rbd_dev, "%s on read-only mapping",
-				 obj_op_name(op_type));
-			result = -EIO;
-			goto err;
-		}
-		rbd_assert(!rbd_is_snap(rbd_dev));
-	}
-
-	if (offset && length > U64_MAX - offset + 1) {
-		rbd_warn(rbd_dev, "bad request range (%llu~%llu)", offset,
-			 length);
-		result = -EINVAL;
-		goto err_rq;	/* Shouldn't happen */
+		goto err_img_request;
 	}
 
 	blk_mq_start_request(rq);
 
-	img_request = rbd_img_request_create(rbd_dev, op_type);
-	if (!img_request) {
-		result = -ENOMEM;
-		goto err_rq;
-	}
-	img_request->rq = rq;
-
 	down_read(&rbd_dev->header_rwsem);
 	mapping_size = rbd_dev->mapping.size;
 	rbd_img_capture_header(img_request);
@@ -4782,21 +4727,50 @@ static void rbd_queue_workfn(struct work_struct *work)
 
 err_img_request:
 	rbd_img_request_destroy(img_request);
-err_rq:
 	if (result)
 		rbd_warn(rbd_dev, "%s %llx at %llx result %d",
 			 obj_op_name(op_type), length, offset, result);
-err:
 	blk_mq_end_request(rq, errno_to_blk_status(result));
 }
 
 static blk_status_t rbd_queue_rq(struct blk_mq_hw_ctx *hctx,
 		const struct blk_mq_queue_data *bd)
 {
-	struct request *rq = bd->rq;
-	struct work_struct *work = blk_mq_rq_to_pdu(rq);
+	struct rbd_device *rbd_dev = hctx->queue->queuedata;
+	struct rbd_img_request *img_req = blk_mq_rq_to_pdu(bd->rq);
+	enum obj_operation_type op_type;
 
-	queue_work(rbd_wq, work);
+	switch (req_op(bd->rq)) {
+	case REQ_OP_DISCARD:
+		op_type = OBJ_OP_DISCARD;
+		break;
+	case REQ_OP_WRITE_ZEROES:
+		op_type = OBJ_OP_ZEROOUT;
+		break;
+	case REQ_OP_WRITE:
+		op_type = OBJ_OP_WRITE;
+		break;
+	case REQ_OP_READ:
+		op_type = OBJ_OP_READ;
+		break;
+	default:
+		rbd_warn(rbd_dev, "unknown req_op %d", req_op(bd->rq));
+		return BLK_STS_IOERR;
+	}
+
+	rbd_img_request_init(img_req, rbd_dev, op_type);
+
+	if (rbd_img_is_write(img_req)) {
+		if (rbd_is_ro(rbd_dev)) {
+			rbd_warn(rbd_dev, "%s on read-only mapping",
+				 obj_op_name(img_req->op_type));
+			return BLK_STS_IOERR;
+		}
+		rbd_assert(!rbd_is_snap(rbd_dev));
+	}
+
+	INIT_WORK(&img_req->work, rbd_queue_workfn);
+	queue_work(rbd_wq, &img_req->work);
 	return BLK_STS_OK;
 }
 
@@ -4963,18 +4937,8 @@ static int rbd_dev_refresh(struct rbd_device *rbd_dev)
 	return ret;
 }
 
-static int rbd_init_request(struct blk_mq_tag_set *set, struct request *rq,
-		unsigned int hctx_idx, unsigned int numa_node)
-{
-	struct work_struct *work = blk_mq_rq_to_pdu(rq);
-
-	INIT_WORK(work, rbd_queue_workfn);
-	return 0;
-}
-
 static const struct blk_mq_ops rbd_mq_ops = {
 	.queue_rq	= rbd_queue_rq,
-	.init_request	= rbd_init_request,
 };
 
 static int rbd_init_disk(struct rbd_device *rbd_dev)
@@ -5007,7 +4971,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	rbd_dev->tag_set.numa_node = NUMA_NO_NODE;
 	rbd_dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	rbd_dev->tag_set.nr_hw_queues = 1;
-	rbd_dev->tag_set.cmd_size = sizeof(struct work_struct);
+	rbd_dev->tag_set.cmd_size = sizeof(struct rbd_img_request);
 
 	err = blk_mq_alloc_tag_set(&rbd_dev->tag_set);
 	if (err)

From patchwork Thu Feb 13 19:26:06 2020
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 11381071
From: Ilya Dryomov <idryomov@gmail.com>
To: ceph-devel@vger.kernel.org
Cc: Hannes Reinecke
Subject: [PATCH 5/5] rbd: enable multiple blk-mq queues
Date: Thu, 13 Feb 2020 20:26:06 +0100
Message-Id: <20200213192606.31194-6-idryomov@gmail.com>
In-Reply-To: <20200213192606.31194-1-idryomov@gmail.com>
References: <20200213192606.31194-1-idryomov@gmail.com>

From: Hannes Reinecke

Allocate one queue per CPU and get a performance boost from higher
parallelism.
Signed-off-by: Hannes Reinecke
---
 drivers/block/rbd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 9ff4355fe48a..a4e7b494344c 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4970,7 +4970,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	rbd_dev->tag_set.queue_depth = rbd_dev->opts->queue_depth;
 	rbd_dev->tag_set.numa_node = NUMA_NO_NODE;
 	rbd_dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
-	rbd_dev->tag_set.nr_hw_queues = 1;
+	rbd_dev->tag_set.nr_hw_queues = num_present_cpus();
 	rbd_dev->tag_set.cmd_size = sizeof(struct rbd_img_request);
 
 	err = blk_mq_alloc_tag_set(&rbd_dev->tag_set);