From patchwork Fri Apr  5 14:04:52 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 2398771
From: Alex Elder
Date: Fri, 05 Apr 2013 09:04:52 -0500
To: ceph-devel@vger.kernel.org
Subject: [PATCH 14/20] rbd: rearrange some code for consistency
Message-ID: <515EDA04.10903@inktank.com>
In-Reply-To: <515ED849.9060901@inktank.com>
References: <515ED849.9060901@inktank.com>
List-ID: ceph-devel@vger.kernel.org

This patch just trivially moves around some code for consistency.

In preparation for initializing osd request data fields in
ceph_osdc_build_request(), I wanted to verify that rbd did in fact
call that immediately before it called ceph_osdc_start_request().
It was true (although image requests are built in a group and then
started as a group).  But I made the changes here just to make it
more obvious, by making all of the calls follow a common sequence:

    osd_req_op_<op>_init();
    ceph_osd_data_<type>_init()
    osd_req_op_<op>_<data>()
    rbd_osd_req_format()
    ...
    ret = rbd_obj_request_submit()

I moved the initialization of the callback for image object requests
into rbd_img_request_fill_bio(), again, for consistency.  To avoid a
forward reference, I moved the definition of rbd_img_obj_callback()
up in the file.
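
As a concrete illustration, a no-data request such as rbd_obj_notify_ack()
ends up reading roughly as follows after this change.  The snippet is a
sketch reassembled from the hunks below rather than a verbatim excerpt of
the file; the ceph_osd_data_<type>_init() step simply does not apply to a
request that carries no data, and error handling and declarations are
elided:

    /* Sketch of the common sequence after this patch, pieced together
     * from the rbd_obj_notify_ack() hunks below; not a verbatim copy.
     */
    obj_request->osd_req = rbd_osd_req_create(rbd_dev, false, obj_request);
    if (!obj_request->osd_req)
            goto out;
    obj_request->callback = rbd_obj_request_put;

    /* op init ... format ... submit, in that order */
    osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_NOTIFY_ACK,
                            notify_id, ver, 0);
    rbd_osd_req_format(obj_request, false);

    ret = rbd_obj_request_submit(osdc, obj_request);
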
Signed-off-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 129 +++++++++++++++++++++++++--------------------------
 1 file changed, 63 insertions(+), 66 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 1cd776b..3e8e6d5 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1519,6 +1519,57 @@ static void rbd_img_request_destroy(struct kref *kref)
 	kfree(img_request);
 }
 
+static void rbd_img_obj_callback(struct rbd_obj_request *obj_request)
+{
+	struct rbd_img_request *img_request;
+	u32 which = obj_request->which;
+	bool more = true;
+
+	img_request = obj_request->img_request;
+
+	dout("%s: img %p obj %p\n", __func__, img_request, obj_request);
+	rbd_assert(img_request != NULL);
+	rbd_assert(img_request->rq != NULL);
+	rbd_assert(img_request->obj_request_count > 0);
+	rbd_assert(which != BAD_WHICH);
+	rbd_assert(which < img_request->obj_request_count);
+	rbd_assert(which >= img_request->next_completion);
+
+	spin_lock_irq(&img_request->completion_lock);
+	if (which != img_request->next_completion)
+		goto out;
+
+	for_each_obj_request_from(img_request, obj_request) {
+		unsigned int xferred;
+		int result;
+
+		rbd_assert(more);
+		rbd_assert(which < img_request->obj_request_count);
+
+		if (!obj_request_done_test(obj_request))
+			break;
+
+		rbd_assert(obj_request->xferred <= (u64) UINT_MAX);
+		xferred = (unsigned int) obj_request->xferred;
+		result = (int) obj_request->result;
+		if (result)
+			rbd_warn(NULL, "obj_request %s result %d xferred %u\n",
+				img_request->write_request ? "write" : "read",
+				result, xferred);
+
+		more = blk_end_request(img_request->rq, result, xferred);
+		which++;
+	}
+
+	rbd_assert(more ^ (which == img_request->obj_request_count));
+	img_request->next_completion = which;
+out:
+	spin_unlock_irq(&img_request->completion_lock);
+
+	if (!more)
+		rbd_img_request_complete(img_request);
+}
+
 static int rbd_img_request_fill_bio(struct rbd_img_request *img_request,
 					struct bio *bio_list)
 {
@@ -1572,6 +1623,7 @@ static int rbd_img_request_fill_bio(struct rbd_img_request *img_request,
 		if (!osd_req)
 			goto out_partial;
 		obj_request->osd_req = osd_req;
+		obj_request->callback = rbd_img_obj_callback;
 
 		osd_data = write_request ? &osd_req->r_data_out
 					: &osd_req->r_data_in;
@@ -1582,8 +1634,6 @@ static int rbd_img_request_fill_bio(struct rbd_img_request *img_request,
 		osd_req_op_extent_osd_data(osd_req, 0, osd_data);
 		rbd_osd_req_format(obj_request, write_request);
 
-		/* status and version are initially zero-filled */
-
 		rbd_img_obj_request_add(img_request, obj_request);
 
 		image_offset += length;
@@ -1601,57 +1651,6 @@ out_unwind:
 	return -ENOMEM;
 }
 
-static void rbd_img_obj_callback(struct rbd_obj_request *obj_request)
-{
-	struct rbd_img_request *img_request;
-	u32 which = obj_request->which;
-	bool more = true;
-
-	img_request = obj_request->img_request;
-
-	dout("%s: img %p obj %p\n", __func__, img_request, obj_request);
-	rbd_assert(img_request != NULL);
-	rbd_assert(img_request->rq != NULL);
-	rbd_assert(img_request->obj_request_count > 0);
-	rbd_assert(which != BAD_WHICH);
-	rbd_assert(which < img_request->obj_request_count);
-	rbd_assert(which >= img_request->next_completion);
-
-	spin_lock_irq(&img_request->completion_lock);
-	if (which != img_request->next_completion)
-		goto out;
-
-	for_each_obj_request_from(img_request, obj_request) {
-		unsigned int xferred;
-		int result;
-
-		rbd_assert(more);
-		rbd_assert(which < img_request->obj_request_count);
-
-		if (!obj_request_done_test(obj_request))
-			break;
-
-		rbd_assert(obj_request->xferred <= (u64) UINT_MAX);
-		xferred = (unsigned int) obj_request->xferred;
-		result = (int) obj_request->result;
-		if (result)
-			rbd_warn(NULL, "obj_request %s result %d xferred %u\n",
-				img_request->write_request ? "write" : "read",
-				result, xferred);
-
-		more = blk_end_request(img_request->rq, result, xferred);
-		which++;
-	}
-
-	rbd_assert(more ^ (which == img_request->obj_request_count));
-	img_request->next_completion = which;
-out:
-	spin_unlock_irq(&img_request->completion_lock);
-
-	if (!more)
-		rbd_img_request_complete(img_request);
-}
-
 static int rbd_img_request_submit(struct rbd_img_request *img_request)
 {
 	struct rbd_device *rbd_dev = img_request->rbd_dev;
@@ -1662,7 +1661,6 @@ static int rbd_img_request_submit(struct rbd_img_request *img_request)
 	for_each_obj_request(img_request, obj_request) {
 		int ret;
 
-		obj_request->callback = rbd_img_obj_callback;
 		ret = rbd_obj_request_submit(osdc, obj_request);
 		if (ret)
 			return ret;
@@ -1681,8 +1679,9 @@ static int rbd_obj_notify_ack(struct rbd_device *rbd_dev,
 				   u64 ver, u64 notify_id)
 {
 	struct rbd_obj_request *obj_request;
-	struct ceph_osd_client *osdc;
+	struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
 	int ret;
 
+	osdc = &rbd_dev->rbd_client->client->osdc;
 	obj_request = rbd_obj_request_create(rbd_dev->header_name, 0, 0,
 							OBJ_REQUEST_NODATA);
@@ -1693,13 +1692,12 @@ static int rbd_obj_notify_ack(struct rbd_device *rbd_dev,
 	obj_request->osd_req = rbd_osd_req_create(rbd_dev, false, obj_request);
 	if (!obj_request->osd_req)
 		goto out;
+	obj_request->callback = rbd_obj_request_put;
 
 	osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_NOTIFY_ACK,
 					notify_id, ver, 0);
 	rbd_osd_req_format(obj_request, false);
 
-	osdc = &rbd_dev->rbd_client->client->osdc;
-	obj_request->callback = rbd_obj_request_put;
 	ret = rbd_obj_request_submit(osdc, obj_request);
 out:
 	if (ret)
@@ -1759,16 +1757,17 @@ static int rbd_dev_header_watch_sync(struct rbd_device *rbd_dev, int start)
 	if (!obj_request->osd_req)
 		goto out_cancel;
 
-	osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_WATCH,
-				rbd_dev->watch_event->cookie,
-				rbd_dev->header.obj_version, start);
-	rbd_osd_req_format(obj_request, true);
-
 	if (start)
 		ceph_osdc_set_request_linger(osdc, obj_request->osd_req);
 	else
 		ceph_osdc_unregister_linger_request(osdc,
 					rbd_dev->watch_request->osd_req);
+
+	osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_WATCH,
+				rbd_dev->watch_event->cookie,
+				rbd_dev->header.obj_version, start);
+	rbd_osd_req_format(obj_request, true);
+
 	ret = rbd_obj_request_submit(osdc, obj_request);
 	if (ret)
 		goto out_cancel;
@@ -1820,9 +1819,9 @@ static int rbd_obj_method_sync(struct rbd_device *rbd_dev,
 			     size_t inbound_size,
 			     u64 *version)
 {
+	struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
 	struct rbd_obj_request *obj_request;
 	struct ceph_osd_data *osd_data;
-	struct ceph_osd_client *osdc;
 	struct page **pages;
 	u32 page_count;
 	int ret;
@@ -1861,7 +1860,6 @@ static int rbd_obj_method_sync(struct rbd_device *rbd_dev,
 	osd_req_op_cls_response_data(obj_request->osd_req, 0, osd_data);
 	rbd_osd_req_format(obj_request, false);
 
-	osdc = &rbd_dev->rbd_client->client->osdc;
 	ret = rbd_obj_request_submit(osdc, obj_request);
 	if (ret)
 		goto out;
@@ -2037,9 +2035,9 @@ static int rbd_obj_read_sync(struct rbd_device *rbd_dev,
 				char *buf,
 				u64 *version)
 {
+	struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
 	struct rbd_obj_request *obj_request;
 	struct ceph_osd_data *osd_data;
-	struct ceph_osd_client *osdc;
 	struct page **pages = NULL;
 	u32 page_count;
 	size_t size;
@@ -2073,7 +2071,6 @@ static int rbd_obj_read_sync(struct rbd_device *rbd_dev,
 	osd_req_op_extent_osd_data(obj_request->osd_req, 0, osd_data);
 	rbd_osd_req_format(obj_request, false);
 
-	osdc = &rbd_dev->rbd_client->client->osdc;
 	ret = rbd_obj_request_submit(osdc, obj_request);
 	if (ret)
 		goto out;