From patchwork Thu Oct 11 02:19:55 2012
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 1580491
Message-ID: <50762CCB.5040007@inktank.com>
Date: Wed, 10 Oct 2012 19:19:55 -0700
From: Alex Elder
To: ceph-devel@vger.kernel.org
Subject: [PATCH 3/3] rbd: consolidate rbd_do_op() calls
References: <50762C54.40101@inktank.com>
In-Reply-To: <50762C54.40101@inktank.com>

The two calls to rbd_do_op() from rbd_rq_fn() differ only in the
values passed for the snapshot id and the snapshot context.  For
reads, the snapshot id always comes from the mapping and the
snapshot context is always null.  For writes, the snapshot id is
always CEPH_NOSNAP, and the snapshot context always comes from the
rbd header--but that context is acquired under protection of the
header semaphore and could change thereafter, so we can't simply
use what's available inside rbd_do_op().

Eliminate the snapid parameter from rbd_do_op(), and set it based
on the I/O direction inside that function instead.  Always pass the
snapshot context acquired in the caller, but reset it to a null
pointer inside rbd_do_op() if the operation is a read.

As a result, there is no difference between the read and write
calls to rbd_do_op() made in rbd_rq_fn(), so just call it
unconditionally.
Signed-off-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 396af14..ca28036 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1163,7 +1163,6 @@ done:
 static int rbd_do_op(struct request *rq,
		     struct rbd_device *rbd_dev,
		     struct ceph_snap_context *snapc,
-		     u64 snapid,
		     u64 ofs, u64 len,
		     struct bio *bio,
		     struct rbd_req_coll *coll,
@@ -1177,6 +1176,7 @@ static int rbd_do_op(struct request *rq,
	u32 payload_len;
	int opcode;
	int flags;
+	u64 snapid;

	seg_name = rbd_segment_name(rbd_dev, ofs);
	if (!seg_name)
@@ -1187,10 +1187,13 @@ static int rbd_do_op(struct request *rq,
	if (rq_data_dir(rq) == WRITE) {
		opcode = CEPH_OSD_OP_WRITE;
		flags = CEPH_OSD_FLAG_WRITE|CEPH_OSD_FLAG_ONDISK;
+		snapid = CEPH_NOSNAP;
		payload_len = seg_len;
	} else {
		opcode = CEPH_OSD_OP_READ;
		flags = CEPH_OSD_FLAG_READ;
+		snapc = NULL;
+		snapid = rbd_dev->mapping.snap_id;
		payload_len = 0;
	}

@@ -1518,24 +1521,13 @@ static void rbd_rq_fn(struct request_queue *q)
			kref_get(&coll->kref);
			bio = bio_chain_clone(&rq_bio, &next_bio, &bp,
					      op_size, GFP_ATOMIC);
-			if (!bio) {
+			if (bio)
+				(void) rbd_do_op(rq, rbd_dev, snapc,
+						 ofs, op_size,
+						 bio, coll, cur_seg);
+			else
				rbd_coll_end_req_index(rq, coll, cur_seg,
						       -ENOMEM, op_size);
-				goto next_seg;
-			}
-
-			/* init OSD command: write or read */
-			if (do_write)
-				(void) rbd_do_op(rq, rbd_dev,
-						 snapc, CEPH_NOSNAP,
-						 ofs, op_size, bio,
-						 coll, cur_seg);
-			else
-				(void) rbd_do_op(rq, rbd_dev,
-						 NULL, rbd_dev->mapping.snap_id,
-						 ofs, op_size, bio,
-						 coll, cur_seg);
-next_seg:
			size -= op_size;
			ofs += op_size;
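
For readers following along, below is a minimal user-space sketch of
the snapshot-selection rule this patch moves into rbd_do_op().  It is
not kernel code: struct snap_context, enum io_dir, and choose_snap()
are illustrative stand-ins for the real kernel types and control flow,
and CEPH_NOSNAP is stubbed with the ((u64)(-2)) value the Ceph headers
use.  Only the if/else mirrors the patch.

	#include <stdio.h>
	#include <stdint.h>
	#include <stddef.h>

	#define CEPH_NOSNAP ((uint64_t)(-2))	/* stub of the Ceph header value */

	struct snap_context { uint64_t seq; };	/* stand-in for ceph_snap_context */

	enum io_dir { DIR_READ, DIR_WRITE };	/* stand-in for rq_data_dir(rq) */

	/*
	 * Mirrors the consolidated body of rbd_do_op(): the caller always
	 * passes the header's snap context; reads discard it and use the
	 * mapped snapshot id, writes keep it and use CEPH_NOSNAP.
	 */
	static void choose_snap(enum io_dir dir, uint64_t mapped_snap_id,
				struct snap_context *snapc,
				struct snap_context **out_snapc,
				uint64_t *out_snapid)
	{
		if (dir == DIR_WRITE) {
			*out_snapid = CEPH_NOSNAP;	/* writes never name a snapshot */
			*out_snapc = snapc;		/* keep the caller's context */
		} else {
			*out_snapc = NULL;		/* reads need no snap context */
			*out_snapid = mapped_snap_id;	/* read the mapped snapshot */
		}
	}

	int main(void)
	{
		struct snap_context header_snapc = { .seq = 7 };
		struct snap_context *snapc;
		uint64_t snapid;

		choose_snap(DIR_READ, 5, &header_snapc, &snapc, &snapid);
		printf("read:  snapc=%p snapid=%llu\n", (void *)snapc,
		       (unsigned long long)snapid);

		choose_snap(DIR_WRITE, 5, &header_snapc, &snapc, &snapid);
		printf("write: snapc=%p snapid=%llu\n", (void *)snapc,
		       (unsigned long long)snapid);
		return 0;
	}

The point of passing the context through unconditionally is that the
header-semaphore-protected acquisition stays in the caller, while
rbd_do_op() owns all the direction-dependent details, so the two call
sites in rbd_rq_fn() collapse into one.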