From patchwork Fri Apr 19 22:50:17 2013
From: Alex Elder
Date: Fri, 19 Apr 2013 17:50:17 -0500
To: ceph-devel
Subject: [PATCH 3/4] rbd: define zero_pages()
Message-ID: <5171CA29.7000500@inktank.com>
In-Reply-To: <5171C963.2050402@inktank.com>
References: <5171C963.2050402@inktank.com>
X-Mailing-List: ceph-devel@vger.kernel.org

Define a new function zero_pages() that zeroes a range of memory
defined by a page array, along the lines of zero_bio_chain().  It
saves and restores the irq flags like bvec_kmap_irq() does, though
I'm not sure at this point that it's necessary.

Update rbd_img_obj_request_read_callback() to use the new function
if the object request contains page data rather than bio data.  For
the moment, only bio data is used for osd READ ops.
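
For illustration only (not part of the patch): the page walk can be
sketched in standalone userspace C.  The SK_PAGE_* constants and
zero_pages_sketch() are hypothetical stand-ins for the kernel's
PAGE_* macros and zero_pages(), and a plain memset() replaces the
kmap_atomic()/kunmap_atomic() mapping.

#include <stdint.h>
#include <string.h>

#define SK_PAGE_SHIFT	12
#define SK_PAGE_SIZE	((uint64_t)1 << SK_PAGE_SHIFT)
#define SK_PAGE_MASK	(~(SK_PAGE_SIZE - 1))

static void zero_pages_sketch(unsigned char **pages, uint64_t offset,
			      uint64_t end)
{
	/* start at the page containing the first byte of the range */
	unsigned char **page = &pages[offset >> SK_PAGE_SHIFT];

	while (offset < end) {
		/* byte offset within the current page */
		size_t page_offset = (size_t)(offset & ~SK_PAGE_MASK);
		/* zero to the end of this page, or to "end" if sooner */
		size_t length = (size_t)(SK_PAGE_SIZE - page_offset);

		if (length > end - offset)
			length = (size_t)(end - offset);
		/* the kernel version brackets this memset() with
		 * kmap_atomic() and local_irq_save()/restore() */
		memset(*page + page_offset, 0, length);

		offset += length;
		page++;
	}
}

The kernel version additionally saves and restores the irq flags
around each mapping, mirroring bvec_kmap_irq(), as noted above.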
Signed-off-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 55 +++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 47 insertions(+), 8 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 894af4f..ac9abab 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -971,6 +971,37 @@ static void zero_bio_chain(struct bio *chain, int start_ofs)
 }
 
 /*
+ * similar to zero_bio_chain(), zeros data defined by a page array,
+ * starting at the given byte offset from the start of the array and
+ * continuing up to the given end offset.  The pages array is
+ * assumed to be big enough to hold all bytes up to the end.
+ */
+static void zero_pages(struct page **pages, u64 offset, u64 end)
+{
+	struct page **page = &pages[offset >> PAGE_SHIFT];
+
+	rbd_assert(end > offset);
+	rbd_assert(end - offset <= (u64)SIZE_MAX);
+	while (offset < end) {
+		size_t page_offset;
+		size_t length;
+		unsigned long flags;
+		void *kaddr;
+
+		page_offset = (size_t)(offset & ~PAGE_MASK);
+		length = min(PAGE_SIZE - page_offset, (size_t)(end - offset));
+		local_irq_save(flags);
+		kaddr = kmap_atomic(*page);
+		memset(kaddr + page_offset, 0, length);
+		kunmap_atomic(kaddr);
+		local_irq_restore(flags);
+
+		offset += length;
+		page++;
+	}
+}
+
+/*
  * Clone a portion of a bio, starting at the given byte offset
  * and continuing for the number of bytes indicated.
  */
@@ -1352,9 +1383,12 @@ static bool img_request_layered_test(struct rbd_img_request *img_request)
 static void
 rbd_img_obj_request_read_callback(struct rbd_obj_request *obj_request)
 {
+	u64 xferred = obj_request->xferred;
+	u64 length = obj_request->length;
+
 	dout("%s: obj %p img %p result %d %llu/%llu\n", __func__,
 		obj_request, obj_request->img_request, obj_request->result,
-		obj_request->xferred, obj_request->length);
+		xferred, length);
 	/*
 	 * ENOENT means a hole in the image.  We zero-fill the
 	 * entire length of the request.  A short read also implies
@@ -1362,15 +1396,20 @@ rbd_img_obj_request_read_callback(struct rbd_obj_request *obj_request)
 	 * update the xferred count to indicate the whole request
 	 * was satisfied.
	 */
-	BUG_ON(obj_request->type != OBJ_REQUEST_BIO);
+	rbd_assert(obj_request->type != OBJ_REQUEST_NODATA);
 	if (obj_request->result == -ENOENT) {
-		zero_bio_chain(obj_request->bio_list, 0);
+		if (obj_request->type == OBJ_REQUEST_BIO)
+			zero_bio_chain(obj_request->bio_list, 0);
+		else
+			zero_pages(obj_request->pages, 0, length);
 		obj_request->result = 0;
-		obj_request->xferred = obj_request->length;
-	} else if (obj_request->xferred < obj_request->length &&
-			!obj_request->result) {
-		zero_bio_chain(obj_request->bio_list, obj_request->xferred);
-		obj_request->xferred = obj_request->length;
+		obj_request->xferred = length;
+	} else if (xferred < length && !obj_request->result) {
+		if (obj_request->type == OBJ_REQUEST_BIO)
+			zero_bio_chain(obj_request->bio_list, xferred);
+		else
+			zero_pages(obj_request->pages, xferred, length);
+		obj_request->xferred = length;
 	}
 	obj_request_done_set(obj_request);
 }
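
A quick harness for the userspace sketch above (again illustrative,
not part of the patch; it reuses the hypothetical zero_pages_sketch()
and SK_PAGE_* definitions) shows a short read being zero-filled
across a page boundary, i.e. the xferred < length path that
rbd_img_obj_request_read_callback() handles:

#include <assert.h>
#include <stdlib.h>

int main(void)
{
	unsigned char *pages[2];
	int i;

	/* two 4096-byte "pages" filled with a nonzero pattern */
	for (i = 0; i < 2; i++) {
		pages[i] = malloc(SK_PAGE_SIZE);
		memset(pages[i], 0xaa, SK_PAGE_SIZE);
	}

	/* emulate a 5000-byte read satisfied only up to byte 4000:
	 * bytes [4000, 5000) must be zero-filled */
	zero_pages_sketch(pages, 4000, 5000);

	assert(pages[0][3999] == 0xaa);	/* before the range: untouched */
	assert(pages[0][4095] == 0);	/* zeroed through the page edge */
	assert(pages[1][903] == 0);	/* last zeroed byte on page 1 */
	assert(pages[1][904] == 0xaa);	/* past the range: untouched */

	for (i = 0; i < 2; i++)
		free(pages[i]);
	return 0;
}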