From patchwork Mon May 8 18:42:00 2017
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 9716533
Subject: Re: [PATCH] use rbd_aio_readv/_writev instead of bounce_buffer
To: David Butterfield, target-devel@vger.kernel.org
References: <20170507045908.19579-1-dab21774@gmail.com>
From: Mike Christie
Message-ID: <5910BBF8.1040607@redhat.com>
Date: Mon, 8 May 2017 13:42:00 -0500
In-Reply-To: <20170507045908.19579-1-dab21774@gmail.com>
X-Mailing-List: target-devel@vger.kernel.org

On 05/06/2017 11:59 PM, David Butterfield wrote:
> This change may need adjustment for local style conventions;
> but it's what I did in my copy to use the rbd iovec functions
> instead of copying through a bounce_buffer. Note all the extra
> conditional checks are done at compile-time.
> 
> Signed-off-by: David Butterfield
> ---
>  rbd.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++-------------------
>  1 file changed, 49 insertions(+), 19 deletions(-)
> 
> diff --git a/rbd.c b/rbd.c
> index c8019b5..1b80d14 100644
> --- a/rbd.c
> +++ b/rbd.c
> @@ -37,6 +37,17 @@
>  
>  #include 
>  
> +/* Use gcc -DCONFIG_TCMU_NO_IOV=0 to enable use of RBD iovec calls */
> +/* Use gcc -DCONFIG_TCMU_NO_IOV=1 to disable use of RBD iovec calls */
> +#ifndef CONFIG_TCMU_NO_IOV
> +#define CONFIG_TCMU_NO_IOV true	/* default for now is to assume older library */
> +#endif
> +
> +#if CONFIG_TCMU_NO_IOV
> +#define rbd_aio_readv(image, iov, iov_cnt, offset, completion)  -EIO
> +#define rbd_aio_writev(image, iov, iov_cnt, offset, completion) -EIO
> +#endif
> +

I think we can limit the checks and merge some common code. Here is a
completely untested patch.


diff --git a/rbd.c b/rbd.c
index 40f16ea..a37269b 100644
--- a/rbd.c
+++ b/rbd.c
@@ -61,6 +61,7 @@ struct rbd_aio_cb {
 	struct tcmulib_cmd *tcmulib_cmd;
 	int64_t length;
+	bool is_read;
 	char *bounce_buffer;
 };
 
@@ -451,13 +452,13 @@ static void tcmu_rbd_close(struct tcmu_device *dev)
  * allocation errors.
  */
-static void rbd_finish_aio_read(rbd_completion_t completion,
+static void tcmu_rbd_finish_aio(rbd_completion_t completion,
 				struct rbd_aio_cb *aio_cb)
 {
 	struct tcmu_device *dev = aio_cb->dev;
-	struct tcmulib_cmd *tcmulib_cmd = aio_cb->tcmulib_cmd;
-	struct iovec *iovec = tcmulib_cmd->iovec;
-	size_t iov_cnt = tcmulib_cmd->iov_cnt;
+	struct tcmulib_cmd *cmd = aio_cb->tcmulib_cmd;
+	struct iovec *iovec = cmd->iovec;
+	size_t iov_cnt = cmd->iov_cnt;
 	int64_t ret;
 	int tcmu_r;
 
@@ -465,29 +466,92 @@ static void rbd_finish_aio_read(rbd_completion_t completion,
 	rbd_aio_release(completion);
 
 	if (ret == -ESHUTDOWN) {
-		tcmu_r = tcmu_set_sense_data(tcmulib_cmd->sense_buf,
+		tcmu_r = tcmu_set_sense_data(cmd->sense_buf,
 					     NOT_READY, ASC_PORT_IN_STANDBY,
 					     NULL);
 	} else if (ret < 0) {
-		tcmu_r = tcmu_set_sense_data(tcmulib_cmd->sense_buf,
-					     MEDIUM_ERROR, ASC_READ_ERROR, NULL);
+		tcmu_r = tcmu_set_sense_data(cmd->sense_buf, MEDIUM_ERROR,
+					     aio_cb->is_read ? ASC_READ_ERROR :
+					     ASC_WRITE_ERROR, NULL);
 	} else {
+		if (aio_cb->is_read && aio_cb->bounce_buffer)
+			tcmu_memcpy_into_iovec(iovec, iov_cnt,
+					       aio_cb->bounce_buffer,
+					       aio_cb->length);
 		tcmu_r = SAM_STAT_GOOD;
-		tcmu_memcpy_into_iovec(iovec, iov_cnt,
-				       aio_cb->bounce_buffer, aio_cb->length);
 	}
 
-	tcmulib_cmd->done(dev, tcmulib_cmd, tcmu_r);
+	cmd->done(dev, cmd, tcmu_r);
 
-	free(aio_cb->bounce_buffer);
+	if (aio_cb->bounce_buffer)
+		free(aio_cb->bounce_buffer);
 	free(aio_cb);
 }
 
-static int tcmu_rbd_read(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
-			 struct iovec *iov, size_t iov_cnt, size_t length,
-			 off_t offset)
+#ifdef LIBRBD_SUPPORTS_IOVEC
+
+static rbd_image_t tcmu_dev_to_image(struct tcmu_device *dev)
+{
+	struct tcmu_rbd_state *state = tcmu_get_dev_private(dev);
+	return state->image;
+}
+
+#define tcmu_rbd_aio_read(dev, aio_cb, completion, iov, iov_cnt, length, \
+			  offset) \
+	rbd_aio_readv(tcmu_dev_to_image(dev), iov, iov_cnt, offset, completion)
+
+#define tcmu_rbd_aio_write(dev, aio_cb, completion, iov, iov_cnt, length, \
+			   offset) \
+	rbd_aio_writev(tcmu_dev_to_image(dev), iov, iov_cnt, offset, completion)
+
+#else
+
+static int tcmu_rbd_aio_read(struct tcmu_device *dev, struct rbd_aio_cb *aio_cb,
+			     rbd_completion_t completion, struct iovec *iov,
+			     size_t iov_cnt, size_t length, off_t offset)
 {
 	struct tcmu_rbd_state *state = tcmu_get_dev_private(dev);
+	int ret;
+
+	aio_cb->bounce_buffer = malloc(length);
+	if (!aio_cb->bounce_buffer) {
+		tcmu_dev_err(dev, "Could not allocate bounce buffer.\n");
+		return -ENOMEM;
+	}
+
+	ret = rbd_aio_read(state->image, offset, length, aio_cb->bounce_buffer,
+			   completion);
+	if (ret < 0)
+		free(aio_cb->bounce_buffer);
+	return ret;
+}
+
+static int tcmu_rbd_aio_write(struct tcmu_device *dev, struct rbd_aio_cb *aio_cb,
+			      rbd_completion_t completion, struct iovec *iov,
+			      size_t iov_cnt, size_t length, off_t offset)
+{
+	struct tcmu_rbd_state *state = tcmu_get_dev_private(dev);
+	int ret;
+
+	aio_cb->bounce_buffer = malloc(length);
+	if (!aio_cb->bounce_buffer) {
+		tcmu_dev_err(dev, "Failed to allocate bounce buffer.\n");
+		return -ENOMEM;
+	}
+
+	tcmu_memcpy_from_iovec(aio_cb->bounce_buffer, length, iov, iov_cnt);
+
+	ret = rbd_aio_write(state->image, offset,
+			    length, aio_cb->bounce_buffer, completion);
+	if (ret < 0)
+		free(aio_cb->bounce_buffer);
+	return ret;
+}
+
+#endif
+
+static int tcmu_rbd_read(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
+			 struct iovec *iov, size_t iov_cnt, size_t length,
+			 off_t offset)
+{
 	struct rbd_aio_cb *aio_cb;
 	rbd_completion_t completion;
 	ssize_t ret;
@@ -501,21 +565,15 @@ static int tcmu_rbd_read(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
 	aio_cb->dev = dev;
 	aio_cb->length = length;
 	aio_cb->tcmulib_cmd = cmd;
+	aio_cb->is_read = 1;
 
-	aio_cb->bounce_buffer = malloc(length);
-	if (!aio_cb->bounce_buffer) {
-		tcmu_dev_err(dev, "Could not allocate bounce buffer.\n");
+	ret = rbd_aio_create_completion(aio_cb, (rbd_callback_t)
+					tcmu_rbd_finish_aio, &completion);
+	if (ret < 0)
 		goto out_free_aio_cb;
-	}
 
-	ret = rbd_aio_create_completion
-		(aio_cb, (rbd_callback_t) rbd_finish_aio_read, &completion);
-	if (ret < 0) {
-		goto out_free_bounce_buffer;
-	}
-
-	ret = rbd_aio_read(state->image, offset, length, aio_cb->bounce_buffer,
-			   completion);
+	ret = tcmu_rbd_aio_read(dev, aio_cb, completion, iov, iov_cnt,
+				length, offset);
 	if (ret < 0) {
 		goto out_remove_tracked_aio;
 	}
@@ -524,51 +582,17 @@ static int tcmu_rbd_read(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
 
 out_remove_tracked_aio:
 	rbd_aio_release(completion);
-out_free_bounce_buffer:
-	free(aio_cb->bounce_buffer);
 out_free_aio_cb:
 	free(aio_cb);
 out:
 	return SAM_STAT_TASK_SET_FULL;
 }
 
-static void rbd_finish_aio_generic(rbd_completion_t completion,
-				   struct rbd_aio_cb *aio_cb)
-{
-	struct tcmu_device *dev = aio_cb->dev;
-	struct tcmulib_cmd *tcmulib_cmd = aio_cb->tcmulib_cmd;
-	int64_t ret;
-	int tcmu_r;
-
-	ret = rbd_aio_get_return_value(completion);
-	rbd_aio_release(completion);
-
-	if (ret == -ESHUTDOWN) {
-		tcmu_r = tcmu_set_sense_data(tcmulib_cmd->sense_buf,
-					     NOT_READY, ASC_PORT_IN_STANDBY,
-					     NULL);
-	} else if (ret < 0) {
-		tcmu_r = tcmu_set_sense_data(tcmulib_cmd->sense_buf,
-					     MEDIUM_ERROR, ASC_WRITE_ERROR,
-					     NULL);
-	} else {
-		tcmu_r = SAM_STAT_GOOD;
-	}
-
-	tcmulib_cmd->done(dev, tcmulib_cmd, tcmu_r);
-
-	if (aio_cb->bounce_buffer) {
-		free(aio_cb->bounce_buffer);
-	}
-	free(aio_cb);
-}
-
 static int tcmu_rbd_write(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
 			  struct iovec *iov, size_t iov_cnt, size_t length,
 			  off_t offset)
 {
-	struct tcmu_rbd_state *state = tcmu_get_dev_private(dev);
 	struct rbd_aio_cb *aio_cb;
 	rbd_completion_t completion;
 	ssize_t ret;
@@ -583,22 +607,14 @@ static int tcmu_rbd_write(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
 	aio_cb->length = length;
 	aio_cb->tcmulib_cmd = cmd;
 
-	aio_cb->bounce_buffer = malloc(length);
-	if (!aio_cb->bounce_buffer) {
-		tcmu_dev_err(dev, "Failed to allocate bounce buffer.\n");
-		goto out_free_aio_cb;
-	}
-
-	tcmu_memcpy_from_iovec(aio_cb->bounce_buffer, length, iov, iov_cnt);
-
-	ret = rbd_aio_create_completion
-		(aio_cb, (rbd_callback_t) rbd_finish_aio_generic, &completion);
+	ret = rbd_aio_create_completion(aio_cb, (rbd_callback_t)
+					tcmu_rbd_finish_aio, &completion);
 	if (ret < 0) {
-		goto out_free_bounce_buffer;
+		goto out_free_aio_cb;
 	}
 
-	ret = rbd_aio_write(state->image, offset,
-			    length, aio_cb->bounce_buffer, completion);
+	ret = tcmu_rbd_aio_write(dev, aio_cb, completion, iov, iov_cnt,
+				 length, offset);
 	if (ret < 0) {
 		goto out_remove_tracked_aio;
 	}
@@ -607,8 +623,6 @@ static int tcmu_rbd_write(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
 
 out_remove_tracked_aio:
 	rbd_aio_release(completion);
-out_free_bounce_buffer:
-	free(aio_cb->bounce_buffer);
 out_free_aio_cb:
 	free(aio_cb);
 out:
@@ -632,8 +646,8 @@ static int tcmu_rbd_flush(struct tcmu_device *dev, struct tcmulib_cmd *cmd)
 	aio_cb->tcmulib_cmd = cmd;
 	aio_cb->bounce_buffer = NULL;
 
-	ret = rbd_aio_create_completion
-		(aio_cb, (rbd_callback_t) rbd_finish_aio_generic, &completion);
+	ret = rbd_aio_create_completion(aio_cb, (rbd_callback_t)
+					tcmu_rbd_finish_aio, &completion);
 	if (ret < 0) {
 		goto out_free_aio_cb;
 	}