From patchwork Sat Apr 4 12:24:28 2015
X-Patchwork-Submitter: Akinobu Mita
X-Patchwork-Id: 6160851
From: Akinobu Mita
To: target-devel@vger.kernel.org
Cc: Akinobu Mita, Nicholas Bellinger, Asias He, "Martin K. Petersen",
 Christoph Hellwig, "James E.J. Bottomley", linux-scsi@vger.kernel.org
Subject: [PATCH 2/2] target/rd: Don't pass incomplete scatterlist entries to
 sbc_dif_verify_*
Date: Sat, 4 Apr 2015 21:24:28 +0900
Message-Id: <1428150268-30260-2-git-send-email-akinobu.mita@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1428150268-30260-1-git-send-email-akinobu.mita@gmail.com>
References: <1428150268-30260-1-git-send-email-akinobu.mita@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

The scatterlist for protection information that is passed to
sbc_dif_verify_read() or sbc_dif_verify_write() requires that neighboring
scatterlist entries be contiguous or chained so that they can be iterated
by sg_next(). However, the protection information for the RD-MCP backend
can be located in multiple scatterlist arrays when the ramdisk space is
too large.
So if a read/write request straddles this boundary, sbc_dif_verify_read()
or sbc_dif_verify_write() cannot iterate over all scatterlist entries.
Fix this by allocating a temporary scatterlist when needed.

Signed-off-by: Akinobu Mita
Cc: Nicholas Bellinger
Cc: Asias He
Cc: "Martin K. Petersen"
Cc: Christoph Hellwig
Cc: "James E.J. Bottomley"
Cc: target-devel@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
---
 drivers/target/target_core_rd.c | 39 +++++++++++++++++++++++++++++++++++----
 1 file changed, 35 insertions(+), 4 deletions(-)

diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index 4d614c9..19c893d 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -387,11 +387,12 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_write)
 	struct se_device *se_dev = cmd->se_dev;
 	struct rd_dev *dev = RD_DEV(se_dev);
 	struct rd_dev_sg_table *prot_table;
+	bool need_to_release = false;
 	struct scatterlist *prot_sg;
 	u32 sectors = cmd->data_length / se_dev->dev_attrib.block_size;
-	u32 prot_offset, prot_page;
+	u32 prot_offset, prot_page, prot_npages;
 	u64 tmp;
-	sense_reason_t rc;
+	sense_reason_t rc = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	sense_reason_t (*dif_verify)(struct se_cmd *, sector_t, unsigned int,
 				     unsigned int, struct scatterlist *, int) =
 		is_write ? sbc_dif_verify_write : sbc_dif_verify_read;
@@ -404,10 +405,40 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_write)
 	if (!prot_table)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
-	prot_sg = &prot_table->sg_table[prot_page -
-					prot_table->page_start_offset];
+	prot_npages = DIV_ROUND_UP(prot_offset + sectors * se_dev->prot_length,
+				   PAGE_SIZE);
+
+	/* prot pages straddles multiple scatterlist tables */
+	if (prot_table->page_end_offset < prot_page + prot_npages - 1) {
+		int i;
+
+		prot_sg = kcalloc(prot_npages, sizeof(*prot_sg), GFP_KERNEL);
+		if (!prot_sg)
+			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+
+		need_to_release = true;
+		sg_init_table(prot_sg, prot_npages);
+
+		for (i = 0; i < prot_npages; i++) {
+			if (prot_page + i > prot_table->page_end_offset) {
+				prot_table = rd_get_prot_table(dev,
+							       prot_page + i);
+				if (!prot_table)
+					goto out;
+				sg_unmark_end(&prot_sg[i - 1]);
+			}
+			prot_sg[i] = prot_table->sg_table[prot_page + i -
+					prot_table->page_start_offset];
+		}
+	} else {
+		prot_sg = &prot_table->sg_table[prot_page -
+						prot_table->page_start_offset];
+	}
 
 	rc = dif_verify(cmd, cmd->t_task_lba, sectors, 0, prot_sg, prot_offset);
+out:
+	if (need_to_release)
+		kfree(prot_sg);
 
 	return rc;
 }
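
For readers unfamiliar with the RD-MCP table layout, here is a minimal
stand-alone sketch of the boundary test the patch introduces. The struct
sg_table_span type and the sample page numbers below are hypothetical
illustrations (not kernel API); they only mirror the
"page_end_offset < prot_page + prot_npages - 1" condition used above.

/*
 * Hypothetical userspace sketch (not part of the patch): decide whether
 * the protection pages for a request fit within one sg table span or
 * straddle into the next one.
 */
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the page range covered by one sg table. */
struct sg_table_span {
	unsigned int page_start_offset;	/* first page index in this table */
	unsigned int page_end_offset;	/* last page index in this table  */
};

static bool prot_pages_straddle(const struct sg_table_span *t,
				unsigned int prot_page,
				unsigned int prot_npages)
{
	/* Same condition as the patch: the last page we need lies past
	 * the end of the table that holds the first page. */
	return t->page_end_offset < prot_page + prot_npages - 1;
}

int main(void)
{
	struct sg_table_span t = { .page_start_offset = 0,
				   .page_end_offset  = 31 };

	/* Pages 10..13 fit entirely within this table: prints 0. */
	printf("10..13 straddles? %d\n", prot_pages_straddle(&t, 10, 4));
	/* Pages 30..33 cross into the next table: prints 1. */
	printf("30..33 straddles? %d\n", prot_pages_straddle(&t, 30, 4));
	return 0;
}

When that condition holds, the patch copies the needed entries from each
successive table into one freshly allocated array so that sg_next() can
walk them as a single list, and frees the copy once dif_verify() returns.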