From patchwork Fri May  1 06:23:51 2015
X-Patchwork-Submitter: Akinobu Mita
X-Patchwork-Id: 6308561
From: Akinobu Mita
To: target-devel@vger.kernel.org
Cc: Akinobu Mita, Tim Chen, Herbert Xu, "David S. Miller",
	linux-crypto@vger.kernel.org, Nicholas Bellinger, Sagi Grimberg,
	"Martin K. Petersen", Christoph Hellwig, "James E.J. Bottomley",
	linux-scsi@vger.kernel.org
Subject: [PATCH v4 4/4] target: handle odd SG mapping for data transfer memory
Date: Fri,  1 May 2015 15:23:51 +0900
Message-Id: <1430461431-5936-5-git-send-email-akinobu.mita@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1430461431-5936-1-git-send-email-akinobu.mita@gmail.com>
References: <1430461431-5936-1-git-send-email-akinobu.mita@gmail.com>

sbc_dif_generate() and sbc_dif_verify() currently assume that each SG
element for data transfer memory doesn't straddle the block size
boundary. However, when using the SG_IO ioctl, userspace can supply
data transfer memory that doesn't satisfy that alignment requirement.
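To illustrate (a hypothetical layout, not taken from an actual SG_IO
call): if only 'avail' bytes of a 512-byte block remain in the
currently mapped data SG element, the guard tag can no longer come from
a single crc_t10dif() call over a contiguous buffer; it has to be
accumulated across the element boundary, roughly like this:

	/* Sketch only: 'next_daddr' stands for the mapping of the
	 * following data SG element and 'avail' for the bytes of the
	 * block left in the current one; both are illustrative names,
	 * not identifiers from this patch. */
	__u16 crc;

	crc = crc_t10dif(daddr + offset, avail);
	/* continue the same CRC over the start of the next element */
	crc = crc_t10dif_update(crc, next_daddr, 512 - avail);
	sdt->guard_tag = cpu_to_be16(crc);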
In order to handle such cases correctly, this change inverts the loops:
the outer loop now iterates over the protection information SG list and
the inner loop walks the data transfer memory, which makes it possible
to calculate the CRC for a block that straddles multiple SG elements.

Signed-off-by: Akinobu Mita
Cc: Tim Chen
Cc: Herbert Xu
Cc: "David S. Miller"
Cc: linux-crypto@vger.kernel.org
Cc: Nicholas Bellinger
Cc: Sagi Grimberg
Cc: "Martin K. Petersen"
Cc: Christoph Hellwig
Cc: "James E.J. Bottomley"
Cc: target-devel@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
---
* Changes from v3:
- Fix inconsistent address passed to kunmap_atomic(), reported by Sagi
- Stop the operation in sbc_dif_generate() and sbc_dif_verify() when
  reaching the end of the data SG elements

 drivers/target/target_core_sbc.c | 122 ++++++++++++++++++++++++++-------------
 1 file changed, 83 insertions(+), 39 deletions(-)

diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index b765cdd..4a2df6d 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -1182,27 +1182,50 @@ sbc_dif_generate(struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
 	struct se_dif_v1_tuple *sdt;
-	struct scatterlist *dsg, *psg = cmd->t_prot_sg;
+	struct scatterlist *dsg = cmd->t_data_sg, *psg;
 	sector_t sector = cmd->t_task_lba;
 	void *daddr, *paddr;
 	int i, j, offset = 0;
+	unsigned int block_size = dev->dev_attrib.block_size;
 
-	for_each_sg(cmd->t_data_sg, dsg, cmd->t_data_nents, i) {
-		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
+		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 
-		for (j = 0; j < dsg->length; j += dev->dev_attrib.block_size) {
+		for (j = 0; j < psg->length;
+		     j += sizeof(struct se_dif_v1_tuple)) {
+			__u16 crc;
+			unsigned int avail;
 
-			if (offset >= psg->length) {
-				kunmap_atomic(paddr);
-				psg = sg_next(psg);
-				paddr = kmap_atomic(sg_page(psg)) + psg->offset;
-				offset = 0;
+			if (offset >= dsg->length) {
+				offset -= dsg->length;
+				kunmap_atomic(daddr - dsg->offset);
+				dsg = sg_next(dsg);
+				if (!dsg) {
+					kunmap_atomic(paddr - psg->offset);
+					return;
+				}
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 			}
 
-			sdt = paddr + offset;
-			sdt->guard_tag = cpu_to_be16(crc_t10dif(daddr + j,
-						dev->dev_attrib.block_size));
+			sdt = paddr + j;
+			avail = min(block_size, dsg->length - offset);
+			crc = crc_t10dif(daddr + offset, avail);
+			if (avail < block_size) {
+				kunmap_atomic(daddr - dsg->offset);
+				dsg = sg_next(dsg);
+				if (!dsg) {
+					kunmap_atomic(paddr - psg->offset);
+					return;
+				}
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+				offset = block_size - avail;
+				crc = crc_t10dif_update(crc, daddr, offset);
+			} else {
+				offset += block_size;
+			}
+
+			sdt->guard_tag = cpu_to_be16(crc);
 			if (cmd->prot_type == TARGET_DIF_TYPE1_PROT)
 				sdt->ref_tag = cpu_to_be32(sector & 0xffffffff);
 			sdt->app_tag = 0;
@@ -1215,26 +1238,23 @@ sbc_dif_generate(struct se_cmd *cmd)
 				 be32_to_cpu(sdt->ref_tag));
 
 			sector++;
-			offset += sizeof(struct se_dif_v1_tuple);
 		}
 
-		kunmap_atomic(paddr);
-		kunmap_atomic(daddr);
+		kunmap_atomic(daddr - dsg->offset);
+		kunmap_atomic(paddr - psg->offset);
 	}
 }
 
 static sense_reason_t
 sbc_dif_v1_verify(struct se_cmd *cmd, struct se_dif_v1_tuple *sdt,
-		  const void *p, sector_t sector, unsigned int ei_lba)
+		  __u16 crc, sector_t sector, unsigned int ei_lba)
 {
-	struct se_device *dev = cmd->se_dev;
-	int block_size = dev->dev_attrib.block_size;
 	__be16 csum;
 
 	if (!(cmd->prot_checks & TARGET_DIF_CHECK_GUARD))
 		goto check_ref;
 
-	csum = cpu_to_be16(crc_t10dif(p, block_size));
+	csum = cpu_to_be16(crc);
 
 	if (sdt->guard_tag != csum) {
 		pr_err("DIFv1 checksum failed on sector %llu guard tag 0x%04x"
@@ -1317,26 +1337,36 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 {
 	struct se_device *dev = cmd->se_dev;
 	struct se_dif_v1_tuple *sdt;
-	struct scatterlist *dsg;
+	struct scatterlist *dsg = cmd->t_data_sg;
 	sector_t sector = start;
 	void *daddr, *paddr;
-	int i, j;
+	int i;
 	sense_reason_t rc;
+	int dsg_off = 0;
+	unsigned int block_size = dev->dev_attrib.block_size;
 
-	for_each_sg(cmd->t_data_sg, dsg, cmd->t_data_nents, i) {
-		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+	for (; psg && sector < start + sectors; psg = sg_next(psg)) {
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
+		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 
-		for (j = 0; j < dsg->length; j += dev->dev_attrib.block_size) {
+		for (i = psg_off; i < psg->length &&
+		     sector < start + sectors;
+		     i += sizeof(struct se_dif_v1_tuple)) {
+			__u16 crc;
+			unsigned int avail;
 
-			if (psg_off >= psg->length) {
-				kunmap_atomic(paddr - psg->offset);
-				psg = sg_next(psg);
-				paddr = kmap_atomic(sg_page(psg)) + psg->offset;
-				psg_off = 0;
+			if (dsg_off >= dsg->length) {
+				dsg_off -= dsg->length;
+				kunmap_atomic(daddr - dsg->offset);
+				dsg = sg_next(dsg);
+				if (!dsg) {
+					kunmap_atomic(paddr - psg->offset);
+					return 0;
+				}
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 			}
 
-			sdt = paddr + psg_off;
+			sdt = paddr + i;
 
 			pr_debug("DIF READ sector: %llu guard_tag: 0x%04x"
 				 " app_tag: 0x%04x ref_tag: %u\n",
@@ -1344,27 +1374,41 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 				 sdt->app_tag, be32_to_cpu(sdt->ref_tag));
 
 			if (sdt->app_tag == cpu_to_be16(0xffff)) {
-				sector++;
-				psg_off += sizeof(struct se_dif_v1_tuple);
-				continue;
+				dsg_off += block_size;
+				goto next;
+			}
+
+			avail = min(block_size, dsg->length - dsg_off);
+			crc = crc_t10dif(daddr + dsg_off, avail);
+			if (avail < block_size) {
+				kunmap_atomic(daddr - dsg->offset);
+				dsg = sg_next(dsg);
+				if (!dsg) {
+					kunmap_atomic(paddr - psg->offset);
+					return 0;
+				}
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+				dsg_off = block_size - avail;
+				crc = crc_t10dif_update(crc, daddr, dsg_off);
+			} else {
+				dsg_off += block_size;
 			}
 
-			rc = sbc_dif_v1_verify(cmd, sdt, daddr + j, sector,
-					       ei_lba);
+			rc = sbc_dif_v1_verify(cmd, sdt, crc, sector, ei_lba);
 			if (rc) {
-				kunmap_atomic(paddr - psg->offset);
 				kunmap_atomic(daddr - dsg->offset);
+				kunmap_atomic(paddr - psg->offset);
 				cmd->bad_sector = sector;
 				return rc;
 			}
-
+next:
 			sector++;
 			ei_lba++;
-			psg_off += sizeof(struct se_dif_v1_tuple);
 		}
 
-		kunmap_atomic(paddr - psg->offset);
+		psg_off = 0;
 		kunmap_atomic(daddr - dsg->offset);
+		kunmap_atomic(paddr - psg->offset);
 	}
 
 	return 0;
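
A note for reviewers (not part of the patch): the rework relies on the
invariant that computing the T10-DIF CRC of a block in two pieces with
crc_t10dif() followed by crc_t10dif_update() gives the same guard tag
as a single pass over the whole block, for any split point. A minimal
self-check sketch of that invariant, assuming the crc_t10dif_update()
helper introduced earlier in this series:

	#include <linux/types.h>
	#include <linux/crc-t10dif.h>

	/* Return true if every two-piece split of a 512-byte block
	 * produces the same CRC as a single-pass computation. */
	static bool split_crc_matches(const unsigned char *block)
	{
		__u16 whole = crc_t10dif(block, 512);
		unsigned int split;

		for (split = 0; split <= 512; split++) {
			/* CRC of the first 'split' bytes... */
			__u16 crc = crc_t10dif(block, split);

			/* ...continued over the remaining bytes */
			crc = crc_t10dif_update(crc, block + split,
						512 - split);
			if (crc != whole)
				return false;
		}
		return true;
	}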