From patchwork Thu Mar 16 14:39:44 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Milan Broz <gmazyland@gmail.com>
X-Patchwork-Id: 9628423
X-Patchwork-Delegate: snitzer@redhat.com
From: Milan Broz <gmazyland@gmail.com>
To: dm-devel@redhat.com
Cc: Milan Broz <gmazyland@gmail.com>
Date: Thu, 16 Mar 2017 15:39:44 +0100
Message-Id: <20170316143944.19843-8-gmazyland@gmail.com>
In-Reply-To: <20170316143944.19843-1-gmazyland@gmail.com>
References: <20170316143944.19843-1-gmazyland@gmail.com>
Subject: [dm-devel] [PATCH 7/7] dm-crypt: optionally support larger encryption sector size
List-Id: device-mapper development <dm-devel.redhat.com>
Sender: dm-devel-bounces@redhat.com
This patch adds an optional "sector_size" parameter that specifies the
encryption sector size (the atomic unit of block device encryption).

The parameter can be in the range of 512 to 4096 bytes and must be a power of
two. For compatibility reasons, the maximal IO must still fit into a single
page, so the upper limit is set to the minimal possible page size (4096 bytes).

NOTE: a device with this parameter set cannot yet be handled directly by
cryptsetup.

The IV for a sector is calculated from the 512-byte sector offset unless the
iv_large_sectors option is used.
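To make that concrete, the following stand-alone user-space C sketch
(illustration only, not part of the patch) reproduces the IV sector numbering
described above and in the documentation change below; the plain division
stands in for the kernel's sector_div() helper:

  /* Sketch: IV sector numbering with and without iv_large_sectors. */
  #include <stdint.h>
  #include <stdio.h>

  #define SECTOR_SHIFT 9 /* 512-byte units, as in the kernel */

  static uint64_t iv_sector(uint64_t cc_sector, unsigned int sector_size,
                            int iv_large_sectors)
  {
          uint64_t iv = cc_sector; /* offset in 512-byte sectors */

          /* Mirrors the sector_div() call added by this patch. */
          if (iv_large_sectors)
                  iv /= sector_size >> SECTOR_SHIFT;
          return iv;
  }

  int main(void)
  {
          /* The second 4096-byte sector starts at 512-byte sector 8. */
          printf("without flag: %llu\n",
                 (unsigned long long)iv_sector(8, 4096, 0)); /* -> 8 */
          printf("with iv_large_sectors: %llu\n",
                 (unsigned long long)iv_sector(8, 4096, 1)); /* -> 1 */
          return 0;
  }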
Test script using dmsetup:

  DEV="/dev/sdb"
  DEV_SIZE=$(blockdev --getsz $DEV)
  KEY="9c1185a5c5e9fc54612808977ee8f548b2258d31ddadef707ba62c166051b9e3cd0294c27515f2bccee924e8823ca6e124b8fc3167ed478bca702babe4e130ac"
  BLOCK_SIZE=4096

  dmsetup create test_crypt --table "0 $DEV_SIZE crypt aes-xts-plain64 $KEY 0 $DEV 0 1 sector_size:$BLOCK_SIZE"
  #dmsetup table --showkeys test_crypt

Signed-off-by: Milan Broz <gmazyland@gmail.com>
---
 Documentation/device-mapper/dm-crypt.txt |  14 +++++
 drivers/md/dm-crypt.c                    | 105 ++++++++++++++++++++++++-------
 2 files changed, 96 insertions(+), 23 deletions(-)

diff --git a/Documentation/device-mapper/dm-crypt.txt b/Documentation/device-mapper/dm-crypt.txt
index 8140b71f3c54..3b3e1de21c9c 100644
--- a/Documentation/device-mapper/dm-crypt.txt
+++ b/Documentation/device-mapper/dm-crypt.txt
@@ -122,6 +122,20 @@ integrity:<bytes>:<type>
 	integrity for the encrypted device. The additional space is then
 	used for storing authentication tag (and persistent IV if needed).
 
+sector_size:<bytes>
+	Use <bytes> as the encryption unit instead of 512 bytes sectors.
+	This option can be in range 512 - 4096 bytes and must be power of two.
+	Virtual device will announce this size as a minimal IO and logical sector.
+
+iv_large_sectors
+	IV generators will use sector number counted in <sector_size> units
+	instead of default 512 bytes sectors.
+
+	For example, if <sector_size> is 4096 bytes, plain64 IV for the second
+	sector will be 8 (without flag) and 1 if iv_large_sectors is present.
+	The <iv_offset> must be multiple of <sector_size> (in 512 bytes units)
+	if this flag is specified.
+
 Example scripts
 ===============
 LUKS (Linux Unified Key Setup) is now the preferred way to set up disk

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 270adba14717..83fa03a2e60b 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -129,6 +129,7 @@ enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
 
 enum cipher_flags {
 	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cihper */
+	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
 };
 
 /*
@@ -171,6 +172,7 @@ struct crypt_config {
 	} iv_gen_private;
 	sector_t iv_offset;
 	unsigned int iv_size;
+	unsigned int sector_size;
 
 	/* ESSIV: struct crypto_cipher *essiv_tfm */
 	void *iv_private;
@@ -524,6 +526,11 @@ static int crypt_iv_lmk_ctr(struct crypt_config *cc, struct dm_target *ti,
 {
 	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
 
+	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
+		ti->error = "Unsupported sector size for LMK";
+		return -EINVAL;
+	}
+
 	lmk->hash_tfm = crypto_alloc_shash("md5", 0, 0);
 	if (IS_ERR(lmk->hash_tfm)) {
 		ti->error = "Error initializing LMK hash";
@@ -677,6 +684,11 @@ static int crypt_iv_tcw_ctr(struct crypt_config *cc, struct dm_target *ti,
 {
 	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
 
+	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
+		ti->error = "Unsupported sector size for TCW";
+		return -EINVAL;
+	}
+
 	if (cc->key_size <= (cc->iv_size + TCW_WHITENING_SIZE)) {
 		ti->error = "Wrong key size for TCW";
 		return -EINVAL;
@@ -1037,15 +1049,20 @@ static int crypt_convert_block_aead(struct crypt_config *cc,
 	struct bio_vec bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in);
 	struct bio_vec bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
 	struct dm_crypt_request *dmreq;
-	unsigned int data_len = 1 << SECTOR_SHIFT;
 	u8 *iv, *org_iv, *tag_iv, *tag;
 	uint64_t *sector;
 	int r = 0;
 
 	BUG_ON(cc->integrity_iv_size && cc->integrity_iv_size != cc->iv_size);
 
+	/* Reject unexpected unaligned bio. */
+	if (unlikely(bv_in.bv_offset & (cc->sector_size - 1)))
+		return -EIO;
+
 	dmreq = dmreq_of_req(cc, req);
 	dmreq->iv_sector = ctx->cc_sector;
+	if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
+		sector_div(dmreq->iv_sector, cc->sector_size >> SECTOR_SHIFT);
 	dmreq->ctx = ctx;
 
 	*org_tag_of_dmreq(cc, dmreq) = tag_offset;
@@ -1066,13 +1083,13 @@ static int crypt_convert_block_aead(struct crypt_config *cc,
 	sg_init_table(dmreq->sg_in, 4);
 	sg_set_buf(&dmreq->sg_in[0], sector, sizeof(uint64_t));
 	sg_set_buf(&dmreq->sg_in[1], org_iv, cc->iv_size);
-	sg_set_page(&dmreq->sg_in[2], bv_in.bv_page, data_len, bv_in.bv_offset);
+	sg_set_page(&dmreq->sg_in[2], bv_in.bv_page, cc->sector_size, bv_in.bv_offset);
 	sg_set_buf(&dmreq->sg_in[3], tag, cc->integrity_tag_size);
 
 	sg_init_table(dmreq->sg_out, 4);
 	sg_set_buf(&dmreq->sg_out[0], sector, sizeof(uint64_t));
 	sg_set_buf(&dmreq->sg_out[1], org_iv, cc->iv_size);
-	sg_set_page(&dmreq->sg_out[2], bv_out.bv_page, data_len, bv_out.bv_offset);
+	sg_set_page(&dmreq->sg_out[2], bv_out.bv_page, cc->sector_size, bv_out.bv_offset);
 	sg_set_buf(&dmreq->sg_out[3], tag, cc->integrity_tag_size);
 
 	if (cc->iv_gen_ops) {
@@ -1094,14 +1111,14 @@ static int crypt_convert_block_aead(struct crypt_config *cc,
 	aead_request_set_ad(req, sizeof(uint64_t) + cc->iv_size);
 	if (bio_data_dir(ctx->bio_in) == WRITE) {
 		aead_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
-				       data_len, iv);
+				       cc->sector_size, iv);
 		r = crypto_aead_encrypt(req);
 		if (cc->integrity_tag_size + cc->integrity_iv_size != cc->on_disk_tag_size)
 			memset(tag + cc->integrity_tag_size + cc->integrity_iv_size, 0,
 			       cc->on_disk_tag_size - (cc->integrity_tag_size + cc->integrity_iv_size));
 	} else {
 		aead_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
-				       data_len + cc->integrity_tag_size, iv);
+				       cc->sector_size + cc->integrity_tag_size, iv);
 		r = crypto_aead_decrypt(req);
 	}
 
@@ -1112,8 +1129,8 @@ static int crypt_convert_block_aead(struct crypt_config *cc,
 	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
 		r = cc->iv_gen_ops->post(cc, org_iv, dmreq);
 
-	bio_advance_iter(ctx->bio_in, &ctx->iter_in, data_len);
-	bio_advance_iter(ctx->bio_out, &ctx->iter_out, data_len);
+	bio_advance_iter(ctx->bio_in, &ctx->iter_in, cc->sector_size);
+	bio_advance_iter(ctx->bio_out, &ctx->iter_out, cc->sector_size);
 
 	return r;
 }
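The unaligned-bio check added to both conversion paths uses the standard
power-of-two trick: because cc->sector_size is validated to be a power of two,
x & (sector_size - 1) equals x % sector_size, so a non-zero result flags a
misaligned offset. A stand-alone illustration (not part of the patch):

  /* Sketch: power-of-two alignment test used in the conversion paths. */
  #include <assert.h>

  static int is_aligned(unsigned int offset, unsigned int sector_size)
  {
          /* Valid only when sector_size is a power of two. */
          return (offset & (sector_size - 1)) == 0;
  }

  int main(void)
  {
          assert(is_aligned(8192, 4096));  /* aligned offset passes */
          assert(!is_aligned(512, 4096));  /* would be rejected with -EIO */
          return 0;
  }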
@@ -1127,13 +1144,18 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 	struct bio_vec bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
 	struct scatterlist *sg_in, *sg_out;
 	struct dm_crypt_request *dmreq;
-	unsigned int data_len = 1 << SECTOR_SHIFT;
 	u8 *iv, *org_iv, *tag_iv;
 	uint64_t *sector;
 	int r = 0;
 
+	/* Reject unexpected unaligned bio. */
+	if (unlikely(bv_in.bv_offset & (cc->sector_size - 1)))
+		return -EIO;
+
 	dmreq = dmreq_of_req(cc, req);
 	dmreq->iv_sector = ctx->cc_sector;
+	if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
+		sector_div(dmreq->iv_sector, cc->sector_size >> SECTOR_SHIFT);
 	dmreq->ctx = ctx;
 
 	*org_tag_of_dmreq(cc, dmreq) = tag_offset;
@@ -1150,10 +1172,10 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 	sg_out = &dmreq->sg_out[0];
 
 	sg_init_table(sg_in, 1);
-	sg_set_page(sg_in, bv_in.bv_page, data_len, bv_in.bv_offset);
+	sg_set_page(sg_in, bv_in.bv_page, cc->sector_size, bv_in.bv_offset);
 
 	sg_init_table(sg_out, 1);
-	sg_set_page(sg_out, bv_out.bv_page, data_len, bv_out.bv_offset);
+	sg_set_page(sg_out, bv_out.bv_page, cc->sector_size, bv_out.bv_offset);
 
 	if (cc->iv_gen_ops) {
 		/* For READs use IV stored in integrity metadata */
@@ -1171,7 +1193,7 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 		memcpy(iv, org_iv, cc->iv_size);
 	}
 
-	skcipher_request_set_crypt(req, sg_in, sg_out, data_len, iv);
+	skcipher_request_set_crypt(req, sg_in, sg_out, cc->sector_size, iv);
 
 	if (bio_data_dir(ctx->bio_in) == WRITE)
 		r = crypto_skcipher_encrypt(req);
@@ -1181,8 +1203,8 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
 		r = cc->iv_gen_ops->post(cc, org_iv, dmreq);
 
-	bio_advance_iter(ctx->bio_in, &ctx->iter_in, data_len);
-	bio_advance_iter(ctx->bio_out, &ctx->iter_out, data_len);
+	bio_advance_iter(ctx->bio_in, &ctx->iter_in, cc->sector_size);
+	bio_advance_iter(ctx->bio_out, &ctx->iter_out, cc->sector_size);
 
 	return r;
 }
@@ -1268,6 +1290,7 @@ static int crypt_convert(struct crypt_config *cc,
 			 struct convert_context *ctx)
 {
 	unsigned int tag_offset = 0;
+	unsigned int sector_step = cc->sector_size / (1 << SECTOR_SHIFT);
 	int r;
 
 	atomic_set(&ctx->cc_pending, 1);
@@ -1275,7 +1298,6 @@ static int crypt_convert(struct crypt_config *cc,
 	while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
 
 		crypt_alloc_req(cc, ctx);
-
 		atomic_inc(&ctx->cc_pending);
 
 		if (crypt_integrity_aead(cc))
@@ -1298,16 +1320,16 @@ static int crypt_convert(struct crypt_config *cc,
 		 */
 		case -EINPROGRESS:
 			ctx->r.req = NULL;
-			ctx->cc_sector++;
-			tag_offset++;
+			ctx->cc_sector += sector_step;
+			tag_offset += sector_step;
 			continue;
 		/*
 		 * The request was already processed (synchronously).
 		 */
 		case 0:
 			atomic_dec(&ctx->cc_pending);
-			ctx->cc_sector++;
-			tag_offset++;
+			ctx->cc_sector += sector_step;
+			tag_offset += sector_step;
 			cond_resched();
 			continue;
 		/*
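Since crypt_convert() still counts in 512-byte sectors, both cc_sector and the
integrity tag offset now advance by sector_step per processed encryption
sector. A stand-alone sketch of that arithmetic for a hypothetical 16 KiB bio
with 4096-byte sectors (illustration only, not the kernel loop):

  /* Sketch: how cc_sector and tag_offset advance per crypto request. */
  #include <stdio.h>

  #define SECTOR_SHIFT 9

  int main(void)
  {
          unsigned int sector_size = 4096;
          unsigned int sector_step = sector_size / (1 << SECTOR_SHIFT); /* 8 */
          unsigned long long cc_sector = 0;
          unsigned int tag_offset = 0;
          unsigned int remaining = 16384; /* hypothetical 16 KiB bio */

          while (remaining) {
                  /* one crypto request covers one full encryption sector */
                  printf("request: sector %llu, tag_offset %u\n",
                         cc_sector, tag_offset);
                  cc_sector += sector_step;
                  tag_offset += sector_step;
                  remaining -= sector_size;
          }
          return 0;
  }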
@@ -2499,10 +2521,11 @@ static int crypt_ctr_optional(struct dm_target *ti, unsigned int argc, char **argv)
 	struct crypt_config *cc = ti->private;
 	struct dm_arg_set as;
 	static struct dm_arg _args[] = {
-		{0, 3, "Invalid number of feature args"},
+		{0, 6, "Invalid number of feature args"},
 	};
 	unsigned int opt_params, val;
 	const char *opt_string, *sval;
+	char dummy;
 	int ret;
 
 	/* Optional parameters */
@@ -2545,7 +2568,16 @@ static int crypt_ctr_optional(struct dm_target *ti, unsigned int argc, char **argv)
 			cc->cipher_auth = kstrdup(sval, GFP_KERNEL);
 			if (!cc->cipher_auth)
 				return -ENOMEM;
-		} else {
+		} else if (sscanf(opt_string, "sector_size:%u%c", &cc->sector_size, &dummy) == 1) {
+			if (cc->sector_size < (1 << SECTOR_SHIFT) ||
+			    cc->sector_size > 4096 ||
+			    (1 << ilog2(cc->sector_size) != cc->sector_size)) {
+				ti->error = "Invalid feature value for sector_size";
+				return -EINVAL;
+			}
+		} else if (!strcasecmp(opt_string, "iv_large_sectors"))
+			set_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
+		else {
 			ti->error = "Invalid feature arguments";
 			return -EINVAL;
 		}
@@ -2585,6 +2617,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		return -ENOMEM;
 	}
 	cc->key_size = key_size;
+	cc->sector_size = (1 << SECTOR_SHIFT);
 
 	ti->private = cc;
 
@@ -2657,7 +2690,8 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	mutex_init(&cc->bio_alloc_lock);
 
 	ret = -EINVAL;
-	if (sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) {
+	if ((sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) ||
+	    (tmpll & ((cc->sector_size >> SECTOR_SHIFT) - 1))) {
 		ti->error = "Invalid iv_offset sector";
 		goto bad;
 	}
@@ -2758,6 +2792,16 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
 	    (bio_data_dir(bio) == WRITE || cc->on_disk_tag_size))
 		dm_accept_partial_bio(bio, ((BIO_MAX_PAGES << PAGE_SHIFT) >> SECTOR_SHIFT));
 
+	/*
+	 * Ensure that bio is a multiple of internal sector encryption size
+	 * and is aligned to this size as defined in IO hints.
+	 */
+	if (unlikely((bio->bi_iter.bi_sector & ((cc->sector_size >> SECTOR_SHIFT) - 1)) != 0))
+		return -EIO;
+
+	if (unlikely(bio->bi_iter.bi_size & (cc->sector_size - 1)))
+		return -EIO;
+
 	io = dm_per_bio_data(bio, cc->per_bio_data_size);
 	crypt_io_init(io, cc, bio, dm_target_offset(ti, bio->bi_iter.bi_sector));
 
@@ -2765,12 +2809,13 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
 		unsigned tag_len = cc->on_disk_tag_size * bio_sectors(bio);
 
 		if (unlikely(tag_len > KMALLOC_MAX_SIZE) ||
-		    unlikely(!(io->integrity_metadata = kmalloc(tag_len,
+		    unlikely(!(io->integrity_metadata = kzalloc(tag_len,
 				GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN)))) {
 			if (bio_sectors(bio) > cc->tag_pool_max_sectors)
 				dm_accept_partial_bio(bio, cc->tag_pool_max_sectors);
 			io->integrity_metadata = mempool_alloc(cc->tag_pool, GFP_NOIO);
 			io->integrity_metadata_from_pool = true;
+			memset(io->integrity_metadata, 0, cc->tag_pool_max_sectors * (1 << SECTOR_SHIFT));
 		}
 	}
 
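The sector_size validation in crypt_ctr_optional() above combines a range
check with an ilog2() round-trip to enforce a power of two. The equivalent
predicate can be written with the usual bit trick, as in this stand-alone
sketch (illustration only, using (s & (s - 1)) == 0 in place of the kernel's
ilog2() test):

  /* Sketch: the sector_size validation done in crypt_ctr_optional(). */
  #include <assert.h>

  #define SECTOR_SHIFT 9

  static int sector_size_valid(unsigned int s)
  {
          /* power of two: exactly one bit set */
          return s >= (1u << SECTOR_SHIFT) && s <= 4096 &&
                 (s & (s - 1)) == 0;
  }

  int main(void)
  {
          assert(sector_size_valid(512));
          assert(sector_size_valid(4096));
          assert(!sector_size_valid(1536)); /* not a power of two */
          assert(!sector_size_valid(8192)); /* exceeds the page-size limit */
          return 0;
  }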
@@ -2818,6 +2863,8 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
 		num_feature_args += !!ti->num_discard_bios;
 		num_feature_args += test_bit(DM_CRYPT_SAME_CPU, &cc->flags);
 		num_feature_args += test_bit(DM_CRYPT_NO_OFFLOAD, &cc->flags);
+		num_feature_args += (cc->sector_size != (1 << SECTOR_SHIFT)) ? 1 : 0;
+		num_feature_args += test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
 		if (cc->on_disk_tag_size)
 			num_feature_args++;
 		if (num_feature_args) {
@@ -2830,6 +2877,10 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
 				DMEMIT(" submit_from_crypt_cpus");
 			if (cc->on_disk_tag_size)
 				DMEMIT(" integrity:%u:%s", cc->on_disk_tag_size, cc->cipher_auth);
+			if (cc->sector_size != (1 << SECTOR_SHIFT))
+				DMEMIT(" sector_size:%d", cc->sector_size);
+			if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
+				DMEMIT(" iv_large_sectors");
 		}
 
 		break;
@@ -2919,6 +2970,8 @@ static int crypt_iterate_devices(struct dm_target *ti,
 
 static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 {
+	struct crypt_config *cc = ti->private;
+
 	/*
 	 * Unfortunate constraint that is required to avoid the potential
 	 * for exceeding underlying device's max_segments limits -- due to
@@ -2926,11 +2979,17 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	 * bio that are not as physically contiguous as the original bio.
 	 */
 	limits->max_segment_size = PAGE_SIZE;
+
+	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
+		limits->logical_block_size = cc->sector_size;
+		limits->physical_block_size = cc->sector_size;
+		blk_limits_io_min(limits, cc->sector_size);
+	}
 }
 
 static struct target_type crypt_target = {
 	.name = "crypt",
-	.version = {1, 16, 0},
+	.version = {1, 17, 0},
 	.module = THIS_MODULE,
 	.ctr = crypt_ctr,
 	.dtr = crypt_dtr,