From patchwork Thu Jan 21 23:03:29 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12038011
Date: Thu, 21 Jan 2021 23:03:29 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-2-satyat@google.com>
Subject: [PATCH v8 1/8] block: blk-crypto-fallback: handle data unit split across multiple bvecs
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Satya Tangirala, Eric Biggers

Until now, blk-crypto-fallback required each crypto data unit to be
contained within a single bvec, and it required the starting offset of
each bvec to be aligned to the data unit size. This patch removes both
restrictions, so that blk-crypto-fallback can handle crypto data units
that are split across multiple bvecs.

blk-crypto-fallback now only requires that the total size of the bio be
aligned to the crypto data unit size; the buffer being read or written
no longer needs to be aligned to the data unit size. This is useful for
making the alignment requirements for direct I/O on encrypted files
similar to those for direct I/O on unencrypted files.

Co-developed-by: Eric Biggers
Signed-off-by: Eric Biggers
Signed-off-by: Satya Tangirala
---
 block/blk-crypto-fallback.c | 203 +++++++++++++++++++++++++++---------
 1 file changed, 156 insertions(+), 47 deletions(-)

diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index c162b754efbd..663579d0783f 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -249,6 +249,65 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
 		iv->dun[i] = cpu_to_le64(dun[i]);
 }
 
+/*
+ * If the length of any bio segment isn't a multiple of data_unit_size
+ * (which can happen if data_unit_size > logical_block_size), then each
+ * encryption/decryption might need to be passed multiple scatterlist elements.
+ * If that will be the case, this function allocates and initializes src and dst
+ * scatterlists (or a combined src/dst scatterlist) with the needed length.
+ *
+ * If 1 element is guaranteed to be enough (which is usually the case, and is
+ * guaranteed when data_unit_size <= logical_block_size), then this function
+ * just initializes the on-stack scatterlist(s).
+ */
+static bool blk_crypto_alloc_sglists(struct bio *bio,
+				     const struct bvec_iter *start_iter,
+				     unsigned int data_unit_size,
+				     struct scatterlist **src_p,
+				     struct scatterlist **dst_p)
+{
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	bool aligned = true;
+	unsigned int count = 0;
+
+	__bio_for_each_segment(bv, bio, iter, *start_iter) {
+		count++;
+		aligned &= IS_ALIGNED(bv.bv_len, data_unit_size);
+	}
+	if (aligned) {
+		count = 1;
+	} else {
+		/*
+		 * We can't need more elements than bio segments, and we can't
+		 * need more than the number of sectors per data unit.  This may
+		 * overestimate the required length by a bit, but that's okay.
+		 */
+		count = min(count, data_unit_size >> SECTOR_SHIFT);
+	}
+
+	if (count > 1) {
+		*src_p = kmalloc_array(count, sizeof(struct scatterlist),
+				       GFP_NOIO);
+		if (!*src_p)
+			return false;
+		if (dst_p) {
+			*dst_p = kmalloc_array(count,
+					       sizeof(struct scatterlist),
+					       GFP_NOIO);
+			if (!*dst_p) {
+				kfree(*src_p);
+				*src_p = NULL;
+				return false;
+			}
+		}
+	}
+	sg_init_table(*src_p, count);
+	if (dst_p)
+		sg_init_table(*dst_p, count);
+	return true;
+}
+
 /*
  * The crypto API fallback's encryption routine.
  * Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
@@ -265,9 +324,12 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 	struct skcipher_request *ciph_req = NULL;
 	DECLARE_CRYPTO_WAIT(wait);
 	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
-	struct scatterlist src, dst;
+	struct scatterlist _src, *src = &_src;
+	struct scatterlist _dst, *dst = &_dst;
 	union blk_crypto_iv iv;
-	unsigned int i, j;
+	unsigned int i;
+	unsigned int sg_idx = 0;
+	unsigned int du_filled = 0;
 	bool ret = false;
 	blk_status_t blk_st;
 
@@ -279,11 +341,18 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 	bc = src_bio->bi_crypt_context;
 	data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
 
+	/* Allocate scatterlists if needed */
+	if (!blk_crypto_alloc_sglists(src_bio, &src_bio->bi_iter,
+				      data_unit_size, &src, &dst)) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		return false;
+	}
+
 	/* Allocate bounce bio for encryption */
 	enc_bio = blk_crypto_clone_bio(src_bio);
 	if (!enc_bio) {
 		src_bio->bi_status = BLK_STS_RESOURCE;
-		return false;
+		goto out_free_sglists;
 	}
 
 	/*
@@ -303,45 +372,58 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 	}
 
 	memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
-	sg_init_table(&src, 1);
-	sg_init_table(&dst, 1);
-	skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size,
+	skcipher_request_set_crypt(ciph_req, src, dst, data_unit_size,
 				   iv.bytes);
 
-	/* Encrypt each page in the bounce bio */
+	/*
+	 * Encrypt each data unit in the bounce bio.
+	 *
+	 * Take care to handle the case where a data unit spans bio segments.
+	 * This can happen when data_unit_size > logical_block_size.
+	 */
 	for (i = 0; i < enc_bio->bi_vcnt; i++) {
-		struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i];
-		struct page *plaintext_page = enc_bvec->bv_page;
+		struct bio_vec *bv = &enc_bio->bi_io_vec[i];
+		struct page *plaintext_page = bv->bv_page;
 		struct page *ciphertext_page =
 			mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO);
+		unsigned int offset_in_bv = 0;
 
-		enc_bvec->bv_page = ciphertext_page;
+		bv->bv_page = ciphertext_page;
 
 		if (!ciphertext_page) {
 			src_bio->bi_status = BLK_STS_RESOURCE;
 			goto out_free_bounce_pages;
 		}
 
-		sg_set_page(&src, plaintext_page, data_unit_size,
-			    enc_bvec->bv_offset);
-		sg_set_page(&dst, ciphertext_page, data_unit_size,
-			    enc_bvec->bv_offset);
-
-		/* Encrypt each data unit in this page */
-		for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
-			blk_crypto_dun_to_iv(curr_dun, &iv);
-			if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
-					    &wait)) {
-				i++;
-				src_bio->bi_status = BLK_STS_IOERR;
-				goto out_free_bounce_pages;
+		while (offset_in_bv < bv->bv_len) {
+			unsigned int n = min(bv->bv_len - offset_in_bv,
+					     data_unit_size - du_filled);
+			sg_set_page(&src[sg_idx], plaintext_page, n,
+				    bv->bv_offset + offset_in_bv);
+			sg_set_page(&dst[sg_idx], ciphertext_page, n,
+				    bv->bv_offset + offset_in_bv);
+			sg_idx++;
+			offset_in_bv += n;
+			du_filled += n;
+			if (du_filled == data_unit_size) {
+				blk_crypto_dun_to_iv(curr_dun, &iv);
+				if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
						    &wait)) {
+					src_bio->bi_status = BLK_STS_IOERR;
+					i++;
+					goto out_free_bounce_pages;
+				}
+				bio_crypt_dun_increment(curr_dun, 1);
+				sg_idx = 0;
+				du_filled = 0;
 			}
-			bio_crypt_dun_increment(curr_dun, 1);
-			src.offset += data_unit_size;
-			dst.offset += data_unit_size;
 		}
 	}
 
+	if (WARN_ON_ONCE(du_filled != 0)) {
+		src_bio->bi_status = BLK_STS_IOERR;
+		goto out_free_bounce_pages;
+	}
 	enc_bio->bi_private = src_bio;
 	enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
@@ -362,7 +444,11 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 out_put_enc_bio:
 	if (enc_bio)
 		bio_put(enc_bio);
-
+out_free_sglists:
+	if (src != &_src)
+		kfree(src);
+	if (dst != &_dst)
+		kfree(dst);
 	return ret;
 }
 
@@ -381,13 +467,21 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
 	DECLARE_CRYPTO_WAIT(wait);
 	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
 	union blk_crypto_iv iv;
-	struct scatterlist sg;
+	struct scatterlist _sg, *sg = &_sg;
 	struct bio_vec bv;
 	struct bvec_iter iter;
 	const int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
-	unsigned int i;
+	unsigned int sg_idx = 0;
+	unsigned int du_filled = 0;
 	blk_status_t blk_st;
 
+	/* Allocate scatterlist if needed */
+	if (!blk_crypto_alloc_sglists(bio, &f_ctx->crypt_iter, data_unit_size,
+				      &sg, NULL)) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out_no_sglists;
+	}
+
 	/*
 	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
 	 * for the algorithm and key specified for this bio.
@@ -405,33 +499,48 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
 	}
 
 	memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
-	sg_init_table(&sg, 1);
-	skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
-				   iv.bytes);
+	skcipher_request_set_crypt(ciph_req, sg, sg, data_unit_size, iv.bytes);
 
-	/* Decrypt each segment in the bio */
+	/*
+	 * Decrypt each data unit in the bio.
+	 *
+	 * Take care to handle the case where a data unit spans bio segments.
+	 * This can happen when data_unit_size > logical_block_size.
+	 */
 	__bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) {
-		struct page *page = bv.bv_page;
-
-		sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
-
-		/* Decrypt each data unit in the segment */
-		for (i = 0; i < bv.bv_len; i += data_unit_size) {
-			blk_crypto_dun_to_iv(curr_dun, &iv);
-			if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
-					    &wait)) {
-				bio->bi_status = BLK_STS_IOERR;
-				goto out;
+		unsigned int offset_in_bv = 0;
+
+		while (offset_in_bv < bv.bv_len) {
+			unsigned int n = min(bv.bv_len - offset_in_bv,
+					     data_unit_size - du_filled);
+			sg_set_page(&sg[sg_idx++], bv.bv_page, n,
+				    bv.bv_offset + offset_in_bv);
+			offset_in_bv += n;
+			du_filled += n;
+			if (du_filled == data_unit_size) {
+				blk_crypto_dun_to_iv(curr_dun, &iv);
+				if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
						    &wait)) {
+					bio->bi_status = BLK_STS_IOERR;
+					goto out;
+				}
+				bio_crypt_dun_increment(curr_dun, 1);
+				sg_idx = 0;
+				du_filled = 0;
 			}
-			bio_crypt_dun_increment(curr_dun, 1);
-			sg.offset += data_unit_size;
 		}
 	}
-
+	if (WARN_ON_ONCE(du_filled != 0)) {
+		bio->bi_status = BLK_STS_IOERR;
+		goto out;
+	}
 out:
 	skcipher_request_free(ciph_req);
 	blk_ksm_put_slot(slot);
 out_no_keyslot:
+	if (sg != &_sg)
+		kfree(sg);
+out_no_sglists:
 	mempool_free(f_ctx, bio_fallback_crypt_ctx_pool);
 	bio_endio(bio);
 }
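[Editor's illustration, not part of the patch] The core of the change above is the accumulation loop: fragments of bio segments are gathered into scatterlist elements until a full data unit has been collected, then one skcipher request is issued and the DUN is incremented. The following user-space sketch replays that logic on made-up segment lengths (the lengths, the 4096-byte data unit size, and encrypt_one_unit() are illustrative only, not kernel code):

#include <stdio.h>

#define DATA_UNIT_SIZE 4096u

static void encrypt_one_unit(unsigned int nr_pieces)
{
	/* stands in for crypto_skcipher_encrypt() + DUN increment */
	printf("  -> data unit complete (%u scatterlist element(s)), DUN++\n",
	       nr_pieces);
}

int main(void)
{
	/* segment lengths whose total is data-unit aligned, but whose
	 * individual lengths and offsets are not */
	const unsigned int bv_len[] = { 1024, 5120, 512, 1536 };
	unsigned int du_filled = 0, sg_idx = 0;

	for (unsigned int i = 0; i < sizeof(bv_len) / sizeof(bv_len[0]); i++) {
		unsigned int offset_in_bv = 0;

		while (offset_in_bv < bv_len[i]) {
			unsigned int n = bv_len[i] - offset_in_bv;

			if (n > DATA_UNIT_SIZE - du_filled)
				n = DATA_UNIT_SIZE - du_filled;
			printf("segment %u: use %u bytes at offset %u\n",
			       i, n, offset_in_bv);
			sg_idx++;
			offset_in_bv += n;
			du_filled += n;
			if (du_filled == DATA_UNIT_SIZE) {
				encrypt_one_unit(sg_idx);
				sg_idx = 0;
				du_filled = 0;
			}
		}
	}
	return 0;
}

With these lengths the first data unit is assembled from two pieces (1024 + 3072) and the second from three (2048 + 512 + 1536), which is exactly the situation the old one-element scatterlists could not express.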
From patchwork Thu Jan 21 23:03:30 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12038013
Date: Thu, 21 Jan 2021 23:03:30 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-3-satyat@google.com>
Subject: [PATCH v8 2/8] block: blk-crypto: relax alignment requirements for bvecs in bios
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Satya Tangirala, Eric Biggers

blk-crypto only accepted bios whose bvecs' offsets and lengths were
aligned to the crypto data unit size, since blk-crypto-fallback required
that to work correctly. Now that blk-crypto-fallback has been updated to
work without that assumption, we relax the alignment requirement:
blk-crypto now only needs the total size of the bio to be aligned to the
crypto data unit size.

Co-developed-by: Eric Biggers
Signed-off-by: Eric Biggers
Signed-off-by: Satya Tangirala
---
 block/blk-crypto.c | 19 ++-----------------
 1 file changed, 2 insertions(+), 17 deletions(-)

diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 5da43f0973b4..fcee0038f7e0 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -200,22 +200,6 @@ bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes,
 	return !bc1 || bio_crypt_dun_is_contiguous(bc1, bc1_bytes, bc2->bc_dun);
 }
 
-/* Check that all I/O segments are data unit aligned. */
-static bool bio_crypt_check_alignment(struct bio *bio)
-{
-	const unsigned int data_unit_size =
-		bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size;
-	struct bvec_iter iter;
-	struct bio_vec bv;
-
-	bio_for_each_segment(bv, bio, iter) {
-		if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
-			return false;
-	}
-
-	return true;
-}
-
 blk_status_t __blk_crypto_init_request(struct request *rq)
 {
 	return blk_ksm_get_slot_for_key(rq->q->ksm, rq->crypt_ctx->bc_key,
@@ -271,7 +255,8 @@ bool __blk_crypto_bio_prep(struct bio **bio_ptr)
 		goto fail;
 	}
 
-	if (!bio_crypt_check_alignment(bio)) {
+	if (!IS_ALIGNED(bio->bi_iter.bi_size,
+			bc_key->crypto_cfg.data_unit_size)) {
 		bio->bi_status = BLK_STS_IOERR;
 		goto fail;
 	}
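[Editor's illustration, not part of the patch] The difference between the old per-segment check and the new whole-bio check can be seen with a few made-up segment lengths (the 4096-byte data unit size and the lengths below are illustrative; IS_ALIGNED is reproduced here for power-of-two sizes):

#include <stdbool.h>
#include <stdio.h>

#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

int main(void)
{
	const unsigned int data_unit_size = 4096;		/* power of two */
	const unsigned int bv_len[] = { 1536, 2560, 4096 };	/* total 8192 */
	unsigned int total = 0;
	bool each_aligned = true;

	for (unsigned int i = 0; i < 3; i++) {
		total += bv_len[i];
		each_aligned &= IS_ALIGNED(bv_len[i], data_unit_size);
	}
	/* old rule rejected this bio; the new rule accepts it */
	printf("old rule (every bvec aligned):  %s\n",
	       each_aligned ? "ok" : "rejected");
	printf("new rule (total size aligned):  %s\n",
	       IS_ALIGNED(total, data_unit_size) ? "ok" : "rejected");
	return 0;
}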
From patchwork Thu Jan 21 23:03:31 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12037973
Date: Thu, 21 Jan 2021 23:03:31 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-4-satyat@google.com>
Subject: [PATCH v8 3/8] fscrypt: add functions for direct I/O support
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Eric Biggers, Satya Tangirala

From: Eric Biggers

Introduce fscrypt_dio_supported() to check whether a direct I/O request
is unsupported due to encryption constraints.

Also introduce fscrypt_limit_io_blocks() to limit how many blocks can be
added to a bio being prepared for direct I/O. This is needed for
filesystems that use the iomap direct I/O implementation to avoid DUN
wraparound in the middle of a bio (which is possible with the
IV_INO_LBLK_32 IV generation method). Elsewhere fscrypt_mergeable_bio()
is used for this, but iomap operates on logical ranges directly, so
filesystems using iomap won't have a chance to call
fscrypt_mergeable_bio() on every block added to a bio. So we need this
function which limits a logical range in one go.

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
---
 fs/crypto/crypto.c       |  8 +++++
 fs/crypto/inline_crypt.c | 74 ++++++++++++++++++++++++++++++++++++++++
 include/linux/fscrypt.h  | 18 ++++++++++
 3 files changed, 100 insertions(+)

diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 4ef3f714046a..4fcca79f39ae 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -69,6 +69,14 @@ void fscrypt_free_bounce_page(struct page *bounce_page)
 }
 EXPORT_SYMBOL(fscrypt_free_bounce_page);
 
+/*
+ * Generate the IV for the given logical block number within the given file.
+ * For filenames encryption, lblk_num == 0.
+ *
+ * Keep this in sync with fscrypt_limit_io_blocks().  fscrypt_limit_io_blocks()
+ * needs to know about any IV generation methods where the low bits of IV don't
+ * simply contain the lblk_num (e.g., IV_INO_LBLK_32).
+ */
 void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
 			 const struct fscrypt_info *ci)
 {
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
index c57bebfa48fe..956f5bfab7a0 100644
--- a/fs/crypto/inline_crypt.c
+++ b/fs/crypto/inline_crypt.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 
 #include "fscrypt_private.h"
 
@@ -363,3 +364,76 @@ bool fscrypt_mergeable_bio_bh(struct bio *bio,
 	return fscrypt_mergeable_bio(bio, inode, next_lblk);
 }
 EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh);
+
+/**
+ * fscrypt_dio_supported() - check whether a direct I/O request is unsupported
+ *			     due to encryption constraints
+ * @iocb: the file and position the I/O is targeting
+ * @iter: the I/O data segment(s)
+ *
+ * Return: true if direct I/O is supported
+ */
+bool fscrypt_dio_supported(struct kiocb *iocb, struct iov_iter *iter)
+{
+	const struct inode *inode = file_inode(iocb->ki_filp);
+	const unsigned int blocksize = i_blocksize(inode);
+
+	/* If the file is unencrypted, no veto from us. */
+	if (!fscrypt_needs_contents_encryption(inode))
+		return true;
+
+	/* We only support direct I/O with inline crypto, not fs-layer crypto */
+	if (!fscrypt_inode_uses_inline_crypto(inode))
+		return false;
+
+	/*
+	 * Since the granularity of encryption is filesystem blocks, the I/O
+	 * must be block aligned -- not just disk sector aligned.
+	 */
+	if (!IS_ALIGNED(iocb->ki_pos | iov_iter_count(iter), blocksize))
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(fscrypt_dio_supported);
+
+/**
+ * fscrypt_limit_io_blocks() - limit I/O blocks to avoid discontiguous DUNs
+ * @inode: the file on which I/O is being done
+ * @lblk: the block at which the I/O is being started from
+ * @nr_blocks: the number of blocks we want to submit starting at @lblk
+ *
+ * Determine the limit to the number of blocks that can be submitted in the bio
+ * targeting @lblk without causing a data unit number (DUN) discontinuity.
+ *
+ * This is normally just @nr_blocks, as normally the DUNs just increment along
+ * with the logical blocks.  (Or the file is not encrypted.)
+ *
+ * In rare cases, fscrypt can be using an IV generation method that allows the
+ * DUN to wrap around within logically continuous blocks, and that wraparound
+ * will occur.  If this happens, a value less than @nr_blocks will be returned
+ * so that the wraparound doesn't occur in the middle of the bio.
+ *
+ * Return: the actual number of blocks that can be submitted
+ */
+u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk, u64 nr_blocks)
+{
+	const struct fscrypt_info *ci = inode->i_crypt_info;
+	u32 dun;
+
+	if (!fscrypt_inode_uses_inline_crypto(inode))
+		return nr_blocks;
+
+	if (nr_blocks <= 1)
+		return nr_blocks;
+
+	if (!(fscrypt_policy_flags(&ci->ci_policy) &
+	      FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32))
+		return nr_blocks;
+
+	/* With IV_INO_LBLK_32, the DUN can wrap around from U32_MAX to 0. */
+
+	dun = ci->ci_hashed_ino + lblk;
+
+	return min_t(u64, nr_blocks, (u64)U32_MAX + 1 - dun);
+}
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index 2ea1387bb497..d8dde02aee82 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -609,6 +609,10 @@ bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
 bool fscrypt_mergeable_bio_bh(struct bio *bio,
 			      const struct buffer_head *next_bh);
 
+bool fscrypt_dio_supported(struct kiocb *iocb, struct iov_iter *iter);
+
+u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk, u64 nr_blocks);
+
 #else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
 
 static inline bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode)
@@ -637,6 +641,20 @@ static inline bool fscrypt_mergeable_bio_bh(struct bio *bio,
 {
 	return true;
 }
+
+static inline bool fscrypt_dio_supported(struct kiocb *iocb,
+					 struct iov_iter *iter)
+{
+	const struct inode *inode = file_inode(iocb->ki_filp);
+
+	return !fscrypt_needs_contents_encryption(inode);
+}
+
+static inline u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk,
+					  u64 nr_blocks)
+{
+	return nr_blocks;
+}
 #endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
 
 /**
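[Editor's illustration, not part of the patch] With IV_INO_LBLK_32, the 32-bit DUN is (hashed_ino + lblk) mod 2^32, so it can wrap back to 0 inside a logically contiguous range; fscrypt_limit_io_blocks() clamps the range so the wrap never happens mid-bio. The stand-alone sketch below reuses only the formula from the function above; the hashed_ino and block numbers are made up:

#include <stdint.h>
#include <stdio.h>

static uint64_t limit_io_blocks(uint32_t hashed_ino, uint64_t lblk,
				uint64_t nr_blocks)
{
	uint32_t dun = hashed_ino + (uint32_t)lblk;	/* wraps mod 2^32 */
	uint64_t max = (uint64_t)UINT32_MAX + 1 - dun;	/* blocks before wrap */

	if (nr_blocks <= 1)
		return nr_blocks;
	return nr_blocks < max ? nr_blocks : max;
}

int main(void)
{
	/* first DUN is 0xFFFFFFF8, so only 8 blocks fit before the DUN
	 * wraps to 0; a 32-block request gets cut down to 8 */
	printf("%llu\n", (unsigned long long)
	       limit_io_blocks(0xFFFFFFF0u, 8, 32));	/* prints 8 */
	return 0;
}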
From patchwork Thu Jan 21 23:03:32 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12038009
Date: Thu, 21 Jan 2021 23:03:32 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-5-satyat@google.com>
Subject: [PATCH v8 4/8] direct-io: add support for fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Eric Biggers, Satya Tangirala

From: Eric Biggers

Set bio crypt contexts on bios by calling into fscrypt when required,
and explicitly check for DUN continuity when adding pages to the bio.
(While DUN continuity is usually implied by logical block contiguity,
this is not the case when using certain fscrypt IV generation methods
like IV_INO_LBLK_32.)

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
Reviewed-by: Jaegeuk Kim
---
 fs/direct-io.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index d53fa92a1ab6..f6672c4030e3 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -392,6 +393,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 	      sector_t first_sector, int nr_vecs)
 {
 	struct bio *bio;
+	struct inode *inode = dio->inode;
 
 	/*
 	 * bio_alloc() is guaranteed to return a bio when allowed to sleep and
@@ -399,6 +401,9 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 	 */
 	bio = bio_alloc(GFP_KERNEL, nr_vecs);
 
+	fscrypt_set_bio_crypt_ctx(bio, inode,
+				  sdio->cur_page_fs_offset >> inode->i_blkbits,
+				  GFP_KERNEL);
 	bio_set_dev(bio, bdev);
 	bio->bi_iter.bi_sector = first_sector;
 	bio_set_op_attrs(bio, dio->op, dio->op_flags);
@@ -763,9 +768,17 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio,
 	 * current logical offset in the file does not equal what would
 	 * be the next logical offset in the bio, submit the bio we
 	 * have.
+	 *
+	 * When fscrypt inline encryption is used, data unit number
+	 * (DUN) contiguity is also required.  Normally that's implied
+	 * by logical contiguity.  However, certain IV generation
+	 * methods (e.g. IV_INO_LBLK_32) don't guarantee it.  So, we
+	 * must explicitly check fscrypt_mergeable_bio() too.
 	 */
 	if (sdio->final_block_in_bio != sdio->cur_page_block ||
-	    cur_offset != bio_next_offset)
+	    cur_offset != bio_next_offset ||
+	    !fscrypt_mergeable_bio(sdio->bio, dio->inode,
+				   cur_offset >> dio->inode->i_blkbits))
 		dio_bio_submit(dio, sdio);
 }
From patchwork Thu Jan 21 23:03:33 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12037979
Date: Thu, 21 Jan 2021 23:03:33 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-6-satyat@google.com>
Subject: [PATCH v8 5/8] iomap: support direct I/O with fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Eric Biggers, Satya Tangirala

From: Eric Biggers

Set bio crypt contexts on bios by calling into fscrypt when required.
No DUN contiguity checks are done here; instead, callers are expected to
set up the iomap correctly, by calling fscrypt_limit_io_blocks()
appropriately, so that no bio submitted by iomap contains blocks with
discontiguous DUNs.

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
---
 fs/iomap/direct-io.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 933f234d5bec..b4240cc3c9f9 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -185,11 +186,14 @@ static void
 iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 		unsigned len)
 {
+	struct inode *inode = file_inode(dio->iocb->ki_filp);
 	struct page *page = ZERO_PAGE(0);
 	int flags = REQ_SYNC | REQ_IDLE;
 	struct bio *bio;
 
 	bio = bio_alloc(GFP_KERNEL, 1);
+	fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+				  GFP_KERNEL);
 	bio_set_dev(bio, iomap->bdev);
 	bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
 	bio->bi_private = dio;
@@ -272,6 +276,8 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 	}
 
 	bio = bio_alloc(GFP_KERNEL, nr_pages);
+	fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+				  GFP_KERNEL);
 	bio_set_dev(bio, iomap->bdev);
 	bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
 	bio->bi_write_hint = dio->iocb->ki_hint;
From patchwork Thu Jan 21 23:03:34 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12037981
Date: Thu, 21 Jan 2021 23:03:34 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-7-satyat@google.com>
Subject: [PATCH v8 6/8] ext4: support direct I/O with fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Eric Biggers, Satya Tangirala

From: Eric Biggers

Wire up ext4 with fscrypt direct I/O support. Direct I/O with fscrypt is
only supported through blk-crypto (i.e. CONFIG_BLK_INLINE_ENCRYPTION must
have been enabled, the 'inlinecrypt' mount option must have been
specified, and either hardware inline encryption support must be present
or CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been enabled).
Further, direct I/O on encrypted files is only supported when the
*length* of the I/O is aligned to the filesystem block size (which is
*not* necessarily the same as the block device's block size).

fscrypt_limit_io_blocks() is called before setting up the iomap to ensure
that the blocks of each bio that iomap will submit will have contiguous
DUNs. Note that fscrypt_limit_io_blocks() is normally a no-op, as
normally the DUNs simply increment along with the logical blocks. But
it's needed to handle an edge case in one of the fscrypt IV generation
methods.

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
Reviewed-by: Jaegeuk Kim
Acked-by: Theodore Ts'o
---
 fs/ext4/file.c  | 10 ++++++----
 fs/ext4/inode.c |  7 +++++++
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 349b27f0dda0..77681ba5e6cc 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -36,9 +36,11 @@
 #include "acl.h"
 #include "truncate.h"
 
-static bool ext4_dio_supported(struct inode *inode)
+static bool ext4_dio_supported(struct kiocb *iocb, struct iov_iter *iter)
 {
-	if (IS_ENABLED(CONFIG_FS_ENCRYPTION) && IS_ENCRYPTED(inode))
+	struct inode *inode = file_inode(iocb->ki_filp);
+
+	if (!fscrypt_dio_supported(iocb, iter))
 		return false;
 	if (fsverity_active(inode))
 		return false;
@@ -61,7 +63,7 @@ static ssize_t ext4_dio_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		inode_lock_shared(inode);
 	}
 
-	if (!ext4_dio_supported(inode)) {
+	if (!ext4_dio_supported(iocb, to)) {
 		inode_unlock_shared(inode);
 		/*
 		 * Fallback to buffered I/O if the operation being performed on
@@ -495,7 +497,7 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	}
 
 	/* Fallback to buffered I/O if the inode does not support direct I/O. */
-	if (!ext4_dio_supported(inode)) {
+	if (!ext4_dio_supported(iocb, from)) {
 		if (ilock_shared)
 			inode_unlock_shared(inode);
 		else
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index c173c8405856..e5407699ce92 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3482,6 +3482,13 @@ static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
 	if (ret < 0)
 		return ret;
 out:
+	/*
+	 * When inline encryption is enabled, sometimes I/O to an encrypted file
+	 * has to be broken up to guarantee DUN contiguity.  Handle this by
+	 * limiting the length of the mapping returned.
+	 */
+	map.m_len = fscrypt_limit_io_blocks(inode, map.m_lblk, map.m_len);
+
 	ext4_set_iomap(inode, iomap, &map, offset, length);
 
 	return 0;
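[Editor's illustration, not part of the patch] Because ext4_iomap_begin() clamps map.m_len, an I/O that would cross the DUN wraparound point simply comes back as two mappings, and iomap builds a separate bio for each. The loop below simulates repeated mapping calls for a 64-block range using the formula from fscrypt_limit_io_blocks() in patch 3/8; the hashed inode value is made up:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint32_t hashed_ino = 0xFFFFFFF0u;	/* illustrative */
	uint64_t lblk = 0, remaining = 64;

	while (remaining) {
		uint32_t dun = hashed_ino + (uint32_t)lblk;
		uint64_t max = (uint64_t)UINT32_MAX + 1 - dun;
		uint64_t len = remaining < max ? remaining : max;

		printf("mapping: lblk %llu, %llu blocks (first DUN 0x%08x)\n",
		       (unsigned long long)lblk, (unsigned long long)len, dun);
		lblk += len;
		remaining -= len;
	}
	return 0;
}

This prints one 16-block mapping ending at DUN 0xFFFFFFFF and a second 48-block mapping starting at DUN 0, which is the split the comment in ext4_iomap_begin() refers to.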
From patchwork Thu Jan 21 23:03:35 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12037975
Date: Thu, 21 Jan 2021 23:03:35 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-8-satyat@google.com>
Subject: [PATCH v8 7/8] f2fs: support direct I/O with fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Eric Biggers, Satya Tangirala

From: Eric Biggers

Wire up f2fs with fscrypt direct I/O support. Direct I/O with fscrypt is
only supported through blk-crypto (i.e. CONFIG_BLK_INLINE_ENCRYPTION must
have been enabled, the 'inlinecrypt' mount option must have been
specified, and either hardware inline encryption support must be present
or CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been enabled).
Further, direct I/O on encrypted files is only supported when the
*length* of the I/O is aligned to the filesystem block size (which is
*not* necessarily the same as the block device's block size).

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
Acked-by: Jaegeuk Kim
---
 fs/f2fs/f2fs.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index bb11759191dc..5130423a13e7 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4091,7 +4091,11 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	int rw = iov_iter_rw(iter);
 
-	if (f2fs_post_read_required(inode))
+	if (!fscrypt_dio_supported(iocb, iter))
+		return true;
+	if (fsverity_active(inode))
+		return true;
+	if (f2fs_compressed_file(inode))
 		return true;
 	if (f2fs_is_multi_device(sbi))
 		return true;
From patchwork Thu Jan 21 23:03:36 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12037977
Date: Thu, 21 Jan 2021 23:03:36 +0000
In-Reply-To: <20210121230336.1373726-1-satyat@google.com>
Message-Id: <20210121230336.1373726-9-satyat@google.com>
Subject: [PATCH v8 8/8] fscrypt: update documentation for direct I/O support
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
 Satya Tangirala, Eric Biggers

Update the fscrypt documentation to reflect the addition of direct I/O
support and document the necessary conditions for direct I/O on
encrypted files.

Signed-off-by: Satya Tangirala
Reviewed-by: Eric Biggers
Reviewed-by: Jaegeuk Kim
---
 Documentation/filesystems/fscrypt.rst | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 44b67ebd6e40..c0c1747fa2fb 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -1047,8 +1047,10 @@ astute users may notice some differences in behavior:
   may be used to overwrite the source files but isn't guaranteed to be
   effective on all filesystems and storage devices.
 
-- Direct I/O is not supported on encrypted files.  Attempts to use
-  direct I/O on such files will fall back to buffered I/O.
+- Direct I/O is supported on encrypted files only under some
+  circumstances (see `Direct I/O support`_ for details).  When these
+  circumstances are not met, attempts to use direct I/O on encrypted
+  files will fall back to buffered I/O.
 
 - The fallocate operations FALLOC_FL_COLLAPSE_RANGE and
   FALLOC_FL_INSERT_RANGE are not supported on encrypted files and will
@@ -1121,6 +1123,21 @@ It is not currently possible to backup and restore encrypted files without
 the encryption key.  This would require special APIs which have not yet been
 implemented.
 
+Direct I/O support
+==================
+
+Direct I/O on encrypted files is supported through blk-crypto.  In
+particular, this means the kernel must have CONFIG_BLK_INLINE_ENCRYPTION
+enabled, the filesystem must have had the 'inlinecrypt' mount option
+specified, and either hardware inline encryption must be present, or
+CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been enabled.  Further,
+the starting position in the file and the length of any I/O must be aligned
+to the filesystem block size (*not* necessarily the same as the block
+device's block size).  If any of these conditions isn't met, attempts to do
+direct I/O on an encrypted file will fall back to buffered I/O.  However,
+there aren't any additional requirements on user buffer alignment (apart
+from those already present when using direct I/O on unencrypted files).
+
 Encryption policy enforcement
 =============================
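[Editor's illustration, not part of the patch] From user space, the documented constraints amount to: open with O_DIRECT on an 'inlinecrypt' mount with blk-crypto available, and keep the file offset and I/O length aligned to the filesystem block size. The sketch below assumes a 4096-byte filesystem block size for brevity (real code should query it, e.g. via stat/statfs); buffer alignment follows the usual O_DIRECT rules for the underlying device:

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const size_t fs_block = 4096;	/* assumed filesystem block size */
	void *buf;
	int fd;
	ssize_t n;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <encrypted file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign(&buf, fs_block, fs_block)) {
		close(fd);
		return 1;
	}
	/* offset 0 and length fs_block are both filesystem-block aligned */
	n = pread(fd, buf, fs_block, 0);
	if (n < 0)
		perror("pread");
	else
		printf("read %zd bytes with O_DIRECT\n", n);
	free(buf);
	close(fd);
	return 0;
}

If the alignment conditions are not met, the read still succeeds, but (as documented above) the filesystem silently falls back to buffered I/O.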