From patchwork Thu Jan 14 15:47:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12020037 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 15FD0C433E0 for ; Thu, 14 Jan 2021 15:49:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C905F23B2F for ; Thu, 14 Jan 2021 15:49:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729444AbhANPsX (ORCPT ); Thu, 14 Jan 2021 10:48:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53690 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729434AbhANPsR (ORCPT ); Thu, 14 Jan 2021 10:48:17 -0500 Received: from mail-qt1-x849.google.com (mail-qt1-x849.google.com [IPv6:2607:f8b0:4864:20::849]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 373F1C061794 for ; Thu, 14 Jan 2021 07:47:32 -0800 (PST) Received: by mail-qt1-x849.google.com with SMTP id h18so4818446qtr.2 for ; Thu, 14 Jan 2021 07:47:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=rQiteVCxoBWutnawrqGu3qFkhS4PTdd43xyBddh3/+Y=; b=HFuO8i9XZqAgUTRptYDPtYA3XeZBkEp5/BGbnCK98kkXOi73ZiYZsNR2HMGW7zfetj TNbjcYrV6P/OnqiTGR4PCAkMhFrvIXAaAc+8iUWpAaqkhhMRXTKU6rMrrT7gKVykBcjA 
JhgfelXllBeTBMqv7tzSNRl0FWZRxnUOUjWW23v8sFEafJiarXHpZAoK4uLWbngn6Lnh Klj51v8gvOZnGr0mSQGYKOtwNT96OdmPYzEpVzKSVGEAF3W/bsUiSZlo3uvR8vhukMfV ryggZc58z4s4V1uCEB3Z9VonV9THRje3A09PaFjpA1UGraofATL0PmYbCy5BhTOf1NBu fz0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=rQiteVCxoBWutnawrqGu3qFkhS4PTdd43xyBddh3/+Y=; b=Llzr8dMIqPavVFEw4aVVLQ2eYsUqxAmEhRRyVB2djk/RWQ+5jLcDU8QiYEuKfK0SU/ OeEeSzX0lRTecv2BTTRYz2jK/U9WyxTwwCEyWv6FI+3wHc5Uu1ls6I+yK8jhPzZzt6+R AuFUh0j8sqNbstSgPWzWZd7aaA4/OgQLiJKvAemD4KvATEn+FlqR9qM4doKzdpj1uFp4 9CjcUTiujAMQrWvT2t4fvQqCMtkWZKYQy5hLkhgtWtwZejSOs1CEr8H8SG1uqzotSbzD QeV/GO/zwErxs6kHQaFzxyQoiH2KMHlHIbL6a10qirrxVxN1Zi6ohS+IqCVUV5CI8hcY RdKg== X-Gm-Message-State: AOAM532ftgVo7vrB7eeldq3iZCRVaH3t+Z34UFQQcARaeAs9GlV1CQBI lRatzEcLMBftTEOlR0vz6nXKDgJjpmLW6w5xEEgnZ6q7FttDIJuTggsDEuYqubuBTUARJMS5MyZ +L0UQx/mTQW/AbIc1lY5huHIRD9LRFQ2a3Pv9dM4r17/09RzBv7AcpQYt0HXHUuAELKo4 X-Google-Smtp-Source: ABdhPJwrcaTLhhnNcxZsQ6jw7FyV0YKxvrm6mEVXy8dUFIzzPwJaDWzSLOLinvMc3mhnHCwcoMBhT2PSqcM= Sender: "satyat via sendgmr" X-Received: from satyaprateek.c.googlers.com ([fda3:e722:ac3:10:24:72f4:c0a8:1092]) (user=satyat job=sendgmr) by 2002:ad4:452f:: with SMTP id l15mr7697781qvu.49.1610639251249; Thu, 14 Jan 2021 07:47:31 -0800 (PST) Date: Thu, 14 Jan 2021 15:47:17 +0000 In-Reply-To: <20210114154723.2495814-1-satyat@google.com> Message-Id: <20210114154723.2495814-2-satyat@google.com> Mime-Version: 1.0 References: <20210114154723.2495814-1-satyat@google.com> X-Mailer: git-send-email 2.30.0.284.gd98b1dd5eaa7-goog Subject: [PATCH 1/7] block: make blk_bio_segment_split() able to fail and return error From: Satya Tangirala To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Jens Axboe , Eric Biggers , Satya Tangirala Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Till now, blk_bio_segment_split() always succeeded 
and returned a split bio if necessary. Instead, allow it to return an error code to indicate errors (and pass a pointer to the split bio as an argument instead). blk_bio_segment_split() is only called by __blk_queue_split(), which has been updated to return the error code from blk_bio_segment_split(). This patch also updates all callers of __blk_queue_split() and blk_queue_split() to handle any error returned by those functions. blk_bio_segment_split() needs to be able to fail because future patches will ensure that the size of the split bio is aligned to the data unit size of the bio crypt context of the bio (if it exists). It's possible that the largest aligned size that satisfies all the requirements of blk_bio_segment_split() is 0, at which point we need to error out. Signed-off-by: Satya Tangirala --- block/blk-merge.c | 36 +++++++++++++++++++++++++---------- block/blk-mq.c | 5 ++++- block/blk.h | 2 +- drivers/block/drbd/drbd_req.c | 5 ++++- drivers/block/pktcdvd.c | 3 ++- drivers/block/ps3vram.c | 5 ++++- drivers/block/rsxx/dev.c | 3 ++- drivers/block/umem.c | 5 ++++- drivers/lightnvm/pblk-init.c | 13 ++++++++++--- drivers/md/dm.c | 8 ++++++-- drivers/md/md.c | 5 ++++- drivers/nvme/host/multipath.c | 5 ++++- drivers/s390/block/dcssblk.c | 3 ++- drivers/s390/block/xpram.c | 3 ++- include/linux/blkdev.h | 2 +- 15 files changed, 76 insertions(+), 27 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index 808768f6b174..a23a91e12e24 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -229,6 +229,7 @@ static bool bvec_split_segs(const struct request_queue *q, * @bio: [in] bio to be split * @bs: [in] bio set to allocate the clone from * @segs: [out] number of segments in the bio with the first half of the sectors + * @split: [out] The split bio, if @bio is split * * Clone @bio, update the bi_iter of the clone to represent the first sectors * of @bio and update @bio->bi_iter to represent the remaining sectors.
The @@ -241,11 +242,14 @@ static bool bvec_split_segs(const struct request_queue *q, * original bio is not freed before the cloned bio. The caller is also * responsible for ensuring that @bs is only destroyed after processing of the * split bio has finished. + * + * Return: 0 on success, negative on error */ -static struct bio *blk_bio_segment_split(struct request_queue *q, - struct bio *bio, - struct bio_set *bs, - unsigned *segs) +static int blk_bio_segment_split(struct request_queue *q, + struct bio *bio, + struct bio_set *bs, + unsigned *segs, + struct bio **split) { struct bio_vec bv, bvprv, *bvprvp = NULL; struct bvec_iter iter; @@ -276,7 +280,8 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, } *segs = nsegs; - return NULL; + *split = NULL; + return 0; split: *segs = nsegs; @@ -287,7 +292,8 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, */ bio->bi_opf &= ~REQ_HIPRI; - return bio_split(bio, sectors, GFP_NOIO, bs); + *split = bio_split(bio, sectors, GFP_NOIO, bs); + return 0; } /** @@ -302,11 +308,14 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, * the responsibility of the caller to ensure that * @bio->bi_disk->queue->bio_split is only released after processing of the * split bio has finished. 
+ * + * Return: 0 on success, negative on error */ -void __blk_queue_split(struct bio **bio, unsigned int *nr_segs) +int __blk_queue_split(struct bio **bio, unsigned int *nr_segs) { struct request_queue *q = (*bio)->bi_disk->queue; struct bio *split = NULL; + int err; switch (bio_op(*bio)) { case REQ_OP_DISCARD: @@ -337,7 +346,10 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs) *nr_segs = 1; break; } - split = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs); + err = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs, + &split); + if (err) + return err; break; } @@ -350,6 +362,8 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs) submit_bio_noacct(*bio); *bio = split; } + + return 0; } /** @@ -361,12 +375,14 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs) * a new bio from @bio->bi_disk->queue->bio_split, it is the responsibility of * the caller to ensure that @bio->bi_disk->queue->bio_split is only released * after processing of the split bio has finished.
+ * + * Return: 0 on success, negative on error */ -void blk_queue_split(struct bio **bio) +int blk_queue_split(struct bio **bio) { unsigned int nr_segs; - __blk_queue_split(bio, &nr_segs); + return __blk_queue_split(bio, &nr_segs); } EXPORT_SYMBOL(blk_queue_split); diff --git a/block/blk-mq.c b/block/blk-mq.c index f285a9123a8b..43fe5be6bbb7 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2143,7 +2143,10 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio) bool hipri; blk_queue_bounce(q, &bio); - __blk_queue_split(&bio, &nr_segs); + if (__blk_queue_split(&bio, &nr_segs)) { + bio_io_error(bio); + goto queue_exit; + } if (!bio_integrity_prep(bio)) goto queue_exit; diff --git a/block/blk.h b/block/blk.h index 7550364c326c..22096c8272cb 100644 --- a/block/blk.h +++ b/block/blk.h @@ -218,7 +218,7 @@ ssize_t part_timeout_show(struct device *, struct device_attribute *, char *); ssize_t part_timeout_store(struct device *, struct device_attribute *, const char *, size_t); -void __blk_queue_split(struct bio **bio, unsigned int *nr_segs); +int __blk_queue_split(struct bio **bio, unsigned int *nr_segs); int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs); int blk_attempt_req_merge(struct request_queue *q, struct request *rq, diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c index 330f851cb8f0..1baaaad22bff 100644 --- a/drivers/block/drbd/drbd_req.c +++ b/drivers/block/drbd/drbd_req.c @@ -1598,7 +1598,10 @@ blk_qc_t drbd_submit_bio(struct bio *bio) struct drbd_device *device = bio->bi_disk->private_data; unsigned long start_jif; - blk_queue_split(&bio); + if (blk_queue_split(&bio)) { + bio_io_error(bio); + return BLK_QC_T_NONE; + } start_jif = jiffies; diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c index b8bb8ec7538d..702f7e5564ff 100644 --- a/drivers/block/pktcdvd.c +++ b/drivers/block/pktcdvd.c @@ -2372,7 +2372,8 @@ static blk_qc_t pkt_submit_bio(struct bio *bio) char b[BDEVNAME_SIZE]; struct bio *split; - 
blk_queue_split(&bio); + if (blk_queue_split(&bio)) + goto end_io; pd = bio->bi_disk->queue->queuedata; if (!pd) { diff --git a/drivers/block/ps3vram.c b/drivers/block/ps3vram.c index b71d28372ef3..772e0c3e7036 100644 --- a/drivers/block/ps3vram.c +++ b/drivers/block/ps3vram.c @@ -587,7 +587,10 @@ static blk_qc_t ps3vram_submit_bio(struct bio *bio) dev_dbg(&dev->core, "%s\n", __func__); - blk_queue_split(&bio); + if (blk_queue_split(&bio)) { + bio_io_error(bio); + return BLK_QC_T_NONE; + } spin_lock_irq(&priv->lock); busy = !bio_list_empty(&priv->list); diff --git a/drivers/block/rsxx/dev.c b/drivers/block/rsxx/dev.c index edacefff6e35..e9d3538a2625 100644 --- a/drivers/block/rsxx/dev.c +++ b/drivers/block/rsxx/dev.c @@ -126,7 +126,8 @@ static blk_qc_t rsxx_submit_bio(struct bio *bio) struct rsxx_bio_meta *bio_meta; blk_status_t st = BLK_STS_IOERR; - blk_queue_split(&bio); + if (blk_queue_split(&bio)) + goto req_err; might_sleep(); diff --git a/drivers/block/umem.c b/drivers/block/umem.c index 2b95d7b33b91..ac1e8a0750a9 100644 --- a/drivers/block/umem.c +++ b/drivers/block/umem.c @@ -527,7 +527,10 @@ static blk_qc_t mm_submit_bio(struct bio *bio) (unsigned long long)bio->bi_iter.bi_sector, bio->bi_iter.bi_size); - blk_queue_split(&bio); + if (blk_queue_split(&bio)) { + bio_io_error(bio); + return BLK_QC_T_NONE; + } spin_lock_irq(&card->lock); *card->biotail = bio; diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c index b6246f73895c..42873e04edf4 100644 --- a/drivers/lightnvm/pblk-init.c +++ b/drivers/lightnvm/pblk-init.c @@ -63,15 +63,22 @@ static blk_qc_t pblk_submit_bio(struct bio *bio) * constraint. Writes can be of arbitrary size. */ if (bio_data_dir(bio) == READ) { - blk_queue_split(&bio); + if (blk_queue_split(&bio)) { + bio_io_error(bio); + return BLK_QC_T_NONE; + } pblk_submit_read(pblk, bio); } else { /* Prevent deadlock in the case of a modest LUN configuration * and large user I/Os. 
Unless stalled, the rate limiter * leaves at least 256KB available for user I/O. */ - if (pblk_get_secs(bio) > pblk_rl_max_io(&pblk->rl)) - blk_queue_split(&bio); + if (pblk_get_secs(bio) > pblk_rl_max_io(&pblk->rl)) { + if (blk_queue_split(&bio)) { + bio_io_error(bio); + return BLK_QC_T_NONE; + } + } pblk_write_to_cache(pblk, bio, PBLK_IOTYPE_USER); } diff --git a/drivers/md/dm.c b/drivers/md/dm.c index b3c3c8b4cb42..f304cb017176 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1654,8 +1654,12 @@ static blk_qc_t dm_submit_bio(struct bio *bio) * Use blk_queue_split() for abnormal IO (e.g. discard, writesame, etc) * otherwise associated queue_limits won't be imposed. */ - if (is_abnormal_io(bio)) - blk_queue_split(&bio); + if (is_abnormal_io(bio)) { + if (blk_queue_split(&bio)) { + bio_io_error(bio); + goto out; + } + } ret = __split_and_process_bio(md, map, bio); out: diff --git a/drivers/md/md.c b/drivers/md/md.c index ca409428b4fc..66c9cc95d14d 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -498,7 +498,10 @@ static blk_qc_t md_submit_bio(struct bio *bio) return BLK_QC_T_NONE; } - blk_queue_split(&bio); + if (blk_queue_split(&bio)) { + bio_io_error(bio); + return BLK_QC_T_NONE; + } if (mddev->ro == 1 && unlikely(rw == WRITE)) { if (bio_sectors(bio) != 0) diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 9ac762b28811..34874dc5258f 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -307,7 +307,10 @@ blk_qc_t nvme_ns_head_submit_bio(struct bio *bio) * different queue via blk_steal_bios(), so we need to use the bio_split * pool from the original queue to allocate the bvecs from. 
*/ - blk_queue_split(&bio); + if (blk_queue_split(&bio)) { + bio_io_error(bio); + return ret; + } srcu_idx = srcu_read_lock(&head->srcu); ns = nvme_find_path(head); diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c index 299e77ec2c41..33904f527f62 100644 --- a/drivers/s390/block/dcssblk.c +++ b/drivers/s390/block/dcssblk.c @@ -876,7 +876,8 @@ dcssblk_submit_bio(struct bio *bio) unsigned long source_addr; unsigned long bytes_done; - blk_queue_split(&bio); + if (blk_queue_split(&bio)) + goto fail; bytes_done = 0; dev_info = bio->bi_disk->private_data; diff --git a/drivers/s390/block/xpram.c b/drivers/s390/block/xpram.c index c2536f7767b3..6f2f772e81e8 100644 --- a/drivers/s390/block/xpram.c +++ b/drivers/s390/block/xpram.c @@ -191,7 +191,8 @@ static blk_qc_t xpram_submit_bio(struct bio *bio) unsigned long page_addr; unsigned long bytes; - blk_queue_split(&bio); + if (blk_queue_split(&bio)) + goto fail; if ((bio->bi_iter.bi_sector & 7) != 0 || (bio->bi_iter.bi_size & 4095) != 0) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index f94ee3089e01..b4b44d7262e5 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -926,7 +926,7 @@ extern void blk_rq_unprep_clone(struct request *rq); extern blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq); extern int blk_rq_append_bio(struct request *rq, struct bio **bio); -extern void blk_queue_split(struct bio **); +extern int blk_queue_split(struct bio **); extern int scsi_verify_blk_ioctl(struct block_device *, unsigned int); extern int scsi_cmd_blk_ioctl(struct block_device *, fmode_t, unsigned int, void __user *);
From patchwork Thu Jan 14 15:47:18 2021 X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12020027 Date: Thu, 14 Jan 2021 15:47:18 +0000 In-Reply-To: <20210114154723.2495814-1-satyat@google.com> Message-Id: <20210114154723.2495814-3-satyat@google.com> References: <20210114154723.2495814-1-satyat@google.com> Subject: [PATCH 2/7] block: blk-crypto: Introduce blk_crypto_bio_sectors_alignment() From: Satya Tangirala To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Jens Axboe, Eric Biggers, Satya Tangirala List-ID: X-Mailing-List: linux-block@vger.kernel.org
The size of any bio must be aligned to the data unit size of the bio crypt context (if it exists) of that bio. This must also be ensured whenever a bio is split. Introduce blk_crypto_bio_sectors_alignment() that returns the required alignment in sectors.
The number of sectors passed to any call of bio_split() should be aligned to blk_crypto_bio_sectors_alignment(). Signed-off-by: Satya Tangirala --- block/blk-crypto-internal.h | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h index 0d36aae538d7..304e90ed99f5 100644 --- a/block/blk-crypto-internal.h +++ b/block/blk-crypto-internal.h @@ -60,6 +60,19 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq) return rq->crypt_ctx; } +/* + * Returns the alignment requirement for the number of sectors in this bio based + * on its bi_crypt_context. Any bios split from this bio must follow this + * alignment requirement as well. + */ +static inline unsigned int blk_crypto_bio_sectors_alignment(struct bio *bio) +{ + if (!bio_has_crypt_ctx(bio)) + return 1; + return bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size >> + SECTOR_SHIFT; +} + #else /* CONFIG_BLK_INLINE_ENCRYPTION */ static inline bool bio_crypt_rq_ctx_compatible(struct request *rq, @@ -93,6 +106,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq) return false; } +static inline unsigned int blk_crypto_bio_sectors_alignment(struct bio *bio) +{ + return 1; +} + #endif /* CONFIG_BLK_INLINE_ENCRYPTION */ void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
From patchwork Thu Jan 14 15:47:19 2021 X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12020025 Date: Thu, 14 Jan 2021 15:47:19 +0000 In-Reply-To: <20210114154723.2495814-1-satyat@google.com> Message-Id: <20210114154723.2495814-4-satyat@google.com> References: <20210114154723.2495814-1-satyat@google.com> Subject: [PATCH 3/7] block: respect blk_crypto_bio_sectors_alignment() in bounce.c From: Satya Tangirala To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Jens Axboe, Eric Biggers, Satya Tangirala List-ID: X-Mailing-List: linux-block@vger.kernel.org
Make __blk_queue_bounce respect blk_crypto_bio_sectors_alignment() when calling bio_split().
Signed-off-by: Satya Tangirala --- block/bounce.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/block/bounce.c b/block/bounce.c index d3f51acd6e3b..7800e2a5a0f8 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -305,6 +305,9 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig, if (!bounce) return; + sectors = round_down(sectors, + blk_crypto_bio_sectors_alignment(*bio_orig)); + if (!passthrough && sectors < bio_sectors(*bio_orig)) { bio = bio_split(*bio_orig, sectors, GFP_NOIO, &bounce_bio_split); bio_chain(bio, *bio_orig);
From patchwork Thu Jan 14 15:47:20 2021 X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12020029 Date: Thu, 14 Jan 2021 15:47:20 +0000
In-Reply-To: <20210114154723.2495814-1-satyat@google.com> Message-Id: <20210114154723.2495814-5-satyat@google.com> References: <20210114154723.2495814-1-satyat@google.com> Subject: [PATCH 4/7] block: respect blk_crypto_bio_sectors_alignment() in blk-crypto-fallback From: Satya Tangirala To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Jens Axboe, Eric Biggers, Satya Tangirala List-ID: X-Mailing-List: linux-block@vger.kernel.org
Make blk_crypto_split_bio_if_needed() respect blk_crypto_bio_sectors_alignment() when calling bio_split(). Signed-off-by: Satya Tangirala --- block/blk-crypto-fallback.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c index c162b754efbd..77e20175df40 100644 --- a/block/blk-crypto-fallback.c +++ b/block/blk-crypto-fallback.c @@ -222,6 +222,8 @@ static bool blk_crypto_split_bio_if_needed(struct bio **bio_ptr) if (num_sectors < bio_sectors(bio)) { struct bio *split_bio; + num_sectors = round_down(num_sectors, + blk_crypto_bio_sectors_alignment(bio)); split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL); if (!split_bio) { bio->bi_status = BLK_STS_RESOURCE;
From patchwork Thu Jan 14 15:47:21 2021 X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12020033
ESMTP id 87769C4332E for ; Thu, 14 Jan 2021 15:48:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 69E6B23B84 for ; Thu, 14 Jan 2021 15:48:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729524AbhANPsh (ORCPT ); Thu, 14 Jan 2021 10:48:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53760 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729504AbhANPsc (ORCPT ); Thu, 14 Jan 2021 10:48:32 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 110B4C06179B for ; Thu, 14 Jan 2021 07:47:40 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id 32so3441835plf.3 for ; Thu, 14 Jan 2021 07:47:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=g2xCDGkf1QV8m2oT6FSaZhKIqRrYyoL3mQeF9dlubXk=; b=P8luqrH5fshKRy49UY321ccjqfMF+1nPrvcMgYsQRGwWvzW/T10vlEN+9nLbYE+bLf NsB5qJP6UoUga+wSwzCUgpPYoUTykzMHGpgY54Dk0ey2sF18Cai0yXtv6Pn8xgAJgqSN D0cvYGOQWpyl2O1Qp1p61+R5otk3CSYbHjPKSRCmZBe7yxYdsyxWWNC3O5uxD9+azrUS me2JZj16nNfGby1IxHjCh6NMlxx2RqfWyGNkkEI8zh4PzJfRJz1NlS5l0zN82Qps7rRB SD8uLJwWUCRC2jsiIthRvO1rU8da7j0B1yjDT/4oMNOmUTBcrWG6RaWlYYxQ369Wsd8O SwUg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=g2xCDGkf1QV8m2oT6FSaZhKIqRrYyoL3mQeF9dlubXk=; b=lHWFVMB8/OylhJ4RR+CutNk/Q/5jHKJh4igPoMbJDZVM/EZ/f64GEcQHvxZShEPJbW JvNUFxyYm1ntLUmwVSzlHt2OW4JUmenwdu3P9DvTN86kMe00whYF9xdHsMI6Gkc6LPlt /pRIpHgNzs2Swse5BsZKR+FQewUGS/JVhMfQCrcLljnfvRZsJc6o3yQNJaASb20FLqKE m1OjIaYhjnbX7rm71LAoOjvQnZko3FW8AYoBdCAHT9jeWz1KQBSy9F1V7U68i7yNnnWv 
Date: Thu, 14 Jan 2021 15:47:21 +0000
In-Reply-To: <20210114154723.2495814-1-satyat@google.com>
Message-Id: <20210114154723.2495814-6-satyat@google.com>
References: <20210114154723.2495814-1-satyat@google.com>
Subject: [PATCH 5/7] block: respect blk_crypto_bio_sectors_alignment() in blk-merge
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Jens Axboe, Eric Biggers, Satya Tangirala

Make blk_bio_segment_split() respect blk_crypto_bio_sectors_alignment()
when calling bio_split(). The number of sectors is rounded down to the
required alignment just before the call to bio_split(). This makes it
possible for nsegs to be overestimated, but this solution is a lot
simpler than trying to calculate the exact number of segments required
for the aligned number of sectors. A future patch will calculate nsegs
more accurately.
Signed-off-by: Satya Tangirala
---
 block/blk-merge.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index a23a91e12e24..45cda45c1066 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -236,6 +236,8 @@ static bool bvec_split_segs(const struct request_queue *q,
  * following is guaranteed for the cloned bio:
  * - That it has at most get_max_io_size(@q, @bio) sectors.
  * - That it has at most queue_max_segments(@q) segments.
+ * - That the number of sectors in the returned bio is aligned to
+ *   blk_crypto_bio_sectors_alignment(@bio)
  *
  * Except for discard requests the cloned bio will point at the bi_io_vec of
  * the original bio. It is the responsibility of the caller to ensure that the
@@ -292,6 +294,9 @@ static int blk_bio_segment_split(struct request_queue *q,
 	 */
 	bio->bi_opf &= ~REQ_HIPRI;
 
+	sectors = round_down(sectors, blk_crypto_bio_sectors_alignment(bio));
+	if (WARN_ON(sectors == 0))
+		return -EIO;
 	*split = bio_split(bio, sectors, GFP_NOIO, bs);
 	return 0;
 }

From patchwork Thu Jan 14 15:47:22 2021
Date: Thu, 14 Jan 2021 15:47:22 +0000
In-Reply-To: <20210114154723.2495814-1-satyat@google.com>
Message-Id: <20210114154723.2495814-7-satyat@google.com>
References: <20210114154723.2495814-1-satyat@google.com>
Subject: [PATCH 6/7] block: add WARN() in bio_split() for sector alignment
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Jens Axboe, Eric Biggers, Satya Tangirala

The number of sectors passed to bio_split() should be aligned to
blk_crypto_bio_sectors_alignment(). All callers have been updated to
ensure this, so add a WARN() if the number of sectors is not aligned.
Signed-off-by: Satya Tangirala
---
 block/bio.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/bio.c b/block/bio.c
index 1f2cc1fbe283..c5f577ee6b8d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1472,6 +1472,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
 	BUG_ON(sectors <= 0);
 	BUG_ON(sectors >= bio_sectors(bio));
+	WARN_ON(!IS_ALIGNED(sectors, blk_crypto_bio_sectors_alignment(bio)));
 
 	/* Zone append commands cannot be split */
 	if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND))

From patchwork Thu Jan 14 15:47:23 2021
Date: Thu, 14 Jan 2021 15:47:23 +0000
In-Reply-To: <20210114154723.2495814-1-satyat@google.com>
Message-Id: <20210114154723.2495814-8-satyat@google.com>
References: <20210114154723.2495814-1-satyat@google.com>
Subject: [PATCH 7/7] block: compute nsegs more accurately in blk_bio_segment_split()
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Jens Axboe, Eric Biggers, Satya Tangirala

Previously, we rounded down the number of sectors just before calling
bio_split() in blk_bio_segment_split(). While this ensures that bios are
not split in the middle of a data unit, it makes it possible for nsegs
to be overestimated. This patch calculates nsegs accurately (it
calculates the smallest number of segments required for the aligned
number of sectors in the split bio).

Signed-off-by: Satya Tangirala
---
 block/blk-merge.c | 97 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 80 insertions(+), 17 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 45cda45c1066..58428d348661 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -145,17 +145,17 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 					  struct bio *bio)
 {
 	unsigned sectors = blk_max_size_offset(q, bio->bi_iter.bi_sector, 0);
-	unsigned max_sectors = sectors;
 	unsigned pbs = queue_physical_block_size(q) >> SECTOR_SHIFT;
 	unsigned lbs = queue_logical_block_size(q) >> SECTOR_SHIFT;
-	unsigned start_offset = bio->bi_iter.bi_sector & (pbs - 1);
+	unsigned pbs_aligned_sector =
+		round_down(sectors + bio->bi_iter.bi_sector, pbs);
 
-	max_sectors += start_offset;
-	max_sectors &= ~(pbs - 1);
-	if (max_sectors > start_offset)
-		return max_sectors - start_offset;
+	lbs = max(lbs, blk_crypto_bio_sectors_alignment(bio));
 
-	return sectors & ~(lbs - 1);
+	if (pbs_aligned_sector >= bio->bi_iter.bi_sector + lbs)
+		sectors = pbs_aligned_sector;
+
+	return round_down(sectors, lbs);
 }
 
 static inline unsigned get_max_segment_size(const struct request_queue *q,
@@ -174,6 +174,41 @@ static inline unsigned get_max_segment_size(const struct request_queue *q,
 			(unsigned long)queue_max_segment_size(q));
 }
 
+/**
+ * update_aligned_sectors_and_segs() - Ensures that *@aligned_sectors is
+ *				       aligned to @bio_sectors_alignment, and
+ *				       that *@aligned_segs is the value of nsegs
+ *				       when sectors reached/first exceeded that
+ *				       value of *@aligned_sectors.
+ *
+ * @nsegs: [in] The current number of segs
+ * @sectors: [in] The current number of sectors
+ * @aligned_segs: [in,out] The number of segments that make up @aligned_sectors
+ * @aligned_sectors: [in,out] The largest number of sectors <= @sectors that is
+ *		     aligned to @bio_sectors_alignment
+ * @bio_sectors_alignment: [in] The alignment requirement for the number of
+ *			   sectors
+ *
+ * Updates *@aligned_sectors to the largest number <= @sectors that is also a
+ * multiple of @bio_sectors_alignment. This is done by updating
+ * *@aligned_sectors whenever @sectors is at least @bio_sectors_alignment more
+ * than *@aligned_sectors, since that means we can increment *@aligned_sectors
+ * while still keeping it aligned to @bio_sectors_alignment and also keeping it
+ * <= @sectors. *@aligned_segs is updated to the value of nsegs when @sectors
+ * first reaches/exceeds any value that causes *@aligned_sectors to be updated.
+ */
+static inline void update_aligned_sectors_and_segs(const unsigned int nsegs,
+						   const unsigned int sectors,
+						   unsigned int *aligned_segs,
+						   unsigned int *aligned_sectors,
+						   const unsigned int bio_sectors_alignment)
+{
+	if (sectors - *aligned_sectors < bio_sectors_alignment)
+		return;
+	*aligned_sectors = round_down(sectors, bio_sectors_alignment);
+	*aligned_segs = nsegs;
+}
+
 /**
  * bvec_split_segs - verify whether or not a bvec should be split in the middle
  * @q:        [in] request queue associated with the bio associated with @bv
@@ -195,9 +230,12 @@ static inline unsigned get_max_segment_size(const struct request_queue *q,
  * the block driver.
  */
 static bool bvec_split_segs(const struct request_queue *q,
-			    const struct bio_vec *bv, unsigned *nsegs,
-			    unsigned *sectors, unsigned max_segs,
-			    unsigned max_sectors)
+			    const struct bio_vec *bv, unsigned int *nsegs,
+			    unsigned int *sectors, unsigned int *aligned_segs,
+			    unsigned int *aligned_sectors,
+			    unsigned int bio_sectors_alignment,
+			    unsigned int max_segs,
+			    unsigned int max_sectors)
 {
 	unsigned max_len = (min(max_sectors, UINT_MAX >> 9) - *sectors) << 9;
 	unsigned len = min(bv->bv_len, max_len);
@@ -211,6 +249,11 @@ static bool bvec_split_segs(const struct request_queue *q,
 		(*nsegs)++;
 		total_len += seg_size;
 
+		update_aligned_sectors_and_segs(*nsegs,
+						*sectors + (total_len >> 9),
+						aligned_segs,
+						aligned_sectors,
+						bio_sectors_alignment);
 		len -= seg_size;
 
 		if ((bv->bv_offset + total_len) & queue_virt_boundary(q))
@@ -258,6 +301,9 @@ static int blk_bio_segment_split(struct request_queue *q,
 	unsigned nsegs = 0, sectors = 0;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 	const unsigned max_segs = queue_max_segments(q);
+	const unsigned int bio_sectors_alignment =
+		blk_crypto_bio_sectors_alignment(bio);
+	unsigned int aligned_segs = 0, aligned_sectors = 0;
 
 	bio_for_each_bvec(bv, bio, iter) {
 		/*
@@ -272,8 +318,14 @@ static int blk_bio_segment_split(struct request_queue *q,
 		    bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
 			nsegs++;
 			sectors += bv.bv_len >> 9;
-		} else if (bvec_split_segs(q, &bv, &nsegs, &sectors, max_segs,
-					   max_sectors)) {
+			update_aligned_sectors_and_segs(nsegs, sectors,
+							&aligned_segs,
+							&aligned_sectors,
+							bio_sectors_alignment);
+		} else if (bvec_split_segs(q, &bv, &nsegs, &sectors,
+					   &aligned_segs, &aligned_sectors,
+					   bio_sectors_alignment, max_segs,
+					   max_sectors)) {
 			goto split;
 		}
 
@@ -281,11 +333,18 @@ static int blk_bio_segment_split(struct request_queue *q,
 		bvprvp = &bvprv;
 	}
 
+	/*
+	 * The input bio's number of sectors is assumed to be aligned to
+	 * bio_sectors_alignment. If that's the case, then this function should
+	 * ensure that aligned_segs == nsegs and aligned_sectors == sectors if
+	 * the bio is not going to be split.
+	 */
+	WARN_ON(aligned_segs != nsegs || aligned_sectors != sectors);
 	*segs = nsegs;
 	*split = NULL;
 	return 0;
 split:
-	*segs = nsegs;
+	*segs = aligned_segs;
 
 	/*
	 * Bio splitting may cause subtle trouble such as hang when doing sync
	 * iopoll in direct IO routine. Given performance gain of iopoll for
	 * big IO can be trivial, disable iopoll when split needed.
@@ -294,10 +353,9 @@ static int blk_bio_segment_split(struct request_queue *q,
 	 */
 	bio->bi_opf &= ~REQ_HIPRI;
 
-	sectors = round_down(sectors, blk_crypto_bio_sectors_alignment(bio));
-	if (WARN_ON(sectors == 0))
+	if (WARN_ON(aligned_sectors == 0))
 		return -EIO;
-	*split = bio_split(bio, sectors, GFP_NOIO, bs);
+	*split = bio_split(bio, aligned_sectors, GFP_NOIO, bs);
 	return 0;
 }
 
@@ -395,6 +453,9 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 {
 	unsigned int nr_phys_segs = 0;
 	unsigned int nr_sectors = 0;
+	unsigned int nr_aligned_phys_segs = 0;
+	unsigned int nr_aligned_sectors = 0;
+	unsigned int bio_sectors_alignment;
 	struct req_iterator iter;
 	struct bio_vec bv;
 
@@ -410,9 +471,11 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 		return 1;
 	}
 
+	bio_sectors_alignment = blk_crypto_bio_sectors_alignment(rq->bio);
 	rq_for_each_bvec(bv, rq, iter)
 		bvec_split_segs(rq->q, &bv, &nr_phys_segs, &nr_sectors,
-				UINT_MAX, UINT_MAX);
+				&nr_aligned_phys_segs, &nr_aligned_sectors,
+				bio_sectors_alignment, UINT_MAX, UINT_MAX);
 
 	return nr_phys_segs;
 }