From patchwork Tue Sep 22 02:32:46 2020
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 11791377
From: Mike Snitzer
To: Jens Axboe
Cc: Ming Lei, Vijayendra Suman, dm-devel@redhat.com, linux-block@vger.kernel.org
Subject: [PATCH v3 1/6] dm: fix bio splitting and its bio completion order for regular IO
Date: Mon, 21 Sep 2020 22:32:46 -0400
Message-Id: <20200922023251.47712-2-snitzer@redhat.com>
In-Reply-To: <20200922023251.47712-1-snitzer@redhat.com>
References: <20200922023251.47712-1-snitzer@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

dm_queue_split() is removed because __split_and_process_bio() _must_
handle splitting bios to ensure proper bio submission and completion
ordering as a bio is split.

Otherwise, multiple recursive calls to ->submit_bio will cause multiple
split bios to be allocated from the same ->bio_split mempool at the
same time. This would result in deadlock in low memory conditions
because no progress could be made (only one bio is available in the
->bio_split mempool).

This fix has been verified to still fix the loss of performance, due
to excess splitting, that commit 120c9257f5f1 provided.

Fixes: 120c9257f5f1 ("Revert "dm: always call blk_queue_split() in dm_process_bio()"")
Cc: stable@vger.kernel.org # 5.0+, requires custom backport due to 5.9 changes
Reported-by: Ming Lei
Signed-off-by: Mike Snitzer
---
 drivers/md/dm.c | 23 ++---------------------
 1 file changed, 2 insertions(+), 21 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 4a40df8af7d3..d948cd522431 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1724,23 +1724,6 @@ static blk_qc_t __process_bio(struct mapped_device *md, struct dm_table *map,
 	return ret;
 }
 
-static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struct bio **bio)
-{
-	unsigned len, sector_count;
-
-	sector_count = bio_sectors(*bio);
-	len = min_t(sector_t, max_io_len((*bio)->bi_iter.bi_sector, ti), sector_count);
-
-	if (sector_count > len) {
-		struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split);
-
-		bio_chain(split, *bio);
-		trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector);
-		submit_bio_noacct(*bio);
-		*bio = split;
-	}
-}
-
 static blk_qc_t dm_process_bio(struct mapped_device *md,
 			       struct dm_table *map, struct bio *bio)
 {
@@ -1768,14 +1751,12 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
 	if (current->bio_list) {
 		if (is_abnormal_io(bio))
 			blk_queue_split(&bio);
-		else
-			dm_queue_split(md, ti, &bio);
+		/* regular IO is split by __split_and_process_bio */
 	}
 
 	if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED)
 		return __process_bio(md, map, bio, ti);
-	else
-		return __split_and_process_bio(md, map, bio);
+	return __split_and_process_bio(md, map, bio);
 }
 
 static blk_qc_t dm_submit_bio(struct bio *bio)
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 11791379
From: Mike Snitzer
To: Jens Axboe
Cc: Ming Lei, Vijayendra Suman, dm-devel@redhat.com, linux-block@vger.kernel.org
Subject: [PATCH v3 2/6] dm: fix comment in dm_process_bio()
Date: Mon, 21 Sep 2020 22:32:47 -0400
Message-Id: <20200922023251.47712-3-snitzer@redhat.com>
In-Reply-To: <20200922023251.47712-1-snitzer@redhat.com>
References: <20200922023251.47712-1-snitzer@redhat.com>

Refer to the correct function (->submit_bio instead of ->queue_bio).
Also, add details about why using blk_queue_split() isn't needed for
dm_wq_work()'s call to dm_process_bio().

Fixes: c62b37d96b6eb ("block: move ->make_request_fn to struct block_device_operations")
Signed-off-by: Mike Snitzer
---
 drivers/md/dm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index d948cd522431..6ed05ca65a0f 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1744,9 +1744,11 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
 	}
 
 	/*
-	 * If in ->queue_bio we need to use blk_queue_split(), otherwise
+	 * If in ->submit_bio we need to use blk_queue_split(), otherwise
 	 * queue_limits for abnormal requests (e.g. discard, writesame, etc)
 	 * won't be imposed.
+	 * If called from dm_wq_work() for deferred bio processing, bio
+	 * was already handled by following code with previous ->submit_bio.
 	 */
 	if (current->bio_list) {
 		if (is_abnormal_io(bio))
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 11791381
From: Mike Snitzer
To: Jens Axboe
Cc: Ming Lei, Vijayendra Suman, dm-devel@redhat.com, linux-block@vger.kernel.org
Subject: [PATCH v3 3/6] block: use lcm_not_zero() when stacking chunk_sectors
Date: Mon, 21 Sep 2020 22:32:48 -0400
Message-Id: <20200922023251.47712-4-snitzer@redhat.com>
In-Reply-To: <20200922023251.47712-1-snitzer@redhat.com>
References: <20200922023251.47712-1-snitzer@redhat.com>

Like 'io_opt', blk_stack_limits() should stack 'chunk_sectors' using
lcm_not_zero() rather than min_not_zero() -- otherwise the final
'chunk_sectors' could result in sub-optimal alignment of IO to
component devices in the IO stack.

Also, if 'chunk_sectors' isn't a multiple of 'physical_block_size'
then it is a bug in the driver and the device should be flagged as
'misaligned'.

Signed-off-by: Mike Snitzer
Reviewed-by: Ming Lei
Reviewed-by: Martin K. Petersen
---
 block/blk-settings.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 76a7e03bcd6c..b2e1a929a6db 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -534,6 +534,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->io_min = max(t->io_min, b->io_min);
 	t->io_opt = lcm_not_zero(t->io_opt, b->io_opt);
+	t->chunk_sectors = lcm_not_zero(t->chunk_sectors, b->chunk_sectors);
 
 	/* Physical block size a multiple of the logical block size? */
 	if (t->physical_block_size & (t->logical_block_size - 1)) {
@@ -556,6 +557,13 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 		ret = -1;
 	}
 
+	/* chunk_sectors a multiple of the physical block size? */
+	if ((t->chunk_sectors << 9) & (t->physical_block_size - 1)) {
+		t->chunk_sectors = 0;
+		t->misaligned = 1;
+		ret = -1;
+	}
+
 	t->raid_partial_stripes_expensive =
 		max(t->raid_partial_stripes_expensive,
 		    b->raid_partial_stripes_expensive);
@@ -594,10 +602,6 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 			t->discard_granularity;
 	}
 
-	if (b->chunk_sectors)
-		t->chunk_sectors = min_not_zero(t->chunk_sectors,
-						b->chunk_sectors);
-
 	t->zoned = max(t->zoned, b->zoned);
 	return ret;
 }
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 11791383
From: Mike Snitzer
To: Jens Axboe
Cc: Ming Lei, Vijayendra Suman, dm-devel@redhat.com, linux-block@vger.kernel.org
Subject: [PATCH v3 4/6] block: allow 'chunk_sectors' to be non-power-of-2
Date: Mon, 21 Sep 2020 22:32:49 -0400
Message-Id: <20200922023251.47712-5-snitzer@redhat.com>
In-Reply-To: <20200922023251.47712-1-snitzer@redhat.com>
References: <20200922023251.47712-1-snitzer@redhat.com>

It is possible, albeit unlikely, for a block device to have a non
power-of-2 chunk_sectors (e.g. 10+2 RAID6 with 128K chunk_sectors,
which results in a full-stripe size of 1280K. This causes the RAID6's
io_opt to be advertised as 1280K, and a stacked device _could_ then be
made to use a blocksize, aka chunk_sectors, that matches the non
power-of-2 io_opt of the underlying RAID6 -- resulting in the stacked
device's chunk_sectors being a non power-of-2).

Update blk_queue_chunk_sectors() and blk_max_size_offset() to
accommodate drivers that need a non power-of-2 chunk_sectors.

Signed-off-by: Mike Snitzer
Reviewed-by: Ming Lei
Reviewed-by: Martin K. Petersen
---
 block/blk-settings.c   | 10 ++++------
 include/linux/blkdev.h | 12 +++++++++---
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index b2e1a929a6db..5ea3de48afba 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -172,15 +172,13 @@ EXPORT_SYMBOL(blk_queue_max_hw_sectors);
  *
  * Description:
  *    If a driver doesn't want IOs to cross a given chunk size, it can set
- *    this limit and prevent merging across chunks. Note that the chunk size
- *    must currently be a power-of-2 in sectors. Also note that the block
- *    layer must accept a page worth of data at any offset. So if the
- *    crossing of chunks is a hard limitation in the driver, it must still be
- *    prepared to split single page bios.
+ *    this limit and prevent merging across chunks. Note that the block layer
+ *    must accept a page worth of data at any offset. So if the crossing of
+ *    chunks is a hard limitation in the driver, it must still be prepared
+ *    to split single page bios.
 **/
 void blk_queue_chunk_sectors(struct request_queue *q, unsigned int chunk_sectors)
 {
-	BUG_ON(!is_power_of_2(chunk_sectors));
 	q->limits.chunk_sectors = chunk_sectors;
 }
 EXPORT_SYMBOL(blk_queue_chunk_sectors);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bb5636cc17b9..51d98a595943 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1059,11 +1059,17 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 static inline unsigned int blk_max_size_offset(struct request_queue *q,
 					       sector_t offset)
 {
-	if (!q->limits.chunk_sectors)
+	unsigned int chunk_sectors = q->limits.chunk_sectors;
+
+	if (!chunk_sectors)
 		return q->limits.max_sectors;
 
-	return min(q->limits.max_sectors, (unsigned int)(q->limits.chunk_sectors -
-			(offset & (q->limits.chunk_sectors - 1))));
+	if (likely(is_power_of_2(chunk_sectors)))
+		chunk_sectors -= offset & (chunk_sectors - 1);
+	else
+		chunk_sectors -= sector_div(offset, chunk_sectors);
+
+	return min(q->limits.max_sectors, chunk_sectors);
 }
 
 static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 11791385
From: Mike Snitzer
To: Jens Axboe
Cc: Ming Lei, Vijayendra Suman, dm-devel@redhat.com, linux-block@vger.kernel.org
Subject: [PATCH v3 5/6] dm table: stack 'chunk_sectors' limit to account for target-specific splitting
Date: Mon, 21 Sep 2020 22:32:50 -0400
Message-Id: <20200922023251.47712-6-snitzer@redhat.com>
In-Reply-To: <20200922023251.47712-1-snitzer@redhat.com>
References: <20200922023251.47712-1-snitzer@redhat.com>

If a target sets ti->max_io_len it must be used when stacking the DM
device's queue_limits to establish a 'chunk_sectors' that is compatible
with the IO stack. By using lcm_not_zero() care is taken to avoid
blindly overriding the chunk_sectors limit stacked up by
blk_stack_limits().

Signed-off-by: Mike Snitzer
---
 drivers/md/dm-table.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 229f461e7def..3f4e7c7912a2 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1506,6 +1507,10 @@ int dm_calculate_queue_limits(struct dm_table *table,
 			zone_sectors = ti_limits.chunk_sectors;
 		}
 
+		/* Stack chunk_sectors if target-specific splitting is required */
+		if (ti->max_io_len)
+			ti_limits.chunk_sectors = lcm_not_zero(ti->max_io_len,
+							       ti_limits.chunk_sectors);
 		/* Set I/O hints portion of queue limits */
 		if (ti->type->io_hints)
 			ti->type->io_hints(ti, &ti_limits);
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 11791387
From: Mike Snitzer
To: Jens Axboe
Cc: Ming Lei, Vijayendra Suman, dm-devel@redhat.com, linux-block@vger.kernel.org
Subject: [PATCH v3 6/6] dm: change max_io_len() to use blk_max_size_offset()
Date: Mon, 21 Sep 2020 22:32:51 -0400
Message-Id: <20200922023251.47712-7-snitzer@redhat.com>
In-Reply-To: <20200922023251.47712-1-snitzer@redhat.com>
References: <20200922023251.47712-1-snitzer@redhat.com>

Using blk_max_size_offset() enables DM core's splitting to impose
ti->max_io_len (via q->limits.chunk_sectors) and also fall back to
respecting q->limits.max_sectors if chunk_sectors isn't set.

Signed-off-by: Mike Snitzer
---
 drivers/md/dm.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 6ed05ca65a0f..3982012b1309 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1051,22 +1051,18 @@ static sector_t max_io_len_target_boundary(sector_t sector, struct dm_target *ti
 static sector_t max_io_len(sector_t sector, struct dm_target *ti)
 {
 	sector_t len = max_io_len_target_boundary(sector, ti);
-	sector_t offset, max_len;
+	sector_t max_len;
 
 	/*
 	 * Does the target need to split even further?
+	 * - q->limits.chunk_sectors reflects ti->max_io_len so
+	 *   blk_max_size_offset() provides required splitting.
+	 * - blk_max_size_offset() also respects q->limits.max_sectors
 	 */
-	if (ti->max_io_len) {
-		offset = dm_target_offset(ti, sector);
-		if (unlikely(ti->max_io_len & (ti->max_io_len - 1)))
-			max_len = sector_div(offset, ti->max_io_len);
-		else
-			max_len = offset & (ti->max_io_len - 1);
-		max_len = ti->max_io_len - max_len;
-
-		if (len > max_len)
-			len = max_len;
-	}
+	max_len = blk_max_size_offset(dm_table_get_md(ti->table)->queue,
+				      dm_target_offset(ti, sector));
+	if (len > max_len)
+		len = max_len;
 
 	return len;
 }