From patchwork Tue Jan 12 22:08:39 2016
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 8021181
From: Keith Busch
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, Jens Axboe
Cc: Keith Busch, Jens Axboe, Ming Lei, Kent Overstreet
Subject: [PATCH] block: split bios to max possible length
Date: Tue, 12 Jan 2016 15:08:39 -0700
Message-Id: <1452636519-32697-1-git-send-email-keith.busch@intel.com>
X-Mailer: git-send-email 1.7.1

This splits a bio in the middle of a vector to form the largest possible
bio at the h/w's desired alignment, and guarantees the bio being split
will have some data.

The criterion for splitting is changed from the max sectors to the h/w's
optimal sector alignment, if one is provided. For h/w that advertises its
block storage's underlying chunk size, it is a big performance win not to
submit commands that cross those boundaries. If no sector alignment is
provided, this patch uses the max sectors as before.

This addresses the performance issue that commit d380561113 attempted to
fix; that commit was reverted due to a splitting logic error.

Signed-off-by: Keith Busch
Cc: Jens Axboe
Cc: Ming Lei
Cc: Kent Overstreet
Cc: # 4.4.x-
---
 block/blk-merge.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index e01405a..6b982ca 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -81,9 +81,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 
 	bio_for_each_segment(bv, bio, iter) {
-		if (sectors + (bv.bv_len >> 9) > queue_max_sectors(q))
-			goto split;
-
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
@@ -91,6 +88,22 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (bvprvp && bvec_gap_to_prev(q, bvprvp, bv.bv_offset))
 			goto split;
 
+		if (sectors + (bv.bv_len >> 9) >
+		    blk_max_size_offset(q, bio->bi_iter.bi_sector)) {
+			/*
+			 * Consider this a new segment if we're splitting in
+			 * the middle of this vector.
+			 */
+			if (nsegs < queue_max_segments(q) &&
+			    sectors < blk_max_size_offset(q,
+						bio->bi_iter.bi_sector)) {
+				nsegs++;
+				sectors = blk_max_size_offset(q,
+						bio->bi_iter.bi_sector);
+			}
+			goto split;
+		}
+
 		if (bvprvp && blk_queue_cluster(q)) {
 			if (seg_size + bv.bv_len > queue_max_segment_size(q))
 				goto new_segment;
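
For context, the blk_max_size_offset() limit used above caps a transfer at
however many sectors remain before the device's next chunk boundary (or at
max_sectors when no chunk size is set). Below is a minimal userspace sketch
of that arithmetic and of how a bio spanning a boundary ends up split at it;
max_sectors_at_offset() and the example geometry are hypothetical
illustrations of the idea, not the kernel implementation, and assume the
chunk size is a power of two.

#include <stdio.h>

/*
 * Hypothetical helper mirroring the idea behind blk_max_size_offset():
 * given a starting sector and the device's chunk size in sectors
 * (assumed to be a power of two), return how many sectors fit before
 * the next chunk boundary.  A chunk size of 0 means "no chunk limit",
 * in which case only max_sectors applies.
 */
static unsigned int max_sectors_at_offset(unsigned long long sector,
					  unsigned int chunk_sectors,
					  unsigned int max_sectors)
{
	if (!chunk_sectors)
		return max_sectors;
	return chunk_sectors - (sector & (chunk_sectors - 1));
}

int main(void)
{
	/* Example geometry: 128K (256-sector) chunks, 1280-sector max I/O. */
	unsigned int chunk_sectors = 256, max_sectors = 1280;
	/* A 512-sector bio starting 64 sectors into a chunk. */
	unsigned long long sector = 64;
	unsigned int remaining = 512;

	while (remaining) {
		unsigned int len = max_sectors_at_offset(sector, chunk_sectors,
							 max_sectors);
		if (len > remaining)
			len = remaining;
		/* First piece ends exactly on the chunk boundary (sector 256). */
		printf("submit %u sectors at %llu\n", len, sector);
		sector += len;
		remaining -= len;
	}
	return 0;
}

With the patch applied, the split point therefore lands on the chunk
boundary whenever the segment count allows it, so neither resulting bio
issues a command that straddles the device's reported chunk.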