From patchwork Sat Oct 29 08:08:40 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9403323
From: Ming Lei
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Christoph Hellwig, "Kirill A. Shutemov", Ming Lei, Jens Axboe
Shutemov" , Ming Lei , Jens Axboe Subject: [PATCH 41/60] block: blk-merge: try to make front segments in full size Date: Sat, 29 Oct 2016 16:08:40 +0800 Message-Id: <1477728600-12938-42-git-send-email-tom.leiming@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1477728600-12938-1-git-send-email-tom.leiming@gmail.com> References: <1477728600-12938-1-git-send-email-tom.leiming@gmail.com> Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP When merging one bvec into segment, if the bvec is too big to merge, current policy is to move the whole bvec into another new segment. This patchset changes the policy into trying to maximize size of front segments, that means in above situation, part of bvec is merged into current segment, and the remainder is put into next segment. This patch prepares for support multipage bvec because it can be quite common to see this case and we should try to make front segments in full size. Signed-off-by: Ming Lei --- block/blk-merge.c | 44 +++++++++++++++++++++++++++++++++++++++----- 1 file changed, 39 insertions(+), 5 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index 465d9c65cb41..a6457e70dafc 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -99,6 +99,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, struct bio *new = NULL; const unsigned max_sectors = get_max_io_size(q, bio); unsigned bvecs = 0; + unsigned advance; bio_for_each_segment(bv, bio, iter) { /* @@ -129,6 +130,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, if (bvprvp && bvec_gap_to_prev(q, bvprvp, bv.bv_offset)) goto split; + advance = 0; if (sectors + (bv.bv_len >> 9) > max_sectors) { /* * Consider this a new segment if we're splitting in @@ -145,12 +147,24 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, } if (bvprvp && blk_queue_cluster(q)) { - if (seg_size + bv.bv_len > queue_max_segment_size(q)) - goto new_segment; if (!BIOVEC_PHYS_MERGEABLE(bvprvp, &bv)) goto new_segment; if (!BIOVEC_SEG_BOUNDARY(q, bvprvp, &bv)) goto new_segment; + if (seg_size + bv.bv_len > queue_max_segment_size(q)) { + advance = queue_max_segment_size(q) - seg_size; + + if (advance > 0) { + seg_size += advance; + sectors += advance >> 9; + bv.bv_len -= advance; + bv.bv_offset += advance; + } else { + advance = 0; + } + + goto new_segment; + } seg_size += bv.bv_len; bvprv = bv; @@ -172,6 +186,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, seg_size = bv.bv_len; sectors += bv.bv_len >> 9; + /* restore the bvec for iterator */ + bv.bv_len += advance; + bv.bv_offset -= advance; } do_split = false; @@ -371,16 +388,29 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, { int nbytes = bvec->bv_len; + int advance = 0; if (*sg && *cluster) { - if ((*sg)->length + nbytes > queue_max_segment_size(q)) - goto new_segment; - if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec)) goto new_segment; if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec)) goto new_segment; + /* try best to merge part of the bvec into previous seg */ + if ((*sg)->length + nbytes > queue_max_segment_size(q)) { + advance = queue_max_segment_size(q) - (*sg)->length; + if (advance <= 0) { + advance = 0; + goto new_segment; + } + + (*sg)->length += advance; + + bvec->bv_offset += advance; + bvec->bv_len -= advance; + goto new_segment; + } + (*sg)->length += nbytes; } else { new_segment: @@ -403,6 +433,10 @@ __blk_segment_map_sg(struct request_queue *q, 
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 465d9c65cb41..a6457e70dafc 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -99,6 +99,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 	unsigned bvecs = 0;
+	unsigned advance;
 
 	bio_for_each_segment(bv, bio, iter) {
 		/*
@@ -129,6 +130,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (bvprvp && bvec_gap_to_prev(q, bvprvp, bv.bv_offset))
 			goto split;
 
+		advance = 0;
 		if (sectors + (bv.bv_len >> 9) > max_sectors) {
 			/*
 			 * Consider this a new segment if we're splitting in
@@ -145,12 +147,24 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		}
 
 		if (bvprvp && blk_queue_cluster(q)) {
-			if (seg_size + bv.bv_len > queue_max_segment_size(q))
-				goto new_segment;
 			if (!BIOVEC_PHYS_MERGEABLE(bvprvp, &bv))
 				goto new_segment;
 			if (!BIOVEC_SEG_BOUNDARY(q, bvprvp, &bv))
 				goto new_segment;
+			if (seg_size + bv.bv_len > queue_max_segment_size(q)) {
+				advance = queue_max_segment_size(q) - seg_size;
+
+				if (advance > 0) {
+					seg_size += advance;
+					sectors += advance >> 9;
+					bv.bv_len -= advance;
+					bv.bv_offset += advance;
+				} else {
+					advance = 0;
+				}
+
+				goto new_segment;
+			}
 
 			seg_size += bv.bv_len;
 			bvprv = bv;
@@ -172,6 +186,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		seg_size = bv.bv_len;
 		sectors += bv.bv_len >> 9;
 
+		/* restore the bvec for iterator */
+		bv.bv_len += advance;
+		bv.bv_offset -= advance;
 	}
 
 	do_split = false;
@@ -371,16 +388,29 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		     struct scatterlist **sg, int *nsegs, int *cluster)
 {
 	int nbytes = bvec->bv_len;
+	int advance = 0;
 
 	if (*sg && *cluster) {
-		if ((*sg)->length + nbytes > queue_max_segment_size(q))
-			goto new_segment;
-
 		if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
 			goto new_segment;
 		if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
 			goto new_segment;
 
+		/* try best to merge part of the bvec into previous seg */
+		if ((*sg)->length + nbytes > queue_max_segment_size(q)) {
+			advance = queue_max_segment_size(q) - (*sg)->length;
+			if (advance <= 0) {
+				advance = 0;
+				goto new_segment;
+			}
+
+			(*sg)->length += advance;
+
+			bvec->bv_offset += advance;
+			bvec->bv_len -= advance;
+			goto new_segment;
+		}
+
 		(*sg)->length += nbytes;
 	} else {
new_segment:
@@ -403,6 +433,10 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 
 		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
 		(*nsegs)++;
+
+		/* for making iterator happy */
+		bvec->bv_offset -= advance;
+		bvec->bv_len += advance;
 	}
 	*bvprv = *bvec;
 }
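(Again not part of the patch: a minimal sketch of the adjust-then-restore
pattern both functions use above, so the cached previous bvec and the
bio iterator keep seeing the bvec's original extent. struct vec and
map_one are hypothetical stand-ins, not the kernel's struct bio_vec or
__blk_segment_map_sg().)

#include <assert.h>
#include <stdio.h>

/* hypothetical stand-in for struct bio_vec */
struct vec {
        unsigned int off;
        unsigned int len;
};

/*
 * Merge "advance" bytes of *v into the previous segment, map the rest
 * as a new segment, then restore *v so the caller's cached copy still
 * matches what the iterator handed out.
 */
static void map_one(struct vec *v, struct vec *prev, unsigned int advance)
{
        v->off += advance;      /* skip bytes merged into the previous seg */
        v->len -= advance;

        /* ... a new segment would be set up from (v->off, v->len) here ... */

        v->off -= advance;      /* restore: keep the iterator's view intact */
        v->len += advance;
        *prev = *v;             /* cache the full, original extent */
}

int main(void)
{
        struct vec v = { .off = 0, .len = 8192 }, prev;

        map_one(&v, &prev, 4096);
        assert(prev.off == 0 && prev.len == 8192);
        printf("cached bvec: off=%u len=%u\n", prev.off, prev.len);
        return 0;
}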