From patchwork Tue Dec 27 15:56:17 2016
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9489553
From: Ming Lei
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Ming Lei, Jens Axboe
Subject: [PATCH v1 28/54] block: use bio_for_each_segment_mp() to map sg
Date: Tue, 27 Dec 2016 23:56:17 +0800
Message-Id: <1482854250-13481-29-git-send-email-tom.leiming@gmail.com>
In-Reply-To: <1482854250-13481-1-git-send-email-tom.leiming@gmail.com>
References: <1482854250-13481-1-git-send-email-tom.leiming@gmail.com>
X-Mailing-List: linux-block@vger.kernel.org

It is more efficient to use bio_for_each_segment_mp() for mapping sg;
meanwhile we have to consider splitting a multipage bvec, as is done in
blk_bio_segment_split().

Signed-off-by: Ming Lei
---
 block/blk-merge.c | 72 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 52 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index a0e97959db7b..55c5866ea77a 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -450,6 +450,56 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 	return 0;
 }
 
+static inline struct scatterlist *blk_next_sg(struct scatterlist **sg,
+		struct scatterlist *sglist)
+{
+	if (!*sg)
+		return sglist;
+	else {
+		/*
+		 * If the driver previously mapped a shorter
+		 * list, we could see a termination bit
+		 * prematurely unless it fully inits the sg
+		 * table on each mapping. We KNOW that there
+		 * must be more entries here or the driver
+		 * would be buggy, so force clear the
+		 * termination bit to avoid doing a full
+		 * sg_init_table() in drivers for each command.
+		 */
+		sg_unmark_end(*sg);
+		return sg_next(*sg);
+	}
+}
+
+static inline unsigned
+blk_bvec_map_sg(struct request_queue *q, struct bio_vec *bvec,
+		struct scatterlist *sglist, struct scatterlist **sg)
+{
+	unsigned nbytes = bvec->bv_len;
+	unsigned nsegs = 0, total = 0;
+
+	while (nbytes > 0) {
+		unsigned seg_size;
+		struct page *pg;
+		unsigned offset, idx;
+
+		*sg = blk_next_sg(sg, sglist);
+
+		seg_size = min(nbytes, queue_max_segment_size(q));
+		offset = (total + bvec->bv_offset) % PAGE_SIZE;
+		idx = (total + bvec->bv_offset) / PAGE_SIZE;
+		pg = nth_page(bvec->bv_page, idx);
+
+		sg_set_page(*sg, pg, seg_size, offset);
+
+		total += seg_size;
+		nbytes -= seg_size;
+		nsegs++;
+	}
+
+	return nsegs;
+}
+
 static inline void
 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		     struct scatterlist *sglist, struct bio_vec *bvprv,
@@ -483,25 +533,7 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		(*sg)->length += nbytes;
 	} else {
 new_segment:
-		if (!*sg)
-			*sg = sglist;
-		else {
-			/*
-			 * If the driver previously mapped a shorter
-			 * list, we could see a termination bit
-			 * prematurely unless it fully inits the sg
-			 * table on each mapping. We KNOW that there
-			 * must be more entries here or the driver
-			 * would be buggy, so force clear the
-			 * termination bit to avoid doing a full
-			 * sg_init_table() in drivers for each command.
-			 */
-			sg_unmark_end(*sg);
-			*sg = sg_next(*sg);
-		}
-
-		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
-		(*nsegs)++;
+		(*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
 
 		/* for making iterator happy */
 		bvec->bv_offset -= advance;
@@ -527,7 +559,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 	int cluster = blk_queue_cluster(q), nsegs = 0;
 
 	for_each_bio(bio)
-		bio_for_each_segment(bvec, bio, iter)
+		bio_for_each_segment_mp(bvec, bio, iter)
 			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
 					     &nsegs, &cluster);