From patchwork Mon May 4 13:29:29 2020
X-Patchwork-Id: 11525973
Subject: [PATCH 1/4] block/part_stat: remove rcu_read_lock() from part_stat_lock()
From: Konstantin Khlebnikov
To: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe
Date: Mon, 04 May 2020 16:29:29 +0300
Message-ID: <158859896942.19836.15240144203131230746.stgit@buzz>

The RCU lock is required only in blk_account_io_start(), to look up the
partition; after that the request holds a reference to the related
hd_struct.

Replace get_cpu() with preempt_disable() - the returned CPU index is
unused.

The non-SMP case also needs preempt_disable(), otherwise the statistics
update could be non-atomic. Previously that was provided by
rcu_read_lock().
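[Illustrative aside, not part of the patch: the accounting pattern a
caller ends up with once part_stat_lock() is just preempt_disable().
The function name and parameters are invented for the example; the
partition must already be pinned by a reference, e.g. the one stored in
rq->part, so no rcu_read_lock() is needed around the update itself.]

#include <linux/genhd.h>
#include <linux/part_stat.h>

/* Sketch only: preemption disabled around the per-cpu (or, on !SMP,
 * the plain) counter update; RCU is only needed for the lookup. */
static void example_account_sectors(struct hd_struct *part, int sgrp,
                                    unsigned int nr_sectors)
{
        part_stat_lock();               /* preempt_disable() */
        part_stat_add(part, sectors[sgrp], nr_sectors);
        part_stat_unlock();             /* preempt_enable() */
}
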
Signed-off-by: Konstantin Khlebnikov
Signed-off-by: Christoph Hellwig
---
 block/blk-core.c          | 3 +++
 include/linux/part_stat.h | 7 +++----
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7f11560bfddb..45ddf7238c06 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1362,6 +1362,7 @@ void blk_account_io_start(struct request *rq, bool new_io)
 		part = rq->part;
 		part_stat_inc(part, merges[rw]);
 	} else {
+		rcu_read_lock();
 		part = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
 		if (!hd_struct_try_get(part)) {
 			/*
@@ -1375,6 +1376,8 @@ void blk_account_io_start(struct request *rq, bool new_io)
 			part = &rq->rq_disk->part0;
 			hd_struct_get(part);
 		}
+		rcu_read_unlock();
+
 		part_inc_in_flight(rq->q, part, rw);
 		rq->part = part;
 	}
diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index ece607607a86..755a01f0fd61 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -16,9 +16,10 @@
  * part_stat_{add|set_all}() and {init|free}_part_stats are for
  * internal use only.
  */
+#define part_stat_lock()	preempt_disable()
+#define part_stat_unlock()	preempt_enable()
+
 #ifdef CONFIG_SMP
-#define part_stat_lock()	({ rcu_read_lock(); get_cpu(); })
-#define part_stat_unlock()	do { put_cpu(); rcu_read_unlock(); } while (0)
 
 #define part_stat_get_cpu(part, field, cpu)				\
 	(per_cpu_ptr((part)->dkstats, (cpu))->field)
@@ -58,8 +59,6 @@ static inline void free_part_stats(struct hd_struct *part)
 }
 
 #else /* !CONFIG_SMP */
-#define part_stat_lock()	({ rcu_read_lock(); 0; })
-#define part_stat_unlock()	rcu_read_unlock()
 
 #define part_stat_get(part, field)	((part)->dkstats.field)
 #define part_stat_get_cpu(part, field, cpu)	part_stat_get(part, field)

From patchwork Mon May 4 13:29:32 2020
X-Patchwork-Id: 11525971
Subject: [PATCH 2/4] block/part_stat: use __this_cpu_add() instead of access by smp_processor_id()
From: Konstantin Khlebnikov
To: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe
Date: Mon, 04 May 2020 16:29:32 +0300
Message-ID: <158859897252.19836.5614675872684760741.stgit@buzz>
In-Reply-To: <158859896942.19836.15240144203131230746.stgit@buzz>
References: <158859896942.19836.15240144203131230746.stgit@buzz>

Most architectures have a fast path for accessing per-cpu data of the
current CPU. The required preempt_disable() is provided by
part_stat_lock(). [An illustrative sketch of the old and new per-cpu
access patterns follows the diff.]

Signed-off-by: Konstantin Khlebnikov
Reviewed-by: Christoph Hellwig
---
 include/linux/part_stat.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index 755a01f0fd61..a0ddeff3798e 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -36,6 +36,9 @@
 	res;								\
 })
 
+#define __part_stat_add(part, field, addnd)				\
+	__this_cpu_add((part)->dkstats->field, addnd)
+
 static inline void part_stat_set_all(struct hd_struct *part, int value)
 {
 	int i;
@@ -64,6 +67,9 @@ static inline void free_part_stats(struct hd_struct *part)
 #define part_stat_get_cpu(part, field, cpu)	part_stat_get(part, field)
 #define part_stat_read(part, field)	part_stat_get(part, field)
 
+#define __part_stat_add(part, field, addnd)				\
+	(part_stat_get(part, field) += (addnd))
+
 static inline void part_stat_set_all(struct hd_struct *part, int value)
 {
 	memset(&part->dkstats, value, sizeof(struct disk_stats));
@@ -85,9 +91,6 @@ static inline void free_part_stats(struct hd_struct *part)
 	part_stat_read(part, field[STAT_WRITE]) +			\
 	part_stat_read(part, field[STAT_DISCARD]))
 
-#define __part_stat_add(part, field, addnd)				\
-	(part_stat_get(part, field) += (addnd))
-
 #define part_stat_add(part, field, addnd)	do {			\
 	__part_stat_add((part), field, addnd);				\
 	if ((part)->partno)						\
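[Illustrative sketch, not part of the patch: the difference between
indexing per-cpu data by smp_processor_id() and using __this_cpu_add().
The variable and function names are invented; in the real macro the
data lives behind (part)->dkstats.]

#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(unsigned long, example_counter);

/* Old style: compute this CPU's slot by hand.  The caller must keep
 * preemption disabled (part_stat_lock() does that) so the CPU cannot
 * change between smp_processor_id() and the update. */
static void example_inc_slow(void)
{
        *per_cpu_ptr(&example_counter, smp_processor_id()) += 1;
}

/* New style: __this_cpu_add() lets most architectures use a single
 * percpu-addressed instruction; preemption must still be disabled. */
static void example_inc_fast(void)
{
        __this_cpu_add(example_counter, 1);
}
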
From patchwork Mon May 4 13:30:52 2020
X-Patchwork-Id: 11525975
Subject: [PATCH 3/4] block/part_stat: account merge of two requests
From: Konstantin Khlebnikov
To: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe
Date: Mon, 04 May 2020 16:30:52 +0300
Message-ID: <158859904278.19926.1357797452754171976.stgit@buzz>
In-Reply-To: <158859896942.19836.15240144203131230746.stgit@buzz>
References: <158859896942.19836.15240144203131230746.stgit@buzz>

Account the merge of two requests as a merge event in the stat group of
the operation. Also rename blk_account_io_merge() to
blk_account_io_merge_request() to distinguish it from merging a request
and a bio.
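[Illustrative aside, not part of the patch: the merge is charged to the
bucket chosen by op_stat_group() rather than rq_data_dir(), so discards
are not folded into the write bucket. A simplified rendition of that
mapping - modeled on the helper in the block headers, not copied from
it:]

#include <linux/blk_types.h>

/* Simplified rendition of the stat-group mapping used for merges. */
static inline int example_stat_group(unsigned int op)
{
        if (op_is_discard(op))
                return STAT_DISCARD;    /* e.g. REQ_OP_DISCARD */
        return op_is_write(op) ? STAT_WRITE : STAT_READ;
}
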
Signed-off-by: Konstantin Khlebnikov
Reviewed-by: Christoph Hellwig
---
 block/blk-merge.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index a04e991b5ded..37bced39bae8 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -662,20 +662,23 @@ void blk_rq_set_mixed_merge(struct request *rq)
 	rq->rq_flags |= RQF_MIXED_MERGE;
 }
 
-static void blk_account_io_merge(struct request *req)
+static void blk_account_io_merge_request(struct request *req)
 {
 	if (blk_do_io_stat(req)) {
+		const int sgrp = op_stat_group(req_op(req));
 		struct hd_struct *part;
 
 		part_stat_lock();
 		part = req->part;
+		part_stat_inc(part, merges[sgrp]);
 		part_dec_in_flight(req->q, part, rq_data_dir(req));
 
 		hd_struct_put(part);
 		part_stat_unlock();
 	}
 }
 
+
 /*
  * Two cases of handling DISCARD merge:
  * If max_discard_segments > 1, the driver takes every bio
@@ -787,7 +790,7 @@ static struct request *attempt_merge(struct request_queue *q,
 	/*
 	 * 'next' is going away, so update stats accordingly
 	 */
-	blk_account_io_merge(next);
+	blk_account_io_merge_request(next);
 
 	/*
 	 * ownership of bio passed from next to req, return 'next' for

From patchwork Mon May 4 13:31:04 2020
X-Patchwork-Id: 11525977
Subject: [PATCH 4/4] block/part_stat: add helper blk_account_io_merge_bio()
From: Konstantin Khlebnikov
To: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe
Date: Mon, 04 May 2020 16:31:04 +0300
Message-ID: <158859906056.19958.10435750035306672420.stgit@buzz>
In-Reply-To: <158859896942.19836.15240144203131230746.stgit@buzz>
References: <158859896942.19836.15240144203131230746.stgit@buzz>

Move the non-"new_io" branch of blk_account_io_start() into a separate
function. This also fixes merge accounting for discards (previously
they were counted as write merges).

Unlike blk_account_io_start(), blk_account_io_merge_bio() does not call
update_io_ticks(); there is no reason to do that when a bio is merged
into an existing request. [A small userspace sketch for reading the
affected merge counters follows the diffs.]

Signed-off-by: Konstantin Khlebnikov
Reviewed-by: Christoph Hellwig
---
 block/blk-core.c | 39 ++++++++++++++++++++++-----------------
 block/blk-exec.c |  2 +-
 block/blk-mq.c   |  2 +-
 block/blk.h      |  2 +-
 4 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 45ddf7238c06..18fb42eb2f18 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -622,6 +622,17 @@ void blk_put_request(struct request *req)
 }
 EXPORT_SYMBOL(blk_put_request);
 
+static void blk_account_io_merge_bio(struct request *req)
+{
+	if (blk_do_io_stat(req)) {
+		const int sgrp = op_stat_group(req_op(req));
+
+		part_stat_lock();
+		part_stat_inc(req->part, merges[sgrp]);
+		part_stat_unlock();
+	}
+}
+
 bool bio_attempt_back_merge(struct request *req, struct bio *bio,
 		unsigned int nr_segs)
 {
@@ -640,7 +651,7 @@ bool bio_attempt_back_merge(struct request *req, struct bio *bio,
 	req->biotail = bio;
 	req->__data_len += bio->bi_iter.bi_size;
 
-	blk_account_io_start(req, false);
+	blk_account_io_merge_bio(req);
 	return true;
 }
 
@@ -664,7 +675,7 @@ bool bio_attempt_front_merge(struct request *req, struct bio *bio,
 	req->__sector = bio->bi_iter.bi_sector;
 	req->__data_len += bio->bi_iter.bi_size;
 
-	blk_account_io_start(req, false);
+	blk_account_io_merge_bio(req);
 	return true;
 }
 
@@ -686,7 +697,7 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
 	req->__data_len += bio->bi_iter.bi_size;
 	req->nr_phys_segments = segments + 1;
 
-	blk_account_io_start(req, false);
+	blk_account_io_merge_bio(req);
 	return true;
 no_merge:
 	req_set_nomerge(q, req);
@@ -1258,7 +1269,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 		return BLK_STS_IOERR;
 
 	if (blk_queue_io_stat(q))
-		blk_account_io_start(rq, true);
+		blk_account_io_start(rq);
 
 	/*
 	 * Since we have a scheduler attached on the top device,
@@ -1348,20 +1359,14 @@ void blk_account_io_done(struct request *req, u64 now)
 	}
 }
 
-void blk_account_io_start(struct request *rq, bool new_io)
+void blk_account_io_start(struct request *rq)
 {
 	struct hd_struct *part;
 	int rw = rq_data_dir(rq);
 
-	if (!blk_do_io_stat(rq))
-		return;
-
-	part_stat_lock();
+	if (blk_do_io_stat(rq)) {
+		part_stat_lock();
 
-	if (!new_io) {
-		part = rq->part;
-		part_stat_inc(part, merges[rw]);
-	} else {
 		rcu_read_lock();
 		part = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
 		if (!hd_struct_try_get(part)) {
@@ -1378,13 +1383,13 @@ void blk_account_io_start(struct request *rq, bool new_io)
 		}
 		rcu_read_unlock();
 
-		part_inc_in_flight(rq->q, part, rw);
 		rq->part = part;
-	}
 
-	update_io_ticks(part, jiffies, false);
+		part_inc_in_flight(rq->q, part, rw);
+		update_io_ticks(part, jiffies, false);
 
-	part_stat_unlock();
+		part_stat_unlock();
+	}
 }
 
 /*
diff --git a/block/blk-exec.c b/block/blk-exec.c
index e20a852ae432..85324d53d072 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -55,7 +55,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	rq->rq_disk = bd_disk;
 	rq->end_io = done;
 
-	blk_account_io_start(rq, true);
+	blk_account_io_start(rq);
 
 	/*
 	 * don't check dying flag for MQ because the request won't
diff --git a/block/blk-mq.c b/block/blk-mq.c
index bcc3a2397d4a..049c4f9417c3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1794,7 +1794,7 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
 	rq->write_hint = bio->bi_write_hint;
 	blk_rq_bio_prep(rq, bio, nr_segs);
 
-	blk_account_io_start(rq, true);
+	blk_account_io_start(rq);
 }
 
 static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
diff --git a/block/blk.h b/block/blk.h
index 73bd3b1c6938..06cd57cc10fb 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -195,7 +195,7 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs, struct request **same_queue_rq);
 
-void blk_account_io_start(struct request *req, bool new_io);
+void blk_account_io_start(struct request *req);
 void blk_account_io_completion(struct request *req, unsigned int bytes);
 void blk_account_io_done(struct request *req, u64 now);
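
[Illustrative follow-up, not part of the series: a minimal userspace
sketch for watching the merge counters this series touches, read from
/sys/block/<dev>/stat, where read, write and discard merges are the
2nd, 6th and 13th fields. The program name and output format are
invented; kernels differ in how many trailing fields they expose.]

/* read_merges.c - print merge counters from /sys/block/<dev>/stat.
 * Build: cc -o read_merges read_merges.c
 * Usage: ./read_merges sda
 */
#include <stdio.h>

int main(int argc, char **argv)
{
        char path[256];
        unsigned long long f[17] = { 0 };
        FILE *fp;
        int i, n = 0;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
                return 1;
        }
        snprintf(path, sizeof(path), "/sys/block/%s/stat", argv[1]);
        fp = fopen(path, "r");
        if (!fp) {
                perror(path);
                return 1;
        }
        for (i = 0; i < 17 && fscanf(fp, "%llu", &f[i]) == 1; i++)
                n++;
        fclose(fp);

        printf("read merges:    %llu\n", f[1]);         /* field 2 */
        printf("write merges:   %llu\n", f[5]);         /* field 6 */
        if (n > 12)
                printf("discard merges: %llu\n", f[12]); /* field 13 */
        return 0;
}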