From patchwork Thu Apr 7 14:45:34 2016
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 8773621
From: Ming Lei
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Ming Lei,
    Shaun Tancheff, Mikulas Patocka
Subject: [PATCH] block: avoid calling .bi_end_io() recursively
Date: Thu, 7 Apr 2016 22:45:34 +0800
Message-Id: <1460040334-24376-1-git-send-email-ming.lei@canonical.com>

There have been several reports of heavy stack use caused by calling
.bi_end_io() recursively [1][2][3], and patches [1][2][3] have been
posted to address the issue. The ideas behind them are basically the
same: serialize the recursive calls to .bi_end_io() through a percpu
list. This patch takes the same approach but implements it with a
bio_list, which turns out simpler and makes the code more readable.

xfstests (-g auto) was run with this patch applied and no regression
was found.

[1] http://marc.info/?t=121428502000004&r=1&w=2
[2] http://marc.info/?l=dm-devel&m=139595190620008&w=2
[3] http://marc.info/?t=145974644100001&r=1&w=2
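To make the idea concrete: the first completion on a CPU becomes the
owner of the percpu list and dispatches everything from a flat loop,
while completions that re-enter from inside .bi_end_io() are only
queued and return immediately, so stack depth no longer grows with the
depth of the chain. Below is a minimal single-threaded user-space
sketch of that pattern. It is illustrative only: every name in it
(node, pending, draining, end_node) is invented here, it uses a simple
LIFO where the patch uses a FIFO bio_list, and it uses an explicit
owner flag where the patch infers ownership from the percpu pointer:

#include <stdio.h>
#include <stddef.h>

struct node {
	struct node *next;
	void (*end_io)(struct node *);
};

static struct node *pending;	/* nodes queued by nested calls */
static int draining;		/* set while the owner runs the loop */

static void end_node(struct node *n)
{
	if (draining) {			/* nested call: queue and unwind */
		n->next = pending;
		pending = n;
		return;
	}

	draining = 1;			/* outermost call: own the loop */
	while (n) {
		if (n->end_io)
			n->end_io(n);	/* may re-enter end_node() */
		n = pending;		/* pick up whatever got queued */
		if (n)
			pending = n->next;
	}
	draining = 0;
}

/* demo: three chained nodes, each completion triggering the next */
static struct node chain[3];

static void step(struct node *n)
{
	size_t i = (size_t)(n - chain);

	printf("completed node %zu\n", i);
	if (i + 1 < 3)
		end_node(&chain[i + 1]);	/* would recurse otherwise */
}

int main(void)
{
	int i;

	for (i = 0; i < 3; i++)
		chain[i].end_io = step;

	end_node(&chain[0]);
	return 0;
}

However deep the chain, end_node() is at most two frames deep at any
point: the owner's loop plus the one callback it is currently running.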
Cc: Shaun Tancheff
Cc: Christoph Hellwig
Cc: Mikulas Patocka
Signed-off-by: Ming Lei
---
 block/bio.c | 46 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index f124a0a..4188491 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -68,6 +68,8 @@ static DEFINE_MUTEX(bio_slab_lock);
 static struct bio_slab *bio_slabs;
 static unsigned int bio_slab_nr, bio_slab_max;
 
+static DEFINE_PER_CPU(struct bio_list *, bio_end_list) = { NULL };
+
 static struct kmem_cache *bio_find_or_create_slab(unsigned int extra_size)
 {
 	unsigned int sz = sizeof(struct bio) + extra_size;
@@ -1737,6 +1739,47 @@ static inline bool bio_remaining_done(struct bio *bio)
 	return false;
 }
 
+/* disable local irq when manipulating the percpu bio_list */
+static void unwind_bio_endio(struct bio *bio)
+{
+	struct bio_list *bl;
+	unsigned long flags;
+	bool clear_list = false;
+
+	preempt_disable();
+	local_irq_save(flags);
+
+	bl = this_cpu_read(bio_end_list);
+	if (!bl) {
+		struct bio_list bl_in_stack;
+
+		bl = &bl_in_stack;
+		bio_list_init(bl);
+		this_cpu_write(bio_end_list, bl);
+		clear_list = true;
+	}
+
+	if (!bio_list_empty(bl)) {
+		bio_list_add(bl, bio);
+		goto out;
+	}
+
+	while (bio) {
+		local_irq_restore(flags);
+
+		if (bio->bi_end_io)
+			bio->bi_end_io(bio);
+
+		local_irq_save(flags);
+		bio = bio_list_pop(bl);
+	}
+	if (clear_list)
+		this_cpu_write(bio_end_list, NULL);
+ out:
+	local_irq_restore(flags);
+	preempt_enable();
+}
+
 /**
  * bio_endio - end I/O on a bio
  * @bio:	bio
@@ -1765,8 +1808,7 @@ again:
 		goto again;
 	}
 
-	if (bio->bi_end_io)
-		bio->bi_end_io(bio);
+	unwind_bio_endio(bio);
 }
 EXPORT_SYMBOL(bio_endio);
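For reference, struct bio_list is the block layer's small two-pointer
FIFO chained through bio->bi_next. The helpers the patch calls behave
as sketched below; this is a paraphrase from a tree of that vintage,
so check include/linux/bio.h for the authoritative definitions:

struct bio_list {
	struct bio *head;
	struct bio *tail;
};

static inline void bio_list_init(struct bio_list *bl)
{
	bl->head = bl->tail = NULL;
}

static inline int bio_list_empty(const struct bio_list *bl)
{
	return bl->head == NULL;
}

/* append at the tail, keeping arrival order */
static inline void bio_list_add(struct bio_list *bl, struct bio *bio)
{
	bio->bi_next = NULL;

	if (bl->tail)
		bl->tail->bi_next = bio;
	else
		bl->head = bio;

	bl->tail = bio;
}

/* remove from the head */
static inline struct bio *bio_list_pop(struct bio_list *bl)
{
	struct bio *bio = bl->head;

	if (bio) {
		bl->head = bio->bi_next;
		if (!bl->head)
			bl->tail = NULL;
		bio->bi_next = NULL;
	}

	return bio;
}

The FIFO ordering is what makes the unwinding breadth-first: queued
completions run in the order they were raised rather than consuming a
stack frame per level of the chain.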