From patchwork Tue Jul 30 11:24:54 2019
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 11065637
Message-Id: <20190730120321.285095769@linutronix.de>
User-Agent: quilt/0.65
Date: Tue, 30 Jul 2019 13:24:54 +0200
From: Thomas Gleixner
To: LKML
Cc: Peter Zijlstra, Ingo Molnar, Sebastian Siewior, Anna-Maria Gleixner,
    Steven Rostedt, Julia Cartwright, "Theodore Ts'o", Matthew Wilcox,
    Alexander Viro, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, Jan Kara
Subject: [patch 2/4] fs/buffer: Move BH_Uptodate_Lock locking into wrapper functions
References: <20190730112452.871257694@linutronix.de>

Bit spinlocks are problematic if PREEMPT_RT is enabled. They disable
preemption, which is undesired for latency reasons, and they break when
regular spinlocks are taken within the bit_spinlock locked region, because
regular spinlocks are converted to 'sleeping spinlocks' on RT. RT therefore
replaces the bit spinlocks with regular spinlocks to avoid this problem.

To avoid ifdeffery at the source level, wrap all BH_Uptodate_Lock bitlock
operations with inline functions, so the spinlock substitution can be done
at one place. Using regular spinlocks can also be enabled for lock
debugging purposes so the lock operations become visible to lockdep.

Signed-off-by: Thomas Gleixner
Cc: "Theodore Ts'o"
Cc: Matthew Wilcox
Cc: Alexander Viro
Cc: linux-fsdevel@vger.kernel.org
Reviewed-by: Jan Kara
---
 fs/buffer.c                 |   20 ++++++--------------
 fs/ext4/page-io.c           |    6 ++----
 fs/ntfs/aops.c              |   10 +++-------
 include/linux/buffer_head.h |   16 ++++++++++++++++
 4 files changed, 27 insertions(+), 25 deletions(-)

--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -275,8 +275,7 @@ static void end_buffer_async_read(struct
 	 * decide that the page is now completely done.
 	 */
 	first = page_buffers(page);
-	local_irq_save(flags);
-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+	flags = bh_uptodate_lock_irqsave(first);
 	clear_buffer_async_read(bh);
 	unlock_buffer(bh);
 	tmp = bh;
@@ -289,8 +288,7 @@ static void end_buffer_async_read(struct
 		}
 		tmp = tmp->b_this_page;
 	} while (tmp != bh);
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	bh_uptodate_unlock_irqrestore(first, flags);
 
 	/*
 	 * If none of the buffers had errors and they are all
@@ -302,9 +300,7 @@ static void end_buffer_async_read(struct
 	return;
 
 still_busy:
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
-	return;
+	bh_uptodate_unlock_irqrestore(first, flags);
 }
 
 /*
@@ -331,8 +327,7 @@ void end_buffer_async_write(struct buffe
 	}
 
 	first = page_buffers(page);
-	local_irq_save(flags);
-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+	flags = bh_uptodate_lock_irqsave(first);
 
 	clear_buffer_async_write(bh);
 	unlock_buffer(bh);
@@ -344,15 +339,12 @@ void end_buffer_async_write(struct buffe
 		}
 		tmp = tmp->b_this_page;
 	}
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	bh_uptodate_unlock_irqrestore(first, flags);
 	end_page_writeback(page);
 	return;
 
 still_busy:
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
-	return;
+	bh_uptodate_unlock_irqrestore(first, flags);
 }
 EXPORT_SYMBOL(end_buffer_async_write);
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -90,8 +90,7 @@ static void ext4_finish_bio(struct bio *
 		 * We check all buffers in the page under BH_Uptodate_Lock
 		 * to avoid races with other end io clearing async_write flags
 		 */
-		local_irq_save(flags);
-		bit_spin_lock(BH_Uptodate_Lock, &head->b_state);
+		flags = bh_uptodate_lock_irqsave(head);
 		do {
 			if (bh_offset(bh) < bio_start ||
 			    bh_offset(bh) + bh->b_size > bio_end) {
@@ -103,8 +102,7 @@ static void ext4_finish_bio(struct bio *
 			if (bio->bi_status)
 				buffer_io_error(bh);
 		} while ((bh = bh->b_this_page) != head);
-		bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
-		local_irq_restore(flags);
+		bh_uptodate_unlock_irqrestore(head, flags);
 		if (!under_io) {
 			fscrypt_free_bounce_page(bounce_page);
 			end_page_writeback(page);
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -92,8 +92,7 @@ static void ntfs_end_buffer_async_read(s
 				"0x%llx.", (unsigned long long)bh->b_blocknr);
 	}
 	first = page_buffers(page);
-	local_irq_save(flags);
-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+	flags = bh_uptodate_lock_irqsave(first);
 	clear_buffer_async_read(bh);
 	unlock_buffer(bh);
 	tmp = bh;
@@ -108,8 +107,7 @@ static void ntfs_end_buffer_async_read(s
 		}
 		tmp = tmp->b_this_page;
 	} while (tmp != bh);
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	bh_uptodate_unlock_irqrestore(first, flags);
 	/*
 	 * If none of the buffers had errors then we can set the page uptodate,
 	 * but we first have to perform the post read mst fixups, if the
@@ -142,9 +140,7 @@ static void ntfs_end_buffer_async_read(s
 	unlock_page(page);
 	return;
 still_busy:
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
-	return;
+	bh_uptodate_unlock_irqrestore(first, flags);
 }
 
 /**
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -78,6 +78,22 @@ struct buffer_head {
 	atomic_t b_count;		/* users using this buffer_head */
 };
 
+static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
+	return flags;
+}
+
+static inline void
+bh_uptodate_unlock_irqrestore(struct buffer_head *bh, unsigned long flags)
+{
+	bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
+	local_irq_restore(flags);
+}
+
 /*
  * macro tricks to expand the set_buffer_foo(), clear_buffer_foo()
  * and buffer_foo() functions.
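
For reference, the substitution the changelog describes could look roughly
like the sketch below. This is not part of the patch: the CONFIG_PREEMPT_RT
guard and the spinlock_t b_uptodate_lock member are assumptions for
illustration, and the member would have to be added to struct buffer_head
and initialized along with it.

/*
 * Illustrative sketch only, not part of this patch. Assumes a
 * hypothetical spinlock_t b_uptodate_lock member in struct buffer_head,
 * initialized when the buffer head is set up.
 */
static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
{
	unsigned long flags;

#ifdef CONFIG_PREEMPT_RT
	/* Regular spinlock: sleeps on contention on RT, visible to lockdep */
	spin_lock_irqsave(&bh->b_uptodate_lock, flags);
#else
	/* Bit spinlock: no extra storage, but opaque to lockdep */
	local_irq_save(flags);
	bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
#endif
	return flags;
}

static inline void
bh_uptodate_unlock_irqrestore(struct buffer_head *bh, unsigned long flags)
{
#ifdef CONFIG_PREEMPT_RT
	spin_unlock_irqrestore(&bh->b_uptodate_lock, flags);
#else
	bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
	local_irq_restore(flags);
#endif
}

Because all callers go through the two wrappers, such a substitution (or
a debug variant using a regular spinlock so lockdep can track the lock)
touches only this one place, which is the point of the patch.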