From patchwork Tue Aug 17 14:40:18 2021
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 12441607
Date: Tue, 17 Aug 2021 16:40:18 +0200
From: Sebastian Andrzej Siewior
To: Valentin Schneider
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 rcu@vger.kernel.org, linux-rt-users@vger.kernel.org, Catalin Marinas,
 Will Deacon, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
 Steven Rostedt, Daniel Bristot de Oliveira, "Paul E. McKenney",
 Frederic Weisbecker, Josh Triplett, Mathieu Desnoyers, Davidlohr Bueso,
 Lai Jiangshan, Joel Fernandes, Anshuman Khandual, Vincenzo Frascino,
 Steven Price, Ard Biesheuvel, Boqun Feng, Mike Galbraith, Scott Wood
Subject: [PATCH] rcutorture: Avoid problematic critical section nesting on RT
Message-ID: <20210817144018.nqssoq475vitrqlv@linutronix.de>
In-Reply-To: <20210817131741.evduh4fw7vyv2dzt@linutronix.de>
References: <20210811201354.1976839-1-valentin.schneider@arm.com>
 <20210811201354.1976839-2-valentin.schneider@arm.com>
 <20210817121345.5iyj5epemczn3a52@linutronix.de>
 <20210817131741.evduh4fw7vyv2dzt@linutronix.de>

From: Scott Wood

rcutorture was generating some nesting scenarios that are not
reasonable. Constrain the state selection to avoid them.

Example:

 1. rcu_read_lock()
 2. local_irq_disable()
 3. rcu_read_unlock()
 4. local_irq_enable()

If the thread is preempted between steps 1 and 2,
rcu_read_unlock_special.b.blocked will be set, but it won't be acted on
in step 3 because IRQs are disabled. Thus, reporting of the quiescent
state will be delayed beyond the local_irq_enable().

For now, these scenarios will continue to be tested on non-PREEMPT_RT
kernels, until debug checks are added to ensure that they are not
happening elsewhere.

Signed-off-by: Scott Wood
[valentin.schneider@arm.com: Don't disable BH in atomic context]
[bigeasy: remove 'preempt_disable(); local_bh_disable(); preempt_enable();
 local_bh_enable();' from the examples because this works on RT now.]
Signed-off-by: Sebastian Andrzej Siewior
---
I folded in Valentin's bits. I removed the unbalanced
preempt_disable()/migrate_disable() part from the description because it
is now supported by the migrate-disable implementation. I didn't find it
explicitly in the code/patch except as part of local_bh_disable().
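Spelled out as code, the four-step example from the changelog looks
like the sketch below (illustrative only; example_problematic_nesting()
is a made-up name, the APIs are the real kernel ones):

	#include <linux/rcupdate.h>
	#include <linux/irqflags.h>

	/*
	 * The nesting rcutorture could generate. If this task is
	 * preempted between steps 1 and 2,
	 * rcu_read_unlock_special.b.blocked is set; step 3 cannot act
	 * on it because IRQs are off, so the quiescent state is
	 * reported only some time after step 4.
	 */
	static void example_problematic_nesting(void)
	{
		rcu_read_lock();	/* 1 */
		local_irq_disable();	/* 2 */
		rcu_read_unlock();	/* 3 */
		local_irq_enable();	/* 4 */
	}
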
 kernel/rcu/rcutorture.c | 94 ++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 80 insertions(+), 14 deletions(-)
---
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -61,10 +61,13 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com> and Josh Triplett <josh@joshtriplett.org>");
 #define RCUTORTURE_RDR_RBH	 0x08	/*  ... rcu_read_lock_bh(). */
 #define RCUTORTURE_RDR_SCHED	 0x10	/*  ... rcu_read_lock_sched(). */
 #define RCUTORTURE_RDR_RCU	 0x20	/*  ... entering another RCU reader. */
-#define RCUTORTURE_RDR_NBITS	 6	/* Number of bits defined above. */
+#define RCUTORTURE_RDR_ATOM_BH	 0x40	/*  ... disabling bh while atomic */
+#define RCUTORTURE_RDR_ATOM_RBH	 0x80	/*  ... RBH while atomic */
+#define RCUTORTURE_RDR_NBITS	 8	/* Number of bits defined above. */
 #define RCUTORTURE_MAX_EXTEND	 \
 	(RCUTORTURE_RDR_BH | RCUTORTURE_RDR_IRQ | RCUTORTURE_RDR_PREEMPT | \
-	 RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED)
+	 RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED | \
+	 RCUTORTURE_RDR_ATOM_BH | RCUTORTURE_RDR_ATOM_RBH)
 #define RCUTORTURE_RDR_MAX_LOOPS 0x7	/* Maximum reader extensions. */
 					/* Must be power of two minus one. */
 #define RCUTORTURE_RDR_MAX_SEGS (RCUTORTURE_RDR_MAX_LOOPS + 3)
@@ -1430,31 +1433,53 @@ static void rcutorture_one_extend(int *readstate, int newstate,
 	WARN_ON_ONCE((idxold >> RCUTORTURE_RDR_SHIFT) > 1);
 	rtrsp->rt_readstate = newstate;
 
-	/* First, put new protection in place to avoid critical-section gap. */
+	/*
+	 * First, put new protection in place to avoid critical-section gap.
+	 * Disable preemption around the ATOM disables to ensure that
+	 * in_atomic() is true.
+	 */
 	if (statesnew & RCUTORTURE_RDR_BH)
 		local_bh_disable();
+	if (statesnew & RCUTORTURE_RDR_RBH)
+		rcu_read_lock_bh();
 	if (statesnew & RCUTORTURE_RDR_IRQ)
 		local_irq_disable();
 	if (statesnew & RCUTORTURE_RDR_PREEMPT)
 		preempt_disable();
-	if (statesnew & RCUTORTURE_RDR_RBH)
-		rcu_read_lock_bh();
 	if (statesnew & RCUTORTURE_RDR_SCHED)
 		rcu_read_lock_sched();
+	preempt_disable();
+	if (statesnew & RCUTORTURE_RDR_ATOM_BH)
+		local_bh_disable();
+	if (statesnew & RCUTORTURE_RDR_ATOM_RBH)
+		rcu_read_lock_bh();
+	preempt_enable();
 	if (statesnew & RCUTORTURE_RDR_RCU)
 		idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
 
-	/* Next, remove old protection, irq first due to bh conflict. */
+	/*
+	 * Next, remove old protection, in decreasing order of strength
+	 * to avoid unlock paths that aren't safe in the stronger
+	 * context. Disable preemption around the ATOM enables in
+	 * case the context was only atomic due to IRQ disabling.
+	 */
+	preempt_disable();
 	if (statesold & RCUTORTURE_RDR_IRQ)
 		local_irq_enable();
-	if (statesold & RCUTORTURE_RDR_BH)
+	if (statesold & RCUTORTURE_RDR_ATOM_BH)
 		local_bh_enable();
+	if (statesold & RCUTORTURE_RDR_ATOM_RBH)
+		rcu_read_unlock_bh();
+	preempt_enable();
 	if (statesold & RCUTORTURE_RDR_PREEMPT)
 		preempt_enable();
-	if (statesold & RCUTORTURE_RDR_RBH)
-		rcu_read_unlock_bh();
 	if (statesold & RCUTORTURE_RDR_SCHED)
 		rcu_read_unlock_sched();
+	if (statesold & RCUTORTURE_RDR_BH)
+		local_bh_enable();
+	if (statesold & RCUTORTURE_RDR_RBH)
+		rcu_read_unlock_bh();
+
 	if (statesold & RCUTORTURE_RDR_RCU) {
 		bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
 
@@ -1496,6 +1521,12 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
 	int mask = rcutorture_extend_mask_max();
 	unsigned long randmask1 = torture_random(trsp) >> 8;
 	unsigned long randmask2 = randmask1 >> 3;
+	unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
+	unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
+	unsigned long nonatomic_bhs = RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
+	unsigned long atomic_bhs = RCUTORTURE_RDR_ATOM_BH |
+				   RCUTORTURE_RDR_ATOM_RBH;
+	unsigned long tmp;
 
 	WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
 	/* Mostly only one bit (need preemption!), sometimes lots of bits. */
@@ -1503,11 +1534,46 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
 		mask = mask & randmask2;
 	else
 		mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
-	/* Can't enable bh w/irq disabled. */
-	if ((mask & RCUTORTURE_RDR_IRQ) &&
-	    ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
-	     (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
-		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
+
+	/*
+	 * Can't enable bh w/irq disabled.
+	 */
+	tmp = atomic_bhs | nonatomic_bhs;
+	if (mask & RCUTORTURE_RDR_IRQ)
+		mask |= oldmask & tmp;
+
+	/*
+	 * Ideally these sequences would be detected in debug builds
+	 * (regardless of RT), but until then don't stop testing
+	 * them on non-RT.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+		/*
+		 * Can't disable bh in atomic context if bh was already
+		 * disabled by another task on the same CPU. Instead of
+		 * attempting to track this, just avoid disabling bh in atomic
+		 * context.
+		 */
+		mask &= ~atomic_bhs;
+		/*
+		 * Can't release the outermost rcu lock in an irq disabled
+		 * section without preemption also being disabled, if irqs
+		 * had ever been enabled during this RCU critical section
+		 * (could leak a special flag and delay reporting the qs).
+		 */
+		if ((oldmask & RCUTORTURE_RDR_RCU) &&
+		    (mask & RCUTORTURE_RDR_IRQ) &&
+		    !(mask & preempts))
+			mask |= RCUTORTURE_RDR_RCU;
+
+		/* Can't modify non-atomic bh in atomic context */
+		tmp = nonatomic_bhs;
+		if (oldmask & preempts_irq)
+			mask &= ~tmp;
+		if ((oldmask | mask) & preempts_irq)
+			mask |= oldmask & tmp;
+	}
+
 	return mask ?: RCUTORTURE_RDR_RCU;
 }
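
A footnote on the "decreasing order of strength" unlock ordering above
(illustrative sketch only; example_unlock_ordering() is a made-up
name): an unlock operation may itself do work that is only legal in the
weaker context. For instance, local_bh_enable() may run pending
softirqs and therefore must not be called with IRQs disabled:

	#include <linux/bottom_half.h>
	#include <linux/irqflags.h>

	static void example_unlock_ordering(void)
	{
		local_bh_disable();
		local_irq_disable();
		/* local_bh_enable() here would be a bug: IRQs are off. */
		local_irq_enable();	/* drop the stronger protection first */
		local_bh_enable();	/* safe now: may process pending softirqs */
	}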