From patchwork Tue Apr 28 20:58:40 2020
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11515551
From: "Uladzislau Rezki (Sony)"
To: LKML, linux-mm@kvack.org
Cc: Andrew Morton, "Paul E. McKenney", "Theodore Y. Ts'o", Matthew Wilcox,
 Joel Fernandes, RCU, Uladzislau Rezki, Oleksiy Avramchenko,
 bigeasy@linutronix.de
Subject: [PATCH 01/24] rcu/tree: Keep kfree_rcu() awake during lock contention
Date: Tue, 28 Apr 2020 22:58:40 +0200
Message-Id: <20200428205903.61704-2-urezki@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200428205903.61704-1-urezki@gmail.com>
References: <20200428205903.61704-1-urezki@gmail.com>

From: "Joel Fernandes (Google)"

On PREEMPT_RT kernels, contending on the krcp spinlock can cause
sleeping, because on those kernels spinlock_t is converted to an
rt-mutex. To avoid breaking current or future users of kfree_rcu(),
use raw spinlocks, which are not subject to such conversion. Having
vetted all code paths, there is no reason to believe the raw spinlock
will be held for a long time, so PREEMPT_RT should not suffer from
lengthy lock acquisitions.
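For readers unfamiliar with the distinction, the following is a minimal
sketch, not part of this patch, of the pattern the change adopts. The
names example_krcp and example_queue() are hypothetical, invented for
illustration; the raw_spin_*() primitives are the real kernel API used
in the diff below.

/*
 * Minimal sketch (illustrative only). On PREEMPT_RT, spinlock_t is
 * backed by an rt-mutex, so spin_lock_irqsave() may sleep under
 * contention. raw_spinlock_t always busy-waits, so the critical
 * section below never sleeps, even on RT kernels -- provided it is
 * kept short.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical per-CPU structure mirroring struct kfree_rcu_cpu. */
struct example_krcp {
	raw_spinlock_t lock;	/* never converted to a sleeping lock */
	struct rcu_head *head;	/* singly linked list of queued objects */
};

/* Queue one object under the raw lock; safe from non-sleepable context. */
static void example_queue(struct example_krcp *krcp, struct rcu_head *head)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&krcp->lock, flags);	/* spins, never sleeps */
	head->next = krcp->head;
	krcp->head = head;
	raw_spin_unlock_irqrestore(&krcp->lock, flags);
}

With that in mind, the diff is a mechanical substitution of
spinlock_t/spin_*() with raw_spinlock_t/raw_spin_*() on krcp->lock.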
Cc: bigeasy@linutronix.de
Cc: Uladzislau Rezki
Reviewed-by: Uladzislau Rezki
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f288477ee1c2..cf68d3d9f5b8 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2905,7 +2905,7 @@ struct kfree_rcu_cpu {
 	struct kfree_rcu_bulk_data *bhead;
 	struct kfree_rcu_bulk_data *bcached;
 	struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct delayed_work monitor_work;
 	bool monitor_todo;
 	bool initialized;
@@ -2939,12 +2939,12 @@ static void kfree_rcu_work(struct work_struct *work)
 	krwp = container_of(to_rcu_work(work),
 			    struct kfree_rcu_cpu_work, rcu_work);
 	krcp = krwp->krcp;
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	head = krwp->head_free;
 	krwp->head_free = NULL;
 	bhead = krwp->bhead_free;
 	krwp->bhead_free = NULL;
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);

 	/* "bhead" is now private, so traverse locklessly. */
 	for (; bhead; bhead = bnext) {
@@ -3047,14 +3047,14 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
 	krcp->monitor_todo = false;
 	if (queue_kfree_rcu_work(krcp)) {
 		// Success! Our job is done here.
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 		return;
 	}

 	// Previous RCU batch still in progress, try again later.
 	krcp->monitor_todo = true;
 	schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }

 /*
@@ -3067,11 +3067,11 @@ static void kfree_rcu_monitor(struct work_struct *work)
 	struct kfree_rcu_cpu *krcp = container_of(work, struct kfree_rcu_cpu,
 						 monitor_work.work);

-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	if (krcp->monitor_todo)
 		kfree_rcu_drain_unlock(krcp, flags);
 	else
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }

 static inline bool
@@ -3142,7 +3142,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	local_irq_save(flags);	// For safely calling this_cpu_ptr().
 	krcp = this_cpu_ptr(&krc);
 	if (krcp->initialized)
-		spin_lock(&krcp->lock);
+		raw_spin_lock(&krcp->lock);

 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(head)) {
@@ -3173,7 +3173,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)

 unlock_return:
 	if (krcp->initialized)
-		spin_unlock(&krcp->lock);
+		raw_spin_unlock(&krcp->lock);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(kfree_call_rcu);
@@ -3205,11 +3205,11 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

 		count = krcp->count;
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (krcp->monitor_todo)
 			kfree_rcu_drain_unlock(krcp, flags);
 		else
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);

 		sc->nr_to_scan -= count;
 		freed += count;
@@ -3236,15 +3236,15 @@ void __init kfree_rcu_scheduler_running(void)
 	for_each_online_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (!krcp->head || krcp->monitor_todo) {
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 			continue;
 		}
 		krcp->monitor_todo = true;
 		schedule_delayed_work_on(cpu, &krcp->monitor_work,
 					 KFREE_DRAIN_JIFFIES);
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 	}
 }

@@ -4140,7 +4140,7 @@ static void __init kfree_rcu_batch_init(void)
 	for_each_possible_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

-		spin_lock_init(&krcp->lock);
+		raw_spin_lock_init(&krcp->lock);
 		for (i = 0; i < KFREE_N_BATCHES; i++) {
 			INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
 			krcp->krw_arr[i].krcp = krcp;