From patchwork Mon May 25 21:47:45 2020
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11569465
From: "Uladzislau Rezki (Sony)"
To: LKML, linux-mm@kvack.org
Cc: Andrew Morton, "Paul E. McKenney", "Theodore Y. Ts'o", Matthew Wilcox,
    Joel Fernandes, RCU, Uladzislau Rezki, Oleksiy Avramchenko,
    bigeasy@linutronix.de
Subject: [PATCH v2 01/16] rcu/tree: Keep kfree_rcu() awake during lock contention
Date: Mon, 25 May 2020 23:47:45 +0200
Message-Id: <20200525214800.93072-2-urezki@gmail.com>
In-Reply-To: <20200525214800.93072-1-urezki@gmail.com>
References: <20200525214800.93072-1-urezki@gmail.com>

From: "Joel Fernandes (Google)"

On PREEMPT_RT kernels, the krcp spinlock gets converted to an rt-mutex
and causes kfree_rcu() callers to sleep. This makes it unusable for
callers in purely atomic sections such as non-threaded IRQ handlers and
raw spinlock sections. Fix it by converting the spinlock to a raw
spinlock.

All code paths have been vetted, and there is no reason to believe that
the raw spinlock will hurt RT latencies, as it is not held for a long
time.
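To make the atomic-caller case concrete, here is a minimal sketch, not
taken from the kernel tree: my_obj, my_dev_lock, my_irq_handler() and
fetch_retired_object() are hypothetical names used only to illustrate a
call site that must not sleep, even on PREEMPT_RT.

/*
 * Illustrative only; my_obj, my_dev_lock, my_irq_handler() and
 * fetch_retired_object() are hypothetical and not part of this patch.
 */
#include <linux/interrupt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_obj {
	struct rcu_head rh;
	int data;
};

/* Hypothetical helper that hands back an object retired by the device. */
struct my_obj *fetch_retired_object(void *dev_id);

static DEFINE_RAW_SPINLOCK(my_dev_lock);	/* raw: keeps spinning on PREEMPT_RT */

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	struct my_obj *obj;

	raw_spin_lock(&my_dev_lock);		/* purely atomic section */
	obj = fetch_retired_object(dev_id);
	if (obj)
		kfree_rcu(obj, rh);		/* must not sleep here */
	raw_spin_unlock(&my_dev_lock);

	return IRQ_HANDLED;
}

If the krcp lock inside kfree_call_rcu() were a sleeping lock on RT,
such a caller would be invalid; with the raw conversion below it stays
usable from any context.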
Cc: bigeasy@linutronix.de
Cc: Uladzislau Rezki
Reviewed-by: Uladzislau Rezki
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6e120be29332..6e967a9d6704 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2882,7 +2882,7 @@ struct kfree_rcu_cpu {
 	struct kfree_rcu_bulk_data *bhead;
 	struct kfree_rcu_bulk_data *bcached;
 	struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct delayed_work monitor_work;
 	bool monitor_todo;
 	bool initialized;
@@ -2915,12 +2915,12 @@ static void kfree_rcu_work(struct work_struct *work)
 	krwp = container_of(to_rcu_work(work),
 			    struct kfree_rcu_cpu_work, rcu_work);
 	krcp = krwp->krcp;
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	head = krwp->head_free;
 	krwp->head_free = NULL;
 	bhead = krwp->bhead_free;
 	krwp->bhead_free = NULL;
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 	/* "bhead" is now private, so traverse locklessly. */
 	for (; bhead; bhead = bnext) {
@@ -3023,14 +3023,14 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
 	krcp->monitor_todo = false;
 	if (queue_kfree_rcu_work(krcp)) {
 		// Success! Our job is done here.
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 		return;
 	}
 
 	// Previous RCU batch still in progress, try again later.
 	krcp->monitor_todo = true;
 	schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 /*
@@ -3043,11 +3043,11 @@ static void kfree_rcu_monitor(struct work_struct *work)
 	struct kfree_rcu_cpu *krcp = container_of(work, struct kfree_rcu_cpu,
 						 monitor_work.work);
 
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	if (krcp->monitor_todo)
 		kfree_rcu_drain_unlock(krcp, flags);
 	else
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 static inline bool
@@ -3118,7 +3118,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	local_irq_save(flags);	// For safely calling this_cpu_ptr().
 	krcp = this_cpu_ptr(&krc);
 	if (krcp->initialized)
-		spin_lock(&krcp->lock);
+		raw_spin_lock(&krcp->lock);
 
 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(head)) {
@@ -3149,7 +3149,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 
 unlock_return:
 	if (krcp->initialized)
-		spin_unlock(&krcp->lock);
+		raw_spin_unlock(&krcp->lock);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(kfree_call_rcu);
@@ -3181,11 +3181,11 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
 		count = krcp->count;
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (krcp->monitor_todo)
 			kfree_rcu_drain_unlock(krcp, flags);
 		else
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 		sc->nr_to_scan -= count;
 		freed += count;
@@ -3212,15 +3212,15 @@ void __init kfree_rcu_scheduler_running(void)
 	for_each_online_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (!krcp->head || krcp->monitor_todo) {
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 			continue;
 		}
 		krcp->monitor_todo = true;
 		schedule_delayed_work_on(cpu, &krcp->monitor_work,
 					 KFREE_DRAIN_JIFFIES);
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 	}
 }
@@ -4113,7 +4113,7 @@ static void __init kfree_rcu_batch_init(void)
 	for_each_possible_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_init(&krcp->lock);
+		raw_spin_lock_init(&krcp->lock);
 		for (i = 0; i < KFREE_N_BATCHES; i++) {
 			INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
 			krcp->krw_arr[i].krcp = krcp;
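The conversion works because of how the two lock types behave under
PREEMPT_RT; the following rough comparison is illustrative only (the
demo_* names are hypothetical, not kernel source) and mirrors the
lock/unlock pattern used throughout the diff above.

/* Illustrative only; demo_lock, demo_raw_lock and the function are hypothetical. */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);		/* PREEMPT_RT: rt_mutex based, may sleep */
static DEFINE_RAW_SPINLOCK(demo_raw_lock);	/* PREEMPT_RT: still a spinning lock */

static void demo_critical_sections(void)
{
	unsigned long flags;

	/* Invalid from non-threaded IRQ or raw-spinlock context on RT: may sleep. */
	spin_lock_irqsave(&demo_lock, flags);
	spin_unlock_irqrestore(&demo_lock, flags);

	/* Usable from any context, but keep the region short to protect latency. */
	raw_spin_lock_irqsave(&demo_raw_lock, flags);
	raw_spin_unlock_irqrestore(&demo_raw_lock, flags);
}

That short-hold requirement is why the commit message stresses that the
krcp lock is only held briefly (queueing an object, flipping
monitor_todo), so the raw conversion is not expected to affect RT
latency.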