From patchwork Mon Mar 30 02:32:35 2020
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 11464473
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Uladzislau Rezki (Sony)", Joel Fernandes, Andrew Morton, Ingo Molnar,
 Josh Triplett, Lai Jiangshan, linux-mm@kvack.org, Mathieu Desnoyers,
 "Paul E. McKenney", "Rafael J. Wysocki", rcu@vger.kernel.org, Steven Rostedt
Subject: [PATCH 05/18] rcu: Rename kfree_call_rcu() to kvfree_call_rcu().
Date: Sun, 29 Mar 2020 22:32:35 -0400
Message-Id: <20200330023248.164994-6-joel@joelfernandes.org>
In-Reply-To: <20200330023248.164994-1-joel@joelfernandes.org>
References: <20200330023248.164994-1-joel@joelfernandes.org>

From: "Uladzislau Rezki (Sony)"

Rename kfree_call_rcu() to kvfree_call_rcu(), since it is now capable of
freeing vmalloc() memory as well. Rename the __kfree_rcu() macro to
__kvfree_rcu() for the same reason.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Joel Fernandes (Google)
---
 include/linux/rcupdate.h | 8 ++++----
 include/linux/rcutiny.h  | 2 +-
 include/linux/rcutree.h  | 2 +-
 kernel/rcu/tree.c        | 8 ++++----
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index c6f6a195cb1cd..edb6eeba49f83 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -830,10 +830,10 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 /*
  * Helper macro for kfree_rcu() to prevent argument-expansion eyestrain.
  */
-#define __kfree_rcu(head, offset) \
+#define __kvfree_rcu(head, offset) \
 	do { \
 		BUILD_BUG_ON(!__is_kvfree_rcu_offset(offset)); \
-		kfree_call_rcu(head, (rcu_callback_t)(unsigned long)(offset)); \
+		kvfree_call_rcu(head, (rcu_callback_t)(unsigned long)(offset)); \
 	} while (0)

 /**
@@ -852,7 +852,7 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
  * Because the functions are not allowed in the low-order 4096 bytes of
  * kernel virtual memory, offsets up to 4095 bytes can be accommodated.
  * If the offset is larger than 4095 bytes, a compile-time error will
- * be generated in __kfree_rcu(). If this error is triggered, you can
+ * be generated in __kvfree_rcu(). If this error is triggered, you can
  * either fall back to use of call_rcu() or rearrange the structure to
  * position the rcu_head structure into the first 4096 bytes.
  *
@@ -867,7 +867,7 @@ do {									\
 	typeof (ptr) ___p = (ptr);					\
 									\
 	if (___p)							\
-		__kfree_rcu(&((___p)->rhf), offsetof(typeof(*(ptr)), rhf)); \
+		__kvfree_rcu(&((___p)->rhf), offsetof(typeof(*(ptr)), rhf)); \
 } while (0)

 /**
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index d77e11186afd1..5ba0bcb231976 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -34,7 +34,7 @@ static inline void synchronize_rcu_expedited(void)
 	synchronize_rcu();
 }

-static inline void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+static inline void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
 	call_rcu(head, func);
 }
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 45f3f66bb04df..3a7829d69fef8 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -33,7 +33,7 @@ static inline void rcu_virt_note_context_switch(int cpu)
 }

 void synchronize_rcu_expedited(void);
-void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func);
+void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func);

 void rcu_barrier(void);
 bool rcu_eqs_special_set(int cpu);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 1209945a34bfd..3fb19ea039912 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3082,18 +3082,18 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
 }

 /*
- * Queue a request for lazy invocation of kfree_bulk()/kfree() after a grace
+ * Queue a request for lazy invocation of kfree_bulk()/kvfree() after a grace
  * period. Please note there are two paths are maintained, one is the main one
  * that uses kfree_bulk() interface and second one is emergency one, that is
  * used only when the main path can not be maintained temporary, due to memory
  * pressure.
  *
- * Each kfree_call_rcu() request is added to a batch. The batch will be drained
+ * Each kvfree_call_rcu() request is added to a batch. The batch will be drained
  * every KFREE_DRAIN_JIFFIES number of jiffies. All the objects in the batch will
  * be free'd in workqueue context. This allows us to: batch requests together to
  * reduce the number of grace periods during heavy kfree_rcu() load.
  */
-void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
 	unsigned long flags;
 	struct kfree_rcu_cpu *krcp;
@@ -3142,7 +3142,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	spin_unlock(&krcp->lock);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL_GPL(kfree_call_rcu);
+EXPORT_SYMBOL_GPL(kvfree_call_rcu);

 static unsigned long
 kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
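
A brief usage sketch (editorial illustration, not part of the patch): after this
rename, an ordinary kfree_rcu() call still works unchanged, but its expansion now
goes through __kvfree_rcu() and kvfree_call_rcu(). The struct and function names
below are hypothetical examples.

	struct foo {
		int data;
		struct rcu_head rhf;	/* must lie within the first 4096 bytes */
	};

	static void release_foo(struct foo *p)
	{
		/*
		 * Expands to __kvfree_rcu(&p->rhf, offsetof(typeof(*p), rhf)),
		 * which queues the object via kvfree_call_rcu() and frees it
		 * after a grace period.
		 */
		kfree_rcu(p, rhf);
	}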