From patchwork Mon Mar 30 02:32:37 2020
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 11464477
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Andrew Morton, Ingo Molnar, Josh Triplett,
    Lai Jiangshan, linux-mm@kvack.org, Mathieu Desnoyers, "Paul E. McKenney",
    "Rafael J. Wysocki", rcu@vger.kernel.org, Steven Rostedt,
    "Uladzislau Rezki (Sony)"
Subject: [PATCH 07/18] rcu/tree: Simplify debug_objects handling
Date: Sun, 29 Mar 2020 22:32:37 -0400
Message-Id: <20200330023248.164994-8-joel@joelfernandes.org>
X-Mailer: git-send-email 2.26.0.rc2.310.g2932bb562d-goog
In-Reply-To: <20200330023248.164994-1-joel@joelfernandes.org>
References: <20200330023248.164994-1-joel@joelfernandes.org>

In preparation for future headless RCU support, make the debug_objects
handling in kfree_rcu() use the final 'pointer' value of the object,
instead of depending on the head.
Signed-off-by: Joel Fernandes (Google)
Reported-by: kbuild test robot
---
 kernel/rcu/tree.c | 30 +++++++++++++-----------------
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3fb19ea039912..95d1f5e20d5ec 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2825,7 +2825,6 @@ struct kfree_rcu_bulk_data {
 	unsigned long nr_records;
 	void *records[KFREE_BULK_MAX_ENTR];
 	struct kfree_rcu_bulk_data *next;
-	struct rcu_head *head_free_debug;
 };
 
 /**
@@ -2875,11 +2874,11 @@ struct kfree_rcu_cpu {
 static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc);
 
 static __always_inline void
-debug_rcu_head_unqueue_bulk(struct rcu_head *head)
+debug_rcu_bhead_unqueue(struct kfree_rcu_bulk_data *bhead)
 {
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
-	for (; head; head = head->next)
-		debug_rcu_head_unqueue(head);
+	for (int i = 0; i < bhead->nr_records; i++)
+		debug_rcu_head_unqueue((struct rcu_head *)(bhead->records[i]));
 #endif
 }
 
@@ -2909,7 +2908,7 @@ static void kfree_rcu_work(struct work_struct *work)
 
 	for (; bhead; bhead = bnext) {
 		bnext = bhead->next;
-		debug_rcu_head_unqueue_bulk(bhead->head_free_debug);
+		debug_rcu_bhead_unqueue(bhead);
 
 		rcu_lock_acquire(&rcu_callback_map);
 		trace_rcu_invoke_kfree_bulk_callback(rcu_state.name,
@@ -2931,14 +2930,15 @@ static void kfree_rcu_work(struct work_struct *work)
 	 */
 	for (; head; head = next) {
 		unsigned long offset = (unsigned long)head->func;
+		void *ptr = (void *)head - offset;
 
 		next = head->next;
-		debug_rcu_head_unqueue(head);
+		debug_rcu_head_unqueue((struct rcu_head *)ptr);
 		rcu_lock_acquire(&rcu_callback_map);
 		trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
 
 		if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
-			kvfree((void *)head - offset);
+			kvfree(ptr);
 
 		rcu_lock_release(&rcu_callback_map);
 		cond_resched_tasks_rcu_qs();
@@ -3062,18 +3062,11 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
 		/* Initialize the new block. */
 		bnode->nr_records = 0;
 		bnode->next = krcp->bhead;
-		bnode->head_free_debug = NULL;
 
 		/* Attach it to the head. */
 		krcp->bhead = bnode;
 	}
 
-#ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
-	head->func = func;
-	head->next = krcp->bhead->head_free_debug;
-	krcp->bhead->head_free_debug = head;
-#endif
-
 	/* Finally insert. */
 	krcp->bhead->records[krcp->bhead->nr_records++] =
 		(void *) head - (unsigned long) func;
@@ -3097,14 +3090,17 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
 	unsigned long flags;
 	struct kfree_rcu_cpu *krcp;
+	void *ptr;
 
 	local_irq_save(flags);	// For safely calling this_cpu_ptr().
 	krcp = this_cpu_ptr(&krc);
 	if (krcp->initialized)
 		spin_lock(&krcp->lock);
 
+	ptr = (void *)head - (unsigned long)func;
+
 	// Queue the object but don't yet schedule the batch.
-	if (debug_rcu_head_queue(head)) {
+	if (debug_rcu_head_queue(ptr)) {
 		// Probable double kfree_rcu(), just leak.
 		WARN_ONCE(1, "%s(): Double-freed call. rcu_head %p\n",
 			  __func__, head);
@@ -3121,8 +3117,8 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	 * Under high memory pressure GFP_NOWAIT can fail,
 	 * in that case the emergency path is maintained.
 	 */
-	if (is_vmalloc_addr((void *) head - (unsigned long) func) ||
-	    !kfree_call_rcu_add_ptr_to_bulk(krcp, head, func)) {
+	if (is_vmalloc_addr(ptr) ||
+	    !kfree_call_rcu_add_ptr_to_bulk(krcp, head, func)) {
 		head->func = func;
 		head->next = krcp->head;
 		krcp->head = head;
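
A note for readers following the pointer arithmetic above: for kfree_rcu(),
the rcu_head's 'func' slot does not carry a real callback but the byte offset
of the embedded rcu_head within its enclosing object, so the original
allocation can be recovered as (void *)head - offset. That recovered pointer
is what the debug_objects calls and kvfree() now both operate on. Below is a
minimal userspace sketch of that recovery; the structures and the main()
harness are hypothetical stand-ins, not kernel code.

/*
 * Minimal userspace sketch (not kernel code) of the offset trick used by
 * kfree_rcu(): 'func' carries offsetof(obj_type, rcu_head_field), so the
 * start of the allocation is head minus that offset.  Struct names and
 * the main() harness below are hypothetical stand-ins.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>

struct fake_rcu_head {
	struct fake_rcu_head *next;
	void (*func)(struct fake_rcu_head *head);
};

struct my_obj {
	long payload;
	struct fake_rcu_head rh;	/* embedded head, as with kfree_rcu(obj, rh) */
};

int main(void)
{
	struct my_obj *obj = malloc(sizeof(*obj));
	struct fake_rcu_head *head = &obj->rh;

	/* kfree_rcu() stores this offset in place of a real callback. */
	unsigned long offset = offsetof(struct my_obj, rh);

	/* Same arithmetic as kvfree_call_rcu(): recover the object start. */
	void *ptr = (void *)((char *)head - offset);

	printf("object at %p, recovered pointer %p\n", (void *)obj, ptr);
	free(obj);
	return 0;
}

Computing 'ptr' once up front is what lets the patch drop the separate
head_free_debug list: the bulk path can hand debug_objects the same pointers
it already stores in records[], and the emergency path reuses 'ptr' for both
debug_rcu_head_unqueue() and kvfree().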