From patchwork Tue Oct 8 13:50:31 2024
X-Patchwork-Submitter: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
X-Patchwork-Id: 13826521
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Boqun Feng
Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers, Linus Torvalds,
    Andrew Morton, Peter Zijlstra, Nicholas Piggin, Michael Ellerman,
    Greg Kroah-Hartman, Sebastian Andrzej Siewior, "Paul E. McKenney",
    Will Deacon, Alan Stern, John Stultz, Neeraj Upadhyay,
    Frederic Weisbecker, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
    Steven Rostedt, Lai Jiangshan, Zqiang, Ingo Molnar, Waiman Long,
    Mark Rutland, Thomas Gleixner, Vlastimil Babka, maged.michael@gmail.com,
    Mateusz Guzik, Jonas Oberhauser, rcu@vger.kernel.org, linux-mm@kvack.org,
    lkmm@lists.linux.dev, Gary Guo, Nikita Popov, llvm@lists.linux.dev
Subject: [RFC PATCH v3 1/4] compiler.h: Introduce ptr_eq() to preserve address dependency
Date: Tue, 8 Oct 2024 09:50:31 -0400
Message-Id: <20241008135034.1982519-2-mathieu.desnoyers@efficios.com>
In-Reply-To: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
References: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0
Compiler CSE and SSA GVN optimizations can cause the address dependency
of addresses returned by rcu_dereference to be lost when comparing those
pointers with either constants or previously loaded pointers.

Introduce ptr_eq() to compare two addresses while preserving the address
dependencies for later use of the address. It should be used when
comparing an address returned by rcu_dereference().

This is needed to prevent the compiler CSE and SSA GVN optimizations
from using @a (or @b) in places where the source refers to @b (or @a)
based on the fact that after the comparison, the two are known to be
equal, which does not preserve address dependencies and allows the
following misordering speculations:

- If @b is a constant, the compiler can issue the loads which depend
  on @a before loading @a.
- If @b is a register populated by a prior load, weakly-ordered
  CPUs can speculate loads which depend on @a before loading @a.

The same logic applies with @a and @b swapped.

Suggested-by: Linus Torvalds
Suggested-by: Boqun Feng
Signed-off-by: Mathieu Desnoyers
Reviewed-by: Boqun Feng
Reviewed-by: Joel Fernandes (Google)
Tested-by: Joel Fernandes (Google)
Acked-by: "Paul E. McKenney"
Acked-by: Alan Stern
Cc: Greg Kroah-Hartman
Cc: Sebastian Andrzej Siewior
Cc: "Paul E. McKenney"
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Alan Stern
Cc: John Stultz
Cc: Neeraj Upadhyay
Cc: Linus Torvalds
Cc: Frederic Weisbecker
Cc: Joel Fernandes
Cc: Josh Triplett
Cc: Uladzislau Rezki
Cc: Steven Rostedt
Cc: Lai Jiangshan
Cc: Zqiang
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: maged.michael@gmail.com
Cc: Mateusz Guzik
Cc: Gary Guo
Cc: Jonas Oberhauser
Cc: rcu@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: lkmm@lists.linux.dev
Cc: Nikita Popov
Cc: llvm@lists.linux.dev
---
Changes since v0:
- Include feedback from Alan Stern.
---
 include/linux/compiler.h | 63 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 2df665fa2964..75a378ae7af1 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -186,6 +186,69 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 	__asm__ ("" : "=r" (var) : "0" (var))
 #endif
 
+/*
+ * Compare two addresses while preserving the address dependencies for
+ * later use of the address. It should be used when comparing an address
+ * returned by rcu_dereference().
+ *
+ * This is needed to prevent the compiler CSE and SSA GVN optimizations
+ * from using @a (or @b) in places where the source refers to @b (or @a)
+ * based on the fact that after the comparison, the two are known to be
+ * equal, which does not preserve address dependencies and allows the
+ * following misordering speculations:
+ *
+ * - If @b is a constant, the compiler can issue the loads which depend
+ *   on @a before loading @a.
+ * - If @b is a register populated by a prior load, weakly-ordered
+ *   CPUs can speculate loads which depend on @a before loading @a.
+ *
+ * The same logic applies with @a and @b swapped.
+ *
+ * Return value: true if pointers are equal, false otherwise.
+ *
+ * The compiler barrier() is ineffective at fixing this issue. It does
+ * not prevent the compiler CSE from losing the address dependency:
+ *
+ * int fct_2_volatile_barriers(void)
+ * {
+ *     int *a, *b;
+ *
+ *     do {
+ *         a = READ_ONCE(p);
+ *         asm volatile ("" : : : "memory");
+ *         b = READ_ONCE(p);
+ *     } while (a != b);
+ *     asm volatile ("" : : : "memory");  <-- barrier()
+ *     return *b;
+ * }
+ *
+ * With gcc 14.2 (arm64):
+ *
+ * fct_2_volatile_barriers:
+ *     adrp    x0, .LANCHOR0
+ *     add     x0, x0, :lo12:.LANCHOR0
+ * .L2:
+ *     ldr     x1, [x0]    <-- x1 populated by first load.
+ *     ldr     x2, [x0]
+ *     cmp     x1, x2
+ *     bne     .L2
+ *     ldr     w0, [x1]    <-- x1 is used for access which should depend on b.
+ *     ret
+ *
+ * On weakly-ordered architectures, this lets CPU speculation use the
+ * result from the first load to speculate "ldr w0, [x1]" before
+ * "ldr x2, [x0]". Based on the RCU documentation, the control
+ * dependency does not prevent the CPU from speculating loads.
+ */
+static __always_inline
+int ptr_eq(const volatile void *a, const volatile void *b)
+{
+	OPTIMIZER_HIDE_VAR(a);
+	OPTIMIZER_HIDE_VAR(b);
+	return a == b;
+}
+
 #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
 
 /**
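[Editorial note: the following sketch is not part of the patch. It is a
minimal illustration of the intended use of ptr_eq() in an RCU reader;
the type my_data and the globals gp and default_data are invented for
the example.]

	struct my_data {
		int a;
	};

	static struct my_data default_data;
	static struct my_data __rcu *gp;

	int read_value(void)
	{
		struct my_data *p;
		int val;

		rcu_read_lock();
		p = rcu_dereference(gp);
		/*
		 * Using ptr_eq() instead of "p == &default_data" prevents
		 * the compiler from substituting &default_data for p in
		 * the dereference below, which would destroy the address
		 * dependency provided by rcu_dereference().
		 */
		if (ptr_eq(p, &default_data))
			val = 0;
		else
			val = p->a;
		rcu_read_unlock();
		return val;
	}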
From patchwork Tue Oct 8 13:50:32 2024
X-Patchwork-Submitter: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
X-Patchwork-Id: 13826522
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Boqun Feng
Subject: [RFC PATCH v3 2/4] Documentation: RCU: Refer to ptr_eq()
Date: Tue, 8 Oct 2024 09:50:32 -0400
Message-Id: <20241008135034.1982519-3-mathieu.desnoyers@efficios.com>
In-Reply-To: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
References: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0
Refer to ptr_eq() in the rcu_dereference() documentation.

ptr_eq() is a mechanism that preserves address dependencies when
comparing pointers, and should be favored when comparing a pointer
obtained from rcu_dereference() against another pointer.

Signed-off-by: Mathieu Desnoyers
Acked-by: Alan Stern
Acked-by: Paul E. McKenney
Reviewed-by: Joel Fernandes (Google)
Cc: Greg Kroah-Hartman
Cc: Sebastian Andrzej Siewior
Cc: "Paul E. McKenney"
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Alan Stern
Cc: John Stultz
Cc: Neeraj Upadhyay
Cc: Linus Torvalds
Cc: Frederic Weisbecker
Cc: Joel Fernandes
Cc: Josh Triplett
Cc: Uladzislau Rezki
Cc: Steven Rostedt
Cc: Lai Jiangshan
Cc: Zqiang
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: maged.michael@gmail.com
Cc: Mateusz Guzik
Cc: Gary Guo
Cc: Jonas Oberhauser
Cc: rcu@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: lkmm@lists.linux.dev
Cc: Nikita Popov
Cc: llvm@lists.linux.dev
---
Changes since v0:
- Include feedback from Alan Stern.

Changes since v1:
- Include feedback from Paul E. McKenney.
---
 Documentation/RCU/rcu_dereference.rst | 38 +++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 5 deletions(-)

diff --git a/Documentation/RCU/rcu_dereference.rst b/Documentation/RCU/rcu_dereference.rst
index 2524dcdadde2..de6175bf430f 100644
--- a/Documentation/RCU/rcu_dereference.rst
+++ b/Documentation/RCU/rcu_dereference.rst
@@ -104,11 +104,12 @@ readers working properly:
   after such branches, but can speculate loads, which can again
   result in misordering bugs.
 
-- Be very careful about comparing pointers obtained from
-  rcu_dereference() against non-NULL values.  As Linus Torvalds
-  explained, if the two pointers are equal, the compiler could
-  substitute the pointer you are comparing against for the pointer
-  obtained from rcu_dereference().  For example::
+- Use operations that preserve address dependencies (such as
+  "ptr_eq()") to compare pointers obtained from rcu_dereference()
+  against non-NULL pointers.  As Linus Torvalds explained, if the
+  two pointers are equal, the compiler could substitute the
+  pointer you are comparing against for the pointer obtained from
+  rcu_dereference().  For example::
 
 	p = rcu_dereference(gp);
 	if (p == &default_struct)
@@ -125,6 +126,29 @@ readers working properly:
   On ARM and Power hardware, the load from "default_struct.a"
   can now be speculated, such that it might happen before the
   rcu_dereference(). This could result in bugs due to misordering.
+  Performing the comparison with "ptr_eq()" ensures the compiler
+  does not perform such transformation.
+
+  If the comparison is against another pointer, the compiler is
+  allowed to use either pointer for the following accesses, which
+  loses the address dependency and allows weakly-ordered
+  architectures such as ARM and PowerPC to speculate the
+  address-dependent load before rcu_dereference(). For example::
+
+	p1 = READ_ONCE(gp);
+	p2 = rcu_dereference(gp);
+	if (p1 == p2)  /* BUGGY!!! */
+		do_default(p2->a);
+
+  The compiler can use p1->a rather than p2->a, destroying the
+  address dependency. Performing the comparison with "ptr_eq()"
+  ensures the compiler preserves the address dependencies.
+  Corrected code::
+
+	p1 = READ_ONCE(gp);
+	p2 = rcu_dereference(gp);
+	if (ptr_eq(p1, p2))
+		do_default(p2->a);
 
 However, comparisons are OK in the following cases:
 
@@ -204,6 +228,10 @@ readers working properly:
   comparison will provide exactly the information that the compiler
   needs to deduce the value of the pointer.
 
+  When in doubt, use operations that preserve address dependencies
+  (such as "ptr_eq()") to compare pointers obtained from
+  rcu_dereference() against non-NULL pointers.
+
 - Disable any value-speculation optimizations that your compiler
   might provide, especially if you are making use of feedback-based
   optimizations that take data collected from prior runs.  Such
From patchwork Tue Oct 8 13:50:33 2024
X-Patchwork-Submitter: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
X-Patchwork-Id: 13826523
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Boqun Feng
Subject: [RFC PATCH v3 3/4] hazptr: Implement Hazard Pointers
Date: Tue, 8 Oct 2024 09:50:33 -0400
Message-Id: <20241008135034.1982519-4-mathieu.desnoyers@efficios.com>
In-Reply-To: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
References: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0
This API provides existence guarantees of objects through Hazard
Pointers (hazptr). This minimalist implementation is specific to use
with preemption disabled, but can be extended further as needed.

Each hazptr domain defines a fixed number of hazard pointer slots
(nr_cpus) across the entire system.

Its main benefit over RCU is that it allows fast reclaim of
HP-protected pointers without needing to wait for a grace period.

It also allows the hazard pointer scan to call a user-defined callback
to retire a hazard pointer slot immediately if needed. This callback
may, for instance, issue an IPI to the relevant CPU.

There are a few possible use-cases for this in the Linux kernel:

- Improve performance of mm_count by replacing lazy active mm by
  hazptr.
- Guarantee object existence on pointer dereference to use refcount:
  - replace locking used for that purpose in some drivers,
  - replace RCU + inc_not_zero pattern,
- rtmutex: Improve situations where locks need to be taken in reverse
  dependency chain order by guaranteeing existence of first and second
  locks in traversal order, allowing them to be locked in the correct
  order (which is reverse from traversal order) rather than
  try-lock+retry on nested lock.

References:

[1]: M. M. Michael, "Hazard pointers: safe memory reclamation for
     lock-free objects," in IEEE Transactions on Parallel and
     Distributed Systems, vol. 15, no. 6, pp. 491-504, June 2004

Link: https://lore.kernel.org/lkml/j3scdl5iymjlxavomgc6u5ndg3svhab6ga23dr36o4f5mt333w@7xslvq6b6hmv/
Link: https://lpc.events/event/18/contributions/1731/
Signed-off-by: Mathieu Desnoyers
Cc: Nicholas Piggin
Cc: Michael Ellerman
Cc: Greg Kroah-Hartman
Cc: Sebastian Andrzej Siewior
Cc: "Paul E. McKenney"
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Alan Stern
Cc: John Stultz
Cc: Neeraj Upadhyay
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Frederic Weisbecker
Cc: Joel Fernandes
Cc: Josh Triplett
Cc: Uladzislau Rezki
Cc: Steven Rostedt
Cc: Lai Jiangshan
Cc: Zqiang
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: maged.michael@gmail.com
Cc: Mateusz Guzik
Cc: Jonas Oberhauser
Cc: rcu@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: lkmm@lists.linux.dev
---
Changes since v0:
- Remove slot variable from hp_dereference_allocate().

Changes since v2:
- Address Peter Zijlstra's comments.
- Address Paul E. McKenney's comments.
---
 include/linux/hazptr.h | 165 +++++++++++++++++++++++++++++++++++++++++
 kernel/Makefile        |   2 +-
 kernel/hazptr.c        |  51 +++++++++++++
 3 files changed, 217 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/hazptr.h
 create mode 100644 kernel/hazptr.c

diff --git a/include/linux/hazptr.h b/include/linux/hazptr.h
new file mode 100644
index 000000000000..f8e36d2bdc58
--- /dev/null
+++ b/include/linux/hazptr.h
@@ -0,0 +1,165 @@
+// SPDX-FileCopyrightText: 2024 Mathieu Desnoyers
+//
+// SPDX-License-Identifier: LGPL-2.1-or-later
+
+#ifndef _LINUX_HAZPTR_H
+#define _LINUX_HAZPTR_H
+
+/*
+ * HP: Hazard Pointers
+ *
+ * This API provides existence guarantees of objects through hazard
+ * pointers.
+ *
+ * It uses a fixed number of hazard pointer slots (nr_cpus) across the
+ * entire system for each hazard pointer domain.
+ *
+ * Its main benefit over RCU is that it allows fast reclaim of
+ * HP-protected pointers without needing to wait for a grace period.
+ *
+ * It also allows the hazard pointer scan to call a user-defined callback
+ * to retire a hazard pointer slot immediately if needed. This callback
+ * may, for instance, issue an IPI to the relevant CPU.
+ *
+ * References:
+ *
+ * [1]: M. M. Michael, "Hazard pointers: safe memory reclamation for
+ *      lock-free objects," in IEEE Transactions on Parallel and
+ *      Distributed Systems, vol. 15, no. 6, pp. 491-504, June 2004
+ */
+
+#include <linux/percpu.h>
+#include <linux/compiler.h>
+
+/*
+ * Hazard pointer slot.
+ */
+struct hazptr_slot {
+	void *addr;
+};
+
+struct hazptr_domain {
+	struct hazptr_slot __percpu *percpu_slots;
+};
+
+#define DECLARE_HAZPTR_DOMAIN(domain) \
+	extern struct hazptr_domain domain
+
+#define DEFINE_HAZPTR_DOMAIN(domain) \
+	static DEFINE_PER_CPU(struct hazptr_slot, __ ## domain ## _slots); \
+	struct hazptr_domain domain = { \
+		.percpu_slots = &__ ## domain ## _slots, \
+	}
+
+/*
+ * hazptr_scan: Scan hazard pointer domain for @addr.
+ *
+ * Scan hazard pointer domain for @addr.
+ * If @on_match_cb is NULL, wait to observe that each slot contains a value
+ * that differs from @addr.
+ * If @on_match_cb is non-NULL, invoke @on_match_cb for each slot containing
+ * @addr.
+ */
+void hazptr_scan(struct hazptr_domain *domain, void *addr,
+		 void (*on_match_cb)(int cpu, struct hazptr_slot *slot, void *addr));
+
+/*
+ * hazptr_try_protect: Try to protect with hazard pointer.
+ *
+ * Try to protect @addr with a hazard pointer slot. The object existence
+ * should be guaranteed by the caller. Expects to be called from preempt
+ * disable context.
+ *
+ * Returns true if protect succeeds, false otherwise.
+ * On success, if @_slot is not NULL, the protected hazptr slot is stored
+ * in @_slot.
+ */
+static inline
+bool hazptr_try_protect(struct hazptr_domain *hazptr_domain, void *addr, struct hazptr_slot **_slot)
+{
+	struct hazptr_slot __percpu *percpu_slots = hazptr_domain->percpu_slots;
+	struct hazptr_slot *slot;
+
+	if (!addr)
+		return false;
+	slot = this_cpu_ptr(percpu_slots);
+	/*
+	 * A single hazard pointer slot per CPU is available currently.
+	 * Other hazard pointer domains can eventually have a different
+	 * configuration.
+	 */
+	if (READ_ONCE(slot->addr))
+		return false;
+	WRITE_ONCE(slot->addr, addr);	/* Store B */
+	if (_slot)
+		*_slot = slot;
+	return true;
+}
+
+/*
+ * hazptr_load_try_protect: Load and try to protect with hazard pointer.
+ *
+ * Load @addr_p, and try to protect the loaded pointer with hazard
+ * pointers.
+ *
+ * Returns a protected address on success, NULL on failure. Expects to
+ * be called from preempt disable context.
+ *
+ * On success, if @_slot is not NULL, the protected hazptr slot is stored
+ * in @_slot.
+ */
+static inline
+void *__hazptr_load_try_protect(struct hazptr_domain *hazptr_domain,
+				void * const * addr_p, struct hazptr_slot **_slot)
+{
+	struct hazptr_slot *slot;
+	void *addr, *addr2;
+
+	/*
+	 * Load @addr_p to know which address should be protected.
+	 */
+	addr = READ_ONCE(*addr_p);
+retry:
+	/* Try to protect the address by storing it into a slot. */
+	if (!hazptr_try_protect(hazptr_domain, addr, &slot))
+		return NULL;
+	/* Memory ordering: Store B before Load A. */
+	smp_mb();
+	/*
+	 * Re-load @addr_p after storing it to the hazard pointer slot.
+	 */
+	addr2 = READ_ONCE(*addr_p);	/* Load A */
+	/*
+	 * If @addr_p content has changed since the first load,
+	 * retire the hazard pointer and try again.
+	 */
+	if (!ptr_eq(addr2, addr)) {
+		WRITE_ONCE(slot->addr, NULL);
+		if (!addr2)
+			return NULL;
+		addr = addr2;
+		goto retry;
+	}
+	if (_slot)
+		*_slot = slot;
+	/*
+	 * Use addr2 loaded from the second READ_ONCE() to preserve
+	 * address dependency ordering.
+	 */
+	return addr2;
+}
+
+/*
+ * Use a comma expression within typeof: __typeof__((void)**(addr_p), *(addr_p))
+ * to generate a compile error if addr_p is not a pointer to a pointer.
+ */
+#define hazptr_load_try_protect(domain, addr_p, slot_p)			\
+	((__typeof__((void)**(addr_p), *(addr_p))) __hazptr_load_try_protect(domain, (void * const *) (addr_p), slot_p))
+
+/* Retire the protected hazard pointer from @slot. */
+static inline
+void hazptr_retire(struct hazptr_slot *slot, void *addr)
+{
+	WARN_ON_ONCE(slot->addr != addr);
+	smp_store_release(&slot->addr, NULL);
+}
+
+#endif /* _LINUX_HAZPTR_H */
diff --git a/kernel/Makefile b/kernel/Makefile
index 3c13240dfc9f..bf6ed81d5983 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -7,7 +7,7 @@ obj-y     = fork.o exec_domain.o panic.o \
 	    cpu.o exit.o softirq.o resource.o \
 	    sysctl.o capability.o ptrace.o user.o \
 	    signal.o sys.o umh.o workqueue.o pid.o task_work.o \
-	    extable.o params.o \
+	    extable.o params.o hazptr.o \
 	    kthread.o sys_ni.o nsproxy.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
 	    async.o range.o smpboot.o ucount.o regset.o ksyms_common.o
diff --git a/kernel/hazptr.c b/kernel/hazptr.c
new file mode 100644
index 000000000000..3f9f14afbf1d
--- /dev/null
+++ b/kernel/hazptr.c
@@ -0,0 +1,51 @@
+// SPDX-FileCopyrightText: 2024 Mathieu Desnoyers
+//
+// SPDX-License-Identifier: LGPL-2.1-or-later
+
+/*
+ * hazptr: Hazard Pointers
+ */
+
+#include <linux/hazptr.h>
+#include <linux/percpu.h>
+
+/*
+ * hazptr_scan: Scan hazard pointer domain for @addr.
+ *
+ * Scan hazard pointer domain for @addr.
+ * If @on_match_cb is non-NULL, invoke @on_match_cb for each slot
+ * containing @addr.
+ * Wait to observe that each slot contains a value that differs from
+ * @addr before returning.
+ */
+void hazptr_scan(struct hazptr_domain *hazptr_domain, void *addr,
+		 void (*on_match_cb)(int cpu, struct hazptr_slot *slot, void *addr))
+{
+	struct hazptr_slot __percpu *percpu_slots = hazptr_domain->percpu_slots;
+	int cpu;
+
+	/* Should only be called from preemptible context. */
+	lockdep_assert_preemption_enabled();
+
+	/*
+	 * Store A precedes hazptr_scan(): it unpublishes addr (sets it to
+	 * NULL or to a different value), and thus hides it from hazard
+	 * pointer readers.
+	 */
+	if (!addr)
+		return;
+	/* Memory ordering: Store A before Load B. */
+	smp_mb();
+	/* Scan all CPUs slots. */
+	for_each_possible_cpu(cpu) {
+		struct hazptr_slot *slot = per_cpu_ptr(percpu_slots, cpu);
+
+		if (on_match_cb) {
+			if (smp_load_acquire(&slot->addr) == addr)	/* Load B */
+				on_match_cb(cpu, slot, addr);
+		} else {
+			/* Busy-wait if node is found.
+			 */
+			smp_cond_load_acquire(&slot->addr, VAL != addr);	/* Load B */
+		}
+	}
+}
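[Editorial note: the following sketch is not part of the patch. It is a
minimal, illustrative pairing of a reader and an updater using this API;
the type foo, the global global_foo, the domain my_domain, and both
functions are invented for the example, and the reader runs with
preemption disabled as the API requires.]

	/* Assumes: DEFINE_HAZPTR_DOMAIN(my_domain); */
	struct foo {
		int val;
	};

	static struct foo *global_foo;

	/* Reader: protect, use, then retire the hazard pointer slot. */
	static int reader(void)
	{
		struct hazptr_slot *slot;
		struct foo *p;
		int val = -1;

		preempt_disable();
		p = hazptr_load_try_protect(&my_domain, &global_foo, &slot);
		if (p) {
			val = p->val;	/* Existence guaranteed while slot holds p. */
			hazptr_retire(slot, p);
		}
		preempt_enable();
		return val;
	}

	/* Updater: unpublish, scan for readers, then free. */
	static void updater(struct foo *newp)
	{
		struct foo *oldp = global_foo;

		WRITE_ONCE(global_foo, newp);		/* Store A: unpublish oldp. */
		hazptr_scan(&my_domain, oldp, NULL);	/* Wait for readers to retire oldp. */
		kfree(oldp);
	}

Unlike an RCU-based scheme, the updater here can free oldp as soon as the
scan observes no slot holding it, without waiting for a grace period.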
From patchwork Tue Oct 8 13:50:34 2024
X-Patchwork-Submitter: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
X-Patchwork-Id: 13826524
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Boqun Feng
Subject: [RFC PATCH v3 4/4] sched+mm: Use hazard pointers to track lazy active mm existence
Date: Tue, 8 Oct 2024 09:50:34 -0400
Message-Id: <20241008135034.1982519-5-mathieu.desnoyers@efficios.com>
In-Reply-To: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
References: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0
Replace lazy active mm existence tracking with hazard pointers. This
removes the following implementations and their associated config
options:

- MMU_LAZY_TLB_REFCOUNT
- MMU_LAZY_TLB_SHOOTDOWN
- This removes the call_rcu delayed mm drop for RT.

It leverages the fact that each CPU only ever has at most one lazy
active mm. This makes it a very good fit for a hazard pointer domain
implemented with one hazard pointer slot per CPU.

* Benchmarks:

will-it-scale context_switch1_threads

nr threads (-t)      speedup
       1              -0.2%
       2              +0.4%
       3              +0.2%
       6              +0.6%
      12              +0.8%
      24                +3%
      48               +12%
      96               +21%
     192               +28%
     384                +4%
     768              -0.6%

Methodology: Each test is the average of 20 iterations. Use median
result of 3 test runs.

Test hardware:

CPU(s):                   384
On-line CPU(s) list:      0-383
Vendor ID:                AuthenticAMD
Model name:               AMD EPYC 9654 96-Core Processor
CPU family:               25
Model:                    17
Thread(s) per core:       2
Core(s) per socket:       96
Socket(s):                2
Stepping:                 1
Frequency boost:          enabled
CPU(s) scaling MHz:       100%
CPU max MHz:              3709.0000
CPU min MHz:              400.0000
BogoMIPS:                 4799.75
Memory:                   768 GB RAM

Signed-off-by: Mathieu Desnoyers
Cc: Nicholas Piggin
Cc: Michael Ellerman
Cc: Greg Kroah-Hartman
Cc: Sebastian Andrzej Siewior
Cc: "Paul E. McKenney"
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Alan Stern
Cc: John Stultz
Cc: Neeraj Upadhyay
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Frederic Weisbecker
Cc: Joel Fernandes
Cc: Josh Triplett
Cc: Uladzislau Rezki
Cc: Steven Rostedt
Cc: Lai Jiangshan
Cc: Zqiang
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: maged.michael@gmail.com
Cc: Mateusz Guzik
Cc: Jonas Oberhauser
Cc: rcu@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: lkmm@lists.linux.dev
---
 Documentation/mm/active_mm.rst       |  9 ++--
 arch/Kconfig                         | 32 -------------
 arch/powerpc/Kconfig                 |  1 -
 arch/powerpc/mm/book3s64/radix_tlb.c | 23 +---------
 include/linux/mm_types.h             |  3 --
 include/linux/sched/mm.h             | 68 ++++++++++------------------
 kernel/exit.c                        |  4 +-
 kernel/fork.c                        | 47 +++++--------------
 kernel/sched/sched.h                 |  8 +---
 lib/Kconfig.debug                    | 10 ----
 10 files changed, 45 insertions(+), 160 deletions(-)

diff --git a/Documentation/mm/active_mm.rst b/Documentation/mm/active_mm.rst
index d096fc091e23..c225cac49c30 100644
--- a/Documentation/mm/active_mm.rst
+++ b/Documentation/mm/active_mm.rst
@@ -2,11 +2,10 @@
 Active MM
 =========
 
-Note, the mm_count refcount may no longer include the "lazy" users
-(running tasks with ->active_mm == mm && ->mm == NULL) on kernels
-with CONFIG_MMU_LAZY_TLB_REFCOUNT=n. Taking and releasing these lazy
-references must be done with mmgrab_lazy_tlb() and mmdrop_lazy_tlb()
-helpers, which abstract this config option.
+Note, the mm_count refcount no longer includes the "lazy" users (running
+tasks with ->active_mm == mm && ->mm == NULL). Taking and releasing these
+lazy references must be done with mmgrab_lazy_tlb() and mmdrop_lazy_tlb()
+helpers, which are implemented with hazard pointers.
 
 ::
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 975dd22a2dbd..d4261935f8dc 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -475,38 +475,6 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 	  irqs disabled over activate_mm. Architectures that do IPI based
 	  TLB shootdowns should enable this.
 
-# Use normal mm refcounting for MMU_LAZY_TLB kernel thread references.
-# MMU_LAZY_TLB_REFCOUNT=n can improve the scalability of context switching
-# to/from kernel threads when the same mm is running on a lot of CPUs (a large
-# multi-threaded application), by reducing contention on the mm refcount.
-#
-# This can be disabled if the architecture ensures no CPUs are using an mm as a
-# "lazy tlb" beyond its final refcount (i.e., by the time __mmdrop frees the mm
-# or its kernel page tables). This could be arranged by arch_exit_mmap(), or
-# final exit(2) TLB flush, for example.
-#
-# To implement this, an arch *must*:
-# Ensure the _lazy_tlb variants of mmgrab/mmdrop are used when manipulating
-# the lazy tlb reference of a kthread's ->active_mm (non-arch code has been
-# converted already).
-config MMU_LAZY_TLB_REFCOUNT
-	def_bool y
-	depends on !MMU_LAZY_TLB_SHOOTDOWN
-
-# This option allows MMU_LAZY_TLB_REFCOUNT=n. It ensures no CPUs are using an
-# mm as a lazy tlb beyond its last reference count, by shooting down these
-# users before the mm is deallocated. __mmdrop() first IPIs all CPUs that may
-# be using the mm as a lazy tlb, so that they may switch themselves to using
-# init_mm for their active mm. mm_cpumask(mm) is used to determine which CPUs
-# may be using mm as a lazy tlb mm.
-#
-# To implement this, an arch *must*:
-# - At the time of the final mmdrop of the mm, ensure mm_cpumask(mm) contains
-#   at least all possible CPUs in which the mm is lazy.
-# - It must meet the requirements for MMU_LAZY_TLB_REFCOUNT=n (see above).
-config MMU_LAZY_TLB_SHOOTDOWN
-	bool
-
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d7b09b064a8a..b1e25e75baab 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -291,7 +291,6 @@ config PPC
 	select MMU_GATHER_PAGE_SIZE
 	select MMU_GATHER_RCU_TABLE_FREE
 	select MMU_GATHER_MERGE_VMAS
-	select MMU_LAZY_TLB_SHOOTDOWN if PPC_BOOK3S_64
 	select MODULES_USE_ELF_RELA
 	select NEED_DMA_MAP_STATE if PPC64 || NOT_COHERENT_CACHE
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK if PPC64
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 9e1f6558d026..ff0d4f28cf52 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -1197,28 +1197,7 @@ void radix__tlb_flush(struct mmu_gather *tlb)
 	 * See the comment for radix in arch_exit_mmap().
 	 */
 	if (tlb->fullmm) {
-		if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
-			/*
-			 * Shootdown based lazy tlb mm refcounting means we
-			 * have to IPI everyone in the mm_cpumask anyway soon
-			 * when the mm goes away, so might as well do it as
-			 * part of the final flush now.
-			 *
-			 * If lazy shootdown was improved to reduce IPIs (e.g.,
-			 * by batching), then it may end up being better to use
-			 * tlbies here instead.
-			 */
-			preempt_disable();
-
-			smp_mb(); /* see radix__flush_tlb_mm */
-			exit_flush_lazy_tlbs(mm);
-			__flush_all_mm(mm, true);
-
-			preempt_enable();
-		} else {
-			__flush_all_mm(mm, true);
-		}
-
+		__flush_all_mm(mm, true);
 	} else if ( (psize = radix_get_mmu_psize(page_size)) == -1) {
 		if (!tlb->freed_tables)
 			radix__flush_tlb_mm(mm);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 485424979254..db5f13554485 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -975,9 +975,6 @@ struct mm_struct {
 		atomic_t tlb_flush_batched;
 #endif
 		struct uprobes_state uprobes_state;
-#ifdef CONFIG_PREEMPT_RT
-		struct rcu_head delayed_drop;
-#endif
 #ifdef CONFIG_HUGETLB_PAGE
 		atomic_long_t hugetlb_usage;
 #endif
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 91546493c43d..7b2f0a432f6e 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -9,6 +9,10 @@
 #include
 #include
 #include
+#include <linux/hazptr.h>
+
+/* Sched lazy mm hazard pointer domain. */
+DECLARE_HAZPTR_DOMAIN(hazptr_domain_sched_lazy_mm);
 
 /*
  * Routines for handling mm_structs
@@ -55,61 +59,37 @@ static inline void mmdrop(struct mm_struct *mm)
 		__mmdrop(mm);
 }
 
-#ifdef CONFIG_PREEMPT_RT
-/*
- * RCU callback for delayed mm drop. Not strictly RCU, but call_rcu() is
- * by far the least expensive way to do that.
- */
-static inline void __mmdrop_delayed(struct rcu_head *rhp)
-{
-	struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);
-
-	__mmdrop(mm);
-}
-
-/*
- * Invoked from finish_task_switch(). Delegates the heavy lifting on RT
- * kernels via RCU.
- */
-static inline void mmdrop_sched(struct mm_struct *mm)
-{
-	/* Provides a full memory barrier. See mmdrop() */
-	if (atomic_dec_and_test(&mm->mm_count))
-		call_rcu(&mm->delayed_drop, __mmdrop_delayed);
-}
-#else
-static inline void mmdrop_sched(struct mm_struct *mm)
-{
-	mmdrop(mm);
-}
-#endif
-
 /* Helpers for lazy TLB mm refcounting */
 static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
 {
-	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
-		mmgrab(mm);
+	/*
+	 * mmgrab_lazy_tlb must provide a full memory barrier, see the
+	 * membarrier comment in finish_task_switch() which relies on this.
+	 */
+	smp_mb();
+
+	/*
+	 * The caller guarantees existence of mm. Post a hazard pointer
+	 * to chain this existence guarantee to the hazard pointer.
+	 * There is only a single lazy mm per CPU at any time.
+	 */
+	WARN_ON_ONCE(!hazptr_try_protect(&hazptr_domain_sched_lazy_mm, mm, NULL));
 }
 
 static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
 {
-	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
-		mmdrop(mm);
-	} else {
-		/*
-		 * mmdrop_lazy_tlb must provide a full memory barrier, see the
-		 * membarrier comment finish_task_switch which relies on this.
-		 */
-		smp_mb();
-	}
+	/*
+	 * mmdrop_lazy_tlb must provide a full memory barrier, see the
+	 * membarrier comment in finish_task_switch() which relies on this.
+	 */
+	smp_mb();
+	this_cpu_write(hazptr_domain_sched_lazy_mm.percpu_slots->addr, NULL);
 }
 
 static inline void mmdrop_lazy_tlb_sched(struct mm_struct *mm)
 {
-	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
-		mmdrop_sched(mm);
-	else
-		smp_mb(); /* see mmdrop_lazy_tlb() above */
+	smp_mb(); /* see mmdrop_lazy_tlb() above */
+	this_cpu_write(hazptr_domain_sched_lazy_mm.percpu_slots->addr, NULL);
 }
 
 /**
diff --git a/kernel/exit.c b/kernel/exit.c
index 7430852a8571..cb4ace06c0f0 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -545,8 +545,6 @@ static void exit_mm(void)
 	if (!mm)
 		return;
 	mmap_read_lock(mm);
-	mmgrab_lazy_tlb(mm);
-	BUG_ON(mm != current->active_mm);
 	/* more a memory barrier than a real lock */
 	task_lock(current);
 	/*
@@ -561,6 +559,8 @@ static void exit_mm(void)
 	 */
 	smp_mb__after_spinlock();
 	local_irq_disable();
+	mmgrab_lazy_tlb(mm);
+	BUG_ON(mm != current->active_mm);
 	current->mm = NULL;
 	membarrier_update_current_mm(NULL);
 	enter_lazy_tlb(mm, current);
diff --git a/kernel/fork.c b/kernel/fork.c
index cc760491f201..0a2e2ab1680a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -149,6 +149,9 @@ DEFINE_PER_CPU(unsigned long, process_counts) = 0;
 
 __cacheline_aligned DEFINE_RWLOCK(tasklist_lock);  /* outer */
 
+/* Sched lazy mm hazard pointer domain. */
+DEFINE_HAZPTR_DOMAIN(hazptr_domain_sched_lazy_mm);
+
 #ifdef CONFIG_PROVE_RCU
 int lockdep_tasklist_lock_is_held(void)
 {
@@ -855,50 +858,24 @@ static void do_shoot_lazy_tlb(void *arg)
 		WARN_ON_ONCE(current->mm);
 		current->active_mm = &init_mm;
 		switch_mm(mm, &init_mm, current);
+		this_cpu_write(hazptr_domain_sched_lazy_mm.percpu_slots->addr, NULL);
 	}
 }
 
-static void cleanup_lazy_tlbs(struct mm_struct *mm)
+static void remove_lazy_mm_hp(int cpu, struct hazptr_slot *slot, void *addr)
 {
-	if (!IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
-		/*
-		 * In this case, lazy tlb mms are refounted and would not reach
-		 * __mmdrop until all CPUs have switched away and mmdrop()ed.
-		 */
-		return;
-	}
+	smp_call_function_single(cpu, do_shoot_lazy_tlb, addr, 1);
+	smp_call_function_single(cpu, do_check_lazy_tlb, addr, 1);
+}
 
+static void cleanup_lazy_tlbs(struct mm_struct *mm)
+{
 	/*
-	 * Lazy mm shootdown does not refcount "lazy tlb mm" usage, rather it
-	 * requires lazy mm users to switch to another mm when the refcount
+	 * Require lazy mm users to switch to another mm when the refcount
 	 * drops to zero, before the mm is freed. This requires IPIs here to
 	 * switch kernel threads to init_mm.
-	 *
-	 * archs that use IPIs to flush TLBs can piggy-back that lazy tlb mm
-	 * switch with the final userspace teardown TLB flush which leaves the
-	 * mm lazy on this CPU but no others, reducing the need for additional
-	 * IPIs here. There are cases where a final IPI is still required here,
-	 * such as the final mmdrop being performed on a different CPU than the
-	 * one exiting, or kernel threads using the mm when userspace exits.
-	 *
-	 * IPI overheads have not found to be expensive, but they could be
-	 * reduced in a number of possible ways, for example (roughly
-	 * increasing order of complexity):
-	 * - The last lazy reference created by exit_mm() could instead switch
-	 *   to init_mm, however it's probable this will run on the same CPU
-	 *   immediately afterwards, so this may not reduce IPIs much.
-	 * - A batch of mms requiring IPIs could be gathered and freed at once.
-	 * - CPUs store active_mm where it can be remotely checked without a
-	 *   lock, to filter out false-positives in the cpumask.
-	 * - After mm_users or mm_count reaches zero, switching away from the
-	 *   mm could clear mm_cpumask to reduce some IPIs, perhaps together
-	 *   with some batching or delaying of the final IPIs.
-	 * - A delayed freeing and RCU-like quiescing sequence based on mm
-	 *   switching to avoid IPIs completely.
 	 */
-	on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
-	if (IS_ENABLED(CONFIG_DEBUG_VM_SHOOT_LAZIES))
-		on_each_cpu(do_check_lazy_tlb, (void *)mm, 1);
+	hazptr_scan(&hazptr_domain_sched_lazy_mm, mm, remove_lazy_mm_hp);
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4c36cc680361..d883c2aa3518 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3527,12 +3527,8 @@ static inline void switch_mm_cid(struct rq *rq,
 	if (!next->mm) {                                // to kernel
 		/*
 		 * user -> kernel transition does not guarantee a barrier, but
-		 * we can use the fact that it performs an atomic operation in
-		 * mmgrab().
-		 */
-		if (prev->mm)                           // from user
-			smp_mb__after_mmgrab();
-		/*
+		 * we can use the fact that mmgrab() has a full barrier.
+		 *
 		 * kernel -> kernel transition does not change rq->curr->mm
 		 * state. It stays NULL.
 		 */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a30c03a66172..1cb9dab361c9 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -803,16 +803,6 @@ config DEBUG_VM
 
 	  If unsure, say N.
 
-config DEBUG_VM_SHOOT_LAZIES
-	bool "Debug MMU_LAZY_TLB_SHOOTDOWN implementation"
-	depends on DEBUG_VM
-	depends on MMU_LAZY_TLB_SHOOTDOWN
-	help
-	  Enable additional IPIs that ensure lazy tlb mm references are removed
-	  before the mm is freed.
-
-	  If unsure, say N.
-
 config DEBUG_VM_MAPLE_TREE
 	bool "Debug VM maple trees"
 	depends on DEBUG_VM
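[Editorial note: the following sketch is not part of the patch. It is a
simplified, illustrative stand-in for the relevant steps of the context
switch and reclaim paths, showing how the helpers changed by this patch
pair up; the function lazy_mm_switch_sketch is invented for the example.]

	static void lazy_mm_switch_sketch(struct task_struct *prev,
					  struct task_struct *next)
	{
		if (!next->mm) {
			/*
			 * Entering a kernel thread: borrow prev's mm as the
			 * lazy active mm. Instead of mmgrab()/mmdrop() on
			 * mm_count, publish the mm in this CPU's hazard
			 * pointer slot.
			 */
			next->active_mm = prev->active_mm;
			mmgrab_lazy_tlb(next->active_mm);
		} else if (!prev->mm) {
			/*
			 * Leaving a kernel thread for a user task: the lazy
			 * reference is no longer needed; clear this CPU's
			 * slot.
			 */
			mmdrop_lazy_tlb(prev->active_mm);
		}
		/*
		 * Reclaim side: when mm_count drops to zero, __mmdrop() ->
		 * cleanup_lazy_tlbs() -> hazptr_scan() IPIs any CPU whose
		 * slot still holds this mm (remove_lazy_mm_hp), so the mm
		 * can be freed without waiting for a grace period.
		 */
	}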