From patchwork Mon Dec 9 02:41:11 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 13898755
From: "Masami Hiramatsu (Google)"
To: Steven Rostedt , Peter Zijlstra
Miller" , Mathieu Desnoyers , Oleg Nesterov , Tzvetomir Stoyanov , Naveen N Rao , Josh Poimboeuf , Jason Baron , Ard Biesheuvel , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH 1/5] jump_label: Define guard() for jump_label_lock Date: Mon, 9 Dec 2024 11:41:11 +0900 Message-ID: <173371207108.480397.12818384744149153972.stgit@devnote2> X-Mailer: git-send-email 2.43.0 In-Reply-To: <173371205755.480397.7893311565254712194.stgit@devnote2> References: <173371205755.480397.7893311565254712194.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Masami Hiramatsu (Google) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/jump_label.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h index f5a2727ca4a9..fdb79dd1ebd8 100644 --- a/include/linux/jump_label.h +++ b/include/linux/jump_label.h @@ -75,6 +75,7 @@ #include #include +#include extern bool static_key_initialized; @@ -347,6 +348,8 @@ static inline void static_key_disable(struct static_key *key) #endif /* CONFIG_JUMP_LABEL */ +DEFINE_LOCK_GUARD_0(jump_label_lock, jump_label_lock(), jump_label_unlock()) + #define STATIC_KEY_INIT STATIC_KEY_INIT_FALSE #define jump_label_enabled static_key_enabled From patchwork Mon Dec 9 02:41:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 13898756 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8A1BA433BC; Mon, 9 Dec 2024 02:41:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712093; cv=none; b=lfu7DaJLhUUV+tUWzo0ImzsJQhNbd+nBmwQvCl2choJyVwcWf4t94qpC21+xIYnkt4z/ADuuIUZnJwqTf1QVXTwWPWpRIN2wO1CUFUx5T50Kaiak6y/K8A17qbvq0lfuX73JZvxZnXw2b11ZDNUt0n4A85dR/9JITMJJYoiXgG4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712093; c=relaxed/simple; bh=rNRwR/igHBGhOK9H26aXWYQ+tnb/pRgzzGu4+MXqWic=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=AK66jZqhoq2ILKPMKnkKtDqrJHu3AiDonBaThrrFvvNteOady1KcdR/Hps1Ldsm04ymOlfYqOBLt76uafVUrTSSlwK/TPdZ0Sv01GZDR+j3u24J+8dWSpvQRB9KmTEQK49mdtfHe5Alte5b1l0tdmdoGJJJr9TTSx+6ppRbE6fE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=MSBI3Q/d; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="MSBI3Q/d" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C0FB5C4CED2; Mon, 9 Dec 2024 02:41:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1733712093; bh=rNRwR/igHBGhOK9H26aXWYQ+tnb/pRgzzGu4+MXqWic=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MSBI3Q/d3nV7Gk/KwXCluNFvalEKcWApu6FID7y3FQR+47kcKBkia2TN0bYcfTplk v6RPwUNhBg+fYxZa/7gyBroMW1cWoEMk6nhUtA4Jq3tVnVbg+sCOcsoboDQL1iPdT9 hiqf8v0bNkjlwfWkZu0uYZDz77JTMnoQ5MAKi4dVLSF17npqNZr7bdR9x+ThW2E5cz 
bGOf3POkZohRcibSwbjTxH1uQzmPNhK86NU4bP24U6JesBI5s3k4IwIfuoyJuPp9rq Fcb/Ulp/7Q5Zrr6GvBYGPlR/pxoVJPEuqOTrTwis63ejA7b58ZC8DUmMV65jMF6u4+ lsvY7ltDdwBfA== From: "Masami Hiramatsu (Google)" To: Steven Rostedt , Peter Zijlstra Cc: Anil S Keshavamurthy , Masami Hiramatsu , "David S . Miller" , Mathieu Desnoyers , Oleg Nesterov , Tzvetomir Stoyanov , Naveen N Rao , Josh Poimboeuf , Jason Baron , Ard Biesheuvel , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH 2/5] kprobes: Use guard() for external locks Date: Mon, 9 Dec 2024 11:41:26 +0900 Message-ID: <173371208663.480397.7535769878667655223.stgit@devnote2> X-Mailer: git-send-email 2.43.0 In-Reply-To: <173371205755.480397.7893311565254712194.stgit@devnote2> References: <173371205755.480397.7893311565254712194.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Masami Hiramatsu (Google) Use guard() for text_mutex, cpu_read_lock, and jump_label_lock in the kprobes. Signed-off-by: Masami Hiramatsu (Google) --- kernel/kprobes.c | 209 +++++++++++++++++++++++------------------------------- 1 file changed, 90 insertions(+), 119 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 62b5b08d809d..004eb8326520 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -596,41 +596,38 @@ static void kick_kprobe_optimizer(void) /* Kprobe jump optimizer */ static void kprobe_optimizer(struct work_struct *work) { - mutex_lock(&kprobe_mutex); - cpus_read_lock(); - mutex_lock(&text_mutex); + guard(mutex)(&kprobe_mutex); - /* - * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed) - * kprobes before waiting for quiesence period. - */ - do_unoptimize_kprobes(); + scoped_guard(cpus_read_lock) { + guard(mutex)(&text_mutex); - /* - * Step 2: Wait for quiesence period to ensure all potentially - * preempted tasks to have normally scheduled. Because optprobe - * may modify multiple instructions, there is a chance that Nth - * instruction is preempted. In that case, such tasks can return - * to 2nd-Nth byte of jump instruction. This wait is for avoiding it. - * Note that on non-preemptive kernel, this is transparently converted - * to synchronoze_sched() to wait for all interrupts to have completed. - */ - synchronize_rcu_tasks(); + /* + * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed) + * kprobes before waiting for quiesence period. + */ + do_unoptimize_kprobes(); - /* Step 3: Optimize kprobes after quiesence period */ - do_optimize_kprobes(); + /* + * Step 2: Wait for quiesence period to ensure all potentially + * preempted tasks to have normally scheduled. Because optprobe + * may modify multiple instructions, there is a chance that Nth + * instruction is preempted. In that case, such tasks can return + * to 2nd-Nth byte of jump instruction. This wait is for avoiding it. + * Note that on non-preemptive kernel, this is transparently converted + * to synchronoze_sched() to wait for all interrupts to have completed. 
+ */ + synchronize_rcu_tasks(); - /* Step 4: Free cleaned kprobes after quiesence period */ - do_free_cleaned_kprobes(); + /* Step 3: Optimize kprobes after quiesence period */ + do_optimize_kprobes(); - mutex_unlock(&text_mutex); - cpus_read_unlock(); + /* Step 4: Free cleaned kprobes after quiesence period */ + do_free_cleaned_kprobes(); + } /* Step 5: Kick optimizer again if needed */ if (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list)) kick_kprobe_optimizer(); - - mutex_unlock(&kprobe_mutex); } static void wait_for_kprobe_optimizer_locked(void) @@ -853,29 +850,24 @@ static void try_to_optimize_kprobe(struct kprobe *p) return; /* For preparing optimization, jump_label_text_reserved() is called. */ - cpus_read_lock(); - jump_label_lock(); - mutex_lock(&text_mutex); + guard(cpus_read_lock)(); + guard(jump_label_lock)(); + guard(mutex)(&text_mutex); ap = alloc_aggr_kprobe(p); if (!ap) - goto out; + return; op = container_of(ap, struct optimized_kprobe, kp); if (!arch_prepared_optinsn(&op->optinsn)) { /* If failed to setup optimizing, fallback to kprobe. */ arch_remove_optimized_kprobe(op); kfree(op); - goto out; + return; } init_aggr_kprobe(ap, p); optimize_kprobe(ap); /* This just kicks optimizer thread. */ - -out: - mutex_unlock(&text_mutex); - jump_label_unlock(); - cpus_read_unlock(); } static void optimize_all_kprobes(void) @@ -1158,12 +1150,9 @@ static int arm_kprobe(struct kprobe *kp) if (unlikely(kprobe_ftrace(kp))) return arm_kprobe_ftrace(kp); - cpus_read_lock(); - mutex_lock(&text_mutex); + guard(cpus_read_lock)(); + guard(mutex)(&text_mutex); __arm_kprobe(kp); - mutex_unlock(&text_mutex); - cpus_read_unlock(); - return 0; } @@ -1172,12 +1161,9 @@ static int disarm_kprobe(struct kprobe *kp, bool reopt) if (unlikely(kprobe_ftrace(kp))) return disarm_kprobe_ftrace(kp); - cpus_read_lock(); - mutex_lock(&text_mutex); + guard(cpus_read_lock)(); + guard(mutex)(&text_mutex); __disarm_kprobe(kp, reopt); - mutex_unlock(&text_mutex); - cpus_read_unlock(); - return 0; } @@ -1294,62 +1280,55 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p) int ret = 0; struct kprobe *ap = orig_p; - cpus_read_lock(); - - /* For preparing optimization, jump_label_text_reserved() is called */ - jump_label_lock(); - mutex_lock(&text_mutex); - - if (!kprobe_aggrprobe(orig_p)) { - /* If 'orig_p' is not an 'aggr_kprobe', create new one. */ - ap = alloc_aggr_kprobe(orig_p); - if (!ap) { - ret = -ENOMEM; - goto out; + scoped_guard(cpus_read_lock) { + /* For preparing optimization, jump_label_text_reserved() is called */ + guard(jump_label_lock)(); + guard(mutex)(&text_mutex); + + if (!kprobe_aggrprobe(orig_p)) { + /* If 'orig_p' is not an 'aggr_kprobe', create new one. */ + ap = alloc_aggr_kprobe(orig_p); + if (!ap) + return -ENOMEM; + init_aggr_kprobe(ap, orig_p); + } else if (kprobe_unused(ap)) { + /* This probe is going to die. Rescue it */ + ret = reuse_unused_kprobe(ap); + if (ret) + return ret; } - init_aggr_kprobe(ap, orig_p); - } else if (kprobe_unused(ap)) { - /* This probe is going to die. Rescue it */ - ret = reuse_unused_kprobe(ap); - if (ret) - goto out; - } - if (kprobe_gone(ap)) { - /* - * Attempting to insert new probe at the same location that - * had a probe in the module vaddr area which already - * freed. So, the instruction slot has already been - * released. We need a new slot for the new probe. - */ - ret = arch_prepare_kprobe(ap); - if (ret) + if (kprobe_gone(ap)) { /* - * Even if fail to allocate new slot, don't need to - * free the 'ap'. 
It will be used next time, or - * freed by unregister_kprobe(). + * Attempting to insert new probe at the same location that + * had a probe in the module vaddr area which already + * freed. So, the instruction slot has already been + * released. We need a new slot for the new probe. */ - goto out; + ret = arch_prepare_kprobe(ap); + if (ret) + /* + * Even if fail to allocate new slot, don't need to + * free the 'ap'. It will be used next time, or + * freed by unregister_kprobe(). + */ + return ret; - /* Prepare optimized instructions if possible. */ - prepare_optimized_kprobe(ap); + /* Prepare optimized instructions if possible. */ + prepare_optimized_kprobe(ap); - /* - * Clear gone flag to prevent allocating new slot again, and - * set disabled flag because it is not armed yet. - */ - ap->flags = (ap->flags & ~KPROBE_FLAG_GONE) - | KPROBE_FLAG_DISABLED; - } - - /* Copy the insn slot of 'p' to 'ap'. */ - copy_kprobe(ap, p); - ret = add_new_kprobe(ap, p); + /* + * Clear gone flag to prevent allocating new slot again, and + * set disabled flag because it is not armed yet. + */ + ap->flags = (ap->flags & ~KPROBE_FLAG_GONE) + | KPROBE_FLAG_DISABLED; + } -out: - mutex_unlock(&text_mutex); - jump_label_unlock(); - cpus_read_unlock(); + /* Copy the insn slot of 'p' to 'ap'. */ + copy_kprobe(ap, p); + ret = add_new_kprobe(ap, p); + } if (ret == 0 && kprobe_disabled(ap) && !kprobe_disabled(p)) { ap->flags &= ~KPROBE_FLAG_DISABLED; @@ -1559,26 +1538,23 @@ static int check_kprobe_address_safe(struct kprobe *p, ret = check_ftrace_location(p); if (ret) return ret; - jump_label_lock(); + + guard(jump_label_lock)(); /* Ensure the address is in a text area, and find a module if exists. */ *probed_mod = NULL; if (!core_kernel_text((unsigned long) p->addr)) { guard(preempt)(); *probed_mod = __module_text_address((unsigned long) p->addr); - if (!(*probed_mod)) { - ret = -EINVAL; - goto out; - } + if (!(*probed_mod)) + return -EINVAL; /* * We must hold a refcount of the probed module while updating * its code to prohibit unexpected unloading. */ - if (unlikely(!try_module_get(*probed_mod))) { - ret = -ENOENT; - goto out; - } + if (unlikely(!try_module_get(*probed_mod))) + return -ENOENT; } /* Ensure it is not in reserved area. */ if (in_gate_area_no_mm((unsigned long) p->addr) || @@ -1588,8 +1564,7 @@ static int check_kprobe_address_safe(struct kprobe *p, find_bug((unsigned long)p->addr) || is_cfi_preamble_symbol((unsigned long)p->addr)) { module_put(*probed_mod); - ret = -EINVAL; - goto out; + return -EINVAL; } /* Get module refcount and reject __init functions for loaded modules. */ @@ -1601,14 +1576,11 @@ static int check_kprobe_address_safe(struct kprobe *p, if (within_module_init((unsigned long)p->addr, *probed_mod) && !module_is_coming(*probed_mod)) { module_put(*probed_mod); - ret = -ENOENT; + return -ENOENT; } } -out: - jump_label_unlock(); - - return ret; + return 0; } static int __register_kprobe(struct kprobe *p) @@ -1623,14 +1595,13 @@ static int __register_kprobe(struct kprobe *p) /* Since this may unoptimize 'old_p', locking 'text_mutex'. 
*/ return register_aggr_kprobe(old_p, p); - cpus_read_lock(); - /* Prevent text modification */ - mutex_lock(&text_mutex); - ret = prepare_kprobe(p); - mutex_unlock(&text_mutex); - cpus_read_unlock(); - if (ret) - return ret; + scoped_guard(cpus_read_lock) { + /* Prevent text modification */ + guard(mutex)(&text_mutex); + ret = prepare_kprobe(p); + if (ret) + return ret; + } INIT_HLIST_NODE(&p->hlist); hlist_add_head_rcu(&p->hlist, From patchwork Mon Dec 9 02:41:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 13898757 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8E80D4594D; Mon, 9 Dec 2024 02:41:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712106; cv=none; b=cGonDhFiHuX6gNngjmabeRByYqeZWtpCPwZD3KDneg4bcSeLKj6titc0Q2GzWcvS6YcHT0vdDcOCjFy+PMgI+rEPtxvCSqrImQZEefSG9+titFqmmN/7nWYhswijaB3r2JV3agj0ikBJzinCKkoCyinDySLnOtAWjkf1go73FDo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712106; c=relaxed/simple; bh=YI34RXWyW3r1EI6GTGvDFujx5+lI9n8gUg1Yl+QM0As=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=T8rxDXW0ZpqtH4jMlK4OsO6IKoZ7XePa4RwpdM2phuMK2dTAEJ1cyZFJ2nu6ms0iC/YL0VmeYSyiQbATjPkvl2WXAP9W+lMpKZy5RWS0+KAwGQAnnn9LCIWVspN/vwm1aSWr5Z/gVK8YUwbnN1lwRNQvr+7Za9Unufh1Bjua+qI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=qFOzsvq0; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="qFOzsvq0" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 11A92C4CED2; Mon, 9 Dec 2024 02:41:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1733712106; bh=YI34RXWyW3r1EI6GTGvDFujx5+lI9n8gUg1Yl+QM0As=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qFOzsvq0FNuCaLc0pm4iTMQHmGp9Y5xR8m5Nh2vA18TFfsVgPKs7idFULlqSbJobB 9WAnm1hZj8DW87WNG0dgkbLwrbzCT0u8d/RXTdzw6MB0T7Q7d2VEKs3eYxAlnApn0K ABoewCN4hOTP7f+OvPwtRrilCDJXDLeLthJVhvAO0SUT+DHteuC9svj40lLZLlG7ag 53YOyFdWr1MzxGR7WK6BtI/kZ7vioVE8lrdd/v79ciePQq4lJewtHTkqfYB2UjR7bP u9PKxkbviKKecB6D1Botrj2C1Oq+22/o7y+m/EfSKuuZdONzzZ2MEo9o0X/IpPiHml aGXy7PGmrXTYw== From: "Masami Hiramatsu (Google)" To: Steven Rostedt , Peter Zijlstra Cc: Anil S Keshavamurthy , Masami Hiramatsu , "David S . 
Miller" , Mathieu Desnoyers , Oleg Nesterov , Tzvetomir Stoyanov , Naveen N Rao , Josh Poimboeuf , Jason Baron , Ard Biesheuvel , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH 3/5] kprobes: Use guard for rcu_read_lock Date: Mon, 9 Dec 2024 11:41:38 +0900 Message-ID: <173371209846.480397.3852648910271029695.stgit@devnote2> X-Mailer: git-send-email 2.43.0 In-Reply-To: <173371205755.480397.7893311565254712194.stgit@devnote2> References: <173371205755.480397.7893311565254712194.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Masami Hiramatsu (Google) Use guard(rcu) for rcu_read_lock so that it can remove unneeded gotos and make it more structured. Signed-off-by: Masami Hiramatsu (Google) --- kernel/kprobes.c | 66 +++++++++++++++++++++++++++++------------------------- 1 file changed, 36 insertions(+), 30 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 004eb8326520..a24587e8f91a 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -144,30 +144,26 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c) /* Since the slot array is not protected by rcu, we need a mutex */ guard(mutex)(&c->mutex); - retry: - rcu_read_lock(); - list_for_each_entry_rcu(kip, &c->pages, list) { - if (kip->nused < slots_per_page(c)) { - int i; - - for (i = 0; i < slots_per_page(c); i++) { - if (kip->slot_used[i] == SLOT_CLEAN) { - kip->slot_used[i] = SLOT_USED; - kip->nused++; - rcu_read_unlock(); - return kip->insns + (i * c->insn_size); + do { + guard(rcu)(); + list_for_each_entry_rcu(kip, &c->pages, list) { + if (kip->nused < slots_per_page(c)) { + int i; + + for (i = 0; i < slots_per_page(c); i++) { + if (kip->slot_used[i] == SLOT_CLEAN) { + kip->slot_used[i] = SLOT_USED; + kip->nused++; + return kip->insns + (i * c->insn_size); + } } + /* kip->nused is broken. Fix it. */ + kip->nused = slots_per_page(c); + WARN_ON(1); } - /* kip->nused is broken. Fix it. */ - kip->nused = slots_per_page(c); - WARN_ON(1); } - } - rcu_read_unlock(); - /* If there are any garbage slots, collect it and try again. */ - if (c->nr_garbage && collect_garbage_slots(c) == 0) - goto retry; + } while (c->nr_garbage && collect_garbage_slots(c) == 0); /* All out of space. Need to allocate a new page. */ kip = kmalloc(struct_size(kip, slot_used, slots_per_page(c)), GFP_KERNEL); @@ -246,25 +242,35 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c) return 0; } -void __free_insn_slot(struct kprobe_insn_cache *c, - kprobe_opcode_t *slot, int dirty) +static long __find_insn_page(struct kprobe_insn_cache *c, + kprobe_opcode_t *slot, struct kprobe_insn_page **pkip) { - struct kprobe_insn_page *kip; + struct kprobe_insn_page *kip = NULL; long idx; - guard(mutex)(&c->mutex); - rcu_read_lock(); + guard(rcu)(); list_for_each_entry_rcu(kip, &c->pages, list) { idx = ((long)slot - (long)kip->insns) / (c->insn_size * sizeof(kprobe_opcode_t)); - if (idx >= 0 && idx < slots_per_page(c)) - goto out; + if (idx >= 0 && idx < slots_per_page(c)) { + *pkip = kip; + return idx; + } } /* Could not find this slot. 
*/ WARN_ON(1); - kip = NULL; -out: - rcu_read_unlock(); + *pkip = NULL; + return -1; +} + +void __free_insn_slot(struct kprobe_insn_cache *c, + kprobe_opcode_t *slot, int dirty) +{ + struct kprobe_insn_page *kip = NULL; + long idx; + + guard(mutex)(&c->mutex); + idx = __find_insn_page(c, slot, &kip); /* Mark and sweep: this may sleep */ if (kip) { /* Check double free */ From patchwork Mon Dec 9 02:41:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 13898758 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BA75970802; Mon, 9 Dec 2024 02:41:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712119; cv=none; b=U69P0+nwKFYbx+IR2P5V2QFytfsEjhK0lPNYbePE401klNpjV37xhvAw5hqQdFm3/m9XItJB9JlBEotUrrp4waIHWuzZU5sR+I+ARUhLb4O3CpAQdfLbM/HyqewmeeP3HH0T5L7hQvkxgwQC7O0WRNCwzniEfE7aMdk0zQZUjaI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712119; c=relaxed/simple; bh=/l67opR2BkVN0689/PXyBVzSNMlXdLM5Dtq0u70RTd4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=Thxe9L6HrCQNJ+7X0+qq0GSlmOLVPjog4o7hgY9dU3kmoOS7d1/U7bFa20QWprdz2giGP1dFdxnQB094Aqu+O3BD9wFqEkpZgulzPbgOAx0rtMVSRSjNkVtbm8+cJgUClnOj3tKrTppCOKvinBZSBjwoIepvueqHs8yXbJVrNvM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=PP6tQ+br; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="PP6tQ+br" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C6222C4CED2; Mon, 9 Dec 2024 02:41:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1733712119; bh=/l67opR2BkVN0689/PXyBVzSNMlXdLM5Dtq0u70RTd4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PP6tQ+brmzfR5Ioop/FGGQs5znx5rCa9Aen+TbRGFx94H2hLQRYVcbQ9OsLCL6RA2 8cf2MEb2c8H+8N6a6K3Cv4Mz1qBENeHaEtRw2YGH9ZdjDN20jLM/ADyKLAFpgqPhWI Ld75HmWU0r3QPeynhgoxRrYiauziOrbLRQDj2kgz7F5mSu9OiaDA/IYBgXpQc0jqY2 gQk8Q3Qmsx2DGcqFAB129kym6yAD6h5MOZTOsgGgiXFDpLmhF8G1XvpAODgRCA2pzX qePnutfNyes2GYp51GO9sy3jSycB3SVVzvfkOenBmU8d8henLVR7bSAB9Nuz7G2XGH 3FaVr/b+X/5bg== From: "Masami Hiramatsu (Google)" To: Steven Rostedt , Peter Zijlstra Cc: Anil S Keshavamurthy , Masami Hiramatsu , "David S . Miller" , Mathieu Desnoyers , Oleg Nesterov , Tzvetomir Stoyanov , Naveen N Rao , Josh Poimboeuf , Jason Baron , Ard Biesheuvel , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH 4/5] kprobes: Remove unneeded goto Date: Mon, 9 Dec 2024 11:41:52 +0900 Message-ID: <173371211203.480397.13988907319659165160.stgit@devnote2> X-Mailer: git-send-email 2.43.0 In-Reply-To: <173371205755.480397.7893311565254712194.stgit@devnote2> References: <173371205755.480397.7893311565254712194.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Masami Hiramatsu (Google) Remove unneeded gotos. 
Since the labels referred by these gotos have only one reference for each, we can replace those gotos with the referred code. Signed-off-by: Masami Hiramatsu (Google) --- kernel/kprobes.c | 45 +++++++++++++++++++++------------------------ 1 file changed, 21 insertions(+), 24 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index a24587e8f91a..34cbbb2206f4 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1071,20 +1071,18 @@ static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops, if (*cnt == 0) { ret = register_ftrace_function(ops); - if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret)) - goto err_ftrace; + if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret)) { + /* + * At this point, sinec ops is not registered, we should be sefe from + * registering empty filter. + */ + ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0); + return ret; + } } (*cnt)++; return ret; - -err_ftrace: - /* - * At this point, sinec ops is not registered, we should be sefe from - * registering empty filter. - */ - ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0); - return ret; } static int arm_kprobe_ftrace(struct kprobe *p) @@ -1428,7 +1426,7 @@ _kprobe_addr(kprobe_opcode_t *addr, const char *symbol_name, unsigned long offset, bool *on_func_entry) { if ((symbol_name && addr) || (!symbol_name && !addr)) - goto invalid; + return ERR_PTR(-EINVAL); if (symbol_name) { /* @@ -1458,11 +1456,10 @@ _kprobe_addr(kprobe_opcode_t *addr, const char *symbol_name, * at the start of the function. */ addr = arch_adjust_kprobe_addr((unsigned long)addr, offset, on_func_entry); - if (addr) - return addr; + if (!addr) + return ERR_PTR(-EINVAL); -invalid: - return ERR_PTR(-EINVAL); + return addr; } static kprobe_opcode_t *kprobe_addr(struct kprobe *p) @@ -1486,15 +1483,15 @@ static struct kprobe *__get_valid_kprobe(struct kprobe *p) if (unlikely(!ap)) return NULL; - if (p != ap) { - list_for_each_entry(list_p, &ap->list, list) - if (list_p == p) - /* kprobe p is a valid probe */ - goto valid; - return NULL; - } -valid: - return ap; + if (p == ap) + return ap; + + list_for_each_entry(list_p, &ap->list, list) + if (list_p == p) + /* kprobe p is a valid probe */ + return ap; + + return NULL; } /* From patchwork Mon Dec 9 02:42:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 13898759 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F1608446A1; Mon, 9 Dec 2024 02:42:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712131; cv=none; b=i8Vg+2Gz/W9snZqZvtpS5G/UK4wXRiX+MCrGj8XIR7SP/fIdE8noi7bu3oemtxD/LhVIBs/59yGp8g2JVFlef4MdrsWkG7VDSkvaiPFTE7yalqtb5mm1l7GSYDGeCuXOdZysvDETz9ZaDb44y0WJ5WFN+xLxcv4xHzma7Thottk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733712131; c=relaxed/simple; bh=IuZ2XQ5La6w2CIYPeJ7tRxYaceO2siijVzc5McVgygI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; 
b=aaVYMieAAstb7drfREPcaec+LBofJkXkmrs8Dp93FSCt+ToIMILfSn2L+GQ08RSaA0Bv9Lq8yzYf47DYXlzKeVHKRx2yw6t2CH+RA+t//t8/cD8fjW/k9RWUDF/Wtgc6ucNunK817ctQ7NkpzLglPwZG2ZLSBT88LO3Rd9e4UIM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=OrHEIhcS; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="OrHEIhcS" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5B399C4CED2; Mon, 9 Dec 2024 02:42:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1733712130; bh=IuZ2XQ5La6w2CIYPeJ7tRxYaceO2siijVzc5McVgygI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OrHEIhcSoaQxFNZbjtOj/W3JjvLbjtMZmKY/DOEdJzgIGKfX0OfMMTpmM30qe3VGM V0o6LI7YV9NhBn+b3BULFIFlzP+vuiYg5QVYmEoV+8TZIjBV8L+gIB/tpARl0CpOHs OwKeA+EzWwBwCkk2FV43YlKUIWqEPT+VNOtyEKaozntboiPfXBGXqwSv+g6Rudteoa 36KiZPeJbYeAzBYU75PmM/PsVQQ83tVfNtxG/veYhIv8E1FDvCjvAM/r1LTpIbnvtA NleMc7s+9fSXBsECqkpNC+oDTNZG9bokI7a1a6zsdWAt2mAqRi1aVaDSGNcH6SDWX/ bITnwsZhPbnNQ== From: "Masami Hiramatsu (Google)" To: Steven Rostedt , Peter Zijlstra Cc: Anil S Keshavamurthy , Masami Hiramatsu , "David S . Miller" , Mathieu Desnoyers , Oleg Nesterov , Tzvetomir Stoyanov , Naveen N Rao , Josh Poimboeuf , Jason Baron , Ard Biesheuvel , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH 5/5] kprobes: Remove remaining gotos Date: Mon, 9 Dec 2024 11:42:04 +0900 Message-ID: <173371212474.480397.5684523564137819115.stgit@devnote2> X-Mailer: git-send-email 2.43.0 In-Reply-To: <173371205755.480397.7893311565254712194.stgit@devnote2> References: <173371205755.480397.7893311565254712194.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Masami Hiramatsu (Google) Remove remaining gotos from kprobes.c to clean up the code. This does not use cleanup macros, but changes code flow for avoiding gotos. Signed-off-by: Masami Hiramatsu (Google) --- kernel/kprobes.c | 63 +++++++++++++++++++++++++++--------------------------- 1 file changed, 31 insertions(+), 32 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 34cbbb2206f4..030569210670 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1730,29 +1730,31 @@ static int __unregister_kprobe_top(struct kprobe *p) if (IS_ERR(ap)) return PTR_ERR(ap); - if (ap == p) - /* - * This probe is an independent(and non-optimized) kprobe - * (not an aggrprobe). Remove from the hash list. - */ - goto disarmed; - - /* Following process expects this probe is an aggrprobe */ - WARN_ON(!kprobe_aggrprobe(ap)); + WARN_ON(ap != p && !kprobe_aggrprobe(ap)); - if (list_is_singular(&ap->list) && kprobe_disarmed(ap)) + /* + * If the probe is an independent(and non-optimized) kprobe + * (not an aggrprobe), the last kprobe on the aggrprobe, or + * kprobe is already disarmed, just remove from the hash list. + */ + if (ap == p || + (list_is_singular(&ap->list) && kprobe_disarmed(ap))) { /* * !disarmed could be happen if the probe is under delayed * unoptimizing. 
*/ - goto disarmed; - else { - /* If disabling probe has special handlers, update aggrprobe */ - if (p->post_handler && !kprobe_gone(p)) { - list_for_each_entry(list_p, &ap->list, list) { - if ((list_p != p) && (list_p->post_handler)) - goto noclean; - } + hlist_del_rcu(&ap->hlist); + return 0; + } + + /* If disabling probe has special handlers, update aggrprobe */ + if (p->post_handler && !kprobe_gone(p)) { + list_for_each_entry(list_p, &ap->list, list) { + if ((list_p != p) && (list_p->post_handler)) + break; + } + /* No other probe has post_handler */ + if (list_entry_is_head(list_p, &ap->list, list)) { /* * For the kprobe-on-ftrace case, we keep the * post_handler setting to identify this aggrprobe @@ -1761,24 +1763,21 @@ static int __unregister_kprobe_top(struct kprobe *p) if (!kprobe_ftrace(ap)) ap->post_handler = NULL; } -noclean: + } + + /* + * Remove from the aggrprobe: this path will do nothing in + * __unregister_kprobe_bottom(). + */ + list_del_rcu(&p->list); + if (!kprobe_disabled(ap) && !kprobes_all_disarmed) /* - * Remove from the aggrprobe: this path will do nothing in - * __unregister_kprobe_bottom(). + * Try to optimize this probe again, because post + * handler may have been changed. */ - list_del_rcu(&p->list); - if (!kprobe_disabled(ap) && !kprobes_all_disarmed) - /* - * Try to optimize this probe again, because post - * handler may have been changed. - */ - optimize_kprobe(ap); - } + optimize_kprobe(ap); return 0; -disarmed: - hlist_del_rcu(&ap->hlist); - return 0; } static void __unregister_kprobe_bottom(struct kprobe *p)
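Taken together, the series replaces paired lock/unlock calls and their "goto out" unwind paths with the scope-based guards from <linux/cleanup.h>. A condensed sketch of the resulting pattern (the function, mutex, and variable names below are illustrative, not taken from the patches):

#include <linux/cleanup.h>
#include <linux/cpu.h>
#include <linux/errno.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_text_mutex);        /* stand-in for text_mutex */

/*
 * Illustrative only: mirrors the nesting kprobe_optimizer() and
 * register_aggr_kprobe() use after this series.  Returning from inside
 * the nested scopes releases the mutex first and then the CPU read lock,
 * which is what lets the old unlock-in-reverse-order exit paths go away.
 */
static int example_modify_text(void *addr)
{
        scoped_guard(cpus_read_lock) {                  /* cpus_read_lock()/cpus_read_unlock() */
                guard(mutex)(&example_text_mutex);      /* mutex_lock()/mutex_unlock() */

                if (!addr)
                        return -EINVAL;                 /* both locks are released on this path */

                /* ... code patching would go here ... */
        }
        return 0;
}

guard(rcu)() in patch 3 follows the same rule for RCU read-side sections: rcu_read_unlock() runs when the guard goes out of scope, so the retry loop in __get_insn_slot() no longer needs an explicit unlock before each exit.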