From patchwork Tue Nov 5 13:33:55 2024
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 13862970
X-Patchwork-Delegate: mhiramat@kernel.org
From: Jiri Olsa
To: Oleg Nesterov, Peter Zijlstra, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Song Liu, Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt, Masami Hiramatsu, Alan Maguire, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [RFC perf/core 01/11] uprobes: Rename arch_uretprobe_trampoline function
Date: Tue, 5 Nov 2024 14:33:55 +0100
Message-ID: <20241105133405.2703607-2-jolsa@kernel.org>
In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org>
References: <20241105133405.2703607-1-jolsa@kernel.org>

We are about to add a uprobe trampoline, so clean up the namespace first
by renaming the uretprobe-specific helper.
Signed-off-by: Jiri Olsa
---
 arch/x86/kernel/uprobes.c | 2 +-
 include/linux/uprobes.h   | 2 +-
 kernel/events/uprobes.c   | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index 5a952c5ea66b..22a17c149a55 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -338,7 +338,7 @@ extern u8 uretprobe_trampoline_entry[];
 extern u8 uretprobe_trampoline_end[];
 extern u8 uretprobe_syscall_check[];
 
-void *arch_uprobe_trampoline(unsigned long *psize)
+void *arch_uretprobe_trampoline(unsigned long *psize)
 {
 	static uprobe_opcode_t insn = UPROBE_SWBP_INSN;
 	struct pt_regs *regs = task_pt_regs(current);
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 7a051b5d2edd..2f500bc97263 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -211,7 +211,7 @@ extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
 extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 				  void *src, unsigned long len);
 extern void uprobe_handle_trampoline(struct pt_regs *regs);
-extern void *arch_uprobe_trampoline(unsigned long *psize);
+extern void *arch_uretprobe_trampoline(unsigned long *psize);
 extern unsigned long uprobe_get_trampoline_vaddr(void);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index a76ddc5fc982..0b04c051d712 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1696,7 +1696,7 @@ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
 	return ret;
 }
 
-void * __weak arch_uprobe_trampoline(unsigned long *psize)
+void * __weak arch_uretprobe_trampoline(unsigned long *psize)
 {
 	static uprobe_opcode_t insn = UPROBE_SWBP_INSN;
 
@@ -1728,7 +1728,7 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
 	init_waitqueue_head(&area->wq);
 	/* Reserve the 1st slot for get_trampoline_vaddr() */
 	set_bit(0, area->bitmap);
-	insns = arch_uprobe_trampoline(&insns_size);
+	insns = arch_uretprobe_trampoline(&insns_size);
 
 	arch_uprobe_copy_ixol(area->page, 0, insns, insns_size);
 
 	if (!xol_add_vma(mm, area))

From patchwork Tue Nov 5 13:33:56 2024
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 13862971
X-Patchwork-Delegate: mhiramat@kernel.org
From: Jiri Olsa
Subject: [RFC perf/core 02/11] uprobes: Make copy_from_page global
Date: Tue, 5 Nov 2024 14:33:56 +0100
Message-ID: <20241105133405.2703607-3-jolsa@kernel.org>
In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org>

Make copy_from_page global and add the uprobe prefix.
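The helper being renamed copies len bytes from the kernel mapping of a page, at the page offset of vaddr. The offset arithmetic can be illustrated with a small userspace sketch; the 4096-byte page size and the helper names (`copy_from_page_buf`, `page_offset`) are assumptions of this example, not part of the patch:

```c
#include <stdint.h>
#include <string.h>

/* Userspace sketch, assuming 4 KiB pages. */
#define SKETCH_PAGE_SIZE 4096UL
#define SKETCH_PAGE_MASK (~(SKETCH_PAGE_SIZE - 1))

/* Offset of vaddr within its page: vaddr & ~PAGE_MASK,
 * as used by the kernel helper. */
unsigned long page_offset(unsigned long vaddr)
{
	return vaddr & ~SKETCH_PAGE_MASK;
}

/* Analogue of uprobe_copy_from_page(): copy len bytes starting at the
 * page offset of vaddr. In the kernel the copy cannot cross the page
 * because registration enforces UPROBE_SWBP_INSN_SIZE alignment. */
void copy_from_page_buf(const void *page, unsigned long vaddr,
			void *dst, int len)
{
	memcpy(dst, (const char *)page + page_offset(vaddr), len);
}
```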
Signed-off-by: Jiri Olsa
---
 include/linux/uprobes.h |  1 +
 kernel/events/uprobes.c | 10 +++++-----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 2f500bc97263..28068f9fcdc1 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -213,6 +213,7 @@ extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 extern void uprobe_handle_trampoline(struct pt_regs *regs);
 extern void *arch_uretprobe_trampoline(unsigned long *psize);
 extern unsigned long uprobe_get_trampoline_vaddr(void);
+extern void uprobe_copy_from_page(struct page *page, unsigned long vaddr, void *dst, int len);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 0b04c051d712..e9308649bba3 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -250,7 +250,7 @@ bool __weak is_trap_insn(uprobe_opcode_t *insn)
 	return is_swbp_insn(insn);
 }
 
-static void copy_from_page(struct page *page, unsigned long vaddr, void *dst, int len)
+void uprobe_copy_from_page(struct page *page, unsigned long vaddr, void *dst, int len)
 {
 	void *kaddr = kmap_atomic(page);
 	memcpy(dst, kaddr + (vaddr & ~PAGE_MASK), len);
@@ -278,7 +278,7 @@ static int verify_opcode(struct page *page, unsigned long vaddr, uprobe_opcode_
 	 * is a trap variant; uprobes always wins over any other (gdb)
 	 * breakpoint.
 	 */
-	copy_from_page(page, vaddr, &old_opcode, UPROBE_SWBP_INSN_SIZE);
+	uprobe_copy_from_page(page, vaddr, &old_opcode, UPROBE_SWBP_INSN_SIZE);
 	is_swbp = is_swbp_insn(&old_opcode);
 
 	if (is_swbp_insn(new_opcode)) {
@@ -1027,7 +1027,7 @@ static int __copy_insn(struct address_space *mapping, struct file *filp,
 	if (IS_ERR(page))
 		return PTR_ERR(page);
 
-	copy_from_page(page, offset, insn, nbytes);
+	uprobe_copy_from_page(page, offset, insn, nbytes);
 	put_page(page);
 
 	return 0;
@@ -1368,7 +1368,7 @@ struct uprobe *uprobe_register(struct inode *inode,
 		return ERR_PTR(-EINVAL);
 
 	/*
-	 * This ensures that copy_from_page(), copy_to_page() and
+	 * This ensures that uprobe_copy_from_page(), copy_to_page() and
	 * __update_ref_ctr() can't cross page boundary.
	 */
 	if (!IS_ALIGNED(offset, UPROBE_SWBP_INSN_SIZE))
@@ -2288,7 +2288,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	if (result < 0)
 		return result;
 
-	copy_from_page(page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+	uprobe_copy_from_page(page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
 	put_page(page);
  out:
 	/* This needs to return true for any variant of the trap insn */

From patchwork Tue Nov 5 13:33:57 2024
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 13862972
X-Patchwork-Delegate: mhiramat@kernel.org
From: Jiri Olsa
Subject: [RFC perf/core 03/11] uprobes: Add len argument to uprobe_write_opcode
Date: Tue, 5 Nov 2024 14:33:57 +0100
Message-ID: <20241105133405.2703607-4-jolsa@kernel.org>

Add a len argument to uprobe_write_opcode in preparation for writing
longer instructions in the following changes.

Signed-off-by: Jiri Olsa
---
 include/linux/uprobes.h |  3 ++-
 kernel/events/uprobes.c | 14 ++++++++------
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 28068f9fcdc1..7d23a4fee6f4 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -181,7 +181,8 @@ extern bool is_swbp_insn(uprobe_opcode_t *insn);
 extern bool is_trap_insn(uprobe_opcode_t *insn);
 extern unsigned long uprobe_get_swbp_addr(struct pt_regs *regs);
 extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
-extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_t);
+extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
+			       unsigned long vaddr, uprobe_opcode_t *insn, int len);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index e9308649bba3..3e275717789b 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -471,7 +471,7 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  * Return 0 (success) or a negative errno.
  */
 int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
-		unsigned long vaddr, uprobe_opcode_t opcode)
+		unsigned long vaddr, uprobe_opcode_t *insn, int len)
 {
 	struct uprobe *uprobe;
 	struct page *old_page, *new_page;
@@ -480,7 +480,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
 
-	is_register = is_swbp_insn(&opcode);
+	is_register = is_swbp_insn(insn);
 	uprobe = container_of(auprobe, struct uprobe, arch);
 
 retry:
@@ -491,7 +491,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (IS_ERR(old_page))
 		return PTR_ERR(old_page);
 
-	ret = verify_opcode(old_page, vaddr, &opcode);
+	ret = verify_opcode(old_page, vaddr, insn);
 	if (ret <= 0)
 		goto put_old;
 
@@ -525,7 +525,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 
 	__SetPageUptodate(new_page);
 	copy_highpage(new_page, old_page);
-	copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+	copy_to_page(new_page, vaddr, insn, len);
 
 	if (!is_register) {
 		struct page *orig_page;
@@ -582,7 +582,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
  */
 int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr, UPROBE_SWBP_INSN);
+	uprobe_opcode_t insn = UPROBE_SWBP_INSN;
+
+	return uprobe_write_opcode(auprobe, mm, vaddr, &insn, UPROBE_SWBP_INSN_SIZE);
 }
 
 /**
@@ -598,7 +600,7 @@ int __weak
 set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
 {
 	return uprobe_write_opcode(auprobe, mm, vaddr,
-			*(uprobe_opcode_t *)&auprobe->insn);
+			(uprobe_opcode_t *)&auprobe->insn, UPROBE_SWBP_INSN_SIZE);
 }
 
 /* uprobe should have guaranteed positive refcount */

From patchwork Tue Nov 5 13:33:58 2024
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 13862973
X-Patchwork-Delegate: mhiramat@kernel.org
From: Jiri Olsa
Subject: [RFC perf/core 04/11] uprobes: Add data argument to uprobe_write_opcode function
Date: Tue, 5 Nov 2024 14:33:58 +0100
Message-ID: <20241105133405.2703607-5-jolsa@kernel.org>

Add a data argument to the uprobe_write_opcode function and pass it to
the newly added arch-overloaded functions:

  arch_uprobe_verify_opcode
  arch_uprobe_is_register

This way each architecture can provide customized verification.
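The dispatch this patch sets up — a generic default plus an optional per-arch override — can be sketched in userspace. The override below is purely hypothetical (it accepts a 5-byte sequence starting with the x86 call opcode 0xe8, anticipating the optimized-probe patches later in the series); the real per-arch implementations are not part of this patch:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t uprobe_opcode_t;
#define UPROBE_SWBP_INSN	0xcc	/* x86 int3 */

/* Mirrors the generic (__weak) arch_uprobe_is_register():
 * "register" means the bytes being written are the breakpoint. */
static bool is_swbp_insn(const uprobe_opcode_t *insn)
{
	return *insn == UPROBE_SWBP_INSN;
}

bool default_uprobe_is_register(const uprobe_opcode_t *insn, int len, void *data)
{
	(void)len; (void)data;
	return is_swbp_insn(insn);
}

/* Hypothetical arch override: also treat a 5-byte sequence that
 * begins with the call opcode (0xe8) as a "register" write, then
 * fall back to the generic check. */
bool arch_uprobe_is_register_example(const uprobe_opcode_t *insn, int len, void *data)
{
	if (len == 5 && insn[0] == 0xe8)
		return true;
	return default_uprobe_is_register(insn, len, data);
}
```

In the kernel the same effect is achieved with weak symbols: an architecture that defines a strong `arch_uprobe_is_register` replaces the generic one at link time.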
Signed-off-by: Jiri Olsa
---
 include/linux/uprobes.h |  6 +++++-
 kernel/events/uprobes.c | 25 +++++++++++++++++++------
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 7d23a4fee6f4..be306028ed59 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -182,7 +182,7 @@ extern bool is_trap_insn(uprobe_opcode_t *insn);
 extern unsigned long uprobe_get_swbp_addr(struct pt_regs *regs);
 extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
 extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
-			       unsigned long vaddr, uprobe_opcode_t *insn, int len);
+			       unsigned long vaddr, uprobe_opcode_t *insn, int len, void *data);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
@@ -215,6 +215,10 @@ extern void uprobe_handle_trampoline(struct pt_regs *regs);
 extern void *arch_uretprobe_trampoline(unsigned long *psize);
 extern unsigned long uprobe_get_trampoline_vaddr(void);
 extern void uprobe_copy_from_page(struct page *page, unsigned long vaddr, void *dst, int len);
+extern int uprobe_verify_opcode(struct page *page, unsigned long vaddr, uprobe_opcode_t *new_opcode);
+extern int arch_uprobe_verify_opcode(struct page *page, unsigned long vaddr,
+				     uprobe_opcode_t *new_opcode, void *data);
+extern bool arch_uprobe_is_register(uprobe_opcode_t *insn, int len, void *data);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 3e275717789b..944d9df1f081 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -264,7 +264,13 @@ static void copy_to_page(struct page *page, unsigned long vaddr, const void *src
 	kunmap_atomic(kaddr);
 }
 
-static int verify_opcode(struct page *page, unsigned long vaddr, uprobe_opcode_t *new_opcode)
+__weak bool arch_uprobe_is_register(uprobe_opcode_t *insn, int len, void *data)
+{
+	return is_swbp_insn(insn);
+}
+
+int uprobe_verify_opcode(struct page *page, unsigned long vaddr,
+			 uprobe_opcode_t *new_opcode)
 {
 	uprobe_opcode_t old_opcode;
 	bool is_swbp;
@@ -292,6 +298,12 @@ static int verify_opcode(struct page *page, unsigned long vaddr, uprobe_opcode_t
 	return 1;
 }
 
+__weak int arch_uprobe_verify_opcode(struct page *page, unsigned long vaddr,
+				     uprobe_opcode_t *new_opcode, void *data)
+{
+	return uprobe_verify_opcode(page, vaddr, new_opcode);
+}
+
 static struct delayed_uprobe *
 delayed_uprobe_check(struct uprobe *uprobe, struct mm_struct *mm)
 {
@@ -471,7 +483,8 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  * Return 0 (success) or a negative errno.
  */
 int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
-		unsigned long vaddr, uprobe_opcode_t *insn, int len)
+		unsigned long vaddr, uprobe_opcode_t *insn, int len,
+		void *data)
 {
 	struct uprobe *uprobe;
 	struct page *old_page, *new_page;
@@ -480,7 +493,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
 
-	is_register = is_swbp_insn(insn);
+	is_register = arch_uprobe_is_register(insn, len, data);
 	uprobe = container_of(auprobe, struct uprobe, arch);
 
 retry:
@@ -491,7 +504,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (IS_ERR(old_page))
 		return PTR_ERR(old_page);
 
-	ret = verify_opcode(old_page, vaddr, insn);
+	ret = arch_uprobe_verify_opcode(old_page, vaddr, insn, data);
 	if (ret <= 0)
 		goto put_old;
 
@@ -584,7 +597,7 @@ int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned
 {
 	uprobe_opcode_t insn = UPROBE_SWBP_INSN;
 
-	return uprobe_write_opcode(auprobe, mm, vaddr, &insn, UPROBE_SWBP_INSN_SIZE);
+	return uprobe_write_opcode(auprobe, mm, vaddr, &insn, UPROBE_SWBP_INSN_SIZE, NULL);
 }
 
 /**
@@ -600,7 +613,7 @@ int __weak
 set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
 {
 	return uprobe_write_opcode(auprobe, mm, vaddr,
-			(uprobe_opcode_t *)&auprobe->insn, UPROBE_SWBP_INSN_SIZE);
+			(uprobe_opcode_t *)&auprobe->insn, UPROBE_SWBP_INSN_SIZE, NULL);
 }
 
 /* uprobe should have guaranteed positive refcount */

From patchwork Tue Nov 5 13:33:59 2024
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 13862974
X-Patchwork-Delegate: mhiramat@kernel.org
From: Jiri Olsa
Subject: [RFC perf/core 05/11] uprobes: Add mapping for optimized uprobe trampolines
Date: Tue, 5 Nov 2024 14:33:59 +0100
Message-ID: <20241105133405.2703607-6-jolsa@kernel.org>

Add an interface to install a special mapping for a user space page
that will be used as a placeholder for the uprobe trampoline in the
following changes.

The get_tramp_area(vaddr) function either finds a 'callable' page or
creates a new one. 'Callable' means the page is reachable by a call
instruction (from the vaddr argument), which each arch decides via the
new arch_uprobe_is_callable function.

The put_tramp_area function either drops the refcount or destroys the
special mapping; all the mappings are cleaned up when the process goes
down.
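On x86_64, 'reachable by call instruction' means the target lies within the signed 32-bit displacement of a rel32 call. This patch itself only adds a weak default returning false; the check below is a plausible userspace sketch of what an arch implementation could look like (the name `is_call_reachable` is illustrative, and the instruction length is ignored for simplicity):

```c
#include <stdbool.h>
#include <stdint.h>

/* A rel32 call reaches targets within a signed 32-bit displacement.
 * This sketch just checks the +-2 GB window between the trampoline
 * address and the probed address; it assumes 64-bit long. */
bool is_call_reachable(unsigned long vtramp, unsigned long vaddr)
{
	long delta = (long)vtramp - (long)vaddr;

	return delta >= INT32_MIN && delta <= INT32_MAX;
}
```

This also explains why find_nearest_page() in the patch walks the VMAs looking for an unmapped page that satisfies the arch check relative to the probed address.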
Signed-off-by: Jiri Olsa --- include/linux/uprobes.h | 12 ++++ kernel/events/uprobes.c | 141 ++++++++++++++++++++++++++++++++++++++++ kernel/fork.c | 2 + 3 files changed, 155 insertions(+) diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h index be306028ed59..222d8e82cee2 100644 --- a/include/linux/uprobes.h +++ b/include/linux/uprobes.h @@ -172,6 +172,15 @@ struct xol_area; struct uprobes_state { struct xol_area *xol_area; + struct hlist_head tramp_head; + struct mutex tramp_mutex; +}; + +struct tramp_area { + unsigned long vaddr; + struct page *page; + struct hlist_node node; + refcount_t ref; }; extern void __init uprobes_init(void); @@ -219,6 +228,9 @@ extern int uprobe_verify_opcode(struct page *page, unsigned long vaddr, uprobe_o extern int arch_uprobe_verify_opcode(struct page *page, unsigned long vaddr, uprobe_opcode_t *new_opcode, void *data); extern bool arch_uprobe_is_register(uprobe_opcode_t *insn, int len, void *data); +struct tramp_area *get_tramp_area(unsigned long vaddr); +void put_tramp_area(struct tramp_area *area); +bool arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr); #else /* !CONFIG_UPROBES */ struct uprobes_state { }; diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index 944d9df1f081..a44305c559a4 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -616,6 +616,145 @@ set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long v (uprobe_opcode_t *)&auprobe->insn, UPROBE_SWBP_INSN_SIZE, NULL); } +bool __weak arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr) +{ + return false; +} + +static unsigned long find_nearest_page(unsigned long vaddr) +{ + struct mm_struct *mm = current->mm; + struct vm_area_struct *vma, *prev; + VMA_ITERATOR(vmi, mm, 0); + + prev = vma_next(&vmi); + vma = vma_next(&vmi); + while (vma) { + if (vma->vm_start - prev->vm_end >= PAGE_SIZE && + arch_uprobe_is_callable(prev->vm_end, vaddr)) + return prev->vm_end; + + prev = vma; 
+ vma = vma_next(&vmi); + } + + return 0; +} + +static vm_fault_t tramp_fault(const struct vm_special_mapping *sm, + struct vm_area_struct *vma, struct vm_fault *vmf) +{ + struct hlist_head *head = &vma->vm_mm->uprobes_state.tramp_head; + struct tramp_area *area; + + hlist_for_each_entry(area, head, node) { + if (vma->vm_start == area->vaddr) { + vmf->page = area->page; + get_page(vmf->page); + return 0; + } + } + + return -EINVAL; +} + +static int tramp_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma) +{ + return -EPERM; +} + +static const struct vm_special_mapping tramp_mapping = { + .name = "[uprobes-trampoline]", + .fault = tramp_fault, + .mremap = tramp_mremap, +}; + +static struct tramp_area *create_tramp_area(unsigned long vaddr) +{ + struct mm_struct *mm = current->mm; + struct vm_area_struct *vma; + struct tramp_area *area; + + vaddr = find_nearest_page(vaddr); + if (!vaddr) + return NULL; + + area = kzalloc(sizeof(*area), GFP_KERNEL); + if (unlikely(!area)) + return NULL; + + area->page = alloc_page(GFP_HIGHUSER); + if (!area->page) + goto free_area; + + refcount_set(&area->ref, 1); + area->vaddr = vaddr; + + vma = _install_special_mapping(mm, area->vaddr, PAGE_SIZE, + VM_READ|VM_EXEC|VM_MAYEXEC|VM_MAYREAD|VM_DONTCOPY|VM_IO, + &tramp_mapping); + if (!IS_ERR(vma)) + return area; + + __free_page(area->page); + free_area: + kfree(area); + return NULL; +} + +struct tramp_area *get_tramp_area(unsigned long vaddr) +{ + struct uprobes_state *state = ¤t->mm->uprobes_state; + struct tramp_area *area = NULL; + + mutex_lock(&state->tramp_mutex); + hlist_for_each_entry(area, &state->tramp_head, node) { + if (arch_uprobe_is_callable(area->vaddr, vaddr)) { + refcount_inc(&area->ref); + goto unlock; + } + } + + area = create_tramp_area(vaddr); + if (area) + hlist_add_head(&area->node, &state->tramp_head); + +unlock: + mutex_unlock(&state->tramp_mutex); + return area; +} + +static void destroy_tramp_area(struct tramp_area *area) +{ + 
hlist_del(&area->node); + put_page(area->page); + kfree(area); +} + +void put_tramp_area(struct tramp_area *area) +{ + struct mm_struct *mm = current->mm; + struct uprobes_state *state = &mm->uprobes_state; + + if (area == NULL) + return; + + mutex_lock(&state->tramp_mutex); + if (refcount_dec_and_test(&area->ref)) + destroy_tramp_area(area); + mutex_unlock(&state->tramp_mutex); +} + +static void clear_tramp_head(struct mm_struct *mm) +{ + struct uprobes_state *state = &mm->uprobes_state; + struct tramp_area *area; + struct hlist_node *n; + + hlist_for_each_entry_safe(area, n, &state->tramp_head, node) + destroy_tramp_area(area); +} + /* uprobe should have guaranteed positive refcount */ static struct uprobe *get_uprobe(struct uprobe *uprobe) { @@ -1788,6 +1927,8 @@ void uprobe_clear_state(struct mm_struct *mm) delayed_uprobe_remove(NULL, mm); mutex_unlock(&delayed_uprobe_lock); + clear_tramp_head(mm); + if (!area) return; diff --git a/kernel/fork.c b/kernel/fork.c index 89ceb4a68af2..b1fe431e5cce 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1248,6 +1248,8 @@ static void mm_init_uprobes_state(struct mm_struct *mm) { #ifdef CONFIG_UPROBES mm->uprobes_state.xol_area = NULL; + mutex_init(&mm->uprobes_state.tramp_mutex); + INIT_HLIST_HEAD(&mm->uprobes_state.tramp_head); #endif } From patchwork Tue Nov 5 13:34:00 2024 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13862975
From: Jiri Olsa To: Oleg Nesterov , Peter Zijlstra , Andrii Nakryiko Cc: bpf@vger.kernel.org, Song Liu , Yonghong Song , John Fastabend , Hao Luo , Steven Rostedt , Masami Hiramatsu , Alan Maguire , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [RFC perf/core 06/11] uprobes: Add uprobe syscall to speed up uprobe Date: Tue, 5 Nov 2024 14:34:00 +0100 Message-ID:
<20241105133405.2703607-7-jolsa@kernel.org> In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org> References: <20241105133405.2703607-1-jolsa@kernel.org>

Adding a new uprobe syscall that calls uprobe handlers for a given 'breakpoint' address. The idea is that the 'breakpoint' address calls the user space trampoline, which executes the uprobe syscall. The syscall handler reads the return address of the initial call to retrieve the original 'breakpoint' address. With this address we find the related uprobe object and call its consumers.

TODO: allow the uprobe syscall to be called only from the uprobe trampoline.

Signed-off-by: Jiri Olsa --- arch/x86/entry/syscalls/syscall_64.tbl | 1 + arch/x86/kernel/uprobes.c | 48 ++++++++++++++++++++++++++ include/linux/syscalls.h | 2 ++ include/linux/uprobes.h | 2 ++ kernel/events/uprobes.c | 35 +++++++++++++++++++ kernel/sys_ni.c | 1 + 6 files changed, 89 insertions(+) diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index 7093ee21c0d1..f6299d57afe5 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -345,6 +345,7 @@ 333 common io_pgetevents sys_io_pgetevents 334 common rseq sys_rseq 335 common uretprobe sys_uretprobe +336 common uprobe sys_uprobe # don't use numbers 387 through 423, add new calls after the last # 'common' entry 424 common pidfd_send_signal sys_pidfd_send_signal diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c index 22a17c149a55..02aa4519b677 100644 --- a/arch/x86/kernel/uprobes.c +++ b/arch/x86/kernel/uprobes.c @@ -425,6 +425,54 @@ SYSCALL_DEFINE0(uretprobe) return -1; } +SYSCALL_DEFINE0(uprobe) +{ + struct pt_regs *regs = task_pt_regs(current); + unsigned long bp_vaddr; + int err; + + err = copy_from_user(&bp_vaddr, (void __user *)regs->sp + 3*8,
sizeof(bp_vaddr)); + if (err) { + force_sig(SIGILL); + return -1; + } + + handle_syscall_uprobe(regs, bp_vaddr - 5); + return 0; +} + +asm ( + ".pushsection .rodata\n" + ".global uprobe_trampoline_entry\n" + "uprobe_trampoline_entry:\n" + "push %rcx\n" + "push %r11\n" + "push %rax\n" + "movq $" __stringify(__NR_uprobe) ", %rax\n" + "syscall\n" + "pop %rax\n" + "pop %r11\n" + "pop %rcx\n" + "ret\n" + ".global uprobe_trampoline_end\n" + "uprobe_trampoline_end:\n" + ".popsection\n" +); + +extern __visible u8 uprobe_trampoline_entry[]; +extern __visible u8 uprobe_trampoline_end[]; + +void *arch_uprobe_trampoline(unsigned long *psize) +{ + struct pt_regs *regs = task_pt_regs(current); + + if (user_64bit_mode(regs)) { + *psize = uprobe_trampoline_end - uprobe_trampoline_entry; + return uprobe_trampoline_entry; + } + return NULL; +} + /* * If arch_uprobe->insn doesn't use rip-relative addressing, return * immediately. Otherwise, rewrite the instruction so that it accesses diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index 5758104921e6..a2573f9dd248 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -981,6 +981,8 @@ asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int on); asmlinkage long sys_uretprobe(void); +asmlinkage long sys_uprobe(void); + /* pciconfig: alpha, arm, arm64, ia64, sparc */ asmlinkage long sys_pciconfig_read(unsigned long bus, unsigned long dfn, unsigned long off, unsigned long len, diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h index 222d8e82cee2..4024e6ea52a4 100644 --- a/include/linux/uprobes.h +++ b/include/linux/uprobes.h @@ -231,6 +231,8 @@ extern bool arch_uprobe_is_register(uprobe_opcode_t *insn, int len, void *data); struct tramp_area *get_tramp_area(unsigned long vaddr); void put_tramp_area(struct tramp_area *area); bool arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr); +extern void *arch_uprobe_trampoline(unsigned long *psize); +extern void 
handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr); #else /* !CONFIG_UPROBES */ struct uprobes_state { }; diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index a44305c559a4..b8399684231c 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -621,6 +621,11 @@ bool __weak arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr) return false; } +void * __weak arch_uprobe_trampoline(unsigned long *psize) +{ + return NULL; +} + static unsigned long find_nearest_page(unsigned long vaddr) { struct mm_struct *mm = current->mm; @@ -673,7 +678,13 @@ static struct tramp_area *create_tramp_area(unsigned long vaddr) { struct mm_struct *mm = current->mm; struct vm_area_struct *vma; + unsigned long tramp_size; struct tramp_area *area; + void *tramp; + + tramp = arch_uprobe_trampoline(&tramp_size); + if (!tramp) + return NULL; vaddr = find_nearest_page(vaddr); if (!vaddr) @@ -690,6 +701,8 @@ static struct tramp_area *create_tramp_area(unsigned long vaddr) refcount_set(&area->ref, 1); area->vaddr = vaddr; + arch_uprobe_copy_ixol(area->page, 0, tramp, tramp_size); + vma = _install_special_mapping(mm, area->vaddr, PAGE_SIZE, VM_READ|VM_EXEC|VM_MAYEXEC|VM_MAYREAD|VM_DONTCOPY|VM_IO, &tramp_mapping); @@ -2757,6 +2770,28 @@ static void handle_swbp(struct pt_regs *regs) rcu_read_unlock_trace(); } +void handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr) +{ + struct uprobe *uprobe; + int is_swbp; + + rcu_read_lock_trace(); + uprobe = find_active_uprobe_rcu(bp_vaddr, &is_swbp); + if (!uprobe) + goto unlock; + + if (!get_utask()) + goto unlock; + + if (arch_uprobe_ignore(&uprobe->arch, regs)) + goto unlock; + + handler_chain(uprobe, regs); + +unlock: + rcu_read_unlock_trace(); +} + /* * Perform required fix-ups and disable singlestep. * Allow pending signals to take effect. 
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index c00a86931f8c..bf5d05c635ff 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -392,3 +392,4 @@ COND_SYSCALL(setuid16); COND_SYSCALL(rseq); COND_SYSCALL(uretprobe); +COND_SYSCALL(uprobe); From patchwork Tue Nov 5 13:34:01 2024 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13862976
From: Jiri Olsa To: Oleg Nesterov , Peter Zijlstra , Andrii Nakryiko Cc: bpf@vger.kernel.org, Song Liu , Yonghong Song , John Fastabend , Hao Luo , Steven Rostedt , Masami Hiramatsu , Alan Maguire , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [RFC perf/core 07/11] uprobes/x86: Add support to optimize uprobes Date: Tue, 5 Nov 2024 14:34:01 +0100 Message-ID: <20241105133405.2703607-8-jolsa@kernel.org> In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org> References: <20241105133405.2703607-1-jolsa@kernel.org>

Putting together all the previously added pieces to support optimized uprobes on top of the 5-byte nop instruction.
The current uprobe execution goes through the following steps:

- installs a breakpoint instruction over the original instruction
- the exception handler is hit and calls the related uprobe consumers
- either simulates the original instruction or does out-of-line single-step execution of it
- returns to user space

The optimized uprobe path:

- checks the original instruction is a 5-byte nop (plus other checks)
- adds (or reuses an existing) user space trampoline and overwrites the original instruction (5-byte nop) with a call to the user space trampoline
- the user space trampoline executes the uprobe syscall, which calls the related uprobe consumers
- the trampoline returns back to the next instruction

This approach won't speed up all uprobes, as it's limited to using nop5 as the original instruction, but we could use nop5 as the USDT probe instruction (which currently uses a single-byte nop) and speed up USDT probes.

This patch overloads the related arch functions in uprobe_write_opcode and set_orig_insn so they can install a call instruction if needed. arch_uprobe_optimize triggers the uprobe optimization and is called after the first uprobe hit. I originally had it called on uprobe installation, but it clashed with the ELF loader, because the user space trampoline was added in a place where the loader might need to put ELF segments, so I decided to do it after the first uprobe hit, when loading is done.

TODO: release the uprobe trampoline when it's no longer needed. We might need to stop all CPUs to make sure no user space thread is in the trampoline, or we might just keep it, because there's just one 4GB memory region?
Signed-off-by: Jiri Olsa --- arch/x86/include/asm/uprobes.h | 7 ++ arch/x86/kernel/uprobes.c | 130 +++++++++++++++++++++++++++++++++ include/linux/uprobes.h | 1 + kernel/events/uprobes.c | 3 + 4 files changed, 141 insertions(+) diff --git a/arch/x86/include/asm/uprobes.h b/arch/x86/include/asm/uprobes.h index 678fb546f0a7..84a75ed748f0 100644 --- a/arch/x86/include/asm/uprobes.h +++ b/arch/x86/include/asm/uprobes.h @@ -20,6 +20,11 @@ typedef u8 uprobe_opcode_t; #define UPROBE_SWBP_INSN 0xcc #define UPROBE_SWBP_INSN_SIZE 1 +enum { + ARCH_UPROBE_FLAG_CAN_OPTIMIZE = 0, + ARCH_UPROBE_FLAG_OPTIMIZED = 1, +}; + struct uprobe_xol_ops; struct arch_uprobe { @@ -45,6 +50,8 @@ struct arch_uprobe { u8 ilen; } push; }; + + unsigned long flags; }; struct arch_uprobe_task { diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c index 02aa4519b677..50ccf24ff42c 100644 --- a/arch/x86/kernel/uprobes.c +++ b/arch/x86/kernel/uprobes.c @@ -18,6 +18,7 @@ #include #include #include +#include /* Post-execution fixups. 
*/ @@ -877,6 +878,33 @@ static const struct uprobe_xol_ops push_xol_ops = { .emulate = push_emulate_op, }; +static int is_nop5_insns(uprobe_opcode_t *insn) +{ + return !memcmp(insn, x86_nops[5], 5); +} + +static int is_call_insns(uprobe_opcode_t *insn) +{ + return *insn == 0xe8; +} + +static void relative_insn(void *dest, void *from, void *to, u8 op) +{ + struct __arch_relative_insn { + u8 op; + s32 raddr; + } __packed *insn; + + insn = (struct __arch_relative_insn *)dest; + insn->raddr = (s32)((long)(to) - ((long)(from) + 5)); + insn->op = op; +} + +static void relative_call(void *dest, void *from, void *to) +{ + relative_insn(dest, from, to, CALL_INSN_OPCODE); +} + /* Returns -ENOSYS if branch_xol_ops doesn't handle this insn */ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn) { @@ -896,6 +924,10 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn) break; case 0x0f: + if (is_nop5_insns((uprobe_opcode_t *) &auprobe->insn)) { + set_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags); + break; + } if (insn->opcode.nbytes != 2) return -ENOSYS; /* @@ -1267,3 +1299,101 @@ bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx, else return regs->sp <= ret->stack; } + +int arch_uprobe_verify_opcode(struct page *page, unsigned long vaddr, + uprobe_opcode_t *new_opcode, void *opt) +{ + if (opt) { + uprobe_opcode_t old_opcode[5]; + bool is_call; + + uprobe_copy_from_page(page, vaddr, (uprobe_opcode_t *) &old_opcode, 5); + is_call = is_call_insns((uprobe_opcode_t *) &old_opcode); + + if (is_call_insns(new_opcode)) { + if (is_call) /* register: already installed? */ + return 0; + } else { + if (!is_call) /* unregister: was it changed by us? */ + return 0; + } + + return 1; + } + + return uprobe_verify_opcode(page, vaddr, new_opcode); +} + +bool arch_uprobe_is_register(uprobe_opcode_t *insn, int len, void *data) +{ + return data ? 
len == 5 && is_call_insns(insn) : is_swbp_insn(insn); +} + +static void __arch_uprobe_optimize(struct arch_uprobe *auprobe, struct mm_struct *mm, + unsigned long vaddr) +{ + struct tramp_area *area = NULL; + char call[5]; + + /* We can't do cross page atomic writes yet. */ + if (PAGE_SIZE - (vaddr & ~PAGE_MASK) < 5) + goto fail; + + area = get_tramp_area(vaddr); + if (!area) + goto fail; + + relative_call(call, (void *) vaddr, (void *) area->vaddr); + if (uprobe_write_opcode(auprobe, mm, vaddr, call, 5, (void *) 1)) + goto fail; + + set_bit(ARCH_UPROBE_FLAG_OPTIMIZED, &auprobe->flags); + return; + +fail: + /* Once we fail we never try again. */ + put_tramp_area(area); + clear_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags); +} + +static bool should_optimize(struct arch_uprobe *auprobe) +{ + if (!test_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags)) + return false; + if (test_bit(ARCH_UPROBE_FLAG_OPTIMIZED, &auprobe->flags)) + return false; + return true; +} + +void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr) +{ + struct mm_struct *mm = current->mm; + + if (!should_optimize(auprobe)) + return; + + mmap_write_lock(mm); + if (should_optimize(auprobe)) + __arch_uprobe_optimize(auprobe, mm, vaddr); + mmap_write_unlock(mm); +} + +int set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr) +{ + uprobe_opcode_t *insn = (uprobe_opcode_t *) auprobe->insn; + + if (test_bit(ARCH_UPROBE_FLAG_OPTIMIZED, &auprobe->flags)) + return uprobe_write_opcode(auprobe, mm, vaddr, insn, 5, (void *) 1); + + return uprobe_write_opcode(auprobe, mm, vaddr, insn, UPROBE_SWBP_INSN_SIZE, NULL); +} + +bool arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr) +{ + unsigned long delta; + + /* call instructions size */ + vaddr += 5; + delta = vaddr < vtramp ? 
vtramp - vaddr : vaddr - vtramp; + return delta < 0xffffffff; +} diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h index 4024e6ea52a4..42ab29f80220 100644 --- a/include/linux/uprobes.h +++ b/include/linux/uprobes.h @@ -233,6 +233,7 @@ void put_tramp_area(struct tramp_area *area); bool arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr); extern void *arch_uprobe_trampoline(unsigned long *psize); extern void handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr); +extern void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr); #else /* !CONFIG_UPROBES */ struct uprobes_state { }; diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index b8399684231c..efe45fcd5d0a 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -2759,6 +2759,9 @@ static void handle_swbp(struct pt_regs *regs) handler_chain(uprobe, regs); + /* Try to optimize after first hit. */ + arch_uprobe_optimize(&uprobe->arch, bp_vaddr); + if (arch_uprobe_skip_sstep(&uprobe->arch, regs)) goto out; From patchwork Tue Nov 5 13:34:02 2024 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13862977
From: Jiri Olsa To: Oleg Nesterov , Peter Zijlstra , Andrii Nakryiko Cc: bpf@vger.kernel.org, Song Liu , Yonghong Song , John Fastabend , Hao Luo , Steven Rostedt , Masami Hiramatsu , Alan Maguire , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [RFC bpf-next 08/11] selftests/bpf: Use 5-byte nop for x86 usdt probes Date: Tue, 5 Nov 2024 14:34:02 +0100 Message-ID: <20241105133405.2703607-9-jolsa@kernel.org> In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org> References: <20241105133405.2703607-1-jolsa@kernel.org>
Using a 5-byte nop for x86 usdt probes so we can switch them to optimized uprobes.

Signed-off-by: Jiri Olsa --- tools/testing/selftests/bpf/sdt.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/bpf/sdt.h b/tools/testing/selftests/bpf/sdt.h index ca0162b4dc57..7ac9291f45f1 100644 --- a/tools/testing/selftests/bpf/sdt.h +++ b/tools/testing/selftests/bpf/sdt.h @@ -234,6 +234,13 @@ __extension__ extern unsigned long long __sdt_unsp; #define _SDT_NOP nop #endif +/* Use 5 byte nop for x86_64 to allow optimizing uprobes. */ +#if defined(__x86_64__) +# define _SDT_DEF_NOP _SDT_ASM_5(990: .byte 0x0f, 0x1f, 0x44, 0x00, 0x00) +#else +# define _SDT_DEF_NOP _SDT_ASM_1(990: _SDT_NOP) +#endif + #define _SDT_NOTE_NAME "stapsdt" #define _SDT_NOTE_TYPE 3 @@ -286,7 +293,7 @@ __extension__ extern unsigned long long __sdt_unsp; #define _SDT_ASM_BODY(provider, name, pack_args, args, ...) \ _SDT_DEF_MACROS \ - _SDT_ASM_1(990: _SDT_NOP) \ + _SDT_DEF_NOP \ _SDT_ASM_3( .pushsection .note.stapsdt,_SDT_ASM_AUTOGROUP,"note") \ _SDT_ASM_1( .balign 4) \ _SDT_ASM_3( .4byte 992f-991f, 994f-993f, _SDT_NOTE_TYPE) \ From patchwork Tue Nov 5 13:34:03 2024 X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 13862978
From: Jiri Olsa To: Oleg Nesterov , Peter Zijlstra , Andrii Nakryiko Cc: bpf@vger.kernel.org, Song Liu , Yonghong Song , John Fastabend , Hao Luo , Steven Rostedt , Masami Hiramatsu , Alan Maguire , linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Subject: [RFC bpf-next 09/11] selftests/bpf: Add usdt trigger bench Date: Tue, 5 Nov 2024 14:34:03 +0100 Message-ID: <20241105133405.2703607-10-jolsa@kernel.org> X-Mailer:
git-send-email 2.47.0 In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org> References: <20241105133405.2703607-1-jolsa@kernel.org>

Adding a usdt trigger bench to measure optimized usdt probes.

Signed-off-by: Jiri Olsa --- tools/testing/selftests/bpf/bench.c | 2 + .../selftests/bpf/benchs/bench_trigger.c | 45 +++++++++++++++++++ .../selftests/bpf/progs/trigger_bench.c | 10 ++++- 3 files changed, 56 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c index 1bd403a5ef7b..dc5121e49623 100644 --- a/tools/testing/selftests/bpf/bench.c +++ b/tools/testing/selftests/bpf/bench.c @@ -526,6 +526,7 @@ extern const struct bench bench_trig_uprobe_multi_push; extern const struct bench bench_trig_uretprobe_multi_push; extern const struct bench bench_trig_uprobe_multi_ret; extern const struct bench bench_trig_uretprobe_multi_ret; +extern const struct bench bench_trig_usdt; extern const struct bench bench_rb_libbpf; extern const struct bench bench_rb_custom; @@ -586,6 +587,7 @@ static const struct bench *benchs[] = { &bench_trig_uretprobe_multi_push, &bench_trig_uprobe_multi_ret, &bench_trig_uretprobe_multi_ret, + &bench_trig_usdt, /* ringbuf/perfbuf benchmarks */ &bench_rb_libbpf, &bench_rb_custom, diff --git a/tools/testing/selftests/bpf/benchs/bench_trigger.c b/tools/testing/selftests/bpf/benchs/bench_trigger.c index 32e9f194d449..bdee8b8362d0 100644 --- a/tools/testing/selftests/bpf/benchs/bench_trigger.c +++ b/tools/testing/selftests/bpf/benchs/bench_trigger.c @@ -8,6 +8,7 @@ #include "bench.h" #include "trigger_bench.skel.h" #include "trace_helpers.h" +#include "../sdt.h" #define MAX_TRIG_BATCH_ITERS 1000 @@ -333,6 +334,13 @@ static void *uprobe_producer_ret(void *input) return NULL; } +static void *uprobe_producer_usdt(void *input) +{ + while (true) + STAP_PROBE(trigger, usdt); +
 	return NULL;
+}
+
 static void usetup(bool use_retprobe, bool use_multi, void *target_addr)
 {
 	size_t uprobe_offset;
@@ -383,6 +391,37 @@ static void usetup(bool use_retprobe, bool use_multi, void *target_addr)
 	}
 }
 
+static void __usdt_setup(const char *provider, const char *name)
+{
+	struct bpf_link *link;
+	int err;
+
+	setup_libbpf();
+
+	ctx.skel = trigger_bench__open();
+	if (!ctx.skel) {
+		fprintf(stderr, "failed to open skeleton\n");
+		exit(1);
+	}
+
+	bpf_program__set_autoload(ctx.skel->progs.bench_trigger_usdt, true);
+
+	err = trigger_bench__load(ctx.skel);
+	if (err) {
+		fprintf(stderr, "failed to load skeleton\n");
+		exit(1);
+	}
+
+	link = bpf_program__attach_usdt(ctx.skel->progs.bench_trigger_usdt,
+					-1 /* all PIDs */, "/proc/self/exe",
+					provider, name, NULL);
+	if (!link) {
+		fprintf(stderr, "failed to attach usdt!\n");
+		exit(1);
+	}
+	ctx.skel->links.bench_trigger_usdt = link;
+}
+
 static void usermode_count_setup(void)
 {
 	ctx.usermode_counters = true;
@@ -448,6 +487,11 @@ static void uretprobe_multi_ret_setup(void)
 	usetup(true, true /* use_multi */, &uprobe_target_ret);
 }
 
+static void usdt_setup(void)
+{
+	__usdt_setup("trigger", "usdt");
+}
+
 const struct bench bench_trig_syscall_count = {
 	.name = "trig-syscall-count",
 	.validate = trigger_validate,
@@ -506,3 +550,4 @@ BENCH_TRIG_USERMODE(uprobe_multi_ret, ret, "uprobe-multi-ret");
 BENCH_TRIG_USERMODE(uretprobe_multi_nop, nop, "uretprobe-multi-nop");
 BENCH_TRIG_USERMODE(uretprobe_multi_push, push, "uretprobe-multi-push");
 BENCH_TRIG_USERMODE(uretprobe_multi_ret, ret, "uretprobe-multi-ret");
+BENCH_TRIG_USERMODE(usdt, usdt, "usdt");
diff --git a/tools/testing/selftests/bpf/progs/trigger_bench.c b/tools/testing/selftests/bpf/progs/trigger_bench.c
index 044a6d78923e..7b7d4a71e7d4 100644
--- a/tools/testing/selftests/bpf/progs/trigger_bench.c
+++ b/tools/testing/selftests/bpf/progs/trigger_bench.c
@@ -1,8 +1,9 @@
 // SPDX-License-Identifier: GPL-2.0
 // Copyright (c) 2020 Facebook
-#include <linux/bpf.h>
+#include "vmlinux.h"
 #include <asm/unistd.h>
 #include <bpf/bpf_helpers.h>
+#include <bpf/usdt.bpf.h>
 #include <bpf/bpf_tracing.h>
 #include "bpf_misc.h"
 
@@ -138,3 +139,10 @@ int bench_trigger_rawtp(void *ctx)
 	inc_counter();
 	return 0;
 }
+
+SEC("?usdt")
+int bench_trigger_usdt(struct pt_regs *ctx)
+{
+	inc_counter();
+	return 0;
+}

From patchwork Tue Nov 5 13:34:04 2024
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 13862979
X-Patchwork-Delegate: mhiramat@kernel.org
From: Jiri Olsa
To: Oleg Nesterov, Peter Zijlstra, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Song Liu, Yonghong Song, John Fastabend,
    Hao Luo, Steven Rostedt, Masami Hiramatsu, Alan Maguire,
    linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [RFC bpf-next 10/11] selftests/bpf: Add uprobe/usdt optimized test
Date: Tue, 5 Nov 2024 14:34:04 +0100
Message-ID: <20241105133405.2703607-11-jolsa@kernel.org>
In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org>
References: <20241105133405.2703607-1-jolsa@kernel.org>

Adding tests for optimized uprobe/usdt probes, checking that we get
the expected trampoline and that the attached bpf programs get
executed properly.
Signed-off-by: Jiri Olsa
---
 .../bpf/prog_tests/uprobe_optimized.c         | 192 ++++++++++++++++++
 .../selftests/bpf/progs/uprobe_optimized.c    |  29 +++
 2 files changed, 221 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c
 create mode 100644 tools/testing/selftests/bpf/progs/uprobe_optimized.c

diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c b/tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c
new file mode 100644
index 000000000000..f6eb4089b1e2
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c
@@ -0,0 +1,192 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+
+#ifdef __x86_64__
+
+#include "sdt.h"
+#include "uprobe_optimized.skel.h"
+
+#define TRAMP "[uprobes-trampoline]"
+
+__naked noinline void uprobe_test(void)
+{
+	asm volatile (".byte 0x0f, 0x1f, 0x44, 0x00, 0x00\n\t"
+		      "ret\n\t");
+}
+
+static int find_uprobes_trampoline(void **start, void **end)
+{
+	char line[128];
+	int ret = -1;
+	FILE *maps;
+
+	maps = fopen("/proc/self/maps", "r");
+	if (!maps) {
+		fprintf(stderr, "cannot open maps\n");
+		return -1;
+	}
+
+	while (fgets(line, sizeof(line), maps)) {
+		int m = -1;
+
+		/* We care only about private r-x mappings. */
+		if (sscanf(line, "%p-%p r-xp %*x %*x:%*x %*u %n", start, end, &m) != 2)
+			continue;
+		if (m < 0)
+			continue;
+		if (!strncmp(&line[m], TRAMP, sizeof(TRAMP)-1)) {
+			ret = 0;
+			break;
+		}
+	}
+
+	fclose(maps);
+	return ret;
+}
+
+static void check_attach(struct uprobe_optimized *skel, void (*trigger)(void))
+{
+	void *tramp_start, *tramp_end;
+	struct __arch_relative_insn {
+		u8 op;
+		s32 raddr;
+	} __packed *call;
+	unsigned long delta;
+
+	/* Uprobe gets optimized after first trigger, so let's press twice. */
+	trigger();
+	trigger();
+
+	if (!ASSERT_OK(find_uprobes_trampoline(&tramp_start, &tramp_end), "uprobes_trampoline"))
+		return;
+
+	/* Make sure bpf program got executed.. */
+	ASSERT_EQ(skel->bss->executed, 2, "executed");
+
+	/* .. and check the trampoline is as expected. */
+	call = (struct __arch_relative_insn *) trigger;
+
+	delta = tramp_start > (void *) trigger ?
+		tramp_start - (void *) trigger :
+		(void *) trigger - tramp_start;
+
+	/* and minus call instruction size itself */
+	delta -= 5;
+
+	ASSERT_EQ(call->op, 0xe8, "call");
+	ASSERT_EQ(call->raddr, delta, "delta");
+	ASSERT_EQ(tramp_end - tramp_start, 4096, "size");
+}
+
+static void check_detach(struct uprobe_optimized *skel, void (*trigger)(void))
+{
+	unsigned char nop5[5] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };
+	void *tramp_start, *tramp_end;
+
+	/* [uprobes_trampoline] stays after detach */
+	ASSERT_OK(find_uprobes_trampoline(&tramp_start, &tramp_end), "uprobes_trampoline");
+	ASSERT_OK(memcmp(trigger, nop5, 5), "nop5");
+}
+
+static void check(struct uprobe_optimized *skel, struct bpf_link *link,
+		  void (*trigger)(void))
+{
+	check_attach(skel, trigger);
+	bpf_link__destroy(link);
+	check_detach(skel, uprobe_test);
+}
+
+static void test_uprobe(void)
+{
+	struct uprobe_optimized *skel;
+	unsigned long offset;
+
+	skel = uprobe_optimized__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "uprobe_optimized__open_and_load"))
+		return;
+
+	offset = get_uprobe_offset(&uprobe_test);
+	if (!ASSERT_GE(offset, 0, "get_uprobe_offset"))
+		goto cleanup;
+
+	skel->links.test_1 = bpf_program__attach_uprobe_opts(skel->progs.test_1,
+				0, "/proc/self/exe", offset, NULL);
+	if (!ASSERT_OK_PTR(skel->links.test_1, "bpf_program__attach_uprobe_opts"))
+		goto cleanup;
+
+	check(skel, skel->links.test_1, uprobe_test);
+	skel->links.test_1 = NULL;
+
+cleanup:
+	uprobe_optimized__destroy(skel);
+}
+
+static void test_uprobe_multi(void)
+{
+	struct uprobe_optimized *skel;
+
+	skel = uprobe_optimized__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "uprobe_optimized__open_and_load"))
+		return;
+
+	skel->links.test_2 = bpf_program__attach_uprobe_multi(skel->progs.test_2,
+				0, "/proc/self/exe", "uprobe_test", NULL);
+	if (!ASSERT_OK_PTR(skel->links.test_2, "bpf_program__attach_uprobe_multi"))
+		goto cleanup;
+
+	check(skel, skel->links.test_2, uprobe_test);
+	skel->links.test_2 = NULL;
+
+cleanup:
+	uprobe_optimized__destroy(skel);
}
+
+__naked noinline void usdt_test(void)
+{
+	STAP_PROBE(optimized_uprobe, usdt);
+	asm volatile ("ret\n");
+}
+
+static void test_usdt(void)
+{
+	struct uprobe_optimized *skel;
+
+	skel = uprobe_optimized__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "uprobe_optimized__open_and_load"))
+		return;
+
+	skel->links.test_3 = bpf_program__attach_usdt(skel->progs.test_3,
+				-1 /* all PIDs */, "/proc/self/exe",
+				"optimized_uprobe", "usdt", NULL);
+	if (!ASSERT_OK_PTR(skel->links.test_3, "bpf_program__attach_usdt"))
+		goto cleanup;
+
+	check(skel, skel->links.test_3, usdt_test);
+	skel->links.test_3 = NULL;
+
+cleanup:
+	uprobe_optimized__destroy(skel);
+}
+
+static void test_optimized(void)
+{
+	if (test__start_subtest("uprobe"))
+		test_uprobe();
+	if (test__start_subtest("uprobe_multi"))
+		test_uprobe_multi();
+	if (test__start_subtest("usdt"))
+		test_usdt();
+}
+#else
+static void test_optimized(void)
+{
+	test__skip();
+}
+#endif /* __x86_64__ */
+
+void test_uprobe_optimized(void)
+{
+	test_optimized();
+}
diff --git a/tools/testing/selftests/bpf/progs/uprobe_optimized.c b/tools/testing/selftests/bpf/progs/uprobe_optimized.c
new file mode 100644
index 000000000000..7f29c968b7c4
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/uprobe_optimized.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+int executed = 0;
+
+SEC("uprobe")
+int BPF_UPROBE(test_1)
+{
+	executed++;
+	return 0;
+}
+
+SEC("uprobe.multi")
+int BPF_UPROBE(test_2)
+{
+	executed++;
+	return 0;
+}
+
+SEC("usdt")
+int test_3(struct pt_regs *ctx)
+{
+	executed++;
+	return 0;
+}

From patchwork Tue Nov 5 13:34:05 2024
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 13862980
X-Patchwork-Delegate: mhiramat@kernel.org
From: Jiri Olsa
To: Oleg Nesterov, Peter Zijlstra, Andrii Nakryiko
Cc: bpf@vger.kernel.org, Song Liu, Yonghong Song, John Fastabend,
    Hao Luo, Steven Rostedt, Masami Hiramatsu, Alan Maguire,
    linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [RFC bpf-next 11/11] selftests/bpf: Add hit/attach/detach race optimized uprobe test
Date: Tue, 5 Nov 2024 14:34:05 +0100
Message-ID: <20241105133405.2703607-12-jolsa@kernel.org>
In-Reply-To: <20241105133405.2703607-1-jolsa@kernel.org>
References: <20241105133405.2703607-1-jolsa@kernel.org>

Adding a test that makes sure parallel execution of the uprobe and
attach/detach of an optimized uprobe on it work properly.

Signed-off-by: Jiri Olsa
---
 .../bpf/prog_tests/uprobe_optimized.c         | 60 +++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c b/tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c
index f6eb4089b1e2..4b9a579c232d 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_optimized.c
@@ -170,6 +170,64 @@ static void test_usdt(void)
 	uprobe_optimized__destroy(skel);
 }
 
+static bool race_stop;
+
+static void *worker(void *arg)
+{
+	while (!race_stop)
+		uprobe_test();
+	return NULL;
+}
+
+static void test_race(void)
+{
+	int err, i, nr_cpus, rounds = 0;
+	struct uprobe_optimized *skel = NULL;
+	pthread_t *threads;
+	time_t start;
+
+	nr_cpus = libbpf_num_possible_cpus();
+	if (!ASSERT_GE(nr_cpus, 0, "nr_cpus"))
+		return;
+
+	threads = malloc(sizeof(*threads) * nr_cpus);
+	if (!ASSERT_OK_PTR(threads, "malloc"))
+		return;
+
+	for (i = 0; i < nr_cpus; i++) {
+		err = pthread_create(&threads[i], NULL, worker, NULL);
+		if (!ASSERT_OK(err, "pthread_create"))
+			goto cleanup;
+	}
+
+	skel = uprobe_optimized__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "uprobe_optimized__open_and_load"))
+		goto cleanup;
+
+	start = time(NULL);
+	while (1) {
+		skel->links.test_2 = bpf_program__attach_uprobe_multi(skel->progs.test_2, -1,
+				"/proc/self/exe", "uprobe_test", NULL);
+		if (!ASSERT_OK_PTR(skel->links.test_2, "bpf_program__attach_uprobe_multi"))
+			break;
+
+		bpf_link__destroy(skel->links.test_2);
+		skel->links.test_2 = NULL;
+		rounds++;
+
+		if (start + 2 < time(NULL))
+			break;
+	}
+
+	printf("rounds: %d hits: %d\n", rounds, skel->bss->executed);
+
+cleanup:
+	race_stop = true;
+	for (i = 0; i < nr_cpus; i++)
+		pthread_join(threads[i], NULL);
+	uprobe_optimized__destroy(skel);
+}
+
 static void test_optimized(void)
 {
 	if (test__start_subtest("uprobe"))
@@ -178,6 +236,8 @@ static void test_optimized(void)
 		test_uprobe_multi();
 	if (test__start_subtest("usdt"))
 		test_usdt();
+	if (test__start_subtest("race"))
+		test_race();
 }
 #else
 static void test_optimized(void)