From patchwork Tue Mar 18 22:14:55 2025
X-Patchwork-Submitter: David Hildenbrand
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Andrew Morton, Andrii Nakryiko, Matthew Wilcox, Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", Tong Tiangen
Subject: [PATCH v2 1/3] kernel/events/uprobes: pass VMA instead of MM to remove_breakpoint()
Date: Tue, 18 Mar 2025 23:14:55 +0100
Message-ID: <20250318221457.3055598-2-david@redhat.com>
In-Reply-To: <20250318221457.3055598-1-david@redhat.com>
References: <20250318221457.3055598-1-david@redhat.com>

... and remove the "MM" argument from install_breakpoint(), because it
can easily be derived from the VMA.
Signed-off-by: David Hildenbrand
---
 kernel/events/uprobes.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 5d6f3d9d29f44..259038d099819 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1134,10 +1134,10 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 	return ret;
 }
 
-static int
-install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
-		   struct vm_area_struct *vma, unsigned long vaddr)
+static int install_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
+		unsigned long vaddr)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	bool first_uprobe;
 	int ret;
 
@@ -1162,9 +1162,11 @@ install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
 	return ret;
 }
 
-static int
-remove_breakpoint(struct uprobe *uprobe, struct mm_struct *mm, unsigned long vaddr)
+static int remove_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
+		unsigned long vaddr)
 {
+	struct mm_struct *mm = vma->vm_mm;
+
 	set_bit(MMF_RECALC_UPROBES, &mm->flags);
 	return set_orig_insn(&uprobe->arch, mm, vaddr);
 }
@@ -1296,10 +1298,10 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 		if (is_register) {
 			/* consult only the "caller", new consumer. */
 			if (consumer_filter(new, mm))
-				err = install_breakpoint(uprobe, mm, vma, info->vaddr);
+				err = install_breakpoint(uprobe, vma, info->vaddr);
 		} else if (test_bit(MMF_HAS_UPROBES, &mm->flags)) {
 			if (!filter_chain(uprobe, mm))
-				err |= remove_breakpoint(uprobe, mm, info->vaddr);
+				err |= remove_breakpoint(uprobe, vma, info->vaddr);
 		}
  unlock:
@@ -1472,7 +1474,7 @@ static int unapply_uprobe(struct uprobe *uprobe, struct mm_struct *mm)
 			continue;
 
 		vaddr = offset_to_vaddr(vma, uprobe->offset);
-		err |= remove_breakpoint(uprobe, mm, vaddr);
+		err |= remove_breakpoint(uprobe, vma, vaddr);
 	}
 	mmap_read_unlock(mm);
 
@@ -1610,7 +1612,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
 		if (!fatal_signal_pending(current) &&
 		    filter_chain(uprobe, vma->vm_mm)) {
 			unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);
-			install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
+			install_breakpoint(uprobe, vma, vaddr);
 		}
 		put_uprobe(uprobe);
 	}

From patchwork Tue Mar 18 22:14:56 2025
X-Patchwork-Submitter: David Hildenbrand
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Andrew Morton, Andrii Nakryiko, Matthew Wilcox, Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", Tong Tiangen
Subject: [PATCH v2 2/3] kernel/events/uprobes: pass VMA to set_swbp(), set_orig_insn() and uprobe_write_opcode()
Date: Tue, 18 Mar 2025 23:14:56 +0100
Message-ID: <20250318221457.3055598-3-david@redhat.com>
In-Reply-To: <20250318221457.3055598-1-david@redhat.com>
References: <20250318221457.3055598-1-david@redhat.com>
We already have the VMA, no need to look it up using
get_user_page_vma_remote(). We can now switch to
get_user_pages_remote().

Signed-off-by: David Hildenbrand
---
 arch/arm/probes/uprobes/core.c |  4 ++--
 include/linux/uprobes.h        |  6 +++---
 kernel/events/uprobes.c        | 33 +++++++++++++++++----------------
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/arch/arm/probes/uprobes/core.c b/arch/arm/probes/uprobes/core.c
index f5f790c6e5f89..885e0c5e8c20d 100644
--- a/arch/arm/probes/uprobes/core.c
+++ b/arch/arm/probes/uprobes/core.c
@@ -26,10 +26,10 @@ bool is_swbp_insn(uprobe_opcode_t *insn)
 		(UPROBE_SWBP_ARM_INSN & 0x0fffffff);
 }
 
-int set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm,
+int set_swbp(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 	     unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr,
+	return uprobe_write_opcode(auprobe, vma, vaddr,
 		   __opcode_to_mem_arm(auprobe->bpinsn));
 }
 
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index b1df7d792fa16..288a42cc40baa 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -185,13 +185,13 @@ struct uprobes_state {
 };
 
 extern void __init uprobes_init(void);
-extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
-extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
+extern int set_swbp(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
+extern int set_orig_insn(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
 extern bool is_swbp_insn(uprobe_opcode_t *insn);
 extern bool is_trap_insn(uprobe_opcode_t *insn);
 extern unsigned long uprobe_get_swbp_addr(struct pt_regs *regs);
 extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
-extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_t);
+extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma, unsigned long vaddr, uprobe_opcode_t);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 259038d099819..ac17c16f65d63 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -474,19 +474,19 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  *
  * uprobe_write_opcode - write the opcode at a given virtual address.
  * @auprobe: arch specific probepoint information.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @vaddr: the virtual address to store the opcode.
  * @opcode: opcode to be written at @vaddr.
  *
  * Called with mm->mmap_lock held for read or write.
  * Return 0 (success) or a negative errno.
  */
-int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
-			unsigned long vaddr, uprobe_opcode_t opcode)
+int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+		unsigned long vaddr, uprobe_opcode_t opcode)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	struct uprobe *uprobe;
 	struct page *old_page, *new_page;
-	struct vm_area_struct *vma;
 	int ret, is_register, ref_ctr_updated = 0;
 	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
@@ -498,9 +498,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (is_register)
 		gup_flags |= FOLL_SPLIT_PMD;
 
 	/* Read the page with vaddr into memory */
-	old_page = get_user_page_vma_remote(mm, vaddr, gup_flags, &vma);
-	if (IS_ERR(old_page))
-		return PTR_ERR(old_page);
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, NULL);
+	if (ret != 1)
+		return ret;
 
 	ret = verify_opcode(old_page, vaddr, &opcode);
 	if (ret <= 0)
@@ -590,30 +590,31 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 /**
  * set_swbp - store breakpoint at a given address.
  * @auprobe: arch specific probepoint information.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @vaddr: the virtual address to insert the opcode.
  *
  * For mm @mm, store the breakpoint instruction at @vaddr.
  * Return 0 (success) or a negative errno.
  */
-int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
+int __weak set_swbp(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+		unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr, UPROBE_SWBP_INSN);
+	return uprobe_write_opcode(auprobe, vma, vaddr, UPROBE_SWBP_INSN);
 }
 
 /**
  * set_orig_insn - Restore the original instruction.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @auprobe: arch specific probepoint information.
  * @vaddr: the virtual address to insert the opcode.
  *
  * For mm @mm, restore the original opcode (opcode) at @vaddr.
  * Return 0 (success) or a negative errno.
  */
-int __weak
-set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
+int __weak set_orig_insn(struct arch_uprobe *auprobe,
+		struct vm_area_struct *vma, unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr,
+	return uprobe_write_opcode(auprobe, vma, vaddr,
 			*(uprobe_opcode_t *)&auprobe->insn);
 }
 
@@ -1153,7 +1154,7 @@ static int install_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
 	if (first_uprobe)
 		set_bit(MMF_HAS_UPROBES, &mm->flags);
 
-	ret = set_swbp(&uprobe->arch, mm, vaddr);
+	ret = set_swbp(&uprobe->arch, vma, vaddr);
 	if (!ret)
 		clear_bit(MMF_RECALC_UPROBES, &mm->flags);
 	else if (first_uprobe)
@@ -1168,7 +1169,7 @@ static int remove_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 
 	set_bit(MMF_RECALC_UPROBES, &mm->flags);
-	return set_orig_insn(&uprobe->arch, mm, vaddr);
+	return set_orig_insn(&uprobe->arch, vma, vaddr);
 }
 
 struct map_info {

From patchwork Tue Mar 18 22:14:57 2025
X-Patchwork-Submitter: David Hildenbrand
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Andrew Morton, Andrii Nakryiko, Matthew Wilcox, Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", Tong Tiangen
Subject: [PATCH v2 3/3] kernel/events/uprobes: uprobe_write_opcode() rewrite
Date: Tue, 18 Mar 2025 23:14:57 +0100
Message-ID: <20250318221457.3055598-4-david@redhat.com>
In-Reply-To: <20250318221457.3055598-1-david@redhat.com>
References: <20250318221457.3055598-1-david@redhat.com>
uprobe_write_opcode() does some pretty low-level things that really, it
shouldn't be doing: for example, manually breaking COW by allocating
anonymous folios and replacing mapped pages.

Further, it does seem to do some shaky things: for example, writing to
possibly COW-shared anonymous pages or zapping anonymous pages that might
be pinned. We're also not taking care of uffd, uffd-wp, softdirty ...
although these are rather corner cases here. Let's just get it right like
ordinary ptrace writes would.

Let's rewrite the code, leaving COW-breaking to core-MM, triggered by
FOLL_FORCE|FOLL_WRITE (note that the code was already using FOLL_FORCE).
We'll use GUP to look up/fault in the page and break COW if required.
Then, we'll walk the page tables using a folio_walk to perform our page
modification atomically, by temporarily unmapping the PTE + flushing the
TLB. Likely, we could avoid the temporary unmap in case we can just
atomically write the instruction, but that will be a separate project.

Unfortunately, we still have to implement the zapping logic manually,
because we only want to zap in specific circumstances (e.g., page content
identical).

Note that we can now handle large folios (compound pages) and the shared
zeropage just fine, so drop these checks.
Signed-off-by: David Hildenbrand
Acked-by: Oleg Nesterov
---
 kernel/events/uprobes.c | 311 ++++++++++++++++++++--------------------
 1 file changed, 157 insertions(+), 154 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ac17c16f65d63..671b8b6ad4e1b 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include			/* check_stable_address_space */
+#include
 #include

@@ -151,91 +152,6 @@ static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
 	return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
 }

-/**
- * __replace_page - replace page in vma by new page.
- * based on replace_page in mm/ksm.c
- *
- * @vma:      vma that holds the pte pointing to page
- * @addr:     address the old @page is mapped at
- * @old_page: the page we are replacing by new_page
- * @new_page: the modified page we replace page by
- *
- * If @new_page is NULL, only unmap @old_page.
- *
- * Returns 0 on success, negative error code otherwise.
- */
-static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
-			  struct page *old_page, struct page *new_page)
-{
-	struct folio *old_folio = page_folio(old_page);
-	struct folio *new_folio;
-	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
-	int err;
-	struct mmu_notifier_range range;
-	pte_t pte;
-
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
-				addr + PAGE_SIZE);
-
-	if (new_page) {
-		new_folio = page_folio(new_page);
-		err = mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL);
-		if (err)
-			return err;
-	}
-
-	/* For folio_free_swap() below */
-	folio_lock(old_folio);
-
-	mmu_notifier_invalidate_range_start(&range);
-	err = -EAGAIN;
-	if (!page_vma_mapped_walk(&pvmw))
-		goto unlock;
-	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
-	pte = ptep_get(pvmw.pte);
-
-	/*
-	 * Handle PFN swap PTES, such as device-exclusive ones, that actually
-	 * map pages: simply trigger GUP again to fix it up.
-	 */
-	if (unlikely(!pte_present(pte))) {
-		page_vma_mapped_walk_done(&pvmw);
-		goto unlock;
-	}
-
-	if (new_page) {
-		folio_get(new_folio);
-		folio_add_new_anon_rmap(new_folio, vma, addr, RMAP_EXCLUSIVE);
-		folio_add_lru_vma(new_folio, vma);
-	} else
-		/* no new page, just dec_mm_counter for old_page */
-		dec_mm_counter(mm, MM_ANONPAGES);
-
-	if (!folio_test_anon(old_folio)) {
-		dec_mm_counter(mm, mm_counter_file(old_folio));
-		inc_mm_counter(mm, MM_ANONPAGES);
-	}
-
-	flush_cache_page(vma, addr, pte_pfn(pte));
-	ptep_clear_flush(vma, addr, pvmw.pte);
-	if (new_page)
-		set_pte_at(mm, addr, pvmw.pte,
-			   mk_pte(new_page, vma->vm_page_prot));
-
-	folio_remove_rmap_pte(old_folio, old_page, vma);
-	if (!folio_mapped(old_folio))
-		folio_free_swap(old_folio);
-	page_vma_mapped_walk_done(&pvmw);
-	folio_put(old_folio);
-
-	err = 0;
- unlock:
-	mmu_notifier_invalidate_range_end(&range);
-	folio_unlock(old_folio);
-	return err;
-}
-
 /**
  * is_swbp_insn - check if instruction is breakpoint instruction.
  * @insn: instruction to be checked.
@@ -463,6 +379,95 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
 	return ret;
 }

+static bool orig_page_is_identical(struct vm_area_struct *vma,
+		unsigned long vaddr, struct page *page, bool *pmd_mappable)
+{
+	const pgoff_t index = vaddr_to_offset(vma, vaddr) >> PAGE_SHIFT;
+	struct folio *orig_folio = filemap_get_folio(vma->vm_file->f_mapping,
+						     index);
+	struct page *orig_page;
+	bool identical;
+
+	if (IS_ERR(orig_folio))
+		return false;
+	orig_page = folio_file_page(orig_folio, index);
+
+	*pmd_mappable = folio_test_pmd_mappable(orig_folio);
+	identical = folio_test_uptodate(orig_folio) &&
+		    pages_identical(page, orig_page);
+	folio_put(orig_folio);
+	return identical;
+}
+
+static int __uprobe_write_opcode(struct vm_area_struct *vma,
+		struct folio_walk *fw, struct folio *folio,
+		unsigned long opcode_vaddr, uprobe_opcode_t opcode)
+{
+	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
+	const bool is_register = !!is_swbp_insn(&opcode);
+	bool pmd_mappable;
+
+	/* For now, we'll only handle PTE-mapped folios. */
+	if (fw->level != FW_LEVEL_PTE)
+		return -EFAULT;
+
+	/*
+	 * See can_follow_write_pte(): we'd actually prefer a writable PTE here,
+	 * but the VMA might not be writable.
+	 */
+	if (!pte_write(fw->pte)) {
+		if (!PageAnonExclusive(fw->page))
+			return -EFAULT;
+		if (unlikely(userfaultfd_pte_wp(vma, fw->pte)))
+			return -EFAULT;
+		/* SOFTDIRTY is handled via pte_mkdirty() below. */
+	}
+
+	/*
+	 * We'll temporarily unmap the page and flush the TLB, such that we can
+	 * modify the page atomically.
+	 */
+	flush_cache_page(vma, vaddr, pte_pfn(fw->pte));
+	fw->pte = ptep_clear_flush(vma, vaddr, fw->ptep);
+	copy_to_page(fw->page, opcode_vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+
+	/*
+	 * When unregistering, we may only zap a PTE if uffd is disabled and
+	 * there are no unexpected folio references ...
+	 */
+	if (is_register || userfaultfd_missing(vma) ||
+	    (folio_ref_count(folio) != folio_mapcount(folio) + 1 +
+	     folio_test_swapcache(folio) * folio_nr_pages(folio)))
+		goto remap;
+
+	/*
+	 * ... and the mapped page is identical to the original page that
+	 * would get faulted in on next access.
+	 */
+	if (!orig_page_is_identical(vma, vaddr, fw->page, &pmd_mappable))
+		goto remap;
+
+	dec_mm_counter(vma->vm_mm, MM_ANONPAGES);
+	folio_remove_rmap_pte(folio, fw->page, vma);
+	if (!folio_mapped(folio) && folio_test_swapcache(folio) &&
+	    folio_trylock(folio)) {
+		folio_free_swap(folio);
+		folio_unlock(folio);
+	}
+	folio_put(folio);
+
+	return pmd_mappable;
+remap:
+	/*
+	 * Make sure that our copy_to_page() changes become visible before the
+	 * set_pte_at() write.
+	 */
+	smp_wmb();
+	/* We modified the page. Make sure to mark the PTE dirty. */
+	set_pte_at(vma->vm_mm, vaddr, fw->ptep, pte_mkdirty(fw->pte));
+	return 0;
+}
+
 /*
  * NOTE:
  * Expect the breakpoint instruction to be the smallest size instruction for
@@ -475,116 +480,114 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  * uprobe_write_opcode - write the opcode at a given virtual address.
  * @auprobe: arch specific probepoint information.
  * @vma: the probed virtual memory area.
- * @vaddr: the virtual address to store the opcode.
- * @opcode: opcode to be written at @vaddr.
+ * @opcode_vaddr: the virtual address to store the opcode.
+ * @opcode: opcode to be written at @opcode_vaddr.
  *
  * Called with mm->mmap_lock held for read or write.
  * Return 0 (success) or a negative errno.
  */
 int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
-		unsigned long vaddr, uprobe_opcode_t opcode)
+		const unsigned long opcode_vaddr, uprobe_opcode_t opcode)
 {
+	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
 	struct mm_struct *mm = vma->vm_mm;
 	struct uprobe *uprobe;
-	struct page *old_page, *new_page;
 	int ret, is_register, ref_ctr_updated = 0;
-	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
+	struct mmu_notifier_range range;
+	struct folio_walk fw;
+	struct folio *folio;
+	struct page *page;

 	is_register = is_swbp_insn(&opcode);
 	uprobe = container_of(auprobe, struct uprobe, arch);

-retry:
+	if (WARN_ON_ONCE(!is_cow_mapping(vma->vm_flags)))
+		return -EINVAL;
+
+	/*
+	 * When registering, we have to break COW to get an exclusive anonymous
+	 * page that we can safely modify. Use FOLL_WRITE to trigger a write
+	 * fault if required. When unregistering, we might be lucky and the
+	 * anon page is already gone. So defer write faults until really
+	 * required. Use FOLL_SPLIT_PMD, because __uprobe_write_opcode()
+	 * cannot deal with PMDs yet.
+	 */
 	if (is_register)
-		gup_flags |= FOLL_SPLIT_PMD;
-	/* Read the page with vaddr into memory */
-	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, NULL);
-	if (ret != 1)
-		return ret;
+		gup_flags |= FOLL_WRITE | FOLL_SPLIT_PMD;

-	ret = verify_opcode(old_page, vaddr, &opcode);
+retry:
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &page, NULL);
 	if (ret <= 0)
-		goto put_old;
-
-	if (is_zero_page(old_page)) {
-		ret = -EINVAL;
-		goto put_old;
-	}
+		goto out;
+	folio = page_folio(page);

-	if (WARN(!is_register && PageCompound(old_page),
-		 "uprobe unregister should never work on compound page\n")) {
-		ret = -EINVAL;
-		goto put_old;
+	ret = verify_opcode(page, opcode_vaddr, &opcode);
+	if (ret <= 0) {
+		folio_put(folio);
+		goto out;
 	}

 	/* We are going to replace instruction, update ref_ctr.
	 */
 	if (!ref_ctr_updated && uprobe->ref_ctr_offset) {
 		ret = update_ref_ctr(uprobe, mm, is_register ? 1 : -1);
-		if (ret)
-			goto put_old;
+		if (ret) {
+			folio_put(folio);
+			goto out;
+		}

 		ref_ctr_updated = 1;
 	}

 	ret = 0;
-	if (!is_register && !PageAnon(old_page))
-		goto put_old;
-
-	ret = anon_vma_prepare(vma);
-	if (ret)
-		goto put_old;
-
-	ret = -ENOMEM;
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
-	if (!new_page)
-		goto put_old;
-
-	__SetPageUptodate(new_page);
-	copy_highpage(new_page, old_page);
-	copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+	if (unlikely(!folio_test_anon(folio))) {
+		VM_WARN_ON_ONCE(is_register);
+		goto out;
+	}

 	if (!is_register) {
-		struct page *orig_page;
-		pgoff_t index;
-
-		VM_BUG_ON_PAGE(!PageAnon(old_page), old_page);
-
-		index = vaddr_to_offset(vma, vaddr & PAGE_MASK) >> PAGE_SHIFT;
-		orig_page = find_get_page(vma->vm_file->f_inode->i_mapping,
-					  index);
-
-		if (orig_page) {
-			if (PageUptodate(orig_page) &&
-			    pages_identical(new_page, orig_page)) {
-				/* let go new_page */
-				put_page(new_page);
-				new_page = NULL;
-
-				if (PageCompound(orig_page))
-					orig_page_huge = true;
-			}
-			put_page(orig_page);
-		}
+		/*
+		 * In the common case, we'll be able to zap the page when
+		 * unregistering. So trigger MMU notifiers now, as we won't
+		 * be able to do it under PTL.
+		 */
+		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
+					vaddr, vaddr + PAGE_SIZE);
+		mmu_notifier_invalidate_range_start(&range);
+	}
+
+	ret = -EAGAIN;
+	/* Walk the page tables again, to perform the actual update.
	 */
+	if (folio_walk_start(&fw, vma, vaddr, 0)) {
+		if (fw.page == page)
+			ret = __uprobe_write_opcode(vma, &fw, folio,
+					opcode_vaddr, opcode);
+		folio_walk_end(&fw, vma);
 	}

-	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
-	if (new_page)
-		put_page(new_page);
-put_old:
-	put_page(old_page);
+	if (!is_register)
+		mmu_notifier_invalidate_range_end(&range);

-	if (unlikely(ret == -EAGAIN))
+	folio_put(folio);
+	switch (ret) {
+	case -EFAULT:
+		gup_flags |= FOLL_WRITE | FOLL_SPLIT_PMD;
+		fallthrough;
+	case -EAGAIN:
 		goto retry;
+	default:
+		break;
+	}

+out:
 	/* Revert back reference counter if instruction update failed. */
-	if (ret && is_register && ref_ctr_updated)
+	if (ret < 0 && is_register && ref_ctr_updated)
 		update_ref_ctr(uprobe, mm, -1);

 	/* try collapse pmd for compound page */
-	if (!ret && orig_page_huge)
+	if (ret > 0)
 		collapse_pte_mapped_thp(mm, vaddr, false);

-	return ret;
+	return ret < 0 ? ret : 0;
 }

 /**