From patchwork Mon Mar 3 05:37:07 2025
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13998220
Date: Mon, 3 Mar 2025 05:37:07 +0000
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH bpf-next v4 01/10] bpf/verifier: Factor out atomic_ptr_type_ok()

Factor out atomic_ptr_type_ok() as a helper function to be used later.

Signed-off-by: Peilin Ye
---
 kernel/bpf/verifier.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index eb1624f6e743..66b19fa4be48 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6195,6 +6195,26 @@ static bool is_arena_reg(struct bpf_verifier_env *env, int regno)
 	return reg->type == PTR_TO_ARENA;
 }
 
+/* Return false if @regno contains a pointer whose type isn't supported for
+ * atomic instruction @insn.
+ */
+static bool atomic_ptr_type_ok(struct bpf_verifier_env *env, int regno,
+			       struct bpf_insn *insn)
+{
+	if (is_ctx_reg(env, regno))
+		return false;
+	if (is_pkt_reg(env, regno))
+		return false;
+	if (is_flow_key_reg(env, regno))
+		return false;
+	if (is_sk_reg(env, regno))
+		return false;
+	if (is_arena_reg(env, regno))
+		return bpf_jit_supports_insn(insn, true);
+
+	return true;
+}
+
 static u32 *reg2btf_ids[__BPF_REG_TYPE_MAX] = {
 #ifdef CONFIG_NET
 	[PTR_TO_SOCKET] = &btf_sock_ids[BTF_SOCK_TYPE_SOCK],
@@ -7652,11 +7672,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 		return -EACCES;
 	}
 
-	if (is_ctx_reg(env, insn->dst_reg) ||
-	    is_pkt_reg(env, insn->dst_reg) ||
-	    is_flow_key_reg(env, insn->dst_reg) ||
-	    is_sk_reg(env, insn->dst_reg) ||
-	    (is_arena_reg(env, insn->dst_reg) && !bpf_jit_supports_insn(insn, true))) {
+	if (!atomic_ptr_type_ok(env, insn->dst_reg, insn)) {
 		verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
 			insn->dst_reg,
 			reg_type_str(env, reg_state(env, insn->dst_reg)->type));
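To make the helper's contract concrete, here is a minimal standalone C
sketch of the same early-return chain. The enum, the jit_supports()
stub, and the simplified signature are illustrative stand-ins for the
kernel's bpf_verifier_env machinery, not real verifier API; only the
control flow mirrors the patch:

  /* Userspace model of atomic_ptr_type_ok()'s logic; all names here
   * are hypothetical stand-ins for the kernel's register-type checks.
   */
  #include <stdbool.h>
  #include <stdio.h>

  enum reg_kind { REG_CTX, REG_PKT, REG_FLOW_KEY, REG_SK, REG_ARENA, REG_OTHER };

  static bool jit_supports(void) { return true; } /* stands in for bpf_jit_supports_insn() */

  static bool atomic_ptr_kind_ok(enum reg_kind kind)
  {
          if (kind == REG_CTX || kind == REG_PKT ||
              kind == REG_FLOW_KEY || kind == REG_SK)
                  return false;           /* atomics never allowed on these */
          if (kind == REG_ARENA)
                  return jit_supports();  /* arena pointers: defer to the JIT */
          return true;                    /* e.g. stack or map-value pointers */
  }

  int main(void)
  {
          printf("ctx: %d, arena: %d, other: %d\n",
                 atomic_ptr_kind_ok(REG_CTX),
                 atomic_ptr_kind_ok(REG_ARENA),
                 atomic_ptr_kind_ok(REG_OTHER));
          return 0;
  }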
From patchwork Mon Mar 3 05:37:19 2025
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13998221
Message-ID: <6323ac8e73a10a1c8ee547c77ed68cf8eb6b90e1.1740978603.git.yepeilin@google.com>
Date: Mon, 3 Mar 2025 05:37:19 +0000
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH bpf-next v4 02/10] bpf/verifier: Factor out check_atomic_rmw()

Currently, check_atomic() only handles atomic read-modify-write (RMW)
instructions. Since we are planning to introduce other types of atomic
instructions (i.e., atomic load/store), extract the existing RMW
handling logic into its own function named check_atomic_rmw().

Remove the @insn_idx parameter as it is not really necessary. Use
'env->insn_idx' instead, as in other places in verifier.c.
Signed-off-by: Peilin Ye
---
 kernel/bpf/verifier.c | 53 +++++++++++++++++++++++--------------------
 1 file changed, 29 insertions(+), 24 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 66b19fa4be48..e3991ac72029 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7616,28 +7616,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
 			     bool allow_trust_mismatch);
 
-static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
+static int check_atomic_rmw(struct bpf_verifier_env *env,
+			    struct bpf_insn *insn)
 {
 	int load_reg;
 	int err;
 
-	switch (insn->imm) {
-	case BPF_ADD:
-	case BPF_ADD | BPF_FETCH:
-	case BPF_AND:
-	case BPF_AND | BPF_FETCH:
-	case BPF_OR:
-	case BPF_OR | BPF_FETCH:
-	case BPF_XOR:
-	case BPF_XOR | BPF_FETCH:
-	case BPF_XCHG:
-	case BPF_CMPXCHG:
-		break;
-	default:
-		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
-		return -EINVAL;
-	}
-
 	if (BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) {
 		verbose(env, "invalid atomic operand size\n");
 		return -EINVAL;
@@ -7699,12 +7683,12 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 	/* Check whether we can read the memory, with second call for fetch
 	 * case to simulate the register fill.
 	 */
-	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_READ, -1, true, false);
 	if (!err && load_reg >= 0)
-		err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-				       BPF_SIZE(insn->code), BPF_READ, load_reg,
-				       true, false);
+		err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+				       insn->off, BPF_SIZE(insn->code),
+				       BPF_READ, load_reg, true, false);
 	if (err)
 		return err;
 
@@ -7714,13 +7698,34 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 		return err;
 	}
 
 	/* Check whether we can write into the same memory.
 	 */
-	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
 	if (err)
 		return err;
 
 	return 0;
 }
 
+static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
+{
+	switch (insn->imm) {
+	case BPF_ADD:
+	case BPF_ADD | BPF_FETCH:
+	case BPF_AND:
+	case BPF_AND | BPF_FETCH:
+	case BPF_OR:
+	case BPF_OR | BPF_FETCH:
+	case BPF_XOR:
+	case BPF_XOR | BPF_FETCH:
+	case BPF_XCHG:
+	case BPF_CMPXCHG:
+		return check_atomic_rmw(env, insn);
+	default:
+		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n",
+			insn->imm);
+		return -EINVAL;
+	}
+}
+
 /* When register 'regno' is used to read the stack (either directly or through
  * a helper function) make sure that it's within stack boundary and, depending
  * on the access type and privileges, that all elements of the stack are
@@ -19224,7 +19229,7 @@ static int do_check(struct bpf_verifier_env *env)
 			enum bpf_reg_type dst_reg_type;
 
 			if (BPF_MODE(insn->code) == BPF_ATOMIC) {
-				err = check_atomic(env, env->insn_idx, insn);
+				err = check_atomic(env, insn);
 				if (err)
 					return err;
 				env->insn_idx++;
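The resulting two-level dispatch (opcode routing first, operand checks
second) can be sketched in userspace as below. The *_model() helpers
are hypothetical stand-ins that only print, while struct bpf_insn and
the opcode flags come from the real uapi header:

  #include <linux/bpf.h>
  #include <stdio.h>

  static int check_atomic_rmw_model(const struct bpf_insn *insn)
  {
          /* the real helper then validates operand size, registers, memory */
          printf("RMW atomic, imm=0x%x\n", insn->imm);
          return 0;
  }

  static int check_atomic_model(const struct bpf_insn *insn)
  {
          switch (insn->imm) {
          case BPF_ADD: case BPF_ADD | BPF_FETCH:
          case BPF_AND: case BPF_AND | BPF_FETCH:
          case BPF_OR:  case BPF_OR  | BPF_FETCH:
          case BPF_XOR: case BPF_XOR | BPF_FETCH:
          case BPF_XCHG: case BPF_CMPXCHG:
                  return check_atomic_rmw_model(insn);
          default:
                  fprintf(stderr, "invalid atomic opcode %02x\n", insn->imm);
                  return -22; /* -EINVAL */
          }
  }

  int main(void)
  {
          /* lock *(u64 *)(r10 - 8) += r1, with fetch */
          struct bpf_insn add = { .code = BPF_STX | BPF_DW | BPF_ATOMIC,
                                  .dst_reg = 10, .src_reg = 1,
                                  .off = -8, .imm = BPF_ADD | BPF_FETCH };
          return check_atomic_model(&add) ? 1 : 0;
  }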
From patchwork Mon Mar 3 05:37:25 2025
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13998222
Message-ID: <8b39c94eac2bb7389ff12392ca666f939124ec4f.1740978603.git.yepeilin@google.com>
Date: Mon, 3 Mar 2025 05:37:25 +0000
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH bpf-next v4 03/10] bpf/verifier: Factor out check_load_mem() and check_store_reg()

Extract BPF_LDX and most non-ATOMIC BPF_STX instruction handling logic
in do_check() into helper functions to be used later. While we are
here, make that comment about "reserved fields" more specific.
Suggested-by: Eduard Zingerman
Acked-by: Eduard Zingerman
Signed-off-by: Peilin Ye
---
 kernel/bpf/verifier.c | 110 +++++++++++++++++++++++++-----------------
 1 file changed, 67 insertions(+), 43 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e3991ac72029..22c4edc8695c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7616,6 +7616,67 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
 			     bool allow_trust_mismatch);
 
+static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
+			  bool strict_alignment_once, bool is_ldsx,
+			  bool allow_trust_mismatch, const char *ctx)
+{
+	struct bpf_reg_state *regs = cur_regs(env);
+	enum bpf_reg_type src_reg_type;
+	int err;
+
+	/* check src operand */
+	err = check_reg_arg(env, insn->src_reg, SRC_OP);
+	if (err)
+		return err;
+
+	/* check dst operand */
+	err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
+	if (err)
+		return err;
+
+	src_reg_type = regs[insn->src_reg].type;
+
+	/* Check if (src_reg + off) is readable. The state of dst_reg will be
+	 * updated by this call.
+	 */
+	err = check_mem_access(env, env->insn_idx, insn->src_reg, insn->off,
+			       BPF_SIZE(insn->code), BPF_READ, insn->dst_reg,
+			       strict_alignment_once, is_ldsx);
+	err = err ?: save_aux_ptr_type(env, src_reg_type,
+				       allow_trust_mismatch);
+	err = err ?: reg_bounds_sanity_check(env, &regs[insn->dst_reg], ctx);
+
+	return err;
+}
+
+static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
+			   bool strict_alignment_once)
+{
+	struct bpf_reg_state *regs = cur_regs(env);
+	enum bpf_reg_type dst_reg_type;
+	int err;
+
+	/* check src1 operand */
+	err = check_reg_arg(env, insn->src_reg, SRC_OP);
+	if (err)
+		return err;
+
+	/* check src2 operand */
+	err = check_reg_arg(env, insn->dst_reg, SRC_OP);
+	if (err)
+		return err;
+
+	dst_reg_type = regs[insn->dst_reg].type;
+
+	/* Check if (dst_reg + off) is writeable. */
+	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+			       BPF_SIZE(insn->code), BPF_WRITE, insn->src_reg,
+			       strict_alignment_once, false);
+	err = err ?: save_aux_ptr_type(env, dst_reg_type, false);
+
+	return err;
+}
+
 static int check_atomic_rmw(struct bpf_verifier_env *env,
 			    struct bpf_insn *insn)
 {
@@ -19199,35 +19260,16 @@ static int do_check(struct bpf_verifier_env *env)
 				return err;
 
 		} else if (class == BPF_LDX) {
-			enum bpf_reg_type src_reg_type;
-
-			/* check for reserved fields is already done */
-
-			/* check src operand */
-			err = check_reg_arg(env, insn->src_reg, SRC_OP);
-			if (err)
-				return err;
+			bool is_ldsx = BPF_MODE(insn->code) == BPF_MEMSX;
 
-			err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
-			if (err)
-				return err;
-
-			src_reg_type = regs[insn->src_reg].type;
-
-			/* check that memory (src_reg + off) is readable,
-			 * the state of dst_reg will be updated by this func
+			/* Check for reserved fields is already done in
+			 * resolve_pseudo_ldimm64().
 			 */
-			err = check_mem_access(env, env->insn_idx, insn->src_reg,
-					       insn->off, BPF_SIZE(insn->code),
-					       BPF_READ, insn->dst_reg, false,
-					       BPF_MODE(insn->code) == BPF_MEMSX);
-			err = err ?: save_aux_ptr_type(env, src_reg_type, true);
-			err = err ?: reg_bounds_sanity_check(env, &regs[insn->dst_reg], "ldx");
+			err = check_load_mem(env, insn, false, is_ldsx, true,
+					     "ldx");
 			if (err)
 				return err;
 
 		} else if (class == BPF_STX) {
-			enum bpf_reg_type dst_reg_type;
-
 			if (BPF_MODE(insn->code) == BPF_ATOMIC) {
 				err = check_atomic(env, insn);
 				if (err)
@@ -19241,25 +19283,7 @@ static int do_check(struct bpf_verifier_env *env)
 				return -EINVAL;
 			}
 
-			/* check src1 operand */
-			err = check_reg_arg(env, insn->src_reg, SRC_OP);
-			if (err)
-				return err;
-			/* check src2 operand */
-			err = check_reg_arg(env, insn->dst_reg, SRC_OP);
-			if (err)
-				return err;
-
-			dst_reg_type = regs[insn->dst_reg].type;
-
-			/* check that memory (dst_reg + off) is writeable */
-			err = check_mem_access(env, env->insn_idx, insn->dst_reg,
-					       insn->off, BPF_SIZE(insn->code),
-					       BPF_WRITE, insn->src_reg, false, false);
-			if (err)
-				return err;
-
-			err = save_aux_ptr_type(env, dst_reg_type, false);
+			err = check_store_reg(env, insn, false);
 			if (err)
 				return err;
 		} else if (class == BPF_ST) {
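The new helpers chain their checks with "err = err ?: next_step()".
For readers unfamiliar with it, "a ?: b" is a GNU C extension
(supported by gcc and clang) that evaluates to a if a is nonzero and
to b otherwise, without evaluating a twice, so the chain keeps the
first error and skips later steps. A standalone illustration, with
made-up check functions:

  #include <stdio.h>

  static int first_check(void)  { return 0; }
  static int second_check(void) { return -1; }  /* pretend this one fails */
  static int third_check(void)  { return 0; }

  int main(void)
  {
          int err;

          err = first_check();
          err = err ?: second_check();  /* runs: err was 0 */
          err = err ?: third_check();   /* skipped: err is already -1 */
          printf("err = %d\n", err);    /* prints -1 */
          return 0;
  }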
From patchwork Mon Mar 3 05:37:37 2025
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13998223
Message-ID: <2c82b29f3530b961b41f94a4942e490ab35a31c8.1740978603.git.yepeilin@google.com>
Date: Mon, 3 Mar 2025 05:37:37 +0000
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH bpf-next v4 04/10] bpf: Introduce load-acquire and store-release instructions

Introduce BPF instructions with load-acquire and store-release
semantics, as discussed in [1]. Define 2 new flags:

  #define BPF_LOAD_ACQ    0x100
  #define BPF_STORE_REL   0x110

A "load-acquire" is a BPF_STX | BPF_ATOMIC instruction with the 'imm'
field set to BPF_LOAD_ACQ (0x100). Similarly, a "store-release" is a
BPF_STX | BPF_ATOMIC instruction with the 'imm' field set to
BPF_STORE_REL (0x110).
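As a quick illustration of how these two flags land in an encoded
instruction, the following standalone C sketch packs such a 64-bit
load-acquire into the uapi struct bpf_insn and dumps its bytes; the
0x100 literal stands in for BPF_LOAD_ACQ, since uapi headers predating
this series do not define it. On a little-endian host it prints the
same "db 10 00 00 00 01 00 00" sequence shown in the worked example
below:

  #include <linux/bpf.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          /* r0 = load_acquire((u64 *)(r1 + 0x0)) */
          struct bpf_insn insn = {
                  .code    = BPF_STX | BPF_DW | BPF_ATOMIC, /* 0xdb */
                  .dst_reg = 0,
                  .src_reg = 1,
                  .off     = 0,
                  .imm     = 0x100, /* BPF_LOAD_ACQ */
          };
          unsigned char bytes[8];

          memcpy(bytes, &insn, sizeof(bytes));
          for (int i = 0; i < 8; i++)
                  printf("%02x ", bytes[i]);
          printf("\n"); /* db 10 00 00 00 01 00 00 (little-endian) */
          return 0;
  }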
Unlike existing atomic read-modify-write operations that only support
BPF_W (32-bit) and BPF_DW (64-bit) size modifiers, load-acquires and
store-releases also support BPF_B (8-bit) and BPF_H (16-bit). An 8- or
16-bit load-acquire zero-extends the value before writing it to a
32-bit register, just like the ARM64 instruction LDARH and friends.

Similar to existing atomic read-modify-write operations, misaligned
load-acquires/store-releases are not allowed (even if
BPF_F_ANY_ALIGNMENT is set).

As an example, consider the following 64-bit load-acquire BPF
instruction (assuming little-endian):

  db 10 00 00 00 01 00 00  r0 = load_acquire((u64 *)(r1 + 0x0))

  opcode (0xdb): BPF_ATOMIC | BPF_DW | BPF_STX
  imm (0x00000100): BPF_LOAD_ACQ

Similarly, a 16-bit BPF store-release:

  cb 21 00 00 10 01 00 00  store_release((u16 *)(r1 + 0x0), w2)

  opcode (0xcb): BPF_ATOMIC | BPF_H | BPF_STX
  imm (0x00000110): BPF_STORE_REL

In arch/{arm64,s390,x86}/net/bpf_jit_comp.c, have
bpf_jit_supports_insn(..., /*in_arena=*/true) return false for the new
instructions, until the corresponding JIT compiler supports them in
arena.

[1] https://lore.kernel.org/all/20240729183246.4110549-1-yepeilin@google.com/

Acked-by: Eduard Zingerman
Acked-by: Ilya Leoshkevich
Signed-off-by: Peilin Ye
---
 arch/arm64/net/bpf_jit_comp.c  |  4 +++
 arch/s390/net/bpf_jit_comp.c   | 14 +++++---
 arch/x86/net/bpf_jit_comp.c    |  4 +++
 include/linux/bpf.h            | 15 ++++++++
 include/linux/filter.h         |  2 ++
 include/uapi/linux/bpf.h       |  3 ++
 kernel/bpf/core.c              | 63 ++++++++++++++++++++++++++++++----
 kernel/bpf/disasm.c            | 12 +++++++
 kernel/bpf/verifier.c          | 45 ++++++++++++++++++++++--
 tools/include/uapi/linux/bpf.h |  3 ++
 10 files changed, 152 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 7409c8acbde3..bdda5a77bb16 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2667,8 +2667,12 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (bpf_atomic_is_load_store(insn))
+			return false;
 		if (!cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			return false;
 	}
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 9d440a0b729e..0776dfde2dba 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2919,10 +2919,16 @@ bool bpf_jit_supports_arena(void)
 
 bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 {
-	/*
-	 * Currently the verifier uses this function only to check which
-	 * atomic stores to arena are supported, and they all are.
-	 */
+	if (!in_arena)
+		return true;
+	switch (insn->code) {
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (bpf_atomic_is_load_store(insn))
+			return false;
+	}
 	return true;
 }
 
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a43fc5af973d..f0c31c940fb8 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -3771,8 +3771,12 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (bpf_atomic_is_load_store(insn))
+			return false;
 		if (insn->imm == (BPF_AND | BPF_FETCH) ||
 		    insn->imm == (BPF_OR | BPF_FETCH) ||
 		    insn->imm == (BPF_XOR | BPF_FETCH))
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4c4028d865ee..b556a26d8150 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -991,6 +991,21 @@ static inline bool bpf_pseudo_func(const struct bpf_insn *insn)
 	return bpf_is_ldimm64(insn) && insn->src_reg == BPF_PSEUDO_FUNC;
 }
 
+/* Given a BPF_ATOMIC instruction @atomic_insn, return true if it is an
+ * atomic load or store, and false if it is a read-modify-write instruction.
+ */
+static inline bool
+bpf_atomic_is_load_store(const struct bpf_insn *atomic_insn)
+{
+	switch (atomic_insn->imm) {
+	case BPF_LOAD_ACQ:
+	case BPF_STORE_REL:
+		return true;
+	default:
+		return false;
+	}
+}
+
 struct bpf_prog_ops {
 	int (*test_run)(struct bpf_prog *prog, const union bpf_attr *kattr,
 			union bpf_attr __user *uattr);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 3ed6eb9e7c73..24e94afb5622 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -364,6 +364,8 @@ static inline bool insn_is_cast_user(const struct bpf_insn *insn)
  * BPF_XOR | BPF_FETCH	src_reg = atomic_fetch_xor(dst_reg + off16, src_reg);
  * BPF_XCHG		src_reg = atomic_xchg(dst_reg + off16, src_reg)
  * BPF_CMPXCHG		r0 = atomic_cmpxchg(dst_reg + off16, r0, src_reg)
+ * BPF_LOAD_ACQ		dst_reg = smp_load_acquire(src_reg + off16)
+ * BPF_STORE_REL	smp_store_release(dst_reg + off16, src_reg)
  */
 
 #define BPF_ATOMIC_OP(SIZE, OP, DST, SRC, OFF)			\
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index beac5cdf2d2c..bb37897c0393 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -51,6 +51,9 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */
 
+#define BPF_LOAD_ACQ	0x100	/* load-acquire */
+#define BPF_STORE_REL	0x110	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
 	BPF_MAY_GOTO = 0,
 };
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index a0200fbbace9..323af18d7d49 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1663,14 +1663,17 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(JMP, JSET, K),			\
 	INSN_2(JMP, JA),			\
 	INSN_2(JMP32, JA),			\
+	/* Atomic operations. */		\
+	INSN_3(STX, ATOMIC, B),			\
+	INSN_3(STX, ATOMIC, H),			\
+	INSN_3(STX, ATOMIC, W),			\
+	INSN_3(STX, ATOMIC, DW),		\
 	/* Store instructions. */		\
 	/*   Register based. */			\
 	INSN_3(STX, MEM, B),			\
 	INSN_3(STX, MEM, H),			\
 	INSN_3(STX, MEM, W),			\
 	INSN_3(STX, MEM, DW),			\
-	INSN_3(STX, ATOMIC, W),			\
-	INSN_3(STX, ATOMIC, DW),		\
 	/*   Immediate based. */		\
 	INSN_3(ST, MEM, B),			\
 	INSN_3(ST, MEM, H),			\
@@ -2152,24 +2155,33 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 			if (BPF_SIZE(insn->code) == BPF_W)		\
 				atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \
 					     (DST + insn->off));	\
-			else						\
+			else if (BPF_SIZE(insn->code) == BPF_DW)	\
 				atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \
 					       (DST + insn->off));	\
+			else						\
+				goto default_label;			\
 			break;						\
 		case BOP | BPF_FETCH:					\
 			if (BPF_SIZE(insn->code) == BPF_W)		\
 				SRC = (u32) atomic_fetch_##KOP(		\
 					(u32) SRC,			\
 					(atomic_t *)(unsigned long) (DST + insn->off)); \
-			else						\
+			else if (BPF_SIZE(insn->code) == BPF_DW)	\
 				SRC = (u64) atomic64_fetch_##KOP(	\
 					(u64) SRC,			\
 					(atomic64_t *)(unsigned long) (DST + insn->off)); \
+			else						\
+				goto default_label;			\
 			break;
 
 	STX_ATOMIC_DW:
 	STX_ATOMIC_W:
+	STX_ATOMIC_H:
+	STX_ATOMIC_B:
 		switch (IMM) {
+		/* Atomic read-modify-write instructions support only W and DW
+		 * size modifiers.
+		 */
 		ATOMIC_ALU_OP(BPF_ADD, add)
 		ATOMIC_ALU_OP(BPF_AND, and)
 		ATOMIC_ALU_OP(BPF_OR, or)
@@ -2181,20 +2193,59 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 				SRC = (u32) atomic_xchg(
 					(atomic_t *)(unsigned long) (DST + insn->off),
 					(u32) SRC);
-			else
+			else if (BPF_SIZE(insn->code) == BPF_DW)
 				SRC = (u64) atomic64_xchg(
 					(atomic64_t *)(unsigned long) (DST + insn->off),
 					(u64) SRC);
+			else
+				goto default_label;
 			break;
 		case BPF_CMPXCHG:
 			if (BPF_SIZE(insn->code) == BPF_W)
 				BPF_R0 = (u32) atomic_cmpxchg(
 					(atomic_t *)(unsigned long) (DST + insn->off),
 					(u32) BPF_R0, (u32) SRC);
-			else
+			else if (BPF_SIZE(insn->code) == BPF_DW)
 				BPF_R0 = (u64) atomic64_cmpxchg(
 					(atomic64_t *)(unsigned long) (DST + insn->off),
 					(u64) BPF_R0, (u64) SRC);
+			else
+				goto default_label;
+			break;
+		/* Atomic load and store instructions support all size
+		 * modifiers.
+		 */
+		case BPF_LOAD_ACQ:
+			switch (BPF_SIZE(insn->code)) {
+#define LOAD_ACQUIRE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				DST = (SIZE)smp_load_acquire(	\
+					(SIZE *)(unsigned long)(SRC + insn->off)); \
+				break;
+			LOAD_ACQUIRE(B, u8)
+			LOAD_ACQUIRE(H, u16)
+			LOAD_ACQUIRE(W, u32)
+			LOAD_ACQUIRE(DW, u64)
+#undef LOAD_ACQUIRE
+			default:
+				goto default_label;
+			}
+			break;
+		case BPF_STORE_REL:
+			switch (BPF_SIZE(insn->code)) {
+#define STORE_RELEASE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				smp_store_release(		\
+					(SIZE *)(unsigned long)(DST + insn->off), (SIZE)SRC); \
+				break;
+			STORE_RELEASE(B, u8)
+			STORE_RELEASE(H, u16)
+			STORE_RELEASE(W, u32)
+			STORE_RELEASE(DW, u64)
+#undef STORE_RELEASE
+			default:
+				goto default_label;
+			}
 			break;
 
 		default:
"64" : "", bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == BPF_LOAD_ACQ) { + verbose(cbs->private_data, "(%02x) r%d = load_acquire((%s *)(r%d %+d))\n", + insn->code, insn->dst_reg, + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->src_reg, insn->off); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == BPF_STORE_REL) { + verbose(cbs->private_data, "(%02x) store_release((%s *)(r%d %+d), r%d)\n", + insn->code, + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->dst_reg, insn->off, insn->src_reg); } else { verbose(cbs->private_data, "BUG_%02x\n", insn->code); } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 22c4edc8695c..dac7af73ac8a 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -579,6 +579,13 @@ static bool is_cmpxchg_insn(const struct bpf_insn *insn) insn->imm == BPF_CMPXCHG; } +static bool is_atomic_load_insn(const struct bpf_insn *insn) +{ + return BPF_CLASS(insn->code) == BPF_STX && + BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == BPF_LOAD_ACQ; +} + static int __get_spi(s32 off) { return (-off - 1) / BPF_REG_SIZE; @@ -3567,7 +3574,7 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn, } if (class == BPF_STX) { - /* BPF_STX (including atomic variants) has multiple source + /* BPF_STX (including atomic variants) has one or more source * operands, one of which is a ptr. Check whether the caller is * asking about it. */ @@ -4181,7 +4188,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx, * dreg still needs precision before this insn */ } - } else if (class == BPF_LDX) { + } else if (class == BPF_LDX || is_atomic_load_insn(insn)) { if (!bt_is_reg_set(bt, dreg)) return 0; bt_clear_reg(bt, dreg); @@ -7766,6 +7773,32 @@ static int check_atomic_rmw(struct bpf_verifier_env *env, return 0; } +static int check_atomic_load(struct bpf_verifier_env *env, + struct bpf_insn *insn) +{ + if (!atomic_ptr_type_ok(env, insn->src_reg, insn)) { + verbose(env, "BPF_ATOMIC loads from R%d %s is not allowed\n", + insn->src_reg, + reg_type_str(env, reg_state(env, insn->src_reg)->type)); + return -EACCES; + } + + return check_load_mem(env, insn, true, false, false, "atomic_load"); +} + +static int check_atomic_store(struct bpf_verifier_env *env, + struct bpf_insn *insn) +{ + if (!atomic_ptr_type_ok(env, insn->dst_reg, insn)) { + verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n", + insn->dst_reg, + reg_type_str(env, reg_state(env, insn->dst_reg)->type)); + return -EACCES; + } + + return check_store_reg(env, insn, true); +} + static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn) { switch (insn->imm) { @@ -7780,6 +7813,10 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn) case BPF_XCHG: case BPF_CMPXCHG: return check_atomic_rmw(env, insn); + case BPF_LOAD_ACQ: + return check_atomic_load(env, insn); + case BPF_STORE_REL: + return check_atomic_store(env, insn); default: verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm); @@ -20605,7 +20642,9 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env) insn->code == (BPF_ST | BPF_MEM | BPF_W) || insn->code == (BPF_ST | BPF_MEM | BPF_DW)) { type = BPF_WRITE; - } else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) || + } else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_B) || + insn->code == (BPF_STX | BPF_ATOMIC | BPF_H) || + insn->code == (BPF_STX | 
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index beac5cdf2d2c..bb37897c0393 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -51,6 +51,9 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */
 
+#define BPF_LOAD_ACQ	0x100	/* load-acquire */
+#define BPF_STORE_REL	0x110	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
 	BPF_MAY_GOTO = 0,
 };
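For intuition about what acquire/release semantics buy, the following
standalone C11 program (plain userspace pthreads, not BPF) shows the
classic message-passing pattern that smp_load_acquire() and
smp_store_release() -- and hence BPF_LOAD_ACQ/BPF_STORE_REL -- make
safe: the release store publishes the payload, and the acquire load of
the flag guarantees the reader then observes it. Build with -pthread:

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static int payload;
  static atomic_int ready;

  static void *producer(void *arg)
  {
          (void)arg;
          payload = 42;                                   /* plain store */
          atomic_store_explicit(&ready, 1,
                                memory_order_release);    /* cf. BPF_STORE_REL */
          return NULL;
  }

  static void *consumer(void *arg)
  {
          (void)arg;
          while (!atomic_load_explicit(&ready,
                                       memory_order_acquire)) /* cf. BPF_LOAD_ACQ */
                  ;
          printf("payload = %d\n", payload);  /* guaranteed to print 42 */
          return NULL;
  }

  int main(void)
  {
          pthread_t p, c;

          pthread_create(&c, NULL, consumer, NULL);
          pthread_create(&p, NULL, producer, NULL);
          pthread_join(p, NULL);
          pthread_join(c, NULL);
          return 0;
  }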
From patchwork Mon Mar 3 05:37:44 2025
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13998224
Message-ID: <03ceb882fa759f503b3f4b7f4ff2a5ce69a0eb05.1740978603.git.yepeilin@google.com>
Date: Mon, 3 Mar 2025 05:37:44 +0000
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH bpf-next v4 05/10] arm64: insn: Add BIT(23) to {load,store}_ex's mask

We are planning to add load-acquire (LDAR{,B,H}) and store-release
(STLR{,B,H}) instructions to insn.{c,h}; add BIT(23) to the mask of
load_ex and store_ex to prevent aarch64_insn_is_{load,store}_ex() from
returning false positives for load-acquire and store-release
instructions.
Reference: Arm Architecture Reference Manual (ARM DDI 0487K.a,
ID032224),

  * C6.2.228 LDXR
  * C6.2.165 LDAXR
  * C6.2.161 LDAR
  * C6.2.393 STXR
  * C6.2.360 STLXR
  * C6.2.353 STLR

Acked-by: Xu Kuohai
Signed-off-by: Peilin Ye
---
 arch/arm64/include/asm/insn.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index e390c432f546..2d8316b3abaf 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -351,8 +351,8 @@ __AARCH64_INSN_FUNCS(ldr_imm,	0x3FC00000, 0x39400000)
 __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive,	0x3F800000, 0x08000000)
-__AARCH64_INSN_FUNCS(load_ex,	0x3F400000, 0x08400000)
-__AARCH64_INSN_FUNCS(store_ex,	0x3F400000, 0x08000000)
+__AARCH64_INSN_FUNCS(load_ex,	0x3FC00000, 0x08400000)
+__AARCH64_INSN_FUNCS(store_ex,	0x3FC00000, 0x08000000)
 __AARCH64_INSN_FUNCS(mops,	0x3B200C00, 0x19000400)
 __AARCH64_INSN_FUNCS(stp,	0x7FC00000, 0x29000000)
 __AARCH64_INSN_FUNCS(ldp,	0x7FC00000, 0x29400000)
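A quick standalone demonstration of why BIT(23) is needed, using only
encodings quoted in this series (the 0x08DFFC00 LDAR-class base value
appears in the next patch); the matches() helper mirrors what the
__AARCH64_INSN_FUNCS()-generated aarch64_insn_is_load_ex() does:

  #include <stdint.h>
  #include <stdio.h>

  static int matches(uint32_t insn, uint32_t mask, uint32_t value)
  {
          return (insn & mask) == value;
  }

  int main(void)
  {
          const uint32_t ldar = 0x08DFFC00; /* LDAR-class base encoding */

          /* old load_ex: mask 0x3F400000, value 0x08400000 */
          printf("old mask: LDAR matches load_ex? %d\n",
                 matches(ldar, 0x3F400000, 0x08400000)); /* 1: false positive */

          /* new load_ex: mask 0x3FC00000 (BIT(23) added), same value */
          printf("new mask: LDAR matches load_ex? %d\n",
                 matches(ldar, 0x3FC00000, 0x08400000)); /* 0: excluded */
          return 0;
  }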
From patchwork Mon Mar 3 05:37:50 2025
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13998225
Message-ID: <347830eb25b3bfea7687d4081852fb7d54e82307.1740978603.git.yepeilin@google.com>
Date: Mon, 3 Mar 2025 05:37:50 +0000
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH bpf-next v4 06/10] arm64: insn: Add load-acquire and store-release instructions

Add load-acquire ("load_acq", LDAR{,B,H}) and store-release
("store_rel", STLR{,B,H}) instructions. Breakdown of encoding:

                             size          L  (Rs)  o0 (Rt2) Rn    Rt
  mask       (0x3fdffc00):  00 111111  1  1  0 11111 1 11111 00000 00000
  value, load_acq
             (0x08dffc00):  00 001000  1  1  0 11111 1 11111 00000 00000
  value, store_rel
             (0x089ffc00):  00 001000  1  0  0 11111 1 11111 00000 00000

As suggested by Xu [1], include all Should-Be-One (SBO) bits ("Rs" and
"Rt2" fields) in the "mask" and "value" numbers.
It is worth noting that we are adding the "no offset" variant of STLR instead of the "pre-index" variant, which has a different encoding.

Reference: Arm Architecture Reference Manual (ARM DDI 0487K.a, ID032224),
  * C6.2.161 LDAR
  * C6.2.353 STLR

[1] https://lore.kernel.org/bpf/4e6641ce-3f1e-4251-8daf-4dd4b77d08c4@huaweicloud.com/

Acked-by: Xu Kuohai
Signed-off-by: Peilin Ye
---
 arch/arm64/include/asm/insn.h |  8 ++++++++
 arch/arm64/lib/insn.c         | 29 +++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 2d8316b3abaf..39577f1d079a 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -188,8 +188,10 @@ enum aarch64_insn_ldst_type {
 	AARCH64_INSN_LDST_STORE_PAIR_PRE_INDEX,
 	AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX,
+	AARCH64_INSN_LDST_LOAD_ACQ,
 	AARCH64_INSN_LDST_LOAD_EX,
 	AARCH64_INSN_LDST_LOAD_ACQ_EX,
+	AARCH64_INSN_LDST_STORE_REL,
 	AARCH64_INSN_LDST_STORE_EX,
 	AARCH64_INSN_LDST_STORE_REL_EX,
 	AARCH64_INSN_LDST_SIGNED_LOAD_IMM_OFFSET,
@@ -351,6 +353,8 @@ __AARCH64_INSN_FUNCS(ldr_imm,	0x3FC00000, 0x39400000)
 __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive,	0x3F800000, 0x08000000)
+__AARCH64_INSN_FUNCS(load_acq,	0x3FDFFC00, 0x08DFFC00)
+__AARCH64_INSN_FUNCS(store_rel,	0x3FDFFC00, 0x089FFC00)
 __AARCH64_INSN_FUNCS(load_ex,	0x3FC00000, 0x08400000)
 __AARCH64_INSN_FUNCS(store_ex,	0x3FC00000, 0x08000000)
 __AARCH64_INSN_FUNCS(mops,	0x3B200C00, 0x19000400)
@@ -602,6 +606,10 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
 				     int offset,
 				     enum aarch64_insn_variant variant,
 				     enum aarch64_insn_ldst_type type);
+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type);
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
 				   enum aarch64_insn_register base,
 				   enum aarch64_insn_register state,
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index b008a9b46a7f..9bef696e2230 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -540,6 +540,35 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
 					     offset >> shift);
 }
 
+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type)
+{
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_LDST_LOAD_ACQ:
+		insn = aarch64_insn_get_load_acq_value();
+		break;
+	case AARCH64_INSN_LDST_STORE_REL:
+		insn = aarch64_insn_get_store_rel_value();
+		break;
+	default:
+		pr_err("%s: unknown load-acquire/store-release encoding %d\n",
+		       __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_ldst_size(size, insn);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
+					    reg);
+
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
+					    base);
+}
+
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
 				   enum aarch64_insn_register base,
 				   enum aarch64_insn_register state,
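As a usage illustration (a hypothetical kernel-side fragment, not part of the patch; it assumes <asm/insn.h> is already included), generating a 64-bit load-acquire with the new helper should yield exactly the word the encoding breakdown above describes: "value, load_acq" with size = 11 (64-bit), Rn = x0 and Rt = x7.

  	u32 insn;

  	insn = aarch64_insn_gen_load_acq_store_rel(AARCH64_INSN_REG_7,   /* Rt */
  						   AARCH64_INSN_REG_0,   /* Rn */
  						   AARCH64_INSN_SIZE_64,
  						   AARCH64_INSN_LDST_LOAD_ACQ);
  	/* insn == 0xC8DFFC07, i.e. "ldar x7, [x0]" */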
From patchwork Mon Mar 3 05:37:57 2025
Date: Mon, 3 Mar 2025 05:37:57 +0000
Subject: [PATCH bpf-next v4 07/10] bpf, arm64: Support load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Support BPF load-acquire (BPF_LOAD_ACQ) and store-release (BPF_STORE_REL) instructions in the arm64 JIT compiler. For example (assuming little-endian):

  db 10 00 00 00 01 00 00	r0 = load_acquire((u64 *)(r1 + 0x0))
  95 00 00 00 00 00 00 00	exit

  opcode (0xdb): BPF_ATOMIC | BPF_DW | BPF_STX
  imm (0x00000100): BPF_LOAD_ACQ

The JIT compiler would emit an LDAR instruction for the above, e.g.:

  ldar x7, [x0]

Similarly, consider the following 16-bit store-release:

  cb 21 00 00 10 01 00 00	store_release((u16 *)(r1 + 0x0), w2)
  95 00 00 00 00 00 00 00	exit

  opcode (0xcb): BPF_ATOMIC | BPF_H | BPF_STX
  imm (0x00000110): BPF_STORE_REL

An STLRH instruction would be emitted, e.g.:

  stlrh w1, [x0]

For a complete mapping:

  load-acquire        8-bit   LDARB
  (BPF_LOAD_ACQ)     16-bit   LDARH
                     32-bit   LDAR (32-bit)
                     64-bit   LDAR (64-bit)
  store-release       8-bit   STLRB
  (BPF_STORE_REL)    16-bit   STLRH
                     32-bit   STLR (32-bit)
                     64-bit   STLR (64-bit)

Arena accesses are supported. bpf_jit_supports_insn(..., /*in_arena=*/true) always returns true for BPF_LOAD_ACQ and BPF_STORE_REL instructions, as they do not depend on ARM64_HAS_LSE_ATOMICS.
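As a cross-check of the opcode/imm breakdown above, here is a minimal user-space decoder sketch (hypothetical, not part of the patch); the constants mirror the opcode and imm values shown in the two examples:

  #include <stdint.h>
  #include <stdio.h>

  #define BPF_STX		0x03	/* instruction class */
  #define BPF_ATOMIC	0xc0	/* mode bits */
  #define BPF_LOAD_ACQ	0x100
  #define BPF_STORE_REL	0x110

  /* Return 1 if (code, imm) encodes a BPF load-acquire or store-release. */
  static int is_acq_rel(uint8_t code, int32_t imm)
  {
  	if ((code & 0x07) != BPF_STX || (code & 0xe0) != BPF_ATOMIC)
  		return 0;
  	return imm == BPF_LOAD_ACQ || imm == BPF_STORE_REL;
  }

  int main(void)
  {
  	/* First byte and imm of the two examples above. */
  	printf("%d\n", is_acq_rel(0xdb, 0x100));	/* 1: load-acquire  */
  	printf("%d\n", is_acq_rel(0xcb, 0x110));	/* 1: store-release */
  	printf("%d\n", is_acq_rel(0xdb, 0x000));	/* 0: BPF_ADD (RMW) */
  	return 0;
  }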
Acked-by: Xu Kuohai Signed-off-by: Peilin Ye --- arch/arm64/net/bpf_jit.h | 20 ++++++++ arch/arm64/net/bpf_jit_comp.c | 90 ++++++++++++++++++++++++++++++++--- 2 files changed, 104 insertions(+), 6 deletions(-) diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h index b22ab2f97a30..a3b0e693a125 100644 --- a/arch/arm64/net/bpf_jit.h +++ b/arch/arm64/net/bpf_jit.h @@ -119,6 +119,26 @@ aarch64_insn_gen_load_store_ex(Rt, Rn, Rs, A64_SIZE(sf), \ AARCH64_INSN_LDST_STORE_REL_EX) +/* Load-acquire & store-release */ +#define A64_LDAR(Rt, Rn, size) \ + aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \ + AARCH64_INSN_LDST_LOAD_ACQ) +#define A64_STLR(Rt, Rn, size) \ + aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \ + AARCH64_INSN_LDST_STORE_REL) + +/* Rt = [Rn] (load acquire) */ +#define A64_LDARB(Wt, Xn) A64_LDAR(Wt, Xn, 8) +#define A64_LDARH(Wt, Xn) A64_LDAR(Wt, Xn, 16) +#define A64_LDAR32(Wt, Xn) A64_LDAR(Wt, Xn, 32) +#define A64_LDAR64(Xt, Xn) A64_LDAR(Xt, Xn, 64) + +/* [Rn] = Rt (store release) */ +#define A64_STLRB(Wt, Xn) A64_STLR(Wt, Xn, 8) +#define A64_STLRH(Wt, Xn) A64_STLR(Wt, Xn, 16) +#define A64_STLR32(Wt, Xn) A64_STLR(Wt, Xn, 32) +#define A64_STLR64(Xt, Xn) A64_STLR(Xt, Xn, 64) + /* * LSE atomics * diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index bdda5a77bb16..70d7c89d3ac9 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -647,6 +647,81 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx) return 0; } +static int emit_atomic_ld_st(const struct bpf_insn *insn, struct jit_ctx *ctx) +{ + const s32 imm = insn->imm; + const s16 off = insn->off; + const u8 code = insn->code; + const bool arena = BPF_MODE(code) == BPF_PROBE_ATOMIC; + const u8 arena_vm_base = bpf2a64[ARENA_VM_START]; + const u8 dst = bpf2a64[insn->dst_reg]; + const u8 src = bpf2a64[insn->src_reg]; + const u8 tmp = bpf2a64[TMP_REG_1]; + u8 reg; + + switch (imm) { + case BPF_LOAD_ACQ: + reg = src; + break; + case BPF_STORE_REL: + reg = dst; + break; + default: + pr_err_once("unknown atomic load/store op code %02x\n", imm); + return -EINVAL; + } + + if (off) { + emit_a64_add_i(1, tmp, reg, tmp, off, ctx); + reg = tmp; + } + if (arena) { + emit(A64_ADD(1, tmp, reg, arena_vm_base), ctx); + reg = tmp; + } + + switch (imm) { + case BPF_LOAD_ACQ: + switch (BPF_SIZE(code)) { + case BPF_B: + emit(A64_LDARB(dst, reg), ctx); + break; + case BPF_H: + emit(A64_LDARH(dst, reg), ctx); + break; + case BPF_W: + emit(A64_LDAR32(dst, reg), ctx); + break; + case BPF_DW: + emit(A64_LDAR64(dst, reg), ctx); + break; + } + break; + case BPF_STORE_REL: + switch (BPF_SIZE(code)) { + case BPF_B: + emit(A64_STLRB(src, reg), ctx); + break; + case BPF_H: + emit(A64_STLRH(src, reg), ctx); + break; + case BPF_W: + emit(A64_STLR32(src, reg), ctx); + break; + case BPF_DW: + emit(A64_STLR64(src, reg), ctx); + break; + } + break; + default: + pr_err_once("unexpected atomic load/store op code %02x\n", + imm); + return -EINVAL; + } + + return 0; +} + #ifdef CONFIG_ARM64_LSE_ATOMICS static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx) { @@ -1641,11 +1716,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, return ret; break; + case BPF_STX | BPF_ATOMIC | BPF_B: + case BPF_STX | BPF_ATOMIC | BPF_H: case BPF_STX | BPF_ATOMIC | BPF_W: case BPF_STX | BPF_ATOMIC | BPF_DW: + case BPF_STX | BPF_PROBE_ATOMIC | BPF_B: + case BPF_STX | BPF_PROBE_ATOMIC | BPF_H: case BPF_STX | BPF_PROBE_ATOMIC | BPF_W: case 
BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
-		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (bpf_atomic_is_load_store(insn))
+			ret = emit_atomic_ld_st(insn, ctx);
+		else if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			ret = emit_lse_atomic(insn, ctx);
 		else
 			ret = emit_ll_sc_atomic(insn, ctx);
@@ -2667,13 +2748,10 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
-	case BPF_STX | BPF_ATOMIC | BPF_B:
-	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (bpf_atomic_is_load_store(insn))
-			return false;
-		if (!cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (!bpf_atomic_is_load_store(insn) &&
+		    !cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			return false;
 	}
 	return true;
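To make the arena path in emit_atomic_ld_st() concrete, here is a hypothetical trace of the emit calls for r0 = load_acquire((u64 *)(r1 + 8)) under BPF_PROBE_ATOMIC, using the register mapping from the examples earlier (r0 in x7, r1 in x0); the temporary register name is illustrative:

  	/* off != 0: tmp = src + off */
  	emit_a64_add_i(1, tmp, src, tmp, 8, ctx);	/* add tmp, x0, #8     */
  	/* arena access: rebase onto the arena mapping */
  	emit(A64_ADD(1, tmp, tmp, arena_vm_base), ctx);	/* add tmp, tmp, arena */
  	/* finally, the acquire load itself */
  	emit(A64_LDAR64(dst, tmp), ctx);		/* ldar x7, [tmp]      */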
From patchwork Mon Mar 3 05:38:07 2025
Date: Mon, 3 Mar 2025 05:38:07 +0000
Subject: [PATCH bpf-next v4 08/10] bpf, x86: Support load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Recently we introduced BPF load-acquire (BPF_LOAD_ACQ) and store-release (BPF_STORE_REL) instructions. For x86-64, simply implement them as regular BPF_LDX/BPF_STX loads and stores. The verifier always rejects misaligned load-acquires/store-releases (even if BPF_F_ANY_ALIGNMENT is set), so the emitted MOV* instructions are guaranteed to be atomic. Arena accesses are supported. 8- and 16-bit load-acquires are zero-extending (i.e., MOVZBQ, MOVZWQ).

Rename emit_atomic{,_index}() to emit_atomic_rmw{,_index}() to make it clear that they only handle read-modify-write atomics, and extend their @atomic_op parameter from u8 to u32, since we are starting to use more than the lowest 8 bits of the 'imm' field.
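Plain MOVs suffice because of x86-64's strong (TSO) memory model: aligned loads already have acquire semantics and aligned stores already have release semantics. A hypothetical user-space sketch (not from the patch) showing what a compiler emits for the same operations:

  #include <stdint.h>

  /* On x86-64 both functions compile to plain MOVs; no fence is needed. */
  uint16_t load_acquire16(const uint16_t *p)
  {
  	return __atomic_load_n(p, __ATOMIC_ACQUIRE);	/* movzwl (%rdi),%eax */
  }

  void store_release64(uint64_t *p, uint64_t v)
  {
  	__atomic_store_n(p, v, __ATOMIC_RELEASE);	/* mov %rsi,(%rdi) */
  }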
Signed-off-by: Peilin Ye --- arch/x86/net/bpf_jit_comp.c | 99 ++++++++++++++++++++++++++++++------- 1 file changed, 82 insertions(+), 17 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index f0c31c940fb8..0263d98d92b0 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -1242,8 +1242,8 @@ static void emit_st_r12(u8 **pprog, u32 size, u32 dst_reg, int off, int imm) emit_st_index(pprog, size, dst_reg, X86_REG_R12, off, imm); } -static int emit_atomic(u8 **pprog, u8 atomic_op, - u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size) +static int emit_atomic_rmw(u8 **pprog, u32 atomic_op, + u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size) { u8 *prog = *pprog; @@ -1283,8 +1283,9 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, return 0; } -static int emit_atomic_index(u8 **pprog, u8 atomic_op, u32 size, - u32 dst_reg, u32 src_reg, u32 index_reg, int off) +static int emit_atomic_rmw_index(u8 **pprog, u32 atomic_op, u32 size, + u32 dst_reg, u32 src_reg, u32 index_reg, + int off) { u8 *prog = *pprog; @@ -1297,7 +1298,7 @@ static int emit_atomic_index(u8 **pprog, u8 atomic_op, u32 size, EMIT1(add_3mod(0x48, dst_reg, src_reg, index_reg)); break; default: - pr_err("bpf_jit: 1 and 2 byte atomics are not supported\n"); + pr_err("bpf_jit: 1- and 2-byte RMW atomics are not supported\n"); return -EFAULT; } @@ -1331,6 +1332,49 @@ static int emit_atomic_index(u8 **pprog, u8 atomic_op, u32 size, return 0; } +static int emit_atomic_ld_st(u8 **pprog, u32 atomic_op, u32 dst_reg, + u32 src_reg, s16 off, u8 bpf_size) +{ + switch (atomic_op) { + case BPF_LOAD_ACQ: + /* dst_reg = smp_load_acquire(src_reg + off16) */ + emit_ldx(pprog, bpf_size, dst_reg, src_reg, off); + break; + case BPF_STORE_REL: + /* smp_store_release(dst_reg + off16, src_reg) */ + emit_stx(pprog, bpf_size, dst_reg, src_reg, off); + break; + default: + pr_err("bpf_jit: unknown atomic load/store opcode %02x\n", + atomic_op); + return -EFAULT; + } + + return 0; +} + +static int emit_atomic_ld_st_index(u8 **pprog, u32 atomic_op, u32 size, + u32 dst_reg, u32 src_reg, u32 index_reg, + int off) +{ + switch (atomic_op) { + case BPF_LOAD_ACQ: + /* dst_reg = smp_load_acquire(src_reg + idx_reg + off16) */ + emit_ldx_index(pprog, size, dst_reg, src_reg, index_reg, off); + break; + case BPF_STORE_REL: + /* smp_store_release(dst_reg + idx_reg + off16, src_reg) */ + emit_stx_index(pprog, size, dst_reg, src_reg, index_reg, off); + break; + default: + pr_err("bpf_jit: unknown atomic load/store opcode %02x\n", + atomic_op); + return -EFAULT; + } + + return 0; +} + #define DONT_CLEAR 1 bool ex_handler_bpf(const struct exception_table_entry *x, struct pt_regs *regs) @@ -2113,6 +2157,13 @@ st: if (is_imm8(insn->off)) } break; + case BPF_STX | BPF_ATOMIC | BPF_B: + case BPF_STX | BPF_ATOMIC | BPF_H: + if (!bpf_atomic_is_load_store(insn)) { + pr_err("bpf_jit: 1- and 2-byte RMW atomics are not supported\n"); + return -EFAULT; + } + fallthrough; case BPF_STX | BPF_ATOMIC | BPF_W: case BPF_STX | BPF_ATOMIC | BPF_DW: if (insn->imm == (BPF_AND | BPF_FETCH) || @@ -2148,10 +2199,10 @@ st: if (is_imm8(insn->off)) EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)], add_2reg(0xC0, AUX_REG, real_src_reg)); /* Attempt to swap in new value */ - err = emit_atomic(&prog, BPF_CMPXCHG, - real_dst_reg, AUX_REG, - insn->off, - BPF_SIZE(insn->code)); + err = emit_atomic_rmw(&prog, BPF_CMPXCHG, + real_dst_reg, AUX_REG, + insn->off, + BPF_SIZE(insn->code)); if (WARN_ON(err)) return err; /* @@ -2166,17 +2217,35 @@ st: if (is_imm8(insn->off)) 
break;
 		}
-		err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
-				  insn->off, BPF_SIZE(insn->code));
+		if (bpf_atomic_is_load_store(insn))
+			err = emit_atomic_ld_st(&prog, insn->imm, dst_reg, src_reg,
+						insn->off, BPF_SIZE(insn->code));
+		else
+			err = emit_atomic_rmw(&prog, insn->imm, dst_reg, src_reg,
+					      insn->off, BPF_SIZE(insn->code));
 		if (err)
 			return err;
 		break;
 
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_B:
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_H:
+		if (!bpf_atomic_is_load_store(insn)) {
+			pr_err("bpf_jit: 1- and 2-byte RMW atomics are not supported\n");
+			return -EFAULT;
+		}
+		fallthrough;
 	case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
 	case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
 		start_of_ldx = prog;
-		err = emit_atomic_index(&prog, insn->imm, BPF_SIZE(insn->code),
-					dst_reg, src_reg, X86_REG_R12, insn->off);
+
+		if (bpf_atomic_is_load_store(insn))
+			err = emit_atomic_ld_st_index(&prog, insn->imm,
+						      BPF_SIZE(insn->code), dst_reg,
+						      src_reg, X86_REG_R12, insn->off);
+		else
+			err = emit_atomic_rmw_index(&prog, insn->imm, BPF_SIZE(insn->code),
+						    dst_reg, src_reg, X86_REG_R12,
+						    insn->off);
 		if (err)
 			return err;
 		goto populate_extable;
@@ -3771,12 +3840,8 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
-	case BPF_STX | BPF_ATOMIC | BPF_B:
-	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (bpf_atomic_is_load_store(insn))
-			return false;
 		if (insn->imm == (BPF_AND | BPF_FETCH) ||
 		    insn->imm == (BPF_OR | BPF_FETCH) ||
 		    insn->imm == (BPF_XOR | BPF_FETCH))
From patchwork Mon Mar 3 05:38:18 2025
Date: Mon, 3 Mar 2025 05:38:18 +0000
Subject: [PATCH bpf-next v4 09/10] selftests/bpf: Add selftests for load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
McKenney" , Puranjay Mohan , Ilya Leoshkevich , Heiko Carstens , Vasily Gorbik , Catalin Marinas , Will Deacon , Quentin Monnet , Mykola Lysenko , Shuah Khan , Ihor Solodrai , Yingchi Long , Josh Don , Barret Rhoden , Neel Natu , Benjamin Segall , linux-kernel@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250302_213825_395273_D080D6C1 X-CRM114-Status: GOOD ( 22.85 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add several ./test_progs tests: - arena_atomics/load_acquire - arena_atomics/store_release - verifier_load_acquire/* - verifier_store_release/* - verifier_precision/bpf_load_acquire - verifier_precision/bpf_store_release The last two tests are added to check if backtrack_insn() handles the new instructions correctly. Additionally, the last test also makes sure that the verifier "remembers" the value (in src_reg) we store-release into e.g. a stack slot. For example, if we take a look at the test program: #0: r1 = 8; /* store_release((u64 *)(r10 - 8), r1); */ #1: .8byte %[store_release]; #2: r1 = *(u64 *)(r10 - 8); #3: r2 = r10; #4: r2 += r1; #5: r0 = 0; #6: exit; At #1, if the verifier doesn't remember that we wrote 8 to the stack, then later at #4 we would be adding an unbounded scalar value to the stack pointer, which would cause the program to be rejected: VERIFIER LOG: ============= ... math between fp pointer and register with unbounded min value is not allowed For easier CI integration, instead of using built-ins like __atomic_{load,store}_n() which depend on the new __BPF_FEATURE_LOAD_ACQ_STORE_REL pre-defined macro, manually craft load-acquire/store-release instructions using __imm_insn(), as suggested by Eduard. All new tests depend on: (1) Clang major version >= 18, and (2) ENABLE_ATOMICS_TESTS is defined (currently implies -mcpu=v3 or v4), and (3) JIT supports load-acquire/store-release (currently arm64 and x86-64) In .../progs/arena_atomics.c: /* 8-byte-aligned */ __u8 __arena_global load_acquire8_value = 0x12; /* 1-byte hole */ __u16 __arena_global load_acquire16_value = 0x1234; That 1-byte hole in the .addr_space.1 ELF section caused clang-17 to crash: fatal error: error in backend: unable to write nop sequence of 1 bytes To work around such llvm-17 CI job failures, conditionally define __arena_global variables as 64-bit if __clang_major__ < 18, to make sure .addr_space.1 has no holes. Ideally we should avoid compiling this file using clang-17 at all (arena tests depend on __BPF_FEATURE_ADDR_SPACE_CAST, and are skipped for llvm-17 anyway), but that is a separate topic. 
Acked-by: Eduard Zingerman Signed-off-by: Peilin Ye --- .../selftests/bpf/prog_tests/arena_atomics.c | 66 ++++- .../selftests/bpf/prog_tests/verifier.c | 4 + .../selftests/bpf/progs/arena_atomics.c | 121 +++++++- .../bpf/progs/verifier_load_acquire.c | 197 +++++++++++++ .../selftests/bpf/progs/verifier_precision.c | 49 ++++ .../bpf/progs/verifier_store_release.c | 264 ++++++++++++++++++ 6 files changed, 698 insertions(+), 3 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_load_acquire.c create mode 100644 tools/testing/selftests/bpf/progs/verifier_store_release.c diff --git a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c b/tools/testing/selftests/bpf/prog_tests/arena_atomics.c index 26e7c06c6cb4..d98577a6babc 100644 --- a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c +++ b/tools/testing/selftests/bpf/prog_tests/arena_atomics.c @@ -162,6 +162,66 @@ static void test_uaf(struct arena_atomics *skel) ASSERT_EQ(skel->arena->uaf_recovery_fails, 0, "uaf_recovery_fails"); } +static void test_load_acquire(struct arena_atomics *skel) +{ + LIBBPF_OPTS(bpf_test_run_opts, topts); + int err, prog_fd; + + if (skel->data->skip_lacq_srel_tests) { + printf("%s:SKIP: ENABLE_ATOMICS_TESTS not defined, Clang doesn't support addr_space_cast, and/or JIT doesn't support load-acquire\n", + __func__); + test__skip(); + return; + } + + /* No need to attach it, just run it directly */ + prog_fd = bpf_program__fd(skel->progs.load_acquire); + err = bpf_prog_test_run_opts(prog_fd, &topts); + if (!ASSERT_OK(err, "test_run_opts err")) + return; + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) + return; + + ASSERT_EQ(skel->arena->load_acquire8_result, 0x12, + "load_acquire8_result"); + ASSERT_EQ(skel->arena->load_acquire16_result, 0x1234, + "load_acquire16_result"); + ASSERT_EQ(skel->arena->load_acquire32_result, 0x12345678, + "load_acquire32_result"); + ASSERT_EQ(skel->arena->load_acquire64_result, 0x1234567890abcdef, + "load_acquire64_result"); +} + +static void test_store_release(struct arena_atomics *skel) +{ + LIBBPF_OPTS(bpf_test_run_opts, topts); + int err, prog_fd; + + if (skel->data->skip_lacq_srel_tests) { + printf("%s:SKIP: ENABLE_ATOMICS_TESTS not defined, Clang doesn't support addr_space_cast, and/or JIT doesn't support store-release\n", + __func__); + test__skip(); + return; + } + + /* No need to attach it, just run it directly */ + prog_fd = bpf_program__fd(skel->progs.store_release); + err = bpf_prog_test_run_opts(prog_fd, &topts); + if (!ASSERT_OK(err, "test_run_opts err")) + return; + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) + return; + + ASSERT_EQ(skel->arena->store_release8_result, 0x12, + "store_release8_result"); + ASSERT_EQ(skel->arena->store_release16_result, 0x1234, + "store_release16_result"); + ASSERT_EQ(skel->arena->store_release32_result, 0x12345678, + "store_release32_result"); + ASSERT_EQ(skel->arena->store_release64_result, 0x1234567890abcdef, + "store_release64_result"); +} + void test_arena_atomics(void) { struct arena_atomics *skel; @@ -171,7 +231,7 @@ void test_arena_atomics(void) if (!ASSERT_OK_PTR(skel, "arena atomics skeleton open")) return; - if (skel->data->skip_tests) { + if (skel->data->skip_all_tests) { printf("%s:SKIP:no ENABLE_ATOMICS_TESTS or no addr_space_cast support in clang", __func__); test__skip(); @@ -198,6 +258,10 @@ void test_arena_atomics(void) test_xchg(skel); if (test__start_subtest("uaf")) test_uaf(skel); + if (test__start_subtest("load_acquire")) + test_load_acquire(skel); + if 
(test__start_subtest("store_release")) + test_store_release(skel); cleanup: arena_atomics__destroy(skel); diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 8a0e1ff8a2dc..cfe47b529e01 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -45,6 +45,7 @@ #include "verifier_ldsx.skel.h" #include "verifier_leak_ptr.skel.h" #include "verifier_linked_scalars.skel.h" +#include "verifier_load_acquire.skel.h" #include "verifier_loops1.skel.h" #include "verifier_lwt.skel.h" #include "verifier_map_in_map.skel.h" @@ -80,6 +81,7 @@ #include "verifier_spill_fill.skel.h" #include "verifier_spin_lock.skel.h" #include "verifier_stack_ptr.skel.h" +#include "verifier_store_release.skel.h" #include "verifier_subprog_precision.skel.h" #include "verifier_subreg.skel.h" #include "verifier_tailcall_jit.skel.h" @@ -173,6 +175,7 @@ void test_verifier_int_ptr(void) { RUN(verifier_int_ptr); } void test_verifier_iterating_callbacks(void) { RUN(verifier_iterating_callbacks); } void test_verifier_jeq_infer_not_null(void) { RUN(verifier_jeq_infer_not_null); } void test_verifier_jit_convergence(void) { RUN(verifier_jit_convergence); } +void test_verifier_load_acquire(void) { RUN(verifier_load_acquire); } void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } void test_verifier_ldsx(void) { RUN(verifier_ldsx); } void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } @@ -211,6 +214,7 @@ void test_verifier_sockmap_mutate(void) { RUN(verifier_sockmap_mutate); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } +void test_verifier_store_release(void) { RUN(verifier_store_release); } void test_verifier_subprog_precision(void) { RUN(verifier_subprog_precision); } void test_verifier_subreg(void) { RUN(verifier_subreg); } void test_verifier_tailcall_jit(void) { RUN(verifier_tailcall_jit); } diff --git a/tools/testing/selftests/bpf/progs/arena_atomics.c b/tools/testing/selftests/bpf/progs/arena_atomics.c index 40dd57fca5cc..a52feff98112 100644 --- a/tools/testing/selftests/bpf/progs/arena_atomics.c +++ b/tools/testing/selftests/bpf/progs/arena_atomics.c @@ -6,6 +6,8 @@ #include #include #include "bpf_arena_common.h" +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" struct { __uint(type, BPF_MAP_TYPE_ARENA); @@ -19,9 +21,17 @@ struct { } arena SEC(".maps"); #if defined(ENABLE_ATOMICS_TESTS) && defined(__BPF_FEATURE_ADDR_SPACE_CAST) -bool skip_tests __attribute((__section__(".data"))) = false; +bool skip_all_tests __attribute((__section__(".data"))) = false; #else -bool skip_tests = true; +bool skip_all_tests = true; +#endif + +#if defined(ENABLE_ATOMICS_TESTS) && \ + defined(__BPF_FEATURE_ADDR_SPACE_CAST) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) +bool skip_lacq_srel_tests __attribute((__section__(".data"))) = false; +#else +bool skip_lacq_srel_tests = true; #endif __u32 pid = 0; @@ -274,4 +284,111 @@ int uaf(const void *ctx) return 0; } +#if __clang_major__ >= 18 +__u8 __arena_global load_acquire8_value = 0x12; +__u16 __arena_global load_acquire16_value = 0x1234; +__u32 __arena_global load_acquire32_value = 0x12345678; +__u64 __arena_global load_acquire64_value = 0x1234567890abcdef; + +__u8 __arena_global load_acquire8_result = 0; +__u16 __arena_global load_acquire16_result = 0; +__u32 __arena_global 
load_acquire32_result = 0; +__u64 __arena_global load_acquire64_result = 0; +#else +/* clang-17 crashes if the .addr_space.1 ELF section has holes. Work around + * this issue by defining the below variables as 64-bit. + */ +__u64 __arena_global load_acquire8_value; +__u64 __arena_global load_acquire16_value; +__u64 __arena_global load_acquire32_value; +__u64 __arena_global load_acquire64_value; + +__u64 __arena_global load_acquire8_result; +__u64 __arena_global load_acquire16_result; +__u64 __arena_global load_acquire32_result; +__u64 __arena_global load_acquire64_result; +#endif + +SEC("raw_tp/sys_enter") +int load_acquire(const void *ctx) +{ +#if defined(ENABLE_ATOMICS_TESTS) && \ + defined(__BPF_FEATURE_ADDR_SPACE_CAST) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +#define LOAD_ACQUIRE_ARENA(SIZEOP, SIZE, SRC, DST) \ + { asm volatile ( \ + "r1 = %[" #SRC "] ll;" \ + "r1 = addr_space_cast(r1, 0x0, 0x1);" \ + ".8byte %[load_acquire_insn];" \ + "r3 = %[" #DST "] ll;" \ + "r3 = addr_space_cast(r3, 0x0, 0x1);" \ + "*(" #SIZE " *)(r3 + 0) = r2;" \ + : \ + : __imm_addr(SRC), \ + __imm_insn(load_acquire_insn, \ + BPF_ATOMIC_OP(BPF_##SIZEOP, BPF_LOAD_ACQ, \ + BPF_REG_2, BPF_REG_1, 0)), \ + __imm_addr(DST) \ + : __clobber_all); } \ + + LOAD_ACQUIRE_ARENA(B, u8, load_acquire8_value, load_acquire8_result) + LOAD_ACQUIRE_ARENA(H, u16, load_acquire16_value, + load_acquire16_result) + LOAD_ACQUIRE_ARENA(W, u32, load_acquire32_value, + load_acquire32_result) + LOAD_ACQUIRE_ARENA(DW, u64, load_acquire64_value, + load_acquire64_result) +#undef LOAD_ACQUIRE_ARENA + +#endif + return 0; +} + +#if __clang_major__ >= 18 +__u8 __arena_global store_release8_result = 0; +__u16 __arena_global store_release16_result = 0; +__u32 __arena_global store_release32_result = 0; +__u64 __arena_global store_release64_result = 0; +#else +/* clang-17 crashes if the .addr_space.1 ELF section has holes. Work around + * this issue by defining the below variables as 64-bit. + */ +__u64 __arena_global store_release8_result; +__u64 __arena_global store_release16_result; +__u64 __arena_global store_release32_result; +__u64 __arena_global store_release64_result; +#endif + +SEC("raw_tp/sys_enter") +int store_release(const void *ctx) +{ +#if defined(ENABLE_ATOMICS_TESTS) && \ + defined(__BPF_FEATURE_ADDR_SPACE_CAST) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +#define STORE_RELEASE_ARENA(SIZEOP, DST, VAL) \ + { asm volatile ( \ + "r1 = " VAL ";" \ + "r2 = %[" #DST "] ll;" \ + "r2 = addr_space_cast(r2, 0x0, 0x1);" \ + ".8byte %[store_release_insn];" \ + : \ + : __imm_addr(DST), \ + __imm_insn(store_release_insn, \ + BPF_ATOMIC_OP(BPF_##SIZEOP, BPF_STORE_REL, \ + BPF_REG_2, BPF_REG_1, 0)) \ + : __clobber_all); } \ + + STORE_RELEASE_ARENA(B, store_release8_result, "0x12") + STORE_RELEASE_ARENA(H, store_release16_result, "0x1234") + STORE_RELEASE_ARENA(W, store_release32_result, "0x12345678") + STORE_RELEASE_ARENA(DW, store_release64_result, + "0x1234567890abcdef ll") +#undef STORE_RELEASE_ARENA + +#endif + return 0; +} + char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_load_acquire.c b/tools/testing/selftests/bpf/progs/verifier_load_acquire.c new file mode 100644 index 000000000000..ff308f24d745 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_load_acquire.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2025 Google LLC. 
*/ + +#include +#include +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +#if __clang_major__ >= 18 && defined(ENABLE_ATOMICS_TESTS) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +SEC("socket") +__description("load-acquire, 8-bit") +__success __success_unpriv __retval(0x12) +__naked void load_acquire_8(void) +{ + asm volatile ( + "w1 = 0x12;" + "*(u8 *)(r10 - 1) = w1;" + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u8 *)(r10 - 1)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -1)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire, 16-bit") +__success __success_unpriv __retval(0x1234) +__naked void load_acquire_16(void) +{ + asm volatile ( + "w1 = 0x1234;" + "*(u16 *)(r10 - 2) = w1;" + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u16 *)(r10 - 2)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -2)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire, 32-bit") +__success __success_unpriv __retval(0x12345678) +__naked void load_acquire_32(void) +{ + asm volatile ( + "w1 = 0x12345678;" + "*(u32 *)(r10 - 4) = w1;" + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u32 *)(r10 - 4)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -4)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire, 64-bit") +__success __success_unpriv __retval(0x1234567890abcdef) +__naked void load_acquire_64(void) +{ + asm volatile ( + "r1 = 0x1234567890abcdef ll;" + "*(u64 *)(r10 - 8) = r1;" + ".8byte %[load_acquire_insn];" // r0 = load_acquire((u64 *)(r10 - 8)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -8)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire with uninitialized src_reg") +__failure __failure_unpriv __msg("R2 !read_ok") +__naked void load_acquire_with_uninitialized_src_reg(void) +{ + asm volatile ( + ".8byte %[load_acquire_insn];" // r0 = load_acquire((u64 *)(r2 + 0)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire with non-pointer src_reg") +__failure __failure_unpriv __msg("R1 invalid mem access 'scalar'") +__naked void load_acquire_with_non_pointer_src_reg(void) +{ + asm volatile ( + "r1 = 0;" + ".8byte %[load_acquire_insn];" // r0 = load_acquire((u64 *)(r1 + 0)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_1, 0)) + : __clobber_all); +} + +SEC("socket") +__description("misaligned load-acquire") +__failure __failure_unpriv __msg("misaligned stack access off") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void load_acquire_misaligned(void) +{ + asm volatile ( + "r1 = 0;" + "*(u64 *)(r10 - 8) = r1;" + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u32 *)(r10 - 5)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -5)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire from ctx pointer") +__failure __failure_unpriv __msg("BPF_ATOMIC loads from R1 ctx is not allowed") +__naked void load_acquire_from_ctx_pointer(void) +{ + asm volatile ( + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u8 *)(r1 + 0)); + "exit;" + : + : __imm_insn(load_acquire_insn, + 
BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_1, 0)) + : __clobber_all); +} + +SEC("xdp") +__description("load-acquire from pkt pointer") +__failure __msg("BPF_ATOMIC loads from R2 pkt is not allowed") +__naked void load_acquire_from_pkt_pointer(void) +{ + asm volatile ( + "r2 = *(u32 *)(r1 + %[xdp_md_data]);" + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u8 *)(r2 + 0)); + "exit;" + : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +SEC("flow_dissector") +__description("load-acquire from flow_keys pointer") +__failure __msg("BPF_ATOMIC loads from R2 flow_keys is not allowed") +__naked void load_acquire_from_flow_keys_pointer(void) +{ + asm volatile ( + "r2 = *(u64 *)(r1 + %[__sk_buff_flow_keys]);" + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u8 *)(r2 + 0)); + "exit;" + : + : __imm_const(__sk_buff_flow_keys, + offsetof(struct __sk_buff, flow_keys)), + __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +SEC("sk_reuseport") +__description("load-acquire from sock pointer") +__failure __msg("BPF_ATOMIC loads from R2 sock is not allowed") +__naked void load_acquire_from_sock_pointer(void) +{ + asm volatile ( + "r2 = *(u64 *)(r1 + %[sk_reuseport_md_sk]);" + ".8byte %[load_acquire_insn];" // w0 = load_acquire((u8 *)(r2 + 0)); + "exit;" + : + : __imm_const(sk_reuseport_md_sk, offsetof(struct sk_reuseport_md, sk)), + __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +#else + +SEC("socket") +__description("Clang version < 18, ENABLE_ATOMICS_TESTS not defined, and/or JIT doesn't support load-acquire, use a dummy test") +__success +int dummy_test(void) +{ + return 0; +} + +#endif + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_precision.c b/tools/testing/selftests/bpf/progs/verifier_precision.c index 6b564d4c0986..6662d4b39969 100644 --- a/tools/testing/selftests/bpf/progs/verifier_precision.c +++ b/tools/testing/selftests/bpf/progs/verifier_precision.c @@ -2,6 +2,7 @@ /* Copyright (C) 2023 SUSE LLC */ #include #include +#include "../../../include/linux/filter.h" #include "bpf_misc.h" SEC("?raw_tp") @@ -90,6 +91,54 @@ __naked int bpf_end_bswap(void) ::: __clobber_all); } +#if defined(ENABLE_ATOMICS_TESTS) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +SEC("?raw_tp") +__success __log_level(2) +__msg("mark_precise: frame0: regs=r2 stack= before 3: (bf) r3 = r10") +__msg("mark_precise: frame0: regs=r2 stack= before 2: (db) r2 = load_acquire((u64 *)(r10 -8))") +__msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") +__msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") +__naked int bpf_load_acquire(void) +{ + asm volatile ( + "r1 = 8;" + "*(u64 *)(r10 - 8) = r1;" + ".8byte %[load_acquire_insn];" /* r2 = load_acquire((u64 *)(r10 - 8)); */ + "r3 = r10;" + "r3 += r2;" /* mark_precise */ + "r0 = 0;" + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -8)) + : __clobber_all); +} + +SEC("?raw_tp") +__success __log_level(2) +__msg("mark_precise: frame0: regs=r1 stack= before 3: (bf) r2 = r10") +__msg("mark_precise: frame0: regs=r1 stack= before 2: (79) r1 = *(u64 *)(r10 -8)") +__msg("mark_precise: frame0: regs= stack=-8 before 1: (db) store_release((u64 
*)(r10 -8), r1)") +__msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") +__naked int bpf_store_release(void) +{ + asm volatile ( + "r1 = 8;" + ".8byte %[store_release_insn];" /* store_release((u64 *)(r10 - 8), r1); */ + "r1 = *(u64 *)(r10 - 8);" + "r2 = r10;" + "r2 += r1;" /* mark_precise */ + "r0 = 0;" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8)) + : __clobber_all); +} + +#endif /* load-acquire, store-release */ #endif /* v4 instruction */ SEC("?raw_tp") diff --git a/tools/testing/selftests/bpf/progs/verifier_store_release.c b/tools/testing/selftests/bpf/progs/verifier_store_release.c new file mode 100644 index 000000000000..f1c64c08f25f --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_store_release.c @@ -0,0 +1,264 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2025 Google LLC. */ + +#include +#include +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +#if __clang_major__ >= 18 && defined(ENABLE_ATOMICS_TESTS) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +SEC("socket") +__description("store-release, 8-bit") +__success __success_unpriv __retval(0x12) +__naked void store_release_8(void) +{ + asm volatile ( + "w1 = 0x12;" + ".8byte %[store_release_insn];" // store_release((u8 *)(r10 - 1), w1); + "w0 = *(u8 *)(r10 - 1);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -1)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, 16-bit") +__success __success_unpriv __retval(0x1234) +__naked void store_release_16(void) +{ + asm volatile ( + "w1 = 0x1234;" + ".8byte %[store_release_insn];" // store_release((u16 *)(r10 - 2), w1); + "w0 = *(u16 *)(r10 - 2);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_H, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -2)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, 32-bit") +__success __success_unpriv __retval(0x12345678) +__naked void store_release_32(void) +{ + asm volatile ( + "w1 = 0x12345678;" + ".8byte %[store_release_insn];" // store_release((u32 *)(r10 - 4), w1); + "w0 = *(u32 *)(r10 - 4);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_W, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -4)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, 64-bit") +__success __success_unpriv __retval(0x1234567890abcdef) +__naked void store_release_64(void) +{ + asm volatile ( + "r1 = 0x1234567890abcdef ll;" + ".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r1); + "r0 = *(u64 *)(r10 - 8);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8)) + : __clobber_all); +} + +SEC("socket") +__description("store-release with uninitialized src_reg") +__failure __failure_unpriv __msg("R2 !read_ok") +__naked void store_release_with_uninitialized_src_reg(void) +{ + asm volatile ( + ".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r2); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_2, -8)) + : __clobber_all); +} + +SEC("socket") +__description("store-release with uninitialized dst_reg") +__failure __failure_unpriv __msg("R2 !read_ok") +__naked void store_release_with_uninitialized_dst_reg(void) +{ + asm volatile ( + "r1 = 0;" + ".8byte %[store_release_insn];" // store_release((u64 *)(r2 - 8), r1); + "exit;" + : + : 
__imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_2, BPF_REG_1, -8)) + : __clobber_all); +} + +SEC("socket") +__description("store-release with non-pointer dst_reg") +__failure __failure_unpriv __msg("R1 invalid mem access 'scalar'") +__naked void store_release_with_non_pointer_dst_reg(void) +{ + asm volatile ( + "r1 = 0;" + ".8byte %[store_release_insn];" // store_release((u64 *)(r1 + 0), r1); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_1, BPF_REG_1, 0)) + : __clobber_all); +} + +SEC("socket") +__description("misaligned store-release") +__failure __failure_unpriv __msg("misaligned stack access off") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void store_release_misaligned(void) +{ + asm volatile ( + "w0 = 0;" + ".8byte %[store_release_insn];" // store_release((u32 *)(r10 - 5), w0); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_W, BPF_STORE_REL, BPF_REG_10, BPF_REG_0, -5)) + : __clobber_all); +} + +SEC("socket") +__description("store-release to ctx pointer") +__failure __failure_unpriv __msg("BPF_ATOMIC stores into R1 ctx is not allowed") +__naked void store_release_to_ctx_pointer(void) +{ + asm volatile ( + "w0 = 0;" + ".8byte %[store_release_insn];" // store_release((u8 *)(r1 + 0), w0); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_1, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("xdp") +__description("store-release to pkt pointer") +__failure __msg("BPF_ATOMIC stores into R2 pkt is not allowed") +__naked void store_release_to_pkt_pointer(void) +{ + asm volatile ( + "w0 = 0;" + "r2 = *(u32 *)(r1 + %[xdp_md_data]);" + ".8byte %[store_release_insn];" // store_release((u8 *)(r2 + 0), w0); + "exit;" + : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_2, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("flow_dissector") +__description("store-release to flow_keys pointer") +__failure __msg("BPF_ATOMIC stores into R2 flow_keys is not allowed") +__naked void store_release_to_flow_keys_pointer(void) +{ + asm volatile ( + "w0 = 0;" + "r2 = *(u64 *)(r1 + %[__sk_buff_flow_keys]);" + ".8byte %[store_release_insn];" // store_release((u8 *)(r2 + 0), w0); + "exit;" + : + : __imm_const(__sk_buff_flow_keys, + offsetof(struct __sk_buff, flow_keys)), + __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_2, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("sk_reuseport") +__description("store-release to sock pointer") +__failure __msg("BPF_ATOMIC stores into R2 sock is not allowed") +__naked void store_release_to_sock_pointer(void) +{ + asm volatile ( + "w0 = 0;" + "r2 = *(u64 *)(r1 + %[sk_reuseport_md_sk]);" + ".8byte %[store_release_insn];" // store_release((u8 *)(r2 + 0), w0); + "exit;" + : + : __imm_const(sk_reuseport_md_sk, offsetof(struct sk_reuseport_md, sk)), + __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_2, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, leak pointer to stack") +__success __success_unpriv __retval(0) +__naked void store_release_leak_pointer_to_stack(void) +{ + asm volatile ( + ".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r1); + "r0 = 0;" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8)) + : __clobber_all); +} + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, long long);
+} map_hash_8b SEC(".maps");
+
+SEC("socket")
+__description("store-release, leak pointer to map")
+__success __retval(0)
+__failure_unpriv __msg_unpriv("R6 leaks addr into map")
+__naked void store_release_leak_pointer_to_map(void)
+{
+	asm volatile (
+	"r6 = r1;"
+	"r1 = %[map_hash_8b] ll;"
+	"r2 = 0;"
+	"*(u64 *)(r10 - 8) = r2;"
+	"r2 = r10;"
+	"r2 += -8;"
+	"call %[bpf_map_lookup_elem];"
+	"if r0 == 0 goto l0_%=;"
+	".8byte %[store_release_insn];" // store_release((u64 *)(r0 + 0), r6);
+"l0_%=:"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm_addr(map_hash_8b),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_insn(store_release_insn,
+		     BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_0, BPF_REG_6, 0))
+	: __clobber_all);
+}
+
+#else
+
+SEC("socket")
+__description("Clang version < 18, ENABLE_ATOMICS_TESTS not defined, and/or JIT doesn't support store-release, use a dummy test")
+__success
+int dummy_test(void)
+{
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";

From patchwork Mon Mar 3 05:38:29 2025
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13998229
Date: Mon, 3 Mar 2025 05:38:29 +0000
Subject: [PATCH bpf-next v4 10/10] bpf, docs: Update instruction-set.rst for
 load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Peilin Ye, bpf@ietf.org, Alexei Starovoitov, Xu Kuohai, Eduard Zingerman,
 David Vernet, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
 Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
 Jiri Olsa, Jonathan Corbet, "Paul E. McKenney", Puranjay Mohan,
 Ilya Leoshkevich, Heiko Carstens, Vasily Gorbik, Catalin Marinas,
 Will Deacon, Quentin Monnet, Mykola Lysenko, Shuah Khan, Ihor Solodrai,
 Yingchi Long, Josh Don, Barret Rhoden, Neel Natu, Benjamin Segall,
 linux-kernel@vger.kernel.org

Update documentation for the new load-acquire and store-release
instructions. Rename existing atomic operations as "atomic
read-modify-write (RMW) operations".

Following RFC 9669, section 7.3.
"Adding Instructions", create new conformance groups "atomic32v2" and "atomic64v2", where: * atomic32v2: includes all instructions in "atomic32", plus the new 8-bit, 16-bit and 32-bit atomic load-acquire and store-release instructions * atomic64v2: includes all instructions in "atomic64" and "atomic32v2", plus the new 64-bit atomic load-acquire and store-release instructions Cc: bpf@ietf.org Signed-off-by: Peilin Ye --- .../bpf/standardization/instruction-set.rst | 78 +++++++++++++++---- 1 file changed, 62 insertions(+), 16 deletions(-) diff --git a/Documentation/bpf/standardization/instruction-set.rst b/Documentation/bpf/standardization/instruction-set.rst index fbe975585236..1bed27572bca 100644 --- a/Documentation/bpf/standardization/instruction-set.rst +++ b/Documentation/bpf/standardization/instruction-set.rst @@ -139,8 +139,14 @@ This document defines the following conformance groups: specification unless otherwise noted. * base64: includes base32, plus instructions explicitly noted as being in the base64 conformance group. -* atomic32: includes 32-bit atomic operation instructions (see `Atomic operations`_). -* atomic64: includes atomic32, plus 64-bit atomic operation instructions. +* atomic32: includes 32-bit atomic read-modify-write instructions (see + `Atomic operations`_). +* atomic32v2: includes atomic32, plus 8-bit, 16-bit and 32-bit atomic + load-acquire and store-release instructions. +* atomic64: includes atomic32, plus 64-bit atomic read-modify-write + instructions. +* atomic64v2: unifies atomic32v2 and atomic64, plus 64-bit atomic load-acquire + and store-release instructions. * divmul32: includes 32-bit division, multiplication, and modulo instructions. * divmul64: includes divmul32, plus 64-bit division, multiplication, and modulo instructions. @@ -661,20 +667,29 @@ Atomic operations are operations that operate on memory and can not be interrupted or corrupted by other access to the same memory region by other BPF programs or means outside of this specification. -All atomic operations supported by BPF are encoded as store operations -that use the ``ATOMIC`` mode modifier as follows: +All atomic operations supported by BPF are encoded as ``STX`` instructions +that use the ``ATOMIC`` mode modifier, with the 'imm' field encoding the +actual atomic operation. These operations fall into two categories, as +described in the following sections: -* ``{ATOMIC, W, STX}`` for 32-bit operations, which are +* `Atomic read-modify-write operations`_ +* `Atomic load and store operations`_ + +Atomic read-modify-write operations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The atomic read-modify-write (RMW) operations are encoded as follows: + +* ``{ATOMIC, W, STX}`` for 32-bit RMW operations, which are part of the "atomic32" conformance group. -* ``{ATOMIC, DW, STX}`` for 64-bit operations, which are +* ``{ATOMIC, DW, STX}`` for 64-bit RMW operations, which are part of the "atomic64" conformance group. -* 8-bit and 16-bit wide atomic operations are not supported. +* 8-bit and 16-bit wide atomic RMW operations are not supported. -The 'imm' field is used to encode the actual atomic operation. -Simple atomic operation use a subset of the values defined to encode -arithmetic operations in the 'imm' field to encode the atomic operation: +Simple atomic RMW operation use a subset of the values defined to encode +arithmetic operations in the 'imm' field to encode the atomic RMW operation: -.. table:: Simple atomic operations +.. 
+.. table:: Simple atomic read-modify-write operations
 
   ========  =====  ===========
   imm       value  description
@@ -694,10 +709,10 @@ arithmetic operations in the 'imm' field to encode the atomic operation:
 
   *(u64 *)(dst + offset) += src
 
-In addition to the simple atomic operations, there also is a modifier and
-two complex atomic operations:
+In addition to the simple atomic RMW operations, there is also a modifier and
+two complex atomic RMW operations:
 
-.. table:: Complex atomic operations
+.. table:: Complex atomic read-modify-write operations
 
   ===========  ================  ===========================
   imm          value             description
   ===========  ================  ===========================
@@ -707,8 +722,8 @@ two complex atomic operations:
   XCHG         0xe0 | FETCH      atomic exchange
   CMPXCHG      0xf0 | FETCH      atomic compare and exchange
   ===========  ================  ===========================
 
-The ``FETCH`` modifier is optional for simple atomic operations, and
-always set for the complex atomic operations. If the ``FETCH`` flag
+The ``FETCH`` modifier is optional for simple atomic RMW operations, and
+always set for the complex atomic RMW operations. If the ``FETCH`` flag
 is set, then the operation also overwrites ``src`` with the value that
 was in memory before it was modified.
@@ -721,6 +736,37 @@ The ``CMPXCHG`` operation atomically compares the value addressed by
 value that was at ``dst + offset`` before the operation is zero-extended
 and loaded back to ``R0``.
 
+Atomic load and store operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To encode an atomic load or store operation, the 'imm' field is one of:
+
+.. table:: Atomic load and store operations
+
+  =========  =====  ====================
+  imm        value  description
+  =========  =====  ====================
+  LOAD_ACQ   0x100  atomic load-acquire
+  STORE_REL  0x110  atomic store-release
+  =========  =====  ====================
+
+``{ATOMIC, <size>, STX}`` with 'imm' = LOAD_ACQ means::
+
+  dst = load_acquire((unsigned size *)(src + offset))
+
+``{ATOMIC, <size>, STX}`` with 'imm' = STORE_REL means::
+
+  store_release((unsigned size *)(dst + offset), src)
+
+Where '<size>' is one of: ``B``, ``H``, ``W``, or ``DW``, and 'unsigned size'
+is one of: u8, u16, u32, or u64.
+
+8-bit, 16-bit and 32-bit atomic load-acquire and store-release instructions
+are part of the "atomic32v2" conformance group.
+
+64-bit atomic load-acquire and store-release instructions are part of the
+"atomic64v2" conformance group.
+
 64-bit immediate instructions
 -----------------------------
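[Editorial illustration, not part of the patch: in kernel code, the two new
operations can be constructed with the BPF_ATOMIC_OP() macro from
include/linux/filter.h, exactly as the store-release selftests earlier in
this series do. A minimal sketch, assuming the BPF_LOAD_ACQ (0x100) and
BPF_STORE_REL (0x110) definitions that this series introduces; the fallback
defines below simply reuse the 'imm' values from the table above.]

#include <linux/filter.h>	/* BPF_ATOMIC_OP(), struct bpf_insn */

#ifndef BPF_LOAD_ACQ		/* 'imm' values from the table above */
#define BPF_LOAD_ACQ	0x100
#define BPF_STORE_REL	0x110
#endif

/* {ATOMIC, W, STX} with 'imm' = LOAD_ACQ:
 * r0 = load_acquire((u32 *)(r1 + 0))
 */
static const struct bpf_insn load_acq =
	BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_1, 0);

/* {ATOMIC, DW, STX} with 'imm' = STORE_REL:
 * store_release((u64 *)(r10 - 8), r2)
 */
static const struct bpf_insn store_rel =
	BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_2, -8);

[Note how the size modifier selects the operand width while 'imm' selects
the operation, which is why a single opcode (STX | ATOMIC) covers both the
RMW operations and the new load-acquire/store-release forms.]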