From patchwork Mon Nov 23 17:31:56 2020
From: Brendan Jackman <jackmanb@google.com>
Date: Mon, 23 Nov 2020 17:31:56 +0000
Message-Id: <20201123173202.1335708-2-jackmanb@google.com>
In-Reply-To: <20201123173202.1335708-1-jackmanb@google.com>
Subject: [PATCH 1/7] bpf: Factor out emission of ModR/M for *(reg + off)
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, Brendan Jackman

The case for JITing atomics is about to get more complicated. Let's
factor out some common code to make the review and result more
readable.

NB the atomics code doesn't yet use the new helper - a subsequent
patch will add its use as a side-effect of other changes.

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 arch/x86/net/bpf_jit_comp.c | 42 +++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 796506dcfc42..94b17bd30e00 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -681,6 +681,27 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg)
 	*pprog = prog;
 }
 
+/* Emit the ModR/M byte for addressing *(r1 + off) and r2 */
+static void emit_modrm_dstoff(u8 **pprog, u32 r1, u32 r2, int off)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	if (is_imm8(off)) {
+		/* 1-byte signed displacement.
+		 *
+		 * If off == 0 we could skip this and save one extra byte, but
+		 * special case of x86 R13 which always needs an offset is not
+		 * worth the hassle
+		 */
+		EMIT2(add_2reg(0x40, r1, r2), off);
+	} else {
+		/* 4-byte signed displacement */
+		EMIT1_off32(add_2reg(0x80, r1, r2), off);
+	}
+	*pprog = prog;
+}
+
 /* LDX: dst_reg = *(u8*)(src_reg + off) */
 static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -708,15 +729,7 @@ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 		EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x8B);
 		break;
 	}
-	/*
-	 * If insn->off == 0 we can save one extra byte, but
-	 * special case of x86 R13 which always needs an offset
-	 * is not worth the hassle
-	 */
-	if (is_imm8(off))
-		EMIT2(add_2reg(0x40, src_reg, dst_reg), off);
-	else
-		EMIT1_off32(add_2reg(0x80, src_reg, dst_reg), off);
+	emit_modrm_dstoff(&prog, src_reg, dst_reg, off);
 	*pprog = prog;
 }
 
@@ -751,10 +764,7 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 		EMIT2(add_2mod(0x48, dst_reg, src_reg), 0x89);
 		break;
 	}
-	if (is_imm8(off))
-		EMIT2(add_2reg(0x40, dst_reg, src_reg), off);
-	else
-		EMIT1_off32(add_2reg(0x80, dst_reg, src_reg), off);
+	emit_modrm_dstoff(&prog, dst_reg, src_reg, off);
 	*pprog = prog;
 }
 
@@ -1240,11 +1250,7 @@ st:			if (is_imm8(insn->off))
 			goto xadd;
 		case BPF_STX | BPF_XADD | BPF_DW:
 			EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
-xadd:			if (is_imm8(insn->off))
-				EMIT2(add_2reg(0x40, dst_reg, src_reg), insn->off);
-			else
-				EMIT1_off32(add_2reg(0x80, dst_reg, src_reg),
-					    insn->off);
+xadd:			emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
 			break;
 
 			/* call */
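
For readers following the encoding: the new helper is choosing between the
ModR/M disp8 (mod=01) and disp32 (mod=10) addressing forms. Below is a
minimal userspace sketch of the same decision. It is illustrative only: it
uses raw x86 register numbers rather than the JIT's BPF-to-x86 register
mapping, and encode_modrm_dstoff()/add_2reg() here are simplified stand-ins
for the kernel helpers and EMIT*() macros, not the real functions.

  #include <stdint.h>
  #include <stdio.h>

  /* Simplified stand-in: the kernel's add_2reg() maps BPF registers to
   * x86 ones first; here 0 = rax, 1 = rcx, ... are used directly.
   */
  static int is_imm8(int value)
  {
  	return value <= 127 && value >= -128;
  }

  static uint8_t add_2reg(uint8_t byte, uint32_t r_rm, uint32_t r_reg)
  {
  	return byte + (r_rm & 7) + ((r_reg & 7) << 3);
  }

  /* Encode ModR/M + displacement for *(r_rm + off), second operand r_reg */
  static size_t encode_modrm_dstoff(uint8_t *out, uint32_t r_rm,
  				  uint32_t r_reg, int off)
  {
  	size_t n = 0;

  	if (is_imm8(off)) {
  		out[n++] = add_2reg(0x40, r_rm, r_reg);	/* mod = 01: disp8 */
  		out[n++] = (uint8_t)off;
  	} else {
  		out[n++] = add_2reg(0x80, r_rm, r_reg);	/* mod = 10: disp32 */
  		for (int i = 0; i < 4; i++)		/* little-endian */
  			out[n++] = (uint8_t)(off >> (i * 8));
  	}
  	return n;
  }

  int main(void)
  {
  	uint8_t buf[5];
  	size_t i, n = encode_modrm_dstoff(buf, 0 /* rax */, 1 /* rcx */, 16);

  	for (i = 0; i < n; i++)
  		printf("%02x ", buf[i]);	/* prints: 48 10 */
  	printf("\n");
  	return 0;
  }

This is why the helper takes only the two registers and the offset: the
operation itself (load, store, xadd) is fixed by the opcode bytes emitted
before the ModR/M byte.
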
From patchwork Mon Nov 23 17:31:57 2020
From: Brendan Jackman <jackmanb@google.com>
Date: Mon, 23 Nov 2020 17:31:57 +0000
Message-Id: <20201123173202.1335708-3-jackmanb@google.com>
In-Reply-To: <20201123173202.1335708-1-jackmanb@google.com>
Subject: [PATCH 2/7] bpf: x86: Factor out emission of REX byte
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, Brendan Jackman

The JIT case for encoding atomic ops is about to get more complicated.
In order to make the review & resulting code easier, let's factor out
some shared helpers.

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 arch/x86/net/bpf_jit_comp.c | 39 ++++++++++++++++++++++---------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 94b17bd30e00..a839c1a54276 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -702,6 +702,21 @@ static void emit_modrm_dstoff(u8 **pprog, u32 r1, u32 r2, int off)
 	*pprog = prog;
 }
 
+/*
+ * Emit a REX byte if it will be necessary to address these registers
+ */
+static void maybe_emit_rex(u8 **pprog, u32 reg_rm, u32 reg_reg, bool wide)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	if (wide)
+		EMIT1(add_2mod(0x48, reg_rm, reg_reg));
+	else if (is_ereg(reg_rm) || is_ereg(reg_reg))
+		EMIT1(add_2mod(0x40, reg_rm, reg_reg));
+	*pprog = prog;
+}
+
 /* LDX: dst_reg = *(u8*)(src_reg + off) */
 static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -854,10 +869,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			case BPF_OR: b2 = 0x09; break;
 			case BPF_XOR: b2 = 0x31; break;
 			}
-			if (BPF_CLASS(insn->code) == BPF_ALU64)
-				EMIT1(add_2mod(0x48, dst_reg, src_reg));
-			else if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT1(add_2mod(0x40, dst_reg, src_reg));
+			maybe_emit_rex(&prog, dst_reg, src_reg,
+				       BPF_CLASS(insn->code) == BPF_ALU64);
 			EMIT2(b2, add_2reg(0xC0, dst_reg, src_reg));
 			break;
 
@@ -1301,20 +1314,16 @@ xadd:			emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
 		case BPF_JMP32 | BPF_JSGE | BPF_X:
 		case BPF_JMP32 | BPF_JSLE | BPF_X:
 			/* cmp dst_reg, src_reg */
-			if (BPF_CLASS(insn->code) == BPF_JMP)
-				EMIT1(add_2mod(0x48, dst_reg, src_reg));
-			else if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT1(add_2mod(0x40, dst_reg, src_reg));
+			maybe_emit_rex(&prog, dst_reg, src_reg,
+				       BPF_CLASS(insn->code) == BPF_JMP);
 			EMIT2(0x39, add_2reg(0xC0, dst_reg, src_reg));
 			goto emit_cond_jmp;
 
 		case BPF_JMP | BPF_JSET | BPF_X:
 		case BPF_JMP32 | BPF_JSET | BPF_X:
 			/* test dst_reg, src_reg */
-			if (BPF_CLASS(insn->code) == BPF_JMP)
-				EMIT1(add_2mod(0x48, dst_reg, src_reg));
-			else if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT1(add_2mod(0x40, dst_reg, src_reg));
+			maybe_emit_rex(&prog, dst_reg, src_reg,
+				       BPF_CLASS(insn->code) == BPF_JMP);
 			EMIT2(0x85, add_2reg(0xC0, dst_reg, src_reg));
 			goto emit_cond_jmp;
 
@@ -1350,10 +1359,8 @@ xadd:			emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
 		case BPF_JMP32 | BPF_JSLE | BPF_K:
 			/* test dst_reg, dst_reg to save one extra byte */
 			if (imm32 == 0) {
-				if (BPF_CLASS(insn->code) == BPF_JMP)
-					EMIT1(add_2mod(0x48, dst_reg, dst_reg));
-				else if (is_ereg(dst_reg))
-					EMIT1(add_2mod(0x40, dst_reg, dst_reg));
+				maybe_emit_rex(&prog, dst_reg, dst_reg,
+					       BPF_CLASS(insn->code) == BPF_JMP);
 				EMIT2(0x85, add_2reg(0xC0, dst_reg, dst_reg));
 				goto emit_cond_jmp;
 			}
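
A rough sketch of what maybe_emit_rex() encodes: REX is a single 0100WRXB
byte, where W requests a 64-bit operand and R/B extend the reg and rm
fields so that r8-r15 become reachable. The helpers below are simplified
stand-ins that operate on raw x86 register numbers; the kernel's
is_ereg()/add_2mod() instead ask whether a *BPF* register is mapped onto
r8-r15 via the JIT's register table, so treat this as an illustration
under that assumption, not the kernel implementation.

  #include <stdint.h>
  #include <stddef.h>

  static int is_ereg(uint32_t reg)
  {
  	return reg >= 8;	/* r8-r15 need a REX bit */
  }

  /* Fold REX.B (bit 0) for the rm register and REX.R (bit 2) for the
   * reg field into a REX base byte (REX.X for index regs is unused here).
   */
  static uint8_t add_2mod(uint8_t byte, uint32_t r_rm, uint32_t r_reg)
  {
  	if (is_ereg(r_rm))
  		byte |= 1;	/* REX.B */
  	if (is_ereg(r_reg))
  		byte |= 4;	/* REX.R */
  	return byte;
  }

  /* Emit a REX byte only when the width or the registers require one */
  static size_t maybe_encode_rex(uint8_t *out, uint32_t r_rm,
  			       uint32_t r_reg, int wide)
  {
  	if (wide) {
  		out[0] = add_2mod(0x48, r_rm, r_reg);	/* REX.W set */
  		return 1;
  	}
  	if (is_ereg(r_rm) || is_ereg(r_reg)) {
  		out[0] = add_2mod(0x40, r_rm, r_reg);	/* plain REX */
  		return 1;
  	}
  	return 0;	/* 32-bit op on legacy registers: no prefix */
  }

The bool parameter in the kernel helper mirrors the "wide" argument above:
callers pass BPF_CLASS(insn->code) == BPF_ALU64 (or BPF_JMP) so the same
call site serves both the 32- and 64-bit variants.
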
From patchwork Mon Nov 23 17:31:58 2020
From: Brendan Jackman <jackmanb@google.com>
Date: Mon, 23 Nov 2020 17:31:58 +0000
Message-Id: <20201123173202.1335708-4-jackmanb@google.com>
In-Reply-To: <20201123173202.1335708-1-jackmanb@google.com>
Subject: [PATCH 3/7] bpf: Rename BPF_XADD and prepare to encode other atomics in .imm
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, Brendan Jackman

A subsequent patch will add additional atomic operations. These new
operations will use the same opcode field as the existing XADD, with
the immediate discriminating different operations.

In preparation, rename the instruction mode to BPF_ATOMIC and start
calling the zero immediate BPF_ADD.

This is possible (doesn't break existing valid BPF progs) because the
immediate field is currently reserved MBZ and BPF_ADD is zero.

All uses are removed from the tree but the BPF_XADD definition is
kept around to avoid breaking builds for people including kernel
headers.

Signed-off-by: Brendan Jackman <jackmanb@google.com>
Reported-by: kernel test robot
---
 Documentation/networking/filter.rst           | 27 +++++++++-------
 arch/arm/net/bpf_jit_32.c                     |  7 ++---
 arch/arm64/net/bpf_jit_comp.c                 | 16 +++++++---
 arch/mips/net/ebpf_jit.c                      | 11 +++++--
 arch/powerpc/net/bpf_jit_comp64.c             | 25 ++++++++++++---
 arch/riscv/net/bpf_jit_comp32.c               | 20 +++++++++---
 arch/riscv/net/bpf_jit_comp64.c               | 16 +++++++---
 arch/s390/net/bpf_jit_comp.c                  | 26 +++++++++-------
 arch/sparc/net/bpf_jit_comp_64.c              | 14 +++++++--
 arch/x86/net/bpf_jit_comp.c                   | 30 +++++++++++-------
 arch/x86/net/bpf_jit_comp32.c                 |  6 ++--
 drivers/net/ethernet/netronome/nfp/bpf/jit.c  | 14 ++++++---
 drivers/net/ethernet/netronome/nfp/bpf/main.h |  4 +--
 .../net/ethernet/netronome/nfp/bpf/verifier.c | 13 +++++---
 include/linux/filter.h                        |  8 +++--
 include/uapi/linux/bpf.h                      |  3 +-
 kernel/bpf/core.c                             | 31 +++++++++++++------
 kernel/bpf/disasm.c                           |  6 ++--
 kernel/bpf/verifier.c                         | 24 ++++++++------
 lib/test_bpf.c                                |  2 +-
 samples/bpf/bpf_insn.h                        |  4 +--
 samples/bpf/sock_example.c                    |  3 +-
 samples/bpf/test_cgrp2_attach.c               |  6 ++--
 tools/include/linux/filter.h                  |  7 +++--
 tools/include/uapi/linux/bpf.h                |  3 +-
 .../bpf/prog_tests/cgroup_attach_multi.c      |  6 ++--
 tools/testing/selftests/bpf/verifier/ctx.c    |  6 ++--
 .../testing/selftests/bpf/verifier/leak_ptr.c |  4 +--
 tools/testing/selftests/bpf/verifier/unpriv.c |  3 +-
 tools/testing/selftests/bpf/verifier/xadd.c   |  2 +-
 30 files changed, 230 insertions(+), 117 deletions(-)

diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index debb59e374de..a9847662bbab 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -1006,13 +1006,13 @@ Size modifier is one of ...
 
 Mode modifier is one of::
 
-  BPF_IMM  0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
-  BPF_ABS  0x20
-  BPF_IND  0x40
-  BPF_MEM  0x60
-  BPF_LEN  0x80 /* classic BPF only, reserved in eBPF */
-  BPF_MSH  0xa0 /* classic BPF only, reserved in eBPF */
-  BPF_XADD 0xc0 /* eBPF only, exclusive add */
+  BPF_IMM     0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
+  BPF_ABS     0x20
+  BPF_IND     0x40
+  BPF_MEM     0x60
+  BPF_LEN     0x80 /* classic BPF only, reserved in eBPF */
+  BPF_MSH     0xa0 /* classic BPF only, reserved in eBPF */
+  BPF_ATOMIC  0xc0 /* eBPF only, atomic operations */
 
 eBPF has two non-generic instructions: (BPF_ABS | <size> | BPF_LD) and
 (BPF_IND | <size> | BPF_LD) which are used to access packet data.
@@ -1044,11 +1044,16 @@ Unlike classic BPF instruction set, eBPF has generic load/store operations::
 
     BPF_MEM | <size> | BPF_STX:  *(size *) (dst_reg + off) = src_reg
     BPF_MEM | <size> | BPF_ST:   *(size *) (dst_reg + off) = imm32
     BPF_MEM | <size> | BPF_LDX:  dst_reg = *(size *) (src_reg + off)
-    BPF_XADD | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
-    BPF_XADD | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
 
-Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW. Note that 1 and
-2 byte atomic increments are not supported.
+Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW.
+
+It also includes atomic operations, which use the immediate field for extra
+encoding::
+
+    BPF_ADD, BPF_ATOMIC | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
+    BPF_ADD, BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
+
+Note that 1 and 2 byte atomic operations are not supported.
 
 eBPF has one 16-byte instruction: BPF_LD | BPF_DW | BPF_IMM which consists
 of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 0207b6ea6e8a..897634d0a67c 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -1620,10 +1620,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		}
 		emit_str_r(dst_lo, tmp2, off, ctx, BPF_SIZE(code));
 		break;
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W:
-	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	/* Atomic ops */
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
 		goto notyet;
 	/* STX: *(size *)(dst + off) = src */
 	case BPF_STX | BPF_MEM | BPF_W:

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index ef9f1d5e989d..f7b194878a99 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -875,10 +875,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		break;
 
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W:
-	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (insn->imm != BPF_ADD) {
+			pr_err_once("unknown atomic op code %02x\n", insn->imm);
+			return -EINVAL;
+		}
+
+		/* STX XADD: lock *(u32 *)(dst + off) += src
+		 * and
+		 * STX XADD: lock *(u64 *)(dst + off) += src
+		 */
+
 		if (!off) {
 			reg = dst;
 		} else {

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 561154cbcc40..939dd06764bc 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1423,8 +1423,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_STX | BPF_H | BPF_MEM:
 	case BPF_STX | BPF_W | BPF_MEM:
 	case BPF_STX | BPF_DW | BPF_MEM:
-	case BPF_STX | BPF_W | BPF_XADD:
-	case BPF_STX | BPF_DW | BPF_XADD:
+	case BPF_STX | BPF_W | BPF_ATOMIC:
+	case BPF_STX | BPF_DW | BPF_ATOMIC:
 		if (insn->dst_reg == BPF_REG_10) {
 			ctx->flags |= EBPF_SEEN_FP;
 			dst = MIPS_R_SP;
@@ -1438,7 +1438,12 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
 		if (src < 0)
 			return src;
-		if (BPF_MODE(insn->code) == BPF_XADD) {
+		if (BPF_MODE(insn->code) == BPF_ATOMIC) {
+			if (insn->imm != BPF_ADD) {
+				pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm);
+				return -EINVAL;
+			}
+
 			/*
 			 * If mem_off does not fit within the 9 bit
			 * ll/sc instruction immediate field, use a temp reg.
			 */

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 022103c6a201..aaf1a887f653 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -683,10 +683,18 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			break;
 
 		/*
-		 * BPF_STX XADD (atomic_add)
+		 * BPF_STX ATOMIC (atomic ops)
 		 */
-		/* *(u32 *)(dst + off) += src */
-		case BPF_STX | BPF_XADD | BPF_W:
+		case BPF_STX | BPF_ATOMIC | BPF_W:
+			if (insn->imm != BPF_ADD) {
+				pr_err_ratelimited(
+					"eBPF filter atomic op code %02x (@%d) unsupported\n",
+					code, i);
+				return -ENOTSUPP;
+			}
+
+			/* *(u32 *)(dst + off) += src */
+
 			/* Get EA into TMP_REG_1 */
 			EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
 			tmp_idx = ctx->idx * 4;
@@ -699,8 +707,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			/* we're done if this succeeded */
 			PPC_BCC_SHORT(COND_NE, tmp_idx);
 			break;
-		/* *(u64 *)(dst + off) += src */
-		case BPF_STX | BPF_XADD | BPF_DW:
+		case BPF_STX | BPF_ATOMIC | BPF_DW:
+			if (insn->imm != BPF_ADD) {
+				pr_err_ratelimited(
+					"eBPF filter atomic op code %02x (@%d) unsupported\n",
+					code, i);
+				return -ENOTSUPP;
+			}
+			/* *(u64 *)(dst + off) += src */
+
 			EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
 			tmp_idx = ctx->idx * 4;
 			EMIT(PPC_RAW_LDARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0));

diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
index 579575f9cdae..a604e0fe2015 100644
--- a/arch/riscv/net/bpf_jit_comp32.c
+++ b/arch/riscv/net/bpf_jit_comp32.c
@@ -881,7 +881,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
 	const s8 *rd = bpf_get_reg64(dst, tmp1, ctx);
 	const s8 *rs = bpf_get_reg64(src, tmp2, ctx);
 
-	if (mode == BPF_XADD && size != BPF_W)
+	if (mode == BPF_ATOMIC && (size != BPF_W || imm != BPF_ADD))
 		return -1;
 
 	emit_imm(RV_REG_T0, off, ctx);
@@ -899,7 +899,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
 	case BPF_MEM:
 		emit(rv_sw(RV_REG_T0, 0, lo(rs)), ctx);
 		break;
-	case BPF_XADD:
+	case BPF_ATOMIC: /* .imm checked above. This is XADD */
 		emit(rv_amoadd_w(RV_REG_ZERO, lo(rs), RV_REG_T0, 0, 0),
 		     ctx);
 		break;
@@ -1260,7 +1260,6 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_STX | BPF_MEM | BPF_H:
 	case BPF_STX | BPF_MEM | BPF_W:
 	case BPF_STX | BPF_MEM | BPF_DW:
-	case BPF_STX | BPF_XADD | BPF_W:
 		if (BPF_CLASS(code) == BPF_ST) {
 			emit_imm32(tmp2, imm, ctx);
 			src = tmp2;
@@ -1271,8 +1270,21 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			return -1;
 		break;
 
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+		if (insn->imm != BPF_ADD) {
+			pr_info_once(
+				"bpf-jit: not supported: atomic operation %02x ***\n",
+				insn->imm);
+			return -EFAULT;
+		}
+
+		if (emit_store_r64(dst, src, off, ctx, BPF_SIZE(code),
+				   BPF_MODE(code)))
+			return -1;
+		break;
+
 	/* No hardware support for 8-byte atomics in RV32. */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
 		/* Fallthrough.
		 */
notsupported:

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 8a56b5293117..7696b2baf915 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -1027,10 +1027,18 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_add(RV_REG_T1, RV_REG_T1, rd, ctx);
 		emit_sd(RV_REG_T1, 0, rs, ctx);
 		break;
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W:
-	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (insn->imm != BPF_ADD) {
+			pr_err("bpf-jit: not supported: atomic operation %02x ***\n",
+			       insn->imm);
+			return -EINVAL;
+		}
+
+		/* STX XADD: lock *(u32 *)(dst + off) += src
+		 * STX XADD: lock *(u64 *)(dst + off) += src
+		 */
+
 		if (off) {
 			if (is_12b_int(off)) {
 				emit_addi(RV_REG_T1, rd, off, ctx);

diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 0a4182792876..d02eae46be39 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -1205,18 +1205,22 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		jit->seen |= SEEN_MEM;
 		break;
 	/*
-	 * BPF_STX XADD (atomic_add)
+	 * BPF_STX ATOMIC (atomic ops)
 	 */
-	case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */
-		/* laal %w0,%src,off(%dst) */
-		EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W0, src_reg,
-			      dst_reg, off);
-		jit->seen |= SEEN_MEM;
-		break;
-	case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */
-		/* laalg %w0,%src,off(%dst) */
-		EMIT6_DISP_LH(0xeb000000, 0x00ea, REG_W0, src_reg,
-			      dst_reg, off);
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (insn->imm != BPF_ADD) {
+			pr_err("Unknown atomic operation %02x\n", insn->imm);
+			return -1;
+		}
+
+		/* *(u32/u64 *)(dst + off) += src
+		 *
+		 * BPF_W:  laal  %w0,%src,off(%dst)
+		 * BPF_DW: laalg %w0,%src,off(%dst)
+		 */
+		EMIT6_DISP_LH(0xeb000000,
+			      BPF_SIZE(insn->code) == BPF_W ?
			      0x00fa : 0x00ea,
			      REG_W0, src_reg, dst_reg, off);
		jit->seen |= SEEN_MEM;
		break;
	/*

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 3364e2a00989..4fa4ad61dd35 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1367,11 +1367,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	}
 
 	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W: {
+	case BPF_STX | BPF_ATOMIC | BPF_W: {
 		const u8 tmp = bpf2sparc[TMP_REG_1];
 		const u8 tmp2 = bpf2sparc[TMP_REG_2];
 		const u8 tmp3 = bpf2sparc[TMP_REG_3];
 
+		if (insn->imm != BPF_ADD) {
+			pr_err_once("unknown atomic op %02x\n", insn->imm);
+			return -EINVAL;
+		}
+
 		if (insn->dst_reg == BPF_REG_FP)
 			ctx->saw_frame_pointer = true;
 
@@ -1390,11 +1395,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	}
 	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW: {
+	case BPF_STX | BPF_ATOMIC | BPF_DW: {
 		const u8 tmp = bpf2sparc[TMP_REG_1];
 		const u8 tmp2 = bpf2sparc[TMP_REG_2];
 		const u8 tmp3 = bpf2sparc[TMP_REG_3];
 
+		if (insn->imm != BPF_ADD) {
+			pr_err_once("unknown atomic op %02x\n", insn->imm);
+			return -EINVAL;
+		}
+
 		if (insn->dst_reg == BPF_REG_FP)
 			ctx->saw_frame_pointer = true;

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a839c1a54276..0ff2416d99b6 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1253,17 +1253,25 @@ st:			if (is_imm8(insn->off))
 			}
 			break;
 
-			/* STX XADD: lock *(u32*)(dst_reg + off) += src_reg */
-		case BPF_STX | BPF_XADD | BPF_W:
-			/* Emit 'lock add dword ptr [rax + off], eax' */
-			if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT3(0xF0, add_2mod(0x40, dst_reg, src_reg), 0x01);
-			else
-				EMIT2(0xF0, 0x01);
-			goto xadd;
-		case BPF_STX | BPF_XADD | BPF_DW:
-			EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
-xadd:			emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
+		case BPF_STX | BPF_ATOMIC | BPF_W:
+		case BPF_STX | BPF_ATOMIC | BPF_DW:
+			if (insn->imm != BPF_ADD) {
+				pr_err("bpf_jit: unknown opcode %02x\n", insn->imm);
+				return -EFAULT;
+			}
+
+			/* XADD: lock *(u32/u64*)(dst_reg + off) += src_reg */
+
+			if (BPF_SIZE(insn->code) == BPF_W) {
+				/* Emit 'lock add dword ptr [rax + off], eax' */
+				if (is_ereg(dst_reg) || is_ereg(src_reg))
+					EMIT3(0xF0, add_2mod(0x40, dst_reg, src_reg), 0x01);
+				else
+					EMIT2(0xF0, 0x01);
+			} else {
+				EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
+			}
+			emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
 			break;
 
 			/* call */

diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 96fde03aa987..d17b67c69f89 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2243,10 +2243,8 @@ emit_cond_jmp:		jmp_cond = get_cond_jmp_opcode(BPF_OP(code), false);
 			return -EFAULT;
 		}
 		break;
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W:
-	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
 		goto notyet;
 	case BPF_JMP | BPF_EXIT:
 		if (seen_exit) {

diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index 0a721f6e8676..0767d7b579e9 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -3109,13 +3109,19 @@ mem_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, bool is64)
 	return 0;
 }
-static int mem_xadd4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+static int mem_atm4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
+	if (meta->insn.imm != BPF_ADD)
+		return -EOPNOTSUPP;
+
 	return mem_xadd(nfp_prog, meta, false);
 }
 
-static int mem_xadd8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+static int mem_atm8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
+	if (meta->insn.imm != BPF_ADD)
+		return -EOPNOTSUPP;
+
 	return mem_xadd(nfp_prog, meta, true);
 }
 
@@ -3475,8 +3481,8 @@ static const instr_cb_t instr_cb[256] = {
 	[BPF_STX | BPF_MEM | BPF_H] =	mem_stx2,
 	[BPF_STX | BPF_MEM | BPF_W] =	mem_stx4,
 	[BPF_STX | BPF_MEM | BPF_DW] =	mem_stx8,
-	[BPF_STX | BPF_XADD | BPF_W] =	mem_xadd4,
-	[BPF_STX | BPF_XADD | BPF_DW] =	mem_xadd8,
+	[BPF_STX | BPF_ATOMIC | BPF_W] =	mem_atm4,
+	[BPF_STX | BPF_ATOMIC | BPF_DW] =	mem_atm8,
 	[BPF_ST | BPF_MEM | BPF_B] =	mem_st1,
 	[BPF_ST | BPF_MEM | BPF_H] =	mem_st2,
 	[BPF_ST | BPF_MEM | BPF_W] =	mem_st4,

diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index fac9c6f9e197..e9e8ff0e7ae9 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -428,9 +428,9 @@ static inline bool is_mbpf_classic_store_pkt(const struct nfp_insn_meta *meta)
 	return is_mbpf_classic_store(meta) && meta->ptr.type == PTR_TO_PACKET;
 }
 
-static inline bool is_mbpf_xadd(const struct nfp_insn_meta *meta)
+static inline bool is_mbpf_atm(const struct nfp_insn_meta *meta)
 {
-	return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_XADD);
+	return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_ATOMIC);
 }
 
 static inline bool is_mbpf_mul(const struct nfp_insn_meta *meta)

diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index e92ee510fd52..431b2a957139 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -523,12 +523,17 @@ nfp_bpf_check_store(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 }
 
 static int
-nfp_bpf_check_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
-		   struct bpf_verifier_env *env)
+nfp_bpf_check_atm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+		  struct bpf_verifier_env *env)
 {
 	const struct bpf_reg_state *sreg = cur_regs(env) + meta->insn.src_reg;
 	const struct bpf_reg_state *dreg = cur_regs(env) + meta->insn.dst_reg;
 
+	if (meta->insn.imm != BPF_ADD) {
+		pr_vlog(env, "atomic op not implemented: %d\n", meta->insn.imm);
+		return -EOPNOTSUPP;
+	}
+
 	if (dreg->type != PTR_TO_MAP_VALUE) {
 		pr_vlog(env, "atomic add not to a map value pointer: %d\n",
 			dreg->type);
@@ -655,8 +660,8 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
 	if (is_mbpf_store(meta))
 		return nfp_bpf_check_store(nfp_prog, meta, env);
 
-	if (is_mbpf_xadd(meta))
-		return nfp_bpf_check_xadd(nfp_prog, meta, env);
+	if (is_mbpf_atm(meta))
+		return nfp_bpf_check_atm(nfp_prog, meta, env);
 
 	if (is_mbpf_alu(meta))
 		return nfp_bpf_check_alu(nfp_prog, meta, env);

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1b62397bd124..ce19988fb312 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -261,13 +261,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 /* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */
 
-#define BPF_STX_XADD(SIZE, DST, SRC, OFF)			\
+#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
-		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD,	\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = OFF,					\
-		.imm   = 0 })
+		.imm   = BPF_ADD })
+#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
 
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 3ca6146f001a..dcd08783647d 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -19,7 +19,8 @@
 
 /* ld/ldx fields */
 #define BPF_DW		0x18	/* double word (64-bit) */
-#define BPF_XADD	0xc0	/* exclusive add */
+#define BPF_ATOMIC	0xc0	/* atomic memory ops - op type in immediate */
+#define BPF_XADD	0xc0	/* legacy name, don't add new uses */
 
 /* alu/jmp fields */
 #define BPF_MOV		0xb0	/* mov reg to reg */

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ff55cbcfbab4..48b192a8edce 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1317,8 +1317,8 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(STX, MEM,  H),			\
 	INSN_3(STX, MEM,  W),			\
 	INSN_3(STX, MEM,  DW),			\
-	INSN_3(STX, XADD, W),			\
-	INSN_3(STX, XADD, DW),			\
+	INSN_3(STX, ATOMIC, W),			\
+	INSN_3(STX, ATOMIC, DW),		\
 	/*      Immediate based. */		\
 	INSN_3(ST, MEM, B),			\
 	INSN_3(ST, MEM, H),			\
@@ -1626,13 +1626,25 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 	LDX_PROBE(DW, 8)
 #undef LDX_PROBE
 
-	STX_XADD_W: /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
-		atomic_add((u32) SRC, (atomic_t *)(unsigned long)
-			   (DST + insn->off));
+	STX_ATOMIC_W:
+		switch (insn->imm) {
+		case BPF_ADD:
+			/* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
+			atomic_add((u32) SRC, (atomic_t *)(unsigned long)
+				   (DST + insn->off));
+		default:
+			goto default_label;
+		}
 		CONT;
-	STX_XADD_DW: /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
-		atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
-			     (DST + insn->off));
+	STX_ATOMIC_DW:
+		switch (insn->imm) {
+		case BPF_ADD:
+			/* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
+			atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
+				     (DST + insn->off));
+		default:
+			goto default_label;
+		}
 		CONT;
 
 	default_label:
@@ -1642,7 +1654,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 		 *
 		 * Note, verifier whitelists all opcodes in bpf_opcode_in_insntable().
		 */
-		pr_warn("BPF interpreter: unknown opcode %02x\n", insn->code);
+		pr_warn("BPF interpreter: unknown opcode %02x (imm: 0x%x)\n",
+			insn->code, insn->imm);
 		BUG_ON(1);
 		return 0;
 }

diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index b44d8c447afd..37c8d6e9b4cc 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -153,14 +153,16 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 				bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
 				insn->dst_reg,
 				insn->off, insn->src_reg);
-		else if (BPF_MODE(insn->code) == BPF_XADD)
+		else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+			 insn->imm == BPF_ADD) {
 			verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n",
 				insn->code,
 				bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
 				insn->dst_reg, insn->off,
 				insn->src_reg);
-		else
+		} else {
 			verbose(cbs->private_data, "BUG_%02x\n", insn->code);
+		}
 	} else if (class == BPF_ST) {
 		if (BPF_MODE(insn->code) != BPF_MEM) {
 			verbose(cbs->private_data, "BUG_st_%02x\n", insn->code);

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fb2943ea715d..06885e2501f8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3598,13 +3598,17 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	return err;
 }
 
-static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
+static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
 	int err;
 
-	if ((BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) ||
-	    insn->imm != 0) {
-		verbose(env, "BPF_XADD uses reserved fields\n");
+	if (insn->imm != BPF_ADD) {
+		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
+		return -EINVAL;
+	}
+
+	if (BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) {
+		verbose(env, "invalid atomic operand size\n");
 		return -EINVAL;
 	}
 
@@ -3627,19 +3631,19 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins
 	    is_pkt_reg(env, insn->dst_reg) ||
 	    is_flow_key_reg(env, insn->dst_reg) ||
 	    is_sk_reg(env, insn->dst_reg)) {
-		verbose(env, "BPF_XADD stores into R%d %s is not allowed\n",
+		verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
 			insn->dst_reg,
 			reg_type_str[reg_state(env, insn->dst_reg)->type]);
 		return -EACCES;
 	}
 
-	/* check whether atomic_add can read the memory */
+	/* check whether we can read the memory */
 	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_READ, -1, true);
 	if (err)
 		return err;
 
-	/* check whether atomic_add can write into the same memory */
+	/* check whether we can write into the same memory */
 	return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
 				BPF_SIZE(insn->code), BPF_WRITE, -1, true);
 }
@@ -9486,8 +9490,8 @@ static int do_check(struct bpf_verifier_env *env)
 		} else if (class == BPF_STX) {
 			enum bpf_reg_type *prev_dst_type, dst_reg_type;
 
-			if (BPF_MODE(insn->code) == BPF_XADD) {
-				err = check_xadd(env, env->insn_idx, insn);
+			if (BPF_MODE(insn->code) == BPF_ATOMIC) {
+				err = check_atomic(env, env->insn_idx, insn);
 				if (err)
 					return err;
 				env->insn_idx++;
@@ -9897,7 +9901,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 
 		if (BPF_CLASS(insn->code) == BPF_STX &&
 		    ((BPF_MODE(insn->code) != BPF_MEM &&
-		      BPF_MODE(insn->code) != BPF_XADD) || insn->imm != 0)) {
+		      BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
			verbose(env, "BPF_STX uses reserved fields\n");
 			return -EINVAL;
 		}

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..fbb13ef9207c 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4295,7 +4295,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffffffff } },
 		.stack_depth = 40,
 	},
-	/* BPF_STX | BPF_XADD | BPF_W/DW */
+	/* BPF_STX | BPF_ATOMIC | BPF_W/DW */
 	{
 		"STX_XADD_W: Test: 0x12 + 0x10 = 0x22",
 		.u.insns_int = {

diff --git a/samples/bpf/bpf_insn.h b/samples/bpf/bpf_insn.h
index 544237980582..db67a2847395 100644
--- a/samples/bpf/bpf_insn.h
+++ b/samples/bpf/bpf_insn.h
@@ -138,11 +138,11 @@ struct bpf_insn;
 
 #define BPF_STX_XADD(SIZE, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
-		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD,	\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = OFF,					\
-		.imm   = 0 })
+		.imm   = BPF_ADD })
 
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/samples/bpf/sock_example.c b/samples/bpf/sock_example.c
index 00aae1d33fca..41ec3ca3bc40 100644
--- a/samples/bpf/sock_example.c
+++ b/samples/bpf/sock_example.c
@@ -54,7 +54,8 @@ static int test_sock(void)
 		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
-		BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+		BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
+			     BPF_REG_0, BPF_REG_1, 0, BPF_ADD), /* xadd r0 += r1 */
 		BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
 		BPF_EXIT_INSN(),
 	};

diff --git a/samples/bpf/test_cgrp2_attach.c b/samples/bpf/test_cgrp2_attach.c
index 20fbd1241db3..aab0ef301536 100644
--- a/samples/bpf/test_cgrp2_attach.c
+++ b/samples/bpf/test_cgrp2_attach.c
@@ -53,7 +53,8 @@ static int prog_load(int map_fd, int verdict)
 		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
-		BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+		BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
+			     BPF_REG_0, BPF_REG_1, 0, BPF_ADD), /* xadd r0 += r1 */
 
 		/* Count bytes */
 		BPF_MOV64_IMM(BPF_REG_0, MAP_KEY_BYTES), /* r0 = 1 */
@@ -64,7 +65,8 @@ static int prog_load(int map_fd, int verdict)
 		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct __sk_buff, len)), /* r1 = skb->len */
-		BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+		BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
+			     BPF_REG_0, BPF_REG_1, 0, BPF_ADD), /* xadd r0 += r1 */
 
 		BPF_MOV64_IMM(BPF_REG_0, verdict), /* r0 = verdict */
 		BPF_EXIT_INSN(),

diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index ca28b6ab8db7..95ff51d97f25 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -171,13 +171,14 @@
 
 /* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */
 
-#define BPF_STX_XADD(SIZE, DST, SRC, OFF)			\
+#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
-		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD,	\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = OFF,					\
-		.imm   = 0 })
+		.imm   = BPF_ADD })
+#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
 
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 3ca6146f001a..dcd08783647d 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -19,7 +19,8 @@
 
 /* ld/ldx fields */
 #define BPF_DW		0x18	/* double word (64-bit) */
-#define BPF_XADD	0xc0	/* exclusive add */
+#define BPF_ATOMIC	0xc0	/* atomic memory ops - op type in immediate */
+#define BPF_XADD	0xc0	/* legacy name, don't add new uses */
 
 /* alu/jmp fields */
 #define BPF_MOV		0xb0	/* mov reg to reg */

diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
index b549fcfacc0b..79401a59a988 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
@@ -45,13 +45,15 @@ static int prog_load_cnt(int verdict, int val)
 		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, val), /* r1 = 1 */
-		BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+		BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
+			     BPF_REG_0, BPF_REG_1, 0, BPF_ADD), /* xadd r0 += r1 */
 
 		BPF_LD_MAP_FD(BPF_REG_1, cgroup_storage_fd),
 		BPF_MOV64_IMM(BPF_REG_2, 0),
 		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 		BPF_MOV64_IMM(BPF_REG_1, val),
-		BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_0, BPF_REG_1, 0, 0),
+		BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_W,
+			     BPF_REG_0, BPF_REG_1, 0, BPF_ADD),
 
 		BPF_LD_MAP_FD(BPF_REG_1, percpu_cgroup_storage_fd),
 		BPF_MOV64_IMM(BPF_REG_2, 0),

diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c
index 93d6b1641481..0546d91d38cb 100644
--- a/tools/testing/selftests/bpf/verifier/ctx.c
+++ b/tools/testing/selftests/bpf/verifier/ctx.c
@@ -13,11 +13,11 @@
 	"context stores via XADD",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_1,
-		     BPF_REG_0, offsetof(struct __sk_buff, mark), 0),
+	BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_W, BPF_REG_1,
+		     BPF_REG_0, offsetof(struct __sk_buff, mark), BPF_ADD),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "BPF_XADD stores into R1 ctx is not allowed",
+	.errstr = "BPF_ATOMIC stores into R1 ctx is not allowed",
 	.result = REJECT,
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 },

diff --git a/tools/testing/selftests/bpf/verifier/leak_ptr.c b/tools/testing/selftests/bpf/verifier/leak_ptr.c
index d6eec17f2cd2..f9a594b48fb3 100644
--- a/tools/testing/selftests/bpf/verifier/leak_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/leak_ptr.c
@@ -13,7 +13,7 @@
 	.errstr_unpriv = "R2 leaks addr into mem",
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "BPF_XADD stores into R1 ctx is not allowed",
+	.errstr = "BPF_ATOMIC stores into R1 ctx is not allowed",
 },
 {
 	"leak pointer into ctx 2",
@@ -28,7 +28,7 @@
 	.errstr_unpriv = "R10 leaks addr into mem",
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "BPF_XADD stores into R1 ctx is not allowed",
+	.errstr = "BPF_ATOMIC stores into R1 ctx is not allowed",
 },
 {
 	"leak pointer into ctx 3",

diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
index 91bb77c24a2e..85b5e8b70e5d 100644
--- a/tools/testing/selftests/bpf/verifier/unpriv.c
+++ b/tools/testing/selftests/bpf/verifier/unpriv.c
@@ -206,7 +206,8 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_10, BPF_REG_0, -8, 0),
+	BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
+		     BPF_REG_10, BPF_REG_0, -8, BPF_ADD),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
		     BPF_FUNC_get_hash_recalc),
	BPF_EXIT_INSN(),

diff --git a/tools/testing/selftests/bpf/verifier/xadd.c b/tools/testing/selftests/bpf/verifier/xadd.c
index c5de2e62cc8b..70a320505bf2 100644
--- a/tools/testing/selftests/bpf/verifier/xadd.c
+++ b/tools/testing/selftests/bpf/verifier/xadd.c
@@ -51,7 +51,7 @@
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
-	.errstr = "BPF_XADD stores into R2 pkt is not allowed",
+	.errstr = "BPF_ATOMIC stores into R2 pkt is not allowed",
 	.prog_type = BPF_PROG_TYPE_XDP,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
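
The backward-compatibility claim in this patch's commit message can be
sanity-checked mechanically: BPF_ATOMIC reuses XADD's 0xc0 mode value and
BPF_ADD is 0x00, so an atomic add spelled either way produces bit-identical
instructions. A minimal sketch against the UAPI header as patched above
(the test program itself is hypothetical, not part of the patch):

  #include <assert.h>
  #include <linux/bpf.h>	/* patched UAPI header; pulls in bpf_common.h */

  int main(void)
  {
  	/* Old spelling: XADD mode, immediate reserved and must be zero */
  	struct bpf_insn old_xadd = {
  		.code = BPF_STX | BPF_XADD | BPF_DW,
  		.dst_reg = BPF_REG_2, .src_reg = BPF_REG_1,
  		.off = 0, .imm = 0,
  	};
  	/* New spelling: ATOMIC mode, operation selected by .imm */
  	struct bpf_insn new_atomic = {
  		.code = BPF_STX | BPF_ATOMIC | BPF_DW,
  		.dst_reg = BPF_REG_2, .src_reg = BPF_REG_1,
  		.off = 0, .imm = BPF_ADD,
  	};

  	/* BPF_ATOMIC == BPF_XADD == 0xc0 and BPF_ADD == 0x00, so existing
  	 * programs keep loading unchanged. */
  	assert(old_xadd.code == new_atomic.code);
  	assert(old_xadd.imm == new_atomic.imm);
  	return 0;
  }

This equivalence is also why the verifier can start rejecting any nonzero
immediate other than future atomic op codes without breaking old programs.
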
From patchwork Mon Nov 23 17:31:59 2020
From: Brendan Jackman <jackmanb@google.com>
Date: Mon, 23 Nov 2020 17:31:59 +0000
Message-Id: <20201123173202.1335708-5-jackmanb@google.com>
In-Reply-To: <20201123173202.1335708-1-jackmanb@google.com>
Subject: [PATCH 4/7] bpf: Move BPF_STX reserved field check into BPF_STX verifier code
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, Brendan Jackman

I can't find a reason why this code is in resolve_pseudo_ldimm64;
since I'll be modifying it in a subsequent commit, tidy it up.

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 kernel/bpf/verifier.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 06885e2501f8..609cc5e9571f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9490,6 +9490,12 @@ static int do_check(struct bpf_verifier_env *env)
 		} else if (class == BPF_STX) {
 			enum bpf_reg_type *prev_dst_type, dst_reg_type;
 
+			if (((BPF_MODE(insn->code) != BPF_MEM &&
+			      BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
+				verbose(env, "BPF_STX uses reserved fields\n");
+				return -EINVAL;
+			}
+
 			if (BPF_MODE(insn->code) == BPF_ATOMIC) {
 				err = check_atomic(env, env->insn_idx, insn);
 				if (err)
@@ -9899,13 +9905,6 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 			return -EINVAL;
 		}
 
-		if (BPF_CLASS(insn->code) == BPF_STX &&
-		    ((BPF_MODE(insn->code) != BPF_MEM &&
-		      BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
-			verbose(env, "BPF_STX uses reserved fields\n");
-			return -EINVAL;
-		}
-
 		if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) {
 			struct bpf_insn_aux_data *aux;
 			struct bpf_map *map;
From patchwork Mon Nov 23 17:32:00 2020
From: Brendan Jackman <jackmanb@google.com>
Date: Mon, 23 Nov 2020 17:32:00 +0000
Message-Id: <20201123173202.1335708-6-jackmanb@google.com>
In-Reply-To: <20201123173202.1335708-1-jackmanb@google.com>
Subject: [PATCH 5/7] bpf: Add BPF_FETCH field / create atomic_fetch_add instruction
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, Brendan Jackman

The BPF_FETCH field can be set in bpf_insn.imm, for BPF_ATOMIC
instructions, in order to have the previous value of the
atomically-modified memory location loaded into the src register
after an atomic op is carried out.

Suggested-by: Yonghong Song
Signed-off-by: Brendan Jackman
Reported-by: kernel test robot
---
 arch/x86/net/bpf_jit_comp.c    | 21 ++++++++++---------
 include/linux/filter.h         |  9 +++++++++
 include/uapi/linux/bpf.h       |  3 +++
 kernel/bpf/core.c              | 17 ++++++++++++++--
 kernel/bpf/disasm.c            |  6 ++++++
 kernel/bpf/verifier.c          | 37 +++++++++++++++++++++++++---------
 tools/include/linux/filter.h   | 10 ++++++++-
 tools/include/uapi/linux/bpf.h |  3 +++
 8 files changed, 84 insertions(+), 22 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 0ff2416d99b6..b475bf525424 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1255,22 +1255,25 @@ st:		if (is_imm8(insn->off))
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (insn->imm != BPF_ADD) {
+		if (BPF_OP(insn->imm) != BPF_ADD) {
 			pr_err("bpf_jit: unknown opcode %02x\n", insn->imm);
 			return -EFAULT;
 		}
-		/* XADD: lock *(u32/u64*)(dst_reg + off) += src_reg */
+		EMIT1(0xF0); /* lock prefix */
-		if (BPF_SIZE(insn->code) == BPF_W) {
-			/* Emit 'lock add dword ptr [rax + off], eax' */
-			if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT3(0xF0, add_2mod(0x40, dst_reg, src_reg), 0x01);
-			else
-				EMIT2(0xF0, 0x01);
+		maybe_emit_rex(&prog, dst_reg, src_reg,
+			       BPF_SIZE(insn->code) == BPF_DW);
+
+		/* emit opcode */
+		if (insn->imm & BPF_FETCH) {
+			/* src_reg = sync_fetch_and_add(*(dst_reg + off), src_reg); */
+			EMIT2(0x0F, 0xC1);
 		} else {
-			EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
+			/* lock *(u32/u64*)(dst_reg + off) += src_reg */
+			EMIT1(0x01);
 		}
+		emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
 		break;

diff --git a/include/linux/filter.h b/include/linux/filter.h
index ce19988fb312..bf0ff3649f46 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -270,6 +270,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 		.imm   = BPF_ADD })
 #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
 
+/* Atomic memory add with fetch, src_reg = sync_fetch_and_add(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF)		\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_ADD | BPF_FETCH })
 
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index dcd08783647d..ec7f415f331b 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -44,6 +44,9 @@
 #define BPF_CALL	0x80	/* function call */
 #define BPF_EXIT	0x90	/* function return */
 
+/* atomic op type fields (stored in immediate) */
+#define BPF_FETCH	0x01	/* fetch previous value into src reg */
+
 /* Register numbers */
 enum {
 	BPF_REG_0 = 0,

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 48b192a8edce..49a2a533db60 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1627,21 +1627,34 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 #undef LDX_PROBE
 
 	STX_ATOMIC_W:
-		switch (insn->imm) {
+		switch (IMM) {
 		case BPF_ADD:
 			/* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
 			atomic_add((u32) SRC, (atomic_t *)(unsigned long)
 				   (DST + insn->off));
+			break;
+		case BPF_ADD | BPF_FETCH:
+			SRC = (u32) atomic_fetch_add(
+				(u32) SRC,
+				(atomic_t *)(unsigned long) (DST + insn->off));
+			break;
 		default:
 			goto default_label;
 		}
 		CONT;
+
 	STX_ATOMIC_DW:
-		switch (insn->imm) {
+		switch (IMM) {
 		case BPF_ADD:
 			/* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
 			atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
 				     (DST + insn->off));
+			break;
+		case BPF_ADD | BPF_FETCH:
+			SRC = (u64) atomic64_fetch_add(
+				(u64) SRC,
+				(atomic64_t *)(unsigned long) (DST + insn->off));
+			break;
 		default:
 			goto default_label;
 		}

diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 37c8d6e9b4cc..669cef265493 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -160,6 +160,12 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
 			insn->dst_reg, insn->off, insn->src_reg);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == (BPF_ADD | BPF_FETCH)) {
+		verbose(cbs->private_data, "(%02x) r%d = atomic_fetch_add(*(%s *)(r%d %+d), r%d)\n",
+			insn->code, insn->src_reg,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->dst_reg, insn->off, insn->src_reg);
 	} else {
 		verbose(cbs->private_data, "BUG_%02x\n", insn->code);
 	}

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 609cc5e9571f..14f5053daf22 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3600,9 +3600,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
 static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
+	struct bpf_reg_state *regs = cur_regs(env);
 	int err;
 
-	if (insn->imm != BPF_ADD) {
+	switch (insn->imm) {
+	case BPF_ADD:
+	case BPF_ADD | BPF_FETCH:
+		break;
+	default:
 		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
 		return -EINVAL;
 	}
@@ -3631,7 +3636,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 	    is_pkt_reg(env, insn->dst_reg) ||
 	    is_flow_key_reg(env, insn->dst_reg) ||
 	    is_sk_reg(env, insn->dst_reg)) {
-		verbose(env, "atomic stores into R%d %s is not allowed\n",
+		verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
 			insn->dst_reg,
 			reg_type_str[reg_state(env, insn->dst_reg)->type]);
 		return -EACCES;
@@ -3644,8 +3649,21 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 		return err;
 
 	/* check whether we can write into the same memory */
-	return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-				BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+			       BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+	if (err)
+		return err;
+
+	if (!(insn->imm & BPF_FETCH))
+		return 0;
+
+	/* check and record load of old value into src reg */
+	err = check_reg_arg(env, insn->src_reg, DST_OP);
+	if (err)
+		return err;
+	regs[insn->src_reg].type = SCALAR_VALUE;
+
+	return 0;
 }
 
 static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno,
@@ -9490,12 +9508,6 @@ static int do_check(struct bpf_verifier_env *env)
 		} else if (class == BPF_STX) {
 			enum bpf_reg_type *prev_dst_type, dst_reg_type;
 
-			if (((BPF_MODE(insn->code) != BPF_MEM &&
-			      BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
-				verbose(env, "BPF_STX uses reserved fields\n");
-				return -EINVAL;
-			}
-
 			if (BPF_MODE(insn->code) == BPF_ATOMIC) {
 				err = check_atomic(env, env->insn_idx, insn);
 				if (err)
@@ -9504,6 +9516,11 @@ static int do_check(struct bpf_verifier_env *env)
 				continue;
 			}
 
+			if (BPF_MODE(insn->code) != BPF_MEM || insn->imm != 0) {
+				verbose(env, "BPF_STX uses reserved fields\n");
+				return -EINVAL;
+			}
+
 			/* check src1 operand */
 			err = check_reg_arg(env, insn->src_reg, SRC_OP);
 			if (err)

diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index 95ff51d97f25..8f2707ebab18 100644
--- a/tools/include/linux/filter.h
+++
b/tools/include/linux/filter.h
@@ -180,7 +180,15 @@
 		.imm   = BPF_ADD })
 #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
 
-/* Memory store, *(uint *) (dst_reg + off16) = imm32 */
+/* Atomic memory add with fetch, src_reg = sync_fetch_and_add(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF)		\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_ADD | BPF_FETCH })
 
 #define BPF_ST_MEM(SIZE, DST, OFF, IMM)				\
 	((struct bpf_insn) {					\

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index dcd08783647d..ec7f415f331b 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -44,6 +44,9 @@
 #define BPF_CALL	0x80	/* function call */
 #define BPF_EXIT	0x90	/* function return */
 
+/* atomic op type fields (stored in immediate) */
+#define BPF_FETCH	0x01	/* fetch previous value into src reg */
+
 /* Register numbers */
 enum {
 	BPF_REG_0 = 0,
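[Editor's note, not part of the patch: with the disasm.c change above, a
64-bit fetch-add on the stack would be rendered along these lines. The
leading byte is BPF_STX | BPF_ATOMIC | BPF_DW = 0xdb.]

    (db) r1 = atomic_fetch_add(*(u64 *)(r10 -8), r1)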
From patchwork Mon Nov 23 17:32:01 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11925773
Date: Mon, 23 Nov 2020 17:32:01 +0000
In-Reply-To: <20201123173202.1335708-1-jackmanb@google.com>
Message-Id: <20201123173202.1335708-7-jackmanb@google.com>
References: <20201123173202.1335708-1-jackmanb@google.com>
Subject: [PATCH 6/7] bpf: Add instructions for atomic_cmpxchg and friends
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
    Florent Revest, Brendan Jackman

These are the operations that implement atomic exchange and
compare-exchange. They are peculiarly named because of the presence of
the separate FETCH field that tells you whether the instruction writes
the value back to the src register.

Neither operation is supported without BPF_FETCH:

- BPF_CMPSET without BPF_FETCH (i.e. an atomic compare-and-set without
  knowing whether the write succeeded) isn't implemented by the kernel,
  x86, or ARM. It would be a burden on the JIT and it's hard to imagine
  a use for this operation, so it's not supported.

- BPF_SET without BPF_FETCH would be bpf_set, which has pretty limited
  use: all it really lets you do is atomically set 64-bit values on
  32-bit CPUs. It doesn't imply any barriers.

There are two significant design decisions made for the CMPSET
instruction:

- This operation fundamentally has three operands, but we only have two
  register fields, so the operand we compare against (the kernel's API
  calls it 'old') is hard-coded to be R0. x86 has a similar design (and
  A64 doesn't have this problem). A potential alternative might be to
  encode the other operand's register number in the immediate field.

- The kernel's atomic_cmpxchg returns the old value, while the C11
  userspace APIs return a boolean indicating the comparison result.
  Which should BPF do? A64 returns the old value. x86 returns the old
  value in the hard-coded register (and also sets a flag). That means
  return-old-value is easier to JIT, so that's what the BPF instruction
  does too.
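[Editor's sketch of the resulting calling convention, using the macros
this patch adds; values are illustrative and this is not part of the
patch.]

    /* r0 = atomic_cmpxchg(*(u64 *)(r10 - 8), r0, r1) */
    BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),    /* val = 3          */
    BPF_MOV64_IMM(BPF_REG_1, 4),              /* new value = 4    */
    BPF_MOV64_IMM(BPF_REG_0, 3),              /* expected old = 3 */
    BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
    /* Comparison succeeds: memory now holds 4 and R0 holds the old
     * value 3. On failure, R0 would hold the value actually found.
     */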
Signed-off-by: Brendan Jackman
---
 arch/x86/net/bpf_jit_comp.c    | 29 ++++++++++++++++++++++++-----
 include/linux/filter.h         | 30 ++++++++++++++++++++++++++++++
 include/uapi/linux/bpf.h       |  3 +++
 kernel/bpf/core.c              | 20 ++++++++++++++++++++
 kernel/bpf/disasm.c            | 13 +++++++++++++
 kernel/bpf/verifier.c          | 22 +++++++++++++++++++---
 tools/include/linux/filter.h   | 30 ++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  3 +++
 8 files changed, 142 insertions(+), 8 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index b475bf525424..252b556e8abf 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1255,9 +1255,13 @@ st:		if (is_imm8(insn->off))
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (BPF_OP(insn->imm) != BPF_ADD) {
-			pr_err("bpf_jit: unknown opcode %02x\n", insn->imm);
-			return -EFAULT;
+		if (insn->imm == BPF_SET) {
+			/*
+			 * atomic_set((u32/u64 *)(dst_reg + off), src_reg);
+			 * On x86 atomic_set is just WRITE_ONCE.
+			 */
+			emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
+			break;
 		}
 
 		EMIT1(0xF0); /* lock prefix */
@@ -1266,15 +1270,30 @@ st:		if (is_imm8(insn->off))
 			       BPF_SIZE(insn->code) == BPF_DW);
 
 		/* emit opcode */
-		if (insn->imm & BPF_FETCH) {
+		switch (insn->imm) {
+		case BPF_SET | BPF_FETCH:
+			/* src_reg = atomic_xchg(*(u32/u64 *)(dst_reg + off), src_reg); */
+			EMIT1(0x87);
+			break;
+		case BPF_CMPSET | BPF_FETCH:
+			/* r0 = atomic_cmpxchg(*(u32/u64 *)(dst_reg + off), r0, src_reg); */
+			EMIT2(0x0F, 0xB1);
+			break;
+		case BPF_ADD | BPF_FETCH:
 			/* src_reg = sync_fetch_and_add(*(dst_reg + off), src_reg); */
 			EMIT2(0x0F, 0xC1);
-		} else {
+			break;
+		case BPF_ADD:
 			/* lock *(u32/u64*)(dst_reg + off) += src_reg */
 			EMIT1(0x01);
+			break;
+		default:
+			pr_err("bpf_jit: unknown atomic opcode %02x\n", insn->imm);
+			return -EFAULT;
 		}
 
 		emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
+
 		break;
 
 		/* call */

diff --git a/include/linux/filter.h b/include/linux/filter.h
index bf0ff3649f46..402a47fa2276 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -280,6 +280,36 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 		.off   = OFF,					\
 		.imm   = BPF_ADD | BPF_FETCH })
 
+/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
+
+#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_SET | BPF_FETCH })
+
+/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */
+
+#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_CMPSET | BPF_FETCH })
+
+/* Atomic set, atomic_set((dst_reg + off), src_reg) */
+
+#define BPF_ATOMIC_SET(SIZE, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_SET })
+
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ec7f415f331b..6996c1856f53 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -45,6 +45,9 @@
 #define BPF_EXIT	0x90	/* function return */
 
 /* atomic op type fields (stored in immediate) */
+#define BPF_SET		0xe0	/* atomic write */
+#define BPF_CMPSET	0xf0	/* atomic compare-and-write */
+
 #define BPF_FETCH	0x01	/* fetch previous value into src reg */
 
 /* Register numbers */

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 49a2a533db60..a549ac2d7651 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1638,6 +1638,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 				(u32) SRC,
 				(atomic_t *)(unsigned long) (DST + insn->off));
 			break;
+		case BPF_SET | BPF_FETCH:
+			SRC = (u32) atomic_xchg(
+				(atomic_t *)(unsigned long) (DST + insn->off),
+				(u32) SRC);
+			break;
+		case BPF_CMPSET | BPF_FETCH:
+			BPF_R0 = (u32) atomic_cmpxchg(
+				(atomic_t *)(unsigned long) (DST + insn->off),
+				(u32) BPF_R0, (u32) SRC);
+			break;
 		default:
 			goto default_label;
 		}
@@ -1655,6 +1665,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 				(u64) SRC,
 				(atomic64_t *)(unsigned long) (DST + insn->off));
 			break;
+		case BPF_SET | BPF_FETCH:
+			SRC = (u64) atomic64_xchg(
+				(atomic64_t *)(unsigned long) (DST + insn->off),
+				(u64) SRC);
+			break;
+		case BPF_CMPSET | BPF_FETCH:
+			BPF_R0 = (u64) atomic64_cmpxchg(
+				(atomic64_t *)(unsigned long) (DST + insn->off),
+				(u64) BPF_R0, (u64) SRC);
+			break;
 		default:
 			goto default_label;
 		}

diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 669cef265493..7e4a5bfb4e67 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -166,6 +166,19 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			insn->code, insn->src_reg,
 			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
 			insn->dst_reg, insn->off, insn->src_reg);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == (BPF_CMPSET | BPF_FETCH)) {
+		verbose(cbs->private_data, "(%02x) r0 = atomic_cmpxchg(*(%s *)(r%d %+d), r0, r%d)\n",
+			insn->code,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->dst_reg, insn->off,
+			insn->src_reg);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == (BPF_SET | BPF_FETCH)) {
+		verbose(cbs->private_data, "(%02x) r%d = atomic_xchg(*(%s *)(r%d %+d), r%d)\n",
+			insn->code, insn->src_reg,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->dst_reg, insn->off, insn->src_reg);
 	} else {
 		verbose(cbs->private_data, "BUG_%02x\n", insn->code);
 	}

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 14f5053daf22..2e611d3695bf 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3602,10 +3602,14 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
 	struct bpf_reg_state *regs = cur_regs(env);
 	int err;
+	int load_reg;
 
 	switch (insn->imm) {
 	case BPF_ADD:
 	case BPF_ADD | BPF_FETCH:
+	case BPF_SET:
+	case BPF_SET | BPF_FETCH:
+	case BPF_CMPSET | BPF_FETCH: /* CMPSET without FETCH is not supported */
 		break;
 	default:
 		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
@@ -3627,6 +3631,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 	if (err)
 		return err;
 
+	if (BPF_OP(insn->imm) == BPF_CMPSET) {
+		/* check src3 operand */
+		err = check_reg_arg(env, BPF_REG_0, SRC_OP);
+		if (err)
+			return err;
+	}
+
 	if (is_pointer_value(env, insn->src_reg)) {
 		verbose(env, "R%d leaks addr into mem\n", insn->src_reg);
 		return -EACCES;
@@ -3657,11 +3668,16 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 	if (!(insn->imm & BPF_FETCH))
 		return 0;
 
-	/* check and record load of old value into src reg */
-	err = check_reg_arg(env, insn->src_reg, DST_OP);
+	if (BPF_OP(insn->imm) == BPF_CMPSET)
+		load_reg = BPF_REG_0;
+	else
+		load_reg = insn->src_reg;
+
+	/* check and record load of old value */
+	err = check_reg_arg(env, load_reg,
			    DST_OP);
 	if (err)
 		return err;
-	regs[insn->src_reg].type = SCALAR_VALUE;
+	regs[load_reg].type = SCALAR_VALUE;
 
 	return 0;
 }

diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index 8f2707ebab18..5a5e4c39c639 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -190,6 +190,36 @@
 		.off   = OFF,					\
 		.imm   = BPF_ADD | BPF_FETCH })
 
+/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
+
+#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_SET | BPF_FETCH })
+
+/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */
+
+#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_CMPSET | BPF_FETCH })
+
+/* Atomic set, atomic_set((dst_reg + off), src_reg) */
+
+#define BPF_ATOMIC_SET(SIZE, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = BPF_SET })
+
 #define BPF_ST_MEM(SIZE, DST, OFF, IMM)				\
 	((struct bpf_insn) {					\
 		.code  = BPF_ST | BPF_SIZE(SIZE) | BPF_MEM,	\

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index ec7f415f331b..6996c1856f53 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -45,6 +45,9 @@
 #define BPF_EXIT	0x90	/* function return */
 
 /* atomic op type fields (stored in immediate) */
+#define BPF_SET		0xe0	/* atomic write */
+#define BPF_CMPSET	0xf0	/* atomic compare-and-write */
+
 #define BPF_FETCH	0x01	/* fetch previous value into src reg */
 
 /* Register numbers */
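[Editor's sketch, not part of the patch: the C builtins that are expected
to lower onto these encodings, per the llvm work this series builds on
(https://reviews.llvm.org/D72184); variable names are illustrative.]

    __u64 val = 1, old, new = 3;

    /* imm = BPF_CMPSET | BPF_FETCH: the old value lands in R0 */
    old = __sync_val_compare_and_swap(&val, 1, 2);

    /* imm = BPF_SET | BPF_FETCH: exchange, old value in the src reg */
    __atomic_exchange(&val, &new, &old, __ATOMIC_RELAXED);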
From patchwork Mon Nov 23 17:32:02 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11925779
Date: Mon, 23 Nov 2020 17:32:02 +0000
In-Reply-To: <20201123173202.1335708-1-jackmanb@google.com>
Message-Id: <20201123173202.1335708-8-jackmanb@google.com>
References: <20201123173202.1335708-1-jackmanb@google.com>
Subject: [PATCH 7/7] bpf: Add tests for new BPF atomic operations
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
    Florent Revest, Brendan Jackman

This relies on the work done by Yonghong Song in
https://reviews.llvm.org/D72184

Signed-off-by: Brendan Jackman
---
 tools/testing/selftests/bpf/Makefile          |   2 +-
 .../selftests/bpf/prog_tests/atomics_test.c   | 145 ++++++++++++++++++
 .../selftests/bpf/progs/atomics_test.c        |  61 ++++++++
 .../selftests/bpf/verifier/atomic_cmpxchg.c   |  96 ++++++++++++
 .../selftests/bpf/verifier/atomic_fetch_add.c | 106 +++++++++++++
 .../selftests/bpf/verifier/atomic_xchg.c      | 113 ++++++++++++++
 6 files changed, 522 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 3d5940cd110d..4e28640ca2d8 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -250,7 +250,7 @@ define CLANG_BPF_BUILD_RULE
 	$(call msg,CLNG-LLC,$(TRUNNER_BINARY),$2)
 	$(Q)($(CLANG) $3 -O2 -target bpf -emit-llvm			\
 		-c $1 -o - || echo "BPF obj compilation failed") |	\
-	$(LLC) -mattr=dwarfris -march=bpf -mcpu=v3 $4 -filetype=obj -o $2
+	$(LLC) -mattr=dwarfris -march=bpf -mcpu=v4 $4 -filetype=obj -o $2
 endef
 # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32
 define CLANG_NOALU32_BPF_BUILD_RULE

diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
new file mode 100644
index 000000000000..a4859d88fc11
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+
+#include "atomics_test.skel.h"
+
+static void test_add(void)
+{
+	struct atomics_test *atomics_skel = NULL;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = atomics_test__open_and_load();
+	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
+		goto cleanup;
+
+	err = atomics_test__attach(atomics_skel);
+	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.add);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run add",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	CHECK(atomics_skel->data->add64_value != 3, "add64_value",
+	      "64bit atomic add value was not incremented (got %lld want 3)\n",
+	      atomics_skel->data->add64_value);
+	CHECK(atomics_skel->bss->add64_result != 1, "add64_result",
+	      "64bit atomic add bad return value (got %lld want 1)\n",
+	      atomics_skel->bss->add64_result);
+
+	CHECK(atomics_skel->data->add32_value != 3, "add32_value",
+	      "32bit atomic add value was not incremented (got %d want 3)\n",
+	      atomics_skel->data->add32_value);
+	CHECK(atomics_skel->bss->add32_result != 1, "add32_result",
+	      "32bit atomic add bad return value (got %d want 1)\n",
+	      atomics_skel->bss->add32_result);
+
+	CHECK(atomics_skel->bss->add_stack_value_copy != 3, "add_stack_value",
+	      "stack atomic add value was not incremented (got %lld want 3)\n",
+	      atomics_skel->bss->add_stack_value_copy);
+	CHECK(atomics_skel->bss->add_stack_result != 1, "add_stack_result",
+	      "stack atomic add bad return value (got %lld want 1)\n",
+	      atomics_skel->bss->add_stack_result);
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_cmpxchg(void)
+{
+	struct atomics_test *atomics_skel = NULL;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = atomics_test__open_and_load();
+	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
+		goto cleanup;
+
+	err = atomics_test__attach(atomics_skel);
+	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.cmpxchg);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run cmpxchg",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	CHECK(atomics_skel->data->cmpxchg64_value != 2, "cmpxchg64_value",
+	      "64bit cmpxchg left unexpected value (got %lld want 2)\n",
+	      atomics_skel->data->cmpxchg64_value);
+	CHECK(atomics_skel->bss->cmpxchg64_result_fail != 1, "cmpxchg_result_fail",
+	      "64bit cmpxchg returned bad result (got %lld want 1)\n",
+	      atomics_skel->bss->cmpxchg64_result_fail);
+	CHECK(atomics_skel->bss->cmpxchg64_result_succeed
+	      != 1, "cmpxchg_result_succeed",
+	      "64bit cmpxchg returned bad result (got %lld want 1)\n",
+	      atomics_skel->bss->cmpxchg64_result_succeed);
+
+	CHECK(atomics_skel->data->cmpxchg32_value != 2, "cmpxchg32_value",
+	      "32bit cmpxchg left unexpected value (got %d want 2)\n",
+	      atomics_skel->data->cmpxchg32_value);
+	CHECK(atomics_skel->bss->cmpxchg32_result_fail != 1, "cmpxchg_result_fail",
+	      "32bit cmpxchg returned bad result (got %d want 1)\n",
+	      atomics_skel->bss->cmpxchg32_result_fail);
+	CHECK(atomics_skel->bss->cmpxchg32_result_succeed != 1, "cmpxchg_result_succeed",
+	      "32bit cmpxchg returned bad result (got %d want 1)\n",
+	      atomics_skel->bss->cmpxchg32_result_succeed);
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_xchg(void)
+{
+	struct atomics_test *atomics_skel = NULL;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = atomics_test__open_and_load();
+	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
+		goto cleanup;
+
+	err = atomics_test__attach(atomics_skel);
+	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.xchg);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run xchg",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	CHECK(atomics_skel->data->xchg64_value != 2, "xchg64_value",
+	      "64bit xchg left unexpected value (got %lld want 2)\n",
+	      atomics_skel->data->xchg64_value);
+	CHECK(atomics_skel->bss->xchg64_result != 1, "xchg_result",
+	      "64bit xchg returned bad result (got %lld want 1)\n",
+	      atomics_skel->bss->xchg64_result);
+
+	CHECK(atomics_skel->data->xchg32_value != 2, "xchg32_value",
+	      "32bit xchg left unexpected value (got %d want 2)\n",
+	      atomics_skel->data->xchg32_value);
+	CHECK(atomics_skel->bss->xchg32_result != 1, "xchg_result",
+	      "32bit xchg returned bad result (got %d want 1)\n",
+	      atomics_skel->bss->xchg32_result);
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+void test_atomics_test(void)
+{
+	test_add();
+	test_cmpxchg();
+	test_xchg();
+}

diff --git a/tools/testing/selftests/bpf/progs/atomics_test.c b/tools/testing/selftests/bpf/progs/atomics_test.c
new file mode 100644
index 000000000000..d81f45eb6c45
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/atomics_test.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+__u64 add64_value = 1;
+__u64 add64_result;
+__u32 add32_value = 1;
+__u32 add32_result;
+__u64 add_stack_value_copy;
+__u64 add_stack_result;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(add, int a)
+{
+	__u64 add_stack_value = 1;
+
+	add64_result = __sync_fetch_and_add(&add64_value, 2);
+	add32_result = __sync_fetch_and_add(&add32_value, 2);
+	add_stack_result = __sync_fetch_and_add(&add_stack_value, 2);
+	add_stack_value_copy = add_stack_value;
+
+	return 0;
+}
+
+__u64 cmpxchg64_value = 1;
+__u64 cmpxchg64_result_fail;
+__u64 cmpxchg64_result_succeed;
+__u32 cmpxchg32_value = 1;
+__u32 cmpxchg32_result_fail;
+__u32 cmpxchg32_result_succeed;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(cmpxchg, int a)
+{
+	cmpxchg64_result_fail = __sync_val_compare_and_swap(
+		&cmpxchg64_value, 0, 3);
+	cmpxchg64_result_succeed = __sync_val_compare_and_swap(
+		&cmpxchg64_value, 1, 2);
+
+	cmpxchg32_result_fail = __sync_val_compare_and_swap(
+		&cmpxchg32_value, 0, 3);
+	cmpxchg32_result_succeed = __sync_val_compare_and_swap(
+		&cmpxchg32_value, 1, 2);
+
+	return 0;
+}
+
+__u64 xchg64_value = 1;
+__u64 xchg64_result;
+__u32 xchg32_value = 1;
+__u32 xchg32_result;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(xchg, int a)
+{
+	__u64 val64 = 2;
+	__u32 val32 = 2;
+
+	__atomic_exchange(&xchg64_value, &val64, &xchg64_result, __ATOMIC_RELAXED);
+	__atomic_exchange(&xchg32_value, &val32, &xchg32_result, __ATOMIC_RELAXED);
+
+	return 0;
+}

diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
new file mode 100644
index 000000000000..eb43a06428fa
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
@@ -0,0 +1,96 @@
+{
+	"atomic compare-and-exchange smoketest - 64bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		/* old = atomic_cmpxchg(&val, 2, 4); */
+		BPF_MOV64_IMM(BPF_REG_1, 4),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 3) exit(2); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* if (val != 3) exit(3); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* old = atomic_cmpxchg(&val, 3, 4); */
+		BPF_MOV64_IMM(BPF_REG_1, 4),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 3) exit(4); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 4),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(5); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 5),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic compare-and-exchange smoketest - 32bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+		/* old = atomic_cmpxchg(&val, 2, 4); */
+		BPF_MOV32_IMM(BPF_REG_1, 4),
+		BPF_MOV32_IMM(BPF_REG_0, 2),
+		BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 3) exit(2); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* if (val != 3) exit(3); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* old = atomic_cmpxchg(&val, 3, 4); */
+		BPF_MOV32_IMM(BPF_REG_1, 4),
+		BPF_MOV32_IMM(BPF_REG_0, 3),
+		BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 3) exit(4); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 4),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(5); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 5),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV32_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Can't use cmpxchg on uninit src reg",
+	.insns = {
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "!read_ok",
+},
+{
+	"Can't use cmpxchg on uninit memory",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_MOV64_IMM(BPF_REG_2, 4),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "invalid read from stack",
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
new file mode 100644
index 000000000000..c3236510cb64
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
@@ -0,0 +1,106 @@
+{
+	"BPF_ATOMIC_FETCH_ADD smoketest - 64bit",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		/* Write 3 to stack */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		/* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+		BPF_MOV64_IMM(BPF_REG_1, 1),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* Check the value we loaded back was 3 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* Load value from stack */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+		/* Check value loaded from stack was 4 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_FETCH_ADD smoketest - 32bit",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		/* Write 3 to stack */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+		/* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+		BPF_MOV32_IMM(BPF_REG_1, 1),
+		BPF_ATOMIC_FETCH_ADD(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* Check the value we loaded back was 3 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* Load value from stack */
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+		/* Check value loaded from stack was 4 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Can't use ATM_FETCH_ADD on frame pointer",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr_unpriv = "R10 leaks addr into mem",
+	.errstr = "frame pointer is read only",
+},
+{
+	"Can't use ATM_FETCH_ADD on uninit src reg",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	/* It happens that the address leak check is first, but it would
+	 * also complain about the fact that we're trying to modify R10.
+	 */
+	.errstr = "!read_ok",
+},
+{
+	"Can't use ATM_FETCH_ADD on uninit dst reg",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	/* It happens that the address leak check is first, but it would
+	 * also complain about the fact that we're trying to modify R10.
+	 */
+	.errstr = "!read_ok",
+},
+{
+	"Can't use ATM_FETCH_ADD on kernel memory",
+	.insns = {
+		/* This is an fentry prog, context is array of the args of the
+		 * kernel function being called. Load first arg into R2.
+		 */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 0),
+		/* First arg of bpf_fentry_test7 is a pointer to a struct.
+		 * Attempt to modify that struct. Verifier shouldn't let us
+		 * because it's kernel memory.
+		 */
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_3, 0),
+		/* Done */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_TRACING,
+	.expected_attach_type = BPF_TRACE_FENTRY,
+	.kfunc = "bpf_fentry_test7",
+	.result = REJECT,
+	.errstr = "only read is supported",
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_xchg.c b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
new file mode 100644
index 000000000000..b39d8c0dabf9
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
@@ -0,0 +1,113 @@
+{
+	"atomic exchange smoketest - 64bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		/* old = atomic_xchg(&val, 4); */
+		BPF_MOV64_IMM(BPF_REG_1, 4),
+		BPF_ATOMIC_XCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 3) exit(1); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(2); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic exchange smoketest - 32bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+		/* old = atomic_xchg(&val, 4); */
+		BPF_MOV32_IMM(BPF_REG_1, 4),
+		BPF_ATOMIC_XCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 3) exit(1); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV32_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic set smoketest - 64bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		/* atomic_set(&val, 4); */
+		BPF_MOV64_IMM(BPF_REG_1, 4),
+		BPF_ATOMIC_SET(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* r1 should not be clobbered, no BPF_FETCH flag */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(2); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic set smoketest - 32bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+		/* atomic_set(&val, 4); */
+		BPF_MOV32_IMM(BPF_REG_1, 4),
+		BPF_ATOMIC_SET(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* r1 should not be clobbered, no BPF_FETCH flag */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 4, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV32_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Can't use atomic set on kernel memory",
+	.insns = {
+		/* This is an fentry prog, context is array of the args of the
+		 * kernel function being called. Load first arg into R2.
+		 */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 0),
+		/* First arg of bpf_fentry_test7 is a pointer to a struct.
+		 * Attempt to modify that struct. Verifier shouldn't let us
+		 * because it's kernel memory.
+		 */
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_ATOMIC_SET(BPF_DW, BPF_REG_2, BPF_REG_3, 0),
+		BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_TRACING,
+	.expected_attach_type = BPF_TRACE_FENTRY,
+	.kfunc = "bpf_fentry_test7",
+	.result = REJECT,
+	.errstr = "only read is supported",
},
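[Editor's note: the new cases run through the usual selftest entry
points. A sketch, assuming a tree with this series applied and a
clang/llvm carrying the D72184 support; exact invocations may vary.]

    cd tools/testing/selftests/bpf
    make
    ./test_verifier                  # includes the verifier/atomic_*.c cases
    ./test_progs -t atomics_test     # runs prog_tests/atomics_test.c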