From patchwork Wed Oct 6 16:02:40 2021
X-Patchwork-Submitter: YiFei Zhu
X-Patchwork-Id: 12539693
X-Patchwork-Delegate: bpf@iogearbox.net

From: YiFei Zhu
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev, YiFei Zhu
Subject: [PATCH bpf-next 1/3] bpf: Make BPF_PROG_RUN_ARRAY return -errno instead of allow boolean
Date: Wed, 6 Oct 2021 09:02:40 -0700
X-Mailing-List: bpf@vger.kernel.org

From: YiFei Zhu

Right now BPF_PROG_RUN_ARRAY and related macros return 1 or 0 for whether
the prog array allows or rejects whatever is being hooked. The callers of
these macros then return -EPERM or continue processing based on the
macro's return value. Unfortunately this is inflexible, since -EPERM is
the only errno that can be returned.

This patch should be a no-op; it prepares for the next patch.
The return of -EPERM is moved into the macros themselves, so the outer
functions now directly return whatever the macros return when it is
non-zero.

Signed-off-by: YiFei Zhu
Reviewed-by: Stanislav Fomichev
Acked-by: Song Liu
---
 include/linux/bpf.h      | 16 +++++++++-------
 kernel/bpf/cgroup.c      | 41 +++++++++++++++------------------------
 security/device_cgroup.c |  2 +-
 3 files changed, 25 insertions(+), 34 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 1c7fd7c4c6d3..938885562d68 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1187,7 +1187,7 @@ static inline void bpf_reset_run_ctx(struct bpf_run_ctx *old_ctx)
 
 typedef u32 (*bpf_prog_run_fn)(const struct bpf_prog *prog, const void *ctx);
 
-static __always_inline u32
+static __always_inline int
 BPF_PROG_RUN_ARRAY_CG_FLAGS(const struct bpf_prog_array __rcu *array_rcu,
 			    const void *ctx, bpf_prog_run_fn run_prog,
 			    u32 *ret_flags)
@@ -1197,7 +1197,7 @@ BPF_PROG_RUN_ARRAY_CG_FLAGS(const struct bpf_prog_array __rcu *array_rcu,
 	const struct bpf_prog_array *array;
 	struct bpf_run_ctx *old_run_ctx;
 	struct bpf_cg_run_ctx run_ctx;
-	u32 ret = 1;
+	int ret = 0;
 	u32 func_ret;
 
 	migrate_disable();
@@ -1208,7 +1208,8 @@ BPF_PROG_RUN_ARRAY_CG_FLAGS(const struct bpf_prog_array __rcu *array_rcu,
 	while ((prog = READ_ONCE(item->prog))) {
 		run_ctx.prog_item = item;
 		func_ret = run_prog(prog, ctx);
-		ret &= (func_ret & 1);
+		if (!(func_ret & 1))
+			ret = -EPERM;
 		*(ret_flags) |= (func_ret >> 1);
 		item++;
 	}
@@ -1218,7 +1219,7 @@ BPF_PROG_RUN_ARRAY_CG_FLAGS(const struct bpf_prog_array __rcu *array_rcu,
 	return ret;
 }
 
-static __always_inline u32
+static __always_inline int
 BPF_PROG_RUN_ARRAY_CG(const struct bpf_prog_array __rcu *array_rcu,
 		      const void *ctx, bpf_prog_run_fn run_prog)
 {
@@ -1227,7 +1228,7 @@ BPF_PROG_RUN_ARRAY_CG(const struct bpf_prog_array __rcu *array_rcu,
 	const struct bpf_prog_array *array;
 	struct bpf_run_ctx *old_run_ctx;
 	struct bpf_cg_run_ctx run_ctx;
-	u32 ret = 1;
+	int ret = 0;
 
 	migrate_disable();
 	rcu_read_lock();
@@ -1236,7 +1237,8 @@ BPF_PROG_RUN_ARRAY_CG(const struct bpf_prog_array __rcu *array_rcu,
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
 	while ((prog = READ_ONCE(item->prog))) {
 		run_ctx.prog_item = item;
-		ret &= run_prog(prog, ctx);
+		if (!run_prog(prog, ctx))
+			ret = -EPERM;
 		item++;
 	}
 	bpf_reset_run_ctx(old_run_ctx);
@@ -1304,7 +1306,7 @@ BPF_PROG_RUN_ARRAY(const struct bpf_prog_array __rcu *array_rcu,
 		u32 _ret;					\
 		_ret = BPF_PROG_RUN_ARRAY_CG_FLAGS(array, ctx, func, &_flags); \
 		_cn = _flags & BPF_RET_SET_CN;			\
-		if (_ret)					\
+		if (!_ret)					\
 			_ret = (_cn ? NET_XMIT_CN : NET_XMIT_SUCCESS);	\
 		else						\
 			_ret = (_cn ? NET_XMIT_DROP : -EPERM);	\
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 03145d45e3d5..5efe2588575e 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -1044,7 +1044,6 @@ int __cgroup_bpf_run_filter_skb(struct sock *sk,
 	} else {
 		ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], skb,
 					    __bpf_prog_run_save_cb);
-		ret = (ret == 1 ? 0 : -EPERM);
 	}
 	bpf_restore_data_end(skb, saved_data_end);
 	__skb_pull(skb, offset);
@@ -1071,10 +1070,9 @@ int __cgroup_bpf_run_filter_sk(struct sock *sk,
 			       enum cgroup_bpf_attach_type atype)
 {
 	struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
-	int ret;
 
-	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], sk, bpf_prog_run);
-	return ret == 1 ? 0 : -EPERM;
+	return BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], sk,
+				     bpf_prog_run);
 }
 EXPORT_SYMBOL(__cgroup_bpf_run_filter_sk);
 
@@ -1106,7 +1104,6 @@ int __cgroup_bpf_run_filter_sock_addr(struct sock *sk,
 	};
 	struct sockaddr_storage unspec;
 	struct cgroup *cgrp;
-	int ret;
 
 	/* Check socket family since not all sockets represent network
 	 * endpoint (e.g. AF_UNIX).
@@ -1120,10 +1117,8 @@ int __cgroup_bpf_run_filter_sock_addr(struct sock *sk,
 	}
 
 	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
-	ret = BPF_PROG_RUN_ARRAY_CG_FLAGS(cgrp->bpf.effective[atype], &ctx,
-					  bpf_prog_run, flags);
-
-	return ret == 1 ? 0 : -EPERM;
+	return BPF_PROG_RUN_ARRAY_CG_FLAGS(cgrp->bpf.effective[atype], &ctx,
+					   bpf_prog_run, flags);
 }
 EXPORT_SYMBOL(__cgroup_bpf_run_filter_sock_addr);
 
@@ -1148,11 +1143,9 @@ int __cgroup_bpf_run_filter_sock_ops(struct sock *sk,
 				     enum cgroup_bpf_attach_type atype)
 {
 	struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
-	int ret;
 
-	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], sock_ops,
-				    bpf_prog_run);
-	return ret == 1 ? 0 : -EPERM;
+	return BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], sock_ops,
+				     bpf_prog_run);
 }
 EXPORT_SYMBOL(__cgroup_bpf_run_filter_sock_ops);
 
@@ -1165,15 +1158,15 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
 		.major = major,
 		.minor = minor,
 	};
-	int allow;
+	int ret;
 
 	rcu_read_lock();
 	cgrp = task_dfl_cgroup(current);
-	allow = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], &ctx,
-				      bpf_prog_run);
+	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], &ctx,
+				    bpf_prog_run);
 	rcu_read_unlock();
 
-	return !allow;
+	return ret;
 }
 
 static const struct bpf_func_proto *
@@ -1314,7 +1307,7 @@ int __cgroup_bpf_run_filter_sysctl(struct ctl_table_header *head,
 		kfree(ctx.new_val);
 	}
 
-	return ret == 1 ? 0 : -EPERM;
+	return ret;
 }
 
 #ifdef CONFIG_NET
@@ -1419,10 +1412,8 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
 				       &ctx, bpf_prog_run);
 	release_sock(sk);
 
-	if (!ret) {
-		ret = -EPERM;
+	if (ret)
 		goto out;
-	}
 
 	if (ctx.optlen == -1) {
 		/* optlen set to -1, bypass kernel */
@@ -1529,10 +1520,8 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
 				       &ctx, bpf_prog_run);
 	release_sock(sk);
 
-	if (!ret) {
-		ret = -EPERM;
+	if (ret)
 		goto out;
-	}
 
 	if (ctx.optlen > max_optlen || ctx.optlen < 0) {
 		ret = -EFAULT;
@@ -1588,8 +1577,8 @@ int __cgroup_bpf_run_filter_getsockopt_kern(struct sock *sk, int level,
 	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[CGROUP_GETSOCKOPT],
 				    &ctx, bpf_prog_run);
 
-	if (!ret)
-		return -EPERM;
+	if (ret)
+		return ret;
 
 	if (ctx.optlen > *optlen)
 		return -EFAULT;
diff --git a/security/device_cgroup.c b/security/device_cgroup.c
index 04375df52fc9..cd15c7994d34 100644
--- a/security/device_cgroup.c
+++ b/security/device_cgroup.c
@@ -837,7 +837,7 @@ int devcgroup_check_permission(short type, u32 major, u32 minor, short access)
 	int rc = BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(type, major, minor, access);
 
 	if (rc)
-		return -EPERM;
+		return rc;
 
 #ifdef CONFIG_CGROUP_DEVICE
 	return devcgroup_legacy_check_permission(type, major, minor, access);

From patchwork Wed Oct 6 16:02:41 2021
X-Patchwork-Submitter: YiFei Zhu
X-Patchwork-Id: 12539695
X-Patchwork-Delegate: bpf@iogearbox.net
From: YiFei Zhu
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev, YiFei Zhu
Subject: [PATCH bpf-next 2/3] bpf: Add cgroup helper bpf_export_errno to get/set exported errno value
Date: Wed, 6 Oct 2021 09:02:41 -0700
X-Mailing-List: bpf@vger.kernel.org

From: YiFei Zhu

When passed a positive errno, the helper sets the errno and returns 0.
When passed 0, it gets the previously set errno. When passed an
out-of-bounds number, it returns -EINVAL. This is unambiguous: negative
return values indicate an error invoking the helper itself, and positive
return values are the errnos being exported. Once set, an errno cannot be
unset, but it can be overridden.

The errno value is stored inside bpf_cg_run_ctx for ease of access from
different prog types with different context struct layouts. The helper
implementation can simply do a container_of from current->bpf_ctx to
retrieve bpf_cg_run_ctx.

For backward compatibility, if a program rejects without calling the
helper, and the errno has not been set by any prior prog, the
BPF_PROG_RUN_ARRAY_CG family of macros automatically sets the errno to
EPERM. If a prog sets an errno but returns 1 (allow), the outcome is
considered implementation-defined. This patch treats it the same way as
if 0 (reject) were returned.
For BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY, the prior behavior is that if
the return value is NET_XMIT_DROP, the packet is silently dropped. We
preserve this behavior for backward compatibility, so even if an errno is
set, it is not returned to the caller.

Getsockopt hooks are different in that the bpf progs run after the kernel
processes the getsockopt syscall rather than before. Their context struct
also contains a retval, which progs can unset, and they can force an
-EPERM by returning 0. We preserve those semantics. Even though retval
exists, it can only be unset, while progs can set (but not unset) an
additional errno via the helper, and that overrides whatever is in
retval.

Signed-off-by: YiFei Zhu
Reviewed-by: Stanislav Fomichev
Acked-by: Song Liu
---
 include/linux/bpf.h            | 23 +++++++++++------------
 include/uapi/linux/bpf.h       | 14 ++++++++++++++
 kernel/bpf/cgroup.c            | 24 ++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 14 ++++++++++++++
 4 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 938885562d68..5e3f3d2f5871 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1155,6 +1155,7 @@ struct bpf_run_ctx {};
 struct bpf_cg_run_ctx {
 	struct bpf_run_ctx run_ctx;
 	const struct bpf_prog_array_item *prog_item;
+	int errno_val;
 };
 
 struct bpf_trace_run_ctx {
@@ -1196,8 +1197,7 @@ BPF_PROG_RUN_ARRAY_CG_FLAGS(const struct bpf_prog_array __rcu *array_rcu,
 	const struct bpf_prog *prog;
 	const struct bpf_prog_array *array;
 	struct bpf_run_ctx *old_run_ctx;
-	struct bpf_cg_run_ctx run_ctx;
-	int ret = 0;
+	struct bpf_cg_run_ctx run_ctx = {};
 	u32 func_ret;
 
 	migrate_disable();
@@ -1208,15 +1208,15 @@ BPF_PROG_RUN_ARRAY_CG_FLAGS(const struct bpf_prog_array __rcu *array_rcu,
 	while ((prog = READ_ONCE(item->prog))) {
 		run_ctx.prog_item = item;
 		func_ret = run_prog(prog, ctx);
-		if (!(func_ret & 1))
-			ret = -EPERM;
+		if (!(func_ret & 1) && !run_ctx.errno_val)
+			run_ctx.errno_val = EPERM;
 		*(ret_flags) |= (func_ret >> 1);
 		item++;
 	}
 	bpf_reset_run_ctx(old_run_ctx);
 	rcu_read_unlock();
 	migrate_enable();
-	return ret;
+	return -run_ctx.errno_val;
 }
 
 static __always_inline int
@@ -1227,8 +1227,7 @@ BPF_PROG_RUN_ARRAY_CG(const struct bpf_prog_array __rcu *array_rcu,
 	const struct bpf_prog *prog;
 	const struct bpf_prog_array *array;
 	struct bpf_run_ctx *old_run_ctx;
-	struct bpf_cg_run_ctx run_ctx;
-	int ret = 0;
+	struct bpf_cg_run_ctx run_ctx = {};
 
 	migrate_disable();
 	rcu_read_lock();
@@ -1237,14 +1236,14 @@ BPF_PROG_RUN_ARRAY_CG(const struct bpf_prog_array __rcu *array_rcu,
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
 	while ((prog = READ_ONCE(item->prog))) {
 		run_ctx.prog_item = item;
-		if (!run_prog(prog, ctx))
-			ret = -EPERM;
+		if (!run_prog(prog, ctx) && !run_ctx.errno_val)
+			run_ctx.errno_val = EPERM;
 		item++;
 	}
 	bpf_reset_run_ctx(old_run_ctx);
 	rcu_read_unlock();
 	migrate_enable();
-	return ret;
+	return -run_ctx.errno_val;
 }
 
 static __always_inline u32
@@ -1297,7 +1296,7 @@ BPF_PROG_RUN_ARRAY(const struct bpf_prog_array __rcu *array_rcu,
  *   0: NET_XMIT_SUCCESS  skb should be transmitted
  *   1: NET_XMIT_DROP     skb should be dropped and cn
  *   2: NET_XMIT_CN       skb should be transmitted and cn
- *   3: -EPERM            skb should be dropped
+ *   3: -errno            skb should be dropped
  */
 #define BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY(array, ctx, func)	\
 	({						\
@@ -1309,7 +1308,7 @@ BPF_PROG_RUN_ARRAY(const struct bpf_prog_array __rcu *array_rcu,
 		if (!_ret)				\
 			_ret = (_cn ? NET_XMIT_CN : NET_XMIT_SUCCESS);	\
 		else					\
-			_ret = (_cn ? NET_XMIT_DROP : -EPERM);	\
+			_ret = (_cn ? NET_XMIT_DROP : _ret);	\
 		_ret;					\
 	})
 
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 6fc59d61937a..d8126f8c0541 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4909,6 +4909,19 @@ union bpf_attr {
  *	Return
  *		The number of bytes written to the buffer, or a negative error
  *		in case of failure.
+ *
+ * int bpf_export_errno(int errno_val)
+ *	Description
+ *		If *errno_val* is positive, set the syscall's return error code;
+ *		if *errno_val* is zero, retrieve the previously set code.
+ *
+ *		This helper is currently supported by cgroup programs only.
+ *	Return
+ *		Zero if set is successful, or the previously set error code on
+ *		retrieval. Previously set code may be zero if it was never set.
+ *		On error, a negative value.
+ *
+ *		**-EINVAL** if *errno_val* not between zero and MAX_ERRNO inclusive.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5089,6 +5102,7 @@ union bpf_attr {
 	FN(task_pt_regs),		\
 	FN(get_branch_snapshot),	\
 	FN(trace_vprintk),		\
+	FN(export_errno),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 5efe2588575e..5b5051eb43e6 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -1169,6 +1169,28 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
 	return ret;
 }
 
+BPF_CALL_1(bpf_export_errno, int, errno_val)
+{
+	struct bpf_cg_run_ctx *ctx =
+		container_of(current->bpf_ctx, struct bpf_cg_run_ctx, run_ctx);
+
+	if (errno_val < 0 || errno_val > MAX_ERRNO)
+		return -EINVAL;
+
+	if (!errno_val)
+		return ctx->errno_val;
+
+	ctx->errno_val = errno_val;
+	return 0;
+}
+
+static const struct bpf_func_proto bpf_export_errno_proto = {
+	.func		= bpf_export_errno,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 cgroup_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1181,6 +1203,8 @@ cgroup_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_get_current_cgroup_id_proto;
 	case BPF_FUNC_perf_event_output:
 		return &bpf_event_output_data_proto;
+	case BPF_FUNC_export_errno:
+		return &bpf_export_errno_proto;
 	default:
 		return bpf_base_func_proto(func_id);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 6fc59d61937a..d8126f8c0541 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4909,6 +4909,19 @@ union bpf_attr {
  *	Return
  *		The number of bytes written to the buffer, or a negative error
  *		in case of failure.
+ *
+ * int bpf_export_errno(int errno_val)
+ *	Description
+ *		If *errno_val* is positive, set the syscall's return error code;
+ *		if *errno_val* is zero, retrieve the previously set code.
+ *
+ *		This helper is currently supported by cgroup programs only.
+ *	Return
+ *		Zero if set is successful, or the previously set error code on
+ *		retrieval. Previously set code may be zero if it was never set.
+ *		On error, a negative value.
+ *
+ *		**-EINVAL** if *errno_val* not between zero and MAX_ERRNO inclusive.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5089,6 +5102,7 @@ union bpf_attr {
 	FN(task_pt_regs),		\
 	FN(get_branch_snapshot),	\
 	FN(trace_vprintk),		\
+	FN(export_errno),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper

From patchwork Wed Oct 6 16:02:42 2021
X-Patchwork-Submitter: YiFei Zhu
X-Patchwork-Id: 12539697
X-Patchwork-Delegate: bpf@iogearbox.net
From: YiFei Zhu
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev, YiFei Zhu
Subject: [PATCH bpf-next 3/3] selftests/bpf: Test bpf_export_errno behavior with cgroup/sockopt
Date: Wed, 6 Oct 2021 09:02:42 -0700
X-Mailing-List: bpf@vger.kernel.org

From: YiFei Zhu

The tests check how different ways of interacting with the helper
(getting the errno, setting EUNATCH, EISCONN, and a legacy reject that
returns 0 without setting an errno) produce different results, both in
the setsockopt syscall and in the errno value returned by the helper. A
few more tests verify the interaction between the exported errno and the
retval in the getsockopt context.

Signed-off-by: YiFei Zhu
Reviewed-by: Stanislav Fomichev
Acked-by: Song Liu
---
 .../bpf/prog_tests/cgroup_export_errno.c      | 472 ++++++++++++++++++
 .../progs/cgroup_export_errno_getsockopt.c    |  45 ++
 .../progs/cgroup_export_errno_setsockopt.c    |  52 ++
 3 files changed, 569 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_export_errno.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_export_errno_getsockopt.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_export_errno_setsockopt.c

diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_export_errno.c b/tools/testing/selftests/bpf/prog_tests/cgroup_export_errno.c
new file mode 100644
index 000000000000..c472267f8427
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_export_errno.c
@@ -0,0 +1,472 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Copyright 2021 Google LLC.
+ */
+
+#include
+#include
+#include
+
+#include "cgroup_export_errno_setsockopt.skel.h"
+#include "cgroup_export_errno_getsockopt.skel.h"
+
+#define SOL_CUSTOM 0xdeadbeef
+
+static int zero;
+
+static void test_setsockopt_set(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_setsockopt *obj;
+	struct bpf_link *link_set_eunatch = NULL;
+
+	obj = cgroup_export_errno_setsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach setsockopt that sets EUNATCH, assert that
+	 * we actually get that error when we run setsockopt()
+	 */
+	link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+						      cgroup_fd);
+	if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+				   &zero, sizeof(int)), "setsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_set_eunatch);
+
+	cgroup_export_errno_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_set_and_get(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_setsockopt *obj;
+	struct bpf_link *link_set_eunatch = NULL, *link_get_errno = NULL;
+
+	obj = cgroup_export_errno_setsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach setsockopt that sets EUNATCH, and one that gets the
+	 * previously set errno. Assert that we get the same errno back.
+	 */
+	link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+						      cgroup_fd);
+	if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+		goto close_bpf_object;
+	link_get_errno = bpf_program__attach_cgroup(obj->progs.get_errno,
+						    cgroup_fd);
+	if (!ASSERT_OK_PTR(link_get_errno, "cg-attach-get_errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+				   &zero, sizeof(int)), "setsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 2, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->errno_value, EUNATCH, "errno_value"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_set_eunatch);
+	bpf_link__destroy(link_get_errno);
+
+	cgroup_export_errno_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_default_zero(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_setsockopt *obj;
+	struct bpf_link *link_get_errno = NULL;
+
+	obj = cgroup_export_errno_setsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach setsockopt that gets the previously set errno.
+	 * Assert that, without anything setting one, we get 0.
+	 */
+	link_get_errno = bpf_program__attach_cgroup(obj->progs.get_errno,
+						    cgroup_fd);
+	if (!ASSERT_OK_PTR(link_get_errno, "cg-attach-get_errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_OK(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+				  &zero, sizeof(int)), "setsockopt"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->errno_value, 0, "errno_value"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_get_errno);
+
+	cgroup_export_errno_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_default_zero_and_set(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_setsockopt *obj;
+	struct bpf_link *link_get_errno = NULL, *link_set_eunatch = NULL;
+
+	obj = cgroup_export_errno_setsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach setsockopt that gets the previously set errno, and then
+	 * one that sets the errno to EUNATCH. Assert that the get does not
+	 * see EUNATCH set later, and does not prevent EUNATCH from being set.
+	 */
+	link_get_errno = bpf_program__attach_cgroup(obj->progs.get_errno,
+						    cgroup_fd);
+	if (!ASSERT_OK_PTR(link_get_errno, "cg-attach-get_errno"))
+		goto close_bpf_object;
+	link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+						      cgroup_fd);
+	if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+				   &zero, sizeof(int)), "setsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 2, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->errno_value, 0, "errno_value"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_get_errno);
+	bpf_link__destroy(link_set_eunatch);
+
+	cgroup_export_errno_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_override(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_setsockopt *obj;
+	struct bpf_link *link_set_eunatch = NULL, *link_set_eisconn = NULL;
+	struct bpf_link *link_get_errno = NULL;
+
+	obj = cgroup_export_errno_setsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach setsockopt that sets EUNATCH, then one that sets EISCONN,
+	 * and then one that gets the exported errno. Assert both the syscall
+	 * and the helper sees the last set errno.
+	 */
+	link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+						      cgroup_fd);
+	if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+		goto close_bpf_object;
+	link_set_eisconn = bpf_program__attach_cgroup(obj->progs.set_eisconn,
+						      cgroup_fd);
+	if (!ASSERT_OK_PTR(link_set_eisconn, "cg-attach-set_eisconn"))
+		goto close_bpf_object;
+	link_get_errno = bpf_program__attach_cgroup(obj->progs.get_errno,
+						    cgroup_fd);
+	if (!ASSERT_OK_PTR(link_get_errno, "cg-attach-get_errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+				   &zero, sizeof(int)), "setsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EISCONN, "setsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 3, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->errno_value, EISCONN, "errno_value"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_set_eunatch);
+	bpf_link__destroy(link_set_eisconn);
+	bpf_link__destroy(link_get_errno);
+
+	cgroup_export_errno_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_legacy_eperm(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_setsockopt *obj;
+	struct bpf_link *link_legacy_eperm = NULL, *link_get_errno = NULL;
+
+	obj = cgroup_export_errno_setsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach setsockopt that returns a rejection without setting errno
+	 * (legacy reject), and one that gets the errno. Assert that for
+	 * backward compatibility the syscall results in EPERM, and that
+	 * this is also visible to the helper.
+	 */
+	link_legacy_eperm = bpf_program__attach_cgroup(obj->progs.legacy_eperm,
+						       cgroup_fd);
+	if (!ASSERT_OK_PTR(link_legacy_eperm, "cg-attach-legacy_eperm"))
+		goto close_bpf_object;
+	link_get_errno = bpf_program__attach_cgroup(obj->progs.get_errno,
+						    cgroup_fd);
+	if (!ASSERT_OK_PTR(link_get_errno, "cg-attach-get_errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+				   &zero, sizeof(int)), "setsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EPERM, "setsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 2, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->errno_value, EPERM, "errno_value"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_legacy_eperm);
+	bpf_link__destroy(link_get_errno);
+
+	cgroup_export_errno_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_legacy_no_override(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_setsockopt *obj;
+	struct bpf_link *link_set_eunatch = NULL, *link_legacy_eperm = NULL;
+	struct bpf_link *link_get_errno = NULL;
+
+	obj = cgroup_export_errno_setsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach setsockopt that sets EUNATCH, then one that returns a
+	 * rejection without setting errno, and then one that gets the
+	 * exported errno. Assert that both the syscall and the helper's
+	 * errno are unaffected by the second prog (i.e. a legacy reject
+	 * does not override the errno to EPERM).
+	 */
+	link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+						      cgroup_fd);
+	if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+		goto close_bpf_object;
+	link_legacy_eperm = bpf_program__attach_cgroup(obj->progs.legacy_eperm,
+						       cgroup_fd);
+	if (!ASSERT_OK_PTR(link_legacy_eperm, "cg-attach-legacy_eperm"))
+		goto close_bpf_object;
+	link_get_errno = bpf_program__attach_cgroup(obj->progs.get_errno,
+						    cgroup_fd);
+	if (!ASSERT_OK_PTR(link_get_errno, "cg-attach-get_errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+				   &zero, sizeof(int)), "setsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 3, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->errno_value, EUNATCH, "errno_value"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_set_eunatch);
+	bpf_link__destroy(link_legacy_eperm);
+	bpf_link__destroy(link_get_errno);
+
+	cgroup_export_errno_setsockopt__destroy(obj);
+}
+
+static void test_getsockopt_get(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_getsockopt *obj;
+	struct bpf_link *link_get_errno = NULL;
+	int buf;
+	socklen_t optlen = sizeof(buf);
+
+	obj = cgroup_export_errno_getsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach getsockopt that gets the previously set errno. Assert that
+	 * the error from the kernel is in retval_value and not errno_value.
+	 */
+	link_get_errno = bpf_program__attach_cgroup(obj->progs.get_errno,
+						    cgroup_fd);
+	if (!ASSERT_OK_PTR(link_get_errno, "cg-attach-get_errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(getsockopt(sock_fd, SOL_CUSTOM, 0,
+				   &buf, &optlen), "getsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EOPNOTSUPP, "getsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->errno_value, 0, "errno_value"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(obj->bss->retval_value, -EOPNOTSUPP, "retval_value"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_get_errno);
+
+	cgroup_export_errno_getsockopt__destroy(obj);
+}
+
+static void test_getsockopt_override(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_getsockopt *obj;
+	struct bpf_link *link_set_eisconn = NULL;
+	int buf;
+	socklen_t optlen = sizeof(buf);
+
+	obj = cgroup_export_errno_getsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach getsockopt that sets errno to EISCONN. Assert that this
+	 * overrides the value from the kernel.
+	 */
+	link_set_eisconn = bpf_program__attach_cgroup(obj->progs.set_eisconn,
+						      cgroup_fd);
+	if (!ASSERT_OK_PTR(link_set_eisconn, "cg-attach-set_eisconn"))
+		goto close_bpf_object;
+
+	if (!ASSERT_ERR(getsockopt(sock_fd, SOL_CUSTOM, 0,
+				   &buf, &optlen), "getsockopt"))
+		goto close_bpf_object;
+	if (!ASSERT_EQ(errno, EISCONN, "getsockopt-errno"))
+		goto close_bpf_object;
+
+	if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+		goto close_bpf_object;
+	if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+		goto close_bpf_object;
+
+close_bpf_object:
+	bpf_link__destroy(link_set_eisconn);
+
+	cgroup_export_errno_getsockopt__destroy(obj);
+}
+
+static void test_getsockopt_retval_no_clear_errno(int cgroup_fd, int sock_fd)
+{
+	struct cgroup_export_errno_getsockopt *obj;
+	struct bpf_link *link_set_eisconn = NULL, *link_clear_retval = NULL;
+	int buf;
+	socklen_t optlen = sizeof(buf);
+
+	obj = cgroup_export_errno_getsockopt__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "skel-load"))
+		return;
+
+	/* Attach getsockopt that sets errno to EISCONN, and one that clears
+	 * retval. Assert that clearing retval does not clear EISCONN.
+ */ + link_set_eisconn = bpf_program__attach_cgroup(obj->progs.set_eisconn, + cgroup_fd); + if (!ASSERT_OK_PTR(link_set_eisconn, "cg-attach-set_eisconn")) + goto close_bpf_object; + link_clear_retval = bpf_program__attach_cgroup(obj->progs.clear_retval, + cgroup_fd); + if (!ASSERT_OK_PTR(link_clear_retval, "cg-attach-clear_retval")) + goto close_bpf_object; + + if (!ASSERT_ERR(getsockopt(sock_fd, SOL_CUSTOM, 0, + &buf, &optlen), "getsockopt")) + goto close_bpf_object; + if (!ASSERT_EQ(errno, EISCONN, "getsockopt-errno")) + goto close_bpf_object; + + if (!ASSERT_EQ(obj->bss->invocations, 2, "invocations")) + goto close_bpf_object; + if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error")) + goto close_bpf_object; + +close_bpf_object: + bpf_link__destroy(link_set_eisconn); + bpf_link__destroy(link_clear_retval); + + cgroup_export_errno_getsockopt__destroy(obj); +} + +void test_cgroup_export_errno(void) +{ + int cgroup_fd = -1; + int sock_fd = -1; + + cgroup_fd = test__join_cgroup("/cgroup_export_errno"); + if (!ASSERT_GE(cgroup_fd, 0, "cg-create")) + goto close_fd; + + sock_fd = start_server(AF_INET, SOCK_DGRAM, NULL, 0, 0); + if (!ASSERT_GE(sock_fd, 0, "start-server")) + goto close_fd; + + if (test__start_subtest("setsockopt-set")) + test_setsockopt_set(cgroup_fd, sock_fd); + + if (test__start_subtest("setsockopt-set_and_get")) + test_setsockopt_set_and_get(cgroup_fd, sock_fd); + + if (test__start_subtest("setsockopt-default_zero")) + test_setsockopt_default_zero(cgroup_fd, sock_fd); + + if (test__start_subtest("setsockopt-default_zero_and_set")) + test_setsockopt_default_zero_and_set(cgroup_fd, sock_fd); + + if (test__start_subtest("setsockopt-override")) + test_setsockopt_override(cgroup_fd, sock_fd); + + if (test__start_subtest("setsockopt-legacy_eperm")) + test_setsockopt_legacy_eperm(cgroup_fd, sock_fd); + + if (test__start_subtest("setsockopt-legacy_no_override")) + test_setsockopt_legacy_no_override(cgroup_fd, sock_fd); + + if 
(test__start_subtest("getsockopt-get")) + test_getsockopt_get(cgroup_fd, sock_fd); + + if (test__start_subtest("getsockopt-override")) + test_getsockopt_override(cgroup_fd, sock_fd); + + if (test__start_subtest("getsockopt-retval_no_clear_errno")) + test_getsockopt_retval_no_clear_errno(cgroup_fd, sock_fd); + +close_fd: + close(cgroup_fd); +} diff --git a/tools/testing/selftests/bpf/progs/cgroup_export_errno_getsockopt.c b/tools/testing/selftests/bpf/progs/cgroup_export_errno_getsockopt.c new file mode 100644 index 000000000000..2429e66b325a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/cgroup_export_errno_getsockopt.c @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* + * Copyright 2021 Google LLC. + */ + +#include +#include +#include + +__u32 invocations = 0; +__u32 assertion_error = 0; +__u32 errno_value = 0; +__u32 retval_value = 0; + +SEC("cgroup/getsockopt") +int get_errno(struct bpf_sockopt *ctx) +{ + errno_value = bpf_export_errno(0); + retval_value = ctx->retval; + __sync_fetch_and_add(&invocations, 1); + + return 1; +} + +SEC("cgroup/getsockopt") +int set_eisconn(struct bpf_sockopt *ctx) +{ + __sync_fetch_and_add(&invocations, 1); + + if (bpf_export_errno(EISCONN)) + assertion_error = 1; + + return 1; +} + +SEC("cgroup/getsockopt") +int clear_retval(struct bpf_sockopt *ctx) +{ + __sync_fetch_and_add(&invocations, 1); + + ctx->retval = 0; + + return 1; +} diff --git a/tools/testing/selftests/bpf/progs/cgroup_export_errno_setsockopt.c b/tools/testing/selftests/bpf/progs/cgroup_export_errno_setsockopt.c new file mode 100644 index 000000000000..f8585e100863 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/cgroup_export_errno_setsockopt.c @@ -0,0 +1,52 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* + * Copyright 2021 Google LLC. 
+ */
+
+#include <errno.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+__u32 invocations = 0;
+__u32 assertion_error = 0;
+__u32 errno_value = 0;
+
+SEC("cgroup/setsockopt")
+int get_errno(struct bpf_sockopt *ctx)
+{
+	errno_value = bpf_export_errno(0);
+	__sync_fetch_and_add(&invocations, 1);
+
+	return 1;
+}
+
+SEC("cgroup/setsockopt")
+int set_eunatch(struct bpf_sockopt *ctx)
+{
+	__sync_fetch_and_add(&invocations, 1);
+
+	if (bpf_export_errno(EUNATCH))
+		assertion_error = 1;
+
+	return 0;
+}
+
+SEC("cgroup/setsockopt")
+int set_eisconn(struct bpf_sockopt *ctx)
+{
+	__sync_fetch_and_add(&invocations, 1);
+
+	if (bpf_export_errno(EISCONN))
+		assertion_error = 1;
+
+	return 0;
+}
+
+SEC("cgroup/setsockopt")
+int legacy_eperm(struct bpf_sockopt *ctx)
+{
+	__sync_fetch_and_add(&invocations, 1);
+
+	return 0;
+}