
[v11,4/5] security: Update non standard hooks to use static calls

Message ID 20240509201421.905965-5-kpsingh@kernel.org (mailing list archive)
State Superseded
Series Reduce overhead of LSMs with static calls

Checks

Context Check Description
netdev/tree_selection success Not a local patch
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-17 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-42 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 fail Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 fail Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-13 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-7 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-10 success Logs for x86_64-gcc / build-release

Commit Message

KP Singh May 9, 2024, 8:14 p.m. UTC
There are some LSM hooks which do not use the common pattern followed
by other LSM hooks and thus cannot use the call_{int, void}_hook macros;
they instead use the lsm_for_each_hook macro, which still results in an
indirect call.

There is one additional generalizable pattern where a hook matching a
given lsmid is called; the indirect calls for these are addressed with
the newly added call_hook_with_lsmid macro, which internally uses an
implementation similar to call_int_hook but adds a check that matches
the lsmid.

For the generic case, the lsm_for_each_hook macro is updated to accept
logic before and after the invocation of the LSM hook (static call) in
the unrolled loop.

Signed-off-by: KP Singh <kpsingh@kernel.org>
---
 security/security.c | 229 ++++++++++++++++++++++++--------------------
 1 file changed, 125 insertions(+), 104 deletions(-)
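
For readers skimming the series, here is a minimal, self-contained user-space
sketch of the unrolled-loop pattern the commit message describes: each slot is
guarded, and the caller passes a block that runs right after the call so it
can inspect the return value and bail out early. Everything below (CALL_SLOT,
FOR_EACH_SLOT, the slot arrays) is an illustrative stand-in, not the kernel
API; in the kernel the per-slot guard is a static branch and the call itself a
static call, which is what removes the indirect call.

/* demo.c: cc -o demo demo.c */
#include <stdio.h>

/* Two hypothetical "LSM" slots: an enable flag, an id and a hook each. */
static int lsm_a_vm_enough_memory(long pages) { return pages < 100 ? 1 : -1; }
static int lsm_b_vm_enough_memory(long pages) { (void)pages; return 1; }

static int slot_enabled[2] = { 1, 1 };
static int slot_id[2] = { 100, 101 };
static int (*slot_fn[2])(long) = {
	lsm_a_vm_enough_memory, lsm_b_vm_enough_memory
};

/*
 * One unrolled step, mirroring the shape of __CALL_HOOK in the patch:
 * skip disabled slots, run BODY_BEFORE, make the call, run BODY_AFTER.
 * A "continue" in BODY_BEFORE only skips this slot (it terminates the
 * do/while); a "goto" in BODY_AFTER leaves the whole unrolled loop.
 */
#define CALL_SLOT(NUM, RC, BODY_BEFORE, BODY_AFTER, ...) \
do { \
	int _hook_lsmid = slot_id[NUM]; /* backs a current_lsmid()-style accessor */ \
	(void)_hook_lsmid; \
	if (slot_enabled[NUM]) { \
		BODY_BEFORE \
		RC = slot_fn[NUM](__VA_ARGS__); \
		BODY_AFTER \
	} \
} while (0);

/* Generic variant: no BODY_BEFORE, caller-supplied BODY_AFTER. */
#define FOR_EACH_SLOT(RC, BODY, ...) \
	CALL_SLOT(0, RC, ;, BODY, __VA_ARGS__) \
	CALL_SLOT(1, RC, ;, BODY, __VA_ARGS__)

int main(void)
{
	int cap_sys_admin = 1;
	int rc = 1;

	/* Same shape as the converted security_vm_enough_memory_mm(). */
	FOR_EACH_SLOT(rc, {
		if (rc <= 0) {
			cap_sys_admin = 0;
			goto out;
		}
	}, 200);
out:
	printf("rc=%d cap_sys_admin=%d\n", rc, cap_sys_admin);
	return 0;
}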

Comments

Casey Schaufler May 10, 2024, 5:08 p.m. UTC | #1
On 5/9/2024 1:14 PM, KP Singh wrote:
> There are some LSM hooks which do not use the common pattern followed
> by other LSM hooks and thus cannot use the call_{int, void}_hook macros;
> they instead use the lsm_for_each_hook macro, which still results in an
> indirect call.
>
> There is one additional generalizable pattern where a hook matching a
> given lsmid is called; the indirect calls for these are addressed with
> the newly added call_hook_with_lsmid macro, which internally uses an
> implementation similar to call_int_hook but adds a check that matches
> the lsmid.
>
> For the generic case, the lsm_for_each_hook macro is updated to accept
> logic before and after the invocation of the LSM hook (static call) in
> the unrolled loop.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>

A couple of comments below, nonetheless ...

Reviewed-by: Casey Schaufler <casey@schaufler-ca.com>

> ---
>  security/security.c | 229 ++++++++++++++++++++++++--------------------
>  1 file changed, 125 insertions(+), 104 deletions(-)
>
> diff --git a/security/security.c b/security/security.c
> index 39ffe949e509..491b807a8a63 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -945,10 +945,41 @@ out:									\
>  	RC;								\
>  })
>  
> -#define lsm_for_each_hook(scall, NAME)					\
> -	for (scall = static_calls_table.NAME;				\
> -	     scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++)  \
> -		if (static_key_enabled(&scall->active->key))
> +/*
> + * Can be used in the context passed to lsm_for_each_hook to get the lsmid of the
> + * current hook
> + */
> +#define current_lsmid() _hook_lsmid
> +
> +#define __CALL_HOOK(NUM, HOOK, RC, BODY_BEFORE, BODY_AFTER, ...)	     \
> +do {									     \
> +	int __maybe_unused _hook_lsmid;					     \
> +									     \
> +	if (static_branch_unlikely(&SECURITY_HOOK_ACTIVE_KEY(HOOK, NUM))) {  \
> +		_hook_lsmid = static_calls_table.HOOK[NUM].hl->lsmid->id;    \
> +		BODY_BEFORE						     \
> +		RC = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);   \
> +		BODY_AFTER						     \
> +	}								     \
> +} while (0);
> +
> +#define lsm_for_each_hook(HOOK, RC, BODY, ...)	\
> +	LSM_LOOP_UNROLL(__CALL_HOOK, HOOK, RC, ;, BODY, __VA_ARGS__)
> +
> +#define call_hook_with_lsmid(HOOK, LSMID, ...)				\
> +({									\
> +	__label__ out;							\
> +	int RC = LSM_RET_DEFAULT(HOOK);					\
> +									\
> +	LSM_LOOP_UNROLL(__CALL_HOOK, HOOK, RC, {			\
> +		if (current_lsmid() != LSMID)				\
> +			continue;					\
> +	}, {								\
> +		goto out;						\
> +	}, __VA_ARGS__);						\
> +out:									\
> +	RC;								\
> +})

I like how clean these macros look ...

>  
>  /* Security operations */
>  
> @@ -1184,7 +1215,6 @@ int security_settime64(const struct timespec64 *ts, const struct timezone *tz)
>   */
>  int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
>  {
> -	struct lsm_static_call *scall;
>  	int cap_sys_admin = 1;
>  	int rc;
>  
> @@ -1195,13 +1225,18 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
>  	 * agree that it should be set it will. If any module
>  	 * thinks it should not be set it won't.
>  	 */
> -	lsm_for_each_hook(scall, vm_enough_memory) {
> -		rc = scall->hl->hook.vm_enough_memory(mm, pages);
> -		if (rc <= 0) {
> -			cap_sys_admin = 0;
> -			break;
> -		}
> -	}
> +
> +	lsm_for_each_hook(
> +		vm_enough_memory, rc,
> +		{
> +			if (rc <= 0) {
> +				cap_sys_admin = 0;
> +				goto out;
> +			}
> +		},
> +		mm, pages);
> +
> +out:

... but the invocations are quite hideous. Because this looks like code that's
messed up, I would like to see it commented, perhaps something like:

+
+	lsm_for_each_hook(
+		vm_enough_memory, rc,
+		/* BLOCK BEFORE */
+		{
+			if (rc <= 0) {
+				cap_sys_admin = 0;
+				goto out;
+			}
+		},
+		/* END BLOCK BEFORE */
+		mm, pages);
+
+out:

>  	return __vm_enough_memory(mm, pages, cap_sys_admin);
>  }
>  
> @@ -1343,17 +1378,19 @@ int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc)
>  int security_fs_context_parse_param(struct fs_context *fc,
>  				    struct fs_parameter *param)
>  {
> -	struct lsm_static_call *scall;
> -	int trc;
> +	int trc = LSM_RET_DEFAULT(fs_context_parse_param);
>  	int rc = -ENOPARAM;
>  
> -	lsm_for_each_hook(scall, fs_context_parse_param) {
> -		trc = scall->hl->hook.fs_context_parse_param(fc, param);
> -		if (trc == 0)
> -			rc = 0;
> -		else if (trc != -ENOPARAM)
> -			return trc;
> -	}
> +	lsm_for_each_hook(
> +		fs_context_parse_param, trc,
> +		{
> +			if (trc == 0)
> +				rc = 0;
> +			else if (trc != -ENOPARAM)
> +				return trc;
> +		},
> +		fc, param);
> +
>  	return rc;
>  }
>  
> @@ -1578,15 +1615,17 @@ int security_sb_set_mnt_opts(struct super_block *sb,
>  			     unsigned long kern_flags,
>  			     unsigned long *set_kern_flags)
>  {
> -	struct lsm_static_call *scall;
>  	int rc = mnt_opts ? -EOPNOTSUPP : LSM_RET_DEFAULT(sb_set_mnt_opts);
>  
> -	lsm_for_each_hook(scall, sb_set_mnt_opts) {
> -		rc = scall->hl->hook.sb_set_mnt_opts(sb, mnt_opts, kern_flags,
> -					      set_kern_flags);
> -		if (rc != LSM_RET_DEFAULT(sb_set_mnt_opts))
> -			break;
> -	}
> +	lsm_for_each_hook(
> +		sb_set_mnt_opts, rc,
> +		{
> +			if (rc != LSM_RET_DEFAULT(sb_set_mnt_opts))
> +				goto out;
> +		},
> +		sb, mnt_opts, kern_flags, set_kern_flags);
> +
> +out:
>  	return rc;
>  }
>  EXPORT_SYMBOL(security_sb_set_mnt_opts);
> @@ -1777,7 +1816,6 @@ int security_inode_init_security(struct inode *inode, struct inode *dir,
>  				 const struct qstr *qstr,
>  				 const initxattrs initxattrs, void *fs_data)
>  {
> -	struct lsm_static_call *scall;
>  	struct xattr *new_xattrs = NULL;
>  	int ret = -EOPNOTSUPP, xattr_count = 0;
>  
> @@ -1795,18 +1833,19 @@ int security_inode_init_security(struct inode *inode, struct inode *dir,
>  			return -ENOMEM;
>  	}
>  
> -	lsm_for_each_hook(scall, inode_init_security) {
> -		ret = scall->hl->hook.inode_init_security(inode, dir, qstr, new_xattrs,
> -						  &xattr_count);
> -		if (ret && ret != -EOPNOTSUPP)
> -			goto out;
> +	lsm_for_each_hook(
> +		inode_init_security, ret,
> +		{
>  		/*
>  		 * As documented in lsm_hooks.h, -EOPNOTSUPP in this context
>  		 * means that the LSM is not willing to provide an xattr, not
>  		 * that it wants to signal an error. Thus, continue to invoke
>  		 * the remaining LSMs.
>  		 */
> -	}
> +			if (ret && ret != -EOPNOTSUPP)
> +				goto out;
> +		},
> +		inode, dir, qstr, new_xattrs, &xattr_count);
>  
>  	/* If initxattrs() is NULL, xattr_count is zero, skip the call. */
>  	if (!xattr_count)
> @@ -3601,16 +3640,19 @@ int security_task_prctl(int option, unsigned long arg2, unsigned long arg3,
>  {
>  	int thisrc;
>  	int rc = LSM_RET_DEFAULT(task_prctl);
> -	struct lsm_static_call *scall;
> -
> -	lsm_for_each_hook(scall, task_prctl) {
> -		thisrc = scall->hl->hook.task_prctl(option, arg2, arg3, arg4, arg5);
> -		if (thisrc != LSM_RET_DEFAULT(task_prctl)) {
> -			rc = thisrc;
> -			if (thisrc != 0)
> -				break;
> -		}
> -	}
> +
> +	lsm_for_each_hook(
> +		task_prctl, thisrc,
> +		{
> +			if (thisrc != LSM_RET_DEFAULT(task_prctl)) {
> +				rc = thisrc;
> +				if (thisrc != 0)
> +					goto out;
> +			}
> +		},
> +		option, arg2, arg3, arg4, arg5);
> +
> +out:
>  	return rc;
>  }
>  
> @@ -4010,7 +4052,6 @@ EXPORT_SYMBOL(security_d_instantiate);
>  int security_getselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
>  			 u32 __user *size, u32 flags)
>  {
> -	struct lsm_static_call *scall;
>  	struct lsm_ctx lctx = { .id = LSM_ID_UNDEF, };
>  	u8 __user *base = (u8 __user *)uctx;
>  	u32 entrysize;
> @@ -4048,31 +4089,40 @@ int security_getselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
>  	 * In the usual case gather all the data from the LSMs.
>  	 * In the single case only get the data from the LSM specified.
>  	 */
> -	lsm_for_each_hook(scall, getselfattr) {
> -		if (single && lctx.id != scall->hl->lsmid->id)
> -			continue;
> -		entrysize = left;
> -		if (base)
> -			uctx = (struct lsm_ctx __user *)(base + total);
> -		rc = scall->hl->hook.getselfattr(attr, uctx, &entrysize, flags);
> -		if (rc == -EOPNOTSUPP) {
> -			rc = 0;
> -			continue;
> -		}
> -		if (rc == -E2BIG) {
> -			rc = 0;
> -			left = 0;
> -			toobig = true;
> -		} else if (rc < 0)
> -			return rc;
> -		else
> -			left -= entrysize;
> +	LSM_LOOP_UNROLL(
> +		__CALL_HOOK, getselfattr, rc,
> +		/* BODY_BEFORE */
> +		{
> +			if (single && lctx.id != current_lsmid())
> +				continue;
> +			entrysize = left;
> +			if (base)
> +				uctx = (struct lsm_ctx __user *)(base + total);
> +		},
> +		/* BODY_AFTER */
> +		{
> +			if (rc == -EOPNOTSUPP) {
> +				rc = 0;
> +			} else {
> +				if (rc == -E2BIG) {
> +					rc = 0;
> +					left = 0;
> +					toobig = true;
> +				} else if (rc < 0)
> +					return rc;
> +				else
> +					left -= entrysize;
> +
> +				total += entrysize;
> +				count += rc;
> +				if (single)
> +					goto out;
> +			}
> +		},
> +		attr, uctx, &entrysize, flags);
> +
> +out:
>  
> -		total += entrysize;
> -		count += rc;
> -		if (single)
> -			break;
> -	}
>  	if (put_user(total, size))
>  		return -EFAULT;
>  	if (toobig)
> @@ -4103,9 +4153,8 @@ int security_getselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
>  int security_setselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
>  			 u32 size, u32 flags)
>  {
> -	struct lsm_static_call *scall;
>  	struct lsm_ctx *lctx;
> -	int rc = LSM_RET_DEFAULT(setselfattr);
> +	int rc;
>  	u64 required_len;
>  
>  	if (flags)
> @@ -4126,11 +4175,7 @@ int security_setselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
>  		goto free_out;
>  	}
>  
> -	lsm_for_each_hook(scall, setselfattr)
> -		if ((scall->hl->lsmid->id) == lctx->id) {
> -			rc = scall->hl->hook.setselfattr(attr, lctx, size, flags);
> -			break;
> -		}
> +	rc = call_hook_with_lsmid(setselfattr, lctx->id, attr, lctx, size, flags);
>  
>  free_out:
>  	kfree(lctx);
> @@ -4151,14 +4196,7 @@ int security_setselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
>  int security_getprocattr(struct task_struct *p, int lsmid, const char *name,
>  			 char **value)
>  {
> -	struct lsm_static_call *scall;
> -
> -	lsm_for_each_hook(scall, getprocattr) {
> -		if (lsmid != 0 && lsmid != scall->hl->lsmid->id)
> -			continue;
> -		return scall->hl->hook.getprocattr(p, name, value);
> -	}
> -	return LSM_RET_DEFAULT(getprocattr);
> +	return call_hook_with_lsmid(getprocattr, lsmid, p, name, value);
>  }
>  
>  /**
> @@ -4175,14 +4213,7 @@ int security_getprocattr(struct task_struct *p, int lsmid, const char *name,
>   */
>  int security_setprocattr(int lsmid, const char *name, void *value, size_t size)
>  {
> -	struct lsm_static_call *scall;
> -
> -	lsm_for_each_hook(scall, setprocattr) {
> -		if (lsmid != 0 && lsmid != scall->hl->lsmid->id)
> -			continue;
> -		return scall->hl->hook.setprocattr(name, value, size);
> -	}
> -	return LSM_RET_DEFAULT(setprocattr);
> +	return call_hook_with_lsmid(setprocattr, lsmid, name, value, size);
>  }
>  
>  /**
> @@ -5267,23 +5298,13 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
>  				       struct xfrm_policy *xp,
>  				       const struct flowi_common *flic)
>  {
> -	struct lsm_static_call *scall;
> -	int rc = LSM_RET_DEFAULT(xfrm_state_pol_flow_match);
> -
>  	/*
>  	 * Since this function is expected to return 0 or 1, the judgment
>  	 * becomes difficult if multiple LSMs supply this call. Fortunately,
>  	 * we can use the first LSM's judgment because currently only SELinux
>  	 * supplies this call.
> -	 *
> -	 * For speed optimization, we explicitly break the loop rather than
> -	 * using the macro
>  	 */
> -	lsm_for_each_hook(scall, xfrm_state_pol_flow_match) {
> -		rc = scall->hl->hook.xfrm_state_pol_flow_match(x, xp, flic);
> -		break;
> -	}
> -	return rc;
> +	return call_int_hook(xfrm_state_pol_flow_match, x, xp, flic);
>  }
>  
>  /**
Kees Cook May 13, 2024, 7:01 p.m. UTC | #2
On Thu, May 09, 2024 at 10:14:20PM +0200, KP Singh wrote:
> There are some LSM hooks which do not use the common pattern followed
> by other LSM hooks and thus cannot use the call_{int, void}_hook macros;
> they instead use the lsm_for_each_hook macro, which still results in an
> indirect call.
>
> There is one additional generalizable pattern where a hook matching a
> given lsmid is called; the indirect calls for these are addressed with
> the newly added call_hook_with_lsmid macro, which internally uses an
> implementation similar to call_int_hook but adds a check that matches
> the lsmid.
>
> For the generic case, the lsm_for_each_hook macro is updated to accept
> logic before and after the invocation of the LSM hook (static call) in
> the unrolled loop.
> 
> Signed-off-by: KP Singh <kpsingh@kernel.org>

I think this will give us the flexibility we need!

Reviewed-by: Kees Cook <keescook@chromium.org>
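
To complement the sketch after the commit message, here is a companion
user-space mock-up of the lsmid-matching case handled by the new
call_hook_with_lsmid macro: only the slot whose id matches is called, its
return value is handed back directly, and a default is returned when nothing
matches. Again, every name here is illustrative; the statement-expression and
__label__ tricks are the same GCC/clang extensions the kernel macro relies on.

/* demo2.c: cc -o demo2 demo2.c (needs GCC/clang statement expressions) */
#include <stdio.h>

static int lsm_a_setattr(const char *name) { (void)name; return 0; }
static int lsm_b_setattr(const char *name) { (void)name; return 7; }

static int slot_enabled[2] = { 1, 1 };
static int slot_id[2] = { 100, 101 };
static int (*slot_fn[2])(const char *) = { lsm_a_setattr, lsm_b_setattr };

#define RET_DEFAULT (-95) /* stand-in for LSM_RET_DEFAULT() */

/*
 * One unrolled step: skip the slot unless its id matches, otherwise
 * call it and jump out -- the same shape as call_hook_with_lsmid.
 */
#define CALL_SLOT_MATCH(NUM, RC, LSMID, ...) \
do { \
	if (slot_enabled[NUM]) { \
		if (slot_id[NUM] != (LSMID)) \
			continue; \
		RC = slot_fn[NUM](__VA_ARGS__); \
		goto out; \
	} \
} while (0);

#define call_hook_with_id(LSMID, ...) \
({ \
	__label__ out; \
	int RC = RET_DEFAULT; \
	CALL_SLOT_MATCH(0, RC, LSMID, __VA_ARGS__) \
	CALL_SLOT_MATCH(1, RC, LSMID, __VA_ARGS__) \
out: \
	RC; \
})

int main(void)
{
	printf("id 101 -> %d\n", call_hook_with_id(101, "attr"));
	printf("id 999 -> %d\n", call_hook_with_id(999, "attr"));
	return 0;
}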

Patch

diff --git a/security/security.c b/security/security.c
index 39ffe949e509..491b807a8a63 100644
--- a/security/security.c
+++ b/security/security.c
@@ -945,10 +945,41 @@  out:									\
 	RC;								\
 })
 
-#define lsm_for_each_hook(scall, NAME)					\
-	for (scall = static_calls_table.NAME;				\
-	     scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++)  \
-		if (static_key_enabled(&scall->active->key))
+/*
+ * Can be used in the context passed to lsm_for_each_hook to get the lsmid of the
+ * current hook
+ */
+#define current_lsmid() _hook_lsmid
+
+#define __CALL_HOOK(NUM, HOOK, RC, BODY_BEFORE, BODY_AFTER, ...)	     \
+do {									     \
+	int __maybe_unused _hook_lsmid;					     \
+									     \
+	if (static_branch_unlikely(&SECURITY_HOOK_ACTIVE_KEY(HOOK, NUM))) {  \
+		_hook_lsmid = static_calls_table.HOOK[NUM].hl->lsmid->id;    \
+		BODY_BEFORE						     \
+		RC = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);   \
+		BODY_AFTER						     \
+	}								     \
+} while (0);
+
+#define lsm_for_each_hook(HOOK, RC, BODY, ...)	\
+	LSM_LOOP_UNROLL(__CALL_HOOK, HOOK, RC, ;, BODY, __VA_ARGS__)
+
+#define call_hook_with_lsmid(HOOK, LSMID, ...)				\
+({									\
+	__label__ out;							\
+	int RC = LSM_RET_DEFAULT(HOOK);					\
+									\
+	LSM_LOOP_UNROLL(__CALL_HOOK, HOOK, RC, {			\
+		if (current_lsmid() != LSMID)				\
+			continue;					\
+	}, {								\
+		goto out;						\
+	}, __VA_ARGS__);						\
+out:									\
+	RC;								\
+})
 
 /* Security operations */
 
@@ -1184,7 +1215,6 @@  int security_settime64(const struct timespec64 *ts, const struct timezone *tz)
  */
 int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
 {
-	struct lsm_static_call *scall;
 	int cap_sys_admin = 1;
 	int rc;
 
@@ -1195,13 +1225,18 @@  int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
 	 * agree that it should be set it will. If any module
 	 * thinks it should not be set it won't.
 	 */
-	lsm_for_each_hook(scall, vm_enough_memory) {
-		rc = scall->hl->hook.vm_enough_memory(mm, pages);
-		if (rc <= 0) {
-			cap_sys_admin = 0;
-			break;
-		}
-	}
+
+	lsm_for_each_hook(
+		vm_enough_memory, rc,
+		{
+			if (rc <= 0) {
+				cap_sys_admin = 0;
+				goto out;
+			}
+		},
+		mm, pages);
+
+out:
 	return __vm_enough_memory(mm, pages, cap_sys_admin);
 }
 
@@ -1343,17 +1378,19 @@  int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc)
 int security_fs_context_parse_param(struct fs_context *fc,
 				    struct fs_parameter *param)
 {
-	struct lsm_static_call *scall;
-	int trc;
+	int trc = LSM_RET_DEFAULT(fs_context_parse_param);
 	int rc = -ENOPARAM;
 
-	lsm_for_each_hook(scall, fs_context_parse_param) {
-		trc = scall->hl->hook.fs_context_parse_param(fc, param);
-		if (trc == 0)
-			rc = 0;
-		else if (trc != -ENOPARAM)
-			return trc;
-	}
+	lsm_for_each_hook(
+		fs_context_parse_param, trc,
+		{
+			if (trc == 0)
+				rc = 0;
+			else if (trc != -ENOPARAM)
+				return trc;
+		},
+		fc, param);
+
 	return rc;
 }
 
@@ -1578,15 +1615,17 @@  int security_sb_set_mnt_opts(struct super_block *sb,
 			     unsigned long kern_flags,
 			     unsigned long *set_kern_flags)
 {
-	struct lsm_static_call *scall;
 	int rc = mnt_opts ? -EOPNOTSUPP : LSM_RET_DEFAULT(sb_set_mnt_opts);
 
-	lsm_for_each_hook(scall, sb_set_mnt_opts) {
-		rc = scall->hl->hook.sb_set_mnt_opts(sb, mnt_opts, kern_flags,
-					      set_kern_flags);
-		if (rc != LSM_RET_DEFAULT(sb_set_mnt_opts))
-			break;
-	}
+	lsm_for_each_hook(
+		sb_set_mnt_opts, rc,
+		{
+			if (rc != LSM_RET_DEFAULT(sb_set_mnt_opts))
+				goto out;
+		},
+		sb, mnt_opts, kern_flags, set_kern_flags);
+
+out:
 	return rc;
 }
 EXPORT_SYMBOL(security_sb_set_mnt_opts);
@@ -1777,7 +1816,6 @@  int security_inode_init_security(struct inode *inode, struct inode *dir,
 				 const struct qstr *qstr,
 				 const initxattrs initxattrs, void *fs_data)
 {
-	struct lsm_static_call *scall;
 	struct xattr *new_xattrs = NULL;
 	int ret = -EOPNOTSUPP, xattr_count = 0;
 
@@ -1795,18 +1833,19 @@  int security_inode_init_security(struct inode *inode, struct inode *dir,
 			return -ENOMEM;
 	}
 
-	lsm_for_each_hook(scall, inode_init_security) {
-		ret = scall->hl->hook.inode_init_security(inode, dir, qstr, new_xattrs,
-						  &xattr_count);
-		if (ret && ret != -EOPNOTSUPP)
-			goto out;
+	lsm_for_each_hook(
+		inode_init_security, ret,
+		{
 		/*
 		 * As documented in lsm_hooks.h, -EOPNOTSUPP in this context
 		 * means that the LSM is not willing to provide an xattr, not
 		 * that it wants to signal an error. Thus, continue to invoke
 		 * the remaining LSMs.
 		 */
-	}
+			if (ret && ret != -EOPNOTSUPP)
+				goto out;
+		},
+		inode, dir, qstr, new_xattrs, &xattr_count);
 
 	/* If initxattrs() is NULL, xattr_count is zero, skip the call. */
 	if (!xattr_count)
@@ -3601,16 +3640,19 @@  int security_task_prctl(int option, unsigned long arg2, unsigned long arg3,
 {
 	int thisrc;
 	int rc = LSM_RET_DEFAULT(task_prctl);
-	struct lsm_static_call *scall;
-
-	lsm_for_each_hook(scall, task_prctl) {
-		thisrc = scall->hl->hook.task_prctl(option, arg2, arg3, arg4, arg5);
-		if (thisrc != LSM_RET_DEFAULT(task_prctl)) {
-			rc = thisrc;
-			if (thisrc != 0)
-				break;
-		}
-	}
+
+	lsm_for_each_hook(
+		task_prctl, thisrc,
+		{
+			if (thisrc != LSM_RET_DEFAULT(task_prctl)) {
+				rc = thisrc;
+				if (thisrc != 0)
+					goto out;
+			}
+		},
+		option, arg2, arg3, arg4, arg5);
+
+out:
 	return rc;
 }
 
@@ -4010,7 +4052,6 @@  EXPORT_SYMBOL(security_d_instantiate);
 int security_getselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
 			 u32 __user *size, u32 flags)
 {
-	struct lsm_static_call *scall;
 	struct lsm_ctx lctx = { .id = LSM_ID_UNDEF, };
 	u8 __user *base = (u8 __user *)uctx;
 	u32 entrysize;
@@ -4048,31 +4089,40 @@  int security_getselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
 	 * In the usual case gather all the data from the LSMs.
 	 * In the single case only get the data from the LSM specified.
 	 */
-	lsm_for_each_hook(scall, getselfattr) {
-		if (single && lctx.id != scall->hl->lsmid->id)
-			continue;
-		entrysize = left;
-		if (base)
-			uctx = (struct lsm_ctx __user *)(base + total);
-		rc = scall->hl->hook.getselfattr(attr, uctx, &entrysize, flags);
-		if (rc == -EOPNOTSUPP) {
-			rc = 0;
-			continue;
-		}
-		if (rc == -E2BIG) {
-			rc = 0;
-			left = 0;
-			toobig = true;
-		} else if (rc < 0)
-			return rc;
-		else
-			left -= entrysize;
+	LSM_LOOP_UNROLL(
+		__CALL_HOOK, getselfattr, rc,
+		/* BODY_BEFORE */
+		{
+			if (single && lctx.id != current_lsmid())
+				continue;
+			entrysize = left;
+			if (base)
+				uctx = (struct lsm_ctx __user *)(base + total);
+		},
+		/* BODY_AFTER */
+		{
+			if (rc == -EOPNOTSUPP) {
+				rc = 0;
+			} else {
+				if (rc == -E2BIG) {
+					rc = 0;
+					left = 0;
+					toobig = true;
+				} else if (rc < 0)
+					return rc;
+				else
+					left -= entrysize;
+
+				total += entrysize;
+				count += rc;
+				if (single)
+					goto out;
+			}
+		},
+		attr, uctx, &entrysize, flags);
+
+out:
 
-		total += entrysize;
-		count += rc;
-		if (single)
-			break;
-	}
 	if (put_user(total, size))
 		return -EFAULT;
 	if (toobig)
@@ -4103,9 +4153,8 @@  int security_getselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
 int security_setselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
 			 u32 size, u32 flags)
 {
-	struct lsm_static_call *scall;
 	struct lsm_ctx *lctx;
-	int rc = LSM_RET_DEFAULT(setselfattr);
+	int rc;
 	u64 required_len;
 
 	if (flags)
@@ -4126,11 +4175,7 @@  int security_setselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
 		goto free_out;
 	}
 
-	lsm_for_each_hook(scall, setselfattr)
-		if ((scall->hl->lsmid->id) == lctx->id) {
-			rc = scall->hl->hook.setselfattr(attr, lctx, size, flags);
-			break;
-		}
+	rc = call_hook_with_lsmid(setselfattr, lctx->id, attr, lctx, size, flags);
 
 free_out:
 	kfree(lctx);
@@ -4151,14 +4196,7 @@  int security_setselfattr(unsigned int attr, struct lsm_ctx __user *uctx,
 int security_getprocattr(struct task_struct *p, int lsmid, const char *name,
 			 char **value)
 {
-	struct lsm_static_call *scall;
-
-	lsm_for_each_hook(scall, getprocattr) {
-		if (lsmid != 0 && lsmid != scall->hl->lsmid->id)
-			continue;
-		return scall->hl->hook.getprocattr(p, name, value);
-	}
-	return LSM_RET_DEFAULT(getprocattr);
+	return call_hook_with_lsmid(getprocattr, lsmid, p, name, value);
 }
 
 /**
@@ -4175,14 +4213,7 @@  int security_getprocattr(struct task_struct *p, int lsmid, const char *name,
  */
 int security_setprocattr(int lsmid, const char *name, void *value, size_t size)
 {
-	struct lsm_static_call *scall;
-
-	lsm_for_each_hook(scall, setprocattr) {
-		if (lsmid != 0 && lsmid != scall->hl->lsmid->id)
-			continue;
-		return scall->hl->hook.setprocattr(name, value, size);
-	}
-	return LSM_RET_DEFAULT(setprocattr);
+	return call_hook_with_lsmid(setprocattr, lsmid, name, value, size);
 }
 
 /**
@@ -5267,23 +5298,13 @@  int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
 				       struct xfrm_policy *xp,
 				       const struct flowi_common *flic)
 {
-	struct lsm_static_call *scall;
-	int rc = LSM_RET_DEFAULT(xfrm_state_pol_flow_match);
-
 	/*
 	 * Since this function is expected to return 0 or 1, the judgment
 	 * becomes difficult if multiple LSMs supply this call. Fortunately,
 	 * we can use the first LSM's judgment because currently only SELinux
 	 * supplies this call.
-	 *
-	 * For speed optimization, we explicitly break the loop rather than
-	 * using the macro
 	 */
-	lsm_for_each_hook(scall, xfrm_state_pol_flow_match) {
-		rc = scall->hl->hook.xfrm_state_pol_flow_match(x, xp, flic);
-		break;
-	}
-	return rc;
+	return call_int_hook(xfrm_state_pol_flow_match, x, xp, flic);
 }
 
 /**