[bpf-next,v2,7/8] bpf: Add tests for task_local_storage

Message ID 20201103153132.2717326-8-kpsingh@chromium.org (mailing list archive)
State Superseded
Delegated to: BPF
Series: Implement task_local_storage

Commit Message

KP Singh Nov. 3, 2020, 3:31 p.m. UTC
From: KP Singh <kpsingh@google.com>

The test exercises the syscall-based map operations by creating a pidfd
for the current process and using it as the key for the task storage map.
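
In outline (condensed from check_syscall_operations() in the patch
below, where map_fd refers to the task storage map), the userspace side
looks like this:

	int task_fd = sys_pidfd_open(getpid(), 0);
	struct storage val = { .value = TEST_STORAGE_VALUE };

	/* Create an element keyed by the pidfd, taking the spin lock */
	bpf_map_update_elem(map_fd, &task_fd, &val, BPF_NOEXIST | BPF_F_LOCK);

	/* Read the value back under the lock */
	bpf_map_lookup_elem_flags(map_fd, &task_fd, &val, BPF_F_LOCK);

	/* Delete the element; a later lookup then fails with ENOENT */
	bpf_map_delete_elem(map_fd, &task_fd);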

To verify kernel / LSM functionality, the test implements a simple MAC
policy which denies an executable the ability to unlink itself. The LSM
program attached to bprm_committed_creds stores a pointer to the inode
of the executed binary in the task local storage. The inode_unlink LSM
hook then uses this pointer to detect whether the task is trying to
unlink its own executable.
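
In outline, the policy side (condensed from progs/local_storage.c in
the patch below, with error handling elided) is:

	SEC("lsm/bprm_committed_creds")
	void BPF_PROG(exec, struct linux_binprm *bprm)
	{
		struct local_storage *storage;

		/* Remember the inode of the binary being executed */
		storage = bpf_task_storage_get(&task_storage_map,
					       bpf_get_current_task_btf(), 0,
					       BPF_LOCAL_STORAGE_GET_F_CREATE);
		if (storage)
			storage->exec_inode = bprm->file->f_inode;
	}

	SEC("lsm/inode_unlink")
	int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim)
	{
		struct local_storage *storage;

		storage = bpf_task_storage_get(&task_storage_map,
					       bpf_get_current_task_btf(), 0, 0);

		/* Don't let an executable delete itself */
		if (storage && storage->exec_inode == victim->d_inode)
			return -EPERM;

		return 0;
	}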

The test copies /bin/rm to /tmp and executes it in a child process with
the intention of deleting itself. A successful test should prevent the
running executable from deleting itself.

The temporary file is cleaned up later in the test.

Signed-off-by: KP Singh <kpsingh@google.com>
---
 .../bpf/prog_tests/test_local_storage.c       | 167 ++++++++++++++++--
 .../selftests/bpf/progs/local_storage.c       |  45 ++++-
 2 files changed, 194 insertions(+), 18 deletions(-)

Comments

Alexei Starovoitov Nov. 3, 2020, 6:47 p.m. UTC | #1
On Tue, Nov 03, 2020 at 04:31:31PM +0100, KP Singh wrote:
> +
> +struct storage {
> +	void *inode;
> +	unsigned int value;
> +	/* Lock ensures that spin locked versions of local storage operations
> +	 * also work; most operations in this test are still single threaded
> +	 */
> +	struct bpf_spin_lock lock;
> +};

I think it's a good idea to test spin_lock in local_storage,
but it seems the test is not doing it fully.
It's only adding it to the storage, but the program is not accessing it.
KP Singh Nov. 3, 2020, 6:59 p.m. UTC | #2
On Tue, Nov 3, 2020 at 7:47 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Nov 03, 2020 at 04:31:31PM +0100, KP Singh wrote:
> > +
> > +struct storage {
> > +     void *inode;
> > +     unsigned int value;
> > +     /* Lock ensures that spin locked versions of local storage operations
> > +      * also work; most operations in this test are still single threaded
> > +      */
> > +     struct bpf_spin_lock lock;
> > +};
>
> I think it's a good idea to test spin_lock in local_storage,
> but it seems the test is not doing it fully.
> It's only adding it to the storage, but the program is not accessing it.

I added it here just to check that the offset calculations
(map->spin_lock_off) happen correctly for these new maps.

As mentioned in the updates, I do intend to generalize
tools/testing/selftests/bpf/map_tests/sk_storage_map.c which already has
the threading logic to exercise bpf_spin_lock in storage maps.

Hope this is an okay plan?
KP Singh Nov. 4, 2020, 12:05 a.m. UTC | #3
On Tue, Nov 3, 2020 at 7:59 PM KP Singh <kpsingh@chromium.org> wrote:
>
> On Tue, Nov 3, 2020 at 7:47 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Tue, Nov 03, 2020 at 04:31:31PM +0100, KP Singh wrote:
> > > +
> > > +struct storage {
> > > +     void *inode;
> > > +     unsigned int value;
> > > +     /* Lock ensures that spin locked versions of local storage operations
> > > +      * also work; most operations in this test are still single threaded
> > > +      */
> > > +     struct bpf_spin_lock lock;
> > > +};
> >
> > I think it's a good idea to test spin_lock in local_storage,
> > but it seems the test is not doing it fully.
> > It's only adding it to the storage, but the program is not accessing it.
>
> I added it here just to check that the offset calculations
> (map->spin_lock_off) happen correctly for these new maps.
>
> As mentioned in the updates, I do intend to generalize
> tools/testing/selftests/bpf/map_tests/sk_storage_map.c which already has
> the threading logic to exercise bpf_spin_lock in storage maps.
>

Actually, after I added simple bpf_spin_{lock, unlock} calls to the test
programs, I realized that we have not exposed spin locks to LSM programs
yet, because they inherit the tracing helpers.

I saw the docs mention that these are not exposed to tracing programs due to
insufficient preemption checks. Do you think it would be okay to allow them
for LSM programs?


- KP

> Hope this is an okay plan?
Alexei Starovoitov Nov. 4, 2020, 1:27 a.m. UTC | #4
On Tue, Nov 3, 2020 at 4:05 PM KP Singh <kpsingh@chromium.org> wrote:
>
> On Tue, Nov 3, 2020 at 7:59 PM KP Singh <kpsingh@chromium.org> wrote:
> >
> > On Tue, Nov 3, 2020 at 7:47 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Tue, Nov 03, 2020 at 04:31:31PM +0100, KP Singh wrote:
> > > > +
> > > > +struct storage {
> > > > +     void *inode;
> > > > +     unsigned int value;
> > > > +     /* Lock ensures that spin locked versions of local storage operations
> > > > +      * also work; most operations in this test are still single threaded
> > > > +      */
> > > > +     struct bpf_spin_lock lock;
> > > > +};
> > >
> > > I think it's a good idea to test spin_lock in local_storage,
> > > but it seems the test is not doing it fully.
> > > It's only adding it to the storage, but the program is not accessing it.
> >
> > I added it here just to check that the offset calculations
> > (map->spin_lock_off) happen correctly for these new maps.
> >
> > As mentioned in the updates, I do intend to generalize
> > tools/testing/selftests/bpf/map_tests/sk_storage_map.c which already has
> > the threading logic to exercise bpf_spin_lock in storage maps.
> >
>
> Actually, after I added simple bpf_spin_{lock, unlock} calls to the test
> programs, I realized that we have not exposed spin locks to LSM programs
> yet, because they inherit the tracing helpers.
>
> I saw the docs mention that these are not exposed to tracing programs due to
> insufficient preemption checks. Do you think it would be okay to allow them
> for LSM programs?

hmm. Isn't it allowed already?
The verifier does:
        if ((is_tracing_prog_type(prog_type) ||
             prog_type == BPF_PROG_TYPE_SOCKET_FILTER) &&
            map_value_has_spin_lock(map)) {
                verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
                return -EINVAL;
        }

BPF_PROG_TYPE_LSM is not in this list.
KP Singh Nov. 4, 2020, 1:55 a.m. UTC | #5
[...]

> >
> > I saw the docs mention that these are not exposed to tracing programs due to
> > insufficient preemption checks. Do you think it would be okay to allow them
> > for LSM programs?
>
> hmm. Isn't it allowed already?
> The verifier does:
>         if ((is_tracing_prog_type(prog_type) ||
>              prog_type == BPF_PROG_TYPE_SOCKET_FILTER) &&
>             map_value_has_spin_lock(map)) {
>                 verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
>                 return -EINVAL;
>         }
>
> BPF_PROG_TYPE_LSM is not in this list.

The verifier does not have any problem, it's just that the helpers are not
exposed to LSM programs via bpf_lsm_func_proto.

So all we need is:

diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 61f8cc52fd5b..93383df2140b 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -63,6 +63,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
                return &bpf_task_storage_get_proto;
        case BPF_FUNC_task_storage_delete:
                return &bpf_task_storage_delete_proto;
+       case BPF_FUNC_spin_lock:
+               return &bpf_spin_lock_proto;
+       case BPF_FUNC_spin_unlock:
+               return &bpf_spin_unlock_proto;
        default:
                return tracing_prog_func_proto(func_id, prog);
        }
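
With the helpers exposed, a program can then serialize its updates
against the syscall-side BPF_F_LOCK operations. A minimal sketch, not
part of this patch, based on the file_open program in the selftest:

	SEC("lsm/file_open")
	int BPF_PROG(file_open, struct file *file)
	{
		struct local_storage *storage;

		storage = bpf_inode_storage_get(&inode_storage_map,
						file->f_inode, 0,
						BPF_LOCAL_STORAGE_GET_F_CREATE);
		if (!storage)
			return 0;

		/* No helper calls and no function exit are allowed while
		 * the lock is held; the verifier enforces this.
		 */
		bpf_spin_lock(&storage->lock);
		storage->value = DUMMY_STORAGE_VALUE;
		bpf_spin_unlock(&storage->lock);
		return 0;
	}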
Alexei Starovoitov Nov. 4, 2020, 1:57 a.m. UTC | #6
On Tue, Nov 3, 2020 at 5:55 PM KP Singh <kpsingh@chromium.org> wrote:
>
> [...]
>
> > >
> > > I saw the docs mention that these are not exposed to tracing programs due to
> > > insufficient preemption checks. Do you think it would be okay to allow them
> > > for LSM programs?
> >
> > hmm. Isn't it allowed already?
> > The verifier does:
> >         if ((is_tracing_prog_type(prog_type) ||
> >              prog_type == BPF_PROG_TYPE_SOCKET_FILTER) &&
> >             map_value_has_spin_lock(map)) {
> >                 verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
> >                 return -EINVAL;
> >         }
> >
> > BPF_PROG_TYPE_LSM is not in this list.
>
> The verifier does not have any problem, it's just that the helpers are not
> exposed to LSM programs via bpf_lsm_func_proto.
>
> So all we need is:
>
> diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> index 61f8cc52fd5b..93383df2140b 100644
> --- a/kernel/bpf/bpf_lsm.c
> +++ b/kernel/bpf/bpf_lsm.c
> @@ -63,6 +63,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>                 return &bpf_task_storage_get_proto;
>         case BPF_FUNC_task_storage_delete:
>                 return &bpf_task_storage_delete_proto;
> +       case BPF_FUNC_spin_lock:
> +               return &bpf_spin_lock_proto;
> +       case BPF_FUNC_spin_unlock:
> +               return &bpf_spin_unlock_proto;

Ahh. Yes. That should do it. Right now I don't see concerns with safety
of the bpf_spin_lock in bpf_lsm progs.
John Fastabend Nov. 4, 2020, 6:51 a.m. UTC | #7
Alexei Starovoitov wrote:
> On Tue, Nov 3, 2020 at 5:55 PM KP Singh <kpsingh@chromium.org> wrote:
> >
> > [...]
> >
> > > >
> > > > I saw the docs mention that these are not exposed to tracing programs due to
> > > > insufficient preemption checks. Do you think it would be okay to allow them
> > > > for LSM programs?
> > >
> > > hmm. Isn't it allowed already?
> > > The verifier does:
> > >         if ((is_tracing_prog_type(prog_type) ||
> > >              prog_type == BPF_PROG_TYPE_SOCKET_FILTER) &&
> > >             map_value_has_spin_lock(map)) {
> > >                 verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
> > >                 return -EINVAL;
> > >         }
> > >
> > > BPF_PROG_TYPE_LSM is not in this list.
> >
> > The verifier does not have any problem, it's just that the helpers are not
> > exposed to LSM programs via bpf_lsm_func_proto.
> >
> > So all we need is:
> >
> > diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> > index 61f8cc52fd5b..93383df2140b 100644
> > --- a/kernel/bpf/bpf_lsm.c
> > +++ b/kernel/bpf/bpf_lsm.c
> > @@ -63,6 +63,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >                 return &bpf_task_storage_get_proto;
> >         case BPF_FUNC_task_storage_delete:
> >                 return &bpf_task_storage_delete_proto;
> > +       case BPF_FUNC_spin_lock:
> > +               return &bpf_spin_lock_proto;
> > +       case BPF_FUNC_spin_unlock:
> > +               return &bpf_spin_unlock_proto;
> 
> Ahh. Yes. That should do it. Right now I don't see concerns with safety
> of the bpf_spin_lock in bpf_lsm progs.

What about sleepable lsm hooks? Normally we wouldn't expect to sleep with
a spinlock held. Should we have a check to ensure programs using
bpf_spin_lock are not also sleepable?
KP Singh Nov. 4, 2020, 11:03 a.m. UTC | #8
[...]

> > Ahh. Yes. That should do it. Right now I don't see concerns with safety
> > of the bpf_spin_lock in bpf_lsm progs.
>
> What about sleepable lsm hooks? Normally we wouldn't expect to sleep with
> a spinlock held. Should we have a check to ensure programs using
> bpf_spin_lock are not also sleepable?

Thanks. Yes, I added that to my patch:

diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 61f8cc52fd5b..93383df2140b 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -63,6 +63,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
                return &bpf_task_storage_get_proto;
        case BPF_FUNC_task_storage_delete:
                return &bpf_task_storage_delete_proto;
+       case BPF_FUNC_spin_lock:
+               return &bpf_spin_lock_proto;
+       case BPF_FUNC_spin_unlock:
+               return &bpf_spin_unlock_proto;
        default:
                return tracing_prog_func_proto(func_id, prog);
        }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 314018e8fc12..8892f7ba2041 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9739,6 +9739,23 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
                return -EINVAL;
        }

+       if (map_value_has_spin_lock(map)) {
+               if (prog_type == BPF_PROG_TYPE_SOCKET_FILTER) {
+                       verbose(env, "socket filter progs cannot use bpf_spin_lock yet\n");
+                       return -EINVAL;
+               }
+
+               if (is_tracing_prog_type(prog_type)) {
+                       verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
+                       return -EINVAL;
+               }
+
+               if (prog->aux->sleepable) {
+                       verbose(env, "sleepable progs cannot use bpf_spin_lock\n");
+                       return -EINVAL;
+               }
+       }
+
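
With the sleepable check in place, loading a sleepable program that
references a map whose value embeds a struct bpf_spin_lock should fail
with "sleepable progs cannot use bpf_spin_lock". A hypothetical sketch
(the map and program names here are made up):

	struct locked_value {
		int counter;
		struct bpf_spin_lock lock;
	};

	struct {
		__uint(type, BPF_MAP_TYPE_HASH);
		__uint(max_entries, 1);
		__type(key, int);
		__type(value, struct locked_value);
	} locked_map SEC(".maps");

	/* Rejected at load time by the check above */
	SEC("lsm.s/file_open")
	int BPF_PROG(sleepable_open, struct file *file)
	{
		int key = 0;

		bpf_map_lookup_elem(&locked_map, &key);
		return 0;
	}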
KP Singh Nov. 4, 2020, 11:11 a.m. UTC | #9
On Wed, Nov 4, 2020 at 12:03 PM KP Singh <kpsingh@chromium.org> wrote:
>
> [...]
>
> > > Ahh. Yes. That should do it. Right now I don't see concerns with safety
> > > of the bpf_spin_lock in bpf_lsm progs.
> >
> > What about sleepable lsm hooks? Normally we wouldn't expect to sleep with
> > a spinlock held. Should we have a check to ensure programs using
> > bpf_spin_lock are not also sleepable?
>
> Thanks. Yes, I added that to my patch:
>
> diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> index 61f8cc52fd5b..93383df2140b 100644
> --- a/kernel/bpf/bpf_lsm.c
> +++ b/kernel/bpf/bpf_lsm.c
> @@ -63,6 +63,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>                 return &bpf_task_storage_get_proto;
>         case BPF_FUNC_task_storage_delete:
>                 return &bpf_task_storage_delete_proto;
> +       case BPF_FUNC_spin_lock:
> +               return &bpf_spin_lock_proto;
> +       case BPF_FUNC_spin_unlock:
> +               return &bpf_spin_unlock_proto;
>         default:
>                 return tracing_prog_func_proto(func_id, prog);
>         }
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 314018e8fc12..8892f7ba2041 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -9739,6 +9739,23 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
>                 return -EINVAL;
>         }
>
> +       if (map_value_has_spin_lock(map)) {
> +               if (prog_type == BPF_PROG_TYPE_SOCKET_FILTER) {
> +                       verbose(env, "socket filter progs cannot use bpf_spin_lock yet\n");
> +                       return -EINVAL;
> +               }
> +
> +               if (is_tracing_prog_type(prog_type)) {
> +                       verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
> +                       return -EINVAL;
> +               }
> +
> +               if (prog->aux->sleepable) {
> +                       verbose(env, "sleepable progs cannot use bpf_spin_lock\n");

I think this can still be "yet" as it's doable; we can disable/enable
preemption in the helpers and then have the verifier track that no
sleepable helper is called when a spin lock is held. I would, however,
prefer to do it in a subsequent patch.

- KP

> +                       return -EINVAL;
> +               }
> +       }
> +

Patch

diff --git a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
index 91cd6f357246..feba23f8848b 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
@@ -4,30 +4,149 @@ 
  * Copyright (C) 2020 Google LLC.
  */
 
+#define _GNU_SOURCE
+
+#include <asm-generic/errno-base.h>
+#include <unistd.h>
+#include <sys/stat.h>
 #include <test_progs.h>
 #include <linux/limits.h>
 
 #include "local_storage.skel.h"
 #include "network_helpers.h"
 
-int create_and_unlink_file(void)
+static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
+{
+	return syscall(__NR_pidfd_open, pid, flags);
+}
+
+unsigned int duration;
+
+#define TEST_STORAGE_VALUE 0xbeefdead
+
+struct storage {
+	void *inode;
+	unsigned int value;
+	/* Lock ensures that spin locked versions of local storage operations
+	 * also work; most operations in this test are still single threaded
+	 */
+	struct bpf_spin_lock lock;
+};
+
+/* Copies an rm binary to a temp file. dest is a mkstemp template */
+int copy_rm(char *dest)
 {
-	char fname[PATH_MAX] = "/tmp/fileXXXXXX";
-	int fd;
+	int ret, fd_in, fd_out;
+	struct stat stat;
 
-	fd = mkstemp(fname);
-	if (fd < 0)
-		return fd;
+	fd_in = open("/bin/rm", O_RDONLY);
+	if (fd_in < 0)
+		return fd_in;
 
-	close(fd);
-	unlink(fname);
+	fd_out = mkstemp(dest);
+	if (fd_out < 0)
+		return fd_out;
+
+	ret = fstat(fd_in, &stat);
+	if (ret == -1)
+		return errno;
+
+	ret = copy_file_range(fd_in, NULL, fd_out, NULL, stat.st_size, 0);
+	if (ret == -1)
+		return errno;
+
+	/* Set executable permission on the copied file */
+	ret = chmod(dest, 0100);
+	if (ret == -1)
+		return errno;
+
+	close(fd_in);
+	close(fd_out);
 	return 0;
 }
 
+/* Fork and exec the provided rm binary, store the pid of the forked
+ * process in monitored_pid and return its exit code.
+ */
+int run_self_unlink(int *monitored_pid, const char *rm_path)
+{
+	int child_pid, child_status, ret;
+	int null_fd;
+
+	child_pid = fork();
+	if (child_pid == 0) {
+		null_fd = open("/dev/null", O_WRONLY);
+		dup2(null_fd, STDOUT_FILENO);
+		dup2(null_fd, STDERR_FILENO);
+		close(null_fd);
+
+		*monitored_pid = getpid();
+		/* Use the copied rm binary to delete itself:
+		 * /tmp/copy_of_rm /tmp/copy_of_rm.
+		 */
+		ret = execlp(rm_path, rm_path, rm_path, NULL);
+		if (ret)
+			exit(errno);
+	} else if (child_pid > 0) {
+		waitpid(child_pid, &child_status, 0);
+		return WEXITSTATUS(child_status);
+	}
+
+	return -EINVAL;
+}
+
+bool check_syscall_operations(int map_fd, int obj_fd)
+{
+	struct storage val = { .value = TEST_STORAGE_VALUE, .lock = { 0 } },
+		       lookup_val = { .value = 0, .lock = { 0 } };
+	int err;
+
+	/* Looking up a non-existent element should fail */
+	err = bpf_map_lookup_elem_flags(map_fd, &obj_fd, &lookup_val,
+					BPF_F_LOCK);
+	if (CHECK(!err || errno != ENOENT, "bpf_map_lookup_elem",
+		  "err:%d errno:%d\n", err, errno))
+		return false;
+
+	/* Create a new element */
+	err = bpf_map_update_elem(map_fd, &obj_fd, &val,
+				  BPF_NOEXIST | BPF_F_LOCK);
+	if (CHECK(err < 0, "bpf_map_update_elem", "err:%d errno:%d\n", err,
+		  errno))
+		return false;
+
+	/* Lookup the newly created element */
+	err = bpf_map_lookup_elem_flags(map_fd, &obj_fd, &lookup_val,
+					BPF_F_LOCK);
+	if (CHECK(err < 0, "bpf_map_lookup_elem", "err:%d errno:%d", err,
+		  errno))
+		return false;
+
+	/* Check the value of the newly created element */
+	if (CHECK(lookup_val.value != val.value, "bpf_map_lookup_elem",
+		  "got value %x, expected %x\n", lookup_val.value, val.value))
+		return false;
+
+	err = bpf_map_delete_elem(map_fd, &obj_fd);
+	if (CHECK(err, "bpf_map_delete_elem()", "err:%d errno:%d\n", err,
+		  errno))
+		return false;
+
+	/* The lookup should fail, now that the element has been deleted */
+	err = bpf_map_lookup_elem_flags(map_fd, &obj_fd, &lookup_val,
+					BPF_F_LOCK);
+	if (CHECK(!err || errno != ENOENT, "bpf_map_lookup_elem",
+		  "err:%d errno:%d\n", err, errno))
+		return false;
+
+	return true;
+}
+
 void test_test_local_storage(void)
 {
+	char tmp_exec_path[PATH_MAX] = "/tmp/copy_of_rmXXXXXX";
+	int err, serv_sk = -1, task_fd = -1;
 	struct local_storage *skel = NULL;
-	int err, duration = 0, serv_sk = -1;
 
 	skel = local_storage__open_and_load();
 	if (CHECK(!skel, "skel_load", "lsm skeleton failed\n"))
@@ -37,10 +156,35 @@  void test_test_local_storage(void)
 	if (CHECK(err, "attach", "lsm attach failed: %d\n", err))
 		goto close_prog;
 
+	task_fd = sys_pidfd_open(getpid(), 0);
+	if (CHECK(task_fd < 0, "pidfd_open",
+		  "failed to get pidfd err:%d, errno:%d", task_fd, errno))
+		goto close_prog;
+
+	if (!check_syscall_operations(bpf_map__fd(skel->maps.task_storage_map),
+				      task_fd))
+		goto close_prog;
+
+	err = copy_rm(tmp_exec_path);
+	if (CHECK(err < 0, "copy_rm", "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	/* Sets skel->bss->monitored_pid to the pid of the forked child and
+	 * executes tmp_exec_path in the child, which then tries to unlink
+	 * its own executable. This operation should be denied by the loaded
+	 * LSM program.
+	 */
+	err = run_self_unlink(&skel->bss->monitored_pid, tmp_exec_path);
+	if (CHECK(err != EPERM, "run_self_unlink", "err %d want EPERM\n", err))
+		goto close_prog;
+
+	/* Set the process being monitored to be the current process */
 	skel->bss->monitored_pid = getpid();
 
-	err = create_and_unlink_file();
-	if (CHECK(err < 0, "exec_cmd", "err %d errno %d\n", err, errno))
+	/* Remove the temporarily created executable */
+	err = unlink(tmp_exec_path);
+	if (CHECK(err != 0, "unlink", "unable to unlink %s: %d", tmp_exec_path,
+		  errno))
 		goto close_prog;
 
 	CHECK(skel->data->inode_storage_result != 0, "inode_storage_result",
@@ -56,5 +200,6 @@  void test_test_local_storage(void)
 	close(serv_sk);
 
 close_prog:
+	close(task_fd);
 	local_storage__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/local_storage.c b/tools/testing/selftests/bpf/progs/local_storage.c
index ef3822bc7542..a4979982ce80 100644
--- a/tools/testing/selftests/bpf/progs/local_storage.c
+++ b/tools/testing/selftests/bpf/progs/local_storage.c
@@ -17,34 +17,50 @@  int monitored_pid = 0;
 int inode_storage_result = -1;
 int sk_storage_result = -1;
 
-struct dummy_storage {
+struct local_storage {
+	struct inode *exec_inode;
 	__u32 value;
+	struct bpf_spin_lock lock;
 };
 
 struct {
 	__uint(type, BPF_MAP_TYPE_INODE_STORAGE);
 	__uint(map_flags, BPF_F_NO_PREALLOC);
 	__type(key, int);
-	__type(value, struct dummy_storage);
+	__type(value, struct local_storage);
 } inode_storage_map SEC(".maps");
 
 struct {
 	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
 	__uint(map_flags, BPF_F_NO_PREALLOC | BPF_F_CLONE);
 	__type(key, int);
-	__type(value, struct dummy_storage);
+	__type(value, struct local_storage);
 } sk_storage_map SEC(".maps");
 
+struct {
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, struct local_storage);
+} task_storage_map SEC(".maps");
+
 SEC("lsm/inode_unlink")
 int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim)
 {
 	__u32 pid = bpf_get_current_pid_tgid() >> 32;
-	struct dummy_storage *storage;
+	struct local_storage *storage;
 	int err;
 
 	if (pid != monitored_pid)
 		return 0;
 
+	storage = bpf_task_storage_get(&task_storage_map,
+				       bpf_get_current_task_btf(), 0, 0);
+
+	/* Don't let an executable delete itself */
+	if (storage && storage->exec_inode == victim->d_inode)
+		return -EPERM;
+
 	storage = bpf_inode_storage_get(&inode_storage_map, victim->d_inode, 0,
 					BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (!storage)
@@ -65,7 +81,7 @@  int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address,
 	     int addrlen)
 {
 	__u32 pid = bpf_get_current_pid_tgid() >> 32;
-	struct dummy_storage *storage;
+	struct local_storage *storage;
 	int err;
 
 	if (pid != monitored_pid)
@@ -91,7 +107,7 @@  int BPF_PROG(socket_post_create, struct socket *sock, int family, int type,
 	     int protocol, int kern)
 {
 	__u32 pid = bpf_get_current_pid_tgid() >> 32;
-	struct dummy_storage *storage;
+	struct local_storage *storage;
 
 	if (pid != monitored_pid)
 		return 0;
@@ -110,7 +126,7 @@  SEC("lsm/file_open")
 int BPF_PROG(file_open, struct file *file)
 {
 	__u32 pid = bpf_get_current_pid_tgid() >> 32;
-	struct dummy_storage *storage;
+	struct local_storage *storage;
 
 	if (pid != monitored_pid)
 		return 0;
@@ -126,3 +142,18 @@  int BPF_PROG(file_open, struct file *file)
 	storage->value = DUMMY_STORAGE_VALUE;
 	return 0;
 }
+
+/* This uses the local storage to remember the inode of the binary that a
+ * process was originally executing.
+ */
+SEC("lsm/bprm_committed_creds")
+void BPF_PROG(exec, struct linux_binprm *bprm)
+{
+	struct local_storage *storage;
+
+	storage = bpf_task_storage_get(&task_storage_map,
+				       bpf_get_current_task_btf(), 0,
+				       BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (storage)
+		storage->exec_inode = bprm->file->f_inode;
+}