Message ID | 20190404003249.14356-23-matthewgarrett@google.com (mailing list archive)
---|---
State | New, archived
Series | Lockdown patches for 5.2
+bpf list

On Wed, Apr 3, 2019 at 8:34 PM Matthew Garrett <matthewgarrett@google.com> wrote:
> There are some bpf functions can be used to read kernel memory:
> bpf_probe_read, bpf_probe_write_user and bpf_trace_printk. These allow
> private keys in kernel memory (e.g. the hibernation image signing key) to
> be read by an eBPF program and kernel memory to be altered without
> restriction. Disable them if the kernel has been locked down in
> confidentiality mode.
>
> Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
> Signed-off-by: David Howells <dhowells@redhat.com>
> Signed-off-by: Matthew Garrett <mjg59@google.com>
> cc: netdev@vger.kernel.org
> cc: Chun-Yi Lee <jlee@suse.com>
> cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> ---
>  kernel/trace/bpf_trace.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 8b068adb9da1..9e8eda605b5e 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -137,6 +137,9 @@ BPF_CALL_3(bpf_probe_read, void *, dst, u32, size, const void *, unsafe_ptr)
>  {
>  	int ret;
>
> +	if (kernel_is_locked_down("BPF", LOCKDOWN_CONFIDENTIALITY))
> +		return -EINVAL;
> +
>  	ret = probe_kernel_read(dst, unsafe_ptr, size);
>  	if (unlikely(ret < 0))
>  		memset(dst, 0, size);

This looks wrong. bpf_probe_read_proto is declared with an
ARG_PTR_TO_UNINIT_MEM argument, so if you don't do a
"memset(dst, 0, size);" like in the probe_kernel_read() error path, the
BPF program can read uninitialized memory.
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 8b068adb9da1..9e8eda605b5e 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -137,6 +137,9 @@ BPF_CALL_3(bpf_probe_read, void *, dst, u32, size, const void *, unsafe_ptr)
 {
 	int ret;
 
+	if (kernel_is_locked_down("BPF", LOCKDOWN_CONFIDENTIALITY))
+		return -EINVAL;
+
 	ret = probe_kernel_read(dst, unsafe_ptr, size);
 	if (unlikely(ret < 0))
 		memset(dst, 0, size);
@@ -156,6 +159,8 @@ static const struct bpf_func_proto bpf_probe_read_proto = {
 BPF_CALL_3(bpf_probe_write_user, void *, unsafe_ptr, const void *, src,
 	   u32, size)
 {
+	if (kernel_is_locked_down("BPF", LOCKDOWN_CONFIDENTIALITY))
+		return -EINVAL;
 	/*
 	 * Ensure we're in user context which is safe for the helper to
 	 * run. This helper has no business in a kthread.
@@ -207,6 +212,9 @@ BPF_CALL_5(bpf_trace_printk, char *, fmt, u32, fmt_size, u64, arg1,
 	char buf[64];
 	int i;
 
+	if (kernel_is_locked_down("BPF", LOCKDOWN_CONFIDENTIALITY))
+		return -EINVAL;
+
 	/*
 	 * bpf_check()->check_func_arg()->check_stack_boundary()
 	 * guarantees that fmt points to bpf program stack,
@@ -535,6 +543,9 @@ BPF_CALL_3(bpf_probe_read_str, void *, dst, u32, size,
 {
 	int ret;
 
+	if (kernel_is_locked_down("BPF", LOCKDOWN_CONFIDENTIALITY))
+		return -EINVAL;
+
 	/*
 	 * The strncpy_from_unsafe() call will likely not fill the entire
 	 * buffer, but that's okay in this circumstance as we're probing