[v5,0/5] Reduce overhead of LSMs with static calls

Message ID 20230928202410.3765062-1-kpsingh@kernel.org

Message

KP Singh Sept. 28, 2023, 8:24 p.m. UTC
# Background

LSM hooks (callbacks) are currently invoked as indirect function calls. These
callbacks are registered into a linked list at boot time, because the order of
the LSMs can be configured on the kernel command line with the "lsm="
parameter.
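
For context, the dispatch for a given hook conceptually looks like the
simplified sketch below, modelled on the call_int_hook() pattern in
security/security.c (the function name here is made up for illustration):

  /*
   * Simplified sketch of the existing dispatch: each LSM's callback for a
   * hook sits on a linked list and is reached via a function pointer,
   * i.e. an indirect call that goes through a retpoline when those
   * mitigations are enabled.
   */
  static int lsm_file_permission_sketch(struct file *file, int mask)
  {
          struct security_hook_list *p;
          int rc = 0;

          hlist_for_each_entry(p, &security_hook_heads.file_permission, list) {
                  rc = p->hook.file_permission(file, mask); /* indirect call */
                  if (rc != 0)
                          break;
          }
          return rc;
  }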

Indirect function calls have a high overhead due to the retpoline mitigations
for various speculative execution attacks.

Retpolines remain relevant even on newer-generation CPUs: recently discovered
speculative attacks like Spectre BHB need retpolines to mitigate branch
history injection, and retpolines still need to be used in combination with
newer mitigation features like eIBRS.

This overhead is especially significant for the "bpf" LSM, which allows users
to implement LSM functionality with eBPF programs. To facilitate this, the
"bpf" LSM provides a default callback for all LSM hooks. When enabled, the
"bpf" LSM therefore incurs an unnecessary / avoidable indirect call for every
hook. This is especially bad in OS hot paths (e.g. in the networking stack).
This overhead prevents the adoption of the bpf LSM on performance-critical
systems and, in general, slows down all LSMs.
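
As an illustration, the BPF LSM's generated default hooks follow roughly the
pattern below (a simplified sketch, not the literal code):

  /*
   * The "bpf" LSM provides a default stub like this for every hook so that
   * BPF programs can later attach to it. Even with no program attached,
   * the LSM core still reaches the stub through an indirect (retpolined)
   * call.
   */
  noinline int bpf_lsm_file_permission(struct file *file, int mask)
  {
          return 0; /* default return value, overridden by an attached program */
  }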

Since the addresses of the enabled LSM callbacks are known at compile time and
only their order is determined at boot time, the LSM framework can allocate
static calls for each of the possible LSM callbacks, and these calls can be
updated once the order is determined at boot.
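
Roughly, the idea looks like the conceptual sketch below (not the exact code
in this series; the slot/key names are illustrative, and the callback passed
to static_call_update() stands for whatever the first enabled LSM registered
for the hook):

  /* Illustrative: one static call slot per hook per possible LSM. */
  typedef int (lsm_file_permission_t)(struct file *file, int mask);

  DEFINE_STATIC_CALL_NULL(lsm_file_permission_0, lsm_file_permission_t);
  DEFINE_STATIC_KEY_FALSE(lsm_file_permission_0_enabled);

  /* Boot time, once the "lsm=" order has been resolved. */
  static void __init lsm_init_slot_sketch(void)
  {
          static_call_update(lsm_file_permission_0, selinux_file_permission);
          static_branch_enable(&lsm_file_permission_0_enabled);
  }

  /* Hot path: a direct, text-patched call instead of a retpolined one. */
  static int lsm_file_permission_fast_sketch(struct file *file, int mask)
  {
          int rc = 0;

          if (static_branch_likely(&lsm_file_permission_0_enabled))
                  rc = static_call(lsm_file_permission_0)(file, mask);
          return rc;
  }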

This series is a respin of the RFC proposed by Paul Renauld (renauld@google.com)
and Brendan Jackman (jackmanb@google.com) [1].

# Performance improvement

With this patch set, syscalls with many LSM hooks in their path improved by
~3% on average, with I/O- and pipe-based system calls benefitting the most.

Here are the results of the relevant Unixbench system benchmarks with the BPF
LSM and SELinux enabled with their default policies, with and without these
patches.

Benchmark                                               Delta(%): (+ is better)
===============================================================================
Execl Throughput                                             +1.9356
File Write 1024 bufsize 2000 maxblocks                       +6.5953
Pipe Throughput                                              +9.5499
Pipe-based Context Switching                                 +3.0209
Process Creation                                             +2.3246
Shell Scripts (1 concurrent)                                 +1.4975
System Call Overhead                                         +2.7815
System Benchmarks Index Score (Partial Only):                +3.4859

In the best case, some syscalls like eventfd_create benefitted by roughly 10%.
The full analysis can be viewed at https://kpsingh.ch/lsm-perf

[1] https://lore.kernel.org/linux-security-module/20200820164753.3256899-1-jackmanb@chromium.org/


# BPF LSM Side effects

Patch 4 of the series also addresses the side effects of the default return
values of the BPF LSM callbacks and removes the overhead associated with them,
making the BPF LSM deployable at hyperscale.
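
The rough shape of the change is sketched below; the helper name is
hypothetical (not necessarily what the patch adds), but the idea is that the
static branch guarding a BPF LSM hook is only flipped on when a program is
attached to that hook, and flipped off again on detach:

  /*
   * Hypothetical per-hook toggle, invoked from the BPF trampoline
   * attach/detach paths. While the key is off, the default stub is never
   * reached, so its default return value can have no side effects and the
   * hook costs nothing.
   */
  static void bpf_lsm_toggle_hook_sketch(struct static_key_false *key,
                                         bool attach)
  {
          if (attach)
                  static_branch_enable(key);
          else
                  static_branch_disable(key);
  }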

# v4 -> v5

* Rebase to linux-next/master
* Fixed the case where MAX_LSM_COUNT comes out to zero when only CONFIG_SECURITY
  is compiled in without any other LSM enabled, as reported here:

  https://lore.kernel.org/bpf/202309271206.d7fb60f9-oliver.sang@intel.com

# v3 -> v4

* Refactor LSM count macros to use COUNT_ARGS
* Change CONFIG_SECURITY_HOOK_LIKELY's default value to be based on the LSMs
  enabled and have it depend on CONFIG_EXPERT. There are a lot of subtle
  options behind CONFIG_EXPERT, and this should hopefully alleviate concerns
  about yet another knob.
* Add __randomize_layout to struct lsm_static_call and, in addition to the
  cover letter, add performance numbers to the 3rd patch, along with some
  minor commit message updates.
* Rebase to linux-next.

# v2 -> v3

* Fixed a build issue on architectures that don't have static calls but
  enable CONFIG_SECURITY.
* Updated the LSM_COUNT macros based on Andrii's suggestions.
* Changed the security_ prefix to the lsm_ prefix based on Casey's suggestion.
* Inlined static_branch_maybe into lsm_for_each_hook on Kees' feedback.

# v1 -> v2 (based on linux-next, next-20230614)

* Incorporated suggestions from Kees
* Changed the way the maximum number of LSMs is counted, from a binary-based generator to a clever header.
* Add CONFIG_SECURITY_HOOK_LIKELY to configure the likelihood of LSM hooks
  being enabled.


KP Singh (5):
  kernel: Add helper macros for loop unrolling
  security: Count the LSMs enabled at compile time
  security: Replace indirect LSM hook calls with static calls
  bpf: Only enable BPF LSM hooks when an LSM program is attached
  security: Add CONFIG_SECURITY_HOOK_LIKELY

 include/linux/bpf.h       |   1 +
 include/linux/bpf_lsm.h   |   5 +
 include/linux/lsm_count.h | 114 ++++++++++++++++++++
 include/linux/lsm_hooks.h |  81 +++++++++++++--
 include/linux/unroll.h    |  36 +++++++
 kernel/bpf/trampoline.c   |  29 +++++-
 security/Kconfig          |  11 ++
 security/bpf/hooks.c      |  25 ++++-
 security/security.c       | 213 +++++++++++++++++++++++++-------------
 9 files changed, 432 insertions(+), 83 deletions(-)
 create mode 100644 include/linux/lsm_count.h
 create mode 100644 include/linux/unroll.h

Comments

Kees Cook Sept. 29, 2023, 12:41 a.m. UTC | #1
On Thu, Sep 28, 2023 at 10:24:05PM +0200, KP Singh wrote:
> # Performance improvement
> 
> With this patch-set some syscalls with lots of LSM hooks in their path
> benefitted at an average of ~3% and I/O and Pipe based system calls benefitting
> the most.

Paul, FWIW, I think this series is ready to land in -next. I'd like it
to get some bake time there just to see if anything unexpected shows up.
It's quite happy in all my local testing, though.

-Kees
Paolo Abeni Oct. 2, 2023, 11:06 a.m. UTC | #2
On Thu, 2023-09-28 at 22:24 +0200, KP Singh wrote:
> # Background
> 
> LSM hooks (callbacks) are currently invoked as indirect function calls. These
> callbacks are registered into a linked list at boot time as the order of the
> LSMs can be configured on the kernel command line with the "lsm=" command line
> parameter.
> 
> Indirect function calls have a high overhead due to retpoline mitigation for
> various speculative execution attacks.
> 
> Retpolines remain relevant even with newer generation CPUs as recently
> discovered speculative attacks, like Spectre BHB need Retpolines to mitigate
> against branch history injection and still need to be used in combination with
> newer mitigation features like eIBRS.
> 
> This overhead is especially significant for the "bpf" LSM which allows the user
> to implement LSM functionality with eBPF program. In order to facilitate this
> the "bpf" LSM provides a default callback for all LSM hooks. When enabled,
> the "bpf" LSM incurs an unnecessary / avoidable indirect call. This is
> especially bad in OS hot paths (e.g. in the networking stack).
> This overhead prevents the adoption of bpf LSM on performance critical
> systems, and also, in general, slows down all LSMs.
> 
> Since we know the address of the enabled LSM callbacks at compile time and only
> the order is determined at boot time, the LSM framework can allocate static
> calls for each of the possible LSM callbacks and these calls can be updated once
> the order is determined at boot.
> 
> This series is a respin of the RFC proposed by Paul Renauld (renauld@google.com)
> and Brendan Jackman (jackmanb@google.com) [1]
> 
> # Performance improvement
> 
> With this patch-set some syscalls with lots of LSM hooks in their path
> benefitted at an average of ~3% and I/O and Pipe based system calls benefitting
> the most.
> 
> Here are the results of the relevant Unixbench system benchmarks with BPF LSM
> and SELinux enabled with default policies enabled with and without these
> patches.
> 
> Benchmark                                               Delta(%): (+ is better)
> ===============================================================================
> Execl Throughput                                             +1.9356
> File Write 1024 bufsize 2000 maxblocks                       +6.5953
> Pipe Throughput                                              +9.5499
> Pipe-based Context Switching                                 +3.0209
> Process Creation                                             +2.3246
> Shell Scripts (1 concurrent)                                 +1.4975
> System Call Overhead                                         +2.7815
> System Benchmarks Index Score (Partial Only):                +3.4859

FTR, I also measure a ~3% tput improvement in UDP stream test over
loopback.

@KP Singh, I would have appreciated being cc-ed here, since I provided
feedback on a previous revision (as soon as I learned of this effort).

Cheers,

Paolo
KP Singh Oct. 2, 2023, 11:09 a.m. UTC | #3
On Mon, Oct 2, 2023 at 1:06 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On Thu, 2023-09-28 at 22:24 +0200, KP Singh wrote:
> > # Background
> >
> > LSM hooks (callbacks) are currently invoked as indirect function calls. These
> > callbacks are registered into a linked list at boot time as the order of the
> > LSMs can be configured on the kernel command line with the "lsm=" command line
> > parameter.
> >
> > Indirect function calls have a high overhead due to retpoline mitigation for
> > various speculative execution attacks.
> >
> > Retpolines remain relevant even with newer generation CPUs as recently
> > discovered speculative attacks, like Spectre BHB need Retpolines to mitigate
> > against branch history injection and still need to be used in combination with
> > newer mitigation features like eIBRS.
> >
> > This overhead is especially significant for the "bpf" LSM which allows the user
> > to implement LSM functionality with eBPF program. In order to facilitate this
> > the "bpf" LSM provides a default callback for all LSM hooks. When enabled,
> > the "bpf" LSM incurs an unnecessary / avoidable indirect call. This is
> > especially bad in OS hot paths (e.g. in the networking stack).
> > This overhead prevents the adoption of bpf LSM on performance critical
> > systems, and also, in general, slows down all LSMs.
> >
> > Since we know the address of the enabled LSM callbacks at compile time and only
> > the order is determined at boot time, the LSM framework can allocate static
> > calls for each of the possible LSM callbacks and these calls can be updated once
> > the order is determined at boot.
> >
> > This series is a respin of the RFC proposed by Paul Renauld (renauld@google.com)
> > and Brendan Jackman (jackmanb@google.com) [1]
> >
> > # Performance improvement
> >
> > With this patch-set some syscalls with lots of LSM hooks in their path
> > benefitted at an average of ~3% and I/O and Pipe based system calls benefitting
> > the most.
> >
> > Here are the results of the relevant Unixbench system benchmarks with BPF LSM
> > and SELinux enabled with default policies enabled with and without these
> > patches.
> >
> > Benchmark                                               Delta(%): (+ is better)
> > ===============================================================================
> > Execl Throughput                                             +1.9356
> > File Write 1024 bufsize 2000 maxblocks                       +6.5953
> > Pipe Throughput                                              +9.5499
> > Pipe-based Context Switching                                 +3.0209
> > Process Creation                                             +2.3246
> > Shell Scripts (1 concurrent)                                 +1.4975
> > System Call Overhead                                         +2.7815
> > System Benchmarks Index Score (Partial Only):                +3.4859
>
> FTR, I also measure a ~3% tput improvement in UDP stream test over
> loopback.
>

Thanks for running the numbers and testing these patches, greatly appreciated!

> @KP Singh, I would have appreciated being cc-ed here, since I provided

Definitely, a miss on my part. Will keep you Cc'ed in any future revisions.

I think we can also add a Tested-by: tag on the main patch and add
your performance numbers to the commit as well.

- KP

> feedback on a previous revision (as soon as I learned of this effort).
>
> Cheers,
>
> Paolo
>
Paolo Abeni Oct. 2, 2023, 1:27 p.m. UTC | #4
On Mon, 2023-10-02 at 13:09 +0200, KP Singh wrote:
> On Mon, Oct 2, 2023 at 1:06 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > On Thu, 2023-09-28 at 22:24 +0200, KP Singh wrote:
> > > # Background
> > > 
> > > LSM hooks (callbacks) are currently invoked as indirect function calls. These
> > > callbacks are registered into a linked list at boot time as the order of the
> > > LSMs can be configured on the kernel command line with the "lsm=" command line
> > > parameter.
> > > 
> > > Indirect function calls have a high overhead due to retpoline mitigation for
> > > various speculative execution attacks.
> > > 
> > > Retpolines remain relevant even with newer generation CPUs as recently
> > > discovered speculative attacks, like Spectre BHB need Retpolines to mitigate
> > > against branch history injection and still need to be used in combination with
> > > newer mitigation features like eIBRS.
> > > 
> > > This overhead is especially significant for the "bpf" LSM which allows the user
> > > to implement LSM functionality with eBPF program. In order to facilitate this
> > > the "bpf" LSM provides a default callback for all LSM hooks. When enabled,
> > > the "bpf" LSM incurs an unnecessary / avoidable indirect call. This is
> > > especially bad in OS hot paths (e.g. in the networking stack).
> > > This overhead prevents the adoption of bpf LSM on performance critical
> > > systems, and also, in general, slows down all LSMs.
> > > 
> > > Since we know the address of the enabled LSM callbacks at compile time and only
> > > the order is determined at boot time, the LSM framework can allocate static
> > > calls for each of the possible LSM callbacks and these calls can be updated once
> > > the order is determined at boot.
> > > 
> > > This series is a respin of the RFC proposed by Paul Renauld (renauld@google.com)
> > > and Brendan Jackman (jackmanb@google.com) [1]
> > > 
> > > # Performance improvement
> > > 
> > > With this patch-set some syscalls with lots of LSM hooks in their path
> > > benefitted at an average of ~3% and I/O and Pipe based system calls benefitting
> > > the most.
> > > 
> > > Here are the results of the relevant Unixbench system benchmarks with BPF LSM
> > > and SELinux enabled with default policies enabled with and without these
> > > patches.
> > > 
> > > Benchmark                                               Delta(%): (+ is better)
> > > ===============================================================================
> > > Execl Throughput                                             +1.9356
> > > File Write 1024 bufsize 2000 maxblocks                       +6.5953
> > > Pipe Throughput                                              +9.5499
> > > Pipe-based Context Switching                                 +3.0209
> > > Process Creation                                             +2.3246
> > > Shell Scripts (1 concurrent)                                 +1.4975
> > > System Call Overhead                                         +2.7815
> > > System Benchmarks Index Score (Partial Only):                +3.4859
> > 
> > FTR, I also measure a ~3% tput improvement in UDP stream test over
> > loopback.
> > 
> 
> Thanks for running the numbers and testing these patches, greatly appreciated!
> 
> > @KP Singh, I would have appreciated being cc-ed here, since I provided
> 
> Definitely, a miss on my part. Will keep you Cc'ed in any future revisions.

Thanks!

> I think we can also add a Tested-by: tag on the main patch and add
> your performance numbers to the commit as well.

Feel free to include that, even if my testing is limited to the
performance test described above.

Cheers,

Paolo