Message ID: 20220721055728.718573-12-kaleshsingh@google.com (mailing list archive)
State:      New, archived
Series:     KVM nVHE Hypervisor stack unwinder
Hi Kalesh,

On Thu, Jul 21, 2022 at 6:58 AM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> Add stub implementations of non-protected nVHE stack unwinder, for
> building. These are implemented later in this series.
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> ---

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

> Changes in v5:
>   - Mark unwind_next() as inline, per Marc
>   - Comment !__KVM_NVHE_HYPERVISOR__ unwinder path, per Marc
>
>  arch/arm64/include/asm/stacktrace/nvhe.h | 26 ++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
>
> diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
> index 80d71932afff..3078501f8e22 100644
> --- a/arch/arm64/include/asm/stacktrace/nvhe.h
> +++ b/arch/arm64/include/asm/stacktrace/nvhe.h
> @@ -8,6 +8,12 @@
>   *    the HYP memory. The stack is unwinded in EL2 and dumped to a shared
>   *    buffer where the host can read and print the stacktrace.
>   *
> + * 2) Non-protected nVHE mode - the host can directly access the
> + *    HYP stack pages and unwind the HYP stack in EL1. This saves having
> + *    to allocate shared buffers for the host to read the unwinded
> + *    stacktrace.
> + *
> + *
>   * Copyright (C) 2022 Google LLC
>   */
>  #ifndef __ASM_STACKTRACE_NVHE_H
> @@ -55,5 +61,25 @@ static inline int notrace unwind_next(struct unwind_state *state)
>  NOKPROBE_SYMBOL(unwind_next);
>  #endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
>
> +#else /* !__KVM_NVHE_HYPERVISOR__ */
> +/*
> + * Conventional (non-protected) nVHE HYP stack unwinder
> + *
> + * In non-protected mode, the unwinding is done from kernel proper context
> + * (by the host in EL1).
> + */
> +
> +static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
> +                                     struct stack_info *info)
> +{
> +     return false;
> +}
> +
> +static inline int notrace unwind_next(struct unwind_state *state)
> +{
> +     return 0;
> +}
> +NOKPROBE_SYMBOL(unwind_next);
> +
>  #endif /* __KVM_NVHE_HYPERVISOR__ */
>  #endif /* __ASM_STACKTRACE_NVHE_H */
> --
> 2.37.0.170.g444d1eabd0-goog
>