Message ID | 20221209043804.942352-2-aik@amd.com (mailing list archive)
---|---
State | New, archived
Series | KVM: SEV: Enable AMD SEV-ES DebugSwap
On Fri, Dec 09, 2022 at 03:38:02PM +1100, Alexey Kardashevskiy wrote:

Make that Subject:

"x86/amd: Cache debug register values in percpu variables"

to make it less generic and more specific as to what you're doing.

> Reading DR[0-3]_ADDR_MASK MSRs takes about 250 cycles which is going to
> be noticeable with the AMD KVM SEV-ES DebugSwap feature enabled.
> KVM is going to store host's DR[0-3] and DR[0-3]_ADDR_MASK before
> switching to a guest; the hardware is going to swap these on VMRUN
> and VMEXIT.
>
> Store MSR values passsed to set_dr_addr_mask() in percpu values

s/values/variables/

Unknown word [passsed] in commit message.

Use a spellchecker pls.

> (when changed) and return them via new amd_get_dr_addr_mask().
> The gain here is about 10x.

10x when reading percpu vars vs MSR reads?

Oh well.

> As amd_set_dr_addr_mask() uses the array too, change the @dr type to
> unsigned to avoid checking for <0.

I feel ya but that function will warn once, return 0 when the @dr number
is outta bounds and that 0 will still get used as an address mask.

I think you really wanna return negative on error and the caller should
not continue in that case.

> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index c75d75b9f11a..9ac5a19f89b9 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -1158,24 +1158,41 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
>  	return false;
>  }
>
> -void set_dr_addr_mask(unsigned long mask, int dr)
> +DEFINE_PER_CPU_READ_MOSTLY(unsigned long[4], amd_dr_addr_mask);

static

> +
> +static unsigned int amd_msr_dr_addr_masks[] = {
> +	MSR_F16H_DR0_ADDR_MASK,
> +	MSR_F16H_DR1_ADDR_MASK - 1 + 1,

- 1 + 1 ?

Why?

Because of the DR0 and then DR1 being in a different MSR range?

Who cares, make it simple:

	MSR_F16H_DR0_ADDR_MASK,
	MSR_F16H_DR1_ADDR_MASK,
	MSR_F16H_DR1_ADDR_MASK + 1,
	MSR_F16H_DR1_ADDR_MASK + 2

and add a comment if you want to denote the non-contiguous range but meh.

> +	MSR_F16H_DR1_ADDR_MASK - 1 + 2,
> +	MSR_F16H_DR1_ADDR_MASK - 1 + 3
> +};
> +
> +void set_dr_addr_mask(unsigned long mask, unsigned int dr)
>  {
> -	if (!boot_cpu_has(X86_FEATURE_BPEXT))
> +	if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
>  		return;
>
> -	switch (dr) {
> -	case 0:
> -		wrmsr(MSR_F16H_DR0_ADDR_MASK, mask, 0);
> -		break;
> -	case 1:
> -	case 2:
> -	case 3:
> -		wrmsr(MSR_F16H_DR1_ADDR_MASK - 1 + dr, mask, 0);
> -		break;
> -	default:
> -		break;
> -	}
> +	if (WARN_ON_ONCE(dr >= ARRAY_SIZE(amd_msr_dr_addr_masks)))
> +		return;
> +
> +	if (per_cpu(amd_dr_addr_mask, smp_processor_id())[dr] == mask)

Do that at function entry:

	int cpu = smp_processor_id();

and use cpu here.

> +		return;
> +
> +	wrmsr(amd_msr_dr_addr_masks[dr], mask, 0);
> +	per_cpu(amd_dr_addr_mask, smp_processor_id())[dr] = mask;
> +}

Thx.
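[Editor's note: for reference, below is a minimal sketch of what the setter could look like with the two concrete suggestions above applied — the plain MSR list and smp_processor_id() hoisted to function entry. It keeps the patch's void return and relies on the amd_dr_addr_mask percpu array introduced by the patch; it is illustrative only, not the final merged code.]

```c
/* Illustrative sketch only: the patch's setter with the review comments applied. */
static unsigned int amd_msr_dr_addr_masks[] = {
	MSR_F16H_DR0_ADDR_MASK,
	MSR_F16H_DR1_ADDR_MASK,
	MSR_F16H_DR1_ADDR_MASK + 1,
	MSR_F16H_DR1_ADDR_MASK + 2
};

void set_dr_addr_mask(unsigned long mask, unsigned int dr)
{
	/* Read the CPU number once, as suggested in the review. */
	int cpu = smp_processor_id();

	if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
		return;

	if (WARN_ON_ONCE(dr >= ARRAY_SIZE(amd_msr_dr_addr_masks)))
		return;

	/* Skip the ~250-cycle MSR write if the cached value already matches. */
	if (per_cpu(amd_dr_addr_mask, cpu)[dr] == mask)
		return;

	wrmsr(amd_msr_dr_addr_masks[dr], mask, 0);
	per_cpu(amd_dr_addr_mask, cpu)[dr] = mask;
}
```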
On 11/1/23 03:05, Borislav Petkov wrote:
> On Fri, Dec 09, 2022 at 03:38:02PM +1100, Alexey Kardashevskiy wrote:
>
> Make that Subject:
>
> "x86/amd: Cache debug register values in percpu variables"
>
> to make it less generic and more specific as to what you're doing.
>
>> Reading DR[0-3]_ADDR_MASK MSRs takes about 250 cycles which is going to
>> be noticeable with the AMD KVM SEV-ES DebugSwap feature enabled.
>> KVM is going to store host's DR[0-3] and DR[0-3]_ADDR_MASK before
>> switching to a guest; the hardware is going to swap these on VMRUN
>> and VMEXIT.
>>
>> Store MSR values passsed to set_dr_addr_mask() in percpu values
>
> s/values/variables/
>
> Unknown word [passsed] in commit message.
>
> Use a spellchecker pls.
>
>> (when changed) and return them via new amd_get_dr_addr_mask().
>> The gain here is about 10x.
>
> 10x when reading percpu vars vs MSR reads?
>
> Oh well.
>
>> As amd_set_dr_addr_mask() uses the array too, change the @dr type to
>> unsigned to avoid checking for <0.
>
> I feel ya but that function will warn once, return 0 when the @dr number is
> outta bounds and that 0 will still get used as an address mask.

"that function" is set_dr_addr_mask() (btw should I rename it to start
with amd_? the commit log uses the wrong&longer name) which does not
return a mask, amd_get_dr_addr_mask() does.

> I think you really wanna return negative on error and the caller should not
> continue in that case.

If it is out of bounds, it won't neither set or get. And these 2 helpers
are used only by the kernel and the callers pass 0..3 and nothing else.
BUG_ON() would do too, for example.

>> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
>> index c75d75b9f11a..9ac5a19f89b9 100644
>> --- a/arch/x86/kernel/cpu/amd.c
>> +++ b/arch/x86/kernel/cpu/amd.c
>> @@ -1158,24 +1158,41 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
>>  	return false;
>>  }
>>
>> -void set_dr_addr_mask(unsigned long mask, int dr)
>> +DEFINE_PER_CPU_READ_MOSTLY(unsigned long[4], amd_dr_addr_mask);
>
> static
>
>> +
>> +static unsigned int amd_msr_dr_addr_masks[] = {
>> +	MSR_F16H_DR0_ADDR_MASK,
>> +	MSR_F16H_DR1_ADDR_MASK - 1 + 1,
>
> - 1 + 1 ?
>
> Why?
>
> Because of the DR0 and then DR1 being in a different MSR range?

Yup.

>
> Who cares, make it simple:
>
> 	MSR_F16H_DR0_ADDR_MASK,
> 	MSR_F16H_DR1_ADDR_MASK,
> 	MSR_F16H_DR1_ADDR_MASK + 1,
> 	MSR_F16H_DR1_ADDR_MASK + 2
>
> and add a comment if you want to denote the non-contiguous range but meh.

imho having 1,2,3 in the code eliminates the need in a comment and
produces the exact same end result. But since nobody cares, I'll do it
the shorter way with just +1 and +2.

>
>> +	MSR_F16H_DR1_ADDR_MASK - 1 + 2,
>> +	MSR_F16H_DR1_ADDR_MASK - 1 + 3
>> +};
>> +
>> +void set_dr_addr_mask(unsigned long mask, unsigned int dr)
>>  {
>> -	if (!boot_cpu_has(X86_FEATURE_BPEXT))
>> +	if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
>>  		return;
>>
>> -	switch (dr) {
>> -	case 0:
>> -		wrmsr(MSR_F16H_DR0_ADDR_MASK, mask, 0);
>> -		break;
>> -	case 1:
>> -	case 2:
>> -	case 3:
>> -		wrmsr(MSR_F16H_DR1_ADDR_MASK - 1 + dr, mask, 0);
>> -		break;
>> -	default:
>> -		break;
>> -	}
>> +	if (WARN_ON_ONCE(dr >= ARRAY_SIZE(amd_msr_dr_addr_masks)))
>> +		return;
>> +
>> +	if (per_cpu(amd_dr_addr_mask, smp_processor_id())[dr] == mask)
>
> Do that at function entry:
>
> 	int cpu = smp_processor_id();
>
> and use cpu here.

Sure. Out of curiosity - why?^w^w^w^w^ it reduced the vmlinux size by 48
bytes, nice.

Thanks for the review!

>
>> +		return;
>> +
>> +	wrmsr(amd_msr_dr_addr_masks[dr], mask, 0);
>> +	per_cpu(amd_dr_addr_mask, smp_processor_id())[dr] = mask;
>> +}
>
> Thx.
>
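[Editor's note: for illustration only, the error-returning variant being debated here might look roughly like the sketch below. The int return, -EINVAL value, and the amd_ prefix are purely hypothetical; the thread settles on keeping the void/WARN_ON_ONCE() form (see the follow-up reply).]

```c
/*
 * Hypothetical alternative (not adopted): let the setter report an
 * out-of-range @dr so the caller could bail out instead of silently
 * programming nothing. Reuses the patch's amd_msr_dr_addr_masks[] and
 * amd_dr_addr_mask percpu array.
 */
int amd_set_dr_addr_mask(unsigned long mask, unsigned int dr)
{
	int cpu = smp_processor_id();

	if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
		return 0;	/* keep the original silent-skip behaviour */

	if (dr >= ARRAY_SIZE(amd_msr_dr_addr_masks))
		return -EINVAL;

	if (per_cpu(amd_dr_addr_mask, cpu)[dr] != mask) {
		wrmsr(amd_msr_dr_addr_masks[dr], mask, 0);
		per_cpu(amd_dr_addr_mask, cpu)[dr] = mask;
	}

	return 0;
}
```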
On Thu, Jan 12, 2023 at 04:21:28PM +1100, Alexey Kardashevskiy wrote:
> "that function" is set_dr_addr_mask() (btw should I rename it to start with
> amd_?

If you really wanna... I mean the code is already doing AMD-specific
handling but sure, it'll be more obvious then that
arch_install_hw_breakpoint() does only AMD-specific masking there under
the info->mask test.

> If it is out of bounds, it won't neither set or get. And these 2 helpers are
> used only by the kernel and the callers pass 0..3 and nothing else. BUG_ON()
> would do too, for example.

Yeah, we don't do BUG_ON - look for Linus' colorful explanations why.

:)

In any case, I think we should always aim for proper recovery from
errors but this case is not that important so let's leave it at the
WARN_ON_ONCE.

> imho having 1,2,3 in the code eliminates the need in a comment and produces
> the exact same end result. But since nobody cares, I'll do it the shorter
> way with just +1 and +2.

Yeah, the more important goal is simplicity. And that pays off when you
have to revisit that code and figure out what it does, later.

> Sure. Out of curiosity - why?^w^w^w^w^ it reduced the vmlinux size by 48
> bytes, nice.

The same answer - simplicity and speed when reading it. That

	if (per_cpu(amd_dr_addr_mask, smp_processor_id())[dr] == mask)

is a bit harder to parse than

	if (per_cpu(amd_dr_addr_mask, cpu)[dr] == mask)

Thx.
diff --git a/arch/x86/include/asm/debugreg.h b/arch/x86/include/asm/debugreg.h
index cfdf307ddc01..59f97ba25d5f 100644
--- a/arch/x86/include/asm/debugreg.h
+++ b/arch/x86/include/asm/debugreg.h
@@ -126,9 +126,14 @@ static __always_inline void local_db_restore(unsigned long dr7)
 }
 
 #ifdef CONFIG_CPU_SUP_AMD
-extern void set_dr_addr_mask(unsigned long mask, int dr);
+extern void set_dr_addr_mask(unsigned long mask, unsigned int dr);
+extern unsigned long amd_get_dr_addr_mask(unsigned int dr);
 #else
-static inline void set_dr_addr_mask(unsigned long mask, int dr) { }
+static inline void set_dr_addr_mask(unsigned long mask, unsigned int dr) { }
+static inline unsigned long amd_get_dr_addr_mask(unsigned int dr)
+{
+	return 0;
+}
 #endif
 
 #endif /* _ASM_X86_DEBUGREG_H */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index c75d75b9f11a..9ac5a19f89b9 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1158,24 +1158,41 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
 	return false;
 }
 
-void set_dr_addr_mask(unsigned long mask, int dr)
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long[4], amd_dr_addr_mask);
+
+static unsigned int amd_msr_dr_addr_masks[] = {
+	MSR_F16H_DR0_ADDR_MASK,
+	MSR_F16H_DR1_ADDR_MASK - 1 + 1,
+	MSR_F16H_DR1_ADDR_MASK - 1 + 2,
+	MSR_F16H_DR1_ADDR_MASK - 1 + 3
+};
+
+void set_dr_addr_mask(unsigned long mask, unsigned int dr)
 {
-	if (!boot_cpu_has(X86_FEATURE_BPEXT))
+	if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
 		return;
 
-	switch (dr) {
-	case 0:
-		wrmsr(MSR_F16H_DR0_ADDR_MASK, mask, 0);
-		break;
-	case 1:
-	case 2:
-	case 3:
-		wrmsr(MSR_F16H_DR1_ADDR_MASK - 1 + dr, mask, 0);
-		break;
-	default:
-		break;
-	}
+	if (WARN_ON_ONCE(dr >= ARRAY_SIZE(amd_msr_dr_addr_masks)))
+		return;
+
+	if (per_cpu(amd_dr_addr_mask, smp_processor_id())[dr] == mask)
+		return;
+
+	wrmsr(amd_msr_dr_addr_masks[dr], mask, 0);
+	per_cpu(amd_dr_addr_mask, smp_processor_id())[dr] = mask;
+}
+
+unsigned long amd_get_dr_addr_mask(unsigned int dr)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
+		return 0;
+
+	if (WARN_ON_ONCE(dr >= ARRAY_SIZE(amd_msr_dr_addr_masks)))
+		return 0;
+
+	return per_cpu(amd_dr_addr_mask[dr], smp_processor_id());
 }
+EXPORT_SYMBOL_GPL(amd_get_dr_addr_mask);
 
 u32 amd_get_highest_perf(void)
 {
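[Editor's note: to show how the new helper is meant to be consumed, here is a hypothetical sketch of the KVM SEV-ES side hinted at in the commit message — the host's address masks are copied into the host save area so the hardware can swap them on VMRUN/VMEXIT. The sev_es_save_area type and dr*_addr_mask field names are assumptions for illustration; the actual DebugSwap patches in this series define the real layout and call site.]

```c
/*
 * Hypothetical illustration only (not part of this patch): snapshot the
 * host's DR address masks via the cached percpu values instead of four
 * ~250-cycle MSR reads on every switch to an SEV-ES guest.
 */
static void sev_es_save_host_dr_masks(struct sev_es_save_area *hostsa)
{
	hostsa->dr0_addr_mask = amd_get_dr_addr_mask(0);
	hostsa->dr1_addr_mask = amd_get_dr_addr_mask(1);
	hostsa->dr2_addr_mask = amd_get_dr_addr_mask(2);
	hostsa->dr3_addr_mask = amd_get_dr_addr_mask(3);
}
```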
Reading DR[0-3]_ADDR_MASK MSRs takes about 250 cycles which is going to
be noticeable with the AMD KVM SEV-ES DebugSwap feature enabled.
KVM is going to store host's DR[0-3] and DR[0-3]_ADDR_MASK before
switching to a guest; the hardware is going to swap these on VMRUN
and VMEXIT.

Store MSR values passsed to set_dr_addr_mask() in percpu values
(when changed) and return them via new amd_get_dr_addr_mask().
The gain here is about 10x.

As amd_set_dr_addr_mask() uses the array too, change the @dr type to
unsigned to avoid checking for <0.

While at it, replace deprecated boot_cpu_has() with
cpu_feature_enabled() in set_dr_addr_mask().

Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
---
Changes:
v2:
* reworked to use arrays
* set() skips wrmsr() when the mask is not changed
* added stub for get_dr_addr_mask()
* changed @dr type to unsigned
* s/boot_cpu_has/cpu_feature_enabled/
* added amd_ prefix
---
 arch/x86/include/asm/debugreg.h |  9 +++-
 arch/x86/kernel/cpu/amd.c       | 45 ++++++++++++++------
 2 files changed, 38 insertions(+), 16 deletions(-)