Message ID | 20230612042559.375660-11-michael.roth@amd.com (mailing list archive) |
---|---|
State | New, archived |
Series | Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support |
On 6/11/23 21:25, Michael Roth wrote:
> +	/*
> +	 * If the RMP entry at the faulting pfn was not assigned, then not sure
> +	 * what caused the RMP violation. To get some useful debug information,
> +	 * iterate through the entire 2MB region, and dump the RMP entries if
> +	 * one of the bit in the RMP entry is set.
> +	 */
> +	pfn = pfn & ~(PTRS_PER_PMD - 1);
> +	pfn_end = pfn + PTRS_PER_PMD;
> +
> +	while (pfn < pfn_end) {
> +		ret = __snp_lookup_rmpentry(pfn, &e, &level);
> +		if (ret) {
> +			pr_info("Failed to read RMP entry for PFN 0x%llx\n", pfn);
> +			pfn++;
> +			continue;
> +		}
> +
> +		if (e.low || e.high)
> +			pr_info("RMPEntry paddr 0x%llx: [high=0x%016llx low=0x%016llx]\n",
> +				pfn << PAGE_SHIFT, e.high, e.low);
> +		pfn++;
> +	}
> +}

Dumping 511 lines of (possible) junk into the dmesg buffer seems a
_bit_ rude here.  I can see dumping out the 2M RMP entry, but not the
other 510.

This also destroys the information about which pfn was being targeted
for the dump in the first place.  That seems unfortunate.
diff --git a/arch/x86/coco/sev/host.c b/arch/x86/coco/sev/host.c
index 0cc5a6d11b25..d766b3bc6647 100644
--- a/arch/x86/coco/sev/host.c
+++ b/arch/x86/coco/sev/host.c
@@ -295,3 +295,46 @@ int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level)
 	return 0;
 }
 EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
+
+void sev_dump_rmpentry(u64 pfn)
+{
+	unsigned long pfn_end;
+	struct rmpentry e;
+	int level, ret;
+
+	ret = __snp_lookup_rmpentry(pfn, &e, &level);
+	if (ret) {
+		pr_info("Failed to read RMP entry for PFN 0x%llx, error %d\n", pfn, ret);
+		return;
+	}
+
+	if (rmpentry_assigned(&e)) {
+		pr_info("RMPEntry paddr 0x%llx: [high=0x%016llx low=0x%016llx]\n",
+			pfn << PAGE_SHIFT, e.high, e.low);
+		return;
+	}
+
+	/*
+	 * If the RMP entry at the faulting pfn was not assigned, then not sure
+	 * what caused the RMP violation. To get some useful debug information,
+	 * iterate through the entire 2MB region, and dump the RMP entries if
+	 * one of the bit in the RMP entry is set.
+	 */
+	pfn = pfn & ~(PTRS_PER_PMD - 1);
+	pfn_end = pfn + PTRS_PER_PMD;
+
+	while (pfn < pfn_end) {
+		ret = __snp_lookup_rmpentry(pfn, &e, &level);
+		if (ret) {
+			pr_info("Failed to read RMP entry for PFN 0x%llx\n", pfn);
+			pfn++;
+			continue;
+		}
+
+		if (e.low || e.high)
+			pr_info("RMPEntry paddr 0x%llx: [high=0x%016llx low=0x%016llx]\n",
+				pfn << PAGE_SHIFT, e.high, e.low);
+		pfn++;
+	}
+}
+EXPORT_SYMBOL_GPL(sev_dump_rmpentry);
diff --git a/arch/x86/include/asm/sev-host.h b/arch/x86/include/asm/sev-host.h
index 30d47e20081d..85cfe577155c 100644
--- a/arch/x86/include/asm/sev-host.h
+++ b/arch/x86/include/asm/sev-host.h
@@ -15,8 +15,10 @@
 
 #ifdef CONFIG_KVM_AMD_SEV
 int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level);
+void sev_dump_rmpentry(u64 pfn);
 #else
 static inline int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level) { return 0; }
+static inline void sev_dump_rmpentry(u64 pfn) {}
 #endif
 
 #endif