Message ID | 20230220183847.59159-18-michael.roth@amd.com (mailing list archive) |
---|---|
State | Not Applicable |
Delegated to: | Herbert Xu |
Series | Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support
On 2/20/23 10:38, Michael Roth wrote:
> +static int handle_split_page_fault(struct vm_fault *vmf)
> +{
> +        __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
> +        return 0;
> +}
> +
>  /*
>   * By the time we get here, we already hold the mm semaphore
>   *
> @@ -5078,6 +5084,10 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
>                  pmd_migration_entry_wait(mm, vmf.pmd);
>                  return 0;
>          }
> +
> +        if (flags & FAULT_FLAG_PAGE_SPLIT)
> +                return handle_split_page_fault(&vmf);

I asked this long ago, but how do you prevent these faults from occurring on hugetlbfs mappings that can't be split?
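For context, a minimal sketch of the kind of guard being asked about: bailing out of the split path when the faulting VMA is hugetlbfs-backed, since __split_huge_pmd() only knows how to split transparent hugepages. The placement, the vm_fault_t return type (the posted patch uses int), and the choice of VM_FAULT_SIGBUS are assumptions for illustration, not part of the posted patch.

    /*
     * Hypothetical guard (not in the posted patch): hugetlbfs mappings are not
     * THP, so __split_huge_pmd() cannot break them up.  Fail the fault instead
     * of retrying forever on an unsplittable mapping.
     */
    static vm_fault_t handle_split_page_fault(struct vm_fault *vmf)
    {
            if (is_vm_hugetlb_page(vmf->vma))
                    return VM_FAULT_SIGBUS;

            __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
            return 0;
    }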
On 2/20/23 19:38, Michael Roth wrote:
> +static int handle_user_rmp_page_fault(struct pt_regs *regs, unsigned long error_code,
> +                                      unsigned long address)
> +{
> +        int rmp_level, level;
> +        pgd_t *pgd;
> +        pte_t *pte;
> +        u64 pfn;
> +
> +        pgd = __va(read_cr3_pa());
> +        pgd += pgd_index(address);
> +
> +        pte = lookup_address_in_pgd(pgd, address, &level);
> +
> +        /*
> +         * It can happen if there was a race between an unmap event and
> +         * the RMP fault delivery.
> +         */
> +        if (!pte || !pte_present(*pte))
> +                return RMP_PF_UNMAP;
> +
> +        /*
> +         * RMP page fault handler follows this algorithm:
> +         * 1. Compute the pfn for the 4kb page being accessed
> +         * 2. Read that RMP entry -- If it is assigned then kill the process
> +         * 3. Otherwise, check the level from the host page table
> +         *    If level=PG_LEVEL_4K then the page is already smashed
> +         *    so just retry the instruction
> +         * 4. If level=PG_LEVEL_2M/1G, then the host page needs to be split
> +         */
> +
> +        pfn = pte_pfn(*pte);
> +
> +        /* If it's a large page then calculate the fault pfn */
> +        if (level > PG_LEVEL_4K)
> +                pfn = pfn | PFN_DOWN(address & (page_level_size(level) - 1));
> +
> +        /*
> +         * If it's a guest private page, then the fault cannot be resolved.
> +         * Send a SIGBUS to terminate the process.
> +         *
> +         * As documented in APM vol3 pseudo-code for RMPUPDATE, when the 2M range
> +         * is covered by a valid (Assigned=1) 2M entry, the middle 511 4k entries
> +         * also have Assigned=1. This means that if there is an access to a page
> +         * which happens to lie within an Assigned 2M entry, the 4k RMP entry
> +         * will also have Assigned=1. Therefore, the kernel should see that
> +         * the page is not a valid page and the fault cannot be resolved.
> +         */
> +        if (snp_lookup_rmpentry(pfn, &rmp_level)) {
> +                pr_info("Fatal RMP page fault, terminating process, entry assigned for pfn 0x%llx\n",
> +                        pfn);
> +                do_sigbus(regs, error_code, address, VM_FAULT_SIGBUS);
> +                return RMP_PF_RETRY;
> +        }

WRT my reply to 12/56, for example here it might be useful to distinguish the RMP entry being assigned from an error returned by snp_lookup_rmpentry()?

> +
> +        /*
> +         * The backing page level is higher than the RMP page level, request
> +         * to split the page.
> +         */
> +        if (level > rmp_level)
> +                return RMP_PF_SPLIT;
> +
> +        return RMP_PF_RETRY;
> +}
> +
>  /*
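To illustrate the distinction being requested, here is one possible shape for it. This is a sketch only: the two-out-parameter snp_lookup_rmpentry() signature and the helper name are assumptions, not what this version of the series implements.

    /*
     * Hypothetical variant: snp_lookup_rmpentry() returns a negative errno when
     * the RMP entry cannot be read, and reports the Assigned bit via an
     * out-parameter, so callers can tell "lookup failed" apart from "page is
     * guest-owned".
     */
    int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level);

    static int check_rmp_for_user_fault(struct pt_regs *regs, unsigned long error_code,
                                        unsigned long address, u64 pfn, int level)
    {
            bool assigned;
            int rmp_level, ret;

            ret = snp_lookup_rmpentry(pfn, &assigned, &rmp_level);
            if (ret) {
                    /* Lookup failure: report it rather than killing the task. */
                    pr_warn_ratelimited("Failed to read RMP entry for pfn 0x%llx (%d)\n",
                                        pfn, ret);
                    return RMP_PF_RETRY;
            }

            if (assigned) {
                    /* Guest-private page: the access can never be satisfied. */
                    do_sigbus(regs, error_code, address, VM_FAULT_SIGBUS);
                    return RMP_PF_RETRY;
            }

            /* Host mapping is larger than the RMP granularity: ask for a split. */
            if (level > rmp_level)
                    return RMP_PF_SPLIT;

            return RMP_PF_RETRY;
    }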
On Wed, Mar 01, 2023 at 08:21:17AM -0800, Dave Hansen wrote:
> On 2/20/23 10:38, Michael Roth wrote:
> > +static int handle_split_page_fault(struct vm_fault *vmf)
> > +{
> > +        __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
> > +        return 0;
> > +}
> > +
> >  /*
> >   * By the time we get here, we already hold the mm semaphore
> >   *
> > @@ -5078,6 +5084,10 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
> >                  pmd_migration_entry_wait(mm, vmf.pmd);
> >                  return 0;
> >          }
> > +
> > +        if (flags & FAULT_FLAG_PAGE_SPLIT)
> > +                return handle_split_page_fault(&vmf);
>
> I asked this long ago, but how do you prevent these faults from occurring on hugetlbfs mappings that can't be split?

In v6 there used to be a KVM ioctl to register a user HVA range for use with SEV-SNP guests, and as part of that registration the code would scan all the VMAs encompassed by that range and check for VM_HUGETLB in vma->vm_flags.

With v7+ this registration mechanism has been replaced with the new restricted memfd implementation provided by UPM to manage private guest memory. A normal shmem/memfd backend can request HugeTLBFS via the MFD_HUGETLB flag when creating the memfd, but for restricted memfd no special flags are allowed, so HugeTLBFS isn't possible for the pages that are used for private memory. Though it might still make sense to enforce that in SNP-specific code, in case restricted memfd does eventually gain that ability...

But now, with v7+, the non-private memory that doesn't get allocated via restricted memfd (and thus can actually be mapped into userspace and used for things like buffers shared between host/guest) can still be allocated via HugeTLBFS, since there is nothing SNP-specific guarding against that. So we'd probably want to reimplement logic similar to what was in v6 to guard against this, since it's these mappings that would potentially trigger the RMP faults and require splitting. However...

The fact that any pages potentially triggering these #PFs are able to be mapped as 2M in the first place means that all the PFNs covered by that 2M mapping must also have been allocated via mappable/VMA memory rather than via restricted memfd, where userspace mappings are not possible.

So I think we should be able to drop this patch entirely, as well as allow the use of HugeTLBFS for non-restricted memfd memory (though eventually the guest will switch all its memory to private/restricted, so we wouldn't gain much there other than reducing management complexity).

-Mike
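For reference, a rough sketch of the kind of v6-style registration-time check described above. The function name and return convention are assumptions based on that description, not a quote of the v6 code.

    /*
     * Hypothetical re-creation of the v6-style check: walk the VMAs covering a
     * registered userspace HVA range and refuse the registration if any of them
     * is backed by hugetlbfs, since such mappings can't be split on an RMP fault.
     */
    static int snp_check_hva_range(struct mm_struct *mm, unsigned long start,
                                   unsigned long end)
    {
            struct vm_area_struct *vma;
            VMA_ITERATOR(vmi, mm, start);
            int ret = 0;

            mmap_read_lock(mm);
            for_each_vma_range(vmi, vma, end) {
                    if (is_vm_hugetlb_page(vma)) {
                            ret = -EINVAL;
                            break;
                    }
            }
            mmap_read_unlock(mm);

            return ret;
    }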
On 3/28/23 16:31, Michael Roth wrote:
> However...
>
> The fact that any pages potentially triggering these #PFs are able to be mapped as 2M in the first place means that all the PFNs covered by that 2M mapping must also have been allocated via mappable/VMA memory rather than via restricted memfd, where userspace mappings are not possible.
>
> So I think we should be able to drop this patch entirely, as well as allow the use of HugeTLBFS for non-restricted memfd memory (though eventually the guest will switch all its memory to private/restricted, so we wouldn't gain much there other than reducing management complexity).

This is sounding a bit voodoo-ish to me. This whole series seems to be predicated on having its memory supplied via one very specific ABI with very specific behavior, yet that connection and the associated contract aren't spelled out very clearly in this series.

I'm sure it works on your machine and is clear to _you_, but I'm worried that nobody else is going to be able to figure out the voodoo.

Could we make sure that this stuff is made very clear in the Documentation and cover letter, please?
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index f8193b99e9c8..afd4cde17001 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -33,6 +33,7 @@
 #include <asm/kvm_para.h>               /* kvm_handle_async_pf          */
 #include <asm/vdso.h>                   /* fixup_vdso_exception()       */
 #include <asm/irq_stack.h>
+#include <asm/sev.h>                    /* snp_lookup_rmpentry()        */
 
 #define CREATE_TRACE_POINTS
 #include <asm/trace/exceptions.h>
@@ -414,6 +415,7 @@ static void dump_pagetable(unsigned long address)
         pr_cont("PTE %lx", pte_val(*pte));
 out:
         pr_cont("\n");
+        return;
 bad:
         pr_info("BAD\n");
@@ -527,6 +529,8 @@ static void show_ldttss(const struct desc_ptr *gdt, const char *name, u16 index)
 static void
 show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long address)
 {
+        unsigned long pfn;
+
         if (!oops_may_print())
                 return;
@@ -599,7 +603,10 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long ad
                 show_ldttss(&gdt, "TR", tr);
         }
 
-        dump_pagetable(address);
+        pfn = dump_pagetable(address);
+
+        if (error_code & X86_PF_RMP)
+                sev_dump_rmpentry(pfn);
 }
 
 static noinline void
@@ -1240,6 +1247,90 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 }
 NOKPROBE_SYMBOL(do_kern_addr_fault);
 
+enum rmp_pf_ret {
+        RMP_PF_SPLIT = 0,
+        RMP_PF_RETRY = 1,
+        RMP_PF_UNMAP = 2,
+};
+
+/*
+ * The goal of the RMP faulting routine is really to check whether the
+ * page that faulted should be accessible.  That can be determined
+ * simply by looking at the RMP entry for the 4k address being accessed.
+ * If that entry has Assigned=1 then it's a bad address.  It could be
+ * because the 2MB region was assigned as a large page, or it could be
+ * because the region is all 4k pages and that 4k was assigned.
+ * In either case, it's a bad access.
+ * There are basically two main possibilities:
+ * 1. The 2M entry has Assigned=1 and Page_Size=1. Then all 511 middle
+ *    entries also have Assigned=1. This entire 2M region is a guest page.
+ * 2. The 2M entry has Assigned=0 and Page_Size=0. Then the 511 middle
+ *    entries can be anything, this region consists of individual 4k assignments.
+ */
+static int handle_user_rmp_page_fault(struct pt_regs *regs, unsigned long error_code,
+                                      unsigned long address)
+{
+        int rmp_level, level;
+        pgd_t *pgd;
+        pte_t *pte;
+        u64 pfn;
+
+        pgd = __va(read_cr3_pa());
+        pgd += pgd_index(address);
+
+        pte = lookup_address_in_pgd(pgd, address, &level);
+
+        /*
+         * It can happen if there was a race between an unmap event and
+         * the RMP fault delivery.
+         */
+        if (!pte || !pte_present(*pte))
+                return RMP_PF_UNMAP;
+
+        /*
+         * RMP page fault handler follows this algorithm:
+         * 1. Compute the pfn for the 4kb page being accessed
+         * 2. Read that RMP entry -- If it is assigned then kill the process
+         * 3. Otherwise, check the level from the host page table
+         *    If level=PG_LEVEL_4K then the page is already smashed
+         *    so just retry the instruction
+         * 4. If level=PG_LEVEL_2M/1G, then the host page needs to be split
+         */
+
+        pfn = pte_pfn(*pte);
+
+        /* If it's a large page then calculate the fault pfn */
+        if (level > PG_LEVEL_4K)
+                pfn = pfn | PFN_DOWN(address & (page_level_size(level) - 1));
+
+        /*
+         * If it's a guest private page, then the fault cannot be resolved.
+         * Send a SIGBUS to terminate the process.
+         *
+         * As documented in APM vol3 pseudo-code for RMPUPDATE, when the 2M range
+         * is covered by a valid (Assigned=1) 2M entry, the middle 511 4k entries
+         * also have Assigned=1. This means that if there is an access to a page
+         * which happens to lie within an Assigned 2M entry, the 4k RMP entry
+         * will also have Assigned=1. Therefore, the kernel should see that
+         * the page is not a valid page and the fault cannot be resolved.
+         */
+        if (snp_lookup_rmpentry(pfn, &rmp_level)) {
+                pr_info("Fatal RMP page fault, terminating process, entry assigned for pfn 0x%llx\n",
+                        pfn);
+                do_sigbus(regs, error_code, address, VM_FAULT_SIGBUS);
+                return RMP_PF_RETRY;
+        }
+
+        /*
+         * The backing page level is higher than the RMP page level, request
+         * to split the page.
+         */
+        if (level > rmp_level)
+                return RMP_PF_SPLIT;
+
+        return RMP_PF_RETRY;
+}
+
 /*
  * Handle faults in the user portion of the address space.  Nothing in here
  * should check X86_PF_USER without a specific justification: for almost
@@ -1337,6 +1428,17 @@ void do_user_addr_fault(struct pt_regs *regs,
         if (error_code & X86_PF_INSTR)
                 flags |= FAULT_FLAG_INSTRUCTION;
 
+        /*
+         * If it's an RMP violation, try resolving it.
+         */
+        if (error_code & X86_PF_RMP) {
+                if (handle_user_rmp_page_fault(regs, error_code, address))
+                        return;
+
+                /* Ask to split the page */
+                flags |= FAULT_FLAG_PAGE_SPLIT;
+        }
+
 #ifdef CONFIG_X86_64
         /*
          * Faults in the vsyscall page might need emulation.  The
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3c84f4e48cd7..2fd8e16d149c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -466,7 +466,8 @@ static inline bool fault_flag_allow_retry_first(enum fault_flag flags)
         { FAULT_FLAG_USER,              "USER" }, \
         { FAULT_FLAG_REMOTE,            "REMOTE" }, \
         { FAULT_FLAG_INSTRUCTION,       "INSTRUCTION" }, \
-        { FAULT_FLAG_INTERRUPTIBLE,     "INTERRUPTIBLE" }
+        { FAULT_FLAG_INTERRUPTIBLE,     "INTERRUPTIBLE" }, \
+        { FAULT_FLAG_PAGE_SPLIT,        "PAGESPLIT" }
 
 /*
  * vm_fault is filled by the pagefault handler and passed to the vma's
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 500e536796ca..06ba34d51638 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -962,6 +962,8 @@ typedef struct {
  *                        mapped R/O.
  * @FAULT_FLAG_ORIG_PTE_VALID: whether the fault has vmf->orig_pte cached.
  *                        We should only access orig_pte if this flag set.
+ * @FAULT_FLAG_PAGE_SPLIT: The fault was due to a page size mismatch; split the
+ *                        region to a smaller page size and retry.
  *
  * About @FAULT_FLAG_ALLOW_RETRY and @FAULT_FLAG_TRIED: we can specify
  * whether we would allow page faults to retry by specifying these two
@@ -999,6 +1001,7 @@ enum fault_flag {
         FAULT_FLAG_INTERRUPTIBLE =      1 << 9,
         FAULT_FLAG_UNSHARE =            1 << 10,
         FAULT_FLAG_ORIG_PTE_VALID =     1 << 11,
+        FAULT_FLAG_PAGE_SPLIT =         1 << 12,
 };
 
 typedef unsigned int __bitwise zap_flags_t;
diff --git a/mm/memory.c b/mm/memory.c
index f88c351aecd4..e68da7e403c6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4996,6 +4996,12 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
         return 0;
 }
 
+static int handle_split_page_fault(struct vm_fault *vmf)
+{
+        __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
+        return 0;
+}
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
@@ -5078,6 +5084,10 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
                 pmd_migration_entry_wait(mm, vmf.pmd);
                 return 0;
         }
+
+        if (flags & FAULT_FLAG_PAGE_SPLIT)
+                return handle_split_page_fault(&vmf);
+
         if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) {
                 if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
                         return do_huge_pmd_numa_page(&vmf);