Message ID: cover.1618914692.git.isaku.yamahata@intel.com (mailing list archive)
Series: KVM: x86/mmu: simplify argument to kvm page fault handler
On Tue, Apr 20, 2021, Isaku Yamahata wrote:
> This is a preliminary clean up for TDX which complicates KVM page fault
> execution path.

Ooh, a series to complicate the page fault path! ;-)

Grammatical snarkiness aside, I'm all in favor of adding a struct to collect
the page fault collateral. Overarching feedback:

 - Have kvm_mmu_do_page_fault() handle initialization of the struct. That
   will allow making most of the fields const, and will avoid the rather
   painful kvm_page_fault_init().

 - Pass @vcpu separately. Yes, it's associated with the fault, but literally
   the first line in every consumer is "struct kvm_vcpu *vcpu = kpf->vcpu;".

 - Use "fault" instead of "kpf", mostly because it reads better for people
   that aren't intimately familiar with the code, but also to avoid having to
   refactor a huge amount of code if we decide to rename kvm_page_fault, e.g.
   if we decide to use that name to return fault information to userspace.

 - Snapshot anything that is computed in multiple places, even if it is
   derivative of existing info. E.g. it probably makes sense to grab
   write/fetch (or exec).

E.g. I'm thinking something like:

  struct kvm_page_fault {
	const gpa_t cr2_or_gpa;
	const u32 error_code;
	const bool write;
	const bool read;
	const bool fetch;
	const bool prefault;
	const bool is_tdp;

	gfn_t gfn;
	hva_t hva;
	int max_level;
	kvm_pfn_t pfn;
	bool map_writable;
  };

  int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);

  static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu,
					  gpa_t cr2_or_gpa, u32 err,
					  bool prefault)
  {
	struct kvm_page_fault fault = {
		.cr2_or_gpa	= cr2_or_gpa,
		.error_code	= err,
		.write		= err & PFERR_WRITE_MASK,
		.fetch		= err & PFERR_FETCH_MASK,
		.perm		= ...
		.rsvd		= err & PFERR_RSVD_MASK,
		.is_tdp		= vcpu->arch.mmu->page_fault == kvm_tdp_page_fault,
		...
	};

  #ifdef CONFIG_RETPOLINE
	if (likely(fault.is_tdp))
		return kvm_tdp_page_fault(vcpu, &fault);
  #endif
	return vcpu->arch.mmu->page_fault(vcpu, &fault);
  }
On 26/05/21 23:10, Sean Christopherson wrote:
> - Have kvm_mmu_do_page_fault() handle initialization of the struct. That
>   will allow making most of the fields const, and will avoid the rather
>   painful kvm_page_fault_init().
>
> - Pass @vcpu separately. Yes, it's associated with the fault, but literally
>   the first line in every consumer is "struct kvm_vcpu *vcpu = kpf->vcpu;".
>
> - Use "fault" instead of "kpf", mostly because it reads better for people
>   that aren't intimately familiar with the code, but also to avoid having to
>   refactor a huge amount of code if we decide to rename kvm_page_fault, e.g.
>   if we decide to use that name to return fault information to userspace.
>
> - Snapshot anything that is computed in multiple places, even if it is
>   derivative of existing info. E.g. it probably makes sense to grab

I agree with all of these (especially it was a bit weird not to see vcpu in
the prototypes). Thanks Sean for the review!

Paolo
Thanks for feedback. Let me respin it.

On Thu, Jun 10, 2021 at 02:45:55PM +0200,
Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 26/05/21 23:10, Sean Christopherson wrote:
> > - Have kvm_mmu_do_page_fault() handle initialization of the struct. That
> >   will allow making most of the fields const, and will avoid the rather
> >   painful kvm_page_fault_init().
> >
> > - Pass @vcpu separately. Yes, it's associated with the fault, but literally
> >   the first line in every consumer is "struct kvm_vcpu *vcpu = kpf->vcpu;".
> >
> > - Use "fault" instead of "kpf", mostly because it reads better for people
> >   that aren't intimately familiar with the code, but also to avoid having to
> >   refactor a huge amount of code if we decide to rename kvm_page_fault, e.g.
> >   if we decide to use that name to return fault information to userspace.
> >
> > - Snapshot anything that is computed in multiple places, even if it is
> >   derivative of existing info. E.g. it probably makes sense to grab
>
> I agree with all of these (especially it was a bit weird not to see vcpu in
> the prototypes). Thanks Sean for the review!
>
> Paolo
On Thu, Jun 10, 2021 at 3:05 PM Isaku Yamahata <isaku.yamahata@gmail.com> wrote:
>
> Thanks for feedback. Let me respin it.

Hi Isaku,

I'm working on a series to plumb the memslot backing the faulting gfn
through the page fault handling stack to avoid redundant lookups. This
would be much cleaner to implement on top of your struct kvm_page_fault
series than the existing code.

Are you still planning to send another version of this series? Or if you
have decided to drop it or go in a different direction?

> On Thu, Jun 10, 2021 at 02:45:55PM +0200,
> Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On 26/05/21 23:10, Sean Christopherson wrote:
> > > - Have kvm_mmu_do_page_fault() handle initialization of the struct. That
> > >   will allow making most of the fields const, and will avoid the rather
> > >   painful kvm_page_fault_init().
> > >
> > > - Pass @vcpu separately. Yes, it's associated with the fault, but literally
> > >   the first line in every consumer is "struct kvm_vcpu *vcpu = kpf->vcpu;".
> > >
> > > - Use "fault" instead of "kpf", mostly because it reads better for people
> > >   that aren't intimately familiar with the code, but also to avoid having to
> > >   refactor a huge amount of code if we decide to rename kvm_page_fault, e.g.
> > >   if we decide to use that name to return fault information to userspace.
> > >
> > > - Snapshot anything that is computed in multiple places, even if it is
> > >   derivative of existing info. E.g. it probably makes sense to grab
> >
> > I agree with all of these (especially it was a bit weird not to see vcpu in
> > the prototypes). Thanks Sean for the review!
> >
> > Paolo
>
> --
> Isaku Yamahata <isaku.yamahata@gmail.com>
On 29/07/21 18:48, David Matlack wrote:
> On Thu, Jun 10, 2021 at 3:05 PM Isaku Yamahata <isaku.yamahata@gmail.com> wrote:
>>
>> Thanks for feedback. Let me respin it.
>
> Hi Isaku,
>
> I'm working on a series to plumb the memslot backing the faulting gfn
> through the page fault handling stack to avoid redundant lookups. This
> would be much cleaner to implement on top of your struct kvm_page_fault
> series than the existing code.
>
> Are you still planning to send another version of this series? Or if you
> have decided to drop it or go in a different direction?

I can work on this and post updated patches next week.

Paolo
On Thu, Jul 29, 2021 at 10:17 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 29/07/21 18:48, David Matlack wrote:
> > On Thu, Jun 10, 2021 at 3:05 PM Isaku Yamahata <isaku.yamahata@gmail.com> wrote:
> >>
> >> Thanks for feedback. Let me respin it.
> >
> > Hi Isaku,
> >
> > I'm working on a series to plumb the memslot backing the faulting gfn
> > through the page fault handling stack to avoid redundant lookups. This
> > would be much cleaner to implement on top of your struct kvm_page_fault
> > series than the existing code.
> >
> > Are you still planning to send another version of this series? Or if you
> > have decided to drop it or go in a different direction?
>
> I can work on this and post updated patches next week.

Sounds good. For the record I'm also looking at adding a per-vCPU LRU
slot, which *may* obviate the need to pass around the slot. (Isaku's
series is still a nice cleanup regardless.)

> Paolo
On 29/07/21 19:24, David Matlack wrote:
> On Thu, Jul 29, 2021 at 10:17 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>> On 29/07/21 18:48, David Matlack wrote:
>>> On Thu, Jun 10, 2021 at 3:05 PM Isaku Yamahata <isaku.yamahata@gmail.com> wrote:
>>>>
>>>> Thanks for feedback. Let me respin it.
>>>
>>> Hi Isaku,
>>>
>>> I'm working on a series to plumb the memslot backing the faulting gfn
>>> through the page fault handling stack to avoid redundant lookups. This
>>> would be much cleaner to implement on top of your struct kvm_page_fault
>>> series than the existing code.
>>>
>>> Are you still planning to send another version of this series? Or if you
>>> have decided to drop it or go in a different direction?
>>
>> I can work on this and post updated patches next week.
>
> Sounds good. For the record I'm also looking at adding a per-vCPU LRU
> slot, which *may* obviate the need to pass around the slot. (Isaku's
> series is still a nice cleanup regardless.)

Backport done, but not tested very well yet (and it's scary :)).

Paolo