Message ID: 1576045585-8536-7-git-send-email-linmiaohe@huawei.com
State: New, archived
Series: Fix various comment errors
On Wed, Dec 11, 2019 at 02:26:25PM +0800, linmiaohe wrote:
> From: Miaohe Lin <linmiaohe@huawei.com>
>
> Fix some writing mistakes in the comments.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/vmx/vmx.c          | 2 +-
>  virt/kvm/kvm_main.c             | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 159a28512e4c..efba864ed42d 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -606,7 +606,7 @@ struct kvm_vcpu_arch {
>  	 * Paging state of an L2 guest (used for nested npt)
>  	 *
>  	 * This context will save all necessary information to walk page tables
> -	 * of the an L2 guest. This context is only initialized for page table
> +	 * of the L2 guest. This context is only initialized for page table

I'd whack "the" instead of "an", i.e. "...walk page tables of an L2 guest", as KVM isn't limited to just one L2 guest.

>  	 * walking and not for faulting since we never handle l2 page faults on

While you're here, want to change "l2" to "L2"?

>  	 * the host.
>  	 */
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 1be3854f1090..dae712c8785e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1922,7 +1922,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  }
>
>  /*
> - * Writes msr value into into the appropriate "register".
> + * Writes msr value into the appropriate "register".
>   * Returns 0 on success, non-0 otherwise.
>   * Assumes vcpu_load() was already called.
>   */
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0501272268f..1a6d5ebd5c42 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1519,7 +1519,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
>  /*
>   * The fast path to get the writable pfn which will be stored in @pfn,
>   * true indicates success, otherwise false is returned. It's also the
> -	 * only part that runs if we can are in atomic context.
> +	 * only part that runs if we can in atomic context.

This should remove "can" instead of "are", i.e. "...part that runs if we are in atomic context". The comment is calling out that hva_to_pfn() will return immediately if hva_to_pfn_fast() fails and the kernel is in atomic context.

>  */
> static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
> 			    bool *writable, kvm_pfn_t *pfn)
> --
> 2.19.1
>