
KVM: x86/mmu: Clean up function comments for dirty logging APIs

Message ID 20240611215805.340664-1-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: x86/mmu: Clean up function comments for dirty logging APIs

Commit Message

Sean Christopherson June 11, 2024, 9:58 p.m. UTC
Rework the function comment for kvm_arch_mmu_enable_log_dirty_pt_masked(),
as it has gotten a bit stale, and is the last source of warnings for W=1
builds in KVM x86 due to using a kernel-doc comment without documenting
all parameters.

Opportunistically subsume the function comments for
kvm_mmu_write_protect_pt_masked() and kvm_mmu_clear_dirty_pt_masked(), as
there is no value in regurgitating the same parameter information, and
capturing the differences between write-protection and PML-based dirty
logging is best done in a common location.

No functional change intended.

Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---

I don't actually care too much about the comment itself; I really just want to
get rid of the annoying warnings (I was *very* tempted to just delete the extra
asterisk), so if anyone has any opinion whatsoever...
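
For anyone who hasn't poked at kernel-doc: comments that open with "/**" are
parsed as kernel-doc, while plain "/*" comments are ignored, so dropping the
extra asterisk really would silence W=1.  An illustrative sketch (foo/bar are
made-up names):

	/**
	 * foo - kernel-doc comment; W=1 warns that 'bar' is not described
	 */
	void foo(int bar);

	/*
	 * Plain comment; kernel-doc (and thus W=1) never sees it.
	 */
	void foo(int bar);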

 arch/x86/kvm/mmu/mmu.c | 43 ++++++++++++++++++------------------------
 1 file changed, 18 insertions(+), 25 deletions(-)


base-commit: f99b052256f16224687e5947772f0942bff73fc1

Comments

David Matlack June 13, 2024, 9:15 p.m. UTC | #1
On 2024-06-11 02:58 PM, Sean Christopherson wrote:
> Rework the function comment for kvm_arch_mmu_enable_log_dirty_pt_masked(),
> as it has gotten a bit stale, and is the last source of warnings for W=1
> builds in KVM x86 due to using a kernel-doc comment without documenting
> all parameters.
> 
> Opportunistically subsume the function comments for
> kvm_mmu_write_protect_pt_masked() and kvm_mmu_clear_dirty_pt_masked(), as
> there is no value in regurgitating the same parameter information, and
> capturing the differences between write-protection and PML-based dirty
> logging is best done in a common location.
> 
> No functional change intended.
> 
> Cc: David Matlack <dmatlack@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> 
> I don't actually care too much about the comment itself; I really just want to
> get rid of the annoying warnings (I was *very* tempted to just delete the extra
> asterisk), so if anyone has any opinion whatsoever...

I vote to drop it and document the nuance around PML in the function
body itself. The initially-all-set / huge page stuff is already mostly
documented in the function body, e.g.:

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dfea48bdd285..d56fa757ee81 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1372,24 +1372,16 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	}
 }

-/**
- * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
- * PT level pages.
- *
- * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
- * enable dirty logging for them.
- *
- * We need to care about huge page mappings: e.g. during dirty logging we may
- * have such mappings.
- */
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
 	/*
-	 * Huge pages are NOT write protected when we start dirty logging in
-	 * initially-all-set mode; must write protect them here so that they
-	 * are split to 4K on the first write.
+	 * If the slot was assumed to be "initially all dirty", write-protect
+	 * huge pages to ensure they are split to 4KiB on the first write (KVM
+	 * dirty logs at 4KiB granularity). If eager page splitting is enabled,
+	 * try to split pages, e.g. so that vCPUs don't get saddled with the
+	 * cost of splitting.
 	 *
 	 * The gfn_offset is guaranteed to be aligned to 64, but the base_gfn
 	 * of memslot has no such restriction, so the range can cross two large
@@ -1411,7 +1403,16 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 						       PG_LEVEL_2M);
 	}

-	/* Now handle 4K PTEs.  */
+	/*
+	 * Re(enable) dirty logging for all 4KiB SPTEs that map the GFNs in
+	 * mask. If PML is enabled and the GFN doesn't need to be
+	 * write-protected for other reasons, e.g. shadow paging, clear the
+	 * Dirty bit. Otherwise clear the Writable bit.
+	 *
+	 * Note that kvm_mmu_clear_dirty_pt_masked() is called whenever PML is
+	 * enabled, but it chooses between clearing the Dirty bit and the
+	 * Writable bit based on the context.
+	 */
 	if (kvm_x86_ops.cpu_dirty_log_size)
 		kvm_mmu_clear_dirty_pt_masked(kvm, slot, gfn_offset, mask);
 	else

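The "context" above is roughly the following per-SPTE choice (a simplified
sketch along the lines of clear_dirty_pt_masked() in
arch/x86/kvm/mmu/tdp_mmu.c, not the verbatim code):

	/* SPTEs that must stay write-protected can't rely on the Dirty bit. */
	if (spte_ad_need_write_protect(old_spte))
		new_spte = old_spte & ~PT_WRITABLE_MASK;	/* write-protect */
	else
		new_spte = old_spte & ~shadow_dirty_mask;	/* clear Dirty */
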
> 
>  arch/x86/kvm/mmu/mmu.c | 43 ++++++++++++++++++------------------------
>  1 file changed, 18 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index f2c9580d9588..7eb87d473223 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1307,15 +1307,6 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
>  	return flush;
>  }
>  
> -/**
> - * kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
> - * @kvm: kvm instance
> - * @slot: slot to protect
> - * @gfn_offset: start of the BITS_PER_LONG pages we care about
> - * @mask: indicates which pages we should protect
> - *
> - * Used when we do not need to care about huge page mappings.
> - */
>  static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
>  				     struct kvm_memory_slot *slot,
>  				     gfn_t gfn_offset, unsigned long mask)
> @@ -1339,16 +1330,6 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
>  	}
>  }
>  
> -/**
> - * kvm_mmu_clear_dirty_pt_masked - clear MMU D-bit for PT level pages, or write
> - * protect the page if the D-bit isn't supported.
> - * @kvm: kvm instance
> - * @slot: slot to clear D-bit
> - * @gfn_offset: start of the BITS_PER_LONG pages we care about
> - * @mask: indicates which pages we should clear D-bit
> - *
> - * Used for PML to re-log the dirty GPAs after userspace querying dirty_bitmap.
> - */
>  static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
>  					 struct kvm_memory_slot *slot,
>  					 gfn_t gfn_offset, unsigned long mask)
> @@ -1373,14 +1354,26 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
>  }
>  
>  /**
> - * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
> - * PT level pages.
> + * kvm_arch_mmu_enable_log_dirty_pt_masked - (Re)Enable dirty logging for a set
> + * of GFNs
>   *
> - * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
> - * enable dirty logging for them.
> + * @kvm: kvm instance
> + * @slot: slot containing the gfns to dirty log
> + * @gfn_offset: start of the BITS_PER_LONG pages we care about

Someone once told me to avoid using "we" in comments :)

> + * @mask: indicates which gfns to dirty log (1 == enable)
>   *
> - * We need to care about huge page mappings: e.g. during dirty logging we may
> - * have such mappings.
> + * (Re)Enable dirty logging for the set of GFNs indicated by the slot,
> + * gfn_offset, and mask, e.g. after userspace has harvested dirty information
> + * and wants to re-log dirty GFNs for the next round of migration.
> + *
> + * If the slot was assumed to be "initially all dirty", write-protect hugepages
> + * to ensure they are split to 4KiB on the first write (KVM dirty logs at 4KiB
> + * granularity).  If eager page splitting is enabled, immediately try to split
> + * hugepages, e.g. so that vCPUs don't get saddled with the cost of the split.
> + *
> + * If Page-Modification Logging (PML) is enabled and the GFN doesn't need to be
> + * write-protected for other reasons, e.g. shadow paging, clear the Dirty Bit.
> + * Otherwise write-protect the GFN, i.e. clear the Writable Bit.
>   */
>  void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>  				struct kvm_memory_slot *slot,
> 
> base-commit: f99b052256f16224687e5947772f0942bff73fc1
> -- 
> 2.45.2.505.gda0bf45e8d-goog
>
Sean Christopherson June 13, 2024, 11:33 p.m. UTC | #2
On Thu, Jun 13, 2024, David Matlack wrote:
> On 2024-06-11 02:58 PM, Sean Christopherson wrote:
> > I don't actually care too much about the comment itself; I really just want to
> > get rid of the annoying warnings (I was *very* tempted to just delete the extra
> > asterisk), so if anyone has any opinion whatsoever...
> 
> I vote to drop it and document the nuance around PML in the function

As in, drop the function comment entirely?  I'm definitely a-ok with that too.

> > @@ -1373,14 +1354,26 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
> >  }
> >  
> >  /**
> > - * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
> > - * PT level pages.
> > + * kvm_arch_mmu_enable_log_dirty_pt_masked - (Re)Enable dirty logging for a set
> > + * of GFNs
> >   *
> > - * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
> > - * enable dirty logging for them.
> > + * @kvm: kvm instance
> > + * @slot: slot containing the gfns to dirty log
> > + * @gfn_offset: start of the BITS_PER_LONG pages we care about
> 
> Someone once told me to avoid using "we" in comments :)

Darn copy+paste.
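
For reference, the arch hook being documented here is invoked from the
generic dirty-log harvest path, 64 GFNs at a time.  A simplified sketch of
kvm_get_dirty_log_protect() in virt/kvm/kvm_main.c (locking and TLB flushing
elided):

	for (i = 0; i < n / sizeof(long); i++) {
		/* Atomically harvest and clear 64 dirty bits. */
		unsigned long mask = xchg(&dirty_bitmap[i], 0);

		if (!mask)
			continue;

		dirty_bitmap_buffer[i] = mask;

		/* Bit j of mask covers GFN base_gfn + i * BITS_PER_LONG + j. */
		kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
							i * BITS_PER_LONG, mask);
	}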

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f2c9580d9588..7eb87d473223 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1307,15 +1307,6 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	return flush;
 }
 
-/**
- * kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
- * @kvm: kvm instance
- * @slot: slot to protect
- * @gfn_offset: start of the BITS_PER_LONG pages we care about
- * @mask: indicates which pages we should protect
- *
- * Used when we do not need to care about huge page mappings.
- */
 static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 				     struct kvm_memory_slot *slot,
 				     gfn_t gfn_offset, unsigned long mask)
@@ -1339,16 +1330,6 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	}
 }
 
-/**
- * kvm_mmu_clear_dirty_pt_masked - clear MMU D-bit for PT level pages, or write
- * protect the page if the D-bit isn't supported.
- * @kvm: kvm instance
- * @slot: slot to clear D-bit
- * @gfn_offset: start of the BITS_PER_LONG pages we care about
- * @mask: indicates which pages we should clear D-bit
- *
- * Used for PML to re-log the dirty GPAs after userspace querying dirty_bitmap.
- */
 static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 					 struct kvm_memory_slot *slot,
 					 gfn_t gfn_offset, unsigned long mask)
@@ -1373,14 +1354,26 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 }
 
 /**
- * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
- * PT level pages.
+ * kvm_arch_mmu_enable_log_dirty_pt_masked - (Re)Enable dirty logging for a set
+ * of GFNs
  *
- * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
- * enable dirty logging for them.
+ * @kvm: kvm instance
+ * @slot: slot containing the gfns to dirty log
+ * @gfn_offset: start of the BITS_PER_LONG pages we care about
+ * @mask: indicates which gfns to dirty log (1 == enable)
  *
- * We need to care about huge page mappings: e.g. during dirty logging we may
- * have such mappings.
+ * (Re)Enable dirty logging for the set of GFNs indicated by the slot,
+ * gfn_offset, and mask, e.g. after userspace has harvested dirty information
+ * and wants to re-log dirty GFNs for the next round of migration.
+ *
+ * If the slot was assumed to be "initially all dirty", write-protect hugepages
+ * to ensure they are split to 4KiB on the first write (KVM dirty logs at 4KiB
+ * granularity).  If eager page splitting is enabled, immediately try to split
+ * hugepages, e.g. so that vCPUs don't get saddled with the cost of the split.
+ *
+ * If Page-Modification Logging (PML) is enabled and the GFN doesn't need to be
+ * write-protected for other reasons, e.g. shadow paging, clear the Dirty Bit.
+ * Otherwise write-protect the GFN, i.e. clear the Writable Bit.
  */
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,