[1/4] mm: Fix kernel-doc warning from tlb_flush_rmaps()

Message ID: 20230818200630.2719595-2-willy@infradead.org (mailing list archive)
State: New
Series: Improve mm documentation

Commit Message

Matthew Wilcox Aug. 18, 2023, 8:06 p.m. UTC
The vma parameter wasn't described.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/mmu_gather.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Randy Dunlap Aug. 20, 2023, 12:51 a.m. UTC | #1
On 8/18/23 13:06, Matthew Wilcox (Oracle) wrote:
> The vma parameter wasn't described.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Randy Dunlap <rdunlap@infradead.org>
Thanks.

> ---
>  mm/mmu_gather.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index ea9683e12936..4f559f4ddd21 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -63,6 +63,7 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_
>  /**
>   * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
>   * @tlb: the current mmu_gather
> + * @vma: The memory area from which the pages are being removed.
>   *
>   * Note that because of how tlb_next_batch() above works, we will
>   * never start multiple new batches with pending delayed rmaps, so
Mike Rapoport Aug. 21, 2023, 2:51 p.m. UTC | #2
On Fri, Aug 18, 2023 at 09:06:27PM +0100, Matthew Wilcox (Oracle) wrote:
> The vma parameter wasn't described.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  mm/mmu_gather.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index ea9683e12936..4f559f4ddd21 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -63,6 +63,7 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_
>  /**
>   * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
>   * @tlb: the current mmu_gather
> + * @vma: The memory area from which the pages are being removed.
>   *
>   * Note that because of how tlb_next_batch() above works, we will
>   * never start multiple new batches with pending delayed rmaps, so
> -- 
> 2.40.1
>

Patch

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index ea9683e12936..4f559f4ddd21 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -63,6 +63,7 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_
 /**
  * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
  * @tlb: the current mmu_gather
+ * @vma: The memory area from which the pages are being removed.
  *
  * Note that because of how tlb_next_batch() above works, we will
  * never start multiple new batches with pending delayed rmaps, so
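
Not quoted from the thread, but for context on the fix: kernel-doc expects a "@name:" line for every parameter of the documented function, and the checker (scripts/kernel-doc -none, typically also run as part of W=1 builds) warns when one is missing. A sketch of how the comment block reads after this patch is below; the "..." stands for the rest of the existing comment, the function body is omitted, and the prototype is inferred from the two documented parameters rather than copied from the thread, so treat it as illustrative:

/**
 * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
 * @tlb: the current mmu_gather
 * @vma: The memory area from which the pages are being removed.
 *
 * Note that because of how tlb_next_batch() above works, we will
 * never start multiple new batches with pending delayed rmaps, so
 * ...
 */
void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)

With the @vma line in place, the checker should no longer report a missing parameter description for tlb_flush_rmaps().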