Message ID | 2bb6e13e-44df-4920-52d9-4d3539945f73@infradead.org (mailing list archive)
---|---
State | New, archived
Series | mm: drop duplicated words in <linux/pgtable.h>
On 2020-07-15T18:29:51-07:00 Randy Dunlap <rdunlap@infradead.org> wrote:

> From: Randy Dunlap <rdunlap@infradead.org>
>
> Drop the doubled words "used" and "by".
>
> Drop the repeated acronym "TLB" and make several other fixes around it.
> (capital letters, spellos)
>
> Signed-off-by: Randy Dunlap <rdunlap@infradead.org>

Reviewed-by: SeongJae Park <sjpark@amazon.de>

Thanks,
SeongJae Park

> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> ---
>  include/linux/pgtable.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> --- linux-next-20200714.orig/include/linux/pgtable.h
> +++ linux-next-20200714/include/linux/pgtable.h
> @@ -866,7 +866,7 @@ static inline void ptep_modify_prot_comm
>
>  /*
>   * No-op macros that just return the current protection value. Defined here
> - * because these macros can be used used even if CONFIG_MMU is not defined.
> + * because these macros can be used even if CONFIG_MMU is not defined.
>   */
>  #ifndef pgprot_encrypted
>  #define pgprot_encrypted(prot)	(prot)
> @@ -1259,7 +1259,7 @@ static inline int pmd_trans_unstable(pmd
>   * Technically a PTE can be PROTNONE even when not doing NUMA balancing but
>   * the only case the kernel cares is for NUMA balancing and is only ever set
>   * when the VMA is accessible. For PROT_NONE VMAs, the PTEs are not marked
> - * _PAGE_PROTNONE so by by default, implement the helper as "always no". It
> + * _PAGE_PROTNONE so by default, implement the helper as "always no". It
>   * is the responsibility of the caller to distinguish between PROT_NONE
>   * protections and NUMA hinting fault protections.
>   */
> @@ -1343,10 +1343,10 @@ static inline int pmd_free_pte_page(pmd_
>  /*
>   * ARCHes with special requirements for evicting THP backing TLB entries can
>   * implement this. Otherwise also, it can help optimize normal TLB flush in
> - * THP regime. stock flush_tlb_range() typically has optimization to nuke the
> - * entire TLB TLB if flush span is greater than a threshold, which will
> - * likely be true for a single huge page. Thus a single thp flush will
> - * invalidate the entire TLB which is not desitable.
> + * THP regime. Stock flush_tlb_range() typically has optimization to nuke the
> + * entire TLB if flush span is greater than a threshold, which will
> + * likely be true for a single huge page. Thus a single THP flush will
> + * invalidate the entire TLB which is not desirable.
>   * e.g. see arch/arc: flush_pmd_tlb_range
>   */
>  #define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
>