Message ID | 20210517065155.7257-2-thunder.leizhen@huawei.com (mailing list archive)
---|---
State | New, archived
Series | mm: clear spelling mistakes
On Mon, May 17, 2021 at 12:22 PM Zhen Lei <thunder.leizhen@huawei.com> wrote:
>
> Fix some spelling mistakes in comments:
> statments ==> statements
> adresses ==> addresses
> aggresive ==> aggressive
> datas ==> data
>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>

Reviewed-by: Souptick Joarder <jrdr.linux@gmail.com>

> ---
>  include/linux/mm.h       | 2 +-
>  include/linux/mm_types.h | 4 ++--
>  include/linux/mmzone.h   | 2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c274f75efcf9..12d13c8708a5 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -155,7 +155,7 @@ extern int mmap_rnd_compat_bits __read_mostly;
>  /* This function must be updated when the size of struct page grows above 80
>   * or reduces below 56. The idea that compiler optimizes out switch()
>   * statement, and only leaves move/store instructions. Also the compiler can
> - * combine write statments if they are both assignments and can be reordered,
> + * combine write statements if they are both assignments and can be reordered,
>   * this can result in several of the writes here being dropped.
>   */
>  #define mm_zero_struct_page(pp) __mm_zero_struct_page(pp)
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 5aacc1c10a45..7034f5673d26 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -397,7 +397,7 @@ struct mm_struct {
>  		unsigned long mmap_base;	/* base of mmap area */
>  		unsigned long mmap_legacy_base;	/* base of mmap area in bottom-up allocations */
>  #ifdef CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES
> -		/* Base adresses for compatible mmap() */
> +		/* Base addresses for compatible mmap() */
>  		unsigned long mmap_compat_base;
>  		unsigned long mmap_compat_legacy_base;
>  #endif
> @@ -439,7 +439,7 @@ struct mm_struct {
>  		 * @has_pinned: Whether this mm has pinned any pages. This can
>  		 * be either replaced in the future by @pinned_vm when it
>  		 * becomes stable, or grow into a counter on its own. We're
> -		 * aggresive on this bit now - even if the pinned pages were
> +		 * aggressive on this bit now - even if the pinned pages were
>  		 * unpinned later on, we'll still keep this bit set for the
>  		 * lifecycle of this mm just for simplicity.
>  		 */
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 0d53eba1c383..7d7d86220f01 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -113,7 +113,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
>  struct pglist_data;
>
>  /*
> - * Add a wild amount of padding here to ensure datas fall into separate
> + * Add a wild amount of padding here to ensure data fall into separate
>   * cachelines. There are very few zone structures in the machine, so space
>   * consumption is not a concern here.
>   */
> --
> 2.25.1
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c274f75efcf9..12d13c8708a5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -155,7 +155,7 @@ extern int mmap_rnd_compat_bits __read_mostly;
 /* This function must be updated when the size of struct page grows above 80
  * or reduces below 56. The idea that compiler optimizes out switch()
  * statement, and only leaves move/store instructions. Also the compiler can
- * combine write statments if they are both assignments and can be reordered,
+ * combine write statements if they are both assignments and can be reordered,
  * this can result in several of the writes here being dropped.
  */
 #define mm_zero_struct_page(pp) __mm_zero_struct_page(pp)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5aacc1c10a45..7034f5673d26 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -397,7 +397,7 @@ struct mm_struct {
 		unsigned long mmap_base;	/* base of mmap area */
 		unsigned long mmap_legacy_base;	/* base of mmap area in bottom-up allocations */
 #ifdef CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES
-		/* Base adresses for compatible mmap() */
+		/* Base addresses for compatible mmap() */
 		unsigned long mmap_compat_base;
 		unsigned long mmap_compat_legacy_base;
 #endif
@@ -439,7 +439,7 @@ struct mm_struct {
 		 * @has_pinned: Whether this mm has pinned any pages. This can
 		 * be either replaced in the future by @pinned_vm when it
 		 * becomes stable, or grow into a counter on its own. We're
-		 * aggresive on this bit now - even if the pinned pages were
+		 * aggressive on this bit now - even if the pinned pages were
 		 * unpinned later on, we'll still keep this bit set for the
 		 * lifecycle of this mm just for simplicity.
 		 */
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0d53eba1c383..7d7d86220f01 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -113,7 +113,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
 struct pglist_data;

 /*
- * Add a wild amount of padding here to ensure datas fall into separate
+ * Add a wild amount of padding here to ensure data fall into separate
  * cachelines. There are very few zone structures in the machine, so space
  * consumption is not a concern here.
  */
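A note on the first hunk: the comment above mm_zero_struct_page() describes a zeroing helper built around a switch() on the structure's size, which the compiler is expected to flatten into a short run of stores, and adjacent stores may then be merged. What follows is a minimal, standalone sketch of that pattern under assumed names: struct demo_page and zero_demo_page are illustrative, not the kernel's implementation.

/* Stand-in for a small fixed-size structure; not the kernel's struct page. */
struct demo_page {
	unsigned long words[8];		/* 64 bytes on a 64-bit machine */
};

/*
 * Zero the structure via a switch() on its compile-time size.  Because
 * sizeof() is a constant, the compiler discards the dead cases and emits
 * straight-line stores; it may also combine adjacent assignments into
 * wider stores, which is the "combine write statements" behaviour the
 * kernel comment warns can drop some of the individual writes.
 */
static inline void zero_demo_page(struct demo_page *page)
{
	unsigned long *p = page->words;

	switch (sizeof(struct demo_page)) {
	case 64:
		p[7] = 0;
		p[6] = 0;
		/* fall through */
	case 48:
		p[5] = 0;
		p[4] = 0;
		/* fall through */
	case 32:
		p[3] = 0;
		p[2] = 0;
		p[1] = 0;
		p[0] = 0;
		break;
	}
}

In the real header the helper is additionally guarded so that struct page stays within the expected 56 to 80 byte range; the sketch only shows the switch-to-stores shape the comment refers to.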
Fix some spelling mistakes in comments:
statments ==> statements
adresses ==> addresses
aggresive ==> aggressive
datas ==> data

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 include/linux/mm.h       | 2 +-
 include/linux/mm_types.h | 4 ++--
 include/linux/mmzone.h   | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)
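For the mmzone.h hunk, the corrected comment sits above padding helpers that push groups of fields onto separate cachelines, so that CPUs hammering one group do not keep invalidating the line holding another. Below is a minimal, standalone sketch of that idea; struct zone_demo, its fields, the DEMO_* names and the 64-byte line size are assumptions for illustration, not the kernel's definitions.

/* Assumed cacheline size for the sketch; real systems vary. */
#define DEMO_CACHELINE_BYTES	64

/*
 * Force the next member to start on a fresh cacheline.  The zero-length
 * array (a GNU C extension) occupies no space; only its alignment matters.
 */
#define DEMO_CACHELINE_PADDING(name) \
	char name[0] __attribute__((aligned(DEMO_CACHELINE_BYTES)))

/*
 * Hypothetical structure: allocation-side and reclaim-side counters are
 * kept on different cachelines to avoid false sharing between CPUs that
 * touch only one of the two groups.
 */
struct zone_demo {
	/* Hot on the allocation path. */
	unsigned long free_pages;
	unsigned long watermark;

	DEMO_CACHELINE_PADDING(pad1_);

	/* Hot on the reclaim path. */
	unsigned long nr_scanned;
	unsigned long nr_reclaimed;

	DEMO_CACHELINE_PADDING(pad2_);
};

As the corrected comment notes, only a handful of zone structures exist per machine, so spending a cacheline or two of padding per instance costs effectively nothing.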