Message ID: 20221024081435.204970-5-bhe@redhat.com
State: New
Series: Cleanup and optimization patches for percpu
On Mon, Oct 24, 2022 at 04:14:31PM +0800, Baoquan He wrote:
> When allocating an area from a chunk, pcpu_block_update_hint_alloc()
> is called to update chunk metadata, including chunk's and global
> nr_empty_pop_pages. However, if the allocation is not atomic, some
> blocks may not be populated with pages yet, while we still account it
> here. The number of pages will be subtracted with pcpu_chunk_populated()
> when populating pages.
>
> Adding code comment to make that more understandable.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/percpu.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index a8121302a79c..09e407338573 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -831,13 +831,15 @@ static void pcpu_block_update_hint_alloc(struct pcpu_chunk *chunk, int bit_off,
>
>  	/*
>  	 * Update s_block.
> -	 * block->first_free must be updated if the allocation takes its place.
> -	 * If the allocation breaks the contig_hint, a scan is required to
> -	 * restore this hint.
>  	 */
>  	if (s_block->contig_hint == PCPU_BITMAP_BLOCK_BITS)
>  		nr_empty_pages++;
>
> +	/*
> +	 * block->first_free must be updated if the allocation takes its place.
> +	 * If the allocation breaks the contig_hint, a scan is required to
> +	 * restore this hint.
> +	 */
>  	if (s_off == s_block->first_free)
>  		s_block->first_free = find_next_zero_bit(
>  				pcpu_index_alloc_map(chunk, s_index),
> @@ -912,6 +914,12 @@ static void pcpu_block_update_hint_alloc(struct pcpu_chunk *chunk, int bit_off,
>  		}
>  	}
>
> +	/*
> +	 * If the allocation is not atomic, some blocks may not
> +	 * be populated with pages, while we account it here.
> +	 * The number of pages will be subtracted with
> +	 * pcpu_chunk_populated() when populating pages.
> +	 */
>  	if (nr_empty_pages)
>  		pcpu_update_empty_pages(chunk, -nr_empty_pages);
>
> --
> 2.34.1
>

Heh, that's a little more subtle than I remember it being.

Acked-by: Dennis Zhou <dennis@kernel.org>

Thanks,
Dennis
On 10/24/22 at 09:56am, Dennis Zhou wrote:
> On Mon, Oct 24, 2022 at 04:14:31PM +0800, Baoquan He wrote:
> > When allocating an area from a chunk, pcpu_block_update_hint_alloc()
> > is called to update chunk metadata, including chunk's and global
> > nr_empty_pop_pages. However, if the allocation is not atomic, some
> > blocks may not be populated with pages yet, while we still account it
> > here. The number of pages will be subtracted with pcpu_chunk_populated()
> > when populating pages.
> >
> > Adding code comment to make that more understandable.
> >
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  mm/percpu.c | 14 +++++++++++---
> >  1 file changed, 11 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index a8121302a79c..09e407338573 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -831,13 +831,15 @@ static void pcpu_block_update_hint_alloc(struct pcpu_chunk *chunk, int bit_off,
> >
> >  	/*
> >  	 * Update s_block.
> > -	 * block->first_free must be updated if the allocation takes its place.
> > -	 * If the allocation breaks the contig_hint, a scan is required to
> > -	 * restore this hint.
> >  	 */
> >  	if (s_block->contig_hint == PCPU_BITMAP_BLOCK_BITS)
> >  		nr_empty_pages++;
> >
> > +	/*
> > +	 * block->first_free must be updated if the allocation takes its place.
> > +	 * If the allocation breaks the contig_hint, a scan is required to
> > +	 * restore this hint.
> > +	 */
> >  	if (s_off == s_block->first_free)
> >  		s_block->first_free = find_next_zero_bit(
> >  				pcpu_index_alloc_map(chunk, s_index),
> > @@ -912,6 +914,12 @@ static void pcpu_block_update_hint_alloc(struct pcpu_chunk *chunk, int bit_off,
> >  		}
> >  	}
> >
> > +	/*
> > +	 * If the allocation is not atomic, some blocks may not
> > +	 * be populated with pages, while we account it here.
> > +	 * The number of pages will be subtracted with
> > +	 * pcpu_chunk_populated() when populating pages.
> > +	 */
> >  	if (nr_empty_pages)
> >  		pcpu_update_empty_pages(chunk, -nr_empty_pages);
> >
> > --
> > 2.34.1
> >
>
> Heh, that's a little more subtle than I remember it being.
>
> Acked-by: Dennis Zhou <dennis@kernel.org>

Thanks for reviewing, Dennis.

When I rechecked the code comment, I realized I had said the opposite in the v1 comment: nr_empty_pop_pages is subtracted in pcpu_block_update_hint_alloc() whether the allocation is atomic or not. In the non-atomic case, the number is added back in pcpu_chunk_populated() when the pages are populated. I have updated and resent v2, sorry for the mistake.

Thanks
Baoquan
diff --git a/mm/percpu.c b/mm/percpu.c
index a8121302a79c..09e407338573 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -831,13 +831,15 @@ static void pcpu_block_update_hint_alloc(struct pcpu_chunk *chunk, int bit_off,
 
 	/*
 	 * Update s_block.
-	 * block->first_free must be updated if the allocation takes its place.
-	 * If the allocation breaks the contig_hint, a scan is required to
-	 * restore this hint.
 	 */
 	if (s_block->contig_hint == PCPU_BITMAP_BLOCK_BITS)
 		nr_empty_pages++;
 
+	/*
+	 * block->first_free must be updated if the allocation takes its place.
+	 * If the allocation breaks the contig_hint, a scan is required to
+	 * restore this hint.
+	 */
 	if (s_off == s_block->first_free)
 		s_block->first_free = find_next_zero_bit(
 				pcpu_index_alloc_map(chunk, s_index),
@@ -912,6 +914,12 @@ static void pcpu_block_update_hint_alloc(struct pcpu_chunk *chunk, int bit_off,
 		}
 	}
 
+	/*
+	 * If the allocation is not atomic, some blocks may not
+	 * be populated with pages, while we account it here.
+	 * The number of pages will be subtracted with
+	 * pcpu_chunk_populated() when populating pages.
+	 */
 	if (nr_empty_pages)
 		pcpu_update_empty_pages(chunk, -nr_empty_pages);
 
When allocating an area from a chunk, pcpu_block_update_hint_alloc()
is called to update chunk metadata, including chunk's and global
nr_empty_pop_pages. However, if the allocation is not atomic, some
blocks may not be populated with pages yet, while we still account it
here. The number of pages will be subtracted with pcpu_chunk_populated()
when populating pages.

Adding code comment to make that more understandable.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/percpu.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)