| Message ID | 20190224132518.20586-2-peng.fan@nxp.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/2] percpu: km: remove SMP check |
On Sun, Feb 24, 2019 at 01:13:50PM +0000, Peng Fan wrote:
> percpu-km is used on UP systems, which have only one group, so the
> group offset will always be 0; there is no need to subtract
> pcpu_group_offsets[0] when assigning chunk->base_addr.
>
> Signed-off-by: Peng Fan <peng.fan@nxp.com>
> ---
>  mm/percpu-km.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/percpu-km.c b/mm/percpu-km.c
> index 66e5598be876..8872c21a487b 100644
> --- a/mm/percpu-km.c
> +++ b/mm/percpu-km.c
> @@ -67,7 +67,7 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
>  		pcpu_set_page_chunk(nth_page(pages, i), chunk);
>
>  	chunk->data = pages;
> -	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];
> +	chunk->base_addr = page_address(pages);
>
>  	spin_lock_irqsave(&pcpu_lock, flags);
>  	pcpu_chunk_populated(chunk, 0, nr_pages, false);
> --
> 2.16.4
>

While I do think you're right, creating a chunk is not part of the
critical path, and subtracting 0 is incredibly minor overhead. So I'd
rather keep the code as is to maintain consistency between percpu-vm.c
and percpu-km.c.

Thanks,
Dennis
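To make the arithmetic under discussion concrete, here is a small standalone sketch of the base_addr convention both back-ends follow: base_addr is chosen so that adding a group's offset back yields that group's actual mapping. This is an illustrative userspace model only, not kernel code; the names group_offsets, alloc_addr, and group0_mapping are invented for the example. With a single group at offset 0, which is the UP case percpu-km serves, the subtraction is a no-op.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative model of the base_addr arithmetic, not kernel code. */
#define NR_GROUPS 1                        /* UP: a single group */
static const ptrdiff_t group_offsets[NR_GROUPS] = { 0 };

int main(void)
{
	static char group0_mapping[4096];  /* stands in for the chunk's pages */
	char *alloc_addr = group0_mapping; /* analog of page_address(pages)   */

	/* Pre-patch form: subtract group 0's offset from its mapping address. */
	char *base_addr = alloc_addr - group_offsets[0];

	/* With one group at offset 0 (the UP case) nothing changes ...        */
	assert(base_addr == alloc_addr);

	/* ... and adding the offset back still recovers the group's mapping.  */
	assert(base_addr + group_offsets[0] == alloc_addr);

	printf("base_addr == alloc_addr: %d\n", base_addr == alloc_addr);
	return 0;
}
```

When more than one group can exist, as with the vmalloc-based back-end, keeping the general "address minus group 0's offset" form means both files read the same way, which is the consistency argument above.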
Hi Dennis,

> -----Original Message-----
> From: dennis@kernel.org [mailto:dennis@kernel.org]
> Sent: 25 February 2019 23:16
> To: Peng Fan <peng.fan@nxp.com>
> Cc: tj@kernel.org; cl@linux.com; linux-mm@kvack.org;
> linux-kernel@vger.kernel.org; van.freenix@gmail.com
> Subject: Re: [PATCH 2/2] percpu: km: no need to consider
> pcpu_group_offsets[0]
>
> On Sun, Feb 24, 2019 at 01:13:50PM +0000, Peng Fan wrote:
> > percpu-km is used on UP systems, which have only one group, so the
> > group offset will always be 0; there is no need to subtract
> > pcpu_group_offsets[0] when assigning chunk->base_addr.
> >
> > Signed-off-by: Peng Fan <peng.fan@nxp.com>
> > ---
> >  mm/percpu-km.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/percpu-km.c b/mm/percpu-km.c
> > index 66e5598be876..8872c21a487b 100644
> > --- a/mm/percpu-km.c
> > +++ b/mm/percpu-km.c
> > @@ -67,7 +67,7 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
> >  		pcpu_set_page_chunk(nth_page(pages, i), chunk);
> >
> >  	chunk->data = pages;
> > -	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];
> > +	chunk->base_addr = page_address(pages);
> >
> >  	spin_lock_irqsave(&pcpu_lock, flags);
> >  	pcpu_chunk_populated(chunk, 0, nr_pages, false);
> > --
> > 2.16.4
> >
>
> While I do think you're right, creating a chunk is not part of the
> critical path, and subtracting 0 is incredibly minor overhead. So I'd
> rather keep the code as is to maintain consistency between percpu-vm.c
> and percpu-km.c.

OK, keeping it as is for consistency is fine with me, since you prefer that.

Thanks,
Peng.

>
> Thanks,
> Dennis
On Mon, 25 Feb 2019, dennis@kernel.org wrote:

> > @@ -67,7 +67,7 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
> >  		pcpu_set_page_chunk(nth_page(pages, i), chunk);
> >
> >  	chunk->data = pages;
> > -	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];
> > +	chunk->base_addr = page_address(pages);
> >
> >  	spin_lock_irqsave(&pcpu_lock, flags);
> >  	pcpu_chunk_populated(chunk, 0, nr_pages, false);
> > --
> > 2.16.4
> >
>
> While I do think you're right, creating a chunk is not part of the
> critical path, and subtracting 0 is incredibly minor overhead. So I'd
> rather keep the code as is to maintain consistency between percpu-vm.c
> and percpu-km.c.

Well, it is confusing if the expression is there but never used. It is
clearer with the patch.
On Tue, Feb 26, 2019 at 03:15:50PM +0000, Christopher Lameter wrote:
> On Mon, 25 Feb 2019, dennis@kernel.org wrote:
>
> > > @@ -67,7 +67,7 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
> > >  		pcpu_set_page_chunk(nth_page(pages, i), chunk);
> > >
> > >  	chunk->data = pages;
> > > -	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];
> > > +	chunk->base_addr = page_address(pages);
> > >
> > >  	spin_lock_irqsave(&pcpu_lock, flags);
> > >  	pcpu_chunk_populated(chunk, 0, nr_pages, false);
> > > --
> > > 2.16.4
> > >
> >
> > While I do think you're right, creating a chunk is not part of the
> > critical path, and subtracting 0 is incredibly minor overhead. So I'd
> > rather keep the code as is to maintain consistency between percpu-vm.c
> > and percpu-km.c.
>
> Well, it is confusing if the expression is there but never used. It is
> clearer with the patch.
>

Okay. I'll apply it with your ack if that's fine.

Thanks,
Dennis
```diff
diff --git a/mm/percpu-km.c b/mm/percpu-km.c
index 66e5598be876..8872c21a487b 100644
--- a/mm/percpu-km.c
+++ b/mm/percpu-km.c
@@ -67,7 +67,7 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
 		pcpu_set_page_chunk(nth_page(pages, i), chunk);
 
 	chunk->data = pages;
-	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];
+	chunk->base_addr = page_address(pages);
 
 	spin_lock_irqsave(&pcpu_lock, flags);
 	pcpu_chunk_populated(chunk, 0, nr_pages, false);
```
percpu-km is used on UP systems, which have only one group, so the group
offset will always be 0; there is no need to subtract pcpu_group_offsets[0]
when assigning chunk->base_addr.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
---
 mm/percpu-km.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
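For readers less familiar with the percpu layout, the reason base_addr matters at all is that per-CPU addresses are resolved by adding per-unit offsets to it, so base_addr only has to be the reference point those offsets were computed against. The sketch below is a simplified userspace model of that lookup, not the kernel's actual code; the helper chunk_addr and the arrays are invented for the example. With one CPU, one group, and an offset of 0, the page address itself already is that reference point, which is what the patch relies on.

```c
#include <stddef.h>
#include <stdio.h>

/*
 * Simplified model, not kernel code: a chunk holds every CPU's copy of
 * its variables, and cpu_unit_off[cpu] says where that CPU's copy
 * starts relative to base_addr.  On UP there is one CPU, one group,
 * and its offset is 0.
 */
enum { NR_CPUS_MODEL = 1, UNIT_SIZE = 1024 };

static char chunk_storage[NR_CPUS_MODEL * UNIT_SIZE];
static const ptrdiff_t cpu_unit_off[NR_CPUS_MODEL] = { 0 };

/* Resolve cpu's copy of a variable located var_off bytes into a unit. */
static void *chunk_addr(char *base_addr, int cpu, size_t var_off)
{
	return base_addr + cpu_unit_off[cpu] + var_off;
}

int main(void)
{
	char *base_addr = chunk_storage;   /* analog of page_address(pages) */

	/* CPU 0's copy of a variable 128 bytes into the unit. */
	void *p = chunk_addr(base_addr, 0, 128);

	printf("cpu0 variable at offset %td from base_addr\n",
	       (char *)p - base_addr);
	return 0;
}
```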