Message ID: 20190224092838.3417-1-peng.fan@nxp.com (mailing list archive)
State: New, archived
Series: [RFC] percpu: decrease pcpu_nr_slots by 1
On Sun, Feb 24, 2019 at 09:17:08AM +0000, Peng Fan wrote:
> Entry pcpu_slot[pcpu_nr_slots - 2] is wasted with the current code logic.
> pcpu_nr_slots is calculated with `__pcpu_size_to_slot(size) + 2`.
> Take pcpu_unit_size as 1024 for example: __pcpu_size_to_slot() returns
> max(11 - PCPU_SLOT_BASE_SHIFT + 2, 1), which is 8, so pcpu_nr_slots
> will be 10.
>
> The chunk with free_bytes 1024 will be linked into pcpu_slot[9].
> However, free_bytes in range [512, 1024) will be linked into
> pcpu_slot[7], because `fls(512) - PCPU_SLOT_BASE_SHIFT + 2` is 7.
> So pcpu_slot[8] has no chance to be used.
>
> According to the comment on PCPU_SLOT_BASE_SHIFT, 1~31 bytes share the
> same slot and PCPU_SLOT_BASE_SHIFT is defined as 5. But actually 1~15
> bytes share slot 1 (if we do not take PCPU_MIN_ALLOC_SIZE into
> consideration) and 16~31 bytes share slot 2. Calculation as below:
>   highbit = fls(16) -> highbit = 5
>   max(5 - PCPU_SLOT_BASE_SHIFT + 2, 1) equals 2, not 1.
>
> This patch decreases pcpu_nr_slots to avoid wasting one slot and lets
> [PCPU_MIN_ALLOC_SIZE, 31) really share the same slot.
>
> Signed-off-by: Peng Fan <peng.fan@nxp.com>
> ---
>
> V1:
>  Not very sure about whether it is intended to leave the slot there.
>
>  mm/percpu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 8d9933db6162..12a9ba38f0b5 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -219,7 +219,7 @@ static bool pcpu_addr_in_chunk(struct pcpu_chunk *chunk, void *addr)
>  static int __pcpu_size_to_slot(int size)
>  {
>         int highbit = fls(size);        /* size is in bytes */
> -       return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
> +       return max(highbit - PCPU_SLOT_BASE_SHIFT + 1, 1);
>  }

Honestly, it may be better to just have [1-16) [16-31) be separate. I'm
working on a change to this area, so I may change what's going on here.

>
>  static int pcpu_size_to_slot(int size)
> @@ -2145,7 +2145,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>          * Allocate chunk slots.  The additional last slot is for
>          * empty chunks.
>          */
> -       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
> +       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 1;
>         pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
>                                    SMP_CACHE_BYTES);
>         for (i = 0; i < pcpu_nr_slots; i++)
> --
> 2.16.4
>

This is a tricky change. The nice thing about keeping the additional
slot around is that it ensures a distinction between a completely empty
chunk and a nearly empty chunk. It happens to be that the logic creates
power of 2 chunks which ends up being an additional slot anyway. So,
given that this logic is tricky and architecture dependent, I don't feel
comfortable making this change as the risk greatly outweighs the
benefit.

Thanks,
Dennis
Hi Dennis,

> -----Original Message-----
> From: dennis@kernel.org [mailto:dennis@kernel.org]
> Sent: February 25, 2019 23:24
> To: Peng Fan <peng.fan@nxp.com>
> Cc: tj@kernel.org; cl@linux.com; linux-mm@kvack.org;
> linux-kernel@vger.kernel.org; van.freenix@gmail.com
> Subject: Re: [RFC] percpu: decrease pcpu_nr_slots by 1
>
> On Sun, Feb 24, 2019 at 09:17:08AM +0000, Peng Fan wrote:
> > Entry pcpu_slot[pcpu_nr_slots - 2] is wasted with the current code logic.
> > pcpu_nr_slots is calculated with `__pcpu_size_to_slot(size) + 2`.
> > Take pcpu_unit_size as 1024 for example: __pcpu_size_to_slot() returns
> > max(11 - PCPU_SLOT_BASE_SHIFT + 2, 1), which is 8, so pcpu_nr_slots
> > will be 10.
> >
> > The chunk with free_bytes 1024 will be linked into pcpu_slot[9].
> > However, free_bytes in range [512, 1024) will be linked into
> > pcpu_slot[7], because `fls(512) - PCPU_SLOT_BASE_SHIFT + 2` is 7.
> > So pcpu_slot[8] has no chance to be used.
> >
> > According to the comment on PCPU_SLOT_BASE_SHIFT, 1~31 bytes share the
> > same slot and PCPU_SLOT_BASE_SHIFT is defined as 5. But actually 1~15
> > bytes share slot 1 (if we do not take PCPU_MIN_ALLOC_SIZE into
> > consideration) and 16~31 bytes share slot 2. Calculation as below:
> >   highbit = fls(16) -> highbit = 5
> >   max(5 - PCPU_SLOT_BASE_SHIFT + 2, 1) equals 2, not 1.
> >
> > This patch decreases pcpu_nr_slots to avoid wasting one slot and lets
> > [PCPU_MIN_ALLOC_SIZE, 31) really share the same slot.
> >
> > Signed-off-by: Peng Fan <peng.fan@nxp.com>
> > ---
> >
> > V1:
> >  Not very sure about whether it is intended to leave the slot there.
> >
> >  mm/percpu.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 8d9933db6162..12a9ba38f0b5 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -219,7 +219,7 @@ static bool pcpu_addr_in_chunk(struct pcpu_chunk *chunk, void *addr)
> >  static int __pcpu_size_to_slot(int size)
> >  {
> >         int highbit = fls(size);        /* size is in bytes */
> > -       return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
> > +       return max(highbit - PCPU_SLOT_BASE_SHIFT + 1, 1);
> >  }
>
> Honestly, it may be better to just have [1-16) [16-31) be separate. I'm
> working on a change to this area, so I may change what's going on here.
>
> >
> >  static int pcpu_size_to_slot(int size)
> > @@ -2145,7 +2145,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> >          * Allocate chunk slots.  The additional last slot is for
> >          * empty chunks.
> >          */
> > -       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
> > +       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 1;
> >         pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
> >                                    SMP_CACHE_BYTES);
> >         for (i = 0; i < pcpu_nr_slots; i++)
> > --
> > 2.16.4
> >
>
> This is a tricky change. The nice thing about keeping the additional
> slot around is that it ensures a distinction between a completely empty
> chunk and a nearly empty chunk.

Were any issues hit before when not keeping the unused slot?
From reading the code and git history I could not find any information.
I tried this code on aarch64 qemu and did not hit issues.

> It happens to be that the logic creates power of 2 chunks which ends
> up being an additional slot anyway. So, given that this logic is
> tricky and architecture dependent,

Could you share more information about the architecture dependence?

Thanks,
Peng.

> I don't feel comfortable making this change as the risk greatly
> outweighs the benefit.
>
> Thanks,
> Dennis
On Tue, Feb 26, 2019 at 12:09:28AM +0000, Peng Fan wrote:
> Hi Dennis,
>
> > -----Original Message-----
> > From: dennis@kernel.org [mailto:dennis@kernel.org]
> > Sent: February 25, 2019 23:24
> > To: Peng Fan <peng.fan@nxp.com>
> > Cc: tj@kernel.org; cl@linux.com; linux-mm@kvack.org;
> > linux-kernel@vger.kernel.org; van.freenix@gmail.com
> > Subject: Re: [RFC] percpu: decrease pcpu_nr_slots by 1
> >
> > On Sun, Feb 24, 2019 at 09:17:08AM +0000, Peng Fan wrote:
> > > Entry pcpu_slot[pcpu_nr_slots - 2] is wasted with the current code logic.
> > > pcpu_nr_slots is calculated with `__pcpu_size_to_slot(size) + 2`.
> > > Take pcpu_unit_size as 1024 for example: __pcpu_size_to_slot() returns
> > > max(11 - PCPU_SLOT_BASE_SHIFT + 2, 1), which is 8, so pcpu_nr_slots
> > > will be 10.
> > >
> > > The chunk with free_bytes 1024 will be linked into pcpu_slot[9].
> > > However, free_bytes in range [512, 1024) will be linked into
> > > pcpu_slot[7], because `fls(512) - PCPU_SLOT_BASE_SHIFT + 2` is 7.
> > > So pcpu_slot[8] has no chance to be used.
> > >
> > > According to the comment on PCPU_SLOT_BASE_SHIFT, 1~31 bytes share the
> > > same slot and PCPU_SLOT_BASE_SHIFT is defined as 5. But actually 1~15
> > > bytes share slot 1 (if we do not take PCPU_MIN_ALLOC_SIZE into
> > > consideration) and 16~31 bytes share slot 2. Calculation as below:
> > >   highbit = fls(16) -> highbit = 5
> > >   max(5 - PCPU_SLOT_BASE_SHIFT + 2, 1) equals 2, not 1.
> > >
> > > This patch decreases pcpu_nr_slots to avoid wasting one slot and lets
> > > [PCPU_MIN_ALLOC_SIZE, 31) really share the same slot.
> > >
> > > Signed-off-by: Peng Fan <peng.fan@nxp.com>
> > > ---
> > >
> > > V1:
> > >  Not very sure about whether it is intended to leave the slot there.
> > >
> > >  mm/percpu.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/mm/percpu.c b/mm/percpu.c
> > > index 8d9933db6162..12a9ba38f0b5 100644
> > > --- a/mm/percpu.c
> > > +++ b/mm/percpu.c
> > > @@ -219,7 +219,7 @@ static bool pcpu_addr_in_chunk(struct pcpu_chunk *chunk, void *addr)
> > >  static int __pcpu_size_to_slot(int size)
> > >  {
> > >         int highbit = fls(size);        /* size is in bytes */
> > > -       return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
> > > +       return max(highbit - PCPU_SLOT_BASE_SHIFT + 1, 1);
> > >  }
> >
> > Honestly, it may be better to just have [1-16) [16-31) be separate. I'm
> > working on a change to this area, so I may change what's going on here.
> >
> > >
> > >  static int pcpu_size_to_slot(int size)
> > > @@ -2145,7 +2145,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> > >          * Allocate chunk slots.  The additional last slot is for
> > >          * empty chunks.
> > >          */
> > > -       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
> > > +       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 1;
> > >         pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
> > >                                    SMP_CACHE_BYTES);
> > >         for (i = 0; i < pcpu_nr_slots; i++)
> > > --
> > > 2.16.4
> > >
> >
> > This is a tricky change. The nice thing about keeping the additional
> > slot around is that it ensures a distinction between a completely empty
> > chunk and a nearly empty chunk.
>
> Were any issues hit before when not keeping the unused slot?
> From reading the code and git history I could not find any information.
> I tried this code on aarch64 qemu and did not hit issues.
>

This change would require verification that all paths lead to power of 2
chunk sizes and most likely a BUG_ON if that's not the case.

So while this would work, we're holding onto an additional slot also to
be used for chunk reclamation via pcpu_balance_workfn(). If a chunk was
not a power of 2, resulting in the last slot not being entirely empty
chunks, we could free a chunk with addresses still in use.

> > It happens to be that the logic creates power of 2 chunks which ends
> > up being an additional slot anyway. So, given that this logic is
> > tricky and architecture dependent,
>
> Could you share more information about the architecture dependence?
>

The crux of the logic is in pcpu_build_alloc_info(). It's been some time
since I've thought deeply about it, but I don't believe there is a
guarantee that it will be a power of 2 chunk.

Thanks,
Dennis
Hi Dennis,

> -----Original Message-----
> From: Dennis Zhou [mailto:dennis@kernel.org]
> Sent: February 27, 2019 1:33
> To: Peng Fan <peng.fan@nxp.com>
> Cc: dennis@kernel.org; tj@kernel.org; cl@linux.com; linux-mm@kvack.org;
> linux-kernel@vger.kernel.org; van.freenix@gmail.com
> Subject: Re: [RFC] percpu: decrease pcpu_nr_slots by 1
>
> On Tue, Feb 26, 2019 at 12:09:28AM +0000, Peng Fan wrote:
> > Hi Dennis,
> >
> > > -----Original Message-----
> > > From: dennis@kernel.org [mailto:dennis@kernel.org]
> > > Sent: February 25, 2019 23:24
> > > To: Peng Fan <peng.fan@nxp.com>
> > > Cc: tj@kernel.org; cl@linux.com; linux-mm@kvack.org;
> > > linux-kernel@vger.kernel.org; van.freenix@gmail.com
> > > Subject: Re: [RFC] percpu: decrease pcpu_nr_slots by 1
> > >
> > > On Sun, Feb 24, 2019 at 09:17:08AM +0000, Peng Fan wrote:
> > > > Entry pcpu_slot[pcpu_nr_slots - 2] is wasted with the current code logic.
> > > > pcpu_nr_slots is calculated with `__pcpu_size_to_slot(size) + 2`.
> > > > Take pcpu_unit_size as 1024 for example: __pcpu_size_to_slot() returns
> > > > max(11 - PCPU_SLOT_BASE_SHIFT + 2, 1), which is 8, so pcpu_nr_slots
> > > > will be 10.
> > > >
> > > > The chunk with free_bytes 1024 will be linked into pcpu_slot[9].
> > > > However, free_bytes in range [512, 1024) will be linked into
> > > > pcpu_slot[7], because `fls(512) - PCPU_SLOT_BASE_SHIFT + 2` is 7.
> > > > So pcpu_slot[8] has no chance to be used.
> > > >
> > > > According to the comment on PCPU_SLOT_BASE_SHIFT, 1~31 bytes share the
> > > > same slot and PCPU_SLOT_BASE_SHIFT is defined as 5. But actually 1~15
> > > > bytes share slot 1 (if we do not take PCPU_MIN_ALLOC_SIZE into
> > > > consideration) and 16~31 bytes share slot 2. Calculation as below:
> > > >   highbit = fls(16) -> highbit = 5
> > > >   max(5 - PCPU_SLOT_BASE_SHIFT + 2, 1) equals 2, not 1.
> > > >
> > > > This patch decreases pcpu_nr_slots to avoid wasting one slot and lets
> > > > [PCPU_MIN_ALLOC_SIZE, 31) really share the same slot.
> > > >
> > > > Signed-off-by: Peng Fan <peng.fan@nxp.com>
> > > > ---
> > > >
> > > > V1:
> > > >  Not very sure about whether it is intended to leave the slot there.
> > > >
> > > >  mm/percpu.c | 4 ++--
> > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/mm/percpu.c b/mm/percpu.c
> > > > index 8d9933db6162..12a9ba38f0b5 100644
> > > > --- a/mm/percpu.c
> > > > +++ b/mm/percpu.c
> > > > @@ -219,7 +219,7 @@ static bool pcpu_addr_in_chunk(struct pcpu_chunk *chunk, void *addr)
> > > >  static int __pcpu_size_to_slot(int size)
> > > >  {
> > > >         int highbit = fls(size);        /* size is in bytes */
> > > > -       return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
> > > > +       return max(highbit - PCPU_SLOT_BASE_SHIFT + 1, 1);
> > > >  }
> > >
> > > Honestly, it may be better to just have [1-16) [16-31) be separate.

I missed replying to this in the previous thread: the following comment
made me think the chunk slot calculation might be wrong, so this comment
needs to be updated, saying "[PCPU_MIN_ALLOC_SIZE - 15) bytes share the
same slot", if [1-16) [16-31) is expected.
"
/* the slots are sorted by free bytes left, 1-31 bytes share the same slot */
#define PCPU_SLOT_BASE_SHIFT  5
"

> > > I'm working on a change to this area, so I may change what's going
> > > on here.
> > >
> > > >
> > > >  static int pcpu_size_to_slot(int size)
> > > > @@ -2145,7 +2145,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> > > >          * Allocate chunk slots.  The additional last slot is for
> > > >          * empty chunks.
> > > >          */
> > > > -       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
> > > > +       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 1;
> > > >         pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
> > > >                                    SMP_CACHE_BYTES);
> > > >         for (i = 0; i < pcpu_nr_slots; i++)
> > > > --
> > > > 2.16.4
> > > >
> > >
> > > This is a tricky change. The nice thing about keeping the additional
> > > slot around is that it ensures a distinction between a completely empty
> > > chunk and a nearly empty chunk.
> >
> > Were any issues hit before when not keeping the unused slot?
> > From reading the code and git history I could not find any information.
> > I tried this code on aarch64 qemu and did not hit issues.
> >
>
> This change would require verification that all paths lead to power of 2
> chunk sizes and most likely a BUG_ON if that's not the case.

I am trying to understand "power of 2 chunk sizes": do you mean the
runtime free_bytes of a chunk?

> So while this would work, we're holding onto an additional slot also to
> be used for chunk reclamation via pcpu_balance_workfn(). If a chunk was
> not a power of 2, resulting in the last slot not being entirely empty
> chunks, we could free a chunk with addresses still in use.

You mean the following code might free memory while a percpu variable is
still being used, if the chunk's runtime free_bytes is not a power of 2?
"
1623         list_for_each_entry_safe(chunk, next, &to_free, list) {
1624                 int rs, re;
1625
1626                 pcpu_for_each_pop_region(chunk->populated, rs, re, 0,
1627                                          chunk->nr_pages) {
1628                         pcpu_depopulate_chunk(chunk, rs, re);
1629                         spin_lock_irq(&pcpu_lock);
1630                         pcpu_chunk_depopulated(chunk, rs, re);
1631                         spin_unlock_irq(&pcpu_lock);
1632                 }
1633                 pcpu_destroy_chunk(chunk);
1634                 cond_resched();
1635         }
"

> > > It happens to be that the logic creates power of 2 chunks which ends
> > > up being an additional slot anyway. So, given that this logic is
> > > tricky and architecture dependent,
> >
> > Could you share more information about the architecture dependence?
> >
> The crux of the logic is in pcpu_build_alloc_info(). It's been some time
> since I've thought deeply about it, but I don't believe there is a
> guarantee that it will be a power of 2 chunk.

I am a bit lost about the power of 2 requirement; I need to read more of
the code.

Thanks,
Peng.

> Thanks,
> Dennis
On Wed, Feb 27, 2019 at 01:33:15PM +0000, Peng Fan wrote:
> Hi Dennis,
>
> > -----Original Message-----
> > From: Dennis Zhou [mailto:dennis@kernel.org]
> > Sent: February 27, 2019 1:33
> > To: Peng Fan <peng.fan@nxp.com>
> > Cc: dennis@kernel.org; tj@kernel.org; cl@linux.com; linux-mm@kvack.org;
> > linux-kernel@vger.kernel.org; van.freenix@gmail.com
> > Subject: Re: [RFC] percpu: decrease pcpu_nr_slots by 1
> >
> > On Tue, Feb 26, 2019 at 12:09:28AM +0000, Peng Fan wrote:
> > > Hi Dennis,
> > >
> > > > -----Original Message-----
> > > > From: dennis@kernel.org [mailto:dennis@kernel.org]
> > > > Sent: February 25, 2019 23:24
> > > > To: Peng Fan <peng.fan@nxp.com>
> > > > Cc: tj@kernel.org; cl@linux.com; linux-mm@kvack.org;
> > > > linux-kernel@vger.kernel.org; van.freenix@gmail.com
> > > > Subject: Re: [RFC] percpu: decrease pcpu_nr_slots by 1
> > > >
> > > > On Sun, Feb 24, 2019 at 09:17:08AM +0000, Peng Fan wrote:
> > > > > Entry pcpu_slot[pcpu_nr_slots - 2] is wasted with the current code logic.
> > > > > pcpu_nr_slots is calculated with `__pcpu_size_to_slot(size) + 2`.
> > > > > Take pcpu_unit_size as 1024 for example: __pcpu_size_to_slot() returns
> > > > > max(11 - PCPU_SLOT_BASE_SHIFT + 2, 1), which is 8, so pcpu_nr_slots
> > > > > will be 10.
> > > > >
> > > > > The chunk with free_bytes 1024 will be linked into pcpu_slot[9].
> > > > > However, free_bytes in range [512, 1024) will be linked into
> > > > > pcpu_slot[7], because `fls(512) - PCPU_SLOT_BASE_SHIFT + 2` is 7.
> > > > > So pcpu_slot[8] has no chance to be used.
> > > > >
> > > > > According to the comment on PCPU_SLOT_BASE_SHIFT, 1~31 bytes share the
> > > > > same slot and PCPU_SLOT_BASE_SHIFT is defined as 5. But actually 1~15
> > > > > bytes share slot 1 (if we do not take PCPU_MIN_ALLOC_SIZE into
> > > > > consideration) and 16~31 bytes share slot 2. Calculation as below:
> > > > >   highbit = fls(16) -> highbit = 5
> > > > >   max(5 - PCPU_SLOT_BASE_SHIFT + 2, 1) equals 2, not 1.
> > > > >
> > > > > This patch decreases pcpu_nr_slots to avoid wasting one slot and lets
> > > > > [PCPU_MIN_ALLOC_SIZE, 31) really share the same slot.
> > > > >
> > > > > Signed-off-by: Peng Fan <peng.fan@nxp.com>
> > > > > ---
> > > > >
> > > > > V1:
> > > > >  Not very sure about whether it is intended to leave the slot there.
> > > > >
> > > > >  mm/percpu.c | 4 ++--
> > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/mm/percpu.c b/mm/percpu.c
> > > > > index 8d9933db6162..12a9ba38f0b5 100644
> > > > > --- a/mm/percpu.c
> > > > > +++ b/mm/percpu.c
> > > > > @@ -219,7 +219,7 @@ static bool pcpu_addr_in_chunk(struct pcpu_chunk *chunk, void *addr)
> > > > >  static int __pcpu_size_to_slot(int size)
> > > > >  {
> > > > >         int highbit = fls(size);        /* size is in bytes */
> > > > > -       return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
> > > > > +       return max(highbit - PCPU_SLOT_BASE_SHIFT + 1, 1);
> > > > >  }
> > > >
> > > > Honestly, it may be better to just have [1-16) [16-31) be separate.
>
> I missed replying to this in the previous thread: the following comment
> made me think the chunk slot calculation might be wrong, so this comment
> needs to be updated, saying "[PCPU_MIN_ALLOC_SIZE - 15) bytes share the
> same slot", if [1-16) [16-31) is expected.
> "
> /* the slots are sorted by free bytes left, 1-31 bytes share the same slot */
> #define PCPU_SLOT_BASE_SHIFT  5
> "
>
> > > > I'm working on a change to this area, so I may change what's going
> > > > on here.
> > > >
> > > > >
> > > > >  static int pcpu_size_to_slot(int size)
> > > > > @@ -2145,7 +2145,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> > > > >          * Allocate chunk slots.  The additional last slot is for
> > > > >          * empty chunks.
> > > > >          */
> > > > > -       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
> > > > > +       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 1;
> > > > >         pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
> > > > >                                    SMP_CACHE_BYTES);
> > > > >         for (i = 0; i < pcpu_nr_slots; i++)
> > > > > --
> > > > > 2.16.4
> > > > >
> > > >
> > > > This is a tricky change. The nice thing about keeping the additional
> > > > slot around is that it ensures a distinction between a completely
> > > > empty chunk and a nearly empty chunk.
> > >
> > > Were any issues hit before when not keeping the unused slot?
> > > From reading the code and git history I could not find any information.
> > > I tried this code on aarch64 qemu and did not hit issues.
> > >
> >
> > This change would require verification that all paths lead to power of 2
> > chunk sizes and most likely a BUG_ON if that's not the case.
>
> I am trying to understand "power of 2 chunk sizes": do you mean the
> runtime free_bytes of a chunk?
>

I'm talking about the unit_size.

> > So while this would work, we're holding onto an additional slot also to
> > be used for chunk reclamation via pcpu_balance_workfn(). If a chunk was
> > not a power of 2, resulting in the last slot not being entirely empty
> > chunks, we could free a chunk with addresses still in use.
>
> You mean the following code might free memory while a percpu variable is
> still being used, if the chunk's runtime free_bytes is not a power of 2?
> "
> 1623         list_for_each_entry_safe(chunk, next, &to_free, list) {
> 1624                 int rs, re;
> 1625
> 1626                 pcpu_for_each_pop_region(chunk->populated, rs, re, 0,
> 1627                                          chunk->nr_pages) {
> 1628                         pcpu_depopulate_chunk(chunk, rs, re);
> 1629                         spin_lock_irq(&pcpu_lock);
> 1630                         pcpu_chunk_depopulated(chunk, rs, re);
> 1631                         spin_unlock_irq(&pcpu_lock);
> 1632                 }
> 1633                 pcpu_destroy_chunk(chunk);
> 1634                 cond_resched();
> 1635         }
> "
>

Yes, if the unit_size is not a power of 2, then the last slot holds used
chunks.

> > > > It happens to be that the logic creates power of 2 chunks which ends
> > > > up being an additional slot anyway. So, given that this logic is
> > > > tricky and architecture dependent,
> > >
> > > Could you share more information about the architecture dependence?
> > >
> > The crux of the logic is in pcpu_build_alloc_info(). It's been some time
> > since I've thought deeply about it, but I don't believe there is a
> > guarantee that it will be a power of 2 chunk.
>
> I am a bit lost about the power of 2 requirement; I need to read more of
> the code.
>

I'm reluctant to remove this slot because it is tricky code and the
benefit of it is negligible compared to the risk.

Thanks,
Dennis
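To make the risk discussed above concrete, here is a minimal user-space sketch of the slot arithmetic. It is not kernel code: fls() is re-implemented locally, the "+2 versus +1" offset is passed in as a parameter, and the 1536-byte unit size is a purely hypothetical non-power-of-2 example. It only illustrates that, with the proposed "+1" variant, a partially used chunk can land in the same last slot that is meant for completely empty chunks whenever the unit size is not a power of 2, while the current "+2" math keeps them apart in every case shown.

/*
 * User-space sketch, not kernel code.  fls() is a local stand-in for the
 * kernel helper; size_to_slot() mirrors the shape of __pcpu_size_to_slot()
 * with the +2/+1 offset made explicit.
 */
#include <stdio.h>

#define PCPU_SLOT_BASE_SHIFT 5

static int fls(unsigned int x)                 /* highest set bit, 1-based */
{
        int bit = 0;

        while (x) {
                bit++;
                x >>= 1;
        }
        return bit;
}

static int size_to_slot(int size, int off)     /* off = 2 today, 1 with the patch */
{
        int slot = fls(size) - PCPU_SLOT_BASE_SHIFT + off;

        return slot > 1 ? slot : 1;
}

static void check(int unit_size, int off)
{
        int nr_slots = size_to_slot(unit_size, off) + off;
        int last = nr_slots - 1;               /* slot meant for empty chunks */
        int almost = unit_size - 16;           /* nearly empty, still in use */
        int slot = size_to_slot(almost, off);

        printf("unit_size=%d off=%d: empty -> slot %d, %d free -> slot %d%s\n",
               unit_size, off, last, almost, slot,
               slot == last ? "  <-- shares the empty-chunk slot" : "");
}

int main(void)
{
        check(1024, 2);        /* current code, power-of-2 unit: separate      */
        check(1536, 2);        /* current code, non-power-of-2: still separate */
        check(1024, 1);        /* patched, power-of-2 unit: still separate     */
        check(1536, 1);        /* patched, non-power-of-2 unit: collision      */
        return 0;
}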
diff --git a/mm/percpu.c b/mm/percpu.c
index 8d9933db6162..12a9ba38f0b5 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -219,7 +219,7 @@ static bool pcpu_addr_in_chunk(struct pcpu_chunk *chunk, void *addr)
 static int __pcpu_size_to_slot(int size)
 {
        int highbit = fls(size);        /* size is in bytes */
-       return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
+       return max(highbit - PCPU_SLOT_BASE_SHIFT + 1, 1);
 }
 
 static int pcpu_size_to_slot(int size)
@@ -2145,7 +2145,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
         * Allocate chunk slots.  The additional last slot is for
         * empty chunks.
         */
-       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
+       pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 1;
        pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
                                   SMP_CACHE_BYTES);
        for (i = 0; i < pcpu_nr_slots; i++)
Entry pcpu_slot[pcpu_nr_slots - 2] is wasted with the current code logic.
pcpu_nr_slots is calculated with `__pcpu_size_to_slot(size) + 2`.
Take pcpu_unit_size as 1024 for example: __pcpu_size_to_slot() returns
max(11 - PCPU_SLOT_BASE_SHIFT + 2, 1), which is 8, so pcpu_nr_slots will
be 10.

The chunk with free_bytes 1024 will be linked into pcpu_slot[9].
However, free_bytes in range [512, 1024) will be linked into
pcpu_slot[7], because `fls(512) - PCPU_SLOT_BASE_SHIFT + 2` is 7.
So pcpu_slot[8] has no chance to be used.

According to the comment on PCPU_SLOT_BASE_SHIFT, 1~31 bytes share the
same slot and PCPU_SLOT_BASE_SHIFT is defined as 5. But actually 1~15
bytes share slot 1 (if we do not take PCPU_MIN_ALLOC_SIZE into
consideration) and 16~31 bytes share slot 2. Calculation as below:
  highbit = fls(16) -> highbit = 5
  max(5 - PCPU_SLOT_BASE_SHIFT + 2, 1) equals 2, not 1.

This patch decreases pcpu_nr_slots to avoid wasting one slot and lets
[PCPU_MIN_ALLOC_SIZE, 31) really share the same slot.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
---

V1:
 Not very sure about whether it is intended to leave the slot there.

 mm/percpu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
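As a quick way to check the numbers in the commit message, the following user-space sketch reproduces the current "+2" slot math for the example values above (PCPU_SLOT_BASE_SHIFT = 5, pcpu_unit_size = 1024). It is not kernel code: fls() is re-implemented locally and the completely empty chunk is simply printed as going to the last slot, as the commit message describes. The output shows that slot 8 is never produced.

/*
 * User-space sketch, not kernel code: size_to_slot() mirrors the current
 * __pcpu_size_to_slot() math and fls() is a local stand-in.
 */
#include <stdio.h>

#define PCPU_SLOT_BASE_SHIFT 5

static int fls(unsigned int x)                 /* highest set bit, 1-based */
{
        int bit = 0;

        while (x) {
                bit++;
                x >>= 1;
        }
        return bit;
}

static int size_to_slot(int size)              /* current "+2" formula */
{
        int slot = fls(size) - PCPU_SLOT_BASE_SHIFT + 2;

        return slot > 1 ? slot : 1;
}

int main(void)
{
        const int unit_size = 1024;
        const int nr_slots = size_to_slot(unit_size) + 2;   /* 8 + 2 = 10 */
        const int sizes[] = { 15, 16, 31, 32, 511, 512, 1023 };
        unsigned int i;

        printf("pcpu_nr_slots = %d\n", nr_slots);
        /* a completely empty chunk (free_bytes == unit_size) goes to the
         * last slot, pcpu_slot[nr_slots - 1] */
        printf("free_bytes %4d -> slot %d (empty-chunk slot)\n",
               unit_size, nr_slots - 1);
        /* every other free_bytes value maps through size_to_slot();
         * slot 8 never shows up */
        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("free_bytes %4d -> slot %d\n", sizes[i],
                       size_to_slot(sizes[i]));
        return 0;
}

The same run also shows 15 mapping to slot 1 while 16 and 31 map to slot 2, which is the discrepancy with the PCPU_SLOT_BASE_SHIFT comment that the commit message points out.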