| Message ID | 20201030205846.1105106-1-dennis@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | percpu: convert flexible array initializers to use struct_size() |
On 10/30/20 15:58, Dennis Zhou wrote:
> Use the safer macro as sparked by the long discussion in [1].
>
> [1] https://lore.kernel.org/lkml/20200917204514.GA2880159@google.com/
>
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>

Thanks
--
Gustavo

> ---
> I'll apply it to for-5.10-fixes.
>
>  mm/percpu.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 66a93f096394..ad7a37ee74ef 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1315,8 +1315,8 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
>  	region_size = ALIGN(start_offset + map_size, lcm_align);
>
>  	/* allocate chunk */
> -	alloc_size = sizeof(struct pcpu_chunk) +
> -		BITS_TO_LONGS(region_size >> PAGE_SHIFT) * sizeof(unsigned long);
> +	alloc_size = struct_size(chunk, populated,
> +				 BITS_TO_LONGS(region_size >> PAGE_SHIFT));
>  	chunk = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>  	if (!chunk)
>  		panic("%s: Failed to allocate %zu bytes\n", __func__,
> @@ -2521,8 +2521,8 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>  	pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
>  	pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
>  	pcpu_atom_size = ai->atom_size;
> -	pcpu_chunk_struct_size = sizeof(struct pcpu_chunk) +
> -		BITS_TO_LONGS(pcpu_unit_pages) * sizeof(unsigned long);
> +	pcpu_chunk_struct_size = struct_size(chunk, populated,
> +					     BITS_TO_LONGS(pcpu_unit_pages));
>
>  	pcpu_stats_save_ai(ai);