Message ID | 155677653785.2336373.11131100812252340469.stgit@dwillia2-desk3.amr.corp.intel.com (mailing list archive)
---|---
State | New, archived
Series | mm: Sub-section memory hotplug support
On Wed, May 01, 2019 at 10:55:37PM -0700, Dan Williams wrote:
> Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
> section active bitmask, each bit representing 2MB (SECTION_SIZE (128M) /
> map_active bitmask length (64)). If it turns out that 2MB is too large
> of an active tracking granularity it is trivial to increase the size of
> the map_active bitmap.
>
> The implications of a partially populated section is that pfn_valid()
> needs to go beyond a valid_section() check and read the sub-section
> active ranges from the bitmask.
>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Logan Gunthorpe <logang@deltatee.com>
> Tested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Unfortunately I did not hear back about the comments/questions I made for this
in the previous version.

> ---
>  include/linux/mmzone.h |   29 ++++++++++++++++++++++++++++-
>  mm/page_alloc.c        |    4 +++-
>  mm/sparse.c            |   48 ++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 79 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 6726fc175b51..cffde898e345 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1175,6 +1175,8 @@ struct mem_section_usage {
> 	unsigned long pageblock_flags[0];
>  };
>
> +void section_active_init(unsigned long pfn, unsigned long nr_pages);
> +
>  struct page;
>  struct page_ext;
>  struct mem_section {
> @@ -1312,12 +1314,36 @@ static inline struct mem_section *__pfn_to_section(unsigned long pfn)
>
>  extern int __highest_present_section_nr;
>
> +static inline int section_active_index(phys_addr_t phys)
> +{
> +	return (phys & ~(PA_SECTION_MASK)) / SECTION_ACTIVE_SIZE;
> +}
> +
> +#ifdef CONFIG_SPARSEMEM_VMEMMAP
> +static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
> +{
> +	int idx = section_active_index(PFN_PHYS(pfn));
> +
> +	return !!(ms->usage->map_active & (1UL << idx));

section_active_mask() also converts the value to address/size.
Why do we need to convert the values instead of working with pfn/pages?
It should be perfectly possible unless I am missing something.

The only thing required would be to export earlier your:

+#define PAGES_PER_SUB_SECTION (SECTION_ACTIVE_SIZE / PAGE_SIZE)
+#define PAGE_SUB_SECTION_MASK (~(PAGES_PER_SUB_SECTION-1))

and change section_active_index to:

static inline int section_active_index(unsigned long pfn)
{
	return (pfn & ~(PAGE_SECTION_MASK)) / SUB_SECTION_ACTIVE_PAGES;
}

In this way we do not need to shift the values every time and we can work with
them directly.
Maybe you made it work this way for a reason I am missing.

> +static unsigned long section_active_mask(unsigned long pfn,
> +		unsigned long nr_pages)
> +{
> +	int idx_start, idx_size;
> +	phys_addr_t start, size;
> +
> +	if (!nr_pages)
> +		return 0;
> +
> +	start = PFN_PHYS(pfn);
> +	size = PFN_PHYS(min(nr_pages, PAGES_PER_SECTION
> +				- (pfn & ~PAGE_SECTION_MASK)));

It seems to me that we already picked the lowest value back in
section_active_init, so we should be fine if we drop the min() here?

Another thing is why we need to convert the values to address/size instead of
working with pfns/pages.
Unless I am missing something it should be possible.

> +	size = ALIGN(size, SECTION_ACTIVE_SIZE);
> +
> +	idx_start = section_active_index(start);
> +	idx_size = section_active_index(size);
> +
> +	if (idx_size == 0)
> +		return -1;

Maybe we would be better off converting that -1 into something like
"FULL_SECTION", or at least dropping a comment there that "-1" means that the
section is fully populated.

> +	return ((1UL << idx_size) - 1) << idx_start;
> +}
> +
> +void section_active_init(unsigned long pfn, unsigned long nr_pages)
> +{
> +	int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
> +	int i, start_sec = pfn_to_section_nr(pfn);
> +
> +	if (!nr_pages)
> +		return;
> +
> +	for (i = start_sec; i <= end_sec; i++) {
> +		struct mem_section *ms;
> +		unsigned long mask;
> +		unsigned long pfns;
> +
> +		pfns = min(nr_pages, PAGES_PER_SECTION
> +				- (pfn & ~PAGE_SECTION_MASK));
> +		mask = section_active_mask(pfn, pfns);
> +
> +		ms = __nr_to_section(i);
> +		ms->usage->map_active |= mask;
> +		pr_debug("%s: sec: %d mask: %#018lx\n", __func__, i, ms->usage->map_active);
> +
> +		pfn += pfns;
> +		nr_pages -= pfns;
> +	}
> +}
> +
>  /* Record a memory area against a node. */
>  void __init memory_present(int nid, unsigned long start, unsigned long end)
>  {
>

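A small standalone C sketch of the arithmetic being discussed may help here. It assumes the defaults the patch description refers to (128M sections, 4K pages, a 64-bit map_active word) and checks that the phys-based index from the patch and the pfn-based index suggested above select the same bit. The constants and the two helper names are local to the sketch, not kernel code:

	#include <assert.h>
	#include <stdio.h>

	#define PAGE_SHIFT		12
	#define PAGE_SIZE		(1UL << PAGE_SHIFT)
	#define SECTION_SIZE_BITS	27				/* 128M sections */
	#define PA_SECTION_MASK		(~((1UL << SECTION_SIZE_BITS) - 1))
	#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
	#define PAGE_SECTION_MASK	(~(PAGES_PER_SECTION - 1))
	#define SECTION_ACTIVE_SIZE	((1UL << SECTION_SIZE_BITS) / 64)	/* 2M per bit */
	#define PAGES_PER_SUB_SECTION	(SECTION_ACTIVE_SIZE / PAGE_SIZE)	/* 512 pages */

	/* patch version: bit index computed from a physical address */
	static int index_from_phys(unsigned long phys)
	{
		return (phys & ~PA_SECTION_MASK) / SECTION_ACTIVE_SIZE;
	}

	/* suggested version: the same index straight from a pfn, no PFN_PHYS() shift */
	static int index_from_pfn(unsigned long pfn)
	{
		return (pfn & ~PAGE_SECTION_MASK) / PAGES_PER_SUB_SECTION;
	}

	int main(void)
	{
		unsigned long pfn;

		/* the two forms agree for every pfn inside a section */
		for (pfn = 0; pfn < PAGES_PER_SECTION; pfn++)
			assert(index_from_phys(pfn << PAGE_SHIFT) == index_from_pfn(pfn));

		/* pfn 1280 is 5M into its section, i.e. the third 2M sub-section: bit 2 */
		printf("bit for pfn 1280: %d\n", index_from_pfn(1280));
		return 0;
	}
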
On Thu, May 2, 2019 at 12:48 AM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Wed, May 01, 2019 at 10:55:37PM -0700, Dan Williams wrote:
> > Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
> > section active bitmask, each bit representing 2MB (SECTION_SIZE (128M) /
> > map_active bitmask length (64)). If it turns out that 2MB is too large
> > of an active tracking granularity it is trivial to increase the size of
> > the map_active bitmap.
> >
> > The implications of a partially populated section is that pfn_valid()
> > needs to go beyond a valid_section() check and read the sub-section
> > active ranges from the bitmask.
> >
> > Cc: Michal Hocko <mhocko@suse.com>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Logan Gunthorpe <logang@deltatee.com>
> > Tested-by: Jane Chu <jane.chu@oracle.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>
> Unfortunately I did not hear back about the comments/questions I made for this
> in the previous version.

Apologies, yes, will incorporate.

> > ---
> >  include/linux/mmzone.h |   29 ++++++++++++++++++++++++++++-
> >  mm/page_alloc.c        |    4 +++-
> >  mm/sparse.c            |   48 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 79 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 6726fc175b51..cffde898e345 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -1175,6 +1175,8 @@ struct mem_section_usage {
> > 	unsigned long pageblock_flags[0];
> >  };
> >
> > +void section_active_init(unsigned long pfn, unsigned long nr_pages);
> > +
> >  struct page;
> >  struct page_ext;
> >  struct mem_section {
> > @@ -1312,12 +1314,36 @@ static inline struct mem_section *__pfn_to_section(unsigned long pfn)
> >
> >  extern int __highest_present_section_nr;
> >
> > +static inline int section_active_index(phys_addr_t phys)
> > +{
> > +	return (phys & ~(PA_SECTION_MASK)) / SECTION_ACTIVE_SIZE;
> > +}
> > +
> > +#ifdef CONFIG_SPARSEMEM_VMEMMAP
> > +static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
> > +{
> > +	int idx = section_active_index(PFN_PHYS(pfn));
> > +
> > +	return !!(ms->usage->map_active & (1UL << idx));
>
> section_active_mask() also converts the value to address/size.
> Why do we need to convert the values instead of working with pfn/pages?
> It should be perfectly possible unless I am missing something.
>
> The only thing required would be to export earlier your:
>
> +#define PAGES_PER_SUB_SECTION (SECTION_ACTIVE_SIZE / PAGE_SIZE)
> +#define PAGE_SUB_SECTION_MASK (~(PAGES_PER_SUB_SECTION-1))
>
> and change section_active_index to:
>
> static inline int section_active_index(unsigned long pfn)
> {
> 	return (pfn & ~(PAGE_SECTION_MASK)) / SUB_SECTION_ACTIVE_PAGES;
> }
>
> In this way we do not need to shift the values every time and we can work with
> them directly.
> Maybe you made it work this way for a reason I am missing.
>
> > +static unsigned long section_active_mask(unsigned long pfn,
> > +		unsigned long nr_pages)
> > +{
> > +	int idx_start, idx_size;
> > +	phys_addr_t start, size;
> > +
> > +	if (!nr_pages)
> > +		return 0;
> > +
> > +	start = PFN_PHYS(pfn);
> > +	size = PFN_PHYS(min(nr_pages, PAGES_PER_SECTION
> > +				- (pfn & ~PAGE_SECTION_MASK)));
>
> It seems to me that we already picked the lowest value back in
> section_active_init, so we should be fine if we drop the min() here?
>
> Another thing is why we need to convert the values to address/size instead of
> working with pfns/pages.
> Unless I am missing something it should be possible.

Right, I believe the physical address conversion was a holdover from a
previous version and these helpers can be cleaned up to be pfn based, good
catch.

>
> > +	size = ALIGN(size, SECTION_ACTIVE_SIZE);
> > +
> > +	idx_start = section_active_index(start);
> > +	idx_size = section_active_index(size);
> > +
> > +	if (idx_size == 0)
> > +		return -1;
>
> Maybe we would be better off converting that -1 into something like
> "FULL_SECTION", or at least dropping a comment there that "-1" means that the
> section is fully populated.

Agreed, I'll add a #define. Thanks Oscar.

On Thu, May 02, 2019 at 07:03:45AM -0700, Dan Williams wrote:
> > section_active_mask() also converts the value to address/size.
> > Why do we need to convert the values instead of working with pfn/pages?
> > It should be perfectly possible unless I am missing something.
> >
> > The only thing required would be to export earlier your:
> >
> > +#define PAGES_PER_SUB_SECTION (SECTION_ACTIVE_SIZE / PAGE_SIZE)
> > +#define PAGE_SUB_SECTION_MASK (~(PAGES_PER_SUB_SECTION-1))
> >
> > and change section_active_index to:
> >
> > static inline int section_active_index(unsigned long pfn)
> > {
> > 	return (pfn & ~(PAGE_SECTION_MASK)) / SUB_SECTION_ACTIVE_PAGES;

Sorry, here I meant:

	return (pfn & ~(PAGE_SECTION_MASK)) / PAGES_PER_SUB_SECTION;

But I think you got the idea :-)

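A hypothetical sketch of what the agreed cleanup might look like, combining the pfn-based helpers suggested above with a named constant in place of the bare -1. The names are illustrative (PAGES_PER_SUB_SECTION comes from the suggestion above, SECTION_ACTIVE_MASK_FULL is invented here), not necessarily what a next revision of the patch would use, and the min() clamp is dropped on the assumption that section_active_init() remains the only caller and already limits nr_pages to the section boundary:

	/* Illustrative only; relies on SECTION_ACTIVE_SIZE and the usual mm macros. */
	#define PAGES_PER_SUB_SECTION		(SECTION_ACTIVE_SIZE / PAGE_SIZE)
	#define SECTION_ACTIVE_MASK_FULL	(~0UL)	/* all sub-sections present */

	static inline int section_active_index(unsigned long pfn)
	{
		return (pfn & ~PAGE_SECTION_MASK) / PAGES_PER_SUB_SECTION;
	}

	static unsigned long section_active_mask(unsigned long pfn,
			unsigned long nr_pages)
	{
		int idx_start, idx_size;

		if (!nr_pages)
			return 0;

		/* round a partial sub-section up to a whole bit */
		nr_pages = ALIGN(nr_pages, PAGES_PER_SUB_SECTION);

		idx_start = section_active_index(pfn);
		idx_size = section_active_index(nr_pages);

		/* nr_pages == PAGES_PER_SECTION wraps the index to 0: full section */
		if (idx_size == 0)
			return SECTION_ACTIVE_MASK_FULL;
		return ((1UL << idx_size) - 1) << idx_start;
	}
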
On 19-05-01 22:55:37, Dan Williams wrote:
> Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
> section active bitmask, each bit representing 2MB (SECTION_SIZE (128M) /
> map_active bitmask length (64)). If it turns out that 2MB is too large
> of an active tracking granularity it is trivial to increase the size of
> the map_active bitmap.
>
> The implications of a partially populated section is that pfn_valid()
> needs to go beyond a valid_section() check and read the sub-section
> active ranges from the bitmask.
>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Logan Gunthorpe <logang@deltatee.com>
> Tested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Hi Dan,

I have sent comments to the previous version of this patch:

https://lore.kernel.org/lkml/CA+CK2bAfnCVYz956jPTNQ+AqHJs7uY1ZqWfL8fSUFWQOdKxHcg@mail.gmail.com/

I think they still apply to this one.

Thank you,
Pasha

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6726fc175b51..cffde898e345 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1175,6 +1175,8 @@ struct mem_section_usage {
 	unsigned long pageblock_flags[0];
 };
 
+void section_active_init(unsigned long pfn, unsigned long nr_pages);
+
 struct page;
 struct page_ext;
 struct mem_section {
@@ -1312,12 +1314,36 @@ static inline struct mem_section *__pfn_to_section(unsigned long pfn)
 
 extern int __highest_present_section_nr;
 
+static inline int section_active_index(phys_addr_t phys)
+{
+	return (phys & ~(PA_SECTION_MASK)) / SECTION_ACTIVE_SIZE;
+}
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+{
+	int idx = section_active_index(PFN_PHYS(pfn));
+
+	return !!(ms->usage->map_active & (1UL << idx));
+}
+#else
+static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+{
+	return 1;
+}
+#endif
+
 #ifndef CONFIG_HAVE_ARCH_PFN_VALID
 static inline int pfn_valid(unsigned long pfn)
 {
+	struct mem_section *ms;
+
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
-	return valid_section(__nr_to_section(pfn_to_section_nr(pfn)));
+	ms = __nr_to_section(pfn_to_section_nr(pfn));
+	if (!valid_section(ms))
+		return 0;
+	return pfn_section_valid(ms, pfn);
 }
 #endif
 
@@ -1349,6 +1375,7 @@ void sparse_init(void);
 #define sparse_init()	do {} while (0)
 #define sparse_index_init(_sec, _nid)  do {} while (0)
 #define pfn_present pfn_valid
+#define section_active_init(_pfn, _nr_pages) do {} while (0)
 #endif /* CONFIG_SPARSEMEM */
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 61c2b54a5b61..a68735c79609 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7291,10 +7291,12 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 
 	/* Print out the early node map */
 	pr_info("Early memory node ranges\n");
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
 		pr_info("  node %3d: [mem %#018Lx-%#018Lx]\n", nid,
 			(u64)start_pfn << PAGE_SHIFT,
 			((u64)end_pfn << PAGE_SHIFT) - 1);
+		section_active_init(start_pfn, end_pfn - start_pfn);
+	}
 
 	/* Initialise every node */
 	mminit_verify_pageflags_layout();
diff --git a/mm/sparse.c b/mm/sparse.c
index f87de7ad32c8..8d4f28e2c25e 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -210,6 +210,54 @@ static inline unsigned long first_present_section_nr(void)
 	return next_present_section_nr(-1);
 }
 
+static unsigned long section_active_mask(unsigned long pfn,
+		unsigned long nr_pages)
+{
+	int idx_start, idx_size;
+	phys_addr_t start, size;
+
+	if (!nr_pages)
+		return 0;
+
+	start = PFN_PHYS(pfn);
+	size = PFN_PHYS(min(nr_pages, PAGES_PER_SECTION
+				- (pfn & ~PAGE_SECTION_MASK)));
+	size = ALIGN(size, SECTION_ACTIVE_SIZE);
+
+	idx_start = section_active_index(start);
+	idx_size = section_active_index(size);
+
+	if (idx_size == 0)
+		return -1;
+	return ((1UL << idx_size) - 1) << idx_start;
+}
+
+void section_active_init(unsigned long pfn, unsigned long nr_pages)
+{
+	int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
+	int i, start_sec = pfn_to_section_nr(pfn);
+
+	if (!nr_pages)
+		return;
+
+	for (i = start_sec; i <= end_sec; i++) {
+		struct mem_section *ms;
+		unsigned long mask;
+		unsigned long pfns;
+
+		pfns = min(nr_pages, PAGES_PER_SECTION
+				- (pfn & ~PAGE_SECTION_MASK));
+		mask = section_active_mask(pfn, pfns);
+
+		ms = __nr_to_section(i);
+		ms->usage->map_active |= mask;
+		pr_debug("%s: sec: %d mask: %#018lx\n", __func__, i, ms->usage->map_active);
+
+		pfn += pfns;
+		nr_pages -= pfns;
+	}
+}
+
 /* Record a memory area against a node. */
 void __init memory_present(int nid, unsigned long start, unsigned long end)
 {
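For illustration, a small userspace simulation of the section_active_mask()/section_active_init() logic above, rewritten in terms of pfns per the review discussion, showing what map_active ends up holding for a range that only partially covers its sections. It assumes 128M sections, 4K pages and a 64-bit unsigned long; the mem_section plumbing is replaced by a printf:

	#include <stdio.h>

	#define PAGE_SHIFT		12
	#define SECTION_SIZE_BITS	27
	#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
	#define PAGE_SECTION_MASK	(~(PAGES_PER_SECTION - 1))
	#define PAGES_PER_SUB_SECTION	(PAGES_PER_SECTION / 64)	/* 512 pages = 2M */

	#define MIN(a, b)		((a) < (b) ? (a) : (b))

	/* pfn-based variant of the patch's section_active_mask() */
	static unsigned long section_active_mask(unsigned long pfn, unsigned long nr_pages)
	{
		int idx_start, idx_size;

		if (!nr_pages)
			return 0;

		/* round a partial sub-section up to a whole bit */
		nr_pages = (nr_pages + PAGES_PER_SUB_SECTION - 1) & ~(PAGES_PER_SUB_SECTION - 1);

		idx_start = (pfn & ~PAGE_SECTION_MASK) / PAGES_PER_SUB_SECTION;
		idx_size = (nr_pages & ~PAGE_SECTION_MASK) / PAGES_PER_SUB_SECTION;

		if (idx_size == 0)
			return ~0UL;		/* whole section populated */
		return ((1UL << idx_size) - 1) << idx_start;
	}

	/* walk the range section by section, as section_active_init() does */
	static void section_active_init(unsigned long pfn, unsigned long nr_pages)
	{
		while (nr_pages) {
			unsigned long pfns = MIN(nr_pages,
					PAGES_PER_SECTION - (pfn & ~PAGE_SECTION_MASK));

			printf("section %lu: map_active %#018lx\n",
			       pfn / PAGES_PER_SECTION, section_active_mask(pfn, pfns));
			pfn += pfns;
			nr_pages -= pfns;
		}
	}

	int main(void)
	{
		/* 160M starting 64M into section 0: half of section 0, then 96M of section 1 */
		section_active_init(64UL << (20 - PAGE_SHIFT), 160UL << (20 - PAGE_SHIFT));
		return 0;
	}

For this example range the simulation prints 0xffffffff00000000 for section 0 and 0x0000ffffffffffff for section 1, i.e. the upper 32 and the lower 48 of the 2M sub-sections respectively.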