Message ID | 1558689619-16891-1-git-send-email-suzuki.poulose@arm.com (mailing list archive)
---|---
State | New, archived
Series | mm/compaction: BUG: NULL pointer dereference
On Fri, May 24, 2019 at 10:20:19AM +0100, Suzuki K Poulose wrote:
> Hi,
>
> We are hitting NULL pointer dereferences while running stress tests with
> KVM. See splat [0]. The test spawns 100 VMs, all doing a standard Debian
> installation (thanks to Marc's automated scripts, available here [1]).
> The problem has been reproducible with a better rate of success from
> 5.1-rc6 onwards.
>
> The issue is only reproducible with swapping enabled, when the entire
> memory is used up and the system is swapping heavily. It is also only
> reproducible on one server with 128GB, which has the following memory
> layout:
>
> [32GB@4GB, hole, 96GB@544GB]
>
> Here is my non-expert analysis of the issue so far.
>
> Under extreme memory pressure, kswapd can trigger reset_isolation_suitable()
> to recompute the cached migrate/free pfns for a zone by scanning through the
> entire zone. On our server it does so over the range [0x10_0000, 0xa00_0000],
> which contains a hole at [0x20_0000, 0x880_0000]. In the failing case, we
> end up setting the cached migrate pfn to 0x508_0000, which is right in the
> middle of the zone pfn range, i.e. (0x10_0000 + 0xa00_0000) / 2, with
> reset_migrate = 0x88_4e00, reset_free = 0x10_0000.
>
> These cached values are then used by fast_isolate_freepages() to find a
> pfn. However, since we can't find anything during the search, we fall
> back to using the page belonging to min_pfn (the migrate pfn) without
> checking whether that is a valid PFN. This is then passed on to
> fast_isolate_around(), which does set_pageblock_skip(page) on the page,
> and that blows up due to a NULL mem_section pointer.
>
> The following patch seems to fix the issue for me, but I am not quite
> convinced that it is the right fix. Thoughts?

I think the patch is valid and the alternatives would be unnecessarily
complicated. During a normal scan for free pages to isolate, there is a
check via pageblock_pfn_to_page(), which for non-contiguous zones performs
a pfn_valid check in __pageblock_pfn_to_page(). Now, while the
non-contiguous check could be made in the area you highlight, it would be
a relatively small optimisation that would be unmeasurable overall.
However, it is definitely the case that badness happens if the PFN you
highlight is invalid. If you want to express this as a signed-off patch
with an adjusted changelog then I'd be happy to add

Reviewed-by: Mel Gorman <mgorman@techsingularity.net>

If you are not comfortable with rewriting the changelog and formatting it
as a patch then I can do it on your behalf and preserve your Signed-off-by.
Just let me know.

Thanks for researching this; I think it also affects other people, but I
had not found the time to track it down.
Hi Mel,

Thanks for your quick response.

On 24/05/2019 11:39, Mel Gorman wrote:
> On Fri, May 24, 2019 at 10:20:19AM +0100, Suzuki K Poulose wrote:
>> [...]
>> The following patch seems to fix the issue for me, but I am not quite
>> convinced that it is the right fix. Thoughts?
>
> I think the patch is valid and the alternatives would be unnecessarily
> complicated. During a normal scan for free pages to isolate, there
> is a check for pageblock_pfn_to_page() which uses a pfn_valid check
> for non-contiguous zones in __pageblock_pfn_to_page.

I had an initial version using pageblock_pfn_to_page(), but as you said,
it is a complicated way of performing the same check as pfn_valid().

> Now, while the non-contiguous check could be made in the area you
> highlight, it would be a relatively small optimisation that would be
> unmeasurable overall. However, it is definitely the case that if the
> PFN you highlight is invalid that badness happens. If you want to
> express this as a signed-off patch with an adjusted changelog then I'd
> be happy to add

Sure, will send it right away.

> Reviewed-by: Mel Gorman <mgorman@techsingularity.net>

Thanks.

Suzuki
On 05/24/2019 02:50 PM, Suzuki K Poulose wrote:
> [...]
> Now these cached values are used by fast_isolate_freepages() to find a
> pfn. However, since we can't find anything during the search, we fall
> back to using the page belonging to min_pfn (the migrate pfn) without
> checking whether that is a valid PFN. This is then passed on to
> fast_isolate_around(), which does set_pageblock_skip(page) on the page,
> and that blows up due to a NULL mem_section pointer.
>
> The following patch seems to fix the issue for me, but I am not quite
> convinced that it is the right fix. Thoughts?
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 9febc8c..9e1b9ac 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1399,7 +1399,7 @@ fast_isolate_freepages(struct compact_control *cc)
>  			page = pfn_to_page(highest);
>  			cc->free_pfn = highest;
>  		} else {
> -			if (cc->direct_compaction) {
> +			if (cc->direct_compaction && pfn_valid(min_pfn)) {
>  				page = pfn_to_page(min_pfn);

pfn_to_online_page() would be better here, as it does not add the
pfn_valid() cost on architectures which do not subscribe to
CONFIG_HOLES_IN_ZONE. But regardless, if compaction is trying to scan
pfns in zone holes, that should be avoided.
On Fri, May 24, 2019 at 04:26:16PM +0530, Anshuman Khandual wrote:
> On 05/24/2019 02:50 PM, Suzuki K Poulose wrote:
> > [...]
> > The following patch seems to fix the issue for me, but I am not quite
> > convinced that it is the right fix. Thoughts?
>
> pfn_to_online_page() would be better here, as it does not add the
> pfn_valid() cost on architectures which do not subscribe to
> CONFIG_HOLES_IN_ZONE. But regardless, if compaction is trying to scan
> pfns in zone holes, that should be avoided.

CONFIG_HOLES_IN_ZONE typically applies in special cases where an arch
punches holes within a section. As both do a section lookup, the cost is
similar, but pfn_valid is generally less subtle in this case. Normally
pfn_valid_within is only ok when a pfn_valid check has been made on the
max_order-aligned range as well as a zone boundary check. In this case,
it's much more straightforward to leave it as pfn_valid.
On 05/24/2019 06:00 PM, Mel Gorman wrote:
> On Fri, May 24, 2019 at 04:26:16PM +0530, Anshuman Khandual wrote:
> > [...]
>
> CONFIG_HOLES_IN_ZONE typically applies in special cases where an arch
> punches holes within a section. As both do a section lookup, the cost is
> similar, but pfn_valid is generally less subtle in this case. Normally
> pfn_valid_within is only ok when a pfn_valid check has been made on the
> max_order-aligned range as well as a zone boundary check. In this case,
> it's much more straightforward to leave it as pfn_valid.

Sure, makes sense.
diff --git a/mm/compaction.c b/mm/compaction.c
index 9febc8c..9e1b9ac 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1399,7 +1399,7 @@ fast_isolate_freepages(struct compact_control *cc)
 			page = pfn_to_page(highest);
 			cc->free_pfn = highest;
 		} else {
-			if (cc->direct_compaction) {
+			if (cc->direct_compaction && pfn_valid(min_pfn)) {
 				page = pfn_to_page(min_pfn);
 				cc->free_pfn = min_pfn;
 			}