| Message ID | 1587970764-4393-1-git-send-email-vincent.chen@sifive.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [v3] riscv: set max_pfn to the PFN of the last page |
On Sun, 26 Apr 2020 23:59:24 PDT (-0700), vincent.chen@sifive.com wrote:
> The current max_pfn equals zero. In this case, I found that users cannot
> get some page information through /proc, such as kpagecount, in the v5.6
> kernel because of new sanity checks. The following message is displayed by
> the stress-ng test suite with the command "stress-ng --verbose --physpage 1
> -t 1" on a HiFive Unleashed board.
>
> # stress-ng --verbose --physpage 1 -t 1
> stress-ng: debug: [109] 4 processors online, 4 processors configured
> stress-ng: info: [109] dispatching hogs: 1 physpage
> stress-ng: debug: [109] cache allocate: reducing cache level from L3 (too high) to L0
> stress-ng: debug: [109] get_cpu_cache: invalid cache_level: 0
> stress-ng: info: [109] cache allocate: using built-in defaults as no suitable cache found
> stress-ng: debug: [109] cache allocate: default cache size: 2048K
> stress-ng: debug: [109] starting stressors
> stress-ng: debug: [109] 1 stressor spawned
> stress-ng: debug: [110] stress-ng-physpage: started [110] (instance 0)
> stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd34de000 in /proc/kpagecount, errno=0 (Success)
> stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
> ...
> stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
> stress-ng: debug: [110] stress-ng-physpage: exited [110] (instance 0)
> stress-ng: debug: [109] process [110] terminated
> stress-ng: info: [109] successful run completed in 1.00s
> #
>
> After applying this patch, the kernel passes the test.
>
> # stress-ng --verbose --physpage 1 -t 1
> stress-ng: debug: [104] 4 processors online, 4 processors configured
> stress-ng: info: [104] dispatching hogs: 1 physpage
> stress-ng: info: [104] cache allocate: using defaults, can't determine cache details from sysfs
> stress-ng: debug: [104] cache allocate: default cache size: 2048K
> stress-ng: debug: [104] starting stressors
> stress-ng: debug: [104] 1 stressor spawned
> stress-ng: debug: [105] stress-ng-physpage: started [105] (instance 0)
> stress-ng: debug: [105] stress-ng-physpage: exited [105] (instance 0)
> stress-ng: debug: [104] process [105] terminated
> stress-ng: info: [104] successful run completed in 1.01s
> #
>
> Fixes: 0651c263c8e3 ("RISC-V: Move setup_bootmem() to mm/init.c")
> Cc: stable@vger.kernel.org
>
> Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
> Reviewed-by: Anup Patel <anup@brainfault.org>
> Reviewed-by: Yash Shah <yash.shah@sifive.com>
> Tested-by: Yash Shah <yash.shah@sifive.com>
>
> Changes since v1:
> 1. Add Fixes line and Cc stable kernel
> Changes since v2:
> 1. Fix typo in Anup's email address
> ---
>  arch/riscv/mm/init.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index fab855963c73..157924baa191 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -149,7 +149,8 @@ void __init setup_bootmem(void)
>  	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
>
>  	set_max_mapnr(PFN_DOWN(mem_size));
> -	max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
> +	max_pfn = PFN_DOWN(memblock_end_of_DRAM());
> +	max_low_pfn = max_pfn;
>
>  #ifdef CONFIG_BLK_DEV_INITRD
>  	setup_initrd();

I'm dropping the Fixes tag, as the actual bug goes back farther than that
commit; that's just as far as it'll auto-apply.
Hi Palmer & Vincent,

Please have a look at the patch:
https://lore.kernel.org/linux-riscv/20210121063117.3164494-1-guoren@kernel.org/T/#u

It seems our set_max_mapnr() is wrong, and it will make pfn_valid() fault
when the memory start address is non-zero.

On Tue, May 5, 2020 at 5:14 AM Palmer Dabbelt <palmer@dabbelt.com> wrote:
>
> On Sun, 26 Apr 2020 23:59:24 PDT (-0700), vincent.chen@sifive.com wrote:
> > The current max_pfn equals zero. In this case, I found that users cannot
> > get some page information through /proc, such as kpagecount, in the v5.6
> > kernel because of new sanity checks.
[full quote of the patch and stress-ng logs trimmed]
>
> I'm dropping the Fixes tag, as the actual bug goes back farther than that
> commit; that's just as far as it'll auto-apply.
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index fab855963c73..157924baa191 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -149,7 +149,8 @@ void __init setup_bootmem(void)
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
 
 	set_max_mapnr(PFN_DOWN(mem_size));
-	max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
+	max_pfn = PFN_DOWN(memblock_end_of_DRAM());
+	max_low_pfn = max_pfn;
 
 #ifdef CONFIG_BLK_DEV_INITRD
 	setup_initrd();