
[RFC] mm: filemap: avoid unnecessary major faults in filemap_fault()

Message ID 20231122140052.4092083-1-zhangpeng362@huawei.com (mailing list archive)
State New
Series [RFC] mm: filemap: avoid unnecessary major faults in filemap_fault()

Commit Message

Peng Zhang Nov. 22, 2023, 2 p.m. UTC
From: ZhangPeng <zhangpeng362@huawei.com>

A major fault can occur when an application uses mlockall(MCL_CURRENT |
MCL_FUTURE), leading to an unexpected performance issue[1].

This is caused by the pte being temporarily cleared during a
read/modify/write update of the pte, e.g. in
do_numa_page()/change_pte_range().

For the data segment of a user-mode program, the global variable area
is a private mapping. After the page cache is loaded, a private
anonymous page is generated once COW is triggered. mlockall() can lock
the COW pages (anonymous pages), but the original file pages cannot be
locked and may be reclaimed. If the global variable (private anon page)
is accessed while vmf->pte is zeroed during a NUMA fault, a file page
fault is triggered.

At that point, the original private file page may already have been
reclaimed. If it is no longer in the page cache, a major fault is
triggered and the file is read, causing additional overhead.

Fix this by rechecking the pte with the ptl held in filemap_fault()
before triggering a major fault.

[1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/filemap.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

Comments

Yin Fengwei Nov. 23, 2023, 1:09 a.m. UTC | #1
Hi Peng,

On 11/22/23 22:00, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
> 
> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
> in application, which leading to an unexpected performance issue[1].
> 
> This caused by temporarily cleared pte during a read/modify/write update
> of the pte, eg, do_numa_page()/change_pte_range().
> 
> For the data segment of the user-mode program, the global variable area
> is a private mapping. After the pagecache is loaded, the private anonymous
> page is generated after the COW is triggered. Mlockall can lock COW pages
> (anonymous pages), but the original file pages cannot be locked and may
> be reclaimed. If the global variable (private anon page) is accessed when
> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
> 
> At this time, the original private file page may have been reclaimed.
> If the page cache is not available at this time, a major fault will be
> triggered and the file will be read, causing additional overhead.
> 
> Fix this by rechecking the pte by holding ptl in filemap_fault() before
> triggering a major fault.
> 
> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
> 
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/filemap.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 71f00539ac00..bb5e6a2790dc 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  			mapping_locked = true;
>  		}
>  	} else {
> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
> +						  vmf->address, &vmf->ptl);
> +		if (ptep) {
> +			/*
> +			 * Recheck pte with ptl locked as the pte can be cleared
> +			 * temporarily during a read/modify/write update.
> +			 */
> +			if (unlikely(!pte_none(ptep_get(ptep))))
> +				ret = VM_FAULT_NOPAGE;
> +			pte_unmap_unlock(ptep, vmf->ptl);
> +			if (unlikely(ret))
> +				return ret;
> +		}
I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?


Regards
Yin, Fengwei

> +
>  		/* No page in the page cache at all */
>  		count_vm_event(PGMAJFAULT);
>  		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Peng Zhang Nov. 23, 2023, 4:12 a.m. UTC | #2
On 2023/11/23 9:09, Yin Fengwei wrote:

> Hi Peng,
>
> On 11/22/23 22:00, Peng Zhang wrote:
>> From: ZhangPeng <zhangpeng362@huawei.com>
>>
>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>> in application, which leading to an unexpected performance issue[1].
>>
>> This caused by temporarily cleared pte during a read/modify/write update
>> of the pte, eg, do_numa_page()/change_pte_range().
>>
>> For the data segment of the user-mode program, the global variable area
>> is a private mapping. After the pagecache is loaded, the private anonymous
>> page is generated after the COW is triggered. Mlockall can lock COW pages
>> (anonymous pages), but the original file pages cannot be locked and may
>> be reclaimed. If the global variable (private anon page) is accessed when
>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>
>> At this time, the original private file page may have been reclaimed.
>> If the page cache is not available at this time, a major fault will be
>> triggered and the file will be read, causing additional overhead.
>>
>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>> triggering a major fault.
>>
>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>
>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   mm/filemap.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index 71f00539ac00..bb5e6a2790dc 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>   			mapping_locked = true;
>>   		}
>>   	} else {
>> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>> +						  vmf->address, &vmf->ptl);
>> +		if (ptep) {
>> +			/*
>> +			 * Recheck pte with ptl locked as the pte can be cleared
>> +			 * temporarily during a read/modify/write update.
>> +			 */
>> +			if (unlikely(!pte_none(ptep_get(ptep))))
>> +				ret = VM_FAULT_NOPAGE;
>> +			pte_unmap_unlock(ptep, vmf->ptl);
>> +			if (unlikely(ret))
>> +				return ret;
>> +		}
> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?

Thank you for your reply.

If we don't take PTL, the current use case won't trigger this issue either.

In most cases this issue won't be triggered without the PTL. However,
there is still a window in which it can be: task 2 triggers a page fault
while task 1 is between ptep_modify_prot_start() and
ptep_modify_prot_commit() in do_numa_page(). Without taking the PTL,
task 2's PTE check can observe the cleared PTE before task 1 restores it
in ptep_modify_prot_commit().

>
> Regards
> Yin, Fengwei
>
>> +
>>   		/* No page in the page cache at all */
>>   		count_vm_event(PGMAJFAULT);
>>   		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Yin Fengwei Nov. 23, 2023, 5:26 a.m. UTC | #3
On 11/23/23 12:12, zhangpeng (AS) wrote:
> On 2023/11/23 9:09, Yin Fengwei wrote:
> 
>> Hi Peng,
>>
>> On 11/22/23 22:00, Peng Zhang wrote:
>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>
>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>> in application, which leading to an unexpected performance issue[1].
>>>
>>> This caused by temporarily cleared pte during a read/modify/write update
>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>
>>> For the data segment of the user-mode program, the global variable area
>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>> (anonymous pages), but the original file pages cannot be locked and may
>>> be reclaimed. If the global variable (private anon page) is accessed when
>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>
>>> At this time, the original private file page may have been reclaimed.
>>> If the page cache is not available at this time, a major fault will be
>>> triggered and the file will be read, causing additional overhead.
>>>
>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>> triggering a major fault.
>>>
>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>
>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> ---
>>>   mm/filemap.c | 14 ++++++++++++++
>>>   1 file changed, 14 insertions(+)
>>>
>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>> index 71f00539ac00..bb5e6a2790dc 100644
>>> --- a/mm/filemap.c
>>> +++ b/mm/filemap.c
>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>               mapping_locked = true;
>>>           }
>>>       } else {
>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>> +                          vmf->address, &vmf->ptl);
>>> +        if (ptep) {
>>> +            /*
>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>> +             * temporarily during a read/modify/write update.
>>> +             */
>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>> +                ret = VM_FAULT_NOPAGE;
>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>> +            if (unlikely(ret))
>>> +                return ret;
>>> +        }
>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
> 
> Thank you for your reply.
> 
> If we don't take PTL, the current use case won't trigger this issue either.
Is this verified by testing or just in theory?

> 
> In most cases, if we don't take PTL, this issue won't be triggered. However,
> there is still a possibility of triggering this issue. The corner case is that
> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
> and ptep_modify_prot_commit() in do_numa_page(). Furthermore,task 2 passes the
> check whether the PTE is not NONE before task 1 updates PTE in
> ptep_modify_prot_commit() without taking PTL.

There are very limited operations between ptep_modify_prot_start() and
ptep_modify_prot_commit(), while the code path from page fault to this
check is long. My understanding is that it's very likely the PTE is not
none by the time the check runs, even without holding the PTL (this is
my theory. :)).

On the other side, acquiring/releasing the PTL may have a performance
impact. It may not be a big deal because of the IO operations in this
code path, but it's better to collect some performance data IMHO.


Regards
Yin, Fengwei

> 
>>
>> Regards
>> Yin, Fengwei
>>
>>> +
>>>           /* No page in the page cache at all */
>>>           count_vm_event(PGMAJFAULT);
>>>           count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
>
Peng Zhang Nov. 23, 2023, 7:57 a.m. UTC | #4
On 2023/11/23 13:26, Yin Fengwei wrote:

> On 11/23/23 12:12, zhangpeng (AS) wrote:
>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>
>>> Hi Peng,
>>>
>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>
>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>> in application, which leading to an unexpected performance issue[1].
>>>>
>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>
>>>> For the data segment of the user-mode program, the global variable area
>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>
>>>> At this time, the original private file page may have been reclaimed.
>>>> If the page cache is not available at this time, a major fault will be
>>>> triggered and the file will be read, causing additional overhead.
>>>>
>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>> triggering a major fault.
>>>>
>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>
>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>> ---
>>>>    mm/filemap.c | 14 ++++++++++++++
>>>>    1 file changed, 14 insertions(+)
>>>>
>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>> --- a/mm/filemap.c
>>>> +++ b/mm/filemap.c
>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>                mapping_locked = true;
>>>>            }
>>>>        } else {
>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>> +                          vmf->address, &vmf->ptl);
>>>> +        if (ptep) {
>>>> +            /*
>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>> +             * temporarily during a read/modify/write update.
>>>> +             */
>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>> +                ret = VM_FAULT_NOPAGE;
>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>> +            if (unlikely(ret))
>>>> +                return ret;
>>>> +        }
>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>> Thank you for your reply.
>>
>> If we don't take PTL, the current use case won't trigger this issue either.
> Is this verified by testing or just in theory?

If we add a delay between ptep_modify_prot_start() and
ptep_modify_prot_commit(), this issue also triggers. Without the delay,
we haven't been able to reproduce the problem so far.

>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>> there is still a possibility of triggering this issue. The corner case is that
>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore,task 2 passes the
>> check whether the PTE is not NONE before task 1 updates PTE in
>> ptep_modify_prot_commit() without taking PTL.
> There is very limited operations between ptep_modify_prot_start() and
> ptep_modify_prot_commit(). While the code path from page fault to this check is
> long. My understanding is it's very likely the PTE is not NONE when do PTE check
> here without hold PTL (This is my theory. :)).

Yes, there is a high probability that this issue won't occur without taking PTL.

> In the other side, acquiring/releasing PTL may bring performance impaction. It may
> not be big deal because the IO operations in this code path. But it's better to
> collect some performance data IMHO.

We tested the performance of file private mapping page faults
(page_fault2.c of will-it-scale [1]) and file shared mapping page faults
(page_fault3.c of will-it-scale). The difference in performance (in
operations per second) before and after the patch is about 0.7% on an
x86 physical machine.

[1] https://github.com/antonblanchard/will-it-scale/tree/master

>
> Regards
> Yin, Fengwei
>
>>> Regards
>>> Yin, Fengwei
>>>
>>>> +
>>>>            /* No page in the page cache at all */
>>>>            count_vm_event(PGMAJFAULT);
>>>>            count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Yin Fengwei Nov. 23, 2023, 8:29 a.m. UTC | #5
On 11/23/23 15:57, zhangpeng (AS) wrote:
> On 2023/11/23 13:26, Yin Fengwei wrote:
> 
>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>
>>>> Hi Peng,
>>>>
>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>
>>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>> in application, which leading to an unexpected performance issue[1].
>>>>>
>>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>>
>>>>> For the data segment of the user-mode program, the global variable area
>>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>>
>>>>> At this time, the original private file page may have been reclaimed.
>>>>> If the page cache is not available at this time, a major fault will be
>>>>> triggered and the file will be read, causing additional overhead.
>>>>>
>>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>>> triggering a major fault.
>>>>>
>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>
>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>> ---
>>>>>    mm/filemap.c | 14 ++++++++++++++
>>>>>    1 file changed, 14 insertions(+)
>>>>>
>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>> --- a/mm/filemap.c
>>>>> +++ b/mm/filemap.c
>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>                mapping_locked = true;
>>>>>            }
>>>>>        } else {
>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>> +                          vmf->address, &vmf->ptl);
>>>>> +        if (ptep) {
>>>>> +            /*
>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>> +             * temporarily during a read/modify/write update.
>>>>> +             */
>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>> +            if (unlikely(ret))
>>>>> +                return ret;
>>>>> +        }
>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>> Thank you for your reply.
>>>
>>> If we don't take PTL, the current use case won't trigger this issue either.
>> Is this verified by testing or just in theory?
> 
> If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
> this issue will also trigger. Without delay, we haven't reproduced this problem
> so far.
Thanks for the testing.

> 
>>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>>> there is still a possibility of triggering this issue. The corner case is that
>>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore,task 2 passes the
>>> check whether the PTE is not NONE before task 1 updates PTE in
>>> ptep_modify_prot_commit() without taking PTL.
>> There is very limited operations between ptep_modify_prot_start() and
>> ptep_modify_prot_commit(). While the code path from page fault to this check is
>> long. My understanding is it's very likely the PTE is not NONE when do PTE check
>> here without hold PTL (This is my theory. :)).
> 
> Yes, there is a high probability that this issue won't occur without taking PTL.
> 
>> In the other side, acquiring/releasing PTL may bring performance impaction. It may
>> not be big deal because the IO operations in this code path. But it's better to
>> collect some performance data IMHO.
> 
> We tested the performance of file private mapping page fault (page_fault2.c of
> will-it-scale [1]) and file shared mapping page fault (page_fault3.c of will-it-scale).
> The difference in performance (in operations per second) before and after patch
> applied is about 0.7% on a x86 physical machine.
> 
> [1] https://github.com/antonblanchard/will-it-scale/tree/master
Maybe include this performance related information to commit message?

For the code change, looks good to me.

Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>


Regards
Yin, Fengwei

> 
>>
>> Regards
>> Yin, Fengwei
>>
>>>> Regards
>>>> Yin, Fengwei
>>>>
>>>>> +
>>>>>            /* No page in the page cache at all */
>>>>>            count_vm_event(PGMAJFAULT);
>>>>>            count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
>
Huang, Ying Nov. 23, 2023, 8:36 a.m. UTC | #6
Peng Zhang <zhangpeng362@huawei.com> writes:

> From: ZhangPeng <zhangpeng362@huawei.com>
>
> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
> in application, which leading to an unexpected performance issue[1].
>
> This caused by temporarily cleared pte during a read/modify/write update
> of the pte, eg, do_numa_page()/change_pte_range().
>
> For the data segment of the user-mode program, the global variable area
> is a private mapping. After the pagecache is loaded, the private anonymous
> page is generated after the COW is triggered. Mlockall can lock COW pages
> (anonymous pages), but the original file pages cannot be locked and may
> be reclaimed. If the global variable (private anon page) is accessed when
> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>
> At this time, the original private file page may have been reclaimed.
> If the page cache is not available at this time, a major fault will be
> triggered and the file will be read, causing additional overhead.
>
> Fix this by rechecking the pte by holding ptl in filemap_fault() before
> triggering a major fault.
>
> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Suggested-by: "Huang, Ying" <ying.huang@intel.com>

:-)

> ---
>  mm/filemap.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 71f00539ac00..bb5e6a2790dc 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  			mapping_locked = true;
>  		}
>  	} else {
> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
> +						  vmf->address, &vmf->ptl);
> +		if (ptep) {
> +			/*
> +			 * Recheck pte with ptl locked as the pte can be cleared
> +			 * temporarily during a read/modify/write update.
> +			 */
> +			if (unlikely(!pte_none(ptep_get(ptep))))
> +				ret = VM_FAULT_NOPAGE;
> +			pte_unmap_unlock(ptep, vmf->ptl);
> +			if (unlikely(ret))
> +				return ret;
> +		}
> +

Need to deal with ptep == NULL, although that is highly unlikely.

--
Best Regards,
Huang, Ying

>  		/* No page in the page cache at all */
>  		count_vm_event(PGMAJFAULT);
>  		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Peng Zhang Nov. 23, 2023, 9:09 a.m. UTC | #7
On 2023/11/23 16:29, Yin Fengwei wrote:

> On 11/23/23 15:57, zhangpeng (AS) wrote:
>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>
>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>
>>>>> Hi Peng,
>>>>>
>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>
>>>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>>> in application, which leading to an unexpected performance issue[1].
>>>>>>
>>>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>>>
>>>>>> For the data segment of the user-mode program, the global variable area
>>>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>>>
>>>>>> At this time, the original private file page may have been reclaimed.
>>>>>> If the page cache is not available at this time, a major fault will be
>>>>>> triggered and the file will be read, causing additional overhead.
>>>>>>
>>>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>>>> triggering a major fault.
>>>>>>
>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>
>>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>>> ---
>>>>>>     mm/filemap.c | 14 ++++++++++++++
>>>>>>     1 file changed, 14 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>> --- a/mm/filemap.c
>>>>>> +++ b/mm/filemap.c
>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>                 mapping_locked = true;
>>>>>>             }
>>>>>>         } else {
>>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>> +                          vmf->address, &vmf->ptl);
>>>>>> +        if (ptep) {
>>>>>> +            /*
>>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>>> +             * temporarily during a read/modify/write update.
>>>>>> +             */
>>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>>> +            if (unlikely(ret))
>>>>>> +                return ret;
>>>>>> +        }
>>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>>> Thank you for your reply.
>>>>
>>>> If we don't take PTL, the current use case won't trigger this issue either.
>>> Is this verified by testing or just in theory?
>> If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
>> this issue will also trigger. Without delay, we haven't reproduced this problem
>> so far.
> Thanks for the testing.
>
>>>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>>>> there is still a possibility of triggering this issue. The corner case is that
>>>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>>>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore,task 2 passes the
>>>> check whether the PTE is not NONE before task 1 updates PTE in
>>>> ptep_modify_prot_commit() without taking PTL.
>>> There is very limited operations between ptep_modify_prot_start() and
>>> ptep_modify_prot_commit(). While the code path from page fault to this check is
>>> long. My understanding is it's very likely the PTE is not NONE when do PTE check
>>> here without hold PTL (This is my theory. :)).
>> Yes, there is a high probability that this issue won't occur without taking PTL.
>>
>>> In the other side, acquiring/releasing PTL may bring performance impaction. It may
>>> not be big deal because the IO operations in this code path. But it's better to
>>> collect some performance data IMHO.
>> We tested the performance of file private mapping page fault (page_fault2.c of
>> will-it-scale [1]) and file shared mapping page fault (page_fault3.c of will-it-scale).
>> The difference in performance (in operations per second) before and after patch
>> applied is about 0.7% on a x86 physical machine.
>>
>> [1] https://github.com/antonblanchard/will-it-scale/tree/master
> Maybe include this performance related information to commit message?

Sure, I'll add it in the next version.

> For the code change, looks good to me.
>
> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>

Thanks!

>
> Regards
> Yin, Fengwei
>
>>> Regards
>>> Yin, Fengwei
>>>
>>>>> Regards
>>>>> Yin, Fengwei
>>>>>
>>>>>> +
>>>>>>             /* No page in the page cache at all */
>>>>>>             count_vm_event(PGMAJFAULT);
>>>>>>             count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Peng Zhang Nov. 23, 2023, 9:09 a.m. UTC | #8
On 2023/11/23 16:36, Huang, Ying wrote:

> Peng Zhang <zhangpeng362@huawei.com> writes:
>
>> From: ZhangPeng <zhangpeng362@huawei.com>
>>
>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>> in application, which leading to an unexpected performance issue[1].
>>
>> This caused by temporarily cleared pte during a read/modify/write update
>> of the pte, eg, do_numa_page()/change_pte_range().
>>
>> For the data segment of the user-mode program, the global variable area
>> is a private mapping. After the pagecache is loaded, the private anonymous
>> page is generated after the COW is triggered. Mlockall can lock COW pages
>> (anonymous pages), but the original file pages cannot be locked and may
>> be reclaimed. If the global variable (private anon page) is accessed when
>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>
>> At this time, the original private file page may have been reclaimed.
>> If the page cache is not available at this time, a major fault will be
>> triggered and the file will be read, causing additional overhead.
>>
>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>> triggering a major fault.
>>
>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>
>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Suggested-by: "Huang, Ying" <ying.huang@intel.com>
>
> :-)

Yes! :-)

>> ---
>>   mm/filemap.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index 71f00539ac00..bb5e6a2790dc 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>   			mapping_locked = true;
>>   		}
>>   	} else {
>> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>> +						  vmf->address, &vmf->ptl);
>> +		if (ptep) {
>> +			/*
>> +			 * Recheck pte with ptl locked as the pte can be cleared
>> +			 * temporarily during a read/modify/write update.
>> +			 */
>> +			if (unlikely(!pte_none(ptep_get(ptep))))
>> +				ret = VM_FAULT_NOPAGE;
>> +			pte_unmap_unlock(ptep, vmf->ptl);
>> +			if (unlikely(ret))
>> +				return ret;
>> +		}
>> +
> Need to deal with ptep == NULL.  Although that is high impossible.

If ptep == NULL, we may just need to return VM_FAULT_SIGBUS.
I'll add it in the next version.

Thanks!

> --
> Best Regards,
> Huang, Ying
>
>>   		/* No page in the page cache at all */
>>   		count_vm_event(PGMAJFAULT);
>>   		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Matthew Wilcox Nov. 23, 2023, 3:33 p.m. UTC | #9
On Thu, Nov 23, 2023 at 05:09:04PM +0800, zhangpeng (AS) wrote:
> > > +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
> > > +						  vmf->address, &vmf->ptl);
> > > +		if (ptep) {
> > > +			/*
> > > +			 * Recheck pte with ptl locked as the pte can be cleared
> > > +			 * temporarily during a read/modify/write update.
> > > +			 */
> > > +			if (unlikely(!pte_none(ptep_get(ptep))))
> > > +				ret = VM_FAULT_NOPAGE;
> > > +			pte_unmap_unlock(ptep, vmf->ptl);
> > > +			if (unlikely(ret))
> > > +				return ret;
> > > +		}
> > > +
> > Need to deal with ptep == NULL.  Although that is high impossible.
> 
> If ptep == NULL, we may just need to return VM_FAULT_SIGBUS.
> I'll add it in the next version.

no?  wouldn't ptep being NULL mean that the ptep has been replaced with
a PMD entry, and thus should return NOPAGE?
Peng Zhang Nov. 24, 2023, 2:04 a.m. UTC | #10
On 2023/11/23 23:33, Matthew Wilcox wrote:

> On Thu, Nov 23, 2023 at 05:09:04PM +0800, zhangpeng (AS) wrote:
>>>> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>> +						  vmf->address, &vmf->ptl);
>>>> +		if (ptep) {
>>>> +			/*
>>>> +			 * Recheck pte with ptl locked as the pte can be cleared
>>>> +			 * temporarily during a read/modify/write update.
>>>> +			 */
>>>> +			if (unlikely(!pte_none(ptep_get(ptep))))
>>>> +				ret = VM_FAULT_NOPAGE;
>>>> +			pte_unmap_unlock(ptep, vmf->ptl);
>>>> +			if (unlikely(ret))
>>>> +				return ret;
>>>> +		}
>>>> +
>>> Need to deal with ptep == NULL.  Although that is highly improbable.
>> If ptep == NULL, we may just need to return VM_FAULT_SIGBUS.
>> I'll add it in the next version.
> no?  wouldn't ptep being NULL mean that the ptep has been replaced with
> a PMD entry, and thus should return NOPAGE?

Yes, ptep == NULL means that the ptep has been replaced with a PMD entry.
I'll add return NOPAGE in the next version.

Thanks!
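[Editor's note] The behaviour agreed in this exchange — VM_FAULT_NOPAGE both when the pte turns out to be populated and when ptep is NULL (the pte table replaced by a PMD entry) — can be mocked in plain userspace C. This is only an illustrative sketch: `pte_t`, `pte_none()` and the fault code below are simplified stand-ins, not the kernel definitions, and in the kernel the non-NULL branch runs with the PTL held.

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long pte_t;        /* stand-in for the kernel's pte_t */

#define VM_FAULT_NOPAGE 0x0100u     /* illustrative value only */

static int pte_none(pte_t pte)
{
	return pte == 0;
}

/*
 * Mock of the recheck: VM_FAULT_NOPAGE both when the pte has been
 * (re)populated under us and when ptep is NULL, i.e. the pte table
 * was replaced by a PMD entry.  A return of 0 means "really no
 * mapping", so the caller falls through to the major-fault path.
 */
static unsigned int recheck_pte(const pte_t *ptep)
{
	if (!ptep)
		return VM_FAULT_NOPAGE;
	if (!pte_none(*ptep))
		return VM_FAULT_NOPAGE;
	return 0;
}
```

Only the control flow is modelled here; the real code must also unmap and unlock the pte before returning.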
Huang, Ying Nov. 24, 2023, 4:13 a.m. UTC | #11
"zhangpeng (AS)" <zhangpeng362@huawei.com> writes:

> On 2023/11/23 13:26, Yin Fengwei wrote:
>
>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>
>>>> Hi Peng,
>>>>
>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>
>>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>> in application, which leading to an unexpected performance issue[1].
>>>>>
>>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>>
>>>>> For the data segment of the user-mode program, the global variable area
>>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>>
>>>>> At this time, the original private file page may have been reclaimed.
>>>>> If the page cache is not available at this time, a major fault will be
>>>>> triggered and the file will be read, causing additional overhead.
>>>>>
>>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>>> triggering a major fault.
>>>>>
>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>
>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>> ---
>>>>>    mm/filemap.c | 14 ++++++++++++++
>>>>>    1 file changed, 14 insertions(+)
>>>>>
>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>> --- a/mm/filemap.c
>>>>> +++ b/mm/filemap.c
>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>                mapping_locked = true;
>>>>>            }
>>>>>        } else {
>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>> +                          vmf->address, &vmf->ptl);
>>>>> +        if (ptep) {
>>>>> +            /*
>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>> +             * temporarily during a read/modify/write update.
>>>>> +             */
>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>> +            if (unlikely(ret))
>>>>> +                return ret;
>>>>> +        }
>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>> Thank you for your reply.
>>>
>>> If we don't take PTL, the current use case won't trigger this issue either.
>> Is this verified by testing or just in theory?
>
> If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
> this issue will also trigger. Without delay, we haven't reproduced this problem
> so far.
>
>>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>>> there is still a possibility of triggering this issue. The corner case is that
>>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore, task 2 passes the
>>> check whether the PTE is not NONE before task 1 updates PTE in
>>> ptep_modify_prot_commit() without taking PTL.
>> There is very limited operations between ptep_modify_prot_start() and
>> ptep_modify_prot_commit(). While the code path from page fault to this check is
>> long. My understanding is it's very likely the PTE is not NONE when do PTE check
>> here without hold PTL (This is my theory. :)).
>
> Yes, there is a high probability that this issue won't occur without taking PTL.
>
>> In the other side, acquiring/releasing PTL may bring performance impaction. It may
>> not be big deal because the IO operations in this code path. But it's better to
>> collect some performance data IMHO.
>
> We tested the performance of file private mapping page fault (page_fault2.c of
> will-it-scale [1]) and file shared mapping page fault (page_fault3.c of will-it-scale).
> The difference in performance (in operations per second) before and after patch
>> applied is about 0.7% on an x86 physical machine.

Is it an improvement or a reduction?

--
Best Regards,
Huang, Ying

> [1] https://github.com/antonblanchard/will-it-scale/tree/master
>
>>
>> Regards
>> Yin, Fengwei
>>
>>>> Regards
>>>> Yin, Fengwei
>>>>
>>>>> +
>>>>>            /* No page in the page cache at all */
>>>>>            count_vm_event(PGMAJFAULT);
>>>>>            count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
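[Editor's note] The race window discussed above — a pte transiently cleared between ptep_modify_prot_start() and ptep_modify_prot_commit(), observable only by a checker that skips the PTL — can be modelled deterministically in userspace. Everything below is a simplified mock (a pthread mutex stands in for the PTL; the helpers are not the kernel implementations), walked through single-threaded rather than actually raced:

```c
#include <assert.h>
#include <pthread.h>

typedef unsigned long pte_t;       /* stand-in for the kernel's pte_t */

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;  /* "PTL" */
static pte_t pte = 0x1234;         /* a mapped (non-none) pte */

static int pte_none(pte_t v)
{
	return v == 0;
}

/* modelled on ptep_modify_prot_start(): the pte is transiently cleared */
static pte_t modify_prot_start(pte_t *ptep)
{
	pte_t old = *ptep;
	*ptep = 0;
	return old;
}

/* modelled on ptep_modify_prot_commit(): the new value is installed */
static void modify_prot_commit(pte_t *ptep, pte_t v)
{
	*ptep = v;
}

/* what a checker that ignores the PTL would conclude right now */
static int lockless_check_sees_none(void)
{
	return pte_none(pte);
}

/* whether a checker that takes the PTL could even run right now */
static int locked_check_can_run(void)
{
	if (pthread_mutex_trylock(&ptl) == 0) {
		pthread_mutex_unlock(&ptl);
		return 1;
	}
	return 0;
}

/* walk through one update deterministically instead of racing threads */
static void demo(void)
{
	pte_t old;

	pthread_mutex_lock(&ptl);            /* updater takes the PTL ... */
	old = modify_prot_start(&pte);       /* ... and clears the pte    */

	assert(lockless_check_sees_none());  /* the unlocked check is fooled */
	assert(!locked_check_can_run());     /* the PTL recheck is excluded  */

	modify_prot_commit(&pte, old);
	pthread_mutex_unlock(&ptl);

	assert(locked_check_can_run());      /* after commit the recheck runs */
	assert(!lockless_check_sees_none()); /* ... and sees a mapped pte     */
}
```

This is the sense in which the lockless check "very likely" works but is not guaranteed: the window exists, it is merely narrow.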
Huang, Ying Nov. 24, 2023, 4:26 a.m. UTC | #12
"Huang, Ying" <ying.huang@intel.com> writes:

> "zhangpeng (AS)" <zhangpeng362@huawei.com> writes:
>
>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>
>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>
>>>>> Hi Peng,
>>>>>
>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>
>>>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>>> in application, which leading to an unexpected performance issue[1].
>>>>>>
>>>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>>>
>>>>>> For the data segment of the user-mode program, the global variable area
>>>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>>>
>>>>>> At this time, the original private file page may have been reclaimed.
>>>>>> If the page cache is not available at this time, a major fault will be
>>>>>> triggered and the file will be read, causing additional overhead.
>>>>>>
>>>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>>>> triggering a major fault.
>>>>>>
>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>
>>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>>> ---
>>>>>>    mm/filemap.c | 14 ++++++++++++++
>>>>>>    1 file changed, 14 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>> --- a/mm/filemap.c
>>>>>> +++ b/mm/filemap.c
>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>                mapping_locked = true;
>>>>>>            }
>>>>>>        } else {
>>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>> +                          vmf->address, &vmf->ptl);
>>>>>> +        if (ptep) {
>>>>>> +            /*
>>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>>> +             * temporarily during a read/modify/write update.
>>>>>> +             */
>>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>>> +            if (unlikely(ret))
>>>>>> +                return ret;
>>>>>> +        }
>>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>>> Thank you for your reply.
>>>>
>>>> If we don't take PTL, the current use case won't trigger this issue either.
>>> Is this verified by testing or just in theory?
>>
>> If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
>> this issue will also trigger. Without delay, we haven't reproduced this problem
>> so far.
>>
>>>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>>>> there is still a possibility of triggering this issue. The corner case is that
>>>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>>>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore, task 2 passes the
>>>> check whether the PTE is not NONE before task 1 updates PTE in
>>>> ptep_modify_prot_commit() without taking PTL.
>>> There is very limited operations between ptep_modify_prot_start() and
>>> ptep_modify_prot_commit(). While the code path from page fault to this check is
>>> long. My understanding is it's very likely the PTE is not NONE when do PTE check
>>> here without hold PTL (This is my theory. :)).
>>
>> Yes, there is a high probability that this issue won't occur without taking PTL.
>>
>>> In the other side, acquiring/releasing PTL may bring performance impaction. It may
>>> not be big deal because the IO operations in this code path. But it's better to
>>> collect some performance data IMHO.
>>
>> We tested the performance of file private mapping page fault (page_fault2.c of
>> will-it-scale [1]) and file shared mapping page fault (page_fault3.c of will-it-scale).
>> The difference in performance (in operations per second) before and after patch
>> applied is about 0.7% on an x86 physical machine.
>
> Is it an improvement or a reduction?

And I think that you need to test ramdisk cases too to verify whether
this will cause performance regression and how much.

--
Best Regards,
Huang, Ying

> --
> Best Regards,
> Huang, Ying
>
>> [1] https://github.com/antonblanchard/will-it-scale/tree/master
>>
>>>
>>> Regards
>>> Yin, Fengwei
>>>
>>>>> Regards
>>>>> Yin, Fengwei
>>>>>
>>>>>> +
>>>>>>            /* No page in the page cache at all */
>>>>>>            count_vm_event(PGMAJFAULT);
>>>>>>            count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Matthew Wilcox Nov. 24, 2023, 6:05 a.m. UTC | #13
On Wed, Nov 22, 2023 at 10:00:52PM +0800, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
> 
> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
> in application, which leading to an unexpected performance issue[1].
> 
> This caused by temporarily cleared pte during a read/modify/write update
> of the pte, eg, do_numa_page()/change_pte_range().

What I haven't quite understood yet is why we need to set the pte to
zero on x86 in the specific case of do_numa_page().  I understand that
ppc needs to.

Could someone explain why the _default_ definition of
ptep_modify_prot_start() is not:

+++ b/include/linux/pgtable.h
@@ -1074,7 +1074,7 @@ static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
                                           unsigned long addr,
                                           pte_t *ptep)
 {
-       return __ptep_modify_prot_start(vma, addr, ptep);
+       return *ptep;
 }

 /*
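[Editor's note] The difference between the current default start helper (which transiently clears the pte, modelled here on an exchange) and the read-only default proposed above can be illustrated with a small userspace mock. These are simplified stand-ins, not the kernel helpers:

```c
#include <assert.h>

typedef unsigned long pte_t;   /* stand-in for the kernel's pte_t */

static int pte_none(pte_t v)
{
	return v == 0;
}

/* today's default, modelled on an atomic exchange: transiently clears */
static pte_t clearing_start(pte_t *ptep)
{
	pte_t old = *ptep;
	*ptep = 0;
	return old;
}

/* the proposed default: just read the pte and leave it in place */
static pte_t reading_start(pte_t *ptep)
{
	return *ptep;
}

static void prot_commit(pte_t *ptep, pte_t v)
{
	*ptep = v;
}

/*
 * Run one read/modify/write update and report whether, inside the
 * window between start and commit, a lockless racer could have
 * observed pte_none.
 */
static int window_shows_none(pte_t (*start)(pte_t *))
{
	pte_t pte = 0x1234;
	pte_t old = start(&pte);
	int seen = pte_none(pte);      /* what a racing fault would read */

	prot_commit(&pte, old | 0x4);  /* e.g. flip a protection bit */
	return seen;
}
```

With the read-only start there is no instant at which a concurrent lockless pte_none() check can be fooled, which is why the choice of default matters for the race this patch works around.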
Peng Zhang Nov. 24, 2023, 7:26 a.m. UTC | #14
On 2023/11/23 16:36, Huang, Ying wrote:

> Peng Zhang <zhangpeng362@huawei.com> writes:
>
>> From: ZhangPeng <zhangpeng362@huawei.com>
>>
>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>> in application, which leading to an unexpected performance issue[1].
>>
>> This caused by temporarily cleared pte during a read/modify/write update
>> of the pte, eg, do_numa_page()/change_pte_range().
>>
>> For the data segment of the user-mode program, the global variable area
>> is a private mapping. After the pagecache is loaded, the private anonymous
>> page is generated after the COW is triggered. Mlockall can lock COW pages
>> (anonymous pages), but the original file pages cannot be locked and may
>> be reclaimed. If the global variable (private anon page) is accessed when
>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>
>> At this time, the original private file page may have been reclaimed.
>> If the page cache is not available at this time, a major fault will be
>> triggered and the file will be read, causing additional overhead.
>>
>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>> triggering a major fault.
>>
>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>
>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Suggested-by: "Huang, Ying" <ying.huang@intel.com>
>
> :-)
>
>> ---
>>   mm/filemap.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index 71f00539ac00..bb5e6a2790dc 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>   			mapping_locked = true;
>>   		}
>>   	} else {
>> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>> +						  vmf->address, &vmf->ptl);
>> +		if (ptep) {
>> +			/*
>> +			 * Recheck pte with ptl locked as the pte can be cleared
>> +			 * temporarily during a read/modify/write update.
>> +			 */
>> +			if (unlikely(!pte_none(ptep_get(ptep))))
>> +				ret = VM_FAULT_NOPAGE;
>> +			pte_unmap_unlock(ptep, vmf->ptl);
>> +			if (unlikely(ret))
>> +				return ret;
>> +		}
>> +
> Need to deal with ptep == NULL.  Although that is highly improbable.

Maybe we don't need to deal with ptep == NULL, because that case is
already handled later in filemap_fault()?
ptep == NULL means that the pte table has been replaced with a PMD entry.
In this case, a major fault is also required.

> --
> Best Regards,
> Huang, Ying
>
>>   		/* No page in the page cache at all */
>>   		count_vm_event(PGMAJFAULT);
>>   		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Peng Zhang Nov. 24, 2023, 7:26 a.m. UTC | #15
On 2023/11/24 12:13, Huang, Ying wrote:

> "zhangpeng (AS)" <zhangpeng362@huawei.com> writes:
>
>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>
>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>
>>>>> Hi Peng,
>>>>>
>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>
>>>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>>> in application, which leading to an unexpected performance issue[1].
>>>>>>
>>>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>>>
>>>>>> For the data segment of the user-mode program, the global variable area
>>>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>>>
>>>>>> At this time, the original private file page may have been reclaimed.
>>>>>> If the page cache is not available at this time, a major fault will be
>>>>>> triggered and the file will be read, causing additional overhead.
>>>>>>
>>>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>>>> triggering a major fault.
>>>>>>
>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>
>>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>>> ---
>>>>>>     mm/filemap.c | 14 ++++++++++++++
>>>>>>     1 file changed, 14 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>> --- a/mm/filemap.c
>>>>>> +++ b/mm/filemap.c
>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>                 mapping_locked = true;
>>>>>>             }
>>>>>>         } else {
>>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>> +                          vmf->address, &vmf->ptl);
>>>>>> +        if (ptep) {
>>>>>> +            /*
>>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>>> +             * temporarily during a read/modify/write update.
>>>>>> +             */
>>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>>> +            if (unlikely(ret))
>>>>>> +                return ret;
>>>>>> +        }
>>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>>> Thank you for your reply.
>>>>
>>>> If we don't take PTL, the current use case won't trigger this issue either.
>>> Is this verified by testing or just in theory?
>> If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
>> this issue will also trigger. Without delay, we haven't reproduced this problem
>> so far.
>>
>>>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>>>> there is still a possibility of triggering this issue. The corner case is that
>>>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>>>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore, task 2 passes the
>>>> check whether the PTE is not NONE before task 1 updates PTE in
>>>> ptep_modify_prot_commit() without taking PTL.
>>> There is very limited operations between ptep_modify_prot_start() and
>>> ptep_modify_prot_commit(). While the code path from page fault to this check is
>>> long. My understanding is it's very likely the PTE is not NONE when do PTE check
>>> here without hold PTL (This is my theory. :)).
>> Yes, there is a high probability that this issue won't occur without taking PTL.
>>
>>> In the other side, acquiring/releasing PTL may bring performance impaction. It may
>>> not be big deal because the IO operations in this code path. But it's better to
>>> collect some performance data IMHO.
>> We tested the performance of file private mapping page fault (page_fault2.c of
>> will-it-scale [1]) and file shared mapping page fault (page_fault3.c of will-it-scale).
>> The difference in performance (in operations per second) before and after patch
>> applied is about 0.7% on an x86 physical machine.
> Is it an improvement or a reduction?

After applying the patch, the performance (in operations per second) is 0.7%
higher on average than before.

> --
> Best Regards,
> Huang, Ying
>
>> [1] https://github.com/antonblanchard/will-it-scale/tree/master
>>
>>> Regards
>>> Yin, Fengwei
>>>
>>>>> Regards
>>>>> Yin, Fengwei
>>>>>
>>>>>> +
>>>>>>             /* No page in the page cache at all */
>>>>>>             count_vm_event(PGMAJFAULT);
>>>>>>             count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Peng Zhang Nov. 24, 2023, 7:27 a.m. UTC | #16
On 2023/11/24 12:26, Huang, Ying wrote:

> "Huang, Ying" <ying.huang@intel.com> writes:
>
>> "zhangpeng (AS)" <zhangpeng362@huawei.com> writes:
>>
>>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>>
>>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>>
>>>>>> Hi Peng,
>>>>>>
>>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>>
>>>>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>>>> in application, which leading to an unexpected performance issue[1].
>>>>>>>
>>>>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>>>>
>>>>>>> For the data segment of the user-mode program, the global variable area
>>>>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>>>>
>>>>>>> At this time, the original private file page may have been reclaimed.
>>>>>>> If the page cache is not available at this time, a major fault will be
>>>>>>> triggered and the file will be read, causing additional overhead.
>>>>>>>
>>>>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>>>>> triggering a major fault.
>>>>>>>
>>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>>
>>>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>>>> ---
>>>>>>>     mm/filemap.c | 14 ++++++++++++++
>>>>>>>     1 file changed, 14 insertions(+)
>>>>>>>
>>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>>> --- a/mm/filemap.c
>>>>>>> +++ b/mm/filemap.c
>>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>>                 mapping_locked = true;
>>>>>>>             }
>>>>>>>         } else {
>>>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>>> +                          vmf->address, &vmf->ptl);
>>>>>>> +        if (ptep) {
>>>>>>> +            /*
>>>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>>>> +             * temporarily during a read/modify/write update.
>>>>>>> +             */
>>>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>>>> +            if (unlikely(ret))
>>>>>>> +                return ret;
>>>>>>> +        }
>>>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>>>> Thank you for your reply.
>>>>>
>>>>> If we don't take PTL, the current use case won't trigger this issue either.
>>>> Is this verified by testing or just in theory?
>>> If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
>>> this issue will also trigger. Without delay, we haven't reproduced this problem
>>> so far.
>>>
>>>>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>>>>> there is still a possibility of triggering this issue. The corner case is that
>>>>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>>>>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore, task 2 passes the
>>>>> check whether the PTE is not NONE before task 1 updates PTE in
>>>>> ptep_modify_prot_commit() without taking PTL.
>>>> There is very limited operations between ptep_modify_prot_start() and
>>>> ptep_modify_prot_commit(). While the code path from page fault to this check is
>>>> long. My understanding is it's very likely the PTE is not NONE when do PTE check
>>>> here without hold PTL (This is my theory. :)).
>>> Yes, there is a high probability that this issue won't occur without taking PTL.
>>>
>>>> In the other side, acquiring/releasing PTL may bring performance impaction. It may
>>>> not be big deal because the IO operations in this code path. But it's better to
>>>> collect some performance data IMHO.
>>> We tested the performance of file private mapping page fault (page_fault2.c of
>>> will-it-scale [1]) and file shared mapping page fault (page_fault3.c of will-it-scale).
>>> The difference in performance (in operations per second) before and after patch
>>> applied is about 0.7% on an x86 physical machine.
>> Is it an improvement or a reduction?
> And I think that you need to test ramdisk cases too to verify whether
> this will cause performance regression and how much.

Yes, I will.
In addition, are there any recommended ramdisk test cases?
Peng Zhang Nov. 24, 2023, 7:43 a.m. UTC | #17
On 2023/11/24 14:05, Matthew Wilcox wrote:

> On Wed, Nov 22, 2023 at 10:00:52PM +0800, Peng Zhang wrote:
>> From: ZhangPeng <zhangpeng362@huawei.com>
>>
>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>> in application, which leading to an unexpected performance issue[1].
>>
>> This caused by temporarily cleared pte during a read/modify/write update
>> of the pte, eg, do_numa_page()/change_pte_range().
> What I haven't quite understood yet is why we need to set the pte to
> zero on x86 in the specific case of do_numa_page().  I understand that
> ppc needs to.

I'm also curious. Could ptep_modify_prot_start() on architectures other
than ppc avoid clearing the pte? We are mainly concerned with arm64 and x86.

> Could someone explain why the _default_ definition of
> ptep_modify_prot_start() is not:
>
> +++ b/include/linux/pgtable.h
> @@ -1074,7 +1074,7 @@ static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
>                                             unsigned long addr,
>                                             pte_t *ptep)
>   {
> -       return __ptep_modify_prot_start(vma, addr, ptep);
> +       return *ptep;
>   }
>
>   /*
>
>
Huang, Ying Nov. 24, 2023, 7:59 a.m. UTC | #18
"zhangpeng (AS)" <zhangpeng362@huawei.com> writes:

> On 2023/11/23 16:36, Huang, Ying wrote:
>
>> Peng Zhang <zhangpeng362@huawei.com> writes:
>>
>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>
>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>> in application, which leading to an unexpected performance issue[1].
>>>
>>> This caused by temporarily cleared pte during a read/modify/write update
>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>
>>> For the data segment of the user-mode program, the global variable area
>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>> (anonymous pages), but the original file pages cannot be locked and may
>>> be reclaimed. If the global variable (private anon page) is accessed when
>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>
>>> At this time, the original private file page may have been reclaimed.
>>> If the page cache is not available at this time, a major fault will be
>>> triggered and the file will be read, causing additional overhead.
>>>
>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>> triggering a major fault.
>>>
>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>
>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Suggested-by: "Huang, Ying" <ying.huang@intel.com>
>>
>> :-)
>>
>>> ---
>>>   mm/filemap.c | 14 ++++++++++++++
>>>   1 file changed, 14 insertions(+)
>>>
>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>> index 71f00539ac00..bb5e6a2790dc 100644
>>> --- a/mm/filemap.c
>>> +++ b/mm/filemap.c
>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>   			mapping_locked = true;
>>>   		}
>>>   	} else {
>>> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>> +						  vmf->address, &vmf->ptl);
>>> +		if (ptep) {
>>> +			/*
>>> +			 * Recheck pte with ptl locked as the pte can be cleared
>>> +			 * temporarily during a read/modify/write update.
>>> +			 */
>>> +			if (unlikely(!pte_none(ptep_get(ptep))))
>>> +				ret = VM_FAULT_NOPAGE;
>>> +			pte_unmap_unlock(ptep, vmf->ptl);
>>> +			if (unlikely(ret))
>>> +				return ret;
>>> +		}
>>> +
>> Need to deal with ptep == NULL.  Although that is highly improbable.
>
> Maybe we don't need to deal with ptep == NULL, because that case is
> already handled later in filemap_fault()?
> ptep == NULL means that the pte table has been replaced with a PMD entry.
> In this case, a major fault is also required.

I still think that we need to deal with that.  That is common error
processing logic.

--
Best Regards,
Huang, Ying

>>
>>>   		/* No page in the page cache at all */
>>>   		count_vm_event(PGMAJFAULT);
>>>   		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
Huang, Ying Nov. 24, 2023, 8:04 a.m. UTC | #19
"zhangpeng (AS)" <zhangpeng362@huawei.com> writes:

> On 2023/11/24 12:26, Huang, Ying wrote:
>
>> "Huang, Ying" <ying.huang@intel.com> writes:
>>
>>> "zhangpeng (AS)" <zhangpeng362@huawei.com> writes:
>>>
>>>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>>>
>>>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>>>
>>>>>>> Hi Peng,
>>>>>>>
>>>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>>>
>>>>>>>> The major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>>>>> in application, which leading to an unexpected performance issue[1].
>>>>>>>>
>>>>>>>> This caused by temporarily cleared pte during a read/modify/write update
>>>>>>>> of the pte, eg, do_numa_page()/change_pte_range().
>>>>>>>>
>>>>>>>> For the data segment of the user-mode program, the global variable area
>>>>>>>> is a private mapping. After the pagecache is loaded, the private anonymous
>>>>>>>> page is generated after the COW is triggered. Mlockall can lock COW pages
>>>>>>>> (anonymous pages), but the original file pages cannot be locked and may
>>>>>>>> be reclaimed. If the global variable (private anon page) is accessed when
>>>>>>>> vmf->pte is zeroed in numa fault, a file page fault will be triggered.
>>>>>>>>
>>>>>>>> At this time, the original private file page may have been reclaimed.
>>>>>>>> If the page cache is not available at this time, a major fault will be
>>>>>>>> triggered and the file will be read, causing additional overhead.
>>>>>>>>
>>>>>>>> Fix this by rechecking the pte by holding ptl in filemap_fault() before
>>>>>>>> triggering a major fault.
>>>>>>>>
>>>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>>>
>>>>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>>>>> ---
>>>>>>>>     mm/filemap.c | 14 ++++++++++++++
>>>>>>>>     1 file changed, 14 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>>>> --- a/mm/filemap.c
>>>>>>>> +++ b/mm/filemap.c
>>>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>>>                 mapping_locked = true;
>>>>>>>>             }
>>>>>>>>         } else {
>>>>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>>>> +                          vmf->address, &vmf->ptl);
>>>>>>>> +        if (ptep) {
>>>>>>>> +            /*
>>>>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>>>>> +             * temporarily during a read/modify/write update.
>>>>>>>> +             */
>>>>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>>>>> +            if (unlikely(ret))
>>>>>>>> +                return ret;
>>>>>>>> +        }
>>>>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>>>>> Thank you for your reply.
>>>>>>
>>>>>> If we don't take PTL, the current use case won't trigger this issue either.
>>>>> Is this verified by testing or just in theory?
>>>> If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
>>>> this issue will also trigger. Without delay, we haven't reproduced this problem
>>>> so far.
>>>>
>>>>>> In most cases, if we don't take PTL, this issue won't be triggered. However,
>>>>>> there is still a possibility of triggering this issue. The corner case is that
>>>>>> task 2 triggers a page fault when task 1 is between ptep_modify_prot_start()
>>>>>> and ptep_modify_prot_commit() in do_numa_page(). Furthermore, task 2 passes the
>>>>>> check whether the PTE is not NONE before task 1 updates PTE in
>>>>>> ptep_modify_prot_commit() without taking PTL.
>>>>> There are very few operations between ptep_modify_prot_start() and
>>>>> ptep_modify_prot_commit(), while the code path from page fault to this check is
>>>>> long. My understanding is it's very likely the PTE is not NONE when doing the PTE
>>>>> check here without holding the PTL (this is my theory. :)).
>>>> Yes, there is a high probability that this issue won't occur without taking PTL.
>>>>
>>>>> On the other side, acquiring/releasing the PTL may bring a performance impact.
>>>>> It may not be a big deal because of the IO operations in this code path. But it's
>>>>> better to collect some performance data IMHO.
>>>> We tested the performance of file private mapping page fault (page_fault2.c of
>>>> will-it-scale [1]) and file shared mapping page fault (page_fault3.c of will-it-scale).
>>>> The difference in performance (in operations per second) before and after the
>>>> patch is applied is about 0.7% on an x86 physical machine.
>>> Is it an improvement or a reduction?
>> And I think that you need to test ramdisk cases too to verify whether
>> this will cause performance regression and how much.
>
> Yes, I will.
> In addition, are there any ramdisk test cases recommended? 
Peng Zhang Nov. 29, 2023, 1:24 a.m. UTC | #20
Huang, Ying Nov. 29, 2023, 2:59 a.m. UTC | #21
Peng Zhang Feb. 1, 2024, 12:10 p.m. UTC | #22
Huang, Ying Feb. 2, 2024, 12:39 a.m. UTC | #23
Peng Zhang Feb. 2, 2024, 3:31 a.m. UTC | #24
Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index 71f00539ac00..bb5e6a2790dc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3226,6 +3226,20 @@  vm_fault_t filemap_fault(struct vm_fault *vmf)
 			mapping_locked = true;
 		}
 	} else {
+		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+						  vmf->address, &vmf->ptl);
+		if (ptep) {
+			/*
+			 * Recheck pte with ptl locked as the pte can be cleared
+			 * temporarily during a read/modify/write update.
+			 */
+			if (unlikely(!pte_none(ptep_get(ptep))))
+				ret = VM_FAULT_NOPAGE;
+			pte_unmap_unlock(ptep, vmf->ptl);
+			if (unlikely(ret))
+				return ret;
+		}
+
 		/* No page in the page cache at all */
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
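The window that the recheck above closes can be sketched as a timeline. This is simplified pseudocode condensed from the thread's description of do_numa_page() and filemap_fault(), not the exact kernel code:

```
/* Task 1 (NUMA hinting fault)       Task 2 (access to the same address)
 *
 * do_numa_page()
 *   ptep_modify_prot_start()
 *     -> pte temporarily cleared
 *                                   access -> page fault
 *                                   filemap_fault()
 *                                     no page in the page cache,
 *                                     pte appears none
 *                                     -> major fault, file read
 *   ptep_modify_prot_commit()
 *     -> pte restored
 *
 * With the patch, Task 2 rechecks the pte under the PTL, sees it is no
 * longer none once Task 1 commits, and returns VM_FAULT_NOPAGE instead
 * of taking the major fault.
 */
```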