
[v3] tmpfs: don't enable large folios if not supported

Message ID 20241011065919.2086827-1-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series [v3] tmpfs: don't enable large folios if not supported

Commit Message

Kefeng Wang Oct. 11, 2024, 6:59 a.m. UTC
tmpfs supports large folios, but there are configurable options (mount
options and the runtime deny/force override) that enable or disable
large folio allocation. When large folios are disabled, writes suffer a
performance regression similar to the one fixed by commit 4e527d5841e2
("iomap: fault in smaller chunks for non-large folio mappings").

Fix it by not calling mapping_set_large_folios() in __shmem_get_inode()
when large folios are disabled.

Fixes: 9aac777aaf94 ("filemap: Convert generic_perform_write() to support large folios")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---

v3:
- don't enable large folio support in __shmem_get_inode() if disabled,
  suggested by Matthew.

v2:
- Don't use IOCB flags

 mm/shmem.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Baolin Wang Oct. 12, 2024, 3:59 a.m. UTC | #1
On 2024/10/11 14:59, Kefeng Wang wrote:
> tmpfs supports large folios, but there are configurable options (mount
> options and the runtime deny/force override) that enable or disable
> large folio allocation. When large folios are disabled, writes suffer a
> performance regression similar to the one fixed by commit 4e527d5841e2
> ("iomap: fault in smaller chunks for non-large folio mappings").
> 
> Fix it by not calling mapping_set_large_folios() in __shmem_get_inode()
> when large folios are disabled.
> 
> Fixes: 9aac777aaf94 ("filemap: Convert generic_perform_write() to support large folios")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> 
> v3:
> - don't enable large folio support in __shmem_get_inode() if disabled,
>    suggested by Matthew.
> 
> v2:
> - Don't use IOCB flags
> 
>   mm/shmem.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 0a2f78c2b919..2b859ac4ddc5 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2850,7 +2850,10 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
>   	cache_no_acl(inode);
>   	if (sbinfo->noswap)
>   		mapping_set_unevictable(inode->i_mapping);
> -	mapping_set_large_folios(inode->i_mapping);
> +
> +	if ((sbinfo->huge && shmem_huge != SHMEM_HUGE_DENY) ||
> +	    shmem_huge == SHMEM_HUGE_FORCE)
> +		mapping_set_large_folios(inode->i_mapping);

IMHO, I'm still a little concerned about the 'shmem_huge' validation.
Since 'shmem_huge' can be set at runtime, a file mapping created with
the 'huge=always' mount option might miss the opportunity to allocate
large folios if 'shmem_huge' is changed from 'deny' back to 'always' at
runtime.

So I'd like to drop the 'shmem_huge' validation and add some comments to
indicate that the 'deny' and 'force' options are only for testing
purposes, so the performance issue should not be a problem in real
production environments.

That's just my 2 cents:)
Kefeng Wang Oct. 14, 2024, 2:36 a.m. UTC | #2
On 2024/10/12 11:59, Baolin Wang wrote:
> 
> 
> On 2024/10/11 14:59, Kefeng Wang wrote:
>> tmpfs supports large folios, but there are configurable options (mount
>> options and the runtime deny/force override) that enable or disable
>> large folio allocation. When large folios are disabled, writes suffer
>> a performance regression similar to the one fixed by commit
>> 4e527d5841e2 ("iomap: fault in smaller chunks for non-large folio
>> mappings").
>>
>> Fix it by not calling mapping_set_large_folios() in __shmem_get_inode()
>> when large folios are disabled.
>>
>> Fixes: 9aac777aaf94 ("filemap: Convert generic_perform_write() to 
>> support large folios")
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>
>> v3:
>> - don't enable large folio support in __shmem_get_inode() if disabled,
>>    suggested by Matthew.
>>
>> v2:
>> - Don't use IOCB flags
>>
>>   mm/shmem.c | 5 ++++-
>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 0a2f78c2b919..2b859ac4ddc5 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -2850,7 +2850,10 @@ static struct inode *__shmem_get_inode(struct 
>> mnt_idmap *idmap,
>>       cache_no_acl(inode);
>>       if (sbinfo->noswap)
>>           mapping_set_unevictable(inode->i_mapping);
>> -    mapping_set_large_folios(inode->i_mapping);
>> +
>> +    if ((sbinfo->huge && shmem_huge != SHMEM_HUGE_DENY) ||
>> +        shmem_huge == SHMEM_HUGE_FORCE)
>> +        mapping_set_large_folios(inode->i_mapping);
> 
> IMHO, I'm still a little concerned about the 'shmem_huge' validation.
> Since 'shmem_huge' can be set at runtime, a file mapping created with
> the 'huge=always' mount option might miss the opportunity to allocate
> large folios if 'shmem_huge' is changed from 'deny' back to 'always'
> at runtime.
> 
> So I'd like to drop the 'shmem_huge' validation and add some comments
> to indicate that the 'deny' and 'force' options are only for testing
> purposes, so the performance issue should not be a problem in real
> production environments.

No strong opinion; the previous version could cover the runtime deny/
force toggles, but it was a little complicated, as Matthew pointed out.
If there are no other comments, I will drop the shmem_huge check.

> 
> That's just my 2 cents:)

Patch

diff --git a/mm/shmem.c b/mm/shmem.c
index 0a2f78c2b919..2b859ac4ddc5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2850,7 +2850,10 @@  static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
 	cache_no_acl(inode);
 	if (sbinfo->noswap)
 		mapping_set_unevictable(inode->i_mapping);
-	mapping_set_large_folios(inode->i_mapping);
+
+	if ((sbinfo->huge && shmem_huge != SHMEM_HUGE_DENY) ||
+	    shmem_huge == SHMEM_HUGE_FORCE)
+		mapping_set_large_folios(inode->i_mapping);
 
 	switch (mode & S_IFMT) {
 	default: