Message ID | 20241017141742.1169404-1-wangkefeng.wang@huawei.com (mailing list archive) |
---|---|
State | New |
Series | [v4] tmpfs: don't enable large folios if not supported |
On 2024/10/17 22:17, Kefeng Wang wrote:
> The tmpfs could support large folios, but there are some configurable
> options (mount options and runtime deny/force) to enable/disable large
> folio allocation, so there is a performance issue when performing writes
> without large folios; the issue is similar to commit 4e527d5841e2
> ("iomap: fault in smaller chunks for non-large folio mappings").
>
> Since 'deny' is for emergencies and 'force' is for testing, the performance
> issue should not be a problem in real production environments, so only
> skip calling mapping_set_large_folios() in __shmem_get_inode() when
> large folios are disabled with the mount huge=never option (the default
> policy).
>
> Fixes: 9aac777aaf94 ("filemap: Convert generic_perform_write() to support large folios")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

LGTM. Thanks.

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
> v4:
> - only fix mount huge=never since runtime deny/force are just for
>   emergencies/testing, suggested by Baolin
> v3:
> - don't enable large folio support in __shmem_get_inode() if disabled,
>   suggested by Matthew
> v2:
> - Don't use IOCB flags
>
>  mm/shmem.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index e933327d8dac..74ef214dc1a7 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2827,7 +2827,10 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
>  	cache_no_acl(inode);
>  	if (sbinfo->noswap)
>  		mapping_set_unevictable(inode->i_mapping);
> -	mapping_set_large_folios(inode->i_mapping);
> +
> +	/* Don't consider 'deny' for emergencies and 'force' for testing */
> +	if (sbinfo->huge)
> +		mapping_set_large_folios(inode->i_mapping);
>
>  	switch (mode & S_IFMT) {
>  	default:
diff --git a/mm/shmem.c b/mm/shmem.c
index e933327d8dac..74ef214dc1a7 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2827,7 +2827,10 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
 	cache_no_acl(inode);
 	if (sbinfo->noswap)
 		mapping_set_unevictable(inode->i_mapping);
-	mapping_set_large_folios(inode->i_mapping);
+
+	/* Don't consider 'deny' for emergencies and 'force' for testing */
+	if (sbinfo->huge)
+		mapping_set_large_folios(inode->i_mapping);

 	switch (mode & S_IFMT) {
 	default:
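A note on why the single "if (sbinfo->huge)" test is enough to single out
huge=never: the per-mount policy kept in sbinfo->huge uses SHMEM_HUGE_NEVER
as its zero value and the other mount policies are non-zero, while the
runtime 'deny'/'force' states live in the separate global shmem_huge (set
via /sys/kernel/mm/transparent_hugepage/shmem_enabled), which this path
intentionally does not consult. A condensed excerpt of the relevant
constants from mm/shmem.c (mainline around this series; verify against
your own tree):

/* Per-mount "huge=" policy, kept in sbinfo->huge */
#define SHMEM_HUGE_NEVER	0	/* default: the only zero value */
#define SHMEM_HUGE_ALWAYS	1
#define SHMEM_HUGE_WITHIN_SIZE	2
#define SHMEM_HUGE_ADVISE	3

/*
 * Special values for the global shmem_huge, set via
 * /sys/kernel/mm/transparent_hugepage/shmem_enabled rather than mount:
 */
#define SHMEM_HUGE_DENY		(-1)	/* emergencies: disable on all mounts */
#define SHMEM_HUGE_FORCE	(-2)	/* testing: enable on all mounts */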
The tmpfs could support large folios, but there are some configurable
options (mount options and runtime deny/force) to enable/disable large
folio allocation, so there is a performance issue when performing writes
without large folios; the issue is similar to commit 4e527d5841e2
("iomap: fault in smaller chunks for non-large folio mappings").

Since 'deny' is for emergencies and 'force' is for testing, the performance
issue should not be a problem in real production environments, so only
skip calling mapping_set_large_folios() in __shmem_get_inode() when
large folios are disabled with the mount huge=never option (the default
policy).

Fixes: 9aac777aaf94 ("filemap: Convert generic_perform_write() to support large folios")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v4:
- only fix mount huge=never since runtime deny/force are just for
  emergencies/testing, suggested by Baolin
v3:
- don't enable large folio support in __shmem_get_inode() if disabled,
  suggested by Matthew
v2:
- Don't use IOCB flags

 mm/shmem.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
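For anyone wanting to see the write-path effect being addressed here, below
is a minimal user-space sketch of the affected workload: large buffered
writes to a file on a huge=never tmpfs. The mount point, file name, and
sizes are illustrative assumptions, not taken from the patch; with the fix
applied, write throughput on a huge=never mount should return to roughly
the pre-9aac777aaf94 behaviour.

/*
 * tmpfs-write.c - illustrative reproducer sketch, not part of the patch.
 * Assumes a tmpfs mounted at /mnt/tmpfs-never (hypothetical path):
 *
 *   mount -t tmpfs -o huge=never tmpfs /mnt/tmpfs-never
 *
 * Build and run:  gcc -O2 -o tmpfs-write tmpfs-write.c && ./tmpfs-write
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE	(1UL << 20)	/* 1 MiB per write() call */
#define TOTAL_SIZE	(1UL << 30)	/* 1 GiB written in total */

int main(void)
{
	/* Hypothetical path; point it at your own huge=never tmpfs. */
	const char *path = "/mnt/tmpfs-never/testfile";
	char *buf = malloc(BUF_SIZE);
	struct timespec t0, t1;
	size_t done = 0;
	double secs;
	int fd;

	if (!buf)
		return 1;
	memset(buf, 'a', BUF_SIZE);

	fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	while (done < TOTAL_SIZE) {
		ssize_t n = write(fd, buf, BUF_SIZE);

		if (n < 0) {
			perror("write");
			return 1;
		}
		done += n;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("wrote %zu MiB in %.3f s (%.1f MiB/s)\n",
	       done >> 20, secs, (done >> 20) / secs);

	close(fd);
	free(buf);
	return 0;
}

Running the same sketch against a huge=always mount (or on kernels before
and after this fix) gives a rough comparison of the buffered-write cost
with and without large folios actually being allocated.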