From patchwork Mon Jul 17 11:52:19 2023
X-Patchwork-Submitter: Carlos Maiolino
X-Patchwork-Id: 13315518
From: cem@kernel.org
To: linux-fsdevel@vger.kernel.org
Cc: jack@suse.cz, akpm@linux-foundation.org, viro@zeniv.linux.org.uk,
    linux-mm@kvack.org, djwong@kernel.org, hughd@google.com,
    brauner@kernel.org, mcgrof@kernel.org
Subject: [PATCH 6/6] Add default quota limit mount options
Date: Mon, 17 Jul 2023 13:52:19 +0200
Message-Id: <20230717115212.208651-7-cem@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230717115212.208651-1-cem@kernel.org>
References: <20230717115212.208651-1-cem@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
From: Lukas Czerner

Allow the system administrator to set default global quota limits at
tmpfs mount time.

Signed-off-by: Lukas Czerner
Signed-off-by: Carlos Maiolino
Reviewed-by: Jan Kara
---
 Documentation/filesystems/tmpfs.rst | 34 +++++++++++-----
 include/linux/shmem_fs.h            |  8 ++++
 mm/shmem.c                          | 61 +++++++++++++++++++++++++++++
 mm/shmem_quota.c                    | 34 +++++++++++++++-
 4 files changed, 127 insertions(+), 10 deletions(-)

diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
index 0c7d8bd052f1..f843dbbeb589 100644
--- a/Documentation/filesystems/tmpfs.rst
+++ b/Documentation/filesystems/tmpfs.rst
@@ -132,15 +132,31 @@ for emergency or testing purposes. The values you can set for shmem_enabled are:
 
 tmpfs also supports quota with the following mount options
 
-======== =============================================================
-quota    User and group quota accounting and enforcement is enabled on
-         the mount. Tmpfs is using hidden system quota files that are
-         initialized on mount.
-usrquota User quota accounting and enforcement is enabled on the
-         mount.
-grpquota Group quota accounting and enforcement is enabled on the
-         mount.
-======== =============================================================
+======================== =================================================
+quota                    User and group quota accounting and enforcement
+                         is enabled on the mount. Tmpfs is using hidden
+                         system quota files that are initialized on mount.
+usrquota                 User quota accounting and enforcement is enabled
+                         on the mount.
+grpquota                 Group quota accounting and enforcement is enabled
+                         on the mount.
+usrquota_block_hardlimit Set global user quota block hard limit.
+usrquota_inode_hardlimit Set global user quota inode hard limit.
+grpquota_block_hardlimit Set global group quota block hard limit.
+grpquota_inode_hardlimit Set global group quota inode hard limit.
+======================== =================================================
+
+None of the quota related mount options can be set or changed on remount.
+
+Quota limit parameters accept a suffix k, m or g for kilo, mega and giga
+and can't be changed on remount. Default global quota limits are taking
+effect for any and all user/group/project except root the first time the
+quota entry for user/group/project id is being accessed - typically the
+first time an inode with a particular id ownership is being created after
+the mount. In other words, instead of the limits being initialized to zero,
+they are initialized with the particular value provided with these mount
+options. The limits can be changed for any user/group id at any time as they
+normally can be.
 
 Note that tmpfs quotas do not support user namespaces so no uid/gid
 translation is done if quotas are enabled inside user namespaces.
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 1a568a0f542f..c0058f3bba70 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -42,6 +42,13 @@ struct shmem_inode_info {
 	(FS_IMMUTABLE_FL | FS_APPEND_FL | FS_NODUMP_FL | FS_NOATIME_FL)
 #define SHMEM_FL_INHERITED		(FS_NODUMP_FL | FS_NOATIME_FL)
 
+struct shmem_quota_limits {
+	qsize_t usrquota_bhardlimit; /* Default user quota block hard limit */
+	qsize_t usrquota_ihardlimit; /* Default user quota inode hard limit */
+	qsize_t grpquota_bhardlimit; /* Default group quota block hard limit */
+	qsize_t grpquota_ihardlimit; /* Default group quota inode hard limit */
+};
+
 struct shmem_sb_info {
 	unsigned long max_blocks;   /* How many blocks are allowed */
 	struct percpu_counter used_blocks;  /* How many are allocated */
@@ -60,6 +67,7 @@ struct shmem_sb_info {
 	spinlock_t shrinklist_lock;   /* Protects shrinklist */
 	struct list_head shrinklist;  /* List of shinkable inodes */
 	unsigned long shrinklist_len; /* Length of shrinklist */
+	struct shmem_quota_limits qlimits; /* Default quota limits */
 };
 
 static inline struct shmem_inode_info *SHMEM_I(struct inode *inode)
diff --git a/mm/shmem.c b/mm/shmem.c
index 7c75f30309ff..bd02909bacd6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -118,6 +118,7 @@ struct shmem_options {
 	int seen;
 	bool noswap;
 	unsigned short quota_types;
+	struct shmem_quota_limits qlimits;
 #define SHMEM_SEEN_BLOCKS 1
 #define SHMEM_SEEN_INODES 2
 #define SHMEM_SEEN_HUGE 4
@@ -3735,6 +3736,10 @@ enum shmem_param {
 	Opt_quota,
 	Opt_usrquota,
 	Opt_grpquota,
+	Opt_usrquota_block_hardlimit,
+	Opt_usrquota_inode_hardlimit,
+	Opt_grpquota_block_hardlimit,
+	Opt_grpquota_inode_hardlimit,
 };
 
 static const struct constant_table shmem_param_enums_huge[] = {
@@ -3761,6 +3766,10 @@ const struct fs_parameter_spec shmem_fs_parameters[] = {
 	fsparam_flag  ("quota",		Opt_quota),
 	fsparam_flag  ("usrquota",	Opt_usrquota),
 	fsparam_flag  ("grpquota",	Opt_grpquota),
+	fsparam_string("usrquota_block_hardlimit", Opt_usrquota_block_hardlimit),
+	fsparam_string("usrquota_inode_hardlimit", Opt_usrquota_inode_hardlimit),
+	fsparam_string("grpquota_block_hardlimit", Opt_grpquota_block_hardlimit),
+	fsparam_string("grpquota_inode_hardlimit", Opt_grpquota_inode_hardlimit),
 #endif
 	{}
 };
@@ -3871,6 +3880,42 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 		ctx->seen |= SHMEM_SEEN_QUOTA;
 		ctx->quota_types |= QTYPE_MASK_GRP;
 		break;
+	case Opt_usrquota_block_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
+			return invalfc(fc,
+				       "User quota block hardlimit too large.");
+		ctx->qlimits.usrquota_bhardlimit = size;
+		break;
+	case Opt_grpquota_block_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
+			return invalfc(fc,
+				       "Group quota block hardlimit too large.");
+		ctx->qlimits.grpquota_bhardlimit = size;
+		break;
+	case Opt_usrquota_inode_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_INO_LIMIT)
+			return invalfc(fc,
+				       "User quota inode hardlimit too large.");
+		ctx->qlimits.usrquota_ihardlimit = size;
+		break;
+	case Opt_grpquota_inode_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_INO_LIMIT)
+			return invalfc(fc,
+				       "Group quota inode hardlimit too large.");
+		ctx->qlimits.grpquota_ihardlimit = size;
+		break;
 	}
 
 	return 0;
@@ -3984,6 +4029,18 @@ static int shmem_reconfigure(struct fs_context *fc)
 		goto out;
 	}
 
+#ifdef CONFIG_TMPFS_QUOTA
+#define CHANGED_LIMIT(name)						\
+	(ctx->qlimits.name## hardlimit &&				\
+	(ctx->qlimits.name## hardlimit != sbinfo->qlimits.name## hardlimit))
+
+	if (CHANGED_LIMIT(usrquota_b) || CHANGED_LIMIT(usrquota_i) ||
+	    CHANGED_LIMIT(grpquota_b) || CHANGED_LIMIT(grpquota_i)) {
+		err = "Cannot change global quota limit on remount";
+		goto out;
+	}
+#endif /* CONFIG_TMPFS_QUOTA */
+
 	if (ctx->seen & SHMEM_SEEN_HUGE)
 		sbinfo->huge = ctx->huge;
 	if (ctx->seen & SHMEM_SEEN_INUMS)
@@ -4163,6 +4220,10 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc)
 		sb->s_qcop = &dquot_quotactl_sysfile_ops;
 		sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP;
 
+		/* Copy the default limits from ctx into sbinfo */
+		memcpy(&sbinfo->qlimits, &ctx->qlimits,
+		       sizeof(struct shmem_quota_limits));
+
 		if (shmem_enable_quotas(sb, ctx->quota_types))
 			goto failed;
 	}
diff --git a/mm/shmem_quota.c b/mm/shmem_quota.c
index c0b531e2ef68..e349c0901bce 100644
--- a/mm/shmem_quota.c
+++ b/mm/shmem_quota.c
@@ -166,6 +166,7 @@ static int shmem_acquire_dquot(struct dquot *dquot)
 {
 	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
 	struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node;
+	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
 	struct rb_node *parent = NULL, *new_node = NULL;
 	struct quota_id *new_entry, *entry;
 	qid_t id = from_kqid(&init_user_ns, dquot->dq_id);
@@ -195,6 +196,14 @@ static int shmem_acquire_dquot(struct dquot *dquot)
 	}
 
 	new_entry->id = id;
+	if (dquot->dq_id.type == USRQUOTA) {
+		new_entry->bhardlimit = sbinfo->qlimits.usrquota_bhardlimit;
+		new_entry->ihardlimit = sbinfo->qlimits.usrquota_ihardlimit;
+	} else if (dquot->dq_id.type == GRPQUOTA) {
+		new_entry->bhardlimit = sbinfo->qlimits.grpquota_bhardlimit;
+		new_entry->ihardlimit = sbinfo->qlimits.grpquota_ihardlimit;
+	}
+
 	new_node = &new_entry->node;
 	rb_link_node(new_node, parent, n);
 	rb_insert_color(new_node, (struct rb_root *)info->dqi_priv);
@@ -224,6 +233,29 @@ static int shmem_acquire_dquot(struct dquot *dquot)
 	return ret;
 }
 
+static bool shmem_is_empty_dquot(struct dquot *dquot)
+{
+	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
+	qsize_t bhardlimit;
+	qsize_t ihardlimit;
+
+	if (dquot->dq_id.type == USRQUOTA) {
+		bhardlimit = sbinfo->qlimits.usrquota_bhardlimit;
+		ihardlimit = sbinfo->qlimits.usrquota_ihardlimit;
+	} else if (dquot->dq_id.type == GRPQUOTA) {
+		bhardlimit = sbinfo->qlimits.grpquota_bhardlimit;
+		ihardlimit = sbinfo->qlimits.grpquota_ihardlimit;
+	}
+
+	if (test_bit(DQ_FAKE_B, &dquot->dq_flags) ||
+	    (dquot->dq_dqb.dqb_curspace == 0 &&
+	     dquot->dq_dqb.dqb_curinodes == 0 &&
+	     dquot->dq_dqb.dqb_bhardlimit == bhardlimit &&
+	     dquot->dq_dqb.dqb_ihardlimit == ihardlimit))
+		return true;
+
+	return false;
+}
+
 /*
  * Store limits from dquot in the tree unless it's fake. If it is fake
  * remove the id from the tree since there is no useful information in
@@ -261,7 +293,7 @@ static int shmem_release_dquot(struct dquot *dquot)
 		return -ENOENT;
 
 found:
-	if (test_bit(DQ_FAKE_B, &dquot->dq_flags)) {
+	if (shmem_is_empty_dquot(dquot)) {
 		/* Remove entry from the tree */
 		rb_erase(&entry->node, info->dqi_priv);
 		kfree(entry);
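
For reviewers unfamiliar with memparse(): the new limit options are parsed with the kernel's memparse(), so a value is a number with an optional binary k/m/g suffix, and shmem_parse_one() rejects it if anything trails the suffix or the result is zero. A minimal userspace sketch of that suffix handling, for illustration only (parse_size is a hypothetical helper, and the real memparse() also accepts larger suffixes such as t/p/e):

```c
#include <stdlib.h>

/*
 * Userspace sketch of memparse()-style parsing as used for
 * usrquota_block_hardlimit= and friends: a number (base auto-detected,
 * as with strtoull base 0) followed by an optional binary suffix.
 * Each suffix case shifts by 10 and falls through to the next,
 * so 'g' accumulates three shifts (<< 30).
 */
static unsigned long long parse_size(const char *s, char **rest)
{
	unsigned long long v = strtoull(s, rest, 0);

	switch (**rest) {
	case 'g': case 'G':
		v <<= 10;	/* fall through */
	case 'm': case 'M':
		v <<= 10;	/* fall through */
	case 'k': case 'K':
		v <<= 10;
		(*rest)++;
		break;
	}
	return v;
}
```

So a mount such as `mount -t tmpfs -o usrquota,usrquota_block_hardlimit=1g tmpfs /mnt` (an example invocation, not taken from the patch) seeds every new user quota entry with a 1073741824-byte block hard limit, matching the `*rest || !size` validation in shmem_parse_one() above.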