From patchwork Fri May  3 09:53:44 2024
X-Patchwork-Submitter: Luis Chamberlain <mcgrof@kernel.org>
X-Patchwork-Id: 13652579
From: Luis Chamberlain <mcgrof@kernel.org>
To: akpm@linux-foundation.org, willy@infradead.org, djwong@kernel.org,
	brauner@kernel.org, david@fromorbit.com, chandan.babu@oracle.com
Cc: hare@suse.de, ritesh.list@gmail.com, john.g.garry@oracle.com,
	ziy@nvidia.com, linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-mm@kvack.org, linux-block@vger.kernel.org, gost.dev@samsung.com,
	p.raghav@samsung.com, kernel@pankajraghav.com, mcgrof@kernel.org
Subject: [PATCH v5 02/11] fs: Allow fine-grained control of folio sizes
Date: Fri,  3 May 2024 02:53:44 -0700
Message-ID: <20240503095353.3798063-3-mcgrof@kernel.org>
In-Reply-To: <20240503095353.3798063-1-mcgrof@kernel.org>
References: <20240503095353.3798063-1-mcgrof@kernel.org>
Sender: Luis Chamberlain <mcgrof@kernel.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Some filesystems want to be able to ensure that folios that are added to
the page cache are at least a certain size.  Add
mapping_set_folio_min_order() to allow this level of control.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 include/linux/pagemap.h | 116 +++++++++++++++++++++++++++++++++-------
 1 file changed, 96 insertions(+), 20 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2df35e65557d..2e5612de1749 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -202,13 +202,18 @@ enum mapping_flags {
 	AS_EXITING	= 4, 	/* final truncate in progress */
 	/* writeback related tags are not used */
 	AS_NO_WRITEBACK_TAGS = 5,
-	AS_LARGE_FOLIO_SUPPORT = 6,
-	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
-	AS_STABLE_WRITES,	/* must wait for writeback before modifying
+	AS_RELEASE_ALWAYS = 6,	/* Call ->release_folio(), even if no private data */
+	AS_STABLE_WRITES = 7,	/* must wait for writeback before modifying
 				   folio contents */
-	AS_UNMOVABLE,		/* The mapping cannot be moved, ever */
+	AS_FOLIO_ORDER_MIN = 8,
+	AS_FOLIO_ORDER_MAX = 13,	/* Bits 8-17 are used for FOLIO_ORDER */
+	AS_UNMOVABLE = 18,		/* The mapping cannot be moved, ever */
 };
 
+#define AS_FOLIO_ORDER_MIN_MASK 0x00001f00
+#define AS_FOLIO_ORDER_MAX_MASK 0x0003e000
+#define AS_FOLIO_ORDER_MASK (AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)
+
 /**
  * mapping_set_error - record a writeback error in the address_space
  * @mapping: the mapping in which an error should be set
@@ -344,9 +349,63 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
 	m->gfp_mask = mask;
 }
 
+/*
+ * There are some parts of the kernel which assume that PMD entries
+ * are exactly HPAGE_PMD_ORDER.  Those should be fixed, but until then,
+ * limit the maximum allocation order to PMD size.  I'm not aware of any
+ * assumptions about maximum order if THP are disabled, but 8 seems like
+ * a good order (that's 1MB if you're using 4kB pages)
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define MAX_PAGECACHE_ORDER	HPAGE_PMD_ORDER
+#else
+#define MAX_PAGECACHE_ORDER	8
+#endif
+
+/**
+ * mapping_set_folio_order_range() - Set the folio order range
+ * @mapping: The address_space.
+ * @min_order: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ * @max_order: Maximum folio order (between @min_order-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which base size (min) and maximum size (max) of folio the VFS
+ * can use to cache the contents of the file.  This should only be used
+ * if the filesystem needs special handling of folio sizes (ie there is
+ * something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_order_range(struct address_space *mapping,
+						 unsigned int min_order,
+						 unsigned int max_order)
+{
+	if (min_order > MAX_PAGECACHE_ORDER)
+		min_order = MAX_PAGECACHE_ORDER;
+
+	if (max_order > MAX_PAGECACHE_ORDER)
+		max_order = MAX_PAGECACHE_ORDER;
+
+	max_order = max(max_order, min_order);
+	/*
+	 * TODO: max_order is not yet supported in filemap.
+	 */
+	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+			 (min_order << AS_FOLIO_ORDER_MIN) |
+			 (max_order << AS_FOLIO_ORDER_MAX);
+}
+
+static inline void mapping_set_folio_min_order(struct address_space *mapping,
+					       unsigned int min)
+{
+	mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
+}
+
 /**
  * mapping_set_large_folios() - Indicate the file supports large folios.
- * @mapping: The file.
+ * @mapping: The address_space.
  *
  * The filesystem should call this function in its inode constructor to
  * indicate that the VFS can use large folios to cache the contents of
@@ -357,7 +416,37 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
  */
 static inline void mapping_set_large_folios(struct address_space *mapping)
 {
-	__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
+}
+
+static inline unsigned int mapping_max_folio_order(struct address_space *mapping)
+{
+	return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline unsigned int mapping_min_folio_order(struct address_space *mapping)
+{
+	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
+}
+
+static inline unsigned long mapping_min_folio_nrpages(struct address_space *mapping)
+{
+	return 1UL << mapping_min_folio_order(mapping);
+}
+
+/**
+ * mapping_align_start_index() - Align starting index based on the min
+ * folio order of the page cache.
+ * @mapping: The address_space.
+ *
+ * Ensure the index used is aligned to the minimum folio order when adding
+ * new folios to the page cache by rounding down to the nearest minimum
+ * folio number of pages.
+ */
+static inline pgoff_t mapping_align_start_index(struct address_space *mapping,
+						pgoff_t index)
+{
+	return round_down(index, mapping_min_folio_nrpages(mapping));
 }
 
 /*
@@ -367,7 +456,7 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
 static inline bool mapping_large_folio_support(struct address_space *mapping)
 {
 	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+		(mapping_max_folio_order(mapping) > 0);
 }
 
 static inline int filemap_nr_thps(struct address_space *mapping)
@@ -528,19 +617,6 @@ static inline void *detach_page_private(struct page *page)
 	return folio_detach_private(page_folio(page));
 }
 
-/*
- * There are some parts of the kernel which assume that PMD entries
- * are exactly HPAGE_PMD_ORDER.  Those should be fixed, but until then,
- * limit the maximum allocation order to PMD size.  I'm not aware of any
- * assumptions about maximum order if THP are disabled, but 8 seems like
- * a good order (that's 1MB if you're using 4kB pages)
- */
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define MAX_PAGECACHE_ORDER	HPAGE_PMD_ORDER
-#else
-#define MAX_PAGECACHE_ORDER	8
-#endif
-
 #ifdef CONFIG_NUMA
 struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
 #else
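
For anyone who wants to sanity-check the bit encoding above without
building a kernel, below is a minimal standalone userspace sketch of the
same packing scheme.  It is illustrative only, not the kernel code: it
hardcodes MAX_PAGECACHE_ORDER to 9 (HPAGE_PMD_ORDER with 4kB pages),
substitutes a local max_uint() for the kernel's max(), open-codes
round_down(), and uses a one-field stand-in for the real struct
address_space.

#include <assert.h>
#include <stdio.h>

/*
 * Illustrative re-implementation of the flag packing introduced by
 * this patch.  MAX_PAGECACHE_ORDER is assumed to be 9 here; the
 * kernel derives it from the THP configuration.
 */
#define MAX_PAGECACHE_ORDER	9

#define AS_FOLIO_ORDER_MIN	8
#define AS_FOLIO_ORDER_MAX	13
#define AS_FOLIO_ORDER_MIN_MASK	0x00001f00UL
#define AS_FOLIO_ORDER_MAX_MASK	0x0003e000UL
#define AS_FOLIO_ORDER_MASK	(AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)

/* One-field stand-in for the kernel's struct address_space. */
struct address_space {
	unsigned long flags;
};

/* Local substitute for the kernel's max() macro. */
static unsigned int max_uint(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

static void mapping_set_folio_order_range(struct address_space *mapping,
					  unsigned int min_order,
					  unsigned int max_order)
{
	if (min_order > MAX_PAGECACHE_ORDER)
		min_order = MAX_PAGECACHE_ORDER;
	if (max_order > MAX_PAGECACHE_ORDER)
		max_order = MAX_PAGECACHE_ORDER;
	max_order = max_uint(max_order, min_order);

	/* Clear bits 8-17, then pack the two 5-bit orders into them. */
	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
			 ((unsigned long)min_order << AS_FOLIO_ORDER_MIN) |
			 ((unsigned long)max_order << AS_FOLIO_ORDER_MAX);
}

static unsigned int mapping_min_folio_order(const struct address_space *mapping)
{
	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
}

static unsigned int mapping_max_folio_order(const struct address_space *mapping)
{
	return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
}

/* Open-coded round_down() to a power-of-two number of pages. */
static unsigned long mapping_align_start_index(const struct address_space *mapping,
					       unsigned long index)
{
	return index & ~((1UL << mapping_min_folio_order(mapping)) - 1);
}

int main(void)
{
	struct address_space mapping = { .flags = 0 };

	/* A 64kB block size on 4kB pages needs a minimum folio order of 4. */
	mapping_set_folio_order_range(&mapping, 4, MAX_PAGECACHE_ORDER);
	assert(mapping_min_folio_order(&mapping) == 4);
	assert(mapping_max_folio_order(&mapping) == 9);

	/* Index 100 rounds down to 96, the nearest multiple of 16 pages. */
	assert(mapping_align_start_index(&mapping, 100) == 96);

	/* Out-of-range orders are clamped to MAX_PAGECACHE_ORDER. */
	mapping_set_folio_order_range(&mapping, 12, 20);
	assert(mapping_min_folio_order(&mapping) == 9);
	assert(mapping_max_folio_order(&mapping) == 9);

	printf("folio order flag encoding checks out\n");
	return 0;
}

This builds with any C compiler (e.g. cc -Wall demo.c && ./a.out; the
file name is arbitrary).  The asserts also demonstrate why both fields
get 5 bits: orders up to HPAGE_PMD_ORDER fit comfortably, and clamping
in mapping_set_folio_order_range() keeps any caller-supplied value
within MAX_PAGECACHE_ORDER.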